Amadeus Code



About

Amadeus Code is an artificial intelligence-powered songwriting assistant. The technology is a new approach that breaks centuries of melodies down into their constituent parts (“licks”) and transforms them into data. By eschewing more traditional methods of musical information transfer--the score and MIDI, for example--Japanese researchers have created a system to generate ...


Contact

Publicist
Tyler Volkmar
(812) 339-1195 x 203

Current News

  • 04/26/2019

Amadeus Code Lets Songwriters Get Crazy Rhythm Thanks to New Beats Feature

Rhythmic patterns often define musical genres, from hip hop to disco. Amadeus Code, the AI-powered songwriting assistant, is rolling out several rhythmic options to underpin its melodies, basslines, and chord progressions, giving music makers yet another layer of potential inspiration.

“A quick YouTube search for covers of your favorite song will show you two things,” explains Taishi Fukuyama, Amadeus Code co-founder and COO. “One, that a powerful song transcends genre. Many...

Press

  • Fast Company, Feature story, 03/14/2019, To celebrate Pi Day, listen to Don McLean’s “American Pie” rewritten by robots
  • TechCrunch, Feature story, 09/17/2018, Mumford & Sons beware! An AI can now write indie music
  • Music Business Worldwide, Feature story, 10/31/2018, What's the Value of a Song When Artificial Intelligence is Everywhere?
  • Billboard, Feature story, 01/23/2019, Entering the Artprocess Era: How Influence, Ownership & Creation Will Change With AI (Guest Column)

News

04/26/2019, Amadeus Code Lets Songwriters Get Crazy Rhythm Thanks to New Beats Feature
Announcement
04/26/2019

Rhythmic patterns often define musical genres, from hip hop to disco. Amadeus Code, the AI-powered songwriting assistant, is rolling out several rhythmic options to underpin its melodies, basslines, and chord progressions, giving music makers yet another layer of potential inspiration.

“A quick YouTube search for covers of your favorite song will show you two things,” explains Taishi Fukuyama, Amadeus Code co-founder and COO. “One, that a powerful song transcends genre. Many songs are genre agnostic and can work in a bunch of musical styles. Two, that these styles revolve around rhythms and sound design that define them.”

A new channel

Amadeus Code users will now have a chance to experiment with beats, thanks to a new audio channel that incorporates rhythmic ideas into the AI’s already robust melodic elements. Users can select one of four predefined popular styles--urban pop/mellow, urban pop/uplifting, chill disco, and hip hop--or just leave things in the more open-ended Songwriter Mode.

From there, they can head to the Discover Library section, find a chord progression they like, and let Amadeus Code generate a new melody on top of it. The resulting sketch can be exported to a DAW via MIDI or audio, and it can be shared directly to the web for further exploration and arrangement.

Playing with genre

“Rather than just collaborating with the app to discover new topline melody ideas without any rhythmic information, we have added a simple way for users to apply popular genre rhythms to the in-app creations to fast track inspiration according to the selected style,” says Fukuyama.  “Hardcore genre-defying users can continue to use the app without any beats in Songwriting Mode.” 

By expanding the ways users can interact with AI melodies, Amadeus Code lets musicians, producers, and music lovers remain flexible and use the app in ways that mesh with their creative process. This flexibility enables experimentation and the creation of new genre-bending work.


01/28/2019, Hear the Music: Songwriting Assistant Amadeus Code’s New Features Give Users More Power to Play with and Share AI Generated Melodies
Announcement
01/28/2019

Amadeus Code’s latest updates underline its vision for AI-assisted music creation, which insists that artists want inspiration, not computer-generated ditties. “We have purposefully based our approach on giving humans tracks that they can then flesh out,” explains Amadeus Code COO Taishi Fukuyama. “Our AI is designed to support creative people, especially those who want to or have to compose prolifically.”

However, to hear how well an idea is going to work, a composer or producer needs some sonic options to play around with. To help users hear more when Amadeus Code generates a melody for them to work with, the app has incorporated a few key sounds, including four bass voices, as well as giving users the capacity to mute any and all voices. They can shift the BPM of a generated track, too, allowing them to jump off from a favorite hit found in the Harmony Library--and then take it down tempo or hype it up. 
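As general background on that BPM control (this is not Amadeus Code's actual implementation), a standard MIDI file stores tempo as microseconds per quarter note, so shifting the tempo of an exported sketch comes down to rewriting a single value:

```python
def bpm_to_midi_tempo(bpm):
    """Convert beats per minute to the MIDI 'set tempo' value,
    which is expressed in microseconds per quarter note."""
    return round(60_000_000 / bpm)

# Taking a 120 BPM sketch down tempo to 90 BPM:
original = bpm_to_midi_tempo(120)  # 500000 microseconds per beat
slowed = bpm_to_midi_tempo(90)
```

A DAW importing the file reads this tempo value and plays the same notes faster or slower accordingly, which is why a generated melody can jump from a Harmony Library tempo to any BPM the user prefers.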

“We wanted to put a few more sounds in the app, without completely putting ‘words into your mouth’ so to speak, to give users more ways to uncover how a particular AI-generated melody might fit into their projects,” Fukuyama notes. “The point of our AI-powered songwriting assistant is that it creates a shared control principle with the user and does not just autopilot the process.  Also, sometimes you want to isolate one part or voice, and that was impossible before.” Now the app is even better at revealing a melody’s strengths and possibilities. “We’ve got more choices that can highlight the generated music’s nuances,” says Fukuyama.

Amadeus Code has also enabled social sharing, when a melody is just what the user is looking for. By sending a simple URL to a collaborator or bandmate, users can exchange ideas rapidly outside of the app. The URL contains a player, allowing collaborators to listen without logging in. Press play, and hear the AI ideas. Then humans can take them to the next, more developed level.

“Lots of AI outputs also include performance, and we don’t think that makes any sense,” Fukuyama says. “Performance is more compelling when humans are involved. What an AI system can do, however, is suggest an infinite number of ideas humans can evaluate and develop, and that shared control principle makes for a far more powerful engine for creativity.” The future of AI music isn’t just more robot music; it’s a tool that will connect human- and machine-made ideas for wilder, more productive creative exploration.


01/23/2019, Entering the Artprocess Era: How Influence, Ownership & Creation Will Change With AI
Announcement
01/23/2019
Appeared in Billboard
 
Art used to be done, finished and discrete. The artist stepped away and there was the final artwork. This finished product -- be it a painting, sculpture, book or sound recording -- could be bought and sold and, in more recent human history, reproduced for a mass market.
 
The final piece had a life of its own. Its finality obscured the creator's or creators' influences, hiding years of training, thinking and experimenting (and borrowing). It could be owned, with that ownership defined by format -- be it a physical object or file type, the way copyright is still defined today.
 
Artificial intelligence is poised to transform these dynamics. We're moving from fixed ownership to licensing as our thought framework. We're moving from imagining art as the final work completed by brilliant individuals to seeing it as a series of ongoing transformations, enabling multiple interventions by a range of creators from all walks of life. We're entering the era of the artprocess.
 
The early signs of this shift are already apparent in the debate about who deserves credit (and royalties or payment) for AI-based images and sounds. This debate is heating up, as evidenced by the assertion by an algorithm developer that he was owed a cut of proceeds from Christie's sale of an AI-generated portrait, despite the algorithm's open-source origins. This debate will only get thornier as more works are created in different ways using machine learning and other algorithmic tools, and as open-source software and code get increasingly commercialized. (See investments in GitHub or IBM's purchase of Red Hat.) Will the final producers of a work powered by AI gain all the spoils, or will new licensing approaches evolve that give creators tools in return for a small fee for the tool-makers?
 
We see another part of this shift toward process with the advent of musical memes and the smash success of apps like musical.ly (now TikTok). Full-length songs that are finished works are easily accessible to young internet or app users, but kids often care less about the entire piece than they do about an excerpt they make on their own. Viral YouTube compilations connected to particular hits predated musical.ly and predicted it. Think of that rash of videos of "Call Me Maybe" and "Harlem Shake": In both cases, users got excited about a few seconds of the chorus in a song and made their own snippets. As a collection, these snippets became more relevant to fans than the songs themselves. Users are reinventing the value of content, creating the need for a new framework for attribution and reward.
 
We may not all respond to this art -- or even consider these iterations to be "art" -- but users are finding joy and value through new interactive ways of consuming music. It's not passive, it's not pressing play and listening start to finish, it's not even about unbundling albums into singles or tracks. It's about unravelling parts of songs and adding your own filters and images, using methods not unlike how art and music are made by professionals. It's creating something new and it's not always purely derivative. There's a long history of this kind of content dismantling and reassembly, one stretching back centuries, the very process that created traditional or folk art. People have long built songs from whatever poetic and melodic materials they have at the ready, rearranging ballads, for example, to include a favorite couplet, lick, or plot twist. The app ecosystem is creating the next iteration of folk art, in a way.
 
It's also speaking to how AI may shape and be shaped by creators. Though not exactly stems in the traditional sense, stem-like fragments are first provided to app users in a confined playground, and then re-arranged or imagined by these users, in a way similar to how an AI builds new melodies.
 
To grasp the connection, it's important to understand how an AI system creates new music. In the case of Amadeus Code, the goal of the AI is to create new melodies based on existing tastes and styles. An initial dataset is necessary for any AI to generate results. The process of curating, compiling and optimizing this ever-evolving dataset demands as much creativity as figuring out how to turn this data into acceptable melodies. Melodies are generated from these building blocks, called "licks" in our system, using algorithms: sets of directions that, with enough data and processing power, can learn to improve results over time, as humans tell the system what is an acceptable melody -- and what just doesn't work.
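That loop can be pictured with a toy sketch (purely hypothetical; Amadeus Code's actual lick representation, dataset, and learning method are not public): fragments are recombined into candidate melodies, and human accept/reject feedback nudges the weights that guide future generation.

```python
import random

# Hypothetical sketch: each "lick" is a short fragment of MIDI pitches,
# and its weight rises when a human accepts a melody containing it.
LICKS = {
    "lick_a": [60, 62, 64, 67],
    "lick_b": [67, 65, 64, 62],
    "lick_c": [64, 67, 69, 72],
}
weights = {name: 1.0 for name in LICKS}

def generate_melody(n_licks=3, rng=random):
    """Chain randomly chosen licks, favoring higher-weighted ones."""
    names = list(LICKS)
    chosen = rng.choices(names, weights=[weights[n] for n in names], k=n_licks)
    melody = [note for name in chosen for note in LICKS[name]]
    return chosen, melody

def feedback(chosen, accepted):
    """Nudge weights up for licks in accepted melodies, down otherwise."""
    for name in chosen:
        weights[name] *= 1.2 if accepted else 0.9

chosen, melody = generate_melody()
feedback(chosen, accepted=True)  # a human liked this one
```

Over many rounds, the accepted fragments dominate the draw, which is the sense in which the system "learns to improve results over time" from human judgments.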
 
What we have learned is that once a sufficiently complex agent (artificial or not) is presented with the right data, a strong set of rules and a stage for output, creation takes place. Where this creation goes next can only be determined by human users -- the performers or producers who create a new work around this melody -- but the initial inspiration comes from a machine processing fragments.
 
This creation parallels practices already gleefully employed by millions of app fans. AI promises to give these next-generation, digitally inspired creative consumers new tools -- maybe something like an insane meme library -- they can build art with and from. This art may wind up altered by the next creator, remixed, reimagined, enhanced via other media, further built upon. It will be something totally different and it will not be "owned" in the traditional sense. This looping creativity will bear a striking resemblance to the way algorithms create novel results within an AI system.
 
How could these little bits and pieces, these jokes and goofy video snippets add up to art? The short-form nature of these creations has so far been constrained by mobile bandwidth, something about to expand thanks to 5G. Fifth-generation cellular networks will allow richer content to be generated on the fly, be it by humans alone or with AI assistance. We can do crazy things now, but the breadth, depth and length of time are throttled, which explains the fragmented short form and limited merger of human-AI capacity. Given longer formats and more bandwidth, we could have ever-evolving artprocesses that blur the human-machine divide completely. We could find not just new genres, but perhaps completely new media to express ourselves and connect with each other.
 
Though we have built an AI that composes melodies with Amadeus Code, we ironically anticipate that this era of artprocess won't lead to more songs being written -- or it won't be just about songs. This era's tools will allow creators, app developers, musicians and anyone else to use music more expressively and creatively, folding it into novel modes of reflecting human experience, via the mirrors and prisms of AI. This creation will demand a new definition of what a "work" is, one that takes into account the fluidity of process. And it will require new approaches to licensing and ownership, ones where code, filters, interfaces, algorithms or fragmented elements may all become part of the licensing equation.

11/01/2018, What's the Value of a Song When Artificial Intelligence is Everywhere?
Announcement
11/01/2018

Appeared in Music Business Worldwide

In a world where exponentially advancing technology is created to solve basic human needs, could the next frontier be the need for creative ideas? What’s the value of a computer-generated idea? And why buy an idea from a machine?

It’s a bizarre question at first glance. It reads like absurdist poetry. Yet we hold that our ideas are discrete entities with definable intrinsic value. Is it possible a machine could generate such a thing?

As AI slowly seeps into business, culture, everywhere, we are being forced to answer this question. If the results of the data ingestion, pattern recognition, and prediction have some validity, they may qualify as ideas worthy of the same consideration as human-created ideas. After all, determining worthiness or value is a deeply human-inflected system, and so is AI. Without human input, machine learning cannot happen. The machine is merely the surface layer of the ideation process. We are buying the distillation of human experience and perspective, processed on a scale unheard of before our time, using methods that feel alien and opaque--for now.

Accepting this possible answer--yes, I’ll buy a machine’s idea--somehow upends some of our notions of creativity, shaking them to their very core. The machine is not a person, is not conscious, and has no awareness or context. It has nothing to say. It has merely generated something. We are used to considering human artists as the driving force behind value. In the traditional definitions of artistic merit, the value of an object, utterance, or performance depends on the artist’s unique abilities and perspective. A machine’s idea is perceived as less valuable. After all, it didn’t really put anything into its creation.

Or did it? Our relationship with machines has been relatively one-directional. We’ve created technology to solve basic problems, and only when said problems were solved was the technology paid for. AI, too, is suited for this, but assistive creative AI, particularly in music and other artforms, introduces a new paradigm because the problem it attempts to solve is our limited creative nature itself. Many are conflicted, sometimes even offended, by the demand to pay for such technology, as the transaction would be an admission of defeat.

We are being introduced to a new paradigm, a new relationship with machines. Creative AI is by design a combination of human intuition and machine intelligence. This newly shared control principle frees us humans to imagine new creative processes introduced by the machine, not possible independently by either human or machine.

We’re still struggling to understand the relationship between human and machine, just as we’re struggling to think of all humans as equals. A similar tension arose with the advent of photography. Was it really art, if an image did not have to be manually produced and could be reproduced fairly easily? It continued with film, which felt even more divorced from the “authenticity” and “aura” (to use terms critics kicked around in the early 20th century) of painting or sculpture.

When machines get involved in making art, creativity becomes more accessible and the time between intention and execution shrinks (as limiting as it may be). That democratization of creativity has ultimately shifted the center of commercial value, moving it from the prizing of a unique object or restricted access to a certain live performance to the audience itself. This is a very natural source of value: We love to give a number or value to objects and even more so to the people around us. We yearn to be valued, and we also yearn to value.

In a modern post-internet society, content value is post-creation. Machines can take an inhuman number of human hours and produce novel ideas in a very non-human way. That still doesn’t feel right to us. “You’re taking a millennium of work and giving me a random idea!” we want to counter. No matter how good that idea sounds, I may struggle to say that it is worth purchasing.

However, that is not where value is made now. In a world where every piece of music is equally accessible, the worth of a piece of music isn’t associated with the music itself, but with its ability to attract listeners’ attention--the amount of time that people listen, share, and talk about it. There are dozens of examples of mediocre artists who have large, passionate fan bases. Even if a track is a masterpiece, if people don’t share it or devote time to it, it doesn’t have much value. It’s not about the artistic merit, the file type, or any other aspect of consumption. This dynamic is present, with or without machine learning, of course. AI is merely forcing us to reckon even more explicitly with the tension between originality and value, collaboration and consumption.

According to this theory, the hours put into consumption are more determinative of value than the endless hours put into production. I may have practiced longer, but I may not be more skilled--or more able to produce something that attracts sufficient attention. Advances in tech can allow people to skip those long production hours and start creating, as these hours are not really rewarded (though they can be truly rewarding to the creator). The value of a piece of clothing or artwork is only quantifiable by the consumer: whether they want to see it, take people to see it, or wear it.

As AI becomes as normal to us as the drum machine and vocoder, as photos and film, we may change our answer. We may be happy to spend a few bucks on a machine’s idea, if it might pay our bills for a month, when developed into a hit. We have come to a new age of reckoning with machines in art. It’s a time that may completely reframe our understanding of artistry and value.

 

09/05/2018, Amadeus Code, the AI-Powered Songwriting Assistant App, Launches with New Harmony Library
Announcement
09/05/2018

Amadeus Code is now open to all users. To celebrate, it is rolling out a new way to construct songs for composers unafraid to explore the possibilities of AI-assisted songwriting. “Our AI has the ability to find really unexpected yet compelling melodies and match them to the harmonies suggested by chord progressions,” explains Taishi Fukuyama, co-founder of the Tokyo-based tech startup.

Harmony Library gives users direct access to the chord progressions that power Amadeus Code’s AI songwriting assistant.  These chord progressions can be searched by specific parameters, including genre, mood, tempo, and key, or by title or artist. Amadeus Code then generates an infinite number of melodies on top of the chords found in the selected progressions, melodies a user can tweak, dissect, regenerate, and eventually export to their favorite DAW. Composers can then share their creations with other Amadeus Code users and to the world. By providing the chords and tools that help shape the melodies granularly, users can proactively collaborate with Amadeus Code to create original ideas.   
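The search side of that workflow can be pictured as a simple filter over tagged progression records. This is only an illustrative sketch; the field names and data below are hypothetical, not Amadeus Code's actual schema:

```python
# Toy stand-in for a searchable library of chord progressions,
# tagged by parameters like those the Harmony Library describes.
LIBRARY = [
    {"title": "Progression 1", "genre": "pop",   "mood": "uplifting", "tempo": 120, "key": "C"},
    {"title": "Progression 2", "genre": "disco", "mood": "chill",     "tempo": 110, "key": "Am"},
    {"title": "Progression 3", "genre": "pop",   "mood": "mellow",    "tempo": 90,  "key": "C"},
]

def search(**criteria):
    """Return records matching every supplied field exactly."""
    return [rec for rec in LIBRARY
            if all(rec.get(field) == value for field, value in criteria.items())]

hits = search(genre="pop", key="C")
```

A matching progression would then seed the melody-generation step; here the query returns the two pop progressions in C, either of which a user could take forward.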

“Radio, MTV, and streaming have all made an enormous impact on how creators were exposed to the music of their time. That has inherently shaped how and what they produced. Now that we’ve reached maximum capacity to consume the content we actually have access to, it’s only inevitable that we’re now getting creative and taking control over what ultimately influences our work,” Fukuyama notes. The founding team of Amadeus Code includes active professional music producers who have worked with some of the most famous artists of Japan and Korea, as well as a technologist and serial entrepreneur in music tech.

“Amadeus Code, unlike existing music AI on the market today, is not intended to create finished works to put against a home video, for example,” Fukuyama adds. “We hope that users will complete the work started by Amadeus Code by telling their own unique stories, which will continue to be what makes music truly irreplaceable by artificial intelligence.”  

This is not a copy-and-paste operation for hit making, however. This is about more than robo-songwriting. It’s about using algorithms to suggest unfamiliar melodies and expanding one’s imagination efficiently. “AI has this peculiar ability to find novel solutions--some successful, some not so much. These are suggestions which a composer can take or leave,” says Fukuyama. “Its decisions can spark a new idea for a composer, getting her into new creative territory.”
