Amadeus Code



About

Amadeus Code is an artificial intelligence-powered songwriting assistant. The technology is a new approach that breaks centuries of melodies down into their constituent parts (“licks”) and transforms them into data. By eschewing more traditional methods of musical information transfer--the score and MIDI, for example--Japanese researchers have created a system to generate the kind of endless stream of melody Mozart proclaimed ran through his head. Composers, producers, and songwriters now have the distilled praxis of thousands of artists and composers dating back to the 17th century at their fingertips.


Contact

Senior Tech Publicist
Tyler Volkmar
(812) 961-3723

Current News

  • 05/29/2019

Amadeus Code Raises Series A Round from World Innovation Lab

Amadeus Code, the AI songwriting assistant, has received 200M JPY (approximately $1.8M USD) to support the development of its AI music generation platform. This Series A round is led by VC firm World Innovation Lab. WiL’s Masataka Matsumoto will join Amadeus Code’s board.

“We’re thrilled to find others in the tech world as excited by the prospects of AI in music and creativity as we are,” notes Taishi Fukuyama, co-founder and COO of Amadeus Code. “We welcome the support and insight of WiL.”

Press

  • Dope Cause We Said, Feature story, 02/22/2018, The Amadeus Code App, the Only AI Powered Songwriting Assistant You'll Ever Need, Is Launching Soon on iOS
  • The Cab Portal, Feature story, 02/23/2018, The Amadeus Code App, the Only AI Powered Songwriting Assistant You'll Ever Need, Is Launching Soon on iOS
  • Keyboards, Feature story, 03/08/2018, Preview: songwriting app Amadeus Code
  • The Cab Portal, Feature story, 03/27/2018, Amadeus Code: Sleek Mobile AI App Turns Centuries of Song into New Melodies and Chords for Greater Creativity

News

05/29/2019, Amadeus Code Raises Series A Round from World Innovation Lab
Announcement

Amadeus Code, the AI songwriting assistant, has received 200M JPY (approximately $1.8M USD) to support the development of its AI music generation platform. This Series A round is led by VC firm World Innovation Lab. WiL’s Masataka Matsumoto will join Amadeus Code’s board.

“We’re thrilled to find others in the tech world as excited by the prospects of AI in music and creativity as we are,” notes Taishi Fukuyama, co-founder and COO of Amadeus Code. “We welcome the support and insight of WiL.”

“AI technology is being adopted across many fields, and as Amadeus Code shows, the music industry is no exception,” says Masataka Matsumoto, general partner and co-founder of WiL. “The music industry has long been called on to change its business model, and we believe Amadeus Code’s work will open a new path. We want to support the team as it challenges the global market from its founding, drawing on our global knowledge and our Japan-US network.”

Drawing on investments that range from Twitter to DocuSign to Sling Media, WiL and Matsumoto will offer additional guidance as the Amadeus Code team continues to expand its products and offerings. The app has quickly grown from a simple melody generator to an engine that can create chord changes, basslines, and beats based on user preferences and a library of popular tracks.

Amadeus Code will be unveiling a new set of groundbreaking features later this year.

About Amadeus Code

Amadeus Code is an artificial intelligence-powered songwriting assistant. The technology is a new approach that breaks centuries of melodies down into their constituent parts (“licks”) and transforms them into data. By eschewing more traditional methods of musical information transfer--the score and MIDI, for example--Japanese researchers have created a system to generate the kind of endless stream of melody Mozart proclaimed ran through his head. Composers, producers, and songwriters now have the distilled praxis of thousands of artists and composers dating back to the 17th century at their fingertips.

About WiL

Based in Tokyo and Palo Alto, World Innovation Lab is positioned at the gateways of the world’s two most innovative countries. Staying true to its commitment to stable growth, the firm draws on both countries’ active startup ecosystems, deep sources of capital, and the two largest IT economies worldwide. With a network of government and corporate partners in Japan and throughout Asia, World Innovation Lab provides unparalleled access, capital, and expertise to guide startups through global growth.

05/18/2019, We need to embrace AI’s humanity to unlock its creative promise
Announcement

Appeared in VentureBeat

As we consider AI’s power, we seem to forget one central, indisputable fact: AI is a product of human interactions.

You’d never guess this from reading the frequent headlines on the subject. Commentators see future AI as a kind of Skynet-style artificial general intelligence (AGI), where computer systems will not just beat someone at Go but will become the next Picasso or Drake or merciless cyber-overlords. Even Stanford’s admirable attempt to bridge disciplinary gaps in the field of machine learning and computer intelligence, its new Institute for Human-Centered Artificial Intelligence, plays subtly into this fallacy that humans are somehow tangential to AI.

Yet there can be no data without humans. There can be no training of models or analysis of results without humans. There can be no application of those results without humans. We impact every single moment in the process, directly and indirectly. The division between artificial — the product of human artifice — and human is one we’ve made up.

This mysterious division seeps deep into our understanding and imagination as we contemplate the future of algorithms, machine learning, and AI. It’s clouding our ability to see the striking potential of AI as a helpmate to human knowledge and creativity. It’s making us willfully ignorant of the true stakes of AI, how it will shape us, and most importantly, how we will shape it.

We’ve both devoted years to developing generative AI systems that help artists create unexpected works. As researchers and entrepreneurs who have had to fold a large body of human knowledge spanning continents and centuries into our models and data sets, we have a historical perspective that enriches our concept of AI’s possibilities and pitfalls.

So we see clearly that we are not facing some radically new dilemma in AI. In fact, society is having an eerily similar debate to the one we had about photography and recorded or electronic sound back in the day. And the artists whose perspectives triumphed in those debates long ago, thanks to the profound expressiveness of their art, can speak to our concerns about AI. Machine learning systems and algorithms may prove not to disinherit us, but to become a new medium for human expression.

Photography did not replace painting, just as 808s or algorithmic composition didn’t eliminate society’s need for drummers and composers. New technology that automates certain tasks does not erase the humans who perform those tasks. Just as ATMs changed the jobs of bank tellers, these new technologies often transform a job without eliminating it. Innovations like 5G may rapidly expand the number of jobs demanding constant visual and auditory creativity.

It’s worth looking back at the convictions of early arts commentators, to understand why technology often supplements, rather than replaces, human creative work. A March 1855 essay in The Crayon, the leading arts criticism and aesthetic theory journal of mid-19th century America, exclaimed: “However ingenious the process or surprising the results of photography, it must be remembered that this art aspires only to copy, it cannot invent.” That argument will sound ridiculous to anyone who has seen works by the likes of Man Ray and Diane Arbus.

The inventive potential of photography was already emerging for photographers at the time, including American landscape photographer John Moran, who noted in 1865 that the image-taker’s and -viewer’s perception, the humans interacting with the machine, gave the copied images the potential to become art: “If there is not the perceiving mind to note and feel the relative degrees of importance in the various aspects which nature presents, nothing worthy of the name of pictures can be produced. It is this knowledge, or art of seeing, which gives value and importance to the works of certain photographers over all others.”

We now have more than a century of experience to consider, and Moran’s perspective has become essential to the way we currently view art. Ravishing images eventually won over viewers, who became less concerned with techniques and more focused on what the image said, what it made them feel. Invention happens not inside a camera, but in the relationship between the creative photographer and the imagination of the viewer.

The 20th-century debates around sound recording and computational or electronic composition methods have similar elements. Take, for example, the view of John Philip Sousa, one of the best impresarios of his generation as well as a highly skilled composer. Writing a scathing condemnation of recorded music in a 1906 article, Sousa decried the death of sincere, human music and its appreciation: “I foresee a marked deterioration in American music and musical taste, an interruption in the musical development of the country, and a host of other injuries to music in its artistic manifestations, by virtue — or rather by vice — of the multiplication of the various music-reproducing machines.” They were soulless, and made soulless sounds, he argued.

Yet, as we all know, some of the 20th century’s most groundbreaking and soulful music was brought to listeners via recordings — the entire jazz canon, arguably — or brought to life by machines. These soulless machines sparked entire new music-making communities in creative human hands.

Machines have their severe limitations, however, and forward-thinking music-makers understand this. Iannis Xenakis presciently saw that the marriage of human and machine, of creativity and mathematical operations, could yield the most interesting possibilities: “The great idea is to be able to introduce randomness in order to break up the periodicity of mathematical functions, but we’re only at the beginning. The products of the intelligence are so complex that it is impossible to purify them in order to submit them totally to mathematical laws.” Randomness is key to pushing technology beyond its narrow limits and allowing it to unlock powerful human impulses.

To assume that AI does not support human creativity as its technological predecessors did is to misunderstand the essence of AI and what it promises to give to creative human minds. AI-generated results can be purely random, as Xenakis wished, or can follow sets of rules and boundaries, while still remaining malleable and ever-evolving, responding to and deepening with human input. Creative humans take these emergent results and frame them for their fellow humans, who will themselves forge meaning on their own terms. Every step of the process is inherently socially embedded. That’s why we can make meaning out of vibrations, plays of color, movement, and gesture AI generates.

AI is different from past technological innovations, of course. It transforms itself as you create with it, responding to your input, rejecting what you reject or presenting bizarre associations or results you might never have come up with, left to your own devices. In other realms, it has profound ethical and social implications we need to examine openly and soberly.

Yet first we need to embrace AI’s humanness, to acknowledge that it is us, distilled and transformed in new and unpredictable ways, much like a work of great and lasting art is.

04/26/2019, Amadeus Code Lets Songwriters Get Crazy Rhythm Thanks to New Beats Feature
Announcement

Rhythmic patterns often define musical genres, from hip hop to disco. Amadeus Code, the AI-powered songwriting assistant, is rolling out several rhythmic options to underpin its melodies, basslines, and chord progressions, giving music makers yet another layer of potential inspiration.

“A quick YouTube search for covers of your favorite song will show you two things,” explains Taishi Fukuyama, Amadeus Code co-founder and COO. “One, that a powerful song transcends genre. Many songs are genre agnostic and can work in a bunch of musical styles. Two, that these styles revolve around rhythms and sound design that define them.”

A new channel

Amadeus Code users will now have a chance to experiment with beats, thanks to a new audio channel that incorporates rhythmic ideas into the AI’s already robust melodic elements. Users can select one of four predefined popular styles--urban pop/mellow, urban pop/uplifting, chill disco, and hip hop--or just leave things in the more open-ended Songwriter Mode.

From there, they can head to the Discover Library section, find a chord progression they like, and let Amadeus Code generate a new melody on top of it. The resulting sketch can be exported to a DAW via MIDI or audio, and it can be shared directly to the web for further exploration and arrangement.
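
The press materials don’t document the export step beyond “MIDI or audio,” but a rough sketch makes the idea concrete. The minimal Python example below, using the third-party mido library, writes a generated melody to a standard MIDI file a DAW can import; the note data, tempo, and function name are invented for illustration and are not Amadeus Code’s actual output format.

    import mido

    def melody_to_midi(notes, path, bpm=120):
        """Write (MIDI pitch, length-in-beats) pairs to a .mid file."""
        ticks_per_beat = 480
        mid = mido.MidiFile(ticks_per_beat=ticks_per_beat)
        track = mido.MidiTrack()
        mid.tracks.append(track)
        # MIDI stores tempo as microseconds per beat; mido converts from BPM.
        track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(bpm)))
        for pitch, beats in notes:
            track.append(mido.Message('note_on', note=pitch, velocity=80, time=0))
            track.append(mido.Message('note_off', note=pitch, velocity=0,
                                      time=int(beats * ticks_per_beat)))
        mid.save(path)

    # A hypothetical four-note sketch (C4, E4, G4, C5) at 96 BPM.
    melody_to_midi([(60, 1), (64, 1), (67, 1), (72, 2)], 'sketch.mid', bpm=96)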

Playing with genre

“Rather than just collaborating with the app to discover new topline melody ideas without any rhythmic information, we have added a simple way for users to apply popular genre rhythms to the in-app creations to fast track inspiration according to the selected style,” says Fukuyama.  “Hardcore genre-defying users can continue to use the app without any beats in Songwriting Mode.” 

By expanding the ways users can interact with AI melodies, Amadeus Code lets musicians, producers, and music lovers remain flexible and use the app in ways that mesh with their creative process. This flexibility enables experimentation and the creation of new genre-bending work.

01/28/2019, Hear the Music: Songwriting Assistant Amadeus Code’s New Features Give Users More Power to Play with and Share AI Generated Melodies
Announcement

Amadeus Code’s latest updates underline its vision for AI-assisted music creation, which insists that artists want inspiration, not computer-generated ditties. “We have purposefully based our approach on giving humans tracks that they can then flesh out,” explains Amadeus Code COO Taishi Fukuyama. “Our AI is designed to support creative people, especially those who want to or have to compose prolifically.”

However, to hear how well an idea is going to work, a composer or producer needs some sonic options to play around with. To help users hear more when Amadeus Code generates a melody for them to work with, the app has incorporated a few key sounds, including four bass voices, as well as giving users the capacity to mute any and all voices. They can shift the BPM of a generated track, too, allowing them to jump off from a favorite hit found in the Harmony Library--and then take it down tempo or hype it up. 

“We wanted to put a few more sounds in the app, without completely putting ‘words into your mouth,’ so to speak, to give users more ways to uncover how a particular AI-generated melody might fit into their projects,” Fukuyama notes. “The point of our AI-powered songwriting assistant is that it creates a shared control principle with the user and does not just autopilot the process. Also, sometimes you want to isolate one part or voice, and that was impossible before.” Now the app is even better at revealing a melody’s strengths and possibilities. “We’ve got more choices that can highlight the generated music’s nuances,” says Fukuyama.

Amadeus Code has also enabled social sharing for when a melody is just what the user is looking for. By sending a simple URL to a collaborator or bandmate, users can exchange ideas rapidly outside of the app. The URL contains a player, allowing collaborators to listen without logging in. Press play, and hear the AI ideas. Then humans can take them to the next, more developed level.

“Lots of AI music efforts also include performance, and we don’t think that makes any sense,” Fukuyama says. “Performance is more compelling when humans are involved. What an AI system can do, however, is suggest an infinite number of ideas humans can evaluate and develop, and that shared control principle makes for a far more powerful engine for creativity.” The future of AI music isn’t just more robot music; it’s a tool that will connect human- and machine-made ideas for wilder, more productive creative exploration.

01/23/2019, Entering the Artprocess Era: How Influence, Ownership & Creation Will Change With AI
Announcement
Appeared in Billboard
 
Art used to be done, finished and discrete. The artist stepped away and there was the final artwork. This finished product -- be it a painting, sculpture, book or sound recording -- could be bought and sold and, in more recent human history, reproduced for a mass market.
 
The final piece had a life of its own. Its finality obscured the creator or creators' influences, hiding years of training, thinking and experimenting (and borrowing). It could be owned, with that ownership defined by format -- be it a physical object or file type, the way copyright is still defined today.
 
Artificial intelligence is poised to transform these dynamics. We're moving from fixed ownership to licensing as our thought framework. We're moving from imagining art as the final work completed by brilliant individuals to seeing it as a series of ongoing transformations, enabling multiple interventions by a range of creators from all walks of life. We're entering the era of the artprocess.
 
The early signs of this shift are already apparent in the debate about who deserves credit (and royalties or payment) for AI-based images and sounds. This debate is heating up, as evidenced by the assertion by an algorithm developer that he was owed a cut of proceeds from Christie's sale of an AI-generated portrait, despite the algorithm's open-source origins. This debate will only get thornier as more works are created in different ways using machine learning and other algorithmic tools, and as open-source software and code get increasingly commercialized. (See investments in GitHub or IBM's purchase of Red Hat.) Will the final producers of a work powered by AI gain all the spoils, or will new licensing approaches evolve that give creators tools in return for a small fee for the tool-makers?
 
We see another part of this shift toward process with the advent of musical memes and the smash success of apps like musical.ly (now TikTok). Full-length songs that are finished works are easily accessible to young internet or app users, but kids often care less about the entire piece than they do about an excerpt they make on their own. Viral YouTube compilations connected to particular hits predated musical.ly and predicted it. Think of that rash of videos of "Call Me Maybe" and "Harlem Shake": In both cases, users got excited about a few seconds of the chorus in a song and made their own snippets. As a collection, these snippets became more relevant to fans than the songs themselves. Users are reinventing the value of content, creating the need for a new framework for attribution and reward.
 
We may not all respond to this art -- or even consider these iterations to be "art" -- but users are finding joy and value through new interactive ways of consuming music. It's not passive, it's not pressing play and listening start to finish, it's not even about unbundling albums into singles or tracks. It's about unravelling parts of songs and adding your own filters and images, using methods not unlike how art and music are made by professionals. It's creating something new and it's not always purely derivative. There's a long history of this kind of content dismantling and reassembly, one stretching back centuries, the very process that created traditional or folk art. People have long built songs from whatever poetic and melodic materials they have at the ready, rearranging ballads, for example, to include a favorite couplet, lick, or plot twist. The app ecosystem is creating the next iteration of folk art, in a way.
 
It's also speaking to how AI may shape and be shaped by creators. Though not exactly stems in the traditional sense, stem-like fragments are first provided to app users in a confined playground, and then rearranged or reimagined by these users, in a way similar to how an AI builds new melodies.
 
To grasp the connection, it's important to understand how an AI system creates new music. In the case of Amadeus Code, the goal of the AI is to create new melodies based on existing tastes and styles. An initial dataset is necessary for any AI to generate results. The process of curating, compiling and optimizing this ever-evolving dataset demands as much creativity as figuring out how to turn this data into acceptable melodies. Melodies are generated from these building blocks, called "licks" in our system, using algorithms, sets of directions that with enough data and processing power can learn to improve results over time, as humans tell the system what is an acceptable melody -- and what just doesn't work.
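
To make that recombination loop concrete, here is a deliberately toy Python sketch of the general idea described above; the lick data, scoring rule, and names are invented for illustration and do not reflect Amadeus Code's actual dataset or models.

    import random

    # A toy "lick" library: each lick is a short run of scale degrees.
    # A real system would mine these fragments from a large melodic corpus.
    LICKS = [
        [1, 2, 3], [3, 2, 1], [1, 3, 5], [5, 4, 3, 2], [2, 3, 4, 5],
    ]

    def generate_melody(n_licks=4, seed=None):
        """Chain licks together, preferring joins that move stepwise."""
        rng = random.Random(seed)
        melody = list(rng.choice(LICKS))
        for _ in range(n_licks - 1):
            # Rank candidate licks by how smoothly they continue the line --
            # a crude stand-in for the learned sense of an "acceptable
            # melody" that improves as humans accept or reject results.
            candidates = sorted(LICKS, key=lambda lick: abs(lick[0] - melody[-1]))
            melody.extend(rng.choice(candidates[:2]))
        return melody

    print(generate_melody(seed=7))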
 
What we have learned is that once a sufficiently complex agent (artificial or not) is presented with the right data, a strong set of rules, and a stage for output, creation takes place. Where this creation goes next can only be determined by human users -- the performers or producers who create a new work around this melody -- but the initial inspiration comes from a machine processing fragments.
 
This creation parallels practices already gleefully employed by millions of app fans. AI promises to give these next-generation, digitally inspired creative consumers new tools -- maybe something like an insane meme library -- they can build art with and from. This art may wind up altered by the next creator, remixed, reimagined, enhanced via other media, further built upon. It will be something totally different and it will not be "owned" in the traditional sense. This looping creativity will bear a striking resemblance to the way algorithms create novel results within an AI system.
 
How could these little bits and pieces, these jokes and goofy video snippets add up to art? The short-form nature of these creations has so far been constrained by mobile bandwidth, something about to expand thanks to 5G. Fifth-generation cellular networks will allow richer content to be generated on the fly, be it by humans alone or with AI assistance. We can do crazy things now, but the breadth, depth and length of time are throttled, which explains the fragmented short form and limited merger of human-AI capacity. Given longer formats and more bandwidth, we could have ever-evolving artprocesses that blur the human-machine divide completely. We could find not just new genres, but perhaps completely new media to express ourselves and connect with each other.
 
Though with Amadeus Code we have built an AI that composes melodies, ironically we anticipate that this era of artprocess won't lead to more songs being written -- or it won't be just about songs. This era's tools will allow creators, app developers, musicians and anyone else to use music more expressively and creatively, folding it into novel modes of reflecting human experience, via the mirrors and prisms of AI. This creation will demand a new definition of what a "work" is, one that takes into account the fluidity of process. And it will require new approaches to licensing and ownership, ones where code, filters, interfaces, algorithms or fragmented elements may all become part of the licensing equation.