Amadeus Code, the AI songwriting assistant, has received 200M JPY (approximately $1.8M USD) to support the development of its AI music generation platform. The Series A round is led by VC firm World Innovation Lab (WiL), whose Masataka Matsumoto will join Amadeus Code’s board.
“We’re thrilled to find others in the tech world as excited by the prospects of AI in music and creativity as we are,” notes co-founder and COO of Amadeus Code Taishi Fukuyama. “We welcome the support and insight of WiL.”
“AI technology has recently been adopted across many fields, and the music industry is no exception,” says Masataka Matsumoto, general partner and co-founder of WiL. “The music industry has long been calling for a change to its business model, and we believe Amadeus Code’s work will open a new path. We want to support their challenge to the global market from the time of founding, making use of our global knowledge and our Japan-US network.”
Drawing on investments that range from Twitter to DocuSign to Sling Media, WiL and Matsumoto will offer additional guidance as the Amadeus Code team continues to expand its products and offerings. The app has quickly grown from a simple melody generator to an engine that can create chord changes, basslines, and beats based on user preferences and a library of popular tracks.
Amadeus Code will be unveiling a new set of groundbreaking features later this year.
About Amadeus Code
Amadeus Code is an artificial intelligence-powered songwriting assistant. The technology is a new approach that breaks centuries of melodies down into their constituent parts (“licks”) and transforms them into data. By eschewing more traditional methods of musical information transfer--the score and MIDI, for example--Japanese researchers have created a system to generate the kind of endless stream of melody Mozart proclaimed ran through his head. Composers, producers, and songwriters now have the distilled praxis of thousands of artists and composers dating back to the 17th century at their fingertips.
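The exact representation is proprietary and not public, but the core idea of storing melody as recombinable fragments rather than fixed scores can be sketched in a few lines. The Python sketch below is purely illustrative; every class and field name is an assumption, not Amadeus Code’s actual format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: Amadeus Code's actual data format is not public.
# A "lick" is modeled here as a short melodic fragment stored as relative
# pitch intervals and durations, so it can be transposed and recombined
# freely -- unlike a fixed score or MIDI file.

@dataclass
class Lick:
    intervals: list[int]      # semitone steps relative to the previous note
    durations: list[float]    # note lengths in beats
    source_era: str           # e.g. "baroque", "romantic", "pop"

@dataclass
class Melody:
    licks: list[Lick] = field(default_factory=list)

    def to_pitches(self, start: int = 60) -> list[int]:
        """Realize the relative intervals as absolute MIDI pitches."""
        pitches, current = [], start
        for lick in self.licks:
            for step in lick.intervals:
                current += step
                pitches.append(current)
        return pitches

# Two fragments from different eras, chained and realized from middle C (60)
melody = Melody([
    Lick(intervals=[2, 2, 1], durations=[0.5, 0.5, 1.0], source_era="baroque"),
    Lick(intervals=[-3, 5], durations=[1.0, 1.0], source_era="pop"),
])
print(melody.to_pitches())  # [62, 64, 65, 62, 67]
```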
About WiL
Based in Tokyo and Palo Alto, World Innovation Lab is positioned at the gateways of the world’s two most innovative countries. Japan and the United States both offer active startup ecosystems, deep sources of capital, and the two largest IT economies worldwide. Staying true to its commitment to stable growth, and with a network of government and corporate partners in Japan and throughout Asia, World Innovation Lab provides unparalleled access, capital, and expertise to guide startups through global growth.
Appeared in VentureBeat
As we consider AI’s power, we seem to forget one central, indisputable fact: AI is a product of human interactions.
You’d never guess this from reading the frequent headlines on the subject. Commentators see future AI as a kind of Skynet-style artificial general intelligence (AGI), where computer systems will not just beat someone at Go but will become the next Picasso or Drake or merciless cyber-overlords. Even Stanford’s admirable attempt to bridge disciplinary gaps in the field of machine learning and computer intelligence, its new Institute for Human-Centered Artificial Intelligence, plays subtly into this fallacy that humans are somehow tangential to AI.
Yet there can be no data without humans. There can be no training of models or analysis of results without humans. There can be no application of those results without humans. We impact every single moment in the process, directly and indirectly. The division between artificial — the product of human artifice — and human is one we’ve made up.
This mysterious division seeps deep into our understanding and imagination as we contemplate the future of algorithms, machine learning, and AI. It’s clouding our ability to see the striking potential of AI as a helpmate to human knowledge and creativity. It’s making us willfully ignorant of the true stakes of AI, how it will shape us, and most importantly, how we will shape it.
We’ve both devoted years to developing generative AI systems that help artists create unexpected works. As researchers and entrepreneurs who have had to fold a large body of human knowledge spanning continents and centuries into our models and data sets, we have a historical perspective that enriches our concept of AI’s possibilities and pitfalls.
So we see clearly that we are not facing some radically new dilemma in AI. In fact, society is having an eerily similar debate to the ones it once had about photography and about recorded and electronic sound. The artists whose perspectives triumphed in those debates, through the profound expressiveness of their art, can speak to our concerns about AI. Machine learning systems and algorithms may prove not to disinherit us, but to become a new medium for human expression.
Photography did not replace painting, just as 808s and algorithmic composition didn’t eliminate society’s need for drummers and composers. New technology that automates certain tasks does not erase the humans who perform those tasks. Just as ATMs changed the job of bank tellers without eliminating it, these new technologies often transform work rather than erase it. Innovations like 5G may rapidly expand the number of jobs demanding constant visual and auditory creativity.
It’s worth looking back at the convictions of early arts commentators, to understand why technology often supplements, rather than replaces, human creative work. A March 1855 essay in The Crayon, the leading arts criticism and aesthetic theory journal of mid-19th century America, exclaimed: “However ingenious the process or surprising the results of photography, it must be remembered that this art aspires only to copy, it cannot invent.” That argument will sound ridiculous to anyone who has seen works by the likes of Man Ray and Diane Arbus.
The inventive potential of photography was already emerging for photographers at the time, including American landscape photographer John Moran, who noted in 1865 that the image-taker’s and -viewer’s perception, the humans interacting with the machine, gave the copied images the potential to become art: “If there is not the perceiving mind to note and feel the relative degrees of importance in the various aspects which nature presents, nothing worthy of the name of pictures can be produced. It is this knowledge, or art of seeing, which gives value and importance to the works of certain photographers over all others.”
We now have more than a century of experience to consider, and Moran’s perspective has become essential to the way we currently view art. Ravishing images eventually won over viewers, who became less concerned with techniques and more focused on what the image said, what it made them feel. Invention happens not inside a camera, but in the relationship between the creative photographer and the imagination of the viewer.
The 20th-century debates around sound recording and computational or electronic composition methods have similar elements. Take, for example, the view of John Philip Sousa, one of the best impresarios of his generation as well as a highly skilled composer. Writing a scathing condemnation of recorded music in a 1906 article, Sousa decried the death of sincere, human music and its appreciation: “I foresee a marked deterioration in American music and musical taste, an interruption in the musical development of the country, and a host of other injuries to music in its artistic manifestations, by virtue — or rather by vice — of the multiplication of the various music-reproducing machines.” They were soulless, and made soulless sounds, he argued.
Yet, as we all know, some of the 20th century’s most groundbreaking and soulful music was brought to listeners via recordings — the entire jazz canon, arguably — or brought to life by machines. These soulless machines sparked entire new music-making communities in creative human hands.
Machines have their severe limitations, however, and forward-thinking music-makers understand this. Iannis Xenakis presciently saw that the marriage of human and machine, of creativity and mathematical operations, could yield the most interesting possibilities: “The great idea is to be able to introduce randomness in order to break up the periodicity of mathematical functions, but we’re only at the beginning. The products of the intelligence are so complex that it is impossible to purify them in order to submit them totally to mathematical laws.” Randomness is key to pushing technology beyond its narrow limits and allowing it to unlock powerful human impulses.
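Xenakis’s point is easy to demonstrate. The minimal Python sketch below is an illustration of the general idea, not Xenakis’s own stochastic methods: a strictly periodic pitch function repeats itself forever, while a small random perturbation breaks the periodicity and makes each pass through the contour slightly different.

```python
import math
import random

# A purely periodic "melody" repeats exactly; adding a stochastic term
# breaks the periodicity of the mathematical function, as Xenakis describes.

random.seed(7)

def periodic_pitch(t: float) -> float:
    """A strictly periodic pitch contour: repeats every 8 beats."""
    return 60 + 7 * math.sin(2 * math.pi * t / 8)

def stochastic_pitch(t: float, spread: float = 2.0) -> float:
    """The same contour, perturbed by Gaussian randomness."""
    return periodic_pitch(t) + random.gauss(0.0, spread)

for t in range(16):
    print(f"beat {t:2d}: periodic {periodic_pitch(t):6.2f}  "
          f"stochastic {stochastic_pitch(t):6.2f}")
```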
To assume that AI does not support human creativity as its technological predecessors did is to misunderstand the essence of AI and what it promises to give to creative human minds. AI-generated results can be purely random, as Xenakis wished, or can follow sets of rules and boundaries, while still remaining malleable and ever-evolving, responding to and deepening with human input. Creative humans take these emergent results and frame them for their fellow humans, who will themselves forge meaning on their own terms. Every step of the process is inherently socially embedded. That’s why we can make meaning out of vibrations, plays of color, movement, and gesture AI generates.
AI is different from past technological innovations, of course. It transforms itself as you create with it, responding to your input, rejecting what you reject or presenting bizarre associations or results you might never have come up with, left to your own devices. In other realms, it has profound ethical and social implications we need to examine openly and soberly.
Yet first we need to embrace AI’s humanness, to acknowledge that it is us, distilled and transformed in new and unpredictable ways, much like a work of great and lasting art is.
Rhythmic patterns often define musical genres, from hip hop to disco. Amadeus Code, the AI-powered songwriting assistant, is rolling out several rhythmic options to underpin its melodies, basslines, and chord progressions, giving music makers yet another layer of potential inspiration.
“A quick YouTube search for covers of your favorite song will show you two things,” explains Taishi Fukuyama, Amadeus Code co-founder and COO. “One, that a powerful song transcends genre. Many songs are genre agnostic and can work in a bunch of musical styles. Two, that these styles revolve around rhythms and sound design that define them.”
A new channel
Amadeus Code users will now have a chance to experiment with beats, thanks to a new audio channel that incorporates rhythmic ideas into the AI’s already robust melodic elements. Users can select one of four predefined popular styles--urban pop/mellow, urban pop/uplifting, chill disco, and hip hop--or just leave things in the more open-ended Songwriter Mode.
From there, they can head to the Discover Library section, find a chord progression they like, and let Amadeus Code generate a new melody on top of it. The resulting sketch can be exported to a DAW via MIDI or audio, and it can be shared directly to the web for further exploration and arrangement.
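Since Amadeus Code exposes no public API, this flow can only be sketched hypothetically. In the toy Python example below, every name is an assumption made for illustration; the “engine” is a stand-in that simply picks chord tones at random, where the real engine draws on learned melodic data.

```python
import random

# Hypothetical sketch of the in-app flow described above. Amadeus Code has
# no public API, so every name below is an assumption made for illustration.

STYLES = ("urban pop/mellow", "urban pop/uplifting", "chill disco",
          "hip hop", "songwriter")  # "songwriter" = open-ended, no beats

# Stand-in for the Discover Library of chord progressions.
DISCOVER_LIBRARY = {"pop-1564": ["C", "G", "Am", "F"]}

# Toy stand-in for the melody engine: one random chord tone (MIDI pitch)
# per chord, not an actual learned model.
CHORD_TONES = {"C": [60, 64, 67], "G": [55, 59, 62],
               "Am": [57, 60, 64], "F": [53, 57, 60]}

def generate_melody(progression: list[str], style: str) -> list[int]:
    assert style in STYLES
    return [random.choice(CHORD_TONES[chord]) for chord in progression]

# Pick a style, find a progression you like, let the engine add a melody.
progression = DISCOVER_LIBRARY["pop-1564"]
print(generate_melody(progression, style="chill disco"))
```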
Playing with genre
“Rather than just collaborating with the app to discover new topline melody ideas without any rhythmic information, we have added a simple way for users to apply popular genre rhythms to the in-app creations to fast track inspiration according to the selected style,” says Fukuyama. “Hardcore genre-defying users can continue to use the app without any beats in Songwriter Mode.”
By expanding the ways users can interact with AI melodies, Amadeus Code lets musicians, producers, and music lovers remain flexible and use the app in ways that mesh with their creative process. This flexibility enables experimentation and the creation of new genre-bending work.
Amadeus Code’s latest updates underline its vision for AI-assisted music creation, which insists that artists want inspiration, not computer-generated ditties. “We have purposefully based our approach on giving humans tracks that they can then flesh out,” explains Amadeus Code COO Taishi Fukuyama. “Our AI is designed to support creative people, especially those who want to or have to compose prolifically.”
However, to hear how well an idea is going to work, a composer or producer needs some sonic options to play around with. To help users hear more when Amadeus Code generates a melody, the app now incorporates a few key sounds, including four bass voices, and gives users the ability to mute any or all voices. They can shift the BPM of a generated track, too, allowing them to jump off from a favorite hit found in the Harmony Library--and then take it down tempo or hype it up.
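As a rough illustration of these mixing controls--muting individual voices and shifting BPM--here is a minimal Python sketch. The app’s internal model is not public, so all names here are assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the mixing controls described above -- muting
# voices and shifting BPM. Every name here is an assumption; the app's
# internal model is not public.

@dataclass
class Voice:
    name: str            # e.g. "melody", "bass 1", "chords"
    muted: bool = False

@dataclass
class SongSketch:
    bpm: float
    voices: list[Voice] = field(default_factory=list)

    def mute(self, name: str) -> None:
        """Silence one voice to hear the others in isolation."""
        for voice in self.voices:
            if voice.name == name:
                voice.muted = True

    def set_bpm(self, bpm: float) -> None:
        """Take a generated track down tempo or hype it up."""
        self.bpm = bpm

track = SongSketch(bpm=118.0, voices=[Voice("melody"), Voice("bass 1"),
                                      Voice("chords")])
track.mute("chords")   # isolate melody and bass
track.set_bpm(92.0)    # slow the sketch down to hear it in a new context
print(track)
```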
“We wanted to put a few more sounds in the app, without completely putting ‘words into your mouth,’ so to speak, to give users more ways to uncover how a particular AI-generated melody might fit into their projects,” Fukuyama notes. “The point of our AI-powered songwriting assistant is that it creates a shared control principle with the user and does not just autopilot the process. Also, sometimes you want to isolate one part or voice, and that was impossible before.” Now the app is even better at revealing a melody’s strengths and possibilities. “We’ve got more choices that can highlight the generated music’s nuances,” says Fukuyama.
Amadeus Code has also enabled social sharing for when a melody is just what the user is looking for. By sending a simple URL to a collaborator or bandmate, users can exchange ideas rapidly outside of the app. The URL opens a player, allowing collaborators to listen without logging in. Press play, and hear the AI’s ideas. Then humans can take them to the next, more developed level.
“Lots of AI music projects also focus on performance, and we don’t think that makes any sense,” Fukuyama says. “Performance is more compelling when humans are involved. What an AI system can do, however, is suggest an infinite number of ideas humans can evaluate and develop, and that shared control principle makes for a far more powerful engine for creativity.” The future of AI music isn’t just more robot music; it’s a tool that will connect human- and machine-made ideas for wilder, more productive creative exploration.