Amadeus Code is an artificial intelligence-powered songwriting assistant. The technology takes a new approach, breaking centuries of melodies down into their constituent parts (“licks”) and transforming them into data. By eschewing more traditional methods of musical information transfer--the score and MIDI, for example--the Japanese researchers behind it have created a system that generates fresh melodies.
Amadeus Code’s latest updates underline its vision for AI-assisted music creation, which insists that artists want inspiration, not computer-generated ditties. “We have purposefully based our approach on giving humans tracks that they can then flesh out,” explains Amadeus Code COO Taishi Fukuyama. “Our AI is designed to support creative people, especially those who want to or have to compose prolifically.”
However, to hear how well an idea is going to work, a composer or producer needs some sonic options to play around with. To help users hear more when Amadeus Code generates a melody for them to work with, the app has incorporated a few key sounds, including four bass voices, as well as giving users the capacity to mute any and all voices. They can shift the BPM of a generated track, too, allowing them to jump off from a favorite hit found in the Harmony Library--and then take it down tempo or hype it up.
“We wanted to put a few more sounds in the app, without completely putting ‘words into your mouth,’ so to speak, to give users more ways to uncover how a particular AI-generated melody might fit into their projects,” Fukuyama notes. “The point of our AI-powered songwriting assistant is that it creates a shared control principle with the user and does not just autopilot the process. Also, sometimes you want to isolate one part or voice, and that was impossible before.” Now the app is even better at revealing a melody’s strengths and possibilities. “We’ve got more choices that can highlight the generated music’s nuances,” says Fukuyama.
Amadeus Code has also enabled social sharing for when a melody is just what the user is looking for. By sending a simple URL to a collaborator or bandmate, users can exchange ideas rapidly outside of the app. The URL opens a player, allowing collaborators to listen without logging in. Press play, and hear the AI’s ideas. Then humans can take them to the next, more developed level.
“Lots of AI music efforts also focus on performance, and we don’t think that makes any sense,” Fukuyama says. “Performance is more compelling when humans are involved. What an AI system can do, however, is suggest an infinite number of ideas humans can evaluate and develop, and that shared control principle makes for a far more powerful engine for creativity.” The future of AI music isn’t just more robot music; it’s a tool that will connect human- and machine-made ideas for wilder, more productive creative exploration.