On September 5th we announced that the AI-assisted songwriting app Amadeus Code was open to all users globally. Today the AI music startup announced new updates featuring updated synth sounds, a new bass channel, and a new shareable web feature for easy collaboration between creators.
As reported back in September, Amadeus Code users can search Spotify’s library from within the app and use the chord progressions from their favorite songs, and Amadeus Code will compose new melodies on top of them. Chord progressions can be searched by specific parameters, including genre, mood, tempo, and key, or by title or artist. By using algorithms to suggest unfamiliar melodies, the app efficiently expands a songwriter’s imagination. Composers can then share their creations with other Amadeus Code users and with the world.
Amadeus Code’s newest update enhances the user experience and inspires songwriters with more immersive sounds than the original app. The shareable web feature adds a new collaborative element for creators who prefer additional human input. Below is the full press release; you can download Amadeus Code in the App Store here.
Hear the Music: Songwriting Assistant Amadeus Code’s New Features Give Users More Power to Play with and Share AI Generated Melodies
Amadeus Code’s latest updates underline its vision for AI-assisted music creation, which insists that artists want inspiration, not computer-generated ditties. “We have purposefully based our approach on giving humans tracks that they can then flesh out,” explains Amadeus Code COO Taishi Fukuyama. “Our AI is designed to support creative people, especially those who want to or have to compose prolifically.”
However, to hear how well an idea is going to work, a composer or producer needs some sonic options to play around with. To help users hear more when Amadeus Code generates a melody for them to work with, the app has incorporated a few key sounds, including four bass voices, as well as giving users the capacity to mute any and all voices. They can shift the BPM of a generated track, too, allowing them to jump off from a favorite hit found in the Harmony Library and then take it down tempo or hype it up.
“We wanted to put a few more sounds in the app, without completely putting ‘words into your mouth,’ so to speak, to give users more ways to uncover how a particular AI-generated melody might fit into their projects,” Fukuyama notes. “The point of our AI-powered songwriting assistant is that it creates a shared control principle with the user and does not just autopilot the process. Also, sometimes you want to isolate one part or voice, and that was impossible before.” Now the app is even better at revealing a melody’s strengths and possibilities. “We’ve got more choices that can highlight the generated music’s nuances,” says Fukuyama.
Amadeus Code has also enabled social sharing for when a melody is just what the user is looking for. By sending a simple URL to a collaborator or bandmate, users can exchange ideas rapidly outside of the app. The URL contains a player, allowing collaborators to listen without logging in. Press play, and hear the AI ideas. Then humans can take them to the next, more developed level.
“Lots of AI outputs also include performance, and we don’t think that makes any sense,” Fukuyama says. “Performance is more compelling when humans are involved. What an AI system can do, however, is suggest an infinite number of ideas humans can evaluate and develop, and that shared control principle makes for a far more powerful engine for creativity.” The future of AI music isn’t just more robot music; it’s a tool that will connect human- and machine-made ideas for wilder, more productive creative exploration.