Press Clipping
04/04/2019
Article
Strictly algo-rhythm: AI music is nothing to be scared of

Mood-music playlists have become a popular part of the streaming ecosystem. Check the growing audiences of Spotify’s ‘Peaceful Piano’ (5.2 million followers), ‘Sleep’ (3.2 million) and ‘Deep Focus’ (3.1 million) playlists, as examples.

Artists whose tracks land on these playlists can see impressive, long-lasting spikes in their stream counts. But recent headlines about an artificial-intelligence startup called Endel ‘signing a record deal’ with Warner Music Group raised the prospect of those human artists competing with algorithms for slots on the big properties in the mood-music world.

Those headlines were over-egging Endel’s news somewhat: it’s a standard 50/50 distribution agreement rather than a record deal, covering 20 albums that will be released on DSPs during 2019. With five released so far, Endel has fewer than 5,000 monthly listeners on Spotify, and as yet no high-profile playlist placements. The humans aren’t being overthrown just yet, then. No disrespect to Endel, either: it’s early days, and the company is (as it should be) experimenting with different models.

Endel’s announcement, and the potential for algorithms to pump out music that can then be packaged up into albums (or playlists) on the main DSPs, point to a trend that labels should be thinking about, though. And not just in the now-traditional existential-dread debate about humans versus algorithms.

Until now, the most prominent commercial model for AI-generated-music startups has been a B2B focus on production music, with companies like Jukedeck and Amper Music initially pitching their products as faster, cheaper ways for YouTubers and other online-video creators to get royalty-free soundtracks for their content. AIVA (the first AI to be registered with a collecting society) and recent entrant Boomy are in similar territory.

At scale, this is more of a threat to production libraries like Epidemic Sound than it is to labels and the artists signed to them. YouTube soundtracks don’t figure highly in the industry’s sync business, although startups like Lickd are trying to kickstart that as a new revenue stream.

Endel and another recently-launched startup, Mubert, are tackling a different challenge which does encroach onto labels’ territory. Both initially focused on mobile apps as the delivery mechanism for their music.

They work by asking the user what they’re doing – relaxing, working, studying, trying to get to sleep and so on – and then serve up endless ‘generative’ soundtracks to suit. They’re more soundscapes than they are discrete tracks, although as Endel’s album plans show, they can be cut up and packaged in that way too.

These generative apps – Endel has also just launched an Alexa skill for Amazon’s Echo smart speakers – could become competition for the mood-music playlists on DSPs, as well as competing with human musicians for slots on those playlists.

An AI can generate a huge amount of music. “Amper could create 100 million songs this year,” its CEO Drew Silverstein told Music Ally in April 2018. Even if only a tiny percentage of that music passes a quality bar for commercial release – say, 0.1% – that could still be enough (if Silverstein was right) to release 100,000 tracks a year. That’s nearly 2,000 a week.
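For what it’s worth, the back-of-envelope maths holds up. A minimal sketch of that calculation (the 100 million figure is Amper’s claim; the 0.1% pass rate is purely illustrative):

```python
# Back-of-envelope check of the figures quoted above.
songs_generated_per_year = 100_000_000  # Amper's claimed annual capacity
quality_pass_rate = 0.001               # illustrative 0.1% passing a release bar

releasable_per_year = songs_generated_per_year * quality_pass_rate
releasable_per_week = releasable_per_year / 52

print(f"{releasable_per_year:,.0f} tracks a year")   # 100,000
print(f"{releasable_per_week:,.0f} tracks a week")   # ~1,923
```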

Spare a thought for the poor ‘Peaceful Piano’ curator trying to wade through those tracks ahead of an update! But this may be another reason for labels to feel relaxed about algorithmic competition: AI may outgun humans for quantity, but human curators will surely gravitate towards the known quality of fellow humans like Lang Lang or Ludovico Einaudi – who are both building thriving audiences in the streaming world.

(Perhaps labels should be thinking more about the prospect of DSPs buying or developing in-house the kind of technology that Endel and Mubert have been showcasing, however. If AI can prove itself capable of not just composing original music, but also personalising it to individual listeners, it would be surprising if streaming services did NOT invest in this area. Would now be a useful time for a reminder that AI-music guru François Pachet has been working for Spotify since 2017?)

This isn’t the whole story about AI music, though. In fact, the most exciting aspect of the sector may be AI as a tool for musicians: a successor to the synthesizer and the drum machine – both of which were similarly controversial in a ‘will this put humans out of work?’ way in their early days – that will help talented humans create new songs and sounds.

British startup Vochlea uses AI to turn beatboxing into drum patterns; Humtap and HumOn turn humming into melodies; Amadeus Code and WaveAI’s Alysia apps want to be composition tools to nudge songwriters out of their comfort zone or writer’s block; AI Music (the company) is exploring whether songs can be remixed in real-time to suit different contexts.

These are just a few examples of AI-music startups that see musicians as their customers, and AI as those musicians’ creative foil, rather than as their replacement. That’s why Abbey Road Studios has been engaging with these kinds of startups so early and so enthusiastically – Vochlea, Humtap and AI Music are among the alumni of its Abbey Road Red incubator.

Labels, too, can be helpful partners to (and even investors in) these kinds of startups, as well as others. Even the B2B-focused ones like Jukedeck and Amper Music have been keen to collaborate with artists and producers – EnterArts and Taryn Southern respectively – to see if their algorithms can generate musical seeds for humans to run with.

Music Ally recently surveyed our readers about AI music, including asking whether they thought musicians should be experimenting with AI music-creation technology. 56.1% said ‘definitely’ and 31.7% said ‘maybe’. Labels can play a constructive role in sparking that experimentation, by bringing startups in to meet their artists and explore collaboration opportunities.

This technology is improving at a rapid pace. A recent video from Australian startup Popgun showed its AI’s progression from trying to predict what a human pianist would play next, in early 2017, to composing and playing an entire bass, drums and piano backing track by itself by July 2018. The latest Abbey Road Red startup, LifeScore – whose first app will generate a constant stream of music from fragments originally recorded by a human orchestra – won rave reviews for its quality in the incubator’s February 2019 demo day.

In short, whatever AI music you’ve heard already, something better is coming next month, and the month after. That’s why there’s so much benefit in reading about and engaging with this technology, as often as possible, rather than (just) worrying about near-term competition for slots on the big DSP playlists, or doom-laden predictions of destroyed human livelihoods further down the line.

Want to know more?
– Read Music Ally’s AI-music primer from November 2018
– Check out our archive of AI-music news stories and interviews
– Read journalist Cherie Hu’s excellent explainer on Endel’s WMG deal
– Read about artist Holly Herndon’s fascinating work with ‘Spawn’
– Listen to Music Ally talking about AI music on the BBC’s Business Matters show