JAPAN (CelebrityAccess) There has been a lot of misinformation, or misunderstanding, about a recent business deal between Warner Music Group and a company called Endel, which creates personalized sounds to help people focus and relax.
Basically, the headlines said that a record company signed an algorithm, which isn't the case. It was a partnership, or a distribution deal, not a signing: Warner will be distributing music created by Endel, and Warner is interested in that content, not the actual artificial intelligence.
Taishi Fukuyama, the COO of Amadeus Code, another phone app driven by algorithms and AI, talked to us from Japan and helped clarify the deal and its significance.
First, does your company have a relationship with Endel?
We don't have any relationship with them, other than the fact that we are both in the generative composition AI space.
So from what I understand, you not only believe this is significant but also misunderstood.
Right. First, you already clarified that this isn't a record deal but a distribution deal. Also, Warner Music didn't buy the algorithm, per se. It was just for the content. It is almost like a traditional distribution deal, which is different from the coverage so far.
Everyone is mostly concerned about how artificial intelligence will affect human musicians. That's what everybody is most interested in. I think the news about Endel and Warner is interesting because, in a sense, it validated the technology: the industry has assigned it real value, and there is a place for it in the market.
In the art world, several months ago, an AI-generated piece of artwork was actually sold at Christie’s Auction House for almost a half-million dollars.
In a sense, I think what’s interesting is that the industry didn’t have to create a whole new business framework for tools like this to be accepted into the marketplace.
I think a lot of companies like ours in the AI space have been asked whether an algorithm can retain copyrights and ownership. What this piece of news has proved is that those discussions will continue; perhaps it's too early to tackle those topics head-on, but there is still a place for these tools and technologies within the existing frameworks.
Can you tell us a little more about Amadeus Code? And do you see any parallel with Endel?
We call ourselves an AI-powered songwriting assistant. Our product is a mobile app available on the iOS App Store. It can create melodies on top of chord progressions.
For convenience's sake on a smartphone, the app comes pre-loaded with a lot of chord progressions, and the user can instantly create unlimited variations of melodies on top of them, which gives you inspiration for new songs.
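To make the idea concrete, here is a minimal toy sketch of generating melody variations on top of a chord progression. The progression data, note choices, and sampling logic are purely illustrative assumptions for this example; they do not describe how Amadeus Code's actual algorithm works.

```python
import random

# Toy progression: C, Am, F, G as lists of MIDI note numbers.
# Hypothetical data for illustration only.
PROGRESSIONS = {
    "pop": [[60, 64, 67], [57, 60, 64], [65, 69, 72], [67, 71, 74]],
}

def generate_melody(progression, notes_per_chord=4, seed=None):
    """Sample a simple topline: for each chord, pick chord tones at
    random, occasionally stepping away by a whole tone for variety."""
    rng = random.Random(seed)
    melody = []
    for chord in progression:
        for _ in range(notes_per_chord):
            pitch = rng.choice(chord)
            if rng.random() < 0.25:  # occasional passing tone
                pitch += rng.choice([-2, 2])
            melody.append(pitch)
    return melody

# Each seed yields a different "variation" over the same chords.
print(generate_melody(PROGRESSIONS["pop"], seed=42))
```

The point of the sketch is the interaction model described above: the chords are fixed inputs, and the system produces unlimited melodic variations over them for a human songwriter to pick from.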
I think, previously, when we used to talk about music AI, we would envision an algorithm generating fully produced tracks, and people worried whether that would eventually become good enough to take over human jobs and human creativity.
I think we’re finally seeing companies like Endel, like Amadeus Code specializing in a specific part of this creative AI space.
For example, we specialize in the topline right now. Endel, in a sense, is specializing in this adaptive, personalized utility AI rather than traditional songwriting.
So it's very specialized. I often compare this to self-driving cars. If you follow that space, there are specific levels of self-driving, and right now the most sophisticated is said to be level four. Before you get there, all the moving parts, whether it's pumping the brakes, sensors reading the surroundings, or even automating the turn of the steering wheel, are specific elements that have to be solved individually.
If you break that down into music, the AI needs to learn how to write melodies, analyze chord progressions and find the pattern recognition behind that. All these moving parts are slowly coming together and I think, yes, AI will eventually be able to write a pretty good song.
The app generates topline melodies on top of chord progressions and I say it’s an AI-powered songwriting assistant because we think it’s a collaborative technology, not something that’s meant to replace the human.
We have three co-founders, and my two fellow co-founders are Berklee alums. We have all been music producers in our careers. We very much are not just researchers and entrepreneurs but users of these technologies ourselves.
I think that, with AI actually in the marketplace now, not just as tools but as finished pieces of music on Spotify, distributed through record labels like Warner, it's easy to jump to the idea that it is competing with humans. But, at the same time, we also have to highlight that in the very near future, with the advent of 5G technology, we're entering a phase where the demand for music is going to explode just as much.
There will be an explosion of demand for video content and all that video will need music. If you look at that market, there will be a huge opportunity for musicians to supply music.
I think it's interesting that, at this moment in time, companies like ours and others are creating tools that enable musicians to create more efficiently and bring more musicians into the marketplace. At the end of the day, be it AI or not, these technologies accelerate the path to mastery, just as Auto-Tune and the drum machine did.
We're lowering the barrier to entry, which is a great thing.