As we consider AI’s power, we seem to forget one central, indisputable fact: AI is a product of human interactions.
You’d never guess this from reading the frequent headlines on the subject. Commentators imagine future AI as a kind of Skynet-style artificial general intelligence (AGI), in which computer systems will not just beat the world’s best players at Go but will become the next Picasso or Drake, or our merciless cyber-overlords. Even Stanford’s admirable attempt to bridge disciplinary gaps in the field of machine learning and computer intelligence, its new Institute for Human-Centered Artificial Intelligence, plays subtly into the fallacy that humans are somehow tangential to AI.
Yet there can be no data without humans. There can be no training of models or analysis of results without humans. There can be no application of those results without humans. We impact every single moment in the process, directly and indirectly. The division between artificial — the product of human artifice — and human is one we’ve made up.
This invented division seeps deep into our understanding and imagination as we contemplate the future of algorithms, machine learning, and AI. It’s clouding our ability to see the striking potential of AI as a helpmate to human knowledge and creativity. It’s making us willfully ignorant of the true stakes of AI: how it will shape us and, most importantly, how we will shape it.
We’ve both devoted years to developing generative AI systems that help artists create unexpected works. As researchers and entrepreneurs who have had to fold a large body of human knowledge spanning continents and centuries into our models and data sets, we have a historical perspective that enriches our concept of AI’s possibilities and pitfalls.
So we see clearly that we are not facing some radically new dilemma with AI. In fact, society is rehearsing a debate eerily similar to the ones it once had about photography and about recorded and electronic sound. The artists who won those debates long ago, through the profound expressiveness of their work, can speak to our concerns about AI today. Machine learning systems and algorithms may prove not to displace us, but to become a new medium for human expression.
Photography did not replace painting, just as the 808 drum machine and algorithmic composition did not eliminate society’s need for drummers and composers. New technology that automates certain tasks does not erase the humans who perform those tasks. Just as ATMs changed the jobs of bank tellers rather than abolishing them, these new technologies often transform a job without eliminating it. And innovations like 5G, by putting high-bandwidth audio and video everywhere, may rapidly expand the number of jobs demanding constant visual and auditory creativity.
It’s worth looking back at the convictions of early arts commentators, to understand why technology often supplements, rather than replaces, human creative work. A March 1855 essay in The Crayon, the leading arts criticism and aesthetic theory journal of mid-19th century America, exclaimed: “However ingenious the process or surprising the results of photography, it must be remembered that this art aspires only to copy, it cannot invent.” That argument will sound ridiculous to anyone who has seen works by the likes of Man Ray and Diane Arbus.
The inventive potential of photography was already emerging for photographers at the time, including American landscape photographer John Moran, who noted in 1865 that the image-taker’s and -viewer’s perception, the humans interacting with the machine, gave the copied images the potential to become art: “If there is not the perceiving mind to note and feel the relative degrees of importance in the various aspects which nature presents, nothing worthy of the name of pictures can be produced. It is this knowledge, or art of seeing, which gives value and importance to the works of certain photographers over all others.”
We now have more than a century of experience to consider, and Moran’s perspective has become essential to the way we currently view art. Ravishing images eventually won over viewers, who became less concerned with techniques and more focused on what the image said, what it made them feel. Invention happens not inside a camera, but in the relationship between the creative photographer and the imagination of the viewer.
The 20th-century debates around sound recording and computational or electronic composition methods have similar elements. Take, for example, the view of John Philip Sousa, one of the best impresarios of his generation as well as a highly skilled composer. Writing a scathing condemnation of recorded music in a 1906 article, Sousa decried the death of sincere, human music and its appreciation: “I foresee a marked deterioration in American music and musical taste, an interruption in the musical development of the country, and a host of other injuries to music in its artistic manifestations, by virtue — or rather by vice — of the multiplication of the various music-reproducing machines.” They were soulless, and made soulless sounds, he argued.
Yet, as we all know, some of the 20th century’s most groundbreaking and soulful music was brought to listeners via recordings — the entire jazz canon, arguably — or brought to life by machines. These soulless machines sparked entire new music-making communities in creative human hands.
Machines have severe limitations, however, and forward-thinking music-makers understand this. Iannis Xenakis presciently saw that the marriage of human and machine, of creativity and mathematical operations, could yield the most interesting possibilities: “The great idea is to be able to introduce randomness in order to break up the periodicity of mathematical functions, but we’re only at the beginning. The products of the intelligence are so complex that it is impossible to purify them in order to submit them totally to mathematical laws.” Randomness is key to pushing technology beyond its narrow limits and allowing it to unlock powerful human impulses.
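To make Xenakis’s idea concrete, here is a minimal sketch in Python of what “introducing randomness to break up periodicity” can look like. It is purely illustrative: the pentatonic scale, the sine-wave pattern, and the perturbation probability are invented stand-ins of ours, not Xenakis’s stochastic techniques and not code from AICAN or Amadeus Code. The machine supplies a strict pattern plus noise; the human decides what is worth keeping.

```python
# Illustrative sketch only: stochastic perturbation of a periodic pitch pattern.
# The scale, mapping, and parameters are invented for demonstration purposes.
import math
import random

PENTATONIC = [60, 62, 64, 67, 69]  # MIDI note numbers: C major pentatonic


def periodic_melody(length: int, period: float = 8.0) -> list[int]:
    """A strictly periodic pattern: a sine wave quantized onto the scale."""
    notes = []
    for step in range(length):
        phase = math.sin(2 * math.pi * step / period)        # -1.0 .. 1.0
        index = round((phase + 1) / 2 * (len(PENTATONIC) - 1))
        notes.append(PENTATONIC[index])
    return notes


def perturb(notes: list[int], chance: float = 0.25, seed: int | None = None) -> list[int]:
    """Break the periodicity: occasionally jump to a random scale degree."""
    rng = random.Random(seed)
    return [rng.choice(PENTATONIC) if rng.random() < chance else n for n in notes]


if __name__ == "__main__":
    strict = periodic_melody(16)
    loose = perturb(strict, chance=0.25, seed=42)
    print("periodic:", strict)
    print("perturbed:", loose)  # a composer keeps, edits, or discards what emerges
```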
To assume that AI does not support human creativity as its technological predecessors did is to misunderstand the essence of AI and what it promises creative human minds. AI-generated results can be purely random, as Xenakis wished, or can follow sets of rules and boundaries, while remaining malleable and ever-evolving, responding to and deepening with human input. Creative humans take these emergent results and frame them for their fellow humans, who will themselves forge meaning on their own terms. Every step of the process is inherently socially embedded. That’s why we can make meaning out of the vibrations, plays of color, movement, and gesture that AI generates.
AI is different from past technological innovations, of course. It transforms itself as you create with it, responding to your input, rejecting what you reject, or presenting bizarre associations and results you might never have come up with if left to your own devices. In other realms, it has profound ethical and social implications that we need to examine openly and soberly.
Yet first we need to embrace AI’s humanness, to acknowledge that it is us, distilled and transformed in new and unpredictable ways, much like a work of great and lasting art is.
Ahmed Elgammal is founder and director of the Art and Artificial Intelligence Laboratory, a professor of computer science at Rutgers University, and the developer of AICAN, an autonomous AI artist and collaborative creative partner.
Taishi Fukuyama is cofounder and Chief Operating Officer at Amadeus Code, an AI-powered songwriting assistant, and Chief Marketing Officer of Qrates, the world’s first vinyl crowdfunding marketplace.