The Future of Music A.I: Artificial Intelligence vs. Artist Ingenuity

by Camden Cassels

As humans, we interact with artificial intelligence every day. From our phones to our tablets to our cars, these objects have become an intelligent extension of the human form. What does this mean for music? Will A.I. ever be able to replace human creativity? How will musicians and technology interact in a future inevitably shaped by A.I.? Here’s what you need to know:



Technology is accelerating at a pace humans have never before seen. In a world increasingly reliant on machines, computers, and algorithms, we depend on technology to achieve optimal efficiency. Even as these advancements creep into creative fields, we assume that technology can never fully imitate human creativity.

Most view machines as executors of the mechanical tasks that humans assign to them. The notion that a machine could be creative on its own is usually reserved for sci-fi. However, with artificial intelligence, or A.I., we are witnessing the early stages of machines mimicking human creativity. In recent years, the music industry in particular has seen an increase in the adoption of A.I., a development worrisome to some creatives.

But is this fear warranted or is it an overreaction?

A musician improvises alongside A.I. Duet, software developed in part by Google’s Magenta (pic: @google)


Multiple companies have already begun developing A.I. capable of making music. IBM has created “IBM Watson Beat.” Google has launched “Google Magenta.” Spotify has opened the “Spotify Creator Technology Lab.” Startups like Jukedeck, Melodrive, and Amper have also taken steps toward building similar technologies.

Most of these programs function by combing through massive amounts of music and looking for patterns. They analyze intricacies like chord progressions, song length, pitch, and tempo to find commonalities, then spit out new compositions based on human preferences. These outputs are relayed using MIDI (Musical Instrument Digital Interface), the standard language through which electronic instruments, computers, and audio devices communicate. A.I. technologies output these signals and ideas in a language that artists can use with their own electronic instruments.
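The pattern-mining idea behind these programs can be sketched in a radically simplified form: learn which note tends to follow which in a body of existing music, then sample a new melody from those statistics. The toy corpus and function names below are invented for illustration; real systems like Magenta use far richer models, but the note numbers are genuine MIDI pitches (60 = middle C).

```python
import random
from collections import defaultdict

# Toy "corpus": two melodies written as lists of MIDI note numbers (60 = middle C).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
]

# Learn first-order transitions: for each note, collect the notes that follow it.
transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start=60, length=8, seed=0):
    """Sample a new melody, one note at a time, from the learned transitions."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        notes.append(rng.choice(transitions[notes[-1]]))
    return notes

print(generate())
```

A sequence like this is exactly what a program would then wrap in MIDI note-on/note-off messages for an artist's instruments to play back.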

MIDI controller (pic: @lucabravo)


Additionally, companies like Amper are building technology that can output full audio on its own, without external instruments or other software. Amper lets the user pick a “genre” and a “mood.” The program then delivers a beat or song that its algorithm deems a fit for those categories. After the initial output, the user can modify instruments, tempo, pitch, and other components. Once the human selects the preferred mood, Amper acts as a launch pad for the creation of a song.
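Amper’s internals are not public, but the genre-and-mood workflow it describes can be imagined as a lookup from the user’s choices to musical parameters, followed by note generation within those constraints. Everything in this sketch — the preset names, tempos, and scales — is invented for illustration:

```python
import random

# Hypothetical presets mapping (genre, mood) to musical parameters.
# These values are made up; a real product would encode far more.
PRESETS = {
    ("pop", "upbeat"):       {"tempo_bpm": 120, "scale": [60, 62, 64, 65, 67, 69, 71]},
    ("cinematic", "somber"): {"tempo_bpm": 70,  "scale": [57, 59, 60, 62, 64, 65, 68]},
}

def make_track(genre, mood, bars=2, seed=0):
    """Pick the preset for the chosen genre/mood, then fill bars with notes from its scale."""
    preset = PRESETS[(genre, mood)]
    rng = random.Random(seed)
    notes = [rng.choice(preset["scale"]) for _ in range(bars * 4)]
    return {"tempo_bpm": preset["tempo_bpm"], "notes": notes}

track = make_track("pop", "upbeat")
print(track["tempo_bpm"], track["notes"])
```

The human’s role here mirrors the article’s point: the person chooses the categories and refines the result; the program only fills in a starting point.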

A predominant fear in the music industry, at least among some artists, is that A.I. will replace human creativity. The possibility that a computer could write a better song than a human fuels that unease. However, this reality remains far in the future, and may never arrive.

Amidst society’s current advancements in A.I., many may not realize that A.I. has already been used for decades in the creation of commercial music:

  • David Bowie famously employed a basic word generator to create song lyrics in the late ’70s.

  • Sony’s Flow Machines project used A.I. to create ‘Daddy’s Car,’ a song designed to echo the style of the Beatles.

  • Pop artist Taryn Southern has used and formally credited A.I. in her music.

However, none of these A.I.-to-artist crossovers have shown that A.I. can create “good” music without humans. Bowie essentially used a word randomizer for lyric ideas. ‘Daddy’s Car’ was composed by A.I., but the production, arrangement, vocals, and lyrics were all human. And Taryn Southern used A.I. in her music the same way that anyone with GarageBand on a MacBook can build upon preset loops: the A.I. she applied simply gave her more starting points to choose from. None of the programs cited above can yet create a commercially successful song from start to finish.

As for replicating hits, many smash songs do not follow a formula that can be reduced to code. In a study done last year by Columbia Business School professor Michael Mauskapf, 27,000 Hot 100 songs from the past 60 years were analyzed for patterns and consistencies. What the team found was closer to the opposite. Mauskapf writes:

“Our analysis suggests that, yes, you can certainly listen to what's around you and try to match it and that will certainly serve you well, but there is an element of randomness and an element of art — for lack of a better word — that means you can't just scientifically determine what is going to become a hit.”



At the moment, the most viable use for A.I. music creation is “functional” music: music made for elevators, commercials, video games, and other background environments. These background songs could be a valuable avenue for A.I. music companies, turning their technologies into a consistent revenue stream for cheap, functional music. Jingle-type creations are within A.I.’s reach, but anything requiring more complexity, lyrics, and, for lack of a better word, talent remains outside its current scope.

Today, no technology exists that can create music like a human. No song created by A.I. alone has even remotely resembled a commercially successful hit. A.I. serves as a tool to help humans create music, much like autotune, automatic loops, and other computer-based enhancements. There may be a future where artificial intelligence can compete with humans in the capacity to create music, but for now, artistic ingenuity remains the true A.I.