With advancements in the quality, accessibility, and affordability of artificial intelligence (AI), usage of AI tools like ChatGPT has recently exploded. AI-generated text, images, and videos are rampant across the internet, provoking both excitement and concern. This new level of AI technology invites careful consideration and prompts us to examine how these tools work in our world.

Let’s examine AI in the music world and how it’s being used by both listeners and creators. We’ll look at new possibilities for AI-generated music and some of the pros and cons of this rapidly advancing technology.

Music and Artificial Intelligence

AI and Music Listening

Artificial intelligence has transformed the way we listen to music and discover new artists. Platforms like Spotify have leveraged AI to create personalized playlists that help users discover new music they are likely to enjoy. Spotify's Discover Weekly feature, for example, uses machine learning algorithms to analyze user behavior, including listening history, playlist creations, and likes, and builds customized playlists from that data.
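
Spotify doesn't publish the details of its recommendation algorithms, but the core idea of learning from listening behavior can be sketched in a few lines of Python. The tiny play-count matrix and song names below are purely hypothetical; the sketch simply shows how songs played by the same listeners can be scored as "similar" and suggested to someone who hasn't heard them yet.

```python
# A toy item-to-item recommender: not Spotify's actual system, just an
# illustration of recommending music from listening history.
# All users, songs, and play counts below are made up.
import numpy as np

songs = ["Song A", "Song B", "Song C", "Song D"]
# Rows are users, columns are songs; values are hypothetical play counts.
plays = np.array([
    [12, 0, 5, 0],
    [10, 2, 7, 0],
    [0, 8, 0, 9],
    [1, 6, 0, 7],
], dtype=float)

# Cosine similarity between songs, based on which users play them together.
norms = np.linalg.norm(plays, axis=0)
similarity = (plays.T @ plays) / np.outer(norms, norms)

def recommend(user_row: np.ndarray, top_n: int = 2) -> list[str]:
    """Score unheard songs by their similarity to songs the user already plays."""
    scores = similarity @ user_row
    scores[user_row > 0] = -np.inf          # don't re-recommend known songs
    best = np.argsort(scores)[::-1][:top_n]
    return [songs[i] for i in best]

print(recommend(plays[0]))  # suggestions for the first (hypothetical) user
```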

Spotify’s new AI DJ feature takes this one step further by using machine learning to generate a continuous mix of songs based on a user’s listening history and preferences. Still in beta, the feature uses AI voice technology to deliver a realistic DJ voice and generative AI built on OpenAI technology to select tailored music. Spotify explains:

We put this in the hands of our music editors to provide you with insightful facts about the music, artists, or genres you’re listening to. The expertise of our editors is something that’s really important to our philosophy at Spotify. We have experts in genres who know music and culture inside and out. And no one knows the music scene better than they do. With this generative AI tooling, our editors are able to scale their innate knowledge in ways never before possible.

With these developments, Spotify and other music streaming platforms are using AI not as a replacement for human expertise, but as a way to extend the reach of that expertise. But can AI similarly fit into the workflow of music creators? Let’s take a look at AI music production.

AI in Music

AI Music Composition

AI is increasingly available in the music industry to generate new melodies, harmonies, and rhythms. The process of developing AI-generated songs typically involves training a machine learning model on a large dataset of existing songs. The model learns the patterns and structures of the songs in the dataset, then generates new music that is similar in style and structure. The model's output can be modified and refined by a human composer, resulting in a finished piece of music that combines the AI's raw material with the composer's artistic vision.
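
To make that process concrete, here is a deliberately simplified Python sketch of "learn patterns from existing songs, then generate something similar." Real AI song generators rely on deep neural networks trained on enormous datasets; this toy example only learns which note tends to follow which in a handful of invented melodies and then samples a new one.

```python
# A minimal sketch of learning patterns from existing music and generating
# something similar: a first-order Markov chain over note names.
# The tiny "dataset" of melodies below is invented for illustration.
import random
from collections import defaultdict

training_melodies = [
    ["C", "D", "E", "G", "E", "D", "C"],
    ["C", "E", "G", "A", "G", "E", "C"],
    ["D", "E", "G", "A", "G", "E", "D"],
]

# Count how often each note follows each other note in the training data.
transitions = defaultdict(list)
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start: str = "C", length: int = 8) -> list[str]:
    """Sample a new melody by repeatedly choosing a likely next note."""
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:              # dead end: fall back to the start note
            choices = [start]
        melody.append(random.choice(choices))
    return melody

print(generate())  # e.g. ['C', 'E', 'G', 'A', 'G', 'E', 'D', 'C']
```

A human composer would then treat output like this as raw material, keeping the phrases that work and reshaping or discarding the rest.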

In some ways, this AI music can be helpful. It can be an interesting tool for learners experimenting with music, for example. AI song generators also let creators produce large quantities of music quickly, which is useful for composers working under tight deadlines, such as those scoring television series or video content for platforms like YouTube. AI-generated music can also serve as a source of ideas and starting points for human composers' creative work.

However, there are also potential drawbacks to using AI in music composition. One of the main concerns is the potential for AI-generated music to be formulaic or derivative, lacking the creativity and originality of human-created music. While AI-generated music can be impressive in terms of technical skill and complexity, it may not have the emotional depth and artistic vision that a human composer could provide.

It’s also important to pay attention to the ethical considerations surrounding the use of AI in music creation. Some concerns are legal and practical: who owns the rights to AI-generated music, and can it be copyrighted in the same way as human-created music? There are also broader concerns that AI technology could replace human composers in some instances, potentially leading to a decline in artistic diversity.

Despite these concerns, the use of AI in music composition is likely to continue to grow in popularity as technology advances. AI music is a fascinating and rapidly evolving field that has the potential to revolutionize the music industry. While there are some potential drawbacks and ethical concerns surrounding the use of AI in music creation, it is, in effect, a genie that cannot be put back in the bottle. The music industry must work with AI’s benefits, temper its negatives, and figure out how to move forward in our futuristic world.

AI song generators such as Google’s MusicLM are trained on large musical datasets, where each song has been labeled with different categories related to emotion, musical style, and instrumentation. The training identifies statistical correlations between these categories and musical elements such as notes and rhythms. A user can then generate music with a text prompt, such as “a calming violin melody backed by a distorted guitar riff.”
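
Google hasn't released MusicLM as a simple script, so the snippet below is only a conceptual sketch of what "generate music from a text prompt" means: a few invented associations between descriptive words and musical settings, used to turn a prompt into a tempo and a handful of notes. The real system is a large neural network trained on labeled audio, not a lookup table like this; every mapping here is a made-up placeholder.

```python
# A toy illustration of conditioning generation on a text description.
# Nothing here reflects MusicLM's real architecture; it only shows the idea
# of mapping words in a prompt to musical choices. All mappings are invented.
import random

# Hypothetical associations between descriptive words and musical settings.
MOOD_TO_TEMPO = {"calming": 70, "energetic": 140, "distorted": 120}
STYLE_TO_SCALE = {
    "violin": ["G3", "A3", "B3", "D4", "E4"],
    "guitar": ["E2", "G2", "A2", "B2", "D3"],
}

def generate_from_prompt(prompt: str, length: int = 8) -> dict:
    """Pick a tempo and note pool suggested by the prompt, then sample notes."""
    words = prompt.lower().split()
    tempo = next((MOOD_TO_TEMPO[w] for w in words if w in MOOD_TO_TEMPO), 100)
    scale = next((STYLE_TO_SCALE[w] for w in words if w in STYLE_TO_SCALE),
                 ["C4", "D4", "E4", "G4", "A4"])
    notes = [random.choice(scale) for _ in range(length)]
    return {"tempo_bpm": tempo, "notes": notes}

print(generate_from_prompt("a calming violin melody"))
```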

However, AI song generators do not “understand” musical concepts in the same way that human composers do through musical training and experience. Composers develop a deep knowledge of the fundamentals of music theory, melody, harmony, rhythm, and musical styles. They also draw on intuition, emotions, and cultural influences to create music that is truly unique and groundbreaking. If you are interested in music composition—whether or not you would like to involve AI technology—you can learn from our expert educators at Levine! Check out our Composition, Songwriting, or Music Theory classes today!