AI and Machine Learning Are Making a Splash in Music
It would be an understatement to say that artificial intelligence has transformed the music industry. AI is present in every step of the music-making and listening process, changing how we consume the medium at its core. It has impacted the industry in some ways you would expect, through algorithms and data crunching, but it has also evolved to the point where AI can create its own unique music.
Here are some of the ways Artificial Intelligence and Machine Learning are changing and making a huge impact on the music industry:
How We Listen
The way we consume music is fundamentally different from the previous generation's. Instead of listening to the radio, we use services that provide personalized streams based on what we listen to. For example, Spotify, a popular streaming service, uses countless data points to recommend new music to the listener. It tracks everything from which songs we play together to which songs we skip, in order to best fit the listener's tastes.
Spotify's particular approach is called Bandits for Recommendations as Treatments (BaRT). BaRT optimizes for user satisfaction, which in this case means the listener playing a song for more than 30 seconds. Once 30 seconds pass, the stream is monetized, and BaRT records that song as a positive data point from which to explore further recommendations.
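To make the bandit framing concrete, here is a minimal, purely illustrative Python sketch. The class name, the epsilon-greedy strategy, and the bookkeeping are my own assumptions; Spotify's production BaRT is far more sophisticated. But the explore/exploit loop and the 30-second reward signal are the core idea.

```python
import random

class EpsilonGreedyRecommender:
    """Toy multi-armed bandit where each 'arm' is a candidate song.

    Hypothetical sketch only; not Spotify's actual BaRT implementation.
    """

    def __init__(self, songs, epsilon=0.1):
        self.epsilon = epsilon
        self.plays = {s: 0 for s in songs}   # times each song was served
        self.wins = {s: 0 for s in songs}    # times it was listened past 30s

    def pick(self):
        # Explore: occasionally try a random song to gather new data.
        if random.random() < self.epsilon:
            return random.choice(list(self.plays))
        # Exploit: otherwise serve the song with the best observed rate.
        return max(self.plays,
                   key=lambda s: self.wins[s] / self.plays[s] if self.plays[s] else 0.0)

    def feedback(self, song, seconds_listened):
        # A stream past 30 seconds counts as the "satisfaction" reward.
        self.plays[song] += 1
        if seconds_listened > 30:
            self.wins[song] += 1
```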
Spotify derives its recommendations by tracking what everybody listens to. It takes the mood, style, and genre of your music, weighs what you have on heavy rotation, and looks for someone with similar taste. Then it recommends songs that they listen to but you do not yet.
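This "find your taste twin" idea is classic user-based collaborative filtering, which can be sketched in a few lines. The user names and play-count matrix below are invented for illustration, and Spotify's real pipeline blends collaborative filtering with audio and text models, so treat this as the concept rather than the product.

```python
import numpy as np

def recommend(play_counts, you, top_n=5):
    """Suggest songs from the listener whose taste vector is closest to yours."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    mine = play_counts[you]
    # Find the "taste twin": the user with the most similar play vector.
    twin = max((u for u in play_counts if u != you),
               key=lambda u: cosine(mine, play_counts[u]))
    # Candidate songs: ones the twin plays that you never have.
    candidates = np.where((play_counts[twin] > 0) & (mine == 0))[0]
    ranked = candidates[np.argsort(-play_counts[twin][candidates])]
    return twin, ranked[:top_n]

# Rows are per-song play counts for three hypothetical listeners.
listens = {
    "alice": np.array([9, 4, 0, 0, 2]),
    "bob":   np.array([8, 5, 3, 0, 1]),
    "carol": np.array([0, 0, 7, 9, 0]),
}
print(recommend(listens, "alice"))  # bob is the twin; song index 2 is suggested
```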
How Music Is Created
The impact of AI on the creation of music is probably the most frequently misunderstood topic. Artificial intelligence directly affects the creative process of musicians. Since Spotify monetizes a stream once the listener reaches 30 seconds, artists trying to game the algorithm front-load their songs with hooks to keep the listener past those first 30 seconds. It is also widely accepted that it is best to release music on Fridays, in an effort to land on Friday's Release Radar playlist. Or, you know, to key in on the Friday night pre-game and party. 🙂
Some artists use AI to give their tracks a finishing touch. Landr is one such tool: it uses machine learning to replicate the decisions engineers make when mastering a track, applying stereo enhancement, saturation, and multi-band compression based on what it thinks your song needs.
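To give a feel for one of those mastering moves, here is a heavily simplified, single-band dynamic-range compressor in numpy. It is a sketch of the general technique, not Landr's actual processing chain, and it omits the attack/release smoothing a real compressor would have.

```python
import numpy as np

def compress(signal, threshold_db=-18.0, ratio=4.0):
    """Reduce dynamic range: attenuate everything above the threshold.

    Simplified illustration; real mastering chains are far more involved.
    """
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(signal) + eps)      # per-sample level in dB
    over_db = np.maximum(level_db - threshold_db, 0.0)  # amount above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)            # 4:1 keeps 1/4 of the overshoot
    return signal * 10 ** (gain_db / 20.0)

# A multi-band setup would first split the signal into frequency bands
# (e.g. with filters), compress each band like this, then sum the bands.
```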
The Music Itself
It was inevitable that we would eventually reach a point where AI could create its own music, and that time is now. Some systems are even capable of creating music entirely independently of humans.
An interesting AI solution is DeepJams, which relies on human input to learn and produce. You start by recording a short segment of music and feeding it into the program. It breaks the recording down into terms the AI understands, then generates a completely original segment loosely based on what you provided.
Some artificial intelligence requires minimal human input. Amper is an AI music company that specializes in scores for movies and video games. It requires some human input, but the composition itself is purely AI. You simply select the mood, style, instrumentation, tempo, and duration, and Amper generates something for you based on those choices. It works with an intuitive drag-and-drop system, so anybody can make music with it.
Jukebox by OpenAI is a neural net that generates music in specific genres. It even generates basic lyrics; they are not always coherent, but they are there. The lyrics are novel, too, as the model generalizes beyond the lyrics seen in its training data. OpenAI has published samples in the style of artists such as Katy Perry and Frank Sinatra. Jukebox accomplishes this with a multi-scale Vector Quantized Variational AutoEncoder (VQ-VAE), which differs from a regular variational autoencoder in that it compresses audio into a discrete codebook of learned embeddings rather than a continuous latent space.
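The defining operation of a VQ-VAE, quantizing continuous encoder outputs into discrete tokens, fits in a few lines of numpy. This is a minimal illustration of the general technique, not OpenAI's Jukebox code, which also trains the codebook and stacks this step at multiple time scales.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Snap each continuous latent vector to its nearest codebook entry.

    latents:  (n, d) array of encoder outputs
    codebook: (k, d) array of learned embedding vectors
    """
    # Squared Euclidean distance from every latent to every code.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dists.argmin(axis=1)           # one discrete token per latent
    return codes, codebook[codes]          # tokens + their quantized vectors

# Toy usage with random data standing in for real encoder outputs.
rng = np.random.default_rng(0)
codes, quantized = vector_quantize(rng.normal(size=(8, 4)),
                                   rng.normal(size=(16, 4)))
```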
Endel has had various projects meshing music and AI. Its first was a collaboration with the artist Toro y Moi, in which he created four tracks ("Flow", "Move", "Balance", and "Connect") with other artists such as Nosaj Thing, Madeline Kenney, Washed Out, and Empress Of. Endel creates soundscapes aimed at relaxing your mind. Its algorithm draws on Flow: The Psychology of Optimal Experience by Mihaly Csikszentmihalyi, focusing on the "optimal experience" and a state of mind called "flow".
In its most recent project, Endel completely crossed the barrier of human input: the result is purely AI-generated music. Dubbed AI Lullaby, it is a system that generates live lullaby music for babies. For this project, Endel collaborated with the artist Grimes, whose voice audio the AI intersperses with the generated music.
It truly is amazing to see how far artificial intelligence has come, and incredible how much music, a fascinating art form, has changed. Artificial intelligence, machine learning, and the evolution of neural networks are impacting industries far and wide. Music, a highly personal and human art form, can now be replicated by machines. It raises questions about what really is human and what new heights this technology will bring us to in the future.