I’ve always known that music and math have a deep and complex relationship, and for most of my career, I’ve used computers and software recording solutions as the centerpieces of my workflow. But audiophiles and engineers are a stubborn breed; walk into any modern recording studio, and you’ll see walls of analog equipment, reel-to-reel tape recorders, large-format analog mixing consoles with hundreds of thousands of discrete components, and in general, lots of mid-20th-century artifacts that rely on vacuum tubes, bulky iron transformers, germanium transistors, carbon-composition resistors, and countless other parts that haven’t been produced for decades.
Recordists and engineers, myself included, often have a penchant for these old, “outdated” pieces because of the character they impart to audio, often described in somewhat intangible terms like “warmth”, “fatness”, “airiness”, and other immeasurable qualities. What we’re really latching onto, however, are the inconsistencies and idiosyncrasies of the hardware itself: the varying plate voltage of a tube, harmonic excitement from overdriving a preamplifier circuit, the crosstalk between left and right channels on an analog console — in other words, the mechanical limitations and imperfections that the advent of digital recording sought to eliminate.
While I still prefer recording as much in the analog domain as possible, routinely choosing hardware over plugins, my time at General Assembly has taught me that data science, and more specifically, the intersection of digital signal processing (DSP) and machine learning can really help to quantify some of these mysterious metrics that have evaded tangible classification for so long.
My final project, which I’ll discuss in depth in my following posts, seeks to do just that. With data sourced from Spotify’s API and data I generated using the Python library Librosa, along with the help of many amazing friends and teachers at General Assembly, I’m beginning to see audio in a new, more mathematical way. For example, convolutional neural networks can recognize and classify images, and a high-resolution spectrogram — extractable via Librosa — is essentially an image that serves as a “sonic fingerprint” for an audio file. Together, they open the door to some pretty amazing possibilities when it comes to quantifying and comparing audio. With more research, I hope to turn this into a means for young, budding artists to easily compare their songs against popular music worldwide, and see which artists and songs they most resemble, as well as where those artists and songs are popular.
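To make the “spectrogram as image” idea concrete, here’s a minimal sketch of the underlying math using only NumPy — a sliding-window FFT over a signal, which is the core of what Librosa’s `librosa.stft` computes (Librosa adds padding, windowing options, and mel scaling on top). The test tone and all parameter values below are illustrative choices, not anything from my actual project:

```python
import numpy as np

def magnitude_spectrogram(signal, n_fft=512, hop=128):
    """Slide a Hann-windowed FFT along the signal; each column of the
    result is one time frame, each row one frequency bin. Stacked
    together, the columns form the 2-D 'image' a CNN can classify."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    # Shape: (n_fft // 2 + 1 frequency bins, number of frames)
    return np.array(frames).T

# Illustrative input: a one-second 440 Hz sine at 22,050 Hz sample rate
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

spec = magnitude_spectrogram(tone)
# Energy concentrates in the bin nearest 440 Hz (bin width = sr / n_fft)
peak_bin = int(np.argmax(spec.mean(axis=1)))
```

A pure tone lights up a single horizontal band in this image; real music produces the rich, textured patterns that make spectrograms useful as fingerprints.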
So while my time at General Assembly still hasn’t changed my mind about fully pivoting away from music creation, it has certainly opened up a multitude of new doors and new ways of looking at the subjects I love more than anything. What’s more, the community I found myself a part of during this process became more like family than classmates and instructors. I’m excited to keep pushing the boundaries of my industry with these exceptional people by my side, and in the process, find a new and better way to connect artists with music lovers around the world.