Musical Mosaic AI Project by Antonio Medina

🎵 Nothing’s Ever Gonna Change My Love For AI (Except This Project) 🎼

What you’re about to see (I’m sorry) is my attempt at playing the first ~2 minutes of Nothing’s Ever Gonna Change My Love For You on the sax, accompanied by my AI-trained remix of Kaori Kobayashi’s iconic live performance of the song. Please enjoy, and I apologize in advance.

https://youtu.be/MIzSDs_QLKU

~Reflections

Yikes! With that only slightly disturbing, wannabe play-along remix aside, this project was a lot of fun and also a confusing whirlwind. After milestone 1, I set out with the goal of using the musical mosaic project to train an AI to be a potential accompaniment for live musical performance. Since I play the saxophone, I imagined a world where I could use other instrumentation or saxophone performances, including my own, to generatively provide a duet or backing track to a live song.

For a while, I tried using audio I recorded of myself on the sax for training, but (as you can hear from the video) my poor microphone quality just stacked crappy sound onto crappy sound, and I didn’t produce anything remotely worth listening to. This approach also made it difficult to fine-tune the various parameters in the code to get it to sound like what I wanted. I eventually decided to train it on Kaori Kobayashi’s 7-minute live performance of Nothing’s Ever Gonna Change My Love for You, one of my all-time favorite sax recordings. This helped me figure out what I wanted to do a whole lot faster, because I realized what I really wanted was for the AI to produce full jazzy licks, motifs, or small but otherwise recognizable snippets of the song, as opposed to the random hits of noise I had been achieving until then.

This realization helped me refine the parameters toward a much longer HOP time and a larger extract-time multiplier, and I also chucked a 4-second delay into the dac so I could have some space between what I played and what the code “responded” with, which finally ended up being fuller segments of the song. With this, I added the code for sending OSC to Processing and overlaid Kaori’s video performance on my godawful iPhone recording of me in my workspace. More time, more inspiration, more everything?
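For anyone curious what those two tweaks look like in ChucK, here’s a minimal sketch, not the actual project code: a 4-second Delay between the mosaic’s output and the dac, plus an OscOut message to Processing. The source UGen, OSC address, and port number are all placeholder assumptions.

```chuck
// live input passes straight through to the output
adc => Gain dry => dac;

// the synthesized "response" goes through a 4-second delay line
// (SinOsc is just a stand-in for whatever the mosaic code plays through)
SinOsc snippet => Delay d => dac;
4::second => d.max;    // allocate the delay buffer
4::second => d.delay;  // set the actual delay time

// OSC sender, assuming Processing listens on port 6449
OscOut xmit;
xmit.dest( "localhost", 6449 );

// hypothetical message: tell Processing which segment is playing
xmit.start( "/mosaic/segment" );
xmit.add( 0 );  // e.g. an index into the training clip
xmit.send();
```

The delay is what creates the call-and-response feel: the live playing arrives dry and immediately, while the mosaic’s answer trails it by four seconds.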
I would love to have figured out how to make the segments line up with more discrete delineations of her phrases, as well as train on more of the song. (I used a version that I cut heavily in Audacity to get rid of much of the intermittent applause, which really screwed up the resulting sounds, but this also messed up the timestamps of the video…) All in all, this was very fun, but also very cursed.

-Antonio

All ChucK Code

https://drive.google.com/file/d/1RRWZjvS-kIaQXy5luXi9nvtTM_kg9TVb/view?usp=sharing

Acknowledgements