Finger Circle Snakes ft Sandesh

Brought to you by HandposeOSC from #cools-tools, sending finger position information (63 “landmarks”) to Wekinator, which is trained to classify open vs. closed hand positions and to regress on the X and Y position of the circle snake’s head.

https://youtu.be/Y-8lO85cwRE
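For a sense of what’s moving over the wire: by default, Wekinator listens on port 6448 for an OSC message at /wek/inputs whose arguments are all the input features as floats. Here’s a minimal Processing sketch (using the oscP5 library) that sends 63 stand-in values in that shape - the real landmark values come from HandposeOSC, so the random numbers here are just placeholders.

```java
// Minimal sketch of the message shape Wekinator expects: one OSC
// message per frame, with every input feature added as a float.
// Assumes Wekinator's defaults (port 6448, address /wek/inputs).
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress wekinator;

void setup() {
  oscP5 = new OscP5(this, 12000);                 // local listening port (unused here)
  wekinator = new NetAddress("127.0.0.1", 6448);  // Wekinator's default input port
}

void draw() {
  OscMessage msg = new OscMessage("/wek/inputs");
  for (int i = 0; i < 63; i++) {
    msg.add(random(0, 1));  // placeholder for 21 hand landmarks x (x, y, z)
  }
  oscP5.send(msg, wekinator);
}
```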

Joystick Jingles ft Kaylin

Joysticks stuck into Play-Doh that’s stuck into a concrete block, helping to play along to “jazzy” ChucK chords generated in part with ChatGPT’s ‘help’.

https://youtu.be/l5E1xwme_Os

Table-Taar Beep Boop ft Ayisha

I got my friend Ayisha to try out the first experiment with this new instrument!

https://youtu.be/tWB1d3QmvGQ

I routed outputs from an Arduino Mega over serial into Processing and then out as OSC into Wekinator. Using just the basic three-parameter output synth program provided in the examples on the Wekinator page, I whipped up an experiment where I connected a linear force-sensitive resistor (FSR) to the Arduino, sending position data as analog signals to Wekinator. Because I can wire the FSR to essentially act as two potentiometers with their upper resistance limits at either end of the strip, I can send two distinct numbers as data into Wekinator.
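Here’s a minimal sketch of that Processing-side routing, under a few assumptions I should flag: the Arduino prints the two FSR readings as one comma-separated line (e.g. “512,873”), Wekinator is listening with its defaults (port 6448, address /wek/inputs), and the serial port index and baud rate match your setup.

```java
// Serial -> OSC bridge: read two comma-separated analog values from
// the Arduino and forward them to Wekinator as two input features.
import processing.serial.*;
import oscP5.*;
import netP5.*;

Serial arduino;
OscP5 oscP5;
NetAddress wekinator;

void setup() {
  arduino = new Serial(this, Serial.list()[0], 9600); // pick your Mega's port
  oscP5 = new OscP5(this, 12000);
  wekinator = new NetAddress("127.0.0.1", 6448);      // Wekinator's default input port
}

void draw() {
  String line = arduino.readStringUntil('\n');
  if (line == null) return;                 // no complete line yet
  String[] parts = split(trim(line), ',');
  if (parts.length != 2) return;            // skip malformed lines

  // Normalize the 10-bit analog readings to 0..1 before sending.
  OscMessage msg = new OscMessage("/wek/inputs");
  msg.add(float(parts[0]) / 1023.0f);
  msg.add(float(parts[1]) / 1023.0f);
  oscP5.send(msg, wekinator);
}
```

The normalization isn’t required - Wekinator will take raw values - but scaling both readings to 0..1 keeps the two inputs on a predictable, comparable range while training.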

(Photo: IMG_2510.JPG)

Reflection:

I’ve been building things for a long time - doodads, goobers, knick-knacks, and full-blown machines and devices. Music has been embedded in a large chunk of those things, and this etude felt like a cool opportunity to bring a little bit of AI (which I’m much less experienced with deploying) into my tinkering. My biggest takeaway is that it’s hard, but not for the reasons I would have thought.

I’m realizing that when embedding AI into an interactive, creative system, it matters a lot to be realistic about what AI is good for and where I’m better off mapping something out myself. I had a bunch of grand visions for ways to use AI to make a system super interactive and expressive, but in reality so much of my time was spent fine-tuning, trying to get systems to understand each other, and trying to understand the behavior of the thing I built. I often felt like, with a bit of hard-coding, I could get the instruments or interactions to feel smoother or more expressive in the way I wanted without using Wekinator for regression or running a neural net. At the same time, seeing what everyone put together in class made me realize that yes, you can always fine-tune a bit more, and yes, sometimes you have tons of input data that’s easier to hand off to a model than to map by hand - but also that I can and should branch out of my own understanding of inputs, outputs, and data, and have fun finding avenues where some sort of AI or ML system can achieve a type of interaction I would have a lot of trouble fabricating on my own… more thoughts and experiments still to come for me…