Sparklesound 0.8

Link

I’ve updated the Sparklesound patch to work with delay lines instead of sound grains. This cleans up both the interaction (which is much more straightforward, presenting a set of persistent on/off options instead of triggering semi-random events) and the sound.

I’m pleased with the interaction, but I still need a way to alter the delay lengths dynamically: currently they are fixed at 125 ms for each step to the right. This will have to be done externally, because I’m committed to a modeless interaction on the monome itself. Also to be implemented are quick toggles for each delay step, e.g. 1 connects all the filter banks to the current adc input, 2 to the 125 ms delay, and so on.
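Just to make the structure concrete, here is a rough ChucK sketch of the fixed delay taps; the names and routing are my own stand-ins rather than the actual sparklesound internals, but the 125-ms-per-step idea is the same:

// dry input is step 0; each step to the right adds another 125 ms of delay
adc => Gain dry => dac;
0.0 => dry.gain;                      // off until toggled

7 => int steps;                       // delayed steps 1..7
125::ms => dur stepTime;

DelayL tap[steps];
Gain tapOut[steps];

for( 0 => int i; i < steps; i++ )
{
    adc => tap[i] => tapOut[i] => dac;
    1::second => tap[i].max;          // headroom for 7 * 125 ms
    (i + 1) * stepTime => tap[i].delay;
    0.0 => tapOut[i].gain;            // each tap starts muted; a key press would toggle it
}

// keep the shred running
while( true ) 1::second => now;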

This video makes use of a classic Max Fleischer cartoon featuring Betty Boop (which has lapsed into the public domain and is available here at archive.org) and, once again, the Cab Calloway classic Minnie the Moocher.

Get your up-to-date copy of the sparklesound ChucK script here!

Monome Sparklesound

Link

This is a demo for a monome 40h/64 patch I’ve been working on in ChucK. It’s a tool for granularizing sound coming in through your computer’s soundcard.

Basically, there is a set of eight filter banks, one for each row of monome keys. If you don’t touch anything, the sound just plays through without much modification. If you do hit some keys, however, the live sound is turned off and slices of delayed sound are played through that row’s filter bank instead. You can also mix in random grains.
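For the curious, here is a simplified ChucK sketch of that row-by-row routing; the filter settings and names are assumptions on my part, not the patch’s actual values:

8 => int rows;

Gain live[rows];
DelayL delayed[rows];
ResonZ bank[rows];

for( 0 => int r; r < rows; r++ )
{
    // two possible sources into this row's filter bank
    adc => live[r] => bank[r] => dac;
    adc => delayed[r] => bank[r];

    1::second => delayed[r].max;
    125::ms => delayed[r].delay;

    // give each row its own band (made-up spacing)
    200.0 * (r + 1) => bank[r].freq;
    20.0 => bank[r].Q;

    // default: live sound passes through, delayed sound muted
    1.0 => live[r].gain;
    0.0 => delayed[r].gain;
}

// a key press on row r would just swap the two gains:
// 0.0 => live[r].gain; 1.0 => delayed[r].gain;

while( true ) 1::second => now;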

The cool thing about it (I think) is that it’s a modeless interaction — that means a button press will always do the same thing. I wanted to make a monome app that was simple and intuitive, and didn’t have a row of mode-changing keys taking up an eighth of the surface. Obviously, you give up a lot of flexibility in favor of simplicity by doing this, but I’m pleased with the results.
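The monome side of that modeless scheme boils down to a single OSC handler. This is a bare-bones sketch, with the port and address pattern as placeholders for whatever MonomeSerial is configured to send:

OscRecv recv;
8000 => recv.port;                    // assumed port; set to match MonomeSerial
recv.listen();

// presses arrive as x, y, state (the prefix depends on your MonomeSerial setup)
recv.event( "/40h/press, i i i" ) @=> OscEvent press;

while( true )
{
    press => now;
    while( press.nextMsg() )
    {
        press.getInt() => int x;
        press.getInt() => int y;
        press.getInt() => int state;

        // modeless: key (x, y) always means the same thing,
        // e.g. "toggle step x on row y's filter bank"
        if( state == 1 ) <<< "toggle step", x, "on row", y >>>;
    }
}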

The captions are really fast in the video, though, so feel free to pause. Also, there is a nasty clicking problem which I’m going to work on in the next version.

Here is a link to the latest version.

Road Movie / Ocular Harpsichord

Link

For this project, I used some video that Brett and I shot when we were driving out to California several years ago. I wrote the music to go along with it, and the process seemed to fix the memories in a new way.

Most of the audio was composed beforehand, but the video was performed live using a patch written in Puredata and Gem.

This is really documentation of an ephemeral project, as opposed to a document in itself. The project culminated in a performance where I mixed the video live using the MIDI keyboard seen in the opening frames; I also hooked my rig up to a small television and made viewers crowd around it instead of using a larger projection screen. As you can imagine, the result was quite a different experience from the one you see here.

Video -> Audio

Link

For a class assignment on synesthesia, I decided to build something to turn a video input into sound. This was in some ways the culmination of several interactions I created with the clear purpose of being counter-intuitive. In this case, the video camera provides the material to be translated into sound, but it’s set up such that only moving sections of video will result in nonzero samples: the video output being scanned is seen on the television in the background. At the same time, a wii remote is used to control the rate at which the pixels are scanned, which adds some gross control of pitch. The two simultaneous interactions are sometimes at odds with each other.
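The actual piece is a Max patch (linked below), but the core idea can be sketched in a few lines of ChucK; the array of frame differences and the scan-hold time here are hypothetical stand-ins for the video analysis and the wiimote-controlled rate:

// scan a row of per-pixel frame differences as if it were a crude wavetable;
// pixels that aren't moving stay at 0.0 and so produce silence
64 => int width;
float pixelDiff[width];               // would be filled from the video analysis

Step out => dac;

2::samp => dur scanHold;              // how long each pixel is held;
                                      // the wiimote would raise or lower this

0 => int i;
while( true )
{
    pixelDiff[i] => out.next;
    (i + 1) % width => i;
    scanHold => now;
}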

I set this up for my classmates and let them try it out, which added the element of physical performance. Afterwards, my professor asked me, “Where is the piece?” or, in other words, where should we be looking? At the screen or at the person gesticulating wildly? I didn’t have an answer; later I realized that I was pleased with the ambiguity.

Here is the Max Patch

and here is my Osculator Conf

Anymore

Link

This piece is a simple music video, as opposed to documentation of a performance; there’s not really any technical wizardry here, but I am really pleased with the way it came out, especially the song. The recording isn’t very clean, but my vocal happened quickly and was really heartfelt. The assignment was called “Voice, Word, Glyph,” which should explain the long passage at the end where I’m writing…but it was also what got me thinking about memory and my mother.

Enjoy, though this one is a bit sad.

Swing Set: Musical Controllers with Inherent Dynamics

Link

This video was created as documentation of a final project for a class on human-computer interaction. It gives a fairly clear idea of the interactions and hints at some of the musical material that Jeff, one of my collaborators and a fantastic DJ, was able to make with it. Jeff’s ability to actually use the pendulums to make compelling music was why I felt this project was a success.

Laptop feedback instrument

Link

This is a piece that I did at the end of my first semester at CCRMA. While it may seem a bit goofy to some, once the elements were there, it all seemed quite obvious to me.

This video is about five minutes of me controlling a feedback loop with the tilt sensor in an Apple laptop; I’ll make the code available as well, here. It’s written in ChucK, a free programming language designed for audio. ChucK is a bit buggy, but it can do a lot, including synthesis, wave file manipulation, and interaction with other software and hardware via OSC or MIDI.
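Until the code is up, here is a minimal ChucK sketch of the idea, assuming ChucK’s built-in tilt-sensor Hid support (as in the tilt.ck example that ships with it); the signal chain and mappings are simplified stand-ins, not the exact patch in the video:

// the electronic half of the loop: mic in, band-pass filter, gain, speakers out;
// the laptop's own speakers and mic close the loop acoustically
adc => BPF f => Gain g => dac;
1000.0 => f.freq;
4.0 => f.Q;
0.3 => g.gain;

// open the Sudden Motion Sensor
Hid hi;
HidMsg msg;
if( !hi.openTiltSensor() )
{
    <<< "tilt sensor unavailable", "" >>>;
    me.exit();
}

while( true )
{
    // poll the tilt sensor (raw x/y/z integers, as in ChucK's tilt.ck example)
    hi.read( 9, 0, msg );
    msg.x => float tiltX;
    msg.y => float tiltY;

    // made-up mappings: one axis sweeps the filter, the other sets how hot the loop runs
    200.0 + Math.fabs( tiltX ) * 20.0 => f.freq;
    Math.min( 0.9, Math.fabs( tiltY ) / 64.0 ) => g.gain;

    50::ms => now;
}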

I know I should have smashed the laptop or set it on fire at the end, but I’m saving that for my first stadium show.