Dispatches on Max Patches

Updates from Paul, as he works on the Max Patch for our window installation project, currently in R&D phase.

This is an image of the main patch in its larval form. It generates a filename with a time and date stamp and records audio to it with adjustable amplification. The next step is to make it upload the file, generate the tweet, and then post it. Then comes cleanup and putting a nice presentation face on it :)
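The adjustable amplification is, at heart, just a gain multiply on the incoming samples, the same thing a `*~` object does in a Max signal chain. A minimal sketch of the idea in Python (the clipping range and gain value here are illustrative assumptions, not what the patch actually uses):

```python
def amplify(samples, gain):
    """Scale each sample by the gain, clipping to the [-1.0, 1.0] range."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

# A quiet signal boosted by 2x; the loudest sample clips at 1.0
print(amplify([0.1, -0.25, 0.6], 2.0))  # [0.2, -0.5, 1.0]
```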

Just put this subpatch together. It will be the patch that generates the filename for the audio to be uploaded. I'm going to feed the audio through a filter subpatch (right now it's just a straight feed-through), then use sfrecord~ to write it to a file on disk using the filename generated by this subpatch.
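Outside of Max, the date-stamped filename idea is only a few lines; a sketch in Python of the same logic the subpatch implements (the `recording` prefix, `.aif` extension, and stamp format are made-up assumptions for illustration):

```python
from datetime import datetime

def timestamped_filename(prefix="recording", ext="aif"):
    """Build a unique filename like recording_2010-03-14_15-09-26.aif,
    so each take written by sfrecord~ gets its own file on disk."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return f"{prefix}_{stamp}.{ext}"

print(timestamped_filename())
```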

New LCDs and Serial Ports and some Max/MSP for good measure

I haven’t posted on this project for a little while, partially because of the preparation for the ongoing Save the City project, and partially because the little time I’ve had to work on this has only resulted in small increments. So I figured I would wait until I had some more significant updates before posting, and here they are.

Basically, I’ve been working on a couple of parts of the project. I’ve been updating a Max/MSP/Jitter project that BCL had previously used for our projection performances, trying to automate some of the scaling of text depending on the input, while also continuing with the Arduino and LCD integration.
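The text-scaling problem amounts to picking the largest font size at which a given string still fits the projection width. A rough sketch of that idea in Python (the pixel width, size limits, and glyph-width estimate are all made-up assumptions, not real font metrics from the patch):

```python
def fit_font_size(text, max_width_px, max_size=120, min_size=12, char_aspect=0.6):
    """Pick the largest font size whose estimated rendered width fits.

    Assumes the average glyph is char_aspect * font_size pixels wide --
    a crude monospace-style estimate, not actual font metrics.
    """
    if not text:
        return max_size
    size = int(max_width_px / (len(text) * char_aspect))
    return max(min_size, min(max_size, size))

# A short phrase gets big type; a long suggestion shrinks to fit
print(fit_font_size("SAVE THE CITY", 800))
print(fit_font_size("a much longer suggestion for saving the city", 800))
```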


Interface for our Text Projection Tool

A few hours before we were set to do the 100 Ways to Save the City project, we decided we wanted to make it interactive in some way. I had gone ahead and put all of our ideas on how we might suggest saving the city into a nice Keynote presentation that we could easily play and project, but it really limited what the projection could be.

When it came down to actually figuring out how to do this, though, we were a bit unsure. There was nothing I could think of that would do the fairly simple thing we wanted: input controls for text on the laptop screen, with the resulting text displayed on the projector. So, I went searching through old project files from Quartz Composer, Processing, and Max/MSP/Jitter.

It’s been a while since I’ve worked in any of those programs, so I was a bit rusty. I knew that I had seen something like this before, and it seemed to me that somewhere I had already hacked together the exact thing we needed. I found the Max patch that detected the dominant colour in a video signal and then overlaid that word on the video (for example, Red), dynamically resizing the text depending on the intensity of that colour. That seemed hopeful, but ultimately it didn’t have any manual input.
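The dominant-colour trick is simple in principle: average the frame's pixels and see which channel wins, then use that channel's mean intensity to drive the text size. A toy version in Python on a plain list of RGB tuples (a patch like that one would do this on a Jitter matrix; the sample frame below is invented for illustration):

```python
def dominant_colour(pixels):
    """Return the name and mean intensity (0-255) of the strongest
    channel across a frame given as a list of (r, g, b) tuples."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    names = ("red", "green", "blue")
    idx = max(range(3), key=lambda c: means[c])
    return names[idx], means[idx]

# A mostly-red "frame" of three pixels
frame = [(200, 40, 30), (180, 60, 50), (220, 20, 10)]
print(dominant_colour(frame))  # ('red', 200.0)
```

The returned intensity is what would then be mapped to a font size, so a saturated frame shouts its colour name and a washed-out one whispers it.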

Finally, I found what I was looking for. It was based on a tutorial on Cycling ’74’s website, meant for dynamic subtitling or something like that. I downloaded the tutorial, changed what I needed, and it worked for our performance. Since then, I’ve cleaned it up, removed the live video part we didn’t need, and simplified the functionality. This was probably the first time I was in a situation that proved Max/MSP/Jitter’s strengths: quick prototyping, troubleshooting, and finessing that can quickly lead to performance. If you have Max 5, you can download the patch; I’m not sure if it works with 4.6.

This might come in handy this week, depending on what we take on in Peterborough.