Dispatches on Max Patches

Updates from Paul, as he works on the Max Patch for our window installation project, currently in R&D phase.

This is an image of the main patch in its larval form. It generates a filename with a time and date stamp and records audio to it with adjustable amplification. The next step is to make it upload the file, generate the tweet, and then post the tweet. Then cleanup and putting a nice presentation face on it. :)

Just put this subpatch together. It will be the patch that generates the filename for the audio to be uploaded. I'm going to feed the audio through a filter subpatch (right now it's just a straight feed-through), then use sfrecord~ to write it to a file on disk using the filename generated by this subpatch.
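For anyone curious what the filename subpatch boils down to, here's the same idea sketched outside of Max in Python. The "recording" prefix, the underscore separator, and the .aif extension are my assumptions for illustration, not necessarily what the patch outputs:

```python
from datetime import datetime

def timestamped_filename(prefix="recording", ext="aif"):
    """Build a filename stamped with the current date and time,
    e.g. recording_20120301_143015.aif"""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{prefix}_{stamp}.{ext}"

name = timestamped_filename()
```

Because the stamp resolves to the second, each recording session gets a distinct file, which is exactly what you want before handing the name off to sfrecord~.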

iPhone/mobile app dev diagramming

It’s been a long time coming, but I’m getting a lot closer to completing a mobile app. I’ve abandoned Objective-C and native app development in favour of what I actually know how to do — namely PHP, HTML, CSS, and a bit of JavaScript.

Access to the phone hardware this app needs will be accomplished through PhoneGap.

I’m aiming to wrap up a working version of this in the next 10 days or so, barring any major issues I could very likely run into. I’m anxious to share this!!!

Scratch Markup Language (.sml)



From FAT (Free Art & Technology):

SML (Scratch Markup Language) is a new file format for recording and replaying turntablism. We’ve developed open-source tools for accurately capturing the record and crossfader movements of a scratch DJ, allowing us to analyze, transcribe, and recreate scratch performances.

We want to do for turntablism what Graffiti Markup Language has done for tagging — especially teaching giant robot arms how to scratch.

At Art Hack Day we collaborated with other artists and programmers to develop the first prototypes of ScratchML. We used timecode vinyl to capture record movements ($10) and a hacked VCA fader + Arduino to record the crossfader ($30).

Scratch data was saved to disk as .sml and broadcast as OSC, which allowed other Art Hack Day participants to build visualizations based on what the DJ was scratching during the exhibition. The apps ranged from spinning-vinyl animations and TTM transcriptions to insane exploding 3D pizzas and a side-scrolling videogame shooter controlled by scratches.
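For the curious, an OSC message like the ones broadcast above is just a null-padded address string, a type-tag string, and big-endian arguments. Here's a minimal Python sketch of packing a single-float message by hand; the `/scratchml/fader` address is my own placeholder, not the project's actual spec:

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Pack an OSC message carrying one float32 argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are NUL-terminated, then padded to a 4-byte boundary
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/scratchml/fader", 0.5)
```

In practice you'd hand this packet to a UDP socket; the visualization apps on the other end just parse the address to decide which scratch parameter changed.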

Our goal is to make capturing, replaying, and sharing a scratch performance accurate and easy. SML files can be freely uploaded and downloaded from the ScratchML.com database. We’re particularly looking forward to improving the experience of learning how to scratch — e.g. by building apps that show you just how accurate your autobahn scratches actually are.

Throughout the week here on FAT we’ll be publishing ScratchML projects created during Art Hack Day, data specs, source code, hardware modification details and more.

Want to get involved? Join the ScratchML mailing list, follow us on GitHub, or email me with any questions. More info to come at scratchML.com.

Not sure what else to add.

I’m pretty sure that this is where all digital culture schools, programs, and practices will be heading — thinking about how to encapsulate data that we might normally take for granted, creating solutions very quickly and inexpensively, making it insanely fun, opening it all up for the world to use, and fostering big imaginations.

Sergio Albiac’s ‘Content is Queen’

Content is Queen is a new generative video painting by Sergio Albiac. Using computer code, he modifies the tools a painter would normally use and creates dynamic “paintings” from found video. If you’re having a hard time discerning what the image is, take a few steps back. Now the title makes sense!

Sergio states, “My technique uses regions of video content to effectively represent or ‘paint’ heterogeneous regions of the image. Both the partial content of the videos and the whole image are fully visible at the same time, widening the possibilities to deliver meaning in a contemporary aesthetic language.”
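One way to picture the idea, purely as an illustration (Albiac's actual software isn't public, and this grossly simplifies it): treat the target image and each candidate video frame as grayscale tiles, and for every tile pick the frame whose average brightness comes closest.

```python
def mean_brightness(tile):
    """Average of a flat list of grayscale pixel values (0-255)."""
    return sum(tile) / len(tile)

def paint_with_clips(target_tiles, clip_frames):
    """For each tile of the target image, pick the clip frame whose
    average brightness is nearest -- a crude 'video mosaic'."""
    return [min(clip_frames, key=lambda f: abs(mean_brightness(f) - mean_brightness(t)))
            for t in target_tiles]

# A dark tile matches the darker frame; a bright tile, the brighter one.
dark_frame, bright_frame = [10, 20, 30], [200, 210, 220]
result = paint_with_clips([[0, 5, 10], [255, 250, 245]], [dark_frame, bright_frame])
```

Step back from such a mosaic and the tile boundaries blur into the larger image, which is exactly the "take a few steps back" effect described above.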

Via: Today and Tomorrow

Interface for our Text Projection Tool


A few hours before we were set to do the 100 Ways to Save the City project, we decided we wanted to make it interactive in some way. I had gone ahead and put all of our ideas on how we might suggest saving the city into a nice Keynote presentation that we could easily play and project, but it really limited what the projection could be.

When it came down to actually figuring out how exactly to do this though, we were a bit unsure. There was nothing that I could think of that would do this fairly simple thing we wanted: input controls for basically just text on the laptop screen, and then displaying the resulting text on the projector. So, I went searching through old project files from Quartz Composer, Processing, and Max/MSP/Jitter.

It’s been a while since I’ve worked in any of those programs, and so I was a bit rusty. I knew that I had seen something like this before, and it seemed to me that somewhere I had already hacked together the exact thing we needed. I found the Max patch that detected the dominant colour in a video signal and then overlaid the word on the video (for example, Red), dynamically resizing the text depending on the intensity of that colour. That seemed promising, but it ultimately didn’t have any manual input.
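As a rough illustration of what that old patch was doing (hypothetical Python, not the actual Jitter objects): sum each colour channel across the frame's pixels, take the largest as the dominant colour, and use its normalised intensity to scale the overlaid text.

```python
def dominant_colour(pixels):
    """pixels: list of (r, g, b) tuples, 0-255 per channel.
    Returns the dominant channel's name and its normalised
    intensity (0-1), which could drive the overlay font size."""
    totals = [sum(p[i] for p in pixels) for i in range(3)]
    names = ["Red", "Green", "Blue"]
    i = max(range(3), key=totals.__getitem__)
    return names[i], totals[i] / (255 * len(pixels))

name, intensity = dominant_colour([(200, 10, 10), (180, 20, 30)])
```

A mostly red frame would overlay a large "Red"; a washed-out frame, a small one.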

Finally, I found what I was looking for. It was based on a tutorial on Cycling74's website, meant for dynamic subtitling or something like that. I downloaded the tutorial, changed what I needed, and it worked for our performance. Since then, I've cleaned it up, got rid of the live video part we didn't need, and simplified the functionality. This was probably the first time that I was in a situation that proved Max/MSP/Jitter's strengths: quick prototyping, troubleshooting, and finessing that can quickly lead to performance. If you have Max 5, you can download the patch; I'm not sure if it works with 4.6.

This might come in handy this week, depending on what we take on in Peterborough.