I’m trying out a new experiment here: a series of blog posts on weekend hacks and projects that I’m calling ‘voiding the warranty’. The unifying theme is to use things in some way other than their intended purpose.
I’ve always loved tinkering. From childhood on, I’ve been that kid who loves to take apart the VCR or the cordless phone (on a good day, I could even put them back together again). And so I’m really interested in ways in which we can repurpose existing technology to do new and creative things - things it wasn’t necessarily designed to do, but that are fun and inspiring.
But it’s always been frustrating to take stuff apart. Increasingly, technology isn’t designed for us to look under the hood (and certainly not to fiddle with anything there). Instead, it’s become a black box whose insides make sense only to the most über of über-techies. As consumers, when we own a black box, we’re letting other people design our world for us.
Nonetheless, there is hope. There’s a growing movement of people who are trying to take technology back, and shrink the learning curve for building stuff. It’s often called the maker movement, or maker culture. I think that this movement is really important because it’s empowering - it lets you tinker with things once again, to learn, and to adapt and build things. And you don’t need to be an electrical engineer to take part - it’s open to anyone who wants to learn how things tick. There are tools available, like Arduino, Processing, Makey Makey, or Raspberry Pi, and tutorials and starter kits available from SparkFun, Sylvia’s super-awesome Maker Show, Adafruit, Make, and dozens of other places, that make it easier than ever for us to make stuff. Technology doesn’t have to be mysterious, it can be a tool to explore and a way to learn. And tinkering can be an immensely enjoyable and fruitful process.
So with that in mind, let’s get our hands dirty.
A week ago, I bought a Kinect Sensor ($99 on Amazon, although you can find it cheaper used. If you’re buying one, get the version for Xbox, not Windows, and check that the power adapter is included). It’s a sensor that allows your computer to see where you are. Unlike webcams, which provide only images (notoriously difficult for computers to understand), the Kinect uses an infrared projector and camera to capture depth information. It measures the distance to every point in the room within the sensor’s range. It’s a bit like a 3D scanner, and can even detect people and gestures.
If you just want to play with the Kinect but don’t want to get into all this coding stuff, plug it in, get Synapse (Mac only), and you’ll see a depth map of your room. This is an image where the brightness of each pixel represents how close that point is to the camera. Looking at this is kind of like stepping into the future, because for the first time your computer can see you, as an object with a wire-frame skeleton, as distinct from your chair, lamp, or table. It can track you as you move around, and it’s just freaking cool to use your body to control your on-screen avatar. (It even works if you turn out the lights.)
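The depth-to-brightness idea behind a depth map is easy to sketch. Here’s a minimal version in plain Java (Processing is built on Java, so the logic carries over directly); the working range of 0.5–4 meters is my assumption about a typical Kinect setup, not something Synapse specifies:

```java
public class DepthMap {
    // Assumed usable range of the sensor, in millimeters (my assumption).
    static final int MIN_MM = 500;
    static final int MAX_MM = 4000;

    // Map a raw depth reading to a grayscale value: nearer = brighter.
    static int depthToBrightness(int depthMm) {
        if (depthMm <= 0) return 0; // the Kinect reports 0 when it has no reading
        int clamped = Math.max(MIN_MM, Math.min(MAX_MM, depthMm));
        // Linear map: MIN_MM -> 255 (close, bright), MAX_MM -> 0 (far, dark)
        return 255 * (MAX_MM - clamped) / (MAX_MM - MIN_MM);
    }

    public static void main(String[] args) {
        System.out.println(depthToBrightness(500));  // closest in range: 255
        System.out.println(depthToBrightness(4000)); // farthest in range: 0
        System.out.println(depthToBrightness(0));    // no reading: 0
    }
}
```

In a real Processing sketch you’d run this mapping over every pixel of the depth image each frame; the math per pixel is exactly this.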
In this post, I’m accessing the Kinect through Processing, a versatile programming language built on Java that’s used by many artists and designers. The first step was to get Processing, and the second step was to get Simple-OpenNI, a Processing library that allows it to interface with the Kinect.
Happily, this library comes with a bunch of really great examples: open one up in Processing, hit play, and you’re up and running with the Kinect! (Once you restart Processing with this library installed, you should find these examples under File > Examples > Contributed Libraries.)
In particular, one of the programs (called User3D) will display a point cloud of everything that the Kinect sees. If it recognizes that there’s a person in the room, it’ll color them differently (this works for multiple people too). The cool thing about this point cloud is that it’s really in 3D - you can use your keyboard arrow keys to change the camera angle, and look at yourself from the side, or above your head, or below your feet. This is possible because unlike a webcam, the Kinect knows where things are in 3D. What’s more, the Kinect assigns a wire-frame skeleton to each person - including joints and limbs, so it knows where your head or hand or foot or torso is. Here’s what this looks like when I strike a Frankenstein pose.
Sweet. So I went in and made a few changes to the code.
I edited the code to display only the people in the scene, and not the background. This was doable because there’s a handy array called userMap with one entry per pixel on screen: the entry is 0 if the pixel is part of the background, 1 if it’s part of the first user, 2 for the second user, and so on. So all I had to do was write a line saying not to draw anything when userMap[pixel] is 0.
I set the camera to automatically swivel to and fro (from +90 to -90 degrees).
I put in some extra colors to cycle through, and made a small edit to the code so that it changes the color every 100 frames.
I got rid of the lines of code that displayed the skeleton or other shapes on screen.
I lowered the resolution a bit (plotting one in every 3 points) so that there was no lag. You can play with this value to get something that looks nice and runs smoothly.
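Stripped of the library calls, the bookkeeping behind these tweaks is simple. Here’s a sketch of the core logic in plain Java - the userMap convention is Simple-OpenNI’s, but the function names, the palette size, and the sweep length are mine, chosen just for illustration:

```java
import java.util.Arrays;

public class PointCloudTweaks {
    // Background masking: keep only pixels whose userMap entry is nonzero.
    // In the real sketch, userMap comes from Simple-OpenNI, one entry per depth pixel.
    static boolean[] userMask(int[] userMap) {
        boolean[] draw = new boolean[userMap.length];
        for (int i = 0; i < userMap.length; i++) {
            draw[i] = userMap[i] != 0; // 0 = background, 1+ = a tracked user
        }
        return draw;
    }

    // Camera swivel: a triangle wave sweeping from -90 to +90 degrees and back,
    // taking framesPerSweep frames for each half of the cycle.
    static float swivelAngle(int frame, int framesPerSweep) {
        int phase = frame % (2 * framesPerSweep);
        float t = phase < framesPerSweep
                ? (float) phase / framesPerSweep                         // 0 -> 1
                : (float) (2 * framesPerSweep - phase) / framesPerSweep; // 1 -> 0
        return -90 + 180 * t;
    }

    // Color cycling: advance to the next palette entry every 100 frames.
    static int colorIndex(int frame, int paletteSize) {
        return (frame / 100) % paletteSize;
    }

    public static void main(String[] args) {
        int[] userMap = {0, 0, 1, 1, 2, 0};
        System.out.println(Arrays.toString(userMask(userMap)));
        System.out.println(swivelAngle(0, 60));  // start of a sweep: -90.0
        System.out.println(colorIndex(250, 5));  // 250 frames in: palette entry 2
    }
}
```

The resolution tweak isn’t shown because it’s a one-character change: in the loop that draws the point cloud, step the index by 3 instead of 1 so that only every third point is plotted.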
The result of these tweaks was really fun, like something from a tripped-out disco. I would totally try this out the next time I host a dance party.
Here’s Pharrell Williams’ Happy to go with the gifs below. If you don’t play that song, the next few gifs are going to look really silly. You’ve been warned. (In fact, all blog posts are 100% better with this song playing in the background).
That’s all for now. Happy grooving! Here are some great resources to get you started with learning Processing and Kinect.
Making Things See by Greg Borenstein. This is really the best and most readable introduction there is to Kinect hacking with Processing.
Learning Processing by Daniel Shiffman. A nice, readable introduction to Processing. If you’ve never programmed before, this is a great place to start.
And here’s the rest of our silly dance video in which I recklessly flail my limbs around for SCIENCE. The copyright gods wouldn’t let us use the Pharrell Williams track, so the audio is some other song instead. It was 100% cooler with the original song, though. Trust me.
Here is my modified code (original by Max Rheiner). If you do something cool with it, or if this demo inspires some ideas, I would love to hear from you.