Lab 1 Proposal

My aim for the first lab is to create a piece of software that can track a subject and follow it as it moves. Computer graphics will be created to interact with the subject, and graphical lines will be drawn between multiple subjects when they enter the visual space recognised by the Kinect.

I will use an application that works with the Kinect to track people and objects. The application will send OSC signals to Isadora, which will be used to design the digital aesthetic for the lines joining objects in the digital space.

Graphics will be designed in Photoshop and After Effects to be used as visual signifiers that appear in the digital space. The final image produced in Isadora will be projected onto the space that is being recorded.

This work explores the themes of hyperconnectivity and individuality – how people can be connected to objects and other people in a digital environment. It asks: is digital technology bringing us closer together, or pushing us physically further apart?

I will require:

  • A computer with Isadora
  • An HD Projector
  • Mini DisplayPort to HDMI adapter
  • An HDMI cable at least 1 metre long
  • 3–4 m of clear space in front of the Kinect sensor

My objectives for this lab are:

  • To recognise the body and objects in a digital space in order to visualise connections between people and objects.
  • To use digital technology to highlight the space and the body, and how people and objects can be brought together and apart.
  • To become more familiar with Isadora and the Kinect in order to help me design my own choreographed dance piece.



Connected – An interactive installation based on hyperconnectivity and individuality.

Connected was an installation piece that I created for my first lab. The installation was set up in an empty space where people were invited to enter and interact with a video projection on a wall. The visuals resembled an iPhone home screen with applications people could touch which would respond back to them. The concept was based around the themes of hyperconnectivity and individuality.

The technologies used to track people were a Kinect sensor and a computer running Isadora and Processing.

This report covers my process for the lab, my overall aims and objectives, and a conclusion with my thoughts about the lab and the feedback I received.


At the start of the week I had an Isadora workshop, which proved to be very useful for the lab. I knew I would have to start my lab the next day, but I prepared in advance at home on the Sunday and Monday evenings, testing software and hardware. I also found out that I would have the lab space to myself. At the time I didn’t know whether this would be a problem or a good thing; ultimately it caused me some issues, as I needed people around me to test my work.

Once I was set up in the space I began exploring ways in which I could use the Kinect in Isadora. At the time I only knew one method: taking numeric tracking data from the Kinect in an external program such as Processing and sending it via OSC to communicate with Isadora. The question was what numeric data I would use and how it would be used. My notes from the experimentation can be found in the appendix.
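Under the hood, each OSC message is a small UDP packet: a null-padded address string, a type-tag string, then big-endian arguments. A minimal Python sketch of packing and sending a tracked user's centre of mass – the address `/user/1/com` and port 1234 are illustrative assumptions, since Isadora's listening port is configurable:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build an OSC message whose arguments are all float32."""
    packet = osc_pad(address.encode())
    packet += osc_pad(("," + "f" * len(floats)).encode())  # type tags, e.g. ",ff"
    for value in floats:
        packet += struct.pack(">f", value)  # big-endian float32
    return packet

# Hypothetical address for user 1's centre of mass (x, y):
msg = osc_message("/user/1/com", 0.42, 0.61)

# OSC travels over plain UDP; the port must match Isadora's OSC input setting.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 1234))
```

The same packing is what libraries such as oscP5 do inside Processing; writing it out by hand just makes the wire format visible.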

I used Isadora to create graphical lines that would draw points according to the X and Y centre of mass positions of a tracked user. These were a digital representation of connecting a person to an object. My work was intended to explore objects in a digital space and our interaction with them.
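The raw coordinates arriving from the Kinect don't match the projected canvas, so each value has to be clamped and rescaled – which is what Isadora's Limit-Scale Value actor does. A rough Python equivalent of that mapping (the input and output ranges here are illustrative):

```python
def limit_scale(value, in_min, in_max, out_min, out_max):
    """Clamp value to [in_min, in_max], then rescale it linearly
    to [out_min, out_max] - analogous to Isadora's Limit-Scale Value actor."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# e.g. a centre-of-mass x in a 0-640 camera space mapped onto a 1024-wide stage:
x_projected = limit_scale(320, 0, 640, 0, 1024)  # -> 512.0
```

Clamping first matters: without it, a user stepping outside the calibrated area would drag the drawn line off the canvas.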

Later in the week I had Renee and Jordan from the course come in to help me set up the Kinect tracking. This meant I was able to get two detected users working and draw a tracked line between them, highlighting the relationship between two people in a digital space. I took this opportunity to discuss ideas with them. The ideas that came out of this were:

  • More graphical lines between two people instead of one
  • Use sound in some form
  • Use colour, change colour as two people get closer

I took on board these ideas and used sound as well as multiple graphical lines. The combination of lines and sound was very effective and transformed the experience into a multi-sensory one.

I began creating animations of the digital objects (apps and icons) in After Effects. I chose four apps, all related to themes of connecting: Facebook, iMessage, Tinder and Skype. Each app was interactive and, once opened, took you to a new state that behaved differently depending on which app was chosen. Each one was intended to highlight a problem in connecting. For example:

  • Facebook: you were flooded with Candy Crush requests, highlighting the fact that notifications can be misleading. You may be expecting to hear from a friend or loved one and instead get spammed with silly game requests.
  • Skype: the dial-up tone started and the phone rang; after ringing for a while, the disconnect sound played. This highlighted the problem of online connections depending on services that can be unreliable.
  • Tinder: once opened, a big “Nope” sign followed you around, highlighting how we can be made to feel small over something so simple.
  • iMessage: a text message appeared which read “I know you read my message, why aren’t you replying????”, mimicking the issue of delivered and read receipts on a phone and the problems they can cause.

On the day of the open installation, I set up some posters around the college inviting people to see my work as well as posting online. I also asked some friends and people within the course. I set up the installation as an opportunity for people to walk in and interact as they wished without giving any instructions.


The week proved challenging, as I was working alone when I needed people as subjects to test my code. I had to switch between being a subject myself and programming, which was difficult. As a result I couldn’t get the code as polished as I would have liked, and overall this had an impact on the interactive experience of the installation itself.

Isadora proved to be a solid platform from which to develop my code and create an installation in four days, showing that it can be an acceptable solution for my practice. So far it has offered a reasonable balance between creative flexibility and speed of development. The four days weren’t enough to create a fully working installation, but they were enough to create an idea suitable to develop and build on, even as a prototype that could later be rebuilt through core programming.

The Kinect was unreliable: the tracking didn’t always work, so I had to create a manual override for my installation in case of problems, and several issues did force me to revert to the manual controls. Reliable 3D depth-sensing cameras are not affordable for projects of this nature. There are alternatives, however, such as different hardware – for example, a tactile sensor like a button – which is something I would consider next time in order to create professional and reliable work on a tight budget.

There was a small amount of feedback, but it was positive and constructive. My aim was to create an installation based on an idea and its meaning, and people said that the concept was clear. I intended to take a straightforward idea – the problems of communicating online – and develop it to be shown within an installation. At first people didn’t know how to interact with the installation and had to be instructed to touch the digital icons; a majority thought hovering their hands over the shadow of the objects would work. In future I would either give more consideration to the variety of ways people might interact, or provide some simple instructions for a more inviting experience.

Overall I felt that this lab was a step forward in taking my programming background and beginning to explore how it can be used within a creative and artistic field. I felt I reached my original objectives, including exploring a concept and developing an idea related to it.



Experimentation Notes

“First test – general depth threshold tracking.

Doesn’t require skeleton or hand tracking. Just tracks an average point at a certain depth

Pros: Accurate tracking all the time, will always find an average point within threshold.

Useful for object tracking as doesn’t require skeleton tracking

Cons: Only works with certain areas in depth so if subject moves up and down in depth from the camera the tracking will be lost if the threshold is crossed.


Second test – skeleton tracking

Use pre-programmed libraries to get skeleton data. Relies on library to work out that user is detected and then pull skeleton data from that user.

Pros: Works in depth and able to get information for skeleton joints (up to 14)

Cons: Tracking can randomly disappear or get lost and won’t “retrack” until you leave the space and re-enter.

I tried using the KinectA mac app to get skeleton data and send via OSC to Isadora but wasn’t working – perhaps port/remote address issue?

Ended up modifying an example from the Simple OpenNI library for Processing by adding OSC messages to send the x and y centre of mass coordinates for each tracked user.


Skeleton Tracking with Isadora to draw lines.

Used Isadora to map the numbers from Processing OSC with the limit-scale value actor. Experimented with numbers to get accurate values – difficult when by myself as I had to constantly walk into the space and back to my laptop to change the code.”
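The depth-threshold approach from the first test above amounts to walking the depth image and averaging the positions of every pixel that falls inside the threshold band. A minimal sketch, assuming a flat row-major array of per-pixel depths in millimetres (the tiny array shape and threshold values are illustrative; a real Kinect depth map is 640×480):

```python
def average_point_in_threshold(depth, width, near, far):
    """depth: flat row-major list of per-pixel depth values (e.g. millimetres).
    Returns the (x, y) centroid of all pixels within [near, far], or None."""
    xs, ys, count = 0, 0, 0
    for i, d in enumerate(depth):
        if near <= d <= far:
            xs += i % width   # column
            ys += i // width  # row
            count += 1
    if count == 0:
        return None  # nothing in range: tracking is lost, as noted in the cons
    return (xs / count, ys / count)

# A 4x2 depth map with two pixels inside a 1000-2000 mm band:
depth = [3000, 1500, 3000, 3000,
         3000, 1500, 3000, 3000]
print(average_point_in_threshold(depth, 4, 1000, 2000))  # -> (1.0, 0.5)
```

This makes both notes concrete: there is always an average point while anything sits inside the band, and the moment the subject crosses the threshold the function returns None and the track is lost.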