This Floating World is a dance piece created by Tim Murray-Browne and Jan Lee and performed in 2015. The performance “is a journey of building and dismantling the self. It explores the different ways we define ourselves and how this shapes our relationships with those around us.”

Tim Murray-Browne defines himself as an artist and creative coder; he collaborated with dancer and choreographer Jan Lee to create This Floating World.

In this review I will describe the dance and how technology is used in it, then give my own interpretation of the piece and of the technology with regard to body, space and spectator, comparing my thoughts with the artist’s original intentions as expressed in an interview.

The dance began with Jan Lee centre stage; Tim Murray-Browne sat at a desk stage left with a laptop connected to a 3D camera. The Kinect sensor was placed downstage, pointing upstage, roughly 5 metres from a screen at the back of the stage that was used for projection.


The piece began with an abstract, blurred image resembling a wave pattern. The image sharpened in response to the movement of Lee’s hands as she reached into the detection space. Her movement also controlled the sound: the intensity of the musical score grew as she interacted with the space, and by increasing the rate at which she moved her hands in and out of the detected space she made the volume louder. The scene that followed demonstrated a deeper level of tracking, with the dancer’s whole body inside the detection space. The visuals were drawn uniquely from her movement, with coloured paths sculpted in collaboration with the computer.
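As an illustration of the kind of mapping at work here (not the piece’s actual code), the rate of hand movement can be reduced to a 0–1 intensity that could drive volume or image sharpness. This Python sketch, including its scaling, is an assumption:

```python
def movement_intensity(positions, window=10):
    """Map the recent rate of hand movement to a 0..1 intensity.

    `positions` is a list of one-dimensional hand positions sampled per
    frame; the normalisation by window length is an illustrative choice.
    """
    recent = positions[-window:]
    # Total distance travelled over the recent frames
    travel = sum(abs(b - a) for a, b in zip(recent, recent[1:]))
    return min(1.0, travel / (len(recent) or 1))
```

A still hand yields an intensity of 0, while rapid in-and-out movement pushes the value towards 1, which is consistent with the volume growing as Lee moved her hands faster.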

These scenes were interesting, and at first I wasn’t aware of the interaction. Only after Jan started interacting with the detection space did I realise how the visuals were being affected. The waves pulsed as she reached into the space. It was visually effective, and I enjoyed the randomness: the interaction was not straightforward enough to be immediately understood, but instead provoked thought and interpretation. The second scene, with the coloured paths, was more straightforward, although still random, as lines were only drawn some of the time during Jan’s movement.

The digital space, the video projection on the screen, was drawn and manipulated in real time according to the dancer’s movements. Murray-Browne designed a software framework that controlled the level of tracking and the overall aesthetic; within it, the dancer could interact with that aesthetic as she wished, limited only by the specifications of the 3D camera and the parameters coded into the software.

This Floating World was a very interesting piece for me in that it is similar to work I have explored and created in my own practice and research. It is Murray-Browne and Lee’s first piece together, and it uses the Kinect, a technology I have experience with, unlike major established companies such as Troika Ranch or Wayne McGregor, who have adopted digital technologies in different ways.

It was a somewhat difficult piece for me to enjoy purely as a spectator because of my experience with the hardware and software. I watched the piece with a more technical emphasis, trying to understand how the technology worked and the way in which it was used. Ultimately I felt let down, as it was very similar to other work I have seen, and I recognised techniques that I have used in my own practice, which has been fairly safe in terms of testing new ideas. I had hoped to see something conceptually and technically new, resulting in a unique piece. I understood the effect of randomness as an approach to the coding. As part of this review I interviewed Tim to find out the reasons behind some of the design decisions.

“…We stuck to a 1 projector, 1 Kinect setup with the projection behind the dancer, fairly conventional, partly through practical reasons – budget and time, mixing stuff I was familiar with and things I wanted to learn but also because we wanted the focus to be on the artistic side of the piece rather than having used technology in a novel way that hadn’t been done before.”

As Tim mentions, the technology and setup were chosen to be within a familiar environment so that the dance would focus on the artistic elements. Aesthetically I thought the piece looked pleasing, and the abstract shapes and images produced responded well to the choreography. Conceptually I was a bit lost trying to find a link between the images and the dance; perhaps this was down to my technical bias.

The physical space in which the piece was performed felt disjointed from the digital space. The seating in the theatre meant that viewing angles varied greatly: someone sitting at the far side of a row would not have seen as close a relationship between the digital space and the dancer as someone seated centrally towards the back. The space in which the dancer performed was at least 3 metres from the screen. This may have been set for technical reasons: not all venues can provide rear projection, and if the dancer had been closer to the screen she might have stood in the beam of the projector, casting a shadow on the image.

Lee’s movement felt controlled and slow, keeping a constant dynamism. It is hard to determine whether this was a choreographic intention or a compromise made so that she would be fully recognised by the tracking software from the 3D camera. Personally I prefer a greater variety of movement in a contemporary piece, with changing dynamics and speed.

Asked about the development and choreographic process, and whether the code was built during or prior to the choreography, Tim answered, “Everything happened at the same time. Initially I did a fair amount of work separately as I needed to get a large amount of work done like the tracking. But this presented challenges in the studio as Jan was left a bit in the dark and although she was choreographing material it was made more difficult for her not knowing how the tech would work. Later on we arrived at a setup of me working in the mornings alone and then working in the studio together.”

Tim’s approach to coding was a balance “…between writing good code that was maintainable and never crashed vs being able to write stuff quickly to respond to ideas, and keeping the structure fluid enough to be able to go in new directions.” This is a problem that has not only affected Tim but also other companies and dancers who have adopted programming based technologies in their work. “…I think the main challenge was dealing with the different timeframes between coding and choreography. The coding happens much slower than the rate ideas are made in the studio.”

This raises the question of how coding can be seamlessly integrated within a dance. Should one be shaped by the other, or are there different, unexplored approaches?

In conclusion, this is potentially why software such as Isadora is increasingly popular within dance and performance in general: it allows a form of coding that can be developed within a quick timeframe. The pitfall is that the resulting code can be limited and messy. So the question remains: how can dance and coding collaborate effectively?

Interview

What technology did you use to create and show This Floating World?

The dancer was tracked using a Kinect v2. This was connected to a laptop running software I wrote using the Cinder framework. I used the Microsoft Kinect SDK to extract a 3D silhouette of the dancer (but avoided the SDK’s skeleton tracking as it’s not too reliable for unusual body movements). The silhouette was reduced to a pixel skeleton through normal 2D erosion techniques and then the end points extracted from this. These were tracked using a Kalman Filter.
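The last stage of the pipeline Tim describes, Kalman-filtering the extracted endpoints, can be sketched as follows. This is a minimal, illustrative constant-velocity filter for a single coordinate of a tracked point, written in Python for clarity (the actual software was C++/Cinder); the noise values `q` and `r` are assumptions, not values from the piece.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a tracked point."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q = q                          # process noise (assumed)
        self.r = r                          # measurement noise (assumed)

    def step(self, z, dt=1.0):
        """Fold in one noisy position measurement z; return smoothed position."""
        # Predict: position advances by velocity; covariance grows
        px = self.x[0] + dt * self.x[1]
        pv = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update: blend prediction with the measurement via the Kalman gain
        s = p00 + self.r
        k0, k1 = p00 / s, p10 / s
        y = z - px                          # innovation
        self.x = [px + k0 * y, pv + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

In practice one such filter per coordinate (x, y, depth) would smooth the jitter of skeleton endpoints detected frame by frame, which fits the buggy-short-lines problem Tim describes later.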

The visuals were all coded from scratch by me in OpenGL. The score was composed by my friend Zac Gvirtzman and then I added a few interactive effects in Ableton, which was controlled over MIDI by the software I wrote. On-stage control was from an Android tablet running TouchOSC.
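TouchOSC talks to the laptop over the OSC protocol. As an illustration of what travels over the wire (not code from the piece), here is a minimal Python sketch that encodes a one-float OSC message by hand; the address `/bloom` used in the test is hypothetical.

```python
import struct

def osc_message(address, value):
    """Encode a minimal OSC message carrying one float argument.

    Per the OSC spec, strings are NUL-terminated and padded to a multiple
    of 4 bytes, and floats are 32-bit big-endian.
    """
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address) + pad(",f") + struct.pack(">f", value)
```

The resulting bytes would be sent in a UDP datagram; this is the shape of message a TouchOSC fader produces when it is moved.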

How did you make your choice of technology?

We needed something fairly reliable and predictable to work with (hence no SDK-based skeleton tracking). Visually, we wanted the projection to work with the shapes being created by the dancer (although not too directly). We stuck to a 1 projector, 1 Kinect setup with the projection behind the dancer, fairly conventional, partly through practical reasons – budget and time, mixing stuff I was familiar with and things I wanted to learn but also because we wanted the focus to be on the artistic side of the piece rather than having used technology in a novel way that hadn’t been done before.

How did you go about developing the code?

Very first thing I did was knock up a small environment in Cinder for developing visual sketches that I could switch between, with a few basics in place like a GUI to edit parameters and saving/loading tweaked variables to a JSON file. I knocked out a sketch which was effectively most of the blue opening scene one evening in April 2013. When coding visuals like that I make really messy code and try to keep my mind out of software engineering to get into the flow. (Rewriting that sketch eight months later to optimise it took 2-3 days.)
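The save/load-to-JSON setup described here can be sketched in a few lines. This Python version is an illustrative stand-in for the C++/Cinder original, and the parameter name used in the example is hypothetical:

```python
import json

class SketchParams:
    """Tweakable sketch parameters persisted to a JSON file, in the spirit
    of the Cinder environment described above."""

    def __init__(self, path, defaults):
        self.path = path
        self.values = dict(defaults)

    def load(self):
        """Overlay saved tweaks onto the defaults, if a save file exists."""
        try:
            with open(self.path) as f:
                self.values.update(json.load(f))
        except FileNotFoundError:
            pass  # first run: keep the defaults
        return self.values

    def save(self):
        """Write the current tweaked values back to disk."""
        with open(self.path, "w") as f:
            json.dump(self.values, f, indent=2)
```

The point of such a setup is that values tuned live through the GUI survive a restart, so a sketch reopens exactly as it was left.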

In general, there was an endless balancing act between writing good code that was maintainable and never crashed vs being able to write stuff quickly to respond to ideas, and keeping the structure fluid enough to be able to go in new directions.

For the tracking code I was working mostly with recordings taken from the Kinect. It was several weeks of work, a lot of which was caused by going down wrong paths.

The middle section with the gold lines and plants growing around was done mostly in a week. For the plants I spent some time looking at timelapses of plants growing. The first version had no visual postprocessing, the lines didn’t fade, there was no on/off switch so they were drawing all the time, and the tracking was really buggy so there were just lots of short lines without such a clear connection with the movement. Postprocessing the visuals made a huge difference to how they looked, the tracking was slowly improved over time and the grouping of them and different ways of controlling the plants developed through working together in the studio.

All the time there was a long todo list, around half of which never happened. Many things were done to 50% or 80% but then dropped due to time restrictions and recognising the need to focus on making what we already had better rather than adding new stuff.

Was it finished prior to creating choreography or did you work while the dance was being choreographed?

Everything happened at the same time. Initially I did a fair amount of work separately as I needed to get a large amount of work done like the tracking. But this presented challenges in the studio as Jan was left a bit in the dark and although she was choreographing material it was made more difficult for her not knowing how the tech would work. Later on we arrived at a setup of me working in the mornings alone and then working in the studio together.

During the performance you were using your laptop and iPad, what specifically were you controlling during the performance?

The tablet controlled the transition between the different scenes. In the first section I turned on overall effects such as making visuals react to movement, turning on/off the white lines tracing movement. In the middle section I had start/stop buttons for drawing the orange lines, a continuous control to adjust the amount of plant growth, continuous control over the bloom effect towards the climax, continuous control over the underwater distortion effect in the final scene.

For the continuous controls I could change the parameter directly but I also had a gradual transition mode where I would set a target value and it would slowly move to it (with the speed of movement also controllable from the tablet). There were also some buttons on there which weren’t used in the actual performance like clearing all the screen, growing lots of plants at once.
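The gradual transition mode described above can be sketched as a simple slew towards a target value. This Python sketch is illustrative, not the actual control code; the default speed is an assumption.

```python
class GradualControl:
    """A continuous control that can either jump directly to a value or
    glide towards a target at an adjustable speed, as described above."""

    def __init__(self, value=0.0):
        self.value = value
        self.target = value
        self.speed = 0.5  # units per second; adjustable from the tablet

    def set_direct(self, value):
        """Immediate mode: jump straight to the value."""
        self.value = self.target = value

    def set_target(self, target, speed=None):
        """Gradual mode: start gliding towards `target`."""
        self.target = target
        if speed is not None:
            self.speed = speed

    def update(self, dt):
        """Advance by at most speed*dt towards the target each frame."""
        step = self.speed * dt
        diff = self.target - self.value
        if abs(diff) <= step:
            self.value = self.target  # arrived: snap exactly onto target
        else:
            self.value += step if diff > 0 else -step
        return self.value
```

Called once per frame with the frame time `dt`, this gives the slow, operator-paced fades described for the bloom and distortion effects.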

Can you describe your collaborative process with Jan?

We learnt a lot during the project, about how we work and finding ways of working together. Coding can take a long time to do things, which can be disconnecting from the other person. Choreographing happens quickly and with live present energy. Balancing these two was challenging. Jan’s practice involves movement, creating energy, connecting people together. She would devise creative games for us to play in the studio together to ensure we were connected and communicating well, which was very different from how I’m used to working but made a big difference in things. We also approach things in different ways – Jan does lots of improvising, creative exercises, free association. She draws stuff from everywhere. I’m quite conceptual in my approach and I work a lot with abstract ideas, plans, narratives, concepts. A lot of developing a good collaborative relationship was finding common ground to communicate with each other. We made a big storyboard together of reference images, sounds, words, colours, ideas which described the overall shape of the work.

What were the key challenges and greatest successes?

Some of the key challenges I mentioned above – finding a common language to communicate with each other. There was no creative hierarchy between us – the direction and concept behind the piece we created together. It took a while to get to know each other well enough to do this. But I think the main challenge was dealing with the different timeframes between coding and choreography. The coding happens much slower than the rate ideas are made in the studio.

Greatest success… The thing we worked really hard to do was to create a work where the different disciplines (visuals, sound, interaction, dance) were all integrated with each other and mutually supportive. We didn’t want the tech to be an amendment to the dance, but everything part of a single unified work that speaks the same message. And I think we achieved this.

You were on the stage next to Jan during the performance, was this intentional and what was the reason for it?

It was something we discussed a lot. I was up for being on stage because, as mentioned, the piece was not intended as a dance piece with technology added or supporting but an even collaboration between dance and tech. And also as I mentioned it was an even collaboration, so we wanted both of us there presented to the audience.

But on the other hand, the work was about the relationship between an individual and the environment around her, and I was concerned that me being on stage would turn me into a ‘god’ figure controlling the environment which I didn’t want. There was agency in the environment but it wasn’t meant to be presented as an individual pulling all the strings to manipulate it.

So we had that balance already, but in the end there was a technical reason that pushed it – the Kinect cable is of limited length and there didn’t appear to be any USB3 extenders available, so the laptop had to be on the stage. I could have extended the tablet way up to the sound box and controlled it from there, but if something had have started misbehaving (like the connection between tablet and computer which occasionally needed resetting) it would have been a disaster. Also, in all of the development and rehearsing of the work I had been next to the dance area so Jan and I had a level of communication which was essential for the control of the middle scene and would have been disrupted if I was miles away and she couldn’t see me.