Designing for Choreography:
A New Architecture for Hybrid Interaction
The future of interaction design lies not in isolated digital interfaces or standalone physical objects, but in the choreography—the dynamic, spatial, and narrative-driven connection—between people, objects, environments, and stories, whether they are physical or digital. The goal is a set of tools that make physical and digital sensing and actuation as easy as possible, so that designers can focus on the choreography itself. Taking stock, I realized that several software packages I had developed previously had gone stale from neglect.
Spacebrew 2.0: The Message Backbone (was Spacebrew, 2014)
Rewritten in Python on top of the MQTT protocol, Spacebrew is an open, dynamically re-routable software toolkit for choreographing interactive spaces. Or, in other words, a simple way to connect interactive things to one another. Every element you connect to the system is identified as a subscriber (reading data in), a publisher (pushing data out), or both. Data travels in standardized formats, which makes AI-generated routing straightforward.
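To make the publish/subscribe model concrete, here is a minimal sketch of a Spacebrew-2.0-style element using the paho-mqtt client. The broker address, topic names, and JSON envelope are illustrative assumptions, not the actual Spacebrew 2.0 schema.

    import json
    import time

    import paho.mqtt.client as mqtt

    BROKER = "localhost"  # assumed: a local MQTT broker such as Mosquitto

    def on_message(client, userdata, message):
        # Subscriber side: decode the standardized JSON envelope and react.
        print(message.topic, json.loads(message.payload))

    # paho-mqtt 1.x style; 2.x requires mqtt.CallbackAPIVersion as first argument
    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe("spacebrew/button/#")  # hypothetical publisher's topic
    client.loop_start()

    # Publisher side: push boolean values in a standardized envelope.
    for pressed in (True, False, True):
        envelope = {"type": "boolean", "value": pressed, "ts": time.time()}
        client.publish("spacebrew/button/demo", json.dumps(envelope))
        time.sleep(1)

    client.loop_stop()

Because every element speaks the same topic-and-envelope convention, re-routing is just re-subscribing, which is what makes the routing layer easy for an AI to manipulate.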
OpenTSPS 2.0 (was OpenTSPS, 2010)
TSPS started as a toolkit for sensing people in spaces, meant for both prototyping and permanent installations. It used OpenCV to analyze camera, Kinect, or video data and sent what it found via OSC, TCP, TUIO, or WebSockets. The need to understand people and their actions in environments remains, but AI and computer vision have developed considerably and will be incorporated in new ways in OpenTSPS 2.0.
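As a rough illustration of the TSPS pattern (analyze frames, broadcast detections), here is a sketch using OpenCV's classic HOG people detector and python-osc. The OSC address and message layout are assumptions; the real OpenTSPS 2.0 pipeline and formats may differ.

    import cv2
    from pythonosc.udp_client import SimpleUDPClient

    osc = SimpleUDPClient("127.0.0.1", 12000)  # assumed OSC host and port

    # Classical people detector; a modern AI model could slot in here on
    # hardware that can accelerate it.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        for i, (x, y, w, h) in enumerate(rects):
            # One message per person: id plus normalized centroid.
            cx = (x + w / 2) / frame.shape[1]
            cy = (y + h / 2) / frame.shape[0]
            osc.send_message("/tsps/person", [i, float(cx), float(cy)])
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()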
Yuxi 2.0: Hybrid Object Toolkit (was Yuxi: Mixed Reality Hardware Toolkit, 2018)
The original inspiration behind Yuxi was to create a set of methodologies, toolchains, and examples for mixing physical computing with VR/AR/MR projects. The project examples used a Raspberry Pi and a breadboard mounted to a trackable base plate. I am in the process of rewriting Yuxi to work on the Oculus Quest, using QR-code and object tracking to synchronize physical space and digital space.
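The Unity side is out of scope here, but the physical half of a hybrid object can be as small as this hypothetical sketch: a button on a Raspberry Pi (via gpiozero) publishing its state onto the Spacebrew 2.0 / MQTT backbone, where a subscribed Quest scene could mirror it. The pin number, broker address, topic, and payload are all assumptions.

    import json
    from signal import pause

    import paho.mqtt.client as mqtt
    from gpiozero import Button

    client = mqtt.Client()  # paho-mqtt 1.x style constructor
    client.connect("spacebrew.local", 1883)  # assumed broker address
    client.loop_start()

    button = Button(17)  # assumed GPIO pin

    def publish_state(pressed):
        # Mirror the physical state into digital space over the message bus.
        client.publish("yuxi/object/button", json.dumps({"pressed": pressed}))

    button.when_pressed = lambda: publish_state(True)
    button.when_released = lambda: publish_state(False)

    pause()  # keep the script alive, waiting on GPIO events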
Vaporware 1.0: Hybrid Reality Choreography Tool
The culmination of these efforts is Vaporware 1.0, a proposed choreography and spatial storytelling tool. This system is where the concept of choreography materializes, powered by AI.
Seamless Sensor Integration: By relying on Spacebrew 2.0 (MQTT), Vaporware 1.0 makes connecting physical and digital sensors and actuators easy by construction. The focus shifts from how to connect devices to how to tell a story with the data they generate. It is designed to integrate:
Human Conversation: Text, speech, and intent analysis as narrative input.
Locomotion: The body's movement through the environment as spatial input.
Manipulation: The body's interaction with objects as object-centric input.
Flexibility and Scale: The system’s architecture must be versatile enough to flex between wearable ideas (inside-out) and environment ideas (outside-in). This means handling close-range personal data streams (heart rate, gesture) alongside large-scale ambient data (room occupancy, light levels).
AI-Driven Storytelling: The introduction of AI is what makes this a choreography tool rather than just a data router. The AI component analyzes the core inputs (conversation, locomotion, manipulation) and the states of hybrid objects to suggest, automate, or even generate narrative transitions, lighting changes, or sonic cues. Users create the stories; the tool acts as an intelligent stage manager, keeping the interaction fluid and responsive and embodying the "live" quality of choreography. A minimal sketch of this stage-manager loop follows.
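To ground the stage-manager idea, here is a minimal rule-based sketch: it subscribes to the three input families above, keeps a rolling window of events, and publishes a cue when a rule fires. The hand-written rule stands in for the AI layer (which in Vaporware would propose or generate cues), and every topic name and payload here is hypothetical.

    import json
    from collections import deque

    import paho.mqtt.client as mqtt

    recent = deque(maxlen=50)  # rolling window of (topic, payload) events

    def choose_cue(events):
        # Placeholder for the AI layer: map recent inputs to a narrative cue.
        for topic, payload in events:
            if topic.endswith("locomotion") and payload.get("zone") == "stage":
                return {"cue": "lights/spotlight", "value": "on"}
        return None

    def on_message(client, userdata, message):
        recent.append((message.topic, json.loads(message.payload)))
        cue = choose_cue(recent)
        if cue is not None:
            # The stage manager answers inputs with outputs on the same bus.
            client.publish("vaporware/cues", json.dumps(cue))

    client = mqtt.Client()  # paho-mqtt 1.x style constructor
    client.on_message = on_message
    client.connect("localhost", 1883)
    for topic in ("inputs/conversation", "inputs/locomotion", "inputs/manipulation"):
        client.subscribe(topic)
    client.loop_forever()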
Help Needed:
Does any of this resonate with you or do you have any feedback? I’d love to hear your thoughts.
I’d love to find collaborators for the design and programming of these tools. The Python parts of Spacebrew 2.0 are pretty far along, but the web interface could use some help. I am creating one version of OpenTSPS 2.0 to run on the new Arduino Uno Q, which constrains the kinds of CV I can use. I also plan to create a desktop version and could use advice about platforms for accelerated CV and AI projects. Yuxi 2.0 is being written in Unity, and I’d love to chat with anyone who has used the experimental QR Tag integration that is now on Quest 3.
I am also hosting and participating in Jam Sessions, a weekly meetup to create new things and learn about the tools. Message me if you would be interested in joining.
My residency is coming to an end, and in December I will transition to teaching Designing Immersive Experiences at CIID. My plan is to publish drafts of these tools before the end of January. All of them exist in some form now, and Spacebrew 2.0 is probably the furthest along.