Harrison Weingard's Portfolio & Personal Website
Symagery - Harrison Weingard - Jan. 6, 2012, 12:56 p.m.



This is a group project I worked on during my junior year at UW. The assignment was geared toward giving undergraduates experience with a real research project. Our team decided to investigate interactive real-time visualizations, inspired by the kind of displays you might see at a concert and the multi-touch cell phones most of us carry in our pockets.

Conglomerated Multi-Touch Interactions

We found a project similar to what we wanted to accomplish, called SuperDraw; however, SuperDraw only provided a single interactive model. No matter how many people you put into one visualization, they would all be doing the same thing, the screen would quickly get cluttered, and individual contributions would go unnoticed.

Our research was the creation and testing of different interaction models, to see how well each model scaled and to gauge how involved each contributor felt.

A screenshot of a visualization model

In the screenshot above, some users create colored fish, some create stars, some control the sway of the ocean, and others apply filters to the whole visualization.
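To make that idea concrete, here is a rough Python sketch (not our actual Quartz Composer patches) of how participants could be handed different roles as they join; the role names and round-robin assignment are purely illustrative:

```python
# Illustrative sketch: give each new participant one of several interaction
# roles so their contribution stays distinguishable as the crowd grows.
import itertools

ROLES = ["fish", "stars", "ocean_sway", "filter"]  # hypothetical role names
_role_cycle = itertools.cycle(ROLES)

participants = {}  # device id -> assigned role

def register(device_id):
    """Assign each new device a role in round-robin order."""
    if device_id not in participants:
        participants[device_id] = next(_role_cycle)
    return participants[device_id]

# Example: three phones joining the visualization
for phone in ["phone-a", "phone-b", "phone-c"]:
    print(phone, "->", register(phone))
```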

What We Created

In order to test all these ideas, we needed a solid framework on which to build the visualizations and collect metrics from them. We used Quartz Composer as that framework. We built a simple text-based network protocol and used a UDP port for all of our transmissions. The Quartz Composition could then accept users as they became available and use an inactivity lifespan to decide when someone had stopped participating. We also needed a logging application to measure which interactions were happening and when. Finally, to test each component during development, we needed a multitude of personality bots.
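Here is a minimal sketch of how that receiving side could work; the message format, port number, and timeout below are assumptions for illustration, not the exact protocol we shipped:

```python
# Sketch of a UDP listener: accepts plain-text messages (assumed format
# "device_id event x y"), tracks the last time each device was heard from,
# and drops anyone quiet for longer than TIMEOUT seconds.
import socket
import time

TIMEOUT = 30.0   # assumed inactivity lifespan, in seconds
last_seen = {}   # device_id -> timestamp of last message

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))  # port is illustrative
sock.settimeout(1.0)

while True:
    try:
        data, addr = sock.recvfrom(1024)
        device_id, event, x, y = data.decode().split()
        last_seen[device_id] = time.time()
        # ...forward the event to the visualization / logger here...
    except (socket.timeout, ValueError):
        pass  # no packet this second, or a malformed message
    # Expire participants who have stopped sending input.
    now = time.time()
    for device_id in [d for d, t in last_seen.items() if now - t > TIMEOUT]:
        del last_seen[device_id]
```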

In all we created the following components:

- a set of Quartz Compositions implementing the visualization models
- a simple text-based UDP protocol connecting participants to the visualization
- a logging application for recording which interactions happened and when
- a collection of personality bots for automated testing
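As a taste of what those bots looked like in spirit, here is a hypothetical Python sketch (not our real bot code) of two "personalities" that play the part of different kinds of users by sending scripted messages over the same assumed text/UDP protocol as above:

```python
# Hypothetical personality bots: scripted senders that stand in for real
# users so each component could be exercised without a room full of phones.
import random
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
TARGET = ("127.0.0.1", 9000)  # illustrative host/port

def eager_tapper(device_id, taps=50):
    """A persona that taps rapidly at random screen positions."""
    for _ in range(taps):
        x, y = random.random(), random.random()
        sock.sendto(f"{device_id} tap {x:.3f} {y:.3f}".encode(), TARGET)
        time.sleep(0.1)

def idle_lurker(device_id, taps=3):
    """A persona that interacts occasionally, then goes quiet."""
    for _ in range(taps):
        sock.sendto(f"{device_id} tap 0.5 0.5".encode(), TARGET)
        time.sleep(10)

eager_tapper("bot-eager-1")
```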

The Result

We successfully built and tested everything we wanted to, though it's hard to say we gathered truly conclusive results. Here is an excerpt from our testing:

A video excerpt from one of our testing sessions.

Our testers really enjoyed the visual feedback and experimented with lots of different gestures. Unfortunately, though, our ideas ultimately couldn't resolve the confusion that set in as the number of participants increased.