Grasshopper Canvas with Kinect Interaction: Part 1

In a previous post, we elaborated on how more real estate for the Grasshopper Canvas can be beneficial, and how Wiimote interaction can make it usable. Since then, we have been toying around with the Microsoft Kinect (more so after the release of the beta version of the Kinect SDK for the PC), looking to identify a more fluid, natural, multi-touch interface for the table top.  The intent is to create an interface not for long-term day-to-day editing, but rather for communication and collaboration.  We’ve noticed that we tend to talk “around” definitions more and more, so we want to create an intuitive and democratic way of manipulating them that isn’t so tied to a screen, a mouse, and a keyboard.

The release of the Kinect sensor has already spawned hacks from game designers, hobbyists and academics, almost all of which involve skeletal tracking and gesture recognition, translated into many interesting applications. One of the lesser-utilized affordances of the Kinect sensor is its ability to detect touch on an arbitrary surface, as long as there is a line of sight between the camera and the touch surface. Andy Wilson from Microsoft Research’s Surface Computing group explains the idea in this ITS paper. Using this principle, we were able to create a pseudo-touch-sensitive table-top interface…
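For readers who want a feel for how this kind of touch detection works, here is a minimal sketch in Python with OpenCV (an illustration of the general approach, not our actual code; the `get_depth_frame` helper and the threshold values are hypothetical): first capture a per-pixel depth model of the empty table, then treat any pixel that later reads roughly a finger’s thickness closer to the overhead camera as a touch candidate.

```python
import numpy as np
import cv2

def get_depth_frame():
    """Hypothetical stand-in: return one 480x640 depth frame in millimetres
    from whatever Kinect wrapper is in use (not a real SDK call)."""
    raise NotImplementedError

def build_background(frames=30):
    # Average a few frames of the empty table to model the surface per pixel.
    return np.mean([get_depth_frame().astype(np.float32) for _ in range(frames)], axis=0)

FINGER_MIN_MM, FINGER_MAX_MM = 5, 25   # rough "finger thickness" band; tune per setup

def detect_touches(depth_frame, background):
    # Height of each pixel above the table: the camera is overhead, so a finger
    # touching the surface reads slightly *closer*, i.e. a smaller depth value.
    height = background - depth_frame.astype(np.float32)
    mask = ((height > FINGER_MIN_MM) & (height < FINGER_MAX_MM)).astype(np.uint8) * 255
    mask = cv2.medianBlur(mask, 5)     # suppress sensor noise

    # Each connected blob of "finger height" pixels becomes one touch point.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 40]   # drop tiny specks
```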

Our first challenge was to overcome specular reflections of the depth camera’s IR projection off the glass top of the table. These reflections blacked out depth information, making it impossible to use the interface on any specular surface. To overcome this, we flipped the glass over so the ground side faced up. Not only did this avoid the reflections, it also maintained the display integrity on the table top.  The setup remains the same as the one we used before with the Wiimote, just with the addition of the Kinect mounted overhead with its field of view covering the table…

  1. Kinect: on the table (as in the image above) was “ok,” but ceiling-mounted worked best.
  2. Touch surface: this is just a giant pane of tinted glass…nothing fancy.
  3. Projector: mounted under adjacent table.  Projects the image on a…
  4. Huge mirror.
  5. Modeling viewing window (the second projector is not shown).

With image processing, computer vision routines, and a calibration step that models the surface and maps the interaction space to the screen display space, we were able to achieve reasonable accuracy for multi-point touch prediction. Next came implementing multi-touch gestures for navigating the Grasshopper canvas, such as zoom, pan, and component select and move. Not everything went as smoothly as we would have liked: this is still a work in progress.  However, we’ve made some strides forward with basic interaction.

Currently, the gestures are mapped to mouse and keyboard events on the projected display, which means mapping touch points from the depth camera’s low spatial resolution of 640×480 to the higher display resolution of 1024×768. This introduces uncertainty in mouse pointer positioning, which limits how accurately small graphical elements on the Grasshopper canvas can be targeted. It also raises interesting questions about the ideal table space, and the position and scale of the graphical components displayed, not to mention ergonomics.  If you notice a jitter in the display during certain interactions, it is due to the mismatch between the depth camera frame rate (20 fps after video processing) and the screen refresh rate (72 Hz). We are trying to smooth this out through optimization and various other tricks.
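To make the calibration and mapping step more concrete, here is a rough Python/OpenCV sketch of the idea (simplified, with made-up calibration values; not our implementation): the user touches a few known on-screen targets, we fit a homography from depth-image coordinates to display coordinates, and the resulting pointer position is exponentially smoothed to soften the frame-rate mismatch.

```python
import numpy as np
import cv2

# Calibration: pair the detected touch positions (640x480 depth image) with the
# known on-screen target positions (1024x768 display) and fit a homography.
# These point values are purely illustrative.
touch_pts  = np.float32([[102, 88], [538, 95], [530, 401], [110, 395]])
screen_pts = np.float32([[50, 50], [974, 50], [974, 718], [50, 718]])
H, _ = cv2.findHomography(touch_pts, screen_pts)

def to_screen(touch_xy):
    # Map one (x, y) touch from depth-image space to display space.
    p = cv2.perspectiveTransform(np.float32([[touch_xy]]), H)
    return tuple(p[0, 0])

class SmoothedPointer:
    """Exponential smoothing of the pointer, to soften the jitter caused by a
    ~20 fps depth stream driving a 72 Hz display."""
    def __init__(self, alpha=0.4):
        self.alpha, self.pos = alpha, None
    def update(self, xy):
        xy = np.float32(xy)
        self.pos = xy if self.pos is None else self.alpha * xy + (1 - self.alpha) * self.pos
        return tuple(self.pos)
```

In a sketch like this, the smoothing factor trades responsiveness against jitter, which is part of what we are still tuning.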

We soon expect to have a larger touch gesture library implemented, enabling editing and navigation simultaneously and with more fluidity in the interface. In the future, we are also interested in exploring aerial gestures that track the user’s skeletal features for navigating the Rhino viewport.  The implementation of the gestures and touch prediction is tool-agnostic, and the touch surface is not restricted to a flat table top.  This gives us an opportunity to use any tool or user-interface abstraction with the display and the Kinect (more on that in another post).

Check out the video and send feedback!  Stay tuned for the next iteration…
