
Motion Control

Human Machine Interaction / Andrew Monks

Intro

I'm personally super excited about real-time 3D gesture sensors like the Kinect or the Leap Motion.

I don't think there's an established name for this category of devices yet, so I'm gonna use the more general term, motion controllers, as a synecdoche for the smaller subset of devices I'm talking about.

How they work

As we talked about in class, these devices emit infrared light and observe how it returns (the Kinect projects a structured-light pattern; the Leap Motion floods its field of view with IR and watches it with stereo cameras), sometimes in conjunction with RGB computer vision, to capture a space in 3D.
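
To make the geometry concrete, here's a minimal numpy sketch of the back-projection step that turns a depth frame into a point cloud. The pinhole-model math is standard; the focal length and resolution here are illustrative assumptions, not calibration values from any particular device.

```python
import numpy as np

# Approximate pinhole-camera intrinsics for a Kinect-style depth sensor.
# These numbers are illustrative assumptions, not official calibration values.
WIDTH, HEIGHT = 640, 480
FX = FY = 575.0                       # focal length in pixels
CX, CY = WIDTH / 2.0, HEIGHT / 2.0    # principal point

def depth_to_point_cloud(depth_m):
    """Back-project an (H, W) depth image in meters into an (N, 3) point cloud."""
    v, u = np.indices(depth_m.shape)  # pixel row/column grids
    z = depth_m
    x = (u - CX) * z / FX             # pinhole back-projection
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Fake frame: a flat wall two meters away, just to exercise the function.
fake_depth = np.full((HEIGHT, WIDTH), 2.0)
print(depth_to_point_cloud(fake_depth).shape)  # (307200, 3)
```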

Kinect

Microsoft's Kinect was the first mainstream consumer device to really catch on with this technology. Microsoft was super active about supporting the developer community with an open SDK, APIs, and good documentation. Because of this, there's now a large group of people working with Kinects, and the device has thoroughly integrated itself into the art world. You can even check one out from the SAIC Media Center. There are now tons and tons of examples of beautiful and creative work made with Kinect hardware.
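
That developer activity also produced open toolchains outside Microsoft's official SDK. As a sketch of how low the barrier is, here's a frame grab using the community-built libfreenect Python wrapper; it assumes the driver and its bindings are installed and a Kinect is plugged in.

```python
import freenect  # Python bindings for the open-source libfreenect driver

# Grab one depth frame and one RGB frame using the synchronous API.
depth, _ = freenect.sync_get_depth()  # (480, 640) array of raw 11-bit depths
rgb, _ = freenect.sync_get_video()    # (480, 640, 3) array of 8-bit RGB

print("depth frame:", depth.shape, "raw depth range:", depth.min(), "-", depth.max())
print("video frame:", rgb.shape)
```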

The Kinect was designed to capture full-body motion from a distance of at least a few feet. This is awesome, but people usually do things primarily with their hands. There's a video from Microsoft of surgeons using the Kinect to control robotic surgery.

For applications like this, a device that captures a smaller area with higher resolution would be more appropriate. Enter the Leap Motion.

Leap Motion

The Leap Motion was released to a ton of hype in 2013. Because the technology is so new, the app market hasn't really caught up to the hardware yet, so the device has a reputation for being disappointing.

However, from a hardware perspective, it's super capable. It captures reliably at a high resolution and with a higher frame rate than the Kinect. It lacks the Kinect's RGB camera, but makes up for it with a higher-resolution point cloud.
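
For a sense of what working with it looks like, here's a minimal listener using the Python bindings that ship with the Leap Motion SDK (so Leap.py and the SDK's native library must be on your path; API details vary slightly between SDK versions).

```python
import time
import Leap  # ships with the Leap Motion SDK; not installable from PyPI

class HandListener(Leap.Listener):
    def on_frame(self, controller):
        frame = controller.frame()
        for hand in frame.hands:
            pos = hand.palm_position  # millimeters, relative to the device
            print("hand %d palm at (%.1f, %.1f, %.1f) mm"
                  % (hand.id, pos.x, pos.y, pos.z))

listener = HandListener()
controller = Leap.Controller()
controller.add_listener(listener)  # on_frame fires as tracking data arrives
time.sleep(10)                     # stream for ten seconds
controller.remove_listener(listener)
```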

The Future

An optical system like this definitely has downsides. Since it observes from a single position, objects cast shadows: if your two hands are collinear with the sensor, the closer hand occludes the other.
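
The shadowing condition is easy to state geometrically: with the sensor at the origin, one hand occludes another when their direction vectors from the sensor nearly coincide and one is closer. A toy sketch, with an arbitrary five-degree cone as the threshold:

```python
import numpy as np

def occludes(near, far, cone_deg=5.0):
    """Rough test: does the point `near` shadow the point `far`, as seen
    from a sensor at the origin? Positions in mm; the 5-degree cone is an
    arbitrary illustrative threshold, not a property of any real device."""
    near, far = np.asarray(near, float), np.asarray(far, float)
    if np.linalg.norm(near) >= np.linalg.norm(far):
        return False  # `near` is actually farther away; it can't shadow `far`
    cos_angle = np.dot(near, far) / (np.linalg.norm(near) * np.linalg.norm(far))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) < cone_deg

# One hand 200 mm above the sensor, the other 400 mm up along nearly the same ray:
print(occludes([0, 200, 0], [10, 400, 0]))   # True: the rays nearly coincide
print(occludes([0, 200, 0], [300, 400, 0]))  # False: well-separated rays
```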

The Myo Armband attempts to get around this by using an entirely different technique for gesture detection.

It uses EMG sensors in an armband, with a trainable machine-learning system to correlate muscle electrical activity with specific gestures. It also features an IMU (gyroscope, accelerometer, and magnetometer) to track its own orientation and motion in space.
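
Myo's actual pipeline is proprietary, but the general recipe described above, windowed EMG features fed to a trainable classifier, looks roughly like this toy sketch. The window size, features, and fabricated training data are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

N_CHANNELS = 8   # the Myo armband has 8 EMG electrodes
WINDOW = 50      # samples per analysis window (assumption)

def features(window):
    """Classic per-channel EMG features: RMS and mean absolute value."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    return np.concatenate([rms, mav])

# Fabricated training data standing in for labeled recordings:
# gesture 0 = "rest" (low activity), gesture 1 = "fist" (high activity).
rng = np.random.default_rng(0)
X, y = [], []
for label, scale in [(0, 0.1), (1, 1.0)]:
    for _ in range(100):
        X.append(features(rng.normal(0, scale, (WINDOW, N_CHANNELS))))
        y.append(label)

clf = SVC().fit(X, y)
probe = features(rng.normal(0, 1.0, (WINDOW, N_CHANNELS)))
print("predicted gesture:", clf.predict([probe])[0])  # expect 1 ("fist")
```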

The Myo doesn't come out until later this year, and I'm hesitant to judge it before it ships. I've been consistently disappointed by consumer-targeted EEG and EKG systems, but electromyography is a different technology; perhaps it'll deliver. I'm worried that it won't be able to detect things like fine finger motions from that far up one's arm. Who knows.

The Past

When the Kinect was released, the media often compared it to the Wiimote, another gaming-targeted device designed to encourage more active play. Instead of being an external sensor observing your motion, the Wiimote is physically manipulated in space by the user, and it uses an IMU to capture its own position and motion.

Additionally, it contains an optical sensor at its tip, which determines where the Wiimote is pointing by looking for two IR lights at a fixed position (the two ends of the "sensor bar", which sits just above or below the display in a gaming context).
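
The pointing math itself is simple: the camera reports blob coordinates in its own image space, and the cursor comes from the midpoint of the two dots, mirrored because panning the remote one way moves the dots the other way in the camera's view. A hypothetical sketch; the resolution constants and naive linear mapping are illustrative, not the console's actual calibration:

```python
# The Wiimote's IR camera reports blob positions in a roughly 1024x768
# coordinate space. This sketch maps the midpoint of the two sensor-bar
# dots to a screen cursor. The constants and the linear mapping are
# illustrative assumptions, not the console's real calibration.
CAM_W, CAM_H = 1024, 768
SCREEN_W, SCREEN_H = 1920, 1080

def cursor_from_ir_dots(dot_a, dot_b):
    """dot_a, dot_b: (x, y) blob positions from the IR camera."""
    mid_x = (dot_a[0] + dot_b[0]) / 2.0
    mid_y = (dot_a[1] + dot_b[1]) / 2.0
    # Mirror both axes: panning the remote right/up moves the dots
    # left/down in the camera's view.
    sx = (1.0 - mid_x / CAM_W) * SCREEN_W
    sy = (1.0 - mid_y / CAM_H) * SCREEN_H
    return sx, sy

# Dots dead-center in the camera frame -> cursor at the middle of the screen.
print(cursor_from_ir_dots((502, 384), (522, 384)))  # (960.0, 540.0)
```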
