Method and system for touch-free control of devices
Inventors
Plagemann, Christian • Dahlkamp, Hendrik • Ganapathi, Hariraam Varun • Thrun, Sebastian
Assignees
US Department of the Navy • Leland Stanford Junior University
Publication Number
US-9063573-B2
Publication Date
2015-06-23
Expiration Date
2031-02-17
Interested in licensing this patent?
MTEC can help explore whether this patent might be available for licensing for your application.
Abstract
The present invention provides a system and computerized method for receiving image information and translating it to computer inputs. In an embodiment of the invention, image information is received for a predetermined action space to identify an active body part. From such image information, depth information is extracted to interpret the actions of the active body part. Predetermined gestures can then be identified to provide input to a computer. For example, gestures can be interpreted to mimic computerized touchscreen operation; touchpad or mouse operations can also be mimicked.
Core Innovation
The present invention provides a system and computerized method for receiving image information and translating it to computer inputs. An embodiment receives image information for a predetermined action space to identify an active body part. Depth information is extracted from the image information to interpret the actions of the active body part, allowing identification of predetermined gestures to provide input to a computer. These gestures can mimic computerized touchscreen, touchpad, or mouse operations.
The invention addresses the problem in conventional computer systems where natural human gestures are not directly interpretable as input. Current input devices like mice, trackpads, and keyboards do not allow for direct and natural interaction with digital content. Prior attempts to input natural gestures typically required attached devices such as gloves, which restricted natural movement.
The present invention solves these issues by enabling users to provide input to a computer using natural gestures, without any attached device. It supports standard computer inputs without physical contact with input devices such as mice, touchpads, or keyboards. It can also interpret gestures beyond standard inputs, including grab-and-move hand movements and the use of objects (props) as active parts, enabling applications such as gaming.
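The pipeline summarized above (identify an active part, extract a depth cue, recognize a gesture, emit a computer input) can be sketched roughly as follows. This is an illustrative sketch only: the function name, event fields, and touch threshold are assumptions, not taken from the patent.

```python
# Hypothetical sketch of translating an active part's position and depth
# cue into a computer input event. Event names and the threshold value
# are illustrative assumptions, not the patent's actual method.

def interpret_frame(active_part_xy, depth_cm, touch_threshold_cm=2.0):
    """Map an active part's 2D position plus depth to an input event."""
    x, y = active_part_xy
    if depth_cm <= touch_threshold_cm:
        # Close enough to the virtual surface: treat as a touchscreen tap.
        return {"event": "touch", "x": x, "y": y}
    # Otherwise treat as a hover, mimicking mouse-pointer movement.
    return {"event": "hover", "x": x, "y": y}

print(interpret_frame((120, 80), 1.5))   # within threshold: touch event
print(interpret_frame((120, 80), 10.0))  # beyond threshold: hover event
```

In a real system the depth cue would come from a time-of-flight or stereoscopic camera rather than being passed in directly.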
Claims Coverage
The patent includes two independent claims outlining a method and a system for touch-free device control. Seven main inventive features are identified that describe the method and system components and their interactions.
Method for touch-free input based on depth and image information
Receiving image data from a camera positioned near a display device that captures light via a light-bending apparatus; displaying images with predetermined input areas; identifying at least one active body part; receiving depth cue information; identifying a three-dimensional gesture of the active part; generating action information based on the gesture and its proximity to the predetermined areas, including depth information; and performing a computing action based on this information.
System for touch-free input integrating camera and computer system
A computer system comprising a camera positioned near a display device that receives image information whose light is bent by a light-bending apparatus. The system displays images including predetermined input areas, identifies active parts and their depth cues, recognizes three-dimensional gestures, generates action information incorporating depth and proximity to the input areas, and performs computing actions based on the generated information.
Utilization of light-bending apparatus near the camera
Use of a light-bending apparatus, such as a mirror or prism, positioned near the camera to redirect light from substantially near and in front of the display device to the camera, enabling capture of image information from various predetermined action spaces.
Identification and use of active body parts for input
Identifying active parts which can include hand, fingertip, multiple body parts, upper body parts, or props, and using depth cue information to interpret three-dimensional gestures as input to the computer.
Gesture-based initiation and recognition
Recognizing three-dimensional gestures including finger and hand movements, and performing computing actions responsive to these gestures, including the initiation of further gesture identification and input operations.
Mapping and adaptation between action spaces and display
Mapping a first predetermined space (e.g., above a keyboard or between user and display) to a second predetermined space that corresponds to the display device, enabling input actions in three-dimensional space to correspond to computer inputs.
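This kind of action-space-to-display mapping can be illustrated with a simple linear scaling. The bounds, threshold, and function below are made-up example values, not figures from the patent:

```python
def map_to_display(x, y, z, space, display_w, display_h):
    """Linearly map a point in a 3D action space (e.g., above a keyboard)
    to 2D display coordinates, keeping depth as a touch/hover cue.
    `space` gives illustrative action-space bounds in centimeters."""
    nx = (x - space["x0"]) / (space["x1"] - space["x0"])
    ny = (y - space["y0"]) / (space["y1"] - space["y0"])
    px = nx * display_w
    py = ny * display_h
    is_touch = z <= space["z_touch"]  # within z_touch of the virtual surface
    return px, py, is_touch

# A 30 cm x 20 cm region above the keyboard mapped to a 1920x1080 display.
space = {"x0": 0.0, "x1": 30.0, "y0": 0.0, "y1": 20.0, "z_touch": 2.0}
print(map_to_display(15.0, 10.0, 1.0, space, 1920, 1080))  # center, touching
```

A point at the center of the action space lands at the center of the display, and its height above the virtual surface determines whether it counts as a touch or a hover.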
Use of multiple types of cameras
Implementation using various camera types including webcams, time-of-flight cameras, and stereoscopic cameras to receive image information and depth cues for gesture recognition.
The claims cover a comprehensive touch-free input method and system involving cameras with light-bending apparatus, identification of active body parts, extraction of depth cues, three-dimensional gesture recognition, and mapping of action spaces to display inputs enabling computing actions without physical contact.
Stated Advantages
Enables natural gesture-based input without attached devices, allowing more intuitive user interaction with computers.
Provides standard computer inputs such as touchscreen, touchpad, and mouse operations without contact with input devices.
Avoids physical damage and blemishing to computer displays by enabling hover operations instead of touch.
Supports a broad field of view and multiple action spaces through the use of light-bending apparatus and multiple camera setups.
Adapts to varying illumination through robust skin segmentation for detecting active body parts.
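One common illumination-robust approach to skin segmentation is to threshold the chroma channels of the YCbCr color space while ignoring luma. This is a generic textbook technique, not necessarily the patent's method; the threshold ranges below are typical illustrative values:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 RGB -> YCbCr conversion (full-range approximation)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin_pixel(r, g, b):
    """Classify a pixel as skin by thresholding the chroma channels only;
    ignoring luma (Y) makes the test less sensitive to illumination."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77.0 <= cb <= 127.0 and 133.0 <= cr <= 173.0

print(is_skin_pixel(200, 150, 120))  # a typical skin tone -> True
print(is_skin_pixel(0, 0, 255))      # saturated blue -> False
```

Because brightness changes mostly shift the Y channel, the Cb/Cr test stays comparatively stable as lighting varies, which is the property the advantage above describes.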
Documented Applications
Mimicking touchscreen input by detecting finger touches or hover gestures over a display device.
Mouse- and trackpad-like input operations detected in a space above a keyboard or on arbitrary surfaces such as a tabletop.
Gaming applications using hand gestures or props like sports equipment to provide input signals for interactive gaming.
Detection of writing instruments such as pens for text or signature input.
Supplementing or replacing conventional input devices like keyboards and mice with gesture and depth-based inputs.