Apple works on a 3D interface for Macs that can recognize gestures

Shades of Minority Report! Apple has been granted a patent (number 9,218,063) for a “sessionless pointing user interface” that builds a sequence of 3D maps of the user so that a Mac and other devices could detect your gestures and respond accordingly.

Apple notes that many different types of user interface devices and methods are currently available. Common tactile interface devices include the computer keyboard, mouse and joystick. Touch screens detect the presence and location of a touch by a finger or other object within the display area. Infrared remote controls are widely used, and “wearable” hardware devices have also been developed for remote control.

FIG. 1 is a schematic, pictorial illustration of a computer system executing a sessionless pointing user interface (SPUI).

However, computer interfaces based on 3D sensing of parts of the user's body have also been proposed. In these systems, a 3D sensor provides position information that is used to identify gestures; the gestures are recognized based on the shape of a body part and its position and orientation. Three-dimensional human interface systems may identify not only the user's hands, but also other parts of the body, including the head, torso and limbs.
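
As a rough illustration of that kind of position-based recognition, here is a minimal Python sketch that classifies a horizontal swipe from a stream of 3D hand coordinates. The Point3D type, the 20 cm displacement threshold and the sample frames are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch: recognize a horizontal swipe from per-frame 3D hand
# positions (in metres), as a depth sensor might report them.
from dataclasses import dataclass

@dataclass
class Point3D:
    x: float  # left/right relative to the sensor
    y: float  # up/down
    z: float  # distance from the sensor

def classify_swipe(track: list[Point3D]) -> str | None:
    """Return 'swipe_left' or 'swipe_right' if the hand moved mostly
    sideways by at least 20 cm; otherwise None. Thresholds are assumed."""
    if len(track) < 2:
        return None
    dx = track[-1].x - track[0].x
    dy = track[-1].y - track[0].y
    if abs(dx) >= 0.20 and abs(dx) > 2 * abs(dy):  # mostly horizontal
        return "swipe_right" if dx > 0 else "swipe_left"
    return None

# Example: a hand drifting 30 cm to the right over seven frames
frames = [Point3D(0.05 * i, 0.0, 1.2) for i in range(7)]
print(classify_swipe(frames))  # -> swipe_right
```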

The patent cites U.S. Patent No. 7,348,963, which describes an interactive video display system in which a display screen shows a visual image and a camera captures 3D information about an object in an interactive area in front of the screen. A computer system directs the display screen to change the visual image in response to changes in the object.
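
A toy event loop gives the flavor of that arrangement; the camera and display calls below are hypothetical stand-ins, since the patent does not specify an API.

```python
# Illustrative loop: poll the tracked object's 3D position each frame and
# redraw only when it moves. Both functions are hypothetical stubs.
def capture_object_position(frame: int) -> tuple[float, float, float]:
    """Fake camera: the object drifts slowly to the right."""
    return (0.1 * frame, 0.0, 1.5)

def render(pos: tuple[float, float, float]) -> None:
    print(f"redrawing visual image for object at {pos}")

last = None
for frame in range(5):      # one iteration per captured frame
    pos = capture_object_position(frame)
    if pos != last:         # change the image only when the object changes
        render(pos)
        last = pos
```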

What’s more, U.S. Patent Application Publication 2010/0034457 describes a method for modeling humanoid forms from depth maps. The depth map is segmented to find the contour of the body, and the contour is processed to identify the subject's torso and one or more limbs. An input to control an application program running on a computer is then generated by analyzing the disposition of at least one of the identified limbs in the depth map.
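
To make that pipeline concrete, here is a minimal NumPy sketch of depth-map segmentation along those lines. The depth window, the centroid-based torso estimate and the percentile cutoff for limb candidates are simplifying assumptions, not the method actually claimed in the application.

```python
# Sketch: threshold a depth map to isolate the body, take the centroid
# as a crude torso estimate, and flag pixels far from it as limb candidates.
import numpy as np

def segment_body(depth: np.ndarray, near: float = 0.5, far: float = 2.5):
    """depth: HxW distances in metres. Returns (body_mask, torso_centroid,
    limb_mask). The near/far window and 75th-percentile cutoff are assumed."""
    body = (depth > near) & (depth < far)        # foreground (body contour region)
    ys, xs = np.nonzero(body)
    if xs.size == 0:
        return body, None, np.zeros_like(body)
    cy, cx = float(ys.mean()), float(xs.mean())  # crude torso centre
    dist = np.hypot(ys - cy, xs - cx)
    cutoff = np.percentile(dist, 75)             # outermost quarter ~ limb candidates
    limbs = np.zeros_like(body)
    limbs[ys[dist > cutoff], xs[dist > cutoff]] = True
    return body, (cy, cx), limbs

# Example on a synthetic 4x4 map with a 2x2 foreground block at depth 1 m
depth = np.full((4, 4), 5.0)
depth[1:3, 1:3] = 1.0
mask, torso, limbs = segment_body(depth)
print(torso)  # -> (1.5, 1.5), the centre of the foreground block
```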

A 3D interface could be used for gaming, manipulating onscreen objects, and turning gadgets, such as the lights in a room, on and off. The gestures could be used for scrolling, selecting, zooming, and more.
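
For a sense of how recognized gestures might drive such actions, here is an illustrative dispatch table; the gesture names and handlers are hypothetical, not drawn from the patent.

```python
# Illustrative mapping from recognized gestures to UI actions.
def scroll(amount: float) -> None:
    print(f"scrolling by {amount}")

def toggle_lights() -> None:
    print("toggling the room lights")

ACTIONS = {
    "swipe_up": lambda: scroll(+1.0),
    "swipe_down": lambda: scroll(-1.0),
    "push": toggle_lights,   # a forward push flips a device's state
}

def dispatch(gesture: str) -> None:
    handler = ACTIONS.get(gesture)
    if handler is not None:
        handler()

dispatch("swipe_up")  # -> scrolling by 1.0
```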