Apple granted yet another 3D-related patent, this one for ‘optical pattern projection’

Apple continues to work on 3D interfaces and has been granted a patent (number 9,239,467) by the U.S. Patent & Trademark Office for “optical pattern projection” that could be used for, among other things, gesture-based gaming.

In the patent filing, Apple notes that optical pattern projection is used in a variety of applications, such as optical three-dimensional (3D) mapping, area illumination and LCD backlighting. In some applications, diffractive optical elements (DOEs) are used to create a desired projection pattern.

Here’s the summary of the patent: “Optical apparatus includes first and second diffractive optical elements (DOEs) arranged in series to diffract an input beam of radiation. The first DOE is configured to apply to the input beam a pattern with a specified divergence angle, while the second DOE is configured to split the input beam into a matrix of output beams with a specified fan-out angle. The divergence and fan-out angles are chosen so as to project the radiation onto a region in space in multiple adjacent instances of the pattern.”
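
The key relationship in that summary is geometric: the angular pitch between the output beams has to match the divergence of each pattern instance for the instances to land side by side. Here’s a minimal back-of-the-envelope sketch in Python of that tiling condition; the angle values, distance and function name are illustrative assumptions, not figures from the patent:

    import math

    def footprint(divergence_deg, fanout_deg, n_beams, distance_m):
        """Approximate width of one projected pattern tile and of the
        full n x n fan-out on a flat surface perpendicular to the beam
        axis at distance_m (small-angle, flat-surface assumptions)."""
        tile = 2 * distance_m * math.tan(math.radians(divergence_deg / 2))
        total = 2 * distance_m * math.tan(math.radians(fanout_deg / 2))
        pitch = fanout_deg / n_beams  # angular spacing between output beams
        return tile, total, pitch

    # Illustrative numbers only: a 5-degree pattern divergence and a
    # 3 x 3 beam matrix spanning a 15-degree fan-out, projected at 2 m.
    tile, total, pitch = footprint(5, 15, 3, 2.0)
    print(f"tile ~{tile:.2f} m, footprint ~{total:.2f} m, pitch {pitch:.1f} deg")
    # Adjacent instances of the pattern tile the region when the beam
    # pitch equals the divergence angle (here both are 5 degrees).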

This isn’t Apple’s first patent regarding 3D interfaces. Patent No. 9,235,753 is for the “extraction of skeletons from 3D maps.” It involves a method for processing data that includes receiving a timed sequence of depth maps of a scene containing a humanoid form having a head. 

Patent No. 9,218,063 covers a “sessionless pointing user interface” that uses a sequence of 3D maps so that a Mac or other device could detect your gestures and respond accordingly.

Patent No. 7,348,963 describes an interactive video display system in which a display screen displays a visual image, and a camera captures 3D information regarding an object in an interactive area located in front of the display screen. A computer system directs the display screen to change the visual image in response to changes in the object.

What’s more, U.S. Patent Application Publication No. 2010/0034457 describes a method for modeling humanoid forms from depth maps. The depth map is segmented so as to find a contour of the body. The contour is processed in order to identify a torso and one or more limbs of the subject. An input is generated to control an application program running on a computer by analyzing a disposition of at least one of the identified limbs in the depth map.
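
Taken together, that publication describes a simple pipeline: segment the depth map, trace the body contour, separate torso from limbs, and turn limb disposition into a control input. The Python sketch below walks through those four steps with deliberately crude stand-ins; every threshold and helper here is hypothetical, not the method actually claimed:

    import numpy as np

    def control_input_from_depth(depth_map, max_body_depth_m=3.0):
        """Hypothetical sketch of the described flow: segment a body from
        the depth map, trace its contour, and turn limb disposition into
        a control input. Every threshold and helper here is illustrative."""
        # 1. Segment: keep valid pixels nearer than a depth threshold.
        body = (depth_map > 0) & (depth_map < max_body_depth_m)

        # 2. Contour: body pixels with at least one non-body 4-neighbor.
        p = np.pad(body, 1, constant_values=False)
        interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
        contour = body & ~interior

        # 3. Torso vs. limbs (crude stand-in): pixels far from the
        # horizontal center of mass are treated as limb candidates.
        _, xs = np.nonzero(body)
        if xs.size == 0:
            return None
        cx = xs.mean()
        limb = np.abs(xs - cx) > xs.std()
        dx = float((xs[limb] - cx).mean()) if limb.any() else 0.0

        # 4. Control input: e.g. an extended arm nudges an on-screen pointer.
        return {"pointer_dx": dx, "contour_pixels": int(contour.sum())}

    # Example with a synthetic 240 x 320 depth frame (values in meters):
    frame = np.full((240, 320), 5.0)
    frame[60:200, 140:180] = 2.0   # torso-like blob
    frame[100:115, 180:260] = 2.0  # an extended arm
    print(control_input_from_depth(frame))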

Apple has also been scooping up companies that make 3D technologies. Earlier this month Apple acquired Emotient Inc., a startup that uses artificial-intelligence technology to read people’s emotions by analyzing facial expressions.

Last year it bought Faceshift, which makes Faceshift Studio, a markerless facial motion capture system. It analyzes the face motions of an actor and describes them as a mixture of basic expressions, plus head orientation and gaze, to create a custom 3D avatar and to record facial animation data in real time. The animation data may be streamed live into Maya, MotionBuilder or Unity, or exported in a range of standard file formats, including BVH and FBX.
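
That “mixture of basic expressions” is the standard blendshape model: a tracker reduces each frame to one weight per basic expression, and the animated mesh is the neutral face plus a weighted sum of per-expression offsets. A minimal numpy illustration follows; the array shapes, weights and index choices are invented for the example, not Faceshift’s actual data:

    import numpy as np

    # Hypothetical data: a neutral face mesh of V vertices and a bank of
    # K basic-expression offsets (deltas from neutral), as in blendshape rigs.
    V, K = 1000, 50
    neutral = np.random.rand(V, 3)                # stand-in neutral mesh
    expressions = np.random.rand(K, V, 3) * 0.01  # stand-in expression deltas

    def animate(weights):
        """Reconstruct a frame as neutral + weighted sum of expression
        deltas; `weights` is the per-frame vector a tracker would emit."""
        w = np.asarray(weights).reshape(K, 1, 1)
        return neutral + (w * expressions).sum(axis=0)

    # One frame: 70% of (say) a smile-like shape, 30% of a brow raise.
    weights = np.zeros(K)
    weights[3], weights[7] = 0.7, 0.3
    frame_mesh = animate(weights)                 # (V, 3) vertex positions

Streaming such a per-frame weight vector, rather than full vertex data, is what makes a real-time pipeline into tools like Maya or Unity practical.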

In 2013, Apple bought PrimeSense, an Israeli maker of chips that enable three-dimensional (3D) machine vision. The chips’ 3D sensors are designed to enable natural interaction between people and devices and between devices and their surroundings. The company’s machine vision products map out 3D environments and track the movements of bodies, faces and facial expressions.

And in 2010 Apple scooped up all of the shares of a Swedish face recognition company called Polar Rose. The company had a service that let users name people in their photos on photo-sharing sites like Flickr and among their Facebook contacts; using facial recognition, Polar Rose then auto-tagged those people for users.

A 3D interface could be used for gaming, manipulating onscreen objects and turning gadgets such as the lights in a room on and off. Gestures could handle scrolling, selecting, zooming and more.
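
On the software side, mapping recognized gestures to those actions could be as simple as a dispatch table. The gesture names and actions below are purely illustrative, not anything from Apple’s filings:

    # Hypothetical gesture-to-action dispatch for such an interface.
    GESTURE_ACTIONS = {
        "swipe_up": "scroll_up",
        "swipe_down": "scroll_down",
        "pinch_out": "zoom_in",
        "pinch_in": "zoom_out",
        "tap": "select",
        "wave": "toggle_lights",
    }

    def handle(gesture: str) -> str:
        """Map a recognized gesture label to an application action.
        A real system would carry parameters such as speed and hand
        position along with the label, not just a string."""
        return GESTURE_ACTIONS.get(gesture, "ignore")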