Apple has filed for a patent (number 2) for “devices, methods, and graphical user interfaces for interacting with three-dimensional environments.” It involves Macs with displays capable of sensing and reacting to movements and gestures so that users can manipulate 3D objects.
Apple notes that the development of computer systems for augmented reality has increased significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays are used to interact with virtual and augmented reality environments. Example virtual elements include digital images, video, text, icons, and control elements such as buttons and other graphics.
However, Apple says that methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are “cumbersome, inefficient, and limited.” For example, systems that provide insufficient feedback for performing actions associated with virtual objects, systems that require a series of inputs to achieve a desired outcome in an augmented reality environment, and systems in which manipulation of virtual objects is “complex, tedious and error-prone” all “create a significant cognitive burden on a user, and detract from the experience with the virtual/augmented reality environment.”
In addition, these methods take longer than necessary, thereby wasting energy, according to Apple. This latter consideration is particularly important in battery-operated devices. The tech giant wants to overcome such limitations.
Here’s the summary of the patent filing: “While displaying a three-dimensional environment, a computer system detects a hand at a first position that corresponds to a portion of the three-dimensional environment. In response to detecting the hand at the first position: in accordance with a determination that the hand is being held in a first predefined configuration, the computer system displays a visual indication of a first operation context for gesture input using hand gestures in the three-dimensional environment; and in accordance with a determination that the hand is not being held in the first predefined configuration, the computer system forgoes display of the visual indication.”
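The logic in that summary boils down to a single conditional check: show the visual indication only when a hand is detected at the relevant position and is held in the predefined configuration. Here is a minimal Swift sketch of that decision, with hypothetical type and pose names (the filing does not specify what the “first predefined configuration” actually is):

```swift
// Hypothetical hand poses; the patent does not name a specific
// configuration, so "pinchReady" here is purely illustrative.
enum HandConfiguration {
    case pinchReady
    case open
    case unknown
}

// Decides whether to display the visual indication of the first
// operation context for gesture input, per the patent summary:
// show it only if a hand is detected at the relevant position AND
// held in the predefined configuration; otherwise forgo it.
func shouldShowIndication(handDetected: Bool,
                          configuration: HandConfiguration) -> Bool {
    return handDetected && configuration == .pinchReady
}
```

This is only a sketch of the described behavior, not Apple’s implementation; a real system would derive the hand position and pose from camera or sensor input rather than boolean flags.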