US Patent Application 18217711: USER INTERACTION INTERPRETER (Simplified Abstract)


USER INTERACTION INTERPRETER

Organization Name

Apple Inc.


Inventor(s)

Edwin Iskandar of San Jose, CA (US)

Ittinop Dumnernchanvanit of Mountain View, CA (US)

Samuel L. Iglesias of Palo Alto, CA (US)

Timothy R. Oriol of San Jose, CA (US)

USER INTERACTION INTERPRETER - A simplified explanation of the abstract

This abstract first appeared for US patent application 18217711, titled 'USER INTERACTION INTERPRETER'.

Simplified Explanation

- This patent application describes devices, systems, and methods that provide a mixed reality (CGR) environment containing virtual objects supplied by one or more apps.
- User interactions with these virtual objects are detected and interpreted by a system that is separate from the apps that provide the objects.
- Interactions arriving via different input modalities are interpreted as events, which give a higher-level, modality-independent abstraction of the lower-level, modality-dependent input that was detected.
- The system relies on UI capability data provided by the apps to understand how users may interact with each virtual object (a sketch follows this list).
- The UI capability data can specify, for example, whether a virtual object is moveable, actionable, or hover-able.
- Based on this data, the system interprets user interactions at or near each virtual object accordingly.
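The capability-gated interpretation described in the list can be made concrete with a short sketch. The following Swift code is a minimal illustration under assumed names: `UICapabilityData`, `InteractionEvent`, `RawInput`, and `InteractionInterpreter` are all hypothetical, since the application discloses behavior rather than an API.

```swift
// A minimal sketch, assuming hypothetical type and property names;
// the patent application itself does not publish an API.

// UI capability data an app might attach to each virtual object it provides,
// declaring which interactions the object supports.
struct UICapabilityData {
    var isMoveable: Bool
    var isActionable: Bool
    var isHoverable: Bool
}

// Higher-level, input-modality-independent events produced by the interpreter.
enum InteractionEvent {
    case hoverBegan(objectID: String)
    case activated(objectID: String)
    case moved(objectID: String, to: SIMD3<Float>)
}

// Lower-level, input-modality-dependent interactions the system might detect.
enum RawInput {
    case gazeDwell(objectID: String)
    case pinch(objectID: String)
    case drag(objectID: String, position: SIMD3<Float>)
}

// The interpreter runs outside the apps: it consults each object's declared
// capabilities before turning a raw input into an abstract event, and it
// drops inputs the object does not support.
struct InteractionInterpreter {
    var capabilities: [String: UICapabilityData]

    func interpret(_ input: RawInput) -> InteractionEvent? {
        switch input {
        case .gazeDwell(let id) where capabilities[id]?.isHoverable == true:
            return .hoverBegan(objectID: id)
        case .pinch(let id) where capabilities[id]?.isActionable == true:
            return .activated(objectID: id)
        case .drag(let id, let pos) where capabilities[id]?.isMoveable == true:
            return .moved(objectID: id, to: pos)
        default:
            return nil
        }
    }
}
```

Keeping the interpreter outside the apps means no app needs to handle raw gaze, hand, or controller input itself; each app only declares capabilities and consumes abstract events.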


Original Abstract Submitted

Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide a higher-level, input modality-independent, abstractions of the lower-level input-modality dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual object provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc. and the system interprets user interactions at or near the virtual object accordingly.
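Continuing the hypothetical sketch above, the abstract's modality-independence can be illustrated with a brief usage example. The object ID and capability values here are illustrative, not taken from the application.

```swift
// Hypothetical usage of the sketch above: "buttonA" is actionable and
// hover-able but not moveable, so a drag is dropped rather than forwarded.
let interpreter = InteractionInterpreter(capabilities: [
    "buttonA": UICapabilityData(isMoveable: false,
                                isActionable: true,
                                isHoverable: true)
])

let events = [
    interpreter.interpret(.gazeDwell(objectID: "buttonA")),           // hoverBegan
    interpreter.interpret(.pinch(objectID: "buttonA")),               // activated
    interpreter.interpret(.drag(objectID: "buttonA", position: .zero)) // nil
]
```

The providing app would observe only the resulting events, never the raw gaze or hand input, which is the higher-level abstraction the abstract describes.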