Apple Inc. (20240248532). METHOD AND DEVICE FOR VISUALIZING MULTI-MODAL INPUTS simplified abstract
METHOD AND DEVICE FOR VISUALIZING MULTI-MODAL INPUTS
Organization Name
Apple Inc.
Inventor(s)
Thomas G. Salter of Foster City CA (US)
Brian W. Temple of Santa Clara CA (US)
Gregory Lutter of Boulder Creek CA (US)
METHOD AND DEVICE FOR VISUALIZING MULTI-MODAL INPUTS - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240248532 titled 'METHOD AND DEVICE FOR VISUALIZING MULTI-MODAL INPUTS'.
- Simplified Explanation:
This patent application describes a method for visualizing multi-modal inputs in an extended reality (XR) environment: a focus indicator takes on a first appearance when the user's gaze is directed at a user interface element, and changes to a second appearance when the user's head or body pose subsequently changes (see the sketch after the feature list below).
- Key Features and Innovation:
- Displaying a user interface element in an XR environment
- Changing the appearance of a focus indicator based on user gaze direction
- Modifying the focus indicator based on changes in user pose
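The behavior above can be read as a small state machine: gaze on the element yields a first appearance, and a subsequent head- or body-pose change yields a second appearance. The Swift sketch below is a hypothetical model of that logic, assuming invented names (FocusAppearance, FocusIndicator, and the method names do not appear in the application):

```swift
// Hypothetical appearance states for the focus indicator described in the
// application: a first appearance on gaze, a second after a pose change.
enum FocusAppearance {
    case hidden        // gaze not on the element
    case gazeFocused   // "first appearance": gaze directed at the element
    case poseModified  // "second appearance": pose changed while focused
}

// Minimal state machine for one user interface element.
struct FocusIndicator {
    private(set) var appearance: FocusAppearance = .hidden

    // Called each frame with the current gaze target.
    mutating func gazeChanged(isOnElement: Bool) {
        appearance = isOnElement ? .gazeFocused : .hidden
    }

    // Called when the system detects a head- or body-pose change.
    mutating func poseChanged() {
        if appearance == .gazeFocused {
            appearance = .poseModified
        }
    }
}

// Usage: gaze lands on the element, then a pose change modifies the indicator.
var indicator = FocusIndicator()
indicator.gazeChanged(isOnElement: true)   // -> .gazeFocused
indicator.poseChanged()                    // -> .poseModified
print(indicator.appearance)
```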
- Potential Applications:
- Augmented reality (AR) and virtual reality (VR) applications
- User interface design for XR environments
- Interactive experiences in XR environments
- Problems Solved:
- Enhancing user interaction in XR environments
- Improving user experience with multi-modal inputs
- Providing visual cues for user engagement
- Benefits:
- Enhanced user engagement and interaction
- Improved user experience in XR environments
- More intuitive and responsive interfaces
- Commercial Applications:
"Enhancing User Engagement in Extended Reality Environments: Potential Applications and Market Implications"
- Prior Art:
Relevant prior art can be researched in the fields of XR user interface design and gaze interaction technologies.
- Frequently Updated Research:
Follow ongoing research in XR user interface design and gaze interaction technologies to stay current with related innovations.
Questions about XR:
1. How does this technology improve user engagement in XR environments?
2. What are the potential commercial applications of this method in XR user interface design?
Original Abstract Submitted
In one implementation, a method for visualizing multi-modal inputs includes: displaying a first user interface element within an extended reality (XR) environment; determining a gaze direction based on first input data; in response to determining that the gaze direction is directed to the first user interface element, displaying a focus indicator with a first appearance in association with the first user interface element; detecting a change in pose of at least one of a head pose or a body pose of a user of the computing system; and, in response to detecting the change of pose, modifying the focus indicator from the first appearance to a second appearance different from the first appearance.
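The abstract leaves open how the system determines that the gaze direction "is directed to" the first user interface element. One plausible implementation is a ray-versus-bounding-sphere test against the element's position, sketched below in plain Swift; Vec3 and gazeHits are invented names, and this geometry is an assumption rather than the application's disclosed method (a real implementation would likely use platform vector types such as simd):

```swift
// Hypothetical 3-D vector type; shown in plain Swift for portability.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 {
        Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
    }
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
}

// Returns true if the gaze ray passes within `radius` of the element's center.
func gazeHits(elementCenter: Vec3, radius: Double,
              gazeOrigin: Vec3, gazeDirection: Vec3) -> Bool {
    let toCenter = elementCenter - gazeOrigin
    let along = toCenter.dot(gazeDirection)   // assumes gazeDirection is unit length
    guard along > 0 else { return false }     // element is behind the viewer
    let closestSq = toCenter.dot(toCenter) - along * along
    return closestSq <= radius * radius       // closest approach is inside the sphere
}

// Example: an element one metre ahead, gaze pointing straight at it.
let hit = gazeHits(elementCenter: Vec3(x: 0, y: 0, z: -1), radius: 0.1,
                   gazeOrigin: Vec3(x: 0, y: 0, z: 0),
                   gazeDirection: Vec3(x: 0, y: 0, z: -1))
print(hit)  // true
```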