18272261. METHOD AND DEVICE FOR VISUALIZING MULTI-MODAL INPUTS simplified abstract (Apple Inc.)

Organization Name

Apple Inc.

Inventor(s)

Thomas G. Salter of Foster City, CA (US)

Brian W. Temple of Santa Clara, CA (US)

Gregory Lutter of Boulder Creek, CA (US)

METHOD AND DEVICE FOR VISUALIZING MULTI-MODAL INPUTS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18272261, titled 'METHOD AND DEVICE FOR VISUALIZING MULTI-MODAL INPUTS'.

The abstract describes a method for visualizing multi-modal inputs in an extended reality (XR) environment. The method involves the following steps (a code sketch follows the list):

  • Displaying a first user interface element within an XR environment
  • Determining gaze direction based on input data
  • Displaying a focus indicator when gaze is directed at the user interface element
  • Detecting changes in user pose (head or body)
  • Modifying the focus indicator based on pose changes
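The sequence above maps naturally onto a small state machine. The Swift sketch below illustrates one way the flow could be wired together; it is a minimal sketch, not the application's implementation. All names here are hypothetical stand-ins (FocusAppearance, GazeSample, PoseSample, and the pose-change threshold are not specified by the application), and real input would come from eye- and head-tracking hardware rather than hand-built samples.

```swift
// Hypothetical focus-indicator states: the application only says the indicator
// moves from a "first appearance" to a "second appearance" on a pose change.
enum FocusAppearance {
    case hidden // gaze is elsewhere
    case first  // gaze rests on the element
    case second // gaze on the element, plus a head/body pose change
}

// Illustrative stand-ins for the inputs the abstract names.
struct GazeSample { let isOnElement: Bool }
struct PoseSample {
    let headYaw: Double // radians
    let bodyYaw: Double // radians
}

final class FocusIndicatorController {
    private(set) var appearance: FocusAppearance = .hidden
    private var lastPose: PoseSample?
    // Assumed threshold: how much rotation counts as a "change in pose".
    private let poseChangeThreshold = 0.05 // ~3 degrees

    func update(gaze: GazeSample, pose: PoseSample) {
        defer { lastPose = pose } // remember the pose for the next frame
        guard gaze.isOnElement else {
            appearance = .hidden
            return
        }
        // Gaze is on the element: show the first appearance, then switch to
        // the second appearance if the pose moved since the last frame.
        if let last = lastPose,
           abs(pose.headYaw - last.headYaw) > poseChangeThreshold ||
           abs(pose.bodyYaw - last.bodyYaw) > poseChangeThreshold {
            appearance = .second
        } else if appearance == .hidden {
            appearance = .first
        }
    }
}

// Example frames: gaze lands on the element, then the head turns.
let controller = FocusIndicatorController()
controller.update(gaze: .init(isOnElement: true), pose: .init(headYaw: 0.00, bodyYaw: 0))
print(controller.appearance) // first
controller.update(gaze: .init(isOnElement: true), pose: .init(headYaw: 0.10, bodyYaw: 0))
print(controller.appearance) // second
```

In this sketch the second appearance stays in effect until gaze leaves the element; whether it reverts after the pose stabilizes is a design choice the abstract leaves open.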

Potential Applications

  • Augmented reality applications
  • Virtual reality training simulations
  • Interactive gaming experiences

Problems Solved

  • Enhancing user interaction in XR environments
  • Improving user experience with multi-modal inputs

Benefits

  • Increased user engagement
  • Enhanced immersion in XR environments
  • Improved usability for various applications

Commercial Applications

This technology could be used in industries such as:

  • Gaming
  • Education
  • Healthcare
  • Design and visualization

Questions about the technology

1. How does this method improve user interaction in XR environments?

   This method enhances user interaction by dynamically adjusting focus indicators based on gaze direction and user pose changes.

2. What are the potential applications of this technology beyond XR environments?

   This technology could potentially be applied to other interactive interfaces to improve user experience and engagement.


Original Abstract Submitted

In one implementation, a method for visualizing multi-modal inputs includes: displaying a first user interface element within an extended reality (XR) environment; determining a gaze direction based on first input data; in response to determining that the gaze direction is directed to the first user interface element, displaying a focus indicator with a first appearance in association with the first user interface element; detecting a change in pose of at least one of a head pose or a body pose of a user of the computing system; and, in response to detecting the change of pose, modifying the focus indicator from the first appearance to a second appearance different from the first appearance.
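One step the abstract leaves abstract is determining that "the gaze direction is directed to the first user interface element." A common approach in XR systems is to cast a ray along the gaze direction and test it against the element's geometry. The Swift sketch below shows a minimal ray–rectangle test under simplifying assumptions (a flat element facing the user along the z-axis); the types and names (Vec3, UIElementPlane, gazeHits) are illustrative, not from the application.

```swift
// Minimal 3D vector with just the operations the hit test needs.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 {
        .init(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
    }
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
}

// A rectangular UI element lying in a plane, described by its center,
// unit normal, and half-extents.
struct UIElementPlane {
    let center: Vec3
    let normal: Vec3
    let halfWidth: Double
    let halfHeight: Double
}

// Returns true when the gaze ray intersects the element's rectangle,
// i.e. the gaze direction is directed to the user interface element.
func gazeHits(origin: Vec3, direction: Vec3, element e: UIElementPlane) -> Bool {
    let denom = direction.dot(e.normal)
    guard abs(denom) > 1e-9 else { return false } // ray parallel to the plane
    let t = (e.center - origin).dot(e.normal) / denom
    guard t > 0 else { return false }             // element is behind the user
    let hit = Vec3(x: origin.x + t * direction.x,
                   y: origin.y + t * direction.y,
                   z: origin.z + t * direction.z)
    let local = hit - e.center
    // Simplifying assumption: the element faces the z-axis, so the local
    // x/y offsets are the in-plane axes for the bounds check.
    return abs(local.x) <= e.halfWidth && abs(local.y) <= e.halfHeight
}

// A panel one meter in front of the user, gazed at straight ahead.
let panel = UIElementPlane(center: .init(x: 0, y: 0, z: -1),
                           normal: .init(x: 0, y: 0, z: 1),
                           halfWidth: 0.2, halfHeight: 0.1)
print(gazeHits(origin: .init(x: 0, y: 0, z: 0),
               direction: .init(x: 0, y: 0, z: -1), element: panel)) // true
```

A production system would also rank hits across multiple elements and smooth noisy gaze samples, but those details fall outside what the abstract claims.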
