Robert Bosch GmbH (20240231580). SYSTEM AND METHOD FOR MULTI MODAL INPUT AND EDITING ON A HUMAN MACHINE INTERFACE simplified abstract

From WikiPatents

SYSTEM AND METHOD FOR MULTI MODAL INPUT AND EDITING ON A HUMAN MACHINE INTERFACE

Organization Name

Robert Bosch GmbH

Inventor(s)

Zhengyu Zhou of Fremont CA (US)

Jiajing Guo of Mountain View CA (US)

Nan Tian of Foster City CA (US)

Nicholas Feffer of Stanford CA (US)

William Ma of Lagrangeville NY (US)

SYSTEM AND METHOD FOR MULTI MODAL INPUT AND EDITING ON A HUMAN MACHINE INTERFACE - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240231580, titled 'SYSTEM AND METHOD FOR MULTI MODAL INPUT AND EDITING ON A HUMAN MACHINE INTERFACE'.

The abstract describes a virtual reality apparatus with features such as a display for outputting information, a microphone for receiving spoken word commands, an eye gaze sensor for tracking user eye movement, and a processor for various functions related to text input and editing.

  • Display outputs information related to the user interface of the virtual reality device.
  • Microphone receives spoken word commands from the user during a voice recognition session.
  • Eye gaze sensor tracks the user's eye movement.
  • Processor responds to inputs by outputting text, emphasizing words, toggling through words, highlighting and editing words, and suggesting words based on contextual information.
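The processor behavior described above can be sketched as a small state machine: a gaze that dwells past a threshold emphasizes a word group, a toggle input cycles only within that group, and a second input edits the word under the cursor. This is a hypothetical illustration, not the patent's implementation; all class and method names, and the dwell threshold, are invented for clarity.

```python
DWELL_THRESHOLD_S = 1.0  # hypothetical dwell time before a gaze "selects" a group


class GazeTextEditor:
    """Illustrative sketch of the described processor logic (names are invented)."""

    def __init__(self, words):
        self.words = list(words)   # words of the text field
        self.emphasized = None     # (start, end) index range of the emphasized group
        self.cursor = 0            # position while toggling within the group
        self._gaze_target = None   # group currently being looked at
        self._gaze_start = None    # timestamp when that gaze began

    def on_gaze(self, group, now):
        """Emphasize a group once the user's gaze dwells past the threshold."""
        if group != self._gaze_target:
            self._gaze_target, self._gaze_start = group, now
            return
        if self.emphasized is None and now - self._gaze_start >= DWELL_THRESHOLD_S:
            self.emphasized = group
            self.cursor = group[0]

    def toggle(self):
        """Step the cursor through the words of only the emphasized group."""
        if self.emphasized is None:
            return
        lo, hi = self.emphasized
        self.cursor = lo + (self.cursor - lo + 1) % (hi - lo + 1)

    def edit(self, replacement):
        """Second input: highlight and edit the word under the cursor."""
        if self.emphasized is not None:
            self.words[self.cursor] = replacement
```

In this sketch, a gaze on the group spanning words 1–2 that lasts past the threshold emphasizes that group, after which toggling and editing act only inside it.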

Potential Applications:

  • Virtual reality gaming
  • Virtual meetings and collaboration
  • Accessibility tools for individuals with disabilities
  • Language learning and translation applications

Problems Solved:

  • Enhancing user interaction in virtual reality environments
  • Improving text input and editing capabilities in virtual reality
  • Providing a more intuitive and efficient way to communicate in virtual reality

Benefits:

  • Improved user experience in virtual reality
  • Enhanced productivity and communication in virtual reality applications
  • Increased accessibility for users with different needs
  • More natural and intuitive text input and editing processes

Commercial Applications:

This technology can be utilized in virtual reality gaming, virtual meetings, language learning applications, and accessibility tools. It has the potential to improve user engagement, productivity, and communication in various virtual reality settings.

Questions about Virtual Reality Apparatus with Text Input and Editing Features:

1. How does the eye gaze sensor enhance user interaction in virtual reality? The eye gaze sensor tracks the user's eye movement, allowing for features such as emphasizing words and toggling through text, providing a more intuitive and efficient way to interact with virtual reality environments.

2. What are the potential benefits of using a language model to suggest words in a virtual reality text input interface? By utilizing contextual information associated with the text field, the language model can suggest relevant words, improving the speed and accuracy of text input and editing in virtual reality applications.
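As a toy illustration of context-based word suggestion, the snippet below uses a simple bigram frequency count over a small corpus. This is a deliberately minimal stand-in for the language model the abstract refers to; the corpus, function names, and ranking scheme are all assumptions made for the example.

```python
from collections import Counter, defaultdict


def build_bigram_model(corpus):
    """Count which word follows which in a toy corpus (stand-in for a real language model)."""
    model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model


def suggest(model, context_word, k=3):
    """Return up to k words most likely to follow the given context word."""
    return [word for word, _ in model[context_word.lower()].most_common(k)]
```

Given the corpus `["turn left now", "turn left here", "turn right now"]`, `suggest(model, "turn")` ranks "left" first because it follows "turn" most often; a production system would instead query a trained language model conditioned on the text field's context.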


Original Abstract Submitted

A virtual reality apparatus that includes a display configured to output information related to a user interface of the virtual reality device, a microphone configured to receive one or more spoken word commands from a user upon activation of a voice recognition session, an eye gaze sensor configured to track eye movement of the user, and a processor programmed to, in response to a first input, output one or more words of a text field, in response to an eye gaze of the user exceeding a threshold time, emphasize a group of one or more words of the text field, toggle through a plurality of words of only the group utilizing the input interface, in response to a second input, highlight and edit an edited word from the group, and in response to utilizing contextual information associated with the group and a language model, output one or more suggested words.