20240015465. IMMERSIVE AUDIO PLATFORM simplified abstract (Magic Leap, Inc.)

From WikiPatents

IMMERSIVE AUDIO PLATFORM

Organization Name

Magic Leap, Inc.

Inventor(s)

Jean-Marc Jot of Aptos CA (US)

Michael Minnick of Fort Lauderdale FL (US)

Dmitry Pastouchenko of Folsom CA (US)

Michael Aaron Simon of Fort Lauderdale FL (US)

John Emmitt Scott, III of Plantation FL (US)

Richard St. Clair Bailey of Plantation FL (US)

Shivakumar Balasubramanyan of San Diego CA (US)

Harsharaj Agadi of Plantation FL (US)

IMMERSIVE AUDIO PLATFORM - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240015465, titled 'IMMERSIVE AUDIO PLATFORM'.

Simplified Explanation

The disclosed patent application relates to systems and methods for presenting audio content in mixed reality environments. The method involves receiving inputs from an application program and sensors of a wearable head device, and generating and presenting a spatialized audio stream based on these inputs.

  • The method starts by receiving a first input from an application program.
  • In response to the first input, an encoded audio stream is received via a first service.
  • The first service then generates a decoded audio stream based on the encoded audio stream.
  • The decoded audio stream is received via a second service.
  • A second input is received from one or more sensors of a wearable head device.
  • A third input is received from the application program, which corresponds to the position of one or more virtual speakers.
  • The second service generates a spatialized audio stream based on the decoded audio stream, the second input, and the third input.
  • Finally, the spatialized audio stream is presented via one or more speakers of the wearable head device.
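As a rough illustration, the two-service pipeline above could be sketched as follows. The class names, the byte-to-sample "codec", and the stereo panning math are all hypothetical simplifications for illustration, not details from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class HeadPose:
    """Listener head state from the wearable device's sensors (simplified)."""
    position: tuple  # (x, y, z) in world coordinates
    yaw: float       # horizontal facing direction, in radians

class DecodingService:
    """First service: produces a decoded audio stream from an encoded one."""
    def decode(self, encoded: bytes) -> list:
        # Placeholder "codec": map raw bytes to samples in [0, 1].
        return [b / 255.0 for b in encoded]

class SpatializerService:
    """Second service: spatializes audio from head pose and a virtual speaker position."""
    def spatialize(self, samples, head, speaker_pos):
        # Direction of the virtual speaker relative to the listener's facing.
        dx = speaker_pos[0] - head.position[0]
        dz = speaker_pos[2] - head.position[2]
        azimuth = math.atan2(dx, dz) - head.yaw
        # Toy stereo panning: per-channel gains derived from the azimuth.
        pan = (math.sin(azimuth) + 1) / 2  # 0 = hard left, 1 = hard right
        left = [s * (1 - pan) for s in samples]
        right = [s * pan for s in samples]
        return left, right

# Pipeline: app input -> decode (service 1) -> spatialize (service 2) -> present.
decoder = DecodingService()
spatializer = SpatializerService()
samples = decoder.decode(bytes([0, 128, 255]))
head = HeadPose(position=(0.0, 0.0, 0.0), yaw=0.0)
left, right = spatializer.spatialize(samples, head, (1.0, 0.0, 1.0))
# With the speaker ahead and to the right, the right channel comes out louder.
```

A real renderer would apply HRTF filtering rather than simple stereo gains; the sketch only shows how head pose and virtual speaker position jointly shape the presented stream.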

Potential applications of this technology:

  • Virtual reality (VR) and augmented reality (AR) experiences: This technology can enhance the immersive audio experience in VR and AR applications by providing spatialized audio that matches the virtual environment.
  • Gaming: The spatialized audio stream can be used to create a more realistic and immersive gaming experience, allowing players to hear sounds coming from specific directions within the game world.
  • Entertainment and media: This technology can be utilized in the production of movies, TV shows, and other media content to create a more immersive audio experience for viewers.

Problems solved by this technology:

  • Limited audio immersion: Traditional audio systems often fail to immerse the listener because the sound is neither spatially accurate nor dynamic. The disclosed method addresses this by generating a spatialized audio stream that matches the virtual environment and the user's head movements.
  • Audio localization: By using the position of virtual speakers and the user's head movements, this technology can accurately localize audio sources in the mixed reality environment, enhancing the overall audio experience.
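The localization idea can be illustrated with a small head-relative azimuth calculation. This is a hypothetical sketch of the underlying geometry, not math taken from the patent:

```python
import math

def relative_azimuth(speaker_pos, head_pos, head_yaw):
    """Angle of a world-anchored virtual speaker relative to the listener's facing."""
    dx = speaker_pos[0] - head_pos[0]
    dz = speaker_pos[2] - head_pos[2]
    return math.atan2(dx, dz) - head_yaw

# A speaker directly ahead reads as 0 rad; turning the head by t shifts the
# perceived direction by -t, so the sound stays anchored in the world rather
# than following the head.
ahead = relative_azimuth((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 0.0)       # 0.0
after_turn = relative_azimuth((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 0.5)  # -0.5
```

Re-running this per sensor frame is what keeps virtual sources world-locked as the wearer moves.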

Benefits of this technology:

  • Enhanced immersion: The spatialized audio stream provides a more realistic and immersive experience in mixed reality environments, making the user feel more present in the virtual world.
  • Accurate audio localization: By accurately localizing audio sources, this technology improves the user's ability to perceive and locate sounds within the mixed reality environment.
  • Dynamic audio experience: The spatialized audio stream can dynamically adjust based on the user's head movements and the position of virtual speakers, creating a more interactive and engaging audio experience.


Original Abstract Submitted

Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; and presenting, via one or more speakers of the wearable head device, the spatialized audio stream.