Magic Leap, Inc. (20240284138). PHYSICS-BASED AUDIO AND HAPTIC SYNTHESIS simplified abstract

From WikiPatents

PHYSICS-BASED AUDIO AND HAPTIC SYNTHESIS

Organization Name

Magic Leap, Inc.

Inventor(s)

Colby Nelson Leider of Coral Gables FL (US)

Justin Dan Mathew of Fort Lauderdale FL (US)

Michael Z. Land of Mill Valley CA (US)

Blaine Ivin Wood of Eagle Mountain UT (US)

Jung-Suk Lee of Santa Clara CA (US)

Anastasia Andreyevna Tajik of Fort Lauderdale FL (US)

Jean-Marc Jot of Aptos CA (US)

PHYSICS-BASED AUDIO AND HAPTIC SYNTHESIS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240284138, titled 'PHYSICS-BASED AUDIO AND HAPTIC SYNTHESIS'.

The patent application describes systems and methods for generating and presenting virtual audio for mixed reality systems. Key operations include:

  • Determining collisions between virtual objects and synthesizing audio signals based on these collisions.
  • Accessing stored audio models to generate custom audio based on the objects involved in the collision.
  • Presenting the synthesized audio signals to users via head-wearable devices.
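Physics-based collision audio of the kind the bullets describe is commonly implemented with modal synthesis, where a struck object's response is a sum of exponentially decaying sinusoids. The sketch below illustrates that general technique only; it is not the patent's implementation, and the mode parameters and function names are hypothetical:

```python
import math

def synthesize_impact(modes, impact_velocity, duration=0.5, sample_rate=48000):
    """Modal synthesis sketch: sum of damped sinusoids, one per resonant mode.
    `modes` is a list of (frequency_hz, decay_rate, amplitude) tuples --
    a hypothetical stand-in for a stored per-object audio model."""
    n = int(duration * sample_rate)
    samples = [0.0] * n
    for freq, decay, amp in modes:
        for i in range(n):
            t = i / sample_rate
            # Each mode rings at its frequency and decays exponentially;
            # impact velocity scales the overall excitation level.
            samples[i] += (impact_velocity * amp
                           * math.exp(-decay * t)
                           * math.sin(2 * math.pi * freq * t))
    return samples

# Example: a small object modeled with three resonant modes (illustrative values)
modes = [(440.0, 8.0, 1.0), (1230.0, 12.0, 0.5), (2750.0, 20.0, 0.25)]
signal = synthesize_impact(modes, impact_velocity=0.8)
```

In a mixed reality pipeline, the resulting samples would then be routed to the head-wearable device's speakers, as the bullets above note.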

Potential Applications: This technology can be used in virtual reality gaming, immersive experiences, virtual training simulations, and augmented reality applications.

Problems Solved: This technology addresses the challenge of creating realistic audio feedback in mixed reality environments where virtual objects interact with each other.

Benefits: Enhanced user experience, increased immersion, realistic audio feedback, improved training simulations, and interactive storytelling capabilities.

Commercial Applications: This technology can be applied in the gaming industry, entertainment sector, education and training fields, virtual tours, and virtual collaboration platforms.

Prior Art: Prior art related to this technology may include research on audio synthesis in virtual environments, collision detection algorithms, and virtual reality audio rendering techniques.

Frequently Updated Research: Stay updated on advancements in audio rendering technologies, virtual reality hardware improvements, and user experience studies in mixed reality environments.

Questions about Virtual Audio Generation in Mixed Reality Systems:

1. How does this technology improve user engagement in virtual reality experiences?
2. What are the key challenges in accurately synthesizing audio signals based on virtual object collisions?


Original Abstract Submitted

Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device. In accordance with a determination that the one or more audio models does not comprise an audio model corresponding to the first object, an acoustic property of the first object can be determined, a custom audio model based on the acoustic property of the first object can be generated, an audio signal can be synthesized, wherein the audio signal is based on the collision and the custom audio model, and the audio signal can be presented, via a speaker of a head-wearable device, to a user.
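The control flow claimed in the abstract, using a stored audio model when one exists for the object and otherwise generating a custom model from the object's acoustic property, can be sketched as follows. All names and the placeholder synthesis are illustrative assumptions, not Magic Leap's API:

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    id: str
    acoustic_property: float  # e.g., a damping-like material parameter

def make_custom_model(acoustic_property):
    # Placeholder: a real system would derive resonant behavior
    # from the object's acoustic properties.
    return {"damping": acoustic_property}

def synthesize(model, collision_energy):
    # Placeholder synthesis: louder collisions and lighter damping
    # yield a stronger signal.
    return collision_energy / (1.0 + model["damping"])

def audio_for_collision(obj, collision_energy, model_store):
    """Mirror of the claimed branching: prefer a stored audio model
    for the object; otherwise generate a custom model from its
    acoustic property, then synthesize from the collision."""
    model = model_store.get(obj.id)
    if model is None:
        model = make_custom_model(obj.acoustic_property)
        model_store[obj.id] = model  # cache for future collisions
    return synthesize(model, collision_energy)

store = {}
mug = VirtualObject(id="mug", acoustic_property=0.5)
level = audio_for_collision(mug, collision_energy=3.0, model_store=store)
```

Either branch ends the same way in the abstract: the synthesized signal is presented to the user via a speaker of the head-wearable device.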