20240048936. A Method and Apparatus for Scene Dependent Listener Space Adaptation simplified abstract (Nokia Technologies Oy)

From WikiPatents

A Method and Apparatus for Scene Dependent Listener Space Adaptation

Organization Name

Nokia Technologies Oy

Inventor(s)

Jussi Artturi Leppanen of Tampere (FI)

Sujeet Shyamsundar Mate of Tampere (FI)

Lasse Juhani Laaksonen of Tampere (FI)

Arto Juhani Lehtiniemi of Lempäälä (FI)

A Method and Apparatus for Scene Dependent Listener Space Adaptation - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240048936 titled 'A Method and Apparatus for Scene Dependent Listener Space Adaptation'.

Simplified Explanation

The patent application describes an apparatus that renders a combined audio scene by modifying a first audio scene according to a parameter from a further audio scene. The apparatus includes circuitry that obtains information defining a first audio scene parameter and further information defining a further audio scene parameter. It then identifies a location in the first audio scene to be modified, based at least in part on the further audio scene parameter. Finally, it prepares the combined audio scene for rendering by modifying the first audio scene at the identified location and incorporating the result into the combined scene.

  • The apparatus modifies a first audio scene based on a further audio scene parameter to create a combined audio scene.
  • It obtains information to define the first audio scene parameter and further information to define the further audio scene parameter.
  • It identifies a location for modifying the first audio scene, which can be partially based on the further audio scene parameter.
  • The modified first audio scene is incorporated into the rendering of the combined audio scene.
  • The apparatus allows for dynamic modification of audio scenes to create a more immersive and realistic audio experience.
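The steps above can be sketched in code. This is a minimal, hypothetical illustration of the claimed flow (obtain parameters, identify a location, modify, combine), not the patented implementation; all class names, fields, and the nearest-source location heuristic are assumptions introduced for clarity.

```python
from dataclasses import dataclass, field

# Hypothetical data model -- names and fields are illustrative, not from the patent.
@dataclass
class SceneParameter:
    name: str
    value: float

@dataclass
class AudioScene:
    sources: dict                                   # source_id -> (x, y, z) position
    parameters: dict = field(default_factory=dict)  # parameter name -> SceneParameter

def identify_location(first: AudioScene, further_param: SceneParameter) -> str:
    """Identify where to modify the first scene, based at least partially on
    the further scene parameter (here: the source nearest a target x-coordinate)."""
    target = further_param.value
    return min(first.sources, key=lambda sid: abs(first.sources[sid][0] - target))

def combine_scenes(first: AudioScene, further_param: SceneParameter) -> AudioScene:
    """Prepare a combined scene for rendering by modifying the first scene
    at the identified location using the further scene parameter."""
    location = identify_location(first, further_param)
    combined = AudioScene(sources=dict(first.sources),
                          parameters=dict(first.parameters))
    x, y, z = combined.sources[location]
    # Apply the modification at the identified location.
    combined.sources[location] = (further_param.value, y, z)
    combined.parameters[further_param.name] = further_param
    return combined
```

For example, with sources {"a": (0, 0, 0), "b": (3, 0, 0)} and a further parameter of 2.0, source "b" is nearest the target and is moved to x = 2.0, while the original first scene is left untouched.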

Potential applications of this technology:

  • Virtual reality (VR) and augmented reality (AR) applications can benefit from the apparatus by providing more realistic and immersive audio experiences.
  • The gaming industry can use this technology to enhance audio effects and create a more immersive gaming environment.
  • The movie and entertainment industry can utilize this technology to create more realistic and immersive sound effects in films and virtual experiences.
  • Audio production and mixing studios can use this apparatus to enhance their capabilities in creating dynamic and immersive audio scenes.

Problems solved by this technology:

  • Traditional audio rendering techniques may not provide sufficient flexibility to modify audio scenes dynamically.
  • Existing methods may not allow for seamless integration of modified audio scenes into a combined audio scene.
  • The apparatus solves the problem of identifying the location for modifying the first audio scene based on the further audio scene parameter, allowing for precise and targeted modifications.

Benefits of this technology:

  • Provides a more immersive and realistic audio experience for users in various applications.
  • Allows for dynamic modification of audio scenes, providing flexibility and adaptability.
  • Enhances the capabilities of audio production and mixing studios, enabling them to create more dynamic and immersive audio content.
  • Improves the overall quality and realism of audio effects in virtual reality, augmented reality, gaming, and entertainment industries.


Original Abstract Submitted

an apparatus for rendering a combined audio scene including circuitry configured to: obtain information configured to define, for a first audio scene, a first audio scene parameter; obtain further information configured to define, for a further audio scene, a further audio scene parameter; identify a location for a modification of at least in part the first audio scene, the location being configurable at least partially based on the further audio scene parameter; and prepare the combined audio scene for rendering, by modifying at least in part the first audio scene based on the further audio scene parameter such that the rendering of the combined audio scene incorporates the modified at least in part first audio scene based on the identified location using the further scene parameter.