18637073. ADAPTABLE SPATIAL AUDIO PLAYBACK simplified abstract (DOLBY INTERNATIONAL AB)

From WikiPatents

ADAPTABLE SPATIAL AUDIO PLAYBACK

Organization Name

DOLBY INTERNATIONAL AB

Inventor(s)

Alan J. Seefeldt of Alameda CA (US)

Joshua B. Lando of Mill Valley CA (US)

Daniel Arteaga of Barcelona (ES)

Glenn N. Dickins of Como (AU)

Mark Richard Paul Thomas of Walnut Creek CA (US)

ADAPTABLE SPATIAL AUDIO PLAYBACK - A simplified explanation of the abstract

This abstract first appeared for US patent application 18637073, titled 'ADAPTABLE SPATIAL AUDIO PLAYBACK'.

The abstract describes a method for determining a rendering mode for received audio data, including audio signals and spatial data, to be reproduced via loudspeakers in an environment.

  • The audio data is rendered according to the determined rendering mode to produce rendered audio signals.
  • The relative activation of a set of loudspeakers in the environment is determined during the rendering process.
  • The rendering mode can vary between a reference spatial mode and one or more distributed spatial modes.
  • In the distributed spatial modes, elements of the audio data are rendered in a more spatially distributed manner, and the spatial locations of remaining elements are warped to span the environment more completely.
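The mode-switching idea above can be sketched in a few lines of Python. This is an illustrative assumption of how such a renderer might be structured, not the patented method: the function names, the Gaussian gain model, and the outward-warping rule are all hypothetical stand-ins for the unspecified rendering details.

```python
import numpy as np

def warp_positions(positions, amount):
    """Push element positions outward so they span the rendering space
    more completely (amount=0: unchanged; amount=1: the outermost
    element is mapped to the room boundary at radius 1)."""
    positions = np.asarray(positions, dtype=float)
    max_radius = np.max(np.abs(positions))
    if max_radius == 0.0:
        return positions
    scale = 1.0 + amount * (1.0 / max_radius - 1.0)
    return positions * scale

def speaker_gains(position, speakers, spread):
    """Relative activation of each loudspeaker for one audio element.
    A larger `spread` distributes the element's energy over more
    speakers; gains are power-normalized."""
    dists = np.linalg.norm(speakers - position, axis=1)
    gains = np.exp(-dists**2 / (2.0 * spread**2))
    return gains / np.sqrt(np.sum(gains**2))

def render(positions, speakers, mode="reference"):
    """Return a (num_elements, num_speakers) gain matrix for the
    chosen rendering mode."""
    if mode == "reference":
        spread, warp = 0.3, 0.0   # compact imaging for an assumed listening position
    else:                          # "distributed"
        spread, warp = 1.0, 1.0   # spread elements and warp positions outward
    warped = warp_positions(positions, warp)
    return np.array([speaker_gains(p, speakers, spread) for p in warped])
```

For example, with four corner loudspeakers, switching `mode` from `"reference"` to `"distributed"` both widens each element's speaker activation and stretches the element positions toward the room boundary, mirroring the two effects the abstract describes.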

Potential Applications

  • Immersive audio experiences in virtual reality environments
  • Enhanced spatial audio for gaming and entertainment systems
  • Improved sound localization for home theater systems

Problems Solved

  • Providing more realistic and immersive audio experiences
  • Enhancing the spatial perception of audio content
  • Optimizing audio reproduction in various listening environments

Benefits

  • Enhanced audio quality and spatial accuracy
  • Greater immersion and realism in audio content
  • Improved user experience and engagement

Commercial Applications

This technology can be used in virtual reality systems, gaming consoles, home theater setups, and audio production studios to deliver superior spatial audio rendering, catering to the growing demand for immersive entertainment experiences.

Questions about Spatial Audio Rendering Technology

  1. How does this technology improve sound localization in virtual reality environments?
  2. What are the key differences between the reference spatial mode and the distributed spatial modes in audio rendering?


Original Abstract Submitted

A rendering mode may be determined for received audio data, including audio signals and associated spatial data. The audio data may be rendered for reproduction via a set of loudspeakers of an environment according to the rendering mode, to produce rendered audio signals. Rendering the audio data may involve determining relative activation of a set of loudspeakers in an environment. The rendering mode may be variable between a reference spatial mode and one or more distributed spatial modes. The reference spatial mode may have an assumed listening position and orientation. In the distributed spatial mode(s), one or more elements of the audio data may each be rendered in a more spatially distributed manner than in the reference spatial mode and spatial locations of remaining elements of the audio data may be warped such that they span a rendering space of the environment more completely than in the reference spatial mode.