Sony Interactive Entertainment Inc. patent applications published on November 30th, 2023

From WikiPatents
Revision as of 08:52, 6 December 2023 by Wikipatents (talk | contribs) (Creating a new page)

Patent applications for Sony Interactive Entertainment Inc. on November 30th, 2023

METHODS AND SYSTEMS TO ACTIVATE SELECTIVE NAVIGATION OR MAGNIFICATION OF SCREEN CONTENT (17827475)

Main Inventor

Victoria Dorn


GAZE-BASED COORDINATION OF VIRTUAL EFFECTS INDICATORS (17828813)

Main Inventor

Kristie Ramirez


COOPERATIVE AND COACHED GAMEPLAY (17828861)

Main Inventor

Olga Rudi


SYSTEMS AND METHODS FOR ENABLING INTERACTIVE GAME ASSISTANCE DURING GAMEPLAY (17827335)

Main Inventor

Katie Egeland


ADAPTIVE DIFFICULTY CALIBRATION FOR SKILLS-BASED ACTIVITIES IN VIRTUAL ENVIRONMENTS (17828707)

Main Inventor

Victoria Dorn


MULTIPLAYER VIDEO GAME SYSTEMS AND METHODS (18321152)

Main Inventor

Christopher William Henderson


Brief explanation

The patent application describes a method for executing a multiplayer video game. Here are the key points:
  • The method involves connecting a game client to a game server using a communication network.
  • Each player's game is executed on the game server, creating a game environment.
  • Game data is synchronized between multiple instances of the game to link the generated game environments.
  • User inputs are transmitted to the game server to initiate interactive gameplay.
  • A video stream of the linked game environment is generated from each player's perspective.
  • The video streams for individual players are mixed into a split screen view during a split screen mode.
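The abstract does not say how the per-player streams are combined; a minimal sketch of one plausible mixing step, assuming each decoded frame arrives as a NumPy array and the split-screen view places players side by side (the function name and layout are illustrative, not from the patent):

```python
import numpy as np

def mix_split_screen(frames):
    """Mix per-player video frames into one split-screen frame.

    `frames` is a list of equally sized (H, W, 3) arrays, one per
    player; they are placed side by side horizontally.
    """
    return np.concatenate(frames, axis=1)

# Two hypothetical player frames, 4x6 pixels each.
p1 = np.zeros((4, 6, 3), dtype=np.uint8)   # player 1: black frame
p2 = np.full((4, 6, 3), 255, dtype=np.uint8)  # player 2: white frame
split = mix_split_screen([p1, p2])
print(split.shape)  # (4, 12, 3)
```

A real implementation would mix encoded video streams server-side; this only illustrates the geometric combination of per-player views.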

Abstract

A method of executing a multiplayer video game includes: connecting a game client to a game server via a communication network; for each player, executing an instance of a game on the game server to generate a game environment; synchronizing game data between multiple instances of the game to link the generated game environments; transmitting user inputs to the game server to initiate interactive gameplay; for each player, generating a video stream of the linked game environment from the player's perspective; and mixing the video streams for individual players into split screen view during a split screen mode.

eSPORTS SPECTATOR ONBOARDING (17828971)

Main Inventor

Mahdi Azmandian


TRIGGERING VIRTUAL HELP OR HINDRANCE BASED ON AUDIENCE PARTICIPATION TIERS (17828974)

Main Inventor

Lachmin Singh


METHODS AND SYSTEMS FOR ADDING REAL-WORLD SOUNDS TO VIRTUAL REALITY SCENES (17827462)

Main Inventor

Victoria Dorn


WEARABLE DATA PROCESSING APPARATUS, SYSTEM AND METHOD (18318839)

Main Inventor

Maria Chiara Monti


Brief explanation

The abstract describes a wearable device that can process data.
  • The device can be attached to a user's limb using attachment members.
  • It includes sensors that can generate data based on user inputs.
  • The device can wirelessly transmit this data to an external device.
  • It can also receive control data from the external device based on the user input data.
  • The device has processing circuitry that can generate output signals based on the control data.
  • An output unit is included to output these signals.

Abstract

A wearable data processing apparatus includes one or more attachment members for attaching the wearable data processing apparatus to a part of a limb of a user, one or more sensors to generate user input data in response to one or more user inputs, wireless communication circuitry to transmit the user input data to an external device and to receive control data based on the user input data from the external device, processing circuitry to generate one or more output signals in dependence upon the control data and an output unit to output one or more of the output signals.

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING TERMINAL (18032477)

Main Inventor

Takayoshi SHIMIZU


TRAINING A SOUND EFFECT RECOMMENDATION NETWORK (18217745)

Main Inventor

Sudha Krishnamurthy


Brief explanation

The patent application describes a machine learning algorithm that trains a network to recommend sound effects based on visual elements in an image.
  • The network takes a reference image, a positive audio embedding, and a negative audio embedding as inputs.
  • It uses a visual-to-audio correlation neural network trained so that the distance between the positive audio embedding and the reference image is smaller than the distance between the negative audio embedding and the reference image.
  • The neural network is trained to identify visual elements in the reference image and map them to sound categories or subcategories in an audio database.
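The training objective described above is a triplet-style loss; a minimal sketch of that loss on toy embeddings (the margin value and Euclidean distance are common choices, not specified in the abstract):

```python
import numpy as np

def triplet_loss(img_emb, pos_audio, neg_audio, margin=0.2):
    """Triplet-style objective implied by the abstract: train so the
    positive audio embedding lies closer to the reference image
    embedding than the negative one does, by at least `margin`."""
    d_pos = np.linalg.norm(img_emb - pos_audio)
    d_neg = np.linalg.norm(img_emb - neg_audio)
    return max(0.0, d_pos - d_neg + margin)

img = np.array([1.0, 0.0])   # reference image embedding
pos = np.array([0.9, 0.1])   # matching sound effect's embedding
neg = np.array([-1.0, 0.0])  # non-matching sound effect's embedding
print(triplet_loss(img, pos, neg))  # 0.0: positive is already far closer
```

During training, this value would be backpropagated through the visual-to-audio correlation network; here the embeddings are fixed toy vectors.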

Abstract

A sound effect recommendation network is trained using a machine learning algorithm with a reference image, a positive audio embedding and a negative audio embedding as inputs to train a visual-to-audio correlation neural network to output a smaller distance between the positive audio embedding and the reference image than the negative audio embedding and the reference image. The visual-to-audio correlation neural network is trained to identify one or more visual elements in the reference image and map the one or more visual elements to one or more sound categories or subcategories within an audio database.

METHOD FOR ADJUSTING NOISE CANCELLATION IN HEADPHONES BASED ON REAL-WORLD ACTIVITY OR GAME CONTEXT (17827678)

Main Inventor

Celeste Bean


Brief explanation

The patent application describes a method for executing an interactive application that includes rendering video and audio of a virtual environment.
  • The video is presented on a display viewed by the user, and the audio is presented through headphones worn by the user.
  • The interactive application is responsive to user input generated from the user's interaction with the presented video and audio.
  • The method also involves receiving environmental input from at least one sensor that senses the user's local environment.
  • The environmental input is analyzed to identify activity occurring in the local environment.
  • Once the activity is identified, the method adjusts the level of active noise cancellation applied by the headphones.
  • Adjusting the noise cancellation adapts to the surrounding environment, aiming to provide a more immersive and personalized interactive experience for the user.
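The claim says only that the cancellation level is adjusted in response to the identified activity, not how; a minimal sketch assuming a hypothetical mapping from activity labels to an ANC level in [0, 1]:

```python
# Hypothetical mapping from detected activity to an ANC level in [0, 1];
# the activity labels and levels are illustrative, not from the patent.
ANC_LEVELS = {
    "quiet": 1.0,         # fully cancel ambient noise
    "conversation": 0.3,  # let nearby speech through
    "doorbell": 0.0,      # pass ambient audio entirely
}

def adjust_anc(detected_activity, current_level):
    """Return the ANC level to apply for the identified local activity,
    keeping the current level when the activity is unrecognized."""
    return ANC_LEVELS.get(detected_activity, current_level)

print(adjust_anc("conversation", 1.0))  # 0.3
```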

Abstract

A method is provided, including: executing an interactive application, wherein executing the interactive application includes rendering video and audio of a virtual environment, the video being presented on a display viewed by a user, and the audio being presented through headphones worn by the user, and wherein executing the interactive application is responsive to user input generated from interactivity by the user with the presented video and audio; receiving environmental input from at least one sensor that senses a local environment in which the user is disposed; analyzing the environmental input to identify activity occurring in the local environment; responsive to identifying the activity, then adjusting a level of active noise cancellation applied by the headphones.

METHODS FOR EXAMINING GAME CONTEXT FOR DETERMINING A USER'S VOICE COMMANDS (17827680)

Main Inventor

Mahdi Azmandian


Brief explanation

The patent application describes a method for enhancing the gameplay experience in a video game session by analyzing and utilizing the speech of the player. Here are the key points:
  • The method involves recording the speech of a player while they are playing the video game.
  • The game state, which represents the current state of the gameplay, is analyzed to understand the context of the gameplay.
  • The recorded speech is then analyzed using a speech recognition model, taking into account the identified context of the gameplay.
  • The analysis of the speech helps in identifying the textual content of what the player said.
  • The identified textual content is then used as input for the gameplay, enhancing the interaction and experience of the player.
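One simple way the game context could inform speech recognition, sketched below, is to re-rank the recognizer's candidate transcriptions using context vocabulary; the scoring scheme and boost value are assumptions for illustration, not the patent's method:

```python
def rescore_with_context(hypotheses, context_vocab):
    """Re-rank recognizer hypotheses using game-context vocabulary:
    each hypothesis is a (text, score) pair, and its score is boosted
    for every word that matches the current gameplay context."""
    def score(hyp):
        text, base_score = hyp
        boost = sum(0.1 for w in text.split() if w in context_vocab)
        return base_score + boost
    return max(hypotheses, key=score)[0]

# In a battle scene, "attack" outranks the acoustically similar "a tack".
hyps = [("a tack", 0.50), ("attack", 0.45)]
battle_context = {"attack", "defend", "heal"}
print(rescore_with_context(hyps, battle_context))  # attack
```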

Abstract

A method for executing a session of a video game is provided, including the following operations: recording speech of a player engaged in gameplay of the session of the video game; analyzing a game state generated by the execution of the session of the video game, wherein analyzing the game state identifies a context of the gameplay; analyzing the recorded speech using the identified context of the gameplay and a speech recognition model, to identify textual content of the recorded speech; applying the identified textual content as a gameplay input for the session of the video game.

AUTOMATED VISUAL TRIGGER PROFILING AND DETECTION (17828775)

Main Inventor

Celeste Bean


DYNAMIC AUDIO OPTIMIZATION (17828668)

Main Inventor

Bethany Tinklenberg


INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING APPARATUS CONTROL METHOD, AND PROGRAM (18032906)

Main Inventor

Hajime HORIKOSHI


Brief explanation

The patent application describes an information processing apparatus that processes audio signals and is connected to a speaker system with multiple speakers.
  • The apparatus receives input of audio output setting information that indicates the direction of each speaker relative to the user's orientation.
  • The received audio output setting information is recorded by the apparatus.
  • The recorded audio output setting information can be requested and used to process the audio signal.
  • The innovation aims to improve the audio experience by accurately adjusting the audio output based on the speaker system's configuration and the user's orientation.
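The record-and-retrieve flow in the bullets above can be sketched as a small settings store; the class name, speaker identifiers, and angle convention (degrees relative to the direction the user faces) are illustrative assumptions:

```python
class AudioOutputSettings:
    """Record speaker directions relative to the user's orientation and
    return them on request for use in audio signal processing."""

    def __init__(self):
        self._directions = {}

    def record(self, speaker_id, angle_deg):
        # Angle relative to where the user faces (0 = straight ahead),
        # normalized to [0, 360).
        self._directions[speaker_id] = angle_deg % 360

    def get(self):
        return dict(self._directions)

settings = AudioOutputSettings()
settings.record("front_left", -30)   # stored as 330
settings.record("front_right", 30)
print(settings.get()["front_left"])  # 330
```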

Abstract

An information processing apparatus processes an audio signal and is connected to a speaker system having a plurality of speakers. The information processing apparatus receives input of audio output setting information indicating the direction of each speaker of the speaker system relative to an orientation of a user, records the received audio output setting information, and outputs the recorded audio output setting information as requested for use in processing the audio signal.