18440024. APPARATUS AND METHODS FOR CONTENT DESCRIPTION simplified abstract (SONY INTERACTIVE ENTERTAINMENT INC.)

From WikiPatents

APPARATUS AND METHODS FOR CONTENT DESCRIPTION

Organization Name

SONY INTERACTIVE ENTERTAINMENT INC.

Inventor(s)

Ryan Spick of London (GB)

Timothy Edward Bradley of London (GB)

Guy David Moss of London (GB)

Ayush Raina of London (GB)

Pierluigi Amadori of London (GB)

APPARATUS AND METHODS FOR CONTENT DESCRIPTION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18440024, titled 'APPARATUS AND METHODS FOR CONTENT DESCRIPTION'.

The data processing apparatus described in the patent application is designed to determine description data for content using a video captioning model.

  • The video captioning model is trained to detect predetermined motions of animated objects in video images and generate captions based on these motions.
  • Each caption comprises caption data with one or more words describing the motions, where the caption data comprises one or more of audio, text, and image data.
  • The output circuitry produces description data based on the generated captions.
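The pipeline described above (detect predetermined motions, map them to captions, output description data) can be sketched as a toy program. This is a minimal illustration, not the patented implementation: the motion labels, caption templates, and function names below are all assumptions, and a real system would use a trained video captioning model rather than pre-labelled frames.

```python
from dataclasses import dataclass

# Hypothetical caption record mirroring the claim's "caption data":
# one or more words describing a detected motion (text form only here).
@dataclass
class Caption:
    motion: str
    text: str

# Assumed stand-in for the trained captioning model's output mapping:
# predetermined motions -> caption words.
MOTION_CAPTIONS = {
    "jump": "The character jumps into the air.",
    "run": "The character runs forward.",
    "wave": "The character waves at the camera.",
}

def detect_motions(video_frames):
    """Placeholder detector: a real model would classify motion from
    frame sequences; here each frame carries a pre-assigned label."""
    return [f["motion"] for f in video_frames if f["motion"] in MOTION_CAPTIONS]

def generate_captions(video_frames):
    """Determine captions in dependence on the detected motions."""
    return [Caption(m, MOTION_CAPTIONS[m]) for m in detect_motions(video_frames)]

def output_description(captions):
    """Analogue of the output circuitry: combine caption data
    into a single piece of description data."""
    return " ".join(c.text for c in captions)

frames = [{"motion": "run"}, {"motion": "jump"}, {"motion": "idle"}]
print(output_description(generate_captions(frames)))
# → The character runs forward. The character jumps into the air.
```

Note that the unrecognised "idle" frame is simply skipped, mirroring the claim's focus on a set of predetermined motions.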

Potential Applications:

  • This technology can be used in video content creation to automatically generate descriptive captions for animated objects.
  • It can enhance accessibility for individuals with visual or hearing impairments by providing audio or text descriptions of visual content.

Problems Solved:

  • Streamlines the process of creating descriptive content for videos.
  • Improves the accessibility of video content for a wider audience.

Benefits:

  • Saves time and resources in manually creating descriptions for video content.
  • Increases the inclusivity of video content by providing detailed descriptions for all viewers.

Commercial Applications:

  • This technology can be utilized by video streaming platforms, educational institutions, and content creators to improve the accessibility and reach of their videos.

Questions about the Technology:

1. How does the video captioning model differentiate between different predetermined motions of animated objects?

  - The video captioning model is trained on a diverse dataset to recognize and distinguish various motions accurately.

2. What are the potential limitations of using automated captioning for describing content?

  - Automated captioning may not always capture the nuances and context of the content accurately, leading to potential inaccuracies in the descriptions.


Original Abstract Submitted

A data processing apparatus for determining description data for describing content includes: a video captioning model to receive an input comprising at least video images associated with the content, wherein the video captioning model is trained to detect one or more predetermined motions of one or more animated objects in the video images and determine one or more captions in dependence on one or more of the predetermined motions, one or more of the captions comprising respective caption data comprising one or more words for describing one or more of the predetermined motions, the respective caption data comprising one or more of audio data, text data and image data; and output circuitry to output description data in dependence on one or more of the captions.