Amazon Technologies, Inc. (20240242413). Automated Generation and Presentation of Sign Language Avatars for Video Content simplified abstract


Automated Generation and Presentation of Sign Language Avatars for Video Content

Organization Name

Amazon Technologies, Inc.

Inventor(s)

Avijit Vajpayee of Seattle WA (US)

Vimal Bhat of Redmond WA (US)

Arjun Cholkar of Bothell WA (US)

Louis Kirk Barker of Montara CA (US)

Abhinav Jain of Vancouver (CA)

Automated Generation and Presentation of Sign Language Avatars for Video Content - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240242413 titled 'Automated Generation and Presentation of Sign Language Avatars for Video Content'.

The patent application describes systems and methods for the automated generation and presentation of sign language avatars for video content. At a high level, the method involves the following steps (an illustrative code sketch follows the list):

  • Determining a first segment of video content, including frames, audio content, and subtitle data.
  • Using machine learning to determine sign gestures associated with words in the subtitle data.
  • Generating an avatar that performs the sign gestures using the motion data, with its facial expression driven by the facial expression data.
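
The claims describe this pipeline at the level of data flow rather than a concrete implementation. The Python sketch below is only illustrative: `VideoSegment`, `SignGesture`, `GestureModel`, and `generate_avatar_track` are invented names for this page, and the model is a placeholder stand-in, not the machine learning model described in the patent.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SignGesture:
    name: str
    motion_data: List[float]        # joint/keypoint trajectory for the gesture
    facial_expression: str          # expression to apply while signing


@dataclass
class VideoSegment:
    frames: List[bytes]             # set of frames for the segment
    audio: bytes                    # audio content for the segment
    subtitle_words: List[str]       # subtitle data, split into words


class GestureModel:
    """Stand-in for the machine learning model that maps a word to a sign gesture."""

    def predict(self, word: str) -> SignGesture:
        # A real model would be trained on sign-language data; this placeholder
        # just returns dummy motion and expression data so the sketch runs.
        return SignGesture(
            name=f"sign:{word}",
            motion_data=[0.0, 0.1, 0.2],
            facial_expression="neutral",
        )


def generate_avatar_track(segment: VideoSegment, model: GestureModel) -> List[SignGesture]:
    """Determine a sign gesture (with motion and facial expression data)
    for each word in the segment's subtitle data."""
    return [model.predict(word) for word in segment.subtitle_words]


if __name__ == "__main__":
    segment = VideoSegment(frames=[], audio=b"", subtitle_words=["hello", "world"])
    for gesture in generate_avatar_track(segment, GestureModel()):
        # A rendering step would animate the avatar using gesture.motion_data
        # and drive its face with gesture.facial_expression.
        print(gesture.name, gesture.facial_expression)
```

In a real system, the placeholder `GestureModel.predict` would be replaced by a trained model, and the returned motion and facial expression data would drive an avatar renderer synchronized with the video segment.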

Potential Applications:
  • Enhancing accessibility for deaf or hard of hearing individuals.
  • Improving communication in educational or training videos.
  • Enhancing user experience in entertainment content.

Problems Solved:
  • Bridging the communication gap between hearing and non-hearing individuals.
  • Providing a more inclusive viewing experience for all audiences.

Benefits:
  • Increased accessibility to video content for individuals with hearing impairments.
  • Improved understanding and engagement with sign language communication.
  • Enhanced user experience and inclusivity in multimedia content.

Commercial Applications: Automated sign language avatar generation can be utilized in video streaming platforms, educational content creation, and communication applications to enhance accessibility and user engagement.

Questions about Sign Language Avatars:
  1. How can automated sign language avatars benefit individuals with hearing impairments?
  2. What are the potential commercial applications of this technology in the entertainment industry?

Frequently Updated Research: Stay informed about advances in machine learning algorithms for sign language recognition and avatar generation, which continue to improve the accuracy and realism of sign language avatars.


Original Abstract Submitted

Systems, methods, and computer-readable media are disclosed for systems and methods for automated generation and presentation of sign language avatars for video content. Example methods may include determining, by one or more computer processors coupled to memory, a first segment of video content, the first segment including a first set of frames, first audio content, and first subtitle data, where the first subtitle data comprises a first word and a second word. Methods may include determining, using a first machine learning model, a first sign gesture associated with the first word, determining first motion data associated with the first sign gesture, and determining first facial expression data. Methods may include generating an avatar configured to perform the first sign gesture using the first motion data, where a facial expression of the avatar while performing the first sign gesture is based on the first facial expression data.