Amazon Technologies, Inc. (20240242413). Automated Generation and Presentation of Sign Language Avatars for Video Content: simplified abstract


Automated Generation and Presentation of Sign Language Avatars for Video Content

Organization Name

Amazon Technologies, Inc.

Inventor(s)

Avijit Vajpayee of Seattle, WA (US)

Vimal Bhat of Redmond, WA (US)

Arjun Cholkar of Bothell, WA (US)

Louis Kirk Barker of Montara, CA (US)

Abhinav Jain of Vancouver (CA)

Automated Generation and Presentation of Sign Language Avatars for Video Content - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240242413, titled 'Automated Generation and Presentation of Sign Language Avatars for Video Content.'

The patent application describes systems and methods for automated generation and presentation of sign language avatars for video content. The key steps are listed below, with a rough code sketch of the pipeline after the list.

  • Determining segments of video content, including frames, audio content, and subtitle data.
  • Using machine learning models to associate sign gestures with words and generate motion data.
  • Creating avatars to perform sign gestures based on motion data and facial expression data.
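
As a rough illustration of how such a pipeline could be organized, the minimal Python sketch below maps subtitle words from one segment to stub sign gestures carrying motion and facial expression data. Every class, function, and value here is hypothetical and stands in for the trained models described in the application; this is a sketch under those assumptions, not the patented implementation.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Segment:
    """One segment of video content: frames, audio, and subtitle words."""
    frames: list
    audio: bytes
    subtitle_words: list


@dataclass
class SignGesture:
    """A sign gesture plus the motion and facial expression data needed to render it."""
    word: str
    motion_data: list
    facial_expression: str


class GestureModel:
    """Stand-in for the machine learning model that associates words with sign gestures.

    A real system would call a trained model; a lookup table keeps this sketch
    self-contained and runnable.
    """

    _LEXICON = {
        "hello": ([0.1, 0.4, 0.2], "smile"),
        "storm": ([0.7, 0.9, 0.3], "concerned"),
    }

    def predict(self, word):
        entry = self._LEXICON.get(word.lower())
        if entry is None:
            return None  # a real system might fall back to fingerspelling here
        motion, expression = entry
        return SignGesture(word=word, motion_data=motion, facial_expression=expression)


def build_avatar_track(segment, model):
    """Map each subtitle word in the segment to a gesture the avatar can perform."""
    gestures = []
    for word in segment.subtitle_words:
        gesture = model.predict(word)
        if gesture is not None:
            gestures.append(gesture)
    return gestures


if __name__ == "__main__":
    seg = Segment(frames=["frame0", "frame1"], audio=b"", subtitle_words=["Hello", "storm"])
    for g in build_avatar_track(seg, GestureModel()):
        print(f"{g.word}: motion={g.motion_data}, expression={g.facial_expression}")

A production system would presumably replace the lookup table with the trained gesture model and drive an animation rig from the predicted motion and facial expression data.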

Potential Applications: This technology can be used in video content creation, accessibility features for the hearing impaired, and educational tools for learning sign language.

Problems Solved: This technology addresses the need for automated sign language interpretation and representation in video content.

Benefits: The technology provides a more inclusive viewing experience for the hearing impaired, enhances communication through sign language, and streamlines the process of creating sign language content.

Commercial Applications: This technology can be utilized by video streaming platforms, educational institutions, and content creators to make their content more accessible and inclusive.

Questions about Sign Language Avatars:

  1. How does this technology improve accessibility for the hearing impaired?
  2. What are the potential educational applications of sign language avatars?

Frequently Updated Research: Researchers are continuously exploring ways to improve the accuracy and efficiency of sign language avatar generation and presentation.


Original Abstract Submitted

Systems, methods, and computer-readable media are disclosed for systems and methods for automated generation and presentation of sign language avatars for video content. Example methods may include determining, by one or more computer processors coupled to memory, a first segment of video content, the first segment including a first set of frames, first audio content, and first subtitle data, where the first subtitle data comprises a first word and a second word. Methods may include determining, using a first machine learning model, a first sign gesture associated with the first word, determining first motion data associated with the first sign gesture, and determining first facial expression data. Methods may include generating an avatar configured to perform the first sign gesture using the first motion data, where a facial expression of the avatar while performing the first sign gesture is based on the first facial expression data.
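
The claim language ties each subtitle word to a sign gesture, motion data, and facial expression data that an avatar then performs. As a loose, hypothetical illustration of how those per-word elements might be scheduled against subtitle timing, consider the short Python sketch below; the names, classes, and values are assumptions for illustration and are not taken from the patent.

from dataclasses import dataclass


@dataclass
class SubtitleWord:
    """A subtitle word together with the time span it occupies in the segment."""
    text: str
    start_s: float
    end_s: float


@dataclass
class AvatarKeyframe:
    """One keyframe of the avatar animation: a pose plus a facial expression."""
    time_s: float
    pose: str
    expression: str


def schedule_keyframes(words, gesture_lookup):
    """Place a gesture keyframe at the start of each word's subtitle span.

    gesture_lookup maps a lowercased word to a (pose, expression) pair; in the
    described approach a trained model would supply that association.
    """
    keyframes = []
    for word in words:
        pose, expression = gesture_lookup.get(word.text.lower(), ("rest", "neutral"))
        keyframes.append(AvatarKeyframe(time_s=word.start_s, pose=pose, expression=expression))
    return keyframes


if __name__ == "__main__":
    subtitles = [SubtitleWord("Hello", 0.0, 0.4), SubtitleWord("storm", 0.5, 0.9)]
    lookup = {"hello": ("wave", "smile")}
    for kf in schedule_keyframes(subtitles, lookup):
        print(kf)

Anchoring each keyframe to its word's subtitle timestamp is one way to keep the avatar's gestures synchronized with the audio of the corresponding video segment.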