Cisco Technology, Inc. (20240378782). SIGN LANGUAGE GENERATION AND DISPLAY simplified abstract
SIGN LANGUAGE GENERATION AND DISPLAY
Organization Name
Cisco Technology, Inc.
Inventor(s)
Elena Gribanova of Dublin, CA (US)
Valentin Filippov of Maple (CA)
Pedro Jesus Garcia Chavez of Mexico City (MX)
David C. White, Jr. of St. Petersburg, FL (US)
SIGN LANGUAGE GENERATION AND DISPLAY - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240378782 titled 'SIGN LANGUAGE GENERATION AND DISPLAY'.
The method described in the abstract uses video and audio data from participants in a video session to generate an animated avatar that renders the speaker's words in sign language. The avatar is then composited into the video session to enhance communication. The key steps are:
- Receiving video and audio data from participants in a video session
- Determining words spoken by a speaker participant based on the audio data
- Locating the speaker participant in the video framing
- Generating an animated avatar to provide sign language representation
- Modifying the video data to include the animated avatar
- Outputting the video data with the animated avatar
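The steps above can be sketched as a simple processing pipeline. This is a minimal illustration, not the patent's implementation: `transcribe`, `locate_speaker`, and `sign_gloss_for` are hypothetical placeholders standing in for a speech-to-text model, an audio/video speaker-localization module, and a sign-language animation engine.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A composed video frame of the session, with avatar overlays."""
    width: int
    height: int
    overlays: list = field(default_factory=list)

def transcribe(audio_chunk: bytes) -> list[str]:
    # Placeholder: a real system would run a speech-to-text model here.
    return audio_chunk.decode().split()

def locate_speaker(frame: Frame, audio_chunk: bytes) -> tuple[int, int]:
    # Placeholder: real localization would fuse face detection in the
    # video framing with per-participant audio activity.
    return (frame.width // 4, frame.height // 2)

def sign_gloss_for(words: list[str]) -> list[str]:
    # Placeholder: map spoken words to the sign-language glosses that
    # drive the animated avatar.
    return [w.upper() for w in words]

def composite_avatar(frame: Frame, glosses: list[str],
                     speaker_xy: tuple[int, int]) -> Frame:
    # Place the signing avatar on the opposite half of the frame from
    # the located speaker, so it does not cover the speaker's tile.
    x, y = speaker_xy
    avatar_x = x + frame.width // 2 if x < frame.width // 2 else x - frame.width // 2
    frame.overlays.append({"avatar_at": (avatar_x, y), "glosses": glosses})
    return frame

def process(frame: Frame, audio_chunk: bytes) -> Frame:
    """One pass of the pipeline: transcribe, locate, animate, composite."""
    words = transcribe(audio_chunk)
    speaker_xy = locate_speaker(frame, audio_chunk)
    glosses = sign_gloss_for(words)
    return composite_avatar(frame, glosses, speaker_xy)
```

For example, `process(Frame(1280, 720), b"hello world")` yields a frame whose overlay places the avatar away from the speaker and carries the glosses `["HELLO", "WORLD"]`.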
Potential Applications:
- Enhancing communication for individuals who are deaf or hard of hearing
- Improving accessibility in virtual meetings and conferences
- Providing real-time sign language interpretation in video sessions

Problems Solved:
- Overcoming communication barriers for individuals with hearing impairments
- Facilitating inclusive and effective communication in virtual settings

Benefits:
- Increased accessibility for individuals with hearing disabilities
- Improved communication and understanding in video sessions
- Enhanced inclusivity and diversity in virtual environments
Commercial Applications:
This technology could be utilized in video conferencing platforms, online education systems, and virtual event platforms to provide real-time sign language interpretation for participants who are deaf or hard of hearing. This innovation has the potential to improve accessibility and inclusivity across a range of industries.
Questions about the technology:
1. How does this method impact the overall user experience in video sessions?
2. What are the technical requirements for implementing this technology in existing video communication platforms?
Original Abstract Submitted
a method includes receiving, via one or more processors, video data and audio data associated with respective participants in a video session, determining, via the one or more processors, words spoken by a speaker participant in the video session based on the audio data, determining, via the one or more processors, a location of the speaker participant in a framing of the video data based on the video data and the audio data, generating, via the one or more processors, an animated avatar to provide sign language representing the words spoken by the speaker participant, modifying, via the one or more processors, the video data of the video session to include the animated avatar based on the location of the speaker participant, and outputting, via the one or more processors, the video data that includes the animated avatar.
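The abstract determines the speaker's location "based on the video data and the audio data". One plausible reading of that fusion, shown here purely as a sketch (the per-participant face boxes and audio levels are assumed inputs, not part of the patent text), is to pick the participant with the highest audio activity and return the center of that participant's face box in the composed framing:

```python
def locate_active_speaker(
    face_boxes: dict[str, tuple[int, int, int, int]],
    audio_levels: dict[str, float],
) -> tuple[str, tuple[int, int]]:
    """Fuse audio and video cues: choose the loudest participant and
    return their name and the center of their (x, y, w, h) face box."""
    speaker = max(audio_levels, key=audio_levels.get)
    x, y, w, h = face_boxes[speaker]
    return speaker, (x + w // 2, y + h // 2)
```

Given `face_boxes = {"alice": (0, 0, 100, 100), "bob": (200, 0, 100, 100)}` and `audio_levels = {"alice": 0.1, "bob": 0.9}`, this returns `("bob", (250, 50))`, which the method could then use to anchor the avatar's placement.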