20240020489. PROVIDING SUBTITLE FOR VIDEO CONTENT IN SPOKEN LANGUAGE simplified abstract (VoyagerX, Inc.)

PROVIDING SUBTITLE FOR VIDEO CONTENT IN SPOKEN LANGUAGE

Organization Name

VoyagerX, Inc.

Inventor(s)

Hyeonsoo Oh of Seoul (KR)

PROVIDING SUBTITLE FOR VIDEO CONTENT IN SPOKEN LANGUAGE - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240020489 titled 'PROVIDING SUBTITLE FOR VIDEO CONTENT IN SPOKEN LANGUAGE'.

Simplified Explanation

The present disclosure describes systems and methods for providing subtitles for a video. The video's audio is transcribed to obtain caption text. A first machine-trained model identifies sentences in the caption text, and a second model identifies intra-sentence breaks within those sentences. Based on the identified sentences and breaks, words in the caption text are grouped into clip captions, each displayed with its corresponding clip of the video (see the sketch after the list below).

  • The innovation involves using machine-trained models to automatically generate subtitles for videos.
  • The first model identifies sentences in the caption text, while the second model identifies breaks within the sentences.
  • The identified sentences and breaks are used to group words into clip captions for specific parts of the video.
  • This technology aims to improve the accessibility of videos by providing accurate and synchronized subtitles.
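The application does not detail the models' architectures, so the sketch below only illustrates the final grouping step, assuming the transcription and the two machine-trained models have already produced word timings, sentence-end indices, and intra-sentence break indices. The names Word, group_into_clip_captions, and the index lists are hypothetical and introduced here purely for illustration.

  from dataclasses import dataclass
  from typing import List


  @dataclass
  class Word:
      text: str
      start: float  # word start time in seconds, from the transcript
      end: float    # word end time in seconds, from the transcript


  def group_into_clip_captions(
      words: List[Word],
      sentence_ends: List[int],          # word indices where the first model ends a sentence
      intra_sentence_breaks: List[int],  # word indices where the second model places a break
  ) -> List[List[Word]]:
      """Group transcribed words into clip captions using sentence boundaries
      and intra-sentence breaks, so each caption can be shown with its own clip."""
      boundaries = set(sentence_ends) | set(intra_sentence_breaks)
      captions: List[List[Word]] = []
      current: List[Word] = []
      for i, word in enumerate(words):
          current.append(word)
          if i in boundaries:        # close the caption at a sentence end or a break
              captions.append(current)
              current = []
      if current:                    # flush any trailing words
          captions.append(current)
      return captions


  # Example: two sentences, with one intra-sentence break inside the first one.
  words = [
      Word(w, i * 0.5, i * 0.5 + 0.4)
      for i, w in enumerate("hello everyone and welcome back today we talk about subtitles".split())
  ]
  for clip in group_into_clip_captions(words, sentence_ends=[4, 9], intra_sentence_breaks=[6]):
      print(f"{clip[0].start:.1f}-{clip[-1].end:.1f}s: {' '.join(w.text for w in clip)}")

In this sketch each caption inherits its display interval from the timestamps of its first and last words, which is one straightforward way to align a clip caption with its clip; the application itself does not prescribe a specific alignment scheme.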

Potential applications of this technology:

  • Video streaming platforms can use this technology to automatically generate subtitles for their content, making it accessible to a wider audience, including those with hearing impairments or language barriers.
  • Content creators can utilize this technology to save time and effort in manually transcribing and adding subtitles to their videos.
  • Educational platforms can benefit from this technology by automatically generating subtitles for instructional videos, enhancing the learning experience for students.

Problems solved by this technology:

  • Manual transcription and addition of subtitles to videos can be time-consuming and prone to errors. This technology automates the process, saving time and reducing errors.
  • Lack of subtitles in videos can exclude individuals with hearing impairments or those who do not understand the video's language. This technology provides accessible subtitles, promoting inclusivity.

Benefits of this technology:

  • Improved accessibility: Subtitles generated by this technology make videos accessible to a wider audience, including those with hearing impairments or language barriers.
  • Time and effort savings: Content creators and video platforms can save time and effort by automating the process of generating subtitles.
  • Enhanced learning experience: Educational platforms can enhance the learning experience by providing automatically generated subtitles for instructional videos.


Original Abstract Submitted

The present disclosure relates to systems and methods for providing subtitle for a video. The video's audio is transcribed to obtain caption text for the video. A first machine-trained model identifies sentences in the caption text. A second model identifies intra-sentence breaks within the sentences identified using the first machine-trained model. Based on the identified sentences and intra-sentence breaks, one or more words in the caption text are grouped into a clip caption to be displayed for a corresponding clip of the video.