Lemon Inc. (20240380949). VIDEO CAPTIONING GENERATION SYSTEM AND METHOD simplified abstract

VIDEO CAPTIONING GENERATION SYSTEM AND METHOD

Organization Name

Lemon Inc.

Inventor(s)

Linjie Yang of Los Angeles CA (US)

Heng Wang of Los Angeles CA (US)

Yuhan Shen of Los Angeles CA (US)

Longyin Wen of Los Angeles CA (US)

Haichao Yu of Los Angeles CA (US)

VIDEO CAPTIONING GENERATION SYSTEM AND METHOD - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240380949, titled 'VIDEO CAPTIONING GENERATION SYSTEM AND METHOD.'

The patent application describes a system and method for generating captions for videos using multi-modal embeddings.

  • A processor executes a caption generation program
  • The program receives an input video and samples video frames from it
  • Video embeddings and audio embeddings, including local video tokens and local audio tokens, are extracted from the sampled frames
  • The local video tokens and local audio tokens are input into at least a transformer layer of a cross-modal encoder
  • The encoder generates multi-modal embeddings
  • A caption decoder generates video captions from the multi-modal embeddings (see the sketch below)
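
The application does not disclose an implementation, but the fusion step can be illustrated with a minimal PyTorch sketch. All names and sizes here (CrossModalEncoder, d_model, the token counts) are invented for illustration, not taken from the filing:

    import torch
    import torch.nn as nn

    class CrossModalEncoder(nn.Module):
        """Hypothetical sketch: fuse local video and audio tokens with
        transformer layers to produce multi-modal embeddings."""
        def __init__(self, d_model=512, n_heads=8, n_layers=1):
            super().__init__()
            layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            # Learned markers that tag each token with its modality.
            self.modality = nn.Embedding(2, d_model)  # 0 = video, 1 = audio

        def forward(self, video_tokens, audio_tokens):
            # video_tokens: (batch, n_video_tokens, d_model)
            # audio_tokens: (batch, n_audio_tokens, d_model)
            v = video_tokens + self.modality.weight[0]
            a = audio_tokens + self.modality.weight[1]
            tokens = torch.cat([v, a], dim=1)  # joint token sequence
            return self.encoder(tokens)        # multi-modal embeddings

    # Example with invented sizes: 32 video tokens, 16 audio tokens.
    encoder = CrossModalEncoder()
    multimodal = encoder(torch.randn(1, 32, 512), torch.randn(1, 16, 512))
    print(multimodal.shape)  # torch.Size([1, 48, 512])

A caption decoder, typically an autoregressive transformer that cross-attends to these multi-modal embeddings, would then produce the caption text; that component is omitted from the sketch.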

Potential Applications

  • Automated video captioning for accessibility purposes
  • Enhancing video search and indexing capabilities
  • Improving video content understanding for AI applications

Problems Solved

  • Enhances accessibility for individuals with hearing impairments
  • Streamlines video content analysis and categorization
  • Facilitates content creation and editing processes

Benefits

  • Improved user experience for video content consumption
  • Increased efficiency in video content management
  • Enhanced accessibility and inclusivity in digital media

Commercial Applications

  • Video streaming platforms
  • Content creation tools
  • AI and machine learning applications in video analysis

Questions About the Technology

  1. How does the system handle different languages in video captions?
  2. Can the system accurately generate captions for videos with complex audio content?

Frequently Updated Research

Stay updated on advancements in multi-modal embeddings for video analysis and captioning technologies.


Original Abstract Submitted

A system and a method are provided that include a processor executing a caption generation program to receive an input video, sample video frames from the input video, extract video frames from the input video, extract video embeddings and audio embeddings from the video frames, including local video tokens and local audio tokens, respectively, input the local video tokens and the local audio tokens into at least a transformer layer of a cross-modal encoder to generate multi-modal embeddings, and generate video captions based on the multi-modal embeddings using a caption decoder.
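
The abstract's frame-sampling step is likewise unspecified; a common approach is uniform sampling, sketched below with torchvision. The frame count and file path are placeholders, not values from the application:

    import torch
    from torchvision.io import read_video

    def sample_frames(video_path, num_frames=16):
        """Uniformly sample num_frames frames from a video file.

        Uniform sampling is an assumption; the application does not
        specify a sampling strategy.
        """
        frames, _audio, _info = read_video(video_path, pts_unit="sec")
        total = frames.shape[0]                        # frames: (T, H, W, C)
        idx = torch.linspace(0, total - 1, num_frames).long()
        return frames[idx]                             # (num_frames, H, W, C)

    # Hypothetical usage; "clip.mp4" is a placeholder path.
    # sampled = sample_frames("clip.mp4", num_frames=16)

Per the abstract's pipeline, the sampled frames would then feed a visual backbone that produces the local video tokens, while an audio encoder produces the local audio tokens.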