Snap Inc. (20240355063): EMBEDDINGS REPRESENTING VISUAL AUGMENTATIONS simplified abstract

From WikiPatents
Revision as of 06:09, 25 October 2024 by Wikipatents (talk | contribs) (Creating a new page)

EMBEDDINGS REPRESENTING VISUAL AUGMENTATIONS

Organization Name

Snap Inc.

Inventor(s)

Zhenpeng Zhou of Newark CA (US)

Patrick Poirson of Gilbert AZ (US)

Maksim Gusarov of Marina del Rey CA (US)

Chen Wang of Great Neck NY (US)

Oleg Tovstyi of Los Angeles CA (US)

EMBEDDINGS REPRESENTING VISUAL AUGMENTATIONS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240355063, titled 'EMBEDDINGS REPRESENTING VISUAL AUGMENTATIONS'.

The abstract of the patent application describes a process in which a machine learning model generates an embedding from an input video item that includes a target visual augmentation. The embedding is a vector representation of the visual effect of that augmentation. The model is trained in an unsupervised phase to minimize loss between training video representations generated within each of a plurality of training sets, where each set contains different training video items that share a predefined visual augmentation. Based on the generated embedding, the target visual augmentation is then mapped to an augmentation identifier.

  • Machine learning model generates an embedding from an input video item that includes a target visual augmentation
  • Embedding is a vector representation of the visual effect of the target visual augmentation
  • Model is trained in an unsupervised phase to minimize loss between training video representations generated within each training set
  • Each training set contains different training video items that share a predefined visual augmentation
  • Target visual augmentation is mapped to an augmentation identifier based on the generated embedding
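The unsupervised objective summarized above can be sketched in code. The patent does not specify a loss function, so the within-set centroid loss below, along with the function names and array shapes, is an illustrative assumption: it merely shows what "minimizing loss between training video representations within each training set" could look like.

```python
import numpy as np

def within_set_loss(embeddings: np.ndarray) -> float:
    """Mean squared distance of each video embedding to the set's centroid.

    Each training set holds embeddings of different videos that share one
    predefined visual augmentation; driving this loss down pulls their
    representations together. (Hypothetical loss, not from the patent.)
    """
    centroid = embeddings.mean(axis=0)
    return float(((embeddings - centroid) ** 2).sum(axis=1).mean())

def total_loss(training_sets: list[np.ndarray]) -> float:
    # The model would be trained to minimize this across all training sets.
    return float(np.mean([within_set_loss(s) for s in training_sets]))

# Two toy training sets of three 4-dimensional "video embeddings" each.
rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.01, size=(3, 4))  # nearly identical embeddings
loose = rng.normal(0.0, 1.0, size=(3, 4))   # scattered embeddings
assert within_set_loss(tight) < within_set_loss(loose)
```

A set of near-identical embeddings yields a much smaller loss than a scattered one, which is the direction the unsupervised phase pushes the model.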

Potential Applications:
  • Video editing software
  • Augmented reality applications
  • Virtual reality experiences

Problems Solved:
  • Efficiently mapping visual augmentations to identifiers
  • Enhancing visual effects in videos
  • Improving machine learning models for video processing

Benefits:
  • Streamlined video editing process
  • Enhanced visual effects in augmented reality applications
  • Improved accuracy in mapping visual augmentations

Commercial Applications: "Enhanced Visual Augmentation Mapping Technology for Video Editing and AR Applications." This technology can be used in video editing software to streamline the mapping of visual augmentations to identifiers, enhancing visual effects in videos. It can also be applied in augmented reality applications to improve the accuracy and efficiency of augmentation mapping.

Questions about Visual Augmentation Mapping Technology: 1. How does this technology improve the efficiency of mapping visual augmentations in videos? This technology utilizes machine learning models to generate embeddings that represent visual effects, allowing for more accurate and streamlined mapping of visual augmentations to identifiers.

2. What are the potential applications of this technology beyond video editing and augmented reality? This technology could also be applied in virtual reality experiences, interactive media installations, and digital art creation tools.


Original Abstract Submitted

An input video item that includes a target visual augmentation is accessed. A machine learning model uses the input video item to generate an embedding. The embedding may comprise a vector representation of a visual effect of the target visual augmentation. The machine learning model is trained, in an unsupervised training phase, to minimize loss between training video representations generated within each of a plurality of training sets. Each training set comprises a plurality of different training video items that each include a predefined visual augmentation. Based on the generation of the embedding of the input video item, the target visual augmentation is mapped to an augmentation identifier.
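The abstract's final step, mapping the generated embedding to an augmentation identifier, could be realized as a nearest-neighbor lookup. The patent does not describe the mapping mechanism, so the cosine-similarity lookup, the reference table, and the identifier names below are all illustrative assumptions.

```python
import numpy as np

def map_to_identifier(embedding: np.ndarray,
                      reference: dict[str, np.ndarray]) -> str:
    """Return the identifier whose reference embedding is most similar
    (by cosine similarity) to the input video's embedding.

    Hypothetical mapping step; the patent does not specify the scheme.
    """
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(reference, key=lambda aug_id: cos(embedding, reference[aug_id]))

# Illustrative reference embeddings for three known augmentations.
reference = {
    "aug_sparkles": np.array([1.0, 0.0, 0.0]),
    "aug_sepia":    np.array([0.0, 1.0, 0.0]),
    "aug_blur":     np.array([0.0, 0.0, 1.0]),
}

query = np.array([0.9, 0.1, 0.0])  # embedding close to "aug_sparkles"
assert map_to_identifier(query, reference) == "aug_sparkles"
```

In this sketch the embedding generated from the input video simply selects the closest known augmentation, yielding its identifier.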