Snap Inc. (20240355010). TEXTURE GENERATION USING MULTIMODAL EMBEDDINGS simplified abstract

From WikiPatents

TEXTURE GENERATION USING MULTIMODAL EMBEDDINGS

Organization Name

Snap Inc.

Inventor(s)

Bohdan Ahafonov of Santa Monica, CA (US)

Matthew Hallberg of Santa Monica, CA (US)

Sergei Korolev of Santa Monica, CA (US)

William Miles Miller of San Francisco, CA (US)

Daria Skrypnyk of Los Angeles, CA (US)

Aleksei Stoliar of Santa Monica, CA (US)

TEXTURE GENERATION USING MULTIMODAL EMBEDDINGS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240355010, titled 'TEXTURE GENERATION USING MULTIMODAL EMBEDDINGS'.

Simplified Explanation: The patent application describes methods and systems for creating an extended reality try-on experience by generating artificial textures based on interaction data and object detection in images.

  • Detect an object in an image captured by the user
  • Generate a prompt based on the detected object and stored interaction data
  • Create an artificial texture from the prompt
  • Modify the object's texture in the image using the artificial texture
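The four steps above can be sketched end to end. This is an illustrative Python sketch only: every function and data structure here (`detect_object`, `generate_prompt`, `generate_texture`, `apply_texture`, the dict-based image) is a hypothetical stand-in, not the claimed implementation; a real system would use trained models for detection and texture generation.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """A detected object: a label plus its bounding region in the image."""
    label: str
    region: tuple  # (x, y, w, h) bounding box, illustrative only

def detect_object(image: dict) -> DetectedObject:
    # Stub detector: pretend a garment was found in the captured frame.
    return DetectedObject(label="t-shirt", region=(10, 20, 100, 120))

def generate_prompt(obj: DetectedObject, interaction_data: list) -> str:
    # Combine the detected object with stored interaction data into a text prompt.
    preferences = ", ".join(interaction_data)
    return f"a {obj.label} texture matching user interests: {preferences}"

def generate_texture(prompt: str) -> dict:
    # Stub generator; a real system would invoke a generative model here.
    return {"prompt": prompt, "pixels": "<generated texture data>"}

def apply_texture(image: dict, obj: DetectedObject, texture: dict) -> dict:
    # Overlay the artificial texture onto the object's region of the image.
    modified = dict(image)
    modified["overlays"] = [{"region": obj.region, "texture": texture["prompt"]}]
    return modified

# End-to-end flow mirroring the four steps.
image = {"source": "camera_frame"}
memory = ["floral patterns", "pastel colors"]  # interaction data (text modality only here)
obj = detect_object(image)
prompt = generate_prompt(obj, memory)
texture = generate_texture(prompt)
result = apply_texture(image, obj, texture)
print(result["overlays"][0]["texture"])
```

In this toy flow the prompt carries both the object label and the user's stored preferences, which is the coupling the abstract emphasizes.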

Key Features and Innovation:

  • Multimodal memory stores interaction data
  • Machine learning model generates prompts based on detected objects
  • Artificial textures enhance try-on experience
  • Seamless integration of interaction data and object detection
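The abstract only states that interaction data "in different modalities" is stored in a multimodal memory; one plausible minimal layout is a store keyed by modality. The `MultimodalMemory` class below is a hypothetical sketch under that assumption, not the patented design.

```python
from collections import defaultdict

class MultimodalMemory:
    """Illustrative store for interaction data across modalities (text, image, audio, ...)."""

    def __init__(self):
        # One list of entries per modality name.
        self._entries = defaultdict(list)

    def record(self, modality: str, payload):
        """Append an interaction record under its modality."""
        self._entries[modality].append(payload)

    def retrieve(self, modality: str) -> list:
        """Return a copy of all records stored for a modality."""
        return list(self._entries[modality])

memory = MultimodalMemory()
memory.record("text", "liked: floral patterns")
memory.record("image", {"thumbnail": "shirt_01.png"})
print(memory.retrieve("text"))
```

A prompt-generating model could then pull the relevant modalities from this store when a new object is detected.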

Potential Applications: This technology can be applied in various industries such as fashion, interior design, and gaming for virtual try-on experiences and customization options.

Problems Solved: This technology addresses the need for realistic and interactive virtual try-on experiences by generating artificial textures based on user interactions and detected objects.

Benefits:

  • Enhanced user experience
  • Personalized virtual try-on options
  • Improved engagement and satisfaction

Commercial Applications: Potential commercial applications include virtual shopping experiences, virtual interior design consultations, and virtual gaming experiences with customizable textures.

Prior Art: Prior art related to this technology may include research on machine learning models for image recognition and virtual try-on experiences in the retail industry.

Frequently Updated Research: Researchers may be exploring advancements in machine learning algorithms for more accurate object detection and texture generation in virtual environments.

Questions about Extended Reality Try-On Experience:

1. How does this technology improve user engagement in virtual try-on experiences?
2. What are the potential limitations of using artificial textures to modify object textures in images?


Original Abstract Submitted

Methods and systems are disclosed for generating an extended reality (XR) try-on experience. The methods and systems store, in a multimodal memory, interaction data representing use of one or more interaction functions including data in different modalities. The methods and systems detect an object depicted in an image captured by an interaction client and generate, by a machine learning model, a prompt based on the object depicted in the image and the interaction data in the multimodal memory. The methods and systems generate an artificial texture based on the prompt and modify a texture of the object depicted in the image using the artificial texture that has been generated based on the prompt.