Revision as of 05:18, 5 December 2023

Summary of the patent applications from Snap Inc. on November 30th, 2023

Snap Inc. has recently filed several patents related to image processing, motion blur reduction, storage location assignment, electronic messaging, beauty product tutorials, 3D object editing, augmented reality, location-based services, and entity recognition in multimodal messages.

Summary:

Snap Inc. has filed patents for various innovative technologies. These include a customized image reprocessing system powered by machine learning techniques, a method for reducing motion blur in a visual tracking system, a system for dynamically assigning storage locations based on a user's device location, an electronic messaging system that deletes message content after a certain time, a system for adding beauty products to tutorials, an editing system for 3D objects using 2D sketches or RGB views, a technology for applying a 3D effect to image and depth data, a method for creating augmented reality experiences on a messaging platform using machine learning, a system for location-based services on a messaging platform, and a system for identifying named entities in multimodal messages using visual attention mechanisms.

Notable Applications:

  • Customized image reprocessing system using machine learning.
  • Method for reducing motion blur in visual tracking systems.
  • Dynamic assignment of storage locations based on device location.
  • Electronic messaging system with automatic deletion of message content.
  • Addition of beauty products to tutorials.
  • Editing system for 3D objects using 2D sketches or RGB views.
  • Application of 3D effects to image and depth data using augmented reality.
  • Creation of augmented reality experiences on a messaging platform using machine learning.
  • Location-based services on a messaging platform.
  • Identification of named entities in multimodal messages using visual attention mechanisms.

Overall, Snap Inc. has been actively developing innovative technologies in the fields of image processing, motion blur reduction, storage management, messaging systems, beauty product tutorials, 3D object editing, augmented reality, location-based services, and entity recognition. These patents demonstrate the organization's commitment to advancing technology and providing unique experiences to its users.



Patent applications for Snap Inc. on November 30th, 2023

DEVICE-BASED IMAGE MODIFICATION OF DEPICTED OBJECTS (18232146)

Main Inventor

Theresa Barton


Brief explanation

The patent application describes a system of machine learning schemes for image processing tasks on a user device.
  • The system is designed to efficiently process images on mobile phones.
  • It can detect and transform specific regions within each frame of a live streaming video.
  • The system allows for selective partitioning and toggling of image effects within the live streaming video.
  • The innovation aims to improve the image processing capabilities of mobile devices.

Abstract

A system of machine learning schemes can be configured to efficiently perform image processing tasks on a user device, such as a mobile phone. The system can selectively detect and transform individual regions within each frame of a live streaming video. The system can selectively partition and toggle image effects within the live streaming video.

AR-BASED VIRTUAL KEYBOARD (17804818)

Main Inventor

Sharon Moll


Brief explanation

The patent application describes a gesture-based text entry user interface for an Augmented Reality (AR) device.
  • The AR system detects a specific gesture made by the user to initiate text entry.
  • It generates a virtual keyboard user interface with virtual keys for the user to input text.
  • The system uses cameras to track the user's selection of virtual keys.
  • Based on the selected virtual keys, the system generates entered text data.
  • The entered text data is then displayed to the user on the AR device's display.

Abstract

A gesture-based text entry user interface for an Augmented Reality (AR) device is provided. The AR system detects a start text entry gesture made by a user of the AR system, generates a virtual keyboard user interface including a virtual keyboard having a plurality of virtual keys, and provides to the user the virtual keyboard user interface. The AR system determines, using the one or more cameras, the user's selection of one or more selected virtual keys of the plurality of virtual keys and generates entered text data based on the one or more selected virtual keys. The AR system provides the entered text data to the user using a display of the AR system.

GRAPH-BASED PREDICTION FOR CONTACT SUGGESTION IN A LOCATION SHARING SYSTEM (18450110)

Main Inventor

Pierre Leveau


Brief explanation

The patent application describes methods, systems, and devices for generating contact suggestions for a user of a social network.
  • A first score is calculated for each user based on their connections in the social network.
  • A second score is calculated for each user using a machine learning model, which takes into account the first score.
  • The second score represents the probability of a user receiving a connection request from another user.
  • Based on the second score, a ranked list of contact suggestions is generated for the user.

Abstract

Methods, systems, and devices for generating contact suggestions for a user of a social network. A first score is computed for each one of the plurality of users, the first score being computed using an edge-weighted ranking algorithm based on the user graph. A second score is computed, using a machine learning model, for each user of the plurality of users, the second score of each user being, at least partially, based on the first score of said user, with the second score of each user being representative of a probability of a first user sending a connection request to said user. A ranked contact suggestion list of one or more users of the plurality of users is generated, the one or more users being ranked based on their respective second score.
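
The two-stage scoring pipeline in this abstract can be sketched in plain Python. Everything here is an illustrative assumption, not the patent's actual method: the edge-weighted ranking algorithm is reduced to summed edge weights, and the "machine learning model" to a fixed logistic function of the first score.

```python
import math

def first_scores(user_graph):
    """Edge-weighted ranking: score each user by the total weight of
    their edges in the user graph (a stand-in for the unspecified
    ranking algorithm in the filing)."""
    return {user: sum(edges.values()) for user, edges in user_graph.items()}

def second_score(first_score, weight=0.8, bias=-1.0):
    """Toy 'machine learning model': a logistic function of the first
    score, standing in for a trained model that predicts the
    probability of a connection request being sent to this user."""
    return 1.0 / (1.0 + math.exp(-(weight * first_score + bias)))

def contact_suggestions(user_graph):
    """Return users ranked by their second (probability) score."""
    firsts = first_scores(user_graph)
    seconds = {u: second_score(s) for u, s in firsts.items()}
    return sorted(seconds, key=seconds.get, reverse=True)

# Hypothetical weighted friendship graph: user -> {neighbor: edge weight}
graph = {
    "alice": {"bob": 2.0, "carol": 1.0},
    "bob": {"alice": 2.0},
    "carol": {"alice": 1.0, "dave": 3.0},
    "dave": {"carol": 3.0},
}
suggestions = contact_suggestions(graph)  # most likely contacts first
```

Because the logistic function is monotonic, the second score here preserves the first-score ordering; a real trained model would add features beyond the first score and could reorder the list.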

NAMED ENTITY RECOGNITION VISUAL CONTEXT AND CAPTION DATA (18201075)

Main Inventor

Di Lu


Brief explanation

The patent application is about a system that can identify named entities in captions of multimodal messages, such as social media posts.
  • The system uses an entity recognition system that incorporates a visual attention mechanism.
  • The visual attention mechanism generates a visual context representation from an image and caption.
  • The visual context representation is then used to identify one or more terms in the caption as named entities.

Abstract

A caption of a multimodal message (e.g., social media post) can be identified as a named entity using an entity recognition system. The entity recognition system can use a visual attention based mechanism to generate a visual context representation from an image and caption. The system can use the visual context representation to identify one or more terms of the caption as a named entity.
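
A minimal sketch of how a visual context representation might be formed, assuming a simple dot-product attention over image-region features (the filing does not specify the mechanism, and the vectors here are toy values):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def visual_context(token_vec, region_vecs):
    """Dot-product visual attention (an illustrative stand-in for the
    patent's visual attention mechanism): weight each image-region
    feature by its similarity to a caption token embedding, then take
    the weighted average as the visual context representation."""
    scores = [sum(t * r for t, r in zip(token_vec, rv)) for rv in region_vecs]
    weights = softmax(scores)
    dim = len(region_vecs[0])
    return [sum(w * rv[i] for w, rv in zip(weights, region_vecs))
            for i in range(dim)]

# Toy example: a caption-token embedding attends over two image regions.
token = [1.0, 0.0]
regions = [[1.0, 0.0], [0.0, 1.0]]
context = visual_context(token, regions)
```

In a full system, the context vector would be concatenated with the token representation and fed to a tagger that labels caption terms as named entities.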

DEVICE LOCATION BASED ON MACHINE LEARNING CLASSIFICATIONS (18234226)

Main Inventor

Ebony James Charlton


Brief explanation

The patent application describes a system where a client device can request location information from a server.
  • The server returns a list of venues that are near the client device's location.
  • The client device uses machine learning algorithms, such as convolutional neural networks, to determine which specific venue it is located in.
  • Based on the venue selection, the system selects appropriate imagery for presentation.
  • The presentation is published as an ephemeral message on a network platform.

Abstract

A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.

AUTOMATED AUGMENTED REALITY EXPERIENCE CREATION SYSTEM (17804500)

Main Inventor

Konstantin Gudkov


Brief explanation

This patent application describes methods and systems for creating augmented reality (AR) experiences on a messaging platform. Here are the key points:
  • The invention involves receiving input through a graphical user interface (GUI) that specifies various image transformation parameters.
  • A set of sample source images is accessed and modified based on the specified image transformation parameters to generate a set of sample target images.
  • A machine learning model is trained to generate a target image from a source image by establishing a relationship between the set of sample source images and the set of sample target images.
  • The trained machine learning model is then used to automatically generate an augmented reality experience.
  • The invention aims to simplify the process of creating AR experiences on a messaging platform by automating the image transformation and generation of AR content using machine learning.

Abstract

Methods and systems are disclosed for automatically creating AR experiences on a messaging platform. The methods and systems perform operations that include: receiving, via a graphical user interface (GUI), input that specifies a plurality of image transformation parameters; accessing a set of sample source images; modifying the set of sample source images based on the plurality of image transformation parameters to generate a set of sample target images; training a machine learning model to generate a given target image from a given source image by establishing a relationship between the set of sample source images and the set of sample target images; and automatically generating an augmented reality experience comprising the trained machine learning model.
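
The training-data step of this pipeline — turning sample source images into target images via the specified transformation parameters — can be sketched as follows. The parameter names (`gain`, `offset`) and the flat-list image representation are assumptions for illustration only; the filing does not specify the transformations.

```python
def apply_transformations(image, params):
    """Apply simple per-pixel transformations (gain and offset here, as
    illustrative stand-ins for the patent's unspecified image
    transformation parameters). An 'image' is a flat list of pixel
    intensities in [0, 255], clamped after transformation."""
    gain, offset = params["gain"], params["offset"]
    return [min(255, max(0, gain * p + offset)) for p in image]

def build_training_pairs(sample_source_images, params):
    """Produce (source, target) pairs that a model could be trained on
    to learn the source-to-target transformation."""
    return [(src, apply_transformations(src, params))
            for src in sample_source_images]

# Hypothetical sample source images and GUI-specified parameters.
sources = [[10, 100, 200], [0, 128, 255]]
pairs = build_training_pairs(sources, {"gain": 1.5, "offset": 10})
```

A model trained on such pairs can then apply the learned transformation to arbitrary camera frames, which is what lets the platform package it as an AR experience without hand-written effect code.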

GENERATING 3D DATA IN A MESSAGING SYSTEM (18450193)

Main Inventor

Kyle Goodrich


Brief explanation

The subject technology described in the patent application applies a three-dimensional (3D) effect to image data and depth data using an augmented reality content generator.
  • The technology generates a segmentation mask based on the image data, which helps in identifying different objects or areas in the image.
  • Background inpainting and blurring techniques are applied to the image data using the segmentation mask, resulting in background inpainted image data.
  • A packed depth map is generated based on the depth data, which provides information about the distance of objects in the image.
  • The technology uses a processor to generate a message that includes information about the applied 3D effect, the image data, and the depth data.

Abstract

The subject technology applies a three-dimensional (3D) effect to image data and depth data based at least in part on an augmented reality content generator. The subject technology generates a segmentation mask based at least on the image data. The subject technology performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a packed depth map based at least in part on a depth map of the depth data. The subject technology generates, using the processor, a message including information related to the applied 3D effect, the image data, and the depth data.

CROSS-MODAL SHAPE AND COLOR MANIPULATION (17814391)

Main Inventor

Menglei Chai


Brief explanation

The patent application describes an editing system for 3D objects using 2D sketches or RGB views.
  • The system utilizes multi-modal variational auto-decoders (MM-VADs) trained with a shared latent space.
  • Editing 2D sketches of a 3D object allows for editing the corresponding 3D object.
  • A latent code is determined based on the edited or sketched 2D sketch.
  • The latent code is used to generate a 3D object using the MM-VADs.
  • The latent space is divided into separate spaces for shapes and colors.
  • The MM-VADs are trained with variational auto-encoders (VAE) and a ground truth.

Abstract

Systems, computer readable media, and methods herein describe an editing system where a three-dimensional (3D) object can be edited by editing a 2D sketch or 2D RGB views of the 3D object. The editing system uses multi-modal (MM) variational auto-decoders (VADs) (MM-VADs) that are trained with a shared latent space that enables editing 3D objects by editing 2D sketches of the 3D objects. The system determines a latent code that corresponds to an edited or sketched 2D sketch. The latent code is then used to generate a 3D object using the MM-VADs with the latent code as input. The latent space is divided into a latent space for shapes and a latent space for colors. The MM-VADs are trained with variational auto-encoders (VAE) and a ground truth.

ADDING BEAUTY PRODUCTS TO AUGMENTED REALITY TUTORIALS (18232680)

Main Inventor

Christine Barron


Brief explanation

The patent application describes a system and method for adding beauty products to tutorials.
  • The system can access video data of a presenter creating a tutorial and applying a beauty product.
  • The video data is processed to identify changes to the presenter's body part from the application of the beauty product.
  • The system then identifies the beauty product used in the tutorial.
  • Information about the beauty product is retrieved and presented on a display device.

Abstract

Systems, methods, and computer-readable media for adding beauty products to tutorials are presented. Methods include accessing video data comprising images of a presenter creating a tutorial, the tutorial depicting the presenter applying a beauty product to a body part of the presenter. Methods further include processing the video data to identify changes to the body part of the presenter from an application of the beauty product, and responding to identifying changes to the body part of the presenter from the application of the beauty product by processing the video data to identify the beauty product. Methods further include retrieving information regarding the beauty product and causing presentation of information regarding the beauty product on a display device.

ELECTRONIC MESSAGING REPLY METHOD (18298109)

Main Inventor

Joseph Collins


Brief explanation

The patent application describes an electronic messaging system that receives messages from a sender and sends them to a recipient.
  • The system deletes the header information of the message after a certain time once the recipient has accessed the message.
  • After deleting the header information, the system also deletes the message content itself.
  • The system generates a reply ID that is associated with the sender and allows the system to send a reply message back to the sender.
  • The reply ID is included in the header information of the original message.
  • The reply ID is stored in a database, lookup table, or file system for easy retrieval and use.

Abstract

An electronic messaging system that receives an original electronic message from a sender direct access client associated with a sender, the original electronic message including header and message information and being directed to a recipient. The electronic messaging system deletes the header information after a first predetermined time after the message information is accessed by the recipient, deletes the message information after deleting the header information, and generates a reply ID correlated with the sender of the electronic message in at least one of a database, a lookup table and an entry in a file system. The reply ID enables the electronic messaging system to direct a reply electronic message back to the sender, the reply ID being part of the header information of the original electronic message.
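
The deletion-plus-reply-ID mechanism can be illustrated with a minimal in-memory sketch. Class and method names are assumptions, the "database" is a dictionary, and the predetermined delay is collapsed to a configurable sleep; none of this reflects the patent's actual implementation.

```python
import time
import uuid

class EphemeralMessenger:
    """Simplified sketch of a messaging system that deletes header
    information, then message content, after the recipient reads a
    message, while keeping a reply ID that still routes replies to
    the original sender."""

    def __init__(self, header_ttl=0.0):
        self.header_ttl = header_ttl  # delay before header deletion
        self.messages = {}            # message_id -> {header, body}
        self.reply_ids = {}           # reply_id -> sender (lookup table)

    def send(self, sender, recipient, body):
        reply_id = str(uuid.uuid4())
        self.reply_ids[reply_id] = sender  # correlate reply ID with sender
        message_id = str(uuid.uuid4())
        self.messages[message_id] = {
            "header": {"from": sender, "to": recipient, "reply_id": reply_id},
            "body": body,
        }
        return message_id

    def read(self, message_id):
        """Deliver the message, then delete header information after the
        predetermined time, followed by the message content."""
        msg = self.messages[message_id]
        body, reply_id = msg["body"], msg["header"]["reply_id"]
        time.sleep(self.header_ttl)  # first predetermined time
        msg["header"] = None         # delete header information...
        msg["body"] = None           # ...then delete message content
        return body, reply_id

    def reply(self, reply_id, from_user, body):
        """Route a reply back to the original sender via the reply ID."""
        original_sender = self.reply_ids[reply_id]
        return self.send(from_user, original_sender, body)
```

The key property the sketch preserves is that replies remain possible after the original header and content are gone, because only the opaque reply ID (not the sender's address) survives on the message path.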

DYNAMICALLY ASSIGNING STORAGE LOCATIONS FOR MESSAGING SYSTEM DATA (18306077)

Main Inventor

Bradley Baron


Brief explanation

This patent application describes a method for dynamically assigning storage locations based on the location of a user's device.
  • The method begins with the processor receiving a signal from a user's device.
  • The processor stores the current location of the user's device in a historical database.
  • The processor then checks if the user's home location matches the current location.
  • If the home location does not match the current location, the processor checks the frequency of the user being associated with the current location compared to the home location.
  • If the user has been associated with the current location more frequently, the processor updates the home location data to the current location.

This method allows for the automatic updating of a user's home location based on their device's location history.

Abstract

Method of dynamically assigning storage locations starts with the processor receiving a signal from a first client device associated with a first user. Processor stores a current location of the first client device in a historical database and determines whether a home location data associated with the first user matches the current location. In response to determining that the home location data associated with the first user does not match the current location, processor determines whether the first user has been associated with the current location at a greater frequency than the home location data based on the historical database. In response to determining that the first user has been associated with the current location at a greater frequency, processor updates the home location data associated with the first user to the current location. Other embodiments are described.
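
The home-location update rule reads as a simple frequency comparison, sketched below under stated assumptions: the "historical database" is a plain list of reported locations, and the function name and signature are illustrative, not taken from the filing.

```python
from collections import Counter

def update_home_location(home_location, current_location, history):
    """Sketch of the update rule described above. `history` holds the
    locations previously reported by the user's device; the current
    location is appended, and the home location switches only when the
    current location has been seen strictly more often than home."""
    history.append(current_location)  # store in the historical database
    if home_location == current_location:
        return home_location          # nothing to update
    counts = Counter(history)
    if counts[current_location] > counts[home_location]:
        return current_location       # reassign the home location
    return home_location
```

Requiring a strictly greater frequency keeps the home location sticky, so a short trip does not immediately move a user's data to a new storage region.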

DYNAMIC ADJUSTMENT OF EXPOSURE AND ISO TO LIMIT MOTION BLUR (18233729)

Main Inventor

Bo Ding


Brief explanation

The patent application describes a method for reducing motion blur in a visual tracking system.
  • The method involves accessing an initial image captured by an optical sensor in the system.
  • The camera operating parameters of the optical sensor for the initial image are identified.
  • The motion of the optical sensor for the initial image is determined.
  • Based on the camera operating parameters and the motion of the optical sensor, the motion blur level of the initial image is calculated.
  • The camera operating parameters of the optical sensor are adjusted based on the calculated motion blur level.
  • The purpose of the method is to improve the quality and accuracy of visual tracking by minimizing motion blur in the captured images.

Abstract

A method for limiting motion blur in a visual tracking system is described. In one aspect, the method includes accessing a first image generated by an optical sensor of the visual tracking system, identifying camera operating parameters of the optical sensor for the first image, determining a motion of the optical sensor for the first image, determining a motion blur level of the first image based on the camera operating parameters of the optical sensor and the motion of the optical sensor, and adjusting the camera operating parameters of the optical sensor based on the motion blur level.
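
A rough sketch of the adjustment loop, consistent with the title's exposure/ISO trade-off. The blur model (angular velocity x exposure time projected through a pixel focal length) and all numeric defaults are assumptions for illustration; the filing does not give formulas.

```python
def motion_blur_pixels(angular_velocity, exposure_time, focal_length_px):
    """Approximate motion blur, in pixels, for a rotating camera: the
    image shifts by roughly (angular velocity x exposure time) radians
    projected through the focal length. Illustrative model only."""
    return angular_velocity * exposure_time * focal_length_px

def adjust_camera(exposure_time, iso, angular_velocity,
                  focal_length_px=500.0, max_blur_px=2.0):
    """If the predicted blur exceeds the limit, shorten the exposure to
    meet the limit and raise ISO proportionally to keep overall image
    brightness roughly constant."""
    blur = motion_blur_pixels(angular_velocity, exposure_time, focal_length_px)
    if blur <= max_blur_px:
        return exposure_time, iso  # within limits; leave parameters alone
    new_exposure = exposure_time * (max_blur_px / blur)
    new_iso = iso * (exposure_time / new_exposure)  # compensate brightness
    return new_exposure, new_iso

# Fast rotation at a long exposure forces a shorter exposure, higher ISO.
exposure, iso = adjust_camera(exposure_time=0.02, iso=100, angular_velocity=1.0)
```

The trade-off matters for tracking because shorter exposures sharpen feature points for the tracker, while the ISO increase pays for it with sensor noise rather than blur.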

CUSTOMIZED IMAGE REPROCESSING SYSTEM USING A MACHINE LEARNING MODEL (18233685)

Main Inventor

Newar Husam Al Majid


Brief explanation

This patent application addresses the problem of automatically reprocessing images captured by a camera to produce personalized results.
  • The innovation is a customized image reprocessing system powered by machine learning techniques.
  • The system can automatically reprocess images on a pixel level using a machine learning model.
  • The machine learning model takes the image's pixel values, sensor data detected by the camera's digital sensor at the time of capture, and flash calibration parameters generated for that specific user as input.
  • The system aims to provide personalized and customized image reprocessing based on individual user preferences and characteristics.

Abstract

The technical problem of automatically reprocessing an image captured by a camera in a manner that produces a personalized result is addressed by providing a customized image reprocessing system powered by machine learning techniques. The customized image reprocessing system is configured to automatically reprocess an image on a pixel level using a machine learning model that takes, as input, the image represented by pixel values, sensor data detected by the digital sensor of a camera at the time the image was captured, and, also, flash calibration parameters previously generated for that specific user.