Snap Inc. patent applications on 2025-05-15

From WikiPatents
Revision as of 18:48, 21 May 2025 by Wikipatents (talk | contribs) (Automated patent report)

Patent Applications by Snap Inc. on May 15th, 2025

Snap Inc.: 9 patent applications


Recurring keywords in these patent application abstracts include: augmented, object, system, reality, systems, rendering, receives, embodiments, herein, describe.


Patent Applications by Snap Inc.

20250153754. SYSTEMS AND EMBODIMENTS HEREIN DESCRIBE AN AUGMENTED REALITY (AR) OBJECT RENDERING SYSTEM (simplified abstract)


Abstract: Systems and embodiments herein describe an augmented reality (AR) object rendering system. The AR object rendering system receives an image, generates a set of noise parameters and a set of blur parameters for the image using a neural network trained on a paired dataset of images, identifies an AR object associated with the image, modifies the AR object using the set of noise parameters and the set of blur parameters, and displays the modified AR object within the image.
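The modification step described above could be sketched roughly as follows. This is a minimal NumPy illustration, not Snap's implementation: `apply_noise_and_blur`, the box-blur kernel, and the Gaussian noise model are all assumptions standing in for the neural-network-predicted blur and noise parameters in the abstract.

```python
import numpy as np

def apply_noise_and_blur(ar_object, noise_sigma, blur_radius):
    """Match an AR object (2D array of values in [0, 1]) to a camera
    image by applying an estimated blur, then sensor-like noise.

    Hypothetical sketch: a box blur and additive Gaussian noise stand
    in for the learned parameters described in the patent abstract."""
    # Box blur as a stand-in for the predicted blur parameters.
    k = 2 * blur_radius + 1
    kernel = np.ones((k, k)) / (k * k)
    h, w = ar_object.shape
    padded = np.pad(ar_object, blur_radius, mode="edge")
    blurred = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + k, j:j + k] * kernel).sum()
    # Additive Gaussian noise approximating the predicted noise profile.
    rng = np.random.default_rng(0)
    noisy = blurred + rng.normal(0.0, noise_sigma, size=(h, w))
    return np.clip(noisy, 0.0, 1.0)
```

A real pipeline would compose the modified object back into the camera frame; this sketch only shows the per-object appearance matching.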


20250156047. ASPECTS OF THE PRESENT DISCLOSURE INVOLVE A SYSTEM AND A METHOD (simplified abstract)


Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving, by a client device implementing a messaging application, a request to access a display of a plurality of augmented reality experiences; retrieving a plurality of identifiers of each of the plurality of augmented reality experiences; determining that a given augmented reality experience of the plurality of augmented reality experiences is associated with an access restriction; modifying a given identifier of the plurality of identifiers associated with the given augmented reality experience in response to determining that the given augmented reality experience is associated with the access restriction; and generating, for display on the client device, a graphical user interface that includes the plurality of identifiers comprising the modified given identifier.
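The identifier-modification step might look like the following sketch. The abstract does not specify how a restricted identifier is modified; `build_identifier_list` and the `(locked)` marker are hypothetical.

```python
def build_identifier_list(experiences, restricted_ids, marker=" (locked)"):
    """Return display identifiers for a list of AR experiences,
    modifying the identifier of any experience that carries an
    access restriction (hypothetical lock-marker scheme).

    experiences: list of (experience_id, identifier) pairs.
    restricted_ids: set of experience ids with an access restriction."""
    display = []
    for exp_id, name in experiences:
        if exp_id in restricted_ids:
            display.append(name + marker)  # modified given identifier
        else:
            display.append(name)
    return display
```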


20250156202. SYSTEMS AND METHODS ARE PROVIDED FOR DETERMINING A SET OF SELECTORS (simplified abstract)


Abstract: Systems and methods are provided for determining a set of selectors associated with a publisher identifier, each selector comprising specified content to extract from source data and one or more rules for extracting the specified content. The systems and methods further provide, for each location data item in a list of location data: extracting, from the source data, specified content for each selector of at least a subset of the set of selectors based on the one or more rules specified in each such selector; determining a template to use to generate a media content item, the template comprising regions corresponding to the selectors; populating each region of the template using the specified content for the corresponding selector; and generating the media content item from the populated template.
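A minimal sketch of the selector-and-template flow, assuming each selector's rule is a key path into a source dictionary (the abstract allows arbitrary extraction rules, so this rule format is an assumption):

```python
def generate_media_item(source, selectors, template_regions):
    """Extract content per selector rule, then populate template regions.

    source: nested dict of source data.
    selectors: {selector_name: key_path} where key_path is a list of keys.
    template_regions: {region_name: selector_name}."""
    extracted = {}
    for name, rule in selectors.items():
        # Walk the key path; missing keys leave the selector empty.
        value = source
        for key in rule:
            value = value.get(key) if isinstance(value, dict) else None
            if value is None:
                break
        extracted[name] = value
    # Populate each template region from its corresponding selector.
    return {region: extracted.get(sel)
            for region, sel in template_regions.items()}
```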


20250157064. A METHOD FOR AR-GUIDED DEPTH ESTIMATION (simplified abstract)


Abstract: A method for AR-guided depth estimation is described. The method includes identifying a virtual object rendered in a first frame that is generated based on a first pose of an augmented reality (AR) device; determining a second pose of the AR device, the second pose following the first pose; identifying an augmentation area in a second frame based on the virtual object rendered in the first frame and the second pose; determining depth information for the augmentation area in the second frame; and rendering the virtual object in the second frame based on the depth information.
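The augmentation-area step can be illustrated with a deliberately simplified sketch: the pose change is treated as a pure 2D pixel shift, whereas a real AR device would reproject the object with full 6-DoF poses. `augmentation_area` and the margin value are assumptions.

```python
def augmentation_area(bbox, pose1, pose2, margin=8):
    """Estimate where a virtual object from the previous frame will land
    in the new frame, so depth need only be computed for that region.

    bbox: (x0, y0, x1, y1) of the object in the first frame.
    pose1, pose2: (x, y) pixel-space positions standing in for full
    device poses -- a heavy simplification of the patented method."""
    dx = pose2[0] - pose1[0]
    dy = pose2[1] - pose1[1]
    x0, y0, x1, y1 = bbox
    # Shift the object's bounding box and pad by a safety margin.
    return (x0 + dx - margin, y0 + dy - margin,
            x1 + dx + margin, y1 + dy + margin)
```

The payoff of the technique is that depth estimation, usually the expensive step, runs only inside this area instead of over the whole frame.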


20250157138. SYSTEMS AND METHODS FOR RENDERING THREE-DIMENSIONAL (3D) SCENES (simplified abstract)


Abstract: Systems and methods for rendering three-dimensional (3D) scenes having improved visual characteristics from a pair of 2D images having different viewpoints. The 3D scene is created by obtaining a first two-dimensional (2D) image of a scene object from a first viewpoint, obtaining a second 2D image of the scene object from a second viewpoint that is different from the first viewpoint, creating a depth map from the first and second 2D images, creating an initial 3D scene from the depth map and the first and second 2D images, detecting regions of the initial 3D scene with incomplete image information, reconstructing the detected regions of the 3D scene, determining replacement information and modifying the reconstructed regions, and rendering the 3D scene with the modified reconstructed regions.
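The depth-map step rests on standard stereo geometry: depth Z = f·B/d, where f is the focal length in pixels, B the baseline between viewpoints, and d the per-pixel disparity. A minimal sketch, assuming a flat list of disparities (a real depth map is 2D, and the abstract's reconstruction step is only hinted at by the `None` markers):

```python
def depth_map(disparities, focal_px, baseline_m):
    """Build depths (metres) from disparities between two views via
    Z = f * B / d. Pixels with no valid disparity are the 'incomplete
    image information' regions the pipeline later reconstructs; they
    are marked here as None."""
    depths = []
    for d in disparities:
        depths.append(focal_px * baseline_m / d if d and d > 0 else None)
    return depths

def incomplete_regions(depths):
    """Indices the reconstruction step would need to fill in."""
    return [i for i, z in enumerate(depths) if z is None]
```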


20250157162. A HEAD-WEARABLE APPARATUS DETERMINES AN IMAGINARY REFERENCE PLANE (simplified abstract)


Abstract: A head-wearable apparatus determines an imaginary reference plane intersecting a head of a user viewing augmented content in a viewing pane having vertical and lateral dimensions in a display of the head-wearable apparatus. The imaginary reference plane coincides with a first viewing direction of the head of the user. The apparatus detects a rotational movement of the head of the user in a vertical direction while viewing the augmented content. In response to the detected rotational movement, the apparatus determines a second viewing direction of the head of the user when viewing the augmented content in the second viewing direction and determines a reference angle between the imaginary reference plane and the second viewing direction. Based on the reference angle, the apparatus assigns one of a billboard display mode and a headlock display mode (or a combination) to the augmented content presented in the display.
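The angle computation and mode assignment might be sketched as below. The abstract does not say which angles map to which mode, so the threshold value and the billboard/headlock assignment here are purely illustrative.

```python
import math

def reference_angle(plane_normal, view_dir):
    """Angle (degrees) between a viewing direction and the reference
    plane, given the plane's normal vector. The angle to the plane is
    90 degrees minus the angle to its normal."""
    nx, ny, nz = plane_normal
    vx, vy, vz = view_dir
    dot = nx * vx + ny * vy + nz * vz
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    v = math.sqrt(vx * vx + vy * vy + vz * vz)
    return abs(90.0 - math.degrees(math.acos(dot / (n * v))))

def choose_display_mode(reference_angle_deg, headlock_threshold_deg=30.0):
    """Assign a display mode from the reference angle. The threshold
    and the direction of the mapping are illustrative assumptions."""
    if abs(reference_angle_deg) < headlock_threshold_deg:
        return "billboard"   # content stays anchored in the world
    return "headlock"        # content follows the head rotation
```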


20250157234. A SYSTEM AND METHOD FOR AUTOMATICALLY GENERATING CAPTIONS (simplified abstract)


Abstract: A system and method for automatically generating captions for images and videos using artificial intelligence is disclosed. The system receives image or video data and analyzes the pixels to detect visual features including objects, people, text, and backgrounds. These detected features are used to generate a prompt summarizing the contents. The prompt is provided to a trained natural language processing model, which outputs caption text describing the image/video data. The system can incorporate contextual factors to enhance the relevance of the generated captions. The captions are ranked using relevance, diversity, and quality metrics, then displayed to the user as an overlay on the media or in a separate interface pane. Users can cycle through different generated captions having varying tones and styles. The techniques combine computer vision, natural language processing, and deep learning to automatically generate context-relevant captions without manual user input.
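The ranking step could be a weighted combination of the three metrics named in the abstract. A minimal sketch; the weights and the linear combination are assumptions, since the abstract does not specify how the metrics are merged:

```python
def rank_captions(captions, scores, weights=(0.5, 0.3, 0.2)):
    """Rank generated captions by a weighted mix of relevance,
    diversity, and quality scores (weights are illustrative).

    scores: {caption: (relevance, diversity, quality)}, each in [0, 1]."""
    def combined(caption):
        r, d, q = scores[caption]
        wr, wd, wq = weights
        return wr * r + wd * d + wq * q
    # Highest combined score first; users then cycle through the list.
    return sorted(captions, key=combined, reverse=True)
```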


20250158940. A SYSTEM AND METHOD FOR GENERATING CONTEXTUALLY RELEVANT REPLY SUGGESTIONS (simplified abstract)


Abstract: A system and method for generating contextually relevant reply suggestions for media posts is disclosed. The system analyzes media content to identify visual objects, scenes, text, and metadata attributes using computer vision techniques. Identified objects and attributes are incorporated into structured prompt templates to construct detailed natural language descriptions of the media context. The prompts are provided to a text generation artificial intelligence (AI) that outputs a plurality of contextual reply suggestions based on the media analysis. Suggestions are displayed as selectable options adjacent to the media post. Users can cycle through suggestions and select a reply to send. Selections are logged to improve the AI model. Feedback on suggestion quality can also be collected. By integrating computer vision and AI generation driven by engineered prompts, the system produces highly relevant, personalized responses tailored to media content. The techniques enhance user engagement with media posts through intelligent AI reply suggestions.
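The structured-prompt construction might look like the sketch below. The template wording and the `build_reply_prompt` function are hypothetical examples of turning computer-vision outputs into a natural-language prompt for a text-generation model.

```python
def build_reply_prompt(objects, scene, text_in_media):
    """Fill a structured prompt template with computer-vision outputs
    (detected objects, scene label, and any text found in the media)
    so a text-generation model can propose contextual replies.

    The template wording here is an illustrative assumption."""
    parts = [f"The post shows a {scene} scene"]
    if objects:
        parts.append("featuring " + ", ".join(objects))
    if text_in_media:
        parts.append(f'with the caption "{text_in_media}"')
    return " ".join(parts) + ". Suggest three short, friendly replies."
```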


20250159101. SYSTEMS, DEVICES, MEDIA, AND METHODS ARE PRESENTED FOR GENERATING GRAPHICAL REPRESENTATIONS (simplified abstract)


Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to the positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
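The "linked to positions on the face" idea can be illustrated by anchoring each input point to its nearest facial landmark and storing the offset, so the graphic tracks the face as it moves. `link_input_to_face` and `render_positions` are hypothetical names for this sketch.

```python
def link_input_to_face(touch_points, landmarks):
    """Link each user-input point (x, y) to its nearest facial landmark
    and record the offset, so the drawing stays attached to the face.

    Returns a list of (landmark_index, dx, dy) tuples."""
    linked = []
    for tx, ty in touch_points:
        nearest = min(range(len(landmarks)),
                      key=lambda i: (landmarks[i][0] - tx) ** 2
                                  + (landmarks[i][1] - ty) ** 2)
        lx, ly = landmarks[nearest]
        linked.append((nearest, tx - lx, ty - ly))
    return linked

def render_positions(linked, landmarks):
    """Recompute on-screen positions from the current frame's landmarks,
    re-applying each stored offset."""
    return [(landmarks[i][0] + dx, landmarks[i][1] + dy)
            for i, dx, dy in linked]
```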

