Patent Applications by Snap Inc. on July 3rd, 2025
Snap Inc.: 15 patent applications
Snap Inc. has applied for patents in the following classification areas (number of applications in parentheses):
- G06T19/006: Mixed reality (object pose determination, tracking or camera calibration for mixed reality) (3)
- G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures (2)
- G02B27/017: Head-up displays (1)
- G02B27/64: Imaging systems using optical elements for stabilisation of the lateral and angular position of the image (1)
- G05D1/46: Systems for controlling or regulating non-electric variables (1)
- G06F9/3877: Using a slave processor, e.g. coprocessor (1)
- G06T11/60: Editing figures and text; combining figures or text (1)
- G06V20/40: Image or video recognition or understanding (1)
- G06V40/172: Image or video recognition or understanding (1)
- H04L51/216: Handling conversation history, e.g. grouping of messages in sessions or threads (1)
Keywords recurring in the patent application abstracts include: device, using, sensors, eyewear, inertial, visual-inertial, tracking, monitors, visual, and odometry.
Top Inventors:
- Olha Borys of Vienna AT (1 patent)
- Georg Halmetschlager-Funek of Vienna AT (1 patent)
- Matthias Kalkgruber of Vienna AT (1 patent)
- Daniel Wolf of Mödling AT (1 patent)
- Jakob Zillner of Krems AT (1 patent)
Patent Applications by Snap Inc.
20250216679. DYNAMIC SENSOR SELECTION FOR VISUAL INERTIAL ODOMETRY SYSTEMS (Snap Inc.)
Abstract: Visual-inertial tracking of an eyewear device using sensors. The eyewear device monitors the sensors of a visual inertial odometry system (VIOS) that provide input for determining a position of the device within its environment. The eyewear device determines the status of the VIOS based on information from the sensors and adjusts the plurality of sensors (e.g., by turning sensors on/off, changing the sampling rate, or a combination thereof) based on the determined status. The eyewear device then determines its position within the environment using the adjusted plurality of sensors.
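A minimal sketch of this monitor-and-adjust loop is shown below; the Sensor class, field names, and thresholds are illustrative assumptions, not Snap's implementation:

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    enabled: bool = True
    sample_rate_hz: float = 100.0
    noise_level: float = 0.0  # running estimate of signal quality

def adjust_sensors(sensors: list[Sensor], noise_threshold: float = 0.5) -> None:
    """Turn off degraded sensors and raise the sampling rate of clean ones."""
    for s in sensors:
        if s.noise_level > noise_threshold:
            s.enabled = False          # drop a noisy sensor from the VIOS input set
        else:
            s.enabled = True
            s.sample_rate_hz = 200.0   # sample clean sensors more often

sensors = [Sensor("camera", noise_level=0.1),
           Sensor("imu", noise_level=0.8),
           Sensor("gps", noise_level=0.3)]
adjust_sensors(sensors)
print([(s.name, s.enabled, s.sample_rate_hz) for s in sensors])
```

The position update then runs against whatever subset of sensors survives this pass.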
20250216693. AUTOMATED VIDEO CAPTURE COMPOSITION SYSTEM (Snap .)
Abstract: systems, devices, media, and methods are described for capturing a series of video clips, together with position, orientation, and motion data collected from an inertial measurement unit during filming. the methods in some examples include calculating camera orientations based on the data collected, computing a stabilized output path based on the camera orientations, and then combining the video segments in accordance with said stabilized output path to produce a video composition that is stable, short, and easy to share. the video clips are filmed in accordance with a set of conditions called a capture profile. in some implementations, the capture profile conditions are reactive, adjusting in real time, during filming, in response to sensor data gathered in real time from a sensor array.
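One plausible reading of the "stabilized output path" step is low-pass filtering the per-frame orientation track. The sketch below uses a simple moving average over Euler angles; a production pipeline would more likely smooth quaternions, and all names here are illustrative:

```python
import numpy as np

def stabilized_path(orientations: np.ndarray, window: int = 15) -> np.ndarray:
    """Smooth per-frame camera yaw/pitch/roll with a centered moving average."""
    kernel = np.ones(window) / window
    return np.column_stack([
        np.convolve(orientations[:, axis], kernel, mode="same")
        for axis in range(orientations.shape[1])
    ])

# 300 frames of noisy Euler angles (radians), e.g. derived from IMU data
raw = np.cumsum(np.random.randn(300, 3) * 0.01, axis=0)
smooth = stabilized_path(raw)
correction = smooth - raw  # per-frame rotation to apply when compositing clips
```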
20250216857. AUTONOMOUS DRONE NAVIGATION BASED ON VISION (Snap Inc.)
Abstract: Systems, computer-readable media, and methods for autonomous drone navigation based on vision are disclosed. Example methods include capturing an image using an image capturing device of the autonomous drone, processing the image to identify an object, and navigating the autonomous drone relative to the object for a period of time. After the period of time, a second type of navigation is used, based on structure-from-motion. Images captured during the period of time are used to transition to the second type of navigation. The second type of navigation uses a downward-pointing navigation camera and other sensors.
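The two-phase handover reads naturally as a small state machine: navigate relative to the detected object while collecting frames, then switch to structure-from-motion navigation once the period expires. A sketch under those assumptions (class and mode names are hypothetical):

```python
import time
from enum import Enum, auto

class NavMode(Enum):
    OBJECT_RELATIVE = auto()            # track/orbit a detected object
    STRUCTURE_FROM_MOTION = auto()      # navigate from reconstructed geometry

class DroneNavigator:
    def __init__(self, object_phase_seconds: float):
        self.mode = NavMode.OBJECT_RELATIVE
        self.deadline = time.monotonic() + object_phase_seconds
        self.frames: list[bytes] = []   # imagery gathered for the SfM handover

    def step(self, frame: bytes) -> NavMode:
        if self.mode is NavMode.OBJECT_RELATIVE:
            self.frames.append(frame)   # collect images during phase one
            if time.monotonic() >= self.deadline:
                self.mode = NavMode.STRUCTURE_FROM_MOTION  # switch after the period
        return self.mode

nav = DroneNavigator(object_phase_seconds=30.0)
mode = nav.step(b"<jpeg bytes>")        # called once per captured frame
```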
20250216947. REFINING GESTURE MODELS (Snap Inc.)
Abstract: A model generation system captures, using cameras, hand-tracking data of a gesture made by a user demonstrating the gesture. The model generation system generates a three-dimensional model of the gesture using the hand-tracking data and provides a display of the three-dimensional model to the user. The model generation system receives, from the user, model refining data refining the three-dimensional model. The model generation system generates a refined three-dimensional model of the gesture using the model refining data and the three-dimensional model. The refined three-dimensional model is used for detecting the gesture.
20250216948. TRANSLATING 3D VOLUME EXTENDED REALITY (Snap Inc.)
Abstract: An extended reality (XR) system that translates virtual objects in an XR user interface using gestures is provided. The XR system displays a virtual object to a user in an extended reality interface, detects a pinch gesture and its selection location within the virtual object's boundaries, and determines an offset vector from the virtual object's center point to the pinch gesture's selection location. The XR system generates a translated virtual object using the offset vector and a current location of the pinch gesture, and displays the translated virtual object to the user.
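The offset-vector technique is simple to state in code: record the vector from the object's center to the pinch point at grab time, then, as the pinch moves, place the object so the grabbed point stays under the fingers. A minimal numeric sketch (function names and coordinates are illustrative):

```python
import numpy as np

def begin_drag(object_center: np.ndarray, pinch_point: np.ndarray) -> np.ndarray:
    """Offset vector from the object's center to the pinch selection point."""
    return pinch_point - object_center

def drag_to(current_pinch: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """New object center that keeps the grabbed point under the pinch."""
    return current_pinch - offset

center = np.array([0.0, 1.0, -2.0])     # virtual object center (meters)
pinch = np.array([0.1, 1.2, -2.0])      # pinch landed near the top edge
offset = begin_drag(center, pinch)
new_center = drag_to(np.array([0.5, 1.0, -1.5]), offset)  # hand moved; object follows
```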
20250217159. MESSAGE-BASED PROCESSING ASSIGNMENT NEURAL NETWORK LAYERS PROCESSOR CLUSTERS (Snap Inc.)
Abstract: Systems and methods described herein relate to a multi-processor system for processing neural networks. The multi-processor system includes multiple processor clusters, each comprising processor cluster elements, with neural network layers assigned to one or more processor clusters. In response to an activation signal associated with a processor cluster element of a source processor cluster, the multi-processor system performs computations using control data for a set of destination processor clusters. The control data includes an offset computed using coordinates associated with the source and destination processor clusters. Based on these computations, the multi-processor system can selectively identify target destination processor clusters from the set of destination processor clusters and transmit output messages to them.
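The coordinate-derived offset suggests clusters addressed on a grid. The sketch below assumes a row-major 2D layout; the grid width, hop limit, and all names are invented for illustration of how an offset can both address a destination and gate whether a message is sent:

```python
GRID_W = 4  # processor clusters laid out on a 4-wide grid (assumed topology)

def cluster_offset(src: tuple[int, int], dst: tuple[int, int]) -> int:
    """Signed offset between clusters, computed from their grid coordinates."""
    src_idx = src[1] * GRID_W + src[0]
    dst_idx = dst[1] * GRID_W + dst[0]
    return dst_idx - src_idx

def route(src: tuple[int, int], candidates: list[tuple[int, int]], max_hops: int):
    """Select destination clusters within reach, pairing each with its control data."""
    targets = [d for d in candidates if abs(cluster_offset(src, d)) <= max_hops]
    return [(d, {"offset": cluster_offset(src, d)}) for d in targets]

print(route((0, 0), [(1, 0), (3, 2), (2, 1)], max_hops=6))
```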
20250218087. GENERATING MODIFIED USER CONTENT THAT INCLUDES ADDITIONAL TEXT CONTENT (Snap Inc.)
Abstract: In one or more implementations, user content items generated using a client application may be shared with users that are not contacts of the user within the client application. A user interface that indicates a number of recipients of the user content item may be generated that also includes a first section that displays the user content item and a second section to add text content to the user content item. In various examples, one or more classifications may be associated with the user content item.
20250218130. PIXEL-BASED MULTI-VIEW GARMENT TRANSFER (Snap Inc.)
Abstract: Methods and systems are disclosed for using machine learning models to perform pixel-based deformation of fashion items. The methods and systems receive one or more images depicting a person in an individual pose and receive a first source image depicting a first view of a target fashion item and a second source image depicting a second view of the target fashion item. The methods and systems process, using one or more machine learning models, the one or more images that depict the person in the individual pose together with the first and second source images to generate a flow field, the flow field indicating a likelihood of existence and location of each pixel of the one or more images relative to the first and second source images. The methods and systems modify a portion of the one or more images to overlay the target fashion item on the person.
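To make the flow-field idea concrete: once a model predicts, per output pixel, where to sample from a source view and how confident it is that the pixel exists, the warp itself is a gather operation. A nearest-neighbor sketch with NumPy (the learned model is out of scope here; the shapes and confidence gate are assumptions):

```python
import numpy as np

def warp_with_flow(source: np.ndarray, flow: np.ndarray,
                   confidence: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Pull pixels from `source` along a dense flow field where confidence is high."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs + flow[..., 0]).astype(int), 0, source.shape[1] - 1)
    src_y = np.clip((ys + flow[..., 1]).astype(int), 0, source.shape[0] - 1)
    out = np.zeros((h, w, source.shape[2]), dtype=source.dtype)
    mask = confidence > threshold           # "likelihood of existence" gate
    out[mask] = source[src_y[mask], src_x[mask]]
    return out

src = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)  # garment view
flow = np.random.randn(64, 64, 2) * 2       # stand-in for model output
conf = np.random.rand(64, 64)               # stand-in existence likelihood
warped = warp_with_flow(src, flow, conf)    # composited over the person image
```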
20250218140. DEVICE POSE DETERMINATION USING INTERPOLATED IMU POSE (Snap Inc.)
Abstract: An augmented reality device generates an updated position and orientation (pose) value by initially determining, using image-based processing, a pose estimate from a current image frame, a previous image frame, and a previous pose. First and second inertial measurement unit (IMU) poses, having timestamps on either side of a timestamp of the current image frame, are determined. An interpolated IMU pose is determined from the two IMU poses and the three timestamps. A transformation between the current-image pose estimate and the interpolated IMU pose is determined and applied to the current-image pose estimate to generate a pose update for use in operating the augmented reality device.
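The interpolation step has a standard form: linearly interpolate position and spherically interpolate (slerp) orientation between the two bracketing IMU poses, parameterized by where the frame timestamp falls in their interval. A self-contained sketch with quaternions as [w, x, y, z] arrays (names and sample values are illustrative):

```python
import numpy as np

def slerp(q0: np.ndarray, q1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between two unit quaternions."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # take the short arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_imu_pose(t_frame, t0, p0, q0, t1, p1, q1):
    """IMU pose at the camera frame's timestamp, between two bracketing IMU poses."""
    t = (t_frame - t0) / (t1 - t0)     # 0..1 along the bracketing interval
    return (1 - t) * p0 + t * p1, slerp(q0, q1, t)

p, q = interpolate_imu_pose(
    t_frame=0.015,
    t0=0.010, p0=np.zeros(3), q0=np.array([1.0, 0, 0, 0]),
    t1=0.020, p1=np.array([0.01, 0, 0]), q1=np.array([0.9999, 0.0141, 0, 0]))
```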
20250218143. GENERATING USER INTERFACES DISPLAYING AUGMENTED REALITY GRAPHICS (Snap Inc.)
Abstract: An augmented reality (AR) graphics system is provided. The AR graphics system may detect an object in a real-world scene that corresponds to an AR graphics display surface. The AR graphics system may generate AR graphics that are displayed as overlays of the AR graphics display surface. The AR graphics system may track the motion of a graphics input tool with respect to the AR graphics display surface to generate AR graphics based on the motion of the graphics input tool. The AR graphics may comprise a number of markings generated based on the motion of the graphics input tool.
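Turning tracked tool motion into markings can be as simple as projecting each tracked tip position onto the display surface's plane and accumulating a stroke. A geometric sketch (the plane, tip samples, and names are invented for illustration):

```python
import numpy as np

def project_to_surface(tip: np.ndarray, plane_point: np.ndarray,
                       plane_normal: np.ndarray) -> np.ndarray:
    """Orthogonal projection of a tracked tool tip onto the drawing plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return tip - np.dot(tip - plane_point, n) * n

surface_origin = np.array([0.0, 0.0, 0.0])   # a point on the display surface
surface_normal = np.array([0.0, 0.0, 1.0])   # surface faces the viewer
tip_samples = [np.array([0.10, 0.10, 0.02]), # tool hovers slightly off-plane
               np.array([0.15, 0.12, 0.01]),
               np.array([0.20, 0.15, 0.03])]

stroke = [project_to_surface(t, surface_origin, surface_normal) for t in tip_samples]
# `stroke` is the marking rendered as an AR overlay on the surface.
```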
20250218175. IMAGE CLASSIFICATION SEQUENCE FRAMES (Snap Inc.)
Abstract: Examples relate to image classification. A method includes converting frame image data into frame event data. The frame image data represents an appearance of image content in a sequence of image frames. The frame event data represents one or more events. A conversion process includes determining, for each event of the one or more events, one or more event parameters including positional coordinates corresponding to a location of a respective pixel in the sequence of image frames. The method can include combining the frame event data of multiple image frames in the sequence of image frames using a weight factor. Event parameters of the frame event data are processed to determine an event-based region of interest in the sequence of image frames. A classification associated with the image content is determined based on the event-based region of interest.
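A common way to derive events from conventional frames is to threshold per-pixel intensity changes between consecutive frames, then accumulate the events with a time-decaying weight so recent activity dominates the region of interest. A sketch under those assumptions (the threshold, decay factor, and names are illustrative):

```python
import numpy as np

def frames_to_events(frames: np.ndarray, threshold: float = 10.0):
    """Pixels whose intensity changes beyond a threshold become (x, y, t, polarity) events."""
    events = []
    for t in range(1, len(frames)):
        diff = frames[t].astype(int) - frames[t - 1].astype(int)
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        events += [(x, y, t, int(np.sign(diff[y, x]))) for y, x in zip(ys, xs)]
    return events

def event_heatmap(events, shape, decay: float = 0.8) -> np.ndarray:
    """Combine events across frames with a weight factor; peaks suggest the ROI."""
    heat = np.zeros(shape)
    t_max = max(e[2] for e in events)
    for x, y, t, _ in events:
        heat[y, x] += decay ** (t_max - t)   # older events weigh less
    return heat

frames = np.random.randint(0, 255, (5, 32, 32), dtype=np.uint8)
roi_heat = event_heatmap(frames_to_events(frames), shape=(32, 32))
# The peak region of roi_heat is what feeds the downstream classifier.
```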
20250218216. SECURE BIOMETRIC METADATA GENERATION (Snap Inc.)
Abstract: Systems, devices, media, and methods are presented for generating biometric image data. In one example, a system accesses a set of images stored on a mobile computing device. The system identifies one or more faces depicted in the set of images and generates a set of face images from the set of images. The system determines a set of positions of a set of facial features depicted within the set of face images and generates a set of biometric reference maps based on the set of positions. The system transmits the set of face images to a reference server and stores the set of biometric reference maps on the mobile computing device.
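The split the abstract describes (face crops go to a server, biometric reference maps stay on-device) can be illustrated with a toy reference map: landmark positions normalized to the face bounding box, so the map describes geometry rather than pixels. All names and values here are invented:

```python
import hashlib

def reference_map(landmarks: dict, face_box: tuple) -> dict:
    """Normalize landmark positions to the face box so the map is image-independent."""
    x0, y0, w, h = face_box
    return {name: ((x - x0) / w, (y - y0) / h) for name, (x, y) in landmarks.items()}

landmarks = {"left_eye": (140, 110), "right_eye": (190, 112), "mouth": (165, 170)}
ref = reference_map(landmarks, face_box=(100, 80, 120, 130))

# The cropped face image is what gets uploaded; the reference map stays on-device.
local_store = {hashlib.sha256(b"face_0").hexdigest(): ref}
```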
Abstract: A server has a processor and a memory storing a multiple channel message thread module with instructions executed by the processor to identify when participants at client devices are actively viewing a common message thread at the same time, to establish a participant viewing state. An alternate channel communication lock prompt is supplied to the client devices in response to the participant viewing state. An alternate channel communication is delivered to the client devices in response to activation of the alternate channel communication lock prompt by at least one participant.
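Detecting a shared viewing state is essentially heartbeat bookkeeping: each client reports that its user has the thread open, and the server checks whether every participant's report is fresh before offering the alternate channel. A server-side sketch (the freshness window and all names are assumptions):

```python
import time

view_times: dict[str, float] = {}   # participant -> last "viewing thread" heartbeat

def mark_viewing(participant: str) -> None:
    view_times[participant] = time.monotonic()

def all_viewing(participants: list[str], window: float = 5.0) -> bool:
    """True when every participant has reported viewing within the last few seconds."""
    now = time.monotonic()
    return all(now - view_times.get(p, -1e9) < window for p in participants)

mark_viewing("alice"); mark_viewing("bob")
if all_viewing(["alice", "bob"]):
    print("supply alternate-channel lock prompt")  # e.g. offer a voice/video channel
```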
20250220146. AUGMENTED REALITY PROP INTERACTIONS (Snap Inc.)
Abstract: Augmented reality (AR) systems, devices, media, and methods are described for generating AR experiences that include interactions with virtual or physical prop objects. The AR experiences are generated by capturing images of a scene with a camera system; identifying an object-receiving surface and corresponding surface coordinates within the scene; identifying an AR primary object and a prop object (physical or virtual); establishing a logical connection between the AR primary object and the prop object; generating AR overlays, including actions associated with the AR primary object responsive to commands received via a user input system, that position the AR primary object adjacent to the object-receiving surface (using the primary object coordinates and the surface coordinates) and that position the AR primary object and the prop object with respect to one another in accordance with the logical connection; and presenting the generated AR overlays with a display system.
20250220743. DEVICE RELATIVE POSE DETERMINATION AUGMENTED REALITY SESSIONS (Snap Inc.)
Abstract: A first mobile device scans an image including fiducial markings displayed on a second mobile device, and determines a relative pose between the first mobile device and the second mobile device by generating a pose transformation of the fiducial markings displayed by the second mobile device. Information specifying updates to a pose of the second mobile device is received and processed as user input to the first mobile device. Touch inputs received on the second mobile device may be processed as additional user input by the first mobile device. The touch inputs may be processed to select an augmented reality object displayed by the first mobile device, and the updates to the pose of the second mobile device may be processed to move the augmented reality object displayed by the first mobile device.
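Because the second device renders the fiducial itself, its pose relative to the marker is known exactly; the first device only needs to estimate the marker's pose in its own camera frame and compose transforms. A sketch with 4x4 homogeneous matrices (marker detection is stubbed out; names and values are illustrative):

```python
import numpy as np

def relative_pose(T_cam1_marker: np.ndarray, T_dev2_marker: np.ndarray) -> np.ndarray:
    """Pose of device 2 in device 1's camera frame, via the on-screen marker.

    T_cam1_marker: marker pose estimated by device 1's camera (e.g. from corners).
    T_dev2_marker: marker pose in device 2's own frame (known, since it renders it).
    """
    return T_cam1_marker @ np.linalg.inv(T_dev2_marker)

# Marker detected 0.5 m in front of device 1's camera, unrotated.
T_cam1_marker = np.eye(4); T_cam1_marker[2, 3] = 0.5
# Marker rendered at device 2's screen origin.
T_dev2_marker = np.eye(4)
T_cam1_dev2 = relative_pose(T_cam1_marker, T_dev2_marker)
```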