
Snap Inc. patent applications on 2025-06-12

From WikiPatents

Patent Applications by Snap Inc. on June 12th, 2025

Snap Inc.: 12 patent applications

Snap Inc. has applied for patents in the following classification areas:

G06T19/006 - Mixed reality (object pose determination, tracking or camera calibration for mixed reality): 3 applications
G06F3/012 - Head tracking input arrangements: 1 application
G06F16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually: 1 application
G06F16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries: 1 application
G06Q30/0643 - Graphical representation of items or shoppers: 1 application
H04L63/0853 - Entity authentication using an additional device, e.g. smartcard, SIM or a different communication terminal: 1 application
H04L63/102 - Entity profiles: 1 application
H04S7/303 - Tracking of listener position or orientation: 1 application
H04W4/021 - Wireless communication networks: 1 application
H04W52/0261 - Wireless communication networks: 1 application

Keywords appearing frequently in these patent application abstracts include: device, data, tracking, reality, bending, operations, used, such, facilitate, and extended.


Patent Applications by Snap Inc.

20250190050. BENDING-ASSISTED CALIBRATION FOR EXTENDED REALITY TRACKING (Snap Inc.)

Abstract: Bending data is used to facilitate tracking operations of an extended reality (XR) device, such as hand tracking or other object tracking operations. The XR device obtains bending data indicative of bending of the XR device to accommodate a body part of a user wearing the XR device. The XR device determines, based on the bending data, whether to use previously identified biometric data in a tracking operation. A mode of the XR device is selected based on this determination. The XR device performs the tracking operation based on the selected mode. The selected mode may be a first mode in which the previously identified biometric data is used in the tracking operation, or a second mode which does not apply previously identified biometric data in the tracking operation.
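
To illustrate the mode-selection logic the abstract describes, here is a minimal Python sketch. The threshold, data structures, and function names are assumptions for illustration, not Snap's implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class TrackingMode(Enum):
    """The two modes described in the abstract."""
    USE_BIOMETRIC_DATA = auto()     # first mode: reuse previously identified biometric data
    IGNORE_BIOMETRIC_DATA = auto()  # second mode: track without the stored biometric data


@dataclass
class BendingData:
    # Assumed representation: deviation (in millimetres) from the bend
    # measured when the biometric data was originally captured.
    deviation_mm: float


def select_tracking_mode(bending: BendingData, max_deviation_mm: float = 2.0) -> TrackingMode:
    """If the device has bent too far from its calibrated shape, the stored
    biometric data may no longer be valid, so fall back to the second mode."""
    if abs(bending.deviation_mm) <= max_deviation_mm:
        return TrackingMode.USE_BIOMETRIC_DATA
    return TrackingMode.IGNORE_BIOMETRIC_DATA


if __name__ == "__main__":
    print(select_tracking_mode(BendingData(deviation_mm=0.8)))  # USE_BIOMETRIC_DATA
    print(select_tracking_mode(BendingData(deviation_mm=5.3)))  # IGNORE_BIOMETRIC_DATA
```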

20250190483. AUTOMATED LOCAL STORY GENERATION AND CURATION (Snap Inc.)

Abstract: systems and methods for automated local story generation and curation are described. in one example embodiment, a server computer receives content from client devices, and processes the content to identify content characteristics. stories are then generated based on the characteristics of the received content, and the stories are communicated to client devices. in certain embodiments, selection at a client device of an individual piece of content within a story may further be used by the system to provide the client device with a sub-story that includes pieces of content sharing content characteristics with the characteristics of the selected image or video.
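
The grouping and sub-story steps can be sketched in a few lines of Python. The content items, characteristics, and function names below are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical content items: each has an id and a set of characteristics
# (e.g. location, time bucket, detected subject).
CONTENT = [
    {"id": 1, "characteristics": {"beach", "sunset"}},
    {"id": 2, "characteristics": {"beach", "surfing"}},
    {"id": 3, "characteristics": {"concert", "night"}},
]


def generate_stories(items):
    """Group items into stories keyed by each shared characteristic."""
    stories = defaultdict(list)
    for item in items:
        for characteristic in item["characteristics"]:
            stories[characteristic].append(item["id"])
    return dict(stories)


def sub_story(items, selected_id):
    """Return items sharing at least one characteristic with the selected item."""
    selected = next(i for i in items if i["id"] == selected_id)
    return [i["id"] for i in items
            if i["id"] != selected_id and i["characteristics"] & selected["characteristics"]]


if __name__ == "__main__":
    print(generate_stories(CONTENT))  # e.g. {'beach': [1, 2], 'sunset': [1], ...} (order may vary)
    print(sub_story(CONTENT, 1))      # [2]
```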

20250190505. INTERFACES ORGANIZE SHARE LOCATIONS DESTINATION GEOLOCATION MESSAGING SYSTEM (Snap Inc.)

Abstract: The subject technology causes, at a client device, display of a graphical interface comprising a plurality of selectable graphical items, each selectable graphical item corresponding to a respective content item associated with a different geolocation. The subject technology receives, at the client device, a selection of a first selectable graphical item from the plurality of selectable graphical items, the first selectable graphical item corresponding to a particular geolocation. The subject technology causes display, at the client device, of a second plurality of selectable graphical items, each of the second plurality of selectable graphical items corresponding to a particular second geolocation of an activity or place of business within a geographical area associated with the particular geolocation.
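
The two-level selection behind this interface (a geolocation first, then activities or places within it) can be sketched as a simple lookup. All names and data below are assumptions for illustration.

```python
# Hypothetical two-level lookup: a top-level geolocation maps to
# second-level places or activities within that area.
DESTINATIONS = {
    "downtown": ["coffee roastery", "art museum", "night market"],
    "waterfront": ["kayak rental", "seafood grill"],
}


def first_level_items():
    """Items for the first screen: one selectable item per geolocation."""
    return list(DESTINATIONS.keys())


def second_level_items(selected_geolocation: str):
    """Items for the second screen: places/activities within the selected area."""
    return DESTINATIONS.get(selected_geolocation, [])


if __name__ == "__main__":
    print(first_level_items())             # ['downtown', 'waterfront']
    print(second_level_items("downtown"))  # ['coffee roastery', 'art museum', 'night market']
```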

20250191056. PRODUCT CARDS PROVIDED BY AUGMENTED REALITY CONTENT GENERATORS (Snap Inc.)

Abstract: The subject technology requests a set of augmented reality (AR) content generators based on a group ID, generated by an extension application programming interface (API), using a camera API. The subject technology receives the set of AR content generators. The subject technology provides for display representations of the set of AR content generators in an interface. The subject technology receives a selection of a first AR content generator from the set of AR content generators. The subject technology renders the first AR content generator for display, using the camera API. The subject technology requests metadata for a set of products based on the selected first AR content generator using the extension API. The subject technology receives the metadata from the extension API. The subject technology provides for display the set of representations of products based on the received metadata.
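
The request/selection/render/metadata flow in the abstract is sketched below with stand-in stub functions; fetch_ar_generators, render_generator, and fetch_product_metadata are hypothetical names, not real Snap APIs.

```python
def fetch_ar_generators(group_id: str) -> list[str]:
    """Stand-in for the extension-API call: AR content generator ids for a group."""
    return [f"{group_id}-lens-{i}" for i in range(3)]


def fetch_product_metadata(generator_id: str) -> list[dict]:
    """Stand-in for the extension-API call: products tied to a generator."""
    return [{"sku": f"{generator_id}-sku", "price_usd": 29.99}]


def render_generator(generator_id: str) -> None:
    """Stand-in for the camera-API call: render the selected generator."""
    print(f"rendering {generator_id} in camera view")


def product_card_flow(group_id: str, selected_index: int) -> list[dict]:
    generators = fetch_ar_generators(group_id)  # request the set of AR generators
    selected = generators[selected_index]       # user picks one from the interface
    render_generator(selected)                  # render it via the camera API
    return fetch_product_metadata(selected)     # fetch product metadata for the cards


if __name__ == "__main__":
    print(product_card_flow("shoes", 1))
```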

20250191312. PERSISTING AUGMENTED REALITY EXPERIENCES (Snap Inc.)

Abstract: Methods and systems are disclosed for generating AR experiences on a messaging platform. The methods and systems perform operations including: receiving, from a client device, a request to access an augmented reality (AR) experience; adding one or more AR elements to a first image captured by the client device, the first image depicting a real-world object; storing data representing a position of the one or more AR elements relative to the real-world object, the data being maintained after the AR experience is terminated; receiving a request to resume the AR experience after the AR experience has been terminated; and, in response to receiving the request to resume the AR experience, accessing the data that was stored prior to termination of the AR experience to generate a display of the AR experience that depicts the one or more AR elements at a particular position within a second image.
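
The persist-and-resume step can be sketched as storing element positions relative to the tracked object and reloading them later. The file location and data layout are assumptions, not the patent's storage format.

```python
import json
from pathlib import Path

# Hypothetical storage location for the terminated experience's state.
STATE_FILE = Path("ar_experience_state.json")


def save_experience(elements: list[dict]) -> None:
    """Persist element positions (relative to the real-world object) on termination."""
    STATE_FILE.write_text(json.dumps(elements))


def resume_experience() -> list[dict]:
    """Reload the stored positions so the elements reappear where they were left."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return []


if __name__ == "__main__":
    save_experience([{"element": "hat", "offset": [0.0, 0.12, 0.0]}])
    print(resume_experience())  # [{'element': 'hat', 'offset': [0.0, 0.12, 0.0]}]
```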

20250191313. INPUT MODALITIES FOR AR WEARABLE DEVICES (Snap Inc.)

Abstract: Systems, methods, and computer-readable media for input modalities for an augmented reality (AR) wearable device are disclosed. The AR wearable device captures images using an image capturing device and processes the images to identify objects. The objects may be people, places, things, and so forth. The AR wearable device associates the objects with tags, such as the name of the object or a function that can be provided by the selection of the object. The AR wearable device then matches the tags of the objects with tags associated with AR applications. The AR wearable device presents, on a display of the AR wearable device, indications of the AR applications with matching tags, which provides a user with the opportunity to invoke one of the AR applications. The AR wearable device recognizes a selection of an AR application in a number of different ways, including gesture recognition and voice commands.
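
The tag-matching step can be illustrated with plain set intersection. The objects, tags, and application names below are invented for the sketch.

```python
# Hypothetical detected objects and their tags, and AR applications with the
# tags they declare. A match surfaces the application to the user.
DETECTED_OBJECTS = {"espresso machine": {"coffee", "appliance"},
                    "dog": {"pet", "animal"}}

AR_APPLICATIONS = {"coffee-recipes": {"coffee"},
                   "pet-filters": {"pet", "animal"},
                   "star-map": {"sky", "night"}}


def matching_applications(detected, applications):
    """Return AR applications whose tags overlap the tags of any detected object."""
    object_tags = set().union(*detected.values()) if detected else set()
    return [name for name, tags in applications.items() if tags & object_tags]


if __name__ == "__main__":
    print(matching_applications(DETECTED_OBJECTS, AR_APPLICATIONS))
    # ['coffee-recipes', 'pet-filters']
```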

20250191315. AUGMENTED REALITY EYEWEAR SPEECH BUBBLES TRANSLATION (Snap Inc.)

Abstract: Eyewear presenting text corresponding to spoken words (e.g., in speech bubbles) and optionally translating from one language to another. In one example, an interactive augmented reality experience is provided between two users of eyewear devices to allow one user of an eyewear device to share a personal attribute of the user with a second user. The personal attribute can be speech spoken by a remote second user of eyewear, converted to text. The converted text can be displayed on a display of the eyewear of the first user, proximate the viewed second user. The personal attribute may be displayed in a speech bubble proximate the second user, such as proximate the head or mouth of the second user. The language of the spoken speech can be recognized by the second user's eyewear and translated to a language that is understood by the first user. In another example, the spoken words of a remote person are captured by the eyewear of a user, the position of the remote person is identified, the spoken words are converted to text, and the text is displayed (e.g., in a speech bubble) on an AR display of the eyewear adjacent to the remote person.
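
The pipeline the abstract describes (speech to text, optional translation, bubble anchored near the speaker) is sketched below. transcribe() and translate() are placeholders; a real system would call ASR and machine-translation services, and the anchor offset is an assumption.

```python
def transcribe(audio: bytes, language: str) -> str:
    return "hola, ¿cómo estás?"  # placeholder transcription


def translate(text: str, source: str, target: str) -> str:
    return "hello, how are you?" if source != target else text  # placeholder translation


def speech_bubble(audio: bytes, speaker_position: tuple, spoken_language: str,
                  display_language: str) -> dict:
    text = transcribe(audio, spoken_language)
    text = translate(text, spoken_language, display_language)
    # Anchor the bubble slightly above the detected head position of the speaker.
    x, y = speaker_position
    return {"text": text, "anchor": (x, y + 0.2)}


if __name__ == "__main__":
    print(speech_bubble(b"", (1.0, 1.6), "es", "en"))
```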

20250193177. ONE-OF-A-KIND OPEN EDITION NON-FUNGIBLE TOKEN DYNAMICS (Snap Inc.)

Abstract: A system to perform operations that include: minting a non-fungible token (NFT) that comprises a media object and mutable metadata; allocating the NFT to a user of a client device; granting the user of the client device a permission to change the mutable metadata of the NFT based on the allocating of the NFT to the user of the client device; generating an open edition of the NFT, the open edition of the NFT comprising a reference to the mutable metadata; receiving a change to the mutable metadata from the user of the client device; and updating the open edition of the NFT based on the change.
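
The key dynamic, open-edition copies holding a reference to the original's mutable metadata so an owner's edit propagates, can be shown with a toy in-memory model. This is purely illustrative and not how the tokens are implemented on-chain.

```python
from dataclasses import dataclass, field


@dataclass
class OriginalNft:
    owner: str
    media_object: str
    mutable_metadata: dict = field(default_factory=dict)


@dataclass
class OpenEditionNft:
    original: OriginalNft  # reference to the original, not a copy of its metadata

    @property
    def metadata(self) -> dict:
        return self.original.mutable_metadata


if __name__ == "__main__":
    original = OriginalNft(owner="alice", media_object="ipfs://example")
    edition = OpenEditionNft(original=original)
    original.mutable_metadata["caption"] = "updated by the owner"  # permitted change
    print(edition.metadata)  # {'caption': 'updated by the owner'} -- the edition reflects it
```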

20250193200. COLLABORATIVE PUBLIC USER PROFILE (Snap Inc.)

Abstract: A system to generate a graphical user interface to display a presentation of a set of shared user groups between users of a social networking service is described. Embodiments of the present disclosure relate generally to systems for: receiving an identification of a second user from a user account of a first user; identifying a user group that includes the first user and the second user in response to the identification of the second user from the user account of the first user; retrieving user identifiers of the first user and the second user, wherein the user identifiers may include graphical avatars; generating a group identifier based on the user identifiers; and causing display of a presentation of the user group at a client device.
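
Finding the shared groups and deriving a group identifier from the two user identifiers can be sketched as follows. The group data and the composite-identifier rule are assumptions for illustration.

```python
# Hypothetical group membership data.
USER_GROUPS = {
    "ski-trip": {"alice", "bob", "carol"},
    "book-club": {"alice", "dave"},
}


def shared_groups(first_user: str, second_user: str) -> list[str]:
    """Groups that include both users."""
    return [name for name, members in USER_GROUPS.items()
            if {first_user, second_user} <= members]


def group_identifier(first_user: str, second_user: str) -> str:
    """A simple composite identifier built from the two user identifiers."""
    return "+".join(sorted([first_user, second_user]))


if __name__ == "__main__":
    print(shared_groups("alice", "bob"))     # ['ski-trip']
    print(group_identifier("bob", "alice"))  # 'alice+bob'
```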

20250193626. IMMERSIVE AUGMENTED REALITY EXPERIENCES USING SPATIAL AUDIO (Snap Inc.)

Abstract: Systems, devices, media, and methods are presented for an immersive augmented reality (AR) experience using an eyewear device with spatial audio. The eyewear device has a processor, a memory, an image sensor, and speakers. The eyewear device captures image information for an environment surrounding the device and identifies a match between objects in the image information and predetermined objects in previously obtained information for the same environment. The eyewear device then identifies a target location within the environment, which may be associated with a physical or a virtual object. The eyewear device monitors its orientation with respect to the target location and presents audio signals to guide the user toward the target location.
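
The guidance step, comparing the wearer's heading to the bearing of the target and steering audio accordingly, is sketched below. The simple linear panning rule is an assumption; real spatial audio would use proper HRTF rendering.

```python
import math


def bearing_to_target(device_xy, target_xy):
    """Bearing (radians) from the device to the target in the ground plane."""
    dx, dy = target_xy[0] - device_xy[0], target_xy[1] - device_xy[1]
    return math.atan2(dy, dx)


def stereo_gains(device_xy, heading_rad, target_xy):
    """Left/right speaker gains that nudge the listener toward the target."""
    offset = bearing_to_target(device_xy, target_xy) - heading_rad
    offset = math.atan2(math.sin(offset), math.cos(offset))  # wrap to [-pi, pi]
    pan = offset / math.pi                                    # -1 (right) .. +1 (left)
    left = 0.5 * (1.0 + pan)
    right = 0.5 * (1.0 - pan)
    return round(left, 2), round(right, 2)


if __name__ == "__main__":
    # Target is off to the wearer's left, so the left channel is louder.
    print(stereo_gains(device_xy=(0, 0), heading_rad=0.0, target_xy=(0, 5)))  # (0.75, 0.25)
```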

20250193636. CONTENT REQUEST LOCATION (Snap Inc.)

Abstract: A method of obtaining media content of an event, comprising: identifying a real-life event, a time of said real-life event, and a geographic location of the real-life event; identifying a subset of a plurality of client terminals of users located in proximity to the geographic location of the real-life event at said time of said real-life event; sending a message to the subset of client terminals containing a request to acquire media content documenting the real-life event; and receiving at least one media content item documenting the real-life event from at least one client terminal of the subset of client terminals, the at least one media content item acquired by at least one user of the users using the at least one client terminal in response to the message.
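
The client-selection step, picking terminals near the event's location around the event's time, can be sketched with a great-circle distance test. The radius, time window, and data layout are assumptions.

```python
import math

EVENT = {"name": "street fair", "lat": 40.7128, "lon": -74.0060, "time": 1700000000}

CLIENTS = [
    {"id": "a", "lat": 40.7130, "lon": -74.0059, "time": 1700000100},
    {"id": "b", "lat": 40.7800, "lon": -73.9700, "time": 1700000100},
]


def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def nearby_clients(event, clients, radius_km=1.0, time_window_s=3600):
    """Client terminals within the radius of the event around the event time."""
    return [c["id"] for c in clients
            if distance_km(event["lat"], event["lon"], c["lat"], c["lon"]) <= radius_km
            and abs(c["time"] - event["time"]) <= time_window_s]


if __name__ == "__main__":
    print(nearby_clients(EVENT, CLIENTS))  # ['a']
```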

20250193801. DETERMINING LOCATION USING MULTI-SOURCE GEOLOCATION DATA (Snap Inc.)

Abstract: Systems, methods, and computer-readable media that determine a location of a device using multi-source geolocation data, where the methods include accessing new location data from a location source of a plurality of location sources, where the new location data includes a new position and an accuracy of the new position, and determining a current position and an accuracy of the current position based on the new position, the accuracy of the new position, a previous current position, and an accuracy of the previous current position. The method further includes determining a change in location based on a difference between the current position and the previous current position. Some systems, methods, and computer-readable media are directed to scheduling location requests to generate location data, where the scheduling and the actual requests are made based on a number of conditions.
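
One way to combine a new position report with the previous estimate, each with its own accuracy, is inverse-variance weighting; the sketch below uses that rule as an assumption, since the abstract does not specify the fusion formula.

```python
def fuse(prev_pos, prev_acc, new_pos, new_acc):
    """Blend previous and new (lat, lon) positions; return (position, accuracy).

    Accuracy is treated as a standard-deviation-like radius: smaller means
    more trusted, so it gets a larger weight.
    """
    w_prev = 1.0 / (prev_acc ** 2)
    w_new = 1.0 / (new_acc ** 2)
    lat = (prev_pos[0] * w_prev + new_pos[0] * w_new) / (w_prev + w_new)
    lon = (prev_pos[1] * w_prev + new_pos[1] * w_new) / (w_prev + w_new)
    fused_acc = (1.0 / (w_prev + w_new)) ** 0.5
    return (lat, lon), fused_acc


if __name__ == "__main__":
    # Previous fix accurate to 50 m; a new fix accurate to 10 m dominates the blend.
    position, accuracy = fuse((40.7128, -74.0060), 50.0, (40.7131, -74.0050), 10.0)
    print(position, round(accuracy, 1))
```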
