Snap Inc. patent applications published on October 26th, 2023


Summary of the patent applications from Snap Inc. on October 26th, 2023

Snap Inc. has recently filed several patent applications related to location sharing, camera settings customization, eyewear device alerts, virtual conferences, augmented reality experiences, shared augmented reality during video chats, video playback, a passive flash system, video manipulation, and enhanced reality eyewear.

Summary of Recent Patents Filed by Snap Inc.:

- Patent 1: Methods, systems, and user interfaces for sharing location information during a communication session through a messaging system. The location of participants is determined based on data received from a location sensor on a client device, and the current location is displayed on another client device's messaging user interface.

- Patent 2: A method for customizing visual settings on a device with a display and camera. It involves displaying a video that includes information about the camera settings and visual effects used during its capture, and allowing users to select a camera effects shortcut that displays a live video feed with those visual effects and camera settings applied.

- Patent 3: A system and method for generating alerts on an eyewear device. Alerts are triggered based on specific combinations of notification attributes received from a mobile device, and visual animations are retrieved from the eyewear device's storage to activate visual indicators for generating the alerts.

- Patent 4: A system and method for communicating with an external user during a virtual conference. An interface allows users to configure an external communication element and set properties for it. The external communication element is included in the virtual room based on the set properties, facilitating communication with the external user.

- Patent 5: Methods and systems for creating augmented reality experiences on a messaging platform. Event types associated with an AR experience are used to generate metrics, and interaction data is generated based on user interactions with the AR experience. The interaction data is sent to a remote server when the AR experience ends.

- Patent 6: Methods and systems for enabling a shared augmented reality experience during a video chat. Users' body parts in videos are modified to include AR elements related to an AR experience when a user on one device requests to activate it.

- Patent 7: A method for displaying a sequence of videos on an electronic device. A summary of the current video is shown when a user wants to move on to the next video, and the next video starts playing when a timer expires or the user interacts with the summary.

- Patent 8: A passive flash system for improving lighting conditions when capturing images. A preview of the content being captured is displayed, along with a brighter portion of the screen to increase lighting in the environment without an active flash.

- Patent 9: A system and method for manipulating a video showing a person. Skeletal joints of the person in the video are identified and tracked in 3D, and a 3D virtual object with additional limbs is displayed and moved based on the movement of the person's skeletal joints.

- Patent 10: A technology using eyewear devices to enhance the environment using augmented reality and virtual reality. The eyewear device's camera and position detection system identify specific points in the environment, and the display overlays graphics and images onto these points to enhance the user's experience.

Notable Applications:

  • Location sharing during communication sessions through a messaging system.
  • Customization of visual settings on devices with displays and cameras.
  • Generation of alerts on eyewear devices based on specific notification attributes.
  • Communication with external users during virtual conferences.
  • Creation of augmented reality experiences on a messaging platform.
  • Shared augmented reality experiences during video chats.
  • Display of video summaries and sequential playback on electronic devices.
  • Passive flash system for improving lighting conditions during image capture.
  • Manipulation of videos showing people by adding virtual limbs.
  • Enhanced reality eyewear using augmented and virtual reality technologies.




Patent applications for Snap Inc. on October 26th, 2023

BAROMETER CALIBRATION IN A LOCATION SHARING SYSTEM (18347287)

Main Inventor

Eric Guillaume


Brief explanation

The abstract describes a method, system, and device for calibrating the barometer of a client device. A server computer collects historical data, including location and atmospheric pressure data, from multiple client devices over time. Using this data, an equation system is solved to determine unknown parameters, including the barometer bias of a specific client device. The first client device is then calibrated using this barometer bias.

Abstract

Methods, systems, and devices for calibrating a barometer of a client device. A server computer accesses historical data including location data and atmospheric pressure data collected from a plurality of client devices over a period of time. An equation system defined by the historical data is solved. The equation system has a plurality of unknown parameters, the plurality of unknown parameters comprising a barometer bias of a first client device among the plurality of client devices. The first client device is calibrated using the barometer bias.
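
The abstract leaves the equation system unspecified; one natural reading is an overdetermined linear system solved by least squares, where co-located readings share a true pressure and each device contributes a bias term. The following Python sketch illustrates that reading; the data, the linear model, and the choice of a reference device are all assumptions, not the patent's actual method.

```python
# Minimal sketch: each reading = (reference pressure shared by co-located
# devices) + per-device bias. Solve for all unknowns by least squares.
import numpy as np

# readings[k] = (device_index, event_index, measured_pressure_hPa)
# Devices reporting at the same "event" (same place and time) share a true pressure.
readings = [
    (0, 0, 1013.6), (1, 0, 1012.9),
    (0, 1, 1008.1), (1, 1, 1007.5),
    (1, 2, 1001.2), (2, 2, 1000.4),
]
n_devices, n_events = 3, 3

# Unknowns: one true pressure per event, one bias per device. Fix device 0's
# bias to zero so the system is identifiable (biases are only relative).
n_unknowns = n_events + (n_devices - 1)
A = np.zeros((len(readings), n_unknowns))
b = np.zeros(len(readings))
for row, (dev, ev, p) in enumerate(readings):
    A[row, ev] = 1.0                      # true pressure at that event
    if dev > 0:
        A[row, n_events + dev - 1] = 1.0  # this device's bias term
    b[row] = p

solution, *_ = np.linalg.lstsq(A, b, rcond=None)
biases = np.concatenate(([0.0], solution[n_events:]))
print("estimated per-device barometer biases (hPa):", np.round(biases, 2))
```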

LED ILLUMINATED WAVEGUIDE PROJECTOR DISPLAY (18344281)

Main Inventor

David Woods


Brief explanation

The abstract describes a projection display and a method for illuminating it. The display includes a waveguide with an input grating that has diffractive features. The display also has an array of LEDs that creates an illumination pupil, which is then relayed onto the input grating. At the grating, the input pupil is larger in the direction parallel to the diffractive features than in the direction perpendicular to them.

Abstract

There is provided a projection display, and a method for illuminating a projection display. The projection display comprising a waveguide comprising an input grating having a plurality of linear diffractive features, the input grating configured to couple in light into the waveguide, and an array of LEDs configured to form an illumination pupil which is optically relayed as an input pupil onto the input grating, such that at the input grating the input pupil has a shape that is larger in a direction parallel to the linear diffractive features than in a direction perpendicular to the linear diffractive features.

OPTICAL STRUCTURE FOR AUGMENTED REALITY DISPLAY (18344696)

Main Inventor

Mohmed Salim Valera


Brief explanation

This abstract describes an augmented reality display that uses a color projector to emit an image in red, green, and blue. The projected light passes through a pair of waveguides. The first waveguide receives the light and separates the different colors, coupling the red and green wavelengths into the waveguide and directing the green and blue wavelengths towards the second waveguide. The second waveguide then receives the green and blue wavelengths and couples them into the waveguide.

Abstract

An augmented reality display is disclosed. A colour projector emits an image in a narrow beam comprising three primary colours: red, green and blue. A pair of waveguides is provided in the path of the projected beam. A first input grating receives light from the projector and diffracts the received light so that diffracted wavelengths of the light in first and second primary colours are coupled into the first waveguide, and so that diffracted wavelengths of the light in second and third primary colours are coupled out of the first waveguide in a direction towards the second waveguide. A second input diffraction grating receives light coupled out of the first waveguide and diffracts the second and third primary colours so that they are coupled into the second waveguide.

SYSTEMS AND METHODS FOR ESTIMATING USER INTENT TO LAUNCH AUTONOMOUS AERIAL VEHICLE (18215665)

Main Inventor

Qiaokun Huang


Brief explanation

The abstract describes a method for detecting the launch of an autonomous vehicle using various sensors, such as acceleration sensors and touch sensors. The method involves receiving inputs from these sensors, determining if a launch has occurred based on the inputs, and controlling the vehicle accordingly. If a launch is detected, the motor of the vehicle is activated. This approach aims to improve safety and reliability by reducing false positive launch detections, which could lead to unnecessary activation of the motor.

Abstract

Detection of a launch event of an autonomous vehicle may consider input from a variety of sensors, including acceleration sensors and touch sensors. In some aspects, a method includes receiving a first input from a touch sensor, receiving a second input from an accelerometer, determining whether a launch of the autonomous vehicle is detected based on the first input and the second input, and controlling the autonomous vehicle in response to the determining. In some aspects, when a launch is detected, a motor of the autonomous vehicle may be energized. By detecting a launch event in this manner, improved safety and reliability may be realized. A reduced occurrence of false positive launch events may reduce a risk that the motor of the autonomous vehicle is energized when the vehicle has not actually been launched.
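
As a concrete illustration of the two-sensor gate described above, the sketch below only reports a launch when a touch release coincides with an acceleration spike. The threshold value, data layout, and function names are invented for illustration.

```python
# Hedged sketch: require BOTH a touch release and an acceleration spike before
# energizing the motor, reducing false positives versus either sensor alone.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    touch_active: bool        # True while the user is holding the vehicle
    accel_magnitude: float    # |a| in m/s^2 from the accelerometer

LAUNCH_ACCEL_THRESHOLD = 15.0  # assumed threshold for a throw-like spike

def launch_detected(prev: SensorFrame, curr: SensorFrame) -> bool:
    released = prev.touch_active and not curr.touch_active
    accelerating = curr.accel_magnitude > LAUNCH_ACCEL_THRESHOLD
    return released and accelerating

# Example: a hand release followed by a throw-like acceleration spike.
prev = SensorFrame(touch_active=True, accel_magnitude=9.8)
curr = SensorFrame(touch_active=False, accel_magnitude=22.4)
if launch_detected(prev, curr):
    print("launch detected -> energize motor")
```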

MULTIMODAL UI WITH SEMANTIC EVENTS (18307260)

Main Inventor

Daniel Colascione


Brief explanation

This abstract describes an augmented reality (AR) system that uses multiple input methods. It includes a hand-tracking feature that allows users to directly manipulate virtual objects and use gestures as input. The system also has a voice processing feature for speech input. The hand-tracking data is quickly communicated to the user interface through a direct memory buffer access, ensuring low latency. The system framework component routes higher level hand-tracking data, such as gesture identification and symbols generated based on hand positions, to gesture-based user interfaces using the Snips protocol.

Abstract

An AR system includes multiple input-modalities. A hand-tracking pipeline supports Direct Manipulation of Virtual Object (DMVO) and gesture input methodologies. In addition, a voice processing pipeline provides for speech inputs. Direct memory buffer access to preliminary hand-tracking data, such as skeletal models, allows for low latency communication of the data for use by DMVO-based user interfaces. A system framework component routes higher level hand-tracking data, such as gesture identification and symbols generated based on hand positions, via a Snips protocol to gesture-based user interfaces.

GESTURE-BASED KEYBOARD TEXT ENTRY (17729808)

Main Inventor

Sharon Moll


Brief explanation

This abstract describes a gesture-based text entry user interface for an Augmented Reality (AR) system. The AR system detects when a user starts a text entry gesture and generates a virtual keyboard interface with virtual keys. The user is then provided with this virtual keyboard interface. The AR system also detects when the user holds an "enter text" gesture and collects continuous motion gesture data as the user moves their hand through the virtual keys. When the user releases the "enter text" gesture, the AR system generates entered text data based on the collected continuous motion gesture data.

Abstract

A gesture-based text entry user interface for an Augmented Reality (AR) system is provided. The AR system detects a start text entry gesture made by a user of the AR system, generates a virtual keyboard user interface including a virtual keyboard having a plurality of virtual keys, and provides to the user the virtual keyboard user interface. The AR system detects a hold of an enter text gesture made by the user. While the user holds the enter text gesture, the AR system collects continuous motion gesture data of a continuous motion as the user makes the continuous motion through the virtual keys of the virtual keyboard. The AR system detects a release of the enter text gesture by the user and generates entered text data based on the continuous motion gesture data.
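
The hold-move-release flow lends itself to a small event-driven sketch: samples are collected while the gesture is held and decoded on release. The keyboard layout and the nearest-key decoder below are deliberately naive stand-ins for whatever decoding the filing actually contemplates.

```python
KEY_POSITIONS = {            # virtual key centers in keyboard-plane coordinates
    "h": (0.55, 0.5), "i": (0.75, 0.2), "t": (0.45, 0.2),
}

def nearest_key(point):
    return min(KEY_POSITIONS, key=lambda k: (KEY_POSITIONS[k][0] - point[0]) ** 2
                                          + (KEY_POSITIONS[k][1] - point[1]) ** 2)

class GestureTextEntry:
    def __init__(self):
        self.samples = []
        self.holding = False

    def on_enter_text_hold(self):
        self.holding, self.samples = True, []

    def on_hand_moved(self, point):
        if self.holding:
            self.samples.append(point)     # continuous motion gesture data

    def on_enter_text_release(self):
        self.holding = False
        # Naive decoder: map each sample to its nearest key, dropping repeats.
        text = []
        for p in self.samples:
            k = nearest_key(p)
            if not text or text[-1] != k:
                text.append(k)
        return "".join(text)

entry = GestureTextEntry()
entry.on_enter_text_hold()
for p in [(0.54, 0.49), (0.56, 0.5), (0.74, 0.21)]:
    entry.on_hand_moved(p)
print(entry.on_enter_text_release())  # -> "hi"
```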

LOCATION-BASED SHARED AUGMENTED REALITY EXPERIENCE SYSTEM (17725319)

Main Inventor

Pawel Wawruch


Brief explanation

The abstract describes a system that allows users in the same location to easily participate in a shared augmented reality (AR) experience. This is done by creating separate instances of the AR experience for different predefined geographic areas. When a user wants to join the shared AR experience, the system uses the location information of their device to determine the appropriate AR experience area and provides them with the necessary information to access it.

Abstract

A location-based shared augmented reality (AR) experience system is configured to permit users that find themselves in the same geographic area to easily join in a shared AR experience by creating respective instances of the shared AR experience for different previously defined geographic areas. When a user indicates a request to launch a shared AR experience accessible via a messaging client, the location-based shared AR experience system obtains or receives, from the user device executing the messaging client, location information of the user device, determines a previously-defined AR experience area that encompasses the location of the user device, and communicates to the user device an address of an associated instance of the shared AR experience.
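
The area lookup reduces to point-in-region tests over predefined geofences. A minimal sketch, assuming circular areas and an equirectangular distance approximation (both assumptions; the filing does not specify area shapes or an addressing scheme):

```python
import math

# Predefined AR experience areas: (name, center_lat, center_lon, radius_m, instance_url)
AREAS = [
    ("times_square", 40.7580, -73.9855, 250.0, "ar://instance/times-square"),
    ("venice_beach", 33.9850, -118.4695, 400.0, "ar://instance/venice-beach"),
]

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation, adequate for small geofences.
    r = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def resolve_instance(lat, lon):
    """Return the shared-AR instance address for the area containing (lat, lon)."""
    for name, clat, clon, radius, url in AREAS:
        if distance_m(lat, lon, clat, clon) <= radius:
            return url
    return None

print(resolve_instance(40.7582, -73.9850))  # -> ar://instance/times-square
```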

SUMMARY GENERATION BASED ON TRIP (18215355)

Main Inventor

Alexander Collins


Brief explanation

This abstract describes a system and method for generating a summary of a trip based on trip information. The system includes a computer-readable storage medium that stores a program. The method involves determining criteria associated with a user's trip during a specific time period, retrieving visual media items generated by the user's device during that time period, determining the location information for those media items, automatically generating a trip graphic using the visual media items and location information, and displaying the trip graphic on the user's device.

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for generating a summary based on trip information. The program and method include operations for: determining that one or more criteria associated with a user correspond to a trip taken by the user during a given time interval; retrieving a plurality of visual media items generated by a client device of the user during the given time interval; determining location information for the plurality of visual media items; automatically generating a trip graphic to represent the trip based on the plurality of visual media items generated by the user during the given time interval and the determined location information; and causing the trip graphic to be displayed on the client device.
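
At its core this is a filter-and-group pass over the user's media: keep items captured inside the trip window, then order and group them by location. The data model and grouping rule in this sketch are illustrative assumptions.

```python
from datetime import datetime

media = [
    {"path": "a.jpg", "taken": datetime(2023, 7, 2, 10), "place": "Lisbon"},
    {"path": "b.jpg", "taken": datetime(2023, 7, 3, 15), "place": "Porto"},
    {"path": "c.jpg", "taken": datetime(2023, 8, 1, 9),  "place": "Home"},
]

trip_start, trip_end = datetime(2023, 7, 1), datetime(2023, 7, 5)

# Keep only media captured during the trip's time interval.
trip_items = [m for m in media if trip_start <= m["taken"] <= trip_end]

# Group chronologically ordered items into location "stops".
stops = []
for item in sorted(trip_items, key=lambda m: m["taken"]):
    if not stops or stops[-1]["place"] != item["place"]:
        stops.append({"place": item["place"], "items": []})
    stops[-1]["items"].append(item["path"])

print("trip graphic:", " -> ".join(f'{s["place"]} ({len(s["items"])})' for s in stops))
```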

SEARCHING SOCIAL MEDIA CONTENT (18341542)

Main Inventor

Newar Husam Al Majid


Brief explanation

The abstract describes different ways to enhance the search and organization of media content. It mentions the ability to display media content items dynamically as a collection when a user types into a search bar. It also discusses improving search functionality by prioritizing user-facing search features based on input signals.

Abstract

Various embodiments provide for systems, methods, and computer-readable storage media that improve media content search functionality and curation of media content. For instance, various embodiments described in this document provide features that can present media content items in the form of a dynamic collection of media content items upon a user typing into a search bar. In another instance, various embodiments described herein improve media content search functionality by ranking user-facing search features using input signals.

DYNAMIC IMAGE FILTERS BASED ON PURCHASE TRANSACTIONS (18344727)

Main Inventor

Krish Jayaram


Brief explanation

This abstract describes a system that provides dynamic image filters based on purchase transactions. When a user receives an offer code from a merchant and makes a purchase using a corresponding purchase code, the system updates the available image filters on the user's device. This update includes a new image filter that adds a visual indicator associated with the merchant to any media content captured on the device.

Abstract

Examples disclosed herein relate to providing dynamic image filters based on purchase transactions. An offer code associated with an offer from a merchant is received from a device associated with a user. The offer is identified based on the offer code. An association between the offer and the user is stored. A purchase code is received from the device. The purchase code is associated with the offer from the merchant. Responsive to detecting completion of a purchase transaction based on the association between the offer and the purchase code, a list of available image filters on the device is updated to include an additional image filter that is configured to display a visual indicator associated with the merchant. The additional image filter enables the device to add the visual indicator to a media content item comprising image data captured on the device.
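
The flow can be pictured as two steps against per-user state: claiming an offer, then unlocking the merchant's filter once a matching purchase completes. All identifiers and the purchase-code format below are hypothetical.

```python
offers = {"OFFER123": {"merchant": "coffee_co", "filter": "coffee_co_badge"}}
user_offers = {}            # user_id -> set of claimed offer codes
user_filters = {}           # user_id -> list of available image filters

def claim_offer(user_id, offer_code):
    # Store the association between the offer and the user.
    if offer_code in offers:
        user_offers.setdefault(user_id, set()).add(offer_code)

def complete_purchase(user_id, purchase_code):
    # Assume the purchase code embeds the offer code it corresponds to.
    offer_code = purchase_code.split(":")[0]
    if offer_code in user_offers.get(user_id, set()):
        merchant_filter = offers[offer_code]["filter"]
        filters = user_filters.setdefault(user_id, ["default"])
        if merchant_filter not in filters:
            filters.append(merchant_filter)  # unlock the merchant-branded filter

claim_offer("alice", "OFFER123")
complete_purchase("alice", "OFFER123:txn-0042")
print(user_filters["alice"])  # ['default', 'coffee_co_badge']
```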

CACHED CLOUD RENDERING (17725139)

Main Inventor

Edward Lee Kim-Koon


Brief explanation

The abstract describes a system that uses a cloud server to render and store frames for a mobile device, reducing the device's power consumption and rendering time. By offloading the processing work to the server, the mobile device's battery life is extended, and the workload on its GPU is reduced.

Abstract

A cached cloud rendering system for saving power and rendering time, which reduces motion-to-photon time. The cached cloud rendering system may utilize a cloud server to render and cache frames requested by a mobile computing device, which distributes the processing workload to a server system and results in increased battery life of the mobile computing device. Distributing the processing workload to the server system provides an additional benefit of reducing workload on the Graphics Processing Unit (GPU) of the mobile computing device.
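
Functionally this resembles memoizing the render call on the server side, so repeated requests for the same view are cache hits. A toy sketch, with the cache key (scene plus pose) as an assumption:

```python
import functools

@functools.lru_cache(maxsize=256)
def render_frame(scene_id: str, pose: tuple) -> bytes:
    # Expensive server-side GPU work would happen here; it runs only on a miss.
    print(f"server render: {scene_id} @ {pose}")
    return f"frame({scene_id},{pose})".encode()

# Client requests: the second identical pose is a cache hit (no re-render).
frame1 = render_frame("lobby", (0.0, 1.6, 0.0))
frame2 = render_frame("lobby", (0.0, 1.6, 0.0))
assert frame1 == frame2
print(render_frame.cache_info())
```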

IMAGE VISUAL QUALITY ASSESSMENT (17726360)

Main Inventor

Meena De Schutter


Brief explanation

This abstract describes a method for assessing the quality of an image captured by a device. The method involves receiving a notification of the image capture and initiating an assessment task. This task includes determining if the image is suitable for assessment, running a model to assess the image quality, collecting data about the image capture, and transmitting the assessment results to a repository. The assessment task is a lower priority task that can be performed asynchronously.

Abstract

A method of image quality assessment, performed by one or more processors in an image capture device, is disclosed. The method comprising receiving notification of capture of an image and in response, initiating an image assessment task to assess the quality of the image. The assessment task comprises determining suitability of the image for image quality assessment, running an image quality assessment model on the image to generate image quality assessment results, collecting data related to the capture of the image, and transmitting the results to an image quality assessment repository. The image assessment task may be a lower priority asynchronous task.
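
One way to realize a lower-priority asynchronous assessment is a background worker fed by a queue, so the capture path only enqueues a task. The sketch below models that; the suitability check and quality score are stubs.

```python
import queue
import threading

tasks = queue.Queue()

def assess_image(path):
    # Stand-ins for: suitability check, quality model, capture metadata, upload.
    suitable = not path.endswith(".tmp")
    if not suitable:
        return
    score = 0.87                       # placeholder for a real model's output
    print(f"assessed {path}: quality={score} -> sent to repository")

def worker():
    while True:
        path = tasks.get()
        if path is None:
            break
        assess_image(path)             # runs off the capture path, asynchronously
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()
tasks.put("IMG_0001.jpg")              # "notification of capture" enqueues a task
tasks.join()
tasks.put(None)                        # shut the worker down
```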

REAL-TIME MODIFICATIONS IN AUGMENTED REALITY EXPERIENCES (17728494)

Main Inventor

Kevin Yimeng Hong


Brief explanation

This abstract describes methods and systems for creating augmented reality (AR) experiences on a messaging platform. These methods and systems allow multiple devices to share the same AR experience and enable users to interact with AR elements in real-time. When a user requests to interact with a specific AR element, the system allows them to make modifications to that element while preventing other users from doing the same. The system then synchronizes these modifications across all devices in real-time.

Abstract

Methods and systems are disclosed for generating AR experiences on a messaging platform. The methods and systems establish a shared augmented reality (AR) experience across a plurality of client devices and receive, from a first client device of the plurality of client devices, a request to perform a real-time interaction with a given AR element that is presented on displays of the plurality of client devices. In response to receiving the request, the methods and system enable the first client device to perform one or more modifications to the given AR element while preventing a second of the plurality of client devices from performing real-time interactions with the given AR element. The method and system synchronize the one or more modifications of the given AR element performed by the first client device across each of the plurality of client devices in real time.
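
The exclusivity requirement behaves like a per-element lock whose holder's edits are broadcast to every subscriber. The lock-and-broadcast design below is one plausible mechanism, not necessarily the one claimed.

```python
class SharedARElement:
    def __init__(self, element_id):
        self.element_id = element_id
        self.owner = None          # client currently allowed to modify
        self.state = {"x": 0, "y": 0}
        self.subscribers = []      # callbacks, one per connected client

    def request_interaction(self, client_id):
        if self.owner is None:
            self.owner = client_id
            return True
        return False               # another client holds the element

    def modify(self, client_id, **changes):
        if client_id != self.owner:
            raise PermissionError("element is locked by another client")
        self.state.update(changes)
        for notify in self.subscribers:   # synchronize across all devices
            notify(self.element_id, dict(self.state))

    def release(self, client_id):
        if client_id == self.owner:
            self.owner = None

element = SharedARElement("balloon-1")
element.subscribers.append(lambda eid, s: print(f"sync {eid}: {s}"))
assert element.request_interaction("client-A")
assert not element.request_interaction("client-B")   # B is blocked
element.modify("client-A", x=4, y=2)
element.release("client-A")
```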

AUGMENTED REALITY EXPERIENCES WITH DUAL CAMERAS (17662745)

Main Inventor

Kyle Goodrich


Brief explanation

This abstract describes methods and systems for creating augmented reality (AR) experiences on a messaging platform. The system detects a real-world object in a captured image, extracts textures from it, and then selects a target object in another captured image. It then generates an AR element by modifying the target object using the extracted textures and displays it within the second image.

Abstract

Methods and systems are disclosed for generating AR experiences on a messaging platform. The methods and systems perform operations including: detecting a real-world object depicted in a first image captured by a first camera of a client device, the client device comprising a second camera; extracting one or more textures from the real-world object depicted in the first image; selecting a target object depicted in a second image captured by the second camera, the second image being captured by the second camera simultaneously with the first image captured by the first camera; generating an augmented reality (AR) element comprising the target object modified based on the one or more textures extracted from the real-world object depicted in the first image; and causing display of the AR element within the second image.
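
A heavily simplified stand-in for the texture transfer: sample the object region in the first camera's frame and recolor a target region in the second camera's frame. Real texture extraction would be far richer; numpy arrays as frames and a mean color as the "texture" are illustrative simplifications.

```python
import numpy as np

# Two simultaneous frames from the device's two cameras (toy images).
front = np.zeros((240, 320, 3), np.uint8); front[:] = (30, 30, 30)
front[80:160, 120:200] = (200, 40, 40)          # detected real-world object (reddish)
back = np.zeros((240, 320, 3), np.uint8); back[:] = (90, 90, 90)
target = (slice(60, 180), slice(100, 220))      # selected target object region

# "Extract" a texture from the object and apply it to the target region.
texture_color = front[80:160, 120:200].reshape(-1, 3).mean(axis=0)
ar_frame = back.copy()
ar_frame[target] = texture_color.astype(np.uint8)  # AR element: recolored target
print("applied texture color:", texture_color)
```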

CONTROLLING INTERACTIVE FASHION BASED ON VOICE (18215465)

Main Inventor

Itamar Berger


Brief explanation

This abstract describes a method and system for enhancing images of people wearing fashion items using augmented reality. The system receives an image of a person wearing a fashion item and generates a segmentation of the fashion item. It also receives voice input associated with the person in the image. In response to the voice input, the system generates augmented reality elements representing the voice input. These augmented reality elements are then applied to the fashion item based on the segmentation, enhancing the overall appearance of the fashion item in the image.

Abstract

Methods and systems are disclosed for performing operations comprising: receiving an image that includes a depiction of a person wearing a fashion item; generating a segmentation of the fashion item by the person depicted in the image; receiving voice input associated with the person depicted in the image; in response to receiving the voice input, generating one or more augmented reality elements representing the voice input; and applying the one or more augmented reality elements to the fashion item worn by the person based on the segmentation of the fashion item worn by the person.

SINGLE IMAGE-BASED REAL-TIME BODY ANIMATION (18214538)

Main Inventor

Egor Nemchinov


Brief explanation

This abstract describes a method for animating a person's body using a single input image. The method involves fitting a model to the input image and generating an output image of the body in a specific pose based on a set of pose parameters. The method can also generate an output image of the body in a different pose by providing a new set of pose parameters to the model. Finally, the method can generate a frame of an output video using the output image.

Abstract

Systems and methods for single image-based body animation are provided. An example method includes receiving an input image including an image of a body of a person, fitting a model to the image of the body of the person, where the model is configured to receive a set of pose parameters corresponding to a pose of the body and generate, based on the set of pose parameters, an output image including an image of the body adopting the pose, receiving a further set of pose parameters corresponding to a further pose of the body, providing the further set of pose parameters to the model to generate the output image of the body adopting the further pose, and generating, based on the output image, a frame of an output video including the output image.

OBJECT MODELING USING LIGHT PROJECTION (18216327)

Main Inventor

Chen Cao


Brief explanation

This abstract describes a system that can create a 3D model of an object using a 2D image of the object. The system achieves this by projecting vectors onto light cones formed from the 2D image. These projected vectors are then used to create a more precise 3D model of the object, taking into account the pixel values of the image.

Abstract

A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.

PERSISTING AUGMENTED REALITY EXPERIENCES (17728473)

Main Inventor

Alan Buzdar


Brief explanation

This abstract describes methods and systems for creating augmented reality (AR) experiences on a messaging platform. The system receives a request from a client device to access an AR experience. It then adds AR elements to a real-world image captured by the client device, and stores data representing the position of these AR elements relative to the real-world object. This data is maintained even after the AR experience is terminated. If a request to resume the AR experience is received, the system accesses the stored data to generate a display of the AR experience with the AR elements positioned correctly within a new image.

Abstract

Methods and systems are disclosed for generating AR experiences on a messaging platform. The methods and systems perform operations including: receiving, from a client device, a request to access an augmented reality (AR) experience; adding one or more AR elements to a first image captured by the client device, the first image depicting a real-world object; storing data representing a position of the one or more AR elements relative to the real-world object, the data being maintained after the AR experience is terminated; receiving a request to resume the AR experience after the AR experience has been terminated; and in response to receiving the request to resume the AR experience, accessing the data that was stored prior to termination of the AR experience to generate a display of the AR experience that depicts the one or more AR elements at a particular position within a second image.
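
The key persistence trick is storing positions relative to the tracked real-world object rather than absolute coordinates, so the scene can be re-anchored on resume. A minimal sketch under that reading, with a 2D coordinate model and invented names:

```python
STORE = {}   # survives "termination" of the AR session

def terminate(session_id, object_pos, elements):
    # Persist each element's offset from the real-world object, not its
    # absolute screen position.
    STORE[session_id] = [
        (name, (x - object_pos[0], y - object_pos[1])) for name, (x, y) in elements
    ]

def resume(session_id, new_object_pos):
    # Re-anchor stored offsets to the object's position in the new image.
    ox, oy = new_object_pos
    return [(name, (ox + dx, oy + dy)) for name, (dx, dy) in STORE[session_id]]

terminate("s1", object_pos=(100, 200),
          elements=[("hat", (100, 160)), ("star", (130, 190))])
print(resume("s1", new_object_pos=(300, 50)))
# -> [('hat', (300, 10)), ('star', (330, 40))]
```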

AUGMENTED REALITY ENVIRONMENT ENHANCEMENT (18216784)

Main Inventor

Ilteris Canberk


Brief explanation

The abstract describes a technology that uses a special eyewear device to enhance the environment using augmented reality (AR) and virtual reality (VR). The eyewear device has a camera, a display, and a system to detect the position of the user. The camera and position detection system work together to identify specific points in the environment. The display then shows graphics and images that enhance the user's experience by overlaying them onto these identified points.

Abstract

Augmented reality (AR) and virtual reality (VR) environment enhancement using an eyewear device. The eyewear device includes an image capture system, a display system, and a position detection system. The image capture system and position detection system identify feature points within a point cloud that represents captured images of an environment. The display system presents image overlays to a user including enhancement graphics positioned at the feature points within the environment.

REAL-TIME MOTION TRANSFER FOR PROSTHETIC LIMBS (18216958)

Main Inventor

Avihay Assouline


Brief explanation

The present disclosure describes a system and method for manipulating a video that shows a person. The system includes a computer-readable storage medium and a program. The method involves receiving a video and identifying the skeletal joints of the person in the video. The movement of these skeletal joints is then tracked in 3D. A 3D virtual object with additional limbs is displayed, and the extra limbs of the virtual object are moved based on the movement of the person's skeletal joints in the video.

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for performing operations comprising: receiving a video that depicts a person; identifying a set of skeletal joints corresponding to limbs of the person; tracking 3D movement of the set of skeletal joints corresponding to the limbs of the person in the video; causing display of a 3D virtual object that has a plurality of limbs including one or more extra limbs than the limbs of the person in the video; and moving the one or more extra limbs of the 3D virtual object based on the movement of the set of skeletal joints corresponding to the limbs of the person in the video.
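
One simple way to drive extra virtual limbs from tracked joints is to copy a real limb's pose with a fixed angular offset per extra limb. The 2D joint representation and offset rule below are illustrative assumptions, not the disclosed method.

```python
import math

def rotate(point, angle_rad):
    x, y = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y)

# Tracked joint positions for one real arm, relative to the shoulder.
real_arm = {"elbow": (0.30, -0.05), "wrist": (0.55, -0.20)}

# Two extra virtual arms mirror the real one, fanned out by fixed offsets.
EXTRA_LIMB_OFFSETS = [math.radians(25), math.radians(-25)]

for i, offset in enumerate(EXTRA_LIMB_OFFSETS):
    # Each extra limb follows the tracked joints, rotated by its offset.
    virtual = {joint: rotate(pos, offset) for joint, pos in real_arm.items()}
    print(f"extra limb {i}: " + ", ".join(
        f"{j}=({x:.2f}, {y:.2f})" for j, (x, y) in virtual.items()))
```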

PASSIVE FLASH IMAGING (18216931)

Main Inventor

Newar Husam Al Majid


Brief explanation

The abstract describes a passive flash system that can be used on a user device to improve the lighting conditions when capturing images. This system allows the user to see a preview of the content being captured while also displaying a brighter portion of the screen that surrounds or overlaps the content. This elevated brightness element helps to increase the lighting in the environment without the need for an active flash.

Abstract

A passive flash system for illuminating images being captured on a user device while maintaining a preview of the content being captured. The passive flash system can display a portion of a screen as an elevated brightness element that is brighter than the content being captured. The elevated brightness element can surround or overlap the content being captured to passively increase the lighting of the imaged environment.
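
The layout itself is straightforward: reserve a centered preview region and drive the surrounding screen area at full brightness. A toy sketch, with the screen dimensions and the 70% preview scale as assumptions:

```python
def passive_flash_layout(screen_w, screen_h, preview_scale=0.7):
    # Centered preview; everything outside it becomes the bright border.
    pw, ph = int(screen_w * preview_scale), int(screen_h * preview_scale)
    px, py = (screen_w - pw) // 2, (screen_h - ph) // 2
    preview = {"x": px, "y": py, "w": pw, "h": ph, "brightness": "as captured"}
    border = {"region": "everything outside preview", "brightness": 1.0}
    return preview, border

preview, border = passive_flash_layout(1080, 2340)
print(preview)
print(border)
```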

SUMMARY INFORMATION DISPLAY DURING VIDEO SEQUENCE PLAYBACK (18215674)

Main Inventor

Brent Davis


Brief explanation

This abstract describes a method for displaying a sequence of videos on an electronic device. When a user wants to stop watching the current video and move on to the next one, they can use a swipe or tap/click gesture. In response, a summary of the current video is shown on the screen, providing information without playing the video. A timer is started, and when it expires, the next video starts playing. However, if the user interacts with the summary before the timer expires, the current video resumes playing.

Abstract

A method is disclosed for displaying a sequence of video items on an electronic device. During playback of a current video item, a dismissal input such as a swipe or tap/click is received via a user input mechanism, indicating that display of the current video item is to be ceased and that display of the next video item in the sequence is to be commenced. In response to the dismissal input, summary information is displayed on the display screen, providing non-video communication of an informational payload of the current video item. A timer is started at the commencement of display of the summary information, at the expiry of which display of the next video item is to be commenced. In response to a resumption input such as a touchscreen tap or cursor click on the summary information before expiry of the timer, playback of the current video item is resumed via the display screen.
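
The described behavior is a small state machine: PLAYING, then SUMMARY with a deadline, where a tap resumes playback and timer expiry advances to the next video. The states, timer length, and input names in this sketch are illustrative.

```python
import time

SUMMARY_TIMEOUT_S = 3.0  # assumed timer length

class VideoSequencePlayer:
    def __init__(self, videos):
        self.videos = videos
        self.index = 0
        self.state = "PLAYING"      # PLAYING | SUMMARY
        self.summary_deadline = None

    def on_dismiss(self):
        """Swipe/tap on the current video: show its summary and start the timer."""
        if self.state == "PLAYING":
            self.state = "SUMMARY"
            self.summary_deadline = time.monotonic() + SUMMARY_TIMEOUT_S
            print(f"showing summary of: {self.videos[self.index]}")

    def on_summary_tap(self):
        """Tap/click on the summary before the timer expires: resume the video."""
        if self.state == "SUMMARY":
            self.state = "PLAYING"
            print(f"resuming: {self.videos[self.index]}")

    def tick(self):
        """Called periodically; advances to the next video when the timer lapses."""
        if self.state == "SUMMARY" and time.monotonic() >= self.summary_deadline:
            self.index = min(self.index + 1, len(self.videos) - 1)
            self.state = "PLAYING"
            print(f"now playing: {self.videos[self.index]}")

player = VideoSequencePlayer(["clip-1", "clip-2"])
player.on_dismiss()       # user swipes away clip-1 -> summary appears
player.on_summary_tap()   # user taps the summary -> clip-1 resumes
```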

SHARED AUGMENTED REALITY EXPERIENCE IN VIDEO CHAT (17660520)

Main Inventor

Nathan Richard Banks


Brief explanation

This abstract describes methods and systems for enabling a shared augmented reality (AR) experience during a video chat. Multiple client devices can participate in the video chat, and users' videos are displayed during the chat. If a user on one device requests to activate an AR experience, the system modifies the users' body parts in the videos to include AR elements related to that experience.

Abstract

Methods and systems are disclosed for performing operations for providing a shared augmented reality experience in a video chat. A video chat can be established between a plurality of client devices. During the video chat, videos of users associated with the client devices can be displayed. During the video chat, a request from a first client device to activate a first AR experience can be received and, in response, body parts of users depicted in the videos are modified to include one or more AR elements associated with the first AR experience.

AUGMENTED REALITY EXPERIENCE EVENT METRICS SYSTEM (17727972)

Main Inventor

Benjamin Todd Grover


Brief explanation

This abstract describes methods and systems for creating augmented reality (AR) experiences on a messaging platform. The system receives a request from a client device to access an AR experience and retrieves a list of event types associated with that experience. These event types are used to generate metrics. The system then determines if an interaction with the AR experience matches a specific event type from the list and generates interaction data for that event type. When a request is made to end the AR experience, the system sends the interaction data to a remote server.

Abstract

Methods and systems are disclosed for generating AR experiences on a messaging platform. The methods and systems receive, from a client device, a request to access an augmented reality (AR) experience and access a list of event types associated with the AR experience used to generate one or more metrics. The methods and systems determine that an interaction associated with the AR experience corresponds to a first event type of the list of event types and generate interaction data for the first event type representing the interaction. In response to receiving a request to terminate the AR experience, the systems and methods transmit the interaction data to a remote server.
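
The metrics flow amounts to filtering interactions against the experience's declared event types, buffering the matches, and flushing them when the experience terminates. A compact sketch under those assumptions:

```python
class ARMetricsSession:
    def __init__(self, experience_id, event_types):
        self.experience_id = experience_id
        self.event_types = set(event_types)   # list fetched for this experience
        self.interactions = []

    def record(self, event_type, payload):
        if event_type in self.event_types:    # ignore events not on the list
            self.interactions.append({"type": event_type, **payload})

    def terminate(self, transmit):
        transmit(self.experience_id, self.interactions)  # send on teardown
        self.interactions = []

session = ARMetricsSession("lens-42", ["tap", "face_detected"])
session.record("tap", {"t_ms": 1200})
session.record("shake", {"t_ms": 1500})       # not a tracked event type
session.terminate(lambda eid, data: print(f"upload for {eid}: {data}"))
```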

COMMUNICATING WITH A USER EXTERNAL TO A VIRTUAL CONFERENCE (18344746)

Main Inventor

Andrew Cheng-min Lin


Brief explanation

The present disclosure describes a system and method for communicating with an external user during a virtual conference. The system includes a computer-readable storage medium that stores a program. The program provides an interface for configuring an external communication element to communicate with the external user, and lets the user set properties for that element. During the virtual conference, the external communication element is included in the virtual room based on the set properties, and selecting it initiates communication with the external user according to those properties.

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for communicating with a user external to a virtual conference. The program and method provide, in association with designing a room for virtual conferencing, an interface for configuring an external communication element to communicate with an external user; receive, via the interface, an indication of first user input for setting properties for the external communication element; provide, in association with virtual conferencing for the room, the external communication element in the room based on the properties; receive an indication of second user input selecting the external communication element; and provide, in response to receiving indication of the second user input, for communication with the external user based on the properties.

EYEWEAR WITH CUSTOMIZABLE NOTIFICATIONS (18216856)

Main Inventor

John James Robertson


Brief explanation

This abstract describes a system and method for generating alerts on an eyewear device. The system receives data from a mobile device, indicating a specific combination of notification attributes that trigger an alert on the eyewear device. When the mobile device receives a new notification that matches the specified combination of attributes, the system retrieves a visual animation from the eyewear device's storage and activates a visual indicator on the eyewear device to generate the alert.

Abstract

Systems and methods for generating an alert on an eyewear device are provided. The systems and methods include receiving, by an eyewear device, from a mobile device, data indicative of a first combination of notification attributes that trigger a first alert on the eyewear device; determining that the mobile device has received a new notification based on additional data received from the mobile device; determining that a combination of attributes of the new notification matches the first combination of notification attributes; and in response to determining that the combination of the attributes of the new notification matches the first combination of notification attributes, retrieving, from a storage device of the eyewear device, a first visual indicator animation that represents the first alert; and activating a visual indicator of the eyewear device in accordance with the retrieved first visual indicator animation to generate the first alert on the eyewear device.
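
Matching a new notification against a registered attribute combination is essentially a subset check over key-value pairs. The attribute names and animation table below are invented for illustration.

```python
TRIGGERS = [
    # (required attribute combination, stored animation name on the eyewear)
    ({"app": "messages", "sender_group": "favorites"}, "pulse_green"),
    ({"app": "calendar", "urgency": "high"}, "flash_red"),
]

def match_alert(notification: dict):
    """Return the stored animation whose attribute combination the new
    notification matches, or None if no trigger applies."""
    for required, animation in TRIGGERS:
        if all(notification.get(k) == v for k, v in required.items()):
            return animation
    return None

incoming = {"app": "messages", "sender_group": "favorites", "length": 42}
animation = match_alert(incoming)
if animation:
    print(f"activating visual indicator with stored animation: {animation}")
```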

CAMERA SETTINGS AND EFFECTS SHORTCUTS (17937980)

Main Inventor

Kaveh Anvaripour


Brief explanation

The abstract describes a method for customizing visual settings on a device with a display and camera. It involves displaying a video on the device's display, which includes information about the camera settings and visual effects used during the video capture. The method also includes displaying a shortcut for camera effects, allowing the user to select it. Upon selecting the shortcut, the device displays a live video feed from the camera with the specified visual effects and camera settings.

Abstract

Disclosed is a method for providing custom visual settings on a device including a display and at least one camera. The method comprises displaying a video on the display of the device, the video including data specifying the camera settings and visual effects that were applied during capture of the video, displaying a camera effects shortcut, receiving user selection of the camera effects shortcut, and, based on receipt of user selection of the camera effects shortcut, displaying, on the display, a video feed from the at least one camera with the visual effects and camera settings specified by the data.

LOCATION-BASED CONTEXT INFORMATION SHARING IN A MESSAGING SYSTEM (18343517)

Main Inventor

Nicolas Dancie


Brief explanation

This abstract describes methods, systems, user interfaces, media, and devices for sharing the location of participants in a communication session through a messaging system. It explains that location information is received from a location sensor on a first client device, and the current location of the user is determined based on this information. The current location is then displayed on the screen of a second client device within a messaging user interface during the communication session. The location information can be updated as messages are exchanged and as the user's location changes. Additional information, such as the time period associated with the location, may also be included.

Abstract

Methods, systems, user interfaces, media, and devices are described for sharing the location of participants of a communication session established via a messaging system. Consistent with some embodiments, an electronic communication containing location information is received from a location sensor coupled to a first client device. A current location of the first user is determined based on the location information. A current location of the first user is displayed, on a display screen of a second client device, the current location of the first user being displayed within a messaging UI during a communication session between the computing device and the second computing device. The location information may be updated during the communication session as messages are exchanged and as a current location changes. Various embodiments may include additional information with the current location, such as a time period associated with the location, or other such information.