Snap Inc. patent applications published on December 28th, 2023

Patent applications for Snap Inc. on December 28th, 2023

AUGMENTED REALITY RIDING COURSE GENERATION (18467425)

Main Inventor

Edmund Graves Brown


Brief explanation

The patent application describes a technology that enhances gameplay by using augmented reality (AR) to display virtual objects on a map of a real-world location. Participants can ride on personal mobility systems like scooters along a track defined by the map. The virtual objects are displayed in the participants' AR devices, corresponding to their positions in the real world on the course. When a participant or their mobility system gets close to a virtual object, the technology detects the proximity and modifies a performance characteristic of the mobility system.
  • AR-enhanced gameplay with virtual objects displayed on a real-world map.
  • Participants ride on personal mobility systems like scooters along a track defined by the map.
  • Virtual objects are displayed in the participants' AR devices, corresponding to their positions in the real world.
  • When a participant or their mobility system gets close to a virtual object, the technology detects the proximity.
  • In response to the detection, a performance characteristic of the mobility system is modified.

Potential Applications

  • Enhancing gameplay experiences by integrating virtual objects into real-world locations.
  • Providing interactive and immersive experiences for participants using personal mobility systems.
  • Creating new forms of entertainment and outdoor activities that combine physical movement with virtual elements.

Problems Solved

  • Lack of interactive and immersive gameplay experiences in real-world locations.
  • Limited options for combining physical movement with virtual elements in outdoor activities.
  • Difficulty in modifying performance characteristics of personal mobility systems based on proximity to virtual objects.

Benefits

  • Enhanced gameplay experiences by merging virtual and real-world elements.
  • Increased engagement and excitement for participants using personal mobility systems.
  • Opportunities for new forms of entertainment and outdoor activities that blend physical and virtual elements.

Abstract

AR-enhanced gameplay includes a map of a course including a plurality of virtual objects, the map corresponding to a location in the real world and defining a track along which participants can ride on personal mobility systems such as scooters. Virtual objects are displayed in the fields of view of participants' augmented reality devices in positions corresponding to positions in the real world on the course. Proximity of a participant or their personal mobility system with the position of a virtual object in the real world is detected, and in response to the detection of proximity, a performance characteristic of the participant's personal mobility system is modified.
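
To make the proximity mechanism concrete, here is a minimal Python sketch of the per-tick check described above. The 2-meter radius, the dictionary shapes, and the speed_delta field are illustrative assumptions, not details from the application.

```python
import math

TRIGGER_RADIUS_M = 2.0  # assumed proximity threshold

def update_mobility_system(rider_pos, scooter, virtual_objects):
    """Detect proximity between the rider and each virtual object, and
    modify a performance characteristic (here, the scooter's top speed)."""
    for obj in virtual_objects:
        if math.dist(rider_pos, obj["pos"]) <= TRIGGER_RADIUS_M:
            # e.g. a boost object raises top speed, a hazard lowers it
            scooter["max_speed_kmh"] = max(
                1.0, scooter["max_speed_kmh"] + obj["speed_delta"])

scooter = {"max_speed_kmh": 15.0}
objects = [{"pos": (3.0, 4.0), "speed_delta": 5.0}]
update_mobility_system((2.5, 3.5), scooter, objects)  # within 2 m: boost applies
```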

COLOR CALIBRATION TOOL FOR SEE-THROUGH AUGMENTED REALITY ENVIRONMENT (17848179)

Main Inventor

Pawel Wawruch


Brief explanation

Abstract:

A color calibration system allows users to adjust tint/temperature parameters on a see-through display of a computing device. A virtual color reference card is displayed on the see-through display, corresponding to a physical color reference card placed in front of AR glasses. The color calibration system modifies the color properties of the display based on user adjustments. Users can continue adjusting the display properties until the virtual color reference card matches the physical color reference card seen through the AR glasses.

Patent/Innovation:

  • Color calibration system for see-through displays on computing devices.
  • Allows users to adjust tint/temperature parameters.
  • Displays a virtual color reference card on the see-through display.
  • Corresponds to a physical color reference card placed in front of AR glasses.
  • Changes the color properties of the display based on user adjustments.
  • Users can interact with the color calibration UI to continue adjusting the display properties.
  • Aims to match the colors of the virtual color reference card with the physical color reference card seen through the AR glasses.

Potential Applications:

  • Augmented reality (AR) glasses and headsets.
  • Virtual reality (VR) devices with see-through displays.
  • Color-critical applications such as graphic design, photography, and video editing.
  • Medical imaging and diagnostics.
  • Industrial applications requiring accurate color representation.

Problems Solved:

  • Inaccurate color representation on see-through displays.
  • Difficulty in adjusting tint/temperature parameters for AR glasses.
  • Lack of a visual reference for color calibration in AR environments.
  • Limited control over color properties of see-through displays.

Benefits:

  • Improved color accuracy and representation in AR environments.
  • User-friendly interface for adjusting tint/temperature parameters.
  • Real-time adjustments and visual feedback for color calibration.
  • Enhanced user experience in color-critical applications.
  • Increased control over color properties of see-through displays.

Abstract

A color calibration system is configured to permit a user to adjust tint/temperature parameters while displaying a virtual color reference card on a see-through display of a computing device. The virtual color reference card corresponds to a physical color reference card that is placed in front of the AR glasses. Based on the adjustments made by the user via the color calibration UI, the color calibration system makes changes to the color properties of the see-through display. The user can continue adjusting the properties of the see-through display by interacting with the user-selectable elements in the color calibration UI, until the colors of the virtual color reference card overlaid over the field of view of the wearer of the AR glasses match the colors of the physical color reference card seen by the wearer of the AR glasses.
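
The calibration loop can be pictured as follows; this is a minimal sketch in which the three callables (reading the UI sliders, applying properties to the display, and asking the wearer whether the cards match) are assumed integration points, not Snap APIs.

```python
def calibrate_see_through_display(read_ui_adjustment, apply_to_display,
                                  wearer_confirms_match):
    """Apply the user's tint/temperature adjustments to the see-through
    display until the virtual reference card, overlaid on the wearer's
    view, matches the physical reference card."""
    rounds = 0
    while not wearer_confirms_match():            # the wearer judges the match
        tint, temperature = read_ui_adjustment()  # user-selectable elements
        apply_to_display(tint=tint, temperature=temperature)
        rounds += 1
    return rounds  # number of adjustment iterations until a match
```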

AUGMENTED REALITY WAVEGUIDES WITH DYNAMICALLY ADDRESSABLE DIFFRACTIVE OPTICAL ELEMENTS (18305953)

Main Inventor

Juan Russo


Brief explanation

The patent application describes a waveguide that can transmit light by total internal reflection and includes diffractive optical elements (DOEs) that can change their diffraction efficiency in response to a stimulus. The DOEs are controlled independently by a DOE driver.
  • The waveguide body is made of a material with a different refractive index from the surrounding medium.
  • Light is propagated through the waveguide by total internal reflection along the output surface.
  • The waveguide includes one or more DOEs that can change their diffraction efficiency.
  • Each DOE can respond to a specific stimulus.
  • The DOE driver is responsible for providing the stimuli to each DOE independently.

Potential applications of this technology:

  • Augmented reality (AR) glasses: The waveguide can be used to display virtual information in the user's field of view.
  • Head-up displays (HUDs): The waveguide can project information onto the windshield of a vehicle.
  • Optical communication systems: The waveguide can be used to transmit data over long distances.

Problems solved by this technology:

  • Efficient light propagation: The waveguide allows for the efficient transmission of light through total internal reflection.
  • Dynamic control of diffraction efficiency: The DOEs can change their diffraction efficiency, allowing for dynamic control of the transmitted light.
  • Independent control of DOEs: The DOE driver enables independent control of each DOE, providing flexibility in manipulating the transmitted light.

Benefits of this technology:

  • Compact and lightweight: The waveguide can be made thin and lightweight, making it suitable for wearable devices.
  • Versatile display capabilities: The DOEs can create various visual effects, enhancing the user experience.
  • Energy-efficient: The waveguide minimizes light loss, resulting in energy-efficient transmission.

Abstract

A waveguide includes a waveguide body including an optically transmissive material having a refractive index different from a surrounding medium and defining an output surface. The waveguide body is configured to propagate light by total internal reflection in one or more directions substantially tangential to the output surface. The waveguide includes one or more diffractive optical elements (DOEs), each configured to change its diffraction efficiency in response to a respective stimulus, and a DOE driver configured to provide the stimuli to each of the DOEs independently.
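
Independent addressing can be pictured as a driver that holds one stimulus value per element, as in the sketch below; the saturating response curve is purely illustrative, since the abstract does not specify how efficiency varies with the stimulus.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DOE:
    """Diffractive optical element whose diffraction efficiency tracks an
    applied stimulus (e.g., a drive voltage)."""
    stimulus: float = 0.0

    def diffraction_efficiency(self) -> float:
        # Assumed response: efficiency rises linearly, then saturates.
        return min(1.0, max(0.0, self.stimulus))

@dataclass
class DOEDriver:
    """Provides stimuli to each DOE independently."""
    elements: List[DOE] = field(default_factory=list)

    def set_stimulus(self, index: int, value: float) -> None:
        self.elements[index].stimulus = value

driver = DOEDriver(elements=[DOE() for _ in range(3)])
driver.set_stimulus(1, 0.7)  # address only the second element
```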

LOW-POWER HAND-TRACKING SYSTEM FOR WEARABLE DEVICE (17851465)

Main Inventor

Alex Feinman


Brief explanation

The patent application describes a method for a low-power hand-tracking system using a wearable device with a proximity sensor. Here are the key points:
  • The method involves polling the proximity sensor of the wearable device to detect a proximity event.
  • The wearable device has both a low-power processor and a high-power processor.
  • Upon detecting the proximity event, a low-power hand-tracking application is operated on the low-power processor using proximity data from the sensor.
  • The operation of the hand-tracking application can end in three ways:
 * By detecting and recognizing a gesture based on the proximity data.
 * By detecting a gesture without recognizing it based on the proximity data.
 * By detecting a lack of activity from the proximity sensor within a timeout period based on the proximity data.

Potential Applications

  • Virtual reality and augmented reality systems
  • Gaming and interactive entertainment
  • Gesture-based control systems for various devices and appliances

Problems Solved

  • Reduces power consumption by utilizing a low-power processor for hand-tracking applications.
  • Provides efficient and accurate hand-tracking capabilities using proximity data from a wearable device.
  • Enables gesture recognition and control without the need for continuous hand-tracking, conserving power.

Benefits

  • Extended battery life for wearable devices by utilizing a low-power processor for hand-tracking applications.
  • Improved user experience with accurate and responsive hand-tracking capabilities.
  • Enables gesture-based control in various applications without draining the device's power.

Abstract

A method for a low-power hand-tracking system is described. In one aspect, a method includes polling a proximity sensor of a wearable device to detect a proximity event, the wearable device includes a low-power processor and a high-power processor, in response to detecting the proximity event, operating a low-power hand-tracking application on the low-power processor based on proximity data from the proximity sensor, and ending an operation of the low-power hand-tracking application in response to at least one of: detecting and recognizing a gesture based on the proximity data, detecting without recognizing the gesture based on the proximity data, or detecting a lack of activity from the proximity sensor within a timeout period based on the proximity data.
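
A sketch of the tracking session and its three termination conditions is given below; the 2-second timeout and both callback signatures are assumptions (poll_proximity() returns a sample or None, classify_gesture() returns a detected flag plus a recognized label or None).

```python
import time

TIMEOUT_S = 2.0  # assumed inactivity timeout

def low_power_tracking_session(poll_proximity, classify_gesture):
    """Runs on the low-power processor after a proximity event and ends
    in one of the three ways listed in the abstract."""
    samples = []
    last_activity = time.monotonic()
    while True:
        sample = poll_proximity()
        if sample is None:
            if time.monotonic() - last_activity > TIMEOUT_S:
                return "ended: lack of sensor activity within timeout"
            continue
        last_activity = time.monotonic()
        samples.append(sample)
        detected, label = classify_gesture(samples)
        if detected and label is not None:
            return f"ended: gesture recognized ({label})"
        if detected:
            return "ended: gesture detected but not recognized"
```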

GALLERY OF MESSAGES FROM INDIVIDUALS WITH A SHARED INTEREST (18464013)

Main Inventor

Timothy Sehn


Brief explanation

Abstract:

A machine with a processor and memory is described. The memory stores instructions for the processor to receive a message and a message parameter that indicates a characteristic of the message, which can be a photograph or a video. The machine determines if the message parameter corresponds to a selected gallery, which is a sequence of photographs or videos. If it does, the message is posted to the selected gallery in response to this determination. The selected gallery is provided upon request.

Explanation:

  • A machine is described with a processor and memory.
  • The memory stores instructions for the processor to perform certain tasks.
  • The machine can receive a message and a message parameter.
  • The message parameter indicates a characteristic of the message, which can be a photograph or a video.
  • The machine determines if the message parameter corresponds to a selected gallery.
  • The selected gallery is a sequence of photographs or videos.
  • If the message parameter matches the selected gallery, the message is posted to that gallery.
  • The selected gallery is provided when requested.

Potential Applications:

  • Social media platforms could use this technology to automatically categorize and post user-submitted photographs or videos to relevant galleries.
  • Online photo or video sharing platforms could benefit from automatically organizing and posting content to specific galleries based on user preferences or characteristics.
  • E-commerce websites could use this technology to categorize and display product images or videos in specific galleries based on customer preferences or product characteristics.

Problems Solved:

  • Manual categorization and posting of photographs or videos to specific galleries can be time-consuming and prone to errors.
  • Users often struggle to find relevant content in large collections of photographs or videos.
  • Organizing and managing large amounts of user-generated content can be challenging for online platforms.

Benefits:

  • Automation of categorization and posting saves time and reduces the risk of human error.
  • Users can easily find and access relevant content in specific galleries.
  • Online platforms can efficiently manage and organize large amounts of user-generated content.

Abstract

A machine includes a processor and a memory connected to the processor. The memory stores instructions executed by the processor to receive a message and a message parameter indicative of a characteristic of the message, where the message includes a photograph or a video. A determination is made that the message parameter corresponds to a selected gallery, where the selected gallery includes a sequence of photographs or videos. The message is posted to the selected gallery in response to the determination. The selected gallery is supplied in response to a request.
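
The routing step reduces to a parameter match, roughly as below; the gallery records and the tag field are illustrative assumptions.

```python
def post_to_gallery(message, message_parameter, galleries):
    """Post a photo/video message to the gallery whose tag matches the
    message parameter, and return that gallery (or None on no match)."""
    for gallery in galleries:
        if gallery["tag"] == message_parameter:
            gallery["items"].append(message)  # post in response to the match
            return gallery
    return None

galleries = [{"tag": "concert", "items": []}, {"tag": "beach", "items": []}]
post_to_gallery({"type": "photo", "data": "..."}, "concert", galleries)
```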

COLOCATED SHARED AUGMENTED REALITY (18243815)

Main Inventor

Ana Maria Cardenas Gasca


Brief explanation

The patent application describes methods and systems for creating a shared augmented reality (AR) session. Here is a simplified explanation of the abstract:
  • Users can select a shared AR experience from multiple options using their client device.
  • The client device identifies the resources associated with the selected AR experience.
  • The client device determines if there are two or more users within a certain distance from it.
  • If the users are within the specified distance, the selected AR experience is activated.

Potential applications of this technology:

  • Collaborative gaming experiences where multiple users can interact with each other in an AR environment.
  • Virtual tours or guided experiences where users can explore a location together in AR.
  • Training simulations where multiple users can practice skills or scenarios in an AR setting.

Problems solved by this technology:

  • Facilitates shared AR experiences by identifying users in close proximity to each other.
  • Allows users to easily select and activate a shared AR experience on their client devices.

Benefits of this technology:

  • Enhances social interaction by enabling users to engage in shared AR experiences.
  • Provides a more immersive and interactive AR experience for users.
  • Simplifies the process of setting up and activating shared AR sessions.

Abstract

Methods and systems are disclosed for creating a shared augmented reality (AR) session. The methods and systems perform operations comprising: receiving, by a client device, input that selects a shared augmented reality (AR) experience from a plurality of shared AR experiences; in response to receiving the input, determining one or more resources associated with the selected shared AR experience; determining, by the client device, that two or more users are located within a threshold proximity of the client device; and activating the selected shared AR experience in response to determining that the two or more users are located within the threshold proximity of the client device.
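
A sketch of the activation gate follows; the 10-meter threshold, the local (x, y) coordinates, and the two-user minimum are the simplest reading of the abstract's "threshold proximity".

```python
import math

THRESHOLD_M = 10.0  # assumed proximity threshold

def activate_if_colocated(experience, load_resources, device_pos, user_positions):
    """Load the selected experience's resources, then activate only once
    two or more users are within the threshold distance of the device."""
    resources = load_resources(experience)  # resources tied to the experience
    near = sum(1 for p in user_positions
               if math.hypot(p[0] - device_pos[0],
                             p[1] - device_pos[1]) <= THRESHOLD_M)
    return {"experience": experience,
            "resources": resources,
            "active": near >= 2}
```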

CONTEXTUAL NAVIGATION MENU (18467505)

Main Inventor

Newar Husam Al Majid


Brief explanation

The patent application describes systems and methods for generating and displaying a contextual navigation menu in a graphical user interface (GUI). The menu presents interface elements that are relevant to the context.
  • The invention focuses on creating a contextual navigation menu within a GUI.
  • The menu is generated based on the current context of the user interface.
  • The menu presents interface elements that are relevant to the current context.
  • The invention aims to improve user experience by providing contextually relevant options.
  • The system and methods can be implemented in various software applications.

Potential Applications

This technology can be applied in various software applications, including:

  • Web browsers: The contextual navigation menu can provide relevant options based on the current webpage or user activity.
  • Mobile applications: The menu can adapt to the current screen or user interaction, offering contextually relevant actions.
  • Productivity software: The invention can enhance productivity tools by presenting relevant options based on the user's current task.
  • Gaming interfaces: The contextual menu can provide game-specific options based on the player's current situation.

Problems Solved

The technology addresses the following problems:

  • Lack of context-awareness: Traditional GUIs often present the same navigation options regardless of the current context, leading to a cluttered and less efficient user experience.
  • Difficulty in finding relevant options: Users may struggle to locate the appropriate actions within a complex interface, resulting in frustration and reduced productivity.
  • Limited screen space: Mobile devices and small screens have limited space for displaying navigation options, making it crucial to present only the most relevant choices.

Benefits

The technology offers several benefits:

  • Improved user experience: By presenting contextually relevant options, users can quickly access the actions they need, enhancing efficiency and reducing frustration.
  • Streamlined interface: The contextual navigation menu helps declutter the GUI by only displaying relevant interface elements, leading to a cleaner and more intuitive user interface.
  • Increased productivity: Users can save time and effort by easily finding and accessing the most appropriate actions for their current context.
  • Adaptability: The system and methods can be implemented in various software applications, providing flexibility and adaptability to different user interfaces.

Abstract

Systems and methods to generate and cause display of a contextual navigation menu within a GUI, wherein the contextual navigation menu presents contextually relevant interface elements.

APPLICATION LAUNCH SUPPORT (18243486)

Main Inventor

Phong Le


Brief explanation

The patent application describes a method for testing software updates before launching them to all users. Here is a simplified explanation of the abstract:
  • The method involves monitoring how users engage with an existing application on multiple devices.
  • Based on this user engagement data, a probability interval is determined.
  • A candidate update version of the application is then launched to a subset of devices.
  • The user engagement data of the candidate update version is monitored on these devices.
  • If the user engagement data falls within the probability interval, a testing pass notification is provided.

Potential applications of this technology:

  • Software companies can use this method to test new versions of their applications before releasing them to all users.
  • It can help identify any issues or bugs in the update version before it is widely distributed.
  • The method can be applied to various types of software, including mobile apps, web applications, and desktop software.

Problems solved by this technology:

  • Ensures that software updates are thoroughly tested before being released to all users.
  • Helps identify any potential issues or bugs in the update version, allowing them to be fixed before causing problems for a large user base.
  • Reduces the risk of negative user experiences and improves the overall quality of software updates.

Benefits of this technology:

  • Provides a systematic approach to regression testing, ensuring that updates are thoroughly evaluated.
  • Saves time and resources by testing updates on a subset of devices before a full release.
  • Helps maintain a positive user experience by minimizing the chances of releasing faulty software updates.

Abstract

A method of software launch regression testing comprises monitoring a user engagement parameter of an existing application running on a plurality of client devices and determining a probability interval from the user engagement parameter of the existing application. A candidate update version of the application is then launched to a subset of the plurality of client devices. The method then proceeds with monitoring a corresponding user engagement parameter of the candidate update version running on the subset of client devices, determining if the corresponding user engagement parameter of the candidate update version falls within the probability interval, and, based on the corresponding user engagement parameter falling within the probability interval, providing a testing pass notification.
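
One plausible reading of the probability interval is a confidence band around the baseline engagement, sketched below with a normal approximation; the exact statistic is not specified in the abstract.

```python
import statistics

def probability_interval(baseline_samples, z=1.96):
    """Approximate 95% interval for the engagement parameter, computed
    from the existing app's per-device samples."""
    mean = statistics.fmean(baseline_samples)
    sd = statistics.stdev(baseline_samples)
    return mean - z * sd, mean + z * sd

def regression_test(baseline_samples, candidate_samples):
    lo, hi = probability_interval(baseline_samples)
    candidate_mean = statistics.fmean(candidate_samples)
    if lo <= candidate_mean <= hi:
        return "testing pass notification"  # engagement consistent with baseline
    return "testing fail"                   # engagement drifted; hold the launch
```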

MULTIMODAL SENTIMENT CLASSIFICATION (18244543)

Main Inventor

Jianfei Yu


Brief explanation

The patent application describes a neural network for sentiment classification in social media posts that takes into account different entities and includes image data.
  • The neural network includes separate subnetworks for the left, right, and target entities mentioned in a social media post.
  • An image network generates representation data from images associated with the post.
  • The output of the left, right, and target entity subnetworks is combined and weighted with the representation data from the image network.
  • The combined data is used to classify the sentiment of the entity mentioned in the post.

Potential Applications

  • Social media sentiment analysis
  • Brand monitoring and reputation management
  • Customer feedback analysis
  • Market research and consumer insights

Problems Solved

  • Accurate sentiment classification in social media posts
  • Handling multiple entities mentioned in a post
  • Incorporating image data for sentiment analysis

Benefits

  • Improved understanding of sentiment towards specific entities in social media
  • More accurate analysis of customer opinions and feedback
  • Enhanced brand monitoring and reputation management capabilities
  • Better insights for market research and decision-making

Abstract

Sentiment classification can be implemented by an entity-level multimodal sentiment classification neural network. The neural network can include left, right, and target entity subnetworks. The neural network can further include an image network that generates representation data that is combined and weighted with data output by the left, right, and target entity subnetworks to output a sentiment classification for an entity included in a network post.
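
The fusion step might look like the sketch below; the weighted-sum rule and the three-class softmax head are assumptions, since the abstract only states that the subnetwork outputs are "combined and weighted" with the image representation.

```python
import numpy as np

def classify_entity_sentiment(left, right, target, image,
                              fusion_weights, w_out, b_out):
    """Weight and combine the left/right/target subnetwork outputs with
    the image-network representation, then project to sentiment logits."""
    fused = (fusion_weights[0] * left + fusion_weights[1] * right
             + fusion_weights[2] * target + fusion_weights[3] * image)
    logits = w_out @ fused + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()  # e.g. P(negative), P(neutral), P(positive)
```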

IDENTIFYING PERSONALLY IDENTIFIABLE INFORMATION WITHIN AN UNSTRUCTURED DATA STORE (18244797)

Main Inventor

Vasyl Pihur


Brief explanation

Methods and systems for identifying personally identifiable information (PII) are disclosed in this patent application. The invention involves generating frequency maps of fields that store known PII information, counting occurrences of unique bigrams in these PII fields. A field of interest is then analyzed to generate a second frequency map. Correlations between the first frequency maps and the second frequency map are generated. If one of the correlations meets certain criteria, it is determined whether the field of interest includes PII or not. Access control for the field of interest is then based on whether it includes PII. Additionally, the storage location of data included in the field of interest may be determined based on whether it includes PII.
  • Frequency maps of fields storing known PII information are generated.
  • Occurrences of unique bigrams in the PII fields are counted.
  • A field of interest is analyzed to generate a second frequency map.
  • Correlations between the first frequency maps and the second frequency map are generated.
  • Access control for the field of interest is determined based on whether it includes PII.
  • The storage location of data included in the field of interest may be determined based on whether it includes PII.

Potential Applications

  • Data privacy protection in various industries such as healthcare, finance, and e-commerce.
  • Compliance with data protection regulations, such as GDPR or HIPAA.
  • Enhancing security measures for sensitive information.

Problems Solved

  • Efficient identification of personally identifiable information within data fields.
  • Automating the process of determining whether a field contains PII.
  • Enabling access control and storage location decisions based on the presence of PII.

Benefits

  • Improved data privacy and protection against unauthorized access.
  • Streamlined compliance with data protection regulations.
  • Enhanced efficiency and accuracy in identifying and handling PII.

Abstract

Methods and systems for identifying personally identifiable information (PII) are disclosed. In some aspects, frequency maps of fields storing known PII information are generated. The frequency maps may count occurrences of unique bigrams in the PII fields. A field of interest may then be analyzed to generate a second frequency map. Correlations between the first frequency maps and the second frequency map may be generated. If one of the correlations meets a certain criterion, the disclosed embodiments may determine that the field of interest does or does not include PII. Access control for the field of interest may then be based on whether the field includes PII. In some aspects, a storage location of data included in the field of interest may be based on whether the field includes PII.
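
The bigram-frequency comparison can be sketched in a few lines; Pearson correlation over the union of bigrams and the 0.8 cutoff are assumptions standing in for the unspecified "certain criterion".

```python
from collections import Counter

def bigram_frequency_map(values):
    """Count occurrences of unique character bigrams across a field."""
    counts = Counter()
    for v in values:
        counts.update(v[i:i + 2] for i in range(len(v) - 1))
    return counts

def correlation(map_a, map_b):
    """Pearson correlation between two frequency maps."""
    keys = sorted(set(map_a) | set(map_b))
    a = [map_a.get(k, 0) for k in keys]
    b = [map_b.get(k, 0) for k in keys]
    n = len(keys)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

known_email = bigram_frequency_map(["alice@example.com", "bob@example.org"])
field_of_interest = bigram_frequency_map(["carol@example.net"])
if correlation(known_email, field_of_interest) > 0.8:  # assumed criterion
    print("field likely contains PII: restrict access, move storage")
```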

VIRTUAL OBJECT MACHINE LEARNING (18244016)

Main Inventor

Xuehan Xiong


Brief explanation

Abstract:

A machine learning scheme is described that can be trained on a set of labeled training images of a subject in different poses, textures, and background environments. The label data is stored as metadata with 3D models or rendered images of the subject, and a supervised learning scheme automatically identifies this labeled data to create a classification model. The model can accurately classify a depicted subject in various environments and poses.

Patent/Innovation:

  • Machine learning scheme trained on labeled training images of a subject in different poses, textures, and background environments.
  • Utilizes metadata stored as 3D models or rendered images of the subject.
  • Automatically identifies labeled data to create a classification model.
  • Classification model can accurately classify a depicted subject in various environments and poses.

Potential Applications:

  • Facial recognition systems for security purposes.
  • Virtual reality and augmented reality applications.
  • Human-computer interaction and gesture recognition.
  • Animation and gaming industries.
  • Medical imaging and diagnostics.

Problems Solved:

  • Overcomes the challenge of accurately classifying subjects in different poses, textures, and background environments.
  • Provides a solution for training machine learning models on labeled data with diverse variations.
  • Enables accurate identification and classification of subjects in various real-world scenarios.

Benefits:

  • Improved accuracy and reliability in classifying subjects in different environments and poses.
  • Enhanced performance of facial recognition systems and other related applications.
  • Increased efficiency in human-computer interaction and gesture recognition.
  • Enables realistic and immersive experiences in virtual reality and augmented reality.
  • Advances medical imaging and diagnostics by accurately identifying subjects in diverse scenarios.

Abstract

A machine learning scheme can be trained on a set of labeled training images of a subject in different poses, with different textures, and with different background environments. The label or marker data of the subject may be stored as metadata to a 3D model of the subject or rendered images of the subject. The machine learning scheme may be implemented as a supervised learning scheme that can automatically identify the labeled data to create a classification model. The classification model can classify a depicted subject in many different environments and arrangements (e.g., poses).

SYSTEM TO DISPLAY USER PATH (18096357)

Main Inventor

Jacob Catalano


Brief explanation

The abstract of the patent application describes a system that displays a user's route over a period of time. Here is a simplified explanation of the abstract:
  • The system displays a map image showing a location.
  • It accesses user profile data, which includes a user identifier and location data associated with the user profile.
  • Based on the user profile data, the system identifies a sequence of locations associated with the user profile.
  • The system then displays a trail indicating the sequence of locations, with the trail ending at the display of the user identifier.

Potential applications of this technology:

  • Fitness tracking: The system can be used to track and display a user's running or cycling route over time, helping them monitor their progress and set goals.
  • Travel tracking: It can be used to track and display a user's travel route, allowing them to share their experiences with others or keep a record of their journeys.
  • Delivery tracking: Companies can use the system to track and display the route taken by their delivery personnel, ensuring efficient and accurate deliveries.

Problems solved by this technology:

  • Lack of visual representation: The system provides a visual representation of a user's route, making it easier for them to understand and analyze their movements.
  • Difficulty in tracking multiple locations: The system identifies and displays a sequence of locations, making it convenient to track and visualize the user's route.
  • Limited user identification: By displaying the user identifier at the end of the trail, the system ensures that the user's identity is clearly associated with their route.

Benefits of this technology:

  • Enhanced tracking and analysis: Users can easily track and analyze their routes over time, helping them improve their performance or gain insights from their movements.
  • Improved communication: The system allows users to share their routes with others, facilitating communication and sharing of experiences.
  • Efficient logistics management: Companies can use the system to optimize delivery routes and monitor the movements of their personnel, leading to improved efficiency and customer satisfaction.

Abstract

A system to display a route of a user over a period of time is configured to perform operations that include: causing display of a map image that depicts a location; accessing user profile data associated with a user profile, the user profile data comprising a user identifier and location data associated with the user profile; identifying a sequence of locations associated with the user profile based on the user profile data; and causing display of a presentation of a trail indicating the sequence of locations associated with the user profile, the trail terminating at a display of the user identifier.
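
The trail construction reduces to ordering the profile's location fixes and labeling the terminus, as in this sketch; the record layout and the handle are illustrative assumptions.

```python
def build_trail(user_profile):
    """Order the profile's location fixes by timestamp; the trail
    terminates at a display of the user identifier."""
    fixes = sorted(user_profile["location_data"], key=lambda f: f["t"])
    return {"polyline": [(f["lat"], f["lng"]) for f in fixes],  # drawn on map
            "endpoint_label": user_profile["user_id"]}          # trail terminus

profile = {"user_id": "@runner42",
           "location_data": [{"t": 2, "lat": 40.74, "lng": -74.00},
                             {"t": 1, "lat": 40.73, "lng": -74.01}]}
print(build_trail(profile))
```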

WHOLE BODY SEGMENTATION (18243444)

Main Inventor

Gal Dudovitch


Brief explanation

The patent application describes methods and systems for enhancing monocular images of a user's whole body by applying visual effects based on a smoothed segmentation of the body.
  • The system receives a monocular image of the user's whole body.
  • It generates a segmentation of the body based on the monocular image.
  • The system accesses a video feed containing previous monocular images.
  • It uses the video feed to smooth the segmentation of the body generated from the monocular image.
  • The system applies visual effects to the monocular image based on the smoothed segmentation.

Potential Applications

  • Augmented reality applications that enhance the appearance of a user's body in real-time.
  • Virtual try-on systems for clothing or accessories, where visual effects can be applied to the user's body to simulate different styles or designs.
  • Fitness or wellness applications that provide visual feedback on body movements or posture.

Problems Solved

  • Enhances the visual quality of monocular images by applying visual effects based on a smoothed segmentation of the user's body.
  • Provides a more realistic and immersive experience in augmented reality or virtual try-on applications.
  • Enables accurate tracking and analysis of body movements or posture in fitness or wellness applications.

Benefits

  • Improved visual appearance of monocular images by applying visual effects based on a smoothed segmentation.
  • Real-time application of visual effects allows for interactive and dynamic user experiences.
  • Accurate tracking and analysis of body movements or posture can provide valuable feedback for fitness or wellness applications.

Abstract

Methods and systems are disclosed for performing operations comprising: receiving a monocular image that includes a depiction of a whole body of a user; generating a segmentation of the whole body of the user based on the monocular image; accessing a video feed comprising a plurality of monocular images received prior to the monocular image; smoothing, using the video feed, the segmentation of the whole body generated based on the monocular image to provide a smoothed segmentation; and applying one or more visual effects to the monocular image based on the smoothed segmentation.
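
The temporal smoothing might be an exponential-style blend between the current mask and the recent history, as sketched here; the patent does not fix the smoothing method, and the 0.7 weight is an assumption.

```python
import numpy as np

ALPHA = 0.7  # assumed weight on the current frame

def smooth_segmentation(current_mask, previous_masks, alpha=ALPHA):
    """Blend the current frame's segmentation (values in [0, 1]) with
    masks from the prior video feed to suppress frame-to-frame jitter."""
    if not previous_masks:
        return current_mask
    history = np.mean(previous_masks, axis=0)
    return alpha * current_mask + (1.0 - alpha) * history

def apply_effect(image, smoothed_mask, effect_color=(255, 0, 255)):
    """Tint the segmented body region, standing in for 'one or more
    visual effects'."""
    m = smoothed_mask[..., None]  # broadcast the mask over color channels
    out = (1 - 0.5 * m) * image.astype(np.float32) \
        + 0.5 * m * np.array(effect_color, dtype=np.float32)
    return out.astype(np.uint8)
```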

DENSE FEATURE SCALE DETECTION FOR IMAGE MATCHING (18367034)

Main Inventor

Shenlong Wang


Brief explanation

The patent application describes a method for detecting dense features in images using multiple convolutional neural networks trained on scale data. This allows for more accurate and efficient matching of pixels between images.
  • Multiple convolutional neural networks are trained on scale data to detect dense features in images.
  • An input image is used to generate multiple scaled images.
  • The scaled images are input into a feature net, which outputs feature data for each scaled image.
  • An attention net generates an attention map from the input image, assigning emphasis to different scales based on texture analysis.
  • The feature data and attention data are combined through a multiplication process and then summed to generate dense features for comparison.

Potential Applications

  • Image recognition and classification
  • Object detection and tracking
  • Image matching and alignment
  • Augmented reality applications

Problems Solved

  • Inaccurate and inefficient pixel matching between images
  • Difficulty in detecting dense features in images
  • Lack of emphasis on different scales based on texture analysis

Benefits

  • More accurate and efficient matching of pixels between images
  • Improved detection of dense features in images
  • Enhanced emphasis on different scales based on texture analysis

Abstract

Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
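
The multiply-then-sum fusion is straightforward, as the sketch below shows; it assumes the attention map is already a soft distribution (summing to 1 over scales at each pixel) and that all scaled feature maps have been resized to a common resolution.

```python
import numpy as np

def dense_features(scaled_feature_maps, attention_weights):
    """Combine per-scale feature maps with per-scale attention.

    scaled_feature_maps: list of (H, W, C) arrays, one per scale
    attention_weights:   (num_scales, H, W) soft distribution over scales
    Returns the (H, W, C) dense features used for pixel matching."""
    out = np.zeros_like(scaled_feature_maps[0], dtype=np.float32)
    for s, fmap in enumerate(scaled_feature_maps):
        out += attention_weights[s][..., None] * fmap  # weight, then sum
    return out
```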

AUGMENTED REALITY IMAGE REPRODUCTION ASSISTANT (17852134)

Main Inventor

Pawel Wawruch


Brief explanation

An image copying assistant is a computer application that helps users copy a digital image onto a physical canvas using traditional media. It uses augmented reality techniques to project features of the digital image onto the canvas.
  • The image copying assistant detects markers in the camera's output to determine the plane and boundaries of the canvas.
  • It uses this information to position the digital image on the display of the computing device.

Potential Applications

  • Artistic copying: Artists can use this technology to easily transfer digital images onto physical canvases for painting or drawing.
  • Educational tool: Students learning to paint or draw can use the image copying assistant to practice replicating digital images onto paper or canvas.

Problems Solved

  • Difficulty in accurately transferring digital images onto physical canvases.
  • Time-consuming process of manually measuring and positioning the image on the canvas.

Benefits

  • Simplifies the process of copying digital images onto physical canvases.
  • Provides accurate positioning and scaling of the digital image on the canvas.
  • Saves time and effort for artists and students.

Abstract

An image copying assistant is a computing application configured to aid users in copying a digital image to a physical canvas using traditional media on the physical canvas. The image copying assistant utilizes augmented reality techniques to present features of the digital image projected onto the physical canvas. The image copying assistant detects previously generated markers in an output of a digital image sensor of a camera of a computing device and uses the detected markers to calculate the plane and boundaries of the surface of the physical canvas. The image copying assistant uses the calculated plane and boundaries to determine a position of the digital image on a display of the computing device.
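
Once the markers give the canvas boundary, positioning the digital image is a fitting problem; this sketch does the simplest axis-aligned version, whereas the application computes the full canvas plane.

```python
def fit_image_to_canvas(image_w, image_h, canvas_corners):
    """Return the screen position and scale at which to draw the digital
    image so it fills the marker-derived canvas boundary while keeping
    its aspect ratio."""
    xs = [c[0] for c in canvas_corners]
    ys = [c[1] for c in canvas_corners]
    x0, y0 = min(xs), min(ys)
    canvas_w, canvas_h = max(xs) - x0, max(ys) - y0
    scale = min(canvas_w / image_w, canvas_h / image_h)
    # center the scaled image within the canvas boundary
    return {"x": x0 + (canvas_w - image_w * scale) / 2,
            "y": y0 + (canvas_h - image_h * scale) / 2,
            "scale": scale}

print(fit_image_to_canvas(800, 600, [(100, 50), (500, 60), (490, 350), (110, 340)]))
```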

DOUBLE CAMERA STREAMS (17822022)

Main Inventor

Cai Zhu


Brief explanation

The patent application describes a device that can apply augmented reality effects to live video streams captured by a camera and display them in real time on a display. The device can also save the original video stream and later apply augmented reality effects to it independently.
  • The device includes a display and a camera.
  • It receives a live stream of image frames captured by the camera.
  • Augmented reality effects are applied to the live stream of image frames.
  • The augmented stream of image frames is displayed on the device's display in real time.
  • A second stream of image frames, corresponding to the first stream, is saved to a video file.
  • The saved video file can be retrieved later and augmented reality effects can be applied to it independently.

Potential Applications

  • Entertainment and gaming: Augmented reality effects can enhance the user experience in video games and entertainment applications.
  • Education and training: Augmented reality can be used to provide interactive and immersive learning experiences.
  • Advertising and marketing: Augmented reality effects can be applied to promotional videos or advertisements to make them more engaging and interactive.

Problems Solved

  • Real-time augmented reality: The device allows for the application of augmented reality effects to live video streams, providing an immersive and interactive experience.
  • Independent application of effects: The ability to save the original video stream and apply augmented reality effects later allows for flexibility and customization.

Benefits

  • Enhanced user experience: Augmented reality effects can make video content more engaging and interactive.
  • Flexibility and customization: The ability to apply augmented reality effects independently to saved video streams allows for customization and experimentation.
  • Immersive learning and training: Augmented reality can provide a more immersive and interactive learning experience, enhancing retention and engagement.

Abstract

Image augmentation effects are provided on a device that includes a display and a camera. A first stream of image frames captured by the camera is received and an augmented reality effect is applied thereto to generate an augmented stream of image frames. The augmented stream of image frames is displayed on the display in real time. A second stream of image frames, corresponding to the first stream of image frames, is concurrently saved to an initial video file. The second stream of image frames can later be retrieved from the initial video file and the augmented reality effects applied thereto independently of the first stream of image frames.
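
Per frame, the device effectively forks the camera output, roughly as below; the display, effect, and file objects are assumed integration points.

```python
def process_camera_tick(frame, apply_ar_effect, display, raw_video_file):
    """Handle one camera frame: show the augmented stream in real time
    while saving the corresponding original frame for later."""
    display.show(apply_ar_effect(frame))  # stream 1: augmented, live
    raw_video_file.write(frame)           # stream 2: original, saved

def reedit_saved_video(raw_frames, apply_ar_effect):
    """Later pass: apply (possibly different) AR effects to the saved
    original stream, independently of the live stream."""
    return [apply_ar_effect(f) for f in raw_frames]
```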

STATE-SPACE SYSTEM FOR PSEUDORANDOM ANIMATION (18242964)

Main Inventor

Gurunandan Krishnan Gorumkonda


Brief explanation

The patent application describes a state-space system for creating pseudorandom animation.
  • Animation elements in a computer model are identified.
  • Motion patterns and speed harmonics are identified for each animation element.
  • A set of motion data values is generated, which describes the motion patterns and speed harmonics.
  • Each value in the set is assigned a probability.
  • The probability is used to select and update a specific motion used in an animation created from the computer model.

Potential applications of this technology:

  • Video game development: The state-space system can be used to create more realistic and dynamic animations for characters and objects in video games.
  • Virtual reality: The technology can enhance the immersion and realism of virtual reality experiences by generating more natural and varied animations.
  • Animation production: The system can be utilized in the production of animated films and TV shows to automate the generation of animations with different motion patterns and speed harmonics.

Problems solved by this technology:

  • Lack of variety in animations: The state-space system introduces pseudorandomness to animation generation, allowing for a wider range of motion patterns and speed harmonics.
  • Time-consuming animation creation: By automating the generation of animations, the system reduces the time and effort required to create complex and realistic animations.

Benefits of this technology:

  • Enhanced realism: The state-space system generates animations with more natural and varied motion patterns, improving the realism of virtual environments and animated content.
  • Time and cost savings: By automating animation generation, the system reduces the need for manual animation creation, saving time and resources.
  • Increased creativity: The pseudorandom nature of the system allows for the exploration of new and unique animation styles, fostering creativity in animation production.

Abstract

Methods, devices, media, and other embodiments are described for a state-space system for pseudorandom animation. In one embodiment animation elements within a computer model are identified, and for each animation element motion patterns and speed harmonics are identified. A set of motion data values comprising a state-space description of the motion patterns and the speed harmonics are generated, and a probability assigned to each value of the set of motion data values for the state-space description. The probability can then be used to select and update a particular motion used in an animation generated from the computer model.
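
Selection from the state space can be as simple as a weighted random choice, sketched below; the states and probabilities are invented for illustration.

```python
import random

# One animation element's state space: motion pattern + speed harmonic,
# each with an assigned probability (values are illustrative).
MOTION_STATES = [
    {"pattern": "sway",   "speed_hz": 0.5, "prob": 0.5},
    {"pattern": "bounce", "speed_hz": 1.0, "prob": 0.3},
    {"pattern": "twist",  "speed_hz": 2.0, "prob": 0.2},
]

def next_motion(states=MOTION_STATES):
    """Pseudorandomly select and update the element's current motion
    according to the assigned probabilities."""
    return random.choices(states, weights=[s["prob"] for s in states])[0]

for _ in range(3):
    print(next_motion())  # the selected motion varies run to run
```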

LIGHT ESTIMATION METHOD FOR THREE-DIMENSIONAL (3D) RENDERED OBJECTS (17846918)

Main Inventor

Menglei Chai


Brief explanation

The patent application describes a method for applying lighting conditions to a virtual object in an augmented reality (AR) device. Here are the key points:
  • The method involves using a camera on a mobile device to capture an image.
  • A virtual object corresponding to an object in the image is accessed.
  • Lighting parameters of the virtual object are identified using a pre-trained machine learning model.
  • The machine learning model is trained with a paired dataset that includes synthetic source data and synthetic target data.
  • The synthetic source data includes environment maps and 3D scans of items depicted in the environment map.
  • The synthetic target data includes a synthetic sphere image rendered in the same environment map.
  • The identified lighting parameters are then applied to the virtual object.
  • The shaded virtual object is displayed as a layer on top of the image in the display of the mobile device.

Potential applications of this technology:

  • Augmented reality applications can benefit from realistic lighting conditions applied to virtual objects, enhancing the overall user experience.
  • This method can be used in gaming applications to provide more immersive and visually appealing virtual objects.
  • It can also be used in architectural and interior design applications to visualize how virtual objects would look in different lighting conditions.

Problems solved by this technology:

  • Traditional methods of applying lighting conditions to virtual objects in AR devices may not provide realistic or accurate results.
  • This method utilizes a machine learning model trained with a paired dataset to accurately identify and apply lighting parameters, resulting in more realistic virtual objects.

Benefits of this technology:

  • Users of AR devices can experience more realistic and visually appealing virtual objects with accurate lighting conditions.
  • The method is efficient and can be implemented on mobile devices, allowing for real-time application of lighting parameters to virtual objects.
  • By using a machine learning model, the method can adapt to different lighting conditions and provide consistent results across various environments.

Abstract

A method for applying lighting conditions to a virtual object in an augmented reality (AR) device is described. In one aspect, the method includes generating, using a camera of a mobile device, an image, accessing a virtual object corresponding to an object in the image, identifying lighting parameters of the virtual object based on a machine learning model that is pre-trained with a paired dataset, the paired dataset includes synthetic source data and synthetic target data, the synthetic source data includes environment maps and 3D scans of items depicted in the environment map, the synthetic target data includes a synthetic sphere image rendered in the same environment map, applying the lighting parameters to the virtual object, and displaying, in a display of the mobile device, the shaded virtual object as a layer to the image.
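
Applying the estimated parameters could look like the Lambertian sketch below; in the described pipeline the parameters come from the pre-trained model rather than being passed in directly, and the shading model here is an assumption.

```python
import numpy as np

def shade_virtual_object(albedo, normals, light):
    """Shade a virtual object with estimated lighting parameters.

    albedo:  (N, 3) per-vertex base color in [0, 1]
    normals: (N, 3) unit normals
    light:   {'direction': (3,), 'color': (3,), 'ambient': (3,)}"""
    d = np.asarray(light["direction"], dtype=float)
    d /= np.linalg.norm(d)
    diffuse = np.clip(normals @ d, 0.0, 1.0)[:, None] * light["color"]
    return np.clip(albedo * (light["ambient"] + diffuse), 0.0, 1.0)
```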

ROBOTIC LEARNING OF ASSEMBLY TASKS USING AUGMENTED REALITY (17846930)

Main Inventor

Kai Zhou


Brief explanation

The patent application describes a method for programming a robotic system using an augmented reality (AR) device. Here are the key points:
  • The method involves displaying a virtual object in the AR device that corresponds to a physical object in the real world.
  • The AR device tracks the manipulation of the virtual object by a user, identifying the initial and final states of the object based on the tracking.
  • The method then uses this tracking data to program a robotic system, taking into account the initial and final poses of the virtual object.

Potential applications of this technology:

  • Industrial automation: This method can be used to program robots in manufacturing settings, allowing operators to easily demonstrate the desired tasks and movements.
  • Healthcare: Robotic systems used in surgeries or rehabilitation can be programmed using this method, making it easier for medical professionals to teach the robots specific movements.
  • Education and training: This technology can be used in educational settings to teach students about robotics and programming, allowing them to interact with virtual objects and program robots through demonstration.

Problems solved by this technology:

  • Simplified programming: Traditional methods of programming robots can be complex and time-consuming. This method simplifies the programming process by allowing users to demonstrate the desired movements instead of writing code.
  • Intuitive interaction: By using augmented reality, users can interact with virtual objects in a more natural and intuitive way, making it easier to program robots.
  • Flexibility: This method allows for easy reprogramming of robotic systems, as users can simply demonstrate the desired movements instead of rewriting code.

Benefits of this technology:

  • Time and cost savings: The simplified programming process reduces the time and resources required to program robotic systems, making them more accessible and cost-effective.
  • Increased productivity: By simplifying the programming process, operators can quickly and easily program robots to perform specific tasks, improving overall productivity.
  • Enhanced user experience: The use of augmented reality and demonstration-based programming provides a more engaging and user-friendly experience for operators and programmers.

Abstract

A method for programming a robotic system by demonstration is described. In one aspect, the method includes displaying a first virtual object in a display of an augmented reality (AR) device, the first virtual object corresponding to a first physical object in a physical environment of the AR device, tracking, using the AR device, a manipulation of the first virtual object by a user of the AR device, identifying an initial state and a final state of the first virtual object based on the tracking, the initial state corresponding to an initial pose of the first virtual object, the final state corresponding to a final pose of the first virtual object, and programming by demonstration a robotic system using the tracking of the manipulation of the first virtual object, the initial state of the first virtual object, and the final state of the first virtual object.
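
Reduced to its core, programming by demonstration turns the tracked initial and final poses into a short command sequence; the (x, y, z, yaw) pose tuples and pick-and-place commands below are simplifying assumptions (real systems would use full 6-DoF transforms).

```python
def program_by_demonstration(tracked_poses):
    """Turn a tracked AR manipulation into a minimal robot program: the
    first tracked pose is the initial state (grasp), the last the final
    state (release)."""
    initial_pose, final_pose = tracked_poses[0], tracked_poses[-1]
    return [("move_to", initial_pose), ("grasp",),
            ("move_to", final_pose), ("release",)]

demo = [(0.10, 0.00, 0.05, 0.0),   # initial pose of the virtual object
        (0.10, 0.20, 0.15, 1.2),   # intermediate tracked pose
        (0.40, 0.30, 0.05, 1.2)]   # final pose
for command in program_by_demonstration(demo):
    print(command)
```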

APPLYING PREGENERATED VIRTUAL EXPERIENCES IN NEW LOCATION (17848087)

Main Inventor

Gal Dudovitch


Brief explanation

The abstract of this patent application describes a system for providing virtual experiences by modifying images of real-world environments. The system selects a virtual experience representing a previously captured real-world environment and accesses an image of a new real-world environment. It then allows users to select a real-world object from the image and modifies the image to depict the previously captured real-world environment with the selected object.
  • The system selects a virtual experience representing a previously captured real-world environment.
  • It accesses an image of a new real-world environment depicting multiple real-world objects.
  • Users can select a specific real-world object from the image.
  • The system modifies the image to show the previously captured real-world environment with the selected object.

Potential Applications

  • Virtual tourism: Users can experience different real-world environments and interact with them virtually.
  • Interior design: Users can visualize how different objects would look in their own spaces.
  • Gaming: The system can be used to create immersive gaming experiences by integrating virtual objects into real-world environments.

Problems Solved

  • Lack of interaction with real-world environments in virtual experiences.
  • Difficulty in visualizing how objects would look in different real-world environments.
  • Limited options for integrating virtual and real-world elements in gaming.

Benefits

  • Enhanced virtual experiences by combining real-world environments with virtual elements.
  • Improved visualization of objects in different real-world environments.
  • Increased immersion and interactivity in gaming experiences.

Abstract

Aspects of the present disclosure involve a system for providing virtual experiences. The system performs operations including selecting, by a messaging application, a virtual experience that represents a previously captured real-world environment at a first location; accessing an image representing a new real-world environment at a second location, the image depicting a plurality of real-world objects; receiving input that selects a first real-world object from the plurality of real-world objects depicted in the image; and modifying the image, accessed at the second location, based on the virtual experience to depict the previously captured real-world environment with the first real-world object.
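
Mask-based compositing is one simple reading of the modification step, sketched here; the arrays must share dimensions, and object_mask is a boolean (H, W) selection of the chosen real-world object.

```python
import numpy as np

def depict_with_selected_object(stored_environment, new_image, object_mask):
    """Modify the image from the new location so it depicts the
    previously captured environment together with the selected object:
    copy the masked object pixels over the stored environment."""
    out = stored_environment.copy()
    out[object_mask] = new_image[object_mask]
    return out
```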

INFERRING INTENT FROM POSE AND SPEECH INPUT (18243360)

Main Inventor

Matan Zohar


Brief explanation

The patent application describes a system and method for performing augmented reality (AR) operations based on the pose of a person depicted in an image and their speech input. Here are the key points:
  • The system includes a computer-readable storage medium and a program for processing the operations.
  • The method starts by receiving an image that shows a person.
  • The system then identifies the skeletal joints of the person in the image.
  • Based on the positioning of these skeletal joints, the system determines the pose of the person.
  • Next, the system receives speech input from the person, which includes a request to perform an AR operation and an ambiguous intent.
  • The system uses the pose of the person to discern the ambiguous intent of the speech input.
  • Finally, the system performs the AR operation based on the discerned intent and the pose of the person.

Potential applications of this technology:

  • AR gaming: The system can interpret the pose and speech input of a player to perform specific actions or trigger events in an AR game.
  • Virtual shopping: By understanding the intent of a person's speech input and their pose, the system can provide relevant AR product information or virtual try-on experiences.
  • Fitness and health: The system can analyze a person's pose and speech input to guide them through exercise routines or provide personalized health advice.

Problems solved by this technology:

  • Ambiguous speech input: By using the person's pose as context, the system can better understand the intended meaning of ambiguous speech input, improving the accuracy of AR operations.
  • Enhanced user experience: The system combines pose recognition and speech input to create a more intuitive and interactive AR experience, reducing the need for complex user interfaces.

Benefits of this technology:

  • Improved accuracy: By considering the pose of the person, the system can better discern the intended meaning of their speech input, leading to more accurate AR operations.
  • Enhanced user interaction: The combination of pose recognition and speech input allows for more natural and seamless interactions with AR systems.
  • Personalized experiences: The system can tailor AR operations based on the individual's pose and speech input, providing personalized and context-aware experiences.

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for performing operations comprising receiving an image that depicts a person, identifying a set of skeletal joints of the person and identifying a pose of the person depicted in the image based on positioning of the set of skeletal joints. The operations also include receiving speech input comprising a request to perform an AR operation and an ambiguous intent, discerning the ambiguous intent of the speech input based on the pose of the person depicted in the image and in response to receiving the speech input, performing the AR operation based on discerning the ambiguous intent of the speech input based on the pose of the person depicted in the image.
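
Disambiguation might look like the sketch below, where a pose label resolves a deictic word in the request; the pose labels and mapping are invented for illustration, and a real system would classify the pose from the skeletal joints first.

```python
def discern_intent(speech_request, pose_label):
    """Resolve an ambiguous spoken AR request using the person's pose."""
    pose_to_target = {
        "pointing_left": "the object on the left",
        "pointing_right": "the object on the right",
        "arms_raised": "the whole body",
    }
    if "this" in speech_request and pose_label in pose_to_target:
        return speech_request.replace("this", pose_to_target[pose_label])
    return speech_request  # unambiguous, or the pose adds no signal

print(discern_intent("add sparkles to this", "arms_raised"))
# -> "add sparkles to the whole body": the AR operation now has a target
```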

MOBILE COMPUTING DEVICE FOR USE IN CONTROLLING WIRELESSLY CONTROLLED VEHICLES (18367301)

Main Inventor

Ari Krupnik


Brief explanation

The patent application describes methods and systems for using a mobile computing device, such as a mobile phone, to control a model vehicle. 
  • The mobile computing device provides user controls that generate signals.
  • These signals are sent to a radio transmitter device connected to the mobile computing device.
  • The radio transmitter broadcasts the signals to a receiver on the model vehicle.
  • The mobile computing device can be integrated with a controller housing that has additional user controls.
  • The combination of controls on the mobile computing device and the controller housing allows for precise control of the model vehicle (a sketch of the signal path follows below).
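
A sketch of what the signal path might look like, assuming the radio transmitter is attached over a serial link and accepts a simple two-channel frame; the frame layout and sync byte are invented for illustration:

```python
import struct
import serial  # pyserial; a serial-attached transmitter is an assumption

def send_control_frame(tx: serial.Serial, throttle: float, steering: float) -> None:
    """Encode control positions in [-1.0, 1.0] from on-screen controls
    (or the controller housing) and hand them to the radio transmitter."""
    frame = struct.pack("<Bhh",
                        0xA5,                  # hypothetical sync byte
                        int(throttle * 1000),  # channel 1: throttle
                        int(steering * 1000))  # channel 2: steering
    tx.write(frame)

# Usage sketch: values would come from the touch UI or housing controls.
# tx = serial.Serial("/dev/ttyUSB0", 115200)
# send_control_frame(tx, throttle=0.4, steering=-0.1)
```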

Potential Applications

This technology can be used in various applications, including:

  • Remote-controlled model cars, boats, drones, and airplanes.
  • Educational toys and kits that teach children about remote control and technology.
  • Entertainment and gaming devices that enhance the user experience with realistic controls.
  • Research and development of autonomous vehicles, where remote control is necessary for testing and evaluation.

Problems Solved

This technology solves several problems related to controlling model vehicles:

  • Convenience: The use of a mobile computing device as a controller eliminates the need for separate, dedicated controllers.
  • Cost: Instead of purchasing expensive controllers, users can utilize their existing mobile devices.
  • Integration: The integration of user controls on the mobile device and the controller housing provides a more versatile and intuitive control interface.
  • Accessibility: Mobile devices are widely available and familiar to users, making this technology accessible to a larger audience.

Benefits

The use of a mobile computing device for controlling model vehicles offers several benefits:

  • Flexibility: Users can control their model vehicles from a distance, without being limited by the range of traditional controllers.
  • Portability: Mobile devices are compact and easy to carry, allowing users to control their model vehicles on the go.
  • Enhanced User Experience: The combination of user controls on the mobile device and the controller housing provides a more immersive and enjoyable experience.
  • Cost Savings: Users can save money by utilizing their existing mobile devices instead of purchasing separate controllers.

Abstract

Methods and systems for utilizing a mobile computing device (e.g., such as a mobile phone) for use in controlling a model vehicle are described. Consistent with some embodiments, a mobile computing device provides various user controls for generating signals that are communicated to a radio transmitter device coupled with the mobile computing device, and ultimately broadcast to a receiver residing at a model vehicle. With some embodiments, the mobile computing device may be integrated with a controller housing which provides separate user controls, such that a combination of user controls present on the mobile computing device and the controller housing can be used to control a model vehicle.

ADVANCED VIDEO EDITING TECHNIQUES USING SAMPLING PATTERNS (18243487)

Main Inventor

Nathan Kenneth Boyd


Brief explanation

The patent application describes a system and method for advanced video editing using sampling patterns. Here is a simplified explanation of the abstract:
  • The system allows a user to select a clip from a video and a sampling pattern.
  • The system determines the number of frames to sample from the clip for each interval of time over a specified length of time.
  • Different approaches can be used to determine the number of frames, such as a function, histogram, or definite integral corresponding to the pattern (see the sketch after this list).
  • The system extracts the determined number of frames from the clip and generates a new clip using these frames.
  • The new clip can be previewed and shared with other devices.
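
Of the approaches listed above, the definite-integral one is the easiest to sketch: treat the sampling pattern as a density over time and integrate it on each one-second interval to get that interval's frame budget. The pattern representation and frame rate below are assumptions:

```python
import numpy as np

def frames_per_interval(density, new_clip_seconds: int, fps: int = 30) -> list:
    """Convert a sampling pattern, given as a density function of time,
    into a per-second frame count via a definite integral per interval."""
    counts = []
    for t in range(new_clip_seconds):
        xs = np.linspace(t, t + 1, 50)
        weight = np.trapz(density(xs), xs)  # integral of the pattern on [t, t+1)
        counts.append(int(round(weight * fps)))
    return counts

# A ramp-shaped pattern: few frames sampled early, many sampled late.
print(frames_per_interval(lambda t: t / 10.0, new_clip_seconds=10))
```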

Potential applications of this technology:

  • Professional video editing software and tools
  • Social media platforms with video editing features
  • Online video sharing platforms
  • Video production companies and studios

Problems solved by this technology:

  • Simplifies the process of video editing by automating the selection and extraction of frames based on a sampling pattern.
  • Allows for more efficient editing by reducing the number of frames to be processed and analyzed.
  • Provides a consistent and reproducible method for sampling frames from a video clip.

Benefits of this technology:

  • Saves time and effort in video editing tasks.
  • Enables users to create visually appealing and engaging video clips.
  • Facilitates collaboration and sharing of edited video clips across different devices and platforms.

Abstract

Systems and methods provide for advanced video editing techniques using sampling patterns. In one example, a computing device can receive a selection of a clip of a video and a sampling pattern. The computing device can determine a respective number of frames to sample from the clip for each interval of time over a length of time for a new clip. For example, the computing device can determine a function corresponding to the pattern that relates time and the number of frames to sample, a histogram corresponding to the pattern, or a definite integral corresponding to the pattern, among other approaches. The computing device can extract these numbers of frames from the clip and generate the new clip from the extracted frames. The computing device can present the new clip as a preview and send the new clip to other computing devices.

MIXING PARTICIPANT AUDIO FROM MULTIPLE ROOMS WITHIN A VIRTUAL CONFERENCING SYSTEM (18462745)

Main Inventor

Andrew Cheng-min Lin


Brief explanation

The present disclosure describes a system and method for mixing participant audio from multiple rooms within a virtual conferencing system. The system includes a computer-readable storage medium storing a program that enables the mixing of audio from different rooms during virtual conferences.
  • The program provides a user interface for mixing participant audio from one or more second rooms into an audio channel for the first room.
  • The user can adjust the settings for mixing the audio from the second rooms through the user interface.
  • The program then mixes the participant audio from the second rooms with the audio channel for the first room during virtual conferencing (see the sketch below).
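
A minimal sketch of the mixing step, assuming the UI settings reduce to one gain per second room and that all buffers share a sample rate and length; both assumptions are mine, not the application's:

```python
import numpy as np

def mix_rooms(first_room: np.ndarray,
              second_rooms: dict,
              gains: dict) -> np.ndarray:
    """Mix participant audio from second rooms into the first room's
    audio channel, scaled by per-room gain settings."""
    mixed = first_room.astype(np.float32).copy()
    for room_id, audio in second_rooms.items():
        mixed += gains.get(room_id, 0.0) * audio.astype(np.float32)
    return np.clip(mixed, -1.0, 1.0)  # avoid clipping after summation
```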

Potential applications of this technology:

  • Virtual conferencing platforms: This technology can be used in virtual conferencing platforms to enhance the audio experience by allowing participants from different rooms to be mixed into a single audio channel.
  • Remote collaboration: It can facilitate remote collaboration by enabling participants from different locations to have a seamless audio experience during virtual meetings.
  • Webinars and online events: This technology can be utilized in webinars and online events to ensure that participants from different rooms can be heard clearly by all attendees.

Problems solved by this technology:

  • Audio mixing in virtual conferencing: This technology solves the problem of mixing participant audio from multiple rooms in a virtual conferencing system, ensuring that all participants can be heard clearly.
  • Seamless audio experience: It addresses the issue of inconsistent audio quality when participants from different rooms join a virtual conference, providing a seamless audio experience for all attendees.

Benefits of this technology:

  • Improved audio quality: By mixing participant audio from multiple rooms, this technology enhances the overall audio quality during virtual conferences.
  • Enhanced collaboration: It enables better collaboration by ensuring that all participants can hear each other clearly, regardless of their physical location.
  • User-friendly interface: The user interface provided by this technology makes it easy for users to adjust the settings for mixing participant audio, enhancing the user experience.

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for mixing participant audio from multiple rooms within a virtual conferencing system. The program and method provide, in association with designing a first room for virtual conferencing, display of a user interface for mixing participant audio from one or more second rooms into an audio channel for the first room; receive indication of user input via the user interface, the user input corresponding to settings for mixing the participant audio from the one or more second rooms; and provide, based on the settings and in association with virtual conferencing within the first room, for mixing the participant audio from one or more second rooms with respect to the audio channel for the first room.

MEDIA GALLERY SHARING AND MANAGEMENT (17852163)

Main Inventor

David James Kennedy


Brief explanation

The abstract describes a system for sharing and managing media galleries. Here is a simplified explanation:
  • The system allows users to share media galleries, which can include images, videos, or other media files.
  • When a user wants to share a media gallery, they send a request from their device.
  • The system generates metadata for the media gallery, which includes information about the content and the user who created it.
  • A message is created that includes the identifier of the media gallery and the identifier of the user's avatar (illustrated in the sketch after this list).
  • The message is then sent to the recipient user's device.
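
The message could be as simple as the sketch below; the field names and metadata keys are illustrative assumptions, not Snap's actual schema:

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class GalleryShareMessage:
    gallery_id: str   # identifier of the media gallery
    avatar_id: str    # identifier of the creator's avatar
    metadata: dict = field(default_factory=dict)

def build_share_message(gallery_id: str, avatar_id: str, creator: str) -> GalleryShareMessage:
    # Metadata describes the content and the user who created it.
    meta = {"creator": creator,
            "created_at": time.time(),
            "message_id": str(uuid.uuid4())}
    return GalleryShareMessage(gallery_id, avatar_id, meta)
```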

Potential applications of this technology:

  • Social media platforms can use this system to allow users to easily share and manage their media galleries with friends and followers.
  • Online photo sharing platforms can utilize this system to enable users to share their photo albums with specific individuals or groups.
  • Content management systems can incorporate this system to facilitate the sharing and organization of media files within a team or organization.

Problems solved by this technology:

  • Simplifies the process of sharing media galleries by generating metadata and creating a message with the necessary identifiers.
  • Provides a standardized method for sharing media galleries across different devices and platforms.
  • Enhances the user experience by allowing users to easily share and manage their media content.

Benefits of this technology:

  • Streamlines the sharing and management of media galleries, saving users time and effort.
  • Enables users to maintain control over their media content by specifying who can access it.
  • Enhances collaboration and communication by providing a convenient way to share media files within a group or team.

Abstract

Various embodiments include systems, methods, and non-transitory computer-readable media for sharing and managing media galleries. Consistent with these embodiments, a method includes receiving a request from a first device to share a media gallery that includes a user avatar; generating metadata associated with the media gallery; generating a message associated with the media gallery, the message at least including the media gallery identifier and the identifier of the user avatar; and transmitting the message to a second device of the recipient user.

VIRTUAL SELFIE STICK (17851448)

Main Inventor

Kai Zhou


Brief explanation

The abstract describes a method for generating a virtual selfie stick image. The method involves using the optical sensor of a device to capture an original self-portrait image of a user's face. The user is then guided to move the device at arm's length around their face within a limited range at various poses. The image data captured by the optical sensor at these poses is accessed and used to generate a virtual selfie stick self-portrait image based on the original image.
  • The method involves using the optical sensor of a device to capture a self-portrait image.
  • The user is guided to move the device at arm's length around their face within a limited range at various poses.
  • Image data captured by the optical sensor at these poses is used to generate a virtual selfie stick self-portrait image.
  • The virtual selfie stick image is based on the original self-portrait image and the image data captured during the poses (see the sketch below).
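
The application does not say how the per-pose captures are combined; one plausible sketch uses off-the-shelf panorama stitching to widen the field of view, as if the camera had been held on a stick:

```python
import cv2

def virtual_selfie_stick(frames: list):
    """Combine the original self-portrait and the frames captured at the
    guided poses into one wider-field image (stitching is an assumption)."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```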

Potential Applications

  • Selfie apps and camera features on smartphones and other devices.
  • Social media platforms and photo-sharing apps.
  • Virtual reality and augmented reality applications.

Problems Solved

  • Eliminates the need for a physical selfie stick, providing a virtual alternative.
  • Allows users to capture self-portrait images from various angles and poses without assistance.
  • Provides a more immersive and interactive selfie experience.

Benefits

  • Convenience and portability as no physical selfie stick is required.
  • Increased flexibility in capturing self-portrait images from different angles and poses.
  • Enhanced user experience with interactive guidance and virtual selfie stick functionality.

Abstract

A method for generating a virtual selfie stick image is described. In one aspect, the method includes generating, at a device, an original self-portrait image with an optical sensor of the device, the optical sensor directed at a face of a user of the device, the device being held at an arm length from the face of the user, displaying, on a display of the device, an instruction guiding the user to move the device at the arm length about the face of the user within a limited range at a plurality of poses, accessing, at the device, image data generated by the optical sensor at the plurality of poses, and generating a virtual selfie stick self-portrait image based on the original self-portrait image and the image data.

PERSONALIZED VIDEOS (18242016)

Main Inventor

Victor Shaburov


Brief explanation

The patent application describes systems and methods for creating personalized videos. Here are the key points:
  • The method involves receiving preprocessed videos with a target face and facial expression parameters.
  • The preprocessed videos are modified by replacing the target face with a source face that adopts the facial expression parameters of the target face.
  • A user interface is provided for sharing the personalized videos with other users.
  • If the application used for sharing does not support auto-play of videos, the personalized video is exported as an image file.
  • The image file can then be shared via the application (see the sketch below).
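
The image-file fallback could look like the sketch below, which writes the frames as an animated GIF; the application only says "an image file", so the GIF format and the Pillow-based export are assumptions:

```python
from PIL import Image

def export_as_image(frames: list, path: str = "personalized.gif") -> str:
    """Fallback for apps that do not auto-play video: save the
    personalized frames as an animated GIF instead."""
    frames[0].save(path, save_all=True, append_images=frames[1:],
                   duration=33, loop=0)  # ~30 fps, looping
    return path
```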

Potential applications of this technology:

  • Social media platforms could use this technology to allow users to create and share personalized videos with their friends and followers.
  • Messaging apps could integrate this technology to enable users to send personalized videos to their contacts.
  • E-commerce platforms could use personalized videos to enhance product demonstrations and marketing campaigns.

Problems solved by this technology:

  • This technology allows users to easily create personalized videos by replacing the target face with their own or someone else's face.
  • It addresses the issue of auto-play restrictions on certain applications by providing an alternative method of sharing the personalized videos as image files.

Benefits of this technology:

  • Users can create fun and engaging personalized videos by replacing faces in existing videos.
  • The ability to export personalized videos as image files allows for easy sharing on platforms that do not support video auto-play.
  • This technology enhances user creativity and personalization in video sharing.

Abstract

Systems and methods for providing personalized videos are provided. An example method includes receiving preprocessed videos including a target face and facial expression parameters of the target face, modifying the preprocessed videos to generate one or more personalized videos by replacing the target face with a source face, where the source face is modified to adopt the facial expression parameters of the target face, providing a user interface enabling a user to share at least one personalized video of the one or more personalized videos with a further user of a further computing device, determining that an application to be used to share the personalized video does not allow auto-play of the personalized video in a video format, in response to the determination, exporting the personalized video of the one or more personalized videos into an image file, and sharing the image file via the application.

WEARABLE DEVICE LOCATION SYSTEMS (18242392)

Main Inventor

Yu Jiang Tham


Brief explanation

The patent application describes methods and systems for improving location management in wearable electronic devices, reducing the time it takes the device to determine its location while operating at low power.

  • The invention involves low-power circuitry that manages high-speed circuitry and location circuitry in order to provide location assistance data automatically when location fix operations are initiated.
  • The high-speed circuitry and location circuitry are booted from low-power states, and the high-speed circuitry captures content associated with the initiation of the location fix.
  • In some cases, the high-speed circuitry is returned to a low-power state after capturing the content but before the location fix is completed.
  • After the location fix is completed, the high-speed circuitry is booted again to update the location data associated with the content (the sketch below walks through this sequence).
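
The boot/sleep sequencing in the bullets above might be orchestrated roughly as follows; the circuit objects and their boot()/sleep() methods are hypothetical stand-ins for the hardware interfaces:

```python
class LocationFixController:
    """Sketch of the sequencing described above; all interfaces invented."""

    def __init__(self, low_power, high_speed, gps):
        self.low_power, self.high_speed, self.gps = low_power, high_speed, gps

    def on_fix_requested(self):
        # Boot both subsystems out of their low-power states.
        self.high_speed.boot()
        self.gps.boot()
        # Assistance data (e.g. cached ephemeris) flows to the
        # low-power side automatically at initiation of the fix.
        self.low_power.assistance_data = self.high_speed.assistance_data()
        # Capture the content tied to this fix request...
        content = self.high_speed.capture_content()
        # ...then put the power-hungry circuitry back to sleep while
        # the location circuitry finishes the fix.
        self.high_speed.sleep()
        fix = self.gps.wait_for_fix()
        # Boot again briefly to stamp the content with the fix.
        self.high_speed.boot()
        self.high_speed.tag(content, fix)
        self.high_speed.sleep()
```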

Potential Applications

  • Wearable electronic devices such as smartwatches or fitness trackers could benefit from this technology to improve their location tracking capabilities.
  • This technology could be used in navigation systems to provide faster and more accurate location information.
  • Emergency response systems could utilize this technology to quickly determine the location of individuals in need of assistance.

Problems Solved

  • The technology addresses the problem of slow location fix operations in wearable electronic devices.
  • It solves the issue of high power consumption during location tracking, which can drain the device's battery quickly.
  • The invention also tackles the challenge of capturing content associated with the initiation of the location fix while operating on low power.

Benefits

  • The innovation reduces the time it takes for wearable electronic devices to determine their location, improving user experience.
  • By managing high-speed circuitry and location circuitry efficiently, the technology enables low-power operations, extending the device's battery life.
  • The ability to capture content associated with the initiation of the location fix while operating on low power allows for more comprehensive location data.

Abstract

Systems, methods, devices, computer readable media, and other various embodiments are described for location management processes in wearable electronic devices. Performance of such devices is improved with reduced time to first fix of location operations in conjunction with low-power operations. In one embodiment, low-power circuitry manages high-speed circuitry and location circuitry to provide location assistance data from the high-speed circuitry to the low-power circuitry automatically on initiation of location fix operations as the high-speed circuitry and location circuitry are booted from low-power states. In some embodiments, the high-speed circuitry is returned to a low-power state prior to completion of a location fix and after capture of content associated with initiation of the location fix. In some embodiments, high-speed circuitry is booted after completion of a location fix to update location data associated with content.