Google LLC patent applications published on November 9th, 2023

From WikiPatents
 
Kenneth Mixter
  
 
'''Brief explanation'''
 
The patent application describes a device that records sound, movement, and ambient conditions during a user's sleep session. It analyzes this data to determine the user's sleep state, including sleep quality and sleep phase. The device also identifies any sleep disturbances that occur during the session. After the session, the device presents a sleep summary on its screen, showing visual indications of the sleep quality and disturbances.
 
 
* Device records sound, movement, and ambient conditions during user's sleep session
 
* Analyzes data to determine user's sleep state, including sleep quality and sleep phase
 
* Identifies any sleep disturbances that occur during the session
 
* Presents a sleep summary on the device's screen after the session
 
* Sleep summary includes visual indications of sleep quality and disturbances
 
 
'''Abstract'''
 
During a sleep session of a user of a display assistant device, the device records sound, movement, and ambient conditions in proximity to the device. The ambient conditions include a light level. The device analyzes the recorded sound and movement to identify throughout the sleep session of the user a time-varying sleep state of the user. The sleep state is characterized by a sleep quality and a sleep phase. The device also analyzes the recorded ambient conditions and the recorded sound throughout the sleep session to identify a plurality of time-varying sleep disturbances occurring during the sleep session of the user. After the sleep session of the user has concluded, the device presents on a screen of the device a sleep summary of the sleep session. The sleep summary includes visual indications of the sleep quality and disturbances identified throughout the sleep session.
 
  
 
===LOW RESIDUAL LAYER THICKNESS WAVEGUIDE WITH HIGH-INDEX COATING ([[US Patent Application 18141674. LOW RESIDUAL LAYER THICKNESS WAVEGUIDE WITH HIGH-INDEX COATING simplified abstract|18141674]])===
 
Eliezer Glik
  
 
'''Brief explanation'''
 
The patent application describes a method to improve the field of view and color bandwidth of a waveguide without using high-index resin.
 
 
* Waveguide grating is formed on a substrate with a low-index resin and a high refractive index conformal coating.
 
* Nano imprint lithography is used to imprint the substrate with the resin, minimizing the thickness of the resin layer.
 
* The conformal coating uniformly coats the surface with a consistent thickness, conforming to its geometry.
 
 
'''Abstract'''
 
To increase the field of view and/or the wavelengths of light (i.e., color bandwidth) transmitted by a waveguide without using high-index resin, some embodiments include a waveguide grating formed on a substrate that is imprinted with a relatively low-index resin and a conformal coating having a relatively high refractive index. In some embodiments, the substrate is imprinted with the resin using nano imprint lithography and the residual layer thickness of the resin layer is minimized. The conformal coating conforms to the geometry of the surface it coats with a substantially uniform thickness.
 
  
 
===MANAGING DISPLAY CONTENT ON A WEARABLE DEVICE USING A CONVERSATION GRAPH ([[US Patent Application 18246448. MANAGING DISPLAY CONTENT ON A WEARABLE DEVICE USING A CONVERSATION GRAPH simplified abstract|18246448]])===
 
Alexander James Faaborg
  
 
'''Brief explanation'''
 
The patent application describes a method for a wearable device to detect the facial features of a person and analyze the user's interaction with that entity.

* The wearable device uses imaging sensors to capture images and detect interactive communication between the user and the entity.

* A conversation graph is updated based on the detected interaction, representing the flow and context of the conversation.

* Content displayed on the wearable device is managed based on the conversation graph, ensuring relevant information is shown to the user.
 
 
'''Abstract'''
 
According to an aspect, a method includes detecting, by at least one imaging sensor of a wearable device, facial features of an entity, detecting an interactive communication between a user of the wearable device and the entity based on at least image data from the at least one imaging sensor, updating a conversation graph in response to the interactive communication being detected between the user and the entity, and managing content for display on the wearable device based on the conversation graph.
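As an illustrative sketch only (the filing does not specify a data structure at this level), a conversation graph of the kind described above can be as simple as a weighted graph whose edge weights count detected interactions; the class and method names here are hypothetical:

```python
from collections import defaultdict

class ConversationGraph:
    """Toy conversation graph: nodes are people, edge weights count
    detected interactive communications (illustrative structure only)."""

    def __init__(self):
        self.edges = defaultdict(lambda: defaultdict(int))

    def record_interaction(self, user: str, entity: str) -> None:
        # Update the graph when the wearable detects a conversation.
        self.edges[user][entity] += 1
        self.edges[entity][user] += 1

    def ranked_contacts(self, user: str):
        # Used to prioritize which content to surface on the display.
        contacts = self.edges[user]
        return sorted(contacts, key=contacts.get, reverse=True)

graph = ConversationGraph()
graph.record_interaction("user", "alice")
graph.record_interaction("user", "alice")
graph.record_interaction("user", "bob")
priority = graph.ranked_contacts("user")  # "alice" first: more interactions
```

Content management on the device could then consult `ranked_contacts` to decide whose messages or notifications to display first.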
 
  
 
===ALGORITHMICALLY ADJUSTING THE HIT BOX OF ICONS BASED ON PRIOR GAZE AND CLICK INFORMATION ([[US Patent Application 17662175. ALGORITHMICALLY ADJUSTING THE HIT BOX OF ICONS BASED ON PRIOR GAZE AND CLICK INFORMATION simplified abstract|17662175]])===
 
Dongeek Shin
  
 
'''Brief explanation'''
 
The patent application describes a method for interacting with objects on a wearable device based on historical user data and eye tracking.
 
 
* The method involves analyzing user data related to past events on the wearable device.
 
* Based on this historical data, the method calculates the likelihood of the user interacting with an object displayed on the device.
 
* A hitbox, which represents the area around the object that can be interacted with, is then adjusted in size according to the calculated probability.
 
* The method uses eye tracking technology to detect when the user's gaze falls within the adjusted hitbox.
 
* If the user's gaze is detected within the hitbox, the method triggers an action corresponding to the object being interacted with.
 
 
Overall, this patent application introduces a method that enhances user interaction with objects on a wearable device by leveraging historical data and eye tracking technology.
 
 
'''Abstract'''
 
A method including determining historical user data associated with an event occurring on a wearable device, determining a probability of interacting with an object on a display of the wearable device based on the historical user data, scaling a hitbox associated with the object to form a scaled hitbox, detecting a user input based on an eye tracking being within the scaled hitbox, and in response to detecting the user input, initiating an action corresponding to the object.
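A minimal sketch of the hitbox-scaling step, assuming a rectangular hitbox and a simple linear scaling rule (the scaling function, data structures, and `max_scale` parameter are illustrative, not from the filing):

```python
from dataclasses import dataclass

@dataclass
class Hitbox:
    x: float       # center x
    y: float       # center y
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (abs(px - self.x) <= self.width / 2 and
                abs(py - self.y) <= self.height / 2)

def scale_hitbox(hitbox: Hitbox, probability: float, max_scale: float = 2.0) -> Hitbox:
    """Grow the hitbox in proportion to the interaction probability.

    A probability of 0 leaves the box unchanged; a probability of 1
    scales it by max_scale. (Illustrative rule only.)
    """
    factor = 1.0 + (max_scale - 1.0) * probability
    return Hitbox(hitbox.x, hitbox.y, hitbox.width * factor, hitbox.height * factor)

# Gaze lands just outside the unscaled box but inside the scaled one.
box = Hitbox(x=0.0, y=0.0, width=10.0, height=10.0)
scaled = scale_hitbox(box, probability=0.8)   # factor = 1.8 -> 18x18 box
gaze = (7.0, 0.0)
if scaled.contains(*gaze):
    action = "open_icon"   # trigger the action tied to the object
```

The effect is that frequently used icons become easier to hit with gaze input, while rarely used ones keep their original target area.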
 
  
 
===TRACKING ALGORITHM FOR CONTINUOUS AR EXPERIENCES ([[US Patent Application 18312448. TRACKING ALGORITHM FOR CONTINUOUS AR EXPERIENCES simplified abstract|18312448]])===
 
Luca Ballan
  
 
'''Brief explanation'''
 
The patent application describes a tracking system and algorithms for providing a continuous augmented reality (AR) experience without the need for resetting.
 
* The system uses an AR headset with a camera and an inertial measurement unit (IMU) to track the position and orientation of the headset in its environment.
 
* Motion sensor data from the IMU is combined with image data from the camera to create a device pose.
 
* When a reset occurs, a six-degrees-of-freedom (6DoF) algorithm supports the pose until re-initialization is completed.
 
* A neural network is used to correct for IMU integration drifts in the 6DoF algorithm.
 
* The IMU-based 6DoF algorithm utilizes the device's past motion to predict its future motion.
 
 
'''Abstract'''
 
A tracking system and associated algorithms are disclosed that can provide a user with a continuous, reset-free augmented reality (AR) experience. When the user wears an AR headset equipped with a camera and an inertial measurement unit (IMU), motion sensor data from the IMU can be combined with image data from the camera to create a device pose, representing a position and an orientation of the headset relative to its environment. In some implementations, when a reset occurs, a six-degrees-of-freedom (6DoF) algorithm can be configured to support the pose until a re-initialization is completed. In some implementations, a neural network can be used to correct for IMU integration drifts in the 6DoF algorithm. In some implementations, the IMU-based 6DoF uses a neural network that exploits the device's past motion to infer its future motion.
 
  
 
===Efficiently Augmenting Images with Related Content ([[US Patent Application 18354101. Efficiently Augmenting Images with Related Content simplified abstract|18354101]])===
 
Charles Yang
  
 
'''Brief explanation'''
 
This patent application is about a system that provides content related to text depicted in images.

* The system includes a data processing apparatus that can extract text from an image.

* The extracted text is divided into multiple blocks and presented as selectable targets on a user interface at a certain zoom level.

* When a user selects a block of text, the system detects the selection and presents portions of the text within that block as selectable targets at a higher zoom level.

* If the user selects a portion of the text within the block, an action is initiated based on the content of the selected text.
 
 
'''Abstract'''
 
The subject matter of this specification generally relates to providing content related to text depicted in images. In one aspect, a system includes a data processing apparatus configured to extract text from an image. The extracted text is partitioned into multiple blocks. The multiple blocks are presented as respective first user-selectable targets on a user interface at a first zoom level. A user selection of a first block of the multiple blocks is detected. In response to detecting the user selection of the first block, portions of the extracted text in the first block are presented as respective second user-selectable targets on the user interface at a second zoom level greater than the first zoom level. In response to detecting a user selection of a portion of the extracted text within the first block, an action is initiated based on content of the user-selected text.
 
  
 
===Methods and Systems for Positioning Animated Images Within a Dynamic Keyboard Interface ([[US Patent Application 18351890. Methods and Systems for Positioning Animated Images Within a Dynamic Keyboard Interface simplified abstract|18351890]])===
 
David McIntosh
  
 
'''Brief explanation'''
 
The patent application is about positioning animated images within a dynamic keyboard interface.
 
* The methods and systems described in the application can receive data indicating the selection of a specific animated image from a variety of animated images presented by a dynamic keyboard interface.
 
* The application can also receive data indicating the context of the dynamic keyboard interface or the associated application, based on which the animated image was selected.
 
* Based on the selection data and context data, the application can determine the position within the dynamic keyboard interface to present the selected animated image.
 
* This positioning is done in response to data indicating a subsequent context of the dynamic keyboard interface, the application, or a different application.
 
 
'''Abstract'''
 
The present disclosure is directed to positioning animated images within a dynamic keyboard interface. In particular, the methods and systems of the present disclosure can: receive data indicating a selection of a particular animated image from amongst a plurality of different animated images presented by a dynamic keyboard interface provided in association with an application; receive data indicating a context of: the dynamic keyboard interface, and/or the application based at least in part on which the plurality of different animated images was selected for presentation by the dynamic keyboard interface; and determine, based at least in part on the data indicating the selection and the data indicating the context, a position within the dynamic keyboard interface for presenting the particular animated image in response to data indicating a subsequent context of the dynamic keyboard interface, the application, and/or a different and distinct application.
 
  
 
===Independent Fragment Compactions of Striped Data ([[US Patent Application 17662547. Independent Fragment Compactions of Striped Data simplified abstract|17662547]])===
 
Michael Lai
  
 
'''Brief explanation'''
 
The patent application describes a method for compacting data by storing files at different datacenters and generating a parity file.

* The method involves storing a first set of files at one datacenter and a second set of files at another datacenter.

* A parity file is created that includes parity calculations over the files from both sets, and this parity file is stored at a third datacenter.

* If a request is received to delete a file from the first set, the parity file is compacted in response.

* The method then checks whether a data compaction cost threshold is met, and if so, the first set of files at the first datacenter is compacted.

* The goal of this method is to store and manage data efficiently by compacting files and reducing storage costs.
 
 
'''Abstract'''
 
A method for compacting data includes storing a first plurality of files at a first datacenter and storing a second plurality of files at a second datacenter. The method also includes generating a parity file that includes parity calculations of the first plurality of files and the second plurality of files. The method includes storing the parity file at a third datacenter. The method also includes receiving a request to delete a first file of the first plurality of files stored at the first datacenter and, in response to the request to delete the first file, compacting the parity file stored at the third datacenter. After compacting the parity file, the method includes determining whether a data compaction cost threshold is satisfied. When the data compaction cost threshold is satisfied, the method includes compacting the first plurality of files stored at the first datacenter.
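As a concrete illustration of the parity idea, here is a sketch using XOR parity, a common parity scheme; the filing does not specify the parity calculation at this level, so the scheme and variable names are assumptions:

```python
def xor_parity(*blocks: bytes) -> bytes:
    """XOR equal-or-shorter blocks into a single parity block."""
    n = max(len(b) for b in blocks)
    out = bytearray(n)
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Files striped across two datacenters, parity stored at a third.
dc1_files = {"a": b"\x0f\x0f", "b": b"\xf0\xf0"}
dc2_files = {"c": b"\xff\x00"}
parity = xor_parity(*dc1_files.values(), *dc2_files.values())

# Delete file "a": compact the parity first (XOR-ing a file out of an
# XOR parity removes its contribution); the first datacenter's files
# are compacted later, once a cost threshold is met.
deleted = dc1_files.pop("a")
parity = xor_parity(parity, deleted)
assert parity == xor_parity(*dc1_files.values(), *dc2_files.values())
```

The key property shown is that the parity file can be compacted independently of the datacenter that held the deleted file, which is what lets the two compactions be deferred and costed separately.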
 
  
 
===SORTING FOR DATA-PARALLEL COMPUTING DEVICES ([[US Patent Application 18221506. SORTING FOR DATA-PARALLEL COMPUTING DEVICES simplified abstract|18221506]])===
 
Allan Stuart Mackinnon, JR.
  
 
'''Brief explanation'''
 
The patent application is about determining relevant content in response to a request for information.
 
* Computing devices load data elements into registers associated with parallel processors.
 
* The data elements in each register are sorted in parallel in descending order.
 
* The sorted data elements from each processor are merged with the sorted data elements from other processors.
 
* The merged and sorted data elements are transposed and stored.
 
 
'''Abstract'''
 
Aspects of the disclosure relate to determining relevant content in response to a request for information. One or more computing devices  may load data elements into registers A-B, wherein each register is associated with at least one parallel processor in a group of parallel processors A-B. For each of the parallel processors, the data elements loaded in its associated registers may be sorted, in parallel, in descending order. The sorted data elements, for each of the parallel processors, may be merged with the sorted data elements of other processors in the group. The merged and sorted data elements may be transposed and stored.
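The sort-merge-transpose pipeline above can be sketched serially in a few lines; on the actual hardware each register bank sorts in parallel, and the shapes here are illustrative:

```python
import heapq

def sort_and_merge(register_banks):
    """Sort each bank's elements in descending order (done in parallel
    per processor on real hardware), then merge the sorted banks."""
    sorted_banks = [sorted(bank, reverse=True) for bank in register_banks]
    # Merge descending runs into one descending sequence.
    merged = list(heapq.merge(*sorted_banks, reverse=True))
    return sorted_banks, merged

def transpose(rows):
    # Column-major rearrangement before storing.
    return [list(col) for col in zip(*rows)]

banks = [[3, 9, 1], [8, 2, 7]]           # registers A-B for processors A-B
sorted_banks, merged = sort_and_merge(banks)
stored = transpose(sorted_banks)          # columns pair elements of equal rank
```

Transposing the per-processor sorted runs groups elements of the same rank together, which is a memory layout that data-parallel hardware can store with coalesced writes.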
 
  
 
===Transferral Of Process State And/Or Components In Computing Environments ([[US Patent Application 18355826. Transferral Of Process State And/Or Components In Computing Environments simplified abstract|18355826]])===
 
Christopher Jonathan Phoenix
  
 
'''Brief explanation'''
 
The patent application is about a technology that allows the transfer of state information between processes or software programs in a computing environment.
 
* State information refers to the current status or data of a process or software program.
 
* The technology enables a new instance of a process or software program to receive the state information, even if the original instance that owned the state information has terminated.
 
* The termination of the original instance can be either natural (normal termination) or unnatural (abrupt termination).
 
* This technology ensures that the state information is not lost and can be seamlessly transferred to a new instance.
 
* It improves the continuity and efficiency of processes or software programs in a computing environment.
 
 
'''Abstract'''
 
This technology relates to transferring state information between processes or active software programs in a computing environment where a new instance of a process or software program may receive such state information even after an original or old instance of the process or software program that owned the state information has terminated either naturally or unnaturally.
 
  
 
===User Triggered Virtual Machine Cloning for Recovery/Availability/Scaling ([[US Patent Application 17737305. User Triggered Virtual Machine Cloning for Recovery/Availability/Scaling simplified abstract|17737305]])===
 
Diwakar Gupta
  
 
'''Brief explanation'''
 
The patent application describes a method for cloning virtual machines during live migration.
 
* Cloning involves creating new copies of a virtual machine while the original virtual machine remains operational.
 
* The new copies retain the processing state, memory state, and local storage state of the original virtual machine.
 
* Each new copy is distinguished from the others and the original by having different attributes.
 
* This approach allows for efficient and seamless migration of virtual machines without interrupting their operation.
 
 
'''Abstract'''
 
Generally disclosed herein is an approach for cloning virtual machines during live migration where one or more new copies of a virtual machine can be created while the original virtual machine continues to run. The new copies can preserve a processing state, memory state, and local storage state. The new copies can also be distinguished from each other and the original by including different attributes for each copy.
 
  
 
===Maintaining Transactional Consistency in Columnar Engine ([[US Patent Application 17951193. Maintaining Transactional Consistency in Columnar Engine simplified abstract|17951193]])===
 
Anjan Kumar Amirishetty
  
 
'''Brief explanation'''
 
The patent application is about maintaining transaction consistency when using a columnar cache.

* The columnar cache is initially loaded with all-visible data.

* As data is modified, the corresponding data in the columnar cache is invalidated.

* Invalidated data in the columnar cache is refreshed as more data gets invalidated.

* The latest all-visible data is populated in the columnar cache while queries are still using the old data.

* Once all queries transition to using the newly populated data, the old data is removed from the columnar cache.

* A query can use valid blocks of the columnar cache and go to a row store for invalid blocks.

* When a query starts using the columnar cache, a request is submitted to asynchronously prefetch the invalid blocks from the row store.
 
 
'''Abstract'''
 
Aspects of the disclosure are directed to maintaining transaction consistency when using a columnar cache. The columnar cache can be initially loaded with all-visible data, and as the data gets modified, respective data is invalidated in the columnar cache. As more data gets invalidated in the columnar cache, respective data can be refreshed in the columnar cache. As part of the refresh, the latest all-visible data can be populated while the queries are still using the old data in the columnar cache. When all the queries transition to use the newly populated data, old data can be removed from the columnar cache. A query can use valid blocks of columnar cache and go to a row store for invalid blocks. When a query starts to use the columnar cache, a request can be submitted to asynchronously prefetch the invalid blocks from the row store.
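The valid-block/fallback behavior can be sketched as follows, assuming a toy per-block validity map (the block granularity, names, and prefetch mechanism here are illustrative, not the engine's actual design):

```python
class ColumnarCache:
    """Toy columnar cache with per-block validity (illustrative only)."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)            # block_id -> column data
        self.valid = {b: True for b in blocks}

    def invalidate(self, block_id):
        # Called when the underlying rows are modified.
        self.valid[block_id] = False

def run_query(cache, row_store, block_ids):
    """Serve valid blocks from the cache; fall back to the row store
    for invalid ones (the real system would also asynchronously
    prefetch the misses back into the cache)."""
    result, misses = [], []
    for b in block_ids:
        if cache.valid.get(b):
            result.extend(cache.blocks[b])
        else:
            result.extend(row_store[b])
            misses.append(b)          # candidates for async prefetch
    return result, misses

row_store = {"blk0": [1, 2], "blk1": [30, 40]}
cache = ColumnarCache({"blk0": [1, 2], "blk1": [3, 4]})
cache.invalidate("blk1")              # rows in blk1 were updated
rows, prefetch = run_query(cache, row_store, ["blk0", "blk1"])
```

Queries stay consistent because an invalidated block is never served from the cache; the row store remains the source of truth until the block is refreshed.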
 
  
 
===Systems and Methods for Anonymizing Large Scale Datasets ([[US Patent Application 18345657. Systems and Methods for Anonymizing Large Scale Datasets simplified abstract|18345657]])===
 
Alessandro Epasto
  
 
'''Brief explanation'''
 
The abstract describes a computer-implemented method for anonymizing a dataset to protect privacy.
 
* The method involves obtaining a dataset with data about multiple entities and at least one data item for each entity.
 
* The entities are clustered into groups called entity clusters.
 
* A majority condition is determined for each entity cluster, indicating that a data item is associated with a majority of the entities in the cluster.
 
* The data item is then assigned to the entities in an anonymized dataset based on the majority condition.
 
 
This method aims to provide privacy guarantees for all columns in the dataset by anonymizing the data items and ensuring that they are assigned to the entities in a way that protects their identities.
 
 
'''Abstract'''
 
A computer-implemented method for k-anonymizing a dataset to provide privacy guarantees for all columns in the dataset can include obtaining, by a computing system including one or more computing devices, a dataset comprising data indicative of a plurality of entities and at least one data item respective to at least one of the plurality of entities. The computer-implemented method can include clustering, by the computing system, the plurality of entities into at least one entity cluster. The computer-implemented method can include determining, by the computing system, a majority condition for the at least one entity cluster, the majority condition indicating that the at least one data item is respective to at least a majority of the plurality of entities. The computer-implemented method can include assigning, by the computing system, the at least one data item to the plurality of entities in an anonymized dataset based at least in part on the majority condition.
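A simplified reading of the majority condition can be sketched as below; the clustering is taken as given, and the strict-majority threshold and function names are assumptions for illustration:

```python
from collections import Counter

def anonymize(entity_items, clusters, threshold=0.5):
    """Assign an item to every entity in a cluster only if a strict
    majority of the cluster's entities already has it. Items held by
    a minority are dropped, so no entity's data is individually
    identifiable within its cluster. (Simplified sketch.)"""
    anonymized = {}
    for cluster in clusters:
        counts = Counter()
        for entity in cluster:
            counts.update(entity_items.get(entity, set()))
        majority_items = {item for item, c in counts.items()
                          if c > threshold * len(cluster)}
        for entity in cluster:
            anonymized[entity] = set(majority_items)
    return anonymized

items = {"e1": {"x"}, "e2": {"x", "y"}, "e3": {"x"}}
out = anonymize(items, clusters=[["e1", "e2", "e3"]])
# "x" is held by all three entities and is kept; "y" is held by only
# one entity, so it is dropped from the anonymized dataset.
```

Because every entity in a cluster ends up with the same item set, any row in the anonymized dataset is indistinguishable from at least the other members of its cluster.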
 
  
 
===ON-DEVICE GRAMMAR CHECKING ([[US Patent Application 18246326. ON-DEVICE GRAMMAR CHECKING simplified abstract|18246326]])===
 
Matthew Sharifi
  
 
'''Brief explanation'''
 
This patent application describes a computing device that can perform on-device grammar checking of inputted text.

* The device uses one or more neural networks to determine a grammatically correct version of a sequence of words in the inputted text.

* If the sequence of words does not match the grammatically correct version, the device suggests a replacement for the incorrect sequence.

* The suggested replacement is displayed on a display device for the user to see.
 
 
'''Abstract'''
 
A computing device may receive inputted text and perform, using one or more neural networks, on-device grammar checking of a sequence of words in the inputted text, including determining, using the one or more neural networks, a grammatically correct version of the sequence of words and determining that the sequence of words does not match the grammatically correct version of the sequence of words. The computing device may, in response to determining that the sequence of words does not match the grammatically correct version of the sequence of words, output, for display at a display device, at least a portion of the grammatically correct version of the sequence of words as a suggested replacement for at least a sequence of the sequence of words in the inputted text.
 
  
 
===Systems and Methods for Machine-Learned Models Having Convolution and Attention ([[US Patent Application 18355243. Systems and Methods for Machine-Learned Models Having Convolution and Attention simplified abstract|18355243]])===
 
Zihang Dai
  
 
'''Brief explanation'''
 
The patent application describes a method for computer vision that reduces computational cost and improves accuracy.
 
* The method involves using a machine-learned convolutional attention network to process input data.
 
* The convolutional attention network consists of multiple stages and includes at least one attention block.
 
* The attention block uses a relative attention mechanism, which combines a static convolution kernel with an adaptive attention matrix.
 
* This approach improves the generalization, capacity, and efficiency of the convolutional attention network compared to existing models.
 
 
'''Abstract'''
 
A computer-implemented method for performing computer vision with reduced computational cost and improved accuracy can include obtaining, by a computing system including one or more computing devices, input data comprising an input tensor having one or more dimensions, providing, by the computing system, the input data to a machine-learned convolutional attention network, the machine-learned convolutional attention network including two or more network stages, and, in response to providing the input data to the machine-learned convolutional attention network, receiving, by the computing system, a machine-learning prediction from the machine-learned convolutional attention network. The convolutional attention network can include at least one attention block, wherein the attention block includes a relative attention mechanism, the relative attention mechanism including the sum of a static convolution kernel with an adaptive attention matrix. This provides for improved generalization, capacity, and efficiency of the convolutional attention network relative to some existing models.
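The relative attention mechanism, i.e. the sum of a static convolution kernel with an adaptive attention matrix, can be sketched for a single head over a 1-D sequence (the shapes, the 1-D setting, and the bias layout are illustrative simplifications of the multi-stage network described in the abstract):

```python
import numpy as np

def relative_attention(x, w_qkv, rel_bias):
    """Single-head relative attention: attention logits are the
    content term (adaptive, input-dependent) plus a static,
    translation-invariant kernel indexed by relative offset."""
    n, d = x.shape
    q, k, v = (x @ w for w in w_qkv)
    content = q @ k.T / np.sqrt(d)            # adaptive attention matrix
    # Static convolution kernel: one learned weight per offset j - i.
    offsets = np.arange(n)[None, :] - np.arange(n)[:, None]
    static = rel_bias[offsets + n - 1]        # (n, n) positional term
    logits = content + static
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n, d = 4, 8
x = rng.normal(size=(n, d))
w_qkv = [rng.normal(size=(d, d)) for _ in range(3)]
rel_bias = rng.normal(size=2 * n - 1)         # one weight per relative offset
out = relative_attention(x, w_qkv, rel_bias)  # shape (4, 8)
```

Because `rel_bias` depends only on the offset `j - i`, it behaves like a convolution kernel shared across positions, while the content term adapts to the input, which is the combination the claim describes.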
 
  
 
===Modeling Dependencies with Global Self-Attention Neural Networks ([[US Patent Application 18044842. Modeling Dependencies with Global Self-Attention Neural Networks simplified abstract|18044842]])===
 
Zhuoran Shen
  
 
'''Brief explanation'''
 
The patent application describes a system for modeling dependencies in a network using a global-self attention model with a content attention layer and a positional attention layer.
 
* The model takes input data with content values and context positions.
 
* The content attention layer generates output features for each context position based on a global attention operation applied to the content values.
 
* The positional attention layer generates an attention map for each context position based on the content values of the context position and its neighboring positions.
 
* The output is determined using the output features from the content attention layer and the attention map from the positional attention layer.
 
* This model improves efficiency and can be used in deep networks.
 
 
'''Abstract'''
 
The present disclosure provides systems, methods, and computer program products for modeling dependencies throughout a network using a global-self attention model with a content attention layer and a positional attention layer that operate in parallel. The model receives input data comprising content values and context positions. The content attention layer generates one or more output features for each context position based on a global attention operation applied to the content values independent of the context positions. The positional attention layer generates an attention map for each of the context positions based on one or more content values of the respective context position and associated neighboring positions. Output is determined based on the output features generated by the content attention layer and the attention map generated for each context position by the positional attention layer. The model improves efficiency and can be used throughout a deep network.
 
  
 
===TRAINING NEURAL NETWORKS USING SIGN AND MOMENTUM BASED OPTIMIZERS ([[US Patent Application 18313291. TRAINING NEURAL NETWORKS USING SIGN AND MOMENTUM BASED OPTIMIZERS simplified abstract|18313291]])===
 
Xiangning Chen
  
 
'''Brief explanation'''
 
The patent application describes methods, systems, and apparatus for training a neural network using an optimizer that combines momentum with sign-based updates.
 
 
* The patent application focuses on training a neural network to perform a machine learning task.
 
* The proposed method involves using a momentum and sign based optimizer to optimize the training process.
 
* The optimizer is designed to improve the efficiency and effectiveness of the neural network training.
 
* The method includes encoding computer programs on computer storage media to implement the optimizer.
 
* The invention aims to enhance the performance of machine learning tasks by utilizing this specific optimizer.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to perform a machine learning task using a momentum and sign based optimizer.
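A single update step of a momentum- and sign-based optimizer can be sketched as follows. This is a minimal illustration of the general idea, with assumed hyperparameter names (`lr`, `beta1`, `beta2`); the filing's exact update rule is not given in the abstract.

```python
import numpy as np

def sign_momentum_step(params, grad, momentum, lr=1e-3, beta1=0.9, beta2=0.99):
    """One hypothetical momentum- and sign-based update.

    The step direction uses only the sign of an interpolation between the
    current gradient and the momentum buffer, so every coordinate moves by
    exactly +/- lr; the buffer tracks an exponential moving average of grads.
    """
    update = np.sign(beta1 * momentum + (1 - beta1) * grad)  # sign-based direction
    new_params = params - lr * update
    new_momentum = beta2 * momentum + (1 - beta2) * grad     # momentum update
    return new_params, new_momentum
```

Because only the sign is kept, the optimizer state is a single momentum buffer, which is cheaper than optimizers that also track second moments.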
 
  
 
===Augmentation of Audiographic Images for Improved Machine Learning ([[US Patent Application 18350464. Augmentation of Audiographic Images for Improved Machine Learning simplified abstract|18350464]])===
 
Daniel Sung-Joon Park
  
 
'''Brief explanation'''
 
The patent application is about systems and methods that generate augmented training data for machine-learned models using audiographic images.
 
* The patent introduces new augmentation techniques applied to audiographic images to improve model performance.
 
* The augmentation operations are performed directly on the audiographic image, rather than the raw audio data.
 
* The audiographic images can be spectrograms or filter bank sequences.
 
* The innovation aims to enhance the training data for machine learning models.
 
 
'''Abstract'''
 
Generally, the present disclosure is directed to systems and methods that generate augmented training data for machine-learned models via application of one or more augmentation techniques to audiographic images that visually represent audio signals. In particular, the present disclosure provides a number of novel augmentation operations which can be performed directly upon the audiographic image (e.g., as opposed to the raw audio data) to generate augmented training data that results in improved model performance. As an example, the audiographic images can be or include one or more spectrograms or filter bank sequences.
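Two augmentation operations in the spirit of the abstract, applied directly to the audiographic image rather than the raw audio, can be sketched like this. The mask widths and parameter names are illustrative assumptions, not the filing's specification.

```python
import numpy as np

def mask_audiographic_image(spec, max_time=10, max_freq=8, rng=None):
    """Zero out a random band of time steps and of frequency bins in a
    spectrogram-like image (an assumed, simplified pair of operations)."""
    if rng is None:
        rng = np.random.default_rng(0)
    spec = spec.copy()
    n_freq, n_time = spec.shape
    t = rng.integers(0, max_time + 1)          # width of the time mask
    t0 = rng.integers(0, n_time - t + 1)
    spec[:, t0:t0 + t] = 0.0                   # time masking
    f = rng.integers(0, max_freq + 1)          # height of the frequency mask
    f0 = rng.integers(0, n_freq - f + 1)
    spec[f0:f0 + f, :] = 0.0                   # frequency masking
    return spec
```

Operating on the image means the augmentation costs one array write per mask, with no re-synthesis or re-analysis of the underlying audio signal.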
 
  
 
===SYSTEM(S) AND METHOD(S) FOR JOINTLY LEARNING MACHINE LEARNING MODEL(S) BASED ON SERVER DATA AND CLIENT DATA ([[US Patent Application 17848947. SYSTEM(S) AND METHOD(S) FOR JOINTLY LEARNING MACHINE LEARNING MODEL(S) BASED ON SERVER DATA AND CLIENT DATA simplified abstract|17848947]])===
 
Sean Augenstein
  
 
'''Brief explanation'''
 
The patent application is about techniques to prevent catastrophic forgetting in federated learning of global machine learning models.
 
* The implementation identifies a global machine learning model that is initially trained on a remote server using server data.
 
* The server-based data may include EWC loss terms, client augmenting gradients, and/or server augmenting gradients.
 
* The global ML model and server-based data are transmitted to multiple client devices.
 
* The client devices generate client gradients based on processing predicted output using the global ML model and the server-based data.
 
* The client gradients are then transmitted back to the remote server.
 
* An updated global ML model is generated based on the client gradients.
 
 
'''Abstract'''
 
Implementations disclosed herein are directed to various techniques for mitigating and/or preventing catastrophic forgetting in federated learning of global machine learning (ML) models. Implementations may identify a global ML model that is initially trained at a remote server based on a server data set, determine server-based data for global weight(s) of the global ML model, and transmit the global ML model and the server-based data to a plurality of client devices. The server-based data may include, for example, EWC loss term(s), client augmenting gradients, and/or server augmenting gradients. Further, the plurality of client devices may generate, based on processing corresponding predicted output and using the global ML model, and based on the server-based data, a corresponding client gradient, and transmit the corresponding client gradient to the remote server. Implementations may further generate an updated global ML model based on at least the corresponding client gradients.
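One plausible form of the EWC loss term mentioned above can be sketched as follows: the server ships its weights and a Fisher-information estimate, and each client adds the EWC penalty gradient to its local gradient before transmitting it back. All names here are illustrative assumptions, not the filing's API.

```python
import numpy as np

def client_gradient(local_grad, weights, server_weights, fisher, ewc_lambda=0.1):
    """Gradient a client would send back, with an EWC penalty added.

    The EWC term  lambda/2 * sum_i F_i (w_i - w*_i)^2  pulls client updates
    toward the server-trained weights w*, which is one standard way to
    mitigate catastrophic forgetting of the server data.
    """
    ewc_grad = ewc_lambda * fisher * (weights - server_weights)
    return local_grad + ewc_grad   # transmitted to the remote server
```

When the client has not drifted from the server weights, the penalty gradient vanishes and the client gradient is purely local.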
 
  
 
===Iterative Supervised Learning of Quantum Processor Error Models ([[US Patent Application 17738642. Iterative Supervised Learning of Quantum Processor Error Models simplified abstract|17738642]])===
 
Paul Victor Klimov
  
 
'''Brief explanation'''
 
* This patent application describes systems and methods for generating error models for quantum algorithms implemented on quantum processors.

* The method involves obtaining data associated with a benchmark model, which includes error indicators, benchmarks, and trainable parameters.

* Each error indicator is associated with a distinct quantum gate calibrated in a distinct operating configuration.

* The method determines parameter values for the trainable parameters.

* The quantum computing system operates based on the determined parameter values.
 
 
'''Abstract'''
 
Systems and methods for generating error models for quantum algorithms implemented on quantum processors having a plurality of qubits are provided. In one example, a method includes obtaining data associated with a benchmark model, the benchmark model having one or more error indicators as features, one or more benchmarks as targets, and one or more trainable parameters, wherein each error indicator is associated with a distinct quantum gate calibrated in a distinct operating configuration associated with a plurality of operating parameters for the quantum gate and associated with calibration data for the operating configuration. The method includes determining parameter values for the trainable parameters. The method includes operating a quantum computing system based on operating parameters determined based on the parameter values.
 
  
 
===CHARACTERIZATION OF TIME-CORRELATED QUANTUM ERRORS THROUGH ENTANGLEMENT ([[US Patent Application 17928349. CHARACTERIZATION OF TIME-CORRELATED QUANTUM ERRORS THROUGH ENTANGLEMENT simplified abstract|17928349]])===
 
Yuezhen NIU
  
 
'''Brief explanation'''
 
* The patent application is about efficiently measuring and characterizing errors in a quantum computer.

* The method involves placing the quantum computer in a highly-entangled state called a Greenberger-Horne-Zeilinger (GHZ) state.

* Quantum errors are accumulated in this highly entangled state.

* The accumulated errors are then measured to characterize them.

* One approach involves measuring the parity oscillations of the GHZ state.

* A quantum error model is fitted to a power spectrum of the parity oscillations.

* The fitted quantum error model can be used to select a suitable fault-tolerant error correction scheme for the quantum computer based on its environmental noise.
 
 
'''Abstract'''
 
Errors that affect a quantum computer can be efficiently measured and characterized by placing the quantum computer in a highly-entangled state such as a Greenberger-Horne-Zeilinger (GHZ) state, accumulating quantum errors in the highly entangled state, and then measuring the accumulated errors. In some approaches, the error characterization includes measuring parity oscillations of the GHZ state and fitting a quantum error model to a power spectrum of the parity oscillations. The fitted quantum error model can be used to select a suitable fault-tolerant error correction scheme for the quantum computer given its environmental noise.
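The power-spectrum step mentioned above can be sketched numerically: given parity values measured at a uniform sweep of analysis phases, take the spectrum of the oscillation. The error-model fit itself is omitted; this is only an assumed illustration of the intermediate quantity.

```python
import numpy as np

def parity_power_spectrum(phases, parities):
    """Power spectrum of measured GHZ parity oscillations.

    phases must be uniformly spaced; an error model would then be fitted
    to the returned spectrum (fitting not shown here).
    """
    centered = parities - np.mean(parities)        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(centered)) ** 2  # power per frequency bin
    freqs = np.fft.rfftfreq(len(parities), d=phases[1] - phases[0])
    return freqs, spectrum
```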
 
  
 
===Contrastive Sequence-to-Sequence Data Selector ([[US Patent Application 18351397. Contrastive Sequence-to-Sequence Data Selector simplified abstract|18351397]])===
 
Wei Wang
  
 
'''Brief explanation'''
 
The patent application describes a method for training a target model using a combination of data pairs and contrastive scores. Here are the key points:
 
 
* The method starts by generating a base model using a first dataset of data pairs.
 
* An adapted model is then generated by training the base model on a second dataset of data pairs.
 
* A third dataset of data pairs is used to determine a contrastive score for each pair.
 
* The contrastive score reflects the probability of quality for each data pair.
 
* Finally, a target model is trained using the data pairs from the third dataset and the corresponding contrastive scores.
 
 
'''Abstract'''
 
A method includes generating a base model by training with a first dataset of data pairs and generating an adapted model by training the base model on a second dataset of data pairs. The method also includes determining a contrastive score for each data pair of a third dataset of data pairs using the base model and the adapted model. The contrastive score is indicative of a probability of quality of the respective data pair. The method also includes training a target model using the data pairs of the third dataset and the contrastive scores.
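A common concrete form of such a contrastive score, assumed here for illustration only, is the per-token gap between the adapted model's and the base model's log-likelihood of a pair: pairs the in-domain adapted model prefers score higher and are kept for training the target model.

```python
def contrastive_score(base_logprob, adapted_logprob, length):
    """Score one data pair by the per-token log-likelihood gap between the
    adapted model and the base model (an assumed formulation)."""
    return (adapted_logprob - base_logprob) / length

def select_pairs(pairs, scores, top_k):
    """Keep the top_k pairs by contrastive score for target-model training."""
    ranked = sorted(zip(scores, pairs), key=lambda sp: sp[0], reverse=True)
    return [p for _, p in ranked[:top_k]]
```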
 
  
 
===Machine Learning for High Quality Image Processing ([[US Patent Application 18013802. Machine Learning for High Quality Image Processing simplified abstract|18013802]])===
 
Noritsugu Kanazawa
  
 
'''Brief explanation'''
 
* The patent application describes a system or method for inpainting, which is the process of filling in missing or damaged parts of an image.

* The system or method uses machine learning and ground truth data training to improve the efficiency and accuracy of inpainting.

* By training machine-learning models with ground truth image data, the inpainting process can be more precise and effective.

* The machine-learning models can predict and inpaint various types of data, making them versatile and applicable in different scenarios.

* The trained models can make predictions without the need for ground truth reassurance, thanks to calibrated parameters obtained through the training process.
 
 
'''Abstract'''
 
A system or method for inpainting can be aided through the use of machine learning and ground truth data training. The training of machine-learning inpainting models through the use of ground truth image data may add efficiency and precision to the field of image inpainting. Furthermore, machine-learning inpainting models can aid in the non-deterministic prediction of a variety of data types and can be applicable to the removing and/or replacing of a variety of data types. The trained models can be enabled to make predictions without ground truth reassurance due to calibrated parameters tuned through the training.
 
  
 
===Enhanced Photo Relighting Based on Machine Learning Models ([[US Patent Application 18028930. Enhanced Photo Relighting Based on Machine Learning Models simplified abstract|18028930]])===
 
Sean Ryan Francesco Fanello
  
 
'''Brief explanation'''
 
* The patent application is related to applying lighting models to images of objects.

* The method involves using a geometry model to determine the distribution of lighting on an object based on its surface geometry.

* An environmental light estimation model is then used to determine the direction of synthetic lighting to be applied to the image.

* A light energy model is applied based on the surface orientation map and the direction of synthetic lighting to determine the amount of light energy to be applied to each pixel of the image.

* The method also includes enhancing a portion of the image based on the determined light energy.

* The patent application mentions the use of one or more neural networks to perform these aspects.
 
 
'''Abstract'''
 
Apparatus and methods related to applying lighting models to images of objects are provided. An example method includes applying a geometry model to an input image to determine a surface orientation map indicative of a distribution of lighting on an object based on a surface geometry. The method further includes applying an environmental light estimation model to the input image to determine a direction of synthetic lighting to be applied to the input image. The method also includes applying, based on the surface orientation map and the direction of synthetic lighting, a light energy model to determine a quotient image indicative of an amount of light energy to be applied to each pixel of the input image. The method additionally includes enhancing, based on the quotient image, a portion of the input image. One or more neural networks can be trained to perform one or more of the aforementioned aspects.
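The final enhancement step, once the quotient image exists, reduces to a per-pixel scaling. The sketch below assumes a single-channel image and an optional mask selecting the portion to enhance; both of those details are illustrative assumptions, not the filing's design.

```python
import numpy as np

def relight(image, quotient, mask=None):
    """Enhance a region of an image by the per-pixel light energy in a
    quotient image (assumed single-channel, values in [0, 1])."""
    out = image.copy()
    if mask is None:
        mask = np.ones(image.shape, dtype=bool)  # enhance the whole image
    # Scale the selected pixels by their light-energy quotient and clamp.
    out[mask] = np.clip(image[mask] * quotient[mask], 0.0, 1.0)
    return out
```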
 
  
 
===SYSTEM AND METHOD FOR CONCURRENT ODOMETRY AND MAPPING ([[US Patent Application 18224414. SYSTEM AND METHOD FOR CONCURRENT ODOMETRY AND MAPPING simplified abstract|18224414]])===
 
Esha Nerurkar
  
 
'''Brief explanation'''
 
* The patent application describes an electronic device that tracks its motion in an environment and builds a three-dimensional visual representation of the environment.

* The device uses feature descriptors, which are visual representations of spatial features of objects in the environment, to estimate its poses.

* A mapping module combines the feature descriptors and estimated poses to create a three-dimensional visual representation of the environment.

* This representation is then used by a localization module to identify correspondences between stored and observed feature descriptors.

* The localization module performs a loop closure by minimizing discrepancies between matching feature descriptors to compute a localized pose.

* The localized pose corrects any drift in the estimated pose generated by the motion tracking module.
 
 
'''Abstract'''
 
An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion. A motion tracking module estimates poses of the electronic device based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment. A mapping module builds a three-dimensional visual representation of the environment based on a stored plurality of maps, and feature descriptors and estimated device poses received from the motion tracking module. The mapping module provides the three-dimensional visual representation of the environment to a localization module, which identifies correspondences between stored and observed feature descriptors. The localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose. The localized pose corrects drift in the estimated pose generated by the motion tracking module.
 
  
 
===Use Of Image Sensors To Query Real World for Geo-Reference Information ([[US Patent Application 18197364. Use Of Image Sensors To Query Real World for Geo-Reference Information simplified abstract|18197364]])===
 
Juan David Hincapie
  
 
'''Brief explanation'''
 
* This patent application describes a system and method for using image sensors in a device to provide users with information about nearby points of interest.

* The image sensors detect features and objects in the device's field of view.

* Based on the detected features and objects, the device determines its location and orientation.

* Using this pose data, the device identifies a range of points of interest within a specific geographical area.

* The device can query a mapping database to find points of interest located near the user's location.

* The device then provides the user with information about one or more of these points of interest.
 
 
'''Abstract'''
 
The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
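The mapping-database query described above can be sketched as a radius filter around the pose's estimated location. The database format and parameter names here are assumptions for illustration; the distance is great-circle (haversine).

```python
import math

def nearby_points_of_interest(pose, poi_database, radius_m=200.0):
    """Return names of points of interest within radius_m of the pose.

    pose: {"lat": ..., "lon": ...} in degrees (assumed format).
    poi_database: iterable of (name, lat, lon) tuples (assumed format).
    """
    lat0, lon0 = math.radians(pose["lat"]), math.radians(pose["lon"])
    hits = []
    for name, lat, lon in poi_database:
        la, lo = math.radians(lat), math.radians(lon)
        a = (math.sin((la - lat0) / 2) ** 2
             + math.cos(lat0) * math.cos(la) * math.sin((lo - lon0) / 2) ** 2)
        d = 2 * 6371000.0 * math.asin(math.sqrt(a))  # metres, Earth radius 6371 km
        if d <= radius_m:
            hits.append(name)
    return hits
```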
 
  
 
===IDENTIFYING A POSITION OF A CONTROLLABLE DEVICE USING A WEARABLE DEVICE ([[US Patent Application 18246464. IDENTIFYING A POSITION OF A CONTROLLABLE DEVICE USING A WEARABLE DEVICE simplified abstract|18246464]])===
 
Shengzhi Wu
  
 
'''Brief explanation'''
 
* The patent application describes a method for identifying the position of a controllable device using visual data from a wearable device.

* An object recognition module generates identification data based on the visual data.

* The identification data is used to identify a specific 3D map from a map database.

* The 3D maps in the database are associated with different controllable devices.

* The method obtains the position of the controllable device in physical space using visual positioning data from the identified 3D map.

* A user interface (UI) object is then rendered on a display, positioned within a certain distance of the controllable device's position.
 
 
'''Abstract'''
 
According to an aspect, a method of identifying a position of a controllable device includes receiving visual data from an image sensor on a wearable device, generating, by an object recognition module, identification data based on the visual data, and identifying, using the identification data, a first three-dimensional (3D) map from a map database that stores a plurality of 3D maps including the first 3D map and a second 3D map, where the first 3D map is associated with a first controllable device and the second 3D map is associated with a second controllable device. The method includes obtaining a position of the first controllable device in a physical space based on visual positioning data of the first 3D map and rendering a user interface (UI) object on a display in a position that is within a threshold distance of the position of the first controllable device.
 
  
 
===OPEN-VOCABULARY OBJECT DETECTION IN IMAGES ([[US Patent Application 18144045. OPEN-VOCABULARY OBJECT DETECTION IN IMAGES simplified abstract|18144045]])===
 
Matthias Johannes Lorenz Minderer
  
 
'''Brief explanation'''
 
The patent application describes a method for object detection using a neural network.
 
* The method involves obtaining an image and a set of query embeddings representing different categories of objects.
 
* The image and query embeddings are processed using an object detection neural network.
 
* The image is processed using an image encoding subnetwork to generate object embeddings.
 
* Each object embedding is processed using a localization subnetwork to determine the region of the image where the object is located.
 
* The object embeddings and query embeddings are processed using a classification subnetwork to generate a classification score distribution for each object embedding.
 
* This method allows for accurate detection and classification of objects in an image.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for object detection. In one aspect, a method comprises: obtaining: (i) an image, and (ii) a set of one or more query embeddings, wherein each query embedding represents a respective category of object; processing the image and the set of query embeddings using an object detection neural network to generate object detection data for the image, comprising: processing the image using an image encoding subnetwork of the object detection neural network to generate a set of object embeddings; processing each object embedding using a localization subnetwork to generate localization data defining a corresponding region of the image; and processing: (i) the set of object embeddings, and (ii) the set of query embeddings, using a classification subnetwork to generate, for each object embedding, a respective classification score distribution over the set of query embeddings.
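The classification subnetwork's final step can be sketched as follows, under the assumption (not stated in the abstract) that the score distribution comes from dot products between object embeddings and query embeddings, normalized per object.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def classify_objects(object_embeddings, query_embeddings):
    """Score every detected object against every category query embedding.

    Returns one classification score distribution over the set of query
    embeddings per object embedding (an assumed, simplified formulation).
    """
    logits = object_embeddings @ query_embeddings.T  # (n_objects, n_queries)
    return softmax(logits, axis=-1)
```

Because the categories enter only as query embeddings, new categories can be added at inference time by embedding new text, which is what makes the detection "open-vocabulary".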
 
  
 
===GUIDING FINGERPRINT SENSING VIA USER FEEDBACK ([[US Patent Application 18245454. GUIDING FINGERPRINT SENSING VIA USER FEEDBACK simplified abstract|18245454]])===
 
Scott Jenson
  
 
'''Brief explanation'''
 
The patent application describes a method for guiding fingerprint sensing using haptic signals.
 
* The method involves initiating a fingerprint sensing operation using a fingerprint sensor.
 
* Over a defined period of time, a sequence of haptic signals with varying intensity is outputted by a computing device.
 
* Each haptic signal in the sequence has a different intensity than any previously outputted haptic signal.
 
* Fingerprint data associated with the user's finger is obtained from the fingerprint sensor.
 
* If the user's finger is still positioned at the fingerprint sensor after the sensing operation, a discrete haptic signal is outputted to indicate a successful completion of the fingerprint sensing operation.
 
 
'''Abstract'''
 
An example method includes initiating, by a computing device, a fingerprint sensing operation that is associated with a fingerprint sensor, outputting, by the computing device and over a defined period of time, a sequence of haptic signals with varying intensity, wherein each haptic signal in the sequence has an intensity that is different than a respective intensity of any haptic signal that was previously output in the sequence, obtaining, by the computing device and from the fingerprint sensor, fingerprint data associated with a fingerprint of the finger of the user, and responsive to determining that the finger of the user is still positioned at the fingerprint sensor upon completion of the fingerprint sensing operation, outputting, by the computing device, a discrete haptic signal indicating a successful completion of the fingerprint sensing operation.
 
  
 
===Speaker Embeddings for Improved Automatic Speech Recognition ([[US Patent Application 17661832. Speaker Embeddings for Improved Automatic Speech Recognition simplified abstract|17661832]])===
 
Fadi Biadsy
  
 
'''Brief explanation'''
 
The patent application describes a method for converting atypical speech into a more typical representation using a speaker embedding network.
 
* The method involves receiving a reference audio signal of a target speaker with atypical speech.
 
* A speaker embedding network generates a speaker embedding that captures the speaker characteristics of the target speaker.
 
* The method also involves receiving a speech conversion request with input audio data of the target speaker's utterance.
 
* The speaker embedding is used to bias a speech conversion model to convert the input audio data into a more typical representation of the target speaker's utterance.
 
 
'''Abstract'''
 
A method includes receiving a reference audio signal corresponding to reference speech spoken by a target speaker with atypical speech, and generating, by a speaker embedding network configured to receive the reference audio signal as input, a speaker embedding for the target speaker. The speaker embedding conveys speaker characteristics of the target speaker. The method also includes receiving a speech conversion request that includes input audio data corresponding to an utterance spoken by the target speaker associated with the atypical speech. The method also includes biasing, using the speaker embedding generated for the target speaker by the speaker embedding network, a speech conversion model to convert the input audio data corresponding to the utterance spoken by the target speaker associated with atypical speech into an output canonical representation of the utterance spoken by the target speaker.
 
  
 
===Speech Personalization and Federated Training Using Real World Noise ([[US Patent Application 18356743. Speech Personalization and Federated Training Using Real World Noise simplified abstract|18356743]])===
 
Matthew Sharifi
  
 
'''Brief explanation'''
 
The patent application describes a method for training a speech model using a voice-enabled device.
 
* The method involves receiving a set of training utterances, each consisting of a transcription and a speech representation.
 
* Noisy audio data is sampled from the device's environment.
 
* The speech representation of each training utterance is augmented with the sampled noisy audio data to create noisy audio samples.
 
* Each noisy audio sample is paired with the corresponding transcription.
 
* A speech model is then trained using these noisy audio samples.
 
* This method helps improve the accuracy and robustness of the speech model by incorporating real-world noise into the training process.
 
 
'''Abstract'''
 
A method of training a speech model includes receiving, at a voice-enabled device, a fixed set of training utterances where each training utterance in the fixed set of training utterances includes a transcription paired with a speech representation of the corresponding training utterance. The method also includes sampling noisy audio data from an environment of the voice-enabled device. For each training utterance in the fixed set of training utterances, the method further includes augmenting, using the noisy audio data sampled from the environment of the voice-enabled device, the speech representation of the corresponding training utterance to generate noisy audio samples and pairing each of the noisy audio samples with the corresponding transcription of the corresponding training utterance. The method additionally includes training a speech model on the noisy audio samples generated for each speech representation in the fixed set of training utterances.
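The augmentation step above, mixing environment noise into a clean speech representation, can be sketched by scaling the sampled noise to hit a target signal-to-noise ratio. The SNR parameterization and names are assumptions for illustration.

```python
import numpy as np

def augment_with_noise(speech, noise, snr_db=10.0):
    """Mix sampled environment noise into a speech waveform at snr_db.

    The noise is tiled or trimmed to the utterance length, then scaled so
    the resulting mixture has the requested signal-to-noise ratio.
    """
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]      # match utterance length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12            # avoid divide-by-zero
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

Each training utterance can be paired with its original transcription after mixing, since the augmentation changes only the acoustics, not the words.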
 
  
 
===MANAGING DIALOG DATA PROVIDERS ([[US Patent Application 18222325. MANAGING DIALOG DATA PROVIDERS simplified abstract|18222325]])===
 
David Kliger Elson
  
 
'''Brief explanation'''
 
* This patent application is about managing dialogs using methods, systems, and computer programs.

* The method involves receiving a task request from a user device.

* The request is then submitted to multiple data providers.

* The data providers provide suggested dialog responses.

* The suggested responses are scored based on certain factors.

* Based on the scoring, a particular dialog response is determined.

* The determined dialog response is provided back to the user device.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for managing dialogs. In one aspect, a method includes receiving a request associated with a task from a user device; submitting the request to each of a plurality of distinct data providers; receiving a plurality of suggested dialog responses from two or more of the data providers; scoring the one or more suggested dialog responses based on one or more scoring factors; determining a particular dialog response to provide to the user based on the scoring; and providing the determined dialog response to the user device.
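The scoring-and-selection step can be sketched as a weighted sum over scoring factors, with the best-scoring suggested response returned to the user device. The factor representation here is an assumption; the abstract does not specify the scoring factors.

```python
def choose_dialog_response(responses, scoring_factors):
    """Pick one dialog response from the providers' suggestions.

    responses: list of candidate response strings.
    scoring_factors: list of (factor_fn, weight) pairs, where factor_fn
    maps a response to a numeric score (an assumed representation).
    """
    def score(resp):
        # Weighted sum over all scoring factors for this candidate.
        return sum(weight * factor_fn(resp)
                   for factor_fn, weight in scoring_factors)
    return max(responses, key=score)
```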
 
  
 
===MULTIMODE HIGH-ISOLATION ANTENNA SYSTEM ([[US Patent Application 18222684. MULTIMODE HIGH-ISOLATION ANTENNA SYSTEM simplified abstract|18222684]])===
 
Ming Zheng
  
 
'''Brief explanation'''
 
* The patent application describes a multimode high-isolation antenna system and its associated methods and systems.

* The antenna system is implemented on a circular printed circuit board and can be used for wideband and ultra-wideband applications.

* The system includes two orthogonal antennas that are separated by a decoupling structure.

* This arrangement provides high isolation between the antennas, meaning that the signals from one antenna do not interfere with the other.

* The system enables five unique resonant modes of operation, allowing for versatile and efficient use of the antenna system.
 
 
'''Abstract'''
 
This document describes a multimode high-isolation antenna system and associated methods and systems. The described antenna system is implemented on a generally-circular printed circuit board and can be used for wideband and ultra-wideband applications. The multimode high-isolation antenna system includes two orthogonal antennas separated by a decoupling structure. This arrangement provides high isolation between the antennas and enables five unique resonant modes of operation for the multimode high-isolation antenna system.
 
  
 
===SEMI-PERSISTENT SCHEDULING IN LATENCY-SENSITIVE SYSTEMS ([[US Patent Application 18018825. SEMI-PERSISTENT SCHEDULING IN LATENCY-SENSITIVE SYSTEMS simplified abstract|18018825]])===
 
Kao-Peng Chou
  
 
'''Brief explanation'''
 
The patent application describes techniques for processing data using semi-persistent scheduling. Here is a simplified explanation of the abstract:
 
 
* The techniques involve receiving transmissions and retransmissions of data associated with a periodically-scheduled occasion.
 
* If the data cannot be recovered from these transmissions, it is considered undelivered.
 
* The undelivered data is then persisted in a buffer corresponding to the occasion for future recovery attempts.
 
* The persisted payload information is stored in the buffer for a longer period of time than the periodicity of the occasion.
 
* A retransmission timer prevents the persisted payload information from being overwritten or cleared.
 
* The persisted payload information can be reallocated to another buffer for better management.
 
 
 
'''Abstract'''
 
Techniques for processing data in accordance with semi-persistent scheduling include receiving, in accordance with a mechanism for automatic retransmission of undelivered data, one or more transmissions and/or retransmissions of data associated with a periodically-scheduled occasion, failing to recover data from the (re)transmissions, and persisting the (re)transmission payload(s) (e.g., in a combined form) in a buffer corresponding to the occasion for use in future attempts at recovering the data, e.g., persisting the payload(s) over a length of time greater than a periodicity of the occurrences of the occasion. For example, the UE may utilize a retransmission timer which, while activated, prevents the persisted payload information from being overwritten or cleared, and/or the UE may reallocate the persisted payload information from being maintained in the buffer initially associated with the occasion to being maintained/persisted in another buffer.
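The buffering behavior above (persist an undecodable payload past the occasion's periodicity, guarded by a retransmission timer) can be sketched as follows. Class and field names are illustrative assumptions; the application targets a UE's HARQ-style soft buffers, not this simplified structure:

```python
# Sketch: persisted payload buffer whose contents cannot be overwritten
# while a retransmission timer is running.

class SpsBuffer:
    def __init__(self, timer_s):
        self.payload = None
        self.timer_expiry = 0.0
        self.timer_s = timer_s

    def persist(self, payload, now):
        """Store combined (re)transmission payload and arm the timer."""
        self.payload = payload
        self.timer_expiry = now + self.timer_s

    def try_overwrite(self, payload, now):
        """Data from a later occasion may not clear the buffer while
        the retransmission timer is active."""
        if now < self.timer_expiry and self.payload is not None:
            return False  # timer active: keep the persisted payload
        self.payload = payload
        return True

buf = SpsBuffer(timer_s=10.0)
buf.persist(b"soft-bits-occasion-1", now=0.0)
overwritten = buf.try_overwrite(b"occasion-2", now=5.0)   # timer active
allowed = buf.try_overwrite(b"occasion-2", now=15.0)      # timer expired
```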
 
  
 
===Rate Update Engine For Reliable Transport Protocol ([[US Patent Application 18222590. Rate Update Engine For Reliable Transport Protocol simplified abstract|18222590]])===
 
Xiaoming Wang
  
 
'''Brief explanation'''
 
The patent application describes a system that analyzes data packets received over a communication protocol system to determine congestion indicators.
 
* The system focuses on network congestion for data packets transmitted over a reliable transport protocol layer.
 
* It includes a first processor that performs the analysis of the data packets to determine congestion indicators.
 
* A rate update engine, separate from the packet datapath, operates a second processor to receive the congestion indicators and determine congestion control parameters.
 
* The congestion control parameters are used to control the transmission of data packets.
 
* The rate update engine outputs a congestion control result based on the determined congestion control parameters.
 
 
'''Abstract'''
 
A system includes a first processor configured to analyze packets received over a communication protocol system and determine one or more congestion indicators from the analysis of the data packets, the one or more congestion indicators being indicative of network congestion for data packets transmitted over a reliable transport protocol layer of the communication protocol system. The system also includes a rate update engine separate from the packet datapath and configured to operate a second processor to receive the determined one or more congestion indicators, determine one or more congestion control parameters for controlling transmission of data packets based on the received one or more congestion indicators, and output a congestion control result based on the determined one or more congestion control parameters.
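A minimal sketch of the rate-update step, which turns congestion indicators into a congestion-control parameter off the packet datapath. The additive-increase/multiplicative-decrease rule and the indicator names are illustrative assumptions, not the application's algorithm:

```python
# Hypothetical rate-update step: map congestion indicators from the
# datapath processor to a transmission-rate control result.

def update_rate(current_rate, indicators):
    """AIMD-style update driven by congestion indicators."""
    if indicators.get("ecn_marked") or indicators.get("rtt_inflated"):
        return max(current_rate * 0.5, 1.0)   # congestion: back off
    return current_rate + 10.0                # no congestion: probe up

rate = 100.0
rate = update_rate(rate, {"ecn_marked": True})   # backs off to 50.0
rate = update_rate(rate, {})                     # probes up to 60.0
```

Keeping this logic in a separate engine, as the abstract describes, lets the datapath forward packets without waiting on the control computation.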
 
  
 
===WATERMARK-BASED MESSAGE QUEUE ([[US Patent Application 18352689. WATERMARK-BASED MESSAGE QUEUE simplified abstract|18352689]])===
 
Yi Cui
  
 
'''Brief explanation'''
 
This patent application describes a watermark-based message queue system and method. Here are the key points:
 
 
* The system receives a connection request for messages from a user device.
 
* It establishes a connection session with the user device.
 
* The system identifies a message queue associated with the user device, where each message in the queue has a timestamp.
 
* The message queue is associated with a current watermark that represents the first timestamp.
 
* The system identifies the oldest message in the queue at the time the connection session was established.
 
* It associates an updated watermark with the message queue, representing the second timestamp associated with the oldest message.
 
* The system provides one or more messages to the user device that have a timestamp newer than or equal to the first timestamp identified by the current watermark.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a watermark-based message queue. One of the methods includes receiving a first connection request for messages associated with a user device. A first connection session is established with the user device. A message queue of messages associated with the user device is identified, each message in the message queue is associated with a respective timestamp, and the message queue is associated with a current watermark that identifies a first timestamp. An oldest message in the message queue at the time the first connection session was established is identified. An updated watermark that identifies a second timestamp associated with the oldest message is associated with the message queue. One or more messages that have a timestamp newer than or equal to the first timestamp identified by the current watermark is provided to the user device.
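The queue mechanics above can be sketched in Python: deliver messages at or after the current watermark, then advance the watermark to the oldest message present when the session was established. Class and method names are illustrative assumptions:

```python
# Sketch of a watermark-based message queue.

class WatermarkQueue:
    def __init__(self):
        self.messages = []   # list of (timestamp, payload)
        self.watermark = 0   # first timestamp eligible for delivery

    def enqueue(self, ts, payload):
        self.messages.append((ts, payload))

    def connect(self):
        """On a new connection session: deliver messages with timestamps
        newer than or equal to the current watermark, then update the
        watermark to the oldest message's timestamp."""
        due = [m for m in self.messages if m[0] >= self.watermark]
        if self.messages:
            self.watermark = min(ts for ts, _ in self.messages)
        return due

q = WatermarkQueue()
q.enqueue(5, "new")
q.enqueue(3, "oldest")
delivered = q.connect()   # watermark 0: both messages are delivered
```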
 
  
 
===REMOTE ATTESTATION TRANSPORT LAYER SECURITY AND SPLIT TRUST ENCRYPTION ([[US Patent Application 18352373. REMOTE ATTESTATION TRANSPORT LAYER SECURITY AND SPLIT TRUST ENCRYPTION simplified abstract|18352373]])===
 
Keith Moyer
  
 
'''Brief explanation'''
 
The abstract describes a method for remote attestation, which involves establishing a secure communication session between two computing devices using a cryptographic protocol.
 
 
* The method allows a first computing device to receive an attestation request from a second computing device via the secure communication session.
 
* The attestation request asks the first computing device to provide an attestation report.
 
* The first computing device generates the attestation report based on an ephemeral session key, which ensures the security of the report.
 
* The attestation report is then sent back to the second computing device using the same secure communication session.
 
 
'''Abstract'''
 
A method for remote attestation includes establishing, using a cryptographic protocol, a communication session between a first computing device and a second computing device. The communication session includes communications encrypted by an ephemeral session key. The method includes receiving, at the first communication device via the communication session, from the second computing device, an attestation request requesting the first computing device to provide an attestation report. The method includes generating, by the first computing device, the attestation report based on the ephemeral session key and sending, using the communication session, the attestation report to the second computing device.
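The key idea, generating the attestation report based on the ephemeral session key so the verifier can tie the report to this specific session, can be sketched with an HMAC binding. The HMAC construction and report fields are illustrative assumptions, not the application's exact mechanism:

```python
import hashlib
import hmac

# Sketch: bind an attestation report to the session's ephemeral key.

def make_attestation_report(measurements, ephemeral_session_key):
    """MAC the device measurements with the ephemeral session key."""
    body = b"|".join(measurements)
    tag = hmac.new(ephemeral_session_key, body, hashlib.sha256).hexdigest()
    return {"measurements": measurements, "binding": tag}

def verify_report(report, ephemeral_session_key):
    """Recompute the binding; a mismatch means a different session/key."""
    body = b"|".join(report["measurements"])
    expected = hmac.new(ephemeral_session_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["binding"])

key = b"ephemeral-session-key"
report = make_attestation_report([b"fw:1.2.3", b"os:ok"], key)
valid = verify_report(report, key)
```

Because the key is ephemeral, a report captured from one session cannot be replayed into another.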
 
  
 
===Cloud-Based Application of Visual Effects to Video ([[US Patent Application 17738176. Cloud-Based Application of Visual Effects to Video simplified abstract|17738176]])===
 
Stéphane Hervé Loïc Hulaud
  
 
'''Brief explanation'''
 
The patent application describes a server system that receives a video stream and visual effects information from a client device during a videoconferencing session. The server system then applies the visual effects to the video stream and generates modified video streams. These modified video streams are then transmitted to other client devices participating in the videoconferencing session.
 
 
* Server system receives video stream and visual effects information from a client device
 
* Visual effects information specifies the desired visual effects to be applied to the video stream
 
* Server system applies the visual effects to the video stream
 
* Modified video streams are generated with the applied visual effects
 
* Modified video streams are transmitted to other client devices participating in the videoconferencing session
 
 
'''Abstract'''
 
A server system receives, from a first client device, a video stream relating to a videoconferencing session and receives, from the first client device, visual effects information relating to one or more visual effects to be applied to the video stream. The server system applies, based on the received visual effects information, the one or more visual effects to the video stream to generate one or more modified video streams, and transmits the one or more modified video streams to one or more other client devices participating in the videoconferencing session.
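The server-side step (apply the requested effects to the incoming stream before fan-out to other participants) can be sketched on toy grayscale frames. The effect names and pixel math are illustrative, not from the application:

```python
# Sketch: apply client-requested visual effects to each frame of a
# stream before forwarding modified frames to other participants.

EFFECTS = {
    "brighten": lambda px: min(px + 40, 255),
    "invert": lambda px: 255 - px,
}

def apply_effects(frame, effect_names):
    """frame: list of grayscale pixel values in 0..255."""
    for name in effect_names:
        frame = [EFFECTS[name](px) for px in frame]
    return frame

modified = apply_effects([0, 100, 250], ["brighten", "invert"])
```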
 
  
 
===END-TO-END WATERMARKING SYSTEM ([[US Patent Application 18008789. END-TO-END WATERMARKING SYSTEM simplified abstract|18008789]])===
 
Xiyang Luo
  
 
'''Brief explanation'''
 
This patent application describes a method for training an encoder and decoder to generate and decode watermarks in data items. The training process involves generating multiple watermarks for training images, adding distortions to the watermarked images, and adjusting the training parameters based on the error values.
 
 
* The patent application focuses on jointly training an encoder and decoder for watermark generation and decoding in data items.
 
* The training process involves obtaining a set of training images and data items.
 
* For each training image, a first watermark is generated using an encoder, and then a second watermark is generated by tiling multiple first watermarks.
 
* The second watermark is used to watermark the training image, and a first error value is calculated.
 
* Distortions are added to the watermarked image, and a distortion detector predicts these distortions.
 
* The distorted image is modified based on the predicted distortions.
 
* The modified image is decoded by the decoder to generate a predicted data item and a second error value.
 
* The training parameters of the encoder and decoder are adjusted based on the first and second error values.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for jointly training an encoder that generates a watermark and a decoder that decodes a data item encoded within the watermark. The training comprises obtaining a plurality of training images and data items. For each training image, a first watermark is generated using an encoder and a subsequent second watermark is generated by tiling two or more first watermarks. The training image is watermarked using the second watermark to generate a first error value and distortions are added to the watermarked image. A distortion detector predicts the distortions based on which the distorted image is modified. The modified image is decoded by the decoder to generate a predicted data item and a second error value. The training parameters of the encoder and decoder are adjusted based on the first and the second error value.
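Two concrete pieces of the training procedure, tiling a small first watermark to cover the image and computing the first error value (how much the watermark perturbed the image), can be sketched with toy values. The error metric here is an illustrative assumption standing in for the training loss:

```python
# Sketch: tile a small watermark across an image and measure the
# perturbation it introduces (the "first error value" in training).

def tile_watermark(wm, rows, cols):
    """Repeat a small 2-D watermark to cover a rows x cols canvas."""
    h, w = len(wm), len(wm[0])
    return [[wm[r % h][c % w] for c in range(cols)] for r in range(rows)]

def image_error(image, watermarked):
    """Toy visibility loss: total absolute pixel change."""
    return sum(abs(a - b) for ra, rb in zip(image, watermarked)
               for a, b in zip(ra, rb))

wm = [[1, 0], [0, 1]]
tiled = tile_watermark(wm, 4, 4)
img = [[5] * 4 for _ in range(4)]
marked = [[p + t for p, t in zip(ri, rt)] for ri, rt in zip(img, tiled)]
err1 = image_error(img, marked)
```

In the full system this error, together with the decoder's data-recovery error after distortion, drives the joint parameter updates of encoder and decoder.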
 
  
 
===PROVIDING A MESSAGE BASED ON A CHANGE IN WATCH TIME ([[US Patent Application 18221731. PROVIDING A MESSAGE BASED ON A CHANGE IN WATCH TIME simplified abstract|18221731]])===
 
Prachi Gupta
  
 
'''Brief explanation'''
 
This patent application describes a system for recommending video content to users based on their preferences and context. Here are the key points:
 
 
* The system receives a request from a user to view a video content item.
 
* The user is associated with a set of preferences or a context.
 
* The system identifies a group of similar users based on these preferences or context.
 
* A number of candidate video items that correspond to the requested video content item are identified.
 
* The system determines a watch time difference for each candidate video item.
 
* A subset of candidate video items is determined based on the watch time difference.
 
* The candidate video items in the subset are ranked based on an activity rate associated with each item.
 
* The system provides the user with the candidate video item that has the highest ranking.
 
 
'''Abstract'''
 
A request from a user to view a video content item may be received, the requesting user being associated with at least one of a set of preferences or a context. A group of similar users may be identified based on the set of preferences or the context. A number of candidate video items corresponding to the video content item may be identified. A watch time difference may be determined for each candidate video item of the number of candidate video items. A subset may be determined based on the watch time difference associated with each candidate video item. The candidate video items in the subset may be ranked based on an activity rate associated with the corresponding candidate video item. A candidate video item with the highest ranking may be provided to the user.
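The selection pipeline (filter candidates by watch time difference, rank the remainder by activity rate, return the top item) can be sketched as below. Field names and the threshold are illustrative assumptions:

```python
# Sketch: filter candidate videos by watch-time difference, then rank
# the surviving subset by activity rate and return the top candidate.

def recommend(candidates, max_watch_time_diff):
    subset = [c for c in candidates
              if c["watch_time_diff"] <= max_watch_time_diff]
    if not subset:
        return None
    return max(subset, key=lambda c: c["activity_rate"])

candidates = [
    {"id": "a", "watch_time_diff": 2.0, "activity_rate": 0.4},
    {"id": "b", "watch_time_diff": 9.0, "activity_rate": 0.9},
    {"id": "c", "watch_time_diff": 1.0, "activity_rate": 0.7},
]
top = recommend(candidates, max_watch_time_diff=5.0)
```

Here "b" is excluded by the watch-time filter despite its high activity rate, so "c" wins the ranking.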
 
  
 
===Video Integration with Home Assistant ([[US Patent Application 18351370. Video Integration with Home Assistant simplified abstract|18351370]])===
 
Jessica Yuan
  
 
'''Brief explanation'''
 
This patent application describes a system for managing video recording using a network-enabled video camera and a cloud-based home assistant integration platform.
 
 
* The system includes a video camera that captures a video stream of a location and streams it to a cloud-based platform.
 
* A home assistant device with a microphone and wireless network interface is used to receive spoken commands requesting video recording.
 
* The platform analyzes the video stream to determine the user's identity.
 
* Based on the spoken command and user identity, a portion of the video stream is stored in the user's account.
 
 
'''Abstract'''
 
Various arrangements are detailed herein related to managing video recording. A system can include a network-enabled video camera that captures a video stream of a location and streams the video stream to a cloud-based home assistant integration platform. The system can include a home assistant device comprising a microphone and wireless network interface, the home assistant device configured to receive, via the microphone, a spoken command that requests video, captured by the network-enabled video camera, be recorded. The cloud-based home assistant integration platform may be configured to analyze the video stream captured using the network-enabled video camera to determine an identity of the user. The platform may be further configured to store a portion of the video stream linked to a user account of the user based on the spoken command and the identity of the user.
 
  
 
===Systems and Methods for Detecting Improper Implementation of Presentation of Content Items by Applications Executing on Client Devices ([[US Patent Application 18223797. Systems and Methods for Detecting Improper Implementation of Presentation of Content Items by Applications Executing on Client Devices simplified abstract|18223797]])===
 
Priyanshu Jain
  
 
'''Brief explanation'''
 
The patent application describes a system and method for detecting improper presentation of content items by applications on client devices.
 
 
* The system receives content requests from multiple client devices, where each request is generated by an application running on a client device.
 
* In response to a content request, the system transmits a content package to the client device, which includes a content item and an interaction confirmation script.
 
* The system determines a location parameter associated with the interaction confirmation script.
 
* It also determines a performance metric of the content item.
 
* Based on the performance metric and the location parameter, the system determines whether the application receives inadvertent clicks on the content item.
 
 
'''Abstract'''
 
Systems and methods for detecting improper presentation of content items by applications executing on client devices. A method can include: (i) receiving content requests from a plurality of client devices, wherein each of the content requests is generated by an application executing on a respective client device of the plurality of client devices; (ii) transmitting to each client device of the plurality of client devices, responsive to a content request from the client device, a content package including at least a content item and an interaction confirmation script; (iii) determining a location parameter associated with the interaction confirmation script; (iv) determining a performance metric of the content item; and (v) determining whether the application receives inadvertent clicks at the content item based on the performance metric and the location parameter.
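The final determination step, combining the content item's performance metric with the interaction confirmation script's location parameter, can be sketched as a simple rule. The specific metrics (click-through rate, confirmation rate) and thresholds are illustrative assumptions, not the application's criteria:

```python
# Sketch: flag likely inadvertent clicks by combining a performance
# metric with the confirmation script's location parameter.

def receives_inadvertent_clicks(ctr, confirm_rate, script_near_content):
    """A high click-through rate with few confirmed interactions
    suggests accidental taps, especially when the confirmation script
    is positioned right at the content item."""
    if not script_near_content:
        return False
    return ctr > 0.10 and confirm_rate < 0.2

flagged = receives_inadvertent_clicks(0.25, 0.05, True)   # suspicious
ok = receives_inadvertent_clicks(0.25, 0.80, True)        # confirmed clicks
```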
 
  
 
===Systems and Methods of Power-Management on Smart Devices ([[US Patent Application 18351404. Systems and Methods of Power-Management on Smart Devices simplified abstract|18351404]])===
 
Sahana Mysore
  
 
'''Brief explanation'''
 
This patent application describes methods, devices, and systems for power-management on camera devices. The invention focuses on conserving power by deactivating the wireless communication component of the camera device when not in use.
 
 
* The camera device captures a series of images when a motion event is detected.
 
* The motion event is characterized to determine its significance.
 
* Based on the characterization, the camera device decides whether to send video data to a remote computing system.
 
* If the decision is to send video data, the wireless communication component is activated.
 
* The camera device establishes a wireless connection with the remote computing system.
 
* Video information is then sent to the remote computing system via the established wireless connection.
 
 
'''Abstract'''
 
The various embodiments described herein include methods, devices, and systems for power-management on camera devices. In one aspect, a method is performed at a camera device having memory, one or more processors, and an image sensor. The method includes: (1) while a wireless communication component of the camera device is deactivated: (a) capturing a plurality of images containing a motion event; (b) characterizing the motion event; and (c) determining, based on the characterization of the motion event, whether to send video data to a remote computing system; and (2) in accordance with a determination to send video data to the remote computing system: (i) activating the wireless communication component of the camera device; (ii) establishing a wireless connection to the remote computing system via the wireless communication component; and (iii) sending video information to the remote computing system via the established wireless connection.
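The decision flow in the method (characterize a motion event while the radio is off; activate the radio and send video only when warranted) can be sketched as follows. The characterization rule and threshold are illustrative assumptions:

```python
# Sketch: keep the wireless component off until a characterized motion
# event justifies activating it and sending video.

def characterize(frames):
    """Toy characterization: fraction of frames containing motion."""
    return sum(frames) / len(frames)

def handle_motion_event(frames, significance_threshold):
    radio_active = False
    sent = False
    score = characterize(frames)
    if score >= significance_threshold:
        radio_active = True   # activate wireless component
        sent = True           # establish connection, send video info
    return radio_active, sent

# 3 of 4 captured frames contain motion: significant enough to send.
radio, sent = handle_motion_event([1, 1, 0, 1], significance_threshold=0.5)
```

The power saving comes from the ordering: capture and characterization happen first, and the radio is only powered for events that pass the check.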
 
  
 
===LENS SHADING CORRECTION TO MINIMIZE DATA LOSS ([[US Patent Application 18022710. LENS SHADING CORRECTION TO MINIMIZE DATA LOSS simplified abstract|18022710]])===
 
Karl Rasche
  
 
'''Brief explanation'''
 
The patent application describes a method for correcting images taken with a camera lens. Here are the key points:
 
 
* The method starts by receiving a raw image and a stored calibration.
 
* The stored calibration is used to determine a lens shading correction (LSC) gain.
 
* The LSC gain is then factored into a factored gain, which includes a local tone mapping (LTM) gain and the factored LSC gain.
 
* Finally, the factored gain is applied to the raw image to generate a corrected image.
 
 
Overall, this method improves the quality of images by correcting for lens shading and applying local tone mapping.
 
 
'''Abstract'''
 
A method including receiving a raw image and a stored calibration, determining a lens shading correction (LSC) gain based on the stored calibration, factoring the LSC gain into a factored gain including a local tone mapping (LTM) gain and a factored LSC gain, and applying the factored gain to the raw image to generate a corrected image.
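The factored-gain arithmetic can be sketched numerically: the calibrated LSC gain is split so part of it is merged with the local tone mapping (LTM) gain and the rest remains as a factored LSC gain, and the product is applied to the raw pixel. The split rule (an exponent) is an illustrative assumption; the net gain still equals the full LSC gain times the LTM gain:

```python
# Sketch: factor the LSC gain between the LTM stage and a residual LSC
# stage, then apply the combined gain to a raw pixel value.

def apply_factored_gain(raw_px, lsc_gain, ltm_gain, lsc_in_ltm=0.5):
    """Split lsc_gain; merge one factor with LTM, keep the residual."""
    merged = ltm_gain * (lsc_gain ** lsc_in_ltm)    # LTM side of the split
    factored_lsc = lsc_gain ** (1.0 - lsc_in_ltm)   # residual LSC side
    return raw_px * merged * factored_lsc

# Net effect: 100 * 1.44 * 1.0 = 144, regardless of the split point.
corrected = apply_factored_gain(100.0, lsc_gain=1.44, ltm_gain=1.0)
```

Splitting the gain this way is what lets the pipeline avoid clipping (and hence data loss) that a single large LSC gain applied up front could cause.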
 
  
 
===MULTI-USIM DEVICE ACCESSING SERVICES OF A SECOND CELLULAR NETWORK THROUGH A FIRST CELLULAR NETWORK VIA A GATEWAY ([[US Patent Application 18011497. MULTI-USIM DEVICE ACCESSING SERVICES OF A SECOND CELLULAR NETWORK THROUGH A FIRST CELLULAR NETWORK VIA A GATEWAY simplified abstract|18011497]])===
 
Pavan NUGGEHALLI
  
 
'''Brief explanation'''
 
The patent application describes a user device (UE) that has two universal subscriber identity modules (USIMs) for connecting to two different cellular networks.
 
* The first cellular network has a first radio access network (RAN) and the second cellular network has a second RAN.
 
* The UE determines if the first cellular network allows access to the gateway of the second cellular network.
 
* If access is allowed, the UE connects to the second cellular network through the first RAN and the gateway.
 
* The UE also prevents the second cellular network from sending information to the UE through the second RAN.
 
 
'''Abstract'''
 
A user device (UE) equipped with a first universal subscriber identity module (USIM) for communicating with a first cellular network and a second USIM for communicating with a second cellular network, wherein the first cellular network includes a first radio access network (RAN) and the second cellular network includes a second RAN, determines whether the first cellular network supports access to a gateway of the second cellular network, and when the first cellular network supports the access: (i) connects to the second cellular network via the first RAN and the gateway, and (ii) prevents the second cellular network from transmitting information to the UE via the second RAN.
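The UE-side decision reduces to a conditional: if the first network supports access to the second network's gateway, connect through the first RAN and suppress reception over the second RAN. Field names in this sketch are illustrative:

```python
# Sketch: multi-USIM access-path selection per the abstract.

def select_access(first_net_supports_gateway):
    if first_net_supports_gateway:
        # Reach the second network's services through RAN1 + gateway,
        # and prevent the second network from using RAN2 toward the UE.
        return {"connect_via": "RAN1+gateway", "ran2_rx_enabled": False}
    # Fallback: use the second network's own RAN directly.
    return {"connect_via": "RAN2", "ran2_rx_enabled": True}

path = select_access(True)
```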
 
  
 
===Camera Device with an Adjustable Stand ([[US Patent Application 18027309. Camera Device with an Adjustable Stand simplified abstract|18027309]])===
 
Cindy Ngoc-Tran Au
 
 
'''Brief explanation'''
 
* The patent application describes a camera device with an adjustable stand.

* The camera device consists of a head assembly and a stand assembly connected by a hinge.

* The hinge allows for a 360-degree range of pan and a 45-degree range of tilt of the head assembly.

* The stand assembly can be rotated to configure the camera device in different states, including a tabletop state and a wall state.

* The tabletop state allows the camera device to rest on a horizontal surface with a low profile.

* The wall state provides additional clearance between the head assembly and a vertical surface to which the stand assembly is affixed.

* In the wall state, the stand assembly can route and constrain the camera device's cable.
 
 
'''Abstract'''
 
The present document describes a camera device with an adjustable stand. The camera device includes a head assembly and a stand assembly pivotally connected together by a stem forming a hinge. The stem provides a 360-degree range of pan and a 45-degree range of tilt of the head assembly relative to the stand assembly. The stand assembly is rotatably movable relative to the head assembly to configure the camera device in different configuration states, including a tabletop state and a wall state. The tabletop state has a low profile for resting on a horizontal surface, and the wall state has a high profile, which provides additional clearance between the head assembly and a vertical surface to which the stand assembly is affixed. In the wall state, a cable of the camera device can be routed through and constrained by the stand assembly.
 

Latest revision as of 03:55, 4 December 2023

Summary of the patent applications from Google LLC on November 9th, 2023

Google LLC has recently filed several patents for various technologies. These patents cover a range of inventions, including camera devices with adjustable stands, dual SIM capabilities in user devices, image correction methods, power management on camera devices, detection of improper content presentation, video recording management using cloud-based platforms, video content recommendation systems, training encoders and decoders for watermark generation, visual effects in videoconferencing, and remote attestation methods.

Notable recent patent applications from Google LLC include:

  • A camera device with an adjustable stand that allows for 360-degree pan and 45-degree tilt, and can be configured in tabletop or wall states.
  • A user device with dual SIM capabilities for connecting to two different cellular networks, with the ability to determine if access to a second network is allowed through the first network's gateway.
  • A method for correcting images taken with a camera lens by applying lens shading correction and local tone mapping.
  • Power management on camera devices that conserves power by deactivating the wireless communication component when not in use, and activating it when video data needs to be sent to a remote computing system.
  • A system for detecting improper presentation of content items by applications on client devices, based on performance metrics and location parameters.
  • A system for managing video recording using a network-enabled video camera and a cloud-based home assistant integration platform, where video streams are analyzed to determine user identity and specific portions are stored in the user's account.
  • A system for recommending video content to users based on their preferences and context, by identifying similar users, ranking candidate video items, and providing the highest-ranked item to the user.
  • A method for training encoders and decoders to generate and decode watermarks in data items, involving generating multiple watermarks, adding distortions, and adjusting training parameters based on error values.
  • A server system for applying visual effects to video streams during videoconferencing sessions and transmitting the modified streams to other client devices.
  • A method for remote attestation, which involves establishing a secure communication session between two computing devices and generating an attestation report based on an ephemeral session key.

Overall, these recent patent applications from Google LLC demonstrate the organization's focus on improving camera devices, user devices, image correction, power management, content detection, video recording management, video content recommendation, watermarking, videoconferencing, and secure communication.



Contents

Patent applications for Google LLC on November 9th, 2023

USING AMBIENT LIGHT SENSORS AND AMBIENT AUDIO SENSORS TO DETERMINE SLEEP QUALITY (18014588)

Main Inventor

Kenneth Mixter


LOW RESIDUAL LAYER THICKNESS WAVEGUIDE WITH HIGH-INDEX COATING (18141674)

Main Inventor

Eliezer Glik


MANAGING DISPLAY CONTENT ON A WEARABLE DEVICE USING A CONVERSATION GRAPH (18246448)

Main Inventor

Alexander James Faaborg


ALGORITHMICALLY ADJUSTING THE HIT BOX OF ICONS BASED ON PRIOR GAZE AND CLICK INFORMATION (17662175)

Main Inventor

Dongeek Shin


TRACKING ALGORITHM FOR CONTINUOUS AR EXPERIENCES (18312448)

Main Inventor

Luca Ballan


Efficiently Augmenting Images with Related Content (18354101)

Main Inventor

Charles Yang


Methods and Systems for Positioning Animated Images Within a Dynamic Keyboard Interface (18351890)

Main Inventor

David McIntosh


Independent Fragment Compactions of Striped Data (17662547)

Main Inventor

Michael Lai


SORTING FOR DATA-PARALLEL COMPUTING DEVICES (18221506)

Main Inventor

Allan Stuart Mackinnon, JR.


Transferral Of Process State And/Or Components In Computing Environments (18355826)

Main Inventor

Christopher Jonathan Phoenix


User Triggered Virtual Machine Cloning for Recovery/Availability/Scaling (17737305)

Main Inventor

Diwakar Gupta


Maintaining Transactional Consistency in Columnar Engine (17951193)

Main Inventor

Anjan Kumar Amirishetty


Systems and Methods for Anonymizing Large Scale Datasets (18345657)

Main Inventor

Alessandro Epasto


ON-DEVICE GRAMMAR CHECKING (18246326)

Main Inventor

Matthew Sharifi


Systems and Methods for Machine-Learned Models Having Convolution and Attention (18355243)

Main Inventor

Zihang Dai


Modeling Dependencies with Global Self-Attention Neural Networks (18044842)

Main Inventor

Zhuoran Shen


TRAINING NEURAL NETWORKS USING SIGN AND MOMENTUM BASED OPTIMIZERS (18313291)

Main Inventor

Xiangning Chen


Augmentation of Audiographic Images for Improved Machine Learning (18350464)

Main Inventor

Daniel Sung-Joon Park


SYSTEM(S) AND METHOD(S) FOR JOINTLY LEARNING MACHINE LEARNING MODEL(S) BASED ON SERVER DATA AND CLIENT DATA (17848947)

Main Inventor

Sean Augenstein


Iterative Supervised Learning of Quantum Processor Error Models (17738642)

Main Inventor

Paul Victor Klimov


CHARACTERIZATION OF TIME-CORRELATED QUANTUM ERRORS THROUGH ENTANGLEMENT (17928349)

Main Inventor

Yuezhen NIU


Contrastive Sequence-to-Sequence Data Selector (18351397)

Main Inventor

Wei Wang


Machine Learning for High Quality Image Processing (18013802)

Main Inventor

Noritsugu Kanazawa


Enhanced Photo Relighting Based on Machine Learning Models (18028930)

Main Inventor

Sean Ryan Francesco Fanello


SYSTEM AND METHOD FOR CONCURRENT ODOMETRY AND MAPPING (18224414)

Main Inventor

Esha Nerurkar


Use Of Image Sensors To Query Real World for Geo-Reference Information (18197364)

Main Inventor

Juan David Hincapie


IDENTIFYING A POSITION OF A CONTROLLABLE DEVICE USING A WEARABLE DEVICE (18246464)

Main Inventor

Shengzhi Wu


OPEN-VOCABULARY OBJECT DETECTION IN IMAGES (18144045)

Main Inventor

Matthias Johannes Lorenz Minderer


GUIDING FINGERPRINT SENSING VIA USER FEEDBACK (18245454)

Main Inventor

Scott Jenson


Speaker Embeddings for Improved Automatic Speech Recognition (17661832)

Main Inventor

Fadi Biadsy


Speech Personalization and Federated Training Using Real World Noise (18356743)

Main Inventor

Matthew Sharifi


MANAGING DIALOG DATA PROVIDERS (18222325)

Main Inventor

David Kliger Elson


MULTIMODE HIGH-ISOLATION ANTENNA SYSTEM (18222684)

Main Inventor

Ming Zheng


SEMI-PERSISTENT SCHEDULING IN LATENCY-SENSITIVE SYSTEMS (18018825)

Main Inventor

Kao-Peng Chou


Rate Update Engine For Reliable Transport Protocol (18222590)

Main Inventor

Xiaoming Wang


WATERMARK-BASED MESSAGE QUEUE (18352689)

Main Inventor

Yi Cui


REMOTE ATTESTATION TRANSPORT LAYER SECURITY AND SPLIT TRUST ENCRYPTION (18352373)

Main Inventor

Keith Moyer


Cloud-Based Application of Visual Effects to Video (17738176)

Main Inventor

Stéphane Hervé Loïc Hulaud


END-TO-END WATERMARKING SYSTEM (18008789)

Main Inventor

Xiyang Luo


PROVIDING A MESSAGE BASED ON A CHANGE IN WATCH TIME (18221731)

Main Inventor

Prachi Gupta


Video Integration with Home Assistant (18351370)

Main Inventor

Jessica Yuan


Systems and Methods for Detecting Improper Implementation of Presentation of Content Items by Applications Executing on Client Devices (18223797)

Main Inventor

Priyanshu Jain


Systems and Methods of Power-Management on Smart Devices (18351404)

Main Inventor

Sahana Mysore


LENS SHADING CORRECTION TO MINIMIZE DATA LOSS (18022710)

Main Inventor

Karl Rasche


MULTI-USIM DEVICE ACCESSING SERVICES OF A SECOND CELLULAR NETWORK THROUGH A FIRST CELLULAR NETWORK VIA A GATEWAY (18011497)

Main Inventor

Pavan NUGGEHALLI


Camera Device with an Adjustable Stand (18027309)

Main Inventor

Cindy Ngoc-Tran Au