'''Summary of the patent applications from Google LLC on October 26th, 2023'''

* Google LLC has recently filed patents for various technologies and methods related to managing mobility in user equipment devices, memory devices, sharing location information on social networking services, reversible assembly of textiles to electronic-speaker devices, doorbell cameras, adjusting the quality level of media content during synchronized presentation, matching live media content, providing information items during communication sessions, adapting machine learning configurations in communication systems, and blind battery connectors.

* Notable applications include:
** A method for managing mobility in a user equipment device operating in dual connectivity with a master node and a secondary node in a radio access network.
** A memory device consisting of a memory cell array block and control logic for erasing, reading, or programming memory cells.
** A computer system for sharing location information between users on a social networking service.
** A toolkit for the reversible assembly of textiles to electronic-speaker devices.
** A doorbell camera with various features such as a camera module, image sensor, infrared illuminators, waterproof button assembly, and microphone and speaker.
** A method for adjusting the quality level of media content during synchronized presentation on multiple user devices.
** A system and method for matching live media content by comparing portions of media content received from a client device with content feeds.
** A method for providing information items during a communication session between two computing devices.
** Techniques and apparatuses for adapting a machine learning configuration in an end-to-end communication system.
** A blind battery connector that uses magnets to automatically align and engage the battery connector with the system-side connector.

 
==Patent applications for Google LLC on October 26th, 2023==
 
Joseph Daniel Lowney
  
 
'''Brief explanation'''
 
The abstract describes a testing mechanism that is used to test structures on a wafer. The mechanism includes waveguides and test structures on the wafer. Light sources are used to send beams of light through the wafer to the test structures. When the test structures receive the light, they guide it to an exit location. A conoscope is then used to measure the diffraction efficiency of the test structure by analyzing the light that exits it.
 
 
'''Abstract'''
 
An on-wafer testing mechanism includes multiple waveguides and test structures disposed on a wafer. Light sources are coupled to the wafer and provide beams of light to the structures disposed on the wafer by propagating the light through the wafer. In response to receiving at least a portion of a beam of light, a test structure is configured to guide the light to an exit location on the test structure. As light exits a test structure, a conoscope determines the diffraction efficiency of the test structure based on a measurement taken of the light exiting the test structure.
 
  
 
===SPATIALLY SELECTIVE TINTING ARCHITECTURE FOR HEAD-WORN DEVICES ([[US Patent Application 18137393. SPATIALLY SELECTIVE TINTING ARCHITECTURE FOR HEAD-WORN DEVICES simplified abstract|18137393]])===
 
Joseph Daniel Lowney
  
 
'''Brief explanation'''
 
The abstract describes a head-worn device (HWD) that is designed to reduce glints. The HWD includes a frame that is worn by the user and a lens with a tint that blocks some ambient light. The lens also has an aperture. The HWD also includes a waveguide that directs light to the aperture of the lens. The lens and waveguide are aligned in a way that blocks ambient light from reaching the user, while allowing unattenuated light from the waveguide to pass through.
 
 
'''Abstract'''
 
To help reduce glints in head-worn device (HWD), a HWD includes a spatially selective tinting architecture. To this end, the HWD includes a frame to be worn by a user. Further, the HWD includes a first lens that has a first portion including a tint configured to block at least a portion of ambient light and a second portion including an aperture. Additionally, the HWD includes a waveguide having an outcoupler configured to provide light to the aperture of the first lens. The first lens and waveguide are aligned such that a portion of ambient light from outside the HWD is blocked before reaching the user while light from the waveguide passes through to the user unattenuated.
 
  
 
===Variable Mesh Low Mass MEMS Mirrors ([[US Patent Application 18211910. Variable Mesh Low Mass MEMS Mirrors simplified abstract|18211910]])===
 
Kevin Yasumura
  
 
'''Brief explanation'''
 
The abstract describes a component, like a MEMS mirror, that has a variable mesh pattern on its backside surface. The mesh pattern consists of ribs that are thicker near the center or axis of rotation of the component and narrower as they move away from the center.
 
 
'''Abstract'''
 
The present disclosure provides a component, such as a MEMS mirror or other generally disc-shaped component, having a variable mesh pattern across a backside surface thereof. The variable mesh includes ribs having a first thickness near a center portion or axis of rotation of the components, and a second narrower thickness at portions farther from the center or axis of rotation.
 
  
 
===HIGH-RELIABILITY STACKED WAVEGUIDE WITH REDUCED SENSITIVITY TO PUPIL WALK ([[US Patent Application 18137397. HIGH-RELIABILITY STACKED WAVEGUIDE WITH REDUCED SENSITIVITY TO PUPIL WALK simplified abstract|18137397]])===
 
Joseph Daniel Lowney
  
 
'''Brief explanation'''
 
The abstract describes a stacked waveguide system that is used in a user interface, such as a virtual reality headset. The system consists of two waveguides, with the first waveguide located closest to the user's eye and the second waveguide located further away.
 
 
The first waveguide operates in a reflection mode, which means that the grating structures (a type of optical component) are placed on the surface of the waveguide that is opposite to the user's eye and facing towards the interior of the stacked waveguide. This allows the waveguide to reflect and guide light towards the user's eye.
 
 
On the other hand, the second waveguide operates in a transmission mode. This means that the grating structures are placed on the surface of the waveguide that is facing the user's eye and also facing towards the interior of the stacked waveguide. This allows the waveguide to transmit light towards the user's eye.
 
 
Overall, this stacked waveguide system is designed to efficiently guide and transmit light to the user's eye, providing an enhanced visual experience in the user interface.
 
 
'''Abstract'''
 
A first waveguide of a stacked waveguide located closest to the eye of a user operates in a reflection mode such that the set of grating structures of the first waveguide are disposed on a surface of the first waveguide opposite the eye of the user and towards the interior of the stacked waveguide. Additionally, a second waveguide of the stacked waveguide located further from the eye of the user operates in a transmission mode such that the grating structures of the second waveguide are disposed on a surface of the second waveguide facing the eye of the user and facing the interior of the stacked waveguide.
 
  
 
===FITTING MECHANISMS FOR EYEWEAR WITH VISIBLE DISPLAYS AND ACCOMODATION OF WEARER ANATOMY ([[US Patent Application 18245845. FITTING MECHANISMS FOR EYEWEAR WITH VISIBLE DISPLAYS AND ACCOMODATION OF WEARER ANATOMY simplified abstract|18245845]])===
 
Alex Olwal
  
 
'''Brief explanation'''
 
The abstract describes a type of eyewear that has an adjustable nose bridge connecting two half-frames. These half-frames hold see-through lenses, and at least one of the lenses has a virtual display embedded or overlaid on it. The eyeglasses frame uses the adjustable nose bridge and temple pieces to hold the lenses in front of the person's eyes. The virtual display has a designated area for the person to fully view its content. The adjustable nose bridge allows for independent adjustments to align the eyeglasses frame's optics and the virtual display with the user's face and eye shape.
 
 
'''Abstract'''
 
An eyewear includes an adjustable nose bridge adapted to interconnect two half-frames of an eyeglasses frame. The two half-frames are adapted to hold a pair of see-through lenses. A virtual display is embedded in, or overlaid on, at least one of the pair of see-through lenses. The eyeglasses frame utilizes the adjustable nose bridge over a person's nose and temple pieces that rest over the person's ears to hold the pair of see-through lenses in position in front of the person's eyes. The virtual display has an associated eye box for full viewing of the virtual display by the person. The adjustable nose bridge includes one or more bridge-frame fastener arrangements adapted to provide independent adjustments of one or more geometrical parameters of the eyeglasses frame for aligning optics of the eyeglasses frame and the eye box to the user's face and eye geometry.
 
  
 
===REDUCING HOLE BEZEL REGION IN DISPLAYS ([[US Patent Application 18202877. REDUCING HOLE BEZEL REGION IN DISPLAYS simplified abstract|18202877]])===
 
Sun-il Chang
  
 
'''Brief explanation'''
 
The abstract describes a display device consisting of an array of light emitting elements and an underlying array of pixel driver elements that drive them. A hole passes through both arrays. The light emitting elements in a region adjacent to the hole are arranged to provide one resolution, while those in a region farther from the hole are arranged to provide a different resolution.
 
 
'''Abstract'''
 
A device includes: an array of light emitting elements extending in a first plane, each light emitting element being arranged to emit light; an array of pixel driver elements extending in a second plane beneath the array of pixels, in which each pixel driver element is configured to drive a corresponding light emitting element of the array of light emitting elements; a hole positioned within the array of light emitting elements and the array of pixel driver elements, in which the hole extends from the first plane through the second plane, a first multiple of light emitting elements from the array of light emitting elements in a first region adjacent the hole are arranged to provide a first resolution, and a second multiple of light emitting elements from the array of elements in a second region away from the hole are arranged to provide a second resolution.
 
  
 
===Selecting an Input Mode for a Virtual Assistant ([[US Patent Application 18338969. Selecting an Input Mode for a Virtual Assistant simplified abstract|18338969]])===
 
Ibrahim Badr
  
 
'''Brief explanation'''
 
This abstract describes methods, systems, and apparatus for selecting an input mode for a virtual assistant application on a mobile device. The method involves receiving a request to launch the virtual assistant application from the lock screen of the device. Input signals are then obtained in response to this request. The method further involves selecting an input mode for the virtual assistant application from a set of candidate input modes based on the input signals. Each candidate input mode is of a different input type, such as image or audio. The input mode of the image type receives pixel data as input, while the input mode of the audio type receives audio input. The virtual assistant application then presents content that is selected based on the input signals received using the selected input mode.
 
 
'''Abstract'''
 
Methods, systems, and apparatus for selecting an input mode are described. In one aspect, a method includes receiving request data specifying a request to launch a virtual assistant application from a lock screen of a mobile device. In response to receiving the request data, input signals are obtained. A selection of an input mode for the virtual assistant application is made, from candidate input modes, based on the input signals. Each candidate input mode is of an input type different from each other input type of each other candidate input mode. The input types include an image type and an audio type. The input mode of the image type receives pixel data for input to the virtual assistant application. The input mode of the audio type receives audio input for the virtual assistant application. The virtual assistant application presents content selected based on input signals received using the selected input mode.
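
'''Illustrative sketch'''

As a rough illustration of the selection step, the hypothetical Python sketch below picks between an image-type and an audio-type input mode from a few lock-screen launch signals. The signal names and thresholds are assumptions made for this example and are not taken from the application.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class InputSignals:
    """Hypothetical signals gathered when the assistant is launched from the lock screen."""
    ambient_noise_db: float      # microphone noise estimate
    camera_facing_scene: bool    # device held up as if to photograph something
    screen_brightness: float     # 0.0 - 1.0

def select_input_mode(signals: InputSignals) -> str:
    """Pick one candidate input mode; each candidate is of a different input type."""
    # If the camera is pointed at a scene, pixel data is likely the intended input.
    if signals.camera_facing_scene:
        return "image"
    # In a very noisy environment, audio input is unreliable; fall back to image.
    if signals.ambient_noise_db > 70:
        return "image"
    return "audio"

if __name__ == "__main__":
    print(select_input_mode(InputSignals(45.0, False, 0.8)))  # -> "audio"
</syntaxhighlight>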
 
  
 
===ENHANCED COMPUTING DEVICE REPRESENTATION OF AUDIO ([[US Patent Application 18044831. ENHANCED COMPUTING DEVICE REPRESENTATION OF AUDIO simplified abstract|18044831]])===
 
Dimitri Kanevsky
  
 
'''Brief explanation'''
 
This abstract describes a method for processing audio data recorded by microphones on a computing device. The method involves generating structured sound records based on the audio data, where each record includes a description of a sound, a time stamp indicating when the sound occurred, and a label that is different from a text transcription of the sound. The method also includes outputting a graphical user interface that displays a timeline representation of the structured sound records.
 
 
'''Abstract'''
 
An example method includes receiving, by one or more processors of a computing device, audio data recorded by one or more microphones of the computing device; and generating, based on the audio data and by the one or more processors, one or more structured sound records, a first structured sound record of the one or more structured sound records including: a description of a first sound, the description including a descriptive label of the first sound, the descriptive label different than a text transcription of the first sound, and a time stamp indicating a time at which the first sound occurred; and outputting a graphical user interface including timeline representation of the one or more structured sound records.
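
'''Illustrative sketch'''

The structured sound record lends itself to a simple data-structure sketch. The Python below models a record with a descriptive label (deliberately distinct from a transcription), a free-text description, and a time stamp, plus a text stand-in for the timeline view; the field names are assumptions for illustration only.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StructuredSoundRecord:
    descriptive_label: str   # e.g. "dog barking"; not a text transcription of the sound
    description: str
    timestamp: datetime      # when the sound occurred

def render_timeline(records: list[StructuredSoundRecord]) -> str:
    """Very small text stand-in for the timeline GUI described in the abstract."""
    lines = [
        f"{r.timestamp:%H:%M:%S}  [{r.descriptive_label}] {r.description}"
        for r in sorted(records, key=lambda r: r.timestamp)
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    recs = [
        StructuredSoundRecord("doorbell", "Two chimes near the front door", datetime(2023, 10, 26, 9, 15, 2)),
        StructuredSoundRecord("dog barking", "Loud barking, likely nearby", datetime(2023, 10, 26, 9, 14, 40)),
    ]
    print(render_timeline(recs))
</syntaxhighlight>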
 
  
 
===High Availability Multi-Single-Tenant Services ([[US Patent Application 18337566. High Availability Multi-Single-Tenant Services simplified abstract|18337566]])===
 
Grigor Avagyan
  
 
'''Brief explanation'''
 
This abstract describes a method for managing virtual machine instances in a pool. The method involves running multiple primary virtual machine instances, each running a specific service. Additionally, a shared secondary virtual machine instance is created. If a primary instance becomes unavailable, the corresponding service is moved to the secondary instance. After the move, the method adjusts the resources of the secondary instance based on the needs of the service.
 
 
'''Abstract'''
 
A method includes executing a pool of primary virtual machine (VM) instances, each primary VM instance executing a corresponding individual service instance, and instantiating a shared secondary VM instance. The method includes identifying unavailability of a particular primary VM instance of the pool of primary VM instances, and causing the corresponding individual service instance executing on the particular primary VM instance to failover to the shared secondary VM instance to commence executing the corresponding individual service instance. The method includes, after the failover to the shared secondary VM instance, determining a difference between a current resource level of the shared secondary VM instance and a target resource level associated with the corresponding individual service instance, and adjusting the current resource level of the secondary VM instance based on the difference.
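
'''Illustrative sketch'''

A minimal Python sketch of the failover flow is shown below: a pool of primary VM instances, one shared secondary, and a resource adjustment computed from the difference between the secondary's current level and the service's target level. Class and field names are hypothetical and the resource model is deliberately simplified.

<syntaxhighlight lang="python">
class VMInstance:
    """Toy VM: name, current resource level, and the service instance it runs (if any)."""
    def __init__(self, name: str, cpus: int, memory_gb: int):
        self.name, self.cpus, self.memory_gb = name, cpus, memory_gb
        self.service = None
        self.available = True

class MultiSingleTenantPool:
    """One service per primary VM, plus a single shared secondary for failover."""
    def __init__(self, primaries: list, secondary: VMInstance):
        self.primaries, self.secondary = primaries, secondary

    def handle_unavailability(self, failed: VMInstance) -> None:
        failed.available = False
        # Fail the individual service instance over to the shared secondary VM.
        self.secondary.service, failed.service = failed.service, None
        # Determine the difference between current and target resource levels...
        target = self.secondary.service["target_resources"]
        cpu_diff = target["cpus"] - self.secondary.cpus
        mem_diff = target["memory_gb"] - self.secondary.memory_gb
        # ...and adjust the secondary VM based on that difference.
        self.secondary.cpus += cpu_diff
        self.secondary.memory_gb += mem_diff

if __name__ == "__main__":
    primaries = [VMInstance(f"primary-{i}", cpus=8, memory_gb=32) for i in range(3)]
    for i, vm in enumerate(primaries):
        vm.service = {"name": f"service-{i}", "target_resources": {"cpus": 8, "memory_gb": 32}}
    pool = MultiSingleTenantPool(primaries, VMInstance("shared-secondary", cpus=2, memory_gb=8))
    pool.handle_unavailability(primaries[1])
    print(pool.secondary.service["name"], pool.secondary.cpus, pool.secondary.memory_gb)
</syntaxhighlight>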
 
  
 
===IMAGE MODELS TO PREDICT MEMORY FAILURES IN COMPUTING SYSTEMS ([[US Patent Application 17727454. IMAGE MODELS TO PREDICT MEMORY FAILURES IN COMPUTING SYSTEMS simplified abstract|17727454]])===
 
Gufeng Zhang
  
 
'''Brief explanation'''
 
This abstract describes a method for predicting the likelihood of a computer memory failure in the future. The method involves obtaining training data inputs that include information about correctable memory errors and whether these errors led to a failure of the computer memory. Image representations of the correctable memory error data are generated and processed using a machine learning model to estimate the likelihood of a future failure. The estimated likelihood is then compared to the actual failure data, and the model parameters are updated based on the difference between the two.
 
 
'''Abstract'''
 
Methods, systems and apparatus, including computer programs encoded on computer storage medium, for predicting a likelihood of a future computer memory failure. In one aspect training data inputs are obtained, where each training data input includes correctable memory error data that describes correctable errors that occurred in a computer memory and data indicating whether the correctable errors produced a failure of the computer memory. For each training data input, image representations of the correctable memory error data included in the training data input are generated. The image representations are processed using a machine learning model to output an estimated likelihood of a future failure of the computer memory. A difference between the estimated likelihood of the future failure of the computer memory and the data indicating whether the correctable errors produced a failure of the computer memory is computed. Values of model parameters are updated using the computed difference.
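
'''Illustrative sketch'''

The training loop can be illustrated with a small NumPy sketch: correctable-error logs are rendered into a 2D count "image" (here an assumed bank-by-time-bucket layout), a simple logistic model stands in for the machine learning model, and parameters are updated from the difference between the estimated failure likelihood and the observed outcome. The image layout and model choice are assumptions, not the claimed architecture.

<syntaxhighlight lang="python">
import numpy as np

N_BANKS, N_BUCKETS = 8, 16   # assumed image layout: memory bank x time bucket

def error_image(errors: list) -> np.ndarray:
    """errors: (bank, time_bucket) tuples for each correctable error -> 2D count image."""
    img = np.zeros((N_BANKS, N_BUCKETS))
    for bank, bucket in errors:
        img[bank % N_BANKS, bucket % N_BUCKETS] += 1.0
    return img

def train(inputs, labels, epochs=200, lr=0.1):
    """Logistic regression on flattened error images (stand-in for the ML model)."""
    w = np.zeros(N_BANKS * N_BUCKETS)
    b = 0.0
    for _ in range(epochs):
        for errs, failed in zip(inputs, labels):
            x = error_image(errs).ravel()
            p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # estimated failure likelihood
            diff = p - failed                        # compare estimate to observed outcome
            w -= lr * diff * x                       # update parameters from the difference
            b -= lr * diff
    return w, b

if __name__ == "__main__":
    healthy = [(0, 1)]                       # a single isolated correctable error
    failing = [(3, t) for t in range(12)]    # repeated errors in one bank
    w, b = train([healthy, failing], [0.0, 1.0])
    p = 1.0 / (1.0 + np.exp(-(w @ error_image(failing).ravel() + b)))
    print(round(p, 3))                       # close to 1.0 for the failing pattern
</syntaxhighlight>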
 
  
 
===INCREMENTAL VAULT TO OBJECT STORE ([[US Patent Application 18342581. INCREMENTAL VAULT TO OBJECT STORE simplified abstract|18342581]])===
 
Christopher Murphy
  
 
'''Brief explanation'''
 
The abstract describes a method for handling changes in a data volume. When a chunk of data in a current version of the data volume is changed, the method creates a block of data representing the changed chunk and stores it in an object store. The object store also stores previous versions of the data volume. The method then updates the corresponding entry in the index stored on the object store with the new data. Finally, the method deletes any blocks of data from the object store that are no longer associated with an entry in any index.
 
 
'''Abstract'''
 
A method includes receiving data representing a changed chunk of data in a current revision of a data volume, the changed chunk includes data having changes from previous data of a previous revision of the data volume. The method creates a block of data representing the changed chunk of data on the object store, the object store also stores previous revision data of the previous revision. The method determines a previous index stored on the object store corresponding to the previous revision, which includes entries including at least one corresponding to the previous revision data. The method creates a revised index that updates the corresponding entry with updated entry data representing the changed chunk of data. The method includes deleting, from the object store, each particular block of data stored on the object store that is no longer associated with an entry on any index stored on the object store.
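
'''Illustrative sketch'''

The bookkeeping can be sketched with a toy object store in Python: content-addressed blocks, one index per revision, an index update for each changed chunk, and a garbage-collection pass that deletes blocks no longer referenced by any index. The names and the content-addressing scheme are assumptions for illustration.

<syntaxhighlight lang="python">
import hashlib

class ObjectStoreVault:
    """Toy incremental vault: content-addressed blocks plus per-revision indexes."""
    def __init__(self):
        self.blocks = {}    # block key -> chunk bytes
        self.indexes = {}   # revision id -> {chunk offset: block key}

    def put_changed_chunk(self, prev_rev: str, new_rev: str, offset: int, data: bytes) -> None:
        key = hashlib.sha256(data).hexdigest()
        self.blocks[key] = data                          # block for the changed chunk
        revised = dict(self.indexes.get(prev_rev, {}))   # start from the previous index
        revised[offset] = key                            # update the corresponding entry
        self.indexes[new_rev] = revised

    def garbage_collect(self) -> None:
        # Delete blocks no longer referenced by an entry in any stored index.
        live = {key for index in self.indexes.values() for key in index.values()}
        self.blocks = {k: v for k, v in self.blocks.items() if k in live}

if __name__ == "__main__":
    vault = ObjectStoreVault()
    vault.put_changed_chunk("r0", "r1", offset=0, data=b"version one")
    vault.put_changed_chunk("r1", "r2", offset=0, data=b"version two")
    del vault.indexes["r1"]          # drop the old revision
    vault.garbage_collect()
    print(len(vault.blocks))         # 1: only the block still referenced by r2 remains
</syntaxhighlight>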
 
  
 
===Uncorrectable Memory Error Recovery For Virtual Machine Hosts ([[US Patent Application 18216988. Uncorrectable Memory Error Recovery For Virtual Machine Hosts simplified abstract|18216988]])===
 
Jue Wang
  
 
'''Brief explanation'''
 
The abstract describes a method for recovering from memory errors in a computer system. Instead of causing a complete shutdown, the system can identify the source of the error and take appropriate recovery actions. This includes handling errors caused by the system's kernel accessing memory in virtual machines, as well as errors caused by known defects in the hardware. The system can also detect overflow errors raised by the processor due to memory errors.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer-readable storage media for uncorrectable memory recovery. Different sources of uncorrectable memory error are handled to provide for recovery actions by a host kernel of a machine hosting one or more virtual machines. Rather than defaulting to kernel panic behavior, the host kernel can identify the source of uncorrectable error, and cause the host machine and/or the affected virtual machines to take recovery action that is less disruptive than abrupt shutdown from panic. For example, the host kernel can handle uncorrectable memory error caused by kernel accesses to guest memory of a host virtual machine, as well as uncorrectable memory error improperly raised as a result of known defects in host machine hardware. The host kernel can also be configured to detect sources of overflow in exceptions raised by a processor as a result of uncorrectable memory error.
 
  
 
===Memory Request Timeouts Using a Common Counter ([[US Patent Application 18044353. Memory Request Timeouts Using a Common Counter simplified abstract|18044353]])===
 
Nagaraj Ashok Putti
  
 
'''Brief explanation'''
 
This abstract describes techniques and devices that allow for memory request timeouts using a shared counter. When a memory request is received, a timeout value is generated for that request based on the current value of the shared counter and the required latency for the request. If there are any related memory requests already in the memory request buffer, their timeouts are adjusted accordingly. The memory request is then placed in the buffer, and the shared counter is incremented. If the incremented value matches the timeout value for the memory request, it is marked as timed out.
 
 
'''Abstract'''
 
Techniques and apparatuses are described that enable memory request timeouts using a common counter. A memory request is received, and a common count timeout is generated for the memory request based on a common count at a time of receipt and a latency requirement of the memory request. Common count timeouts of one or more related memory requests within a memory request buffer (if they exist) are adjusted as needed, and the memory request is placed in the memory request buffer. The common count is incremented, and the memory request is indicated as timed out in response to an incrementation of the common count matching the common count timeout for the memory request.
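
'''Illustrative sketch'''

A small Python sketch of the mechanism follows: each arriving request gets a timeout equal to the common count at receipt plus its latency requirement, timeouts of related buffered requests are adjusted (here simply capped, which is one possible policy), and a request is marked timed out when an increment of the common count matches its timeout. Names and the adjustment policy are assumptions.

<syntaxhighlight lang="python">
class CommonCounterTimeouts:
    """Toy memory-request buffer whose timeouts share one common counter."""
    def __init__(self):
        self.common_count = 0
        self.buffer = []   # dicts: {"id", "timeout", "timed_out"}

    def receive(self, request_id: str, latency_requirement: int, related_ids=()):
        # Timeout is derived from the common count at receipt plus the latency requirement.
        timeout = self.common_count + latency_requirement
        # Adjust related requests already in the buffer so they do not outlive the newcomer.
        for req in self.buffer:
            if req["id"] in related_ids and req["timeout"] > timeout:
                req["timeout"] = timeout
        self.buffer.append({"id": request_id, "timeout": timeout, "timed_out": False})

    def tick(self):
        self.common_count += 1
        for req in self.buffer:
            if req["timeout"] == self.common_count:
                req["timed_out"] = True   # incremented count matches this request's timeout

if __name__ == "__main__":
    q = CommonCounterTimeouts()
    q.receive("read-A", latency_requirement=3)
    q.receive("read-B", latency_requirement=1, related_ids=("read-A",))
    for _ in range(3):
        q.tick()
    print([(r["id"], r["timed_out"]) for r in q.buffer])
</syntaxhighlight>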
 
  
 
===FILE SYSTEMS WITH GLOBAL AND LOCAL NAMING ([[US Patent Application 18343096. FILE SYSTEMS WITH GLOBAL AND LOCAL NAMING simplified abstract|18343096]])===
 
Shahar Frank
  
 
'''Brief explanation'''
 
The abstract describes a method for storing data using multiple file systems (FSs) that are assigned both global identifiers and client-specific names. These FSs are managed using the global identifiers, and files are stored for clients using their respective client-specific names.
 
 
'''Abstract'''
 
A method for data storage includes specifying a plurality of File Systems (FSs) for use by multiple clients, including assigning to the FSs both respective global identifiers and respective client-specific names. The plurality of FSs is managed using the global identifiers, and files are stored for the clients in the FSs using the client-specific names.
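
'''Illustrative sketch'''

A toy Python registry below illustrates the split between global identifiers and client-specific names: file systems are created and managed under global IDs, while clients store files through their own local names, which are resolved to the global ID. All identifiers are hypothetical.

<syntaxhighlight lang="python">
class FileSystemRegistry:
    """Toy registry: each FS has one global identifier and per-client local names."""
    def __init__(self):
        self.fs_by_global_id = {}   # global id -> {"files": {...}}
        self.client_names = {}      # (client, client-specific name) -> global id

    def create_fs(self, global_id: str) -> None:
        self.fs_by_global_id[global_id] = {"files": {}}   # managed via the global id

    def assign_name(self, client: str, name: str, global_id: str) -> None:
        self.client_names[(client, name)] = global_id

    def store_file(self, client: str, fs_name: str, path: str, data: bytes) -> None:
        # Clients address the FS by their own name; storage resolves it to the global id.
        global_id = self.client_names[(client, fs_name)]
        self.fs_by_global_id[global_id]["files"][path] = data

if __name__ == "__main__":
    reg = FileSystemRegistry()
    reg.create_fs("fs-000042")
    reg.assign_name("tenant-a", "home", "fs-000042")
    reg.assign_name("tenant-b", "scratch", "fs-000042")
    reg.store_file("tenant-a", "home", "/notes.txt", b"hello")
    print(list(reg.fs_by_global_id["fs-000042"]["files"]))   # ['/notes.txt']
</syntaxhighlight>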
 
  
 
===CONTEXTUAL ESTIMATION OF LINK INFORMATION GAIN ([[US Patent Application 18215032. CONTEXTUAL ESTIMATION OF LINK INFORMATION GAIN simplified abstract|18215032]])===
 
Victor Carbune
  
 
'''Brief explanation'''
 
This abstract describes techniques for determining an information gain score for documents and presenting information from those documents to the user based on the score. The information gain score represents the additional information present in a document compared to previously viewed documents. This score is calculated using a machine learning model that analyzes the data in the documents. By considering the information gain scores of a set of documents, the system can provide the user with documents that are likely to provide the most valuable information.
 
 
'''Abstract'''
 
Techniques are described herein for determining an information gain score for one or more documents of interest to the user and present information from the documents based on the information gain score. An information gain score for a given document is indicative of additional information that is included in the document beyond information contained in documents that were previously viewed by the user. In some implementations, the information gain score may be determined for one or more documents by applying data from the documents across a machine learning model to generate an information gain score. Based on the information gain scores of a set of documents, the documents can be provided to the user in a manner that reflects the likely information gain that can be attained by the user if the user were to view the documents.
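
'''Illustrative sketch'''

The ranking idea can be illustrated with a deliberately simple proxy: score each candidate document by the fraction of its terms not covered by documents the user has already viewed, then rank by that score. The application computes the score with a machine learning model; the term-overlap heuristic below is only a stand-in.

<syntaxhighlight lang="python">
def information_gain_score(candidate: str, previously_viewed: list) -> float:
    """Fraction of the candidate's terms not covered by documents already viewed."""
    seen_terms = set()
    for doc in previously_viewed:
        seen_terms.update(doc.lower().split())
    cand_terms = set(candidate.lower().split())
    if not cand_terms:
        return 0.0
    novel = cand_terms - seen_terms
    return len(novel) / len(cand_terms)

def rank_by_information_gain(candidates: list, previously_viewed: list) -> list:
    """Order documents so those likely to add the most new information come first."""
    return sorted(candidates,
                  key=lambda d: information_gain_score(d, previously_viewed),
                  reverse=True)

if __name__ == "__main__":
    viewed = ["how to repot a basil plant"]
    docs = ["repot basil plant step by step", "common basil pests and how to treat them"]
    print(rank_by_information_gain(docs, viewed))
</syntaxhighlight>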
 
  
 
===PUBLISHER TOOL FOR CONTROLLING SPONSORED CONTENT QUALITY ACROSS MEDIATION PLATFORMS ([[US Patent Application 18214719. PUBLISHER TOOL FOR CONTROLLING SPONSORED CONTENT QUALITY ACROSS MEDIATION PLATFORMS simplified abstract|18214719]])===
 
Thomas Price
  
 
'''Brief explanation'''
 
The abstract describes a system and method for providing an interface that allows publishers to select sponsored content networks. The system includes a mediation server that provides a user interface to the publisher server, allowing access to stored data. The system receives a metric associated with a rule for filtering content items and applies it to a content network list, generating an updated list. The system then transmits mediation code, including the updated list, to the publisher server. When executed on a user device, the mediation code controls the display of content items according to the updated list and allows the user device to flag content items for modifying the list.
 
 
'''Abstract'''
 
Systems and methods are described for providing an interface and facilitating selection of sponsored content networks that provide sponsored content items. This may include providing, by a mediation server, a user interface to a publisher server, the user interface configured to provide access to data stored on the mediation server; receiving a metric associated with a rule for filtering content items associated with the publisher; applying the metric to a content network list associated with the publisher using the user interface to generate an updated content network list; and transmitting mediation code including the updated content network list to the publisher server, wherein the mediation code, when executed by a user device, (i) causes the user device to control display of content items according to the updated content network list and (ii) allows the user device to flag at least one content item for modifying the updated content network list.
 
  
 
===MULTI SOURCE EXTRACTION AND SCORING OF SHORT QUERY ANSWERS ([[US Patent Application 18000152. MULTI SOURCE EXTRACTION AND SCORING OF SHORT QUERY ANSWERS simplified abstract|18000152]])===
 
Preyas Dalsukhbhai Popat
  
 
'''Brief explanation'''
 
The abstract describes a method used by search engines to generate short answers for queries. This method involves training a score prediction engine using a set of training data, which includes passages that can be used as short answers and other passages. The goal is to find the best short answer from the available options. The training data also includes titles for the passages.
 
 
'''Abstract'''
 
Techniques of generating short answers for queries by a search engine include performing a training operation on a corpus of training data to train the score prediction engine, the corpus of training data including candidate passages providing short answers for display in callouts and remaining respective passages, from which a top scoring short answer is generated. In such implementations, the corpus of training data further includes the remaining respective passages and the respective titles of the candidate passage and remaining respective passages.
 
  
 
===Secure Provisioning with Hardware Verification ([[US Patent Application 18245678. Secure Provisioning with Hardware Verification simplified abstract|18245678]])===
 
Andrei Tudor Stratan
  
 
'''Brief explanation'''
 
The abstract describes a method for securely provisioning sensitive data to an integrated circuit (IC) device. The provisioning data is divided into fragments and encrypted using cryptographic keys. The IC device generates corresponding cryptographic keys. The encrypted fragments are transferred to the IC device through a secure process that involves sending a seed value, validating integrity data of the IC device, and transferring the encrypted fragment. Once the secure transfer is complete, the IC device can reconstruct the provisioning data using the encrypted fragments and cryptographic keys.
 
 
'''Abstract'''
 
The present disclosure describes various aspects of secure provisioning with hardware verification. In some aspects, sensitive data are provisioned to an integrated circuit (IC) device through a provisioning process. Provisioning data for the IC device are divided into a plurality of fragments, and each fragment is encrypted in one of a plurality of cryptographic keys. Corresponding cryptographic keys are generated at the IC device. The encrypted fragments are transferred to the IC device in respective secure transfer operations, each including sending a seed value to the IC device, validating integrity data configured to characterize integrated circuitry within a portion of the IC device specified by the seed value, and transferring the encrypted fragment to the IC device in response to validating the integrity data. In response to completing the secure transfer operation, the IC device may reconstruct the provisioning data from the encrypted fragments and corresponding cryptographic keys.
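
'''Illustrative sketch'''

The fragment-by-fragment transfer can be sketched in Python with standard-library primitives. The sketch splits provisioning data into fragments and, for each one, sends a seed, checks an HMAC over an assumed "circuit region" as stand-in integrity data, and transfers the fragment under a toy XOR keystream. None of this is production cryptography or the application's actual hardware-verification scheme; it only mirrors the sequence described in the abstract.

<syntaxhighlight lang="python">
import hashlib, hmac, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Illustrative XOR stream cipher keyed by SHA-256 blocks (not production crypto)."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class ToyICDevice:
    """Stands in for the IC being provisioned; regions model portions of its circuitry."""
    def __init__(self, regions: dict, keys: list):
        self.regions, self.keys, self.fragments = regions, keys, []

    def integrity_data(self, seed: bytes, region: str) -> bytes:
        return hmac.new(seed, self.regions[region], hashlib.sha256).digest()

    def receive_fragment(self, index: int, ciphertext: bytes) -> None:
        self.fragments.append((index, keystream_xor(self.keys[index], ciphertext)))

    def reconstruct(self) -> bytes:
        return b"".join(frag for _, frag in sorted(self.fragments))

def provision(device: ToyICDevice, provisioning_data: bytes, keys: list, regions: list) -> None:
    n = len(keys)
    step = -(-len(provisioning_data) // n)            # split the data into n fragments
    fragments = [provisioning_data[i * step:(i + 1) * step] for i in range(n)]
    for i, (frag, region) in enumerate(zip(fragments, regions)):
        seed = secrets.token_bytes(16)                # seed value sent to the device
        # The provisioner would hold a reference copy of the region; here it reuses the
        # device's copy purely so the toy validation can succeed.
        expected = hmac.new(seed, device.regions[region], hashlib.sha256).digest()
        assert hmac.compare_digest(device.integrity_data(seed, region), expected)
        device.receive_fragment(i, keystream_xor(keys[i], frag))   # transfer encrypted fragment

if __name__ == "__main__":
    keys = [secrets.token_bytes(32) for _ in range(2)]
    dev = ToyICDevice({"rom": b"\x01\x02", "fuse": b"\x03\x04"}, keys)
    provision(dev, b"secret-device-credentials", keys, ["rom", "fuse"])
    print(dev.reconstruct())
</syntaxhighlight>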
 
  
 
===ADAPTIVE NATURAL LANGUAGE STEGANOGRAPHY AND WATERMARKING FOR VIRTUAL ASSISTANTS ([[US Patent Application 18217351. ADAPTIVE NATURAL LANGUAGE STEGANOGRAPHY AND WATERMARKING FOR VIRTUAL ASSISTANTS simplified abstract|18217351]])===
 
Sebastian Millius
  
 
'''Brief explanation'''
 
The abstract describes methods, systems, and computer programs for identifying and responding to automated conversations. It explains that the methods involve initiating a conversation with a participant using natural language communication. The system then determines if the participant is automated by applying a predefined protocol that includes linguistic transformations. If the participant is found to be automated, the conversation can be switched to a different communication method.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for announcing and detecting automated conversation are disclosed. One of the methods includes initiating, over a natural language communication channel, a conversation with a communication participant using a natural language communication method that includes a dialogue of natural language communications. The communication participant is determined to be automated using a pre-defined adaptive interactive protocol that specifies natural language linguistic transformations defined in a sequence. The conversation can be transitioned to a communication method that is different from the natural language communication method in response to determining that the communication participant is automated.
 
  
 
===TRANSITIONING BETWEEN PRIOR DIALOG CONTEXTS WITH AUTOMATED ASSISTANTS ([[US Patent Application 18214404. TRANSITIONING BETWEEN PRIOR DIALOG CONTEXTS WITH AUTOMATED ASSISTANTS simplified abstract|18214404]])===
 
Justin Lewis
  
 
'''Brief explanation'''
 
This abstract describes techniques for retrieving prior context in an automated assistant. During a conversation between a user and the assistant, the user's input is used to generate a dialog context, which includes the user's intentions and associated information. Additional inputs can be used to generate a new dialog context that is different from the previous one. If the user commands the assistant to go back to the previous context, the assistant generates natural language output that conveys the intentions and information from that context, which is then presented to the user.
 
 
'''Abstract'''
 
Techniques are described related to prior context retrieval with an automated assistant. In various implementations, instance(s) of free-form natural language input received from a user during a human-to-computer dialog session between the user and an automated assistant may be used to generate a first dialog context. The first dialog context may include intent(s) and slot value(s) associated with the intent(s). Similar operations may be performed with additional inputs to generate a second dialog context that is semantically distinct from the first dialog context. When a command is received from the user to transition the automated assistant back to the first dialog context, natural language output may be generated that conveys at least one or more of the intents of the first dialog context and one or more of the slot values of the first dialog context. This natural language output may be presented to the user.
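
'''Illustrative sketch'''

The context bookkeeping can be sketched as a small stack of dialog contexts, each holding an intent and its slot values, with a "go back" operation that returns a natural-language summary of the prior context. Class names and the summary format are assumptions.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class DialogContext:
    intent: str
    slots: dict = field(default_factory=dict)

class AssistantContexts:
    """Keeps prior dialog contexts so the user can say e.g. 'go back to the flights'."""
    def __init__(self):
        self.history = []   # list of DialogContext, oldest first

    def push(self, intent: str, **slots) -> None:
        self.history.append(DialogContext(intent, dict(slots)))

    def transition_back(self) -> str:
        # Return to the previous context and summarize its intent and slot values.
        self.history.pop()                   # leave the current context
        prior = self.history[-1]
        slot_text = ", ".join(f"{k}={v}" for k, v in prior.slots.items())
        return f"Back to {prior.intent} ({slot_text})."

if __name__ == "__main__":
    ctx = AssistantContexts()
    ctx.push("book_flight", destination="Lisbon", month="June")
    ctx.push("find_restaurant", cuisine="seafood")
    print(ctx.transition_back())   # Back to book_flight (destination=Lisbon, month=June).
</syntaxhighlight>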
 
  
 
===Optimization of Parameter Values for Machine-Learned Models ([[US Patent Application 18347406. Optimization of Parameter Values for Machine-Learned Models simplified abstract|18347406]])===
 
Daniel Reuben Golovin
  
 
'''Brief explanation'''
 
The abstract describes a computing system and methods for optimizing adjustable parameters of a system. It introduces a parameter optimization system that uses black-box optimization techniques to suggest new parameter values for evaluation. This iterative process aims to improve the overall performance of the system, as measured by an objective function. The abstract also mentions a new optimization technique called "Gradientless Descent," which is faster than random search while maintaining its positive qualities.
 
 
'''Abstract'''
 
The present disclosure provides computing systems and associated methods for optimizing one or more adjustable parameters (e.g. operating parameters) of a system. In particular, the present disclosure provides a parameter optimization system that can perform one or more black-box optimization techniques to iteratively suggest new sets of parameter values for evaluation. The iterative suggestion and evaluation process can serve to optimize or otherwise improve the overall performance of the system, as evaluated by an objective function that evaluates one or more metrics. The present disclosure also provides a novel black-box optimization technique known as “Gradientless Descent” that is more clever and faster than random search yet retains most of random search's favorable qualities.
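
'''Illustrative sketch'''

The flavor of a gradientless, zeroth-order search can be sketched in a few lines of NumPy: propose random perturbations at geometrically spaced radii and keep any candidate that improves the objective. This is a simplified illustration of the general idea, not the specific "Gradientless Descent" procedure claimed in the application.

<syntaxhighlight lang="python">
import numpy as np

def gradientless_descent(f, x0, max_radius=1.0, min_radius=1e-3, iters=200, seed=0):
    """Zeroth-order search: try random perturbations at geometrically spaced radii,
    keeping any candidate that improves the objective (simplified sketch)."""
    rng = np.random.default_rng(seed)
    x, best = np.asarray(x0, dtype=float), f(x0)
    # Geometric ladder of step radii, from max_radius down toward min_radius.
    n_levels = int(np.ceil(np.log2(max_radius / min_radius))) + 1
    radii = [max_radius / 2**k for k in range(n_levels)]
    for _ in range(iters):
        for r in radii:
            direction = rng.normal(size=x.shape)
            direction /= np.linalg.norm(direction)
            candidate = x + r * direction
            value = f(candidate)
            if value < best:               # keep only improvements; no gradients needed
                x, best = candidate, value
    return x, best

if __name__ == "__main__":
    objective = lambda v: float(np.sum((np.asarray(v) - 3.0) ** 2))
    x, val = gradientless_descent(objective, x0=[0.0, 0.0])
    print(np.round(x, 2), round(val, 4))   # converges near [3. 3.]
</syntaxhighlight>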
 
  
 
===Systems and Methods for Contrastive Learning of Visual Representations ([[US Patent Application 18343579. Systems and Methods for Contrastive Learning of Visual Representations simplified abstract|18343579]])===
 
Ting Chen
  
 
'''Brief explanation'''
 
This abstract describes a method for improving visual representations using semi-supervised contrastive learning. The method involves using specific data augmentation techniques and a learnable transformation to enhance the visual representations. It also includes improvements for semi-supervised contrastive learning, such as generating an image classification model based on unlabeled training data, fine-tuning the model using labeled training data, and distilling the model into a smaller student model with fewer parameters.
 
 
'''Abstract'''
 
Systems, methods, and computer program products for performing semi-supervised contrastive learning of visual representations are provided. For example, the present disclosure provides systems and methods that leverage particular data augmentation schemes and a learnable nonlinear transformation between the representation and the contrastive loss to provide improved visual representations. Further, the present disclosure also provides improvements for semi-supervised contrastive learning. For example, computer-implemented method may include performing semi-supervised contrastive learning based on a set of one or more unlabeled training data, generating an image classification model based on a portion of a plurality of layers in a projection head neural network used in performing the contrastive learning, performing fine-tuning of the image classification model based on a set of one or more labeled training data, and after performing the fine-tuning, distilling the image classification model to a student model comprising a relatively smaller number of parameters than the image classification model.
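
'''Illustrative sketch'''

The contrastive pretraining stage rests on a loss that pulls two augmented views of the same image together and pushes apart views of different images. The NumPy sketch below computes such a normalized-temperature cross-entropy loss over a batch of paired embeddings; it illustrates only the contrastive objective, not the projection head, fine-tuning, or distillation steps described in the abstract.

<syntaxhighlight lang="python">
import numpy as np

def nt_xent_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.5) -> float:
    """Contrastive loss for N pairs of embeddings (two augmented views per image)."""
    z = np.concatenate([z1, z2], axis=0)                   # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)       # unit norm -> cosine similarity
    sim = z @ z.T / temperature                            # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                         # a view is not its own negative
    n = z1.shape[0]
    # Index of each view's positive partner (the other augmented view of the same image).
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))   # row-wise log softmax
    return float(-log_prob[np.arange(2 * n), positives].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    view_a = rng.normal(size=(4, 8))
    view_b = view_a + 0.05 * rng.normal(size=(4, 8))   # slightly perturbed second views
    print(round(nt_xent_loss(view_a, view_b), 3))
</syntaxhighlight>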
 
  
 
===Reducing Parasitic Interactions in a Qubit Grid ([[US Patent Application 18195876. Reducing Parasitic Interactions in a Qubit Grid simplified abstract|18195876]])===
 
John MARTINIS
  
 
'''Brief explanation'''
 
The abstract describes a method, system, and apparatus for performing an entangling operation on a system of qubits. The system of qubits consists of first qubits and second qubits, which are connected through qubit couplers. The qubits are arranged in a two-dimensional grid. The method involves pairing multiple first qubits with neighboring second qubits and performing an entangling operation on each paired qubit. This operation includes detuning each second qubit in parallel with its corresponding first qubit.
 
 
'''Abstract'''
 
Methods, systems, and apparatus for performing an entangling operation on a system of qubits. In one aspect, a method includes operating the system of qubits, wherein the system of qubits comprises: a plurality of first qubits, a plurality of second qubits, a plurality of qubit couplers defining nearest neighbor interactions between the first qubits and second qubits, wherein the system of qubits is arranged as a two dimensional grid and each qubit of the multiple first qubits is coupled to multiple second qubits through respective qubit couplers, and wherein operating the system of qubits comprises: pairing multiple first qubits with respective neighboring second qubits; performing an entangling operation on each paired first and second qubit in parallel, comprising detuning each second qubit in the paired first and second qubits in parallel.
 
  
 
===QUBIT LEAKAGE REMOVAL ([[US Patent Application 18305174. QUBIT LEAKAGE REMOVAL simplified abstract|18305174]])===
 
Kevin Chenghao Miao
  
 
'''Brief explanation'''
 
The abstract describes methods, systems, and apparatus for transferring data qubit leakage in quantum computing. It explains that an apparatus is used to transport leakage from a data qubit, which holds logical information, to an ancilla qubit. This is achieved by preparing the ancilla qubit in a known initial state and performing a leakage transport operation using two-qubit gates on both qubits.
 
 
'''Abstract'''
 
Methods, systems and apparatus for transporting data qubit leakage. In one aspect, an apparatus includes, for a data qubit that has been operated on by a quantum computing system to place the data qubit in a first state, wherein the first state encodes logical information: preparing, by the quantum computing system, an ancilla qubit in a known initial state; and performing, by the quantum computing system, a leakage transport operation using one or more two-qubit gates on the data qubit and the ancilla qubit to transfer leakage from the data qubit to the ancilla qubit.
 
  
 
===Multi-Modal Directions with a Ride Service Segment in a Navigation Application ([[US Patent Application 18214353. Multi-Modal Directions with a Ride Service Segment in a Navigation Application simplified abstract|18214353]])===
 
Scott Ogden
  
 
'''Brief explanation'''
 
The abstract describes a feature in a mapping application that allows users to request ride services without needing a separate ride service app. The mapping app uses ride service APIs to access data from different ride service providers. When a user requests directions, the app can include a segment where a ride service is used as the mode of transportation. The app retrieves information such as price estimates and wait times from the ride service APIs and provides this information to the user along with the directions.
 
 
'''Abstract'''
 
To provide ride services within a mapping application in a client computing device without directing the user to a separate ride service application, the mapping application invokes one or several ride service APIs to access ride service data from various ride service providers. For example, the mapping application receives a request for travel directions to a destination and generates multi-modal travel directions which include a route segment where the mode of transportation is a ride service. The mapping application invokes one or several ride service APIs to retrieve a price estimate, estimated wait time, or any other suitable information regarding the ride service route segment. Accordingly, the mapping application provides the multi-modal travel directions to a user including information regarding the ride service route segment.
 
  
 
===METHODS, SYSTEMS AND MEDIA FOR PRESENTING MEDIA CONTENT THAT WAS ADVERTISED ON A SECOND SCREEN DEVICE USING A PRIMARY DEVICE ([[US Patent Application 18218270. METHODS, SYSTEMS AND MEDIA FOR PRESENTING MEDIA CONTENT THAT WAS ADVERTISED ON A SECOND SCREEN DEVICE USING A PRIMARY DEVICE simplified abstract|18218270]])===
 
Adam Champy
  
 
'''Brief explanation'''
 
This abstract describes methods, systems, and media for presenting media content that was advertised on a second screen device using a primary screen device. The method involves receiving an advertisement request from a computing device, association information indicating that the computing device is associated with a media presentation device, and user account information. Based on this information, an advertisement for media content is selected, and the system determines whether an indicator of the user account's subscription status to a service should be presented with it. The advertisement is then presented on the computing device; interacting with it instructs the media presentation device to present the media content and the subscription status indicator. When input selecting the subscription status indicator is received, an application associated with the service is transmitted to the computing device, and the subscription status of the user account is updated in connection with the service.
 
 
'''Abstract'''
 
Methods, systems and media for presenting media content that was advertised on a second screen device using a primary screen device are provided. In some implementations, a method for advertising media content to a user is provided, the method comprising: receiving an advertisement request from a computing device; receiving association information indicating that the computing device is associated with a media presentation device; receiving user account information associated with the user account; in response to the advertisement request, selecting an advertisement for media content based at least in part on the association information and the user account information; determining whether an indicator of subscription status of the user account to a service is to be presented in connection with the selected advertisement; causing the advertisement to be presented by computing device, wherein the advertisement is associated with instructions that, in response to interaction with the advertisement, cause the computing device to instruct the media presentation device to present the media content and the indicator of subscription status; receiving input indicating that the indicator of subscription status has been selected; causing an application associated with the service to be transmitted to the computing device; and updating the subscription status of the user account in connection with the service.
 
  
 
===High Resolution Inpainting with a Machine-learned Augmentation Model and Texture Transfer ([[US Patent Application 17726720. High Resolution Inpainting with a Machine-learned Augmentation Model and Texture Transfer simplified abstract|17726720]])===
 
Noritsugu Kanazawa
  
 
'''Brief explanation'''
 
This abstract describes a system and method for enhancing images using image augmentation models and texture transfer blocks. The system takes input images and segmentation masks as input and generates first output data. This first output data, along with the segmentation masks, is then processed using the texture transfer block to create an augmented image. The purpose of this augmentation is to replace occluded areas in the original image with predicted pixel data, resulting in an enhanced version of the scene.
 
 
'''Abstract'''
 
Systems and methods for augmenting images can utilize one or more image augmentation models and one or more texture transfer blocks. The image augmentation model can process input images and one or more segmentation masks to generate first output data. The first output data and the one or more segmentation masks can be processed with the texture transfer block to generate an augmented image. The input image can depict a scene with one or more occlusions, and the augmented image can depict the scene with the one or more occlusions replaced with predicted pixel data.
 
  
 
===Photorealistic Talking Faces from Audio ([[US Patent Application 17796399. Photorealistic Talking Faces from Audio simplified abstract|17796399]])===
 
Vivek Kwatra
  
 
'''Brief explanation'''
 
This abstract describes a framework for creating realistic 3D talking faces based solely on audio input. The framework includes methods for inserting these generated faces into existing videos or virtual environments. The process involves breaking down the faces from video into separate components such as 3D geometry, head pose, and texture. This allows for separate analysis and prediction of the face shape and texture. To ensure smooth transitions, an auto-regressive approach is used that takes into account the previous visual state of the model. Additionally, the model incorporates face illumination using audio-independent 3D texture normalization.
 
 
'''Abstract'''
 
Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
 
  
 
===NOVEL CATEGORY DISCOVERY USING MACHINE LEARNING ([[US Patent Application 17729878. NOVEL CATEGORY DISCOVERY USING MACHINE LEARNING simplified abstract|17729878]])===
 
Xuhui Jia
  
 
'''Brief explanation'''
 
This abstract describes a method for discovering new categories using computer programs. The method involves generating local feature tensors from training images and comparing them to previous feature tensors. A similarity tensor is created to represent the similarity between the current and previous feature tensors. A neural network is used to process the training images and predict their classes. The similarity between the feature tensors and the training outputs is used to update the neural network.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing novel category discovery. One of the methods includes generating first local feature tensors from a first training image; obtaining previous local feature tensors generated from a previous training image; generating a first similarity tensor representing a similarity between the first local feature tensors and the previous local feature tensors; obtaining a second similarity tensor for a second training image; processing, using a neural network, the first training image to generate a first training output representing a class prediction for the first training image; obtaining a second training output representing a class prediction for the second training image; and generating an update to the neural network from (i) a similarity between the first similarity tensor and the second similarity tensor and (ii) a similarity between the first training output and the second training output.
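
'''Illustrative sketch'''

The similarity-tensor step can be illustrated with NumPy: compute cosine similarities between every pair of local feature vectors from two images, then compare two such similarity tensors with a scalar agreement score, used here as a stand-in for the consistency signal that would drive the network update. The shapes and names are assumptions.

<syntaxhighlight lang="python">
import numpy as np

def local_similarity_tensor(features_a: np.ndarray, features_b: np.ndarray) -> np.ndarray:
    """Cosine similarity between every local feature vector of image A and image B.
    features_*: (H*W, D) local feature tensors from some backbone (assumed shape)."""
    a = features_a / np.linalg.norm(features_a, axis=1, keepdims=True)
    b = features_b / np.linalg.norm(features_b, axis=1, keepdims=True)
    return a @ b.T   # (H*W, H*W) similarity tensor

def similarity_agreement(sim_1: np.ndarray, sim_2: np.ndarray) -> float:
    """Scalar agreement between two similarity tensors (cosine of the flattened tensors)."""
    return float((sim_1 * sim_2).sum() / (np.linalg.norm(sim_1) * np.linalg.norm(sim_2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.normal(size=(16, 32))               # local features of a previous training image
    img1 = prev + 0.1 * rng.normal(size=(16, 32))  # near-duplicate: high agreement expected
    img2 = rng.normal(size=(16, 32))               # unrelated image: low agreement expected
    s1 = local_similarity_tensor(img1, prev)
    s2 = local_similarity_tensor(img2, prev)
    print(round(similarity_agreement(s1, s1), 3), round(similarity_agreement(s1, s2), 3))
</syntaxhighlight>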
 
  
 
===ACTIONABLE EVENT DETERMINATION BASED ON VEHICLE DIAGNOSTIC DATA ([[US Patent Application 18216418. ACTIONABLE EVENT DETERMINATION BASED ON VEHICLE DIAGNOSTIC DATA simplified abstract|18216418]])===
 
Haris Ramic
  
 
'''Brief explanation'''
 
This abstract describes systems and methods that can identify and report an important event related to a vehicle using diagnostic data. The event and its recommended action can be communicated to the user through a suitable interface. With the user's approval, the system can also assist in completing the recommended action.
 
 
'''Abstract'''
 
The present disclosure is generally related to systems and methods for determining an actionable event associated with a vehicle. The systems and methods can determine the event based on vehicle diagnostic data, and can report the event to a user via an appropriate interface. The systems and methods can also determine a recommended action to address the event, and can facilitate completion of the action upon approval by the user.
 
  
 
===Seamless Transition for Multiple Display Brightness Modes ([[US Patent Application 18036323. Seamless Transition for Multiple Display Brightness Modes simplified abstract|18036323]])===

'''Main Inventor'''

Chien-Hui Wen
  
 
'''Brief explanation'''
 
This abstract describes a device with a display that can operate at two brightness levels. When the device detects a fingerprint-authentication triggering event while the display is at the first brightness level, it determines which portion of the user interface should operate at the second brightness level and transitions the display to that level. The remaining portion of the display is rendered using either the first brightness level or a gray-level offset applied under the second brightness level, so the transition between modes appears seamless.
 
 
'''Abstract'''
 
An example device includes a display component that is configured to operate at a first brightness level or a second brightness level. The device also includes one or more processors operable to perform operations. The operations include detecting, by the display component and while the display component is operating at the first brightness level, a fingerprint authentication triggering event. The operations further include determining, based on the fingerprint authentication triggering event, a first portion of the graphical user interface to operate at the second brightness level. The operations also include transitioning the display component from the first brightness level to the second brightness level. The operations additionally include displaying a second portion of the display component based on applying, to the second portion, one of: (1) the first brightness level, or (2) a value offset to a gray level for the second brightness level.
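The sketch below illustrates the two operations described above: driving one region at the second brightness level while compensating the rest with a gray-level offset. The brightness values and offset formula are hypothetical, chosen only to make the example runnable.

<syntaxhighlight lang="python">
HIGH_BRIGHTNESS_NITS = 600    # hypothetical second brightness level
NORMAL_BRIGHTNESS_NITS = 400  # hypothetical first brightness level

def gray_offset_for(target_nits):
    """Hypothetical gray-level compensation for the brighter mode."""
    return int(255 * NORMAL_BRIGHTNESS_NITS / target_nits)

def on_fingerprint_trigger(display_state):
    """On a fingerprint-authentication trigger, drive only the sensor region
    at the second brightness level and keep the rest visually consistent."""
    display_state["fingerprint_region_nits"] = HIGH_BRIGHTNESS_NITS
    # Remaining portion: either keep the first brightness level, or apply a
    # gray-level offset so it looks unchanged under the second level.
    display_state["rest_of_screen"] = ("offset_gray",
                                       gray_offset_for(HIGH_BRIGHTNESS_NITS))
    return display_state

print(on_fingerprint_trigger({"fingerprint_region_nits": NORMAL_BRIGHTNESS_NITS}))
</syntaxhighlight>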
 
  
 
===DYNAMICALLY ADAPTING GIVEN ASSISTANT OUTPUT BASED ON A GIVEN PERSONA ASSIGNED TO AN AUTOMATED ASSISTANT ([[US Patent Application 17726244. DYNAMICALLY ADAPTING GIVEN ASSISTANT OUTPUT BASED ON A GIVEN PERSONA ASSIGNED TO AN AUTOMATED ASSISTANT simplified abstract|17726244]])===

'''Main Inventor'''

Martin Baeuml
  
 
'''Brief explanation'''
 
This abstract describes implementations for dynamically adapting the output of an automated assistant based on a specific persona assigned to it. The output can be generated and adapted based on the assigned persona, or it can be generated specifically for the persona without the need for subsequent adaptation. The output may include textual content for audible presentation to the user and visual cues for controlling a display or visual representation of the assistant. These implementations utilize large language models to reflect the assigned persona in the assistant's output.
 
 
'''Abstract'''
 
Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
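The two paths mentioned in the abstract (generating output already conditioned on the persona versus adapting an already-generated response) can be sketched as follows. The persona table, prompts, and the stand-in LLM function are hypothetical; no real assistant API is used.

<syntaxhighlight lang="python">
PERSONAS = {
    "pirate": {"style": "salty nautical slang", "visual_cue": "pirate_avatar"},
    "librarian": {"style": "calm, precise diction", "visual_cue": "librarian_avatar"},
}

def generate_with_persona(user_query, persona_id, llm):
    """Generate output already conditioned on the persona (no post-hoc rewrite)."""
    persona = PERSONAS[persona_id]
    text = llm(f"Respond to '{user_query}' in {persona['style']}.")
    return {"speech_text": text, "visual_cues": [persona["visual_cue"]]}

def adapt_to_persona(base_text, persona_id, llm):
    """Alternative path: adapt an already-generated response to the persona."""
    persona = PERSONAS[persona_id]
    return llm(f"Rewrite in {persona['style']}: {base_text}")

fake_llm = lambda prompt: f"[LLM output for: {prompt}]"  # stand-in model
print(generate_with_persona("What's the weather?", "pirate", fake_llm))
print(adapt_to_persona("It will rain today.", "librarian", fake_llm))
</syntaxhighlight>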
 
  
 
===DYNAMICALLY ADAPTING GIVEN ASSISTANT OUTPUT BASED ON A GIVEN PERSONA ASSIGNED TO AN AUTOMATED ASSISTANT ([[US Patent Application 17744440. DYNAMICALLY ADAPTING GIVEN ASSISTANT OUTPUT BASED ON A GIVEN PERSONA ASSIGNED TO AN AUTOMATED ASSISTANT simplified abstract|17744440]])===

'''Main Inventor'''

Martin Baeuml
  
 
'''Brief explanation'''
 
This abstract describes a technology that allows an automated assistant to adapt its output based on a specific persona assigned to it. The assistant's output, which can include both text and visual cues, can be generated specifically for the assigned persona or can be dynamically adapted to match the persona. This technology utilizes large language models to ensure that the assistant's output reflects the assigned persona accurately.
 
 
'''Abstract'''
 
Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
 
  
 
===EFFICIENT STREAMING NON-RECURRENT ON-DEVICE END-TO-END MODEL ([[US Patent Application 18336211. EFFICIENT STREAMING NON-RECURRENT ON-DEVICE END-TO-END MODEL simplified abstract|18336211]])===

'''Main Inventor'''

Tara Sainath
  
 
'''Brief explanation'''
 
The abstract describes an Automatic Speech Recognition (ASR) model that consists of three main components: a first encoder, a second encoder, and a decoder. The first encoder takes in a sequence of acoustic frames (representing the audio input) and generates a higher order feature representation for each frame. The second encoder then takes that representation and generates a second higher order feature representation for each frame. The decoder receives the second higher order feature representation and generates a probability distribution over possible speech recognition hypotheses, and a language model then rescores that distribution.

In simpler terms, the ASR model processes audio input, extracts increasingly refined features, generates probabilities for different speech recognition options, and further refines those probabilities using a language model.
 
 
'''Abstract'''
 
An ASR model includes a first encoder configured to receive a sequence of acoustic frames and generate a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The ASR model also includes a second encoder configured to receive the first higher order feature representation generated by the first encoder at each of the plurality of output steps and generate a second higher order feature representation for a corresponding first higher order feature frame. The ASR model also includes a decoder configured to receive the second higher order feature representation generated by the second encoder at each of the plurality of output steps and generate a first probability distribution over possible speech recognition hypothesis. The ASR model also includes a language model configured to receive the first probability distribution over possible speech hypothesis and generate a rescored probability distribution.
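A toy sketch of the cascaded-encoder data flow is shown below: one acoustic frame passes through two encoders, the decoder produces a distribution over symbols, and a language model rescores it. The layer sizes, weights, and rescoring weight are invented for illustration and do not reflect the actual model.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((80, 128))    # first (e.g. streaming) encoder weights
W2 = rng.standard_normal((128, 128))   # second (non-recurrent) encoder weights
W_dec = rng.standard_normal((128, 50)) # decoder projection to 50 symbols

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def asr_step(acoustic_frame, lm_scores):
    h1 = np.tanh(acoustic_frame @ W1)   # first higher order feature representation
    h2 = np.tanh(h1 @ W2)               # second higher order feature representation
    probs = softmax(h2 @ W_dec)         # decoder: distribution over hypotheses
    # Language-model rescoring (0.3 is an arbitrary interpolation weight).
    return softmax(np.log(probs + 1e-9) + 0.3 * lm_scores)

frame = rng.standard_normal(80)         # one acoustic frame (e.g. 80 filterbanks)
lm = rng.standard_normal(50)            # language-model scores for 50 symbols
print(asr_step(frame, lm).argmax())
</syntaxhighlight>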
 
  
 
===Joint Segmenting and Automatic Speech Recognition ([[US Patent Application 18304064. Joint Segmenting and Automatic Speech Recognition simplified abstract|18304064]])===

'''Main Inventor'''

Ronny Huang
  
 
'''Brief explanation'''
 
The abstract describes a model that combines speech segmentation and automatic speech recognition (ASR). The model consists of an encoder and a decoder. The encoder takes in a sequence of acoustic frames that represent spoken utterances and generates a higher-level feature representation for each frame. The decoder uses this feature representation to generate a probability distribution over possible speech recognition hypotheses and determine if an output step corresponds to the end of a speech segment.
 
 
The joint segmenting and ASR model is trained on a set of training samples, each containing audio data of spoken utterances and their corresponding transcriptions. The transcriptions are modified by inserting an "end of speech segment" token based on a set of heuristic rules and exceptions applied to the training sample.
 
 
'''Abstract'''
 
A joint segmenting and ASR model includes an encoder and decoder. The encoder configured to: receive a sequence of acoustic frames characterizing one or more utterances; and generate, at each output step, a higher order feature representation for a corresponding acoustic frame. The decoder configured to: receive the higher order feature representation and generate, at each output step: a probability distribution over possible speech recognition hypotheses, and an indication of whether the corresponding output step corresponds to an end of speech segment. The joint segmenting and ASR model trained on a set of training samples, each training sample including: audio data characterizing a spoken utterance; and a corresponding transcription of the spoken utterance, the corresponding transcription having an end of speech segment ground truth token inserted into the corresponding transcription automatically based on a set of heuristic-based rules and exceptions applied to the training sample.
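The heuristic labeling of training transcriptions could look roughly like the sketch below, which inserts an end-of-segment token after sentence-final punctuation while skipping common abbreviations. The token string, rules, and exception list are hypothetical examples, not the rules used in the application.

<syntaxhighlight lang="python">
import re

EOS_TOKEN = "<eos_segment>"   # hypothetical end-of-segment ground-truth token
EXCEPTIONS = ("Mr.", "Mrs.", "Dr.", "e.g.", "i.e.")  # do not end a segment

def insert_segment_tokens(transcript):
    """Heuristically mark segment ends (sentence-final punctuation), skipping
    common abbreviations, to build training targets for the joint model."""
    out = []
    for tok in transcript.split():
        out.append(tok)
        if re.search(r"[.!?]$", tok) and tok not in EXCEPTIONS:
            out.append(EOS_TOKEN)
    return " ".join(out)

print(insert_segment_tokens("Dr. Smith arrived late. The talk went well!"))
# -> "Dr. Smith arrived late. <eos_segment> The talk went well! <eos_segment>"
</syntaxhighlight>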
 
  
 
===DIRECTING A VEHICLE CLIENT DEVICE TO USE ON-DEVICE FUNCTIONALITY ([[US Patent Application 18214384. DIRECTING A VEHICLE CLIENT DEVICE TO USE ON-DEVICE FUNCTIONALITY simplified abstract|18214384]])===

'''Main Inventor'''

Vikram Aggarwal
  
 
'''Brief explanation'''
 
This abstract describes a method for phasing out older versions of vehicle computing devices while still ensuring that they remain functional. When newer versions of the computing devices are released with additional features, older versions may not be able to support these features due to hardware limitations. This can cause crashes and inefficient data transmissions. To address this, a server device can respond to requests from the older computing devices by providing speech to text data or natural language understanding data, allowing the older devices to continue using server resources effectively.
 
 
'''Abstract'''
 
Implementations set forth herein relate to phasing-out of vehicle computing device versions while ensuring useful responsiveness of any vehicle computing device versions that are still in operation. Certain features of updated computing devices may not be available to prior versions of computing devices because of hardware limitations. The implementations set forth herein eliminate crashes and wasteful data transmissions caused by prior versions of computing devices that have not been, or cannot be, upgraded. A server device can be responsive to a particular intent request provided to a vehicle computing device, despite the intent request being associated with an action that a particular version of the vehicle computing device cannot execute. In response, the server device can elect to provide speech to text data, and/or natural language understanding data, in furtherance of allowing the vehicle computing device to continue leveraging resources at the server device.
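The server-side dispatch described above might be sketched as follows: capable client versions are directed to execute the action on-device, while older versions receive speech-to-text and natural language understanding results instead. The version numbers, intent, and recognizer stub are hypothetical.

<syntaxhighlight lang="python">
# Hypothetical capability table; the application does not list version numbers.
ON_DEVICE_CAPABLE = {"3.0", "3.1"}

def fake_speech_to_text(audio):
    """Stand-in for the server's speech recognizer."""
    return "navigate to the nearest charging station"

def handle_intent_request(client_version, audio):
    """Newer clients fulfill the intent on-device; older clients receive
    speech-to-text and NLU data so they remain responsive instead of crashing
    on an action they cannot execute."""
    if client_version in ON_DEVICE_CAPABLE:
        return {"directive": "execute_on_device"}
    text = fake_speech_to_text(audio)
    return {"directive": "server_assist",
            "speech_to_text": text,
            "nlu": {"intent": "navigate", "slots": {"query": text}}}

print(handle_intent_request("2.4", b"...")["directive"])  # -> server_assist
</syntaxhighlight>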
 
  
 
===MULTI-MODAL INTERACTION BETWEEN USERS, AUTOMATED ASSISTANTS, AND OTHER COMPUTING SERVICES ([[US Patent Application 18217326. MULTI-MODAL INTERACTION BETWEEN USERS, AUTOMATED ASSISTANTS, AND OTHER COMPUTING SERVICES simplified abstract|18217326]])===

'''Main Inventor'''

Ulas Kirazci
  
 
'''Brief explanation'''
 
The abstract describes techniques for users to interact with automated assistants and other computing services using multiple modes of input, such as voice and visual/tactile inputs. Users can engage with the automated assistant to access third-party computing services, and can navigate through dialog state machines using different input modalities.
 
 
'''Abstract'''
 
Techniques are described herein for multi-modal interaction between users, automated assistants, and other computing services. In various implementations, a user may engage with the automated assistant in order to further engage with a third party computing service. In some implementations, the user may advance through dialog state machines associated with third party computing service using both verbal input modalities and input modalities other than verbal modalities, such as visual/tactile modalities.
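A dialog state machine that accepts either verbal or visual/tactile input at each state could be sketched as below. The states, transitions, and "pizza ordering" scenario are invented for illustration and are not part of the application.

<syntaxhighlight lang="python">
# Hypothetical third-party dialog state machine; keys are (state, "modality:value").
TRANSITIONS = {
    ("choose_size", "say:large"): "choose_toppings",
    ("choose_size", "tap:large_button"): "choose_toppings",
    ("choose_toppings", "say:mushrooms"): "confirm",
    ("choose_toppings", "tap:mushroom_tile"): "confirm",
}

def advance(state, modality, value):
    """Advance the dialog using either a verbal or a visual/tactile input."""
    return TRANSITIONS.get((state, f"{modality}:{value}"), state)

state = "choose_size"
state = advance(state, "tap", "large_button")  # touch input
state = advance(state, "say", "mushrooms")     # voice input
print(state)  # -> confirm
</syntaxhighlight>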
 
  
 
===SERVER SIDE HOTWORDING ([[US Patent Application 18345077. SERVER SIDE HOTWORDING simplified abstract|18345077]])===

'''Main Inventor'''

Alexander H. Gruenstein
  
 
'''Brief explanation'''
 
This abstract describes methods, systems, and apparatus for detecting specific words or phrases (hotwords) using a server. The methods involve receiving an audio signal containing one or more utterances, determining if a portion of the first utterance matches a key phrase, and if so, sending the audio signal to a server system for further analysis. The server system then determines if the first utterance fully matches the key phrase based on a more restrictive threshold. If it does, tagged text data representing the utterances is received.
 
 
'''Abstract'''
 
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting hotwords using a server. One of the methods includes receiving an audio signal encoding one or more utterances including a first utterance; determining whether at least a portion of the first utterance satisfies a first threshold of being at least a portion of a key phrase; in response to determining that at least the portion of the first utterance satisfies the first threshold of being at least a portion of a key phrase, sending the audio signal to a server system that determines whether the first utterance satisfies a second threshold of being the key phrase, the second threshold being more restrictive than the first threshold; and receiving tagged text data representing the one or more utterances encoded in the audio signal when the server system determines that the first utterance satisfies the second threshold.
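The two-threshold flow can be made concrete with the sketch below: a permissive check on the device decides whether to send audio to the server, and a stricter server-side check decides whether to return tagged text. The scores, thresholds, and tag format are illustrative assumptions.

<syntaxhighlight lang="python">
FIRST_THRESHOLD = 0.5    # permissive client-side threshold
SECOND_THRESHOLD = 0.85  # more restrictive server-side threshold

def on_device_score(audio):
    """Hypothetical lightweight hotword detector running on the client."""
    return 0.62

def server_score(audio):
    """Hypothetical larger hotword model running on the server."""
    return 0.91

def handle_audio(audio):
    if on_device_score(audio) < FIRST_THRESHOLD:
        return None                      # not even a candidate key phrase
    # Candidate detected: send the audio to the server for the stricter check.
    if server_score(audio) >= SECOND_THRESHOLD:
        return {"tagged_text": "<hotword> what's the weather </hotword>"}
    return None

print(handle_audio(b"..."))
</syntaxhighlight>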
 
  
 
===Machine-Learned Differentiable Digital Signal Processing ([[US Patent Application 18344567. Machine-Learned Differentiable Digital Signal Processing simplified abstract|18344567]])===

'''Main Inventor'''

Jesse Engel
  
 
'''Brief explanation'''
 
The abstract describes a new approach to digital signal processing using machine-learned differentiable digital signal processors. This involves incorporating these processors into the training process of a machine learning model, allowing for more efficient and high-quality signal processing. This approach can lead to smaller models, resulting in reduced energy costs for storage and processing of digital signals.
 
 
'''Abstract'''
 
Systems and methods of the present disclosure are directed toward digital signal processing using machine-learned differentiable digital signal processors. For example, embodiments of the present disclosure may include differentiable digital signal processors within the training loop of a machine-learned model (e.g., for gradient-based training). Advantageously, systems and methods of the present disclosure provide high quality signal processing using smaller models than prior systems, thereby reducing energy costs (e.g., storage and/or processing costs) associated with performing digital signal processing.
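The key property of a differentiable signal processor is that its output is a smooth function of its control parameters, so it can sit inside a gradient-based training loop. The sketch below shows a simple harmonic oscillator bank built only from differentiable operations; the parameter values and sizes are illustrative, and in practice a neural network would predict them and an autodiff framework would backpropagate through the synthesis.

<syntaxhighlight lang="python">
import numpy as np

def harmonic_synth(f0_hz, harmonic_amps, sr=16000, seconds=0.1):
    """Harmonic oscillator bank: every operation is differentiable with respect
    to the fundamental frequency and harmonic amplitudes."""
    t = np.arange(int(sr * seconds)) / sr
    harmonics = np.arange(1, len(harmonic_amps) + 1)[:, None]  # (H, 1)
    waves = np.sin(2 * np.pi * f0_hz * harmonics * t)          # (H, T)
    return (harmonic_amps[:, None] * waves).sum(axis=0)        # (T,)

# A (hypothetical) decoder network would predict f0 and the amplitudes from
# audio features; a reconstruction loss would drive gradients through the synth.
audio = harmonic_synth(220.0, np.array([0.6, 0.3, 0.1]))
print(audio.shape)
</syntaxhighlight>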
 
  
 
===Optical Communication for Memory Disaggregation in High Performance Computing ([[US Patent Application 17992241. Optical Communication for Memory Disaggregation in High Performance Computing simplified abstract|17992241]])===

'''Main Inventor'''

Horia Alexandru Toma
  
 
'''Brief explanation'''
 
The abstract describes a technology that separates memory from a specific type of integrated circuit called an ASIC package. In this case, a high-bandwidth memory (HBM) optics module package is connected to the ASIC package using optical links. The HBM optics module package includes HBM dies, HBM chiplets, and an optical chiplet. The optical chiplet connects the HBM optics module to optical fibers, which form an optical link with other components of the ASIC package. By including the optical chiplet, the HBM optics module package can be separated from the ASIC package.
 
 
'''Abstract'''
 
The technology generally relates to disaggregating memory from an application specific integrated circuit (“ASIC”) package. For example, a high-bandwidth memory (“HBM”) optics module package may be connected to an ASIC package via one or more optical links. The HBM optics module package may include HBM dies(s), HBM chiplet(s) and an optical chiplet. The optical chiplet may be configured to connect the HBM optics module to one or more optical fibers that form an optical link with one or more other components of the ASIC package. By including an optical chiplet in the HBM optics module package, the HBM optics module package may be disaggregated from an ASIC package.
 
  
 
===Blind Battery Connector ([[US Patent Application 18173607. Blind Battery Connector simplified abstract|18173607]])===

'''Main Inventor'''

James Robert Lim
  
 
'''Brief explanation'''
 
The document discusses a blind battery connector that allows users to safely and securely connect a battery to a system without needing to see the connectors. The connector uses magnets to automatically align and engage the battery connector with the system-side connector in the correct orientation. These magnets can be embedded or removable. The blind battery connector also provides additional mechanical strength to protect against drops, vibrations, and shocks. Implementing these techniques can reduce assembly time, increase productivity, lower costs, and minimize the risk of connector damage or reverse polarity engagement.
 
 
'''Abstract'''
 
The present document describes techniques associated with a blind battery connector. The blind battery connector described herein enables a user to blindly engage, safely and securely, a battery connector with a system-side connector. In aspects, the blind battery connector includes polarity-oriented magnets at both the battery connector and the system-side connector to automatically align and engage the battery connector with the system-side connector with correct orientation. The magnets may be embedded or removably assembled to the battery connector and the system-side connector. The blind battery connector controls initial alignment of the battery connector for coupling with the system-side connector and provides additional mechanical strength to the coupling against drop, vibration, and shock. The techniques described herein may decrease battery connection time at factory assembly, increase units per hour, and lower operating costs, while decreasing the likelihood of battery connector damage and/or reverse polarity engagement.
 
  
 
===End-to-End Deep Neural Network Adaptation for Edge Computing ([[US Patent Application 17998323. End-to-End Deep Neural Network Adaptation for Edge Computing simplified abstract|17998323]])===

'''Main Inventor'''

Jibing Wang
  
 
'''Brief explanation'''
 
This abstract describes techniques and apparatuses for adapting a machine learning configuration for processing communications in an end-to-end (E2E) communication system. The system involves a network entity, user equipment (UE), base station, and an edge compute server (ECS). Initially, the network entity directs the UE and base station to form a deep neural network (DNN) based on a specific machine learning configuration. However, if there is a change in the participation mode of the ECS, the network entity determines to update the machine learning configuration. It identifies a new configuration and directs either the UE or the base station to update the DNN using this new configuration.
 
 
'''Abstract'''
 
Techniques and apparatuses are described for adapting an end-to-end, E2E, machine-learning, ML, configuration for processing communications transferred through an E2E communication. A network entity directs a user equipment (UE) and a base station participating in the E2E communication to implement the E2E communication by forming at least a portion of an E2E deep neural network, DNN, based on a first E2E ML configuration. The network entity determines to update the first E2E ML configuration based on a change in a participation mode of an edge compute server (ECS) in the E2E communication. The network entity identifies a second E2E ML configuration based on the change in participation mode and directs the UE or the base station to update the portion of the E2E DNN using the second E2E ML configuration.
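A minimal sketch of the network entity's decision is shown below: when the edge compute server's participation mode changes, a second configuration is selected and the UE and base station are directed to update their portion of the end-to-end DNN. The configuration table, mode names, and layer counts are hypothetical.

<syntaxhighlight lang="python">
# Hypothetical E2E ML configurations keyed by the ECS participation mode.
E2E_ML_CONFIGS = {
    "ecs_active":   {"ue_layers": 4, "bs_layers": 6, "ecs_layers": 8},
    "ecs_inactive": {"ue_layers": 6, "bs_layers": 10, "ecs_layers": 0},
}

class NetworkEntity:
    def __init__(self):
        # First E2E ML configuration used to form the E2E DNN.
        self.current = E2E_ML_CONFIGS["ecs_active"]

    def on_ecs_mode_change(self, new_mode):
        """Identify a second E2E ML configuration and direct the UE and base
        station to update their portion of the E2E DNN."""
        self.current = E2E_ML_CONFIGS[new_mode]
        return {"directive": "update_dnn", "config": self.current}

print(NetworkEntity().on_ecs_mode_change("ecs_inactive"))
</syntaxhighlight>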
 
  
 
===ASSISTANCE DURING AUDIO AND VIDEO CALLS ([[US Patent Application 18215221. ASSISTANCE DURING AUDIO AND VIDEO CALLS simplified abstract|18215221]])===

'''Main Inventor'''

Fredrik BERGENLID
  
 
'''Brief explanation'''
 
This abstract describes a method for providing information items during a communication session between two computing devices. The method involves receiving media content from the session and using that content to determine an information item to display. A command is then sent to one or both of the devices to display the information item.
 
 
'''Abstract'''
 
Implementations relate to providing information items for display during a communication session. In some implementations, a computer-implemented method includes receiving, during a communication session between a first computing device and a second computing device, first media content from the communication session. The method further includes determining a first information item for display in the communication session based at least in part on the first media content. The method further includes sending a first command to at least one of the first computing device and the second computing device to display the first information item.
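The flow of receiving media content, determining an information item, and sending a display command might look like the sketch below. The keyword lookup is a deliberately simple stand-in for whatever analysis the actual implementation performs on the media content.

<syntaxhighlight lang="python">
# Hypothetical keyword-to-information-item mapping.
INFO_ITEMS = {
    "dinner": {"type": "restaurant_suggestions"},
    "flight": {"type": "calendar_availability"},
}

def determine_info_item(media_transcript):
    for keyword, item in INFO_ITEMS.items():
        if keyword in media_transcript.lower():
            return item
    return None

def on_media_content(transcript, devices):
    item = determine_info_item(transcript)
    if item:
        # Send a display command to one or both endpoints of the session.
        return [{"device": d, "command": "display", "item": item} for d in devices]
    return []

print(on_media_content("Let's grab dinner on Friday", ["device_a", "device_b"]))
</syntaxhighlight>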
 
  
 
===SYSTEMS AND METHODS FOR LIVE MEDIA CONTENT MATCHING ([[US Patent Application 18208570. SYSTEMS AND METHODS FOR LIVE MEDIA CONTENT MATCHING simplified abstract|18208570]])===

'''Main Inventor'''

Matthew Sharifi
  
 
'''Brief explanation'''
 
This abstract describes a system and method for matching live media content. The system involves a server that receives first media content from a client device, which corresponds to a portion of media content currently being played on the client device. The first media content has a predefined expiration time. The server also obtains second media content from one or more content feeds, which also corresponds to a portion of the media content being played on the client device. If the second media content matches a portion of the media content that has already been played on the client device, the server obtains third media content from the content feeds before the expiration time. Finally, the server compares the first media content with the third media content.
 
 
'''Abstract'''
 
Systems and methods for matching live media content are disclosed. At a server, obtaining first media content from a client device, herein the first media content corresponds to a portion of media content being played on the client device, and the first media content is associated with a predefined expiration time; obtaining second media content from one or more content feeds, wherein the second media content also corresponds to a portion of the media content being played on the client device; in accordance with a determination that the second media content corresponds to a portion of the media content that has been played on the client device: before the predefined expiration time, obtaining third media content corresponding to the media content being played on the client device, from the one or more content feeds; and comparing the first media content with the third media content.
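The expiration-bounded matching loop could be sketched as follows: if a content feed matches a portion that has already played, a newer portion is fetched before the expiration time and compared against the client's sample. The fingerprint function and feed structure are simplified stand-ins.

<syntaxhighlight lang="python">
import time

def fingerprint(media_bytes):
    """Hypothetical stand-in for an audio/video fingerprinting function."""
    return hash(media_bytes) % 10_000

def match_live_content(first_media, expiration_ts, content_feeds, played_portion):
    first_fp = fingerprint(first_media)
    for feed in content_feeds:
        second_fp = fingerprint(feed["sample"])
        # If the feed matches a portion already played on the client, obtain a
        # newer portion (third media content) before expiration and compare it.
        if second_fp == fingerprint(played_portion) and time.time() < expiration_ts:
            if fingerprint(feed["latest"]) == first_fp:
                return feed["channel"]
    return None

feeds = [{"channel": "channel_7", "sample": b"prev", "latest": b"now"}]
print(match_live_content(b"now", time.time() + 5, feeds, b"prev"))
</syntaxhighlight>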
 
  
 
===METHODS, SYSTEMS, AND MEDIA FOR ADJUSTING QUALITY LEVEL DURING SYNCHRONIZED MEDIA CONTENT PLAYBACK ON MULTIPLE DEVICES ([[US Patent Application 18216751. METHODS, SYSTEMS, AND MEDIA FOR ADJUSTING QUALITY LEVEL DURING SYNCHRONIZED MEDIA CONTENT PLAYBACK ON MULTIPLE DEVICES simplified abstract|18216751]])===

'''Main Inventor'''

Joe Bertolami
  
 
'''Brief explanation'''
 
This abstract describes a method for adjusting the quality level of media content during synchronized presentation. It involves transmitting different streams of media content to multiple user devices, storing the content in buffers, and instructing the devices to start presenting the content simultaneously. If one device's buffer is filling up slower than the other, a lower quality stream is selected for that device and corresponding media content data is transmitted.
 
 
'''Abstract'''
 
Methods, systems, and media for adjusting quality level during synchronized media content presentation are provided. In some embodiments, the method comprises: transmitting, from a server to a first user device, first media content data corresponding to a first stream of a media content item and from the server to a second user device, second media content data corresponding to a second stream of the media content item, wherein the first media content data is to be stored in a buffer of the first user device, and wherein the second media content data is to be stored in a buffer of the second user device; transmitting, from the server to the first user device and to the second user device, instructions that cause the first user device and the second user device to begin presenting the media content item simultaneously; determining, by the server, that the first media content data is being stored in the buffer of the first user device at a slower rate than the second media content data is being stored in the buffer of the second user device; in response to determining that the first media content data is being stored in the buffer of the first user device at a slower rate than the second media content data is being stored in the buffer of the second user device, selecting a third stream of the media content item corresponding to the first stream of the media content item, wherein the third stream of the media content item has a lower quality level than the first stream of the media content item; and transmitting third media content data corresponding to the third stream of the media content item to the first user device.
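The buffer-rate comparison at the heart of the abstract can be illustrated with the sketch below, which drops the slower-buffering device one step down a quality ladder. The quality levels, buffer rates, and device names are invented for the example.

<syntaxhighlight lang="python">
STREAMS = {"1080p": 1, "720p": 2, "480p": 3}   # hypothetical quality ladder

def next_stream(current):
    """Step one level down the quality ladder, if a lower level exists."""
    ordered = sorted(STREAMS, key=STREAMS.get)
    idx = ordered.index(current)
    return ordered[min(idx + 1, len(ordered) - 1)]

def adjust_for_sync(buffer_rates, current_streams):
    """If one device buffers more slowly than the other, switch that device to
    a lower-quality stream so synchronized presentation can continue."""
    slow = min(buffer_rates, key=buffer_rates.get)
    fast = max(buffer_rates, key=buffer_rates.get)
    if buffer_rates[slow] < buffer_rates[fast]:
        current_streams[slow] = next_stream(current_streams[slow])
    return current_streams

print(adjust_for_sync({"device_a": 0.8, "device_b": 2.4},
                      {"device_a": "1080p", "device_b": "1080p"}))
# -> device_a drops to 720p while device_b stays at 1080p
</syntaxhighlight>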
 
  
 
===Doorbell Camera ([[US Patent Application 18301820. Doorbell Camera simplified abstract|18301820]])===

'''Main Inventor'''

Haerim Jeong
  
 
'''Brief explanation'''
 
This application describes a doorbell camera that has a camera module, an image sensor, infrared illuminators, a waterproof button assembly, and a microphone and speaker. The camera captures video of a scene, while the illuminators provide light. The waterproof button assembly allows users to press a button without water entering the device, and it also displays a visual pattern using LEDs and a light guide component. This allows for real-time conversations between a visitor and the user of a remote client device.
 
 
'''Abstract'''
 
This application is directed to a doorbell camera for illuminating and capturing scenes. The doorbell camera includes at least a subset of processors for operating a camera module, an image sensor having a field of view of a scene and configured to capture video of a portion of the scene, one or more infrared (IR) illuminators for providing illumination, a waterproof button assembly, and a microphone and a speaker for enabling a real-time conversation between a visitor located at the doorbell camera and a user of a remote client device. The waterproof button assembly is configured to receive a user press on a button top, block water from entering the electronic device, and display a visual pattern uniformly at a peripheral region of the button assembly using LEDs and light guide component that are disposed under the button top.
 
  
 
===TEXTILE-ASSEMBLY TOOLKIT FOR REVERSIBLE ASSEMBLY OF A TEXTILE TO AN ELECTRONIC-SPEAKER DEVICE ([[US Patent Application 18344255. TEXTILE-ASSEMBLY TOOLKIT FOR REVERSIBLE ASSEMBLY OF A TEXTILE TO AN ELECTRONIC-SPEAKER DEVICE simplified abstract|18344255]])===

'''Main Inventor'''

Laura Charlotte Shumaker
  
 
'''Brief explanation'''
 
The abstract describes a toolkit that allows for the reversible assembly of a textile to an electronic-speaker device. The toolkit includes various attachment features that enable the textile to be securely and accurately aligned with the device without distorting its pattern or leaving any visible edges or attachment features. Additionally, the toolkit ensures that the textile is securely attached to the device to avoid any acoustic distortion.
 
 
'''Abstract'''
 
The present document describes a textile-assembly toolkit for reversible assembly of a textile to an electronic-speaker device. The toolkit includes multiple attachment features, including rigid features with matched purposefully-designed knit types that can be combined to enable repeatable, mass-producible, reversible assembly of the textile to the electronic-speaker device. The techniques described herein enable accurate alignment of the textile on the electronic-speaker device without distorting the textile's cosmetic pattern and in a manner that results in no visible edges of the textile or visible attachment features on the exterior of the electronic-speaker device. Also, the textile-assembly toolkit includes attachment features that secure the textile with sufficient tension to avoid acoustic distortion such as rub and buzz.
 
  
 
===LOCATION-BASED SOCIAL SOFTWARE FOR MOBILE DEVICES ([[US Patent Application 18305262. LOCATION-BASED SOCIAL SOFTWARE FOR MOBILE DEVICES simplified abstract|18305262]])===

'''Main Inventor'''

Dennis P. Crowley
  
 
'''Brief explanation'''
 
This abstract describes a method for sharing location information between users on a social networking service. The computer system receives the location information of a device belonging to one user and associates it with their profile. It then generates a message based on this location information and sends it to another user's device.
 
 
'''Abstract'''
 
A method for communicating location information to a device includes receiving, at a computer system that implements a social networking service, location information that represents a geographic location of a device associated with a first user; associating, by the computer system, the received location information with a profile associated with the first user; and sending, from the computer system to a device associated with a second user, a message that is generated based at least in part on the location information.
 
  
 
===DISCOVERING AN EMBEDDED SUBSCRIBER IDENTIFICATION MODULE ROOT DISCOVERY SERVICE ENDPOINT ([[US Patent Application 18344761. DISCOVERING AN EMBEDDED SUBSCRIBER IDENTIFICATION MODULE ROOT DISCOVERY SERVICE ENDPOINT simplified abstract|18344761]])===

'''Main Inventor'''

Aguibou BARRY
  
 
'''Brief explanation'''
 
The abstract describes a memory device that consists of a memory cell array block and a control logic. The memory cell array block is made up of multiple layers of memory cells and corresponding word line layers. The block is further divided into two subblocks, each containing a number of layers of memory cells and word line layers. The control logic is connected to the memory cell array block and is responsible for erasing, reading, or programming the memory cells using either a block mode or a subblock mode. When the memory cell array block is being erased, read, or programmed in the subblock mode, the control logic determines the operation strategy for the other subblock based on the state of one of the subblocks.
 
 
'''Abstract'''
 
A memory device includes at least one memory cell array block and a control logic. The memory cell array block includes multiple layers of memory cells and word line layers provided corresponding to individual layers of memory cells. The memory cell array block is divided into at least two memory cell array subblocks, each subblock comprising a number of layers of memory cells and word line layers provided corresponding to individual layers of memory cells. The control logic is coupled to the memory cell array block, and configured to: erase, read or program the memory cell array block using a block mode or a subblock mode, and when the memory cell array block is erased, read, or programmed under the subblock mode, determine, at least based on a state of one of the two memory cell array subblocks, an operation strategy of the other memory cell array subblock.
 
  
 
===MANAGING CONDITIONAL CONFIGURATION WHEN A SECONDARY CELL IS UNAVAILABLE ([[US Patent Application 17799214. MANAGING CONDITIONAL CONFIGURATION WHEN A SECONDARY CELL IS UNAVAILABLE simplified abstract|17799214]])===

'''Main Inventor'''

Chih-Hsiang Wu
 
 
'''Brief explanation'''
 
The abstract describes a method for managing mobility in a user equipment (UE) device. The UE device operates in dual connectivity (DC) with a master node (MN) through a primary cell and a secondary node (SN) through a primary secondary cell in a radio access network (RAN). The method involves receiving a conditional configuration from the RAN, which is a configuration that the UE device can apply under certain conditions during a conditional procedure. The method also includes detecting a failure in the secondary cell group (SCG), which is a group of secondary cells. In response to detecting the SCG failure, the method suspends the conditional procedure.
 
 
'''Abstract'''
 
A user equipment (UE) can implement a method for managing mobility. The method includes operating in dual connectivity (DC) with a master node (MN) via a primary cell and a secondary node (SN) via a primary secondary cell, the MN and the SN operating in a radio access network (RAN). The method further includes receiving, from the RAN, a conditional configuration related to a candidate primary secondary cell, the conditional configuration associated with a condition to be satisfied before the UE applies the conditional configuration during a conditional procedure. The method also includes detecting a secondary cell group (SCG) failure. Still further, the method includes, in response to the detecting, suspending the conditional procedure.
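The UE-side behavior described above can be sketched as a small state holder: the conditional configuration is stored until its condition is met, and an SCG failure suspends (rather than discards) the conditional procedure. Method names, message contents, and the example condition are hypothetical.

<syntaxhighlight lang="python">
class UE:
    """Minimal sketch of the UE-side state for conditional configuration."""

    def __init__(self):
        self.conditional_config = None
        self.conditional_procedure_active = False

    def on_conditional_configuration(self, config):
        # Store the configuration; it is only applied once its condition
        # (e.g. a measurement threshold on the candidate PSCell) is satisfied.
        self.conditional_config = config
        self.conditional_procedure_active = True

    def on_scg_failure(self):
        # The secondary cell group failed: suspend the conditional procedure
        # instead of continuing to evaluate its condition.
        self.conditional_procedure_active = False

ue = UE()
ue.on_conditional_configuration({"candidate_pscell": "cell_42",
                                 "condition": "rsrp > -100 dBm"})
ue.on_scg_failure()
print(ue.conditional_procedure_active)  # -> False, procedure suspended
</syntaxhighlight>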
 
