Microsoft Technology Licensing, LLC patent applications published on October 26th, 2023

From WikiPatents


Patent applications for Microsoft Technology Licensing, LLC on October 26th, 2023

3D INTEGRATED CHIPS WITH MICROFLUIDIC COOLING (17725422)

Main Inventor

Bharath RAMAKRISHNAN


Brief explanation

The abstract describes a processor that consists of two separate components connected by a microfluidic volume. Inside this volume, there is at least one pin fin, which is a small, thin structure. The pin fin has a boiling enhancement surface feature on its surface.

Abstract

A processor includes a first die, a second die connected to the first die with a microfluidic volume positioned between the first die and the second die, at least one pin fin positioned in the microfluidic volume, and a boiling enhancement surface feature positioned on a pin surface of the pin fin.

REDUNDANT MACHINE LEARNING ARCHITECTURE FOR HIGH-RISK ENVIRONMENTS (17845959)

Main Inventor

Kingsuk MAITRA


Brief explanation

This abstract describes a technique for improving the reliability of autonomous control systems using a fault-tolerant machine learning architecture. The architecture combines three components: a selector agent, a nominal agent, and a redundancy agent implemented as a multidimensional lookup table. The fault-tolerant machine learning agent extracts state data from the environment containing the control system and its components. The nominal and redundancy agents use this state data to generate candidate actions, which are passed to the selector agent. If the selector agent detects a failure condition, it deploys the action generated by the redundancy agent's lookup table to resolve the issue and restore normal operation.

Abstract

The techniques disclosed herein enable systems to enhance the resilience of autonomous control systems through a fault-tolerant machine learning architecture. To achieve this, a fault-tolerant machine learning agent is constructed with a selector agent, a nominal agent, and a redundancy agent which is a multidimensional lookup table. The fault-tolerant machine learning agent extracts state data from an environment containing a control system and various components. The nominal agent and the redundancy agent generate actions for application to the control system based on the state data which are provided to the selector agent. Based on an analysis of the state data, the selector agent can detect a failure condition. In the event of a failure condition, the selector agent deploys the action generated by the redundancy agent lookup table to resolve the failure condition and restore normal operations.
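The selector pattern described above can be pictured with a short sketch. The class names, the state fields, and the failure test below are illustrative assumptions, not the design claimed in the application.

```python
# Minimal sketch of the selector / nominal / redundancy pattern described above.
# All names and the failure heuristic are illustrative assumptions, not the patented design.

class NominalAgent:
    """Stands in for a learned policy; here it simply scales the error signal."""
    def act(self, state):
        return -0.5 * state["error"]

class RedundancyAgent:
    """Multidimensional lookup table keyed by a coarse discretization of the state."""
    def __init__(self, table):
        self.table = table
    def act(self, state):
        key = ("high" if abs(state["error"]) > 1.0 else "low",
               "hot" if state["temperature"] > 80 else "ok")
        return self.table[key]

class SelectorAgent:
    """Chooses the nominal action unless the state looks like a failure condition."""
    def __init__(self, nominal, redundancy):
        self.nominal, self.redundancy = nominal, redundancy
    def select(self, state):
        failure = state["sensor_ok"] is False or state["temperature"] > 90
        agent = self.redundancy if failure else self.nominal
        return agent.act(state), failure

if __name__ == "__main__":
    table = {("high", "hot"): -1.0, ("high", "ok"): -0.8,
             ("low", "hot"): -0.2, ("low", "ok"): 0.0}
    selector = SelectorAgent(NominalAgent(), RedundancyAgent(table))
    print(selector.select({"error": 0.4, "temperature": 95, "sensor_ok": False}))
```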

3-D STRUCTURED TWO-PHASE MICROFLUIDIC COOLING WITH NANO STRUCTURED BOILING ENHANCEMENT COATING (17725428)

Main Inventor

Bharath RAMAKRISHNAN


Brief explanation

This abstract describes a processor that consists of two dies connected together with a microfluidic volume positioned between them. A wicking heat spreader sits in the microfluidic volume, and a boiling enhancement surface feature on at least one surface of the heat spreader helps promote boiling.

Abstract

A processor includes a first die, a second die connected to the first die with a microfluidic volume positioned between the first die and the second die, a wicking heat spreader positioned in the microfluidic volume; and a boiling enhancement surface feature positioned on at least one surface of the wicking heat spreader.

REDUCING LATENCY OF CHANGING AN OPERATING STATE OF A PROCESSOR FROM A LOW-POWER STATE TO A NORMAL-POWER STATE (17727685)

Main Inventor

Bharat Srinivas PILLILLI


Brief explanation

This abstract describes techniques for reducing the time it takes for a processor to switch from a low-power state to a normal-power state. One method involves the hardware system notifying the processor that a transaction layer packet will be sent to it in the future, prompting the processor to switch to the normal-power state. Another method involves the processor receiving a transaction layer packet from the hardware system, which also triggers the switch to the normal-power state. These techniques aim to minimize latency in transitioning the processor's operating state.

Abstract

Techniques are described herein that are capable of reducing latency of changing an operating state of a processor from a low-power state to a normal-power state. For example, providing a notification from a hardware system to the processor or receiving the notification at the processor, indicating that a transaction layer packet will be provided to the processor at a future time, may trigger the processor to change the operating state from the low-power state to the normal-power state. In another example, receipt of a transaction layer packet at the processor from a hardware system may trigger the processor to change the operating state from the low-power state to the normal-power state.

DETECTING COMPUTER INPUT BASED UPON GAZE TRACKING WITH MANUALLY TRIGGERED CONTENT ENLARGEMENT (17727657)

Main Inventor

Moshe Randall LUTZ


Brief explanation

This abstract describes technologies that can detect user input based on the user's gaze. The computing system receives a command from the user to initiate gaze-based input. It then uses a camera to generate images and compute gaze points. Based on these gaze points, a portion of the content is progressively enlarged. When the user's gaze point aligns with a desired position in the content, the user can send a selection command. The computing system then performs a computing operation based on the selected position in the content.

Abstract

Technologies for detecting user input based upon computed gaze of a user are described herein. With more specificity, a computing system receives an initiation command, where the initiation command indicates that the user desires to set forth input by way of gaze. Gaze points are then computed based upon images generated by a camera of the computing system; based upon such gaze points, a portion of the content is progressively enlarged. When the gaze point corresponds to a position in the content desirably selected by the user (when the portion of the content is enlarged), a selection command from the user is received. Upon receipt of the selection command, the computing system performs a computing operation with respect to the position of the content.

PLURAL TOUCH-SCREEN SENSOR TO PEN SYNCHRONIZATION (18160438)

Main Inventor

Matan SLASSI


Brief explanation

The abstract describes a touch-screen system with two adjacent touch-screen sensors and two digitizers. Each digitizer is coupled to one of the touch-screen sensors and produces a pen signal in response to a pen acting on that sensor. Synchronization logic synchronizes the pen to both digitizers and conditionally enables pen tracking by either digitizer based on the two pen signals. Return logic then exposes the result of the pen tracking to the operating system of the touch-screen system.

Abstract

A touch-screen system comprises adjacent first and second touch-screen sensors, first and second digitizers, and synchronization and return logic. Each of the first and second digitizers is coupled electronically to the respective touch-screen sensor and configured to provide a pen signal responsive to action of a pen on the touch-screen sensor. The synchronization logic is configured to synchronize the pen to the first and second digitizers and to enable pen tracking by any of the first and second digitizers conditionally, based at least partly on the first and second pen signals. The return logic is configured to expose a result of the pen tracking to an operating system of the touch-screen system.

ZONE HINTS FOR ZONED NAMESPACE STORAGE DEVICES (18044976)

Main Inventor

Scott Chao-Chueh LEE


Brief explanation

The abstract describes the concept of zone hints for use with a zoned namespace (ZNS) storage device. These hints provide instructions to the storage device regarding the allocation of storage resources, the writing process, and the priority of operations related to specific zones. 

The first hint indicates that a zone is part of a group of zones, and instructs the storage device to allocate storage resources that are physically adjacent to resources reserved for other zones in the group.

The second hint instructs the storage device to bypass a staging area when writing to the zone, allowing for faster filling of the zone.

The third hint is associated with a background operation and instructs the storage device to deprioritize at least one operation writing to the zone or bypass the staging area when writing to the zone.

Overall, these hints provide more efficient and optimized storage management for zoned namespace storage devices.

Abstract

Zone hints for use with a zoned namespace (ZNS) storage device. Zone hints include one or more of a first hint indicating that a zone is part of a group of a plurality of zones, a second hint indicating that the zone is to be fast-filled, or a third hint indicating that the zone is associated with a background operation. The first hint is structured to instruct the ZNS storage device to allocate to the zone first storage resources that are physically adjacent to second storage resources reserved for others of the plurality of zones. The second hint is structured to instruct the ZNS storage device to bypass a staging area when writing to the zone. The third hint is structured to instruct the ZNS storage device to deprioritize at least one operation writing to the zone, or to bypass the staging area when writing to the zone.
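A small sketch of how host software might represent and interpret the three hints. The flag names and the device stub are assumptions for illustration; real ZNS devices expose these semantics through spec- or vendor-defined commands.

```python
# Illustrative encoding of the three zone hints described above.
# Flag names and the device stub are hypothetical.
from enum import Flag, auto

class ZoneHint(Flag):
    GROUPED = auto()      # zone belongs to a group; co-locate its storage resources
    FAST_FILL = auto()    # bypass the staging area when writing
    BACKGROUND = auto()   # deprioritize writes or bypass staging

class ZNSDeviceStub:
    def apply_hints(self, zone_id, hints):
        if ZoneHint.GROUPED in hints:
            print(f"zone {zone_id}: allocate blocks adjacent to the rest of the group")
        if ZoneHint.FAST_FILL in hints:
            print(f"zone {zone_id}: write directly, skipping the staging area")
        if ZoneHint.BACKGROUND in hints:
            print(f"zone {zone_id}: deprioritize writes (or skip staging)")

ZNSDeviceStub().apply_hints(7, ZoneHint.GROUPED | ZoneHint.FAST_FILL)
```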

HYBRID SOLID-STATE DRIVE (18345759)

Main Inventor

Peng LI


Brief explanation

This abstract describes a method for writing data to a solid-state drive (SSD) that has both a non-volatile memory device and a volatile memory device, the latter divided into first and second memory regions. The first memory region stores an address mapping table. When a write request arrives, the system determines whether the requested logical block address (LBA) corresponds to the non-volatile memory device or to the second memory region. If it corresponds to the non-volatile memory device, the system looks up the physical address in the address mapping table and writes the data to that address. If it corresponds to the second memory region, the system writes the data to that region directly based on the requested LBA.

Abstract

Systems, methods, and devices are described for writing to a solid-state drive (SSD) that includes a non-volatile memory device and a volatile memory device, the volatile memory device including first and second memory regions, the first memory region storing an address mapping table. A write request that includes a host logic block address (LBA) and data is received. A determination of whether the received LBA corresponds to the non-volatile memory device or the second memory region is made. In response to the received LBA corresponding to the non-volatile memory device, a physical address of the non-volatile memory device corresponding to the received LBA is determined based on the address mapping table and the included data is written to the determined physical address of the non-volatile memory device. In response to the received LBA corresponding to the second memory region, the included data is written to the second memory region based on the received LBA.
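The write-routing decision can be pictured with the sketch below. The class layout, region boundaries, and allocation scheme are invented for illustration and do not reflect actual SSD firmware.

```python
# Hypothetical sketch of the write path described above: route a host LBA either
# to NAND (via the address mapping table) or to the volatile second region.

class HybridSSD:
    def __init__(self, volatile_second_region_lbas):
        self.second_region_lbas = volatile_second_region_lbas  # LBAs backed by volatile memory
        self.mapping_table = {}        # LBA -> physical NAND address (held in the first volatile region)
        self.nand = {}                 # physical address -> data
        self.second_region = {}        # LBA -> data (volatile)
        self.next_physical = 0

    def write(self, lba, data):
        if lba in self.second_region_lbas:
            self.second_region[lba] = data          # volatile path: store by LBA directly
            return "volatile"
        physical = self.mapping_table.get(lba)
        if physical is None:                        # non-volatile path: translate, then write
            physical = self.mapping_table[lba] = self._allocate()
        self.nand[physical] = data
        return f"nand@{physical}"

    def _allocate(self):
        self.next_physical += 1
        return self.next_physical

ssd = HybridSSD(volatile_second_region_lbas={100, 101})
print(ssd.write(5, b"persistent"), ssd.write(100, b"scratch"))
```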

CODING ACTIVITY TASK (CAT) EVALUATION FOR SOURCE CODE GENERATORS (17726413)

Main Inventor

Victor Chukwuma DIBIA


Brief explanation

The abstract describes a method for evaluating source code generators. The evaluation process consists of two stages: offline and online. 

In the offline stage, the input passages of software code are divided into smaller blocks. Each code generator then generates an equivalent block for each constituent block. A coding score is assigned to each equivalent block, and these scores are combined to calculate an aggregate score for each code generator. The aggregate scores are used to rank the code generators and select a smaller number of them for the online evaluation stage.

In the online evaluation stage, the selected code generators produce software code passages. The acceptance of these code outputs by users is used to further rank and narrow down the selection of code generators.

Additionally, some examples of this evaluation method consider the usefulness of the constituent blocks by assigning weights to the coding scores based on a code utility estimate.

Abstract

Solutions for evaluating source code generators use offline and online evaluation stages. Offline evaluation includes separating each of a plurality of input passages of software code into a plurality of constituent blocks. Each code generator (of a plurality of code generators) generates an equivalent block corresponding to each constituent block. A coding score is determined for each equivalent block (for each code generator), and the coding scores are aggregated across the equivalent blocks to provide an aggregate score for each code generator. A ranking of the aggregate scores is used to down-select to a fewer number of code generators for online evaluation. For this stage, the code generators output passages of software code, and user acceptance of the code generators' outputs may be used for further ranking and down-selection. Some examples weight the coding score according to a code utility estimate of the constituent blocks for which equivalent blocks are generated.
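The offline stage reduces to scoring, weighting, aggregating, and down-selecting. The sketch below makes that concrete; the scoring function, utility weights, and toy generators are placeholders, not the metrics used in the application.

```python
# Sketch of the offline evaluation stage: score each generator's equivalent blocks,
# weight by an (assumed) utility estimate, aggregate, and down-select the top generators.

def offline_rank(generators, passages, score_fn, utility_fn, keep=2):
    totals = {name: 0.0 for name in generators}
    for passage in passages:
        for block in passage:                      # constituent blocks of the input passage
            weight = utility_fn(block)
            for name, generate in generators.items():
                equivalent = generate(block)
                totals[name] += weight * score_fn(block, equivalent)
    ranked = sorted(totals, key=totals.get, reverse=True)
    return ranked[:keep], totals

# Toy generators and scoring, for demonstration only.
generators = {"gen_a": lambda b: b.upper(), "gen_b": lambda b: b}
passages = [["print(1)", "x = 2"]]
selected, scores = offline_rank(
    generators, passages,
    score_fn=lambda ref, out: 1.0 if out.lower() == ref.lower() else 0.0,
    utility_fn=lambda block: 2.0 if "print" in block else 1.0,
)
print(selected, scores)
```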

STREAMING DATA TO MULTI-TILE PROCESSING SYSTEM (18005246)

Main Inventor

Daniel John Pelham WILKINSON


Brief explanation

The abstract describes a processing system that consists of multiple chips, each containing several tiles. Each tile has its own processing unit and memory, which stores a codelet. The system includes an encryption unit that can encrypt and decrypt data transferred between the tiles and a trusted computing entity through an external computing device. The codelets are designed to guide the tiles in transferring the encrypted data by reading from and writing to different memory regions in the external memory. This process creates multiple streams of encrypted data, with each stream utilizing a specific memory region in the external computing device.

Abstract

A processing system comprising one or more chips, each comprising a plurality of tiles is described. Each tile comprises a respective processing unit and memory, the memory storing a codelet. The processing system has at least one encryption unit configured to encrypt and decrypt data transferred between the tiles and a trusted computing entity via an external computing device. The codelets are configured to instruct the tiles to transfer the encrypted data by reading from and writing to a plurality of memory regions at the external memory such that a plurality of streams of encrypted data are formed, each stream using an individual one of the memory regions at the external computing device.

INTERRUPTION DETECTION DURING AUTOMATED WORKFLOW (17726053)

Main Inventor

Micheal DUNN


Brief explanation

This abstract describes a system and method for detecting interruptions in an automated workflow. An automated workflow consists of a series of actions performed by a computer. A workflow manager executes the workflow by following instructions associated with it. When the workflow moves to a new state, an interruption detection engine checks if there is an interruption by analyzing attributes of the state and the user interface. This engine may use techniques like examining a document object model or computer vision. If an interruption is detected, the workflow is paused until the interruption is resolved, typically by the user providing a required input. Once the interruption is resolved, the workflow resumes and continues until it is completed.

Abstract

Systems and methods are provided for detecting an interruption during an automated workflow. An automated workflow may comprise a series of actions to be performed by or with the assistance of a computer. A workflow manager executes a workflow by progressing through a series of workflow states according to instructions associated with the workflow. When the workflow advances to a new state, an interruption detection engine determines whether the state contains an interruption by examining one or more attributes of the workflow state and/or the user interface associated therewith. The interruption detection engine may examine a document object model and/or utilize computer vision to determine whether an interruption has occurred. When an interruption is detected, a workflow is paused until the interruption is resolved, such as by a user providing a required input. After an interruption has been resolved, the workflow resumes and continues until completion of the workflow.

DISTRIBUTED, DECENTRALIZED TRAFFIC CONTROL FOR WORKER PROCESSES IN LIMITED-COORDINATION ENVIRONMENTS (17725797)

Main Inventor

Philip Raymond NADEAU


Brief explanation

The abstract describes a solution for managing traffic control in a decentralized system where multiple worker processes (WPs) need to access a shared resource. Each WP receives historical warnings about the resource and uses this information to determine if accessing the resource would exceed a certain limit. If it would exceed the limit, the WP does not access the resource. If it would not exceed the limit, the WP accesses the resource to perform a specific task.

Abstract

Solutions for distributed, decentralized traffic control for worker processes (WPs) in limited-coordination environments include: by each WP of a plurality of WPs: receiving, by the WP, indications of historical warnings corresponding to a target resource; based on at least the indications of historical warnings, autonomously determining, by the WP, whether the WP accessing the target resource would exceed a dynamic threshold of WPs permitted to access the target resource; based on at least determining that the WP accessing the target resource would exceed the dynamic threshold, not accessing the target resource by the WP; and based on at least determining that the WP accessing the target resource would not exceed the dynamic threshold, accessing the target resource, by the WP, to perform a first data management task.
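Each worker's choice comes down to a local comparison against a dynamic threshold derived from historical warnings. The sketch below makes that assumption concrete with an invented threshold rule; nothing here is the actual formula from the application.

```python
# Illustrative worker-process decision: estimate current pressure on the target
# resource from recent warnings and back off when the dynamic threshold would be exceeded.
# The threshold formula is an assumption for demonstration.
import time

def should_access(historical_warnings, base_limit=10, window_s=60, now=None):
    now = now or time.time()
    recent = [w for w in historical_warnings if now - w < window_s]
    dynamic_threshold = max(1, base_limit - len(recent))   # fewer slots when warnings are fresh
    estimated_active = len(recent)                         # crude proxy for concurrent workers
    return estimated_active + 1 <= dynamic_threshold

warnings = [time.time() - 5, time.time() - 20, time.time() - 300]
if should_access(warnings):
    print("access the target resource and run the data management task")
else:
    print("back off; accessing now would exceed the dynamic threshold")
```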

DEEP NEURAL NETWORKS (DNN) INFERENCE USING PRACTICAL EARLY EXIT NETWORKS (17725825)

Main Inventor

Anand PADMANABHA IYER


Brief explanation

The abstract describes methods and systems for using machine learning to make inferences. These methods involve splitting a machine learning model into smaller portions based on a load forecast, determining the batch size for processing requests, and using available resources to execute the model portions and generate inferences.

Abstract

The present disclosure relates to methods and systems for providing inferences using machine learning systems. The methods and systems receive a load forecast for processing requests by a machine learning model and split the machine learning model into a plurality of machine learning model portions based on the load forecast. The methods and systems determine a batch size for the requests for the machine learning model portions. The methods and systems use one or more available resources to execute the plurality of machine learning model portions to process the requests and generate inferences for the requests.
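A toy sketch of the split-and-batch decision follows. The heuristics for choosing the number of portions and the batch size are invented assumptions, and the "layers" are plain functions standing in for real model stages.

```python
# Hypothetical sketch: choose how many portions to split a model into and what batch
# size to use from a load forecast, then run the portions as a simple pipeline.

def plan(load_forecast_rps, layers, max_batch=32):
    portions = 1 if load_forecast_rps < 100 else 2 if load_forecast_rps < 1000 else 4
    batch_size = min(max_batch, max(1, load_forecast_rps // 50))
    split = len(layers) // portions or 1
    return [layers[i:i + split] for i in range(0, len(layers), split)], batch_size

def run(portions, batch):
    for portion in portions:          # each portion could run on a different available resource
        for layer in portion:
            batch = [layer(x) for x in batch]
    return batch

layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x ** 2]
portions, batch_size = plan(load_forecast_rps=500, layers=layers)
print(len(portions), batch_size, run(portions, list(range(batch_size)))[:4])
```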

MEMORY PAGE MARKINGS AS LOGGING CUES FOR PROCESSOR-BASED EXECUTION TRACING (17921048)

Main Inventor

Jordi MOLA


Brief explanation

This abstract describes a cache-based tracing technique that categorizes memory regions as either logged or not logged. The computer system identifies a memory region in a specific context and determines if that context is in a logging state. It then configures a data structure to categorize the memory region accordingly. Another memory region in a different context is categorized as not logged. The data structure is made accessible to the processor. When the processor detects a memory access, it uses the categorization of the target memory address, the logging state of the executing context, and the type of memory access to decide whether to initiate a logging action or not.

Abstract

Cache-based tracing based on categorizing memory regions as being logged or not logged. A computer system identifies a first memory region within a first memory space of a first context, and determines that the first context is in a logging state. The computer system configures a data structure to categorize the first memory region as being logged. The data structure also categorizes a second memory region corresponding to a second context as being not logged. The computer system exposes the data structure to a processor. Upon detecting a memory access by a processing unit, the processor uses determinations of one or more of (i) whether a target memory address is categorized as being logged or not logged, (ii) whether an executing context is logging or non-logging, or (iii) a type of the memory access to initiate a logging action or refrain from the logging action.

FETCHING NON-ZERO DATA (17729931)

Main Inventor

Karthikeyan AVUDAIYAPPAN


Brief explanation

The abstract describes techniques for storing and retrieving data. It mentions that data is stored as sub-matrices, with row slices and column slices. A fetch circuit is used to determine if certain slices of one sub-matrix, when combined with corresponding slices of another sub-matrix, result in zero and therefore do not need to be retrieved. The abstract also mentions a memory circuit with memory banks and sub-banks, where slices of sub-matrices are stored. The request for data moves between these memory banks, and slices from different sub-banks can be retrieved simultaneously.

Abstract

Embodiments of the present disclosure include techniques for storing and retrieving data. In one embodiment, sub-matrices of data are stored as row slices and column slices. A fetch circuit determines if particular slices of one sub-matrix, when combined with corresponding slices of another sub-matrix, produce a zero result and need not be retrieved. In another embodiment, the present disclosure includes a memory circuit comprising memory banks and sub-banks. The sub-banks store slices of sub-matrices. A request moves between serially configured memory banks and slices in different sub-banks may be retrieved at the same time.
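The fetch-skipping idea amounts to checking whether a slice of one operand and the matching slice of the other can contribute anything non-zero before fetching them. The sketch below is a software analogy of that hardware check; the slice granularity and layout are illustrative assumptions.

```python
# Software analogy of the fetch-skip check described above: a column-slice of A and the
# matching row-slice of B are only "fetched" (multiplied) if both contain non-zero data.
import numpy as np

def sparse_aware_matmul(a, b, slice_size=2):
    out = np.zeros((a.shape[0], b.shape[1]))
    skipped = 0
    for k in range(0, a.shape[1], slice_size):
        a_slice, b_slice = a[:, k:k + slice_size], b[k:k + slice_size, :]
        if not a_slice.any() or not b_slice.any():   # product of the slices would be zero
            skipped += 1
            continue
        out += a_slice @ b_slice
    return out, skipped

a = np.array([[1., 0., 0., 0.], [2., 0., 0., 0.]])
b = np.array([[1., 1.], [0., 0.], [0., 0.], [0., 0.]])
result, skipped_slices = sparse_aware_matmul(a, b)
print(result, "skipped slice pairs:", skipped_slices)
```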

SEMI-PROGRAMMABLE AND RECONFIGURABLE CO-ACCELERATOR FOR A DEEP NEURAL NETWORK WITH NORMALIZATION OR NON-LINEARITY (18218426)

Main Inventor

Stephen Sangho YOUN


Brief explanation

This abstract describes a technology that uses a configurable stacked architecture to accelerate operations or layers of a deep neural network (DNN). The architecture includes a fixed function datapath with micro-execution units that perform various operations for a DNN layer. The datapath can be customized based on the specific DNN or operation being performed.

Abstract

The present disclosure relates to devices for using a configurable stacked architecture for a fixed function datapath with an accelerator for accelerating an operation or a layer of a deep neural network (DNN). The stacked architecture may have a fixed function datapath that includes one or more configurable micro-execution units that execute a series of vector, scalar, reduction, broadcasting, and normalization operations for a DNN layer operation. The fixed function datapath may be customizable based on the DNN or the operation.

SYSTEM AND METHOD FOR MACHINE LEARNING FOR SYSTEM DEPLOYMENTS WITHOUT PERFORMANCE REGRESSIONS (18345789)

Main Inventor

Irene Rogan SHAFFER


Brief explanation

The abstract describes a method of using machine learning to deploy systems and devices without any decrease in performance. It involves using a performance safeguard system to conduct pre-production experiments and determine if learned models are ready for production. This is done by utilizing big data processing infrastructure and deploying a large set of learned or optimized models for the query optimizer. The process involves learning and training different query plans, comparing their impact with and without the learned models, selecting the plan differences that are likely to have the most significant performance difference, conducting a limited number of pre-production experiments to observe the runtime performance, and selecting the models that consistently improve performance for deployment. This performance safeguard system allows for safe deployment of learned or optimized models, as well as other machine learning features for systems.

Abstract

Methods of machine learning for system deployments without performance regressions are performed by systems and devices. A performance safeguard system is used to design pre-production experiments for determining the production readiness of learned models based on a pre-production budget by leveraging big data processing infrastructure and deploying a large set of learned or optimized models for its query optimizer. A pipeline for learning and training differentiates the impact of query plans with and without the learned or optimized models, selects plan differences that are likely to lead to the most dramatic performance difference, runs a constrained set of pre-production experiments to empirically observe the runtime performance, and finally picks the models that are expected to lead to consistently improved performance for deployment. The performance safeguard system enables safe deployment not just for learned or optimized models but also for other ML-for-Systems features.

Extension for Third Party Provider Data Access (17725249)

Main Inventor

Robert Peter Damore II


Brief explanation

This abstract describes a computer-implemented method for retrieving data from multiple providers. The method starts by receiving a request for data to be retrieved from one of the providers. It then identifies the provider definition data structure for that provider and formats the request accordingly. The formatted request is sent to the provider, and a response is received from the provider.

Abstract

A computer implemented method includes receiving a first request for data to be retrieved from a first provider of multiple providers. A first provider definition data structure is identified. The first request is then formatted in accordance with the first provider definition data structure to generate a first formatted request. The first formatted request is sent to the first provider and a first response is received from the first provider.

PARTITIONING TIME SERIES DATA USING CATEGORY CARDINALITY (17727647)

Main Inventor

Nazmiye Ceren ABAY


Brief explanation

This abstract describes a method for dividing time series data into subsets without any duplicate time index values. The data is categorized and a probabilistic method is used to estimate the number of unique values in each category. A category is then selected based on its estimated cardinality value. A time series identifier is created using the selected category, and the data is partitioned into subsets based on this identifier. These subsets can be used to train machine learning models.

Abstract

The disclosure herein describes using probabilistic cardinality generation to partition time series data into subsets without entries that have duplicate time index values. Time series data including a plurality of categories and a time index category is obtained. Cardinality estimate values of the categories are generated using a probabilistic cardinality estimator and a candidate category is selected based on the cardinality estimate value of the selected candidate category. A time series identifier is generated using the candidate category and, based on the cardinality estimate value of the time series identifier indicating that subsets of the time series data partitioned based on the time series identifier lack entries with duplicate time index values, the time series data is partitioned into a set of time series grain data sets. The time series grain data sets can be used to train models using machine learning techniques.
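The core test is whether a candidate grouping identifier is fine-grained enough that each partition contains at most one row per time index. The sketch below uses exact counting as a stand-in for the probabilistic cardinality estimator mentioned in the abstract; the column names are illustrative.

```python
# Sketch of the partitioning test: a candidate identifier is acceptable when no partition
# contains duplicate time-index values. Exact counting stands in for the probabilistic
# cardinality estimator; column names are invented for illustration.
from collections import defaultdict

rows = [
    {"store": "A", "sku": "x", "date": "2023-01-01", "sales": 3},
    {"store": "A", "sku": "y", "date": "2023-01-01", "sales": 5},
    {"store": "B", "sku": "x", "date": "2023-01-01", "sales": 2},
]

def partition_ok(rows, id_columns, time_column="date"):
    seen = defaultdict(set)
    for row in rows:
        key = tuple(row[c] for c in id_columns)
        if row[time_column] in seen[key]:
            return False                      # duplicate time index within one grain
        seen[key].add(row[time_column])
    return True

print(partition_ok(rows, ["store"]))          # False: store "A" has two rows for the same date
print(partition_ok(rows, ["store", "sku"]))   # True: each (store, sku) grain is unique per date
```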

QUERY CONVERSION FOR DIFFERENT GRAPH QUERY LANGUAGES (17920215)

Main Inventor

Siming Tian


Brief explanation

This abstract describes a method and apparatus for converting queries between different graph databases. The process involves obtaining a query for a first graph database, generating a syntax tree by parsing the query, creating a query graph based on the syntax tree, and then converting the query graph into a query suitable for a second graph database.

Abstract

The present disclosure provides method and apparatus for query conversion. A first query for a first graph database may be obtained. A syntax tree may be generated through parsing the first query. A query graph may be created based on the syntax tree. The query graph may be converted into a second query for a second graph database.

INFERRING INFORMATION ABOUT A WEBPAGE BASED UPON A UNIFORM RESOURCE LOCATOR OF THE WEBPAGE (18345834)

Main Inventor

Siarhei ALONICHAU


Brief explanation

This abstract describes technologies that can infer information about a webpage based on the semantics of its URL. The URL is broken down into individual tokens, and an embedding is created based on these tokens, which represents the meaning or context of the URL. Using this embedding, information about the webpage linked to by the URL is inferred. The webpage is then retrieved, and information is extracted from it based on the inferred information about the webpage.

Abstract

Described herein are technologies related to inferring information about a webpage based upon semantics of a uniform resource location (URL) of the webpage. The URL is tokenized to create a sequence of tokens. An embedding for the URL is generated based upon the sequence of tokens, wherein the embedding is representative of semantics of the URL. Based upon the embedding for the URL, information about the webpage pointed to by the URL is inferred, the webpage is retrieved, and information is extracted from the webpage based upon the information inferred about the webpage.
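A minimal illustration of the tokenize-then-infer step follows. The tokenizer, the bag-of-tokens "embedding", and the keyword rules are crude stand-ins for the learned components described above.

```python
# Toy stand-in for the URL pipeline described above: tokenize the URL, build a crude
# bag-of-tokens representation, and infer a page type from it. The real system uses a
# learned embedding; the token lists and rules here are invented for illustration.
import re
from collections import Counter

def tokenize_url(url):
    return [t for t in re.split(r"[/:.\-_?=&]+", url.lower()) if t]

def infer_page_type(tokens):
    bag = Counter(tokens)                       # stand-in for a semantic embedding
    if bag["product"] or bag["item"] or bag["sku"]:
        return "product page"
    if bag["blog"] or bag["article"] or bag["news"]:
        return "article"
    return "unknown"

url = "https://www.example.com/blog/2023/10/inferring-page-info"
tokens = tokenize_url(url)
print(tokens, "->", infer_page_type(tokens))
```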

MACHINE LEARNING PIPELINE (18003875)

Main Inventor

Martin Philip GRAYSON


Brief explanation

This abstract describes a tool that can be used to analyze and modify a machine learning pipeline. The pipeline consists of multiple stages, each of which takes an input state and produces an output state. The output state of each stage, except the last one, is passed on as the input state to the next stage. Some stages have adjustable parameters that can affect the mapping of the input state to the output state. 

The tool includes a data interface that reads the output state of the pipeline, which is referred to as "probed pipeline data". It also includes a user interface module that presents information about the probed pipeline data to the user through a user interface. The user interface module provides controls that allow the user to adjust the parameters of one or more stages in the pipeline based on the presented information.

Abstract

A tool for probing a machine learning pipeline, wherein each pipeline stage performs a respective mapping of a respective input state to a respective output state, and each stage but the last provides its output state as the input state to a respective successive stage in the pipeline. At least one pipeline stage has one or more adjustable parameters which affect the respective mapping. The tool comprises: a data interface for reading probed pipeline data from the pipeline, the probed data comprising at least some of the output state of at least one pipeline stage; and a user interface module configured to present information on the probed pipeline data to a user through a user interface, and to provide at least one user interface control enabling the user to adjust one or more parameters of at least one of the stages in the pipeline based on the presented information.

EXPLORING ENTITIES OF INTEREST OVER MULTIPLE DATA SOURCES USING KNOWLEDGE GRAPHS (17729957)

Main Inventor

Sarah PANDA


Brief explanation

The abstract describes methods and systems for analyzing textual data. It explains that these methods and systems can identify entities and relationships within the text, and create knowledge graphs based on this information. The knowledge graphs can be expanded by applying functions to the nodes, and additional knowledge graphs can be generated from different data sources. Finally, a merged knowledge graph is created by combining the initial and second knowledge graphs.

Abstract

The present disclosure relates to methods and systems for exploring textual data. The methods and systems identify entities and the relations among the entities within the text of an initial data source and generate knowledge graphs on-the-fly for the identified entities and the relations. The methods and systems apply one or more functions on the nodes of an initial knowledge graph and extend the initial knowledge graph in response to the one or more functions applied. The methods and systems use a different data source to generate a second knowledge graph for the extended initial knowledge graph. The methods and systems generate a merged knowledge graph with the initial knowledge graph and the second knowledge graph.

TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION (17724783)

Main Inventor

Bhargavkumar Kanubhai Patel


Brief explanation

This abstract describes a technique that uses supervised machine learning to determine if a content posting on an online service is a poll or survey. If it is determined to be a poll, the content is further analyzed to identify the question and answers. This information is then used to create a formal or structured poll, and the user who posted the content is given the option to convert it into a specific format for polls.

Abstract

Described herein are techniques for using supervised machine learning to determine whether a content posting posted to a feed of an online service, has been posted with the intent that the content posting is a poll or survey. Upon making a determination that a content posting is or includes a poll, the content posting is further analyzed to identify within the content posting a question and/or answers to the question. The identified question and answers are then used to populate data fields associated with a formal or structured poll, and the end-user who posted the content posting is provided an option to convert the poll from a first content posting format to a second content posting format that is specifically for a formal or structured poll.

MACHINE LEARNING BASED MONITORING FOCUS ENGINE (17729330)

Main Inventor

Kiran RAMA


Brief explanation

This abstract describes a machine learning system that monitors computing systems to predict if they will continue running smoothly or if they will experience issues or failures. The system collects numeric and text data from the computing systems and uses this information to determine the likelihood of a particular state or condition. The text data is processed using word embedding techniques to generate embedded text features, which are then combined with numerical features and local characteristic information as inputs to a neural network. The neural network analyzes these inputs and provides an output that predicts the likelihood of the state being monitored.

Abstract

A machine learning based monitoring focus engine is provided. Numeric and text features are collected from a computing system(s) and are utilized to determine if the system(s) will continue to run without issues or failures. That is, external characteristic information is received that corresponds to a predicted likelihood of a state that is associated with a processing system, and textual and numerical portions of the external characteristic information are mapped to neural network inputs. Word embedding is performed on the textual portion to generate embedded text features, and a plurality of inputs are provided to the neural network, where the plurality of inputs includes at least embedded text features, numerical features based on the numerical portion, and local features based on local characteristic information. Accordingly, the predicted likelihood of the state is determined based at least on an output of the neural network from the plurality of inputs.

CONTROLLING APPLICATION STATE IN A MIXED REALITY INSTALLATION (17852322)

Main Inventor

David Ben SILVERMAN


Brief explanation

This abstract describes a system and method for displaying content in mixed reality installations. The method involves identifying the user's position and gaze direction using a user device. It then determines the view volume that intersects with the user's gaze direction while they are in the mixed reality space. The system identifies that this view volume is associated with the mixed reality space and selects a content entity related to the view volume to display to the user through their device.

Abstract

A system and computerized method for rendering content in a mixed reality installation is provided. The method includes identifying a mixed reality space containing a user position of a user, determining a user gaze direction of the user via a user device, identifying a view volume intersecting the user gaze direction while the user is in the mixed reality space, determining that the view volume is associated with the mixed reality space, and selecting a content entity associated with the view volume to render to the user via the user device.

DATA SENSITIVITY ESTIMATION (17728045)

Main Inventor

David TRIGANO


Brief explanation

The disclosed technology is about data classification, specifically identifying sensitive data within a given dataset. It involves using training data and a ground truth to train a model using natural language processing. The model learns features, including a naming feature associated with data resource names. Using supervised learning, a heuristic or machine learning model is created based on the training data and ground truth. When input data is provided, the model calculates a data resource sensitivity estimator (DRSE) value for each part of the data, considering the combination of features. If the DRSE value indicates potential sensitivity, that portion of the input data is flagged as potentially sensitive.

Abstract

The disclosed technology is generally directed to data classification. In one example of the technology, training data and a ground truth that indicates sensitive data within the training data is received. Based at least on the training data, natural language processing is used to learn features. The features include a naming feature that is associated with names of data resources in the training data. Based at least on the training data and the ground truth, using supervised learning, a model that is a heuristic model and/or a machine learning model is created. Input data information that is associated with input data is received. The model is used to determine a data resource sensitivity estimator (DRSE) value for each portion of the input data. The determination is based on the combination of features for the input data. Potentially sensitive data within the input data is flagged based on the DRSE values.
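The flagging step can be pictured as combining a naming feature with other per-resource features into a score and flagging portions above a threshold. The features, weights, and threshold below are assumptions, not the trained heuristic or machine learning model described above.

```python
# Illustrative DRSE-style flagging: combine a naming feature (does the resource name
# look sensitive?) with another simple feature into a score, then flag high scores.
# Weights, features, and threshold are assumptions, not the trained model.

SENSITIVE_NAME_HINTS = ("ssn", "salary", "password", "dob", "credit")

def drse(resource_name, sample_values):
    naming = any(hint in resource_name.lower() for hint in SENSITIVE_NAME_HINTS)
    numeric_ratio = sum(v.replace("-", "").isdigit() for v in sample_values) / max(len(sample_values), 1)
    return 0.7 * naming + 0.3 * numeric_ratio      # toy combination of features

def flag_sensitive(columns, threshold=0.5):
    return [name for name, values in columns.items() if drse(name, values) >= threshold]

columns = {
    "employee_ssn": ["123-45-6789", "987-65-4321"],
    "favorite_color": ["blue", "green"],
}
print(flag_sensitive(columns))   # ['employee_ssn']
```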

Progressive Transformation of Face Information (17729987)

Main Inventor

Yatao ZHONG


Brief explanation

The abstract describes a system that can generate a new image by combining a source image of a face with driving information. The source image includes data about the identity, pose, and expression of the face. The driving information specifies certain characteristics. The system uses multiple components that work at different levels and resolutions of a neural network. These components use geometric displacement field information to determine the differences between the source image and the driving information. Overall, the system can create a target image that combines features from the source image and the driving information.

Abstract

A face-processing system is described for producing a target image based on a source image and driving information. The source image includes data depicting at least a face of a source subject having a source identity, a source pose, and a source expression. The driving information specifies one or more driving characteristics. The target image combines characteristics of the source image and the driving information. According to illustrative implementations, the face-processing system produces the target image by using plural warping subcomponents that operate at plural respective levels of a neural network and at increasing respective resolutions. Each warping subcomponent operates, in part, based on geometric displacement field (GDF) information that describes differences between a source mesh derived from the source image and a driving mesh derived from the driving information.

METHODS FOR ADJUSTING DISPLAY ENGINE PERFORMANCE PROFILES (17660816)

Main Inventor

Dmitriy CHURIN


Brief explanation

The abstract describes a system for a display engine that uses an optical imaging pathway and an illumination beam pathway. The optical imaging pathway includes a selectively reflective image forming device, while the illumination beam pathway consists of an optical source cluster, optical componentry for generating uniform illumination, and photodiodes for capturing reflected light. A controller commands the image forming device to operate with a specific reflectivity. During this operation, the optical source emits a pulse of light and the photodiodes capture the reflected light. The performance of the optical sources and the image forming device can be adjusted based on the data obtained from the photodiodes.

Abstract

A system is presented for a display engine. An optical imaging pathway comprises at least a selectively reflective image forming device. An illumination beam pathway comprises an optical source cluster including one or more optical sources, optical componentry configured to generate uniform illumination of the selectively reflective image forming device, and one or more photodiodes positioned to capture light reflected off the selectively reflective image forming device. A controller is configured to command the selectively reflective image forming device to operate with a predetermined reflectivity. While the selectively reflective image forming device is operating with the predetermined reflectivity, the optical source is commanded to emit a pulse of light and the one or more photodiodes are read out. A performance profile of one or more of the optical sources and the selectively reflective image forming device is adjusted based on the photodiode readout.

Keyword Detection for Audio Content (17804603)

Main Inventor

Zvi FIGOV


Brief explanation

The present disclosure describes improved systems and methods for detecting keywords in audio content. The audio content is divided into smaller segments, and corresponding text segments are generated for each audio segment. Textual analysis is performed to generate phrase candidate values, and sentence embedding analysis is used to generate sentence embedding values. An average sentence embedding value is calculated, and each phrase candidate value is compared to this average value. If a phrase candidate value exceeds a certain threshold, it is labeled as a keyword.

Abstract

Examples of the present disclosure describe improved systems and methods for detecting keywords in audio content. In one example implementation, audio content is segmented into one or more audio segments. One or more text segments is generated, each text segment corresponding to each of the audio segments. For each text segment, one or more phrase candidate values is generated using a textual analysis, and one or more sentence embedding values is generated using a sentence embedding analysis. Next, an average sentence embedding value is calculated using the one or more sentence embedding values. Each of the one or more phrase candidate values is compared to the average sentence embedding value. Each phrase candidate value having a comparison value above a threshold value is labeled as representing a keyword.
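The comparison step can be sketched with cosine similarity between each phrase-candidate vector and the average sentence-embedding vector. The toy vectors below stand in for real sentence embeddings, and the threshold is an arbitrary assumption.

```python
# Sketch of the comparison described above: each phrase-candidate embedding is compared
# to the average sentence embedding of the segment, and candidates above a threshold are
# labeled keywords. Vectors here are toy stand-ins for real embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def keywords_for_segment(phrase_candidates, sentence_embeddings, threshold=0.8):
    dims = len(sentence_embeddings[0])
    average = [sum(vec[i] for vec in sentence_embeddings) / len(sentence_embeddings)
               for i in range(dims)]
    return [phrase for phrase, vec in phrase_candidates.items()
            if cosine(vec, average) >= threshold]

sentence_embeddings = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
phrase_candidates = {"quarterly revenue": [0.85, 0.15, 0.05], "um okay": [0.0, 0.1, 0.9]}
print(keywords_for_segment(phrase_candidates, sentence_embeddings))
```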

INTELLIGENT DISPLAY OF AUDITORY WORLD EXPERIENCES (17726465)

Main Inventor

Venkata Naga Vijaya Swetha MACHANAVAJHALA


Brief explanation

The techniques described in this abstract involve using specialized artificial intelligence models to display visual representations of auditory experiences. These models can analyze various aspects of speech, such as volume and tone, to identify specific characteristics. They can also recognize keywords in the speech to distinguish different parts of a transcript. Additionally, the models can analyze non-speech audio sounds to identify non-speech events. The system then combines these analyses with user interface attributes to provide visual indicators for the different aspects of the auditory signals.

Abstract

The techniques disclosed herein provide intelligent display of auditory world experiences. Specialized AI models are configured to display integrated visualizations for different aspects of the auditory signals that may be communicated during an event, such as a meeting, chat session, etc. For instance, a system can use a sentiment recognition model to identify specific characteristics of a speech input, such as volume or tone, provided by a participant. The system can also use a speech recognition model to identify keywords that can be used to distinguish portions of a transcript that are displayed. The system can also utilize an audio recognition model that is configured to analyze non-speech audio sounds for the purposes of identifying non-speech events. The system can then integrate the user interface attributes, distinguished portions of the transcript, and visual indicators describing the non-speech events.

FETCHING NON-ZERO DATA (17729948)

Main Inventor

Karthikeyan AVUDAIYAPPAN


Brief explanation

The abstract describes techniques for storing and retrieving data. One approach involves storing data as sub-matrices in row and column slices. A fetch circuit is used to determine if certain slices from one sub-matrix, when combined with corresponding slices from another sub-matrix, result in zero and therefore do not need to be retrieved. Another approach involves a memory circuit with memory banks and sub-banks. The sub-banks store slices of sub-matrices, and a request can move between the memory banks in a serial manner. This allows for the retrieval of slices from different sub-banks simultaneously.

Abstract

Embodiments of the present disclosure include techniques for storing and retrieving data. In one embodiment, sub-matrices of data are stored as row slices and column slices. A fetch circuit determines if particular slices of one sub-matrix, when combined with corresponding slices of another sub-matrix, produce a zero result and need not be retrieved. In another embodiment, the present disclosure includes a memory circuit comprising memory banks and sub-banks. The sub-banks store slices of sub-matrices. A request moves between serially configured memory banks and slices in different sub-banks may be retrieved at the same time.

HOMOGENEOUS CHIPLETS CONFIGURABLE AS A TWO-DIMENSIONAL SYSTEM OR A THREE-DIMENSIONAL SYSTEM (17728761)

Main Inventor

Haohua ZHOU


Brief explanation

The abstract describes a type of chip called a chiplet that can be configured either as a two-dimensional system or a three-dimensional system. The chiplet system consists of multiple chiplets stacked on top of each other. Each chiplet contains an integrated circuit (IC) die with a logic block and a memory block. The logic block and memory block are connected through paths for transferring data signals. In this example, the first chiplet is stacked with the second chiplet, creating additional paths for transferring data signals between the logic block of the first chiplet and the memory block of the second chiplet, and vice versa. This configuration allows for more efficient data transfer and improved performance.

Abstract

Homogeneous chiplets configurable both as a two-dimensional system or a three-dimensional system are described. An example chiplet system has a first homogeneous chiplet (HC) including a first integrated circuit (IC) die having a first logic block and a first memory that are interconnected via a first path for transfer of data signals between the first logic block and the first memory block. A second HC including a second IC die having a second logic block and a second memory block, interconnected via a second path for transfer of data signals between the second logic block and the second memory block, is stacked vertically on top of the first HC to provide a third path for transfer of data signals between the first logic block and the second memory block and a fourth path for transfer of data signals between the second logic block and the first memory block.

DYNAMIC SHIFT IN OUTPUTS OF SERIAL AND PARALLEL SCRAMBLERS AND DESCRAMBLERS (17727324)

Main Inventor

Asaf LEVY


Brief explanation

This abstract describes methods and systems for reconfiguring the position of a tap in a descrambler circuit after it has been trained and synchronized with a corresponding scrambler circuit. The reconfiguration, known as the "lock-shift" operation, involves bypassing certain logic elements in the data path to reduce delay in the descrambler circuit. The tap position change can be communicated to the scrambler circuit through a mode manager, either directly or indirectly. The indirect method involves in-band transmissions between two integrated circuits (ICs) that have self-synchronizing scrambler/descrambler pairs. The reconfiguration in the scrambler circuit is based on monitored receiver signals that indicate synchronization or the presence of descrambled data in the descrambler circuit's output.

Abstract

Methods and systems are provided for reconfiguring the position of a first tap in a descrambler circuit LFSR after the LFSR has been trained and synchronized with a corresponding scrambler circuit LFSR. A data path from the second tap position to the descrambler output bypasses logic elements located in the data path from the first tap to the descrambler output, thereby reducing delay in the descrambler circuit after the reconfiguration (i.e., the “lock-shift” operation). The tap position change may be communicated by a mode manager to a corresponding scrambler circuit, for applying a matching reconfiguration in the scrambler circuit, either directly via an I/O line or indirectly. The indirect route includes in-band transmissions between two ICs with two sets of self-synchronizing scrambler/descrambler pairs, and is based on monitored receiver LFSR output signals that indicate when a scrambler/descrambler pair is synchronized or whether the output of a descrambler circuit comprises descrambled data.

RANKING CHANGES TO INFRASTRUCTURE COMPONENTS BASED ON PAST SERVICE OUTAGES (17729278)

Main Inventor

Nidhi VERMA


Brief explanation

No substantive abstract is available for this application: the published text contains only a prosecution instruction directing that the Abstract of the Disclosure be replaced, so no summary of the invention itself can be given here.

Abstract

Please replace the Abstract of the Disclosure with the following Abstract showing all changes relative to the previous version of the Abstract (In the replacement Abstract, the header and footer have been marked out):

GLOBAL INTENT-BASED CONFIGURATION TO LOCAL INTENT TARGETS (17727411)

Main Inventor

Michael Anthony BROWN


Brief explanation

The abstract describes a system for network configuration that uses intent-based technology. It uses a two-tiered reconciliation model to adapt a global intent to multiple local intent declarations for different regions. The system includes a global reconciliation engine that receives a global intent for the network and generates local intent declarations for each region. It also includes multiple local reconciliation engines that reconcile the local configurations of network functions in each region with their respective local intent declarations.

Abstract

Described are examples for providing intent based network configuration with a two-tiered reconciliation model for adapting a global intent to a plurality of local intent declarations for respective regions. A system for network configuration may include a global reconciliation engine configured to: receive a global intent including requirements, goals, and constraints for a network; generate a plurality of local intent declarations from the global intent, each respective local intent declaration being for a respective region that hosts network functions; and reconcile a plurality of the local configurations with the global intent. The system may include a plurality of local reconciliation engines configured to reconcile a local configuration of the network functions hosted in the respective region with the respective local intent declaration.

THREAT DETECTION USING CLOUD RESOURCE MANAGEMENT LOGS (18208022)

Main Inventor

Roy LEVIN


Brief explanation

This abstract discusses devices, systems, and methods for enhancing the security of cloud resources. The method involves obtaining a cloud resource management log that records the actions performed by users in a cloud portal, including user IDs, the operations performed, the target resources, and timestamps. Each action is assigned a score, which is then compared to a specified criterion. If a score satisfies the criterion, the corresponding action is flagged as anomalous.

Abstract

Generally discussed herein are devices, systems, and methods for improving cloud resource security. A method can include obtaining a cloud resource management log that details actions performed by users of cloud resources in a cloud portal, the actions including entries comprising at least two of a user identification (ID) of a user of the users, an operation of operations performed on the cloud resource, a uniform resource identifier (URI) of a cloud resource of the cloud resources that is a target of the operation, or a time the operation was performed. The method can include determining a respective score for each action in the cloud resource management log, comparing the respective score to a specified criterion, and providing an indication of anomalous action in response to determining the respective score satisfies the specified criterion.
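The per-action scoring and thresholding can be pictured as below. The rarity-based score is an assumption standing in for whatever anomaly model the system actually uses; the log entries and criterion are invented.

```python
# Hypothetical scoring of cloud resource management log entries: rarer
# (user, operation) pairs get higher scores, and scores meeting the criterion are
# reported as anomalous. The rarity heuristic is illustrative only.
from collections import Counter

log = [
    {"user": "alice", "operation": "read",   "uri": "/subscriptions/1/vm/a", "time": 1},
    {"user": "alice", "operation": "read",   "uri": "/subscriptions/1/vm/b", "time": 2},
    {"user": "bob",   "operation": "delete", "uri": "/subscriptions/1/keyvault", "time": 3},
]

def anomalous_actions(log, criterion=0.6):
    pair_counts = Counter((e["user"], e["operation"]) for e in log)
    flagged = []
    for entry in log:
        score = 1.0 / pair_counts[(entry["user"], entry["operation"])]   # rarer pair -> higher score
        if score >= criterion:
            flagged.append((entry, round(score, 2)))
    return flagged

for entry, score in anomalous_actions(log):
    print(score, entry["user"], entry["operation"], entry["uri"])
```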

ORGANIZATION-LEVEL RANSOMWARE INCRIMINATION (17727759)

Main Inventor

Arie AGRANONIK


Brief explanation

The abstract describes a method to protect organizations from ransomware attacks by using different types of incrimination logics. These logics help detect and prevent attacks at various levels, such as across multiple machines, on individual machines, and within small groups of machines. The method involves comparing system graphs to known ransomware attack graphs and using statistical analysis and machine learning models. Additional search logics are used to find potential threats that may go undetected. The results of the incrimination logics are then used to enhance the security of the monitored system through various intervention mechanisms.

Abstract

Some embodiments help protect an organization against ransomware attacks by combining incrimination logics. An organizational-level incrimination logic helps detect alert spikes across many machines, which collectively indicate an attack. Graph-based incrimination logics help detect infestations of even a few machines, and local incrimination logics focus on protecting respective individual machines. Graph-based incrimination logics may compare monitored system graphs to known ransomware attack graphs. Graphs may have devices as nodes and device network connectivity, repeated files, repeated processes or actions, or other connections as edges. Statistical analyses and machine learning models may be employed as incrimination logics. Search logics may find additional incrimination candidates that would otherwise evade detection, based on files, processes, IP addresses, devices, accounts, or other computational entities previously incriminated. Incrimination engine results are forwarded to endpoint protection systems, intrusion protection systems, authentication controls, or other intervention mechanisms to enhance monitored system security.

HASH-BASED ENCODER DECISIONS FOR VIDEO CODING (18217034)

Main Inventor

Bin Li


Brief explanation

The abstract discusses innovations in encoder-side decisions that use the results of hash-based block matching. These include methods for building hash tables that contain some, but not all, uniform blocks; ways of determining motion vector resolution based on the results of hash-based block matching; and scene change detection, including long-term reference picture selection and picture quality determination during encoding.

Abstract

Innovations in encoder-side decisions that use the results of hash-based block matching are presented. For example, some of the innovations relate to ways of building hash tables that include some (but not all) uniform blocks. Other innovations relate to ways of determining motion vector resolution based on results of hash-based block matching. Still other innovations relate to scene change detection, including long-term reference picture selection and picture quality determination during encoding.

FLEXIBLE PRINTED CIRCUIT CABLE ASSEMBLY WITH ELECTROMAGNETIC SHIELDING (17660618)

Main Inventor

Jaejin LEE


Brief explanation

The abstract describes a flexible printed circuit (FPC) cable assembly that includes two layers of ground material and at least one signal line placed between them. The assembly also includes an electromagnetic shielding structure, consisting of two magnetic layers that cover and are electrically connected to the ground layers, as well as multiple magnetic rings that surround the ground layers, signal line, and magnetic layers. This shielding structure provides protection against electromagnetic interference for the signal line.

Abstract

An FPC cable assembly is provided that includes a first ground layer, a second ground layer, and at least one signal line sandwiched by the first and second ground layers. The FPC cable assembly further includes an electromagnetic shielding structure including a first magnetic layer at least partially covering and electrically grounded to the first ground layer, a second magnetic layer at least partially covering and electrically grounded to the second ground layer, and a plurality of magnetic rings magnetically engaged with and electrically contacting the first magnetic layer and the second magnetic layer so as to surround the first and second ground layers, the at least one signal line, and the first and second magnetic layers, thereby providing electromagnetic shielding of the at least one signal line.

CIRCUIT BOARD ARCHITECTURE SUPPORTING MULTIPLE COMPONENT SUPPLIERS (17729801)

Main Inventor

Fatemeh ZOLFAGHAR


Brief explanation

The abstract describes techniques and systems for electronic circuit board design. It focuses on an apparatus with an input structure that connects an input voltage node on the circuit board to the input nodes of multiple voltage regulator circuits, whose controller footprints differ from one another. The apparatus also includes a shared structure that connects the switch nodes of the voltage regulator circuits to the first terminal of a common inductor footprint, and an output structure that connects the second terminal of the inductor footprint to an output voltage node.

Abstract

Techniques and systems for electronic circuit board design are provided herein. An example apparatus comprises an input structure that couples an input voltage node on a circuit board to input nodes of more than one voltage regulator circuit, wherein at least controller footprints differ among each of the more than one voltage regulator circuit. The apparatus further comprises a shared structure that couples switch nodes of the more than one voltage regulator circuit to a first terminal of an inductor footprint common to the more than one voltage regulator circuit, and an output structure that couples a second terminal of the inductor footprint to an output voltage node.

3-D STRUCTURED TWO-PHASE COOLING BOILERS WITH NANO STRUCTURED BOILING ENHANCEMENT COATING (17725420)

Main Inventor

Dennis TRIEU


Brief explanation

The abstract describes a thermal management device consisting of a wicking heat spreader and a boiling enhancement surface feature. The wicking heat spreader distributes heat, while the boiling enhancement surface feature, positioned on at least one interior surface of the heat spreader, improves the boiling process.

Abstract

A thermal management device includes a wicking heat spreader and a boiling enhancement surface feature positioned on at least one interior surface of the wicking heat spreader.

CONFORMAL ELECTROMAGNETIC INTERFERENCE SHIELDING FILM (17660789)

Main Inventor

Jaejin LEE


Brief explanation

The abstract describes a type of film that can be used to shield electronic components from electromagnetic interference (EMI). The film consists of two layers - a thermal-forming film layer and an electrically conductive film layer. The thermal-forming film layer is designed to coat over electronic components on a substrate when heat is applied. The electrically conductive film layer is located on the opposite side of the thermal-forming film layer and contains voids that can deform when heat is applied. This allows the electrically conductive film layer to conform to the shape of the thermal-forming film layer, providing effective EMI shielding.

Abstract

Provided is a conformal electromagnetic interference (EMI) shielding film including a thermal-forming film layer and an electrically conductive film layer. The thermal-forming film layer is configured to conformally coat over one or more electronic components mounted on a substrate with application of heat. The electrically conductive film layer is formed on an opposite side of the thermal-forming film layer from the substrate and has a plurality of voids that are configured to deform during the application of heat and allow the electrically conductive film layer to conform together with the thermal-forming film layer.