Intel Corporation patent applications published on December 14th, 2023
Patent applications for Intel Corporation on December 14th, 2023
TECHNIQUES FOR MEMORY ACCESS IN A REDUCED POWER STATE (18196309)
Main Inventor
BINATA BHATTACHARYYA
Brief explanation
The abstract describes techniques for memory access by a computer in a reduced power state, such as during video playback or connected standby. These techniques disable one or more memory channels during the reduced power state by mapping all memory usage in that state to a single channel, so the remaining channels can be powered down (a mapping sketch follows the list below).
- The computer identifies low-power mode blocks within its functional blocks.
- The computer has a processor, memory, and multiple memory channels.
- Usage of the low-power mode blocks is mapped to a specific address range associated with a particular memory channel.
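The mapping step can be illustrated in software. Below is a minimal Python sketch, assuming a fixed 1 GiB address range per channel and illustrative block names; the patent does not publish a concrete mapping algorithm.

```python
# Minimal sketch: pack all low-power-mode memory usage into channel 0's
# address range so the remaining channels can be power-gated. Channel
# size and block names are illustrative assumptions.

CHANNEL_SIZE = 1 << 30  # assume 1 GiB of address space per channel

def channel_for(address: int) -> int:
    """Channel that backs a given physical address."""
    return address // CHANNEL_SIZE

def remap_for_low_power(block_sizes: dict[str, int]) -> dict[str, int]:
    """Assign each low-power block a base address inside channel 0."""
    next_free = 0
    remapped = {}
    for block, size in block_sizes.items():
        assert next_free + size <= CHANNEL_SIZE, "must fit in one channel"
        remapped[block] = next_free
        next_free += size
    return remapped

# Blocks that stay active during video playback or connected standby.
bases = remap_for_low_power({"audio_dsp": 4 << 20, "display_ctrl": 16 << 20})
assert all(channel_for(base) == 0 for base in bases.values())
```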
Potential Applications
- Power-efficient video playback on computers.
- Improved performance during connected standby mode.
- Enhanced power management in various computing devices.
Problems Solved
- Reducing power consumption during specific computer states.
- Optimizing memory access for improved performance.
- Managing power usage in a more efficient manner.
Benefits
- Extended battery life for portable devices.
- Enhanced performance during specific computer states.
- Improved power management capabilities.
Abstract
Various embodiments are generally directed to techniques for memory access by a computer in a reduced power state, such as during video playback or connected standby. Some embodiments are particularly directed to disabling one or more memory channels during a reduced power state by mapping memory usages during the reduced power state to one of a plurality of memory channels. In one embodiment, for example, one or more low-power mode blocks in a set of functional blocks of a computer may be identified. In some such embodiments, the computer may include a processor, a memory, and first and second memory channels to communicatively couple the processor with the memory. In many embodiments, usage of the one or more low-power mode blocks in the set of functional blocks may be mapped to a first address range associated with the first memory channel.
Apparatus, Device, and Method for a Memory Controller, Memory Controller, and System (18334262)
Main Inventor
Sergej DEUTSCH
Brief explanation
The patent application describes an apparatus that includes interface circuitry and processor circuitry to write data bits to a memory. The apparatus applies a diffusion function on the data bits to calculate diffused data bits and calculates error correcting code (ECC) bits based on the data bits or the diffused data bits. It then applies a diffusion function on the ECC bits to calculate diffused ECC bits. The diffused ECC bits are stored in an ECC portion of the memory, while the data bits or the diffused data bits are stored in a data portion of the memory.
- Interface circuitry and processor circuitry are used to write data bits to a memory.
- A diffusion function is applied to the data bits to calculate diffused data bits.
- Error correcting code (ECC) bits are calculated based on the data bits or the diffused data bits.
- A diffusion function is applied to the ECC bits to calculate diffused ECC bits.
- The diffused ECC bits are stored in an ECC portion of the memory.
- The data bits or the diffused data bits are stored in a data portion of the memory (a write-path sketch follows this list).
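As a concrete illustration, here is a minimal Python sketch of that write path. The rotate-and-XOR bijection and single parity bit are toy stand-ins for the patent's unspecified diffusion function and real ECC (such as a SEC-DED code).

```python
# Minimal sketch of the write path, assuming a toy bijective "diffusion"
# (rotate-and-XOR) and a single parity bit standing in for real ECC.

def diffuse(word: int, key: int, width: int = 64) -> int:
    """Toy bijection: XOR with a key, then rotate left by 1 bit."""
    mask = (1 << width) - 1
    x = (word ^ key) & mask
    return ((x << 1) | (x >> (width - 1))) & mask

def ecc_bits(word: int) -> int:
    """Stand-in ECC: one parity bit (real designs use SEC-DED codes)."""
    return bin(word).count("1") & 1

def write_word(memory: dict, addr: int, word: int, key: int) -> None:
    diffused_data = diffuse(word, key)       # diffusion over data bits
    ecc = ecc_bits(diffused_data)            # ECC on the diffused data
                                             # (the claims also allow ECC
                                             # on the raw data bits)
    diffused_ecc = diffuse(ecc, key, 8)      # diffusion over ECC bits
    memory[addr] = (diffused_data, diffused_ecc)  # data + ECC portions

mem = {}
write_word(mem, 0x1000, 0xDEADBEEF, key=0x5A5A5A5A5A5A5A5A)
print(mem[0x1000])
```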
Potential Applications
- Data storage systems
- Computer memory systems
- Error correction in communication systems
Problems Solved
- Efficient writing of data bits to memory
- Error correction in data storage and communication systems
Benefits
- Improved data reliability and integrity
- Enhanced error correction capabilities
- Efficient utilization of memory space
Abstract
Some aspects of the present disclosure relate to an apparatus comprising interface circuitry and processor circuitry to write data bits to a memory, by applying a diffusion function on the data bits to calculate diffused data bits, calculating error correcting code (ECC) bits based on the data bits or based on the diffused data bits, applying a diffusion function on the ECC bits to calculate diffused ECC bits, storing the diffused ECC bits in an ECC portion of the memory, and storing the data bits or the diffused data bits in a data portion of the memory.
OPTIMIZATION TECHNIQUE FOR MODULAR MULTIPLICATION ALGORITHMS (18237859)
Main Inventor
Erdinc OZTURK
Brief explanation
Methods and apparatus for optimizing modular multiplication algorithms are described in this patent application. These optimization techniques can be applied to different variants of modular multiplication algorithms, such as Montgomery multiplication algorithms and Barrett multiplication algorithms. The goal of these techniques is to reduce the number of serial steps in the Montgomery reduction and Barrett reduction processes.
- The optimization techniques aim to reduce the number of serial steps in modular multiplication algorithms.
- These techniques can be applied to various variants of modular multiplication algorithms, including Montgomery and Barrett multiplication algorithms.
- The optimization techniques allow for parallel execution of modular multiplication operations, resulting in faster computation.
- The number of serial steps in the modular reductions is reduced to L = ⌈k/w⌉, the number of w-bit digits in a k-bit operand (a baseline sketch follows this list).
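For context, the following Python sketch shows the baseline word-serial Montgomery multiplication whose L reduction steps these techniques aim to shorten; the patented parallelization itself is not public and is not reproduced here.

```python
# Minimal sketch of word-serial Montgomery multiplication, showing the
# L serial reduction steps the optimization techniques target.

def montgomery_multiply(a: int, b: int, n: int, w: int = 64) -> int:
    """Compute a*b*R^-1 mod n, with R = 2^(w*L) and L = ceil(k/w) digits."""
    k = n.bit_length()
    L = -(-k // w)                   # L = ceil(k / w) digits of w bits
    r = 1 << w                       # radix of one digit
    n_prime = -pow(n, -1, r) % r     # -n^-1 mod 2^w (n must be odd)
    t = a * b
    for _ in range(L):               # L serial reduction steps
        m = (t % r) * n_prime % r
        t = (t + m * n) // r         # each step clears one w-bit digit
    return t - n if t >= n else t

# Example with 0 <= a, b < n and n odd.
n = (1 << 255) - 19
a, b = 12345678901234567890, 98765432109876543210
R = 1 << (64 * -(-n.bit_length() // 64))
assert montgomery_multiply(a, b, n) == a * b * pow(R, -1, n) % n
```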
Potential Applications
- Cryptography: These optimization techniques can be applied to improve the efficiency of modular multiplication operations in cryptographic algorithms, such as RSA and elliptic curve cryptography.
- Computer arithmetic: The techniques can be used to enhance the performance of modular multiplication algorithms in computer arithmetic operations, such as in hardware accelerators or specialized processors.
Problems Solved
- Serial steps in modular multiplication algorithms can be time-consuming and limit the overall performance of the algorithm.
- Traditional modular multiplication algorithms may not fully exploit parallelism, leading to slower computation times.
- The optimization techniques address these issues by reducing the number of serial steps and enabling parallel execution, resulting in faster modular multiplication operations.
Benefits
- Improved efficiency: The optimization techniques reduce the number of serial steps, leading to faster computation times for modular multiplication algorithms.
- Enhanced parallelism: By allowing parallel execution of modular multiplication and reduction operations, the techniques fully exploit the available parallelism, further improving performance.
- Versatility: The techniques can be applied to different variants of modular multiplication algorithms, making them applicable to a wide range of applications in cryptography and computer arithmetic.
Abstract
Methods and apparatus for optimization techniques for modular multiplication algorithms. The optimization techniques may be applied to variants of modular multiplication algorithms, including variants of Montgomery multiplication algorithms and Barrett multiplication algorithms. The optimization techniques reduce the number of serial steps in Montgomery reduction and Barrett reduction. Modular multiplication operations involving products of integer inputs A and B may be performed in parallel to obtain a value C that is reduced to a residual RES. Modular multiplication and modular reduction operations may be performed in parallel. The number of serial steps in the modular reductions is reduced to L, where w is a digit size in bits and L = ⌈k/w⌉ is the number of digits of a k-bit operand.
INSTRUCTION PREFETCH BASED ON THREAD DISPATCH COMMANDS (18347964)
Main Inventor
JAMES VALERIO
Brief explanation
The patent application describes a graphics processing device that includes compute units, a cache, and circuitry.
- The compute units execute a workload.
- The cache is coupled with the compute units.
- The circuitry is coupled with the cache and compute units.
- In the event of a cache miss for a read from a first cache, the circuitry broadcasts an event within the graphics processor device to identify the data associated with the cache miss.
- The event is received by a second compute unit in the set of compute units.
- The second compute unit prefetches the identified data into a second cache that is local to it, before one of its own threads attempts to read the same instruction or data (a simplified flow is sketched below).
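A minimal Python sketch of the broadcast-and-prefetch flow, with compute units, caches, and the event bus reduced to plain objects; all names are illustrative.

```python
# Minimal sketch: a cache miss on one compute unit broadcasts an event
# so a second compute unit can prefetch the same data into its local
# cache before its own threads reach it.

class ComputeUnit:
    def __init__(self, name: str, backing_store: dict):
        self.name = name
        self.local_cache: dict = {}
        self.backing = backing_store

    def on_miss_event(self, address: int) -> None:
        """Another CU missed on `address`: prefetch it locally."""
        self.local_cache.setdefault(address, self.backing[address])

    def read(self, address: int, bus: list) -> int:
        if address not in self.local_cache:        # cache miss
            for cu in bus:                         # broadcast the event
                if cu is not self:
                    cu.on_miss_event(address)
            self.local_cache[address] = self.backing[address]
        return self.local_cache[address]

memory = {0x40: 0xC0DE}
cu0, cu1 = ComputeUnit("cu0", memory), ComputeUnit("cu1", memory)
cu0.read(0x40, bus=[cu0, cu1])   # cu0 misses; cu1 prefetches
assert 0x40 in cu1.local_cache   # cu1 now hits without its own miss
```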
Potential Applications
- Graphics processing devices in gaming consoles, computers, and mobile devices.
- High-performance computing systems that require efficient data access and processing.
Problems Solved
- Reduces cache misses, which can significantly impact performance in graphics processing.
- Improves data access and processing efficiency by prefetching data into a local cache.
Benefits
- Improved performance and responsiveness in graphics processing.
- Enhanced efficiency in high-performance computing systems.
- Reduced latency in data access and processing.
Abstract
A graphics processing device is provided that includes a set of compute units to execute a workload, a cache coupled with the set of compute units, and circuitry coupled with the cache and the set of compute units. The circuitry is configured to, in response to a cache miss for the read from a first cache, broadcast an event within the graphics processor device to identify data associated with the cache miss, receive the event at a second compute unit in the set of compute units, and prefetch the data identified by the event into a second cache that is local to the second compute unit before an attempt to read the instruction or data by the second thread.
CONCURRENTLY FETCHING INSTRUCTIONS FOR MULTIPLE DECODE CLUSTERS (17840029)
Main Inventor
Mathew Lowes
Brief explanation
The abstract describes an apparatus that includes a branch prediction circuit, a fetch circuit, and two decode clusters. The branch prediction circuit predicts whether a branch is to be taken. The fetch circuit sends a first portion of a fetch region of instructions to the first decode cluster and a second portion to the second decode cluster. The first decode cluster decodes instructions in the first portion, while the second decode cluster decodes instructions in the second portion.
- The apparatus includes a branch prediction circuit to predict branch outcomes.
- A fetch circuit is used to send different portions of a fetch region to separate decode clusters.
- The first decode cluster decodes instructions in the first portion of the fetch region.
- The second decode cluster decodes instructions in the second portion of the fetch region (the split is sketched below).
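A minimal Python sketch of the split, assuming the fetch region is simply halved; real hardware would steer on instruction boundaries and predicted branches.

```python
# Minimal sketch: one fetch cycle splits a fetch region between two
# decode clusters, which then decode their portions independently.

from concurrent.futures import ThreadPoolExecutor

def decode(cluster: str, instructions: list[str]) -> list[str]:
    return [f"{cluster}:{insn}" for insn in instructions]

def fetch_and_decode(fetch_region: list[str]) -> list[str]:
    mid = len(fetch_region) // 2
    first, second = fetch_region[:mid], fetch_region[mid:]
    with ThreadPoolExecutor(max_workers=2) as pool:  # decode in parallel
        f0 = pool.submit(decode, "cluster0", first)
        f1 = pool.submit(decode, "cluster1", second)
        return f0.result() + f1.result()

print(fetch_and_decode(["add", "mul", "load", "store"]))
```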
Potential Applications
- This technology can be applied in microprocessors and computer architectures.
- It can improve the efficiency and performance of instruction decoding in processors.
Problems Solved
- The apparatus solves the problem of efficiently decoding instructions in a fetch region.
- It addresses the challenge of predicting branch outcomes accurately.
Benefits
- By predicting branch outcomes and distributing the decoding process, the apparatus can improve the overall performance of a processor.
- It allows for parallel decoding of instructions, which can enhance the speed and efficiency of instruction execution.
Abstract
In one embodiment, an apparatus comprises: a branch prediction circuit to predict whether a branch is to be taken; a fetch circuit, in a single fetch cycle, to send a first portion of a fetch region of instructions to a first decode cluster and send a second portion of the fetch region to the second decode cluster; the first decode cluster comprising a first plurality of decode circuits to decode one or more instructions in the first portion of the fetch region; and the second decode cluster comprising a second plurality of decode circuits to decode one or more other instructions in the second portion of the fetch region. Other embodiments are described and claimed.
LOAD BALANCER (18237860)
Main Inventor
Niall D. MCDONNELL
Brief explanation
The abstract describes a load balancer that can selectively order requests, allocate them into queues, and perform various operations such as adjusting the number of queues associated with a core, adjusting the number of target cores, and ordering memory space writes.
- The load balancer can selectively order requests from multiple cores.
- It allocates the requests into queue elements before assigning them to receiver cores for processing.
- The load balancer can adjust the number of consumer queues allocated to a single domain, which helps in load balancing.
- It can also adjust the number of target cores in a group to be load balanced, optimizing resource allocation (both knobs are sketched after this list).
- The load balancer is capable of ordering memory space writes from multiple caching agents, ensuring data consistency.
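A minimal Python sketch of two of those knobs, a configurable number of consumer queues and a round-robin group of target cores; the queue policy and names are assumptions rather than the patented design.

```python
# Minimal sketch: requests are enqueued into consumer queues before
# being dispatched to a group of receiver cores being load balanced.

from collections import deque
from itertools import cycle

class LoadBalancer:
    def __init__(self, target_cores: list[int], num_consumer_queues: int):
        self.queues = [deque() for _ in range(num_consumer_queues)]
        self.cores = cycle(target_cores)           # group being balanced

    def submit(self, request: str) -> None:
        min(self.queues, key=len).append(request)  # shortest-queue first

    def dispatch(self):
        """Hand the next queued request to a receiver core."""
        for q in self.queues:
            if q:
                return next(self.cores), q.popleft()
        return None

lb = LoadBalancer(target_cores=[0, 1, 2, 3], num_consumer_queues=2)
lb.submit("req-a"); lb.submit("req-b")
print(lb.dispatch(), lb.dispatch())  # requests spread over cores 0 and 1
```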
Potential Applications
- This load balancer technology can be applied in distributed systems where multiple cores are involved in processing requests.
- It can be used in cloud computing environments to efficiently distribute workloads across multiple servers.
- The load balancer can be implemented in web servers to evenly distribute incoming requests and improve response times.
Problems Solved
- The load balancer solves the problem of uneven workload distribution among cores by selectively ordering and allocating requests.
- It addresses the challenge of resource allocation by adjusting the number of consumer queues and target cores.
- The load balancer solves the problem of data consistency by ordering memory space writes from caching agents.
Benefits
- The load balancer improves overall system performance by evenly distributing workloads among cores.
- It optimizes resource allocation, ensuring efficient utilization of computing resources.
- The load balancer enhances data consistency by properly ordering memory space writes.
- It can lead to improved response times and better user experience in web server applications.
Abstract
Examples described herein relate to a load balancer that is configured to selectively perform ordering of requests from the one or more cores, allocate the requests into queue elements prior to allocation to one or more receiver cores of the one or more cores to process the requests, and perform two or more operations of: adjust a number of queues associated with a core of the one or more cores by changing a number of consumer queues (CQs) allocated to a single domain, adjust a number of target cores in a group of target cores to be load balanced, and order memory space writes from multiple caching agents (CAs).
TECHNOLOGIES FOR MANAGING ACCELERATOR RESOURCES BY A CLOUD RESOURCE MANAGER (18456460)
Main Inventor
Malini K. BHANDARU
Brief explanation
The abstract of this patent application describes a technology for managing accelerator resources in a cloud environment. Here is a simplified explanation:
- A cloud resource manager receives information about the usage of accelerators from multiple compute devices and the parameters of a task to be performed.
- The cloud resource manager has access to a task distribution policy.
- Based on the task parameters and the task distribution policy, the cloud resource manager determines which compute device should receive the task.
- The cloud resource manager assigns the task to the chosen compute device (the decision is sketched below).
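A minimal Python sketch of that decision, assuming usage reports keyed by node and a least-utilization policy; the actual report fields and policy format are not public.

```python
# Minimal sketch: pick a destination node from accelerator usage
# reports, task parameters, and a pluggable distribution policy.
# All field names and the example policy are assumptions.

def choose_destination(usage: dict[str, dict], task: dict, policy) -> str:
    candidates = [node for node, info in usage.items()
                  if task["accelerator"] in info["accelerators"]]
    return policy(candidates, usage)

def least_loaded(candidates: list[str], usage: dict[str, dict]) -> str:
    """Example distribution policy: lowest accelerator utilization."""
    return min(candidates, key=lambda node: usage[node]["utilization"])

usage = {
    "node-a": {"accelerators": {"gpu"}, "utilization": 0.7},
    "node-b": {"accelerators": {"gpu", "fpga"}, "utilization": 0.2},
}
print(choose_destination(usage, {"accelerator": "gpu"}, least_loaded))  # node-b
```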
Potential Applications
- Cloud computing platforms that utilize accelerators, such as GPUs or FPGAs, can benefit from this technology to efficiently manage and distribute tasks to the appropriate compute devices.
- High-performance computing environments that rely on accelerators can use this technology to optimize resource allocation and improve overall system performance.
Problems Solved
- Efficiently managing and distributing tasks to accelerator resources can be challenging in cloud environments with multiple compute devices. This technology provides a solution to this problem by considering task parameters and a task distribution policy to determine the most suitable compute device for each task.
Benefits
- Improved resource utilization: By assigning tasks to the most appropriate compute devices, this technology ensures that accelerator resources are efficiently utilized, leading to better overall system performance.
- Enhanced scalability: The cloud resource manager can handle a large number of compute devices and tasks, allowing for scalable management of accelerator resources in cloud environments.
- Optimized task distribution: The task distribution policy helps ensure that tasks are distributed in a way that aligns with specific requirements or priorities, leading to better workload balancing and performance optimization.
Abstract
Technologies for managing accelerator resources include a cloud resource manager to receive accelerator usage information from each of a plurality of node compute devices and task parameters of a task to be performed. The cloud resource manager accesses a task distribution policy. The cloud resource manager determines a destination node compute device of the plurality of node compute devices based on the task parameters and the task distribution policy. The cloud resource manager assigns the task to the destination node compute device. Other embodiments are described and claimed.
FPGA BASED PLATFORM FOR POST-SILICON VALIDATION OF CHIPLETS (17840211)
Main Inventor
Rakesh Mehta
Brief explanation
The abstract describes an apparatus that includes a circuit board, an active interposer, and a graphics processor die. The graphics processor die has graphics processor resources for a multi-die system on chip (SoC) device, excluding functionality implemented in a separate die. The apparatus also includes a field-programmable gate array (FPGA) with configurable hardware logic to emulate the functionality of the separate die, allowing validation of the graphics processor die separately from other dies of the multi-die SoC.
- The apparatus includes a circuit board, active interposer, graphics processor die, and field-programmable gate array (FPGA).
- The graphics processor die has graphics processor resources for a multi-die SoC device, excluding functionality implemented in a separate die.
- The active interposer connects the graphics processor die to the circuit board via a debug package.
- The FPGA has configurable hardware logic that can emulate the functionality of the separate die.
- The FPGA enables silicon validation of the graphics processor die independently from other dies of the multi-die SoC.
Potential Applications
- Silicon validation of graphics processor dies in multi-die SoC devices.
- Testing and debugging of graphics processor functionality separately from other components of the SoC.
Problems Solved
- Allows for separate validation of graphics processor dies, reducing the complexity and time required for overall SoC validation.
- Enables testing and debugging of graphics processor functionality independently, facilitating faster identification and resolution of issues.
Benefits
- Simplifies the validation process for multi-die SoC devices.
- Reduces the time and effort required for testing and debugging graphics processor functionality.
- Enables more efficient development and optimization of graphics processor designs.
Abstract
One embodiment provides an apparatus comprising a circuit board; an active interposer coupled with the circuit board via a debug package, and a graphics processor die coupled with the active interposer via the debug package. The graphics processor die includes graphics processor resources configured to execute instructions for a multi-die system on chip (SoC) device and excludes functionality that is implemented in a separate die of the multi-die SoC. The apparatus includes a field-programmable gate array (FPGA) including hardware logic that is configurable to emulate functionality provided by the separate die of the multi-die SoC device, which enables silicon validation of the graphics processor die separately from other dies of the multi-die SoC.
HARDWARE SOFTWARE COMMUNICATION CHANNEL TO SUPPORT DIRECT PROGRAMMING INTERFACE METHODS ON FPGA-BASED PROTOTYPE PLATFORMS (17840239)
Main Inventor
Renu Patle
Brief explanation
The abstract describes a generic hardware/software communication (HSC) channel that allows the reuse of pre-silicon DPI methods for FPGA-based post-silicon validation. The HSC channel translates a DPI interface into a hardware FIFO based mechanism, eliminating the need to re-implement the entire flow in pure hardware. Only a small layer of the transactor is converted into the FIFO based mechanism, while the core logic remains the same.
- The HSC channel enables the reuse of pre-silicon DPI methods for FPGA-based post-silicon validation.
- It translates a DPI interface into a hardware FIFO based mechanism.
- Only a small layer of the transactor is converted into the FIFO based mechanism, while the core logic remains the same.
- This allows the methods to be reused without re-implementing the entire flow in pure hardware (the software side is sketched below).
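The software half of such a shim might look like the following Python sketch; the FIFO word format and opcode set are invented for illustration.

```python
# Minimal sketch of the translation layer: a DPI-style method call is
# serialized into a hardware-visible FIFO instead of re-implementing
# the transactor in RTL. Encoding and opcodes are assumptions.

import struct
from collections import deque

hw_fifo: deque = deque()                     # stands in for the FPGA FIFO
OPCODES = {"mem_write": 1, "mem_read": 2}    # hypothetical command set

def dpi_call(method: str, addr: int, data: int = 0) -> None:
    """Software side: marshal the DPI method into a FIFO word."""
    hw_fifo.append(struct.pack("<BIQ", OPCODES[method], addr, data))

def fifo_consumer() -> tuple:
    """Hardware-proxy side: pop and decode one command word."""
    return struct.unpack("<BIQ", hw_fifo.popleft())

dpi_call("mem_write", 0x100, 0xABCD)
print(fifo_consumer())   # -> (1, 256, 43981)
```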
Potential Applications
The technology described in this patent application has potential applications in various fields, including:
- Post-silicon validation of integrated circuits.
- FPGA-based testing and verification.
- Hardware/software co-design and validation.
Problems Solved
The technology solves the following problems:
- Avoids the need for re-implementing the entire flow in pure hardware for post-silicon validation.
- Enables the reuse of pre-silicon DPI methods in FPGA-based validation.
- Provides a generic hardware/software communication channel for efficient testing and verification.
Benefits
The technology offers the following benefits:
- Saves time and effort by reusing pre-silicon DPI methods.
- Simplifies the post-silicon validation process by translating DPI interfaces into a hardware FIFO mechanism.
- Allows for efficient testing and verification of integrated circuits using FPGA-based solutions.
Abstract
Described herein is a generic hardware/software communication (HSC) channel that facilitates the re-use of pre-silicon DPI methods to enable FPGA-based post-silicon validation. The HSC channel translates a DPI interface into a hardware FIFO based mechanism. This translation allows the reuse of the methods without having to re-implement the entire flow in pure hardware. The core logic for the transactor remains the same, while only a small layer of the transactor is converted into the FIFO based mechanism.
PROCESSOR EXTENSIONS TO PROTECT STACKS DURING RING TRANSITIONS (18232810)
Main Inventor
Vedvyas Shanbhogue
Brief explanation
The abstract describes a processor that implements techniques to protect stacks during transitions between different privilege levels. The processor includes multiple registers and a processor core. Each register is associated with a privilege level and stores data used during privilege level transitions.
- The processor receives an indicator to change the privilege level of an active application.
- Based on the new privilege level, the processor selects a shadow stack pointer (SSP) stored in a register associated with that privilege level.
- The SSP identifies the shadow stack the processor will use at the new privilege level (selection is sketched below).
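In software terms, the selection step reduces to indexing a per-privilege-level register file, as in the Python sketch below. The register values are placeholders; on x86 with CET, the analogous architectural state lives in the IA32_PLx_SSP MSRs.

```python
# Minimal sketch: one shadow-stack-pointer register per privilege
# level; a ring transition selects the register for the new level.
# Addresses are placeholders.

SSP_REGISTERS = {
    0: 0xFFFF_8000_0010_0000,   # shadow stack for ring 0
    3: 0x0000_7FFF_FFF0_0000,   # shadow stack for ring 3
}

def select_ssp(new_privilege_level: int) -> int:
    """On a ring transition, the SSP comes from the register
    associated with the destination privilege level."""
    return SSP_REGISTERS[new_privilege_level]

# A user-to-kernel transition (ring 3 -> ring 0) switches shadow stacks.
print(hex(select_ssp(0)))
```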
Potential Applications
- This technology can be applied in processors used in operating systems, where different applications or processes may run at different privilege levels.
- It can be used in systems that require secure and efficient context switching between privilege levels.
Problems Solved
- Protecting stacks during transitions between privilege levels can help prevent unauthorized access or modification of data.
- Ensuring the integrity and security of data during privilege level transitions is crucial for system stability and security.
Benefits
- The use of shadow stacks provides an additional layer of protection for sensitive data during privilege level transitions.
- By associating registers with specific privilege levels, the processor can efficiently manage and switch between different privilege levels.
- The techniques implemented in this processor extension help enhance the security and reliability of systems that require privilege level transitions.
Abstract
A processor implementing techniques for processor extensions to protect stacks during ring transitions is provided. In one embodiment, the processor includes a plurality of registers and a processor core, operatively coupled to the plurality of registers. The plurality of registers is used to store data used in privilege level transitions. Each register of the plurality of registers is associated with a privilege level. An indicator to change a first privilege level of a currently active application to a second privilege level is received. In view of the second privilege level, a shadow stack pointer (SSP) stored in a register of the plurality of registers is selected. The register is associated with the second privilege level. By using the SSP, a shadow stack for use by the processor at the second privilege level is identified.
TRAINING NEURAL NETWORK WITH BUDDING ENSEMBLE ARCHITECTURE BASED ON DIVERSITY LOSS (18457002)
Main Inventor
Qutub Syed Sha
Brief explanation
The patent application describes a method for training deep neural networks (DNNs) with budding ensemble architectures using diversity loss. Here are the key points:
- The DNN consists of a backbone and multiple heads.
- The backbone has one or more layers that generate intermediate tensors.
- The heads are organized in pairs, with each pair consisting of a first head and a second head duplicated from the first head.
- The second head has the same tensor operations as the first head but different internal parameters.
- The intermediate tensor from a backbone layer is input into both the first and second heads.
- The first head computes a first detection tensor, and the second head computes a second detection tensor.
- The similarity between the first and second detection tensors is used as a diversity loss for training the DNN (one formulation is sketched below).
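One plausible formulation in PyTorch follows; the patent says only that "a similarity" between the two detection tensors serves as the loss, so the cosine-similarity choice and the loss weight here are assumptions.

```python
# Minimal PyTorch sketch: penalize a pair of duplicated heads for
# producing overly similar detection tensors.

import torch
import torch.nn.functional as F

def diversity_loss(det_a: torch.Tensor, det_b: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity between the two heads' detections."""
    sim = F.cosine_similarity(det_a.flatten(1), det_b.flatten(1), dim=1)
    return sim.mean()   # minimizing this term pushes the heads apart

det_a = torch.randn(8, 4, 16, requires_grad=True)   # head 1 output
det_b = torch.randn(8, 4, 16, requires_grad=True)   # head 2 (duplicate)
task_loss = det_a.square().mean() + det_b.square().mean()  # placeholder
total = task_loss + 0.1 * diversity_loss(det_a, det_b)
total.backward()
```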
Potential Applications
- Image recognition: The method can be used to train DNNs for tasks like object detection and classification in images.
- Natural language processing: DNNs trained using this method can be applied to tasks like sentiment analysis, language translation, and text generation.
- Speech recognition: The method can be used to train DNNs for speech recognition and voice command systems.
Problems Solved
- Overfitting: The use of diversity loss helps prevent overfitting by encouraging the heads to learn different representations of the data.
- Lack of diversity in ensembles: By duplicating and modifying heads, the method increases the diversity within the ensemble, leading to improved performance.
Benefits
- Improved accuracy: The use of diversity loss and budding ensemble architectures can lead to higher accuracy in DNN models.
- Robustness: The diversity within the ensemble makes the DNN more robust to variations in the input data.
- Efficient training: The method allows for efficient training of DNNs with budding ensemble architectures, reducing the computational resources required.
Abstract
Deep neural networks (DNNs) with budding ensemble architectures may be trained using diversity loss. A DNN may include a backbone and a plurality of heads. The backbone includes one or more layers. A layer in the backbone may generate an intermediate tensor. The plurality of heads may include one or more pairs of heads. A pair of heads includes a first head and a second head duplicated from the first head. The second head may include the same tensor operations as the first head but different internal parameters. The intermediate tensor generated by a backbone layer may be input into both the first head and the second head. The first head may compute a first detection tensor, and the second head may compute a second detection tensor. A similarity between the first detection tensor and the second detection tensor may be used as a diversity loss for training the DNN.
LSTM CIRCUIT WITH SELECTIVE INPUT COMPUTATION (18237887)
Main Inventor
Ram KRISHNAMURTHY
Brief explanation
The patent application describes an apparatus that includes a long short term memory (LSTM) circuit with a multiply accumulate circuit (MAC). The MAC circuit has a feature that allows it to use a stored product term instead of performing a multiplication operation, if the accumulation of differences between consecutive input values has not reached a certain threshold.
- The apparatus includes a long short term memory (LSTM) circuit with a multiply accumulate circuit (MAC).
- The MAC circuit has circuitry that can use a stored product term instead of performing a multiplication operation.
- This feature is utilized when the accumulation of differences between consecutive input values has not reached a threshold.
- In such cases the stored product term is simply reused, and the multiplier is skipped.
- The apparatus thereby handles LSTM calculations more efficiently (the skip rule is sketched below).
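A minimal Python sketch of the skip rule; the threshold value and bookkeeping are illustrative.

```python
# Minimal sketch: reuse the stored product while the accumulated
# change between consecutive inputs stays under a threshold.

class SkippingMAC:
    def __init__(self, weight: float, threshold: float):
        self.weight = weight
        self.threshold = threshold
        self.prev_input = 0.0
        self.accumulated_delta = 0.0
        self.stored_product = 0.0
        self.multiplies = 0   # count real multiplications for comparison

    def product(self, x: float) -> float:
        self.accumulated_delta += abs(x - self.prev_input)
        self.prev_input = x
        if self.accumulated_delta >= self.threshold:
            self.stored_product = self.weight * x   # real multiply
            self.accumulated_delta = 0.0
            self.multiplies += 1
        return self.stored_product                  # else reuse stored term

mac = SkippingMAC(weight=0.5, threshold=0.1)
outputs = [mac.product(x) for x in [1.00, 1.01, 1.02, 1.50]]
print(outputs, "multiplies:", mac.multiplies)   # only 2 real multiplies
```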
Potential Applications
- Artificial intelligence and machine learning systems
- Natural language processing and speech recognition systems
- Robotics and autonomous systems
- Financial analysis and prediction models
Problems Solved
- Reduces the computational load on the LSTM circuit by avoiding unnecessary multiplication operations.
- Improves the efficiency and speed of calculations in the LSTM circuit.
- Helps to optimize the performance of AI and machine learning systems.
Benefits
- Faster and more efficient processing of data in LSTM circuits.
- Reduced power consumption and improved energy efficiency.
- Enhanced performance and accuracy of AI and machine learning models.
- Cost savings in terms of hardware requirements for AI systems.
Abstract
An apparatus is described. The apparatus includes a long short term memory (LSTM) circuit having a multiply accumulate circuit (MAC). The MAC circuit has circuitry to rely on a stored product term rather than explicitly perform a multiplication operation to determine the product term if an accumulation of differences between consecutive, preceding input values has not reached a threshold.
COMPUTE OPTIMIZATIONS FOR LOW PRECISION MACHINE LEARNING OPERATIONS (18456235)
Main Inventor
Elmoustapha Ould-Ahmed-Vall
Brief explanation
The abstract of the patent application describes a general-purpose graphics processing unit (GPU) that includes a dynamic precision floating-point unit. This unit has a control unit with precision tracking hardware logic to monitor the number of bits of precision for computed data in relation to a target precision. The dynamic precision floating-point unit also has computational logic to output data at multiple precisions.
- The patent application describes a general-purpose GPU with a dynamic precision floating-point unit.
- The dynamic precision floating-point unit includes a control unit with precision tracking hardware logic.
- The precision tracking hardware logic monitors the number of bits of precision for computed data.
- The precision tracking hardware logic compares the computed data precision to a target precision.
- The dynamic precision floating-point unit also includes computational logic to output data at multiple precisions (a software analogue is sketched below).
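A rough software analogue of the tracking logic, assuming relative error as the proxy for available bits of precision; the bit-counting heuristic and format choices are assumptions.

```python
# Minimal sketch: compare the bits of precision a computed value
# carries against a target, and pick an output format accordingly.

import math

def bits_of_precision(relative_error: float) -> int:
    """Rough bits of precision implied by a known relative error."""
    if relative_error <= 0:
        return 53   # treat as exact at double precision
    return max(0, int(-math.log2(relative_error)))

def select_output_format(relative_error: float, target_bits: int) -> str:
    available = bits_of_precision(relative_error)
    if available >= target_bits:
        # fp16 carries roughly 11 significand bits; use the smallest
        # format that still meets the target precision
        return "fp16" if target_bits <= 11 else "fp32"
    return "fp64"   # escalate when the target is not met

print(select_output_format(relative_error=1e-4, target_bits=11))  # fp16
```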
Potential Applications
- Graphics processing: The dynamic precision floating-point unit can enhance the performance and efficiency of graphics processing in GPUs.
- Scientific computing: The ability to output data at multiple precisions can be beneficial for scientific simulations and calculations.
- Machine learning: GPUs are commonly used in machine learning applications, and the dynamic precision floating-point unit can improve the accuracy and efficiency of these computations.
Problems Solved
- Precision optimization: The precision tracking hardware logic helps optimize the precision of computed data, ensuring it meets the desired target precision without unnecessary over-precision.
- Performance improvement: By outputting data at multiple precisions, the dynamic precision floating-point unit can improve the overall performance of the GPU, especially in scenarios where different levels of precision are required.
Benefits
- Improved efficiency: The dynamic precision floating-point unit allows for more efficient use of computational resources by dynamically adjusting the precision of computed data.
- Enhanced accuracy: The precision tracking hardware logic ensures that computed data meets the desired precision, resulting in more accurate calculations.
- Versatility: The ability to output data at multiple precisions makes the GPU suitable for a wide range of applications that require different levels of precision.
Abstract
One embodiment provides a general-purpose graphics processing unit comprising a dynamic precision floating-point unit including a control unit having precision tracking hardware logic to track an available number of bits of precision for computed data relative to a target precision, wherein the dynamic precision floating-point unit includes computational logic to output data at multiple precisions.
MESSAGE AUTHENTICATION GALOIS INTEGRITY AND CORRECTION (MAGIC) FOR LIGHTWEIGHT ROW HAMMER MITIGATION (18145095)
Main Inventor
Sergej Deutsch
Brief explanation
The technology described in this patent application involves the use of bijection diffusion function circuits to diffuse data bits and error correcting code (ECC) bits. These diffused bits are then stored in a memory.
- The technology uses a first set of bijection diffusion function circuits to diffuse data bits and store them in a memory.
- An error correcting code (ECC) generation circuit is used to generate ECC bits for the data bits.
- A second set of bijection diffusion function circuits is used to diffuse the ECC bits and store them in the memory.
Potential Applications
- Data storage systems
- Communication systems
- Error correction in digital systems
Problems Solved
- Data loss or corruption during storage or transmission
- Efficient error correction in digital systems
Benefits
- Improved data reliability and integrity
- Enhanced error correction capabilities
- Efficient use of memory space
Abstract
The technology described herein includes a first plurality of bijection diffusion function circuits to diffuse data bits into diffused data bits and store the diffused data bits into a memory; an error correcting code (ECC) generation circuit to generate ECC bits for the data bits; and a second plurality of bijection diffusion function circuits to diffuse the ECC bits into diffused ECC bits and store the diffused ECC bits into the memory.
TECHNOLOGIES FOR THIN FILM RESISTORS IN VIAS (17837732)
Main Inventor
Benjamin T. Duong
Brief explanation
The patent application describes techniques for creating thin-film resistors in vias of a glass substrate interposer. The resistors are formed vertically through the vias, saving space on the interposer layer. They can be used for power dissipation, voltage control, current control, or as pull-up/pull-down resistors.
- Thin-film resistors are created in through-glass vias of a glass substrate interposer.
- The resistors are formed vertically through the vias, rather than horizontally on a layer of the interposer.
- The thin-film resistors do not occupy a significant amount of area on the interposer layer.
- The resistors can be used for various purposes such as power dissipation, voltage control, current control, or as pull-up/pull-down resistors.
Potential Applications
- Power dissipation
- Voltage control
- Current control
- Pull-up/pull-down resistors
Problems Solved
- Limited space on the interposer layer for resistors
- Need for vertical resistor placement
- Efficient power dissipation and voltage/current control
Benefits
- Saves space on the interposer layer
- Enables vertical resistor placement
- Provides flexibility for various applications
- Efficient power dissipation and voltage/current control
Abstract
Techniques for thin-film resistors in vias are disclosed. In the illustrative embodiment, thin-film resistors are formed in through-glass vias of a glass substrate of an interposer. The thin-film resistors do not take up a significant amount of area on a layer of the interposer, as the thin-film resistor extends vertically through a via rather than horizontally on a layer of the interposer. The thin-film resistors may be used for any suitable purpose, such as power dissipation or voltage control, current control, as a pull-up or pull-down resistor, etc.
INTERCONNECT VIA METAL-INSULATOR-METAL (MIM) FUSE FOR INTEGRATED CIRCUITRY (17835863)
Main Inventor
Yao-Feng Chang
Brief explanation
The patent application describes a metal-insulator-metal (MIM) fuse for interconnecting integrated circuitry. The fuse is made up of a thin layer of a compound of metal and oxygen, which allows a small leakage current to pass through when a low voltage is applied. However, when a higher programming voltage is applied, the fuse irreversibly forms an open circuit. This is achieved by inducing a void between the electrode metallization features through Joule heating of the fuse material layer.
- The patent application describes a metal-insulator-metal (MIM) fuse for interconnecting integrated circuitry.
- The fuse is made up of a thin layer of a compound of metal and oxygen.
- The fuse allows a small leakage current to pass through when a low voltage is applied.
- When a higher programming voltage is applied, the fuse irreversibly forms an open circuit.
- This is achieved by inducing a void between the electrode metallization features through Joule heating of the fuse material layer.
Potential Applications
- Integrated circuitry interconnection
- Circuit protection
- Memory programming
Problems Solved
- Reliable interconnection of integrated circuitry
- Protection against overcurrent or short circuits
- Efficient memory programming
Benefits
- Improved reliability and performance of integrated circuitry
- Enhanced circuit protection
- Efficient and precise memory programming
Abstract
Interconnect via metal-insulator-metal (MIM) fuse for integrated circuitry. Two electrode metallization features, which may be within a backend of an IC die, are coupled through a via comprising a fuse material layer. The fuse material layer passes a non-zero leakage current when a lower read voltage is applied across the electrode metallization features, and irreversibly forms an open circuit when a higher programming voltage is applied across the electrode metallization features. The fuse material layer may be a compound of a metal and oxygen and be sufficiently thin to ensure a significant leakage current at the read voltage. Joule heating of the fuse material layer may induce a void between the electrode metallization features as the leakage current through the fuse material layer increases under higher voltages, creating an open circuit.
INTEGRATED WORD LINE CONTACT STRUCTURES IN THREE-DIMENSIONAL (3D) MEMORY ARRAY (18235766)
Main Inventor
Nanda Kumar Chakravarthi
Brief explanation
The abstract describes a memory array with integrated word line (WL) contact structures. The memory array includes multiple WLs, each with a WL contact structure. The WL contact structure consists of a first WL contact and a second WL contact, with the second WL contact nested within the first WL contact. An isolation material separates the second WL contact from the first WL contact, preventing contact between them. In one example, the second WL contact extends through a hole in the first WL to reach the second WL.
- The memory array includes integrated word line (WL) contact structures.
- The WL contact structure consists of a first WL contact and a second WL contact.
- The second WL contact is nested within the first WL contact.
- An isolation material isolates the second WL contact from the first WL contact.
- The second WL contact extends through a hole in the first WL to reach the second WL.
- The isolation material prevents contact between the second WL contact and the sidewalls of the hole in the first WL.
Potential Applications
- Memory arrays in electronic devices
- Integrated circuits
- Data storage systems
Problems Solved
- Efficient integration of word line contact structures in memory arrays
- Prevention of contact between word line contacts
- Isolation of word line contacts from sidewalls of holes
Benefits
- Improved memory array design
- Enhanced performance and reliability of memory arrays
- Simplified manufacturing process
Abstract
A memory array including integrated word line (WL) contact structures is disclosed. The memory array comprises a plurality of WLs that includes at least a first WL and a second WL. An integrated WL contact structure includes a first WL contact and a second WL contact for the first WL and the second WL, respectively. The second WL contact extends through the first WL contact. For example, the second WL contact is nested within the first WL contact. An intervening isolation material isolates the second WL contact from the first WL contact. In an example, the second WL contact extends through a hole in the first WL to reach the second WL. The isolation material isolates the second WL contact from sidewalls of the hole in the first WL.
INTEGRATED CIRCUIT STRUCTURES INCLUDING BACKSIDE VIAS (18457453)
Main Inventor
Nicholas A. Thomson
Brief explanation
The patent application describes integrated circuit structures with backside vias and related methods and devices. Here is a simplified explanation of the abstract:
- The integrated circuit structure includes a device layer with active devices.
- A first metallization layer is placed over the device layer, which includes a conductive pathway in contact with at least one of the active devices.
- A second metallization layer is located under the device layer and includes a second conductive pathway.
- A conductive via is present in the device layer, connecting at least one active device with the second conductive pathway.
Potential Applications
- Integrated circuits with improved connectivity and performance.
- Enhanced functionality and miniaturization of electronic devices.
- Increased efficiency and reliability of electronic systems.
Problems Solved
- Overcomes limitations in traditional integrated circuit structures by providing backside vias for improved connectivity.
- Addresses challenges in achieving efficient and reliable conductive pathways in complex electronic systems.
Benefits
- Enables better integration and connectivity between different layers of an integrated circuit.
- Enhances the performance and functionality of electronic devices.
- Improves the efficiency and reliability of electronic systems.
Abstract
Disclosed herein are integrated circuit (IC) structures including backside vias, as well as related methods and devices. In some embodiments, an IC structure may include: a device layer, wherein the device layer includes a plurality of active devices; a first metallization layer over the device layer, wherein the first metallization layer includes a first conductive pathway in conductive contact with at least one of the active devices in the device layer; a second metallization layer under the device layer, wherein the second metallization layer includes a second conductive pathway; and a conductive via in the device layer, wherein the conductive via is in conductive contact with at least one of the active devices in the device layer and also in conductive contact with the second conductive pathway.
HIGH DENSITY METAL LAYERS IN ELECTRODE STACKS FOR TRANSITION METAL OXIDE DIELECTRIC CAPACITORS (17835854)
Main Inventor
Thomas Sounart
Brief explanation
The patent application describes capacitors used for various purposes in electronic systems, such as decoupling and power delivery in integrated circuits. The capacitors consist of a transition metal oxide dielectric sandwiched between two electrodes. One of the electrodes includes a conductive metal oxide layer on the transition metal oxide dielectric, and a high density metal layer on top of the conductive metal oxide.
- The patent application discloses capacitors for decoupling, power delivery, integrated circuits, and related systems.
- The capacitors utilize a transition metal oxide dielectric between two electrodes.
- One of the electrodes includes a conductive metal oxide layer on top of the transition metal oxide dielectric.
- A high density metal layer is added on top of the conductive metal oxide layer.
- The invention provides improved performance and reliability for capacitors used in electronic systems.
Potential Applications
The technology described in the patent application has potential applications in various electronic systems and devices, including:
- Integrated circuits
- Power delivery systems
- Decoupling capacitors
- Electronic devices requiring high-performance capacitors
Problems Solved
The technology addresses several problems associated with capacitors used in electronic systems, including:
- Inefficient power delivery and decoupling in integrated circuits
- Limited performance and reliability of capacitors
- Challenges in fabricating high-density capacitors
Benefits
The technology offers several benefits for electronic systems and devices:
- Improved power delivery and decoupling performance in integrated circuits
- Enhanced reliability and durability of capacitors
- Simplified fabrication process for high-density capacitors
Abstract
Capacitors for decoupling, power delivery, integrated circuits, related systems, and methods of fabrication are disclosed. Such capacitors include a transition metal oxide dielectric between two electrodes, at least one of which includes a conductive metal oxide layer on the transition metal oxide dielectric and a high density metal layer on the conductive metal oxide.
DUAL METAL SILICIDE FOR STACKED TRANSISTOR DEVICES (17838637)
Main Inventor
Rohit Galatage
Brief explanation
The abstract describes an integrated circuit structure that includes two stacked devices. The first device has a source or drain region, a source or drain contact, and a layer of metal and semiconductor materials between them. The second device also has a source or drain region, a source or drain contact, and a layer of metal and semiconductor materials between them. The metals used in the first and second devices are different.
- The patent application describes an integrated circuit structure with stacked devices.
- The first device has a source or drain region, a source or drain contact, and a metal-semiconductor layer.
- The second device also has a source or drain region, a source or drain contact, and a metal-semiconductor layer.
- The metals used in the first and second devices are different.
Potential Applications
- This technology can be used in various electronic devices that require compact and efficient integrated circuits.
- It can be applied in mobile devices, computers, and other electronic systems that require high-performance integrated circuits.
Problems Solved
- The integrated circuit structure solves the problem of limited space in electronic devices by stacking multiple devices vertically.
- It addresses the need for efficient and compact integrated circuits that can perform complex functions.
Benefits
- The stacked structure allows for increased functionality and performance in integrated circuits.
- It enables the integration of multiple devices in a smaller footprint, saving space in electronic devices.
- The use of different metals in the devices allows for optimized performance and functionality.
Abstract
An integrated circuit structure includes a second device stacked vertically above a first device. The first device includes (i) a first source or drain region, (ii) a first source or drain contact coupled to the first source or drain region, and (iii) a first layer comprising a first metal and first one or more semiconductor materials between at least a section of the first source or drain region and the first source or drain contact. The second device includes (i) a second source or drain region, (ii) a second source or drain contact coupled to the second source or drain region, and (iii) a second layer comprising a second metal and second one or more semiconductor materials between at least a section of the second source or drain region and the second source or drain contact. In an example, the first metal and the second metal are different.
SOURCE AND DRAIN CONTACTS FORMED USING SACRIFICIAL REGIONS OF SOURCE AND DRAIN (17838646)
Main Inventor
Rohit Galatage
Brief explanation
The patent application describes an integrated circuit structure that includes a device with a source region, a drain region, a body, and a source contact. The source region consists of a first region and a second region that is compositionally different and located above the first region. The source contact extends through the second region and within the first region. In a p-channel metal-oxide-semiconductor (PMOS) device, the concentration of germanium in the second region is higher than in the first region. In an n-channel metal-oxide-semiconductor (NMOS) device, the doping concentration level of an n-type dopant in the second region is higher than in the first region.
- The integrated circuit structure includes a device with a source region, drain region, body, and source contact.
- The source region consists of a first region and a second region that is compositionally different and located above the first region.
- The source contact extends through the second region and within the first region.
- In a PMOS device, the concentration of germanium in the second region is higher than in the first region.
- In an NMOS device, the doping concentration level of an n-type dopant in the second region is higher than in the first region.
Potential Applications
- This technology can be applied in the design and manufacturing of integrated circuits.
- It can be used in various electronic devices such as smartphones, computers, and IoT devices.
- The integrated circuit structure can improve the performance and efficiency of PMOS and NMOS devices.
Problems Solved
- The integrated circuit structure addresses the need for improved performance and efficiency in PMOS and NMOS devices.
- It provides a solution for optimizing the concentration of germanium or doping concentration level in different regions of the source region.
- This technology helps in achieving better control and functionality of integrated circuits.
Benefits
- The integrated circuit structure offers enhanced performance and efficiency in PMOS and NMOS devices.
- It allows for improved control and functionality of integrated circuits.
- The technology enables the design of more advanced and powerful electronic devices.
Abstract
An integrated circuit structure includes a device including a source region, a drain region, a body laterally between the source and drain regions, and a source contact coupled to the source region. In an example, the source region includes a first region, and a second region compositionally different from and above the first region. The source contact extends through the second region and extends within the first region. In an example where the device is a p-channel metal-oxide-semiconductor (PMOS) device, a concentration of germanium within the second region is different (e.g., higher) than a concentration of germanium within the first region. In another example where the device is a n-channel metal-oxide-semiconductor (NMOS) device, a doping concentration level of a dopant (e.g., an n-type dopant) within the second region is different (e.g., higher) from a doping concentration level of the dopant within the first region.
ACOUSTIC WAVE CLOCK DISTRIBUTION (18134850)
Main Inventor
Jason A. Mix
Brief explanation
The abstract of the patent application describes a clock distribution technique in an integrated circuit component using acoustic transmitters and receivers.
- Acoustic transmitters generate bulk acoustic waves.
- These bulk acoustic waves propagate across the substrate of the integrated circuit.
- Piezoelectric elements act as acoustic receivers to receive the bulk acoustic waves.
Potential Applications
- Clock distribution in integrated circuit components.
- Improving clock synchronization in complex electronic systems.
Problems Solved
- Efficient and reliable clock distribution in integrated circuits.
- Overcoming limitations of traditional clock distribution techniques.
Benefits
- Improved clock synchronization.
- Reduced power consumption.
- Enhanced performance and reliability of integrated circuits.
Abstract
Clock distribution in an integrated circuit component can comprise the generation of bulk acoustic waves by acoustic transmitters and propagation of the bulk acoustic waves across the substrate where they are received by piezoelectric elements acting as acoustic receivers.
CONGESTION NOTIFICATION IN A MULTI-QUEUE ENVIRONMENT (18239467)
Main Inventor
Md Ashiqur RAHMAN
Brief explanation
The abstract describes a network interface device that uses telemetry data to select the next hop network interface device based on congestion information. Here are the key points:
- The network interface device includes a host interface, DMA circuitry, a network interface, and circuitry.
- The circuitry is configured to select the next hop network interface device based on received telemetry data from at least one switch.
- The telemetry data is based on congestion information of a first queue associated with a first traffic class.
- The telemetry data is also based on per-network interface device hop-level congestion states from at least one network interface device.
- The first queue shares bandwidth of an egress port with a second queue.
- The first traffic class is associated with packet traffic subject to congestion control based on utilization of the first queue.
- The utilization of the first queue is based on a drain rate of the first queue and a transmit rate from the egress port (one reading of this is sketched below).
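A minimal Python sketch of the selection step, under one plausible reading of the utilization definition (drain rate relative to the egress transmit rate); the telemetry fields are assumptions.

```python
# Minimal sketch: score candidate next hops from per-hop congestion
# telemetry and pick the least congested one.

def select_next_hop(telemetry: dict[str, dict]) -> str:
    """Pick the candidate whose traffic-class queue is least congested."""
    def congestion(t: dict) -> float:
        # A queue draining slower than the port transmits is more
        # congested (one plausible reading of the abstract's definition).
        return 1.0 - t["drain_rate"] / t["transmit_rate"]
    return min(telemetry, key=lambda hop: congestion(telemetry[hop]))

telemetry = {
    "nic-1": {"drain_rate": 40e9, "transmit_rate": 50e9},
    "nic-2": {"drain_rate": 48e9, "transmit_rate": 50e9},
}
print(select_next_hop(telemetry))   # -> nic-2 (queue draining fastest)
```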
Potential Applications
- Network routing and traffic management systems
- Data centers and cloud computing environments
- Internet service providers and network service providers
Problems Solved
- Efficient selection of the next hop network interface device based on congestion information
- Improved congestion control for packet traffic
- Optimal utilization of network resources
Benefits
- Enhanced network performance and reliability
- Reduced congestion and improved quality of service
- Efficient allocation of bandwidth and resources
Abstract
Examples described herein relate to a network interface device. In some examples, the network interface device includes a host interface; a direct memory access (DMA) circuitry; a network interface; and circuitry. The circuitry can be configured to: based on received telemetry data from at least one switch: select a next hop network interface device from among multiple network interface devices based on received telemetry data. In some examples, the telemetry data is based on congestion information of a first queue associated with a first traffic class, the telemetry data is based on per-network interface device hop-level congestion states from at least one network interface device, the first queue shares bandwidth of an egress port with a second queue, the first traffic class is associated with packet traffic subject to congestion control based on utilization of the first queue, and the utilization of the first queue is based on a drain rate of the first queue and a transmit rate from the egress port.
TECHNIQUES TO SHAPE NETWORK TRAFFIC FOR SERVER-BASED COMPUTATIONAL STORAGE (18238345)
Main Inventor
Michael MESNIER
Brief explanation
The abstract describes techniques for shaping network traffic in server-based computational storage. Here is a simplified explanation of the abstract:
- The patent application discusses methods for managing network traffic in a computational storage server.
- It introduces the concept of a "class of service" associated with a compute offload request, which helps organize and prioritize the storage of compute offload commands.
- The compute offload commands are stored in queues of a network interface device at the computational storage server.
- These queues are then used to schedule block-based compute operations for execution by the compute circuitry at the server, fulfilling the compute offload requests (a queueing sketch follows this list).
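The sketch below illustrates the general idea of class-of-service queuing for offload commands. The class names, the strict-priority scheduler, and the `enqueue`/`schedule_next` helpers are hypothetical; the application does not prescribe a specific scheduling discipline.

```python
# Illustrative class-of-service queuing for compute offload commands at
# a computational storage server; a sketch under assumed class names.
import heapq
from dataclasses import dataclass, field
from itertools import count

COS_PRIORITY = {"gold": 0, "silver": 1, "bronze": 2}  # lower sorts sooner
_seq = count()  # tie-breaker so equal-priority commands stay FIFO

@dataclass(order=True)
class OffloadCommand:
    priority: int
    seq: int
    operation: str = field(compare=False)

nic_queue: list[OffloadCommand] = []  # stands in for the NIC's queues

def enqueue(cos: str, operation: str) -> None:
    heapq.heappush(nic_queue, OffloadCommand(COS_PRIORITY[cos], next(_seq), operation))

def schedule_next() -> str:
    # Hand the highest-priority block-based compute operation to the
    # server's compute circuitry (represented here by a plain string).
    return heapq.heappop(nic_queue).operation

enqueue("bronze", "checksum blocks 0-1023")
enqueue("gold", "filter blocks 2048-4095")
print(schedule_next())  # -> filter blocks 2048-4095 (gold goes first)
```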
Potential Applications
This technology has potential applications in various fields, including:
- Cloud computing: It can enhance the performance and efficiency of cloud-based computational storage systems.
- Big data analytics: By optimizing network traffic and offloading compute operations, it can improve the processing speed and scalability of big data analytics platforms.
- Edge computing: The techniques can be applied to edge devices with computational storage capabilities, enabling faster and more efficient data processing at the edge.
Problems Solved
The technology addresses several problems in server-based computational storage:
- Network congestion: By shaping network traffic, it helps manage and reduce congestion, ensuring smooth data transfer between the computational storage server and other devices.
- Compute offload management: The class-of-service and queue-based approach simplifies the management and scheduling of compute offload requests, improving overall system performance.
- Resource utilization: By offloading compute operations to the computational storage server, it optimizes resource utilization and reduces the burden on the main server.
Benefits
The technology offers several benefits:
- Improved performance: By efficiently managing network traffic and offloading compute operations, it enhances the overall performance and responsiveness of the computational storage server.
- Scalability: The techniques can be applied to large-scale systems, allowing for seamless scalability and accommodating increasing computational demands.
- Resource optimization: By utilizing computational storage capabilities, it optimizes resource utilization and reduces the need for additional compute resources.
- Enhanced efficiency: The class-of-service and queue-based approach streamlines compute offload management, improving system efficiency and reducing latency.
Abstract
Examples include techniques to shape network traffic for server-based computational storage. Examples include use of a class of service associated with a compute offload request that is to be sent to a computational storage server in a compute offload command. The class of service to facilitate storage of the compute offload command in one or more queues of a network interface device at the computational storage server. The storage of the compute offload command to the one or more queues to be associated with scheduling a block-based compute operation for execution by compute circuitry at the computational storage server to fulfill the compute offload request indicated in the compute offload command.
WEIGHTED PREDICTION MECHANISM (18323186)
Main Inventor
Junhua Hou
Brief explanation
The patent application describes an apparatus that helps in encoding video data. Here is a simplified explanation of the abstract:
- The apparatus includes rendering logic that converts graphics video data into frame data.
- Fade extractor logic extracts the fade effects data that is to be applied to the frame data and generates frame auxiliary metadata containing that fade effects data.
- Weighted prediction logic receives the frame data and the auxiliary metadata and computes one or more weighted predictions on the frame data at the frame sequences indicated in the fade effects data.
- Encoding logic encodes the frame data using the one or more weighted predictions (a minimal weighted-prediction sketch follows this list).
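Weighted prediction is a standard video-coding tool in which each predicted sample is a scaled-and-offset copy of a reference sample, which models fades well. The sketch below shows the idea with illustrative weight and offset values; it is not the apparatus's actual prediction logic.

```python
# Minimal weighted-prediction sketch for a fade: predicted sample =
# weight * reference sample + offset. Values are illustrative only.
def weighted_prediction(ref_frame, weight, offset):
    # Clamp to the 8-bit sample range after scaling.
    return [max(0, min(255, round(weight * s + offset))) for s in ref_frame]

# A fade-to-black halves brightness between reference and current frame,
# so a weight near 0.5 models the fade far better than a weight of 1.
reference = [200, 180, 160]
current = [100, 90, 80]
prediction = weighted_prediction(reference, weight=0.5, offset=0)
residual = [c - p for c, p in zip(current, prediction)]
print(prediction)  # [100, 90, 80]
print(residual)    # [0, 0, 0] -> nothing left to encode
```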
Potential applications of this technology:
- Video encoding: The apparatus can be used in video encoding systems to improve the encoding process by considering fade effects and generating weighted predictions accordingly.
- Video editing: This technology can be utilized in video editing software to enhance the editing process by automatically extracting fade effects and applying them to the video frames.
Problems solved by this technology:
- Improved video quality: By considering fade effects and generating weighted predictions, the encoding process can be optimized, resulting in better video quality.
- Efficient video editing: The automatic extraction of fade effects and their application to video frames simplifies the video editing process, saving time and effort.
Benefits of this technology:
- Enhanced video encoding: The use of weighted predictions based on fade effects improves the accuracy and efficiency of video encoding.
- Streamlined video editing: The automatic extraction and application of fade effects simplify the video editing process, making it more user-friendly and efficient.
Abstract
An apparatus to facilitate encoding video data is disclosed. The apparatus includes rendering logic to render graphics video data as frame data, fade extractor logic to extract fade effects data to be applied to the frame data to generate frame auxiliary metadata comprising the fade effects data, weighted prediction logic to receive the frame data and the auxiliary metadata and compute one or more weighted predictions on the frame data at one or more frame sequences indicated in the fade effects data and encoding logic to encode the frame data based on the one or more weighted predictions.
METHODS AND APPARATUS TO SLICE NETWORKS FOR WIRELESS SERVICES (18331799)
Main Inventor
Omid Semiari
Brief explanation
The patent application describes a method for slicing networks for wireless services using an actor-critic neural network.
- The method predicts a quality of service metric for a network slice using a long short-term memory (LSTM) that represents one or more prior slicing decisions.
- The predicted quality of service is then compared with a target service level specification.
- The LSTM is updated based on the comparison, allowing the network slicing decisions to improve continuously (a schematic control loop follows this list).
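The sketch below is a schematic version of this predict-compare-update loop. The function bodies are stubs, and the slice name, target value, and QoS metric are assumptions; the actual method uses an actor-critic neural network, which is not reproduced here.

```python
# Schematic predict-compare-update loop for LSTM-informed slicing.
# All function bodies are stubs under assumed names and values.
def predict_qos(memory, slice_id):
    # Stub: the actor-critic network would map the LSTM summary of prior
    # slicing decisions to a predicted QoS metric for this slice.
    return 0.92  # e.g., fraction of packets meeting the latency bound

def update_memory(memory, slice_id, error):
    # Stub: fold the prediction error back into the memory so later
    # slicing decisions can improve on it.
    return memory + [(slice_id, error)]

TARGET_SLS = 0.95        # target service level specification (assumed)
memory: list = []        # stands in for the LSTM state

for step in range(3):
    qos = predict_qos(memory, slice_id="embb-slice-1")
    shortfall = TARGET_SLS - qos
    memory = update_memory(memory, "embb-slice-1", shortfall)
    print(f"step {step}: predicted QoS {qos:.2f}, shortfall {shortfall:+.2f}")
```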
Potential applications of this technology:
- This technology can be applied in wireless communication networks to optimize the allocation of network resources.
- It can be used to improve the quality of service for different network slices, such as prioritizing certain types of traffic or ensuring a specific level of performance for specific users or applications.
Problems solved by this technology:
- Network slicing is a complex task that requires making decisions on resource allocation and prioritization.
- This technology solves the problem of efficiently slicing networks by using a neural network to predict the quality of service and make informed decisions.
Benefits of this technology:
- By using a neural network, this technology can continuously learn and improve its slicing decisions based on previous experiences.
- It allows for more efficient allocation of network resources, leading to improved quality of service for different network slices.
- The technology can adapt to changing network conditions and user demands, ensuring optimal performance.
Abstract
Systems, apparatus, articles of manufacture, and methods are disclosed to slice networks for wireless services. Example apparatus are to implement an actor-critic neural network to predict a quality of service metric for a network slice based on a long short-term memory representative of one or more prior slicing decisions, compare the quality of service metric with a target service level specification, and update the long short-term memory based on the comparison.
MULTI-SLICE SUPPORT FOR MEC-ENABLED 5G DEPLOYMENTS (18201321)
Main Inventor
Dario Sabella
Brief explanation
The abstract describes a system designed to track network slicing operations in a 5G communication network. It includes processing circuitry that determines a network slice instance (NSI) associated with a Quality of Service (QoS) flow of a User Equipment (UE). The NSI communicates data for a Network Function Virtualization (NFV) instance of a Multi-Access Edge Computing (MEC) system within the 5G network. The system retrieves latency information for a set of communication links used by the NSI, which includes non-MEC links associated with a radio access network (RAN) and MEC links associated with the MEC system. Based on the retrieved latency information and slice-specific attributes of the NSI, a slice configuration policy is generated. This policy is then used to reconfigure the network resources of the 5G network that are used by the NSI.
- The system tracks network slicing operations in a 5G communication network.
- It determines the network slice instance (NSI) associated with a UE's QoS flow.
- The NSI communicates data for an NFV instance of a MEC system.
- Latency information for communication links used by the NSI is retrieved.
- The communication links include non-MEC links associated with the RAN and MEC links associated with the MEC system.
- A slice configuration policy is generated based on the retrieved latency information and slice-specific attributes of the NSI.
- The network resources of the 5G network used by the NSI are reconfigured based on the generated slice configuration policy (a latency-budget sketch follows this list).
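A minimal sketch of latency-driven slice reconfiguration along these lines follows. The link names, latency values, latency budget, and policy actions are all illustrative assumptions; the application does not specify them.

```python
# Illustrative latency-budget check over the NSI's two link segments;
# all names and numbers below are assumptions, not from the application.
def end_to_end_latency_ms(non_mec_links, mec_links):
    # Sum per-link latencies across both segments of the NSI's path.
    return sum(non_mec_links.values()) + sum(mec_links.values())

def slice_policy(nsi_id, measured_ms, budget_ms):
    # Tighten resources only when the slice misses its latency budget.
    if measured_ms > budget_ms:
        return {"nsi": nsi_id, "action": "scale-up-mec-nfv",
                "reason": f"{measured_ms:.1f} ms > {budget_ms} ms budget"}
    return {"nsi": nsi_id, "action": "no-change"}

ran_links = {"ue-gnb": 4.0, "gnb-upf": 2.5}        # non-MEC links (ms)
mec_links = {"upf-mec-host": 1.0, "mec-nfv": 3.8}  # MEC links (ms)

latency = end_to_end_latency_ms(ran_links, mec_links)
print(slice_policy("nsi-7", latency, budget_ms=10))
# 11.3 ms exceeds the 10 ms budget -> scale-up-mec-nfv
```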
Potential Applications
- Network slicing operations in 5G communication networks.
- Multi-Access Edge Computing (MEC) systems within 5G networks.
- Quality of Service (QoS) management for User Equipment (UE) in 5G networks.
Problems Solved
- Efficient tracking and management of network slicing operations in 5G networks.
- Optimization of network resources based on latency information and slice-specific attributes.
- Improved QoS management for UE in 5G networks.
Benefits
- Enhanced performance and efficiency of network slicing operations in 5G networks.
- Improved QoS for UE by reconfiguring network resources based on slice configuration policies.
- Better utilization of network resources through optimized allocation based on latency information.
Abstract
A system configured to track network slicing operations within a 5G communication network includes processing circuitry configured to determine a network slice instance (NSI) associated with a QoS flow of a UE. The NSI communicates data for a network function virtualization (NFV) instance of a Multi-Access Edge Computing (MEC) system within the 5G communication network. Latency information for a plurality of communication links used by the NSI is retrieved. The plurality of communication links includes a first set of non-MEC communication links associated with a radio access network (RAN) of the 5G communication network and a second set of MEC communication links associated with the MEC system. A slice configuration policy is generated based on the retrieved latency information and slice-specific attributes of the NSI. Network resources of the 5G communication network used by the NSI are reconfigured based on the generated slice configuration policy.