Micron Technology, Inc. patent applications published on October 12th, 2023

'''Summary of the patent applications from Micron Technology, Inc. on October 12th, 2023'''
 
 
Micron Technology, Inc. has recently filed several patent applications related to various technologies and devices. These applications cover areas such as three-dimensional memory arrays, integrated assemblies, memory devices and systems, lighting systems, clock management circuitry, ferroelectric transistors, memory cells and control gates, scribe asymmetry, and conductive patterns on semiconductor substrates.
 
 
Notable applications of these patents include:
 
 
* Creating trench and pier architectures in three-dimensional memory arrays, allowing for self-alignment in subsequent operations.
 
* Integrated assemblies with different memory regions, intermediate regions, and staircase regions, providing efficient memory storage and organization.
 
* Memory devices with memory cells and transistors, where data lines directly interface with the transistor's gate.
 
* Lighting systems that use solid state lighting devices to generate mixed light, with closed-loop control for efficient and effective lighting.
 
* Clock management circuitry that adjusts the frequency of clocking signals based on detected voltage, current, and/or activity, reducing power consumption.
 
* Ferroelectric transistors with two electrodes, an active region, and a ferroelectric material, allowing for controlled flow of current.
 
* Memory cells and control gates stacked in multiple tiers, with conductive contacts, dielectric structures, and support structures for efficient memory storage.
 
* Devices with scribe asymmetry, where scribes have different widths to improve fabrication efficiency and aid in testing and integration.
 
* Apparatuses with conductive patterns on semiconductor substrates, including sections and slits that extend in different directions, providing versatile electrical connections.
 
 
Overall, Micron Technology, Inc. has filed patent applications covering a wide range of technologies and devices related to memory arrays, integrated assemblies, lighting systems, clock management, transistors, memory cells, scribe asymmetry, and conductive patterns. These applications demonstrate the company's commitment to innovation and advancement in the field of semiconductor technology.
 
 
 
 
 
 
==Patent applications for Micron Technology, Inc. on October 12th, 2023==
 
Vikas Rana
  
 
'''Brief explanation'''
 
The abstract describes a type of memory that consists of multiple memory cells arranged in an array. These memory cells are connected to a data line, which is divided into two segments. The first segment is connected to a subset of memory cells, while the second segment is selectively connected to the first segment. There are also two page buffers, with each buffer selectively connected to one of the data line segments.
 
 
'''Abstract'''
 
Memory might include an array of memory cells and a data line selectively connected to a plurality of memory cells of the array of memory cells. The data line might include a first data line segment corresponding to a first subset of memory cells of the plurality of memory cells and a second data line segment corresponding to a second subset of memory cells of the plurality of memory cells. The second data line segment is selectively connected to the first data line segment. A first page buffer might be selectively connected to the first data line segment, and a second page buffer might be selectively connected to the second data line segment.
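As a rough software analogy of the segmented data line described above (the class, cell, and buffer names here are hypothetical, not from the application), the routing between segments and page buffers might be modeled like this:

```python
# Hypothetical sketch of a data line split into two selectively connected
# segments, each served by its own page buffer.
class SegmentedDataLine:
    def __init__(self, first_subset, second_subset):
        self.seg1 = dict(first_subset)   # cells on the first data line segment
        self.seg2 = dict(second_subset)  # cells on the second data line segment
        self.joined = False              # switch between the two segments

    def connect_segments(self):
        # Selectively connect the second segment to the first.
        self.joined = True

    def read_to_buffer(self, cell):
        # Page buffer 1 serves segment 1; page buffer 2 serves segment 2.
        if cell in self.seg1:
            return ("page_buffer_1", self.seg1[cell])
        if cell in self.seg2:
            # With the segments joined, buffer 1 can read through to segment 2.
            buffer = "page_buffer_1" if self.joined else "page_buffer_2"
            return (buffer, self.seg2[cell])
        raise KeyError(cell)
```

This is only a behavioral sketch; the application describes circuit-level selective connections, not software routing.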
 
  
 
===FINE GRAINED RESOURCE MANAGEMENT FOR ROLLBACK MEMORY OPERATIONS ([[US Patent Application 17716250. FINE GRAINED RESOURCE MANAGEMENT FOR ROLLBACK MEMORY OPERATIONS simplified abstract|17716250]])===
 
Tony M. Brewer
  
 
'''Brief explanation'''
 
The abstract describes a method to solve the problem of one undo logging session using too much memory and affecting other sessions. The system maintains a list of available resources for each session and keeps track of the available memory. If a session uses up all its memory, a flag is set to prevent further writes from being mirrored. This ensures that each session has its own resource limits and can function properly.
 
 
'''Abstract'''
 
Disclosed in some examples are methods, systems, computing devices, and machine-readable mediums in which the system maintains a list of resources available for each rollback session. In some examples, state data is kept that indicates available memory. If a write occurs for a particular session and the amount of available memory for a session has been used, a flag is set in metadata for the memory location and the write is not mirrored. In this manner, the technical problem of one undo logging session using too much memory and preventing other undo logging sessions from properly functioning is solved by the technical solution of setting resource limits for each undo logging session.
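The per-session quota logic described in the abstract might be sketched as follows; the class and flag names are illustrative assumptions, not details from the application:

```python
# Hypothetical sketch: each undo-logging (rollback) session has its own
# memory quota, so one session cannot starve the others.
class RollbackSessions:
    def __init__(self, quota_per_session):
        self.quota = quota_per_session
        self.used = {}        # session id -> undo-log memory consumed
        self.metadata = {}    # memory location -> flags

    def write(self, session, location, size):
        used = self.used.get(session, 0)
        if used + size > self.quota:
            # Quota exhausted: flag the location in metadata and skip
            # mirroring this write to the undo log.
            self.metadata[location] = "no-mirror"
            return False  # write not mirrored
        self.used[session] = used + size
        return True       # write mirrored
```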
 
  
 
===TECHNIQUES FOR FOUR CYCLE ACCESS COMMANDS ([[US Patent Application 18161757. TECHNIQUES FOR FOUR CYCLE ACCESS COMMANDS simplified abstract|18161757]])===
 
Sujeet V. Ayyapureddi
  
 
'''Brief explanation'''
 
This abstract describes methods, systems, and devices for four cycle access commands. It explains that a memory device can communicate access commands with a host device using a command-address channel and multiple data channels. The host device sends an access command that includes an operation code indicating the type of command, a first address of the memory device (the first target), and a second address of the memory device (the second target). The first address is associated with a first data channel, and the second address is associated with a second data channel. This allows the memory device and the host device to communicate first data related to the first address through the first data channel and second data related to the second address through the second data channel.
 
 
'''Abstract'''
 
Methods, systems, and devices for techniques for four cycle access commands are described. A memory device may communicate access commands with a host device over a command-address (CA) channel associated with multiple data channels. The host device may transmit an access command that includes an operation code indicating a type of the access command, a first address of the memory device that is a first target of the access command, and a second address of the memory device that is a second target of the access command. The first address may be associated with a first data channel, and the second address may be associated with a second data channel. Accordingly, the memory device and the host device may communicate first data corresponding to the first address over the first data channel and second data corresponding to the second address over the second data channel.
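The command flow above can be sketched in a few lines; the field names and the even/odd channel mapping are invented for illustration and do not come from the application:

```python
# Hypothetical sketch: one access command carries an opcode plus two target
# addresses, and each address is served over its own data channel.
def build_access_command(opcode, addr_a, addr_b):
    return {"opcode": opcode, "targets": [addr_a, addr_b]}

def route(command, channel_of_address):
    # Each address travels on the data channel it is associated with.
    return {channel_of_address(addr): addr for addr in command["targets"]}

def channel_of(addr):
    # Illustrative mapping: even addresses on data channel 0, odd on channel 1.
    return addr % 2
```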
 
  
 
===STORAGE SYSTEM WITH MULTIPLE DATA PATHS DEPENDING ON DATA CLASSIFICATIONS ([[US Patent Application 18295482. STORAGE SYSTEM WITH MULTIPLE DATA PATHS DEPENDING ON DATA CLASSIFICATIONS simplified abstract|18295482]])===
 
Reshmi BASU
  
 
'''Brief explanation'''
 
This abstract describes a storage system that receives a write command and associated data. The system then classifies the data and assigns it to a specific queue. A processor in the system retrieves the data from the queue and compresses it. The compressed data is then stored in a memory device based on the original write command.
 
 
'''Abstract'''
 
In some implementations, a storage system may receive, via a system controller of the storage system, a write command and data associated with the write command. The storage system may classify, via the system controller, the data. The storage system may associate, via the system controller, the data with a queue based on classifying the data. The storage system may retrieve, via a processor of the storage system, the data associated with the queue. The storage system may compress, via the processor, the data to form compressed data for storage in a memory device of the storage system based on the write command.
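The classify-queue-compress path described above might be sketched as follows; the classification rule and queue labels are assumptions made for illustration:

```python
import zlib
from collections import defaultdict

# Hypothetical sketch of the data path: classify incoming write data,
# place it on a per-class queue, then compress it for storage.
queues = defaultdict(list)

def handle_write(data: bytes):
    # 1. Classify the data (here: a toy ASCII-vs-binary heuristic).
    label = "text" if data.isascii() else "binary"
    # 2. Associate the data with the queue for its class.
    queues[label].append(data)
    return label

def drain(label: str):
    # 3. A processor retrieves queued data and compresses it for the
    #    memory device.
    return [zlib.compress(item) for item in queues.pop(label, [])]
```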
 
  
 
===Split a Tensor for Shuffling in Outsourcing Computation Tasks ([[US Patent Application 17715863. Split a Tensor for Shuffling in Outsourcing Computation Tasks simplified abstract|17715863]])===
 
Andre Xian Ming Chang
  
 
'''Brief explanation'''
 
The abstract discusses a method for protecting access to a tensor in deep learning computations when outsourcing the computations to external entities. The tensor is a data structure used in artificial neural networks, and it is arranged in rows and columns. The method involves dividing the tensor into smaller tasks and shuffling them with other tasks before outsourcing them. The results obtained from the external entities are then used to compute the final result of the tensor in the neural network. This partitioning and shuffling process helps prevent the external entities from accessing or reconstructing the original tensor.
 
 
'''Abstract'''
 
Protection of access to a tensor in outsourcing deep learning computations via shuffling. For example, the tensor in the computation of an artificial neural network can have elements arranged in a first dimension of rows and a second dimension of columns. The tensor can be partitioned along the first dimension and the second dimension to generate computing tasks that are shuffled and/or mixed with other tasks for outsourcing to external entities. Computing results returned from the external entities can be used to generate a computing result of the tensor in the computation of the artificial neural network. The partitioning and shuffling can prevent the external entities from accessing and/or reconstructing the tensor.
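The partition-and-shuffle idea can be sketched with plain Python lists; the block size, task format, and the stand-in "outsourced" computation are all illustrative assumptions:

```python
import random

# Hypothetical sketch: split a 2-D tensor into blocks along both
# dimensions, shuffle the blocks for outsourcing, then reassemble the
# result from the returned pieces.
def partition(tensor, block):
    tasks = []
    for r in range(0, len(tensor), block):
        for c in range(0, len(tensor[0]), block):
            rows = [row[c:c + block] for row in tensor[r:r + block]]
            tasks.append(((r, c), rows))
    return tasks

def reassemble(shape, results):
    out = [[0] * shape[1] for _ in range(shape[0])]
    for (r, c), rows in results:
        for i, row in enumerate(rows):
            out[r + i][c:c + len(row)] = row
    return out

tensor = [[1, 2, 3, 4], [5, 6, 7, 8]]
tasks = partition(tensor, block=2)
random.shuffle(tasks)  # no single external entity sees the whole tensor
# Stand-in "outsourced" work: each entity doubles its block element-wise.
results = [(pos, [[2 * x for x in rows] for rows in block]) for pos, block in tasks]
doubled = reassemble((2, 4), results)
```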
 
  
 
===Partition a Tensor with Varying Granularity Levels in Shuffled Secure Multiparty Computation ([[US Patent Application 17715877. Partition a Tensor with Varying Granularity Levels in Shuffled Secure Multiparty Computation simplified abstract|17715877]])===
 
Andre Xian Ming Chang
  
 
'''Brief explanation'''
 
The abstract describes a method for protecting access to a tensor (a mathematical object used in deep learning) when outsourcing computations to external entities. The tensor is divided into different portions, and computing tasks are generated to operate on these portions. The results of these tasks are then combined to obtain the final result of the computation. To prevent external entities from accessing or reconstructing the tensor, the computing tasks are shuffled and distributed out of order. This partitioning and shuffling technique ensures the security of the tensor during the outsourcing process.
 
 
'''Abstract'''
 
Protection of access to a tensor in outsourcing deep learning computations via shuffling. For example, the tensor in the computation of an artificial neural network can be partitioned into portions of different sizes. The computing tasks can be generated for operating on the portions such that the results of the computing tasks can be combined to obtain the result of a computing task that operates on the tensor in the computation of the artificial neural network. The computing tasks can be shuffled for distribution out of order to external entities. The partitioning and shuffling can prevent the external entities from accessing and/or reconstructing the tensor.
 
  
 
===Non-uniform Splitting of a Tensor in Shuffled Secure Multiparty Computation ([[US Patent Application 17715885. Non-uniform Splitting of a Tensor in Shuffled Secure Multiparty Computation simplified abstract|17715885]])===
 
Andre Xian Ming Chang
  
 
'''Brief explanation'''
 
The abstract describes a method for protecting access to the values of elements in a tensor during the outsourcing of deep learning computations. The tensor is divided into portions, and some of these portions are further split into parts. These parts are used to generate computing tasks, where each task operates on a portion or part of the tensor. Some portions may share common parts. The computing tasks are generated based on unique parts to avoid redundant computations. The tasks are shuffled and distributed out of order to external entities. The final result of operating on the tensor is obtained by combining the results received back from these external entities.
 
 
'''Abstract'''
 
Protection of access to values of elements in a tensor in outsourcing deep learning computations. For example, the tensor in the computation of an artificial neural network can be partitioned into portions. Some of the portions can be selected for splitting into parts, such that the sum of a set of parts is equal to a respective portion being split to generate computing tasks. Each computing task is configured to operate based on a portion of the tensor or a part of a portion of the tensor. Some of the portions may share common parts. The computing tasks can be generated according to unique parts to eliminate duplicative computing efforts. The computing tasks can be shuffled for distribution out of order to external entities. The result to operate on the tensor can be obtained from results, received back from the external entities, of the outsourced computing tasks.
 
  
 
===MANAGING ERROR-HANDLING FLOWS IN MEMORY DEVICES ([[US Patent Application 18207525. MANAGING ERROR-HANDLING FLOWS IN MEMORY DEVICES simplified abstract|18207525]])===
 
Kishore Kumar Muchherla
  
 
'''Brief explanation'''
 
The abstract describes a system and method that involves a memory device and a processing device. The processing device is able to detect read errors in a specific block of the memory device, which is associated with a voltage offset bin. It then determines the most recent error-handling operation performed on another block associated with the same voltage offset bin. Finally, it performs error-handling to recover the data that experienced the read error.
 
 
'''Abstract'''
 
Systems and methods are disclosed including a memory device and a processing device operatively coupled to the memory device. The processing device can perform operations including detecting a read error with respect to data residing in a first block of the memory device, wherein the first block is associated with a voltage offset bin; determining a most recently performed error-handling operation performed on a second block associated with the voltage offset bin; and performing the error-handling to recover the data.
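The bin-based recovery flow above might be sketched like this; the data structures and operation names are assumptions made for illustration:

```python
# Hypothetical sketch: blocks in the same voltage offset bin tend to fail
# the same way, so start recovery from the error-handling operation that
# most recently worked for another block in that bin.
last_eh_op = {}  # voltage offset bin -> most recent error-handling operation

def on_read_error(block, bin_of_block, run_eh_op):
    bin_id = bin_of_block[block]
    # Reuse the bin's most recent operation rather than restarting the
    # whole error-handling flow from its first step.
    op = last_eh_op.get(bin_id, "default_read_retry")
    recovered = run_eh_op(op)
    if recovered:
        last_eh_op[bin_id] = op
    return op, recovered
```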
 
  
 
===EFFICIENT CACHE PROGRAM OPERATION WITH DATA ENCODING ([[US Patent Application 18178105. EFFICIENT CACHE PROGRAM OPERATION WITH DATA ENCODING simplified abstract|18178105]])===
 
Sushanth Bhushan
  
 
'''Brief explanation'''
 
The abstract describes a memory device that uses control logic to program a set of memory cells to specific levels. The control logic generates cache ready signals that tell the host system when to send data for upcoming programming operations to the device's input/output (I/O) data cache, and it generates encoded data values for each memory cell. The device uses multiple cache storage areas, including the I/O data cache and a third data cache in which data for a pending programming operation is staged.
 
 
'''Abstract'''
 
Control logic in a memory device executes a first programming operation to program the set of memory cells to a set of programming levels. A first cache ready signal is generated, the first cache ready signal indicating to a host system to send first data associated with a second programming operation to an input/output (I/O) data cache of the memory device. A first encoded data value and a second encoded data value associated with each memory cell of the set of memory cells are generated. A second cache ready signal is generated, the second cache ready signal indicating to the host system to send second data associated with the next programming operation to the I/O data cache. The first data associated with the second programming operation is caused to be stored in a third data cache of the cache storage. A third cache ready signal is generated, the third cache ready signal indicating to the host system to send third data associated with the second programming operation to the I/O data cache.
 
  
 
===SECURE OPERATING SYSTEM UPDATE ([[US Patent Application 17717954. SECURE OPERATING SYSTEM UPDATE simplified abstract|17717954]])===
 
Zhan Liu
  
 
'''Brief explanation'''
 
This abstract describes a method, system, and device for securely updating an operating system. The process involves sending a message to a server with a request for an operating system update that is stored in a protected area of memory. The server responds with a message containing the updated operating system data, a value that corresponds to the original request, and a signature from the server. The received data is then validated by comparing the signature and the values. If the data is validated, it is written to the protected area of memory.
 
 
'''Abstract'''
 
Methods, systems, and devices for secure operating system update are described. A first message including a first value and a request associated with an operating system that is stored in a write-protected area of memory may be transmitted to a server. In response to the first message, a second message including data associated with the operating system, a second value corresponding to the first value, and a signature of the server may be received. The data associated with the operating system may be validated based on the signature of the server and a comparison of the second value and the first value. Based on validating the data associated with the operating system, the data associated with the operating system may be written to the write-protected area of memory.
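The request-validate-write flow might look like the toy sketch below. A real implementation would use an asymmetric server signature; HMAC stands in here only so the example is self-contained, and all names are illustrative:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = b"shared-demo-key"  # stand-in for the server's signing key
protected_area = {}              # stand-in for write-protected memory

def request_update():
    # The "first value" (a nonce) that the server must echo back.
    nonce = secrets.token_hex(8)
    return {"nonce": nonce, "request": "os-update"}

def server_reply(msg, os_data: bytes):
    payload = msg["nonce"].encode() + os_data
    return {"data": os_data, "nonce": msg["nonce"],
            "signature": hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()}

def apply_update(msg, reply):
    payload = reply["nonce"].encode() + reply["data"]
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    # Validate both the signature and the echoed value before writing
    # anything to the protected area.
    if reply["nonce"] == msg["nonce"] and hmac.compare_digest(
            reply["signature"], expected):
        protected_area["os"] = reply["data"]
        return True
    return False
```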
 
  
 
===MEMORY ACCESS GATE ([[US Patent Application 18136250. MEMORY ACCESS GATE simplified abstract|18136250]])===
 
Giuseppe Cariello
  
 
'''Brief explanation'''
 
This abstract describes methods, systems, and devices for a memory access gate. It explains that a memory device consists of a controller, memory dice, and a pad for receiving a control signal from an external source. The memory device also includes a switching component that can choose between the externally provided control signal or an internally generated control signal. The controller then sends the selected control signal to a memory die. The memory device can determine whether it is operating in a diagnostic mode or a regular mode and select the control signal accordingly. The abstract mentions that the controller has a secure register, the value of which can affect or control the switching. An authenticated host device can instruct the controller to write a value to the secure register.
 
 
'''Abstract'''
 
Methods, systems, and devices for a memory access gate are described. A memory device may include a controller, memory dice, and a pad for receiving an externally provided control signal, such as a chip enable signal. The memory device may include a switching component for selecting the externally provided control signal or an internally generated control signal. The controller may provide the selected control signal to a memory die. The memory device may determine whether it is operating in a first mode or a second mode, and select the externally provided control signal or the internally generated control signal based on the determination. The first mode may be a diagnostic mode in some cases. The controller may include a secure register whose value may impact or control the switching. An authenticated host device may direct the controller to write the value to the secure register.
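The signal-selection logic above might be sketched like this; the mode names and the register policy (external signal only in diagnostic mode with the secure register set) are assumptions for illustration:

```python
# Hypothetical sketch of the switching component: choose between the
# externally provided control signal (e.g., chip enable on the pad) and
# the internally generated one, gated by a secure register.
class AccessGate:
    def __init__(self):
        self.secure_register = 0  # written only by an authenticated host

    def select_control_signal(self, mode, external_ce, internal_ce):
        # Use the external signal only in the diagnostic mode, and only
        # when the secure register permits it.
        if mode == "diagnostic" and self.secure_register == 1:
            return external_ce
        return internal_ce
```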
 
  
 
===ASSURING INTEGRITY AND SECURE ERASURE OF CRITICAL SECURITY PARAMETERS ([[US Patent Application 18208585. ASSURING INTEGRITY AND SECURE ERASURE OF CRITICAL SECURITY PARAMETERS simplified abstract|18208585]])===
 
Walter Andrew Hubis
  
 
'''Brief explanation'''
 
The abstract describes a processing device that manages critical security parameters (CSPs) for a memory device. It keeps track of two CSP files, one for the first set of CSPs and another for the second set. The device sets flags to indicate whether each file exists and whether they are valid. Based on the evaluation of these flags, the device selects one of the CSP files as the active one.
 
 
'''Abstract'''
 
A processing device sets a first flag that indicates whether a first critical security parameter (CSP) file exists. The first CSP file includes a first set of CSPs for a memory device. The processing device sets a second flag that indicates whether the first CSP file is valid. The processing device sets a third flag that indicates whether a second CSP file exists. The second CSP file includes a second set of CSPs for the memory device. The processing device sets a fourth flag that indicates whether the second critical security parameter file is valid. The processing device selects one of the first or second CSP file as an active CSP file based on an evaluation of the first, second, third, and fourth flags.
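The four-flag evaluation reduces to a small selection function; the preference for the first file when both are valid is an assumed policy, not stated in the abstract:

```python
# Hypothetical sketch of selecting the active CSP file from the four flags.
def select_active_csp(exists1, valid1, exists2, valid2):
    if exists1 and valid1:
        return "csp_file_1"
    if exists2 and valid2:
        return "csp_file_2"
    return None  # no usable CSP file
```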
 
  
 
===Secure Artificial Neural Network Models in Outsourcing Deep Learning Computation ([[US Patent Application 17715835. Secure Artificial Neural Network Models in Outsourcing Deep Learning Computation simplified abstract|17715835]])===
 
Andre Xian Ming Chang
  
 
'''Brief explanation'''
 
The abstract discusses a method for protecting access to artificial neural network (ANN) models when outsourcing deep learning computations. This is done by dividing the ANN model into randomized parts, some of which are offset or encrypted. These modified parts are then shuffled and outsourced to external entities. Similarly, the data samples used as inputs to the ANN models are split into parts to protect them. The final result of a data sample applied to an ANN model is obtained by summing the responses of the model parts with the corresponding sample parts as inputs.
 
 
'''Abstract'''
 
Protection of access to artificial neural network (ANN) models in outsourcing deep learning computations via shuffling parts. For example, an ANN model can be configured as the sum of a plurality of randomized model parts. Some of the randomized parts can be applied an offset operation and/or encrypted to generate modified parts for outsourcing. Such model parts from different ANN models can be shuffled and outsourced to one or more external entities to obtain the responses of the model parts to inputs. Data samples as inputs to the ANN models can also be split into sample parts as inputs to model parts to protect the data samples. The result of a data sample as an input applied to an ANN model can be obtained from a sum of responses of model parts with the sample parts applied as inputs.
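Why splitting a model into randomized parts works for a linear layer: x·(W1 + W2) = x·W1 + x·W2, so each external entity sees only a random-looking weight part, and the owner sums the responses. The 1-D "model" below is purely illustrative:

```python
import random

def dot(x, w):
    return sum(a * b for a, b in zip(x, w))

W = [3.0, -1.0, 2.0]                       # the private model weights
W1 = [random.uniform(-10, 10) for _ in W]  # randomized part (looks random)
W2 = [w - p for w, p in zip(W, W1)]        # complementary part: W1 + W2 == W

x = [1.0, 2.0, 4.0]
# Responses from two external entities, each holding one part.
outsourced = dot(x, W1) + dot(x, W2)
direct = dot(x, W)  # what the owner would compute locally
```

Neither W1 nor W2 alone reveals W, yet their summed responses equal the direct result.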
 
  
 
===Shuffled Secure Multiparty Deep Learning ([[US Patent Application 17715768. Shuffled Secure Multiparty Deep Learning simplified abstract|17715768]])===
 
Andre Xian Ming Chang
  
 
'''Brief explanation'''
 
The abstract describes a method for protecting access to data samples in deep learning computations that are outsourced to external entities. This is done by dividing each data sample into randomized parts and shuffling these parts with parts from other data samples. The shuffled and randomized parts are then given to external entities to perform deep learning computations. The order of applying the summation and deep learning computation can be changed. The results obtained by the external entities can be shuffled back to their respective data samples for summation, providing the final result of the deep learning computation for each data sample.
 
 
'''Abstract'''
 
Protection of access to data samples in outsourcing deep learning computations via shuffling parts. For example, each data sample can be configured as the sum of a plurality of randomized parts. Parts from different data samples are shuffled to mix parts from different samples. One or more external entities can be provided with shuffled and randomized parts to generate results of applying a deep learning computation to the parts. The deep learning computation is configured to allow change of the order between applying the summation and applying the deep learning computation. Thus, results of the external entities applying the deep learning computation to their received parts can be shuffled back for the respective data samples for summation. The summation provides the result of applying the deep learning computation to a respective data sample.
 
  
 
===Secure Multiparty Deep Learning via Shuffling and Offsetting ([[US Patent Application 17715798. Secure Multiparty Deep Learning via Shuffling and Offsetting simplified abstract|17715798]])===
 
Andre Xian Ming Chang
  
 
'''Brief explanation'''
 
The abstract describes a method for protecting access to data samples when outsourcing deep learning computations. This is done by dividing each data sample into randomized parts and applying an offset operation to some of these parts. The modified parts are then shuffled with parts from other data samples and sent to external entities for deep learning computations. The order of applying the summation and deep learning computation can be changed. The results from the external entities are shuffled back, and the reverse offset and summation are applied to obtain the final result of the deep learning computation on the data sample.
 
 
'''Abstract'''
 
Protection of access to data samples in outsourcing deep learning computations via shuffling parts. For example, each data sample can be configured as the sum of a plurality of randomized parts. At least some of the randomized parts can be applied an offset operation to generate modified parts for outsourcing. Such parts from different data samples are shuffled and outsourced to one or more external entities to apply a deep learning computation. The deep learning computation is configured to allow change of the order between applying the summation and applying the deep learning computation. Thus, results of the external entities applying the deep learning computation to their received parts can be shuffled back for a data sample to apply reverse offset and summation. The summation provides the result of applying the deep learning computation to the data sample.
 
  
 
===NON-DESTRUCTIVE PATTERN IDENTIFICATION AT A MEMORY DEVICE ([[US Patent Application 17716580. NON-DESTRUCTIVE PATTERN IDENTIFICATION AT A MEMORY DEVICE simplified abstract|17716580]])===
 
Yuan He
  
 
'''Brief explanation'''
 
This abstract describes methods, systems, and devices for identifying patterns in a memory device without causing any damage to the data stored in the device. The memory device can compare two data patterns and determine if they match or not. It does this by accessing the memory cells and latching the second data pattern to a sense amplifier. The device then deactivates the word line, isolating the memory cells to protect the data. It writes the first data pattern to the sense amplifier and compares it with the second data pattern. Finally, the device outputs a signal indicating whether the data patterns match or not.
 
 
'''Abstract'''
 
Methods, systems, and devices for non-destructive pattern identification at a memory device are described. A memory device may perform pattern identification within the memory device and output a flag indicating whether a first data pattern matches with a second data pattern. The memory device may access one or more memory cells, via a word line, and latch the second data pattern of the memory cells to a sense amplifier. The memory device may deactivate the word line, which may result in isolating the memory cells from potential destruction of data. The memory device may write a first data pattern to the sense amplifier and compare the first data pattern and second data pattern at the sense amplifier. The memory device may output a signal indicating whether the data patterns match.
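The compare-at-the-sense-amplifier flow might be modeled as below; this is a behavioral sketch whose data model is invented, while the steps mirror the abstract:

```python
# Hypothetical sketch: latch the stored pattern, deactivate the word line
# to isolate (and so protect) the cells, then compare at the sense amp.
class SenseAmpCompare:
    def __init__(self, cells):
        self.cells = cells          # data held in the memory cells
        self.latch = None
        self.word_line_active = False

    def identify(self, pattern):
        self.word_line_active = True
        self.latch = list(self.cells)   # latch the stored (second) pattern
        self.word_line_active = False   # isolate cells from destruction
        # Write the first pattern into the sense amplifier and compare
        # there; return the match/mismatch flag.
        return list(pattern) == self.latch
```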
 
  
 
===APPARATUS AND METHODS FOR THERMAL MANAGEMENT IN A MEMORY ([[US Patent Application 17704154. APPARATUS AND METHODS FOR THERMAL MANAGEMENT IN A MEMORY simplified abstract|17704154]])===
 
'''Main Inventor'''

Jeremy Binfet
  
 
'''Brief explanation'''
 
The abstract describes a memory system consisting of memory cells and a controller that manages access to them. After initiating an operation on the memory cells, the controller indicates that it is unavailable to start the next operation, appends a delay interval to the array access time, and signals availability again once the delay completes. The duration of the delay is determined by the temperature of the system.
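A minimal sketch of the temperature-dependent delay, with illustrative numbers not taken from the application:

```python
def delay_interval_us(temperature_c, threshold_c=85, hot_delay_us=50):
    """Return the delay appended to the array access time.

    Hotter parts get a longer delay; the threshold and duration here
    are made-up example values."""
    return hot_delay_us if temperature_c >= threshold_c else 0


def array_operation(temperature_c):
    """Run one array operation and report the total busy time (us)."""
    access_time_us = 100  # nominal array access time (illustrative)
    # The controller signals unavailability for the access time plus the
    # temperature-dependent delay interval, then availability again.
    return access_time_us + delay_interval_us(temperature_c)
```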
 
 
'''Abstract'''
 
Memories might include an array of memory cells and a controller for access of the array of memory cells. The controller might be configured to cause the memory to initiate an array operation on the array of memory cells, indicate an unavailability to initiate a next array operation, append a delay interval to an array access time of the array operation, and indicate an availability to initiate a next array operation in response to a completion of the delay interval. The delay interval might have a duration determined in response to an indication of temperature.
 
  
 
===FASTER MULTI-CELL READ OPERATION USING REVERSE READ CALIBRATIONS ([[US Patent Application 18117268. FASTER MULTI-CELL READ OPERATION USING REVERSE READ CALIBRATIONS simplified abstract|18117268]])===
 
'''Main Inventor'''

Go Shikata
  
 
'''Brief explanation'''
 
The abstract describes a memory device that has a memory array with multiple memory cells connected to wordlines and bitlines. The device also includes control logic that performs various operations. One of these operations involves determining a metadata value that represents the first read level voltage of the highest threshold voltage distribution of a subset of the memory cells. This metadata value can be a count of failed bytes or failed bits. Based on this metadata value, the control logic adjusts the second read level voltage for the second-highest threshold voltage distribution of the same subset of memory cells. Finally, the control logic applies the adjusted second read level voltage to a wordline to perform an initial calibrated read of the memory cells in the second-highest threshold voltage distribution.
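One way to picture the adjustment (the voltages, step size, and scaling below are invented for illustration; the application gives no numbers):

```python
def adjusted_second_read_mv(fail_bit_count, base_mv=4200, step_mv=20,
                            max_steps=5):
    """Offset the second-highest distribution's read level based on the
    failed-bit count measured at the highest distribution."""
    # More failures at the first read imply a larger calibration offset,
    # capped at max_steps.
    steps = min(fail_bit_count // 8, max_steps)
    return base_mv - steps * step_mv
```

The adjusted voltage is then applied to the wordline for the initial calibrated read of the second-highest distribution.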
 
 
'''Abstract'''
 
A memory device having a memory array with a plurality of memory cells electrically coupled to a plurality of wordlines and a plurality of bitlines and control logic coupled with the memory array. The control logic perform operations including: determining a metadata value characterizing a first read level voltage of a highest threshold voltage distribution of a subset of the plurality of memory cells, wherein the metadata value comprises at least one of a failed byte count or a failed bit count; adjusting, based on the metadata value, a second read level voltage for a second-highest threshold voltage distribution of the subset of the plurality of memory cells; and causing, to perform an initial calibrated read of the subset of the plurality of memory cells, the adjusted second read level voltage to be applied to a wordline of the plurality of wordlines to read the second-highest threshold voltage distribution.
 
  
 
===TEST CIRCUIT IN SCRIBE REGION FOR MEMORY FAILURE ANALYSIS ([[US Patent Application 17719327. TEST CIRCUIT IN SCRIBE REGION FOR MEMORY FAILURE ANALYSIS simplified abstract|17719327]])===
 
'''Main Inventor'''

ATSUKO OTSUKA
  
 
'''Brief explanation'''
 
The abstract describes apparatuses and methods involving a test circuit located in a scribe region between two adjacent semiconductor chips. The scribe region contains test address pads and an address decoder circuit: the test address pads receive address signals, and the address decoder circuit generates first signals in response to them.
 
 
'''Abstract'''
 
Apparatuses and methods including a test circuit in a scribe region between chips are described. An example apparatus includes: a first semiconductor chip and a second semiconductor chip, adjacent to one another; a scribe region between the first and second semiconductor chips; test address pads in the scribe region; and an address decoder circuit in the scribe region. The test address pads receive address signals. The address decoder provides first signals responsive to the address signals from the test address pads.
 
  
 
===MEMORY DEVICE INCLUDING SELF-ALIGNED CONDUCTIVE CONTACTS ([[US Patent Application 18200852. MEMORY DEVICE INCLUDING SELF-ALIGNED CONDUCTIVE CONTACTS simplified abstract|18200852]])===
 
'''Main Inventor'''

Kar Wui Thong
  
 
'''Brief explanation'''
 
The abstract describes apparatuses built from alternating levels of conductive and dielectric materials. Memory cell strings include pillars that pass through these levels. A dielectric structure formed in a slit separates the conductive and dielectric levels into two portions. First and second conductive structures are located over, and coupled to, the pillars of the first and second memory cell strings, respectively, and a conductive line contacts the dielectric structure and both conductive structures.
 
 
'''Abstract'''
 
Some embodiments include apparatuses and methods of forming the apparatuses. One of the apparatuses includes levels of conductive materials interleaved with levels of dielectric materials; memory cell strings including respective pillars extending through the levels of conductive materials and the levels of dielectric materials; a dielectric structure formed in a slit, the slit extending through the levels of conductive materials and the levels of dielectric materials, the dielectric structure separating the levels of conductive materials and the levels of dielectric materials into a first portion and a second portion; first conductive structures located over and coupled to respective pillars of the first memory cell strings; second conductive structures located over and coupled to respective pillars of the second memory cell strings; and a conductive line contacting the dielectric structure, a conductive structure of the first conductive structures, and a conductive structure of the second conductive structures.
 
  
 
===SEMICONDUCTOR DEVICE HAVING L-SHAPED CONDUCTIVE PATTERN ([[US Patent Application 17714797. SEMICONDUCTOR DEVICE HAVING L-SHAPED CONDUCTIVE PATTERN simplified abstract|17714797]])===
 
'''Main Inventor'''

Harunobu Kondo
  
 
'''Brief explanation'''
 
The abstract describes an apparatus that has a semiconductor substrate with a main surface. On this surface, there is a conductive pattern consisting of three sections. The first section extends in one direction, the second section extends in a different direction, and the third section connects the first and second sections. The third section has a slit that extends in a direction different from the first two sections.
 
 
'''Abstract'''
 
Disclosed herein is an apparatus that includes a semiconductor substrate having a main surface extending in a first direction and a second direction different from the first direction and a conductive pattern formed over the main surface of the semiconductor substrate. The conductive pattern includes a first section extending in the first direction, a second section extending in the second direction, and a third section connected between the first and second sections. The third section of the conductive pattern has a first slit extending in a third direction different from the first and second directions.
 
  
 
===TECHNIQUES FOR FORMING A DEVICE WITH SCRIBE ASYMMETRY ([[US Patent Application 17715481. TECHNIQUES FOR FORMING A DEVICE WITH SCRIBE ASYMMETRY simplified abstract|17715481]])===
 
'''Main Inventor'''

Anna Maria Conti
  
 
'''Brief explanation'''
 
This abstract describes methods, systems, and devices for creating a device with scribe asymmetry. Scribes, which are the spaces between circuits on a wafer, can be made with different widths to improve the efficiency of the fabrication process. One subset of scribes may have a wider width and be used for placing structures that aid in die testing and integration. Another subset of scribes may have a narrower width and not have any structures placed in them.
 
 
'''Abstract'''
 
Methods, systems, and devices for techniques for forming a device with scribe asymmetry are described. Circuits (e.g., arrays of memory cells) may be printed on a wafer and separated by scribes of various widths to increase an array efficiency of a fabrication process. For example, a scribe that extends in a first direction may have a width in a second direction. A first subset of scribes may have a first width, where one or more structures may be placed in the first subset of scribes to facilitate die testing and integration. A second subset of scribes may have a second width. In some examples, the structures may not be placed in the second subset of scribes and, accordingly, the second width may be less than the first width.
 
  
 
===MEMORY DEVICE INCLUDING SUPPORT STRUCTURES ([[US Patent Application 18209231. MEMORY DEVICE INCLUDING SUPPORT STRUCTURES simplified abstract|18209231]])===
 
'''Main Inventor'''

Andrew Zhe Wei Ong
  
 
'''Brief explanation'''
 
The abstract describes various embodiments of apparatuses and methods for forming those apparatuses. One specific apparatus includes multiple tiers of memory cells and control gates stacked on top of each other on a substrate. The control gates are arranged in a staircase-like structure, with conductive contacts making contact with them at specific locations. A dielectric structure is located on the sidewalls of the control gates, and support structures are positioned adjacent to the conductive contacts. These support structures have vertical lengths extending from the substrate and are located at a specific distance from the edge of the dielectric structure. The ratio of the width of the support structure to this distance falls within the range of 1.6 to 2.0.
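The claimed geometry reduces to a simple ratio check (units are arbitrary, since only the width-to-distance ratio matters):

```python
def support_ratio_in_spec(width, distance, low=1.6, high=2.0):
    """True if the support structure's width divided by its distance from
    the dielectric-structure edge falls in the claimed 1.6-2.0 range."""
    return low <= width / distance <= high
```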
 
 
'''Abstract'''
 
Some embodiments include apparatuses and methods of forming the apparatuses. One of the apparatuses includes tiers of respective memory cells and control gates, the tier located one over another over a substrate, the control gates including a control gate closest to the substrate, the control gates including respective portions forming a staircase structure; conductive contacts contacting the control gates at a location of the staircase structure, the conductive contacts including a conductive contact contacting the control gate; a dielectric structure located on sidewalls of the control gates; and support structures adjacent the conductive contacts and having lengths extending vertically from the substrate, the support structures including a support structure closest to the conductive contact, the support structure located at a distance from an edge of the dielectric structure, wherein a ratio of a width of the support structure over the distance is ranging from 1.6 to 2.0.
 
  
 
===Ferroelectric Transistors and Assemblies Comprising Ferroelectric Transistors ([[US Patent Application 18207905. Ferroelectric Transistors and Assemblies Comprising Ferroelectric Transistors simplified abstract|18207905]])===
 
'''Main Inventor'''

Kamal M. Karda
  
 
'''Brief explanation'''
 
The abstract describes a type of transistor called a ferroelectric transistor. This transistor has two electrodes, with one electrode positioned slightly away from the other. In between the electrodes, there is an active region that contains a transistor gate. The active region also includes a source/drain region next to each electrode, and a body region in between. The body region has a channel region next to the transistor gate. Within the active region, there is a barrier that allows electrons to pass through but not holes. Additionally, there is a ferroelectric material located between the transistor gate and the channel region.
 
 
'''Abstract'''
 
Some embodiments include a ferroelectric transistor having a first electrode and a second electrode. The second electrode is offset from the first electrode by an active region. A transistor gate is along a portion of the active region. The active region includes a first source/drain region adjacent the first electrode, a second source/drain region adjacent the second electrode, and a body region between the first and second source/drain regions. The body region includes a gated channel region adjacent the transistor gate. The active region includes at least one barrier between the second electrode and the gated channel region which is permeable to electrons but not to holes. Ferroelectric material is between the transistor gate and the gated channel region.
 
  
 
===TRANSIENT LOAD MANAGEMENT ([[US Patent Application 17715552. TRANSIENT LOAD MANAGEMENT simplified abstract|17715552]])===
 
'''Main Inventor'''

Leon Zlotnik
  
 
'''Brief explanation'''
 
The abstract describes a system that includes sensing circuitry and clock management circuitry. The sensing circuitry detects voltage, current, and/or activity in a system-on-chip (SoC) and determines if it meets a certain threshold. The clock management circuitry generates clocking signals for the SoC and adjusts the frequency of these signals based on the detected voltage, current, and/or activity. This adjustment helps to reduce the power consumed by the SoC.
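The throttling logic can be sketched as follows (the threshold and frequencies are invented for illustration):

```python
def clock_frequency_mhz(sensed_current_ma, threshold_ma=500,
                        nominal_mhz=800, throttled_mhz=400):
    """Lower the generated clock frequency when the sensed load meets
    the threshold, reducing the power consumed by the SoC."""
    if sensed_current_ma >= threshold_ma:
        return throttled_mhz
    return nominal_mhz
```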
 
 
'''Abstract'''
 
Sensing circuitry and clock management circuitry provide transient load management. The sensing circuitry detects a voltage, current, and/or activity associated with a system-on-chip (SoC) and determines whether the detected voltage, current, and/or activity meets a threshold. The clock management circuitry generates clocking signals for the SoC and alters a frequency of the generated clocking signals in response to the detected voltage, current, and/or activity meeting the threshold to alter an amount of power consumed by the SoC.
 
  
 
===SOLID STATE LIGHTING SYSTEMS AND ASSOCIATED METHODS OF OPERATION AND MANUFACTURE ([[US Patent Application 18335885. SOLID STATE LIGHTING SYSTEMS AND ASSOCIATED METHODS OF OPERATION AND MANUFACTURE simplified abstract|18335885]])===
 
'''Main Inventor'''

Anil Tipirneni
  
 
'''Brief explanation'''
 
The abstract describes a lighting system that uses a solid state lighting device to generate mixed light. The lighting device includes light sources and a sensor that detects light from one of the sources. A controller is used to control two or more of the light sources based on the sensor's output. The controller can also communicate with the sensor to provide closed-loop control.
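Closed-loop control of this kind is essentially a feedback update; a minimal proportional-control sketch (the gain, target, and drive scale are hypothetical):

```python
def adjust_drive(target_lumens, sensed_lumens, drive, gain=0.1):
    """One closed-loop step: nudge the drive level (0..1) of the light
    sources toward the output the sensor should report."""
    error = (target_lumens - sensed_lumens) / target_lumens
    # Clamp so the drive level stays within its valid range.
    return max(0.0, min(1.0, drive + gain * error))
```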
 
 
'''Abstract'''
 
A lighting system includes a solid state lighting device capable of generating mixed light and a controller. The solid state lighting device includes light sources for producing mixed light and a sensor configured to detect light from one of the light sources. The controller controls two or more of the light sources based on output from the sensor. The controller can communicate with the sensor to provide closed-loop control.
 
  
 
===METAL GATE MEMORY DEVICE AND METHOD ([[US Patent Application 17717406. METAL GATE MEMORY DEVICE AND METHOD simplified abstract|17717406]])===
 
'''Main Inventor'''

Hyucksoo Yang
  
 
'''Brief explanation'''
 
The abstract describes memory devices and systems that include an array of memory cells and a transistor on the array's periphery. Data lines coupled to the memory cells extend over the transistor's first metal gate; the data lines are formed from a second metal and form a direct interface with that gate.
 
 
'''Abstract'''
 
Apparatus and methods are disclosed, including memory devices and systems. Example memory devices, systems and methods include an array of memory cells and a transistor located on a periphery of the array of memory cells. A number of data lines are shown coupled to memory cells in the array, wherein the number of data lines extend over a first metal gate of a transistor in the periphery of the array, where the number of data lines are formed from a second metal, and form a direct interface with the first metal gate.
 
  
 
===Integrated Assemblies and Methods of Forming Integrated Assemblies ([[US Patent Application 18207499. Integrated Assemblies and Methods of Forming Integrated Assemblies simplified abstract|18207499]])===
 
'''Main Inventor'''

Shuangqiang Luo
  
 
'''Brief explanation'''
 
The abstract describes an integrated assembly that includes different memory regions and an intermediate region. A stack, made up of conductive and insulative levels, extends across these regions. Channel-material-pillars are arranged within the memory regions, and memory-block-regions extend longitudinally across the memory regions and the intermediate region. Staircase regions are present in the intermediate region, overlapping two of the memory-block-regions. First panel regions extend across the staircase regions, while second panel regions provide lateral separation between adjacent memory-block-regions. The second panel regions are different in size or composition compared to the first panel regions. The abstract also mentions methods of forming these integrated assemblies.
 
 
'''Abstract'''
 
Some embodiments include an integrated assembly having a first memory region, a second memory region, and an intermediate region between the memory regions. A stack extends across the memory regions and the intermediate region. The stack includes alternating conductive levels and insulative levels. Channel-material-pillars are arranged within the memory regions. Memory-block-regions extend longitudinally across the memory regions and the intermediate region. Staircase regions are within the intermediate region. Each of the staircase regions laterally overlaps two of the memory-block-regions. First panel regions extend longitudinally across at least portions of the staircase regions. Second panel regions extend longitudinally and provide lateral separation between adjacent memory-block-regions. The second panel regions are of laterally different dimensions than the first panel regions and/or are compositionally different than the first panel regions. Some embodiments include methods of forming integrated assemblies.
 
  
 
===TRENCH AND PIER ARCHITECTURES FOR THREE-DIMENSIONAL MEMORY ARRAYS ([[US Patent Application 17714771. TRENCH AND PIER ARCHITECTURES FOR THREE-DIMENSIONAL MEMORY ARRAYS simplified abstract|17714771]])===
 
'''Main Inventor'''

Fabio Pellizzer
 
 
'''Brief explanation'''
 
This abstract describes methods, systems, and devices for creating trench and pier architectures in three-dimensional memory arrays. These architectures involve forming pier structures in contact with alternating layers of materials deposited on a substrate. These pier structures provide support for further processing. The memory die includes alternating layers of a first and second material, which can be shaped into different cross-sectional patterns. The pier structures are formed in contact with these patterns, ensuring that when one of the materials is removed to create voids, the pier structures provide mechanical support for the remaining material's cross-sectional pattern. These pier structures can be formed within or along trenches or other features aligned in the direction of the memory array, allowing for self-alignment in subsequent operations.
 
 
'''Abstract'''
 
Methods, systems, and devices for trench and pier architectures for three-dimensional memory arrays are described. A semiconductor device (e.g., a memory die) may include pier structures formed in contact with features formed from alternating layers of materials deposited over a substrate, which may provide support for subsequent processing. For example, a memory die may include alternating layers of a first material and a second material, which may be formed into various cross-sectional patterns. Pier structures may be formed in contact with the cross sectional patterns such that, when either the first material or the second material is removed to form voids, the pier structures may provide mechanical support of the cross-sectional pattern of the remaining material. In some examples, such pier structures may be formed within or along trenches or other features aligned along a direction of a memory array, which may provide a degree of self-alignment for subsequent operations.
 

Revision as of 10:30, 19 October 2023
