Micron Technology, Inc. patent applications published on October 12th, 2023

From WikiPatents

Revision as of 17:33, 16 October 2023

Summary of the patent applications from Micron Technology, Inc. on October 12th, 2023

Micron Technology, Inc. has recently filed several patent applications covering areas such as three-dimensional memory arrays, integrated assemblies, memory devices and systems, lighting systems, clock management circuitry, ferroelectric transistors, memory cells and control gates, scribe asymmetry, and conductive patterns on semiconductor substrates.

Notable technologies covered by these filings include:

  • Creating trench and pier architectures in three-dimensional memory arrays, allowing for self-alignment in subsequent operations.
  • Integrated assemblies with different memory regions, intermediate regions, and staircase regions, providing efficient memory storage and organization.
  • Memory devices with memory cells and transistors, where data lines directly interface with the transistor's gate.
  • Lighting systems that use solid state lighting devices to generate mixed light, with closed-loop control for efficient and effective lighting.
  • Clock management circuitry that adjusts the frequency of clocking signals based on detected voltage, current, and/or activity, reducing power consumption.
  • Ferroelectric transistors with two electrodes, an active region, and a ferroelectric material, allowing for controlled flow of current.
  • Memory cells and control gates stacked in multiple tiers, with conductive contacts, dielectric structures, and support structures for efficient memory storage.
  • Devices with scribe asymmetry, where scribes have different widths to improve fabrication efficiency and aid in testing and integration.
  • Apparatuses with conductive patterns on semiconductor substrates, including sections and slits that extend in different directions, providing versatile electrical connections.

Overall, Micron Technology, Inc. has filed patent applications covering a wide range of technologies and devices related to memory arrays, integrated assemblies, lighting systems, clock management, transistors, memory cells, scribe asymmetry, and conductive patterns. These filings demonstrate the company's commitment to innovation and advancement in the field of semiconductor technology.



Patent applications for Micron Technology, Inc. on October 12th, 2023

APPARATUS HAVING SEGMENTED DATA LINES AND METHODS OF THEIR OPERATION (18117553)

Inventor Vikas Rana

Brief explanation

The abstract describes a type of memory that consists of multiple memory cells arranged in an array. These memory cells are connected to a data line, which is divided into two segments. The first segment is connected to a subset of memory cells, while the second segment is selectively connected to the first segment. There are also two page buffers, with each buffer selectively connected to one of the data line segments.

Abstract

Memory might include an array of memory cells and a data line selectively connected to a plurality of memory cells of the array of memory cells. The data line might include a first data line segment corresponding to a first subset of memory cells of the plurality of memory cells and a second data line segment corresponding to a second subset of memory cells of the plurality of memory cells. The second data line segment is selectively connected to the first data line segment. A first page buffer might be selectively connected to the first data line segment, and a second page buffer might be selectively connected to the second data line segment.

FINE GRAINED RESOURCE MANAGEMENT FOR ROLLBACK MEMORY OPERATIONS (17716250)

Inventor Tony M. Brewer

Brief explanation

The abstract describes a method for managing resources in a system that allows for undo logging sessions. The system maintains a list of available resources for each session and keeps track of the amount of available memory. If a write operation exceeds the available memory for a session, a flag is set and the write is not mirrored. This solves the problem of one session using too much memory and affecting the functioning of other sessions by setting resource limits for each session.

Abstract

Disclosed in some examples are methods, systems, computing devices, and machine-readable mediums in which the system maintains a list of resources available for each rollback session. In some examples, state data is kept that indicates available memory. If a write occurs for a particular session and the amount of available memory for a session has been used, a flag is set in metadata for the memory location and the write is not mirrored. In this manner, the technical problem of one undo logging session using too much memory and preventing other undo logging sessions from properly functioning is solved by the technical solution of setting resource limits for each undo logging session.
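The budget-and-flag behavior described above can be sketched in a few lines. This is an illustrative model, not Micron's implementation; the class name, entry-count budgets, and flag set are assumptions made for the example.

```python
class UndoLog:
    """Tracks a per-session memory budget for undo (rollback) logging."""

    def __init__(self, limits):
        # limits: session id -> number of undo-log entries the session may use
        self.limits = dict(limits)
        self.used = {sid: 0 for sid in limits}
        self.log = []                # mirrored (old value) entries
        self.overflow_flags = set()  # sessions whose writes stopped mirroring

    def write(self, session, address, old_value):
        """Record old_value for rollback unless the session's budget is spent."""
        if self.used[session] >= self.limits[session]:
            # Budget exhausted: set a flag and skip mirroring, so this session
            # cannot starve other undo logging sessions of memory.
            self.overflow_flags.add(session)
            return False
        self.log.append((session, address, old_value))
        self.used[session] += 1
        return True
```

Once a session trips its flag, further writes in that session succeed but are no longer mirrored, which is the trade-off the abstract describes.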

TECHNIQUES FOR FOUR CYCLE ACCESS COMMANDS (18161757)

Inventor Sujeet V. Ayyapureddi

Brief explanation

This abstract describes methods, systems, and devices for four cycle access commands. It explains that a memory device can communicate access commands with a host device using a command-address channel and multiple data channels. The host device sends an access command that includes an operation code indicating the type of command, a first address of the memory device (first target), and a second address of the memory device (second target). The first address is associated with a first data channel, and the second address is associated with a second data channel. This allows the memory device and the host device to communicate first data corresponding to the first address over the first data channel and second data corresponding to the second address over the second data channel.

Abstract

Methods, systems, and devices for techniques for four cycle access commands are described. A memory device may communicate access commands with a host device over a command-address (CA) channel associated with multiple data channels. The host device may transmit an access command that includes an operation code indicating a type of the access command, a first address of the memory device that is a first target of the access command, and a second address of the memory device that is a second target of the access command. The first address may be associated with a first data channel, and the second address may be associated with a second data channel. Accordingly, the memory device and the host device may communicate first data corresponding to the first address over the first data channel and second data corresponding to the second address over the second data channel.
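One way to picture a command that carries one operation code and two target addresses is as a single packed word on the CA channel. The field widths below are invented for illustration; the patent does not specify an encoding.

```python
# Hypothetical layout: [opcode | addr_a | addr_b], where addr_a is served
# over data channel 0 and addr_b over data channel 1.
OPCODE_BITS, ADDR_BITS = 8, 32

def encode_command(opcode, addr_a, addr_b):
    """Pack the opcode and both target addresses into one integer."""
    word = opcode & ((1 << OPCODE_BITS) - 1)
    word = (word << ADDR_BITS) | (addr_a & ((1 << ADDR_BITS) - 1))
    word = (word << ADDR_BITS) | (addr_b & ((1 << ADDR_BITS) - 1))
    return word

def decode_command(word):
    """Unpack into (opcode, addr_a, addr_b) on the memory-device side."""
    addr_b = word & ((1 << ADDR_BITS) - 1)
    addr_a = (word >> ADDR_BITS) & ((1 << ADDR_BITS) - 1)
    opcode = word >> (2 * ADDR_BITS)
    return opcode, addr_a, addr_b
```

The point of the scheme is that one command authorizes two independent data transfers, one per data channel.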

STORAGE SYSTEM WITH MULTIPLE DATA PATHS DEPENDING ON DATA CLASSIFICATIONS (18295482)

Inventor Reshmi BASU

Brief explanation

This abstract describes a storage system that receives a write command and associated data. The system classifies the data and associates it with a queue. A processor retrieves the data from the queue and compresses it to form compressed data for storage in a memory device, based on the write command.

Abstract

In some implementations, a storage system may receive, via a system controller of the storage system, a write command and data associated with the write command. The storage system may classify, via the system controller, the data. The storage system may associate, via the system controller, the data with a queue based on classifying the data. The storage system may retrieve, via a processor of the storage system, the data associated with the queue. The storage system may compress, via the processor, the data to form compressed data for storage in a memory device of the storage system based on the write command.
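The classify-then-queue-then-compress path can be sketched as below. The two-class size rule, the queue names, and the use of zlib are all assumptions for the example; the patent leaves the classification criteria and compression method open.

```python
import zlib
from collections import deque

queues = {"hot": deque(), "cold": deque()}

def classify(data: bytes) -> str:
    # Hypothetical classifier: small writes are "hot", large ones "cold".
    return "hot" if len(data) < 4096 else "cold"

def handle_write(data: bytes) -> None:
    # System controller: classify the data and associate it with a queue.
    queues[classify(data)].append(data)

def process(queue_name: str) -> bytes:
    # Processor: retrieve data from the queue and compress it for storage.
    data = queues[queue_name].popleft()
    return zlib.compress(data)
```

Separating the controller's classification step from the processor's compression step is what lets different data classes take different paths to the memory device.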

Split a Tensor for Shuffling in Outsourcing Computation Tasks (17715863)

Inventor Andre Xian Ming Chang

Brief explanation

The abstract describes a method for protecting the access to a tensor used in deep learning computations when outsourcing the computations to external entities. The tensor is a data structure with elements arranged in rows and columns. The method involves dividing the tensor into smaller parts and shuffling them, along with other tasks, before outsourcing them to external entities. The results returned by these entities are then used to compute the final result of the tensor in the neural network. This partitioning and shuffling technique helps prevent the external entities from accessing or reconstructing the original tensor.

Abstract

Protection of access to a tensor in outsourcing deep learning computations via shuffling. For example, the tensor in the computation of an artificial neural network can have elements arranged in a first dimension of rows and a second dimension of columns. The tensor can be partitioned along the first dimension and the second dimension to generate computing tasks that are shuffled and/or mixed with other tasks for outsourcing to external entities. Computing results returned from the external entities can be used to generate a computing result of the tensor in the computation of the artificial neural network. The partitioning and shuffling can prevent the external entities from accessing and/or reconstructing the tensor.
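The partition-shuffle-reassemble idea can be illustrated with a small 2-D tensor. The per-block workload here (scaling by 2) is a stand-in for the real neural-network computation, and the block sizes are arbitrary; the point is that shuffled blocks reassemble into the correct overall result.

```python
import random

def partition(tensor, r, c):
    """Split a matrix into r x c blocks, keyed by block position."""
    rows, cols = len(tensor), len(tensor[0])
    blocks = {}
    for i in range(0, rows, r):
        for j in range(0, cols, c):
            blocks[(i, j)] = [row[j:j + c] for row in tensor[i:i + r]]
    return blocks

def outsource(blocks):
    """Shuffle the block tasks before dispatch, so no single external
    entity sees the blocks in layout order; each 'worker' scales its block."""
    tasks = list(blocks.items())
    random.shuffle(tasks)
    return {key: [[2 * x for x in row] for row in block] for key, block in tasks}

def reassemble(results, rows, cols):
    """Recombine returned block results into the full output tensor."""
    out = [[0] * cols for _ in range(rows)]
    for (i, j), block in results.items():
        for di, row in enumerate(block):
            for dj, x in enumerate(row):
                out[i + di][j + dj] = x
    return out
```

A real deployment would mix these tasks with tasks from other tensors, which is what denies an external entity the context needed to reconstruct the original.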

Partition a Tensor with Varying Granularity Levels in Shuffled Secure Multiparty Computation (17715877)

Inventor Andre Xian Ming Chang

Brief explanation

The abstract discusses a method for protecting access to a tensor in outsourced deep learning computations. In this method, the tensor is divided into different portions of varying sizes. Computing tasks are then generated to operate on these portions, and the results of these tasks are combined to obtain the final result of the computation. To prevent external entities from accessing or reconstructing the tensor, the computing tasks are shuffled and distributed out of order. This partitioning and shuffling technique ensures the security of the tensor during the outsourcing process.

Abstract

Protection of access to a tensor in outsourcing deep learning computations via shuffling. For example, the tensor in the computation of an artificial neural network can be partitioned into portions of different sizes. The computing tasks can be generated for operating on the portions such that the results of the computing tasks can be combined to obtain the result of a computing task that operates on the tensor in the computation of the artificial neural network. The computing tasks can be shuffled for distribution out of order to external entities. The partitioning and shuffling can prevent the external entities from accessing and/or reconstructing the tensor.

Non-uniform Splitting of a Tensor in Shuffled Secure Multiparty Computation (17715885)

Inventor Andre Xian Ming Chang

Brief explanation

The abstract describes a method for protecting access to values in a tensor during the outsourcing of deep learning computations. The tensor is divided into portions, and some portions are further split into parts to create computing tasks. Each task operates on a portion or part of the tensor. Some portions may share common parts. The tasks are generated based on unique parts to avoid redundant computations. The tasks can be shuffled and distributed out of order to external entities. The final result of operating on the tensor is obtained by combining the results of the outsourced computing tasks.

Abstract

Protection of access to values of elements in a tensor in outsourcing deep learning computations. For example, the tensor in the computation of an artificial neural network can be partitioned into portions. Some of the portions can be selected for splitting into parts, such that the sum of a set of parts is equal to a respective portion being split to generate computing tasks. Each computing task is configured to operate based on a portion of the tensor or a part of a portion of the tensor. Some of the portions may share common parts. The computing tasks can be generated according to unique parts to eliminate duplicative computing efforts. The computing tasks can be shuffled for distribution out of order to external entities. The result to operate on the tensor can be obtained from results, received back from the external entities, of the outsourced computing tasks.

MANAGING ERROR-HANDLING FLOWS IN MEMORY DEVICES (18207525)

Inventor Kishore Kumar Muchherla

Brief explanation

The abstract describes a system and method that includes a memory device and a processing device. The processing device can detect read errors in a specific block of the memory device, which is associated with a voltage offset bin. It then determines the most recent error-handling operation performed on another block associated with the same voltage offset bin. Finally, it performs the necessary error-handling operation to recover the data.

Abstract

Systems and methods are disclosed including a memory device and a processing device operatively coupled to the memory device. The processing device can perform operations including detecting a read error with respect to data residing in a first block of the memory device, wherein the first block is associated with a voltage offset bin; determining a most recently performed error-handling operation performed on a second block associated with the voltage offset bin; and performing the error-handling operation to recover the data.

EFFICIENT CACHE PROGRAM OPERATION WITH DATA ENCODING (18178105)

Inventor Sushanth Bhushan

Brief explanation

The abstract describes a memory device in which control logic programs memory cells to specific levels. Cache ready signals indicate when the host system should send data for each programming operation to the memory device's input/output (I/O) data cache. Two encoded data values are generated for each memory cell, and a third data cache stores the data associated with a second programming operation.

Abstract

Control logic in a memory device executes a first programming operation to program the set of memory cells to a set of programming levels. A first cache ready signal is generated, the first cache ready signal indicating to a host system to send first data associated with a second programming operation to an input/output (I/O) data cache of the memory device. A first encoded data value and a second encoded data value associated with each memory cell of the set of memory cells are generated. A second cache ready signal is generated, the second cache ready signal indicating to the host system to send second data associated with the next programming operation to the I/O data cache. The first data associated with the second programming operation is caused to be stored in a third data cache of the cache storage. A third cache ready signal is generated, the third cache ready signal indicating to the host system to send third data associated with the second programming operation to the I/O data cache.

SECURE OPERATING SYSTEM UPDATE (17717954)

Inventor Zhan Liu

Brief explanation

This abstract describes a method, system, and device for securely updating an operating system. The process involves sending a message to a server with a request and a value associated with the operating system. In response, a second message is received from the server, containing data related to the operating system, a value corresponding to the initial value, and a signature from the server. The received data is then validated by comparing the values and checking the server's signature. If the validation is successful, the data is written to a protected area of memory where the operating system is stored.

Abstract

Methods, systems, and devices for secure operating system update are described. A first message including a first value and a request associated with an operating system that is stored in a write-protected area of memory may be transmitted to a server. In response to the first message, a second message including data associated with the operating system, a second value corresponding to the first value, and a signature of the server may be received. The data associated with the operating system may be validated based on the signature of the server and a comparison of the second value and the first value. Based on validating the data associated with the operating system, the data associated with the operating system may be written to the write-protected area of memory.
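The validation step, i.e. echoing the first value back and checking the server's signature before touching protected memory, can be sketched as follows. An HMAC stands in for the server's signature scheme, which the abstract does not specify; the shared key, field names, and image payload are all assumptions for the example.

```python
import hashlib
import hmac
import secrets

SERVER_KEY = b"shared-secret"  # hypothetical; a real design could use PKI instead

def make_request():
    """Device -> server: update request plus a fresh first value (nonce)."""
    return {"request": "os-update", "value": secrets.token_hex(8)}

def server_reply(msg):
    """Server -> device: OS data, echoed value, and a signature over both."""
    data = b"new-os-image"
    mac = hmac.new(SERVER_KEY, data + msg["value"].encode(), hashlib.sha256)
    return {"data": data, "value": msg["value"], "signature": mac.hexdigest()}

def validate_and_write(msg, reply, protected_area):
    """Validate the value echo and signature, then write protected memory."""
    if reply["value"] != msg["value"]:
        return False  # stale or replayed reply
    expected = hmac.new(SERVER_KEY, reply["data"] + reply["value"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, reply["signature"]):
        return False  # tampered data or forged signature
    protected_area["os"] = reply["data"]  # write only after validation passes
    return True
```

Binding the signature to the device-chosen value is what prevents an attacker from replaying an old (signed) update image.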

MEMORY ACCESS GATE (18136250)

Inventor Giuseppe Cariello

Brief explanation

The abstract describes a memory access gate system that includes a memory device with a controller, memory dice, and a pad for receiving a control signal. The memory device can switch between an externally provided control signal and an internally generated control signal. The controller determines the operating mode of the memory device and selects the appropriate control signal. The first mode may be a diagnostic mode, and the controller includes a secure register that can be controlled by an authenticated host device.

Abstract

Methods, systems, and devices for a memory access gate are described. A memory device may include a controller, memory dice, and a pad for receiving an externally provided control signal, such as a chip enable signal. The memory device may include a switching component for selecting the externally provided control signal or an internally generated control signal. The controller may provide the selected control signal to a memory die. The memory device may determine whether it is operating in a first mode or a second mode, and select the externally provided control signal or the internally generated control signal based on the determination. The first mode may be a diagnostic mode in some cases. The controller may include a secure register whose value may impact or control the switching. An authenticated host device may direct the controller to write the value to the secure register.

ASSURING INTEGRITY AND SECURE ERASURE OF CRITICAL SECURITY PARAMETERS (18208585)

Inventor Walter Andrew Hubis

Brief explanation

The abstract describes a processing device that manages critical security parameters (CSPs) for a memory device. It keeps track of two CSP files, one for each set of CSPs. The device sets flags to indicate whether each CSP file exists and whether it is valid. Based on the evaluation of these flags, the device selects one of the CSP files as the active one.

Abstract

A processing device sets a first flag that indicates whether a first critical security parameter (CSP) file exists. The first CSP file includes a first set of CSPs for a memory device. The processing device sets a second flag that indicates whether the first CSP file is valid. The processing device sets a third flag that indicates whether a second CSP file exists. The second CSP file includes a second set of CSPs for the memory device. The processing device sets a fourth flag that indicates whether the second critical security parameter file is valid. The processing device selects one of the first or second CSP file as an active CSP file based on an evaluation of the first, second, third, and fourth flags.
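The four-flag selection can be written as a short decision function. The tie-break (preferring the first file when both exist and are valid) is an assumption; the abstract only says the choice is based on evaluating the flags.

```python
def select_active_csp(f1_exists, f1_valid, f2_exists, f2_valid):
    """Pick the active CSP file from the four flags, or None if neither
    file both exists and is valid."""
    if f1_exists and f1_valid:
        return "csp_file_1"
    if f2_exists and f2_valid:
        return "csp_file_2"
    return None
```

Keeping two files and falling back to whichever remains valid is what preserves the CSPs across an interrupted update or a corrupted write.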

Secure Artificial Neural Network Models in Outsourcing Deep Learning Computation (17715835)

Inventor Andre Xian Ming Chang

Brief explanation

The abstract discusses a method for protecting access to artificial neural network (ANN) models when outsourcing deep learning computations. This is achieved by dividing the ANN model into randomized parts, some of which are offset or encrypted. These modified parts are then shuffled and outsourced to external entities. Similarly, the data samples used as inputs to the ANN models are split into parts to protect them. The final result of a data sample applied to an ANN model is obtained by summing the responses of the model parts with the corresponding sample parts applied as inputs.

Abstract

Protection of access to artificial neural network (ANN) models in outsourcing deep learning computations via shuffling parts. For example, an ANN model can be configured as the sum of a plurality of randomized model parts. Some of the randomized parts can have an offset operation applied and/or be encrypted to generate modified parts for outsourcing. Such model parts from different ANN models can be shuffled and outsourced to one or more external entities to obtain the responses of the model parts to inputs. Data samples as inputs to the ANN models can also be split into sample parts as inputs to model parts to protect the data samples. The result of a data sample as an input applied to an ANN model can be obtained from a sum of responses of model parts with the sample parts applied as inputs.

Shuffled Secure Multiparty Deep Learning (17715768)

Inventor Andre Xian Ming Chang

Brief explanation

The abstract describes a method for protecting access to data samples used in outsourcing deep learning computations. It suggests dividing each data sample into randomized parts and shuffling these parts from different samples. External entities can then apply deep learning computations to these shuffled parts to generate results. The order of applying the summation and deep learning computation can be changed. Finally, the results can be shuffled back to their respective data samples and summed up to obtain the final result of applying deep learning to each data sample.

Abstract

Protection of access to data samples in outsourcing deep learning computations via shuffling parts. For example, each data sample can be configured as the sum of a plurality of randomized parts. Parts from different data samples are shuffled to mix parts from different samples. One or more external entities can be provided with shuffled and randomized parts to generate results of applying a deep learning computation to the parts. The deep learning computation is configured to allow change of the order between applying the summation and applying the deep learning computation. Thus, results of the external entities applying the deep learning computation to their received parts can be shuffled back for the respective data samples for summation. The summation provides the result of applying the deep learning computation to a respective data sample.
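The reason the parts can be processed independently is that, for a linear computation f, the sum of f over random additive parts equals f of the original sample. The dot product below is an illustrative stand-in for the outsourced computation; real networks need the order-exchange property the abstract describes, which a plain dot product has by linearity.

```python
import random

def split(sample, n_parts):
    """Write each element as a sum of n random parts; any n-1 parts taken
    alone are statistically uninformative about the sample."""
    parts = [[random.randint(-100, 100) for _ in sample]
             for _ in range(n_parts - 1)]
    last = [x - sum(p[i] for p in parts) for i, x in enumerate(sample)]
    return parts + [last]

def f(part, weights):
    # The (linear) computation an external entity runs on one part.
    return sum(x * w for x, w in zip(part, weights))
```

Each external entity sees only noise-like parts, yet summing their answers reconstructs the true result on the device side.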

Secure Multiparty Deep Learning via Shuffling and Offsetting (17715798)

Inventor Andre Xian Ming Chang

Brief explanation

The abstract describes a method for protecting access to data samples when outsourcing deep learning computations. The data samples are divided into randomized parts, and some of these parts are modified using an offset operation. The modified parts from different data samples are then shuffled and sent to external entities for deep learning computations. The order of applying the summation and deep learning computation can be changed. The results from the external entities are shuffled back, and the reverse offset and summation are applied to obtain the final result of the deep learning computation on the data sample.

Abstract

Protection of access to data samples in outsourcing deep learning computations via shuffling parts. For example, each data sample can be configured as the sum of a plurality of randomized parts. At least some of the randomized parts can have an offset operation applied to generate modified parts for outsourcing. Such parts from different data samples are shuffled and outsourced to one or more external entities to apply a deep learning computation. The deep learning computation is configured to allow change of the order between applying the summation and applying the deep learning computation. Thus, results of the external entities applying the deep learning computation to their received parts can be shuffled back for a data sample to apply reverse offset and summation. The summation provides the result of applying the deep learning computation to the data sample.

NON-DESTRUCTIVE PATTERN IDENTIFICATION AT A MEMORY DEVICE (17716580)

Inventor Yuan He

Brief explanation

This abstract describes methods, systems, and devices for identifying patterns in a memory device without causing any damage to the data stored in the device. The memory device can compare two data patterns and determine if they match or not. It does this by accessing memory cells and capturing the second data pattern using a sense amplifier. The memory device then isolates the memory cells to protect the data from being destroyed. It writes the first data pattern to the sense amplifier and compares it with the second data pattern. Finally, the memory device outputs a signal indicating whether the data patterns match or not.

Abstract

Methods, systems, and devices for non-destructive pattern identification at a memory device are described. A memory device may perform pattern identification within the memory device and output a flag indicating whether a first data pattern matches with a second data pattern. The memory device may access one or more memory cells, via a word line, and latch the second data pattern of the memory cells to a sense amplifier. The memory device may deactivate the word line, which may result in isolating the memory cells from potential destruction of data. The memory device may write a first data pattern to the sense amplifier and compare the first data pattern and second data pattern at the sense amplifier. The memory device may output a signal indicating whether the data patterns match.
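The latch-isolate-overwrite-compare sequence can be modeled behaviorally. This is a software analogy of circuit behavior; the class and method names are illustrative, not from the patent.

```python
class SenseAmpMatcher:
    """Behavioral model: the array contents are never modified by a match."""

    def __init__(self, cells):
        self.cells = list(cells)      # array contents (left untouched)
        self.latch = None             # sense amplifier latch
        self.word_line_active = False

    def read_and_isolate(self):
        """Latch the stored (second) pattern, then deactivate the word line."""
        self.word_line_active = True
        self.latch = list(self.cells)   # capture the stored data pattern
        self.word_line_active = False   # isolate cells from later amp writes

    def match(self, pattern):
        """Write the search (first) pattern to the amp and report the flag."""
        assert not self.word_line_active  # writes must not reach the cells
        stored = self.latch
        self.latch = list(pattern)        # overwrite the amp, not the array
        return self.latch == stored       # signal: patterns match or not
```

Because the word line is off before the amplifier is overwritten, the comparison never disturbs the array, which is what makes the identification non-destructive.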

APPARATUS AND METHODS FOR THERMAL MANAGEMENT IN A MEMORY (17704154)

Inventor Jeremy Binfet

Brief explanation

The abstract describes a memory system that consists of memory cells and a controller. The controller is responsible for managing access to the memory cells. It can initiate operations on the memory cells, but it may also indicate that it is not available to start the next operation. In such cases, it appends a delay interval to the access time of the current operation. Once the delay interval is completed, the controller indicates that it is available to start the next operation. The duration of the delay interval is determined based on the temperature of the system.

Abstract

Memories might include an array of memory cells and a controller for access of the array of memory cells. The controller might be configured to cause the memory to initiate an array operation on the array of memory cells, indicate an unavailability to initiate a next array operation, append a delay interval to an array access time of the array operation, and indicate an availability to initiate a next array operation in response to a completion of the delay interval. The delay interval might have a duration determined in response to an indication of temperature.
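The appended delay can be sketched as a simple temperature-to-duration map. The thresholds and delay values below are invented for illustration; the abstract only says the duration is determined in response to an indication of temperature.

```python
def delay_interval_ns(temperature_c):
    """Longer appended delays at higher temperatures throttle access rate."""
    if temperature_c < 70:
        return 0          # normal operation: no added delay
    if temperature_c < 85:
        return 500        # mild throttling
    return 2000           # heavy throttling near the thermal limit

def busy_time_ns(array_access_ns, temperature_c):
    # The controller reports "unavailable" for the array access time plus
    # the delay interval, then signals availability for the next operation.
    return array_access_ns + delay_interval_ns(temperature_c)
```

Stretching the busy window, rather than refusing operations outright, lets the memory cool down while remaining functionally transparent to the host.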

FASTER MULTI-CELL READ OPERATION USING REVERSE READ CALIBRATIONS (18117268)

Inventor Go Shikata

Brief explanation

The abstract describes a memory device that has a memory array with multiple memory cells connected to wordlines and bitlines. It also includes control logic that performs various operations. One operation involves determining a metadata value that represents the first read level voltage of the highest threshold voltage distribution of a subset of memory cells. This metadata value can be a count of failed bytes or failed bits. Based on this metadata value, the control logic adjusts the second read level voltage for the second-highest threshold voltage distribution of the same subset of memory cells. Finally, the control logic uses the adjusted second read level voltage to perform an initial calibrated read of the subset of memory cells by applying it to a wordline.

Abstract

A memory device having a memory array with a plurality of memory cells electrically coupled to a plurality of wordlines and a plurality of bitlines and control logic coupled with the memory array. The control logic performs operations including: determining a metadata value characterizing a first read level voltage of a highest threshold voltage distribution of a subset of the plurality of memory cells, wherein the metadata value comprises at least one of a failed byte count or a failed bit count; adjusting, based on the metadata value, a second read level voltage for a second-highest threshold voltage distribution of the subset of the plurality of memory cells; and causing, to perform an initial calibrated read of the subset of the plurality of memory cells, the adjusted second read level voltage to be applied to a wordline of the plurality of wordlines to read the second-highest threshold voltage distribution.

TEST CIRCUIT IN SCRIBE REGION FOR MEMORY FAILURE ANALYSIS (17719327)

Inventor ATSUKO OTSUKA

Brief explanation

The abstract describes apparatuses and methods involving a test circuit located in a scribe region between two semiconductor chips. The apparatus includes two adjacent semiconductor chips with a scribe region between them; the scribe region contains test address pads and an address decoder circuit. The test address pads receive address signals, and the address decoder circuit generates first signals in response to these address signals.

Abstract

Apparatuses and methods including a test circuit in a scribe region between chips are described. An example apparatus includes: a first semiconductor chip and a second semiconductor chip, adjacent to one another; a scribe region between the first and second semiconductor chips; test address pads in the scribe region; and an address decoder circuit in the scribe region. The test address pads receive address signals. The address decoder provides first signals responsive to the address signals from the test address pads.

MEMORY DEVICE INCLUDING SELF-ALIGNED CONDUCTIVE CONTACTS (18200852)

Inventor Kar Wui Thong

Brief explanation

This abstract describes various apparatuses and methods of forming them. One apparatus consists of alternating layers of conductive and dielectric materials, memory cell strings with pillars extending through these layers, and a dielectric structure that separates the conductive and dielectric materials into two portions. The apparatus also includes first and second conductive structures coupled to the memory cell strings, as well as a conductive line that contacts the dielectric structure and both conductive structures.

Abstract

Some embodiments include apparatuses and methods of forming the apparatuses. One of the apparatuses includes levels of conductive materials interleaved with levels of dielectric materials; memory cell strings including respective pillars extending through the levels of conductive materials and the levels of dielectric materials; a dielectric structure formed in a slit, the slit extending through the levels of conductive materials and the levels of dielectric materials, the dielectric structure separating the levels of conductive materials and the levels of dielectric materials into a first portion and a second portion; first conductive structures located over and coupled to respective pillars of the first memory cell strings; second conductive structures located over and coupled to respective pillars of the second memory cell strings; and a conductive line contacting the dielectric structure, a conductive structure of the first conductive structures, and a conductive structure of the second conductive structures.

SEMICONDUCTOR DEVICE HAVING L-SHAPED CONDUCTIVE PATTERN (17714797)

Inventor Harunobu Kondo

Brief explanation

The abstract describes an apparatus that consists of a semiconductor substrate with a main surface that extends in two different directions. There is also a conductive pattern on the main surface of the substrate, which includes three sections. The first section extends in one direction, the second section extends in another direction, and the third section connects the first and second sections. The third section of the conductive pattern has a slit that extends in a different direction from the first two sections.

Abstract

Disclosed herein is an apparatus that includes a semiconductor substrate having a main surface extending in a first direction and a second direction different from the first direction and a conductive pattern formed over the main surface of the semiconductor substrate. The conductive pattern includes a first section extending in the first direction, a second section extending in the second direction, and a third section connected between the first and second sections. The third section of the conductive pattern has a first slit extending in a third direction different from the first and second directions.

TECHNIQUES FOR FORMING A DEVICE WITH SCRIBE ASYMMETRY (17715481)

Inventor Anna Maria Conti

Brief explanation

This abstract describes methods, systems, and devices for creating a device with scribe asymmetry. Scribes, which are the spaces between circuits on a wafer, can be made with different widths to improve the efficiency of the fabrication process. One subset of scribes may have a wider width and contain structures that aid in testing and integration of the device. Another subset of scribes may have a narrower width and not contain these structures.

Abstract

Methods, systems, and devices for techniques for forming a device with scribe asymmetry are described. Circuits (e.g., arrays of memory cells) may be printed on a wafer and separated by scribes of various widths to increase an array efficiency of a fabrication process. For example, a scribe that extends in a first direction may have a width in a second direction. A first subset of scribes may have a first width, where one or more structures may be placed in the first subset of scribes to facilitate die testing and integration. A second subset of scribes may have a second width. In some examples, the structures may not be placed in the second subset of scribes and, accordingly, the second width may be less than the first width.
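The array-efficiency benefit of scribe asymmetry can be made concrete with a small geometric calculation: narrowing the scribes that carry no test structures shortens each row of dies, so more dies fit per wafer. All dimensions below are hypothetical.

```python
def row_length_um(die_width_um, n_dies, wide_scribe_um, narrow_scribe_um, n_wide):
    """Length of one row of dies where n_wide of the (n_dies - 1)
    scribes are the wider variant (carrying test structures) and the
    rest are the narrower variant."""
    n_scribes = n_dies - 1
    return (n_dies * die_width_um
            + n_wide * wide_scribe_um
            + (n_scribes - n_wide) * narrow_scribe_um)
```

With ten 5000 um dies, three 100 um wide scribes, and six 60 um narrow scribes, the row is 240 um shorter than if all nine scribes were wide.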

MEMORY DEVICE INCLUDING SUPPORT STRUCTURES (18209231)

Inventor Andrew Zhe Wei Ong

Brief explanation

The abstract describes a memory structure consisting of multiple tiers of memory cells and control gates stacked over a substrate. The control gates are arranged in a staircase structure, with conductive contacts touching them at specific locations. A dielectric structure lines the sidewalls of the control gates, and support structures sit next to the conductive contacts. These support structures extend vertically from the substrate and are located at a specific distance from the edge of the dielectric structure; the ratio of a support structure's width to this distance falls within the range of 1.6 to 2.0.

Abstract

Some embodiments include apparatuses and methods of forming the apparatuses. One of the apparatuses includes tiers of respective memory cells and control gates, the tier located one over another over a substrate, the control gates including a control gate closest to the substrate, the control gates including respective portions forming a staircase structure; conductive contacts contacting the control gates at a location of the staircase structure, the conductive contacts including a conductive contact contacting the control gate; a dielectric structure located on sidewalls of the control gates; and support structures adjacent the conductive contacts and having lengths extending vertically from the substrate, the support structures including a support structure closest to the conductive contact, the support structure located at a distance from an edge of the dielectric structure, wherein a ratio of a width of the support structure over the distance is ranging from 1.6 to 2.0.
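The claimed geometric constraint is a simple ratio check, sketched below with hypothetical dimensions. Units are arbitrary since only the ratio of width to distance matters.

```python
def support_ratio_in_range(width_nm, distance_nm, lo=1.6, hi=2.0):
    """Check the claimed constraint: support-structure width divided by
    its distance from the dielectric structure's edge is in [1.6, 2.0]."""
    ratio = width_nm / distance_nm
    return lo <= ratio <= hi
```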

Ferroelectric Transistors and Assemblies Comprising Ferroelectric Transistors (18207905)

Inventor Kamal M. Karda

Brief explanation

The abstract describes a type of transistor called a ferroelectric transistor. The transistor has two electrodes separated by an active region. A transistor gate runs along a portion of the active region and controls the flow of current. The active region contains a source/drain region next to each electrode and a body region between them; the body region includes a channel region next to the transistor gate. A barrier between the second electrode and the channel region allows electrons to pass through but not holes. Finally, ferroelectric material sits between the transistor gate and the channel region.

Abstract

Some embodiments include a ferroelectric transistor having a first electrode and a second electrode. The second electrode is offset from the first electrode by an active region. A transistor gate is along a portion of the active region. The active region includes a first source/drain region adjacent the first electrode, a second source/drain region adjacent the second electrode, and a body region between the first and second source/drain regions. The body region includes a gated channel region adjacent the transistor gate. The active region includes at least one barrier between the second electrode and the gated channel region which is permeable to electrons but not to holes. Ferroelectric material is between the transistor gate and the gated channel region.

TRANSIENT LOAD MANAGEMENT (17715552)

Inventor Leon Zlotnik

Brief explanation

The abstract describes a system that includes sensing circuitry and clock management circuitry. The sensing circuitry detects voltage, current, and/or activity in a system-on-chip (SoC) and determines if it meets a certain threshold. The clock management circuitry generates clocking signals for the SoC and adjusts the frequency of these signals based on the detected voltage, current, and/or activity meeting the threshold. This adjustment helps to reduce the power consumed by the SoC.

Abstract

Sensing circuitry and clock management circuitry provide transient load management. The sensing circuitry detects a voltage, current, and/or activity associated with a system-on-chip (SoC) and determines whether the detected voltage, current, and/or activity meets a threshold. The clock management circuitry generates clocking signals for the SoC and alters a frequency of the generated clocking signals in response to the detected voltage, current, and/or activity meeting the threshold to alter an amount of power consumed by the SoC.
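The sense-and-throttle loop described above reduces to a threshold comparison feeding a frequency selection. The sketch below adds the textbook CMOS dynamic-power approximation P = C * V^2 * f, which is standard background (not from the abstract) but shows why lowering the clock frequency lowers SoC power. All values are hypothetical.

```python
def select_clock_freq(sensed_value, threshold, base_freq_hz, reduced_freq_hz):
    """Alter the clocking frequency when the sensed voltage, current, or
    activity meets the threshold; otherwise run at the base frequency."""
    return reduced_freq_hz if sensed_value >= threshold else base_freq_hz

def dynamic_power_watts(c_eff_farads, supply_volts, freq_hz):
    """Standard CMOS dynamic-power approximation: P = C * V^2 * f."""
    return c_eff_farads * supply_volts ** 2 * freq_hz
```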

SOLID STATE LIGHTING SYSTEMS AND ASSOCIATED METHODS OF OPERATION AND MANUFACTURE (18335885)

Inventor Anil Tipirneni

Brief explanation

The abstract describes a lighting system that uses a solid state lighting device to generate mixed light. The device includes multiple light sources and a sensor that detects light from one of them. A controller adjusts two or more of the light sources based on the sensor's output, communicating with the sensor to provide closed-loop control.

Abstract

A lighting system includes a solid state lighting device capable of generating mixed light and a controller. The solid state lighting device includes light sources for producing mixed light and a sensor configured to detect light from one of the light sources. The controller controls two or more of the light sources based on output from the sensor. The controller can communicate with the sensor to provide closed-loop control.
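One iteration of such a closed loop can be sketched as a proportional controller that nudges a light source's drive duty cycle toward a target output. Proportional control, the gain, and the duty-cycle model are illustrative assumptions; the abstract only states that the controller uses the sensor output for closed-loop control.

```python
def adjust_duty_cycle(sensed_lumens, target_lumens, duty_cycle, gain=0.001):
    """Proportional step: move the drive duty cycle toward the target
    light output and clamp the result to the valid range [0, 1]."""
    error = target_lumens - sensed_lumens
    return min(1.0, max(0.0, duty_cycle + gain * error))
```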

METAL GATE MEMORY DEVICE AND METHOD (17717406)

Inventor Hyucksoo Yang

Brief explanation

The abstract describes memory devices and systems that include an array of memory cells and a transistor located on the array's periphery. Data lines coupled to the memory cells extend over the transistor's metal gate; the data lines are formed from a different metal than the gate and directly interface with it.

Abstract

Apparatus and methods are disclosed, including memory devices and systems. Example memory devices, systems and methods include an array of memory cells and a transistor located on a periphery of the array of memory cells. A number of data lines are shown coupled to memory cells in the array, wherein the number of data lines extend over a first metal gate of a transistor in the periphery of the array, where the number of data lines are formed from a second metal, and form a direct interface with the first metal gate.

Integrated Assemblies and Methods of Forming Integrated Assemblies (18207499)

Inventor Shuangqiang Luo

Brief explanation

The abstract describes an integrated assembly that includes different memory regions and an intermediate region. The assembly has a stack made up of conductive and insulative levels. There are channel-material-pillars arranged within the memory regions, memory-block-regions that extend across the memory regions and intermediate region, and staircase regions within the intermediate region. The staircase regions overlap two of the memory-block-regions. There are also first and second panel regions that provide separation between the memory-block-regions. The second panel regions are different in size or composition compared to the first panel regions. The abstract also mentions methods of forming these integrated assemblies.

Abstract

Some embodiments include an integrated assembly having a first memory region, a second memory region, and an intermediate region between the memory regions. A stack extends across the memory regions and the intermediate region. The stack includes alternating conductive levels and insulative levels. Channel-material-pillars are arranged within the memory regions. Memory-block-regions extend longitudinally across the memory regions and the intermediate region. Staircase regions are within the intermediate region. Each of the staircase regions laterally overlaps two of the memory-block-regions. First panel regions extend longitudinally across at least portions of the staircase regions. Second panel regions extend longitudinally and provide lateral separation between adjacent memory-block-regions. The second panel regions are of laterally different dimensions than the first panel regions and/or are compositionally different than the first panel regions. Some embodiments include methods of forming integrated assemblies.

TRENCH AND PIER ARCHITECTURES FOR THREE-DIMENSIONAL MEMORY ARRAYS (17714771)

Inventor Fabio Pellizzer

Brief explanation

This abstract describes methods, systems, and devices for creating trench and pier architectures in three-dimensional memory arrays. These architectures involve the formation of pier structures in contact with alternating layers of materials deposited on a substrate. These pier structures provide support for subsequent processing steps. The memory die includes alternating layers of a first and second material, which can be shaped into different cross-sectional patterns. The pier structures are formed in contact with these patterns, and when one of the materials is removed to create voids, the pier structures help maintain the shape of the remaining material. These pier structures can be formed within or along trenches or other features aligned in the direction of the memory array, allowing for self-alignment in subsequent operations.

Abstract

Methods, systems, and devices for trench and pier architectures for three-dimensional memory arrays are described. A semiconductor device (e.g., a memory die) may include pier structures formed in contact with features formed from alternating layers of materials deposited over a substrate, which may provide support for subsequent processing. For example, a memory die may include alternating layers of a first material and a second material, which may be formed into various cross-sectional patterns. Pier structures may be formed in contact with the cross sectional patterns such that, when either the first material or the second material is removed to form voids, the pier structures may provide mechanical support of the cross-sectional pattern of the remaining material. In some examples, such pier structures may be formed within or along trenches or other features aligned along a direction of a memory array, which may provide a degree of self-alignment for subsequent operations.