Western Digital Technologies, Inc. patent applications published on December 28th, 2023

From WikiPatents

Patent applications for Western Digital Technologies, Inc. on December 28th, 2023

INTEGRATED CIRCUIT TEST SOCKET WITH INTEGRATED DEVICE PICKING MECHANISM (17850858)

Main Inventor

Yalaj Goyal


Brief explanation

The abstract describes a patent application for an integrated circuit (IC) device test socket with an integrated IC picking mechanism. The test socket consists of a base member and a cover member, where the base member has a recess to hold the IC device for testing. The cover member can be attached to the base member to secure the IC device in place. The cover member also includes an IC picking mechanism that uses suction to hold the IC device.
  • The patent application is for an IC test socket with an integrated IC picking mechanism.
  • The test socket consists of a base member and a cover member.
  • The base member has a recess to hold the IC device for testing.
  • The cover member can be attached to the base member to secure the IC device.
  • The cover member includes an IC picking mechanism that uses suction to hold the IC device.

Potential Applications

  • Testing and quality control of integrated circuit devices.
  • Manufacturing and production of integrated circuit devices.

Problems Solved

  • Simplifies the process of removing IC devices from test sockets after testing.
  • Reduces the risk of damage to IC devices during removal.

Benefits

  • Efficient and reliable removal of IC devices from test sockets.
  • Minimizes the potential for damage to IC devices.
  • Streamlines the testing and production process of integrated circuit devices.

Abstract

An integrated circuit (IC) device test socket has an integrally formed IC picking mechanism for removing an IC device from the test socket after testing. The test socket has a base member and a cover member. The base member includes a recess that is configured to receive an IC device for testing. The cover member is configured to removably engage the base member to secure the IC device between the cover member and the base member. The cover member includes an IC picking mechanism configured to use suction to retain the IC device to the cover member.

SECURITY INDICATOR ON A DATA STORAGE DEVICE (17849703)

Main Inventor

Ramanathan MUTHIAH


Brief explanation

The abstract describes a data storage device that includes a non-volatile storage medium, a data port, a data security indicator, and a controller. The controller selectively controls access to user data stored in the device based on security configuration data. It also responds to data access and security control operations by generating an indicator control signal to indicate security parameters.
  • The data storage device has a non-volatile storage medium for storing user data.
  • It has a data port for transmitting data between a host computer system and the device.
  • The device includes a data security indicator to indicate security parameters.
  • A controller is responsible for controlling access to user data based on security configuration data.
  • The controller responds to data access and security control operations by generating an indicator control signal.
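
The indicator logic can be sketched as a simple mapping from operation type to a control signal. This is an illustrative Python sketch only; the operation names and indicator states below are assumptions, not taken from the application.

```python
from enum import Enum, auto

class Indicator(Enum):
    """Hypothetical indicator states the control signal could select."""
    LOCKED = auto()            # access denied under current security config
    UNLOCKED = auto()          # access permitted
    SECURITY_UPDATE = auto()   # security configuration being changed

def indicator_signal(operation: str, access_allowed: bool) -> Indicator:
    """Map an operation to the indicator control signal the controller
    would emit (operation names are illustrative)."""
    if operation == "security_control":
        return Indicator.SECURITY_UPDATE
    return Indicator.UNLOCKED if access_allowed else Indicator.LOCKED
```
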

Potential Applications

  • Secure data storage for sensitive information.
  • Protection of intellectual property and confidential data.
  • Secure data transfer between different computer systems.
  • Controlled access to data in a shared storage device.

Problems Solved

  • Unauthorized access to user data is prevented.
  • Security configuration can be easily updated and controlled.
  • Indicator provides visual indication of security parameters.
  • Data access and security control operations are efficiently managed.

Benefits

  • Enhanced data security and protection.
  • Flexibility in controlling access to user data.
  • Easy management of security configuration.
  • Visual indication of security parameters for user awareness.

Abstract

A data storage device comprising a non-volatile storage medium configured to store user data, a data port configured to transmit data between a host computer system and the data storage device, a data security indicator, and a controller. The controller is configured to selectively control access of the host computer system to the user data based on security configuration data of the data storage device. The controller is further configured to respond to the occurrence of one or more operations, the operations being any of: (i) a data access operation requested or performed, by the host computer system, on the data storage device to access the storage medium via the data port; and (ii) a security control operation requested or performed, by an external device, on the data storage device to store, retrieve or update the security configuration data of the data storage device. The response of the controller includes generating an indicator control signal to cause the data security indicator to indicate one or more security parameters associated with the one or more operations.

Detection and Isolation of Faulty Holdup Capacitors Using Hardware Circuit in Data Storage Devices (17852103)

Main Inventor

Nagi Reddy CHODEM


Brief explanation

The patent application describes systems and methods for detecting and isolating faulty hold-up capacitors in a data storage device, and performing corrective actions. Here is a simplified explanation of the abstract:
  • A hardware circuit is connected to a micro-controller and non-volatile memory dies.
  • The hardware circuit provides backup power for the memory and micro-controller.
  • The method detects if a hold-up capacitor in the hardware circuit is faulty.
  • If a faulty capacitor is detected, it is isolated from the circuit.
  • The micro-controller obtains the status of an interface connected to the hardware circuit.
  • Based on the interface status, the micro-controller determines the status of the hardware circuit.
  • If one or more faulty capacitors are detected, the micro-controller performs corrective actions for the data storage device.
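
The micro-controller's side of this flow (reading per-capacitor fault status over the interface and choosing a corrective action) might look like the sketch below. The flag format, status strings, and the cache-shrinking action are all hypothetical; the application does not specify them.

```python
def assess_and_correct(capacitor_ok_flags: list[bool]):
    """Derive the hardware-circuit status from per-capacitor fault flags
    (as read over the interface) and pick a corrective action.
    Returns (status, action); names are illustrative."""
    faulty = [i for i, ok in enumerate(capacitor_ok_flags) if not ok]
    if not faulty:
        return "nominal", None
    # With fewer hold-up capacitors, backup energy is reduced: one
    # plausible corrective action is shrinking the write cache so it
    # can still be flushed on power loss.
    return "degraded", {"action": "reduce_write_cache", "faulty_caps": faulty}
```
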

Potential applications of this technology:

  • Data storage devices such as hard drives, solid-state drives, and memory cards.
  • Any electronic device that requires backup power and uses hold-up capacitors.

Problems solved by this technology:

  • Faulty hold-up capacitors can cause power issues and data loss in data storage devices.
  • Detecting and isolating faulty capacitors ensures reliable backup power and prevents data corruption.

Benefits of this technology:

  • Improved reliability and performance of data storage devices.
  • Prevents data loss and corruption due to faulty capacitors.
  • Allows for timely corrective actions to be taken to maintain the functionality of the device.

Abstract

Disclosed are systems and methods detecting and isolating faulty hold-up capacitors and performing corrective actions for a data storage device. A hardware circuit is coupled to a micro-controller and non-volatile memory dies. The method includes, at the hardware circuit: providing a back-up power for the non-volatile memory dies and the micro-controller; and detecting whether a hold-up capacitor of the hardware circuit is faulty and isolating the hold-up capacitor in accordance with a detection that the hold-up capacitor is faulty. The method also includes, at the micro-controller: obtaining a status of an interface coupled to the hardware circuit; determining a status of the hardware circuit based on the status of the interface; and performing a corrective action for the data storage device in accordance with a determination that the status of hardware circuit corresponds to one or more faulty hold-up capacitors.

DATA STORAGE DEVICE WITH NOISE INJECTION (17847068)

Main Inventor

Daniel Joseph Linnen


Brief explanation

The patent application describes procedures for injecting noise into a non-volatile memory (NVM) array. This is done to induce bit flips and information degradation in the data stored in the memory.
  • Noise is injected by adjusting read voltages, causing bit flips, while using feedback to control the amount of information degradation.
  • Random data is combined with itself iteratively to achieve a target percentage of random 1s or 0s. This random data is then combined with data read from the NVM array.
  • Dead pixels in charge coupled devices (CCDs) are emulated by randomly zeroing out pixels.
  • Timing, voltage, and current values used in data transfer are adjusted outside their specified margins to induce bit flips and inject noise into the data.
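
The second bullet, iteratively combining random data to hit a target density of 1s and then combining the result with stored data, can be illustrated in Python. AND-ing a fresh uniform random word halves the expected density of 1s, while OR-ing raises it; the tolerance, iteration cap, and word size below are arbitrary choices, not taken from the application.

```python
import random

def random_mask(n_bits: int, target_density: float,
                tol: float = 0.05, max_iter: int = 16) -> int:
    """Build an n-bit random mask whose expected fraction of 1s
    approximates target_density by iteratively AND-ing (halves the
    density) or OR-ing (raises it) fresh uniform random words.
    Repeated AND/OR only reaches certain densities, so max_iter
    bounds the loop."""
    mask = random.getrandbits(n_bits)
    density = 0.5  # expected fraction of 1s in a uniform random word
    for _ in range(max_iter):
        if abs(density - target_density) <= tol:
            break
        fresh = random.getrandbits(n_bits)
        if density > target_density:
            mask &= fresh                     # density -> density / 2
            density *= 0.5
        else:
            mask |= fresh                     # density -> density + (1 - density)/2
            density += (1.0 - density) * 0.5
    return mask

def inject_noise(data: int, n_bits: int, flip_rate: float) -> int:
    """XOR the data with a sparse random mask so that roughly
    flip_rate of the bits are flipped."""
    return data ^ random_mask(n_bits, flip_rate)
```
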

Potential applications of this technology:

  • Dataset augmentation: The noise-injected data can be used to expand and diversify training datasets for machine learning models.
  • Testing of deep neural networks (DNNs): The noise-injected data can be used to evaluate the robustness and performance of DNNs.

Problems solved by this technology:

  • Lack of diverse training data: By injecting noise into the data, the technology helps address the problem of limited and homogeneous training datasets.
  • Testing limitations: The technology provides a method to test the performance and resilience of DNNs by injecting controlled noise into the data.

Benefits of this technology:

  • Improved machine learning models: Dataset augmentation with noise-injected data can enhance the accuracy and generalization capabilities of machine learning models.
  • Robustness testing: The ability to inject controlled noise into data allows for comprehensive testing of DNNs, ensuring their reliability and performance in real-world scenarios.

Abstract

Noise injection procedures implemented on the die of a non-volatile memory (NVM) array are disclosed. In one example, noise is injected into data by adjusting read voltages to induce bit flips while using feedback to achieve a target amount of information degradation. In another example, random data is iteratively combined with itself to achieve a target percentage of random 1s or 0s, then the random data is combined with data read from the NVM array. In other examples, pixels are randomly zeroed out to emulate dead charge coupled device (CCD) pixels. In still other examples, the timing, voltage, and/or current values used within circuits while transferring data to/from latches or bitlines are adjusted outside their specified margins to induce bit flips to inject noise into the data. The noise-injected data may be used, for example, for dataset augmentation or for the testing of deep neural networks (DNNs).

STORAGE DEVICE POOL MANAGEMENT BASED ON STORAGE DEVICE LOGICAL TO PHYSICAL (L2P) TABLE INFORMATION (17850873)

Main Inventor

Amit Sharma


Brief explanation

The abstract describes a patent application for managing storage devices in a data storage system based on logical to physical (L2P) table information. The system includes data storage devices with non-volatile memory and a storage management device. The storage management device receives L2P table information from multiple data storage devices, receives host data from a host device, selects a target data storage device based on the L2P table information and the size of the host data, and sends the host data to the target data storage device.
  • The patent application focuses on managing storage devices in a data storage system.
  • It utilizes logical to physical (L2P) table information to make decisions.
  • The system includes data storage devices with non-volatile memory.
  • A storage management device is responsible for receiving L2P table information and host data.
  • Based on the L2P table information and the size of the host data, a target data storage device is selected.
  • The host data is then sent to the selected target data storage device.
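
The selection step can be sketched as follows. The reported fields (free L2P entries, entry size) and the "most free entries that fit" heuristic are assumptions; the application does not specify the selection policy.

```python
from dataclasses import dataclass

@dataclass
class DeviceL2PInfo:
    """L2P summary a device might report (hypothetical fields)."""
    device_id: str
    free_l2p_entries: int         # unmapped logical slots available
    entry_size_bytes: int = 4096  # logical block size per L2P entry

def select_target_device(devices: list[DeviceL2PInfo],
                         host_data_size: int) -> DeviceL2PInfo:
    """Among devices whose free L2P capacity can hold the host data,
    pick the one with the most free entries (a simple load-spreading
    heuristic)."""
    candidates = [d for d in devices
                  if d.free_l2p_entries * d.entry_size_bytes >= host_data_size]
    if not candidates:
        raise ValueError("no device can hold the host data")
    return max(candidates, key=lambda d: d.free_l2p_entries)
```
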

Potential Applications

  • This technology can be applied in various data storage systems, such as cloud storage, enterprise storage, and personal storage devices.
  • It can be used in systems that require efficient management and allocation of storage resources.
  • The technology can be beneficial in environments where large amounts of data need to be stored and accessed quickly.

Problems Solved

  • The technology solves the problem of managing storage devices in a data storage system efficiently.
  • It addresses the challenge of selecting the most suitable data storage device based on L2P table information and the size of the host data.
  • The system solves the problem of optimizing storage resource allocation to improve overall performance.

Benefits

  • The technology improves the management of storage devices, leading to better utilization of storage resources.
  • It enables faster and more efficient data storage and retrieval processes.
  • The system reduces the risk of data loss or corruption by selecting the most appropriate data storage device.
  • It enhances the overall performance and reliability of the data storage system.

Abstract

Methods, systems, and apparatuses for storage device pool management based on storage device logical to physical (L2P) table information are provided. One such data storage system includes data storage devices each including a non-volatile memory; and a storage management device configured to receive L2P table information from at least two of the data storage devices; receive host data from a host device to be stored in one or more of the data storage devices; select, based on the L2P table information from the plurality of data storage devices and the size of the host data, a target data storage device from the plurality of data storage devices; and send the host data to the target data storage device.

CONFIGURATION OF NEW STORAGE DEVICES IN A STORAGE DEVICE POOL (17851566)

Main Inventor

Amit Sharma


Brief explanation

The abstract describes aspects related to configuring new storage devices in a storage device pool. 
  • The storage management device receives optimization information from a source data storage device that was part of a data storage system.
  • The new data storage device is configured with the optimization learned by the source data storage device.
  • The data storage device receives optimization information from the storage management device, which includes optimizations learned by other source data storage devices.
  • The data storage device is then configured to include the optimizations learned by the source data storage devices.
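
The configuration step amounts to merging learned optimizations into the new device's settings. A minimal sketch, assuming optimizations are key-value settings and that later sources win on conflicts (the application specifies neither):

```python
def configure_new_device(source_optimizations: list[dict],
                         new_device_config: dict) -> dict:
    """Merge optimizations learned by source devices into the new
    device's configuration. Later sources override earlier ones on
    conflicting keys -- a simple, illustrative policy."""
    merged = dict(new_device_config)
    for opt in source_optimizations:
        merged.update(opt)
    return merged
```
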

Potential Applications

  • Data storage systems
  • Storage management devices
  • Source data storage devices

Problems Solved

  • Configuring new storage devices in a storage device pool
  • Optimizing the performance of data storage devices

Benefits

  • Improved performance of data storage devices
  • Efficient configuration of new storage devices in a storage device pool

Abstract

Aspects directed towards configuring new storage devices in a storage device pool are provided. In one aspect, a storage management device receives optimization information that includes at least one optimization learned by at least one source data storage device while part of a data storage system. A new data storage device for the data storage system is then configured with the at least one device optimization. In another aspect, a data storage device receives optimization information from a storage management device coupled to a plurality of pooled data storage devices, which includes the data storage device and at least one source data storage device. For this aspect, the optimization information includes at least one optimization learned by the at least one source data storage device while coupled to the storage management device. The data storage device is then configured to include the at least one device optimization.

Rate Levelling Among Peer Data Storage Devices (17846316)

Main Inventor

Ramanathan Muthiah


Brief explanation

The patent application describes rate levelling among peer data storage devices. A designated master storage device receives host commands, determines the workload states of its peer storage devices, divides the data units in those commands into data blocks for striping, allocates the blocks among the peers, and sends them over a peer communication channel.

  • Example storage systems, data storage devices, and methods for rate levelling among peer storage devices.
  • A master storage device receives host commands and determines the workload states of the peer storage devices.
  • The master storage device divides the data units in the host commands into data blocks for data striping.
  • The master storage device allocates the data blocks among the peer storage devices.
  • The master storage device sends the data blocks to the peer storage devices using a peer communication channel.
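
The allocation loop can be sketched as: split the data into fixed-size blocks, then hand each block to the currently least-loaded peer. The load metric, tie-breaking, and block size below are illustrative assumptions, not taken from the application.

```python
def stripe_and_allocate(data: bytes, peers: dict[str, float],
                        block_size: int = 4) -> dict[str, list[bytes]]:
    """Divide host data into blocks and allocate each block to the
    least-loaded peer, updating a simple workload estimate as we go.
    Peer IDs and the load metric are hypothetical."""
    allocation = {peer: [] for peer in peers}
    load = dict(peers)  # copy of the reported workload states
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        target = min(load, key=load.get)   # least-loaded peer
        allocation[target].append(block)
        load[target] += len(block)         # account for the new work
    return allocation
```
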

Potential Applications

  • Data storage systems that require rate levelling among multiple storage devices.
  • Distributed storage systems where data needs to be evenly distributed among peer storage devices.
  • Cloud storage systems that need to balance the workload among different storage devices.

Problems Solved

  • Uneven distribution of workload among peer storage devices.
  • Inefficient allocation of data blocks in storage systems.
  • Lack of communication and coordination among storage devices.

Benefits

  • Improved performance and efficiency of storage systems.
  • Balanced workload distribution among peer storage devices.
  • Enhanced data striping and allocation for optimized storage utilization.

Abstract

Example storage systems, data storage devices, and methods provide rate levelling among peer storage devices. A master storage device among peer storage devices receives host commands, determines the workload states of the peer storage devices, divides the data units in the host commands into data blocks for data striping, allocates the data blocks among the peer storage devices, and sends the data blocks to the peer storage devices using a peer communication channel.

DATA STORAGE DEVICE WITH DATA PADDING AND SAFE AND EFFICIENT ALIGNMENT OF DATA ZONES WITH DATA CONTAINERS (17850945)

Main Inventor

Scott Burton


Brief explanation

The patent application describes a data storage device that includes disks, an actuator mechanism, and processing devices. The processing devices are designed to detect a criterion for inserting padding on the recording medium near the data containers to be written. The containers are used to assign logic blocks and store data in an interleaved pattern across sectors based on a distributed sector encoding scheme. The criterion for inserting padding is detected by identifying a mismatch in size between a zone and the number of containers needed to write that zone. Mapping indicators are then inserted to indicate the presence of padding blocks near the containers.
  • The patent application is for a data storage device with improved data writing efficiency.
  • It introduces the concept of inserting padding blocks on the recording medium to optimize the allocation of data containers.
  • The device uses a distributed sector encoding scheme to store data in an interleaved pattern across sectors.
  • The processing devices detect a criterion for inserting padding by identifying a size mismatch between a zone and the number of containers needed to write that zone.
  • Mapping indicators are inserted to indicate the presence of padding blocks near the containers.
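
The mismatch criterion reduces to a rounding computation: if a zone does not fill an integer number of containers, the remainder is padded up to the next container boundary. A minimal sketch (the block-count units are illustrative):

```python
import math

def padding_blocks_needed(zone_blocks: int, container_blocks: int) -> int:
    """Detect the size-mismatch criterion: return how many padding
    blocks round the zone up to an integer number of containers
    (0 means the zone already aligns and no padding is inserted)."""
    containers = math.ceil(zone_blocks / container_blocks)
    return containers * container_blocks - zone_blocks
```
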

Potential Applications

  • This technology can be applied in various data storage devices such as hard disk drives and solid-state drives.
  • It can improve the efficiency and performance of data writing processes in these devices.
  • The optimized allocation of data containers can lead to faster data access and retrieval.

Problems Solved

  • The technology solves the problem of inefficient data writing in data storage devices.
  • It addresses the issue of mismatched sizes between zones and the number of containers needed to write them.
  • The introduction of padding blocks and mapping indicators helps optimize the allocation of data containers.

Benefits

  • The technology improves the overall efficiency and performance of data storage devices.
  • It allows for faster data access and retrieval due to optimized data writing processes.
  • The use of padding blocks and mapping indicators ensures efficient allocation of data containers.

Abstract

Various illustrative aspects are directed to a data storage device, comprising one or more disks; an actuator mechanism configured to position heads proximate to a recording medium of the disks; and one or more processing devices. The processing devices are configured to detect a criterion for inserting padding on the recording medium proximate to data containers to be written to the recording medium, the containers configured for assigning logic blocks to the containers, the logic blocks configured to store data to be written in an interleaved pattern across sectors based on a distributed sector encoding scheme, wherein detecting the criterion comprises detecting a mismatch in size between at least a portion of a zone and an integer number of containers in which to write the at least a portion of the zone; and insert mapping indicators to a mapping to indicate padding blocks proximate to the containers.

MEMORY PARTITIONED DATA STORAGE DEVICE (17849702)

Main Inventor

Nataniel PEISAKHOV


Brief explanation

The abstract describes a data storage device that includes a non-volatile storage medium with multiple partitions, including at least one secure partition. The device has a data path for communication between a host computer system and the storage medium. A partition controller is connected to a switch and can transition the device between a secure mode and a non-secure mode.
  • The data storage device has a non-volatile storage medium with multiple partitions, including a secure partition.
  • It has a data path that allows communication between a host computer system and the storage medium.
  • The device includes a partition controller that can transition the device between a secure mode and a non-secure mode.
  • In the secure mode, the secure partition is connected to the host via the data path.
  • In the non-secure mode, the secure partition is disconnected from the host via the data path.
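
The mode transition can be sketched as a toggle that filters which partitions the data path exposes. The partition names and the in-memory representation below are hypothetical.

```python
class PartitionController:
    """Connects or disconnects secure partitions from the data path
    in response to a switch actuation (illustrative sketch)."""

    def __init__(self, partitions: dict):
        # partitions: {partition_name: is_secure}
        self.partitions = partitions
        self.secure_mode = False  # start in non-secure mode

    def actuate_switch(self):
        """Toggle between secure and non-secure mode."""
        self.secure_mode = not self.secure_mode

    def visible_partitions(self) -> list:
        """Partitions reachable over the data path in the current mode:
        secure partitions are connected only in secure mode."""
        return [name for name, secure in self.partitions.items()
                if self.secure_mode or not secure]
```
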

Potential Applications

  • Secure data storage for sensitive information
  • Protection of confidential files or documents
  • Secure storage for financial or personal data

Problems Solved

  • Provides a secure partition for storing sensitive data
  • Allows for easy transition between secure and non-secure modes
  • Protects data from unauthorized access or tampering

Benefits

  • Enhanced security for stored data
  • Flexibility in managing secure and non-secure data
  • Easy control over access to sensitive information

Abstract

A data storage device comprising a non-volatile storage medium configured to store user data, where the storage medium is organized as one or more partitions, including at least one secure partition. The partitions are defined by a corresponding set of pre-specified physical memory blocks of the storage medium. The data storage device also includes a data path configured to provide data communication between a host computer system and the storage medium of the data storage device. A partition controller of the data storage device is coupled to a switch. In response to an actuation of the switch, the partition controller is configured to cause the data storage device to selectively transition between: a secure mode in which the set of physical memory blocks of each secure partition is connected to the host via the data path; and a non-secure mode in which the set of physical memory blocks of each secure partition is disconnected from the host via the data path.

STORAGE DEVICE POOL MANAGEMENT BASED ON FRAGMENTATION LEVELS (17851595)

Main Inventor

Amit Sharma


Brief explanation

The patent application describes aspects related to data storage management. Here is a simplified explanation of the abstract:
  • The data storage system receives information about the fragmentation level of data storage devices and data from a host device that needs to be stored.
  • Based on the fragmentation level information, a target data storage device is selected, and the host data is sent to that device.
  • The data storage device itself determines threshold conditions that trigger a defragmentation process.
  • A fragmentation level metric is calculated by the data storage device, indicating how close it is to initiating the defragmentation process.
  • This fragmentation level metric is then sent to a storage management device for further action.
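
A fragmentation level metric of this kind can be sketched as a normalized distance to the defragmentation threshold. The estimate below (1 minus the largest free extent over total free space) and the threshold normalization are assumptions; the application does not define the formula.

```python
def fragmentation_metric(free_extents: list[int],
                         defrag_threshold: float) -> float:
    """Return a 0..1 metric of how close the device is to triggering
    defragmentation; 1.0 means the threshold condition is met.
    Fragmentation is estimated as 1 - (largest free extent / total
    free space) -- an illustrative choice."""
    total_free = sum(free_extents)
    if total_free == 0:
        return 1.0  # no free space at all: defragmentation is due
    fragmentation = 1.0 - max(free_extents) / total_free
    return min(1.0, fragmentation / defrag_threshold)
```
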

Potential Applications:

  • This technology can be applied in various data storage systems, such as computer hard drives, solid-state drives (SSDs), and network-attached storage (NAS) devices.
  • It can be used in cloud storage systems to optimize data placement and improve overall system performance.

Problems Solved:

  • Fragmentation in data storage devices can lead to decreased performance and increased access times.
  • This technology helps in managing fragmentation by selecting the most suitable storage device and triggering defragmentation processes when necessary.

Benefits:

  • By selecting the target data storage device based on fragmentation level information, the system can distribute data more efficiently and improve overall performance.
  • The ability of the data storage device to determine threshold conditions for defragmentation helps in maintaining optimal storage performance.
  • The fragmentation level metric provides valuable information to the storage management device, enabling proactive maintenance and optimization of the storage system.

Abstract

Aspects directed towards data storage management are provided. In one aspect, a data storage system receives fragmentation level information from data storage devices, and host data from a host device to be stored in the data storage devices. Based on the received fragmentation level information, a target data storage device is selected from the data storage devices, and the host data is sent to the target data storage device. In another aspect, a data storage device determines threshold conditions that trigger a defragmentation process. For this aspect, a fragmentation level metric indicating a proximity of the data storage device to initiating the defragmentation process is calculated based on the threshold conditions and a current amount of data stored in a non-volatile memory (NVM). The fragmentation level metric is then sent to a storage management device.

Optimized Read-Modify-Writes During Relocation of Overlapping Logical Blocks (17847078)

Main Inventor

Duckhoi KOO


Brief explanation

The patent application describes systems and methods for performing read-modify-write operations during the relocation of overlapping logical blocks in a device memory. Here is a simplified explanation of the abstract:
  • The method starts by receiving a write command from a host interface.
  • The logical block address in the write command is translated to a physical address on the device memory.
  • The physical address corresponds to multiple indirection units.
  • If the physical address is not aligned, a read-modify-write operation is performed on one or more indirection units during the relocation process.
  • This is done when a relocation block has an overlapping indirection unit with the one being modified.
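
The trigger condition combines two checks: the write is unaligned to the indirection-unit size, and a relocation block overlaps one of the touched indirection units. A sketch, with illustrative IU indexing:

```python
def needs_rmw(lba: int, length: int, iu_size: int,
              relocation_ius: set) -> bool:
    """Return True when a read-modify-write is required: the write is
    unaligned to the indirection-unit (IU) size AND a relocation block
    overlaps one of the IUs the write touches. IU indices and sizes
    here are illustrative, not from the application."""
    unaligned = (lba % iu_size != 0) or (length % iu_size != 0)
    first_iu = lba // iu_size
    last_iu = (lba + length - 1) // iu_size
    touched = set(range(first_iu, last_iu + 1))
    return unaligned and bool(touched & relocation_ius)
```
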

Potential applications of this technology:

  • Data storage systems: This technology can be used in storage devices such as solid-state drives (SSDs) to efficiently handle write commands and manage the relocation of logical blocks.
  • Cloud computing: The technology can be applied in cloud storage systems to improve the performance and reliability of data storage and retrieval operations.

Problems solved by this technology:

  • Efficient relocation of overlapping logical blocks: By performing read-modify-write operations during relocation, this technology ensures that data is correctly written to the new location without losing any information.
  • Address translation: The method translates logical block addresses to physical addresses, allowing for efficient data storage and retrieval.

Benefits of this technology:

  • Improved data integrity: By performing read-modify-write operations, the technology ensures that data is correctly relocated without any loss or corruption.
  • Enhanced performance: The method optimizes the relocation process by only performing read-modify-write operations when necessary, reducing the overall time and resources required.
  • Efficient use of memory: By utilizing indirection units and aligning addresses, the technology maximizes the use of device memory and improves storage efficiency.

Abstract

Disclosed are systems and methods for providing read-modify-writes during relocation of overlapping logical blocks. A method includes receiving a host write command from a host interface. The method also includes translating a logical block address for the host write command to a physical address on a device memory. The physical address corresponds to a plurality of indirection units. The method also includes, in accordance with a determination that the physical address does not correspond to an aligned address, processing a read-modify-write operation for one or more indirection units of the plurality of indirection units during a relocation, in accordance with a determination that a relocation block has an overlapping indirection unit with the one or more indirection units.

Key-To-Physical Table Optimization For Key Value Data Storage Devices (17850423)

Main Inventor

Ran ZAMIR


Brief explanation

The abstract describes a data storage device that includes a memory device and a controller. The controller is designed to segment a key to physical (K2P) table into multiple segments based on the caching priority of key value (KV) pair data. The K2P table is then organized by storing and relocating K2P table entries into their respective segments. The controller utilizes the K2P table to manage the KV pair data stored in the memory device by applying the same management operation, such as prefetching, to each K2P table entry within the same segment.
  • The data storage device includes a memory device and a controller.
  • The controller segments a key to physical (K2P) table into multiple segments based on the caching priority of key value (KV) pair data.
  • The K2P table is organized by storing and relocating K2P table entries into their respective segments.
  • The storing and relocating process involves moving a K2P table entry to the segment with the corresponding caching priority.
  • The K2P table is utilized by the controller to manage the KV pair data stored in the memory device.
  • The controller applies the same management operation, such as prefetching, to each K2P table entry within the same segment.
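
The segmentation and relocation behavior can be sketched with a small in-memory structure. The segment names, address format, and the prefetch stand-in below are assumptions for illustration:

```python
class SegmentedK2PTable:
    """K2P table split into segments by caching priority. Each entry
    maps a key to a (hypothetical) physical address; changing an
    entry's priority relocates it to the matching segment."""

    def __init__(self, priorities=("hot", "warm", "cold")):
        self.segments = {p: {} for p in priorities}

    def store(self, key, phys_addr, priority):
        """Store or relocate: remove the entry from any other segment,
        then place it in the segment for its caching priority."""
        for seg in self.segments.values():
            seg.pop(key, None)
        self.segments[priority][key] = phys_addr

    def apply_to_segment(self, priority, operation):
        """Apply one management operation (e.g. a prefetch) uniformly
        to every entry in a segment."""
        return [operation(key, addr)
                for key, addr in self.segments[priority].items()]
```
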

Potential Applications

  • This technology can be applied in various data storage devices, such as solid-state drives (SSDs) or cloud storage systems.
  • It can improve the efficiency and performance of data storage by optimizing the organization and management of KV pair data.

Problems Solved

  • The technology solves the problem of efficiently managing and organizing KV pair data in a data storage device.
  • It addresses the challenge of effectively utilizing caching priorities to enhance data storage performance.

Benefits

  • The segmentation and organization of the K2P table based on caching priorities allows for more efficient data management.
  • By applying the same management operation to each K2P table entry within a segment, the technology ensures consistent and optimized performance.
  • The improved management and organization of KV pair data can lead to faster data access and retrieval, enhancing overall system performance.

Abstract

A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to segment a key to physical (K2P) table into two or more segments, wherein each segment of the two or more segments corresponds to a caching priority of key value (KV) pair data, organize the K2P table by storing and relocating one or more K2P table entries into a respective segment of the two or more segments, wherein the storing and relocating comprises moving a K2P table entry based on the caching priority of the KV pair data into the respective segment having the caching priority, and utilize the K2P table to manage KV pair data stored in the memory device, wherein utilizing the K2P table comprises applying a same management operation, such as prefetching, to each K2P table entry of a same segment.

Data Storage Device and Method for Multi-Level Conditional Prediction of Future Random Read Commands (17846335)

Main Inventor

Shay Benisty


Brief explanation

The abstract describes a data storage device and method for predicting future random read commands. 
  • The device includes a memory and a controller.
  • The controller receives a random read command from a host, which is associated with a stream.
  • It predicts the next stream to be received from the host.
  • It also predicts the next random read command based on the received command and the predicted next stream.
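The two prediction levels above could be sketched with simple assumed structures: a first-order transition table for the next stream, and a per-stream last-command record (assuming sequential reads within a stream) for the next command. Neither structure is specified by the patent.

```python
from collections import Counter, defaultdict

class ReadPredictor:
    def __init__(self):
        self.stream_transitions = defaultdict(Counter)  # stream -> next-stream counts
        self.last_command = {}                          # stream -> last LBA seen
        self.prev_stream = None

    def observe(self, stream, lba):
        """Record a received random read command and its stream."""
        if self.prev_stream is not None:
            self.stream_transitions[self.prev_stream][stream] += 1
        self.last_command[stream] = lba
        self.prev_stream = stream

    def predict(self):
        """Predict (next_stream, next_lba) from the current stream's history."""
        counts = self.stream_transitions.get(self.prev_stream)
        if not counts:
            return None, None
        next_stream = counts.most_common(1)[0][0]       # level 1: next stream
        last = self.last_command.get(next_stream)
        return next_stream, (last + 1 if last is not None else None)  # level 2
```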

Potential applications of this technology:

  • Improving the performance and efficiency of data storage devices.
  • Enhancing the user experience by reducing latency in accessing data.
  • Optimizing data retrieval in various industries such as cloud computing, database management, and data centers.

Problems solved by this technology:

  • Random read commands can be unpredictable, leading to inefficiencies in data storage systems.
  • Traditional methods may not effectively anticipate the next stream and command, resulting in slower data access.
  • This technology addresses these issues by using multi-level conditional prediction to improve the accuracy of future command predictions.

Benefits of this technology:

  • Faster data access and retrieval, leading to improved overall system performance.
  • Enhanced user experience with reduced latency in accessing data.
  • Increased efficiency and productivity in data-intensive industries.
  • Potential cost savings by optimizing data storage and retrieval processes.

Abstract

A data storage device and method for multi-level conditional prediction of future random read commands are provided. In one embodiment, a data storage device is provided comprising a memory and a controller. The controller is configured to receive a random read command from a host, wherein the received random read command is associated with a stream; predict a next stream to be received from the host; and predict a next random read command to be received from the host based on the received random read command and the predicted next stream. Other embodiments are possible, and each of the embodiments can be used alone or together in combination.

Proactive Hardening of Data Storage System (17848300)

Main Inventor

Chakradhar KOMMURI


Brief explanation

The patent application describes systems and methods for proactively recovering files stored in flash storage devices. The method is performed at a flash file system and involves several steps:
  • Receiving a write command for a first file in a flash memory.
  • Generating a reference hash for the first file and storing it in the flash memory.
  • Receiving a read command for the first file.
  • Requesting the logical block address corresponding to the first file from the flash manager.
  • Receiving a response for the read command.
  • If one or more hashes do not map to the first file, performing a file recovery operation for a second file based on the one or more hashes.
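The write/read flow above could be sketched as follows. The flash manager is mocked as a dictionary, SHA-256 is an assumed choice of hash, and the sketch simplifies by recovering the mismatched file itself rather than a second file.

```python
import hashlib

class FlashFileSystem:
    def __init__(self):
        self.blocks = {}        # lba -> data (stands in for the flash manager)
        self.ref_hashes = {}    # filename -> reference hash stored in flash
        self.lba_of = {}        # filename -> logical block address
        self.recovered = []

    def write(self, name, lba, data):
        """Store the file and its reference hash."""
        self.blocks[lba] = data
        self.lba_of[name] = lba
        self.ref_hashes[name] = hashlib.sha256(data).hexdigest()

    def read(self, name):
        """Serve a read; a hash that no longer maps triggers recovery."""
        lba = self.lba_of[name]                 # request LBA from flash manager
        data = self.blocks[lba]                 # response for the read command
        if hashlib.sha256(data).hexdigest() != self.ref_hashes[name]:
            self.recovered.append(name)         # proactive file recovery
        return data
```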

Potential applications of this technology:

  • Data recovery in flash storage devices: The method described in the patent application can be used to proactively recover files that may have been lost or corrupted in flash storage devices, improving data reliability and reducing the risk of data loss.
  • Flash file system optimization: By storing reference hashes and performing file recovery operations, the flash file system can optimize its performance and ensure the integrity of stored files.

Problems solved by this technology:

  • Data loss and corruption: Flash storage devices are prone to data loss and corruption due to various factors such as power failures or physical damage. The described method helps mitigate these issues by proactively recovering files.
  • File system inefficiencies: Traditional file systems may not have built-in mechanisms for proactive file recovery. This technology addresses this problem by introducing a method for efficient file recovery in flash storage devices.

Benefits of this technology:

  • Improved data reliability: By proactively recovering files, the technology helps ensure the integrity and availability of data stored in flash storage devices.
  • Enhanced performance: The method optimizes the performance of the flash file system by efficiently recovering files based on reference hashes, reducing the time and resources required for data recovery.
  • Reduced data loss risk: By proactively recovering files, the technology reduces the risk of data loss in flash storage devices, providing increased data protection and peace of mind for users.

Abstract

Disclosed are systems and methods for proactively recovering files stored in flash storage devices. The method may be performed at a flash file system. The method may include receiving a write command targeting a first file in a flash memory. The method may also include generating a reference hash corresponding to the first file, and storing the reference hash in the flash memory. The method may also include receiving a read command targeting the first file. In response to receiving the read command, the method may also include: providing a request for a logical block address corresponding to the first file to the flash manager, and receiving a response for the read command. The method may also include, in accordance with a determination that one or more hashes do not map to the first file, performing a file recovery operation for a second file based on the one or more hashes.

Peer RAID Control Among Peer Data Storage Devices (17849957)

Main Inventor

Judah Gamliel Hahn


Brief explanation

The abstract describes a storage system that uses a redundant array of independent disks (RAID) control to store data across multiple storage devices. The system includes a master storage device that receives commands from a host and determines how to distribute the data blocks among the peer storage devices based on their RAID configuration. The data blocks are then allocated and sent to the peer storage devices using a communication channel.
  • The storage system uses RAID control to distribute data across multiple storage devices.
  • A master storage device receives commands from a host and determines how to allocate the data blocks among the peer storage devices.
  • The data blocks are sent to the peer storage devices using a communication channel.
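A master device's allocation step could be sketched as below, assuming a simple RAID-5-style configuration with XOR parity and round-robin placement; the patent's peer RAID configuration is not limited to this scheme.

```python
from functools import reduce

def allocate_blocks(host_data: bytes, num_peers: int, block_size: int):
    """Split host data into blocks, append XOR parity, assign round-robin."""
    blocks = [host_data[i:i + block_size].ljust(block_size, b"\x00")
              for i in range(0, len(host_data), block_size)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
    allocation = {peer: [] for peer in range(num_peers)}
    for i, block in enumerate(blocks + [parity]):
        allocation[i % num_peers].append(block)  # send over the peer channel
    return allocation, parity
```

Losing any single peer's data block is then recoverable by XORing the parity with the surviving blocks.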

Potential Applications

  • Data centers and server farms that require efficient and reliable storage systems.
  • Cloud storage providers that need to distribute data across multiple devices for redundancy and performance.
  • High-performance computing environments that require fast and reliable data storage.

Problems Solved

  • Ensures data redundancy and reliability by distributing data blocks across multiple storage devices.
  • Improves performance by allowing data to be accessed in parallel from multiple storage devices.
  • Simplifies the management and control of storage devices by using a master storage device.

Benefits

  • Increased data reliability and availability due to redundancy across multiple storage devices.
  • Improved performance through parallel access to data from multiple storage devices.
  • Simplified management and control of storage devices through the use of a master storage device.

Abstract

Example storage systems, data storage devices, and methods provide redundant array of independent disk (RAID) control among peer storage devices. A master storage device among peer storage devices receives host commands and determines, based on a peer RAID configuration, data blocks for redundantly storing the host data unit among the peer storage devices. The master storage device allocates the data blocks among the peer storage devices and sends them to the peer storage devices using a peer communication channel.

Storage Media Based Search Function For Key Value Data Storage Devices (17850352)

Main Inventor

Alexander BAZARSKY


Brief explanation

The abstract describes a data storage device that includes a memory device and a controller. The controller receives a search command from a host device for a specific value associated with a key value (KV) format. It prepares search buffers and sends them to the memory device. The controller retrieves wordlines containing KV pair data and compares them with the search buffers to find values with the specific sequence. It then provides at least a portion of the value from the KV pair data to the host device.
  • The data storage device includes a memory device and a controller.
  • The controller receives a search command from a host device for a specific value associated with a key value (KV) format.
  • The controller prepares search buffers and sends them to the memory device.
  • The controller retrieves wordlines containing KV pair data.
  • The retrieved wordlines are compared with the search buffers to find values with the specific sequence.
  • At least a portion of the value from the KV pair data is provided to the host device based on the comparison.
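The comparison step above could be sketched as follows, with a wordline modeled as a list of KV pairs and a search buffer holding the target byte sequence; both representations are illustrative assumptions.

```python
def search_wordlines(wordlines, search_buffer: bytes):
    """Return (key, value) pairs whose value contains the searched sequence."""
    matches = []
    for wordline in wordlines:            # each retrieved wordline holds KV pair data
        for key, value in wordline:
            if search_buffer in value:    # compare against the search buffer
                matches.append((key, value))
    return matches
```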

Potential Applications

  • This technology can be applied in various data storage devices such as solid-state drives (SSDs) or databases.
  • It can be used in systems that require efficient searching and retrieval of specific values associated with a key value format.

Problems Solved

  • The technology solves the problem of efficiently searching for specific values in a key value format within a data storage device.
  • It eliminates the need for the host device to perform complex search operations, offloading the workload to the data storage device.

Benefits

  • The data storage device provides faster and more efficient searching and retrieval of specific values.
  • It reduces the processing burden on the host device, improving overall system performance.
  • The technology enables quicker access to data, enhancing the user experience and productivity.

Abstract

A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to receive a search command from a host device, where the search command is for a value associated with a key value (KV) format having a specific sequence, prepare one or more search buffers and send the one or more search buffers to the memory device, retrieve one or more wordlines having KV pair data associated with the KV format, where the KV pair data includes a key and a value, compare the retrieved one or more wordlines with the one or more search buffers for values having the specific sequence, and provide at least a portion of the value from one or more KV pair data based on the comparing to the host device.

NON-VOLATILE MEMORY DIE WITH LATCH-BASED MULTIPLY-ACCUMULATE COMPONENTS (17847039)

Main Inventor

Daniel Joseph Linnen


Brief explanation

The abstract of this patent application describes latch-based multiply-accumulate (MAC) operations implemented on the die of a non-volatile memory (NVM) array. The MAC procedures are linear and do not require logic branches. The MAC operation uses a set of linear MAC stages, where each stage processes MAC operations for one bit of a first multi-bit multiplicand multiplied against a second multi-bit multiplicand. The MAC procedures can be used in a neural network feedforward procedure, where the first multiplicand is a synaptic weight and the second multiplicand is an activation value. Multiple plane and multiple die NVM array implementations are also discussed for massive parallel processing.
  • Latch-based multiply-accumulate (MAC) operations implemented on the die of a non-volatile memory (NVM) array.
  • Linear MAC procedures that do not require logic branches.
  • Each linear MAC stage processes MAC operations for one bit of a first multi-bit multiplicand multiplied against a second multi-bit multiplicand.
  • MAC procedures can be used in a neural network feedforward procedure.
  • First multiplicand is a synaptic weight and the second multiplicand is an activation value.
  • Multiple plane and multiple die NVM array implementations for massive parallel processing.
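The branch-free, one-stage-per-bit idea can be illustrated with a bit-serial multiply-accumulate where each stage handles one bit of the first multiplicand and a mask replaces the usual if-branch. The 8-bit width is an assumption; the on-die version operates on latches rather than Python integers.

```python
def mac_branchless(weight: int, activation: int, acc: int, bits: int = 8) -> int:
    """acc += weight * activation, computed as one linear stage per weight bit."""
    for i in range(bits):                    # one stage per bit, no branches
        mask = -((weight >> i) & 1)          # all-ones if the bit is set, else zero
        acc += (activation & mask) << i      # conditional add via masking
    return acc
```

In the neural-network setting described above, `weight` plays the role of the synaptic weight and `activation` the activation value.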

Potential Applications

  • Neural networks and deep learning applications.
  • High-performance computing and parallel processing.
  • Artificial intelligence and machine learning systems.
  • Data analytics and pattern recognition.
  • Signal processing and image recognition.

Problems Solved

  • Efficient implementation of multiply-accumulate (MAC) operations on a non-volatile memory (NVM) array.
  • Elimination of logic branches in MAC procedures for improved performance.
  • Enabling massive parallel processing for neural networks and other applications.

Benefits

  • Improved performance and efficiency in MAC operations.
  • Simplified and linear MAC procedures without logic branches.
  • Integration of MAC operations on the die of a non-volatile memory (NVM) array.
  • Enablement of massive parallel processing for faster computation.
  • Potential for lower power consumption and reduced latency.

Abstract

Latch-based multiply-accumulate (MAC) operations implemented on the die of a non-volatile memory (NVM) array are disclosed. The exemplary latch-based MAC procedures described herein are linear procedures that do not require logic branches. In one example, the MAC operation uses a set of linear MAC stages, wherein each linear stage processes MAC operations corresponding to one bit of a first multi-bit multiplicand being multiplied against a second multi-bit multiplicand. Examples are provided wherein the MAC procedures are performed as part of a neural network feedforward procedure where the first multiplicand is a synaptic weight and the second multiplicand is an activation value. Multiple plane and multiple die NVM array implementations are also described for massive parallel processing.

MEMORY COHERENCE IN VIRTUALIZED ENVIRONMENTS (17850767)

Main Inventor

Marjan Radi


Brief explanation

The abstract describes a Virtual Switching (VS) kernel module that manages packet flows between Virtual Machines (VMs) in a network. The module receives packets from a VM, identifies memory messages and addresses for memory blocks stored in shared memory, and updates the directory in the kernel space accordingly. The module also determines the state of the memory blocks from the directory and responds to memory requests based on that state.
  • The VS kernel module manages packet flows between VMs in a network.
  • It identifies memory messages and addresses for memory blocks stored in shared memory.
  • The module updates the directory in the kernel space based on the memory messages.
  • It determines the state of the memory blocks from the directory.
  • The module responds to memory requests based on the determined state.
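The kernel-space directory could be sketched as below under simple MSI-style assumptions: each entry tracks a state and a sharer set per memory block, and the response to a memory message depends on the looked-up state. The state names are illustrative, not taken from the patent.

```python
class CoherenceDirectory:
    def __init__(self):
        self.entries = {}  # block address -> {"state": str, "sharers": set}

    def handle(self, vm, message, addr):
        """Update the directory entry for a parsed memory message from a VM."""
        entry = self.entries.setdefault(addr, {"state": "Invalid", "sharers": set()})
        if message == "read":
            entry["sharers"].add(vm)
            if entry["state"] == "Invalid":
                entry["state"] = "Shared"
        elif message == "write":
            entry["state"] = "Modified"     # other sharers would be invalidated
            entry["sharers"] = {vm}
        return entry["state"]
```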

Potential Applications

  • Virtualized data centers
  • Cloud computing environments
  • Network function virtualization (NFV)
  • Software-defined networking (SDN)

Problems Solved

  • Efficient management of packet flows between VMs
  • Handling memory messages and addresses for memory blocks in shared memory
  • Updating the directory in the kernel space accurately
  • Determining the state of memory blocks for efficient memory request responses

Benefits

  • Improved performance and efficiency in managing packet flows
  • Enhanced memory management and response to memory requests
  • Simplified network management in virtualized environments
  • Increased scalability and flexibility in network configurations

Abstract

A Virtual Switching (VS) kernel module manages different flows of packets between at least one Virtual Machine (VM) running at a node and one or more other VMs running at the node or at one or more other nodes in a network. A packet is received from a first VM using the VS kernel module and is parsed to identify a memory message and an address for at least one memory block stored in a shared memory. At least one entry for the at least one memory block is updated in a directory in a kernel space using the VS kernel module based on the memory message. According to another aspect, a state for the at least one memory block is determined from the directory and the VS kernel module is used to respond to the memory request based on the determined state.

AUTONOMIC TROUBLESHOOTING OF A SYSTEM OF DEVICES (17852245)

Main Inventor

Shir PINHAS


Brief explanation

The abstract describes a system and method for autonomic troubleshooting in a system of devices. The devices communicate with each other via a system management bus and also with a host device via a separate main bus. The method involves one device sending a query to another device via the system management bus, determining if the second device is in an error state based on the response received or the absence of a response, and sending a control command to the second device based on the error state.
  • The system includes multiple devices communicating with each other and a host device.
  • Communication between devices is done through a system management bus.
  • Communication between devices and the host device is done through a separate main bus.
  • The first device sends a query to the second device via the system management bus.
  • The first device determines if the second device is in an error state based on the response received or the absence of a response.
  • If the second device is in an error state, the first device sends a control command to the second device via the system management bus.
  • This system and method can be applied to data storage devices or any other devices in a system.
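The query/decide/command loop above could be sketched as follows. The query string, the timeout standing in for "absence of a response", and the reset command are all assumptions; a real implementation would issue SMBus transactions.

```python
def troubleshoot_peer(send_query, send_command):
    """Query a peer over the management bus; on error or silence, send a reset."""
    try:
        response = send_query("STATUS?")          # first query over the SMBus
    except TimeoutError:
        response = None                           # absence of a response
    in_error = response is None or response.get("error", False)
    if in_error:
        send_command("RESET")                     # control command over the SMBus
    return in_error
```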

Potential Applications

  • Troubleshooting and error handling in systems with multiple devices.
  • Autonomic management of devices in a system.
  • Efficient communication and control between devices and a host device.

Problems Solved

  • Simplifies troubleshooting process in a system of devices.
  • Enables autonomic management of devices.
  • Improves efficiency in communication and control between devices and a host device.

Benefits

  • Reduces manual intervention in troubleshooting and error handling.
  • Enhances system reliability and stability.
  • Improves overall system performance and efficiency.

Abstract

A system and method for autonomic troubleshooting in a system of devices including at least a first device and a second device communicating with each other via a system management bus. The first and second devices also communicate with a host device via a separate main bus. The method includes the first device sending a first query, via the system management bus, to the second device. The first device determines if the second device is in an error state based on: receiving a response from the second device indicating an error; or absence of a response from the second device. Based on the error state, the first device sends a control command to the second device via the system management bus. In some examples the first device or second device is a data storage device.

MEMORY DEVICE WITH LATCH-BASED NEURAL NETWORK WEIGHT PARITY DETECTION AND TRIMMING (17847089)

Main Inventor

Daniel Joseph Linnen


Brief explanation

The abstract describes latch-based methods and apparatus for detecting bit flip errors in neural network weight data within a non-volatile memory (NVM) array. The methods are particularly useful for floating point number values.
  • The methods detect parity errors in neural network weights and set the erroneous weight to zero, preventing it from significantly affecting the network.
  • The procedures described are linear and do not require logic decisions, making them efficient and straightforward.
  • The methods also assess the degradation of the NVM array based on collected parity bit data in the latches.
  • Multiple plane and multiple die NVM array implementations are described, enabling massive parallel processing.
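The detect-and-trim behavior could be sketched as below: a stored parity bit is recomputed from each weight's bit pattern, and a mismatch zeroes the weight, trimming the corresponding neuron. The 32-bit float encoding and even-parity convention are assumed details.

```python
import struct

def parity(bits: int) -> int:
    """Even parity of an integer's bit pattern."""
    return bin(bits).count("1") & 1

def check_and_trim(weights, parities):
    """Zero any weight whose recomputed parity disagrees with the stored bit."""
    trimmed = []
    for w, p in zip(weights, parities):
        bits = struct.unpack("<I", struct.pack("<f", w))[0]
        trimmed.append(0.0 if parity(bits) != p else w)
    return trimmed
```

Zeroing is a deliberately conservative repair: a flipped exponent bit could otherwise change a weight's magnitude by orders of magnitude, whereas a zero weight merely silences one neuron.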

Potential Applications

  • Artificial intelligence and machine learning systems
  • Neural networks and deep learning models
  • Non-volatile memory arrays used in computing and storage devices

Problems Solved

  • Detection and mitigation of bit flip errors in neural network weight data
  • Prevention of erroneous values from significantly affecting the network, especially in floating-point weight values
  • Efficient and linear procedures that do not require complex logic decisions

Benefits

  • Improved accuracy and reliability of neural networks
  • Enhanced performance and stability of artificial intelligence systems
  • Efficient and straightforward methods for detecting and handling bit flip errors in NVM arrays

Abstract

Latch-based methods and apparatus for performing neural network weight parity detection on the die of a non-volatile memory (NVM) array to detect bit flip errors within neural network weight data are described, particularly for use with floating point number values. Upon detection of a parity error in a neural network weight, the erroneous weight is set to zero to trim the corresponding neuron from the network, thus preventing the erroneous value from significantly affecting the network, particularly in situations where the bit flip would otherwise affect the magnitude of a floating-point weight value. The exemplary latch-based procedures described herein are linear procedures that do not require logic decisions. Procedures are also described that assess an amount of degradation in the NVM array based on parity bit data collected in the latches. Multiple plane and multiple die NVM array implementations are also described for massive parallel processing.

Systems And Methods With Variable Size Super Blocks In Zoned Namespace Devices (17848006)

Main Inventor

Avinash Muthya Narahari


Brief explanation

The patent application describes a method to improve the performance of a data storage device during garbage collection by reducing or eliminating the presence of dummy data in superblocks. Superblocks are sections of the storage device that can be filled with dummy data during the garbage collection process, which can negatively impact device performance.
  • Varying the size of a superblock to reduce or eliminate dummy data in a data storage device.
  • The data storage device consists of multiple superblocks, each containing multiple die blocks.
  • The method aims to improve device performance during garbage collection by minimizing the amount of dummy data in superblocks.
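The sizing idea could be sketched as follows: during garbage collection, pick just enough die blocks to hold the valid data being relocated, so the superblock is not padded out with dummy data. The block sizes are illustrative assumptions.

```python
import math

def superblock_size(valid_data_bytes: int, die_block_bytes: int,
                    max_die_blocks: int) -> int:
    """Return the number of die blocks for a right-sized superblock."""
    needed = math.ceil(valid_data_bytes / die_block_bytes)
    return min(max(needed, 1), max_die_blocks)

def dummy_bytes(valid_data_bytes: int, die_block_bytes: int, die_blocks: int) -> int:
    """Dummy padding required to fill a superblock of the given size."""
    return die_blocks * die_block_bytes - valid_data_bytes
```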

Potential applications of this technology:

  • Data storage devices, such as solid-state drives (SSDs) or flash memory devices.
  • Any system or device that utilizes garbage collection processes for data management.

Problems solved by this technology:

  • Garbage collection processes in data storage devices can result in the filling of superblocks with dummy data, leading to decreased device performance.
  • The method described in the patent application addresses this issue by dynamically adjusting the size of superblocks to reduce or eliminate the presence of dummy data.

Benefits of this technology:

  • Improved performance of data storage devices during garbage collection processes.
  • Reduction or elimination of dummy data in superblocks, leading to more efficient data storage and retrieval.
  • Enhanced overall device performance and lifespan.

Abstract

During a garbage collection process of a data storage device, superblocks may be filled with dummy data, which may decrease device performance. Embodiments described herein provide systems, methods, and computer readable media for varying a size of a superblock to reduce or eliminate dummy data in a data storage device including a plurality of superblocks. Each of the plurality of superblocks includes a plurality of die blocks.

DATA STORAGE DEVICE MANAGEMENT SYSTEM (17850483)

Main Inventor

Ramanathan Muthiah


Brief explanation

The patent application describes devices and techniques for remotely managing data storage devices (DSD) using a mobile app interface. 
  • The invention allows end users to perform data management activities on a DSD from a remote location.
  • Activities include creating data snapshots, resetting snapshots, and setting permissions on the DSD.
  • The remote management is done through a mobile app interface on a mobile device.
  • The invention provides a convenient and user-friendly way to manage data storage devices remotely.

Potential Applications

This technology has various potential applications, including:

  • Remote management of data storage devices in a corporate or enterprise setting.
  • Remote management of personal data storage devices, such as external hard drives or cloud storage.
  • Integration with backup and recovery systems to remotely manage data snapshots.

Problems Solved

The technology solves several problems related to remote data management, such as:

  • The need for physical access to a data storage device to perform management activities.
  • The inconvenience of having to be physically present at the location of the data storage device.
  • The complexity of managing data snapshots and permissions on a DSD without a user-friendly interface.

Benefits

The technology offers several benefits, including:

  • Increased convenience and flexibility in managing data storage devices remotely.
  • Time and cost savings by eliminating the need for physical access to the device.
  • Improved data security by allowing users to quickly reset snapshots and manage permissions remotely.

Abstract

Devices and techniques are disclosed wherein an end user can remotely trigger direct data management activities of a data storage device (DSD), such as creating a data snapshot, resetting a snapshot, and setting permissions at the DSD via a remote mobile device app interface.

TRAINING ENSEMBLE MODELS TO IMPROVE PERFORMANCE IN THE PRESENCE OF UNRELIABLE BASE CLASSIFIERS (18459320)

Main Inventor

Yongjune KIM


Brief explanation

The abstract of the patent application describes a system and method for training base classifiers in a boosting algorithm. The system optimally trains base classifiers considering an unreliability model and then uses an aggregator decoder that reverse-flips inputs using inter-classifier redundancy introduced in training.
  • The system and method are used for training base classifiers in a boosting algorithm.
  • The training process considers an unreliability model to optimize the training of the base classifiers.
  • An aggregator decoder is used to reverse-flip inputs using inter-classifier redundancy introduced during training.
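One way to picture the decoding step, under a loose assumption that training replicates each base classifier's role across several unreliable copies, is a majority decode of each redundant group before the boosted vote. The repetition scheme below is purely illustrative of the inter-classifier-redundancy idea.

```python
def decode_outputs(redundant_outputs):
    """Majority-decode each group of redundant +/-1 classifier outputs."""
    return [1 if sum(group) > 0 else -1 for group in redundant_outputs]

def boosted_decision(decoded, alphas):
    """Weighted boosting vote over the decoded base-classifier outputs."""
    score = sum(a * h for a, h in zip(alphas, decoded))
    return 1 if score > 0 else -1
```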

Potential Applications

  • Machine learning and artificial intelligence systems
  • Pattern recognition systems
  • Data analysis and prediction models

Problems Solved

  • Improves the training process of base classifiers in a boosting algorithm
  • Addresses the issue of unreliability in training base classifiers
  • Enhances the accuracy and reliability of the boosting algorithm

Benefits

  • Optimal training of base classifiers considering an unreliability model
  • Improved accuracy and reliability of the boosting algorithm
  • Enhanced performance of machine learning and pattern recognition systems.

Abstract

A system and method for training base classifiers in a boosting algorithm includes optimally training base classifiers considering an unreliability model, and then using a scheme with an aggregator decoder that reverse-flips inputs using inter-classifier redundancy introduced in training.

NON-VOLATILE MEMORY DIE WITH BIT-FLIP OBJECT INSERTION (17847101)

Main Inventor

Daniel Joseph Linnen


Brief explanation

The patent application describes techniques for inserting objects into a background image using bit-flip operations in non-volatile memory (NVM). 
  • Bit-flip object insertion techniques for NVM are provided.
  • The techniques involve flipping or inverting bits within the pixels of a background image to insert an object.
  • In one example, pixels corresponding to the shape and insertion location of the object are XORed with binary 1s, changing their color or intensity to make the object appear in the background image.
  • In other examples, only the most significant bits of pixels in the background image are flipped.
  • The patent also describes latch-based procedures for high-speed processing on an NVM die.
  • Multiple plane NVM die implementations are discussed for efficient processing.
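The XOR insertion described above can be illustrated on 8-bit grayscale pixels: pixels covered by the object mask are XORed with 0xFF (all binary 1s), or only their most significant bit is flipped. The row-of-lists image layout is an assumption for the sketch.

```python
def insert_object(background, mask, msb_only=False):
    """Flip bits of 8-bit background pixels wherever the object mask is set."""
    flip = 0x80 if msb_only else 0xFF        # MSB-only flip or full inversion
    return [[px ^ flip if m else px for px, m in zip(row, mrow)]
            for row, mrow in zip(background, mask)]
```

XORing with the same mask a second time restores the original background, which is what makes the operation attractive for in-place latch processing.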

Potential Applications

  • Image editing and manipulation software
  • Augmented reality applications
  • Digital signage and advertising
  • Gaming and virtual reality

Problems Solved

  • Simplifies the process of inserting objects into a background image
  • Enables efficient and high-speed processing on NVM
  • Provides a method for changing the color or intensity of pixels to create the appearance of an object in the background image

Benefits

  • Faster and more efficient object insertion in NVM
  • Improved image editing capabilities
  • Enhanced visual effects in augmented reality and gaming applications

Abstract

Bit-flip object insertion techniques are provided for use with a non-volatile memory (NVM) wherein an object is inserted into a background image by flipping or inverting one or more bits within the pixels of the background image that correspond to the shape and insertion location of an object being inserted. In an illustrative example, pixels within the background image that correspond to the shape and insertion location of the object are XORed with binary 1s. This flips the bits of those pixels to change the color (hue) and/or intensity (brightness) of the pixels so the object appears in the background image. In other examples, only the most significant bits of pixels in the background image are inverted (flipped). Exemplary latch-based procedures are described herein for high-speed processing on an NVM die. Multiple plane NVM die implementations are also described for massive processing.

Topological Insulator Based Spin Torque Oscillator Reader (18244555)

Main Inventor

Xiaoyong LIU


Brief explanation

The present disclosure is about a patent application for a bismuth antimony (BiSb) based spin torque oscillator (STO) sensor. The sensor consists of a spin-orbit torque (SOT) device and a magnetic tunnel junction (MTJ) structure. The use of a BiSb layer in the SOT device allows for a larger spin Hall angle (SHA), resulting in improved efficiency and reliability of the STO sensor.
  • The patent application is for a BiSb based STO sensor.
  • The sensor includes a SOT device and a MTJ structure.
  • The BiSb layer in the SOT device enables a larger SHA.
  • The larger SHA improves the efficiency and reliability of the STO sensor.

Potential Applications

This technology has potential applications in various fields, including:

  • Magnetic field sensing
  • Data storage devices
  • Magnetic random-access memory (MRAM)
  • Spintronic devices

Problems Solved

The technology addresses the following problems:

  • Limited efficiency and reliability of STO sensors
  • Insufficient spin Hall angle in conventional STO sensors
  • Inadequate performance of magnetic field sensing and data storage devices

Benefits

The technology offers the following benefits:

  • Improved efficiency and reliability of STO sensors
  • Larger spin Hall angle for enhanced performance
  • Better magnetic field sensing and data storage capabilities

Abstract

The present disclosure generally relates to a bismuth antimony (BiSb) based STO (spin torque oscillator) sensor. The STO sensor comprises a SOT device and a magnetic tunnel junction (MTJ) structure. By utilizing a BiSb layer within the SOT device, a larger spin Hall angle (SHA) can be achieved, thereby improving the efficiency and reliability of the STO sensor.

High Concurrent Channels Magnetic Recording Head Having Same-Gap-Verify And High Density Interconnect (17849510)

Main Inventor

Robert G. BISKEBORN


Brief explanation

The patent application describes a tape drive with a tape head that consists of two modules. Each module has 64 writers, 64 readers, and three pairs of servo readers aligned with the 64 readers in a row. The writers are placed in a parallel row to the readers, with each writer aligned with an adjacent reader. The spacing between each reader and writer is about 150 μm to 200 μm. The modules are designed to write data to a tape using the writers and read verify the newly written data using the readers.
  • The tape drive includes a tape head with two modules.
  • Each module has 64 writers, 64 readers, and three pairs of servo readers.
  • The servo readers are aligned with the 64 readers in a row.
  • The writers are placed in a parallel row to the readers.
  • Each writer is aligned with an adjacent reader.
  • The spacing between each reader and writer is about 150 μm to 200 μm.
  • The modules are used to write data to a tape using the writers and read verify the newly written data using the readers.

Potential Applications

  • Data storage and backup systems
  • Archiving large amounts of data
  • Media production and broadcasting

Problems Solved

  • Efficient and accurate writing and reading of data on tape
  • Improved data storage capacity and speed
  • Enhanced reliability and durability of tape drives

Benefits

  • Higher data storage capacity
  • Faster data transfer rates
  • Improved data integrity and reliability
  • Cost-effective solution for long-term data storage

Abstract

The present disclosure generally relates to a tape drive including a tape head. The tape head comprises two modules, where each module comprises 64 writers, 64 readers, and three pairs of servo readers aligned with the 64 readers in a first row. The three pairs of servo readers comprise a first pair disposed at a first end of the first row, a second pair disposed between two groups of 32 readers, and a third pair disposed at a second end of the first row. The writers are disposed in a second row parallel to the first row, and are each aligned with an adjacent reader. The spacing between each reader and each writer is about 150 μm to about 200 μm. Each module is configured to write data to a tape using the 64 writers and to read verify the newly written data using the 64 readers.
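The first-row layout in the abstract (three servo pairs bracketing and splitting two groups of 32 readers) can be written down as a simple sequence. The positions are abstract index slots, not physical micrometre coordinates:

```python
def first_row_layout():
    """Element order of the first row described in the abstract:
    servo pair, 32 readers, servo pair, 32 readers, servo pair."""
    row = []
    row += ["servo", "servo"]   # first pair at one end of the row
    row += ["reader"] * 32      # first group of 32 readers
    row += ["servo", "servo"]   # second pair between the two groups
    row += ["reader"] * 32      # second group of 32 readers
    row += ["servo", "servo"]   # third pair at the other end
    return row

layout = first_row_layout()
print(layout.count("reader"), layout.count("servo"))  # 64 6
```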

TDS Mitigation Using Different Data Preamble Tones (17849504)

Main Inventor

Derrick E. BURTON


Brief explanation

The patent application describes a tape drive that includes a tape head and control circuitry. The tape head has multiple data elements, each with a write transducer and a read transducer. The control circuitry is designed to control the tape head to write different frequency preamble tones before writing data to the tape.
  • The tape drive includes a tape head with multiple data elements, each having a write and read transducer.
  • The control circuitry controls the tape head to write different frequency preamble tones before writing data to the tape.
  • Each data element reads one or more preamble tones before writing or reading data from the tape.
  • The control circuitry extracts signal content from each preamble tone read by each data element.
  • The control circuitry determines an optimized positioning for the tape head with respect to the tape to reduce alignment errors.

Potential Applications

  • Data storage: This tape drive technology can be used in data storage systems where high capacity and reliability are required.
  • Archiving: The tape drive can be used for long-term data archiving, providing a cost-effective and efficient solution.
  • Backup systems: The tape drive can be used in backup systems to securely store and retrieve data.

Problems Solved

  • Alignment errors: The technology helps reduce alignment errors by optimizing the positioning of the tape head, improving data read and write accuracy.
  • Data integrity: By using different frequency preamble tones, the technology ensures accurate data transfer and reduces the risk of data corruption.

Benefits

  • Improved data reliability: The optimized positioning of the tape head reduces alignment errors, resulting in more reliable data storage and retrieval.
  • Cost-effective solution: Tape drives are known for their high storage capacity and cost-effectiveness compared to other storage technologies.
  • Efficient data transfer: The use of different frequency preamble tones ensures accurate data transfer, improving overall system efficiency.

Abstract

The present disclosure generally relates to a tape drive comprising a tape head and control circuitry. The tape head comprises a plurality of data elements, each data element including a write transducer and a read transducer. The control circuitry is configured to control the tape head to write at least three different frequency preamble tones prior to writing data to data tracks of a tape. A different preamble tone is written to adjacent data tracks of the tape. The data elements of the tape head are each configured to read one or more preamble tones prior to writing data to or reading data from the tape. The control circuitry is then configured to extract a signal content from each preamble tone read by each data element, and determine an optimized positioning for the tape head with respect to the tape to reduce alignment errors.
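The patent does not specify how the extracted signal content is turned into a positioning correction. As a purely hypothetical illustration, if a data element reports the amplitude of its own track's preamble tone and of an adjacent track's tone, and amplitude is assumed proportional to the reader's overlap with each track, a cross-track offset could be estimated from the ratio:

```python
def estimate_cross_track_offset(amp_own, amp_adjacent, track_pitch_um):
    """Estimate how far the reader has drifted toward the adjacent track.

    Hypothetical model: each tone's amplitude is proportional to the
    reader's overlap with that track, so the adjacent tone's share of
    the total amplitude gives the fractional offset.
    """
    total = amp_own + amp_adjacent
    if total == 0:
        return 0.0  # no signal: nothing to correct
    return (amp_adjacent / total) * track_pitch_um

# A reader picking up 3 parts of its own tone and 1 part of the
# neighbouring tone on 2 um tracks is offset by about 0.5 um.
print(estimate_cross_track_offset(3.0, 1.0, 2.0))  # 0.5
```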

DATA STORAGE DEVICE WITH NOISE INJECTION (17847080)

Main Inventor

Daniel Joseph Linnen


Brief explanation

The patent application describes procedures for injecting noise into a non-volatile memory (NVM) array. This is done to induce bit flips and information degradation in the data stored in the memory. The injected noise can be used for dataset augmentation or for testing deep neural networks (DNNs).
  • Noise is injected into data stored in a non-volatile memory array by adjusting read voltages to induce bit flips.
  • Feedback is used to achieve a target amount of information degradation.
  • Random data is combined with itself iteratively to achieve a target percentage of random 1s or 0s.
  • The random data is then combined with data read from the NVM array.
  • Pixels in the array can be randomly zeroed out to simulate dead charge coupled device (CCD) pixels.
  • Timing, voltage, and/or current values used in circuits during data transfer are adjusted outside their specified margins to induce bit flips and inject noise into the data.

Potential Applications

  • Dataset augmentation for machine learning and deep neural networks.
  • Testing and evaluation of deep neural networks.
  • Simulation of faulty or degraded memory conditions for testing purposes.

Problems Solved

  • Lack of diverse and realistic datasets for training machine learning models.
  • Difficulty in testing and evaluating the robustness of deep neural networks.
  • Limited ability to simulate faulty or degraded memory conditions for testing purposes.

Benefits

  • Improved performance and accuracy of machine learning models through dataset augmentation.
  • Better understanding of the robustness and reliability of deep neural networks.
  • More realistic testing of memory systems and algorithms.

Abstract

Noise injection procedures implemented on the die of a non-volatile memory (NVM) array are disclosed. In one example, noise is injected into data by adjusting read voltages to induce bit flips while using feedback to achieve a target amount of information degradation. In another example, random data is iteratively combined with itself to achieve a target percentage of random 1s or 0s, then the random data is combined with data read from the NVM array. In other examples, pixels are randomly zeroed out to emulate dead charge coupled device (CCD) pixels. In still other examples, the timing, voltage, and/or current values used within circuits while transferring data to/from latches or bitlines are adjusted outside their specified margins to induce bit flips to inject noise into the data. The noise-injected data may be used, for example, for dataset augmentation or for the testing of deep neural networks (DNNs).
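The second technique in the abstract (iteratively combining random data to reach a target density of 1s, then combining it with the stored data) can be sketched in Python. The patent does this with on-die latches; the approach below is a host-side approximation using the standard observation that ANDing two uniform random words halves the density of 1s, while ORing moves it halfway toward 1:

```python
import random

def random_mask(nbits, target_ones, max_iters=16, seed=None):
    """Build a random bitmask whose expected density of 1s approaches
    target_ones by iteratively combining uniform random words."""
    rng = random.Random(seed)
    mask = rng.getrandbits(nbits)
    density = 0.5  # expected density of 1s in a uniform random word
    for _ in range(max_iters):
        if abs(density - target_ones) < 1e-3:
            break
        word = rng.getrandbits(nbits)
        if density > target_ones:
            mask &= word                     # AND halves the density
            density /= 2
        else:
            mask |= word                     # OR: d -> d + (1 - d)/2
            density += (1 - density) / 2
    return mask

def inject_noise(data, nbits, target_flip_fraction, seed=None):
    """XOR the data with a random mask so that roughly
    target_flip_fraction of its bits are flipped."""
    return data ^ random_mask(nbits, target_flip_fraction, seed=seed)
```

For example, `random_mask(50000, 0.125)` ANDs three uniform words, giving a mask in which about 12.5% of bits are 1; XORing it into the data flips about that fraction of bits.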

Data Storage Device and Method for Predicting Future Read Thresholds (18242061)

Main Inventor

David Avraham


Brief explanation

The patent application describes a method for improving the performance and efficiency of data storage devices by inferring read thresholds in advance and selecting the appropriate one when needed.
  • The invention focuses on improving the read threshold process in memory devices.
  • It proposes inferring multiple read thresholds based on possible memory conditions.
  • When the read threshold is required, it is selected from the pre-inferred options based on the current memory conditions.
  • This approach aims to enhance latency, throughput, quality of service, power consumption, and reduce errors in data storage devices.

Potential Applications

This technology can be applied in various data storage devices, including:

  • Solid-state drives (SSDs)
  • Hard disk drives (HDDs)
  • Flash memory devices
  • Cloud storage systems

Problems Solved

The patent application addresses the following problems:

  • Latency and throughput issues in data storage devices
  • Inefficient read threshold selection process
  • Power consumption and energy efficiency concerns
  • Quality of service and error reduction in memory systems

Benefits

The use of inferred read thresholds and dynamic selection offers several benefits:

  • Improved latency and throughput in data storage devices
  • Enhanced quality of service and reduced errors
  • Reduced power consumption and improved energy efficiency
  • Optimized performance and reliability of memory systems

Abstract

Before a read threshold is needed to read a wordline in memory, a data storage device can infer a plurality of read thresholds based on possible conditions of the memory that may exist when the read threshold is eventually needed. When the read threshold is needed, it is selected from the previously-inferred read thresholds based on the current conditions of the memory. This can improve latency and throughput, improve quality of service, reduce power consumption, and reduce errors.
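The "infer ahead, select later" flow can be sketched as a precomputed lookup. The condition model (temperature buckets) and the drift formula below are invented for illustration; the patent does not specify either:

```python
def precompute_thresholds(conditions, infer):
    """Ahead of time, infer a read threshold for each anticipated
    memory condition (here, a set of temperature buckets)."""
    return {cond: infer(cond) for cond in conditions}

def select_threshold(table, current_condition):
    """On the latency-critical read path, just pick the pre-inferred
    threshold whose condition is closest to the current one."""
    nearest = min(table, key=lambda c: abs(c - current_condition))
    return table[nearest]

# Toy model: read threshold drifts down slightly as temperature rises.
table = precompute_thresholds([0, 25, 50, 75], lambda t: 2.50 - 0.004 * t)
print(select_threshold(table, 62))  # uses the 50 C bucket
```

The expensive inference runs off the critical path; the read itself only pays for a cheap nearest-condition lookup, which is where the latency and throughput gains would come from.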

NON-VOLATILE MEMORY WITH PRECISE PROGRAMMING (17852129)

Main Inventor

Ming Wang


Brief explanation

The patent application describes a method to improve the accuracy of reading data stored in memory cells by programming them with tighter threshold voltage distributions. This is achieved by slowing down the programming process as the memory cells approach their target threshold voltage.
  • The memory cells are programmed by applying a series of voltage pulses to a selected word line.
  • Tighter threshold voltage distributions result in fewer errors when reading the data later.
  • To create tighter distributions, the system reduces the effective pulse width of the voltage pulses as the memory cells approach their target threshold voltage.
  • The voltage pulses are divided into portions, with each portion corresponding to a subset of the pulse width or time period.
  • Memory cells nearing their target threshold voltage are slowed down by inhibiting their programming during later portions of the voltage pulses.

Potential Applications

  • Non-volatile memory devices
  • Flash memory
  • Solid-state drives (SSDs)
  • Memory cards

Problems Solved

  • Inaccurate reading of data stored in memory cells
  • Errors caused by wider threshold voltage distributions

Benefits

  • Improved accuracy in reading data
  • Reduced errors in data retrieval
  • Enhanced reliability of memory devices

Abstract

Memory cells are programmed to threshold voltage distributions that correspond to data states by applying a series of voltage pulses to a selected word line connected to a set of non-volatile memory cells selected for programming. Tighter threshold voltage distributions will result in fewer errors when reading the data at a later time. To create tighter threshold voltage distributions during programming, the system slows down the programming of memory cells as the memory cells approach their target threshold voltage by reducing the effective pulse width of the voltage pulses. The voltage pulses are divided into portions, with each portion corresponding to a subset of the pulse width or a subset of the time period that the voltage pulse is applied. Memory cells that are approaching their target threshold voltage will be slowed down by inhibiting those memory cells from programming during later-in-time portions of the voltage pulses.
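The partial-pulse inhibit scheme can be illustrated with a toy simulation. The Vt-shift model (a fixed shift per pulse portion seen) and all constants are invented purely to show the control flow, not the device physics:

```python
def program_cells(start_vts, targets, vstep=0.2, portions=4,
                  near_window=0.3, max_pulses=20):
    """Each voltage pulse is divided into `portions`; cells within
    `near_window` volts of their target are inhibited after the first
    portion, shrinking their effective pulse width."""
    vts = list(start_vts)
    for _ in range(max_pulses):
        for portion in range(portions):
            for i, target in enumerate(targets):
                if vts[i] >= target:
                    continue  # verify passed: fully inhibited
                if portion > 0 and target - vts[i] < near_window:
                    continue  # near target: inhibited for later portions
                vts[i] += vstep / portions  # toy Vt shift per portion seen
    return vts

final = program_cells([0.0, 0.5], [2.0, 1.5])
# Each cell ends within one portion-step (0.05 V) above its target,
# i.e. a tighter distribution than a full vstep (0.2 V) would allow.
```

Far-from-target cells see the whole pulse and move in full `vstep` increments; near-target cells see only the first portion, so their final overshoot is bounded by `vstep / portions`, which is the tightening effect the abstract describes.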

WIRELESS DEVICE LOSS PREVENTION AND DISCOVERY (17852290)

Main Inventor

Matthew Harris KLAPMAN


Brief explanation

The abstract describes a data storage device that includes a non-volatile storage medium, a data port, an energy harvesting component, and a beacon component. The device can wirelessly transmit a signal using electrical energy produced from an ambient energy source. It also includes an energy store to store the electrical energy.
  • The data storage device can store user data and transmit it between a host computer system and the device.
  • It can generate electrical energy from an ambient energy source using an energy harvesting component.
  • The device has a beacon component that wirelessly transmits a signal using the electrical energy.
  • An energy store is included to store the electrical energy produced by the energy harvesting component.

Potential Applications

  • Remote data storage and transmission devices
  • Internet of Things (IoT) devices
  • Wireless communication devices with limited power sources

Problems Solved

  • Limited power sources for wireless communication devices
  • Need for energy-efficient data storage and transmission devices

Benefits

  • Enables wireless transmission of data without relying on external power sources
  • Energy harvesting component allows for continuous operation without the need for frequent battery replacements
  • Can be used in various applications where power sources are limited or not readily available

Abstract

A data storage device comprises a non-volatile storage medium configured to store user data, a data port configured to transmit data between a host computer system and the data storage device, an energy harvesting component configured to produce electrical energy from an ambient energy source, and a beacon component, configured to wirelessly transmit a signal. The beacon component is configured to consume the electrical energy to wirelessly transmit the signal. The data storage device may further comprise an energy store configured to store the electrical energy produced by the energy harvesting component as stored energy.

POWER MANAGEMENT FOR WIRELESS DEVICE LOSS PREVENTION AND DISCOVERY (17852301)

Main Inventor

Matthew Harris KLAPMAN


Brief explanation

The abstract describes a data storage device that includes a non-volatile storage medium, a data port, a beacon component, and a power manager. The beacon component wirelessly transmits a signal according to a beacon configuration and can adjust its energy consumption based on the power availability level determined by the power manager.
  • The data storage device has a non-volatile storage medium for storing user data.
  • It has a data port for transferring data between a host computer system and the storage device.
  • The device includes a beacon component that wirelessly transmits a signal based on a beacon configuration.
  • A power manager is present to provide electrical energy to the beacon component.
  • The beacon component can adjust its energy consumption by changing the beacon configuration, based on the power availability level determined by the power manager.

Potential Applications

  • This technology can be used in various data storage devices such as external hard drives, USB drives, or solid-state drives.
  • It can be beneficial in scenarios where power availability is limited or needs to be managed efficiently, such as in portable devices or remote locations.

Problems Solved

  • The technology addresses the issue of power consumption in data storage devices with beacon components.
  • It allows the beacon component to adjust its energy consumption based on the available power, optimizing power usage and extending battery life.

Benefits

  • The adjustable beacon configuration helps in managing power consumption effectively.
  • It allows for longer battery life in devices with limited power availability.
  • The technology provides flexibility in adapting to different power conditions, ensuring optimal performance of the data storage device.

Abstract

A data storage device comprises a non-volatile storage medium configured to store user data, a data port configured to transmit data between a host computer system and the data storage device, a beacon component, and a power manager configured to provide electrical energy to the beacon component. The beacon component is configured to wirelessly transmit a signal in accordance with a beacon configuration, and, in response to determining a power availability level associated with the power manager, adjust the beacon configuration to change a rate of consumption of electrical energy by the beacon component.
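One aspect of such an adjustable beacon configuration is the transmit interval, which can be stretched as the power availability level drops. The thresholds and intervals below are made-up values for illustration only:

```python
def beacon_interval_seconds(power_availability):
    """Map a power availability level in [0.0, 1.0] to how often the
    beacon transmits; longer intervals consume less energy."""
    if power_availability >= 0.75:
        return 1    # ample energy: beacon every second
    if power_availability >= 0.25:
        return 10   # constrained: slow the beacon down
    return 60       # scarce: minimal duty cycle, still discoverable

print(beacon_interval_seconds(0.9), beacon_interval_seconds(0.1))  # 1 60
```

Lowering the beacon rate rather than stopping it entirely keeps the device discoverable while matching consumption to whatever the power manager can supply.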