18390524. Efficient Embedding Table Storage and Lookup simplified abstract (Google LLC)

From WikiPatents

Efficient Embedding Table Storage and Lookup

Organization Name

Google LLC

Inventor(s)

Gaurav Menghani of Santa Clara, CA (US)

Efficient Embedding Table Storage and Lookup - A simplified explanation of the abstract

This abstract first appeared for US patent application 18390524, titled 'Efficient Embedding Table Storage and Lookup'.

The present disclosure describes methods for efficient embedding table storage and lookup in machine-learning models: each embedding is compressed individually so that it can be decompressed independently of the others, and the compressed embeddings are packed with the model. The method includes (a brief code sketch follows the list):

  • Obtaining an embedding table with multiple embeddings associated with corresponding indexes
  • Compressing each embedding individually for independent decompression
  • Packing the compressed embeddings with a machine-learning model
  • Locating an embedding based on a received input and determined lookup value
  • Decompressing the located embedding independently
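A minimal sketch of the compression and packing steps, assuming a Python dictionary keyed by table index and zlib as the per-embedding codec (the patent does not name a specific codec, storage layout, or these function names; they are illustrative assumptions):

```python
import zlib

import numpy as np


def compress_embedding_table(table: dict[int, np.ndarray]) -> dict[int, bytes]:
    """Compress each embedding on its own, so any single entry can later be
    decompressed without decompressing any other entry in the table."""
    return {
        index: zlib.compress(vec.astype(np.float32).tobytes())
        for index, vec in table.items()
    }


# Toy table: 1,000 embeddings of dimension 64, keyed by their table index.
embedding_table = {i: np.random.rand(64).astype(np.float32) for i in range(1000)}

# The per-entry compressed table would then be packed (serialized) alongside
# the machine-learning model, e.g. as an extra asset in its saved format.
compressed_table = compress_embedding_table(embedding_table)
```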

Potential Applications

  • Enhancing the efficiency of embedding table storage and lookup in machine-learning models
  • Improving the speed and performance of machine-learning algorithms

Problems Solved

  • Addressing the need for efficient storage and retrieval of embeddings in machine-learning models
  • Optimizing the embedding table lookup process for faster computation

Benefits

  • Faster and more efficient storage and retrieval of embeddings
  • Enhanced performance and speed of machine-learning models

Commercial Applications

This technology can be applied in industries such as e-commerce, finance, and healthcare to improve the efficiency and performance of machine-learning models.

Questions about Efficient Embedding Table Storage and Lookup

1. How does compressing each embedding individually improve the efficiency of storage and retrieval in machine-learning models?

  - Compressing each embedding individually allows for independent decompression, reducing the overall computational load and improving efficiency.
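For example, continuing the sketch above (zlib and the compressed_table layout are illustrative assumptions, not details from the patent), a lookup pays the decompression cost of only the single entry it retrieves:

```python
import zlib

import numpy as np


def lookup_embedding(compressed_table: dict[int, bytes],
                     lookup_value: int, dim: int = 64) -> np.ndarray:
    """Locate the compressed entry for the lookup value and decompress only
    that entry; every other entry in the table stays compressed."""
    raw = zlib.decompress(compressed_table[lookup_value])
    return np.frombuffer(raw, dtype=np.float32).reshape(dim)


# Only entry 42 is decompressed; the remaining entries are untouched.
vector = lookup_embedding(compressed_table, lookup_value=42)
```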

2. What are the potential applications of this technology beyond machine-learning models?

  - Beyond general machine-learning model serving, this approach can benefit any system built around large embedding tables, such as natural language processing, image recognition, and recommendation systems, where faster lookup improves performance and speed.


Original Abstract Submitted

The present disclosure provides systems, methods, and computer program products for providing efficient embedding table storage and lookup in machine-learning models. A computer-implemented method may include obtaining an embedding table comprising a plurality of embeddings respectively associated with a corresponding index of the embedding table, compressing each particular embedding of the embedding table individually allowing each respective embedding of the embedding table to be decompressed independent of any other embedding in the embedding table, packing the embedding table comprising individually compressed embeddings with a machine-learning model, receiving an input to use for locating an embedding in the embedding table, determining a lookup value based on the input to search indexes of the embedding table, locating the embedding based on searching the indexes of the embedding table for the determined lookup value, and decompressing the located embedding independent of any other embedding in the embedding table.