17988655. MEMORY-OPTIMIZED CONTRASTIVE LEARNING simplified abstract (Google LLC)

From WikiPatents


Organization Name

Google LLC

Inventor(s)

Hieu Hy Pham of Redwood City CA (US)

Zihang Dai of Pittsburgh PA (US)

Golnaz Ghiasi of Mountain View CA (US)

Hanxiao Liu of Santa Clara CA (US)

Wei Yu of Palo Alto CA (US)

Mingxing Tan of Newark CA (US)

Quoc V. Le of Sunnyvale CA (US)

MEMORY-OPTIMIZED CONTRASTIVE LEARNING - A simplified explanation of the abstract

This abstract first appeared for US patent application 17988655, titled 'MEMORY-OPTIMIZED CONTRASTIVE LEARNING'.

Simplified Explanation

The patent application describes methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an image encoder neural network and a text encoder neural network using memory-optimized contrastive learning.

  • An image encoder and a text encoder are trained jointly with contrastive learning, which pulls matched image-text pairs together and pushes mismatched pairs apart in a shared embedding space.
  • The training procedure is memory-optimized, reducing the memory required compared with standard contrastive training.
  • The method is implemented as computer programs encoded on computer storage media.
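To make the contrastive objective concrete, here is a minimal sketch of the standard symmetric contrastive (InfoNCE) loss used to train paired image and text encoders. This is the generic technique, not the patent's specific method; the function name, shapes, and the temperature value are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    image_emb, text_emb: arrays of shape (batch, dim), where row i of each
    array is a matched image/text pair. Illustrative sketch; names and the
    temperature value are assumptions, not taken from the patent.
    """
    # L2-normalize so dot products are cosine similarities.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix; the diagonal holds the positive pairs.
    logits = image_emb @ text_emb.T / temperature

    def xent(l):
        # Row-wise cross-entropy with the matched pair as the target class.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Matched pairs (identical rows) should yield a lower loss than shuffled pairs, since the diagonal then carries the highest similarities.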

Potential Applications

  • This technology can be applied in various fields such as computer vision, natural language processing, and machine learning.
  • It can be used to improve image and text recognition systems.
  • The trained neural networks can be used in applications like image search, language translation, and content recommendation.

Problems Solved

  • Standard contrastive training of image and text encoders is memory-intensive: the pairwise similarity computation grows with the square of the batch size, and large batches are typically needed for good results.
  • This technology solves the problem of optimizing memory usage during the training process, making larger effective batch sizes feasible.
  • It addresses the challenge of improving the performance and accuracy of image and text recognition systems.
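The summary does not spell out how memory is optimized. One common memory-saving idea in contrastive training is to compute the loss in row chunks so that only a small slice of the (batch, batch) similarity matrix exists at any time. The sketch below illustrates that generic idea under stated assumptions; it is hypothetical and should not be read as the patented technique.

```python
import numpy as np

def chunked_image_to_text_loss(image_emb, text_emb, chunk=2, temperature=0.07):
    """Image-to-text contrastive loss computed in row chunks.

    Only a (chunk, batch) slice of the logits matrix is materialized at a
    time, instead of the full (batch, batch) matrix. Hypothetical sketch of
    a generic memory-saving strategy, not the method claimed in the patent.
    """
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    n = image_emb.shape[0]
    total = 0.0
    for start in range(0, n, chunk):
        # Similarities of this chunk of images against all texts: (chunk, n).
        rows = image_emb[start:start + chunk] @ text_emb.T / temperature
        rows = rows - rows.max(axis=1, keepdims=True)  # numerical stability
        log_probs = rows - np.log(np.exp(rows).sum(axis=1, keepdims=True))
        # The positive for image start+j is text start+j.
        idx = np.arange(rows.shape[0])
        total -= log_probs[idx, start + idx].sum()
    return total / n
```

Because each row's softmax is independent of the chunking, the chunked result matches a full-matrix computation up to floating-point rounding, while peak memory for the logits drops from O(batch^2) to O(chunk * batch).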

Benefits

  • The memory-optimized contrastive learning technique improves the efficiency and effectiveness of training image and text encoder neural networks.
  • The invention provides a more accurate and reliable way to recognize and process images and text.
  • It enables the development of advanced applications in computer vision and natural language processing.


Original Abstract Submitted

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using memory-optimized contrastive learning to train image encoder and text encoder neural networks.