Intel Corporation (20240320179). SYSTEM DECODER FOR TRAINING ACCELERATORS simplified abstract


SYSTEM DECODER FOR TRAINING ACCELERATORS

Organization Name

Intel Corporation

Inventor(s)

Francesc Guim Bernat of Barcelona (ES)

Da-Ming Chiang of San Jose CA (US)

Kshitij A. Doshi of Tempe AZ (US)

Suraj Prabhakaran of Aachen (DE)

Mark A. Schmisseur of Phoenix AZ (US)

SYSTEM DECODER FOR TRAINING ACCELERATORS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240320179, titled 'SYSTEM DECODER FOR TRAINING ACCELERATORS'.

The abstract describes an artificial intelligence (AI) system with a first hardware platform, a fabric interface coupling it to a second hardware platform, a processor programmed to operate on an AI problem, and a training accelerator whose inter-chip links let it communicate with other accelerators on the same or a different platform.

  • The AI system includes a first hardware platform and a fabric interface for connecting it to a second hardware platform.
  • A processor on the first hardware platform is programmed to handle AI tasks.
  • A first training accelerator on the first platform has a platform inter-chip link (ICL) for communicating with accelerators on the same hardware platform and a fabric ICL for communicating with accelerators on other platforms.
  • A system decoder operates the platform and fabric ICLs to share the accelerator hardware's data between training accelerators without involving the processor.
  • The design aims to enhance AI processing capabilities through efficient communication and data sharing among accelerators; a minimal sketch of this topology follows the list.
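To make the architecture above concrete, here is a minimal Python sketch of the described topology, under stated assumptions: the class names (HardwarePlatform, TrainingAccelerator), the field names, and the peer-list representation of the ICLs are illustrative choices made for this sketch, not identifiers or structures taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass(eq=False)  # identity comparison; avoids recursing through the platform back-reference
class TrainingAccelerator:
    """One training accelerator and its two kinds of inter-chip links (ICLs)."""
    name: str
    platform: Optional["HardwarePlatform"] = None
    # Same-platform peers reachable over the platform ICL.
    platform_icl_peers: List["TrainingAccelerator"] = field(default_factory=list)
    # Cross-platform peers reachable over the fabric ICL.
    fabric_icl_peers: List["TrainingAccelerator"] = field(default_factory=list)


@dataclass
class HardwarePlatform:
    """A hardware platform hosting a processor and one or more training accelerators."""
    name: str
    accelerators: List[TrainingAccelerator] = field(default_factory=list)

    def add_accelerator(self, acc: TrainingAccelerator) -> None:
        acc.platform = self
        # Every accelerator already on this platform becomes a platform-ICL peer.
        for other in self.accelerators:
            acc.platform_icl_peers.append(other)
            other.platform_icl_peers.append(acc)
        self.accelerators.append(acc)


def connect_over_fabric(a: TrainingAccelerator, b: TrainingAccelerator) -> None:
    """Model the fabric interface coupling accelerators on different platforms."""
    a.fabric_icl_peers.append(b)
    b.fabric_icl_peers.append(a)


# Two platforms and three accelerators, mirroring the abstract.
platform_1, platform_2 = HardwarePlatform("platform-1"), HardwarePlatform("platform-2")
acc_1, acc_2, acc_3 = (TrainingAccelerator(n) for n in ("acc-1", "acc-2", "acc-3"))
platform_1.add_accelerator(acc_1)
platform_1.add_accelerator(acc_2)   # acc-1 and acc-2 share a platform ICL
platform_2.add_accelerator(acc_3)
connect_over_fabric(acc_1, acc_3)   # acc-1 reaches acc-3 over the fabric ICL
```

The peer lists are only bookkeeping for which accelerators are reachable over which kind of link; in the patent the ICLs are hardware links, so this model captures the topology, not the transport.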

Potential Applications

  • This technology can be applied in various industries requiring advanced AI processing, such as healthcare, finance, and autonomous vehicles.
  • It can improve the efficiency and speed of AI training tasks, leading to better performance in complex AI models.

Problems Solved

  • The system addresses the need for faster and more efficient AI training by enabling direct communication between training accelerators.
  • It streamlines the data sharing process among different hardware platforms, reducing the workload on the main processor.

Benefits

  • Improved AI processing speed and efficiency.
  • Enhanced performance of AI models.
  • Streamlined data sharing process among training accelerators.

Commercial Applications

  • This technology can be utilized in AI research labs, data centers, and industries requiring high-performance AI systems.
  • It has the potential to revolutionize AI training processes and accelerate the development of advanced AI applications.

Questions about the technology

  1. How does the system decoder facilitate data sharing between training accelerators?
  2. What are the key advantages of using inter-chip links for communication in AI systems?


Original Abstract Submitted

There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
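Continuing the sketch above, the hypothetical SystemDecoder class below illustrates the role the abstract assigns to the system decoder: selecting the platform ICL for a peer on the same hardware platform and the fabric ICL for a peer on another platform, with no involvement of the host processor on the data path. The class name, the share_data method, and the string results are assumptions made for illustration, not the patent's mechanism.

```python
class SystemDecoder:
    """Hypothetical sketch of the system decoder's routing role.

    The host processor never appears on this path, mirroring the abstract's
    "without aid of the processor" wording.
    """

    def __init__(self, owner: TrainingAccelerator):
        self.owner = owner

    def share_data(self, dest: TrainingAccelerator, payload: bytes) -> str:
        # Same platform: use the platform inter-chip link.
        if dest in self.owner.platform_icl_peers:
            return f"{len(payload)} bytes to {dest.name} over the platform ICL"
        # Different platform: use the fabric inter-chip link.
        if dest in self.owner.fabric_icl_peers:
            return f"{len(payload)} bytes to {dest.name} over the fabric ICL"
        raise ValueError(f"no inter-chip link from {self.owner.name} to {dest.name}")


decoder = SystemDecoder(acc_1)
print(decoder.share_data(acc_2, b"\x00" * 1024))   # platform ICL: same platform
print(decoder.share_data(acc_3, b"\x00" * 1024))   # fabric ICL: different platform
```

In this model the decoder consults only the accelerator's own link tables, which is one way to read the abstract's requirement that data be shared between accelerators without aid of the processor.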