Apple Inc. (20240320470). MULTICAST AND SIMULCAST IN NEURAL ENGINES simplified abstract
MULTICAST AND SIMULCAST IN NEURAL ENGINES
Organization Name
Apple Inc.
Inventor(s)
Christopher L. Mills of Saratoga, CA (US)
MULTICAST AND SIMULCAST IN NEURAL ENGINES - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240320470, titled 'MULTICAST AND SIMULCAST IN NEURAL ENGINES'.
Simplified Explanation:
This patent application describes a system-on-a-chip that pairs a central processor unit with a neural processor circuit. The neural processor circuit contains multiple neural engines and a data processor circuit. A compiler running on the central processor unit reads a neural network description and determines a data broadcast mode and an input data dimension configuration mode. The compiler generates task descriptors and distributes them to components of the neural processor circuit. The data processor circuit then broadcasts data to the neural engines according to the determined data dimension configuration mode, and the neural engines perform computational operations according to the determined input data dimension configuration mode.
- The system-on-a-chip circuit includes a neural processor circuit and a central processor unit.
- The neural processor circuit comprises multiple neural engines and a data processor circuit.
- The central processor unit executes a compiler that determines data broadcast mode and input data dimension configuration.
- Task descriptors are generated by the compiler and distributed to components of the neural processor circuit.
- The data processor circuit broadcasts data to the neural engines based on the determined data dimension configuration mode.
- The neural engines perform computational operations according to the input data dimension configuration mode.
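The compile-then-distribute flow summarized above can be sketched in Python. Everything here is an illustrative assumption, not Apple's actual design: the names `BroadcastMode`, `TaskDescriptor`, and `compile_tasks`, the replicate-vs-slice distinction, and the width threshold are all invented for the sketch, since the abstract does not define how its multicast and simulcast modes differ.

```python
from dataclasses import dataclass
from enum import Enum

# The abstract does not define how multicast differs from simulcast,
# so these two modes (replicate vs. slice) are purely illustrative.
class BroadcastMode(Enum):
    REPLICATE = "replicate"  # every engine receives the full input
    SLICE = "slice"          # each engine receives its own chunk

@dataclass
class TaskDescriptor:
    engine_id: int
    mode: BroadcastMode
    input_shape: tuple  # stands in for the "input data dimension configuration"

def compile_tasks(layer_width: int, num_engines: int) -> list:
    """Hypothetical compiler pass: choose a broadcast mode from the
    network description and emit one task descriptor per engine."""
    if layer_width <= 64:  # small layer: replicate it to every engine
        mode, shape = BroadcastMode.REPLICATE, (layer_width,)
    else:                  # large layer: split it across the engines
        mode, shape = BroadcastMode.SLICE, (layer_width // num_engines,)
    return [TaskDescriptor(i, mode, shape) for i in range(num_engines)]

tasks = compile_tasks(layer_width=256, num_engines=8)
print(tasks[0].mode.value, tasks[0].input_shape)  # slice (32,)
```

The key idea the sketch captures is that the mode decision is made once at compile time and baked into per-engine task descriptors, rather than being negotiated at run time.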
Potential Applications:
- Artificial intelligence
- Machine learning
- Neural network processing

Problems Solved:
- Efficient data processing in neural networks
- Optimized computational operations
- Streamlined task distribution

Benefits:
- Faster processing speeds
- Improved accuracy in neural network tasks
- Enhanced performance of AI systems
Commercial Applications: Advanced Neural Processor Circuit for AI Applications. This technology can be utilized in industries such as healthcare, finance, autonomous vehicles, and robotics for advanced data processing and AI capabilities. Market implications include improved efficiency, accuracy, and performance in AI-driven systems.
Prior Art: Readers can explore prior research on neural processor circuits, system-on-a-chip designs, and compiler technologies in the field of artificial intelligence and machine learning.
Frequently Updated Research: Stay updated on the latest advancements in neural processor circuits, compiler optimizations, and neural network processing techniques to enhance the performance of AI systems.
Questions about Neural Processor Circuits:

1. What are the key components of a neural processor circuit?
A neural processor circuit typically consists of neural engines, a data processor circuit, and a central processor unit.

2. How does a compiler optimize data processing in neural networks?
The compiler determines the data broadcast mode and input data dimension configuration to enhance the efficiency of computational operations in neural networks.
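The run-time side of that compiler decision, where the data processor circuit hands each neural engine its input, might look like the following sketch. The function name `distribute` and the `"replicate"`/`"slice"` mode strings are assumptions for illustration only; the abstract does not specify how its broadcast modes partition the buffer.

```python
def distribute(buffer, num_engines, mode):
    """Hypothetical data-processor step: hand each engine its input.
    'replicate' sends the whole buffer to every engine; 'slice'
    gives each engine a contiguous chunk. (Illustrative only.)"""
    if mode == "replicate":
        return {e: buffer for e in range(num_engines)}
    chunk = len(buffer) // num_engines
    return {e: buffer[e * chunk:(e + 1) * chunk] for e in range(num_engines)}

buf = list(range(8))
print(distribute(buf, 4, "replicate")[2])  # [0, 1, 2, 3, 4, 5, 6, 7]
print(distribute(buf, 4, "slice")[2])      # [4, 5]
```

In the replicate case every engine sees the same buffer object, which is the kind of sharing that makes a broadcast mode cheaper than copying per-engine inputs.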
Original Abstract Submitted
A system-on-a-chip circuit may include a neural processor circuit coupled to a central processor unit. The neural processor circuit may include a plurality of neural engines and a data processor circuit. The central processor unit is configured to execute a compiler, which is in turn configured to determine a data broadcast mode and an input data dimension configuration mode based on a neural network description. The compiler is configured to generate one or more task descriptors, the task descriptors distributed to components of the neural processor circuit. The data processor circuit is configured to broadcast data from the buffer to the plurality of neural engines based on the determined data dimension configuration mode. The neural engines are configured to perform computational operations according to the determined input data dimension configuration mode.