17996533. ERROR-PROOF INFERENCE CALCULATION FOR NEURAL NETWORKS simplified abstract (Robert Bosch GmbH)

From WikiPatents

ERROR-PROOF INFERENCE CALCULATION FOR NEURAL NETWORKS

Organization Name

Robert Bosch GmbH

Inventor(s)

Christoph Schorn of Leonberg (DE)

Leonardo Luiz Ecco of Stuttgart (DE)

Andre Guntoro of Weil Der Stadt (DE)

Jo Pletinckx of Sersheim (DE)

Sebastian Vogel of Schaidt (DE)

ERROR-PROOF INFERENCE CALCULATION FOR NEURAL NETWORKS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17996533, titled 'ERROR-PROOF INFERENCE CALCULATION FOR NEURAL NETWORKS'.

The abstract describes a method for operating a hardware platform for the inference calculation of a convolutional neural network.

  • Input matrix convolved with convolution kernels to generate two-dimensional output matrices.
  • Convolution kernels summed elementwise to form a control kernel.
  • Input matrix convolved with control kernel to generate a two-dimensional control matrix.
  • Each control matrix element compared with the sum of the corresponding elements across all output matrices.
  • A deviation triggers at least one additional control calculation to verify whether the corresponding output matrix elements were calculated correctly.
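The steps above rest on the linearity of convolution: convolving the input with the elementwise sum of all kernels must give the elementwise sum of the individual output matrices, so one extra convolution acts as a checksum over the whole layer. A minimal pure-Python sketch of that identity (toy 3×3 input, two 2×2 kernels, all names and values assumed for illustration, not taken from the patent):

```python
def conv2d(x, k):
    """Valid-mode 2-D convolution (implemented as cross-correlation;
    the checksum identity holds either way) on nested lists."""
    kh, kw = len(k), len(k[0])
    oh, ow = len(x) - kh + 1, len(x[0]) - kw + 1
    return [[sum(x[i + a][j + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)] for i in range(oh)]

# Toy input matrix and two convolution kernels.
x = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
kernels = [
    [[1, 0], [0, -1]],
    [[0, 1], [1, 0]],
]

# Step 1: ordinary inference -- one output matrix per kernel.
outputs = [conv2d(x, k) for k in kernels]

# Step 2: sum the kernels elementwise to form the control kernel.
kh, kw = len(kernels[0]), len(kernels[0][0])
control_kernel = [[sum(k[a][b] for k in kernels)
                   for b in range(kw)] for a in range(kh)]

# Step 3: convolve the input with the control kernel.
control = conv2d(x, control_kernel)

# Step 4: by linearity, each control element must equal the sum of
# the corresponding output elements; a mismatch would flag a fault.
for i in range(len(control)):
    for j in range(len(control[0])):
        assert control[i][j] == sum(o[i][j] for o in outputs)
print("checksum matches:", control)  # -> checksum matches: [[2, 4], [8, 10]]
```

The appeal of this scheme is that the check costs one additional convolution, regardless of how many kernels the layer has, instead of duplicating every per-kernel convolution.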

Potential Applications:
  • Image recognition systems
  • Autonomous vehicles
  • Medical imaging analysis

Problems Solved:
  • Detection of calculation errors during convolutional neural network inference on hardware platforms
  • Efficient verification of neural network outputs without duplicating the full computation

Benefits:
  • Low-overhead error checking compared with fully redundant inference
  • Increased confidence in the correctness of neural network predictions
  • Targeted recalculation only where a deviation is detected

Commercial Applications:
  • AI-powered devices and systems
  • Cloud computing services for AI applications
  • Edge computing devices for real-time AI processing

Questions about the technology:
  1. How does this method improve the efficiency of error detection in convolutional neural network inference calculations?
  2. What are the key advantages of using a control kernel in the convolution process?

Frequently Updated Research:
  • Stay updated on advancements in hardware acceleration and fault tolerance for neural network computations.


Original Abstract Submitted

A method for operating a hardware platform for the inference calculation of a convolutional neural network. In the method: an input matrix having input data of the neural network is convolved by the acceleration module with a plurality of convolution kernels, so that a multiplicity of two-dimensional output matrices results; the convolution kernels are summed elementwise to form a control kernel; the input matrix is convolved by the acceleration module with the control kernel, so that a two-dimensional control matrix results; each element of the control matrix is compared with the sum of the elements corresponding thereto in the output matrices; if this comparison yields a deviation for an element of the control matrix, then in response it is checked, with at least one additional control calculation, whether an element of at least one output matrix corresponding to this element of the control matrix was correctly calculated.
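The final step of the abstract, localizing and re-checking a deviating element, can be sketched as follows. This is a hedged illustration with hand-written toy matrices (the `outputs` and `control` values are assumed, chosen so that `control` equals the elementwise sum of `outputs` before the fault is injected; none of the names come from the patent):

```python
# Toy per-kernel output matrices and the matching control matrix
# (control[i][j] == sum of outputs[k][i][j] when everything is correct).
outputs = [
    [[-4, -4], [-4, -4]],   # output matrix of kernel 1
    [[ 6,  8], [12, 14]],   # output matrix of kernel 2
]
control = [[2, 4], [8, 10]]  # input convolved with the control kernel

# Simulate a hardware fault corrupting one output element (8 -> 9).
outputs[1][0][1] = 9

# Compare each control element with the sum of the corresponding
# output elements; any deviation marks a position whose output
# elements must be verified by an additional control calculation.
suspect = [(i, j)
           for i in range(len(control))
           for j in range(len(control[0]))
           if control[i][j] != sum(o[i][j] for o in outputs)]
print("positions to re-check:", suspect)  # -> positions to re-check: [(0, 1)]
```

Note that the checksum only narrows the fault down to a position in the output matrices; which of the per-kernel outputs at that position is wrong is what the additional control calculation in the claim then determines.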