18353911. SIGNAL PROCESSING APPARATUS FOR REDUCING AMOUNT OF MID-COMPUTATION DATA TO BE STORED, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM simplified abstract (CANON KABUSHIKI KAISHA)


SIGNAL PROCESSING APPARATUS FOR REDUCING AMOUNT OF MID-COMPUTATION DATA TO BE STORED, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM

Organization Name

CANON KABUSHIKI KAISHA

Inventor(s)

Hayato Oura of Tokyo (JP)

Takayuki Komatsu of Kanagawa (JP)

Takaaki Yokoi of Kanagawa (JP)

SIGNAL PROCESSING APPARATUS FOR REDUCING AMOUNT OF MID-COMPUTATION DATA TO BE STORED, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - A simplified explanation of the abstract

This abstract first appeared for US patent application 18353911 titled 'SIGNAL PROCESSING APPARATUS FOR REDUCING AMOUNT OF MID-COMPUTATION DATA TO BE STORED, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM'.

Simplified Explanation

The patent application describes a signal processing apparatus that executes the convolution operations of predetermined layers of a neural network while reducing the amount of mid-computation data that must be held in storage. Instead of storing the raw output of the first layer's convolution, the apparatus passes that output through a compression layer, itself configured as a neural network, and transfers the resulting compressed "first form" data to storage. When the data is needed again, a restoration layer, also configured as a neural network, is applied to the stored first form data to restore the pre-compression data, and the restored data is fed as input to the convolution operation of the second layer among the predetermined layers.

  • The apparatus executes the convolution operations of predetermined layers constituting a neural network.
  • The output data of the first layer's convolution is passed through a compression layer, itself configured as a neural network, which compresses it into first form data.
  • The compressed first form data is transferred to and held in storage.
  • A restoration layer, also configured as a neural network, is applied to the first form data read back from storage to restore the pre-compression data.
  • The restored data is used as the input to the convolution operation of the second layer among the predetermined layers (a code sketch of this pipeline follows this list).
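Below is a minimal sketch of this compress-on-store, restore-on-read pipeline, written against PyTorch as an assumed framework. The channel counts, the use of 1×1 convolutions for the compression and restoration layers, and all names (CompressionLayer, RestorationLayer, conv1, conv2, etc.) are illustrative assumptions chosen for clarity, not details taken from the patent application.

```python
# Minimal sketch of the compress-on-store / restore-on-read pipeline.
# Assumes PyTorch; channel counts and the 1x1 convolutions used for the
# compression and restoration layers are illustrative, not from the patent.
import torch
import torch.nn as nn

class CompressionLayer(nn.Module):
    """Neural-network layer that compresses a feature map into 'first form' data."""
    def __init__(self, in_channels, compressed_channels):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, compressed_channels, kernel_size=1)

    def forward(self, x):
        return self.reduce(x)  # fewer channels -> less mid-computation data to store

class RestorationLayer(nn.Module):
    """Neural-network layer that restores pre-compression data from 'first form' data."""
    def __init__(self, compressed_channels, out_channels):
        super().__init__()
        self.expand = nn.Conv2d(compressed_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.expand(x)

# Predetermined layers of the main network (illustrative sizes).
conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)    # first layer
conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)   # second layer
compress = CompressionLayer(in_channels=64, compressed_channels=16)
restore = RestorationLayer(compressed_channels=16, out_channels=64)

x = torch.randn(1, 3, 128, 128)        # input signal (e.g. an image)
out1 = conv1(x)                        # convolution of the first layer
first_form = compress(out1)            # compression layer -> first form data
stored = first_form.detach().clone()   # transfer the first form data to storage
restored = restore(stored)             # restoration layer on the stored data
out2 = conv2(restored)                 # input to the second layer's convolution
print(out1.numel(), stored.numel())    # the stored tensor holds 4x fewer elements
```

In this sketch only the 16-channel first form tensor needs to reside in storage between the two convolutions, which is the stated aim of reducing the amount of mid-computation data to be stored. In a real system the compression and restoration layers would presumably be trained so that the restored data closely approximates the pre-compression data.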

Potential Applications:

  • Image and video processing: The compression and restoration operations can be used to efficiently process and store large amounts of visual data.
  • Speech recognition: The technology can be applied to compress and restore audio data, improving the efficiency of speech recognition algorithms.
  • Data compression: The compression layer can be used to reduce the size of various types of data, enabling more efficient storage and transmission.

Problems Solved:

  • Reduced mid-computation data: Compressing the intermediate output of a layer before it is stored shrinks the amount of data that must be written to and read back from storage during processing.
  • Improved storage efficiency: Because only the compressed first form data is held between layers, the technology makes more efficient use of storage resources.
  • Enhanced computational efficiency: Smaller intermediate data means less data traffic between the processing unit and storage, which can improve the overall efficiency of signal processing tasks.

Benefits:

  • Reduced storage requirements: The compression layer reduces the amount of storage needed to hold intermediate (mid-computation) data between layers.
  • Faster processing: With less intermediate data to transfer to and from storage, memory traffic is reduced, which can shorten processing times.
  • Preserved accuracy: The restoration layer reconstructs the pre-compression data before it is fed to the next layer, limiting the information lost to compression.
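As a purely illustrative, back-of-the-envelope example (the application does not specify any sizes or data formats): a 128×128 intermediate feature map with 64 float32 channels occupies 128 × 128 × 64 × 4 bytes = 4 MiB, whereas the same map compressed to 16 channels in the same format occupies 1 MiB, a fourfold reduction in the mid-computation data that has to be stored between the first and second convolution layers.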


Original Abstract Submitted

A signal processing apparatus executes a convolution operation of predetermined layers constituting a neural network; and transfers first form data to be stored in a storage. The apparatus executes, on output data outputted from a convolution operation of a first layer among the predetermined layers, an arithmetic operation of a compression layer that is configured by a neural network and compresses data, and outputs the first form data to be transmitted to the storage. The apparatus further executes, on the first form data stored in the storage, an arithmetic operation of a restoration layer that is configured by a neural network and restores pre-compression data, and outputs input data to be inputted to a convolution operation of a second layer among the predetermined layers.