20240054403. RESOURCE EFFICIENT FEDERATED EDGE LEARNING WITH HYPERDIMENSIONAL COMPUTING simplified abstract (Intel Corporation)

RESOURCE EFFICIENT FEDERATED EDGE LEARNING WITH HYPERDIMENSIONAL COMPUTING

Organization Name

Intel Corporation

Inventor(s)

Nikita Zeulin of Tampere (FI)

Olga Galinina of Tampere (FI)

Sergey Andreev of Tampere (FI)

Nageen Himayat of Fremont CA (US)

RESOURCE EFFICIENT FEDERATED EDGE LEARNING WITH HYPERDIMENSIONAL COMPUTING - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240054403 titled 'RESOURCE EFFICIENT FEDERATED EDGE LEARNING WITH HYPERDIMENSIONAL COMPUTING'.

Simplified Explanation

The abstract describes a device that trains a hyperdimensional computing (HDC) model using its memory and processing circuitry: it trains one or more independent sub-models of the HDC model and transmits them to another computing device, such as a server. The device can be one of many devices, including edge computing devices and IoT nodes. Training involves transforming training data points into hyperdimensional representations, initializing a prototype from those representations, and iteratively refining the prototype (a minimal sketch follows the bullet list below).

  • The device trains independent sub-models of a hyperdimensional computing (HDC) model.
  • The trained sub-models can be transmitted to another computing device.
  • The device can be an edge computing device or an IoT node.
  • Training involves transforming training data points into hyperdimensional representations.
  • The prototype is initialized using these representations and then iteratively trained.
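
To make the training steps above concrete, below is a minimal sketch of the general HDC classification pattern in Python/NumPy. It is not the patented method: the dimension D, the random bipolar projection encoder, and the perceptron-style prototype refinement are common choices in the HDC literature and are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 10_000        # hyperdimensional space size (illustrative)
N_FEATURES = 64   # input feature count (illustrative)
N_CLASSES = 4     # number of class prototypes (illustrative)

# Fixed random bipolar projection: one row per input feature.
projection = rng.choice([-1.0, 1.0], size=(N_FEATURES, D))

def encode(x):
    """Transform one training data point into a bipolar hypervector."""
    return np.sign(x @ projection)

def train_prototypes(X, y, epochs=5):
    """Initialize per-class prototypes by bundling, then refine them."""
    H = np.array([encode(x) for x in X])
    protos = np.zeros((N_CLASSES, D))
    for h, label in zip(H, y):            # initialization: sum (bundle)
        protos[label] += h                # each class's hypervectors
    for _ in range(epochs):               # iterative training: on a miss,
        for h, label in zip(H, y):        # pull the right prototype toward
            pred = np.argmax(protos @ h)  # the point, push the wrong away
            if pred != label:
                protos[label] += h
                protos[pred] -= h
    return protos

# Example with random data standing in for local sensor readings:
X = rng.normal(size=(100, N_FEATURES))
y = rng.integers(0, N_CLASSES, size=100)
local_prototypes = train_prototypes(X, y)
```

The prototype matrix returned here plays the role of a locally trained sub-model: it is what a device would transmit instead of the raw training data.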

Potential Applications:

  • Edge computing: The device can be used in edge computing scenarios where training models locally and transmitting them to a server can reduce latency and bandwidth requirements.
  • IoT: The device can be used in IoT nodes to train models locally and transmit them to a central server for further processing.
  • Machine learning: The device can be used to train machine learning models with the HDC approach, whose simple elementwise arithmetic and tolerance to noise suit resource-constrained hardware.

Problems Solved:

  • Latency and bandwidth: Training sub-models locally and transmitting only those sub-models removes the need to upload large volumes of raw training data, cutting latency and bandwidth requirements.
  • Distributed training: Independent sub-models can be trained on different devices and then combined on a central server (see the combine-step sketch after this list).
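
The abstract does not say how the server combines the uploaded sub-models. Because HDC prototypes are additive bundles of hypervectors, one plausible federated combine step is element-wise averaging of per-class prototypes; the sketch below assumes that rule, and `combine_submodels` is a hypothetical name, not an API from the application.

```python
import numpy as np

def combine_submodels(submodels):
    """Server-side combine step (assumed rule: element-wise average).

    submodels: list of (n_classes, D) prototype arrays, one per device.
    Averaging sums of hypervectors yields a global prototype set of
    the same shape, ready to broadcast back to the devices.
    """
    return np.mean(np.stack(submodels), axis=0)

# e.g., three edge devices each upload locally trained prototypes:
# global_protos = combine_submodels([protos_a, protos_b, protos_c])
```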

Benefits:

  • Reduced data transmission: Transmitting trained sub-models instead of raw training data shrinks the upload per round, saving bandwidth and reducing latency (a back-of-the-envelope comparison follows this list).
  • Distributed training: Independent sub-models can be trained on different devices in parallel, leveraging the computational power of multiple devices.
  • Flexibility: The device can be deployed in a range of edge computing and IoT scenarios, giving flexibility in where and how models are trained.
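
For a sense of scale, here is a back-of-the-envelope comparison; every number (dimension, class count, dataset size, dtypes) is an illustrative assumption, not a figure from the application.

```python
D, n_classes = 10_000, 4
model_bytes = n_classes * D * 1     # int8 prototypes: ~40 KB per upload
raw_bytes = 100_000 * 64 * 4        # 100k float32 points x 64 features: ~25.6 MB
print(raw_bytes / model_bytes)      # 640.0 -> the model upload is ~640x smaller
```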


Original Abstract Submitted

A device to train a hyperdimensional computing (HDC) model may include memory and processing circuitry to train one or more independent sub-models of the HDC model and transmit the one or more independent sub-models to another computing device, such as a server. The device may be one of a plurality of devices, such as edge computing devices, edge or Internet of Things (IoT) nodes, or the like. Training of the one or more independent sub-models of the HDC model may include transforming one or more training data points to one or more hyperdimensional representations, initializing a prototype using the hyperdimensional representations of the one or more training data points, and iteratively training the initialized prototype.