Qualcomm Incorporated (20240185088). SCALABLE WEIGHT REPARAMETERIZATION FOR EFFICIENT TRANSFER LEARNING simplified abstract

From WikiPatents

SCALABLE WEIGHT REPARAMETERIZATION FOR EFFICIENT TRANSFER LEARNING

Organization Name

Qualcomm Incorporated

Inventor(s)

Byeonggeun Kim of Seoul (KR)

Juntae Lee of Seoul (KR)

Seunghan Yang of Incheon (KR)

Simyung Chang of Suwon (KR)

SCALABLE WEIGHT REPARAMETERIZATION FOR EFFICIENT TRANSFER LEARNING - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240185088 titled 'SCALABLE WEIGHT REPARAMETERIZATION FOR EFFICIENT TRANSFER LEARNING'.

Simplified Explanation

The present disclosure describes techniques for scalable weight reparameterization for efficient transfer learning in neural networks.

  • Training a first neural network to perform a task, starting from the weights of a machine learning (ML) model pre-trained on a different task.
  • Learning reparameterizing weights for each layer of the ML model during this training.
  • Training a second neural network to generate per-layer gating parameters based on a cost factor and the trained first neural network.
  • Updating the ML model by combining its original weights, the per-layer gating parameters, and the learned reparameterizing weights.
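The final update step can be sketched in a few lines. This is a toy illustration under assumptions not stated in the abstract (random stand-in weights, a simple additive reparameterization, and hard 0/1 gates); the patent does not specify these details.

```python
import random

random.seed(0)

# Pre-trained per-layer weights for the original task
# (assumed: 3 toy layers, each a flat list of 4 values)
pretrained = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]

# Learned reparameterizing weights (deltas) per layer, standing in for
# the output of training the first neural network
deltas = [[0.01 * random.gauss(0, 1) for _ in range(4)] for _ in range(3)]

# Per-layer gating parameters from the second network: a gate near 1
# applies that layer's reparameterization, near 0 keeps the original weights
gates = [1.0, 0.0, 1.0]

# Update: combine original weights, gates, and reparameterizing weights
updated = [[w + g * d for w, d in zip(layer_w, layer_d)]
           for layer_w, layer_d, g in zip(pretrained, deltas, gates)]
```

With the gates above, the second layer keeps its pre-trained weights unchanged while the first and third are reparameterized, which is the scalability lever: the cost factor controls how many layers end up gated on.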

Potential Applications

This technology can be applied in various fields such as computer vision, natural language processing, and speech recognition for efficient transfer learning tasks.

Problems Solved

1. Efficient transfer learning from pre-trained models to new tasks.
2. Scalable weight reparameterization for improved performance in neural networks.

Benefits

1. Faster adaptation to new tasks.
2. Improved accuracy and efficiency in transfer learning scenarios.

Potential Commercial Applications

This technology can be utilized in industries such as healthcare, finance, and e-commerce for developing advanced machine learning models with reduced training time and improved performance.

Possible Prior Art

One possible prior art in this field is the use of transfer learning techniques in neural networks to improve model performance on new tasks.

What are the limitations of the proposed technique?

The abstract does not mention any potential limitations of the proposed technique.

How does this technique compare to existing methods for transfer learning?

The abstract does not provide a direct comparison of this technique to existing methods for transfer learning.


Original Abstract Submitted

certain aspects of the present disclosure provide techniques and apparatus for scalable weight reparameterization for efficient transfer learning. one example method generally includes training a first neural network to perform a task based on weights defined for a machine learning (ml) model trained to perform a different task and learned reparameterizing weights for each of a plurality of layers in the ml model; training a second neural network to generate a plurality of gating parameters based on a cost factor and the trained first neural network, each respective gating parameter of the plurality of gating parameters corresponding to weights in a respective layer of the plurality of layers; and updating the ml model based on the weights defined for the ml model, each gating parameter for each layer of the plurality of layers, and the learned reparameterizing weights for each layer of the plurality of layers.
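The abstract does not say how the cost factor enters training. As a loose, hypothetical sketch (the objective below is an assumption, not the patent's method), one common pattern is to penalize the number of layers whose weights are reparameterized, so a larger cost factor pushes the gating network toward fewer active gates:

```python
# Hypothetical objective: task loss plus a cost-factor penalty on gates.
def total_cost(task_loss, gates, cost_factor):
    # gates are per-layer values in [0, 1]; their sum approximates the
    # number of layers that keep their reparameterized weights
    return task_loss + cost_factor * sum(gates)

gates = [1.0, 0.0, 1.0]
cheap = total_cost(0.5, gates, cost_factor=0.01)
costly = total_cost(0.5, gates, cost_factor=0.5)
```

Under this sketch, the same gate configuration becomes more expensive as the cost factor grows, so training would trade a little task loss for turning more gates off.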