Parameter Efficient Prompt Tuning for Efficient Models at Scale: abstract simplified (17718738)

  • This abstract appeared for patent application number 17718738, titled 'Parameter Efficient Prompt Tuning for Efficient Models at Scale'

Simplified Explanation

This abstract describes a method for natural language processing that uses trained prompts to condition a large pre-trained machine-learned model to produce task-specific outputs. Rather than fine-tuning the full model, the approach trains only a small subset of parameters (the prompt) for the task; these prompt parameters are then fed into the pre-trained model together with the input data to generate the task-specific output. During prompt training, the pre-trained model's own parameters are frozen, which reduces the computational resources required while still leveraging the knowledge the pre-trained model already encodes.
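
To make the mechanism concrete, here is a minimal PyTorch sketch of this style of prompt tuning. It is an illustration under stated assumptions, not the patent's implementation: the class name PromptTunedModel, the use of a backbone that consumes token embeddings directly, and hyperparameters such as num_prompt_tokens, embed_dim, and the learning rate are all illustrative choices.

```python
import torch
import torch.nn as nn


class PromptTunedModel(nn.Module):
    """Wraps a frozen pre-trained model with a small set of trainable
    prompt embeddings that are prepended to every input sequence.

    Hypothetical sketch: assumes the wrapped model accepts token
    embeddings of shape (batch, seq_len, embed_dim) directly.
    """

    def __init__(self, pretrained_model, num_prompt_tokens=20, embed_dim=768):
        super().__init__()
        self.model = pretrained_model
        # Freeze every parameter of the pre-trained model; only the
        # prompt below receives gradient updates.
        for param in self.model.parameters():
            param.requires_grad = False
        # The trainable "subset of parameters": one embedding per prompt token.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) token embeddings
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        # Prepend the learned prompt to the input and run the frozen model.
        return self.model(torch.cat([prompt, input_embeds], dim=1))


# Example with a stand-in "pre-trained" encoder (again an assumption;
# any model taking (batch, seq_len, embed_dim) inputs would fit).
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2,
)
model = PromptTunedModel(backbone, num_prompt_tokens=20, embed_dim=768)

# Only the prompt parameters are handed to the optimizer, so optimizer
# state and gradient updates cover a tiny fraction of the full model.
optimizer = torch.optim.Adam([model.prompt], lr=0.3)
```

Because only the prompt tensor requires gradients, backpropagation stores activations for the frozen backbone but updates just a few thousand values rather than the full parameter set, which is the source of the computational savings the abstract describes.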


Original Abstract Submitted

Systems and methods for natural language processing can leverage trained prompts to condition a large pre-trained machine-learned model to generate an output for a specific task. For example, a subset of parameters may be trained for the particular task to then be input with a set of input data into the pre-trained machine-learned model to generate the task-specific output. During the training of the prompt, the parameters of the pre-trained machine-learned model can be frozen, which can reduce the computational resources used during training while still leveraging the previously learned data from the pre-trained machine-learned model.