US Patent Application 17718738. Parameter Efficient Prompt Tuning for Efficient Models at Scale simplified abstract

From WikiPatents

Parameter Efficient Prompt Tuning for Efficient Models at Scale

Organization Name

Google LLC


Inventor(s)

Brian David Lester of Mountain View, CA (US)


Rami Al-rfou of Menlo Park, CA (US)


Noah Constant of Los Angeles, CA (US)


Parameter Efficient Prompt Tuning for Efficient Models at Scale - A simplified explanation of the abstract

  • This abstract appeared for US patent application number 17718738, titled 'Parameter Efficient Prompt Tuning for Efficient Models at Scale'

Simplified Explanation

This abstract describes a natural language processing method that uses trained prompts to condition a pre-trained machine learning model to produce outputs for a specific task. A small subset of parameters (the prompt) is trained for the task and then fed, together with the input data, into the pre-trained model. During prompt training, the pre-trained model's own parameters remain frozen, which reduces the computational resources required while still leveraging the knowledge the pre-trained model has already learned. A code sketch of this setup follows below.
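
The sketch below illustrates the two steps the explanation names: freezing every parameter of the pre-trained model and training only a small prompt that is prepended to the input. It is illustrative only and does not come from the patent; FrozenBackbone is a hypothetical stand-in for the large pre-trained model, and all names and sizes are assumptions.

 import torch
 import torch.nn as nn
 
 EMBED_DIM, PROMPT_LEN, VOCAB = 64, 8, 100
 
 class FrozenBackbone(nn.Module):
     """Toy stand-in for a large pre-trained model that maps a sequence
     of input embeddings to per-token logits (hypothetical, not the
     patent's model)."""
     def __init__(self):
         super().__init__()
         layer = nn.TransformerEncoderLayer(
             d_model=EMBED_DIM, nhead=4, batch_first=True)
         self.encoder = nn.TransformerEncoder(layer, num_layers=2)
         self.head = nn.Linear(EMBED_DIM, VOCAB)
 
     def forward(self, embeds):
         return self.head(self.encoder(embeds))
 
 class PromptTuned(nn.Module):
     """Wraps a frozen backbone with a small trainable prompt, as the
     simplified explanation describes."""
     def __init__(self, backbone, prompt_len=PROMPT_LEN):
         super().__init__()
         self.backbone = backbone
         # Freeze the pre-trained model's parameters during prompt training.
         for p in self.backbone.parameters():
             p.requires_grad = False
         # The only trainable, task-specific parameters: the prompt.
         self.prompt = nn.Parameter(torch.randn(prompt_len, EMBED_DIM) * 0.02)
 
     def forward(self, input_embeds):  # (batch, seq_len, EMBED_DIM)
         batch = input_embeds.size(0)
         prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
         # Prepend the trained prompt to the input before the frozen model.
         return self.backbone(torch.cat([prompt, input_embeds], dim=1))
 
 model = PromptTuned(FrozenBackbone())
 # Only the prompt parameters are handed to the optimizer.
 optimizer = torch.optim.Adam(
     [p for p in model.parameters() if p.requires_grad], lr=1e-3)
 out = model(torch.randn(2, 10, EMBED_DIM))  # shape: (2, 18, VOCAB)

Because gradients flow only into the prompt, the optimizer updates a few thousand parameters rather than the full model, which is the resource saving the abstract points to.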


Original Abstract Submitted

Systems and methods for natural language processing can leverage trained prompts to condition a large pre-trained machine-learned model to generate an output for a specific task. For example, a subset of parameters may be trained for the particular task to then be input with a set of input data into the pre-trained machine-learned model to generate the task-specific output. During the training of the prompt, the parameters of the pre-trained machine-learned model can be frozen, which can reduce the computational resources used during training while still leveraging the previously learned data from the pre-trained machine-learned model.