17902505. INJECTION OF USER FEEDBACK INTO LANGUAGE MODEL ADAPTATION simplified abstract (Apple Inc.)
INJECTION OF USER FEEDBACK INTO LANGUAGE MODEL ADAPTATION
Organization Name
Apple Inc.
Inventor(s)
Jerome R. Bellegarda of Saratoga, CA (US)
INJECTION OF USER FEEDBACK INTO LANGUAGE MODEL ADAPTATION - A simplified explanation of the abstract
This abstract first appeared for US patent application 17902505, titled 'INJECTION OF USER FEEDBACK INTO LANGUAGE MODEL ADAPTATION'.
Simplified Explanation
The present disclosure describes a method for updating a language model based on user feedback. Here is a simplified explanation of the abstract:

- Given a user's text input, a language model predicts a set of tokens and the action the user will take in response to them.
- If the predicted action does not match the user's actual action, the language model is updated to reflect that feedback.
- The update modifies the output token probability distribution based on the actual user action.
- The language model is then updated to converge with a target language model using the modified output token probability distribution.
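The feedback loop above can be sketched in a few lines. This is a hypothetical illustration only: the patent does not disclose an implementation, and all function names, the action tokens, and the `boost`/`rate` parameters here are assumptions chosen to make the idea concrete.

```python
# Hypothetical sketch of the feedback-driven update described above.
# Names, tokens, and parameters are illustrative, not from the patent.

def modify_distribution(probs, actual_action, boost=0.2):
    """Shift probability mass toward the token implied by the
    user's actual action (the 'modified output token probability
    distribution' in the abstract)."""
    adjusted = {tok: p * (1 - boost) for tok, p in probs.items()}
    adjusted[actual_action] = adjusted.get(actual_action, 0.0) + boost
    return adjusted

def converge_toward_target(model_probs, target_probs, rate=0.5):
    """Interpolate the current model's distribution toward a target
    distribution, one step of convergence."""
    tokens = set(model_probs) | set(target_probs)
    return {
        tok: (1 - rate) * model_probs.get(tok, 0.0)
             + rate * target_probs.get(tok, 0.0)
        for tok in tokens
    }

# Toy example: the model predicts the user will "send", but the
# detected actual action is "edit", so the model is updated.
probs = {"send": 0.6, "edit": 0.3, "cancel": 0.1}
predicted_action = max(probs, key=probs.get)  # "send"
actual_action = "edit"                        # detected user action

if predicted_action != actual_action:
    modified = modify_distribution(probs, actual_action)
    updated = converge_toward_target(probs, modified)
```

After the update, the probability assigned to the user's actual action rises while the distribution still sums to one, illustrating how repeated feedback steps pull the model toward user behavior.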
Potential Applications
This technology has potential applications in:
- Natural language processing systems
- Virtual assistants and chatbots
- Machine translation systems
- Speech recognition systems
Problems Solved
This technology solves the following problems:
- Improving the accuracy and effectiveness of language models
- Incorporating user feedback to enhance predictions and actions
- Updating language models to align with user preferences and behaviors
Benefits
The benefits of this technology include:
- Enhanced user experience with more accurate predictions and actions
- Improved language model performance through continuous updates
- Better adaptation to individual user preferences and behaviors
Original Abstract Submitted
The present disclosure generally relates to updating a language model based on user feedback. Based on a user text input, a language model predicts a set of tokens and an action that will be taken by the user in response to the predicted set of tokens. If the predicted action does not match a detected actual user action, the language model is updated to reflect the user feedback by modifying an output token probability distribution based on the actual user action and updating the language model to converge with a target language model using the modified output token probability distribution.