Microsoft Technology Licensing, LLC (20240346232). DYNAMIC CONSTRUCTION OF LARGE LANGUAGE MODEL PROMPTS: simplified abstract
Contents
DYNAMIC CONSTRUCTION OF LARGE LANGUAGE MODEL PROMPTS
Organization Name
Microsoft Technology Licensing, LLC
Inventor(s)
Abed El Kader Asi of Sammamish WA (US)
Alexander Tsvetkov of Tel Aviv (IL)
Shahar Zvi Keren of Hemed (IL)
Roy Eisenstadt of Tel Aviv (IL)
DYNAMIC CONSTRUCTION OF LARGE LANGUAGE MODEL PROMPTS - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240346232, titled 'DYNAMIC CONSTRUCTION OF LARGE LANGUAGE MODEL PROMPTS'.

**Simplified Explanation:**
The patent application discusses solutions for reducing the likelihood of hallucinations by language models, such as large language models, by injecting curated factual data into prompts to improve accuracy and reliability.
**Key Features and Innovation:**

- Injecting curated factual data into prompts to reduce hallucinations by language models
- Enabling language models to be used in settings where accuracy is crucial
- Generating summaries and output texts using topic-specific data and language models
**Potential Applications:**

- Improving the accuracy of language models in various industries
- Enhancing the reliability of automated text generation processes
- Facilitating the use of language models in critical applications where accuracy is paramount
**Problems Solved:**

- Reducing the likelihood of hallucinations by language models
- Enhancing the accuracy and reliability of automated text generation
- Allowing for the use of language models in settings where accuracy is crucial
**Benefits:**

- Improved accuracy and reliability of language models
- Increased trust in automated text generation processes
- Expanded use of language models in critical applications
**Commercial Applications:**
Potential commercial applications include automated content generation for news outlets, legal document summarization, and medical report generation.
**Questions about Language Models:**

1. How can injecting curated factual data into prompts improve the accuracy of language models?
2. What are the potential implications of reducing hallucinations by language models in critical applications?
**Frequently Updated Research:**
Stay updated on advancements in language model technology and the integration of curated factual data to enhance accuracy and reliability.
Original Abstract Submitted
example solutions for reducing the likelihood of hallucinations by language models, such as large language models (llms) are disclosed. by injecting a sufficient range and quantity of curated factual data into a prompt, the likelihood of a hallucination by an llm may be reduced. this enables language models to be used in a wider range of settings, in which fabrication of facts is problematic, while reducing the need for a human to carefully check the generated text for accuracy. examples include: generating a summary of a transcript using a summarization model; extracting topic-specific data from stored data using a scoring model; dynamically generating a language model prompt using the topic-specific data and the summary; and generating an output text using a language model and the language model prompt.
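The abstract's pipeline (extract topic-specific data with a scoring model, then dynamically build a grounded prompt) can be sketched in a few lines. This is a minimal illustration, not the patented method: the keyword-overlap scorer, the threshold, and the prompt template are all hypothetical stand-ins for the scoring and language models the application describes.

```python
def score_topic_relevance(fact: str, topic: str) -> float:
    # Toy scoring model: fraction of topic words appearing in the fact.
    # A stand-in for the learned scoring model named in the abstract.
    topic_words = set(topic.lower().split())
    fact_words = set(fact.lower().split())
    return len(topic_words & fact_words) / max(len(topic_words), 1)

def extract_topic_data(stored_data: list[str], topic: str,
                       threshold: float = 0.3) -> list[str]:
    # Keep only stored facts whose relevance score clears the threshold.
    return [f for f in stored_data if score_topic_relevance(f, topic) >= threshold]

def build_prompt(summary: str, facts: list[str]) -> str:
    # Dynamically construct a prompt that grounds the model in curated facts.
    fact_block = "\n".join(f"- {f}" for f in facts)
    return (
        "Use only the facts below; do not invent details.\n"
        f"Facts:\n{fact_block}\n\n"
        f"Summary of transcript:\n{summary}\n\n"
        "Task: write a report consistent with the facts."
    )

# Example: curate facts for a topic, then build the grounded prompt.
stored = [
    "quarterly revenue grew 12 percent",
    "the team shipped two product updates",
    "office plants were watered",
]
facts = extract_topic_data(stored, "quarterly revenue growth")
prompt = build_prompt("The meeting covered quarterly revenue results.", facts)
```

The resulting `prompt` would then be sent to a language model; because only facts relevant to the topic survive the scoring step, the model has less room to fabricate details.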