18299842. DYNAMIC CONSTRUCTION OF LARGE LANGUAGE MODEL PROMPTS simplified abstract (Microsoft Technology Licensing, LLC)

From WikiPatents

DYNAMIC CONSTRUCTION OF LARGE LANGUAGE MODEL PROMPTS

Organization Name

Microsoft Technology Licensing, LLC

Inventor(s)

Abed El Kader Asi of Sammamish WA (US)

Alexander Tsvetkov of Tel Aviv (IL)

Royi Ronen of Tel Aviv (IL)

Yarin Kuper of Tel Aviv (IL)

Shahar Zvi Keren of Hemed (IL)

Roy Eisenstadt of Tel Aviv (IL)

DYNAMIC CONSTRUCTION OF LARGE LANGUAGE MODEL PROMPTS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18299842, titled 'DYNAMIC CONSTRUCTION OF LARGE LANGUAGE MODEL PROMPTS'.

Abstract: Example solutions for reducing the likelihood of hallucinations by language models, such as large language models (LLMs), are disclosed. By injecting a sufficient range and quantity of curated factual data into a prompt, the likelihood of a hallucination by an LLM may be reduced. This enables language models to be used in a wider range of settings, in which fabrication of facts is problematic, while reducing the need for a human to carefully check the generated text for accuracy. Examples include: generating a summary of a transcript using a summarization model; extracting topic-specific data from stored data using a scoring model; dynamically generating a language model prompt using the topic-specific data and the summary; and generating an output text using a language model and the language model prompt.
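The four steps listed in the abstract can be sketched end to end. This is a minimal illustration, not the patented implementation: the summarization and scoring models are replaced by simple stand-in heuristics, and the names `summarize`, `score`, `extract_topic_data`, and `build_prompt` are assumptions for illustration only.

```python
# Hedged sketch of the abstract's four-step flow. The model calls are
# stand-ins: simple heuristics replace the summarization model and the
# scoring model named in the abstract, and all function names here are
# illustrative, not taken from the patent.

def summarize(transcript: str) -> str:
    """Stand-in for the summarization model: keep the first sentence."""
    return transcript.split(". ")[0].strip().rstrip(".") + "."

def score(fact: str, topic: str) -> float:
    """Stand-in for the scoring model: fraction of topic words in the fact."""
    topic_words = set(topic.lower().split())
    fact_words = set(fact.lower().split())
    return len(topic_words & fact_words) / max(len(topic_words), 1)

def extract_topic_data(stored_data: list[str], topic: str, k: int = 2) -> list[str]:
    """Keep the k facts the scoring stand-in ranks as most topic-relevant."""
    return sorted(stored_data, key=lambda f: score(f, topic), reverse=True)[:k]

def build_prompt(summary: str, facts: list[str]) -> str:
    """Dynamically assemble the language-model prompt from summary and facts."""
    fact_block = "\n".join(f"- {f}" for f in facts)
    return (
        "Using only the facts below, write a follow-up email.\n"
        f"Facts:\n{fact_block}\n"
        f"Call summary: {summary}\n"
    )

transcript = "The customer asked about renewal pricing. They also mentioned a support ticket."
stored = [
    "Renewal pricing for the Pro tier is $99/month.",
    "The office cafeteria closes at 3pm.",
    "Ticket #4521 about renewal pricing is still open.",
]
prompt = build_prompt(summarize(transcript), extract_topic_data(stored, "renewal pricing"))
print(prompt)
```

The final prompt would then be handed to the LLM; because only curated, topic-relevant facts reach the prompt, the model has less room to fabricate details.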

  • Simplified Explanation:

The patent application discusses methods to reduce the likelihood of hallucinations by language models by injecting curated factual data into prompts.

  • Key Features and Innovation:

- Injecting curated factual data into prompts
- Reducing the likelihood of hallucinations by language models
- Enabling wider use of language models in various settings

  • Potential Applications:

- Text summarization
- Data extraction
- Language model prompt generation

  • Problems Solved:

- Reducing hallucinations by language models
- Minimizing the need for human verification of generated text

  • Benefits:

- Improved accuracy of generated text
- Increased reliability of language models
- Enhanced usability in different settings

  • Commercial Applications:

Potential commercial applications include automated content generation, data analysis, and information retrieval systems.

  • Prior Art:

Readers can explore prior research on language model hallucinations, data injection methods, and text summarization techniques.

  • Frequently Updated Research:

Stay informed about the latest advancements in language model accuracy, data curation techniques, and text generation technologies.

Questions about language model hallucinations:

1. How does injecting curated factual data help reduce hallucinations by language models?
2. What are the potential implications of using language models with reduced hallucination likelihood?

