US Patent Application 17726244. DYNAMICALLY ADAPTING GIVEN ASSISTANT OUTPUT BASED ON A GIVEN PERSONA ASSIGNED TO AN AUTOMATED ASSISTANT simplified abstract

DYNAMICALLY ADAPTING GIVEN ASSISTANT OUTPUT BASED ON A GIVEN PERSONA ASSIGNED TO AN AUTOMATED ASSISTANT

Organization Name

Google LLC


Inventor(s)

Martin Baeuml of Zurich (CH)


Thushan Amarasiriwardena of Alameda, CA (US)


Roberto Pieraccini of Zurich (CH)


Gianluca Martini of Zurich (CH)


DYNAMICALLY ADAPTING GIVEN ASSISTANT OUTPUT BASED ON A GIVEN PERSONA ASSIGNED TO AN AUTOMATED ASSISTANT - A simplified explanation of the abstract

  • This abstract appeared for US patent application number 17726244, titled 'DYNAMICALLY ADAPTING GIVEN ASSISTANT OUTPUT BASED ON A GIVEN PERSONA ASSIGNED TO AN AUTOMATED ASSISTANT'

Simplified Explanation

This abstract describes implementations for dynamically adapting the output of an automated assistant based on a specific persona assigned to it, chosen from among multiple distinct personas. The output can be generated first and then adapted to the assigned persona, or it can be generated specifically for that persona with no subsequent adaptation step. The output may include a stream of textual content to be synthesized for audible presentation to the user and a stream of visual cues for controlling a client device's display or a visualized representation of the assistant. These implementations utilize large language models (LLMs), or output previously generated with LLMs, to reflect the assigned persona in the assistant's output.
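To make the two generation strategies concrete, here is a minimal Python sketch of both the generate-then-adapt path and the persona-specific generation path. Everything in it, including the `llm_generate` placeholder, the `Persona` fields, and the prompt wording, is a hypothetical illustration, not code or terminology from the application itself.

```python
from dataclasses import dataclass


@dataclass
class Persona:
    """A persona assigned to the automated assistant (fields are illustrative)."""
    name: str
    style_description: str  # e.g. "speaks formally and uses nautical metaphors"


def llm_generate(prompt: str) -> str:
    """Placeholder for a call to a large language model; a real system
    would substitute an actual model API here."""
    raise NotImplementedError


def generate_then_adapt(user_query: str, persona: Persona) -> str:
    """First implementation path: generate a generic answer, then adapt
    it to the assigned persona with a second model call."""
    base_output = llm_generate(user_query)
    adapt_prompt = (
        f"Rewrite the following assistant response in the voice of "
        f"{persona.name}, who {persona.style_description}:\n{base_output}"
    )
    return llm_generate(adapt_prompt)


def generate_persona_specific(user_query: str, persona: Persona) -> str:
    """Second implementation path: condition generation on the persona
    so that no subsequent adaptation step is needed."""
    prompt = (
        f"You are {persona.name}, who {persona.style_description}. "
        f"Answer the user's query in that voice.\n\nUser: {user_query}"
    )
    return llm_generate(prompt)
```

The second path trades a more elaborate prompt for a single model call, which is presumably why the abstract distinguishes it from generating output and adapting it afterward.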


Original Abstract Submitted

Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
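The abstract's mention of paired streams, textual content destined for speech synthesis plus visual cues for the client display or visualized assistant, can be sketched as below. The `VisualCue` structure and the cue-selection rule are assumptions made purely for illustration; the application does not specify these details.

```python
from dataclasses import dataclass
from typing import Iterator, Optional, Tuple


@dataclass
class VisualCue:
    """A cue for driving the client display or an animated avatar.
    The fields here are assumptions for illustration, not from the filing."""
    kind: str   # e.g. "expression" or "gesture"
    value: str  # e.g. "smile" or "nod"


def stream_assistant_output(
    text_chunks: Iterator[str],
) -> Iterator[Tuple[str, Optional[VisualCue]]]:
    """Pair each chunk of text (to be sent to text-to-speech for audible
    presentation) with an optional visual cue for the client device. The
    exclamation-mark heuristic below is a stand-in for whatever logic a
    real system would use to emit cues."""
    for chunk in text_chunks:
        cue = VisualCue("expression", "smile") if "!" in chunk else None
        yield chunk, cue


# Each yielded pair would be synthesized and rendered as it arrives.
for text, cue in stream_assistant_output(iter(["Ahoy there!", " How can I help?"])):
    print(text, cue)
```

Streaming the two channels together lets the client start audible playback and update the assistant's visual representation incrementally, rather than waiting for the full response.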