20250181138. Multimodal Human-machine (NVIDIA)
MULTIMODAL HUMAN-MACHINE INTERACTIONS FOR INTERACTIVE SYSTEMS AND APPLICATIONS
Abstract: In various examples, an interactive agent platform that hosts an interactive agent may execute one or more flows that implement the logic of the interactive agent and specify a sequence of multimodal interactions. For example, an interactive avatar may support any number of simultaneous interaction modalities and corresponding interaction channels to engage with the user, such as channels for character or bot actions (e.g., speech, gestures, postures, movement, vocal bursts, etc.), scene actions (e.g., two-dimensional (2D) GUI overlays, 3D scene interactions, visual effects, music, etc.), and user actions (e.g., speech, gesture, posture, movement, etc.). Actions based on different modalities may occur sequentially or in parallel (e.g., waving and saying hello). As such, the interactive agent may execute any number of flows that specify a sequence of multimodal actions (e.g., different types of bot or user actions) using any number of supported interaction modalities and corresponding interaction channels.
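The flow concept in the abstract can be illustrated with a minimal sketch. This is not the platform's actual API; all names (`Action`, `Step`, `run_flow`) are hypothetical. It shows a flow as an ordered list of steps, where actions within one step target distinct interaction channels and occur in parallel (e.g., waving while saying hello), while successive steps occur sequentially.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical action on one interaction channel,
    # e.g. a bot gesture or a speech utterance.
    modality: str
    payload: str

@dataclass
class Step:
    # Actions grouped in one step run in parallel on their
    # respective channels (e.g., "gesture" and "speech").
    actions: list

def run_flow(steps):
    """Execute a flow: steps run sequentially; actions within a
    step are dispatched together (parallel across channels)."""
    timeline = []
    for step in steps:
        # One timeline entry per step: channel -> payload.
        timeline.append({a.modality: a.payload for a in step.actions})
    return timeline

# A flow with a parallel step (wave + greet) followed by a sequential step.
flow = [
    Step([Action("gesture", "wave"), Action("speech", "Hello!")]),
    Step([Action("speech", "How can I help?")]),
]
```

Here `run_flow(flow)` yields two timeline entries: the first carries both the gesture and the greeting (parallel channels), the second only speech, matching the sequential-or-parallel behavior the abstract describes.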
Inventor(s): Christian Eduard Schüller, Razvan Dinu, Severin Achill Klingler, Pascal Joël Bérard
CPC Classification: G06F3/011 ({Arrangements for interaction with the human body, e.g. for user immersion in virtual reality (blind teaching)})