20250196363. LLM Driven Multimodal Human-Robot Interaction Planning (Honda Motor Co., Ltd.)
LLM driven multimodal human-robot interaction planning
Abstract: A computer-implemented method for controlling a robot collaborating with a human in the robot's environment comprises: obtaining, by at least one sensor, multimodal information on the environment of the robot, including information on a human acting in the environment; converting, by a first converter, the obtained multimodal information into text information; estimating, by an intent estimator, an intent of the human based on the text information; determining, by a state estimator, a current state of the environment, including the human, based on the text information; planning, by a behavior planner, based on the current state of the environment and the estimated intent of the human, a behavior of the robot including at least one multimodal interaction output for execution by the robot, and generating control information including text information on the at least one multimodal interaction output; converting, by a second converter, the generated text information into multimodal actuator control information; and controlling at least one actuator of the robot based on the multimodal actuator control information.
Inventor(s): Chao Wang, Michael Gienger, Frank Joublin, Antonio Ceravola
CPC Classification: B25J11/0005 (Manipulators not otherwise provided for)
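The abstract describes a pipeline: multimodal sensing, conversion to text, intent and state estimation, text-based behavior planning, and conversion back to actuator commands. A minimal sketch of that flow is below; all function names, the stub observations, and the command format are illustrative assumptions, not taken from the patent, and the estimator and planner stand in for what would in practice be LLM calls.

```python
# Hypothetical sketch of the claimed pipeline. Every name and data
# format here is illustrative; the real system would use an LLM for
# the estimation and planning steps.

def perceive() -> dict:
    # Stand-in for the multimodal sensor input (vision, audio, ...).
    return {"vision": "human reaches toward red cup", "speech": "hand me that"}

def to_text(multimodal: dict) -> str:
    # First converter: multimodal observations -> text information.
    return "; ".join(f"{mod}: {obs}" for mod, obs in sorted(multimodal.items()))

def estimate_intent(text: str) -> str:
    # Intent estimator: infers the human's intent from the text.
    return "human wants the red cup" if "cup" in text else "unknown"

def estimate_state(text: str) -> dict:
    # State estimator: current state of the environment incl. the human.
    return {"human_present": True, "observation": text}

def plan_behavior(state: dict, intent: str) -> str:
    # Behavior planner: emits control information as text, including a
    # multimodal interaction output (speech plus motion).
    if intent == "unknown":
        return "say('Could you repeat that?')"
    return "say('Here you go'); pick('red cup'); handover()"

def to_actuator_commands(plan_text: str) -> list[str]:
    # Second converter: planner text -> actuator-level command list.
    return [step.strip() for step in plan_text.split(";")]

if __name__ == "__main__":
    text = to_text(perceive())
    intent = estimate_intent(text)
    state = estimate_state(text)
    commands = to_actuator_commands(plan_behavior(state, intent))
    print(commands)
```

The point of the sketch is the text bottleneck: both converters translate to and from natural language so that a single language model can do the estimation and planning in the middle.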