US Patent Application 18232267. DIGITAL ASSISTANT INTERACTION IN A VIDEO COMMUNICATION SESSION ENVIRONMENT simplified abstract

From WikiPatents

DIGITAL ASSISTANT INTERACTION IN A VIDEO COMMUNICATION SESSION ENVIRONMENT

Organization Name

Apple Inc.

Inventor(s)

Niranjan Manjunath of Sunnyvale, CA (US)

Willem Mattelaer of San Jose, CA (US)

Jessica Peck of Morgan Hill, CA (US)

Lily Shuting Zhang of Seattle, WA (US)

DIGITAL ASSISTANT INTERACTION IN A VIDEO COMMUNICATION SESSION ENVIRONMENT - A simplified explanation of the abstract

This abstract first appeared for US patent application 18232267, titled 'DIGITAL ASSISTANT INTERACTION IN A VIDEO COMMUNICATION SESSION ENVIRONMENT'.

Simplified Explanation

- The patent application describes a context-aware digital assistant that can be used during a video communication session.
- The digital assistant can be accessed from multiple user devices, and context information from a first user device can be used to determine the assistant's response at a second user device.
- Users participating in the video communication session can interact with the digital assistant as if it were another participant in the session.
- The digital assistant can automatically determine candidate tasks based on a shared transcription of the user voice inputs received during the video communication session.
- This allows the digital assistant to proactively suggest tasks the user may want it to perform, based on the conversations held during the session.
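The flow described above can be illustrated with a minimal sketch. All names here (`Device`, `VideoSession`, `DigitalAssistant`, the keyword-based task matching) are hypothetical stand-ins for illustration, not Apple's actual implementation or API:

```python
# Illustrative sketch of the patent's described flow: context from one
# device informs the assistant's response at another device, and a shared
# transcription drives proactive task suggestions. All names are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Device:
    user: str
    context: Dict[str, str] = field(default_factory=dict)  # e.g. location


@dataclass
class VideoSession:
    devices: List[Device] = field(default_factory=list)
    transcript: List[str] = field(default_factory=list)  # shared transcription

    def add_utterance(self, user: str, text: str) -> None:
        # Voice inputs from every participating device feed one shared transcript.
        self.transcript.append(f"{user}: {text}")


class DigitalAssistant:
    """Treated as if it were another participant in the session."""

    def respond(self, session: VideoSession, asking_device: Device, query: str) -> str:
        # Gather context from the *other* devices in the session, so a peer's
        # context can shape the response produced on this device.
        peer_context = {
            k: v
            for d in session.devices
            if d is not asking_device
            for k, v in d.context.items()
        }
        if "weather" in query.lower() and "location" in peer_context:
            return f"Weather lookup for {peer_context['location']} (from peer context)"
        return f"Answering '{query}' with local context only"

    def suggest_tasks(self, session: VideoSession) -> List[str]:
        # Proactively scan the shared transcription for candidate tasks
        # (naive keyword matching stands in for real intent detection).
        suggestions = []
        for line in session.transcript:
            if "remind me" in line.lower():
                suggestions.append(f"Create reminder from: {line}")
            if "schedule" in line.lower():
                suggestions.append(f"Create calendar event from: {line}")
        return suggestions
```

For example, if Alice's device carries a `location` context of "Seattle", a weather query asked from Bob's device can be answered using Alice's location, and utterances like "Remind me to send the slides" surface as suggested tasks.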


Original Abstract Submitted

Embodiments provide a context-aware digital assistant at multiple user devices participating in a video communication session by using context information from a first user device to determine a digital assistant response at a second user device. In this manner, users participating in the video communication session may interact with the digital assistant during the video communication session as if the digital assistant is another participant in the video communication session. Embodiments further describe automatically determining candidate digital assistant tasks based on a shared transcription of user voice inputs received at user devices participating in a video communication session. In this manner, a digital assistant of a user device participating in a video communication session may proactively determine one or more tasks that a user of the user device may want the digital assistant to perform based on conversations held during the video communication session.