20240037371. DETECTING AUDIBLE REACTIONS DURING VIRTUAL MEETINGS simplified abstract (Zoom Video Communications, Inc.)

From WikiPatents

DETECTING AUDIBLE REACTIONS DURING VIRTUAL MEETINGS

Organization Name

Zoom Video Communications, Inc.

Inventor(s)

Yuhui Chen of San Jose CA (US)

Qiang Gao of Charlotte NC (US)

Zhaofeng Jia of Saratoga CA (US)

Rongrong Liu of Sunnyvale CA (US)

DETECTING AUDIBLE REACTIONS DURING VIRTUAL MEETINGS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240037371 titled 'DETECTING AUDIBLE REACTIONS DURING VIRTUAL MEETINGS'.

Simplified Explanation

The abstract describes a method in which a machine learning model receives audio signals from a client device connected to a virtual meeting via a conference client application. The model determines a set of candidate reactions associated with the audio signals, and one reaction is selected and transmitted to the virtual conference provider.

  • The method involves using a machine learning model to analyze audio signals from a client device in a virtual meeting.
  • The model determines multiple potential reactions associated with the audio signals.
  • One reaction is selected from the potential reactions.
  • The selected reaction is transmitted to the virtual conference provider.
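The selection and transmission steps above can be sketched in plain Python. The reaction labels, score values, and `send` callback here are hypothetical illustrations; the patent does not specify them, nor how the model's scores are produced.

```python
# Sketch of the candidate-selection step: the ML model (not shown)
# would map audio signals to a score per candidate reaction; the
# scores below are hypothetical placeholders.

def select_reaction(candidate_scores):
    """Pick the highest-scoring candidate reaction.

    candidate_scores: dict mapping reaction name -> model score.
    Returns the selected reaction name, or None if there are none.
    """
    if not candidate_scores:
        return None
    return max(candidate_scores, key=candidate_scores.get)

def transmit_reaction(reaction, send):
    """Forward the selected reaction to the virtual conference
    provider via a caller-supplied send() callable."""
    if reaction is not None:
        send({"type": "audible_reaction", "reaction": reaction})

# Hypothetical model output for one audio window:
scores = {"applause": 0.81, "laughter": 0.12, "cheering": 0.07}
selected = select_reaction(scores)  # -> "applause"
```

In a real client, `send` would wrap whatever transport the conference client application uses to reach the provider; here it is just a parameter so the selection logic stays self-contained.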

Potential Applications

  • This technology can be used in virtual meetings to automatically analyze and interpret audio signals from participants.
  • It can help in providing real-time feedback or reactions based on the audio signals received.
  • The method can enhance the virtual meeting experience by automating certain aspects of participant engagement.

Problems Solved

  • The method solves the problem of manually analyzing and interpreting audio signals in virtual meetings.
  • It eliminates the need for participants to manually provide reactions or feedback during the meeting.
  • It streamlines the process of understanding and responding to audio signals in a virtual meeting setting.

Benefits

  • The technology saves time and effort by automating the analysis of audio signals in virtual meetings.
  • It enables real-time reactions or feedback based on the audio signals received.
  • The method enhances participant engagement and interaction in virtual meetings.


Original Abstract Submitted

one example method includes receiving, by a machine learning (“ml”) model of a conference client application, audio signals received from a microphone of a client device, the client device connected to a virtual meeting via the conference client application, the virtual meeting hosted by a virtual conference provider; determining, by the ml model, a plurality of candidate reactions associated with the audio signals, the ml comprising a plurality of convolutional neural network (“cnn”) layers and at least one fully connected layer; selecting a reaction from the plurality of candidate reactions; and transmitting the reaction to the virtual conference provider.
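The abstract specifies the model only as "a plurality of convolutional neural network ('cnn') layers and at least one fully connected layer." A minimal pure-Python sketch of that shape (stacked 1-D convolutions over an audio feature sequence, then one dense layer producing a score per candidate reaction) might look as follows; every kernel, weight, and dimension here is a hypothetical placeholder, not the filed design.

```python
# Minimal sketch of the claimed model shape: convolutional layers
# followed by a fully connected layer. Pure Python, no ML framework.

def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation) of a feature
    sequence with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Element-wise rectified linear activation."""
    return [max(0.0, x) for x in xs]

def fully_connected(features, weights, biases):
    """One dense layer: weights is a list of rows, one per output."""
    return [sum(w * f for w, f in zip(row, features)) + b
            for row, b in zip(weights, biases)]

def score_reactions(audio_features):
    """Map an audio feature sequence to one score per candidate
    reaction (here, three hypothetical reactions)."""
    # Two stacked convolutional layers (kernels are placeholders):
    h = relu(conv1d(audio_features, [0.5, -0.5, 0.25]))
    h = relu(conv1d(h, [1.0, 0.5]))
    # Fully connected layer mapping to 3 candidate-reaction scores:
    weights = [[0.2] * len(h), [0.1] * len(h), [-0.1] * len(h)]
    biases = [0.0, 0.1, 0.2]
    return fully_connected(h, weights, biases)
```

A production model would learn its kernels and dense weights from labeled audio and run in a framework with proper tensors, but the data flow (conv layers, nonlinearity, dense head over reaction classes) matches the structure the claim names.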