Wonder Technologies Pte. Ltd. (20240374187). MULTI-MODAL SYSTEMS AND METHODS FOR VOICE-BASED MENTAL HEALTH ASSESSMENT WITH EMOTION STIMULATION simplified abstract

From WikiPatents
Revision as of 06:31, 21 November 2024 by Wikipatents (talk | contribs) (Creating a new page)

MULTI-MODAL SYSTEMS AND METHODS FOR VOICE-BASED MENTAL HEALTH ASSESSMENT WITH EMOTION STIMULATION

Organization Name

Wonder Technologies Pte. Ltd.

Inventor(s)

Biman Najika Liyanage of Beijing (CN)

Zhengwen Zhu of Beijing (CN)

Tai-ni Wu of Beijing (CN)

MULTI-MODAL SYSTEMS AND METHODS FOR VOICE-BASED MENTAL HEALTH ASSESSMENT WITH EMOTION STIMULATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240374187, titled 'MULTI-MODAL SYSTEMS AND METHODS FOR VOICE-BASED MENTAL HEALTH ASSESSMENT WITH EMOTION STIMULATION'.

The patent application describes a multi-modal system for voice-based mental health assessment with emotion stimulation.

  • A task construction module creates tasks designed to capture the acoustic, linguistic, and affective characteristics of a user's speech.
  • A stimulus output module presents stimuli, based on the constructed tasks, to trigger user behavior.
  • A response intake module collects the user's responses to those stimuli.
  • An autoencoder learns cross-modal relationships between audio and text features for emotion classification.
  • A shared representation feature dataset is output for use in mental health assessment.
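The pipeline above can be illustrated with a minimal NumPy sketch of the fusion step: each modality is projected into a shared latent space and the shared code is then used both for reconstruction and as input to an emotion classifier. All dimensions, weights, and function names here are illustrative assumptions, not the patent's actual implementation, and a real system would learn the weights by minimizing reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes (illustrative only): e.g. MFCC-style acoustic
# features for audio and a word-embedding vector for the transcript text.
AUDIO_DIM, TEXT_DIM, SHARED_DIM = 40, 300, 16

# Randomly initialised encoder weights per modality (untrained sketch).
W_audio = rng.normal(0, 0.1, (AUDIO_DIM, SHARED_DIM))
W_text = rng.normal(0, 0.1, (TEXT_DIM, SHARED_DIM))

def encode_shared(audio_feats, text_feats):
    """Project each modality into the shared space and fuse by averaging."""
    z_audio = np.tanh(audio_feats @ W_audio)   # high-level audio features
    z_text = np.tanh(text_feats @ W_text)      # high-level text features
    return (z_audio + z_text) / 2.0            # shared representation

def decode(z):
    """Reconstruct both modalities from the shared code (tied weights)."""
    return z @ W_audio.T, z @ W_text.T

# One user response: acoustic features plus a text embedding of the transcript.
audio = rng.normal(size=(1, AUDIO_DIM))
text = rng.normal(size=(1, TEXT_DIM))

shared = encode_shared(audio, text)
audio_hat, text_hat = decode(shared)
print(shared.shape)  # (1, 16) -- this vector would feed the emotion classifier
```

Averaging the two projections is one simple fusion choice; concatenation or attention-based fusion would serve the same role of producing a single shared feature vector per response.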

Potential Applications:

  • Mental health assessment tools
  • Emotion recognition systems
  • Therapy and counseling support technology

Problems Solved:

  • Difficulty in assessing mental health through voice analysis
  • Limited tools for emotion stimulation in mental health assessments

Benefits:

  • Enhanced voice-based mental health assessment
  • Improved emotion recognition accuracy
  • Personalized therapy and counseling support

Commercial Applications: This technology can be used in healthcare settings, research institutions, and mental health clinics to provide accurate and efficient mental health assessments. It can also be integrated into telemedicine platforms for remote mental health support services.

Questions about the technology:

1. How does the autoencoder improve emotion classification in mental health assessments?
2. What are the potential implications of using this technology in therapy and counseling settings?

Frequently Updated Research: Researchers continue to explore new ways to improve emotion recognition and mental health assessment through voice analysis; advances in this field may further enhance the effectiveness of this technology.


Original Abstract Submitted

multi-modal systems, for voice-based mental health assessment with emotion stimulation, comprising: a task construction module to construct tasks for capturing acoustic, linguistic, and affective characteristics of speech of a user; a stimulus output module comprising stimuli, basis the constructed tasks, to be presented to a user in order to elicit a trigger of one or more types of user behaviour, the triggers being in the form on input responses; response intake module to present, to a user, the stimuli, and, in response, receive corresponding responses in one or more formats from responses; an autoencoder to define relationship/s, using the fused features, between: an audio modality to output extracted high-level text features; and a text modality to output extracted high-level audio features; the autoencoder to receive extracted high-level text and audio features, in parallel, to output a shared representation feature data set for emotion classification correlative to the mental health assessment.