18457708. MULTI-DOMAIN JOINT SEMANTIC FRAME PARSING simplified abstract (Microsoft Technology Licensing, LLC)
MULTI-DOMAIN JOINT SEMANTIC FRAME PARSING
Organization Name
Microsoft Technology Licensing, LLC
Inventor(s)
Dilek Z. Hakkani-Tur of Kirkland WA (US)
Asli Celikyilmaz of Kirkland WA (US)
Yun-Nung Chen of Taipei (TW)
Li Deng of Redmond WA (US)
Jianfeng Gao of Woodinville WA (US)
Gokhan Tur of Kirkland WA (US)
Ye Yi Wang of Redmond WA (US)
MULTI-DOMAIN JOINT SEMANTIC FRAME PARSING - A simplified explanation of the abstract
This abstract first appeared for US patent application 18457708, titled 'MULTI-DOMAIN JOINT SEMANTIC FRAME PARSING'.
Simplified Explanation
The patent application describes a processing unit that can train a joint multi-domain recurrent neural network (JRNN) for spoken language understanding (SLU). The trained model performs slot filling, intent determination, and domain classification jointly rather than as separate tasks.
- The processing unit can train the JRNN as, for example, a bi-directional recurrent neural network (bRNN) and/or a recurrent neural network with long short-term memory (RNN-LSTM).
- The trained model can estimate a complete semantic frame per query, covering the query's intent and slot values across multiple domains.
- The JRNN enables multi-task deep learning by leveraging training data from multiple domains.
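As an illustrative sketch (not taken from the patent itself), the "complete semantic frame per query" that the joint model estimates can be pictured as a single structure holding domain, intent, and slots together; the field names and example values below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticFrame:
    """One complete semantic frame for a query.

    A joint multi-domain model predicts all three fields in a single
    pass, instead of running separate per-domain classifiers.
    """
    domain: str                                 # e.g. "flights" or "hotels"
    intent: str                                 # domain-specific goal, e.g. "book_flight"
    slots: dict = field(default_factory=dict)   # e.g. {"date": "tomorrow"}

# Hypothetical model output for one query:
frame = SemanticFrame(
    domain="flights",
    intent="book_flight",
    slots={"destination": "Paris", "date": "tomorrow"},
)
print(frame)
```

The key design point is that a single object is produced per query, so downstream components never need to reconcile outputs from independently trained domain, intent, and slot models.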
Potential Applications
- Spoken language understanding systems in various domains such as customer service, virtual assistants, and voice-controlled devices.
- Natural language processing applications that require understanding user queries and extracting relevant information.
Problems Solved
- Traditional SLU models are typically trained for a single domain, so queries that span multiple domains are hard to handle.
- Training and maintaining a separate model for each domain is time-consuming and resource-intensive.
- Existing models may struggle to accurately estimate semantic frames whose intents and slots cut across multiple domains.
Benefits
- The joint multi-domain model allows for more accurate and efficient processing of user queries involving multiple domains.
- Training a single JRNN model reduces the need for separate models for each domain, saving time and resources.
- The model leverages the data from multiple domains, enabling better performance and generalization.
Original Abstract Submitted
A processing unit can train a model as a joint multi-domain recurrent neural network (JRNN), such as a bi-directional recurrent neural network (bRNN) and/or a recurrent neural network with long-short term memory (RNN-LSTM) for spoken language understanding (SLU). The processing unit can use the trained model to, e.g., jointly model slot filling, intent determination, and domain classification. The joint multi-domain model described herein can estimate a complete semantic frame per query, and the joint multi-domain model enables multi-task deep learning leveraging the data from multiple domains. The joint multi-domain recurrent neural (JRNN) can leverage semantic intents (such as, finding or identifying, e.g., a domain specific goal) and slots (such as, dates, times, locations, subjects, etc.) across multiple domains.
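The slots the abstract mentions (dates, times, locations, subjects, etc.) are conventionally filled by tagging each token of the query; the IOB-style tagging sketched below is standard SLU practice, not necessarily the patent's exact encoding, and the slot names are hypothetical:

```python
def collect_slots(tokens, iob_tags):
    """Collect slot values from IOB-style per-token tags.

    "B-x" begins slot x, "I-x" continues it, "O" is outside any slot.
    (Standard SLU convention; the patent's encoding may differ.)
    """
    slots = {}
    name, value = None, []
    for tok, tag in zip(tokens, iob_tags):
        if tag.startswith("B-"):
            if name:                      # flush any open slot
                slots[name] = " ".join(value)
            name, value = tag[2:], [tok]  # open a new slot
        elif tag.startswith("I-") and name == tag[2:]:
            value.append(tok)             # continue the open slot
        else:
            if name:                      # "O" tag closes the open slot
                slots[name] = " ".join(value)
            name, value = None, []
    if name:                              # flush a slot ending the query
        slots[name] = " ".join(value)
    return slots

tokens = ["book", "a", "flight", "to", "new", "york", "tomorrow"]
tags   = ["O", "O", "O", "O", "B-destination", "I-destination", "B-date"]
print(collect_slots(tokens, tags))
# {'destination': 'new york', 'date': 'tomorrow'}
```

In the joint model, these per-token slot tags are predicted alongside a single intent and domain label for the whole query, which is what lets one network produce the complete semantic frame.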
- G06N3/08
- G10L15/16
- G10L15/22
- G10L15/18
- G06N3/044