20240028626. NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING DEVICE simplified abstract (FUJITSU LIMITED)


NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING DEVICE

Organization Name

FUJITSU LIMITED

Inventor(s)

Nobuhiro Sakamoto of Nagoya (JP)

Masahiro Kataoka of Kamakura (JP)

Yasuhiro Suzuki of Yokohama (JP)

Kiyoshi Furuuchi of Oyama (JP)

NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING DEVICE - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240028626 titled 'NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING DEVICE'.

Simplified Explanation

The abstract describes a program, stored on a recording medium, that causes a computer to carry out three stages of information processing: preprocessing, training, and generation (sketched in code after the list below).

  • Preprocessing calculates vectors for subtexts (portions) of the text contained in historical records of question sentences and response sentences.
  • Training fits a model on training data that defines relationships between the vectors of some subtexts and the vectors of other subtexts.
  • Generation, when a new question sentence is received, inputs its vectors into the trained model to calculate the vectors of the corresponding subtexts and produces a response based on those calculated vectors.
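The three stages can be illustrated with a minimal sketch. The abstract does not specify how vectors are computed or what kind of model is trained, so the example below assumes TF-IDF vectors and a nearest-neighbour lookup standing in for the trained model; the history data, question strings, and variable names are purely hypothetical.

<pre>
# Minimal sketch of the preprocessing / training / generation stages.
# Assumptions (not from the patent): TF-IDF vectors, nearest-neighbour model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical history information: pairs of question and response sentences.
history = [
    ("How do I reset my password?", "Open Settings and choose 'Reset password'."),
    ("Where can I download the manual?", "The manual is on the support page."),
    ("How do I contact support?", "Email support@example.com or call the hotline."),
]
questions = [q for q, _ in history]
responses = [r for _, r in history]

# Preprocessing: calculate vectors for the subtexts (here, the question and
# response sentences) found in the history information.
vectorizer = TfidfVectorizer().fit(questions + responses)
question_vecs = vectorizer.transform(questions)

# Training: relate the vectors of some subtexts (questions) to the vectors of
# other subtexts (responses); here the "model" indexes the question vectors.
model = NearestNeighbors(n_neighbors=1).fit(question_vecs)

# Generation: vectorize a new question sentence, input it to the trained
# model, and generate a response from the vectors it maps to.
new_question = "I forgot my password, what should I do?"
new_vec = vectorizer.transform([new_question])
_, idx = model.kneighbors(new_vec)
print(responses[idx[0][0]])
</pre>

In this toy setup the "response generation" is retrieval of the stored response whose question vector is closest to the new question's vector; a production system would more likely train a generative model over the subtext vectors.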

Potential applications of this technology:

  • Natural language processing: This technology can be used in various natural language processing applications, such as chatbots, virtual assistants, and customer support systems.
  • Information retrieval: The trained model can be used to retrieve relevant information based on user queries, improving search engine capabilities.
  • Language translation: By generating responses based on input question sentences, this technology can be applied to language translation systems.

Problems solved by this technology:

  • Improved understanding of context: By calculating vectors for different subtexts and training a model based on these vectors, the system can better understand the context of a question and generate more accurate responses.
  • Efficient information retrieval: The trained model can help retrieve relevant information quickly and accurately, saving time and effort for users.
  • Enhanced communication: This technology can facilitate better communication between humans and machines by generating coherent and contextually appropriate responses.

Benefits of this technology:

  • Improved user experience: By generating accurate and relevant responses, this technology can enhance the user experience in various applications, such as chatbots and virtual assistants.
  • Time and cost savings: The efficient information retrieval capabilities of this technology can save time and reduce costs in tasks that involve searching for information.
  • Scalability: The trained model can be applied to large amounts of data and can handle a wide range of question sentences, making it scalable for different applications.


Original Abstract Submitted

a non-transitory computer-readable recording medium storing an information processing program for causing a computer to perform processing including: executing preprocessing processing that includes calculating vectors for a plurality of subtexts of text information included in a plurality of pieces of history information in which information on a plurality of question sentences and a plurality of response sentences is recorded; executing training processing that includes training a training model based on training data that defines relationships between the vectors of some subtexts and the vectors of other subtexts among the plurality of subtexts; and executing generation processing that includes calculating, when accepting a new question sentence, the vectors of the subtexts by inputting the vectors of the new question sentence to the training model, and generating a response that corresponds to the new question sentence, based on the calculated vectors.