Patent Application 18196069 - SURFACING SUPPLEMENTAL INFORMATION - Rejection
Title: SURFACING SUPPLEMENTAL INFORMATION
Application Information
- Invention Title: SURFACING SUPPLEMENTAL INFORMATION
- Application Number: 18196069
- Submission Date: 2025-05-14
- Effective Filing Date: 2023-05-11
- Filing Date: 2023-05-11
- National Class: 704
- National Sub-Class: 009000
- Examiner Employee Number: 96826
- Art Unit: 2657
- Tech Center: 2600
Rejection Summary
- 102 Rejections: 1
- 103 Rejections: 7
Cited Patents
The following patents were cited in the rejection:
- US 12,125,487 (Bradley et al.)
- US 11,062,270 (Hilleli et al.)
- US 11,404,049 (Nguyen et al.)
- US 2022/0414348 (Muralitharan et al.)
The rejection also cites the non-patent literature reference Collins-Thompson et al., "Personalizing Web Search Results by Reading Level".
Office Action Text
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character "450e" has been used to designate both an indicator for the word "Kyuritol" and an indicator for the word "vertigo" in figures 4E, 4F, and 4G.

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference signs mentioned in the description:
- "200" in paragraph 0059, line 1
- "460b" and "460d" in paragraph 0111, line 4

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character not mentioned in the description:
- "460e" in figures 4E, 4F, and 4G

Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities:
- In paragraph 0038, line 1, the acronym "GUI" is used without being defined.
- In paragraph 0051, line 2, "identified an utterance of be" should read "identified an utterance as".
- In paragraph 0051, line 6, "system can substitutes" should read "system can substitute".
- In paragraph 0068, line 6, "grouped together as a sentence" should read "grouped together as a segment".
- In paragraph 0068, line 11, "define ab edge" should read "define an edge".
- In paragraph 0070, line 13, "a complete phrases" should read "a complete phrase" or "complete phrases".
- In paragraph 0074, line 13, "to a concepts" should read "to a concept" or "to concepts".
- In paragraph 0120, line 6, "feedback controls 472a-b" should read "feedback controls 474a-b".
- In paragraph 0123, line 4, "a candidate terms" should read "a candidate term" or "candidate terms".
Appropriate correction is required.

Claim Objections

Claim 16 is objected to because of the following informalities: In claim 16, lines 11-12, "an key point" should read "a key point". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites a method, comprising: analyzing a transcript of a conversation, by a Natural Language Processing (NLP) system, to identify at least one candidate term in the conversation to provide supplemental information for to a reader of the transcript; in response to receiving, from the reader, a selection of the candidate term, formatting a query that includes the candidate term; in response to receiving a reply to the query: summarizing the reply into an explanation in a human-readable format; and outputting the explanation to the reader.

The claim 1 limitations, under their broadest reasonable interpretation, cover managing personal behavior but for the recitation of generic computer components. That is, other than reciting "a Natural Language Processing (NLP) system", nothing in the claim elements precludes the actions from practically being personal behavior. For example, "analyzing" in the context of this claim encompasses a person reading a transcript and identifying terms in the transcript for which a reader may want additional information; "formatting" in the context of this claim encompasses a person writing a question about a term selected by a second person who read the transcript with identified terms; "summarizing" in the context of this claim encompasses a person summarizing an answer to a question about a term provided by a third person who was asked the question; and "outputting" in the context of this claim encompasses a person providing the summary to the person who read the transcript and selected the term. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites only the additional element "a Natural Language Processing (NLP) system". That additional element amounts to no more than mere instructions to apply the exception using generic computer components. There are no details about a particular natural language processing system or how the natural language processing system operates to analyze a transcript of a conversation. The natural language processing system is used to generally apply the abstract idea (analyzing a transcript of a conversation) without placing any limitation on how the natural language processing system operates to analyze a transcript. The limitation recites only the idea of analyzing a transcript using a natural language processing system, without details on how this is accomplished. The claim omits any details as to how the natural language processing system solves a technical problem, and instead recites only the idea of a solution or outcome. The claim invokes a generic natural language processing system merely as a tool for analyzing a transcript rather than purporting to improve the technology or a computer (see MPEP 2106.05(f)). Therefore, the limitation represents no more than mere instructions to apply the judicial exception on a computer. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
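For orientation only, the following is a minimal sketch of the kind of pipeline claim 1 recites (analyze, select, query, summarize, output). All names (identify_candidate_terms, format_query, summarize, explain_selected_term) and the heuristics are invented for illustration; this is not the applicant's implementation.

```python
# Illustrative sketch of the claim 1 workflow; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Explanation:
    term: str
    text: str  # human-readable summary

def identify_candidate_terms(transcript: str) -> list[str]:
    """Stand-in for the claimed NLP analysis: flag long words as candidates
    a reader may want explained (a deliberately trivial heuristic)."""
    return sorted({w.strip(".,") for w in transcript.split() if len(w) > 9})

def format_query(term: str) -> str:
    # The claim only requires that the query include the candidate term.
    return f"define: {term}"

def summarize(reply: str, limit: int = 200) -> str:
    # Stand-in for summarizing the reply into a human-readable explanation.
    return reply if len(reply) <= limit else reply[:limit].rsplit(" ", 1)[0] + "..."

def explain_selected_term(term: str, lookup) -> Explanation:
    """Claimed flow: on reader selection, format a query, send it to a
    supplemental source (`lookup` is any callable), summarize the reply."""
    reply = lookup(format_query(term))
    return Explanation(term=term, text=summarize(reply))
```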
Claims 2-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 2-7 depend from claim 1, and thus recite the limitations of claim 1. For the reasons discussed above for claim 1, the claim 1 limitations recite abstract ideas. The additional limitations of claims 2-7 do not preclude the steps of claim 1 from practically being personal behavior. For example, a person using the method of claim 1 to provide additional information to a reader of a transcript could also perform the limitations of claims 2-7 as personal behavior:
- Claim 2: A person could identify a candidate term from an action item in the transcript.
- Claim 3: A person could provide a hyperlink for a candidate term in the transcript.
- Claim 4: A person could provide information from a source selected by a participant in the transcript conversation.
- Claim 5: A person could provide information in a format selected for the reader, and could use vocabulary from the transcript.
- Claim 6: A person could include the information about the transcript in a report provided to a participant in the transcript conversation.
- Claim 7: A person could provide a hyperlink for the source of information provided to the reader.

If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claims recite an abstract idea. The claims do not integrate the judicial exception into a practical application. For the reasons discussed above for claim 1, the additional element amounts to no more than mere instructions to apply the exception using generic computer components. Accordingly, this element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. For the reasons discussed above for claim 1, mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible.

Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites a method, comprising: transmitting, to a Natural Language Processing (NLP) system, audio from a conversation including utterances from a first party and a second party; receiving a transcript of the conversation from the NLP system and a candidate term identified from the transcript; outputting, to the first party, a display of the transcript and an indicator associated with the candidate term; in response to receiving a selection of the indicator, transmitting a request for additional information on the candidate term; receiving an explanation that summarizes data related to the candidate term retrieved from a supplemental data source, wherein the explanation is provided in a human-readable format; and outputting, to the first party, the explanation.
The claim 8 limitations, under their broadest reasonable interpretation, cover managing personal behavior but for the recitation of generic computer components. That is, other than reciting "a Natural Language Processing (NLP) system", nothing in the claim elements precludes the actions from practically being personal behavior. For example, "transmitting audio from a conversation" in the context of this claim encompasses a person listening to audio of a conversation; "receiving a transcript" in the context of this claim encompasses a person writing a transcript of the conversation and identifying terms in the transcript for which a reader may want additional information; "outputting a display" in the context of this claim encompasses a person providing a written transcript to a second person with a candidate term indicated in the written transcript; "transmitting a request" in the context of this claim encompasses a person writing a question about a term selected by the person who read the transcript; "receiving an explanation" in the context of this claim encompasses a person summarizing an answer to a question about a term provided by a third person who was asked the question; and "outputting the explanation" in the context of this claim encompasses a person providing the summary to the person who read the transcript and selected the term. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites only the additional element "a Natural Language Processing (NLP) system". That additional element amounts to no more than mere instructions to apply the exception using generic computer components. There are no details about a particular natural language processing system or how the natural language processing system operates to analyze a transcript of a conversation. The natural language processing system is used to generally apply the abstract idea (analyzing a transcript of a conversation) without placing any limitation on how the natural language processing system operates to analyze a transcript. The limitation recites only the idea of analyzing a transcript using a natural language processing system, without details on how this is accomplished. The claim omits any details as to how the natural language processing system solves a technical problem, and instead recites only the idea of a solution or outcome. The claim invokes a generic natural language processing system merely as a tool for analyzing a transcript rather than purporting to improve the technology or a computer (see MPEP 2106.05(f)). Therefore, the limitation represents no more than mere instructions to apply the judicial exception on a computer. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
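Claim 8 frames the same workflow from the client side (transmit audio, receive transcript plus candidate terms, display indicators, fetch on selection). A schematic sketch of that flow follows; every name here (review_conversation, transcribe, fetch_info, the [[term]] indicator convention) is invented for illustration and is not the applicant's code.

```python
# Editorial illustration of the claim 8 flow; all names are hypothetical.
from typing import Callable

def review_conversation(
    audio: bytes,
    transcribe: Callable[[bytes], tuple[str, list[str]]],  # (transcript, candidate terms)
    fetch_info: Callable[[str], str],                      # human-readable explanation
    display: Callable[[str], None],
) -> None:
    # Transmit audio to the NLP system; receive transcript and candidate terms.
    transcript, candidates = transcribe(audio)

    # Output the transcript with an indicator (here, [[term]]) per candidate term.
    marked = transcript
    for term in candidates:
        marked = marked.replace(term, f"[[{term}]]")
    display(marked)

    # Stand-in for interactive selection: on selection of an indicator,
    # request additional information and output the explanation.
    for term in candidates:
        display(f"{term}: {fetch_info(term)}")
```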
Claims 9-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 9-15 depend from claim 8, and thus recite the limitations of claim 8. For the reasons discussed above for claim 8, the claim 8 limitations recite abstract ideas. The additional limitations of claims 9-15 do not preclude the steps of claim 8 from practically being personal behavior. For example, a person using the method of claim 8 to provide additional information to a reader of a transcript could also perform the limitations of claims 9-15 as personal behavior:
- Claim 9: A person could assess the type of additional information a participant in the transcript conversation requires based on the terminology used by the participant in the conversation, and provide the additional information in a format based on that assessment.
- Claim 10: A person could provide the additional information with the transcript.
- Claim 11: A person could provide information from a source selected by a participant in the transcript conversation.
- Claim 12: A person could identify utterances from a participant in the transcript conversation related to the candidate term, and provide the utterances to a second participant in the conversation.
- Claim 13: A person could underline segments of the transcript from which the candidate term is extracted.
- Claim 14: A person could indicate a term included in an action item assigned to a participant in the transcript conversation.
- Claim 15: A person could write an action item using terminology and context from the transcript.

If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claims recite an abstract idea. The claims do not integrate the judicial exception into a practical application. For the reasons discussed above for claim 8, the additional element amounts to no more than mere instructions to apply the exception using generic computer components. Accordingly, this element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. For the reasons discussed above for claim 8, mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless -
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, 7-8, 10 and 12-13 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bradley et al. (US Patent No. 12,125,487), hereinafter Bradley.

Regarding claim 1, Bradley discloses a method, comprising:

analyzing a transcript of a conversation, by a Natural Language Processing (NLP) system, to identify at least one candidate term in the conversation to provide supplemental information for to a reader of the transcript (Column 9, lines 37-48, "According to some embodiments, the system can proceed to transcribe 33 the speech segments into text strings with one or more speech recognition systems. A speech recognition system can be an Automatic Speech Recognition (ASR) and natural language understanding (NLU) system that is configured to infer at least one semantic meaning of an audio segment based on various statistical acoustic and language models and grammars. According to some embodiments, the speech recognition system can comprise at least one acoustic model. The speech recognition system can further comprise one or more pronunciation models, lexicons, and language models for natural language processing."; Column 15, lines 48-54, "According to some embodiments, as another type of metadata, the system can analyze semantic information and create embedded hyperlinks to look up related information in the transcript. The system can identify words or key phrases within a text string and present the word/key phrase as a hyperlink anchor corresponding to a URL, which can provide additional information of the word or key phrase."; Analyzing semantic information in a transcript reads on analyzing a transcript of a conversation; identifying a word or phrase and presenting the word or phrase as a hyperlink anchor corresponding to a URL which can provide additional information of the word or phrase reads on identifying a candidate term in the conversation to provide supplemental information for to a reader of the transcript; and an Automatic Speech Recognition (ASR) and natural language understanding (NLU) system reads on a Natural Language Processing system.);

in response to receiving, from the reader, a selection of the candidate term, formatting a query that includes the candidate term (Column 15, lines 48-54, "According to some embodiments, as another type of metadata, the system can analyze semantic information and create embedded hyperlinks to look up related information in the transcript. The system can identify words or key phrases within a text string and present the word/key phrase as a hyperlink anchor corresponding to a URL, which can provide additional information of the word or key phrase."; Column 16, lines 5-27, "According to some embodiments, the identified word or key phrase as the embedded hyperlink anchor can further comprise an argument in the URL when text adjacent to the key phrase matches a pattern corresponding to an argument. Various types of arguments can be supported, such as numbers, an enumerated value such as a month name, or reference to database entries such as employee names. According to some embodiments, the system can recognize terms related to relevant data, such as weather or stock prices, and automatically generate hyperlinks to retrieve such data. For example, the text 'what's the weather right now in' followed by a word that matches the name of a city in a list of known cities would cause the name of the city to be tagged with a hyperlink that references an API URL to look up the weather and a query argument that is the name of the city. For dynamic information such as weather, some embodiments can capture the data by accessing the URL, store it, and provide it with the stored transcript to provide an accurate record as of when the meeting was recorded. In some embodiments, key phrases can be specified as natural language grammars with slots in the location of relevant variables."; Generating hyperlinks for words or phrases in a transcript to look up related information when the reader selects the hyperlink reads on receiving a selection of a candidate term, and generating a query to retrieve information that includes the word or phrase as a query argument reads on formatting a query that includes the candidate term.);

in response to receiving a reply to the query: summarizing the reply into an explanation in a human-readable format (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Column 16, lines 16-27, "For example, the text 'what's the weather right now in' followed by a word that matches the name of a city in a list of known cities would cause the name of the city to be tagged with a hyperlink that references an API URL to look up the weather and a query argument that is the name of the city. For dynamic information such as weather, some embodiments can capture the data by accessing the URL, store it, and provide it with the stored transcript to provide an accurate record as of when the meeting was recorded. In some embodiments, key phrases can be specified as natural language grammars with slots in the location of relevant variables."; Displaying a window with information from an employee profile when the name of an employee is selected reads on summarizing a reply into an explanation in a human-readable format in response to receiving a reply to a query.); and

outputting the explanation to the reader (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Displaying a window with information from an employee profile when the name of an employee is selected reads on outputting the explanation to the reader.).

Regarding claim 3, Bradley discloses the method as claimed in claim 1. Bradley further discloses: wherein the explanation includes a hyperlink to usage of the candidate term in the transcript associated with an utterance received from a party to the conversation (Column 15, lines 48-54, "According to some embodiments, as another type of metadata, the system can analyze semantic information and create embedded hyperlinks to look up related information in the transcript. The system can identify words or key phrases within a text string and present the word/key phrase as a hyperlink anchor corresponding to a URL, which can provide additional information of the word or key phrase."; Identifying words or key phrases within a text string of a transcript and presenting the word or key phrase as a hyperlink corresponding to a URL which can provide additional information of the word or key phrase reads on an explanation including a hyperlink to usage of the candidate term in the transcript associated with an utterance received from a party to the conversation.).

Regarding claim 7, Bradley discloses the method as claimed in claim 1. Bradley further discloses: wherein the explanation includes a hyperlink to an external source used by the NLP system to generate contents of the explanation (Column 15, lines 48-54, "According to some embodiments, as another type of metadata, the system can analyze semantic information and create embedded hyperlinks to look up related information in the transcript. The system can identify words or key phrases within a text string and present the word/key phrase as a hyperlink anchor corresponding to a URL, which can provide additional information of the word or key phrase."; Identifying a word or phrase and presenting the word or phrase as a hyperlink corresponding to a URL which can provide additional information of the word or phrase reads on the explanation including a hyperlink to an external source used by the NLP system to generate contents of the explanation.).
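The pattern-plus-argument behavior Bradley describes above (a key phrase followed by a slot value becomes a hyperlink whose URL carries the value as a query argument) can be pictured with a short sketch. This is an editorial illustration under assumed names (WEATHER_API, tag_weather_links, a toy city list), not code from Bradley.

```python
# Editorial sketch of Bradley-style hyperlink anchoring; names are hypothetical.
import re
from urllib.parse import quote

KNOWN_CITIES = {"Austin", "Boston", "Chicago"}
WEATHER_API = "https://api.example.com/weather"  # placeholder URL

def tag_weather_links(text: str) -> str:
    """Tag "what's the weather right now in <city>" so the city becomes a
    hyperlink anchor whose URL carries the city as a query argument."""
    pattern = re.compile(r"(what's the weather right now in)\s+(\w+)", re.IGNORECASE)

    def repl(m: re.Match) -> str:
        city = m.group(2)
        if city not in KNOWN_CITIES:
            return m.group(0)  # not in the known-city list; leave untagged
        url = f"{WEATHER_API}?city={quote(city)}"
        return f'{m.group(1)} <a href="{url}">{city}</a>'

    return pattern.sub(repl, text)

print(tag_weather_links("She asked what's the weather right now in Boston today."))
```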
Regarding claim 8, Bradley discloses a method, comprising:

transmitting, to a Natural Language Processing (NLP) system, audio from a conversation including utterances from a first party and a second party (Column 1, lines 49-51, "The present subject matter pertains to improved approaches to transcribe multi-speaker conversations or meetings to a textual transcript with metadata."; Column 6, lines 34-36, "The terminal at the first location can have a wireless connection to a smart speaker 19 that captures sound and transmits it to the terminal"; Column 9, lines 37-44, "According to some embodiments, the system can proceed to transcribe 33 the speech segments into text strings with one or more speech recognition systems. A speech recognition system can be an Automatic Speech Recognition (ASR) and natural language understanding (NLU) system that is configured to infer at least one semantic meaning of an audio segment based on various statistical acoustic and language models and grammars."; Capturing sound and transmitting it to a terminal, where the sound is a multi-speaker conversation, reads on transmitting audio from a conversation including utterances from a first party and a second party, and an Automatic Speech Recognition (ASR) and natural language understanding (NLU) system reads on a Natural Language Processing system.);

receiving a transcript of the conversation from the NLP system and a candidate term identified from the transcript (Column 9, lines 37-48, "According to some embodiments, the system can proceed to transcribe 33 the speech segments into text strings with one or more speech recognition systems. A speech recognition system can be an Automatic Speech Recognition (ASR) and natural language understanding (NLU) system that is configured to infer at least one semantic meaning of an audio segment based on various statistical acoustic and language models and grammars. According to some embodiments, the speech recognition system can comprise at least one acoustic model. The speech recognition system can further comprise one or more pronunciation models, lexicons, and language models for natural language processing."; Column 15, lines 48-54, "According to some embodiments, as another type of metadata, the system can analyze semantic information and create embedded hyperlinks to look up related information in the transcript. The system can identify words or key phrases within a text string and present the word/key phrase as a hyperlink anchor corresponding to a URL, which can provide additional information of the word or key phrase."; Transcribing the speech segments into text strings reads on receiving a transcript of the conversation, and identifying a word or phrase and presenting the word or phrase as a hyperlink anchor corresponding to a URL which can provide additional information of the word or phrase reads on receiving a candidate term identified from the transcript.);

outputting, to the first party, a display of the transcript and an indicator associated with the candidate term (Column 15, lines 14-17, "According to some embodiments, various metadata can be captured and incorporated to enrich the display and functionality of the transcript."; Column 15, lines 48-54, "According to some embodiments, as another type of metadata, the system can analyze semantic information and create embedded hyperlinks to look up related information in the transcript. The system can identify words or key phrases within a text string and present the word/key phrase as a hyperlink anchor corresponding to a URL, which can provide additional information of the word or key phrase."; The display of the transcript reads on outputting a display of the transcript, and presenting a word or phrase as a hyperlink corresponding to a URL which can provide additional information of the word or phrase reads on outputting an indicator associated with the candidate term.);

in response to receiving a selection of the indicator, transmitting a request for additional information on the candidate term (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Clicking or tapping text and hovering a pointer over a hyperlink reads on receiving a selection of the indicator, and invoking a browser that accesses web information on an employee profile reads on transmitting a request for additional information on the candidate term.);

receiving an explanation that summarizes data related to the candidate term retrieved from a supplemental data source, wherein the explanation is provided in a human-readable format (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Column 16, lines 16-27, "For example, the text 'what's the weather right now in' followed by a word that matches the name of a city in a list of known cities would cause the name of the city to be tagged with a hyperlink that references an API URL to look up the weather and a query argument that is the name of the city. For dynamic information such as weather, some embodiments can capture the data by accessing the URL, store it, and provide it with the stored transcript to provide an accurate record as of when the meeting was recorded. In some embodiments, key phrases can be specified as natural language grammars with slots in the location of relevant variables."; Invoking a browser that accesses web information on an employee profile and displaying information from the employee profile in a window reads on receiving an explanation that summarizes data related to the candidate term retrieved from a supplemental data source, wherein the explanation is provided in a human-readable format.); and

outputting, to the first party, the explanation (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Displaying a window with information from an employee profile when the name of an employee is selected reads on outputting the explanation.).

Regarding claim 10, Bradley discloses the method as claimed in claim 8. Bradley further discloses: wherein the explanation is received from the NLP system with the transcript (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; A viewer of the transcript reads on receiving the transcript, and displaying a window with information from an employee profile when the name of an employee is selected reads on receiving the explanation with the transcript.).

Regarding claim 12, Bradley discloses the method as claimed in claim 8. Bradley further discloses: further comprising: identifying utterances from the second party related to the candidate term; and in response to receiving the selection of the indicator, outputting the utterances to the first party (Column 1, lines 49-59, "The present subject matter pertains to improved approaches to transcribe multi-speaker conversations or meetings to a textual transcript with metadata. With incorporated metadata, the enriched transcript can extract the meeting content to allow a reviewer to fully understand an audio recording without listening to it. Furthermore, the incorporated metadata can be, for example, speaker diarization, timestamp markers, text format, embedded hyperlinks, playback controls, static or dynamic image repositories. The present subject matter further enables collaborative editing of the transcript among multiple editors."; Column 3, lines 4-10, "According to some embodiments, the method of the present subject matter further comprises: embedding hyperlinks within the plurality of text strings, wherein the hyperlinks are associated with corresponding speech segments of the audio streams. By receiving a selected hyperlink associated with a speech segment, a playback of relevant audio streams is possible."; Column 10, lines 3-5, "According to some embodiments, during a meeting in progress, the system can provide the transcript 35 for live viewing by meeting participants in real-time."; Embedding hyperlinks within the plurality of text strings, where the hyperlinks are associated with corresponding speech segments of the audio streams, reads on identifying utterances from the second party related to the candidate term; a playback of relevant audio streams when receiving a selected hyperlink associated with a speech segment reads on outputting the utterances to the first party in response to receiving the selection of the indicator; and the system providing the transcript for live viewing by meeting participants in real-time demonstrates that the identified utterances can be from the second party and the selection of the indicator for outputting the utterances can be received from the first party.).
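For the claim 12 mapping, the cited passages describe hyperlinks bound to speech segments so that selecting a link replays the corresponding audio. Below is a minimal sketch of such a binding; the types and function names (SpeechSegment, segments_for_term, on_indicator_selected) are hypothetical, not Bradley's code.

```python
# Editorial sketch: hyperlinks bound to speech segments for playback.
from dataclasses import dataclass

@dataclass
class SpeechSegment:
    speaker: str
    start_s: float  # offset into the recording, in seconds
    end_s: float
    text: str

def segments_for_term(segments: list[SpeechSegment], term: str,
                      speaker: str) -> list[SpeechSegment]:
    """Identify a given speaker's utterances that mention the term."""
    return [s for s in segments
            if s.speaker == speaker and term.lower() in s.text.lower()]

def on_indicator_selected(segments: list[SpeechSegment], term: str,
                          speaker: str, play) -> None:
    """On selection of the indicator, replay the relevant audio spans
    (`play` is any callable taking start and end offsets)."""
    for seg in segments_for_term(segments, term, speaker):
        play(seg.start_s, seg.end_s)
```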
Regarding claim 13, Bradley discloses the method as claimed in claim 8. Bradley further discloses: wherein the explanation is received with a transcript generated by the NLP system, further comprising: adjusting display of the transcript in a graphical user interface to show segments of the transcript that the NLP system extracted the candidate term from (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Column 18, lines 58-61, "According to some embodiments, the display 82 can have a graphical user interface (GUI) with a pointer controlled by a hand-operated input device such as a mouse. The GUI can show conversation turns, each with a distinct color."; Identifying text segments that match the names of company employees and displaying the transcript text for the name in a different font color reads on adjusting display of the transcript to show segments of the transcript that the NLP system extracted the candidate term from.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 6 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Bradley in view of Hilleli et al. (US Patent No. 11,062,270), hereinafter Hilleli.
Regarding claim 2, Bradley discloses the method as claimed in claim 1, but does not specifically disclose: wherein the NLP system identifies the candidate term from an action item generated from the transcript.

Hilleli teaches: wherein the NLP system identifies the candidate term from an action item generated from the transcript (Column 1, lines 42-50, "Aspects of this disclosure relate to computerized systems for automatically determining and presenting enriched action items of an event, such as a meeting. An enriched action item may include an indication of the action item and related meeting content or contextual data useful for understanding the action item, and which may be personalized to a specific user, such as a meeting attendee. In particular, action items may be determined and then enhanced or clarified in order to be more understandable."; Column 2, lines 3-7, "In some embodiments, an intelligent graphical user interface may be provided that includes functionality to improve the user experience, such as features that allow the user to provide feedback indicating whether or not an item presented is in fact an action item."; Column 15, lines 13-18, "In some embodiments, the event content is or includes documents or transcripts of the order and content of everything that was said in an event written in natural language. For example, the event content can be a written transcript of everything that was said or uttered during an entire duration of a meeting."; Providing an indication of the action item and related meeting content or contextual data useful for understanding the action item reads on identifying a candidate term from an action item generated from the transcript.).

Hilleli is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley to incorporate the teachings of Hilleli to provide an indication of an action item and related meeting content or contextual data useful for understanding the action item. Doing so would allow for automatically modifying or supplementing an action item based on contextual data that is useful for understanding the action item and may be personalized to a specific user, such as a meeting attendee (Hilleli; Column 4, lines 13-24).

Regarding claim 6, Bradley discloses the method as claimed in claim 1, but does not specifically disclose: further comprising: in response to receiving, from the reader, the selection of the candidate term, adding the selection to an aggregation report; and providing the aggregation report to a second party to the conversation, other than the reader.

Hilleli teaches: in response to receiving, from the reader, the selection of the candidate term, adding the selection to an aggregation report (Column 2, lines 3-7, "In some embodiments, an intelligent graphical user interface may be provided that includes functionality to improve the user experience, such as features that allow the user to provide feedback indicating whether or not an item presented is in fact an action item."; Column 25, line 61 - Column 26, line 1, "The enriched action item assembler 288 is generally responsible for generating a list of action items and may also provide related information, such as event context. For example, the enriched action item assembler 288 can generate a list of action items and consolidate it with other information determined by the action item enrichment 282, such as who owns the action item, who stated the action item, when the action item is due, and what the action item is."; The user providing feedback indicating whether or not an item presented is in fact an action item reads on receiving the selection of the candidate term from the reader, and a list of action items consolidated with other information reads on an aggregation report.); and providing the aggregation report to a second party to the conversation, other than the reader (Column 11, line 59 - Column 12, line 4, "Continuing with FIG. 2, example system 200 includes a meeting monitor 250. Meeting monitor 250 is generally responsible for determining and/or detecting meeting features from online meetings and/or in-person meetings and making the meeting features available to the other components of the system 200. For example, such monitored activity can be meeting location (e.g., as determined by geo-location of user devices), topic of the meeting, invitees of the meeting, whether the meeting is recurring, related deadlines, projects and the like. In some aspects, meeting monitor 250 determines and provides a set of meeting features (such as described below), for a particular meeting, and for each user associated with the meeting."; Providing meeting features to each user associated with a meeting reads on providing an aggregation report to a second party to the conversation other than the reader.).

Hilleli is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley to incorporate the teachings of Hilleli to provide user feedback indicating whether or not an item presented is an action item, generate a list of action items consolidated with other information, and provide meeting features to each user associated with a meeting. Doing so would allow for automatically modifying or supplementing an action item based on contextual data that is useful for understanding the action item and may be personalized to a specific user, such as a meeting attendee (Hilleli; Column 4, lines 13-24).

Regarding claim 14, Bradley discloses the method as claimed in claim 8, but does not specifically disclose: wherein the indicator is included in an action item assigned to the first party based on the transcript.

Hilleli teaches: wherein the indicator is included in an action item assigned to the first party based on the transcript (Column 3, lines 23-42, "Certain aspects of this disclosure automatically and intelligently determining and presenting enriched action items of an event (e.g., a meeting, an interactive workshop, an informal gathering, and the like.). An 'action item' as described herein may comprise a task indicated in the event that is requested to be completed to further a certain goal or purpose associated with the event. An action item in various instances may be issued via a command or other request by a person to have another person(s) (or themselves) perform some action. In an illustrative example of an action item, during a meeting regarding the development of a certain computer application, a person may say, 'Bob, can you perform a round of debugging on the app today?,' which is an action item for Bob to perform a debugging action today in order to have the application ready for deployment. An 'enriched' action item may include an indication of the action item and related meeting content or contextual information useful for understanding the action item, and which may be personalized to a specific user, such as a meeting attendee."; An indication of an action item and related meeting content or contextual information reads on an indicator included in an action item, and an action item requested to be performed by a person in the meeting reads on an action item assigned to the first party based on the transcript.).

Hilleli is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley to incorporate the teachings of Hilleli to include an indication of an action item and related meeting content or contextual information for an action item requested to be performed by a person in the meeting. Doing so would allow for automatically modifying or supplementing an action item based on contextual data that is useful for understanding the action item and may be personalized to a specific user, such as a meeting attendee (Hilleli; Column 4, lines 13-24).

Regarding claim 15, Bradley in view of Hilleli discloses the method as claimed in claim 14. Hilleli further teaches: wherein the action item is created by the NLP system using terminology and context from a transcript of the conversation (Column 1, lines 42-50, "Aspects of this disclosure relate to computerized systems for automatically determining and presenting enriched action items of an event, such as a meeting. An enriched action item may include an indication of the action item and related meeting content or contextual data useful for understanding the action item, and which may be personalized to a specific user, such as a meeting attendee. In particular, action items may be determined and then enhanced or clarified in order to be more understandable."; Column 5, lines 8-25, "By way of example, an initial action item candidate determined by an embodiment of the present disclosure may read, 'I will do it.' Using natural language processing and/or machine learning components as described herein, it may be determined that this sentence is unclear (e.g., that it is below a threshold clarity score). This may occur, for example, by determining that no names and only pronouns exist within the sentence and/or that no date is identified. Accordingly, it may be initially unclear: who is saying or otherwise inputting this sentence, what 'it' refers to, when 'it' will be completed by, and who is to complete the action item. In various embodiments, in response to determining that this candidate is below a clarity score, missing data can be determined from contextual data, as described above. Various embodiments of the present disclosure automatically modify or supplement an action item based on contextual data by determining and presenting the quantity of contextual data needed for an action item to be clear."; Determining action items from a meeting transcript using natural language processing and using contextual data reads on an action item being created by the NLP system using terminology and context from a transcript of the conversation.).

Hilleli is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley in view of Hilleli to further incorporate the teachings of Hilleli to determine action items from a meeting transcript using natural language processing and using contextual data. Doing so would allow for automatically modifying or supplementing an action item based on contextual data that is useful for understanding the action item and may be personalized to a specific user, such as a meeting attendee (Hilleli; Column 4, lines 13-24).
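The clarity-scoring idea in the Hilleli passage just quoted (flag a candidate like "I will do it." as unclear when it contains only pronouns and no date, then fill the gaps from context) can be sketched briefly. This is an editorial illustration with invented names and a toy scoring rule, not Hilleli's implementation.

```python
# Editorial sketch of action-item clarity scoring; the rule is a toy heuristic.
import re

PRONOUNS = {"i", "you", "he", "she", "it", "we", "they"}

def clarity_score(candidate: str) -> float:
    """Score an action-item candidate: penalize pronoun-only references
    and the absence of any date-like token."""
    words = re.findall(r"[A-Za-z']+", candidate)
    has_name = any(w[0].isupper() and w.lower() not in PRONOUNS for w in words[1:])
    has_date = bool(re.search(r"\b(today|tomorrow|monday|\d{1,2}/\d{1,2})\b",
                              candidate, re.IGNORECASE))
    return 0.2 + 0.4 * has_name + 0.4 * has_date

def enrich(candidate: str, context: dict, threshold: float = 0.6) -> str:
    """If the candidate is below the clarity threshold, supplement it with
    contextual data (owner, task, due date) captured from the meeting."""
    if clarity_score(candidate) >= threshold:
        return candidate
    return f"{context['owner']} will {context['task']} by {context['due']}."

print(enrich("I will do it.", {"owner": "Bob", "task": "debug the app", "due": "today"}))
```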
Claims 4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Bradley in view of Nguyen et al. (US Patent No. 11,404,049), hereinafter Nguyen.

Regarding claim 4, Bradley discloses the method as claimed in claim 1, but does not specifically disclose: wherein contents of the explanation are retrieved from a supplemental data source selected by a party in the conversation, different from the reader.

Nguyen teaches: wherein contents of the explanation are retrieved from a supplemental data source selected by a party in the conversation, different from the reader (Column 4, lines 34-42, "Providing mechanisms for automatically surfacing a real-time transcription of a speaking user in association with a productivity application and augmenting that surfacing with note taking features also provides an enhanced user experience. For example, a user may take notes related to a speech (e.g., a lecture) in a first window of a productivity application, while having a real-time transcription of the speech surfaced next to that window."; Column 6, line 64 - Column 7, line 13, "For example, if the user that provides speech 230 is presenting a lecture with one or more corresponding documents related to the lecture (e.g., an electronic slide show from a presentation application on organic chemistry, a lecture handout, etc.), the language processing models used to transcribe speech 230 may utilize that material (e.g., via analysis of those electronic documents) to develop a custom corpus and/or dictionary that may be utilized in the language processing models in determining correct output for speech 230. This is illustrated by domain-specific dictionaries/corpus 234. According to some examples, vocabulary (e.g., words, phrases) that is determined to be specific and/or unique to a particular language processing model, custom corpus, and/or custom dictionary, may be automatically highlighted and/or otherwise distinguished from other captions in a transcription pane in a productivity application."; Augmenting a transcription of a lecture utilizing material provided by the user presenting the lecture reads on retrieving contents of the explanation from a supplemental data source selected by a party in the conversation different from the reader.).

Nguyen is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley to incorporate the teachings of Nguyen to augment a transcription of a lecture utilizing material provided by the user presenting the lecture. Doing so would allow for a user taking notes related to a lecture to surface standard definitions and custom definitions for words in a transcription of the lecture (Nguyen; Column 4, lines 32-52).
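Nguyen's custom-dictionary idea (presenter-supplied documents seed a domain-specific vocabulary that the transcription pane then highlights) is easy to picture in miniature. The sketch below uses invented names (build_custom_dictionary, highlight_domain_terms) and a toy frequency heuristic; it is an editorial illustration, not Nguyen's code.

```python
# Editorial sketch of a presenter-supplied custom dictionary; names invented.
import re
from collections import Counter

COMMON = {"the", "a", "an", "and", "of", "to", "in", "is", "that", "for"}

def build_custom_dictionary(documents: list[str], top_n: int = 50) -> set[str]:
    """Derive a domain-specific vocabulary from lecture materials supplied
    by the presenter (slides, handouts)."""
    counts = Counter(
        w for doc in documents
        for w in re.findall(r"[a-z]+", doc.lower())
        if w not in COMMON and len(w) > 4
    )
    return {w for w, _ in counts.most_common(top_n)}

def highlight_domain_terms(caption: str, vocab: set[str]) -> str:
    """Distinguish custom-dictionary vocabulary in a transcription caption,
    here by wrapping each hit in asterisks."""
    return " ".join(f"*{w}*" if w.lower().strip(".,") in vocab else w
                    for w in caption.split())

vocab = build_custom_dictionary(["Alkene hydrogenation uses a palladium catalyst."])
print(highlight_domain_terms("The palladium catalyst speeds it up.", vocab))
```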
Regarding claim 11, Bradley discloses the method as claimed in claim 8, but does not specifically disclose: wherein the supplemental data source is selected by the second party.

Nguyen teaches: wherein the supplemental data source is selected by the second party (Column 4, lines 34-42, "Providing mechanisms for automatically surfacing a real-time transcription of a speaking user in association with a productivity application and augmenting that surfacing with note taking features also provides an enhanced user experience. For example, a user may take notes related to a speech (e.g., a lecture) in a first window of a productivity application, while having a real-time transcription of the speech surfaced next to that window."; Column 6, line 64 - Column 7, line 13, "For example, if the user that provides speech 230 is presenting a lecture with one or more corresponding documents related to the lecture (e.g., an electronic slide show from a presentation application on organic chemistry, a lecture handout, etc.), the language processing models used to transcribe speech 230 may utilize that material (e.g., via analysis of those electronic documents) to develop a custom corpus and/or dictionary that may be utilized in the language processing models in determining correct output for speech 230. This is illustrated by domain-specific dictionaries/corpus 234. According to some examples, vocabulary (e.g., words, phrases) that is determined to be specific and/or unique to a particular language processing model, custom corpus, and/or custom dictionary, may be automatically highlighted and/or otherwise distinguished from other captions in a transcription pane in a productivity application."; Augmenting a transcription of a lecture utilizing material provided by the user presenting the lecture reads on the supplemental data source being selected by the second party.).

Nguyen is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley to incorporate the teachings of Nguyen to augment a transcription of a lecture utilizing material provided by the user presenting the lecture. Doing so would allow for a user taking notes related to a lecture to surface standard definitions and custom definitions for words in a transcription of the lecture (Nguyen; Column 4, lines 32-52).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Bradley in view of Muralitharan et al. (US Patent Application Publication No. 2022/0414348), hereinafter Muralitharan.

Regarding claim 5, Bradley discloses the method as claimed in claim 1.
Bradley further discloses: wherein the explanation uses vocabulary extracted from the transcript (Column 15, line 58 - Column 16, line 1, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile."; Identifying words that match the names of articles, companies, or employees reads on using vocabulary extracted from the transcript.). Bradley does not specifically disclose: wherein the human-readable format is selected from a plurality of human-readable formats by the NLP system based on a supplementation level of the reader. Muralitharan teaches: wherein the human-readable format is selected from a plurality of human-readable formats by the NLP system based on a supplementation level of the reader (Paragraph 0037, lines 1-10, "The techniques introduced herein help to bridge the gap between text for presentation to a user and the reading comprehension level of that user. In various aspects, a multi-layered system is introduced herein that assesses the comprehension level of source material (text/audio transcripts), learns and establishes the comprehension level of the recipient, analyzes comprehension equality at the topic and keyword level, and/or adaptively transforms source material to the required comprehension of the target recipient."; Paragraph 0046, lines 1-14, "According to various embodiments, as detailed further below, STERLA engine 302 may be operable to "translate" the text, audio, video or other content provided by one participant of an online conversation such that it is at, or below, that of the comprehension level of another participant. Consider, for instance, user 306 sending a text message to user 308 via collaboration service 304. This text may range from being very simplistic to using phrases or terminology that is highly specialized. As a result, there are instances in which user 308 may not fully understand what user 306 is saying. A key functionality of STERLA engine 302 is to assess this text and adjust it for presentation to user 308, to increase the potential for user 308 to comprehend the message."; Adaptively transforming source material to the required comprehension of the target recipient reads on selecting a human-readable format based on a supplementation level of the reader.). Muralitharan is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley to incorporate the teachings of Muralitharan to transform source material to the required comprehension of the target recipient. Doing so would allow for adjusting the original text of a conversation to include additional context to allow a user to better understand the conversation (Muralitharan; Paragraph 0082, lines 1-9). Claim 9 is rejected under 35 U.S.C.
103 as being unpatentable over Bradley in view of Muralitharan and Collins-Thompson et al. ("Personalizing Web Search Results by Reading Level"), hereinafter Collins-Thompson. Regarding claim 9, Bradley discloses the method as claimed in claim 8, but does not specifically disclose: further comprising: assessing a supplementation level of the first party using terminology extracted from utterances associated with the first party in a transcript of the conversation; selecting the human-readable format from a plurality of human-readable formats based on the supplementation level assessed for the first party. Muralitharan teaches: assessing a supplementation level of the first party using terminology extracted from utterances associated with the first party in a transcript of the conversation (Paragraph 0037, lines 1-10, "The techniques introduced herein help to bridge the gap between text for presentation to a user and the reading comprehension level of that user. In various aspects, a multi-layered system is introduced herein that assesses the comprehension level of source material (text/audio transcripts), learns and establishes the comprehension level of the recipient, analyzes comprehension equality at the topic and keyword level, and/or adaptively transforms source material to the required comprehension of the target recipient."; Paragraph 0057, lines 1-5, "According to various embodiments, text comprehension level identifier 406 may assess the text of the conversation, as well as any potential topics extracted by topic extractor 404, to assign a text comprehension level to any portion of the text."; Learning the comprehension level of the recipient by assessing the text of the conversation reads on assessing a supplementation level of the first party using terminology extracted from utterances associated with the first party in a transcript of the conversation.); selecting the human-readable format from a plurality of human-readable formats based on the supplementation level assessed for the first party (Paragraph 0037, lines 1-10, "The techniques introduced herein help to bridge the gap between text for presentation to a user and the reading comprehension level of that user. In various aspects, a multi-layered system is introduced herein that assesses the comprehension level of source material (text/audio transcripts), learns and establishes the comprehension level of the recipient, analyzes comprehension equality at the topic and keyword level, and/or adaptively transforms source material to the required comprehension of the target recipient."; Paragraph 0046, lines 1-14, "According to various embodiments, as detailed further below, STERLA engine 302 may be operable to "translate" the text, audio, video or other content provided by one participant of an online conversation such that it is at, or below, that of the comprehension level of another participant. Consider, for instance, user 306 sending a text message to user 308 via collaboration service 304. This text may range from being very simplistic to using phrases or terminology that is highly specialized. As a result, there are instances in which user 308 may not fully understand what user 306 is saying.
A key functionality of STERLA engine 302 is to assess this text and adjust it for presentation to user 308, to increase the potential for user 308 to comprehend the message."; Adaptively transforming source material to the required comprehension of the target recipient reads on selecting the human-readable format from a plurality of human-readable formats based on the supplementation level assessed for the first party.). Muralitharan is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley to incorporate the teachings of Muralitharan to learn the comprehension level of the recipient by assessing the text of a conversation and transform source material to the required comprehension of the target recipient. Doing so would allow for adjusting the original text of a conversation to include additional context to allow a user to better understand the conversation (Muralitharan; Paragraph 0082, lines 1-9). Bradley in view of Muralitharan does not specifically disclose: requesting the explanation from the NLP system according to the human-readable format selected. Collins-Thompson teaches: requesting the explanation from the NLP system according to the human-readable format selected (Abstract, lines 14-20, "We show how reading level can provide a valuable new relevance signal for both general and personalized Web search. We describe models and algorithms to address the three key problems in improving relevance for search using reading difficulty: estimating user proficiency, estimating result difficulty, and re-ranking based on the difference between user and result reading level profiles."; Section 3.1.1, lines 1-8, "The reading difficulty prediction method that we use for this study, summarized in this section, has been shown to be effective for both short, noisy texts, and full-page Web texts. Unlike traditional measures that compute a single numeric score, methods based on statistical language modeling provide extra information about score reliability by computing the likely distribution over levels, which can be used to compute confidence estimates."; Performing a web search based on the reading level of the user reads on requesting the explanation according to the human-readable format selected, and a statistical language model reads on an NLP system.). Collins-Thompson is considered to be analogous to the claimed invention because it is reasonably pertinent to the problem of personalized information retrieval. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley in view of Muralitharan to incorporate the teachings of Collins-Thompson to perform a web search based on the reading level of a user. Doing so would allow for improving relevance for a web search (Collins-Thompson; Section 7, lines 1-8). Claims 16 – 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bradley in view of Adlersberg et al. (US Patent No. 11,501,780), hereinafter Adlersberg.
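For illustration only, a minimal sketch of the comprehension-level personalization discussed in the claim 9 analysis above: a reader's supplementation level is estimated from their own utterances and used to pick an explanation format. The average-word-length heuristic and format names are hypothetical placeholders; Muralitharan and Collins-Thompson both describe statistical models rather than this simple rule.

```python
# Hypothetical sketch of reading-level-based format selection; not the
# implementation of Muralitharan or Collins-Thompson.
def estimate_supplementation_level(reader_utterances):
    # Crude heuristic: longer average word length stands in for a higher
    # comprehension level. Real systems fit language models per reading level.
    words = [w for utterance in reader_utterances for w in utterance.split()]
    if not words:
        return "standard"
    avg_len = sum(len(w) for w in words) / len(words)
    if avg_len < 4.5:
        return "plain-language"
    return "standard" if avg_len < 6.0 else "technical"

# Hypothetical mapping from supplementation level to explanation format.
FORMATS = {
    "plain-language": "short definition with an everyday example",
    "standard": "dictionary-style definition",
    "technical": "precise definition with citations to source material",
}

def select_human_readable_format(reader_utterances):
    # Request the explanation in the format matched to the assessed level.
    return FORMATS[estimate_supplementation_level(reader_utterances)]
```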
Regarding claim 16, Bradley discloses a method, comprising: receiving, from a Natural Language Processing (NLP) system, a transcript of a conversation between at least a first party and a second party [and a summary of the transcript] that includes a candidate term to provide supplemental information for (Column 9, lines 37-48, "According to some embodiments, the system can proceed to transcribe 33 the speech segments into text strings with one or more speech recognition systems. A speech recognition system can be an Automatic Speech Recognition (ASR) and natural language understanding (NLU) system that is configured to infer at least one semantic meaning of an audio segment based on various statistical acoustic and language models and grammars. According to some embodiments, the speech recognition system can comprise at least one acoustic model. The speech recognition system can further comprise one or more pronunciation models, lexicons, and language models for natural language processing."; Column 15, lines 48-54, "According to some embodiments, as another type of metadata, the system can analyze semantic information and create embedded hyperlinks to look up related information in the transcript. The system can identify words or key phrases within a text string and present the word/key phrase as a hyperlink anchor corresponding to a URL, which can provide additional information of the word or key phrase."; Transcribing the speech segments into text strings reads on receiving a transcript of the conversation, and identifying a word or phrase and presenting the word or phrase as a hyperlink anchor corresponding to a URL which can provide additional information of the word or phrase reads on receiving a candidate term to provide supplemental information for.); in response to receiving a selection of the indicator from a reader: generating an informational window in the user interface that includes an explanation related to the candidate term (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. 
In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Clicking or tapping text and hovering a pointer over a hyperlink reads on receiving a selection of the indicator, and displaying a window with information from an employee profile when the name of an employee is selected reads on generating an informational window in the user interface that includes an explanation related to the candidate term.); and adjusting display of the user interface to highlight a section of the transcript from which the NLP system identified the candidate term as part of a key point from the conversation, wherein the informational window is positioned with the section to maintain legibility of the section and the informational window (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Column 18, lines 58-61, "According to some embodiments, the display 82 can have a graphical user interface (GUI) with a pointer controlled by a hand-operated input device such as a mouse. The GUI can show conversation turns, each with a distinct color."; Identifying text segments that match the names of company employees and displaying the transcript text for the name in a different font color reads on highlighting a section of the transcript from which the NLP system identified the candidate term as part of a key point from the conversation, and hovering a pointer over a hyperlink to pop up a mini window with information, where the mini window disappears when the pointer is moved away from the hyperlink reads on positioning the informational window to maintain legibility of the section and the informational window.). Bradley does not specifically disclose: receiving, from a Natural Language Processing (NLP) system, a transcript of a conversation between at least a first party and a second party and a summary of the transcript that includes a candidate term to provide supplemental information for; generating a first display on a user interface that includes the transcript, the summary, and an indicator for the candidate term.
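For illustration only, a minimal sketch of the highlight-and-popup behavior mapped above: the informational window is placed below the highlighted transcript section when it fits in the viewport, otherwise above it, so that neither element occludes the other. The geometry and names here are hypothetical, not Bradley's implementation.

```python
# Hypothetical window-placement sketch; assumes ordinary window sizes that
# fit within the viewport.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    width: int
    height: int

def position_info_window(section: Rect, window: Rect, viewport: Rect) -> Rect:
    # Prefer placing the window directly below the highlighted section.
    below_y = section.y + section.height
    if below_y + window.height <= viewport.y + viewport.height:
        y = below_y
    else:
        y = section.y - window.height  # fall back to above the section
    # Clamp horizontally so the window stays inside the viewport.
    x = min(max(section.x, viewport.x),
            viewport.x + viewport.width - window.width)
    return Rect(x, y, window.width, window.height)
```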
Adlersberg teaches: receiving, from a Natural Language Processing (NLP) system, a transcript of a conversation between at least a first party and a second party and a summary of the transcript that includes a candidate term to provide supplemental information for (Column 2, line 58 - Column 3, line 7, "The present invention provides systems and method that may utilize, for example, voice transcription (speech to text engine), speaker identification, textual/contextual analysis, Natural Language Processing (NLP) or Natural Language Understanding (NLU), automatic retrieval of additional information from the Internet and/or from a local repository (e.g., an organizational or enterprise repository) and/or from third-party databases, as well as artificial intelligence (AI) and/or other functionalities, in order to determine and analyze "who said what" in a meeting or a discussion or a presentation, and to generate a visual presentation that includes textual and/or graphical components that correspond to key elements that were discussed or presented, and/or that further expand on such key elements; thereby analyzing what was said or discussed in a meeting, and automatically generating from it a visual layer that corresponds to it."; Column 5, lines 28-30, "A speech-to-text converter 107 may process all the audio from all sources, and may generate a textual transcript of the discussions held in the meeting."; Column 10, line 51 - Column 11, line 4, "An Automatic Presentation Generator 120 may collect the data and/or content items that was generated or detected or prepared by the other components, and may generate or create a presentation that corresponds to the meeting. For example, the Automatic Presentation Generator 120 may generate a slide with the Title of the meeting; a slide with the Agenda or Topic List of the meeting; multiple slides with summaries of each topics or sub-topics; a slide with conclusions and decisions; a slide with action items or "to do" items; a slide with meta-data about the meeting (names and type of participants; name and role of presented/leader; time-slot of the meeting; time-slots in which each user participated; means of participation by each user). Optionally, at least one of the generated slides, or at least some of the generated slides, may comprise augmented content-item(s), such as an image of a person or a place that was discussed, or a visual or graphical content-item that corresponds to failure or success or to positive or negative indicators, or a hyperlink to a current event, or a brief description of image of a current event, or other augmented content."; Generating a textual transcript of the discussions held in the meeting reads on receiving a transcript of a conversation between at least a first party and a second party, generating slides with summaries of topics or sub-topics reads on a summary of the transcript, and the generated slides comprising augmented content items such as a hyperlink to a current event reads on a summary of the transcript that includes a candidate term to provide supplemental information for.); generating a first display on a user interface that includes the transcript, the summary, and an indicator for the candidate term (Column 10, line 51 - Column 11, line 4, "An Automatic Presentation Generator 120 may collect the data and/or content items that was generated or detected or prepared by the other components, and may generate or create a presentation that corresponds to the meeting.
For example, the Automatic Presentation Generator 120 may generate a slide with the Title of the meeting; a slide with the Agenda or Topic List of the meeting; multiple slides with summaries of each topics or sub-topics; a slide with conclusions and decisions; a slide with action items or "to do" items; a slide with meta-data about the meeting (names and type of participants; name and role of presented/leader; time-slot of the meeting; time-slots in which each user participated; means of participation by each user). Optionally, at least one of the generated slides, or at least some of the generated slides, may comprise augmented content-item(s), such as an image of a person or a place that was discussed, or a visual or graphical content-item that corresponds to failure or success or to positive or negative indicators, or a hyperlink to a current event, or a brief description of image of a current event, or other augmented content."; Column 13, lines 28-37, "In some embodiments, the method comprises: generating a plurality of slides in said visual presentation, wherein a first slide comprises: (i) a summary of a first topic, and (ii) a first User Interface element that, when engaged, causes the method to display a first portion of the transcript of said meeting that corresponds to said first topic; wherein a second slide comprises: (i) a summary of a second topic, and (ii) a second User Interface element that, when engaged, causes the method to display a second portion of the transcript of said meeting that corresponds to said second topic."; Generating slides in a visual presentation comprising the transcript and a summary, where the summary includes augmented content items such as a hyperlink to a current event, reads on generating a first display on a user interface that includes the transcript, the summary, and an indicator for the candidate term.). Adlersberg is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley to incorporate the teachings of Adlersberg to generate a textual transcript of discussions held in the meeting and generate slides with summaries of topics or sub-topics, where the generated slides comprise augmented content items such as a hyperlink to a current event. Doing so would allow for automatic real-time moderation of meetings to manage and guide the meeting in real-time and selectively generate and convey real-time differential notifications and advice to particular participants (Adlersberg; Column 1, lines 33-39). Regarding claim 17, Bradley in view of Adlersberg discloses the method as claimed in claim 16. Bradley further discloses: further comprising: providing audio playback of the section to the reader (Column 3, lines 4-10, "According to some embodiments, the method of the present subject matter further comprises: embedding hyperlinks within the plurality of text strings, wherein the hyperlinks are associated with corresponding speech segments of the audio streams. By receiving a selected hyperlink associated with a speech segment, a playback of relevant audio streams is possible."; Hyperlinks associated with corresponding speech segments of the audio streams and playback of the relevant audio streams reads on providing audio playback of the section to the reader.).
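For illustration only, a minimal sketch of the hyperlink-to-audio association Bradley describes: each transcript section keeps the time span of its source audio, and selecting the section's link replays only that segment. The data structure and the audio_player interface (seek/play_until) are hypothetical assumptions, not Bradley's implementation.

```python
# Hypothetical sketch of section-scoped audio playback.
from dataclasses import dataclass

@dataclass
class TranscriptSection:
    text: str
    start_seconds: float
    end_seconds: float

def on_section_link_selected(section: TranscriptSection, audio_player) -> None:
    # Replay only the audio behind the selected section of the transcript.
    # audio_player is an assumed interface with seek/play_until methods.
    audio_player.seek(section.start_seconds)
    audio_player.play_until(section.end_seconds)
```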
Regarding claim 19, Bradley in view of Adlersberg discloses the method as claimed in claim 16. Bradley further discloses: wherein the key point is generated by the NLP system using terminology and context from the transcript (Column 15, lines 48-54, "According to some embodiments, as another type of metadata, the system can analyze semantic information and create embedded hyperlinks to look up related information in the transcript. The system can identify words or key phrases within a text string and present the word/key phrase as a hyperlink anchor corresponding to a URL, which can provide additional information of the word or key phrase."; Analyzing semantic information in a transcript to identify a word or key phrase reads on generating the key point using terminology and context from the transcript.). Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Bradley in view of Adlersberg, and further in view of Muralitharan. Regarding claim 18, Bradley in view of Adlersberg discloses the method as claimed in claim 16, but does not specifically disclose: wherein the reader is the first party and the summary is tuned to a supplementation level of the first party that is based on vocabulary extracted from the transcript. Muralitharan teaches: wherein the reader is the first party and the summary is tuned to a supplementation level of the first party that is based on vocabulary extracted from the transcript (Paragraph 0037, lines 1-10, "The techniques introduced herein help to bridge the gap between text for presentation to a user and the reading comprehension level of that user. In various aspects, a multi-layered system is introduced herein that assesses the comprehension level of source material (text/audio transcripts), learns and establishes the comprehension level of the recipient, analyzes comprehension equality at the topic and keyword level, and/or adaptively transforms source material to the required comprehension of the target recipient."; Paragraph 0046, lines 1-14, "According to various embodiments, as detailed further below, STERLA engine 302 may be operable to "translate" the text, audio, video or other content provided by one participant of an online conversation such that it is at, or below, that of the comprehension level of another participant. Consider, for instance, user 306 sending a text message to user 308 via collaboration service 304. This text may range from being very simplistic to using phrases or terminology that is highly specialized. As a result, there are instances in which user 308 may not fully understand what user 306 is saying. A key functionality of STERLA engine 302 is to assess this text and adjust it for presentation to user 308, to increase the potential for user 308 to comprehend the message."; Paragraph 0068, lines 1-12, "The underlying logic for user comprehension level identifier 408 may be somewhat similar to how text comprehension level identifier 406 computes the comprehension levels for different portions of text. Here, user comprehension level identifier 408 may infer the comprehension level of a particular participant from their demographics or other directory information and/or by learning their comprehension level over time. In other words, user comprehension level identifier 408 may analyze the textual representation of the participants' own words, to score their comprehension level.
Note that in some embodiments, this scoring can also be on a per-topic basis."; Adaptively transforming source material to the required comprehension of the target recipient, where the target recipient is a participant of an online conversation, reads on tuning a summary to a supplementation level of the first party, wherein the reader is the first party, and inferring the comprehension level of a participant by analyzing the textual representation of the words of the participant reads on basing the supplementation level of the first party on vocabulary extracted from the transcript.). Muralitharan is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley in view of Adlersberg to incorporate the teachings of Muralitharan to transform source material to the required comprehension of the target recipient, where the target recipient is a participant of an online conversation, and infer the comprehension level of the participant by analyzing the textual representation of the words of the participant, where the summary as taught by Adlersberg is substituted for the source material as taught by Muralitharan. Doing so would allow for adjusting the original text of a conversation to include additional context to allow a user to better understand the conversation (Muralitharan; Paragraph 0082, lines 1-9). Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Bradley in view of Adlersberg, and further in view of Nguyen. Regarding claim 20, Bradley in view of Adlersberg discloses the method as claimed in claim 16. Bradley further discloses: populating the informational window with supplemental data from an external supplemental data source [identified by the second party, different than the reader,] that is under control of a third party, different from the second party and the reader (Column 15, line 58 - Column 16, line 4, "For example, the system can identify words that match the names of Wikipedia articles or companies listed on a stock market. For another example, the system can identify text segments that match the names of company employees. As shown in FIG. 7, at location 71 in the transcript, the system has identified the name of an employee, Peter Gibbons, and displayed the transcript text for the name in a different font color with underscoring. A viewer or editor of the transcript in an appropriately enabled application can click or tap the text to invoke a browser that accesses web information on Peter Gibbons' employee profile. In some editors, hovering a pointer over a hyperlink pops up a mini window with information, and moving the pointer away from the hyperlink causes the mini window to disappear."; Displaying a window with information from an employee profile when the name of an employee is selected reads on populating the informational window with supplemental data from an external supplemental data source, and invoking a browser that accesses web information on an employee profile reads on an external supplemental data source that is under control of a third party, different from the second party and the reader.). Bradley in view of Adlersberg does not specifically disclose: supplemental data from an external supplemental data source identified by the second party, different than the reader. 
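For illustration only, a minimal sketch of the claimed retrieval: supplemental content for a candidate term is fetched from an external, third-party-controlled source that the second party, rather than the reader, has identified. The term-to-URL mapping and function name are hypothetical; urllib is standard-library Python.

```python
# Hypothetical sketch of fetching supplemental data from a source identified
# by the second party; not any cited reference's implementation.
from urllib.request import urlopen

def populate_informational_window(candidate_term, urls_chosen_by_second_party):
    # Look up the external source the second party associated with the term.
    url = urls_chosen_by_second_party.get(candidate_term)
    if url is None:
        return None
    # Fetch the supplemental content from the third-party server.
    with urlopen(url, timeout=5) as response:
        return response.read().decode("utf-8", errors="replace")
```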
Nguyen teaches: supplemental data from an external supplemental data source identified by the second party, different than the reader (Column 4, lines 34-42, "Providing mechanisms for automatically surfacing a real-time transcription of a speaking user in association with a productivity application and augmenting that surfacing with note taking features also provides an enhanced user experience. For example, a user may take notes related to a speech (e.g., a lecture) in a first window of a productivity application, while having a real-time transcription of the speech surfaced next to that window."; Column 6, line 64 - Column 7, line 13, "For example, if the user that provides speech 230 is presenting a lecture with one or more corresponding documents related to the lecture (e.g., an electronic slide show from a presentation application on organic chemistry, a lecture handout, etc.), the language processing models used to transcribe speech 230 may utilize that material (e.g., via analysis of those electronic documents) to develop a custom corpus and/or dictionary that may be utilized in the language processing models in determining correct output for speech 230. This is illustrated by domain-specific dictionaries/corpus 234. According to some examples, vocabulary (e.g., words, phrases) that is determined to be specific and/or unique to a particular language processing model, custom corpus, and/or custom dictionary, may be automatically highlighted and/or otherwise distinguished from other captions in a transcription pane in a productivity application."; Augmenting a transcription of a lecture utilizing material provided by the user presenting the lecture reads on supplemental data from an external supplemental data source identified by the second party, different than the reader.). Nguyen is considered to be analogous to the claimed invention because it is in the same field of transcript analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bradley in view of Adlersberg to incorporate the teachings of Nguyen to augment a transcription of a lecture utilizing material provided by the user presenting the lecture. Doing so would allow for a user taking notes related to a lecture to surface standard definitions and custom definitions for words in a transcription of the lecture (Nguyen; Column 4, lines 32-52). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Láinez Rodrigo et al. (US Patent No. 12,079,573) Cole et al. (US Patent No. 11,765,267) Co et al. (US Patent No. 11,521,722) Biswas et al. (US Patent No. 11,232,266) Chintagunta et al. ("Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization") Selvaraj et al. ("Medication Regimen Extraction from Medical Conversations") Schloss et al. ("Towards an Automated SOAP Note: Classifying Utterances from Medical Conversations") The relevant art made of record and not relied upon is considered pertinent to applicant's disclosure: Konam et al. ("Abridge: A Mission Driven Approach to Machine Learning for Healthcare Conversation") Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Boggs whose telephone number is (571)272-2968. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JAMES BOGGS/Examiner, Art Unit 2657