Patent Application 17431920 - Method for processing a payment transaction, and corresponding device, system and programs - Rejection

Title: Method for processing a payment transaction, and corresponding device, system and programs

Application Information

  • Invention Title: Method for processing a payment transaction, and corresponding device, system and programs
  • Application Number: 17431920
  • Submission Date: 2025-04-08
  • Effective Filing Date: 2021-08-18
  • Filing Date: 2021-08-18
  • National Class: 705
  • National Sub-Class: 044000
  • Examiner Employee Number: 78950
  • Art Unit: 3695
  • Tech Center: 3600

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 2

Cited Patents

The following references were cited in the rejection:

  • US 2014/0201080 A1 (Just)
  • US 2020/0244788 A1 (Adams)
  • US 2004/0031856 A1 (Atsmon et al.)
  • US 2018/0349900 A1 (Anderson et al.)

Office Action Text

    DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, 17/431,920, filed 08/18/2021, is a National Stage entry of PCT/EP2020/054184, International Filing Date: 02/18/2020, and claims foreign priority to FR 1901669, filed 02/19/2019.
The effective filing date is after the AIA date of March 16, 2013, and so the application is being examined under the “first inventor to file” provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority
Acknowledgment is made of applicant's claim for foreign priority, based on French Application FR 1901669, filed 02/19/2019.  On 08/18/2021, the USPTO electronically retrieved the certified priority document from WIPO.  Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Status of the Application
This Second Non-Final Office Action is in response to Applicant’s communication of March 27, 2024.
Claims 1 and 4-11 are pending, of which claims 1, 10, and 11 are independent.  No claims are currently amended. In the previous response, claims 1, 4, 10, and 11 were amended.  Claims 2 and 3 were previously cancelled.
All pending claims have been examined on the merits.  

Claim Rejections - 35 USC § 103
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.  Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-8, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over US 2014/0201080 A1 to Just (“Just”; eff. filed Jan. 14, 2013; published Jul. 17, 2014) in view of US 2020/0244788 A1 to Adams (“Adams”; filed Jan. 28, 2019; published Jul. 30, 2020), and further in view of US 2004/0031856 A1 to Atsmon et al. (“Atsmon”; filed Jul. 14, 2003; published Feb. 19, 2004).

In regards to claim 1, the “Just” reference teaches:
1. (Currently Amended) A method for processing a purchase order of goods or services, said method being implemented within an electronic voice processing device comprising at least one component for capturing voice orders, called a capturing component, and a sound emission component, called an emission component, wherein the method comprises: 
obtaining, using the capturing component, at least one data item representative of a voice-based purchase order, said purchase order emanating from a voice of a user and relating to the purchase of at least one good or service; 

(See Just, para. [0029]: “FIG. 3 is a flow diagram 300 of an exemplary method for processing a customer purchase transaction using biometric data, consistent with disclosed embodiments. In step 310, processing entity 150 may receive biometric data associated with a purchase transaction. The biometric data may be received, for example, from merchant 130 with a request to process a purchase transaction. In other embodiments, processing entity 150 may received the biometric data from another source and/or entity (for example, directly from customer 140 through client device 160). The biometric data may reflect measurable characteristics unique to each person that remain constant over time. A more detailed discussion is provided below regarding the receiving biometric data (with respect to FIG. 4).”)

(See Just, para. [0034]: “Server 250 may also determine which biometric data to request from customer 140 when making a purchase transaction (step 420). The biometric data may include data regarding customer characteristics identified from voice recognition, iris eye scan, fingerprint, palm print, walking gait, facial recognition, DNA swab, or the like. Server 250 may request one or more biometric data characteristics for use in processing purchase transactions. For example, server 250 may select three characteristics to request from each customer 140 when making a purchase transaction. The characteristics may be randomly selected before or during the purchase transaction to prevent fraudulent transactions. In other embodiments, financial service provider 120, customer 140, or any other component of system 100 may select which characteristics to provide server 250.”)

(See Just, para. [0036]: “In step 430, server 250 may receive the requested biometric data associated with a purchase transaction. For example, server 250 may include devices capable of receiving and analyzing a customer's voice, iris eye scan, fingerprint, palm print, walking gait, facial recognition, DNA swab, or any other biometric data capable of being associated with customer 140. In exemplary embodiments, a payment terminal associated with processing entity 150 may be capable of receiving and/or analyzing the biometric data. For example, server 250 may be communicatively associated with a payment terminal having a video device capable of scanning an iris and/or capturing a voice recording of customer 140. Server 250 may further process this biometric data to determine recognizable features unique to that customer (e.g., iris pattern, syllable pronunciation, etc.).”)

(See Just, para. [0037]: “Furthermore, server 250 may receive transaction data associated with the purchase transaction by customer 140 (step 440). The transaction data may include, for example, the purchase price, time and data of the transaction, product/service identification (e.g., SKU number), and merchant identification (e.g., merchant identification number). Server 250 may receive the transaction data substantially simultaneously as server 250 receives the biometric data. In other embodiments, server 250 may receive the transaction data and biometric data separately, by different means, and/or at different times.”)

authenticating at least one voiceprint representative of said user based on said at least one data item representative of the purchase order; and 

(See Just, para. [0036]: “In step 430, server 250 may receive the requested biometric data associated with a purchase transaction. For example, server 250 may include devices capable of receiving and analyzing a customer's voice, iris eye scan, fingerprint, palm print, walking gait, facial recognition, DNA swab, or any other biometric data capable of being associated with customer 140. In exemplary embodiments, a payment terminal associated with processing entity 150 may be capable of receiving and/or analyzing the biometric data. For example, server 250 may be communicatively associated with a payment terminal having a video device capable of scanning an iris and/or capturing a voice recording of customer 140. Server 250 may further process this biometric data to determine recognizable features unique to that customer (e.g., iris pattern, syllable pronunciation, etc.).”)

(See Just, para. [0037]: “Furthermore, server 250 may receive transaction data associated with the purchase transaction by customer 140 (step 440). The transaction data may include, for example, the purchase price, time and data of the transaction, product/service identification (e.g., SKU number), and merchant identification (e.g., merchant identification number). Server 250 may receive the transaction data substantially simultaneously as server 250 receives the biometric data. In other embodiments, server 250 may receive the transaction data and biometric data separately, by different means, and/or at different times.”)

The Examiner interprets that Just’s “captured voice recording of customer 140” reads upon the claimed “at least one data item representative of a voice-based purchase order”, or in the alternative, Just’s “syllable pronunciation” feature of the captured voice recording reads upon the claimed “at least one data item representative of a voice-based purchase order”.
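For illustration of the mapping above, the authenticating step can be pictured as comparing a speaker embedding extracted from the captured order against enrolled voiceprints, much as Just compares received biometric data with stored biometric data. The following Python sketch is purely hypothetical; the embedding representation, the registry of enrolled prints, and the 0.8 match threshold are assumptions, not features of the claims or the cited references:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two fixed-length voice embeddings."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def authenticate_voiceprint(captured_embedding: np.ndarray,
                                enrolled_prints: dict,
                                threshold: float = 0.8):
        """Return the user id whose enrolled voiceprint best matches the
        captured voice data and clears the (assumed) match threshold,
        or None if no enrolled print matches well enough."""
        best_user, best_score = None, threshold
        for user_id, print_vector in enrolled_prints.items():
            score = cosine_similarity(captured_embedding, print_vector)
            if score >= best_score:
                best_user, best_score = user_id, score
        return best_user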

determining whether said at least one voiceprint representative of said user corresponds to a user authorized to make purchases using said electronic voice control device; and 

(See Just, para. [0031]: “In some aspects, server 250 may authorize the purchase transaction based on the correlation of biometric data to a financial service account of customer 140 (step 330). For example, server 250 may compare transaction data with the customer account associated with financial service provider 120 and verify the customer account contains adequate funds to complete the transaction. Additionally, server 250 may verify the purchase transaction is not fraudulent based on the received biometric data. A more detailed discussion is provided below regarding authorizing the purchase transaction (see FIG. 6).”)

(See Just, para. [0039]: “Server 250 may compare the received biometric data from customer 140 with stored biometric data, as shown in step 520. The stored biometric data may represent previously received biometric data for customers of financial service provider 120. Such stored biometric data may include one or more of the biometric characteristics (e.g., voice recognition, iris eye scan, fingerprint, palm print, walking gait, facial recognition, DNA swab, and the like). Server 250 may compile the stored biometric data into searchable databases. The stored biometric data may be linked to one or more customer accounts associated with financial service provider 120.”)

transmitting, to an electronic processing device to which said electronic voice control device is connected, a request to obtain a purchase authorization, said request comprising at least one data item representative of the payment transaction, as a function of the determination.

(See Just, para. [0032]: “Server 250, in step 340, may send transaction information associated with the purchase transaction to financial service provider 120. Specifically, server 250 may send customer account information, transaction data, and authorization information to financial service provider 120. Financial service provider 120 may use the received transaction information in order to, for example, update customer account balances or provide additional fraud detection. A more detailed discussion is provided below regarding sending transaction information to financial service provider 120 (see FIG. 7).”)

(See Just, para. [0033]: “FIG. 4 depicts a flowchart of an exemplary method for receiving biometric data associated with a purchase transaction consistent with disclosed embodiments. As shown in FIG. 4, processing entity 150 may create a partnership with merchant 150. For example, the two entities may agree to allow customer 140 to purchase service/products from merchant 130 through processing entity 150 (step 410). In some embodiments, server 250 may receive an authorization from financial service provider to process purchase transactions associated with customer 140.”)

The Examiner interprets that, because “In some embodiments, server 250 may receive an authorization from financial service provider to process purchase transactions associated with customer 140” (para. [0033]), the sending of transaction information to financial service provider 120 in step 340 (para. [0032]) constitutes “transmitting, to an electronic processing device to which said electronic voice control device is connected, a request to obtain a purchase authorization”.

In regards to “to an electronic processing device to which said electronic voice control device is connected”, the Examiner interprets that all of the items connected to a network are connected to one another.
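As a minimal sketch of the claimed transmission of the authorization request “as a function of the determination” (the request fields and the injected send transport are illustrative assumptions, not disclosed by Just):

    import json

    def request_purchase_authorization(order_data: dict, matched_user, send) -> bool:
        """Build and transmit the purchase-authorization request only when
        the voiceprint determination succeeded; 'send' stands in for the
        link connecting the voice control device to the processing device."""
        if matched_user is None:
            return False  # determination failed: no request is transmitted
        request = {
            "type": "purchase_authorization_request",
            "user_id": matched_user,
            "transaction": order_data,  # data representative of the payment transaction
        }
        send(json.dumps(request).encode("utf-8"))
        return True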

However, under a conservative interpretation of the “Just” reference, it could be argued that the “Just” reference does not explicitly teach the italicized features below, which are taught by the “Adams” reference:
wherein: 
the electronic processing device is a communication terminal of the user with which said voice control device has been previously paired; and 

(See Adams, para. [0014]: “The user interface devices 10 in the network can include smart speakers, smartphones and other wireless communication devices, home automation control systems, smart televisions and other entertainment devices, and the like. User interface devices 10 may be provided in any suitable environment; for example, while not shown in FIG. 1, a user interface device may be provided in a motor vehicle. The user interface devices 10 may operate in a standalone manner, not part of a local area network or mesh network; or they may operate in a local network. For example, FIG. 1 illustrates a smart speaker wirelessly paired (e.g., using the Bluetooth® protocol) with a personal computer 20. Alternatively, the personal computer 20 may be used to control the smart speaker. A smart speaker may also be paired with or controlled by a mobile wireless communication device, as discussed below. A user interface device 10 that operates as a home automation control system may be joined in a mesh network with one or more smart appliances, such as light fixtures 30, or a heating or cooling system 40. Each of these user interface devices 10 may provide a voice interface (e.g., a microphone array, speaker or audio line out, and associated signal processing components) for a local user to interact with the intelligent automated assistant service provided by the system 150 as described above.”)

(See Adams, para. [0029]: “It should be noted that a call may be placed from a smart speaker device 200 a to another communication device, such as a mobile wireless communications device operating on a cellular network, or vice versa. In the case where a call is placed to a mobile device 195 on a cellular network 160, call data may be routed from the call management infrastructure 190 to the cellular network 160, and thence to the mobile device 195, and vice versa. Optionally, the mobile device 195 may be paired with a smart speaker 200 b. The smart speaker 200 b can operate as a microphone and speaker for the paired mobile device 195, thus providing hands-free operation to the user of the mobile device 195. The smart speaker 200 b can still be in communication with the intelligent automated assistant service 150 via the network 100. Further, call sessions may include more than two parties; in this description, only two parties are used in these examples for the sake of simplicity. Thus, one or all users on a call may be using a smart speaker, but some number of users on the call may be using a mobile device or other communication device.”)

(See Adams, para. [0037]: “A current state of the user's respective smart speaker device may include the state of any power or privacy settings (e.g., whether a do-not-disturb or mute mode is enabled); whether the smart speaker device is currently paired with an identified mobile device associated with the user (e.g., whether the user's smartphone is paired with the smart speaker device); whether the smart speaker device detects the presence of a mobile device, even if unpaired, over short-range wireless communication, such as Bluetooth; or whether an identified computing or mobile device associated with the user is connected to the same Wi-Fi network. These conditions may tend to indicate that the user is present with the smart speaker device. Other conditions may tend to indicate that the user may be present, but if so, is not alone. For example, detection of an unknown mobile device or multiple mobile devices available for pairing via Bluetooth may suggest that another person is nearby.”)

The Examiner interprets that if a “smart speaker device is currently paired with an identified mobile device associated with the user”, then an obvious variation is that it was also “previously paired” in the immediate past.
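The pairing relationship discussed above can be sketched as a simple registry that timestamps each pairing, under which any currently paired terminal was necessarily also previously paired; the structure and names below are assumptions for illustration, not Adams's implementation:

    from datetime import datetime, timezone

    class PairingRegistry:
        """Records which communication terminals a voice control device
        has been paired with, so a 'currently paired' terminal is also
        one that was 'previously paired' -- the reading applied here."""

        def __init__(self):
            self._paired_at = {}  # terminal id -> time of last pairing

        def record_pairing(self, terminal_id: str) -> None:
            self._paired_at[terminal_id] = datetime.now(timezone.utc)

        def was_previously_paired(self, terminal_id: str) -> bool:
            return terminal_id in self._paired_at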

It would have been obvious to a person having ordinary skill in the art (PHOSITA), before the effective filing date of the claimed invention, to combine the “Systems and methods for processing customer purchase transactions using biometric data”, as taught by the “Just” reference, with the device-pairing features of “Securing of Internet of Things Devices Based on Monitoring of Information Concerning Device Purchases”, as further taught by the “Adams” reference above, because both references are in the same art of processing customer purchase transactions using voice recognition, and because “These conditions may tend to indicate that the user is present with the smart speaker device”:
(See Adams, para. [0037]: “A current state of the user's respective smart speaker device may include the state of any power or privacy settings (e.g., whether a do-not-disturb or mute mode is enabled); whether the smart speaker device is currently paired with an identified mobile device associated with the user (e.g., whether the user's smartphone is paired with the smart speaker device); whether the smart speaker device detects the presence of a mobile device, even if unpaired, over short-range wireless communication, such as Bluetooth; or whether an identified computing or mobile device associated with the user is connected to the same Wi-Fi network. These conditions may tend to indicate that the user is present with the smart speaker device.”)

However, under a conservative interpretation of Just in view of Adams, it could be argued that Just in view of Adams does not explicitly teach the following features, which are taught by Atsmon:
wherein: …
the transmission of the request to obtain the purchase authorization to the communication terminal of the user with which said electronic device is paired comprises: 
building the request to obtain the purchase authorization; 
activating the emission component of the electronic device; 
generating a sound according to the request to obtain the purchase authorization; and 
emitting said sound using the emission component.

(See Atsmon, para. [0700]: “Practically speaking, the CSP interacts with the user (and the user's applications in his client computer station) to allow the user to perform various security related tasks such as choose disk, give fingerprint (or voice print or biometric data), and the like. Because all web browsers and servers support it, the merchants who want a web presence are also required to support it. Thus, the CSP does not represent any change to the existing infrastructure.”)

(See Atsmon, para. [0812]: “For some applications, however, including high security transactions like extremely large financial transactions, there is no security factor equal in value to assuring that the correct person is present. Voice authentication as a biometric security factor is known in the art of computer security. Typically, these schemes will require the authenticating user to read arbitrary text, and then compare these vocal patterns to the authentic user's own speech patterns to verify conformity. One limitation of biometric security schemes, and voice authentication is no exception, is the requirement of special equipment to receive the biometric data (e.g. fingerprint or iris readers, or microphones and sound systems for voice authentication). There are abundant known techniques for deploying voice authentication on a single computer, or over a network by sending voice sounds to and receiving authentication codes from, a network server.”)

(See Atsmon, para. [0813]: “But the fact that any issuer of the acoustic reader-free electronic cards of the present invention can be sure that each and every base station includes a microphone solves this barrier to introducing voice authentication as a security factor. Thus, one embodiment of the present invention enables the proprietor of a security system to deploy a regime including the “something you have” factor in the form of a card or other physical token practicing the acoustic data transmission techniques discussed above, plus any combination of a password or other “something you know” factor, plus biometric verification by voice authentication.”)

The Examiner interprets Atsmon teaches that devices can be “connected” via sound signals, and more specifically via ultrasound signals.

It would have been obvious to a person having ordinary skill in the art (PHOSITA), before the effective filing date of the claimed invention, to further combine the “systems and methods for processing customer purchase transactions using biometric data”, as taught by the “Just” reference, with “an interactive system [that] uses acoustic waves … in the ultrasonic frequency band” to connect devices, as further taught by Atsmon (see para. [0143]), because of the following:
“Ultrasound has certain benefits over the audible frequency range. First, most people cannot hear ultrasound and when they do hear it, it is not noticeably loud” (See Atsmon, para. [0144]).  “Second, ultrasound is less subject to interference.” (See Atsmon, para. [0145]). “Fourth, to implement ultrasound, the transducers are smaller than conventional sound transducers.” (See Atsmon, para. [0147]). “Fifth, the ultrasonic electronic card can be used with existing non-dedicated base station infrastructure equipment.” (See Atsmon, para. [0148]). “When ultrasound is used, the electronic card has a range of approximately 1-3 feet.” (See Atsmon, para. [0149]).
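Purely to illustrate the acoustic-transmission technique attributed to Atsmon, the sketch below encodes request bytes as binary frequency-shift-keyed tones inside the 17-20 kHz band that Atsmon prefers (para. [0143]); the specific tone frequencies, bit rate, and sample rate are illustrative assumptions:

    import numpy as np

    SAMPLE_RATE = 48_000   # Hz; comfortably above twice the highest tone
    FREQ_ZERO = 18_000     # Hz, encodes bit 0 (inside the 17-20 kHz band)
    FREQ_ONE = 19_000      # Hz, encodes bit 1
    BIT_DURATION = 0.01    # seconds of tone per bit

    def encode_ultrasonic(payload: bytes) -> np.ndarray:
        """Render the request bytes as a near-ultrasonic FSK waveform."""
        t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
        chunks = []
        for byte in payload:
            for i in range(8):
                bit = (byte >> (7 - i)) & 1
                chunks.append(np.sin(2 * np.pi * (FREQ_ONE if bit else FREQ_ZERO) * t))
        return np.concatenate(chunks)

    # Writing this waveform to the device's speaker would "emit said sound"
    # in the sense of the claim; the paired terminal's microphone decodes it.
    waveform = encode_ultrasonic(b"AUTH_REQ")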
In regards to claim 4, 
4. (Currently Amended) The method for processing a payment transaction, according to claim 1, wherein said sound emitted by said electronic voice processing device is situated in the ultrasound range.

(See Atsmon, para. [0143]: “In one embodiment of the present invention, the interactive system uses acoustic waves. Preferably, acoustic signals in the ultrasonic frequency band are used. In one embodiment of the present invention, acoustic waves in the ultrasound frequency range will be used. Ultrasound is in the 15-28 kHz frequency range, and typically higher. Of course, the reader should realize that ultrasound has no fixed upper limit. In the preferred embodiment, the frequency range of 17-20 KHz has been selected for the ultrasonic acoustic signals.”)

The Examiner interprets Atsmon teaches that devices can be “connected” via sound signals, and more specifically via ultrasound signals.

In regards to claim 5,
5. (Currently Amended) The method for processing a payment transaction, according to claim 1, wherein the method further comprises, after transmitting the request to obtain a purchase authorization, receiving a payment transaction acceptance response.

(See Just, para. [0032]: “Server 250, in step 340, may send transaction information associated with the purchase transaction to financial service provider 120. Specifically, server 250 may send customer account information, transaction data, and authorization information to financial service provider 120. Financial service provider 120 may use the received transaction information in order to, for example, update customer account balances or provide additional fraud detection. A more detailed discussion is provided below regarding sending transaction information to financial service provider 120 (see FIG. 7).”)

(See Just, para. [0033]: “FIG. 4 depicts a flowchart of an exemplary method for receiving biometric data associated with a purchase transaction consistent with disclosed embodiments. As shown in FIG. 4, processing entity 150 may create a partnership with merchant 150. For example, the two entities may agree to allow customer 140 to purchase service/products from merchant 130 through processing entity 150 (step 410). In some embodiments, server 250 may receive an authorization from financial service provider to process purchase transactions associated with customer 140.”)

The Examiner interprets that “server 250 may receive an authorization from financial service provider to process purchase transactions associated with customer 140” (para. [0033]), following the sending of transaction information to financial service provider 120 in para. [0032], reads upon the claimed “receiving a payment transaction acceptance response”.

In regards to claim 6,
6. (Currently Amended) The method for processing a payment transaction, according to claim 5, wherein the method further comprises, after receiving a payment transaction acceptance response, transmitting a data structure representative of the payment transaction to a transaction server.

(See Just, para. [0033]: “FIG. 4 depicts a flowchart of an exemplary method for receiving biometric data associated with a purchase transaction consistent with disclosed embodiments. As shown in FIG. 4, processing entity 150 may create a partnership with merchant 150. For example, the two entities may agree to allow customer 140 to purchase service/products from merchant 130 through processing entity 150 (step 410). In some embodiments, server 250 may receive an authorization from financial service provider to process purchase transactions associated with customer 140. For instance, processing entity 150 may provide payment options or terminals to merchant 150 that allows customer 140 to request purchase transactions. The payment terminals may be accessible to customer 140 at a physical location or through internet 110. For example, customer 140 may access the payment terminals through the internet using client device 160.”)

(See Just, para. [0037]: “Furthermore, server 250 may receive transaction data associated with the purchase transaction by customer 140 (step 440). The transaction data may include, for example, the purchase price, time and data of the transaction, product/service identification (e.g., SKU number), and merchant identification (e.g., merchant identification number). Server 250 may receive the transaction data substantially simultaneously as server 250 receives the biometric data. In other embodiments, server 250 may receive the transaction data and biometric data separately, by different means, and/or at different times.”)

(See Just, para. [0048]: “FIG. 8 is a flow diagram 800 of an exemplary method for receiving a customer purchase transaction using biometric data, consistent with disclosed embodiments. In step 810, financial service provider 120 may receive purchase transaction information from one or more components of system 100. Such information may include, for example, customer account information, transaction data, and authorization information. In step 820, financial service provider 120 may process the purchase transaction. For example, financial service provider 120 may locate the customer account associated with the customer account information, deduct the purchase amount from the customer account, and notify the customer of this deduction. A more detailed discussion is provided below (with respect to FIG. 10) regarding sending purchase transaction information to financial service provider 120.”)

(See Just, para. [0051]: “FIG. 10 depicts a flowchart of an exemplary method processing the purchase transaction consistent with disclosed embodiments. As shown in FIG. 10, financial service provider 120 may process the purchase transaction. Financial service provider 120 may locate the customer account associated with the received customer account information (step 1010). The customer account may include a financial service account including, for example, credit card accounts, checking accounts, savings accounts, loans, investment accounts. Financial service provider 120 may additionally deduct the purchase price from the customer account, as shown in step 1020. In some embodiments, as shown in step 1030, financial service provider may further notify customer 140 of the deduction. For example, financial service provider 120 may provide a notification in the form of an electronic message or document (e.g., email, link to a website, SMS message, business software mechanisms (ERP, CRM, etc.). In some embodiments, financial service provider may provide the notification in the form of a bank statement.”)

	The Examiner interprets that “Financial service provider 120 may locate the customer account associated with the received customer account information (step 1010)” reads upon the claimed feature.
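As an illustrative sketch of the data structure recited in claims 6 and 7 (a transaction record that also carries a current-voiceprint data item), with all field names assumed rather than taken from the references:

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class PaymentTransactionRecord:
        """Data structure representative of the payment transaction
        (claim 6), carrying a data item representative of the current
        voiceprint (claim 7)."""
        user_id: str
        merchant_id: str
        amount: float
        currency: str
        current_voiceprint: list  # e.g. the embedding captured for this order

    def transmit_to_transaction_server(record: PaymentTransactionRecord, send) -> None:
        """Serialize the record and hand it to the transport connecting
        the terminal to the transaction server."""
        send(json.dumps(asdict(record)).encode("utf-8"))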

In regards to claim 7,
7. (Currently Amended) The method for processing a payment transaction, according to claim 6, wherein the data structure representative of the payment transaction comprises at least one data item representative of a current voiceprint.

(See Just, para. [0036]: “In step 430, server 250 may receive the requested biometric data associated with a purchase transaction. For example, server 250 may include devices capable of receiving and analyzing a customer's voice, iris eye scan, fingerprint, palm print, walking gait, facial recognition, DNA swab, or any other biometric data capable of being associated with customer 140. In exemplary embodiments, a payment terminal associated with processing entity 150 may be capable of receiving and/or analyzing the biometric data. For example, server 250 may be communicatively associated with a payment terminal having a video device capable of scanning an iris and/or capturing a voice recording of customer 140. Server 250 may further process this biometric data to determine recognizable features unique to that customer (e.g., iris pattern, syllable pronunciation, etc.).”)

(See Just, para. [0037]: “Furthermore, server 250 may receive transaction data associated with the purchase transaction by customer 140 (step 440). The transaction data may include, for example, the purchase price, time and data of the transaction, product/service identification (e.g., SKU number), and merchant identification (e.g., merchant identification number). Server 250 may receive the transaction data substantially simultaneously as server 250 receives the biometric data. In other embodiments, server 250 may receive the transaction data and biometric data separately, by different means, and/or at different times.”)

In regards to claim 8,
8. (Currently Amended) The method for processing a payment transaction, according to claim 7, wherein said at least one data item representative of a current voiceprint is used to replace at least one payment data item of a payment card of said user.

(See Just, para. [0051]: “FIG. 10 depicts a flowchart of an exemplary method processing the purchase transaction consistent with disclosed embodiments. As shown in FIG. 10, financial service provider 120 may process the purchase transaction. Financial service provider 120 may locate the customer account associated with the received customer account information (step 1010). The customer account may include a financial service account including, for example, credit card accounts, checking accounts, savings accounts, loans, investment accounts. Financial service provider 120 may additionally deduct the purchase price from the customer account, as shown in step 1020. In some embodiments, as shown in step 1030, financial service provider may further notify customer 140 of the deduction. For example, financial service provider 120 may provide a notification in the form of an electronic message or document (e.g., email, link to a website, SMS message, business software mechanisms (ERP, CRM, etc.). In some embodiments, financial service provider may provide the notification in the form of a bank statement.”)

In regards to independent claim 10, it is rejected on the same grounds as independent claim 1.
In regards to independent claim 11, it is rejected on the same grounds as independent claim 1.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Just in view of Adams and Atsmon as applied to claim 7 above, and further in view of US 2018/0349900 A1 to Anderson et al. (“Anderson”; eff. filed Mar. 13, 2003; published Dec. 6, 2018).
In regards to claim 9, under a conservative interpretation of Just in view of Adams and Atsmon, it could be argued that Just in view of Adams and Atsmon does not explicitly teach the following features, which are taught by Anderson:
9. (Currently Amended) The method for processing a payment transaction, according to claim 7, wherein said at least one data item representative of a current voiceprint is used to build a payment token using at least one payment data item of a payment card of said user.

(See Anderson, para. [0053]: “At block 212, the authorization server 106 may provide the payment credentials to the user device 102. The payment credentials may include a bank account number, a credit card number, or a financial services account name or number. In one embodiment, the authorization server 106 may also include an indication of the authorization provided by the authorization service. The indication may include an encrypted key or token that is shared between the authorization service and the merchants. For example, the merchants may require that a purchase includes the key or token in order to finalize the transaction. The key or token may represent that the authorization service has authenticated the user and that the purchase is less likely to be fraudulent.”)

It would have been obvious to a person having ordinary skill in the art (PHOSITA), before the effective filing date of the claimed invention, to further combine Just in view of Adams and Atsmon with the “systems and methods for tokenizing financial information”, as further taught by Anderson, because “the merchants may require that a purchase includes the key or token in order to finalize the transaction. The key or token may represent that the authorization service has authenticated the user and that the purchase is less likely to be fraudulent” (see Anderson, para. [0053]).
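To picture the tokenization reading applied to claims 8 and 9 (building a payment token from the current-voiceprint data item and a payment data item of the user's card), a hedged sketch using an HMAC; the key handling and field layout are assumptions, not Anderson's scheme:

    import hashlib
    import hmac
    import json

    def build_payment_token(voiceprint_digest: bytes, card_pan: str,
                            secret_key: bytes) -> str:
        """Derive a payment token from the current-voiceprint data item
        and a payment data item of the user's card, so the raw card
        number need not travel with the transaction."""
        material = json.dumps(
            {"voiceprint": voiceprint_digest.hex(), "pan": card_pan},
            sort_keys=True,
        ).encode("utf-8")
        return hmac.new(secret_key, material, hashlib.sha256).hexdigest()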

Response to Arguments
Re: Claim Rejections - 35 USC § 101
Applicant's arguments filed May 7, 2024, regarding the 35 USC § 101 rejection have been fully considered and are persuasive.  The 35 USC § 101 rejection has been withdrawn.

Re: Claim Rejections - 35 USC § 102/103
Applicant's arguments filed March 27, 2024, regarding the 35 USC § 103 rejections have been fully considered and are persuasive.
A new 35 USC § 103 rejection has been applied, replacing the previously presented Sokolov reference with the newly applied Adams reference.

Conclusion
Applicants are invited to contact the Office to schedule an in-person interview to discuss and resolve the issues set forth in this Office Action.  Although an interview is not required, the Office believes that an interview can be of use to resolve any issues related to a patent application in an efficient and prompt manner.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Any inquiry concerning this communication or earlier communications should be directed to Examiner Ayal Sharon, whose telephone number is (571) 272-5614, and fax number is (571) 273-1794.  The Examiner can normally be reached from Monday to Friday between 9 AM and 6 PM.  If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christine M Behncke can be reached on (571) 272-8103.  The fax number for the organization where this application or proceeding is assigned is 571-273-8300.  
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Sincerely,

/Ayal I. Sharon/
Examiner, Art Unit 3695

April 3, 2025

