Patent Application 16696837 - FRICTIONLESS FRAUD IDENTIFICATION AND TRACKING - Rejection
Application Information
- Invention Title: FRICTIONLESS FRAUD IDENTIFICATION AND TRACKING
- Application Number: 16696837
- Submission Date: 2025-05-12
- Effective Filing Date: 2019-11-26
- Filing Date: 2019-11-26
- National Class: 705
- National Sub-Class: 075000
- Examiner Employee Number: 87800
- Art Unit: 3692
- Tech Center: 3600
Rejection Summary
- 102 Rejections: 0
- 103 Rejections: 1
Cited Patents
The following patents were cited in the rejection:
- Vemury (US 2019/0220944 A1)
- Showers et al. (US 2014/0365304 A1)
Office Action Text
DETAILED ACTION

Status of Claims

1. This office action is in response to the RCE filed 1/16/2025.
2. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/16/2025 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Claims 1-18 are directed to a method; claims 19-20 are directed to a system – each of which is one of the four statutory categories of invention.

Step 2A: A claim is eligible at revised Step 2A unless it recites a judicial exception and the exception is not integrated into a practical application of the exception.

Prong 1: Prong One of Step 2A evaluates whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). Groupings of Abstract Ideas:

I. MATHEMATICAL CONCEPTS: A. Mathematical Relationships; B. Mathematical Formulas or Equations; C. Mathematical Calculations.
II. CERTAIN METHODS OF ORGANIZING HUMAN ACTIVITY: A. Fundamental Economic Practices or Principles (including hedging, insurance, mitigating risk); B. Commercial or Legal Interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); C. Managing Personal Behavior or Relationships or Interactions between People (including social activities, teaching, and following rules or instructions).
III. MENTAL PROCESSES: concepts performed in the human mind (including an observation, evaluation, judgment, opinion).

See MPEP 2106.04(a)(2) Abstract Idea Groupings [R-10.2019].

Identifying from a video that an individual is in an establishment and identifying a mobile device identifier – involve observation, evaluation, judgment, and opinion and are thus Mental Processes. Tracking and monitoring audio and video of transaction data and individual behavior – constitutes collecting and analyzing data, which are abstract ideas. Evaluating rules to update an intervention score as an individual traverses the establishment – falls within the Mental Processes or Mathematical Concepts categories of abstract ideas. Selecting a response action based on the intervention score, such as triggering a suspicious activity report, flagging an account, pausing and flagging an ongoing transaction, or flagging an account for further review – constitutes Certain Methods of Organizing Human Activity. Hence, the limitations recited in independent claims 1, 12 and 19, when considered as a whole, recite a combination of abstract ideas.
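By way of illustration only, and not as a characterization of the claims, the specification, or any cited reference, the rule evaluation, score updating, and response selection characterized above can be expressed as a short, generic sketch; all names, weights, and thresholds below are hypothetical:

```python
# Hypothetical sketch of rule-based intervention scoring and response selection.
# Event identifiers, rule weights, and thresholds are invented for illustration
# and are not taken from the application or from any cited reference.
from dataclasses import dataclass, field


@dataclass
class VisitState:
    intervention_score: float = 0.0           # running score for the current visit
    cumulative_score: float = 0.0             # carried over from prior visits/transactions
    events: list = field(default_factory=list)


# Each "rule" is simply an observed event identifier paired with a weight.
RULES = {
    "loiter_near_terminal": 2.0,
    "voice_stress_detected": 3.0,
    "mismatched_account_id": 5.0,
}


def update_score(state: VisitState, event_id: str) -> VisitState:
    """Evaluate the rules against an event identifier and update the running score."""
    state.events.append(event_id)
    state.intervention_score += RULES.get(event_id, 0.0)
    return state


def select_response(state: VisitState) -> str:
    """Select a response action once score thresholds are crossed."""
    total = state.intervention_score + 0.5 * state.cumulative_score
    if total >= 10:
        return "trigger_suspicious_activity_report"
    if total >= 6:
        return "pause_and_flag_transaction"
    if total >= 3:
        return "flag_account_for_review"
    return "no_action"


if __name__ == "__main__":
    state = VisitState(cumulative_score=4.0)
    for evt in ("loiter_near_terminal", "voice_stress_detected"):
        update_score(state, evt)
    print(state.intervention_score, select_response(state))
```

The sketch underscores that the limitations describe desired results (a score increases when a rule fires; an action is chosen when a threshold is crossed) rather than any particular technological implementation.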
See RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327 (Fed. Cir. 2017) (“Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract.”).

The specification indicates that the claimed invention is directed to fraud identification and tracking. Fraud is a business problem, whereas tracking is merely data collection and analysis. Therefore, fraud identification and tracking is a certain method of organizing human activity.

The Federal Circuit has consistently held that abstract ideas include the concepts of collecting data, analyzing the data, and displaying the results of the collection and analysis, including when limited to particular content. See, e.g., Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1340 (Fed. Cir. 2017) (identifying the abstract idea of collecting, displaying, and manipulating data); Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (characterizing collecting information, analyzing information by steps people go through in their minds, or by mathematical algorithms, and presenting the results of collecting and analyzing information, without more, as matters within the realm of abstract ideas); see also SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161, 1168 (Fed. Cir. 2018) (“As many cases make clear, even if a process of collecting and analyzing information is ‘limited to particular content’ or a particular ‘source,’ that limitation does not make the collection and analysis other than abstract.”) (quoting Elec. Power Grp., 830 F.3d at 1353, 1355 (citing cases)). See also Two-Way Media, 874 F.3d at 1337-38 (claims directed to forwarding real-time information to users having access to a communications network by processing streams of audio or visual information recited result-based functional language of converting, routing, controlling, monitoring, and accumulating records that did not describe how to achieve those results in a non-abstract way and thus recited an abstract idea); see also Cellspin Soft, Inc. v. Fitbit, Inc., 927 F.3d 1306, 1315-16 (Fed. Cir. 2019) (claims recited the abstract idea of capturing and transmitting data from one device to another device to publish to the Internet); ChargePoint, Inc. v. SemaConnect, Inc., 920 F.3d 759, 773 (Fed. Cir. 2019) (claims to the abstract idea of communicating over a network for device interaction is a “building block of the modern economy”) (citing Alice, 573 U.S. at 220); Affinity Labs of Tex., LLC v. DIRECTV, LLC, 838 F.3d 1253, 1258–59 (Fed. Cir. 2016) (communicating regional broadcast content to an out-of-region recipient with no particular way to perform that function is an abstract idea).

The dependent claims merely limit the abstract idea to – fraud information, fraud profile containing biometric features of the individual, sending the fraud profile to financial institutions, sending redacted information to the government, linking identity, monitoring terminals, tracking behavior analysis, tracking spoken words, updating a real-time score, maintaining logs, suspicious activity report, monitoring transactions from video, processing a response action, and aggregating logs – that also constitute a combination of abstract ideas.

Hence, under Prong One of Step 2A, claims 1-20 recite a combination of judicial exceptions.

Prong 2: Prong Two of Step 2A evaluates whether the claim recites additional elements that integrate the judicial exception into a practical application of the exception.
Limitations that are indicative of integration into a practical application include:
- Improvements to the functioning of a computer or to any other technology or technical field – see MPEP 2106.05(a)
- Applying the judicial exception with, or by use of, a particular machine – see MPEP 2106.05(b)
- Effecting a transformation or reduction of a particular article to a different state or thing – see MPEP 2106.05(c)
- Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception – see MPEP 2106.05(e)

Limitations that are not indicative of integration into a practical application include:
- Adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)
- Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g)
- Generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h)

Additional element(s) recited in the claims, beyond the abstract idea, include: a server comprising a processor and a non-transitory storage media; cameras; bounding box; biometric features; person tracker; behavior-action tracker; audio manager. The processor, server, non-transitory storage medium, and camera have been recited in the specification (para [0016], [0020], [0080]) at a very high level of generality.

Next, Examiner notes that the following limitations have been recited, both in the specification and the claims, at a very high level of generality such that they amount to result-oriented, functional steps: track the individual across video frames; identify a mobile device identifier within a geofenced area of the establishment; and assign identifiers for video frames and audio transaction data from transaction terminals using the bounding box.

[Embedded images: media_image1.png, media_image2.png]

From the specification:

Para [0023] (“Person tracker 124 may derive biometric facial features from images supplied by cameras 140 or camera 111. Person tracker 124 may compute/hash a unique biometric value from the derived biometric features and match the unique biometric value to a specific customer of the establishment. Staff entered information for a transaction initiated for the customer may include customer account information that maps to a specific customer identifier. Microphones 150 or 111 may provide audio for a speech or spoken words of the customer, which audio manager 126 computes a voice print on that maps to a voice print of a known customer of the establishment. Event manager 123 may provide a unique identity for the customer to 124 based on information provided by event agent 116, mobile device 170, or staff operated device 130.”)

Para [0025] (“A bounding box within the pixels of the frames that surrounds the individual is maintained by person tracker 124. Behavior-action tracker 125 uses the bounding box to identify facial features, expressions, and extremities of the individual. Behavior-action tracker 125 also looks for predefined actions of behaviors predefined and associated with nervousness, sweating, agitation, etc.
Simultaneously, audio manager 126 may listen for voice patterns indicating stress as received from audio from microphones 150 and/or 111.”)

Para [0026] (“Each behavior and action (can be video-based behavior or audio-based behavior) is assigned an identifier. Behavior-action tracker 125 supplies behavior and action identifiers to score manager and monitor 128. Simultaneously, event agent 116 and/or an agent on devices 130 sends transaction information for any transaction being performed by the individual to score manager and monitor 128.”)

Based on the above quoted disclosure, Examiner finds that the above listed limitations of deriving biometric values, identifying facial features, and determining voice patterns are implemented by a “person tracker,” “behavior-action tracker,” and “audio manager” that have been used in the specification as black boxes without any further technical details. A patent application has to describe how to solve the problem in a manner that encompasses something more than the “principle in the abstract.” But there is no mechanism recited in either the specification or drawings that describes how the recited steps are technologically implemented. The claimed “person tracker,” “behavior-action tracker,” and “audio manager” are claimed by their function and do not constitute additional structural elements beyond their functioning as part of the abstract idea. As per Figs. 1-3 and para [0023], [0025], [0026], the additional elements have been disclosed as only labeled black boxes with aspirational functional results. Each of the above limitations is expressed purely in terms of results, devoid of implementation details. All purported inventive concepts reside in how the recited ‘track,’ ‘identify,’ ‘assign,’ ‘evaluate,’ ‘calculate,’ ‘update,’ and ‘process’ steps are technically accomplished, yet the specification does not elaborate on how the processing technologically achieves those results.

See Two-Way Media Ltd. v. Comcast Cable Commc’n, LLC, 874 F.3d 1329, 1337 (Fed. Cir. 2017) (“The claim [before the court] requires the functional results of ‘converting,’ ‘routing,’ ‘controlling,’ ‘monitoring,’ and ‘accumulating records,’ but does not sufficiently describe how to achieve these results in a non-abstract way.”).

MPEP 2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]: (1) Whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished. The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words “apply it”. See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In contrast, claiming a particular solution to a problem or a particular way to achieve a desired outcome may integrate the judicial exception into a practical application or provide significantly more. See Electric Power, 830 F.3d at 1356, 119 USPQ2d at 1743.

By way of example, in Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed.
Cir. 2017), the steps in the claims described “the creation of a dynamic document based upon ‘management record types’ and ‘primary record types.’” 850 F.3d at 1339-40; 121 USPQ2d at 1945-46. The claims were found to be directed to the abstract idea of "collecting, displaying, and manipulating data.” 850 F.3d at 1340; 121 USPQ2d at 1946. In addition to the abstract idea, the claims also recited the additional element of modifying the underlying XML document in response to modifications made in the dynamic document. 850 F.3d at 1342; 121 USPQ2d at 1947-48. Although the claims purported to modify the underlying XML document in response to modifications made in the dynamic document, nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents. The court thus held the claims ineligible, because the additional limitations provided only a result-oriented solution and lacked details as to how the computer performed the modifications, which was equivalent to the words “apply it”. 850 F.3d at 1341-42; 121 USPQ2d at 1947-48 (citing Electric Power Group., 830 F.3d at 1356, 1356, USPQ2d at 1743-44 (cautioning against claims “so result focused, so functional, as to effectively cover any solution to an identified problem”)). Similarly, claims 1, 12 and 19 recite functional, result-oriented solutions for ‘tracking,' ‘placing,’ ‘maintaining,’ ‘identifying,’ ' deriving,’ ‘associating,’ ‘assigning,’ ‘retaining’ that are lacking in technical details. See also Universal Secure Registry LLC v. Apple Inc., 10 F.4th 1342, 1352 (Fed. Cir. 2021) (a biometric sensor, user interface, communication interface, and processor working together to authenticate a user based on two factors and generate encrypted authentication information without improving any underlying technology or solving any technological problem was directed to an abstract idea); Prism Techs. LLC v. T-Mobile USA, Inc., 696 F. App’x 1014, 1016-17 (Fed. Cir. 2017) (merely using an authentication server to control or restrict access to computer resources is an abstract idea). Using a generic security and integration layer to authenticate users and restrict access to authorized users does not improve computers or other technology without more. Recited at this level of generality without technical details, merely tracking, monitoring and scoring a person entering an establishment, is an abstract idea that organizes commercial and legal interactions between business and customers and manages their interactions to mitigate the risk of fraud. Examiner thus finds that the additional elements – detecting, obtaining, tracking, monitoring, maintaining, updating, processing, retaining – have been recited at a high level of generality such that the claim limitations amount to no more than mere instructions to apply the exception using generic components. The combination of additional elements does not purport to improve the functioning of a computer or effect an improvement in any other technology or technical field. 
Instead, the additional elements do no more than “use the computer as a tool” and/or “link the use of the judicial exception to a particular technological environment or field of use.” The focus of the claims is not on improvement in computers, but on certain independently abstract ideas – monitoring transactions and behaviors associated with an individual, generating and revising an intervention score, updating a cumulative score based on previous transactions, and selecting a response action based on the intervention score – that merely use servers and cameras as tools. Steps that do no more than spell out what it means to “apply it on a computer” cannot confer patent eligibility. Hence, under Prong Two of Step 2A, the additional elements, individually and in combination, do not integrate the judicial exception into a practical application. Hence, the claims are ineligible under Step 2A.

Step 2B: In Step 2B, the evaluation consists of whether the claim recites additional elements that amount to an inventive concept (aka “significantly more”) than the recited judicial exception. As discussed in Prong 2, the additional elements in the claim amount to no more than mere instructions to apply the exception using generic computer components, which is insufficient to provide an inventive concept.

Examiner notes that the technology of facial recognition and examination of shopper behavior at a retail establishment have been known for decades:

The history of face recognition goes back to the 1950s and 1960s, but research on automatic face recognition is considered to be initiated in the 1970s [409]. In the early works, features based on distances between important regions of the face were used [164]. Research studies on face recognition flourished since the beginning of the 1990s following the developments in hardware and the increasing importance in security-related applications. The progress of image-based face recognition techniques since the beginning of 1990s has been roughly divided into four major conceptual development phases by Wang and Deng [346], which is not a full taxonomy but reflects the historical development of the major methods: i) Holistic or appearance-based approaches use the face region as a whole and use linear or non-linear methods to map the face to a lower dimensional subspace [27,363]. One of the first successful methods was developed by Turk and Pentland [333,332] and is known as Eigenfaces. There have been other approaches that use linear subspaces [77], manifold learning [378,139] and sparse representations [73,75]. ii) Local-feature based face recognition methods became popular after the 2000s and they use hand-crafted features to describe the face such as Gabor features [203,373], and local binary patterns (LBP) and variants [237,6,76]. iii) Methods which use learning-based local descriptors [41,190] emerged after the 2010s and they learn the discriminant image filters using shallow techniques. iv) Deep Learning based methods gained popularity after the great success of AlexNet in the ImageNet competition in 2012 [184], and brought a new perspective to face recognition problem. An unprecedented stability has been achieved for face recognition so that their performance is similar to humans on large-scale datasets collected under unconstrained settings [320,262].
Source: NPL Face recognition: Past, present and future (a review), Taskiran et al., Elsevier See also US 8,219,438 B1 (“Method and system for measuring shopper response to products based on behavior and facial expression”) to Moon et al. When considered individually or as an ordered combination, the additional elements fail to transform the abstract idea of – monitoring transactions and behaviors associated with an individual, generating and revising intervention score, updating cumulative score based on previous transactions, and selecting a response action based on the intervention score – into significantly more. See MPEP 2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]. (2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. Hence, the claims are ineligible under Step 2B. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to a judicial exception without significantly more. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made. Claims 1-20 Claims 1-20 are rejected under 35 U.S.C. 103(a) as being unpatentable over Vemury (US 2019/0220944 A1) in view of Showers et al. (US 2014/0365304 A1). Claim 1: A method, comprising: detecting, by a server comprising a processor and non-transitory computer-readable storage media having executable instructions, an individual entering an establishment by monitoring a video captured by cameras of the establishment and identifying an event determined from the video that indicates the individual is in the establishment; (See Vemury: Para [0194] (“When capturing an image, the device implementing the method can capture multiple images. For example, a digital camera captures a video of an individual's face or different wavelengths of energy are used/captured, e.g., visible light and near-infrared light. 
In some instances, the individual is instructed to remain in a fixed position during image capture, while in others a video is obtained while the individual moves to a pre-established position at which he/she is to remain.”) obtaining, by the server, an identity for the individual within the establishment by implementing specialized tracking components comprising: a person tracker that derives biometric features from images supplied by cameras and generates a unique biometric identifier to enable matching to a specific customer of the establishment; (See Vemury: Para [0063] (“an application program interface (API) that allows it to collect an image captured by the image capture device 240, e.g. a digital image camera.”) a behavior-action tracker that maintains persistent tracking of the individual using a bounding box around the individual within video frames and analyzes facial features, expressions, and physical indicators of the individual that are associated with predefined behavioral patterns; and (See Vemury: Para [0074] (“As illustrated, the collection device 232 includes an image capture device, e.g., a camera, although other devices, scanners (e.g., an iris scanner), detectors can be included with or used in place of a camera. The image capture device 240 is operable to capture biometric information. For example, a user implements a camera in a kiosk to capture an image of his/her face for inclusion with his/her biographical information. Other biometric information includes, a fingerprint image, an iris scan, a body scan and/or actions associated with behavioral traits, voice pattern, walking gate, and the like biologically identifiable traits. In the picture situation, the image is embodied in a file for inclusion in custom information sent to the intermediate. The image may be contained in a variety of file formats including, but not limited to, a jpeg file, a tiff file, a gif file, a pdf file, and so forth. 
As will be discussed in greater detail below, image capture can include capturing a video or multiple images and down selecting a particular image using an algorithm to select an image that meets or exceeds a quality threshold, e.g., is suitable for biometric identification.”) an audio manager that processes voice patters captured by microphones to detect stress indicators: (See Vemury: Para [0074] (“voice pattern”) wherein the specialized tracking components operate together to: track the individual across the video frames; (See Vemury: Para [0240] (“For example, instead of capturing a still image for use in facial recognition, a system operating in accordance with the illustrated embodiment captures a video composed of a plurality of images, e.g., frames.”) identify a mobile device identifier within a geofenced area of the establishment, and (See Showers: Para [0098] (“As discussed above, when the mobile user device 600 is in a location, the mobile user device 600 may obtain geofences within a specific proximity of the location.”) [0204] (“In some embodiments, a mobile device of the reminder recipient may receive the instructions, as indicated by block 1930, and detect the recipient mobile device being within a geofence or wireless environment of the merchant's physical site, as indicated by block 1932.”) monitoring, by the server, the monitoring by: assigning identifiers for the video frames and audio using transaction data from transaction terminals using the bounding box; and (See Vemury: Para [02025] (“For example, a computing system performing the method may include metadata that is associated with the biometric information, e.g., the biometric signature of the individual's face represented by the image. Example metadata includes information that uniquely identifies the image, date, time, software version, what software was used, error checking results, physical device information, location, timestamp, vendor information, biometric information, image information, use input information (such as observations from an official overseeing enrollment) and so forth.”) evaluating rules with the identifiers and transaction information to calculate and continuously update an intervention score as the individual traverses the establishment; (See Vemury: Para [0071] (“The system 202 (e.g., collection device and/or central resource) can maintain the validation records for a predetermined period of time, until occurrence of an event, and the like events. It will be appreciated that this information in the validation record may be included in a name record and/or an indication (such as a score) can be included. In the latter situation, a score may indicate the person or persons are attempting to provide false or misleading information. 
What threshold score is to be achieved to pass validation can be changed manually, e.g., by a system supervisor, or dynamically based on a variety of factors, including but not limited to, location, other users' errors, and so forth.”) [0243] (“Capturing a video (e.g., multiple images) permits a system or device employing the method to down-select from among the images, such as by using an algorithm that determines a quality score for in question images.”)

updating a cumulative score for the individual based on previous transactions and visits of the individual to the establishment; and (See Vemury: Para [0071] (“The system 202 (e.g., collection device and/or central resource) can maintain the validation records for a predetermined period of time, until occurrence of an event, and the like events. It will be appreciated that this information in the validation record may be included in a name record and/or an indication (such as a score) can be included. In the latter situation, a score may indicate the person or persons are attempting to provide false or misleading information. What threshold score is to be achieved to pass validation can be changed manually, e.g., by a system supervisor, or dynamically based on a variety of factors, including but not limited to, location, other users' errors, and so forth.”) [0107] (“In other examples, information is stored in a name record that contains information for (potentially) multiple instances. A name record for example may contain information for multiple visits, e.g., multiple entry/exits for a particular individual in addition to containing biographic information for the individual.”)

processing a response action comprising one or more of: triggering a suspicious activity report; flagging an account of the individual for increased identification; pausing particular transactions of the individual for review; or flagging for a fraud system review. (See Vemury: Para [0182] (“set a flag to prevent information from being used until the flag is removed”))

Therefore, it would have been obvious to a person having ordinary skill in the art at the time of the invention to modify Vemury to include Showers as it relates to detecting a mobile device within a geofenced area of the establishment. The motivation for combining the references would have been to determine whether an individual's mobile device is present at a particular location.

Claims 12 and 19 are similar to claim 1 and hence rejected on similar grounds.

Claim 2: wherein the server generates a packet of information based on detected fraud comprising: an identity identifier; particular video of the individual; audio captured within the establishment; transaction information; factors used in calculating the intervention score; authentication data; account identifiers; and establishment information. (See Vemury: Para [0099])

Claim 3: wherein the server associates a profile and a fraud profile with the packet of information, the fraud profile comprising the unique biometric identifier generated by the person tracker. (See Vemury: Para [0222])

Claim 4: wherein the server sends the profile and the fraud profile to one or more financial systems based on rules. (See Vemury: Para [0081])

Claim 5: wherein the server sends a redacted packet of information to one or more governmental systems or one or more non-governmental systems with requirements to enable obtaining a full version of the packet of information.
(See Vemury: Para [0049])

Claim 6: wherein obtaining the identity comprises identifying the individual through: biometric authentication; a mobile application check-in; or transaction information. (See Vemury: Para [0099])

Claim 7: tracking transactions at terminals; or analyzing particular video for action and behavior identifiers. (See Vemury: Para [0100])

Claim 8: tracking physiological stress indicators; tracking biometric identifiers; or analyzing movements throughout the establishment. (See Vemury: Para [0100])

Claim 9: wherein tracking comprises tracking keywords spoken by the individual as detected by one or more microphones within the establishment. (See Vemury: Para [0084])

Claim 10: continuously updating real-time scores of a perceived state of the individual. (See Vemury: Para [0071])

Claim 11: wherein updating further includes generating the real-time scores based on: weighted sums of behavior indicators; threshold values; indicator pairs with enhanced weights; or machine learning analysis of threat, fraud, and impersonation. (See Vemury: Para [0201])

Claim 13: wherein the server maintains logs to enable tracking activities of the user during establishment visits. (See Vemury: Para [0096])

Claim 14: wherein the server generates suspicious activity reports from the logs based on scores exceeding threshold values. (See Vemury: Para [0191])

Claim 15: wherein tracking comprises: monitoring transaction information from the terminals; and analyzing video frames for action and behavior identifiers using the bounding boxes. (See Showers: Para [0185])

Claim 16: wherein automatically processing comprises requiring increased authentication above initial requirements for current and subsequent transactions of the user. (See Vemury: Para [0104])

Claim 17: wherein automatically processing comprises: flagging transactions for review; and delaying transaction processing during review. (See Vemury: Para [0229])

Claim 18: wherein automatically processing comprises flagging an account of the establishment for a review. (See Vemury: Para [0229])

Claim 20: maintaining detailed activity logs during visits of a particular user to the establishment; generating customized log aggregations based on rules; distributing the customized log aggregations to external systems based on system-specific requirements. (See Vemury: Para [0096])

Response to Arguments

Applicant's arguments filed 1/16/2025 have been fully considered but they are not persuasive.

101

Applicant asserts that the claims implement specific technical components. Examiner respectfully disagrees. Examiner notes that each of the claimed “person tracker,” “behavior-action tracker,” and “audio manager” has been used in the specification (para [0023], [0025], [0026]) as a black box without any further details. See Fig. 1. A patent application has to describe how to solve the problem in a manner that encompasses something more than the “principle in the abstract.” But there is no mechanism recited in either the specification or drawings that describes how the recited steps are technologically implemented. The claimed “person tracker,” “behavior-action tracker,” and “audio manager” are claimed by their function and do not constitute additional structural elements beyond their functioning as part of the abstract idea. As per Fig. 1 and para [0023], [0025], [0026], these components have been shown as labeled black boxes with aspirational functional results. Each of the above limitations is expressed purely in terms of results, devoid of implementation details.
All purported inventive concepts reside in how the recited ‘track,’ ‘identify,’ ‘assign,’ ‘evaluate,’ ‘calculate,’ ‘update,’ and ‘process’ steps are technically accomplished, yet the specification does not elaborate on how the processing technologically achieves those results. See Two-Way Media Ltd. v. Comcast Cable Commc’n, LLC, 874 F.3d 1329, 1337 (Fed. Cir. 2017) (“The claim [before the court] requires the functional results of ‘converting,’ ‘routing,’ ‘controlling,’ ‘monitoring,’ and ‘accumulating records,’ but does not sufficiently describe how to achieve these results in a non-abstract way.”).

Applicant argues that the claims are analogous to SRI, Finjan and McRO. Examiner respectfully disagrees. In McRO, the court determined that the claim at issue was patent eligible because, when considered as a whole, the claim was directed to a technological improvement over existing, manual 3-D animation techniques and used limited rules in a process specifically designed to achieve an improved technological result in conventional industry practice. No comparable technological improvement is found in the present invention.

Unlike the claims of Finjan, which “employ[] a new kind of file that enables a computer security system to do things it could not do before,” claim 1 does not solve any technical issues. Id. at 1305. Further, unlike the claims of Finjan, which “are directed to a non-abstract improvement in computer functionality,” claim 1 does not provide any technology improvement. Id. In short, Finjan is inapplicable here because the claims of Finjan are directed to a technology improvement, but claim 1 is directed to an abstract idea.

Unlike the situation in SiRF Tech, applicant does not identify any improvement to GPS technology or any other computer technology. Instead, applicant’s claims merely invoke the use of generic computer components and a series of black boxes for tracking and monitoring an individual visiting an establishment.

In SRI, the claims are drawn to a method of hierarchical computer network monitoring, and recite a series of steps, including “deploying” network monitors, which detect “suspicious network activity based on analysis of network traffic data,” and generate and integrate “reports of . . . suspicious activity.” SRI, 930 F.3d at 1301. The Federal Circuit held that the claims are not directed to an abstract idea, and are patent eligible, because they are “necessarily rooted in computer technology in order to solve a specific problem in the realm of computer networks,” i.e., identifying hackers or potential intruders into the network. Id. at 1303. The court noted, “the claims actually prevent the normal, expected operation of a conventional computer network” and that “[l]ike the claims in [DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245 (Fed. Cir. 2014)], the claimed technology ‘overrides the routine and conventional sequence of events’ by detecting suspicious network activity, generating reports of suspicious activity, and receiving and integrating the reports using one or more hierarchical monitors.” Id. at 1304. The court, thus, recognized that the claims are not using a computer as a tool but, instead, improve “the technical functioning of the computer and computer networks by reciting a specific technique for improving computer network security.” Id.
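For illustration of the level of generality at issue, the “person tracker” functions addressed above (deriving biometric features, computing/hashing a unique biometric value, and matching it to a known customer) can be realized with entirely generic operations. The following hypothetical sketch uses an invented, stubbed feature vector and a simple hash lookup; real systems compare embeddings by distance rather than exact hash equality, and none of this is drawn from the application or the cited references:

```python
# Hypothetical sketch: hash a quantized feature vector and look it up against
# enrolled customers. The feature-extraction step is assumed to happen upstream;
# names and values are invented for illustration only.
import hashlib
from typing import Optional, Sequence


def biometric_hash(features: Sequence[float]) -> str:
    """Quantize a feature vector and hash it to a stable identifier."""
    quantized = ",".join(f"{round(v, 1):.1f}" for v in features)
    return hashlib.sha256(quantized.encode("utf-8")).hexdigest()


# Hypothetical enrollment store: biometric hash -> customer identifier.
ENROLLED = {
    biometric_hash([0.1, 0.4, 0.9]): "customer-001",
}


def match_individual(frame_features: Sequence[float]) -> Optional[str]:
    """Return the enrolled customer identifier for a frame's features, if any."""
    return ENROLLED.get(biometric_hash(frame_features))


if __name__ == "__main__":
    print(match_individual([0.1, 0.4, 0.9]))   # "customer-001"
    print(match_individual([0.7, 0.2, 0.3]))   # None
```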
Previously Addressed: Applicant argues that the combination of deriving biometric facial features, computing a unique biometric value, and matching it to a specific customer represents an unconventional technical solution to the problem of accurately identifying individuals in a frictionless manner. Applicant argues that the amended claims describe a specific method of monitoring actions and behaviors by identifying facial features, expressions, and extremities using a bounding box, and looking for predefined actions associated with nervousness, sweating, and agitation. This represents an improvement in video analysis technology for detecting suspicious behavior. Applicant argues that the claims now include listening for specific voice patterns indicating stress. This integration of audio analysis with video and transaction monitoring represents a technological improvement in fraud detection systems.

Examiner respectfully disagrees. Examiner notes that deriving biometric facial features to identify a person or customer was invented long ago, nor does applicant contend to have invented biometric facial recognition. See NPL Face recognition: Past, present and future (a review), Taskiran et al., Elsevier. See also US 8,219,438 B1 (“Method and system for measuring shopper response to products based on behavior and facial expression”) to Moon et al. To call it an unconventional technical solution is therefore not persuasive. As set forth above, the claimed limitations – monitoring video by identifying facial features, expressions, and extremities of the individual inside the bounding box; looking for predefined actions associated with nervousness, sweating, and agitation; and listening for specific voice patterns indicating stress – consist of no more than high level functional steps for monitoring the behavior of an individual that do not represent an advancement in computers or other technology.

Applicant argues that claims 1, 12 and 19 have been amended to highlight technical details such as a processor and non-transitory computer readable storage media. In response, Examiner notes that, as per para [0016], [0020], [0080], the above elements have been described in a manner that does not distinguish them from generic computer components. See MPEP § 2106.01(d)(I) (“mere physical or tangible implementation of an exception does not guarantee eligibility.”)

Applicant asserts that the claims specify the technical means by which the server performs the steps of obtaining an identity, including the use of bounding boxes within video frames, deriving biometric features, computing hash values, and matching these to unique hash values associated with known individuals. Applicant asserts that these steps are not merely expressed in terms of results but are tied to specific technological processes implemented by the server.

Examiner finds this unpersuasive. As explained in Prong 2 above, the specification merely mentions: using Person tracker to derive biometric facial features from images supplied by cameras, computing unique hash values and matching the unique biometric value to a specific customer ([0023]); and using a bounding box to identify facial features of an individual ([0026]). In other words, the specification mentions “Person tracker” and “Behavior-action tracker” as black boxes at a high level of generality.
The specification does not provide sufficient details behind the methodology by which the “Person Tracker” or “Behavior-action tracker” derives biometric features or identifies facial features. The steps in claims 1, 12 and 19 are recited in a “result-oriented way” in which the result is recited, but not how it is accomplished. The claimed limitations do not purport to improve the functioning of computer devices, and do not recite (i) an improvement to the functionality of a computer or other technology or technical field; (ii) a “particular machine” to apply or use the judicial exception; (iii) a particular transformation of an article to a different thing or state; or (iv) any other meaningful limitation. See MPEP 2106.05(a)-(c), (e)-(h). Hence, the additional elements fail to integrate the abstract idea into a practical application or provide significantly more than an abstract idea. See MPEP 2106.04(d). To assert that the claims represent an improvement in prior art is not persuasive because there is no mechanism recited that dictates how to achieve the result recited in each step and convert any abstract idea into a practical application.

Previously Addressed: Applicant argues that the claimed invention is not directed to an abstract idea but to a specific technological solution for fraud detection and tracking within an establishment. Examiner respectfully disagrees. Applicant cites to para 20-24 as evidence of technical improvement. However, Examiner notes that biometric identity verification was invented long ago. Para 25 merely states that Behavior-action tracker 125 uses the bounding box to identify facial features, expressions, and extremities of the individual. This is merely using the bounding box as a black box to identify an individual. Furthermore, the specification describes the identification of facial features at a high level of generality without details on how such derivation is technically achieved. Providing a biometric facial identity hash, instead of full video, is merely an advantage of biometric identification, which already exists. Therefore, Examiner notes that using generic biometric identification technology to derive the biometric features of a person entering an establishment constitutes, at most, using existing technology in the manner it was designed, as opposed to an improvement to computers or technology.

With respect to sharing KYC scores across different establishments and financial institutions (para 31), this is not an example of technical improvement but a method of organizing human activity. Similarly, the gathering and sharing of log files in accordance with laws or regulations (para 33) is mere data gathering and following instructions to deter fraud, which is a Certain Method of Organizing Human Activity. Examiner was unable to find any support for retaining hash values in the specification.

Finally, it is not clear why applicant brings up DDR because that case dealt with Internet-centric challenges of clicking a hyperlink and being transported to a rival e-commerce website. The present claims - which are directed to tracking, monitoring, and risk scoring a person entering an establishment - bear no similarity whatsoever to DDR. Hence, DDR is inapposite here.
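Similarly, for illustration only, the audio-based “voice patterns indicating stress” functionality discussed above can be approximated with a generic signal-processing heuristic. The toy sketch below uses invented thresholds; it is not the application's method and is not drawn from any cited reference. It merely flags clips whose short-time energy varies strongly:

```python
# Hypothetical sketch: flag "stress" in an audio clip by measuring short-time
# energy variability. Frame length and threshold are invented for illustration.
import numpy as np


def short_time_energy(signal: np.ndarray, frame_len: int = 400) -> np.ndarray:
    """Mean squared amplitude per non-overlapping frame."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames ** 2).mean(axis=1)


def stress_indicator(signal: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag the clip when frame-to-frame energy variability exceeds a threshold."""
    energy = short_time_energy(signal)
    if energy.mean() == 0:
        return False
    variability = energy.std() / energy.mean()   # coefficient of variation
    return variability > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    calm = 0.1 * rng.standard_normal(16000)                       # steady low-level signal
    agitated = np.concatenate([calm, 0.8 * rng.standard_normal(16000)])
    print(stress_indicator(calm), stress_indicator(agitated))     # False True
```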
Applicant asserts that the claims recite a very specific approach to tracking a user within a store, during which specific types of data are collected and specific types of information are recorded and evaluated with rules to generate a score that is continuously updated as the user traverses the establishment, and a cumulative score for previous transactions and previous visits to the establishment is updated based on the score. Applicant asserts that this provides hands-free monitoring and improves security-based technologies.

Examiner respectfully disagrees because merely describing a very specific approach to tracking a user within a store and collecting specific types of data does not change the basic fact that such collection and analysis of data is patent ineligible. See Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1355 (Fed. Cir. 2016) (“[S]electing information, by content or source, for collection, analysis, and display does nothing significant to differentiate a process from ordinary mental processes, whose implicit exclusion from § 101 undergirds the information-based category of abstract ideas.”); In re Killian, 45 F.4th 1373, 1380 (Fed. Cir. 2022) (Claims “directed to collection of information, comprehending the meaning of that collected information, and indication of the results, all on a generic computer network operating in its normal, expected manner,” fail step one of the Alice framework.), Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 1344-45 (Fed. Cir. 2018) (recognizing that information “is an intangible” and that “the collection, organization, and display of two sets of information on a generic display device is abstract absent a ‘specific improvement to the way computers [or other technologies] operate’”), FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1093-94 (Fed. Cir. 2016) (determining “that the ‘realm of abstract ideas’ includes ‘collecting information, including when limited to particular content’” as well as analyzing and presenting information). Moreover, “[i]nformation as such is an intangible” and collecting, analyzing (e.g., recognizing certain data within the dataset), and displaying that information, without more, is an abstract idea. See Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 1344-45 (Fed. Cir. 2018) (quoting Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353–54 (Fed. Cir. 2016) and citing similar decisions holding that displaying different types or sets of information from various sources on a generic display is abstract absent a specific improvement to the way computers or other technologies operate); see also SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161, 1167 (Fed. Cir. 2018) (“[M]erely presenting the results of abstract processes of collecting and analyzing information … is abstract as an ancillary part of such collection and analysis.”).

Previously Addressed: Applicant asserts that the claims are not directed to any of the judicially created abstract categories and the claims represent a practical application of an abstract idea because they are restricted to a specific embodiment and do not attempt to monopolize the abstract idea. Examiner disagrees. The Federal Circuit has consistently held that abstract ideas include the concepts of collecting data, analyzing the data, and displaying the results of the collection and analysis, including when limited to particular content. See, e.g., Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1340 (Fed. Cir.
2017) (identifying the abstract idea of collecting, displaying, and manipulating data); Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (characterizing collecting information, analyzing information by steps people go through in their minds, or by mathematical algorithms, and presenting the results of collecting and analyzing information, without more, as matters within the realm of abstract ideas). As pointed out in Prong 1, tracking and monitoring an individual by video; deriving biometric features; monitoring audio for speech patterns; monitoring transactions, actions and behaviors constitute collecting and analyzing data which are abstract ideas. Obtaining user information including biometric, determining geofence location, monitoring user behavior, etc. – are mere data gathering activities. Processing a response based on intervention score is a certain method of organizing human activity. For the above reasons, Applicant’s assertion – that the claims are not subject to eligibility analysis – is not persuasive. With respect to practical application analysis under the 2019 PEG, the steps of detecting, obtaining, monitoring, maintaining, processing and retaining by the server does not improve the server. Rather they improve security, prevent fraud, and thus may improve a business. See FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1096-97 (Fed. Cir. 2016) (steps of compiling data from various sources to generate a full picture of a user’s activity, identity, and frequency of activity in a network audit log combine data sources but do not make the claim eligible where they does not recite technological improvements in how the information sources are accessed and combined). Indeed, nothing in the claims improves the functioning of the computer, makes it operate more efficiently, or solves any technological problem. See Trading Techs. Int’l, Inc. v. IBG LLC, 921 F.3d 1378, 1384-85 (Fed. Cir. 2019). For the above reasons, Applicant’s arguments are not persuasive. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARUNAVA CHAKRAVARTI whose telephone number is (571)270-1646. The examiner can normally be reached 9 AM - 5 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Calvin Hewitt can be reached at (571)272-6709. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ARUNAVA CHAKRAVARTI/Primary Examiner, Art Unit 3692