
Patent Application 18304232 - METHOD APPARATUS SYSTEM AND NON-TRANSITORY - Rejection



Title: METHOD, APPARATUS, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR IDENTIFYING AND PRIORITIZING NETWORK SECURITY EVENTS

Application Information

  • Invention Title: METHOD, APPARATUS, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR IDENTIFYING AND PRIORITIZING NETWORK SECURITY EVENTS
  • Application Number: 18304232
  • Submission Date: 2025-05-19T00:00:00.000Z
  • Effective Filing Date: 2023-04-20T00:00:00.000Z
  • Filing Date: 2023-04-20T00:00:00.000Z
  • National Class: 726
  • National Sub-Class: 022000
  • Examiner Employee Number: 86043
  • Art Unit: 2438
  • Tech Center: 2400

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 3

Cited Patents

No patents were cited in this rejection.

Office Action Text


    DETAILED ACTION
The following is a Final Office action in response to applicants’ amendment and remarks filed on 04/16/2025.  Claims 1, 8, 9, 16, 17, and 20 have been amended. Claims 1-20 are currently pending and have been considered as follows.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicants’ amendment of independent Claims 1, 9, and 17 regarding the newly added limitations “the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event” has changed the scope of the claimed invention.  Therefore, applicants’ arguments filed 04/16/2025 have been fully considered but are moot because the amendment necessitates new ground(s) of rejection where applicants’ arguments do not apply to the updated reference(s) for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.  Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-7, 9-11, 13-15, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Martin et al. (US 20180004948 A1, hereinafter Martin) in view of Pan et al. (US 20200334498 A1, hereinafter Pan).
As to Amended Claim 1:
Martin discloses a server for identifying and prioritizing a collection of information technology (IT) security events associated with a network (e.g. Martin “Blocks of the first method S100 can be executed by a remote computer system (e.g., a remote server) that remotely monitors events and signals occurring on a network of assets on behalf of an individual, company, corporation, agency, administration, or other organization to detect possible security threats” [0013]; a remote server, hereinafter the “system” [0059]), the server comprising:
a memory storing computer readable instructions (e.g. Martin “a computer-readable medium storing computer-readable instructions… such as RAMs, ROMs, flash memory, EEPROMs” [0117]); and
processing circuitry configured to execute the computer readable instructions to cause the server to (e.g. Martin “a processor but any suitable dedicated hardware device can (alternatively or additionally) execute the instructions” [0117]),
receive a dataset representing a plurality of IT security events associated with the network, the plurality of IT security events specific to one or more resources associated with the network (e.g. Martin “accessing a set of signals generated over a period of time in Block S110, each signal in the set of signals representing a possible security threat and containing an asset identification tag identifying a computer at which the possible security threat originated” [0009]; “disparate events occurring on the network over time” [0010]; [0014]; “accessing a set of signals generated over a period of time in Block S110, wherein each signal in the set of signals represents a possible security threat and containing an asset identification tag identifying a computer at which the possible security threat originated. Generally, in Block S110, the system accesses signals generated by external detection mechanisms layered throughout the network. For example, detection mechanisms within the network can serve signals to a common feed or common signal database, and the system can tap into this feed or signal database to access new and old signals over time” [0022]),
generate, by each of a plurality of defined algorithms, a plurality of individual scores for the plurality of IT security events, each individual score indicative that a possible security incident occurred (e.g. Martin “assigns a risk score to each signal in Block S120, such as by writing a preset risk score for a particular signal type to a signal of the same type or by implementing a risk algorithm to calculate a risk score for a signal based on various attributes of the signal” [0020]; “a risk score indicating a risk or likelihood that one or more events represented by the signal corresponds to a cyber attack” [0029]; “the system can implement a common risk algorithm or various risk algorithms unique to each signal type output by the other detection mechanisms on the network and/or internally by the system…” [0031]; risk algorithms [0033]; [0034]),
correlate, based on the received dataset, each of the individual scores for the plurality of IT security events with the one or more resources (e.g. Martin “relating a subset of signals in the set of signals based on like asset identification tags in Block S130” [0009]; “identify related signals, such as based on a common IP address at which the signals originated, in Block S130; group multiple related signals and corresponding risk scores” [0010]; [0011]; “a signal type, vulnerability type, or attempted exploitation mechanism; a timestamp corresponding to generation of a signal; an asset identification tag (e.g., IP address and user ID, host name, MAC address) corresponding to an asset at which the behavior that triggered the signal originated… in signal metadata. Each signal can thus include metadata defining various parameters” [0015]; [0020]; “The system can then aggregate risk scores for the first, second, and third signals… originating at the first computer” [0021]),
aggregate, for a resource of the one or more resources, each of the individual scores correlated with the resource into a security score specific to the resource (e.g. Martin [0009]; “group multiple related signals and corresponding risk scores into a single composite alert with a single composite risk score representing an aggregated risk” [0010]; “The system can then aggregate risk scores for the first, second, and third signals… to calculate a composite risk score for the composite event. The system can thus characterize a sequence of signals in Block S132 to confirm a link between disparate signals in Block S130 such that the system may generate a composite alert representing events related to a single cyber attack case and corresponding to a composite risk score that accurately portrays a risk inherent in the single cyber attack case” [0021]; “the system can aggregate a set of disparate signals linked by a common asset identification tag of an originating computer” [0038]; [0041]; [0070]),
determine whether the security score specific to the resource exceeds a defined threshold (e.g. Martin “in response to the risk score exceeding a threshold risk score” [0009]; “if the composite risk score for a composite alert exceeds a threshold risk score, the system can serve the composite alert to human security personnel” [0010]), and
in response to the security score specific to the resource exceeding the defined threshold, generate and transmit a security incident alert specific to the resource to a security operation center (SOC) (e.g. Martin “group multiple related signals and corresponding risk scores into a single composite alert with a single composite risk score representing an aggregated risk that a group of related signals will culminate in a security breach at the network in Blocks S132 and S134. Thus, if the composite risk score for a composite alert exceeds a threshold risk score, the system can serve the composite alert to human security personnel” [0010]; “human security personnel (e.g., a security operation center, or “SOC”) who review and respond to security threats” [0012]; “the system can compare the composite risk score to the risk threshold in Block S140. If the composite risk score exceeds the threshold risk score, the system can communicate the composite alert to human security personnel… and push such communication(s) to a security analyst, various security personnel, or a security operation center (or “SOC”)” [0047]), the security incident alert including each IT security event correlated with the resource (e.g. Martin “to condense related signals into a single composite alert… signals corresponding to various actions (or events, “behaviors,” “microbehaviors”) of assets” [0011]; “compiles a set of related signals into a “composite alert” in Block S132” [0016]; “confirm a link between disparate signals in Block S130 such that the system may generate a composite alert representing events related to a single cyber attack case and corresponding to a composite risk score” [0021]; “the system generates a new composite alert and writes metadata from a set of linked signals to the new composite alert, such as an origin of the signals (e.g., an asset identification tag), timestamps or a time window encompassing the signals, a timeline of events corresponding to signals in the set” [0042]);
But Martin does not specifically disclose:
the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event.
However, the analogous art Pan does disclose the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event (e.g. Pan “the event data is input to time-based models with each of the time-based models being machine-learning models… a set of machine-learning risk scores are correlated based on results received from the time-based machine-learning models” [0004]; “rule-based risk scores corresponding to the user are also calculated” [0005]; “the raw events from log sources 310, such as the Security Information and Event Management System (SIEM), are analyzed with a spectrum of analytics ranging from simple pattern matches, complex rule-based analytics, to machine language (ML) models, as shown in FIG. 3. A rule or a set of rules tests the incoming raw events to detect unusual activities associated with a user. The ML modeling is achieved in a separate, multi-algorithm/analytic pipeline engine” [0045]; “UBA assigns a risk score to a user based on detected risks from ML as well as the rule engine of the SIEM… if a user accessed an external resource that is deemed to be an inappropriate, risky, or having signs of infection, then the rule “User Accessing Risky Resources” is triggered to generate a sense event with a preconfigured risk score attached… the Machine Learning modeling system uses various algorithms to ‘learn’ a base model with users' past behaviors, then ‘score’ their current behavior when it deviates from the learned behavior… As with rule-based sense score, the Machine Learning sense score value is also configurable and can be customized based an organization's environment” [0046]; [0047]; [0063]; [0073]-[0074]; FIG. 7).
Martin and Pan are analogous art because they are from the same field of endeavor in security risk scoring.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin and Pan before him or her, to modify the invention of Martin with the teachings of Pan to include the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event as claimed.  The suggestion/motivation for doing so would have been for analysis resulting in time-based risk scores for each of different time intervals to avoid missing important information in data sets and reducing false positives (Pan [0002]; [0003]).  Therefore, it would have been obvious to combine Martin and Pan to obtain the invention as specified in the instant claim(s).
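For illustration only (this sketch is not part of the prosecution record, and all names are hypothetical), the combination mapped above to Claim 1 — two defined algorithms (one machine-learning, one rule-based) each scoring every IT security event, the scores correlated to resources, summed per resource, and compared to a threshold — could be sketched as:

```python
# Hypothetical sketch of the claimed scoring pipeline. The ml_score and
# rule_score functions are stand-ins for Pan's ML model and rule engine;
# the aggregation and threshold check mirror Martin's composite risk score.
from collections import defaultdict

def ml_score(event):
    # stand-in for a machine-learning model's per-event score
    return 0.8 if event["anomalous"] else 0.1

def rule_score(event):
    # stand-in for a rule-based condition's per-event score
    return 0.5 if event["matched_rule"] else 0.0

def prioritize(events, threshold):
    totals = defaultdict(float)
    for ev in events:
        # each defined algorithm generates an individual score per event;
        # scores are correlated with the event's resource (e.g. an IP address)
        totals[ev["resource"]] += ml_score(ev) + rule_score(ev)
    # flag for the SOC any resource whose aggregate score exceeds the threshold
    return {res: score for res, score in totals.items() if score > threshold}

alerts = prioritize(
    [{"resource": "10.0.0.5", "anomalous": True, "matched_rule": True},
     {"resource": "10.0.0.5", "anomalous": True, "matched_rule": False},
     {"resource": "10.0.0.9", "anomalous": False, "matched_rule": False}],
    threshold=1.0,
)
# 10.0.0.5 aggregates (0.8 + 0.5) + (0.8 + 0.0) = 2.1 > 1.0; 10.0.0.9 stays below
```

A real system would replace the stand-in scorers with trained models and a configurable rule set, but the control flow tracks the claim language: individual per-event scores from each defined algorithm, correlation by resource, summation, and a threshold-gated alert.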
As to Claim 2:
Martin in view of Pan discloses the server of claim 1, wherein the processing circuitry is further configured to execute the computer readable instructions to cause the server to: determine whether the security score specific to the resource exceeds the defined threshold in a defined period of time (e.g. Martin “The system can also relate multiple like events occurring on the network—such as originating at a common computer and occurring within a particular window of time—based on preset correlation rules and can issue signals for anomalous event sequences” [0014]; “over a period of time” [0022]), and in response to the security score specific to the resource exceeding the defined threshold in the defined period of time, generate and transmit the security incident alert specific to the resource to the SOC (e.g. Martin [0016]; “The system thus collects a multitude of signals over time in Block S100 and then assigns a risk score to each signal in Block S120, such as by writing a preset risk score for a particular signal type to a signal of the same type or by implementing a risk algorithm to calculate a risk score for a signal based on various attributes of the signal. The system then relates a subset of all collected signals based on the attributes contained in these signals. In this example application, the system: relates the first signal, second signal, and third signal based on a common IP address of the originating computer (i.e., the first computer) in Block S130; creates a composite alert from the related first, second, and third signals in Block S132; sums risk scores for the first, second, and third signals to calculate a composite risk for the composite alert, and pushes the composite alert to a security analyst if the composite risk score exceeds a threshold risk score” [0020]; [0027]; [0042]; [0116]).
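For illustration only (not part of the record; all names are hypothetical), the defined-period variant mapped to Claim 2 — counting toward the aggregate only those scores from events inside a defined time window — could be sketched as:

```python
# Hypothetical sketch of a time-windowed threshold check: only events for the
# given resource with timestamps inside the window contribute to the aggregate.
def score_in_window(events, resource, window_start, window_end):
    return sum(ev["score"] for ev in events
               if ev["resource"] == resource
               and window_start <= ev["timestamp"] <= window_end)

events = [
    {"resource": "hostA", "score": 0.6, "timestamp": 100},
    {"resource": "hostA", "score": 0.7, "timestamp": 160},
    {"resource": "hostA", "score": 0.9, "timestamp": 900},  # outside the window
]
# 0.6 + 0.7 = 1.3 inside the window [0, 300]; the 0.9 event is excluded
exceeds = score_in_window(events, "hostA", 0, 300) > 1.0
```

The window bounds play the role of the claimed "defined period of time"; an alert would be generated only when the in-window aggregate exceeds the threshold.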
As to Claim 3:
Martin in view of Pan discloses the server of claim 1, wherein the processing circuitry is further configured to execute the computer readable instructions to cause the server to receive the dataset representing the plurality of IT security events associated with the network from a plurality of data sources (e.g. Martin “a feed of signals served by various detection mechanisms (e.g., sensors) throughout a network… the system can automatically link multiple disparate signals generated by other intrusion detection systems within the network substantially in real-time in order to rapidly detect zero-day exploit attacks” [0012]; receive multitude of disparate signals output by sensors on the network over time [0057]; [0067]-[0069]).
As to Claim 5:
Martin in view of Pan discloses the server of claim 1, wherein the processing circuitry is further configured to execute the computer readable instructions to cause the server to store the dataset representing the plurality of IT security events in one or more databases (e.g. Martin “detection mechanisms within the network can serve signals to a common feed or common signal database, and the system can tap into this feed or signal database to access new and old signals over time” [0022]; [0030]).
As to Claim 6:
Martin in view of Pan discloses the server of claim 1, wherein the one or more resources includes at least one of an IP address, an individual, a virtual computing machine, and a physical computing machine (e.g. Martin “a common IP address at which the signals originated” [0010]; “an asset (e.g., a machine and user)” [0014]; “an asset identification tag (e.g., IP address and user ID, host name, MAC address) corresponding to an asset at which the behavior that triggered the signal originated” [0015]; first computer [0018]).
As to Claim 7:
Martin in view of Pan discloses the server of claim 1, wherein the processing circuitry is further configured to execute the computer readable instructions to cause the server to aggregate, for the resource, each of the individual scores correlated with the resource into the security score specific to the resource by summing each of the individual scores correlated with the resource (e.g. Martin “The system then relates a subset of all collected signals based on the attributes contained in these signals. In this example application, the system: relates the first signal, second signal, and third signal based on a common IP address of the originating computer (i.e., the first computer) in Block S130; creates a composite alert from the related first, second, and third signals in Block S132; sums risk scores for the first, second, and third signals to calculate a composite risk for the composite alert, and pushes the composite alert to a security analyst if the composite risk score exceeds a threshold risk score” [0020]; [0043]).
As to Amended Claim 9:
Martin discloses a method for identifying and prioritizing a collection of information technology (IT) security events associated with a network (e.g. Martin “a first method S100 for predicting and characterizing cyber attacks includes: accessing a set of signals generated over a period of time…” [0009]; “first method S100 can be executed in conjunction with a computer network” [0010]), the method comprising:
receiving a dataset representing a plurality of IT security events associated with the network, the plurality of IT security events specific to one or more resources associated with the network (e.g. Martin “accessing a set of signals generated over a period of time in Block S110, each signal in the set of signals representing a possible security threat and containing an asset identification tag identifying a computer at which the possible security threat originated” [0009]; “disparate events occurring on the network over time” [0010]; [0014]; “accessing a set of signals generated over a period of time in Block S110, wherein each signal in the set of signals represents a possible security threat and containing an asset identification tag identifying a computer at which the possible security threat originated. Generally, in Block S110, the system accesses signals generated by external detection mechanisms layered throughout the network. For example, detection mechanisms within the network can serve signals to a common feed or common signal database, and the system can tap into this feed or signal database to access new and old signals over time” [0022]),
generating, by each of a plurality of defined algorithms, a plurality of individual scores for the plurality of IT security events, each individual score indicative that a possible security incident occurred (e.g. Martin “assigns a risk score to each signal in Block S120, such as by writing a preset risk score for a particular signal type to a signal of the same type or by implementing a risk algorithm to calculate a risk score for a signal based on various attributes of the signal” [0020]; “a risk score indicating a risk or likelihood that one or more events represented by the signal corresponds to a cyber attack” [0029]; “the system can implement a common risk algorithm or various risk algorithms unique to each signal type output by the other detection mechanisms on the network and/or internally by the system…” [0031]; risk algorithms [0033]; [0034]),
correlating, based on the received dataset, each of the individual scores for the plurality of IT security events with the one or more resources (e.g. Martin “relating a subset of signals in the set of signals based on like asset identification tags in Block S130” [0009]; “identify related signals, such as based on a common IP address at which the signals originated, in Block S130; group multiple related signals and corresponding risk scores” [0010]; [0011]; “a signal type, vulnerability type, or attempted exploitation mechanism; a timestamp corresponding to generation of a signal; an asset identification tag (e.g., IP address and user ID, host name, MAC address) corresponding to an asset at which the behavior that triggered the signal originated… in signal metadata. Each signal can thus include metadata defining various parameters” [0015]; [0020]; “The system can then aggregate risk scores for the first, second, and third signals… originating at the first computer” [0021]),
aggregating, for a resource of the one or more resources, each of the individual scores correlated with the resource into a security score specific to the resource (e.g. Martin [0009]; “group multiple related signals and corresponding risk scores into a single composite alert with a single composite risk score representing an aggregated risk” [0010]; “The system can then aggregate risk scores for the first, second, and third signals… to calculate a composite risk score for the composite event. The system can thus characterize a sequence of signals in Block S132 to confirm a link between disparate signals in Block S130 such that the system may generate a composite alert representing events related to a single cyber attack case and corresponding to a composite risk score that accurately portrays a risk inherent in the single cyber attack case” [0021]; “the system can aggregate a set of disparate signals linked by a common asset identification tag of an originating computer” [0038]; [0041]; [0070]),
determining whether the security score specific to the resource exceeds a defined threshold (e.g. Martin “in response to the risk score exceeding a threshold risk score” [0009]; “if the composite risk score for a composite alert exceeds a threshold risk score, the system can serve the composite alert to human security personnel” [0010]), and
in response to the security score specific to the resource exceeding the defined threshold, generating and transmitting a security incident alert specific to the resource to a security operation center (SOC) (e.g. Martin “group multiple related signals and corresponding risk scores into a single composite alert with a single composite risk score representing an aggregated risk that a group of related signals will culminate in a security breach at the network in Blocks S132 and S134. Thus, if the composite risk score for a composite alert exceeds a threshold risk score, the system can serve the composite alert to human security personnel” [0010]; “human security personnel (e.g., a security operation center, or “SOC”) who review and respond to security threats” [0012]; “the system can compare the composite risk score to the risk threshold in Block S140. If the composite risk score exceeds the threshold risk score, the system can communicate the composite alert to human security personnel… and push such communication(s) to a security analyst, various security personnel, or a security operation center (or “SOC”)” [0047]), the security incident alert including each IT security event correlated with the resource (e.g. Martin “to condense related signals into a single composite alert… signals corresponding to various actions (or events, “behaviors,” “microbehaviors”) of assets” [0011]; “compiles a set of related signals into a “composite alert” in Block S132” [0016]; “confirm a link between disparate signals in Block S130 such that the system may generate a composite alert representing events related to a single cyber attack case and corresponding to a composite risk score” [0021]; “the system generates a new composite alert and writes metadata from a set of linked signals to the new composite alert, such as an origin of the signals (e.g., an asset identification tag), timestamps or a time window encompassing the signals, a timeline of events corresponding to signals in the set” [0042]);
But Martin does not specifically disclose:
the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event.
However, the analogous art Pan does disclose the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event (e.g. Pan “the event data is input to time-based models with each of the time-based models being machine-learning models… a set of machine-learning risk scores are correlated based on results received from the time-based machine-learning models” [0004]; “rule-based risk scores corresponding to the user are also calculated” [0005]; “the raw events from log sources 310, such as the Security Information and Event Management System (SIEM), are analyzed with a spectrum of analytics ranging from simple pattern matches, complex rule-based analytics, to machine language (ML) models, as shown in FIG. 3. A rule or a set of rules tests the incoming raw events to detect unusual activities associated with a user. The ML modeling is achieved in a separate, multi-algorithm/analytic pipeline engine” [0045]; “UBA assigns a risk score to a user based on detected risks from ML as well as the rule engine of the SIEM… if a user accessed an external resource that is deemed to be an inappropriate, risky, or having signs of infection, then the rule “User Accessing Risky Resources” is triggered to generate a sense event with a preconfigured risk score attached… the Machine Learning modeling system uses various algorithms to ‘learn’ a base model with users' past behaviors, then ‘score’ their current behavior when it deviates from the learned behavior… As with rule-based sense score, the Machine Learning sense score value is also configurable and can be customized based an organization's environment” [0046]; [0047]; [0063]; [0073]-[0074]; FIG. 7).
Martin and Pan are analogous art because they are from the same field of endeavor in security risk scoring.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin and Pan before him or her, to modify the invention of Martin with the teachings of Pan to include the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event as claimed.  The suggestion/motivation for doing so would have been for analysis resulting in time-based risk scores for each of different time intervals to avoid missing important information in data sets and reducing false positives (Pan [0002]; [0003]).  Therefore, it would have been obvious to combine Martin and Pan to obtain the invention as specified in the instant claim(s).
As to Claim 10:
Martin in view of Pan discloses the method of claim 9, wherein: determining whether the security score specific to the resource exceeds the defined threshold includes determining whether the security score specific to the resource exceeds the defined threshold in a defined period of time (e.g. Martin “The system can also relate multiple like events occurring on the network—such as originating at a common computer and occurring within a particular window of time—based on preset correlation rules and can issue signals for anomalous event sequences” [0014]; “over a period of time” [0022]), and generating and transmitting the security incident alert specific to the resource to the SOC includes generating and transmitting the security incident alert to the SOC in response to the security score specific to the resource exceeding the defined threshold in the defined period of time (e.g. Martin [0016]; “The system thus collects a multitude of signals over time in Block S100 and then assigns a risk score to each signal in Block S120, such as by writing a preset risk score for a particular signal type to a signal of the same type or by implementing a risk algorithm to calculate a risk score for a signal based on various attributes of the signal. The system then relates a subset of all collected signals based on the attributes contained in these signals. In this example application, the system: relates the first signal, second signal, and third signal based on a common IP address of the originating computer (i.e., the first computer) in Block S130; creates a composite alert from the related first, second, and third signals in Block S132; sums risk scores for the first, second, and third signals to calculate a composite risk for the composite alert, and pushes the composite alert to a security analyst if the composite risk score exceeds a threshold risk score” [0020]; [0027]; [0042]; [0116]).

As to Claim 11:
Martin in view of Pan discloses the method of claim 9, wherein receiving the dataset representing the plurality of IT security events associated with the network includes receiving the dataset from a plurality of data sources (e.g. Martin “a feed of signals served by various detection mechanisms (e.g., sensors) throughout a network… the system can automatically link multiple disparate signals generated by other intrusion detection systems within the network substantially in real-time in order to rapidly detect zero-day exploit attacks” [0012]; receive multitude of disparate signals output by sensors on the network over time [0057]; [0067]-[0069]).
As to Claim 13:
Martin in view of Pan discloses the method of claim 12, further comprising storing the dataset representing the plurality of IT security events in one or more databases (e.g. Martin "detection mechanisms within the network can serve signals to a common feed or common signal database, and the system can tap into this feed or signal database to access new and old signals over time" [0022]; [0030]).
As to Claim 14:
Martin in view of Pan discloses the method of claim 9, wherein the one or more resources includes at least one of an IP address, an individual, a virtual computing machine, and a physical computing machine (e.g. Martin "a common IP address at which the signals originated" [0010]; "an asset (e.g., a machine and user)" [0014]; "an asset identification tag (e.g., IP address and user ID, host name, MAC address) corresponding to an asset at which the behavior that triggered the signal originated" [0015]; first computer [0018]).

As to Claim 15:
Martin in view of Pan discloses the method of claim 9, wherein aggregating, for the resource, each of the individual scores correlated with the resource into the security score specific to the resource includes summing each of the individual scores correlated with the resource (e.g. Martin "The system then relates a subset of all collected signals based on the attributes contained in these signals. In this example application, the system: relates the first signal, second signal, and third signal based on a common IP address of the originating computer (i.e., the first computer) in Block S130; creates a composite alert from the related first, second, and third signals in Block S132; sums risk scores for the first, second, and third signals to calculate a composite risk for the composite alert, and pushes the composite alert to a security analyst if the composite risk score exceeds a threshold risk score" [0020]; [0043]).
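The summing operation mapped above (individual risk scores correlated with a common resource are added into one composite score, which is then compared against a threshold) can be sketched in a few lines. The function name, signal values, and threshold below are purely illustrative and are not taken from Martin or from the claims.

```python
from collections import defaultdict

def composite_scores(signals, threshold):
    """Sum individual risk scores per originating resource and return the
    resources whose composite score exceeds the threshold."""
    totals = defaultdict(float)
    for resource, score in signals:
        totals[resource] += score
    return {res: total for res, total in totals.items() if total > threshold}

# Hypothetical signals: (originating resource, individual risk score)
signals = [("10.0.0.5", 3.0), ("10.0.0.5", 4.5), ("10.0.0.7", 1.0)]
alerts = composite_scores(signals, threshold=5.0)  # {"10.0.0.5": 7.5}
```

In this sketch only the first resource crosses the threshold, so only it would produce a composite alert for the SOC.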
As to Amended Claim 17:
Martin discloses a non-transitory computer readable medium storing computer readable instructions (e.g. Martin ā€œa computer-readable medium storing computer-readable instructions… such as RAMs, ROMs, flash memory, EEPROMsā€ [0117]), which when executed by processing circuitry of a server (e.g. Martin ā€œa processor but any suitable dedicated hardware device can (alternatively or additionally) execute the instructionsā€ [0117]; ā€œBlocks of the first method S100 can be executed by a remote computer system (e.g., a remote server) that remotely monitors events and signals occurring on a network of assets on behalf of an individual, company, corporation, agency, administration, or other organization to detect possible security threatsā€ [0013]; a remote server, hereinafter the ā€œsystemā€ [0059]), causes the server to:
receive a dataset representing a plurality of information technology (IT) security events associated with a network, the plurality of IT security events specific to one or more resources associated with the network (e.g. Martin ā€œaccessing a set of signals generated over a period of time in Block S110, each signal in the set of signals representing a possible security threat and containing an asset identification tag identifying a computer at which the possible security threat originatedā€ [0009]; ā€œdisparate events occurring on the network over timeā€ [0010]; [0014]; ā€œaccessing a set of signals generated over a period of time in Block S110, wherein each signal in the set of signals represents a possible security threat and containing an asset identification tag identifying a computer at which the possible security threat originated. Generally, in Block S110, the system accesses signals generated by external detection mechanisms layered throughout the network. For example, detection mechanisms within the network can serve signals to a common feed or common signal database, and the system can tap into this feed or signal database to access new and old signals over timeā€ [0022]),
generate, by each of a plurality of defined algorithms, a plurality of individual scores for the plurality of IT security events, each individual score indicative that a possible security incident occurred (e.g. Martin ā€œassigns a risk score to each signal in Block S120, such as by writing a preset risk score for a particular signal type to a signal of the same type or by implementing a risk algorithm to calculate a risk score for a signal based on various attributes of the signalā€ [0020]; ā€œa risk score indicating a risk or likelihood that one or more events represented by the signal corresponds to a cyber attackā€ [0029]; ā€œthe system can implement a common risk algorithm or various risk algorithms unique to each signal type output by the other detection mechanisms on the network and/or internally by the system… ā€ [0031]; risk algorithms [0033]; [0034]),
correlate, based on the received dataset, each of the individual scores for the plurality of IT security events with the one or more resources (e.g. Martin ā€œrelating a subset of signals in the set of signals based on like asset identification tags in Block S130ā€ [0009]; ā€œidentify related signals, such as based on a common IP address at which the signals originated, in Block S130; group multiple related signals and corresponding risk scoresā€ [0010]; [0011]; ā€œa signal type, vulnerability type, or attempted exploitation mechanism; a timestamp corresponding to generation of a signal; an asset identification tag (e.g., IP address and user ID, host name, MAC address) corresponding to an asset at which the behavior that triggered the signal originated… in signal metadata. Each signal can thus include metadata defining various parametersā€  [0015]; [0020]; ā€œThe system can then aggregate risk scores for the first, second, and third signals… originating at the first computerā€ [0021]),
aggregate, for a resource of the one or more resources, each of the individual scores correlated with the resource into a security score specific to the resource (e.g. Martin [0009]; ā€œgroup multiple related signals and corresponding risk scores into a single composite alert with a single composite risk score representing an aggregated riskā€ [0010]; ā€œThe system can then aggregate risk scores for the first, second, and third signals… to calculate a composite risk score for the composite event. The system can thus characterize a sequence of signals in Block S132 to confirm a link between disparate signals in Block S130 such that the system may generate a composite alert representing events related to a single cyber attack case and corresponding to a composite risk score that accurately portrays a risk inherent in the single cyber attack caseā€ [0021]; ā€œthe system can aggregate a set of disparate signals linked by a common asset identification tag of an originating computerā€ [0038]; [0041]; [0070]), 
determine whether the security score specific to the resource exceeds a defined threshold (e.g. Martin ā€œin response to the risk score exceeding a threshold risk scoreā€ [0009]; ā€œif the composite risk score for a composite alert exceeds a threshold risk score, the system can serve the composite alert to human security personnelā€ [0010]), and
in response to the security score specific to the resource exceeding the defined threshold, generate and transmit a security incident alert specific to the resource to a security operation center (SOC) (e.g. Martin "group multiple related signals and corresponding risk scores into a single composite alert with a single composite risk score representing an aggregated risk that a group of related signals will culminate in a security breach at the network in Blocks S132 and S134. Thus, if the composite risk score for a composite alert exceeds a threshold risk score, the system can serve the composite alert to human security personnel" [0010]; "human security personnel (e.g., a security operation center, or "SOC") who review and respond to security threats" [0012]; "the system can compare the composite risk score to the risk threshold in Block S140. If the composite risk score exceeds the threshold risk score, the system can communicate the composite alert to human security personnel… and push such communication(s) to a security analyst, various security personnel, or a security operation center (or "SOC")" [0047]), the security incident alert including each IT security event correlated with the resource (e.g. Martin "to condense related signals into a single composite alert… signals corresponding to various actions (or events, "behaviors," "microbehaviors") of assets" [0011]; "compiles a set of related signals into a "composite alert" in Block S132" [0016]; "confirm a link between disparate signals in Block S130 such that the system may generate a composite alert representing events related to a single cyber attack case and corresponding to a composite risk score" [0021]; "the system generates a new composite alert and writes metadata from a set of linked signals to the new composite alert, such as an origin of the signals (e.g., an asset identification tag), timestamps or a time window encompassing the signals, a timeline of events corresponding to signals in the set" [0042]);
But Martin does not specifically disclose:
the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event.
However, the analogous art Pan does disclose the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event (e.g. Pan ā€œthe event data is input to time-based models with each of the time-based models being machine-learning models… a set of machine-learning risk scores are correlated based on results received from the time-based machine-learning modelsā€ [0004]; ā€œrule-based risk scores corresponding to the user are also calculatedā€ [0005]; ā€œthe raw events from log sources 310, such as the Security Information and Event Management System (SIEM), are analyzed with a spectrum of analytics ranging from simple pattern matches, complex rule-based analytics, to machine language (ML) models, as shown in FIG. 3. A rule or a set of rules tests the incoming raw events to detect unusual activities associated with a user. The ML modeling is achieved in a separate, multi-algorithm/analytic pipeline engineā€ [0045]; ā€œUBA assigns a risk score to a user based on detected risks from ML as well as the rule engine of the SIEM… if a user accessed an external resource that is deemed to be an inappropriate, risky, or having signs of infection, then the rule ā€œUser Accessing Risky Resourcesā€ is triggered to generate a sense event with a preconfigured risk score attached… the Machine Learning modeling system uses various algorithms to ā€˜learn’ a base model with users' past behaviors, then ā€˜score’ their current behavior when it deviates from the learned behavior… As with rule-based sense score, the Machine Learning sense score value is also configurable and can be customized based an organization's environmentā€ [0046]; [0047]; [0063]; [0073]-[0074]; FIG. 7).  
Martin and Pan are analogous art because they are from the same field of endeavor of security risk scoring.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin and Pan before him or her, to modify the invention of Martin with the teachings of Pan to include the plurality of defined algorithms including a first defined algorithm employing one or more machine learning models and a second defined algorithm employing one or more rule-based conditions, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event as claimed.  The suggestion/motivation for doing so would have been for analysis resulting in time-based risk scores for each of different time intervals, to avoid missing important information in data sets, and to reduce false positives (Pan [0002]; [0003]).  Therefore, it would have been obvious to combine Martin and Pan to obtain the invention as specified in the instant claim(s).
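As a rough sketch of the Pan-style arrangement relied on above (a machine-learning scorer and a rule-based scorer each producing its own individual score for the same event), the fragment below substitutes a simple deviation measure for a trained model; every function name, rule, and value is hypothetical rather than drawn from Pan.

```python
def ml_score(event, baseline_count):
    """Stand-in for a learned model: score how far an event count deviates
    from a learned baseline, capped at 10 (a real system would apply a
    trained model here, not this ratio)."""
    deviation = abs(event["count"] - baseline_count) / max(baseline_count, 1)
    return min(10.0, 10.0 * deviation)

def rule_score(event):
    """Rule-based condition: a preconfigured score fires when the event
    matches a known-risky destination (hypothetical watchlist)."""
    risky = {"known-bad.example"}
    return 8.0 if event["dest"] in risky else 0.0

event = {"user": "alice", "count": 30, "dest": "known-bad.example"}
# Each defined algorithm yields its own individual score for the event.
scores = (ml_score(event, baseline_count=10), rule_score(event))
```

Here the same event receives two independent scores, one from each scorer, which downstream logic could then correlate and aggregate per resource.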
As to Claim 18:
Martin in view of Pan discloses the non-transitory computer readable medium of claim 17, wherein the server is further caused to: determine whether the security score specific to the resource exceeds the defined threshold in a defined period of time (e.g. Martin ā€œThe system can also relate multiple like events occurring on the network—such as originating at a common computer and occurring within a particular window of time—based on preset correlation rules and can issue signals for anomalous event sequencesā€ [0014]; ā€œover a period of timeā€ [0022]), and in response to the security score specific to the resource exceeding the defined threshold in the defined period of time, generate and transmit the security incident alert specific to the resource to the SOC (e.g. Martin [0016]; ā€œThe system thus collects a multitude of signals over time in Block S100 and then assigns a risk score to each signal in Block S120, such as by writing a preset risk score for a particular signal type to a signal of the same type or by implementing a risk algorithm to calculate a risk score for a signal based on various attributes of the signal. The system then relates a subset of all collected signals based on the attributes contained in these signals. In this example application, the system: relates the first signal, second signal, and third signal based on a common IP address of the originating computer (i.e., the first computer) in Block S130; creates a composite alert from the related first, second, and third signals in Block S132; sums risk scores for the first, second, and third signals to calculate a composite risk for the composite alert, and pushes the composite alert to a security analyst if the composite risk score exceeds a threshold risk scoreā€ [0020]; [0027]; [0042]; [0116]).
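The time-bounded variant mapped for claims 10 and 18 (the composite score must exceed the threshold within a defined period of time) can be illustrated with a sliding-window sum over timestamped scores; the window length, timestamps, and threshold below are invented for the example and do not come from Martin.

```python
def exceeds_within_window(scored_events, window_s, threshold):
    """Return True if the scores of events falling inside any window_s-second
    span sum to more than threshold. scored_events: (timestamp_s, score)."""
    events = sorted(scored_events)
    total, start = 0.0, 0
    for ts, score in events:
        total += score
        while ts - events[start][0] > window_s:  # evict events outside window
            total -= events[start][1]
            start += 1
        if total > threshold:
            return True
    return False

# Hypothetical timestamped individual scores for one resource.
hits = [(0, 2.0), (30, 2.5), (55, 2.0), (600, 1.0)]
flagged = exceeds_within_window(hits, window_s=60, threshold=6.0)
```

With a 60-second window the first three events together cross the threshold and would trigger an alert; with a much shorter window they would not.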
Claims 4, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Martin in view of Pan as applied to Claims 1, 9, and 17, and further in view of Lee (US 20090157574 A1).
As to Claim 4:
Martin in view of Pan discloses the server of claim 3, but does not specifically disclose:
at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models.
However, the analogous art Lee does disclose at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models (e.g. Lee ā€œreceiving log information of a web server from a manager; determining if there is a hacking attempt by analyzing the received log information of the web server based on a predetermined hacking attempt detection ruleā€ [0018]; ā€œanalyzing a web server log using an intrusion detection scheme, including: an input unit for receiving log information of a web server from a manager; a determination unit for determining if there is a hacking attempt by analyzing the log information of the web server based on a predetermined hacking attempt detection ruleā€ [0020]).  Martin, Pan, and Lee are analogous art because they are from the same field of endeavor of network security.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin, Pan, and Lee before him or her, to modify the combination of Martin and Pan with the teachings of Lee to include at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models as claimed.  The suggestion/motivation for doing so would have been to enable a manager to effectively cope with an external intrusion by automatically analyzing log information of a web server intruded from an outside source and reporting the same to the manager (Lee [Abstract]).  Therefore, it would have been obvious to combine Martin, Pan, and Lee to obtain the invention as specified in the instant claim(s).
As to Claim 12:
Martin in view of Pan discloses the method of claim 11, but does not specifically disclose:
at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models.
However, the analogous art Lee does disclose at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models (e.g. Lee ā€œreceiving log information of a web server from a manager; determining if there is a hacking attempt by analyzing the received log information of the web server based on a predetermined hacking attempt detection ruleā€ [0018]; ā€œanalyzing a web server log using an intrusion detection scheme, including: an input unit for receiving log information of a web server from a manager; a determination unit for determining if there is a hacking attempt by analyzing the log information of the web server based on a predetermined hacking attempt detection ruleā€ [0020]).  Martin, Pan, and Lee are analogous art because they are from the same field of endeavor of network security.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin, Pan, and Lee before him or her, to modify the combination of Martin and Pan with the teachings of Lee to include at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models as claimed.  The suggestion/motivation for doing so would have been to enable a manager to effectively cope with an external intrusion by automatically analyzing log information of a web server intruded from an outside source and reporting the same to the manager (Lee [Abstract]).  Therefore, it would have been obvious to combine Martin, Pan, and Lee to obtain the invention as specified in the instant claim(s).
As to Claim 19:
Martin in view of Pan discloses the non-transitory computer readable medium of claim 17, wherein: the server is further caused to receive the dataset representing the plurality of IT security events associated with the network from a plurality of data sources (e.g. Martin ā€œa feed of signals served by various detection mechanisms (e.g., sensors) throughout a network… the system can automatically link multiple disparate signals generated by other intrusion detection systems within the network substantially in real-time in order to rapidly detect zero-day exploit attacksā€ [0012]; receive multitude of disparate signals output by sensors on the network over time [0057]; [0067]-[0069]), but does not specifically disclose:
at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models.
However, the analogous art Lee does disclose at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models (e.g. Lee ā€œreceiving log information of a web server from a manager; determining if there is a hacking attempt by analyzing the received log information of the web server based on a predetermined hacking attempt detection ruleā€ [0018]; ā€œanalyzing a web server log using an intrusion detection scheme, including: an input unit for receiving log information of a web server from a manager; a determination unit for determining if there is a hacking attempt by analyzing the log information of the web server based on a predetermined hacking attempt detection ruleā€ [0020]).  Martin, Pan, and Lee are analogous art because they are from the same field of endeavor of network security.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin, Pan, and Lee before him or her, to modify the combination of Martin and Pan with the teachings of Lee to include at least one of logs from authentication processes, logs from accessing websites, and one or more machine learning models as claimed.  The suggestion/motivation for doing so would have been to enable a manager to effectively cope with an external intrusion by automatically analyzing log information of a web server intruded from an outside source and reporting the same to the manager (Lee [Abstract]).  Therefore, it would have been obvious to combine Martin, Pan, and Lee to obtain the invention as specified in the instant claim(s).
Claims 8, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Martin in view of Pan as applied to Claims 1, 9, and 17, and further in view of McCracken (US 20130097183 A1).
As to Amended Claim 8:
Martin in view of Pan discloses the server of claim 1, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event (e.g. Pan "UBA assigns a risk score to a user based on detected risks from ML as well as the rule engine of the SIEM… if a user accessed an external resource that is deemed to be an inappropriate, risky, or having signs of infection, then the rule "User Accessing Risky Resources" is triggered to generate a sense event with a preconfigured risk score attached… the Machine Learning modeling system uses various algorithms to 'learn' a base model with users' past behaviors, then 'score' their current behavior when it deviates from the learned behavior… As with rule-based sense score, the Machine Learning sense score value is also configurable and can be customized based an organization's environment" [0046]; [0047]; [0063]; [0073]-[0074]; FIG. 7), but does not specifically disclose:
a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event.
However, the analogous art McCracken does disclose a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event (e.g. McCracken FIG. 1B; formula to provide the score for each event [0013]; [0074]; [0075] ā€œThe system can store the information gathered during traversal: the state caused by the event (in node state cache 131), the node, and the events themselves (in event cache 133), when the nodes are traversed. Then the algorithm applies the equation to each event to provide a score, the sort of scores in order is prepared, the confidence factor is optionally calculated, and this information can be provided to the user so that the user can more easily make a determination about what is the real problemā€ [0098]).  Martin, Pan, and McCracken are analogous art because they are from the same field of endeavor of network security.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin, Pan, and McCracken before him or her, to modify the combination of Martin and Pan with the teachings of McCracken to include a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event as claimed.  The suggestion/motivation for doing so would have been to determine a root cause of a service impact so that the user can more easily make a determination about what is the real problem (McCracken [0010]; [0098]).  Therefore, it would have been obvious to combine Martin, Pan, and McCracken to obtain the invention as specified in the instant claim(s).
As to Amended Claim 16:
Martin in view of Pan discloses the method of claim 9, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event (e.g. Pan "UBA assigns a risk score to a user based on detected risks from ML as well as the rule engine of the SIEM… if a user accessed an external resource that is deemed to be an inappropriate, risky, or having signs of infection, then the rule "User Accessing Risky Resources" is triggered to generate a sense event with a preconfigured risk score attached… the Machine Learning modeling system uses various algorithms to 'learn' a base model with users' past behaviors, then 'score' their current behavior when it deviates from the learned behavior… As with rule-based sense score, the Machine Learning sense score value is also configurable and can be customized based an organization's environment" [0046]; [0047]; [0063]; [0073]-[0074]; FIG. 7), but does not specifically disclose:
a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event.
However, the analogous art McCracken does disclose a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event (e.g. McCracken FIG. 1B; formula to provide the score for each event [0013]; [0074]; [0075] ā€œThe system can store the information gathered during traversal: the state caused by the event (in node state cache 131), the node, and the events themselves (in event cache 133), when the nodes are traversed. Then the algorithm applies the equation to each event to provide a score, the sort of scores in order is prepared, the confidence factor is optionally calculated, and this information can be provided to the user so that the user can more easily make a determination about what is the real problemā€ [0098]).  Martin, Pan, and McCracken are analogous art because they are from the same field of endeavor of network security.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin, Pan, and McCracken before him or her, to modify the combination of Martin and Pan with the teachings of McCracken to include a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event as claimed.  The suggestion/motivation for doing so would have been to determine a root cause of a service impact so that the user can more easily make a determination about what is the real problem (McCracken [0010]; [0098]).  Therefore, it would have been obvious to combine Martin, Pan, and McCracken to obtain the invention as specified in the instant claim(s).
As to Amended Claim 20:
Martin in view of Pan discloses the non-transitory computer readable medium of claim 17, wherein the first defined algorithm and the second defined algorithm each generates an individual score for each IT security event (e.g. Pan "UBA assigns a risk score to a user based on detected risks from ML as well as the rule engine of the SIEM… if a user accessed an external resource that is deemed to be an inappropriate, risky, or having signs of infection, then the rule "User Accessing Risky Resources" is triggered to generate a sense event with a preconfigured risk score attached… the Machine Learning modeling system uses various algorithms to 'learn' a base model with users' past behaviors, then 'score' their current behavior when it deviates from the learned behavior… As with rule-based sense score, the Machine Learning sense score value is also configurable and can be customized based an organization's environment" [0046]; [0047]; [0063]; [0073]-[0074]; FIG. 7), but does not specifically disclose:
a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event.
However, the analogous art McCracken does disclose a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event (e.g. McCracken FIG. 1B; formula to provide the score for each event [0013]; [0074]; [0075] ā€œThe system can store the information gathered during traversal: the state caused by the event (in node state cache 131), the node, and the events themselves (in event cache 133), when the nodes are traversed. Then the algorithm applies the equation to each event to provide a score, the sort of scores in order is prepared, the confidence factor is optionally calculated, and this information can be provided to the user so that the user can more easily make a determination about what is the real problemā€ [0098]).  Martin, Pan, and McCracken are analogous art because they are from the same field of endeavor of network security.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art, having the teachings of Martin, Pan, and McCracken before him or her, to modify the combination of Martin and Pan with the teachings of McCracken to include a third defined algorithm employing one or more defined formulas, and the third defined algorithm generates an individual score for each IT security event as claimed.  The suggestion/motivation for doing so would have been to determine a root cause of a service impact so that the user can more easily make a determination about what is the real problem (McCracken [0010]; [0098]).  Therefore, it would have been obvious to combine Martin, Pan, and McCracken to obtain the invention as specified in the instant claim(s).
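For contrast with the two scorers above, a formula-based "third defined algorithm" of the kind attributed to McCracken (a fixed equation applied to each event to yield its score) could look like the sketch below; the particular weights and decay term are hypothetical and are not McCracken's equation.

```python
def formula_score(severity, asset_weight, age_hours):
    """Hypothetical defined formula: severity scaled by asset importance and
    decayed linearly with event age (in days)."""
    return severity * asset_weight / (1.0 + age_hours / 24.0)

# A day-old severity-5 event on a doubly weighted asset.
score = formula_score(severity=5.0, asset_weight=2.0, age_hours=24.0)
```

Because the formula is deterministic and fixed in advance, it complements the learned and rule-based scorers by producing a third independent score per event.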
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicants’ disclosure.
Givental et al. (US 20180367561 A1)
KRAUS et al. (US 20200285737 A1)
Applicants’ amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kenneth W Chang whose telephone number is (571)270-7530. The examiner can normally be reached Monday - Friday 9:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Taghi Arani can be reached at 571-272-3787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.





/KENNETH W CHANG/Primary Examiner, Art Unit 2438

05.14.2025