Patent Application 17625490 - NETWORK STATUS CLASSIFICATION - Rejection
Application Information
- Invention Title: NETWORK STATUS CLASSIFICATION
- Application Number: 17625490
- Submission Date: 2025-05-12
- Effective Filing Date: 2022-01-07
- Filing Date: 2022-01-07
- National Class: 706
- National Sub-Class: 016000
- Examiner Employee Number: 91946
- Art Unit: 2143
- Tech Center: 2100
Rejection Summary
- 102 Rejections: 0
- 103 Rejections: 4
Cited Patents
No patents were cited in this rejection.
Office Action Text
DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to pending claims 1-10, 13-18, 20-21, 23-24 filed 1/7/2022.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. "An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof." The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function. Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: apparatus configured to (claim 24).
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claim(s) 1-10, 13-18, 20-21, 23-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea, a mental process and a mathematical concept, without significantly more, according to the subject matter eligibility flowchart (MPEP 2106). [Figure: subject matter eligibility flowchart, MPEP 2106.] As all claims recite statutory categories, step 1 is answered affirmatively. We proceed to step 2. As all the claims at least contain elements directed to an abstract idea, step 2A prong 1 (2A-1) is answered affirmatively. We then divide the claim into parts directed to an abstract idea (without underline) and additional elements (underlined).
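The claims analyzed below describe a pipeline that converts network measurements into image representations before GAN training and classification. As a minimal illustrative sketch of that conversion step only (the KPI count, window size, and min-max normalization here are assumptions for illustration, not taken from the claims):

```python
import numpy as np

def measurements_to_images(series: np.ndarray, window: int = 32) -> np.ndarray:
    """Convert a multivariate measurement series (channels x time) into a
    stack of image-like 2D arrays by slicing fixed, non-overlapping windows
    and min-max normalizing each window to [0, 1]."""
    channels, length = series.shape
    images = []
    for start in range(0, length - window + 1, window):
        win = series[:, start:start + window]
        lo, hi = win.min(), win.max()
        # Guard against a constant window to avoid division by zero.
        span = hi - lo if hi > lo else 1.0
        images.append((win - lo) / span)
    return np.stack(images)  # shape: (n_windows, channels, window)

# Hypothetical measurements: 4 network KPIs sampled 128 times.
rng = np.random.default_rng(0)
kpis = rng.normal(size=(4, 128))
imgs = measurements_to_images(kpis)
```

Each resulting 2D array could then serve as a training image for a DCGAN or a downstream classifier; the specific window and normalization choices would depend on the application's actual disclosure.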
We ask whether the additional elements are integrated into a practical application in step 2A prong 2 (2A-2) and whether the additional elements are significantly more or well understood, routine, and conventional (WURC) in step 2B (2B): 1. A method of training a network status classification model, the method comprising: obtaining measurements of network parameters of a communications network (obtaining measurements for consideration is a mental process, the additional elements generally apply the technique to network communications, and hence, do not serve to meaningfully limit the abstract idea (2A-2); furthermore, network communication monitoring is well-understood, routine, and conventional (WURC) (2B)); converting the measurements into a plurality of first images representing the measurements (conversion of measurements into images, e.g., visualizations, charts, etc., is a mental process); training a deep convolutional generative adversarial network, DCGAN, with the first images (a deep convolutional generative adversarial network (GAN) is a mathematical model or optimization algorithm for classifying images, while training such a neural network comprises an optimization scheme for such a model, which is a mathematical concept; hence, this limitation is directed to an abstract idea, a mathematical concept); generating a plurality of second images representing artificial measurements of the network parameters using the DCGAN (generating images is part of the output of the generative component of a GAN, hence, constitutes a mathematical concept as above; see above for the analysis of "network parameters"); and training a network status classification model with the plurality of second images (training a model with generated images is part of the adversarial training process of the GAN, and hence, constitutes a mathematical concept; see above for the analysis of "network status"). 2.
The method of claim 1, comprising further training the DCGAN with the plurality of first images representing the measurements (as above, training a convolutional GAN with image input data representing measurements is part of a mathematical optimization scheme, hence, is directed to a mathematical concept). 3. The method of claim 1, wherein each of the first images is associated with a respective network status of the network from a plurality of network statuses, and wherein generating the plurality of second images comprises generating a respective artificial network status associated with each of the second images (The additional elements are directed to generally applying the technique to network communications and do not impose meaningful limitations, in particular as any generating of images would be associated with additional network statuses, and hence, do not serve to meaningfully limit the abstract idea (2A-2); furthermore, network communication monitoring is well-understood, routine, and conventional (WURC) (2B)). 4. The method of claim 1, comprising, after training the DCGAN, evaluating the DCGAN, wherein evaluating the DCGAN comprises (evaluating is a mental process): training a further network status classification model with the plurality of second images (training a classification model is a mental process, one of judgment or evaluation based on prior data; see above for analysis of "network status"); providing one or more of the first images to the further network status classification model to provide, for each of the one or more first images, a respective estimated network status of the network (classification of images is a mental process; see above for analysis of "network status"); and comparing the network status and the estimated network status associated with each of the one or more first images (comparing of results is a mental process; see above for analysis of "network status"). 5.
The method of claim 1, comprising, after training the DCGAN, evaluating the DCGAN, wherein evaluating the DCGAN comprises (evaluating is a mental process): training a further network status classification model with the plurality of first images (training a classification model is a mental process, one of judgment or evaluation based on prior data; see above for analysis of "network status"); providing one or more of the second images to the further network status classification model to provide, for each of the one or more second images, a respective estimated artificial network status of the network (classification of images is a mental process; see above for analysis of "network status"); and comparing the artificial network status and the estimated artificial network status associated with each of the one or more second images (comparing of results is a mental process; see above for analysis of "network status"). 6. The method of any of claims 3, wherein at least one of the plurality of network statuses comprises a network fault status (The additional elements are directed to generally applying the technique to network communications, and hence, do not serve to meaningfully limit the abstract idea (2A-2); furthermore, network communication monitoring is well-understood, routine, and conventional (WURC) (2B)). 7. The method of claim 1, comprising classifying a status of the network based on further measurements of the network parameters (classification via a neural network is a mathematical concept). 8. The method of claim 7, wherein classifying the status of the network comprises converting the further measurements into a further image representing the further measurements (converting to an image is a mental process), and providing the further image to the network status classification model to provide a status of the network (classification is a mental process; see above for analysis of "network status"). 9.
The method of claim 1, wherein the network status model comprises an image recognition model (image classification is a mental process; see above for analysis of "network status"). 10. The method of claim 1, wherein the network parameters comprise a plurality of network performance indicators, and/or one or more of a PUSCH interference level, PUCCH interference level, an average Channel Quality Indicator, CQI, and a rate of a CQI below a predetermined value received at a node in the network (These are merely directed to application of the abstract idea in a particular field or to a particular type of data, but these limitations do not meaningfully limit the abstract idea (2A-2); furthermore, use of such network parameters is WURC (2B).). Claims 13-18, 20-21, 23-24 recite analogous apparatuses and computer media corresponding to the above claims and hence are rejected for the same reason. Furthermore, the use of a processing device and memory to implement an abstract idea does not serve to meaningfully limit the abstract idea (2A-2); furthermore, computer processing and memory are well-understood, routine, and conventional (WURC) (2B).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-3, 6-10 are rejected under 35 U.S.C.
103 as being unpatentable over Li ("MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks", published 1/15/2019) in view of Liu ("Time series classification with multivariate convolutional neural network", published 2018). For claim 1, Li discloses: a method of training a network status classification model (p.4 fig.1, with contemplated application to detection of cyber-attack status, see §1 ¶1 (Introduction): application to cyber-physical systems (CPS), internet of things (IoT) including power plants, data centers, etc.; see also §4.3 ¶1; hence, network status classification model), the method comprising: obtaining measurements of network parameters of a communications network (fig.1: acquiring multivariate parameters for training, testing; p.1 §1 ¶1: application to communication parameters for devices over a network); training a deep generative adversarial network with the first representations (fig.1: training the GAN with normalized training data); generating a plurality of second representations representing artificial measurements of the network parameters using the GAN (fig.1: samples generated by the generator); and training a network status classification model with the plurality of second representations (fig.1: training GAN classification model with generated representations). Li does not disclose: wherein the representations are images; wherein the GAN comprises a convolutional network or DCGAN; converting the measurements into a plurality of first images representing the measurements.
Liu discloses: wherein the representations are images (fig.4, p.4791-2, §B:1 "Input tensor transformation": "image-like tensor scheme" for encoding multivariate time series data, see also §B:3 "Multivariate convolution stage": generating further image representation through processing through CNN); wherein the GAN comprises a convolutional network or DCGAN (ibid: shows a deep convolutional network, hence, combination with Li yielding a GAN with deep convolutional layers); converting the measurements into a plurality of first images representing the measurements (ibid: sensor measurements are converted to image representations for passing through CNN). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the method of Li by incorporating the convolutional technique of Liu. Both concern the art of multivariate time-series sensor data processing, and the incorporation would have, according to Liu, leveraged the advantages of CNN architecture to deal with time-series data (p.4789 col.1 ¶3-4). For claim 2, Li modified by Liu discloses the method of claim 1, as described above. Li modified by Liu further discloses: further training the DCGAN with the plurality of first images representing the measurements (Liu p.4792: 4) Fully connected stage: using loss functions to train the CNN via the image representations, such as with the PHM data set (p.4792 §A); combination with Li, fig.1 yielding training the GAN with the converted images of Liu). For claim 3, Li modified by Liu discloses the method of claim 1, as described above.
Li modified by Liu further discloses: wherein each of the first images is associated with a respective network status of the network from a plurality of network statuses (Liu p.4791 fig.1 gives overview of the transformation, with fig.2, 1) "Input tensor transformation stage" describing conversion of status signals to tensors for formation of images, combination with Li §1 ¶1 yielding application to network status), and wherein generating the plurality of second images comprises generating a respective artificial network status associated with each of the second images (Li p.4: combination of Liu's tensor transformation technique with the GAN of Li would yield a technique where corresponding images are generated by the generator network, the image tensors associated with network statuses). For claim 6, Li modified by Liu discloses the method of claim 3, as described above. Li modified by Liu further discloses: wherein at least one of the plurality of network statuses comprises a network fault status (Li §3.2, Liu p.4793 §B: Dataset ¶2 ("The purpose of …"): detections of anomalies or faults in the network statuses). For claim 7, Li modified by Liu discloses the method of claim 1, as described above. Li modified by Liu further discloses: classifying a status of the network based on further measurements of the network parameters (Li §3.1 ¶2 (p.3), Liu fig.2: as a sliding window is used, further measurements in time are used for further status classifications). For claim 8, Li modified by Liu discloses the method of claim 7, as described above.
Li modified by Liu further discloses: wherein classifying the status of the network comprises converting the further measurements into a further image representing the further measurements (Liu p.4791 figs.2-4: image conversion likewise applied to further measurements), and providing the further image to the network status classification model to provide a status of the network (Liu fig.1, Li fig.1: likewise applying classification). For claim 9, Li modified by Liu discloses the method of claim 1, as described above. Li modified by Liu further discloses: wherein the network status model comprises an image recognition model (Liu p.4791: conversion of time-series network status to image-like tensor schemes in order to perform fault recognition via a CNN, a CNN being an image-based model for recognizing image properties (p.4789 col.1 ¶3), hence, image-recognition model). For claim 10, Li modified by Liu discloses the method of claim 1, as described above. Li modified by Liu further discloses: wherein the network parameters comprise a plurality of network performance indicators (Li §1 ¶1: performance indicators indicating the performance of network devices in IoS, CPS systems, hence, network performance indicators), and/or one or more of a PUSCH interference level, PUCCH interference level, an average Channel Quality Indicator, CQI, and a rate of a CQI below a predetermined value received at a node in the network (Li and Liu do not disclose this branch). Claim(s) 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Li ("MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks", published 1/15/2019) in view of Liu ("Time series classification with multivariate convolutional neural network", published 2018) in view of Shmelkov ("How good is my GAN?", published 2018). For claim 4, Li modified by Liu discloses the method of claim 1, as described above. Li modified by Liu does not disclose claim 4. 
Shmelkov discloses: after training the DCGAN, evaluating the DCGAN (p.6 §3), wherein evaluating the DCGAN comprises: training a further network status classification model with the plurality of second images (§3 ¶2: GAN-train trains on generated images S_g); providing one or more of the first images to the further network status classification model to provide, for each of the one or more first images, a respective estimated network status of the network (ibid: providing first original training images S_t to the classifier to provide a classification, combination with Li and Liu yielding application to network status classification); and comparing the network status and the estimated network status associated with each of the one or more first images (ibid: evaluating and diagnosing the GAN based on comparison of classifier results). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the method of Li modified by Liu by incorporating the GAN evaluation techniques of Shmelkov. Both concern the art of GANs, and the incorporation would have, according to Shmelkov, provided a better metric for determining GAN performance (p.2 ¶3-p.3 ¶1, p.3 ¶5). For claim 5, Li modified by Liu discloses the method of claim 1, as described above. Li modified by Liu does not disclose the method of claim 5.
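Shmelkov's GAN-train/GAN-test evaluation, relied on for claim 4 above, trains a classifier on generated images and scores it on real images (GAN-train), and vice versa (GAN-test). A toy sketch of computing the two scores, using a stand-in nearest-centroid classifier over synthetic feature vectors (the data, labels, and classifier here are illustrative assumptions, not Shmelkov's actual setup):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # One centroid (mean feature vector) per class label.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    labels = sorted(centroids)
    # Distance from every sample to every class centroid.
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

def accuracy(train_X, train_y, eval_X, eval_y):
    cents = nearest_centroid_fit(train_X, train_y)
    return float((nearest_centroid_predict(cents, eval_X) == eval_y).mean())

rng = np.random.default_rng(1)
# Synthetic stand-ins for real images (S_t) and generated images (S_g),
# two well-separated classes of 8-dimensional feature vectors.
real_X = np.concatenate([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
real_y = np.array([0] * 50 + [1] * 50)
gen_X = np.concatenate([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
gen_y = real_y.copy()

# GAN-train: train on generated samples, evaluate on real samples.
gan_train = accuracy(gen_X, gen_y, real_X, real_y)
# GAN-test: train on real samples, evaluate on generated samples.
gan_test = accuracy(real_X, real_y, gen_X, gen_y)
```

In Shmelkov's formulation, both scores approaching the accuracy of a classifier trained and tested on real data indicates that the generated distribution is close to the real one; here the synthetic "generated" set matches the "real" distribution by construction, so both scores come out high.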
Shmelkov discloses: after training the DCGAN, evaluating the DCGAN (p.6 §3), wherein evaluating the DCGAN comprises: training a further network status classification model with the plurality of first images (§3 ¶3: GAN-test: training on S_t); providing one or more of the second images to the further network status classification model to provide, for each of the one or more second images, a respective estimated artificial network status of the network (ibid: providing classification results on the second set); and comparing the artificial network status and the estimated artificial network status associated with each of the one or more second images (ibid: evaluating and diagnosing the GAN based on comparison of classifier results). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the method of Li modified by Liu by incorporating the GAN evaluation techniques of Shmelkov. Both concern the art of GANs, and the incorporation would have, according to Shmelkov, provided a better metric for determining GAN performance (p.2 ¶3-p.3 ¶1, p.3 ¶5). Claim(s) 13-16, 20-21, 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Li ("MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks", published 1/15/2019) in view of Liu ("Time series classification with multivariate convolutional neural network", published 2018) in view of Ryan 589 (US 20190379589 A1). Claims 13-16, 20-21, 23-24 disclose media and apparatuses corresponding to the above claims and are hence rejected under the same rationale. However, the cited references do not explicitly disclose limitations directed to a processor and memory, i.e.: a computer program product comprising a non-transitory computer readable medium having stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out a method.
However, Ryan discloses: a computer program product comprising a non-transitory computer readable medium having stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out a method (fig.32: 502, 510; ¶0161). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the method of Li modified by Liu by incorporating the hardware of Ryan. Both concern the art of neural networks, and the incorporation would have allowed the use of widely used computing devices to implement the technique. Claim(s) 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Li ("MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks", published 1/15/2019) in view of Liu ("Time series classification with multivariate convolutional neural network", published 2018) in view of Ryan 589 (US 20190379589 A1) in view of Shmelkov ("How good is my GAN?", published 2018). Claims 17-18 disclose media and apparatuses corresponding to the above claims and are hence rejected under the same rationale. However, the cited references do not explicitly disclose limitations directed to a processor and memory, i.e.: a computer program product comprising a non-transitory computer readable medium having stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out a method. However, Ryan discloses: a computer program product comprising a non-transitory computer readable medium having stored thereon a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out a method (fig.32: 502, 510; ¶0161).
It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the method of Li modified by Liu by incorporating the hardware of Ryan. Both concern the art of neural networks, and the incorporation would have allowed the use of widely used computing devices to implement the technique.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chen (US 20190042745 A1) discloses classifying of images to classify time-series data. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIANG LI whose telephone number is (303)297-4263. The examiner can normally be reached Mon-Fri 9-12p, 3-11p MT (11-2p, 5-1a ET). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor Jennifer Welch can be reached on (571)272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The examiner is available for interviews Mon-Fri 6-11a, 2-7p MT (8-1p, 4-9p ET). /LIANG LI/ Primary Examiner, AU 2143