Patent Application 17115646 - DEEP REINFORCEMENT LEARNING METHOD FOR - Rejection
Title: DEEP REINFORCEMENT LEARNING METHOD FOR GENERATION OF ENVIRONMENTAL FEATURES FOR VULNERABILITY ANALYSIS AND IMPROVED PERFORMANCE OF COMPUTER VISION SYSTEMS
Application Information
- Invention Title: DEEP REINFORCEMENT LEARNING METHOD FOR GENERATION OF ENVIRONMENTAL FEATURES FOR VULNERABILITY ANALYSIS AND IMPROVED PERFORMANCE OF COMPUTER VISION SYSTEMS
- Application Number: 17115646
- Submission Date: 2025-05-12
- Effective Filing Date: 2020-12-08
- Filing Date: 2020-12-08
- National Class: 706
- National Sub-Class: 020000
- Examiner Employee Number: 98701
- Art Unit: 2123
- Tech Center: 2100
Rejection Summary
- 102 Rejections: 0
- 103 Rejections: 1
Cited Patents
No patents were cited in this rejection.
Office Action Text
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

The amendment filed 01/22/2025 has been entered. Claims 1, 2, 4, 5, 8, 9, 11, 12, 14, 15, 17, and 18 remain pending in the application.

Applicant's arguments with respect to the rejections of claims 1, 2, 4, 5, 8, 9, 11, 12, 14, 15, 17, and 18 under 35 U.S.C. 101, filed 10/22/2024, have been considered and are persuasive. Therefore, the previous rejections as set forth in the previous Office action have been withdrawn.

Applicant's arguments with respect to the rejections of claims 1, 2, 4, 5, 8, 9, 11, 12, 14, 15, 17, and 18 under 35 U.S.C. 103, filed 10/22/2024, have been considered, and some of them are persuasive. The applicant argues that the claims have been amended to claim the process by which both the trained policy network and the generative model are generated using a reinforcement learning agent with a modified reward signal based on an action sampled from a randomly initialized recurrent neural network. The teaching by Graepel uses reinforcement learning to adjust values or signals based on interaction data, while the invention uses reinforcement learning to modify signals based on samples from a randomly initialized RNN. The examiner respectfully agrees that Graepel does not recite a randomly initialized RNN. However, upon further consideration, new ground(s) of rejection have been raised (see below).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 5, 8, 9, 11, 12, 14, 15, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Graepel et al. (US 20180032864 A1), in view of Caldwell et al. (US 11565709 B1), further in view of Xia et al. (NPL: "Generative Adversarial Regularized Mutual Information Policy Gradient Framework for Automatic Diagnosis"), further in view of Wiest et al. (US 11537134 B1), and further in view of Huang et al. (US 20210174594 A1).

Regarding claim 1, Graepel teaches part of the 1st limitation, "A system for generating environmental feature using deep reinforcement learning ..." (paragraph 0014, where Graepel discloses "a reinforcement learning system that selects actions to be performed by a reinforcement learning agent interacting with an environment. In order to interact with the environment, the reinforcement learning system receives data characterizing the current state of the environment". Graepel discloses a reinforcement learning system that includes a function to receive environment data, thus implying the generation of environment data to be used within the system.)
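For orientation only, the interaction pattern Graepel describes (an agent receives data characterizing the current state of an environment and selects an action in response) can be sketched as follows. This is a minimal illustrative sketch: the environment dynamics, the reward, and all names are assumptions made for illustration, not drawn from Graepel's disclosure.

```python
import random

class SimulatedEnvironment:
    """Toy stand-in for a simulated environment (illustrative only)."""
    def __init__(self):
        self.state = 0.0

    def observe(self):
        # Data characterizing the current state of the environment.
        return self.state

    def step(self, action):
        # Assumed toy dynamics: the action perturbs a scalar state.
        self.state += action
        reward = -abs(self.state)  # assumed reward: stay near the origin
        return self.state, reward

class Agent:
    """Placeholder agent; a real system would query a policy network here."""
    def select_action(self, observation):
        return random.choice([-1.0, 1.0])

env, agent = SimulatedEnvironment(), Agent()
for t in range(5):
    obs = env.observe()                                  # receive current state
    state, reward = env.step(agent.select_action(obs))   # select and apply action
```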
Graepel teaches the 2nd limitation, "One or more processors and a non-transitory computer-readable medium having executable instructions encoded thereon such that when executed, the one or more processors perform operations" (paragraph 0103, "Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier ... The computer storage medium can be a machine-readable storage device", and paragraph 0104, where Graepel discloses "the term 'data processing apparatus' refers to ... a programmable processor, a computer, or multiple processors". Graepel discloses components such as processors and a storage medium carrying encoded computer program instructions to be used by the system. All of the methods and systems disclosed by Graepel can be performed by a data processing apparatus comprising a programmable processor or multiple processors.)

Graepel teaches the 3rd limitation, "receiving, by the one or more processors, a policy network architecture, initialization parameters, and a simulation environment that models a trajectory of an autonomous vehicle through a physical environment" (paragraph 0016, "the agent may be a control system integrated in an autonomous or semi-autonomous vehicle navigating through the environment", and paragraph 0064, "That is, after initializing the values, the system trains the RL policy neural network to adjust the values of the parameters of the RL policy neural network using reinforcement learning from data generated from interactions of the agent with the simulated version of the environment". Graepel discloses a system comprising these steps: initializing the values of parameters; a reinforcement learning policy neural network, which suggests the claimed policy network architecture; and a simulation of an agent, such as an autonomous vehicle, interacting with the environment.)

Graepel teaches part of the 5th limitation, "Generating, by the one or more processors, a trained policy network by training the policy network using a reinforcement learning algorithm ..." (paragraph 0064, "the system trains the RL policy neural network to adjust the values of the parameters of the RL policy neural network using reinforcement learning". Graepel discloses training the policy network using reinforcement learning and how it is performed.)

Graepel does not teach the 4th limitation, "initializing, by the one or more processors, a set of landmark features sampled from the policy network". However, Caldwell teaches this limitation (Column 7, lines 4-20, where Caldwell discloses "In various examples, a simulation generation component 110 of the simulation computing system 106 may receive one or more initialization parameters 112 (e.g., initial parameter 112). The initial parameters 112 may represent an initial state of the simulation 100 ... a number of objects present in the environment ... object density". Caldwell discloses that the system receives one or more initialization parameters, including parameters of objects that can be controlled by the user. These parameters are initialized, and the various objects along with their parameters are an implication of landmark features.) Graepel does not teach the 6th limitation.
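As context for the cited training step (initialize policy parameters, then adjust them with reinforcement learning from simulated interactions), a compact REINFORCE-style sketch follows. It is a generic policy-gradient illustration under assumed names, with a toy discrete action space and reward; it is not Graepel's or Caldwell's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed initialization parameters for a toy simulation (cf. Caldwell's
# "initial parameters"): here just an action count and one rewarded action.
init_params = {"n_actions": 3, "target": 2}

theta = np.zeros(init_params["n_actions"])  # initialized policy parameters

def policy_probs(theta):
    """Softmax policy over discrete actions."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

for episode in range(500):
    p = policy_probs(theta)
    a = rng.choice(len(p), p=p)                    # sample an action from the policy
    reward = 1.0 if a == init_params["target"] else 0.0
    grad = -p.copy()                               # REINFORCE: grad of log pi(a)
    grad[a] += 1.0
    theta += 0.1 * reward * grad                   # adjust the parameter values

print(policy_probs(theta))  # probability mass shifts toward the rewarded action
```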
However, Caldwell teaches the 6th limitation, "generating, by the one or more processors implementing the generative model, a set of environmental features using the trained policy network" (Column 24, lines 8-11, "in some instances, the components in the memory 418 (and the memory 432, discussed below) may be implemented as a neural network", and Column 25, lines 18-45, "The simulation generation component 440 may be configured to generate one or more simulations based on one or more initialization parameters. The initial parameters may represent an initial state of the simulation(s) ...". Caldwell discloses a memory that stores the simulation component, wherein the components can be implemented as a neural network, suggesting that a neural network is utilized in a system that can generate simulations for evaluating the performance of a controller of an autonomous vehicle. The simulation generation component within the simulation component can be configured to generate one or more simulation conditions based on one or more initialization parameters. Such parameters can be road conditions, map topography, the number of objects present, and objects such as buildings, bridges, signs, etc. These are all representations of environmental features generated by the simulation generation component implemented as a neural network. Such a neural network can be a trained policy neural network, with a processor implementing the GAN, as disclosed by Graepel and Xia per the teaching combination below.)

Graepel does not teach the 7th limitation. However, Caldwell teaches the 7th limitation, "sampling a fixed value v of landmark features from a distribution π(∅) as a fixed attack, where π denotes the trained policy network and ∅ denotes an empty set, resulting in simulation initial conditions" (Column 3, lines 30-44, "In various examples, the objects in the simulated environment may be controlled based on user input (e.g., user-controlled objects) ... The user interface may receive the input and may cause the object to change position and/or orientation according to the input. In various examples, the computing system may provide the user with one or more objectives for the simulation. The objective(s) may include a destination, a time to arrive at destination, a disruption to vehicle travel, or the like. In some examples, the user may receive rewards for satisfying one or more objectives in the simulation (e.g., trying to cause the autonomous vehicle to break one or more rules, or otherwise interfere with the planner logic of the autonomous vehicle in simulation)"; Column 24, lines 8-11, "in some instances, the components in the memory 418 (and the memory 432, discussed below) may be implemented as a neural network"; and Column 25, lines 18-45, "The simulation generation component 440 may be configured to generate one or more simulations based on one or more initialization parameters. The initial parameters may represent an initial state of the simulation(s). In various examples, the initial parameters may include road conditions (e.g., smooth surface, potholes, gravel road, etc.), a map topography, weather conditions (e.g., rain, snow, sleet, fog, etc.), a starting point of the vehicle, ... In various examples, the user interface component 442 may be configured to receive input corresponding to initialization parameters, vehicle parameters (e.g., parameters of the autonomous controller), object parameters, object movements, vehicle movements, or the like. In some examples, the computing device(s) 434 may include one or more computing devices configured to run simulations of vehicle 402 and/or object operation in simulated environments, such as utilizing the techniques described above."

Caldwell discloses that objects in the simulated environment may be controlled based on user input, such as a user-controlled object directed at one or more objectives, for example a disruption to the vehicle, which can be considered an attack. A user can provide an input of objects associated with object parameters, which implies a fixed value of landmark features; such objects are input as part of the setup of the simulation. Caldwell further discloses a memory that stores the simulation component, wherein the components can be implemented as a neural network, suggesting that a neural network is utilized in a system that can generate simulations for evaluating the performance of a controller of an autonomous vehicle. The simulation generation component can be configured to generate one or more simulations based on one or more initialization parameters, wherein such parameters represent an initial state of the simulation(s). For instance, the initial parameters may include a starting point of the vehicle, which represents an empty set corresponding to location and distance, as the vehicle has not started moving. Caldwell also discloses that the user interface component allows the user to provide input corresponding to initialization parameters and objects to be controlled; thus a user can provide an input of an empty set, indicating a starting point or time in accordance with a distance of an autonomous vehicle within the simulation. Although Caldwell suggests a neural network but not a trained policy neural network, Graepel suggests a trained policy neural network at paragraph 0037: "the reinforcement learning system 100 includes a neural network training subsystem 110 that trains the neural networks in the collection". Graepel discloses that the reinforcement learning system includes a neural network training subsystem that trains the neural networks in the collection, which include the policy neural network.)

Graepel does not teach the 8th limitation. However, Caldwell teaches the 8th limitation, "running a plurality of simulations on the fixed value v" (Column 33, line 12, "At operation 604, the process may include running a first instantiation of the simulation with the autonomous controller. Running the first instantiation of the simulation may include receiving input regarding object actions and moving the object forward in the simulation based on the object actions and receiving input from the autonomous controller regarding vehicle actions". Caldwell discloses that, with a simulation set up with initialization parameters and objects along with object parameters as discussed above, the system performs the operation of running a first instantiation of the simulation.)

Graepel does not teach the 9th limitation. However, Caldwell teaches part of the 9th limitation, "transforming values of the set of landmark features from (Đigi)(v) into the set of environmental features ..." (Column 3, lines 1-8, "The simulation may include one or more objects with which the vehicle (e.g., controlled by the autonomous controller) may interact while operating in the simulated environment. The objects may include static objects (e.g., buildings, bridges, signs, etc.) and/or dynamic objects such as other vehicles (e.g., cars, trucks, motorcycles, mopeds, etc.), pedestrians, bicyclists, or the like. In various examples, the objects may be controlled by the computing system (e.g., computer-controlled objects), such as utilizing artificial intelligence. Computer-controlled objects may be operated according to one or more object parameters." Caldwell discloses that objects, each associated with corresponding parameters, are included within the simulated environment. By having the simulation include one or more objects with which the vehicle may interact, the teaching by Caldwell suggests a transformation that brings these objects, with their corresponding parameters, as input into the environment as part of the environment with which the user can interact.)

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teaching of a system and method to train a value neural network, including various policy neural networks, using reinforcement learning that is configured to receive an observation characterizing a state of an environment being interacted with by an agent, by Graepel, with the teaching of a system for generating simulations for evaluating a performance of a controller of an autonomous vehicle, by Caldwell. The motivation to do so is found in Caldwell's disclosure (Column 2, lines 32-37, where Caldwell discloses "In general, the techniques disclosed herein are equally valid for confirming whether or not programmatic logic of a system sufficiently accounts for all potential scenarios (ranges of initialization parameters), as well as enabling additional input to train such a system using, for example, reinforcement learning". Caldwell discloses that the technique is used to confirm whether the logic of the system sufficiently accounts for all scenarios of a simulation of an autonomous vehicle interacting with an environment; thus the teaching by Graepel can further incorporate the teaching of Caldwell for a system to configure the environment with which the agent interacts, evaluate performance, and improve the training of the neural network that governs the agent's interaction with the environment based on such evaluation, thereby ensuring the efficiency of the system.)
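To make the claimed sampling step concrete, the sketch below draws a fixed value v of landmark features from a trained policy given an empty input (v ~ π(∅)) and runs several simulations on that fixed v. The policy, the simulation, the scoring rule, and every name here are hypothetical stand-ins for illustration, not Caldwell's API or the applicant's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def trained_policy(conditioning):
    """Stand-in for the trained policy network pi; pi(empty set) is taken to
    mean an unconditioned sample of landmark-feature values (assumption)."""
    assert conditioning == (), "empty-set input: sample with no conditioning"
    return rng.normal(size=4)  # four toy landmark-feature values

def run_simulation(landmarks, seed):
    """Toy simulation returning a perception score for fixed landmarks.
    The scoring rule is invented purely for illustration."""
    local = np.random.default_rng(seed)
    return float(-np.linalg.norm(landmarks) + local.normal(scale=0.1))

v = trained_policy(())                              # fixed attack: v ~ pi(empty set)
scores = [run_simulation(v, s) for s in range(10)]  # plurality of simulations on fixed v
print(min(scores), max(scores))
```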
Graepel/Caldwell does not teach part of the 5th limitation, "... both a generative model and the policy network are trained together such that the policy network itself is the generator of the generative model using a reinforcement learning agent with a modified reward signal based on an action sampled from a randomly initialized recurrent neural network". However, Xia teaches this limitation (page 2, column 1, "We propose to train an RL model and a GAN simultaneously for automatic diagnosis, with taking the generator of GAN as policy network of RL."; page 4, column 2, "we directly train a discriminator that is able to assign rewards to both fully and partially observed sequences. We compute the intermediate reward from the partial observed symptom sequence as well as the complete symptom sequence"; page 5, column 2, "recurrent neural network (RNN) can be used as a discriminator in our framework"; and page 6, column 2, "Our dialogue system has a generator Gθ, an evaluation discriminator Dφ and an inference engine Dψ. All the parameters are initialized". Xia discloses a new policy gradient framework based on the Generative Adversarial Network (GAN) to optimize an RL model for automatic diagnosis. Within the disclosure, Xia discloses training a reinforcement learning model and a GAN simultaneously, wherein the GAN suggests a generative model and the reinforcement learning model suggests the policy network trained with reinforcement learning. Xia further discloses taking the generator of the GAN as the policy network of the reinforcement learning model, such that the policy network of the reinforcement learning model is the generator of the GAN. The framework further includes a discriminator that may be configured as a recurrent neural network, wherein the discriminator's parameters are initialized and the discriminator computes rewards based on a partially observed symptom sequence.)

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Graepel/Caldwell (a system and method to train a value neural network, including various policy neural networks, using reinforcement learning configured to receive an observation characterizing a state of an environment being interacted with by an agent, and a system for generating simulations for evaluating a performance of a controller of an autonomous vehicle) with the teaching of a new policy gradient framework based on the Generative Adversarial Network (GAN) to optimize an RL model for automatic diagnosis by Xia. The motivation to do so is found in Xia's disclosure (page 8, column 2, "we propose a Generative Adversarial regularized Mutual information Policy gradient framework (GAMP) for automatic diagnosis which aims to make a better medical dialogue system with higher diagnosis accuracy and less interactive turns with the user. First, we propose a new technique, called generative adversarial regularized policy gradient, to optimize the diagnosis system, which tries to avoid inquiring unreasonable symptoms deviate from the doctor's common diagnosis paradigm. Second, we devise a mechanism to add mutual information as a part of the reward function. Experiment evaluations on two public datasets have confirmed the validity of our proposed method. It not only can improve the accuracy of diagnosis but also can use less inquires to make a diagnosis decision". Xia discloses the benefit of the new framework, which aims to make a better medical dialogue system with higher diagnosis accuracy and fewer interactive turns with the user: it can not only improve the accuracy of diagnosis but also reach a diagnosis decision with fewer inquiries. While the framework is proposed for the medical field and a dialogue system, a person of ordinary skill in the art would have been able to incorporate the framework into the reinforcement learning network by Graepel, such that the framework may be performed with environment observation data instead of medical data to output a reward policy associated with environment data. The teaching combination of Graepel/Caldwell may further incorporate the framework by Xia to further develop the reinforcement learning policy network and system.)
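A compressed sketch of the mechanism the claim and Xia's framework describe (the policy network doubling as the GAN's generator, with a randomly initialized RNN discriminator assigning intermediate rewards that modify the policy-gradient signal) might look like the following. This is a toy reconstruction under assumed dimensions and update rules, not Xia's published code.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 5, 8  # toy action-vocabulary size and RNN hidden size (assumptions)

# Randomly initialized RNN discriminator (cf. Xia: an RNN can serve as the
# discriminator and assign rewards to partially observed sequences).
Wx = rng.normal(scale=0.1, size=(H, V))
Wh = rng.normal(scale=0.1, size=(H, H))
wo = rng.normal(scale=0.1, size=H)

def rnn_reward(seq):
    """Score a (possibly partial) action sequence with the RNN; the value
    in (0, 1) serves as the modified reward signal."""
    h = np.zeros(H)
    for a in seq:
        x = np.zeros(V)
        x[a] = 1.0
        h = np.tanh(Wx @ x + Wh @ h)
    return float(1.0 / (1.0 + np.exp(-wo @ h)))

theta = np.zeros(V)  # the policy network is also the GAN's generator

def probs(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

for step in range(200):
    p = probs(theta)
    seq = [int(rng.choice(V, p=p)) for _ in range(4)]          # generator rollout
    rewards = [rnn_reward(seq[:t]) for t in range(1, len(seq) + 1)]
    grad = np.zeros(V)
    for a, r in zip(seq, rewards):                             # intermediate rewards
        g = -p.copy()
        g[a] += 1.0                                            # REINFORCE per action
        grad += r * g
    theta += 0.05 * grad                                       # modified-reward update
```

In Xia's full framework the discriminator is itself trained adversarially against real sequences; it is frozen at its random initialization here only to keep the sketch short.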
Graepel/Caldwell/Xia does not teach the part of the 9th limitation "..., where Đigi denotes a mapping from a joint latent space to a space representing the set of landmark features". However, Wiest teaches this part of the limitation (Column 18, lines 31-34, "In some embodiments respective state vectors may be generated for each dynamic object or entity. The total number of relevant dynamic objects in the vicinity of the autonomous vehicle may change over time, so the representation may be designed to handle varying numbers of dynamic objects in various embodiments ... In another approach towards handling varying numbers of dynamic objects, a mapping or embedding from a first representation space to a second representation space may be used ...". Wiest discloses a method and apparatus for generating joint state predictions to be used for decision making regarding movements or trajectories of an autonomous vehicle. Within the disclosure, Wiest discloses obtaining an encoding of an environment for operating vehicles, comprising a combination of at least a representation of moving entities; such entities can be dynamic objects. Wiest then discloses a process of mapping or embedding from a first representation space to a second representation space to handle varying numbers of dynamic objects.)

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Graepel/Caldwell/Xia (a system and method to train a value neural network, including various policy neural networks, using reinforcement learning configured to receive an observation characterizing a state of an environment being interacted with by an agent; a system for generating simulations for evaluating a performance of a controller of an autonomous vehicle; and a new policy gradient framework based on the Generative Adversarial Network (GAN) to optimize an RL model for automatic diagnosis) with the teaching of methods and apparatus for generating joint state predictions to be used for decision making regarding movements or trajectories of an autonomous vehicle by Wiest. The motivation to do so is found in Wiest's disclosure (Column 2, lines 51-56, "In some embodiments, the neural network-based machine learning model may comprise several logically decoupled components including a policy model predicting respective actions expected to be selected by a plurality of moving entities and a state transition model.", and Column 2, lines 39-43, "The method may comprise, in various embodiments, training, using at least the input encodings, a neural network-based machine learning model to produce a probabilistic representation of a set of predicted states of the environment.". The teaching by Wiest suggests the same process as the Graepel/Caldwell combination, which trains a neural network to process an agent interacting with a state of an environment through action prediction and action selection, using a simulation setup to display how the agent interacts with the environment and surrounding objects. Wiest similarly recites such a process and similarly uses a policy model within a neural network model for training. Wiest further suggests using input encodings to produce a probabilistic representation of a set of predicted states of the environment. Therefore, the teaching combination of Graepel/Caldwell/Xia can further incorporate the teaching by Wiest, based on the similarity of the processes, to further improve the setup of the simulation environment through techniques introduced in Wiest, such as encoding inputs and mapping or embedding from one representation space to another.)
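The claimed mapping from a joint latent space to the landmark-feature space, like Wiest's space-to-space embedding for varying numbers of objects, amounts to applying a learned transform per object. A minimal sketch with an assumed linear map (illustrative dimensions and names, not from either disclosure) is:

```python
import numpy as np

rng = np.random.default_rng(2)
latent_dim, feature_dim = 6, 3  # assumed sizes of the two spaces

# Assumed "learned" mapping, here a fixed random linear map for illustration,
# from the joint latent space to the space of landmark features.
W = rng.normal(scale=0.5, size=(feature_dim, latent_dim))
b = np.zeros(feature_dim)

def latent_to_landmarks(z):
    """Map one latent code to landmark-feature values."""
    return W @ z + b

# Varying numbers of dynamic objects (cf. Wiest): map each object's latent
# code independently, so the object count is free to change between scenes.
n_objects = int(rng.integers(2, 6))
latents = [rng.normal(size=latent_dim) for _ in range(n_objects)]
landmarks = np.stack([latent_to_landmarks(z) for z in latents])
print(landmarks.shape)  # (n_objects, feature_dim)
```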
Graepel/Caldwell/Xia/Wiest does not teach part of the 1st limitation, "producing a three-dimensional (3D) physical realization of the environment features using a printer apparatus configured to print the physical realization of the environmental features". However, Huang teaches this limitation (paragraph 0012, "The example system 100 includes a 3D printer 110 to generate a 3D object, such as the object 130 illustrated in FIG. 1.". Huang discloses the use of a 3D printer for printing 3D objects within an environment.)

Graepel/Caldwell/Xia/Wiest does not teach the 10th limitation, "causing the printer apparatus to print the 3D physical realizations of the set of environmental features for placement in the physical environment". However, Huang teaches this limitation (paragraph 0014, "Referring now to FIG. 2, another example virtualization system for 3D printing is illustrated. The example system 200 includes a 3D printer 210 for printing 3D objects", and paragraph 0016, "The display 250 is provided to display a virtualized environment, such as the virtual environment 260 illustrated in FIG. 2. As used herein, virtualized environment includes virtual reality, as well as augmented reality in which virtual content or objects and physical content or objects are displayed together. In some examples of augmented reality systems, the user is provided with a direct view of the physical environment, and virtual elements are overlaid, or overlapped". Huang discloses the use of a 3D printer for printing 3D objects within an environment, wherein the system includes an augmented reality system to display the physical environment together with overlapping virtual elements while the printer performs its printing function.)

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Graepel/Caldwell/Xia/Wiest (a system and method to train a value neural network, including various policy neural networks, using reinforcement learning configured to receive an observation characterizing a state of an environment being interacted with by an agent; a system for generating simulations for evaluating a performance of a controller of an autonomous vehicle; a new policy gradient framework based on the Generative Adversarial Network (GAN) to optimize an RL model for automatic diagnosis; and methods and apparatus for generating joint state predictions to be used for decision making regarding movements or trajectories of an autonomous vehicle) with the teaching of 3D printing along with the display of virtualized and physical content together within the environment by Huang. The motivation to do so is found in Graepel's disclosure (paragraph 0039, "Generally, the simulated version of the environment 104 is a virtualized environment that simulates how actions performed by the agent 120 would affect the state of the environment". Graepel discloses that the simulated version of the environment is a virtualized environment that simulates how actions performed by the agent would affect the state of the environment. Therefore, the teaching combination can incorporate Huang's display of a virtualized environment, as Huang discloses 3D printing within a combination of virtualized and physical environments overlapping each other. Thus, users can obtain a simulation of the agent interacting with its surrounding environment in both physical and virtualized settings.)
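As an illustration of how a generated environmental feature could be handed to a printer apparatus, the sketch below writes a toy feature as an ASCII STL mesh, a common input format for 3D-printing toolchains. The geometry, the sizing rule, and the file name are assumptions for illustration; neither Huang nor the application specifies this pipeline.

```python
def write_ascii_stl(path, triangles, name="feature"):
    """Write triangles (triples of (x, y, z) vertices) as ASCII STL.
    Normals are written as zeros; slicers recompute them from vertex order."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Toy "environmental feature": a tetrahedron whose size is set by one
# landmark-feature value v (an invented rule, purely for illustration).
v = 2.0
p = [(0, 0, 0), (v, 0, 0), (0, v, 0), (0, 0, v)]
faces = [(p[0], p[2], p[1]), (p[0], p[1], p[3]),
         (p[0], p[3], p[2]), (p[1], p[2], p[3])]
write_ascii_stl("feature.stl", faces)
```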
Regarding claim 2, which depends on claim 1: the rejection of claim 1 is incorporated. Caldwell teaches the limitation "the set of environmental features affects performance of a task by a machine learning perception system" (Column 7, lines 52-58, where Caldwell discloses "a perception system of an autonomous controller 108 and prior to incorporation into a vehicle, a software engineer may input a command for the simulation computing system 106 to test an instantiation of the updated autonomous controller 108 to verify a level of performance with respect to identifying and avoiding static obstacles". Caldwell discloses a perception system incorporating machine learning that is used to identify environmental features, referred to as static obstacles.)

Regarding claim 4, which depends on claim 2: the rejection of claim 2 is incorporated. Caldwell teaches the limitation of claim 4, "task performed is selected from a group consisting of detection, classification, tracking, segmentation, textual analysis, and anomaly detection" (Column 22, lines 36-38, where Caldwell discloses "the perception component 422 may include functionality to perform object detection, segmentation, and/or classification ...". Caldwell discloses a perception component of the system that performs detection, classification, and segmentation. The component further performs tracking by detecting characteristics associated with the object, such as position (e.g., x-position, y-position); textual analysis by detecting and reading characteristics such as acceleration and velocity; and anomaly detection by detecting characteristics of environments (e.g., a time of day, a season, a weather condition, an indication of darkness/light, etc.).)

Regarding claim 5, which depends on claim 1: the rejection of claim 1 is incorporated. Caldwell teaches the limitation of claim 5, "the one or more processors further performs an operation of training one or more generative models" (Column 3, lines 50-53, where Caldwell discloses "In various examples, such input may not be provided via user input, but another machine learned model (examples including adversarial networks, such as generative adversarial networks, or GANs)". Caldwell discloses the computing system receiving training input from another kind of model, such as a generative adversarial network (GAN), which is an implication of a generative model.)

Regarding claim 8, the applicant is directed to the rejections of claim 1 set forth above; because claim 8 recites limitations similar to those of claim 1, it is rejected based on the same rationale.

Regarding claim 9, which depends on claim 8: the rejection of claim 8 is incorporated. Applicant is directed to the rejections of claim 2 set forth above; because claim 9 recites limitations similar to those of claim 2, it is rejected based on the same rationale.

Regarding claim 11, which depends on claim 8: the rejection of claim 8 is incorporated. Applicant is directed to the rejections of claim 5 set forth above; because claim 11 recites limitations similar to those of claim 5, it is rejected based on the same rationale.

Regarding claim 12, which depends on claim 9: the rejection of claim 9 is incorporated. Applicant is directed to the rejections of claim 4 set forth above; because claim 12 recites limitations similar to those of claim 4, it is rejected based on the same rationale.
Regarding claim 14, the applicant is directed to the rejections of claim 1 set forth above; claim 14 is rejected based on the same rationale.

Regarding claim 15, which depends on claim 14: the rejection of claim 14 is incorporated. Applicant is directed to the rejections of claim 2 set forth above; because claim 15 recites limitations similar to those of claim 2, it is rejected based on the same rationale.

Regarding claim 17, which depends on claim 14: the rejection of claim 14 is incorporated. Applicant is directed to the rejections of claim 5 set forth above; because claim 17 recites limitations similar to those of claim 5, it is rejected based on the same rationale.

Regarding claim 18, which depends on claim 15: the rejection of claim 15 is incorporated. Applicant is directed to the rejections of claim 4 set forth above; because claim 18 recites limitations similar to those of claim 4, it is rejected based on the same rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY TU DIEP, whose telephone number is (703) 756-1738. The examiner can normally be reached M-F 8-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DUY T DIEP/
Examiner, Art Unit 2123

/ALEXEY SHMATOV/
Supervisory Patent Examiner, Art Unit 2123