Patent Application 18367438 - Serverless Computing for Portfolio Optimization - Rejection

From WikiPatents

Title: Serverless Computing for Portfolio Optimization Apparatuses, Processes and Systems

Application Information

  • Invention Title: Serverless Computing for Portfolio Optimization Apparatuses, Processes and Systems
  • Application Number: 18367438
  • Submission Date: 2025-05-14
  • Effective Filing Date: 2023-09-12
  • Filing Date: 2023-09-12
  • National Class: 705
  • National Sub-Class: 310000
  • Examiner Employee Number: 79504
  • Art Unit: 3629
  • Tech Center: 3600

Rejection Summary

  • 102 Rejections: 1
  • 103 Rejections: 0

Cited Patents

The following patents were cited in the rejection:

  • Sreenivasan (US 2022/0237700 A1)

Office Action Text


    DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-18 are currently pending in application 18/367,438.

Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. 

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.  The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked. 
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A)	the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; 
(B)	the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and 
(C)	the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. 
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f), is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. 
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f), is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. 
Claim limitations in this application that use the word “means” (or “step”) (Independent Claim 17) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.  Such claim limitation(s) is/are: “module(s)” in claims 1, 13-14, and 16-18.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may:  (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sreenivasan (US 2022/0237700 A1).
As per independent Claims 1 and 16-18, Sreenivasan discloses an optimization application configuring apparatus (system, method, programmed apparatus), comprising: at least one memory; a component collection stored in the at least one memory; at least one processor disposed in communication with the at least one memory, the at least one processor executing processor-executable instructions from the component collection, the component collection storage structured with processor-executable instructions (See at least Figs.122-123, Figs. 172-173; Para 0332-0339) comprising: obtain, via the at least one processor, an optimization application configuration request associated with an optimization application, in which the optimization application configuration request is structured as specifying a plurality of optimization modules to configure for the optimization application, in which an optimization module corresponds to an optimization configuration comprising a distinct combination of an optimizer and a solver; generate, via the at least one processor, a first optimization configuration datastructure for a first optimization module from the plurality of optimization modules, in which the first optimization configuration datastructure is structured as specifying a first cloud function for the first optimization module, a first API path for the first optimization module, and an identifier of an application load balancer to utilize for the optimization application, in which the application load balancer is structured as triggering execution of the first cloud function in response to a request specifying the first API path; generate, via the at least one processor, a second optimization configuration datastructure for a second optimization module from the plurality of optimization modules, in which the second optimization configuration datastructure is structured as specifying a second cloud function for the second optimization module, a second API path for the second
optimization module, and the identifier of the application load balancer to utilize for the optimization application, in which the application load balancer is structured as triggering execution of the second cloud function in response to a request specifying the second API path; and provide, via the at least one processor, the first optimization configuration datastructure and the second optimization configuration datastructure to a cloud configuration server, in which the cloud configuration server is structured as initializing the application load balancer in accordance with the provided optimization configuration data structures (See at least Fig. 172; Para 0325, “FIG. 172 illustrates one embodiment of a cloud-based system for the AI investment platform. In one embodiment, the AI investment platform is application programming interface (API) compatible with cloud infrastructures including, but not limited to, AMAZON WEB SERVICES (AWS), MICROSOFT AZURE, and/or GOOGLE CLOUD PLATFORM. The cloud-based system includes at least one server computer, at least one user device, at least one cloud (e.g., private cloud, public cloud), at least one container cluster (e.g., KUBERNETES), at least one application, at least one database, at least one network load balancer, at least one application load balancer, third party applications and/or data providers (e.g., POLYGON.IO, PLAID), at least one workflow manager (e.g., APACHE AIRFLOW), and/or at least one virtual container packager (e.g., DOCKER).”; Para 0337, “…In one embodiment, the cloud-based server platform hosts serverless functions for distributed computing devices 820, 830, and 840.”; and Para 0338; See also Figs.122-123, Figs. 148-150, Fig. 173; Para 0234-0236, Para 0249-0250, Para 0254-0278, Para 0293-0294, and Para 0326).  
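For readers mapping the claim language to the disclosed cloud architecture, the per-module "optimization configuration datastructure" recited in Claim 1 can be sketched roughly as follows. This is a minimal illustrative sketch only; the field names and values are hypothetical and are not taken from the application or the Sreenivasan reference:

```python
# Hypothetical sketch of the per-module configuration datastructure of
# Claim 1: each optimization module maps to its own cloud function and
# API path, while every module names the same application load balancer,
# which routes incoming requests to the matching cloud function by path.

def make_optimization_config(module_name, cloud_function, api_path, load_balancer_id):
    """Build one optimization configuration datastructure (illustrative only)."""
    return {
        "module": module_name,
        "cloud_function": cloud_function,   # serverless function to execute
        "api_path": api_path,               # request path that triggers the function
        "load_balancer": load_balancer_id,  # shared across all modules
    }

first_config = make_optimization_config(
    "mean_variance", "fn-mean-variance", "/optimize/mean-variance", "alb-001")
second_config = make_optimization_config(
    "risk_parity", "fn-risk-parity", "/optimize/risk-parity", "alb-001")

# Both datastructures identify the same application load balancer, which
# triggers different cloud functions depending on the requested API path.
assert first_config["load_balancer"] == second_config["load_balancer"]
```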
As per Claim 2, Sreenivasan discloses the apparatus of claim 1, in which the component collection storage is further structured with processor-executable instructions, comprising: provide, via the at least one processor, a first deployment package associated with the first cloud function to the cloud configuration server; and provide, via the at least one processor, a second deployment package associated with the second cloud function to the cloud configuration server (See at least Figs. 172-173; Para 0325-0327).
As per Claim 3, Sreenivasan discloses the apparatus of claim 1, in which the first optimization configuration datastructure is structured as specifying a first cloud function dependency, and in which the second optimization configuration datastructure is structured as specifying a second cloud function dependency (See at least Figs. 172-173; Para 0325-0327).  
As per Claim 4, Sreenivasan discloses the apparatus of claim 3, in which the component collection storage is further structured with processor-executable instructions, comprising: provide, via the at least one processor, a first dependency deployment package associated with the first cloud function dependency to the cloud configuration server; and provide, via the at least one processor, a second dependency deployment package associated with the second cloud function dependency to the cloud configuration server (See at least Figs. 172-173; Para 0325-0327).  
As per Claim 5, Sreenivasan discloses the apparatus of claim 4, in which the first dependency deployment package and the second dependency deployment package share a common code base (See at least Figs. 172-173; Para 0325-0327).  
As per Claim 6, Sreenivasan discloses the apparatus of claim 1, in which the optimization application configuration request is structured as specifying cached data repository settings for the optimization application (See at least Figs. 172-173; Para 0325-0327). 
As per Claim 7, Sreenivasan discloses the apparatus of claim 6, in which the cached data repository settings are structured to specify an IP address and a port of a cached data repository, in which the cached data repository is structured as storing data retrieved from a set of source data repositories and transformed into a cached data format utilized by the optimization application  (See at least Figs. 172-173; Para 0325-0327). 
As per Claim 8, Sreenivasan discloses the apparatus of claim 6, in which the first optimization configuration datastructure is structured as specifying the cached data repository settings, and in which the second optimization configuration datastructure is structured as specifying the cached data repository settings (See at least Figs. 172-173; Para 0325-0327).  
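The "cached data repository settings" recited in Claims 6-8 can likewise be sketched in code. In this hypothetical sketch (the address, port, and format values are invented for illustration, not drawn from the application or the reference), the same settings object is referenced from each module's configuration datastructure, as Claim 8 recites:

```python
# Hypothetical sketch of Claims 6-8: cached data repository settings
# specify the IP address and port of a cache holding source data that
# has been transformed into the format the optimization application uses.

cached_repo_settings = {
    "ip_address": "10.0.0.5",  # hypothetical cache host
    "port": 6379,              # hypothetical cache port
    "format": "columnar",      # cached data format used by the application
}

# Per Claim 8, both optimization configuration datastructures carry the
# same cached data repository settings.
first_config = {"api_path": "/optimize/mean-variance", "cache": cached_repo_settings}
second_config = {"api_path": "/optimize/risk-parity", "cache": cached_repo_settings}
assert first_config["cache"] is second_config["cache"]
```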
As per Claim 9, Sreenivasan discloses the apparatus of claim 1, in which the first optimization configuration datastructure is structured as specifying a first number of concurrent cloud function instances for the first cloud function, and in which the second optimization configuration datastructure is structured as specifying a second number of concurrent cloud function instances for the second cloud function (See at least Figs. 172-173; Para 0325-0327).  
As per Claim 10, Sreenivasan discloses the apparatus of claim 9, in which the first number of concurrent cloud function instances and the second number of concurrent cloud function instances are identical (See at least Figs. 172-173; Para 0325-0327).  
As per Claim 11, Sreenivasan discloses the apparatus of claim 1, in which the first optimization configuration datastructure is structured as specifying first runtime environment settings, and in which the second optimization configuration datastructure is structured as specifying second runtime environment settings (See at least Figs. 172-173; Para 0325-0327).
As per Claim 12, Sreenivasan discloses the apparatus of claim 1, in which the application load balancer is structured as triggering execution of the first cloud function in response to the request specifying the first API path on an instance of the first cloud function that depends on a requester's region (See at least Figs. 172-173; Para 0325-0327).  
As per Claim 13, Sreenivasan discloses the apparatus of claim 1, in which the component collection storage is further structured with processor-executable instructions, comprising: generate, via the at least one processor, a third optimization configuration datastructure for a third optimization module from the plurality of optimization modules, in which the third optimization configuration datastructure is structured as specifying a third cloud function for the third optimization module, a third API path for the third optimization module, and the identifier of the application load balancer to utilize for the optimization application, in which the application load balancer is structured as triggering execution of the third cloud function in response to a request specifying the third API path, in which the first optimization module and the third optimization module utilize an identical optimizer; and provide, via the at least one processor, the third optimization configuration datastructure to the cloud configuration server (See at least Figs. 172-173; Para 0325-0327).  
As per Claim 14, Sreenivasan discloses the apparatus of claim 13, in which the component collection storage is further structured with processor-executable instructions, comprising: generate, via the at least one processor, a fourth optimization configuration datastructure for a fourth optimization module from the plurality of optimization modules, in which the fourth optimization configuration datastructure is structured as specifying a fourth cloud function for the fourth optimization module, a fourth API path for the fourth optimization module, and the identifier of the application load balancer to utilize for the optimization application, in which the application load balancer is structured as triggering execution of the fourth cloud function in response to a request specifying the fourth API path, in which the fourth optimization module and the second optimization module utilize an identical solver; and provide, via the at least one processor, the fourth optimization configuration datastructure to the cloud configuration server (See at least Figs. 172-173; Para 0325-0327).  
As per Claim 15, Sreenivasan discloses the apparatus of claim 1, in which the optimization application is a portfolio optimizer structured as utilizing a set of security identifiers as an input (See at least Figs.122-123, Figs. 148-150, Figs. 172-173, Para 0234-0236, Para 0249-0250, Para 0254-0278, Para 0293-0294, and Para 0325-0337).  

Conclusion
The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure can be found in the PTO-892 Notice of References Cited. The Examiner suggests the applicant review all of these documents before submitting any amendments.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN P OUELLETTE whose telephone number is (571)272-6807. The examiner can normally be reached on M-F 8am-6pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda C Jasmin, can be reached at telephone number (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

May 8, 2025
/JONATHAN P OUELLETTE/Primary Examiner, Art Unit 3629                                                                                                                                                                                                        
