
Patent Application 18110816 - SYSTEMS AND METHODS FOR USING MACHINE LEARNING - Rejection

From WikiPatents


Title: SYSTEMS AND METHODS FOR USING MACHINE LEARNING ALGORITHMS TO FORECAST PROMOTIONAL DEMAND OF PRODUCTS

Application Information

  • Invention Title: SYSTEMS AND METHODS FOR USING MACHINE LEARNING ALGORITHMS TO FORECAST PROMOTIONAL DEMAND OF PRODUCTS
  • Application Number: 18110816
  • Submission Date: 2025-05-14
  • Effective Filing Date: 2023-02-16
  • Filing Date: 2023-02-16
  • National Class: 705
  • National Sub-Class: 007310
  • Examiner Employee Number: 85092
  • Art Unit: 3625
  • Tech Center: 3600

Rejection Summary

  • 102 Rejections: 1
  • 103 Rejections: 3

Cited Patents

The following patents were cited in the rejection:

  • US 2021/0334845 (Keng et al.)

Office Action Text


    Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.


DETAILED ACTION
This non-final Office action is in response to applicant’s communication received on February 16, 2023, wherein claims 1-20 are currently pending.


Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.


Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Regarding Step 1 of the subject matter eligibility test (MPEP 2106.03), claims 1-15 are directed to a method (i.e., a process), claims 16-19 are directed to a system (i.e., a machine), and claim 20 is directed to a non-transitory computer-readable medium (i.e., a product or article of manufacture). Accordingly, all claims are directed to one of the four statutory categories of invention.
(Under Step 2) The claimed invention is directed to an abstract idea without significantly more. 
(Under Step 2A, Prong 1 (MPEP 2106.04)) The independent claims (1, 16, 20) and dependent claims (2-15, 17-19) recite obtaining/receiving known types of abstract information/data (e.g., information in a sales/retail setting – information regarding marketing, product, sales, promotions, product amounts, etc.), data analysis and manipulation (including using mathematical models/concepts/algorithms/etc.) to determine more abstract information/data (forecasting/predicting in an environment of product promotions, sales, marketing, product segments, etc.), and providing/displaying this determined data for further decision-making and/or manipulation (e.g., determining an amount of a product for a facility and storefront product management). The claimed invention further uses mathematical steps to analyze and determine further data.
The limitations of the independent claims (1, 16, 20) and dependent claims (2-15, 17-19), under the broadest reasonable interpretation, cover methods of organizing human activities (commercial interactions (in advertising, marketing, and sales activities or behaviors, and including business relations)) and mathematical concepts (using mathematical models/concepts/algorithms/etc.). If a claim limitation, under its broadest reasonable interpretation, covers the performance of the limitation as fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); or managing personal behavior or relationships or interactions between people (including scheduling, social activities, teaching, and following rules or instructions), then it falls within the “organizing human activities” grouping of abstract ideas. (MPEP 2106.04; see also 2019 Revised Patent Subject Matter Eligibility Guidance, Federal Register, Vol. 84, No. 4, January 7, 2019, pages 50-57.) If a claim limitation, under its broadest reasonable interpretation, covers the performance of the limitation as mathematical relationships, mathematical formulas or equations, or mathematical calculations, then it falls within the mathematical concepts grouping of abstract ideas. (MPEP 2106.04; see also 2019 Revised Patent Subject Matter Eligibility Guidance, Federal Register, Vol. 84, No. 4, January 7, 2019, pages 50-57.)
Accordingly, since Applicant's claims fall under the organizing human activities grouping and the mathematical concepts grouping, the claims recite an abstract idea.
(Under Step 2A, Prong 2 (MPEP 2106.04(d))) This judicial exception is not integrated into a practical application because, but for the recitation of, for example, machine learning - artificial intelligence (ML - AI) (recited only as a model where the specification shows it is run on a generic/general-purpose computer or computing system/network/device/etc.), computing system, processors, memory, etc. (in independent claim 1 and its dependent claims 2-15); computing system, processors, non-transitory computer-readable medium having processor-executable instructions, ML - AI, etc. (in independent claim 16 and its dependent claims 17-19); and non-transitory computer-readable medium, processors, instructions (software), ML - AI, etc. (in independent claim 20), in the context of the claims, the claims encompass the above-stated abstract idea (organizing human activities (commercial interactions in advertising, marketing, and sales activities or behaviors, and including business relations) and mathematical concepts (using mathematical models/concepts/algorithms/etc.)). As shown above, the claims and specification recite generic/general-purpose computers and computing components/elements/devices/etc., which are recited at a high level of generality performing generic/general-purpose computer and computing functions. (MPEP 2106.04; see also 2019 Revised Patent Subject Matter Eligibility Guidance, Federal Register, Vol. 84, No. 4, January 7, 2019, pages 53-55.)
It should also be noted that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101. See Recentive Analytics, Inc. v. Fox Corp., CAFC Case number 23-2437 (Fed. Cir. 2025).  The generic/general-purpose computers and computing elements/terms/limitations are no more than mere instructions to apply the judicial exception (the above abstract idea) in an apply-it fashion using generic/general-purpose computers, processors, and/or computer components/elements/ devices, etc.  The CAFC has stated that it is not enough, however, to merely improve abstract processes by invoking a computer merely as a tool. Customedia Techs., LLC v. Dish Network Corp., 951 F.3d 1359, 1364 (Fed. Cir. 2020).  The focus of the claims is simply to use computers and a familiar network as a tool to perform abstract processes involving simple information exchange. Carrying out abstract processes involving information exchange is an abstract idea. See, e.g., BSG, 899 F.3d at 1286; SAP America, 898 F.3d at 1167-68; Affinity Labs of Tex., LLC v. DIRECTV, LLC, 838 F.3d 1253, 1261-62 (Fed. Cir. 2016). And use of standard computers and networks to carry out those functions—more speedily, more efficiently, more reliably—does not make the claims any less directed to that abstract idea. See Alice Corp., 573 U.S. at 222-25; Customedia, 951 F.3d at 1364; Trading Techs. Int'l, Inc. v. IBG LLC, 921 F.3d 1084, 1092-93 (Fed. Cir. 2019); SAP America, 898 F.3d at 1167; Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1314 (Fed. Cir. 2016); Electric Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353, 1355 (Fed. Cir. 2016); Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 1370 (Fed. Cir. 2015); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014).  
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea; i.e., they are just post-solution/extra-solution activities.
(Under Step 2B (MPEP 2106.05)) The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims do not recite an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment. The claims recite using known and/or generic computing devices and software (for example, machine learning - artificial intelligence (ML - AI) (recited only as a model where the specification shows it is run on a generic/general-purpose computer or computing system/network/device/etc.), computing system, processors, memory, etc. (in independent claim 1 and its dependent claims 2-15); computing system, processors, non-transitory computer-readable medium having processor-executable instructions, ML - AI, etc. (in independent claim 16 and its dependent claims 17-19); and non-transitory computer-readable medium, processors, instructions (software), ML - AI, etc. (in independent claim 20)). For the role of a computer in a computer-implemented invention to be deemed meaningful in the context of this analysis, it must involve more than performance of "well-understood, routine, [and] conventional activities previously known to the industry." Alice Corp. v. CLS Bank Int'l, 110 USPQ2d 1976 (U.S. 2014), at 2359 (quoting Mayo, 132 S. Ct. at 1294 (internal quotation marks and brackets omitted)).
These activities as claimed by the Applicant are all well-known and routine tasks in the field of art, as can be seen in the specification of Applicant’s application (for example, see Applicant’s specification at Fig. 2 and ¶¶ 0051, 0066-0070, and 0122 [general-purpose/generic computers/processors and generic/general-purpose computing components/devices]) and/or the specification of the below-cited art (used in the rejection below and on the PTO-892) and/or as noted in the court cases in MPEP §2106.05. Further, "the mere recitation of a generic computer cannot transform a patent ineligible abstract idea into a patent-eligible invention." Alice, at 2358. None of the hardware offers a meaningful limitation beyond generally linking the system to a particular technological environment, that is, implementation via computers. Adding generic computer components to perform generic functions that are well-understood, routine, and conventional, such as gathering data, performing calculations, and outputting a result, would not transform the claim into eligible subject matter. Abstract ideas are excluded from patent eligibility based on a concern that monopolization of the basic tools of scientific and technological work might impede innovation more than it would promote it. The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims require no more than a generic computer to perform generic computer functions.
The additional element(s) or combination of elements in the claim(s), other than the abstract idea per se, amount(s) to no more than: (i) mere instructions to implement the idea on a computer, and/or (ii) recitation of generic computer structure that serves to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. Applicant is directed to the following citations and references: (1) Digitech Image Techs., LLC v. Electronics for Imaging, Inc. (concerning U.S. Patent No. 6,128,415); and (2) Federal Register, Vol. 79, No. 241, December 16, 2014, page 74629, column 2 (discussing Gottschalk v. Benson). Viewed as a whole, the claims do not purport to improve the functioning of the computer itself, or to improve any other technology or technical field. Use of an unspecified, generic computer does not transform an abstract idea into a patent-eligible invention. Thus, the claims do not amount to significantly more than the abstract idea itself. See Alice Corp. v. CLS Bank Int'l, 110 USPQ2d 1976 (U.S. 2014).

The dependent claims (2-15, 17-19) further define the independent claims and merely narrow the described abstract idea without adding significantly more than the abstract idea. The above rejection includes and details the discussion of the dependent claims (2-15, 17-19), and the above rejection applies to all the dependent claim limitations. In summary, the dependent claims further recite using obtained data/information (where the information itself is abstract in nature), data analysis/manipulation to determine more data/information (including using mathematical concepts/algorithms/models/etc.), possibly obtaining more abstract information/data, and providing this determined data/information for further analysis and decision making (in marketing, sales, product amounts, promotions, etc.). The claimed invention further uses mathematical steps to analyze and determine further data (the stated models/algorithms and analysis are mathematical in nature). The dependent claims are directed towards organizing human activities (commercial interactions (in advertising, marketing, and sales activities or behaviors, and including business relations)) and mathematical concepts (using mathematical models/concepts/algorithms/etc.).
This judicial exception is not integrated into a practical application because the claims and specification recite generic/general-purpose computers and computing components/elements/etc., performing generic computer functions (for example, machine learning - artificial intelligence (ML - AI) (recited only as a model where the specification shows it is run on a generic/general-purpose computer or computing system/network/device/etc.), computing system, processors, memory, etc. (in independent claim 1 and its dependent claims 2-15); computing system, processors, non-transitory computer-readable medium having processor-executable instructions, ML - AI, etc. (in independent claim 16 and its dependent claims 17-19); and non-transitory computer-readable medium, processors, instructions (software), ML - AI, etc. (in independent claim 20)). (MPEP 2106.04; see also 2019 Revised Patent Subject Matter Eligibility Guidance, Federal Register, Vol. 84, No. 4, January 7, 2019, pages 53-55.) The dependent claims also merely recite post-solution/extra-solution activities (with generic/general-purpose computers and/or computing components/devices/etc.). These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea; i.e., they are just post-solution/extra-solution activities. The dependent claims merely use the same general technological environment and instructions to implement the abstract idea without adding any new additional elements.
Also, the dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, either individually or in combination, are merely an extension of the abstract idea itself. See the detailed discussion above.


Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.


Claims 1-6, 11, and 14-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Keng et al. (US 2021/0334845).
As per claim 1, Keng discloses a method, comprising: 
obtaining, by a promotional forecasting computing system (PFCS), a new marketing promotion for a particular product (¶¶ 0039-0040 [system…analytics…promotion…system provide technological approach…promotion…marketing…product], 0046-0049 [forecast…predict…promotion analytics…machine learning module…machine learning model…predict…optimize the forecast], 0056-0057; see also 0154, 0181); 
determining, by the PFCS and based on the particular product, a first product segment from a plurality of product segments, wherein each of the plurality of product segments comprises a set of products that are clustered together (¶¶ 0015 [category of products], 0040 [product group…subcategory of products], 0020, 0050, 0158-0159 [which product…selection of products]; see also 0090, 0114, 0147-0150); 
determining, by the PFCS, one or more promotional forecasting machine learning - artificial intelligence (ML - AI) models from a plurality of promotional forecasting ML - AI models to use for the new marketing promotion based on the first product segment, wherein each of the plurality of promotional forecasting ML - AI models is associated with a product segment from the plurality of product segments (¶¶ 0014-0019 [shows many models and the usability of different models for different purposes and needs and even using multiple combinations of models depending on results desired – so selecting model needed from multiple models], 0005-0007 [also shows various models used for predicting based on category of products and also predicting demand – see also with 0014], 0135 [model (type)…product…sub-category]; also see 0086 [forecasting model using one or more of models], 0130-0135 [various models and using a model from the models], 0169 [machine learning model (a specific type)…promotional…predicting selling quantities of…products], 0175 [machine learning model…products]; see also claim 19 of Keng); 
inputting, by the PFCS, promotional information associated with the new marketing promotion into the one or more determined promotional forecasting ML - AI models to forecast an amount of the particular product to provide to one or more storefronts (¶¶ 0004-0005 [input parameters/information…forecasting…machine learning model], 0020 [receiving one or more input parameters…using machine learning models (also shows information used/inputted regarding promotions)], 0049, 0056 [machine learning module use machine learning techniques with the machine learning model (promotion forecasting model) to forecast output analytics – see with 0053 [inventory forecasting (such as at a warehouse or store level)]], 0138-0139+ [machine learning…forecast model…forecast…ordering inventory…supply chain…per-store unit demand forecast…a machine learning model, as described herein, to forecast a ratio for total store level unit demand…the machine learning module can determine a per-store SKU (stock-keeping) forecast], 0141 [per-store unit demand forecast model – keeps track of store stock (for ordering and stocking the store – see above 0138-0139)]; also see 0014-0019, 0022, 0056-0060+); and 
providing, by the PFCS, product information indicating the amount of the particular product to one or more facility computing systems associated with the one or more storefronts (see citations above and in addition also see figs. 2 [computing system], 6-7 [system which shows system and output interface]; ¶¶ 0014 [outputting…analytics to the user], 0035, 0043 [interface…outputs information to output devices – see with 0042 [point-of-sale (PoS) device (which is a well-known device part of the facility computing system associated with a store-front)] and 0053-0054 [output analytics…promotion…output module]], 0144 [ promotion forecast…output to a user via output interface – see fig. 4 forecast volume provided]; see also 0148-0149, 0154 [interaction with digitized point-of-sale machines…PoS terminals]).
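For orientation only, and not as a characterization of Applicant's actual system or of Keng's disclosure, the flow recited in claim 1 (determine the product's segment, select the segment-associated forecasting model, input the promotion to forecast an amount for storefronts) can be sketched in a few lines. Every name and number below is hypothetical, and the toy multiplier stands in for a trained ML model:

```python
# Hypothetical sketch of the claim 1 steps; not Applicant's or Keng's code.
# Toy mapping of each product to its product segment (clustered offline).
SEGMENTS = {"soda-12pk": "beverages", "chips-xl": "snacks"}

# One toy "forecasting model" per segment; a real system would use a
# trained ML model here rather than a fixed multiplier.
MODELS = {
    "beverages": lambda promo: round(500 * promo["discount"]),
    "snacks":    lambda promo: round(300 * promo["discount"]),
}

def forecast_amount(product_id, promotion):
    """Determine the segment, pick its model, and forecast an amount."""
    segment = SEGMENTS[product_id]   # first product segment for the product
    model = MODELS[segment]          # model associated with that segment
    return model(promotion)          # amount to provide to storefronts

amount = forecast_amount("soda-12pk", {"discount": 0.20})
```

In a production system the registry would hold trained, persisted models rather than lambdas, and the forecast would feed the facility computing systems recited in the final limitation.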

As per claim 16, claim 16 discloses substantially similar limitations as claim 1 above; and therefore claim 16 is rejected under the same rationale and reasoning as presented above for claim 1.
As per claim 20, claim 20 discloses substantially similar limitations as claim 1 above; and therefore claim 20 is rejected under the same rationale and reasoning as presented above for claim 1.

As per claim 2, Keng discloses the method of claim 1, wherein each of the plurality of promotional forecasting ML - AI models is further associated with a storefront from a plurality of storefronts associated with an enterprise organization, and wherein selecting the one or more promotional forecasting ML - AI model is based on: comparing the first product segment with the plurality of product segments; and comparing the one or more storefronts with the plurality of storefronts (see citations of claim 1 above which discloses some of the limitations and in addition see ¶¶ 0052 [which stores receive the promotion, the quantity of stores that receive the promotion, which areas of the country receive the promotion], 0086 [forecasting model…various factors…differences in the stores (comparing stores)], 0123-0124 [various models and factors such as stores and number of stores and categories/subcategory of products], 0135).
As per claim 17, claim 17 discloses substantially similar limitations as claim 2 above; and therefore claim 17 is rejected under the same rationale and reasoning as presented above for claim 2.
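Dependent claim 2 additionally keys model selection to both a product segment and a storefront. A hypothetical sketch of such a (segment, storefront) registry, with all names and values invented for illustration:

```python
# Illustrative only: models registered per (segment, storefront) pair,
# selected by comparing the product's segment and the target stores
# against the registered keys.
MODELS = {
    ("beverages", "store-1"): "model-A",
    ("beverages", "store-2"): "model-B",
    ("snacks", "store-1"):    "model-C",
}

def select_models(segment, storefronts):
    """Return the registered model for each requested storefront."""
    return {store: MODELS[(segment, store)]
            for store in storefronts
            if (segment, store) in MODELS}

chosen = select_models("beverages", ["store-1", "store-2"])
```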

As per claim 3, Keng discloses the method of claim 1, wherein inputting the promotional information associated with the new marketing promotion into the one or more selected promotional forecasting ML - AI models comprises: inputting the promotional information into a first selected promotional forecasting ML - AI model associated with a first storefront, from the one or more storefronts, to forecast a first amount of the particular product to provide to the first storefront (¶¶ 0004-0005 [input parameters/information…forecasting…machine learning model], 0020 [receiving one or more input parameters…using machine learning models (also shows information used/inputted regarding promotions)], 0049, 0056 [machine learning module use machine learning techniques with the machine learning model (promotion forecasting model) to forecast output analytics – see with 0053 [inventory forecasting (such as at a warehouse or store level)]], 0138-0139+ [machine learning…forecast model…forecast…ordering inventory…supply chain…per-store unit demand forecast…a machine learning model, as described herein, to forecast a ratio for total store level unit demand…the machine learning module can determine a per-store SKU (stock-keeping) forecast], 0141 [per-store unit demand forecast model – keeps track of store stock (for ordering and stocking the store – see above 0138-0139)]; also see 0014-0019, 0022, 0056-0060+); and
inputting the promotional information into a second selected promotional forecasting ML - AI model associated with a second storefront, from the one or more storefronts, to forecast a second amount of the particular product to provide to the second storefront, wherein the second amount is different from the first amount (this is the same process as the previous limitation but applied to another store, and Keng does disclose multiple stores – the citations for claims 1 and 2 in combination show this; also see, for example, ¶¶ 0004-0005 [input parameters/information…forecasting…machine learning model], 0020 [receiving one or more input parameters…using machine learning models (also shows information used/inputted regarding promotions)], 0049, 0056 [machine learning module use machine learning techniques with the machine learning model (promotion forecasting model) to forecast output analytics – see with 0053 [inventory forecasting (such as at a warehouse or store level)]], 0138-0139+ [machine learning…forecast model…forecast…ordering inventory…supply chain…per-store unit demand forecast…a machine learning model, as described herein, to forecast a ratio for total store level unit demand…the machine learning module can determine a per-store SKU (stock-keeping) forecast], 0141 [per-store unit demand forecast model – keeps track of store stock (for ordering and stocking the store – see above 0138-0139)], 0052 [which stores receive the promotion, the quantity of stores that receive the promotion, which areas of the country receive the promotion], 0086 [forecasting model…various factors…differences in the stores (comparing stores)], 0123-0124 [various models and factors such as stores and number of stores and categories/subcategory of products], 0135; also see 0014-0019, 0022, 0056-0060+).
As per claim 18, claim 18 discloses substantially similar limitations as claim 3 above; and therefore claim 18 is rejected under the same rationale and reasoning as presented above for claim 3.

As per claim 4, Keng discloses the method of claim 1, further comprising: obtaining historical data for a plurality of products; standardizing the historical data using a plurality of standardization processors to generate standardized historical data; and training the plurality of promotional forecasting ML - AI models using the standardized historical data (¶¶ 0014 [historical data/information…machine learning model trained…optimization training…using a promotion forecasting machine learning model trained or instantiated with a forecasting training set – see with 0050 [showing historical data/information]], 0019-0020 [confidence module to determine a confidence indicator, the confidence indicator indicates the reliability of the forecast, the confidence module determines the confidence indicator by: determining if the forecast is in a predetermined scope; and determining, using an accuracy machine learning model trained or instantiated with an accuracy training set, the confidence indicator, the accuracy training set comprising previous forecasts and their respective actualized values (a type of standardization)…the selection training set comprising the historical data and the one or more input parameters, the selection comprising: assigning a prominence weight to each of the one or more products; normalizing the prominence weight for each of the one or more products (also a standardization process)], 0056 [training], 0039-0048 [analyzing historical data…machine learning module…use time series approaches that primarily use historical data as basis for analytically estimating future behavior…series approaches can include, for example, ARIMAX, AR, Moving Average, Exponential smoothing, or the like (standardization)…use regression based approaches that use a variety of factors (including past data points) to predict future outcomes with an implicit concept of time (through the data points)], 0135-0138 [data…train…mean and standard deviation…historical data (and other historical information used) – see also 0140-0153 [detailing training]]; see also 0007 [normalizing by the mean of the product's entire history], 0022, 0112-0118, 0153-0154, 0173).
As per claim 19, claim 19 discloses substantially similar limitations as claim 4 above; and therefore claim 19 is rejected under the same rationale and reasoning as presented above for claim 4.
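Claim 4's standardize-then-train step is, in its simplest form, a z-score normalization (subtract the mean, divide by the standard deviation, cf. the mean and standard deviation cited from Keng ¶¶ 0135-0138) followed by model fitting. A toy sketch, with the "training" reduced to a placeholder and all names invented:

```python
# Hypothetical sketch: standardize historical sales before training.
import statistics

def standardize(history):
    """Z-score a list of weekly unit sales."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard a constant series
    return [(x - mean) / stdev for x in history]

def train(standardized):
    """Toy stand-in for model fitting: remember the series' range."""
    return {"lo": min(standardized), "hi": max(standardized)}

model = train(standardize([10, 20, 30, 40]))
```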

As per claim 5, Keng discloses the method of claim 4, wherein obtaining the historical data for the plurality of products comprises: determining lagging information for the plurality of products based on the historical data, and wherein training the plurality of promotional forecasting ML - AI models is based on the lagging information (see citations above for claims 1-2 and 4 and in addition see, for example, ¶¶ 0022, 0153 [performs an analysis of historical data to find an optimized configuration…historical data includes…product-level results], 0175 [historical data…seasonality of the products…time of year…an example, sun screen advertised in the winter to residents of a northern country would not be optimal (lagging)…placing sun screen products in advertisements that are distributed in the spring and summer are more likely…optimal sales effects…score is developed based on the seasonality of…product for inclusion into the machine learning decision making, based…on historical sales of the product around the date of distribution (taking into account lagging/slow information and non-lagging information)], 0145-0148 [trends…insights…evaluate and measure past…past performance…negative effects (in sales) (lagging information) – see also Fig. 5]).
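The "lagging information" of claim 5 corresponds to the common practice of deriving lag features from a historical series: pairing each observation with the value some number of periods earlier for use as a training input. An illustrative sketch, with names and data invented:

```python
# Illustrative lag-feature construction; not the claimed implementation.
def lag_features(sales, lag):
    """Pair each observation with the value `lag` steps before it."""
    return [(sales[i], sales[i - lag]) for i in range(lag, len(sales))]

pairs = lag_features([5, 7, 9, 11], lag=2)
```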

As per claim 6, Keng discloses the method of claim 4, wherein standardizing the historical data comprises: determining a plurality of features for the plurality of products, wherein each of the plurality of features indicates an input that is used for training the plurality of promotional forecasting ML - AI models (¶¶ 0004, 0014 [historical data…products…promotions…input…parameters…machine learning…model…training set (optimization machine learning model, determine at least one determined parameter for the promotion which optimizes at least one of the received input parameters, the optimization training set comprising the received historical data)], 0020, 0047-0048, 0056; also see 0049-0052); and
populating one or more arrays based on the plurality of features, wherein the standardized historical data comprises the one or more arrays (note that arrays are data/information organized in table/spreadsheet format as described in Applicant’s specification (e.g., Applicant’s ¶ 0031 pointing to figs. 6A-B of Applicant’s specification); described in Keng at ¶¶ 0088 [data can be aggregated for each week…feature table can be indexed by SKU, week…with a real number for each of the resulting feature columns], 0092-0105 [describing the “array”/table with columns and information, etc.; e.g., column creation…encoding scheme…stacking promotions of different types on the same SKU, at the same time]).
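The arrays of claim 6 correspond to a feature table indexed by SKU and week (cf. Keng ¶ 0088). A hypothetical sketch of flattening per-(SKU, week) records into fixed-order feature rows; the column set and data are invented:

```python
# Illustrative only: build fixed-order feature rows ("arrays") from
# per-(sku, week) records, as a stand-in for the claimed feature table.
FEATURES = ["price", "discount", "units"]  # column order for every row

def to_rows(records):
    """records: {(sku, week): {feature: value}} -> list of row arrays."""
    rows = []
    for (sku, week), feats in sorted(records.items()):
        rows.append([sku, week] + [feats[f] for f in FEATURES])
    return rows

rows = to_rows({("A1", 2): {"price": 3.5, "discount": 0.1, "units": 40},
                ("A1", 1): {"price": 4.0, "discount": 0.0, "units": 25}})
```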

As per claim 11, Keng discloses the method of claim 4, wherein standardizing the historical data comprises: determining one or more lost sales entries within the historical data, wherein the one or more lost sales entries indicate the particular product being out of stock during a time period; generating new sales data for the one or more lost sales entries; and populating the one or more lost sales entries with the new sales data (¶¶ 0141 [percent number of days stock out…variation is that if a store stocks out of an item, the above equation will under stock the item, causing the store to stock out of the product…cause the store to then under or over order, possibly leading to a cycle of out of stock situations], 0127-0128 [impose a non-zero mean Bayesian Prior that can help fill in missing or sparse data… Bayesian Prior can be used to fill the missing coefficient…approach is made possible because of the normalization described above due to fitting the pooled SKUs model], 0139-0144 [4 week average unit sales per store (new sales), and use those proportions to multiply by the total-store forecast – see equations which deal with missing/lost data entries – provide more accurate forecasts using multiple factors (for example, past sales, trends, price, promo mechanics, and the like)…provide for a reduction in stockouts and excess inventory… evaluation can also include: promotion lift as a measure of incremental promotional lift of the promotion in comparison to a baseline… residual basket value as a measure of average basket size when this product is sold, minus the product – basket penetration as a measure of the proportion of transactions involving the product (current new sales evaluation)]).  
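The lost-sales imputation discussed above (e.g., the cited 4-week average) can be sketched as follows, assuming None marks an out-of-stock week; the exact averaging scheme is an assumption, not Keng's equation:

```python
# Minimal sketch: fill out-of-stock ("lost sales") entries with a
# trailing 4-week average of observed sales (all names hypothetical;
# None marks a stock-out week).
def fill_lost_sales(sales, window=4):
    filled = []
    for s in sales:
        if s is None:
            # Average the most recent observed (already-filled) weeks.
            observed = [x for x in filled[-window:] if x is not None]
            s = sum(observed) / len(observed) if observed else 0.0
        filled.append(s)
    return filled

print(fill_lost_sales([8, 10, 12, 10, None, 9]))  # [8, 10, 12, 10, 10.0, 9]
```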

As per claim 14, Keng discloses the method of claim 4, further comprising: determining whether to retrain the plurality of promotional forecasting ML-AI models; and storing the plurality of promotional forecasting ML-AI models in memory (see citations above for claims 1-4 and also see ¶¶ 0172-0175 [steady state of the machine learning module shifts over to a reinforcement learning…re-trained and re-scored…reinforcement learning and feedback approach…building blocks are re-trained and re-scored], 0177-0178 [re-train the models], 0038 [storage device(s)…memory…store…information], 0043 [stored data…database…includes machine learning module (see with 0048, 0170 [machine learning module…number of models])]; see also 0152-0160, 0168-0170).

As per claim 15, Keng discloses the method of claim 14, wherein determining whether to retrain the plurality of promotional forecasting ML-AI models comprises retraining the plurality of promotional forecasting ML-AI models periodically after a set amount of time has elapsed (see citations above for claims 1-4 and also see ¶¶ 0048 [time series approaches], 0172-0175 [steady state of the machine learning module shifts over to a reinforcement learning…re-trained and re-scored…reinforcement learning and feedback approach…building blocks are re-trained and re-scored…time of year (see with 0007, 0135, 0153, 0140-0143 [week average…training…time period])], 0108, 0177-0178 [re-train the models]; see also 0138-0153 [full scope with re-training described in 0172-0175], 0069-0086).


Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Keng et al. (US 2021/0334845) in view of Kobe (US 2021/0383286).
As per claim 7, Keng discloses the method of claim 4, wherein standardizing the historical data comprises: determining a plurality of sub-groups for the plurality of products using time series; and determining the plurality of product segments (see citations above for claims 1-4, which disclose these limitations; ¶¶ 0048 [the machine learning model can use time series approaches that primarily use historical data as basis for analytically estimating future behavior…time series approaches can include, for example, ARIMAX, AR, Moving Average, Exponential smoothing, or the like…the machine learning model can use regression based approaches that use a variety of factors (including past data points) to predict future outcomes with an implicit concept of time (through the data points)…regression based approaches can include, for example, linear regression, random forest, neural network, or the like], 0014-0019 [shows many models and the usability of different models for different purposes and needs, and even using multiple combinations of models depending on results desired – so selecting the model needed from multiple models], 0005-0007 [also shows various models used for predicting based on category of products and also predicting demand – see also with 0014], 0135 [model (type)…product…sub-category]; also see 0086 [forecasting model using one or more of the models], 0130-0135 [various models and using a model from the models], 0169 [machine learning model (a specific type)…promotional…predicting selling quantities of…products], 0175 [machine learning model…products]; see also claim 19 of Keng).
Keng does not explicitly state using a dynamic time warping (DTW) algorithm (although this is an algorithm used in time series approaches) and exceeding a maximum data size limit.
Analogous art Kobe discloses using a dynamic time warping (DTW) algorithm (although this is an algorithm used in time series approaches) and exceeding a maximum data size limit (Kobe ¶¶ 0020-0024 [demand prediction…product…inventory amount prediction…prediction model…models… includes, for example, a model for performing demand prediction by time-series analysis, such as an autoregressive model (AR model), a moving average model (MA model), an ARMA model (autoregressive moving average model), an ARIMA model (autoregressive integrated moving average model), and a SARIMA model (seasonal autoregressive integrated moving average model) (which overlaps with Keng) – see with Kobe at 0028 [analysis…time-series data…dynamic time warping (DTW)]], 0032 [DTW…data selection…minimum values…data is equal to or less than a certain number…maximum…more than a certain number], 0043, 0062 [min-max normalization]; see also 0028-0032, 0068-0069, 0082-0083).
Therefore, it would have been obvious to one of ordinary skill in the art to include in the system/method of Keng using a dynamic time warping (DTW) algorithm and exceeding a maximum data size limit, as taught by analogous art Kobe, in order to predict/forecast accurately, since doing so could be performed readily by any person of ordinary skill in the art with neither undue experimentation nor risk of unexpected results (KSR-G/TSM); and also since one of ordinary skill in the art at the time of the invention would have recognized that applying the known technique and concepts of Kobe would have yielded predictable results, because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such concepts and features into similar systems (KSR-D). (MPEP 2141; see also (1) 2007 Examination Guidelines for Determining Obviousness Under 35 U.S.C. 103 in View of the Supreme Court Decision in KSR International Co. v. Teleflex Inc., Federal Register, Vol. 72, No. 195, October 10, 2007, pages 57526-57535; (2) 2010 Examination Guidelines Update: Developments in the Obviousness Inquiry After KSR v. Teleflex, Federal Register, Vol. 75, No. 169, September 1, 2010, pages 53643-53660; and (3) the materials posted at https://www.uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidelines-training-materials-view-ksr).
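For reference, a generic dynamic time warping (DTW) distance, of the kind Kobe applies to time-series sales data, can be sketched as follows; this is a textbook formulation, not Kobe's specific implementation:

```python
# Minimal sketch of dynamic time warping (DTW) distance between two
# sales series, as used to group products with similar demand shapes
# (illustrative only; a textbook dynamic-programming formulation).
def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    # dp[i][j]: cost of the best alignment of a[:i] with b[:j].
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0 (same shape, time-warped)
```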

As per claim 8, Keng discloses the method of claim 7, and discloses determining the plurality of product segments is further based on algorithms and metrics affecting each of the plurality of sub-groups (see citations above for claims 1-4, which disclose these limitations; ¶¶ 0048 [the machine learning model can use time series approaches that primarily use historical data as basis for analytically estimating future behavior…time series approaches can include, for example, ARIMAX, AR, Moving Average, Exponential smoothing, or the like…the machine learning model can use regression based approaches that use a variety of factors (including past data points) to predict future outcomes with an implicit concept of time (through the data points)…regression based approaches can include, for example, linear regression, random forest, neural network, or the like], 0014-0019 [shows many models and the usability of different models for different purposes and needs, and even using multiple combinations of models depending on results desired – so selecting the model needed from multiple models], 0005-0007 [also shows various models used for predicting based on category of products and also predicting demand – see also with 0014], 0135 [model (type)…product…sub-category]; also see 0086 [forecasting model using one or more of the models], 0130-0135 [various models and using a model from the models], 0169 [machine learning model (a specific type)…promotional…predicting selling quantities of…products], 0175 [machine learning model…products]; see also claim 19 of Keng).
Keng does not explicitly state determining below a minimum data size limit.  
Analogous art Kobe discloses determining below a minimum data size limit (Kobe ¶¶ 0020-0024 [demand prediction…product…inventory amount prediction…prediction model…models… includes, for example, a model for performing demand prediction by time-series analysis, such as an autoregressive model (AR model), a moving average model (MA model), an ARMA model (autoregressive moving average model), an ARIMA model (autoregressive integrated moving average model), and a SARIMA model (seasonal autoregressive integrated moving average model) (which overlaps with Keng) – see with Kobe at 0028 [analysis…time-series data…dynamic time warping (DTW)]], 0032 [DTW…data selection…minimum values…data is equal to or less than a certain number…maximum…more than a certain number], 0043, 0062 [min-max normalization]; see also 0028-0032, 0068-0069, 0082-0083).
Therefore, it would have been obvious to one of ordinary skill in the art to include in the system/method of Keng determining below a minimum data size limit, as taught by analogous art Kobe, in order to predict/forecast accurately, since doing so could be performed readily by any person of ordinary skill in the art with neither undue experimentation nor risk of unexpected results (KSR-G/TSM); and also since one of ordinary skill in the art at the time of the invention would have recognized that applying the known technique and concepts of Kobe would have yielded predictable results, because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such concepts and features into similar systems (KSR-D). (MPEP 2141; see also (1) 2007 Examination Guidelines for Determining Obviousness Under 35 U.S.C. 103 in View of the Supreme Court Decision in KSR International Co. v. Teleflex Inc., Federal Register, Vol. 72, No. 195, October 10, 2007, pages 57526-57535; (2) 2010 Examination Guidelines Update: Developments in the Obviousness Inquiry After KSR v. Teleflex, Federal Register, Vol. 75, No. 169, September 1, 2010, pages 53643-53660; and (3) the materials posted at https://www.uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidelines-training-materials-view-ksr).

Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Keng et al. (US 2021/0334845) in view of Milton et al. (US 2016/0247175).
As per claim 9, Keng discloses the method of claim 4, wherein standardizing the historical data comprises: obtaining one or more indications indicating one or more currently/presently considered products/SKUs or one or more currently/presently considered store fronts that do not have historical data; and generating currently/presently considered product/SKU data for the one or more currently/presently considered products or currently/presently considered storefront data for the one or more currently/presently considered storefronts, and wherein generating the standardized historical data is based on the currently/presently considered product data or the storefront data (¶¶ 0046 [product…referred to by its…SKU – also see 0052 [SKU age]], 0118-0122 [SKU…not have…breadth of…history…pool data from multiple similar SKUs…effects can be estimated for a target SKU, even though that SKU may not have had those effects directly observed for it (no historical data available, so using information from similar SKUs)…complications for pooling data can be overcome by normalizing the units and average price within a subcategory or brand such that the data may be reliably pooled together…for each SKU, normalizing absolute units by a mean for non-promotion periods, and if such mean is not available, then normalizing by the mean of the SKU's entire history…the raw normalized model output data is multiplied by this scaling factor to get the prediction in terms of units – (0123-0124 give details on concepts (e.g. using a Random forest model (which is used in forecasting/predicting cases where information is missing or unavailable)) used to generate the standardized historical data based on SKUs that lack initial historical information)]; also see 0129-0130).
Keng does not state "new" as in new storefronts or new products (however, Applicant does not provide a definition of "new," and Keng does disclose various stores and products that are presently (currently) considered, where a "new" store could simply be a currently considered store or product different from the one initially/previously considered but still within the "one or more stores" disclosed in Keng).
Analogous art Milton discloses the term “new” data and new stores and other new information relating to products and stores in the machine learning models for predicting in a retail setting (for example, see ¶¶ 0006, 0051, 0062 [model optimization by re-learning these parameters in the face of new data], 0099-0101, 0114 [machine learning models…new vector to a probability function – see with 0116 [vector…predict…new retail store or products]]). 
Therefore, it would have been obvious to one of ordinary skill in the art to include in the system/method of Keng the term "new" for new stores or new products, as taught by analogous art Milton, in order to predict/forecast optimally and accurately, since doing so could be performed readily by any person of ordinary skill in the art with neither undue experimentation nor risk of unexpected results (KSR-G/TSM); and also since one of ordinary skill in the art at the time of the invention would have recognized that applying the known technique and concepts of Milton would have yielded predictable results, because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such concepts and features into similar systems (KSR-D). (MPEP 2141; see also (1) 2007 Examination Guidelines for Determining Obviousness Under 35 U.S.C. 103 in View of the Supreme Court Decision in KSR International Co. v. Teleflex Inc., Federal Register, Vol. 72, No. 195, October 10, 2007, pages 57526-57535; (2) 2010 Examination Guidelines Update: Developments in the Obviousness Inquiry After KSR v. Teleflex, Federal Register, Vol. 75, No. 169, September 1, 2010, pages 53643-53660; and (3) the materials posted at https://www.uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidelines-training-materials-view-ksr).

As per claim 10, Keng discloses the method of claim 9, wherein generating the currently/presently considered product data or the currently/presently considered storefront data is based on determining similar products to the one or more currently/presently considered products or similar storefronts to the one or more currently/presently considered storefronts (¶¶ 0118-0122 [SKU…not have…breadth of…history…pool data from multiple similar SKUs…effects can be estimated for a target SKU, even though that SKU may not have had those effects directly observed for it (no historical data available, so using information from similar SKUs)…complications for pooling data can be overcome by normalizing the units and average price within a subcategory or brand such that the data may be reliably pooled together]).
Keng does not state "new" as in new storefronts or new products (however, Applicant does not provide a definition of "new," and Keng does disclose various stores and products that are presently (currently) considered, where a "new" store could simply be a currently considered store or product different from the one initially/previously considered but still within the "one or more stores" disclosed in Keng).
Analogous art Milton discloses the term “new” data and new stores and other new information relating to products and stores in the machine learning models for predicting in a retail setting (for example, see ¶¶ 0006, 0051, 0062 [model optimization by re-learning these parameters in the face of new data], 0099-0101, 0114 [machine learning models…new vector to a probability function – see with 0116 [vector…predict…new retail store or products]]). 
Therefore, it would have been obvious to one of ordinary skill in the art to include in the system/method of Keng the term "new" for new stores or new products, as taught by analogous art Milton, in order to predict/forecast optimally and accurately, since doing so could be performed readily by any person of ordinary skill in the art with neither undue experimentation nor risk of unexpected results (KSR-G/TSM); and also since one of ordinary skill in the art at the time of the invention would have recognized that applying the known technique and concepts of Milton would have yielded predictable results, because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such concepts and features into similar systems (KSR-D). (MPEP 2141; see also (1) 2007 Examination Guidelines for Determining Obviousness Under 35 U.S.C. 103 in View of the Supreme Court Decision in KSR International Co. v. Teleflex Inc., Federal Register, Vol. 72, No. 195, October 10, 2007, pages 57526-57535; (2) 2010 Examination Guidelines Update: Developments in the Obviousness Inquiry After KSR v. Teleflex, Federal Register, Vol. 75, No. 169, September 1, 2010, pages 53643-53660; and (3) the materials posted at https://www.uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidelines-training-materials-view-ksr).

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Keng et al. (US 2021/0334845) in view of Kansara et al. (US 2021/0117995).
As per claim 12, Keng discloses the method of claim 4, comprising forecasting ML-AI models, and wherein training the plurality of promotional forecasting ML-AI models is based on using a customized loss function (see citations for claims 1-4 above and also see ¶¶ 0141 [percent number of days stock out…variation is that if a store stocks out of an item, the above equation will under stock the item, causing the store to stock out of the product…cause the store to then under or over order, possibly leading to a cycle of out of stock situations], 0127-0128 [impose a non-zero mean Bayesian Prior that can help fill in missing or sparse data…Bayesian Prior can be used to fill the missing coefficient…approach is made possible because of the normalization described above due to fitting the pooled SKUs model], 0139-0144 [4 week average unit sales per store (new sales), and use those proportions to multiply by the total-store forecast – see equations which deal with missing/lost data entries – provide more accurate forecasts using multiple factors (for example, past sales, trends, price, promo mechanics, and the like)…provide for a reduction in stockouts and excess inventory…evaluation can also include: promotion lift as a measure of incremental promotional lift of the promotion in comparison to a baseline…residual basket value as a measure of average basket size when this product is sold, minus the product – basket penetration as a measure of the proportion of transactions involving the product (current new sales evaluation)]).
Although Keng does disclose regression analysis, Keng does not explicitly state Light gradient-boosting machine (LightGBM) models.
Analogous art Kansara discloses Light gradient-boosting machine (LightGBM) models (¶¶ 0052 [predictions in retail setting; also see 0016-0018], 0094 [LGBM (Light Gradient Boosting Model); see with 0057-0058 [missing values…training sequences]]).
Therefore, it would have been obvious to one of ordinary skill in the art to include in the system/method of Keng Light gradient-boosting machine (LightGBM) models, as taught by analogous art Kansara, in order to predict/forecast optimally and accurately, since doing so could be performed readily by any person of ordinary skill in the art with neither undue experimentation nor risk of unexpected results (KSR-G/TSM); and also since one of ordinary skill in the art at the time of the invention would have recognized that applying the known technique and concepts of Kansara would have yielded predictable results, because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such concepts and features into similar systems (KSR-D). (MPEP 2141; see also (1) 2007 Examination Guidelines for Determining Obviousness Under 35 U.S.C. 103 in View of the Supreme Court Decision in KSR International Co. v. Teleflex Inc., Federal Register, Vol. 72, No. 195, October 10, 2007, pages 57526-57535; (2) 2010 Examination Guidelines Update: Developments in the Obviousness Inquiry After KSR v. Teleflex, Federal Register, Vol. 75, No. 169, September 1, 2010, pages 53643-53660; and (3) the materials posted at https://www.uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidelines-training-materials-view-ksr).

As per claim 13, Keng discloses the method of claim 12, wherein the customized loss function is based on a sales velocity associated with the particular product and a margin rate associated with the particular product (see citations above for claims 1-4 and 12 and also see ¶¶ 0052 [stockout percentage (which is known as a stockout rate – percentage of times the product is out of stock (track the number of orders that you receive for each product and the number of orders that you cannot fulfill due to stockouts; then, you can divide the number of unfulfilled orders by the total number of orders and multiply by 100 to get the percentage))], 0147 [taking into account baseline sales, and the negative effects of cannibalization, halo, and pull forward…advantageously, determining uplift (increase in sales performance/rate) not just based on raw sales. In further cases, the evaluation can also include: promotion lift as a measure of incremental promotional lift of the promotion in comparison to a baseline; price elasticity as a measure of impact of price changes to demand; residual basket value as a measure of average basket size when this product is sold, minus the product; basket penetration as a measure of the proportion of transactions involving the product], 0155-0157 [sales uplift (increase in sales performance)…sales…margin…product; see with 0047-0048, 0054 [deriving uplift (increase in performance or sales)]]).  
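The customized loss function of claim 13 (based on sales velocity and margin rate) could take many forms; one hypothetical sketch weights each product's squared forecast error by velocity times margin rate, so fast-moving, high-margin items dominate training:

```python
# Minimal sketch of a customized loss in the spirit of claim 13: each
# product's forecast error is weighted by its sales velocity and margin
# rate (illustrative only; the weighting scheme is an assumption, not
# the claimed loss function).
def weighted_loss(y_true, y_pred, velocity, margin_rate):
    total, weight_sum = 0.0, 0.0
    for t, p, v, m in zip(y_true, y_pred, velocity, margin_rate):
        w = v * m                      # weight: velocity times margin rate
        total += w * (t - p) ** 2      # weighted squared error
        weight_sum += w
    return total / weight_sum          # normalized weighted loss

print(weighted_loss([10, 20], [12, 20], velocity=[1.0, 5.0],
                    margin_rate=[0.5, 0.3]))  # 1.0
```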


Conclusion
The prior art made of record on the PTO-892 and not relied upon is considered pertinent to applicant's disclosure. For example, some of the pertinent prior art is as follows:
Panikkar et al. (US 2024/0193536): Disclosed techniques include receiving, by a computing device, data about a probable overstock product at a retail partner in a sales region from another computing device, the data including a finished goods assembly (FGA) stock keeping unit (SKU) that identifies the probable overstock product and a corresponding quantity of the probable overstock product. The method also includes, by the computing device, determining a total quantity of the probable overstock product in the sales region, wherein the total quantity includes the quantity of the probable overstock product at the retail partner, and computing a corrected FGA demand of the probable overstock product in the sales region based on the total quantity of the probable overstock product in the sales region. The method further includes, by the computing device, the corrected FGA demand of the probable overstock product in the sales region.
Lei et al. (US 11,922,440): Illustrates forecasting demand of an item by receiving historical sales data for the item for a plurality of past time periods, including a plurality of features that define one or more feature sets. Embodiments use the feature sets as inputs to one or more different algorithms to generate a plurality of different models. Embodiments train each of the different models. Embodiments use each of the trained models to generate a plurality of past demand forecasts for each of some or all of the past time periods. Embodiments determine a root-mean-square error ("RMSE") for each of the past demand forecasts and, based on the RMSE, determine a weight for each of the trained models and normalize each weight. Embodiments then generate a final demand forecast for the item for each future time period by combining a weighted value for each trained model.
Schroeder et al. (US 7,689,456): Discusses predicting the profit attributable to a proposed sales promotion of a product, wherein the product has a wholesale price and a manufacturing cost per unit sales, including: establishing a base volume for sales of the product in the absence of promotions; determining a sales lift for a plurality of single promotions; and correlating the sales lift with promotion information to provide a sales lift model. The method and system also include proposing a promotion having a cost per unit sales for a promotion time period and having a planned sale price for the product; applying the sales lift model to the proposed promotion to predict sales of the product for the promotion time period; and calculating manufacturer profit based upon the product's predicted sales, cost per unit sales for promotion, wholesale price, and manufacturing cost per unit sales during the promotion time period.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GURKANWALJIT SINGH whose telephone number is (571)270-5392.  The examiner can normally be reached on M-F 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein can be reached on 571-270-5389.  The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.





/Gurkanwaljit Singh/
Primary Examiner, Art Unit 3625

