
Patent Application 18829232 - SYSTEMS AND METHODS FOR GENERATING AI-DRIVEN - Rejection

From WikiPatents


Title: SYSTEMS AND METHODS FOR GENERATING AI-DRIVEN INTEGRATED INSIGHTS

Application Information

  • Invention Title: SYSTEMS AND METHODS FOR GENERATING AI-DRIVEN INTEGRATED INSIGHTS
  • Application Number: 18829232
  • Submission Date: 2025-05-22
  • Effective Filing Date: 2024-09-09
  • Filing Date: 2024-09-09
  • Examiner Employee Number: 83899
  • Art Unit: 3624
  • Tech Center: 3600

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 1

Cited Patents

The following patents were cited in the rejection:

Office Action Text


    DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Application
This Office action is in response to the amendments filed by Applicant on 09/05/2025.
Claims 1-3, 8, and 16 are amended.
No claims are cancelled.
No claims are added.
Claims 1-20 are pending.

Claim Rejections - 35 USC § 112(f) not invoked
In claims 8-15 and 16-20, claim limitations recite “configured to,” but the claims and the written description disclose the corresponding structure, material, or acts for performing the entire claimed function and clearly link that structure, material, or acts to the function; as such, the limitations do not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.


Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: 
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.


Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Step One - First, pursuant to Step 1 of the January 2019 Guidance (84 Fed. Reg. 53), claims 1-7 and 16-20 are directed to a method, which is a statutory category.
Step One - First, pursuant to Step 1 of the January 2019 Guidance (84 Fed. Reg. 53), claims 8-15 are directed to a system, which is a statutory category.
Under the 2019 PEG, Step 2A sets forth a two-prong inquiry under which a claim is not “directed to” a judicial exception unless it recites such an exception and fails to integrate it into a practical application. Further, the particular groupings of abstract ideas are consistent with judicial precedent and are based on an extraction and synthesis of the key concepts identified by the courts as being abstract.
With respect to the Step 2A, Prong One, the claims as drafted, and given their broadest reasonable interpretation, fall within the Abstract idea grouping of “certain methods of organizing human activity” (business relations; relationships or interactions between people). For instance, independent Claim 1 is directed to an abstract idea, as evidenced by claim limitations “receiving, streaming data from a plurality of data sources, wherein the streaming data comprises user-specific and contextual attributes; normalizing, the streaming data into a standardized format and storing the normalized data; generating, a feature vector for each user entity based on the normalized data; segmenting, user entities into one or more segments based on behavioral similarity among the feature vectors, wherein the utilizes techniques such as clustering analysis, decision trees, or neural networks to identify meaningful segments within the data, allowing for precise targeting and personalized insights; determining, one or more role-specific insights for a given user entity based at least in part on a segment assignment and a role context associated with the user entity; causing, display of the one or more role-specific insights, is configured to adapt the display based on the role context associated with the user entity; and updating, one or more operational parameters based on user interaction feedback and sentiment data received, wherein the updated operational parameters influence subsequent segmentation or insight generation in real time.” 
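Purely for illustration and not as part of the prosecution record, the data flow recited in claim 1 (normalize streamed records, derive a feature vector per user entity, then segment users by behavioral similarity, e.g., via clustering) could be sketched as follows; all names, fields, and the toy k-means implementation are hypothetical stand-ins, not the applicant's disclosed implementation:

```python
# Hypothetical sketch of the claimed pipeline: normalize -> feature
# vectors -> behavioral segmentation. Field names are illustrative only.

def normalize(records):
    """Scale each numeric attribute to [0, 1] (a stand-in for the
    claimed 'standardized format')."""
    keys = records[0].keys() - {"user"}
    lo = {k: min(r[k] for r in records) for k in keys}
    hi = {k: max(r[k] for r in records) for k in keys}
    return [
        {"user": r["user"],
         **{k: (r[k] - lo[k]) / (hi[k] - lo[k] or 1) for k in keys}}
        for r in records
    ]

def feature_vector(row):
    """Deterministic per-user feature vector from normalized fields."""
    return tuple(v for k, v in sorted(row.items()) if k != "user")

def segment(vectors, k=2, iters=10):
    """Toy k-means clustering over the feature vectors, assigning each
    vector to its nearest centroid."""
    def nearest(v, cents):
        return min(range(len(cents)),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(v, cents[c])))
    centroids = list(vectors[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            groups[nearest(v, centroids)].append(v)
        centroids = [
            tuple(sum(col) / len(col) for col in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return {v: nearest(v, centroids) for v in vectors}
```

A production system would of course use a real streaming platform and an established clustering library; the sketch only makes the claimed sequence of steps concrete.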
Independent claim 8 recites substantially similar limitations to independent claim 1 and is rejected for similar reasons to claim 1 above.
Independent Claim 16 is directed to an abstract idea, as evidenced by claim limitations “receiving, normalized data associated with a user entity, the normalized data comprising at least one of historical interaction data, transactional records, contextual signals, or system-generated behavioral features; generating, one or more role-specific insights for the user entity based at least in part on a role context associated with the user entity; delivering, the one or more role-specific insights to the user entity, configured to: dynamically present the insights based on the role context associated with the user entity; initiate delivery of the insights in response to an event condition, the event condition comprising a change in market conditions, user behavior, or generation of the insight; utilize one or more delivery mechanisms selected from push notifications, in- application messages, or email alerts; and collect user interaction feedback and sentiment signals and transmit the collected feedback; and updating, one or more operational parameters based on the collected user interaction feedback and sentiment signals, wherein the updated parameters influence subsequent insight generation in real time.” 
These claim limitations belong to the grouping of “certain methods of organizing human activity” because managing interaction points between a population of users for one or more human entities involves organizing human activity, based on the description of “certain methods of organizing human activity” provided by the courts. The courts have used the phrase “certain methods of organizing human activity” to describe fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
With respect to the Step 2A, Prong Two - This judicial exception is not integrated into a practical application. In particular, the claim recites additional elements: Claims 1 and 8: “A computerized method for delivering role-specific insights based on real-time data, the method comprising: by a Real-time Data Mesh (RTDM) module executing on a server, by the RTDM module; in a data repository accessible to other modules; by an Advanced Artificial Intelligence/Machine Learning (AAML) module communicatively coupled to the RTDM module, wherein the AAML module applies one or more trained machine learning algorithms to derive the feature vector; by a Customer and Vendor Segmentation Engine (CVSE) communicatively coupled to the AAML module, CVSE, by a Personalization and Recommendation Engine (PRE) module communicatively coupled to the CVSE module, by a Single Pane of Glass (SPoG) User Interface executing on a client device, wherein the SpoG User Interface, by a Feedback and Adaptation Mechanism (FAM) module communicatively coupled to the SpoG User Interface and to at least one of the AAML module, CVSE, or PRE module, of a machine learning algorithm, through the SpoG User Interface, A system for automated AI-driven integrated insights generation and delivery, comprising:” Claim 16: “receiving, normalized data associated with a user entity, the normalized data comprising at least one of historical interaction data, transactional records, contextual signals, or system-generated behavioral features; generating, one or more role-specific insights for the user entity based at least in part on a role context associated with the user entity; delivering, the one or more role-specific insights to the user entity, configured to: dynamically present the insights based on the role context associated with the user entity; initiate delivery of the insights in response to an event condition, the event condition comprising a change in market conditions, user behavior, or generation of 
the insight; utilize one or more delivery mechanisms selected from push notifications, in- application messages, or email alerts; and collect user interaction feedback and sentiment signals and transmit the collected feedback; and updating, one or more operational parameters based on the collected user interaction feedback and sentiment signals, wherein the updated parameters influence subsequent insight generation in real time” at a high level of generality such that it amounts to no more than: adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f);
Thus, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea. As a result, claims 1, 8, and 16 do not provide any specifics regarding integration into a practical application when recited in a claim with a judicial exception. See MPEP 2106.05(f).
The claims further recite the additional elements of a “machine learning model” and “AI-powered” or “AI-driven” language. This language merely requires execution of an algorithm that can be performed by a generic computer component and provides no detail regarding the operation of that algorithm. As such, the claim requirement amounts to mere instructions to implement the abstract idea on a computer and, therefore, is not sufficient to make the claim patent eligible. See Alice, 573 U.S. at 226 (determining that the claim limitations “data processing system,” “communications controller,” and “data storage unit” were generic computer components that amounted to mere instructions to implement the abstract idea on a computer); October 2019 Guidance Update at 11–12 (recitation of generic computer limitations for implementing the abstract idea “would not be sufficient to demonstrate integration of a judicial exception into a practical application”). Such a generic recitation of “machine learning model” is insufficient to show a practical application of the recited abstract idea. All of these additional elements are not significantly more because these, again, are merely the software and/or hardware components used to implement the abstract idea on a general-purpose computer.
Similarly, dependent claims 2-7, 9-15, and 17-20 are also directed to an abstract idea under Step 2A, first and second prong. In the present application, all of the dependent claims have been evaluated and it was found that they all inherit the deficiencies set forth with respect to the independent claims. For instance, dependent claim 2 recites “further comprising logging transaction details related to the segmentation process within the platform for ongoing enhancement and optimization efforts, facilitated by the data logging mechanisms within RTDM and CVSE modules within the system for generating integrated insights” and dependent claim 4 recites “further comprising iteratively refining the segmentation analysis based on evolving market dynamics and user interactions, ensuring continuous improvement and adaptation within the system for generating integrated insights.” Here, these claims offer further descriptive limitations of elements found in the independent claims which are similar to the abstract idea noted in the independent claims above.
Dependent claim 3 recites “machine learning algorithms” and “by the FAM” in the claim limitations “comprising analyzing the effectiveness of the segmentation process post-implementation using machine learning models and predictive analytics to refine segmentation strategies based on updated data and user feedback, leveraging the feedback and adaptation mechanism (FAM) within the system for generating integrated insights”. Similarly, dependent claim 13 recites “AI-powered integrated insights”, “AAML module” and “algorithms” in the claim limitations “further comprising a Feedback and Adaptation Mechanism (FAM) module enabling continuous evolution and improvement of the AI-powered integrated insights platform based on user feedback and changing market conditions, wherein the FAM module collects feedback from users through interactive interfaces within the SPoG UI, sentiment analysis of user interactions, and direct input mechanisms to dynamically adjust the algorithms and models within the AAML module, CVSE, and/or PRE module”. Similarly, dependent claim 15 recites “AAML module” in the claim limitations “wherein the AAML module adapts algorithms based on continuous feedback loops, refining precision of AAML module processes over time to enhance relevance of generated insights”. In these claims, “machine learning algorithms” is an additional element, but it is still recited such that it amounts to no more than: adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). As a result, Examiner asserts that dependent claims 2-7, 9-15, and 17-20 are also directed to the abstract idea identified above.
With respect to Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. First, the invention lacks improvements to another technology or technical field [see Alice at 2351; 2019 IEG at 55], and lacks meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment [Alice at 2360, 2019 IEG at 55], and fails to effect a transformation or reduction of a particular article to a different state or thing [2019 IEG, 55]. For the reasons articulated above, the claims recite an abstract idea that is limited to a particular field of endeavor (MPEP § 2106.05(h)) and recites insignificant extra-solution activity (MPEP § 2106.05(g)). By the factors and rationale provided above with respect to these MPEP sections, the additional elements of the claims that fail to integrate the abstract idea into a practical application also fail to amount to “significantly more” than the abstract idea.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element(s) of “Claims 1 and 8: “A computerized method for delivering role-specific insights based on real-time data, the method comprising: by a Real-time Data Mesh (RTDM) module executing on a server, by the RTDM module; in a data repository accessible to other modules; by an Advanced Artificial Intelligence/Machine Learning (AAML) module communicatively coupled to the RTDM module, wherein the AAML module applies one or more trained machine learning algorithms to derive the feature vector; by a Customer and Vendor Segmentation Engine (CVSE) communicatively coupled to the AAML module, CVSE, by a Personalization and Recommendation Engine (PRE) module communicatively coupled to the CVSE module, by a Single Pane of Glass (SPoG) User Interface executing on a client device, wherein the SpoG User Interface, by a Feedback and Adaptation Mechanism (FAM) module communicatively coupled to the SpoG User Interface and to at least one of the AAML module, CVSE, or PRE module, of a machine learning algorithm, through the SpoG User Interface, A system for automated AI-driven integrated insights generation and delivery, comprising:” Claim 16: “receiving, normalized data associated with a user entity, the normalized data comprising at least one of historical interaction data, transactional records, contextual signals, or system-generated behavioral features; generating, one or more role-specific insights for the user entity based at least in part on a role context associated with the user entity; delivering, the one or more role-specific insights to the user entity, configured to: dynamically present the insights based on the role context associated with the user entity; initiate delivery of the insights in response to an event condition, the event condition comprising a change in market conditions, user behavior, or generation of the insight; utilize one or more delivery 
mechanisms selected from push notifications, in-application messages, or email alerts; and collect user interaction feedback and sentiment signals and transmit the collected feedback; and updating, one or more operational parameters based on the collected user interaction feedback and sentiment signals, wherein the updated parameters influence subsequent insight generation in real time” are insufficient to amount to significantly more. Applicant's originally submitted specification describes the computer components above at least in paragraphs [0028]-[0032] and [0057]-[0065]. In light of the specification, it should be noted that the components discussed above do not meaningfully limit the abstract idea because they merely link the use of the abstract idea to a particular technological environment (i.e., "implementation via computers"). In light of the specification, it should be noted that the claim limitations discussed above are merely instructions to implement the abstract idea on a computer. See MPEP 2106.05(f) (Mere Instructions to Apply an Exception: “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235. Mere instructions to apply an exception using a computer component cannot provide an inventive concept.). The additional elements amount to no more than a recitation of generic computer elements utilized to perform generic computer functions, such as performing repetitive calculations, Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ("The computer required by some of Bancorp’s claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims."); and storing and retrieving information in memory, Versata Dev. Group, Inc. v.
SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; see MPEP 2106.05(d)(II).
Applicant's originally submitted specification describes the computer components above at least in paragraphs [0028]-[0032] and [0057]-[0065]. In light of the specification, it should be noted that the computer components identified above are well-understood, routine, conventional activities previously known to the industry (see MPEP 2106.05(d)). Here, the computer components discussed above are similar to those in the court cases listed in MPEP 2106.05(d), e.g., “Receiving or transmitting data over a network, e.g., using the Internet to gather data,” Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information) (see MPEP 2106.05(d)(II)).
The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself.
Further, it should be noted that additional elements of the claimed invention such as claim limitations when considered individually or as an ordered combination along with the other limitations discussed above in method claims 1 and 8 also do not meaningfully limit the abstract idea because they merely linked the use of the abstract idea to a particular technological environment (i.e., "implementation via computers"). In light of the specification, it should be noted that the claim limitations discussed above are merely instructions to implement the abstract idea on a computer. See MPEP 2106. 
Similarly, dependent claims 2-7, 9-15 and 17-20 also do not include limitations amounting to significantly more than the abstract idea under the second prong or 2B of the Alice framework. In the present application, all of the dependent claims have been evaluated and it was found that they all inherit the deficiencies set forth with respect to the independent claims. Further, it should be noted that the dependent claims do not include limitations that overcome the stated assertions. Here, the dependent claims recite features/limitations that include computer components identified above in part 2B of analysis of independent claims 1, 8 and 16. As a result, Examiner asserts that dependent claims, such as dependent claims 2-7, 9-15 and 17-20 are also directed to the abstract idea identified above. 
For more information on 101 rejections, see MPEP 2106 and the January 2019 Guidance at https://www.govinfo.gov/content/pkg/FR-2019-01-07/pdf/2018-28282.pdf.
	

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Makhija et al. (US 2020/0279200), further in view of Kadayam (US 11,163,846) and Vogler (US 2003/0173403).

As per claims 1 and 8: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
A computerized method for delivering role-specific insights based on real-time data, the method comprising: 
Regarding the claim limitations below, Reference Makhija shows:
“receiving, by a Real-time Data Mesh (RTDM) module executing on a server, streaming data from a plurality of data sources, wherein the streaming data comprises user-specific and contextual attributes;
normalizing, by the RTDM module, the streaming data into a standardized format and storing the normalized data in a data repository accessible to other modules”
Makhija shows the above limitation at least in: paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “… “system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.” …” a Query language tool (QL) 130, data governance & standardization/protocol layer 131” …” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”).
Reference Makhija shows in paragraphs 62-64, 92, 98, and 100-101 that the system detects changes in data, which indicates that the data is continuously being received, compared, and synchronized. Reference Makhija shows demographic information ([0104]: "Demand planning S504 allows determination of a demand for the item or product considering various factors like customer base, consumption, density of population is a geographic location etc."). Makhija also shows transaction history ([0098]: "the system includes pro-active detection algorithms for any record/transactions (items/Suppliers/PO/Invoices etc) being entered by a user (supplier/Customer/Employee etc) at the user interface.").
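As an illustrative aside that is not part of the record, the duplicate-removal and standardization behavior quoted from Makhija's cleansing engine can be made concrete with a small sketch; the field names and the exact-match deduplication rule are assumptions chosen for illustration only:

```python
# Hypothetical sketch of a data-cleansing step: standardize incoming
# records (trim whitespace, lowercase keys) and drop exact duplicates,
# in the spirit of the cleansing engine quoted from Makhija.

def cleanse(stream):
    seen = set()
    out = []
    for rec in stream:
        # Standardize: lowercase, whitespace-trimmed keys and values.
        std = {k.strip().lower(): (v.strip() if isinstance(v, str) else v)
               for k, v in rec.items()}
        key = tuple(sorted(std.items()))
        if key in seen:  # drop records that standardize to a duplicate
            continue
        seen.add(key)
        out.append(std)
    return out
```

A real cleansing engine would add fuzzy matching, correction of corrupt values, and an approval/routing step as the quoted passage describes; the sketch shows only the standardize-and-deduplicate core.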
Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
“generating, by an Advanced Artificial Intelligence/Machine Learning (AAML) module communicatively coupled to the RTDM module, a feature vector for each user entity based on the normalized data, wherein the AAML module applies one or more trained machine learning algorithms to derive the feature vector;
segmenting, by a Customer and Vendor Segmentation Engine (CVSE) communicatively coupled to the AAML module, user entities into one or more segments based on behavioral similarity among the feature vectors, wherein the CVSE utilizes techniques such as clustering analysis, decision trees, or neural networks to identify meaningful segments within the data, allowing for precise targeting and personalized insights;”
Reference Makhija shows “generating, by an Advanced Artificial Intelligence/Machine Learning (AAML) module communicatively coupled to the RTDM module, a feature vector for each user entity based on the normalized data, wherein the AAML module applies one or more trained machine learning algorithms to derive the feature vector; segmenting, by a Customer and Vendor Segmentation Engine (CVSE) communicatively coupled to the AAML module, user entities into one or more segments based on behavioral similarity among the feature vectors….” (paragraph 62-64, 92, 98, 100-101, the system detects changes in data which indicates that the data is continuously being received and compared and synchronizing. Makhija also shows: paragraph 56-60, “The simulation UI 130a enables user to draft statements/query as per underlined model provided through intelligent sensing. The Translator 130 uses NLP and domain specific nomenclature repository, to tokenize query string received from user. Tokenizer takes a sequence of characters and output a sequence of tokens. It will analyze character by character, using multiple levels of lookahead in order to identity what token is currently being examine. The Code Generator 130c extracts Keywords and tokens that are used to generate underlying Machine Learning query and big data query. The Mapper is responsible to generate code and the model 130d utilized domain attributes, Synonyms and tokens.” …” the tool includes an AI based prediction and recommendation engine coupled to a processor configured for processing at least one prediction algorithm to generate at least one recommendation option/task/action in real time.” …” the tool is configured to attach the recommended task/action to a desired workflow or User interface element or set of rules or validations”.
However, Makhija in view of Kadayam does not explicitly show “wherein the CVSE utilizes techniques such as clustering analysis, decision trees, or neural networks to identify meaningful segments within the data, allowing for precise targeting and personalized insights.” Reference Vogler shows the above limitation at least in paragraphs 35-37: “The EWA 130 can apply a predetermined set of rules, or alternatively, the EWA 130 can include artificial intelligence logic that enables the EWA 130 to adapt its behavior in response to current or historical inventory patterns. The artificial intelligence logic enables the EWA to estimate potential variation in inventory levels in the near future in order to identify potentially risky situations early enough to allow for corrective measures. For example, initially the rules may specify that an alert should be fired when the inventory drops below 10. However, if the EWA 130 detects that it sends alerts much more frequently during the summer season than during other seasons, the EWA 130 may adapt to this seasonal variation by increasing the threshold from 10 to 20 during the summer season so that the inventory planner 140 is notified earlier of the impending inventory shortage. This adaptive behavior occurs with minimal human intervention, and with minimal need of parameter adjustment or any other kind of manual calibration. [0036] The EWA 130 can retrieve and analyze current and historical inventory data to detect trends such as deviations between planned replenishment and actual replenishment and to build a predictive model of future inventory needs. These trends and predictions can be determined using linear regression, classification and regression trees, or other stochastic algorithms. [0037] The EWA 130 combines the estimates of potential variation for several individual activities into an estimate of the potential variation for an entire inventory. These algorithms can be implemented using decision trees such as classification and regression trees.”
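For illustration only, Vogler's seasonal example (raising the alert threshold from 10 to 20 when alerts fire unusually often in a season) can be sketched as below; the trigger condition and step size are hypothetical stand-ins, not Vogler's disclosed logic:

```python
# Hypothetical sketch of an adaptive inventory-alert threshold in the
# spirit of Vogler's EWA example (threshold 10 -> 20 in a busy season).

def adaptive_threshold(base, alerts_this_season, avg_alerts, step=10):
    """Raise the threshold when a season fires well above the average
    alert rate, so the planner is warned earlier (assumed 2x trigger)."""
    if avg_alerts and alerts_this_season > 2 * avg_alerts:
        return base + step
    return base

def check_inventory(level, threshold):
    """True means 'fire an alert' for the inventory planner."""
    return level < threshold
```

In Vogler the adaptation is driven by learned inventory patterns (e.g., regression trees) rather than a fixed 2x rule; the sketch only makes the adaptive-threshold idea concrete.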
Reference Makhija and Reference Vogler are analogous prior art to the claimed invention because the references generally relate to the field of tracking items (Makhija: [0011], [0050]-[0051], [0079]-[0085]: tracking module. Vogler: [0021]-[0022], [0026], [0031]). Further, said references were filed before the effective filing date of the instant application and thus qualify as prior art.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Vogler, particularly the ability to use decision trees ([0035]-[0037]), in the disclosure of Reference Makhija, particularly the ability to collect and analyze real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that uses data analysis methods like decision trees to allow suppliers to keep track of how much inventory they have and how much inventory they have distributed to particular retailers. Periodically, the retailer reports to the supplier the current inventory level of the store. Based on the report, the supplier determines whether the store inventory needs to be replenished, as taught by Reference Vogler (see at least [0004]), so that the process of tracking items can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar tracking-of-items field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Vogler, the results of the combination were predictable (MPEP 2143(A)).
Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
determining, by a Personalization and Recommendation Engine (PRE) module communicatively coupled to the CVSE module, one or more role-specific insights for a given user entity based at least in part on a segment assignment and a role context associated with the user entity
Reference Makhija does not explicitly show the above limitations. Reference Kadayam discloses the above limitation at least in (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… 215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”).
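The PRE limitation recited above (role-specific insights selected from a segment assignment and a role context) can be sketched in miniature. The segments, roles, and insight texts below are invented for illustration and appear in neither the claims nor the cited references:

```python
# Hypothetical sketch of the claimed PRE behavior: select insights for a
# user entity based on (a) the segment assigned by the CVSE and (b) the
# user's role context.

INSIGHTS = [
    {"text": "High-volume buyers in this segment renegotiated contracts",
     "segments": {"enterprise"}, "roles": {"procurement_buyer"}},
    {"text": "Inventory turnover is trending down for seasonal SKUs",
     "segments": {"enterprise", "smb"}, "roles": {"inventory_planner"}},
    {"text": "Preferred suppliers offer bundle discounts this quarter",
     "segments": {"smb"}, "roles": {"procurement_buyer", "inventory_planner"}},
]

def role_specific_insights(segment, role, insights=INSIGHTS):
    """Return the insight texts matching both the segment assignment
    and the role context associated with the user entity."""
    return [i["text"] for i in insights
            if segment in i["segments"] and role in i["roles"]]
```

The key structural point for the claim mapping is that both inputs gate the output: the same segment yields different insights for different roles, and vice versa.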
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]. Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references share the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.  
It would have been obvious to one of ordinary skill in the art before the effective filing date of this application for AIA to provide the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives like user preference and market conditions information (Col. 3, lines 31-52, col. 11 lines 54-65, col. 12, lines 46-54, col. 15 lines 39-45, col. 29 lines 22-29, col. 32 line 61 to col. 33 line 4, col. 34 lines 48-29, col. 35 lines 3-20 and 50-60), in the disclosure of Reference Makhija, particularly in the ability to collect real time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that is programmed to maintain a collection of “cognitive advisors” or recommendation models, where each recommendation model has certain required input parameters and produces a procurement recommendation, can also have various optional parameters to cover possible information that may be contained in the query context, and can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65: the system is programmed to maintain a collection of “cognitive advisors” or recommendation models. Each recommendation model has certain required input parameters and produces a procurement recommendation. Each recommendation can also have various optional parameters to cover possible information can may be contained in the query context. A recommendation model can be pretrained based on representative data in the database with machine leaning techniques known to one of skilled in the art, in which case the recommendation model acts as a classifier), so that the process of managing workflow processing can be made more efficient and effective. 
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143(A)).
Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
causing, by a Single Pane of Glass (SPoG) User Interface executing on a client device, display of the one or more role-specific insights, wherein the SpoG User Interface is configured to adapt the display based on the role context associated with the user entity (Makhija: paragraph 61-63, 70, “Data Relation analytics (using Graph store) will help users view relation-first perspective of their data which is not possible in classical data model. Information will feed into Analytics and Dashboard 129, with a view getting mode insights. Graph algorithms library will also provide the ability to detect hard-to-find or complex patterns and structures in supply chain data model.”…”  It collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc. The Curation including selection and organization of data takes place through capturing metadata and lineage and making it available in a data catalog.”…” The data flows in the data lake in real-time processing through event stream layer. Domain Model exposed through the query language (QL) tool 130 enables user to self-serve their data and analytical requirements. 
Models developed by users are utilized to improve the insights for future purpose.”…”The plurality of distinct data sources includes internet of things (IOT), demand from various sources at different levels like retailers, distribution channels, POS systems, customer feedback, supplier collaboration platform, invoices, purchase orders (PO), finance modules, inventory management module, contracts and RFx module, supplier module, item master, bill of materials, vendor master, warehouse management module, logistics management module, social media, weather, real time commodity and stock market prices, geo-political news etc. It shall be apparent to a person skilled in the art that the data source may include other source within the scope of the present invention”. Makhija shows: paragraph 41-42, 61, “Information will feed into Analytics and Dashboard 129, with a view getting mode insights.” Makhija shows: paragraph 62-64, 92, 98, 100-101, the system detects changes in data which indicates that the data is continuously being received and compared and synchronizing. Makhija: paragraph 56-60, “The simulation UI 130a enables user to draft statements/query as per underlined model provided through intelligent sensing. The Translator 130 uses NLP and domain specific nomenclature repository, to tokenize query string received from user. Tokenizer takes a sequence of characters and output a sequence of tokens. It will analyze character by character, using multiple levels of lookahead in order to identity what token is currently being examine. The Code Generator 130c extracts Keywords and tokens that are used to generate underlying Machine Learning query and big data query. 
The Mapper is responsible to generate code and the model 130d utilized domain attributes, Synonyms and tokens.” …” the tool includes an AI based prediction and recommendation engine coupled to a processor configured for processing at least one prediction algorithm to generate at least one recommendation option/task/action in real time.” …” the tool is configured to attach the recommended task/action to a desired workflow or User interface element or set of rules or validations”);
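Makhija's character-by-character tokenizer with lookahead (paragraphs 56-60, quoted above) can be sketched as follows. The token classes and the two-character operator case are illustrative assumptions, not details from the reference:

```python
# Hypothetical sketch of a query tokenizer that examines the input one
# character at a time and uses one character of lookahead to decide what
# token is currently being examined (e.g. ">" versus ">=").

def tokenize(query):
    tokens, i = [], 0
    while i < len(query):
        ch = query[i]
        if ch.isspace():
            i += 1
        elif ch.isalpha():                       # keyword / identifier
            j = i
            while j < len(query) and query[j].isalnum():
                j += 1
            tokens.append(("WORD", query[i:j])); i = j
        elif ch.isdigit():                       # numeric literal
            j = i
            while j < len(query) and query[j].isdigit():
                j += 1
            tokens.append(("NUMBER", query[i:j])); i = j
        elif ch in "<>=":
            # lookahead: a following "=" makes a two-character operator
            if i + 1 < len(query) and query[i + 1] == "=":
                tokens.append(("OP", query[i:i + 2])); i += 2
            else:
                tokens.append(("OP", ch)); i += 1
        else:                                    # any other symbol
            tokens.append(("SYM", ch)); i += 1
    return tokens
```

A downstream code generator, as the quote describes, would then map the keyword and token stream onto a machine-learning or big-data query.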
Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
updating, by a Feedback and Adaptation Mechanism (FAM) module communicatively coupled to the SpoG User Interface and to at least one of the AAML module, CVSE, or PRE module, one or more operational parameters of a machine learning algorithm based on user interaction feedback and sentiment data received through the SpoG User Interface, wherein the updated operational parameters influence subsequent segmentation or insight generation in real time (Makhija: paragraph 61-63, 70, “Data Relation analytics (using Graph store) will help users view relation-first perspective of their data which is not possible in classical data model. Information will feed into Analytics and Dashboard 129, with a view getting mode insights. Graph algorithms library will also provide the ability to detect hard-to-find or complex patterns and structures in supply chain data model.”…”  It collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc. The Curation including selection and organization of data takes place through capturing metadata and lineage and making it available in a data catalog.”…” The data flows in the data lake in real-time processing through event stream layer. Domain Model exposed through the query language (QL) tool 130 enables user to self-serve their data and analytical requirements. 
Models developed by users are utilized to improve the insights for future purpose.”…”The plurality of distinct data sources includes internet of things (IOT), demand from various sources at different levels like retailers, distribution channels, POS systems, customer feedback, supplier collaboration platform, invoices, purchase orders (PO), finance modules, inventory management module, contracts and RFx module, supplier module, item master, bill of materials, vendor master, warehouse management module, logistics management module, social media, weather, real time commodity and stock market prices, geo-political news etc. It shall be apparent to a person skilled in the art that the data source may include other source within the scope of the present invention”. Makhija shows: paragraph 41-42, 61, “Information will feed into Analytics and Dashboard 129, with a view getting mode insights.” Makhija shows: paragraph 62-64, 92, 98, 100-101, the system detects changes in data which indicates that the data is continuously being received and compared and synchronizing. Makhija: paragraph 56-60, “The simulation UI 130a enables user to draft statements/query as per underlined model provided through intelligent sensing. The Translator 130 uses NLP and domain specific nomenclature repository, to tokenize query string received from user. Tokenizer takes a sequence of characters and output a sequence of tokens. It will analyze character by character, using multiple levels of lookahead in order to identity what token is currently being examine. The Code Generator 130c extracts Keywords and tokens that are used to generate underlying Machine Learning query and big data query. 
The Mapper is responsible to generate code and the model 130d utilized domain attributes, Synonyms and tokens.” …” the tool includes an AI based prediction and recommendation engine coupled to a processor configured for processing at least one prediction algorithm to generate at least one recommendation option/task/action in real time.” …” the tool is configured to attach the recommended task/action to a desired workflow or User interface element or set of rules or validations”).  
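The FAM limitation recited above (updating operational parameters of a machine learning algorithm from interaction feedback and sentiment received through the UI) can be sketched minimally. The parameter name, signal formula, and learning rate are invented for illustration:

```python
# Hypothetical sketch of the claimed FAM behavior: nudge an operational
# parameter of the segmentation/insight model whenever interaction
# feedback and sentiment arrive through the SPoG UI.

def update_parameters(params, feedback_events, learning_rate=0.1):
    """Shift the insight-relevance weight toward positive signals.
    Each event is (clicked: bool, sentiment: float in [-1, 1])."""
    updated = dict(params)  # leave the caller's parameters untouched
    for clicked, sentiment in feedback_events:
        signal = sentiment + (0.5 if clicked else -0.5)
        updated["relevance_weight"] += learning_rate * signal
    return updated
```

The updated weight would then influence the next segmentation or insight-generation pass, matching the "in real time" language of the limitation in spirit: each feedback batch immediately produces new operational parameters.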

As per claim 2: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising logging transaction details related to the segmentation process within the platform for ongoing enhancement and optimization efforts, facilitated by the data logging mechanisms within the RTDM and CVSE modules within the system for generating integrated insights 
Makhija shows “further comprising logging transaction details related to the segmentation process within the platform for ongoing enhancement and optimization efforts, facilitated by the data logging mechanisms within RTDM and CVSE modules”: paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “…“system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.”…” a Query language tool (QL) 130, data governance & standardization/protocol layer 131”…” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. 
The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
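The cleansing step Makhija describes in paragraph 46 (remove duplicates, standardize, and group data arriving from disconnected sources) can be illustrated with a minimal sketch. The field names and normalization rules below are invented, and no NLP/ML component is modeled:

```python
# Hypothetical sketch of a cleansing pass: normalize supplier names
# (case and whitespace) and drop duplicate records, keeping the first
# occurrence of each (supplier, item) pair.

def cleanse(records):
    seen, cleaned = set(), []
    for rec in records:
        # standardize: collapse whitespace, title-case the supplier name
        supplier = " ".join(rec["supplier"].split()).title()
        key = (supplier, rec["item"])
        if key not in seen:          # remove duplicates after standardizing
            seen.add(key)
            cleaned.append({"supplier": supplier, "item": rec["item"]})
    return cleaned
```

Normalizing before deduplicating matters here: "ACME CORP" and "  acme corp " only collapse into one record once both are standardized to the same form.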
Makhija does not explicitly show “within the system for generating integrated insights”. Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… 215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”)
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]. Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references share the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.  
It would have been obvious to one of ordinary skill in the art before the effective filing date of this application for AIA to provide the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives like user preference and market conditions information (Col. 3, lines 31-52, col. 11 lines 54-65, col. 12, lines 46-54, col. 15 lines 39-45, col. 29 lines 22-29, col. 32 line 61 to col. 33 line 4, col. 34 lines 48-29, col. 35 lines 3-20 and 50-60), in the disclosure of Reference Makhija, particularly in the ability to collect real time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that is programmed to maintain a collection of “cognitive advisors” or recommendation models, where each recommendation model has certain required input parameters and produces a procurement recommendation, can also have various optional parameters to cover possible information that may be contained in the query context, and can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65: the system is programmed to maintain a collection of “cognitive advisors” or recommendation models. Each recommendation model has certain required input parameters and produces a procurement recommendation. Each recommendation can also have various optional parameters to cover possible information can may be contained in the query context. A recommendation model can be pretrained based on representative data in the database with machine leaning techniques known to one of skilled in the art, in which case the recommendation model acts as a classifier), so that the process of managing workflow processing can be made more efficient and effective. 
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143(A)).

As per claim 3: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising analyzing, by the FAM, the effectiveness of the segmentation process post-implementation using machine learning models and predictive analytics to refine segmentation strategies based on updated data and user feedback.  
Reference Makhija shows in paragraphs 62-64, 92, 98, 100-101 that the system detects changes in data, which indicates that the data is continuously being received, compared, and synchronized. Reference Makhija does not explicitly show user preferences and market conditions; as such, Reference Makhija does not explicitly show “user feedback” and “generating integrated insights”. 
Reference Kadayam shows the above limitations at least in (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… (215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall. (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]; Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references share the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of this AIA application to incorporate the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives such as user preference and market conditions information (col. 3, lines 31-52; col. 11, lines 54-65; col. 12, lines 46-54; col. 15, lines 39-45; col. 29, lines 22-29; col. 32, line 61 to col. 33, line 4; col. 34, lines 48-29; col. 35, lines 3-20 and 50-60), into the disclosure of Reference Makhija, particularly the ability to collect real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that is programmed to maintain a collection of “cognitive advisors” or recommendation models, in which each recommendation model has certain required input parameters and produces a procurement recommendation, can also have various optional parameters to cover information that may be contained in the query context, and can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective. 
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143 A).

As per claim 4: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising iteratively refining the segmentation analysis based on evolving market dynamics and user interactions, ensuring continuous improvement and adaptation within the system for generating integrated insights.  
Reference Makhija shows the above limitations at least in paragraphs 46, 53-54, and 61-62: “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake”… “system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.”… “a Query language tool (QL) 130, data governance & standardization/protocol layer 131”… “collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
Makhija does not explicitly show “within the system for generating integrated insights.” Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col. 12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60), as quoted in full above. The analogous-art analysis and the rationale for combining Reference Makhija with Reference Kadayam set forth above apply equally to this claim (MPEP 2143 A).

As per claim 5: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising dynamically adjusting segmentation algorithms and models based on real-time data streams and user feedback, enhancing the precision and relevance of generated insights within the system for generating integrated insights.
Reference Makhija shows the above limitations at least in paragraphs 46, 53-54, and 61-62, as quoted in full in the rejection of claim 4 above. Makhija does not explicitly show “within the system for generating integrated insights.” Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col. 12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60), as quoted in full above. The analogous-art analysis and the rationale for combining Reference Makhija with Reference Kadayam set forth above apply equally to this claim (MPEP 2143 A).

As per claim 6: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising integrating multiple data sources and analytics tools within the system for generating integrated insights to facilitate comprehensive segmentation analysis, ensuring thorough and accurate insights generation.
Reference Makhija shows the above limitations at least in paragraphs 46, 53-54, and 61-62, as quoted in full in the rejection of claim 4 above.
Makhija does not explicitly show “within the system for generating integrated insights.” Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col. 12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60), as quoted in full above.
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]; Kadayam: col. 15, lines 28-38; col. 20, lines 60-67; col. 27, lines 53-57; col. 40-67). Further, said references fall within the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.  
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives such as user preference and market condition information (col. 3, lines 31-52; col. 11, lines 54-65; col. 12, lines 46-54; col. 15, lines 39-45; col. 29, lines 22-29; col. 32, line 61 to col. 33, line 4; col. 34, lines 48-29; col. 35, lines 3-20 and 50-60), into the disclosure of Reference Makhija, particularly its ability to collect real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system programmed to maintain a collection of “cognitive advisors” or recommendation models, where each recommendation model has certain required input parameters and produces a procurement recommendation, can also have various optional parameters to cover information that may be contained in the query context, and can be pretrained on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective. 
Further, the claimed invention is merely a combination of old elements in a similar workflow-processing field of endeavor. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143 A).
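For context only (not part of the record), the “cognitive advisor” pattern that Kadayam describes — a recommendation model with required input parameters, optional query-context parameters, and classifier-style selection over candidates — can be sketched as follows. All class names, parameter names, data, and the scoring rule below are hypothetical illustrations, not the reference's actual implementation:

```python
# Illustrative sketch of Kadayam's "cognitive advisor" pattern: each
# recommendation model declares required input parameters, accepts optional
# query-context parameters, and acts like a classifier over candidate
# products. All names, data, and the scoring rule are hypothetical.

class CognitiveAdvisor:
    def __init__(self, name, required_params, optional_params=()):
        self.name = name
        self.required_params = set(required_params)
        self.optional_params = set(optional_params)

    def recommend(self, query_context):
        # Enforce the model's required input parameters.
        missing = self.required_params - set(query_context)
        if missing:
            raise ValueError(f"{self.name} missing params: {sorted(missing)}")
        # Classifier-style selection: score each candidate against the
        # query context and return the best match.
        candidates = query_context["candidates"]
        if not candidates:
            return None
        return max(candidates, key=lambda c: self._score(c, query_context))

    def _score(self, candidate, ctx):
        # Toy score: overlap between candidate tags and user preferences.
        prefs = set(ctx.get("preferences", []))
        return len(prefs & set(candidate.get("tags", [])))

advisor = CognitiveAdvisor(
    "OrganizationPreferences",
    required_params={"user_id", "candidates"},
    optional_params={"preferences", "region"},
)
best = advisor.recommend({
    "user_id": "u1",
    "candidates": [{"sku": "A", "tags": ["eco"]},
                   {"sku": "B", "tags": ["eco", "bulk"]}],
    "preferences": ["eco", "bulk"],
})
print(best["sku"])  # → B
```

The required/optional parameter split mirrors the reference's statement that each model "has certain required input parameters" plus "various optional parameters" drawn from the query context.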

As per claim 7: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising providing users with customizable segmentation parameters and criteria within the SPoG UI, allowing for tailored segmentation analysis and insights delivery based on individual user preferences within the system for generating integrated insights 
Makhija shows in paragraph 98, 105: “the system includes pro-active detection algorithms for any record/transactions (items/Suppliers/PO/Invoices etc) being entered by a user (supplier/Customer/Employee etc) at the user interface. These will ensure that the Master tables are clean, accurate, complete and non-fraudulent/non-duplicate at any point in time and the data flowing through every single module or pipeline is clean and accurate. The master tables are stored in relational database 122a”. Reference Makhija paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “…“system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. 
The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.”…” a Query language tool (QL) 130, data governance & standardization/protocol layer 131”…” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
Makhija does not explicitly show “within the system for generating integrated insights”. Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col. 12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60; the supporting passage is quoted in full in the rejection above).
Reference Makhija and Reference Kadayam remain analogous prior art for the reasons stated above, and the motivation to combine Kadayam's customizable, preference-aware data insights with Makhija's real-time data collection, set forth above, applies equally to this claim; given the existing technical ability to combine the elements, the results of the combination were predictable (MPEP 2143 A).

As per claim 9: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising a Personalization and Recommendation Engine (PRE) module configured to leverage data from the RTDM module and insights generated by the AAML module to deliver highly tailored recommendations to users.
Makhija does not explicitly show “highly tailored recommendations”. Reference Kadayam shows the above limitations at least in col. 3, lines 31-52; col. 11, lines 54-65; col. 12, lines 46-54; col. 15, lines 39-45; col. 29, lines 22-29; col. 32, line 61 to col. 33, line 4; col. 34, lines 48-29; col. 35, lines 3-20 and 50-60 (the supporting passage is quoted in full in the rejection above).
Reference Makhija and Reference Kadayam remain analogous prior art for the reasons stated above, and the motivation to combine Kadayam's customizable, preference-aware data insights with Makhija's real-time data collection, set forth above, applies equally to this claim; given the existing technical ability to combine the elements, the results of the combination were predictable (MPEP 2143 A).

As per claim 10: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
wherein the PRE module employs a combination of collaborative filtering, content-based filtering, and matrix factorization techniques to analyze user preferences, historical interactions, and market trends to generate highly personalized recommendations for users.
Reference Makhija shows, in paragraphs 62-64, 92, 98, and 100-101, that the system detects changes in data, which indicates that the data is continuously received, compared, and synchronized. Reference Makhija does not explicitly show user preferences and market conditions; as such, Reference Makhija does not explicitly show “user preferences, historical interactions, and market trends to generate highly personalized recommendations for users.” 
Reference Kadayam shows the above limitations at least in Col. 3:31-52, col. 11:54-65, col. 12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60 (the supporting passage is quoted in full in the rejection above).
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]. Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references fall within the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives such as user preference and market conditions information (col. 3, lines 31-52, col. 11, lines 54-65, col. 12, lines 46-54, col. 15, lines 39-45, col. 29, lines 22-29, col. 32, line 61 to col. 33, line 4, col. 34, lines 48-29, col. 35, lines 3-20 and 50-60), into the disclosure of Reference Makhija, particularly the ability to collect real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that maintains a collection of "cognitive advisors" or recommendation models, wherein each recommendation model has certain required input parameters and produces a procurement recommendation, each recommendation model can also have various optional parameters to cover information that may be contained in the query context, and a recommendation model can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143 A).
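The "cognitive advisor" arrangement paraphrased above (a collection of recommendation models, each declaring required and optional input parameters and acting as a pretrained classifier over a query context) can be illustrated with a minimal sketch. All class, parameter, and advisor names below are hypothetical assumptions for illustration only; they are not drawn from either cited reference.

```python
# Minimal sketch of a registry of "cognitive advisors" (recommendation
# models). Each model declares required and optional input parameters
# and scores a query context; a model runs only when all of its
# required parameters are present. Names are illustrative only.

class CognitiveAdvisor:
    def __init__(self, name, required, optional=()):
        self.name = name
        self.required = set(required)
        self.optional = set(optional)

    def accepts(self, query_context):
        # A model can run only when every required parameter is present.
        return self.required <= query_context.keys()

    def recommend(self, query_context):
        # Stand-in for a pretrained classifier: count recognized inputs.
        known = self.required | self.optional
        score = sum(1 for k in query_context if k in known)
        return {"advisor": self.name, "score": score}

class AdvisorRegistry:
    def __init__(self):
        self.advisors = []

    def register(self, advisor):
        self.advisors.append(advisor)

    def query(self, query_context):
        # Run every advisor whose required parameters are satisfied.
        return [a.recommend(query_context)
                for a in self.advisors if a.accepts(query_context)]

registry = AdvisorRegistry()
registry.register(CognitiveAdvisor("org_preferences", ["user_id", "query"],
                                   ["region"]))
registry.register(CognitiveAdvisor("best_bets", ["query"], ["category"]))

# "org_preferences" is skipped (no user_id); "best_bets" runs.
results = registry.query({"query": "laptops", "category": "electronics"})
```

The optional-parameter set mirrors the quoted teaching that a model "can also have various optional parameters to cover information that may be contained in the query context."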

As per claim 11: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising a Real-Time Insights Delivery Module (RIDM) module configured to perform efficient delivery of insights to users within the SPoG UI, employing real-time data streaming technologies and event-driven architectures.  
Reference Makhija paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “…“system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.”…” a Query language tool (QL) 130, data governance & standardization/protocol layer 131”…” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
Makhija does not explicitly show “perform efficient delivery of insights”. Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… 215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”)
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]. Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references fall within the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives such as user preference and market conditions information (col. 3, lines 31-52, col. 11, lines 54-65, col. 12, lines 46-54, col. 15, lines 39-45, col. 29, lines 22-29, col. 32, line 61 to col. 33, line 4, col. 34, lines 48-29, col. 35, lines 3-20 and 50-60), into the disclosure of Reference Makhija, particularly the ability to collect real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that maintains a collection of "cognitive advisors" or recommendation models, wherein each recommendation model has certain required input parameters and produces a procurement recommendation, each recommendation model can also have various optional parameters to cover information that may be contained in the query context, and a recommendation model can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143 A).
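The learning-engine behavior quoted from Kadayam (clicks, add-to-cart selections, and purchases feeding back into recommendation quality for all users in the organization) can be sketched as a simple aggregation of implicit-feedback signals. The signal weights and identifiers below are hypothetical assumptions, not values taught by the cited references.

```python
# Illustrative sketch of a learning engine that folds the user-action
# signals described in the quoted passages (click, add-to-cart, and
# purchase) into per-recommendation engagement scores used for ranking.
# The weights are hypothetical, not taken from the cited references.

SIGNAL_WEIGHTS = {"click": 1.0, "add_to_cart": 3.0, "purchase": 10.0}

def update_scores(scores, events):
    """Fold a stream of (recommendation_id, action) events into scores."""
    for rec_id, action in events:
        scores[rec_id] = scores.get(rec_id, 0.0) + SIGNAL_WEIGHTS[action]
    return scores

def rank(scores):
    # Higher accumulated engagement ranks a recommendation earlier.
    return sorted(scores, key=scores.get, reverse=True)

events = [("bundle_a", "click"), ("bundle_b", "click"),
          ("bundle_b", "add_to_cart"), ("bundle_b", "purchase")]
scores = update_scores({}, events)
ordering = rank(scores)
```

An actual system would also condition on the full query context and user context, as the quoted passages emphasize; this sketch isolates only the action-signal aggregation step.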

As per claim 12: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
wherein the RIDM module supports various delivery channels, including push notifications, in-app messages, and email alerts, RIDM module configured to enable delivery of insights based on user preferences about format and channel.
Reference Makhija paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “…“system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.”…” a Query language tool (QL) 130, data governance & standardization/protocol layer 131”…” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.” Reference Makhija shows “push notifications, in-app messages, and email alerts” ([0062]-[0067]).
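The claim 12 limitation (delivery over push notifications, in-app messages, and email alerts, selected per user preference) describes a channel-dispatch pattern that can be sketched as follows. The handler functions, preference store, and user names are illustrative assumptions only.

```python
# Hedged sketch of an insight-delivery dispatcher supporting the three
# channels named in claim 12 (push notifications, in-app messages,
# email alerts), routed by a stored user preference. The handlers and
# preference store are illustrative stand-ins for real delivery services.

HANDLERS = {
    "push": lambda user, insight: f"push->{user}: {insight}",
    "in_app": lambda user, insight: f"in_app->{user}: {insight}",
    "email": lambda user, insight: f"email->{user}: {insight}",
}

PREFERENCES = {"alice": "email", "bob": "push"}

def deliver(user, insight, default_channel="in_app"):
    # Route by the user's stored channel preference, with a fallback.
    channel = PREFERENCES.get(user, default_channel)
    return HANDLERS[channel](user, insight)

msg = deliver("alice", "spend up 4% this week")
```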

As per claim 13: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
Wherein the FAM module enabling continuous evolution and improvement of the AI-powered integrated insights platform based on user feedback and changing market conditions, wherein the FAM module collects feedback from users through interactive interfaces within the SPoG UI, sentiment analysis of user interactions, and direct input mechanisms to dynamically adjust the algorithms and models within the AAML module, CVSE, and/or PRE module 
(Makhija: paragraph 38, 61-63, 70, 72, 90, “the recommended task/action includes auto adjust data for the plurality of functions, risk mitigation, removing duplicate entry, or direct interactions with the plurality of nodes. Further, the duplicate entry can be of any data existing in the EA and SCM applications, including but not limited to supplier, invoice, contract etc. “… “It also provides ability for end users to track life cycle and relation of entities in the system. Data Relation analytics (using Graph store) will help users view relation-first perspective of their data which is not possible in classical data model. Information will feed into Analytics and Dashboard 129, with a view getting mode insights. Graph algorithms library will also provide the ability to detect hard-to-find or complex patterns and structures in supply chain data model.” …”  It collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc. The Curation including selection and organization of data takes place through capturing metadata and lineage and making it available in a data catalog.” …” The data flows in the data lake in real-time processing through event stream layer. Domain Model exposed through the query language (QL) tool 130 enables user to self-serve their data and analytical requirements. 
Models developed by users are utilized to improve the insights for future purpose.”…” the plurality of distinct data sources includes internet of things (IOT), demand from various sources at different levels like retailers, distribution channels, POS systems, customer feedback, supplier collaboration platform, invoices, purchase orders (PO), finance modules, inventory management module, contracts and RFx module, supplier module, item master, bill of materials, vendor master, warehouse management module, logistics management module, social media, weather, real time commodity and stock market prices, geo-political news etc. It shall be apparent to a person skilled in the art that the data source may include other source within the scope of the present invention” …” the EA and SCM applications include a plurality of nodes at the data source layer 127 like inventory, logistics, warehouse, procurement, customers, supplier, retailers, distributors, resellers, co-packers and transportation wherein the nodes interact with each other to structure the plurality of functions associated with the applications.”…” The interaction and data exchange between the service provide and subscriber is through the API gateway 133, event management block 134 and routers 137.”).  
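The claim 13 limitation (user feedback dynamically adjusting downstream algorithms) reduces, at its simplest, to a feedback loop that nudges a model parameter in the direction of aggregated sentiment. The update rule, learning rate, and numeric sentiment encoding below are hypothetical assumptions, not features of the cited references.

```python
# Illustrative feedback loop in the spirit of claim 13: user feedback,
# reduced here to numeric sentiment scores in [-1, 1], nudges a weight
# used by a downstream scoring module. Rule and rate are hypothetical.

def adjust_weight(weight, feedback_scores, learning_rate=0.1):
    """Move the weight toward positive feedback, away from negative."""
    for s in feedback_scores:
        weight += learning_rate * s
    return round(weight, 6)

# Two positive signals and one mildly negative one raise the weight.
w = adjust_weight(1.0, [0.5, -0.2, 1.0])
```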

As per claim 14: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
wherein the RTDM module is configured to perform one or more extract, transform, and load (ETL) processes and data normalization techniques to generate uniform, accessible data
(Makhija: paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “…“system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.” …” a Query language tool (QL) 130, data governance & standardization/protocol layer 131” …” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”)
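The ETL and normalization behavior recited in claim 14 and described in the quoted Makhija passages (standardizing and grouping data from disconnected sources, removing duplicates, loading into master tables) can be sketched as a small pipeline. Field names and the cleaning rules are illustrative assumptions only.

```python
# Minimal extract-transform-load sketch matching the claim 14 language:
# records from disconnected sources are standardized (transform) and
# de-duplicated before loading into a uniform table. Field names and
# cleaning rules are illustrative only.

def transform(record):
    # Standardize casing and whitespace so duplicates become detectable.
    return {"supplier": record["supplier"].strip().lower(),
            "item": record["item"].strip().lower()}

def load(records):
    seen, table = set(), []
    for rec in map(transform, records):
        key = (rec["supplier"], rec["item"])
        if key not in seen:          # drop corrupt/duplicate entries
            seen.add(key)
            table.append(rec)
    return table

raw = [{"supplier": " Acme ", "item": "Bolt"},
       {"supplier": "acme", "item": "bolt"},     # duplicate after cleaning
       {"supplier": "Globex", "item": "Nut"}]
table = load(raw)
```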

As per claim 15: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
wherein the AAML module adapts algorithms based on continuous feedback loops, refining precision of AAML module processes over time to enhance relevance of generated insights.
Makhija shows the above limitation at least in: paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “… “system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.” …” a Query language tool (QL) 130, data governance & standardization/protocol layer 131” …” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
Makhija shows in paragraph 98, 105: “the system includes pro-active detection algorithms for any record/transactions (items/Suppliers/PO/Invoices etc) being entered by a user (supplier/Customer/Employee etc) at the user interface. These will ensure that the Master tables are clean, accurate, complete and non-fraudulent/non-duplicate at any point in time and the data flowing through every single module or pipeline is clean and accurate. The master tables are stored in relational database 122a”. Reference Makhija paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “…“system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. 
The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.”…” a Query language tool (QL) 130, data governance & standardization/protocol layer 131”…” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
Makhija does not explicitly show “within the system for generating integrated insights”. Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… 215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”)
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]. Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references are part of the same classification, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives like user preference and market conditions information (Col. 3, lines 31-52, col. 11, lines 54-65, col. 12, lines 46-54, col. 15, lines 39-45, col. 29, lines 22-29, col. 32, line 61 to col. 33, line 4, col. 34, lines 48-29, col. 35, lines 3-20 and 50-60), in the disclosure of Reference Makhija, particularly the ability to collect real-time data (paragraph 62-64, 92, 98, 100-101), in order to provide a system that is programmed to maintain a collection of “cognitive advisors” or recommendation models, where each recommendation model has certain required input parameters and produces a procurement recommendation, each recommendation model can also have various optional parameters to cover information that may be contained in the query context, and a recommendation model can be pretrained on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143(A)).

As per claim 16: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
A computerized method for delivering role-specific insights based on real-time data, comprising: 
Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
receiving, by a Personalization and Recommendation Engine (PRE) module, normalized data associated with a user entity, the normalized data comprising at least one of historical interaction data, transactional records, contextual signals, or system-generated behavioral features
Makhija shows the above limitation at least in: paragraph 46, 53-54, 61-62, “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “… “system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.” …” a Query language tool (QL) 130, data governance & standardization/protocol layer 131” …” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
Makhija shows in paragraph 98, 105: “the system includes pro-active detection algorithms for any record/transactions (items/Suppliers/PO/Invoices etc) being entered by a user (supplier/Customer/Employee etc) at the user interface. These will ensure that the Master tables are clean, accurate, complete and non-fraudulent/non-duplicate at any point in time and the data flowing through every single module or pipeline is clean and accurate. The master tables are stored in relational database 122a”. Makhija further shows paragraph 46, 53-54, 61-62, as quoted above.
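For illustration only, and not as part of the prior-art record, the de-duplication and standardization pass that Makhija's "data cleansing and normalization engine" describes might be sketched as follows; all identifiers here are hypothetical:

```python
# Hypothetical sketch of a de-duplication and standardization pass of the
# kind Makhija's cleansing engine describes. Identifiers (standardize,
# clean_records) are illustrative only, not drawn from the reference.

def standardize(record):
    """Normalize field formatting so equivalent records compare equal."""
    return {key: str(value).strip().lower() for key, value in record.items()}

def clean_records(records, key_fields):
    """Standardize records, then drop duplicates on the given key fields."""
    seen, cleaned = set(), []
    for record in records:
        normalized = standardize(record)
        key = tuple(normalized.get(field) for field in key_fields)
        if key not in seen:  # skip entries that duplicate a prior record
            seen.add(key)
            cleaned.append(normalized)
    return cleaned

suppliers = [
    {"name": "ACME Corp ", "id": "S1"},
    {"name": "acme corp", "id": "S1"},  # duplicate once standardized
    {"name": "Widget Co", "id": "S2"},
]
cleaned = clean_records(suppliers, ["name", "id"])
```

A real cleansing engine of the kind quoted above would also route corrected records for approval before writing them to master tables; this sketch shows only the duplicate-removal step.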
Makhija does not explicitly show “or system-generated behavioral features”. Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
… The remainder of this quotation is identical to the Kadayam passage reproduced above.”)
Reference Makhija and Reference Kadayam are analogous prior art, and the rationale for combining their teachings, including the predictable-results rationale under MPEP 2143(A), is the same as set forth above.
Makhija in view of Kadayam does not explicitly show “historical interaction data, transactional records, contextual signals, or system-generated behavioral features”. Reference Vogler shows the above limitation at least in (paragraph 35-37 “The EWA 130 can apply a predetermined set of rules, or alternatively, the EWA 130 can include artificial intelligence logic that enables the EWA 130 to adapt its behavior in response to current or historical inventory patterns. The artificial intelligence logic enables the EWA to estimate potential variation in inventory levels in the near future in order to identify potentially risky situations early enough to allow for corrective measures. For example, initially the rules may specify that an alert should be fired when the inventory drops below 10. However, if the EWA 130 detects that it sends alerts much more frequently during the summer season than during other seasons, the EWA 130 may adapt to this seasonal variation by increasing the threshold from 10 to 20 during the summer season so that the inventory planner 140 is notified earlier of the impending inventory shortage. This adaptive behavior occurs with minimal human intervention, and with minimal need of parameter adjustment or any other kind of manual calibration. [0036] The EWA 130 can retrieve and analyze current and historical inventory data to detect trends such as deviations between planned replenishment and actual replenishment and to build a predictive model of future inventory needs. These trends and predictions can be determined using linear regression, classification and regression trees, or other stochastic algorithms. [0037]: The EWA 130 combines the estimates of potential variation for several individual activities into an estimate of the potential variation for an entire inventory. These algorithms can be implemented using decision trees such as classification and regression trees.”).
Reference Makhija and Reference Vogler are analogous prior art to the claimed invention because the references generally relate to the field of tracking items (Makhija: [0011], [0050]-[0051], [0079]-[0085]: tracking module. Vogler: [0021]-[0022], [0026], [0031]). Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Vogler, particularly the ability to use decision trees and related predictive models ([0035]-[0037]), in the disclosure of Reference Makhija, particularly the ability to collect and analyze real-time data (paragraph 62-64, 92, 98, 100-101), in order to provide a system that uses data analysis methods like decision trees to allow suppliers to keep track of how much inventory they have and how much inventory they have distributed to particular retailers; periodically, the retailer reports the current store inventory level to the supplier, and based on the report the supplier determines whether the store inventory needs to be replenished, as taught by Reference Vogler (see at least [0004]), so that the process of tracking items can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar item-tracking field of endeavor; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Vogler, the results of the combination were predictable (MPEP 2143(A)).
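As an illustrative aside (a hypothetical sketch, not code disclosed by either reference), the adaptive alert threshold Vogler [0035] describes, where the early-warning agent raises its inventory threshold when alerts fire unusually often, could be approximated as:

```python
# Hypothetical sketch of Vogler's adaptive early-warning behavior ([0035]):
# fire an alert when inventory drops below a threshold, and raise the
# threshold when alerts fire too often over a review window. The class
# name and parameters are assumptions made for illustration.

class EarlyWarningAgent:
    def __init__(self, threshold=10, window=30, alert_rate_limit=0.5):
        self.threshold = threshold
        self.window = window                  # observations per review period
        self.alert_rate_limit = alert_rate_limit
        self.history = []                     # 1 if an alert fired, else 0

    def observe(self, inventory_level):
        """Record one inventory reading; return whether an alert fires."""
        alert = inventory_level < self.threshold
        self.history.append(1 if alert else 0)
        if len(self.history) >= self.window:
            rate = sum(self.history) / len(self.history)
            if rate > self.alert_rate_limit:  # e.g., a busy summer season
                self.threshold *= 2           # notify the planner earlier
            self.history.clear()
        return alert

agent = EarlyWarningAgent(threshold=10, window=5)
alerts = [agent.observe(level) for level in [4, 6, 3, 9, 2]]
```

Vogler also mentions classification and regression trees for forecasting inventory needs; the rule above is only the simplest rule-based variant of the adaptive behavior described.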
Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
generating, by the PRE module, one or more role-specific insights for the user entity based at least in part on a role context associated with the user entity and one or more trained machine learning algorithms 
Makhija does not explicitly show a “Personalization and Recommendation Engine”. Reference Kadayam shows the above limitations at least in Col. 3, lines 31-52, col. 11, lines 54-65, col. 12, lines 46-54, col. 15, lines 39-45, col. 29, lines 22-29, col. 32, line 61 to col. 33, line 4, col. 34, lines 48-29, and col. 35, lines 3-20 and 50-60, as quoted in full above.
Reference Makhija and Reference Kadayam are analogous prior art, and the rationale for combining their teachings, including the predictable-results rationale under MPEP 2143(A), is the same as set forth above.
Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
delivering, by a Single Pane of Glass (SpOG) User Interface executing on a client device, the one or more role-specific insights to the user entity, wherein the SpOG User Interface is configured to:
dynamically present the insights based on the role context associated with the user entity;
initiate delivery of the insights in response to an event condition, the event condition comprising a change in market conditions, user behavior, or generation of the insight;
utilize one or more delivery mechanisms selected from push notifications, in- application messages, or email alerts; and
collect user interaction feedback and sentiment signals and transmit the collected feedback to a Feedback and Adaptation Mechanism (FAM) module; and
updating, by the FAM module, one or more operational parameters of a machine learning algorithm within the PRE module based on the collected user interaction feedback and sentiment signals, wherein the updated parameters influence subsequent insight generation in real time 
Reference Makhija shows the above limitations at least in “delivering personalized recommendations to users via a Single Pane of Glass User Interface (SPoG UI), wherein the SPoG UI is configured to:” (paragraph 62-64, 92, 98, 100-101: the system detects changes in data, which indicates that the data is continuously being received, compared, and synchronized). Makhija also shows: paragraph 56-60, “The simulation UI 130a enables user to draft statements/query as per underlined model provided through intelligent sensing. The Translator 130 uses NLP and domain specific nomenclature repository, to tokenize query string received from user. Tokenizer takes a sequence of characters and output a sequence of tokens. It will analyze character by character, using multiple levels of lookahead in order to identity what token is currently being examine. The Code Generator 130c extracts Keywords and tokens that are used to generate underlying Machine Learning query and big data query. The Mapper is responsible to generate code and the model 130d utilized domain attributes, Synonyms and tokens.”…” the tool includes an AI based prediction and recommendation engine coupled to a processor configured for processing at least one prediction algorithm to generate at least one recommendation option/task/action in real time.” …” the tool is configured to attach the recommended task/action to a desired workflow or User interface element or set of rules or validations”.
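Purely as an illustration of the feedback loop the claim recites (and not a disclosure of either reference), a FAM-style online update of recommendation-scoring weights from engagement signals might be sketched as follows; the signal names, learning rate, and update rule are all assumptions:

```python
# Hypothetical sketch: feedback signals (clicks, add-to-cart, purchases)
# nudging the scoring weights a recommendation engine uses, in the spirit
# of the claimed FAM module updating PRE parameters. Illustrative only.

def update_weights(weights, feedback, lr=0.1):
    """Move each weight a small step toward its observed engagement signal."""
    return {signal: w + lr * (feedback.get(signal, 0.0) - w)
            for signal, w in weights.items()}

# Start from neutral weights, then fold in one batch of observed feedback.
weights = {"click": 0.5, "add_to_cart": 0.5, "purchase": 0.5}
feedback = {"click": 1.0, "add_to_cart": 0.0, "purchase": 1.0}
weights = update_weights(weights, feedback)
```

Repeated applications of such an update would shift subsequent insight scoring toward the signals users actually engage with, which is the kind of real-time parameter adjustment the limitation describes.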
However, Makhija in view of Kadayam does not explicitly show “market data analysis, customer segmentation, and/or predictive analytics”. Reference Vogler shows the above limitation at least in (paragraph 35-37 “The EWA 130 can apply a predetermined set of rules, or alternatively, the EWA 130 can include artificial intelligence logic that enables the EWA 130 to adapt its behavior in response to current or historical inventory patterns. The artificial intelligence logic enables the EWA to estimate potential variation in inventory levels in the near future in order to identify potentially risky situations early enough to allow for corrective measures. For example, initially the rules may specify that an alert should be fired when the inventory drops below 10. However, if the EWA 130 detects that it sends alerts much more frequently during the summer season than during other seasons, the EWA 130 may adapt to this seasonal variation by increasing the threshold from 10 to 20 during the summer season so that the inventory planner 140 is notified earlier of the impending inventory shortage. This adaptive behavior occurs with minimal human intervention, and with minimal need of parameter adjustment or any other kind of manual calibration. [0036] The EWA 130 can retrieve and analyze current and historical inventory data to detect trends such as deviations between planned replenishment and actual replenishment and to build a predictive model of future inventory needs. These trends and predictions can be determined using linear regression, classification and regression trees, or other stochastic algorithms. [0037]: The EWA 130 combines the estimates of potential variation for several individual activities into an estimate of the potential variation for an entire inventory. These algorithms can be implemented using decision trees such as classification and regression trees.”).
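The adaptive alerting behavior Vogler describes in [0035] (raising the inventory alert threshold from 10 to 20 during a season in which alerts fire unusually often) can be sketched as follows. This is a minimal illustration under stated assumptions; the "more than twice the average" trigger and the doubling heuristic are hypothetical, chosen only to reproduce Vogler's 10-to-20 example, and are not Vogler's actual EWA 130 logic.

```python
def adapted_threshold(base, alerts_by_season, season):
    """Raise the alert threshold for a season with an unusually high alert rate."""
    avg = sum(alerts_by_season.values()) / len(alerts_by_season)
    if alerts_by_season[season] > 2 * avg:
        # this season historically fires far more alerts than average,
        # so warn the inventory planner earlier by raising the threshold
        return base * 2
    return base

history = {"spring": 2, "summer": 12, "autumn": 2, "winter": 2}
adapted_threshold(10, history, "summer")  # -> 20
adapted_threshold(10, history, "winter")  # -> 10
```

This adaptation requires no manual recalibration, matching the "minimal human intervention" behavior the quoted paragraph attributes to the EWA.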
Reference Makhija and Reference Vogler are analogous prior art to the claimed invention because the references generally relate to the field of tracking items (Makhija: [0011], [0050]-[0051], [0079]-[0085]: tracking module. Vogler: [0021]-[0022], [0026], [0031]). Lastly, said references were filed before the effective filing date of the instant application and therefore qualify as prior art.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Vogler, particularly the ability to use decision trees and related predictive algorithms ([0035]-[0037]), in the disclosure of Reference Makhija, particularly the ability to collect and analyze real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that uses data analysis methods such as decision trees to allow suppliers to keep track of how much inventory they have and how much inventory they have distributed to particular retailers. Periodically, the retailer reports to the supplier the current inventory level of the store, and based on the report the supplier determines whether the store inventory needs to be replenished, as taught by Reference Vogler (see at least [0004]), so that the process of tracking items can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (tracking items), and in the combination each element merely would have performed the same function as it did separately. One of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Vogler, the results of the combination were predictable (MPEP 2143 A).
Reference Makhija shows in paragraphs 62-64, 92, 98, 100-101 that the system detects changes in data, which indicates that the data is continuously being received, compared, and synchronized. Reference Makhija does not explicitly show monitoring user preferences and market conditions; as such, Reference Makhija does not explicitly show “user feedback” and “generating integrated insights”.
Reference Kadayam shows the above limitations at least in (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… 215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”
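The implicit-feedback learning that Kadayam's quoted "learning engine" passages describe (clicks, add-to-cart selections, and purchases feeding back into recommendation quality) can be sketched as follows. The specific signal weights are assumptions for illustration; Kadayam does not disclose numeric weights, only that these escalating user actions are used to improve future recommendations.

```python
# Hypothetical weights: a purchase is a stronger affinity signal than an
# add-to-cart, which is stronger than a click (assumed, not from Kadayam).
SIGNAL_WEIGHTS = {"click": 1.0, "add_to_cart": 3.0, "purchase": 10.0}

def rescore(base_score, actions):
    """Boost a recommendation's score from observed user actions on it."""
    return base_score + sum(SIGNAL_WEIGHTS[a] for a in actions)

rescore(5.0, ["click", "add_to_cart", "purchase"])  # -> 19.0
```

In Kadayam's framing, the boosted score would feed back into the ranking that a cognitive advisor presents to all users in the organization, so that frequently selected recommendations rise over time.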
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]. Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references fall within the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application and therefore qualify as prior art.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives such as user preference and market conditions information (Col. 3, lines 31-52; col. 11, lines 54-65; col. 12, lines 46-54; col. 15, lines 39-45; col. 29, lines 22-29; col. 32, line 61 to col. 33, line 4; col. 34, lines 48-29; col. 35, lines 3-20 and 50-60), in the disclosure of Reference Makhija, particularly the ability to collect real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system programmed to maintain a collection of “cognitive advisors” or recommendation models, in which each recommendation model has certain required input parameters and produces a procurement recommendation, can have various optional parameters to cover information that may be contained in the query context, and can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (workflow processing), and in the combination each element merely would have performed the same function as it did separately. One of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143 A).
Reference Makhija shows in paragraphs 62-64, 92, 98, 100-101 that the system detects changes in data, which indicates that the data is continuously being received, compared, and synchronized. Reference Makhija shows demographic information ([0104]: Demand planning S504 allows determination of a demand for the item or product considering various factors like customer base, consumption, density of population in a geographic location, etc.). Makhija also shows transaction history ([0098]: the system includes pro-active detection algorithms for any record/transactions (items/Suppliers/PO/Invoices etc) being entered by a user (supplier/Customer/Employee etc) at the user interface).
However, Makhija in view of Kadayam does not explicitly show “market data analysis, customer segmentation, and/or predictive analytics”. Reference Vogler shows the above limitation at least in paragraphs 35-37, as quoted above.
Makhija does not explicitly show monitoring user preferences and market conditions. Reference Kadayam shows the above limitations at least in Col. 3, lines 31-52; col. 11, lines 54-65 (purchasing behaviors); col. 12, lines 46-54; col. 15, lines 39-45; col. 25, lines 55-67 (market trends); col. 29, lines 22-29; col. 32, line 61 to col. 33, line 4; col. 34, lines 48-29; and col. 35, lines 3-20 and 50-60, as quoted above.
Reference Makhija and Reference Kadayam are analogous prior art, and the combination of their teachings would have been obvious, for the same reasons set forth above (MPEP 2143 A).

As per claim 17: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
wherein the delivering comprises providing users with an opportunity to provide feedback regarding relevance of presented insights.  
Reference Makhija shows in paragraphs 62-64, 92, 98, 100-101 that the system detects changes in data, which indicates that the data is continuously being received, compared, and synchronized. Reference Makhija does not explicitly show monitoring user preferences and market conditions; as such, Reference Makhija does not explicitly show “user feedback” and “generating integrated insights”.
Reference Kadayam shows the above limitations at least in Col. 3:31-52, col. 11:54-65, col. 12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, and col. 35:3-20 and 50-60, as quoted above.
Reference Makhija and Reference Kadayam are analogous prior art, and the combination of their teachings would have been obvious, for the same reasons set forth above (MPEP 2143 A).

As per claim 18: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising monitoring the effectiveness of recommendations and dynamically adjusting algorithms and models based on user feedback.  
Reference Makhija shows in paragraphs 46, 53-54, and 61-62: “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “…“system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.”…” a Query language tool (QL) 130, data governance & standardization/protocol layer 131”…” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
Makhija does not explicitly show “monitoring the effectiveness of recommendations”. Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… 215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”)
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]; Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references fall within the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives such as user preference and market conditions information (col. 3, lines 31-52, col. 11, lines 54-65, col. 12, lines 46-54, col. 15, lines 39-45, col. 29, lines 22-29, col. 32, line 61 to col. 33, line 4, col. 34, lines 48-29, col. 35, lines 3-20 and 50-60), in the disclosure of Reference Makhija, particularly the ability to collect real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that is programmed to maintain a collection of “cognitive advisors” or recommendation models, wherein each recommendation model has certain required input parameters and produces a procurement recommendation, each recommendation model can also have various optional parameters to cover information that may be contained in the query context, and a recommendation model can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143 A).

As per claim 19: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising refining segmentation strategies based on updated data and evolving market dynamics, to enable continuous improvement and adaptation of the personalization and recommendation engine. 
Reference Makhija shows in paragraphs 46, 53-54, and 61-62: “the data cleansing and normalization engine 116 is configured to clean data received at the data lake in real time using natural language processing and machine learning algorithms for enhanced accuracy. Since, the data will be received from multiple disconnected sources, the engine 116 has an ability to remove duplicates, standardize and group the data. The cleansing engine is coupled to a data mapper and curator engine. The engine 116 detects and corrects Corrupt or duplicate or vague data. Further, the cleansed data is sent for approval through a routing mechanism post which they are stored in master data tables of the data lake “…“system layer architecture diagrams with data lake/platform (100B, 100c) of AI based self-driven ERP and SCM system is shown in accordance with an embodiment of the present invention. The system 100a includes a plurality of distinct data source layer 127 to capture all customer, factory, supplier, machine and third-party sources of data (both structured and unstructured), the data lake layer 108 storing all data received from the distinct data source layer 127, an application function layer 128 configured to re-calibrate functions based on data models and scripts generated by a bot. The data models are auto-generated based on change in attribute of the received data to determine the impact of the change on the functions of the one or more applications.”…” a Query language tool (QL) 130, data governance & standardization/protocol layer 131”…” collects data from diverse sources, acts a gateway and identifies data attributes to be extracted from application event. The curator engine 132b with the help of mapper and ingestion module 132a stores the received data in multiple type stores viz, the search store for advance search, graph for data and relations, flat structure for logs purpose etc.”
Makhija does not explicitly show “to enable continuous improvement and adaptation of the personalization and recommendation engine”. Kadayam shows the above limitations (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… 215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”)
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]; Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references fall within the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives such as user preference and market conditions information (col. 3, lines 31-52, col. 11, lines 54-65, col. 12, lines 46-54, col. 15, lines 39-45, col. 29, lines 22-29, col. 32, line 61 to col. 33, line 4, col. 34, lines 48-29, col. 35, lines 3-20 and 50-60), in the disclosure of Reference Makhija, particularly the ability to collect real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that is programmed to maintain a collection of “cognitive advisors” or recommendation models, wherein each recommendation model has certain required input parameters and produces a procurement recommendation, each recommendation model can also have various optional parameters to cover information that may be contained in the query context, and a recommendation model can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143 A).

As per claim 20: Regarding the claim limitations below, Reference Makhija in view of Reference Kadayam and Reference Vogler shows:
further comprising logging transaction details related to the generated one or more insights generated within the platform, wherein the transaction details are logged to facilitate optimization of the Personalization and Recommendation Engine and/or the algorithms, wherein the transaction details include user interactions, segmentation results, and/or feedback. 
Reference Makhija shows in paragraphs 62-64, 92, 98, and 100-101 that the system detects changes in data, which indicates that the data is continuously being received, compared, and synchronized. Reference Makhija does not explicitly show the user's preferences and market conditions; as such, Reference Makhija does not explicitly show “user feedback” and “generating integrated insights”.
Reference Kadayam shows the above limitations at least in (Col. 3:31-52, col. 11:54-65, col.12:46-54, col. 15:39-45, col. 29:22-29, col. 32:61 to col. 33:4, col. 34:48-29, col. 35:3-20 and 50-60, “The system is programmed to further store at least some of the collected data in a database. The data can be available at various granularities. For example, for suppliers or buyers, the data can be related to industries, organizations, departments, or individuals; for products, the data can be related to industries, categories, makes, or models. The data can be collected directly from supplier accounts or buyer accounts or from external data sources. The data can be collected in response to processing user queries, as further discussed below, or during ordinary user online activities. The data can be collected from explicit user input through graphical user interfaces, such as a button that when pushed indicates a vote for a particular product as being a good match for another product or a comment box configured to accept supplier reviews, or from implicit user behavior that indicates an affinity for parties or items, such as putting a particular item in a shopping cart or paying invoices to a particular supplier within a specific period of time. (89) In some embodiments, a collective product wisdom dataset may be updated continuously in real-time to reflect the up-to-the-moment selection/purchase activity. The Adaptive Navigation user experience may be built on top of this dynamically changing dataset about the continuously evolving nature of product knowledge and product preferences in the organization. In some embodiments, this may be used to provide a user experience of browsing through a marketplace that is very dynamic and adapts in real-time. 
Compared to the conventional approach of using all marketplace product data to build a static, and generic product browsing experience, an Adaptive Navigation user experience may be naturally tailored to the company's own product preference and product purchasing behaviors… an Organization Preferences Cognitive Advisor may also have a learning engine inside. This learning engine learns user behavior from actions taken by users to whom recommendations from this Cognitive Advisor have been delivered. These actions may include, for example, that a user clicks on an Organization Preference recommendation, a user selects an Organization Preference recommendation for adding to cart, or a user selects an Organization Preference recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Organization Preferences recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. The Organization Preferences Cognitive Advisor taps into the Collective Intelligence of the organization on product selections and purchases, to provide high quality, reliable recommendations for the user doing the queries. (211) The Best Bets learning engine learns user behavior from actions taken by users to whom the Best Bets recommendations have been delivered. These actions may include, for example, that a user clicks on a Best Bet recommendation, a user selects a Best Bet recommendation for adding to cart, or a user selects a Best Bet recommendation for adding to cart followed by an actual purchase. 
The entire query context and user context are taken into account, along with the signals above of user actions from Best Bets recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization… 215) The learning engine underneath the Bundles Cognitive Advisor 2918 also learns user behavior from actions taken by users to whom the Bundles recommendations have been delivered. These actions may include, for example, that a user clicks on a Bundle recommendation, a user selects a Bundle recommendation for adding to cart, or a user selects a Bundle recommendation for adding to cart followed by an actual purchase. The entire query context and user context are taken into account, along with the signals above of user actions from Bundles recommendations, for learning and improving the quality of the recommendations made to all users overall within the organization. In addition, the highly active bundles for a given query in a given category in a given region, can be additional data for the Global Item Master, potentially driving recommendations for new Bundle creation or Bundle enhancement for this organization or other organizations overall (244) Often what occurs is that users' shopping patterns don't match with the strategy and contracts set up by Procurement Buyers in different categories; i.e. the user shopping behavior can be said to be non-compliant, and this does not help to tap into procurement rules and expectations, and in turn, the savings are not actualized. A few specific situations would need to be addressed in this regard. For example, procurement buyers would have to provide actual contract information to the system so that the information contained in it can be used in real-time by the system. Systems implemented based on this disclosure may provide the means for the procurement buyers to do so. 
As another example, Suppliers would need to be carefully organized into categories, and tagged appropriately for their specific attributes (e.g. minority supplier, woman-owned supplier, veteran supplier etc.), so that the guided buying capability of an e-procurement system can operate as expected, suitably rank ordering product selections in the universal search experience. A system implemented based on this disclosure may provide the tools for this to be setup correctly.”)
Reference Makhija and Reference Kadayam are analogous prior art to the claimed invention because the references generally relate to the field of workflow processing (Makhija: [0015], [0065]; Kadayam: col. 15, lines 28-38, col. 20, lines 60-67, col. 27, lines 53-57, col. 40-67). Further, said references fall within the same classifications, i.e., G06Q and G06F. Lastly, said references were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Kadayam, particularly the ability to customize data insights using additional perspectives such as user preference and market conditions information (col. 3, lines 31-52, col. 11, lines 54-65, col. 12, lines 46-54, col. 15, lines 39-45, col. 29, lines 22-29, col. 32, line 61 to col. 33, line 4, col. 34, lines 48-29, col. 35, lines 3-20 and 50-60), in the disclosure of Reference Makhija, particularly the ability to collect real-time data (paragraphs 62-64, 92, 98, 100-101), in order to provide a system that is programmed to maintain a collection of “cognitive advisors” or recommendation models, wherein each recommendation model has certain required input parameters and produces a procurement recommendation, each recommendation model can also have various optional parameters to cover information that may be contained in the query context, and a recommendation model can be pretrained based on representative data in the database with machine learning techniques known to one skilled in the art, in which case the recommendation model acts as a classifier, as taught by Reference Kadayam (see at least col. 3, lines 54-65), so that the process of managing workflow processing can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar workflow processing field of endeavor, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Makhija in view of Reference Kadayam, the results of the combination were predictable (MPEP 2143 A).

				Response to Arguments
Applicants’ arguments are moot in view of the new grounds of rejection necessitated by the amendments made to previously presented claims.
Applicant’s Argument #1
Applicants argue on pages 11-13 of Applicants' remarks that "For example, claim 1 as amended recites updating, by a Feedback and Adaptation Mechanism (FAM) module communicatively coupled to the SpoG User Interface and to at least … Here, the claims recite a specific architectural configuration that enables real-time AI-enhanced segmentation and insights generation, not the use of a computer as a tool. The modules are recited in functional detail, and the RTDM and AAML modules are described as tightly integrated, continuously adaptive components central to improving the relevance and precision of output insights. This constitutes a concrete improvement to the functioning of computer-based data segmentation systems … Furthermore, the claims are analogous to those in Ancora Technologies Inc. v. HTC America Inc., 908 F.3d 1343 (Fed. Cir. 2018), where the court held that "improving the security of a computer system via a non-abstract technique is an improvement in computer functionality." Here, the use of real-time feedback loops and adaptive ML algorithms via integrated processing modules constitutes a specific and structured technological solution that goes well beyond mere abstraction. Accordingly, the claims are not directed to an abstract idea under Step 2A, and even if they were, the combination of specifically-configured modules imposes meaningful limitations that provide "significantly more" under Step 2B of the Alice framework. Therefore, Applicant requests that the Examiner withdraw the rejections under § 101 and pass all claims to allowance." (see Applicants' remarks for more details).
Response to Argument #1
Applicants' arguments have been fully considered; however, the examiner respectfully disagrees.
The claim limitations argued by Applicants do not amount to a practical application, because the additional elements are recited such that they amount to no more than: adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Please see the 101 rejection above for more details.
This is especially true where the claims recite training a machine learning model without giving details of how the model works. Without such details, the recitation is still merely "apply it".
Thus, the additional elements do not integrate the abstract idea into practical application because they do not impose any meaningful limitations on practicing the abstract idea. As a result, claims 1, 8 and 15 do not provide any specifics regarding the integration into a practical application when recited in a claim with a judicial exception. See MPEP 2106.05(f).
The additional elements include a "machine learning model" and "AI-powered" or "AI-driven" components. This language merely requires execution of an algorithm that can be performed by a generic computer component and provides no detail regarding the operation of that algorithm. As such, the claim requirement amounts to mere instructions to implement the abstract idea on a computer and, therefore, is not sufficient to make the claim patent eligible. See Alice, 573 U.S. at 226 (determining that the claim limitations "data processing system," "communications controller," and "data storage unit" were generic computer components that amounted to mere instructions to implement the abstract idea on a computer); October 2019 Guidance Update at 11-12 (recitation of generic computer limitations for implementing the abstract idea "would not be sufficient to demonstrate integration of a judicial exception into a practical application"). Such a generic recitation of "machine learning model" is insufficient to show a practical application of the recited abstract idea. All of these additional elements are not significantly more because these, again, are merely the software and/or hardware components used to implement the abstract idea on a general-purpose computer.

Applicant’s Argument #2
Applicants argue on pages 11-13 of Applicants' remarks that "As noted above, claim 1 as amended recites updating, by a Feedback and Adaptation Mechanism (FAM) module communicatively coupled to the SpoG User Interface and to at least one of the AAML module, CVSE, or PRE module, one or more operational parameters of a machine learning algorithm based on user interaction feedback and sentiment data received through the SpoG User Interface, wherein the updated operational parameters influence subsequent segmentation or insight generation in real time. Claims 8 and 16 have been similarly amended. The applied references do not disclose the features of the amended claims. For example, Makhija does not disclose or suggest collecting feedback from a user interface, performing sentiment analysis, or dynamically updating any algorithm or model in real time during system operation. Makhija's architecture is explicitly batch-oriented and closed-loop, designed for controlled, curated output, not real-time adaptation." (see Applicants' remarks for more details).
Response to Argument #2
Applicants' arguments have been fully considered; however, the examiner respectfully disagrees. Applicants’ arguments are moot in view of the new grounds of rejection necessitated by the amendments made to previously presented claims. 

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
NPL Reference:
Biswas et al., "A proposed architecture for big data driven supply chain analytics," IUP Journal of Supply Chain Management, Vol. XIII, No. 3 (2016), pp. 7-34. https://arxiv.org/abs/1705.04958.

Foreign Reference:
(KR20070057806A) Beckerle et al. This reference recites an architecture for building and managing data integration processes. The architecture may provide modularity and extensibility to many aspects of the integration design process, including user interfaces, programmatic interfaces, services, components, runtime engines, and external connectors. The architecture may also employ a common integrated metadata sharing approach throughout the design process to enable seamless transitions between various phases of the design and implementation of a data integration process.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY PRASAD whose telephone number is (571)270-3265. The examiner can normally be reached M-F: 8:00 AM - 4:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached on (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.





/N.N.P/Examiner, Art Unit 3624
/PATRICIA H MUNSON/Supervisory Patent Examiner, Art Unit 3624