
Patent Application 18096556 - METHOD AND SYSTEM FOR QUALITY INSPECTION - Rejection

From WikiPatents


Application Information

  • Invention Title: METHOD AND SYSTEM FOR QUALITY INSPECTION
  • Application Number: 18096556
  • Submission Date: 2025-05-12
  • Effective Filing Date: 2023-01-12
  • Filing Date: 2023-01-12
  • National Class: 700
  • National Sub-Class: 109000
  • Examiner Employee Number: 94029
  • Art Unit: 2116
  • Tech Center: 2100

Rejection Summary

  • 102 Rejections: 1
  • 103 Rejections: 1

Cited Patents

No patents were cited in this rejection.

Office Action Text


    DETAILED ACTION
Claims 1-16 (filed 01/12/2023) have been considered in this action.  Claims 1-16 are newly filed.

Specification
The disclosure is objected to because of the following informalities:
The title of the invention is not descriptive.  A new title is required that is clearly indicative of the invention to which the claims are directed. 

The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter.  See 37 CFR 1.75(d)(1) and MPEP § 608.01(o).  Correction of the following is required: the subject matter of claims 2, 5, 6, 7 and 11-16 is not recited in the original specification, and the specification therefore fails to provide antecedent basis for this subject matter.

Appropriate correction is required.


Claim Objections
Claim 10 is objected to because of the following informalities: 
Claim 10 appears to have a typographical error in that the claim recites “In a non-transitory computer-readable storage medium…”, and the word “In” appears to be improperly interjected into the claim.  When looking to the priority documents retrieved on 03/01/2023, the form of the corresponding claim appears directed towards appropriate subject matter.  Should the applicant find that claim 10 is recited in a proper manner and that correction of what the examiner considers a typographical error is not required, an explanation will need to be provided as to why the claim should not be considered to recite information per se.
Appropriate correction is required.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b)  CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.


The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.


Claims 1-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 8 and 10 recite the limitation "the trained machine learning model" in the final limitation of each claim.  There is insufficient antecedent basis for this limitation in the claims.  There is no linking between the step of “providing the one or more subsets as labelled training data for training a machine learning model” and “providing the trained machine learning model” because the claims never positively recite that the machine learning model is trained with the labeled training data to create a trained machine learning model.  Due to this fact, no antecedent is established for “the trained machine learning model”.  Instead, the claims use the broad language of “provide”, which under the BRI simply means “to make available”.  For example, a person on Halloween may provide a bowl of candy on their front porch for trick-or-treaters to take; however, unless the trick-or-treaters actually come to take the candy, it is never actually utilized, and it cannot be assumed that candy was actually consumed just because it was provided.  There is a clear distinction between something being provided and it actually being used, a distinction that is not addressed by the claims due to the above-identified lack of linkage between steps.  This creates two competing interpretations that make the claims indefinite because the metes and bounds of the claims are unclear as to whether “the trained machine learning model” refers to the same machine learning model provided with labeled training data or not.  For the sake of compact prosecution, the examiner shall consider that “the trained machine learning model” can be any machine learning model for which a form of quality inspection is provided, and does not necessarily have to relate to the same machine learning model that was provided the labeled training data.
Claims 2-7, 9 and 11-16 depend upon one of claims 1, 8 and 10, and thus inherit the rejection of claims 1, 8 and 10 under 35 U.S.C. 112(b).

Claims 5-6 and 15-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 5 and 15 reference optional features that are required by additional limitations of the claims, thus making the claim scope indeterminate.  Claims 5 and 15 each depend from a claim that establishes that status data comprises “events relating to, characteristic properties relevant for, or events relating to and characteristic properties relevant for utilization of the component within the manufacturing device”.  This establishes that status data is one of:
Events relating to utilization of the component within the manufacturing device;
Characteristic properties relevant for the utilization of the component within the manufacturing device; or
Events relating to and characteristic properties relevant for utilization of the component within the manufacturing device.
However, claims 5 and 15 each require:
recording, by a second client, a time stamp with each event of the status data, and 
transmitting the events to a second server and storing the status data in a second database communicatively coupled to the second server.

This leaves open the viable interpretation that, when ‘characteristic properties’ are what comprise the status data, it is unclear what is being performed by the recording and transmitting steps of the second client because no event-type data is required.  For the sake of compact prosecution, the examiner shall consider the above-noted limitations from claims 5 and 15 as being optional and not required to be performed by the claim because it is unclear what happens when the status data does not contain events.
Claims 6 and 16 are dependent upon claims 5 and 15, and thus inherit the rejection of claims 5 and 15 under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3, 8-9 and 10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the judicial exception of an abstract idea without significantly more.  To determine if a claim is directed to patent ineligible subject matter, the Courts have guided the USPTO to apply the Alice/Mayo test, which requires:
Step 1: Determining if the claim falls within a statutory category of invention
Step 2A: Determining if the claim is directed to a patent ineligible judicial exception consisting of a law of nature, a natural phenomenon, or abstract idea; and
Step 2B: If the claim is directed to a judicial exception, determining if the claim recites limitations or elements that amount to significantly more than the judicial exception (See MPEP 2106)

Step 1: With respect to claims 1 and 3-4, applying step 1, the preamble of independent claim 1 establishes the invention as “A computer-implemented method”.  As such, these claims fall within the statutory category of a process.
Step 2A, prong one: In order to apply step 2A, which determines whether particular limitations recite an abstract idea, law of nature, or natural phenomenon, a recitation of claim 1 is copied below.  The limitations of the claim that describe an abstract idea are bolded.  The claim recites:
A computer-implemented method for quality inspection of a component of a manufacturing device, the computer-implemented method comprising:
obtaining operational data relating to operation of the manufacturing device, the operational data comprising a time series of one or more physical properties of the manufacturing device;
obtaining status data relating to a component of the manufacturing device, the status data comprising events relating to, characteristic properties relevant for, or events relating to and characteristic properties relevant for utilization of the component within the manufacturing device;

labelling one or more subsets of the operational data, the labelling comprising associating one or more of the events, the characteristic properties, or the events and the characteristic properties to the one or more subsets; (mental process capable of being performed in the human mind with the assistance of pen and paper – observation, evaluation, judgement or opinion) 
providing the one or more subsets as labelled training data for training a machine learning model, wherein the machine learning model serves for outputting a quality indicator based on the labelled training data input; and
providing the trained machine learning model for quality inspection.

The limitations as analyzed include concepts directed to the “mental process” grouping of abstract ideas performed in the human mind (including an observation, evaluation, judgement or opinion, see MPEP §2106.04(a)(2), subsection III).  The claim involves determining and making judgements and evaluations of operational data and status data, and forming associations between the two.  The determinations of the claim are themselves simple and broad enough, as claimed, that they could be performed mentally in the human mind with the aid of pen and paper.  For example, a person could mentally associate events with the periods in the operational data that correspond to those events: when an alarm or alert indicates that a high temperature occurred, a person could look at the corresponding temperature data and associate that portion of the temperature data with the high temperature alarm.  The scope of what it means to label the subsets is itself simple and broad enough that it is more than capable of being performed in the human mind with the aid of pen and paper.  The act of associating two types of data is a process capable of being performed in the human mind.  Thus, the limitations noted above fall into the “mental process” grouping of abstract ideas.
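The association described above, pairing an event such as a high-temperature alarm with the corresponding portion of the operational time series, can be sketched as follows.  This is a minimal illustration only; all names, data shapes, and the windowing rule are hypothetical and are not drawn from the application.

```python
from dataclasses import dataclass

# Hypothetical data shapes for illustration; the claims do not fix any
# concrete format for operational data or status data.

@dataclass
class Event:
    timestamp: int   # when the event occurred
    label: str       # e.g. "high_temperature_alarm"

def label_subsets(operational, events, window=2):
    """Associate each event with the subset of the operational time
    series falling within +/- `window` samples of the event timestamp,
    yielding (subset, label) pairs as labelled training data."""
    training_data = []
    for ev in events:
        subset = [value for t, value in operational
                  if abs(t - ev.timestamp) <= window]
        if subset:
            training_data.append((subset, ev.label))
    return training_data

# Toy operational data: (timestamp, temperature) samples.
operational = [(t, 20 + t) for t in range(10)]
events = [Event(timestamp=8, label="high_temperature_alarm")]

labelled = label_subsets(operational, events)
# `labelled` pairs the temperature readings near t=8 with the alarm label.
# Note that merely "providing" these pairs for training, as claimed, says
# nothing about whether any model is in fact trained on them.
```

The sketch also illustrates the breadth point made elsewhere in this action: producing the labelled pairs and making them available involves no training step at all.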

Step 2A, prong 2: Under step 2A prong two, this judicial exception is not integrated into a practical application because the additional elements are outside the consideration of what can be considered an abstract idea, and only present elements that can be considered a general linking of the abstract idea to a field of use, limitations that offer only insignificant extra-solution activity, or limitations that amount to “apply it”.  In particular, the claim recites the additional elements:
A computer-implemented method for quality inspection of a component of a manufacturing device, the computer-implemented method comprising:
(general field of use type limitation that recites an intended result “for quality inspection of a component”)
obtaining operational data relating to operation of the manufacturing device, the operational data comprising a time series of one or more physical properties of the manufacturing device; (insignificant extra-solution activity in the form of data gathering)
obtaining status data relating to a component of the manufacturing device, the status data comprising events relating to, characteristic properties relevant for, or events relating to and characteristic properties relevant for utilization of the component within the manufacturing device; (insignificant extra-solution activity in the form of data gathering)
providing the one or more subsets as labelled training data for training a machine learning model, wherein the machine learning model serves for outputting a quality indicator based on the labelled training data input; (limitation that amounts to “apply it”)
providing the trained machine learning model for quality inspection. (general field of use/limitation that amounts to “apply it”)

In regards to the preamble and the limitations that begin with “providing”, these limitations are deemed insufficient to transform the judicial exception into a patentable invention because the recited elements are recited at a high level of generality such that they represent no more than mere instructions to apply the judicial exception on a computer system, see MPEP 2106.05(f).  The act of “providing” is considered under the broadest reasonable interpretation, and includes the plain language definition of “making available”.  The act of “providing” therefore does not require that any form of quality inspection is performed, nor that a trained machine learning model is formed, because simply making one or more subsets of data available for training (i.e. providing) does not imply that such training is actually performed, nor that the training results in a trained machine learning model, because this is not within the scope of “providing”.  Accordingly, these limitations fail to offer a practical application as none of these limitations provide a clear and distinct linkage between the data consumed and the forming of a machine learning model, nor the use of that model for quality inspection.  In regards to the “for quality inspection” portions of the preamble and final limitation, these additional elements amount to merely indicating a field of use or technological environment in which to apply the judicial exception, see MPEP 2106.05(h).  None of these steps positively recites that a quality inspection is performed, as any use of quality inspection is linked to an intended use of the method steps rather than positively recited as another method step, and thus does little to offer a practical application to the claim.  In regards to the “obtaining” steps, these limitations offer insignificant extra-solution activity in the form of routine and conventional data gathering to the judicial exception, see MPEP 2106.05(g).
In order to show the routine and conventional nature of gathering operational data and status data, all of Hsu (US 20230129188) at least at [0057] and [0072]; Prakash et al. (US 20240160550) at least at [0004], [0008], [0079] and [0082]; and Kloepper et al. (US 20230019404) at least at [0027] and [0035] teach such features of gathering operational data as time series data and gathering status data as event data.  The conventional and routine nature of the data gathering means that it fails to provide a practical application because the data is used in a routine and conventional way.  It cannot be said that the claims result in an improvement to computer technology because the claims amount to the use of computers as generic tools required to perform the abstract idea but do little to improve computers or computer technology in any way.  Because these limitations do not integrate the judicial exception into a practical application, the claims fail to overcome the presumption of presenting an abstract idea.  The claims relate to a computer system, but fail to offer any improvement to computers or the functioning of a computer.  See MPEP 2106.06.

Step 2B: Under step 2B, it is considered whether each claim limitation, when considered individually or as an ordered combination, amounts to significantly more than the abstract idea established in the previous steps of the analysis in order to determine if the claim as a whole provides an inventive concept that amounts to an improvement in the field.  This analysis includes determining whether an inventive concept is furnished by an element or a combination of elements that are beyond the judicial exception.  For limitations that are categorized as “apply it” or generally linking the use of the abstract idea to a particular technological environment or field of use, the analysis is the same.  Accordingly, the claim does not include any additional elements that amount to an inventive concept, or an improvement to the functioning of a computer, because no elements are beyond what a person having ordinary skill in the art would consider general functions of a computer.  When considered as a whole, the claimed elements add nothing that is not already present when the steps are considered separately, and thus fail to amount to significantly more than the judicial exception.  As noted above, nothing in the claim requires that a trained machine learning model is formed from the operational data and status data and that a quality inspection is performed with that trained machine learning model because the breadth of the claimed “providing” steps does not require such an interpretation.  As such, considering the claimed limitations as an ordered combination, claim 1 does not include significantly more than the abstract idea.  The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception.  
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the judicial exception, field of use/technological environment elements which do not amount to significantly more than the abstract idea, or insignificant extra-solution activity in the form of data gathering.
For the foregoing reasons, claim 1 is directed to an abstract idea without significantly more, and is rejected as being directed towards patent ineligible subject matter under 35 U.S.C. 101.  

In regards to Claims 8 and 10, a similar analysis of the corresponding limitations from claim 1 can be applied to claims 8 and 10.  Claims 8 and 10 fundamentally relate to an identical abstract idea capable of being performed in the human mind of associating two types of data with one another (i.e. labelling).  Claims 8 and 10 only differ in that their statutory category of invention is different (claim 8 is a system/machine, claim 10 is an article of manufacture).
Accordingly, Claim 8 and Claim 10 are directed to a similar abstract idea without significantly more as applied to claim 1 above.  Claims 8 and 10 are therefore rejected under 35 U.S.C. 101 as being directed towards patent ineligible subject matter.

In regards to Claim 3, the above analysis for claim 1 under 35 U.S.C. 101 is considered.  Claim 3 requires the additional elements of “providing the quality indicator to a user; initiating, based on the quality indicator, an alert; preventing, based on the quality indicator, further usage of the component; indicating/initiating, based on the quality indicator, a component inspection; at least temporarily stopping, based on the quality indicator, operation of the manufacturing device; or any combination thereof”.  The claim requires only one of the identified features, as permitted by “any combination thereof”.  The act of providing a quality indicator to a user, considered under the BRI, is deemed insufficient to transform the judicial exception into a patentable invention because the recited elements are recited at a high level of generality such that they represent no more than mere instructions to apply the judicial exception on a computer system, see MPEP 2106.05(f), or merely link the invention to the field of use of quality indicators.  The act of “providing” is considered under the broadest reasonable interpretation, and includes the plain language definition of “making available”.  The act of “providing” therefore does not require that any form of quality indicator evaluation is performed, nor that a trained machine learning model is used to form the quality indicator, because simply making a quality indicator available (i.e. providing) does not imply that the quality indicator is a result of a trained machine learning model trained with operational data and status data; this is not within the scope of “providing” and is outside the BRI of the claim.  Accordingly, these limitations fail to offer a practical application as none of these limitations provide a clear and distinct linkage between the data consumed and the forming of a machine learning model, nor the use of that model for quality inspection.
Claim 3 is rejected under 35 U.S.C. 101 as being directed towards patent ineligible subject matter.


In regards to Claim 9, the above analysis for claim 8 under 35 U.S.C. 101 is considered.  Claim 9 requires the additional elements of “a machine tool comprising a first client configured to provide the operational data to a server, implemented in hardware, software, or hardware and software; and a tool management system configured in software, the tool management system being configured to provide the status data to the server”.  These limitations do nothing more than provide the general technological environment for applying the abstract idea without significantly more.  For example, aside from requiring that the data come from a machine tool and tool management system (software) and be sent to a server, nothing in the claim offers any form of practical application of what is done with said data.  The claim establishes a generalized field of use for the claims to operate in (machine tools, servers) but does little to offer meaningful limits to the claimed abstract idea.  Claim 9 fails to offer a practical application because nothing in claim 9 requires that the system formulate the quality inspection, nor that the quality inspection is utilized or formed in any meaningful way.
Accordingly, claim 9 is rejected under 35 U.S.C. 101 as being directed towards patent ineligible subject matter.


Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.


Claims 1, 3, 4 and 8-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hsu (US 20230129188, hereinafter Hsu).

In regards to Claim 1, Hsu discloses “A computer-implemented method for quality inspection of a component of a manufacturing device, the computer-implemented method comprising” ([0008] The invention relates to the art of machine learning, data mining, and artificial intelligence in welding fabrication production, in order to automate human decision-making process in welding equipment preventative/predictive maintenance (PPM) and condition-based maintenance (CBM), weld quality control and weld engineering. [0009] According to a first aspect, a welding system comprises: a first processing circuitry to process a first welding input from a first data source; wherein the welding system is a type of manufacturing device) “obtaining operational data relating to operation of the manufacturing device, the operational data comprising a time series of one or more physical properties of the manufacturing device” ([0057] Another example of input “x” vector may contain one or more time series data as part of the vector. Welding current time series is notated as {I[t]}={I(t), I(t-1), I(t-2), I(t-T)} where t is real valued present time, I(t) is welding current sampled at present time, I(t-1) is welding current sampled at one sampling period (Δt) in the past, and I(t-2) is welding current sampled at two sampling periods (or 2Δt) in the past, and T denotes total delay elements (or memory depth or embedding dimension) as we transform a one dimensional time vector into a T-dimensional spatial vector in constructing time series {I[t]} as inputs to neural network. Similarly, welding voltage time series can be notated as {V[t]}={V(t), V(t-1), V(t-2), V(t-T)}. Equation 1 example can potentially take the form of {x[t]}={{I[t]}, {V[t]}, shorting frequency [t]}; wherein welding current is the property of the welding tool applying welding current to the weldment; [0067] One or more sensors 236 may be positioned throughout a welding station to measure and collect welding data.
For example, depending on the type of sensor, the one or more sensors 236 may be positioned adjacent the work piece 206, integrated with the welding equipment 210, integrated with the welding headwear, or a combination thereof. Indeed, the one or more sensors 236 may be positioned adjacent (e.g., operably situated) the work so as to enable the one or more sensors 236 to properly function. For example, a camera should have a line of sight to the weld, a microphone should be close enough to detect acoustic features of the weld, or weld process, etc. [0071] The one or more sensors or transducers 236 may include any sensor useful in identifying defects, or measuring attributes/parameters, of a weld in a weldment. Examples of suitable sensors include, without limitation, current/LEM sensor, voltage and power sensors/calorimeter, encoders, photodiodes, cameras, microphones, seam finders, temperature sensors (e.g., positioned inside the welding equipment 210, or on the work piece 206), infrared (IR) detectors, proximity sensors, laser ranging and scanning devices, pressure sensors, inertial sensors, humidity sensors, airflow sensors, inertial measurement unit (IMU) sensors, shape memory alloy (SMA) sensors, piezoelectric sensors, nanotechnology chemical sensors, EMAT sensors, MEMS sensors, GPS, etc.) “obtaining status data relating to a component of the manufacturing device, the status data comprising events relating to, characteristic properties relevant for, or events relating to and characteristic properties relevant for utilization of the component within the manufacturing device” ([0054] To help fabricators improve productivity and quality of their weldments, and to drive continuous improvement in their welding operations, a welding information management system may be employed to collect real-time data from their welding equipment.
Using such welding information management systems, fabricators can remotely assess welding performance information in real-time via a computer network (e.g., over the Internet, a local network, etc.). Welding information management systems help fabricators assess performance indicators, such as productivity and quality of the work. For example, welding information management systems may be used to improve a weldment's weld quality by detecting a predetermined characteristic, such as potential weld defects, and by identifying operators associated with the potentially defective weldments. Suitable welding information management systems are available from Miller Electric Mfg. Co. For example, Insight Core™ and Insight Centerpoint™ are Internet-based industrial welding information management solutions that collect and report, for example, arc starts, arc-on time, identify missing welds, and quality performance based on amperage and voltage. Insight Core™ further provides, for example, a real-time snapshot of a weld cell's performance, thereby eliminating outdated and often ineffective methods of manual data collection on the production floor. [0072] A first operator interface 238 may be provided at the welding station that enables welding personnel (e.g., a welding operator, a supervisor/manager, maintenance personnel, quality control personnel, etc.) to indicate, or enter, any equipment fault classification, set points, set up conditions, quality classification, and/or other parameters. In some aspects, certain of the parameters (e.g., weld programs, set points, set up conditions, etc.) and fault or event codes may be transmitted from the robot and/or welding equipment 210 to the analytics computing platform 234 as input features or automatically detected/sensed, thereby obviating the need for welding personnel to manually indicate at least those parameters. The parameters and fault or event codes etc. 
may be transmitted with tags associated with work piece 206 and other related information for traceability as in metadata for later processing; wherein events and welding information that includes arc-on time and arc starts and missing welds are forms of status data that relate to the welding head of the welding equipment) “labelling one or more subsets of the operational data, the labelling comprising associating one or more of the events, the characteristic properties, or the events and the characteristic properties to the one or more subsets” ([0056] The machine-learning algorithm may be supervised, or unsupervised. Supervised machine learning algorithms requires that the output of the hypothesis be “labeled,” thus each feature is a pair of input (x) and output (y), or {(x, y)}. For instance, given a particular welding current as input, a resulting weld may be labeled using a binary output (or class) (e.g., either an “acceptable” weld or an “unacceptable” weld). The hypothesis used to classify the resulting weld may be called a classifier. [0079] As illustrated, the work-in-process weldment (e.g., a carrier 402) may be tagged with a tag 404, which may be used to identify and track the weldment. When the work-in-process weldment arrives at an inspection station 408, a quality assurance device (e.g., tensile machine 410a, computing device 410b, etc.) may classify the weldment as passing (or failing) in one or more aspects of routine tests and communicate the test results (y1, y2) along with tag data (or weld quality metadata) to the analytics computing platform 234. Analytics computing platform 234 will combine the “x” and the “y” (e.g., {(x, tag)} and {(y1, y2, tag)}) data together to form a complete training example {(x, y)}. 
For example, “x” may be a vector of all the sensors of welding process and equipment, while the “y1” vector may include fault codes, events and error logs from networked welding machines & robots & PLCs that can be digitally and automatically transmitted, but “y2” may be a human interface for manual entry by the maintenance personnel when he recovers the fault (such as those described with regard to FIGS. 3a and 3b). The “x,” “y1,” and “y2” data may be further time stamped so that when they reach the analytics computing platform 234; wherein the combining of x data that is sensor data such as the above current time series data, and events are the events including errors as y data is an associating/labeling of the two; [0151] To increase ease of use, an existing infrastructure of data collection (e.g., a welding information management system) may be retrofitted with a welding data labeling interface (e.g., operator interface 238, 510) such that predictions regarding maintenance and quality control can be made using subsequent welding data. In one implementation, the algorithm training can be automated where the welding data labeling interface may be a digital interface to other welding equipment, or weld quality inspection instruments) “providing the one or more subsets as labelled training data for training a machine learning model, wherein the machine learning model serves for outputting a quality indicator based on the labelled training data input; and” ([0063] an object of the subject disclosure is to depart from conventional approaches of machine learning algorithms training using a costly controlled small dataset, to a new paradigm of machine learning algorithms training using a large scale dataset that is dynamically generated from actual welding equipment in production with network connectivity and from actual quality control and maintenance activities in a factory. 
In other words, as disclosed herein, a weld production knowledge machine learning algorithm may be trained by a large scale dataset of actual welding process data collected continuously in real-time from real life welding equipment in production (e.g., on-line) at one or more welding cells of a fabricator, and at one or more fabricator sources, and by actual weld quality data from real life inspection equipment, and using actual quality standards of pass/fail from fabricators themselves, which may be integral with fabricators' quality control system. In other words, the data labeling of supervised learning (e.g. weld quality or weld equipment maintenance conditions) is not done generically in controlled experiments for all applications but can be customized and based on actual human decisions specific to each application.[0079] The “x,” “y1,” and “y2” data may be further time stamped so that when they reach the analytics computing platform 234, a pre-processor can parse and assemble them into dataset before ingesting them into machine learning algorithms of the one or more analytics computing platforms 234 for training, validation and testing. The data transmitted by the welding cell 406 and inspection station 408 include metadata with “tags” or supplemental information such as weldment traceability, time and location data attached to the weld process data, welding equipment maintenance data and weld quality data. The data may be in human readable forms such as XML or JSON. Alternatively, it may be binary or machine-readable only. In certain aspects, data transmitted by 406 may be encapsulated by a content neutral wrapper to accommodate other formats. In certain aspects, the data may be formatted into a standardized or structured form. 
For example, a wrapper may be employed, which, extracts content of a particular information source and translates it into, for example, a relational form; [0087] The machine learning application engine virtual machine can use MAHOUT™ and/or MLlib implementations of distributed and/or scalable machine learning algorithms or libraries. Java-based WEKA® open source ML software can be used for data mining. Alternatively ad-hoc machine learning algorithms can be provided, which may be developed using, for example, R Connector, SAS® software, MATLAB®, Octave, etc. The algorithms may be built on the APACHE™ HADOOP® cloud computing layers below. For example, the machine learning engines like MAHOUT™ libraries/MLlib/WEKA® can use MapReduce paradigm to perform supervised learning and unsupervised learning for hypothesis training, validation and testing; and to provide services such weld quality prediction, maintenance prediction and data mining (for unexpected anomaly detection and alarm) “providing the trained machine learning model for quality inspection” ([0093] The one or more analytics computing platforms 234 may employ a supervised machine learning algorithm relying on data labeling generated by human (e.g., via operator interface 238, 510) and/or machine, such as linear regression, logistic regression, neural network, and support vector machine (SVM) large margin classifier. The one or more analytics computing platforms 234 may instead employ an unsupervised machine learning algorithm that does not rely on user labeling of the output, such as K-means (e.g., KNN classification), Kohonen self-organizing maps, competitive learning, clustering, PCA for general anomaly detection and for data compression as part of supervised machine learning.[0094] After the neural network is trained, it may be used to predict weld geometry based on the welding parameters: user interface 238 in FIG. 
2 may display a web page served out of Analytics computing platform(s) 234 that runs the “Weld geometry prediction” virtual machine. The web page may comprise a graphical representation of the weld macro similar to 410Eb in FIG. 4e to show user what the expected weld looks like with the weld parameters programmed in the welding equipment 210. The user may play out “what-if” scenarios to see possible parameter optimization routes. For example, the user may change the gap size of the joint and see the effect on the bead profile. Although FIG. 4e only illustrates two dimensional weld measurements (bead profile and penetration profile), other dimensions may be measured and used for machine learning, e.g., spatter level, distortion, residual stress, microstructure, hardness, mechanical properties of the weld, discoloration, surface blemish, etc.).
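The delay-embedding notation from Hsu ¶[0057] ({I[t]} as a T-dimensional spatial vector) and the {(x, y)} labeling scheme from ¶[0079] can be sketched as follows. This is a minimal illustrative sketch only; the function names, sample values, and label are assumptions, not taken from the reference.

```python
# Sketch of Hsu's delay embedding ([0057]) and (x, y) labeling ([0079]).
# All function names and data values are illustrative assumptions.

def delay_embed(samples, T):
    """Turn the most recent T+1 samples of a 1-D time series into a
    (T+1)-dimensional spatial vector (I(t), I(t-1), ..., I(t-T))."""
    if len(samples) < T + 1:
        raise ValueError("need at least T+1 samples")
    # samples[-1] is I(t), samples[-2] is I(t-1), and so on.
    return list(reversed(samples[-(T + 1):]))

def make_training_example(current, voltage, T, label):
    """Combine sensor vectors x with an inspection label y into {(x, y)}."""
    x = delay_embed(current, T) + delay_embed(voltage, T)
    return (x, label)

x, y = make_training_example(
    current=[100.0, 102.5, 101.0, 99.5],  # welding current samples over time
    voltage=[22.0, 21.8, 22.1, 22.3],     # welding voltage samples over time
    T=2,
    label="acceptable",                   # binary class from inspection (y)
)
```

Each complete training example pairs the embedded sensor vector x with the pass/fail label y supplied by the inspection station, as the quoted ¶[0079] describes.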

In regards to Claim 3, Hsu discloses “The computer-implemented method of claim 1, further comprising: providing the quality indicator to a user; initiating, based on the quality indicator, an alert; preventing, based on the quality indicator, further usage of the component; indicating/initiating, based on the quality indicator, a component inspection; at least temporarily stopping, based on the quality indicator, operation of the manufacturing device; or any combination thereof” ([0096] The operator interface 510 may generate audible, visual, and/or tactile output (e.g., via speakers, a display, and/or motors/actuators/servos/etc.) in response to signals from the control circuitry 502. In certain aspects, one or more components of the operator interface 510 may be positioned on the welding tool, whereby control signals from the one or more components are communicated to the control circuitry 502 via conduit 218; [0115] precision and recall of the “live” performance of hypothesis may be displayed on operator interface 238 for user discretion of their tradeoff. User interface may also display the predicted life of contact tip, or h.sub.θ(x) to alert operator for pro-active tip change to avoid unexpected downtime; wherein predicted life of tip is a quality indicator provided to a user).
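The claim 3 actions (providing the quality indicator, raising an alert, stopping operation) as mapped to Hsu ¶[0115] can be sketched as a simple threshold rule on predicted contact-tip life. The threshold value and all names below are hypothetical, introduced only for illustration.

```python
# Hedged sketch of acting on a quality indicator as in claim 3:
# alert the operator and pause the equipment when the predicted
# contact-tip life (Hsu [0115]) falls below a threshold.
# The threshold and action names are hypothetical assumptions.

TIP_LIFE_ALERT_THRESHOLD = 0.10  # fraction of predicted tip life remaining

def handle_quality_indicator(predicted_tip_life_fraction):
    """Return the list of actions to take for a given quality indicator."""
    actions = ["display_to_user"]          # always provide the indicator
    if predicted_tip_life_fraction < TIP_LIFE_ALERT_THRESHOLD:
        actions.append("raise_alert")      # pro-active tip-change alert
        actions.append("pause_equipment")  # temporarily stop operation
    return actions
```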

In regards to Claim 4, Hsu discloses “The computer-implemented method of claim 3, further comprising initiating, based on the quality indicator, the alert, wherein the alert comprises a notification displayed on a display screen of the manufacturing device to a user or an app in a cloud” ([0096] The operator interface 510 may generate audible, visual, and/or tactile output (e.g., via speakers, a display, and/or motors/actuators/servos/etc.) in response to signals from the control circuitry 502. In certain aspects, one or more components of the operator interface 510 may be positioned on the welding tool, whereby control signals from the one or more components are communicated to the control circuitry 502 via conduit 218; [0115] precision and recall of the “live” performance of hypothesis may be displayed on operator interface 238 for user discretion of their tradeoff. User interface may also display the predicted life of contact tip, or h.sub.θ(x) to alert operator for pro-active tip change to avoid unexpected downtime; wherein alert of tip change is displayed and is for an operator (user)).


In regards to Claim 8, Hsu discloses “An apparatus comprising: a memory; and a processor configured for quality inspection of a component of a manufacturing device, wherein the processor being configured for quality inspection of the component comprises the processor being configured to:” ([0008] The invention relates to the art of machine learning, data mining, and artificial intelligence in welding fabrication production, in order to automate human decision-making process in welding equipment preventative/predictive maintenance (PPM) and condition-based maintenance (CBM), weld quality control and weld engineering. [0009] According to a first aspect, a welding system comprises: a first processing circuitry to process a first welding input from a first data source; wherein welding system is a type of manufacturing device; wherein a processing device implies a computer [0110] An exemplary analytics computing platform 234 may comprise a processor configured to perform one or more algorithms (e.g., weld production knowledge machine learning algorithms) and a non-transitory data storage device. The processor may be communicatively and operatively coupled with one or more non-transitory data storage devices, which may be a non-transitory, computer-readable medium having one or more databases (e.g., weld data store(s) having a large scale dataset) and/or computer-executable instructions embodied therein. The computer-executable instructions, when executed by the processor, facilitate the various quality assurance systems and algorithms disclosed herein) “obtain operational data relating to operation of the manufacturing device, the operational data comprising a time series of one or more physical properties of the manufacturing device;” ([0057] Another example of input “x” vector may contain one or more time series data as part of the vector. 
Welding current time series is notated as {I[t]}={I(t), I(t-1), I(t-2), I(t-T)} where t is real valued present time, I(t) is welding current sampled at present time, I(t-1) is welding current sampled at one sampling period (Δt) in the past, and I(t-2) is welding current sampled at two sampling periods (or 2Δt) in the past, and T denotes total delay elements (or memory depth or embedding dimension) as we transform a one dimensional time vector into a T-dimensional spatial vector in constructing time series {I[t]} as inputs to neural network. Similarly, welding voltage time series can be notated as {V[t]}={V(t), V(t-1), V(t-2), V(t-T)}. Equation 1 example can potentially take the form of {x[t]}={{I[t]}, {V[t]}, shorting frequency [t]}; wherein welding current is the property of the welding tool applying welding current to the weldment; [0067] One or more sensors 236 may be positioned throughout a welding station to measure and collect welding data. For example, depending on the type of sensor, the one or more sensors 236 may be positioned adjacent the work piece 206, integrated with the welding equipment 210, integrated with the welding headwear, or a combination thereof. Indeed, the one or more sensors 236 may be positioned adjacent (e.g., operably situated) the work so as to enable to one or more sensors 236 to properly function. For example, a camera should have a line of sight to the weld, a microphone should be close enough to detect acoustic features of the weld, or weld process, etc. [0071] The one or more sensors or transducers 236 may include any sensor useful in identifying defects, or measuring attributes/parameters, of a weld in a weldment. 
Examples of suitable sensors include, without limitation, current/LEM sensor, voltage and power sensors/calorimeter, encoders, photodiodes, cameras, microphones, seam finders, temperature sensors (e.g., positioned inside the welding equipment 210, or on the work piece 206), infrared (IR) detectors, proximity sensors, laser ranging and scanning devices, pressure sensors, inertial sensors, humidity sensors, airflow sensors, inertial measurement unit (IMU) sensors, shape memory alloy (SMA) sensors, piezoelectric sensors, nanotechnology chemical sensors, EMAT sensors, MEMS sensors, GPS, etc.) “obtain status data relating to a component of the manufacturing device, the status data comprising events relating to, characteristic properties relevant for, or events relating to and characteristic properties relevant for utilization of the component within the manufacturing device;” ([0054] To help fabricators improve productivity and quality of their weldments, and to drive continuous improvement in their welding operations, a welding information management system may be employed to collect real-time data from their welding equipment. Using such welding information management systems, fabricators can remotely assess welding performance information in real-time via a computer network (e.g., over the Internet, a local network, etc.). Welding information management systems help fabricators assess performance indicators, such as productivity and quality of the work. For example, welding information management systems may be used to improve a weldment's weld quality by detecting a predetermined characteristic, such as potential weld defects, and by identifying operators associated with the potentially defective weldments. Suitable welding information management systems are available from Miller Electric Mfg. Co. 
For example, Insight Core™ and Insight Centerpoint™ are Internet-based industrial welding information management solutions that collect and report, for example, arc starts, arc-on time, identify missing welds, and quality performance based on amperage and voltage. Insight Core™ further provides, for example, a real-time snapshot of a weld cell's performance, thereby eliminating outdated and often ineffective methods of manual data collection on the production floor. [0072] A first operator interface 238 may be provided at the welding station that enables welding personnel (e.g., a welding operator, a supervisor/manager, maintenance personnel, quality control personnel, etc.) to indicate, or enter, any equipment fault classification, set points, set up conditions, quality classification, and/or other parameters. In some aspects, certain of the parameters (e.g., weld programs, set points, set up conditions, etc.) and fault or event codes may be transmitted from the robot and/or welding equipment 210 to the analytics computing platform 234 as input features or automatically detected/sensed, thereby obviating the need for welding personnel to manually indicate at least those parameters. The parameters and fault or event codes etc. may be transmitted with tags associated with work piece 206 and other related information for traceability as in metadata for later processing; wherein events and welding information that includes arc-on time and arc starts and missing welds are forms of status data that relate to the welding head of the welding equipment) “label one or more subsets of the operational data, the label of the one or more subsets of the operational data comprising associating one or more of the events, the characteristic properties, or the events and the characteristic properties to the one or more subsets;” ([0056] The machine-learning algorithm may be supervised, or unsupervised. 
Supervised machine learning algorithms requires that the output of the hypothesis be “labeled,” thus each feature is a pair of input (x) and output (y), or {(x, y)}. For instance, given a particular welding current as input, a resulting weld may be labeled using a binary output (or class) (e.g., either an “acceptable” weld or an “unacceptable” weld). The hypothesis used to classify the resulting weld may be called a classifier. [0079] As illustrated, the work-in-process weldment (e.g., a carrier 402) may be tagged with a tag 404, which may be used to identify and track the weldment. When the work-in-process weldment arrives at an inspection station 408, a quality assurance device (e.g., tensile machine 410a, computing device 410b, etc.) may classify the weldment as passing (or failing) in one or more aspects of routine tests and communicate the test results (y1, y2) along with tag data (or weld quality metadata) to the analytics computing platform 234. Analytics computing platform 234 will combine the “x” and the “y” (e.g., {(x, tag)} and {(y1, y2, tag)}) data together to form a complete training example {(x, y)}. For example, “x” may be a vector of all the sensors of welding process and equipment, while the “y1” vector may include fault codes, events and error logs from networked welding machines & robots & PLCs that can be digitally and automatically transmitted, but “y2” may be a human interface for manual entry by the maintenance personnel when he recovers the fault (such as those described with regard to FIGS. 3a and 3b). 
The “x,” “y1,” and “y2” data may be further time stamped so that when they reach the analytics computing platform 234; wherein the combining of x data that is sensor data such as the above current time series data, and events are the events including errors as y data is an associating/labeling of the two; [0151] To increase ease of use, an existing infrastructure of data collection (e.g., a welding information management system) may be retrofitted with a welding data labeling interface (e.g., operator interface 238, 510) such that predictions regarding maintenance and quality control can be made using subsequent welding data. In one implementation, the algorithm training can be automated where the welding data labeling interface may be a digital interface to other welding equipment, or weld quality inspection instruments) “provide the one or more subsets as labelled training data for training a machine learning model, wherein the machine learning model is configured to output a quality indicator based on the labelled training data input; and” ([0063] an object of the subject disclosure is to depart from conventional approaches of machine learning algorithms training using a costly controlled small dataset, to a new paradigm of machine learning algorithms training using a large scale dataset that is dynamically generated from actual welding equipment in production with network connectivity and from actual quality control and maintenance activities in a factory. 
In other words, as disclosed herein, a weld production knowledge machine learning algorithm may be trained by a large scale dataset of actual welding process data collected continuously in real-time from real life welding equipment in production (e.g., on-line) at one or more welding cells of a fabricator, and at one or more fabricator sources, and by actual weld quality data from real life inspection equipment, and using actual quality standards of pass/fail from fabricators themselves, which may be integral with fabricators' quality control system. In other words, the data labeling of supervised learning (e.g. weld quality or weld equipment maintenance conditions) is not done generically in controlled experiments for all applications but can be customized and based on actual human decisions specific to each application.[0079] The “x,” “y1,” and “y2” data may be further time stamped so that when they reach the analytics computing platform 234, a pre-processor can parse and assemble them into dataset before ingesting them into machine learning algorithms of the one or more analytics computing platforms 234 for training, validation and testing. The data transmitted by the welding cell 406 and inspection station 408 include metadata with “tags” or supplemental information such as weldment traceability, time and location data attached to the weld process data, welding equipment maintenance data and weld quality data. The data may be in human readable forms such as XML or JSON. Alternatively, it may be binary or machine-readable only. In certain aspects, data transmitted by 406 may be encapsulated by a content neutral wrapper to accommodate other formats. In certain aspects, the data may be formatted into a standardized or structured form. 
For example, a wrapper may be employed, which, extracts content of a particular information source and translates it into, for example, a relational form; [0087] The machine learning application engine virtual machine can use MAHOUT™ and/or MLlib implementations of distributed and/or scalable machine learning algorithms or libraries. Java-based WEKA® open source ML software can be used for data mining. Alternatively ad-hoc machine learning algorithms can be provided, which may be developed using, for example, R Connector, SAS® software, MATLAB®, Octave, etc. The algorithms may be built on the APACHE™ HADOOP® cloud computing layers below. For example, the machine learning engines like MAHOUT™ libraries/MLlib/WEKA® can use MapReduce paradigm to perform supervised learning and unsupervised learning for hypothesis training, validation and testing; and to provide services such weld quality prediction, maintenance prediction and data mining (for unexpected anomaly detection and alarm) “provide the trained machine learning model for quality inspection” ([0093] The one or more analytics computing platforms 234 may employ a supervised machine learning algorithm relying on data labeling generated by human (e.g., via operator interface 238, 510) and/or machine, such as linear regression, logistic regression, neural network, and support vector machine (SVM) large margin classifier. The one or more analytics computing platforms 234 may instead employ an unsupervised machine learning algorithm that does not rely on user labeling of the output, such as K-means (e.g., KNN classification), Kohonen self-organizing maps, competitive learning, clustering, PCA for general anomaly detection and for data compression as part of supervised machine learning.[0094] After the neural network is trained, it may be used to predict weld geometry based on the welding parameters: user interface 238 in FIG. 
2 may display a web page served out of Analytics computing platform(s) 234 that runs the “Weld geometry prediction” virtual machine. The web page may comprise a graphical representation of the weld macro similar to 410Eb in FIG. 4e to show user what the expected weld looks like with the weld parameters programmed in the welding equipment 210. The user may play out “what-if” scenarios to see possible parameter optimization routes. For example, the user may change the gap size of the joint and see the effect on the bead profile. Although FIG. 4e only illustrates two dimensional weld measurements (bead profile and penetration profile), other dimensions may be measured and used for machine learning, e.g., spatter level, distortion, residual stress, microstructure, hardness, mechanical properties of the weld, discoloration, surface blemish, etc.).
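The supervised classification Hsu ¶[0093] describes (e.g., KNN classification over labeled weld data) can be sketched with a minimal nearest-neighbour classifier. The training values below are invented for illustration and do not come from the reference.

```python
# Minimal sketch of the supervised classification in Hsu [0093]:
# a 1-nearest-neighbour classifier over labeled (x, y) weld examples.
# Training data values are invented assumptions for illustration.

def classify_1nn(training_data, x):
    """Label a new feature vector x with the label of its nearest
    training example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(training_data, key=lambda ex: dist(ex[0], x))
    return label

training_data = [
    ([100.0, 22.0], "acceptable"),    # nominal current/voltage pair
    ([140.0, 30.0], "unacceptable"),  # out-of-range excursion
]
print(classify_1nn(training_data, [102.0, 22.5]))  # prints: acceptable
```

This stands in for the trained model of the claim; Hsu's actual platform may instead use logistic regression, neural networks, or SVMs as the quoted ¶[0093] lists.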

In regards to Claim 9, Hsu discloses “The apparatus of claim 8, further comprising: a machine tool comprising a first client configured to provide the operational data to a server, implemented in hardware, software, or hardware and software; and” ([0066] Referring to FIG. 2, an example welding system 200 is shown in which a robot 202 welds a work piece 206 using a welding tool 208 (or, when under manual control, a handheld torch) to which power or fuel is delivered by welding equipment 210 via conduit 218 (for electrical welding, ground conduit 220 provides the return path). The welding equipment 210 may be communicatively coupled with one or more analytics computing platforms 234 (e.g., one or more big data analytics computing platforms at a data center, which may be remotely situated) via a communication link 230 and a communication network 232. [0075] Any data entered via the operator interface 238, or generated by the one or more sensors 236, is preferably traceable back to the raw data collected by the one or more sensors 236 regarding the weldment, which could also identify the weldment, the operator, etc. The welding equipment 210 may be configured to communicate the welding data to the analytics computing platform 234 via the communication network 232 for processing, while still preserving traceability to the weldments. 
Through the communication network 232, the welding equipment 210 may also be configured to report programmed set points and set up conditions to the analytics computing platform 234) “a tool management system configured in software, the tool management system being configured to provide the status data to the server.” ([0060] Weld production knowledge machine learning algorithms may be used to predict and/or identify predetermined characteristic of said welding equipment or welding personnel, such as, inter alia, tool life, weld quality (e.g., passing or failing the WPS or compliance with production specifications), weldment quality, weld tool consumable life, welding equipment (and its component), service condition/interval, welding equipment reliability such as MTTR/MTBF, decisions and actions by weld personnel, worker training needs/performance/error/skill, welding consumable usage/replenish pattern, welding fixture service condition, anomaly in welding materials and supplies/input power or fuel/welding conditions/pre-weld and post-weld operations, productivity of weld, pre-weld and post-weld operations such as parts per shift/weld cell cycle time/production yield; wherein a system which predicts tool life is a tool management software; [0072]  In some aspects, certain of the parameters (e.g., weld programs, set points, set up conditions, etc.) and fault or event codes may be transmitted from the robot and/or welding equipment 210 to the analytics computing platform 234 as input features or automatically detected/sensed, thereby obviating the need for welding personnel to manually indicate at least those parameters. The parameters and fault or event codes etc. may be transmitted with tags associated with work piece 206 and other related information for traceability as in metadata for later processing. The first operator interface 238 is preferable a computing device (e.g., a computer, laptop, tablet, smart phone, etc.) with network connectivity).
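The client-to-server reporting mapped to claim 9 (Hsu ¶[0075], with the JSON formatting suggested in ¶[0079]) can be sketched as packaging sensor time series with a traceability tag and timestamp. The field names are assumptions, not from the reference.

```python
# Sketch of the client-to-server reporting in claim 9 / Hsu [0075]:
# operational data wrapped with a traceability tag and a timestamp in
# human-readable JSON (per [0079]). Field names are assumed, not Hsu's.
import json
import time

def package_weld_data(tag, current_series, voltage_series):
    """Wrap sensor time series with tag metadata for transmission."""
    return json.dumps({
        "tag": tag,                # weldment traceability tag (e.g., 404)
        "timestamp": time.time(),  # time stamp for later data alignment
        "x": {"current": current_series, "voltage": voltage_series},
    })

msg = package_weld_data("carrier-402", [100.0, 102.5], [22.0, 21.8])
```

On the server side, the analytics platform would parse such messages and join x-data with y-data sharing the same tag, as the quoted ¶[0079] describes.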

In regards to Claim 10, Hsu discloses “In a non-transitory computer-readable storage medium that stores instructions executable by one or more processors for quality inspection of a component of a manufacturing device, the instructions comprising:” ([0008] The invention relates to the art of machine learning, data mining, and artificial intelligence in welding fabrication production, in order to automate human decision-making process in welding equipment preventative/predictive maintenance (PPM) and condition-based maintenance (CBM), weld quality control and weld engineering. [0009] According to a first aspect, a welding system comprises: a first processing circuitry to process a first welding input from a first data source; wherein welding system is a type of manufacturing device; [0110] An exemplary analytics computing platform 234 may comprise a processor configured to perform one or more algorithms (e.g., weld production knowledge machine learning algorithms) and a non-transitory data storage device. The processor may be communicatively and operatively coupled with one or more non-transitory data storage devices, which may be a non-transitory, computer-readable medium having one or more databases (e.g., weld data store(s) having a large scale dataset) and/or computer-executable instructions embodied therein. The computer-executable instructions, when executed by the processor, facilitate the various quality assurance systems and algorithms disclosed herein) “obtaining operational data relating to operation of the manufacturing device, the operational data comprising a time series of one or more physical properties of the manufacturing device” ([0057] Another example of input “x” vector may contain one or more time series data as part of the vector. 
Welding current time series is notated as {I[t]}={I(t), I(t-1), I(t-2), I(t-T)} where t is real valued present time, I(t) is welding current sampled at present time, I(t-1) is welding current sampled at one sampling period (Δt) in the past, and I(t-2) is welding current sampled at two sampling periods (or 2Δt) in the past, and T denotes total delay elements (or memory depth or embedding dimension) as we transform a one dimensional time vector into a T-dimensional spatial vector in constructing time series {I[t]} as inputs to neural network. Similarly, welding voltage time series can be notated as {V[t]}={V(t), V(t-1), V(t-2), V(t-T)}. Equation 1 example can potentially take the form of {x[t]}={{I[t]}, {V[t]}, shorting frequency [t]}; wherein welding current is the property of the welding tool applying welding current to the weldment; [0067] One or more sensors 236 may be positioned throughout a welding station to measure and collect welding data. For example, depending on the type of sensor, the one or more sensors 236 may be positioned adjacent the work piece 206, integrated with the welding equipment 210, integrated with the welding headwear, or a combination thereof. Indeed, the one or more sensors 236 may be positioned adjacent (e.g., operably situated) the work so as to enable to one or more sensors 236 to properly function. For example, a camera should have a line of sight to the weld, a microphone should be close enough to detect acoustic features of the weld, or weld process, etc. [0071] The one or more sensors or transducers 236 may include any sensor useful in identifying defects, or measuring attributes/parameters, of a weld in a weldment. 
Examples of suitable sensors include, without limitation, current/LEM sensor, voltage and power sensors/calorimeter, encoders, photodiodes, cameras, microphones, seam finders, temperature sensors (e.g., positioned inside the welding equipment 210, or on the work piece 206), infrared (IR) detectors, proximity sensors, laser ranging and scanning devices, pressure sensors, inertial sensors, humidity sensors, airflow sensors, inertial measurement unit (IMU) sensors, shape memory alloy (SMA) sensors, piezoelectric sensors, nanotechnology chemical sensors, EMAT sensors, MEMS sensors, GPS, etc.) “obtaining status data relating to a component of the manufacturing device, the status data comprising events relating to, characteristic properties relevant for, or events relating to and characteristic properties relevant for utilization of the component within the manufacturing device” ([0054] To help fabricators improve productivity and quality of their weldments, and to drive continuous improvement in their welding operations, a welding information management system may be employed to collect real-time data from their welding equipment. Using such welding information management systems, fabricators can remotely assess welding performance information in real-time via a computer network (e.g., over the Internet, a local network, etc.). Welding information management systems help fabricators assess performance indicators, such as productivity and quality of the work. For example, welding information management systems may be used to improve a weldment's weld quality by detecting a predetermined characteristic, such as potential weld defects, and by identifying operators associated with the potentially defective weldments. Suitable welding information management systems are available from Miller Electric Mfg. Co. 
For example, Insight Core™ and Insight Centerpoint™ are Internet-based industrial welding information management solutions that collect and report, for example, arc starts, arc-on time, identify missing welds, and quality performance based on amperage and voltage. Insight Core™ further provides, for example, a real-time snapshot of a weld cell's performance, thereby eliminating outdated and often ineffective methods of manual data collection on the production floor. [0072] A first operator interface 238 may be provided at the welding station that enables welding personnel (e.g., a welding operator, a supervisor/manager, maintenance personnel, quality control personnel, etc.) to indicate, or enter, any equipment fault classification, set points, set up conditions, quality classification, and/or other parameters. In some aspects, certain of the parameters (e.g., weld programs, set points, set up conditions, etc.) and fault or event codes may be transmitted from the robot and/or welding equipment 210 to the analytics computing platform 234 as input features or automatically detected/sensed, thereby obviating the need for welding personnel to manually indicate at least those parameters. The parameters and fault or event codes etc. may be transmitted with tags associated with work piece 206 and other related information for traceability as in metadata for later processing; wherein events and welding information that includes arc-on time and arc starts and missing welds are forms of status data that relate to the welding head of the welding equipment) “labelling one or more subsets of the operational data, the labelling comprising associating one or more of the events, the characteristic properties, or the events and the characteristic properties to the one or more subsets” ([0056] The machine-learning algorithm may be supervised, or unsupervised. 
Supervised machine learning algorithms requires that the output of the hypothesis be “labeled,” thus each feature is a pair of input (x) and output (y), or {(x, y)}. For instance, given a particular welding current as input, a resulting weld may be labeled using a binary output (or class) (e.g., either an “acceptable” weld or an “unacceptable” weld). The hypothesis used to classify the resulting weld may be called a classifier. [0079] As illustrated, the work-in-process weldment (e.g., a carrier 402) may be tagged with a tag 404, which may be used to identify and track the weldment. When the work-in-process weldment arrives at an inspection station 408, a quality assurance device (e.g., tensile machine 410a, computing device 410b, etc.) may classify the weldment as passing (or failing) in one or more aspects of routine tests and communicate the test results (y1, y2) along with tag data (or weld quality metadata) to the analytics computing platform 234. Analytics computing platform 234 will combine the “x” and the “y” (e.g., {(x, tag)} and {(y1, y2, tag)}) data together to form a complete training example {(x, y)}. For example, “x” may be a vector of all the sensors of welding process and equipment, while the “y1” vector may include fault codes, events and error logs from networked welding machines & robots & PLCs that can be digitally and automatically transmitted, but “y2” may be a human interface for manual entry by the maintenance personnel when he recovers the fault (such as those described with regard to FIGS. 3a and 3b). 
The “x,” “y1,” and “y2” data may be further time stamped so that when they reach the analytics computing platform 234; wherein the combining of the x data, which is sensor data such as the current time series data above, with the events (including errors) as y data constitutes an associating/labeling of the two; [0151] To increase ease of use, an existing infrastructure of data collection (e.g., a welding information management system) may be retrofitted with a welding data labeling interface (e.g., operator interface 238, 510) such that predictions regarding maintenance and quality control can be made using subsequent welding data. In one implementation, the algorithm training can be automated where the welding data labeling interface may be a digital interface to other welding equipment, or weld quality inspection instruments) “providing the one or more subsets as labelled training data for training a machine learning model, wherein the machine learning model serves for outputting a quality indicator based on the labelled training data input; and” ([0063] an object of the subject disclosure is to depart from conventional approaches of machine learning algorithms training using a costly controlled small dataset, to a new paradigm of machine learning algorithms training using a large scale dataset that is dynamically generated from actual welding equipment in production with network connectivity and from actual quality control and maintenance activities in a factory. 
In other words, as disclosed herein, a weld production knowledge machine learning algorithm may be trained by a large scale dataset of actual welding process data collected continuously in real-time from real life welding equipment in production (e.g., on-line) at one or more welding cells of a fabricator, and at one or more fabricator sources, and by actual weld quality data from real life inspection equipment, and using actual quality standards of pass/fail from fabricators themselves, which may be integral with fabricators' quality control system. In other words, the data labeling of supervised learning (e.g. weld quality or weld equipment maintenance conditions) is not done generically in controlled experiments for all applications but can be customized and based on actual human decisions specific to each application.[0079] The “x,” “y1,” and “y2” data may be further time stamped so that when they reach the analytics computing platform 234, a pre-processor can parse and assemble them into dataset before ingesting them into machine learning algorithms of the one or more analytics computing platforms 234 for training, validation and testing. The data transmitted by the welding cell 406 and inspection station 408 include metadata with “tags” or supplemental information such as weldment traceability, time and location data attached to the weld process data, welding equipment maintenance data and weld quality data. The data may be in human readable forms such as XML or JSON. Alternatively, it may be binary or machine-readable only. In certain aspects, data transmitted by 406 may be encapsulated by a content neutral wrapper to accommodate other formats. In certain aspects, the data may be formatted into a standardized or structured form. 
For example, a wrapper may be employed, which, extracts content of a particular information source and translates it into, for example, a relational form; [0087] The machine learning application engine virtual machine can use MAHOUT™ and/or MLlib implementations of distributed and/or scalable machine learning algorithms or libraries. Java-based WEKA® open source ML software can be used for data mining. Alternatively ad-hoc machine learning algorithms can be provided, which may be developed using, for example, R Connector, SAS® software, MATLAB®, Octave, etc. The algorithms may be built on the APACHE™ HADOOP® cloud computing layers below. For example, the machine learning engines like MAHOUT™ libraries/MLlib/WEKA® can use MapReduce paradigm to perform supervised learning and unsupervised learning for hypothesis training, validation and testing; and to provide services such weld quality prediction, maintenance prediction and data mining (for unexpected anomaly detection and alarm) “providing the trained machine learning model for quality inspection” ([0093] The one or more analytics computing platforms 234 may employ a supervised machine learning algorithm relying on data labeling generated by human (e.g., via operator interface 238, 510) and/or machine, such as linear regression, logistic regression, neural network, and support vector machine (SVM) large margin classifier. The one or more analytics computing platforms 234 may instead employ an unsupervised machine learning algorithm that does not rely on user labeling of the output, such as K-means (e.g., KNN classification), Kohonen self-organizing maps, competitive learning, clustering, PCA for general anomaly detection and for data compression as part of supervised machine learning.[0094] After the neural network is trained, it may be used to predict weld geometry based on the welding parameters: user interface 238 in FIG. 
2 may display a web page served out of Analytics computing platform(s) 234 that runs the “Weld geometry prediction” virtual machine. The web page may comprise a graphical representation of the weld macro similar to 410Eb in FIG. 4e to show user what the expected weld looks like with the weld parameters programmed in the welding equipment 210. The user may play out “what-if” scenarios to see possible parameter optimization routes. For example, the user may change the gap size of the joint and see the effect on the bead profile. Although FIG. 4e only illustrates two dimensional weld measurements (bead profile and penetration profile), other dimensions may be measured and used for machine learning, e.g., spatter level, distortion, residual stress, microstructure, hardness, mechanical properties of the weld, discoloration, surface blemish, etc.).
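For illustration only, the tag-based assembly of training examples that Hsu [0079] describes (joining tagged “x” sensor vectors with tagged “y” quality labels by a shared tag to form complete examples {(x, y)}) may be sketched as follows. The field names and values below are hypothetical and not part of the record:

```python
# Illustrative sketch (hypothetical field names) of Hsu's training-example
# assembly: "x" sensor vectors and "y" quality labels, each tagged for
# traceability, are joined by tag to form complete examples {(x, y)}.

def assemble_training_examples(x_records, y_records):
    """Join tagged sensor data (x) with tagged quality labels (y) by tag."""
    labels_by_tag = {rec["tag"]: rec["label"] for rec in y_records}
    examples = []
    for rec in x_records:
        label = labels_by_tag.get(rec["tag"])
        if label is not None:  # only tags with both x and y form an example
            examples.append((rec["features"], label))
    return examples

x_records = [
    {"tag": "W-001", "features": [220.0, 24.1]},  # e.g., current, voltage
    {"tag": "W-002", "features": [231.5, 23.8]},
    {"tag": "W-003", "features": [198.2, 25.0]},  # no inspection result yet
]
y_records = [
    {"tag": "W-001", "label": "acceptable"},
    {"tag": "W-002", "label": "unacceptable"},
]

examples = assemble_training_examples(x_records, y_records)
# Two complete {(x, y)} examples; the unlabeled tag W-003 is excluded.
```

The sketch reflects only the combining step attributed to the analytics computing platform 234; the choice of supervised algorithm trained on such pairs is separate.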

 Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2, 5-7 and 11-16 are rejected under 35 U.S.C. 103 as being unpatentable over Hsu as applied to claims 1, 8 and 10 above, and further in view of Prakash et al. (US 20240160550, hereinafter Prakash).

In regards to Claim 2, Hsu teaches the method for quality inspection as incorporated by claim 1 above. 
Hsu further teaches “The computer-implemented method of claim 1, further comprising: creating a query comprising at least one first condition for the operational data” ([0087] OLAP virtual machine service or online analytical processing of streaming data could be provided, e.g., with MonetDB open source system for efficient complex queries against large databases. [0117] the analytics computing platform 234's centralized weld production knowledge system allows the fabricator 604, or a third party (e.g., service provider to the fabricator), to remotely monitor and/or manage one or more welding systems (or welding equipment). For example, machine learning analysts and weld engineers may be situated at a remote analyst labor center 602 with access to the analytics computing platform 234, without physically visiting each customer's location (e.g., each fabricator 604a, 604b, 604c, 604n). These analysts may remotely use tools (e.g., MATLAB®, Octave, etc.) installed at the analytics computing platform 234 to perform manual tasks such as examining feature histogram and perform various transformation of features to achieve a normal distribution (i.e., Gaussian) before feeding the features into a chosen weld production knowledge machine learning algorithm; manually inspecting the learning curves, and performing training and cross-validation error analysis and ceiling analysis; manually choosing features and see the effect in error analysis to separate anomaly from normal distribution, etc. Weld engineers at the Labor Center may remotely query the weld data stores, accessing the weld process data, as well as corresponding weld quality data at the big data analytics computing platform, and remotely perform welding engineering tasks such as identifying the root cause of porosity, lack of fusion or solidification cracking).  
Hsu fails to teach “creating a query comprising at least one first condition for the operational data and at least one second condition for the status data; and retrieving, based on the query, one or more subsets of the operational data fulfilling the at least one first condition and falling within a time span during which the at least one second condition is fulfilled by the status data”.
Prakash teaches “creating a query comprising at least one first condition for the operational data and at least one second condition for the status data” (Fig. 5 and [0079] FIG. 1 depicts a non-limiting example historian 111 comprising a computer system for securely providing and obtaining configuration data according to some embodiments. In some embodiments, an operational historian can store (e.g., “historize”) various types of data related to an industrial process. Some example data can include, but is not limited to, time-series data, metadata, event data, configuration data, raw time-series binary data, tag metadata, diagnostic log data, and the like.  An operational historian can analyze process related data stored in an operational historian database and transform that data into timely reports that are communicated to one or more user displays. In this manner, an operational historian can filter (e.g., curate) data to raise the visibility of the data to users (e.g., via user displays) without overwhelming them and/or overburdening communications networks [0097] Some embodiments include a computer-implemented system and method comprising program logic executed by at least one processor enabling one or more users to visualize all related alarms for an asset based on one or more asset searches (e.g., such as one or more searches initiated through search service 216); wherein a search is a query; [0101] FIG. 5 illustrates a non-limiting example embodiment of an alarm view page 500 according to some embodiments. In some embodiments, the grid 510 can show a list of all alarms generated for the selected asset as well as for its children. Some further embodiments include one or more additional, adjoining and/or overlapping designs including alarm display and statistics. In some embodiments, the system and method can process and provide a chart area (shown on the left side of FIG. 5 and shown enlarged in FIGS. 
6A and 6B) that can be used to display useful alarm summary information where the user is provided a snapshot of alarm activity; [0103] In some embodiments, alarms can be grouped by alarm, tag, area and/or object according to a “Group By” control. In some embodiments, alarms can be selected based on condition using the selector 520, including, but not limited to, selected conditions 521, 523, 525, and 527; wherein the asset/sensor being filtered is the first condition, and the alarms being filtered is the second condition) “and retrieving, based on the query, one or more subsets of the operational data fulfilling the at least one first condition and falling within a time span during which the at least one second condition is fulfilled by the status data” ([0102] the alarm view page 500 can be filtered by time or date using selection filter 590 shown at the bottom of the alarm view page 500. [0104] In some embodiments, the grid can show color key rectangles next to data in all cells of the columns represented by the currently selected group (shown as alert column 560). In some embodiments, the Pareto chart 530 can then show a set of data representing the number of alarms grouped by current selection. In FIG. 6B, alarm counts 532, 534, 536, 538, 539 are shown according to some embodiments. [0118] In some embodiments, spark lines (e.g., small inline or overlaid charts) are constructed by fetching process values from the system server for a specific tag mentioned in each alarm record. In some embodiments, if process values are empty for a given tag, then an empty spark line (which is indicated by filling spark line charts with a solid color in some embodiments) can be shown in the grid or grid section. In some embodiments, if the process values are present, then the spark line is drawn using process values. 
In some embodiments, after drawing the spark line, a section of the spark line is highlighted based on the ‘in alarm’ duration and colored according to the severity of the alarm; wherein Fig. 5 shows a sparkline (a subset of the operational data fulfilling the first condition) for a particular asset's tag in association with an alarm (the second condition being fulfilled) within the time span (i.e., the last 7 days)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Hsu, which creates databases of welding data used for training machine learning algorithms relating to the quality of welding and welding equipment, with the system of Prakash, which uses databases containing related time-series and event/alarm data for devices and is able to query that data so that all instances where first and second filtering conditions hold over a time span are sent to a user for display, in order to gain the stated benefit of Prakash, namely that “[0081] aspects of the system 200 can filter (e.g., curate) the data to raise the visibility of the data to users (e.g., via the user displays) without overwhelming them and/or overburdening communications networks”.  In other words, by taking the operational data and status data that can be queried as taught by Hsu, and improving it with the filtering and viewing system of Prakash, which allows displaying of database values that meet first and second conditions, including displaying of alarms and an alarm count over a period (status data) and displaying a sparkline that shows a snapshot of sensor data surrounding that alarm condition in the time span (operational data), the combination amounts to taking a known system (Hsu) and improving it with known methods (Prakash) in a way that achieves predictable results and improvements to the base system.  
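For illustration only, the claimed retrieval as mapped above (a first condition on the operational data and a second condition on the status data, with the retrieved subsets falling within a time span during which the second condition is fulfilled) may be sketched as follows. The schema, thresholds, and alarm name below are hypothetical and not drawn from either reference:

```python
# Illustrative sketch (hypothetical schema) of the combined query in the
# Hsu/Prakash combination: retrieve subsets of operational time-series data
# that satisfy a first condition (e.g., current above a threshold) and fall
# within a time span during which a second condition on the status data
# (e.g., an active alarm) is fulfilled.

def query_subsets(operational, status, first_cond, second_cond):
    """Return operational samples meeting first_cond that lie inside any
    status interval meeting second_cond."""
    # Time spans during which the second condition holds on the status data.
    spans = [(ev["start"], ev["end"]) for ev in status if second_cond(ev)]
    return [
        s for s in operational
        if first_cond(s) and any(lo <= s["t"] <= hi for lo, hi in spans)
    ]

operational = [  # time-stamped sensor samples (operational data)
    {"t": 1, "current": 180.0},
    {"t": 5, "current": 240.0},
    {"t": 9, "current": 250.0},
]
status = [  # event records with durations (status data)
    {"start": 4, "end": 7, "alarm": "overcurrent"},
]

subset = query_subsets(
    operational, status,
    first_cond=lambda s: s["current"] > 200.0,            # first condition
    second_cond=lambda ev: ev["alarm"] == "overcurrent",  # second condition
)
# Only the t=5 sample meets both: current > 200 and within the alarm span.
```

The t=9 sample satisfies the first condition but falls outside the alarm's time span, so it is excluded, mirroring the time-filtered alarm view attributed to Prakash.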

In regards to Claim 5, the combination of Hsu and Prakash teach the method for quality inspection as incorporated by claim 2 above.  Hsu further teaches “The computer-implemented method of claim 2, further comprising: recording, by the manufacturing device, a time stamp with each data item of the time series of the operational data… recording, by a second client, a time stamp with each event of the status data, and” ([0079]  Analytics computing platform 234 will combine the “x” and the “y” (e.g., {(x, tag)} and {(y1, y2, tag)}) data together to form a complete training example {(x, y)}. For example, “x” may be a vector of all the sensors of welding process and equipment, while the “y1” vector may include fault codes, events and error logs from networked welding machines & robots & PLCs that can be digitally and automatically transmitted, but “y2” may be a human interface for manual entry by the maintenance personnel when he recovers the fault (such as those described with regard to FIGS. 3a and 3b). The “x,” “y1,” and “y2” data may be further time stamped so that when they reach the analytics computing platform 234, a pre-processor can parse and assemble them into dataset before ingesting them into machine learning algorithms of the one or more analytics computing platforms 234 for training, validation and testing. 
The data transmitted by the welding cell 406 and inspection station 408 include metadata with “tags” or supplemental information such as weldment traceability, time and location data attached to the weld process data, welding equipment maintenance data and weld quality data; wherein it is implied that the manufacturing device (the welding system) is what time stamps the data, since the time stamping occurs before the data reaches the analytics computer) “transmitting, by a first client communicatively coupled to the manufacturing device, the operational data to a first server and storing the time series in a first database communicatively coupled to the first server; and… transmitting the events to a second server and storing the status data in a second database communicatively coupled to the second server.” (Fig. 4a and [0012] In certain aspects, the first data source and the second data source each include: a sensor; a non-transitory data storage device; an operator interface; a database inside or outside welding equipment; or a combination thereof; [0101] The communication interface circuitry 512 comprises circuitry (e.g., a microcontroller and memory) operable to facilitate communication with one or more other devices or systems. The communication interface circuitry 512 is operable to interface the control circuitry 502 to the antenna 516 and/or port 514 for transmit and receive operations. For transmit, the communication interface 512 may receive data from the control circuitry 502 and packetize the data and convert the data to physical layer signals in accordance with protocols in use on the communication link 230. In certain aspects, the data may be communicated in batches, rather than in real time, but real time is still possible. 
For example, welding data from the welding equipment 210 may be communicated to the analytics computing platform 234 in batches, [0110] Thus, the non-transitory data storage device may be further configured to store any received welding data (e.g., welding data received by the analytics computing platform 234 from a welding system) and to create a weld data store of previously received welding data (e.g., historic welding data), which may employ a large-scale dataset associated with one or more fabricators. Thus, in certain aspects, the weld data store may employ a large scale dataset comprising, for example, (1) welding process data collected from welding equipment that is associated with one or more fabricators, and/or (2) weld quality data associated with said welding equipment that is associated with one or more fabricators. The fabricators represented in the weld data store need not be related, rather, they may be unrelated fabricators. In other words, the weld data store may dynamically receive and store the welding process data that is to be used in both present and/or future weld analyses; Fig. 4a shows x, y1 and tag information being sent from manufacturing and inspection equipment to analytics platform 234 and stored in different databases).
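For illustration only, the claim 5 mapping (time stamping each data item at its source, then storing the operational time series and the status events in separate databases) may be sketched as follows. The record structures and store names below are hypothetical and not part of either reference:

```python
# Illustrative sketch (hypothetical structures) of the claim 5 mapping:
# each data item is time stamped at its source before transmission, and
# operational time-series data and status events are stored separately.

import time

first_db, second_db = [], []  # stand-ins for the first and second databases

def record_operational(value, clock=time.time):
    """Time stamp a sensor sample at the manufacturing device and store it
    in the first database."""
    first_db.append({"t": clock(), "value": value})

def record_event(event, clock=time.time):
    """Time stamp a status event at the client and store it in the second
    database."""
    second_db.append({"t": clock(), "event": event})

record_operational(238.5)          # e.g., a welding current sample
record_event("fault_code_E21")     # e.g., an equipment fault event
```

Because each record carries its own time stamp from the source, the two stores can later be correlated by time, as in the combining of “x” and “y” data at the analytics computing platform 234.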

In regards to Claim 6, the combination of Hsu and Prakash teach the method for quality inspection as incorporated by claim 5 above.  Hsu further teaches “The computer-implemented method of claim 5, further comprising: querying the first database” (([0087] OLAP virtual machine service or online analytical processing of streaming data could be provided, e.g., with MonetDB open source system for efficient complex queries against large databases. [0117] the analytics computing platform 234's centralized weld production knowledge system allows the fabricator 604, or a third party (e.g., service provider to the fabricator), to remotely monitor and/or manage one or more welding systems (or welding equipment). For example, machine learning analysts and weld engineers may be situated at a remote analyst labor center 602 with access to the analytics computing platform 234, without physically visiting each customer's location (e.g., each fabricator 604a, 604b, 604c, 604n). These analysts may remotely use tools (e.g., MATLAB®, Octave, etc.) installed at the analytics computing platform 234 to perform manual tasks such as examining feature histogram and perform various transformation of features to achieve a normal distribution (i.e., Gaussian) before feeding the features into a chosen weld production knowledge machine learning algorithm; manually inspecting the learning curves, and performing training and cross-validation error analysis and ceiling analysis; manually choosing features and see the effect in error analysis to separate anomaly from normal distribution, etc. Weld engineers at the Labor Center may remotely query the weld data stores, accessing the weld process data, as well as corresponding weld quality data at the big data analytics computing platform, and remotely perform welding engineering tasks such as identifying the root cause of porosity, lack of fusion or solidification cracking).  
Prakash further teaches “The computer-implemented method of claim 5, further comprising: querying the first database based on a first part of the query comprising the at least one first condition; and querying the first database or a third database based on a second part of the query comprising the at least one second condition” (Fig. 5 and [0007] Therefore, there is a need for a system that automatically monitors production environments and generates a display with items generated from relevant information from enormous amounts of asset data (e.g., tags) stored on a database, such that timely action can be taken to prevent the loss of profit. [0079] FIG. 1 depicts a non-limiting example historian 111 comprising a computer system for securely providing and obtaining configuration data according to some embodiments. In some embodiments, an operational historian can store (e.g., “historize”) various types of data related to an industrial process. Some example data can include, but is not limited to, time-series data, metadata, event data, configuration data, raw time-series binary data, tag metadata, diagnostic log data, and the like.  An operational historian can analyze process related data stored in an operational historian database and transform that data into timely reports that are communicated to one or more user displays. In this manner, an operational historian can filter (e.g., curate) data to raise the visibility of the data to users (e.g., via user displays) without overwhelming them and/or overburdening communications networks [0097] Some embodiments include a computer-implemented system and method comprising program logic executed by at least one processor enabling one or more users to visualize all related alarms for an asset based on one or more asset searches (e.g., such as one or more searches initiated through search service 216); wherein a search is a query; [0101] FIG. 
5 illustrates a non-limiting example embodiment of an alarm view page 500 according to some embodiments. In some embodiments, the grid 510 can show a list of all alarms generated for the selected asset as well as for its children. Some further embodiments include one or more additional, adjoining and/or overlapping designs including alarm display and statistics. In some embodiments, the system and method can process and provide a chart area (shown on the left side of FIG. 5 and shown enlarged in FIGS. 6A and 6B) that can be used to display useful alarm summary information where the user is provided a snapshot of alarm activity; [0103] In some embodiments, alarms can be grouped by alarm, tag, area and/or object according to a “Group By” control. In some embodiments, alarms can be selected based on condition using the selector 520, including, but not limited to, selected conditions 521, 523, 525, and 527; wherein the asset/sensor being filtered is the first condition, and the alarms being filtered is the second condition).

In regards to Claim 7, the combination of Hsu and Prakash teach the method for quality inspection as incorporated by claim 2 above.  Prakash further teaches “The computer-implemented method of claim 2, further comprising: identifying concurrent time spans within the operational data and the status data fulfilling the at least one first condition and the at least one second condition of the query, respectively” ([0110] In some embodiments, the system and methods associated therewith can process data based on an asset hierarchy and selected time duration, where raw alarms are fetched from a system server such as computer 203. [0114] if the group contains only an ‘alarm.clear’ record, then unack duration and in alarm durations are calculated based on the start time specified in the time control and event time registered in the ‘alarm.clear’ record. Later, additional properties (such as “in-alarm”, “is-silenced”, and “is-shelved”) are calculated. For example, some embodiments include rule-based processing definitions that can comprise one or more of: [0115] A “In-Alarm”: Within the queried duration, this property is set to true for each alarm if the ‘Alarm.Clear’ record is not present for that alarm. If not, this property is set to false; [0118] In some embodiments, spark lines (e.g., small inline or overlaid charts) are constructed by fetching process values from the system server for a specific tag mentioned in each alarm record. In some embodiments, if process values are empty for a given tag, then an empty spark line (which is indicated by filling spark line charts with a solid color in some embodiments) can be shown in the grid or grid section. In some embodiments, if the process values are present, then the spark line is drawn using process values. 
In some embodiments, after drawing the spark line, a section of the spark line is highlighted based on the ‘in alarm’ duration and colored according to the severity of the alarm; wherein the filtering conditions for presenting grid in Fig. 5).

In regards to Claim 11, Hsu teaches the non-transitory computer-readable medium for quality inspection as incorporated by claim 10 above.  
Hsu further teaches “The non-transitory computer-readable storage medium of claim 10, wherein the instructions further comprise: creating a query comprising at least one first condition for the operational data” ([0087] OLAP virtual machine service or online analytical processing of streaming data could be provided, e.g., with MonetDB open source system for efficient complex queries against large databases. [0117] the analytics computing platform 234's centralized weld production knowledge system allows the fabricator 604, or a third party (e.g., service provider to the fabricator), to remotely monitor and/or manage one or more welding systems (or welding equipment). For example, machine learning analysts and weld engineers may be situated at a remote analyst labor center 602 with access to the analytics computing platform 234, without physically visiting each customer's location (e.g., each fabricator 604a, 604b, 604c, 604n). These analysts may remotely use tools (e.g., MATLAB®, Octave, etc.) installed at the analytics computing platform 234 to perform manual tasks such as examining feature histogram and perform various transformation of features to achieve a normal distribution (i.e., Gaussian) before feeding the features into a chosen weld production knowledge machine learning algorithm; manually inspecting the learning curves, and performing training and cross-validation error analysis and ceiling analysis; manually choosing features and see the effect in error analysis to separate anomaly from normal distribution, etc. Weld engineers at the Labor Center may remotely query the weld data stores, accessing the weld process data, as well as corresponding weld quality data at the big data analytics computing platform, and remotely perform welding engineering tasks such as identifying the root cause of porosity, lack of fusion or solidification cracking).  
Hsu fails to teach “creating a query comprising at least one first condition for the operational data and at least one second condition for the status data; and retrieving, based on the query, one or more subsets of the operational data fulfilling the at least one first condition and falling within a time span during which the at least one second condition is fulfilled by the status data”.
Prakash teaches “creating a query comprising at least one first condition for the operational data and at least one second condition for the status data” (Fig. 5 and [0079] FIG. 1 depicts a non-limiting example historian 111 comprising a computer system for securely providing and obtaining configuration data according to some embodiments. In some embodiments, an operational historian can store (e.g., “historize”) various types of data related to an industrial process. Some example data can include, but is not limited to, time-series data, metadata, event data, configuration data, raw time-series binary data, tag metadata, diagnostic log data, and the like.  An operational historian can analyze process related data stored in an operational historian database and transform that data into timely reports that are communicated to one or more user displays. In this manner, an operational historian can filter (e.g., curate) data to raise the visibility of the data to users (e.g., via user displays) without overwhelming them and/or overburdening communications networks [0097] Some embodiments include a computer-implemented system and method comprising program logic executed by at least one processor enabling one or more users to visualize all related alarms for an asset based on one or more asset searches (e.g., such as one or more searches initiated through search service 216); wherein a search is a query; [0101] FIG. 5 illustrates a non-limiting example embodiment of an alarm view page 500 according to some embodiments. In some embodiments, the grid 510 can show a list of all alarms generated for the selected asset as well as for its children. Some further embodiments include one or more additional, adjoining and/or overlapping designs including alarm display and statistics. In some embodiments, the system and method can process and provide a chart area (shown on the left side of FIG. 5 and shown enlarged in FIGS. 
6A and 6B) that can be used to display useful alarm summary information where the user is provided a snapshot of alarm activity; [0103] In some embodiments, alarms can be grouped by alarm, tag, area and/or object according to a “Group By” control. In some embodiments, alarms can be selected based on condition using the selector 520, including, but not limited to, selected conditions 521, 523, 525, and 527; wherein the asset/sensor being filtered is the first condition, and the alarms being filtered is the second condition) “and retrieving, based on the query, one or more subsets of the operational data fulfilling the at least one first condition and falling within a time span during which the at least one second condition is fulfilled by the status data” ([0102] the alarm view page 500 can be filtered by time or date using selection filter 590 shown at the bottom of the alarm view page 500. [0104] In some embodiments, the grid can show color key rectangles next to data in all cells of the columns represented by the currently selected group (shown as alert column 560). In some embodiments, the Pareto chart 530 can then show a set of data representing the number of alarms grouped by current selection. In FIG. 6B, alarm counts 532, 534, 536, 538, 539 are shown according to some embodiments. [0118] In some embodiments, spark lines (e.g., small inline or overlaid charts) are constructed by fetching process values from the system server for a specific tag mentioned in each alarm record. In some embodiments, if process values are empty for a given tag, then an empty spark line (which is indicated by filling spark line charts with a solid color in some embodiments) can be shown in the grid or grid section. In some embodiments, if the process values are present, then the spark line is drawn using process values. 
In some embodiments, after drawing the spark line, a section of the spark line is highlighted based on the ‘in alarm’ duration and colored according to the severity of the alarm; wherein Fig. 5 shows a sparkline (subset of operational data fulfilling the first condition) for a particular asset's tag in association with an alarm (second condition is fulfilled) and within the time span (i.e., the last 7 days), all as shown in Fig. 5).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system for creating databases of welding data used for training machine learning algorithms relating to the quality of welding and welding equipment, as taught by Hsu, with the system of Prakash, which uses databases that contain related time-series and event/alarm data for devices and is able to query that data so that, when first and second filtering conditions are true over a time span, all instances where the first and second filtering conditions are true are sent to a user for display, because doing so gains the stated benefit of Prakash, namely that “[0081] aspects of the system 200 can filter (e.g., curate) the data to raise the visibility of the data to users (e.g., via the user displays) without overwhelming them and/or overburdening communications networks”.  In other words, by taking the operational data and status data that are able to be queried as taught by Hsu, and improving it with the filtering and viewing system of Prakash that allows display of database values that meet first and second conditions, including display of alarms and alarm counts over a period (status data) and display of the sparkline that shows a snapshot of sensor data surrounding that alarm condition in the time span (operational data), the combination can be considered taking a known system (Hsu) and improving it with known methods (Prakash) in a way that achieves predictable results and improvements to the base system.  
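Purely as an illustrative sketch (not part of the prosecution record), the claimed two-condition retrieval can be expressed as follows. All data values, names, and thresholds below are hypothetical; the sketch assumes status data arrives as alarm active/clear events, mirroring Prakash's "in-alarm" duration model:

```python
from datetime import datetime, timedelta

# Hypothetical operational data: (timestamp, sensor value)
operational = [
    (datetime(2023, 1, 1, 0, 0) + timedelta(minutes=m), v)
    for m, v in enumerate([10, 55, 60, 12, 70, 65, 9])
]

# Hypothetical status data: (timestamp, event)
status = [
    (datetime(2023, 1, 1, 0, 1), "alarm.active"),
    (datetime(2023, 1, 1, 0, 3), "alarm.clear"),
    (datetime(2023, 1, 1, 0, 4), "alarm.active"),
    (datetime(2023, 1, 1, 0, 6), "alarm.clear"),
]

def alarm_spans(events):
    """Time spans during which the second condition (alarm active) is fulfilled."""
    spans, start = [], None
    for t, e in events:
        if e == "alarm.active":
            start = t
        elif e == "alarm.clear" and start is not None:
            spans.append((start, t))
            start = None
    return spans

def query(operational, status, first_condition):
    """Retrieve operational-data subsets fulfilling the first condition
    and falling within a span during which the second condition holds."""
    spans = alarm_spans(status)
    return [
        (t, v) for t, v in operational
        if first_condition(v) and any(s <= t < e for s, e in spans)
    ]

result = query(operational, status, first_condition=lambda v: v > 50)
# result holds the values 55, 60, 70, 65 (each inside an alarm span)
```

The sketch simply intersects a value filter (first condition) with event-derived time spans (second condition); it is offered only to clarify the mapping discussed above.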

In regards to Claim 12, the combination of Hsu and Prakash teaches the non-transitory computer readable medium for quality inspection as incorporated by claim 11 above. Hsu further teaches “The non-transitory computer-readable storage medium of claim 11, wherein the instructions further comprise: providing the quality indicator to a user; initiating, based on the quality indicator, an alert; preventing, based on the quality indicator, further usage of the component; indicating/initiating, based on the quality indicator, a component inspection; at least temporarily stopping, based on the quality indicator, operation of the manufacturing device; or any combination thereof.” ([0096] The operator interface 510 may generate audible, visual, and/or tactile output (e.g., via speakers, a display, and/or motors/actuators/servos/etc.) in response to signals from the control circuitry 502. In certain aspects, one or more components of the operator interface 510 may be positioned on the welding tool, whereby control signals from the one or more components are communicated to the control circuitry 502 via conduit 218; [0115] precision and recall of the “live” performance of hypothesis may be displayed on operator interface 238 for user discretion of their tradeoff. User interface may also display the predicted life of contact tip, or h.sub.θ(x) to alert operator for pro-active tip change to avoid unexpected downtime; wherein predicted life of tip is a quality indicator provided to a user).

In regards to Claim 13, the combination of Hsu and Prakash teaches the non-transitory computer readable medium for quality inspection as incorporated by claim 12 above. Hsu further teaches “The non-transitory computer-readable storage medium of claim 12, wherein the instructions further comprise initiating, based on the quality indicator, the alert, wherein the alert comprises a notification displayed on a display screen of the manufacturing device to a user or an app in a cloud.” ([0096] The operator interface 510 may generate audible, visual, and/or tactile output (e.g., via speakers, a display, and/or motors/actuators/servos/etc.) in response to signals from the control circuitry 502. In certain aspects, one or more components of the operator interface 510 may be positioned on the welding tool, whereby control signals from the one or more components are communicated to the control circuitry 502 via conduit 218; [0115] precision and recall of the “live” performance of hypothesis may be displayed on operator interface 238 for user discretion of their tradeoff. User interface may also display the predicted life of contact tip, or h.sub.θ(x) to alert operator for pro-active tip change to avoid unexpected downtime; wherein alert of tip change is displayed and is for an operator (user)).

In regards to Claim 14, the combination of Hsu and Prakash teaches the non-transitory computer readable medium for quality inspection as incorporated by claim 11 above. Hsu further teaches “The computer-implemented method of claim 2, further comprising: recording, by the manufacturing device, a time stamp with each data item of the time series of the operational data… recording, by a second client, a time stamp with each event of the status data, and” ([0079]  Analytics computing platform 234 will combine the “x” and the “y” (e.g., {(x, tag)} and {(y1, y2, tag)}) data together to form a complete training example {(x, y)}. For example, “x” may be a vector of all the sensors of welding process and equipment, while the “y1” vector may include fault codes, events and error logs from networked welding machines & robots & PLCs that can be digitally and automatically transmitted, but “y2” may be a human interface for manual entry by the maintenance personnel when he recovers the fault (such as those described with regard to FIGS. 3a and 3b). The “x,” “y1,” and “y2” data may be further time stamped so that when they reach the analytics computing platform 234, a pre-processor can parse and assemble them into dataset before ingesting them into machine learning algorithms of the one or more analytics computing platforms 234 for training, validation and testing. 
The data transmitted by the welding cell 406 and inspection station 408 include metadata with “tags” or supplemental information such as weldment traceability, time and location data attached to the weld process data, welding equipment maintenance data and weld quality data; wherein it is implied that the manufacturing device (welding system) is that which time stamps the data because it is before reaching the analytics computer) “transmitting, by a first client communicatively coupled to the manufacturing device, the operational data to a first server and storing the time series in a first database communicatively coupled to the first server; and… transmitting the events to a second server and storing the status data in a second database communicatively coupled to the second server.” (Fig. 4a and [0012] In certain aspects, the first data source and the second data source each include: a sensor; a non-transitory data storage device; an operator interface; a database inside or outside welding equipment; or a combination thereof; [0101] The communication interface circuitry 512 comprises circuitry (e.g., a microcontroller and memory) operable to facilitate communication with one or more other devices or systems. The communication interface circuitry 512 is operable to interface the control circuitry 502 to the antenna 516 and/or port 514 for transmit and receive operations. For transmit, the communication interface 512 may receive data from the control circuitry 502 and packetize the data and convert the data to physical layer signals in accordance with protocols in use on the communication link 230. In certain aspects, the data may be communicated in batches, rather than in real time, but real time is still possible. 
For example, welding data from the welding equipment 210 may be communicated to the analytics computing platform 234 in batches, [0110] Thus, the non-transitory data storage device may be further configured to store any received welding data (e.g., welding data received by the analytics computing platform 234 from a welding system) and to create a weld data store of previously received welding data (e.g., historic welding data), which may employ a large-scale dataset associated with one or more fabricators. Thus, in certain aspects, the weld data store may employ a large scale dataset comprising, for example, (1) welding process data collected from welding equipment that is associated with one or more fabricators, and/or (2) weld quality data associated with said welding equipment that is associated with one or more fabricators. The fabricators represented in the weld data store need not be related, rather, they may be unrelated fabricators. In other words, the weld data store may dynamically receive and store the welding process data that is to be used in both present and/or future weld analyses; Fig. 4a shows x, y1 and tag information being sent from manufacturing and inspection equipment to analytics platform 234 and stored in different databases).
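As a further illustrative sketch only (no part of the record), the claimed arrangement of claim 14 — two clients each time-stamping their data and routing it to separate stores — can be outlined as follows; the in-memory lists merely stand in for the first and second databases, and all names are hypothetical:

```python
from datetime import datetime, timezone

operational_db = []   # stands in for the first database (time series)
status_db = []        # stands in for the second database (events)

def record_operational(value):
    """First client: time-stamp each time-series data item and store it."""
    operational_db.append({"ts": datetime.now(timezone.utc), "value": value})

def record_event(event):
    """Second client: time-stamp each status event and store it separately."""
    status_db.append({"ts": datetime.now(timezone.utc), "event": event})

record_operational(42.0)
record_event("fault_code_E101")
```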

In regards to Claim 15, the combination of Hsu and Prakash teaches the non-transitory computer readable medium for quality inspection as incorporated by claim 14 above. Hsu further teaches “The non-transitory computer-readable storage medium of claim 14, wherein the instructions further comprise: querying the first database” ([0087] OLAP virtual machine service or online analytical processing of streaming data could be provided, e.g., with MonetDB open source system for efficient complex queries against large databases. [0117] the analytics computing platform 234's centralized weld production knowledge system allows the fabricator 604, or a third party (e.g., service provider to the fabricator), to remotely monitor and/or manage one or more welding systems (or welding equipment). For example, machine learning analysts and weld engineers may be situated at a remote analyst labor center 602 with access to the analytics computing platform 234, without physically visiting each customer's location (e.g., each fabricator 604a, 604b, 604c, 604n). These analysts may remotely use tools (e.g., MATLAB®, Octave, etc.) installed at the analytics computing platform 234 to perform manual tasks such as examining feature histogram and perform various transformation of features to achieve a normal distribution (i.e., Gaussian) before feeding the features into a chosen weld production knowledge machine learning algorithm; manually inspecting the learning curves, and performing training and cross-validation error analysis and ceiling analysis; manually choosing features and see the effect in error analysis to separate anomaly from normal distribution, etc. 
Weld engineers at the Labor Center may remotely query the weld data stores, accessing the weld process data, as well as corresponding weld quality data at the big data analytics computing platform, and remotely perform welding engineering tasks such as identifying the root cause of porosity, lack of fusion or solidification cracking).  Prakash further teaches “The non-transitory computer-readable storage medium of claim 14, wherein the instructions further comprise: querying the first database based on a first part of the query comprising the at least one first condition; and querying the first database or a third database based on a second part of the query comprising the at least one second condition.” (Fig. 5 and [0007] Therefore, there is a need for a system that automatically monitors production environments and generates a display with items generated from relevant information from enormous amounts of asset data (e.g., tags) stored on a database, such that timely action can be taken to prevent the loss of profit. [0079] FIG. 1 depicts a non-limiting example historian 111 comprising a computer system for securely providing and obtaining configuration data according to some embodiments. In some embodiments, an operational historian can store (e.g., “historize”) various types of data related to an industrial process. Some example data can include, but is not limited to, time-series data, metadata, event data, configuration data, raw time-series binary data, tag metadata, diagnostic log data, and the like.  An operational historian can analyze process related data stored in an operational historian database and transform that data into timely reports that are communicated to one or more user displays. 
In this manner, an operational historian can filter (e.g., curate) data to raise the visibility of the data to users (e.g., via user displays) without overwhelming them and/or overburdening communications networks [0097] Some embodiments include a computer-implemented system and method comprising program logic executed by at least one processor enabling one or more users to visualize all related alarms for an asset based on one or more asset searches (e.g., such as one or more searches initiated through search service 216); wherein a search is a query; [0101] FIG. 5 illustrates a non-limiting example embodiment of an alarm view page 500 according to some embodiments. In some embodiments, the grid 510 can show a list of all alarms generated for the selected asset as well as for its children. Some further embodiments include one or more additional, adjoining and/or overlapping designs including alarm display and statistics. In some embodiments, the system and method can process and provide a chart area (shown on the left side of FIG. 5 and shown enlarged in FIGS. 6A and 6B) that can be used to display useful alarm summary information where the user is provided a snapshot of alarm activity; [0103] In some embodiments, alarms can be grouped by alarm, tag, area and/or object according to a “Group By” control. In some embodiments, alarms can be selected based on condition using the selector 520, including, but not limited to, selected conditions 521, 523, 525, and 527; wherein the asset/sensor being filtered is the first condition, and the alarms being filtered is the second condition).

In regards to Claim 16, the combination of Hsu and Prakash teaches the non-transitory computer readable medium for quality inspection as incorporated by claim 11 above. Prakash further teaches “The non-transitory computer-readable storage medium of claim 11, wherein the instructions further comprise: identifying concurrent time spans within the operational data and the status data fulfilling the at least one first condition and the at least one second condition of the query, respectively.” ([0110] In some embodiments, the system and methods associated therewith can process data based on an asset hierarchy and selected time duration, where raw alarms are fetched from a system server such as computer 203. [0114] if the group contains only an ‘alarm.clear’ record, then unack duration and in alarm durations are calculated based on the start time specified in the time control and event time registered in the ‘alarm.clear’ record. Later, additional properties (such as “in-alarm”, “is-silenced”, and “is-shelved”) are calculated. For example, some embodiments include rule-based processing definitions that can comprise one or more of: [0115] A “In-Alarm”: Within the queried duration, this property is set to true for each alarm if the ‘Alarm.Clear’ record is not present for that alarm. If not, this property is set to false; [0118] In some embodiments, spark lines (e.g., small inline or overlaid charts) are constructed by fetching process values from the system server for a specific tag mentioned in each alarm record. In some embodiments, if process values are empty for a given tag, then an empty spark line (which is indicated by filling spark line charts with a solid color in some embodiments) can be shown in the grid or grid section. In some embodiments, if the process values are present, then the spark line is drawn using process values. 
In some embodiments, after drawing the spark line, a section of the spark line is highlighted based on the ‘in alarm’ duration and colored according to the severity of the alarm; wherein the filtering conditions are used for presenting the grid in Fig. 5).
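As one further illustrative sketch (again outside the record, with hypothetical values), identifying concurrent time spans in which both conditions are fulfilled reduces to a pairwise interval intersection between the spans satisfying each condition:

```python
def intersect_spans(spans_a, spans_b):
    """Concurrent time spans where both conditions hold:
    pairwise intersection of the two sets of intervals."""
    out = []
    for a0, a1 in spans_a:
        for b0, b1 in spans_b:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:  # keep only non-empty overlaps
                out.append((lo, hi))
    return out

# Hypothetical spans (arbitrary time units)
first_condition_spans = [(0, 5), (8, 12)]   # operational data meets first condition
second_condition_spans = [(3, 9)]           # status data meets second condition

concurrent = intersect_spans(first_condition_spans, second_condition_spans)
# concurrent == [(3, 5), (8, 9)]
```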

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Ramnani et al. (US 20230169805) – teaches an asset data collection regime for a fleet whereby real-time sensor data is collected and extracted using a decoder
Kloepper et al (US 20230019404) – teaches a data cleaning method for industrial devices that collects real-time sensor data and event log data and uses the cleaned data in machine learning processes
Quinonez et al. (US 20220048253) – teaches a data collection system for manufacturing devices for data aggregation and analytics
Nishiyama et al. (US 20180267510) – teaches a network of manufacturing control devices that collect and store time series data 

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M SKRZYCKI whose telephone number is (571)272-0933. The examiner can normally be reached M-Th 7:30-3:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KAMINI SHAH can be reached at 571-272-2279. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.


/JONATHAN MICHAEL SKRZYCKI/Examiner, Art Unit 2116                                                                                                                                                                                                        
/ROBERT E FENNEMA/Supervisory Patent Examiner, Art Unit 2117                                                                                                                                                                                                        

