Patent Application 17512916 - SYSTEMS AND METHODS FOR MONITORING A COOKING OPERATION USING A CAMERA - Rejection
Title: SYSTEMS AND METHODS FOR MONITORING A COOKING OPERATION USING A CAMERA
Application Information
- Invention Title: SYSTEMS AND METHODS FOR MONITORING A COOKING OPERATION USING A CAMERA
- Application Number: 17512916
- Submission Date: 2025-04-09
- Effective Filing Date: 2021-10-28
- Filing Date: 2021-10-28
- National Class: 219
- National Sub-Class: 506000
- Examiner Employee Number: 96932
- Art Unit: 3761
- Tech Center: 3700
Rejection Summary
- 102 Rejections: 0
- 103 Rejections: 6
Cited Patents
The following patents were cited in the rejection:
- Libman et al., U.S. Pub. No. 2016/0309548 A1
- Du et al., U.S. Pub. No. 2022/0273134 A1
- Bhogal, U.S. Pub. No. 2018/0292092 A1
- Liu et al., U.S. Pub. No. 2021/0228022 A1
- Bate, U.S. Pub. No. 2020/0060470 A1
Office Action Text
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to the amendments filed 01/13/2025. Claims 1-16, 18-20 are pending in this application. As directed, claims 1 and 11 have been amended; claim 17 has been cancelled. With respect to 35 U.S.C. 101 Claim Rejections: Applicant’s amendments to Claim 1 have overcome the 35 U.S.C. 101 Claim Rejections set forth in the Non-Final Office Action dated 09/12/2024.

Response to Arguments

With respect to 35 U.S.C. 102 & 103 Claim Rejections: Regarding the independent claim 1, Applicant(s)’ arguments filed 01/13/2025 have been fully considered but are moot based on new ground(s) of rejection necessitated by amendments. Regarding claim 7 and independent claim 11, Applicant(s)’ arguments filed 01/13/2025 regarding the limitation “combining the first zone into the second zone to create a third zone” as recited in claim 7 and as currently added to the independent claim 11 have been fully considered, but they are not persuasive for the following reasons:

The Applicant(s)’ Argument: (Regarding claims 7 & 11 – Remarks dated 01/13/2025 on page 8) Applicant argued that the primary reference “Libman fails to disclose, suggest, or teach splitting or combining zones”; see details in the Remarks dated 01/13/2025 on page 8.

The Examiner’s Response: In response to Applicant’s argument that “Libman fails to disclose, suggest, or teach splitting or combining zones”, Examiner respectfully disagrees because: First, the primary reference Libman properly discloses the limitations: determining that the at least one characteristic of the cooking item in the first zone (first portion, Libman Pars.0142-0143) is the same as the at least one characteristic of the cooking item in the second zone (second portion, Libman Pars.0142-0143). Specifically, Pars.0142-0143 of Libman disclose the processing information includes the degree of doneness of portions of the object; Libman also discloses results obtained from the processing information of the first portion and the second portion can be the same or similar. To be more specific, Libman Par.0142 discloses “The first processing result for a first portion of the object may be the same as or similar to the processing result for a second portion of the object. Alternatively, the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.”, and Libman Par.0143 discloses “the processing information may include a desired amount of energy to be dissipated or absorbed in a portion or portions of the object, a desired target temperature profile of a portion or portions of the object, a desired degree of doneness of a portion or portions of the object, a desired target humidity of a portion or portions of the object, a desired density of a portion or portions of the object, a desired pH value of a portion or portions of the object, and/or desired cooking units to apply to a portion or portions of the object.”. Therefore, Libman discloses the doneness level of the cooking item in the first zone is the same as the doneness level of the cooking item in the second zone. Second, the primary reference Libman properly discloses the limitations: combining the first zone (first portion, Libman Pars.0142-0143) into the second zone (second portion, Libman Pars.0142-0143) to create a third zone.
Specifically, Libman Par.0023 discloses: “The image may comprise a combination of at least two of a graphical image, generated based on one or more values indicative of EM energy absorption in the object; a temperature profile, associating differing portions of the object in the energy application zone with different temperatures, or an optical image, generated based on visual light received from the energy application zone.”. Thus, Libman Par.0023 discloses the image may comprise a combination of at least two of a temperature profile, associating differing portions of the object in the energy application zone; and Libman Pars.0142-0143 discloses there are two portions including “first portion” and “second portion”. Thus, since two portions/zones are combined to obtain a combined image, the combined image is interpreted to be a third portion/zone, which is the combination of the two portions/zones. Therefore, Libman properly discloses the limitation “combining the first zone into the second zone to create a third zone” as recited in claim 7 & claim 11.

Claim Objections

Claims 9-10, 19-20 are objected to because of the following informalities: Claim 9 (line 2) and claim 19 (line 3) recite the limitation “a first zone”. It is understood that “a first zone” recited herein in claims 9 and 19 refers to the limitation “a first zone” recited previously in claim 1 (line 11) and claim 11 (line 16), respectively. Therefore, the limitation “a first zone” as recited in claims 9 and 19 should read “the first zone” to properly refer to the corresponding limitation recited previously in claim 1 (line 11) and claim 11 (line 16), respectively. Similarly, claim 9 (line 3) and claim 19 (line 4) recite the limitation “a second zone”. It is understood that “a second zone” recited herein in claims 9 and 19 refers to the limitation “a second zone” recited previously in claim 1 (line 13) and claim 11 (line 17), respectively. Therefore, the limitation “a second zone” recited in claims 9 and 19 should read “the second zone” to properly refer to the corresponding limitation recited previously in claim 1 (line 13) and claim 11 (line 17), respectively. Claim 10 is objected to by virtue of its dependence on claim 9. Claim 20 is objected to by virtue of its dependence on claim 19. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-4, 7 are rejected under 35 U.S.C. 103 as being unpatentable over Libman et al. (U.S. Pub. No. 2016/0309548 A1, previously cited) in view of Du et al. (U.S. Pub. No. 2022/0273134 A1, newly cited).

Regarding claim 1, Libman discloses a method of operating an oven appliance (“oven”, Libman Par.0058), the oven appliance (“oven”, Libman Par.0058) comprising a cabinet defining a cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130), a controller (processor 1680, Libman Fig.6) positioned within the cabinet, and a camera (camera 1610, Libman Fig.6) provided within the cooking chamber (“energy application zone”, Libman Par.0130) (Libman Par.0130 discloses “apparatus 1600 may include an image acquiring device, such as camera 1610, which, in operation, captures an image of the object in the energy application zone”, Libman Par.0191 discloses “The image acquiring device may be positioned in a known location and orientation in the energy application zone and configured with a predetermined field of view.”, and Libman Par.0058 discloses: “energy application zone 9 may include locations where energy is applied in an oven (e.g., a cooking oven)”, and Libman Par.0029 discloses “The method may further include causing application of a plurality of electromagnetic field patterns to the object in the energy application zone”; therefore, Libman discloses the energy application zone is the oven cavity where the energy is applied to the object placed in oven; thus, Libman discloses the camera provided within the cooking chamber), the method, performed by the controller (processor 1680, Libman Fig.6), comprising: capturing, via the camera (camera 1610, Libman Fig.6), a first image (image 1620, Libman Fig.6) of the cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130) (Libman Fig.14 Step 2140 discloses “Acquiring an image of the object from a camera”, Libman Par.0202 discloses “An image of the object, optionally an optical image or IR image may be acquired in step 2140, for example from camera 1610”, Libman Par.0130 discloses “As illustrated in FIG. 6, apparatus 1600 may include an image acquiring device, such as camera 1610, which, in operation, captures an image of the object in the energy application zone.”, additionally, Libman Par.0134 discloses “image 1620 may constitute an image of the inside of cavity 10 (FIG.
3) or energy application zone 9 where plate 1620 may be placed.”, Libman Par.0058 discloses: “energy application zone 9 may include locations where energy is applied in an oven (e.g., a cooking oven)” and Libman Par.0194 discloses “energy application zone 9 (e.g., cavity)”; therefore, Libman discloses capturing, via the camera, a first image of the cooking chamber); defining, on the first image (image 1620, Libman Fig.6), a plurality of zones (plurality of zones includes the “first portion” and the “second portion”, Libman Pars.0142-0143) within the cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130), each of the plurality of zones (each of plurality of zones including each of the “first portion” and the “second portion”, Libman Pars.0142-0143) comprising a first predetermined area of the cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130) (Libman Par.0135 discloses “image 1620 may also focus on one or more portions of energy application zone 9 or any objects placed in energy application zone 9. The term portion(s) is used herein interchangeably with any of the terms, area(s), segment(s), region(s), sub-volume(s)”; therefore, Libman discloses defining, on the first image, a plurality of portions within the cooking chamber; since Libman discloses portions can be area(s), segment(s), region(s), sub-volume(s), thus, each of the plurality of portions comprising a first predetermined area of the cooking chamber); analyzing, by one or more computing devices using a machine learning image recognition model (Libman Par.0180 discloses: “the image may be processed to obtain a processed image, and optionally scaled, by a processor (e.g., processor 1680, 2030 and/or 101)” and “the processing on the image may include: zooming, filtering, e.g., digital filtering, image recognition processing”), the first image (image 1620, Libman Fig.6) to evaluate a doneness level of a cooking item in each of the plurality of zones (plurality of zones includes the “first portion” and the “second portion”, Libman Pars.0142-0143) (Libman Par.0141 discloses: “Based on the information received from the user via interface 1640, processing information can be determined for selected portion(s) of the object. 
The selected portions may be selected by the user based on the image of the object.”, and Libman Par.0143 discloses “the processing information may include…a desired degree of doneness of a portion or portions of the object”; therefore, Libman discloses analyzing, by one or more computing devices using a machine learning image recognition model, the first image to evaluate a doneness level of a cooking item in each of the plurality of portions); and comparing the doneness level of the cooking item in a first zone (first portion, Libman Pars.0142-0143) of the plurality of zones (plurality of zones includes the “first portion” and the “second portion”, Libman Pars.0142-0143) (Libman Pars.0142-0143 discloses the processing information includes the degree of doneness of a portion or portions of the object, Libman also discloses results obtained from the processing information of the first portion and the second portion can be the same or different; therefore, Libman discloses comparing the doneness level of the cooking item in a first portion of the plurality of portions with the doneness level of the cooking item in a second portion of the plurality of portions; specifically, Libman Par.0142 discloses “The first processing result for a first portion of the object may be the same as or similar to the processing result for a second portion of the object. Alternatively, the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.”, and Libman Par.0143 discloses “the processing information may include a desired amount of energy to be dissipated or absorbed in a portion or portions of the object, a desired target temperature profile of a portion or portions of the object, a desired degree of doneness of a portion or portions of the object, a desired target humidity of a portion or portions of the object, a desired density of a portion or portions of the object, a desired pH value of a portion or portions of the object, and/or desired cooking units to apply to a portion or portions of the object.”; therefore, Libman discloses comparing the doneness level of the cooking item in a first portion of the plurality of portions with the doneness level of the cooking item in a second portion of the plurality of portions), comparing the doneness level of the cooking item in the first zone (first portion, Libman Pars.0142-0143) of the plurality of zones (plurality of zones includes the “first portion” and the “second portion”, Libman Pars.0142-0143) with the doneness level of the cooking item in a second zone (second portion, Libman Pars.0142-0143) of the plurality of zones (plurality of zones includes the “first portion” and the “second portion”, Libman Pars.0142-0143) (Libman Pars.0142-0143 discloses the processing information includes the degree of doneness of a portion or portions of the object, Libman also discloses results obtained from the processing information of the first portion and the second portion can be the same or different; therefore, Libman discloses comparing the doneness level of the cooking item in a first portion of the plurality of portions with the doneness level of the cooking item in a second portion of the plurality of portions; specifically, Libman Par.0142 discloses “The first processing result for a first portion of the object may be the same as or similar to the processing result for a second portion of the object. 
Alternatively, the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.”, and Libman Par.0143 discloses “the processing information may include a desired amount of energy to be dissipated or absorbed in a portion or portions of the object, a desired target temperature profile of a portion or portions of the object, a desired degree of doneness of a portion or portions of the object, a desired target humidity of a portion or portions of the object, a desired density of a portion or portions of the object, a desired pH value of a portion or portions of the object, and/or desired cooking units to apply to a portion or portions of the object.”; therefore, Libman discloses comparing the doneness level of the cooking item in a first portion of the plurality of portions with the doneness level of the cooking item in a second portion of the plurality of portions), the second zone (second portion, Libman Pars.0142-0143) being different from the first zone (first portion, Libman Pars.0142-0143) (Libman Pars.0142-0143 discloses the processing information includes the degree of doneness of a portion or portions of the object, Libman also discloses results obtained from the processing information of the first portion and the second portion can be the same or different; Libman Par.0142 discloses “the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.” & Libman Par.0022 discloses “the information relating to processing instructions indicates that a first portion of the object shown in the image is to be processed differently from a second portion of the object”; therefore, Libman discloses the second portion being different from the first portion). Libman does not disclose: determining that the doneness level of the cooking item in the first zone is different within the first zone; and splitting the first zone to create two smaller zones.
Du teaches a method of operating an oven appliance: determining that the doneness level (“cooking doneness”, Du Pars.0202 & 0231) of the cooking item in the first zone (heating region 140, Du Fig.2) is different within the first zone (heating region 140, Du Fig.2) (Du Fig.2 shows the heating region 140 includes the first heating region 142 and the second heating region 144, additionally, Du Par.0202 teaches: “by identifying the image information, that the cooking doneness of food corresponding to a certain heating region is lower than that in other heating regions, the heating device in this heating region is controlled to heat the food at higher power and/or continuously heat the food; and if it is determined that the cooking doneness of food corresponding to another heating region is higher than that in other heating regions, the heating device in this heating region is controlled to reduce the heating power and/or shorten the heating time, thereby preventing parts of the food from being burned or undercooked”, Du Par.0230 teaches: “the cooking doneness of the food material may be divided into six states including uncooked, blue rare, rare, medium, medium well and well done”, and Du Par.0232 teaches: “If the image information analyzed by the processor 120 shows that the food in the first heating region 142 is medium and the food in the second heating region 144 is medium well”; thus, Du teaches determining that the doneness level of the cooking item in the heating region 140 is different within the heating region 140); and splitting the first zone (heating region 140, Du Fig.2) to create two smaller zones (first heating region 142 and second heating region 144, Du Fig.2) (Du Par.0048 teaches: “the present disclosure provides a control method for a cooking apparatus, including: acquiring image information of a cooked food material; dividing the image information of the cooked food material into multiple regions according to pixel information in the image information of the cooked food material; and selecting a target operation mode according to a comparison result of a color difference value of any two regions and a first color difference threshold, and controlling heating devices of the cooking apparatus to operate according to the target operation mode”; it is noted that “multiple regions” means two regions or more, additionally, Du Figs.2, 6 & Par.0229 teaches: “Step 604, the image information is identified, and food types and cooking doneness of food corresponding to the first heating region and the second heating region are acquired”; thus, Du teaches splitting the heating region 140 to create two smaller heating regions 142 and 144). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman, by adding the teachings of determining that the doneness level of the cooking item in the first zone is different within the first zone, and splitting the first zone to create two smaller zones, as taught by Du, in order to control the heating of food by cooking region based on the cooking doneness of the food, thereby achieving uniform food heating, providing a consistent overall cooking doneness, preventing parts of the food from being burned or undercooked, and improving cooking performance, as recognized by Du [Du, Abstract]. 
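For illustration only, the zone-splitting operation addressed above — detecting that doneness differs within a single zone and dividing that zone into two smaller zones — can be sketched in a few lines of Python. The sketch is hypothetical: the Zone type, the per-pixel doneness map, and the 0.2 spread threshold are stand-ins chosen here for clarity, not the applicant's claimed implementation or code from Libman or Du.

```python
# Hypothetical sketch of "splitting the first zone to create two smaller zones".
# A zone is a rectangle over a per-pixel doneness map (0.0 = raw, 1.0 = well done).
from dataclasses import dataclass

import numpy as np


@dataclass
class Zone:
    row0: int
    col0: int
    row1: int  # exclusive
    col1: int  # exclusive


def doneness_spread(doneness_map: np.ndarray, zone: Zone) -> float:
    """Max-minus-min doneness inside the zone: a simple within-zone uniformity measure."""
    patch = doneness_map[zone.row0:zone.row1, zone.col0:zone.col1]
    return float(patch.max() - patch.min())


def split_zone(zone: Zone) -> tuple[Zone, Zone]:
    """Split a zone into two smaller zones along its longer axis."""
    if (zone.row1 - zone.row0) >= (zone.col1 - zone.col0):
        mid = (zone.row0 + zone.row1) // 2
        return (Zone(zone.row0, zone.col0, mid, zone.col1),
                Zone(mid, zone.col0, zone.row1, zone.col1))
    mid = (zone.col0 + zone.col1) // 2
    return (Zone(zone.row0, zone.col0, zone.row1, mid),
            Zone(zone.row0, mid, zone.row1, zone.col1))


def refine_zones(doneness_map: np.ndarray, zones: list[Zone],
                 spread_threshold: float = 0.2) -> list[Zone]:
    """Replace any zone whose internal doneness varies beyond the threshold
    with the two smaller zones produced by splitting it."""
    refined: list[Zone] = []
    for z in zones:
        if doneness_spread(doneness_map, z) > spread_threshold:
            refined.extend(split_zone(z))  # doneness differs within the zone
        else:
            refined.append(z)
    return refined


if __name__ == "__main__":
    # Left half cooked more than the right half, so the single zone is split.
    doneness = np.hstack([np.full((4, 4), 0.8), np.full((4, 4), 0.3)])
    print(refine_zones(doneness, [Zone(0, 0, 4, 8)]))  # -> two 4x4 zones
```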
Regarding claim 3, Libman in view of Du teaches the method as set forth in claim 1, Libman also discloses further comprising: determining that the doneness level of the cooking item in the first zone (first portion, Libman Pars.0142-0143) is different from the doneness level of the cooking item in the second zone (second portion, Libman Pars.0142-0143) (Libman Pars.0142-0143 discloses the processing information includes the degree of doneness of portions of the object, Libman also discloses results obtained from the processing information of the first portion and the second portion can be different; specifically, Libman Par.0142 discloses “The first processing result for a first portion of the object may be the same as or similar to the processing result for a second portion of the object. Alternatively, the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.”, and Libman Par.0143 discloses “the processing information may include a desired amount of energy to be dissipated or absorbed in a portion or portions of the object, a desired target temperature profile of a portion or portions of the object, a desired degree of doneness of a portion or portions of the object, a desired target humidity of a portion or portions of the object, a desired density of a portion or portions of the object, a desired pH value of a portion or portions of the object, and/or desired cooking units to apply to a portion or portions of the object.”; therefore, Libman discloses the doneness level of the cooking item in the first portion is different from the doneness level of the cooking item in the second portion); and alerting a user as to the determination, wherein alerting the user comprises providing a recommended course of action (it is noted that the limitation “doneness” is interpreted as “a cooked or heated level of a food item”, according to the Instant Application Par.0035: “the term "doneness" and the like are generally intended to refer to a cooked or heated level of a food item 152.”; in this case, the prior art Libman Par.0188 discloses “As the energy application progresses, the spatial temperature profile of the object may change. The changes may be continuously presented (e.g., displayed) to the user on the user interface. Additionally or alternatively the user may be alerted, for instance, audibly, if the temperature of one or more of the items or portions heat to a temperature outside an allowed range. In some embodiments, the user may select a display of the temperature profile among some given display options. The selection may be changed, for example, during energy application, according to the user's desire. For example, prior to the energy application, the user may select to display only the acquired image (e.g., optical image). However as energy application proceeds, the user may select a different display option, e.g. a spatial temperature profile image or the combined image, or the user may choose that the interface displays only temperatures at selected areas (when, for example, some areas are more heat sensitive than others)”; therefore, Libman discloses alerting the user as to the determination, wherein alerting the user comprises changing the selection for portions of the food being cooked).
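For illustration only, the claim-3 behavior discussed above — comparing per-zone doneness levels and alerting the user with a recommended course of action — might look like the following hypothetical sketch. The six-state doneness scale mirrors the one quoted from Du Par.0230; the message wording and the notify callback are invented for the example.

```python
# Hypothetical sketch of the claim-3 steps: determine that per-zone doneness
# differs, then alert the user with a recommended course of action.

# Ordered doneness states; Du Par.0230 describes a similar six-state scale.
LEVELS = ["uncooked", "blue rare", "rare", "medium", "medium well", "well done"]


def compare_and_alert(zone_doneness: dict[str, str], notify) -> None:
    """If two zones sit at different doneness levels, send an alert that
    recommends an action for the least-done zone."""
    ranked = sorted(zone_doneness, key=lambda z: LEVELS.index(zone_doneness[z]))
    least, most = ranked[0], ranked[-1]
    if zone_doneness[least] != zone_doneness[most]:
        notify(f"Zone {least!r} is {zone_doneness[least]} while zone {most!r} "
               f"is {zone_doneness[most]}; consider repositioning the item in "
               f"zone {least!r} toward a hotter part of the chamber.")


if __name__ == "__main__":
    # Mirrors the example quoted from Du Par.0232 (medium vs. medium well).
    compare_and_alert({"first": "medium", "second": "medium well"}, notify=print)
```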
Regarding claim 4, Libman in view of Du teaches the method as set forth in claim 3, Libman also discloses: wherein analyzing the first image (image 1620, Libman Fig.6) to evaluate the doneness level of the cooking item comprises comparing the first image (image 1620, Libman Fig.6) against one or more libraries of stored images via the machine learning image recognition model (Libman Par.0192 discloses: “a matrix of values corresponding to different known objects (e.g., different food objects) may be obtained and optionally recorded in a look up table.”, Libman Par.0193 discloses “the acquired image may be compared with pre-stored data taken, for example, from the lookup table.” and “the acquired image may be compared by image recognition methods known in the art”, and Libman Par.0180 discloses: “the image may be processed to obtain a processed image, and optionally scaled, by a processor (e.g., processor 1680, 2030 and/or 101)” and “the processing on the image may include: zooming, filtering, e.g., digital filtering, image recognition processing”, Libman Par.0198 discloses: “the lookup tables may include data regarding the temperature and/or water content levels of various items. The processor may further be configured to identify at least a portion of the object based on the temperature and/or humidity. The processor may further control the energy application to at least a portion of the object in order to achieve a desired temperature and/or a desired humidity level in at least a portion of the object or in several items within the object.”, Libman Par.0203 discloses: “automatic recognition (e.g., identification) of at least a portion or an item of the object may be performed, for example by comparing data stored in lookup tables with data gathered from the energy application zone, in step 2170. The data stored may include for example a list of: colors, shapes, textures, state of aggregation (liquid, solid or gas) and volume for the image, of at least a portion of known objects acquired by the image acquiring device, and energy absorption values associated with different known items.”, and the step-by-step process of comparing the doneness level is described in detail in Par.0201 of Libman, and it is noted that the limitation “doneness” is interpreted as “a cooked or heated level of a food item”, according to the Instant Application Par.0035: “the term "doneness" and the like are generally intended to refer to a cooked or heated level of a food item 152.”; therefore, Libman discloses analyzing the first image to evaluate the doneness level of the cooking item comprises comparing the first image against one or more libraries of stored images via the machine learning image recognition model).

Regarding claim 7, Libman in view of Du teaches the method as set forth in claim 1, Libman also discloses further comprising: determining that the doneness level of the cooking item in the first zone (first portion, Libman Pars.0142-0143) is the same as the doneness level of the cooking item in the second zone (second portion, Libman Pars.0142-0143) (Libman Pars.0142-0143 discloses the processing information includes the degree of doneness of portions of the object, Libman also discloses results obtained from the processing information of the first portion and the second portion can be the same or similar; specifically, Libman Par.0142 discloses “The first processing result for a first portion of the object may be the same as or similar to the processing result for a second portion of the object.
Alternatively, the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.”, and Libman Par.0143 discloses “the processing information may include a desired amount of energy to be dissipated or absorbed in a portion or portions of the object, a desired target temperature profile of a portion or portions of the object, a desired degree of doneness of a portion or portions of the object, a desired target humidity of a portion or portions of the object, a desired density of a portion or portions of the object, a desired pH value of a portion or portions of the object, and/or desired cooking units to apply to a portion or portions of the object.”; therefore, Libman discloses the doneness level of the cooking item in the first zone is the same as the doneness level of the cooking item in the second zone); and combining the first zone (first portion, Libman Pars.0142-0143) into the second zone (second portion, Libman Pars.0142-0143) to create a third zone (Libman Par.0023 discloses: “The image may comprise a combination of at least two of a graphical image, generated based on one or more values indicative of EM energy absorption in the object; a temperature profile, associating differing portions of the object in the energy application zone with different temperatures, or an optical image, generated based on visual light received from the energy application zone.”; thus, Libman Par.0023 discloses the image may comprise a combination of at least two of a temperature profile, associating differing portions of the object in the energy application zone, and Libman Pars.0142-0143 discloses there are two portions including “first portion” and “second portion”; therefore, since two portions/zones are combined to obtain a combined image, the combined image is interpreted to be a third portion/zone, which is the combination of the two portions/zones).

Claims 2 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Libman et al. (U.S. Pub. No. 2016/0309548 A1, previously cited) in view of Du et al. (U.S. Pub. No. 2022/0273134 A1, newly cited), and further in view of Bhogal (U.S. Pub. No. 2018/0292092 A1, previously cited).

Regarding claim 2, Libman in view of Du teaches the method as set forth in claim 1, Libman also discloses the processing on the image may include: zooming, filtering, e.g., digital filtering, image recognition processing or any other processing on images known in the art, as disclosed in Par.0180. However, Libman does not explicitly disclose: wherein the machine learning image recognition model comprises at least one of a convolution neural network ("CNN"), a region-based convolution neural network ("R-CNN"), a deep belief network ("DBN"), or a deep neural network ("DNN") image recognition process.
Bhogal teaches a method of operating an oven appliance (oven 100, Bhogal Fig.4), the oven appliance (oven 100, Bhogal Fig.4) comprising a cooking chamber (cooking cavity 200, Bhogal Fig.6) and a camera (camera 710, Bhogal Fig.9), the method comprising: wherein the machine learning image recognition model comprises at least one of a convolution neural network ("CNN") (Bhogal Par.0113 teaches: “foodstuff features can be recognized from the image using computer-implemented techniques, using convolutional neural network methods”; therefore, Bhogal teaches the machine learning image recognition model comprising a convolution neural network), a region-based convolution neural network ("R-CNN"), a deep belief network ("DBN"), or a deep neural network ("DNN") image recognition process [It is noted that the limitation “at least one of a convolution neural network ("CNN"), a region-based convolution neural network ("R-CNN"), a deep belief network ("DBN"), or a deep neural network ("DNN") image recognition process” is in alternative form; therefore, only one of these features was given patentable weight during examination]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Du, by adding the teaching of the machine learning image recognition model comprising a convolution neural network, as taught by Bhogal, because the convolution neural network is highly effective at identifying objects and is often used for image recognition and object detection tasks because it can capture the variability and diversity of images; additionally, the convolution neural network uses layers like convolution and pooling to reduce computational costs by decreasing the number of parameters and data size; thereby saving cost and energy.

Regarding claim 6, Libman in view of Du teaches the method as set forth in claim 3, but does not teach: wherein alerting the user comprises transmitting a prompt comprising the recommended course of action to a mobile device, the mobile device being in remote communication with the oven appliance. Bhogal teaches a method of operating an oven appliance (oven 100, Bhogal Fig.4), the oven appliance (oven 100, Bhogal Fig.4) comprising a cooking chamber (cooking cavity 200, Bhogal Fig.6) and a camera (camera 710, Bhogal Fig.9), the method comprising: wherein alerting the user comprises transmitting a prompt comprising the recommended course of action to a mobile device (mobile device can be a laptop, tablet, phone, smartwatch, etc., as taught by Bhogal Par.0051) (Bhogal Par.0038 teaches “a user interface 30 (e.g., at a remote user device 40”, Bhogal Par.0051 teaches “The user device 40 can be a mobile device (e.g., a laptop, tablet, phone, smartwatch, etc.)”, and Bhogal Par.0054 teaches “the user, via the user interface 30, can receive and respond to notifications from the oven 100 and/or the computing system 20 (e.g., notification of time remaining in a cooking session, an error or safety alert, etc.). The notification can be a cooking session completion notification (e.g., “ready to eat!”), an error notification, an instruction notification, or be any other suitable notification. In a variation, the cooking session completion notification is generated in response to a target cooking parameter value being met. In one example, the notification is generated when a target cooking time is met.
In a second example, the notification is generated when a target food parameter value, such as a target internal temperature, surface browning, or internal water content, is met. However, any other suitable notification can be generated in response to the occurrence of any other suitable event.”; therefore, Bhogal teaches alerting the user comprises transmitting a prompt comprising the recommended course of action to a mobile device), the mobile device (mobile device 40, Bhogal Fig.4) being in remote communication with the oven appliance (oven 100, Bhogal Fig.4) (Bhogal Fig.4 shows the remote communication between the mobile device 40 and the oven 100; additionally, Bhogal Par.0038 teaches “a user interface 30 (e.g., at a remote user device 40” and Bhogal Par.0054 teaches “the user, via the user interface 30, can receive and respond to notifications from the oven 100”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Du, by adding the teachings of alerting the user comprises transmitting a prompt comprising the recommended course of action to a mobile device, the mobile device being in remote communication with the oven appliance, as taught by Bhogal, in order to monitor and control the progress of a cooking session in a location away from the oven, as recognized by Bhogal [Bhogal, Par.0037]; thus, providing convenience for the user because the user can control the oven from anywhere; additionally, enhancing safety by receiving/responding to remote notifications, and also saving time for operating the oven.

Claims 5, 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Libman et al. (U.S. Pub. No. 2016/0309548 A1, previously cited) in view of Du et al. (U.S. Pub. No. 2022/0273134 A1, newly cited), and further in view of Liu et al. (U.S. Pub. No. 2021/0228022 A1, previously cited).

Regarding claim 5, Libman in view of Du teaches the method as set forth in claim 3, but does not teach: wherein the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber. Liu teaches a method of operating an oven appliance (cooking appliance 200, Liu Fig.2A), the oven appliance (cooking appliance 200, Liu Fig.2A) comprising a cooking chamber (cooking cavity of the cooking appliance 200, Liu Fig.2A) and a camera (second sensors 142, e.g., shown as second sensors 214-1 and 214-2, Liu Fig.2A) (Liu Par.0076 teaches “the one or more second sensors 142 are part of an in situ imaging system (e.g., imaging system) that includes one or more still image cameras or video cameras (e.g., second sensors 214-1 and 214-2)”), the method comprising: wherein the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber (Liu Par.0159 teaches: “when a food item is determined to reach a desired doneness level before other food items inside cooking appliance, the first cooking appliance includes mechanical mechanisms to transport the food item to a cool “park” zone so that the cooking of the food item is stopped or slowed.”, and Liu Par.0160 teaches: “in accordance with a determination that the current cooking progress level of the first food item inside the first cooking appliance corresponds to a preset cooking progress level, the computing system generates a user alert.
For example, the alert includes an image of the current state of the first food item on the first home appliance or on a mobile device of the user. The alert includes images of two subsequent cooking progress levels, for the user to choose a desired cooking progress level as the final desired cooking progress level for the first food item. For example, when the cookie has reached 80% doneness, the user is given opportunity to choose between a soft cookie (95% done) or a hard cookie (100% done) as the final desired state for the cookie. Once the desired state is reached, the cooking appliance automatically stops the cooking process, e.g., by transporting the cookie to a cool part of the oven, or stops the heating power of the oven.”, as cited and incorporated in the rejection of claim 9 above; therefore, Liu teaches the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Du, by adding the teaching of the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber, as taught by Liu, in order to prevent overheating the food, and correctly place the food in the determined area, therefore, preventing cooking failure and securing the safety of the cooking apparatus.

Regarding claim 8, Libman in view of Du teaches the method as set forth in claim 1, but does not teach: capturing, via the camera, a second image of the cooking chamber, the second image being captured a predetermined amount of time after capturing the first image; analyzing, via the one or more computing devices using the machine learning image recognition model, the second image to evaluate the doneness level of the cooking item provided within the cooking chamber; and determining a rate of change of the doneness level within each zone between the first image and the second image. Liu teaches a method of operating an oven appliance (cooking appliance 200, Liu Fig.2A), the oven appliance (cooking appliance 200, Liu Fig.2A) comprising a cooking chamber (cooking cavity of the cooking appliance 200, Liu Fig.2A) and a camera (second sensors 142, e.g., shown as second sensors 214-1 and 214-2, Liu Fig.2A) (Liu Par.0076 teaches “the one or more second sensors 142 are part of an in situ imaging system (e.g., imaging system) that includes one or more still image cameras or video cameras (e.g., second sensors 214-1 and 214-2)”), the method comprising: capturing, via the camera (second sensors 142, e.g., shown as second sensors 214-1 and 214-2, Liu Fig.2A) (Liu Par.0076 teaches “the one or more second sensors 142 are part of an in situ imaging system (e.g., imaging system) that includes one or more still image cameras or video cameras (e.g., second sensors 214-1 and 214-2)”), a second image (“first test image”, Liu Par.0150 & Fig.9) of the cooking chamber (cooking cavity of the cooking appliance 200, Liu Fig.2A), the second image (“first test image”, Liu Par.0150 & Fig.9) being captured a predetermined amount of time after capturing the first image (“first baseline image”, Liu Par.0150 & Fig.9) (Liu Par.0150 teaches: “a first baseline image corresponding to an initial cooking progress level of a first food item inside the first cooking appliance” and “a first test image corresponding to a current cooking progress level of the first food item inside the first cooking appliance.
For example, the smart oven is configured to capture an image (e.g., in conjunction with other sensors data, such as temperature, weight map, thermal map, etc.) every 10 seconds after the first cooking process is started, and the first test image is the most recently captured image among a series of images captured periodically by the first cooking appliance”; therefore, the first baseline image is the first image, and the first test image is the second image; and the second image being captured a predetermined amount of time after capturing the first image); analyzing, via the one or more computing devices using the machine learning image recognition model (Liu Par.0010 teaches “A combination of image processing and user input is utilized in order to create models with food recognition”, and Liu Par.0149 teaches: “Method 900 is performed by a computing system (e.g., computing system 130, 160, 130′)”), the second image (“first test image”, Liu Par.0150 & Fig.9) to evaluate the doneness level of the cooking item provided within the cooking chamber (cooking cavity of the cooking appliance 200, Liu Fig.2A) (Liu Par.0150 teaches “The computing system generates (906) a first test feature tensor corresponding to the first test image.” and “The computing system determines (908) the current cooking progress level of the first food item inside the first cooking appliance using the first test feature tensor as input for a cooking progress determination model (e.g., cooking progress level determination model 126)”; it is noted that the model 126 is a doneness model because Liu Par.0045 teaches “Doneness models 126 are related to determining the cooking progress level or the “done-ness” of food items present in a cooking appliance.”; therefore, Liu teaches analyzing, via the one or more computing devices using the machine learning image recognition model, the second image to evaluate the doneness level of the cooking item provided within the cooking chamber); and determining a rate of change of the doneness level within each zone between the first image (“first baseline image”, Liu Par.0150 & Fig.9) and the second image (“first test image”, Liu Par.0150 & Fig.9) (Liu Par.0150 teaches “calculating a difference feature tensor based on a difference between the respective feature tensor corresponding to the first test image and the first baseline feature tensor corresponding to the first baseline image. The difference feature tensor is used as the first test feature tensor corresponding to the first test image. The computing system determines (908) the current cooking progress level of the first food item inside the first cooking appliance using the first test feature tensor as input for a cooking progress determination model (e.g., cooking progress level determination model 126) that has been trained on difference feature tensors corresponding to training images of instances of the first food item at various cooking progress levels.”; it is noted that the model 126 is a doneness model because Liu Par.0045 teaches “Doneness models 126 are related to determining the cooking progress level or the “done-ness” of food items present in a cooking appliance.”; therefore, Liu teaches determining a rate of change of the doneness level within each zone between the first image and the second image).
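For illustration only, the claim-8 mechanism mapped above — a second image captured a predetermined time after the first, per-zone doneness estimated in each, and a per-zone rate of change derived from the pair — can be sketched as follows. The mean-intensity doneness proxy and all names are hypothetical stand-ins for the machine-learning model discussed in the record; the 10-second interval echoes the example quoted from Liu Par.0150.

```python
# Hypothetical sketch of the claim-8 steps: a second image captured a fixed
# interval after the first, per-zone doneness estimated in both, and a
# per-zone rate of change computed from the pair.
import numpy as np


def zone_doneness(image: np.ndarray, zones: dict[str, slice]) -> dict[str, float]:
    """Stand-in estimator: mean pixel value per zone as a doneness proxy
    (a real system would use the trained image-recognition model)."""
    return {name: float(image[:, cols].mean()) for name, cols in zones.items()}


def doneness_rate(first: np.ndarray, second: np.ndarray,
                  zones: dict[str, slice], dt_seconds: float) -> dict[str, float]:
    """Per-zone (change in doneness) / (elapsed time) between the two images."""
    d1, d2 = zone_doneness(first, zones), zone_doneness(second, zones)
    return {name: (d2[name] - d1[name]) / dt_seconds for name in zones}


if __name__ == "__main__":
    zones = {"first": slice(0, 4), "second": slice(4, 8)}
    img1 = np.zeros((4, 8))                                         # first image
    img2 = np.hstack([np.full((4, 4), 0.3), np.full((4, 4), 0.1)])  # 10 s later
    print(doneness_rate(img1, img2, zones, dt_seconds=10.0))
    # -> roughly {'first': 0.03, 'second': 0.01}: the first zone cooks faster
```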
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Du, by adding the teachings of capturing, via the camera, a second image of the cooking chamber, the second image being captured a predetermined amount of time after capturing the first image; analyzing, via the one or more computing devices using the machine learning image recognition model, the second image to evaluate the doneness level of the cooking item provided within the cooking chamber; and determining a rate of change of the doneness level within each zone between the first image and the second image, as taught by Liu, in order to determine cooking progress levels for images received from cooking appliances in real time, and optionally provide control instructions in accordance with the obtained results of the image processing, as recognized by Liu [Liu, Par.0092]; thus, providing better prediction accuracy to improve the cooking process.

Regarding claim 9, Libman in view of Du and Liu teaches the method as set forth in claim 8, Libman also discloses: determining that the rate of change of the doneness level in a first zone (first portion, Libman Pars.0142-0143) is greater than the rate of change of the doneness level in a second zone (second portion, Libman Pars.0142-0143), the second zone (second portion, Libman Pars.0142-0143) being different from the first zone (first portion, Libman Pars.0142-0143) (Libman Par.0135 discloses “image 1620 may concentrate on one or more segment of plate 1619 (e.g., segment 1625, 1626, 1627) and the objects those segments contain. Apparatus 1600 may also be configured such that image 1620 concentrates on one or more individual objects for processing.”; and Libman Par.0201 discloses “five items to be cooked may be placed in a cooking oven. The processor may identify the items as five sirloin beef steaks. Following the identification, the processor may further prompt the user via the user interface and/or the audio device to select the degree of doneness for each sirloin beef steak. The user may point to each steak on the user interface screen and select, for example, from among several degrees of doneness. For example, the user may specify that two steaks are to be cooked until well done, two steaks should be medium, and one steak should be medium-rare. The processor may further control the energy application to the cooking oven based on the provided instructions and data stored in a lookup table. For example, the lookup table may indicate how much energy is to be absorbed in a 500 gm piece of sirloin steak in order to bring it to a given degree of doneness, and the processor may control energy application such that each steak absorbs the amount of energy indicated in or suggested by the lookup table.
In some embodiments, RF energy application may be controlled such that all the steaks are done at about the same time.”; therefore, each steak is placed in different zones/portions, and since the two steaks are to be cooked until well done, and one steak should be medium-rare, and RF energy application is controlled such that all the steaks are done at about the same time; therefore, the rate of change of the doneness level in a first zone where the two well done steaks are located is greater than the rate of change of the doneness level in a second zone where the medium-rare steak is located, the second zone being different from the first zone); and alerting a user as to the determination, wherein alerting the user comprises providing a recommended course of action (it is noted that the limitation “doneness” is interpreted as “a cooked or heated level of a food item”, according to the Instant Application Par.0035: “the term "doneness" and the like are generally intended to refer to a cooked or heated level of a food item 152.”; in this case, the prior art Libman Par.0188 discloses “As the energy application progresses, the spatial temperature profile of the object may change. The changes may be continuously presented (e.g., displayed) to the user on the user interface. Additionally or alternatively the user may be alerted, for instance, audibly, if the temperature of one or more of the items or portions heat to a temperature outside an allowed range. In some embodiments, the user may select a display of the temperature profile among some given display options. The selection may be changed, for example, during energy application, according to the user's desire. For example, prior to the energy application, the user may select to display only the acquired image (e.g., optical image). However as energy application proceeds, the user may select a different display option, e.g. a spatial temperature profile image or the combined image, or the user may choose that the interface displays only temperatures at selected areas (when, for example, some areas are more heat sensitive than others)”; therefore, Libman discloses alerting the user as to the determination, wherein alerting the user comprises changing the selection for portions of the food being cooked).

Regarding claim 10, Libman in view of Du and Liu teaches the method as set forth in claim 9, but does not teach: wherein the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber. Liu teaches: wherein the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber (Liu Par.0159 teaches: “when a food item is determined to reach a desired doneness level before other food items inside cooking appliance, the first cooking appliance includes mechanical mechanisms to transport the food item to a cool “park” zone so that the cooking of the food item is stopped or slowed.”, and Liu Par.0160 teaches: “in accordance with a determination that the current cooking progress level of the first food item inside the first cooking appliance corresponds to a preset cooking progress level, the computing system generates a user alert. For example, the alert includes an image of the current state of the first food item on the first home appliance or on a mobile device of the user.
The alert includes images of two subsequent cooking progress levels, for the user to choose a desired cooking progress level as the final desired cooking progress level for the first food item. For example, when the cookie has reached 80% doneness, the user is given opportunity to choose between a soft cookie (95% done) or a hard cookie (100% done) as the final desired state for the cookie. Once the desired state is reached, the cooking appliance automatically stops the cooking process, e.g., by transporting the cookie to a cool part of the oven, or stops the heating power of the oven.”, as cited and incorporated in the rejection of claim 9 above; therefore, Liu teaches the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Du and Liu, by further adding the teaching of the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber, as taught by Liu, in order to prevent overheating the food, and correctly place the food in the determined area, therefore, preventing cooking failure and securing the safety of the cooking apparatus.

Claims 11, 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Libman et al. (U.S. Pub. No. 2016/0309548 A1, previously cited) in view of Bate (U.S. Pub. No. 2020/0060470 A1, previously cited).

Regarding claim 11, Libman discloses an oven appliance (“oven”, Libman Par.0058), comprising: a cabinet defining a cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130) (Libman Par.0058 discloses: “energy application zone 9 may include locations where energy is applied in an oven (e.g., a cooking oven)” and Libman Par.0194 discloses “energy application zone 9 (e.g., cavity)”); a user interface provided on the cabinet (Libman Par.0004 discloses: “Microwave oven user interfaces may include a keypad having several keys that indicate several options from which the user can select desired processing instructions, e.g., cooking time and cooking power level.”); a camera (camera 1610, Libman Fig.6) provided within the cooking chamber (“energy application zone”, Libman Par.0130) (Libman Par.0130 discloses “apparatus 1600 may include an image acquiring device, such as camera 1610, which, in operation, captures an image of the object in the energy application zone”, Libman Par.0191 discloses “The image acquiring device may be positioned in a known location and orientation in the energy application zone and configured with a predetermined field of view.”, and Libman Par.0058 discloses: “energy application zone 9 may include locations where energy is applied in an oven (e.g., a cooking oven)”, and Libman Par.0029 discloses “The method may further include causing application of a plurality of electromagnetic field patterns to the object in the energy application zone”; therefore, Libman discloses the energy application zone is the oven cavity where the energy is applied to the object placed in oven; thus, Libman discloses the camera provided within the cooking chamber) and configured to capture one (image 1620, Libman Fig.6) or more images [it is noted that the limitation “one or more images” is in alternative form; therefore, only one of these features was given patentable weight during examination] of the cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130) (Libman Fig.14 Step 2140
discloses “Acquiring an image of the object from a camera”, Libman Par.0202 discloses “An image of the object, optionally an optical image or IR image may be acquired in step 2140, for example from camera 1610”, Libman Par.0130 discloses “As illustrated in FIG. 6, apparatus 1600 may include an image acquiring device, such as camera 1610, which, in operation, captures an image of the object in the energy application zone.”, additionally, Libman Par.0134 discloses “image 1620 may constitute an image of the inside of cavity 10 (FIG. 3) or energy application zone 9 where plate 1620 may be placed.”, Libman Par.0058 discloses: “energy application zone 9 may include locations where energy is applied in an oven (e.g., a cooking oven)” and Libman Par.0194 discloses “energy application zone 9 (e.g., cavity)”; therefore, Libman discloses the camera captures image of the cooking chamber); and a controller (processor 1680, Libman Fig.6) being operably coupled to the camera (camera 1610, Libman Fig.6) and the user interface (user interface 1640, Libman Par.0132-0133) (Libman Par.0133 discloses “the user interface may further include a processor for processing input received via the input unit into processing information”), wherein the controller (processor 1680, Libman Fig.6) is configured to perform a series of operations, the series of operations comprising: capturing, via the camera (camera 1610, Libman Fig.6), a first image (image 1620, Libman Fig.6) of the cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130) (Libman Fig.14 Step 2140 discloses “Acquiring an image of the object from a camera”, Libman Par.0202 discloses “An image of the object, optionally an optical image or IR image may be acquired in step 2140, for example from camera 1610”, Libman Par.0130 discloses “As illustrated in FIG. 6, apparatus 1600 may include an image acquiring device, such as camera 1610, which, in operation, captures an image of the object in the energy application zone.”, additionally, Libman Par.0134 discloses “image 1620 may constitute an image of the inside of cavity 10 (FIG. 3) or energy application zone 9 where plate 1620 may be placed.”, Libman Par.0058 discloses: “energy application zone 9 may include locations where energy is applied in an oven (e.g., a cooking oven)” and Libman Par.0194 discloses “energy application zone 9 (e.g., cavity)”; therefore, Libman discloses capturing, via the camera, a first image of the cooking chamber); defining, on the first image (image 1620, Libman Fig.6), a plurality of zones within the cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130), each of the plurality of zones comprising a first predetermined area of the cooking chamber (“energy application zone”, Libman Pars.0058 & Par.0130) (Libman Par.0135 discloses “image 1620 may also focus on one or more portions of energy application zone 9 or any objects placed in energy application zone 9. 
The term portion(s) is used herein interchangeably with any of the terms, area(s), segment(s), region(s), sub-volume(s)”; therefore, Libman discloses defining, on the first image, a plurality of portions within the cooking chamber; since Libman discloses portions can be area(s), segment(s), region(s), sub-volume(s), thus, each of the plurality of portions comprising a first predetermined area of the cooking chamber); analyzing, by one or more computing devices using a machine learning image recognition model (Libman Par.0180 discloses: “the image may be processed to obtain a processed image, and optionally scaled, by a processor (e.g., processor 1680, 2030 and/or 101)” and “the processing on the image may include: zooming, filtering, e.g., digital filtering, image recognition processing”), the first image (image 1620, Libman Fig.6) to evaluate at least one characteristic of a cooking item provided within the cooking chamber, the cooking item being divided into the plurality of zones (one characteristic of a cooking item can be doneness level, Libman Par.0141 discloses: “Based on the information received from the user via interface 1640, processing information can be determined for selected portion(s) of the object. The selected portions may be selected by the user based on the image of the object.”, Libman Par.0143 discloses “the processing information may include…a desired degree of doneness of a portion or portions of the object”, and Libman Par.0135 discloses “image 1620 may also focus on one or more portions of energy application zone 9 or any objects placed in energy application zone 9. The term portion(s) is used herein interchangeably with any of the terms, area(s), segment(s), region(s), sub-volume(s)”; therefore, Libman discloses analyzing, by one or more computing devices using a machine learning image recognition model, the first image to evaluate a doneness level of a cooking item provided within the cooking chamber, the cooking item being divided into the plurality of portions); and comparing the at least one characteristic of the cooking item in a first zone of the plurality of zones with the at least one characteristic of the cooking item in a second zone of the plurality of zones different from the first zone (Libman Pars.0142-0143 discloses the processing information includes the degree of doneness of a portion or portions of the object, Libman also discloses results obtained from the processing information of the first portion and the second portion can be the same or different; therefore, Libman discloses comparing the doneness level of the cooking item in a first portion of the plurality of portions with the doneness level of the cooking item in a second portion of the plurality of portions; specifically, Libman Par.0142 discloses “The first processing result for a first portion of the object may be the same as or similar to the processing result for a second portion of the object. 
Alternatively, the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.”, and Libman Par.0143 discloses “the processing information may include a desired amount of energy to be dissipated or absorbed in a portion or portions of the object, a desired target temperature profile of a portion or portions of the object, a desired degree of doneness of a portion or portions of the object, a desired target humidity of a portion or portions of the object, a desired density of a portion or portions of the object, a desired pH value of a portion or portions of the object, and/or desired cooking units to apply to a portion or portions of the object.”; therefore, Libman discloses comparing the doneness level of the cooking item in a first portion of the plurality of portions with the doneness level of the cooking item in a second portion of the plurality of portions; furthermore, Libman Par.0022 discloses “the information relating to processing instructions indicates that a first portion of the object shown in the image is to be processed differently from a second portion of the object”; therefore, Libman discloses the second portion being different from the first portion), determining that the at least one characteristic of the cooking item in the first zone is the same as the at least one characteristic of the cooking item in the second zone (Libman Pars.0142-0143 discloses the processing information includes the degree of doneness of portions of the object, Libman also discloses results obtained from the processing information of the first portion and the second portion can be the same or similar; specifically, Libman Par.0142 discloses “The first processing result for a first portion of the object may be the same as or similar to the processing result for a second portion of the object. Alternatively, the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.”, and Libman Par.0143 discloses “the processing information may include a desired amount of energy to be dissipated or absorbed in a portion or portions of the object, a desired target temperature profile of a portion or portions of the object, a desired degree of doneness of a portion or portions of the object, a desired target humidity of a portion or portions of the object, a desired density of a portion or portions of the object, a desired pH value of a portion or portions of the object, and/or desired cooking units to apply to a portion or portions of the object.”; therefore, Libman discloses the doneness level of the cooking item in the first zone is the same as the doneness level of the cooking item in the second zone); and combining the first zone into the second zone to create a third zone (Libman Par.0023 discloses: “The image may comprise a combination of at least two of a graphical image, generated based on one or more values indicative of EM energy absorption in the object; a temperature profile, associating differing portions of the object in the energy application zone with different temperatures, or an optical image, generated based on visual light received from the energy application zone.”; thus, since two portions are combined to obtain the combined image, the combined image includes a third portion, which is the combination of the two portions; therefore, Libman discloses combining the first portion into the second portion to create a third portion). 
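[Examiner's illustrative note: the zone-comparison and zone-combining operations mapped above can be sketched in a few lines of code. The sketch is for illustration only and is not part of the record; the zone names, the numeric doneness scale, and the merge routine are assumptions, not drawn from Libman or from the claims.]

# Minimal sketch: merge any two zones whose evaluated characteristic
# (doneness level) is the same into a single combined "third" zone.
def combine_zones(zones):
    names = list(zones)
    for i, first in enumerate(names):
        for second in names[i + 1:]:
            if zones[first] == zones[second]:  # same characteristic in both zones
                merged = {k: v for k, v in zones.items() if k not in (first, second)}
                merged[first + "+" + second] = zones[first]  # the third zone
                return combine_zones(merged)  # repeat until no equal pair remains
    return zones

# Example: zone1 and zone2 share a doneness level, so they collapse into one zone.
print(combine_zones({"zone1": 0.8, "zone2": 0.8, "zone3": 0.5}))
# -> {'zone3': 0.5, 'zone1+zone2': 0.8}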
Libman discloses the apparatus as set forth above, but does not explicitly disclose: the controller provided within the cabinet; Bate teaches an oven appliance (oven appliance 20, Bate Fig.2) comprising: the controller provided within the cabinet (cabinet 1, Bate Fig.3) (Bate Par.0027 teaches “The micro controller can be located anywhere within the cabinet 01 of the oven appliance 20”; therefore, Bate teaches the controller provided within the cabinet); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman, by locating the controller within the cabinet, as taught by Bate, in order to have a compact automatic control oven system, thereby saving space and cost because the cost of installation and maintenance can be reduced, in addition to improving safety and aesthetics and providing easy transport and installation. Regarding claim 13, Libman in view of Bate teaches the apparatus as set forth in claim 11; Libman also discloses wherein the series of operations further comprises: determining that the at least one characteristic of the cooking item in the first zone (first portion, Libman Pars.0142-0143) is different from the at least one characteristic of the cooking item in the second zone (second portion, Libman Pars.0142-0143) (Libman Pars.0142-0143 discloses the processing information includes the degree of doneness of portions of the object; Libman also discloses results obtained from the processing information of the first portion and the second portion can be different; specifically, Libman Par.0142 discloses “The first processing result for a first portion of the object may be the same as or similar to the processing result for a second portion of the object. Alternatively, the first processing result for the first portion of the object may be different from the processing result for the second portion of the object.”, and Libman Par.0143 discloses “the processing information may include a desired amount of energy to be dissipated or absorbed in a portion or portions of the object, a desired target temperature profile of a portion or portions of the object, a desired degree of doneness of a portion or portions of the object, a desired target humidity of a portion or portions of the object, a desired density of a portion or portions of the object, a desired pH value of a portion or portions of the object, and/or desired cooking units to apply to a portion or portions of the object.”; therefore, Libman discloses the doneness level of the cooking item in the first portion is different from the doneness level of the cooking item in the second portion); and alerting a user as to the determination, wherein alerting the user comprises providing a recommended course of action (it is noted that the limitation “doneness” is interpreted as “a cooked or heated level of a food item”, according to the Instant Application Par.0035: “the term "doneness" and the like are generally intended to refer to a cooked or heated level of a food item 152.”; in this case, the prior art Libman Par.0188 discloses “As the energy application progresses, the spatial temperature profile of the object may change. The changes may be continuously presented (e.g., displayed) to the user on the user interface. Additionally or alternatively the user may be alerted, for instance, audibly, if the temperature of one or more of the items or portions heat to a temperature outside an allowed range.
In some embodiments, the user may select a display of the temperature profile among some given display options. The selection may be changed, for example, during energy application, according to the user's desire. For example, prior to the energy application, the user may select to display only the acquired image (e.g., optical image). However as energy application proceeds, the user may select a different display option, e.g. a spatial temperature profile image or the combined image, or the user may choose that the interface displays only temperatures at selected areas (when, for example, some areas are more heat sensitive than others)”; therefore, Libman discloses alerting the user as to the determination, wherein alerting the user comprises changing the selection for portions of the food being cooked). Regarding claim 14, Libman in view of Bate teaches the apparatus as set forth in claim 13; Libman also discloses: wherein the at least one characteristic is a doneness level of the cooking item (Libman Par.0143 discloses “the processing information may include…a desired degree of doneness of a portion or portions of the object”; thus, the at least one characteristic is a doneness level of the cooking item, as cited and explained in the rejection of independent claim 11 above). Claims 12 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Libman et al. (U.S. Pub. No. 2016/0309548 A1, previously cited) in view of Bate (U.S. Pub. No. 2020/0060470 A1, previously cited), and further in view of Bhogal (U.S. Pub. No. 2018/0292092 A1, previously cited). Regarding claim 12, Libman in view of Bate teaches the apparatus as set forth in claim 11; Libman also discloses the processing on the image may include: zooming, filtering, e.g., digital filtering, image recognition processing, or any other processing on images known in the art, as disclosed in Par.0180. However, Libman does not explicitly disclose: wherein the machine learning image recognition model comprises at least one of a convolution neural network ("CNN"), a region-based convolution neural network ("R-CNN"), a deep belief network ("DBN"), or a deep neural network ("DNN") image recognition process. Bhogal teaches an oven appliance (oven 100, Bhogal Fig.4) comprising: wherein the machine learning image recognition model comprises at least one of a convolution neural network ("CNN") (Bhogal Par.0113 teaches: “foodstuff features can be recognized from the image using computer-implemented techniques, using convolutional neural network methods”; therefore, Bhogal teaches the machine learning image recognition model comprising a convolution neural network), a region-based convolution neural network ("R-CNN"), a deep belief network ("DBN"), or a deep neural network ("DNN") image recognition process [It is noted that the limitation “at least one of a convolution neural network ("CNN"), a region-based convolution neural network ("R-CNN"), a deep belief network ("DBN"), or a deep neural network ("DNN") image recognition process” is in alternative form; therefore, only one of these features was given patentable weight during examination].
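[Examiner's illustrative note: for context on the convolution neural network limitation mapped above, the following is a minimal sketch of a CNN image-recognition model of the kind recited in claim 12. It is illustrative only; the framework (PyTorch), layer sizes, 64x64 input resolution, and five-level doneness scale are assumptions and are not taken from Bhogal.]

import torch
import torch.nn as nn

class DonenessCNN(nn.Module):
    """Minimal CNN mapping a zone image to logits over doneness levels."""
    def __init__(self, num_levels=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling halves H and W
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_levels)

    def forward(self, x):
        x = self.features(x)              # 3x64x64 input -> 32x16x16 features
        return self.classifier(x.flatten(1))

# A 64x64 RGB crop of one zone yields logits over five doneness levels.
logits = DonenessCNN()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 5])

The convolution and pooling layers in the sketch illustrate the parameter- and data-size reduction relied on in the motivation statement that follows.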
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Bate, by adding the teaching of the machine learning image recognition model comprising a convolution neural network, as taught by Bhogal, because the convolution neural network is highly effective at identifying objects and is often used for image recognition and object detection tasks, as it can capture the variability and diversity of images; additionally, the convolution neural network uses layers like convolution and pooling to reduce computational costs by decreasing the number of parameters and data size, thereby saving cost and energy. Regarding claim 16, Libman in view of Bate teaches the apparatus as set forth in claim 13, but does not teach wherein alerting the user comprises transmitting a prompt comprising the recommended course of action to a mobile device, the mobile device being in remote communication with the oven appliance. Bhogal teaches an oven appliance (oven 100, Bhogal Fig.4) comprising: wherein alerting the user comprises transmitting a prompt comprising the recommended course of action to a mobile device (the mobile device can be a laptop, tablet, phone, smartwatch, etc., as taught by Bhogal Par.0051) (Bhogal Par.0038 teaches “a user interface 30 (e.g., at a remote user device 40”, Bhogal Par.0051 teaches “The user device 40 can be a mobile device (e.g., a laptop, tablet, phone, smartwatch, etc.)”, and Bhogal Par.0054 teaches “the user, via the user interface 30, can receive and respond to notifications from the oven 100 and/or the computing system 20 (e.g., notification of time remaining in a cooking session, an error or safety alert, etc.). The notification can be a cooking session completion notification (e.g., “ready to eat!”), an error notification, an instruction notification, or be any other suitable notification. In a variation, the cooking session completion notification is generated in response to a target cooking parameter value being met. In one example, the notification is generated when a target cooking time is met. In a second example, the notification is generated when a target food parameter value, such as a target internal temperature, surface browning, or internal water content, is met. However, any other suitable notification can be generated in response to the occurrence of any other suitable event.”; therefore, Bhogal teaches alerting the user comprises transmitting a prompt comprising the recommended course of action to a mobile device), the mobile device (mobile device 40, Bhogal Fig.4) being in remote communication with the oven appliance (oven 100, Bhogal Fig.4) (Bhogal Fig.4 shows the remote communication between the mobile device 40 and the oven 100; additionally, Bhogal Par.0038 teaches “a user interface 30 (e.g., at a remote user device 40” and Bhogal Par.0054 teaches “the user, via the user interface 30, can receive and respond to notifications from the oven 100”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Bate, by adding the teachings of alerting the user comprises transmitting a prompt comprising the recommended course of action to a mobile device, the mobile device being in remote communication with the oven appliance, as taught by Bhogal, in order to monitor and control the progress of a cooking session in a location away from the oven, as recognized by Bhogal [Bhogal, Par.0037]; thus, providing convenience for the user, who can control the oven from anywhere; additionally, enhancing safety by receiving and responding to remote notifications, and also saving time in operating the oven. Claims 15, 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Libman et al. (U.S. Pub. No. 2016/0309548 A1, previously cited) in view of Bate (U.S. Pub. No. 2020/0060470 A1, previously cited), and further in view of Liu et al. (U.S. Pub. No. 2021/0228022 A1, previously cited). Regarding claim 15, Libman in view of Bate teaches the apparatus as set forth in claim 13, but does not teach: wherein the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber. Liu teaches an oven appliance (cooking appliance 200, Liu Fig.2A) comprising: wherein the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber (Liu Par.0159 teaches: “when a food item is determined to reach a desired doneness level before other food items inside cooking appliance, the first cooking appliance includes mechanical mechanisms to transport the food item to a cool “park” zone so that the cooking of the food item is stopped or slowed.”, and Liu Par.0160 teaches: “in accordance with a determination that the current cooking progress level of the first food item inside the first cooking appliance corresponds to a preset cooking progress level, the computing system generates a user alert. For example, the alert includes an image of the current state of the first food item on the first home appliance or on a mobile device of the user. The alert includes images of two subsequent cooking progress levels, for the user to choose a desired cooking progress level as the final desired cooking progress level for the first food item. For example, when the cookie has reached 80% doneness, the user is given opportunity to choose between a soft cookie (95% done) or a hard cookie (100% done) as the final desired state for the cookie. Once the desired state is reached, the cooking appliance automatically stops the cooking process, e.g., by transporting the cookie to a cool part of the oven, or stops the heating power of the oven.”, as cited and incorporated in the rejection of claim 9 above; therefore, Liu teaches the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Bate, by adding the teaching of the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber, as taught by Liu, in order to prevent overheating the food and to correctly place the food in the determined area, thereby preventing cooking failure and securing the safety of the cooking apparatus.
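[Examiner's illustrative note: the repositioning recommendation attributed to Liu above reduces to a simple control rule, sketched below for illustration only; the 0.95 threshold, the zone name, and the message text are assumptions, not taken from Liu.]

# Minimal sketch: recommend repositioning once an item reaches its desired
# doneness before the rest of the load, mirroring the "park" zone behavior.
def recommend_action(doneness, target=0.95):
    if doneness >= target:
        # Moving the finished item stops or slows its further cooking.
        return "Move the item to the cool 'park' zone of the cooking chamber"
    return "Continue cooking"

print(recommend_action(0.96))  # finished early -> recommend repositioning
print(recommend_action(0.60))  # still cooking -> no repositioning needed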
Regarding claim 18, Libman in view of Bate teaches the apparatus as set forth in claim 11, but does not teach wherein the series of operations further comprises: capturing, via the camera, a second image of the cooking chamber, the second image being captured a predetermined amount of time after capturing the first image; analyzing, via the one or more computing devices using the machine learning image recognition model, the second image to evaluate the at least one characteristic of the cooking item provided within the cooking chamber; and determining a rate of change of the at least one characteristic within each zone between the first image and the second image. Liu teaches an oven appliance (cooking appliance 200, Liu Fig.2A) comprising: capturing, via the camera (second sensors 142, e.g., shown as second sensors 214-1 and 214-2 in Liu Fig.2A) (Liu Par.0076 teaches “the one or more second sensors 142 are part of an in situ imaging system (e.g., imaging system) that includes one or more still image cameras or video cameras (e.g., second sensors 214-1 and 214-2)”), a second image (“first test image”, Liu Par.0150 & Fig.9) of the cooking chamber (cooking cavity of the cooking appliance 200, Liu Fig.2A), the second image (“first test image”, Liu Par.0150 & Fig.9) being captured a predetermined amount of time after capturing the first image (“first baseline image”, Liu Par.0150 & Fig.9) (Liu Par.0150 teaches: “a first baseline image corresponding to an initial cooking progress level of a first food item inside the first cooking appliance” and “a first test image corresponding to a current cooking progress level of the first food item inside the first cooking appliance. For example, the smart oven is configured to capture an image (e.g., in conjunction with other sensors data, such as temperature, weight map, thermal map, etc.)
every 10 seconds after the first cooking process is started, and the first test image is the most recently captured image among a series of images captured periodically by the first cooking appliance”; therefore, the first baseline image is the first image, the first test image is the second image, and the second image is captured a predetermined amount of time after the first image); analyzing, via the one or more computing devices using the machine learning image recognition model (Liu Par.0010 teaches “A combination of image processing and user input is utilized in order to create models with food recognition”, and Liu Par.0149 teaches: “Method 900 is performed by a computing system (e.g., computing system 130, 160, 130′)”), the second image (“first test image”, Liu Par.0150 & Fig.9) to evaluate the at least one characteristic of the cooking item provided within the cooking chamber (cooking cavity of the cooking appliance 200, Liu Fig.2A) (Liu Par.0150 teaches “The computing system generates (906) a first test feature tensor corresponding to the first test image.” and “The computing system determines (908) the current cooking progress level of the first food item inside the first cooking appliance using the first test feature tensor as input for a cooking progress determination model (e.g., cooking progress level determination model 126)”; it is noted that the model 126 is a doneness model because Liu Par.0045 teaches “Doneness models 126 are related to determining the cooking progress level or the “done-ness” of food items present in a cooking appliance.”; therefore, Liu teaches analyzing, via the one or more computing devices using the machine learning image recognition model, the second image to evaluate the doneness level of the cooking item provided within the cooking chamber); and determining a rate of change of the at least one characteristic within each zone between the first image (“first baseline image”, Liu Par.0150 & Fig.9) and the second image (“first test image”, Liu Par.0150 & Fig.9) (Liu Par.0150 teaches “calculating a difference feature tensor based on a difference between the respective feature tensor corresponding to the first test image and the first baseline feature tensor corresponding to the first baseline image. The difference feature tensor is used as the first test feature tensor corresponding to the first test image. The computing system determines (908) the current cooking progress level of the first food item inside the first cooking appliance using the first test feature tensor as input for a cooking progress determination model (e.g., cooking progress level determination model 126) that has been trained on difference feature tensors corresponding to training images of instances of the first food item at various cooking progress levels.”; it is noted that the model 126 is a doneness model because Liu Par.0045 teaches “Doneness models 126 are related to determining the cooking progress level or the “done-ness” of food items present in a cooking appliance.”; therefore, Liu teaches determining a rate of change of the doneness level within each zone between the first image and the second image).
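[Examiner's illustrative note: the difference-feature-tensor and rate-of-change mapping above can be sketched as follows, for illustration only. The 2x2 zone grid, the 10-second capture interval, and the stand-in doneness maps are assumptions; in Liu, the feature tensors are produced by a trained model rather than by raw pixel means.]

import numpy as np

INTERVAL_S = 10.0  # test images captured periodically, e.g., every 10 seconds

def zone_features(image, grid=2):
    """Reduce an HxW doneness map to per-zone means on a grid x grid layout."""
    h, w = image.shape
    return np.array([[image[r*h//grid:(r+1)*h//grid,
                            c*w//grid:(c+1)*w//grid].mean()
                      for c in range(grid)] for r in range(grid)])

baseline = np.zeros((64, 64))            # initial cooking progress level
test = np.random.rand(64, 64) * 0.3      # current progress (stand-in data)

difference = zone_features(test) - zone_features(baseline)  # difference "tensor"
rate_of_change = difference / INTERVAL_S                    # per-zone rate
print(rate_of_change)  # 2x2 grid: doneness change per second in each zone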
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Bate, by adding the teachings of capturing, via the camera, a second image of the cooking chamber, the second image being captured a predetermined amount of time after capturing the first image; analyzing, via the one or more computing devices using the machine learning image recognition model, the second image to evaluate the doneness level of the cooking item provided within the cooking chamber; and determining a rate of change of the doneness level within each zone between the first image and the second image, as taught by Liu, in order to determine cooking progress levels for images received from cooking appliances in real time, and optionally provide control instructions in accordance with the obtained results of the image processing, as recognized by Liu [Liu, Par.0092]; thus, providing better prediction accuracy to improve the cooking process. Regarding claim 19, Libman in view of Bate and Liu teaches the apparatus as set forth in claim 18; Libman also discloses wherein the series of operations further comprises: determining that the rate of change of the at least one characteristic in a first zone (first portion, Libman Pars.0142-0143) is greater than the rate of change of the at least one characteristic in a second zone (second portion, Libman Pars.0142-0143), the second zone (second portion, Libman Pars.0142-0143) being different from the first zone (first portion, Libman Pars.0142-0143) (Libman Par.0135 discloses “image 1620 may concentrate on one or more segment of plate 1619 (e.g., segment 1625, 1626, 1627) and the objects those segments contain. Apparatus 1600 may also be configured such that image 1620 concentrates on one or more individual objects for processing.”; and Libman Par.0201 discloses “five items to be cooked may be placed in a cooking oven. The processor may identify the items as five sirloin beef steaks. Following the identification, the processor may further prompt the user via the user interface and/or the audio device to select the degree of doneness for each sirloin beef steak. The user may point to each steak on the user interface screen and select, for example, from among several degrees of doneness. For example, the user may specify that two steaks are to be cooked until well done, two steaks should be medium, and one steak should be medium-rare. The processor may further control the energy application to the cooking oven based on the provided instructions and data stored in a lookup table. For example, the lookup table may indicate how much energy is to be absorbed in a 500 gm piece of sirloin steak in order to bring it to a given degree of doneness, and the processor may control energy application such that each steak absorbs the amount of energy indicated in or suggested by the lookup table.
In some embodiments, RF energy application may be controlled such that all the steaks are done at about the same time.”; therefore, each steak is placed in a different zone/portion; since two steaks are to be cooked until well done, one steak should be medium-rare, and RF energy application is controlled such that all the steaks are done at about the same time, the rate of change of the doneness level in a first zone where the two well-done steaks are located is greater than the rate of change of the doneness level in a second zone where the medium-rare steak is located, the second zone being different from the first zone); and alerting a user as to the determination, wherein alerting the user comprises providing a recommended course of action (it is noted that the limitation “doneness” is interpreted as “a cooked or heated level of a food item”, according to the Instant Application Par.0035: “the term "doneness" and the like are generally intended to refer to a cooked or heated level of a food item 152.”; in this case, the prior art Libman Par.0188 discloses “As the energy application progresses, the spatial temperature profile of the object may change. The changes may be continuously presented (e.g., displayed) to the user on the user interface. Additionally or alternatively the user may be alerted, for instance, audibly, if the temperature of one or more of the items or portions heat to a temperature outside an allowed range. In some embodiments, the user may select a display of the temperature profile among some given display options. The selection may be changed, for example, during energy application, according to the user's desire. For example, prior to the energy application, the user may select to display only the acquired image (e.g., optical image). However as energy application proceeds, the user may select a different display option, e.g. a spatial temperature profile image or the combined image, or the user may choose that the interface displays only temperatures at selected areas (when, for example, some areas are more heat sensitive than others)”; therefore, Libman discloses alerting the user as to the determination, wherein alerting the user comprises changing the selection for portions of the food being cooked). Regarding claim 20, Libman in view of Bate and Liu teaches the apparatus as set forth in claim 19; Libman does not disclose: wherein the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber. Liu teaches: wherein the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber (Liu Par.0159 teaches: “when a food item is determined to reach a desired doneness level before other food items inside cooking appliance, the first cooking appliance includes mechanical mechanisms to transport the food item to a cool “park” zone so that the cooking of the food item is stopped or slowed.”, and Liu Par.0160 teaches: “in accordance with a determination that the current cooking progress level of the first food item inside the first cooking appliance corresponds to a preset cooking progress level, the computing system generates a user alert. For example, the alert includes an image of the current state of the first food item on the first home appliance or on a mobile device of the user.
The alert includes images of two subsequent cooking progress levels, for the user to choose a desired cooking progress level as the final desired cooking progress level for the first food item. For example, when the cookie has reached 80% doneness, the user is given opportunity to choose between a soft cookie (95% done) or a hard cookie (100% done) as the final desired state for the cookie. Once the desired state is reached, the cooking appliance automatically stops the cooking process, e.g., by transporting the cookie to a cool part of the oven, or stops the heating power of the oven.”, as cited and incorporated in the rejection of claim 9 above; therefore, Liu teaches the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Libman in view of Bate and Liu, by further adding the teaching that the recommended course of action comprises adjusting a positioning of the cooking item within the cooking chamber, as taught by Liu, in order to prevent overheating the food and to correctly place the food in the determined area, thereby preventing cooking failure and securing the safety of the cooking apparatus. Conclusion The following prior art(s) made of record and not relied upon is/are considered pertinent to Applicant’s disclosure. Torres et al. (U.S. Patent No. 10,575,372 B2) discloses systems, apparatuses, and methods for cooking a food item using an RF oven. Lim et al. (U.S. Patent No. 9,433,233 B2) discloses a cooking apparatus and control method thereof to achieve a fry-cooking process using functions of the cooking apparatus. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to THAO TRAN-LE whose telephone number is (571) 272-7535. The examiner can normally be reached M-F 9:00 - 5:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HELENA KOSANOVIC, can be reached on (571) 272-9059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /THAO UYEN TRAN-LE/Examiner, Art Unit 3761 03/28/2025 /HELENA KOSANOVIC/Supervisory Patent Examiner, Art Unit 3761