
Patent Application 17956119 - METHODS AND SYSTEMS FOR USE IN PROCESSING IMAGES - Rejection

Title: METHODS AND SYSTEMS FOR USE IN PROCESSING IMAGES RELATED TO CROPS

Application Information

  • Invention Title: METHODS AND SYSTEMS FOR USE IN PROCESSING IMAGES RELATED TO CROPS
  • Application Number: 17956119
  • Submission Date: 2025-04-07
  • Effective Filing Date: 2022-09-29
  • Filing Date: 2022-09-29
  • National Class: 702
  • National Sub-Class: 002000
  • Examiner Employee Number: 90275
  • Art Unit: 2857
  • Tech Center: 2800

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 4

Cited Patents

No patents were cited in this rejection.

Office Action Text



    DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.


Claim 19 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.  Claim 12, from which Claim 19 depends, already includes an identical limitation.  Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.


Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) the abstract idea of a mathematical algorithm for predicting plot yield from image data and environmental data.
This judicial exception is not integrated into a practical application because the algorithm result is not used in any manner to improve the underlying farming activity.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because accessing the data needed for the algorithm and storing the algorithm result amount to insignificant extra-solution activity necessary in implementing the algorithm using a general-purpose computer.  The use of a generative adversarial network to enhance the image data amounts to the use of a well-understood, routine, and conventional artificial intelligence arrangement [See the discussion of Birla below, as well as the references cited in the Conclusion section] and thus not significantly more than the recitation of general-purpose computer components.  The same applies to the recitation of the neural network [See the discussion of Guo, below].

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.


Claim(s) 1, 4-9, 11, 12, 15, 16, and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shendryk et al., Integrating satellite imagery and environmental data to predict field-level cane and sugar yields in Australia using machine learning, Elsevier, 2020 [hereinafter “Shendryk”] (supplied by the Applicant on 3/2/2023); Birla, Single Image Super Resolution Using GANS, Medium.com, 11.8.2018 (supplied by the Applicant on 3/2/2023); and Scharf et al. (US 20120250962 A1) [hereinafter “Scharf”].
Regarding Claims 1, 12, 19, and 20, Shendryk discloses a computer-implemented method/method/instructions for use in processing image data associated with crop-bearing fields [Title – “Integrating satellite imagery and environmental data to predict field-level cane and sugar yields in Australia using machine learning”], the method comprising:
	accessing, by a computing device, a first data set, the first data set including images associated with one or more fields, the images having a spatial resolution of about one meter or more per pixel [Page 3, first column – “The satellite imagery consisted of Sentinel-1 SAR and Sentinel-2 multispectral imagery covering the area of the Wet Tropics of Australia (Fig. 1). Sentinel-1 imagery had a spatial resolution of 10 m and temporal resolution of up to 12 days, while Sentinel-2 imagery had a spatial resolution of 10−20 m and temporal resolution of up to 5 days.”
Page 6, second column – “In this study, to utilize the power of OBIA approaches without relying on segmentation algorithms to delineate individual fields we trained machine learning models using predictor variables calculated within each sugarcane field (> 0.64 ha) (see Table 4) and used them for inference using predictor variables calculated within 8 × 8 pixels windows (0.64 ha) to match the smallest size of fields used in this study. Prediction maps of cane yield, CCS, sugar yield and crop variety were generated for the month of March (i.e. between January – March) for each growing season between 2016 – 2020. Then, cane yield, CCS and sugar yield prediction maps over the whole of Wet Tropics at 10 m resolution were averaged (excluding fallow crop areas identified using a machine learning model) across four mill areas (i.e. Mulgrave, South Johnstone, Tully and Macknade, see Section 2.6) and compared against actual values of cane yield, CCS and sugar yield reported by the mills in 2016, 2017 and 2018 growing seasons (Canegrowers, 2019).”].
	Shendryk fails to disclose generating, by the computing device, based on a generative model, defined resolution images of the one or more fields from the first data set, the defined resolution images each having a spatial resolution of about X centimeters per pixel, where X is less than about 5 centimeters.
	However, Birla discloses a super-resolution GAN [Page 1 – “Image super resolution can be defined as increasing the size of small images while keeping the drop in quality to minimum, or restoring high resolution images from rich details obtained from low resolution images.”] through which a generator uses low resolution images to generate high resolution images [See Page 4].  It would have been obvious to use a super-resolution GAN in order to obtain higher resolution images from the satellites of Shendryk for use in analyzing crop-bearing fields.
	Scharf discloses the use of an aerial camera [Paragraph [0046] – “Aerial photographic images were acquired from five production cornfields in 2001 and two production cornfields in 2004. Aerial photographs in 2001 were taken from small-plane flyovers in a nadir (straight down) orientation at altitudes ranging from 1000-1400 m above ground level using ASA 400 35 mm color positive film. Larger fields were photographed from higher altitudes to fit the whole field onto the camera's field of view. Photos were obtained on 5 Jul. 2001, and the growth stage of the corn at that time was approximately V11 to V13. The film was processed into color slides, and then digitized using a Nikon CoolScan 1.05 film scanner (Nikon, Inc., Melville, N.Y.). The spatial resolution of the digitized images ranged from 0.42 to 1.00 m per pixel (see Table 1), depending on differences in the altitudes at which the aerial photographs were obtained and differences in the focal length used by the camera to obtain the aerial photographs.”] and the production of defined resolution images each having a spatial resolution of about X centimeters per pixel, where X is less than about 5 centimeters [Paragraph [0050] – “Several photographs contained areas of low vegetative cover, which were removed from the analysis to reduce errors in the appearance of the crop. All polygons with a green/red pixel color ratio of less than 1.2 were removed from the analysis, using a decision rule developed using previously acquired photographs of corn fields with high resolution (~4 cm) and low vegetative cover.”].  It would have been obvious to use an aerial camera to produce such high resolution images such that they could be used as the high resolution images needed by Birla in order to use the SRGAN to enhance the lower resolution satellite images.
	Shendryk, as modified, would disclose deriving, by the computing device, index values for the one or more fields [Page 3, second column – “In contrast to previous studies that exclusively relied on NDVI (Begue et al., 2010; Morel et al., 2014) or green NDVI (GNDVI) (Rahman and Robson, 2020) for yield prediction, in this study we derived 45 normalized difference spectral indices (NDSIs) from Sentinel-2 imagery, that previously showed to be useful in differentiating vegetation types (Shendryk et al., 2020b). NDSIs were calculated in succession from Blue to SWIR-2 (Table 3) spectral bands as follows:
NDSI(i,j) = (Ri − Rj)/(Ri + Rj),
where R is the spectral reflectance, and i and j are numbers indicating the wavelengths (nm). Throughout this paper each NDSI is denoted as a combination of three-letter acronyms in Table 3 (e.g. NDSI(RED,NI1) is denoted as REDNI1 while NDSI(GRE,NI1) is denoted as GRENI1 and is an inverse of GNDVI).”
Page 7, second column – “Sentinel-2 derived NDSIs and DEM derived predictors were the strongest in predicting cane yield, CCS and sugar yield.”], based on the defined resolution images of the one or more fields [per Birla];
aggregating, by the computing device, the index values for the one or more fields with at least one environmental metric for the one or more fields [Page 4, first column – “Climate data were downloaded from the database of Australian climate data (SILO, 2020) for 98 stations spread across the Wet Tropics of Australia. These data contained daily averages for (1) solar radiation – total incoming downward shortwave radiation on a horizontal surface in MJ/m2 (RAD), (2) maximum temperature in ºC (TMAX), (3) minimum temperature ºC (TMIN), (4) vapour pressure in hPA (VP) as well as daily totals of (5) evaporation in mm (EVAP) and (6) rainfall in mm (RAIN).”
Page 5, second column – “In total, there were 371 predictor variables, extracted from Sentinel1 (21 predictors, i.e. 7 statistics (Table 4) extracted from 3 SAR bands), Sentinel-2 (315 predictors, i.e. 7 statistics (Table 4) exracted from 45 NDSIs), DEM (28 predictors, i.e. 7 statistics (Table 4) extracted from 4 DEM variables), soil (1 predictor) and climate (6 predictors, i.e. average (avg) value of 6 climate variables) mosaics.” See Table 6 – All predictors.];
predicting, by the computing device, a plot yield for the one or more fields, based on the aggregated index values and the at least one environmental metric [See section 2.8. Yield prediction, particularly – “In total, there were 371 predictor variables, extracted from Sentinel1 (21 predictors, i.e. 7 statistics (Table 4) extracted from 3 SAR bands), Sentinel-2 (315 predictors, i.e. 7 statistics (Table 4) exracted from 45 NDSIs), DEM (28 predictors, i.e. 7 statistics (Table 4) extracted from 4 DEM variables), soil (1 predictor) and climate (6 predictors, i.e. average (avg) value of 6 climate variables) mosaics.” See section 3.2. Yield predictions. See Table 6 – All predictors.]; and
storing, by the computing device, the predicted yield for the one or more fields [See Fig. 8] in a memory [Inherent given the use of machine learning.  See section 2.8.3. Machine learning inference].
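
For illustration only (not part of Shendryk or the claims), the following Python sketch shows one way per-field index statistics could be aggregated with environmental metrics and fed to a machine-learning regressor to predict and store a plot yield; the variable names, synthetic data, and choice of a random-forest model are assumptions made for this sketch.

    # A minimal, hypothetical sketch of the mapped prediction steps: per-field
    # spectral-index statistics are aggregated with environmental metrics and
    # fed to a machine-learning regressor.  All names, the synthetic data, and
    # the choice of a random-forest model are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def build_feature_row(index_values, climate_metrics):
        """Aggregate a field's index values (e.g., NDSI samples) with
        environmental metrics (e.g., mean rainfall, temperature)."""
        stats = [index_values.mean(), index_values.std(),
                 index_values.min(), index_values.max()]
        return np.concatenate([stats, climate_metrics])

    # Synthetic example: 50 fields, each with 100 per-pixel index values and
    # 3 climate averages; the target is the observed plot yield.
    fields_idx = rng.uniform(-1, 1, size=(50, 100))    # index values per field
    fields_climate = rng.uniform(0, 1, size=(50, 3))   # climate metrics per field
    yields = rng.uniform(50, 120, size=50)             # observed yields (t/ha)

    X = np.stack([build_feature_row(i, c)
                  for i, c in zip(fields_idx, fields_climate)])
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, yields)

    predicted = model.predict(X[:5])   # predicted plot yields, held in memory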

Regarding Claims 4 and 15, the combination would disclose that the generative model includes a generative adversarial network (GAN) model [Page 4 of Birla – SRGAN], and wherein the GAN model includes a generator [Page 4 of Birla – Generator] and a discriminator coupled to the generator [Page 4 of Birla – Discriminator], and
wherein generating the defined resolution images includes generating, by the generator, the defined resolution images [Page 4 of Birla – SR Images; achieving the high resolution of the images of Scharf.] based on at least one input image from the first data set [Page 4 of Birla – LR Images; enhancing the low resolution 10 m satellite images of Shendryk.].
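
For illustration only, the following minimal PyTorch sketch reflects the SRGAN arrangement described by Birla, with a generator that produces higher resolution images from low resolution input and a discriminator coupled to the generator; the layer sizes and scale factor are assumptions made for this sketch, not details taken from Birla.

    # A minimal, hypothetical PyTorch sketch of the SRGAN arrangement described
    # by Birla: a generator upsamples low-resolution (LR) input into higher
    # resolution (SR) images, and a coupled discriminator scores images as real
    # or generated.  Layer sizes and the scale factor are illustrative.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, channels=3, scale=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 64, kernel_size=9, padding=4),
                nn.PReLU(),
                nn.Conv2d(64, channels * scale * scale, kernel_size=3, padding=1),
                nn.PixelShuffle(scale),  # rearranges channels into a larger image
            )

        def forward(self, lr):           # lr: (N, C, H, W) low-resolution batch
            return self.net(lr)          # sr: (N, C, scale*H, scale*W)

    class Discriminator(nn.Module):
        def __init__(self, channels=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 64, kernel_size=3, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(128, 1),       # real/fake score
            )

        def forward(self, img):
            return self.net(img)

    lr_batch = torch.randn(2, 3, 16, 16)   # e.g., coarse satellite patches
    sr_batch = Generator()(lr_batch)       # (2, 3, 64, 64) generated SR images
    score = Discriminator()(sr_batch)      # discriminator coupled to the generator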

Regarding Claim 5, Shendryk discloses defining, by the computing device, a field level image for each of the one of more fields, from the defined resolution images [See Figs. 2 and 8]; and
	wherein deriving the index values for the one or more fields includes deriving the index values for each of the field level images for the one or more fields [Page 3, second column – “In contrast to previous studies that exclusively relied on NDVI (Begue et al., 2010; Morel et al., 2014) or green NDVI (GNDVI) (Rahman and Robson, 2020) for yield prediction, in this study we derived 45 normalized difference spectral indices (NDSIs) from Sentinel-2 imagery, that previously showed to be useful in differentiating vegetation types (Shendryk et al., 2020b). NDSIs were calculated in succession from Blue to SWIR-2 (Table 3) spectral bands as follows:
NDSI(i,j) = (Ri − Rj)/(Ri + Rj),
where R is the spectral reflectance, and i and j are numbers indicating the wavelengths (nm). Throughout this paper each NDSI is denoted as a combination of three-letter acronyms in Table 3 (e.g. NDSI(RED,NI1) is denoted as REDNI1 while NDSI(GRE,NI1) is denoted as GRENI1 and is an inverse of GNDVI).”
Page 7, second column – “Sentinel-2 derived NDSIs and DEM derived predictors were the strongest in predicting cane yield, CCS and sugar yield.”].

Regarding Claim 6, Shendryk discloses that deriving the index values includes deriving each of the index values based on the following:
index value = (nir − red) / (nir + red);
wherein nir is a near infrared band value of each of the field level images and red is a red band value of each of the field level images [Page 3, second column – “In contrast to previous studies that exclusively relied on NDVI (Begue et al., 2010; Morel et al., 2014) or green NDVI (GNDVI) (Rahman and Robson, 2020) for yield prediction, in this study we derived 45 normalized difference spectral indices (NDSIs) from Sentinel-2 imagery, that previously showed to be useful in differentiating vegetation types (Shendryk et al., 2020b). NDSIs were calculated in succession from Blue to SWIR-2 (Table 3) spectral bands as follows:
NDSI(i,j) = (Ri − Rj)/(Ri + Rj),
where R is the spectral reflectance, and i and j are numbers indicating the wavelengths (nm). Throughout this paper each NDSI is denoted as a combination of three-letter acronyms in Table 3 (e.g. NDSI(RED,NI1) is denoted as REDNI1 while NDSI(GRE,NI1) is denoted as GRENI1 and is an inverse of GNDVI).”
Page 7, second column – “Sentinel-2 derived NDSIs and DEM derived predictors were the strongest in predicting cane yield, CCS and sugar yield.”].
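
For illustration, the normalized-difference form recited in Claim 6 and quoted from Shendryk can be computed as in the following Python sketch; the reflectance values are hypothetical.

    # A small illustrative computation of the normalized-difference index form
    # quoted above; the reflectance arrays are hypothetical values.
    import numpy as np

    def ndsi(band_i, band_j):
        """NDSI(i, j) = (R_i - R_j) / (R_i + R_j), computed per pixel."""
        band_i = band_i.astype(float)
        band_j = band_j.astype(float)
        return (band_i - band_j) / (band_i + band_j)

    # NDVI is the special case using the near-infrared and red bands:
    nir = np.array([[0.60, 0.55], [0.58, 0.62]])   # hypothetical NIR reflectance
    red = np.array([[0.10, 0.12], [0.11, 0.09]])   # hypothetical red reflectance
    ndvi = ndsi(nir, red)                          # per-pixel index values
    field_index_value = ndvi.mean()                # e.g., one statistic per field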

Regarding Claim 7, Shendryk discloses that the index values are representative of a vegetation greenness of the one or more fields [Page 3, second column – “In contrast to previous studies that exclusively relied on NDVI (Begue et al., 2010; Morel et al., 2014) or green NDVI (GNDVI) (Rahman and Robson, 2020) for yield prediction, in this study we derived 45 normalized difference spectral indices (NDSIs) from Sentinel-2 imagery, that previously showed to be useful in differentiating vegetation types (Shendryk et al., 2020b). NDSIs were calculated in succession from Blue to SWIR-2 (Table 3) spectral bands as follows:
NDSI(i,j) = (Ri − Rj)/(Ri + Rj),
where R is the spectral reflectance, and i and j are numbers indicating the wavelengths (nm). Throughout this paper each NDSI is denoted as a combination of three-letter acronyms in Table 3 (e.g. NDSI(RED,NI1) is denoted as REDNI1 while NDSI(GRE,NI1) is denoted as GRENI1 and is an inverse of GNDVI).”
Page 7, second column – “Sentinel-2 derived NDSIs and DEM derived predictors were the strongest in predicting cane yield, CCS and sugar yield.”].

Regarding Claims 8 and 16, the combination would disclose that the images of the first data set further include a temporal resolution of one image per N number of days, where N is an integer less than about 30 [Page 3, first column of Shendryk – “The satellite imagery consisted of Sentinel-1 SAR and Sentinel-2 multispectral imagery covering the area of the Wet Tropics of Australia (Fig. 1). Sentinel-1 imagery had a spatial resolution of 10 m and temporal resolution of up to 12 days, while Sentinel-2 imagery had a spatial resolution of 10−20 m and temporal resolution of up to 5 days.”]; and
that the defined resolution images of the one or more fields include said temporal resolution [Super resolution enhancement of the satellite images per Birla].

Regarding Claim 9, Shendryk discloses the at least one environmental metric includes at least one of: precipitation, solar radiation, and/or temperature [Page 4, first column – “Climate data were downloaded from the database of Australian climate data (SILO, 2020) for 98 stations spread across the Wet Tropics of Australia. These data contained daily averages for (1) solar radiation – total incoming downward shortwave radiation on a horizontal surface in MJ/m2 (RAD), (2) maximum temperature in ºC (TMAX), (3) minimum temperature ºC (TMIN), (4) vapour pressure in hPA (VP) as well as daily totals of (5) evaporation in mm (EVAP) and (6) rainfall in mm (RAIN).”].

Regarding Claims 11 and 18, the combination would disclose accessing, by the computing device, a training data set [Page 4 of Birla – “train the discriminator and the generator”], the training data set including a high resolution data set [Page 4 of Birla – HR Images; using the high resolution images of Scharf.] and a low resolution data set [Page 4 of Birla – LR Images; using the low resolution satellite images of Shendryk.];
wherein the high resolution data set includes images associated with the one or more fields, the images of the high resolution data set having a spatial resolution of about X centimeters per pixel, where X is an integer less than about 5 [Paragraph [0046] of Scharf – “The spatial resolution of the digitized images ranged from 0.42 to 1.00 m per pixel (see Table 1), depending on differences in the altitudes at which the aerial photographs were obtained and differences in the focal length used by the camera to obtain the aerial photographs.”
Paragraph [0050] of Scharf – “previously acquired photographs of corn fields with high resolution (~4 cm)”]; and
wherein the low resolution data set includes images associated with one or more fields, the images of the low resolution data set having a spatial resolution of at least about one meter per pixel [Page 3, first column of Shendryk – “The satellite imagery consisted of Sentinel-1 SAR and Sentinel-2 multispectral imagery covering the area of the Wet Tropics of Australia (Fig. 1). Sentinel-1 imagery had a spatial resolution of 10 m and temporal resolution of up to 12 days, while Sentinel-2 imagery had a spatial resolution of 10−20 m and temporal resolution of up to 5 days.”
Scharf also discloses that satellites can have such a resolution in Paragraph [0026] – “Remote sensing data may be collected using a variety of instruments selected from the list comprising aircraft mounted camera, satellite-mounted camera, camera mounted on a tower or depressed area of the field, and combinations thereof. The altitude and camera focal length used to capture the field images are selected to capture the largest possible image of the field that is possible within the field of view of the camera, in order to maximize the resolution of the resulting image. For best results, the resolution of the resulting digital images falls in the range between about 0.1 m/pixel and about 5 m/pixel, and more preferably between about 0.2 and 2 m/pixel, and most preferably about 0.5 m/pixel.”]; and
training the generative model, based on at least a portion of the high resolution data set and the low resolution data set [Page 4 of Birla – “train the discriminator and the generator”].
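
As an illustrative sketch of the training-data arrangement mapped above, the following Python code pairs high resolution patches (as would be supplied by aerial imagery per Scharf) with low resolution counterparts (comparable to the satellite imagery of Shendryk) by downsampling; the scale factor and bicubic interpolation are assumptions made for this sketch.

    # A hypothetical sketch of assembling the paired training data mapped above:
    # high-resolution (HR) patches, such as aerial imagery per Scharf, are
    # degraded to a coarse resolution comparable to the satellite imagery of
    # Shendryk to form (LR, HR) pairs for training the generative model.  The
    # scale factor and bicubic downsampling are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def make_lr_hr_pairs(hr_patches, scale=4):
        """Given HR patches of shape (N, C, H, W), return (LR, HR) pairs where
        the LR patches are bicubically downsampled by `scale`."""
        lr_patches = F.interpolate(hr_patches, scale_factor=1 / scale,
                                   mode="bicubic", align_corners=False)
        return lr_patches, hr_patches

    hr = torch.rand(8, 3, 64, 64)            # hypothetical aerial image patches
    lr, hr = make_lr_hr_pairs(hr, scale=4)   # lr: (8, 3, 16, 16)
    # These (LR, HR) batches would then drive the adversarial training step
    # ("train the discriminator and the generator", Birla, page 4).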

	Claim(s) 2 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shendryk et al., Integrating satellite imagery and environmental data to predict field-level cane and sugar yields in Australia using machine learning, Elsevier, 2020 [hereinafter “Shendryk”] (supplied by the Applicant on 3/2/2023); Birla, Single Image Super Resolution Using GANS, Medium.com, 11.8.2018 (supplied by the Applicant on 3/2/2023); Scharf et al. (US 20120250962 A1) [hereinafter “Scharf”]; and Ellinger, Understanding Spatial Resolution with Drones, TLT Photography, 2017.
Regarding Claims 2 and 13, Shendryk discloses that the first data set includes satellite images of the one or more fields, in which a crop is grown [Title – “Integrating satellite imagery and environmental data to predict field-level cane and sugar yields in Australia using machine learning”], but the combination fails to disclose that X is less than or equal to about 1 centimeter.
	However, Ellinger discloses that an aerial drone can be used to image at such a resolution [Page 4 – “If you used the Pix4D GSD Calculator and you wanted to obtain a 3cm/pixel spatial resolution you would fly at an altitude of 109 meters (358ft). … Yes, you can achieve 1cm spatial resolution by drone[.]”].  It would have been obvious to use an aerial camera to produce such high resolution images such that they could be used as the high resolution images needed by Birla in order to use the SRGAN to enhance the lower resolution satellite images.
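
For context, the altitude-to-resolution relationship relied on by Ellinger can be illustrated with the commonly used ground sample distance (GSD) approximation in the Python sketch below; the camera parameters shown are assumptions, not values taken from Ellinger.

    # A hedged illustration of the altitude-to-resolution relationship relied on
    # above, using the commonly cited ground sample distance (GSD) approximation;
    # the example camera parameters are assumptions, not values from Ellinger.
    def gsd_cm_per_pixel(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
        """Approximate ground sample distance in centimeters per pixel."""
        return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

    # Hypothetical drone camera: 13.2 mm sensor width, 8.8 mm focal length,
    # 5472-pixel-wide images, flown at two different altitudes.
    print(gsd_cm_per_pixel(13.2, 8.8, 109, 5472))  # roughly 3 cm/pixel at 109 m
    print(gsd_cm_per_pixel(13.2, 8.8, 36, 5472))   # roughly 1 cm/pixel at ~36 m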

	Claim(s) 3 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shendryk et al., Integrating satellite imagery and environmental data to predict field-level cane and sugar yields in Australia using machine learning, Elsevier, 2020 [hereinafter “Shendryk”] (supplied by the Applicant on 3/2/2023); Birla, Single Image Super Resolution Using GANS, Medium.com, 11.8.2018 (supplied by the Applicant on 3/2/2023); Scharf et al. (US 20120250962 A1) [hereinafter “Scharf”]; and Roman et al., Noise Estimation for Generative Diffusion Models, arXiv, 9.12.2021 [hereinafter “Roman”].
Regarding Claims 3 and 14, Birla fails to disclose that the generative model includes a diffusion model.  However, Roman discloses the use of a diffusion model in image generation [Page 1, first column – “An emerging class of non-autoregessive models is the one of Denoising Diffusion Probabilistic Models (DDPM). Such methods use diffusion models and denoising score matching in order to generate images (Ho, Jain, and Abbeel 2020) and speech (Chen et al. 2020). The DDPM model learns to perform a diffusion process on a Markov chain of latent variables. The diffusion process transforms a data sample into Gaussian noise. During inference the reverse process is used, which is called the denoising process.”].  It would have been obvious to use such a diffusion model in the image generation of Birla in order to achieve denoising of the low resolution images.
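
For illustration, the forward diffusion process described in the quoted passage from Roman can be sketched in Python as follows; the noise schedule and tensor shapes are assumptions made for this sketch.

    # A minimal, hypothetical sketch of the DDPM forward (diffusion) process
    # described in the quoted passage: a sample is progressively transformed
    # into Gaussian noise along a Markov chain, and generation learns the
    # reverse denoising process.  The schedule and shapes are illustrative.
    import torch

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)               # noise schedule
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    def q_sample(x0, t):
        """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I)."""
        noise = torch.randn_like(x0)
        a_bar = alphas_cumprod[t]
        return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    x0 = torch.rand(1, 3, 32, 32)      # e.g., a low-resolution image patch
    x_mid = q_sample(x0, t=250)        # partially noised sample
    x_T = q_sample(x0, t=T - 1)        # close to pure Gaussian noise
    # A denoising network would be trained to invert these steps at inference.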

	Claim(s) 10 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shendryk et al., Integrating satellite imagery and environmental data to predict field-level cane and sugar yields in Australia using machine learning, Elsevier, 2020 [hereinafter “Shendryk”] (supplied by the Applicant on 3/2/2023); Birla, Single Image Super Resolution Using GANS, Medium.com, 11.8.2018 (supplied by the Applicant on 3/2/2023); Scharf et al. (US 20120250962 A1) [hereinafter “Scharf”]; and Guo et al. (US 20220217894 A1) [hereinafter “Guo”].
Regarding Claims 10 and 17, Shendryk discloses that aggregating the index values for the one or more fields with the at least one environmental metric for the one or more fields includes aggregating the index values with the at least one environmental metric [Page 5, second column – “In total, there were 371 predictor variables, extracted from Sentinel1 (21 predictors, i.e. 7 statistics (Table 4) extracted from 3 SAR bands), Sentinel-2 (315 predictors, i.e. 7 statistics (Table 4) exracted from 45 NDSIs), DEM (28 predictors, i.e. 7 statistics (Table 4) extracted from 4 DEM variables), soil (1 predictor) and climate (6 predictors, i.e. average (avg) value of 6 climate variables) mosaics.” See Table 6 – All predictors.], but fails to disclose aggregating the value/metric, by using one of inverted variance weighting and convolutional neural networking, into a combined metric; and wherein predicting the plot yield is based on the combined metric.
	However, Guo discloses aggregating overhead farm images and environmental data through use of a neural network in the determination of an SOC metric [Paragraph [0025] – “As shown in FIG. 1 and mentioned previously, some farm vehicles may be operated at least partially autonomously, and may include, for instance, unmanned aerial vehicle 107.sub.1 that carries a vision sensor 108.sub.1 that acquires vision sensor data such as digital images from overhead field(s) 112.”
Paragraph [0027] – “Local data module 116 may be configured to gather, collect, request, obtain, and/or retrieve ground truth observational data from a variety of different sources, such as agricultural personnel and sensors and software implemented on robot(s), aerial drones, and so forth. Local data module 116 may store that ground truth observational data in one or more of the databases 115, 121, or in another database (not depicted). This ground truth observational data may be associated with individual agricultural fields or particular positional coordinates within such field(s), and may include various types of information derived from user input and sensor output related to soil composition (e.g., soil aeration, moisture, organic carbon content, etc.), agricultural management practices (e.g., crop plantings, crop identification, crop rotation, irrigation, tillage practices, etc.), terrain (e.g., land elevation, slope, erosion, etc.), climate or weather (e.g., precipitation levels/frequency, temperatures, sunlight exposure, wind, humidity, etc.), and any other features, occurrences, or practices that could affect the agricultural conditions of the field(s) and which could be identified based on analyzing sensor output and/or user input and/or generated based on such identified data.”
Paragraph [0036] – “SOC inference module 128 can receive, gather, or otherwise obtain the digital images, the operational data, and observational data in order to use the types of data to generate predicted SOC measurements for the field(s) 112.”
Paragraph [0040] – “SOC inference module 128 may take the form of recurrent neural network(s) (“RNN”), the aforementioned CNNs, long short-term memory (“LSTM”) neural network(s), gated recurrent unit (“GRU”) recurrent network(s), feed forward neural network(s), or other types of memory networks.”].  It would have been obvious to determine/aggregate such a metric using the index values and environmental metric and use it in predicting crop yield because Guo discloses that SOC is relevant to crop yield [Paragraph [0059] – “In some implementations, at block 408, crop yield may be predicted as well (crop yield may be correlated with SOC extracted from and/or added to the soil).”].
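
As a purely illustrative sketch of the inverse-variance weighting alternative recited in Claims 10 and 17, the following Python code combines several per-field values into a single combined metric weighted by the inverse of their variances; the input values are hypothetical.

    # A purely illustrative sketch of inverse-variance weighting: several
    # per-field values are combined into one metric, each weighted by the
    # inverse of its estimated variance.  The inputs are hypothetical.
    import numpy as np

    def inverse_variance_combine(values, variances):
        """Combine estimates into a single metric weighted by 1/variance."""
        values = np.asarray(values, dtype=float)
        weights = 1.0 / np.asarray(variances, dtype=float)
        return np.sum(weights * values) / np.sum(weights)

    combined_metric = inverse_variance_combine(
        values=[0.72, 0.65, 0.80],      # e.g., index value and rescaled metrics
        variances=[0.02, 0.05, 0.10],
    )
    # The combined metric would then feed the yield-prediction step.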

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Gandikota et al., RTC-GAN: REAL-TIME CLASSIFICATION OF SATELLITE IMAGERY USING DEEP GENERATIVE ADVERSARIAL NETWORKS WITH INFUSED SPECTRAL INFORMATION, IEEE, 2020
Jiang et al., GAN-BASED MULTI-LEVEL MAPPING NETWORK FOR SATELLITE IMAGERY SUPER-RESOLUTION, IEEE, 2019
Liu et al., PSGAN: A GENERATIVE ADVERSARIAL NETWORK FOR REMOTE SENSING IMAGE PAN-SHARPENING, IEEE, 2018
US 20200125929 A1 – CROP YIELD PREDICTION AT FIELD-LEVEL AND PIXEL-LEVEL
US 20220335715 A1 – PREDICTING VISIBLE/INFRARED BAND IMAGES USING RADAR REFLECTANCE/BACKSCATTER IMAGES OF A TERRESTRIAL REGION
US 20210012109 A1 – SYSTEM AND METHOD FOR ORCHARD RECOGNITION ON GEOGRAPHIC AREA
US 20180189564 A1 – METHOD AND SYSTEM FOR CROP TYPE IDENTIFICATION USING SATELLITE OBSERVATION AND WEATHER DATA
US 20220198221 A1 – ARTIFICIAL INTELLIGENCE GENERATED SYNTHETIC IMAGE DATA FOR USE WITH MACHINE LANGUAGE MODELS
US 20180211156 A1 – CROP YIELD ESTIMATION USING AGRONOMIC NEURAL NETWORK
US 20210201024 A1 – CROP IDENTIFICATION METHOD AND COMPUTING DEVICE
US 20210397836 A1 – USING EMPIRICAL EVIDENCE TO GENERATE SYNTHETIC TRAINING DATA FOR PLANT DETECTION
US 20210312591 A1 – SYSTEMS AND METHOD OF TRAINING NETWORKS FOR REAL-WORLD SUPER RESOLUTION WITH UNKNOWN DEGRADATIONS

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ROBERT QUIGLEY whose telephone number is (313) 446-4879. The examiner can normally be reached 11AM-9PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen Vazquez, can be reached on (571) 272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.





/KYLE R QUIGLEY/
Primary Examiner, Art Unit 2857



