18374568. FUSING MULTIMODAL ENVIRONMENTAL DATA FOR AGRICULTURAL INFERENCE (Deere & Company)
Organization Name
Deere & Company
Inventor(s)
Yawen Zhang of Mountain View, CA (US)
Kezhen Chen of San Mateo, CA (US)
Jinmeng Rao of Sunnyvale, CA (US)
Xiaoyuan Guo of Palo Alto, CA (US)
Luis Pazos Outon of Mountain View, CA (US)
This abstract first appeared for US patent application 18374568, titled 'FUSING MULTIMODAL ENVIRONMENTAL DATA FOR AGRICULTURAL INFERENCE'.
Original Abstract Submitted
Implementations are disclosed for fusing multiple modalities of data into a multimodal feature embedding and then processing the multimodal feature embedding using various downstream processes for training and/or inference purposes. In various implementations, multiple different modalities of agricultural data about an agricultural parcel may be obtained. Each modality of agricultural data may be processed based on a respective modality-specific encoder to generate a respective modality-specific embedding. The plurality of modality-specific embeddings may be processed based on a multimodal fusion machine learning model to generate a multimodal feature embedding that represents the agricultural parcel. In some implementations, the multimodal feature embedding may be processed using downstream computer process(es) to generate agricultural prediction(s) about the agricultural parcel. Additionally or alternatively, the multimodal feature embedding may be used to train the multimodal fusion model and/or the modality-specific encoder(s).
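
The abstract describes a pipeline in which modality-specific encoders each produce an embedding, a fusion model combines those embeddings into a single parcel-level representation, and downstream processes use that representation for prediction and for training the encoders and fusion model. The following is a minimal PyTorch sketch of one such arrangement; the concrete modalities (imagery, weather time series, soil attributes), the network shapes, and the yield-prediction head are illustrative assumptions, not details taken from the filing.

# Illustrative sketch only: hypothetical modalities and architectures.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Modality-specific encoder for parcel imagery."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
    def forward(self, x):              # x: (batch, 3, H, W)
        return self.net(x)             # -> (batch, embed_dim)

class WeatherEncoder(nn.Module):
    """Modality-specific encoder for a weather time series."""
    def __init__(self, n_features=4, embed_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_features, embed_dim, batch_first=True)
    def forward(self, x):              # x: (batch, timesteps, n_features)
        _, h = self.rnn(x)
        return h[-1]                   # -> (batch, embed_dim)

class SoilEncoder(nn.Module):
    """Modality-specific encoder for tabular soil attributes."""
    def __init__(self, n_features=8, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))
    def forward(self, x):              # x: (batch, n_features)
        return self.net(x)

class MultimodalFusion(nn.Module):
    """Fuses modality-specific embeddings into one parcel embedding."""
    def __init__(self, embed_dim=128, n_modalities=3):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(embed_dim * n_modalities, 256),
                                  nn.ReLU(), nn.Linear(256, embed_dim))
    def forward(self, embeddings):     # list of (batch, embed_dim) tensors
        return self.fuse(torch.cat(embeddings, dim=-1))

class YieldHead(nn.Module):
    """Example downstream process: per-parcel crop yield prediction."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.head = nn.Linear(embed_dim, 1)
    def forward(self, z):
        return self.head(z)

# Wire the pipeline end to end on dummy data.
imagery = torch.randn(2, 3, 64, 64)    # parcel imagery
weather = torch.randn(2, 30, 4)        # 30 days x 4 weather variables
soil    = torch.randn(2, 8)            # 8 soil attributes

enc_img, enc_wx, enc_soil = ImageEncoder(), WeatherEncoder(), SoilEncoder()
fusion, head = MultimodalFusion(), YieldHead()

parcel_embedding = fusion([enc_img(imagery), enc_wx(weather), enc_soil(soil)])
prediction = head(parcel_embedding)    # downstream agricultural prediction

# Training against a downstream loss propagates gradients back through the
# fusion model and the modality-specific encoders, as the abstract describes.
loss = nn.functional.mse_loss(prediction, torch.zeros(2, 1))
loss.backward()

In this sketch, fusion is simple concatenation followed by a feed-forward network; the filing's fusion model could equally be attention-based or otherwise learned, and the same parcel embedding could feed multiple downstream heads.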