US Patent Application 18324002. METHOD FOR AUTOMATIC OBJECT AND/OR SEGMENT LABELING OF SENSOR TARGET DATA, USE OF SUCH LABELED SENSOR TARGET DATA, COMPUTER PROGRAM, AND CONTROL DEVICE OR CENTRAL OR ZONAL COMPUTING MODULE simplified abstract

From WikiPatents

Organization Name

Robert Bosch GmbH

Inventor(s)

Jerg Pfeil of Cleebronn (DE)

This abstract first appeared in US patent application 18324002.

Simplified Explanation

- The patent application describes a method for automatically labeling objects and segments in sensor data from vehicle target sensors.
- A sequence of camera images is captured, and an environment representation of the vehicle's surroundings is generated from it.
- A learned machine recognition method recognizes objects in the environment and estimates their positions from the camera images.
- Points in the environment representation are classified based on the recognized objects and their estimated positions.
- Distance data is captured using distance sensors, and the environment representation is adjusted based on this data.
- Finally, a synthetic image of the environment is calculated from a virtual perspective of observation using the adjusted environment representation.
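The steps above can be sketched in simplified form. This is a minimal illustration, not the patented implementation: the detector, the map representation, and all function names (`recognize_objects`, `classify_points`, `adjust_with_distances`, `synthetic_view`) are hypothetical stand-ins, and a "camera image" is mocked as a dict carrying detections.

```python
import math
from dataclasses import dataclass

@dataclass
class Point:
    """One point of the environment representation, initially unlabeled."""
    x: float
    y: float
    label: str = "unlabeled"

def recognize_objects(camera_image):
    # Stand-in for the learned machine recognition method; a real system
    # would run a trained detector on the image pixels.
    return camera_image["detections"]

def classify_points(points, detections, radius=1.5):
    # Label every map point that lies near an object's estimated position.
    for det in detections:
        for p in points:
            if (p.x - det["x"]) ** 2 + (p.y - det["y"]) ** 2 <= radius ** 2:
                p.label = det["cls"]

def adjust_with_distances(points, distances):
    # Refine each point using distance-sensor data (e.g. radar/lidar):
    # rescale the point along its bearing to the measured range.
    for p, r in zip(points, distances):
        bearing = math.atan2(p.y, p.x)
        p.x, p.y = r * math.cos(bearing), r * math.sin(bearing)

def synthetic_view(points, view_x=0.0, view_y=0.0):
    # Crude "synthetic image": labels ordered by distance from a
    # virtual observation point.
    return [p.label for p in
            sorted(points, key=lambda p: math.hypot(p.x - view_x, p.y - view_y))]

# Usage: two map points; the camera "sees" a car near the first one.
pts = [Point(2.0, 0.0), Point(10.0, 10.0)]
dets = recognize_objects({"detections": [{"cls": "car", "x": 2.2, "y": 0.1}]})
classify_points(pts, dets)
adjust_with_distances(pts, [2.5, 14.0])  # measured sensor ranges
print(synthetic_view(pts))               # → ['car', 'unlabeled']
```

The labels produced this way attach to the adjusted environment representation, which is the sense in which the camera-based recognition "automatically labels" the distance-sensor target data.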


Original Abstract Submitted

A method for automatic object and/or segment labeling of sensor target data of at least one vehicle target sensor. The method comprises first capturing of at least one sequence of camera images; generating an environment representation of the vehicle as a function of the captured sequence; recognizing at least one object in the environment by a learned machine recognition method as a function of a captured camera image; ascertaining an estimated position of the object as a function of the camera image; classifying a point of the environment representation based on the recognized object and the ascertained estimated position; and second capturing of distance data using at least one distance sensor. The generated environment representation is adjusted as a function of the captured distance data. A calculation of a synthetic image of the environment from a virtual perspective of observation takes place based on the adjusted environment representation.
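The abstract's final step, calculating a synthetic image from a virtual perspective of observation, can be illustrated with a toy pinhole projection. This is a hedged sketch under assumed conventions (virtual camera at `cam_pos` looking along +x, a tiny label grid standing in for the image); the function and parameter names are illustrative, not from the application.

```python
def project_to_virtual_view(points_3d, cam_pos, focal=1.0, width=8, height=8):
    """Project labeled 3D points into a virtual pinhole camera at cam_pos,
    looking along +x; returns a small label grid as the 'synthetic image'."""
    image = [["." for _ in range(width)] for _ in range(height)]
    cx, cy, cz = cam_pos
    for (x, y, z, label) in points_3d:
        dx, dy, dz = x - cx, y - cy, z - cz
        if dx <= 0:  # point is behind the virtual camera
            continue
        # Perspective divide, then map to pixel coordinates.
        u = int(width / 2 + focal * dy / dx * width / 2)
        v = int(height / 2 - focal * dz / dx * height / 2)
        if 0 <= u < width and 0 <= v < height:
            image[v][u] = label[0]  # mark pixel with the class's first letter
    return image

# Two labeled points of the adjusted environment representation, rendered
# from a virtual viewpoint at the origin.
pts = [(5.0, 0.0, 0.0, "car"), (5.0, 1.0, 0.5, "pedestrian")]
img = project_to_virtual_view(pts, cam_pos=(0.0, 0.0, 0.0))
```

Because the viewpoint is virtual, the same labeled representation can be rendered from perspectives no physical sensor occupied, which is what makes the labeled data reusable.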