18060444. SENSOR FUSION USING ULTRASONIC SENSORS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS simplified abstract (NVIDIA Corporation)

Organization Name

NVIDIA Corporation

Inventor(s)

David Weikersdorfer of Mountain View CA (US)

Qian Lin of Berkeley CA (US)

Aman Jhunjhunwala of Toronto (CA)

Emilie Lucie Eloïse Wirbel of Nogent-sur-Marne (FR)

Sangmin Oh of San Jose CA (US)

Minwoo Park of Saratoga CA (US)

Gyeong Woo Cheon of San Jose CA (US)

Arthur Henry Rajala of Greenville OH (US)

Bor-Jeng Chen of San Jose CA (US)

SENSOR FUSION USING ULTRASONIC SENSORS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18060444 titled 'SENSOR FUSION USING ULTRASONIC SENSORS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS'.

Simplified Explanation

The abstract describes techniques for sensor-fusion-based object detection and free-space detection using ultrasonic sensors. Systems process sensor data to generate input data representing the locations of objects within an environment, then feed that data into neural networks trained to output maps of the environment; a toy sketch of this pipeline follows the list below.

  • Ultrasonic sensors used for object detection and free-space detection
  • Sensor data processed to generate input data representing objects within an environment
  • Neural networks trained to output maps associated with the environment, such as height, occupancy, or combined height/occupancy maps, e.g., from a bird's-eye-view perspective
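
As a concrete illustration of the pipeline above, here is a minimal sketch, assuming a PyTorch-style workflow: object locations inferred from ultrasonic returns are rasterized into a bird's-eye-view (BEV) grid, and a small convolutional network predicts per-cell occupancy and height. The grid size, cell resolution, network architecture, and every name (rasterize_echoes, BEVFusionNet) are illustrative assumptions, not the patent's actual design.

import torch
import torch.nn as nn

GRID = 64      # BEV grid resolution in cells per side (assumed)
CELL_M = 0.1   # metres per cell (assumed)

def rasterize_echoes(echoes: torch.Tensor) -> torch.Tensor:
    """Project (N, 2) object locations in metres, centred on the machine,
    into a 1 x GRID x GRID bird's-eye-view grid (hypothetical helper)."""
    grid = torch.zeros(1, GRID, GRID)
    idx = (echoes / CELL_M + GRID / 2).long().clamp(0, GRID - 1)
    grid[0, idx[:, 1], idx[:, 0]] = 1.0  # mark cells holding a return
    return grid

class BEVFusionNet(nn.Module):
    """Toy network: BEV input grid -> occupancy map and height map."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.occupancy_head = nn.Conv2d(16, 1, 1)  # probability per cell
        self.height_head = nn.Conv2d(16, 1, 1)     # metres per cell

    def forward(self, x):
        feats = self.backbone(x)
        return torch.sigmoid(self.occupancy_head(feats)), self.height_head(feats)

# Example: two ultrasonic returns, 0.5 m ahead and 1.2 m to the right.
echoes = torch.tensor([[0.0, 0.5], [1.2, 0.0]])
occupancy, height = BEVFusionNet()(rasterize_echoes(echoes).unsqueeze(0))
print(occupancy.shape, height.shape)  # both torch.Size([1, 1, 64, 64])

In practice the occupancy head would be trained against per-cell labels (e.g., with binary cross-entropy) and the height head with a regression loss; training details are outside the scope of this summary.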

Potential Applications

The technology can be applied in autonomous vehicles, robotics, security systems, and industrial automation for accurate object detection and free-space detection.

Problems Solved

The technology addresses the problem of accurately detecting objects and free space within an environment by fusing ultrasonic sensor data with data from other sensor types, improving the safety and efficiency of autonomous machines and systems.

Benefits

The benefits of this technology include improved object detection accuracy, enhanced safety, more efficient navigation and operation, and potential cost savings, since ultrasonic sensors are inexpensive relative to alternatives such as LiDAR.

Potential Commercial Applications

Potential commercial applications of this technology include autonomous vehicles, drones, warehouse automation, smart buildings, and surveillance systems.

Possible Prior Art

One possible example of prior art for this technology is the use of LiDAR sensors for object detection and mapping in autonomous vehicle and robotics applications.

Unanswered Questions

How does this technology compare to LiDAR-based object detection systems in terms of accuracy and cost-effectiveness?

This article does not provide a direct comparison between ultrasonic-sensor-based and LiDAR-based object detection systems in terms of accuracy and cost-effectiveness. In general, ultrasonic sensors are far cheaper but offer shorter range and coarser spatial resolution than LiDAR; further research and testing would be needed to weigh the two technologies in specific applications.

What are the limitations of using ultrasonic sensors for object detection in complex environments with obstacles and varying lighting conditions?

The article does not address the specific limitations of using ultrasonic sensors for object detection in complex, cluttered environments. Notably, ultrasonic sensing is acoustic rather than optical, so lighting conditions have little effect; known challenges instead include limited range, wide beam width, and weak returns from soft or sharply angled surfaces. Further investigation would be required to understand how the described sensor-fusion approach handles these scenarios.


Original Abstract Submitted

In various examples, techniques for sensor-fusion based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height, an occupancy, and/or height/occupancy map generated, e.g., from a birds-eye-view perspective. The machine may use these outputs to perform one or more operations.
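
For reference, the "process at least a portion of the sensor data to generate input data" step the abstract describes can be as simple as converting a round-trip ultrasonic echo time into a range and projecting that range into the machine's frame. The following is a minimal sketch assuming a single point on the sensor's boresight; the sensor pose, the helper name, and the constants are hypothetical, and real systems would also model beam width, multipath, and measurement noise.

import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_to_location(tof_s: float, sensor_x: float, sensor_y: float,
                     sensor_yaw_rad: float) -> tuple[float, float]:
    """Convert a round-trip echo time into an (x, y) point along the
    sensor's boresight (hypothetical helper): range = speed * time / 2."""
    rng = SPEED_OF_SOUND * tof_s / 2.0
    return (sensor_x + rng * math.cos(sensor_yaw_rad),
            sensor_y + rng * math.sin(sensor_yaw_rad))

# A sensor at (0.0, 0.2) on the front bumper, facing forward (+y),
# hears an echo after 5 ms -> an object about 0.86 m from the sensor.
print(echo_to_location(0.005, 0.0, 0.2, math.pi / 2))
# approx. (0.0, 1.06): 0.86 m of range plus the 0.2 m sensor offset

Points produced this way would then be rasterized or otherwise encoded as the input data that the neural networks described above consume.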