20240012415. TRUE VISION AUTONOMOUS MOBILE SYSTEM simplified abstract (Unknown Organization)

From WikiPatents

TRUE VISION AUTONOMOUS MOBILE SYSTEM

Organization Name

Unknown Organization

Inventor(s)

Newton Howard of Potomac MD (US)

TRUE VISION AUTONOMOUS MOBILE SYSTEM - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240012415 titled 'TRUE VISION AUTONOMOUS MOBILE SYSTEM'.

Simplified Explanation

Embodiments of this patent application propose a novel approach to autonomous systems that relies on video and audio signals. In one example, a mobile system such as a vehicle, vessel, or aircraft is equipped with multiple video and audio sensors. These sensors gather information about the surroundings and transmit video and audio data representing it. A computer system receives the data from the sensors, fuses it to generate a representation of the surroundings, and uses that representation to enable autonomous operation of the mobile system.

  • The patent application proposes an alternative approach to autonomous systems built on two already-available sensing modalities: video and audio.
  • The mobile system (vehicle, vessel, or aircraft) is equipped with multiple video sensors and audio sensors to gather information about the surroundings.
  • The video and audio data obtained from the sensors are transmitted to a computer system.
  • The computer system performs fusion of the received data to generate information representing the surroundings.
  • The generated information is then used to enable autonomous functioning of the mobile system.
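The fusion step summarized above could be sketched as a simple late-fusion rule: a video detection is given higher confidence when an audio detection arrives from roughly the same direction. The `Detection` type, the bearing-matching threshold, and the confidence boost are illustrative assumptions for this sketch, not details from the patent application:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str           # e.g. "siren_vehicle"
    bearing_deg: float   # direction relative to the vehicle's heading
    confidence: float    # 0.0 .. 1.0

def fuse(video_detections, audio_detections, bearing_tolerance_deg=15.0):
    """Late fusion sketch: boost the confidence of a video detection
    when an audio detection is heard from roughly the same bearing."""
    fused = []
    for v in video_detections:
        corroborated = any(
            abs(v.bearing_deg - a.bearing_deg) <= bearing_tolerance_deg
            for a in audio_detections
        )
        conf = min(1.0, v.confidence + 0.2) if corroborated else v.confidence
        fused.append(Detection(v.label, v.bearing_deg, conf))
    return fused
```

A usage example: a vehicle seen at bearing 30 degrees with confidence 0.6, corroborated by a siren heard at 28 degrees, comes out of `fuse` with its confidence raised to about 0.8, while the same sighting with no matching audio keeps its original score.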

Potential Applications:

  • Autonomous vehicles: This technology can be applied to self-driving cars, trucks, and other autonomous vehicles to enhance their perception and decision-making capabilities.
  • Autonomous drones: Drones can utilize this technology to improve their ability to navigate and interact with their surroundings autonomously.
  • Autonomous vessels: Ships and boats can benefit from this technology to enhance their situational awareness and autonomous navigation.

Problems Solved:

  • Limited perception: Traditional autonomous systems rely heavily on specific sensors such as LiDAR or radar, which may have limitations in certain scenarios. This technology expands perception capabilities by utilizing video and audio signals.
  • Sensor redundancy: By using multiple video and audio sensors, the system can compensate for the failure or limitations of individual sensors, ensuring a more reliable perception of the surroundings.
  • Real-time decision-making: The fusion of video and audio data allows for more comprehensive and accurate information about the surroundings, enabling faster and more informed decision-making for autonomous systems.
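The sensor-redundancy point above could be sketched as follows: combine estimates from several overlapping sensors, skip any that have failed, and use the median so a single faulty reading does not dominate. The function name and the use of range readings are hypothetical choices for this sketch:

```python
import statistics

def robust_range_estimate(readings):
    """Combine range estimates (in meters) from multiple sensors.
    Failed sensors report None and are ignored; the median of the
    remaining readings resists single-sensor outliers."""
    valid = [r for r in readings if r is not None]
    if not valid:
        raise ValueError("all sensors failed")
    return statistics.median(valid)
```

For instance, with readings `[12.1, None, 11.9, 55.0]` (one dead sensor, one outlier), the median of the three valid values is 12.1, so the outlier and the failure are both tolerated.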

Benefits:

  • Enhanced perception: By combining video and audio signals, the system can provide a more detailed and comprehensive understanding of the surroundings, improving the safety and efficiency of autonomous systems.
  • Redundancy and reliability: The use of multiple sensors ensures that the system can continue to operate even if one or more sensors fail or encounter limitations.
  • Cost-effectiveness: Utilizing existing video and audio signals eliminates the need for additional expensive sensors, reducing the overall cost of implementing autonomous systems.


Original Abstract Submitted

embodiments may provide techniques for an alternative and innovative approach to autonomous systems using two already existing senses: video and audio signals. for example, in an embodiment, a mobile system may comprise a vehicle, vessel, or aircraft comprising a plurality of video sensors, and a plurality of audial sensors, adapted to obtain information about surroundings of the vehicle, vessel, or aircraft and to transmit video and audial data representing the information about surroundings of the vehicle, vessel, or aircraft, and at least one computer system adapted to receive the video and audial data from the plurality of sensors, perform fusion of the received data to generate information representing the surroundings of the vehicle, vessel, or aircraft, and to use the generated information to provide autonomous functioning of the vehicle, vessel, or aircraft.