Nvidia Corporation (20240199074). EGO TRAJECTORY PLANNING WITH RULE HIERARCHIES FOR AUTONOMOUS VEHICLES simplified abstract

From WikiPatents
Revision as of 18:16, 20 June 2024 by Wikipatents (talk | contribs) (Creating a new page)

EGO TRAJECTORY PLANNING WITH RULE HIERARCHIES FOR AUTONOMOUS VEHICLES

Organization Name

Nvidia Corporation

Inventor(s)

Sushant Veer of Santa Clara, CA (US)

Karen Leung of Santa Clara, CA (US)

Ryan Cosner of Altadena, CA (US)

Yuxiao Chen of Santa Clara, CA (US)

Marco Pavone of Santa Clara, CA (US)

EGO TRAJECTORY PLANNING WITH RULE HIERARCHIES FOR AUTONOMOUS VEHICLES - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240199074 titled 'EGO TRAJECTORY PLANNING WITH RULE HIERARCHIES FOR AUTONOMOUS VEHICLES'.

Simplified Explanation

The patent application discusses how autonomous vehicles can resolve conflicts among traveling rules by using a rank-preserving reward function to select the least objectionable control action.

Key Features and Innovation

  • Utilizes a rank-preserving reward function to choose the least objectionable control action for the autonomous vehicle (AV).
  • Derives a robustness vector for each candidate trajectory, which determines the reward earned for satisfying each rule in the hierarchy.
  • Applies one or more optimizers, such as a stochastic optimizer, to improve the results of the reward calculation.
  • Replaces the step function used to compute the robustness vector with a sigmoid function to smooth the calculation.
  • Communicates the preferred trajectory to the AV controller for implementation as a control action.

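The features above can be sketched in a few lines. The patent does not give a concrete formula, so the exponential rank weighting `base ** (n - i)` and the sigmoid sharpness `k` below are illustrative assumptions; the sketch only shows how a sigmoid-smoothed robustness vector can feed a reward that preserves the rule hierarchy's ranking.

```python
import math

def sigmoid(x, k=10.0):
    # Smooth approximation of the step function 1[x > 0];
    # larger k makes the transition sharper.
    return 1.0 / (1.0 + math.exp(-k * x))

def rank_preserving_reward(robustness, base=10.0):
    # robustness: one entry per rule, ordered from highest-ranked
    # rule to lowest. A positive entry means the rule is satisfied;
    # its magnitude measures the margin of satisfaction.
    # Weighting rule i by base**(n - i) is an assumed scheme (not
    # from the patent text) under which satisfying a higher-ranked
    # rule always outweighs satisfying all lower-ranked rules.
    n = len(robustness)
    return sum(base ** (n - i) * sigmoid(rho)
               for i, rho in enumerate(robustness, start=1))

# Trajectory A satisfies the top rule but not the second;
# trajectory B does the opposite. A should earn the higher reward.
reward_a = rank_preserving_reward([0.5, -0.3])
reward_b = rank_preserving_reward([-0.5, 0.3])
assert reward_a > reward_b
```

Because the sigmoid is differentiable, this reward can also be handed to gradient-based or stochastic optimizers, which a hard step function would not allow.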
Potential Applications

The technology can be applied in autonomous vehicles to navigate conflicting traveling rules efficiently and safely.

Problems Solved

Addresses the challenge autonomous vehicles face when traveling rules conflict: selecting the least objectionable control action rather than failing to act.

Benefits

  • Enhances the decision-making process for autonomous vehicles in complex traffic scenarios.
  • Improves the overall safety and efficiency of autonomous vehicle navigation.

Commercial Applications

The technology can be utilized in the automotive industry for the development of advanced autonomous driving systems.

Prior Art

Readers can explore prior research on reward-based decision-making systems for autonomous vehicles to understand the evolution of this technology.

Frequently Updated Research

Stay updated on the latest advancements in reward-based decision-making systems for autonomous vehicles to enhance your knowledge in this field.

Questions about Autonomous Vehicle Navigation

How does the rank-preserving reward function improve decision-making in autonomous vehicles?

The rank-preserving reward function helps an autonomous vehicle select the least objectionable control action by weighting each rule according to its rank in the hierarchy, so a trajectory that satisfies a higher-ranked rule always earns more reward than one that satisfies only lower-ranked rules.
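The rank-preservation property described above can be checked directly. Assuming an exponential weighting `base ** (n - i)` for rule rank `i` (an illustrative choice, not specified in the patent), satisfying only the top-ranked rule must outweigh satisfying every lower-ranked rule at once:

```python
def reward(satisfied, n_rules, base=10.0):
    # satisfied: set of 1-indexed rule ranks the trajectory meets.
    # Rank 1 is the highest-priority rule in the hierarchy.
    return sum(base ** (n_rules - i) for i in satisfied)

n = 4
only_top = reward({1}, n)         # satisfies rule 1 only -> 1000.0
all_lower = reward({2, 3, 4}, n)  # satisfies every other rule -> 111.0
assert only_top > all_lower
```

Any `base` of 2 or more preserves the ranking here, since a geometric weight at rank 1 exceeds the sum of all lower-rank weights.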

What role do optimizers play in enhancing the reward calculation process for autonomous vehicles?

Optimizers such as stochastic optimizers refine the results of the reward calculation, allowing the planner to search over candidate trajectories more efficiently and settle on a higher-reward choice.
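The patent mentions "one or more optimizers, such as a stochastic optimizer" without fixing an algorithm, so the sketch below uses simple random shooting as one plausible instance: draw candidate trajectories, score each with the reward function, and keep the best. The one-dimensional "trajectory" (a lateral offset) and the quadratic reward are toy assumptions for illustration.

```python
import random

def stochastic_search(reward_fn, sample_candidate, n_samples=256, seed=0):
    # Random-shooting stochastic optimizer: evaluate the reward on
    # randomly sampled candidates and return the highest-reward one.
    rng = random.Random(seed)
    best, best_reward = None, float("-inf")
    for _ in range(n_samples):
        candidate = sample_candidate(rng)
        r = reward_fn(candidate)
        if r > best_reward:
            best, best_reward = candidate, r
    return best, best_reward

# Toy example: candidates are lateral offsets; reward peaks at 0.5.
best, r = stochastic_search(lambda x: -(x - 0.5) ** 2,
                            lambda rng: rng.uniform(-2.0, 2.0))
assert abs(best - 0.5) < 0.2
```

In a real planner the candidates would be full trajectories rolled out from sampled control sequences, and the reward would be the rank-preserving function over each trajectory's robustness vector; the selected trajectory is then sent to the AV controller.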


Original Abstract Submitted

autonomous vehicles (avs) may need to contend with conflicting traveling rules and the av controller would need to select the least objectionable control action. a rank-preserving reward function can be applied to trajectories derived from a rule hierarchy. the reward function can be correlated to a robustness vector derived for each trajectory. thereby the highest ranked rules would result in the highest reward, and the lower ranked rules would result in lower reward. in some aspects, one or more optimizers, such as a stochastic optimizer can be utilized to improve the results of the reward calculation. in some aspects, a sigmoid function can be applied to the calculation to smooth out the step function used to calculate the robustness vector. the preferred trajectory selected using the results from the reward function can be communicated to an av controller for implementation as a control action.