
Latest Research Solves Freeway Ramp Merging problem of Autonomous Vehicles

Ramp merging has proven tricky for autonomous vehicles to maneuver. A Carnegie Mellon team uses reinforcement learning to address the problem.

In the past two decades, there has been a lot of interest in autonomous driving because of its numerous advantages, such as relieving drivers of exhausting driving and reducing traffic congestion. As a result, researchers have paid close attention to autonomous vehicles for their potential to increase the efficiency and safety of transportation networks through control algorithms while cutting down on fuel usage.

Despite encouraging progress, ramp merging remains a major challenge: it can cause frequent traffic jams, higher fuel consumption and emissions, safety concerns, and rear-end and side collisions. The difficulty lies in the decision-making of merging cars, which often slow down or even stop on the ramp before merging into the main lane at an appropriate moment, aiming to avoid interfering with the vehicles already moving in the main lane. Because the cut-in maneuvers of ramp vehicles can frequently disrupt mainline traffic flow and trigger these issues, ramp merging is crucial for freeway traffic operation.

At present, with their real-time communication and precise motion control abilities, autonomous vehicles can improve ramp merging through enhanced coordination techniques. Using dedicated short-range communications and cellular networks, these technologies enable detailed and rapid information exchange among road users, traffic infrastructure, and control centers, so vehicle maneuvers can be coordinated through real-time interaction among traffic participants. Furthermore, because they are less prone to delays and mistakes in perception, decision-making, and execution, autonomous driving systems can carry out the intended actions in a steady and timely way.


To enhance tactical decision-making in autonomous vehicles, a number of impediments still need to be addressed, and as computational resources advance, there will undoubtedly be exciting new opportunities to tackle challenging problems. In an effort to boost efficiency, researchers from Carnegie Mellon University have created a reinforcement learning (RL)-based framework that could improve the performance of autonomous vehicles in ramp merging settings. Their framework, outlined in a preprint on arXiv, can help strengthen the safety of autonomous vehicles at these crucial decision-making moments while lowering the likelihood of accidents.

Reinforcement learning is one of the machine learning methods most often discussed as a route toward artificial general intelligence (AGI). RL systems are frequently trained in gaming environments, which serve as testbeds for teaching agents new tasks using visual signals and the popular “carrot and stick” approach.

In a reinforcement learning approach, artificial intelligence (AI) agents are placed in simulated settings and choose among actions according to a predetermined policy. The agent makes a decision and is “punished” or “rewarded” for it; in other words, positive actions are encouraged and negative ones are discouraged. Whether its decision has a favorable or detrimental consequence, the AI modifies its policy accordingly and repeats the process, with each fresh choice influenced by the updated policy. The agent keeps going through this cycle until it converges on the best solution.
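As a rough illustration of this reward-driven loop, the sketch below implements tabular Q-learning, one classic RL update rule. The environment, action labels, and hyperparameters are placeholders chosen for clarity; this is not the ramp-merging setup from the CMU paper.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: the agent tries actions, receives a
# reward ("carrot") or penalty ("stick"), and nudges its value estimates,
# and hence its policy, accordingly. `env` is a placeholder with
# reset()/step() in the usual Gym style, not the paper's environment.

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration
ACTIONS = [0, 1]                          # e.g., "yield" vs. "merge" (illustrative only)
q_table = defaultdict(lambda: [0.0 for _ in ACTIONS])

def train(env, episodes=1000):
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q_table[state][a])
            next_state, reward, done = env.step(action)
            # The reward or penalty shifts the estimated value of this state-action pair.
            best_next = max(q_table[next_state])
            q_table[state][action] += ALPHA * (
                reward + GAMMA * best_next - q_table[state][action]
            )
            state = next_state
```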


Given the near-infinite complexity of real-world driving and the significant risks involved, RL may require fundamental technological advances to enable fully autonomous driving. In recent times, reinforcement learning has been extensively investigated for lane-changing decision-making in AVs, with positive results. However, most of these studies were eventually found to compromise either the safety or the efficiency of the algorithm.

According to Soumith Udatha, one of the researchers who created the model, Prof. John Dolan’s department at CMU has been working on numerous autonomous driving applications for quite some time. Udatha says that due to the challenges posed by fast-moving cars, drivers with different driving styles, and inherent uncertainties, the application on which his team concentrated in this work is freeway merging.

The central goal of Udatha and his team’s study is to increase the safety of autonomous vehicles. In their paper, they sought to develop a framework particularly designed to capture ramp merging situations and plan a vehicle’s actions based on its analysis of any uncertainties and potential dangers.

As mentioned above, reinforcement learning models interact with the environment and gather information to maximize their rewards, but Udatha explained that this exploration runs into several complications in practical contexts, partly because not all of the states the agent encounters are safe. To ensure safety at a given distance, the team constrained its RL policy using control barrier functions (CBFs). As a result, the agent disregards unsafe states, improving the system's capacity to learn how to travel within environmental constraints.

CBFs are a group of relatively recent computational techniques created to improve the reliable control of autonomous systems by ensuring that a suitably defined barrier function stays non-negative for all time, keeping the system inside a specified safe set. They can be leveraged directly for a variety of optimization problems, including ramp merging. Although they look good on paper, the optimizations they carry out do not take into account the information a system gathers as it explores. Reinforcement learning methods, per Udatha, can close this gap.
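To give a loose sense of how a barrier function can veto unsafe actions, the sketch below filters an RL agent's proposed acceleration against a simple distance-based constraint. The barrier definition, parameter values, and discrete-time condition are invented for illustration; they are not the probabilistic CBF formulation used in the paper.

```python
# Illustrative CBF-style safety filter (assumed formulation, not the paper's).
# Safe set: keep the gap to the lead vehicle large enough, i.e. h(x) >= 0 with
#   h(x) = gap - D_MIN - TAU * ego_speed
# A discrete-time CBF condition h(x_next) >= (1 - LAM) * h(x) is checked for
# the RL action; if it fails, the filter falls back to hard braking.

D_MIN = 5.0    # minimum allowed gap in meters (assumed)
TAU = 1.0      # time-headway term in seconds (assumed)
LAM = 0.5      # CBF decay rate in (0, 1]
DT = 0.1       # control step in seconds

def barrier(gap, ego_speed):
    return gap - D_MIN - TAU * ego_speed

def safe_action(gap, ego_speed, lead_speed, rl_accel, a_min=-4.0):
    """Return the RL acceleration if it satisfies the CBF condition, else brake."""
    h = barrier(gap, ego_speed)
    for accel in (rl_accel, a_min):            # try the learned action, then hard braking
        next_speed = max(0.0, ego_speed + accel * DT)
        next_gap = gap + (lead_speed - ego_speed) * DT
        if barrier(next_gap, next_speed) >= (1.0 - LAM) * h:
            return accel
    return a_min                                # last resort: maximum braking
```

The design point is simply that the learned policy proposes and the barrier condition disposes: unsafe proposals never reach the actuators.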

The research team discovered that their algorithm could be applied in both online RL settings (learning while interacting with the environment) and offline settings (learning from a fixed dataset of logged interactions). Offline reinforcement learning has become a core approach for applying RL in practical settings, because it allows RL algorithms to be evaluated empirically on how well they can exploit a predefined dataset of interactions and produce real-world effects.
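For context, an offline variant of the earlier Q-learning sketch might look like the snippet below: updates are driven by a fixed buffer of logged transitions rather than by fresh interaction. The data layout and hyperparameters are assumptions for illustration, not the paper's setup.

```python
import random
from collections import defaultdict

# Offline sketch: learn from a static log of transitions
# (state, action, reward, next_state, done) instead of calling env.step().
ALPHA, GAMMA = 0.1, 0.99
ACTIONS = [0, 1]
q_table = defaultdict(lambda: [0.0 for _ in ACTIONS])

def train_offline(logged_transitions, epochs=50, batch_size=64):
    for _ in range(epochs):
        batch = random.sample(
            logged_transitions, min(batch_size, len(logged_transitions))
        )
        for s, a, r, s_next, done in batch:
            # Bootstrapped target uses only the logged next state.
            target = r if done else r + GAMMA * max(q_table[s_next])
            q_table[s][a] += ALPHA * (target - q_table[s][a])
```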


The team used a dataset extracted from the NGSIM database, which includes high-quality traffic data at four locations, two freeway segments (I-80 and US-101) and two arterial segments (Lankershim Boulevard and Peachtree Street), collected between 2005 and 2006. The datasets for each location comprise vehicle trajectory data (the primary data), various location-specific primary and support data (e.g., ortho-rectified images of the study area, Computer-Aided Design (CAD) drawings, signal timings, weather data, detector data), raw video files, and processed video files.
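As a hypothetical example of what working with these trajectories might look like, the pandas sketch below loads a local CSV export of NGSIM US-101 data and pulls out vehicles that pass through the ramp/auxiliary lanes. The file path, column names, and lane IDs are assumptions to check against the official NGSIM documentation; this is not the paper's preprocessing pipeline.

```python
import pandas as pd

# Hypothetical sketch: extract candidate merging trajectories from a local
# CSV export of NGSIM US-101 data. Column names (Vehicle_ID, Frame_ID,
# Lane_ID, v_Vel) follow the commonly published NGSIM schema, and the
# on-ramp/auxiliary lane IDs below are assumptions to verify.
TRAJECTORY_FILE = "us101_trajectories.csv"   # placeholder path
RAMP_LANE_IDS = {6, 7}                        # assumed auxiliary + on-ramp lane IDs

df = pd.read_csv(TRAJECTORY_FILE)

# Vehicles that ever occupy a ramp/auxiliary lane are treated as mergers.
merger_ids = df.loc[df["Lane_ID"].isin(RAMP_LANE_IDS), "Vehicle_ID"].unique()
mergers = df[df["Vehicle_ID"].isin(merger_ids)].sort_values(["Vehicle_ID", "Frame_ID"])

# Per-vehicle summary: when the vehicle appears and its average speed.
summary = mergers.groupby("Vehicle_ID").agg(
    first_frame=("Frame_ID", "min"),
    last_frame=("Frame_ID", "max"),
    mean_speed=("v_Vel", "mean"),
)
print(summary.head())
```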

Because the team now has massive volumes of data for offline RL, training on offline datasets may eventually yield superior models. The researchers also found, according to their metrics, that adding probabilistic CBFs as constraints improves safety by partially accounting for driver uncertainty.

Using the open-source CARLA simulator, created by researchers at Intel Labs and the Computer Vision Center in Barcelona, Udatha and his colleagues put their framework through a number of tests. Their approach produced strong results in these simulations, underscoring its promise for boosting the safety of autonomous cars during ramp merging.
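For readers curious about the simulator side, the snippet below shows a minimal connection to a locally running CARLA server via its Python API and spawns a single ego vehicle. The port, map defaults, and control values are assumptions, and this is only a connection example, not the evaluation harness used in the study.

```python
import carla

# Minimal CARLA client sketch (assumes a CARLA server is already running
# locally on the default port 2000).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)

world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn an ego vehicle at one of the map's predefined spawn points.
vehicle_bp = blueprint_library.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
ego_vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Apply a simple throttle command as a placeholder for an RL policy's output.
ego_vehicle.apply_control(carla.VehicleControl(throttle=0.3, steer=0.0))
```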

The research team now intends to continue the study by training their model to merge an autonomous car with many other vehicles in a scenario with unknown drivers. They also found that there is currently no benchmark for comparing different ramp-merging strategies, so Udatha's team is working on creating one based on NGSIM.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
