# α-RACER: Real-Time Algorithm for Game-Theoretic Motion Planning and Control in Autonomous Racing using Near-Potential Function
This repository contains the code (data collection, training, comparison with baselines, and rendering in Unity) for the paper "α-RACER: Real-Time Algorithm for Game-Theoretic Motion Planning and Control in Autonomous Racing using Near-Potential Function".
α-RACER is a novel approach that approximates a Nash equilibrium in real time for continuous competitive games such as autonomous multi-car racing. This is achieved by parameterizing the policy space of the agents in the game with a set of policy parameters. Data is then collected by randomizing these parameters and the joint game state. This offline-collected data is used to learn a potential function that takes the joint state and policy parameters as input and outputs the potential value of the game. The learned potential function is then maximized in real time during the game to approximate Nash-equilibrium policy parameters given the current joint state.
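The offline-learning/online-maximization pipeline above can be sketched in a few lines. This is a conceptual toy, not the repository's actual implementation: the dimensions, the quadratic feature model, and all names are illustrative only.

```python
# Conceptual sketch of the α-RACER idea (NOT the repository's code):
# 1) offline, fit a potential model P(state, params) from randomized data;
# 2) online, maximize the learned P over the ego policy parameters.
import numpy as np

rng = np.random.default_rng(0)

def features(s, theta):
    """Quadratic feature map over the joint state s and policy params theta."""
    z = np.concatenate([s, theta])
    quad = np.outer(z, z)[np.triu_indices(z.size)]
    return np.concatenate([[1.0], z, quad])

def true_potential(s, theta):
    """Stand-in 'ground truth' used only to generate synthetic targets."""
    return -np.sum((theta - s) ** 2)

# --- Offline phase: randomize states/params, then fit by least squares.
S = rng.normal(size=(500, 2))
Theta = rng.uniform(-1.0, 1.0, size=(500, 2))
X = np.stack([features(s, t) for s, t in zip(S, Theta)])
y = np.array([true_potential(s, t) for s, t in zip(S, Theta)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # learned potential weights

# --- Online phase: gradient-ascend the learned potential over theta
# (finite differences for simplicity), projected onto the box [-1, 1]^2.
def maximize_potential(s, steps=200, lr=0.1, eps=1e-4):
    theta = np.zeros(2)
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):
            d = np.zeros(2)
            d[i] = eps
            grad[i] = (features(s, theta + d) @ w
                       - features(s, theta - d) @ w) / (2 * eps)
        theta = np.clip(theta + lr * grad, -1.0, 1.0)
    return theta

s_now = np.array([0.3, -0.5])           # current joint state
theta_star = maximize_potential(s_now)  # approximate equilibrium parameters
```

In the real system the potential model is a learned neural estimator over the joint state of all cars, and the online maximization must run inside the control loop; the sketch only shows the two-phase structure.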
- Installation
- Setting up
- Phase 1: Data collection
- Phase 2: Training
- Phase 3: Racing with baselines
- Visualizing races in Foxglove
- Visualizing races in Unity
- Citing this Work
## Installation

- Clone the repo
  ```bash
  git clone https://github.com/sastry-group/alpha-RACER.git
  ```
- Install dependencies
  ```bash
  pip install -r requirements.txt
  ```
- Install ROS2 Humble for Ubuntu 22.04, or the ROS2 version compatible with your system: Link
- Install ROS2 dependencies
  ```bash
  sudo apt update
  sudo apt install ros-humble-rclpy ros-humble-geometry-msgs ros-humble-nav-msgs ros-humble-visualization-msgs ros-humble-std-msgs ros-humble-ackermann-msgs ros-humble-tf-transformations
  ```
- Colcon build your workspace
  ```bash
  cd ros2_ws/
  colcon build
  ```
- Source your ROS2 workspace. You may add this to your `.bashrc`. Replace 'humble' with your installed ROS2 distro, and adjust the workspace path to your clone location.
  ```bash
  source /opt/ros/humble/setup.bash
  source /home/dvij/multi-car-racing/ros2_ws/install/setup.bash
  ```
- Install car_collect and car_dynamics
  ```bash
  cd car_collect/
  pip install -e .
  cd ../car_dynamics
  pip install -e .
  cd ..
  ```
- Install JAX (replace `cuda11` with whatever CUDA version you have)
  ```bash
  pip install -U "jax[cuda11]"
  ```
- Install the Foxglove bridge to visualize. Replace 'humble' with your ROS distro. Install the Foxglove software from here: Link
  ```bash
  sudo apt install ros-humble-foxglove-bridge
  ```
## Phase 1: Data collection

- Run:
  ```bash
  cd car_ros2/car_ros2
  bash collect_data.bash
  ```
  This will open 10 parallel processes for data collection, saving data to the `data/` folder. Wait about 4 hours for data collection to finish, then merge the data with:
  ```bash
  python3 merge_data.py
  ```
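The merge step simply combines the shards written by the parallel collectors into one dataset. A minimal sketch of the idea (the repo's `merge_data.py` may use a different file format and array keys; the names below are illustrative):

```python
# Illustrative sketch of merging per-process data shards
# (NOT the repo's actual merge_data.py; filenames/keys are made up).
import glob
import os
import tempfile

import numpy as np

def merge_shards(pattern, out_path):
    """Concatenate every .npz shard matching `pattern` into one file."""
    states, params = [], []
    for path in sorted(glob.glob(pattern)):
        shard = np.load(path)
        states.append(shard["states"])
        params.append(shard["params"])
    np.savez(out_path,
             states=np.concatenate(states),
             params=np.concatenate(params))

# Demo on synthetic shards in a temporary directory.
tmp = tempfile.mkdtemp()
for i in range(3):
    np.savez(os.path.join(tmp, f"shard_{i}.npz"),
             states=np.full((5, 4), i, dtype=float),
             params=np.zeros((5, 2)))
merge_shards(os.path.join(tmp, "shard_*.npz"),
             os.path.join(tmp, "merged.npz"))
merged = np.load(os.path.join(tmp, "merged.npz"))
```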
- (Optional) To visualize data collection, open a new terminal and run the Foxglove bridge. You also need to set `VIS=True` in `multi_car_blocking.py`, which is `False` by default.
  ```bash
  ros2 run foxglove_bridge foxglove_bridge
  ```
- (Optional) Open the Foxglove software and open the connection 'localhost'. Import the layout from `foxglove_multi_car.json`. You should now see a multi-car racing environment.
## Phase 2: Training

First, we train the value function estimators from the collected data as a function of the joint state and policy parameters:
```bash
cd car_ros2/car_ros2
python3 train_q_model_multi.py --data_name data_large_multi --mpc [--cuda]
```
Add `--cuda` if you have a GPU. Training takes around 2-3 minutes, and the trained model is saved in the `q_models` folder. This first step is required for smooth initialization of the potential value estimator, for better convergence.

Next, train the value function estimators with the relative utility, using the same settings:
```bash
python3 train_q_model_multi_rel.py --data_name data_large_multi --mpc [--cuda]
```
Now, train the potential value estimator:
```bash
python3 train_p_model_multi.py --data_name data_large_multi --model_name1 model_multi0 --model_name2 model_multi1 --model_name3 model_multi2 --mpc [--cuda] --rel
```
## Phase 3: Racing with baselines

To run races with random policy parameters for the opponent:
```bash
cd car_ros2/car_ros2
python3 multi_car_comp_blocking.py --opp1 mpc --opp2 mpc --mpc --rel
```
To run races against the baseline IBR (Iterated Best Response):
```bash
cd car_ros2/car_ros2
python3 multi_car_comp_blocking.py --opp1 ibr --opp2 ibr --mpc --rel
```
To run races against RL:
```bash
cd car_ros2/car_ros2
python3 multi_car_comp_blocking.py --opp1 rl --opp2 rl --mpc --rel
```
To run races against our baseline trained with less training data:
```bash
cd car_ros2/car_ros2
python3 multi_car_comp_blocking.py --opp1 ours-low_data --opp2 ours-low_data --mpc --rel
```
To run races against our baseline trained with low gamma:
```bash
cd car_ros2/car_ros2
python3 multi_car_comp_blocking.py --opp1 ours-low_p --opp2 ours-low_p --mpc --rel
```
To run races against our baseline trained with high gamma:
```bash
cd car_ros2/car_ros2
python3 multi_car_comp_blocking.py --opp1 ours-high_p --opp2 ours-high_p --mpc --rel
```
These runs also save the races under the `recorded_races/` directory for you to visualize later.
- (Optional) To visualize races, open a new terminal and run the Foxglove bridge
  ```bash
  ros2 run foxglove_bridge foxglove_bridge
  ```
- (Optional) Open the Foxglove software and open the connection 'localhost'. Import the layout from `foxglove_head_to_head.json`. You should now see a head-to-head racing environment.
## Visualizing races in Foxglove

You can rerun races saved in the `recorded_races/` folder by running the following commands. Modify the filename in the header of `rerun_races.py`.
```bash
cd car_ros2/car_ros2
python3 rerun_races.py
```

## Visualizing races in Unity

You can rerun races saved in the `recorded_races/` folder, rendered in Unity, by running the following commands. Modify the filename in the header of `rerun_unity.py`. On Linux, change `UNITY_BUILD_PATH` in the header to `env-v0.x86_64`; by default, it is set to run on macOS.
```bash
cd simulators
chmod +x env-v0.x86_64  # for Linux
```
Then run:
```bash
python3 rerun_unity.py
```

## Citing this Work

```bibtex
@inproceedings{kalaria2025alpharacer,
  title={α-RACER: Real-Time Algorithm for Game-Theoretic Motion Planning and Control in Autonomous Racing using Near-Potential Function},
  author={Kalaria, Dvij and Maheshwari, Chinmay and Sastry, Shankar},
  booktitle={7th Annual Learning for Dynamics \& Control Conference (L4DC) 2025},
  year={2025},
  organization={IEEE}
}
```