[RAL 2024] Deep Reinforcement Learning-based Large-scale Robot Exploration - Public code and model
Note: This is a new implementation of the ARiADNE ground truth critic variant. You can find our original implementation in the main branch. We reimplemented the code to optimize computing time, RAM/VRAM usage, and compatibility with ROS. The trained model can be tested directly in our ARiADNE ROS planner.
We recommend using conda for package management. Our planner is coded in Python and based on PyTorch. Besides PyTorch, please install the following packages:
pip install scikit-image matplotlib ray tensorboard
We have tested our planner with various versions of these packages, so installing the latest versions should work.
Download this repo and go into the folder:
git clone https://github.com/marmotlab/large-scale-DRL-exploration
cd large-scale-DRL-exploration
Activate your conda environment, if you use one, and run:
python driver.py
The default training code requires around 8 GB of VRAM and 20 GB of RAM.
You can modify the hyperparameters in parameter.py.
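As a rough illustration of what a training-parameter file of this kind usually contains, here is a hypothetical sketch; every name and value below is illustrative and not taken from the actual parameter.py:

```python
# Hypothetical training parameters (illustrative only; see parameter.py
# in the repo for the real names and defaults).
LR = 1e-5           # learning rate for the policy/critic optimizer
BATCH_SIZE = 128    # minibatch size sampled from the episode buffer
NUM_META_AGENT = 8  # number of parallel Ray workers collecting episodes
USE_GPU = True      # set False to train on CPU only (slower, less VRAM)
```

Reducing the batch size or the number of parallel workers is the usual first step if the defaults exceed your VRAM/RAM budget.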
* parameter.py: Training parameters.
* driver.py: Driver of the training program; maintains and updates the global network.
* runner.py: Wrapper of the workers.
* worker.py: Interacts with the environment and collects episode experience.
* model.py: Defines the attention-based network.
* env.py: Autonomous exploration environment.
* node_manager.py: Manages and updates the informative graph for the policy observation.
* ground_truth_node_manager.py: Manages and updates the ground truth informative graph for the critic observation.
* quads: Quad tree for node indexing, provided by Daniel Lindsley.
* sensor.py: Simulates the Lidar sensor model.
* utils: Helper functions.
* /maps: Maps of training environments, provided by Chen et al.
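The driver/runner/worker split can be sketched as a toy loop. This mirrors the roles only conceptually; the real code parallelizes the runners as Ray actors and trains a PyTorch attention network, and every class and variable below is illustrative, not the repo's actual API:

```python
# Schematic of the training architecture: workers collect episodes,
# runners wrap workers, and the driver maintains the global policy.
import random

class Worker:
    """Interacts with an environment and collects one episode of experience."""
    def __init__(self, seed):
        self.rng = random.Random(seed)

    def run_episode(self, policy_weights):
        # Placeholder experience: (observation, action, reward) tuples.
        return [(self.rng.random(), self.rng.randrange(4), self.rng.random())
                for _ in range(5)]

class Runner:
    """Wraps a worker; in the real code this would be a Ray remote actor."""
    def __init__(self, worker_id):
        self.worker = Worker(worker_id)

    def collect(self, policy_weights):
        return self.worker.run_episode(policy_weights)

def driver(num_runners=4, num_iterations=3):
    """Maintains the global policy and updates it from collected episodes."""
    global_weights = {"w": 0.0}
    runners = [Runner(i) for i in range(num_runners)]
    for _ in range(num_iterations):
        batches = [r.collect(global_weights) for r in runners]
        mean_reward = sum(r for b in batches for _, _, r in b) / (5 * num_runners)
        global_weights["w"] += 0.1 * mean_reward  # stand-in for a gradient step
    return global_weights
```

In the actual repo the per-runner episode collection happens in parallel, which is why separate runner and worker layers exist at all: the runner is the process boundary, while the worker holds the environment interaction logic.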
Yuhong Cao
Rui Zhao
Yizhuo Wang
Bairan Xiang
Guillaume Sartoretti