CAWSR (Carla Autoware Scenario Runner) is a scenario execution engine built for testing Autoware in route-based scenarios.
Both CARLA and Autoware require a high-spec computer with a high-end Nvidia GPU. You can run the entire stack on a single machine, or run a distributed setup across multiple machines to spread the workload. Currently, only Linux is supported (this guide was written on Ubuntu 24.04).
Ensure the target machine(s) have the Docker Engine and the Nvidia Container Toolkit installed to enable GPU-accelerated workloads in Docker.
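To verify that containers can access the GPU, you can run a quick check (the CUDA image tag here is only an example):
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi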
Set up access to the GitHub Container Registry (ghcr.io).
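For example, assuming a personal access token with the read:packages scope is stored in a GITHUB_TOKEN environment variable (both names here are placeholders):
echo $GITHUB_TOKEN | docker login ghcr.io -u <your-github-username> --password-stdin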
Once you have been granted access, pull the following images:
docker pull ghcr.io/intelligent-testing-lab/carla:0.9.15
docker pull ghcr.io/intelligent-testing-lab/autoware-scenario-runner:latest
docker pull ghcr.io/intelligent-testing-lab/autoware:latest
Autoware and ROS 2 communicate over DDS (Data Distribution Service). For maximum performance, configure your network settings as follows. If you skip this step, you will see severe performance issues as the default Ubuntu network buffers fill up quickly, especially over lossy networks such as WiFi.
# Increase the maximum receive and send buffer size for network packets, allowing our containers to communicate
sudo sysctl -w net.core.rmem_max=2147483647 # 2 GiB, default is 208 KiB
sudo sysctl -w net.core.wmem_max=2147483647
# IP fragmentation settings
sudo sysctl -w net.ipv4.ipfrag_time=3 # in seconds, default is 30 s
sudo sysctl -w net.ipv4.ipfrag_high_thresh=134217728 # 128 MiB, default is 256 KiB
Save these commands in setup.sh and make it executable with chmod +x setup.sh.
These settings are temporary and will revert on restart.
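If you want these settings to persist across reboots, one option is to write them to a sysctl drop-in file instead (the file name below is just an example):
sudo tee /etc/sysctl.d/99-cawsr-network.conf > /dev/null <<'EOF'
net.core.rmem_max=2147483647
net.core.wmem_max=2147483647
net.ipv4.ipfrag_time=3
net.ipv4.ipfrag_high_thresh=134217728
EOF
sudo sysctl --system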
To allow GUI applications (like Autoware and CARLA) to run through Docker, you must allow xhost connections from the docker group.
xhost +local:docker
CAWSR can be run either locally or distributed. Due to the high-spec requirements, running distributed is recommended if you do not meet the following minimum specs:
- At least 10 GB of VRAM and a modern GPU (RTX 2080 Ti or newer)
- We currently support 20- and 30-series GPUs; compatibility with the 40 series is a work in progress.
- At least 32 GB of RAM
- A modern Intel or AMD CPU with at least 8 cores.
To run locally, set MODE to local in .env:
[Network]
MODE=local
Ensure multicast is enabled for the localhost interface:
sudo ip link show lo
1: lo: <LOOPBACK,UP,LOWER_UP,MULTICAST>
If MULTICAST is not present, you can enable it with sudo ip link set lo multicast on.
Run the entire stack:
docker compose up
When running distributed, we use unicast to enable compatibility with all networks. This requires some extra configuration.
This guide runs CAWSR distributed with the following setup:
- Machine A: CAWSR and CARLA
- Machine B: Autoware
Configure the .env file and ensure it is the same across both machines:
[Network]
MODE=distributed
ROS_DOMAIN_ID=0
# For distributed mode
HOST_IP=127.0.0.1 # IP of Machine A (CAWSR and CARLA)
AUTOWARE_IP=127.0.0.1 # IP of Machine B (Autoware)
The ROS_DOMAIN_ID must match on both machines, otherwise the ROS 2 nodes will not be able to discover each other. Once configured, start CAWSR on Machine A
docker compose up cawsr
and Autoware on Machine B
docker compose up autoware-latest
If you are experiencing problems, you can troubleshoot with the following commands, replacing eth0 and 192.168.1.20 with your network interface and destination IP:
ip addr show # find the network interface used
sudo tcpdump -i eth0 udp and src 192.168.1.20
This will tell you if packets are flowing between Machine A and Machine B. If no packets are flowing, it is likely an issue with your network configuration.
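You can also check DDS discovery at the ROS 2 level from inside the Autoware container. This assumes the ros2 CLI is available on the container's default PATH; the service name matches the compose service started above:
docker compose exec autoware-latest ros2 topic list
If topics published from Machine A appear in the output, discovery is working.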
After completing the prerequisite steps, clone the CAWSR workspace repository. The structure of the workspace is as follows.
scenarios/ -> this folder holds all the scenario configurations
configs/ -> this folder holds all user config files
results/ -> results from runs are stored here
algorithms/ -> holds all custom algorithm scripts
docker-compose.yml
.env
All folders are mounted as Docker volumes into the CAWSR container, so any changes persist between host and container.
CAWSR is designed to be highly configurable and supports easy swapping of config files.
- Create a config file in configs/ based on one of the examples.
- Modify the CAWSR_CONFIG environment variable in .env to point towards the selected file, as shown below. All files use relative paths from the CAWSR root directory.
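For example, to point CAWSR at a hypothetical config file named configs/my_config.yaml, the .env entry would be:
CAWSR_CONFIG=configs/my_config.yaml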
In CAWSR, there are two modes you can configure: algorithm or benchmark. To set the mode, modify the mode variable in config.yaml.
Algorithm mode enables the use of a custom algorithm to modify or optimize the scenario definition after each execution.
algorithm:
  initial_definition: scenarios/examples/example_scenario.json # can be null
  seed: 10
  runs: 50
  path: algorithms/random_search
  args:
    lanelet2: algorithms/resources/Town01.osm
Included in algorithms/basic_algorithm.py is the BasicAlgorithm class, from which all algorithms inherit. The algorithm is run on every iteration of the scenario, modifying the definition based on the result of the previous scenario. At the beginning of every iteration, the method
def _scenario_callback(
    self, scenario_definition: dict, driving_score: float
) -> dict:
is called.
Create a class that inherits BasicAlgorithm and implements the scenario_callback function. The function must accept the current scenario_definition and the driving score, and return a new scenario definition. To use outside resources, such as a lanelet file (see the example config), pass them in via the args config variable. This gets converted into a Python dictionary and passed to the algorithm class when it is initialised. Algorithms are run synchronously, so CAWSR will wait for each callback to complete.
The algorithm will execute runs times, as set in the config.
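Below is a minimal sketch of a custom algorithm, following the _scenario_callback signature shown above (match whatever method name BasicAlgorithm expects in your checkout). The import path, the base-class constructor taking the args dictionary, and the mutated weather field are illustrative assumptions, not part of the real schema.
# algorithms/my_random_search.py -- illustrative sketch only
import copy
import random

from basic_algorithm import BasicAlgorithm  # import path is an assumption


class MyRandomSearch(BasicAlgorithm):
    def __init__(self, args: dict):
        # Assumption: the base class receives the `args` dictionary from the config.
        super().__init__(args)
        self.lanelet_path = args.get("lanelet2")  # e.g. algorithms/resources/Town01.osm
        self.best_score = float("inf")
        self.best_definition = None
        self.rng = random.Random()

    def _scenario_callback(
        self, scenario_definition: dict, driving_score: float
    ) -> dict:
        # Illustrative objective: keep the definition that produced the lowest driving score.
        if driving_score < self.best_score:
            self.best_score = driving_score
            self.best_definition = copy.deepcopy(scenario_definition)

        # Mutate a hypothetical field; the real scenario schema is documented in
        # scenarios/examples/ in the workspace repository.
        new_definition = copy.deepcopy(self.best_definition or scenario_definition)
        weather = new_definition.setdefault("weather", {})
        weather["precipitation"] = self.rng.uniform(0.0, 100.0)
        return new_definition
The path variable in the algorithm config would then point at this script's location, as in the example config above.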
Benchmark mode simply executes all scenario definitions in a given directory. Set scenarios to a path containing .json scenario definitions, and enable or disable random sampling. If enabled, CAWSR will execute each scenario once in a random order.
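As a sketch, a benchmark configuration might look like the following; apart from mode and scenarios, which are named above, the key names are assumptions, so check the example configs in configs/ for the exact schema:
mode: benchmark
benchmark:
  scenarios: scenarios/examples/
  random_sampling: true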
We use a custom implementation of a scenario definition in JSON. We have included a scenario domain model, as well as plenty of examples, in the CAWSR Workspace repository under scenarios/examples/.
Please take a look at our Contribution guidelines.
Core CAWSR logic and Autoware integration.
- Copyright: © 2025 University of Sheffield
- License File: LICENSE Sheffield
Associated Files / Directories:
- srunner/scenarioconfigs/environment_configuration.py
- srunner/scenario_decoder/json_to_xml_files.py
- srunner/scenarioconfigs/carla_config.py
- srunner/autoagents/autoware_agent.py
- srunner/autoagents/autoware_nodes/
- srunner/tools/results_manager.py
- srunner/autoagents/agent_state/
- srunner/objects/
- Dockerfile
- cawsr.py
- srunner/tools/metrics_collector.py
- srunner/tools/CARLA_manager.py
- srunner/tools/log.py
- srunner/scenariomanager/scenario_manager.py (modified)
- srunner/autoagents/autoware_carla_interface/carla_autoware.py (modified)
- srunner/autoagents/autoware_carla_interface/carla_ros.py (modified)
- docker/autoware_msgs.tar.xz/src/autoware_cawsr_msgs
Scenario execution engine for CARLA.
- Copyright: © Intel Corporation / CARLA Team
- License File: LICENSE Carla
Associated Files / Directories (excluding files under CAWSR license):
- srunner/scenarios/
- srunner/examples/
- srunner/scenarioconfigs/route_scenario_configuration.py
- srunner/scenarioconfigs/scenario_configuration.py
- srunner/scenariomanager/actorcontrols
- srunner/scenariomanager/scenarioatomics
- srunner/scenariomanager/lights_sim.py
- srunner/scenariomanager/results_writer.py
- srunner/scenariomanager/timer.py
- srunner/scenariomanager/traffic_events.py
- srunner/scenariomanager/watchdog.py
- srunner/scenariomanager/weather_sim.py
- srunner/scenariomanager/carla_data_provider.py
- srunner/tools/background_manager.py
- srunner/tools/py_trees_port.py
- srunner/tools/route_manipulation.py
- srunner/tools/route_parser.py
- srunner/tools/scenario_parser.py
- srunner/tools/scenario_helper.py
Autoware communication bridge. Apache License 2.0.
- Copyright: © The Autoware Foundation / Tier IV, Inc. / AutoCore / Leo Drive
- License File: LICENSE-Autoware
Associated Files / Directories (excluding files under CAWSR license):
- srunner/autoagents/autoware_carla_bridge/
- docker/autoware_msgs.tar.xz/src
Notices: See NOTICE for the full list of required attributions.
