This is a set of tools developed for automating performance measurements of the Picoquic multicast extension (though they should work with any server and client binary):
https://github.com/j0nem/picoquic-multicast
Setup for Picoquic-Multicast:
Setup all tools + the specific picoquic-multicast project, build it, and get example files on Debian/Ubuntu:
# Please have a look at what this script does before executing it
./setup_picoquic_multicast.sh

General setup:
Install only the tools for the measurements:
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y sysstat time ssh
# RHEL/CentOS
sudo yum install -y sysstat time openssh-clients
# Enable sar data collection
sudo systemctl enable sysstat
sudo systemctl start sysstat
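To confirm that sar is actually collecting data, a quick check (three 1-second samples of the network counters):

sar -n DEV 1 3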
For the analyze_results.py script, Python is required; the compare_results.py script additionally needs numpy and matplotlib for plotting:

# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y python3 python3-pip
pip3 install numpy matplotlib
# RHEL/CentOS
sudo yum install -y python3 python3-pip
pip3 install numpy matplotlib
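A quick sanity check that the plotting dependencies import cleanly:

python3 -c "import numpy, matplotlib; print('plotting dependencies OK')"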
On your control machine:

ssh-keygen -t rsa -b 4096
ssh-copy-id user@server-vm
ssh-copy-id user@client1-vm
ssh-copy-id user@client2-vm
ssh-copy-id user@client3-vm

Test passwordless login:

ssh user@server-vm "hostname"
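To check every machine in one go, a small loop over the example hostnames above works (adjust the user and hostnames to your setup):

for host in server-vm client1-vm client2-vm client3-vm; do
    ssh "user@$host" hostname || echo "FAILED: $host"
done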
- Download all scripts to your control machine:
  server_measure.sh, client_measure.sh, orchestrator.sh, run_all.sh, analyze_results.py, compare_results.py, server_analysis_aggregator.py
- Make scripts executable:
  chmod +x *.sh *.py
- Create configuration files (see the examples below)

Example multicast_test.conf:
server_vm: root@your-ip
client_vms:
- root@your-ip
- root@your-ip
- root@your-ip
clients_per_vm: 2
iterations: 3
test_name: multicast_test
server_binary: /path/to/multicast
client_binary: /path/to/multicast
server_args: server 4433 4434 /path/to/cert /path/to/key 24000 /path/to/served-file.mp4 3
client_args: client SERVER_IP 4433 /path/to/client/folder 24000
max_test_duration: 300 # in seconds, set to 0 for no timeout (only manual interruption or natural termination)

Example unicast_test.conf:

server_vm: root@your-ip
client_vms:
- root@your-ip
- root@your-ip
- root@your-ip
clients_per_vm: 1
iterations: 1
test_name: unicast_test
server_binary: /path/to/dgramspl
client_binary: /path/to/dgramspl
server_args: server 4433 /path/to/cert /path/to/key /path/to/served-file.mp4
client_args: client SERVER_IP 4433 /path/to/client/folder
max_test_duration: 300

Note:
- SERVER_IP is automatically replaced with the actual server IP address.
- server_args and client_args can be arbitrary, tailored to the corresponding binary.
- clients_per_vm specifies how many client processes to start on each VM (default: 1). Total clients = number of client VMs × clients_per_vm; e.g., the multicast example above starts 3 VMs × 2 clients = 6 clients in total.
- iterations (optional) specifies how many times to run the test (default: 1). Results will be averaged.
- max_test_duration (optional) is the maximum time in seconds a test may run (default: 0). Set to 0 for no timeout (only manual interruption or natural termination of the server/client processes).
Run the entire distributed test from your control machine:
# Run multicast test (3 iterations)
./orchestrator.sh multicast_test.conf
# Will run 3 times automatically with 30s pause between iterations
# Press Ctrl+C during any iteration to stop
# Run unicast test
./orchestrator.sh unicast_test.conf
# Get averaged results across all iterations of each test scenario
./server_analysis_aggregator.py results
# This will:
# - Average results across all iterations for each scenario
# - Print the results in a readable way to stdout
# Compare results with averaging and plots
./compare_results.py "results/multicast_test_iter*" "results/unicast_test_iter*"
# This will:
# - Average results across all iterations
# - Calculate standard deviations
# - Generate comparison plots (CPU, memory, network)
# - Save plots to results/ directory

The orchestrator will:
- Upload scripts to all VMs
- Run the test multiple times (based on the iterations config)
- For each iteration (sketched below):
  - Start the server
  - Start multiple clients on each client VM
  - Wait for the timeout, for you to press Ctrl+C, or for the clients to finish naturally
  - Stop all processes gracefully
- Collect all results automatically
- Generate analysis reports
- Store results in separate iteration directories
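Roughly, each iteration behaves like the following simplified sketch. This is not the actual orchestrator code; the variable names are illustrative and correspond to the config fields:

# Simplified sketch of one orchestrator iteration (illustrative only)
ssh "$server_vm" "./server_measure.sh $server_binary $test_name $server_args" &
for vm in "${client_vms[@]}"; do
    ssh "$vm" "./client_measure.sh $client_binary $test_name $clients_per_vm $client_args" &
done
wait    # until timeout, Ctrl+C, or natural client termination
scp -r "$server_vm:~/quic_tests/results/${test_name}_*" results/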
Run multiple scenarios:
Use the run_all.sh script to run multiple test scenarios/configurations one after the other:
./run_all.sh <config_folder>
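For example, assuming run_all.sh picks up every .conf file in the given folder:

mkdir -p configs
cp multicast_test.conf unicast_test.conf configs/
./run_all.sh configs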
If you prefer more control, run measurements manually on each VM:

On Server VM:
./server_measure.sh /path/to/binary test_name server 4433 4434 /path/to/cert /path/to/key 24000 /path/to/served-file.mp4 3
# Press Ctrl+C when done

On Each Client VM (with multiple clients):
./client_measure.sh /path/to/binary test_name 3 server_ip 4433 /path/to/client/folder
# This starts 3 client processes on this VM
# Press Ctrl+C to stop all clients on this VM

Collect and Analyze:
# Download results from server
scp -r user@server-vm:~/quic_tests/results/test_name_* ./local_results/
# Analyze
python3 analyze_results.py ./local_results/test_name_*
# Aggregate across iterations
python3 server_analysis_aggregator.py ./local_results/

After running tests, you'll have:
results/
├── multicast_test_iter1_20260116_143022/
│ ├── server/
│ │ ├── server_time.log # Resource usage summary
│ │ ├── pidstat.log # CPU/memory over time
│ │ ├── network_stats.log # Interface statistics
│ │ ├── server_pid # Server process ID
│ │ └── server_stdout.log # Server output
│ ├── client_vm0/
│ │ ├── client_1/ # First client on this VM
│ │ │ ├── stdout.log
│ │ │ └── time.log
│ │ ├── client_2/ # Second client on this VM
│ │ │ ├── stdout.log
│ │ │ └── time.log
│ │ └── test_config.txt
│ ├── client_vm1/
│ │ └── ...
│ ├── server_analysis.txt
│ └── test_summary.txt
├── multicast_test_iter2_20260116_144530/
│ └── ...
├── multicast_test_iter3_20260116_150045/
│ └── ...
├── cpu_memory_comparison.png # Generated plots
└── network_comparison.png
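For example, to skim the per-iteration summaries:

cat results/multicast_test_iter*/test_summary.txt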
CPU Usage:
Captured with pidstat:
- Average CPU % - Overall CPU utilization
- Peak CPU % - Maximum CPU spike
- User Time - CPU time in user mode
- System Time - CPU time in kernel mode
Memory Usage:
Captured with pidstat; values are based on RSS (Resident Set Size), the physical memory used
- Average Memory - Mean memory consumption
- Peak Memory - Maximum memory used
Network:
Captured with sar:
- Average Packet Rate - Packets per second
- Average Data Rate - Bytes per second
- Peak Packet Rate - Packets per second
- Peak Data Rate - Bytes per second
Context Switches:
- Voluntary - Process yielded CPU (I/O wait, etc.)
- Involuntary - Process preempted by scheduler
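For reference, the measurement scripts capture these metrics with invocations along these lines (the exact flags used in server_measure.sh/client_measure.sh may differ):

pidstat -u -r -w -p "$PID" 1 > pidstat.log &   # -u CPU, -r memory/RSS, -w context switches, 1s interval
sar -n DEV 1 > network_stats.log &             # per-interface packet and byte rates, 1s interval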
When comparing multicast vs unicast:
- Positive improvement % = Multicast is better (uses fewer resources)
- Negative improvement % = Unicast is better
Expected multicast advantages:
- Network traffic: Should see significant reduction (50-75%) with multiple clients
- CPU usage: May be slightly higher due to multicast overhead
- Memory: Similar between both versions
Synchronize clocks:

# On all VMs
sudo apt-get install -y ntp
sudo systemctl enable ntp
sudo systemctl start ntp

Pin the CPU frequency governor:

# On all VMs
sudo apt-get install -y cpufrequtils
sudo cpufreq-set -g performance

Test with different client counts to see multicast scaling benefits by adjusting clients_per_vm:
Example configurations:
# Small scale: 3 VMs × 1 client = 3 total clients
clients_per_vm: 1
# Medium scale: 3 VMs × 3 clients = 9 total clients
clients_per_vm: 3
# Large scale: 3 VMs × 5 clients = 15 total clients
clients_per_vm: 5

Or vary the number of client VMs in your config file.
Always record (a quick capture snippet follows the list):
- VM specifications (CPU, RAM, network)
- Network topology
- Any background processes
- OS version and kernel
- QUIC implementation version
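A quick way to capture most of this on each VM (standard Linux tools; adjust the output path as needed):

uname -a  > environment.txt     # OS and kernel version
lscpu    >> environment.txt     # CPU details
free -h  >> environment.txt     # RAM
ip addr  >> environment.txt     # network interfaces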
Check the logs:
cat results/test_name_*/server/server_stdout.log
cat results/test_name_*/server/server_orchestrator.log

# On server, add multicast route
sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
# or, with iproute2: sudo ip route add 224.0.0.0/4 dev eth0
# Verify multicast group membership
netstat -g

You can extend compare_results.py to add custom metrics:
def parse_custom_metric(filepath):
    """Parse one numeric value per line from a log file -- adapt to your format."""
    with open(filepath) as f:
        return [float(line) for line in f if line.strip()]

Watch metrics during test:
# On server VM
watch -n 1 'ps aux | grep server_binary'
# Monitor network in real-time
iftop -i eth0

Modify compare_results.py to output CSV format for spreadsheet analysis.
# 1. Prepare configuration with iterations
nano multicast_test.conf # Set iterations: 5
nano unicast_test.conf # Set iterations: 5
# 2. Run multicast tests (5 iterations)
./orchestrator.sh multicast_test.conf
# Each iteration: wait ~2 minutes, then press Ctrl+C
# 30 seconds between iterations
# 3. Run unicast tests (5 iterations)
./orchestrator.sh unicast_test.conf
# Each iteration: wait ~2 minutes, then press Ctrl+C
# 4. Compare results with statistical analysis
./compare_results.py \
"results/multicast_test_iter*" \
"results/unicast_test_iter*" \
| tee comparison_report.txt
# 5. View generated plots
xdg-open results/cpu_memory_comparison.png
xdg-open results/network_comparison.png
# 6. Examine individual iteration results if needed
cat results/multicast_test_iter1_*/server_analysis.txt
cat results/multicast_test_iter2_*/server_analysis.txt