Simulator for Hardware Architecture and Real-time Control (SHARC)


In cyber-physical systems (CPSs), computation, communication, and control are tightly coupled. Due to the complexity of these systems, advanced design procedures that account for these tight interconnections are vital to ensure the safe and reliable operation of control algorithms under computational constraints. The Simulator for Hardware Architecture and Real-time Control (SHARC) is a tool to assist in the co-design of control algorithms and the computational hardware on which they run. SHARC simulates a user-specified control algorithm on a user-specified microarchitecture, evaluating how computational constraints affect the performance of the control algorithm and the safety of the physical system.

The Scarab Simulator is used to simulate the computational hardware. Scarab is a microarchitecture simulator that can simulate the execution of a computer program on different processor and memory hardware than the machine the simulation runs on.
This project uses Scarab to simulate the execution of control feedback under various computational configurations (processor speed, cache size, etc.). At each time step, the simulation determines the computation time of the controller, which is used in-the-loop to simulate the trajectory of a closed-loop control system that includes the effects of computational delays.
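The in-the-loop use of simulated computation times can be sketched as follows. This is a minimal illustration under stated assumptions, not SHARC's actual implementation: the function names and the one-step hold strategy for missed deadlines are stand-ins for exposition only.

```python
# Illustrative sketch (NOT SHARC's code): a delay-aware control loop.
# At each step, a simulated computation time determines whether the new
# control value is ready before the next sample; if not, the previous
# control is held and the new value is applied one sample late.
def simulate(x0, u0, n_steps, sample_time, dynamics, controller, delay_model):
    x, u_applied, pending = x0, u0, None
    history = []
    for k in range(n_steps):
        if pending is not None:
            u_applied, pending = pending, None  # delayed update arrives now
        u_new = controller(x)
        delay = delay_model(k)  # e.g., from a microarchitecture simulation
        if delay <= sample_time:
            u_applied = u_new    # computation finished within the period
        else:
            pending = u_new      # deadline missed: apply one sample late
        x = dynamics(x, u_applied, sample_time)
        history.append((k, x, u_applied, delay))
    return history
```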

Table of Contents

  1. Overview
  2. Requirements
  3. Repeatability Evaluation Package
  4. 🚀 Quick Start
  5. Getting Started
  6. Development in Dev Container
  7. SHARC Implementation Details
  8. Configuration Files
  9. Testing
  10. Troubleshooting

Overview

SHARC simulates control feedback loops with computational delays introduced by hardware constraints. It uses the Scarab Simulator to model hardware performance, incorporating parameters such as processor speed, cache size, and memory latency. The tool supports both serial and parallel simulations for efficient modeling.

Key Features:

  • 🖥️ Simulate control feedback loops with computational delays
  • 🏗️ Model hardware performance using Scarab Simulator
  • 🚀 Support for both serial and parallel simulations
  • 🐳 Fully Dockerized for consistent and reproducible environments

Requirements

Before you begin, ensure your system meets the following requirements:

  • Supported Architecture
The Scarab simulator is currently incompatible with ARM architectures.

  • Docker
SHARC operates within a Docker container. Install Docker by following the appropriate instructions for your platform.

  • Git
    Install Git and ensure SSH is set up if you plan to build the Docker image yourself. Follow these steps to generate and configure an SSH key.


Repeatability Evaluation Package

To install the SHARC simulator and repeat the experiments in the submitted manuscript, complete the following steps:

  1. Check Requirements
    Check that your system satisfies SHARC's requirements, listed above.

  2. Clone the Repository
    Clone the SHARC repository:

    git clone git@github.com:pwintz/sharc.git && cd sharc
  3. Run Setup Script
    Execute the setup_sharc.sh script in the root of the sharc directory:

    ./setup_sharc.sh
    

    This script offers you a choice to either

    • pull a SHARC Docker image from Docker Hub or
    • build a Docker image locally.

    After an image is available, the script starts a container and runs a suite of (quick) automated tests. Once the tests finish, you will have an option to enter a temporary interactive SHARC container where you can explore the file system. (Any changes made inside this container are lost when you exit.)

  4. Run Long Tests
    Once you have a SHARC Docker image, you can run various examples and tests via a collection of scripts in the repeatability_evaluation/ folder. To run these scripts, you must be in repeatability_evaluation/ on the host machine (not in a Docker container). The following command executes an integration test of SHARC:

    cd repeatability_evaluation/  # Unless already in folder.
    ./run_long_tests.sh # Can take 5-10 minutes
    

    The run_long_tests.sh script runs SHARC in several relatively quick test scenarios---including serial and parallel execution using the Scarab simulator---over a small number of time steps. This is in contrast to run_short_tests.sh, which only runs quick unit tests of individual units of code. Because run_short_tests.sh was already run automatically during setup, you can skip it.

  5. Run Adaptive Cruise Control (ACC) Example
    For a quick initial example, run the following command:

    cd repeatability_evaluation/
    ./run_acc_example_with_fake_delays.sh
    
    The results of the simulation will appear in repeatability_evaluation/acc_example_experiments/ on the host machine. The fake_delays configuration is useful for testing but does not use the Scarab microarchitecture simulator to determine computation times, resulting in a much faster test. To run the ACC example using the Scarab simulator and generate Figure 5, execute the following command (this can take several hours, depending on your system):
    cd repeatability_evaluation/  # Unless already in folder.
    ./run_acc_example_to_generate_figure_5.sh
    
    After the simulation finishes, the repeatability_evaluation/acc_example_experiments/ folder will contain an image matching Figure 5 in the submitted manuscript.

  6. Run Cart Pole Example
    The Cart Pole example uses nonlinear MPC, which results in long computation times, requiring over 24 hours to complete. To start the simulation, run

    ./run_example_in_container.sh cartpole default.json

    while in the `repeatability_evaluation` folder.

🚀 Quick Start

Get SHARC up and running in two simple steps:

  1. Clone the Repository
    Clone the SHARC repository:

    git clone git@github.com:pwintz/sharc.git && cd sharc
  2. Run the Setup Script
    Execute the setup_sharc.sh script. This script offers you a choice to either pull a SHARC Docker image from Docker Hub or build a Docker image locally. After an image is available, the script starts a container and runs a suite of (quick) automated tests. Once the tests finish, you will have an option to enter a temporary interactive SHARC container where you can explore the file system. (Any changes made inside this container are lost when you exit.)

Getting Started

SHARC is fully Dockerized, so installation only requires installing Docker, getting a Docker image, and starting a container from that image. The Docker container can be run in three ways:

  • Dev-containers — Useful for the development of SHARC projects or SHARC itself. Dev containers allow developers to connect to a Docker container using VS Code and automatically persist changes to source code.
  • Interactive Docker — Useful for manually interacting with the Docker container in environments where dev containers are not supported or configured.
  • Non-interactive Docker — Useful for automated running of simulations.

Obtaining the SHARC Docker Image

To get a SHARC Docker image, you can either pull a pre-built image from Docker Hub or build your own image using the provided Dockerfile. While the pre-built image is generally easier, images are only available for the Linux operating system. If your OS or architecture is not supported, follow the instructions for building an image.

Pre-built Image

For a limited number of OS and host architectures, a Docker image is provided on Docker Hub. To download the latest SHARC Docker image, run

docker pull pwintz/sharc:latest

Troubleshooting: If you get an error that says, Error response from daemon: manifest for pwintz/sharc:latest not found: manifest unknown: manifest unknown, then

  • check for typos
  • check your selected tag exists here
  • try logging in to Docker Hub via the command line:
docker login

Building an Image

For platforms where a Docker image is not available on Docker Hub, it is necessary to build a Docker image. To build an image, you must install Git and ensure SSH is set up for authenticating with GitHub.

  1. Clone this repository.
    1. Navigate to the folder where you want this project located.
    2. Run git clone git@github.com:pwintz/sharc.git.
    3. Run git submodule update --init --recursive.
  2. Change your working directory to the newly created sharc/ folder. Inside you should see a file named Dockerfile.
  3. Inside sharc/, run docker build --tag sharc:my_tag ., where my_tag can be changed to an identifier of your choosing. Note the "." at the end of the docker build command, which tells Docker to use the current directory as the build context.

Warning: Each time you run docker build, it can add another Docker image to your system. For SHARC, each image is about 5 GB, so you can quickly fill up your hard drive! If you have previously built several images, you can clean up unused ones by running docker image prune.

Creating a SHARC Docker Container from an Image

You should now have a SHARC Docker image on your system, named either pwintz/sharc (if you pulled from Docker Hub) or sharc (if you built locally). It will also have a tag such as latest or my_tag. For simplicity, we will assume the image name and tag are sharc:my_tag from here on. To check that your image is in fact available, run

docker images

Now that you have an image, you need to create a Docker container—that is, a virtual environment initialized using the system state contained in the Docker image.

As mentioned above, you can create interactive or non-interactive containers, or open a container using a dev-container.

Interactive Docker Container

When you open an interactive container, your current terminal changes to be inside the container where you can run commands and explore the container's file system. Changes to the container's files will persist throughout the life of the container, but are not automatically saved on the host file system and will be lost if the container is deleted.

To create and start a container in interactive mode from the image sharc:my_tag, run

docker run -it sharc:my_tag

To leave the container, run exit.

Each time you run docker run, it creates a new container. Just like building Docker images, creating many containers will quickly fill up your hard drive. To avoid saving the container after you exit, add --rm before the image name in the docker run command. You can also delete all stopped containers by running docker container prune.

Files on your host machine can be made accessible within a container as a "volume" by adding

-v "<path_on_host>:<path_in_container>"

to the docker run command. Changes made to a volume inside a container are persisted on the host machine after the container is closed and deleted.

Non-interactive Docker Container

To create and start a container in non-interactive mode from the image sharc:my_tag, run

docker run --rm sharc:my_tag <command_to_run>

(without -it). The --rm argument ensures that the container is deleted after execution. The initial working directory of the sharc images is the examples/ folder. To run the ACC example, replace <command_to_run> with bash -c "cd acc_example && sharc --config_filename fake_delays.json", resulting in

docker run --rm sharc:my_tag bash -c "cd acc_example && sharc --config_filename fake_delays.json"

To access the results of the simulation, create a volume for the container and copy the results of the simulation into the volume path.

Development in VS Code with Dev-Containers

  1. Install Docker and VS Code.
  2. Open the sharc repository directory in VS Code. VS Code should prompt you to install recommended extensions, including dev-containers. Accept this suggestion.
  3. Use Dev Containers to build and run the Docker file (via CTRL+SHIFT+P followed by "Dev containers: Build and Open Container"). This will change your VS Code environment to running within the Docker container, where Scarab and libmpc are configured.

Running an Example: Adaptive Cruise Control

As an introductory example, the acc_example folder contains a simulation of a vehicle controlled by an adaptive cruise control (ACC) system.

To run the ACC example, use one of the methods above for creating a SHARC Docker container.

Within the container, navigate to examples/acc_example and run

sharc --config_filename fake_delays.json

The fake_delays.json file is located in examples/acc_example/simulation_configs/, and defines configurations for quickly running the example in the serial and parallel mode without actually executing the microarchitectural simulations with Scarab. Select other configuration files in examples/acc_example/simulation_configs/ to explore the settings available. For configurations that use Scarab, the time to execute the simulation can range from minutes to hours depending on the number of sample times, the number of iterations used by the MPC optimization algorithm, the prediction and control horizons, and whether parallel or serial simulation is used.

The results of the simulation are saved into examples/acc_example/experiments/. Within the appropriate experiment directory, a file named experiment_list_data_incremental.json will be populated incrementally during the simulation, so that you can monitor the progress of the simulation. At the end of the simulation, experiment_list_data_incremental.json is copied to experiment_list_data.json.

A Jupyter notebook make_plots.ipynb is located within acc_example/ for generating plots based on the last experiment.

Creating a SHARC Project

The examples/ folder contains some example SHARC projects that are configured to be simulated using SHARC.

Each SHARC project must have the following structure:

  • The controller and dynamics are defined as described in Tutorial: Implementing Custom Dynamics and Controllers.md.
  • base_config.json: A JSON file that defines the settings. Some settings are required by Scarab, but users can add additional configurations in the base_config.json that are used by their particular project.
  • simulation_configs/: A directory containing default.json and (optionally) other simulation configuration files. The JSON files in simulation_configs/ cannot contain any keys (including nested keys) that are not present in base_config.json. When sharc is run in the project root (e.g., examples/acc_example/), the optional argument --config_filename can be used to specify one of the JSON files in simulation_configs/, such as sharc --config_filename example_configs.json. Values from example_configs.json are patched onto the values from base_config.json, using the values in base_config.json as "defaults" and the values from example_configs.json when present. Some keys are required in config JSON files, but a given project may add additional keys to define model parameters or other options. [TODO: Write a section describing the requirements for the config json files.]
  • chip_configs/: A directory containing PARAMS files used to specify hardware parameters, such as clock speed. In the configuration JSON files, the key-value pair "PARAMS_base_file": "PARAMS.base" would specify to use chip_configs/PARAMS.base as the base for the chip parameters (modifications can be made based on other key-value pairs).
  • controller_delegator.py: A Python module for building the controller executable based on the needs of the particular system. In the ACC example, CMake is used with various modifications applied at compile time based on the config JSON input. Other build systems can also be used.
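The patching rule described above (simulation configs may only override keys already present in base_config.json) can be sketched as a recursive merge with key validation. This is an illustrative reimplementation, not SHARC's actual code.

```python
# Illustrative sketch of the config-patching rule (NOT SHARC's code):
# values from a simulation config override base_config.json, and any key
# (including nested keys) absent from the base config is rejected.
def patch_config(base, patch, path=""):
    result = dict(base)
    for key, value in patch.items():
        if key not in base:
            raise KeyError(f"Unknown config key: {path + key}")
        if isinstance(base[key], dict) and isinstance(value, dict):
            result[key] = patch_config(base[key], value, path + key + ".")
        else:
            result[key] = value
    return result
```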

The results of experiments are placed into the experiments/ folder (the folder will be created if it does not exist).

Development in Dev Container

The context in the dev container (i.e., the folder presented in /dev-workspace in the container) is the root directory of the sharc project. Changes to the files in /dev-workspace are synced to the local host (outside of the Docker container), whereas changes made in the user's home directory are not.

SHARC Implementation Details

Directory Structure

  • resources/: A directory containing files and folders that are included in Docker images, including
    • sharc/: Python sharc package and subpackages (See SHARC Python Package Structure)
    • dynamics/: Python code for dynamics
    • controllers/: C++ Source code for controllers
    • include/: C++ header files
    • tests/: Unit tests
  • Dockerfile: A file that defines the steps for building a Docker image. The Dockerfile is divided into several stages, or "targets", which can be built individually by running docker build --target <target-name> . Targets that you might want to use are listed here:
    • scarab: Configures Scarab (and DynamoRIO) without setting up SHARC or examples.
    • sharc: Sets up SHARC and its dependencies.
    • examples: Sets up several SHARC examples.
    By default, running docker build . will build the last target in the Dockerfile, which is examples.
  • docs/: A directory containing documentation files.
  • examples/: A directory containing several example projects. The structure of each example project is described in "Project Directory Structure".

SHARC Python Package Structure

The following lists the most important components of the sharc Python package that a user may need to know:

  • requirements.txt: List of Pip package requirements
  • __init__.py: The core of the SHARC package. Handles setting up and running simulations.
  • __main__.py: Allows sharc to be called as a script.
  • plant_runner.py: Generates a time series by calling the controller interface and plant dynamics to generate a simulation of the closed-loop system.
  • scarabizor.py: Defines several classes to simplify calling Scarab and reading the statistics generated by Scarab.
  • data_types.py: Defines ComputationData and TimeStepSeries classes. The ComputationData stores a single (simulated) "computation event", i.e., information about when a computation starts and ends, and the computed control value. The TimeStepSeries class is used to store all the data of a simulation, with the values of the time, state, control, computation events, etc., at each time step.
  • controller_delegator_base.py: Defines an abstract BaseControllerExecutableProvider class and a concrete implementation CmakeControllerExecutableProvider that are used to provide the controller executable to SHARC, from a user's project. In particular, a user would implement a BaseControllerExecutableProvider that builds the executable file based on the particular simulation, dynamics, and controller parameters provided by the user.
  • controller_interface.py: Provides a ControllerInterface class that acts as an interface between the simulated controller executable and the SHARC simulation. In "production", the PipesControllerInterface subclass is always used, but in testing other subclasses are used to simplify tests.
  • debug_levels.py: Defines various debugging levels that can be set in the base_config.json of projects.
  • dynamics_base.py: Defines abstract Dynamics and OdeDynamics classes that are subclassed by users to implement their system's dynamics.
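The dynamics_base.py pattern — an abstract base class that users subclass to supply their system's dynamics — can be illustrated generically. The class and method names below (OdeDynamicsSketch, derivative, euler_step) are stand-ins chosen for this example, not SHARC's actual API.

```python
from abc import ABC, abstractmethod

# Generic illustration of the subclass-the-dynamics pattern used by
# dynamics_base.py. The names here are hypothetical, NOT SHARC's API.
class OdeDynamicsSketch(ABC):
    @abstractmethod
    def derivative(self, x, u):
        """Return dx/dt for state x and input u."""

    def euler_step(self, x, u, dt):
        # One explicit-Euler step of the continuous dynamics.
        dx = self.derivative(x, u)
        return [xi + dt * dxi for xi, dxi in zip(x, dx)]

class DoubleIntegrator(OdeDynamicsSketch):
    def derivative(self, x, u):
        position, velocity = x
        return [velocity, u[0]]
```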

Configuration Files

SHARC uses JSON configuration files to customize simulations. Key configurations include:

  • Simulation mode (serial/parallel)
  • Dynamics and controller selection
  • Hardware parameters
  • Debugging levels

The following example configuration file contains all of the values required by SHARC. Additional settings can be included to set values such as system parameters. In the following example, C++-style comments (//...) are used, but these are not permitted in JSON files and must be removed.

{
  // A human-readable description of the experiment.
  "label": "base",
  "skip": false,
  "Simulation Options": {
    // Select serial or parallel simulation.
    "parallel_scarab_simulation": false, 
    // The "in-the-loop_delay_provider" value determines what is 
    // used to determine delays. 
    // This should be set to "onestep" when using the parallel mode
    // and "execution-driven scarab" when using serial mode.
    "in-the-loop_delay_provider": "onestep",
    // Define the maximum number of batches when running in parallel.
    // Useful for ensuring that a simulation runs within a desired amount of time.
    "max_batches": 9999999,
    // The maximum batch size when running in parallel. 
    // The actual batch size is chosen to be the smaller of the following:
    // * max_batch_size
    // * number of CPUs
    // * Number of steps remaining
    "max_batch_size": 9999999
  },
  // Select the module where the dynamics are loaded from.
  "dynamics_module_name": "dynamics.dynamics",
  // Select the name of the dynamics class, which must be located 
  // in the dynamics module.
  "dynamics_class_name": "ACCDynamics",
  // Set the maximum number of time steps to run the simulation.
  "n_time_steps": 6,
  // Initial State
  "x0": [0, 20.0, 20.0],
  // Initial control
  "u0": [0.0, 100.0],
  "only_update_control_at_sample_times": true,
  // If fake delays are enabled, then Scarab is not used to determine 
  // computational delays, instead all computations are assumed to be 
  // fast enough except for those at the time steps listed. The length of 
  // the delay is set to the value in "sample_time_multipliers" multiplied 
  // by the sample time.
  "fake_delays": {
    "enable": false,
    "time_steps":              [ 12,  15],
    "sample_time_multipliers": [1.2, 2.2]
  },
  "==== Debugging Levels ====": {
    "debug_program_flow_level": 1,
    "debug_interfile_communication_level": 2,
    "debug_optimizer_stats_level": 0,
    "debug_dynamics_level":        2,
    "debug_configuration_level":   0,
    "debug_build_level":           0, 
    "debug_shell_calls_level":     0, 
    "debug_batching_level":        0, 
    "debug_scarab_level":          0
  },
  "system_parameters": {
    // Select the controller to run. 
    "controller_type": "ACC_Controller",
    "state_dimension": 3, 
    "input_dimension": 2,
    "exogenous_input_dimension": 2,
    "output_dimension": 1,
    "sample_time": 0.2,
    // Human-readable names for variables. Useful for generating plots.
    "x_names": ["p", "h", "v"],
    "u_names": ["F^a", "F^b"], 
    "y_names": ["v"]
  },  
  // Choose the chip configuration file. 
  "PARAMS_base_file": "PARAMS.base",
  // Select values to patch in the chip configuration file. 
  // Each key included here must occur as a parameter in the PARAMS_base_file.
  // A null value indicates that the value in PARAMS_base_file should be used.
  "PARAMS_patch_values": {
    "chip_cycle_time": null,
    "l1_size":         null,
    "icache_size":     null,
    "dcache_size":     null
  }
}
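Two of the settings above imply simple computations that are worth spelling out. The formulas follow the comments in the example config; the code itself is an illustrative sketch, not SHARC's implementation.

```python
import os

# Illustrative sketches of two rules stated in the config comments above
# (NOT SHARC's code).

def batch_size(max_batch_size, steps_remaining, n_cpus=None):
    # Actual batch size = min(max_batch_size, number of CPUs, steps remaining).
    n_cpus = n_cpus if n_cpus is not None else os.cpu_count()
    return min(max_batch_size, n_cpus, steps_remaining)

def fake_delay(k, sample_time, time_steps, sample_time_multipliers):
    # With fake delays enabled, the delay at a listed time step is
    # multiplier * sample_time; all other computations are treated as fast
    # enough (zero delay here, for illustration).
    if k in time_steps:
        return sample_time_multipliers[time_steps.index(k)] * sample_time
    return 0.0
```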

Testing

SHARC comes with a suite of automated tests to check its correctness. Unit tests (fast tests of small pieces of the software) are located in

<sharc_root>/tests

To run all unit tests, change to the tests/ directory in a SHARC Docker container and execute

./run_all.sh

Update Docker Hub Images

To update a pwintz/sharc Docker image on Docker Hub use the following commands:

docker login
docker build --tag sharc:latest .
docker tag sharc:latest pwintz/sharc:latest
docker push pwintz/sharc:latest

Troubleshooting

Problem: Docker Build Fails

  • Make sure you are connected to the internet.
  • Make sure that SSH is setup in your environment.
  • Check that you are running docker build . in the directory that contains the Dockerfile.
  • If all else fails, try building from scratch, discarding the Docker cache, by running docker build --no-cache .

Problem: Docker Push Fails

  • Log-in using docker login before pushing.
  • Tag the image with the user name as a prefix in the form username/tag so that Docker knows where to direct the pushed image.

Problem: Running a Serial SHARC Simulation Fails

When running serial simulations in Docker, the following error occurs in certain circumstances:

setarch: failed to set personality to x86_64: Operation not permitted

This error occurs because, by default, the Docker container does not allow some operations. To fix the problem, use the --privileged flag when starting the Docker container.

Software Tools used by this project

  • Docker – Creates easily reproducible environments so that we can immediately spin up new virtual machines that run Scarab.
  • Scarab – A microarchitectural simulator of computational hardware (e.g., CPU and memory).
  • DynamoRIO – Optional - Only needed if doing trace-based simulation with Scarab.
  • libmpc – Optional - Used for examples running MPC controllers
