- Clone this repo: `git clone https://github.com/Green0-0/propagate`
- Set up your venv and install vLLM: https://docs.vllm.ai/en/v0.11.2/getting_started/installation/
- Install the dependencies: `cd propagate && pip install -e .`

  Propagate should work wherever vLLM does, including on Windows! Look for a fork of https://github.com/SystemPanic/vllm-windows built against the appropriate CUDA version, and remove `distributed_executor_backend="ray",` from `vllm_backend.py`.
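The Windows tweak above amounts to deleting one keyword argument from the `LLM(...)` call. A minimal sketch of the change, where the exact line contents are an assumption (check your copy of `vllm_backend.py`):

```python
# Hypothetical line from vllm_backend.py (your file will differ):
line = 'llm = LLM(model=model_name, distributed_executor_backend="ray", dtype="auto")'

# Windows builds of vLLM lack Ray support, so drop that argument:
patched = line.replace('distributed_executor_backend="ray", ', '')
print(patched)  # llm = LLM(model=model_name, dtype="auto")
```

With the Ray executor argument removed, vLLM falls back to its default single-process executor, which is what works on Windows.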
- Run `python examples/demo_countdown.py`. You should be prompted to log in to wandb, and then training will begin!
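If you would rather not create a wandb account, wandb's standard offline mode should let the demo run without logging in (assuming the demo does not otherwise require a live wandb connection):

```shell
# Log metrics locally instead of to wandb's servers; sync later with `wandb sync` if desired
WANDB_MODE=offline python examples/demo_countdown.py
```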