⚠️ Notice: This branch is to stay in sync with the main branch of the ROSA repository, and includes expanded, platform-specific instructions for setting up the TurtleSim demo on macOS, Windows, and Ubuntu.⚠️
An AI agent for interacting with ROS1 and ROS2 robotics systems using natural language queries. 🗣️🤖
> [!IMPORTANT]
> 📚 New to ROSA? Check out our Wiki for documentation, guides, and FAQs!
ROSA is your AI-powered assistant for ROS1 and ROS2 systems. Built on the LangChain framework, ROSA helps you interact with robots using natural language, making robotics development more accessible and efficient.
- Python 3.9+
- ROS Noetic or higher
```shell
pip3 install jpl-rosa
```

```python
from rosa import ROSA

llm = get_your_llm_here()
agent = ROSA(ros_version=1, llm=llm)
agent.invoke("Show me a list of topics that have publishers but no subscribers")
```

For detailed information on configuring the LLM, please refer to our Model Configuration Wiki page.
ROSA is designed to be easily adaptable to different robots and environments. You can create custom agents by either inheriting from the ROSA class or creating a new instance with custom parameters.
For detailed information on creating custom agents, adding tools, and customizing prompts, please refer to our Custom Agents Wiki page.
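As a purely hypothetical sketch of the second approach (the `tools` parameter and the LangChain `@tool` decorator are assumptions here; see the Custom Agents Wiki page for the supported API), a custom agent with one extra tool might look like:

```python
# Hypothetical sketch — the `tools` keyword and the tool's signature are
# assumptions, not the confirmed ROSA API; consult the Custom Agents Wiki page.
from langchain.agents import tool
from rosa import ROSA

@tool
def spin_in_place(rotations: int) -> str:
    """Spin the robot in place for the given number of rotations."""
    return f"Spinning {rotations} time(s)."

llm = get_your_llm_here()  # see the Model Configuration Wiki page
agent = ROSA(ros_version=1, llm=llm, tools=[spin_in_place])
agent.invoke("Spin in place twice")
```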
We have included a demo that uses ROSA to control the TurtleSim robot in simulation. To run the demo, you will need to have Docker installed on your machine. 🐳
The following video shows ROSA reasoning about how to draw a 5-point star, then executing the necessary commands to do so.
*(Video: turtle_demo.mov)*
For detailed instructions on setting up and running the TurtleSim demo, please refer to the instructions specific to your system:
### macOS

The following instructions are for macOS Sonoma 14.6.1 on an M2 Max (Anaconda Python). They are based on a few resources found online, including this helpful gist.
1. Set up Python environment:
   - Create a conda environment for Python 3.9 and activate it.
   - Install the `jpl-rosa` Python package:
     ```shell
     python -m pip install jpl-rosa
     ```
2. Install Homebrew and update packages:
   - Install macOS Homebrew if not already installed (Homebrew).
   - Update your brew packages:
     ```shell
     brew update
     ```
3. Install XQuartz for graphical support:
   - Install XQuartz:
     ```shell
     brew install --cask xquartz
     ```
   - Launch XQuartz:
     ```shell
     open -a XQuartz
     ```
   - In the XQuartz menu, go to Settings > Preferences > Security tab and ensure both options are checked.
   - Reboot your machine.
4. Configure X11 connections:
   - Allow X11 connections from anywhere:
     ```shell
     xhost +
     ```
5. Prepare the repository:
   - Clone this repository and navigate to its top-level directory:
     ```shell
     git clone https://github.com/nasa-jpl/rosa.git
     cd rosa
     ```
   - Edit the `demo.sh` script and change the line `export DISPLAY=host.docker.internal:0` to `export DISPLAY=[IP_ADDRESS]:0`, where `IP_ADDRESS` is your machine's local IP address.
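If you are unsure of your machine's local IP address, a small stdlib-only Python sketch (an illustration, not part of this repo) can discover it by opening a UDP socket:

```python
# Discover the local IP address to substitute for [IP_ADDRESS] in demo.sh.
# A UDP connect() sends no packets; it only selects the outgoing interface.
import socket

def local_ip() -> str:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # fallback when no network route is available
    finally:
        s.close()

print(local_ip())
```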
6. Configure the LLM:
   - Edit the `.env` file with at least your `OPENAI_API_KEY`.
   - Edit the file `src/turtle_agent/scripts/llm.py` with these changes:
     - Add the following import statement at the top: `from langchain_openai import ChatOpenAI`.
     - Create an instance of `ChatOpenAI()` and return it at the top of the function `def get_llm()`.
   - For more detailed instructions, or if you would rather use a different model, refer to the Model Configuration guide.
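The edited `llm.py` might look roughly like the following sketch (the model name and the `streaming` parameter are assumptions; adapt them to your checkout and the Model Configuration guide):

```python
# Sketch only — assumes langchain-openai is installed and OPENAI_API_KEY is
# loaded from the .env file; the model name is an example, not a requirement.
from langchain_openai import ChatOpenAI

def get_llm(streaming: bool = False):
    # Return the ChatOpenAI instance at the top of the function so it takes
    # precedence over any other model configured below it.
    return ChatOpenAI(model="gpt-4o", streaming=streaming)
```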
7. Run the demo:
   - Launch the `demo.sh` script and wait for the TurtleSim window to appear.
   - Start the simulation and type `help` or `examples` to get more information about commands you can run:
     ```shell
     root@docker-desktop:/app# start
     ```
### Windows (WSL2)

The following instructions are for a Windows 10 machine using WSL2 Ubuntu 22.04 and VSCode.
1. Set up WSL2 and Docker:
   - Follow this guide to install WSL2 Ubuntu on your machine.
   - Install the Chocolatey package manager.
   - Install Docker Desktop.
2. Install required tools:
   - Install the `jpl-rosa` Python package:
     ```shell
     pip3 install jpl-rosa
     ```
   - Install VcXsrv via Chocolatey (for X11 graphical render support):
     ```shell
     choco install vcxsrv
     ```
   - Launch VcXsrv (XLaunch); uncheck "Native OpenGL" and check "Disable access control."
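For the containerized demo to reach VcXsrv, the `DISPLAY` variable inside WSL2 must point at the Windows host. One common pattern (an assumption about your setup — adjust as needed) extracts the host IP from `/etc/resolv.conf`:

```shell
# In WSL2, the Windows host's IP is typically the nameserver entry that
# WSL writes into /etc/resolv.conf; the :0 suffix selects the first X display.
export DISPLAY=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf):0
echo "$DISPLAY"
```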
3. Prepare the repository:
   - Clone this repository and navigate to its top-level directory:
     ```shell
     git clone https://github.com/nasa-jpl/rosa.git
     cd rosa
     ```
   - Configure VSCode to handle Linux line endings:
     - In the bottom bar, change the end-of-line sequence to LF for this directory.
     - In settings, change "Files: EOL" to use `\n` line endings.
   - Ensure Git is configured to handle Linux line endings:
     ```shell
     git config --global core.autocrlf input
     git rm -rf --cached .
     git reset --hard HEAD
     ```
4. Configure the LLM:
   - Edit the `.env` file with at least your `OPENAI_API_KEY`.
   - Edit the file `src/turtle_agent/scripts/llm.py` with these changes:
     - Add the following import statement at the top: `from langchain_openai import ChatOpenAI`.
     - Create an instance of `ChatOpenAI()` and return it at the top of the function `def get_llm()`.
   - For more detailed instructions, or if you would rather use a different model, refer to the Model Configuration guide.
5. Run the demo:
   - Launch the `demo.sh` script in WSL and wait for the TurtleSim window to appear:
     ```shell
     ./demo.sh
     ```
   - Start the simulation and type `help` or `examples` to get more information about commands you can run:
     ```shell
     root@docker-desktop:/app# start
     ```
### Linux

The following instructions are for a Linux machine running Ubuntu 22.04.
1. Set up Docker:
   - Install Docker and add your user to the Docker group to run Docker commands without root privileges:
     ```shell
     sudo apt-get install docker.io
     sudo usermod -aG docker $USER
     newgrp docker
     ```
   - Verify the installation:
     ```shell
     docker --version
     ```
2. Prepare the repository:
   - Clone this repository and navigate to its top-level directory:
     ```shell
     git clone https://github.com/nasa-jpl/rosa.git
     cd rosa
     ```
   - Edit the `demo.sh` script and change the line `export DISPLAY=host.docker.internal:0` to `export DISPLAY=${DISPLAY:-:0}`.
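The `${DISPLAY:-:0}` form is plain POSIX parameter expansion: it keeps your existing `DISPLAY` when one is set and falls back to `:0` otherwise, as this small shell sketch shows:

```shell
# Fallback behavior of ${VAR:-default}:
unset DISPLAY
echo "${DISPLAY:-:0}"   # DISPLAY is unset, so the fallback :0 is used

DISPLAY=:1
echo "${DISPLAY:-:0}"   # DISPLAY is set, so its value :1 is kept
```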
3. Configure the LLM:
   - Edit the `.env` file with at least your `OPENAI_API_KEY`.
   - Edit the file `src/turtle_agent/scripts/llm.py` with these changes:
     - Add the following import statement at the top: `from langchain_openai import ChatOpenAI`.
     - Create an instance of `ChatOpenAI()` and return it at the top of the function `def get_llm()`.
   - For more detailed instructions, or if you would rather use a different model, refer to the Model Configuration guide.
4. Run the demo:
   - Launch the `demo.sh` script and wait for the TurtleSim window to appear:
     ```shell
     ./demo.sh
     ```
   - Start the simulation and type `help` or `examples` to get more information about commands you can run:
     ```shell
     root@docker-desktop:/app# start
     ```
   - If the previous build was successful, on subsequent runs you may comment out the `docker build` line (around line 60) in `demo.sh`.
ROSA is coming to Nvidia IsaacSim! While you can already use ROSA with robots running in IsaacSim (using the ROS/ROS2 bridge), we are adding direct integration in the form of an IsaacSim extension. This will allow you not only to control your robots in IsaacSim, but to control IsaacSim itself. Check out the video below to learn more.
See our CHANGELOG.md for a history of our changes.
Interested in contributing to our project? Please see our: CONTRIBUTING.md
For guidance on how to interact with our team, please see our code of conduct located at: CODE_OF_CONDUCT.md
For guidance on our governance approach, including decision-making process and our various roles, please see our governance model at: GOVERNANCE.md
See our: LICENSE
Key points of contact are:
Copyright (c) 2024. Jet Propulsion Laboratory. All rights reserved.


