forked from nasa-jpl/rosa

ROSA 🤖 is an AI Agent designed to interact with ROS1- and ROS2-based robotics systems using natural language queries. ROSA helps robot developers inspect, diagnose, understand, and operate robots.


⚠️ Notice: This branch is to stay in sync with the main branch of the ROSA repository, and includes expanded, platform-specific instructions for setting up the TurtleSim demo on macOS, Windows, and Ubuntu. ⚠️


The ROS Agent (ROSA) is designed to interact with ROS-based
robotics systems using natural language queries. 🗣️🤖


Important

📚 New to ROSA? Check out our Wiki for documentation, guides and FAQs!

ROSA is your AI-powered assistant for ROS1 and ROS2 systems. Built on the Langchain framework, ROSA helps you interact with robots using natural language, making robotics development more accessible and efficient.

ROSA Demo: NeBula-Spot in JPL's Mars Yard (click for YouTube)

Spot YouTube Thumbnail

🚀 Quick Start

Requirements

  • Python 3.9+
  • ROS Noetic or higher

Installation

pip3 install jpl-rosa

Usage Examples

from rosa import ROSA

llm = get_your_llm_here()  # placeholder: any LangChain-compatible chat model
agent = ROSA(ros_version=1, llm=llm)
agent.invoke("Show me a list of topics that have publishers but no subscribers")

For detailed information on configuring the LLM, please refer to our Model Configuration Wiki page.
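As a minimal sketch of one possible configuration, assuming you use OpenAI through LangChain (the model name and parameters below are illustrative; see the Model Configuration wiki for supported providers):

```python
# Sketch: configuring an OpenAI-backed LLM for ROSA.
# Assumes the jpl-rosa and langchain-openai packages are installed
# and OPENAI_API_KEY is set in your environment.
from langchain_openai import ChatOpenAI
from rosa import ROSA

llm = ChatOpenAI(model="gpt-4o", temperature=0)  # model name is illustrative
agent = ROSA(ros_version=1, llm=llm)
agent.invoke("Show me a list of topics that have publishers but no subscribers")
```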

Adapting ROSA for Your Robot 🔧

ROSA is designed to be easily adaptable to different robots and environments. You can create custom agents by either inheriting from the ROSA class or creating a new instance with custom parameters.

For detailed information on creating custom agents, adding tools, and customizing prompts, please refer to our Custom Agents Wiki page.

TurtleSim Demo 🐢

We have included a demo that uses ROSA to control the TurtleSim robot in simulation. To run the demo, you will need to have Docker installed on your machine. 🐳

The following video shows ROSA reasoning about how to draw a 5-point star, then executing the necessary commands to do so.

turtle_demo.mov

Demo Setup Instructions

For detailed instructions on setting up and running the TurtleSim demo, please refer to the instructions specific to your system:

macOS

The following instructions were tested on macOS Sonoma 14.6.1 with an M2 Max (Anaconda Python). They are based on a few resources found online, including this helpful gist.

  1. Set up Python environment:

    • Create a conda environment for Python 3.9 and activate it.
    • Install the jpl-rosa Python package:
      python -m pip install jpl-rosa
  2. Install Homebrew and update packages:

    • Install Homebrew if it is not already installed (Homebrew).
    • Update your brew packages:
      brew update
  3. Install XQuartz for graphical support:

    • Install XQuartz:
      brew install --cask xquartz
    • Launch XQuartz:
      open -a XQuartz
    • In the XQuartz menu, open Settings (Preferences on older versions), select the Security tab, and ensure both options are checked.
    • Reboot your machine.
  4. Configure X11 connections:

    • Allow X11 connections from any host (note: xhost + disables X access control; you can re-enable it later with xhost -):
      xhost +
  5. Prepare the repository:

    • Clone this repository and navigate to its top-level directory:
      git clone https://github.com/nasa-jpl/rosa.git
      cd rosa
    • Edit the demo.sh script and change the line export DISPLAY=host.docker.internal:0 to export DISPLAY=[IP_ADDRESS]:0, where IP_ADDRESS is your machine's local IP address.
  6. Configure the LLM:

    • Edit the .env file with at least your OPENAI_API_KEY.
    • Edit the file src/turtle_agent/scripts/llm.py with these changes:
      • Add the following import statement at the top: from langchain_openai import ChatOpenAI.
      • Create an instance of ChatOpenAI() and return it at the top of the function def get_llm().
    • For more detailed instructions, or if you would rather use a different model, refer to the Model Configuration guide.
  7. Run the demo:

    • Launch the demo.sh script and wait for the TurtleSim window to appear.
    • Start the simulation and type help or examples to get more information about commands you can run:
      root@docker-desktop:/app# start
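Step 5 needs your machine's local IP address. One way to find it with Python's standard library (a sketch with a hypothetical helper name; ifconfig or System Settings works just as well):

```python
import socket

def local_ip() -> str:
    """Return this machine's LAN IP address.

    Connecting a UDP socket toward a public address makes the OS pick
    the outbound interface; no packets are actually sent.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no route available; fall back to loopback
    finally:
        s.close()

print(local_ip())
```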
Windows (WSL2)

The following sections describe instructions for a Windows 10 machine using WSL2 Ubuntu 22.04 and VSCode.

  1. Set up WSL2 and Docker:

    • Install WSL2 with an Ubuntu 22.04 distribution, then install Docker Desktop and enable its WSL2 integration for that distribution.
  2. Install required tools:

    • Install the jpl-rosa Python package:
      pip3 install jpl-rosa
    • Install VcXsrv via Chocolatey (for X11 graphical render support):
      choco install vcxsrv
    • Launch VcXsrv (XLaunch) and uncheck "Native OpenGL" while checking "Disable access control."
  3. Prepare the repository:

    • Clone this repository and navigate to its top-level directory.
      git clone https://github.com/nasa-jpl/rosa.git
      cd rosa
    • Configure VSCode to handle Linux line endings:
      • At the bottom bar, change the end-of-line sequence to LF for this directory.
      • In settings, change "Files: EOL" to use \n line endings.
    • Ensure Git is configured to handle Linux line endings:
      git config --global core.autocrlf input
      git rm -rf --cached .
      git reset --hard HEAD
  4. Configure the LLM:

    • Edit the .env file with at least your OPENAI_API_KEY.
    • Edit the file src/turtle_agent/scripts/llm.py with these changes:
      • Add the following import statement at the top: from langchain_openai import ChatOpenAI.
      • Create an instance of ChatOpenAI() and return it at the top of the function def get_llm().
    • For more detailed instructions, or if you would rather use a different model, refer to the Model Configuration guide.
  5. Run the demo:

    • Launch the demo.sh script in WSL and wait for the TurtleSim window to appear:
      ./demo.sh
    • Start the simulation and type help or examples to get more information about commands you can run:
      root@docker-desktop:/app# start
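Step 3 matters because shell scripts with Windows (CRLF) line endings fail inside the Linux container. A quick standard-library check for offending files (a sketch with a hypothetical helper name):

```python
from pathlib import Path

def files_with_crlf(root: str, pattern: str = "*.sh") -> list[str]:
    """Return files under root matching pattern that contain CRLF line endings."""
    return [
        str(p)
        for p in Path(root).rglob(pattern)
        if b"\r\n" in p.read_bytes()
    ]
```

Run it from the repository root and convert any reported files to LF before launching the demo.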
Linux

The following sections describe instructions for a Linux machine running Ubuntu 22.04.

  1. Set up Docker:

    • Install Docker and add your user to the Docker group to run Docker commands without root privileges:
      sudo apt-get install docker.io
      sudo usermod -aG docker $USER
      newgrp docker
    • Verify the installation:
      docker --version
  2. Prepare the repository:

    • Clone this repository and navigate to its top-level directory:
      git clone https://github.com/nasa-jpl/rosa.git
      cd rosa
    • Edit the demo.sh script and change the line export DISPLAY=host.docker.internal:0 to export DISPLAY=${DISPLAY:-:0}, which reuses your current display and falls back to :0 if DISPLAY is unset.
  3. Configure the LLM:

    • Edit the .env file with at least your OPENAI_API_KEY.
    • Edit the file src/turtle_agent/scripts/llm.py with these changes:
      • Add the following import statement at the top: from langchain_openai import ChatOpenAI.
      • Create an instance of ChatOpenAI() and return it at the top of the function def get_llm().
    • For more detailed instructions, or if you would rather use a different model, refer to the Model Configuration guide.
  4. Run the demo:

    • Launch the demo.sh script and wait for the TurtleSim window to appear:
      ./demo.sh
    • Start the simulation and type help or examples to get more information about commands you can run:
      root@docker-desktop:/app# start
    • If the previous build succeeded, you may comment out the docker build line (around line 60 of demo.sh) on subsequent runs to skip rebuilding the image.
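If the TurtleSim window never appears, the usual culprit is the X11 display. A small standard-library check (a sketch with a hypothetical helper name) verifies that DISPLAY points at a live local X socket:

```python
import os
from pathlib import Path
from typing import Optional

def x11_ready(display: Optional[str] = None) -> bool:
    """Heuristic check that an X11 display is reachable.

    Local displays like ":0" map to the socket /tmp/.X11-unix/X0.
    TCP displays ("host:0") cannot be checked this way, so for those
    we only verify that the variable is set.
    """
    display = display if display is not None else os.environ.get("DISPLAY", "")
    if not display:
        return False
    if display.startswith(":"):
        num = display.lstrip(":").split(".")[0]
        return Path(f"/tmp/.X11-unix/X{num}").exists()
    return True
```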

IsaacSim Extension (Coming Soon)

ROSA is coming to Nvidia IsaacSim! While you can already use ROSA with robots running in IsaacSim (using the ROS/ROS2 bridge), we are adding direct integration in the form of an IsaacSim extension. This will allow you not only to control your robots in IsaacSim, but also to control IsaacSim itself. Check out the video below to learn more.

ROSA Demo: Nvidia IsaacSim Extension (click for YouTube)

Carter YouTube Thumbnail Play

📘 Learn More

Changelog

See our CHANGELOG.md for a history of our changes.

Contributing

Interested in contributing to our project? Please see our CONTRIBUTING.md.

For guidance on how to interact with our team, please see our code of conduct located at: CODE_OF_CONDUCT.md

For guidance on our governance approach, including decision-making process and our various roles, please see our governance model at: GOVERNANCE.md

License

See our LICENSE.

Support

Key points of contact are:


ROSA: Robot Operating System Agent 🤖
Copyright (c) 2024. Jet Propulsion Laboratory. All rights reserved.
