[FEATURE] remote policy with server/client #310
Conversation
Force-pushed b65f8cb to f542ea6.
alexmillane left a comment:
Thank you for putting this together.
Most of my comments are minor syntactic things that we can easily address.
I have one major comment on the design. At the moment, the server-client split point leaves half of gr00t policy running on the client and half (the model itself) running on the server. To me it would make sense to move the split point such that the whole policy is launched on the server. That way, in the remote inference case, we remove all gr00t details from the client side. It also simplifies the server side - the server just launches the same policy (called Gr00tClosedloopPolicy) that is launched in the local case.
This diagram explains a simplified version of the current design and the alternative that I'm proposing.
```bash
#!/bin/bash
set -euo pipefail

# Script to install GR00T policy dependencies
# This script is called from the GR00T server Dockerfile

: "${GROOT_DEPS_GROUP:=base}"
: "${WORKDIR:=/workspace}"

echo "Installing GR00T with dependency group: $GROOT_DEPS_GROUP"

# CUDA environment variables for GR00T installation.
# In the PyTorch base image, CUDA is already configured, so we only
# set variables if CUDA_HOME exists.
if [ -d "/usr/local/cuda" ]; then
    export CUDA_HOME=${CUDA_HOME:-/usr/local/cuda}
    export PATH=${CUDA_HOME}/bin:${PATH}
    export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH:-}
fi

echo "CUDA environment variables:"
echo "CUDA_HOME=${CUDA_HOME:-unset}"
echo "PATH=$PATH"
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-unset}"

# Install system-level media libraries (no sudo in container)
echo "Installing system-level media libraries..."
apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*

# Upgrade packaging tools
echo "Upgrading packaging tools..."
python -m pip install --upgrade setuptools packaging wheel

# Install Isaac-GR00T with the specified dependency group
echo "Installing Isaac-GR00T with dependency group: $GROOT_DEPS_GROUP"
python -m pip install --no-build-isolation --use-pep517 \
    -e "${WORKDIR}/submodules/Isaac-GR00T/[$GROOT_DEPS_GROUP]"

# Install flash-attn (optional, keep same version as Arena Dockerfile)
echo "Installing flash-attn..."
python -m pip install --no-build-isolation --use-pep517 flash-attn==2.7.1.post4 || \
    echo "flash-attn install failed, continue without it"

echo "GR00T dependencies installation completed successfully"
```
Thanks for putting this together.
This script seems to differ only very slightly from install_gr00t_deps.sh. Is the only difference the python command used, i.e. python vs /isaac-sim/python.sh?
Could we combine these two scripts into a single script that takes an argument(s)?
I removed this in favor of install_gr00t_deps.sh and added an argument to install_gr00t_deps.sh.
docker/run_gr00t_server.sh
Outdated
```bash
# REQUIRED: host models directory (must be set by user)
if [[ -z "${MODELS_DIR:-}" ]]; then
    echo "ERROR: MODELS_DIR is not set."
    echo "Please export MODELS_DIR to your host models directory, e.g.:"
    echo "  export MODELS_DIR=/path/to/your/models"
    echo "Then run:"
    echo "  bash ./docker/run_gr00t_server.sh"
    exit 1
fi
```
Suggestion to use the same approach as run_docker.sh: models are by default expected on the host at $HOME/models, but this can be changed with the -d flag to the script. See the default and the optional override.
Please check again. I removed this part and added code similar to run_docker.sh.
```bash
    --timeout_ms "${TIMEOUT_MS}" \
    --policy_type "${POLICY_TYPE}" \
    --policy_config_yaml_path "${POLICY_CONFIG_YAML_PATH}"
```
Not strictly related to this MR, but I'm wondering if we should move all the gr00t related stuff, i.e. all the gr00t docker stuff and the gr00t related tests, out of the core framework and into the isaaclab_arena_gr00t package. Actually, this is almost certainly a good idea. Ideally the core framework would make no mention of gr00t.
It would probably make sense to do this before merging this MR. So we can get everything gr00t related in the right package, before expanding what we can do there.
I have now made the change I suggested above. All gr00t-related code lives in isaaclab_arena_gr00t
```python
    num_steps = policy.get_trajectory_length(policy.get_trajectory_index())

elif args.policy_type == "gr00t_closedloop":
    from pathlib import Path
```
Suggestion to move to the top of the file.
We have some imports not at the top of the file if they require special dependencies that are conditionally not required. But prefer to put imports at the top.
done
```python
remote_group.add_argument(
    "--remote_api_token",
    type=str,
    default=None,
    help="Optional API token for remote policy server.",
)
```
Why is an API token (optionally) required? Suggestion to expand the help string here.
I added a detailed description, please check.
```python
# if options is not None:
#     payload["options"] = options
```
Intentional? options is currently unused.
Removed; options is not used right now. When I was developing this, I thought an extra options field might be needed if some special information had to be transmitted. Since it is not needed at the moment, I have removed it.
```python
if isinstance(resp, dict) and "action" in resp:
    return resp["action"]
return resp
```
Why have this optional unwrapping? Can we assume one form of the response? Either the action dict, or a dict containing the action dict? Is there a reason to accept both?
also removed
```python
if not Path(self.model_path).exists():
    warnings.warn(
        "[GR00TConfig] model_path does not exist: "
        f"{self.model_path}. No model checkpoint was found. "
        "If this is the client side of a remote policy, this warning can be ignored. "
        "However, if you are running a local policy or the server side of a remote policy, "
        "the program will fail to load the model and cannot run correctly."
    )
```
Suggestion to remove this warning altogether. From the message the warning will certainly fire in cases where nothing is wrong. That's pretty confusing for the user, I feel, so I'd suggest that we remove this check here. Perhaps we could move the check to a more appropriate place? For example in the local policy where we know we need a valid model path.
OK, I removed it for now. We can add it back once we have fully separated the policy and the client.
```python
POLICY_REGISTRY: dict[str, str] = {
    # policy_type: "module_path:ClassName"
    "gr00t_closedloop": "isaaclab_arena_gr00t.gr00t_remote_policy:Gr00tRemoteModelPolicy",
}
```
I'm thinking we should generalize and use our registry class used for Assets, Devices, and Retargetters for Policies too.
But I agree with the idea here: let's have a policy registry to allow users to register policies, to free the core code from directly depending on policy code. I think that this will be critical to getting policy (currently gr00t) code out of the core framework.
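For reference, a lazy string-based registry along these lines could be sketched as follows (a minimal sketch; the helper names and the example registration are illustrative, not the actual Arena registry machinery):

```python
import importlib

# Hypothetical registry mapping policy_type -> "module_path:ClassName".
# Storing import paths as strings keeps the core framework free of
# direct imports of policy code; modules load only on first use.
POLICY_REGISTRY: dict[str, str] = {}


def register_policy(policy_type: str, target: str) -> None:
    """Register a policy class by its import path, without importing it."""
    POLICY_REGISTRY[policy_type] = target


def get_policy_cls(policy_type: str) -> type:
    """Resolve a registered policy class, importing its module lazily."""
    if policy_type not in POLICY_REGISTRY:
        raise KeyError(f"Unknown policy_type: {policy_type!r}")
    module_path, class_name = POLICY_REGISTRY[policy_type].split(":")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


# Example: register a stdlib class to show resolution works end to end.
register_policy("ordered", "collections:OrderedDict")
```

The point of the string indirection is that registering a policy costs nothing at import time; the heavy policy module is only imported when the registered type is actually requested.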
I added a registry class in the remote_policy folder, please check.
Thank you for doing that. Unfortunately we've duplicated the work 🥲. I added a new registry as part of moving the gr00t code out of the core package in #316. I would suggest that you use my registry (which utilizes the same machinery we use for registering assets etc.)
Suggestion to rename to remote_policy_config.py
done
Force-pushed f542ea6 to 698c2bf.
Force-pushed e59b3d6 to 111d093.
Hi @alexmillane, I’ve reintroduced the …
Force-pushed 111d093 to a6a44d8.
```python
self.action_dim = len(self.robot_action_joints_config)
if self.task_mode == TaskMode.G1_LOCOMANIPULATION:
    # GR00T outputs are used for WBC inputs dim. So adding WBC commands to the action dim.
    # WBC commands: navigate_command, base_height_command, torso_orientation_rpy_command
```
Can you keep those comments?
done
```python
from isaaclab_arena_gr00t.utils.robot_joints import JointsAbsPosition


class Gr00tRemoteServerSidePolicy(ServerSidePolicy):
```
A lot of GR00T core code from gr00t_closedloop_policy.py is duplicated in this class. Can you modify this class so that if there are any changes to the GR00T core logic (input prep, model forward to get predictions, and postprocessing), only one file needs to be modified?
```python
if env_ids is None:
    env_ids = slice(None)
self.current_action_chunk[env_ids] = 0.0
self.current_action_index[env_ids] = -1
self.env_requires_new_action_chunk[env_ids] = True

if env_ids is None:
    env_ids = torch.arange(self.num_envs, device=self.device, dtype=torch.long)

self.current_action_chunk[env_ids] = 0.0
self.current_action_index[env_ids] = -1
self.env_requires_new_action_chunk[env_ids] = True
```
Can you remove duplication?
done
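For illustration, the two branches quoted in the diff above can collapse into a single code path. This sketch uses NumPy in place of torch and invents a minimal ActionChunkBuffer class just to host the three state fields; only the reset logic mirrors the diff:

```python
import numpy as np


class ActionChunkBuffer:
    """Minimal stand-in for the policy state touched by reset()."""

    def __init__(self, num_envs: int, chunk_len: int, action_dim: int):
        self.current_action_chunk = np.zeros((num_envs, chunk_len, action_dim))
        self.current_action_index = np.full(num_envs, -1)
        self.env_requires_new_action_chunk = np.ones(num_envs, dtype=bool)

    def reset(self, env_ids=None):
        # Normalizing env_ids once removes the duplicated assignments:
        # slice(None) already selects all environments, so the explicit
        # arange in the second branch of the diff is unnecessary.
        if env_ids is None:
            env_ids = slice(None)
        self.current_action_chunk[env_ids] = 0.0
        self.current_action_index[env_ids] = -1
        self.env_requires_new_action_chunk[env_ids] = True
```

The same normalization works with torch tensors, since both slices and index tensors are valid on the left-hand side of an indexed assignment.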
```python
from isaaclab_arena.remote_policy.policy_server import PolicyServer


def get_policy_cls(policy_type: str) -> type["ServerSidePolicy"]:
```
Is this function copied? Is there a way to keep a single version of the function in the repo?
```python
    packed[key_path] = value
    return packed


class ActionChunkingClientPolicy(PolicyBase):
```
I wonder if it makes sense to have ActionChunkingClientPolicy inherit from ClientPolicy, and ClientPolicy inherit from PolicyBase.
ActionChunking is a specific post-processing method used by major VLAs; there are others, like averaging/smoothing methods, too. The simplest post-processing could just be a pass-through, i.e. doing nothing, for single-obs-single-action.
Hi @xyao-nv , Let me check if I understand you correctly: you are suggesting a hierarchy like PolicyBase -> ClientPolicy -> post-processing policies (ActionChunking, avg smoothing, and others?). In this design, ClientPolicy would handle the networking part, and each post-processing policy would focus on how to handle the actions, right?
Could you share some code examples for avg smoothing and other post-processing methods? I’d like to take a look, because in the current workflow the server reads the config and sends it to the client, and my current get_init_info only transmits the action chunk length, which might not be sufficient for other post-processing strategies.
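A minimal sketch of the hierarchy being discussed (PolicyBase -> ClientSidePolicy -> post-processing subclasses). The class and method names here are illustrative, and a plain callable stands in for the ZMQ round-trip to the policy server:

```python
from abc import ABC, abstractmethod


class PolicyBase(ABC):
    @abstractmethod
    def get_action(self, obs):
        ...


class ClientSidePolicy(PolicyBase):
    """Owns the networking; subclasses decide how predictions become actions."""

    def __init__(self, request_fn):
        # request_fn stands in for the serialize-send-receive round-trip.
        self._request = request_fn

    def get_action(self, obs):
        return self.postprocess(self._request(obs))

    def postprocess(self, prediction):
        # Default: pass-through, i.e. single-obs-single-action.
        return prediction


class ActionChunkingClientSidePolicy(ClientSidePolicy):
    """Caches a chunk of actions and replays it one step at a time."""

    def __init__(self, request_fn):
        super().__init__(request_fn)
        self._chunk: list = []

    def get_action(self, obs):
        if not self._chunk:
            # Only hit the server when the cached chunk is exhausted.
            self._chunk = list(self._request(obs))
        return self._chunk.pop(0)
```

Under this split, an averaging/smoothing variant would be another ClientSidePolicy subclass overriding postprocess, and get_init_info could carry whatever per-strategy parameters that subclass needs (chunk length, smoothing window, etc.).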
Thanks for updating your commits! One major concern is code maintenance and future development. Especially for the GR00T core logic (preprocessing observations, getting predictions, and postprocessing predictions), I observed a significant amount of code duplication, partially because of the server-client setup. This is very risky and not ideal at all. When we have two versions of the same logic with slight modifications, we double the surface area for bugs and make future updates twice as risky. To address this, we should aim for a Single Source of Truth (SSOT) by abstracting the common logic and injecting the "slight modifications" as parameters or strategies. Also, I would prefer having a sample test setup as well. As our repo is OSS, every single commit is public, and the community will suffer from poorly-tested features. It also prevents your added feature from being broken by us, mostly me, when upgrading to GN1.6.

Hi @xyao-nv , thank you very much for your review. In my understanding, if this PR is merged, then … Regarding tests, this PR does need to add them. However, the current version is mainly for you to check whether this server/client workflow (the server loading the policy config, the init config being sent from server to client, the task text being sent from client to server, and how observations are transmitted) aligns with IsaacLab-Arena’s logic and future extensibility. If this workflow looks reasonable to you, I will then do a full cleanup of the code, update the documentation, and add unit tests.
Force-pushed a6a44d8 to 6bbadc5.
Regarding client-side post-processing techniques, not all policies return action chunks. Token-based predictions may just output one action at a time.
As you are thinking of retiring …
So I would prefer a 2-phase approach: adding client-server with policy core logic consolidation, tests, and docs in one MR, and retiring our previous implementation in another MR. Does that make sense? Or am I asking too much?
Re this server-client workflow (the server loading the policy config, the init config being sent from server to client, the task text being sent from client to server, and how observations are transmitted): it makes sense to me, especially setting the task description and init config. One more question: it looks to me like your implementation in …
OK, I agree with you. I can split it into two phases.
Actually, the current action post-processing in navila is somewhat similar to action chunking, but instead of generating many actions, it generates a single action that is executed for multiple steps in the Isaac environment, so with my current implementation this can be adjusted on the server side.
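As a sketch of that server-side adjustment, a single predicted action can be repeated to fill an action chunk so it flows through the same chunking protocol (the function name and signature here are illustrative, not the actual implementation):

```python
def expand_to_chunk(action, repeat: int) -> list:
    """Repeat one predicted action so it fills an action chunk.

    A policy that predicts a single action meant to be executed for
    several simulation steps can be served through a chunk-based
    protocol by repeating that action server-side.
    """
    if repeat < 1:
        raise ValueError("repeat must be >= 1")
    return [action] * repeat
```

The client then replays the chunk one step at a time, unaware that the server produced only one distinct prediction.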
Force-pushed efac3db to 66dae62.
Hi @alexmillane @xyao-nv , I’ve redesigned the server/client APIs and code. In this new API, we can decouple the client and server code, and it also supports extending other action post-processing methods. Could you help review it?
Force-pushed 66dae62 to 5a4ce83.
alexmillane left a comment:
Hi @kanghui0204. Great work. This is starting to look really good. I think at this point the design is good. Thanks for your work in improving it.
I have left a bunch of small comments. The slightly larger ones have to do with some of the duplicated logic in the gr00t server-side policy and the action chunking. As @xyao-nv points out, because of some tight deadlines with GTC, we're likely to have to maintain a local gr00t policy alongside the gr00t server. So it would be great if you could think of a way of getting rid of this duplication. What do you think?
```bash
if [[ "$USE_SERVER_ENV" -eq 1 ]]; then
    # Script to install GR00T policy dependencies
    # This script is called from the GR00T server Dockerfile

    # CUDA environment variables for GR00T installation.
    # In the PyTorch base image, CUDA is already configured, so we only
    # set variables if CUDA_HOME exists.
    if [ -d "/usr/local/cuda" ]; then
        export CUDA_HOME=${CUDA_HOME:-/usr/local/cuda}
        export PATH=${CUDA_HOME}/bin:${PATH}
        export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH:-}
    fi

    echo "CUDA environment variables:"
    echo "CUDA_HOME=${CUDA_HOME:-unset}"
    echo "PATH=$PATH"
    echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-unset}"
else
    # Script to install GR00T policy dependencies
    # This script is called from the Dockerfile when INSTALL_GROOT is true

    # Set CUDA environment variables for GR00T installation
    export CUDA_HOME=/usr/local/cuda-12.8
    export PATH=/usr/local/cuda-12.8/bin:${PATH}
    export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:${LD_LIBRARY_PATH:-}
    export TORCH_CUDA_ARCH_LIST=8.0+PTX

    echo "CUDA environment variables set:"
    echo "CUDA_HOME=$CUDA_HOME"
    echo "PATH=$PATH"
    echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
    echo "TORCH_CUDA_ARCH_LIST=$TORCH_CUDA_ARCH_LIST"
    echo "CUDA environment variables set:"
    echo "CUDA_HOME=$CUDA_HOME"
    echo "PATH=$PATH"
    echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
    echo "TORCH_CUDA_ARCH_LIST=$TORCH_CUDA_ARCH_LIST"
fi
```
I think we can get rid of some duplication here. Actually, I'm not sure I see any difference between the branches of the conditional.
For example:
- `CUDA_HOME=/usr/local/cuda-12.8` can be replaced with `CUDA_HOME=/usr/local/cuda` (which is a symlink to `/usr/local/cuda-12.8`) that always exists.
- `export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:${LD_LIBRARY_PATH:-}` can be replaced with `export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH:-}`.
- The `echo` statements are the same between both branches and can be moved outside the conditional statements.
- etc.

After these modifications these two branches of the conditional look the same to me? What was the idea here?
```bash
# Set CUDA environment variables for GR00T installation
export CUDA_HOME=/usr/local/cuda-12.8
export PATH=/usr/local/cuda-12.8/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:${LD_LIBRARY_PATH:-}
export TORCH_CUDA_ARCH_LIST=8.0+PTX

echo "CUDA environment variables set:"
echo "CUDA_HOME=$CUDA_HOME"
echo "PATH=$PATH"
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
echo "TORCH_CUDA_ARCH_LIST=$TORCH_CUDA_ARCH_LIST"
```
Actually, this part of the code comes from the existing GR00T installation process in IsaacLab Arena. It’s indeed a bit different from the standard setup (it uses /usr/local/cuda-12.8), so I didn’t modify it. Because of this special setup, I wasn’t sure if it was required for a specific reason, so I kept a separate branch.
Yea, I don't see the difference between the branches of this if/else condition. The only difference is /usr/local/cuda-12.8 vs /usr/local/cuda. On all systems I've worked on, these two are symlinked to one another. So I assume we can just use /usr/local/cuda.
But in the interests of getting this in, I'm happy to fix this after merging.
```dockerfile
ENV PYTHONPATH=${WORKDIR}

ENTRYPOINT ["python", "-u", "-m", "isaaclab_arena.remote_policy.remote_policy_server_runner"]
```
Cool!
docker/run_gr00t_server.sh
Outdated
```bash
# Other parameters (can also be overridden via environment variables)
HOST="${HOST:-0.0.0.0}"
PORT="${PORT:-5555}"
API_TOKEN="${API_TOKEN:-API_TOKEN_123}"
```
Not sure I understand this default. Shall we make the default no API token checking? Wdyt?
```python
    f"got {self.action_mode.value!r}."
)

proto: ChunkingActionProtocol = self.protocol  # type: ignore
```
Appears unused?
removed
```python
def from_args(args: argparse.Namespace) -> "ActionChunkingClientSidePolicy":
    """Create an ActionChunkingClientSidePolicy from CLI arguments."""
    remote_config = ClientSidePolicy.build_remote_config_from_args(args)
    dummy_config: Any = None  # Replace with real config dataclass if needed.
```
Is this intended? I'm wondering what the purpose of this variable is, and how a user could make use of it?
PolicyBase requires a config to be passed in, but since action chunking doesn’t use it, I pass in None.
```python
from .message_serializer import MessageSerializer
from .remote_policy_config import RemotePolicyConfig
```
Is there a reason to use relative imports here? Perhaps this is because of the only partial install we're doing of isaaclab_arena in the server case?
I would recommend against the partial imports. I suggest we instead install the full (python) package (even though it will likely not pass tests due to missing dependencies on the server).
I added absolute import paths.
I'm wondering about the split of files between isaaclab_arena/policy and isaaclab_arena/remote_policy. There are a couple of files, which seem related to remote inference, that are in this folder.
Could we move them to the remote_policy folder? Wdyt?
Do you mean the two files action_chunking_client.py and client_side_policy.py? I actually see the corresponding ClientSidePolicy and ActionChunkingClientSidePolicy (together with ReplayActionPolicy) as client-side policies, so I placed them in the policy folder.
There is also another concern: both ClientSidePolicy and ActionChunkingClientSidePolicy inherit from PolicyBase. If we move them into the remote_policy folder, the remote policy code would need to import PolicyBase from the policy package, which introduces an unnecessary dependency. What do you think?
I see your point. There are dependencies between these two folders. What do you think about just making remote_policy a subfolder of policy?
Hi @alexmillane, remote_policy does not depend on anything inside the policy folder. In contrast, the `__init__` in the policy folder does `from .replay_action_policy import *`, and replay_action_policy imports IsaacLab code. When the server side starts, it needs to copy remote_policy into its environment. If it instead copies the whole policy folder, it will also bring in other code, which can cause the program to crash. For this reason, I feel that remote_policy should not be placed inside policy. What do you think?
```python
print(f"[PolicyServer] binding on {bind_addr}")
self._socket.bind(bind_addr)
self._api_token = api_token
self._serializer = MessageSerializer
```
nit: storing a class with static methods like this appears a bit strange. It's a nit, but I'd suggest just calling MessageSerializer.serialize(msg), because that clearly communicates that these are static methods.
Done
```python
if not self._validate_token(request):
    self._socket.send(
        self._serializer.to_bytes({"error": "Unauthorized: invalid api_token"})
    )
    continue
```
Were you inspired by this API token design from somewhere else?
What's the intention? Because we transmit the API token unencrypted we don't get any security, so I'm wondering what the intention is.
The API token is not intended to provide real security. The goal is just to avoid accidental connections or mixed-up processes on the same network (i.e. a lightweight guardrail), not to protect against an active attacker, since the token is sent in plaintext.
Ok let's leave it as is.
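The guardrail described above amounts to something like the following sketch (validate_token is an illustrative stand-in for the server's _validate_token method, written as a free function):

```python
from typing import Optional


def validate_token(request: dict, expected_token: Optional[str]) -> bool:
    """Lightweight guardrail against accidental cross-talk, not real security.

    The token travels in plaintext, so this only rejects requests from
    unrelated processes that happen to reach the same port. When no token
    is configured, all requests are accepted.
    """
    if expected_token is None:
        return True
    return request.get("api_token") == expected_token
```

Because the check is a plain string comparison over an unencrypted channel, it filters out mixed-up clients but offers no protection against an active attacker, consistent with the stated intent.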
```python
    )
    continue

endpoint = request.get("endpoint", "get_action")
```
nit: we have a default here. I would have thought that all requests must include an endpoint. I would suggest enforcing that by not having a default (and probably `assert "endpoint" in request`).
Done
Force-pushed 5a4ce83 to a6ab404.
Hi @alexmillane , I’ve updated some of the code based on your comments. Could you please take another look, especially at the action protocol part? There are also a few comments I didn’t change, but I added my reasoning there; could you check whether it seems reasonable? The next step is to merge the Gr00t remote and local code, and then add docs and tests.
Force-pushed a6ab404 to 9194424.
alexmillane left a comment:
I'm fine with the changes!
```bash
if [[ "$USE_SERVER_ENV" -eq 1 ]]; then
    # Script to install GR00T policy dependencies
    # This script is called from the GR00T server Dockerfile

    # CUDA environment variables for GR00T installation.
    # In the PyTorch base image, CUDA is already configured, so we only
    # set variables if CUDA_HOME exists.
    if [ -d "/usr/local/cuda" ]; then
        export CUDA_HOME=${CUDA_HOME:-/usr/local/cuda}
        export PATH=${CUDA_HOME}/bin:${PATH}
        export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH:-}
    fi

    echo "CUDA environment variables:"
    echo "CUDA_HOME=${CUDA_HOME:-unset}"
    echo "PATH=$PATH"
    echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-unset}"
else
    # Script to install GR00T policy dependencies
    # This script is called from the Dockerfile when INSTALL_GROOT is true

    # Set CUDA environment variables for GR00T installation
    export CUDA_HOME=/usr/local/cuda-12.8
    export PATH=/usr/local/cuda-12.8/bin:${PATH}
    export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:${LD_LIBRARY_PATH:-}
    export TORCH_CUDA_ARCH_LIST=8.0+PTX

    echo "CUDA environment variables set:"
    echo "CUDA_HOME=$CUDA_HOME"
    echo "PATH=$PATH"
    echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
    echo "TORCH_CUDA_ARCH_LIST=$TORCH_CUDA_ARCH_LIST"
fi
```
Yea, I don't see the difference between the branches of this if/else condition. The only difference is /usr/local/cuda-12.8 vs /usr/local/cuda. On all systems I've worked on, these two are symlinked to one another. So I assume we can just use /usr/local/cuda.
But in the interests of getting this in, I'm happy to fix this after merging.
```python
if self.requested_action_mode is ActionMode.NONE:
    raise NotImplementedError(
        f"{self.__class__.__name__} must set 'requested_action_mode', "
        f"can't be {ActionMode.NONE}."
    )
if self.protocol_cls is None:
    raise NotImplementedError(
        f"{self.__class__.__name__} must set 'protocol_cls' "
        f"to a concrete ActionProtocol subclass."
    )
```
looks good to me.
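For context, the pattern being approved is a base class that forces every subclass to declare both fields before it can be constructed. A self-contained sketch (the names `ActionMode.ACTION_CHUNK`, `ChunkProtocol`, and `MyPolicy` are hypothetical stand-ins, not identifiers from the PR):

```python
import enum


class ActionMode(enum.Enum):
    NONE = "none"
    ACTION_CHUNK = "action_chunk"  # hypothetical member name


class ActionProtocol:
    """Stand-in for the PR's ActionProtocol base."""


class BasePolicy:
    # Subclasses must override both of these class attributes.
    requested_action_mode = ActionMode.NONE
    protocol_cls = None

    def __init__(self):
        if self.requested_action_mode is ActionMode.NONE:
            raise NotImplementedError(
                f"{self.__class__.__name__} must set 'requested_action_mode', "
                f"can't be {ActionMode.NONE}."
            )
        if self.protocol_cls is None:
            raise NotImplementedError(
                f"{self.__class__.__name__} must set 'protocol_cls' "
                f"to a concrete ActionProtocol subclass."
            )


class ChunkProtocol(ActionProtocol):
    pass


class MyPolicy(BasePolicy):
    requested_action_mode = ActionMode.ACTION_CHUNK
    protocol_cls = ChunkProtocol
```

Constructing `MyPolicy()` succeeds, while instantiating `BasePolicy` directly raises `NotImplementedError`, so misconfigured subclasses fail fast.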
```python
"""Decode tagged structures created in _encode_custom."""
if not isinstance(obj, dict):
    return obj

# numpy array
if "__ndarray_class__" in obj:
    return np.load(io.BytesIO(obj["as_npy"]), allow_pickle=False)

# generic binary blob
if "__blob_class__" in obj:
    return {
        "mime": obj.get("mime"),
        "data": obj.get("as_bytes"),
    }

# other tagged types can be added here
return obj
```
That makes sense to me. Thank you for clarifying
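As a sanity check on the tagging scheme, the encode side (not shown in the diff) presumably mirrors the decode path with `np.save`. A minimal round-trip sketch; the function names here are assumptions, not the PR's actual `_encode_custom`/`_decode_custom` methods:

```python
import io

import numpy as np


def encode_custom(obj):
    """Tag numpy arrays as portable .npy bytes (assumed mirror of the decode side)."""
    if isinstance(obj, np.ndarray):
        buf = io.BytesIO()
        np.save(buf, obj, allow_pickle=False)
        return {"__ndarray_class__": True, "as_npy": buf.getvalue()}
    return obj


def decode_custom(obj):
    """Decode tagged structures, matching the PR's decode path for ndarrays."""
    if isinstance(obj, dict) and "__ndarray_class__" in obj:
        return np.load(io.BytesIO(obj["as_npy"]), allow_pickle=False)
    return obj
```

Using the `.npy` container (rather than raw bytes) preserves dtype and shape across the wire without pickling.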
I see your point. There are dependencies between these two folders. What do you think about just making remote_policy a subfolder of policy?
```python
if not self._validate_token(request):
    self._socket.send(
        self._serializer.to_bytes({"error": "Unauthorized: invalid api_token"})
    )
    continue
```
Ok let's leave it as is.
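The check being discussed can be summarized in isolation. The function below is a standalone sketch of the assumed `_validate_token` semantics (authentication disabled when no token is configured); it is not the PR's actual implementation:

```python
from typing import Optional


def validate_token(request: dict, expected_token: Optional[str]) -> bool:
    # If the server was started without an api_token, accept every request.
    if expected_token is None:
        return True
    # Otherwise the client must echo the exact token back.
    return request.get("api_token") == expected_token
```

On a mismatch (or a missing `api_token` field) the server replies with the unauthorized error shown in the excerpt and skips the request.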
Hi @alexmillane @xyao-nv,

Regarding merging the Gr00t code, I have factored out a

I also added documentation for the

Finally, how should I trigger this PR's CI, and how is your CI scheduled? Right now this PR doesn't add any new tests to the CI, but it still can't trigger the CI.
GR00T N1.6 upgrades are merged. Sorry for holding you off; please rebase on main. #334
Hi Team,
I added a remote server/client mechanism for IsaacLab-Arena that allows the IsaacLab environment and the policy model environment to run separately. Alex and I wrote a design document (design doc) for this feature, which includes our discussions about the design. Below is an overview of this PR.
1. Functionality
Previously, the IsaacLab-Arena pipeline ran entirely in a single process:
IsaacLab env → Env policy class (e.g., Gr00tClosedloopPolicy) → local policy model (e.g., Gr00tPolicy), all in one environment.
This PR adds remote policy/server–client support and decouples the env policy class from the local policy model. The new pipeline becomes:
Client: IsaacLab env → Env policy class (e.g., ActionChunkingClientPolicy)
Server: VLA/VLN policy model (e.g., Gr00tRemoteServerSidePolicy)
The client and server exchange observations and actions via sockets.
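To illustrate the wire pattern (not the PR's exact transport or serializer), here is a minimal length-prefixed JSON exchange over a local socket pair. The field names `endpoint` and `observation` follow the request shape used elsewhere in this PR; everything else is an assumption:

```python
import json
import socket


def send_msg(sock: socket.socket, payload: dict) -> None:
    # Length-prefixed frame: 4-byte big-endian size, then JSON bytes.
    data = json.dumps(payload).encode()
    sock.sendall(len(data).to_bytes(4, "big") + data)


def recv_msg(sock: socket.socket) -> dict:
    n = int.from_bytes(sock.recv(4), "big")
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return json.loads(buf)


# In-process demo: the client sends an observation, the "server" replies
# with an action (here, a trivial doubling of the observation).
client, server = socket.socketpair()
send_msg(client, {"endpoint": "get_action", "observation": [0.1, 0.2]})
request = recv_msg(server)
send_msg(server, {"action": [x * 2 for x in request["observation"]]})
reply = recv_msg(client)
client.close()
server.close()
```

The real implementation layers the numpy-aware serializer and token check on top of this request/response loop.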
The server/client implementation lives in `isaaclab_arena/remote_policy` inside IsaacLab-Arena. Users can copy this directory and import it on the server side directly, without needing to install it as a separate package.

2. How to use it
On the client side, you still use `isaaclab_arena/examples/policy_runner.py`, but you now pass a few extra arguments. Example:

On the server side, there is a Python entry point `isaaclab_arena.remote_policy.remote_policy_server_runner.py`. You specify host/port/etc., select the policy type, and provide the policy config file:

3. Current example
Right now there is a working example for GR00T. On a given machine, the steps are:
Start the server
This uses the following defaults inside the script:
Start the client
Set up the IsaacLab Docker container following the IsaacLab Arena docker documentation (no GR00T installation is needed inside this container).
Inside the container, run:
With this setup, you can obtain results for the GR1 open microwave task using a remote GR00T policy server.
and I can get the result:

```
{'success_rate': 0.58, 'revolute_joint_moved_rate': 0.95, 'num_episodes': 200}
```

4. What remains to be done? Future work
At the moment, documentation for this feature has not been updated. I’d like the team to review this PR first and confirm that the usage and interface look reasonable.
Planned future work includes: