Add SimBa Policy: Simplicity Bias for Scaling Up Parameters in DRL #59
Conversation
Hi @araffin! I was curious to see that you have not implemented RSNorm (at least not so far) in this Simba implementation. From the paper (see Figure 12), RSNorm is critical to the performance of Simba. I found this particularly surprising, and I was wondering why simply using BatchNorm to normalize the inputs does not have the same effect (from Figure 12, it is much worse).
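For reference, RSNorm keeps an online (running) estimate of the observation mean and variance and normalizes each incoming observation with it, so it does not depend on batch statistics the way BatchNorm does. A minimal sketch of the idea, with illustrative names (not code from this PR or the paper):

```python
import numpy as np


class RunningObsNorm:
    """Sketch of running-statistics observation normalization (RSNorm-style)."""

    def __init__(self, shape, eps=1e-4):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = eps  # avoids division by zero before the first update

    def update(self, obs_batch):
        # Merge running stats with the batch stats (parallel variance formula)
        batch_mean = obs_batch.mean(axis=0)
        batch_var = obs_batch.var(axis=0)
        batch_count = obs_batch.shape[0]

        delta = batch_mean - self.mean
        total_count = self.count + batch_count
        new_mean = self.mean + delta * batch_count / total_count
        m_a = self.var * self.count
        m_b = batch_var * batch_count
        m2 = m_a + m_b + delta**2 * self.count * batch_count / total_count

        self.mean, self.var, self.count = new_mean, m2 / total_count, total_count

    def normalize(self, obs):
        return (obs - self.mean) / np.sqrt(self.var + 1e-8)
```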
Because I use the following hyperparameters, where observations are already normalized (`normalize={"norm_obs": True}`):

```python
import optax

default_hyperparams = dict(
    n_envs=1,
    n_timesteps=int(5e5),
    policy="SimbaPolicy",
    learning_rate=3e-4,
    # qf_learning_rate=1e-3,
    policy_kwargs={
        "optimizer_class": optax.adamw,
        # "optimizer_kwargs": {"weight_decay": 0.01},
        "net_arch": {"pi": [128], "qf": [256, 256]},
        "n_critics": 2,
    },
    learning_starts=10_000,
    normalize={"norm_obs": True, "norm_reward": False},
)

hyperparams = {}

for env_id in [
    "HalfCheetah-v4",
    "Humanoid-v4",
    "HalfCheetahBulletEnv-v0",
    "Ant-v4",
    "Hopper-v4",
    "Walker2d-v4",
    "Swimmer-v4",
    "AntBulletEnv-v0",
    "HopperBulletEnv-v0",
    "Walker2DBulletEnv-v0",
    "BipedalWalkerHardcore-v3",
    "Pendulum-v1",
]:
    hyperparams[env_id] = default_hyperparams
```

So far in my tests, having a second critic was more important. I suspect that the hyperparameters presented in the paper are overfitted to the DMC hard benchmark.
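For context, the `normalize={"norm_obs": True, "norm_reward": False}` entry in the config above means observations are normalized with running statistics by an environment wrapper, outside the network, which plays the same role as RSNorm inside the SimBa architecture. A rough sketch of the equivalent manual setup, assuming the standard SB3 `VecNormalize` wrapper:

```python
import gymnasium as gym
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Observations are normalized with a running mean/std outside the policy network;
# rewards are left untouched, matching norm_obs=True, norm_reward=False above.
venv = DummyVecEnv([lambda: gym.make("HalfCheetah-v4")])
venv = VecNormalize(venv, norm_obs=True, norm_reward=False)
```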
probably because they would need to use …
That is interesting. I was also surprised that they removed clipped double Q-learning from SAC, and there is no ablation on that in the paper. At the moment, I am using CrossQ + DroQ for a personal project, and I am really curious whether it's worth changing it to Simba. It would be really cool if you could share your findings, thanks! :)
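For context, clipped double Q-learning computes the critic target from the minimum of two target critics; dropping it means bootstrapping from a single critic estimate. A minimal sketch of the difference, with illustrative names (not the code from this PR):

```python
import jax.numpy as jnp


def sac_td_target(rewards, dones, next_q1, next_q2, next_log_prob,
                  gamma=0.99, ent_coef=0.2, clipped=True):
    # Clipped double Q-learning bootstraps from the pessimistic (minimum) estimate
    # of the two target critics to reduce overestimation bias; with clipped=False,
    # a single critic estimate is used directly.
    next_q = jnp.minimum(next_q1, next_q2) if clipped else next_q1
    return rewards + gamma * (1.0 - dones) * (next_q - ent_coef * next_log_prob)
```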
So far, I'm actually quite happy with TQC + Simba (see #60 (comment)), but I need to do a more systematic evaluation soon.
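For anyone wanting to try the same combination, here is a minimal sketch of what TQC + Simba could look like in SBX, assuming SBX's TQC accepts the new `SimbaPolicy` with the same keyword arguments as the hyperparameters above (untested, illustrative only):

```python
import optax
from sbx import TQC  # assuming SBX's TQC accepts the new "SimbaPolicy"

model = TQC(
    "SimbaPolicy",
    "HalfCheetah-v4",
    learning_rate=3e-4,
    policy_kwargs={
        "optimizer_class": optax.adamw,
        "net_arch": {"pi": [128], "qf": [256, 256]},
        "n_critics": 2,
    },
    learning_starts=10_000,
)
model.learn(total_timesteps=500_000)
```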
Thanks a lot for the reply! Do you have any insights on how it compares with CrossQ? Or is it possible to combine CrossQ and Simba?
@araffin Hi, how do you find the performance of TQC + Simba compared to TQC + DroQ with a similar parameter count? Thanks!
Current perf report (early results, only 3 seeds, MuJoCo envs only, pybullet envs coming later): https://wandb.ai/openrlbenchmark/sbx/reports/Simba-SBX-Perf-Report--VmlldzoxMDM5MjQxOQ
I didn't have much time to investigate that.
Description
https://openreview.net/forum?id=jXLiDKsuDo
https://arxiv.org/abs/2410.09754
Perf report: https://wandb.ai/openrlbenchmark/sbx/reports/Simba-SBX-Perf-Report--VmlldzoxMDM5MjQxOQ
Motivation and Context
Types of changes
Checklist:
- `make format` (required)
- `make check-codestyle` and `make lint` (required)
- `make pytest` and `make type` both pass (required)
- `make doc` (required)

Note: You can run most of the checks using `make commit-checks`.
Note: we are using a maximum length of 127 characters per line.