
UENeRFNav

This is the official implementation of the IEEE TAI accepted paper "Embodied Navigation in Unknown Environments with Implicit Scene Memory and Target-aware Memory Retrieval".

Overview

Our method leverages NeRF as a memory structure for storing scene cues and explores its potential in robotic navigation tasks, enabling robots to navigate reliably and efficiently in unknown environments. During navigation, scene features from historical observations are stored online in the NeRF memory. Concurrently, exploiting the implicit characteristics of NeRF, a differentiable memory retrieval mechanism called Dual Space Attention extracts target-relevant scene cues from the NeRF structure, supporting long-horizon optimized behavior. Additionally, because the lack of environmental priors makes implicit NeRF memory noisy in unknown scenes, an uncertainty masking operation is applied during memory retrieval to filter out low signal-to-noise information and encourage exploration.
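As a rough illustration of the retrieval step described above, the following is a minimal PyTorch sketch of attention with an uncertainty mask. The function name, tensor shapes, and threshold value are our own illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch (not the paper's implementation) of uncertainty-masked
# attention retrieval: a target-conditioned query attends over features read
# out of an implicit scene memory, and samples whose predicted uncertainty
# exceeds a threshold are masked out before the softmax.
import torch
import torch.nn.functional as F

def masked_memory_retrieval(query, mem_feats, mem_uncertainty, tau=0.5):
    """
    query:           (B, D)    target-conditioned query vectors
    mem_feats:       (B, N, D) features sampled from the NeRF memory
    mem_uncertainty: (B, N)    per-sample uncertainty estimates
    tau:             scalar    uncertainty threshold (assumed hyperparameter)
    """
    # Scaled dot-product attention logits over the N memory samples.
    logits = torch.einsum("bd,bnd->bn", query, mem_feats) / query.shape[-1] ** 0.5
    # Suppress low signal-to-noise samples before normalizing.
    logits = logits.masked_fill(mem_uncertainty > tau, float("-inf"))
    weights = F.softmax(logits, dim=-1)
    # Weighted read-out of target-relevant scene cues.
    return torch.einsum("bn,bnd->bd", weights, mem_feats)
```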

We use the Docker image provided by iGibson for simulation environment setup. The following steps will guide you through the installation process.

Installation

Step 1: Install iGibson Docker Image

We use iGibson v2.0.6 as our simulation platform. Please follow the official iGibson installation guide to set up the environment:

https://stanfordvl.github.io/iGibson/installation.html

Step 2: Install tiny-cuda-nn

Our project uses tiny-cuda-nn for efficient neural network encoding. Follow the instructions in its repository to install it:

https://github.com/NVlabs/tiny-cuda-nn
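To verify the install, a quick sanity check like the one below should run. The hash-grid configuration shown is the common instant-ngp default, used here only as an example; the project's actual encoding settings may differ.

```python
# Sanity check that the tinycudann PyTorch extension is importable and can
# encode a batch of 3D points with a multiresolution hash grid (requires CUDA).
import torch
import tinycudann as tcnn

encoding = tcnn.Encoding(
    n_input_dims=3,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 2.0,
    },
)
xyz = torch.rand(1024, 3, device="cuda")  # points in [0, 1]^3
features = encoding(xyz)                  # (1024, 16 * 2) encoded features
print(features.shape)
```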

Training

We provide scripts for data collection and conversion, allowing you to train on your own dataset.

```bash
# Collect training data
python tools/sample.py \
    --scenario-dir /path/to/Gibson/navigation_scenarios/waypoints/full+ \
    --scene-list scenes_for_training.txt \
    --output-dir dataset

# Train the model on the collected dataset
python scripts/train.py \
    --dataset /path/to/dataset \
    --gpu-ids 1 0
```

Testing

We provide a pre-trained model checkpoint and a test scenario. You can reproduce the results by running the test script:

```bash
python scripts/test.py \
    --model model_checkpoint/full-128000.pkl \
    --config configs/turtlebot_nav.yaml \
    --test-dataset test_dataset \
    --output-dir test_results \
    --gpu-ids 0 3
```

Cite This Paper! 🤗

If you find this work useful, please consider citing our paper.
