Status: Open
Labels: question (Further information is requested)

Description
Hi~ I noticed that the TartanAirV2 dataset you published on Hugging Face was sampled from the official dataset. The directory structure of the original dataset is as follows:
TartanGround_Root/
├── AbandonedCable/
│ ├── AbandonedCable_rgb.pcd # Global RGB point cloud
│ ├── AbandonedCable_sem.pcd # Global Semantic point cloud
│ ├── seg_label_map.json # Semantic segmentation label map
│ ├── Data_omni/ # Omnidirectional robot data
│ │ ├── P0000/
│ │ │ ├── image_lcam_front/
│ │ │ ├── depth_lcam_front/
│ │ │ ├── seg_lcam_front/
│ │ │ ├── imu/
│ │ │ ├── lidar/
│ │ │ ├── pose_lcam_front.txt
│ │ │ ├── P0000_metadata.json
│ │ │ ├── image_lcam_left/
│ │ │ └── ...
│ │ └── P00XX/
│ ├── Data_diff/ # Differential drive robot data
│ │ ├── P1000/
│ │ └── P10XX/
│ └── Data_anymal/ # Quadrupedal robot data
│ ├── P2000/
│ └── P20XX/
├── AbandonedFactory/
│ └── (same structure as above)
└── ...
May I ask what your sampling process was? For each scene, did you sample only the data of a single robot type (for example, Data_omni) on a single trajectory (for example, P0000)? And did you then merge the image_..., depth_..., and other folders under that trajectory into the corresponding images, depth, and other folders, as shown in the dataset structure in your code:
Expected root directory structure for the raw TAv2-WB dataset:
.
└── tav2_wb/
├── AbandonedCable/
│ ├── camera_params/
│ │ ├── 00000021_0.npy
│ │ ├── ...
│ ├── depth/
│ │ ├── 00000021_0.exr
│ │ ├── ...
│ ├── images/
│ │ ├── 00000021_0.png
│ │ ├── ...
│ ├── poses/
│ │ ├── 00000021_0.npy
│ │ ├── ...
├── ...
├── DesertGasStation
├── ...
├── PolarSciFi
└── ...
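To make the question concrete, here is a minimal sketch of the kind of conversion I am imagining: copying one trajectory's per-camera folders into the flat per-scene layout above. Everything here is an assumption — the trajectory choice (P0000), the source folder names, and the `_0` camera suffix are guesses at your procedure, not a description of it:

```python
import shutil
from pathlib import Path

def flatten_trajectory(scene_dir: Path, out_root: Path, traj: str = "P0000") -> None:
    """Hypothetical sketch: flatten one TartanGround trajectory into a
    tav2_wb-style per-scene layout. Assumes per-frame files live in
    folders like image_lcam_front/ and are renamed to <frame>_0.<ext>.
    """
    # (source subfolder, target folder) pairs -- assumed, not confirmed
    mapping = {
        "image_lcam_front": "images",
        "depth_lcam_front": "depth",
    }
    traj_dir = scene_dir / "Data_omni" / traj
    for src_name, dst_name in mapping.items():
        dst = out_root / scene_dir.name / dst_name
        dst.mkdir(parents=True, exist_ok=True)
        for f in sorted((traj_dir / src_name).iterdir()):
            # keep the frame index, append an assumed camera suffix "_0"
            shutil.copy2(f, dst / f"{f.stem}_0{f.suffix}")
```

Is this roughly what your preprocessing does, or does the merge span multiple trajectories or robot types per scene?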