
3D-Root-Zonation

Implementation of a pipeline for 3D root zonation to measure cell size in epidermal (root hair), epidermal (non-root hair), and cortex cell files.


Overall Pipeline

The overall pipeline for 3D root zonation is shown in the figure below and can be split into four major steps:

  1. Cell wall enhancement (PlantSeg)

  2. Cell file detection/tracking (CoTrackerv3)

  3. Cell wall segmentation (SAM)

  4. Cell size and density estimation (Peak finding & Mosaicing)



Install The Environment

Use Anaconda; the following steps are also listed in the requirements.txt file.

  ##############################################
  ########### Create an Environment ############
  ##############################################

  conda create -n rootZonation -c conda-forge -c pytorch -c nvidia python=3.11 -y
  conda activate rootZonation
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

  ##############################################
  ########## PlantSeg-1.8 Installation #########
  ##############################################

  cd plant-seg-1.8.0/
  pip install -r requirements.txt
  pip install -e .
  plantseg --help
  conda install -c conda-forge vigra
  pip install numba
  pip install pooch
  pip install napari[all]
  pip install nifty
  pip install elf
  pip install pyelftools
  conda install conda-forge::python-elf

  # Run PlantSeg to check if it was setup correctly
  CUDA_VISIBLE_DEVICES=1 plantseg --config plant-seg-config_meru.yml

  ##############################################
  #### COTRACKER and SAM Installation ##########
  ##############################################

  cd CoTrackerv3-SAM
  pip install -e .
  pip install opencv-python

  pip install imageio==2.35.1
  pip install imageio-ffmpeg==0.5.1

  pip install segment_anything
  pip install matplotlib==3.7.5
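
As a quick sanity check (not part of the requirements.txt steps), you can verify that the CUDA build of PyTorch was installed correctly:

```python
# Quick environment sanity check: confirm the CUDA build of PyTorch works.
import torch

print(torch.__version__)          # should report a +cu118 build
print(torch.cuda.is_available())  # should print True on a GPU machine
print(torch.cuda.device_count())  # number of visible GPUs
```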

Preprocessing Steps

In the preprocessing steps, the original volume of longitudinal slices is converted into transverse slices and enhanced for the further steps.


1. Convert tif file to h5 to run PlantSeg

The .tif files need to be converted to the .h5 format in order to run PlantSeg.

  1. Change the properties accordingly in the runConvertForPlantSeg.sh file.

  2. Specify the input path in this config: INPUT_PATH="__________", where the raw images are located in .tif format.

  3. Specify the output path in this config: OUTPUT_PATH="___________", where the converted .h5 file will be saved.

CUDA_VISIBLE_DEVICES="$GPU_ID" python convert_tif_to_h5.py \
    --input_path  "$INPUT_PATH" \
    --output_path "$OUTPUT_PATH"
  4. Run the runConvertForPlantSeg.sh file (a sketch of the conversion itself is given below).
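
The conversion is handled by convert_tif_to_h5.py. For reference, a minimal sketch of the idea, assuming tifffile and h5py and PlantSeg's default raw dataset key (the repo's script may differ in details):

```python
# Minimal sketch of the .tif -> .h5 conversion; the repo's
# convert_tif_to_h5.py may differ in details such as the dataset key.
import h5py
import tifffile

volume = tifffile.imread("input_stack.tif")  # (Z, Y, X) raw volume

with h5py.File("output_stack.h5", "w") as f:
    # "raw" is PlantSeg's default H5 dataset key for input volumes
    f.create_dataset("raw", data=volume, compression="gzip")
```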

2. Running PlantSeg

PlantSeg version 1.8.0 is used to get cell boundary predictions.

  1. Change the properties accordingly in the plant-seg-config_meru.yml file.

  2. Specify the input path in this config: path: __________, where the raw images are located in .h5 format.

  3. Specify the model name in this config: model_name: generic_confocal_3D_unet; this model is used for cell boundary predictions.

  4. Run

CUDA_VISIBLE_DEVICES=1 plantseg --config plant-seg-config_meru.yml
  5. After a successful run, a folder named generic_confocal_3D_unet will be created in the path specified in step 2, containing the PlantSeg predictions (a sketch for inspecting the output is given below).
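
The prediction file can be inspected with h5py. A minimal sketch (the dataset key may vary by PlantSeg version, so list the keys rather than assuming a particular name):

```python
# Inspect the PlantSeg prediction file; dataset keys may vary by
# version, so list them rather than assuming a particular name.
import h5py

with h5py.File("Oct_11_15C_Col_rt1-2 pwr 7_predictions.h5", "r") as f:
    print(list(f.keys()))              # e.g. ['predictions']
    pred = f[list(f.keys())[0]][...]   # boundary probability volume
    print(pred.shape, pred.dtype)
```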

3. Initial Steps to Get Transverse Slices and Enhance Them From the PlantSeg Outcome

Transverse slices are extracted from the PlantSeg result, blobs are removed, and each slice is enhanced to prepare it for the CoTrackerv3 and SAM steps (a sketch of the blob-removal step is given after the run script below).

These steps will be run for each STACK separately.

  1. Change the properties accordingly in the runInitialStepsWithLogs.sh file.

  2. Change the parameters in the CONFIGURATION part according to your needs (the example is for STACK-2):

####### CONFIGURATION: change these only! #######
GPU_ID=1

# common parameters
STACK_NUMBER="2"

BASE_DIR="/usr/mvl5/Images2/BIO2/Baskin_root-confocal/umass_data/Epidermal images/15C/Columbia/Round 1/Oct_11_15C_Col_rd1_rt1/pwr 7/h5"

RAW_DATASET="${BASE_DIR}/Oct_11_15C_Col_rt1-${STACK_NUMBER} pwr 7.h5"
PRED_DATASET="${BASE_DIR}/generic_confocal_3D_unet/Oct_11_15C_Col_rt1-${STACK_NUMBER} pwr 7_predictions.h5"

# output path
OUTPUT_BASE="/usr/mvl5/Images2/BIO2/Baskin_root-confocal/deniz/vis/02-15C_Columbia_Round1_Root1_pw7_New_Pipeline_TT"

##### getRootTransverseSlices.py params #####
# Remove small blobs below the given threshold
MIN_BLOB_AREA=5000
# Size of the disk used for closing morphology
DISK_SIZE=40
# Ratio to save slices in percentage
RATIO_TO_SAVE_SLICE=60  # FOR PS-1, PS-2, PS-3, PS-8, PS-9 = 60, PS-4 = 70, PS-5, PS-6, PS-7, PS-10, PS-11 = 80
# 1 to crop slice and 0 to ignore cropping
CROP_SLICE=0
# crop height to crop slice from center
CROP_HEIGHT=750
# 1 to save vis and 0 to ignore visuals
SLICE_SAVE_VIS=1

##### getEnhancedSlice.py params #####
# The limit used when removing noise from the slice
LIMIT=5
###############################################
  3. Run the runInitialStepsWithLogs.sh file.

  4. It will output the results of each step in sequential order:

# 1) Get Root Transverse Slices
run_step "1) Get Root Transverse Slices" \
  env CUDA_VISIBLE_DEVICES="$GPU_ID" python getRootTransverseSlices.py \
    --raw_input_path  "$RAW_DATASET" \
    --pred_input_path  "$PRED_DATASET" \
    --output_path "$OUTPUT_BASE" \
    --stack_number "$STACK_NUMBER" \
    --min_blob_area $MIN_BLOB_AREA \
    --disk_size $DISK_SIZE \
    --ratio_to_save_slice $RATIO_TO_SAVE_SLICE \
    --crop_slice $CROP_SLICE \
    --crop_height $CROP_HEIGHT \
    --save_vis $SLICE_SAVE_VIS

# 2) Get Enhanced Slice
run_step "2) Get Enhanced Slice" \
  env CUDA_VISIBLE_DEVICES="$GPU_ID" python getEnhancedSlice.py \
    --input_path  "$OUTPUT_BASE" \
    --output_path "$OUTPUT_BASE" \
    --stack_number "$STACK_NUMBER" \
    --limit $LIMIT

echo "✅ All steps completed."

4. Manually Define The File Center Points In a Stack

In order to track the file center points in a stack, the initial center points must be defined manually.

Here is an example demo video showing how to do this using ImageJ.

Defining File Center Points Demo Video

Steps to define center points

  1. Open ImageJ and load the first image from the enhanced transverse slice results obtained in step 3.

  2. Define the center points for the files and save them as a .csv file with the following name structure: Stack-2-file-cortex-center-points.csv (this example is for Stack 2 and file type Cortex; a sketch for loading this file is given after the list).

  3. Create a folder named Manual-Initial-Center-Points in the same location provided as OUTPUT_BASE in step 3.

  4. Copy the .csv file to that folder.
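
A minimal sketch for loading this CSV, assuming the ImageJ export contains X and Y columns (the exact layout depends on how the points were exported):

```python
# Load the manually defined center points; assumes the ImageJ export
# has "X" and "Y" columns (adjust to your export format).
import pandas as pd

csv_path = "Manual-Initial-Center-Points/Stack-2-file-cortex-center-points.csv"
points = pd.read_csv(csv_path)
centers = points[["X", "Y"]].to_numpy()  # one (x, y) row per file center
print(centers.shape)
```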


5. Running CoTracker3

This section describes the workflow for the file analysis part of the pipeline for 3D confocal root file cell boundary analysis. The method employs CoTracker3 to track center points within each file.

These steps will be run for each STACK separately.

Pre-trained weights of CoTracker3

  1. Download the pre-trained weights of CoTracker3 into the CoTrackerv3-SAM/checkpoints/ folder.

  2. Follow the readme in CoTrackerv3-SAM/checkpoints folder to download the model.

How to run CoTracker3

  1. Change the properties accordingly in the runCoTracker3WithLogs.sh file.

  2. Change the parameters in the CONFIGURATION part according to your needs (the example is for STACK-2):

####### CONFIGURATION: change these only! #######
GPU_ID=1

# Stack Number
STACK_NUMBER="2" 

# base paths
# type of the file, can be cortex, hair or no-hair
FILE_TYPE="Cortex"
lower_file_type=$(printf '%s' "$FILE_TYPE" | tr '[:upper:]' '[:lower:]')
INPUT_BASE="/usr/mvl5/Images2/BIO2/Baskin_root-confocal/deniz/vis/02-15C_Columbia_Round1_Root1_pw7_New_Pipeline_TT"
CSV_FILE="${INPUT_BASE}/Manual-Initial-Center-Points/Stack-${STACK_NUMBER}-file-${lower_file_type}-center-points.csv"
OUTPUT_PATH="${INPUT_BASE}/${FILE_TYPE}/"

# common parameters
STACK_NAME="Enh-${STACK_NUMBER}"
# number of frames to ignore, usually used for stack-1
IGNORE_FRAMES=0  # For stack-1 ignore_frames = 350
# 1 to save vis and 0 to ignore visuals
SAVE_VIS=1
# 1 to traverse the slices in reverse order (from last to first frame,
# useful for epidermal files in the initial stacks) and 0 to traverse
# from first to last frame
REVERSE=0
###############################################
  3. Run the runCoTracker3WithLogs.sh file.

  4. It will output the results of the CoTracker3 step:

run_step "3) CoTrackerv3" \
  env CUDA_VISIBLE_DEVICES="$GPU_ID" python rootCoTracker.py \
      --input_path  "$INPUT_BASE/TransverseSlice-Enh" \
      --csv_path  "$CSV_FILE" \
      --output_path "$OUTPUT_PATH" \
      --stack_name  "$STACK_NAME" \
      --number_of_ignore_frames $IGNORE_FRAMES \
      --save_vis    $SAVE_VIS \
      --reverse $REVERSE \
      --backward_tracking \
      --offline

echo "✅ Step 3 completed."

6. Running SAM and Getting Statistics

After running CoTracker3 to track the center points for each file in a stack, the obtained center points are provided to SAM as prompts to get the file mask.

These steps will be run for each STACK and FILE separately.
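
A minimal sketch of prompting SAM with a tracked center point (rootSAM.py wraps this with the pipeline's own I/O and options; the checkpoint file name below is the standard SAM ViT-H release and is an assumption here):

```python
# Illustrative sketch of a point-prompted SAM mask; rootSAM.py adds the
# pipeline's I/O and options. The checkpoint name is an assumption.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](
    checkpoint="CoTrackerv3-SAM/sam_weights/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

rgb_slice = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder slice
cx, cy = 256.0, 256.0                                # tracked center point

predictor.set_image(rgb_slice)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[cx, cy]]),  # one positive point prompt
    point_labels=np.array([1]),         # 1 = foreground
    multimask_output=False,
)
file_mask = masks[0]                    # boolean mask of the cell file
```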


Pre-trained weights of SAM

  1. Download the pre-trained weights of SAM into the CoTrackerv3-SAM/sam_weights/ folder.

  2. Follow the readme in CoTrackerv3-SAM/sam_weights folder to download the model.


Pre-trained weights of MicroSAM

  1. Download the pre-trained weights of MicroSAM into the CoTrackerv3-SAM/micro_sam_weights/ folder.

  2. Follow the readme in CoTrackerv3-SAM/micro_sam_weights folder to download the model.

  3. Read this repo to understand MicroSAM:

https://github.com/computational-cell-analytics/micro-sam

How to run SAM or MicroSAM

  1. Change the properties accordingly in the runSAMStepsWithLogs.sh file.

  2. Change the parameters in the CONFIGURATION part according to your needs (the example is for STACK-2 FILE-1):

####### CONFIGURATION: change these only! #######
GPU_ID=1

STACK="2"
FILE_NUMBER="1"

# base paths
FILE_TYPE="Cortex"
INPUT_BASE="/usr/mvl5/Images2/BIO2/Baskin_root-confocal/deniz/vis/02-15C_Columbia_Round1_Root1_pw7_New_Pipeline"
OUTPUT_PATH="${INPUT_BASE}/${FILE_TYPE}/"
TXT_PATH_MAIN="${INPUT_BASE}/${FILE_TYPE}/CenterPoints"

# derived/common parameters
STACK_NAME="Enh-$STACK"
FILE_NAME="file-$FILE_NUMBER"
# center points of a file which is the output of CoTracker3
TXT_PATH="${TXT_PATH_MAIN}/${STACK_NAME}/${FILE_NAME}-center-points.txt"

#### Step 4: rootSAM.py params
# number of frames to ignore
IGNORE_FRAMES=350 # For stack-1 ignore_frames = 350
# 1 to use SAM and 0 to use MicroSAM model
USE_SAM=1            # 1 = SAM, 0 = MicroSAM
# 1 to save vis and 0 to ignore visuals
SAVE_VIS=0
# 1 to start from last to first and 0 to start from first to last 
REVERSE=0

##### Step 5: getSAMIntensityMean.py params
STACK_NUMBER="-$STACK"   # without Enh prefix
# Remove small blobs below the given threshold
REMOVE_SMALL_THRESHOLD=200
# 1 to do post processing and 0 to not do post processing
POST_PROC=1
# For post-processing: if the mean of the current frame is lower than this ratio, ignore the frame and replace it with the previous one
LOW_RATIO=70
# For post-processing: if the mean of the current frame is higher than this ratio, ignore the frame and replace it with the previous one
HIGH_RATIO=130 # default 130, stack-5=200, stack-6=150, hair-stack-1, 10=200
# 1 to save vis and 0 to ignore visuals
SAVE_VIS_PP=1

#### Step 6: getStatistics.py params
# 1 to save animated histogram and 0 to ignore
ANIM_HIST=1
###############################################
  3. Run the runSAMStepsWithLogs.sh file.

  4. It will output the results of each step in sequential order:

run_step "4) SAM (rootSAM.py)" \
  env CUDA_VISIBLE_DEVICES="$GPU_ID" python rootSAM.py \
      --input_path              "$INPUT_BASE/TransverseSlice-Enh" \
      --center_points_path      "$TXT_PATH" \
      --output_path             "$OUTPUT_PATH" \
      --stack_name              "$STACK_NAME" \
      --file_name               "$FILE_NAME" \
      --number_of_ignore_frames $IGNORE_FRAMES \
      --use_sam                 $USE_SAM \
      --save_vis                $SAVE_VIS \
      --reverse                 $REVERSE

run_step "5) SAM Intensity Mean + Post-Processing (getSAMIntensityMean.py)" \
  env CUDA_VISIBLE_DEVICES="$GPU_ID" python getSAMIntensityMean.py \
      --input_path              "$INPUT_BASE/TransverseSlice-PS" \
      --sam_path                "$OUTPUT_PATH/SAMMasks" \
      --output_path             "$OUTPUT_PATH" \
      --stack_number            "$STACK_NUMBER" \
      --file_name               "$FILE_NAME" \
      --remove_small_threshold  $REMOVE_SMALL_THRESHOLD \
      --post_processing         $POST_PROC \
      --low_ratio               $LOW_RATIO \
      --high_ratio              $HIGH_RATIO \
      --save_vis                $SAVE_VIS_PP

run_step "6) Get Statistics (getStatistics.py)" \
  env CUDA_VISIBLE_DEVICES="$GPU_ID" python getStatistics.py \
      --input_path              "$OUTPUT_PATH" \
      --stack_name              "$STACK_NAME" \
      --file_name               "$FILE_NAME" \
      --post_processing         $POST_PROC \
      --animated_histogram      $ANIM_HIST

echo "✅ All steps completed."

Post-Processing Step Explained

The figure below shows the post-processing step that removes under-segmentation or over-segmentation.

  1. LOW_RATIO (inside the CONFIG of the previous step) sets the lower bound: frames whose mean falls below this ratio are replaced with the previous frame.

  2. HIGH_RATIO (inside the CONFIG of the previous step) sets the upper bound: frames whose mean rises above this ratio are replaced with the previous frame. Both ratios are defined empirically after running experiments; the rule is sketched below.
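
A minimal sketch of this replacement rule (getSAMIntensityMean.py implements the actual logic; here only the mean series is filtered for illustration, and the baseline is assumed to be the previous accepted frame's mean):

```python
# Illustrative sketch of the LOW_RATIO / HIGH_RATIO rule; the pipeline
# replaces the frame's mask, here only the mean series is filtered.
LOW_RATIO, HIGH_RATIO = 70, 130  # percent; baseline assumed to be the
                                 # previous accepted frame's mean

def post_process(means):
    """Replace means that jump outside [LOW_RATIO%, HIGH_RATIO%] of the
    previous accepted frame's mean with that previous mean."""
    cleaned = [means[0]]
    for m in means[1:]:
        ratio = 100.0 * m / cleaned[-1]
        cleaned.append(cleaned[-1] if ratio < LOW_RATIO or ratio > HIGH_RATIO else m)
    return cleaned
```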


Project Collaborators and Contact

Authors: Gani Rahmon, Deniz Kavzak Ufuktepe and Kannappan Palaniappan

Copyright © 2024-2026. Gani Rahmon, Deniz Kavzak Ufuktepe and Prof. K. Palaniappan and Curators of the University of Missouri, a public corporation. All Rights Reserved.

Created by Ph.D. student Gani Rahmon
Department of Electrical Engineering and Computer Science,
University of Missouri-Columbia

For more information, contact:


✏️ Citation

If you find this project helpful, please feel free to leave a star ⭐️ and cite the following papers:

@inproceedings{karaev23cotracker,
  title     = {CoTracker: It is Better to Track Together},
  author    = {Nikita Karaev and Ignacio Rocco and Benjamin Graham and Natalia Neverova and Andrea Vedaldi and Christian Rupprecht},
  booktitle = {Proc. {ECCV}},
  year      = {2024}
}

@article{karaev24cotracker3,
  title     = {CoTracker3: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos},
  author    = {Nikita Karaev and Iurii Makarov and Jianyuan Wang and Natalia Neverova and Andrea Vedaldi and Christian Rupprecht},
  journal   = {arXiv:2410.11831},
  year      = {2024}
}

@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}
