nf-BigStitcher-Spark


Introduction

JaneliaSciComp/nf-BigStitcher-Spark is a Nextflow pipeline that allows you to run individual BigStitcher-Spark modules. This means you can run the compute-intensive parts of BigStitcher on any compute infrastructure supported by Nextflow (SGE, SLURM, AWS, etc.). The pipeline starts up an Apache Spark cluster, runs the selected BigStitcher step, and then shuts down Spark.

Usage

Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

Review the current nf-core configs to see if your compute environment is already supported by nf-core. If so, you can specify the config using -profile when running the pipeline. If not, you may need to create a profile for your compute infrastructure.
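As a quick sanity check, a hypothetical test run might look like the following (the test profile is mentioned above; docker is one of several container options, and additional required parameters may vary by pipeline version):

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile test,docker

If this completes successfully, your Nextflow installation and container runtime are working and you can move on to real data.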

Downloading test data

To download test data from an HTTP URL without running any other processing, use:

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile <docker|podman|singularity|...|institute> \
   --module download-only \
   --download_url myurl \
   --download_dir /path/to/downloaded_data

You can also combine downloading with running a BigStitcher module, provided you know the test data is ready to be processed by that module. In that case, all other BigStitcher parameters should be specified as if everything were already available at the specified --download_dir:

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile <docker|podman|singularity|...|institute> \
   --module stitching \
   --download_url myurl \
   --download_dir /path/to/downloaded_data \
   --xml /path/to/downloaded_data/dataset.xml

To run the "resave" module:

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile <docker|singularity|...|institute> \
   --module resave \
   --xml /path/to/your/bigstitcher/project.xml \
   --output /path/to/your/output.zarr

To run "affine-fusion" module:

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile <docker|singularity|...|institute> \
   --module affine-fusion \
   --output /path/to/your/zarr_or_n5_container

If the container is on S3 and it references local files, you may need to pass those files using the --input_data_files parameter. For example, if the container was created using:

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile docker \
   --module create-container \
   --xml <local>/datasets/Stitching_Tiff/zstd-dataset.ome.zarr \
   --output s3://janelia-bigstitcher-spark/Stitching/cg-fused.zarr \
   --container_runtime_opts "-e AWS_ACCESS_KEY_ID=<key> -e AWS_SECRET_ACCESS_KEY=<secret>" \
   --module_params '--preserveAnisotropy --multiRes'

Then, to fuse it, run:

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile docker \
   --module affine-fusion \
   --output s3://janelia-bigstitcher-spark/Stitching/cg-fused.zarr \
   --publishdir work \
   --container_runtime_opts "-e AWS_ACCESS_KEY_ID=<k> -e AWS_SECRET_ACCESS_KEY=<s>" \
   --input_data_files <local>/datasets/Stitching_Tiff/zstd-dataset.xml

To fuse a container on S3, you may need to provide AWS credentials via --container_runtime_opts, as shown below:

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile docker \
   --module affine-fusion \
   --output s3://janelia-bigstitcher-spark/Stitching/fused.zarr \
   --publishdir work \
   --container_runtime_opts "-e AWS_ACCESS_KEY_ID=<key> -e AWS_SECRET_ACCESS_KEY=<secret>"

To fuse a container on Janelia's LSF cluster, with the head Nextflow task also running on the cluster, use:

bsub -o job.out -e job.err -J fuse -P <projectCode> \
    nextflow run main.nf \
    -profile janelia \
    --module affine-fusion \
    --output <fused-container-location> \
    --lsf_opts "-P <projectCode>"

Warning

Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the -c Nextflow option, can be used to provide any configuration except for parameters; see docs.
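For instance, the parameters from the resave example above could be collected in a params file (a hypothetical params.yaml; the keys mirror the CLI flags shown in this document, without the leading dashes):

module: resave
xml: /path/to/your/bigstitcher/project.xml
output: /path/to/your/output.zarr

and then passed to the pipeline in one go:

nextflow run JaneliaSciComp/nf-BigStitcher-Spark \
   -profile docker \
   -params-file params.yaml

This keeps run configurations versionable and avoids long command lines.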

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of an example test run with a full-size dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

JaneliaSciComp/nf-BigStitcher-Spark was developed by Cristian Goina, Konrad Rokicki, and Stephan Preibisch (the author of BigStitcher).

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

Citations

If you use BigStitcher for your analysis, please cite it using the following DOI: 10.1038/s41592-019-0501-0

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.
