diff --git a/.github/ISSUE_TEMPLATE/bug-report.md b/.github/ISSUE_TEMPLATE/bug-report.md index d8e8e78a3..5f98c7e52 100644 --- a/.github/ISSUE_TEMPLATE/bug-report.md +++ b/.github/ISSUE_TEMPLATE/bug-report.md @@ -1,52 +1,72 @@ --- name: Bug Report -about: Create a bug report to help us improve FastSurfer +about: You observed a behavior of FastSurfer that is clearly not intended, i.e. not working as intended. title: '' labels: bug assignees: '' --- +Bug Report +========== + -.... +This issue type is for: +You observed a behavior of FastSurfer that is clearly not intended, i.e. not working as intended. -## Steps to Reproduce - - -... +Description +----------- +Which functionality, script, interface or interaction does this bug concern? -## Expected Behavior - -... +Expected Behavior +----------------- +Please provide a clear and concise description of what you expected to happen and/or how this is different to what you +expected. This is also important to determine whether your expectation is also FastSurfer's intention. -## Log Files /Screenshots - -... +Steps to Reproduce +------------------ +How can others reproduce this behavior? Please provide a step-by-step guide! If applicable provide error messages, stack +traces, and/or code snippets here. Make sure to **include the command causing the observed behavior** here! -## Environment - - Docker/Singularity Image (version?) or local install - - FastSurfer Version: ... - - FreeSurfer Version: ... - - OS: ... - - GPU: ... - +1. Go to '...' +2. Checkout version '...' +3. Run '...' +4. The observed behavior/error is '...' + +This is very important for understanding and repairing the behavior, bug, etc. It is best to copy-paste your code or command line call, +but it is also completely fine to use screenshots. + +Log Files / Screenshots +----------------------- +Please attach your log files here. They help us understand the problem, the FastSurfer version, the recognized hardware, etc. - +If you are interacting with the standard entrypoint scripts, the relevant log files are: +* `run_fastsurfer.sh`: Log files `$SUBJECTS_DIR/$SUBJECT_ID/scripts/deep-seg.log`, + `$SUBJECTS_DIR/$SUBJECT_ID/scripts/BUILD.log` and `$SUBJECTS_DIR/$SUBJECT_ID/scripts/recon-surf.log` (unless you are + running with `--seg_only`). +* `long_fastsurfer.sh`: Log files `$SUBJECTS_DIR/$TEMPLATE_ID/scripts/long_fastsurfer.log`. +* `brun_fastsurfer.sh`: Same as `run_fastsurfer.sh`. +* `srun_fastsurfer.sh`: All log files in the work log-directory (`/slurm/logs` and `/slurm/scripts`), you may + select one example processing log from there, e.g. `seg_XXXX.log` and `surf_XXXX_Y.log`; try to select a case + affected by the bug. -### Execution - +If you can/want to share data privately with the FastSurfer team, you may use the following dropbox: +https://nextcloud.dzne.de/index.php/s/Z2qtHW8c7p3NSJ5 - -Run Command: +Environment +----------- +Please describe how and which version of FastSurfer you are using. For this, fill out the list below! + +- **Installation type**: Docker / Singularity Image / native install +- FastSurfer Version: `run_fastsurfer.sh --version all` gives full version information, see also `scripts/BUILD.log`. +- FreeSurfer Version (if you are running a native install): FILL IN HERE +- OS: Linux Version / Windows / Mac +- GPU: e.g. RTX 2080 / ... + -## Additional Context - -... +Additional Context +------------------ +Add any other context and comments about the problem here. 
diff --git a/.github/ISSUE_TEMPLATE/documentation.md b/.github/ISSUE_TEMPLATE/documentation.md index 62bcb6f72..45cca5936 100644 --- a/.github/ISSUE_TEMPLATE/documentation.md +++ b/.github/ISSUE_TEMPLATE/documentation.md @@ -1,12 +1,36 @@ --- name: Documentation -about: Report an issue or make a suggestion related to FastSurfer documentation +about: | + You are learning about FastSurfer, but part of the documentation is incomplete, outdated or wrong. Also, your use-case + is not covered in the documentation, but you think it is common (or you could not find a solution in other issues). title: '' labels: documentation assignees: '' --- +Documentation Request +===================== + -... +This issue type is for: +You are learning about FastSurfer, but part of the documentation is wrong, outdated or incomplete. Also, your use-case +is not covered in the documentation, but you think it is common (or you could not find a solution in other issues). + +Please provide a clear and concise description of: +--> + +Description +----------- +What documentation is this request referencing? + +Expected/Proposed Documentation +------------------------------- +What kind of documentation would you expect? What should this include? + +If you can provide this documentation, please do! If you want to get credit, create a pull request; see the contribution +guide https://deep-mi.org/FastSurfer/dev/overview/CONTRIBUTING.html + +Existing Documentation +---------------------- +If there is existing documentation that should be updated or extended, link it. How is this different from what exists? diff --git a/.github/ISSUE_TEMPLATE/questions-help-support.md b/.github/ISSUE_TEMPLATE/questions-help-support.md index 1068b9b22..5b46c5e5d 100644 --- a/.github/ISSUE_TEMPLATE/questions-help-support.md +++ b/.github/ISSUE_TEMPLATE/questions-help-support.md @@ -1,30 +1,70 @@ --- name: Questions/Help/Support -about: Submit a request for support or a question +about: You need help in using FastSurfer, for example in interpreting an error message. title: '' labels: question assignees: '' - --- +General Support Request +======================= + + +Description +----------- +What are you trying to achieve? + +Steps that lead to your Issue +----------------------------- +Did you find an example/solution in the documentation/external guides that you are following? Please copy-paste a link +or the text of these instructions. + +If your results did not match your expectations, please provide a step-by-step guide to reproduce these results and +state your expectations! +If applicable provide error messages, stack traces, and/or code snippets here. Make sure to **include the +commands causing the observed behavior** here, i.e. the commands that brought you to your question! + +1. Go to '...' +2. Checkout version '...' +3. Run '...' + +This is very important to understand where you are and how to help. It is best to copy-paste your code or command line call, +but it is also completely fine to use screenshots. -**IMPORTANT**: Please make sure to fill out the information about your environment (see below). This is often critical information we need to help you. +Log Files / Screenshots +----------------------- +Please attach your log files here. They help us understand the problem, the FastSurfer version, the recognized hardware, etc. -## Question/Support Request -A clear and concise description of a question you may have or a problem for which you would like to request support. 
+If you are interacting with the standard entrypoint scripts, the relevant log files are: +* `run_fastsurfer.sh`: Log files `$SUBJECTS_DIR/$SUBJECT_ID/scripts/deep-seg.log`, + `$SUBJECTS_DIR/$SUBJECT_ID/scripts/BUILD.log` and `$SUBJECTS_DIR/$SUBJECT_ID/scripts/recon-surf.log` (unless you are + running with `--seg_only`). +* `long_fastsurfer.sh`: Log files `$SUBJECTS_DIR/$TEMPLATE_ID/scripts/long_fastsurfer.log`. +* `brun_fastsurfer.sh`: Same as `run_fastsurfer.sh`. +* `srun_fastsurfer.sh`: All log files in the work log-directory (`/slurm/logs` and `/slurm/scripts`), you may + select one example processing log from there, e.g. `seg_XXXX.log` and `surf_XXXX_Y.log`; try to select a case + affected by the bug. -## Screenshots / Log files -Please provide error messages (can be a screenshot), stack traces, log files (specifically `$SUBJECTS_DIR/$SUBJECT_ID/scripts/deep-seg.log` and `$SUBJECTS_DIR/$SUBJECT_ID/scripts/recon-surf.log`) and any snippets useful in describing your problem here. +If you can/want to share data privately with the FastSurfer team, you may use the following dropbox: +https://nextcloud.dzne.de/index.php/s/Z2qtHW8c7p3NSJ5 -## Environment - - FastSurfer Version: please run `run_fastsurfer.sh --version all` and copy/attach the resulting output - - Installation type: official docker/custom docker/singularity/native - - FreeSurfer Version: 7.4.1/7.3.2 - - OS: Windows/Linux/macOS - - GPU: none/RTX 2080/... +Environment +----------- +Please describe how and which version of FastSurfer you are using. For this, fill out the list below! - -... +- **Installation type**: official docker image / custom docker image / singularity / native +- **FastSurfer version**: `run_fastsurfer.sh --version all` gives full version information, see also `scripts/BUILD.log`. +- **FreeSurfer version** (if you are running a native install): 7.4.1 / 7.3.2 / ... +- **OS**: Linux (Ubuntu) / Windows / Mac / ... +- **GPU**: none / RTX 2080 / ... + -### Execution -Include the command you used to run FastSurfer that cause the problem, e.g. -`./run_fastsurfer.sh --sid test --sd /path/to/dir --t1 /path/to/file.nii`. +Additional Context +------------------ +Add any other context and comments about the problem here. diff --git a/.github/workflows/QUICKTEST.md b/.github/workflows/QUICKTEST.md index afb1188fa..17ec50cdc 100644 --- a/.github/workflows/QUICKTEST.md +++ b/.github/workflows/QUICKTEST.md @@ -1,19 +1,17 @@ -# FastSurfer Singularity GitHub Actions Workflow +FastSurfer Singularity GitHub Actions Workflow +============================================== -This GitHub Actions workflow is designed to automate the integration testing of new code into the FastSurfer repository using Singularity containers. The workflow is triggered whenever new code is pushed to the repository. - -The workflow runs on a self-hosted runner labelled 'ci-gpu' to ensure security. - -## Jobs +This GitHub Actions workflow is designed to automate the integration testing of new code into the FastSurfer repository +using Singularity containers. The workflow is triggered whenever new code is pushed to the repository. +Jobs +---- The workflow consists of several jobs that are executed in sequence: ### Checkout - This job checks out the repository using the `actions/checkout@v2` action. ### Prepare Job - This job sets up the necessary environments for the workflow. It depends on the successful completion of the `checkout` job. The environments set up in this job include: - Python 3.10, using the `actions/setup-python@v3` action. 
@@ -21,38 +19,28 @@ This job sets up the necessary environments for the workflow. It depends on the - Singularity, using the `eWaterCycle/setup-singularity@v7` action with version `3.8.3`. ### Build Singularity Image - -This job builds a Docker image and converts it to a Singularity image. It depends on the successful completion of the `prepare-job`. The Docker image is built using a Python script `tools/Docker/build.py` with the `--device cuda --tag fastsurfer_gpu:cuda` flags. The Docker image is then converted to a Singularity image. +This job builds a Docker image and converts it to a Singularity image. It depends on the successful completion of the `prepare-job`. The Docker image is built using a Python script `tools/Docker/build.py` with the `--device cuda --tag fastsurfer_gpu:cuda` flags. The Docker image is then converted to a Singularity image. ### Run FastSurfer - This job runs FastSurfer on sample MRI data using the Singularity image built in the previous job. It depends on the successful completion of the `build-singularity-image` job. The Singularity container is executed with the `--nv`, `--no-home`, and `--bind` flags to enable GPU access, prevent home directory mounting, and bind the necessary directories respectively. The `FASTSURFER_HOME` environment variable is set to `/fastsurfer-dev` inside the container. ### Test File Existence - This job tests for the existence of certain files after running FastSurfer. It depends on the successful completion of the `run-fastsurfer` job. The test is performed using a Python script `test/test_file_existence.py`. ### Test Error Messages - This job tests for errors in log files after running FastSurfer. It runs on a self-hosted runner labeled `ci-gpu` and depends on the successful completion of both the `run-fastsurfer` and `test-file-existence` jobs. The test is performed using a Python script `test/test_error_messages.py`. -## Usage - +Usage +----- To use this workflow, you need to have a self-hosted runner labeled `ci-gpu` set up on your machine. You also need to update the environment variables of the runner, by going to `/home/your_runner/.env` file and adding the following environment variables with the actual paths you want to use. - ### Environment variables -`RUNNER_FS_MRI_DATA`: Path to MRI Data - -`RUNNER_FS_OUTPUT`: Path to Output directory - -`RUNNER_FS_LICENSE`: Path to License directory - -`RUNNER_SINGULARITY_IMGS`: Path to where Singularity images should be stored - -`RUNNER_FS_OUTPUT_FILES`: Path to output files to be tested - -`RUNNER_FS_OUTPUT_LOGS`: Path to output log files to check for errors - - -Once everything is set up, you can trigger the workflow manually from the GitHub Actions tab in your repository, as well as by pushing code to the repository. +- `RUNNER_FS_MRI_DATA`: Path to MRI Data +- `RUNNER_FS_OUTPUT`: Path to Output directory +- `RUNNER_FS_LICENSE`: Path to License directory +- `RUNNER_SINGULARITY_IMGS`: Path to where Singularity images should be stored +- `RUNNER_FS_OUTPUT_FILES`: Path to output files to be tested +- `RUNNER_FS_OUTPUT_LOGS`: Path to output log files to check for errors + +Once everything is set up, you can trigger the workflow manually from the GitHub Actions tab in your repository, as well +as by pushing code to the repository. 
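+For reference, the environment variables listed above could be collected in `/home/your_runner/.env` roughly as in the
+following sketch (all paths below are placeholders; replace them with the actual locations on your runner):
+
+```bash
+# Example .env for the self-hosted ci-gpu runner (placeholder paths, adjust to your setup)
+RUNNER_FS_MRI_DATA=/data/ci/mri_data
+RUNNER_FS_OUTPUT=/data/ci/output
+RUNNER_FS_LICENSE=/data/ci/fs_license
+RUNNER_SINGULARITY_IMGS=/data/ci/singularity_images
+RUNNER_FS_OUTPUT_FILES=/data/ci/output_files
+RUNNER_FS_OUTPUT_LOGS=/data/ci/output_logs
+```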
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index 9385358c4..1fd9ea94f 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -1,18 +1,16 @@ -# Contributor Covenant Code of Conduct +Contributor Covenant Code of Conduct +==================================== -## Our Pledge +Our Pledge +---------- +In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making +participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, +disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, +socioeconomic status, nationality, personal appearance, race, religion, or sexual identity and orientation. -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to making participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socioeconomic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: +Our Standards +------------- +Examples of behavior that contributes to creating a positive environment include: * Using welcoming and inclusive language * Being respectful of differing viewpoints and experiences @@ -22,57 +20,42 @@ include: Examples of unacceptable behavior by participants include: -* The use of sexualized language or imagery and unwelcome sexual attention or - advances +* The use of sexualized language or imagery and unwelcome sexual attention or advances * Trolling, insulting/derogatory comments, and personal or political attacks * Public or private harassment -* Publishing others' private information, such as a physical or electronic - address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies both within project spaces and in public spaces -when an individual is representing the project or its community. Examples of -representing a project or community include using an official project e-mail -address, posting via an official social media account, or acting as an appointed -representative at an online or offline event. Representation of a project may be -further defined and clarified by project maintainers. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at any of the mail addresses listed -here: [https://deep-mi.org/members/](https://deep-mi.org/members/) (e.g. -martin.reuter (at) dzne.de). 
All complaints will be reviewed and -investigated and will result in a response that is deemed necessary and -appropriate to the circumstances. The project team is obligated to maintain -confidentiality with regard to the reporter of an incident. Further details -of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct was adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq +* Publishing others' private information, such as a physical or electronic address, without explicit permission +* Other conduct which could reasonably be considered inappropriate in a professional setting + +Our Responsibilities +-------------------- +Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take +appropriate and fair corrective action in response to any instances of unacceptable behavior. + +Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, +issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any +contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. + +Scope +----- +This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the +project or its community. Examples of representing a project or community include using an official project e-mail +address, posting via an official social media account, or acting as an appointed representative at an online or offline +event. Representation of a project may be further defined and clarified by project maintainers. + +Enforcement +----------- +Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at +any of the mail addresses listed here: [https://deep-mi.org/members/](https://deep-mi.org/members/) (e.g. martin.reuter (at) dzne.de). All +complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to +the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. +Further details of specific enforcement policies may be posted separately. + +Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent +repercussions as determined by other members of the project's leadership. + +Attribution +----------- +This Code of Conduct was adapted from the [Contributor Covenant](https://www.contributor-covenant.org), +[version 1.4](https://www.contributor-covenant.org/version/1/4/code-of-conduct.html). + +[Answers to common questions about this code of conduct](https://www.contributor-covenant.org/faq) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index b827b9e07..cecd2439f 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,52 +1,70 @@ -# Contributing to FastSurfer - +Contribution Guide +================== All types of contributions are encouraged and valued. 
The community looks forward to your contributions. -## Reporting Bugs +Reporting Bugs +-------------- ### Before Submitting a Bug Report - Please complete the following steps in advance to help us fix any potential bug as fast as possible. - Make sure that you are using the latest version. -- Determine if your bug is really a bug and not an error on your side e.g. using incompatible environment components/versions. -- To see if other users have experienced (and potentially already solved) the same issue you are having, check if there is not already a bug report existing for your bug or error in the [bug tracker](https://github.com/Deep-MI/FastSurfer/issues?q=label%3Abug). -- Collect information about the bug: - - Stack trace (Traceback) +- Determine if your setup is supported; in particular, check if you are using the correct versions of python + packages, FreeSurfer, the operating system and drivers. You may use `$FASTSURFER_HOME/run_fastsurfer.sh --version all` + to get a list of versions of the python packages used by FastSurfer. +- Search the [Issue Tracker](https://github.com/Deep-MI/FastSurfer/issues?q=is%3Aissue) to see if other users have + previously experienced the same issue or similar issues to yours. You may find a solution in a response to an issue + reported by another user. +- Collect information about your issue: + - Error messages and stack traces (Traceback) - OS, Platform and Version (Windows, Linux, macOS, x86, ARM) - - Version of the interpreter, compiler, SDK, runtime environment, package manager, depending on what seems relevant. - - Possibly your input and the output. - - Can you reliably reproduce the issue? And can you also reproduce it with older versions? + - FastSurfer installation type: native, Docker, Singularity + - Versions of relevant libraries, in particular those reported by `$FASTSURFER_HOME/run_fastsurfer.sh --version all` + - How did you run FastSurfer (command and arguments)? + - Your input and output files + - Can you reliably reproduce the issue (e.g. on other computers)? And can you also reproduce it with older versions? ### How Do I Submit a Good Bug Report? - We use GitHub issues to track bugs and errors. If you run into an issue with the project: -- Open an [Issue](https://github.com/Deep-MI/FastSurfer/issues/new). (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and not to label the issue.) -- Explain the behavior you would expect and the actual behavior. -- Please provide as much context as possible and describe the *reproduction steps* that someone else can follow to recreate the issue on their own. This usually includes your code. For good bug reports you should isolate the problem and create a reduced test case. +- Open an [Issue in GitHub](https://github.com/Deep-MI/FastSurfer/issues/new). +- Select the Issue type that best describes the Issue you are reporting. "Bugs" describe situations where the + observed and the intended behavior are different. If you are unsure about the intended behavior, it is best to report the + Issue as "Question/Help/Support" and ask about the intended behavior, or as "Documentation". +- Closely follow the template format presented to you and answer the questions asked. +- In particular, please explain the behavior you would expect and the observed behavior. +- Please provide as much context as possible and describe the *reproduction steps* that someone else can follow to + recreate the issue on their own. This usually includes your code. 
For good bug reports you should isolate the problem + and create a reduced test case. - Provide the information you collected in the previous section. -- Also provide the `$subjid/scripts/recon-surf.log` (if existent) and in the case of a parallel run, also the `$subjid/scripts/[l/r]h.processing.cmdf.log` (if existent). +- Also provide log files and screenshots as indicated in the template, e.g. `/scripts/deep-seg.log`, + `/scripts/recon-surf.log`, and `/scripts/[l/r]h.processing.cmdf.log` (if these exist). Once it's filed: - The project team will label the issue accordingly. -- A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no obvious way to reproduce the issue, the team will ask you for those steps and mark the issue as `needs-repro`. Bugs with the `needs-repro` tag will not be addressed until they are reproduced. -- If the team is able to reproduce the issue, it will be marked `needs-fix`, as well as possibly other tags (such as `critical`). - -## Suggesting Enhancements - +- A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no + obvious way to reproduce the issue, the team will ask you for those steps and mark the issue as `needs-repro`. Bugs + with the `needs-repro` tag will not be addressed until they are reproduced. +- If the team is able to reproduce the issue, it will be marked `needs-fix`, as well as possibly other tags (such as + `critical`). + +Suggesting Enhancements +----------------------- Please follow these guidelines to help maintainers and the community to understand your suggestion for enhancements. ### Before Submitting an Enhancement - - Make sure that you are using the latest version. -- Read the documentation carefully and find out if the functionality is already covered, maybe by an individual configuration. -- Perform a [search](https://github.com/Deep-MI/FastSurfer/issues) to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one. -- Find out whether your idea fits with the scope and aims of the project. It's up to you to make a strong case to convince the project's developers of the merits of this feature. Keep in mind that we want features that will be useful to the majority of our users and not just a small subset. If you're just targeting a minority of users, consider writing an add-on/plugin library. +- Read the documentation carefully and find out if the functionality is already covered, maybe by an individual + configuration. +- Perform a [search](https://github.com/Deep-MI/FastSurfer/issues?q=is%3Aissue) to see if the enhancement has already + been suggested. If it has, add a comment to the existing issue instead of opening a new one. +- Find out whether your idea fits with the scope and aims of the project. It's up to you to make a strong case to + convince the project's developers of the merits of this feature. Keep in mind that we want features that will be + useful to the majority of our users and not just a small subset. If you're just targeting a minority of users, + consider writing an add-on/plugin library. ### How Do I Submit a Good Enhancement Suggestion? - Enhancement suggestions are tracked as [GitHub issues](https://github.com/Deep-MI/FastSurfer/issues). - Use a **clear and descriptive title** for the issue to identify the suggestion. 
@@ -54,53 +72,98 @@ Enhancement suggestions are tracked as [GitHub issues](https://github.com/Deep-M - **Describe the current behavior** and **explain which behavior you expected to see instead** and why. - **Explain why this enhancement would be useful** to most users. -## Contributing Code - -1. [Fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) this repository to your github account -2. Clone your fork to your computer (`git clone https://github.com//FastSurfer.git`) -3. Change into the project directory (`cd FastSurfer`) -4. Add Deep-MI repo as upstream (`git remote add upstream https://github.com/Deep-MI/FastSurfer.git`) -5. Update information from upstream (`git fetch upstream`) -6. Checkout the upstream dev branch (`git checkout -b dev upstream/dev`) -7. Create your feature branch from dev (`git checkout -b my-new-feature`) -8. Commit your changes (`git commit -am 'Add some feature'`) -9. Push to the branch to your github (`git push origin my-new-feature`) -10. Create new pull request on github web interface from that branch into Deep-NI **dev branch** (not into stable) - -If lots of things changed in the meantime or the pull request is showing conflicts you should rebase your branch to the current upstream dev. -This is the preferred way, but only possible if you are the sole develop or your branch: - -10. Switch into dev branch (`git checkout dev`) -11. Update your dev branch (`git pull upstream dev`) -12. Switch into your feature (`git checkout my-new-feature`) -13. Rebase your branch onto dev (`git rebase dev`), resolve conflicts and continue until complete -14. Force push the updated feature branch to your gihub (`git push -f origin my-new-feature`) - -If other people co-develop the my-new-feature branch, rewriting history with a rebase is not possible. -Instead you need to merge upstream dev into your branch: - -10. Switch into dev branch (`git checkout dev`) -11. Update your dev branch (`git pull upstream dev`) -12. Switch into your feature (`git checkout my-new-feature`) -13. Merge dev into your feature (`git merge dev`), resolve conflicts and commit -14. Push to origin (`git push origin my-new-feature`) +Contributing Code +----------------- +1. [Fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the + [official FastSurfer](https://github.com/Deep-MI/FastSurfer) repository to your GitHub account. +2. Clone your fork to your computer: `git clone https://github.com//FastSurfer.git` +3. Change into the project directory: `cd FastSurfer` +4. Add Deep-MI repo as upstream: `git remote add upstream https://github.com/Deep-MI/FastSurfer.git` +5. Update information from upstream: `git fetch upstream` +6. Checkout the upstream dev branch: `git checkout -b dev upstream/dev` +7. Create your feature branch from dev: `git checkout -b my-new-feature` + + ```bash + git clone https://github.com//FastSurfer.git + cd FastSurfer + git remote add upstream https://github.com/Deep-MI/FastSurfer.git + git fetch upstream + git checkout -b dev upstream/dev + git checkout -b my-new-feature + ``` + +8. Edit the code and implement your changes/features +9. Commit your changes: `git commit -am 'Add some feature'` +10. Push to the branch to your GitHub: `git push origin my-new-feature` + + ```bash + git commit -am 'Add some feature' + git push origin my-new-feature + ``` + +11. 
[Create a new pull request on the GitHub web interface](https://github.com/Deep-MI/FastSurfer/compare) from your + branch into the Deep-MI **dev branch** (not into stable): Select "Compare across forks" and then your fork on the right; + finally, select your `my-new-feature` branch to compare to. + +If lots of things changed on the official FastSurfer repository in the meantime or the pull request is showing +conflicts, these need to be resolved by either rebasing (preferred) or merging. + +### Option 1: Rebasing +Rebasing is preferred because it leaves a cleaner history of changes. However, rebasing is only possible if you are the +sole developer collaborating on your branch. To rebase, do the following: + +12. Switch into dev branch: `git checkout dev` +13. Update your dev branch: `git pull upstream dev` +14. Switch into your feature: `git checkout my-new-feature` +15. Rebase your branch onto dev, resolve conflicts and continue until complete: `git rebase dev` +16. Force push the updated feature branch to your GitHub: `git push -f origin my-new-feature` + + ```bash + git checkout dev + git pull upstream dev + git checkout my-new-feature + git rebase dev + git push -f origin my-new-feature + ``` + +### Option 2: Merging +If other people co-develop in the `my-new-feature` branch, rewriting history with a rebase is not possible. +Instead, you need to merge `upstream/dev` into your branch: + +12. Switch into dev branch: `git checkout dev` +13. Update your dev branch: `git pull upstream dev` +14. Switch into your feature: `git checkout my-new-feature` +15. Merge dev into your feature, resolve conflicts and commit: `git merge dev` +16. Push to origin: `git push origin my-new-feature` + + ```bash + git checkout dev + git pull upstream dev + git checkout my-new-feature + git merge dev + git push origin my-new-feature + ``` Either method updates the pull request and resolves conflicts, so that we can merge it once it is complete. -Once the pull request is merged by us you can delete the feature branch in your clone and on your fork: +Once the pull request is merged by us, you can delete the feature branch locally and on your fork: -15. Switch into dev branch (`git checkout dev`) -16. Delete feature branch (`git branch -D my-new-feature`) -17. Delete the branch on your github fork either via GUI, or via command line (`git push origin --delete my-new-feature`) +17. Switch into dev branch: `git checkout dev` +18. Delete feature branch: `git branch -D my-new-feature` +19. Delete the branch on your GitHub fork either via GUI, or via command line: `git push origin --delete my-new-feature` -This procedure will ensure that your local dev branch always follows our dev branch and will never diverge. You can, once in a while, push the dev branch, or similarly update stable and push it to your fork (origin), but that is not really necessary. +This procedure will ensure that your local `dev` branch always follows our `dev` branch and will never diverge. You can, +once in a while, push the `dev` branch, or similarly update `stable` and push it to your fork (`origin`), but that is +not really necessary. -Next time you contribute a feature, you do not need to go through the steps 1-6 above, but simply: -- Switch to dev branch (`git checkout dev`) -- Make sure it is identical to upstream (`git pull upstream dev`) +Next time you contribute a feature, steps 1-6 above are simpler as you already have a local FastSurfer copy. 
Simply: +- Switch to dev branch: `git checkout dev` +- Make sure it is identical to upstream: `git pull upstream dev` - Check out a new feature branch and continue from 7. above. -Another good command, if for some reasons your dev branch diverged, which should never happen as you never commit to it, you can reset it by `git reset --hard upstream/dev`. Make absolutely sure you are in your dev branch (not the feature branch) and be aware that this will delete any local changes! - -## Attribution +Another good command, if -- for any reason -- your `dev` branch diverged, which should never happen as you never +commit to it, you can reset it by `git reset --hard upstream/dev`. Make absolutely sure you are in your `dev` branch +(not the feature branch) and be aware that this will delete any local changes! +Attribution +----------- This guide is based on the **contributing-gen**. [Make your own](https://github.com/bttger/contributing-gen)! diff --git a/FastSurferCNN/README.md b/FastSurferCNN/README.md index 3636f5948..26d899bf2 100644 --- a/FastSurferCNN/README.md +++ b/FastSurferCNN/README.md @@ -1,31 +1,42 @@ -# Overview - -This directory contains all information needed to run inference with the readily trained FastSurferVINN or train it from scratch. FastSurferCNN is capable of whole brain segmentation into 95 classes in under 1 minute, mimicking FreeSurfer's anatomical segmentation and cortical parcellation (DKTatlas). The network architecture incorporates local and global competition via competitive dense blocks and competitive skip pathways, as well as multi-slice information aggregation that specifically tailor network performance towards accurate segmentation of both cortical and sub-cortical structures. +Overview +======== +This directory contains all information needed to run inference with the readily trained FastSurferVINN or train it from +scratch. FastSurferCNN is capable of whole brain segmentation into 95 classes in under 1 minute, mimicking FreeSurfer's +anatomical segmentation and cortical parcellation (DKTatlas). The network architecture incorporates local and global +competition via competitive dense blocks and competitive skip pathways, as well as multi-slice information aggregation +that specifically tailor network performance towards accurate segmentation of both cortical and sub-cortical structures. ![](../doc/images/detailed_network.png) -The network was trained with conformed images (UCHAR, 1-0.7 mm voxels and standard slice orientation). These specifications are checked in the run_prediction.py script and the image is automatically conformed if it does not comply. +The network was trained with conformed images (UCHAR, 1-0.7 mm voxels and standard slice orientation). These +specifications are checked in the run_prediction.py script and the image is automatically conformed if it does not +comply. -# 1. Inference +1. Inference +------------ -The *FastSurferCNN* directory contains all the source code and modules needed to run the scripts. A list of python libraries used within the code can be found in __requirements.txt__. The main script is called __run_prediction.py__ within which certain options can be selected and set via the command line: - -## General +The *FastSurferCNN* directory contains all the source code and modules needed to run the scripts. A list of python +libraries used within the code can be found in `requirements.txt`. 
The main script is called `run_prediction.py` within +which certain options can be selected and set via the command line: -* `--in_dir`: Path to the input volume directory (e.g /your/path/to/ADNI/fs60) or +### General +* `--in_dir`: Path to the input volume directory (e.g `/your/path/to/ADNI/fs60`) or * `--csv_file`: Path to csv-file listing input volume directories -* `--t1`: name of the T1-weighted MRI_volume (like mri_volume.mgz, __default: orig.mgz__) -* `--conformed_name`: name of the conformed MRI_volume (the input volume is always first conformed, if not already, and the result is saved under the given name, __default: orig.mgz__) +* `--t1`: name of the T1-weighted MRI_volume (like `mri_volume.mgz`, default: `orig.mgz`) +* `--conformed_name`: name of the conformed MRI_volume (the input volume is always first conformed, if not already, and + the result is saved under the given name, default: `orig.mgz`) * `--t`: search tag limits processing to subjects matching the pattern (e.g. sub-* or 1030*...) * `--sd`: Path to output directory (where should predictions be saved). Will be created if it does not already exist. -* `--seg_log`: name of log-file (information about processing is stored here; If not set, logs will not be saved). Saved in the same directory as the predictions. -* `--strip`: strip suffix from path definition of input file to yield correct subject name. (Optional, if full path is defined for `--t1`) -* `--lut`: FreeSurfer-style Color Lookup Table with labels to use in final prediction. Default: ./config/FastSurfer_ColorLUT.tsv +* `--seg_log`: name of log-file (information about processing is stored here; If not set, logs will not be saved). Saved + in the same directory as the predictions. +* `--strip`: strip suffix from path definition of input file to yield correct subject name (optional, if full path is + defined for `--t1`). +* `--lut`: FreeSurfer-style Color Lookup Table with labels to use in final prediction. Default: + `./config/FastSurfer_ColorLUT.tsv` * `--seg`: Name of intermediate DL-based segmentation file (similar to aparc+aseg). -## Checkpoints and configs - +### Checkpoints and configs * `--ckpt_sag`: path to sagittal network checkpoint * `--ckpt_cor`: path to coronal network checkpoint * `--ckpt_ax`: path to axial network checkpoint @@ -33,10 +44,10 @@ The *FastSurferCNN* directory contains all the source code and modules needed to * `--cfg_sag`: Path to the axial config file * `--cfg_ax`: Path to the sagittal config file -## Optional commands - +### Optional commands * `--clean`: clean up segmentation after running it (optional) -* `--device `:Device for processing (_auto_, _cpu_, _cuda_, _cuda:_), where cuda means Nvidia GPU; you can select which one e.g. "cuda:1". Default: "auto", check GPU and then CPU +* `--device `:Device for processing (_auto_, _cpu_, _cuda_, _cuda:_), where cuda means Nvidia GPU; you + can select which one e.g. "cuda:1". Default: "auto", check GPU and then CPU * `--viewagg_device `: Define where the view aggregation should be run on. Can be _auto_ or a device (see --device). By default (_auto_), the program checks if you have enough memory to run the view aggregation on the gpu. @@ -45,9 +56,9 @@ The *FastSurferCNN* directory contains all the source code and modules needed to Equivalently, if you define `--viewagg_device gpu`, view agg will be run on the gpu (no memory check will be done). * `--batch_size`: Batch size for inference. 
Default=1 - -## Example Command: Evaluation Single Subject -To run the network on MRI-volumes of subjectX in ./data (specified by `--t1` flag; e.g. ./data/subjectX/t1-weighted.nii.gz), change into the *FastSurferCNN* directory and run the following commands: +### Example Command: Evaluation Single Subject +To run the network on MRI-volumes of subjectX in `./data` (specified by `--t1` flag; e.g. +`./data/subjectX/t1-weighted.nii.gz`), change into the *FastSurferCNN* directory and run the following commands: ```bash python3 run_prediction.py \ @@ -58,15 +69,13 @@ python3 run_prediction.py \ ``` The output will be stored in: - - `../output/subjectX/mri/aparc.DKTatlas+aseg.deep.mgz` (large segmentation) - `../output/subjectX/mri/mask.mgz` (brain mask) - `../output/subjectX/mri/aseg_noCC.mgz` (reduced segmentation) Here the logfile "temp_Competitive.log" will include the logfiles of all subjects. If left out, the logs will be written to stdout - -## Example Command: Evaluation whole directory +### Example Command: Evaluation whole directory To run the network on all subjects MRI-volumes in ./data, change into the *FastSurferCNN* directory and run the following command: ```bash @@ -77,7 +86,6 @@ python3 run_prediction.py \ ``` The output will be stored in: - - `../output/subjectX/mri/aparc.DKTatlas+aseg.deep.mgz` (large segmentation) - `../output/subjectX/mri/mask.mgz` (brain mask) - `../output/subjectX/mri/aseg_noCC.mgz` (reduced segmentation) @@ -85,19 +93,26 @@ The output will be stored in: -# 2. Hdf5-Trainingset Generation +2. Hdf5-Trainingset Generation +------------------------------ -The *FastSurferCNN* directory contains all the source code and modules needed to create a hdf5-file from given MRI volumes. Here, we use the orig.mgz output from freesurfer as the input image and the aparc.DKTatlas+aseg.mgz as the ground truth. The mapping functions are set-up accordingly as well and need to be changed if you use a different segmentation as ground truth. -A list of python libraries used within the code can be found in __requirements.txt__. The main script is called __generate_hdf5.py__ within which certain options can be selected and set via the command line: +The *FastSurferCNN* directory contains all the source code and modules needed to create a hdf5-file from given MRI +volumes. Here, we use the orig.mgz output from freesurfer as the input image and the `aparc.DKTatlas+aseg.mgz` as the +ground truth. The mapping functions are set-up accordingly as well and need to be changed if you use a different +segmentation as ground truth. +A list of python libraries used within the code can be found in `requirements.txt`. The main script is called +`generate_hdf5.py` within which certain options can be selected and set via the command line: ### General - -* `--hdf5_name`: Path and name of the to-be-created hdf5-file. Default: ../data/hdf5_set/Multires_coronal.hdf5 -* `--data_dir`: Directory with images to load. Default: /data +* `--hdf5_name`: Path and name of the to-be-created hdf5-file. Default: `../data/hdf5_set/Multires_coronal.hdf5` +* `--data_dir`: Directory with images to load. Default: `/data` * `--pattern`: Pattern to match only certain files in the directory -* `--csv_file`: Csv-file listing subjects to load (can be used instead of data_dir; one complete path per line (up to the subject directory)) - Example: You have a directory called **dataset** with three different datasets (**D1**, **D2** and **D3**). You want to include subject1, subject10 and subject20 from D1 and D2. 
Your csv-file would then look like this: +* `--csv_file`: CSV-file listing subjects to load (can be used instead of data_dir; one complete path per line, up to + and including the subject directory). + + Example: You have a directory called **dataset** with three different datasets (**D1**, **D2** and **D3**). You want + to include subject1, subject10 and subject20 from D1 and D2. Your csv-file would then look like this: ``` /dataset/D1/subject1 /dataset/D1/subject10 @@ -106,18 +121,19 @@ A list of python libraries used within the code can be found in __requirements.t /dataset/D2/subject10 /dataset/D2/subject20 ``` -* --lut: FreeSurfer-style Color Lookup Table with labels to use in final prediction. Default: ./config/FastSurfer_ColorLUT.tsv +* `--lut`: FreeSurfer-style Color Lookup Table with labels to use in final prediction. Default: + `./config/FastSurfer_ColorLUT.tsv` -The actual filename and segmentation ground truth name is specified via `--image_name` and `--gt_name` (e.g. the actual file could be sth. like /dataset/D1/subject1/mri_volume.mgz and /dataset/D1/subject1/segmentation.mgz) - -## Image Names - -* `--image_name`: Default name of original images. FreeSurfer orig.mgz is default (mri/orig.mgz) -* `--gt_name`: Default name for ground truth segmentations. Default: mri/aparc.DKTatlas+aseg.mgz. -* `--gt_nocc`: Segmentation without corpus callosum (used to mask this segmentation in ground truth). For a normal FreeSurfer input, use mri/aseg.auto_noCCseg.mgz. +The actual filename and the segmentation ground truth name are specified via `--image_name` and `--gt_name` (e.g. the actual +file could be something like `/dataset/D1/subject1/mri_volume.mgz` and `/dataset/D1/subject1/segmentation.mgz`). -## Image specific options +### Image Names +* `--image_name`: Default name of original images. FreeSurfer `orig.mgz` is default (`mri/orig.mgz`). +* `--gt_name`: Default name for ground truth segmentations. Default: `mri/aparc.DKTatlas+aseg.mgz`. +* `--gt_nocc`: Segmentation without corpus callosum (used to mask this segmentation in ground truth). For a normal + FreeSurfer input, use `mri/aseg.auto_noCCseg.mgz`. +### Image specific options * `--plane`: Which anatomical plane to use for slicing (axial, coronal or sagittal) * `--thickness`: Number of pre- and succeeding slices (we use 3 --> total of 7 slices is fed to the network; default: 3) * `--combi`: Suffixes of labels names to combine. Default: Left- and Right- @@ -130,8 +146,7 @@ The actual filename and segmentation ground truth name is specified via `--image * `--sizes`: Resolutions of images in the dataset. Default: 256 * `--edge_w`: Weight for edges in weight mask. 
Default=5 -## Example Command: Axial (Single Resolution) - +### Example Command: Axial (Single Resolution) ```bash python3 generate_hdf5.py \ --hdf5_name ../data/training_set_axial.hdf5 \ @@ -140,15 +155,14 @@ python3 generate_hdf5.py \ --plane axial \ --image_name mri/orig.mgz \ --gt_name mri/aparc.DKTatlas+aseg.mgz \ - --gt_nocc mri/aseg.auto_noCCseg.mgz + --gt_nocc mri/aseg.auto_noCCseg.mgz \ --max_w 5 \ --edge_w 4 \ --hires_w 4 \ --sizes 256 ``` -## Example Command: Coronal (Single Resolution) - +### Example Command: Coronal (Single Resolution) ```bash python3 generate_hdf5.py \ --hdf5_name ../data/training_set_coronal.hdf5 \ @@ -156,15 +170,14 @@ python3 generate_hdf5.py \ --plane coronal \ --image_name mri/orig.mgz \ --gt_name mri/aparc.DKTatlas+aseg.mgz \ - --gt_nocc mri/aseg.auto_noCCseg.mgz + --gt_nocc mri/aseg.auto_noCCseg.mgz \ --max_w 5 \ --edge_w 4 \ --hires_w 4 \ --sizes 256 ``` -## Example Command: Sagittal (Multiple Resolutions) - +### Example Command: Sagittal (Multiple Resolutions) ```bash python3 generate_hdf5.py \ --hdf5_name ../data/training_set_sagittal.hdf5 \ @@ -172,19 +185,21 @@ python3 generate_hdf5.py \ --plane sagittal \ --image_name mri/orig.mgz \ --gt_name mri/aparc.DKTatlas+aseg.mgz \ - --gt_nocc mri/aseg.auto_noCCseg.mgz + --gt_nocc mri/aseg.auto_noCCseg.mgz \ --max_w 5 \ --edge_w 4 \ --hires_w 4 \ --sizes 256 311 320 ``` -## Example Command: Sagittal using --data_dir instead of --csv_file -`--data_dir` specifies the path in which the data is located, with `--pattern` we can select subjects from the specified path. By default the pattern is "*" meaning all subjects will be selected. -As an example, imagine you have 19 FreeSurfer processed subjects labeled subject1 to subject19 in the ../data directory: +### Example Command: Sagittal using --data_dir instead of --csv_file +`--data_dir` specifies the path in which the data is located, with `--pattern` we can select subjects from the specified +path. By default, the pattern is `"*"` meaning all subjects will be selected (it is important to quote the pattern (i.e. +use `"*"`, NOT `*`). As an example, imagine you have 19 FreeSurfer processed subjects labeled subject1 to subject19 in +the `../data` directory: ``` -/home/user/FastSurfer/data +$HOME/FastSurfer/data ├── subject1 ├── subject2 ├── subject3 @@ -205,8 +220,8 @@ As an example, imagine you have 19 FreeSurfer processed subjects labeled subject └── trash ``` -Setting `--pattern` "*" will select all 19 subjects (subject1, ..., subject19). -Now, if only a subset should be used for the hdf5-file (e.g. subject 10 till subject19), this can be done by changing the `--pattern` flag to "subject1[0-9]": +Setting `--pattern` "*" will select all 19 subjects (subject1, ..., subject19). Now, if only a subset should be used for +the hdf5-file (e.g. subject 10 till subject19), this can be done by changing the `--pattern` flag to "subject1[0-9]": ```bash python3 generate_hdf5.py \ @@ -219,20 +234,25 @@ python3 generate_hdf5.py \ --gt_nocc mri/aseg.auto_noCCseg.mgz ``` -# 3. Training + +3. Training +----------- -The *FastSurferCNN* directory contains all the source code and modules needed to run the scripts. A list of python libraries used within the code can be found in __requirements.txt__. The main training script is called __run_model.py__ whose options can be set through a configuration file and command line arguments: -* `--cfg`: Path to the configuration file. Default: config/FastSurferVINN.yaml -* `--aug`: List of augmentations to use. Default: None. 
+The *FastSurferCNN* directory contains all the source code and modules needed to run the scripts. A list of python +libraries used within the code can be found in `requirements.txt`. The main training script is called `run_model.py` +whose options can be set through a configuration file and command line arguments: +* `--cfg`: Path to the configuration file. Default: `config/FastSurferVINN.yaml` +* `--aug`: List of augmentations to use. Default: `None`. * `--opt`: List of class options to use. -The `--cfg` file configures the model to be trained. See config/FastSurferVINN.yaml for an example and config/defaults.py for all options and default values. +The `--cfg` file configures the model to be trained. See `config/FastSurferVINN.yaml` for an example and +`config/defaults.py` for all options and default values. The configuration options include: -## Model options -* `MODEL_NAME`: Name of model [FastSurferCNN, FastSurferVINN]. Default: FastSurferVINN +### Model options +* `MODEL_NAME`: Name of model [`FastSurferCNN`, `FastSurferVINN`]. Default: FastSurferVINN * `NUM_CLASSES`: Number of classes to predict including background. Axial and coronal: 79 (default), Sagittal: 51. * `NUM_FILTERS`: Filter dimensions for Networks (all layers same). Default: 71 * `NUM_CHANNELS`: Number of input channels (slice thickness). Default: 7 @@ -243,41 +263,38 @@ The configuration options include: * `POOL`: Size of pooling filter. Default: 2 * `BASE_RES`: Base resolution of the segmentation model (after interpolation layer). Default: 1 -## Optimizer options - +### Optimizer options * `BASE_LR`: Base learning rate. Default: 0.01 -* `OPTIMIZING_METHOD`: Optimization method [sgd, adam, adamW]. Default: adamW +* `OPTIMIZING_METHOD`: Optimization method [`sgd`, `adam`, `adamW`]. Default: `adamW` * `MOMENTUM`: Momentum for optimizer. Default: 0.9 -* `NESTEROV`: Enables Nesterov for optimizer. Default: True -* `LR_SCHEDULER`: Learning rate scheduler [step_lr, cosineWarmRestarts, reduceLROnPlateau]. Default: cosineWarmRestarts - - -## Data options +* `NESTEROV`: Enables Nesterov for optimizer. Default: `True` +* `LR_SCHEDULER`: Learning rate scheduler [`step_lr`, `cosineWarmRestarts`, `reduceLROnPlateau`]. Default: + `cosineWarmRestarts` +### Data options * `PATH_HDF5_TRAIN`: Path to training hdf5-dataset * `PATH_HDF5_VAL`: Path to validation hdf5-dataset -* `PLANE`: Plane to load [axial, coronal, sagittal]. Default: coronal - -## Training options +* `PLANE`: Plane to load [`axial`, `coronal`, `sagittal`]. Default: `coronal` +### Training options * `BATCH_SIZE`: Input batch size for training. Default: 16 * `NUM_EPOCHS`: Number of epochs to train. Default: 30 * `SIZES`: Available image sizes for the multi-scale dataloader. Default: [256, 311 and 320] -* `AUG`: Augmentations. Default: ["Scaling", "Translation"] - -## Misc. Options +* `AUG`: Augmentations. Default: `["Scaling", "Translation"]` +### Misc. Options * `LOG_DIR`: Log directory for run * `NUM_GPUS`: Number of GPUs to use. Default: 1 * `RNG_SEED`: Select random seed. Default: 1 -Any option can alternatively be set through the command-line by specifying the option name (as defined in config/defaults.py) followed by a value, such as: `MODEL.NUM_CLASSES 51`. +Any option can alternatively be set through the command-line by specifying the option name (as defined in +`config/defaults.py`) followed by a value, such as: `MODEL.NUM_CLASSES 51`. 
-To train the network on a given hdf5-set, change into the *FastSurferCNN* directory and run -`run_model.py` as in the following examples: +To train the network on a given hdf5-set, change into the *FastSurferCNN* directory and run `run_model.py` as in the +following examples: -## Example Command: Training Default FastSurferVINN +### Example Command: Training Default FastSurferVINN Trains FastSurferVINN on multi-resolution images in the coronal plane: ```bash @@ -285,8 +302,9 @@ python3 run_model.py \ --cfg ./config/FastSurferVINN.yaml ``` -## Example Command: Training FastSurferVINN (Single Resolution) -Trains FastSurferVINN on single-resolution images in the sagittal plane by overriding the NUM_CLASSES, SIZES, PATH_HDF5_TRAIN, and PATH_HDF5_VAL options: +### Example Command: Training FastSurferVINN (Single Resolution) +Trains FastSurferVINN on single-resolution images in the sagittal plane by overriding the `NUM_CLASSES`, `SIZES`, +`PATH_HDF5_TRAIN`, and `PATH_HDF5_VAL` options: ```bash python3 run_model.py \ @@ -297,7 +315,7 @@ python3 run_model.py \ DATA.PATH_HDF5_VAL ./hdf5_sets/validation_sagittal_single_resolution.hdf5 \ ``` -## Example Command: Training FastSurferCNN +### Example Command: Training FastSurferCNN Trains FastSurferCNN using a provided configuration file and specifying no augmentations: ```bash diff --git a/FastSurferCNN/inference.py b/FastSurferCNN/inference.py index ed2a4bb68..c21ba79a4 100644 --- a/FastSurferCNN/inference.py +++ b/FastSurferCNN/inference.py @@ -190,7 +190,7 @@ def load_checkpoint(self, ckpt: str | os.PathLike): Parameters ---------- - ckpt : Union[str, os.PathLike] + ckpt : str, os.PathLike String or os.PathLike object containing the name to the checkpoint file. """ logger.info(f"Loading checkpoint {ckpt}") @@ -285,20 +285,6 @@ def get_model_width(self) -> int: """ return self.cfg.MODEL.WIDTH - def get_max_size(self) -> int | tuple[int, int]: - """ - Return the max size. - - Returns - ------- - int | tuple[int, int] - The maximum size, either a single value or a tuple (width, height). - """ - if self.cfg.MODEL.OUT_TENSOR_WIDTH == self.cfg.MODEL.OUT_TENSOR_HEIGHT: - return self.cfg.MODEL.OUT_TENSOR_WIDTH - else: - return self.cfg.MODEL.OUT_TENSOR_WIDTH, self.cfg.MODEL.OUT_TENSOR_HEIGHT - def get_device(self) -> torch.device: """ Return the device. 
@@ -386,7 +372,7 @@ def eval( start_index = end_index except: - logger.exception(f"Exception in batch {log_batch_idx + 1} of {plane} inference.") + logger.error(f"Exception in batch {log_batch_idx + 1} of {plane} inference.") raise else: logger.info(f"Inference on {log_batch_idx + 1} batches for {plane} successful") diff --git a/FastSurferCNN/models/networks.py b/FastSurferCNN/models/networks.py index ce974628e..d327630a2 100644 --- a/FastSurferCNN/models/networks.py +++ b/FastSurferCNN/models/networks.py @@ -271,28 +271,14 @@ def __init__(self, params: dict, padded_size: int = 256): self.height = params["height"] self.width = params["width"] - self.out_tensor_shape = tuple( - params.get("out_tensor_" + k, padded_size) for k in ["width", "height"] - ) + self.out_tensor_shape = tuple(params.get("out_tensor_" + k, padded_size) for k in ("width", "height")) - self.interpolation_mode = ( - params["interpolation_mode"] - if "interpolation_mode" in params - else "bilinear" - ) - if self.interpolation_mode not in ["nearest", "bilinear", "bicubic", "area"]: + self.interpolation_mode = params["interpolation_mode"] if "interpolation_mode" in params else "bilinear" + if self.interpolation_mode not in ("nearest", "bilinear", "bicubic", "area"): raise ValueError("Invalid interpolation mode") - self.crop_position = ( - params["crop_position"] if "crop_position" in params else "top_left" - ) - if self.crop_position not in [ - "center", - "top_left", - "top_right", - "bottom_left", - "bottom_right", - ]: + self.crop_position = params["crop_position"] if "crop_position" in params else "top_left" + if self.crop_position not in ("center", "top_left", "top_right", "bottom_left", "bottom_right"): raise ValueError("Invalid crop position") # Reset input channels to original number (overwritten in super call) @@ -322,16 +308,12 @@ def __init__(self, params: dict, padded_size: int = 256): # Code for Network Initialization for m in self.modules(): if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d): - nn.init.kaiming_normal_( - m.weight, mode="fan_out", nonlinearity="leaky_relu" - ) + nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="leaky_relu") elif isinstance(m, nn.BatchNorm2d): nn.init.constant_(m.weight, 1) nn.init.constant_(m.bias, 0) - def forward( - self, x: Tensor, scale_factor: Tensor, scale_factor_out: Tensor | None = None - ) -> Tensor: + def forward(self, x: Tensor, scale_factor: Tensor, scale_factor_out: Tensor | None = None) -> Tensor: """ Feedforward through graph. @@ -340,8 +322,7 @@ def forward( x : Tensor Input image [N, C, H, W]. scale_factor : Tensor - Tensor of shape [N, 1] representing the scale factor for each image in the - batch. + Tensor of shape [N, 1] representing the scale factor for each image in the batch. scale_factor_out : Tensor, optional Tensor representing the scale factor for the output. Defaults to None. @@ -397,9 +378,7 @@ def build_model(cfg: 'yacs.config.CfgNode') -> FastSurferCNN | FastSurferVINN: model Object of the initialized model. 
""" - assert ( - cfg.MODEL.MODEL_NAME in _MODELS.keys() - ), f"Model {cfg.MODEL.MODEL_NAME} not supported" + assert cfg.MODEL.MODEL_NAME in _MODELS.keys(), f"Model {cfg.MODEL.MODEL_NAME} not supported" params = {k.lower(): v for k, v in dict(cfg.MODEL).items()} model = _MODELS[cfg.MODEL.MODEL_NAME](params, padded_size=cfg.DATA.PADDED_SIZE) return model diff --git a/FastSurferCNN/version.py b/FastSurferCNN/version.py index 99e0c6fab..990c26379 100644 --- a/FastSurferCNN/version.py +++ b/FastSurferCNN/version.py @@ -526,8 +526,7 @@ def read_and_close_version(project_file: TextIO | None = None) -> str: ----- See also FastSurferCNN.version.read_version_from_project_file """ - if project_file is None: - project_file = open(DEFAULTS.PROJECT_TOML) + project_file = open(project_file or DEFAULTS.PROJECT_TOML) try: version = read_version_from_project_file(project_file) finally: diff --git a/HypVINN/README.md b/HypVINN/README.md index 66ae036d1..558e175c5 100644 --- a/HypVINN/README.md +++ b/HypVINN/README.md @@ -113,7 +113,6 @@ Note: These weights (version 1.1) are retrained compared to paper ([version 1.0] ### Developer - Santiago Estrada : santiago.estrada@dzne.de ### Citation diff --git a/README.md b/README.md index add07eb93..59d65365e 100644 --- a/README.md +++ b/README.md @@ -4,9 +4,11 @@ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deep-MI/FastSurfer/blob/stable/Tutorial/Complete_FastSurfer_Tutorial.ipynb) -# Welcome to FastSurfer! -## Overview +Welcome to FastSurfer! +====================== +Overview +-------- This README contains all information needed to run FastSurfer - a fast and accurate deep-learning based neuroimaging pipeline. FastSurfer provides a fully compatible [FreeSurfer](https://freesurfer.net/) alternative for volumetric analysis (within minutes) and surface-based thickness analysis (within only around 1h run time). FastSurfer is transitioning to sub-millimeter resolution support throughout the pipeline. @@ -15,7 +17,6 @@ The FastSurfer pipeline consists of two main parts for segmentation and surface - the segmentation sub-pipeline (`seg`) employs advanced deep learning networks for fast, accurate segmentation and volumetric calculation of the whole brain and selected substructures. - the surface sub-pipeline (`recon-surf`) reconstructs cortical surfaces, maps cortical labels and performs a traditional point-wise and ROI thickness analysis. - ### Segmentation Modules - approximately 5 minutes (GPU), `--seg_only` only runs this part. @@ -37,7 +38,6 @@ Modules (all run by default): - allows the additional passing of a T2w image with `--t2 `, which will be registered to the T1w image (see `--reg_mode` option). - calculates volume statistics corrected for partial volume effects based on the T1w image (skipped if `--no_bias_field` is passed). - ### Surface reconstruction - approximately 60-90 minutes, `--surf_only` runs only [the surface part](recon_surf/README.md). - supports high-resolution images (up to 0.7mm, experimental beyond that). 
@@ -53,7 +53,8 @@ Notwithstanding module-specific limitations, resolution should be between 1mm an ![](doc/images/teaser.png) -## Getting started +Getting started +--------------- ### Installation There are three ways to run FastSurfer (links are to installation instructions): @@ -65,116 +66,134 @@ There are three ways to run FastSurfer (links are to installation instructions): The images we provide on [DockerHub](https://hub.docker.com/r/deepmi/fastsurfer) conveniently include everything needed for FastSurfer. You will also need a [FreeSurfer license](https://surfer.nmr.mgh.harvard.edu/fswiki/License) file for the [Surface pipeline](#surface-reconstruction). We have detailed per-OS Installation instructions in the [INSTALL.md](doc/overview/INSTALL.md) file. ### Usage - -All installation methods use the `run_fastsurfer.sh` call interface (replace `*fastsurfer-flags*` with [FastSurfer flags](doc/overview/FLAGS.md#required-arguments)), which is the general starting point for FastSurfer. However, there are different ways to call this script depending on the installation, which we explain here: - -1. For container installations, you need to define the hardware and mount the folders with the input (`/data`) and output data (`/output`): - (a) For __singularity__, the syntax is - ``` - singularity exec --nv \ - --no-home \ - -B /home/user/my_mri_data:/data \ - -B /home/user/my_fastsurfer_analysis:/output \ - -B /home/user/my_fs_license_dir:/fs_license \ - ./fastsurfer-gpu.sif \ - /fastsurfer/run_fastsurfer.sh - *fastsurfer-flags* - ``` - The `--nv` flag is needed to allow FastSurfer to run on the GPU (otherwise FastSurfer will run on the CPU). - - The `--no-home` flag tells singularity to not mount the home directory (see [Singularity documentation](doc/overview/SINGULARITY.md#mounting-home) for more info). - - The `-B` flag is used to tell singularity, which folders FastSurfer can read and write to. - - See also __[Example 2](doc/overview/EXAMPLES.md#example-2-fastsurfer-singularity)__ for a full singularity FastSurfer run command and [the Singularity documentation](doc/overview/SINGULARITY.md#fastsurfer-singularity-image-usage) for details on more singularity flags. - - (b) For __docker__, the syntax is - ``` - docker run --gpus all \ - -v /home/user/my_mri_data:/data \ - -v /home/user/my_fastsurfer_analysis:/output \ - -v /home/user/my_fs_license_dir:/fs_license \ - --rm --user $(id -u):$(id -g) \ - deepmi/fastsurfer:latest \ - *fastsurfer-flags* - ``` - The `--gpus` flag is needed to allow FastSurfer to run on the GPU (otherwise FastSurfer will run on the CPU). - - The `-v` flag is used to tell docker, which folders FastSurfer can read and write to. +All installation methods use the `run_fastsurfer.sh` call interface (replace the placeholder `<*fastsurfer-flags*>` with [FastSurfer flags](doc/scripts/RUN_FASTSURFER.md#required-arguments)), which is the general starting point for FastSurfer. However, there are different ways to call this script depending on the installation, which we explain here: + +1. For container installations, you need to set up the container (`<*singularity-flags*>` or `<*docker-flags*>`) in addition to the `<*fastsurfer-flags*>`: + 1. For __Singularity__, the syntax is + + ```bash + singularity run <*singularity-flags*> \ + fastsurfer.sif \ + <*fastsurfer-flags*> + ``` + This command has two placeholders for flags: `<*singularity-flags*>` and `<*fastsurfer-flags*>`. 
+ `<*singularity-flags*>` [set up the singularity environment](doc/overview/SINGULARITY.md), `<*fastsurfer-flags*>` include the options that determine the [behavior of FastSurfer](doc/scripts/RUN_FASTSURFER.md): + ### Basic FastSurfer Flags + + - `--t1`: the path to the image to process. + - `--sd`: the path to the "Subjects Directory", where all results will be stored. + - `--sid`: the identifier for the results for this image (folder inside "Subjects Directory"). + - `--fs_license`: path to the FreeSurfer license file. + + All options are explained in detail in the [run_fastsurfer.sh documentation](doc/scripts/RUN_FASTSURFER.md). + + An example of a simple full FastSurfer-Singularity command is + ```bash + singularity run --nv \ + -B $HOME/my/mri_data \ + -B $HOME/my/fastsurfer_analysis \ + -B /software/freesurfer/license.txt \ + fastsurfer.sif \ + --t1 $HOME/my/mri_data/participant1/image1.nii.gz \ + --sd $HOME/my/fastsurfer_analysis \ + --sid part1_img1 \ + --fs_license /software/freesurfer/license.txt + ``` + + See also __[Example 1](doc/overview/EXAMPLES.md#example-1-fastsurfer-singularity-or-apptainer)__ for a full singularity FastSurfer run command and [the Singularity documentation](doc/overview/SINGULARITY.md#fastsurfer-singularity-image-usage) for details on more singularity flags and how to create the `fastsurfer.sif` file. + + 2. For __docker__, the syntax is + ```bash + docker run <*docker-flags*> \ + deepmi/fastsurfer:-v \ + <*fastsurfer-flags*> + ``` + + The options for `<*docker-flags*>` and [`<*fastsurfer-flags*>`](README.md#basic-fastsurfer-flags) follow very similar patterns to those for Singularity ([but the names of `<*docker-flags*>` are different](Docker/README.md#docker-flags)). - See also __[Example 1](doc/overview/EXAMPLES.md#example-1-fastsurfer-docker)__ for a full FastSurfer run inside a Docker container and [the Docker documentation](tools/Docker/README.md#docker-flags) for more details on the docker flags including `--rm` and `--user`. + __[Example 2](doc/overview/EXAMPLES.md#example-2-fastsurfer-docker)__ also details a full FastSurfer run inside a Docker container; see [the Docker documentation](tools/Docker/README.md#docker-flags) for more details on `<*docker-flags*>` and the naming of docker images (`-v`). -2. For a __native install__, you need to activate your FastSurfer environment (e.g. `conda activate fastsurfer_gpu`) and make sure you have added the FastSurfer path to your `PYTHONPATH` variable, e.g. `export PYTHONPATH=$(pwd)`. +2. For a __macOS package install__, start FastSurfer from Applications and call the `run_fastsurfer.sh` FastSurfer script with [FastSurfer flags](doc/scripts/RUN_FASTSURFER.md#required-arguments) from the terminal that is opened for you. - You will then be able to run fastsurfer with `./run_fastsurfer.sh *fastsurfer-flags*`. +3. For a __native install__, call the `run_fastsurfer.sh` FastSurfer script directly. Your FastSurfer python/conda environment needs to be [set up](doc/overview/INSTALL.md#native-ubuntu-2004-or-ubuntu-2204) and activated. - See also [Example 3](doc/overview/EXAMPLES.md#example-3-native-fastsurfer-on-subjectx-with-parallel-processing-of-hemis) for an illustration of the commands to run the entire FastSurfer pipeline (FastSurferCNN + recon-surf) natively. + ```bash + # activate fastsurfer environment + conda activate fastsurfer + + /path/to/fastsurfer/run_fastsurfer.sh <*fastsurfer-flags*> + ``` - -### FastSurfer_Flags -Please refer to [FASTSURFER_FLAGS](doc/overview/FLAGS.md).
+ In addition to the [Basic Flags](README.md#basic-fastsurfer-flags), note that you may need to use `--py python3.12` to specify your python version; see [FastSurfer flags for more details](doc/scripts/RUN_FASTSURFER.md#required-arguments). -## Examples -All the examples can be found here: [FASTSURFER_EXAMPLES](doc/overview/EXAMPLES.md) -- [Example 1: FastSurfer Docker](doc/overview/EXAMPLES.md#example-1-fastsurfer-docker) -- [Example 2: FastSurfer Singularity](doc/overview/EXAMPLES.md#example-2-fastsurfer-singularity) + [Example 3](doc/overview/EXAMPLES.md#example-3-native-fastsurfer-on-subjectx-with-parallel-processing-of-hemis) also illustrates running the FastSurfer pipeline natively. + + +Examples +-------- +The documentation includes [6 detailed Examples](doc/overview/EXAMPLES.md) on how to use FastSurfer. +- [Example 1: FastSurfer Singularity](doc/overview/EXAMPLES.md#example-1-fastsurfer-singularity-or-apptainer) +- [Example 2: FastSurfer Docker](doc/overview/EXAMPLES.md#example-2-fastsurfer-docker) - [Example 3: Native FastSurfer on subjectX with parallel processing of hemis](doc/overview/EXAMPLES.md#example-3-native-fastsurfer-on-subjectx-with-parallel-processing-of-hemis) - [Example 4: FastSurfer on multiple subjects](doc/overview/EXAMPLES.md#example-4-fastsurfer-on-multiple-subjects) - [Example 5: Quick Segmentation](doc/overview/EXAMPLES.md#example-5-quick-segmentation) - [Example 6: Running FastSurfer on a SLURM cluster via Singularity](doc/overview/EXAMPLES.md#example-6-running-fastsurfer-on-a-slurm-cluster-via-singularity) - -## Output files - +Output files +------------ Modules output can be found here: [FastSurfer_Output_Files](doc/overview/OUTPUT_FILES.md) - [Segmentation module](doc/overview/OUTPUT_FILES.md#segmentation-module) +- [Corpus Callosum module](doc/overview/OUTPUT_FILES.md#corpus-callosum-module) - [Cerebnet module](doc/overview/OUTPUT_FILES.md#cerebnet-module) - [HypVINN module](doc/overview/OUTPUT_FILES.md#hypvinn-module) -- [Corpus Callosum module](doc/overview/OUTPUT_FILES.md#corpus-callosum-module) - [Surface module](doc/overview/OUTPUT_FILES.md#surface-module) -## System Requirements +System Requirements +------------------- -**Recommendation: At least 8 GB system memory and 8 GB NVIDIA graphics memory** +### Recommendation +- Intel or AMD CPU (6 or more cores) +- 16 GB system memory +- NVIDIA graphics card (2016 or newer) +- 12 GB graphics memory -### Minimum Requirements: +FastSurfer supports multiple hardware acceleration modes: fully CPU (`--device cpu`), partial GPU +(`--device cuda --viewagg_device cpu`) and fully GPU (`--device cuda`). By default, FastSurfer will try to pick the best +option. These modes require different system and video memory capacities; see the table below. -| | --viewagg_device | Min GPU (in GB) | Min CPU (in GB) | -|:------|------------------|----------------:|----------------:| -| 1mm | gpu | 5 | 5 | -| 1mm | cpu | 2 | 7 | -| 0.7mm | gpu | 8 | 6 | -| 0.7mm | cpu | 3 | 9 | -| 0.7mm | --device cpu | 0 | 9 | +| Voxel size | mode: fully CPU | mode: partial GPU | mode: fully GPU | +|:-----------|---------------------------|-----------------------------------------|:----------------------| +| 1mm | system memory (RAM): 8 GB | RAM: 8 GB, graphics memory (VRAM): 2 GB | RAM: 8 GB, VRAM: 6 GB | +| 0.8mm | RAM: 8 GB | RAM: 8 GB, VRAM: 2 GB | RAM: 8 GB, VRAM: 8 GB | +| 0.7mm | RAM: 16 GB | RAM: 16 GB, VRAM: 3 GB | RAM: 8 GB, VRAM: 8 GB | The default device is the GPU.
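For example, the acceleration mode can be selected explicitly with the flags mentioned above (a minimal sketch; `<*fastsurfer-flags*>` stands for the remaining options of your run):

```bash
# fully CPU: run everything on the CPU
./run_fastsurfer.sh <*fastsurfer-flags*> --device cpu

# partial GPU: run the networks on the GPU, but the view aggregation on the CPU
./run_fastsurfer.sh <*fastsurfer-flags*> --device cuda --viewagg_device cpu
```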
The view-aggregation device can be switched to CPU and requires less GPU memory. CPU-only processing `--device cpu` is much slower and not recommended. -## Expert usage +Expert usage +------------ Individual modules and the surface pipeline can be run independently of the full pipeline script. This is documented in READMEs in subfolders, for example: [whole brain segmentation only with FastSurferVINN](FastSurferCNN/README.md), [cerebellum sub-segmentation](CerebNet/README.md), [hypothalamic sub-segmentation](HypVINN/README.md), [corpus callosum analysis](CorpusCallosum/README.md) and [surface pipeline only (recon-surf)](recon_surf/README.md). Specifically, the segmentation modules feature options for optimized parallelization of batch processing. - -## FreeSurfer Downstream Modules - +FreeSurfer Downstream Modules +----------------------------- FreeSurfer provides several Add-on modules for downstream processing, such as subfield segmentation ( [hippocampus/amygdala](https://surfer.nmr.mgh.harvard.edu/fswiki/HippocampalSubfieldsAndNucleiOfAmygdala), [brainstem](https://surfer.nmr.mgh.harvard.edu/fswiki/BrainstemSubstructures), [thalamus](https://freesurfer.net/fswiki/ThalamicNuclei) and [hypothalamus](https://surfer.nmr.mgh.harvard.edu/fswiki/HypothalamicSubunits) ) as well as [TRACULA](https://surfer.nmr.mgh.harvard.edu/fswiki/Tracula). We now provide symlinks to the required files, as FastSurfer creates them with a different name (e.g. using "mapped" or "DKT" to make clear that these files are from our segmentation using the DKT Atlas protocol, and mapped to the surface). Most subfield segmentations require `wmparc.mgz` and work very well with FastSurfer, so feel free to run those pipelines after FastSurfer. TRACULA requires `aparc+aseg.mgz` which we now link, but have not tested if it works, given that [DKT-atlas](https://mindboggle.readthedocs.io/en/latest/labels.html) merged a few labels. You should source FreeSurfer 7.3.2 to run these modules. -## Want to know more? - +Want to know more? +------------------ The DeepMI lab hosts an annual **FastSurfer course** at the German Center for Neurodegenerative Diseases in Bonn, Germany. This is a 2.5-day, hands-on, introductory course on state-of-the-art deep-learning methods for fast and reliable neuroimage analysis. Participants will gain an understanding of modern methods for the analysis of structural brain images, learn how to run both the FastSurfer and FreeSurfer packages, and will know how to set up an analysis and work with the resulting outputs in the context of their own research projects. The course consists of lectures, demonstrations, practical exercises, and provides ample opportunities for discussions and informal exchange. The course typically takes place in **September**. Check out our [website](https://deep-mi.org/events) for details and current information! - -## Intended Use - +Intended Use +------------ This software can be used to compute statistics from an MR image for research purposes. Estimates can be used to aggregate population data, compare groups etc. The data should not be used for clinical decision support in individual cases and, therefore, does not benefit the individual patient. Be aware that for a single image, produced results may be unreliable (e.g. due to head motion, imaging artefacts, processing errors etc). We always recommend performing visual quality checks on your data, as your MR-sequence may also differ from the ones that we tested.
No contributor shall be liable to any damages, see also our software [LICENSE](LICENSE). -## References - +References +---------- If you use this for research publications, please cite: _Henschel L, Conjeti S, Estrada S, Diers K, Fischl B, Reuter M, FastSurfer - A fast and accurate deep learning based neuroimaging pipeline, NeuroImage 219 (2020), 117012. https://doi.org/10.1016/j.neuroimage.2020.117012_ @@ -188,8 +207,8 @@ _Estrada S, Kuegler D, Bahrami E, Xu P, Mousa D, Breteler MMB, Aziz NA, Reuter M Stay tuned for updates and follow us on [X/Twitter](https://twitter.com/deepmilab). -## Acknowledgements - +Acknowledgements +---------------- This project is partially funded by: - [Chan Zuckerberg Initiative](https://chanzuckerberg.com/eoss/proposals/fastsurfer-ai-based-neuroimage-analysis-package/) - [German Federal Ministry of Education and Research](https://www.gesundheitsforschung-bmbf.de/de/deepni-innovative-deep-learning-methoden-fur-die-rechnergestutzte-neuro-bildgebung-10897.php) diff --git a/Tutorial/README.md b/Tutorial/README.md index 200721eb4..2487860d6 100644 --- a/Tutorial/README.md +++ b/Tutorial/README.md @@ -1,45 +1,69 @@ -# FastSurfer Tutorial +FastSurfer Tutorial +=================== [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deep-MI/FastSurfer/blob/stable/Tutorial/Tutorial_FastSurferCNN_QuickSeg.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deep-MI/FastSurfer/blob/stable/Tutorial/Complete_FastSurfer_Tutorial.ipynb) -## Overview -This repository contains two google colab files which illustrate how to run and install FastSurfer. In order to use the notebooks, simply click on the link or optimally the google colab icon displayed at the top of the page. This way, the plots will be rendered correctly. If you have a Google account, you can interactively execute the run cells. Without a google account you can see the files and outputs generated by the last run. - -## Notebook 1 - Quick and Easy - FastSurfer Segmentation with three clicks -__[Notebook 1](https://colab.research.google.com/github/Deep-MI/FastSurfer/blob/stable/Tutorial/Tutorial_FastSurferCNN_QuickSeg.ipynb)__ contains a super quick and easy scenario in which you can run FastSurferCNN in just three clicks. You do not need any programming experience to get a segmentation in less than 60 s! - -## Notebook 2 - Complete FastSurfer Tutorial -__[Notebook 2](https://colab.research.google.com/github/Deep-MI/FastSurfer/blob/stable/Tutorial/Complete_FastSurfer_Tutorial.ipynb)__ is an extended version of the first one with information about how to set up FastSurfer on your local machine. It includes detailed installation instructions as well as examples of how to visualize and quality control your data. +Overview +-------- +This repository contains two google colab files which illustrate how to run and install FastSurfer. In order to use the +notebooks, simply click on the link or optimally the google colab icon displayed at the top of the page. This way, the +plots will be rendered correctly. If you have a Google account, you can interactively execute the run cells. Without a +google account you can see the files and outputs generated by the last run. 
+ +Notebook 1 - Quick and Easy - FastSurfer Segmentation with three clicks +----------------------------------------------------------------------- +__[Notebook 1](https://colab.research.google.com/github/Deep-MI/FastSurfer/blob/stable/Tutorial/Tutorial_FastSurferCNN_QuickSeg.ipynb)__ +contains a super quick and easy scenario in which you can run FastSurferCNN in just three clicks. You do not need any +programming experience to get a segmentation in less than 60 s! + +Notebook 2 - Complete FastSurfer Tutorial +----------------------------------------- +__[Notebook 2](https://colab.research.google.com/github/Deep-MI/FastSurfer/blob/stable/Tutorial/Complete_FastSurfer_Tutorial.ipynb)__ +is an extended version of the first one with information about how to set up FastSurfer on your local machine. It +includes detailed installation instructions as well as examples of how to visualize and quality control your data. After a quick introduction, it covers three use cases: - Use case 1: Quick and Easy - FastSurfer Segmentation with three clicks (same as the first notebook) - Use case 2: Quick and a bit more advanced - Segmentation with FastSurfer on your local machine - Use case 3: Surface models, Thickness maps and more: FastSurfer's recon-surf command -In addition, there is a small section covering [python-qatools](https://github.com/Deep-MI/qatools-python) called "Bonus - Quality analysis using qatools". +In addition, there is a small section covering [python-qatools](https://github.com/Deep-MI/qatools-python) called +"Bonus - Quality analysis using qatools". -## ISMRM 2021 slides -As part of the ISMRM Software Demo 2021, a presentation giving an overview of how to use FastSurfer was created as well. This contains references to the notebooks above as well as some background information on FastSurfer. Feel free to check it out: https://docs.google.com/presentation/d/1xuMFBo-AQMwdH8MplUWgEvn9z0bJk6p1mzi2vm-mKvs/edit?usp=sharing +ISMRM 2021 slides +----------------- +As part of the ISMRM Software Demo 2021, a presentation giving an overview of how to use FastSurfer was created as well. +This contains references to the notebooks above as well as some background information on FastSurfer. Feel free to check +it out: https://docs.google.com/presentation/d/1xuMFBo-AQMwdH8MplUWgEvn9z0bJk6p1mzi2vm-mKvs/edit?usp=sharing -## Requirements -If you want to follow along on your local machine, you need a working installation of FastSurfer. The steps are also covered in the second notebook (Sections B for Use case 1 and 2). In order to follow the installation instructions, some basic requirements have to be met. -In general, you need either a Linux OS, MacOS (+Docker) or Windows (+Docker) to run FastSurfer. Further, the segmentation requires a little less than 10 GB RAM. If you are using Docker on MAC, you have to make sure to adjust the memory settings accordingly (by default, they are limited to 2 GB runtime memory). +Requirements +------------ +If you want to follow along on your local machine, you need a working installation of FastSurfer. The steps are also +covered in the second notebook (Sections B for Use case 1 and 2). In order to follow the installation instructions, some +basic requirements have to be met. In general, you need either a Linux OS, MacOS (+Docker) or Windows (+Docker) to run +FastSurfer. Further, the segmentation requires a little less than 10 GB RAM.
If you are using Docker on Mac, you have to +make sure to adjust the memory settings accordingly (by default, they are limited to 2 GB runtime memory). ### 1. Recommendation - Docker -Docker is an open platform for developing, shipping, and running applications. In a way, it allows the user to create software-packages which you as a user can simply download (or pull) and directly use. These include all dependencies you need for running the application. You do not need to install anything else. See the detailed documentation on [how to install docker](https://docs.docker.com/get-docker/) on your local machine. +Docker is an open platform for developing, shipping, and running applications. In a way, it allows the user to create +software-packages which you as a user can simply download (or pull) and directly use. These include all dependencies you +need for running the application. You do not need to install anything else. See the detailed documentation on +[how to install docker](https://docs.docker.com/get-docker/) on your local machine. ### 2. Local installation - If you decide against using docker, you need either python + pip or anaconda (conda) to install FastSurfer. #### 1. Python + pip -Python 3.10 is generally recommended to run FastSurfer. In addition you will need the package manager pip to install the python dependencies used for FastSurfer (see requirements.txt in the main directory for a list). On Linux, pip is not installed by default. You can install it via +Python 3.10 is generally recommended to run FastSurfer. In addition you will need the package manager pip to install the +python dependencies used for FastSurfer (see requirements.txt in the main directory for a list). On Linux, pip is not +installed by default. You can install it via ```bash sudo apt install python3-pip ``` -If not pre-installed, `setuptools` has to be installed before the contents of requirements.txt. This can be done via apt or pip: +If not pre-installed, `setuptools` has to be installed before the contents of requirements.txt. This can be done via apt +or pip: ```bash sudo apt install python3-setuptools ``` @@ -47,13 +71,16 @@ sudo apt install python3-setuptools pip install setuptools ``` -The optional recon-surf dependency, `scikit-sparse`, can not be installed sequentially with `numpy` (and thus can not be included in the requirements.txt file). Therefore, it should be installed separately, after the requirements.txt install: +The optional recon-surf dependency, `scikit-sparse`, cannot be installed sequentially with `numpy` (and thus cannot be +included in the requirements.txt file). Therefore, it should be installed separately, after the requirements.txt +install: ```bash pip install scikit-sparse==0.4.4 ``` -It is normally recommended to run your set ups in separate virtual environments (like e.g. [pipenv](https://pypi.org/project/pipenv/)/[virtualenv](https://pypi.org/project/virtualenv/)). Or you can use conda. +It is normally recommended to run your setups in separate virtual environments (like conda, +[pipenv](https://pypi.org/project/pipenv/) or [virtualenv](https://pypi.org/project/virtualenv/)). #### 2. Anaconda You can install anaconda via curl with the following command: @@ -63,5 +90,7 @@ curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh sh Miniconda3-latest-Linux-x86_64.sh # and follow the prompts. The defaults are generally good. ``` -You may have to open a new terminal or re-source your ~/.bashrc to get access to the conda command.
See also the documentation for [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html) as well as the section about how to manage [conda environments](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html#managing-environments). - +You may have to open a new terminal or re-source your ~/.bashrc to get access to the conda command. See also the +documentation for [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html) as well as the +section about how to manage +[conda environments](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html#managing-environments). diff --git a/doc/api/CerebNet.datasets.rst b/doc/api/CerebNet.datasets.rst index ee987c526..bdb82d910 100644 --- a/doc/api/CerebNet.datasets.rst +++ b/doc/api/CerebNet.datasets.rst @@ -11,4 +11,3 @@ CerebNet.datasets load_data utils wm_merge_clean - \ No newline at end of file diff --git a/doc/api/FastSurferCNN.models.rst b/doc/api/FastSurferCNN.models.rst index 83be62c26..8db169d1d 100644 --- a/doc/api/FastSurferCNN.models.rst +++ b/doc/api/FastSurferCNN.models.rst @@ -11,4 +11,4 @@ FastSurferCNN.models losses networks sub_module - + diff --git a/doc/conf.py b/doc/conf.py index 0921c6b23..7bed29bc1 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -2,20 +2,19 @@ # # For the full list of built-in configuration values, see the documentation: # https://www.sphinx-doc.org/en/master/usage/configuration.html -import importlib -import io # -- Project information ----------------------------------------------------- # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information +import importlib +import io import sys -import os from pathlib import Path # relative path so sphinx can locate the different modules directly for autosummary -sys.path.append(os.path.dirname(__file__) + "/..") -sys.path.append(os.path.dirname(__file__) + "/../recon_surf") -sys.path.append(os.path.dirname(__file__) + "/sphinx_ext") +sys.path.append(str(Path(__file__).parents[1])) +sys.path.append(str(Path(__file__).parents[1] / "recon_surf")) +sys.path.append(str(Path(__file__).parent / "sphinx_ext")) from resolve_links import LinkCodeResolver from FastSurferCNN.version import main as _version_info, parse_build_file @@ -81,6 +80,17 @@ # create anchors for which headings? myst_heading_anchors = 7 +# myst extensions +myst_enable_extensions = { + "substitution", +} + +# configure substitutions +myst_substitutions = { + # for now, the FASTSURFER_VERSION is hard-coded to 2.4.0 + "FASTSURFER_VERSION": version, +} + templates_path = ["_templates"] exclude_patterns = [ "_build", diff --git a/doc/developer/GIT-HOOKS.md b/doc/developer/GIT-HOOKS.md index 410ddf9b5..0bafa2539 100644 --- a/doc/developer/GIT-HOOKS.md +++ b/doc/developer/GIT-HOOKS.md @@ -1,22 +1,22 @@ git hook setup / CD =================== -The FastSurfer team has developed a pre-commit hook script to help implement +The FastSurfer team has developed a pre-commit hook script to help implement [Continuous Development and Testing](https://en.wikipedia.org/wiki/Continuous_testing). -This CI/CD expands on github workflows executing them locally. They require a local +This CI/CD expands on github workflows executing them locally. They require a local [uv installation](https://docs.astral.sh/uv/getting-started/installation/) as described for FastSurfer's [Native installation](../overview/INSTALL.md#native-ubuntu-2004-or-ubuntu-2204). Pre-commit Hook --------------- - The pre-commit hook script will: 1. 
Check for trailing white spaces in files 2. Run ruff to verify python code formatting is valid 3. Run codespell to check the spelling -4. Run sphinx-build to rebuild the documentation into `FastSurfer/doc-build` - Here, one important caveat for documentation editors is that sphinx-build may fail if the documentation file - structure is changed, without first cleaning the autosummary/autodoc-generated files. To do this, delete the following +4. Run sphinx-build to rebuild the documentation into `FastSurfer/doc-build`. + + Here, one important caveat for documentation editors is that sphinx-build may fail if the documentation file + structure is changed, without first cleaning the autosummary/autodoc-generated files. To do this, delete the following directory `FastSurfer/doc/api/generated`. ### Installation diff --git a/doc/overview/EDITING.md b/doc/overview/EDITING.md index 9e6836c15..25244bf78 100644 --- a/doc/overview/EDITING.md +++ b/doc/overview/EDITING.md @@ -1,58 +1,52 @@ -# Manual Edits +Manual Edits +============ +We have noticed that FastSurfer segmentations and surface results are very robust and rarely require any manual edits. +However, for your convenience, we allow manual edits in various stages of the FastSurfer pipeline to fix errors or inaccuracies in FastSurfer results. These editing options include approaches that are inherited from FreeSurfer as well as some FastSurfer-specific editing options. -We have noticed that FastSurfer segmentations and surface results are very robust and rarely require any manual edits. -However, for your convenience, we allow manual edits in various stages of the FastSurfer pipeline to fix errors or inaccuracies in FastSurfer results. -These editing options include approaches that are inherited from FreeSurfer as well as some FastSurfer-specific editing options. +The provided editing options may be changed or extended in the future, also depending on requests from the community. Furthermore, we invite users to [contribute](../../CONTRIBUTING.md) such changes and/or datasets of paired MRI images and edited files to improve FastSurfer's neural networks. -The provided editing options may be changed or extended in the future, also depending on requests from the community. -Furthermore, we invite users to [contribute](../../CONTRIBUTING.md) such changes and/or datasets of paired MRI images and edited files to improve FastSurfer's neural networks. +What are Edits? +--------------- +Edits are manual interventions into the pipeline that change intermediate results. By "editing" intermediate results, later steps operate on updated information. While rarely necessary, some errors of the pipeline (see below) can be addressed and the quality of results can be improved. -## What are Edits? - -Edits are manual interventions into the pipeline that change intermediate results. -By "editing" intermediate results, later steps operate on updated information. -While rarely necessary, some errors of the pipeline (see below) can be addressed and the quality of results can be improved. - -## What can be edited? - -Edits primarily affect FastSurfer's surface pipeline but some edits exist also during the segmentation steps. -To understand how edits affect different results and how to perform them, it is important to understand the order of processing steps in FastSurfer. - -## Pipeline overview +What can be edited? +------------------- +Edits primarily affect FastSurfer's surface pipeline but some edits exist also during the segmentation steps. 
To understand how edits affect different results and how to perform them, it is important to understand the order of processing steps in FastSurfer. +Pipeline overview +----------------- The FastSurfer pipeline comprises segmentation modules and the surface module. The outputs of the relevant modules are: 1. The **asegdkt module**: - - as the primary output: the whole brain segmentation (default name: `/mri/aparc.DKTatlas+aseg.deep.mgz`) - - as secondary outputs from the whole brain segmentation: - - the aseg without CC segmentation (`/mri/aseg.auto_noCCseg.mgz`) and + - with its primary output, the whole brain segmentation (default filename and path: `/mri/aparc.DKTatlas+aseg.deep.mgz`) + - with its secondary outputs from the whole brain segmentation: + - the aseg without CC segmentation (`/mri/aseg.auto_noCCseg.mgz`) and - the brainmask (`/mri/mask.mgz`). 2. The **biasfield module** uses the white matter segmentation (from aseg) and computes: - a bias field-corrected version of the conformed input image (`/mri/orig_nu.mgz`) and - optionally may also perform a Talairach registration (`/mri/transforms/talairach.(xfm|lta)`). -3. Other segmentation modules (e.g. the hypothalamus segmentation) use the biasfield corrected image as input. +3. Other segmentation modules (e.g. the hypothalamus segmentation) use the biasfield corrected image as input. 4. The **surface module** (**recon-surf**) generates surfaces and stats files based on outputs from the **asegdkt module**, including: 1. a Talairach registration (`/mri/transforms/talairach.(xfm|lta)`), if not already performed before, 2. the WM segmentation (`/mri/wm.mgz`) and filled version (`/mri/filled.mgz`) to initialize surfaces and 3. a brainmask (`/mri/brain.finalsurfs.mgz`) to guide the positioning of the pial surfaces. -## Possible Edits - -FastSurfer supports the following edits: +Possible Edits +-------------- +FastSurfer supports the following edits: 1. [Bias field corrected inputs](#bias-field-correction) (for improved image quality, not really an edit) 2. [asegdkt_segfile](#asegdkt_segfile): `/mri/aparc.DKTatlas+aseg.deep.mgz` via `/mri/aparc.DKTatlas+aseg.deep.manedit.mgz` 3. [Talairach registration](#talairach-registration): `/mri/transforms/talairach.xfm` (overwrites automatic results from `/mri/transforms/talairach.auto.xfm`) 4. [White matter segmentation](#white-matter-segmentation): `/mri/wm.mgz` and `/mri/filled.mgz` 5. [Pial placement](#pial-surface-placement): `/mri/brain.finalsurfs.mgz` via `/mri/brain.finalsurfs.manedit.mgz` -Note, as FastSurfer's surface pipeline is derived from FreeSurfer, some edit options and corresponding naming schemes are inherited from FreeSurfer. +Note, as FastSurfer's surface pipeline is derived from FreeSurfer, some editing options and corresponding naming schemes +are inherited from FreeSurfer. -## General process -Most editing options require that you have processed the case and you are able to access and modify the files in that image's directory, i.e. ``. -Based on a quality inspection of results, segmentations and surfaces, one of the edits (see above) is chosen and performed. -Then, the case is re-processed using the `--edits` flag, i.e. `run_fastsurfer.sh` is re-run (same FastSurfer command and FastSurfer version as before, but add `--edits`). -Finally, you would check in another quality inspection, if the quality issues are resolved. 
+General process +--------------- +Most editing options require that you have processed the case and you are able to access and modify the files in that image's directory, i.e. ``. Based on a quality inspection of results, segmentations and surfaces, one of the edits (see above) is chosen and performed. Then, the case is re-processed using the `--edits` flag, i.e. `run_fastsurfer.sh` is re-run (same FastSurfer command and FastSurfer version as before, but add `--edits`). Finally, you would check in another quality inspection, if the quality issues are resolved. For example (including the setup for native processing, see [Examples](EXAMPLES.md) for other processing options): @@ -63,7 +57,7 @@ export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh # Define data directory -export SUBJECTS_DIR=/home/user/my_fastsurfer_analysis +export SUBJECTS_DIR=$HOME/my_fastsurfer_analysis # Run FastSurfer $FASTSURFER_HOME/run_fastsurfer.sh \ @@ -73,8 +67,7 @@ $FASTSURFER_HOME/run_fastsurfer.sh \ --edits # more flags as needed, e.g. --3T --threads 4 ``` -Note, a re-run of the segmentation pipeline, as in the command above, should not be harmful, but is only required if the [asegdkt_segfile](#asegdkt_segfile) was edited. -Therefore, in most cases, we can skip the segmentation step with +Note, a re-run of the segmentation pipeline, as in the command above, should not be harmful, but is only required if the [asegdkt_segfile](#asegdkt_segfile) was edited. Therefore, in most cases, we can skip the segmentation step with ```bash # Setup FASTSURFER and FREESURFER ... (see above) @@ -86,50 +79,51 @@ $FASTSURFER_HOME/run_fastsurfer.sh \ --edits --surf_only # more flags as needed, e.g. --3T --threads 4 ``` -## Bias field correction - +Bias field correction +--------------------- This edit is "outside" of the FastSurfer pipeline and not really an edit. -### When to use -The *asegdkt module* failed or produced unreliable segmentation maps, especially if bias fields are very strong as for example for 7T images. -This can be detected by inspecting `/mri/aparc.DKTatlas+aseg.deep.mgz`, `/mri/aseg.auto_noCCseg.mgz` and `/mri/mask.mgz`. +### When to use +The *asegdkt module* failed or produced unreliable segmentation maps, especially if bias fields are very strong as for example for 7T images. This can be detected by inspecting `/mri/aparc.DKTatlas+aseg.deep.mgz`, `/mri/aseg.auto_noCCseg.mgz` and `/mri/mask.mgz`. -### What to do -Perform bias field correction prior to segmentation (FastSurfer) and provide the bias corrected image as input. Sometimes, even FastSurfer's own bias field correction can help, which can be found in `/mri/orig_nu.mgz`. -Alternatively, external bias field correction tools may help (e.g. *bias_correct* from SPM). -Run FastSurfer as a new run on this new input file. +### What to do +Perform bias field correction prior to segmentation (FastSurfer) and provide the bias corrected image as input. Sometimes, even FastSurfer's own bias field correction can help, which can be found in `/mri/orig_nu.mgz`. Alternatively, external bias field correction tools may help (e.g. *bias_correct* from SPM). Run FastSurfer as a new run on this new input file. For example: -- Step 1: Run FastSurfer to obtain a bias field corrected image (not needed if you already processed with FastSurfer a first time): - ```bash - # Setup FASTSURFER and FREESURFER ... 
(see above) - - # Run FastSurfer - $FASTSURFER_HOME/run_fastsurfer.sh \ - --sd $SUBJECTS_DIR --sid case_bias_only \ - --t1 /path/to/the/original/T1.mgz \ - --seg_only --no_hypothal --no_cereb --threads 16 +1. Run FastSurfer to obtain a bias field corrected image (not needed if you already processed with FastSurfer a first time): + ```bash + # Setup FASTSURFER and FREESURFER ... (see above) + + # Run FastSurfer + $FASTSURFER_HOME/run_fastsurfer.sh \ + --sd $SUBJECTS_DIR --sid case_bias_only \ + --t1 /path/to/the/original/T1.mgz \ + --seg_only --no_hypothal --no_cereb --threads 16 + ``` +2. Run FastSurfer again, but this time input the bias field corrected image (i.e. `orig_nu.mgz`) instead of original input image. The file `orig_nu.mgz` can be found in the output directory under the *mri* subfolder. The output produced from the second iteration should be saved in a different output directory for comparative analysis with the output produced in first iteration. + ```bash + # Setup FASTSURFER and FREESURFER ... (see above) + + # Run FastSurfer + $FASTSURFER_HOME/run_fastsurfer.sh \ + --sd $SUBJECTS_DIR --sid case_bias_corrected \ + --t1 $SUBJECTS_DIR/case_bias_only/mri/orig_nu.mgz \ + --fs_license $FREESURFER_HOME/.license # more flags as needed, e.g. --3T --threads 4 + ``` +3. Compare and check if bias field correction fixed the issues: + ```bash + freeview $SUBJECTS_DIR/case_bias_only/mri/orig_nu.mgz \ + $SUBJECTS_DIR/case_bias_corrected/mri/aparc.DKTatlas+aseg.deep.mgz \ + $SUBJECTS_DIR/case_bias_only/mri/orig_nu.mgz \ + $SUBJECTS_DIR/case_bias_corrected/mri/aparc.DKTatlas+aseg.deep.mgz ``` -- Step 2: Run FastSurfer again, but this time input the bias field corrected image (i.e. ```orig_nu.mgz```) instead of original input image. The file ```orig_nu.mgz``` can be found in the output directory under the *mri* subfolder. The output produced from the second iteration should be saved in a different output directory for comparative analysis with the output produced in first iteration. - ```bash - # Setup FASTSURFER and FREESURFER ... (see above) - - # Run FastSurfer - $FASTSURFER_HOME/run_fastsurfer.sh \ - --sd $SUBJECTS_DIR --sid case_bias_corrected \ - --t1 $SUBJECTS_DIR/case_bias_only/mri/orig_nu.mgz \ - --fs_license $FREESURFER_HOME/.license # more flags as needed, e.g. --3T --threads 4 - ``` -- Step 3: Compare and check if bias field correction fixed the issues: - ```bash - freeview $SUBJECTS_DIR/case_bias_only/mri/orig_nu.mgz $SUBJECTS_DIR/case_bias_corrected/mri/aparc.DKTatlas+aseg.deep.mgz $SUBJECTS_DIR/case_bias_only/mri/orig_nu.mgz $SUBJECTS_DIR/case_bias_corrected/mri/aparc.DKTatlas+aseg.deep.mgz - ``` -## asegdkt_segfile +asegdkt_segfile +--------------- -### When to use +### When to use (Minor) Segmentation errors such as over- and under-segmentation, of the gray and white matter, but also subcortical structures. -In particular, over- and under-segmentation of the brainmask, gray matter over segmentation into the dura, white matter undersegmentation in the cortex (causing missed gyri or cortical thickness overestimation). +In particular, over- and under-segmentation of the brainmask, gray matter over segmentation into the dura, white matter under-segmentation in the cortex (causing missed gyri or cortical thickness overestimation). In specific, such errors are inspected in `/mri/aparc.DKTatlas+aseg.deep.mgz`, `/mri/aseg.auto_noCCseg.mgz` and `/mri/mask.mgz`. ### What to do @@ -137,25 +131,27 @@ In specific, such errors are inspected in `/mri/aparc.DKTatlas+aseg 2. 
Open `/mri/aparc.DKTatlas+aseg.deep.manedit.mgz` (for example using Freeview) and resolve all errors/quality issues. 3. [Re-run FastSurfer](#general-process) to propagate the changes into other results. Among others, this updates `/mri/aseg.auto_noCCseg.mgz` and `/mri/mask.mgz`. -## Talairach registration +Talairach registration +---------------------- -### When to use +### When to use The estimated total intracranial volume is inaccurate. -### What to do +### What to do 1. Copy `/mri/transforms/talairach.auto.xfm` to `/mri/transforms/talairach.xfm`, replacing the symlink that is already there. 2. Edit `/mri/transforms/talairach.xfm` to have the intended talairach matrix, see the [FreeSurfer tutorial for details](https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/Talairach). 3. [Re-run FastSurfer](#general-process) to update the eTIV value in all stats files. See also: -## White matter segmentation +White matter segmentation +------------------------- ### When to use Over- and/or under-segmentation of the white matter: voxels that should be white matter are excluded, or those that should not are included. ### What to do -Often, these errors should be fixed by [asegdkt_segfile](#asegdkt_segfile) `/mri/aparc.DKTatlas+aseg.deep.manedit.mgz`, but if that is not successful: +Often, these errors should be fixed by [asegdkt_segfile](#asegdkt_segfile) `/mri/aparc.DKTatlas+aseg.deep.manedit.mgz`, but if that is not successful: 1. Open and edit `/mri/wm.mgz` and/or `/mri/filled.mgz`. 2. [Re-run FastSurfer](#general-process) to propagate the edits. @@ -163,7 +159,8 @@ The manual label 255 indicates a voxel should be included in the white matter an See also: -## Pial surface placement +Pial surface placement +---------------------- ### When to use Over- and/or under-segmentation of the cortical gray matter: voxels that should be gray matter are excluded, or those that should not are included. ### What to do Often, these errors should be fixed in [asegdkt_segfile](#asegdkt_segfile) `/mri/aparc.DKTatlas+aseg.deep.manedit.mgz`, but if that is not successful: 1. Open and edit `/mri/brain.finalsurfs.manedit.mgz` (overwriting values in `/mri/brain.finalsurfs.mgz`). -2. [Re-run FastSurfer](#general-process) to fix the pial surface. +2. [Re-run FastSurfer](#general-process) to fix the pial surface. The manual label 255 indicates a voxel should be included in the gray matter and a voxel labeled 1 should not. See also: -## Side effects of edits - +Side effects of edits +--------------------- Technically, all edits are changes to the processing pipeline and may cause systematic effects in the analysis. Computational edits (e.g. [Bias Field Correction](#bias-field-correction)) should be integrated for all cases of an analysis. -Other effects are impossible to avoid (edit carefully), often very small and hard to account for. +Other effects are impossible to avoid (edit carefully), often very small and hard to account for. It is recommended to analyze whether files with edits distort results in a specific direction, i.e. how do effect sizes change if manually edited cases are excluded? -## Guide for experienced FreeSurfer users - -The following list enumerates editing options available in FreeSurfer and how to achieve similar edits in FastSurfer.
+Guide for experienced FreeSurfer users +-------------------------------------- +The following list enumerates editing options available in FreeSurfer and how to achieve similar edits in FastSurfer. 1. Skull stripping by editing `/mri/brainmask.mgz` (delta to `brainmask.auto.mgz`) -> [asegdkt_segfile](#asegdkt_segfile) 2. Talairach registration by editing `/mri/transforms/talairach.xfm` (delta to `transforms/talairach.xfm`) -> [Talairach Registration](#talairach-registration) @@ -198,8 +195,8 @@ The following list enumerates editing options available in FreeSurfer and how to 8. In-file edits of the filled Fill white matter segmentation: `/mri/filled.mgz` (delta to `filled.auto.mgz`) -> [asegdkt_segfile](#asegdkt_segfile) or [White Matter Segmentation (filled.mgz)](#white-matter-segmentation) 9. Pial placement correction: `/mri/brain.finalsurfs.manedit.mgz` (delta to `brain.finalsurfs.mgz`) -> [Pial Surface Placement](#pial-surface-placement) -## What NOT to do - +What NOT to do +-------------- Do not edit the following files. These edits may not work as you expect and are considered **undefined behavior**. 1. The mask: `/mri/mask.mgz` 2. The brainmask: `/mri/brainmask.mgz` @@ -207,5 +204,5 @@ Do not edit the following files. These edits may not work as you expect and are 4. Seed points: `/scripts/seed-(pons|cc|lh|rh|ws).crs.man.dat` 5. The downstream subcortical segmentation: `/mri/aseg.presurf.mgz` -We hope that this will help with (some of) your editing needs. -Thanks for using FastSurfer. +We hope that this will help with (some of) your editing needs. +Thanks for using FastSurfer. diff --git a/doc/overview/EXAMPLES.md b/doc/overview/EXAMPLES.md index 930ee33fb..a6598cdfd 100644 --- a/doc/overview/EXAMPLES.md +++ b/doc/overview/EXAMPLES.md @@ -1,92 +1,100 @@ -# Examples - -## Example 1: FastSurfer Docker -After pulling one of our images from Dockerhub, you do not need to have a separate installation of FreeSurfer on your computer (it is already included in the Docker image). However, if you want to run ___more than just the segmentation CNN___, you need to [register at the FreeSurfer website](https://surfer.nmr.mgh.harvard.edu/registration.html) to acquire a valid license for free. The directory containing the license needs to be mounted and passed to the script via the `--fs_license` flag. Basically for Docker (as for Singularity below) you are starting a container image (with the run command) and pass several parameters for that, e.g. if GPUs will be used and mounting (linking) the input and output directories to the inside of the container image. In the second half of that call you pass parameters to our `run_fastsurfer.sh` script that runs inside the container (e.g. where to find the FreeSurfer license file, and the input data and other flags). - -To run FastSurfer on a given subject using the provided GPU-Docker, execute the following command: +Examples +======== +Example 1: FastSurfer Singularity (or Apptainer) +------------------------------------------------ +Singularity and Apptainer are alternative containerization solutions. Both have open-source distributions and are often +available in HPC settings. See our [Singularity docs](SINGULARITY.md) for more details. + +### Preparation +Build the Singularity image (see below or [these instructions](SINGULARITY.md)). If you intend to generate surfaces, +you need to [register at the FreeSurfer website](https://surfer.nmr.mgh.harvard.edu/registration.html) to acquire a +FreeSurfer license (for free). 
This license needs to be passed to FastSurfer via the `--fs_license` flag. If you do not +intend to generate surfaces, it is often not necessary to obtain a FreeSurfer license. ```bash -# 1. get the fastsurfer docker image (if it does not exist yet) -docker pull deepmi/fastsurfer - -# 2. Run command -docker run --gpus all -v /home/user/my_mri_data:/data \ - -v /home/user/my_fastsurfer_analysis:/output \ - -v /home/user/my_fs_license_dir:/fs_license \ - --rm --user $(id -u):$(id -g) deepmi/fastsurfer:latest \ - --fs_license /fs_license/license.txt \ - --t1 /data/subjectX/t1-weighted.nii.gz \ - --sid subjectX --sd /output \ - --3T \ - --threads 4 +# Build the singularity image (if it does not exist) +singularity build fastsurfer-gpu.sif docker://deepmi/fastsurfer ``` -Docker Flags: -* The `--gpus` flag is used to allow Docker to access GPU resources. With it, you can also specify how many GPUs to use. In the example above, _all_ will use all available GPUS. To use a single one (e.g. GPU 0), set `--gpus device=0`. To use multiple specific ones (e.g. GPU 0, 1 and 3), set `--gpus 'device=0,1,3'`. -* The `-v` commands mount your data, output, and directory with the FreeSurfer license file into the docker container. Inside the container these are visible under the name following the colon (in this case /data, /output, and /fs_license). -* The `--rm` flag takes care of removing the container once the analysis finished. -* The `--user $(id -u):$(id -g)` part automatically runs the container with your group- `id -g` and user-id `id -u`. All generated files will then belong to the specified user. Setting a user id is required! Running docker as root is discouraged. Note, that in the **rootless mode**, the operating system implements the translation to *your* user- and group id. Therefore, for rootless mode, you must set `--user 0` and add the FastSurfer `--allow_root` flag! - -FastSurfer Flag: -* The `--fs_license` points to your FreeSurfer license which needs to be available on your computer in the my_fs_license_dir that was mapped above. -* The `--t1` points to the t1-weighted MRI image to analyse (full path, with mounted name inside docker: /home/user/my_mri_data => /data) -* The `--sid` is the subject ID name (output folder name) -* The `--sd` points to the output directory (its mounted name inside docker: /home/user/my_fastsurfer_analysis => /output) -* The `--3T` changes the atlas for registration to the 3T atlas for better Talairach transforms and ICV estimates (eTIV) -* The `--threads` tells FastSurfer to use that many threads in segmentation and surface reconstruction. `max` will auto-detect the number of threads available, i.e. `16` on an 8-core system with hypterthreading. If the number of threads is greater than 1, FastSurfer will process the left and right hemispheres in parallel. - -Note, that the paths following `--fs_license`, `--t1`, and `--sd` are __inside__ the container, not global paths on your system, so they should point to the places where you mapped these paths above with the `-v` arguments (part after colon). - -A directory with the name as specified in `--sid` (here subjectX) will be created in the output directory if it does not exist. So in this example output will be written to /home/user/my_fastsurfer_analysis/subjectX/ . Make sure the output directory is empty, to avoid overwriting existing files. 
- -If you do not have a GPU, you can also run our CPU-Docker by dropping the `--gpus all` flag and specifying `--device cpu` at the end as a FastSurfer flag, see also [FastSurfer's docker documentation](../../tools/Docker/README.md) for more details. - -## Example 2: FastSurfer Singularity -After building the Singularity image (see below or [these instructions](SINGULARITY.md)), you also need to register at the FreeSurfer website (https://surfer.nmr.mgh.harvard.edu/registration.html) to acquire a valid license (for free) - same as when using Docker. This license needs to be passed to the script via the `--fs_license` flag. This is not necessary if you want to run the segmentation only. - -To run FastSurfer on a given subject using the Singularity image with GPU access, execute the following commands from a directory where you want to store singularity images. This will create a singularity image from our Dockerhub image and execute it: +### Running FastSurfer +To run FastSurfer on a given subject using the Singularity image with GPU access, execute the following commands from a +directory where you want to store singularity images. This will create a singularity image from our Dockerhub image and +execute it: ```bash -# 1. Build the singularity image (if it does not exist) -singularity build fastsurfer-gpu.sif docker://deepmi/fastsurfer - -# 2. Run command singularity exec --nv \ - --no-home \ - -B /home/user/my_mri_data:/data \ - -B /home/user/my_fastsurfer_analysis:/output \ - -B /home/user/my_fs_license_dir:/fs_license \ + --no-mount home,cwd -e \ + -B $HOME/my_mri_data:$HOME/my_mri_data \ + -B $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \ + -B $HOME/my_fs_license.txt:$HOME/my_fs_license.txt \ ./fastsurfer-gpu.sif \ - /fastsurfer/run_fastsurfer.sh \ - --fs_license /fs_license/license.txt \ - --t1 /data/subjectX/t1-weighted.nii.gz \ - --sid subjectX --sd /output \ - --3T \ - --threads 4 + /fastsurfer/run_fastsurfer.sh \ + --fs_license $HOME/my_fs_license.txt \ + --t1 $HOME/my_mri_data/subjectX/t1-weighted.nii.gz \ + --sid subjectX --sd $HOME/my_fastsurfer_analysis \ + --3T \ + --threads 4 ``` ### Singularity Flags -* The `--nv` flag is used to access GPU resources. +* The `--nv` flag is used to access GPU resources. * The `--no-home` flag stops mounting your home directory into singularity. -* The `-B` commands mount your data, output, and directory with the FreeSurfer license file into the Singularity container. Inside the container these are visible under the name following the colon (in this case /data, /output, and /fs_license). +* The `-B` commands mount your data, output, and directory with the FreeSurfer license file into the Singularity container. Inside the container these are visible under the name following the colon (in this case /data, /output, and /fs_license). ### FastSurfer Flags -* The `--fs_license` points to your FreeSurfer license which needs to be available on your computer in the my_fs_license_dir that was mapped above. -* The `--t1` points to the t1-weighted MRI image to analyse (full path, with mounted name inside docker: /home/user/my_mri_data => /data) +* The `--fs_license` points to your FreeSurfer license which needs to be available on your computer in the my_fs_license_dir that was mapped above. 
+* The `--t1` points to the t1-weighted MRI image to analyse (full path, must be mounted via `-B`)
* The `--sid` is the subject ID name (output folder name)
-* The `--sd` points to the output directory (its mounted name inside docker: /home/user/my_fastsurfer_analysis => /output)
+* The `--sd` points to the output directory (must be mounted via `-B`)
* The `--3T` changes the atlas for registration to the 3T atlas for better Talairach transforms and ICV estimates (eTIV)
-* The `--threads` tells FastSurfer to use that many threads in segmentation and surface reconstruction. `max` will auto-detect the number of threads available, i.e. `16` on an 8-core system with hypterthreading. If the number of threads is greater than 1, FastSurfer will process the left and right hemispheres in parallel.
+* The `--threads` tells FastSurfer to use that many threads in segmentation and surface reconstruction. `max` will auto-detect the number of threads available, i.e. `16` on an 8-core system with hyperthreading. If the number of threads is greater than 1, FastSurfer will process the left and right hemispheres in parallel.

 Note, that the paths following `--fs_license`, `--t1`, and `--sd` are __inside__ the container, not global paths on your system, so they should point to the places where you mapped these paths above with the `-v` arguments (part after colon).

-A directory with the name as specified in `--sid` (here subjectX) will be created in the output directory. So in this example output will be written to /home/user/my_fastsurfer_analysis/subjectX/ . Make sure the output directory is empty, to avoid overwriting existing files.
+A directory with the name as specified in `--sid` (here subjectX) will be created in the output directory. So in this example output will be written to `$HOME/my_fastsurfer_analysis/subjectX/`. Make sure the output directory is empty, to avoid overwriting existing files.
+
+If you have no supported GPU, most Singularity images should automatically work (they default to the CPU, just drop the `--nv` flag). Since execution on the CPU requires fewer drivers and libraries, a smaller, CPU-only image is also available: `singularity build fastsurfer-cpu.sif docker://deepmi/fastsurfer:cpu-latest`.
+
+Example 2: FastSurfer Docker
+----------------------------
+After pulling one of our images from Dockerhub, you do not need to have a separate installation of FreeSurfer on your computer (it is already included in the Docker image). However, if you want to run ___more than just the segmentation CNN___, you need to [register at the FreeSurfer website](https://surfer.nmr.mgh.harvard.edu/registration.html) to acquire a valid license for free. The directory containing the license needs to be mounted and passed to the script via the `--fs_license` flag. Basically, for Docker (as for Singularity above) you start a container image (with the run command) and pass several parameters to it, e.g. whether GPUs will be used, and mount (link) the input and output directories into the container image. In the second half of that call you pass parameters to our `run_fastsurfer.sh` script that runs inside the container (e.g. where to find the FreeSurfer license file, the input data, and other flags).
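+
+If you have not downloaded a FastSurfer image yet, you can pull it first (a short sketch; `latest` is the default CUDA-enabled image, other tags are listed on [Dockerhub](https://hub.docker.com/r/deepmi/fastsurfer/tags)):
+
+```bash
+# one-time download of the default GPU-enabled FastSurfer image
+docker pull deepmi/fastsurfer:latest
+```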
+
+To run FastSurfer on a given subject using the provided GPU-Docker, execute the following command:
+
+```bash
+docker run --gpus all -v $HOME/my_mri_data:$HOME/my_mri_data \
+           -v $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \
+           -v $HOME/my_fs_license.txt:$HOME/my_fs_license.txt \
+           --rm --user $(id -u):$(id -g) deepmi/fastsurfer:latest \
+           --fs_license $HOME/my_fs_license.txt \
+           --t1 $HOME/my_mri_data/subjectX/t1-weighted.nii.gz \
+           --sid subjectX --sd $HOME/my_fastsurfer_analysis \
+           --3T \
+           --threads 4
+```
+
+### Docker Flags
+* The `--gpus` flag is used to allow Docker to access GPU resources. With it, you can also specify how many GPUs to use. In the example above, _all_ will use all available GPUs. To use a single one (e.g. GPU 0), set `--gpus device=0`. To use multiple specific ones (e.g. GPU 0, 1 and 3), set `--gpus 'device=0,1,3'`. If you do not have a supported GPU, just drop this flag to use the CPU.
+* The `-v` commands mount your data, your output directory, and the FreeSurfer license file into the docker container. Inside the container they are visible under the path following the colon; in this example each host path is mounted to the identical path inside the container, so the same `$HOME/...` paths work on both sides.
+* The `--rm` flag takes care of removing the container once the analysis has finished.
+* The `--user $(id -u):$(id -g)` part automatically runs the container with your group-id (`id -g`) and user-id (`id -u`). All generated files will then belong to the specified user. Without the flag, the docker container will return an error. If running the container as root is required (despite being against best practice), for example because it is run in a sandbox, pass `--user 0:0`.

-You can run the Singularity equivalent of CPU-Docker by building a Singularity image from the CPU-Docker image and excluding the `--nv` argument in your Singularity exec command. Also append `--device cpu` as a FastSurfer flag.
+### Docker image
+* This command assumes you want to use the most recent (locally cached) version of FastSurfer `deepmi/fastsurfer:latest`. This will always include current NVIDIA drivers and libraries.
+* Images with older libraries, with AMD drivers, or with a smaller, CPU-only setup are available in [multiple configurations](https://hub.docker.com/r/deepmi/fastsurfer/tags).
+### FastSurfer Flags
+* The `--fs_license` points to your FreeSurfer license which needs to be available on your computer; replace all occurrences of `$HOME/my_fs_license.txt` with its actual location (full path, must be mounted via `-v :`).
+* The `--t1` points to the t1-weighted MRI image to analyse (full path, must be mounted via `-v :`)
+* The `--sid` is the subject ID name (output folder name)
+* The `--sd` points to the output directory (must be mounted via `-v :`)
+* The `--3T` changes the atlas for registration to the 3T atlas for better Talairach transforms and ICV estimates (eTIV)
+* The `--threads` tells FastSurfer to use that many threads in segmentation and surface reconstruction. `max` will auto-detect the number of threads available, i.e. `16` on an 8-core system with hyperthreading. If the number of threads is greater than 1, FastSurfer will process the left and right hemispheres in parallel.

-A directory with the name as specified in `--sid` (here subjectX) will be created in the output directory if it does not exist. So in this example output will be written to `$HOME/my_fastsurfer_analysis/subjectX/`. Make sure the output directory is empty, to avoid overwriting existing files.
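+
+If you have no supported GPU, a CPU-only run of the same subject might look like the following sketch (it assumes the `cpu-latest` image tag and the `--device cpu` FastSurfer flag, which selects CPU execution):
+
+```bash
+# CPU-only sketch: no --gpus flag, smaller CPU-only image, --device cpu as a FastSurfer flag
+docker run -v $HOME/my_mri_data:$HOME/my_mri_data \
+           -v $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \
+           -v $HOME/my_fs_license.txt:$HOME/my_fs_license.txt \
+           --rm --user $(id -u):$(id -g) deepmi/fastsurfer:cpu-latest \
+           --fs_license $HOME/my_fs_license.txt \
+           --t1 $HOME/my_mri_data/subjectX/t1-weighted.nii.gz \
+           --sid subjectX --sd $HOME/my_fastsurfer_analysis \
+           --3T --threads 4 --device cpu
+```
+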
+Example 3: Native FastSurfer on subjectX with parallel processing of hemis +-------------------------------------------------------------------------- For a native install you may want to make sure that you are on our stable branch, as the default dev branch is for development and could be broken at any time. For that you can directly clone the stable branch: ```bash @@ -94,7 +102,7 @@ git clone --branch stable https://github.com/Deep-MI/FastSurfer.git ``` More details (e.g. you need all dependencies in the right versions and also FreeSurfer locally) can be found in our [Installation guide](INSTALL.md). -Given you want to analyze data for subject which is stored on your computer under /home/user/my_mri_data/subjectX/t1-weighted.nii.gz, run the following command from the console (do not forget to source FreeSurfer!): +Given you want to analyze data for subject which is stored on your computer under `$HOME/my_mri_data/subjectX/t1-weighted.nii.gz`, run the following command from the console (do not forget to source FreeSurfer!): ```bash # Source FreeSurfer @@ -102,8 +110,8 @@ export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh # Define data directory -datadir=/home/user/my_mri_data -fastsurferdir=/home/user/my_fastsurfer_analysis +datadir=$HOME/my_mri_data +fastsurferdir=$HOME/my_fastsurfer_analysis # Run FastSurfer ./run_fastsurfer.sh --t1 $datadir/subjectX/t1-weighted-nii.gz \ @@ -111,11 +119,11 @@ fastsurferdir=/home/user/my_fastsurfer_analysis --threads 4 --3T ``` -The output will be stored in the $fastsurferdir (including the `aparc.DKTatlas+aseg.deep.mgz` segmentation under `$fastsurferdir/subjectX/mri` (default location)). Processing of the hemispheres will be run in parallel (--threads 4 >= 2) to significantly speed-up surface creation. Omit this flag to run the processing sequentially, e.g. if you want to save resources on a compute cluster. - +The output will be stored in the `$fastsurferdir` (including the `aparc.DKTatlas+aseg.deep.mgz` segmentation under `$fastsurferdir/subjectX/mri` (default location)). Processing of the hemispheres will be run in parallel (`--threads 4`, 4 >= 2) to significantly speed-up surface creation. Omit this flag to run the processing sequentially, e.g. if you want to save resources on a compute cluster. -## Example 4: FastSurfer on multiple subjects +Example 4: FastSurfer on multiple subjects +------------------------------------------ In order to run FastSurfer on multiple cases, you may use the helper script `brun_subjects.sh`. This script accepts multiple ways to define the subjects, for example a subjects_list file. Prepare the subjects_list file as follows (one line subject per line; delimited by `\n`): ``` @@ -125,15 +133,15 @@ subject3=path_to_t1 ... subject10=path_to_t1 ``` -Note, that all paths (`path_to_t1`) are as if you passed them to the `run_fastsurfer.sh` script via `--t1 ` so they may be with respect to the singularity or docker file system. Absolute paths are recommended. +Note, that all paths (`path_to_t1`) are as if you passed them to the `run_fastsurfer.sh` script via `--t1 ` so they may be with respect to the singularity or docker file system. Absolute paths are recommended. 
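+
+For instance, with the container mounts used in the Docker and Singularity calls below (data mounted at `/data`), a hypothetical `subjects_list.txt` could look like this (subject IDs and file names are made up for illustration):
+
+```
+subject1=/data/subject1/t1-weighted.nii.gz
+subject2=/data/subject2/t1-weighted.nii.gz
+subject3=/data/subject3/t1-weighted.nii.gz
+```
+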
The `brun_fastsurfer.sh` script can then be invoked in docker, singularity or on the native platform as follows: ### Docker ```bash -docker run --gpus all -v /home/user/my_mri_data:/data \ - -v /home/user/my_fastsurfer_analysis:/output \ - -v /home/user/my_fs_license_dir:/fs_license \ +docker run --gpus all -v $HOME/my_mri_data:/data \ + -v $HOME/my_fastsurfer_analysis:/output \ + -v $HOME/my_fs_license_dir:/fs_license \ --entrypoint "/fastsurfer/brun_fastsurfer.sh" \ --rm --user $(id -u):$(id -g) deepmi/fastsurfer:latest \ --fs_license /fs_license/license.txt \ @@ -145,9 +153,9 @@ docker run --gpus all -v /home/user/my_mri_data:/data \ ```bash singularity exec --nv \ --no-home \ - -B /home/user/my_mri_data:/data \ - -B /home/user/my_fastsurfer_analysis:/output \ - -B /home/user/my_fs_license_dir:/fs_license \ + -B $HOME/my_mri_data:/data \ + -B $HOME/my_fastsurfer_analysis:/output \ + -B $HOME/my_fs_license_dir:/fs_license \ ./fastsurfer-gpu.sif \ /fastsurfer/brun_fastsurfer.sh \ --fs_license /fs_license/license.txt \ @@ -161,9 +169,9 @@ singularity exec --nv \ export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh -cd /home/user/FastSurfer -datadir=/home/user/my_mri_data -fastsurferdir=/home/user/my_fastsurfer_analysis +cd $HOME/FastSurfer +datadir=$HOME/my_mri_data +fastsurferdir=$HOME/my_fastsurfer_analysis # Run FastSurfer ./brun_fastsurfer.sh --subject_list $datadir/subjects_list.txt \ @@ -174,40 +182,44 @@ fastsurferdir=/home/user/my_fastsurfer_analysis ### Flags The `brun_fastsurfer.sh` script accepts almost all `run_fastsurfer.sh` flags (exceptions are `--t1` and `--sid`). In addition, it has [powerful parallelization options](../scripts/BATCH.md#parallelization-with-brun_fastsurfersh). -## Example 5: Quick Segmentation - +Example 5: Quick Segmentation +----------------------------- For many applications you won't need the surfaces. You can run only the aparc+DKT segmentation (in 1 minute on a GPU) via ```bash ./run_fastsurfer.sh --t1 $datadir/subject1/t1-weighted.nii.gz \ --asegdkt_segfile $outputdir/subject1/aparc.DKTatlas+aseg.deep.mgz \ --conformed_name $outputdir/subject1/conformed.mgz \ + --sd $HOME/my_fastsurfer_analysis \ + --sid subject1 \ --threads 4 --seg_only --no_cereb --no_hypothal ``` This will produce the segmentation in a conformed space (just as FreeSurfer would do). It also writes the conformed image that fits the segmentation. -Conformed means that the image will be isotropic in LIA orientation. +Conformed means that the image will be isotropic in LIA orientation. It will furthermore output a brain mask (`mri/mask.mgz`), a simplified segmentation file (`mri/aseg.auto_noCCseg.mgz`), the biasfield corrected image (`mri/orig_nu.mgz`), and the volume statistics (without eTIV) based on the FastSurferVINN segmentation (without the corpus callosum) (`stats/aseg+DKT.stats`). If you do not even need the biasfield corrected image and the volume statistics, you may add `--no_biasfield`. These steps especially benefit from larger assigned core counts `--threads 32`. -The above ```run_fastsurfer.sh``` commands can also be called from the Docker or Singularity images by passing the flags and adjusting input and output directories to the locations inside the containers (where you mapped them via the -v flag in Docker or -B in Singularity). 
+The above ```run_fastsurfer.sh``` commands can also be called from the Docker or Singularity images by passing the flags and adjusting input and output directories to the locations inside the containers (where you mapped them via the -v flag in Docker or -B in Singularity). ```bash # Docker -docker run --gpus all -v $datadir:/data \ - -v $outputdir:/output \ - --rm --user $(id -u):$(id -g) deepmi/fastsurfer:latest \ - --t1 /data/subject1/t1-weighted.nii.gz \ - --asegdkt_segfile /output/subject1/aparc.DKTatlas+aseg.deep.mgz \ - --conformed_name /output/subject1/conformed.mgz \ - --sd /output \ - --sid subject1 \ - --threads 4 --seg_only --3T +docker run --gpus all \ + -v $HOME/my_mri_data:$HOME/my_mri_data \ + -v $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \ + -v $HOME/my_freesurfer_license.txt:$HOME/my_freesurfer_license.txt \ + --rm --user $(id -u):$(id -g) deepmi/fastsurfer:latest \ + --t1 $HOME/my_mri_data/subject1/t1-weighted.nii.gz \ + --asegdkt_segfile $HOME/my_fastsurfer_analysis/subject1/aparc.DKTatlas+aseg.deep.mgz \ + --conformed_name $HOME/my_fastsurfer_analysis/subject1/conformed.mgz \ + --sd $HOME/my_fastsurfer_analysis \ + --sid subject1 \ + --threads 4 --seg_only --3T --no_cereb --no_hypothal ``` -## Example 6: Running FastSurfer on a SLURM cluster via Singularity - +Example 6: Running FastSurfer on a SLURM cluster via Singularity +---------------------------------------------------------------- Starting with version 2.2, FastSurfer comes with a script that helps orchestrate FastSurfer optimally on a SLURM cluster: `srun_fastsurfer.sh`. This script distributes GPU-heavy and CPU-heavy workloads to different SLURM partitions and manages intermediate files in a work directory for IO performance. @@ -215,8 +227,10 @@ This script distributes GPU-heavy and CPU-heavy workloads to different SLURM par ```bash srun_fastsurfer.sh --partition seg=GPU_Partition \ --partition surf=CPU_Partition \ - --sd $outputdir \ - --data $datadir \ + --sd $HOME/my_fastsurfer_analysis \ + --data $HOME/my_mri_data \ + --pattern */t1-weighted.nii.gz \ + --remove_suffix /t1-weighted.nii.gz \ --singularity_image $HOME/images/fastsurfer-singularity.sif \ [...] # fastsurfer flags ``` @@ -224,4 +238,4 @@ srun_fastsurfer.sh --partition seg=GPU_Partition \ This will create three dependent SLURM jobs, one to segment, one for surface reconstruction and one for cleanup (which moves the data from the work directory to the `$outputdir`). There are many intricacies and options, so it is advised to use `--help`, `--debug` and `--dry` to inspect, what will be scheduled as well as run a test on a small subset. More control over subjects is available with `--subject_list`. -The `$outputdir` and the `$datadir` need to be accessible from cluster nodes. Most IO is performed on a work directory (automatically generated from `$HPCWORK` environment variable: `$HPCWORK/fastsurfer-processing/$(date +%Y%m%d-%H%M%S)`). Alternatively, an empty directory can be manually defined via `--work`. On successful cleanup, this directory will be removed. +The `$HOME/my_mri_data` and the `$HOME/my_fastsurfer_analysis` directories need to be accessible from cluster nodes. Most IO is performed on a work directory (automatically generated from `$HPCWORK` environment variable: `$HPCWORK/fastsurfer-processing/$(date +%Y%m%d-%H%M%S)`). Alternatively, an empty directory can be manually defined via `--work`. On successful cleanup, this directory will be removed to `$HOME/my_fastsurfer_analysis` (defined via `--sd`). 
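+
+Before scheduling a full study, it can help to preview what will be submitted. A minimal sketch that re-uses the flags from the example above together with the `--dry` flag mentioned in the previous paragraph:
+
+```bash
+# inspect what would be scheduled (--dry), as suggested above
+srun_fastsurfer.sh --partition seg=GPU_Partition \
+                   --partition surf=CPU_Partition \
+                   --sd $HOME/my_fastsurfer_analysis \
+                   --data $HOME/my_mri_data \
+                   --pattern */t1-weighted.nii.gz \
+                   --remove_suffix /t1-weighted.nii.gz \
+                   --singularity_image $HOME/images/fastsurfer-singularity.sif \
+                   --dry
+```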
diff --git a/doc/overview/INSTALL.md b/doc/overview/INSTALL.md index 28bcf40cc..16959bee0 100644 --- a/doc/overview/INSTALL.md +++ b/doc/overview/INSTALL.md @@ -1,51 +1,47 @@ -# Installation +Installation +============ +FastSurfer is a pipeline for the segmentation of human brain MRI data. It consists of two main components: the networks for the fast segmentation of an MRI (FastSurferVINN, CerebNet, ...) and the recon_surf script for the efficient creation of surfaces and most files and statistics that also FreeSurfer provides. -FastSurfer is a pipeline for the segmentation of human brain MRI data. It consists of two main components: the networks for the fast segmentation of an MRI (FastSurferVINN, CerebNet, ...) and the recon_surf script for the efficient creation of surfaces and most files and statistics that also FreeSurfer provides. - -The preferred way of installing and running FastSurfer is via Singularity or Docker containers on a Linux host system (with a GPU). We provide pre-build images at Dockerhub for various application cases: i) for only the segmentation (both GPU and CPU), ii) for only the CPU-based recon-surf pipeline, and iii) for the full pipeline (GPU or CPU). +The preferred way of installing and running FastSurfer is via Singularity or Docker containers on a Linux host system (with a GPU). We provide pre-build images at Dockerhub for various application cases: i) for only the segmentation (both GPU and CPU), ii) for only the CPU-based recon-surf pipeline, and iii) for the full pipeline (GPU or CPU). We also provide information on a native install on some operating systems, but since dependencies may vary, this can produce results different from our testing environment and we may not be able to support you if things don't work. Our testing is performed on Ubuntu 22.04 via our provided Docker images. -## Linux - +Linux +----- Recommended System Spec: 8 GB system memory, NVIDIA GPU with 8 GB graphics memory. -Minimum System Spec: 8 GB system memory (this requires running FastSurfer on the CPU only, which is much slower) - -Non-NVIDIA GPU architectures (AMD) are experimental and not officially supported, but seem to work well also. +Minimum System Spec: 8 GB system memory (this requires running FastSurfer on the CPU only, which is much slower) -### Singularity +Non-NVIDIA GPU architectures (AMD) are experimental and not officially supported, but seem to work well also. +### Singularity (or Apptainer) Assuming you have singularity installed already (by a system admin), you can build a Singularity image easily from our Dockerhub images. Run this command from a directory where you want to store singularity images: ```bash singularity build fastsurfer-gpu.sif docker://deepmi/fastsurfer:latest ``` -Additionally, [the Singularity README](SINGULARITY.md) contains detailed directions for building your own Singularity images from Docker. +Additionally, [the Singularity documentation](SINGULARITY.md) contains detailed directions for building your own Singularity images from Docker. -[Example 2](EXAMPLES.md#example-2-fastsurfer-singularity) explains how to run FastSurfer (for the full pipeline you will also need a FreeSurfer .license file!) and you can find details on how to build your own images here: [Docker](../../tools/Docker/README.md) and [Singularity](SINGULARITY.md). +[Example 1](EXAMPLES.md#example-1-fastsurfer-singularity-or-apptainer) explains how to run FastSurfer (for the full pipeline you will also need a FreeSurfer .license file!) 
and you can find details on how to build your own images here: [Docker](../../tools/Docker/README.md) and [Singularity](SINGULARITY.md). ### Docker - This is very similar to Singularity. Assuming you have Docker installed (by a system admin) you just need to pull one of our pre-build Docker images from dockerhub: ```bash docker pull deepmi/fastsurfer:latest ``` -[Example 1](EXAMPLES.md#example-1-fastsurfer-docker) explains how to run FastSurfer (for the full pipeline you will also need a FreeSurfer .license file!) and you can find details on how to [build your own image](../../tools/Docker/README.md). +[Example 2](EXAMPLES.md#example-2-fastsurfer-docker) explains how to run FastSurfer (for the full pipeline you will also need a FreeSurfer .license file!) and you can find details on how to [build your own image](../../tools/Docker/README.md). If you are using the **rootless mode**, you have to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) and follow the [configuration for the rootless mode](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#rootless-mode). Otherwise, running FastSurfer with Docker will give you this error message ```docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]```. ### Native (Ubuntu 20.04 or Ubuntu 22.04) - In a native install you need to install all dependencies (distro packages, FreeSurfer in the supported version, python dependencies) yourself. Here we will walk you through what you need. #### 1. System Packages - You will need a few additional packages that may be missing on your system (for this you need sudo access or ask a system admin): ```bash @@ -63,7 +59,7 @@ sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test sudo apt install -y g++-11 ``` -You also need to have bash-3.2 or higher (check with `bash --version`). +You also need to have bash-3.2 or higher (check with `bash --version`). You also need a working version of python3.10 (we do not support other versions). These packages should be sufficient to install python dependencies and then run the FastSurfer neural network segmentation. If you want to run the full pipeline, you also need a [working installation of FreeSurfer](https://surfer.nmr.mgh.harvard.edu/fswiki/rel7downloads) (including its dependencies and a license file). @@ -71,10 +67,10 @@ If you are using pip, make sure pip is updated as older versions will fail. #### 2. uv for python -We recommend to install uv as your python environment and package manager. [uv](https://docs.astral.sh/uv/) is a very -fast package manager, which makes managing different environments even easier. See -[uv's documentation](https://docs.astral.sh/uv/getting-started/installation/) for more information on installation such -as [autocompletion info](https://docs.astral.sh/uv/getting-started/installation/#shell-autocompletion). +We recommend to install uv as your python environment and package manager. [uv](https://docs.astral.sh/uv/) is a very +fast package manager, which makes managing different environments even easier. See +[uv's documentation](https://docs.astral.sh/uv/getting-started/installation/) for more information on installation such +as [autocompletion info](https://docs.astral.sh/uv/getting-started/installation/#shell-autocompletion). ```bash wget -qO- https://astral.sh/uv/install.sh | sh @@ -91,7 +87,6 @@ cd FastSurfer ``` #### 4. 
Python environment - Create a new environment and install FastSurfer dependencies: ```bash @@ -134,12 +129,11 @@ python FastSurferCNN/download_checkpoints.py --all Once all dependencies are installed, you are ready to run the FastSurfer segmentation-only (!!) pipeline by calling ```./run_fastsurfer.sh --seg_only ....``` , see [Example 3](EXAMPLES.md#example-3-native-fastsurfer-on-subjectx-with-parallel-processing-of-hemis) for command line flags. #### 5. FreeSurfer -To run the full pipeline, you will need to install FreeSurfer (we recommend and support version 7.4.1) according to their [Instructions](https://surfer.nmr.mgh.harvard.edu/fswiki/rel7downloads). There is a freesurfer email list, if you run into problems during this step. +To run the full pipeline, you will need to install FreeSurfer (we recommend and support version 7.4.1) according to their [Instructions](https://surfer.nmr.mgh.harvard.edu/fswiki/rel7downloads). There is a freesurfer email list, if you run into problems during this step. Make sure, the `${FREESURFER_HOME}` environment variable is set, so FastSurfer finds the FreeSurfer binaries. ### AMD GPUs (experimental) - We have successfully run the segmentation on an AMD GPU (Radeon Pro W6600) using ROCm. For this to work you need to make sure you are using a supported (or semi-supported) GPU and the correct kernel version. AMD kernel modules need to be installed on the host system according to ROCm installation instructions and additional groups need to be setup and your user added to them, see https://rocm.docs.amd.com/projects/install-on-linux/en/latest/ . Build the Docker container with ROCm support. @@ -148,7 +142,7 @@ Build the Docker container with ROCm support. python tools/Docker/build.py --device rocm --tag my_fastsurfer:rocm ``` -You will need to add a couple of flags to your docker run command for AMD, see [Example 1](EXAMPLES.md#example-1-fastsurfer-docker) for `**other-docker-flags**` or `**fastsurfer-flags**`: +You will need to add a couple of flags to your docker run command for AMD, see [Example 2](EXAMPLES.md#example-2-fastsurfer-docker) for `**other-docker-flags**` or `*<*fastsurfer-flags*>*`: ```bash docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/kfd \ --device=/dev/dri --group-add video --ipc=host --shm-size 8G \ @@ -157,20 +151,19 @@ docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/ ``` Note, that this docker image is experimental, uses a different Python version and python packages, so results can differ from our validation results. Please do visual QC. -## MacOS - +MacOS +----- Processing on Mac CPUs is possible. On Apple Silicon, you can even use the GPU by passing ```--device mps```. Recommended System Spec: Mac with Apple Silicon M-Chip and 16 GB system memory. -For older Intel CPUs, we only support cpu-only, which will be 2-4 times slower. +For older Intel CPUs, we only support cpu-only, which will be 2-4 times slower. ### Docker (currently only supported for Intel CPUs) - Docker can be used on Intel Macs as it should be similarly fast as a native install there. It would allow you to run the full pipeline. First, install [Docker Desktop for Mac](https://docs.docker.com/get-docker/). -Start it and set Memory to 15 GB under Preferences -> Resources (or the largest you have, if you are below 15GB, it may fail). +Start it and set Memory to 15 GB under Preferences -> Resources (or the largest you have, if you are below 15GB, it may fail). Second, pull one of our Docker containers. 
Open a terminal window and run: @@ -178,12 +171,12 @@ Second, pull one of our Docker containers. Open a terminal window and run: docker pull deepmi/fastsurfer:latest ``` -Continue with the example in [Example 1](EXAMPLES.md#example-1-fastsurfer-docker). +Continue with the example in [Example 2](EXAMPLES.md#example-2-fastsurfer-docker). ### Package #### 1. Requirements -FastSurfer requires pre-installed python3.10+ and bash (at least 3.2). +FastSurfer requires pre-installed python3.10+ and bash (at least 3.2). You can install these via the packet manager brew. To install brew and then python3.10, execute the following in a Terminal: @@ -193,10 +186,10 @@ brew install python@3.10 ``` #### 2. FastSurfer package -From version 2.5 onward, FastSurfer ships a macOS installer package, which you can download from -[github](https://github.com/Deep-MI/FastSurfer/releases/). +From version 2.5 onward, FastSurfer ships a macOS installer package, which you can download from +[github](https://github.com/Deep-MI/FastSurfer/releases/). There are package installers for both the Apple M-chip architecture (`arm64`) and for legacy Intel chips (`x86_64`). -To install, double-click the installer and follow the installer instructions. +To install, double-click the installer and follow the installer instructions. After installation, you can find the FastSurfer applet, its source code, and selected FreeSurfer executables in the `/Applications` folder. @@ -216,10 +209,10 @@ export PYTORCH_ENABLE_MPS_FALLBACK=1 This will be at least twice as fast as `--device cpu`. Currently setting the fallback environment variable is necessary as `aten::max_unpool2d` is not yet implemented for MPS and will fall back to CPU. -## Windows +Windows +------- ### Docker (CPU version) - In order to run FastSurfer on your Windows system using docker make sure that you have: * [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install) * [Docker Desktop](https://docs.docker.com/desktop/install/windows-install/) @@ -232,7 +225,7 @@ After everything is installed, start Windows PowerShell and run the following co docker pull deepmi/fastsurfer:cpu-latest ``` -Now you can run Fastsurfer the same way as described in [Example 1](EXAMPLES.md#example-1-fastsurfer-docker) for the CPU build, for example: +Now you can run Fastsurfer the same way as described in [Example 2](EXAMPLES.md#example-2-fastsurfer-docker) for the CPU build, for example: ```bash docker run -v C:/Users/user/my_mri_data:/data \ -v C:/Users/user/my_fastsurfer_analysis:/output \ @@ -248,7 +241,6 @@ Note, the [system requirements](https://github.com/Deep-MI/FastSurfer#system-req This was tested using Windows 10 Pro version 21H1 and the WSL Ubuntu 20.04 distribution ### Docker (GPU version) - In addition to the requirements from the CPU version, you also need to make sure that you have: * Windows 11 or Windows 10 21H2 or greater, * the latest WSL Kernel or at least 4.19.121+ (5.10.16.3 or later for better performance and functional fixes), @@ -263,7 +255,7 @@ After everything is installed, start Windows PowerShell and run the following co docker pull deepmi/fastsurfer:latest ``` -Now you can run Fastsurfer the same way as described in [Example 1](EXAMPLES.md#example-1-fastsurfer-docker), for example: +Now you can run Fastsurfer the same way as described in [Example 2](EXAMPLES.md#example-2-fastsurfer-docker), for example: ```bash docker run --gpus all \ -v C:/Users/user/my_mri_data:/data \ diff --git a/doc/overview/LONG.md b/doc/overview/LONG.md index 8d61b71e9..e68a8c32e 100644 --- 
a/doc/overview/LONG.md +++ b/doc/overview/LONG.md @@ -1,16 +1,16 @@ -# Longitudinal Processing - +Longitudinal Processing +======================= FastSurfer has a dedicated pipeline to quantify longitudinal changes in T1-weighted MRI. FastSurfer's longitudinal pipeline outperforms independent (cross-sectional) processing of individual MRIs across time in both FastSurfer and FreeSurfer, as well as even the longitudinal pipeline in FreeSurfer. -## What is Longitudinal Processing - +What is Longitudinal Processing? +-------------------------------- In longitudinal studies, MRIs of the same participant are acquired at different time points. Usually, the goal is to quantify potentially subtle anatomical changes representing early disease effects or effects of disease-modifying therapies or drug studies. In these situations, we know that most of the anatomy will be very similar, as compared to cross-sectional differences between participants. Longitudinal processing, as opposed to independent processing of each MRI, tries to make use of the joint information to reduce variance across time, leading to more sensitive estimates of longitudinal changes. This methodological approach leads to increased statistical power to detect subtle changes and, therefore, permits either finding smaller effects or reducing the number of participants needed to detect such an effect - saving time and money. Our paper for the FreeSurfer longitudinal stream (Reuter et al. [2012](https://doi.org/10.1016/j.neuroimage.2012.02.084)) nicely highlights these advantages, such as increased reliability and sensitivity, and describes the general idea. -Generally, the idea is to: +Generally, the idea is to: - Align images across time robustly into an unbiased mid-space (Reuter et al. [2010](https://doi.org/10.1016/j.neuroimage.2010.07.020)). - Construct a template image for each participant (called a within-person template). - Process the template image, e.g. to generate initial WM and GM surfaces. -- Process each time point, initializing or reusing results from the template, yet allowing enough freedom for results to evolve. +- Process each time point, initializing or reusing results from the template, yet allowing enough freedom for results to evolve. This approach is used in FreeSurfer and in FastSurfer and it avoids multiple issues that are inherent to other approaches: - It avoids the introduction of processing bias (Reuter, Fischl [2011](https://doi.org/10.1016/j.neuroimage.2011.02.076)) by treating all time points the same. @@ -18,9 +18,9 @@ This approach is used in FreeSurfer and in FastSurfer and it avoids multiple iss - It is flexible enough not to over-constrain (smooth) longitudinal effects. - It does not enforce or encourage directional temporal changes (e.g. atrophy) and can therefore be used in studying cyclic patterns, or crossover drug studies. -## How to Run Your Data - -We are providing a new entry script, `long_fastsurfer.sh`, to help you process longitudinal data. +How to Run Your Data +-------------------- +We are providing a new entry script, `long_fastsurfer.sh`, to help you process longitudinal data. 
```bash # Setup FASTSURFER and FREESURFER @@ -29,7 +29,7 @@ export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh # Define data directory -export SUBJECTS_DIR=/home/user/my_fastsurfer_analysis +export SUBJECTS_DIR=$HOME/my_fastsurfer_analysis # Run FastSurfer longitudinally $FASTSURFER_HOME/long_fastsurfer.sh \ @@ -40,23 +40,23 @@ $FASTSURFER_HOME/long_fastsurfer.sh \ Here `` is a name you assign to this individual person and will be used in the output directory (`$SUBJECTS_DIR`) for the directory containing the within-subject template (e.g. "`--tid bert`"). The ` ` etc. are the global paths to the input full head T1w images for each time point (do not need to be bias corrected) in nifti or mgz format. The ` ` etc. are the ID names for each time point. Corresponding directories will be created in the output directory (`$SUBJECTS_DIR`), e.g. "`--tpids bert_1 bert_2`". These directories will contain the final results for each time point for downstream analysis. -Note, with a few exceptions, you can add additional flags that can be understood by `run_fastsurfer.sh`, which will be passed through, e.g. the `--3T` when working with 3T images. +Note, with a few exceptions, you can add additional flags that can be understood by `run_fastsurfer.sh`, which will be passed through, e.g. the `--3T` when working with 3T images. The above command will, of course, be slightly different when using your preferred installation in Singularity or Docker. For example, for Singularity: ```bash singularity exec --nv \ --no-mount cwd,home \ - -B /home/user/my_mri_data:/data \ - -B /home/user/my_fastsurfer_analysis:/output \ - -B /home/user/my_fs_license_dir:/fs_license \ + -B $HOME/my_mri_data \ + -B $HOME/my_fastsurfer_analysis \ + -B $HOME/my_fs_license \ ./fastsurfer-gpu.sif \ /fastsurfer/long_fastsurfer.sh \ - --fs_license /fs_license/license.txt \ + --fs_license $HOME/my_fs_license \ --tid \ - --t1s /data/ /data/ ... \ + --t1s $HOME/my_mri_data/ $HOME/my_mri_data/ ... \ --tpids ... \ - --sd /output \ + --sd $HOME/my_fastsurfer_analysis \ --3T ``` @@ -64,9 +64,9 @@ For Docker, this is very similar, but we need to specify the entrypoint explicit ```bash docker run --gpus all --rm --user $(id -u):$(id -g) \ - -v /home/user/my_mri_data:/data \ - -v /home/user/my_fastsurfer_analysis:/output \ - -v /home/user/my_fs_license_dir:/fs_license \ + -v $HOME/my_mri_data:/data \ + -v $HOME/my_fastsurfer_analysis:/output \ + -v $HOME/my_fs_license_dir:/fs_license \ --entrypoint "/fastsurfer/long_fastsurfer.sh" \ deepmi/fastsurfer:latest \ --fs_license /fs_license/license.txt \ @@ -77,18 +77,18 @@ docker run --gpus all --rm --user $(id -u):$(id -g) \ --3T ``` -For speed-up, you can also add ```--parallel_surf max --threads_surf 2``` to run all hemispheres of all time points for the surface module in parallel (if you have enough CPU threads and RAM). Also, sometimes the FreeSurfer license is not called `license.txt` but `.license` (from older FreeSurfer versions). - -Note, FastSurfer does not like it if you pass images with different voxel sizes across time. That should never happen anyway in a longitudinal study, as you would never even think of changing your acquisition sequences, right? That would introduce potential bias due to consistent changes in the way that time points are acquired. If you want to process that kind of data, conform images first (e.g. 
with ```mri_convert --conform```) and beware that you may introduce biases here (maybe account for the acquisition change with a time varying co-variate in your LME statistical model later). +For speed-up, you can also add ```--parallel_surf max --threads_surf 2``` to run all hemispheres of all time points for the surface module in parallel (if you have enough CPU threads and RAM). Also, sometimes the FreeSurfer license is not called `license.txt` but `.license` (from older FreeSurfer versions). -## Single Time Point Cases +Note, FastSurfer does not like it if you pass images with different voxel sizes across time. That should never happen anyway in a longitudinal study, as you would never even think of changing your acquisition sequences, right? That would introduce potential bias due to consistent changes in the way that time points are acquired. If you want to process that kind of data, conform images first (e.g. with ```mri_convert --conform```) and beware that you may introduce biases here (maybe account for the acquisition change with a time varying co-variate in your LME statistical model later). -Sometimes your longitudinal data set contains participants with only one time point, e.g. due to drop-out or QC exclusion. Instead of excluding single-time point cases completely (which may even bias results), you can include them for better statistics. While this obviously will not help to better estimate longitudinal slopes, linear mixed effects models (LMEs), for example, can include single time point data to obtain better estimates of cross-subject variance. +Single Time Point Cases +----------------------- +Sometimes your longitudinal data set contains participants with only one time point, e.g. due to drop-out or QC exclusion. Instead of excluding single-time point cases completely (which may even bias results), you can include them for better statistics. While this obviously will not help to better estimate longitudinal slopes, linear mixed effects models (LMEs), for example, can include single time point data to obtain better estimates of cross-subject variance. -HOWEVER, this requires that you process these cases also through the longitudinal stream! This is very important to ensure that they undergo the same processing steps as data from cases with multiple time points. Only then are the results comparable. The command is the same as above, just specify only the single t1 and the time point ID. It could not be any easier. - -## Behind the Scenes +HOWEVER, this requires that you process these cases also through the longitudinal stream! This is very important to ensure that they undergo the same processing steps as data from cases with multiple time points. Only then are the results comparable. The command is the same as above, just specify only the single t1 and the time point ID. It could not be any easier. +Behind the Scenes +----------------- `long_fastsurfer.sh` is just a helper script and will perform the following individual steps for you: 1. **Template Init**: It will prepare the subject template by calling `long_prepare_template.sh`: ```bash @@ -103,10 +103,10 @@ HOWEVER, this requires that you process these cases also through the longitudina 4. **Long Seg**: Next, the segmentation of each time point, which can theoretically run in parallel with the previous two steps, is performed `run_fastsurfer.sh --sid --long --seg_only ...`, 5. **Long Surf**: Again followed by the surface processing for each time point: `run_fastsurfer.sh --sid --long --surf_only`. 
This step needs to wait until 3. and 4. (for this time point) are finished. In this step, for example, surfaces are initialized with the ones obtained on the template above and only fine-tuned, instead of being recreated from scratch. -Internally, we use `brun_fastsurfer.sh` as a helper script to process multiple time points in parallel (in the LONG steps 4. and 5.). Here, `--parallel_seg` can be passed to `long_fastsurfer.sh` to specify the number of parallel runs during the segmentation step (4), which is usually limited by GPU memory, if run on the GPU. Further, `--parallel_surf` specifies the number of parallel surface runs on the CPU and is most impactful. It can be combined with `--threads_surf 2` (or higher) to switch on parallelization of the two hemispheres in each surface block. - -## Final Statistics +Internally, we use `brun_fastsurfer.sh` as a helper script to process multiple time points in parallel (in the LONG steps 4. and 5.). Here, `--parallel_seg` can be passed to `long_fastsurfer.sh` to specify the number of parallel runs during the segmentation step (4), which is usually limited by GPU memory, if run on the GPU. Further, `--parallel_surf` specifies the number of parallel surface runs on the CPU and is most impactful. It can be combined with `--threads_surf 2` (or higher) to switch on parallelization of the two hemispheres in each surface block. +Final Statistics +---------------- The final results will be located in `$SUBJECTS_DIR/tID1` ... for each time point. These directories will have the same structure as a regular FastSurfer/FreeSurfer output directory. Therefore, you can use the regular downstream analysis tools, e.g. to extract statistics from the stats files. Note that the surfaces are already in vertex correspondence across time for each participant. For group analysis, one would still need to map thickness estimates to the fsaverage spherical template (this is usually done with `mris_preproc`). For longitudinal statistics using the (recommended) linear mixed effects models, see our R toolbox [FS LME R](https://github.com/Deep-MI/fslmer), which can also analyze the mass-univariate situation, e.g. for cortical thickness maps. Alternatively, you can use this Matlab package: [LME Matlab](https://github.com/NeuroStats/lme) and our Matlab tools for time-to-event (survival) analysis: [Survival](https://github.com/NeuroStats/Survival). Note, that followup tools, e.g. Longitudinal Hippocampus and Amydala pipeline, require additional files. These files can be generated by running the [FastSurfer longitudinal outputs script `recon_surf/long_compat_segmentHA.py`](../scripts/long_compat_segmentHA.rst) on top of the longitudinal processing directory. @@ -117,14 +117,15 @@ export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh # Define data directory -export SUBJECTS_DIR=/home/user/my_fastsurfer_analysis +export SUBJECTS_DIR=$HOME/my_fastsurfer_analysis # Run long_compat_segmentHA.py script to create missing files and sym-links python $FASTSURFER_HOME/recon_surf/long_compat_segmentHA.py \ --tid ``` -## References +References +---------- - Reuter, Schmansky, Rosas, Fischl Within-subject template estimation for unbiased longitudinal image analysis. NeuroImage 61(4):1402-1418 @@ -133,8 +134,8 @@ python $FASTSURFER_HOME/recon_surf/long_compat_segmentHA.py \ Avoiding asymmetry-induced bias in longitudinal image processing. 
NeuroImage 57(1):19-21 [https://doi.org/10.1016/j.neuroimage.2011.02.076](https://doi.org/10.1016/j.neuroimage.2011.02.076) -- Reuter, Rosas, Fischl (2010). - Highly accurate inverse consistent registration: a robust approach. +- Reuter, Rosas, Fischl (2010). + Highly accurate inverse consistent registration: a robust approach. NeuroImage 53(4):1181-1196 [https://doi.org/10.1016/j.neuroimage.2012.02.084](https://doi.org/10.1016/j.neuroimage.2012.02.084) - Diers, Reuter diff --git a/doc/overview/OUTPUT_FILES.md b/doc/overview/OUTPUT_FILES.md index bb3c55bbc..52300bec6 100644 --- a/doc/overview/OUTPUT_FILES.md +++ b/doc/overview/OUTPUT_FILES.md @@ -2,7 +2,7 @@ Output files ============ Segmentation module ------------------- -The segmentation module outputs the files shown in the table below. The two primary output files are the `aparc.DKTatlas+aseg.deep.mgz` file, which contains the FastSurfer segmentation of cortical and subcortical structures based on the DKT atlas, and the `aseg+DKT.stats` file, which contains summary statistics for these structures. Note, that the surface model (downstream) corrects these segmentations along the cortex with the created surfaces. So if the surface model is used, it is recommended to use the updated segmentations and stats (see below). +The segmentation module outputs the files shown in the table below. The two primary output files are the `aparc.DKTatlas+aseg.deep.mgz` file, which contains the FastSurfer segmentation of cortical and subcortical structures based on the DKT atlas, and the `aseg+DKT.stats` file, which contains summary statistics for these structures. Note, that the surface model (downstream) corrects these segmentations along the cortex with the created surfaces. So if the surface model is used, it is recommended to use the updated segmentations and stats (see below). | directory | filename | module | description | |:----------|------------------------------|---------|--------------------------------------------------------------------| @@ -66,11 +66,11 @@ If a T2 image is also passed, the following images are created. Surface module -------------- -The surface module is run unless switched off by the `--seg_only` argument. It outputs a large number of files, which generally correspond to the FreeSurfer nomenclature and definition. A selection of important output files is shown in the table below, for the other files, we refer to the [FreeSurfer documentation](https://surfer.nmr.mgh.harvard.edu/fswiki). In general, the "mri" directory contains images, including segmentations, the "surf" folder contains surface files (geometries and vertex-wise overlay data), the "label" folder contains cortical parcellation labels, and the "stats" folder contains tabular summary statistics. Many files are available for the left ("lh") and right ("rh") hemisphere of the brain. Symbolic links are created to map FastSurfer files to their FreeSurfer equivalents, which may need to be present for further processing (e.g., with FreeSurfer downstream modules). +The surface module is run unless switched off by the `--seg_only` argument. It outputs a large number of files, which generally correspond to the FreeSurfer nomenclature and definition. A selection of important output files is shown in the table below, for the other files, we refer to the [FreeSurfer documentation](https://surfer.nmr.mgh.harvard.edu/fswiki). 
In general, the "mri" directory contains images, including segmentations, the "surf" folder contains surface files (geometries and vertex-wise overlay data), the "label" folder contains cortical parcellation labels, and the "stats" folder contains tabular summary statistics. Many files are available for the left ("lh") and right ("rh") hemisphere of the brain. Symbolic links are created to map FastSurfer files to their FreeSurfer equivalents, which may need to be present for further processing (e.g., with FreeSurfer downstream modules). After running this module, some of the initial segmentations and corresponding volume estimates are fine-tuned (e.g., surface-based partial volume correction, addition of corpus callosum labels). Specifically, this concerns the `aseg.mgz `, `aparc.DKTatlas+aseg.mapped.mgz`, `aparc.DKTatlas+aseg.deep.withCC.mgz`, which were originally created by the segmentation module or have earlier versions resulting from that module. -The primary output files are pial, white, and inflated surface files, the thickness overlay files, and the cortical parcellation (annotation) files. The preferred way of assessing this output is the [FreeView](https://surfer.nmr.mgh.harvard.edu/fswiki/FreeviewGuide) software. Summary statistics for volume and thickness estimates per anatomical structure are reported in the stats files, in particular the `aseg.stats`, and the left and right `aparc.DKTatlas.mapped.stats` files. +The primary output files are pial, white, and inflated surface files, the thickness overlay files, and the cortical parcellation (annotation) files. The preferred way of assessing this output is the [FreeView](https://surfer.nmr.mgh.harvard.edu/fswiki/FreeviewGuide) software. Summary statistics for volume and thickness estimates per anatomical structure are reported in the stats files, in particular the `aseg.stats`, and the left and right `aparc.DKTatlas.mapped.stats` files. | directory | filename | module | description | |:----------|----------------------------------------------------------------|---------|----------------------------------------------------------------------------------------------| diff --git a/doc/overview/QUICKSTART.md b/doc/overview/QUICKSTART.md index c01fa132c..c27eee155 100644 --- a/doc/overview/QUICKSTART.md +++ b/doc/overview/QUICKSTART.md @@ -1,7 +1,7 @@ -# Quick Start - -## Singularity or Docker - +Quick Start +=========== +Singularity or Docker +--------------------- For users with a linux workstation with a GPU (8GB) and Singularity (or Docker) installed, running FastSurfer is easy and fast! If you don't have a GPU, it will use the CPU and be quite a bit slower. And if you don't have Docker or Singularity, see below for how to run FastSurfer in the Cloud with Google Collab! @@ -17,13 +17,13 @@ cd fastsurfer-test singularity build fastsurfer-gpu.sif docker://deepmi/fastsurfer:latest # 2. Download an example brain MRI (if you don't have your own) -# If you have your own, copy it to this directory and adjust +# If you have your own, copy it to this directory and adjust # the filename after --t1 below. curl -k https://surfer.nmr.mgh.harvard.edu/pub/data/tutorial_data/buckner_data/tutorial_subjs/140/mri/orig.mgz -o "./140_orig.mgz" # 3. 
Run FastSurfer (full brain segmentation only) singularity exec --nv \ - --no-home \ + --no-mount home,cwd -e \ -B "$PWD" \ ./fastsurfer-gpu.sif \ /fastsurfer/run_fastsurfer.sh \ @@ -32,9 +32,9 @@ singularity exec --nv \ --seg_only --no_biasfield --no_cereb --no_hypothal ``` -That's it, it will run the full brain segmentation. For speed, we switched off the cerebellum and hypothalamic sub-segmentation (would add a couple minutes). +That's it, it will run the full brain segmentation. For speed, we switched off the cerebellum and hypothalamic sub-segmentation (would add a couple minutes). We also switched off the bias field correction, which is used to compute partial volume estimates for the statsfiles, so you might want to switch it on again if you want the volume statistics text file (under ```test-case/stats```). -Also if you need the estimated total intracranial volume for correcting the stats, you would either need to run the surface stream or switch on the Talairach registration with +Also if you need the estimated total intracranial volume for correcting the stats, you would either need to run the surface stream or switch on the Talairach registration with ```--tal_reg``` in the segmentation module. For the full surface stream, just remove the ```--seg_only``` and you need a FreeSurfer license file and pass it into the container, as described in more detail later. For your convenience here is the same procedure using Docker instead of Singularity: @@ -45,7 +45,7 @@ mkdir fastsurfer-test cd fastsurfer-test # 1. Download an example brain MRI (if you don't have your own) -# If you have your own, copy it to this directory and adjust +# If you have your own, copy it to this directory and adjust # the filename after --t1 below. curl -k https://surfer.nmr.mgh.harvard.edu/pub/data/tutorial_data/buckner_data/tutorial_subjs/140/mri/orig.mgz -o "./140_orig.mgz" @@ -63,10 +63,11 @@ You will find the full brain segmentation in ```./test-case/mri/aparc.DKTatlas+a ```bash # Convert mgz to nifti singularity exec --nv \ - --no-home \ + --no-mount home -e \ -B "$PWD" \ ./fastsurfer-gpu.sif \ - nib-convert "$PWD/test-case/mri/aparc.DKTatlas+aseg.deep.mgz" "$PWD/test-case/mri/aparc.DKTatlas+aseg.deep.nii.gz" + nib-convert "$PWD/test-case/mri/aparc.DKTatlas+aseg.deep.mgz" \ + "$PWD/test-case/mri/aparc.DKTatlas+aseg.deep.nii.gz" ``` and find the segmentation in ```./test-case/mri/aparc.DKTatlas+aseg.deep.nii.gz```. If you have FreeSurfer installed, just use FreeView to look at the result (or really any other image viewer): @@ -78,8 +79,8 @@ freeview -v 140_orig.mgz test-case/mri/aparc.DKTatlas+aseg.deep.mgz:colormap=lut Other interesting outputs of the segmentation are the ```aseg.auto_noCCseg.mgz``` containing a reduced segmentation according to FreeSurfer's aseg (no cortical sub-division and no corpus callosum, which is added later). Also ```mask.mgz``` can come in handy if you need a brainmask. And you get all of this within a few seconds (including startup of singularity or docker it is **20 sec** in total with a GeForce RTX 4080, **40 sec** with a Quadro RTX 4000 or Titan XP, CPU-only takes **5 minutes** longer on my machine). -## Google Colab - +Google Colab +------------ You can also run FastSurfer in the cloud with Google Colab. In order to use the notebooks, simply click on the link or optimally the google colab icon displayed at the top of the page. This way, the plots will be rendered correctly. If you have a Google account, you can interactively execute the run cells. 
Without a google account you can see the files and outputs generated by the last run.
@@ -95,7 +96,4 @@ After a quick introduction, it covers three use cases:
 - Use case 2: Quick and a bit more advanced - Segmentation with FastSurfer on your local machine
 - Use case 3: Use case 3 - Surface models, Thickness maps and more: FastSurfer's recon-surf command
-In addition, there is a small section covering [python-qatools](https://github.com/Deep-MI/qatools-python) called "Bonus - Quality analysis using qatools".
-
-
-
+In addition, there is a small section covering [python-qatools](https://github.com/Deep-MI/qatools-python) called "Bonus - Quality analysis using qatools".
diff --git a/doc/overview/SECURITY.md b/doc/overview/SECURITY.md
index 22974bc3b..60bc7c2a2 100644
--- a/doc/overview/SECURITY.md
+++ b/doc/overview/SECURITY.md
@@ -1,17 +1,16 @@
-# Security Policy
-
-## Supported Versions
-
-Versions of FastSurfer that are
-currently being supported with security updates:
+Security Policy
+===============
+Supported Versions
+------------------
+Versions of FastSurfer that are currently being supported with security updates:
 
 | Version | Supported          |
 | ------- | ------------------ |
 | 2.0.x   | :white_check_mark: |
 | < 2.0   | :x:                |
 
-## Reporting a Vulnerability
-
-Please Report Vulnerabilities as Github Issues. Use Vulnerability in the title and you can expect a quick response.
-If possible include a description of the vulnerability and how to resolve it, e.g. by updating dependencies etc.
-Thanks for you contribution!
+Reporting a Vulnerability
+-------------------------
+Please report vulnerabilities as GitHub issues. Use "Vulnerability" in the title and you can expect a quick response. If
+possible, include a description of the vulnerability and how to resolve it, e.g. by updating dependencies. Thanks for
+your contribution!
diff --git a/doc/overview/SINGULARITY.md b/doc/overview/SINGULARITY.md
index a379241c2..80fe44f4b 100644
--- a/doc/overview/SINGULARITY.md
+++ b/doc/overview/SINGULARITY.md
@@ -1,89 +1,125 @@
-# Singularity Support
+Singularity Support
+===================
 
-For use on HPCs (or in other cases where Docker is not available or preferred) you can easily create a Singularity image from the Docker image.
-Singularity uses its own image format, so the Docker images must be converted (we publish our releases as docker images available on [Dockerhub](https://hub.docker.com/r/deepmi/fastsurfer/tags)).
+Containerization
+----------------
+Containerization tools like Singularity, Apptainer, or Docker provide several advantages.
+Most importantly, they allow for exactly the same setup across different machines and even across data centers and compute clusters. They thus increase reproducibility by reducing software differences between evaluations.
+Additionally, errors and unexpected behavior are easier to track down, since the setup is significantly easier for developers to reproduce.
+Finally, containers provide a security advantage, because access to data is restricted to explicitly shared paths, reducing both the risk of data theft and of data-encryption attacks. This strategy is also called [sandboxing](https://en.wikipedia.org/wiki/Sandbox_(computer_security)).
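+For example, here is a minimal sketch of what this sandboxing means in practice (assuming a FastSurfer image already exists at `$HOME/fastsurfer-gpu.sif`); only paths explicitly shared with `-B` are visible inside the container:
+```bash
+# only the explicitly bound host directory is visible inside the container
+singularity exec -e --no-mount home,cwd \
+    -B "$HOME/my_mri_data:/data" \
+    "$HOME/fastsurfer-gpu.sif" \
+    ls /data
+# other host paths (e.g. your home directory) are not accessible in this container
+```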
-## Singularity with the official FastSurfer Image
-To create a Singularity image from the official FastSurfer image hosted on Dockerhub just run:
+Using Singularity (or Apptainer)
+--------------------------------
+In the following, we write "Singularity", but all steps work the same with the [open-source Apptainer](https://apptainer.org).
+
+To execute code in a Singularity container, users have to:
+1. [download](SINGULARITY.md#downloading-the-official-fastsurfer-image-for-singularity) or [create](SINGULARITY.md#creating-your-own-fastsurfer-singularity-image) a Singularity image of FastSurfer.
+2. [Start the Singularity container from a Singularity image](SINGULARITY.md#starting-fastsurfer-from-a-singularity-image) by defining options for the container. It is useful to think of the image as a "hard drive" and the container as a "simulated computer inside the computer".
+
+   We refer to these "options for the container" as `<*singularity-flags*>`. They are not options to FastSurfer (referred to as `<*fastsurfer-flags*>`), but to the "simulated computer", and they define access to data, hardware (e.g. graphics cards), etc.
+
+Downloading the official FastSurfer image for Singularity
+---------------------------------------------------------
+Singularity uses its own image format, so we need to download and convert the official docker images available from [Dockerhub](https://hub.docker.com/r/deepmi/fastsurfer/tags).
+
+To create an official FastSurfer Singularity image, run:
 ```bash
-singularity build /home/user/my_singlarity_images/fastsurfer-latest.sif docker://deepmi/fastsurfer:latest
+# usage: singularity build <image-path>.sif <docker-source>
+singularity build $HOME/my_singularity_images/fastsurfer-2.5.0.sif docker://deepmi/fastsurfer:cuda-v2.5.0
 ```
-Singularity images are files - usually with the extension `.sif`. Here, we save the image in `/homer/user/my_singlarity_images`.
-If you want to pick a specific FastSurfer version, you can also change the tag (`latest`) in `deepmi/fastsurfer:latest` to any tag. For example to use the cpu image hosted on [Dockerhub](https://hub.docker.com/r/deepmi/fastsurfer/tags) use the tag `cpu-latest`.
+Singularity images are files with the extension `.sif`. Here, we save the image in `$HOME/my_singularity_images`.
+If you want to pick a different FastSurfer version, you can change the tag (`cuda-v2.5.0`) in `deepmi/fastsurfer:cuda-v2.5.0` to any available tag. For example, use the [cpu image](https://hub.docker.com/r/deepmi/fastsurfer/tags?name=cpu) (`cpu-v2.5.0`) or a [specific CUDA version](https://hub.docker.com/r/deepmi/fastsurfer/tags?name=cu1) (check which CUDA versions are available for the current FastSurfer version, for example `cu118-v2.5.0`).
 
-## Building your own FastSurfer Singularity Image
-To build a custom FastSurfer Singularity image, the `tools/Docker/build.py` script supports a flag for direct conversion.
-Simply add `--singularity /home/user/my_singlarity_images/fastsurfer-myimage.sif` to the call, which first builds the image with Docker and then converts the image to Singularity.
+Creating your own FastSurfer Singularity image
+----------------------------------------------
+To build a custom FastSurfer Singularity image, the `tools/Docker/build.py` script supports a flag for direct conversion.
+Simply add `--singularity $HOME/my_singularity_images/fastsurfer-myimage.sif` to the call, which first builds the image with Docker and then converts the image to Singularity.
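+For example, a complete build-and-convert call could look like the following sketch (`--singularity` and `--device` are described in the Docker guide; the `--tag` flag is an assumption here, so check `python tools/Docker/build.py --help` for the options available in your FastSurfer version):
+```bash
+# build the Docker image and convert it to a Singularity image in one step
+# (--tag is an assumed flag name; verify with build.py --help)
+python tools/Docker/build.py --device cuda --tag fastsurfer:myimage \
+    --singularity "$HOME/my_singularity_images/fastsurfer-myimage.sif"
+```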
If you want to manually convert the local Docker image `fastsurfer:myimage`, run:
 ```bash
-singularity build /home/user/my_singlarity_images/fastsurfer-myimage.sif docker-daemon://fastsurfer:myimage
+singularity build $HOME/my_singularity_images/fastsurfer-myimage.sif docker-daemon://fastsurfer:myimage
 ```
 For more information on how to create your own Docker images, see our [Docker guide](../../tools/Docker/README.md).
 
-## FastSurfer Singularity Image Usage
-
-After building the Singularity image, you need to register at the FreeSurfer website (https://surfer.nmr.mgh.harvard.edu/registration.html) to acquire a valid license (for free) - just as when using Docker. This license needs to be passed to the script via the `--fs_license` flag. This is not necessary if you want to run the segmentation only.
+Starting FastSurfer from a Singularity image
+--------------------------------------------
+After building the Singularity image, you need to [register at the FreeSurfer website](https://surfer.nmr.mgh.harvard.edu/registration.html) to acquire a valid license (for free) - just as when using Docker. This license needs to be passed to the script via the `--fs_license` flag. This is not necessary if you want to run the segmentation only.
 
 To run FastSurfer on a given subject using the Singularity image with GPU access, execute the following command:
 
+`<*singularity-flags*>` includes flags that set up the Singularity container:
+- `--nv`: enable NVIDIA GPUs in Singularity (otherwise FastSurfer will run on the CPU),
+- `-B <path>`: is used to share data between the host and Singularity (only paths listed here will be available to FastSurfer, see [Singularity documentation](SINGULARITY.md#containerization) for more info).
+  This should specifically include the "Subject Directory". If two paths are given like `-B /my/path/host:/other`, this means `/my/path/host/somefile` will be accessible inside Singularity as `/other/somefile`.
+
 ```bash
 singularity exec --nv \
   --no-mount home,cwd -e \
-  -B /home/user/my_mri_data:/data \
-  -B /home/user/my_fastsurfer_analysis:/output \
-  -B /home/user/my_fs_license_dir:/fs \
-  /home/user/fastsurfer-gpu.sif \
+  -B $HOME/my_mri_data:/data \
+  -B $HOME/my_fastsurfer_analysis:/output \
+  -B $HOME/my_fs_license_dir:/fs \
+  $HOME/fastsurfer-gpu.sif \
   /fastsurfer/run_fastsurfer.sh \
   --fs_license /fs/license.txt \
   --t1 /data/subjectX/orig.mgz \
   --sid subjectX --sd /output \
-  --3T
+  --3T --threads 4
 ```
 
 ### Singularity Flags
 * `--nv`: This flag is used to access GPU resources. It should be excluded if you intend to use the CPU version of FastSurfer
 * `-e`: Do not transfer the environment variables from the host to the container.
-* `--no-mount home,cwd`: This flag tells singularity to not mount the home directory or the current working directory inside the singularity image (see [Best Practice](#mounting-home))
-* `-B`: These commands mount your data, output, and directory with the FreeSurfer license file into the Singularity container. Inside the container these are visible under the name following the colon (in this case /data, /output, and /fs).
+* `--no-mount home,cwd`: This flag tells Singularity not to mount the home directory or the current working directory inside the Singularity container (see [Best Practice](#best-practices))
+* `-B`: These commands mount your data, output, and directory with the FreeSurfer license file into the Singularity container.
+  Inside the container these are visible under the name following the colon (in this case /data, /output, and /fs).
 
 ### FastSurfer Flags
-* The `--fs_license` points to your FreeSurfer license which needs to be available on your computer in the my_fs_license_dir that was mapped above, if you want to run the full surface analysis.
-* The `--t1` points to the t1-weighted MRI image to analyse (full path, with mounted name inside docker: /home/user/my_mri_data => /data)
+* The `--fs_license` points to your FreeSurfer license (needs to be shared with the container using `-B`)
+* The `--t1` points to the t1-weighted MRI image to analyse (needs to be shared with the container using `-B`)
 * The `--sid` is the subject ID name (output folder name)
-* The `--sd` points to the output directory (its mounted name inside docker: /home/user/my_fastsurfer_analysis => /output)
-* The `--3T` switches to the 3T atlas instead of the 1.5T atlas for Talairach registration.
-
-Note, that the paths following `--fs_license`, `--t1`, and `--sd` are __inside__ the container, not global paths on your system, so they should point to the places where you mapped these paths above with the `-B` arguments.
+* The `--sd` points to the output directory (needs to be shared with the container using `-B`)
+* The `--3T` switches to the 3T atlas instead of the 1.5T atlas for Talairach registration.
 
-A directory with the name as specified in `--sid` (here subjectX) will be created in the output directory. So in this example output will be written to /home/user/my_fastsurfer_analysis/subjectX/ . Make sure the output directory is empty, to avoid overwriting existing files.
+A directory with the name as specified in `--sid` (here subjectX) will be created in the output directory. So in this example, output will be written to `$HOME/my_fastsurfer_analysis/subjectX/`. Note that FastSurfer may overwrite existing files in `$HOME/my_fastsurfer_analysis/subjectX/`.
 
 ### Singularity without a GPU
-You can run the Singularity equivalent of CPU-Docker by building a Singularity image from the CPU-Docker image (replace # with the current version number) and excluding the `--nv` argument in your Singularity exec command as following:
+You can run the Singularity equivalent of CPU-Docker by building a Singularity image from the CPU-Docker image and excluding the `--nv` argument in your Singularity exec command as follows:
 
 ```bash
-cd /home/user/my_singlarity_images
-singularity build fastsurfer-gpu.sif docker://deepmi/fastsurfer:cpu-v#.#.#
-
-singularity exec --no-mount home,cwd -e \
-  -B /home/user/my_mri_data:/data \
-  -B /home/user/my_fastsurfer_analysis:/output \
-  -B /home/user/my_fs_license_dir:/fs \
-  /home/user/fastsurfer-cpu.sif \
-  /fastsurfer/run_fastsurfer.sh \
-  --fs_license /fs/license.txt \
-  --t1 /data/subjectX/orig.mgz \
-  --sid subjectX --sd /output \
-  --3T
+cd $HOME/my_singularity_images
+singularity build fastsurfer-cpu-2.5.0.sif docker://deepmi/fastsurfer:cpu-v2.5.0
+
+singularity exec --no-mount home,cwd -e \
+  -B $HOME/my_mri_data \
+  -B $HOME/my_fastsurfer_analysis \
+  -B $HOME/my_fs_license.txt \
+  $HOME/my_singularity_images/fastsurfer-cpu-2.5.0.sif \
+  /fastsurfer/run_fastsurfer.sh \
+  --fs_license $HOME/my_fs_license.txt \
+  --t1 $HOME/my_mri_data/subjectX/orig.mgz \
+  --sid subjectX --sd $HOME/my_fastsurfer_analysis \
+  --3T --threads 4
 ```
 
-## Singularity Best Practice
+Common problems
+---------------
+1. Slow processing despite GPUs, log says `UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version ...)`.
+
+   Your NVIDIA drivers are too old for the CUDA version used in the image you created. Try using an image with a different CUDA version, for example [CUDA 11](https://hub.docker.com/r/deepmi/fastsurfer/tags?name=cu11), or specify a different `--device` option if you built the underlying Docker image yourself.
+
+2. When building a Singularity image from a local Docker image via `singularity build fastsurfer-myimage.sif docker-daemon://fastsurfer:myimage`, the build may fail with an error message like this:
+   ```
+   INFO:    Starting build...
+   FATAL:   While performing build: conveyor failed to get: loading image from docker engine: Error response from daemon: {"message":"client version 1.22 is too old. Minimum supported API version is 1.24, please upgrade your client to a newer version"}
+   ```
+   - To solve this issue, you can export the image from Docker with `docker save -o fastsurfer-myimage.tar fastsurfer:myimage` and then build the Singularity image from that archive with `singularity build fastsurfer-myimage.sif docker-archive://fastsurfer-myimage.tar`.
 
-### Mounting Home
+Best Practices
+--------------
+
+### Mounting Home and Current Working Directory
 Do not mount the user home directory into the singularity container as the home directory.
-
-Why? If the user inside the singularity container has access to a user directory, settings from that directory might bleed into the FastSurfer pipeline. For example, before FastSurfer 2.2 python packages installed in the user directory would replace those installed inside the image potentially causing incompatibilities. Since FastSurfer 2.2, `singularity exec ... --version +pip` outputs the FastSurfer version including a full list of python packages.
-How? Singularity automatically mounts the home directory by default. To avoid this, specify `--no-mount home,cwd`. Additionally setting the `-e` flag will ensure that no environment variables will be passed from the host system into the container.
+Why? If the user inside the singularity container has access to a user directory, settings from that directory might bleed into the FastSurfer pipeline. For example, before FastSurfer 2.2, python packages installed in the user directory would replace those installed inside the image, potentially causing incompatibilities. Since FastSurfer 2.2, `singularity exec ... --version +pip` outputs the FastSurfer version including a full list of python packages.
+How? Singularity automatically mounts the home directory by default. To avoid this, specify `--no-mount home,cwd`. Additionally, setting the `-e` flag ensures that no environment variables are passed from the host system into the container.
diff --git a/doc/overview/index.rst b/doc/overview/index.rst
index 2fca45ff3..0fe587141 100644
--- a/doc/overview/index.rst
+++ b/doc/overview/index.rst
@@ -8,7 +8,6 @@ User Guide
    QUICKSTART.md
    INSTALL.md
    EXAMPLES.md
-   FLAGS.md
    OUTPUT_FILES.md
    modules/index
    docker
diff --git a/doc/overview/intro.rst b/doc/overview/intro.rst
index 9d82f0968..db96910a6 100644
--- a/doc/overview/intro.rst
+++ b/doc/overview/intro.rst
@@ -1,20 +1,20 @@
-##########################
-Introduction to FastSurfer
-##########################
+############
+Introduction
+############
 
 We are excited that you are here. In this documentation we will help you get started with FastSurfer!
 
-FastSurfer is an open-source AI software tool to extract quantiative measurements from human brain MRI (T1-weighted) images.
+FastSurfer is an open-source AI software tool to extract quantitative measurements from human brain MRI (T1-weighted) images.
You will learn about it's different segmentation and surface modules and how to install and run it natively or in the recommended Docker or Singularity images.
 
 But first let us tell you why we think FastSurfer is great:
 
-* FastSurfer uses dedicated and fast AI methods (developed in-house).
-* It is thoroughly validated across different scanners, field-strenghts, T1 sequences, ages, diseases, ...
+* FastSurfer uses dedicated and fast AI methods (developed by `Deep-MI <https://deep-mi.org>`_).
+* It is thoroughly validated across different scanners, field-strengths, T1 sequences, ages, diseases, ...
 * FastSurfer is fully open-source using a permissive Apache license.
 * It is compatible with FreeSurfer, enabling FreeSurfer downstream tools to work directly.
 * It is much faster and provides increased reliability and sensitivity of the derived measures.
 * It natively supports high-resolution images (down to around 0.7mm) at high accuracy.
-* It has modules for full-brain (aseg+aparcDKT), cerebellum and hypothalamic sub-segmentations.
+* It has modules for full-brain (aseg+DKT), cerebellum and hypothalamic sub-segmentations.
 * The segmentation modules run within minutes and provide partial-volume corrected stats.
 * It has an optimized surface stream for cortical thickness analysis and improved correspondence.
diff --git a/doc/overview/license.rst b/doc/overview/license.rst
index 293ef81aa..f572c2923 100644
--- a/doc/overview/license.rst
+++ b/doc/overview/license.rst
@@ -2,8 +2,38 @@
 FastSurfer License
 ##################
 
+FastSurfer is licensed under the Apache license (see below).
+FastSurfer uses (and the Docker image is distributed with) FreeSurfer, which is licensed under the `FreeSurfer License <https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferSoftwareLicense>`_. Users need a license key for FreeSurfer, which can be obtained for free on the `FreeSurfer website <https://surfer.nmr.mgh.harvard.edu/registration.html>`_.
+FastSurfer uses several Python packages. Please refer to these packages for their respective licenses.
 
 Apache License
 ==============
 
 ..
literalinclude:: ../../LICENSE + +Other Packages and Tools distributed or used with FastSurfer +============================================================ + +* [FreeSurfer](https://surfer.nmr.mgh.harvard.edu/) +* [h5py](https://www.h5py.org) +* [Jinja2](https://jinja.palletsprojects.com/en/stable/) +* [lapy](https://deep-mi.org/LaPy) +* [matplotlib](https://matplotlib.org/) +* [nibabel](https://nipy.org/nibabel/) +* [numpy](https://numpy.org/) +* [pandas](https://pandas.pydata.org/) +* [pillow](https://python-pillow.github.io/) +* [plotly](https://plotly.com/python/) +* [PyYAML](https://pyyaml.org/) +* [requests](https://requests.readthedocs.io/en/latest/) +* [scikit-image](https://scikit-image.org) +* [scikit-learn](https://scikit-learn.org) +* [scikit-sparse](https://github.com/scikit-sparse/scikit-sparse) +* [scipy](https://scipy.org) +* [SimpleITK](https://simpleitk.org/) +* [tensorboard](https://www.tensorflow.org/tensorboard) +* [torch](https://pytorch.org) +* [torchio](https://torchio.org) +* [torchvision](https://pytorch.org/vision) +* [tqdm](https://tqdm.github.io/) +* [yacs](https://github.com/rbgirshick/yacs) diff --git a/doc/overview/modules/CC.md b/doc/overview/modules/CC.md index b392a5c45..f59fec1ca 100644 --- a/doc/overview/modules/CC.md +++ b/doc/overview/modules/CC.md @@ -38,7 +38,7 @@ This file contains measurements from the middle sagittal slice and includes: #### **Thickness Analysis:** - `thickness`: Average corpus callosum thickness (mm) -- `thickness_profile`: Thickness profile (mm) of the corpus callosum slice (100 thickness values by default, listed from anterior to posterior CC ends) +- `thickness_profile`: Thickness profile (mm) of the corpus callosum slice (100 thickness values by default, listed from anterior to posterior CC ends) #### **Volume Measurements (when multiple slices processed):** - `cc_5mm_volume`: Total CC volume within 5mm slab using voxel counting (mm³) @@ -48,7 +48,7 @@ This file contains measurements from the middle sagittal slice and includes: All anatomical landmarks are given image voxel coordinates (LIA orientation) - `ac_center`: Anterior commissure coordinates in original image space (orig.mgz) - `pc_center`: Posterior commissure coordinates in original image space (orig.mgz) -- `ac_center_oriented_volume`: AC coordinates in standardized space (orient_volume.lta) +- `ac_center_oriented_volume`: AC coordinates in standardized space (orient_volume.lta) - `pc_center_oriented_volume`: PC coordinates in standardized space (orient_volume.lta) - `ac_center_upright`: AC coordinates in upright space (cc_up.lta) - `pc_center_upright`: PC coordinates in upright space (cc_up.lta) diff --git a/doc/scripts/BATCH.md b/doc/scripts/BATCH.md index 71d4b4317..bccd0ed17 100644 --- a/doc/scripts/BATCH.md +++ b/doc/scripts/BATCH.md @@ -3,50 +3,47 @@ BATCH: brun_fastsurfer.sh Usage ----- - ```{command-output} ./brun_fastsurfer.sh --help :cwd: /../ ``` Subject Lists ------------- - The input files and options may be specified in three ways: 1. By writing them into the console (or by piping them in) (default) (one case per line), 2. by passing a subject list file `--subject_list ` (one case per line), or -3. by passing them on the command line `--subjects "=" [more cases]` (no additional options +3. by passing them on the command line `--subjects "=" [more cases]` (no additional options supported). 
These files/input options will usually be in the format `= [additional options]`, where additional -options are optional and enable passing options different to the "general options" given on the command line to -`brun_fastsurfer.sh`. One example for such a case-specific option is an optional T2w image (e.g. for the +options are optional and enable passing options different to the "general options" given on the command line to +`brun_fastsurfer.sh`. One example for such a case-specific option is an optional T2w image (e.g. for the [HypVINN](../overview/OUTPUT_FILES.md#hypvinn-module)). An example subject list file might look like this: ``` 001=/data/study/raw/T1w-001.nii.gz --t2 /data/study/raw/T2w-001.nii.gz 002=/data/study/raw/T1w-002.nii.gz --t2 /data/study/raw/T2w-002.nii.gz 002=/data/study/raw/T1w-003-alt.nii.gz --t2 /data/study/raw/T2w-003.nii.gz -... +... ``` Parallelization with `brun_fastsurfer.sh` ----------------------------------------- - -`brun_fastsurfer.sh` has powerful builtin parallel processing capabilities. These are hidden underneath the +`brun_fastsurfer.sh` has powerful builtin parallel processing capabilities. These are hidden underneath the `--parallel* |max` and the `--device ` as well as `--viewagg_device ` flags. -One of the core properties of FastSurfer is the split into the segmentation (which uses Deep Learning and therefore -benefits from GPUs) and the surface pipeline (which does not benefit from GPUs). For ideal batch processing, we want +One of the core properties of FastSurfer is the split into the segmentation (which uses Deep Learning and therefore +benefits from GPUs) and the surface pipeline (which does not benefit from GPUs). For ideal batch processing, we want different resource scheduling. -`--parallel*` allows three parallel batch processing modes: serial, single parallel pipeline and dual parallel pipeline. +`--parallel*` allows three parallel batch processing modes: serial, single parallel pipeline and dual parallel pipeline. ### Serial processing (default) -Each case/image is processed after the other fully, i.e. surface reconstruction of case 1 is fully finished before +Each case/image is processed after the other fully, i.e. surface reconstruction of case 1 is fully finished before segmentation of case 2 is started. This setting is the default and represents the manual flags `--parallel 1`. ### Single parallel pipeline -This mode is ideal for CPU-based processing for segmentation. It will process segmentations and surfaces in series +This mode is ideal for CPU-based processing for segmentation. It will process segmentations and surfaces in series in the same process, but multiple cases are processed at the same time. ```bash @@ -55,20 +52,20 @@ $FASTSURFER_HOME/brun_fastsurfer.sh --parallel 4 --threads 2 will start 4 segmentations (and surface reconstructions) at the same time, and will start a fifth, when the surface processing of one of the four first cases is finished (`--parallel 4`). It will try to use 2 threads per case (`--threads 2`) and perform reconstruction of left and right hemispheres in parallel (`--threads 2`, 2 >= 2). -`--parallel max` will remove the limit and start all cases at the same time (each with the target number of threads +`--parallel max` will remove the limit and start all cases at the same time (each with the target number of threads given by `--threads`). ### Dual parallel pipeline -This is ideal for GPU-based processing for segmentation. 
It will process segmentations and surfaces in separate +This is ideal for GPU-based processing for segmentation. It will process segmentations and surfaces in separate pipelines, which is useful for optimized GPU loading. Multiple cases may be processed at the same time. ```bash $FASTSURFER_HOME/brun_fastsurfer.sh --device cuda:0-1 --parallel_seg 2 --parallel_surf max \ --threads_seg 8 --threads_surf 4 ``` -will start 2 parallel segmentations (`--parallel_seg 2`) using GPU 0 for case 1 and GPU 1 for case 2 -(`--device cuda:0-1` -- same as `--device cuda:0,1`). After one of these segmentations is finished, the segmentation of -case 3 will start on that same device as well as the surface reconstruction (without putting a limit on parallel +will start 2 parallel segmentations (`--parallel_seg 2`) using GPU 0 for case 1 and GPU 1 for case 2 +(`--device cuda:0-1` -- same as `--device cuda:0,1`). After one of these segmentations is finished, the segmentation of +case 3 will start on that same device as well as the surface reconstruction (without putting a limit on parallel surface reconstructions, `--parallel_surf max`). Each segmentation process will aim to use 8 threads/cores (`--threads_seg 8`) and each surface reconstruction process will aim to use 4 threads (`--threads_surf 4`) with both hemispheres processed in parallel (`--threads_surf 4`, 4 >= 2, so right hemisphere will use 2 threads and left as well). @@ -82,6 +79,8 @@ Questions Can I disable the progress bars in the output? > You can disable the progress bars by setting the TQDM_DISABLE environment variable to 1, if you have tqdm>=4.66. -> -> For docker, this can be done with the flag `-e`, e.g. `docker run -e TQDM_DISABLE=1 ...`, for singularity with the flag `--env`, e.g. `singularity exec --env TQDM_DISABLE=1 ...` and for native installations by prepending, e.g. `TQDM_DISABLE=1 ./run_fastsurfer.sh ...`. +> +> For docker, this can be done with the flag `-e`, e.g. `docker run -e TQDM_DISABLE=1 ...`, for singularity with the +> flag `--env`, e.g. `singularity exec --env TQDM_DISABLE=1 ...` and for native installations by prepending, e.g. +> `TQDM_DISABLE=1 ./run_fastsurfer.sh ...`. diff --git a/doc/overview/FLAGS.md b/doc/scripts/RUN_FASTSURFER.md similarity index 82% rename from doc/overview/FLAGS.md rename to doc/scripts/RUN_FASTSURFER.md index 735136fbc..d99c1f27d 100644 --- a/doc/overview/FLAGS.md +++ b/doc/scripts/RUN_FASTSURFER.md @@ -1,33 +1,40 @@ -# FastSurfer Flags -Next, you will learn hot wo specify the `*fastsurfer-flags*` by replacing `*fastsurfer-flags*` with your specific options. +run_fastsurfer.sh +================= +Next, you will learn how to specify the `*fastsurfer-flags*` by replacing `*fastsurfer-flags*` with your specific options. +`run_fastsurfer.sh` is the central command of FastSurfer. In general, `run_fastsurfer.sh` is called once for each T1w MRI image that is to be processed and each call will result in one "Subject Folder" with segmentation maps, surfaces and statistics tables. If you want to process multiple images, you can either loop through the images yourself or use [brun_fastsurfer.sh](BATCH.md) or [srun_fastsurfer.sh](SLURM.md), which are multi-subject extensions to `run_fastsurfer.sh`. -The `*fastsurfer-flags*` will usually at least include the subject directory (`--sd`; Note, this will be the mounted path - `/output` - for containers), the subject name/id (`--sid`) and the path to the input image (`--t1`). 
For example: +On this page, we explain FastSurfer's options, usually referred to as `<*fastsurfer-flags*>` in this documentation. +The `<*fastsurfer-flags*>` will usually at least include the subject directory (`--sd`), the subject name/id (`--sid`) and the path to the input image (`--t1`). For example: ```bash -... --sd /output --sid test_subject --t1 /data/test_subject_t1.nii.gz --3T +$FASTSURFER_HOME/run_fastsurfer.sh --sd $HOME/my_fastsurfer_data --sid test_subject --t1 $HOME/my_mri_data/test_subject_t1.nii.gz --3T ``` Additionally, you can use `--seg_only` or `--surf_only` to only run a part of the pipeline or `--no_biasfield`, `--no_cereb`, `--no_hypothal`, `--no_cc`, and `--no_asegdkt` to switch off individual segmentation modules. Here, we have also added the `--3T` flag, which tells FastSurfer to register against the 3T atlas which is only relevant for the ICV estimation (eTIV). -In the following, we give an overview of the most important options. You can view a [full list of options](FLAGS.md#full-list-of-flags) with +In the following, we give an overview of the most important options. You can view a [full list of options](RUN_FASTSURFER.md#full-list-of-flags) with ```bash ./run_fastsurfer.sh --help ``` -## Required arguments +Required arguments +------------------ * `--sd`: Output directory \$SUBJECTS_DIR (equivalent to FreeSurfer setup --> $SUBJECTS_DIR/sid/mri; $SUBJECTS_DIR/sid/surf ... will be created). * `--sid`: Subject ID for directory inside \$SUBJECTS_DIR to be created ($SUBJECTS_DIR/sid/...) * `--t1`: T1 full head input (does not need to be bias corrected, global path). The network was trained with conformed images (UCHAR, 256x256x256, 0.7mm - 1mm voxels and standard slice orientation). These specifications are checked in the run_prediction.py script and the image is automatically conformed if it does not comply. Note, outputs will be in the conformed space (following the FreeSurfer standard). -## Required for Docker when running surface module +### Conditionally required +Required for Docker when running surface module: * `--fs_license`: Path to FreeSurfer license key file (needed for the surface module and, if activated, the talairach registration `--tal_reg` in the segmentation). For local installs, your local FreeSurfer license will automatically be detected (usually `$FREESURFER_HOME/license.txt` or `$FREESURFER_HOME/.license`). Use this flag if autodetection fails or if you use Docker with the surface module. To get a license, [register (for free)](https://surfer.nmr.mgh.harvard.edu/registration.html). -## Segmentation pipeline arguments (optional) +Optional arguments +------------------------------------------ +### Segmentation pipeline arguments * `--seg_only`: Only run the brain segmentation pipeline and skip the surface pipeline. * `--seg_log`: Name and location for the log-file for the segmentation. Default: $SUBJECTS_DIR/$sid/scripts/deep-seg.log * `--viewagg_device`: Define where the view aggregation should be run on. Can be "auto" or a device (see --device). By default, the program checks if you have enough memory to run the view aggregation on the GPU. The total memory is considered for this decision. If this fails, or you actively specify "cpu" view aggregation is run on the CPU. Equivalently, if you pass a different device, view aggregation will be run on that device (no memory check will be done). 
-* `--device`: Select device for neural network segmentation (_auto_, _cpu_, _cuda_, _cuda:_, _mps_), where cuda means Nvidia GPU, you can select which one e.g. "cuda:1". Default: "auto", check GPU and then CPU. "mps" is for native MAC installs to use the Apple silicon (M-chip) GPU. +* `--device`: Select device for neural network segmentation (_auto_, _cpu_, _cuda_, _cuda:_, _mps_), where cuda means Nvidia GPU, you can select which one e.g. "cuda:1". Default: "auto", check GPU and then CPU. "mps" is for native MAC installs to use the Apple silicon (M-chip) GPU. * `--asegdkt_segfile`: Name of the segmentation file, which includes the aparc+DKTatlas-aseg segmentations. Requires an ABSOLUTE Path! Default location: \$SUBJECTS_DIR/\$sid/mri/aparc.DKTatlas+aseg.deep.mgz * `--no_cereb`: Switch off the cerebellum sub-segmentation. * `--no_hypothal`: Skip the hypothalamus segmentation. @@ -36,18 +43,18 @@ In the following, we give an overview of the most important options. You can vie * `--no_biasfield`: Deactivate the biasfield correction and calculation of partial volume-corrected statistics in the segmentation modules. * `--native_image` or `--keepgeom`: **Only supported for `--seg_only`**, segment in native image space (keep orientation, image size and voxel size of the input image), this also includes experimental support for anisotropic images (no extreme anisotropy). -## Surface pipeline arguments (optional) +### Surface pipeline arguments * `--surf_only`: Only run the surface pipeline. The segmentation created by FastSurferVINN must already exist in this case. * `--3T`: Only affects Talairach registration: use the 3T atlas instead of the 1.5T atlas (which is used if the flag is not provided). This gives better (more consistent with FreeSurfer) ICV estimates (eTIV) for 3T and better Talairach registration matrices, but has little impact on standard volume or surface stats. * `--fstess`: Use mri_tesselate instead of marching cube (default) for surface creation (not recommended, but more similar to FreeSurfer) * `--fsqsphere`: Use FreeSurfer default instead of novel spectral spherical projection for qsphere (also not recommended) * `--fsaparc`: Use FS aparc segmentations in addition to DL prediction (slower in this case and usually the mapped ones from the DL prediction are fine) * `--no_fs_T1`: Skip generation of `T1.mgz` (normalized `nu.mgz` included in standard FreeSurfer output) and create `brainmask.mgz` directly from `norm.mgz` instead. Saves 1:30 min. -* `--no_surfreg`: Skip the surface registration (which creates `sphere.reg`) to safe time. Note, `sphere.reg` will be needed for any cross-subject statistical analysis of thickness maps, so do not use this option if you plan to perform cross-subject analysis. +* `--no_surfreg`: Skip the surface registration (which creates `sphere.reg`) to safe time. Note, `sphere.reg` will be needed for any cross-subject statistical analysis of thickness maps, so do not use this option if you plan to perform cross-subject analysis. -## Some other flags (optional) +### Some other flags * `--threads`, `--threads_seg` and `--threads_surf`: Target number of threads for all modules, segmentation, and surface pipeline. The default (`1`) tells FastSurfer to only use one core. Note, that the default value may change in the future for better performance on multi-core architectures. If threads for surface reconstruction is greater than 1, both hemispheres are processed in parallel with half the threads allocated to each hemisphere. 
-* `--vox_size`: Forces processing at a specific voxel size. If a number between 0.7 and 1 is specified (below is experimental) the T1w image is conformed to that isotropic voxel size and processed. +* `--vox_size`: Forces processing at a specific voxel size. If a number between 0.7 and 1 is specified (below is experimental) the T1w image is conformed to that isotropic voxel size and processed. If "min" is specified (default), the voxel size is read from the size of the minimal voxel size (smallest per-direction voxel size) in the T1w image: If the minimal voxel size is bigger than 0.98mm, the image is conformed to 1mm isotropic. If the minimal voxel size is smaller or equal to 0.98mm, the T1w image will be conformed to isotropic voxels of that voxel size. @@ -56,7 +63,8 @@ In the following, we give an overview of the most important options. You can vie * `--conformed_name`: Name of the file in which the conformed input image will be saved. Default location: \$SUBJECTS_DIR/\$sid/mri/orig.mgz * `-h`, `--help`: Prints help text -## Full list of flags +Full list of flags +------------------ ```{command-output} ./run_fastsurfer.sh --help :cwd: /../ -``` \ No newline at end of file +``` diff --git a/doc/scripts/SLURM.md b/doc/scripts/SLURM.md index 8bc4068ef..baa7e0262 100644 --- a/doc/scripts/SLURM.md +++ b/doc/scripts/SLURM.md @@ -3,16 +3,13 @@ SLURM: srun_fastsurfer.sh Usage ----- - ```{command-output} ./srun_fastsurfer.sh --help :cwd: /../ ``` Debugging SLURM runs -------------------- - 1. Did the run succeed? - 1. Check whether all jobs are done (specifically the copy job). ```bash $ squeue -u $USER --Format JobArrayID,Name,State,Dependency @@ -22,24 +19,31 @@ Debugging SLURM runs 1750815_1 FastSurfer-Surf-kuegRUNNING (null) 1750815_2 FastSurfer-Surf-kuegRUNNING (null) ``` - Here, jobs are not finished yet. The FastSurfer-Cleanup-$USER Job moves data to the subject directory (--sd). - - 2. Check whether there are subject folders and log files in the subject directory, /slurm/logs for the latter. - - 3. Check the subject_success file in /slurm/scripts. It should have a line for each subject for both parts of the FastSurfer pipeline, e.g. `: Finished --seg_only successfully` or `: Finished --surf_only successfully`! If one of these is missing, the job was likely killed by slurm (e.g. because of the time or the memory limit). - - 4. For subjects that were unsuccessful (The subject_success will say so), check `//scripts/deep-seg.log` and `//scripts/recon-surf.log` to see what failed. - Can be found by looking for ": Failed <--seg_only/--surf_only> with exit code " in `/slurm/scripts/subject_success`. + Here, jobs are not finished yet. The FastSurfer-Cleanup-$USER Job moves data to the subject directory (`--sd`). + 2. Check whether there are subject folders and log files in the subject directory, /slurm/logs for + the latter. + 3. Check the subject_success file in `/slurm/scripts`. It should have a line for each subject for + both parts of the FastSurfer pipeline, e.g. `: Finished --seg_only successfully` or + `: Finished --surf_only successfully`! If one of these is missing, the job was likely killed by slurm + (e.g. because of the time or the memory limit). + 4. For subjects that were unsuccessful (The subject_success will say so), check + `//scripts/deep-seg.log` and + `//scripts/recon-surf.log` to see what failed. + Can be found by looking for `": Failed <--seg_only/--surf_only> with exit code "` in + `/slurm/scripts/subject_success`. + 5. 
For subjects that were terminated (missing in subject_success), find which job is associated with the subject id
+      (`grep "<subject_id>" slurm/logs/surf_*.log`), then look at the end of the job and the job step logs
+      (`surf_XXX_YY.log` and `surf_XXX_YY_ZZ.log`). If slurm terminated the job, it will say so there. You can increase
+      the time and memory budget in `srun_fastsurfer.sh` with the `--time` and `--mem` flags.
 
-  5. For subjects that were terminated (missing in subject_success), find which job is associated with subject id `grep "" slurm/logs/surf_*.log`, then look at the end of the job and the job step logs (surf_XXX_YY.log and surf_XXX_YY_ZZ.log). If slurm terminated the job, it will say so there. You can increase the time and memory budget in `srun_fastsurfer.sh` with `--time` and `--mem` flags.
 
 The following bash code snippet can help identify failed runs.
 ```
 cd <subject directory (--sd)>
 for sub in *
 do
-  if [[ -z "$(grep "$sub: Finished --surf" slurm/scripts/subject_success)" ]] 
-  then 
+  if [[ -z "$(grep "$sub: Finished --surf" slurm/scripts/subject_success)" ]]
+  then
     echo "$sub was terminated externally"
-  fi 
+  fi
 done
 ```
diff --git a/doc/scripts/fastsurfer_cc.rst b/doc/scripts/fastsurfer_cc.rst
index 78bc56195..cfa085b9e 100644
--- a/doc/scripts/fastsurfer_cc.rst
+++ b/doc/scripts/fastsurfer_cc.rst
@@ -1,8 +1,13 @@
 CorpusCallosum: fastsurfer_cc.py
 ================================
 .. note::
-   FastSurfer-CC runs with FastSurfer by default, but can be run independently with the advanced interface provided here.
-   A FastSurfer segmentation is still required as input.
+   We recommend running FastSurfer-CC through the standard `run_fastsurfer.sh` interface (see :doc:`/scripts/RUN_FASTSURFER`)!
+
+   This is expert documentation for FastSurfer-CC, which can be run independently with the advanced interface provided here. However, a FastSurfer segmentation is still required as input.
+
+
+..
+   [Note] To tell Sphinx where in the documentation CorpusCallosum/README.md can be linked to, it needs to be included somewhere
 
 .. include:: ../../CorpusCallosum/README.md
    :parser: fix_links.parser
diff --git a/doc/scripts/fastsurfercnn.run_model.rst b/doc/scripts/fastsurfercnn.run_model.rst
index 164290213..e60643133 100644
--- a/doc/scripts/fastsurfercnn.run_model.rst
+++ b/doc/scripts/fastsurfercnn.run_model.rst
@@ -1,5 +1,5 @@
 FastSurferCNN: run_model.py
-================================
+===========================
 
 .. include:: ../../FastSurferCNN/README.md
    :parser: fix_links.parser
diff --git a/doc/scripts/index.rst b/doc/scripts/index.rst
index 4423bca13..5032aba08 100644
--- a/doc/scripts/index.rst
+++ b/doc/scripts/index.rst
@@ -4,6 +4,7 @@ Scripts
 .. toctree::
    :maxdepth: 2
 
+   RUN_FASTSURFER.md
    long_fastsurfer.rst
    BATCH.md
    SLURM.md
diff --git a/doc/scripts/recon_surf.rst b/doc/scripts/recon_surf.rst
index c3c07211b..7e9c7d378 100644
--- a/doc/scripts/recon_surf.rst
+++ b/doc/scripts/recon_surf.rst
@@ -13,6 +13,6 @@ Surface pipeline: recon-surf.sh
 ..
    Usage help text
    ---------------
- 
+
 ..
command-output:: ./recon_surf/recon-surf.sh --help :cwd: /../ diff --git a/doc/sphinx_ext/fix_links/parser.py b/doc/sphinx_ext/fix_links/parser.py index 9e492a112..328b84e2d 100644 --- a/doc/sphinx_ext/fix_links/parser.py +++ b/doc/sphinx_ext/fix_links/parser.py @@ -50,6 +50,8 @@ def __init__(self, parser: MarkdownIt): def update_section_level_state(self, section: nodes.section, level: int) -> None: """This method is fixed such that """ + # this is the parent level from the included document (so if we can propagate levels relatively into the new + # doc) -- this also means we can get negative levels, if we start with a high level heading parent_level = max( section_level for section_level in self._level_to_section @@ -75,7 +77,18 @@ def update_section_level_state(self, section: nodes.section, level: int) -> None self._heading_base = level new_level = 0 - super().update_section_level_state(section, new_level) + try: + super().update_section_level_state(section, new_level) + except ValueError as e: + msg = (f"Cannot fix heading level {level} to {new_level}: {e}, likely there is a heading with an incorrect " + f"heading level, i.e. uses heading '##' but should be using '###' or higher!") + from myst_parser.warnings_ import MystWarnings + self.create_warning( + msg, + MystWarnings.MD_HEADING_NON_CONSECUTIVE, + line=section.line, + append_to=self.current_node, + ) def _handle_relative_docs(self, destination: str) -> str: from os.path import relpath, normpath diff --git a/env/fastsurfer_reconsurf.yml b/env/fastsurfer_reconsurf.yml index dbdf23084..3b28a99a5 100644 --- a/env/fastsurfer_reconsurf.yml +++ b/env/fastsurfer_reconsurf.yml @@ -3,7 +3,7 @@ name: fastsurfer_reconsurf channels: - conda-forge - + dependencies: - lapy=1.0.1 - nibabel=5.1.0 diff --git a/recon_surf/README.md b/recon_surf/README.md index b6a4322b9..e034a3ba5 100644 --- a/recon_surf/README.md +++ b/recon_surf/README.md @@ -35,32 +35,28 @@ Note that it is recommended to run the surface pipeline via `run_fastsurfer.sh - Example 1: Surface module inside Docker --------------------------------------- - Docker can be used to simplify the installation (no FreeSurfer on system required). Given you already ran the segmentation pipeline, and want to just run the surface pipeline on top of it (i.e. on a different cluster), the following command can be used: ```bash -# 1. Pull the docker image (if it does not exist locally) -docker pull deepmi/fastsurfer:cpu-v?.?.? - -# 2. Run command -docker run -v /home/user/my_fastsurfer_analysis:/output \ - -v /home/user/my_fs_license_dir:/fs_license \ +# Run command +docker run -v $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \ + -v $HOME/my_fs_license.txt:$HOME/my_fs_license.txt \ --entrypoint /fastsurfer/recon_surf/recon-surf.sh \ - --rm --user $(id -u):$(id -g) deepmi/fastsurfer:cpu-v?.?.? \ - --fs_license /fs_license/license.txt \ - --sid subjectX --sd /output --3T + --rm --user $(id -u):$(id -g) deepmi/fastsurfer:cpu-v{{ FASTSURFER_VERSION }} \ + --fs_license $HOME/my_fs_license.txt \ + --sid subjectX --sd $HOME/my_fastsurfer_analysis --3T ``` -Check [Dockerhub](https://hub.docker.com/r/deepmi/fastsurfer/tags) to find out the latest release version and replace the "?". +Note: Go to [deepmi on Dockerhub](https://hub.docker.com/r/deepmi/fastsurfer/tags) to find the latest release version (automatically detected as `deepmi/fastsurfer:cpu-v{{ FASTSURFER_VERSION }}`). 
Docker Flags: * The `-v` commands mount your output, and directory with the FreeSurfer license file into the Docker container. Inside the container these are visible under the name following the colon (in this case /output and /fs_license). -This call is very similar to calling the standard `run_fastsurfer.sh` script with the `--surf_only` flag and starting +This call is very similar to calling the standard `run_fastsurfer.sh` script with the `--surf_only` flag, which starts only the surface module. It assumes that this case `subjectX` exists already and that the output files of the segmentation module are available in the `subjectX/mri` directory (e.g. -`/home/user/my_fastsurfeer_analysis/subjectX/mri/aparc.DKTatlas+aseg.deep.mgz`, `mask.mgz`, `orig.mgz` etc.). The +`$HOME/my_fastsurfeer_analysis/subjectX/mri/aparc.DKTatlas+aseg.deep.mgz`, `mask.mgz`, `orig.mgz`, etc.). The directory will then be populated with the FreeSurfer file structure, including surfaces, statistics and labels file (equivalent to a FreeSurfer recon-all run). @@ -72,27 +68,27 @@ default, so this is for expert users who may want to try out specific flags that Given you already ran the segmentation pipeline, and want to just run the surface pipeline on top of it (i.e. on a different cluster), the following command can be used: ```bash -# 1. Build the singularity image (if it does not exist) -singularity build fastsurfer-cpu-v?.?.?.sif docker://deepmi/fastsurfer:cpu-v?.?.? +# 1. Build the singularity image (only if it does not exist) +singularity build fastsurfer-cpu-v{{ FASTSURFER_VERSION }}.sif docker://deepmi/fastsurfer:cpu-v{{ FASTSURFER_VERSION }} # 2. Run command singularity exec --no-home \ - -B /home/user/my_fastsurfer_analysis:/output \ - -B /home/user/my_fs_license_dir:/fs_license \ - ./fastsurfer-cpu-?.?.?.sif \ + -B $HOME/my_fastsurfer_analysis \ + -B $HOME/my_fs_license.txt \ + ./fastsurfer-cpu-{{ FASTSURFER_VERSION }}.sif \ /fastsurfer/recon_surf/recon-surf.sh \ - --fs_license /fs_license/license.txt \ - --sid subjectX --sd /output --3T \ - --t1 /subjectX/mri/orig.mgz \ - --asegdkt_segfile /subjectX/mri/aparc.DKTatlas+aseg.deep.mgz + --fs_license $HOME/my_fs_license.txt \ + --sid subjectX --sd $HOME/my_fastsurfer_analysis --3T \ + --t1 $HOME/my_fastsurfer_analysis/subjectX/mri/orig.mgz \ + --asegdkt_segfile $HOME/my_fastsurfer_analysis/subjectX/mri/aparc.DKTatlas+aseg.deep.mgz ``` -Check [Dockerhub](https://hub.docker.com/r/deepmi/fastsurfer/tags) to find out the latest release version and replace the "?". +Note: Go to [deepmi on Dockerhub](https://hub.docker.com/r/deepmi/fastsurfer/tags) to find the latest release version (automatically detected as `docker://deepmi/fastsurfer:cpu-v{{ FASTSURFER_VERSION }}`). ### Singularity Flags: * The `-B` commands mount your output, and directory with the FreeSurfer license file into the Singularity container. Inside the container these are visible under the name following the colon (in this case /data, /output, and /fs_license). -* The `--no-home` command disables the automatic mount of the users home directory (see [Best Practice](../doc/overview/SINGULARITY.md#mounting-home)) +* The `--no-mount home,cwd` command disables the automatic mount of the users home directory (see [Best Practice](../doc/overview/SINGULARITY.md#mounting-home-and-current-working-directory)) The `--t1` and `--asegdkt_segfile` flags point to the already existing conformed T1 input and segmentation from the segmentation module. Also other files from that pipeline will be reused (e.g. 
the `mask.mgz`, `orig_nu.mgz`). The @@ -101,36 +97,29 @@ file (equivalent to a FreeSurfer recon-all run). Example 3: Native installation - recon-surf on a single subject (subjectX) -------------------------------------------------------------------------- - -Given you want to analyze data for subjectX which is stored on your computer under `/home/user/my_mri_data/subjectX/orig.mgz`, +Given you want to analyze data for subjectX which is stored on your computer under `$HOME/my_mri_data/subjectX/orig.mgz`, run the following command from the console (do not forget to source FreeSurfer!): ```bash -# Source FreeSurfer +# Source FreeSurfer, defining FREESURFER_HOME will usually enable auto-detection in native installations export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh -# Define data directory -datadir=/home/user/my_mri_data -segdir=/home/user/my_segmentation_data -targetdir=/home/user/my_recon_surf_output # equivalent to FreeSurfer's SUBJECTS_DIR - # Run recon-surf ./recon-surf.sh --sid subjectX \ - --sd $targetdir \ + --sd $HOME/my_fastsurfer_analysis \ --py python3.10 \ --3T \ - --t1 /subjectX/mri/orig.mgz \ - --asegdkt_segfile /subjectX/mri/aparc.DKTatlas+aseg.deep.mgz + --t1 $HOME/my_fastsurfer_analysis/subjectX/mri/orig.mgz \ + --asegdkt_segfile $HOME/my_fastsurfer_analysis/subjectX/mri/aparc.DKTatlas+aseg.deep.mgz ``` The `--t1` and `--asegdkt_segfile` flags point to the already existing conformed T1 input and segmentation from the segmentation module. Also other files from that pipeline -will be reused (e.g. the `mask.mgz`, `orig_nu.mgz`, i.e. under `/home/user/my_fastsurfeer_analysis/subjectX/mri/mask.mgz`). The `subjectX` directory will then be populated with the FreeSurfer file structure, including surfaces, statistics and labels file (equivalent to a FreeSurfer recon-all run). -The script will generate a bias-field corrected image at `/home/user/my_fastsurfeer_analysis/subjectX/mri/orig_nu.mgz`, if this did not already exist. +will be reused (e.g. the `mask.mgz`, `orig_nu.mgz`, i.e. under `$HOME/my_fastsurfeer_analysis/subjectX/mri/mask.mgz`). The `subjectX` directory will then be populated with the FreeSurfer file structure, including surfaces, statistics and labels file (equivalent to a FreeSurfer recon-all run). +The script will generate a bias-field corrected image at `$HOME/my_fastsurfeer_analysis/subjectX/mri/orig_nu.mgz`, if this did not already exist. Example 4: recon-surf on multiple subjects ------------------------------------------ - Most of the recon_surf functionality can also be achieved by running `run_fastsurfer.sh` with the `--surf_only` flag. This means we can also use the `brun_fastsurfer.sh` command with `--surf_only` to achieve similar results (see also [Example 4](../doc/overview/EXAMPLES.md#example-4-fastsurfer-on-multiple-subjects). 
There are however some small differences to be aware of: @@ -140,18 +129,18 @@ There are however some small differences to be aware of: Invoke the following command (make sure you have enough resources to run the given number of subjects in parallel or drop the `--parallel_surf max` flag to run them in series!): ```bash -singularity exec --no-home \ - -B /home/user/my_fastsurfer_analysis:/output \ - -B /home/user/subjects_lists/:/lists \ - -B /home/user/my_fs_license_dir:/fs_license \ - ./fastsurfer.sif \ +singularity exec --no-mount home,cwd -e \ + -B $HOME/my_fastsurfer_analysis \ + -B $HOME/subjects_lists \ + -B $HOME/my_fs_license.txt \ + ./fastsurfer-cpu-{{ FASTSURFER_VERSION }}.sif \ /fastsurfer/brun_fastsurfer.sh \ --surf_only \ - --subjects_list /lists/subjects_list.txt \ + --subjects_list $HOME/subjects_lists/subjects_list.txt \ --parallel_surf max \ - --sd /output \ - --fs_license /fs_license/license.txt \ - --3T + --sd $HOME/my_fastsurfer_analysis \ + --fs_license $HOME/my_fs_license.txt \ + --3T --threads 4 ``` A dedicated subfolder will be used for each subject within the target directory. diff --git a/recon_surf/utils/README.md b/recon_surf/utils/README.md index 585c603f0..c58768232 100644 --- a/recon_surf/utils/README.md +++ b/recon_surf/utils/README.md @@ -1,9 +1,9 @@ -# Utilities - +Utilities +========= This directory contains some useful utility scripts. -## Command Time Extraction - +Command Time Extraction +----------------------- The `extract_recon_surf_time_info.py` script can be used to generate a yaml file containing information on the commands executed in recon_surf from a `recon-surf.log` file. Every command has a corresponding entry, which includes the information: * cmd_name: the full command @@ -20,7 +20,6 @@ Entries are grouped according to the section in `recon_surf.sh` in which the com * `--time_units`: Units for duration: s (seconds) or m (minutes; default) ### Example - The following will extract recon_surf command time information from `123456/scripts/recon-surf.log` and save it in `123456/scripts/recon-surf_times.yaml` (with durations in minutes). ``` diff --git a/srun_fastsurfer.sh b/srun_fastsurfer.sh index 3258967f5..362583262 100755 --- a/srun_fastsurfer.sh +++ b/srun_fastsurfer.sh @@ -401,13 +401,13 @@ check_fs_license "$fs_license" check_seg_surf_only "$seg_only" "$surf_only" check_out_dir "$out_dir" -if [[ "$cpu_only" == "true" ]] && [[ "$timelimit_seg" -lt 6 ]] +if [[ "$cpu_only" == "true" ]] && [[ "$timelimit_seg" -lt 11 ]] then log "WARNING!!!" log "------------------------------------------------------------------------" log "You specified the segmentation shall be performed on the cpu, but the" - log "time limit per segmentation is less than 6 minutes (default is optimized " - log "for GPU acceleration @ 5 minutes). This is very likely insufficient!" + log "time limit per segmentation is less than 11 minutes (default is optimized " + log "for GPU acceleration @ 10 minutes). This is very likely insufficient!" 
log "------------------------------------------------------------------------" fi diff --git a/stools.sh b/stools.sh index 42263cf77..717c07738 100755 --- a/stools.sh +++ b/stools.sh @@ -1,6 +1,6 @@ #!/bin/bash -# script for functions used by srun_fastsurfer.sh and srun_freesufer.sh +# script for functions used by srun_fastsurfer.sh and brun_freesufer.sh function read_cases () { diff --git a/test/README.md b/test/README.md index e38298502..eb11dbdc6 100644 --- a/test/README.md +++ b/test/README.md @@ -3,4 +3,4 @@ Test documentation This is not an API or user documentation and thus is not part of doc. -There is currently exactly one test suite called [quicktest](quicktest/README.md). \ No newline at end of file +There is currently exactly one test suite called [quicktest](quicktest/README.md). \ No newline at end of file diff --git a/test/quicktest/README.md b/test/quicktest/README.md index 3c3c22652..551a9d3f3 100644 --- a/test/quicktest/README.md +++ b/test/quicktest/README.md @@ -13,12 +13,12 @@ The `quicktest` suite requires - A definition of the test setup in the following environment variables: - `REF_DIR`: known-good reference data - `SUBJECTS_DIR`: to-compare/test data - - `SUBJECTS_LIST`: comma separated list of + - `SUBJECTS_LIST`: comma separated list of Test 1: Search for errors in to-compare log files ------------------------------------------------- -Contained in test_errors_in_logfiles.py +Contained in test_errors_in_logfiles.py Test 2: Check existence of expected files in to-compare subject directory diff --git a/test/quicktest/common.py b/test/quicktest/common.py index fb6a0757c..a629ddb42 100644 --- a/test/quicktest/common.py +++ b/test/quicktest/common.py @@ -90,7 +90,7 @@ def __init__(self, config_file: Path): def threshold(self, label_or_key: int | str) -> tuple[str, float]: """ Return a threshold for a label or key. - + Parameters ---------- label_or_key : int | str diff --git a/tools/Docker/README.md b/tools/Docker/README.md index 6c8b46afe..0cb6c9e9c 100644 --- a/tools/Docker/README.md +++ b/tools/Docker/README.md @@ -1,8 +1,9 @@ -# FastSurfer Docker Support -## Pull FastSurfer from DockerHub +FastSurfer Docker Support +========================= -We provide pre-built Docker images with support for nVidia GPU-acceleration and for CPU-only use on [Docker Hub](https://hub.docker.com/r/deepmi/fastsurfer/tags). -In order to quickly get the latest Docker image, simply execute: +Pull FastSurfer from DockerHub +------------------------------ +We provide pre-built Docker images with support for nVidia GPU-acceleration and for CPU-only use on [Docker Hub](https://hub.docker.com/r/deepmi/fastsurfer/tags). In order to quickly get the latest Docker image, simply execute: ```bash docker pull deepmi/fastsurfer @@ -10,20 +11,22 @@ docker pull deepmi/fastsurfer This will download the newest, official FastSurfer image with support for nVidia GPUs. -Image are named and tagged as follows: `deepmi/fastsurfer:-`, where `` is `gpu` for support of nVidia GPUs and `cpu` without hardware acceleration (the latter is smaller and thus faster to download). -Similarly, `` can be a version string (`latest` or `v#.#.#`, where `#` are digits, for example `v2.2.2`), for example: +Image are named and tagged as follows: `deepmi/fastsurfer:-`, where `` is `gpu` for support of NVIDIA GPUs and `cpu` without hardware acceleration (the latter is smaller and thus faster to download). 
+Similarly, `<version>` can be a version string (`latest` or `v#.#.#`, where `#` are digits, for example `v2.5.0`), for example:
 
 ```bash
-docker pull deepmi/fastsurfer:cpu-v2.2.2
+docker pull deepmi/fastsurfer:cpu-v2.5.0
 ```
 
-### Running the official Docker Image
-After pulling the image, you can start a FastSurfer container and process a T1-weighted image (both segmentation and surface reconstruction) with the following command:
+Running the (official) Docker Image
+-----------------------------------
+After pulling the image, you can start a FastSurfer container and process a T1-weighted image (both segmentation and
+surface reconstruction) with the following command:
 
 ```bash
-docker run --gpus all -v /home/user/my_mri_data:/data \
-           -v /home/user/my_fastsurfer_analysis:/output \
-           -v /home/user/my_fs_license_dir:/fs_license \
+docker run --gpus all -v $HOME/my_mri_data:/data \
+           -v $HOME/my_fastsurfer_analysis:/output \
+           -v $HOME/my_fs_license_dir:/fs_license \
            --rm --user $(id -u):$(id -g) deepmi/fastsurfer:latest \
            --fs_license /fs_license/license.txt \
            --t1 /data/subjectX/t1-weighted.nii.gz \
@@ -31,26 +34,25 @@ docker run --gpus all -v /home/user/my_mri_data:/data \
            --threads 4 --3T # and more flags
 ```
 
-#### Docker Flags
-* `--gpus`: This flag is used to access GPU resources. With it, you can also specify how many GPUs to use. In the example above, _all_ will use all available GPUS. To use a single one (e.g. GPU 0), set `--gpus device=0`. To use multiple specific ones (e.g. GPU 0, 1 and 3), set `--gpus "device=0,1,3"`.
-* `-v`: This commands mount your data, output and directory with the FreeSurfer license file into the docker container. Inside the container these are visible under the name following the colon (in this case /data, /output, and /fs_license).
-* `--rm`: The flag takes care of removing the container once the analysis finished.
-* `-d`: This is optional. You can add this flag to run in detached mode (no screen output and you return to shell)
-* `--user $(id -u):$(id -g)`: Run the container with your account (your user-id and group-id), which are determined by `$(id -u)` and `$(id -g)`, respectively. Running the docker container as root `-u 0:0` is strongly discouraged.
+### Docker Flags
+* `--gpus`: This argument is used to access GPU resources. With it, you can also specify how many GPUs to use. In the example above, `all` will make every GPU available to FastSurfer in the Docker container. To use a single one (e.g. GPU 0), set `--gpus device=0`. To use multiple specific GPUs (e.g. GPUs 0, 1 and 3), use `--gpus "device=0,1,3"`.
+* `-v`: This argument defines which data is shared between the host system and the docker container, and how. By default, no data is shared between the host and the container; `-v` is used to explicitly share data. It follows the format `-v <host-path>:<container-path>:<options>`. In its simplest form, `<host-path>` and `<container-path>` are the same, and folders inside the container are the same as on the host. `:<options>` may be left out, or set to `:ro` to indicate that files from this folder may not be modified by the docker container (read-only). The following files need to be shared: input files, the output folder (subjects directory) and the FreeSurfer license.
+* `--user $(id -u):$(id -g)`: Which user the container runs as (relevant for file access; the user-id and group-id, **required**!). `$(id -u)` and `$(id -g)` determine the user and group, respectively. Running the docker container as root (`--user 0:0`) is strongly discouraged and must be combined with the FastSurfer flag `--allow_root`.
+* `--rm`: The flag takes care of removing the container (cleanup of the container) once the analysis has finished (optional, but recommended).
+* `-d`: You can add this flag to run in detached mode (no screen output, and you return to the shell; optional).
 
 #### Advanced Docker Flags
 * `--group-add <group>`: If additional user groups are required to access files, additional groups may be added via `--group-add <group>[,<group>...]` or `--group-add $(id -G <user>)`.
 
-#### FastSurfer Flags
-* The `--fs_license` points to your FreeSurfer license which needs to be available on your computer in the `my_fs_license_dir` that was mapped above.
-* The `--t1` points to the t1-weighted MRI image to analyse (full path, with mounted name inside docker: /home/user/my_mri_data => /data)
-* The `--sid` is the subject ID name (output folder name)
-* The `--sd` points to the output directory (its mounted name inside docker: /home/user/my_fastsurfer_analysis => /output)
-* [more flags](../../doc/overview/FLAGS.md#fastsurfer-flags)
+### FastSurfer Flags
+In principle, the same as [basic run_fastsurfer.sh](../../doc/scripts/RUN_FASTSURFER.md#required-arguments) with the
+following modifications:
+* The `--fs_license` cannot be auto-detected and must be passed. It must point to your FreeSurfer license as it is
+  accessible inside the container (check `-v` [above](#docker-flags); `--fs_license` works in concert with `-v`).
+* `--t1` and `--sd` are required! They work in concert with `-v` [above](#docker-flags).
+* `--sid` is the subject ID name (output folder), same as in [basic run_fastsurfer.sh](../../doc/scripts/RUN_FASTSURFER.md).
 
-Note, that the paths following `--fs_license`, `--t1`, and `--sd` are __inside__ the container, not global paths on your system, so they should point to the places where you mapped these paths above with the `-v` arguments.
-
-A directory with the name as specified in `--sid` (here subjectX) will be created in the output directory (specified via `--sd`). So in this example output will be written to /home/user/my_fastsurfer_analysis/subjectX/ . Make sure the output directory is empty, to avoid overwriting existing files.
+A directory with the name as specified in `--sid` (here subjectX) will be created in the output directory (specified via `--sd`). So in this example output will be written to `$HOME/my_fastsurfer_analysis/subjectX/`. Make sure the output directory is empty, to avoid overwriting existing files.
 
 All other available flags are identical to the ones explained on the main page [README](../../README.md).
 
@@ -61,8 +63,8 @@ All other available flags are identical to the ones explained on the main page [
 How? Docker does not mount the home directory by default, so unless you manually set the `HOME` environment variable, all should be fine.
 
-## FastSurfer Docker Image Creation
-
+FastSurfer Docker Image Creation
+--------------------------------
 Within this directory, we currently provide a build script and Dockerfile to create multiple Docker images for users (usually developers) who wish to create their own Docker images for 3 platforms:
 
 * Nvidia / CUDA (Example 1)
 
@@ -81,7 +83,7 @@ Also note, in order to run our Docker containers on a Mac, users need to increas
 
 The build script `build.py` supports additional args, targets and options, see `python tools/Docker/build.py --help`.
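+For example, a quick way to check that Docker and the buildx plugin are available before building is the standard
+Docker CLI (the exact versions reported will of course differ on your system):
+
+```bash
+docker --version
+docker buildx version
+```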
 Note, that the build script's main function is to select parameters for build args, but also create the FastSurfer-root/BUILD.info file, which will be used by FastSurfer to document the version (including git hash of the docker container). This BUILD.info file must exist for the docker build to be successful.
 
-In general, if you specify `--dry_run` the command will not be executed but sent to stdout, so you can run `python build.py --device cuda --dry_run | bash` as well. Note, that build.py uses some dependencies from FastSurfer, so you will need to set the PYTHONPATH environment variable to the FastSurfer root (include of `FastSurferCNN` must be possible) and we only support Python 3.10.
+In general, if you specify `--dry_run`, the command will not be executed but sent to stdout, so you can run `python build.py --device cuda --dry_run | bash` as well.
 
 By default, the build script will tag your image as `"fastsurfer:[{device}-]{version_tag}"`, where `{version_tag}` is `{version-identifier from pyproject.toml}_{current git-hash}` and `{device}` is the value to `--device` (omitted for `cuda`), but a custom tag can be specified by `--tag {tag_name}`.
 
@@ -89,59 +91,57 @@ By default, the build script will tag your image as `"fastsurfer:[{device}-]{ver
 
 Note, we recommend using BuildKit to build docker images (e.g. `DOCKER_BUILDKIT=1` -- the build.py script already always adds this). To install BuildKit, run `wget -qO ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/<version>/buildx-<version>.<platform>`, for example `wget -qO ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v0.12.1/buildx-v0.12.1.linux-amd64`. See also https://github.com/docker/buildx#manual-download.
 
 ### Example 1: Build GPU FastSurfer Image
-
 In order to build your own Docker image for FastSurfer (FastSurferCNN + recon-surf; on GPU; including FreeSurfer) yourself simply execute the following command after traversing into the *Docker* directory:
 
 ```bash
-PYTHONPATH=<FastSurfer root>
-python build.py --device cuda --tag my_fastsurfer:cuda
+python tools/Docker/build.py --device cuda --tag my_fastsurfer:cuda
 ```
 
 The build script allows more specific options that specify different CUDA options as well (see `build.py --help`).
 
 For running the analysis, the command is the same as above for the prebuilt option:
 
 ```bash
-docker run --gpus all -v /home/user/my_mri_data:/data \
-    -v /home/user/my_fastsurfer_analysis:/output \
-    -v /home/user/my_fs_license_dir:/fs_license \
-    --rm --user $(id -u):$(id -g) my_fastsurfer:cuda \
-    --fs_license /fs_license/license.txt \
-    --t1 /data/subjectX/t1-weighted.nii.gz \
-    --sid subjectX --sd /output
+docker run --gpus all \
+    -v $HOME/my_mri_data:$HOME/my_mri_data \
+    -v $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \
+    -v $HOME/my_fs_license.txt:$HOME/my_fs_license.txt \
+    --rm --user $(id -u):$(id -g) my_fastsurfer:cuda \
+    --fs_license $HOME/my_fs_license.txt \
+    --t1 $HOME/my_mri_data/subjectX/t1-weighted.nii.gz \
+    --sid subjectX --sd $HOME/my_fastsurfer_analysis \
+    --threads 4 --3T
 ```
 
 ### Example 2: Build CPU FastSurfer Image
-
 In order to build the docker image for FastSurfer (FastSurferCNN + recon-surf; on CPU; including FreeSurfer) simply go to the parent directory (FastSurfer) and execute the docker build command directly:
 
 ```bash
-python build.py --device cpu --tag my_fastsurfer:cpu
+python tools/Docker/build.py --device cpu --tag my_fastsurfer:cpu
 ```
 
 As you can see, only the `--device` argument to the build command is changed from `cuda` to `cpu`.
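+If you want to double-check that the image was created before running it, you can list it with the standard Docker CLI
+(the repository name below matches the `--tag` used above):
+
+```bash
+docker image ls my_fastsurfer
+```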
-For running the analysis, the command is basically the same as above, except for removing the `--gpus all` GPU option:
+To run the analysis, the command is basically the same as above, except for removing the `--gpus all` GPU option:
 
 ```bash
-docker run -v /home/user/my_mri_data:/data \
-    -v /home/user/my_fastsurfer_analysis:/output \
-    -v /home/user/my_fs_license_dir:/fs_license \
+docker run -v $HOME/my_mri_data:$HOME/my_mri_data \
+    -v $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \
+    -v $HOME/my_fs_license.txt:$HOME/my_fs_license.txt \
     --rm --user $(id -u):$(id -g) my_fastsurfer:cpu \
-    --fs_license /fs_license/license.txt \
-    --t1 /data/subjectX/t1-weighed.nii.gz \
-    --sid subjectX --sd /output
+    --fs_license $HOME/my_fs_license.txt \
+    --t1 $HOME/my_mri_data/subjectX/t1-weighted.nii.gz \
+    --device cpu \
+    --sid subjectX --sd $HOME/my_fastsurfer_analysis \
+    --threads 16 --3T
 ```
 
 FastSurfer will automatically detect that no GPU is available and use the CPU.
 
 ### Example 3: Experimental Build for AMD GPUs
-
-Here we build an experimental image to test performance when running on AMD GPUs. Note that you need a supported OS and Kernel version and supported GPU for the RocM to work correctly. You need to install the Kernel drivers into
-your host machine kernel (`amdgpu-install --usecase=dkms`) for the amd docker to work. For this follow:
-https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html#rocm-install-quick, https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/amdgpu-install.html#amdgpu-install-dkms and https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html
+Here we build an experimental image to test performance when running on AMD GPUs. Note that you need a supported OS and kernel version and a supported GPU for ROCm to work correctly. You need to install the kernel drivers into your host machine kernel (`amdgpu-install --usecase=dkms`) for the AMD Docker image to work. For this follow: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html#rocm-install-quick, https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/amdgpu-install.html#amdgpu-install-dkms and https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html
 
 ```bash
-python build.py --device rocm --tag my_fastsurfer:rocm
+python tools/Docker/build.py --device rocm --tag my_fastsurfer:rocm
 ```
 
 and run segmentation only:
 
@@ -149,11 +149,15 @@ and run segmentation only:
 ```bash
 docker run --rm --security-opt seccomp=unconfined \
     --device=/dev/kfd --device=/dev/dri --group-add video \
-    -v /home/user/my_mri_data:/data \
-    -v /home/user/my_fastsurfer_analysis:/output \
-    my_fastsurfer:rocm \
-    --t1 /data/subjectX/t1-weighted.nii.gz \
-    --sid subjectX --sd /output
+    -v $HOME/my_mri_data:$HOME/my_mri_data \
+    -v $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \
+    -v $HOME/my_fs_license.txt:$HOME/my_fs_license.txt \
+    --user $(id -u):$(id -g) my_fastsurfer:rocm \
+    --fs_license $HOME/my_fs_license.txt \
+    --t1 $HOME/my_mri_data/subjectX/t1-weighted.nii.gz \
+    --sid subjectX --sd $HOME/my_fastsurfer_analysis \
+    --parallel \
+    # alternatively: --device cuda is also possible (or --device cuda:0 to specify the GPU)
 ```
 
 In conflict with the official ROCm documentation (above), we also needed to add the group render `--group-add render` (in addition to `--group-add video`).
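+To see which groups own the GPU device nodes on your host (and therefore which groups the container user needs; group
+names can differ between distributions), you can check with a plain `ls`:
+
+```bash
+ls -l /dev/kfd /dev/dri/
+```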
@@ -163,16 +167,20 @@ Note, we tested on an AMD Radeon Pro W6600, which is [not officially supported](
 ```bash
 docker run --rm --security-opt seccomp=unconfined \
     --device=/dev/kfd --device=/dev/dri --group-add video --group-add render \
-    -v /home/user/my_mri_data:/data \
-    -v /home/user/my_fastsurfer_analysis:/output \
-    -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
-    my_fastsurfer:rocm \
-    --t1 /data/subjectX/t1-weighted.nii.gz \
-    --sid subjectX --sd /output
+    -v $HOME/my_mri_data:$HOME/my_mri_data \
+    -v $HOME/my_fastsurfer_analysis:$HOME/my_fastsurfer_analysis \
+    -v $HOME/my_fs_license.txt:$HOME/my_fs_license.txt \
+    -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
+    --user $(id -u):$(id -g) my_fastsurfer:rocm \
+    --fs_license $HOME/my_fs_license.txt \
+    --t1 $HOME/my_mri_data/subjectX/t1-weighted.nii.gz \
+    --sid subjectX --sd $HOME/my_fastsurfer_analysis \
+    --parallel \
+    # alternatively: --device cuda is also possible (or --device cuda:0 to specify the GPU)
 ```
 
-## Build docker image with attestation and provenance
-
+Build docker image with attestation and provenance
+--------------------------------------------------
 To build a docker image with attestation and provenance, i.e. Software Bill Of Materials (SBOM) information, several requirements have to be met:
 
 1. The image must be built with version v0.11+ of BuildKit (we recommend you [install BuildKit](#buildkit) independent of attestation).
 
@@ -204,16 +212,16 @@ To build a docker image with attestation and provenance, i.e. Software Bill Of M
 
    Also note, that the image storage location with containerd is not defined by the docker config file `/etc/docker/daemon.json`, but by the containerd config `/etc/containerd/config.toml`, which will likely not exist. You can [create a default config](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#customizing-containerd) file with `containerd config default > /etc/containerd/config.toml`, in this config file edit the `"root"`-entry (default value is `/var/lib/containerd`).
 4. Finally, you can now build the FastSurfer image with `python tools/Docker/build.py ... --attest`. This will add the additional flags to the docker build command.
 
-## Setting the ssl_verify parameter of mamba
-
+Setting the ssl_verify parameter of mamba
+-----------------------------------------
 The `build.py` script supports the `--ssl_verify` flag, which can be passed `"False"` or the path to an alternative root certificate.
 
 ```bash
-python build.py --device cpu --tag my_fastsurfer:cpu --ssl_verify /path/to/custom-cert.srt
+python tools/Docker/build.py --device cpu --tag my_fastsurfer:cpu --ssl_verify /path/to/custom-cert.srt
 ```
 
-## Building for release
-
+Building for release
+--------------------
 Make sure you are building on a machine that has [containerd-storage and Buildkit](#build-docker-image-with-attestation-and-provenance).
 
 ```bash
@@ -221,7 +229,7 @@ Make sure, you are building on a machine that has [containerd-storage and Buildk
 build_dir=$HOME/FastSurfer-build
 img=deepmi/fastsurfer
 # the version can be identified with: $build_dir/run_fastsurfer.sh --version
-version=2.4.3
+version=2.5.0
 # the cuda and rocm version can be identified with: python $build_dir/tools/Docker/build.py --help | grep -E ^[[:space:]]+--device
 cuda=126
 cudas=("cuda118" "cuda124" "cuda$cuda")