Registration
To register for the challenge, open a pull request (PR) to the SuperElastix GitHub repository with your method (how to make a PR). You are officially signed up for the challenge once your first PR is merged into SuperElastix. Both individual and team participation are welcome.
You are free to withdraw from the challenge at any point, but note that your code will remain part of the SuperElastix git history even if you later submit a PR that deletes it.
Participation
To participate in the challenge you must:
- Implement your method in SuperElastix OR submit a parameter file that uses an existing component (guidelines); a sketch of such a parameter file follows this list. Only code in the official SuperElastix repository on GitHub will be 1) considered for participation and 2) benchmarked and published on the leaderboards.
- Describe your method in the README.md file of your submission directory in the GitHub repository.
- Use open source software. You are free to include any library in your method as long as it is compatible with the Apache 2.0 License and can be built as part of the SuperElastix build system.
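For illustration, a minimal parameter file for an existing component might look like the sketch below. This is a hypothetical blueprint: the component name, parameter keys, and connection entries are placeholders, and the authoritative JSON schema is the one described in the guidelines linked above.

```json
{
    "Components": [
        {
            "Name": "RegistrationMethod",
            "NameOfClass": "ElastixComponent",
            "Transform": "BSplineTransform",
            "Metric": "AdvancedMattesMutualInformation"
        }
    ],
    "Connections": [
        { "Out": "FixedImage", "In": "RegistrationMethod" },
        { "Out": "MovingImage", "In": "RegistrationMethod" }
    ]
}
```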
In addition:
- Registration of a single pair of images may take at most one hour of wall clock time.
- External data (e.g. a pretrained TensorFlow model) is allowed and should be downloaded at build time via the module’s CMake script (see the sketch after this list).
- Participants can submit multiple parameter files (within reasonable limits).
- Leaderboards will be updated regularly throughout the challenge.
- All participants at the time of paper submission will be invited to collaborate on a journal paper summarizing the challenge outcomes. We aim to submit the results to one of the leading journals in the field.
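As an illustration of the external-data rule, a module’s CMake script could fetch a pretrained model roughly as in the sketch below. The URL and file name are placeholders, and where exactly this hooks into the SuperElastix build system depends on your module; in practice you should also pin the download with EXPECTED_HASH.

```cmake
# Sketch: fetch a pretrained model when CMake configures the module.
# The URL below is a placeholder, not a real challenge resource.
set(model_file "${CMAKE_CURRENT_BINARY_DIR}/pretrained_model.pb")

if(NOT EXISTS "${model_file}")
  file(DOWNLOAD
    "https://example.com/models/pretrained_model.pb"
    "${model_file}"
    SHOW_PROGRESS
    STATUS download_status
  )
  # file(DOWNLOAD) reports "<code>;<message>"; 0 means success.
  list(GET download_status 0 status_code)
  if(NOT status_code EQUAL 0)
    message(FATAL_ERROR "Failed to download pretrained model: ${download_status}")
  endif()
endif()
```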
Rights
The challenge organization does not claim ownership of, or any rights to, the developed works, but the code must be published under the Apache 2.0 License: the code needs to be in SuperElastix to be benchmarked, and SuperElastix itself is Apache 2.0 licensed.
Data
Submissions will be tested on four lung data sets and five brain data sets. All data sets are open access, except for the SPREAD lung data set, which will be kept private. We do not provide direct download links, but participants are encouraged to download the data from the original sources (see the data page or click 'instructions' below for details). The data sets are:
| Name | Modality | Access | Registrations | Ground truth |
| --- | --- | --- | --- | --- |
| POPI (download, instructions) | 4DCT Lung | Public | 6 EF intra-subject, same timepoint | Points |
| DIR-LAB (download, instructions) | 4DCT Lung | Public | 10 EF intra-subject, same timepoint | Points |
| SPREAD (hidden data) | CT Lung | Private | 21 SF intra-subject, different timepoint | Points |
| EMPIRE (download, instructions) | CT Lung | Public | 30 intra-subject, same timepoint | Points |
| LPBA40 (download, instructions) | MR Brain | Public | 40 inter-subject | Segmentation |
| IBSR18 (download, instructions) | MR Brain | Public | 18 inter-subject | Segmentation |
| CUMC12 (download, instructions) | MR Brain | Public | 12 inter-subject | Segmentation |
| MGH10 (download, instructions) | MR Brain | Public | 10 inter-subject | Segmentation |
| BRAINS (hidden data for WBIR2018 evaluation) | MR Brain | Public | 20 inter-subject | Segmentation |
Evaluation
Evaluations are pooled over all data sets within a category. For example, a submission to the lung category will be evaluated on all lung data sets, and the ranking will subsequently be calculated from the mean performance over all images in all data sets.
For lung data, we evaluate the following metrics:
- Target Registration Error (TRE)
- Mean Hausdorff distance
- Ratio of singularities in the deformation field to the number of voxels
- Inverse consistency error
All lung data sets use points for ground truth.
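For reference, the lung metrics can be written as below. These are the standard definitions in our own notation; the challenge's exact implementation (e.g. how singularities are detected) may differ. Here \(T_{fm}\) is the estimated fixed-to-moving transform, \(T_{mf}\) the moving-to-fixed transform, \((p_i, q_i)\) are the \(N\) corresponding landmark pairs, \(\Omega\) is the set of voxel positions, and \(A, B\) are the two point sets.

```latex
% Standard definitions; notation is ours, not the challenge's.
\begin{align*}
\mathrm{TRE} &= \frac{1}{N} \sum_{i=1}^{N} \bigl\| T_{fm}(p_i) - q_i \bigr\| \\
\mathrm{HD}(A, B) &= \max\Bigl\{ \max_{a \in A} \min_{b \in B} \|a - b\|,\;
                                 \max_{b \in B} \min_{a \in A} \|a - b\| \Bigr\} \\
\text{singularity ratio} &= \frac{\bigl|\{\, x \in \Omega : \det \nabla T_{fm}(x) \le 0 \,\}\bigr|}{|\Omega|} \\
\mathrm{ICE} &= \frac{1}{|\Omega|} \sum_{x \in \Omega} \bigl\| T_{mf}\bigl(T_{fm}(x)\bigr) - x \bigr\|
\end{align*}
```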
For brain data, we evaluate the following metrics:
- Mean Dice score
- Mean Hausdorff distance
- Ratio of singularities in the deformation field to the number of voxels
- Inverse consistency error
All brain data sets use manual segmentations for ground truth.
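The distance-based metrics are defined as for the lung data; the overlap metric is the standard Dice score, computed per label and averaged. For two segmentations \(A\) and \(B\) of the same structure:

```latex
% Standard Dice overlap between two segmentations A and B.
\mathrm{Dice}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}
```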
Within each category, we compute the rankings for individual metrics. We do not compute a pooled ranking. All results are displayed on the leaderboard. The leaderboard is updated weekly.