MIDRC COVIDx Challenge


Brought to you by the MIDRC Grand Challenges Working Group and MedICI

The goal of this Challenge is to train an AI/machine learning model in the task of distinguishing between COVID-negative and COVID-positive patients using frontal-view portable chest radiographs (CXRs). The Challenge has 3 phases: (1) a training phase (which includes practice Docker submissions), (2) a validation phase, and (3) a test phase (see "Important Dates" below). Please make sure to familiarize yourself with the "Terms and Conditions" before considering participation (see link on the left).

Prizes

Cash prizes, generously provided by the SPIE (International Society for Optics and Photonics), are available to eligible teams (see "Terms and Conditions") as follows:

    • $5,500 for 1st place
    • $3,000 for 2nd place
    • $1,500 for 3rd place

Challenge Logistics

During the Challenge training phase, you are encouraged to use publicly available data for model development and training, such as the MIDRC open data commons at data.midrc.org. We provide instructions on the COVIDx Challenge GitHub repo to help you build a training cohort from data.midrc.org (see also the "Get Data" link under the "Participate" tab). All model training and fine-tuning needs to be performed on your own hardware. The Challenge platform submission system should be used only for inference with a trained model, during the Docker practice submission period and during the validation and test phases of the Challenge.

During the Docker practice submission dates within the training phase, a small set of chest radiographs in DICOM format will be available for download. It is mandatory to practice submissions during the Docker practice time period. During this time, the Challenge platform will perform inference on this very limited number of practice DICOM images using your submitted Docker containers. This period is intended to allow you to troubleshoot general issues with Docker submission and with reading DICOM images, and to verify that your algorithm's per-case output is the same locally as on the Challenge platform. This small dataset is not intended for training. Please use this opportunity to "test drive" the Docker submission process and resolve any issues you encounter to minimize potential problems during later phases. Only limited technical assistance will be available after the training phase. Please always (and especially during this practice period) use the "Forums" tab to post any questions so that Challenge organizers can help, if needed, and other participants can benefit from the questions and answers.

During the Challenge validation phase, you will submit Docker containers with your trained models to perform inference on the unpublished validation set (which will not be available for download) on the Challenge platform. The validation phase allows you to (1) further familiarize yourself with the Dockerization and submission process for your code and trained model(s) and (2) fine-tune your model(s). A leaderboard will be available during the validation phase to promote friendly competition. A maximum of 10 submissions per team is allowed in this phase.

During the Challenge test phase, you will submit your most promising model(s) for inference and evaluation on the unpublished test set (which will not be available for download) on the Challenge platform. A leaderboard will not be available during the test phase. Performance of test phase submissions will be reported after conclusion of the Challenge. A maximum of 3 submissions per team is allowed in this phase. 

Important Points

  1. CXRs are not available for download except for a small set during the Docker practice submission time period. 

  2. It is mandatory to practice submitting a Docker archive to the platform during the Docker practice submission time period. 

  3. CXRs used on the Challenge platform are all portable chest radiographs in the anteroposterior (AP) view. There will be no pediatric exams. For more details see the "Get Data" link under the "Participate" tab.

  4. CXRs on the Challenge platform are available in DICOM format only. Any potential conversion from DICOM to a different image format must be performed within your submitted Docker container (see the DICOM-reading sketch after this list). 

  5. Within the Challenge platform, Docker submissions will not be allowed to access the internet (e.g., downloading pre-trained ImageNet weights will not be possible). Everything needed to successfully run your model needs to be included in your submitted Docker container.
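
For illustration, below is a minimal sketch of reading a DICOM CXR inside a container, assuming pydicom and numpy are bundled in your Docker image; the file name and normalization steps are examples only, not a prescribed preprocessing pipeline.

# Minimal sketch: read a DICOM CXR and return a normalized array.
# Assumes pydicom and numpy are installed inside the Docker container.
import numpy as np
import pydicom

def load_cxr(dicom_path: str) -> np.ndarray:
    """Read a DICOM chest radiograph and scale its pixels to [0, 1]."""
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)

    # Apply rescale slope/intercept when present in the header.
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    pixels = pixels * slope + intercept

    # MONOCHROME1 images are stored inverted (low values appear bright).
    if getattr(ds, "PhotometricInterpretation", "") == "MONOCHROME1":
        pixels = pixels.max() - pixels

    # Scale to [0, 1] before feeding the image to a model.
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels /= pixels.max()
    return pixels

# Example call with a hypothetical file path:
# image = load_cxr("/input/case_0001.dcm")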

Please go to "Challenge Details" (left) to learn more about this Challenge and, if you like, go to "Tutorials" to watch tutorial videos on Docker installation, using Docker, downloading data from data.midrc.org, and the submission process to this Challenge platform. Please also review "Terms and Conditions" (left) for additional important information. 

Once you are ready to participate, go to the "Participate" tab and log in or register for an account when prompted. You are required to agree to the Challenge "Terms and Conditions" to access the Challenge platform.

Discussion of the Challenge on the Forum ("Forums" tab) is encouraged. The Forum should be used for any questions you may have about this Challenge. 

 


Important Dates

  • Monday, August 29, 2022 - Team registration opens

  • September 6 - Training phase and (mandatory) Docker practice submission open

  • October 3, noon Eastern time - Training phase and Docker practice submission end

  • October 3, noon Eastern time - Validation phase begins 

  • October 17 - Registration closes

  • October 24, noon Eastern time - Validation phase closes

  • October 24, noon Eastern time - Test phase begins 

  • November 7, 5 p.m. Eastern time - Final Dockerized submission deadline; conclusion of Challenge

  • November 21 - Top finishers notified and rankings released 

  • November 27 through December 1 – Challenge results and top-ranked finishers announced at RSNA 2022, Chicago

Note that the phase start and end times displayed by the Challenge platform (see the phase listing near the bottom of this page) are in UTC, which is 5 hours ahead of US Eastern Standard Time.

MIDRC COVIDx Challenge: Details 

Challenge Task

Distinguish between portable chest radiographs (CXRs) of COVID-positive and COVID-negative patients.

Performance Metrics

The primary performance metric used to rank submissions is the area under the ROC curve (AUC) in the task of distinguishing between CXRs of COVID-positive and COVID-negative patients. A statistically significant difference in performance between the winner and runners-up is not required to "win" the Challenge. In order to win monetary awards, however, it is required that your model's performance significantly exceeds random guessing (p<0.05) and that you make your trained models public on the MIDRC GitHub (see "Terms and Conditions"). Only classification performance on the test set will be used to rank submissions. A secondary performance metric, log-loss, will be used to break ties, if needed.
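
As a local sanity check, you can compute both metrics on your own validation split before submitting. The snippet below is a minimal sketch assuming scikit-learn and hypothetical labels and scores; the official scoring is performed by the organizers on the Challenge platform, and the log-loss line assumes your scores are calibrated probabilities in [0, 1].

# Minimal sketch of the primary and secondary metrics (assumes scikit-learn).
from sklearn.metrics import log_loss, roc_auc_score

y_true = [0, 0, 1, 1, 1]              # hypothetical reference labels (1 = COVID-positive)
y_score = [0.2, 0.4, 0.3, 0.8, 0.9]   # hypothetical model output scores

print("AUC:", roc_auc_score(y_true, y_score))   # primary metric
print("log-loss:", log_loss(y_true, y_score))   # secondary metric (tie-break); needs scores in [0, 1]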

Output of Your Model/Algorithm

The output of your model should be a score proportional to the likelihood that a chest radiograph depicts a COVID-positive patient. This score can have any range but should be continuous or ordinal, not binary.

Formatting the Output of Your Model

The output of your method should be provided in a single comma-separated CSV file with image name in the first column and output score in the second column. 

Make sure the header and rows follow this exact format:

 

fileNamePath,class
<dicom-name-1>.dcm,<float likelihood of being COVID-positive>
<dicom-name-2>.dcm,<float likelihood of being COVID-positive>
<dicom-name-3>.dcm,<float likelihood of being COVID-positive>
...
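
For example, a minimal sketch of producing this file in Python is shown below; the input and output directory paths are assumptions that depend on how your Docker container is organized, and predict() stands in for inference with your trained model.

# Minimal sketch: write per-image scores in the required CSV format.
# The paths below are hypothetical and depend on your container layout.
import csv
import glob
import os

INPUT_DIR = "/input"                    # assumed directory containing the .dcm files
OUTPUT_CSV = "/output/predictions.csv"  # assumed location of the results file

def predict(dicom_path: str) -> float:
    """Placeholder for your trained model's inference on a single image."""
    return 0.5  # replace with a real likelihood-like score

with open(OUTPUT_CSV, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["fileNamePath", "class"])  # header required by the Challenge
    for path in sorted(glob.glob(os.path.join(INPUT_DIR, "*.dcm"))):
        writer.writerow([os.path.basename(path), predict(path)])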
 

Submissions to the Challenge Platform

You need to supply a zip archive that contains a Dockerfile, all necessary code, and a trained model to allow the Challenge platform to build a Docker Image to run and evaluate your model on the validation or test set, depending on the Challenge phase. Example zip archives suitable for submission are provided in the "Starting Kit" (go to the "Participate" tab, then to "Files"). The platform will be open for submissions during the validation and test phases. The Challenge organizers will use the submitted archives to run inference on the validation and test sets, respectively, and report the performance of your model back to you. Each trained model needs to be submitted in its own zip archive.

It is important to note that all model training and fine-tuning needs to be performed on the participants' own hardware. The Challenge platform only performs inference using trained models submitted in the required format (as described above) during the validation and test phases.

Training Phase

Only a few practice cases will be made available to you. These cases are not intended for model training.

Train your model(s) locally on your own computer. You are free to use in-house or publicly available data, MIDRC or otherwise, in the training of your model(s). We have provided instructions on the COVIDx Challenge GitHub repo to help you create a training cohort for download from data.midrc.org (see also the "Get Data" link under the "Participate" tab).

It is strongly recommended that you upload the starting kit examples and run them in the training phase to learn the mechanics of the upload and submission system. It is advisable to try out Dockerization of your code and watch the tutorial videos on submitting to this platform (see top of this page) and check out the example submissions (go to the "Participate" tab, then to "Files"). Build and run your Docker Image locally on your own computer to make sure it builds, runs, creates output in the specified format, and performs as expected. 

For mandatory Docker practice submissions, download the practice cases we provide (go to the "Participate" tab, then to "Files") to practice and troubleshoot Docker submission. The practice data set contains 10 portable frontal CXR images in DICOM format and a CSV file with the reference standard label for each image. This allows you to check whether your algorithm provides the same output when it is run on the platform as when it is run on your local computer. 
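
One simple way to make this per-case comparison is sketched below, assuming pandas, that the platform's per-case output for the practice cases can be retrieved, and hypothetical file names for the two prediction files.

# Minimal sketch: compare per-case scores from a local run against the scores
# produced on the Challenge platform (file names are hypothetical).
import pandas as pd

local = pd.read_csv("local_predictions.csv")        # produced on your own computer
platform = pd.read_csv("platform_predictions.csv")  # retrieved from the Challenge platform

merged = local.merge(platform, on="fileNamePath", suffixes=("_local", "_platform"))
merged["abs_diff"] = (merged["class_local"] - merged["class_platform"]).abs()
print(merged.sort_values("abs_diff", ascending=False))  # large differences indicate a problem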

Validation Phase

Data will not be made available to you in this phase. Instead, inference on the validation set will occur on the Challenge platform using your submitted inference code (trained model). In order to do so, submit to the Challenge platform a zip archive that includes all necessary code for the platform to build a Docker Image of your model. The validation phase is intended for you to verify that submission of your model in the required format (1) runs on the Challenge platform, (2) performs inference on the validation set, (3) creates output in the expected format, and (4) performs as expected. Fine-tuning of your model may be performed locally based on the performance of your model on the validation set. 

The CXRs in the validation set (inaccessible to you as a participant) are in DICOM format. Image preprocessing, if any, needs to be performed within your submitted code.  

Note that you should not submit a built Docker Image but rather a zip archive with all the elements for the Challenge platform to build and run a Docker Image with your trained model. Example zip archives suitable for submission are provided in the "Starting Kit" (go to the "Participate" tab, then to "Files").

Model performance in the validation phase will not be used to determine the final ranking in the Challenge. A leaderboard will be available during this phase. You may fine-tune your model(s) using the performance on the validation set as a guide, with a limited number of submissions (see "Terms and Conditions").

Test Phase

Data will not be made available to you in this phase. Submit your final zip archive(s) (that include all necessary code for the Challenge platform to build a Docker Image) for inference and evaluation on the test set. Note that you should not submit a built Docker Image but rather a zip archive with all the elements for the Challenge platform to build and run a Docker Image with your trained model.

The CXRs in the test set (inaccessible to you as a participant) are in DICOM format. Image preprocessing, if any, needs to be performed within your submitted code.

Performance during the test phase will determine the final ranking of submissions in the overall Challenge. There will be no leaderboard during this phase. Performance will be reported after conclusion of the Challenge. Submissions will be ranked using the primary performance metric. A statistically significant difference in performance between the winner and runners-up is not required to "win" the Challenge. To break ties, the secondary performance metric will be used, if needed. You have a limited number of submissions during the test phase (see "Terms and Conditions").

In the test phase, a description of your model and training data (plain text or Word file) needs to be included in your zip archive submission in order for your submission to be considered a valid submission, i.e., for its performance to be reported back to you and to be part of the Challenge. 

The Challenge Platform

The system specifications are as follows:

  • Azure VM: Standard_NC6s_v3
  • vCPUs: 6
  • RAM: 112 GB
  • Temp storage (SSD): 736 GB
  • GPUs: 1
  • GPU memory: 16 GB
  • Max uncached disk throughput: 20,000 IOPS / 200 MBps
  • Max NICs: 4

Note that internet connectivity is not provided within the Challenge platform. All necessary code, model weights, and library requirements need to be provided in your submission.
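
In practice this means any pretrained weights must be shipped inside your zip archive and loaded from disk at run time. The snippet below is a minimal sketch assuming PyTorch and a recent torchvision are installed in your container; the weight-file path is hypothetical.

# Minimal sketch: load bundled weights instead of downloading them.
# Assumes PyTorch and a recent torchvision; the weight-file path is hypothetical.
import torch
from torchvision.models import densenet121

model = densenet121(weights=None)  # do not fetch ImageNet weights from the internet
state = torch.load("/app/weights/densenet121_covidx.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()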

Local Computer Requirements

It is advisable to have Docker installed on your local computer so you can check locally how your code runs within a Docker Image. Go to https://docs.docker.com/ to learn more about how to install Docker on your own computer. The videos at the "Tutorials" link (left) provide additional information. 

Docker Images will be built and run on the Challenge platform with Docker version 20.10.13 or later, so, if possible, your local Docker installation should be at least that version.

MIDRC COVIDx Challenge: Terms and Conditions

The MIDRC COVIDx Challenge is organized in the spirit of cooperative scientific progress that benefits the common good and health outcomes for all. Your contributions could greatly advance the diagnosis and treatment of COVID-19.

By participating in this Challenge, each participant agrees to the following:

  • Anonymous participation is not allowed.
  • Participants from the same research group, company, or collaboration are required to participate as a team, i.e., form a team within the Challenge platform. Individual participants should form a single-user team. 
  • Entry by commercial entities is permitted but must be disclosed.
  • No conflict of interest may exist for any team to be considered in the final ranking as per the MIDRC Grand Challenge Conflict Policy.
  • To be considered for monetary prizes, participants must agree that MIDRC will make participants' trained models public on the MIDRC GitHub (the contents of the Docker containers and/or the source code). For further details and restrictions see the paragraph on performance metrics below. Furthermore, descriptions of participants’ methods and results may become part of presentations, publications, and subsequent analyses derived from the Challenge (with proper attribution to the participants) at the discretion of the organizers. While methods and results may become part of Challenge reports and publications, participants may choose not to disclose their identity and remain anonymous for the purpose of these reports. Cash prizes available to those eligible teams are generously provided by the SPIE (International Society for Optics and Photonics) as follows:
    • $5,500 for 1st place
    • $3,000 for 2nd place
    • $1,500 for 3rd place

As part of the registration process, participants will select one of two options:

  • Upon Challenge completion, our team agrees that our trained models and Docker submission WILL be made public by MIDRC, allowing us to be eligible for a monetary award.
  • Our team wishes to participate in the Challenge, but we do NOT wish for our submission and trained models to be made public by MIDRC. We understand that our team will therefore not be eligible for a monetary award.

Important Points

  • Once participants make a submission within the test phase of the Challenge, they will be considered fully vested in the challenge, so that their performance results will become part of any presentations, publications, or subsequent analyses derived from the Challenge at the discretion of the organizers. Participants can choose to have their results reported anonymously in these presentations and publications.
  • Only fully automated methods are acceptable for the Challenge as the submission is in the form of Docker containers. It is not possible to submit manual annotations or interactive methods.
  • Please note there will be a small subset of cases available for download and for inference on the Challenge platform during the Docker practice period. This small set is intended for participants to troubleshoot the Docker submission process and verify model output. This small set is not intended for model training. Every participant team is strongly encouraged to practice Docker submission during the stated dates.
  • The primary performance metric to rank submissions will be the area under the ROC curve (AUC) in the task of distinguishing between CXRs of COVID-positive and COVID-negative patients. Submissions will be ranked using the primary performance metric. A statistically significant difference in performance between the winner and runners-up is not required to "win" the Challenge. In order to win monetary awards, however, it is required that your model's performance significantly exceeds random guessing (p<0.05). Only classification performance on the test set will be used to rank submissions. A secondary performance metric, log-loss, will be used to break ties, if needed. In summary:
    • primary metric:  area under the ROC curve 
    • secondary metric (in case of a tie):  log loss (cross entropy)
  • We strongly encourage the use of datasets from open and public imaging repositories for model development and training. Participants are encouraged to build and download training cohorts from the open MIDRC Data Commons, https://data.midrc.org/ (see the "Get Data" link under the "Participate" tab). Note that this Challenge involves only portable chest radiographs. 
  • Participants will be required to disclose a description of their methods and training data used.
  • Using transfer learning/fine-tuning of models pretrained on general-purpose datasets (e.g., ImageNet) is allowed.
  • The deadline for registration is October 17. Registration after this deadline will not be considered for the Challenge.
  • Team size is limited to 8 participants.
  • Participants may only join one team.
  • Important dates:
    • Monday, August 29, 2022 - Team registration opens
    • September 6 - Training phase and (mandatory) Docker practice submission open
    • October 3, noon Eastern time - Training phase and Docker practice submission end
    • October 3, noon Eastern time - Validation phase begins 
    • October 17 - Registration closes
    • October 24, noon Eastern time - Validation phase closes
    • October 24, noon Eastern time - Test phase begins 
    • November 7, 5 p.m. Eastern time - Final Dockerized submission deadline; conclusion of Challenge
    • November 21 - Top finishers notified and rankings released 
    • November 27 through December 1 – Challenge results and top-ranked finishers announced at RSNA 2022, Chicago
  • In the validation phase, 10 total submissions (that finish without errors flagged by the Challenge platform) are allowed per team. After the maximum number of submissions for a team is reached, the Challenge system will not accept further submissions to the applicable Challenge phase.  
  • In the test phase, 3 total submissions (that finish without errors flagged by the Challenge platform) are allowed per team. All submissions from a team will be scored, and the highest-performing submission will determine the team's ranking within the Challenge.

Participants are strongly encouraged to agree to MIDRC making their code publicly available after completion of the Challenge. 

Participation in this Challenge acknowledges the educational and community-building nature of the Challenge and commits participants to conduct consistent with this spirit for the advancement of the medical imaging research community.  See this article for a discussion of lessons learned from the LUNGx Challenge, sponsored by SPIE, AAPM, and NCI.

Conflict of Interest:

All participants must attest that they are not directly affiliated with the labs of any of the Challenge organizers or major contributors.

MIDRC COVIDx Challenge: Tutorials

Docker Installation Tutorials

Using Docker Tutorials

Practice with the Demo

  1. 'How to' Step 1: Sign Up and Login
  2. 'How to' Step 2: Practice Data, Docker Image Building and Testing
  3. 'How to' Step 3: Uploading Submissions to the Challenge Platform

Downloading Data and Cohort Building at data.midrc.org

  1. General YouTube video on cohort building within MIDRC
  2. Data Download QuickStart Guide (steps 1-4,7)
  3. More details on downloading from data.midrc.org powered by the Gen3 client (link is also in QuickStart Guide above)
  4. Jupyter notebooks 

Training

Start: Aug. 29, 2022, midnight

Description: Training phase: create models and upload your Docker archives to perform inference on the practice cases

Validation

Start: Oct. 3, 2022, 5 p.m.

Description: Validation phase: create models and upload your Docker archives to perform inference on the validation data and rank on the leaderboard

Final

Start: Oct. 25, 2022, 5 p.m.

Description: Test phase: deploy your final models (up to 3) on the test set for final ranking of participants/submissions

Competition Ends

Feb. 23, 2023, 10 p.m.

Leaderboard

  #  Username             Score
  1  rzhang229            0.750
  2  magou190             0.722
  3  challenge-organizer  0.694