
Will Two Do? Varying Dimensions in Electrocardiography: The PhysioNet/Computing in Cardiology Challenge 2021

Matthew Reyna Nadi Sadr Annie Gu Erick Andres Perez Alday Chengyu Liu Salman Seyedi Amit Shah Gari D. Clifford

Published: Feb. 25, 2021. Version: 1.0.2


When using this resource, please cite the original publication:

Perez Alday EA, Gu A, Shah AJ, Robichaux C, Wong AI, Liu C, Liu F, Rad AB, Elola A, Seyedi S, Li Q, Sharma A, Clifford GD, Reyna MA. Classification of 12-lead ECGs: the PhysioNet/Computing in Cardiology Challenge 2020. Physiol Meas. 2020 Nov 11. http://doi.org/10.1088/1361-6579/abc960.

Additionally, when using this resource, please cite:
Reyna, M., Sadr, N., Gu, A., Perez Alday, E. A., Liu, C., Seyedi, S., Shah, A., & Clifford, G. D. (2021). Will Two Do? Varying Dimensions in Electrocardiography: The PhysioNet/Computing in Cardiology Challenge 2021 (version 1.0.2). PhysioNet. https://doi.org/10.13026/jz9p-0m02.

Please include the standard citation for PhysioNet:
Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., ... & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220.

Abstract

The electrocardiogram (ECG) is a non-invasive representation of the electrical activity of the heart. Although the twelve-lead ECG is the standard diagnostic screening system for many cardiological issues, the limited accessibility of twelve-lead ECG devices provides a rationale for smaller, lower-cost, and easier-to-use devices. While single-lead ECGs are limiting [1], reduced-lead ECG systems hold promise, with evidence that subsets of the standard twelve leads can capture useful information [2], [3], [4] and even be comparable to twelve-lead ECGs in some limited contexts. In 2017, we challenged the public to classify atrial fibrillation (AF) from a single-lead ECG, and in 2020 we challenged the public to diagnose a much larger number of cardiac problems using twelve-lead recordings. However, there is limited evidence to demonstrate the utility of reduced-lead ECGs for capturing a wide range of diagnostic information.

In this year’s Challenge, we ask the following question: ‘Will two do?’ This year’s Challenge builds on last year’s Challenge [5], which asked participants to classify cardiac abnormalities from twelve-lead ECGs. We are asking you to build an algorithm that can classify cardiac abnormalities from twelve-lead, six-lead, four-lead, three-lead, and two-lead ECGs. We will test each algorithm on databases of these reduced-lead ECGs, and the differences in performance of the algorithms on these databases will reveal the utility of reduced-lead ECGs in comparison to standard twelve-lead ECGs.


Objective

The goal of the 2021 Challenge is to identify clinical diagnoses from twelve-lead, six-lead (I, II, III, aVR, aVL, aVF), four-lead (I, II, III, V2), three-lead (I, II, V2), and two-lead (I and II) ECG recordings.

We ask participants to design and implement a working, open-source algorithm that, based only on the provided ECG recordings (twelve-lead or reduced-lead) and routine demographic data, can automatically identify any cardiac abnormalities present in the recording. We will award prizes for the top-performing twelve-lead, six-lead, four-lead, three-lead, and two-lead algorithms.


Participation


Registering for the Challenge and Conditions of Participation

To participate in the Challenge, register your team by providing the full names, affiliations, and official email addresses of your entire team before you submit your algorithm. The details of all authors must be exactly the same as the details in your abstract submission to Computing in Cardiology. You may update your author list by completing this form again (read the form for details), but changes to your authors must not contravene the rules of the Challenge.

Algorithms

For each ECG recording, your algorithm must identify a set of one or more classes as well as a probability or confidence score for each class. As an example, suppose that your classifier identifies atrial fibrillation (164889003) and a first-degree atrioventricular block (270492004) with probabilities of 90% and 60%, respectively, for a particular recording, but it does not identify any other rhythm types. Your code might produce the following output for the recording:

#Record ID
164889003, 270492004, 164909002, 426783006, 59118001, 284470004,  164884008,  429622005, 164931005
  1,       1,         0,         0,         0,        0,          0,          0,         0
0.9,       0.6,       0.2,       0.05,      0.2,      0.35,       0.35,       0.1,       0.1
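
As a minimal illustration of writing this output format, here is a short Python sketch; the record name, class codes, labels, and probabilities below are placeholders, and the official example code provides its own I/O helpers that may differ in detail.

def save_challenge_output(filename, record_id, classes, labels, probabilities):
    # Write the Challenge output format shown above: a commented record line,
    # the class codes, the binary labels, and the scalar probabilities,
    # each as a comma-separated line.
    with open(filename, 'w') as f:
        f.write('#' + record_id + '\n')
        f.write(','.join(classes) + '\n')
        f.write(','.join(str(int(label)) for label in labels) + '\n')
        f.write(','.join('{:.2f}'.format(p) for p in probabilities) + '\n')

# Hypothetical usage mirroring the example above (truncated to three classes):
save_challenge_output('A0001.csv', 'A0001',
                      ['164889003', '270492004', '164909002'],
                      [1, 1, 0], [0.9, 0.6, 0.2])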

We have implemented two example algorithms in MATLAB and Python as templates for successful submissions.

  • The Python classifier implements a random forest classifier that uses age, sex, and the root mean square of each available ECG lead signal as features, reading the available ECG leads and demographic data from the WFDB header file (the .hea file).
  • The MATLAB classifier implements a linear regression model that uses the same features (age, sex, and the root mean square of each available ECG lead signal), again taken from the available ECG leads and the demographic data in the WFDB header file (the .hea file). A rough sketch of this feature extraction is shown below.
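
The sketch below illustrates this kind of feature extraction in Python; it is not the official example code, and it assumes the signal matrix is stored under the 'val' variable of the binary MATLAB v4 file (as in the Challenge example data) and that age and sex have already been parsed from the header.

import numpy as np
from scipy.io import loadmat

def extract_features(mat_file, age, sex):
    # Load the signal matrix (leads x samples) from the binary MATLAB v4 file.
    signals = loadmat(mat_file)['val'].astype(float)
    # Root mean square of each available lead.
    rms = np.sqrt(np.mean(signals ** 2, axis=1))
    # Encode sex as a binary flag and stack it with age and the per-lead RMS values.
    return np.hstack(([float(age), 1.0 if sex == 'Male' else 0.0], rms))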

Submitting your Algorithm

Please use the above example code as templates for your submissions.

Please see the submission instructions for detailed information about how to submit a successful Challenge entry. We will open scoring in January. We will provide feedback on your entry as soon as possible, so please wait at least 72 hours before contacting us about the status of your entry.

Like last year’s Challenge, we will continue to require code both for training your model and for running your trained model. If we cannot reproduce your model from the training code, then you will not be eligible for ranking or a prize.

We will run your training code on Google Cloud using 10 vCPUs, 65 GB RAM, 100 GB disk space, and an optional NVIDIA T4 Tensor Core GPU with 16 GB VRAM. Your training code has a 72 hour time limit without a GPU and a 48 hour time limit with a GPU.

We will run your trained model on Google Cloud using 6 vCPUs, 39 GB RAM, 100 GB disk space, and an optional NVIDIA T4 Tensor Core GPU with 16 GB VRAM. Your trained model has a 24 hour time limit on each of the validation and test sets.

We are using an N1 custom machine type to run submissions on GCP. If you would like to use a predefined machine type, then the n1-highmem-8 is the closest predefined machine type, but with 2 fewer vCPUs and 13 GB less RAM. For GPU submissions, we use the 418.40.04 driver version.


Data Description

The training data contain twelve-lead ECGs. The validation and test data contain twelve-lead, six-lead, four-lead, three-lead, and two-lead ECGs (a Python sketch of selecting these lead subsets appears after the list):

  1. Twelve leads: I, II, III, aVR, aVL, aVF, V1, V2, V3, V4, V5, V6
  2. Six leads: I, II, III, aVR, aVL, aVF
  3. Four leads: I, II, III, V2
  4. Three leads: I, II, V2
  5. Two leads: I, II
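
For illustration, here is a minimal Python sketch of selecting these reduced-lead subsets from a twelve-lead signal array; it assumes the rows of the array follow the standard lead order listed above.

TWELVE_LEADS = ['I', 'II', 'III', 'aVR', 'aVL', 'aVF',
                'V1', 'V2', 'V3', 'V4', 'V5', 'V6']
LEAD_SETS = {
    12: TWELVE_LEADS,
    6: ['I', 'II', 'III', 'aVR', 'aVL', 'aVF'],
    4: ['I', 'II', 'III', 'V2'],
    3: ['I', 'II', 'V2'],
    2: ['I', 'II'],
}

def select_leads(signal, num_leads):
    # signal: NumPy array of shape (12, num_samples) in the standard lead order.
    # Returns only the rows corresponding to the requested reduced lead set.
    indices = [TWELVE_LEADS.index(lead) for lead in LEAD_SETS[num_leads]]
    return signal[indices, :]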

Each ECG recording has one or more labels that describe cardiac abnormalities (and/or a normal sinus rhythm). We mapped the labels for each recording to SNOMED-CT codes. The lists of scored labels and unscored labels are given with the evaluation code; see the scoring section for details.

Data Sources

The Challenge data include recordings from last year’s Challenge and many new recordings for this year’s Challenge:

  1. CPSC Database and CPSC-Extra Database
  2. INCART Database
  3. PTB and PTB-XL Database
  4. The Georgia 12-lead ECG Challenge (G12EC) Database
  5. Augmented Undisclosed Database
  6. Chapman-Shaoxing and Ningbo Database
  7. The University of Michigan (UMich) Database

The Challenge data include annotated twelve-lead ECG recordings from seven sources in four countries across three continents. These databases include over 100,000 twelve-lead ECG recordings with over 88,000 ECGs shared publicly as training data, 6,630 ECGs retained privately as validation data, and 36,266 ECGs retained privately as test data.

  • The first source is the China Physiological Signal Challenge in 2018 (CPSC 2018), which was held during the 7th International Conference on Biomedical Engineering and Biotechnology in Nanjing, China. This source contains two databases: the data from CPSC 2018 (the CPSC Database) and unused data from CPSC 2018 (the CPSC-Extra Database). Together, these databases contain 13,256 ECGs (10,330 ECGs shared as training data, 1,463 retained as validation data, and 1,463 retained as test data). We shared the training set and an unused dataset from CPSC 2018 as training data, and we split the test set from CPSC 2018 into validation and test sets. Each recording is between 6 and 144 seconds long with a sampling frequency of 500 Hz.

  • The second source is the St Petersburg INCART 12-lead Arrhythmia Database. This source contains 74 annotated ECGs (all shared as training data) extracted from 32 Holter monitor recordings. Each recording is 30 minutes long with a sampling frequency of 257 Hz.

  • The third source is the Physikalisch-Technische Bundesanstalt (PTB) and includes two public datasets: the PTB and the PTB-XL databases. The source contains 22,353 ECGs (all shared as training data). Each recording is between 10 and 120 seconds long with a sampling frequency of either 500 or 1,000 Hz.

  • The fourth source is the Georgia 12-lead ECG Challenge (G12EC) Database, which represents a unique demographic of the Southeastern United States. This source contains 20,672 ECGs (10,344 ECGs shared as training data, 5,167 retained as validation data, and 5,161 retained as test data). Each recording is between 5 and 10 seconds long with a sampling frequency of 500 Hz.

  • The fifth source is an undisclosed American database that is geographically distinct from the Georgia database. This source contains 10,000 ECGs (all retained as test data).

  • The sixth source is the Chapman University, Shaoxing People’s Hospital (Chapman-Shaoxing) and Ningbo First Hospital (Ningbo) database [6], [7]. This source contains 45,152 ECGs (all shared as training data). Each recording is 10 seconds long with a sampling frequency of 500 Hz.

  • The seventh source is the UMich Database from the University of Michigan. This source contains 19,642 ECGs (all retained as test data). Each recording is 10 seconds long with a sampling frequency of either 250 Hz or 500 Hz.

Like other real-world datasets, different databases may have different proportions of cardiac abnormalities, but all of the labels in the validation or test data are represented in the training data. Moreover, while this is a curated dataset, some of the data and labels are likely to have errors, and an important part of the Challenge is to work out these issues. In particular, some of the databases have human-overread machine labels with single or multiple human readers, so the quality of the labels varies between databases. You can find more information about the label mappings of the Challenge training data in this table.

The six-lead, four-lead, three-lead, and two-lead validation data are reduced-lead versions of the twelve-lead validation data: the same recordings with the same header data but only with signal data for the relevant leads.

We are not planning to release the test data at any point, including after the end of the Challenge. Requests for the test data will not receive a response. We do not release test data to prevent overfitting on the test data and claims or publications of inflated performances. We will entertain requests to run code on the test data after the Challenge on a limited basis based on publication necessity and capacity. (The Challenge is largely staged by volunteers.)

Data Format

All data are provided in WFDB format. Each ECG recording uses a binary MATLAB v4 file for the ECG signal data and a plain text file in WFDB header format for the recording and patient attributes, including the diagnosis, i.e., the labels for the recording. The binary files can be read using the load function in MATLAB and the scipy.io.loadmat function in Python; see our MATLAB and Python example code for working examples. The first line of the header provides information about the total number of leads and the total number of samples or time points per lead, the following lines describe how each lead was encoded, and the last lines provide information on the demographics and diagnosis of the patient.

For example, a header file A0001.hea may have the following contents:

A0001 12 500 7500 05-Feb-2020 11:39:16
A0001.mat 16+24 1000/mV 16 0 28 -1716 0 I
A0001.mat 16+24 1000/mV 16 0 7 2029 0 II
A0001.mat 16+24 1000/mV 16 0 -21 3745 0 III
A0001.mat 16+24 1000/mV 16 0 -17 3680 0 aVR
A0001.mat 16+24 1000/mV 16 0 24 -2664 0 aVL
A0001.mat 16+24 1000/mV 16 0 -7 -1499 0 aVF
A0001.mat 16+24 1000/mV 16 0 -290 390 0 V1
A0001.mat 16+24 1000/mV 16 0 -204 157 0 V2
A0001.mat 16+24 1000/mV 16 0 -96 -2555 0 V3
A0001.mat 16+24 1000/mV 16 0 -112 49 0 V4
A0001.mat 16+24 1000/mV 16 0 -596 -321 0 V5
A0001.mat 16+24 1000/mV 16 0 -16 -3112 0 V6
#Age: 74
#Sex: Male
#Dx: 426783006
#Rx: Unknown
#Hx: Unknown
#Sx: Unknown

From the first line of the file, we see that the recording number is A0001, and the recording file is A0001.mat. The recording has 12 leads, each recorded at a 500 Hz sampling frequency, and contains 7500 samples per lead. From the next 12 lines of the file (one for each lead), we see that each signal is stored as 16-bit samples with a byte offset of 24 bytes into the signal file, the gain (analog-to-digital converter (ADC) units per physical unit) is 1000/mV, the resolution of the ADC used to digitize the signal is 16 bits, and the baseline value corresponding to 0 physical units is 0. The first value of the signal (28 for lead I), the checksum (-1716 for lead I), the block size (0), and the lead name (I, etc.) are the last entries of each of these lines. From the final 6 lines, we see that the patient is a 74-year-old male with a diagnosis (Dx) of 426783006, which is the SNOMED-CT code for sinus rhythm. The medical prescription (Rx), history (Hx), and symptom or surgery (Sx) are unknown. Please visit the WFDB header format documentation for more information on the header file and variables.
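
Putting this together, the following Python sketch reads one recording; it assumes the signal matrix is stored under the 'val' variable of the .mat file (as in the Challenge example data) and that the demographic and diagnosis lines use the '#Age:', '#Sex:', and '#Dx:' prefixes shown above. Values are returned as strings and SNOMED-CT code strings.

from scipy.io import loadmat

def load_recording(record_name):
    # Read the signal matrix (leads x samples) from the binary MATLAB v4 file.
    signal = loadmat(record_name + '.mat')['val']
    # Parse the plain-text WFDB header for lead names, demographics, and diagnoses.
    with open(record_name + '.hea', 'r') as f:
        lines = f.readlines()
    num_leads = int(lines[0].split()[1])
    lead_names = [line.split()[-1] for line in lines[1:1 + num_leads]]
    age, sex, dx = None, None, []
    for line in lines:
        if line.startswith('#Age:'):
            age = line.split(':')[1].strip()
        elif line.startswith('#Sex:'):
            sex = line.split(':')[1].strip()
        elif line.startswith('#Dx:'):
            dx = [code.strip() for code in line.split(':')[1].split(',')]
    return signal, lead_names, age, sex, dx

# Hypothetical usage for the example above:
# signal, leads, age, sex, dx = load_recording('A0001')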

Data Access

The training data from the 2021 Challenge can be downloaded from the links below. Links to the CPSC 2018 datasets are not provided here; please see the latest version of this project for further details. You can use the MD5 hashes to verify the integrity of the tar.gz files:

  1. CPSC 2018 Training Set (CPSC 2018), 6,877 recordings
  2. China 12-Lead ECG Challenge Database (CPSC2018-Extra), 3,453 recordings
  3. St Petersburg INCART 12-lead Arrhythmia Database, 74 recordings: link; MD5 hash: 525dde6bd26bff0dcb35189e78ae7d6d; (headers only)
  4. PTB Diagnostic ECG Database, 516 recordings: link; MD5 hash: 3df4662a8a9189a6a5924424b0fcde0e; (headers only)
  5. PTB-XL Electrocardiography Database, 21,837 recordings
  6. Georgia 12-Lead ECG Challenge Database, 10,344 recordings: link; MD5 hash: d064e3bf164d78a070e78fe5227d985c; (headers only)
  7. Chapman University, Shaoxing People’s Hospital (Chapman-Shaoxing) 12-lead ECG Database, 10,247 recordings: link; MD5 hash: a3e10171eba1e7520a38919594d834e5; (headers only)
  8. Ningbo First Hospital (Ningbo) 12-lead ECG Database, 34,905 recordings: link; MD5 hash: 84171145922078146875394acb89b765; (headers only)

If you are unable to use these links to access the data, or if you want to use a command-line tool to access the data through Google Colab, then you can use these commands:

wget -O WFDB_StPetersburg.tar.gz https://pipelineapi.org:9555/api/download/physionettraining//WFDB_StPetersburg.tar.gz/
wget -O WFDB_PTB.tar.gz https://pipelineapi.org:9555/api/download/physionettraining/WFDB_PTB.tar.gz/
wget -O WFDB_Ga.tar.gz https://pipelineapi.org:9555/api/download/physionettraining/WFDB_Ga.tar.gz/
wget -O WFDB_ChapmanShaoxing.tar.gz https://pipelineapi.org:9555/api/download/physionettraining/WFDB_ChapmanShaoxing.tar.gz/
wget -O WFDB_Ningbo.tar.gz https://pipelineapi.org:9555/api/download/physionettraining/WFDB_Ningbo.tar.gz/
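
After downloading an archive, you can compute its MD5 hash and compare it against the value listed above for that file; a minimal Python sketch:

import hashlib

def md5_of_file(path, chunk_size=2 ** 20):
    # Stream the file in chunks to avoid loading large archives into memory.
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

# Print the hash and compare it against the corresponding value listed above.
print(md5_of_file('WFDB_PTB.tar.gz'))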


Evaluation


Scoring

For last year’s Challenge, we developed a new scoring metric that awards partial credit to misdiagnoses that result in similar treatments or outcomes as the true diagnosis as judged by our cardiologists. This scoring metric reflects the clinical reality that some misdiagnoses are more harmful than others and should be scored accordingly. Moreover, it reflects the fact that confusing some classes is less harmful than confusing others.

We are starting this year’s Challenge with this scoring metric, but we welcome feedback. It is defined as follows:

Let C = [c_i] be a collection of diagnoses. We compute a multi-class confusion matrix A = [a_ij], where a_ij is the number of recordings in a database that were classified as belonging to class c_i but actually belong to class c_j. We assign different weights W = [w_ij] to different entries in this matrix based on the similarity of treatments or differences in risks. The score s is given by s = Σ_ij w_ij a_ij, which is a generalized version of the traditional accuracy metric. The score s is then normalized so that a classifier that always outputs the true class(es) receives a score of 1 and an inactive classifier that always outputs the normal class receives a score of 0.

The scoring metric is designed to award full credit to correct diagnoses and partial credit to misdiagnoses with similar risks or outcomes as the true diagnosis. A classifier that returns only positive outputs typically receives a negative score, i.e., a lower score than a classifier that returns only negative outputs.
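
To make the normalization concrete, the sketch below (in Python) illustrates the normalized score described above; it assumes the weighted sums Σ_ij w_ij a_ij have already been computed for the classifier being scored, for a classifier that always outputs the true class(es), and for the inactive classifier. The official evaluation code defines the confusion matrix and weights precisely; this only illustrates the normalization.

def normalized_score(s_observed, s_correct, s_inactive):
    # Maps a classifier that always outputs the true class(es) to 1 and the
    # inactive (always-normal) classifier to 0; other classifiers can score below 0.
    return (s_observed - s_inactive) / (s_correct - s_inactive)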

The leaderboard provides the scores of successful submissions on the hidden data.

Rules and Deadlines

Overview of rules

There are two phases for the Challenge: an unofficial phase and an official phase. The unofficial phase of the Challenge allows us to introduce and ‘beta test’ the data, scores, and submission system before the official phase of the Challenge. Participation in the unofficial phase is mandatory for participating in the official phase of the Challenge because it helps us to improve the official phase.

Entrants may have an overall total of up to 15 scored entries over both the unofficial and official phases of the competition (see the table below). All deadlines occur at 11:59pm GMT on the dates mentioned below, and all dates are in 2021 unless indicated otherwise. If you do not know the difference between GMT and your local time, then find it out before the deadline!

Please submit your entries early to ensure that you have the most chances for success. If you wait until the last few days to submit your entries, then you may not receive feedback before the submission deadline, and you may be unable to resubmit your entries if there are unexpected errors or issues with your submissions. Every year, several teams wait until the last few days to submit their first entry and are unable to debug their work before the deadline.

Timing and priority of entries

Although we score on a first-come, first-served basis, please note that if you submit more than one entry in a 24-hour period, your second entry may be deprioritized compared to other teams’ first entries. If you submit more than one entry in the final 24 hours before the Challenge deadline, then we may be unable to provide feedback or a score for more than one of your entries. It is unlikely that we will be able to debug any code in the final days of the Challenge.

For these reasons, we strongly suggest that you start submitting entries at least 5 days before the unofficial deadline and 10 days before the official deadline. We have found that the earlier teams enter the Challenge, the better they do because they have time to digest feedback and performance. We therefore suggest entering your submissions many weeks before the deadline to give yourself the best chance for success.

Key dates/deadlines

Phase/Event | Start | End | Submissions
Unofficial phase | 24 December 2020 | 8 April 2021 | 1-5 scored entries (*)
Hiatus | 9 April 2021 | 30 April 2021 | N/A
Abstract deadline | 24 April 2021 | 24 April 2021 | 1 abstract
Official phase | 1 May 2021 | 15 August 2021 | 1-10 scored entries (*)
Abstract decisions released | 21 June 2021 | 21 June 2021 | N/A
Wild card entry date | 31 July 2021 | 31 July 2021 | N/A
Hiatus | 16 August 2021 | 11 September 2021 | N/A
Preprint deadline | 1 September 2021 | 1 September 2021 | One 4-page paper (**)
Conference | 12 September 2021 | 15 September 2021 | 1 presentation (***)
Final scores released | 16 September 2021 | 16 September 2021 | N/A
Final paper deadline | 23 September 2021 | 30 September 2021 | One 4-page paper (***)

(* Entries that fail to score do not count against limits.)

(** Must include preliminary scores.)

(*** Must include final scores, your ranking in the Challenge, and any updates to your work as a result of feedback after presenting at CinC. This final paper deadline is earlier than the deadline given by CinC so that we can check these details.)

To be eligible for the open-source award, you must do all the following:

  1. Register for the Challenge here.
  2. Submit at least one open-source entry that can be scored during the unofficial phase.
  3. Submit an abstract to CinC by the abstract submission deadline. Include your team name and score from the unofficial phase in your abstract. Please select ‘PhysioNet/CinC Challenge’ as the topic of your abstract so that it can be identified easily by the abstract review committee. Please read “Advice on Writing an Abstract” for important information on writing a successful abstract.
  4. Submit at least one open-source entry that can be scored during the official phase.
  5. Submit a full 4-page paper on your work to CinC by the above preprint deadline.
  6. One of your team members must attend CinC 2021 to present your work either orally or as a poster (depending on your abstract acceptance). If you have a poster, then you must stand by it to defend your work. No shows (oral or poster) will be disqualified. One of your team members must also attend the closing ceremony to collect your prize. No substitutes will be allowed.
  7. Submit a full 4-page paper on your work to CinC by the above final paper deadline. Please note that we expect your paper to change significantly from your abstract, both in terms of results and methods. You may also update your title with the caveat that it must not be substantially similar to the title of the competition or contain the words ‘physionet’, ‘challenge’, or ‘competition’.

You must not submit an analysis of this year’s Challenge data to other conferences or journals until after CinC 2021 so that we can discuss the Challenge in a single forum. If we discover evidence that you have submitted elsewhere before the end of CinC 2021, then you will be disqualified and de-ranked on the website, banned from future Challenges, and the journal/conference will be contacted to request your article be withdrawn for contravention of the terms of use.

There are many reasons for this policy: 1) we do not release results on the test data before the end of CinC, and only reporting results on the training data increases the likelihood of overfitting and is not comparable to the official results on the test data, and 2) attempting to publish on the Challenge data before the Challengers present their results is unprofessional and comes across as a territorial grab. This requirement stands even if your abstract is rejected, but you may continue to enter the competition and receive scores. (However, unless you are accepted into the conference at a later date as a ‘wild card’ entry, you will not be eligible to win a prize.) Of course, any publicly available data that was available before the Challenge is exempted from this condition, but the novel aspects of the Challenge (the Challenge design, the Challenge data that you downloaded from this page because it was processed for the Challenge, the scoring function, etc.) are not exempted.

After the Challenge is over and the final scores have been posted (in late September), everyone may then submit their work to a journal or another conference. In particular, we encourage all entrants (including those who missed the opportunity to compete or attend CinC 2021) to submit extended analysis and articles to the special issue, taking into account the publications and discussions at CinC 2021.

Wild Card Entries

If your abstract is rejected or if you otherwise failed to qualify during the unofficial period, then there is still a chance to present at CinC and win the Challenge. A ‘wild card’ entry has been reserved for a high-scoring entry from a team that was unable to submit an accepted abstract to CinC by the original abstract submission deadline. A successful entry must be submitted by the wild card entry deadline. We will contact eligible teams and ask them to submit an abstract. The abstract will still be reviewed as thoroughly as any other abstract accepted for the conference. See Advice on Writing an Abstract.

Advice on Writing an Abstract

To improve your chances of having your abstract accepted, we offer the following advice:

  • Ensure that all of your authors agree on your abstract, and be sure that all of the author details match your registration information, including email addresses.
  • Stick to the word limit and deadline on the conference website. Include time for errors, internet outages, etc.
  • Select ‘PhysioNet/CinC Challenge’ as the submission topic so it can be identified easily by the abstract review committee. However, do not include the words ‘PhysioNet’ or ‘PhysioNet/CinC’ or ‘Challenge’ in the title because this creates confusion with the hundreds of other articles and the main descriptor of the Challenge.
  • Your title, abstract and author list (collaborators) can be modified in September when you submit the final paper.
  • While your work is bound to change, the quality of your abstract is a good indicator of the final quality of your work. We suggest you spell check, write in full sentences, and be specific about your approaches. Include your method’s cross-validated training performance (using the Challenge metrics) and your score provided by the Challenge submission system. If you omit or inflate this latter score, then your abstract will be rejected.
  • Do not be embarrassed by any low scores. We do not expect high scores at this stage. We are focused on the thoughtfulness of the approach and quality of the abstract.
  • If you are unable to receive a score during the unofficial phase, then you can still submit, but the work should be very high quality and you should include the cross-validation results of your algorithm on the training set.

You will be notified if your abstract has been accepted by email from CinC in June. You may not enter more than one abstract describing your work in the Challenge. We know you may have multiple ideas, and the actual abstract will evolve over the course of the Challenge. More information, particularly on discounts and scholarships, can be found here. We are sorry, but the Challenge Organizers do not have extra funds to enable discounts or funding to attend the conference.

Again, we cannot guarantee that your code will be run in time for the CinC abstract deadline, especially if you submit your code immediately before the deadline. It is much more important to focus on writing a high-quality abstract describing your work and to submit it to the conference by the abstract deadline. Please follow the instructions here carefully.

Please make sure that all of your team members are authors on your abstract. If you need to add or subtract authors, do this at least a week before the abstract deadline. Asking us to alter your team membership near or after the deadline is going to lead to confusion that could affect your score during review. It is better to be more inclusive on the abstract in terms of authorship; however, if we find that authors have moved between abstracts/teams without permission, then this is likely to lead to disqualification. As noted above, you may change the authors/team members later in the Challenge.

Please make sure that you include your team name, your official score as it appears on the leaderboard, and cross-validation results in your abstract using the scoring metrics for this year’s Challenge (especially if you are unable to receive a score or are scoring poorly). The novelty of your approach and the rigor of your research are much more important during the unofficial phase. Please make sure you describe your technique and any novelty very specifically. General statements such as ‘a 1D CNN was used’ are uninformative and will score poorly in review.

The Organizers of the Challenge have no ability to help with any problems with the abstract submission system. We do not operate it. Please do not email us with issues related to the abstract submission system.

Open-Source Licenses

We encourage the use of open-source licenses for your entries.

Entries with non-open-source licenses will be scored but not ranked in the official competition. All scores will be made public. At the end of the competition, all entries will be posted publicly, and therefore automatically mirrored on several sites around the world. We have no control over these sites, so we cannot remove your code even on request. Code which the Organizers deem to be functional will be made publicly available after the end of the Challenge. You can request to withdraw from the Challenge, so that your entry’s performance is no longer listed on the official leaderboard, up until a week before the end of the official phase. However, the Organizers reserve the right to publish any submitted open-source code after the official phase is over. The Organizers also retain the right to use a copy of submitted code for non-commercial use. This allows us to re-score if definitions change and to validate any claims made by competitors.

If no license is specified in your submission, then the license given in the example code will be added to your entry, i.e., we will assume that you have released your code under the BSD 3-Clause license.

Rules on Competing in Teams / Collaboration

To maintain the scientific impact of the Challenges, it is important that all Challengers contribute truly independent ideas. For this reason, we impose the following rules on team composition/collaboration:

  1. Multiple teams from a single entity (such as a company, university, or university department) are allowed as long as the teams are truly independent and do not share team members (at any point), code, or any ideas. Multiple teams from the same research group or company unit are not allowed because of the difficulty of maintaining independence in those situations. If there is any question on independence, the teams will be required to supply an official letter from the company that indicates that the teams do not interact at any point (socially or professionally) and work in separate facilities, as well as the location of those facilities.
  2. You can join an existing team before the abstract deadline as long as you have not belonged to another team or communicated with another team about the current Challenge. You may update your author list by completing this form again (check the ‘Update team members’ box on the form), but changes to your authors must not contravene the rules of the Challenge.
  3. You may use public code from another team if they posted it before the competition.
  4. You may not make your Challenge code publicly available during the Challenge or use any code from another Challenger that was shared, intentionally or not, during the course of the Challenge.
  5. You may not publicly post information describing your methods (blog, vlog, code, preprint, presentation, talk, etc.) or give a talk outside your own research group at any point during the Challenge that reveals the methods you have employed or will employ in the Challenge. Obviously, you can talk about and publish the same methods on other data as long as you don’t indicate that you used or planned to use it for the Challenge.
  6. You must use the same team name and email address for your team throughout the course of the Challenge. The email address should be the same as the one used to register for the Challenge, and to submit your abstract to CinC. Note that the submitter of the conference article/code does not need to present at the conference or be in any particular location in the author order on the abstract/poster/paper, but they must be a contributing member of the team. If your team uses multiple team names and/or email addresses to enter the Challenge, please contact the Organizers immediately to avoid disqualification of all team members concerned. Ambiguity will result in disqualification.
  7. If you participate in the Challenge as part of a class project, then please treat your class as a single team — please use the same team name as other groups in your class, limit the number of submissions from your class to the number allowed for each team, and feel free to present your work within your class. If your class needs more submissions than the Challenge submission limits allow, then please perform cross-validation on the training data to evaluate your work.

If we discover evidence of the contravention of these rules, then you will be ineligible for a prize and your entry publicly marked as possibly associated with another entry. Although we will contact the team(s) in question, time and resources are limited and the Organizers must use their best judgement on the matter in a short period of time. The Organizers’ decision on rule violations will be final.

Conference Attendance

CinC 2021 will take place from 12-15 September 2021 in Brno, Czech Republic. You must attend the whole conference to be eligible for prizes. If you send someone in your place who is not a team member or co-author, then you will be disqualified and your abstract will be removed from the proceedings. In particular, it is vital that the presenter (oral or poster) can defend your work and have in-depth knowledge of all decisions made during the development of your algorithm. Due to this year’s circumstances, both in-person and remote attendance are allowed. If you require a visa to attend the conference, we strongly suggest that you apply as soon as possible. Please contact the local conference organizing committee (not the Challenge Organizers) for any visa sponsorship letters and any questions concerning the conference.

Hackathon

Due to the uncertainties around travel, we have unfortunately decided not to run the Hackathon again this year.


Acknowledgements

This year’s Challenge is generously co-sponsored by Google, MathWorks, and the Gordon and Betty Moore Foundation.

Obtaining Complimentary MATLAB Licenses

MathWorks has generously decided to sponsor this Challenge by providing complimentary licenses to all teams that wish to use MATLAB. Users can apply for a license and learn more about MATLAB support by visiting the PhysioNet Challenge page from MathWorks. If you have questions or need technical support, then please contact MathWorks at studentcompetitions@mathworks.com.

Obtaining Complimentary Google Cloud Platform Credits

Google has generously agreed to provide Google Cloud Platform (GCP) credits for a limited number of teams for this Challenge.

At the time of launching this Challenge, Google Cloud offers multiple services for free on a one-year trial basis and $300 in cloud credits. Additionally, if teams are based at an educational institution in selected countries, then they can access free GCP training online.

Google Cloud credits will be made available to teams that requested credits when registering for the Challenge. Only one credit will be provided to one email address associated with each team, and teams must have a successful entry to the official phase of the Challenge and an accepted abstract to CinC.

The Challenge Organizers, their employers, PhysioNet and Computing in Cardiology accept no responsibility for the loss of credits, or failure to issue credits for any reason. Please note, by requesting credits, you are granting us permission to forward your details to Google for the distribution of credits. You can register for these credits during the Challenge registration process.


Conflicts of Interest

The authors have no conflicts of interest to declare.


References

  1. Drew BJ, Pelter MM, Adams MG, Wung SF. 12-lead ST-segment monitoring vs single-lead maximum ST-segment monitoring for detecting ongoing ischemia in patients with unstable coronary syndromes. American Journal of Critical Care. 1998 Sep 1;7(5):355.
  2. Drew BJ, Pelter MM, Brodnick DE, Yadav AV, Dempel D, Adams MG. Comparison of a new reduced lead set ECG with the standard ECG for diagnosing cardiac arrhythmias and myocardial ischemia. Journal of Electrocardiology. 2002 Oct 1;35(4):13-21.
  3. Green M, Ohlsson M, Forberg JL, Björk J, Edenbrandt L, Ekelund U. Best leads in the standard electrocardiogram for the emergency detection of acute coronary syndrome. Journal of Electrocardiology. 2007 May 1;40(3):251-6.
  4. Aldrich HR, Hindman NB, Hinohara T, Jones MG, Boswick J, Lee KL, Bride W, Califf RM, Wagner GS. Identification of the optimal electrocardiographic leads for detecting acute epicardial injury in acute myocardial infarction. The American Journal of Cardiology. 1987 Jan 1;59(1):20-3.
  5. Perez Alday EA, Gu A, Shah AJ, Robichaux C, Wong AK, Liu C, Liu F, Rad AB, Elola A, Seyedi S, Li Q. Classification of 12-lead ECGs: the PhysioNet/Computing in Cardiology Challenge 2020. Physiological Measurement. 2020 Dec 29;41(12):124003.
  6. Zheng J, Zhang J, Danioko S, Yao H, Guo H, Rakovski C. A 12-lead electrocardiogram database for arrhythmia research covering more than 10,000 patients. Scientific Data. 2020 Feb 12;7(1):1-8.
  7. Zheng J, Chu H, Struppa D, Zhang J, Yacoub SM, El-Askary H, Chang A, Ehwerhemuepha L, Abudayyeh I, Barrett A, Fu G. Optimal multi-stage arrhythmia classification approach. Scientific Reports. 2020 Feb 19;10(1):1-7.

Access

Access Policy:
Anyone can access the files, as long as they conform to the terms of the specified license.

License (for files):
Creative Commons Attribution 4.0 International Public License

