Database Credentialed Access

Chest X-ray segmentation images based on MIMIC-CXR

Li-Ching Chen, Po-Chih Kuo, Ryan Wang, Judy Gichoya, Leo Anthony Celi

Published: Aug. 18, 2022. Version: 1.0.0

When using this resource, please cite:
Chen, L., Kuo, P., Wang, R., Gichoya, J., & Celi, L. A. (2022). Chest X-ray segmentation images based on MIMIC-CXR (version 1.0.0). PhysioNet.

Please include the standard citation for PhysioNet:
Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., ... & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220.


As artificial intelligence (AI) and deep learning are increasingly applied to medical imaging tasks such as radiological finding identification in chest X-rays (CXRs), the interpretability of prediction models is crucial for building trust in AI. In pulmonary pathology detection, CXR images with proper anatomical segmentations could aid in interpreting the models. However, auto-segmentation algorithms are not yet accurate enough to create such a benchmark on their own. In this project, we provide segmentation results for 1,141 frontal-view CXRs randomly selected from the MIMIC-CXR database. Each CXR was first processed by deep learning-based algorithms into a pair of segmented images: one containing the lung lobes and one containing the remainder of the image. We then manually filtered out incorrect segmentation results. The segmented images may be helpful for model interpretability.


With the increased application of deep learning models to chest X-rays (CXRs), there is value in investigating model interpretability. Grad-CAM [1] is one approach that has been proposed for interpreting models, but it has limitations [2]. Segmenting images can help ensure that models focus on the lung area rather than on shortcuts such as the image background and hospital tokens [3]. Oh et al. [4] demonstrated that their proposed model improves sensitivity for classifying COVID-19 compared to the state-of-the-art COVID-Net model [5], since only patches of CXRs containing the lung area are used in model training.

Two popular datasets for segmentation tasks are the Japanese Society of Radiological Technology (JSRT) database [6] and the Montgomery County (MC) database [7-9]. The JSRT database contains 247 frontal CXRs: 154 with lung nodules and 93 without. The MC database comprises 138 frontal CXRs: 80 normal and 58 with pulmonary tuberculosis. However, neither dataset includes clinical data, and both were constructed around specific diseases. Although MIMIC-CXR contributes a significant number of medical images and annotations to support extensive studies [10–12], it has no segmentation labels available for developing new segmentation methods and appraising existing models.

The U-Net has been widely used in medical image segmentation [13–16]. Reza et al. [13], using the MC database and a U-Net model, achieved a Dice similarity coefficient of 94.9% and a binary accuracy of 96.8% for lung segmentation. Because most CXR segmentation models are trained on databases of normal CXRs, they lack generalizability to diverse CXRs whose appearance varies with acquisition conditions or morbidity. Manual selection of the automatically segmented images therefore remains necessary, because a CXR segmentation model may fail on CXRs that show devices, artifacts, or abnormalities such as opacity and fibrosis [8,17]. In our experiment, more than 70% of auto-segmented images had incorrect anatomical landmarks of the lung areas.

We have applied the segmented images to test the race classifier trained on frontal CXRs [18]. We tested the model on lung-only and non-lung segmented CXRs, respectively. The model performed far better on non-lung images than on lung-only images, showing that the model did not identify race from the lung region of the CXR. In addition, we tested the model on MIMIC-CXRs to identify 14 radiological findings and achieved an average AUC of 0.715 [95% CI: 0.68 - 0.75] for non-lung regions (edema: 0.784, consolidation: 0.678, pleural effusion: 0.849, pneumothorax: 0.807, atelectasis: 0.745, cardiomegaly: 0.848), and 0.62 [95% CI: 0.59 - 0.65] for lung-only CXRs (edema: 0.728, consolidation: 0.674, pleural effusion: 0.625, pneumothorax: 0.550, atelectasis: 0.595, cardiomegaly: 0.691).


We employed TernausNet for the first-stage auto-segmentation [16,19]. The model is based on the U-Net architecture with a pre-trained VGG11 encoder and batch normalization. A softmax function is applied to the output, and the negative log-likelihood loss is used for training. The Adam optimizer was used with a learning rate of 0.0005, and the model was trained for 100 epochs. The train-validation-test split was 80-10-10, and the training data [11] incorporated augmentations such as horizontal and vertical shifts, minor zoom, and padding. Images were resized to 512 by 512 pixels before being fed into the network.
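The shift-and-pad augmentation mentioned above can be sketched in a few lines of numpy. This is an illustration only, not the pipeline actually used for training; the function name, the shift amounts, and the zero-padding choice are assumptions.

```python
import numpy as np

# Minimal sketch of a shift augmentation with zero padding (an assumption,
# for illustration; the actual training used TernausNet's own pipeline).
def shift_with_padding(img, dx, dy):
    """Shift a 2-D image by (dx, dy) pixels, zero-padding the exposed border."""
    out = np.zeros_like(img)
    h, w = img.shape
    dst_rows = slice(max(dy, 0), min(h + dy, h))
    dst_cols = slice(max(dx, 0), min(w + dx, w))
    src_rows = slice(max(-dy, 0), min(h - dy, h))
    src_cols = slice(max(-dx, 0), min(w - dx, w))
    out[dst_rows, dst_cols] = img[src_rows, src_cols]
    return out

shifted = shift_with_padding(np.ones((4, 4)), 1, 0)  # shift right by one pixel
```

After the shift, the exposed left column is zero-filled while the remaining pixels are preserved, mimicking how shifted training images keep a valid 512-by-512 shape.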

After segmentation, the images were stored at 512 by 512 pixels. We processed 4,091 CXRs randomly selected from MIMIC-CXR and eliminated false recognitions by human examination, yielding 1,141 lung images and the 1,141 corresponding non-lung images. An image was accepted only if both lung lobes were intact, followed the boundary of the heart, and had a clear boundary from the clavicle to the diaphragm.
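Conceptually, each accepted segmentation yields a complementary pair of images: the lung-only image and the non-lung remainder. A minimal numpy sketch of that pairing, assuming a binary lung mask (the array names and the toy data are placeholders, not the actual generation code):

```python
import numpy as np

# Illustrative sketch: given a CXR and a binary lung mask of the same shape,
# produce the paired "lung" and "non-lung" images by masking.
def split_by_mask(cxr, lung_mask):
    lung_mask = lung_mask.astype(bool)
    lung_img = np.where(lung_mask, cxr, 0)       # keep lung pixels only
    non_lung_img = np.where(lung_mask, 0, cxr)   # keep everything else
    return lung_img, non_lung_img

cxr = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy 4x4 "image"
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                                 # toy "lung" region
lung_img, non_lung_img = split_by_mask(cxr, mask)
# every pixel belongs to exactly one of the two images
assert np.array_equal(lung_img + non_lung_img, cxr)
```

The complementarity checked by the final assertion is what lets the two folders together reconstruct the original CXR content.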

Data Description

Two folders, Lung and Non-lung, contain the lung and non-lung region images, respectively. All images are stored in JPEG format. Image filenames encode the view of the image (lung or non-lung region) and the subject_id and study_id from the original MIMIC-CXR dataset, for example, non-lung_img-10030487_50519814.jpg. The CSV file stores the filename of each image and its corresponding subject_id, study_id and view.
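The naming convention above can be parsed directly from a filename. A small stdlib-only sketch (the helper name and the exact regex are assumptions based on the documented pattern):

```python
import re

# Hypothetical helper: parse the documented naming convention
# <view>_img-<subject_id>_<study_id>.jpg into its parts.
FILENAME_RE = re.compile(r"^(lung|non-lung)_img-(\d+)_(\d+)\.jpg$")

def parse_filename(name):
    """Return (view, subject_id, study_id) or None if the name does not match."""
    m = FILENAME_RE.match(name)
    if m is None:
        return None
    return m.groups()

print(parse_filename("non-lung_img-10030487_50519814.jpg"))
# → ('non-lung', '10030487', '50519814')
```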

The dataset comprises 616 females and 525 males. We collected demographic information from MIMIC-IV according to subject_id [12,20]. By race, there are 148 White, 382 Black, 411 Hispanic, and 200 Asian patients. By age, 9 patients are under twenty, 230 are between twenty and forty, 425 between forty and sixty, 249 between sixty and eighty, and 79 above eighty.

Usage Notes

This dataset was used in Gichoya et al.'s research [18] to test the ability of race classification models to classify race from segmented CXRs. It can be used to examine algorithms and interpret models developed by researchers. When training a detection model for pulmonary pathology, we expect the model to mostly utilize features within the lung lobes, as human experts do.

Users can test a model on both types of segmented images, so that the model must predict from the limited information in each segmented CXR. If a pulmonary pathology detection model performs better on the non-lung (background) images, it may have inadvertently learned shortcuts during training. In addition, users can employ heatmaps to visualize the image regions the model relies on when making a prediction, and thereby verify that the model focuses on reasonable features within the lung lobes.
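The shortcut check described above amounts to scoring one model on the paired images and comparing AUCs. A self-contained sketch with a pure-numpy Mann-Whitney AUC; the score arrays and labels are placeholders, not real model outputs:

```python
import numpy as np

# Rank-based (Mann-Whitney) AUC, implemented directly to avoid extra
# dependencies; equivalent to the usual ROC AUC for binary labels.
def auc(labels, scores):
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 0, 0]                              # placeholder ground truth
auc_lung = auc(labels, [0.9, 0.3, 0.4, 0.2])       # scores on lung-only CXRs
auc_non_lung = auc(labels, [0.9, 0.8, 0.7, 0.1])   # scores on non-lung CXRs
# a markedly higher non-lung AUC would suggest shortcut learning
```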

To obtain additional information on the CXRs, users are advised to use this database together with MIMIC-CXR and MIMIC-IV. The original CXR can be accessed from MIMIC-CXR through subject_id and study_id. The code repository provides an example of displaying images and of the segmentation image generation process [21].
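Linking back to MIMIC-CXR via subject_id and study_id can be done by indexing the provided CSV. A hedged stdlib sketch; the column names follow the description above, but the inline sample row is made up for illustration rather than taken from the published file:

```python
import csv
import io

# Stand-in for the dataset's CSV file (one fabricated row for illustration).
seg_csv = io.StringIO(
    "filename,subject_id,study_id,view\n"
    "lung_img-10030487_50519814.jpg,10030487,50519814,lung\n"
)

# Build a (subject_id, study_id) -> filename index for joining with MIMIC-CXR.
index = {}
for row in csv.DictReader(seg_csv):
    index[(row["subject_id"], row["study_id"])] = row["filename"]

# look up the segmented image for a given MIMIC-CXR study
print(index[("10030487", "50519814")])  # → lung_img-10030487_50519814.jpg
```

The same keys can then be used against MIMIC-CXR's metadata or MIMIC-IV's demographics tables.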



This dataset is derived from MIMIC-CXR and is covered by the same IRB approval.

Conflicts of Interest

We declare no competing interests.


  1. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE international conference on computer vision. 2017. p. 618–26.
  2. Chattopadhay A, Sarkar A, Howlader P, Balasubramanian VN. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE winter conference on applications of computer vision (WACV). IEEE; 2018. p. 839–47.
  3. Geirhos R, Jacobsen JH, Michaelis C, Zemel R, Brendel W, Bethge M, Wichmann FA. Shortcut learning in deep neural networks. Nature Machine Intelligence. 2020 Nov;2(11):665-73.
  4. Oh Y, Park S, Ye JC. Deep learning covid-19 features on cxr using limited training data sets. IEEE transactions on medical imaging. 2020;39(8):2688–700.
  5. Wang L, Lin ZQ, Wong A. Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Scientific Reports. 2020 Nov 11;10(1):1-2.
  6. Shiraishi J, Katsuragawa S, Ikezoe J, Matsumoto T, Kobayashi T, Komatsu K, et al. Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists’ detection of pulmonary nodules. American Journal of Roentgenology. 2000;174(1):71–4.
  7. Jaeger S, Candemir S, Antani S, Wáng Y-XJ, Lu P-X, Thoma G. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quantitative imaging in medicine and surgery. 2014;4(6):475.
  8. Candemir S, Jaeger S, Palaniappan K, Musco JP, Singh RK, Xue Z, et al. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE transactions on medical imaging. 2013;33(2):577–90.
  9. Rusak F, Wang D, Arzhaeva Y. (2018): Lung Segmentation Data Kit. v1. CSIRO. Data Collection.
  10. Johnson AE, Pollard TJ, Berkowitz SJ, Greenbaum NR, Lungren MP, Deng C, et al. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Scientific data. 2019;6(1):1–8.
  11. Johnson A, Lungren M, Peng Y, Lu Z, Mark R, Berkowitz S, Horng S. MIMIC-CXR-JPG - chest radiographs with structured labels (version 2.0.0). PhysioNet. 2019. Available from:
  12. Goldberger AL, Amaral LA, Glass L, Hausdorff JM, Ivanov PC, Mark RG, et al. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation. 2000 Jun 13;101(23):e215–20.
  13. Reza S, Amin OB, Hashem MMA. TransResUNet: Improving U-Net Architecture for Robust Lungs Segmentation in Chest X-rays. In: 2020 IEEE Region 10 Symposium (TENSYMP). IEEE; 2020. p. 1592–5.
  14. Minaee S, Boykov YY, Porikli F, Plaza AJ, Kehtarnavaz N, Terzopoulos D. Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2021 Feb 17.
  15. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham; 2015. p. 234–41.
  16. Iglovikov V, Shvets A. Ternausnet: U-net with vgg11 encoder pre-trained on imagenet for image segmentation. arXiv preprint arXiv:180105746. 2018 Jan 17.
  17. Candemir S, Antani S. A review on lung boundary detection in chest X-rays. International journal of computer assisted radiology and surgery. 2019;14(4):563–76.
  18. Gichoya JW, Banerjee I, Bhimireddy AR, Burns JL, Celi LA, Chen LC, et al. AI recognition of patient race in medical imaging: a modelling study [Internet]. Vol. 4, The Lancet Digital Health. Elsevier BV; 2022. p. e406–14. Available from:
  19. Github repository containing the segmentation code. [accessed on: 9 August 2022]
  20. Johnson A, Bulgarelli L, Pollard T, Horng S, Celi L A, Mark R. MIMIC-IV (version 1.0). PhysioNet. 2021. Available from:
  21. Github repository containing MIMIC-CXR segmentation code. [accessed on: 9 August 2022]

Parent Projects
Chest X-ray segmentation images based on MIMIC-CXR was derived from MIMIC-CXR. Please cite it when using this project.

Access Policy:
Only credentialed users who sign the DUA can access the files.

License (for files):
PhysioNet Credentialed Health Data License 1.5.0

Data Use Agreement:
PhysioNet Credentialed Health Data Use Agreement 1.5.0

Required training:
CITI Data or Specimens Only Research
