Resources


Database Restricted Access

Smartphone-Captured Chest X-Ray Photographs

Po-Chih Kuo, ChengChe Tsai, Diego M Lopez, Alexandros Karargyris, Tom Pollard, Alistair Johnson, Leo Anthony Celi

Smartphone-captured CXR images, including photographs of chest X-rays from MIMIC-CXR and CheXpert, photographs taken by resident doctors, and photographs taken with different devices.

smartphone photograph cxr

Published: Sept. 27, 2020. Version: 1.0.0


Database Credentialed Access

Eye Gaze Data for Chest X-rays

Alexandros Karargyris, Satyananda Kashyap, Ismini Lourentzou, Joy Wu, Matthew Tong, Arjun Sharma, Shafiq Abedin, David Beymer, Vandana Mukherjee, Elizabeth Krupinski, Mehdi Moradi

This dataset was collected using an eye-tracking system while a radiologist interpreted and read 1,083 public CXR images. The dataset contains the following aligned modalities: image, transcribed report text, dictation audio, and eye gaze data. A minimal loading sketch follows this entry.

convolutional network heatmap eye tracking explainability audio chest cxr chest x-ray radiology multimodal deep learning machine learning

Published: Sept. 12, 2020. Version: 1.0.0
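
As a rough illustration of working with the aligned modalities above, the sketch below accumulates eye-gaze fixations into a coarse attention map over one image. The file name fixations.csv, the JPEG image file, and the DICOM_ID, X_ORIGINAL, and Y_ORIGINAL columns are assumptions made for illustration; the released dataset's actual layout may differ.

import numpy as np
import pandas as pd
from PIL import Image

# Hypothetical file and column names; check the dataset documentation for the real layout.
fixations = pd.read_csv("fixations.csv")
case = fixations[fixations["DICOM_ID"] == "example_dicom_id"]

img = Image.open("example_dicom_id.jpg").convert("L")          # hypothetical JPEG copy of the CXR
heatmap = np.zeros((img.size[1], img.size[0]), dtype=float)    # (rows, cols)

# Accumulate fixation points into a coarse attention map over the image.
for _, row in case.iterrows():
    x, y = int(row["X_ORIGINAL"]), int(row["Y_ORIGINAL"])
    if 0 <= y < heatmap.shape[0] and 0 <= x < heatmap.shape[1]:
        heatmap[y, x] += 1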


Database Credentialed Access

Medical-Diff-VQA: A Large-Scale Medical Dataset for Difference Visual Question Answering on Chest X-Ray Images

Xinyue Hu, Lin Gu, Qiyuan An, Mengliang Zhang, liangchen liu, Kazuma Kobayashi, Tatsuya Harada, Ronald Summers, Yingying Zhu

MIMIC-Diff-VQA provides a large-scale dataset for difference visual question answering on medical chest X-ray images.

difference visual question answering difference vqa vqa chest x-ray visual question answering

Published: Feb. 3, 2025. Version: 1.0.1


Database Credentialed Access

Medical-CXR-VQA dataset: A Large-Scale LLM-Enhanced Medical Dataset for Visual Question Answering on Chest X-Ray Images

Xinyue Hu, Lin Gu, Kazuma Kobayashi, liangchen liu, Mengliang Zhang, Tatsuya Harada, Ronald Summers, Yingying Zhu

Medical-CXR-VQA provides a large-scale LLM-enhanced dataset for visual question answering in medical chest x-ray images.

Published: Jan. 21, 2025. Version: 1.0.0


Database Restricted Access

Visual Question Answering evaluation dataset for MIMIC CXR

Timo Kohlberger, Charles Lau, Tom Pollard, Andrew Sellergren, Atilla Kiraly, Fayaz Jamil

This dataset provides 224 visual question-answer pairs (VQAs) for 40 test set cases and 111 VQAs for 23 validation set cases of the MIMIC-CXR dataset. A minimal evaluation sketch follows this entry.

Published: Jan. 28, 2025. Version: 1.0.0
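
The sketch below shows one hedged way such question-answer pairs might be scored with exact-match accuracy. The file name mimic_cxr_vqa_test.json, its record fields, and the model_answer placeholder are assumptions for illustration, not the dataset's documented format.

import json

def model_answer(question: str) -> str:
    # Placeholder for the VQA model under evaluation; replace with real inference.
    return "no"

def exact_match(pred: str, gold: str) -> bool:
    # Case- and whitespace-insensitive exact match between answers.
    return pred.strip().lower() == gold.strip().lower()

# Hypothetical file layout: a JSON list of {"study_id", "question", "answer"} records.
with open("mimic_cxr_vqa_test.json") as f:
    records = json.load(f)

correct = sum(exact_match(model_answer(r["question"]), r["answer"]) for r in records)
print(f"Exact-match accuracy: {correct / len(records):.3f}")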


Database Open Access

Heart and lung segmentations for MIMIC-CXR/MIMIC-CXR-JPG and Montgomery County TB databases

Benjamin Duvieusart, Felix Krones, Guy Parsons, Lionel Tarassenko, Bartlomiej W Papiez, Adam Mahdi

Heart and lung segmentations for 200 MIMIC-CXR/MIMIC-CXR-JPG chest X-rays and heart segmentations for 138 Montgomery County tuberculosis chest X-rays. A minimal overlap-scoring sketch follows this entry.

segmentation heart and lungs montgomery county tb mimic-cxr

Published: Aug. 14, 2023. Version: 1.0.0
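
The sketch below illustrates how a released mask could be compared against a model prediction using a Dice overlap score. The mask file names are hypothetical; only the Dice computation itself is general.

import numpy as np
from PIL import Image

def dice(a: np.ndarray, b: np.ndarray) -> float:
    # Dice overlap between two binary masks.
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + 1e-8)

# Hypothetical mask file names; the dataset's actual naming may differ.
reference = np.array(Image.open("heart_mask.png")) > 0
prediction = np.array(Image.open("predicted_heart_mask.png")) > 0
print(f"Heart Dice: {dice(reference, prediction):.3f}")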


Database Restricted Access

Pulmonary Edema Severity Grades Based on MIMIC-CXR

Ruizhi Liao, Geeticka Chauhan, Polina Golland, Seth Berkowitz, Steven Horng

Pulmonary edema metadata and labels for MIMIC-CXR

Published: Feb. 9, 2021. Version: 1.0.1


Database Credentialed Access

MIMIC-CXR Database

Alistair Johnson, Tom Pollard, Roger Mark, Seth Berkowitz, Steven Horng

Chest radiographs in DICOM format with associated free-text reports. A minimal reading sketch follows this entry.

computer vision chest x-rays natural language processing radiology mimic machine learning

Published: July 23, 2024. Version: 2.1.0
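
The sketch below is a minimal, assumption-laden example of reading one such DICOM file with pydicom; the path is a placeholder, and decoding the pixel data may require an additional pixel-data handler depending on the transfer syntax.

import pydicom

# Placeholder path; substitute the path of a downloaded MIMIC-CXR DICOM file.
ds = pydicom.dcmread("example_study/example_image.dcm")
pixels = ds.pixel_array  # pixel data as a NumPy array (may need a pixel-data handler installed)
print(ds.get("ViewPosition", "unknown view"), pixels.shape)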


Database Credentialed Access

MS-CXR-T: Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing

Shruthi Bannur, Stephanie Hyland, Qianchu Liu, Fernando Pérez-García, Max Ilse, Daniel Coelho de Castro, Benedikt Boecking, Harshita Sharma, Kenza Bouzid, Anton Schwaighofer, Maria Teodora Wetscherek, Hannah Richardson, Tristan Naumann, Javier Alvarez Valle, Ozan Oktay

MS-CXR-T is a multimodal benchmark that enhances the MIMIC-CXR v2 dataset with expert-verified annotations. Its goal is to evaluate biomedical vision-language processing models on temporal semantics extracted from image and text.

disease progression cxr vision-language processing chest x-ray radiology multimodal

Published: March 17, 2023. Version: 1.0.0