Resources


Database Credentialed Access

Medical-CXR-VQA dataset: A Large-Scale LLM-Enhanced Medical Dataset for Visual Question Answering on Chest X-Ray Images

Xinyue Hu, Lin Gu, Kazuma Kobayashi, Liangchen Liu, Mengliang Zhang, Tatsuya Harada, Ronald Summers, Yingying Zhu

Medical-CXR-VQA provides a large-scale LLM-enhanced dataset for visual question answering on medical chest X-ray images.

Published: Jan. 21, 2025. Version: 1.0.0


Database Open Access

CheXmask Database: a large-scale dataset of anatomical segmentation masks for chest x-ray images

Nicolas Gaggion, Candelaria Mosquera, Martina Aineseder, Lucas Mansilla, Diego Milone, Enzo Ferrante

The CheXmask Database is a dataset of 657,566 uniformly annotated chest radiographs with segmentation masks. Images were segmented using HybridGNet, with automatic quality control indicated by RCA scores.

automatic quality assessment, chest x-ray segmentation, medical image segmentation

Published: Jan. 22, 2025. Version: 1.0.0


Database Credentialed Access

Eye Gaze Data for Chest X-rays

Alexandros Karargyris, Satyananda Kashyap, Ismini Lourentzou, Joy Wu, Matthew Tong, Arjun Sharma, Shafiq Abedin, David Beymer, Vandana Mukherjee, Elizabeth Krupinski, Mehdi Moradi

This dataset was collected using an eye-tracking system while a radiologist interpreted and read 1,083 public CXR images. The dataset contains the following aligned modalities: image, transcribed report text, dictation audio, and eye gaze data.

convolutional network, heatmap, eye tracking, explainability, audio, chest, cxr, chest x-ray, radiology, machine learning, deep learning, multimodal

Published: Sept. 12, 2020. Version: 1.0.0


Database Credentialed Access

EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images

Seongsu Bae, Daeun Kyung, Jaehee Ryu, Eunbyeol Cho, Gyubok Lee, Sunjun Kweon, Jungwoo Oh, Lei Ji, Eric Chang, Tackeun Kim, Edward Choi

We present EHRXQA, the first multi-modal EHR QA dataset combining structured patient records with aligned chest X-ray images. EHRXQA contains a comprehensive set of QA pairs covering image-related, table-related, and image+table-related questions.

question answering, chest x-ray, benchmark evaluation, multi-modal question answering, ehr question answering, semantic parsing, machine learning, electronic health records, deep learning, visual question answering

Published: July 23, 2024. Version: 1.0.0


Database Credentialed Access

MS-CXR: Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing

Benedikt Boecking, Naoto Usuyama, Shruthi Bannur, Daniel Coelho de Castro, Anton Schwaighofer, Stephanie Hyland, Harshita Sharma, Maria Teodora Wetscherek, Tristan Naumann, Aditya Nori, Javier Alvarez Valle, Hoifung Poon, Ozan Oktay

MS-CXR is a dataset of 1,162 chest X-ray bounding box labels paired with radiology text descriptions, annotated and verified by two board-certified radiologists.

vision-language processing, chest x-ray, phrase grounding, localization

Published: Nov. 15, 2024. Version: 1.1.0


Database Restricted Access

LATTE-CXR: Locally Aligned TexT and imagE, Explainable dataset for Chest X-Rays

Elham Ghelichkhan, Tolga Tasdizen

This dataset provides bounding box-statement pairs for chest X-ray images, derived from radiologists’ eye-tracking data (for explainability) and annotations, to support local visual-language models.

eye-tracking, chest x-ray dataset, automatically generated dataset, caption-guided object detection, image captioning with region-level description, grounded radiology report generation, phrase grounding, xai, multi-modal learning, local visual-language models, localization

Published: Feb. 4, 2025. Version: 1.0.0


Database Credentialed Access

Medical-Diff-VQA: A Large-Scale Medical Dataset for Difference Visual Question Answering on Chest X-Ray Images

Xinyue Hu, Lin Gu, Qiyuan An, Mengliang Zhang, Liangchen Liu, Kazuma Kobayashi, Tatsuya Harada, Ronald Summers, Yingying Zhu

Medical-Diff-VQA provides a large-scale dataset for difference visual question answering on medical chest X-ray images.

difference visual question answering, difference vqa, vqa, chest x-ray, visual question answering

Published: Feb. 3, 2025. Version: 1.0.1


Database Credentialed Access

RadGraph2: Tracking Findings Over Time in Radiology Reports

Adam Dejl, Sameer Khanna, Patricia Therese Pile, Kibo Yoon, Steven QH Truong, Hanh Duong, Agustina Saenz, Pranav Rajpurkar

RadGraph2 is a dataset of 800 chest radiology reports annotated with a fine-grained entity-relationship schema that captures key findings as well as mentions of changes relative to previous radiology studies.

chest x-rays, relation extraction, disease progression, information extraction, radiology reports, named entity recognition

Published: Aug. 8, 2024. Version: 1.0.0


Database Credentialed Access

Radiology Report Expert Evaluation (ReXVal) Dataset

Feiyang Yu, Mark Endo, Rayan Krishnan, Ian Pan, Andy Tsai, Eduardo Pontes Reis, Eduardo Kaiser Ururahy Nunes Fonseca, Henrique Lee, Zahra Shakeri, Andrew Ng, Curtis Langlotz, Vasantha Kumar Venugopal, Pranav Rajpurkar

The Radiology Report Expert Evaluation (ReXVal) Dataset is a publicly available dataset of radiologist evaluations of errors in automatically generated radiology reports.

Published: June 20, 2023. Version: 1.0.0


Database Restricted Access

REFLACX: Reports and eye-tracking data for localization of abnormalities in chest x-rays

Ricardo Bigolin Lanfredi, Mingyuan Zhang, William Auffermann, Jessica Chan, Phuong-Anh Duong, Vivek Srikumar, Trafton Drew, Joyce Schroeder, Tolga Tasdizen

This dataset contains 3,032 cases of eye-tracking data collected while five radiologists dictated reports for frontal chest X-rays, along with synchronized, timestamped dictation transcripts and manual labels for validating abnormality localization.

eye tracking, radiology report, reflacx, fixations, computer vision, chest x-rays, gaze, radiology, machine learning, deep learning

Published: Sept. 27, 2021. Version: 1.0.0