Resources


Database Credentialed Access

EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images

Seongsu Bae, Daeun Kyung, Jaehee Ryu, et al.

We present EHRXQA, the first multi-modal EHR QA dataset combining structured patient records with aligned chest X-ray images. EHRXQA contains a comprehensive set of QA pairs covering image-related, table-related, and image+table-related questions.

question answering, machine learning, electronic health records, evaluation, chest x-ray, multi-modal question answering, ehr question answering, semantic parsing, benchmark, deep learning, visual question answering

Published: July 23, 2024. Version: 1.0.0
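
As a rough illustration of the three question scopes mentioned above (image, table, and image+table), a single QA pair could be represented as in the sketch below. The field names and example values are assumptions for illustration only, not the dataset's actual schema.

from dataclasses import dataclass, field

@dataclass
class QAPair:
    # Hypothetical fields, chosen only to mirror the three question scopes
    # described in the abstract (image / table / image+table).
    question: str                   # natural-language question over the EHR
    answer: list                    # answer value(s), e.g. ["yes"] or a list of findings
    scope: str                      # "image", "table", or "image+table"
    cxr_study_ids: list = field(default_factory=list)  # linked chest X-ray studies, if any
    table_names: list = field(default_factory=list)    # linked structured-EHR tables, if any

example = QAPair(
    question="Did the last chest X-ray from the most recent hospital visit show any pleural effusion?",
    answer=["yes"],
    scope="image+table",
    cxr_study_ids=["<study_id>"],
    table_names=["admissions"],
)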


Database Credentialed Access

Eye Gaze Data for Chest X-rays

Alexandros Karargyris, Satyananda Kashyap, Ismini Lourentzou, et al.

This dataset was collected using an eye tracking system while a radiologist read and interpreted 1,083 public CXR images. The dataset contains the following aligned modalities: image, transcribed report text, dictation audio, and eye gaze data.

convolutional network, heatmap, eye tracking, explainability, audio, chest, cxr, machine learning, chest x-ray, radiology, multimodal, deep learning

Published: Sept. 12, 2020. Version: 1.0.0
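
As a generic usage sketch (not the dataset's own tooling), fixation data like this is often rendered as a heatmap over the X-ray. The sketch assumes the gaze data has already been loaded into plain x/y pixel-coordinate and dwell-time arrays, which may differ from the dataset's actual file layout.

import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(xs, ys, dwell_times, image_shape, sigma=25.0):
    """Accumulate fixation dwell times on a pixel grid, then blur into a smooth heatmap."""
    heat = np.zeros(image_shape, dtype=np.float32)
    for x, y, dwell in zip(xs, ys, dwell_times):
        col, row = int(round(x)), int(round(y))
        if 0 <= row < image_shape[0] and 0 <= col < image_shape[1]:
            heat[row, col] += dwell
    heat = gaussian_filter(heat, sigma=sigma)  # spread each fixation into a Gaussian blob
    if heat.max() > 0:
        heat /= heat.max()                     # normalize to [0, 1] for overlaying on the image
    return heat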


Database Contributor Review

A multimodal dental dataset facilitating machine learning research and clinic services

Wenjing Liu, Yunyou Huang, Suqin Tang

A new dental dataset that covers 169 patients, three commonly used dental imaging modalities, and images of various oral health conditions.

Published: Oct. 11, 2024. Version: 1.1.0


Database Credentialed Access

RadGraph: Extracting Clinical Entities and Relations from Radiology Reports

Saahil Jain, Ashwin Agrawal, Adriel Saporta, et al.

RadGraph is a dataset of entities and relations in full-text chest X-ray radiology reports, obtained using a novel information extraction (IE) schema designed to capture clinically relevant information from radiology reports.

entity and relation extraction, graph, multi-modal, natural language processing, radiology

Published: June 3, 2021. Version: 1.0.0
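
To make the entity-and-relation idea concrete, a single annotated sentence could look roughly like the sketch below. The JSON-like layout, field names, and label/relation strings are illustrative assumptions, not the dataset's verified format.

# Hypothetical sketch of one annotated report fragment. Entity labels and
# relation names are shown only as examples of the kind of information an
# entity-and-relation schema captures; consult the dataset files for the
# authoritative format.
example_annotation = {
    "text": "Small left pleural effusion.",
    "entities": {
        "1": {"tokens": "Small",    "label": "observation", "relations": [["modify", "4"]]},
        "2": {"tokens": "left",     "label": "anatomy",     "relations": [["modify", "3"]]},
        "3": {"tokens": "pleural",  "label": "anatomy",     "relations": []},
        "4": {"tokens": "effusion", "label": "observation", "relations": [["located_at", "3"]]},
    },
}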


Database Credentialed Access

MIMIC-Ext-MIMIC-CXR-VQA: A Complex, Diverse, And Large-Scale Visual Question Answering Dataset for Chest X-ray Images

Seongsu Bae, Daeun Kyung, Jaehee Ryu, et al.

We introduce MIMIC-Ext-MIMIC-CXR-VQA, a complex, diverse, and large-scale dataset designed for Visual Question Answering (VQA) tasks within the medical domain, focusing primarily on chest radiographs.

question answering, machine learning, electronic health records, evaluation, chest x-ray, radiology, benchmark, multimodal, deep learning, visual question answering

Published: July 19, 2024. Version: 1.0.0


Database Restricted Access

LATTE-CXR: Locally Aligned TexT and imagE, Explainable dataset for Chest X-Rays

Elham Ghelichkhan, Tolga Tasdizen

This dataset provides bounding box-statement pairs for chest X-ray images, derived from radiologists' eye-tracking data and annotations, to support explainability in local visual-language models.

eye-tracking, chest x-ray dataset, automatically generated dataset, caption-guided object detection, image captioning with region-level description, grounded radiology report generation, phrase grounding, xai, multi-modal learning, local visual-language models, localization

Published: Feb. 4, 2025. Version: 1.0.0