Resources


Database Credentialed Access

MS-CXR: Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing

Benedikt Boecking, Naoto Usuyama, Shruthi Bannur, Daniel Coelho de Castro, Anton Schwaighofer, Stephanie Hyland, Maria Teodora Wetscherek, Tristan Naumann, Aditya Nori, Javier Alvarez Valle, Hoifung Poon, Ozan Oktay

MS-CXR is a dataset containing 1162 chest X-ray bounding box labels paired with radiology text descriptions, annotated and verified by two board-certified radiologists.

vision-language processing chest x-ray

Published: May 16, 2022. Version: 0.1


Database Credentialed Access

MS-CXR-T: Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing

Shruthi Bannur, Stephanie Hyland, Qianchu Liu, Fernando Pérez-García, Max Ilse, Daniel Coelho de Castro, Benedikt Boecking, Harshita Sharma, Kenza Bouzid, Anton Schwaighofer, Maria Teodora Wetscherek, Hannah Richardson, Tristan Naumann, Javier Alvarez Valle, Ozan Oktay

MS-CXR-T is a multimodal benchmark that augments the MIMIC-CXR v2 dataset with expert-verified annotations. Its goal is to evaluate biomedical vision-language processing models on the temporal semantics extracted from images and text.

cxr disease progression vision-language processing multimodal radiology chest x-ray

Published: March 17, 2023. Version: 1.0.0


Database Credentialed Access

FFA-IR: Towards an Explainable and Reliable Medical Report Generation Benchmark

Mingjie Li, Wenjia Cai, Rui Liu, Yuetian Weng, Xiaoyun Zhao, Cong Wang, Xin Chen, Zhong Liu, Caineng Pan, Mengke Li, Yingfeng Zheng, Yizhi Liu, Flora Salim, Karin Verspoor, Xiaodan Liang, Xiaojun Chang

Benchmark dataset for report generation based on fundus fluorescein angiography images and reports.

fundus fluorescein angiography explainable and reliable evaluation vision and language medical report generation

Published: Sept. 21, 2021. Version: 1.0.0


Database Credentialed Access

RaDialog Instruct Dataset

Chantal Pellegrini, Ege Özsoy, Benjamin Busam, Nassir Navab, Matthias Keicher

Image-based instruct data for chest X-ray understanding and analysis.

medical image understanding radiology chatbot radiology report generation radiology assistant large vision-language models

Published: July 12, 2024. Version: 1.1.0


Database Credentialed Access

MIMIC-Ext-MIMIC-CXR-VQA: A Complex, Diverse, And Large-Scale Visual Question Answering Dataset for Chest X-ray Images

Seongsu Bae, Daeun Kyung, Jaehee Ryu, Eunbyeol Cho, Gyubok Lee, Sunjun Kweon, Jungwoo Oh, Lei JI, Eric Chang, Tackeun Kim, Edward Choi

We introduce MIMIC-Ext-MIMIC-CXR-VQA, a complex, diverse, and large-scale dataset designed for Visual Question Answering (VQA) tasks within the medical domain, focusing primarily on chest radiographs.

question answering multimodal radiology machine learning evaluation visual question answering electronic health records benchmark deep learning chest x-ray

Published: July 19, 2024. Version: 1.0.0


Challenge Credentialed Access

ShARe/CLEF eHealth Evaluation Lab 2014 (Task 2): Disorder Attributes in Clinical Reports

Danielle Mowery

The ShARe/CLEF eHealth 2014 Challenge (Task 2) on Disorder Attributes in Clinical Reports

Published: Nov. 1, 2013. Version: 1.0