A Labeled Ophthalmic Ultrasound Dataset with Medical Report Generation Based on Cross-modal Deep Learning

Kavli Affiliate: Jing Wang

| First 5 Authors: Jing Wang, Junyan Fan, Meng Zhou, Yanzhu Zhang, Mingyu Shi

| Summary:

Ultrasound imaging reveals eye morphology and aids in diagnosing and treating
eye diseases. However, interpreting diagnostic reports requires specialized
physicians. We present a labeled ophthalmic dataset for the precise analysis
and automated exploration of medical images together with their associated
reports. It comprises three data modalities, namely ultrasound images, blood
flow information, and examination reports, collected from 2,417 patients at an
ophthalmology hospital in Shenyang, China, during 2018; all patient
information is de-identified for privacy protection. To the best of our
knowledge, it is the only ophthalmic dataset that contains all three
modalities simultaneously. It consists of 4,858 images with corresponding
free-text reports describing 15 typical imaging findings of intraocular
diseases and their anatomical locations. Each image is accompanied by three
blood flow indices measured at each of three specific arteries, i.e., nine
parameter values characterizing the spectral properties of the blood flow
distribution. The reports were written by ophthalmologists during clinical
care. We apply the dataset to medical report generation with a cross-modal
deep learning model. The experimental results demonstrate that the dataset
is suitable for training supervised models on cross-modal medical data.
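A minimal sketch of how one record in such a three-modality dataset might be organized. The artery and index names below are assumptions for illustration only (the abstract says three indices at three arteries but does not name them); common ophthalmic Doppler measures are used as placeholders.

```python
from dataclasses import dataclass
from typing import Dict, List

# Assumed names, NOT taken from the paper: ophthalmic artery (OA),
# central retinal artery (CRA), posterior ciliary artery (PCA), with
# typical Doppler indices PSV, EDV, and RI.
ARTERIES = ["OA", "CRA", "PCA"]
INDICES = ["PSV", "EDV", "RI"]

@dataclass
class OphthalmicRecord:
    """One entry: an image, its blood-flow table, and a free-text report."""
    patient_id: str          # de-identified identifier
    image_path: str          # path to the ultrasound image
    blood_flow: Dict[str, Dict[str, float]]  # artery -> index -> value
    report: str              # ophthalmologist-written free-text report

    def flow_vector(self) -> List[float]:
        """Flatten the 3 x 3 blood-flow table into the nine parameters."""
        return [self.blood_flow[a][i] for a in ARTERIES for i in INDICES]

# Example record with placeholder values.
record = OphthalmicRecord(
    patient_id="P0001",
    image_path="images/P0001_left.png",
    blood_flow={a: {i: 0.0 for i in INDICES} for a in ARTERIES},
    report="Vitreous opacity in the left eye.",
)
print(len(record.flow_vector()))  # nine parameter values per image
```

The flattened nine-value vector is the form a cross-modal model would most naturally consume alongside the image and report text.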

| Search Query: ArXiv Query: search_query=au:"Jing Wang"&id_list=&start=0&max_results=3