With this challenge, we make available a large dataset of 1200 annotated retinal fundus images. In addition, an evaluation framework has been designed so that all submitted results can be evaluated and compared with one another in a uniform manner.

The REFUGE Challenge consists of THREE tasks:
  1. Classification of clinical Glaucoma
  2. Segmentation of Optic Disc and Cup
  3. Localization of Fovea (macular center)  -- NEW Task added

Imaging Data

All fundus images are stored as JPEG files.

Reference Standard

Task 1. The reference standard for glaucoma presence was obtained from the health records; it is not based on the fundus image ONLY, but also takes OCT, visual field, and other clinical findings into consideration. For the training data, the glaucoma and non-glaucoma labels (a.k.a. the reference standard) are reflected in the image folder names.

Task 2. Manual pixel-wise annotations of the optic disc and cup were obtained by SEVEN (was 3 as proposed) independent GLAUCOMA SPECIALISTS from Zhongshan Ophthalmic Center, Sun Yat-sen University, China. The reference standard for the segmentation task was created from the seven annotations, which were merged into a single annotation by another SENIOR GLAUCOMA SPECIALIST. It is stored as a BMP image of the same size as the corresponding fundus image, with the following labels:

128: Optic Disc (Grey color)
0: Optic Cup (Black color)

The numbers in front of the structures indicate the pixel-wise labels. All other pixels are labeled as 255 (white color).
[Figure: example fundus image, the image with annotations overlaid, and the resulting annotation mask]
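For reference, below is a minimal sketch of how an annotation mask can be read back into binary disc and cup masks, assuming the label convention above; the use of PIL/NumPy and the function name are illustrative and not part of any official challenge kit.

```python
# Sketch: reading a reference-standard BMP and recovering binary disc/cup masks
# from the label convention above (0: cup, 128: disc, 255: elsewhere).
import numpy as np
from PIL import Image

def load_disc_cup_masks(mask_path):
    labels = np.array(Image.open(mask_path).convert("L"))
    cup_mask = labels == 0       # optic cup pixels
    disc_mask = labels <= 128    # optic disc region (the cup lies inside the disc)
    return disc_mask, cup_mask
```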

 

Task 3. Manual pixel-wise annotations of the fovea (macular center) were obtained by 7 independent GLAUCOMA SPECIALISTS. The reference standard for the localization task was created, for each individual image, by another independent GLAUCOMA SPECIALIST, who averaged a selection of the 7 annotations.

Training, Off-site, and On-site Test Datasets

A total of 1200 color fundus photographs are available. The dataset is split 1:1:1 into three equal subsets for training, off-site validation, and on-site testing, stratified so that each subset has the same percentage of glaucoma presence. The training set, with a total of 400 color fundus images, is provided together with the corresponding glaucoma status and the unified manual pixel-wise annotations (a.k.a. ground truth). The test data consist of 800 color fundus images and are further split into a 400-image off-site validation set and a 400-image on-site test set.
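For illustration only, the sketch below shows how a stratified 1:1:1 split of this kind can be constructed with scikit-learn; the variable names (`filenames`, `glaucoma_labels`) are placeholders, and the official split is the one released by the organizers.

```python
# Sketch: a stratified 1:1:1 split (train / off-site validation / on-site test)
# that preserves the glaucoma prevalence in each subset.
from sklearn.model_selection import train_test_split

def stratified_three_way_split(filenames, glaucoma_labels, seed=0):
    # First cut: 1/3 for training, 2/3 held out, keeping the glaucoma ratio.
    train_f, rest_f, train_y, rest_y = train_test_split(
        filenames, glaucoma_labels, train_size=1/3,
        stratify=glaucoma_labels, random_state=seed)
    # Second cut: split the remainder in half, again stratified.
    val_f, test_f, val_y, test_y = train_test_split(
        rest_f, rest_y, train_size=0.5, stratify=rest_y, random_state=seed)
    return (train_f, train_y), (val_f, val_y), (test_f, test_y)
```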

 

Submission Guidelines

General Guidelines.
When submitting your results, please prepare a single .ZIP file. Inside the ZIP there must be: a folder named "segmentation" with the segmentation results; a CSV file named "fovea_location_results.csv" with the fovea localization results; and a CSV file named "classification_results.csv" with the classification results. Please take into account that submitting unorganized files could lead to a wrong evaluation.
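As an illustration of the required layout, here is a minimal packaging sketch; `results_dir` and the function name are assumptions, and only the archive structure (a "segmentation" folder plus the two CSV files) is prescribed by the challenge.

```python
# Sketch: assembling a submission ZIP with the required layout, assuming the
# result files already exist on disk under `results_dir`.
import os
import zipfile

def build_submission(results_dir, zip_path="submission.zip"):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # Segmentation masks go inside a folder literally named "segmentation".
        seg_dir = os.path.join(results_dir, "segmentation")
        for name in sorted(os.listdir(seg_dir)):
            zf.write(os.path.join(seg_dir, name), arcname=f"segmentation/{name}")
        # The two CSV files sit at the root of the archive.
        for csv_name in ("fovea_location_results.csv", "classification_results.csv"):
            zf.write(os.path.join(results_dir, csv_name), arcname=csv_name)
```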

 

Challenge Task 1: Glaucoma Classification. 
The classification results should be provided in a single CSV file, named “classification_results.csv”, with the first column corresponding to the filename of the test fundus image (including the extension “.jpg”) and the second column containing the estimated classification probability/risk of the image belonging to a patient diagnosed with glaucoma (value from 0.0 to 1.0).
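A minimal sketch of writing this file is shown below; the header row and the `predictions` dictionary are assumptions, since the challenge text only fixes the column order.

```python
# Sketch: writing classification_results.csv, assuming `predictions` maps each test
# filename (e.g. "T0001.jpg") to a glaucoma risk in [0.0, 1.0].
import csv

def write_classification_csv(predictions, path="classification_results.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["FileName", "Glaucoma Risk"])  # assumed header
        for filename, risk in sorted(predictions.items()):
            writer.writerow([filename, f"{risk:.4f}"])
```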
 
Challenge Task 2: Optic disc/cup Segmentation. 
The segmentation results should be provided in a “segmentation” folder, as one image per test image, with the segmented pixels labeled in the same way as in the reference standard (bmp (8-bit) files with 0: optic cup, 128: optic disc, 255: elsewhere). Please make sure that your submitted segmentation files are named according to the original image names and with the same extension.
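Below is a minimal sketch of saving a result mask with the required encoding, assuming boolean NumPy arrays for the disc and cup; the use of PIL and the function name are illustrative.

```python
# Sketch: saving a segmentation result as an 8-bit BMP with the required labels
# (0: optic cup, 128: optic disc, 255: elsewhere). `disc_mask` and `cup_mask`
# are boolean arrays of the same height/width as the fundus image.
import numpy as np
from PIL import Image

def save_segmentation_bmp(disc_mask, cup_mask, out_path):
    labels = np.full(disc_mask.shape, 255, dtype=np.uint8)  # background
    labels[disc_mask] = 128                                  # optic disc
    labels[cup_mask] = 0                                     # optic cup (inside the disc)
    Image.fromarray(labels, mode="L").save(out_path)         # e.g. "T0001.bmp"
```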
 
Challenge Task 3 (Optional Task): Fovea Localization. 
The localization results should be provided in a single CSV file, named “fovea_location_results.csv”, with the first column corresponding to the filename of the test fundus image (including the extension “.jpg”), the second column containing the X-coordinate and the third column containing the Y-coordinate. Please make sure that the filenames listed in the CSV match the original image names, including the extension.
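Analogously to Task 1, a minimal sketch for writing this CSV is given below; the header row and the `fovea_locations` dictionary are assumptions.

```python
# Sketch: writing fovea_location_results.csv, assuming `fovea_locations` maps
# each test filename to an (x, y) pixel coordinate.
import csv

def write_fovea_csv(fovea_locations, path="fovea_location_results.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["FileName", "Fovea_X", "Fovea_Y"])  # assumed header
        for filename, (x, y) in sorted(fovea_locations.items()):
            writer.writerow([filename, f"{x:.2f}", f"{y:.2f}"])
```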

Evaluation Framework

This challenge evaluates the performance of the algorithms for: (1) glaucoma classification, and (2) optic disc/cup segmentation. Thus, there will be two main leaderboards. The average score across the two leaderboards will determine the final ranking of the challenge. In case of a tie, the classification leaderboard score takes precedence.
 
Classification results will be compared to the clinical grading of glaucoma. A receiver operating characteristic (ROC) curve will be created across all the test set images and the area under the curve (AUC) will be calculated. Each team receives a rank (1=best) based on the obtained AUC value. This ranking forms the classification leaderboard.
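For reference, the AUC can be computed as sketched below, assuming the submitted risks and the reference labels have been aligned by filename; scikit-learn is used here only for illustration.

```python
# Sketch: classification AUC, with `y_true` as reference labels (1 = glaucoma,
# 0 = non-glaucoma) and `y_score` as the submitted risks, aligned by filename.
from sklearn.metrics import roc_auc_score

def classification_auc(y_true, y_score):
    return roc_auc_score(y_true, y_score)
```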
 
Submitted segmentation results will be compared to the reference standard. The disc and cup Dice indices (DI) and the cup-to-disc ratio (CDR) will be calculated as segmentation evaluation measures. Each team receives a rank (1=best) for each evaluation measure based on the mean value of the measure over the set of test images. The segmentation score is then determined by adding the three individual ranks (two DI ranks and one CDR rank). The team with the lowest score will be ranked #1 on the segmentation leaderboard.
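A sketch of the two segmentation measures is given below; the vertical-diameter definition of the CDR is an assumption, as the challenge text does not spell out how the CDR is computed from the masks.

```python
# Sketch: Dice index and a vertical cup-to-disc ratio from boolean disc/cup masks
# (see load_disc_cup_masks above for recovering the masks from a label image).
import numpy as np

def dice_index(pred_mask, ref_mask):
    # Both arguments are boolean arrays for one structure (disc or cup).
    intersection = np.logical_and(pred_mask, ref_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + ref_mask.sum())

def vertical_cdr(disc_mask, cup_mask):
    # Assumed definition: ratio of vertical extents (in pixels) of cup and disc.
    disc_height = np.ptp(np.where(disc_mask)[0]) + 1
    cup_height = np.ptp(np.where(cup_mask)[0]) + 1
    return cup_height / disc_height
```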

Please NOTE that Task 3 (fovea localization) is OPTIONAL for the MICCAI ONSITE challenge and NOT counted toward the final leaderboards. However, it is ESSENTIAL for the ONLINE challenge on the TEST dataset. The evaluation criterion is the average Euclidean distance between the estimated and ground-truth locations (lower is better).
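This criterion corresponds to the following sketch, assuming predicted and reference coordinates have been aligned by filename.

```python
# Sketch: average Euclidean distance for fovea localization, with `pred` and `ref`
# as (N, 2) arrays of (x, y) coordinates aligned by filename; lower is better.
import numpy as np

def average_euclidean_distance(pred, ref):
    return np.linalg.norm(pred - ref, axis=1).mean()
```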

MICCAI 2018 and On-site part of the Challenge at OMIA Workshop 

The REFUGE challenge will be hosted at the MICCAI 2018 conference in conjunction with the OMIA workshop. There will also be an on-site part of the challenge, during which the second part of the test set will be released. The participants will have 1 hour on the day of the challenge to provide results on the "on-site test set". Papers submitted to REFUGE will be automatically considered for the challenge part of the OMIA workshop unless otherwise stated by the participants. Each team can have up to 2 names in the author list of the challenge review paper (edited by the organizers based on the papers submitted by the participants).
 
A paper must be submitted together with the off-site validation results by Jul 28 for teams intending to attend the on-site challenge.
The camera-ready paper (max. 8 pages, PDF in Springer LNCS format) is to be submitted by 29 Aug 2018 via email.
In the manuscript, please describe the methods used, the novelty of the methodology and how it relates to the state of the art, and provide a qualitative and quantitative analysis of the results on the training/validation data (and on other reasonable settings).