DRIVE Dataset


Visualization of the DRIVE training subset on the Deep Lake UI

What is DRIVE Dataset?

The DRIVE (Digital Retinal Images for Vessel Extraction) dataset was created to enable comparative studies on retinal vessel segmentation. Its photographs were obtained from a diabetic retinopathy screening program in the Netherlands, whose screening population consisted of 400 diabetic subjects between 25 and 90 years of age. From these, 40 photographs were randomly selected, only 7 of which show signs of mild early diabetic retinopathy. Vessel segmentations derived from such images support automatic retinal map generation and branch-point extraction, which in turn are used to register temporal or multimodal images and to synthesize retinal image mosaics.

Download DRIVE Dataset in Python

Instead of downloading the DRIVE dataset manually, you can load it in Python with a single line of code using the open-source Deep Lake package.

Load DRIVE Dataset Training Subset in Python

import deeplake
ds = deeplake.load("hub://activeloop/drive-train")

Load DRIVE Dataset Testing Subset in Python

import deeplake
ds = deeplake.load("hub://activeloop/drive-test")

DRIVE Dataset Structure

Data Fields
For training set:
  • rgb_images: tensor containing the image.
  • manual_masks/mask: tensor containing the manual vessel segmentation mask.
  • masks/mask: tensor containing the field-of-view (FOV) mask.
For the testing set:
  • rgb_images: tensor containing the image.
  • masks: tensor containing the mask.
DRIVE Data Splits
  • The DRIVE dataset training set is composed of 20 images. Only a single manual segmentation of the vasculature is provided for each training image.
  • The DRIVE dataset test set is also composed of 20 images. Two manual segmentations are provided for each test image: one serves as the gold standard, and the other can be used to compare computer-generated segmentations against those of an independent human observer.
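As a sketch of how a computer-generated segmentation might be scored against the gold-standard manual mask, the snippet below computes a Dice overlap coefficient between two binary masks with NumPy. The masks here are synthetic stand-ins at DRIVE's image resolution, not data loaded from the dataset:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient between two binary vessel masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Synthetic stand-ins for a DRIVE-sized mask pair (584 x 768 pixels).
rng = np.random.default_rng(0)
gold = rng.random((584, 768)) > 0.9      # ~10% "vessel" pixels
observer = gold.copy()
observer[:50] = False                    # simulate disagreement in the top rows

print(round(dice_score(observer, gold), 3))
```

Identical masks score 1.0; the score drops toward 0 as the segmentations diverge.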

How to use DRIVE Dataset with PyTorch and TensorFlow in Python

Train a model on DRIVE dataset with PyTorch in Python

Let’s use Deep Lake’s built-in one-line PyTorch dataloader to connect the data to the compute:

dataloader = ds.pytorch(num_workers=0, batch_size=4, shuffle=False)
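A single training step on a batch from such a dataloader might look like the sketch below. To keep it self-contained, it uses synthetic tensors shaped like a DRIVE batch (4 channels-first RGB images of 584 × 768 pixels with matching binary masks) instead of the real dataloader, and the one-layer model is purely illustrative:

```python
import torch
import torch.nn as nn

# A deliberately tiny segmentation head: 3 input channels -> 1 mask channel.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic stand-ins for one batch from ds.pytorch(...): float images and
# binary vessel masks at DRIVE resolution.
images = torch.rand(4, 3, 584, 768)
masks = torch.randint(0, 2, (4, 1, 584, 768)).float()

optimizer.zero_grad()
logits = model(images)            # shape: (4, 1, 584, 768)
loss = loss_fn(logits, masks)
loss.backward()
optimizer.step()
```

In practice you would replace the synthetic tensors with the batches yielded by the dataloader, after converting images and masks to the layout your model expects.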
Train a model on DRIVE dataset with TensorFlow in Python
dataloader = ds.tensorflow()

DRIVE Dataset Creation

Data Collection and Normalization Information
Images were taken using a Canon CR5 non-mydriatic 3CCD camera with a 45-degree field of view (FOV). Each image was captured at 768 × 584 pixels with 8 bits per color plane. The FOV of each image is circular, with a diameter of approximately 540 pixels, and the images in this database are cropped around the FOV. For each image, a mask image delineating the FOV is provided.
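The circular FOV described above can be approximated geometrically. The sketch below builds a binary mask at DRIVE's image resolution with a 540-pixel-diameter disc centred in the frame using NumPy; this is an approximation for illustration, not a substitute for the official per-image mask files:

```python
import numpy as np

HEIGHT, WIDTH = 584, 768          # DRIVE image dimensions (rows x columns)
FOV_DIAMETER = 540                # approximate FOV diameter in pixels

# Open grids of row/column coordinates, broadcast against each other.
ys, xs = np.ogrid[:HEIGHT, :WIDTH]
cy, cx = (HEIGHT - 1) / 2, (WIDTH - 1) / 2

# True inside the centred disc of the given diameter, False elsewhere.
fov_mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= (FOV_DIAMETER / 2) ** 2

print(fov_mask.shape)             # (584, 768)
```

The resulting array can be used to restrict evaluation metrics to pixels inside the FOV.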

Additional Information about DRIVE Dataset

DRIVE Dataset Description

  • Homepage: https://drive.grand-challenge.org/
  • Repository: N/A
  • Paper: Staal, J., Abràmoff, M. D., Niemeijer, M., Viergever, M. A., & Van Ginneken, B. (2004). Ridge-based vessel segmentation in color images of the retina. IEEE transactions on medical imaging, 23(4), 501-509.
  • Point of Contact: N/A
DRIVE Dataset Curators
Joes Staal, M.D. Abramoff, M. Niemeijer, M.A. Viergever, B. van Ginneken
DRIVE Dataset Licensing Information
Deep Lake users may have access to a variety of publicly available datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have a license to use the datasets. It is your responsibility to determine whether you have permission to use the datasets under their license. If you’re a dataset owner and do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thank you for your contribution to the ML community!
DRIVE Dataset Citation Information
				
					@article{staal2004ridge,
  title={Ridge-based vessel segmentation in color images of the retina},
  author={Staal, Joes and Abr{\`a}moff, Michael D and Niemeijer, Meindert and Viergever, Max A and Van Ginneken, Bram},
  journal={IEEE transactions on medical imaging},
  volume={23},
  number={4},
  pages={501--509},
  year={2004},
  publisher={IEEE}
}
				
			