SWAG Dataset


What is the SWAG Dataset?

The SWAG (Situations With Adversarial Generations) dataset comprises 113,000 multiple-choice questions covering a wide range of grounded situations. Each question is derived from a pair of consecutive video captions drawn from the ActivityNet Captions dataset and the Large Scale Movie Description Challenge (LSMDC), and the incorrect endings are produced using adversarial filtering. The SWAG dataset supports research on commonsense natural language inference (NLI).

Download SWAG Dataset in Python

Instead of downloading the SWAG dataset manually, you can load it in Python with a single line of code using the open-source Deep Lake package.

Load SWAG Dataset Training Subset in Python

				
import deeplake

ds = deeplake.load('hub://activeloop/swag-train')

Load SWAG Dataset Testing Subset in Python

				
import deeplake

ds = deeplake.load('hub://activeloop/swag-test')

Load SWAG Dataset Validation Subset in Python

				
import deeplake

ds = deeplake.load('hub://activeloop/swag-val')

SWAG Dataset Structure

SWAG Data Fields

For the training and validation sets:

  • video_id: tensor containing the video id.
  • fold_ind: tensor containing the fold id.
  • start_phrase: tensor containing the start phrase of the context.
  • gold_ending: tensor containing the correct (best) ending.
  • distractor_0: tensor containing the first distractor; of the four distractors, it is considered the highest quality.
  • distractor_1: tensor containing the second distractor.
  • distractor_2: tensor containing the third distractor.
  • distractor_3: tensor containing the fourth distractor; it is considered the lowest quality.
  • gold_source: tensor containing the labels gold and gen; gen marks a generated best answer, while gold marks the real answer, which is considered second best.
  • gold_type: label taking the values ‘pos’ and ‘unl’.
  • distractor_0_type: label taking the values ‘pos’ and ‘unl’.
  • distractor_1_type: label taking the values ‘pos’ and ‘unl’.
  • distractor_2_type: label taking the values ‘pos’ and ‘unl’.
  • distractor_3_type: label taking the values ‘n/a’, ‘pos’ and ‘unl’.
  • sentence_1: tensor containing the first sentence.
  • sentence_2: tensor containing the second sentence.
For the test set:

  • video_id: tensor containing the video id.
  • fold_ind: tensor containing the fold id.
  • start_phrase: tensor containing the start phrase of the context.
  • gold_source: tensor containing the labels gold and gen; gen marks a generated best answer, while gold marks the real answer, which is considered second best.
  • ending0: tensor containing the first ending.
  • ending1: tensor containing the second ending.
  • ending2: tensor containing the third ending.
  • ending3: tensor containing the fourth ending.
  • sentence_1: tensor containing the first sentence.
  • sentence_2: tensor containing the second sentence.
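To make the field layout concrete, here is a minimal sketch of how the test-set fields combine into one multiple-choice question. The sample values below are invented placeholders for illustration, not actual SWAG records:

```python
# Assemble one SWAG-style multiple-choice question from its fields.
# Field names mirror the test-set layout above; the values are made up.

def build_question(record):
    """Return (prompt, choices) for one SWAG test-set record."""
    prompt = record["start_phrase"]                    # the shared context
    choices = [record[f"ending{i}"] for i in range(4)]  # the four candidate endings
    return prompt, choices

sample = {
    "video_id": "anetv_abc123",        # placeholder id, not a real record
    "fold_ind": "12345",
    "start_phrase": "A man sits down at a piano. He",
    "ending0": "begins to play a melody.",
    "ending1": "jumps into the pool.",
    "ending2": "rides away on a horse.",
    "ending3": "closes the refrigerator.",
}

prompt, choices = build_question(sample)
print(prompt)
for i, choice in enumerate(choices):
    print(f"  ({i}) {choice}")
```

A model trained on SWAG sees the prompt plus the four endings and must select the most plausible continuation.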
SWAG Data Splits
  • The SWAG training set contains 73,000 multiple-choice questions about grounded situations.
  • The SWAG validation set contains 20,000 multiple-choice questions about grounded situations.
  • The SWAG test set contains 20,000 multiple-choice questions about grounded situations, held out for blind evaluation.

How to use the SWAG Dataset with PyTorch and TensorFlow in Python

Train a model on the SWAG dataset with PyTorch in Python

Let’s use Deep Lake’s built-in PyTorch one-line data loader to connect the data to the compute:

				
dataloader = ds.pytorch(num_workers=0, batch_size=4, shuffle=False)
Train a model on the SWAG dataset with TensorFlow in Python
				
dataloader = ds.tensorflow()

SWAG Dataset Creation

Source Data
Data Collection and Normalization Information

The dataset was created by taking pairs of consecutive video captions from ActivityNet Captions and the Large Scale Movie Description Challenge (LSMDC). The two sources differ somewhat in nature and together provide broader coverage. For every pair of captions, a constituency parser splits the second sentence into noun and verb phrases. Each question has one manually validated gold ending and three distractors.
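The adversarial-filtering idea behind the distractors can be sketched roughly as follows. This is a deliberately simplified illustration, not the actual SWAG pipeline: the `score` function here is a stub standing in for the trained discriminator the authors use, and the candidate pool is a toy list:

```python
# Simplified sketch of adversarial filtering (AF): repeatedly replace the
# distractor a discriminator finds *easiest* to reject with a fresh
# candidate, so the surviving distractors are hard for the model.
import random

def adversarial_filter(candidates, score, n_distractors=3, rounds=5):
    """Keep the n_distractors candidates the scorer finds hardest.

    score(text) returns how "machine-like" an ending looks
    (higher = easier to reject); here a stub replaces the trained
    discriminator used in the real pipeline.
    """
    pool = list(candidates)
    kept = pool[:n_distractors]
    rest = pool[n_distractors:]
    for _ in range(rounds):
        kept.sort(key=score)  # hardest-to-reject (lowest score) first
        if rest:
            # Swap out the most easily rejected distractor.
            kept[-1] = rest.pop(random.randrange(len(rest)))
    return kept

# Stub scorer: pretend longer endings are easier to reject.
endings = ["a", "bb", "ccc", "dddd", "eeeee", "ffffff"]
distractors = adversarial_filter(endings, score=len)
print(distractors)
```

With this stub scorer the two shortest endings always survive the filtering, mirroring how real AF converges on distractors the discriminator cannot easily separate from the gold ending.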

Additional Information about SWAG Dataset

SWAG Dataset Description

  • Homepage: https://rowanzellers.com/swag/
  • Repository: https://github.com/rowanz/swagaf/tree/master/data
  • Paper: Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi: Swag: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
  • Point of Contact: https://rowanzellers.com/#contact
SWAG Dataset Curators

Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi

SWAG Dataset Licensing Information

MIT License

SWAG Dataset Citation Information
				
@inproceedings{zellers2018swagaf,
    title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},
    author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},
    booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    year={2018}
}

SWAG Dataset FAQs

What is the SWAG dataset for Python?

The SWAG dataset (Situations With Adversarial Generations) consists of 113,000 multiple-choice questions about grounded situations. It is a large-scale dataset for grounded commonsense inference, unifying natural language inference and physically grounded reasoning.

What is the SWAG dataset used for?

The SWAG dataset is used to train and evaluate NLP models on commonsense inference: given a described situation, the model must pick the most plausible ending from several choices.

How to download the SWAG dataset in Python?

Load the SWAG dataset with one line of code using Activeloop Deep Lake, an open-source Python package. See the detailed instructions above for loading the training, testing, and validation subsets of the SWAG dataset in Python.

How can I use SWAG dataset in PyTorch or TensorFlow?

You can train a model on the SWAG dataset with either PyTorch or TensorFlow in Python, streaming the data during training with one line of code using the open-source Activeloop Deep Lake package.

© 2022 All Rights Reserved by Snark AI, inc dba Activeloop