TIMIT Dataset

Visualization of the TIMIT dataset in the Deep Lake UI

What is the TIMIT Dataset?

The TIMIT Acoustic-Phonetic Continuous Speech Corpus is a standard dataset used for the evaluation of automatic speech recognition systems. It contains recordings of 630 speakers covering eight dialects of American English, with each speaker reading ten phonetically rich sentences. The corpus includes time-aligned orthographic, phonetic, and word transcriptions, along with a 16-bit, 16 kHz speech waveform file for each utterance. All transcriptions have been hand-verified.

Download TIMIT Dataset in Python

Instead of downloading the TIMIT dataset in Python, you can effortlessly load it with just one line of code using our open-source Deep Lake package.

Load TIMIT Dataset Training Subset in Python

				
import deeplake
ds = deeplake.load("hub://activeloop/timit-train")
				
			

Load TIMIT Dataset Testing Subset in Python

				
import deeplake
ds = deeplake.load("hub://activeloop/timit-test")
				
			

TIMIT Dataset Structure

TIMIT Data Fields
  • audios: tensor representing the audio in WAV format.
  • texts: tensor representing the text spoken in the audio.
  • dialects: tensor representing the dialect of the speaker.
  • is_sentences: tensor to identify if the audio is a sentence.
  • is_word_files: tensor to identify if the audio is a word.
  • is_phoenetics: tensor to identify if the audio is phonetic.
  • speaker_ids: tensor representing the speaker id.
TIMIT Data Splits
  • The TIMIT dataset training set is composed of 4620 audio files.
  • The TIMIT dataset test set is composed of 1690 audio files.
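
After loading a split, you can verify this structure directly. The snippet below is a minimal sketch that assumes the training split loaded earlier and the tensor names listed above; the exact return type of .data() may differ between Deep Lake versions.

import deeplake

ds = deeplake.load("hub://activeloop/timit-train")

print(len(ds))               # number of samples in the training split (4620)
print(list(ds.tensors))      # tensor names, e.g. audios, texts, dialects, speaker_ids

waveform = ds.audios[0].numpy()   # decoded audio samples of the first utterance
text = ds.texts[0].data()         # transcription of the first utterance
print(waveform.shape, text)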

How to use TIMIT Dataset with PyTorch and TensorFlow in Python

Train a model on the TIMIT dataset with PyTorch in Python

Let’s use Deep Lake’s built-in one-line PyTorch dataloader to connect the data to the compute:

				
					dataloader = ds.pytorch(num_workers=0, batch_size=4, shuffle=False)
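
The resulting dataloader yields dictionaries keyed by tensor name. Below is a minimal, illustrative loop (not an official Activeloop example); the tensor names are assumed from the field list above, and because TIMIT utterances vary in length, it iterates single samples, since larger batches would likely require a padding transform or a custom collate_fn.

for i, batch in enumerate(ds.pytorch(num_workers=0, batch_size=1, shuffle=False)):
    waveform = batch["audios"]    # audio samples for one utterance
    dialect = batch["dialects"]   # dialect-region label of the speaker
    print(waveform.shape, dialect)
    if i == 2:                    # peek at the first few samples only
        break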
				
			
Train a model on the TIMIT dataset with TensorFlow in Python
				
					dataloader = ds.tensorflow()
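
Here, ds.tensorflow() returns a tf.data.Dataset. The loop below is a minimal sketch of consuming it, assuming each element is a dictionary keyed by the tensor names listed above (the exact element structure may vary across Deep Lake versions).

for sample in ds.tensorflow().take(3):
    waveform = sample["audios"]   # raw audio samples for one utterance
    text = sample["texts"]        # transcription of the utterance
    print(waveform.shape, text)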
				
			

Additional Information about TIMIT Dataset

TIMIT Dataset Description

  1. Homepage: https://catalog.ldc.upenn.edu/LDC93s1
  2. Paper: Garofolo, John S., et al. TIMIT Acoustic-Phonetic Continuous Speech Corpus LDC93S1. Web Download. Philadelphia: Linguistic Data Consortium, 1993.
TIMIT Dataset Curators

John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. Dahlgren, Victor Zue

TIMIT Dataset Licensing Information

Deep Lake users may have access to a variety of publicly available datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have a license to use the datasets. It is your responsibility to determine whether you have permission to use the datasets under their license.

If you’re a dataset owner and do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thank you for your contribution to the ML community!

TIMIT Dataset Citation Information
				
@inproceedings{garofolo1993timit,
  title     = {TIMIT Acoustic-Phonetic Continuous Speech Corpus},
  author    = {Garofolo, John S. and Lamel, Lori F. and Fisher, William M. and Fiscus, Jonathan G. and Pallett, David S. and Dahlgren, Nancy L. and Zue, Victor},
  booktitle = {Linguistic Data Consortium},
  year      = {1993}
}
				
			

TIMIT Dataset FAQs

What is the TIMIT dataset for Python?

The TIMIT dataset is a speech corpus that is widely used for the evaluation of automatic speech recognition systems. It was developed by Texas Instruments and MIT with financial support from DARPA (the Defense Advanced Research Projects Agency).

How to download the TIMIT dataset in Python?

You can load the TIMIT dataset fast with one line of code using the open-source package Activeloop Deep Lake in Python. See detailed instructions on how to load the TIMIT dataset training subset in Python.

How can I use the TIMIT dataset in PyTorch or TensorFlow?

You can stream the TIMIT dataset while training a model in PyTorch or TensorFlow with one line of code using the open-source package Activeloop Deep Lake in Python. See detailed instructions on how to train a model on the TIMIT dataset with PyTorch in Python or train a model on TIMIT with TensorFlow in Python.
