Created
November 21, 2019 18:00
# Let's start by loading in the image data
import torch
from torchvision import datasets, transforms

# Defining the transforms that we want to apply to the data:
# - Resizing the image to (224, 224)
# - Randomly flipping the image horizontally (with the default probability of 0.5)
# - Converting the image to a Tensor (scaling the pixel values to between 0 and 1)
# - Normalizing the 3-channel data using the ImageNet stats
data_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
print('data_transforms: ', data_transforms)

data_dir = 'mobile-gallery-image-classification-data/mobile_gallery_image_classification/train'
dataset = datasets.ImageFolder(data_dir, transform=data_transforms)
print('dataset: ', dataset)

# We need to split the dataset into train and val datasets
train_percentage = 0.8
train_size = int(len(dataset) * train_percentage)
val_size = len(dataset) - train_size
train_dataset, val_dataset = torch.utils.data.random_split(dataset, [train_size, val_size])

print('\nnumber of examples in train_dataset: ', len(train_dataset))
print('number of examples in val_dataset  : ', len(val_dataset))
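The split arithmetic above can be checked on its own: with 1266 images and an 80% train fraction, `int()` truncation gives 1012 training examples, and the remaining 254 go to validation. A minimal pure-Python sketch of that calculation (the helper name `split_sizes` is ours, not part of the gist):

```python
# Sketch of the 80/20 split arithmetic that feeds random_split above.
# split_sizes is a hypothetical helper, not part of the original code.
def split_sizes(n, train_percentage=0.8):
    train_size = int(n * train_percentage)  # truncates, matching int() in the gist
    val_size = n - train_size               # remainder goes to validation
    return train_size, val_size

print(split_sizes(1266))  # → (1012, 254)
```

Computing `val_size` as the remainder (rather than `int(n * 0.2)`) guarantees the two splits always sum to the full dataset size, even when the fraction does not divide evenly.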
data_transforms:  Compose(
    Resize(size=(224, 224), interpolation=PIL.Image.BILINEAR)
    RandomHorizontalFlip(p=0.5)
    ToTensor()
    Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
dataset:  Dataset ImageFolder
    Number of datapoints: 1266
    Root Location: mobile-gallery-image-classification-data/mobile_gallery_image_classification/train
    Transforms (if any): Compose(
        Resize(size=(224, 224), interpolation=PIL.Image.BILINEAR)
        RandomHorizontalFlip(p=0.5)
        ToTensor()
        Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    )
    Target Transforms (if any): None

number of examples in train_dataset:  1012
number of examples in val_dataset  :  254