```python
import numpy as np

true_b = 1
true_w = 2
N = 100

# Data Generation
np.random.seed(42)
x = np.random.rand(N, 1)
epsilon = (.1 * np.random.randn(N, 1))
y = true_b + true_w * x + epsilon
```
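The snippets that follow refer to train_idx, val_idx, x_train, and x_val. A minimal sketch of how such a shuffled 80/20 index split could be built (the exact ratio and variable names here are assumptions, not code taken verbatim from the original source):

```python
# Shuffle the indices and take an (assumed) 80/20 split
idx = np.arange(N)
np.random.shuffle(idx)
train_idx = idx[:int(N * .8)]
val_idx = idx[int(N * .8):]

# Use the index arrays to slice the generated data points
x_train, y_train = x[train_idx], y[train_idx]
x_val, y_val = x[val_idx], y[val_idx]
```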
```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler(with_mean=True, with_std=True)

# We use the TRAIN set ONLY to fit the scaler
scaler.fit(x_train)

# Now we can use the already fit scaler to TRANSFORM
# both TRAIN and VALIDATION sets
scaled_x_train = scaler.transform(x_train)
scaled_x_val = scaler.transform(x_val)
```
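A quick, purely illustrative check of what fitting on the training set only implies: the scaled training set ends up with (near) zero mean and unit standard deviation, while the validation set only approximately matches those statistics, since the scaler never saw it:

```python
# Training set: mean ~0, std ~1 by construction
print(scaled_x_train.mean(), scaled_x_train.std())
# Validation set: only approximately standardized
print(scaled_x_val.mean(), scaled_x_val.std())
```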
```python
# Generates train and validation sets
# It uses the same train_idx and val_idx as before,
# but it applies to bad_x
bad_x_train, y_train = bad_x[train_idx], y[train_idx]
bad_x_val, y_val = bad_x[val_idx], y[val_idx]
```
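For illustration only, a short check of the scale difference between the original and the modified feature (assuming x_train from the earlier split is still available):

```python
# The original feature lives roughly in [0, 1)...
print(x_train.min(), x_train.max())
# ...while the "bad" feature spans roughly [0, 10)
print(bad_x_train.min(), bad_x_train.max())
```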
```python
true_b = 1
true_w = 2
N = 100

# Data Generation
np.random.seed(42)

# We divide w by 10
bad_w = true_w / 10
# And multiply x by 10
bad_x = np.random.rand(N, 1) * 10

# With the same seed, bad_w * bad_x equals true_w * x,
# so y is effectively unchanged
y = true_b + bad_w * bad_x + (.1 * np.random.randn(N, 1))
```
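Because the seed is reset to 42, np.random.rand draws exactly the same values as in the first generation snippet, so bad_x is just the original x scaled by 10. A one-line check, assuming the original x is still in memory:

```python
# bad_x is the original x scaled by a factor of 10
print(np.allclose(bad_x, x * 10))  # True
```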
```python
losses = []
val_losses = []
train_step = make_train_step(model, loss_fn, optimizer)

for epoch in range(n_epochs):
    for x_batch, y_batch in train_loader:
        x_batch = x_batch.to(device)
        y_batch = y_batch.to(device)
        loss = train_step(x_batch, y_batch)
        losses.append(loss)

    # Validation pass: no gradient tracking, model in EVAL mode
    with torch.no_grad():
        for x_val, y_val in val_loader:
            x_val = x_val.to(device)
            y_val = y_val.to(device)
            model.eval()
            val_losses.append(loss_fn(y_val, model(x_val)).item())
```
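Once training finishes, the learned parameters can be compared against the values used to generate the data; this printout is a sketch, not part of the original snippets:

```python
# The learned weight and bias should land close to true_w = 2 and true_b = 1
print(model.state_dict())
```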
```python
losses = []
train_step = make_train_step(model, loss_fn, optimizer)

for epoch in range(n_epochs):
    for x_batch, y_batch in train_loader:
        # the dataset "lives" in the CPU, so do our mini-batches
        # therefore, we need to send those mini-batches to the
        # device where the model "lives"
        x_batch = x_batch.to(device)
        y_batch = y_batch.to(device)
        loss = train_step(x_batch, y_batch)
        losses.append(loss)
```
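Both loops assume that device, model, loss_fn, optimizer, and n_epochs already exist. A minimal setup consistent with the single-feature regression data could look like the sketch below; the learning rate and number of epochs are assumed values, not taken from the original text:

```python
import torch
import torch.nn as nn
import torch.optim as optim

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# A single-input, single-output linear regression model
model = nn.Sequential(nn.Linear(1, 1)).to(device)

# Mean squared error loss and plain SGD (assumed learning rate)
loss_fn = nn.MSELoss(reduction='mean')
optimizer = optim.SGD(model.parameters(), lr=1e-1)

n_epochs = 1000
```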
```python
def make_train_step(model, loss_fn, optimizer):
    # Builds function that performs a step in the train loop
    def train_step(x, y):
        # Sets model to TRAIN mode
        model.train()
        # Makes predictions
        yhat = model(x)
        # Computes loss
        loss = loss_fn(y, yhat)
        # Computes gradients
        loss.backward()
        # Updates parameters and zeroes gradients
        optimizer.step()
        optimizer.zero_grad()
        # Returns the loss as a plain Python number
        return loss.item()

    # Returns the function that will be called inside the train loop
    return train_step
```
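A usage sketch on a single, hypothetical mini-batch (not from the original text), highlighting that train_step returns a plain Python float, so storing it in a list does not retain the computation graph:

```python
train_step = make_train_step(model, loss_fn, optimizer)

# One gradient step on a single, made-up mini-batch of 16 points
x_batch = torch.rand(16, 1).to(device)
y_batch = 1 + 2 * x_batch + .1 * torch.randn(16, 1).to(device)
loss = train_step(x_batch, y_batch)

print(type(loss))  # <class 'float'>
```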
```python
import torch
from torch.utils.data import TensorDataset
from torch.utils.data.dataset import random_split

torch.manual_seed(42)

x_tensor = torch.from_numpy(x).float()
y_tensor = torch.from_numpy(y).float()

# Builds dataset with ALL data
dataset = TensorDataset(x_tensor, y_tensor)

# Splits randomly into train and validation datasets
train_dataset, val_dataset = random_split(dataset, [80, 20])
```
```python
from torch.utils.data import TensorDataset, DataLoader
from torch.utils.data.dataset import random_split

x_tensor = torch.from_numpy(x).float()
y_tensor = torch.from_numpy(y).float()

dataset = TensorDataset(x_tensor, y_tensor)
train_dataset, val_dataset = random_split(dataset, [80, 20])

train_loader = DataLoader(dataset=train_dataset, batch_size=16)
# Validation loader: a single batch covering the 20 validation points
val_loader = DataLoader(dataset=val_dataset, batch_size=20)
```
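To see what the loader yields, one can pull a single mini-batch; this check is illustrative only:

```python
# Each iteration produces a tuple of feature and label tensors
x_sample, y_sample = next(iter(train_loader))
print(x_sample.shape, y_sample.shape)  # torch.Size([16, 1]) torch.Size([16, 1])
```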
```python
from torch.utils.data.sampler import SubsetRandomSampler

train_sampler = SubsetRandomSampler(train_idx)
val_sampler = SubsetRandomSampler(val_idx)

x_tensor = torch.from_numpy(x).float()
y_tensor = torch.from_numpy(y).float()

dataset = TensorDataset(x_tensor, y_tensor)
```
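The samplers do not produce data by themselves; they are meant to be passed to a DataLoader through its sampler argument. A sketch of that wiring, with assumed batch sizes:

```python
from torch.utils.data import DataLoader

# Each loader draws indices from its own sampler over the SAME dataset
train_loader = DataLoader(dataset=dataset, batch_size=16, sampler=train_sampler)
val_loader = DataLoader(dataset=dataset, batch_size=20, sampler=val_sampler)
```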