```python
# pytorch embeddings
import torch
from torch.optim import Adam
import torch.nn as nn
from torch.autograd import Variable  # Variable is a no-op since PyTorch 0.4; plain tensors track gradients
import torch.nn.functional as F
import pandas as pd
import numpy as np
```
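The gist stops at the imports; as a quick reminder of what the embedding layer itself looks like, here is a minimal, self-contained sketch (the vocabulary size, embedding dimension, and example IDs are arbitrary illustrations, not taken from the gist):

```python
import torch
import torch.nn as nn

# an embedding table mapping 1000 category IDs to 16-dimensional vectors
embedding = nn.Embedding(num_embeddings=1000, embedding_dim=16)

# a batch of integer IDs, e.g. label-encoded categorical features
ids = torch.tensor([[1, 5, 42], [7, 0, 999]])
vectors = embedding(ids)
print(vectors.shape)  # torch.Size([2, 3, 16])
```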
```python
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from torch.autograd import Variable
import torch.nn.functional as F
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import pandas as pd
```
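Given those imports, a compact end-to-end version of the usual iris workflow looks roughly like this (the hidden-layer size, learning rate, and epoch count are illustrative choices, not taken from the gist):

```python
import torch
import torch.nn as nn
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# load the data and hold out a test set
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

X_train_t = torch.tensor(X_train, dtype=torch.float32)
y_train_t = torch.tensor(y_train, dtype=torch.long)
X_test_t = torch.tensor(X_test, dtype=torch.float32)

# small fully connected classifier: 4 features -> 16 hidden units -> 3 classes
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X_train_t), y_train_t)
    loss.backward()
    optimizer.step()

preds = model(X_test_t).argmax(dim=1).numpy()
print(accuracy_score(y_test, preds))
```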
```r
library(broom)
library(tidyverse)
library(gridExtra)

# imagine we run a marketing campaign and 1265 (n) of 6000 (m) people sign up
# to the service
n <- 1265
m <- 6000

# You want to estimate, given this data, what is the probability (p) that
# a person signs up
```
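For reference, one standard way to write the estimation problem this sets up: with $n$ sign-ups out of $m$ people, the binomial likelihood and its maximum-likelihood point estimate are

$$
L(p) = \binom{m}{n}\, p^{n} (1 - p)^{m - n},
\qquad
\hat{p} = \frac{n}{m} = \frac{1265}{6000} \approx 0.211 .
$$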
```r
df %>%
  ggplot(aes(probs, logloss, colour = label)) +
  geom_line(size = .8) +
  labs(
    title = "Log Loss over estimated probability",
    x = "Estimated probability",
    y = "Log Loss",
    colour = "Label"
  ) +
  annotate(
    "text", x = 0.5, y = 4,  # placement and the tail of the label are placeholders; the excerpt cuts off mid-string
    label = "Log Loss penalises overly confident,
    incorrect predictions"
  )
```
```r
library(tidyverse)

# create range of probabilities
probs <- seq(0, 1, 0.001)

# apply log loss over the full range of probabilities
t1 <- logloss(1, probs)
t0 <- logloss(0, probs)

# tidy the data: one way to build the long (probs, logloss, label) format used in the plot above
df <- tibble(probs = probs, `1` = t1, `0` = t0) %>%
  pivot_longer(-probs, names_to = "label", values_to = "logloss")
```
```r
# log loss for a single true label over a vector of predicted probabilities p
logloss <- function(label, p) {
  if (label == 1) {
    x <- -log(p)
  } else {
    x <- -log(1 - p)
  }
  x
}
```
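This is the per-observation log loss, split into its two branches:

$$
\mathrm{LogLoss}(y, p) = -\bigl[\, y \log p + (1 - y) \log(1 - p) \,\bigr]
= \begin{cases} -\log p & y = 1 \\ -\log(1 - p) & y = 0 \end{cases}
$$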
```python
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F  # the functional API (F.relu, F.mse_loss, ...) lives under torch.nn, not torch.functional
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
import numpy as np
from sklearn.datasets import load_boston  # load_boston was removed in scikit-learn 1.2
from sklearn.preprocessing import StandardScaler
```
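Those imports point to a train/validation split driven by `SubsetRandomSampler`. A minimal sketch of how that typically fits together; the `TabularDataset` class, the synthetic stand-in data, the batch size, and the 80/20 split are my own illustration, not from the gist:

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.sampler import SubsetRandomSampler

class TabularDataset(Dataset):
    """Wraps feature/target arrays as float tensors."""
    def __init__(self, X, y):
        self.X = torch.tensor(X, dtype=torch.float32)
        self.y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]

# synthetic stand-in shaped like the Boston data (506 rows, 13 features)
X = np.random.randn(506, 13)
y = np.random.randn(506)
dataset = TabularDataset(X, y)

# 80/20 train/validation split via index samplers over the same dataset
indices = np.random.permutation(len(dataset))
split = int(0.8 * len(dataset))
train_loader = DataLoader(dataset, batch_size=32,
                          sampler=SubsetRandomSampler(indices[:split]))
val_loader = DataLoader(dataset, batch_size=32,
                        sampler=SubsetRandomSampler(indices[split:]))
```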
```python
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import numpy as np
from sklearn.datasets import load_boston
from sklearn.preprocessing import StandardScaler
```
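A minimal sketch of the kind of pipeline these imports suggest: scale the features with `StandardScaler`, then fit a small regression network. The network shape, learning rate, epoch count, and synthetic data are illustrative assumptions:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for the Boston features and target
X = np.random.randn(506, 13)
y = np.random.randn(506, 1)

# standardise features before feeding them to the network
X = StandardScaler().fit_transform(X)
X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32)

# small feed-forward regressor: 13 features -> 32 hidden units -> 1 output
model = nn.Sequential(nn.Linear(13, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    optimizer.step()
```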
```r
library(modelr)
library(dplyr)
library(ggplot2)
library(animation)
library(gganimate)

options(scipen = 999)

n <- 200  # number of observations
bias <- 4
```
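This reads like the setup for simulated data from a simple linear model with the intercept (bias) fixed at 4. The excerpt cuts off before the slope and noise are defined, so only the generic form can be stated:

$$
y_i = 4 + \beta\, x_i + \varepsilon_i, \qquad i = 1, \dots, 200 .
$$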
```python
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation  # keras.layers.core is the legacy path for these layers
from keras.utils import np_utils  # newer Keras exposes keras.utils.to_categorical directly
import numpy as np

# layer sizes for a small fully connected MNIST classifier
l1_nodes = 200
l2_nodes = 100
final_layer_nodes = 10
```
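With those sizes, the rest of the model is conventionally wired up roughly as follows. The activations, dropout rate, optimizer, and training settings are my assumptions, not taken from the gist:

```python
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

# load and flatten MNIST: 28x28 images -> 784-dimensional float vectors in [0, 1]
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# 200 -> 100 -> 10, matching l1_nodes, l2_nodes, final_layer_nodes above
model = Sequential()
model.add(Dense(200, activation="relu", input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(100, activation="relu"))
model.add(Dense(10, activation="softmax"))

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_data=(x_test, y_test))
```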