from keras.applications.xception import Xception, preprocess_input
from keras.models import Sequential, Model
from keras.layers import Input, Lambda
from keras.backend import tf as ktf

# Initialize an Xception model
Xception_model = Xception(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)

# Any required pre-processing should be baked into the model
input_tensor = Input(shape=(None, None, 3))
from keras.layers import BatchNormalization, Activation, Conv2D, add

def preact_conv(inputs, k=3, filters=64):
    outputs = BatchNormalization()(inputs)
    outputs = Activation('relu')(outputs)
    outputs = Conv2D(filters, kernel_size=(k, k), padding='same',
                     kernel_initializer="glorot_normal")(outputs)
    return outputs

def ResidualBlock(inputs, kernel_size=3, filters=64):
    outputs = preact_conv(inputs, k=kernel_size, filters=filters)
    outputs = preact_conv(outputs, k=kernel_size, filters=filters)
    # Skip connection: add the block's input back onto its output
    outputs = add([inputs, outputs])
    return outputs
from sklearn.preprocessing import StandardScaler

# Standardize the data to zero mean and unit variance
X = StandardScaler().fit_transform(X)
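As a sanity check, the same standardization can be reproduced with plain NumPy. This is a minimal sketch on a hypothetical toy matrix (not data from the article); note that `StandardScaler` uses the population standard deviation, i.e. `ddof=0`.

```python
import numpy as np

# Hypothetical toy data: 5 samples, 3 features
X = np.array([[1.0,  2.0,  3.0],
              [2.0,  4.0,  6.0],
              [3.0,  6.0,  9.0],
              [4.0,  8.0, 12.0],
              [5.0, 10.0, 15.0]])

# StandardScaler is equivalent to subtracting each feature's mean
# and dividing by its (population) standard deviation
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Each standardized feature now has mean 0 and unit variance
print(np.allclose(X_std.mean(axis=0), 0))  # True
print(np.allclose(X_std.std(axis=0), 1))   # True
```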
import numpy as np

# Compute the mean of the data
mean_vec = np.mean(X, axis=0)

# Compute the covariance matrix
cov_mat = (X - mean_vec).T.dot((X - mean_vec)) / (X.shape[0]-1)

# OR we can do this with one line of numpy:
cov_mat = np.cov(X.T)
# Compute the eigen values and vectors using numpy
eig_vals, eig_vecs = np.linalg.eig(cov_mat)

# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]

# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort(key=lambda x: x[0], reverse=True)
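To see what `np.linalg.eig` returns, here is a small self-contained check on a hypothetical 2×2 covariance matrix: column i of the returned matrix is an eigenvector v satisfying cov_mat @ v = λ v.

```python
import numpy as np

# Hypothetical symmetric 2x2 covariance matrix
cov_mat = np.array([[2.0, 0.5],
                    [0.5, 1.0]])

eig_vals, eig_vecs = np.linalg.eig(cov_mat)

# Column i of eig_vecs is the eigenvector for eig_vals[i]
checks = [np.allclose(cov_mat @ eig_vecs[:, i], eig_vals[i] * eig_vecs[:, i])
          for i in range(len(eig_vals))]
print(checks)  # [True, True]
```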
# Only keep a certain number of eigen vectors based on
# the "explained variance percentage" which tells us how
# much information (variance) can be attributed to each
# of the principal components
exp_var_percentage = 0.97 # Threshold of 97% explained variance

tot = sum(eig_vals)
var_exp = [(i / tot)*100 for i in sorted(eig_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)

# Find the smallest number of eigen vectors whose cumulative explained
# variance crosses the threshold (cum_var_exp is in percent, so scale by 100)
num_vec_to_keep = 0
for index, percentage in enumerate(cum_var_exp):
    if percentage > exp_var_percentage * 100:
        num_vec_to_keep = index + 1
        break
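The threshold logic can be exercised on hypothetical eigenvalues. `np.argmax` on a boolean array returns the first index where the cumulative percentage crosses the bar, a vectorized equivalent of scanning `cum_var_exp` in a loop.

```python
import numpy as np

# Hypothetical eigenvalues, already sorted high to low
eig_vals = np.array([4.0, 3.0, 2.0, 0.5, 0.3, 0.2])
exp_var_percentage = 0.97

tot = eig_vals.sum()
var_exp = eig_vals / tot * 100      # per-component explained variance (%)
cum_var_exp = np.cumsum(var_exp)    # 40, 70, 90, 95, 98, 100

# First index whose cumulative variance crosses 97%, plus one
num_vec_to_keep = int(np.argmax(cum_var_exp >= exp_var_percentage * 100)) + 1
print(num_vec_to_keep)  # 5
```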
# Compute the projection matrix based on the top eigen vectors
num_features = X.shape[1]
proj_mat = eig_pairs[0][1].reshape(num_features,1)
for eig_vec_idx in range(1, num_vec_to_keep):
    proj_mat = np.hstack((proj_mat, eig_pairs[eig_vec_idx][1].reshape(num_features,1)))

# Project the data
pca_data = X.dot(proj_mat)
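Putting the pieces together, here is a minimal end-to-end sketch on synthetic data (assumed here, not from the article): after projecting onto the top eigenvectors, the variance of each projected coordinate should equal the corresponding eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 samples, 3 correlated features
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [0.5, 1.0, 0.0],
                                          [0.0, 0.3, 0.5]])
X = X - X.mean(axis=0)

cov_mat = X.T.dot(X) / (X.shape[0] - 1)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)

# Sort eigenvectors by descending eigenvalue and keep the top 2
order = np.argsort(eig_vals)[::-1]
proj_mat = eig_vecs[:, order[:2]]

# Project the data
pca_data = X.dot(proj_mat)

# Variance along each principal component equals its eigenvalue
print(np.allclose(pca_data.var(axis=0, ddof=1), eig_vals[order[:2]]))  # True
```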
# Importing libs
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Create a random dataset
data = pd.DataFrame(np.random.random((10,6)), columns=["Iron Man","Captain America","Black Widow","Thor","Hulk","Hawkeye"])
print(data)
# Importing libs
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import skewnorm

# Create the data
speed = skewnorm.rvs(4, size=50)
size = skewnorm.rvs(4, size=50)

# Create and show the 2D density plot
sns.kdeplot(x=speed, y=size, fill=True)
plt.show()
# Import libs
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt

# Get the data
df = pd.read_csv("avengers_data.csv")
print(df)