# keras https://keras.io/
from keras.models import Sequential
from keras import models
from keras import layers
from keras import optimizers

n_classes = 10  # placeholder: number of target classes

model = Sequential()
model.add(layers.Dense(50, activation='relu', input_shape=(2000,)))  # hidden layer
model.add(layers.Dropout(0.3))                                       # dropout belongs between layers, not after the output
model.add(layers.Dense(n_classes, activation='softmax'))             # output layer: softmax to match categorical_crossentropy
model.compile(loss='categorical_crossentropy', optimizer="SGD", metrics=['accuracy'])
history = model.fit(train, label_train, epochs=120, batch_size=256)
from keras import regularizers
model.add(layers.Dense(50, activation='relu', kernel_regularizer=regularizers.l2(0.005), input_shape=(2000,)))  # hidden layer with L2 weight regularization
from keras.utils import to_categorical
y_train = to_categorical(y_train)   # one-hot encode integer labels for categorical_crossentropy

history.history["acc"]              # training accuracy per epoch
history.history["loss"]             # training loss per epoch
y_pred = model.predict(test)        # predicted probabilities for new data
model.evaluate(train, label_train)  # returns loss and metric values on a dataset
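# A minimal sketch of plotting the per-epoch training curves stored in `history`
# (assumes matplotlib is installed; not part of the original gist)
import matplotlib.pyplot as plt
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["acc"], label="training accuracy")
plt.xlabel("epoch")
plt.legend()
plt.show()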
# keras.optimizers:
#   SGD
#   RMSprop
#   Adagrad
#   Adadelta
#   Adam
#   Adamax
#   Nadam
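# A minimal sketch: any optimizer can also be passed as a configured instance instead of a string
# (lr/momentum values below are assumed examples; newer Keras versions spell lr as learning_rate)
sgd = optimizers.SGD(lr=0.01, momentum=0.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])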
# metrics: https://keras.io/metrics/
#   mae
#   acc
#   binary_accuracy
#   sparse_categorical_accuracy
#   categorical_accuracy
#   top_k_categorical_accuracy
# loss: https://keras.io/losses/
#   mean_squared_error
#   binary_crossentropy
#   mean_absolute_error
#   mean_absolute_percentage_error
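# A minimal sketch of pairing loss and metrics to the problem type (assumed setups, not from the gist)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae'])                         # regression
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['binary_accuracy'])            # binary classification (sigmoid output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['categorical_accuracy'])  # multi-class with one-hot labels (softmax output)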
## Image data
# 1. flatten each image to a single vector and scale pixel values to [0, 1]
X = X.reshape(len(X), -1) / 255.0
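# A minimal sketch with Keras' built-in MNIST data (an assumed example, not part of the original gist)
from keras.datasets import mnist
from keras.utils import to_categorical
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(len(X_train), -1) / 255.0   # (60000, 28, 28) -> (60000, 784), scaled to [0, 1]
y_train = to_categorical(y_train)                      # one-hot labels for categorical_crossentropy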
# 2. transfer learning with a pretrained base (InceptionV3 trained on ImageNet)
from keras.applications import inception_v3
from keras.layers import GlobalAveragePooling2D, Dense

imagenet = inception_v3.InceptionV3(weights='imagenet', include_top=False)
new_model = models.Sequential()
new_model.add(imagenet)                        # pretrained convolutional base
new_model.add(GlobalAveragePooling2D())
new_model.add(Dense(1024, activation='relu'))  # dense layer 1
new_model.add(Dense(1024, activation='relu'))  # dense layer 2
new_model.add(Dense(512, activation='relu'))   # dense layer 3
new_model.add(Dense(1, activation='sigmoid'))  # final layer with sigmoid activation (binary output)
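# A minimal sketch of freezing the pretrained base so only the new dense head trains
# (an assumed follow-up step, not in the original gist)
imagenet.trainable = False
new_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])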
- try overfitting on a small sample first
- use dropout + L2 regularization to control overfitting
- if depth increases but error doesn't change, change other parameters
- batch normalization (a combined sketch of several of these tips follows this list)
- SGD + momentum
- beam search
- Xavier initialization, N(0, 0.01) initialization - initial weights
- convolution layers
- random search is better than grid search for hyperparameters
- check for dead units
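# A minimal sketch combining several tips above (batch normalization, dropout + L2,
# Xavier/glorot initialization, SGD + momentum); an assumed toy setup, not from the original gist
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization
from keras import optimizers, regularizers

tips_model = Sequential()
tips_model.add(Dense(50, activation='relu', input_shape=(2000,),
                     kernel_initializer='glorot_normal',           # Xavier initialization
                     kernel_regularizer=regularizers.l2(0.005)))   # L2 weight penalty
tips_model.add(BatchNormalization())
tips_model.add(Dropout(0.3))
tips_model.add(Dense(1, activation='sigmoid'))
tips_model.compile(loss='binary_crossentropy',
                   optimizer=optimizers.SGD(lr=0.01, momentum=0.9),  # SGD + momentum
                   metrics=['accuracy'])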