/home/heimdall/anaconda3/bin/python /mnt/attic/DADA/codes/cnn/Bnorm.py
/home/heimdall/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
/home/heimdall/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)
channels_last
2018-04-26 11:02:40.173029: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 62, 62, 32)        896
_________________________________________________________________
batch_normalization_1 (Batch (None, 62, 62, 32)        128
_________________________________________________________________
dropout_1 (Dropout)          (None, 62, 62, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 60, 60, 64)        18496
_________________________________________________________________
batch_normalization_2 (Batch (None, 60, 60, 64)        256
_________________________________________________________________
dropout_2 (Dropout)          (None, 60, 60, 64)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 58, 58, 64)        36928
_________________________________________________________________
batch_normalization_3 (Batch (None, 58, 58, 64)        256
_________________________________________________________________
dropout_3 (Dropout)          (None, 58, 58, 64)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 215296)            0
_________________________________________________________________
dense_1 (Dense)              (None, 1024)              220464128
_________________________________________________________________
batch_normalization_4 (Batch (None, 1024)              4096
_________________________________________________________________
dropout_4 (Dropout)          (None, 1024)              0
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 2050
=================================================================
Total params: 220,527,234
Trainable params: 220,524,866
Non-trainable params: 2,368
_________________________________________________________________
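The summary above pins down the architecture: the output shapes imply a 64x64x3 input with unpadded 3x3 convolutions and no pooling, and every parameter count (896, 18496, 36928, 220464128, 2050, plus 4 BatchNorm params per channel, half of them non-trainable) is reproduced by the sketch below. This is a minimal Keras 2 reconstruction; the activations and dropout rates are not recoverable from the log and are placeholders, not the author's actual values.

from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization, Dropout, Flatten, Dense

model = Sequential()
# 64x64x3 input, 3x3 valid conv -> (62, 62, 32), 3*3*3*32 + 32 = 896 params
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
model.add(BatchNormalization())                     # 4 * 32 = 128 params
model.add(Dropout(0.25))                            # rate is a guess
model.add(Conv2D(64, (3, 3), activation='relu'))    # -> (60, 60, 64), 18496 params
model.add(BatchNormalization())                     # 4 * 64 = 256 params
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))    # -> (58, 58, 64), 36928 params
model.add(BatchNormalization())                     # 256 params
model.add(Dropout(0.25))
model.add(Flatten())                                # 58 * 58 * 64 = 215296
model.add(Dense(1024, activation='relu'))           # 215296*1024 + 1024 = 220,464,128 params
model.add(BatchNormalization())                     # 4 * 1024 = 4096 params
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))           # 1024*2 + 2 = 2050 params
model.summary()                                     # prints the table above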
starting load data
done load data
100%|██████████| 125/125 [00:00<00:00, 148.85it/s]
100%|██████████| 8/8 [00:00<00:00, 156.80it/s]
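The two tqdm bars correspond to the 125 training and 8 validation images. The loader itself is not in the log, so the sketch below is an assumption throughout: the load_data helper, the folder layout, and the use of OpenCV are all guesses. Only the class names (mildew, rust) and the 0/1 mapping are grounded in the prediction lines further down.

import os
import cv2
import numpy as np
from tqdm import tqdm

def load_data(folder, size=(64, 64)):
    """Hypothetical loader: reads folder/<class>/*.png and returns
    image arrays plus integer labels (0 = mildew, 1 = rust)."""
    classes = ['mildew', 'rust']
    paths = [(os.path.join(folder, c, f), i)
             for i, c in enumerate(classes)
             for f in os.listdir(os.path.join(folder, c))]
    images, labels = [], []
    for path, label in tqdm(paths):         # produces bars like the ones above
        img = cv2.imread(path)
        images.append(cv2.resize(img, size))
        labels.append(label)
    return np.array(images) / 255.0, np.array(labels)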
Train on 125 samples, validate on 8 samples
Epoch 1/8
 64/125 [==============>...............] - ETA: 16s - loss: 1.3197 - acc: 0.5781
125/125 [==============================] - 29s 233ms/step - loss: 3.0316 - acc: 0.6080 - val_loss: 6.2061 - val_acc: 0.5000
Epoch 00001: val_loss improved from inf to 6.20606, saving model to bnorm_3.hdf5
Epoch 2/8
 64/125 [==============>...............] - ETA: 7s - loss: 4.5018 - acc: 0.6250
125/125 [==============================] - 15s 123ms/step - loss: 2.8835 - acc: 0.7040 - val_loss: 5.8238 - val_acc: 0.5000
Epoch 00002: val_loss improved from 6.20606 to 5.82379, saving model to bnorm_3.hdf5
Epoch 3/8
 64/125 [==============>...............] - ETA: 8s - loss: 0.6147 - acc: 0.9062
125/125 [==============================] - 18s 142ms/step - loss: 0.5773 - acc: 0.9200 - val_loss: 2.7759 - val_acc: 0.6250
Epoch 00003: val_loss improved from 5.82379 to 2.77594, saving model to bnorm_3.hdf5
Epoch 4/8
 64/125 [==============>...............] - ETA: 7s - loss: 0.4361 - acc: 0.9531
125/125 [==============================] - 18s 143ms/step - loss: 0.2764 - acc: 0.9520 - val_loss: 3.5696 - val_acc: 0.5000
Epoch 00004: val_loss did not improve
Epoch 5/8
 64/125 [==============>...............] - ETA: 8s - loss: 0.0067 - acc: 1.0000
125/125 [==============================] - 17s 134ms/step - loss: 0.1071 - acc: 0.9680 - val_loss: 0.0208 - val_acc: 1.0000
Epoch 00005: val_loss improved from 2.77594 to 0.02075, saving model to bnorm_3.hdf5
Epoch 6/8
 64/125 [==============>...............] - ETA: 8s - loss: 0.0039 - acc: 1.0000
125/125 [==============================] - 17s 139ms/step - loss: 0.0228 - acc: 0.9920 - val_loss: 3.2272 - val_acc: 0.6250
Epoch 00006: val_loss did not improve
Epoch 7/8
 64/125 [==============>...............] - ETA: 8s - loss: 0.0358 - acc: 0.9844
125/125 [==============================] - 17s 132ms/step - loss: 0.0375 - acc: 0.9840 - val_loss: 3.2688 - val_acc: 0.6250
Epoch 00007: val_loss did not improve
Epoch 8/8
 64/125 [==============>...............] - ETA: 7s - loss: 0.0050 - acc: 1.0000
125/125 [==============================] - 17s 134ms/step - loss: 0.0072 - acc: 1.0000 - val_loss: 3.3076 - val_acc: 0.6250
Epoch 00008: val_loss did not improve
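The per-epoch "val_loss improved ... saving model to bnorm_3.hdf5" lines are the signature of a ModelCheckpoint callback with save_best_only=True and verbose=1; the run used 125 training samples, 8 validation samples, 8 epochs, and (from the 64/125 progress line) a batch size of 64. A hedged reconstruction of the training call, continuing the sketches above; the optimizer, loss, and data paths are assumptions, not taken from the log:

from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical

x_train, train_labels = load_data('../../train')    # paths are guesses
x_val, val_labels = load_data('../../validation')
y_train = to_categorical(train_labels, 2)            # one-hot for the 2-way softmax
y_val = to_categorical(val_labels, 2)

# Saves weights only when validation loss improves, matching the log above.
checkpoint = ModelCheckpoint('bnorm_3.hdf5', monitor='val_loss',
                             save_best_only=True, verbose=1)

model.compile(optimizer='adam',                      # optimizer not shown in the log
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=8, batch_size=64,
          callbacks=[checkpoint])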
1 ../../validation/rust/image (85).png
0 ../../validation/mildew/image (19).png
0 ../../validation/mildew/image (14).png
1 ../../validation/rust/image (9).png
0 ../../validation/mildew/image (8).png
0 ../../validation/mildew/image (10).png
1 ../../validation/rust/image (46).png
1 ../../validation/rust/image (45).png
Process finished with exit code 0
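The eight lines before the exit code pair a predicted class index (1 = rust, 0 = mildew) with the validation image it came from. All eight match their ground-truth folder even though the final epoch's val_acc was only 0.6250, presumably because the predictions come from the reloaded bnorm_3.hdf5 checkpoint, which holds the epoch-5 weights (val_acc 1.0000). A minimal sketch of a loop that could print them; the glob pattern and preprocessing are assumptions:

import glob
import cv2
import numpy as np
from keras.models import load_model

model = load_model('bnorm_3.hdf5')   # best checkpoint (epoch 5), not the final weights

for path in glob.glob('../../validation/*/image*.png'):
    img = cv2.resize(cv2.imread(path), (64, 64)) / 255.0
    pred = model.predict(img[np.newaxis, ...])    # (1, 2) softmax scores
    print(np.argmax(pred), path)                  # e.g. "1 ../../validation/rust/image (85).png"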