cmd : WARNING:tensorflow:From mnist.py:151: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
At C:\Users\Pah\nni\experiments\zgAa9CvL\trials\RxQ3z\run.ps1:9 char:1
+ cmd /c python mnist.py 2>C:\Users\Pah\nni\experiments\zgAa9CvL\trials ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (WARNING:tensorf...future version.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
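The warning above comes from the deprecated read_data_sets loader. A minimal sketch of one alternative, assuming the keras MNIST loader rather than the official/mnist/dataset.py module the warning names:

# Sketch: load MNIST without tensorflow.contrib.learn (assumption: tf.keras loader)
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0  # flatten and scale to [0, 1]
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0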
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:260: maybe_download (from
tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
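For "write your own downloading logic", a minimal sketch of a hand-rolled replacement for maybe_download; the URL, file name, and cache directory are illustrative assumptions:

# Sketch: download a file once and cache it locally (names are hypothetical)
import os
import urllib.request

def download_if_missing(url, filename, work_dir='MNIST_data'):
    os.makedirs(work_dir, exist_ok=True)
    filepath = os.path.join(work_dir, filename)
    if not os.path.exists(filepath):  # skip if already cached
        urllib.request.urlretrieve(url, filepath)
    return filepath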
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:262: extract_images (from
tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:267: extract_labels (from
tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
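Both extract_images and extract_labels point at tf.data. A minimal sketch of feeding MNIST through a tf.data pipeline instead, assuming the keras loader and an illustrative batch size:

# Sketch: tf.data input pipeline replacing the deprecated extract_* helpers
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
           .shuffle(10000)
           .batch(128)
           .repeat())
iterator = dataset.make_one_shot_iterator()  # TF 1.x style iterator
images, labels = iterator.get_next()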
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:110: dense_to_one_hot (from
tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
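A minimal sketch of the tf.one_hot replacement the warning suggests; depth=10 matches the ten MNIST classes:

# Sketch: one-hot encode integer labels with tf.one_hot
import tensorflow as tf

labels = tf.constant([3, 1, 4])          # integer class ids
one_hot = tf.one_hot(labels, depth=10)   # shape (3, 10), float32 by default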
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:290: DataSet.__init__ (from
tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will
be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From mnist.py:103: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
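A minimal sketch of the migration this warning describes, passing rate = 1 - keep_prob instead of keep_prob; the placeholder names are assumptions about mnist.py, not its actual code:

# Sketch: dropout with the new `rate` argument (tensor names are hypothetical)
import tensorflow as tf

keep_prob = tf.placeholder(tf.float32)
h_fc1 = tf.placeholder(tf.float32, [None, 1024])          # illustrative input tensor
h_fc1_drop = tf.nn.dropout(h_fc1, rate=1 - keep_prob)     # was: keep_prob=keep_prob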
WARNING:tensorflow:From mnist.py:113: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See `tf.nn.softmax_cross_entropy_with_logits_v2`.
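A minimal sketch of swapping in the _v2 op the warning points to; the y_ and y_conv tensor names are assumptions about mnist.py:

# Sketch: softmax cross entropy with the v2 op (tensor names are hypothetical)
import tensorflow as tf

y_ = tf.placeholder(tf.float32, [None, 10])       # one-hot labels
y_conv = tf.placeholder(tf.float32, [None, 10])   # logits from the model
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=y_conv))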
2019-05-07 11:32:20.121065: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 950 major: 5 minor: 2 memoryClockRate(GHz): 1.2785
pciBusID: 0000:02:00.0
totalMemory: 2.00GiB freeMemory: 1.64GiB
2019-05-07 11:32:20.121758: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-07 11:32:20.727662: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-07 11:32:20.728060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-05-07 11:32:20.728295: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-05-07 11:32:20.728731: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1378 MB memory) -> physical GPU (device: 0, name: GeForce GTX 950, pci bus id: 0000:02:00.0, compute capability: 5.2)
2019-05-07 11:32:21.290919: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library cublas64_100.dll locally
2019-05-07 11:32:22.746046: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.59GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-07 11:32:22.800884: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.34GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-07 11:32:22.801637: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.17GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-07 11:32:23.033944: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.10GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-07 11:32:23.034722: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.37GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-07 11:32:23.069821: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.90GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-05-07 11:32:23.070598: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
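The bfc_allocator warnings above are informational: the allocator fell back to smaller allocations on the 2 GB GTX 950 rather than failing. A minimal sketch of one common mitigation (an assumption on my part, not something the log itself prescribes) is to let the session grow GPU memory on demand instead of grabbing large blocks up front:

# Sketch: enable on-demand GPU memory growth in a TF 1.x session
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True   # allocate GPU memory as needed
sess = tf.Session(config=config)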