@kamalhussain
Last active June 20, 2017 19:53
ubuntu@ip-10-0-0-12:~$ kubectl logs -p hub-deployment-3265205087-9kvcl --namespace=indigo
[I 2017-06-20 19:28:07.622 JupyterHub app:720] Loading cookie_secret from env[JPY_COOKIE_SECRET]
[W 2017-06-20 19:28:07.654 JupyterHub app:864] No admin users, admin interface will be unavailable.
[W 2017-06-20 19:28:07.654 JupyterHub app:865] Add any administrative users to `c.Authenticator.admin_users` in config.
[I 2017-06-20 19:28:07.654 JupyterHub app:892] Not using whitelist. Any authenticated user will be allowed.
[I 2017-06-20 19:28:07.681 JupyterHub app:1453] Hub API listening on http://0.0.0.0:8081/hub/
[E 2017-06-20 19:28:07.689 JupyterHub app:1139] Proxy appears to be running at http://100.65.188.75:80/, but I can't access it (HTTP 403: Forbidden)
Did CONFIGPROXY_AUTH_TOKEN change?
singleuser.storage.type dynamic
singleuser.storage.capacity 10Gi
singleuser.storage.home_mount_path /home/jovyan
singleuser.memory.guarantee 1G
auth.type dummy
admin.access True
cull.enabled True
cull.timeout 3600
cull.every 600
hub.base_url /
hub.db_url sqlite:///jupyterhub.sqlite
singleuser.cmd jupyterhub-singleuser
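The HTTP 403 from the proxy together with "Did CONFIGPROXY_AUTH_TOKEN change?" points at the hub and proxy disagreeing on the shared auth token. A sketch of how one might check this (the `kubectl` commands in the comments use the secret/namespace names from the log above and are not runnable outside the cluster; the decode step below uses a made-up sample value purely to illustrate the base64 handling):

```shell
# On the live cluster, one could compare the token the hub reads from the
# 'proxy.token' key of 'hub-secret' (per the describe output below) against
# the token the proxy container actually received, e.g.:
#   kubectl get secret hub-secret --namespace=indigo \
#     -o jsonpath='{.data.proxy\.token}' | base64 --decode
#   kubectl exec <proxy-pod> --namespace=indigo -- printenv CONFIGPROXY_AUTH_TOKEN
# Kubernetes stores Secret values base64-encoded; decoding works like this
# (sample value is hypothetical):
sample=$(printf 'my-proxy-token' | base64)
decoded=$(printf '%s' "$sample" | base64 --decode)
test "$decoded" = "my-proxy-token" && echo "decode ok"
```

If the two decoded values differ, the proxy pod is still holding a token from an older version of the secret.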
ubuntu@ip-10-0-0-12:~$ kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
indigo hub-deployment-3265205087-9kvcl 0/1 CrashLoopBackOff 6 7m
indigo proxy-deployment-51742714-7qxzz 1/1 Running 0 19m
kube-system dns-controller-915266321-w1tf3 1/1 Running 0 1h
kube-system etcd-server-events-ip-172-20-44-187.us-west-2.compute.internal 1/1 Running 0 1h
kube-system etcd-server-ip-172-20-44-187.us-west-2.compute.internal 1/1 Running 0 1h
kube-system kube-apiserver-ip-172-20-44-187.us-west-2.compute.internal 1/1 Running 0 1h
kube-system kube-controller-manager-ip-172-20-44-187.us-west-2.compute.internal 1/1 Running 0 1h
kube-system kube-dns-141550303-0kpng 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-0n8dn 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-0xzlg 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-1b98v 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-2r2l1 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-2vlh9 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-32fvz 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-3k37t 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-3nf89 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-3tw6j 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-3x5gs 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-3xh6r 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-4m4m3 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-5bdg4 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-5v5b2 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-6c1lf 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-6hd9t 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-6pp75 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-709vl 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-796tx 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-7s0gr 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-7zn29 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-81mdg 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-82kzp 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-8483b 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-8d0rb 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-8l6rv 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-9012l 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-946l8 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-982k5 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-9vwbb 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-b552z 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-bf54h 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-bgg8m 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-bkd0s 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-br8qz 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-bzspk 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-cjq9b 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-dn45g 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-f60c4 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-f6kl3 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-f7rh1 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-fz4w4 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-g2v3r 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-grtnn 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-h06js 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-hzv48 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-j6kx4 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-jq1j7 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-jq1lz 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-k0gm0 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-kdjpw 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-lc2xh 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-mhhp6 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-mjp0b 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-mrd74 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-nvj3s 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-nwcvx 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-phrqw 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-pq7v6 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-r5d12 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-s7946 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-sjcmw 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-sk2pd 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-skd0r 3/3 Running 0 1h
kube-system kube-dns-141550303-svrcb 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-t01n4 3/3 Running 0 1h
kube-system kube-dns-141550303-t98nv 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-v66tx 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-v8hmz 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-vc1tn 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-vl4zh 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-w8svs 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-x737n 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-xbhsr 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-xj22t 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-xzm4m 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-zbbp6 0/3 OutOfcpu 0 1h
kube-system kube-dns-141550303-zpcxk 0/3 OutOfcpu 0 1h
kube-system kube-dns-autoscaler-387649234-8vcws 1/1 Running 0 1h
kube-system kube-proxy-ip-172-20-35-212.us-west-2.compute.internal 1/1 Running 0 1h
kube-system kube-proxy-ip-172-20-44-187.us-west-2.compute.internal 1/1 Running 0 1h
kube-system kube-proxy-ip-172-20-46-97.us-west-2.compute.internal 1/1 Running 0 1h
kube-system kube-scheduler-ip-172-20-44-187.us-west-2.compute.internal 1/1 Running 0 1h
kube-system tiller-deploy-3703072393-0rl0p
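The dozens of kube-dns replicas stuck in OutOfcpu suggest the kube-dns-autoscaler kept creating pods while no node had schedulable CPU left. On a live cluster one would inspect per-node headroom with `kubectl describe nodes` (see the comment below); the arithmetic here uses made-up figures purely to illustrate why scheduling stalls:

```shell
# On the cluster itself:
#   kubectl describe nodes | grep -A 6 'Allocated resources'
# Each kube-dns pod runs 3 containers; if a pod requests, say, 260m of CPU
# (a hypothetical figure) and a node has only 300m unreserved, at most one
# more replica fits there -- every further replica fails with OutOfcpu:
node_free_mcpu=300
pod_request_mcpu=260
fits=$((node_free_mcpu / pod_request_mcpu))
test "$fits" -eq 1 && echo "only $fits more kube-dns pod fits on this node"
```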
------------------------
ubuntu@ip-10-0-0-12:~$ kubectl logs -p hub-deployment-3189191595-d5xws --namespace=indigo
[I 2017-06-20 19:44:15.765 JupyterHub app:720] Loading cookie_secret from env[JPY_COOKIE_SECRET]
[W 2017-06-20 19:44:15.807 JupyterHub app:864] No admin users, admin interface will be unavailable.
[W 2017-06-20 19:44:15.807 JupyterHub app:865] Add any administrative users to `c.Authenticator.admin_users` in config.
[I 2017-06-20 19:44:15.807 JupyterHub app:892] Not using whitelist. Any authenticated user will be allowed.
[I 2017-06-20 19:44:15.828 JupyterHub app:1453] Hub API listening on http://0.0.0.0:8081/hub/
[E 2017-06-20 19:44:15.835 JupyterHub app:1139] Proxy appears to be running at http://100.65.189.200:80/, but I can't access it (HTTP 403: Forbidden)
Did CONFIGPROXY_AUTH_TOKEN change?
singleuser.storage.type dynamic
singleuser.storage.capacity 10Gi
singleuser.storage.home_mount_path /home/jovyan
singleuser.memory.guarantee 1G
auth.type dummy
admin.access True
cull.enabled True
cull.timeout 3600
cull.every 600
hub.base_url /
hub.db_url sqlite://
singleuser.cmd jupyterhub-singleuser
ubuntu@ip-10-0-0-12:~$ kubectl describe pod hub-deployment-3189191595-d5xws --namespace=indigo
Name: hub-deployment-3189191595-d5xws
Namespace: indigo
Node: ip-172-20-35-212.us-west-2.compute.internal/172.20.35.212
Start Time: Tue, 20 Jun 2017 19:40:28 +0000
Labels: name=hub-pod
pod-template-hash=3189191595
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"indigo","name":"hub-deployment-3189191595","uid":"5071a88d-55f0-11e7-a0c9-027e69b...
Status: Running
IP: 100.96.2.11
Controllers: ReplicaSet/hub-deployment-3189191595
Containers:
hub-container:
Container ID: docker://91686784091a1f7fcd9aaa7792217b73daf930320e5d04538282b762b85f0653
Image: jupyterhub/k8s-hub:ve9661be
Image ID: docker-pullable://jupyterhub/k8s-hub@sha256:f881fa9c557e4b2c756ab52bf7132a18ac0aed55a89682cbeda881047fe88e85
Port: 8081/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Tue, 20 Jun 2017 19:44:15 +0000
Ready: False
Restart Count: 4
Requests:
cpu: 200m
memory: 512Mi
Environment:
SINGLEUSER_IMAGE: jupyterhub/k8s-singleuser-sample:v0.3.1
JPY_COOKIE_SECRET: <set to the key 'hub.cookie-secret' in secret 'hub-secret'> Optional: false
POD_NAMESPACE: indigo (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'proxy.token' in secret 'hub-secret'> Optional: false
Mounts:
/etc/jupyterhub/config/ from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dpmx9 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub-config-1
Optional: false
default-token-dpmx9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dpmx9
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 default-scheduler Normal Scheduled Successfully assigned hub-deployment-3189191595-d5xws to ip-172-20-35-212.us-west-2.compute.internal
4m 4m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id 57046d6e57754b48cc1540a8f192342b945df79ae3d5ee755ca570ffff22fd6c
4m 4m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id 57046d6e57754b48cc1540a8f192342b945df79ae3d5ee755ca570ffff22fd6c
2m 2m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id c9082b7238401d4d854cc26733dc255e8c7382ed0ed10a78e6341cd3115e3860
2m 2m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id c9082b7238401d4d854cc26733dc255e8c7382ed0ed10a78e6341cd3115e3860
2m 2m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hub-container" with CrashLoopBackOff: "Back-off 10s restarting failed container=hub-container pod=hub-deployment-3189191595-d5xws_indigo(5077e2ff-55f0-11e7-a0c9-027e69b560b0)"
2m 2m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id 3688fe6784e2b1786b4830075908f47ec9e842c6d4f0771937e8184cdcf81c1e
2m 2m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id 3688fe6784e2b1786b4830075908f47ec9e842c6d4f0771937e8184cdcf81c1e
2m 2m 2 kubelet, ip-172-20-35-212.us-west-2.compute.internal Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hub-container" with CrashLoopBackOff: "Back-off 20s restarting failed container=hub-container pod=hub-deployment-3189191595-d5xws_indigo(5077e2ff-55f0-11e7-a0c9-027e69b560b0)"
2m 2m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id 79f8194a7ede8b4f320613529c3fd82a57d121354e0f6d19a08fc27c107cc820
2m 2m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id 79f8194a7ede8b4f320613529c3fd82a57d121354e0f6d19a08fc27c107cc820
2m 1m 4 kubelet, ip-172-20-35-212.us-west-2.compute.internal Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hub-container" with CrashLoopBackOff: "Back-off 40s restarting failed container=hub-container pod=hub-deployment-3189191595-d5xws_indigo(5077e2ff-55f0-11e7-a0c9-027e69b560b0)"
1m 1m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id 91686784091a1f7fcd9aaa7792217b73daf930320e5d04538282b762b85f0653
4m 1m 5 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Pulled Container image "jupyterhub/k8s-hub:ve9661be" already present on machine
1m 1m 1 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id 91686784091a1f7fcd9aaa7792217b73daf930320e5d04538282b762b85f0653
2m 6s 13 kubelet, ip-172-20-35-212.us-west-2.compute.internal spec.containers{hub-container} Warning BackOff Back-off restarting failed container
1m 6s 6 kubelet, ip-172-20-35-212.us-west-2.compute.internal Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hub-container" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=hub-container pod=hub-deployment-3189191595-d5xws_indigo(5077e2ff-55f0-11e7-a0c9-027e69b560b0)"
ubuntu@ip-10-0-0-12:~$ kubectl describe pod hub-deployment-3189191595-c1wj2 --namespace=indigo
Name: hub-deployment-3189191595-c1wj2
Namespace: indigo
Node: ip-172-20-46-97.us-west-2.compute.internal/172.20.46.97
Start Time: Tue, 20 Jun 2017 19:48:26 +0000
Labels: name=hub-pod
pod-template-hash=3189191595
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"indigo","name":"hub-deployment-3189191595","uid":"5071a88d-55f0-11e7-a0c9-027e69b...
Status: Running
IP: 100.96.1.5
Controllers: ReplicaSet/hub-deployment-3189191595
Containers:
hub-container:
Container ID: docker://2d12bbdf5ae25b44c9018a32e066db422f2feef91acd8409a0a5035c2e7af812
Image: jupyterhub/k8s-hub:ve9661be
Image ID: docker-pullable://jupyterhub/k8s-hub@sha256:f881fa9c557e4b2c756ab52bf7132a18ac0aed55a89682cbeda881047fe88e85
Port: 8081/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Tue, 20 Jun 2017 19:51:19 +0000
Ready: False
Restart Count: 3
Requests:
cpu: 200m
memory: 512Mi
Environment:
SINGLEUSER_IMAGE: jupyterhub/k8s-singleuser-sample:v0.3.1
JPY_COOKIE_SECRET: <set to the key 'hub.cookie-secret' in secret 'hub-secret'> Optional: false
POD_NAMESPACE: indigo (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'proxy.token' in secret 'hub-secret'> Optional: false
Mounts:
/etc/jupyterhub/config/ from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dpmx9 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub-config-1
Optional: false
default-token-dpmx9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dpmx9
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 3m 1 default-scheduler Normal Scheduled Successfully assigned hub-deployment-3189191595-c1wj2 to ip-172-20-46-97.us-west-2.compute.internal
3m 3m 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id 13ffb0e0b944d0304813b21343f2c63cd3c33b121ff18456f9f49bd60e84524b
3m 3m 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id 13ffb0e0b944d0304813b21343f2c63cd3c33b121ff18456f9f49bd60e84524b
1m 1m 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id dcc4fa3b91d8f037680fac8f8c9fcb0c3b514997f143fa670a587721034ccd38
1m 1m 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id dcc4fa3b91d8f037680fac8f8c9fcb0c3b514997f143fa670a587721034ccd38
1m 1m 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hub-container" with CrashLoopBackOff: "Back-off 10s restarting failed container=hub-container pod=hub-deployment-3189191595-c1wj2_indigo(6d266e1c-55f1-11e7-a0c9-027e69b560b0)"
53s 53s 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id e008be1047df95275bafba4eb09a1f62b63e5b520ed6c175169508f2b9bd5a59
52s 52s 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id e008be1047df95275bafba4eb09a1f62b63e5b520ed6c175169508f2b9bd5a59
51s 39s 2 kubelet, ip-172-20-46-97.us-west-2.compute.internal Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hub-container" with CrashLoopBackOff: "Back-off 20s restarting failed container=hub-container pod=hub-deployment-3189191595-c1wj2_indigo(6d266e1c-55f1-11e7-a0c9-027e69b560b0)"
3m 24s 4 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Pulled Container image "jupyterhub/k8s-hub:ve9661be" already present on machine
24s 24s 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Created Created container with id 2d12bbdf5ae25b44c9018a32e066db422f2feef91acd8409a0a5035c2e7af812
23s 23s 1 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Normal Started Started container with id 2d12bbdf5ae25b44c9018a32e066db422f2feef91acd8409a0a5035c2e7af812
1m 11s 5 kubelet, ip-172-20-46-97.us-west-2.compute.internal spec.containers{hub-container} Warning BackOff Back-off restarting failed container
22s 11s 2 kubelet, ip-172-20-46-97.us-west-2.compute.internal Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hub-container" with CrashLoopBackOff: "Back-off 40s restarting failed container=hub-container pod=hub-deployment-3189191595-c1wj2_indigo(6d266e1c-55f1-11e7-a0c9-027e69b560b0)"
ubuntu@ip-10-0-0-12:~$
ubuntu@ip-10-0-0-12:~$ kubectl logs -p hub-deployment-3189191595-c1wj2 --namespace=indigo
[I 2017-06-20 19:52:14.613 JupyterHub app:720] Loading cookie_secret from env[JPY_COOKIE_SECRET]
[W 2017-06-20 19:52:14.656 JupyterHub app:864] No admin users, admin interface will be unavailable.
[W 2017-06-20 19:52:14.656 JupyterHub app:865] Add any administrative users to `c.Authenticator.admin_users` in config.
[I 2017-06-20 19:52:14.656 JupyterHub app:892] Not using whitelist. Any authenticated user will be allowed.
[I 2017-06-20 19:52:14.677 JupyterHub app:1453] Hub API listening on http://0.0.0.0:8081/hub/
[E 2017-06-20 19:52:14.683 JupyterHub app:1139] Proxy appears to be running at http://100.65.189.200:80/, but I can't access it (HTTP 403: Forbidden)
Did CONFIGPROXY_AUTH_TOKEN change?
singleuser.storage.type dynamic
singleuser.storage.capacity 10Gi
singleuser.storage.home_mount_path /home/jovyan
singleuser.memory.guarantee 1G
auth.type dummy
admin.access True
cull.enabled True
cull.timeout 3600
cull.every 600
hub.base_url /
hub.db_url sqlite://
singleuser.cmd jupyterhub-singleuser
ubuntu@ip-10-0-0-12:~$
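Since every hub restart hits the same 403, a common recovery (an assumption, not something shown in this log) is to give the hub and proxy one fresh shared token and restart both so they read the updated secret. The secret name, key names, and namespace below come from the describe output above; the pod label selector is hypothetical and would need to match your proxy deployment:

```shell
# Generate one fresh shared token; 32 random bytes -> 64 hex characters.
TOKEN=$(openssl rand -hex 32)
# On the cluster (not runnable here), recreate the secret and bounce the pods:
#   kubectl delete secret hub-secret --namespace=indigo
#   kubectl create secret generic hub-secret --namespace=indigo \
#     --from-literal=proxy.token="$TOKEN" \
#     --from-literal=hub.cookie-secret="$(openssl rand -hex 32)"
#   kubectl delete pod -l name=proxy-pod --namespace=indigo   # label is an assumption
#   kubectl delete pod -l name=hub-pod --namespace=indigo
test "${#TOKEN}" -eq 64 && echo "64-char hex token generated"
```

Deleting the pods matters because environment variables sourced from a Secret are only read at container start; a running proxy keeps its old CONFIGPROXY_AUTH_TOKEN.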