@eak24
Created January 10, 2019 04:01
JupyterHub upgrade log (enabling HTTPS)
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version 0.7.0 \
--values config.yaml --dry-run --debug
[debug] Created tunnel using local port: '57299'
[debug] SERVER: "127.0.0.1:57299"
[debug] Fetched jupyterhub/jupyterhub to /Users/ethankeller/.helm/cache/archive/jupyterhub-0.7.0.tgz
2019/01/09 22:46:35 Warning: Merging destination map for chart 'jupyterhub'. Cannot overwrite table item 'extraConfig', with non table value: map[]
REVISION: 15
RELEASED: Wed Jan 9 22:46:34 2019
CHART: jupyterhub-0.7.0
USER-SUPPLIED VALUES:
hub:
  extraConfig: |-
    from oauthenticator.generic import GenericOAuthenticator
    c.JupyterHub.authenticator_class = GenericOAuthenticator
    import os
    os.environ["OAUTH_AUTHORIZE_URL"] = "https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED="
    os.environ["OAUTH2_TOKEN_URL"] = "https://oauth.onshape.com/oauth/token"
    c.GitHubOAuthenticator.oauth_callback_url = 'http://192.168.99.100:31351/hub/oauth_callback'
    c.GitHubOAuthenticator.client_id = 'SCRUBBED'
    c.GitHubOAuthenticator.client_secret = 'SCRUBBED'
    c.GenericOAuthenticator.login_service = 'Onshape'
    c.GenericOAuthenticator.userdata_url = 'https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED'
    c.GenericOAuthenticator.token_url = "https://oauth.onshape.com/oauth/token"
    c.GenericOAuthenticator.userdata_method = 'POST'
  extraEnv:
    OAUTH2_AUTHORIZE_URL: https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED
    OAUTH2_TOKEN_URL: https://oauth.onshape.com/oauth/token
proxy:
  https:
    hosts:
      - https://jake-onshape-application.com
    letsencrypt:
      contactEmail: [email protected]
  secretToken: SCRUBBED
singleuser:
  image:
    name: ethan92429/jake
    tag: 0.0.3
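
Note on the extraConfig above: it selects GenericOAuthenticator as the authenticator class but then assigns the client id, secret, and callback URL to GitHubOAuthenticator, a sibling class that GenericOAuthenticator never reads. A minimal sketch of a consolidated block, assuming the intent is to configure GenericOAuthenticator throughout; the SCRUBBED values, callback address, and Onshape endpoints are carried over from the log as placeholders, not verified against this deployment:

import os
from oauthenticator.generic import GenericOAuthenticator

c.JupyterHub.authenticator_class = GenericOAuthenticator

# oauthenticator picks up the authorize/token endpoints from these environment
# variables (the OAUTH2_* names, matching the extraEnv block above)
os.environ["OAUTH2_AUTHORIZE_URL"] = "https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED"
os.environ["OAUTH2_TOKEN_URL"] = "https://oauth.onshape.com/oauth/token"

# keep every remaining setting on the class that is actually in use
c.GenericOAuthenticator.client_id = 'SCRUBBED'
c.GenericOAuthenticator.client_secret = 'SCRUBBED'
c.GenericOAuthenticator.oauth_callback_url = 'http://192.168.99.100:31351/hub/oauth_callback'
c.GenericOAuthenticator.login_service = 'Onshape'
c.GenericOAuthenticator.token_url = 'https://oauth.onshape.com/oauth/token'
c.GenericOAuthenticator.userdata_url = 'https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED'
c.GenericOAuthenticator.userdata_method = 'POST'
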
COMPUTED VALUES:
auth:
admin:
access: true
users: null
dummy:
password: null
ldap:
dn:
search: {}
user: {}
user: {}
state:
cryptoKey: null
enabled: false
type: dummy
whitelist:
users: null
cull:
concurrency: 10
enabled: true
every: 600
maxAge: 0
podCuller:
image:
name: jupyterhub/k8s-pod-culler
tag: 0.7.0
timeout: 3600
users: false
debug:
enabled: false
hub:
activeServerLimit: null
allowNamedServers: false
annotations:
prometheus.io/path: /hub/metrics
prometheus.io/scrape: "true"
baseUrl: /
concurrentSpawnLimit: 64
consecutiveFailureLimit: 5
cookieSecret: null
db:
pvc:
accessModes:
- ReadWriteOnce
annotations: {}
selector: {}
storage: 1Gi
storageClassName: null
subPath: null
type: sqlite-pvc
upgrade: null
url: null
deploymentStrategy:
rollingUpdate: null
type: Recreate
extraConfig: |-
from oauthenticator.generic import GenericOAuthenticator
c.JupyterHub.authenticator_class = GenericOAuthenticator
import os
os.environ["OAUTH_AUTHORIZE_URL"] = "https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED"
os.environ["OAUTH2_TOKEN_URL"] = "https://oauth.onshape.com/oauth/token"
c.GitHubOAuthenticator.oauth_callback_url = 'http://192.168.99.100:31351/hub/oauth_callback'
c.GitHubOAuthenticator.client_id = 'SCRUBBED'
c.GitHubOAuthenticator.client_secret = 'SCRUBBED'
c.GenericOAuthenticator.login_service = 'Onshape'
c.GenericOAuthenticator.userdata_url = 'https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED'
c.GenericOAuthenticator.token_url = "https://oauth.onshape.com/oauth/token"
c.GenericOAuthenticator.userdata_method = 'POST'
extraConfigMap: {}
extraContainers: []
extraEnv:
OAUTH2_AUTHORIZE_URL: https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED
OAUTH2_TOKEN_URL: https://oauth.onshape.com/oauth/token
extraVolumeMounts: []
extraVolumes: []
fsGid: 1000
image:
name: jupyterhub/k8s-hub
tag: 0.7.0
imagePullPolicy: IfNotPresent
labels: {}
networkPolicy:
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
enabled: false
nodeSelector: {}
pdb:
enabled: true
publicURL: null
resources:
requests:
cpu: 200m
memory: 512Mi
service:
ports:
nodePort: null
type: ClusterIP
services: {}
uid: 1000
ingress:
annotations: {}
enabled: false
hosts: []
tls: null
prePuller:
continuous:
enabled: false
extraImages: []
hook:
enabled: true
extraEnv: {}
image:
name: jupyterhub/k8s-image-awaiter
tag: 0.7.0
pause:
image:
name: gcr.io/google_containers/pause
tag: "3.0"
proxy:
chp:
image:
name: jupyterhub/configurable-http-proxy
pullPolicy: IfNotPresent
tag: 3.0.0
resources:
requests:
cpu: 200m
memory: 512Mi
https:
enabled: true
hosts:
- https://jake-onshape-application.com
letsencrypt:
contactEmail: [email protected]
manual:
cert: null
key: null
secret:
crt: ""
key: ""
name: ""
type: letsencrypt
labels: {}
lego:
image:
name: jetstack/kube-lego
pullPolicy: IfNotPresent
tag: 0.1.6
resources: {}
networkPolicy:
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
enabled: false
nginx:
image:
name: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
pullPolicy: IfNotPresent
tag: 0.15.0
proxyBodySize: 64m
resources: {}
nodeSelector: {}
pdb:
enabled: true
secretToken: SCRUBBED
service:
annotations: {}
labels: {}
nodePorts:
http: null
https: null
type: LoadBalancer
rbac:
enabled: true
singleuser:
cloudMetadata:
enabled: false
ip: 169.254.169.254
cmd: jupyterhub-singleuser
cpu:
guarantee: null
limit: null
defaultUrl: null
events: true
extraAnnotations: {}
extraEnv: {}
extraLabels: {}
extraResource:
guarantees: {}
limits: {}
fsGid: 100
image:
name: ethan92429/jake
pullPolicy: IfNotPresent
tag: 0.0.3
imagePullSecret:
email: null
enabled: false
password: null
registry: null
username: null
initContainers: null
lifecycleHooks: null
memory:
guarantee: 1G
limit: null
networkPolicy:
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 169.254.169.254/32
enabled: false
networkTools:
image:
name: jupyterhub/k8s-network-tools
tag: 0.7.0
nodeSelector: {}
schedulerStrategy: null
serviceAccountName: null
startTimeout: 300
storage:
capacity: 10Gi
dynamic:
pvcNameTemplate: claim-{username}{servername}
storageAccessModes:
- ReadWriteOnce
storageClass: null
volumeNameTemplate: volume-{username}{servername}
extraVolumeMounts: []
extraVolumes: []
homeMountPath: /home/jovyan
static:
pvcName: null
subPath: '{username}'
type: dynamic
uid: 1000
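
The COMPUTED VALUES block above is config.yaml merged onto the chart defaults of jupyterhub-0.7.0. For a release that has actually been deployed, the same merge can be pulled back out of Tiller for comparison; a quick sketch using the release name from this log, assuming Helm 2 as used above:

# values supplied by the user for the deployed release
helm get values jhub
# user values merged with the chart defaults (what the templates rendered with)
helm get values jhub --all
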
HOOKS:
---
# hook-image-awaiter
apiVersion: batch/v1
kind: Job
metadata:
name: hook-image-awaiter
labels:
component: image-puller
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
hub.jupyter.org/deletable: "true"
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
"helm.sh/hook-weight": "10"
spec:
template:
metadata:
labels:
component: image-puller
app: jupyterhub
release: jhub
spec:
restartPolicy: Never
serviceAccountName: hook-image-awaiter
containers:
- image: jupyterhub/k8s-image-awaiter:0.7.0
name: hook-image-awaiter
imagePullPolicy: IfNotPresent
command:
- /image-awaiter
- -ca-path=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- -auth-token-path=/var/run/secrets/kubernetes.io/serviceaccount/token
- -api-server-address=https://$(KUBERNETES_SERVICE_HOST):$(KUBERNETES_SERVICE_PORT)
- -namespace=jhub
- -daemonset=hook-image-puller
---
# hook-image-puller
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: hook-image-puller
labels:
component: hook-image-puller
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
hub.jupyter.org/deletable: "true"
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
"helm.sh/hook-weight": "-10"
spec:
selector:
matchLabels:
component: hook-image-puller
app: jupyterhub
release: jhub
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 100%
template:
metadata:
labels:
component: hook-image-puller
app: jupyterhub
release: jhub
spec:
terminationGracePeriodSeconds: 0
automountServiceAccountToken: false
initContainers:
- name: image-pull-singleuser
image: ethan92429/jake:0.0.3
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- echo "Pulling complete"
- name: image-pull-metadata-block
image: jupyterhub/k8s-network-tools:0.7.0
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- echo "Pulling complete"
nodeSelector: {}
containers:
- name: pause
image: gcr.io/google_containers/pause:3.0
---
# hook-image-awaiter
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: hook-image-awaiter
labels:
component: image-puller
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
hub.jupyter.org/deletable: "true"
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
"helm.sh/hook-weight": "0"
rules:
- apiGroups: ["apps"] # "" indicates the core API group
resources: ["daemonsets"]
verbs: ["get"]
---
# hook-image-awaiter
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: hook-image-awaiter
labels:
component: image-puller
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
hub.jupyter.org/deletable: "true"
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
"helm.sh/hook-weight": "0"
subjects:
- kind: ServiceAccount
name: hook-image-awaiter
namespace: jhub
roleRef:
kind: Role
name: hook-image-awaiter
apiGroup: rbac.authorization.k8s.io
---
# hook-image-awaiter
apiVersion: v1
kind: ServiceAccount
metadata:
name: hook-image-awaiter
labels:
component: image-puller
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
hub.jupyter.org/deletable: "true"
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
"helm.sh/hook-weight": "0"
MANIFEST:
---
# Source: jupyterhub/templates/hub/pdb.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: hub
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
minAvailable: 1
selector:
matchLabels:
component: hub
app: jupyterhub
release: jhub
---
# Source: jupyterhub/templates/proxy/pdb.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: proxy
labels:
component: proxy
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
minAvailable: 1
selector:
matchLabels:
component: proxy
app: jupyterhub
release: jhub
---
# Source: jupyterhub/templates/hub/secret.yaml
kind: Secret
apiVersion: v1
metadata:
name: hub-secret
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
type: Opaque
data:
proxy.token: "SCRUBBED"
---
# Source: jupyterhub/templates/hub/configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: hub-config
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
data:
cull.enabled: "true"
cull.users: "false"
cull.timeout: "3600"
cull.every: "600"
cull.concurrency: "10"
cull.max-age: "0"
auth.type: "dummy"
auth.state.enabled: "false"
auth.admin.access: "true"
singleuser.network-tools.image.name: "jupyterhub/k8s-network-tools"
singleuser.network-tools.image.tag: "0.7.0"
singleuser.cloud-metadata: |
enabled: false
ip: 169.254.169.254
singleuser.start-timeout: "300"
singleuser.image-pull-policy: "IfNotPresent"
singleuser.cmd: "jupyterhub-singleuser"
singleuser.events: "true"
singleuser.uid: "1000"
singleuser.fs-gid: "100"
singleuser.node-selector: "{}"
singleuser.storage.type: "dynamic"
singleuser.storage.home_mount_path: "/home/jovyan"
singleuser.storage.extra-volumes: "[]"
singleuser.storage.extra-volume-mounts: "[]"
singleuser.storage.capacity: "10Gi"
singleuser.storage.dynamic.pvc-name-template: "claim-{username}{servername}"
singleuser.storage.dynamic.volume-name-template: "volume-{username}{servername}"
singleuser.storage.dynamic.storage-access-modes: "[ReadWriteOnce]"
singleuser.memory.guarantee: "1G"
singleuser.extra-labels: |
hub.jupyter.org/network-access-hub: "true"
kubespawner.common-labels: |
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: jupyterhub
hub.allow-named-servers: "false"
hub.concurrent-spawn-limit: "64"
hub.consecutive-failure-limit: "5"
hub.extra-config.default.py: |
from oauthenticator.generic import GenericOAuthenticator
c.JupyterHub.authenticator_class = GenericOAuthenticator
import os
os.environ["OAUTH_AUTHORIZE_URL"] = "https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED"
os.environ["OAUTH2_TOKEN_URL"] = "https://oauth.onshape.com/oauth/token"
c.GitHubOAuthenticator.oauth_callback_url = 'http://192.168.99.100:31351/hub/oauth_callback'
c.GitHubOAuthenticator.client_id = 'SCRUBBED'
c.GitHubOAuthenticator.client_secret = 'SCRUBBED'
c.GenericOAuthenticator.login_service = 'Onshape'
c.GenericOAuthenticator.userdata_url = 'https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED'
c.GenericOAuthenticator.token_url = "https://oauth.onshape.com/oauth/token"
c.GenericOAuthenticator.userdata_method = 'POST'
hub.base_url: "/"
hub.db_url: "sqlite:///jupyterhub.sqlite"
debug.enabled: "false"
---
# Source: jupyterhub/templates/proxy/autohttps/configmap-nginx.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-proxy-config
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
data:
proxy-body-size: "64m"
---
# Source: jupyterhub/templates/hub/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: hub-db-dir
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
---
# Source: jupyterhub/templates/hub/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: hub
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
---
# Source: jupyterhub/templates/proxy/autohttps/rbac.yaml
# This is way too many permissions, but apparently the nginx-controller
# is written to sortof assume it is clusterwide ingress provider.
# So we keep this as is, for now.
apiVersion: v1
kind: ServiceAccount
metadata:
name: autohttps
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
---
# Source: jupyterhub/templates/proxy/autohttps/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-jhub
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- namespaces
resourceNames:
- "jhub"
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
resources:
- ingresses/status
verbs:
- update
---
# Source: jupyterhub/templates/proxy/autohttps/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-jhub
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-jhub
subjects:
- kind: ServiceAccount
name: autohttps
namespace: jhub
---
# Source: jupyterhub/templates/hub/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: hub
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods", "persistentvolumeclaims"]
verbs: ["get", "watch", "list", "create", "delete"]
- apiGroups: [""] # "" indicates the core API group
resources: ["events"]
verbs: ["get", "watch", "list"]
---
# Source: jupyterhub/templates/proxy/autohttps/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: kube-lego
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- create
- get
- delete
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- update
- create
- list
- patch
- delete
- watch
- apiGroups:
- ""
resources:
- endpoints
- secrets
verbs:
- get
- create
- update
---
# Source: jupyterhub/templates/proxy/autohttps/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
rules:
- apiGroups:
- ""
resources:
- configmaps
- namespaces
- pods
- secrets
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- ingress-controller-leader-jupyterhub-proxy-tls
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- get
- update
---
# Source: jupyterhub/templates/hub/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: hub
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
subjects:
- kind: ServiceAccount
name: hub
namespace: jhub
roleRef:
kind: Role
name: hub
apiGroup: rbac.authorization.k8s.io
---
# Source: jupyterhub/templates/proxy/autohttps/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: kube-lego
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
roleRef:
kind: Role
name: kube-lego
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: autohttps
namespace: jhub
---
# Source: jupyterhub/templates/proxy/autohttps/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
roleRef:
kind: Role
name: nginx
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: autohttps
namespace: jhub
---
# Source: jupyterhub/templates/hub/service.yaml
apiVersion: v1
kind: Service
metadata:
name: hub
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
type: ClusterIP
selector:
component: hub
app: jupyterhub
release: jhub
ports:
- protocol: TCP
port: 8081
targetPort: 8081
---
# Source: jupyterhub/templates/proxy/autohttps/service.yaml
apiVersion: v1
kind: Service
metadata:
name: proxy-http
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
annotations: {}
spec:
type: ClusterIP
selector:
component: proxy
app: jupyterhub
release: jhub
ports:
- protocol: TCP
port: 8000
targetPort: 8000
---
# Source: jupyterhub/templates/proxy/service.yaml
apiVersion: v1
kind: Service
metadata:
name: proxy-public
labels:
component: proxy-public
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
selector:
# TODO: Refactor to utilize the helpers
component: autohttps
release: jhub
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
# allow proxy.service.nodePort for http
- name: https
port: 443
protocol: TCP
targetPort: 443
type: LoadBalancer
---
# Source: jupyterhub/templates/proxy/service.yaml
apiVersion: v1
kind: Service
metadata:
name: proxy-api
labels:
component: proxy-api
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
selector:
component: proxy
app: jupyterhub
release: jhub
ports:
- protocol: TCP
port: 8001
targetPort: 8001
---
# Source: jupyterhub/templates/hub/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: hub
labels:
component: hub
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
replicas: 1
selector:
matchLabels:
component: hub
app: jupyterhub
release: jhub
strategy:
rollingUpdate: null
type: Recreate
template:
metadata:
labels:
component: hub
app: jupyterhub
release: jhub
hub.jupyter.org/network-access-proxy-api: "true"
hub.jupyter.org/network-access-proxy-http: "true"
hub.jupyter.org/network-access-singleuser: "true"
annotations:
# This lets us autorestart when the secret changes!
checksum/config-map: SCRUBBED
checksum/secret: SCRUBBED
prometheus.io/path: /hub/metrics
prometheus.io/scrape: "true"
spec:
nodeSelector: {}
affinity:
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: component
operator: In
values: ['proxy']
- key: release
operator: In
values: ["jhub"]
volumes:
- name: config
configMap:
name: hub-config
- name: secret
secret:
secretName: hub-secret
- name: hub-db-dir
persistentVolumeClaim:
claimName: hub-db-dir
serviceAccountName: hub
securityContext:
runAsUser: 1000
fsGroup: 1000
containers:
- name: hub
image: jupyterhub/k8s-hub:0.7.0
command:
- jupyterhub
- --config
- /srv/jupyterhub_config.py
- --upgrade-db
volumeMounts:
- mountPath: /etc/jupyterhub/config/
name: config
- mountPath: /etc/jupyterhub/secret/
name: secret
- mountPath: /srv/jupyterhub
name: hub-db-dir
resources:
requests:
cpu: 200m
memory: 512Mi
imagePullPolicy: IfNotPresent
env:
- name: SINGLEUSER_IMAGE
value: "ethan92429/jake:0.0.3"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CONFIGPROXY_AUTH_TOKEN
valueFrom:
secretKeyRef:
name: hub-secret
key: proxy.token
- name: "OAUTH2_AUTHORIZE_URL"
value: "https://oauth.onshape.com/oauth/authorize?response_type=code&client_id=SCRUBBED"
- name: "OAUTH2_TOKEN_URL"
value: "https://oauth.onshape.com/oauth/token"
ports:
- containerPort: 8081
name: hub
---
# Source: jupyterhub/templates/proxy/autohttps/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: autohttps
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
replicas: 1
selector:
matchLabels:
component: autohttps
app: kube-lego
release: jhub
template:
metadata:
labels:
component: autohttps
app: kube-lego
release: jhub
hub.jupyter.org/network-access-proxy-http: "true"
spec:
serviceAccountName: autohttps
nodeSelector: {}
terminationGracePeriodSeconds: 60
affinity:
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: component
operator: In
values: ['hub']
- key: release
operator: In
values: ["jhub"]
containers:
- name: nginx
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0"
imagePullPolicy: IfNotPresent
resources:
{}
args:
- /nginx-ingress-controller
- --default-backend-service=jhub/proxy-http
- --configmap=jhub/nginx-proxy-config
- --ingress-class=jupyterhub-proxy-tls
- --watch-namespace=jhub
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: kube-lego
image: "jetstack/kube-lego:0.1.6"
imagePullPolicy: IfNotPresent
resources:
{}
env:
- name: LEGO_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LEGO_WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LEGO_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: LEGO_EMAIL
# [email protected]
value: "[email protected]"
- name: LEGO_SUPPORTED_INGRESS_PROVIDER
value: "nginx"
- name: LEGO_SUPPORTED_INGRESS_CLASS
value: "jupyterhub-proxy-tls,dummy"
- name: LEGO_DEFAULT_INGRESS_CLASS
value: "jupyterhub-proxy-tls"
- name: LEGO_KUBE_ANNOTATION
value: "hub.jupyter.org/tls-terminator"
- name: LEGO_URL
value: "https://acme-v01.api.letsencrypt.org/directory"
ports:
- containerPort: 8080
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 1
---
# Source: jupyterhub/templates/proxy/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: proxy
labels:
component: proxy
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
spec:
replicas: 1
selector:
matchLabels:
component: proxy
app: jupyterhub
release: jhub
template:
metadata:
labels:
component: proxy
app: jupyterhub
release: jhub
hub.jupyter.org/network-access-hub: "true"
hub.jupyter.org/network-access-singleuser: "true"
annotations:
# This lets us autorestart when the secret changes!
checksum/hub-secret: SCRUBBED
checksum/proxy-secret: SCRUBBED
spec:
nodeSelector: {}
terminationGracePeriodSeconds: 60
affinity:
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: component
operator: In
values: ['hub']
- key: release
operator: In
values: ["jhub"]
containers:
- name: chp
image: jupyterhub/configurable-http-proxy:3.0.0
command:
- configurable-http-proxy
- --ip=0.0.0.0
- --api-ip=0.0.0.0
- --api-port=8001
- --default-target=http://$(HUB_SERVICE_HOST):$(HUB_SERVICE_PORT)
- --error-target=http://$(HUB_SERVICE_HOST):$(HUB_SERVICE_PORT)/hub/error
- --port=8000
resources:
requests:
cpu: 200m
memory: 512Mi
env:
- name: CONFIGPROXY_AUTH_TOKEN
valueFrom:
secretKeyRef:
name: hub-secret
key: proxy.token
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
name: proxy-public
- containerPort: 8001
name: api
---
# Source: jupyterhub/templates/proxy/autohttps/ingress-internal.yaml
# This is solely used to provide auto HTTPS with our bundled kube-lego
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: jupyterhub-internal
labels:
component: autohttps
app: jupyterhub
release: jhub
chart: jupyterhub-0.7.0
heritage: Tiller
annotations:
kubernetes.io/ingress.provider: nginx
kubernetes.io/ingress.class: jupyterhub-proxy-tls
hub.jupyter.org/tls-terminator: "true"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: proxy-http
servicePort: 8000
host: https://jake-onshape-application.com
tls:
- secretName: kubelego-tls-proxy-jhub
hosts:
- https://jake-onshape-application.com
Release "jhub" has been upgraded. Happy Helming!
LAST DEPLOYED: Wed Jan 9 22:45:57 2019
NAMESPACE: jhub
STATUS: FAILED
NOTES:
Thank you for installing JupyterHub!
Your release is named jhub and installed into the namespace jhub.
You can find if the hub and proxy is ready by doing:
kubectl --namespace=jhub get pod
and watching for both those pods to be in status 'Ready'.
You can find the public IP of the JupyterHub by doing:
kubectl --namespace=jhub get svc proxy-public
It might take a few minutes for it to appear!
Note that this is still an alpha release! If you have questions, feel free to
1. Read the guide at https://z2jh.jupyter.org
2. Chat with us at https://gitter.im/jupyterhub/jupyterhub
3. File issues at https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues
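
Despite the "Happy Helming!" banner, the summary above reports STATUS: FAILED, so the release needs inspection before this output is trusted. A short checklist, assuming the release and namespace names from this log (jhub); the revision passed to rollback is whichever one helm history shows as last known good:

# what Tiller recorded for each revision, and the state of the pods it manages
helm history jhub
kubectl --namespace=jhub get pods
kubectl --namespace=jhub describe pod -l component=hub
kubectl --namespace=jhub logs deploy/hub

# if the new revision is broken, roll back to a known-good revision number
helm rollback jhub <REVISION>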