@danehans
Last active January 7, 2020 17:54
test_nodelocal_dns_ocp
# 01_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-cache-default
  namespace: openshift-dns
  labels:
    dns.operator.openshift.io/owning-dns: default
spec:
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    dns.operator.openshift.io/daemonset-dns-cache: default
# replace 172.30.205.37 with the svc ip of 01_svc.yaml
#
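# A quick way to look that Service IP up once 01_svc.yaml has been applied
# (a sketch; assumes the oc CLI is already logged in to the cluster):
$ oc get svc dns-cache-default -n openshift-dns -o jsonpath='{.spec.clusterIP}'
#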
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-cache-default
  namespace: openshift-dns
  labels:
    dns.operator.openshift.io/owning-dns-cache: default
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.20.10 172.30.205.37
        forward . 172.30.0.10 {
            force_tcp
        }
        prometheus :9153
        health 169.254.20.10:8080
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.30.205.37
        forward . 172.30.0.10 {
            force_tcp
        }
        prometheus :9153
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.30.205.37
        forward . 172.30.0.10 {
            force_tcp
        }
        prometheus :9153
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 172.30.205.37
        forward . 172.30.0.10 {
            force_tcp
        }
        prometheus :9153
    }
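# The health plugin in the cluster.local block above exposes a liveness
# endpoint on the link-local address; from a node it can be probed directly
# (a sketch, assuming the cache pod on that node is running):
$ curl http://169.254.20.10:8080/health
#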
# replace 172.30.205.37 with the svc ip of 01_svc.yaml
#
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns-default
  namespace: openshift-dns
  labels:
    dns.operator.openshift.io/owning-dns-cache: default
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
  selector:
    matchLabels:
      dns.operator.openshift.io/daemonset-dns-cache: default
  template:
    metadata:
      labels:
        dns.operator.openshift.io/daemonset-dns-cache: default
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: dns
      hostNetwork: true
      dnsPolicy: Default # Don't use cluster DNS.
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
        - name: node-cache
          image: k8s.gcr.io/k8s-dns-node-cache:1.15.7
          resources:
            requests:
              cpu: 25m
              memory: 5Mi
          args: [ "-localip", "169.254.20.10,172.30.205.37", "-conf", "/etc/Corefile", "-upstreamsvc", "dns-cache-default" ]
          securityContext:
            privileged: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          livenessProbe:
            httpGet:
              host: 169.254.20.10
              path: /health
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
          volumeMounts:
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - name: config-volume
              mountPath: /etc/coredns
            - name: kube-dns-config
              mountPath: /etc/kube-dns
      volumes:
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: kube-dns-config
          configMap:
            name: kube-dns
            optional: true
        - name: config-volume
          configMap:
            name: dns-cache-default
            items:
              - key: Corefile
                path: Corefile.base
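# Rough order of operations for trying this out (a sketch; only 01_svc.yaml is
# named above, so the other two file names here are hypothetical):
$ oc apply -f 01_svc.yaml        # Service first: its clusterIP gets substituted
                                 # into the ConfigMap and DaemonSet above
$ oc apply -f 02_cm.yaml         # hypothetical name for the ConfigMap manifest
$ oc apply -f 03_ds.yaml         # hypothetical name for the DaemonSet manifest
$ oc rollout status ds/node-local-dns-default -n openshift-dns
#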
# Appears that we are hitting https://github.com/kubernetes/kubernetes/issues/71305
# @danwinship has this workaround: https://github.com/kubernetes/kubernetes/issues/71305#issuecomment-521978797
# and https://github.com/danwinship/iptables-wrappers
#
$ oc logs node-local-dns-default-j5dfh -n openshift-dns
2020/01/07 02:41:50 2020-01-07T02:41:50.405Z [INFO] Using Corefile /etc/Corefile
2020/01/07 02:41:50 2020-01-07T02:41:50.407Z [INFO] Tearing down
2020/01/07 02:41:50 2020-01-07T02:41:50.594Z [INFO] Hit error during teardown - Link not found
2020/01/07 02:41:50 2020-01-07T02:41:50.594Z [INFO] Setting up networking for node cache
2020/01/07 02:41:50 2020-01-07T02:41:50.693Z [INFO] Tearing down
2020/01/07 02:41:51 2020-01-07T02:41:51.328Z [FATAL] Failed to setup - error checking rule: exit status 3: Notice: The NOTRACK target is converted into CT target in rule listing and saving.
iptables v1.6.0: can't initialize iptables table `raw': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
, Exiting
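# The FATAL above is consistent with the linked issue: the image ships legacy
# iptables v1.6.0 while the host programs its rules via nf_tables, so the
# container's view of the `raw' table is missing. One hedged way to confirm
# which mode a host is using (pick any affected node):
$ oc debug node/<node-name> -- chroot /host iptables -V
# iptables 1.8+ appends "(legacy)" or "(nf_tables)" to the version string; an
# "(nf_tables)" host paired with a legacy-only client in the image matches
# this failure mode, which is what @danwinship's iptables-wrappers address.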