@RXminuS
Created February 8, 2017 09:12
AutoScaling, Load-Balanced, gRPC Micro-Service Kubernetes Configuration
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: linkerd-config
  namespace: app
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: 127.0.0.1
      port: 8001
    routers:
    - protocol: h2
      experimental: true
      label: grpc
      client:
        loadBalancer:
          kind: ewma
          maxEffort: 10
          decayTimeMs: 15000
      servers:
      - port: 8080
        ip: 0.0.0.0
      baseDtab: |
        # This directs HTTP/2 traffic straight to the specified service.
        # It can be changed to read the service name from a header and route
        # traffic to different services based on that:
        #   /srv => /#/io.l5d.k8s/<namespace>/service;
        #   /h2 => /srv;
        /h2 => /#/io.l5d.k8s/<namespace>/<port-name>/<service-name>;
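        # Concretely, with this app's own names filled in (namespace "app",
        # port name "grpc", service "app"), the rule above would read:
        #   /h2 => /#/io.l5d.k8s/app/grpc/app;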
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: app
  labels:
    type: app
spec:
  replicas: 3
  template:
    metadata:
      labels:
        type: app
        name: app
      annotations:
        # This annotation makes sure the containers get scheduled on a node pool
        # that autoscales based on container resource requirements. This feature
        # might not be available outside of GCE...but why would you use anything
        # else!
        scheduler.alpha.kubernetes.io/affinity: >
          {
            "nodeAffinity": {
              "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [
                  {
                    "matchExpressions": [
                      {
                        "key": "cloud.google.com/gke-nodepool",
                        "operator": "In",
                        "values": ["autoscaling-nodepool"]
                      }
                    ]
                  }
                ]
              }
            }
          }
    spec:
      containers:
      - name: app
        image: company/microservice:0.0.1
        imagePullPolicy: Always
        resources:
          requests:
            memory: "1Gi"
          limits:
            memory: "1Gi"
        ports:
        - name: grpc
          containerPort: 50051
      - name: linkerd
        image: buoyantio/linkerd:latest
        args:
        - "/io.buoyant/linkerd/config/config.yaml"
        ports:
        - name: ext
          containerPort: 8080
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "linkerd-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      # This container is used by linkerd to resolve service names
      # to actual local IPs.
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - "proxy"
        - "-p"
        - "8001"
      dnsPolicy: ClusterFirst
      volumes:
      - name: linkerd-config
        configMap:
          name: "linkerd-config"
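# The kubectl proxy sidecar above exposes the Kubernetes API on 127.0.0.1:8001,
# matching the io.l5d.k8s namer's host/port in the linkerd config. A sketch of
# the kind of endpoints lookup that flows through it (run from inside the pod):
#   curl http://127.0.0.1:8001/api/v1/namespaces/app/endpoints/app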
---
kind: Service
apiVersion: v1
metadata:
  namespace: app
  name: app
spec:
  selector:
    type: app
  type: LoadBalancer
  loadBalancerSourceRanges:
  # You can easily turn this into an internal load balancer so that any service
  # (inside or outside of your K8S network) can reach it. This again won't work
  # outside Google's super smart routing, which optimizes traffic to take the
  # shortest route: even if you give an internal service the public IP of this
  # load balancer, the traffic is optimized to use the internal route. In
  # contrast, on AWS you're out of luck; they route you outside your private
  # network and then back, which means your public IP isn't accepted and you
  # have to fiddle with resolving private IPs manually...ugh.
  - "0.0.0.0/0"
  ports:
  - name: ext
    port: 80
    targetPort: 8080
  - name: grpc
    port: 50051
    targetPort: grpc
# The admin port shouldn't be reachable outside of Kubernetes. Just use
# `kubectl proxy` or `kubectl port-forward` to access it securely!
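# For example, to reach one pod's admin dashboard (the pod name here is a
# hypothetical placeholder; look it up with `kubectl -n app get pods`):
#   kubectl --namespace app port-forward <app-pod-name> 9990:9990
#   # then open http://localhost:9990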
---
kind: Service
apiVersion: v1
metadata:
namespace: app
name: admin
spec:
selector:
type: app
ports:
- name: admin
port: 9990
---
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: app
  namespace: app
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: app
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 30
  # Custom metrics will soon be supported!
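# To deploy all of the above (assuming this file is saved as app.yaml):
#   kubectl create namespace app
#   kubectl apply -f app.yaml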