
This `curl` call registers an Ambassador mapping named `user` that routes requests with the prefix `/v1/user/` to the `usersvc` service, rewriting the prefix to `/v2/`:

```shell
curl -X POST -H "Content-Type: application/json" \
     -d '{ "prefix": "/v1/user/", "service": "usersvc", "rewrite": "/v2/" }' \
     $AMBASSADORURL/ambassador/mapping/user
```
richarddli / telepresence-openshift (created August 1, 2017 18:18): blog post on telepresence & openshift
# Telepresence: Fast, local development and debugging on Kubernetes and OpenShift
OpenShift makes it easy to deploy your containers, but it can also slow down your development cycle.
The problem is that containers (or microservices) running on OpenShift run in a different environment than the one on your laptop.
Your container may talk to other containers running on OpenShift, or rely on platform features such as volumes or secrets, and those features are not available when you run your code locally.
How, then, do you attach a debugger, and how do you get a quick code/test feedback loop during initial development?
There are a [variety of approaches](https://www.datawire.io/guide/deployment/development-environments-microservices/) to setting up your development environment for microservices. In this blog post we'll demonstrate how you can have the best of both worlds: the OpenShift runtime platform and the speed of local development, by using an open source tool called [Telepresence](http://www.telepresence.io).
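As a taste of what this looks like in practice, here is a minimal sketch using the Telepresence 1.x CLI. The deployment name (`usersvc`), the exposed port, and the local command are hypothetical; the general pattern is to run your service locally while Telepresence proxies networking, environment variables, and volumes to and from the cluster:

```shell
# Start a local shell whose network, environment variables, and volumes
# behave as if it were running inside the cluster (temporary deployment).
telepresence --new-deployment my-debug-session --run-shell

# Or swap an existing deployment for a locally running process:
# cluster traffic to usersvc is forwarded to localhost:8080.
telepresence --swap-deployment usersvc --expose 8080 \
  --run python3 usersvc.py
```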
# Traffic splitting proxy
## Background
Canary deployments are a popular way of testing microservices. In a canary deployment, a small percentage of traffic is routed to the new version of a service, while the original (presumably stable) version of your service manages most of the traffic. The service author can then monitor the new version of the service to make sure that it doesn't crash, etc. Then, traffic to the new version can be gradually increased.
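For illustration, one low-tech way to approximate this weighting on plain Kubernetes is to run the stable and candidate versions as two Deployments behind the same Service and control the split with replica counts. The names and image tags below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usersvc-stable
spec:
  replicas: 9                 # ~90% of traffic
  selector:
    matchLabels: { app: usersvc, track: stable }
  template:
    metadata:
      labels: { app: usersvc, track: stable }
    spec:
      containers:
      - name: usersvc
        image: example/usersvc:1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usersvc-canary
spec:
  replicas: 1                 # ~10% of traffic
  selector:
    matchLabels: { app: usersvc, track: canary }
  template:
    metadata:
      labels: { app: usersvc, track: canary }
    spec:
      containers:
      - name: usersvc
        image: example/usersvc:1.1
---
apiVersion: v1
kind: Service
metadata:
  name: usersvc
spec:
  selector:
    app: usersvc              # matches both tracks, so traffic splits by replica count
  ports:
  - port: 80
    targetPort: 8080
```

Because the Service selector matches both Deployments, roughly one request in ten lands on the canary, and the split can be adjusted by scaling the replica counts.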
A more sophisticated implementation of this approach can automate the monitoring/testing portion. With this approach, a proxy can multicast an incoming request to three separate instances:
* the candidate version
* the stable version (primary)
# Developing and debugging services on Kubernetes
A typical Kubernetes application consists of multiple separate services (aka "microservices"). A service may depend on multiple other services in order to function correctly. For example, a "shopping cart" service might depend on a "product" service and a "user" service.
In this multi-service architecture, developing and debugging a single service can be complicated. Typically, your options are:
* run all of your services locally (requires lots of setup/maintenance)
* run all your services in a remote Kubernetes cluster (requires a CI pipeline to make updates and makes debugging cumbersome)
## A different way
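One alternative is to run only the service you are actively changing on your laptop, while everything else stays in the remote cluster. A minimal sketch of that workflow with Telepresence 1.x follows; the deployment name (`shoppingcart`) and the local command are hypothetical:

```shell
# Replace the shoppingcart deployment in the cluster with a proxy, and run
# the service locally under a debugger; cluster traffic is forwarded to it.
telepresence --swap-deployment shoppingcart --expose 8080 \
  --run python3 -m pdb shoppingcart.py
```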
richarddli / prom-operator.yaml (last active January 4, 2018 21:35)
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
- kind: ServiceAccount
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador-admin
  name: ambassador-admin
spec:
  type: NodePort
  ports:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador
  name: ambassador
spec:
  type: LoadBalancer
  ports:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador-monitor
  labels:
    service: ambassador-monitor
spec:
  selector:
    service: ambassador
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      ambassador: monitoring
  resources:
```
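The `serviceMonitorSelector` above tells this Prometheus instance to scrape targets described by ServiceMonitors carrying the label `ambassador: monitoring`. A minimal sketch of such a ServiceMonitor is below; the selected label (`service: ambassador-monitor`) matches the Service defined earlier, while the endpoint port name and scrape interval are assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ambassador-monitor
  labels:
    ambassador: monitoring         # matched by serviceMonitorSelector above
spec:
  selector:
    matchLabels:
      service: ambassador-monitor  # selects the ambassador-monitor Service
  endpoints:
  - port: prometheus-metrics       # hypothetical port name on that Service
    interval: 30s
```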