- Edit /etc/rc.conf
cloned_interfaces="bridge0 tap0 tap1 tap2 lagg0"
ifconfig_bridge0="addm lagg0 addm tap0 addm tap1 addm tap2"
vm_list="unifi zm1 grafana"
- Create a tap for grafana, and add it to the bridge.
ifconfig tap2 create
ifconfig bridge0 addm tap2
alias kc='kubectl'
alias kclf='kubectl logs --tail=200 -f'
alias kcgs='kubectl get service -o wide'
alias kcgd='kubectl get deployment -o wide'
alias kcgp='kubectl get pod -o wide'
alias kcgn='kubectl get node -o wide'
alias kcdp='kubectl describe pod'
alias kcds='kubectl describe service'
alias kcdd='kubectl describe deployment'
alias kcdf='kubectl delete -f'
#!/bin/bash -e
# Select which Docker version to use on CoreOS with torcx.

# Specify the available Docker version to enable.
version=17.09

# Create modifiable torcx paths if they don't exist already.
mkdir -p /etc/torcx/profiles /var/lib/torcx/store

# Download the torcx manifest file for the currently running OS version.
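The script is truncated here. For context, what such a script ultimately installs is a small JSON profile manifest. A sketch, assuming the documented `profile-manifest-v0` format (the file path and profile name are illustrative):

```json
{
  "kind": "profile-manifest-v0",
  "value": {
    "images": [
      {
        "name": "docker",
        "reference": "17.09"
      }
    ]
  }
}
```

This would live at e.g. `/etc/torcx/profiles/docker-17.09.json`, with the profile name written to `/etc/torcx/next-profile` so torcx picks it up on the next boot.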
package main

import ("log"; "os"; "os/signal"; "sync"; "syscall")

func main() {
	// Set logging output to standard console out
	log.SetOutput(os.Stdout)
	sigs := make(chan os.Signal, 1) // Create channel to receive OS signals
	stop := make(chan struct{})     // Create channel to broadcast a stop request
	signal.Notify(sigs, os.Interrupt, syscall.SIGTERM, syscall.SIGINT) // Register the sigs channel to receive SIGTERM/SIGINT
	wg := &sync.WaitGroup{} // Goroutines can add themselves to this to be waited on so that they finish

	<-sigs      // Block until a termination signal arrives
	close(stop) // Closing stop tells every goroutine selecting on it to shut down
	wg.Wait()   // Wait for all registered goroutines to finish before exiting
}
A blood black nothingness began to spin.
Began to spin.
Let's move on to system.
System.
Feel that in your body.
Usage: node strace_log_analyzer.js strace.log /tmp
This script parses an input file that must contain an strace log from a single thread; the log must be collected with the -t -T -f options. The script then calculates:
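The kind of aggregation such an analyzer performs can be sketched in shell. This is a hypothetical stand-in for the real script (`summarize_strace` is a name introduced here), assuming `strace -t -T` output where each syscall line ends with its duration in angle brackets:

```shell
# Aggregate per-syscall call counts and total time from strace -t -T output.
# Lines look like: 1234 14:00:00 read(3, "a", 1) = 1 <0.000100>
summarize_strace() {
  awk '
    /<[0-9.]+>$/ {
      dur = $0; sub(/.*</, "", dur); sub(/>$/, "", dur)   # duration between < and >
      call = $0; sub(/\(.*/, "", call)                    # keep text up to the "("
      n = split(call, f, " "); name = f[n]                # last field is the syscall name
      total[name] += dur; count[name]++
    }
    END { for (s in total) printf "%s %d %.6f\n", s, count[s], total[s] }
  ' "$@"
}
```

Run it as `summarize_strace strace.log`; it prints one line per syscall: name, call count, and total seconds. Note the real analyzer handles cases this sketch does not, such as `-f` output with `<unfinished ...>`/`resumed` line pairs.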
// Obfuscation example: 'gfudi' is 'fetch' with every character code shifted
// up by one, so nothing greppable in the source names the call being made.
const i = 'gfudi';
const k = s => s.split('').map(c => String.fromCharCode(c.charCodeAt(0) - 1)).join('');
// k(i) === 'fetch', so this is self.fetch(urlWithYourPreciousData) in disguise.
self[k(i)](urlWithYourPreciousData);
---
- hosts: localhost
  connection: local
  tasks:
    - name: Load containers tags
      include_vars: "{{ item }}"
      with_items:
        - ../kubespray/roles/download/defaults/main.yml
        - ../kubespray/roles/kubernetes-apps/ansible/defaults/main.yml
        - ../var/images.yml
https://stackoverflow.com/questions/48993286/is-it-possible-to-route-traffic-to-a-specific-pod?rq=1
You can guarantee session affinity with Services, but not quite as you describe. Customers 1-1000 won't be pinned to pod-1; their requests are spread across all the pods (a Service does simple load balancing), but each individual customer, when they come back to hit your Service, will be redirected to the same pod.
Note: this only holds within the configured timeout (default 10800 seconds).
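To illustrate the answer above, a minimal Service manifest enabling ClientIP affinity might look like this (the names are placeholders; `sessionAffinityConfig.clientIP.timeoutSeconds` is where the 10800-second default lives in current Kubernetes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                   # placeholder name
spec:
  selector:
    app: my-app                  # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080
  sessionAffinity: ClientIP      # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # affinity window; 10800s (3h) is the default
```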