Registration
curl -v -X POST -H "Accept: application/xml" -H "Content-type: application/xml" --data @reg.xml http://localhost:8761/eureka/apps/test
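The reg.xml payload is a Eureka instance document; a minimal sketch of what it can look like (the hostname, IP, and port values here are illustrative):
<instance>
  <hostName>test-host</hostName>
  <app>TEST</app>
  <ipAddr>10.0.0.10</ipAddr>
  <vipAddress>test</vipAddress>
  <status>UP</status>
  <port enabled="true">8080</port>
  <dataCenterInfo>
    <name>MyOwn</name>
  </dataCenterInfo>
</instance>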
Listing
curl -X GET -H "Accept: application/json" http://localhost:8761/eureka/apps
defaults write com.apple.LaunchServices/com.apple.launchservices.secure LSHandlers -array-add \
'{LSHandlerContentType=public.plain-text;LSHandlerRoleAll=com.macromates.textmate.preview;}'
#!/usr/bin/env python
import boto.vpc
import time

REGION_NAME = 'us-west-2'
AMI_ID = 'ami-8e27adbe'  # Amazon Linux AMI

conn = boto.vpc.connect_to_region(REGION_NAME)

# Create a VPC (the CIDR block here is an example)
vpc = conn.create_vpc('10.0.0.0/16')
curl -k -X GET \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://$KUBERNETES_PORT_443_TCP_ADDR:$KUBERNETES_SERVICE_PORT_HTTPS
Let's take a look at how Kubernetes jobs are crafted. I had been jamming work-around shell scripts into the entrypoint* of some containers in the vnf-asterisk project that Leif and I have been working on. That's not ideal when we could be using Kubernetes jobs, or in their new parlance, "run to completion finite workloads" (I'll stick to calling them "jobs"). They're one-shot containers that do one thing, and then end (sort of like a systemd "oneshot" unit, at least how we'll use them today). I like the idea of using them to complete some service discovery for me when other pods are coming up. Today we'll fire up a pod, and spin up a job to discover that pod (by querying the API for info about it), and put that info into etcd. Let's get the job done.
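As a minimal sketch of the shape of a job manifest (the name, image, and command here are placeholders for the actual discovery logic):
apiVersion: batch/v1
kind: Job
metadata:
  name: discover-pod
spec:
  template:
    spec:
      containers:
      - name: discover
        image: centos:7
        command: ["/bin/sh", "-c", "echo 'query the API and write to etcd here'"]
      restartPolicy: Never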
This post also exists as a [gist on github](https
A tiny (265 byte) utility to create state machine components using two pure functions.
The API is a single function that accepts two pure functions as arguments:
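A sketch of the idea in TypeScript (hypothetical names, not the library's actual API): one pure function computes the next state, the other renders the current state.
// Hypothetical sketch: a state machine component from two pure functions.
type Transition<S, A> = (state: S, action: A) => S;
type View<S, A> = (state: S, dispatch: (action: A) => void) => string;

function machine<S, A>(transition: Transition<S, A>, view: View<S, A>) {
  return (initial: S) => {
    let state = initial;
    const dispatch = (action: A): void => {
      state = transition(state, action); // next state depends only on (state, action)
    };
    return { render: () => view(state, dispatch), dispatch };
  };
}

// Usage: a two-state toggle component.
const toggle = machine(
  (on: boolean, _action: "TOGGLE") => !on,
  (on: boolean, _dispatch: (a: "TOGGLE") => void) => `on=${on}`
)(false);

toggle.dispatch("TOGGLE");
console.log(toggle.render()); // "on=true"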
I had to capture kubelet systemd logs using Fluentd and send them to an Elasticsearch cluster.
I initially started off creating a custom Docker image with v0.12-debian-onbuild
as the base image, believing that I needed to install the fluentd systemd plugin as part of it. It turned out later, upon inspection, that there is already an image provided by fluent in the official repo, v0.12-debian-elasticsearch
(https://github.com/fluent/fluentd-kubernetes-daemonset), which includes the systemd plugin as part of the Docker image. Awesome!
Should have looked more closely earlier 🙂
Note: The Fluentd pod requires privileged access to allow it to read /var/log/journal, so you would have to use a SecurityContext for your Pod/container if you do decide to build a custom Docker image.
The next problem I faced: upon creating a Fluentd DaemonSet spec on Kubernetes, it still wouldn't read the logs from the journal.
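A minimal sketch of such a DaemonSet, assuming the official v0.12-debian-elasticsearch image (the names and namespace are illustrative), with the privileged securityContext and the journal mount wired up:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch
        securityContext:
          privileged: true          # required to read /var/log/journal
        volumeMounts:
        - name: journal
          mountPath: /var/log/journal
          readOnly: true
      volumes:
      - name: journal
        hostPath:
          path: /var/log/journal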
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
	"net/http"
)
It never used to be possible to get an A+ rating, as Java was missing a couple of the necessary features.
wget http://download.java.net/java/GA/jdk9/9/binaries/jdk-9+181_linux-x64_bin.tar.gz
/*
Copyright (c) 2017, Sky UK Ltd All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided
that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the
following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this list of conditions and
the following disclaimer in the documentation and/or other materials provided with the distribution.