This assumes we are running on Ubuntu. For background see:
- https://coreos.com/blog/running-kubernetes-example-on-CoreOS-part-1/
- https://coreos.com/blog/running-kubernetes-example-on-CoreOS-part-2/
- https://coreos.com/docs/quickstart/
- https://coreos.com/docs/running-coreos/platforms/vagrant/
cd /tmp
wget https://storage.googleapis.com/golang/go1.3.1.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.3.1.linux-amd64.tar.gz
See the Go documentation on setting the GOPATH environment variable.
mkdir $HOME/go
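With Go unpacked under /usr/local and $HOME/go used as the workspace (both assumptions based on the commands above), the environment setup amounts to something like the following, e.g. in ~/.bashrc:

# Assumes the Go toolchain is in /usr/local/go and the workspace is $HOME/go
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin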
go get github.com/tools/godep
go get github.com/coreos/etcd
It's important to check out version 0.4.6, because etcd master breaks Kubernetes.
cd $GOPATH/src/github.com/coreos/etcd
git checkout tags/v0.4.6
go install github.com/coreos/etcd
sudo ln -s "$GOPATH/bin/etcd" /usr/bin/etcd
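To make sure the pinned build is the one on your PATH (and not some other etcd), a quick sanity check:

# Should report version 0.4.6 and resolve to the symlink created above
etcd -version
which etcd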
mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/
cd $GOPATH/src/github.com/GoogleCloudPlatform/
git clone [email protected]:GoogleCloudPlatform/kubernetes.git
cd kubernetes
hack/local-up-cluster.sh
Then in another terminal:
export KUBERNETES_PROVIDER="local"
cluster/kubecfg.sh list /pods
cluster/kubecfg.sh list /services
cluster/kubecfg.sh list /replicationControllers
Don't believe the Kubernetes tutorial: you can't use port 8080 here because Kubernetes itself is already using it, so pick a different port.
cluster/kubecfg.sh -p 4871:80 run dockerfile/nginx 1 myNginx
cluster/kubecfg.sh list /pods
cluster/kubecfg.sh list /services
cluster/kubecfg.sh list /replicationControllers
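Once kubecfg lists the myNginx pod as Running, the container should be reachable through the host port mapped above (4871, per the example command; pulling the dockerfile/nginx image can take a minute or two):

# Quick smoke test against the mapped host port
curl http://127.0.0.1:4871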
I cannot create a replication controller with replica size greater than 1! What gives?
You are running a single minion setup. This has the limitation of only supporting a single replica of a given pod. If you are interested in running with larger replica sizes, we encourage you to try the local vagrant setup or one of the cloud providers.
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.6.5_x86_64.deb
sudo dpkg -i vagrant_1.6.5_x86_64.deb
git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant
cp user-data.sample user-data
cp config.rb.sample config.rb
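config.rb is also where the node count lives; the config.rb.sample shipped with coreos-vagrant exposes it as $num_instances, so a multi-node cluster is just a matter of uncommenting and bumping it (3 here is an arbitrary choice):

# In config.rb
$num_instances=3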
Note that you need to generate a new etcd discovery token. There are two ways to do this:

- You can do it manually via https://discovery.etcd.io/new. You need to place the token in user-data and repeat this every time you launch the cluster (see the sketch below).
- Alternatively, you can uncomment these lines in config.rb:
if File.exists?('user-data') && ARGV[0].eql?('up')
  require 'open-uri'
  require 'yaml'

  token = open('https://discovery.etcd.io/new').read

  data = YAML.load(IO.readlines('user-data')[1..-1].join)
  data['coreos']['etcd']['discovery'] = token
  lines = YAML.dump(data).split("\n")
  lines[0] = '#cloud-config'

  open('user-data', 'r+') do |f|
    f.puts(lines.join("\n"))
  end
end
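For the manual route in the first bullet, the token is just the URL returned by the discovery service; something like this, pasted into the coreos -> etcd -> discovery field of user-data, is all that's needed (and it must be redone for each fresh cluster):

# Print a fresh discovery URL, then paste it into user-data by hand
curl -s https://discovery.etcd.io/new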
vagrant up
Note that Vagrant copies user-data to /var/lib/coreos-vagrant/vagrantfile-user-data inside each VM.
ssh-add ~/.vagrant.d/insecure_private_key
vagrant ssh core-01 -- -A
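Once logged in, you can confirm the nodes actually found each other through the discovery token; etcdctl and fleetctl ship with CoreOS (this assumes the sample user-data, which starts etcd and fleet, and with a single instance fleetctl will list only core-01):

# Run on core-01 after logging in
etcdctl ls /
fleetctl list-machines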