0. Create keypair
openstack --insecure keypair create feilong --public-key ~/.ssh/id_rsa.pub
1. Upload image
scp ~/Downloads/Fedora-Atomic-26-20170723.0.x86_64.qcow2 [email protected]:~/
2. Create image (NOTE: be careful with the disk format)
openstack --insecure image create fedora-atomic --disk-format qcow2 --container-format bare --property os_distro=fedora-atomic --file ./Fedora-Atomic-26-20170723.0.x86_64.qcow2
3. Create cluster template
openstack coe cluster template create k8s-fc27-v1.12.7 --keypair feilong --flavor ds1G --master-flavor ds2G --coe kubernetes --external-network public --network-driver calico --docker-storage-driver=overlay2 --image=<> --labels=etcd_volume_size=5,kube_tag=v1.12.7
openstack coe cluster template create k8s-fc27-v1.13.10 --keypair feilong --flavor c1.c2r4 --master-flavor c1.c2r4 --coe kubernetes --external-network public --network-driver calico --docker-storage-driver=overlay2 --image=<> --labels=container_infra_prefix=docker.io/catalystcloud/,cloud_provider_enabled=true,cloud_provider_tag=1.14.0-catalyst,kube_tag=v1.13.10,ingress_controller=octavia,octavia_ingress_controller_tag=1.14.0-catalyst,heat_container_agent_tag=stein-stable,etcd_volume_size=20,prometheus_monitoring=true,keystone_auth_enabled=True,k8s_keystone_auth_tag=v1.15.0,auto_scaling_enabled=False,auto_healing_enabled=True,auto_healing_controller=magnum-auto-healer,magnum_auto_healer_tag=v1.15.0 --public
4. Set password for fedora image
Add the following cloud-config to the templates under /opt/cat/openstack/magnum/lib/python2.7/site-packages/magnum/drivers/common/templates/kubernetes/fragments:
#cloud-config
password: atomic
ssh_pwauth: True
chpasswd: { expire: False }
5. Create cluster
openstack --insecure coe cluster create --name k8scluster --cluster-template k8s --node-count 1 --timeout 120
(22:27:29) strigazi: openstack stack update <stack_id> --existing -P minions_to_remove=<comma separated list with resource ids or private ips> -P number_of_minions=<integer>
(22:28:31) strigazi: the resource id can be found either from the name of the vms or by doing openstack stack resource list -n 2 <stack_id>
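A concrete sketch of that scale-down (the stack name, resource index, and node counts below are made up for illustration):

```shell
# List the nested stack resources to find the minion ids/indexes.
openstack stack resource list -n 2 mystack-abc123 | grep minion

# Remove minion index 1 and shrink the group from 3 to 2 nodes.
openstack stack update mystack-abc123 --existing \
  -P minions_to_remove=1 \
  -P number_of_minions=2
```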
6. VNC
Run "ssh -N -L 5900:10.0.0.81:5901 [email protected]" on the host, then run "gvncviewer 127.0.0.1" to start VNC.
7. All cluster information is in /etc/sysconfig/heat-params; it can be sourced, then run the scripts under /var/lib/cloud/instances/<instance_id>/scripts to debug.
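That debug loop looks roughly like this (the fragment name `part-005` and the variable inspected are illustrative; actual names vary per deployment):

```shell
# Source the Heat parameters so the fragment scripts see the same
# environment they had under cloud-init.
source /etc/sysconfig/heat-params
echo "$KUBE_API_PORT"          # illustrative: inspect any heat-params variable

# Re-run one deployment fragment by hand to debug it.
cd /var/lib/cloud/instances/<instance_id>/scripts
sudo bash ./part-005           # illustrative fragment name
```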
8. Run 'journalctl -u heat-container-agent --no-pager' to check the heat agent's software deployment logs.
9. Busybox
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
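One way to use that pod (assuming it is saved as busybox.yaml) is as a quick in-cluster DNS check:

```shell
kubectl apply -f busybox.yaml
kubectl exec busybox -- nslookup kubernetes.default
kubectl delete pod busybox
```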
10. Error found in devstack
{u'message': u'Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance 5bcaf6ce-3dde-43d2-a1ae-c9a217d7f658.', u'code': 500, u'details': u' File "/opt/stack/nova/nova/conductor/manager.py", line 578, in build_instances\n raise exception.MaxRetriesExceeded(reason=msg)\n', u'created': u'2018-02-22T00:52:15Z'}
This happens because Nova can't get an IP assigned from Neutron; the workaround is to restart all the Neutron services with 'systemctl restart devstack@q-*'.
11. dashboard logs
[fedora@k8scluster-sc7jximdohmh-master-0 kubernetes]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default alpine-1-75b69bb5cc-8k9qs 1/1 Running 0 34m 192.168.160.139 k8scluster-sc7jximdohmh-minion-0
default alpine-2-8699dd9657-ffc7x 1/1 Running 0 34m 192.168.160.138 k8scluster-sc7jximdohmh-minion-0
default alpine-3-7d56d867df-vcj2v 1/1 Running 0 35m 192.168.160.137 k8scluster-sc7jximdohmh-minion-0
default alpine-4-5f97677c-4nh4q 1/1 Running 0 35m 192.168.160.136 k8scluster-sc7jximdohmh-minion-0
default alpine-56cb6f6969-2bssh 1/1 Running 0 8h 192.168.160.131 k8scluster-sc7jximdohmh-minion-0
kube-system calico-kube-controllers-7c5c6f69c7-hx77z 1/1 Running 0 10h 10.0.0.4 k8scluster-sc7jximdohmh-minion-0
kube-system calico-node-dktcd 2/2 Running 0 33m 10.0.0.11 k8scluster-sc7jximdohmh-master-0
kube-system calico-node-n85r6 2/2 Running 0 10h 10.0.0.4 k8scluster-sc7jximdohmh-minion-0
kube-system coredns-5864cfd79d-f2wjr 1/1 Running 37 10h 192.168.160.130 k8scluster-sc7jximdohmh-minion-0
kube-system heapster-68b976dd7-kfslh 1/1 Running 0 10h 192.168.160.129 k8scluster-sc7jximdohmh-minion-0
kube-system kubernetes-dashboard-846b8b6844-mchtk 1/1 Running 54 10h 192.168.160.128 k8scluster-sc7jximdohmh-minion-0
[fedora@k8scluster-sc7jximdohmh-master-0 kubernetes]$ kubectl logs kubernetes-dashboard-846b8b6844-mchtk -n kube-system
2018/03/05 11:19:48 Starting overwatch
2018/03/05 11:19:48 Using in-cluster config to connect to apiserver
2018/03/05 11:19:48 Using service account token for csrf signing
2018/03/05 11:19:48 No request provided. Skipping authorization
2018/03/05 11:19:49 Successful initial request to the apiserver, version: v1.9.3
2018/03/05 11:19:49 Generating JWE encryption key
2018/03/05 11:19:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/03/05 11:19:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/03/05 11:19:49 Initializing JWE encryption key from synchronized object
2018/03/05 11:19:49 Creating remote Heapster client for heapster:80
2018/03/05 11:19:49 Auto-generating certificates
2018/03/05 11:19:49 Successfully created certificates
2018/03/05 11:19:49 Serving securely on HTTPS port: 8443
2018/03/05 11:19:49 Successful request to heapster
[fedora@k8scluster-sc7jximdohmh-master-0 kubernetes]$
12. dns logs
[fedora@k8scluster-sc7jximdohmh-master-0 kubernetes]$ kubectl logs coredns-5864cfd79d-f2wjr -n kube-system
.:53
2018/03/05 11:04:26 [INFO] CoreDNS-1.0.1
2018/03/05 11:04:26 [INFO] linux/amd64, go1.9.2, 99e163c3
CoreDNS-1.0.1
linux/amd64, go1.9.2, 99e163c3
[fedora@k8scluster-sc7jximdohmh-master-0 kubernetes]$
openstack coe cluster template create kubernetes-v1.11.2-production --master-flavor c1.c2r4 --flavor c1.c4r8 --coe kubernetes --external-network public --network-driver calico --docker-storage-driver overlay2 --dns-nameserver <see region's name server> --volume-driver cinder --master-lb-enabled --labels etcd_volume_size=10,kube_tag=v1.11.2-1,prometheus_monitoring=True --image $(openstack image show fedora-atomic-27-x86_64 -c id -f value)
openstack coe cluster template create kubernetes-v1.11.2-development --master-flavor c1.c2r2 --flavor c1.c2r2 --coe kubernetes --external-network public --network-driver calico --docker-storage-driver overlay2 --volume-driver cinder --dns-nameserver <see region's name server> --labels kube_tag=v1.11.2-1 --image $(openstack image show fedora-atomic-27-x86_64 -c id -f value)
+-----------------------+--------------------------------------------------------------+
| Field                 | Value                                                        |
+-----------------------+--------------------------------------------------------------+
| insecure_registry     | -                                                            |
| labels                | {u'octavia_enabled': u'true', u'etcd_volume_size': u'8',     |
|                       | u'dns_service_ip': u'REDACTED', u'kube_tag': u'v1.12.0',     |
|                       | u'portal_network_cidr': u'REDACTED',                         |
|                       | u'container_infra_prefix': u'REDACTED',                      |
|                       | u'calico_ipv4pool': u'REDACTED'}                             |
| updated_at            | -                                                            |
| floating_ip_enabled   | False                                                        |
| fixed_subnet          | REDACTED                                                     |
| master_flavor_id      | m4.large                                                     |
| uuid                  | REDACTED                                                     |
| no_proxy              | -                                                            |
| https_proxy           | -                                                            |
| tls_disabled          | False                                                        |
| keypair_id            | -                                                            |
| public                | False                                                        |
| http_proxy            | -                                                            |
| docker_volume_size    | -                                                            |
| server_type           | vm                                                           |
| external_network_id   | REDACTED                                                     |
| cluster_distro        | fedora-atomic-v3                                             |
| image_id              | fedora-atomic-27-dev                                         |
| volume_driver         | -                                                            |
| registry_enabled      | False                                                        |
| docker_storage_driver | overlay2                                                     |
| apiserver_port        | -                                                            |
| name                  | k8s-v1.12.0-octavia                                          |
| created_at            | 2018-09-28T15:23:58+00:00                                    |
| network_driver        | calico                                                       |
| fixed_network         | REDACTED                                                     |
| coe                   | kubernetes                                                   |
| flavor_id             | m4.large                                                     |
| master_lb_enabled     | True                                                         |
| dns_nameserver        | REDACTED                                                     |
+-----------------------+--------------------------------------------------------------+
openstack coe cluster template create kubernetes-v1.11.2-development --master-flavor c1.c2r2 --flavor c1.c2r2 --coe kubernetes --external-network aaa --network-driver calico --docker-storage-driver overlay2 --volume-driver cinder --dns-nameserver 202.78.240.215 --image 83833f4f-5d09-44cd-9e23-b0786fc580fd --labels kube_tag=v1.11.2-1
Production problems:
- Sandbox start failed
- Stack delete failed with error "Resource DELETE failed: Conflict: resources.network.resources.private_subnet: Unable to complete operation on subnet 0b65ff86-13a5-460f-96c3-d3b20377df60. One or more ports have an IP allocation from this subnet."
- Create LB service failed

Dashboard URL (assumes kubectl proxy is running locally on its default port 8001):
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
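For the subnet-conflict stack delete above, a hedged workaround sketch: find the ports still holding an address on the subnet, confirm they are orphaned, and remove them before retrying the delete (the subnet id is the one from the error message):

```shell
# List ports that still have an IP allocation on the subnet.
openstack port list --fixed-ip subnet=0b65ff86-13a5-460f-96c3-d3b20377df60

# After confirming a port has no live owner (VM, LB, router), remove it:
# openstack port delete <port_id>
```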