OpenShift Origin - Platform as a Service (PaaS) - Application Container Platform Solution : Quick start (IEP-AED)
- Introduction
- Installation Options
- Option 1: OpenShift Installation (All-in-one) using Vagrant
- Option 2: OpenShift installation (All-in-one) using Docker
- Option 3: OpenShift Installation using OpenShift provided binaries (Server & Client)
- Setting up environment variables and other pre-login activities
- Login into OpenShift Cluster Server
- Deploy Integrated Docker Registry
- Loading the Default Image Streams and Templates
- Configure and Deploy Router
- FIS on OpenShift using S2I
- FIS on OpenShift using Fabric8 Maven workflow
- OpenShift Installation using Container Development Kit (CDK)
- Miscellaneous
- OpenShift is an application container platform solution, Platform as a Service (PaaS).
- Built around a core of Docker container packaging (containerization), Kubernetes container cluster management, and etcd distributed key-value storage.
- Includes a DevOps solution (for Java).
- Provides both cloud and on-premise container platform solution (PaaS).
- Written in Go and AngularJS.
- Supports integration with IDEs.
- A software containerization platform.
- Packages your application into a standardized unit for software development.
- Wraps a piece of software in a complete file system that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
- An open-source distributed key-value data store that provides a reliable way to store data across a cluster of machines.
- From Kubernetes/OpenShift perspective, etcd is the backend for service discovery and stores cluster state and configuration.
- An open-source system for automating deployment, scaling, and management of containerized applications.
- Groups containers that make up an application into logical units for easy management and discovery.
- Cloud
- Private
- Public
- On-premise
Vagrant (by HashiCorp) creates and configures lightweight, reproducible, and portable development environments.
VirtualBox (by Oracle) is a cross-platform virtualization product.
- Operating System: Ubuntu-14.10 (64 bit)
Use Vagrant (by HashiCorp) to create and configure lightweight, reproducible, and portable development environments.
sudo dpkg -i virtualbox-5.1_5.1.14-112924-Ubuntu-trusty_amd64.deb
sudo dpkg -i vagrant_1.9.1_x86_64.deb
mkdir -p /opt/openshift/all-in-one-demo
cd /opt/openshift/all-in-one-demo
vagrant init openshift/origin-all-in-one
vagrant up --provider virtualbox
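Once `vagrant up` completes, it is worth sanity-checking the VM before going further; a sketch using standard Vagrant commands:

```shell
# Confirm the all-in-one VM is running, then open a shell inside it
vagrant status
vagrant ssh
```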
sudo apt-get install apt-transport-https ca-certificates
curl -fsSL https://yum.dockerproject.org/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb https://apt.dockerproject.org/repo/ \
ubuntu-$(lsb_release -cs) \
main"
sudo apt-get update
sudo apt-get -y install docker-engine
sudo docker run hello-world
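Beyond the hello-world container, the engine installation can be verified directly; a sketch, assuming the daemon from the step above is running:

```shell
sudo docker version   # client and server versions should both be reported
sudo docker info      # daemon-wide details: storage driver, registries, etc.
```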
This step pulls the "openshift/origin" image from the Docker registry and creates a container from it named "openshiftorigin".
export OPENSHIFT_DOC_CONTAINER_NAME=openshiftorigin
sudo docker run -d \
--name "$OPENSHIFT_DOC_CONTAINER_NAME" \
--privileged \
--pid=host \
--net=host \
--restart=always \
-v /:/rootfs:ro \
-v /var/run:/var/run:rw \
-v /dev:/dev \
-v /sys:/sys:ro \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
-v /var/lib/kubelet/:/var/lib/kubelet:rw \
openshift/origin start | tee Openshift_Doc_Container_Id
Note: The Docker-generated container id can be retrieved from the Openshift_Doc_Container_Id file later.
Export the generated container id: export OPENSHIFT_DOC_CONTAINER_ID=$(cat Openshift_Doc_Container_Id)
docker start $OPENSHIFT_DOC_CONTAINER_NAME
Start a Docker daemon instance manually if it was not started at boot:
nohup docker daemon \
--insecure-registry="172.30.56.30:5000" --insecure-registry 172.30.0.0/24 \
--iptables=true \
--debug=true \
--log-level=debug \
--icc=true &
oc get ep -o "jsonpath=https://{$.items[?(@.metadata.name==\"kubernetes\")].subsets[0].addresses[0].ip}:{$.items[?(@.metadata.name==\"kubernetes\")].subsets[0].ports[?(@.name==\"https\")].port}"
Sample Output: https://192.168.56.101:8443
You can assign this to a variable for future reference:
export CLUSTER_API_SERVER_URL=`oc get ep -o "jsonpath=https://{$.items[?(@.metadata.name==\"kubernetes\")].subsets[0].addresses[0].ip}:{$.items[?(@.metadata.name==\"kubernetes\")].subsets[0].ports[?(@.name==\"https\")].port}"`
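If you later need the host and port separately (e.g. for the oc login examples in this guide), they can be split off with plain shell parameter expansion; a sketch, falling back to the sample URL above when the variable is unset:

```shell
# Fall back to the sample URL from above when the variable is not set
CLUSTER_API_SERVER_URL=${CLUSTER_API_SERVER_URL:-https://192.168.56.101:8443}
hostport=${CLUSTER_API_SERVER_URL#https://}   # strip the scheme -> 192.168.56.101:8443
CLUSTER_API_HOST=${hostport%%:*}              # 192.168.56.101
CLUSTER_API_PORT=${hostport##*:}              # 8443
echo "$CLUSTER_API_HOST $CLUSTER_API_PORT"
```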
cd /opt/openshift/lib
gunzip openshift-origin-server-v1.4.1-3f9807a-linux-64bit.tar.gz
tar -xvf openshift-origin-server-v1.4.1-3f9807a-linux-64bit.tar
mv openshift-origin-server-v1.4.1-3f9807a-linux-64bit /opt/openshift/lib/openshift-server
export OPENSHIFT_SERVER_INSTALLATION_DIR=/opt/openshift/lib/openshift-server
cd $OPENSHIFT_SERVER_INSTALLATION_DIR
cd $OPENSHIFT_SERVER_INSTALLATION_DIR
nohup ./openshift start &
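With nohup, the server logs go to nohup.out in the current directory; tailing it and probing the health endpoint is a quick way to confirm the server came up. A sketch; /healthz and port 8443 are the OpenShift defaults:

```shell
tail -n 50 nohup.out                    # inspect startup logs
curl -k https://localhost:8443/healthz  # a healthy server answers this endpoint
```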
Ref Start Docker Daemon Manually
If you chose Option 2:
export OPENSHIFT_MASTER_DIR=/var/lib/docker/aufs/diff/$OPENSHIFT_DOC_CONTAINER_ID/var/lib/origin/openshift.local.config/master
If you chose Option 3:
export OPENSHIFT_MASTER_DIR=$OPENSHIFT_SERVER_INSTALLATION_DIR/openshift.local.config/master
Switch to OpenShift master directory:
cd $OPENSHIFT_MASTER_DIR
Set the KUBECONFIG and CURL_CA_BUNDLE environment variables if you would like to log in to OpenShift as "system:admin".
export KUBECONFIG=$OPENSHIFT_MASTER_DIR/admin.kubeconfig
export CURL_CA_BUNDLE=$OPENSHIFT_MASTER_DIR/ca-bundle.crt
chmod +r $KUBECONFIG
Log in to the OpenShift server as system admin (cluster scoped): oc login -u system:admin -n default
The user "system:admin" is a highly privileged identity with cluster scope. It can create and access all resources in cluster scope or project scope, and is typically used for granting access to cluster-scoped or cross-project objects.
Log in to OpenShift as a regular user (project scoped):
oc login -u configadmin -p configadmin https://192.168.56.101:8443
where https://192.168.56.101:8443 is the OpenShift cluster API server URL.
Add the "privileged" Security Context Constraint (SCC) to the "registry" service account:
oadm policy add-scc-to-user privileged system:serviceaccount:default:registry
Add the "registry-editor", "image-puller", "image-pusher", and "image-builder" roles to the "registry" service account:
oadm policy add-role-to-user registry-editor system:serviceaccount:default:registry -n default
oadm policy add-role-to-user system:image-puller system:serviceaccount:default:registry -n default
oadm policy add-role-to-user system:image-pusher system:serviceaccount:default:registry -n default
oadm policy add-role-to-user system:image-builder system:serviceaccount:default:registry -n default
This command creates the integrated Docker registry (service, deployment config, service account, etc.):
oadm registry --service-account='registry' \
--images='openshift/origin-${component}:${version}' --mount-host=/tmp
oc get svc/docker-registry -n default -o 'jsonpath={.spec.clusterIP}:{.spec.ports[0].port}'
Sample Output: 172.30.56.30:5000
You can assign this to a variable for future reference:
export DOCKER_REGISTRY_HOST=`oc get svc/docker-registry -n default -o 'jsonpath={.spec.clusterIP}'`
export DOCKER_REGISTRY_PORT=`oc get svc/docker-registry -n default -o 'jsonpath={.spec.ports[0].port}'`
export DOCKER_REGISTRY_HOST_AND_PORT=`oc get svc/docker-registry -n default -o 'jsonpath={.spec.clusterIP}:{.spec.ports[0].port}'`
oc describe svc/docker-registry
http://172.17.0.3:5000/healthz
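The health endpoint can also be probed from the command line; a sketch, where the small helper just builds the URL (the pod IP 172.17.0.3 above is one example — the service address from $DOCKER_REGISTRY_HOST_AND_PORT works too):

```shell
# Build the registry health-check URL for a given host:port
registry_healthz_url() { echo "http://$1/healthz"; }

url=$(registry_healthz_url "${DOCKER_REGISTRY_HOST_AND_PORT:-172.30.56.30:5000}")
echo "$url"
# curl -s -o /dev/null -w '%{http_code}\n' "$url"   # a healthy registry answers 200
```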
oadm ca create-server-cert \
--signer-cert=$OPENSHIFT_MASTER_DIR/ca.crt \
--signer-key=$OPENSHIFT_MASTER_DIR/ca.key \
--signer-serial=$OPENSHIFT_MASTER_DIR/ca.serial.txt \
--hostnames="docker-registry.default.svc.cluster.local,$DOCKER_REGISTRY_HOST" \
--cert=/etc/secrets/registry.crt \
--key=/etc/secrets/registry.key
oc secrets new registry-secret \
/etc/secrets/registry.crt \
/etc/secrets/registry.key
oc secrets add serviceaccounts/registry secrets/registry-secret
oc secrets add serviceaccounts/default secrets/registry-secret
oc volume dc/docker-registry --add --type=secret \
--secret-name=registry-secret -m /etc/secrets
oc env dc/docker-registry \
REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{
"name":"registry",
"livenessProbe": {"httpGet": {"scheme":"HTTPS"}},
"readinessProbe": {"httpGet": {"scheme":"HTTPS"}}
}]}}}}'
oc logs dc/docker-registry | grep tls
cd $OPENSHIFT_MASTER_DIR
dcertsdir=/etc/docker/certs.d
destdir_addr=$dcertsdir/$DOCKER_REGISTRY_HOST_AND_PORT
destdir_name=$dcertsdir/docker-registry.default.svc.cluster.local:$DOCKER_REGISTRY_PORT
sudo mkdir -p $destdir_addr $destdir_name
sudo cp ca.crt $destdir_addr
sudo cp ca.crt $destdir_name
Log in as system admin and do the following:
oc project openshift
cd ~
git clone https://github.com/openshift/openshift-ansible
cd ~/openshift-ansible/roles/openshift_examples/files/examples/latest
for fileName in `find . -name '*.json'`
do
oc create -f $fileName -n openshift
done
rm -rf ~/openshift-ansible
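Once the loop finishes, the loaded objects can be listed to confirm the import worked (assumes you are still logged in as system admin):

```shell
oc get imagestreams -n openshift
oc get templates -n openshift
```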
The OpenShift router (load balancer) is the ingress point for all external traffic destined for OpenShift services. OpenShift supports the following two router plug-ins:
- HAProxy template router (openshift3/ose-haproxy-router) - Default router choice.
- F5 router - Integrates with an existing F5 BIG-IP
Here we create the default (HAProxy) router. Log in as system admin and execute the following commands:
oc project default
echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -
oadm policy add-cluster-role-to-user \
cluster-reader \
system:serviceaccount:default:router
oadm policy add-scc-to-user privileged system:serviceaccount:default:router
oadm router --dry-run \
--service-account=router
oadm router router --replicas=1 --service-account=router
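A quick check that the router rolled out (assumes the default project and the service account created above):

```shell
oc get dc/router -n default   # deployment config status
oc get pods -n default        # the router pod should reach Running state
```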
oc new-project fis-openshift-s2i --display-name="FIS on OpenShift using S2I" --description="FIS on OpenShift using S2I"
oadm policy add-role-to-user admin configadmin -n fis-openshift-s2i
curl https://github.com/dhana-git/fis-openshift-s2i-karaf-camel-qs/blob/master/quickstart-template.json > /opt/openshift/templates/fis-openshift-s2i-karaf-camel-qs-quickstart-template.json
oc create -f /opt/openshift/templates/fis-openshift-s2i-karaf-camel-qs-quickstart-template.json -n fis-openshift-s2i
oc new-project fis-openshift-fabric8-maven --display-name="FIS on OpenShift using Fabric8 Maven workflow" --description="FIS on OpenShift using Fabric8 Maven workflow"
oadm policy add-role-to-user admin configadmin -n fis-openshift-fabric8-maven
oadm policy add-role-to-user system:image-pusher configadmin -n fis-openshift-fabric8-maven
oadm policy add-role-to-user system:image-puller configadmin -n fis-openshift-fabric8-maven
oadm policy add-role-to-user system:image-builder configadmin -n fis-openshift-fabric8-maven
#oadm policy add-scc-to-user privileged configadmin -n fis-openshift-fabric8-maven
cd /opt/openshift/all-in-one-demo
git clone https://github.com/dhana-git/fis-openshift-s2i-karaf-camel-qs.git
cd /opt/openshift/all-in-one-demo/fis-openshift-s2i-karaf-camel-qs
export DOCKER_HOST=unix:///var/run/docker.sock
export KUBERNETES_MASTER=$CLUSTER_API_SERVER_URL
export KUBERNETES_DOMAIN=poc.openshift.dev
oc login -u <username> -p <password/token> -n <OPENSHIFT_NAMESPACE> <https://CLUSTER_API_HOST:8443>
E.g.: oc login -u configadmin -p configadmin -n fis-openshift-fabric8-maven $CLUSTER_API_SERVER_URL
mvn -Pf8-deploy \
-Ddocker.pull.registry=<Docker registry host to pull images from> \
-Ddocker.push.registry=<Docker registry host to push images to> \
-Ddocker.push.username=<OpenShift username> \
-Ddocker.push.password=<OpenShift user password/token> \
-Dopenshift.project.name=<OpenShift project/namespace>
E.g.:
mvn -Pf8-deploy \
-Ddocker.pull.registry=registry.access.redhat.com \
-Ddocker.push.registry=$(oc get svc/docker-registry -n default -o 'jsonpath={.spec.clusterIP}:{.spec.ports[0].port}') \
-Ddocker.push.username=$(oc whoami) \
-Ddocker.push.password=$(oc whoami -t) \
-Dopenshift.project.name=$(oc project -q)
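After the build completes, the deployed objects can be inspected in the target project:

```shell
oc get pods -n $(oc project -q)
oc get svc,routes -n $(oc project -q)
```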
The CDK is designed to simplify the configuration and setup of Linux container development environments and provides an on-ramp to building container-based applications. Using Vagrant, open-source software for creating, configuring, and deploying virtual environments, the CDK enables developers on Microsoft Windows, Mac OS X, and Linux to more quickly create containerized applications for deployment on Red Hat certified container hosts.
Ref [Install Vagrant](#install-vagrant)
mkdir -p /cygdrive/c/Users/C241251/LLY/NGIF/Workspace/openshift-poc/lib
export OPENSHIFT_WORK_DIR=/cygdrive/c/Users/C241251/LLY/NGIF/Workspace/openshift-poc
export HOST_USER_HOME=/cygdrive/c/Users/C241251
cd $OPENSHIFT_WORK_DIR
mv /tmp/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-virtualbox.box $OPENSHIFT_WORK_DIR/lib/
mv /tmp/cdk-2.1.0.zip $OPENSHIFT_WORK_DIR/lib/
mv /tmp/Vagrantfile $OPENSHIFT_WORK_DIR/lib/
Switch to the project's root dir:
cd $OPENSHIFT_WORK_DIR
vagrant init
vagrant box add --name cdkv2 ./lib/rhel-cdk-kubernetes-7.2*.x86_64.vagrant-virtualbox.box
cd $OPENSHIFT_WORK_DIR/lib/cdk-2.1.0/plugins
vagrant-registration plugin (for guest registration):
vagrant plugin install vagrant-registration-1.2.2.gem
vagrant-service-manager plugin (obtain information about Docker, Kubernetes, OpenShift services):
vagrant plugin install vagrant-service-manager-1.1.0.gem
vagrant-sshfs plugin (shared/synchronized folders using SSHFS):
vagrant plugin install vagrant-sshfs-1.1.0.gem
CDK root dir:
export CDK_ROOT=$OPENSHIFT_WORK_DIR/lib/cdk-2.1.0
OpenShift-Vagrant dir:
export CDK_OSE_DIR=$OPENSHIFT_WORK_DIR/lib/cdk-2.1.0/components/rhel/rhel-ose
cd $CDK_OSE_DIR
mv Vagrantfile Vagrantfile.orig
cp $OPENSHIFT_WORK_DIR/lib/Vagrantfile $CDK_OSE_DIR/
vi $HOST_USER_HOME/.vagrant.d/Vagrantfile
vagrant up
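Once the box is up, the vagrant-service-manager plugin installed above can print connection details for the services running inside it; a sketch, with CDK 2.x command syntax assumed:

```shell
vagrant service-manager env docker     # DOCKER_HOST and TLS settings for the VM's daemon
vagrant service-manager env openshift  # OpenShift API/console connection details
```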
oc logout
oc status -v
oc whoami -t
oc projects
oc project
oc project iep-aed-fuse-proj
oc describe svc/docker-registry
oc delete all -l app=iep-aed-fuse-service
oadm diagnostics
docker top openshiftorigin
docker ps -a -q
docker stop $(docker ps -a -q)
docker restart $(docker ps -a -q)
docker rm -f 6cf3469fb728
docker rm -f $(docker ps -a -q)
docker rm $(docker ps -aq --filter status=exited)
docker rmi -f $(docker images -q)
kubectl get nodes
oc describe nodes
kubectl get pods --show-all
oc describe pods
oadm manage-node --list-pods <node-name>
oadm manage-node --list-pods mayans-virtualbox
vi /etc/default/docker
DOCKER_OPTS="--insecure-registry="172.30.143.135:5000" --insecure-registry 172.30.0.0/24 --iptables=true --debug=true --log-level=debug --icc=true"
systemctl start docker
systemctl restart docker
systemctl enable docker
systemctl daemon-reload