@dhana-git
Last active February 23, 2017 07:12
OpenShift Origin - Platform as a Service (PaaS) - Application Container Platform Solution : Quick start (IEP-AED)

Introduction

  • OpenShift is an application container platform, delivered as a Platform as a Service (PaaS).
  • Built around a core of Docker container packaging (containerization), Kubernetes container cluster management, and etcd distributed key-value storage.
  • Its DevOps tooling (for Java) includes:
    • Jenkins for Continuous Integration (CI).
    • Git for source code management and version control (SCM).
    • Maven for dependency and build management.
  • Provides both cloud and on-premise container platform (PaaS) solutions.
  • Written in Go and AngularJS.
  • Supports integration with IDEs.

Docker

  • A software containerization platform.
  • Packages your application into a standardized unit for software development.
  • Wraps a piece of software in a complete file system that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

etcd

  • An open-source distributed key-value data store that provides a reliable way to store data across a cluster of machines.
  • From Kubernetes/OpenShift perspective, etcd is the backend for service discovery and stores cluster state and configuration.

Kubernetes

  • An open-source system for automating deployment, scaling, and management of containerized applications.
  • Groups containers that make up an application into logical units for easy management and discovery.

Installation Options

Vagrant

Vagrant (by HashiCorp) creates and configures lightweight, reproducible, and portable development environments.

VirtualBox

VirtualBox (by Oracle) is a cross-platform virtualization product.

OS and Other Software Package Details

  • Operating System: Ubuntu-14.10 (64 bit)

Option 1: OpenShift Installation (All-in-one) using Vagrant

Use Vagrant to create and configure a lightweight, reproducible all-in-one OpenShift development environment.

Install VirtualBox

sudo dpkg -i virtualbox-5.1_5.1.14-112924-Ubuntu-trusty_amd64.deb

Install Vagrant

sudo dpkg -i vagrant_1.9.1_x86_64.deb

Initialize Vagrant project

mkdir -p /opt/openshift/all-in-one-demo
cd /opt/openshift/all-in-one-demo
vagrant init openshift/origin-all-in-one

Bring up OpenShift

vagrant up --provider virtualbox

Option 2: OpenShift installation (All-in-one) using Docker

Install Docker

Install Dependencies
sudo apt-get install apt-transport-https ca-certificates
Add apt-key and update repo
curl -fsSL https://yum.dockerproject.org/gpg | sudo apt-key add -

sudo add-apt-repository \
    "deb https://apt.dockerproject.org/repo/ \
    ubuntu-$(lsb_release -cs) \
    main"

sudo apt-get update
Install Docker Engine
sudo apt-get -y install docker-engine
Verify Docker installation by running hello-world image
sudo docker run hello-world

Install "openshift/origin" image on Docker

This step pulls the "openshift/origin" image from the Docker registry and runs it in a container named "openshiftorigin".

export OPENSHIFT_DOC_CONTAINER_NAME=openshiftorigin

sudo docker run -d \
--name "$OPENSHIFT_DOC_CONTAINER_NAME" \
--privileged \
--pid=host \
--net=host \
--restart=always \
-v /:/rootfs:ro \
-v /var/run:/var/run:rw \
-v /dev:/dev \
-v /sys:/sys:ro \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
-v /var/lib/kubelet/:/var/lib/kubelet:rw \
openshift/origin start | tee Openshift_Doc_Container_Id

Note: The Docker-generated container id can be retrieved later from the Openshift_Doc_Container_Id file.

Export generated container id: export OPENSHIFT_DOC_CONTAINER_ID=$(cat Openshift_Doc_Container_Id)
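The capture-and-export pattern above can be exercised without a cluster. In this sketch a plain echo stands in for `docker run -d ... | tee`, which prints the new container id to stdout while saving it to the file; the id value here is made up:

```shell
# 'echo' stands in for 'docker run -d ... | tee', which writes the new
# container id both to stdout and to the file. The id is a made-up sample.
echo "3f9807aabbcc" | tee Openshift_Doc_Container_Id

# Re-read the saved id later, exactly as in the step above.
export OPENSHIFT_DOC_CONTAINER_ID=$(cat Openshift_Doc_Container_Id)
echo "$OPENSHIFT_DOC_CONTAINER_ID"

rm Openshift_Doc_Container_Id   # clean up the demo file
```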

Start Docker container ("openshiftorigin")

docker start $OPENSHIFT_DOC_CONTAINER_NAME

Start Docker Daemon Manually

Start a Docker daemon instance manually if one was not started at boot:

nohup docker daemon \
--insecure-registry="172.30.56.30:5000" --insecure-registry 172.30.0.0/24 \
--iptables=true \
--debug=true \
--log-level=debug \
--icc=true &
HINT: To get OpenShift/Kubernetes cluster server API URL
oc get ep -o "jsonpath=https://{$.items[?(@.metadata.name==\"kubernetes\")].subsets[0].addresses[0].ip}:{$.items[?(@.metadata.name==\"kubernetes\")].subsets[0].ports[?(@.name==\"https\")].port}"

Sample Output: https://192.168.56.101:8443

You can assign this to a variable for future reference:

export CLUSTER_API_SERVER_URL=`oc get ep -o "jsonpath=https://{$.items[?(@.metadata.name==\"kubernetes\")].subsets[0].addresses[0].ip}:{$.items[?(@.metadata.name==\"kubernetes\")].subsets[0].ports[?(@.name==\"https\")].port}"`
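The jsonpath expression above simply digs the first address IP and the https port out of the "kubernetes" endpoint object. A rough standalone illustration of the same extraction, using sed against a canned, abbreviated endpoint document (the IP and port are the sample values above, not live cluster data):

```shell
# Canned, abbreviated 'oc get ep -o json' style output; the IP/port
# are the sample values from above, not live data.
epfile=$(mktemp)
cat > "$epfile" <<'EOF'
{"items":[{"metadata":{"name":"kubernetes"},
 "subsets":[{"addresses":[{"ip":"192.168.56.101"}],
             "ports":[{"name":"https","port":8443}]}]}]}
EOF

# Pull out the first address IP and the https port, then build the URL.
ip=$(sed -n 's/.*"ip":"\([0-9.]*\)".*/\1/p' "$epfile")
port=$(sed -n 's/.*"port":\([0-9]*\).*/\1/p' "$epfile")
echo "https://$ip:$port"

rm "$epfile"
```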

Option 3: OpenShift Installation using OpenShift provided binaries (Server & Client)

Install OpenShift Server

cd /opt/openshift/lib
gunzip openshift-origin-server-v1.4.1-3f9807a-linux-64bit.tar.gz
tar -xvf openshift-origin-server-v1.4.1-3f9807a-linux-64bit.tar
mv openshift-origin-server-v1.4.1-3f9807a-linux-64bit /opt/openshift/lib/openshift-server
export OPENSHIFT_SERVER_INSTALLATION_DIR=/opt/openshift/lib/openshift-server
cd $OPENSHIFT_SERVER_INSTALLATION_DIR

Start Openshift

cd $OPENSHIFT_SERVER_INSTALLATION_DIR
nohup ./openshift start &

Start Docker Daemon

Ref Start Docker Daemon Manually

Setting up environment variables and other pre-login activities

Set OpenShift master directory

If you chose Option 2:

export OPENSHIFT_MASTER_DIR=/var/lib/docker/aufs/diff/$OPENSHIFT_DOC_CONTAINER_ID/var/lib/origin/openshift.local.config/master

Or, if you chose Option 3:

export OPENSHIFT_MASTER_DIR=$OPENSHIFT_SERVER_INSTALLATION_DIR/openshift.local.config/master

Switch to OpenShift master directory:

cd $OPENSHIFT_MASTER_DIR

Set the environment variables and change modes (Optional)

Set the KUBECONFIG and CURL_CA_BUNDLE environment variables if you would like to log in to OpenShift as "system:admin".

export KUBECONFIG=$OPENSHIFT_MASTER_DIR/admin.kubeconfig
export CURL_CA_BUNDLE=$OPENSHIFT_MASTER_DIR/ca-bundle.crt
chmod +r $KUBECONFIG

Login into OpenShift Cluster Server

Login as system admin

Log in to the OpenShift server as system admin (cluster scoped): oc login -u system:admin -n default

The user "system:admin" is a highly privileged identity with cluster scope. It can create and access all resources in cluster scope or project scope. This identity is usually used to grant access to cluster-scoped or cross-project objects.

Login as regular user

Log in to OpenShift as a regular user (project scoped):

oc login -u configadmin -p configadmin  https://192.168.56.101:8443

Where https://192.168.56.101:8443 is the OpenShift cluster API server URL.

Deploy Integrated Docker Registry

Add the "privileged" Security Context Constraint (SCC) to the "registry" service account:

oadm policy add-scc-to-user privileged system:serviceaccount:default:registry

Add "registry-editor", "image-puller", "image-pusher", "image-builder" roles to "registry" (service account):

oadm policy add-role-to-user registry-editor system:serviceaccount:default:registry -n default
oadm policy add-role-to-user system:image-puller system:serviceaccount:default:registry -n default
oadm policy add-role-to-user system:image-pusher system:serviceaccount:default:registry -n default
oadm policy add-role-to-user system:image-builder system:serviceaccount:default:registry -n default

Create registry:

This command creates the Docker registry and its related resources (service, build config, deployment config, service account, route, etc.).

oadm registry --service-account='registry' \
    --images='openshift/origin-${component}:${version}' --mount-host=/tmp
HINT: To get Docker Registry's ClusterIP and Port:
oc get svc/docker-registry -n default -o 'jsonpath={.spec.clusterIP}:{.spec.ports[0].port}'

Sample Output: 172.30.56.30:5000

You can assign this to a variable for future reference:

export DOCKER_REGISTRY_HOST=`oc get svc/docker-registry -n default -o 'jsonpath={.spec.clusterIP}'`
export DOCKER_REGISTRY_PORT=`oc get svc/docker-registry -n default -o 'jsonpath={.spec.ports[0].port}'`
export DOCKER_REGISTRY_HOST_AND_PORT=`oc get svc/docker-registry -n default -o 'jsonpath={.spec.clusterIP}:{.spec.ports[0].port}'`
Check the status of registry service
oc describe svc/docker-registry
Registry URL Health Check
http://172.17.0.3:5000/healthz
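Once the registry address is known, an image in the integrated registry is addressed as host:port/project/name:tag. A quick sketch using the sample values above (the project and image names here are hypothetical, and on a live cluster the DOCKER_REGISTRY_* values come from the oc commands above):

```shell
# Sample values from above; on a live cluster these come from the
# DOCKER_REGISTRY_* exports. The project/image names are hypothetical.
DOCKER_REGISTRY_HOST=172.30.56.30
DOCKER_REGISTRY_PORT=5000
DOCKER_REGISTRY_HOST_AND_PORT="$DOCKER_REGISTRY_HOST:$DOCKER_REGISTRY_PORT"

# Integrated-registry image reference: <host:port>/<project>/<image>:<tag>
echo "$DOCKER_REGISTRY_HOST_AND_PORT/myproject/myimage:latest"
```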

Securing the Registry (Optional)

Create a server certificate
oadm ca create-server-cert \
    --signer-cert=$OPENSHIFT_MASTER_DIR/ca.crt \
    --signer-key=$OPENSHIFT_MASTER_DIR/ca.key \
    --signer-serial=$OPENSHIFT_MASTER_DIR/ca.serial.txt \
    --hostnames="docker-registry.default.svc.cluster.local,$DOCKER_REGISTRY_HOST" \
    --cert=/etc/secrets/registry.crt \
    --key=/etc/secrets/registry.key
Create secrets for the registry certificates
oc secrets new registry-secret \
    /etc/secrets/registry.crt \
    /etc/secrets/registry.key
Add the secret to the registry pod’s service accounts
oc secrets add serviceaccounts/registry secrets/registry-secret
oc secrets add serviceaccounts/default  secrets/registry-secret
Add the secret volumes to the registry
oc volume dc/docker-registry --add --type=secret \
    --secret-name=registry-secret -m /etc/secrets
Enable TLS
oc env dc/docker-registry \
    REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
    REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
Update httpGet scheme in dc for Liveness and Readiness probes
oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{
    "name":"registry",
    "livenessProbe":  {"httpGet": {"scheme":"HTTPS"}},
    "readinessProbe":  {"httpGet": {"scheme":"HTTPS"}}
  }]}}}}'
Check if TLS enabled
oc logs dc/docker-registry | grep tls
Copy CA certs to Docker certs dir
cd $OPENSHIFT_MASTER_DIR
dcertsdir=/etc/docker/certs.d
destdir_addr=$dcertsdir/$DOCKER_REGISTRY_HOST_AND_PORT
destdir_name=$dcertsdir/docker-registry.default.svc.cluster.local:$DOCKER_REGISTRY_PORT

sudo mkdir -p $destdir_addr $destdir_name
sudo cp ca.crt $destdir_addr    
sudo cp ca.crt $destdir_name
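The directory names above must match how the Docker daemon addresses the registry. With the sample registry address from earlier (an assumption; a live cluster supplies the DOCKER_REGISTRY_* values via the oc commands above), the computed paths look like this:

```shell
# Sample values; a live cluster supplies these via the earlier oc commands.
DOCKER_REGISTRY_PORT=5000
DOCKER_REGISTRY_HOST_AND_PORT=172.30.56.30:5000

dcertsdir=/etc/docker/certs.d
destdir_addr=$dcertsdir/$DOCKER_REGISTRY_HOST_AND_PORT
destdir_name=$dcertsdir/docker-registry.default.svc.cluster.local:$DOCKER_REGISTRY_PORT

echo "$destdir_addr"
echo "$destdir_name"
```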

Loading the Default Image Streams and Templates

Login as system admin and do the following,

oc project openshift
cd ~
git clone https://github.com/openshift/openshift-ansible

cd ~/openshift-ansible/roles/openshift_examples/files/examples/latest
for fileName in `find . -name '*.json'`
do 
    oc create -f $fileName -n openshift
done

rm -rf ~/openshift-ansible
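The create-from-every-JSON-file loop above can be dry-run against throwaway files, with echo standing in for oc create, to see exactly which files it would submit:

```shell
# Dry run of the bulk-create loop: 'echo' stands in for 'oc create -f'.
tmpdir=$(mktemp -d)
touch "$tmpdir/a.json" "$tmpdir/b.json" "$tmpdir/skip.yaml"

for fileName in $(find "$tmpdir" -name '*.json' | sort); do
    echo "would run: oc create -f $fileName -n openshift"
done

rm -rf "$tmpdir"
```

Only the two .json files are submitted; the .yaml file is skipped by the find filter.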

Configure and Deploy Router

The OpenShift router (load balancer) is the ingress point for all external traffic destined for OpenShift services. OpenShift supports the following two router plug-ins:

  1. HAProxy template router (openshift3/ose-haproxy-router) - the default router.
  2. F5 router - integrates with an existing F5 BIG-IP.

Here we create the default (HAProxy) router. Log in as system admin and execute the following commands:

oc project default
echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -
oadm policy add-cluster-role-to-user \
    cluster-reader \
    system:serviceaccount:default:router

oadm policy add-scc-to-user privileged system:serviceaccount:default:router

oadm router --dry-run \
    --service-account=router

oadm router router --replicas=1 --service-account=router

FIS on OpenShift using S2I

Create New Project

oc new-project fis-openshift-s2i --display-name="FIS on OpenShift using S2I" --description="FIS on OpenShift using S2I"

oadm policy add-role-to-user admin configadmin -n fis-openshift-s2i

Add Image-Stream/Template (If required)

Download the template (using the raw file URL so curl retrieves the JSON itself rather than the GitHub HTML page):

curl -L https://raw.githubusercontent.com/dhana-git/fis-openshift-s2i-karaf-camel-qs/master/quickstart-template.json > /opt/openshift/templates/fis-openshift-s2i-karaf-camel-qs-quickstart-template.json

oc create -f /opt/openshift/templates/fis-openshift-s2i-karaf-camel-qs-quickstart-template.json -n fis-openshift-s2i

FIS on OpenShift using Fabric8 Maven workflow

Create New Project

oc new-project fis-openshift-fabric8-maven --display-name="FIS on OpenShift using Fabric8 Maven workflow" --description="FIS on OpenShift using Fabric8 Maven workflow"

oadm policy add-role-to-user admin configadmin -n fis-openshift-fabric8-maven

Add image-puller/pusher/builder roles to regular user (configadmin)

oadm policy add-role-to-user system:image-pusher configadmin -n fis-openshift-fabric8-maven
oadm policy add-role-to-user system:image-puller configadmin -n fis-openshift-fabric8-maven
oadm policy add-role-to-user system:image-builder configadmin -n fis-openshift-fabric8-maven
#oadm policy add-scc-to-user privileged configadmin -n fis-openshift-fabric8-maven

Clone Git repository

cd /opt/openshift/all-in-one-demo

git clone https://github.com/dhana-git/fis-openshift-s2i-karaf-camel-qs.git

cd /opt/openshift/all-in-one-demo/fis-openshift-s2i-karaf-camel-qs

Set environment variables required for maven plugins

export DOCKER_HOST=unix:///var/run/docker.sock
export KUBERNETES_MASTER=$CLUSTER_API_SERVER_URL
export KUBERNETES_DOMAIN=poc.openshift.dev

Login into OpenShift

oc login -u <username> -p <password/token> -n <OPENSHIFT_NAMESPACE> https://<CLUSTER_API_HOST>:8443

E.g.: oc login -u configadmin -p configadmin -n fis-openshift-fabric8-maven $CLUSTER_API_SERVER_URL

Trigger Maven Build

mvn -Pf8-deploy \
    -Ddocker.pull.registry=<Docker registry host to pull images from> \
    -Ddocker.push.registry=<Docker registry host to push images to> \
    -Ddocker.push.username=<OpenShift username> \
    -Ddocker.push.password=<OpenShift user password/token> \
    -Dopenshift.project.name=<OpenShift project/namespace>

E.g.:

mvn -Pf8-deploy \
    -Ddocker.pull.registry=registry.access.redhat.com \
    -Ddocker.push.registry=$(oc get svc/docker-registry -n default -o 'jsonpath={.spec.clusterIP}:{.spec.ports[0].port}') \
    -Ddocker.push.username=$(oc whoami) \
    -Ddocker.push.password=$(oc whoami -t) \
    -Dopenshift.project.name=$(oc project -q)

OpenShift Installation using Container Development Kit (CDK)

The CDK is designed to simplify the configuration and setup of Linux container development environments and provides an on-ramp to building container-based applications. By using Vagrant, open source software to create, configure and deploy virtual environments, the CDK enables developers across Microsoft Windows, Mac OS X and Linux operating systems to more quickly create containerized applications for deployment on Red Hat certified container hosts.

Install Vagrant (if required)

Ref [Install Vagrant](#install-vagrant)

Workspace setup

Create the OpenShift project root and lib directories:

mkdir -p /cygdrive/c/Users/C241251/LLY/NGIF/Workspace/openshift-poc/lib
export OPENSHIFT_WORK_DIR=/cygdrive/c/Users/C241251/LLY/NGIF/Workspace/openshift-poc
export HOST_USER_HOME=/cygdrive/c/Users/C241251
cd $OPENSHIFT_WORK_DIR

Move the artifacts from /tmp to $OPENSHIFT_WORK_DIR/lib:

mv /tmp/rhel-cdk-kubernetes-7.2-25.x86_64.vagrant-virtualbox.box $OPENSHIFT_WORK_DIR/lib/
mv /tmp/cdk-2.1.0.zip $OPENSHIFT_WORK_DIR/lib/
mv /tmp/Vagrantfile $OPENSHIFT_WORK_DIR/lib/

Switch to the project's root dir:

cd $OPENSHIFT_WORK_DIR

Configure CDK

Initialize Vagrant working directory

vagrant init

Add vagrant box (cdk with openshift, kubernetes on virtualbox)

vagrant box add --name cdkv2 ./lib/rhel-cdk-kubernetes-7.2*.x86_64.vagrant-virtualbox.box

Vagrant plugin installation

cd $OPENSHIFT_WORK_DIR/lib/cdk-2.1.0/plugins

vagrant-registration plugin (for guest registration):

vagrant plugin install vagrant-registration-1.2.2.gem

vagrant-service-manager plugin (obtain information about the Docker, Kubernetes and OpenShift services):

vagrant plugin install vagrant-service-manager-1.1.0.gem

vagrant-sshfs plugin (shared/synchronized folders using SSHFS):

vagrant plugin install vagrant-sshfs-1.1.0.gem

Configure CDK

Set environment variables

Set the CDK root dir:

export CDK_ROOT=$OPENSHIFT_WORK_DIR/lib/cdk-2.1.0

Set the OpenShift-Vagrant dir and switch to it:

export CDK_OSE_DIR=$OPENSHIFT_WORK_DIR/lib/cdk-2.1.0/components/rhel/rhel-ose
cd $CDK_OSE_DIR

Backup original Vagrantfile

mv Vagrantfile Vagrantfile.orig

Copy the given Vagrantfile into the CDK OSE work directory:

cp $OPENSHIFT_WORK_DIR/lib/Vagrantfile $CDK_OSE_DIR/
vi $HOST_USER_HOME/.vagrant.d/Vagrantfile

Bring up Vagrant

vagrant up

Miscellaneous

OpenShift CLI Commands

Logout of OpenShift session
oc logout
Display an overview of the current project
oc status -v
Get access token
oc whoami -t
List all the projects (authorized to)
oc projects
Get current project
oc project
Switch to the mentioned project
oc project iep-aed-fuse-proj
Check the status of registry service
oc describe svc/docker-registry
Delete all the resources by selector (label)
oc delete all -l app=iep-aed-fuse-service

OpenShift Admin CLI Commands

Run Diagnostics
oadm diagnostics

Docker CLI Commands

Display the running processes of a container
docker top openshiftorigin
Find all docker containers
docker ps -a -q
Stop all the containers
docker stop $(docker ps -a -q)
Restart all the containers
docker restart $(docker ps -a -q)
Remove (force) the specified containers
docker rm -f 6cf3469fb728
Remove (force) all the containers
docker rm -f $(docker ps -a -q)
Remove non-running containers
docker rm $(docker ps -aq --filter status=exited)
Delete all the images
docker rmi -f $(docker images -q)
List nodes in your cluster
kubectl get nodes
oc describe nodes
List pods
kubectl get pods --show-all
oc describe pods

oadm manage-node --list-pods <node-name>
oadm manage-node --list-pods mayans-virtualbox

Configure Docker Daemon

vi /etc/default/docker
DOCKER_OPTS="--insecure-registry=172.30.143.135:5000 --insecure-registry=172.30.0.0/24 --iptables=true --debug=true --log-level=debug --icc=true"

Docker Daemon Lifecycle Management

systemctl start docker
systemctl restart docker
systemctl enable docker
systemctl daemon-reload