- Bring up a standard OpenShift cluster -- the method of setup should not matter (openshift-ansible or `oc cluster up`). Additionally, it should not matter whether a cluster-scoped Ansible Service Broker is present or not. The requirement is that the user creating the `install.yml` file has sufficient permissions to create the broker's resources (`cluster-admin`). Also, ensure the brew registry that contains the test image (`brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888`) is configured as an insecure registry.
- Confirm the catalog's controller-manager pod is running with the broker feature flag enabled. The command should contain `NamespacedServiceBroker=true`; `oc describe pod <CONTROLLER_MANAGER_POD> -n <CATALOG_NAMESPACE>` should list this.
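For reference, on a typical service-catalog deployment the gate is passed through the controller-manager container's args, along the lines of this illustrative excerpt (the exact layout and flag set depend on how the catalog was installed):

```yaml
# Illustrative excerpt of the catalog controller-manager pod spec; the
# surrounding fields vary with the install method.
containers:
  - name: controller-manager
    args:
      - --feature-gates=NamespacedServiceBroker=true
```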
- Once the catalog's pods are confirmed healthy, the namespaced broker can be installed with `oc create -f` on the following file. The namespace where the broker is going to be installed can also be customized; in this case I chose `test-ns-broker` for the namespace. It will be created automatically thanks to the `create_broker_namespace=true` argument:
```yaml
# install.yml
---
apiVersion: v1
kind: Namespace
metadata:
  name: automation-broker-apb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: automation-broker-apb
  namespace: automation-broker-apb
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: automation-broker-apb
roleRef:
  name: cluster-admin
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: automation-broker-apb
    namespace: automation-broker-apb
---
apiVersion: v1
kind: Pod
metadata:
  name: automation-broker-apb
  namespace: automation-broker-apb
spec:
  serviceAccount: automation-broker-apb
  restartPolicy: Never
  containers:
    - name: apb
      image: 'brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/automation-broker-apb:v3.11.0'
      args: [ "provision", "--extra-vars", '{ "broker_image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-ansible-service-broker:v3.11", "broker_kind": "ServiceBroker", "broker_namespace": "test-ns-broker", "create_broker_namespace": true }' ]
      imagePullPolicy: IfNotPresent
```
The APB should run to successful completion without failed tasks.
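For readability, the JSON blob passed through `--extra-vars` in the pod above unpacks to the following values (same content, shown as YAML):

```yaml
# The provision arguments from install.yml, unpacked for readability.
broker_image: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-ansible-service-broker:v3.11
broker_kind: ServiceBroker        # namespaced ServiceBroker rather than a cluster-scoped broker
broker_namespace: test-ns-broker  # target namespace for the broker
create_broker_namespace: true     # the APB creates the namespace itself
```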
- Wait until the broker pods within the `broker_namespace` have started and are healthy (this could take a little while depending on the configured discovery repos).
- `oc get servicebrokers -n test-ns-broker` should list a single automation broker. This is the list of namespaced brokers within this namespace.
- By default, the broker is deployed without any registry configured. The tester should manually add a Docker Hub registry by editing the broker's config map: run `oc edit -n test-ns-broker configmap broker-config` and add the following to the `registry` section, for example:
```yaml
registry:
  - type: dockerhub
    name: dh
    url: https://registry.hub.docker.com
    org: ansibleplaybookbundle
    tag: latest
    white_list:
      - ".*-apb$"
    black_list:
      - ".*automation-broker-apb$"
```
- Roll out a new broker pod with `oc rollout latest -n test-ns-broker automation-broker`.
- Wait until the broker's status is marked as `Successfully fetched catalog` when running `oc describe` on the `ServiceBroker` object: `oc describe servicebroker -n test-ns-broker <broker-name>`. Note: it is expected that there will be a few initial warning events reporting a failure to fetch the catalog from the broker. This is normal while the catalog tries to contact the broker and the broker is still coming up.
- Confirm the `ServiceClasses` and `ServicePlans` that the namespaced broker provides to the catalog have been created as expected. This can be done either by waiting for the catalog to automatically relist, or triggered manually by running `oc edit servicebroker automation-broker -n test-ns-broker` and incrementing the `relistRequests` field. The following commands should list classes and plans:

  ```
  oc get serviceclasses -n test-ns-broker
  oc get serviceplans -n test-ns-broker
  ```
- A full regression should be run through without failure with the following `ServiceInstance` and `ServiceBinding` objects. The notable difference here is the `service{Class,Plan}ExternalName` fields on the service instance, rather than `clusterService{Class,Plan}ExternalName`. These refer to the namespaced classes and plans rather than those at the cluster level.
mediawiki instance
```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: mediawiki
  namespace: test-ns-broker
spec:
  serviceClassExternalName: dh-mediawiki-apb
  servicePlanExternalName: default
  parameters:
    app_name: mediawiki
    mediawiki_db_schema: "mediawiki"
    mediawiki_site_name: "Mediawiki-CI"
    mediawiki_site_lang: "en"
    mediawiki_admin_user: "ci-user"
    mediawiki_admin_pass: "admin"
```
postgresql instance
```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: postgresql
  namespace: test-ns-broker
spec:
  serviceClassExternalName: dh-postgresql-apb
  servicePlanExternalName: dev
  parameters:
    app_name: "postgresql"
    postgresql_database: "admin"
    postgresql_password: "admin"
    postgresql_user: "admin"
    postgresql_version: "9.6"
```
binding
```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: mediawiki-postgresql-binding
  namespace: test-ns-broker
spec:
  instanceRef:
    name: postgresql
  secretName: mediawiki-postgresql-binding
```
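Once the binding succeeds, the catalog writes the credentials into the secret named by `secretName`, i.e. `mediawiki-postgresql-binding`. A hypothetical fragment showing how a consuming container could pick those credentials up (not part of the regression itself; the mediawiki APB may wire this differently):

```yaml
# Hypothetical: expose the binding secret's keys as environment
# variables in a consuming container.
envFrom:
  - secretRef:
      name: mediawiki-postgresql-binding
```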