@imShakil · Created May 31, 2024
gluu-cloud-native-with-microk8s

Installing Gluu Cloud Native on a microk8s-based Kubernetes cluster

  1. Make sure microk8s is installed, then enable the dns, hostpath-storage, and helm3 addons:
sudo microk8s.enable dns
sudo microk8s.enable hostpath-storage
sudo microk8s.enable helm3

Optionally, create shell aliases for kubectl and helm:

sudo snap alias microk8s.kubectl kubectl
sudo snap alias microk8s.helm3 helm
  2. Download the pygluu installer:
wget https://github.com/GluuFederation/cloud-native-edition/releases/download/v1.8.23/pygluu-kubernetes-linux-amd64.pyz
  3. Make it executable:
chmod +x ./pygluu-kubernetes-linux-amd64.pyz
  4. Run the installer to deploy Gluu via Helm:
./pygluu-kubernetes-linux-amd64.pyz helm-install
  5. Create a Docker registry secret so the cluster can pull images from the Gluu repository:
kubectl create secret docker-registry -n gluu regcred --docker-server=https://index.docker.io/v1/ --docker-username=gluu1056 --docker-password=dckr_pat_0zR0mdkH00ehm73il7Vpgz0rBrY
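Under the hood, that secret stores a .dockerconfigjson whose auth field is just username:token, base64-encoded. A quick local sketch of the encoding round-trip (the token below is a placeholder, not a real credential):

```shell
# Encode and decode the auth field kubectl derives for a
# docker-registry secret. Credentials here are placeholders.
AUTH=$(printf '%s' 'gluu1056:example-token' | base64)
DECODED=$(printf '%s' "$AUTH" | base64 -d)
echo "$DECODED"
```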
  6. Enable the Kubernetes ingress addon so Gluu is reachable from a browser:
sudo microk8s.enable ingress
  7. Switch the ingress controller's ingress class on all services:
sudo microk8s kubectl get daemonset.apps/nginx-ingress-microk8s-controller -n ingress -o yaml | sed "s@ingress-class=public@ingress-class=nginx@g" | sudo microk8s kubectl apply -f -
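The sed expression above only rewrites the controller's --ingress-class argument in the daemonset manifest. A quick local check of the substitution pattern against a sample argument line (illustrative input, not live cluster state):

```shell
# Verify the ingress-class substitution used above on a sample
# daemonset argument line.
ARG='- --ingress-class=public'
NEW=$(printf '%s' "$ARG" | sed 's@ingress-class=public@ingress-class=nginx@g')
echo "$NEW"
```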

Upgrade from LDAP to PostgreSQL

  1. Export the entries of each tree (o=gluu, o=site, o=metric) as an .ldif file:
mkdir -p custom_ldif
kubectl -n gluu exec -ti gluu-opendj-0 -- /opt/opendj/bin/ldapsearch -D "cn=directory manager" -p 1636 --useSSL -w Mh@006 --trustAll -b "o=gluu" -s sub objectClass=* > custom_ldif/01_gluu.ldif
kubectl -n gluu exec -ti gluu-opendj-0 -- /opt/opendj/bin/ldapsearch -D "cn=directory manager" -p 1636 --useSSL -w Mh@006 --trustAll -b "o=site" -s sub objectClass=* > custom_ldif/02_site.ldif
kubectl -n gluu exec -ti gluu-opendj-0 -- /opt/opendj/bin/ldapsearch -D "cn=directory manager" -p 1636 --useSSL -w Mh@006 --trustAll -b "o=metric" -s sub objectClass=* > custom_ldif/03_metric.ldif
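Each export can be sanity-checked by counting entries, since every LDAP entry in an .ldif file starts with a dn: line. The sample file below is synthetic; run the same grep against custom_ldif/01_gluu.ldif and the other exports:

```shell
# Count LDAP entries in an export by counting "dn:" lines.
# sample.ldif is a synthetic stand-in for the real exports.
mkdir -p custom_ldif
cat > custom_ldif/sample.ldif <<'EOF'
dn: o=gluu
objectClass: top

dn: ou=people,o=gluu
objectClass: organizationalUnit
EOF
ENTRIES=$(grep -c '^dn:' custom_ldif/sample.ldif)
echo "entries: $ENTRIES"
```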
  2. Create a ConfigMap for each .ldif file, provided each file is below 1 MB:
kubectl -n gluu create cm custom-gluu-ldif --from-file=custom_ldif/01_gluu.ldif
kubectl -n gluu create cm custom-site-ldif --from-file=custom_ldif/02_site.ldif
kubectl -n gluu create cm custom-metric-ldif --from-file=custom_ldif/03_metric.ldif
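ConfigMaps are capped at roughly 1 MiB, which is why the step above only works for small exports. A quick pre-check, demoed here on a synthetic directory (point it at your real custom_ldif/):

```shell
# List any .ldif export too large to fit in a ConfigMap (> 1 MiB).
# demo_ldif is a synthetic directory standing in for custom_ldif.
mkdir -p demo_ldif
head -c 512 /dev/zero > demo_ldif/small.ldif
TOO_BIG=$(find demo_ldif -name '*.ldif' -size +1M | wc -l | tr -d '[:space:]')
echo "oversized files: $TOO_BIG"
```

If any file is listed, mount it from a volume instead of a ConfigMap.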
  3. Prepare PostgreSQL for the database migration:
kubectl create ns postgres
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgresql bitnami/postgresql -n postgres --set auth.postgresPassword=Mh@006,auth.database=gluu,auth.username=admin,auth.password=Mh@006
  4. Importing the .ldif entries may take a while, so run the migration offline in a separate Kubernetes job.

    • Create a sql_password file containing the PostgreSQL user's password, then store it in a secret:
    kubectl -n gluu create secret generic offline-sql-pass --from-file=sql_password
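One gotcha when creating that file: echo appends a trailing newline, which would end up inside the stored password. A sketch of writing the file byte-exactly, using the password from this walkthrough:

```shell
# Write the password with no trailing newline; a stray \n would be
# read as part of the password by the persistence job.
umask 077
printf '%s' 'Mh@006' > sql_password
BYTES=$(wc -c < sql_password | tr -d '[:space:]')
echo "password bytes: $BYTES"
```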
    
    • Create offline-persistence-load.yaml:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: offline-persistence-load
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
        spec:
          restartPolicy: Never
          imagePullSecrets:
            - name: regcred
          volumes:
            - name: custom-gluu-ldif
              configMap:
                name: custom-gluu-ldif
            - name: custom-site-ldif
              configMap:
                name: custom-site-ldif
            - name: custom-metric-ldif
              configMap:
                name: custom-metric-ldif
            - name: sql-pass
              secret:
                secretName: offline-sql-pass # adjust the value according to your setup
          containers:
            - name: offline-persistence-load
              image: gluufederation/persistence:4.5.3-2
              volumeMounts:
                - name: custom-gluu-ldif
                  mountPath: /app/custom_ldif/01_gluu.ldif
                  subPath: 01_gluu.ldif
                - name: custom-site-ldif
                  mountPath: /app/custom_ldif/02_site.ldif
                  subPath: 02_site.ldif
                - name: custom-metric-ldif
                  mountPath: /app/custom_ldif/03_metric.ldif
                  subPath: 03_metric.ldif
                - name: sql-pass
                  mountPath: "/etc/gluu/conf/sql_password"
                  subPath: sql_password
              envFrom:
                - configMapRef:
                    name: gluu-config-cm # adjust the name according to your setup
              env:
                - name: GLUU_PERSISTENCE_IMPORT_BUILTIN_LDIF
                  value: "false" # [DON'T CHANGE] skip the built-in LDIF files generated by the image
                - name: GLUU_PERSISTENCE_TYPE
                  value: "sql" # [DON'T CHANGE]
                - name: GLUU_SQL_DB_DIALECT
                  value: "pgsql" # [DON'T CHANGE]
                - name: GLUU_SQL_DB_NAME
                  value: "gluu" # adjust according to your setup
                - name: GLUU_SQL_DB_HOST
                  value: "postgresql.postgres.svc.cluster.local" # adjust according to your setup
                - name: GLUU_SQL_DB_PORT
                  value: "5432" # adjust according to your setup
                - name: GLUU_SQL_DB_USER
                  value: "admin" # adjust according to your setup
                - name: GLUU_SQL_DB_SCHEMA
                  value: "public" # [default value] adjust according to your setup

  5. Deploy the job:

kubectl -n gluu apply -f offline-persistence-load.yaml
  6. Make sure the job completes without errors before proceeding. Once it has, the job and secret can be deleted safely:
kubectl -n gluu delete secret offline-sql-pass
kubectl -n gluu delete job offline-persistence-load
  7. Switch the persistence layer by adding the following to the existing values.yaml:
global:
  gluuPersistenceType: sql
  upgrade:
    enabled: false
config:
  configmap:
    cnSqlDbName: gluu
    cnSqlDbPort: 5432
    cnSqlDbDialect: pgsql
    cnSqlDbHost: postgresql.postgres.svc.cluster.local
    cnSqlDbUser: admin
    cnSqlDbTimezone: UTC
    cnSqldbUserPassword: <postgres-user-password>

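Before running the upgrade, it is worth confirming the persistence switch actually landed in values.yaml. A minimal check, demoed here against a sample fragment (point CHECK_FILE at your real values.yaml):

```shell
# Grep for the persistence switch before upgrading.
# sample-values.yaml is a stand-in; use your real values.yaml.
CHECK_FILE=sample-values.yaml
cat > "$CHECK_FILE" <<'EOF'
global:
  gluuPersistenceType: sql
EOF
if grep -q 'gluuPersistenceType: sql' "$CHECK_FILE"; then RESULT=ok; else RESULT=missing; fi
echo "persistence switch: $RESULT"
```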
  8. Finally, run:
helm upgrade gluu gluu/gluu -n gluu -f values.yaml

Test

root@imShakil-neutral-rooster:~# kubectl get ns
NAME              STATUS   AGE
default           Active   6d
gluu              Active   13h
ingress           Active   13h
ingress-nginx     Active   13h
kube-node-lease   Active   6d
kube-public       Active   6d
kube-system       Active   6d
postgres          Active   142m
root@imShakil-neutral-rooster:~# kubectl get pods -n gluu
NAME                                READY   STATUS                       RESTARTS      AGE
gluu-jackrabbit-first-0             1/1     Running                      0             13h
gluu-opendj-backup-28618800-bhs45   0/1     Completed                    0             147m
gluu-opendj-backup-28618859-djswm   0/1     Completed                    0             94m
gluu-opendj-backup-28618860-4hfs9   0/1     Completed                    0             87m
gluu-opendj-backup-28618919-smjlt   0/1     CreateContainerConfigError   0             34m
gluu-oxauth-568d676454-vp4zx        1/1     Running                      0             46m
gluu-oxshibboleth-0                 1/1     Running                      0             46m
gluu-oxtrust-0                      1/1     Running                      1 (43m ago)   46m
gluu-scim-5d4bc598b8-9cvkq          1/1     Running                      0             46m
root@imShakil-neutral-rooster:~# kubectl get pods -n postgres
NAME           READY   STATUS    RESTARTS   AGE
postgresql-0   1/1     Running   0          143m

Full values.yaml used for this deployment:
global:
  usrEnvs:
    normal: {}
    secret: {}
  istio:
    ingress: false
    enabled: false
    namespace: istio-system
    additionalLabels: {}
    additionalAnnotations: {}
  alb:
    ingress:
      enabled: false
      adminUiEnabled: true
      openidConfigEnabled: true
      uma2ConfigEnabled: true
      webfingerEnabled: true
      webdiscoveryEnabled: true
      scimConfigEnabled: true
      scimEnabled: true
      u2fConfigEnabled: true
      fido2Enabled: false
      fido2ConfigEnabled: false
      authServerEnabled: true
      casaEnabled: false
      passportEnabled: false
      shibEnabled: true
      additionalLabels: {}
      additionalAnnotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx
        alb.ingress.kubernetes.io/auth-session-cookie: custom-cookie
  cloud:
    testEnviroment: true
  upgrade:
    enabled: false
    image:
      repository: gluufederation/upgrade
      tag: 4.5.1-1
    sourceVersion: "4.5"
    targetVersion: "4.5"
  storageClass:
    allowVolumeExpansion: true
    allowedTopologies: []
    mountOptions:
      - debug
    # -- parameters:
    #fsType: ""
    #kind: ""
    #pool: ""
    #storageAccountType: ""
    #type: ""
    parameters: {}
    provisioner: microk8s.io/hostpath
    reclaimPolicy: Retain
    volumeBindingMode: WaitForFirstConsumer
  gcePdStorageType: pd-standard
  azureStorageAccountType: Standard_LRS
  azureStorageKind: Managed
  lbIp: "143.244.152.31"
  domain: imshakil-neutral-rooster.gluu.info
  isDomainRegistered: "false"
  enableSecurityContextWithNonRegisteredDomain: "true"
  ldapServiceName: opendj
  gluuPersistenceType: sql
  gluuJackrabbitCluster: "false"
  configAdapterName: kubernetes
  configSecretAdapter: kubernetes
  cnGoogleApplicationCredentials: /etc/gluu/conf/google-credentials.json
  cnAwsSharedCredentialsFile: /etc/gluu/conf/aws_shared_credential_file
  cnAwsConfigFile: /etc/gluu/conf/aws_config_file
  cnAwsSecretsReplicaRegionsFile: /etc/gluu/conf/aws_secrets_replica_regions
  oxauth:
    enabled: true
    appLoggers:
      enableStdoutLogPrefix: "true"
      authLogTarget: "STDOUT"
      authLogLevel: "INFO"
      httpLogTarget: "FILE"
      httpLogLevel: "INFO"
      persistenceLogTarget: "FILE"
      persistenceLogLevel: "INFO"
      persistenceDurationLogTarget: "FILE"
      persistenceDurationLogLevel: "INFO"
      ldapStatsLogTarget: "FILE"
      ldapStatsLogLevel: "INFO"
      scriptLogTarget: "FILE"
      scriptLogLevel: "INFO"
      auditStatsLogTarget: "FILE"
      auditStatsLogLevel: "INFO"
      cleanerLogTarget: "FILE"
      cleanerLogLevel: "INFO"
  fido2:
    enabled: false
    appLoggers:
      enableStdoutLogPrefix: "true"
      fido2LogTarget: "STDOUT"
      fido2LogLevel: "INFO"
      persistenceLogTarget: "FILE"
      persistenceLogLevel: "INFO"
  scim:
    enabled: true
    appLoggers:
      enableStdoutLogPrefix: "true"
      scimLogTarget: "STDOUT"
      scimLogLevel: "INFO"
      persistenceLogTarget: "FILE"
      persistenceLogLevel: "INFO"
      persistenceDurationLogTarget: "FILE"
      persistenceDurationLogLevel: "INFO"
      scriptLogTarget: "FILE"
      scriptLogLevel: "INFO"
  config:
    enabled: true
    jobTtlSecondsAfterFinished: 300
  jackrabbit:
    enabled: true
    appLoggers:
      jackrabbitLogTarget: "STDOUT"
      jackrabbitLogLevel: "INFO"
  persistence:
    enabled: true
  oxtrust:
    enabled: true
    gluuCustomJavaOptions: "-XshowSettings:vm -XX:MaxRAMPercentage=80"
    appLoggers:
      enableStdoutLogPrefix: "true"
      oxtrustLogTarget: "STDOUT"
      oxtrustLogLevel: "INFO"
      httpLogTarget: "FILE"
      httpLogLevel: "INFO"
      persistenceLogTarget: "FILE"
      persistenceLogLevel: "INFO"
      persistenceDurationLogTarget: "FILE"
      persistenceDurationLogLevel: "INFO"
      ldapStatsLogTarget: "FILE"
      ldapStatsLogLevel: "INFO"
      scriptLogTarget: "FILE"
      scriptLogLevel: "INFO"
      auditStatsLogTarget: "FILE"
      auditStatsLogLevel: "INFO"
      cleanerLogTarget: "FILE"
      cleanerLogLevel: "INFO"
      velocityLogLevel: "INFO"
      velocityLogTarget: "FILE"
      cacheRefreshLogLevel: "INFO"
      cacheRefreshLogTarget: "FILE"
      cacheRefreshPythonLogLevel: "INFO"
      cacheRefreshPythonLogTarget: "FILE"
      apachehcLogLevel: "INFO"
      apachehcLogTarget: "FILE"
  opendj:
    enabled: true
  oxshibboleth:
    enabled: true
    gluuCustomJavaOptions: ""
    appLoggers:
      enableStdoutLogPrefix: "true"
      idpLogTarget: "STDOUT"
      idpLogLevel: "INFO"
      scriptLogTarget: "FILE"
      scriptLogLevel: "INFO"
      auditStatsLogTarget: "FILE"
      auditStatsLogLevel: "INFO"
      consentAuditLogTarget: "FILE"
      consentAuditLogLevel: "INFO"
      ldapLogLevel: ""
      messagesLogLevel: ""
      encryptionLogLevel: ""
      opensamlLogLevel: ""
      propsLogLevel: ""
      httpclientLogLevel: ""
      springLogLevel: ""
      containerLogLevel: ""
      xmlsecLogLevel: ""
  oxd-server:
    enabled: false
    appLoggers:
      oxdServerLogTarget: "STDOUT"
      oxdServerLogLevel: "INFO"
  nginx-ingress:
    enabled: true
  oxauth-key-rotation:
    enabled: false
  cr-rotate:
    enabled: false
config:
  usrEnvs:
    normal: {}
    secret: {}
  orgName: Gluu
  email: [email protected]
  adminPass: Mh@006
  ldapPass: Mh@006
  redisPass: P@assw0rd
  countryCode: US
  state: TX
  city: Austin
  salt: ""
  configmap:
    cnSqlDbDialect: pgsql
    cnSqlDbHost: postgresql.postgres.svc.cluster.local
    cnSqlDbPort: 5432
    cnSqlDbName: gluu
    cnSqlDbUser: admin
    cnSqlDbTimezone: UTC
    cnSqlPasswordFile: /etc/gluu/conf/sql_password
    cnSqldbUserPassword: Mh@006
    gluuOxdApplicationCertCn: oxd-server
    gluuOxdAdminCertCn: oxd-server
    gluuCouchbaseCrt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURlakNDQW1LZ0F3SUJBZ0lKQUwyem5UWlREUHFNTUEwR0NTcUdTSWIzRFFFQkN3VUFNQzB4S3pBcEJnTlYKQkFNTUlpb3VZMkpuYkhWMUxtUmxabUYxYkhRdWMzWmpMbU5zZFhOMFpYSXViRzlqWVd3d0hoY05NakF3TWpBMQpNRGt4T1RVeFdoY05NekF3TWpBeU1Ea3hPVFV4V2pBdE1Tc3dLUVlEVlFRRERDSXFMbU5pWjJ4MWRTNWtaV1poCmRXeDBMbk4yWXk1amJIVnpkR1Z5TG14dlkyRnNNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUIKQ2dLQ0FRRUFycmQ5T3lvSnRsVzhnNW5nWlJtL2FKWjJ2eUtubGU3dVFIUEw4Q2RJa1RNdjB0eHZhR1B5UkNQQgo3RE00RTFkLzhMaU5takdZZk41QjZjWjlRUmNCaG1VNmFyUDRKZUZ3c0x0cTFGT3MxaDlmWGo3d3NzcTYrYmlkCjV6Umw3UEE0YmdvOXVkUVRzU1UrWDJUUVRDc0dxVVVPWExrZ3NCMjI0RDNsdkFCbmZOeHcvYnFQa2ZCQTFxVzYKVXpxellMdHN6WE5GY0dQMFhtU3c4WjJuaFhhUGlva2pPT2dyMkMrbVFZK0htQ2xGUWRpd2g2ZjBYR0V0STMrKwoyMStTejdXRkF6RlFBVUp2MHIvZnk4TDRXZzh1YysvalgwTGQrc2NoQTlNQjh3YmJORUp2ZjNMOGZ5QjZ0cTd2CjF4b0FnL0g0S1dJaHdqSEN0dFVnWU1oU0xWV3UrUUlEQVFBQm80R2NNSUdaTUIwR0ExVWREZ1FXQkJTWmQxWU0KVGNIRVZjSENNUmp6ejczZitEVmxxREJkQmdOVkhTTUVWakJVZ0JTWmQxWU1UY0hFVmNIQ01Sanp6NzNmK0RWbApxS0V4cEM4d0xURXJNQ2tHQTFVRUF3d2lLaTVqWW1kc2RYVXVaR1ZtWVhWc2RDNXpkbU11WTJ4MWMzUmxjaTVzCmIyTmhiSUlKQUwyem5UWlREUHFNTUF3R0ExVWRFd1FGTUFNQkFmOHdDd1lEVlIwUEJBUURBZ0VHTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQk9meTVWSHlKZCtWUTBXaUQ1aSs2cmhidGNpSmtFN0YwWVVVZnJ6UFN2YWVFWQp2NElVWStWOC9UNnE4Mk9vVWU1eCtvS2dzbFBsL01nZEg2SW9CRnVtaUFqek14RTdUYUhHcXJ5dk13Qk5IKzB5CnhadG9mSnFXQzhGeUlwTVFHTEs0RVBGd3VHRlJnazZMRGR2ZEN5NVdxWW1MQWdBZVh5VWNaNnlHYkdMTjRPUDUKZTFiaEFiLzRXWXRxRHVydFJrWjNEejlZcis4VWNCVTRLT005OHBZN05aaXFmKzlCZVkvOEhZaVQ2Q0RRWWgyTgoyK0VWRFBHcFE4UkVsRThhN1ZLL29MemlOaXFyRjllNDV1OU1KdjM1ZktmNUJjK2FKdWduTGcwaUZUYmNaT1prCkpuYkUvUENIUDZFWmxLaEFiZUdnendtS1dDbTZTL3g0TklRK2JtMmoKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    gluuCouchbasePass: P@ssw0rd
    gluuCouchbaseSuperUserPass: P@ssw0rd
    gluuCouchbaseSuperUser: admin
    gluuCouchbaseUrl: cbgluu.default.svc.cluster.local
    gluuCouchbaseBucketPrefix: gluu
    gluuCouchbaseUser: gluu
    gluuCouchbaseIndexNumReplica: 0
    gluuCouchbasePassFile: /etc/gluu/conf/couchbase_password
    gluuCouchbaseSuperUserPassFile: /etc/gluu/conf/couchbase_superuser_password
    gluuCouchbaseCertFile: /etc/certs/couchbase.crt
    gluuPersistenceLdapMapping: ''
    gluuCacheType: NATIVE_PERSISTENCE
    gluuSyncShibManifests: true
    gluuSyncCasaManifests: false
    gluuMaxRamPercent: "75.0"
    containerMetadataName: kubernetes
    gluuRedisUrl: redis:6379
    gluuRedisUseSsl: "false"
    gluuRedisType: STANDALONE
    gluuRedisSslTruststore: ""
    gluuRedisSentinelGroup: ""
    gluuOxtrustConfigGeneration: true
    gluuOxtrustBackend: oxtrust:8080
    gluuOxauthBackend: oxauth:8080
    gluuOxdServerUrl: oxd-server:8443
    gluuOxdBindIpAddresses: "*"
    gluuLdapUrl: opendj:1636
    gluuJackrabbitPostgresUser: jackrabbit
    gluuJackrabbitPostgresPasswordFile: /etc/gluu/conf/postgres_password
    gluuJackrabbitPostgresDatabaseName: jackrabbit
    gluuJackrabbitPostgresHost: postgresql.postgres.svc.cluster.local
    gluuJackrabbitPostgresPort: 5432
    gluuJackrabbitAdminId: admin
    gluuJackrabbitAdminPassFile: /etc/gluu/conf/jackrabbit_admin_password
    gluuJackrabbitSyncInterval: 300
    gluuJackrabbitUrl: http://jackrabbit:8080
    gluuJackrabbitAdminIdFile: /etc/gluu/conf/jackrabbit_admin_id
    gluuDocumentStoreType: JCA
    cnGoogleServiceAccount: SWFtTm90YVNlcnZpY2VBY2NvdW50Q2hhbmdlTWV0b09uZQo=
    cnGoogleProjectId: google-project-to-save-config-and-secrets-to
    cnGoogleSpannerInstanceId: ""
    cnGoogleSpannerDatabaseId: ""
    cnGoogleSpannerEmulatorHost: ""
    cnSecretGoogleSecretVersionId: "latest"
    cnSecretGoogleSecretNamePrefix: gluu
    cnAwsAccessKeyId: ""
    cnAwsSecretAccessKey: ""
    cnAwsSecretsEndpointUrl: ""
    cnAwsSecretsNamePrefix: gluu
    cnAwsDefaultRegion: us-west-1
    cnAwsProfile: gluu
    cnAwsSecretsReplicaRegions: []
    lbAddr: ""
    gluuOxtrustApiEnabled: true
    gluuOxtrustApiTestMode: false
    gluuScimProtectionMode: "TEST"
    gluuPassportEnabled: false
    gluuPassportFailureRedirectUrl: ""
    gluuCasaEnabled: false
    gluuSamlEnabled: true
    gluuPersistenceType: ldap
  image:
    repository: gluufederation/config-init
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  volumes: []
  volumeMounts: []
  lifecycle: {}
  dnsPolicy: ""
  dnsConfig: {}
  migration:
    enabled: false
    migrationDir: /ce-migration
    migrationDataFormat: ldif
  resources:
    limits:
      cpu: 300m
      memory: 300Mi
    requests:
      cpu: 300m
      memory: 300Mi
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
nginx-ingress:
  certManager:
    certificate:
      enabled: false
      issuerKind: ClusterIssuer
      issuerName: ""
      issuerGroup: cert-manager.io
  ingress:
    enabled: true
    legacy: false
    path: /
    adminUiEnabled: true
    adminUiLabels: {}
    adminUiAdditionalAnnotations: {}
    openidConfigEnabled: true
    openidConfigLabels: {}
    openidAdditionalAnnotations: {}
    deviceCodeEnabled: true
    deviceCodeLabels: {}
    deviceCodeAdditionalAnnotations: {}
    firebaseMessagingEnabled: true
    firebaseMessagingLabels: {}
    firebaseMessagingAdditionalAnnotations: {}
    uma2ConfigEnabled: true
    uma2ConfigLabels: {}
    uma2AdditionalAnnotations: {}
    webfingerEnabled: true
    webfingerLabels: {}
    webfingerAdditionalAnnotations: {}
    webdiscoveryEnabled: true
    webdiscoveryLabels: {}
    webdiscoveryAdditionalAnnotations: {}
    scimConfigEnabled: true
    scimConfigLabels: {}
    scimConfigAdditionalAnnotations: {}
    scimEnabled: true
    scimLabels: {}
    scimAdditionalAnnotations: {}
    u2fConfigEnabled: true
    u2fConfigLabels: {}
    u2fAdditionalAnnotations: {}
    fido2ConfigEnabled: false
    fido2ConfigLabels: {}
    fido2ConfigAdditionalAnnotations: {}
    fido2Enabled: false
    fido2Labels: {}
    authServerEnabled: true
    authServerLabels: {}
    authServerAdditionalAnnotations: {}
    casaEnabled: false
    casaLabels: {}
    casaAdditionalAnnotations: {}
    passportEnabled: false
    passportLabels: {}
    passportAdditionalAnnotations: {}
    shibEnabled: true
    shibLabels: {}
    shibAdditionalAnnotations: {}
    additionalLabels: {}
    additionalAnnotations:
      kubernetes.io/ingress.class: "public"
    hosts:
      - imshakil-neutral-rooster.gluu.info
    tls:
      - secretName: tls-certificate # DON'T change
        hosts:
          - imshakil-neutral-rooster.gluu.info
jackrabbit:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: 1
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/jackrabbit
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  replicas: 1
  resources:
    limits:
      cpu: 1500m
      memory: 1000Mi
    requests:
      cpu: 1500m
      memory: 1000Mi
  secrets:
    gluuJackrabbitAdminPass: Mh@006
    gluuJackrabbitPostgresPass: ''
  service:
    jackRabbitServiceName: jackrabbit
    name: http-jackrabbit
    port: 8080
  clusterId: "first"
  storage:
    size: 5Gi
  livenessProbe:
    tcpSocket:
      port: http-jackrabbit
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
  readinessProbe:
    tcpSocket:
      port: http-jackrabbit
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
opendj:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: 1
  backup:
    enabled: true
    cronJobSchedule: "*/59 * * * *"
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/opendj
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  persistence:
    size: 4Gi
  ports:
    tcp-admin:
      nodePort: ""
      port: 4444
      protocol: TCP
      targetPort: 4444
    tcp-ldap:
      nodePort: ""
      port: 1389
      protocol: TCP
      targetPort: 1389
    tcp-ldaps:
      nodePort: ""
      port: 1636
      protocol: TCP
      targetPort: 1636
    tcp-repl:
      nodePort: ""
      port: 8989
      protocol: TCP
      targetPort: 8989
    tcp-serf:
      nodePort: ""
      port: 7946
      protocol: TCP
      targetPort: 7946
    udp-serf:
      nodePort: ""
      port: 7946
      protocol: UDP
      targetPort: 7946
  replicas: 1
  resources:
    limits:
      cpu: 1500m
      memory: 2000Mi
    requests:
      cpu: 1500m
      memory: 2000Mi
  livenessProbe:
    exec:
      command:
        - python3
        - /app/scripts/healthcheck.py
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
    failureThreshold: 20
  readinessProbe:
    tcpSocket:
      port: 1636
    initialDelaySeconds: 60
    timeoutSeconds: 5
    periodSeconds: 25
    failureThreshold: 20
  volumes: []
  volumeMounts: []
  lifecycle:
    preStop:
      exec:
        command: ["/bin/sh", "-c", "python3 /app/scripts/deregister_peer.py 1>&/proc/1/fd/1"]
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
  gluuRedisEnabled: false
persistence:
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/persistence
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  resources:
    limits:
      cpu: 300m
      memory: 300Mi
    requests:
      cpu: 300m
      memory: 300Mi
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
oxauth:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: "90%"
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/oxauth
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  replicas: 1
  resources:
    limits:
      cpu: 2500m
      memory: 2500Mi
    requests:
      cpu: 2500m
      memory: 2500Mi
  service:
    oxAuthServiceName: oxauth
    name: http-oxauth
    port: 8080
  livenessProbe:
    exec:
      command:
        - python3
        - /app/scripts/healthcheck.py
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
  readinessProbe:
    exec:
      command:
        - python3
        - /app/scripts/healthcheck.py
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
oxtrust:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: 1
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/oxtrust
    tag: 4.5.2-1
    pullSecrets:
      - name: regcred
  replicas: 1
  resources:
    limits:
      cpu: 2500m
      memory: 2500Mi
    requests:
      cpu: 2500m
      memory: 2500Mi
  service:
    name: http-oxtrust
    port: 8080
    clusterIp: None
    oxTrustServiceName: oxtrust
  livenessProbe:
    exec:
      command:
        - python3
        - /app/scripts/healthcheck.py
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
  readinessProbe:
    exec:
      command:
        - python3
        - /app/scripts/healthcheck.py
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
fido2:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: "90%"
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/fido2
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  replicas: ''
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 500m
      memory: 500Mi
  service:
    fido2ServiceName: fido2
    name: http-fido2
    port: 8080
  livenessProbe:
    httpGet:
      path: /fido2/restv1/fido2/configuration
      port: http-fido2
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /fido2/restv1/fido2/configuration
      port: http-fido2
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
scim:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: "90%"
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/scim
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  replicas: 1
  resources:
    limits:
      cpu: 1000m
      memory: 1000Mi
    requests:
      cpu: 1000m
      memory: 1000Mi
  service:
    scimServiceName: scim
    name: http-scim
    port: 8080
  livenessProbe:
    httpGet:
      path: /scim/restv1/scim/v2/ServiceProviderConfig
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /scim/restv1/scim/v2/ServiceProviderConfig
      port: 8080
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
oxd-server:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: "90%"
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/oxd-server
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  replicas: 1
  resources:
    limits:
      cpu: 1000m
      memory: 400Mi
    requests:
      cpu: 1000m
      memory: 400Mi
  service:
    oxdServerServiceName: oxd-server
  livenessProbe:
    exec:
      command:
        - curl
        - -k
        - https://localhost:8443/health-check
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
  readinessProbe:
    exec:
      command:
        - curl
        - -k
        - https://localhost:8443/health-check
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
casa:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: "90%"
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/casa
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  replicas: ''
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 500m
      memory: 500Mi
  service:
    casaServiceName: casa
    port: 8080
    name: http-casa
  livenessProbe:
    httpGet:
      path: /casa/health-check
      port: http-casa
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /casa/health-check
      port: http-casa
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
oxpassport:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: "90%"
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/oxpassport
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  replicas: ''
  resources:
    limits:
      cpu: 700m
      memory: 900Mi
    requests:
      cpu: 700m
      memory: 900Mi
  service:
    oxPassportServiceName: oxpassport
    port: 8090
    name: http-passport
  livenessProbe:
    httpGet:
      path: /passport/health-check
      port: http-passport
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
    failureThreshold: 20
  readinessProbe:
    httpGet:
      path: /passport/health-check
      port: http-passport
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
    failureThreshold: 20
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
oxshibboleth:
  topologySpreadConstraints: {}
  pdb:
    enabled: true
    maxUnavailable: 1
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50
    metrics: []
    behavior: {}
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/oxshibboleth
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  replicas: 1
  resources:
    limits:
      cpu: 1000m
      memory: 1000Mi
    requests:
      cpu: 1000m
      memory: 1000Mi
  service:
    sessionAffinity: ClientIP
    port: 8080
    oxShibbolethServiceName: oxshibboleth
    name: http-oxshib
  livenessProbe:
    exec:
      command:
        - python3
        - /app/scripts/healthcheck.py
    initialDelaySeconds: 30
    periodSeconds: 30
    timeoutSeconds: 5
  readinessProbe:
    exec:
      command:
        - python3
        - /app/scripts/healthcheck.py
    initialDelaySeconds: 25
    periodSeconds: 25
    timeoutSeconds: 5
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
cr-rotate:
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/cr-rotate
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  resources:
    limits:
      cpu: 200m
      memory: 200Mi
    requests:
      cpu: 200m
      memory: 200Mi
  service:
    crRotateServiceName: cr-rotate
    port: 8084
    name: http-cr-rotate
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}
oxauth-key-rotation:
  usrEnvs:
    normal: {}
    secret: {}
  dnsPolicy: ""
  dnsConfig: {}
  image:
    pullPolicy: IfNotPresent
    repository: gluufederation/certmanager
    tag: 4.5.1-1
    pullSecrets:
      - name: regcred
  keysLife: 48
  keysStrategy: NEWER
  keysPushDelay: 0
  keysPushStrategy: NEWER
  resources:
    limits:
      cpu: 300m
      memory: 300Mi
    requests:
      cpu: 300m
      memory: 300Mi
  volumes: []
  volumeMounts: []
  lifecycle: {}
  additionalLabels: {}
  additionalAnnotations: {}
  tolerations: []
  affinity: {}
  nodeSelector: {}