@phette23
Last active December 10, 2025 21:02
Minikube Testing Invenio Helm

Testing helm-invenio on Minikube

Testing helm-invenio chart locally using Minikube.

The values file uses the Front Matter starter image ghcr.io/front-matter/invenio-rdm-starter:v12.1.0.0, but that image uses gunicorn for a web server instead of uwsgi, so the web pod crashes. The Invenio demo images are incompatible with Minikube on Apple Silicon. The most complete test would be to use a locally built Invenio image, or our images in Artifact Registry with an image pull secret.

Prerequisites

brew install minikube kubectl helm chart-testing

Configure Minikube

docker desktop start
minikube start
minikube addons enable ingress
minikube addons enable storage-provisioner
kubectl cluster-info # verify it's working
kubectl get nodes

Preparing the Helm Chart

Pull the helm-invenio repo or a fork of it.

Create a values override file (values.yaml). See the one included in this gist.
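A minimal override only needs the hostname plus the backing-service passwords the chart requires (see the notes at the bottom of this gist). A sketch, with placeholder values — use real secrets for anything beyond local testing:

```yaml
invenio:
  hostname: localhost

rabbitmq:
  auth:
    password: password # placeholder, local testing only

postgresql:
  auth:
    password: invenio # placeholder, local testing only
```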

Installing the Chart

# with minikube running, kubectl using minikube context
kubectl create namespace invenio
helm install invenio ./charts/invenio \
    --namespace invenio \
    -f values.yaml
kubectl get pods -n invenio -w # watch for pods to come up
# Check logs of app pods
kubectl logs -n invenio -l app.kubernetes.io/component=web
kubectl logs -n invenio -l app.kubernetes.io/component=worker
kubectl logs -n invenio -l app.kubernetes.io/component=worker-beat
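Instead of watching the pod list, you can block until each component reports Ready. A sketch assuming the chart's app.kubernetes.io/component labels; wait_ready is a hypothetical helper, not part of the chart:

```shell
# Hypothetical helper: block until all pods of one chart component are Ready.
# Assumes the chart labels pods with app.kubernetes.io/component.
wait_ready() {
  kubectl wait --namespace invenio --for=condition=Ready pod \
    --selector "app.kubernetes.io/component=$1" --timeout=300s
}
# Usage: wait_ready web && wait_ready worker && wait_ready worker-beat
```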

Testing

Lint all the charts like the repo does: ct lint --all.

Verify Default Configuration

Check that workers are running with default settings:

# Exec into worker pod
kubectl exec -it -n invenio $(kubectl get pod -n invenio -l app.kubernetes.io/component=worker \
  -o jsonpath='{.items[0].metadata.name}') -- /bin/bash
# Inside the pod, check celery workers
celery -A invenio_app.celery inspect active_queues

Changes

Apply new values:

helm upgrade invenio ./charts/invenio \
    --namespace invenio \
    -f values.yaml
# verify worker command
kubectl get pod -n invenio -l app.kubernetes.io/component=worker -o yaml | grep -A5 "command:"
# run a shell on the worker pod
kubectl exec -it -n invenio $(kubectl get pod -n invenio -l app.kubernetes.io/component=worker \
  -o jsonpath='{.items[0].metadata.name}') -- /bin/bash
celery -A invenio_app.celery inspect active_queues
celery -A invenio_app.celery inspect stats
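The exec commands above repeat the same pod lookup; a small helper keeps it to one line. A sketch (worker_pod is a hypothetical name; assumes the chart's component labels):

```shell
# Hypothetical helper: print the name of the first pod for a chart component
# (defaults to the worker). Assumes app.kubernetes.io/component labels.
worker_pod() {
  kubectl get pod -n invenio \
    -l "app.kubernetes.io/component=${1:-worker}" \
    -o jsonpath='{.items[0].metadata.name}'
}
# Usage: kubectl exec -it -n invenio "$(worker_pod)" -- /bin/bash
```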

Accessing the Application

Port forward a service:

kubectl port-forward -n invenio service/flower-management 5555:5555 &
open http://localhost:5555

Cleanup

helm uninstall invenio -n invenio
kubectl delete namespace invenio
minikube stop

Additional Invenio Helm Chart Notes

Dependencies (these will all have to be replaced):

Need to add secrets and passwords or installation fails:

  • invenio.hostname
  • rabbitmq.auth.password
  • postgresql.auth.password

The v12.0.0-beta3 Docker image tag didn't work, so we changed the web and worker images to use ghcr.io/inveniosoftware/demo-inveniordm/demo-inveniordm:latest, but maybe even that isn't wise as it's not a stable release.

Reduced all replicas to 1 because the chart couldn't run locally; it runs out of resources. OpenSearch runs four types of pods (coordinating, data, ingest, and master) and we reduce the replicaCount of each to 1. Set redis.replica.replicaCount: 1 too.
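In values terms, those reductions look like this (same keys as the full values file in this gist):

```yaml
opensearch:
  coordinating:
    replicaCount: 1
  data:
    replicaCount: 1
  ingest:
    replicaCount: 1
  master:
    replicaCount: 1

redis:
  replica:
    replicaCount: 1
```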

To re-do the helm install you have to delete the shared-volume PVC too, which Helm refuses to clean up:

helm delete invenio -n invenio
kubectl delete pvc shared-volume -n invenio

Outstanding issues:

  • the web/worker pods crash because the images are the wrong architecture; if you try to docker run one inside minikube you get the error "WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested"
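You can check which platforms an image actually publishes before pulling it by inspecting its manifest. A sketch (image_archs is a hypothetical helper; docker manifest inspect needs network access to the registry):

```shell
# Hypothetical helper: list the architectures an image's manifest advertises.
image_archs() {
  docker manifest inspect "$1" | grep -o '"architecture": *"[^"]*"' | sort -u
}
# Usage: image_archs ghcr.io/inveniosoftware/demo-inveniordm/demo-inveniordm:latest
# If only amd64 appears, the image will not run natively on Apple Silicon.
```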
---
global:
  timezone: "America/Los_Angeles"
# This is the only compatible Invenio image I can find but it uses gunicorn instead of uwsgi
# so it doesn't work with the chart's web pod. If minikube can't pull it, try:
# docker pull ghcr.io/front-matter/invenio-rdm-starter:v12.1.0.0
# minikube image load ghcr.io/front-matter/invenio-rdm-starter:v12.1.0.0
image:
  registry: ghcr.io
  repository: front-matter/invenio-rdm-starter
  tag: v12.1.0.0
  digest: ""
  pullPolicy: IfNotPresent
  pullSecrets: []
ingress:
  annotations: {}
  enabled: false
  class: ""
  tlsSecretNameOverride: ""
invenio:
  hostname: localhost
  existing_secret: false
  init: false
  default_users: [] # Requires invenio.init=true
  demo_data: false # Setting invenio.demo_data=true requires also setting default_users!
  sentry:
    enabled: false
    dsn: ""
  datacite:
    enabled: false
  remote_apps:
    enabled: false
    existing_secret: false
    secret_name: "remote-apps-secrets"
    credentials: {}
  extra_config: {}
nginx:
  image: "nginx:1.24.0"
  max_conns: 100
  assets:
    location: /opt/invenio/var/instance/static
  records:
    client_max_body_size: 100m
  files:
    client_max_body_size: 50G
  resources:
    requests:
      cpu: 250m
      memory: 500Mi
    limits:
      cpu: 250m
      memory: 500Mi
  extra_server_config: ""
  denied_ips: ""
  denied_uas: ""
web:
  imagePullSecret: ""
  replicas: 1
  terminationGracePeriodSeconds: 60
  uwsgi:
    processes: 6
    threads: 4
  autoscaler:
    enabled: false
    scaler_cpu_utilization: 65
    max_web_replicas: 10
    min_web_replicas: 2
  # ! might need to disable due to https://github.com/inveniosoftware/helm-invenio/issues/104
  readinessProbe:
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 1
  startupProbe:
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
  assets:
    location: /opt/invenio/var/instance/static
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 1000m
      memory: 1Gi
  annotations: {}
worker:
  enabled: true
  # commandOverride: celery -A invenio_app.celery worker -c 2 -l DEBUG -Q celery,low,test
  app: invenio_app.celery
  queues: ""
  concurrency: 2
  log_level: INFO
  replicas: 1
  run_mount_path: /var/run/celery
  celery_pidfile: /var/run/celery/celerybeat.pid
  celery_schedule: /var/run/celery/celery-schedule
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 1000m
      memory: 1Gi
  volumes:
    enabled: false
workerBeat:
  # commandOverride: celery -A invenio_app.celery beat -l DEBUG -s /var/run/celery/celery-schedule --pidfile /var/run/celery/celerybeat.pid
  resources:
    requests:
      cpu: 500m
      memory: 200Mi
    limits:
      cpu: "2"
      memory: 500Mi
persistence:
  enabled: true
  name: "shared-volume"
  access_mode: ReadWriteMany
  size: 10G
  storage_class: ""
redis:
  enabled: true
  auth:
    enabled: false # Dangerous! This lets Invenio connect to Redis unauthenticated!
  master:
    disableCommands: [] # Dangerous! This lets us run the `FLUSHALL` and `FLUSHDB` commands! Unfortunately, they are required by the wipe_recreate.sh script when installing Invenio.
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 500Mi
  replica:
    disableCommands: [] # Dangerous! This lets us run the `FLUSHALL` and `FLUSHDB` commands! Unfortunately, they are required by the wipe_recreate.sh script when installing Invenio.
    replicaCount: 1
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 500Mi
rabbitmq:
  enabled: true
  auth:
    password: password
  resources:
    limits:
      cpu: "1"
      memory: 2Gi
    requests:
      cpu: "1"
      memory: 2Gi
rabbitmqExternal: {}
# celery task monitoring dashboard
flower:
  enabled: false
  image: "mher/flower:2.0"
  secret_name: "flower-secrets"
  default_username: "flower"
  default_password: "flower_password"
  host: ""
  resources:
    requests:
      memory: 125Mi
      cpu: 0.02
    limits:
      memory: 250Mi
      cpu: 0.1
postgresql:
  enabled: true
  auth:
    username: invenio
    database: invenio
    password: invenio
postgresqlExternal: {}
opensearch:
  coordinating:
    replicaCount: 1
  data:
    replicaCount: 1
  ingest:
    replicaCount: 1
  master:
    replicaCount: 1
  enabled: true
  sysctlImage:
    enabled: false
externalOpensearch: {}