Running SpringBoot (or for that matter any) Application as a non-root user on OpenShift on IBM Cloud
Everyone gets that you should run containers as non-root users whenever possible. It's preferred everywhere, and I think it's just a good habit overall. For trivial applications this isn't a big deal, but what about applications that depend on mounting persistent storage, especially of the NFS flavor? It may be that when you get the NFS export, the userid for the mount needs to be changed, perhaps to an arbitrary userid that's dictated to you. If you're looking for an example of that scenario, look no further.
I'll assume you have a Dockerfile already, probably one that looks a bit like the one attached to this gist. This is a straightforward example where, during the container build, some things are done as root, and ultimately we specify that the container intends to run as a specific user, java_user in this example:
FROM adoptopenjdk/openjdk8-openj9:ubi-jre
# build-time steps that need root: create the runtime group and user
USER root
RUN groupadd --gid 1000 java_group \
  && useradd --uid 1000 --gid java_group --shell /bin/bash --create-home java_user
# everything from here on runs as the non-root user
USER java_user
COPY --chown=java_user:java_group target/*.jar /app.jar
EXPOSE 8080
CMD ["java","-jar","/app.jar"]
You could drop this into the Spring PetClinic repo, for example, and be good to go.
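To get the image built and into a registry, a sequence along these lines should work (the registry and image names below are placeholders of mine, not from the original write-up):

# package the application jar (from the project root; PetClinic ships a Maven wrapper)
./mvnw package

# build the image and push it to your registry of choice
docker build -t docker.io/youruser/petclinic:ubi .
docker push docker.io/youruser/petclinic:ubi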
Once you've pushed that container to a registry of your choosing, it's time to get to work on an OpenShift cluster. Create a project and a service account for use in the deployment of the application, and grant it access to an SCC which can run with anyuid. The approach for accessing file volumes from the restricted SCC will be a topic for another time.
oc new-project scc-testing
oc create serviceaccount deploytest
oc adm policy add-scc-to-user anyuid -z deploytest
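A couple of optional sanity checks (my own additions, not in the original flow) to confirm the pieces are in place:

# confirm the anyuid SCC exists on the cluster
oc get scc anyuid

# confirm the service account landed in the project
oc get serviceaccount deploytest -n scc-testing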
Imagine that your application needs access to a storage volume which will be shared with each pod instance. If it's a web server or content management system, this may be where all the files that are commonly accessed by each pod are stored. Any way you look at it, on Kubernetes this will be a RWX (ReadWriteMany) volume. On your OpenShift cluster on IBM Cloud, you can create a 20GB volume like this with this resource:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydata
  annotations:
    volume.beta.kubernetes.io/storage-class: "ibmc-file-bronze"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
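Assuming that manifest is saved as pvc.yaml (the filename is mine), creating it and waiting for the bind looks like this:

oc apply -f pvc.yaml

# file storage on IBM Cloud can take a few minutes to provision;
# wait until STATUS shows Bound
oc get pvc mydata -w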
There are several storage tiers; in this case the bronze tier is selected, which will give a relatively low-IOPS volume, but that's immaterial for this example. For higher performance, select the silver or gold tiers and use a larger storage request.
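To see what tiers are available on your cluster, a simple filter over the storage classes (my own convenience, not from the original) shows the IBM Cloud file options:

# list the IBM Cloud file storage classes and their tiers
oc get storageclass | grep ibmc-file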
Now, before your application starts, since it will be running as non-root, you will need to adjust the permissions at the root level of the volume. You can do this with an initContainer in the deployment. This is the main reason why the service account for this deployment has access to the anyuid SCC: in order to chown the volume in the initContainer, some elevated privileges are involved. Once the initContainer exits, things proceed as the target non-root user.
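You can confirm that anyuid is what permits a pod to pick its own uid, root included; its runAsUser strategy should show as RunAsAny (a quick check of my own):

# the anyuid SCC places no restriction on the uid a pod runs as
oc get scc anyuid -o yaml | grep -A1 runAsUser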
In OpenShift, projects have preferred userids; you can see these with the command:
oc get project scc-testing -o yaml
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: IAM#[email protected]
    openshift.io/sa.scc.mcs: s0:c27,c9
    openshift.io/sa.scc.supplemental-groups: 1000720000/10000
    openshift.io/sa.scc.uid-range: 1000720000/10000
  creationTimestamp: "2020-09-22T18:58:30Z"
  name: scc-testing
  resourceVersion: "272118"
  selfLink: /apis/project.openshift.io/v1/projects/scc-testing
  uid: f33c046c-bd74-4e95-90b2-71d5500f0057
spec:
  finalizers:
    - kubernetes
status:
  phase: Active
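If you just want the range by itself, a quick filter over that output does the job:

# pull just the preferred uid range for the project
oc get project scc-testing -o yaml | grep uid-range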
Notice the uid-range annotation; it reflects the preferred uid range for pods running in the project. Let's create a deployment that uses the first uid and gid from that range, as set in the deployment's security context. At the same time, since a permission change needs to be performed by chown in the init container, the uid for that container will be the root user:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: petclinic
  name: petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: petclinic
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000720000
        fsGroup: 1000720000
      serviceAccountName: deploytest
      initContainers:
      - name: set-ownership
        image: busybox
        command: [ 'sh', '-c', 'echo preparing mydata; chown 1000720000:1000720000 /mydata; touch /mydata/test.txt' ]
        volumeMounts:
        - name: volume
          mountPath: /mydata
        securityContext:
          runAsNonRoot: false
          runAsUser: 0
      containers:
      - image: timrodocker/petclinic:ubi
        name: podloader
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /data
          name: volume
        resources:
          requests:
            cpu: "500m"
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000720000
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: mydata
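To roll this out, a sequence like the following should work (deployment.yaml is my filename for the manifest above, and the pod name will differ in your cluster):

oc apply -f deployment.yaml

# watch the pod start; the init container runs before the app container
oc get pods -w

# inspect the init container's output for errors
oc logs petclinic-7947c94cbd-tbxrr -c set-ownership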
And that's it... deploy this YAML as shown above, then check the status of the set-ownership init container to verify that there are no errors. Once the actual pod is running, you can check out the status:
oc exec -it petclinic-7947c94cbd-tbxrr bash
bash-4.4$ ls -al /
total 43964
drwxr-xr-x. 1 root root 4096 Sep 25 00:33 .
drwxr-xr-x. 1 root root 4096 Sep 25 00:33 ..
-rw-r--r--. 1 java_user java_group 44933679 Sep 24 13:53 app.jar
lrwxrwxrwx. 1 root root 7 Aug 12 2018 bin -> usr/bin
dr-xr-xr-x. 2 root root 4096 Aug 12 2018 boot
drwxr-xr-x. 2 1000720000 1000720000 4096 Sep 25 00:34 data
...
bash-4.4$ whoami
1000720000
bash-4.4$ groups
root groups: cannot find name for group ID 1000720000
1000720000
Notice the ownership in the ls output. The app.jar file copied in the Dockerfile is owned by the intended user and group, but is mode 644, i.e. world-readable. This means that when the SpringBoot application starts and needs read access to the code, it still works even though the process runs as uid 1000720000, which matches neither the file's owner nor its group; the world-readable bit is what grants access. The /data directory from the volume is owned by the userid running the process in the container, so any actions that require write access to this path will succeed.
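As a final check, a quick write test from outside the pod confirms that (again, the pod name will differ in your cluster):

# create a file on the shared volume as the pod's non-root user
oc exec -it petclinic-7947c94cbd-tbxrr -- touch /data/write-test.txt
oc exec -it petclinic-7947c94cbd-tbxrr -- ls -l /data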