The idea of using the native Linux permissions inside containers will work, because a running container acts like a normal Linux machine, respecting user/group permissions.
Using the following Dockerfile as an example:
# I'm using the ubuntu image only for testing purposes; the same applies to RHEL
FROM ubuntu
# Switch to root
USER 0
# The directories and files will be created as the root user
RUN mkdir myProtectedDirectory \
&& touch file
# Switch to a normal user again
USER 1001
So if the user tries to access or modify any file or directory created by the root user, the operation will fail with the classic "permission denied" error. An example:
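To see why UID 1001 hits "permission denied", it is enough to look at the mode bits of the root-owned directory. A minimal sketch (directory name taken from the Dockerfile above; the default 755 mode is an assumption):

```shell
# A root-owned directory with mode 755 gives "others" (which includes
# the non-root UID 1001) read and execute, but NOT write.
mkdir -p myProtectedDirectory      # created by root in the image
chmod 755 myProtectedDirectory     # rwx for owner, r-x for group and others
stat -c '%A' myProtectedDirectory  # prints: drwxr-xr-x
```

Because the write bit is missing for "others", `touch myProtectedDirectory/newFile` run as UID 1001 fails with "Permission denied".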
This idea is even used by the .NET Core Red Hat images to install all the needed dependencies, switching to a user with fewer privileges at the end. Some important highlights:
- The team will need to make sure the permissions are set up correctly for the new user, giving it access/permissions only to the binaries needed to run their apps: https://github.com/redhat-developer/s2i-dotnetcore/blob/722d30e554537e9f02ec59b3466998929cbfbc78/2.1/build/Dockerfile.rhel7#L45
- The use of the USER instruction needs to be controlled by a linter at build time; only users with "middleware" permissions should be able to use the USER instruction to switch the context. If the app teams are able to switch to USER 0 at build time, all this effort is wasted.
- Normally an application should not need to modify anything at the OS level; that's why the S2I (Source-to-Image) approach is popular right now. Your app should be self-contained, and the container only provides the base runtime or SDK needed to run it. This is not the traditional deployment (upload your binaries, but also install SQL Server and the web server, configure the firewall, etc.). In other words, with this approach the user will not be able to perform any root-level modification.
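For the first point, a hedged sketch of what that permission setup can look like (the paths, UID, and group are assumptions for illustration; the linked s2i-dotnetcore Dockerfile is the real reference):

```dockerfile
FROM ubuntu
# Create the app directory and hand it over to the non-root user
# (UID 1001, group 0), following the convention the Red Hat images use
# so the user owns only its own application tree, not the rest of the OS.
RUN mkdir -p /opt/app-root && \
    chown -R 1001:0 /opt/app-root && \
    chmod -R g=u /opt/app-root
USER 1001
WORKDIR /opt/app-root
```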
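The linter check for the USER instruction can be sketched as a simple grep at build time. A hypothetical example (the function name, file names, and messages are assumptions, not an existing tool):

```shell
# Hypothetical lint step: reject Dockerfiles that switch to root.
lint_dockerfile() {
    # Matches "USER 0" or "USER root", with optional leading whitespace.
    if grep -qE '^[[:space:]]*USER[[:space:]]+(0|root)([[:space:]]|$)' "$1"; then
        echo "FAIL: $1 switches to USER 0/root"
        return 1
    fi
    echo "OK: $1"
}

# A Dockerfile that switches to root fails the check:
printf 'FROM ubuntu\nUSER 0\n' > Dockerfile.bad
lint_dockerfile Dockerfile.bad    # prints: FAIL: Dockerfile.bad switches to USER 0/root

# A Dockerfile that stays on a non-root user passes:
printf 'FROM ubuntu\nUSER 1001\n' > Dockerfile.good
lint_dockerfile Dockerfile.good   # prints: OK: Dockerfile.good
```

In a real pipeline this check would run before `docker build`, so Dockerfiles from app teams that try to switch to root never reach the build stage.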