Used to identify situations where the server is running but may not behave optimally, e.g. sluggish responses or a corrupt backend. Such situations can generally only be fixed by a restart.
Kubernetes kills the container and restarts it if the liveness probe responds with a failure code.
Used to identify situations where the server is not yet ready to accept requests. Such situations generally recover after waiting for some time.
Kubernetes does not forward traffic to a pod if its readiness probe responds with a failure code; it simply waits for the readiness probe to return success.
MinIO server exposes two unauthenticated healthcheck endpoints: a liveness probe at /minio/health/live and a readiness probe at /minio/health/ready.
- The liveness probe handler performs an internal list-buckets call. If that succeeds, the server returns 200 OK; if it fails, it returns 503.
- The readiness probe handler checks the goroutine count. If the number of goroutines exceeds a threshold, the server returns 503; otherwise it returns 200. The threshold is currently set, somewhat arbitrarily, to 500 goroutines.
Sample configuration in a Kubernetes YAML file:
    livenessProbe:
      httpGet:
        path: /minio/health/live
        port: 9000
      initialDelaySeconds: 10
      periodSeconds: 20
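A readiness probe can be configured the same way; only the endpoint path differs. The timing values below simply mirror the liveness example and are illustrative, not recommendations:

    readinessProbe:
      httpGet:
        path: /minio/health/ready
        port: 9000
      initialDelaySeconds: 10
      periodSeconds: 20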
The /minio/health/ready endpoint is currently identical to /minio/health/live.

Are you referring to a fork? Or a former implementation of the readiness endpoint?