| Item | Specification |
|---|---|
| Custom access key | environment variable `MINIO_ACCESS_KEY` |
| Custom secret key | environment variable `MINIO_SECRET_KEY` |
| Turn off web browser | environment variable `MINIO_BROWSER=off` |
| Listen for bucket notifications | via an extended S3 API |
| Supported bucket notification targets | PostgreSQL, AMQP, NATS, Elasticsearch, Redis, Kafka (in progress) |
| Shared backend (FS) | In progress |
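The custom-credential and browser settings above are plain environment variables, so they can be passed at container launch. A minimal sketch of running the server in Docker with all three set (the key values and `/data` path here are placeholders, not recommendations):

```shell
# Launch a Minio server container with custom credentials and the
# web browser disabled. Access/secret key values are placeholders.
docker run -d -p 9000:9000 \
  -e "MINIO_ACCESS_KEY=myaccesskey" \
  -e "MINIO_SECRET_KEY=mysecretkey123" \
  -e "MINIO_BROWSER=off" \
  minio/minio server /data
```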
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
```
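Assuming the claim above is saved to a file (the filename `minio-pvc.yaml` here is a placeholder), it can be created and its binding status checked with kubectl:

```shell
# Create the PersistentVolumeClaim and confirm it reaches the Bound phase.
# "minio-pvc.yaml" is an assumed filename for the manifest above.
kubectl create -f minio-pvc.yaml
kubectl get pvc minio-pv-claim
```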
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv-1
spec:
  # Size of your PV
  capacity:
    # This is limited by the size of GCE Persistent disk.
    # For example, to create a 10 TB backend, uncomment below line
    # storage: 10Ti
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
```
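Because this service is of type LoadBalancer, the cloud provider assigns it an external IP once it is created. A sketch of creating the service and looking up that address (the filename `minio-service.yaml` is an assumption):

```shell
# Create the service, then wait for the EXTERNAL-IP column to be
# populated; Minio is reachable on port 9000 at that address.
kubectl create -f minio-service.yaml
kubectl get service minio-service
```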
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  ports:
    - port: 9000
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-server
  labels:
    app: minio
spec:
  ports:
    - port: 9000
      targetPort: 9000
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-endpoint
spec:
  selector:
    app: minio
  ports:
    - name: "s3"
      port: 9000
```
- Pre-Conditions: https://docs.docker.com/engine/swarm/swarm-tutorial/#/three-networked-host-machines. For distributed Minio to run, you need 4 networked host machines.
- Create a new swarm and set the manager. SSH into the host machine you want to set as the manager and run:

```shell
docker swarm init --advertise-addr <MANAGER-IP>
```

- The current node should become the manager. Verify using:

```shell
docker node ls
```

- Open a terminal and SSH into the machine where you want to run a worker node.
- Run the join command printed by the `docker swarm init` step above. It adds the current machine to the swarm as a worker. Add all the workers the same way.
- To check that all the machines have been added as workers, SSH into the manager and run:

```shell
docker node ls
```
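With the swarm formed, one way to run Minio on it is as a Docker service scheduled by the swarm. A minimal single-replica sketch; the service name, published port, and credential values below are assumptions, not part of the tutorial above:

```shell
# Run Minio as a swarm service, publishing port 9000 through the
# swarm's routing mesh. Credential values here are placeholders.
docker service create --name minio \
  --publish 9000:9000 \
  -e "MINIO_ACCESS_KEY=myaccesskey" \
  -e "MINIO_SECRET_KEY=mysecretkey123" \
  minio/minio server /data

# Verify the service is running:
docker service ls
```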
Storage has long been thought of as a complex, difficult-to-set-up system. Even after the advent of quick deployment mechanisms like containers, the problem somewhat persisted because of the ephemeral nature of containers: it seemed counter-intuitive to store mission-critical data on something that is itself supposed to be disposable.
Minio is an open-source, S3-compatible, cloud-native object storage server that makes storage as easy as launching a Docker container. On Hyper.sh, Minio servers are backed by Hyper.sh volumes, which ensure that even if a container running a Minio server goes down, the data is safe in the volume. As a true cloud-native application, Minio scales very well in a multi-tenant cloud environment.
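Because Minio is S3-compatible, any S3 client can talk to it. A sketch using the `mc` command-line client; the alias name, server address, credentials, and file name below are all placeholders:

```shell
# Point mc at a running Minio server (placeholder address/credentials),
# then create a bucket and copy a file into it.
mc config host add myminio http://<MINIO-IP>:9000 myaccesskey mysecretkey123
mc mb myminio/mybucket
mc cp ./backup.tar.gz myminio/mybucket
```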
Docker containers provide an isolated environment for application execution, and Hyper.sh enables effortless scaling by running multiple instances of these isolated applications. To scale Minio as per your s