version: '3.5'
# WARNING: Haven't tested this version of this YAML exactly, but it *should* be correct.
services:
  master-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9333 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    # TODO: The storage mountpoint is /data for all services
    volumes:
      - master-1-data:/data
  master-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9334 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    volumes:
      - master-2-data:/data
  master-3:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9335 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    volumes:
      - master-3-data:/data
  volume-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8080'
    volumes:
      - volume-1-data:/data
  volume-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8081'
    volumes:
      - volume-2-data:/data
  filer:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'filer -master=localhost:9333,localhost:9334,localhost:9335 -port=8888'
    tty: true
    stdin_open: true
    volumes:
      - filer-data:/data
networks:
  hostnet:
    external: true
    name: host
volumes:
  # "driver: local" is implied on all of these volumes because driver is not specified
  master-1-data:
  master-2-data:
  master-3-data:
  volume-1-data:
  volume-2-data:
  filer-data:
There you go!
Could you explain the setup of the volumes in this context?
@dkdndes The YAML was actually missing the volumes section, which I added to the bottom. The data will be put in stack-scoped Docker named volumes on the local filesystem. For example, if you deployed this YAML in a stack named seaweedfs, you would end up with the following volumes:
seaweedfs_master-1-data
seaweedfs_master-2-data
seaweedfs_master-3-data
seaweedfs_volume-1-data
seaweedfs_volume-2-data
seaweedfs_filer-data
The volume for each service will be stored locally on the server that the service is deployed on.
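For reference, the implied defaults can be spelled out. This is only a sketch of the equivalent definition for one of the volumes above; the name: override is optional and only needed if you want a fixed volume name instead of the stack-prefixed one (it requires compose file format 3.4 or newer, which version '3.5' satisfies):
# equivalent to the bare "master-1-data:" entry in the compose file above
volumes:
  master-1-data:
    driver: local              # same as leaving driver unset
    # name: master-1-data      # optional: use this exact name instead of seaweedfs_master-1-data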
Edit:
This particular example was made assuming that all of the services were actually running on the same machine (even though it is a Docker Swarm YAML). You would want to structure the YAML a bit differently for running on a cluster:
If you were running in a cluster, you would more likely want to have one volume-server service that runs with global scale and a single volume named something like volume-server-data. That service would then run on every server in the cluster and store its data in the seaweedfs_volume-server-data volume on each host.
For the master servers you would need to use labels to constrain them to run on specific hosts so that they don't lose their volumes when getting spun up on other hosts.
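Roughly, that cluster layout could look like the fragment below. This is only a sketch and not tested: node1, node2, and node3 stand in for however your master nodes are reachable on the host network, and seaweedfs-master is a hypothetical node label you would set yourself (for example with docker node update --label-add seaweedfs-master=1 <node>).
# fragments that slot into the services: and volumes: sections of the compose file above
services:
  volume-server:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    entrypoint: /usr/bin/weed
    command: 'volume -mserver=node1:9333,node2:9333,node3:9333 -port=8080'
    deploy:
      mode: global                           # one volume server on every node in the swarm
    volumes:
      - volume-server-data:/data             # each node gets its own local copy of this volume
  master-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    entrypoint: /usr/bin/weed
    command: 'master -port=9333 -defaultReplication=001 -peers=node1:9333,node2:9333,node3:9333'
    deploy:
      placement:
        constraints:
          - node.labels.seaweedfs-master == 1   # pin this master to the node carrying the label
    volumes:
      - master-1-data:/data
volumes:
  volume-server-data:
  master-1-data:
master-2 and master-3 would look the same, each constrained to its own label value. Because host networking gives every node its own port space, all three masters can keep -port=9333.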
@zicklag There is no filer redundancy? What happens if the filer node goes down? I guess it will affect all of the container storage. How can it be made resilient to that?
@zicklag Thank you for the update. Any chance you could provide an updated Swarm version?
@dkdndes I'm playing with this setup: Swarm Cluster on 3 nodes
The only problem I have is that only one filer works. That means that if the filer node goes offline, the mounted volumes become inaccessible. If anyone has an idea how to make the filers resilient, it would be awesome.
@dkdndes I haven't actually used SeaweedFS in a while, and I don't have a Swarm cluster to test on at the moment. You are probably best off trying @xirius's YAML.
@xirius, in order to scale the filer you have to set up an external filer store, such as Cassandra or one of a number of other supported databases. Then you can scale to any number of filers, as long as they all point at the same filer store. Of course, that means you now have to take into account how you are going to scale the chosen filer store as well.
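As a rough, untested sketch of that idea: run a Cassandra service (or point at an existing cluster), enable the [cassandra] section in a filer.toml (see weed scaffold -config=filer for the options), mount that file into the filer containers, and then the filer service can be replicated. Hostnames, the replica count, and file paths below are placeholders.
# fragments that slot into the compose file above
services:
  cassandra:
    image: cassandra:3.11
    networks:
      - hostnet
    volumes:
      - cassandra-data:/var/lib/cassandra
  filer:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'filer -master=node1:9333,node2:9333,node3:9333 -port=8888'
    deploy:
      replicas: 2                     # any number of filers is fine once they share the same store,
                                      # but with host networking keep at most one filer per node
    configs:
      - source: filer-toml
        target: /etc/seaweedfs/filer.toml   # one of the locations weed checks for filer.toml
configs:
  filer-toml:
    file: ./filer.toml                # contains the enabled [cassandra] section
volumes:
  cassandra-data:
You would still have to create the keyspace and table that SeaweedFS expects in Cassandra, and scaling Cassandra itself is its own exercise, as noted above.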
@zicklag Oh I see, thanks. I assumed the filer would replicate the metadata for redundancy.
Hey @zicklag, it looks like we're missing the volume mounts for the volume servers. You mind adding that to this gist?