version: '3.5'
# WARNING: Haven't tested this version of this YAML exactly, but it *should* be correct.
services:
  master-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9333 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    # TODO: The storage mountpoint is /data for all services
    volumes:
      - master-1-data:/data
  master-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9334 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    volumes:
      - master-2-data:/data
  master-3:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9335 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    volumes:
      - master-3-data:/data
  volume-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8080'
    volumes:
      - volume-1-data:/data
  volume-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8081'
    volumes:
      - volume-2-data:/data
  filer:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'filer -master=localhost:9333,localhost:9334,localhost:9335 -port=8888'
    tty: true
    stdin_open: true
    volumes:
      - filer-data:/data
networks:
  hostnet:
    external: true
    name: host
volumes:
  # "driver: local" is implied on all of these volumes because driver is not specified
  master-1-data:
  master-2-data:
  master-3-data:
  volume-1-data:
  volume-2-data:
  filer-data:
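For reference, deploying this file and sanity-checking the master cluster should look roughly like the following. This is an untested sketch: it assumes the file is saved as docker-compose.yml on a Swarm manager node, and the stack name seaweedfs is arbitrary.

    # Deploy the stack from a Swarm manager node.
    docker stack deploy -c docker-compose.yml seaweedfs

    # Ask the first master for the cluster status; it should report a leader and the other two peers.
    curl http://localhost:9333/cluster/status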
@dkdndes I'm playing with this setup: a Swarm cluster on 3 nodes.
The only problem I have is that only one filer works: if the filer node goes offline, the mounted volumes become inaccessible. If anyone has an idea of how to make the filers resilient, that would be awesome.
@dkdndes I haven't actually used SeaweedFS in a while, and I don't have a Swarm cluster to test on at the moment. You are probably best off trying @xirius's YAML.
@xirius, in order to scale the filer you have to set up an external filer store, such as Cassandra or one of the many other supported databases. Then you can scale to any number of filers, as long as they all point at the same filer store. Of course, that means you now also have to take into account how you are going to scale the chosen filer store.
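The store itself is configured in the filer's filer.toml (the file that weed scaffold -config=filer generates). As a rough, untested sketch, a Cassandra-backed store would look something like this, with every filer instance pointed at the same file (the host names here are hypothetical, and the keyspace still has to be created in Cassandra):

    [cassandra]
    enabled = true
    keyspace = "seaweedfs"
    hosts = [
      "cassandra-1:9042",
      "cassandra-2:9042",
      "cassandra-3:9042",
    ]

With that in place, the filer service in the compose file can be duplicated (for example filer-1 on 8888, filer-2 on 8889, and so on), since the metadata no longer lives on a single filer node.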
@zicklag Oh I see, thanks. I assumed the filer would replicate the metadata itself for redundancy.
@zicklag Thank you for the update. Any chance you could provide an updated Swarm version?