Mastodon can store its assets in Amazon S3 (it speaks the S3 protocol). But it does not, by default, speak to Backblaze B2.
There are a few reasons why you might want that:
- you already know and trust B2
- it's cheaper
- you do not want to make your instance dependent on AWS
- etc.
We can, however, deploy a local copy of minio, a storage server that speaks the S3 protocol and, in gateway mode, translates incoming S3 requests into B2's native API. We'll use it as a gateway from one to the other.
Basically:
Mastodon --(S3 protocol)--> minio --(B2 protocol)--> B2
Let's do it.
Sign up with Backblaze B2, then create a public bucket and give it a globally unique name. Note that this name will show up in all asset URLs, so choose something descriptive, like the name of your instance.
Look up the Account ID and generate an Application Key for your B2 account, and hold on to both for later.
Now you want to determine which URL your assets will be served under. The easiest way is to upload a test file through the B2 admin page, click on that file, and note its public "friendly" URL (example: https://f001.backblazeb2.com/file/mybucketname/testfile.txt). You'll need this hostname later.
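To double-check that the URL works, a quick probe from the command line helps (testfile.txt and mybucketname here are the example names from above; substitute your own):

curl -I https://f001.backblazeb2.com/file/mybucketname/testfile.txt
# expect a 200 response for a public bucket; a 404 usually means the
# bucket name or file path in the URL is off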
First, let's get the minio server running. To your docker-compose.yml file, add:
services:
  # [... other services, such as db and redis]
  minio:
    restart: always
    image: minio/minio:RELEASE.2018-03-30T00-38-44Z
    env_file: .env.production
    command: gateway b2
    ports:
      - "9000:9000"
    networks:
      - internal_network
      - external_network
Make sure to get the indentation right. There may have been new minio releases since I wrote this; feel free to check and update the release tag.
We need to add the gateway credentials to our environment variables in .env.production. Edit the file and add:
MINIO_ACCESS_KEY=1234567890ab
MINIO_SECRET_KEY=0123456789abcdef01234567...
Replace the dummy values here with the account ID and application key you collected from B2 earlier.
Now spin up the new container: docker-compose up -d.
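To confirm the gateway is reachable, an unauthenticated request is enough; like any S3-style server, minio answers unsigned requests with an XML error rather than a connection failure:

docker-compose ps minio                 # should show the container as Up
curl -sS http://127.0.0.1:9000/ | head  # an XML error response here is fine; it means the gateway is answering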
To test, you can use the minio client mc. Install it, then:
mc config host add myb2 http://127.0.0.1:9000 accountID applicationKey
mc ls myb2
Once again, replace accountID and applicationKey with the secrets from B2.
This should give you something like:
[1970-01-01 00:00:00 UTC] 0B mybucketname/
If you need server logs from minio, docker-compose logs minio is your friend.
Now that minio is working, let's hook it all up to Mastodon.
First, in your nginx config, if you have a Content-Security-Policy header (and you should), make a change similar to this:
add_header Content-Security-Policy [...] img-src 'self' data: https://f001.backblazeb2.com; media-src 'self' data: https://f001.backblazeb2.com; [...];
Basically, add the bucket host you determined above as a valid img-src and media-src.
Reload nginx.
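For example, on a host where nginx is managed by systemd (adjust to your init system):

sudo nginx -t                  # validate the config first
sudo systemctl reload nginx    # apply the new CSP header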
In .env.production, add the following:
S3_ENABLED=true
S3_PROTOCOL=https
S3_HOSTNAME=minio:9000
S3_ENDPOINT=http://minio:9000
S3_BUCKET=mybucketname
AWS_ACCESS_KEY_ID=accountID
AWS_SECRET_ACCESS_KEY=applicationKey
S3_CLOUDFRONT_HOST=f001.backblazeb2.com/file/mybucketname
The first four lines can stay as they are; for the rest, fill in the correct values for your setup (bucket name, B2 credentials, and the friendly-URL host you determined earlier).
If you want, you can docker-compose up -d now and notice that most of your existing assets are missing, but new ones (e.g. media from federated posts) should start working as they come in.
Use your browser's developer tools to check whether anything is still wrong; if assets can't be loaded or URLs are malformed, you'll see errors there.
All good? Hurrah.
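One way to spot-check: grab the URL of a freshly uploaded attachment from the developer tools' network tab and fetch it directly. The path below is a made-up example of the shape Mastodon generates; use a real URL from your own instance:

curl -I https://f001.backblazeb2.com/file/mybucketname/media_attachments/files/000/000/001/original/example.png
# a 200 here means the whole chain (Mastodon -> minio -> B2 -> public URL) works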
Finally, let's push all existing assets to our bucket, which will fix the missing assets you're seeing.
cd mastodon/public/system # old asset directory!
mc mirror . myb2/mybucketname
This will upload all existing assets. You can run this multiple times if it gets interrupted.
You'll see a few "cannot delete" errors; that's expected, and you can ignore them.
You're done? Reload Mastodon for great success.
When you're all done (maybe after letting it run for a couple of days), you can empty out the public/system directory on your main server; it's all on B2 now.
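Before deleting anything, a rough sanity check that local and remote file counts line up is cheap (myb2 and mybucketname are the alias and bucket names from above):

find mastodon/public/system -type f | wc -l
mc ls --recursive myb2/mybucketname | wc -l
# the counts should roughly match; re-run mc mirror if the bucket is short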
🎉