A CloudFormation stack that you can run in your AWS account to host a dedicated Satisfactory server.
Thanks to https://github.com/wolveix/satisfactory-server for the Docker image!
The dedicated server application runs on ECS Fargate, so you get a more-or-less "serverless" setup. It uses Fargate Spot, which allows you to get the cheapest possible setup, though AWS may choose to stop and restart your server. FWIW I've never actually observed that happening.
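If you want to confirm the service really is running on Spot capacity, something like this (using the cluster and service names described below) should show the capacity provider strategy:

```sh
# Show which capacity provider the service is using (expect FARGATE_SPOT)
aws ecs describe-services \
  --cluster games \
  --services satisfactory-server \
  --query "services[0].capacityProviderStrategy"
```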
The game files and saves are stored on EFS, a network-attached storage system that allows these files to persist when/if ECS tasks stop and restart. On a daily basis, the save files are copied up from EFS to an S3 bucket in your account, named `satisfactory-backups-{aws account id number}`. This makes for a cheap daily backup, plus easier access to those files.
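Since the backups land in an ordinary S3 bucket, you can browse and retrieve them with the aws-cli. A sketch, with a placeholder account id and save file name (check your S3 console for the exact bucket and key names):

```sh
# List the daily backups (123456789012 is a placeholder account id)
aws s3 ls s3://satisfactory-backups-123456789012/ --recursive

# Download one save file locally (the key name here is hypothetical)
aws s3 cp s3://satisfactory-backups-123456789012/MySave.sav .
```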
When an ECS task launches, it gets a public IP address, with the exposed ports required to access the dedicated server application from the Satisfactory game client. However, if the task/container ever stops, a new one will launch to replace it, and it will have a new IP address. Because of this, we need to work with DNS records that we can update.
You must bring a domain name that you own, provided as a stack parameter. For example, I might own a domain `rclark.life`. The stack builds a Route53 hosted zone for a subdomain of your domain, for example `satisfactory.rclark.life`. That hosted zone's name servers are a stack output. After launching the stack, you are responsible for making an NS record under the owned domain that references these name servers. That (in a sense) forwards traffic through your domain registrar to AWS Route53.
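You can read those name servers back out of the stack outputs at any time; for example (the stack name here is a placeholder for whatever you named yours):

```sh
# Print all stack outputs, including the hosted zone's name servers
aws cloudformation describe-stacks \
  --stack-name satisfactory \
  --query "Stacks[0].Outputs"
```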
The stack also creates a Lambda function that runs every time a new ECS task starts. It finds the new container's IP address and updates an A record in the Route53 hosted zone, for example `www.satisfactory.rclark.life`.
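Under the hood this is presumably just a Route53 UPSERT. A minimal sketch of the equivalent aws-cli call (the hosted zone id, IP, and TTL below are placeholders, not values from the stack):

```sh
# Roughly what the Lambda does: point the A record at the task's new public IP
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.satisfactory.rclark.life",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
```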
That means in the Satisfactory game client, you connect to the server at a domain name like `www.satisfactory.rclark.life`.
If the server application crashes (and it will), or if AWS stops your Spot task (I haven't seen it happen), you will have to exit your Satisfactory game client all the way to your desktop. Wait a few minutes before launching it again; in that time a new ECS task launches and the Lambda function updates the DNS A record. The game client appears to do the DNS lookup only when it launches, so you do have to fully exit the client and start it again after the record has been updated.
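You can check whether the record has caught up to the new task before relaunching the client:

```sh
# Compare this against the task's current public IP (see the
# describe-network-interfaces command below)
dig +short www.satisfactory.rclark.life
```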
Roughly, it seems to cost about $50-60 USD per month to run this setup 24/7. Almost all of that cost is from running the ECS Fargate Spot task constantly. My AWS bill last month was $60.08, and $46.03 of that was ECS.
You can turn the dedicated server off and back on again by adjusting the ECS service's desired task count. The service lives in the ECS console under the `games` cluster, which should host just 1 service called `satisfactory-server`.
Set the number of desired tasks to `0` to tell ECS to run nothing. When you want to play again, set it back to `1`. Doing this reduces the monthly cost dramatically... unless you actually play for most of the day on most days, in which case you're just gonna have to pay up.
Note: Never set the desired task count > `1`. There would be 2 dedicated servers trying to access the same game files and save files at that point, and things would definitely get weird.
Here are some aws-cli commands you can use to troubleshoot anything going wrong. Make sure you set the region properly for whichever AWS region you launched the stack into: either add `--region` flags, or set up a default region in your `~/.aws/config` file.
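For example, to persist a default region:

```sh
# Writes a default region into ~/.aws/config
aws configure set region us-east-1
```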
```sh
# OFF
aws ecs update-service \
  --cluster games \
  --service satisfactory-server \
  --desired-count 0

# ON
aws ecs update-service \
  --cluster games \
  --service satisfactory-server \
  --desired-count 1
```
```sh
# Open an interactive shell inside the running container (requires the
# Session Manager plugin for the aws-cli)
aws ecs execute-command \
  --cluster games \
  --task $(aws ecs list-tasks \
    --service-name satisfactory-server \
    --cluster games \
    --query "taskArns[0]" \
    --output text) \
  --container satisfactory-server \
  --command "/bin/bash" \
  --interactive
```
```sh
# Look up the task's current public IP address: find the task, pull the ENI id
# from its attachment details, then ask EC2 for that ENI's public IP
aws ec2 describe-network-interfaces \
  --network-interface-ids $(aws ecs describe-tasks \
    --cluster games \
    --tasks $(aws ecs list-tasks \
      --service-name satisfactory-server \
      --cluster games \
      --query "taskArns[0]" \
      --output text) \
    --query "tasks[0].attachments[0].details[1].value" \
    --output text) \
  --query "NetworkInterfaces[0].Association.PublicIp" \
  --output text
```
```sh
# Manually invoke the Lambda function that refreshes the DNS record. The
# trailing "outfile" is required by the CLI even for async Event invocations,
# and --cli-binary-format lets aws-cli v2 accept a literal JSON payload.
aws lambda invoke \
  --function-name satisfactory-dns-refresher \
  --invocation-type Event \
  --payload '{}' \
  --cli-binary-format raw-in-base64-out \
  outfile
```
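If the A record never seems to update, the Lambda's logs are the first place to look. Assuming the default `/aws/lambda/<function name>` log group naming:

```sh
# Tail the DNS refresher's logs (aws-cli v2)
aws logs tail /aws/lambda/satisfactory-dns-refresher --follow
```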
I had the following error trying to deploy this:

```
Resource handler returned message: "Invalid request provided: No available mount targets for fs-0b86c607a9ca70783 found in subnet-07134ff076354354d. Found the following mount targets in unavailable lifecycle states: fsmt-0b3010f74571ca4f9=creating, fsmt-0b3010f74571ca4f9=creating. (Service: DataSync, Status Code: 400, Request ID: f4021d01-95b8-48ba-8f07-9a15421ac96c)" (RequestToken: ad3a8344-2ba7-2938-200a-94cbaba58198, HandlerErrorCode: InvalidRequest)
```
It happened while it was trying to create the BackupSource (`AWS::DataSync::LocationEFS`). It feels like there's an order of operations needed to do this.
Trying another deploy, it happened again.
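The error suggests DataSync is racing the EFS mount targets, which start out in a creating lifecycle state. Before retrying, you can check that they've become available (filesystem id taken from the error above):

```sh
# Mount targets must report "available" before DataSync can use them
aws efs describe-mount-targets \
  --file-system-id fs-0b86c607a9ca70783 \
  --query "MountTargets[].LifeCycleState"
```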
When I tried uploading the template and viewing it in Application Composer, it seems to have added a:

```yaml
BackupSource:
  Type: AWS::DataSync::LocationEFS
  Properties:
    EfsFilesystemArn: !GetAtt Disk.Arn
    Subdirectory: /home/satisfactory-server/saved/server # on ECS task, saves in /config/saved/server
    Ec2Config:
      SecurityGroupArns:
        - !Sub arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:security-group/${DiskAccess.GroupId}
      SubnetArn: !Sub arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${Subnet}
```
I attempted to add a `DependsOn: Disk` to that, but it still failed. Any ideas?
Update: ignore the part about it adding a BackupSource. I had case-sensitive search on.
I THINK the DependsOn actually has to be on Mount instead of Disk, but I tried a different way first. (Actually, I tried both, but deploying 2 copies of this stack simultaneously failed with duplicate names for the cluster (Games) and the Lambda function (Refresher).)
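If the Mount theory is right, the fix would look something like this (untested; `Mount` being whatever logical ID the template uses for the EFS mount target):

```yaml
BackupSource:
  Type: AWS::DataSync::LocationEFS
  DependsOn: Mount # wait for the mount target, not just the filesystem
  Properties:
    EfsFilesystemArn: !GetAtt Disk.Arn
    Subdirectory: /home/satisfactory-server/saved/server
    Ec2Config:
      SecurityGroupArns:
        - !Sub arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:security-group/${DiskAccess.GroupId}
      SubnetArn: !Sub arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${Subnet}
```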
I tried re-deploying with Stack Failure Options set to "Preserve successfully provisioned resources", then retried when it failed. That seems to have worked better.
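For what it's worth, that console option corresponds to disabling rollback from the CLI, e.g. (stack and template names here are placeholders):

```sh
# --disable-rollback preserves successfully provisioned resources on failure,
# so a failed create can be retried from where it stopped.
# CAPABILITY_IAM is assumed, since the stack creates IAM roles.
aws cloudformation create-stack \
  --stack-name satisfactory \
  --template-body file://template.yml \
  --capabilities CAPABILITY_IAM \
  --disable-rollback
```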