This is an opinionated handbook on how I migrated all my Rails apps off the cloud and onto VPSes.
This is how I manage real production loads for my Rails apps. It assumes:
- Rails 7+
- Ruby 3+
- PostgreSQL
- Ubuntu Server 24.04
- Capistrano, Puma, Nginx
- Good expertise with Linux systems
It took me days to figure all this out. I wrote this mainly to document my own processes so I don't forget the steps when I need to do things again – I hope it's also useful for you if you're thinking about moving off the cloud.
`ed25519` is the new de facto standard for SSH keys, replacing RSA. But you need a couple of gems implementing it before you can start using ed25519 SSH keys:

```bash
gem install ed25519 bcrypt_pbkdf
```
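If you'd rather manage these through Bundler, a minimal sketch of the Gemfile entries (assuming Capistrano runs from your development machine, so the `development` group is just one sensible place for them):

```ruby
# Gemfile
group :development do
  # Required by net-ssh (which Capistrano uses) to handle ed25519 keys
  gem "ed25519"
  gem "bcrypt_pbkdf"
end
```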
This script I made should handle everything needed to turn a fresh Ubuntu Server machine into a production-ready server you can deploy Rails apps to using Capistrano.
After it runs, follow the post-install instructions to:
- Make sure the server has the right SSH keys so you can access it
- Set up a GitHub SSH key to deploy code from private repos
- Set up log rotation so the disk doesn't get full
- Define how to rotate logs for your Rails app:
```bash
sudo nano /etc/logrotate.d/appname
```
And write the log rotation rules:
```
/home/rails/apps/appname/current/log/*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
    create 0664 rails rails
    sharedscripts
    postrotate
        sudo -u rails /bin/bash -lc "~/.rvm/bin/rvm default do bundle exec pumactl -S /home/rails/apps/appname/shared/tmp/pids/puma.state -F /home/rails/apps/appname/shared/puma.rb restart"
        sudo -u rails systemctl --user restart sidekiq >/dev/null 2>&1 || true
    endscript
}
```
```
/home/rails/apps/appname/current/log/nginx.*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    create 0644 www-data adm
    sharedscripts
    postrotate
        [ -s /run/nginx.pid ] && kill -USR1 `cat /run/nginx.pid`
    endscript
}
```
- Test the newly defined log rotation configuration (the `-d` flag runs in debug mode, so nothing is actually rotated):

```bash
logrotate -d /etc/logrotate.d/appname
```
- Verify the `logrotate` timer is up and running:

```bash
systemctl status logrotate.timer
```
- (Optional) If you want to force log rotation immediately:

```bash
sudo logrotate -f /etc/logrotate.d/appname
```
Note: Adjust paths, user names, and other details according to your specific server setup. For example, change `rotate 7` to `rotate 3` if you only need to keep 3 days' worth of logs instead of 7, to avoid filling the disk as fast.
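The `postrotate` script in the Rails logrotate rules above assumes Sidekiq runs as a systemd user service for the `rails` user. If you don't have one yet, here's a minimal sketch of such a unit – the paths, service name, and `bash -lc` trick (a login shell so RVM's Ruby is on the PATH) are assumptions to adapt to your setup:

```ini
# /home/rails/.config/systemd/user/sidekiq.service
[Unit]
Description=Sidekiq for appname

[Service]
WorkingDirectory=/home/rails/apps/appname/current
ExecStart=/bin/bash -lc 'bundle exec sidekiq -e production'
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it as the `rails` user with `systemctl --user enable --now sidekiq`, and run `sudo loginctl enable-linger rails` so the user manager (and Sidekiq) keeps running when nobody is logged in.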
- Add the domain
- Set up the A records for @ and www
- On SSL/TLS, change to `Full (strict)` or else the page will fail with too many redirects
Enable Capistrano deployments to the project with Multirail.
Once done and configured, the project is deployable with `cap production deploy`.
Enforce `master.key` in `config/environments/production.rb`:

```ruby
config.require_master_key = true
```

On the server, make sure `master.key` exists at `/home/rails/apps/PROJECT_NAME/shared/config/master.key`.
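If you're deploying with Capistrano, the usual way to get that file symlinked into each release from `shared/` is `linked_files` – a sketch, assuming a standard `config/deploy.rb`:

```ruby
# config/deploy.rb
append :linked_files, "config/master.key"
```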
Add:

```ruby
config.generators do |g|
  g.orm :active_record, primary_key_type: :uuid
end

config.active_job.queue_adapter = :sidekiq
config.active_job.queue_name_prefix = "myapp_production"
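```

Note that with `queue_name_prefix` set, Active Job enqueues to prefixed queues like `myapp_production_default`, so Sidekiq has to listen on those names. A minimal sketch of `config/sidekiq.yml` under that assumption (the mailers queue is just an example):

```yaml
# config/sidekiq.yml
:concurrency: 5
:queues:
  - myapp_production_default
  - myapp_production_mailers
```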
And that's it! If your Rails app is small enough, just this will do the trick. You can run the database in the same machine as your Rails app, and that should work unless you get crazy amounts of traffic.
What if your app starts growing and becomes slow (high CPU, high load times, etc.)?
This is usually because PostgreSQL is using most of the CPU. First try scaling up the server to have more CPU; that usually solves it.
When that doesn't cut it anymore, separate the PostgreSQL database into a different instance:
- Launch a new Hetzner instance in the same region with only a public IPv6 address (so it has an internet connection during setup; we'll remove this later) – make sure to add it to a private network (and add the main web server to the same private network too)
- Set up the instance with my PostgreSQL production server setup script
- Edit your PostgreSQL config as described in the post-setup instructions. If your PostgreSQL server is beefy enough, and your application requires it, you may try increasing some of the default settings by editing `postgresql.conf` (usually located at `/etc/postgresql/<POSTGRESQL_VERSION>/main/postgresql.conf`, but check the post-setup instructions to make sure):
```
# Connection Settings
max_connections = 500
superuser_reserved_connections = 3

# Memory Settings
shared_buffers = 8GB
work_mem = 16MB
maintenance_work_mem = 1GB
effective_cache_size = 24GB
```
It took me a while to understand what PostgreSQL's `max_connections` meant and how it relates to Rails' connection pool. Rails has automatic connection pooling to the database, so each thread doesn't connect directly to the DB; instead it goes through a pool.
That pool size is: # of Puma workers × # of threads per worker × pool size per worker (as defined in `config/database.yml`).
If you have 8 Puma workers, 2 threads per worker, and 10 connections per pool, you will have a total pool size of 8 × 2 × 10 = 160 connections. Your PostgreSQL server should then have 160 or more `max_connections` so the pool threads don't wait indefinitely for a DB connection to become available.
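For reference, the pool size per worker is the `pool` key in `config/database.yml`; a minimal sketch (the env var fallback is just one common convention, not something this setup requires):

```yaml
# config/database.yml
production:
  # Each Puma worker process gets its own pool of this size
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 10 } %>
```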
- Restart the PostgreSQL server:

```bash
sudo systemctl restart postgresql
```
- Follow the post-setup instructions to add a strong password to the default `postgres` user
- Instead of creating a user and DB interactively for our Rails app, run `sudo -u postgres psql` and manually set up the right DB and user for the Rails project (`appname` is your Rails app name):
```sql
CREATE USER appname WITH PASSWORD 'strong_password';
CREATE DATABASE appname_production;
GRANT ALL PRIVILEGES ON DATABASE appname_production TO appname;
ALTER DATABASE appname_production OWNER TO appname;
```
- (If you're using Blazer) also set up the Blazer PostgreSQL role:

```sql
CREATE USER blazer WITH PASSWORD 'strong_password';
GRANT CONNECT ON DATABASE appname_production TO blazer;
GRANT USAGE ON SCHEMA public TO blazer;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO blazer;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO blazer;
```
- Set up log rotation for PostgreSQL
- Update `apps/appname/shared/config/database.yml` to point the database to `10.0.0.x` (your DB instance private IP) instead of `localhost`. Update the user and password too, if necessary (see the sketch after this list)
- Re-deploying the Rails app with `cap production deploy` should run all necessary migrations
- Remove the public IP and internet interface from the DB instance so it's only accessible via the private network
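A sketch of what the updated `database.yml` might look like – the IP, user, and password are placeholders for your own values:

```yaml
# /home/rails/apps/appname/shared/config/database.yml
production:
  adapter: postgresql
  host: 10.0.0.2            # your DB instance's private IP, instead of localhost
  database: appname_production
  username: appname
  password: strong_password
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 10 } %>
```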
This process will take 1-2 hours depending on how large the DB is.
We will make a 1:1 replica – a carbon copy – of the old database. This is painful. It has to be done in a very precise way, paying attention to the details, or else everything will fail.
From now on, we will refer to the original database as the "donor" database, and the new database will just be the "new" database.
- First, use `turnout` to put your donor app in maintenance mode so there are no new writes to the donor DB and we can export the data completely and safely:

```bash
cd ~/apps/appname/current && RAILS_ENV=production bundle exec rake maintenance:start allowed_ips="x.x.x.x"
```
- Make a dump of the donor DB. You may want to attach an additional volume to your instance to hold your backup if it's too heavy:

```bash
pg_dump -h [db_host] -U appname -d appname_production -b -v --clean --if-exists -f appname_production_dump.sql
```
The `--clean` and `--if-exists` flags are important – they emit `DROP ... IF EXISTS` statements before each object is recreated, so the dump overwrites whatever is in the new database when importing.
DO NOT export with `--data-only`: for one, it's incompatible with the other flags, and it's not necessary.
- `scp` this dump into the new Rails web instance (configure whatever SSH keys are necessary to make `scp` work)
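Something like this, assuming you deploy as the `rails` user and `NEW_SERVER_IP` is the new instance's public IP:

```bash
scp appname_production_dump.sql rails@NEW_SERVER_IP:~/
```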
- Run `md5sum` on both files (the dump on the donor machine and the copy on the new production machine) to ensure the dump was copied correctly
From the new Rails web instance (which should be in the same private network as the target DB instance), we need to set up a clean PostgreSQL database to accept a carbon copy of the old production.
What a clean new PostgreSQL database means: a database newly created, with the right user and permissions, database name and role name matching the donor DB, no data, and a schema exactly like the donor schema.
To set this up, first log into psql as the superuser `postgres`:

```bash
psql -h [db_host] -U postgres
```
Create all necessary extensions for your Rails app:

```sql
CREATE EXTENSION IF NOT EXISTS plpgsql;
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- Make sure to create other extensions you're using
-- search your Rails codebase for "enable_extension"
```
It might be a good idea to drop and re-create the new database from scratch to ensure a clean state:

```sql
DROP DATABASE appname_production;
CREATE DATABASE appname_production;
```
Grant appropriate permissions to the new database and user `appname`. We're assuming you created the role in the previous section:

```sql
GRANT ALL PRIVILEGES ON DATABASE appname_production TO appname;
ALTER DATABASE appname_production OWNER TO appname;
GRANT USAGE ON SCHEMA public TO appname;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO appname;
```
Now, before we can import the dump, we need to set the `session_replication_role` for the user `appname` to `replica` (this disables triggers and foreign key enforcement during the import, so rows can load in any order):

```sql
ALTER USER appname SET session_replication_role = 'replica';
```
Ensure it has been set right with:

```bash
psql -h [db_host] -U appname -d appname_production -c "SHOW session_replication_role;"
```
Ensure we've run all necessary Rails migrations so the database has the right schema, and only the schema:

```bash
cd ~/apps/appname/current && RAILS_ENV=production bundle exec rails db:migrate
```
Only now can we import the database dump:

```bash
psql -h [db_host | localhost] -U appname -d appname_production < appname_production_dump.sql
```
Let's make sure we reset the `appname` user's `session_replication_role` back to the default value:

```sql
ALTER USER appname RESET session_replication_role;
```
- Verify all data has been imported right: run `ANALYZE;` to update statistics, then check record counts for all tables:

```sql
ANALYZE;

SELECT schemaname, relname, n_live_tup, n_dead_tup, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname ASC;
```
- Use `allgood` to make sure the new DB connection is healthy (see the sketch after this list)
- Redirect the DNS to the new server and disable maintenance mode in `turnout`; things should now be running on the new server
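In case it helps, an `allgood` check for the DB connection looks something along these lines – written from memory, so double-check the exact DSL against the allgood README:

```ruby
# config/allgood.rb
check "We have an active database connection" do
  make_sure ActiveRecord::Base.connection.active?
end
```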
And that's it! By now you should have successfully migrated a Rails app from any cloud provider to your own machine.
Below are just some commands / code snippets I found useful to have around and that I always needed to google again because I kept forgetting them.
Run any rake task in the production environment:

```bash
cd /home/rails/apps/appname/current && RAILS_ENV=production bundle exec <rake_command>
```
Goal: make the prompt say `ubuntu@easily-identifiable-machine-name` instead of the default EC2-style names like `ubuntu@ip-172-168-1-41`:

```bash
sudo hostnamectl set-hostname my-identifiable-name
```
Sometimes you need to install a new version of Ruby, but it complains about the OpenSSL version or the libffi version or whatever. On a macOS machine using Homebrew, you can just do:

```bash
rvm install 3.3.0 --with-openssl-dir=$(brew --prefix openssl) --with-ffi-dir=$(brew --prefix libffi)
```

This guarantees you're using the right OpenSSL and libffi versions, so the configure step runs without problems.
Use CloudCraft to create an empty blueprint, then just navigate between the services and see their estimates.
```bash
sudo su - postgres
psql
```

or:

```bash
sudo -u postgres psql
```

Then `\l` to list all DBs, `\c` to connect to one, `\dt` to list tables inside a DB.
Dump a database (as Linux user `postgres`):

```bash
pg_dump database_name > dump_filename.sql
```

Restore a `pg_dump`:

```bash
psql database_name < dump_filename.sql
```
We can export large databases in a custom format (`-F c`) so that they take up less space on disk:

```bash
pg_dump -h endpoint.us-east-1.rds.amazonaws.com -U username -d database_production -F c -b -v -f database_production_dump.sql
```

but then we have to import them with `pg_restore` instead of `psql`:

```bash
pg_restore -U <username> -h localhost -d <new_db_name> -v database_production_dump.sql
```
When you try to drop a database that's still in use, the error is:

```
ERROR: database "database_name" is being accessed by other users
DETAIL: There is 1 other session using the database.
```
First, stop further connections:

```sql
REVOKE CONNECT ON DATABASE database_name FROM public;
```

Then, connect to the database with `\c database_name`.
Then, terminate all connections to the database:

```sql
SELECT pid, pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = current_database() AND pid <> pg_backend_pid();
```
Then you can drop the database.
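That is:

```sql
DROP DATABASE database_name;
```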
Get the size of all databases (this one is for MySQL):

```sql
SELECT table_schema "DB Name",
ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"
FROM information_schema.tables
GROUP BY table_schema;
```
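If you need the same for PostgreSQL, a sketch using the built-in `pg_database_size`:

```sql
-- Size of every database, largest first
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```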
Exporting ALL databases:

```bash
mysqldump -u root -p --all-databases > all_db.sql
```

Exporting SOME databases:

```bash
mysqldump -u root -p --databases database1 database2 > some_db.sql
```

Importing DB dumps:

```bash
mysql -u root -p < all_db.sql
```