- Create Django App per DO tutorial or the DO quickstart (older)
- ensure you have set the following LOCAL environment variables, e.g. in `settings.local` or by other means:

```python
import os

os.environ.setdefault('DEVELOPMENT_MODE', 'True')
os.environ.setdefault('DEBUG', 'True')
```
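For reference, a minimal sketch of how `settings.py` might read these flags back (the string comparison mirrors the DO tutorial's pattern; adapt the names to your settings):

```python
# settings.py (sketch): read the deployment flags set above.
# Env vars are strings, so compare against 'True' explicitly.
import os

DEBUG = os.getenv('DEBUG', 'False') == 'True'
DEVELOPMENT_MODE = os.getenv('DEVELOPMENT_MODE', 'False') == 'True'
```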
- Create Spaces (S3 Bucket) for media
- (upload media to new space)
- Optional settings for the new storage space if you want to use a custom endpoint:
  - In the Space settings, add a CORS configuration for the base domain (not sure this is necessary)
  - Enable CDN for the new space
  - Add a new subdomain, e.g. `media.example.net`, to the space
- Get an access key for the space in API -> Spaces Keys
- Add environment variables for media; rough guide in this DO tutorial
- Install the storage backend and freeze requirements:

```sh
pip install django-storages boto3
pip freeze > requirements.txt
```
- update `settings.py`, probably like the following:
```python
# S3 (Spaces) storage
# these all do what I expect
AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')
AWS_DEFAULT_ACL = 'public-read'  # public files by default
AWS_QUERYSTRING_AUTH = False     # don't add validation querystrings
AWS_S3_FILE_OVERWRITE = False    # append characters to dupe filenames

# need this!
AWS_STORAGE_BUCKET_NAME = os.getenv('AWS_STORAGE_BUCKET_NAME')

# ENDPOINT_URL is the region endpoint django-storages connects to;
# CUSTOM_DOMAIN, when set, is used instead when building public URLs
AWS_S3_ENDPOINT_URL = os.getenv('AWS_S3_ENDPOINT_URL')
AWS_S3_CUSTOM_DOMAIN = os.getenv('AWS_S3_CUSTOM_DOMAIN')

AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}

# for user uploads: local disk in development, Spaces in production
if DEBUG:
    DEFAULT_FILE_STORAGE = 'django.core.files.storage.FileSystemStorage'
else:
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
    MEDIA_URL = f"{AWS_S3_ENDPOINT_URL}/"  # not sure this matters in prod?
```
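With that in place, any `FileField` picks up `DEFAULT_FILE_STORAGE` automatically; a hypothetical model to illustrate:

```python
# models.py (hypothetical example): uploads through this field go to
# local disk under DEBUG and to the Spaces bucket in production, with
# no storage-specific code in the model itself.
from django.db import models

class Document(models.Model):
    title = models.CharField(max_length=200)
    # 'uploads/' becomes the key prefix inside the bucket
    attachment = models.FileField(upload_to='uploads/')
```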
- Configure Domains:
  - App > Settings > Domains > Add domain
  - If your nameservers are already on DO, “We manage your domain” may be safe; it appears they just append to your existing records
  - MX records may be disrupted (temporarily?)
- Set `ALLOWED_HOSTS` to allow traffic from all those domains:
  - append to `ALLOWED_HOSTS` in `settings.py`, e.g. `ALLOWED_HOSTS += os.getenv("DJANGO_ALLOWED_HOSTS", "127.0.0.1,localhost").split(",")` (note the `+=`)
  - ...or add them to the environment variable `DJANGO_ALLOWED_HOSTS`, comma-separated with no spaces; this seems more reliable
  - OK to add wildcard domains, e.g. `${APP_DOMAIN},example.com,.example.net`?
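A minimal sketch of that append in `settings.py` (the base list is whatever your settings already define):

```python
# settings.py (sketch): extend, don't replace, the existing hosts.
# DJANGO_ALLOWED_HOSTS is comma-separated with no spaces.
import os

ALLOWED_HOSTS = []  # or whatever your base settings define
ALLOWED_HOSTS += os.getenv("DJANGO_ALLOWED_HOSTS", "127.0.0.1,localhost").split(",")
```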
- Create a custom endpoint for media
  - don't forget to add this endpoint to `AWS_S3_CUSTOM_DOMAIN` in the environment variables
- Upload legacy DB dump, probably following this HOWTO. I did this manually from a JSON file using `./manage.py loaddata`, but that is not ideal. WIP:
  - create a new, empty db
  - temporarily disable Trusted Sources on that db, OR (better) add the local computer to Trusted Sources
  - this seemed to work:

```sh
pg_restore -d <your_connection_URI> --no-owner --jobs 4 <path/to/your_dump_file.pgsql>
```

  - point the db environment variable to the new db and rebuild (?)
  - Do NOT run `migrate`
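On the “point the db environment variable” step, a hedged sketch of one common way to consume it, assuming the `dj-database-url` package (not otherwise used here) and the platform's `DATABASE_URL` binding:

```python
# settings.py (sketch): point Django at whatever DATABASE_URL the
# platform injects. Assumes the dj-database-url package; the local
# fallback URL is hypothetical.
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(
        default="postgres://localhost:5432/app_dev",  # hypothetical fallback
        conn_max_age=600,
    )
}
```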
- Encrypt sensitive environment variables in the GUI
- Disable Trusted Sources (TODO: find a more secure way to do this)
- Copy the connection settings from the REMOTE database
- Dump the db on your local:

```sh
pg_dump -h <remote_host> -p <port> -U <remote_user> -Fc <remote_database> > path/to/dbdump.pgsql
```

  - You will be prompted for a password, so have that handy
- Re-enable Trusted Sources
- Create a new, empty DB on your local
  - Do NOT run `migrate`
- Load the exported db to the local:

```sh
pg_restore -h localhost -U <local_user> -d <local_database> path/to/dbdump.pgsql
```
- Private file service on S3
- redirect away from wildcard domains (e.g. to www)
- build commands, e.g. `./manage.py migrate` (here's a hint anyway)
- need more info on DNS/domains
- periodically dumping the DB for local dev
- sync media for local dev
- automated Spaces backup
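For the media-sync item, a hedged sketch using boto3 (installed above) against the same env vars `settings.py` reads; the local `media/` destination is an assumption:

```python
# sync_media.py (sketch): pull every object in the Spaces bucket down
# to a local media/ directory for development. Bucket and endpoint
# come from the same env vars used in settings.py.
import os
import pathlib

import boto3

client = boto3.client(
    "s3",
    endpoint_url=os.getenv("AWS_S3_ENDPOINT_URL"),
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
)
bucket = os.getenv("AWS_STORAGE_BUCKET_NAME")
dest = pathlib.Path("media")  # hypothetical local media root

# paginate so buckets with >1000 objects are fully listed
for page in client.get_paginator("list_objects_v2").paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        target = dest / obj["Key"]
        target.parent.mkdir(parents=True, exist_ok=True)
        client.download_file(bucket, obj["Key"], str(target))
```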