@credit Yan Zhu (https://github.com/nina-zhu)
Before you start this guide, you should run through the "How To Serve Flask Applications with uWSGI and Nginx on Ubuntu 14.04" guide, which sets up virtualenv, uWSGI, and Nginx. In that guide, we developed a very simple Flask application consisting of a single file, "firstflask.py", with only nine lines of code; it served as a sample to show the general steps. In this guide, we will create a complete user session listing application with login and logout functionality. We will use MariaDB to store the user records, and Redis to store the session data and background tasks.
Let's get started.
First we should set up the MariaDB server.
Install the packages from the repositories by typing:
sudo apt-get update
sudo apt-get install mariadb-server libmariadbclient-dev libssl-dev
You will be asked to select and confirm a password for the administrative MariaDB account.
MariaDB is most likely started automatically after installation; you can run the following command to check:
ps -ef | grep mysql
If it is not started, start it manually by running:
sudo service mysql start
If you ran through the previous guide "How To Serve Flask Applications with uWSGI and Nginx on Ubuntu 14.04", you should have a start.sh file; add the above command to that file so all necessary services start in one go.
And to stop MariaDB server manually, run:
sudo service mysql stop
Similarly, add the above command to the stop.sh
file from the previous guide.
After installing MariaDB from Ubuntu's repositories, it will start automatically when the system boots. If you want, you can disable this automatic start by running:
sudo update-rc.d mysql disable
This changes S19mysql to K81mysql in the /etc/rc[2-5].d directories, so the MariaDB database server won't be started at boot time.
With the MariaDB server started, you can then run a simple security script to perform the necessary initial configuration:
sudo mysql_secure_installation
You'll be asked for the administrative password you set for MariaDB during installation. Afterwards, you'll be asked a series of questions. Apart from the first question, which asks you to choose another administrative password, select yes for each one.
With the installation and initial database configuration out of the way, we can move on to create our database and database user.
We can start by logging into an interactive session with our database software by typing the following:
mysql -u root -p
You will be prompted for the administrative password you selected during installation. Afterwards, you will be given a prompt.
First, we will create a database for our Flask project. Each project should have its own isolated database for security reasons. We will call our database "burnin" as our Flask application is an experimental app for the "burn-in" project. We'll set the default type for the database to UTF-8, which is what Flask expects:
CREATE DATABASE burnin CHARACTER SET UTF8;
Remember to end all commands at an SQL prompt with a semicolon.
Next, we will create a database user which we will use to connect to and interact with the database. Set the password to something strong and secure (substitute user and password below with your own values):
CREATE USER user@localhost IDENTIFIED BY 'password';
Now, all we need to do is give our database user access rights to the database we created (substitute user with yours):
GRANT ALL PRIVILEGES ON burnin.* TO user@localhost;
Flush the changes so that they will be available during the current session:
FLUSH PRIVILEGES;
Exit the SQL prompt to get back to your regular shell session:
exit
Now that our database is set up, we can start developing our Flask application. Before writing application code, we need to prepare a virtual environment and install some packages.
Following the section "Create Flask Projects" in the guide "How To Serve Flask Applications with uWSGI and Nginx on Ubuntu 14.04", we first create a virtual environment named after our project, "burnin":
mkvirtualenv burnin
Your prompt will change to indicate that you are now operating within your new virtual environment. Then we install Flask locally in our virtual environment:
pip install Flask
Next we have to choose a database client package that will allow us to use the database we configured. Many people prefer SQLAlchemy for database access. SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL. We will use SQLAlchemy in this guide.
Like Flask, we install SQLAlchemy locally in the virtual environment:
pip install SQLAlchemy
Since we use MariaDB as the database backend, we select PyMySQL as the dialect/DBAPI option. Install it via:
pip install PyMySQL
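With SQLAlchemy and PyMySQL installed, the two connect through a database URL of the form mysql+pymysql://user:password@host/db, which we'll use later when configuring the engine. As a small illustrative sketch (the mysql_url helper below is our own, not part of either library):

```python
# Illustrative helper (not from SQLAlchemy or PyMySQL) showing how the
# connection URL is assembled from the credentials created above.
def mysql_url(user, password, host, db, charset='utf8'):
    return 'mysql+pymysql://%s:%s@%s/%s?charset=%s' % (
        user, password, host, db, charset)

print(mysql_url('user', 'password', 'localhost', 'burnin'))
# mysql+pymysql://user:password@localhost/burnin?charset=utf8
```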
After installing these necessary packages, we can write our application now.
For an application like this, it's encouraged to use a package instead of a module and to drop the models into a separate module. While that is not strictly necessary, it makes a lot of sense, so we use a package here. Note that the following steps are done in the virtual environment.
Make a directory to hold our Flask project, move into the directory afterwards:
mkdir ~/burnin
cd ~/burnin
Then we create a child directory of the same name to hold the code itself, and move into the child directory:
mkdir burnin
cd burnin
To make it a package, there must be an __init__.py file:
vi __init__.py
We first add the imports we need, then create the actual application, then add the config section. Next we define a teardown_appcontext() decorator, which is executed every time the application context tears down; we use it to remove database sessions at the end of each request or when the application shuts down. Finally we define two views: one for the index page and another for the user listing page. The index page simply redirects to the user listing page, and the user listing page shows a JSON string of the users available in the database. The code is:
from flask import Flask, redirect, url_for, json
from burnin.database import db_session
from burnin.models import User
application = Flask(__name__)
application.config.from_object(__name__)
application.config.update(dict(
JSONIFY_PRETTYPRINT_REGULAR=False
))
application.config.from_envvar('FLASK_SERVER_SETTINGS', silent=True)
@application.teardown_appcontext
def shutdown_dbsession(exception=None):
db_session.remove()
@application.route('/')
def index():
return redirect(url_for('users'))
@application.route('/api/users')
def users():
users = db_session.query(User).all()
return json.jsonify([user.to_dict() for user in users])
Save and close the file. As you can see, we import things from burnin.database and burnin.models: burnin.database defines how to connect to the database and initialize its data, while burnin.models defines our User model. We create database.py:
vi database.py
With the following content (substitute user:password with the database user you created earlier):
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine(
'mysql+pymysql://user:password@localhost/burnin?charset=utf8',
connect_args = {
'port': 3306
},
echo='debug',
echo_pool=True
)
db_session = scoped_session(
sessionmaker(
bind=engine,
autocommit=False,
autoflush=False
)
)
Base = declarative_base()
def init_db():
import burnin.models
Base.metadata.create_all(engine)
from burnin.models import User
db_session.add_all([
User(username='admin', password='fortinet'),
User(username='test', password='fortinet')
])
db_session.commit()
print('Initialized the database.')
Save and close the file. As the code above shows, we use SQLAlchemy in a declarative way, which allows you to define tables and models in one go; you'll see that in models.py. The create_engine() connection string shows that we are using MySQL/PyMySQL as the dialect/DBAPI combination. With scoped_session, SQLAlchemy provides contextual (thread-local) sessions for us, so we don't have to worry about threads here. We defined the init_db() function to initialize the database data: it creates the users table and adds two records to it. Next we create models.py:
vi models.py
with the following code:
from sqlalchemy import Column, Integer, String
from burnin.database import Base
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
username = Column(String(20), unique=True)
password = Column(String(20))
def __init__(self, username=None, password=None):
self.username = username
self.password = password
def __repr__(self):
return "<User(id=%d, username='%s', password='%s')>" % (self.id, self.username, self.password)
def to_dict(self):
return {
'id': self.id,
'username': self.username,
'password': self.password
}
Save and close the file. As you can see, to define a model we just subclass the Base class that was created in database.py. We defined a User model mapped to the users table with three columns: id, username, and password. In order to jsonify the list of users, we defined a to_dict() method on the class to turn a User object into a plain Python dict.
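To preview what jsonify will emit for a list of these dicts, here is a stdlib-only sketch; Flask's jsonify is built on the same json machinery, and the compact separators mimic its non-pretty-printed output:

```python
import json

# Two dicts shaped like User.to_dict() output.
users = [
    {'id': 1, 'username': 'admin', 'password': 'fortinet'},
    {'id': 2, 'username': 'test', 'password': 'fortinet'},
]

# Compact, key-sorted JSON, similar to what the /api/users view returns.
print(json.dumps(users, sort_keys=True, separators=(',', ':')))
```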
At this point, we've made a "burnin" package; let's create some scripts that use it. Go back to the project root directory and create a script to initialize the database data:
cd ..
vi initdb.py
Put the following lines in it:
from burnin.database import init_db
init_db()
Save and close it. Then create a script to run the development server:
vi runserver.py
Add these lines in it:
from burnin import application
application.run(debug=True)
Save and close it. Finally, we create a file that will serve as the WSGI entry point for our application, telling our uWSGI server how to interact with it. We'll call the file wsgi.py:
vi wsgi.py
The file is incredibly simple, we can simply import the Flask instance from our application and then run it:
from burnin import application
if __name__ == "__main__":
application.run()
Save and close the file. Now that the code is finished, let's run our application.
First we have to initialize the database data, just run the command:
python initdb.py
You should see "Initialized the database." at the last line of the output.
We've prepared the database; now we can test our Flask app by typing:
python runserver.py
Visit the URL specified in the terminal output (most likely http://127.0.0.1:5000/
) in your web browser. You should see something like this:
[{"id":1,"password":"fortinet","username":"admin"},{"id":2,"password":"fortinet","username":"test"}]
When you are finished, hit CTRL-C
in your terminal window to stop the Flask development server.
Our application is now written and our entry point established. We can now move on to uWSGI.
The first thing we will do is test that uWSGI can serve our application.
We can do this by simply passing it the name of our entry point. We'll also specify the socket, so that it is started on a publicly available interface, and the protocol, so that it speaks HTTP instead of the uwsgi binary protocol. We also need to specify home (or virtualenv, venv, or pyhome) to set PYTHONHOME/virtualenv (substitute user with yours):
uwsgi --socket 0.0.0.0:8000 --protocol=http --home /home/user/.virtualenvs/burnin -w wsgi
If you visit your server's domain name or IP address with :8000
appended to the end in your web browser, you should see the page that you saw before when running the Flask development server.
When you have confirmed that it's functioning properly, press CTRL-C
in your terminal window.
We are now done with our virtual environment, so we can deactivate it:
deactivate
Any operations now will be done in the system's Python environment.
We have tested that uWSGI is able to serve our application, but we want something more robust for long-term usage. As you saw in the guide "How To Serve Flask Applications with uWSGI and Nginx on Ubuntu 14.04", we run uWSGI in "Emperor mode" and keep a directory called /etc/uwsgi/sites to store our configuration files. We can create a uWSGI configuration file there with the options we want; we'll call it burnin.ini:
sudo vi /etc/uwsgi/sites/burnin.ini
Inside, as you already know, we start off with the [uwsgi] header so that uWSGI knows to apply the settings. We again use the project and base variables, just like in the previous guide. We set the chdir option to change into the project root directory, and the home option to indicate the virtual environment for our project. We specify the module by referring to our wsgi.py file, minus the extension. The configuration of these items will look like this (substitute user with yours):
[uwsgi]
project = burnin
base = /home/user
chdir = %(base)/%(project)
home = %(base)/.virtualenvs/%(project)
module = wsgi
Next, we'll tell uWSGI to start in master mode and use the cheaper subsystem for dynamic worker scaling:
[uwsgi]
project = burnin
base = /home/user
chdir = %(base)/%(project)
home = %(base)/.virtualenvs/%(project)
module = wsgi
master = true
processes = 10
cheaper = 2
cheaper-initial = 5
cheaper-step = 1
cheaper-algo = spare
cheaper-overload = 5
When we were testing, we exposed uWSGI on a network port. However, we're going to be using Nginx to handle actual client connections, which will then pass requests to uWSGI. Since these components are operating on the same computer, a Unix socket is preferred because it is more secure and faster. We'll call the socket burnin.sock
and place it in our project directory.
We'll also have to change the permissions on the socket. We gave the Nginx group ownership of the uWSGI process in the previous guide; here we need to make sure the group owner of the socket can read from and write to it. We will also clean up the socket when the process stops by adding the vacuum option:
[uwsgi]
project = burnin
base = /home/user
chdir = %(base)/%(project)
home = %(base)/.virtualenvs/%(project)
module = wsgi
master = true
processes = 10
cheaper = 2
cheaper-initial = 5
cheaper-step = 1
cheaper-algo = spare
cheaper-overload = 5
socket = %(base)/%(project)/%(project).sock
chmod-socket = 660
vacuum = true
You may have noticed that we did not specify a protocol like we did from the command line. That is because by default, uWSGI speaks using the uwsgi
protocol, a fast binary protocol designed to communicate with other servers. Nginx can speak this protocol natively, so it's better to use this than to force communication by HTTP.
When you are finished, save and close the file.
Now we need to configure Nginx to proxy requests.
Begin by creating a new server block configuration file in Nginx's sites-available
directory. We'll simply call this "burnin" to keep in line with the rest of the guide:
sudo vi /etc/nginx/sites-available/burnin
Open up a server block and tell Nginx to listen on the default port 80. We also need to tell it to use this block for requests for our server's domain name or IP address, and not to worry if it can't find a favicon (substitute server_domain_or_IP with yours):
server {
listen 80;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
}
The only other thing that we need to add is a location block that matches every request. Within this block, we'll include the uwsgi_params
file that specifies some general uWSGI parameters that need to be set. We'll then pass the requests to the socket we defined using the uwsgi_pass
directive (substitute user with yours):
server {
listen 80;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
include uwsgi_params;
uwsgi_pass unix:/home/user/burnin/burnin.sock;
}
}
That's actually all we need to serve our application. Save and close the file when you're finished.
To enable the Nginx server block configuration we've just created, link the file to the sites-enabled
directory:
sudo ln -s /etc/nginx/sites-available/burnin /etc/nginx/sites-enabled
With the file in that directory, we can test for syntax errors by typing:
sudo nginx -t
If this returns without indicating any issues, we can restart the Nginx process to read our new config. But wait: if you remember, we created a sample project named "firstflask" in the previous guide, and it listened on port 80 too. To avoid a conflict, we disable that project:
sudo rm /etc/nginx/sites-enabled/firstflask
If you also don't want the uWSGI server to serve the sample project "firstflask" anymore, remove its configuration file:
sudo rm /etc/uwsgi/sites/firstflask.ini
If you remember, we have two scripts called start.sh and stop.sh to start and stop Nginx, uWSGI, and MariaDB in one go, so let's restart this way (substitute path_to_script with your own path where you store "start.sh" and "stop.sh"):
cd path_to_script
./stop.sh
./start.sh
You should now be able to go to your server's domain name or IP address in your web browser and see the JSON string of users.
At this point, the users page can be seen by anyone who accesses our application. We are going to enhance it by adding a login requirement. For that we'll use sessions, and we need a session handler responsible for storing and retrieving the data saved in sessions. Flask has an object called session that is implemented on top of cookies by default, but here we'll use Redis.
Redis is an open source key-value cache and storage system, also referred to as a data structure server for its advanced support for several data types, such as hashes, lists, sets, and bitmaps, amongst others.
The first thing we need to do is get the Redis server installed.
According to the Redis Quick Start, the suggested way of installing Redis is compiling it from sources, as Redis has no dependencies other than a working GCC compiler and libc. Installing it using the package manager of your Linux distribution is somewhat discouraged, as the available version is usually not the latest.
You can either download the latest Redis tar ball from the redis.io web site, or you can alternatively use this special URL that always points to the latest stable Redis version.
In order to compile Redis, follow these simple steps:
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
At this point you can check whether your build works correctly by typing make test, but this is an optional step. After compilation, the src directory inside the Redis distribution is populated with the different executables that are part of Redis:
- redis-server is the Redis server itself.
- redis-sentinel is the Redis Sentinel executable (monitoring and failover).
- redis-cli is the command line interface utility to talk with Redis.
- redis-benchmark is used to check Redis performance.
- redis-check-aof and redis-check-dump are useful in the rare event of corrupted data files.
It is a good idea to copy both the Redis server and the command line interface in proper places, either manually using the following commands:
sudo cp src/redis-server /usr/local/bin/
sudo cp src/redis-cli /usr/local/bin/
Or just using:
sudo make install
And make sure that /usr/local/bin
is in your PATH
environment variable so that you can execute both the binaries without specifying the full path.
The simplest way to start the Redis server is just executing the redis-server
binary without any argument:
redis-server
In this way Redis is started without any explicit configuration file, so all the parameters will use the internal default. This is perfectly fine if you are starting Redis just to play a bit with it or for development, but for production environments you should use a configuration file.
In order to start Redis with a configuration file use the full path of the configuration file as first argument, like this:
redis-server /etc/redis/redis.conf
You should use the redis.conf
file included in the root directory of the Redis source code distribution as a template to write your configuration file.
To check if Redis is working, try this command:
redis-cli ping
This will connect to a Redis instance running on localhost
at port 6379
(You can change the host and port used by redis-cli
, just try the --help
option to check the usage information). You should get a PONG
as response.
To shutdown Redis, use command:
redis-cli shutdown
Like the previously installed Nginx, uWSGI, and MariaDB, we can also run Redis as a system service. If you installed Redis the suggested way, i.e. by compiling it from sources, it won't become a service automatically; we need to set this up manually with an init script.
First make sure you already copied redis-server
and redis-cli
executables under /usr/local/bin
.
Then create a directory to store your Redis configuration file and another to store your data:
sudo mkdir /etc/redis
sudo mkdir /var/redis
As we explained in the uWSGI section, since all of the components are operating on a single server, we can use a Unix socket instead of a network port (a TCP socket); this is more secure and offers better performance. We also have to put the socket under the www-data group so that the uwsgi instance can access it. As a result, instead of copying the init script found under the utils directory of the Redis distribution into /etc/init.d, we write our own init script:
sudo vi /etc/init.d/redis
Write the following code:
#!/bin/sh
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.
EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli
DIR=/var/run/redis
PIDFILE=$DIR/redis.pid
SOCKET=$DIR/redis.sock
GROUP=www-data
CONF=/etc/redis/redis.conf
case "$1" in
start)
if [ -f $PIDFILE ]
then
echo "$PIDFILE exists, process is already running or crashed"
else
[ -d $DIR ] || mkdir $DIR
echo "Starting Redis server..."
start-stop-daemon --start --pidfile $PIDFILE --exec $EXEC --group $GROUP -- $CONF
ret=$?
if [ $ret -eq 1 ]
then
echo "Nothing was done, probably because a matching process was already running, check if it is running under group '$GROUP', if not, stop first and start again"
elif [ $ret -eq 0 ]
then
echo "Redis started"
else
echo "Failed to start Redis"
fi
fi
;;
stop)
if [ ! -f $PIDFILE ]
then
echo "$PIDFILE does not exist, process is not running"
else
PID=$(cat $PIDFILE)
echo "Stopping Redis server..."
$CLIEXEC -s $SOCKET shutdown
while [ -x /proc/${PID} ]
do
echo "Waiting for Redis to shutdown ..."
sleep 1
done
echo "Redis stopped"
fi
;;
*)
echo "Please use start or stop as first argument"
;;
esac
After you are finished, save and close the init script.
Copy the template configuration file redis.conf from the root directory of the Redis distribution into /etc/redis/:
sudo cp redis.conf /etc/redis/
Edit the configuration file:
sudo vi /etc/redis/redis.conf
Perform the following changes:
- Set daemonize to yes (by default it is set to no)
- Set the pidfile to /var/run/redis/redis.pid
- Set the port to 0 so Redis will not listen on a TCP socket
- Uncomment the unixsocket and unixsocketperm lines; set unixsocket to /var/run/redis/redis.sock and unixsocketperm to 660
- Set your preferred loglevel
- Set the logfile to /var/log/redis.log
- Set the dir to /var/redis/ (very important step!)
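After those edits, the changed lines in /etc/redis/redis.conf should look roughly like this (loglevel notice is just an example choice):

```
daemonize yes
pidfile /var/run/redis/redis.pid
port 0
unixsocket /var/run/redis/redis.sock
unixsocketperm 660
loglevel notice
logfile /var/log/redis.log
dir /var/redis/
```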
When you are finished, save and close the file.
Finally, if you want Redis to start automatically when the system boots, add the new Redis init script to all the default runlevels with the following command:
sudo update-rc.d redis defaults
You are done! Now you can try running your instance with:
sudo service redis start
Add the above command to start.sh so that all services are started in one go, and add the following command to stop.sh:
sudo service redis stop
Redis started in this way uses the Unix socket, so if you use redis-cli to talk to it, run commands like this:
sudo redis-cli -s /var/run/redis/redis.sock ping
Because the Unix socket permission is 660, i.e. only the root user and the www-data group can read and write the socket, the above command uses sudo.
To avoid permission problems later on, add your user to the www-data group (substitute user with yours):
sudo usermod -aG www-data user
You may have to log out and log in again to make the modification take effect.
Then you can leave out sudo
:
redis-cli -s /var/run/redis/redis.sock ping
Of course, using Redis only from the command line interface is not enough, as the goal is to use it from our Flask application. To do so, you need to download and install a Redis client library for your programming language.
The recommended Redis client for Python is redis-py. Like SQLAlchemy, we install redis-py in the virtual environment, so activate it first:
workon burnin
To install redis-py, simply run:
pip install redis
The first use of Redis is as a storage backend for the actual session data. Go to the "burnin" project directory and create a new module in the inner "burnin" directory:
cd ~/burnin/burnin
vi redis_session.py
Write the following code, which implements a session backend using Redis. It allows you to either pass in a Redis client or have it connect to the Redis instance on localhost. All keys are prefixed with a configurable prefix, which defaults to "session:":
import pickle
from datetime import timedelta
from uuid import uuid4
from redis import StrictRedis
from werkzeug.datastructures import CallbackDict
from flask.sessions import SessionInterface, SessionMixin
class RedisSession(CallbackDict, SessionMixin):
def __init__(self, initial=None, sid=None, new=False):
def on_update(self):
self.modified = True
CallbackDict.__init__(self, initial, on_update)
self.sid = sid
self.new = new
self.modified = False
class RedisSessionInterface(SessionInterface):
serializer = pickle
session_class = RedisSession
def __init__(self, redis=None, prefix='session:'):
if redis is None:
redis = StrictRedis(host='localhost', port=6379, db=0)
self.redis = redis
self.prefix = prefix
def generate_sid(self):
return str(uuid4())
def get_redis_expiration_time(self, app, session):
if session.permanent:
return app.permanent_session_lifetime
return timedelta(days=1)
def open_session(self, app, request):
sid = request.cookies.get(app.session_cookie_name)
if not sid:
sid = self.generate_sid()
return self.session_class(sid=sid, new=True)
val = self.redis.get(self.prefix + sid)
if val is not None:
data = self.serializer.loads(val)
return self.session_class(data, sid=sid)
return self.session_class(sid=sid, new=True)
def save_session(self, app, session, response):
domain = self.get_cookie_domain(app)
if not session:
self.redis.delete(self.prefix + session.sid)
if session.modified:
response.delete_cookie(app.session_cookie_name, domain=domain)
return
redis_exp = self.get_redis_expiration_time(app, session)
cookie_exp = self.get_expiration_time(app, session)
val = self.serializer.dumps(dict(session))
self.redis.setex(self.prefix + session.sid, int(redis_exp.total_seconds()), val)
response.set_cookie(app.session_cookie_name, session.sid, expires=cookie_exp, httponly=True, domain=domain)
The session interface provides a simple way to replace the session implementation that Flask is using (by default, a signed cookie). The class flask.sessions.SessionInterface is the basic interface you have to implement in order to replace the default session interface, which uses Werkzeug's secure cookie implementation. The only methods you have to implement are open_session() and save_session(); the others have useful defaults which you don't need to change.
The session object returned by the open_session() method has to provide a dictionary-like interface plus the properties and methods of flask.sessions.SessionMixin. Here we subclass werkzeug.datastructures.CallbackDict and flask.sessions.SessionMixin.
As you can see, we store only the sid in the cookie, and use "prefix + sid" as the key in Redis; the value is the serialized session data.
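That scheme can be pictured with a few lines of stdlib-only Python; here a plain dict stands in for the Redis store:

```python
import pickle
from uuid import uuid4

store = {}            # stands in for Redis
prefix = 'session:'

# open_session: a new visitor gets a fresh sid; only the sid goes into the cookie.
sid = str(uuid4())

# save_session: the session dict is serialized and stored under prefix + sid.
store[prefix + sid] = pickle.dumps({'username': 'admin'})

# A later request presents the sid from its cookie; we look up and deserialize.
restored = pickle.loads(store[prefix + sid])
print(restored)
```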
Save and close the file when you are finished.
Now let's use this redis session in our application.
Open __init__.py
:
vi __init__.py
Change the content to:
from flask import Flask, request, session, redirect, url_for, render_template
from burnin.database import db_session
from burnin.models import User
from burnin.redis_session import RedisSessionInterface
from redis import StrictRedis
application = Flask(__name__)
application.session_interface = RedisSessionInterface(StrictRedis(unix_socket_path='/var/run/redis/redis.sock'))
application.config.from_object(__name__)
application.config.update(dict(
JSONIFY_PRETTYPRINT_REGULAR=False
))
application.config.from_envvar('FLASK_SERVER_SETTINGS', silent=True)
@application.teardown_appcontext
def shutdown_dbsession(exception=None):
db_session.remove()
@application.route('/')
def index():
if 'username' in session:
return redirect(url_for('users'))
return redirect(url_for('login'))
@application.route('/login', methods=['GET', 'POST'])
def login():
error = None
if request.method == 'POST':
username = request.form['username']
userList = db_session.query(User).filter(User.username == username).all()
if len(userList) != 1:
error = 'Invalid username'
elif userList[0].password != request.form['password']:
error = 'Invalid password'
else:
session['username'] = username
return redirect(url_for('users'))
return render_template('login.html', error=error)
@application.route('/logout')
def logout():
session.pop('username', None)
return redirect(url_for('index'))
@application.route('/api/users')
def users():
if 'username' not in session:
return redirect(url_for('login'))
users = db_session.query(User).all()
return render_template('users.html', users=users)
# Set the secret key, keep this really secret.
application.secret_key = '}\xa6\xa5\x81\x03\x8c \xe7sH\xf7G)\x10\xc8)\x8fgA\xc7V\xa8\x0f\xe1'
Our Redis server uses a Unix socket, so here we pass a Redis client using that socket into RedisSessionInterface, and assign the RedisSessionInterface instance to application.session_interface. We can then use the session object as usual. We define the login and logout views, and change the index and users views to check whether the user is logged in.
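The branching inside the login view can be isolated as a small pure function (an illustrative helper of our own, not part of the application) to make the three outcomes explicit:

```python
# Mirrors the login view: returns an error message, or None on success.
def check_login(user_list, password):
    if len(user_list) != 1:                      # no such user
        return 'Invalid username'
    if user_list[0]['password'] != password:     # wrong password
        return 'Invalid password'
    return None                                  # credentials accepted

admins = [{'username': 'admin', 'password': 'fortinet'}]
print(check_login([], 'x'))              # Invalid username
print(check_login(admins, 'wrong'))      # Invalid password
print(check_login(admins, 'fortinet'))   # None
```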
If you want to use sessions in Flask applications, you have to set Flask.secret_key. A session basically makes it possible to remember information from one request to the next. Flask implements this with a signed cookie: users can look at the session content, but they cannot modify it unless they know the secret key, so make sure to set it to something complex and unguessable.
How do you generate a good secret key? The problem with randomness is that it's hard to judge what is truly random, and a secret key should be as random as possible. Your operating system has ways to generate pretty random data based on a cryptographic random generator, which can be used to get such a key:
python
>>> import os
>>> os.urandom(24)
Just take that output, copy/paste it into your code, and you're done.
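To see why the key must stay secret, here is a stdlib-only sketch of the signed-value idea; Flask's actual cookie signing (done via the itsdangerous library) differs in detail, but the principle is the same:

```python
import base64
import hashlib
import hmac

secret_key = b'replace-with-os.urandom(24)'

def sign(value):
    """Attach an HMAC signature; the value itself stays readable."""
    sig = hmac.new(secret_key, value, hashlib.sha256).digest()
    return base64.b64encode(value) + b'.' + base64.b64encode(sig)

def verify(token):
    """Return True only if the signature matches the value."""
    value_b64, sig_b64 = token.split(b'.')
    expected = hmac.new(secret_key, base64.b64decode(value_b64),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(sig_b64))

token = sign(b'username=admin')
print(verify(token))                                      # True
print(verify(sign(b'username=admin')[:-6] + b'tamper'))   # False: signature broken
```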
As we used render_template, we should now start working on the templates. If we were to request the URLs now, we would only get an exception saying that Flask cannot find the templates. The templates use Jinja2 syntax and have autoescaping enabled by default. This means that unless you mark a value in the code with Markup or with the |safe filter in the template, Jinja2 will ensure that special characters such as < and > are escaped with their XML equivalents.
We can also use template inheritance which makes it possible to reuse the layout of the website in all pages.
Create a templates folder under the inner "burnin" directory:
mkdir templates
Create layout template first:
vi templates/layout.html
With content:
<!doctype html>
<title>Burn In</title>
<div>
<h1>Burn In</h1>
{% block body %}{% endblock %}
</div>
Then create login template:
vi templates/login.html
Put the following lines in it:
{% extends "layout.html" %}
{% block body %}
<h2>Login</h2>
{% if error %}
<p><strong>Error:</strong> {{ error }}</p>
{% endif %}
<form action="{{ url_for('login') }}" method="POST">
<dl>
<dt>User Name:</dt>
<dd><input type="text" name="username"></dd>
<dt>Password:</dt>
<dd><input type="password" name="password"></dd>
<dd><input type="submit" value="Log In"></dd>
</dl>
</form>
{% endblock %}
The login template basically just displays a form to allow the user to log in.
Last, create the users template:
vi templates/users.html
With the following content:
{% extends "layout.html" %}
{% block body %}
<div style="float:right;">
<em>Logged in as</em>
<strong>{{ session.username }}</strong>
<a href="{{ url_for('logout') }}" style="margin-left:10px;">Log Out</a>
</div>
<h2>Users</h2>
{% for user in users %}
<hr/>
<dl>
<dt>ID:</dt>
<dd>{{ user.id }}</dd>
<dt>User Name:</dt>
<dd>{{ user.username }}</dd>
<dt>Password:</dt>
<dd>{{ user.password }}</dd>
</dl>
{% endfor %}
{% endblock %}
Now you can start the development server:
cd ~/burnin
python runserver.py
Go to http://127.0.0.1:5000/ to have a look. You should see a login page. Providing a wrong username or password will show an error message; providing the correct username and password will log you in to the users page, where you can click the "Log Out" link to log out.
If everything is OK, press CTRL-C to stop the development server. Then deactivate the virtual environment:
deactivate
Then restart services using scripts (substitute path_to_script with your own path where you store "start.sh" and "stop.sh"):
cd path_to_script
./stop.sh
./start.sh
Now go to your server's domain name or IP address in your web browser, you should see the same result.
Asynchronous, or non-blocking, processing is a method of separating the execution of certain tasks from the main flow of a program. This provides you with several advantages, including allowing your user-facing code to run without interruption.
Message passing is a method which program components can use to communicate and exchange information. It can be implemented synchronously or asynchronously and can allow discrete processes to communicate without problems.
Celery is a task queue that is built on an asynchronous message passing system. It can be used as a bucket where programming tasks can be dumped. The program that passed the task can continue to execute and function responsively, and then later on, it can poll celery to see if the computation is complete and retrieve the data.
While celery is written in Python, its protocol can be implemented in any language. It can even function with other languages through webhooks.
By implementing a job queue into your program's environment, you can easily offload tasks and continue to handle interactions from your users. This is a simple way to increase the responsiveness of your applications and not get locked up while performing long-running computations.
Celery is written in Python, and as such, it is easy to install in the same way that we handle regular Python packages. We will follow the recommended procedure for handling Python packages by entering a virtual environment to install it. This helps us keep our environment stable and avoids affecting the larger system.
Enter the virtual environment:
workon burnin
Your prompt will change to reflect that you are now operating in the virtual environment. This will ensure that our Python packages are installed locally instead of globally.
Now that we have activated the environment, we can install celery with pip:
pip install celery
Celery requires a solution to send and receive messages; usually this comes in the form of a separate service called a message broker.
There are quite a few options for brokers available to choose from, including relational databases, NoSQL databases, key-value stores, and actual messaging systems.
We will be configuring celery to use the Redis key-value store, as it is feature-complete and already integrated into our "burnin" project.
In order to use celery's task queuing capabilities, our first step after installation must be to create a Celery instance; this is called the Celery application. It serves the same purpose as the Flask object in Flask, just for Celery. Since this instance is used as the entry point for everything you want to do in Celery, like creating tasks and managing workers, it must be possible for other modules to import it.
Let's create a Python script inside our inner "burnin" directory called celery_tasks.py:
cd ~/burnin/burnin
vi celery_tasks.py
The first thing we should do is import the Celery class from the celery package:
from celery import Celery
While you can use Celery without any reconfiguration with Flask, it becomes a bit nicer by subclassing tasks and adding support for Flask's application contexts and hooking it up with the Flask configuration:
from celery import Celery

def make_celery(app):
    celery = Celery(app.import_name, backend=app.config['CELERY_RESULT_BACKEND'], broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task
    class ContextTask(TaskBase):
        abstract = True
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)
    celery.Task = ContextTask
    return celery
The function creates a new Celery object, configures it with the broker from the application config, updates the rest of the Celery config from the Flask config and then creates a subclass of the task that wraps the task execution in an application context.
We now create the Celery app using the above function:
from celery import Celery
from burnin import application

def make_celery(app):
    celery = Celery(app.import_name, backend=app.config['CELERY_RESULT_BACKEND'], broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task
    class ContextTask(TaskBase):
        abstract = True
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)
    celery.Task = ContextTask
    return celery

application.config.update(
    CELERY_BROKER_URL='redis+socket:///var/run/redis/redis.sock',
    CELERY_RESULT_BACKEND='redis+socket:///var/run/redis/redis.sock',
    CELERY_ACCEPT_CONTENT=['json'],
    CELERY_TASK_SERIALIZER='json',
    CELERY_RESULT_SERIALIZER='json'
)

celery_app = make_celery(application)
Still in this file, now we add our tasks.
Each celery task must be introduced with the decorator @app.task, where app must be replaced with the actual celery app variable name, which here is celery_app. This allows celery to identify functions that it can add its queuing functions to. After each decorator, we simply create a function that our workers can run.
We only add one task, which will generate prime numbers (taken from RosettaCode). This can be a long-running process, so it is a good example of how we can deal with asynchronous worker processes while waiting for a result:
from celery import Celery
from burnin import application

def make_celery(app):
    celery = Celery(app.import_name, backend=app.config['CELERY_RESULT_BACKEND'], broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task
    class ContextTask(TaskBase):
        abstract = True
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)
    celery.Task = ContextTask
    return celery

application.config.update(
    CELERY_BROKER_URL='redis+socket:///var/run/redis/redis.sock',
    CELERY_RESULT_BACKEND='redis+socket:///var/run/redis/redis.sock',
    CELERY_ACCEPT_CONTENT=['json'],
    CELERY_TASK_SERIALIZER='json',
    CELERY_RESULT_SERIALIZER='json'
)

celery_app = make_celery(application)

@celery_app.task
def gen_prime(x):
    multiples = []
    results = []
    for i in xrange(2, x+1):
        if i not in multiples:
            results.append(i)
            for j in xrange(i*i, x+1, i):
                multiples.append(j)
    return results
Save and close the file.
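If you want to sanity-check the prime generator before wiring it into Celery, the same function body runs as plain Python (use range instead of xrange on Python 3):

```python
# Plain-Python version of the task body, for checking the expected output.
def gen_prime(x):
    multiples = []
    results = []
    for i in range(2, x + 1):
        if i not in multiples:
            results.append(i)
            # Mark every multiple of this prime as composite.
            for j in range(i * i, x + 1, i):
                multiples.append(j)
    return results

print(gen_prime(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```

Note that the `i not in multiples` list lookup makes this deliberately slow for large x, which is exactly what makes it a useful stand-in for a long-running task.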
We can now start a worker process that will be able to accept connections from applications. It will use the file we just created to learn about the tasks it can perform.
The celery program can be used to start the worker. You need to run the worker in the outer "burnin" directory (make sure the Redis server is started):
cd ~/burnin
celery worker --app=burnin.celery_tasks.celery_app
When I tried this, I saw the following error:
[2016-08-15 17:33:43,258: ERROR/MainProcess] Unrecoverable error: TypeError("__init__() got an unexpected keyword argument 'socket_connect_timeout'",)
Traceback (most recent call last):
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
self.blueprint.start(self)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/celery/bootsteps.py", line 374, in start
return self.obj.start()
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 279, in start
blueprint.start(self)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 479, in start
c.connection = c.connect()
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 376, in connect
callback=maybe_shutdown,
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/connection.py", line 369, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/utils/__init__.py", line 246, in retry_over_time
return fun(*args, **kwargs)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/connection.py", line 237, in connect
return self.connection
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/connection.py", line 742, in connection
self._connection = self._establish_connection()
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/connection.py", line 697, in _establish_connection
conn = self.transport.establish_connection()
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 809, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 791, in create_channel
channel = self.Channel(connection)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 464, in __init__
self.client.info()
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 908, in client
return self._create_client(async=True)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 861, in _create_client
return self.AsyncClient(connection_pool=self.async_pool)
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 882, in __init__
self.connection = self.connection_pool.get_connection('_')
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/redis/connection.py", line 897, in get_connection
connection = self.make_connection()
File "/home/nina/.virtualenvs/burnin/local/lib/python2.7/site-packages/redis/connection.py", line 906, in make_connection
return self.connection_class(**self.connection_kwargs)
TypeError: __init__() got an unexpected keyword argument 'socket_connect_timeout'
After searching on the internet, this appears to be related to the kombu and redis-py versions; see the issue "Calling a task returns a TypeError in 3.1.19 with redis as a broker". The versions of these components on my machine are:
redis-server 3.2.1
redis-py 2.10.5
celery 3.1.23
kombu 3.0.35
And note the key point: the redis server is using Unix domain socket.
No solution was found on the internet. I tried downgrading kombu to 3.0.34; then celery worker could be run, but calling a task reported the same error, just with a different traceback. However, the final place where the error is raised is the same, line 906, return self.connection_class(**self.connection_kwargs) in redis/connection.py, so I made a change in this file:
cd ~/.virtualenvs/burnin/lib/python2.7/site-packages/redis
vi connection.py
Before line 906, add following lines:
if self.connection_class.description_format.startswith('UnixDomainSocketConnection'):
    self.connection_kwargs.pop('socket_connect_timeout', None)
    self.connection_kwargs.pop('socket_keepalive', None)
    self.connection_kwargs.pop('socket_keepalive_options', None)
Yes, we also need to remove socket_keepalive and socket_keepalive_options, otherwise errors about them will be raised as well.
Save and close the file.
Now run the worker again:
cd ~/burnin
celery worker --app=burnin.celery_tasks.celery_app
The worker should start.
Now that the celery worker is running, we can call our task using the Python interpreter. As the celery worker is not yet run in the background as a daemon, it occupies the current terminal, so open a new terminal, enter the virtual environment, change the working directory, and invoke the Python interpreter:
workon burnin
cd ~/burnin/
python
Then execute the following Python code:
>>> from burnin.celery_tasks import gen_prime
>>> result = gen_prime.delay(20)
>>> result.ready()
True
>>> result.get()
[2, 3, 5, 7, 11, 13, 17, 19]
When result.ready() returns True, we can call result.get() to get the task result. It may also return False, such as:
>>> result = gen_prime.delay(20000)
>>> result.ready()
False
We gave a big integer as the x argument of gen_prime, causing the task to need more time to finish.
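The delay()/ready()/get() pattern has the same shape as the standard library's futures; if you want to experiment with the idea without a broker, this analogy (not Celery's API) shows it:

```python
# Futures analogy for delay()/ready()/get(): submit() returns immediately,
# done() polls without blocking, result() blocks until the value is ready.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    time.sleep(0.5)  # stands in for a long-running computation
    return x * x

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_square, 12)  # like gen_prime.delay(12)
    print(future.done())    # usually False right after submission
    print(future.result())  # blocks until finished, then prints 144
```

The difference with Celery is that the work runs in a separate worker process, possibly on another machine, with Redis carrying the messages in between.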
After you have tried it out, exit the Python interpreter:
>>> exit()
Now let's call our task from our Flask application; that is actually what we want to do.
Open __init__.py in the inner "burnin" directory:
vi burnin/__init__.py
Change the contents to:
from flask import Flask, request, session, redirect, url_for, render_template
from burnin.database import db_session
from burnin.models import User
from burnin.redis_session import RedisSessionInterface
from redis import StrictRedis

application = Flask(__name__)
application.session_interface = RedisSessionInterface(StrictRedis(unix_socket_path='/var/run/redis/redis.sock'))

application.config.from_object(__name__)
application.config.update(dict(
    JSONIFY_PRETTYPRINT_REGULAR=False
))
application.config.from_envvar('FLASK_SERVER_SETTINGS', silent=True)

@application.teardown_appcontext
def shutdown_dbsession(exception=None):
    db_session.remove()

@application.route('/')
def index():
    if 'username' in session:
        return redirect(url_for('users'))
    return redirect(url_for('login'))

@application.route('/login', methods=['GET', 'POST'])
def login():
    error = None
    if request.method == 'POST':
        username = request.form['username']
        userList = db_session.query(User).filter(User.username == username).all()
        if len(userList) != 1:
            error = 'Invalid username'
        elif userList[0].password != request.form['password']:
            error = 'Invalid password'
        else:
            session['username'] = username
            return redirect(url_for('users'))
    return render_template('login.html', error=error)

@application.route('/logout')
def logout():
    session.pop('username', None)
    return redirect(url_for('index'))

@application.route('/api/users')
def users():
    if 'username' not in session:
        return redirect(url_for('login'))
    users = db_session.query(User).all()
    return render_template('users.html', users=users)

# Set the secret key, keep this really secret.
application.secret_key = '}\xa6\xa5\x81\x03\x8c \xe7sH\xf7G)\x10\xc8)\x8fgA\xc7V\xa8\x0f\xe1'

from burnin.celery_tasks import celery_app, gen_prime

@application.route('/api/tasks')
def tasks():
    if 'username' not in session:
        return redirect(url_for('login'))
    tasks = []
    if 'tasks' in session:
        tasks = session['tasks']
    taskInfos = []
    for task in tasks:
        taskId = task['taskId']
        asyncResult = celery_app.AsyncResult(taskId)
        info = { 'x': task['x'] }
        if asyncResult.ready():
            info.update({
                'state': 'Done',
                'result': asyncResult.get()
            })
        else:
            info.update({
                'state': 'Doing',
                'result': '-'
            })
        taskInfos.append(info)
    return render_template('tasks.html', tasks=taskInfos)

@application.route('/api/tasks/add', methods=['POST'])
def add_task():
    x = request.form['x']
    x = int(x)
    asyncResult = gen_prime.delay(x)
    tasks = []
    if 'tasks' in session:
        tasks = session['tasks']
    tasks.append({
        'x': x,
        'taskId': asyncResult.id
    })
    session['tasks'] = tasks
    return redirect(url_for('tasks'))
We added some content after the application.secret_key line, mainly two route views: one for showing the tasks list, another for adding a new task. We only defined one task, but we can give different x arguments to generate different task instances.
Save and close the file.
Finally, add the template file:
vi burnin/templates/tasks.html
With content:
{% extends "layout.html" %}
{% block body %}
<form action="{{ url_for('add_task') }}" method="POST">
<label for="x">x:</label>
<input type="text" name="x">
<input type="submit" value="Add Task">
</form>
<h2>Tasks</h2>
{% for task in tasks %}
<hr/>
<dl>
<dt>x:</dt>
<dd>{{ task.x }}</dd>
<dt>State:</dt>
<dd>{{ task.state }}</dd>
<dt>Result:</dt>
<dd>{{ task.result }}</dd>
</dl>
{% endfor %}
{% endblock %}
Save and close the file.
Now run the development server (make sure the mysql and redis services are started):
python runserver.py
Go to http://127.0.0.1:5000/ in your browser. After logging in, go to http://127.0.0.1:5000/api/tasks, where you should see an empty tasks list. Input a number as x and click "Add Task"; you should then see your task's state and result.
If everything is OK, press CTRL-C to stop the development server. Go to the celery worker terminal and press CTRL-C to stop the worker. We are going to run the worker in the background as a daemon.
First deactivate the virtual environment:
deactivate
Celery does not daemonize itself, so we use the generic init script celeryd provided by the Celery distribution in the extra/generic-init.d/ directory.
Create an init script:
sudo vi /etc/init.d/celeryd
Copy and paste the content of extra/generic-init.d/celeryd into it, then save and close it. Make it executable:
sudo chmod +x /etc/init.d/celeryd
Create a configuration file (as the init script requires the config file to be owned by root, we use sudo here):
sudo vi ~/burnin/celery.conf
With the following shell script as content (substitute user with yours):
# Names of nodes to start
CELERYD_NODES="burnin"
# Absolute path to the 'celery' command
CELERY_BIN="/home/user/.virtualenvs/burnin/bin/celery"
# App instance to use
CELERY_APP="burnin.celery_tasks.celery_app"
# Where to chdir at start
CELERYD_CHDIR="/home/user/burnin"
# %N will be replaced with the nodename
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user
CELERYD_USER="user"
CELERYD_GROUP="www-data"
# If enabled, pid and log directories will be created if missing, and owned by the user/group configured
CELERY_CREATE_DIRS=1
Remember to substitute user with yours. Save and close the file.
Now run the following command to start the celery worker (substitute user with yours):
sudo CELERY_DEFAULTS=/home/user/burnin/celery.conf /etc/init.d/celeryd start
The init script requires that this program only be used by the root user, so we prepended sudo.
You should see that the celery worker runs in the background, and the terminal returns to the shell prompt.
Now activate the virtual environment, start the development server again:
workon burnin
cd ~/burnin
python runserver.py
Go to http://127.0.0.1:5000/ in your browser; you should be able to do the same things as you did previously.
Press CTRL-C to stop the development server. Deactivate the virtual environment:
deactivate
Run the following command to stop the celery worker (substitute user with yours):
sudo CELERY_DEFAULTS=/home/user/burnin/celery.conf /etc/init.d/celeryd stop
Now add the start command of the celery worker to start.sh (substitute path_to_script with your own path where you store "start.sh" and "stop.sh"):
cd path_to_script
vi start.sh
The final content is (substitute user with yours):
sudo service nginx start
sudo service uwsgi start
sudo service mysql start
sudo service redis start
sudo CELERY_DEFAULTS=/home/user/burnin/celery.conf /etc/init.d/celeryd start
Celery should be started after Redis to ensure the connection to the broker can be established.
Save and close the file.
In the same way, add the stop command of the celery worker to stop.sh:
vi stop.sh
The final content is (substitute user with yours):
sudo CELERY_DEFAULTS=/home/user/burnin/celery.conf /etc/init.d/celeryd stop
sudo service nginx stop
sudo service uwsgi stop
sudo service mysql stop
sudo service redis stop
Note that we should stop celery before redis; otherwise celery will find the connection to the broker lost and try to re-establish it.
Save and close the file.
Run stop.sh to stop the already started services (mysql, redis) first:
./stop.sh
Then run start.sh to start all the components (nginx, uwsgi, mysql, redis, celery) in one go:
./start.sh
Visit your server's IP address or domain name in your web browser; you should see the same site. Note that I did not put a link to /api/tasks on any page; you should type it directly into the browser's address bar. You can input a big number, such as 10000, as x to see the "Doing" state; there will be no result at that time. After a while, manually refresh the page; the state will probably have become "Done", and you will also see the result.
References: