The guide is working, but it's missing details on how to install the pg_trgm extension. https://github.com/fire/pkgsrc-wip/tree/pg_trgm
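For what it's worth, pg_trgm ships with the standard PostgreSQL contrib modules, so once the contrib package is installed (via the pkgsrc-wip package above, or postgresql-contrib on most other systems), enabling it should come down to a single statement, roughly:

-- Run in the target database as a superuser; pg_trgm is part of PostgreSQL contrib.
CREATE EXTENSION IF NOT EXISTS pg_trgm;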
- 2 GB of RAM
- 2 GB of swap
- 2 processor cores
/**
 * Unpivot a pivot table of any size.
 *
 * @param {A1:D30} data The pivot table.
 * @param {1} fixColumns Number of columns, after which pivoted values begin. Default 1.
 * @param {1} fixRows Number of rows (1 or 2), after which pivoted values begin. Default 1.
 * @param {"city"} titlePivot The title of horizontal pivot values. Default "column".
 * @param {"distance"[,...]} titleValue The title of pivot table values. Default "value".
 * @return The unpivoted table.
 * @customfunction
 */
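The snippet stops at the doc comment; a minimal body consistent with those parameters might look like this (a sketch, assuming the plain 2-D array Apps Script passes for a range, not the original implementation):

function UNPIVOT(data, fixColumns, fixRows, titlePivot, titleValue) {
  fixColumns = fixColumns || 1;
  fixRows = fixRows || 1;
  // Header row: the fixed column titles, then the pivot title and the value title.
  var out = [data[0].slice(0, fixColumns).concat([titlePivot || 'column', titleValue || 'value'])];
  // One output row per (data row, pivoted column) pair.
  for (var r = fixRows; r < data.length; r++) {
    for (var c = fixColumns; c < data[r].length; c++) {
      out.push(data[r].slice(0, fixColumns).concat([data[fixRows - 1][c], data[r][c]]));
    }
  }
  return out;
}

In a sheet it is then called like any custom function, e.g. =UNPIVOT(A1:D30, 1, 1, "city", "distance").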
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:10m inactive=60m;
proxy_cache_key "$host$request_uri-$format";

server {
    listen 999;
    server_name _;

    set $format jpg;
    if ( $http_accept ~* 'webp' ) {
        set $format webp;
    }
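    # The original snippet ends above without a location that actually uses the
    # cache; a minimal sketch follows (the backend address is an assumption).
    location ~* \.(jpe?g|png)$ {
        proxy_cache my_zone;
        proxy_pass http://127.0.0.1:8080;  # assumed backend serving the images
        add_header Vary Accept;            # caches must vary on the Accept header
    }
}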
#!/usr/bin/env python
import timeit

loops = 1000
setup = """
import MySQLdb
db = MySQLdb.connect(host="remotedb.example.com",
                     read_default_file="/root/.my.cnf")
"""
# The snippet is cut off here in the original; the timed statement below is a
# reconstruction: a trivial round-trip query against the remote server.
print(timeit.timeit('db.query("SELECT 1")', setup=setup, number=loops))
By default, when Nginx starts receiving a response from a FastCGI backend (such as PHP-FPM), it buffers the response in memory before delivering it to the client. Any response larger than the configured buffer size is saved to a temporary file on disk.
This process is outlined on the Nginx ngx_http_fastcgi_module manual page.
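The directives that control this behaviour look roughly like the following (the values and the PHP-FPM socket path are illustrative, not recommendations):

location ~ \.php$ {
    fastcgi_pass unix:/run/php-fpm.sock;   # illustrative PHP-FPM socket path

    fastcgi_buffering on;                  # the default; off streams responses directly
    fastcgi_buffer_size 4k;                # buffer for the first part of the response
    fastcgi_buffers 8 4k;                  # 8 x 4k buffers for the response body
    fastcgi_max_temp_file_size 1m;         # cap on the on-disk overflow file
}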
<?php
// CONFIG
$servers = array(
    array('Local', '127.0.0.1', 6379),
);
// END CONFIG

// Pick the server from the ?s= query parameter, defaulting to the first entry.
$server = 0;
if (isset($_GET['s']) && intval($_GET['s']) < count($servers)) {
    $server = intval($_GET['s']);
}
For this configuration you can use any web server you like; I decided to use nginx because it is what I mostly work with.
Generally, a properly configured nginx can handle up to 400K to 500K requests per second (clustered); the most I have seen myself is 50K to 80K requests per second (non-clustered) at around 30% CPU load. Of course, that was on 2 x Intel Xeon machines with HyperThreading enabled, but it can work without problems on slower machines.
You must understand that this config is used in a testing environment and not in production, so you will need to find the best way to implement most of these features for your own servers.
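What "properly configured" means depends on the workload, but the usual starting point is worker and connection tuning along these lines (the values are illustrative and not part of the original config):

worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 65535;     # raise the file-descriptor limit for busy servers

events {
    worker_connections 4096;    # per-worker connection cap
    multi_accept on;            # accept all pending connections at once
}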
from django.contrib import admin

from <app_name>.models import Foo

# ...

# Using the default admin interface:
admin.site.register(Foo)
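If the default interface isn't enough, the usual alternative is to register a ModelAdmin subclass instead (the field names here are hypothetical):

@admin.register(Foo)               # use instead of admin.site.register(Foo)
class FooAdmin(admin.ModelAdmin):
    list_display = ("id", "name")  # hypothetical fields on Foo
    search_fields = ("name",)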