
server {
    listen 81;
    root /var/sites/app/webroot/;
    server_name localhost;
    rewrite_log on;

    location ^~ /api/ {
        # Strip the /api prefix and re-run location matching.
        rewrite ^/api/(.*)$ /$1 last;
    }

    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.php?$uri&$args;
    }
}
class percona::install {
  # Resource default: every package in this class waits for the Percona APT source.
  Package {
    require => Apt::Source["percona"],
  }

  # Remove the stock MySQL package before the Percona server is installed.
  package { "mysql":
    ensure => purged,
    before => Package["percona-server-server-5.5"],
  }
}
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "index": {
      "query": { "default_field": "@message" },
      "store": { "compress": { "stored": true, "tv": true } }
    }
  }
}
If you want, I can try to help with pointers on how to improve the indexing speed you get. It's quite easy to increase it substantially by following some simple guidelines, for example:
- Use create in the index API (assuming you can).
- Relax the real-time refresh from 1 second to something a bit higher (index.engine.robin.refresh_interval).
- Increase the indexing buffer size (indices.memory.index_buffer_size); it defaults to 10% of the heap.
- Increase the number of dirty operations that trigger an automatic flush (so the translog won't get really big, even though it's FS-based) by setting index.translog.flush_threshold (defaults to 5000).
- Increase the memory allocated to the elasticsearch node. By default it's 1g.
- Start with a lower replica count (even 0), and once the bulk loading is done, increase it to the value you want using the update_settings API. This helps because fewer shards will be allocated to each machine.
- Increase the number of machines you have so
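The replica-count and refresh tips above can be sketched as two calls to the index settings API. Note this is a hedged example, not from the text: the setting names used here (index.refresh_interval, index.number_of_replicas) are the ones exposed by later Elasticsearch releases, and the host, port, and index name are placeholders — adjust all of them for your cluster and version.

```shell
# Before bulk loading: drop replicas and relax the refresh interval.
# (localhost:9200 and the index name "logstash-test" are assumptions.)
curl -XPUT 'http://localhost:9200/logstash-test/_settings' -d '{
  "index": {
    "number_of_replicas": 0,
    "refresh_interval": "30s"
  }
}'

# After bulk loading: restore replicas and the default 1s refresh,
# so the data gets replicated and becomes searchable promptly again.
curl -XPUT 'http://localhost:9200/logstash-test/_settings' -d '{
  "index": {
    "number_of_replicas": 1,
    "refresh_interval": "1s"
  }
}'
```

Raising number_of_replicas only after the bulk load means each document is indexed once and then copied at the segment level, instead of being indexed independently on every replica during the load.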