Awesome PHP has been relocated permanently to its own GitHub repository. No further updates will be made to this gist.
Please open an issue there for any new suggestions.
#!/bin/bash

BUCKETNAME="your_s3_bucket"
LOGDIR="/opt/nginx/logs"
LOGDATE=$(date +"%Y%m%d")
LOGFILES=( "access" "ssl-access" )
BOT_LOGFILES=( "bots-access" "bots-ssl-access" )

echo "Moving access logs to dated logs.."
If you want, I can try to help with pointers on how to improve the indexing speed you get. It's quite easy to increase it considerably by following some simple guidelines, for example (a curl sketch applying the per-index settings follows the list):
- Use create in the index API (assuming you can). | |
- Relax the real time aspect from 1 second to something a bit higher (index.engine.robin.refresh_interval). | |
- Increase the indexing buffer size (indices.memory.index_buffer_size); it defaults to 10% of the heap.
- Increase the number of dirty operations that trigger automatic flush (so the translog won't get really big, even though it's FS based) by setting index.translog.flush_threshold (defaults to 5000).
- Increase the memory allocated to the Elasticsearch node. By default it's 1g.
- Start with a lower replica count (even 0), and then once the bulk loading is done, increase it to the value you want using the update_settings API. This will improve things, since fewer shards will likely be allocated to each machine.
- Increase the number of machines you have, so as to get fewer shards allocated per machine and less IO pressure.
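To make the per-index items above concrete, here is a sketch of applying them over the REST API with curl. The index name myindex and the document are hypothetical, and the setting names shown (index.refresh_interval, index.number_of_replicas, op_type=create) are the forms used in later Elasticsearch releases; the text above uses older names such as index.engine.robin.refresh_interval. Note that indices.memory.index_buffer_size and the heap size are node-level settings configured in elasticsearch.yml and the JVM options, not through this API.

# Before the bulk load: relax the refresh interval and drop replicas
# (hypothetical index "myindex").
curl -XPUT 'http://localhost:9200/myindex/_settings' -H 'Content-Type: application/json' -d '
{"index": {"refresh_interval": "30s", "number_of_replicas": 0}}'

# Index with the create op, so a duplicate ID fails fast instead of
# triggering a more expensive update.
curl -XPUT 'http://localhost:9200/myindex/_doc/1?op_type=create' -H 'Content-Type: application/json' -d '
{"field": "value"}'

# After the bulk load: restore the refresh interval and raise the replica
# count via the update settings API.
curl -XPUT 'http://localhost:9200/myindex/_settings' -H 'Content-Type: application/json' -d '
{"index": {"refresh_interval": "1s", "number_of_replicas": 1}}'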