I hereby claim:
- I am fforbeck on github.
- I am fforbeck (https://keybase.io/fforbeck) on keybase.
- I have a public key ASCsk2AV2WsPAhosHTW23A9I0NuwzfIHdSzpSnKYRaCUUgo
To claim this, I am signing this object:
| "settings": { | |
| "analysis": { | |
| "filter": { | |
| "hashtag_filter": { | |
| "type": "word_delimiter", | |
| "type_table": [ | |
| "# => ALPHA", | |
| "@ => ALPHA" | |
| ] | |
| } |
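For the filter to take effect it has to be wired into an analyzer. A sketch of a complete settings body, assuming a whitespace tokenizer and the analyzer name `hashtag_analyzer` (both illustrative choices, not from the original):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "hashtag_filter": {
          "type": "word_delimiter",
          "type_table": ["# => ALPHA", "@ => ALPHA"]
        }
      },
      "analyzer": {
        "hashtag_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["lowercase", "hashtag_filter"]
        }
      }
    }
  }
}
```

With `#` and `@` mapped to ALPHA, the word_delimiter filter stops treating them as split points, so tokens like `#hashtag` and `@user` survive analysis intact.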
sudo rabbitmqctl set_policy -p host --apply-to queues max-msg-size "^queue.name.pattern$" '{"max-length-bytes":100000000}'
[ req ]
distinguished_name = req_distinguished_name
prompt = no

[ req_distinguished_name ]
C = <country>
ST = <state>
L = 1234
O = 1234
CN = {DOMAIN}
rabbitmqctl purge_queue -p <vhost> "<target.queue>"
#!/usr/bin/env bash

URL="http://localhost:15672/cli/rabbitmqadmin"
VHOST="<>"
USER="<>"
PWD="<>"
QUEUE="<>"
FAILED_QUEUE="<>"
# The shovel parameter is deleted automatically once all messages have been
# moved from the origin queue to the target queue ("delete-after": "queue-length").
# Note: since the destination is a queue, "dest-queue" is used (the original had
# "dest-exchange", which would publish to an exchange of that name instead).
rabbitmqctl set_parameter -p <vhost> shovel "<origin.queue.name>" '{"src-uri":"amqp://<user>:<pwd>@/<vhost_name>","src-queue":"<origin.queue.name>","dest-uri":"amqp://<user>:<pwd>@/<vhost_name>","dest-queue":"<target.queue.name>","prefetch-count":1,"reconnect-delay":5,"add-forward-headers":false,"ack-mode":"on-confirm","delete-after":"queue-length"}'
##
# http://stackoverflow.com/questions/19967472/elasticsearch-unassigned-shards-how-to-fix
##
NODE="YOUR NODE NAME"
IFS=$'\n'
for line in $(curl -s 'localhost:9200/_cat/shards' | fgrep UNASSIGNED); do
  INDEX=$(echo "$line" | awk '{print $1}')
  SHARD=$(echo "$line" | awk '{print $2}')
  # Reroute body completed per the Stack Overflow answer linked above.
  curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands": [{ "allocate": { "index": "'"$INDEX"'", "shard": '"$SHARD"', "node": "'"$NODE"'", "allow_primary": true } }]
  }'
done
If you want, I can try to help with pointers on how to improve the indexing speed you get. It's quite easy to increase it considerably by following some simple guidelines, for example:
- Use create in the index API (assuming you can).
- Relax the real-time aspect from 1 second to something a bit higher (index.engine.robin.refresh_interval).
- Increase the indexing buffer size (indices.memory.index_buffer_size); it defaults to 10% of the heap.
- Increase the number of dirty operations that trigger an automatic flush (so the translog won't get really big, even though it's FS based) by setting index.translog.flush_threshold (defaults to 5000).
- Increase the memory allocated to the elasticsearch node. By default it's 1g.
- Start with a lower replica count (even 0), and once the bulk loading is done, increase it to the value you want using the update_settings API. This helps because fewer shards will be allocated to each machine.
- Increase the number of machines you have so
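The replica-count trick from the list above can be sketched with the index settings API (the index name `myindex` and the replica values are illustrative):

```shell
# Drop replicas to zero before the bulk load (hypothetical index name).
curl -XPUT 'localhost:9200/myindex/_settings' -d '{"index": {"number_of_replicas": 0}}'

# ... run the bulk indexing ...

# Raise the replica count back up once loading is done.
curl -XPUT 'localhost:9200/myindex/_settings' -d '{"index": {"number_of_replicas": 1}}'
```

During the load the cluster then only writes primaries; replicas are built once, afterwards, instead of being kept in sync on every document.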
// installed Clojure packages:
//
// * BracketHighlighter
// * lispindent
// * SublimeREPL
// * sublime-paredit
{
  "word_separators": "/\\()\"',;!@$%^&|+=[]{}`~?",
  "paredit_enabled": true
}