- First download dbfpy: http://sourceforge.net/projects/dbfpy/files/latest/download?source=files
- Then install:
sudo python setup.py install
To convert a DBF file to CSV:
./dbf2csv database.dbf
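The conversion the `dbf2csv` script performs can be sketched with dbfpy directly. This is a minimal sketch, assuming dbfpy's `Dbf` class with its `fieldNames` attribute and per-record `fieldData` (dbfpy is a Python 2-era library; verify these names against the version you downloaded above):

```python
import csv

def records_to_csv(field_names, records, out_path):
    """Write an iterable of row sequences to a CSV file with a header row."""
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(field_names)
        writer.writerows(records)

def dbf_to_csv(dbf_path, csv_path):
    # Assumes dbfpy's Dbf / fieldNames / fieldData API; check it against
    # the release you installed.
    from dbfpy import dbf
    db = dbf.Dbf(dbf_path)
    try:
        records_to_csv(db.fieldNames, (rec.fieldData for rec in db), csv_path)
    finally:
        db.close()
```

The CSV-writing helper is separate from the dbfpy call so it can be reused with any record source.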
This is a collection of snippets, not a comprehensive guide. I suggest you start with Operational PGP.
Here is an incomplete list of things that are different from other approaches:
Thanks to this article by Christoph Berg
Directories and files
~/

#!/usr/bin/env bash
#
# Backup selected directories to a Backblaze B2 bucket
#
# Example daily cron:
# @daily /usr/local/bin/b2backup >/dev/null
#
# Account creds
id=xxxxxxxxxx
#!/usr/bin/env python
import sys
# Lamson is an application, but also the best way to read email without
# struggling with "batteries included" libraries.
from lamson.encoding import from_string as parse_mail
from pyelasticsearch import ElasticSearch
from pyelasticsearch.exceptions import ElasticHttpNotFoundError
Elasticsearch exposes many metrics that can be used to determine whether a cluster is healthy. Listed below are the metrics that are currently worth monitoring, the reason(s) to monitor them, and possible remedies when issues arise.
Unless otherwise noted, all of the API requests work starting with 1.0.0. If a newer version is required for a given metric, that is noted next to the metric's name.
Metrics are an easy way to monitor the health of a cluster, and they can be accessed directly from the HTTP API. Each metrics table is broken down by its source.
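The quickest first check is the `_cluster/health` endpoint. A minimal sketch, assuming a cluster reachable at `localhost:9200` (the URL is a placeholder):

```python
import json
from urllib.request import urlopen

# What each _cluster/health "status" value means.
STATUS_HINTS = {
    "green": "all primary and replica shards are allocated",
    "yellow": "all primaries allocated, but some replicas are not",
    "red": "at least one primary shard is unallocated; some data is unavailable",
}

def classify_health(health):
    """Turn a parsed _cluster/health response into a readable hint."""
    status = health.get("status", "unknown")
    return "%s: %s" % (status, STATUS_HINTS.get(status, "unknown status"))

def check_cluster(base_url="http://localhost:9200"):
    # base_url is a placeholder; point it at your cluster.
    with urlopen(base_url + "/_cluster/health") as resp:
        return classify_health(json.load(resp))
```

Yellow is normal during recovery or on single-node clusters with replicas configured; red means queries may return partial results.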
I use Namecheap.com as a registrar; they resell SSL certs from a number of other companies, including Comodo.
These are the steps I went through to set up an SSL cert.
>>> import boto
>>> ec2 = boto.connect_ec2()
>>> stats = ec2.get_all_instance_status()
>>> stats
[InstanceStatus:i-67c81e0c]
>>> stat = stats[0]
>>> stat
InstanceStatus:i-67c81e0c
>>> stat.id
u'i-67c81e0c'
If you want, I can try and help with pointers as to how to improve the indexing speed you get. It's quite easy to really increase it by using some simple guidelines, for example:
- Use create in the index API (assuming you can).
- Relax the real-time aspect from 1 second to something a bit higher (index.engine.robin.refresh_interval).
- Increase the indexing buffer size (indices.memory.index_buffer_size); it defaults to 10% of the heap.
- Increase the number of dirty operations that trigger an automatic flush (so the translog won't get really big, even though it's FS based) by setting index.translog.flush_threshold (defaults to 5000).
- Increase the memory allocated to the elasticsearch node. By default it's 1g.
- Start with a lower replica count (even 0), and then once the bulk loading is done, increase it to the value you want using the update_settings API. This will improve things, as possibly fewer shards will be allocated to each machine.
- Increase the number of machines you have so
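The replica-count step above goes through the index settings API. A sketch of restoring replicas after the bulk load finishes, assuming a placeholder index URL such as `http://localhost:9200/myindex`:

```python
import json
from urllib.request import Request, urlopen

def replica_settings_body(count):
    """Build the _settings payload that changes the replica count."""
    return json.dumps({"index": {"number_of_replicas": count}})

def restore_replicas(index_url, count):
    # index_url (e.g. "http://localhost:9200/myindex") is a placeholder.
    req = Request(index_url + "/_settings",
                  data=replica_settings_body(count).encode("utf-8"),
                  headers={"Content-Type": "application/json"},
                  method="PUT")
    with urlopen(req) as resp:
        return json.load(resp)
```

Bulk-loading with replicas at 0 and raising them afterwards means each document is indexed once and the replicas are built by copying segments, rather than reindexing every document on every replica.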
disable_flush and disable_recovery (TD)