docker run -it --rm --env MAX_HEAP_SIZE=2G --env HEAP_NEWSIZE=800M cassandra:3.0
# Standard HTTP-to-gRPC status code mappings
# Ref: https://github.com/grpc/grpc/blob/master/doc/http-grpc-status-mapping.md
#
error_page 400 = @grpc_internal;
error_page 401 = @grpc_unauthenticated;
error_page 403 = @grpc_permission_denied;
error_page 404 = @grpc_unimplemented;
error_page 429 = @grpc_unavailable;
error_page 502 = @grpc_unavailable;
error_page 503 = @grpc_unavailable;
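The @grpc_* targets above are named locations that have to be defined in the same server block. A minimal sketch of one of them, modelled on the NGINX gRPC error-handling examples (the exact headers and message text are assumptions, not part of the original config):

# Sketch only: returns a trailers-only gRPC response for upstream failures.
# grpc-status 14 is UNAVAILABLE; the other @grpc_* locations follow the same
# pattern with their respective status numbers (13, 16, 7, 12).
location @grpc_unavailable {
    default_type application/grpc;
    add_header grpc-status 14;
    add_header grpc-message "unavailable";
    return 204;
}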
I hereby claim:
- I am ansrivas on github.
- I am ansrivas (https://keybase.io/ansrivas) on keybase.
- I have a public key ASAlcvQo0iSTEiCrJYXEwbD7lgglWrZlg8W04kG8tv2vAQo
To claim this, I am signing this object:
import hmac
from hashlib import sha1
import base64
import time
import urllib

s3_path = '/g4ebucket/data.tgz'
s3_access_key = 'hsjahhjj33'
s3_secret_key = 'kAJSJSDhAKJSj/kajskSAKj/='
s3_expiry = time.time() + 60 * 10  # 10 minutes
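The lines above only set up the inputs. A sketch of how they could be combined into a presigned GET URL with the legacy AWS signature-v2 scheme follows; the path-style endpoint and query layout are assumptions for illustration, not taken from the original snippet:

from urllib.parse import quote  # Python 3; under Python 2 use urllib.quote

expires = int(s3_expiry)
# Signature v2 string-to-sign: VERB, Content-MD5, Content-Type, Expires, resource
string_to_sign = "GET\n\n\n{0}\n{1}".format(expires, s3_path)
digest = hmac.new(s3_secret_key.encode('utf-8'),
                  string_to_sign.encode('utf-8'), sha1).digest()
signature = quote(base64.b64encode(digest), safe='')
url = ("https://s3.amazonaws.com{0}?AWSAccessKeyId={1}"
       "&Expires={2}&Signature={3}".format(s3_path, s3_access_key,
                                           expires, signature))
print(url)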
from sys import argv | |
from base64 import b64encode | |
from datetime import datetime | |
from Crypto.Hash import SHA, HMAC | |
def create_signature(secret_key, string):
    """Create the base64-encoded HMAC-SHA1 signature of `string` with `secret_key`."""
    # secret_key must be a byte string under Python 3.
    string_to_sign = string.encode('utf-8')
    mac = HMAC.new(secret_key, string_to_sign, SHA)
    # b64encode needs bytes, so encode the hex digest before wrapping it.
    return b64encode(mac.hexdigest().encode('utf-8'))
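A quick usage sketch; the key and message below are made-up values, purely for illustration:

if __name__ == '__main__':
    # Hypothetical inputs: any byte-string key and any message work the same way.
    demo_key = b'my-api-secret'
    demo_msg = 'GET/v1/orders/' + datetime.utcnow().isoformat()
    print(create_signature(demo_key, demo_msg))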
default['sshd']['sshd_config']['AuthenticationMethods'] = 'publickey,keyboard-interactive:pam'
default['sshd']['sshd_config']['ChallengeResponseAuthentication'] = 'yes'
default['sshd']['sshd_config']['PasswordAuthentication'] = 'no'
That's a good question. It mostly comes down to how many individual metrics and how many samples per second you plan to ingest. The number of actual targets isn't as big an issue, since each scrape is just a cheap HTTP GET; sample ingestion is where the real work happens.
RAM is a big factor:
- It limits how much data you can crunch with queries.
- It limits how much data can be buffered before being written to disk storage.
Network throughput is not a huge issue: a single server with millions of time series and 100k samples/second only needs a few megabits/second.
CPU is also important; a large server can easily keep many cores busy.
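A rough back-of-envelope sketch of what that means in practice; every constant below is an assumption picked for illustration, not a measured Prometheus figure, so plug in your own numbers:

# Crude capacity estimate; all per-sample and per-series costs are assumptions.
active_series = 1000000          # time series being ingested
samples_per_second = 100000      # total ingestion rate
bytes_per_sample_disk = 2        # assumed average after compression
bytes_per_sample_wire = 10       # assumed average for a compressed scrape
ram_per_series = 4 * 1024        # assumed working-set cost per active series

disk_gb_per_day = samples_per_second * bytes_per_sample_disk * 86400 / 1e9
net_mbit_per_s = samples_per_second * bytes_per_sample_wire * 8 / 1e6
ram_gb = active_series * ram_per_series / 2**30

print("disk/day ~{:.0f} GB, network ~{:.0f} Mbit/s, RAM ~{:.0f} GB".format(
    disk_gb_per_day, net_mbit_per_s, ram_gb))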
scrape_configs:
  - job_name: 'self'
    consul_sd_configs:
      - server: 'consul.service.consul:8500'
        services: []
    relabel_configs:
      - source_labels: [__meta_consul_tags]
        regex: .*,metrics,.*
        action: keep
      - source_labels: [__meta_consul_service]
- https://www.nomadproject.io/docs/job-specification/resources.html
- https://www.hashicorp.com/blog/load-balancing-strategies-for-consul
- https://www.nomadproject.io/guides/load-balancing/fabio.html
- https://medium.com/@mustwin/service-discovery-and-load-balancing-with-hashicorps-nomad-db435c590c26