Ankur Srivastava ansrivas
@ansrivas
ansrivas / errors.grpc_conf
Created July 2, 2019 10:46 — forked from nginx-gists/errors.grpc_conf
Deploying NGINX Plus as an API Gateway, Part 3: Publishing gRPC Services
# Standard HTTP-to-gRPC status code mappings
# Ref: https://github.com/grpc/grpc/blob/master/doc/http-grpc-status-mapping.md
#
error_page 400 = @grpc_internal;
error_page 401 = @grpc_unauthenticated;
error_page 403 = @grpc_permission_denied;
error_page 404 = @grpc_unimplemented;
error_page 429 = @grpc_unavailable;
error_page 502 = @grpc_unavailable;
error_page 503 = @grpc_unavailable;
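# Note: each @grpc_* named location referenced above must be defined
# elsewhere in the configuration to return the matching gRPC status in the
# grpc-status trailer (16 UNAUTHENTICATED, 7 PERMISSION_DENIED,
# 12 UNIMPLEMENTED, 14 UNAVAILABLE, 13 INTERNAL).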
@ansrivas
ansrivas / cassandra-docker.md
Created June 30, 2019 20:47
Restrict Cassandra memory requirements

docker run -it --rm --env MAX_HEAP_SIZE=2G --env HEAP_NEWSIZE=800M cassandra:3.0

MAX_HEAP_SIZE caps the JVM heap of the containerized node and HEAP_NEWSIZE sizes its young generation; Cassandra's cassandra-env.sh startup script picks both up from the environment, and the two must be set together.

Keybase proof

I hereby claim:

  • I am ansrivas on github.
  • I am ansrivas (https://keybase.io/ansrivas) on keybase.
  • I have a public key ASAlcvQo0iSTEiCrJYXEwbD7lgglWrZlg8W04kG8tv2vAQo

To claim this, I am signing this object:

@ansrivas
ansrivas / timed_url_aws.py
Created April 22, 2019 12:31
Use HMAC to generate a timed (expiring) S3 URL
import base64
import hmac
import time
from hashlib import sha1
from urllib.parse import quote_plus

s3_path = '/g4ebucket/data.tgz'
s3_access_key = 'hsjahhjj33'    # placeholder credentials
s3_secret_key = 'kAJSJSDhAKJSj/kajskSAKj/='
s3_expiry = int(time.time() + 60 * 10)  # URL valid for 10 minutes

# Sign "GET\n\n\n<expires>\n<path>" with the secret key -- the legacy S3
# query-string authentication scheme (Signature Version 2).
string_to_sign = 'GET\n\n\n%d\n%s' % (s3_expiry, s3_path)
digest = hmac.new(s3_secret_key.encode(), string_to_sign.encode(), sha1).digest()
signature = quote_plus(base64.b64encode(digest).decode())
timed_url = 'https://s3.amazonaws.com%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s' % (
    s3_path, s3_access_key, s3_expiry, signature)
@ansrivas
ansrivas / hmac-sha1.py
Created April 21, 2019 23:21 — forked from binaryatrocity/hmac-sha1.py
HMAC-SHA1 Python example
from sys import argv
from base64 import b64encode
from datetime import datetime
from Crypto.Hash import SHA, HMAC

def create_signature(secret_key, string):
    """ Create the signed message from api_key and string_to_sign """
    string_to_sign = string.encode('utf-8')
    hmac = HMAC.new(secret_key, string_to_sign, SHA)
    return b64encode(hmac.hexdigest())
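Crypto.Hash comes from the long-unmaintained PyCrypto package. A minimal standard-library equivalent (a sketch that mirrors the gist's base64-of-hex-digest output; the key and message in the usage line are made-up values):

import hashlib
import hmac
from base64 import b64encode

def create_signature_stdlib(secret_key, string):
    """Same output as create_signature above, using only the stdlib."""
    mac = hmac.new(secret_key, string.encode('utf-8'), hashlib.sha1)
    return b64encode(mac.hexdigest().encode('ascii'))

print(create_signature_stdlib(b'demo-key', 'string-to-sign'))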
@ansrivas
ansrivas / attributes.rb
Created April 20, 2019 07:48 — forked from lizthegrey/attributes.rb
Hardening SSH with 2FA
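# Require BOTH a public key and a PAM keyboard-interactive factor (e.g. a
# TOTP PAM module) on every login; challenge-response stays enabled so PAM
# can prompt, while plain password logins remain disabled.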
default['sshd']['sshd_config']['AuthenticationMethods'] = 'publickey,keyboard-interactive:pam'
default['sshd']['sshd_config']['ChallengeResponseAuthentication'] = 'yes'
default['sshd']['sshd_config']['PasswordAuthentication'] = 'no'
@ansrivas
ansrivas / prometheus-config.md
Created March 22, 2019 20:47
prometheus-config.md

That's a good question. It mostly comes down to how many individual metrics and how many samples per second you plan to ingest. The number of actual targets isn't as big an issue, since scrapes are cheap (a simple HTTP GET), but sample ingestion takes some work.

RAM is a big factor:

  • It limits how much data you can crunch with queries
  • It limits how much data can be buffered before writing to the disk storage

Network throughput is not a huge issue. A single server with millions of time series and 100k samples/second only needs a few megabits/second.
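A quick back-of-envelope check of that claim (the per-sample byte cost below is an assumed round number, not a measured figure):

samples_per_sec = 100_000
bytes_per_sample = 5  # assumed average on-the-wire cost per sample
mbit_per_sec = samples_per_sec * bytes_per_sample * 8 / 1e6
print(f"{mbit_per_sec} Mbit/s")  # 4.0 Mbit/s, i.e. "a few megabits/second"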

CPU is important: a large server can easily use many cores.

@ansrivas
ansrivas / prometheus.yml
Last active March 10, 2019 19:45
prometheus.yml example
scrape_configs:
  # Scrape Prometheus itself, discovering targets via Consul.
  - job_name: 'self'
    consul_sd_configs:
      - server: 'consul.service.consul:8500'
        services: []   # empty list = discover all services
    relabel_configs:
      # Keep only targets whose Consul tags include "metrics".
      - source_labels: [__meta_consul_tags]
        regex: .*,metrics,.*
        action: keep
      - source_labels: [__meta_consul_service]