Use xev to capture the keycodes of the keys you want to swap.
- Get the original key mapping using:
xmodmap -pke > ~/keymaptable
$ cat ~/.Xmodmap
! -*- coding: utf-8 -*-
! swapped 49 with 94
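A hypothetical body for the swap above (the keysyms here are assumptions; read the real ones for keycodes 49 and 94 from ~/keymaptable first):

```
keycode 49 = less greater
keycode 94 = grave asciitilde
```

Apply it with `xmodmap ~/.Xmodmap`.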
# to generate your dhparam.pem file, run in the terminal
openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
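Once generated, the file is referenced from the TLS server block (the surrounding server block is assumed; the path matches the command above):

```
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
```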
# Standard HTTP-to-gRPC status code mappings
# Ref: https://github.com/grpc/grpc/blob/master/doc/http-grpc-status-mapping.md
#
error_page 400 = @grpc_internal;
error_page 401 = @grpc_unauthenticated;
error_page 403 = @grpc_permission_denied;
error_page 404 = @grpc_unimplemented;
error_page 429 = @grpc_unavailable;
error_page 502 = @grpc_unavailable;
error_page 503 = @grpc_unavailable;
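Each mapping points at a named location that rewrites the error into gRPC wire format. A minimal sketch of one such location, modeled on common nginx gRPC gateway configs (the exact grpc-message text is an assumption; 13 is the gRPC INTERNAL code):

```
location @grpc_internal {
    # gRPC clients read the status from headers/trailers, not the body
    default_type application/grpc;
    add_header grpc-status 13;
    add_header grpc-message "internal error";
    return 204;
}
```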
# Run Cassandra 3.0 with a capped JVM heap via the image's env vars
docker run -it --rm --env MAX_HEAP_SIZE=2G --env HEAP_NEWSIZE=800M cassandra:3.0
import base64
import hmac
import time
from hashlib import sha1
from urllib.parse import quote

s3_path = '/g4ebucket/data.tgz'
s3_access_key = 'hsjahhjj33'
s3_secret_key = 'kAJSJSDhAKJSj/kajskSAKj/='
s3_expiry = int(time.time() + 60 * 10)  # URL valid for 10 minutes

# AWS Signature v2 query-string auth: sign "GET\n\n\n<expires>\n<resource>"
string_to_sign = 'GET\n\n\n%d\n%s' % (s3_expiry, s3_path)
signature = base64.b64encode(
    hmac.new(s3_secret_key.encode(), string_to_sign.encode(), sha1).digest())
signed_url = 'https://s3.amazonaws.com%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s' % (
    s3_path, s3_access_key, s3_expiry, quote(signature))
from sys import argv
from base64 import b64encode
from datetime import datetime
from Crypto.Hash import SHA, HMAC

def create_signature(secret_key, string):
    """Create the signed message from api_key and string_to_sign."""
    string_to_sign = string.encode('utf-8')
    # Base64-encode the *binary* digest; b64encode(hexdigest()) would
    # double-encode the signature and most APIs will reject it
    mac = HMAC.new(secret_key.encode('utf-8'), string_to_sign, SHA)
    return b64encode(mac.digest())
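The same signature can be produced with the standard library alone, which avoids the PyCrypto dependency. A sketch, assuming both key and message are UTF-8 text (the function name is mine, not from the original):

```python
import base64
import hmac
from hashlib import sha1

def create_signature_stdlib(secret_key, string):
    """HMAC-SHA1 of `string`, base64-encoded, using only the stdlib."""
    mac = hmac.new(secret_key.encode('utf-8'), string.encode('utf-8'), sha1)
    return base64.b64encode(mac.digest())
```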
default['sshd']['sshd_config']['AuthenticationMethods'] = 'publickey,keyboard-interactive:pam'
default['sshd']['sshd_config']['ChallengeResponseAuthentication'] = 'yes'
default['sshd']['sshd_config']['PasswordAuthentication'] = 'no'
That's a good question. It mostly comes down to how many individual metrics you have and how many samples per second you plan to ingest. The number of actual targets isn't a big issue, since scrapes are cheap (a simple HTTP GET), but sample ingestion takes real work.
RAM is a big factor.
Network throughput is not a huge issue: a single server with millions of time series ingesting 100k samples/second only needs a few megabits per second.
CPU is important; a large server can easily use many cores.
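A back-of-envelope sizing sketch of the reasoning above. The constants are my own rough rules of thumb, not official Prometheus figures; plug in your own series count, ingestion rate, and retention:

```python
# Rough sizing assumptions (NOT official figures):
BYTES_PER_SAMPLE_ON_DISK = 2      # compressed samples are ~1-2 bytes each
RAM_PER_ACTIVE_SERIES = 8 * 1024  # a few KiB of memory per active series

def estimate(active_series, samples_per_sec, retention_days):
    """Return (ram_gib, disk_gib) estimates for a Prometheus server."""
    ram_gib = active_series * RAM_PER_ACTIVE_SERIES / 2**30
    disk_gib = (samples_per_sec * 86400 * retention_days
                * BYTES_PER_SAMPLE_ON_DISK / 2**30)
    return ram_gib, disk_gib

# e.g. 1M active series, 100k samples/s, 15 days retention
ram, disk = estimate(1_000_000, 100_000, 15)
print(f"RAM ~{ram:.0f} GiB, disk ~{disk:.0f} GiB")
```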