Adam Buran (aburan28) · San Francisco Bay Area · GitHub gists
global
    pidfile /var/run/haproxy.pid
    log 127.0.0.1:1514 local0
    tune.ssl.default-dh-param 2048
    # disable sslv3, prefer modern ciphers
    ssl-default-bind-options no-sslv3
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-server-options no-sslv3
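A quick client-side probe can confirm what these defaults actually negotiate. A minimal sketch in Python, assuming HAProxy terminates TLS at a hypothetical host and port (not taken from the config above):

#!/usr/bin/env python
# Sketch: report the negotiated TLS version and cipher of a frontend.
# "haproxy.example.com" and port 443 are assumptions for illustration.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # probe only; do not disable verification for real traffic

with socket.create_connection(("haproxy.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="haproxy.example.com") as tls:
        print(tls.version())  # e.g. "TLSv1.2"; SSLv3 should be refused
        print(tls.cipher())   # (name, protocol, secret_bits)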
allow 127.0.0.0/8;
allow 10.0.0.0/8;
allow 192.168.0.0/16;
allow 172.16.0.0/12;
deny all;
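The same allow/deny logic (loopback plus the RFC 1918 private ranges, deny everything else) can be expressed directly with the standard library; a small sketch:

#!/usr/bin/env python
# Sketch of the nginx access rules above: permit loopback and
# RFC 1918 private ranges, deny all other addresses.
import ipaddress

ALLOWED = [ipaddress.ip_network(n) for n in (
    "127.0.0.0/8", "10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12")]

def is_allowed(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED)

print(is_allowed("10.1.2.3"))  # True
print(is_allowed("8.8.8.8"))   # False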
version: '2'
services:
  aqua-csp-service:
    image: aquasec/csp:3.0
    hostname: aqua-csp
    environment:
      BATCH_INSTALL_ENFORCE_MODE: "n"
      BATCH_INSTALL_GATEWAY: csp
      BATCH_INSTALL_NAME: default
      BATCH_INSTALL_TOKEN: aqua-csp
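A quick way to check that the indentation round-trips is to parse the file; a sketch using PyYAML, where the docker-compose.yml filename is an assumption:

#!/usr/bin/env python
# Sketch: parse the compose file and print the service's environment.
# Requires PyYAML (pip install pyyaml); "docker-compose.yml" is assumed.
import yaml

with open("docker-compose.yml") as f:
    cfg = yaml.safe_load(f)

svc = cfg["services"]["aqua-csp-service"]
print(svc["image"])  # aquasec/csp:3.0
for key, value in svc["environment"].items():
    print("%s=%s" % (key, value))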
- name: Add docker apt repo
  apt_repository:
    repo: 'deb https://apt.dockerproject.org/repo ubuntu-{{ ansible_distribution_release }} main'
    state: present
  register: result

- name: Import the Docker repository key
  when: result is success
  apt_key:
    url: https://apt.dockerproject.org/gpg
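The repo line relies on Jinja2 templating to pick the right Ubuntu release; a standalone sketch of how that expansion behaves, with "xenial" as an assumed release value:

#!/usr/bin/env python
# Sketch: how Ansible's Jinja2 templating expands the repo line.
# Requires Jinja2 (pip install jinja2); "xenial" is an assumption.
from jinja2 import Template

tpl = Template(
    "deb https://apt.dockerproject.org/repo "
    "ubuntu-{{ ansible_distribution_release }} main")
print(tpl.render(ansible_distribution_release="xenial"))
# deb https://apt.dockerproject.org/repo ubuntu-xenial main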
aburan28 / rsabd.py (created February 12, 2018, forked from ryancdotorg/rsabd.py): backdoored rsa key generation
#!/usr/bin/env python
import sys
import gmpy
import curve25519
from struct import pack
from hashlib import sha256
from binascii import hexlify, unhexlify
"I've often seen this quote used to justify obviously bad code or code that, while its performance has not been measured, could probably be made faster quite easily, without increasing code size or compromising its readability.
In general, I do think early micro-optimizations may be a bad idea. However, macro-optimizations (things like choosing an O(log N) algorithm instead of O(N^2)) are often worthwhile and should be done early, since it may be wasteful to write a O(N^2) algorithm and then throw it away completely in favor of a O(log N) approach.
Note the words may be: if the O(N^2) algorithm is simple and easy to write, you can throw it away later without much guilt if it turns out to be too slow. But if both algorithms are similarly complex, or if the expected workload is so large that you already know you'll need the faster one, then optimizing early is a sound engineering decision that will reduce your total workload in the long run.
Thus, in general, I think the right approach is to find out what
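As a toy sketch of that macro-level tradeoff (using a simpler O(N)-versus-O(log N) membership test rather than the O(N^2) case in the quote):

#!/usr/bin/env python
# Toy illustration: O(N) linear scan vs O(log N) binary search on sorted data.
import bisect

data = sorted(range(0, 1000000, 3))

def contains_linear(xs, x):
    # O(N): trivial to write, easy to throw away if it proves too slow
    return x in xs

def contains_bisect(xs, x):
    # O(log N): barely more complex here, so worth choosing early
    i = bisect.bisect_left(xs, x)
    return i < len(xs) and xs[i] == x

assert contains_linear(data, 999999) == contains_bisect(data, 999999)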
--add-runtime runtime Register an additional OCI compatible runtime (default [])
--allow-nondistributable-artifacts list Allow push of nondistributable artifacts to registry
--api-cors-header string Set CORS headers in the Engine API
--authorization-plugin list Authorization plugins to load
--bip string Specify network bridge IP
-b, --bridge string Attach containers to a network bridge
--cgroup-parent string Set parent cgroup for all containers
--cluster-advertise string Address or interface name to advertise
--cluster-store string URL of the distributed storage backend
--cluster-store-opt map Set cluster store options (default map[])
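Several of the long-form flags above can also be set in the daemon's JSON config file rather than on the command line; a hedged sketch that emits such a config (the values are placeholders, and whether your daemon reads /etc/docker/daemon.json is an assumption about the installation):

#!/usr/bin/env python
# Sketch: a few of the flags above expressed as daemon.json keys.
# Values are placeholders; key names mirror the long flag names.
import json

daemon_cfg = {
    "bip": "172.26.0.1/16",           # --bip
    "cgroup-parent": "docker.slice",  # --cgroup-parent
    "api-cors-header": "*",           # --api-cors-header
    "authorization-plugins": [],      # --authorization-plugin
}
print(json.dumps(daemon_cfg, indent=2))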
{
  "description": "sshFS plugin for Docker",
  "documentation": "https://docs.docker.com/engine/extend/plugins/",
  "entrypoint": ["/docker-volume-sshfs"],
  "network": {
    "type": "host"
  },
  "interface": {
    "types": ["docker.volumedriver/1.0"],
    "socket": "sshfs.sock"
  }
}
usage: ./bwrap [OPTIONS...] COMMAND [ARGS...]
--help Print this help
--version Print version
--args FD Parse nul-separated args from FD
--unshare-all Unshare every namespace we support by default
--share-net Retain the network namespace (can only combine with --unshare-all)
--unshare-user Create new user namespace (may be automatically implied if not setuid)
--unshare-user-try Create new user namespace if possible else continue by skipping it
--unshare-ipc Create new ipc namespace
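A common way to drive bwrap from a script is via subprocess; a minimal sketch that runs a command in a throwaway sandbox, using --unshare-all from the listing above plus the usual --ro-bind/--dev/--proc mounts (assumptions about a typical invocation, not taken from this help text):

#!/usr/bin/env python
# Sketch: run a command inside a bubblewrap sandbox with all
# namespaces unshared. The bind mounts are assumed boilerplate.
import subprocess

cmd = [
    "bwrap",
    "--ro-bind", "/", "/",  # read-only view of the host rootfs
    "--dev", "/dev",        # minimal /dev
    "--proc", "/proc",      # fresh /proc for the new pid namespace
    "--unshare-all",        # every namespace listed in the help above
    "echo", "hello from the sandbox",
]
subprocess.run(cmd, check=True)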
Brief summary of control files.
tasks # attach a task(thread) and show list of threads
cgroup.procs # show list of processes
cgroup.event_control # an interface for event_fd()
memory.usage_in_bytes # show current usage for memory
memory.memsw.usage_in_bytes # show current usage for memory+Swap
memory.limit_in_bytes # set/show limit of memory usage
memory.memsw.limit_in_bytes # set/show limit of memory+Swap usage
memory.failcnt # show the number of memory usage hits limits
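These control files are read and written as plain text under the cgroup mount; a sketch for cgroup v1, assuming the memory controller is mounted at /sys/fs/cgroup/memory and using a hypothetical group named "demo" (needs root):

#!/usr/bin/env python
# Sketch: poke the memory control files above via the cgroup v1 filesystem.
# The mount point and the "demo" group name are assumptions; run as root.
import os

CG = "/sys/fs/cgroup/memory/demo"
os.makedirs(CG, exist_ok=True)

# set limit of memory usage (memory.limit_in_bytes)
with open(os.path.join(CG, "memory.limit_in_bytes"), "w") as f:
    f.write(str(256 * 1024 * 1024))  # 256 MiB

# show current usage for memory (memory.usage_in_bytes)
with open(os.path.join(CG, "memory.usage_in_bytes")) as f:
    print("usage:", f.read().strip())

# show how often the limit was hit (memory.failcnt)
with open(os.path.join(CG, "memory.failcnt")) as f:
    print("failcnt:", f.read().strip())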