GitHub supports several lightweight markup languages for documentation; the most popular ones (generally, not just at GitHub) are Markdown and reStructuredText. Markdown is sometimes considered easier to use, and is often preferred when the purpose is simply to generate HTML. On the other hand, reStructuredText is more extensible and powerful, with native support (not just embedded HTML) for tables, as well as things like automatic generation of tables of contents.
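For instance, a table of contents and simple tables are native reStructuredText constructs, while plain Markdown needs embedded HTML or an extension for both. A minimal reST sketch:

.. contents:: Table of Contents

======  ========
Format  Tables
======  ========
MD      via HTML
reST    native
======  ========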
## At the http level
map $http_cookie $is_secure {
    default 0;
    ~SESS 1; # there's a session cookie (use SSL - authenticated user)
}
map $is_secure $not_secure {
    1 0;
    0 1;
}
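A hedged sketch of how these maps might be used; the server name and the redirect policy are placeholders, not part of the original snippet:

# hypothetical server block: push users with a session cookie onto HTTPS
server {
    listen 80;
    server_name example.com;

    # nginx treats "0" and "" as false, so this fires only when $is_secure is 1
    if ($is_secure) {
        return 301 https://$host$request_uri;
    }
}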
# git config; arlimus, public domain
## Make your adjustments
########################
[user]
    name = Your Name
    email = [email protected]
[core]
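The same values can be set from the command line instead of editing the file by hand; the email below is a placeholder:

# writes the [user] entries above into ~/.gitconfig
git config --global user.name "Your Name"
git config --global user.email "you@example.com"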
- Don't run as root.
- For sessions, set `httpOnly` (and `secure` to `true` if running over SSL) when setting cookies; see the sketch after this list.
- Use Helmet for secure headers: https://github.com/evilpacket/helmet
- Enable `csrf` to prevent Cross-Site Request Forgery: http://expressjs.com/api.html#csrf
- Don't use the deprecated `bodyParser()` and only use `multipart` explicitly. To avoid multipart's vulnerability to 'temp file' bloat, use the `defer` property and `pipe()` the multipart upload stream to the intended destination.
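A minimal sketch of the cookie advice above, assuming the Express 3.x-era middleware that these links refer to (`express.session`, `express.csrf`); the secret is a placeholder, and the Helmet call assumes its current `helmet()` entry point:

var express = require('express');
var helmet = require('helmet');

var app = express();

app.use(helmet());                              // secure headers (assumed modern API)
app.use(express.cookieParser('keyboard cat'));  // placeholder secret
app.use(express.session({
  cookie: {
    httpOnly: true,  // cookie not readable from client-side JS
    secure: true     // only sent over SSL; enable when running HTTPS
  }
}));
app.use(express.csrf());                        // Cross-Site Request Forgery protection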
# For each dependency's pkgMeta, get "license" if it exists, otherwise get the "type" field of each of "licenses", or "unknown" if that is also empty
# I'm sure there's a better way to do this with jq
bower list -jq | jq '.dependencies | to_entries[] | { (.key): .value | .pkgMeta | (.license // ((.licenses // [{type: "unknown"}])[] | .type)) }'
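The `//` used twice above is jq's alternative operator: `a // b` yields `b` whenever `a` is `null` or `false`, which is what makes the license/licenses fallback work. A standalone check:

# falls through to .licenses when .license is null
echo '{"license": null, "licenses": [{"type": "MIT"}]}' \
  | jq '.license // (.licenses[] | .type)'
# => "MIT"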
RDBMS-based job queues have been criticized recently for being unable to handle heavy loads. And they deserve it, to some extent, because the queries used to safely lock a job have been pretty hairy. SELECT FOR UPDATE followed by an UPDATE works fine at first, but then you add more workers, and each is trying to SELECT FOR UPDATE the same row (and maybe throwing NOWAIT in there, then catching the errors and retrying), and things slow down.
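A sketch of the naive pattern being described, against a hypothetical `jobs` table (schema and column names are illustrative only):

-- every worker races to lock the same head-of-queue row
BEGIN;
SELECT id FROM jobs
WHERE  locked_at IS NULL
ORDER  BY run_at
LIMIT  1
FOR UPDATE NOWAIT;  -- errors out immediately if another worker holds the lock

UPDATE jobs SET locked_at = now() WHERE id = $1;  -- $1 = the id selected above
COMMIT;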
On top of that, they have to actually update the row to mark it as locked, so the rest of your workers are sitting there waiting while one of them propagates its lock to disk (and the disks of however many servers you're replicating to). QueueClassic got some mileage out of the novel idea of randomly picking a row near the front of the queue to lock, but I still can't seem to get more than an extra few hundred jobs per second out of it under heavy load.
So, many developers have started going straight t
FROM ubuntu:precise
MAINTAINER Christoph Hartmann "[email protected]"
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
# compile logstash forwarder
RUN apt-get install -y wget git
RUN wget --no-check-certificate https://go.googlecode.com/files/go1.1.1.linux-amd64.tar.gz
RUN tar -C /usr/local -xzf go1.1.1.linux-amd64.tar.gz
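The snippet ends right after unpacking the Go toolchain; a quick way to sanity-check the image so far (the tag is a placeholder):

# build the image and confirm Go is in place
docker build -t logstash-forwarder-build .
docker run --rm logstash-forwarder-build /usr/local/go/bin/go version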
# Colors
end="\033[0m"
black="\033[0;30m"
blackb="\033[1;30m"
white="\033[0;37m"
whiteb="\033[1;37m"
red="\033[0;31m"
redb="\033[1;31m"
green="\033[0;32m"
greenb="\033[1;32m"
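These are standard ANSI escape sequences; printf interprets the embedded `\033` escapes, so a quick usage check looks like this:

# print bold green text, then reset the terminal color
printf "${greenb}All tests passed${end}\n"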
# encoding: utf-8
### Sample script to export Chef Server nodes and import them to Chef Compliance
### Change the 'api_url', 'api_user', 'api_pass' and 'api_org' variables below
### Change the nodes_array JSON to suit your environment
### Go to your chef-repo and check Chef Server access first:
# cd chef-repo; knife environment list
### Save this Ruby script as kitchen_sink.rb and run it like this:
# cat kitchen_sink.rb | knife exec
### Chef Compliance API docs: https://docs.chef.io/api_compliance.html
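The script body is not included in this excerpt; a minimal sketch of the configuration block the comments describe, with placeholder values and a hypothetical nodes_array shape:

# placeholders only; adjust per the comments above
api_url  = 'https://compliance.example.com'
api_user = 'admin'
api_pass = 'CHANGE_ME'
api_org  = 'my_org'

# hypothetical shape: one entry per Chef Server node to export
nodes_array = [
  { 'name' => 'node1.example.com', 'environment' => '_default' }
]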