In Git you can add a submodule to a repository. A submodule is essentially a sub-repository embedded in your main repository, which can be very useful. A couple of use cases for submodules:
- Separating a big codebase into multiple repositories.
- Pulling in a third-party library or shared component and pinning it to a specific commit.
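
As a quick sketch (the repository URL and the `lib/foo` path below are just placeholders), adding a submodule and initializing it after a fresh clone looks like this:

```sh
# Add an external repository as a submodule under lib/foo
git submodule add https://example.com/some/repo.git lib/foo

# After cloning a repository that contains submodules,
# fetch and check out the submodule contents
git submodule update --init --recursive
```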
```c
// Decodes Base64
#include <openssl/bio.h>
#include <openssl/evp.h>
#include <string.h>
#include <stdio.h>

// Calculates the length of a decoded base64 string
int calcDecodeLength(const char* b64input) {
  int len = strlen(b64input);
  int padding = 0;

  // Trailing '=' characters indicate padding
  if (len > 1 && b64input[len-1] == '=' && b64input[len-2] == '=')
    padding = 2;
  else if (len > 0 && b64input[len-1] == '=')
    padding = 1;

  return (len * 3) / 4 - padding;
}
```
```js
// you can omit DI for _light_ dependencies
var async = require('async');

module.exports.inject = function (dependencies) {
  // no direct require of _heavy_ dependencies
  var mysql = dependencies.mysql;
  var redis = dependencies.redis;
  // do whatever the module needs with mysql/redis here
};
```
```js
// 'exporter' is assumed to be provided by the surrounding module loader
function myLib() {
  return {
    code: function () {}
  , goes: function () {}
  , here: function () {}
  };
}

exporter('myLib', myLib);
```
```js
var cluster = require('cluster');
var PORT = +process.env.PORT || 1337;

if (cluster.isMaster) {
  // In real life, you'd probably use more than just 2 workers,
  // and perhaps not put the master and worker in the same file.
  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', function (worker) {
    cluster.fork(); // replace the worker that disconnected
  });
} else {
  // Worker: a minimal HTTP server listening on PORT
  require('http').createServer(function (req, res) {
    res.end('hello\n');
  }).listen(PORT);
}
```
```make
# Compile queue.c into the object file queue.o
build:
	$(CC) -c -o queue.o queue.c
```
```sh
#!/bin/sh
export SYSROOT=/home/julian/dev/rpi/rpi-buildroot/output/rootfs-debug/staging
export PATH=/home/julian/dev/rpi/ct-ng/host/bin/:$PATH

echo "Build"
make -j9 V=1 || exit 1   # abort with a non-zero status if the build fails

echo "Install"
make DESTDIR=$(pwd)/install/ install
```
```ini
; CouchDB Config
; Drop in PREFIX/local.d/npmjs.ini

[couch_httpd_auth]
public_fields = appdotnet, avatar, avatarMedium, avatarLarge, date, email, fields, freenode, fullname, github, homepage, name, roles, twitter, type, _id, _rev
users_db_public = true

[httpd]
secure_rewrites = false
```
The prep-script.sh will set up the latest Node and install the latest version of perf on your Linux box.
When you want to generate the flame graph, run the following (folder locations are taken from the install script):
```sh
sudo sysctl kernel.kptr_restrict=0

# May also have to do the following:
# (additional reading http://unix.stackexchange.com/questions/14227/do-i-need-root-admin-permissions-to-run-userspace-perf-tool-perf-events-ar )
sudo sysctl kernel.perf_event_paranoid=0
```
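
With those kernel settings relaxed, a typical Linux flame-graph workflow looks roughly like the sketch below. It assumes the Node process was started with `--perf-basic-prof`, that its process id is in `$PID`, and that Brendan Gregg's FlameGraph scripts (`stackcollapse-perf.pl`, `flamegraph.pl`) are on your PATH; adjust paths to wherever the prep script installed things.

```sh
# Sample stacks from the running Node process for 30 seconds
sudo perf record -F 99 -p "$PID" -g -- sleep 30

# Turn the samples into folded stacks and render the SVG
sudo perf script > perf.out
stackcollapse-perf.pl perf.out | flamegraph.pl > flamegraph.svg
```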
#Container Resource Allocation Options in docker-run
For up-to-date documentation, see: https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources

You have various options for controlling resources (CPU, memory, disk) in Docker, principally via `docker run` command-line options.
##Dynamic CPU Allocation
`-c`, `--cpu-shares=0`

CPU shares (relative weight). Specify a numeric value to give the container a relative share of CPU time; leaving it at 0 keeps Docker's default weight of 1024.
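
As a small illustrative sketch (the `busybox` image and `top` command are just placeholders), giving a container half the default weight looks like this:

```sh
# Relative weight 512, i.e. half the default of 1024: under CPU contention this
# container gets roughly half as much CPU time as one using the default share.
docker run -d --cpu-shares=512 busybox top
```

Note that CPU shares only take effect when CPU cycles are contended; on an otherwise idle host a container can still use as much CPU as it wants.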