#Container Resource Allocation Options in docker-run
Now see: https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources
You have various options for controlling resources (CPU, memory, disk) in Docker. These are set principally via docker-run command options.
##Dynamic CPU Allocation
-c, --cpu-shares=0
CPU shares (relative weight). Specify a numeric value that sets the container's relative share of CPU time; shares only take effect when containers are competing for the CPU.
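For example, a minimal sketch (busybox and the idle-spinning shell loops are placeholder workloads) that pins two containers onto the same CPU so the 2:1 share ratio becomes visible under contention:

```bash
# Both containers are pinned to CPU 0 and burn CPU continuously, so they
# compete; "high" should get roughly twice the CPU time of "low".
docker run -d --name high -c 1024 --cpuset=0 busybox sh -c 'while :; do :; done'
docker run -d --name low  -c 512  --cpuset=0 busybox sh -c 'while :; do :; done'

# Watch the split with htop/mpstat on the host, then clean up:
docker rm -f high low
```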
##Reserved CPU Allocation
--cpuset=""
Specify which CPUs the container may execute on, by processor number (numbering starts at 0, so an n-CPU machine has CPUs 0 to n-1). Accepts a contiguous range ("0-3") or a discontiguous list ("0,3,4").
You can see these processors being used with mpstat (note that mpstat requires a time interval in order to report the change in usage), e.g. $ mpstat -P ALL 2 5
Some further reading:
http://stackoverflow.com/questions/26282072/puzzled-by-the-cpushare-setting-on-docker
https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt
Alternatively: LXC (if not using the default libcontainer exec driver). You can also allocate the CPUs for LXC containers using the --lxc-conf option:
--lxc-conf=[]
(lxc exec-driver only) Add custom lxc options
--lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
--lxc-conf="lxc.cgroup.cpu.shares = 1234"
#Testing cpu allocation
You can use this image to experiment with the --cpu-shares and --cpuset flags of docker run. See here: http://agileek.github.io/docker/2014/08/06/docker-cpuset/
USAGE (CPU numbers run from 0 to n-1, depending on how many CPUs your machine has):
sudo docker run -it --rm --cpuset=0,1 agileek/cpuset-test /cpus 2
sudo docker run -it --rm --cpuset=3 agileek/cpuset-test
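To confirm the restriction actually took effect, one option (a sketch assuming a typical cgroup v1 layout where Docker creates /sys/fs/cgroup/cpuset/docker/<id>; the exact mount point can vary by distribution) is to read the container's cpuset cgroup on the host:

```bash
# Full ID of the most recently started container (adjust to taste).
CID=$(docker ps -lq --no-trunc)

# The kernel's view of which CPUs the container may run on.
cat /sys/fs/cgroup/cpuset/docker/$CID/cpuset.cpus
```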
Install sysstat for monitoring (if mpstat is not available); htop is also quite nice for visualizing this:
$ sudo apt-get install sysstat
$ mpstat -P ALL 2 10
or $ htop
e.g. burn CPUs number 1 and 6:
$ docker run -ti --rm --cpuset=1,6 agileek/cpuset-test /cpus 2
$ mpstat -P ALL 2 # update every 2 seconds

    13:56:32  CPU    %usr  %nice   %sys %iowait   %irq  %soft %steal %guest %gnice  %idle
    13:56:34  all   29.80   0.00   0.38    0.69   0.00   0.00   0.00   0.00   0.00  69.13
    13:56:34    0   13.43   0.00   1.00    2.49   0.00   0.00   0.00   0.00   0.00  83.08
    13:56:34    1  100.00   0.00   0.00    0.00   0.00   0.00   0.00   0.00   0.00   0.00
    13:56:34    2    2.03   0.00   0.00    0.00   0.00   0.00   0.00   0.00   0.00  97.97
    13:56:34    3   14.78   0.00   0.99    2.46   0.00   0.00   0.00   0.00   0.00  81.77
    13:56:34    4    2.51   0.00   0.50    0.50   0.00   0.00   0.00   0.00   0.00  96.48
    13:56:34    5    0.00   0.00   0.00    0.00   0.00   0.00   0.00   0.00   0.00 100.00
    13:56:34    6  100.00   0.00   0.00    0.00   0.00   0.00   0.00   0.00   0.00   0.00
    13:56:34    7    4.50   0.00   0.50    0.50   0.00   0.00   0.00   0.00   0.00  94.50
##RAM Allocation
-m, --memory=""
Memory limit (format: a number with an optional unit suffix, where unit = b, k, m or g)
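For example, a sketch that caps a container at 512 MB (the image and command are placeholders; whether swap is limited as well depends on your kernel's cgroup configuration):

```bash
# Limit the container to 512 MB of RAM; allocations beyond the limit are
# handled by the kernel's OOM killer within the container's memory cgroup.
docker run -it --rm -m 512m ubuntu:14.04 /bin/bash
```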
##Disk Allocation
Resizing container filesystems:
http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/
moby/moby#471
https://github.com/snitm/docker/blob/master/daemon/graphdriver/devmapper/README.md
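With the devicemapper graph driver, the default base size of container filesystems is a daemon-wide setting; a sketch (assumes devicemapper is in use, and images/containers generally need to be recreated before the new size applies):

```bash
# Start the Docker daemon with a 20 GB base device size for container
# filesystems (devicemapper only; the historical default is 10 GB).
docker -d --storage-opt dm.basesize=20G

# Resizing an existing container's filesystem in place is more involved;
# see the jpetazzo post linked above for the dmsetup/resize2fs approach.
```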
Is there a way to use this concept to have two (or more) Docker containers running on disjoint sets of CPUs and memory, where the two containers do not affect each other's performance? Or will there always be kernel and memory overhead that slows the containers down?
For example: if I have an 8-CPU system and run a performance-sensitive program on CPUs 0-3, and also run the exact same program in parallel on CPUs 4-7. In my experience the performance is degraded even when running the program on disjoint sets of the hardware. I would have thought that it would not be, essentially using Docker to divide the system resources equally without overlap, in a pseudo-virtualized fashion. Is this possible with Docker?
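A sketch of that kind of partitioning with the flags discussed above (my-benchmark-image and the 4g limits are placeholders); note that even with disjoint cpusets the containers still share the kernel, last-level cache, memory bandwidth and I/O paths, which can account for some of the observed degradation:

```bash
# Give each instance its own half of the CPUs and its own memory cap,
# so neither can take the other's CPU time or RAM.
docker run -d --name workload-a --cpuset=0-3 -m 4g my-benchmark-image
docker run -d --name workload-b --cpuset=4-7 -m 4g my-benchmark-image
```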