# Purge a Kafka topic by temporarily shrinking its retention (ZooKeeper-based tooling):
bin/kafka-topics.sh --zookeeper localhost:2181 --list
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mytopic
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic --config retention.ms=1000
# ... wait a minute for the broker to discard the old segments ...
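Once the topic has been emptied, put retention back. A minimal sketch, assuming you want to return to the broker default (--delete-config removes the per-topic override):

bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic --delete-config retention.ms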
/**
 * This class allows injecting into objects through a base class,
 * so we don't have to repeat injection code everywhere.
 *
 * The performance drawback is about 0.013 ms per injection on a very slow device,
 * which is negligible in most cases.
 *
 * Example:
 * <pre>{@code
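 * // The original example was truncated; this is an illustrative sketch only,
 * // and the class and field names below are hypothetical:
 * public class ProfileActivity extends InjectingActivity {
 *     @Inject UserRepository userRepository; // populated by the base class
 * }
 * }</pre>
 */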
#!/bin/bash
# Modified Pi-hole script to generate a generic hosts file
# for use with dnsmasq's addn-hosts configuration
# original: https://github.com/jacobsalmela/pi-hole/blob/master/gravity-adv.sh
# The Pi-hole now blocks over 120,000 ad domains
# Address to send ads to (the RPi)
piholeIP="127.0.0.1"
outlist='./adblock.hosts'
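The script above is truncated. As a minimal sketch of the remainder (the original aggregates several blocklists; here a single assumed source is fetched and each entry rewritten to point at piholeIP):

curl -s https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts \
  | awk -v ip="$piholeIP" '/^0\.0\.0\.0 /{print ip" "$2}' > "$outlist"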
It's easy enough to set up your machine as a swarm manager for local development on a single-node swarm. But what about setting up multiple local nodes with Docker Machine, in case you want to simulate a multi-node environment (say, to test HA features)?
The following script demonstrates a simple way to specify the number of manager and worker nodes you want and then bootstrap a swarm.
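The original script isn't reproduced here, so this is a minimal sketch of the same idea; the node names, the virtualbox driver, and the MANAGERS/WORKERS defaults are assumptions:

#!/usr/bin/env bash
set -e

MANAGERS=${MANAGERS:-1}   # assumed defaults; override via environment
WORKERS=${WORKERS:-2}

# Create the first manager and initialise the swarm on it.
docker-machine create -d virtualbox manager1
MANAGER_IP=$(docker-machine ip manager1)
docker-machine ssh manager1 docker swarm init --advertise-addr "$MANAGER_IP"

# Fetch join tokens from the first manager.
MANAGER_TOKEN=$(docker-machine ssh manager1 docker swarm join-token -q manager)
WORKER_TOKEN=$(docker-machine ssh manager1 docker swarm join-token -q worker)

# Join the remaining managers and the workers.
for ((i=2; i<=MANAGERS; i++)); do
  docker-machine create -d virtualbox "manager$i"
  docker-machine ssh "manager$i" docker swarm join --token "$MANAGER_TOKEN" "$MANAGER_IP:2377"
done
for ((i=1; i<=WORKERS; i++)); do
  docker-machine create -d virtualbox "worker$i"
  docker-machine ssh "worker$i" docker swarm join --token "$WORKER_TOKEN" "$MANAGER_IP:2377"
done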
You can also check out the sample as a GitHub project.
Install dnsmasq, via brew or another method of your choice.
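With Homebrew, for example:

brew install dnsmasq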
In order to work on every connection and on any TLD, dnsmasq needs to be the first DNS resolver receiving the query. And since dnsmasq is a local process, all DNS queries need to go to 127.0.0.1.
On macOS, /etc/resolv.conf is automatically created, depending on a variety of things (network settings, etc.), so it cannot be edited.
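One way to route queries to the local resolver is to set 127.0.0.1 as the DNS server on each network service; "Wi-Fi" below is an assumed service name (list yours with networksetup -listallnetworkservices):

sudo networksetup -setdnsservers Wi-Fi 127.0.0.1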
Put this in your wp-config.php; it tells WordPress to write to the filesystem directly instead of prompting for FTP credentials:
/* That's all, stop editing! Happy blogging. */
define('FS_METHOD', 'direct');
#!/usr/bin/env sh
set -e
echo "Pulling latest code..."
git pull
echo "Deleting local branches that were removed in remote..."
git fetch -p
# Branches whose upstream was deleted show ": gone]" in the -vv output.
git branch -vv | awk '/: gone]/{print $1}' | xargs git branch -D
echo "Remaining local branches:"
git branch -vv
#!/bin/sh
DOCKER_COMPOSE_VERSION=1.14.0
# Download docker-compose to the permanent storage
echo 'Downloading docker-compose to the permanent VM storage...'
sudo mkdir -p /var/lib/boot2docker/bin
sudo curl -sL https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m) -o /var/lib/boot2docker/bin/docker-compose
sudo chmod +x /var/lib/boot2docker/bin/docker-compose
sudo ln -sf /var/lib/boot2docker/bin/docker-compose /usr/local/bin/docker-compose
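Only /var/lib/boot2docker survives a VM restart, so the symlink in /usr/local/bin disappears on reboot. boot2docker runs /var/lib/boot2docker/bootlocal.sh at boot; a minimal sketch that re-creates the link from there:

sudo tee /var/lib/boot2docker/bootlocal.sh > /dev/null <<'EOF'
#!/bin/sh
# Restore the docker-compose symlink after each boot.
ln -sf /var/lib/boot2docker/bin/docker-compose /usr/local/bin/docker-compose
EOF
sudo chmod +x /var/lib/boot2docker/bootlocal.sh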
Whichever route you take to implementing containers, you’ll want to steer clear of common pitfalls that can undermine the efficiency of your Docker stack.
The beauty of containers, and an advantage they hold over virtual machines, is that it is easy to make multiple containers interact with one another in order to compose a complete application. There is no need to run a full application inside a single container. Instead, break your application down as much as possible into discrete services, and distribute services across multiple containers. This maximizes flexibility and reliability.
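As an illustration of that split, a compose file might dedicate one container per service; the service names and images below are placeholders, not from the source:

version: "3"
services:
  web:
    image: nginx:alpine        # frontend in its own container
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    image: myorg/api:latest    # hypothetical application image
    environment:
      DB_HOST: db
  db:
    image: postgres:13-alpine  # data layer isolated from the rest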
It is possible to install a complete Linux operating system inside a container. In most cases, however, this is not necessary. If your goal is to host just a single application or part of an application in the container, you need to install only the essential libraries and binaries the application actually requires, ideally on top of a minimal base image.
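The difference a minimal base makes shows up directly in image size (figures are approximate and vary by tag):

docker pull ubuntu:20.04   # roughly 70 MB on disk
docker pull alpine:3.18    # roughly 7 MB on disk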
Snippet from the docker-compose file:
secrets:
  - source: "docker_secrets_expand"
    target: "/docker_secrets_expand.sh"
    mode: "0555"
  - db_password
environment:
  DB_PASSWORD: DOCKER-SECRET->db_password
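The DOCKER-SECRET-> value is just a placeholder; the mounted script is expected to swap it for the real secret at container start. A hedged sketch of what such an expansion script typically does (the actual docker_secrets_expand implementation may differ):

#!/bin/sh
# For every variable whose value starts with DOCKER-SECRET->,
# replace it with the content of the matching file under /run/secrets.
for pair in $(env | grep '=DOCKER-SECRET->'); do
  name=${pair%%=*}
  secret=${pair#*DOCKER-SECRET->}
  export "$name=$(cat /run/secrets/"$secret")"
done
exec "$@"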