To list all Kafka topics in our cluster, use the following command:
./kafka-topics.sh --zookeeper <ZooKeeperHost>:2181 --list
#kafka
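On newer Kafka versions (2.2 and later), where the ZooKeeper flag is deprecated, the same listing can be done against a broker directly. A sketch, with <BrokerHost> as a placeholder:

```shell
# List topics by talking to a broker instead of ZooKeeper (Kafka 2.2+)
./kafka-topics.sh --bootstrap-server <BrokerHost>:9092 --list
```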
I use this config to automatically remove trailing whitespace in my YAML files:
autocmd BufWritePre *.yml,*.yaml :%s/\s\+$//ge
#ansible #vim #terraform
If you want to configure the disk watermarks, or you are getting errors like this:
low disk watermark [85%] exceeded on [XXXXXXX][prod-elasticsearch-13] free: 209.9gb[14.2%], replicas will not be assigned to this node
Consider changing the limits by using this command:
curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.disk.watermark.flood_stage" : "99%",
    "cluster.routing.allocation.disk.watermark.low" : "90%"
  }
}'
To loop over a list of dictionaries in Ansible, you can use the following:
- name: "{{ nginx_name }} | configure nginx by templating the config files"
  template:
    src: "{{ item.source }}"
    dest: "{{ item.destination }}"
    mode: "0644"
  with_items:
    - { source: "nginx.conf.j2", destination: "{{ nginx_conf_dir }}/nginx.conf" }
    - { source: "fastcgi_params.j2", destination: "{{ nginx_conf_dir }}/fastcgi_params" }
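On Ansible 2.5+ the same task can use loop instead of with_items, and to iterate over an actual dictionary (rather than a list of dicts), the dict2items filter converts it into key/value items. A sketch, where my_dict is a placeholder variable:

```yaml
- name: iterate over a dict with dict2items (Ansible 2.6+)
  debug:
    msg: "{{ item.key }} -> {{ item.value }}"
  loop: "{{ my_dict | dict2items }}"
```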
We use a wrapper script to let developers spin up K8s namespaces with all the deployments. Sometimes they get carried away and open a lot of them.
I wrote this function to limit the number of namespaces they can open. The script assumes each namespace has a label with the owner tag.
function limit_namespaces_per_user() {
# Define the user and how many namespaces to allow per user
local ALLOWED_NS="${1}"
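The snippet above is truncated; here is a possible completion, assuming the namespaces carry an owner=<user> label and that the user is passed as a second argument (both the label key and the argument order are assumptions):

```shell
function limit_namespaces_per_user() {
    # Define the user and how many namespaces to allow per user
    local ALLOWED_NS="${1}"
    local NS_OWNER="${2}"
    # Count the namespaces labelled with this owner (label key "owner" is an assumption)
    local CURRENT_NS
    CURRENT_NS=$(kubectl get namespaces -l "owner=${NS_OWNER}" --no-headers 2>/dev/null | wc -l)
    if [ "${CURRENT_NS}" -ge "${ALLOWED_NS}" ]; then
        echo "User ${NS_OWNER} already owns ${CURRENT_NS} namespaces (limit: ${ALLOWED_NS})" >&2
        return 1
    fi
    return 0
}
```

The wrapper can call this before creating a namespace and abort when it returns non-zero.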
To launch a temporary Kibana pod, you can use the following (k is an alias for kubectl):
k run -i --tty --rm --image=kibana:6.8.15 --env="ELASTICSEARCH_URL=http://<ServiceNameOfElasticSearchPod>:9200" --port=5601 kibana
k port-forward pod/kibana 5601:5601
Then open localhost:5601 in Safari, Chrome, or Firefox.
If you want to debug your kafka-connect deployment, pass in this environment variable or edit the K8s deployment:
- name: CONNECT_LOG4J_LOGGERS
value: org.apache.kafka.connect=DEBUG
You can also use multiple loggers:
- name: CONNECT_LOG4J_LOGGERS
value: "log4j.logger.io.confluent.inbound-kafka-to-s3=DEBUG,org.apache.kafka.connect=DEBUG"
If you get this error while trying to launch Telepresence on your K8s env:
Failed to pull image "datawire/telepresence-k8s:0.109":
rpc error: code = Unknown desc = Error response from daemon:
toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
A workaround is to pull the image and push it to your own registry, for example AWS ECR:
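The mirroring commands were not included above; a sketch of the usual flow, with <AccountID> and <Region> as placeholders:

```shell
# Pull the image from Docker Hub (authenticate first if you keep hitting the limit)
docker pull datawire/telepresence-k8s:0.109

# Create the target repository in ECR (one-time)
aws ecr create-repository --repository-name telepresence-k8s

# Log in to ECR, then retag and push
aws ecr get-login-password --region <Region> | \
  docker login --username AWS --password-stdin <AccountID>.dkr.ecr.<Region>.amazonaws.com
docker tag datawire/telepresence-k8s:0.109 <AccountID>.dkr.ecr.<Region>.amazonaws.com/telepresence-k8s:0.109
docker push <AccountID>.dkr.ecr.<Region>.amazonaws.com/telepresence-k8s:0.109
```

Classic Telepresence can then be pointed at your registry via the TELEPRESENCE_REGISTRY environment variable.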
If you're using Docker Hub as a docker registry and using an anonymous account at some point you will hit the rate limit. To know where you stand use this:
IMAGE="ratelimitpreview/test"
TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:$IMAGE:pull" | jq -r .token)
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/$IMAGE/manifests/latest
This will output response headers including ratelimit-limit and ratelimit-remaining, showing your total limit and how many pulls you have left.
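To pull just the numbers out of those headers, a small helper can be piped onto the curl output (the "ratelimit-limit: 100;w=21600" header format is an assumption based on what Docker Hub currently returns):

```shell
# Extract the value of a ratelimit header, e.g. "ratelimit-limit: 100;w=21600" -> "100"
parse_ratelimit() {
  grep -i "^${1}:" | cut -d' ' -f2 | cut -d';' -f1 | tr -d '\r'
}

# Usage, piping the curl output from above:
#   curl --head -s -H "Authorization: Bearer $TOKEN" \
#     "https://registry-1.docker.io/v2/$IMAGE/manifests/latest" | parse_ratelimit ratelimit-remaining
```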
If you want to resolve a specific domain with a specific resolver/nameserver, add this:
sudo mkdir -p /etc/resolver
echo "nameserver <IPAddress>" | sudo tee /etc/resolver/internal.domain.io
Note that sudo echo "..." >> /etc/resolver/... would fail, because the redirection runs as your own user; tee runs under sudo.
This was tested on macOS Big Sur 11.2.3.