Dennis Zheleznyak (denzhel)
denzhel / curl_query_rabbitmq_api.md
Created July 28, 2022 10:42
query rabbitmq's API with curl

To query RabbitMQ's API using curl (e.g. to list all queues):

curl -u '<user>:<password>' -i -H "Content-Type: application/json" -X GET "http://127.0.0.1:15672/api/queues"
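If jq isn't handy, the response can be filtered in a few lines of Python. A sketch against a made-up sample response: the field names `name` and `messages` are real fields of the management API's queue objects, but the values and the `queue_summary` helper are mine.

```python
import json

# Shaped like the RabbitMQ management API's /api/queues response;
# the queue names and counts are invented for the example.
sample = json.loads("""
[
  {"name": "orders", "messages": 12, "vhost": "/"},
  {"name": "emails", "messages": 0, "vhost": "/"}
]
""")

def queue_summary(queues):
    """Return (name, message count) pairs sorted by depth, deepest first."""
    return sorted(((q["name"], q["messages"]) for q in queues),
                  key=lambda pair: pair[1], reverse=True)

print(queue_summary(sample))  # [('orders', 12), ('emails', 0)]
```

In practice you would pipe the curl output into this instead of the inline sample.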
  1. I identified a problem in one of our elasticsearch clusters: it reached the low disk watermark at 87% usage.
  2. Shards were not being allocated to the nodes that passed the watermark (a couple of them).
  3. I temporarily raised the watermarks:
  • low: 95%
  • high: 97%
  • flood: 99%
  4. In the meantime we discussed our plan and strategy, and decided to scale the cluster out by 3 nodes.
  5. I created the 3 nodes using Terraform.
  6. I deployed elasticsearch to them one by one using Ansible.
  7. I decreased the watermarks back to default to encourage the cluster to rebalance the shards.
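The watermark checks driving the incident above can be sketched with a tiny helper. This is a hypothetical function, not elasticsearch code; the default thresholds here are the temporary values from the notes, and the semantics in the docstring describe what elasticsearch does at each level.

```python
def crossed_watermarks(used_pct, low=95.0, high=97.0, flood=99.0):
    """Report which disk watermarks a node's usage has crossed.

    Elasticsearch stops allocating new shards to a node past `low`,
    relocates shards away past `high`, and blocks writes past `flood`.
    The defaults here are the temporary values from the incident notes.
    """
    return {
        "low": used_pct >= low,
        "high": used_pct >= high,
        "flood": used_pct >= flood,
    }

print(crossed_watermarks(96.0))  # {'low': True, 'high': False, 'flood': False}
```

With the cluster's real defaults (85/90/95) the 87% node from step 1 crosses only the low mark, which is exactly the "no new shards allocated" symptom seen.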
denzhel / elasticsearch_restore_settings_to_default.md
Created July 18, 2022 19:03
elasticsearch restore settings to default

In case you want to restore changed cluster settings to their defaults, use the following command:

curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.disk.watermark.flood_stage" : null,
    "cluster.routing.allocation.disk.watermark.low" : null,
    "cluster.routing.allocation.disk.watermark.high" : null
  }
}'
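Setting a transient key to null removes the override so the built-in default applies again. If you reset different settings often, the request body can be built programmatically; `reset_body` is a hypothetical helper, not an official client.

```python
import json

def reset_body(*settings):
    """Build a transient-settings body that nulls out each given key,
    telling elasticsearch to drop the override and use its default."""
    return json.dumps({"transient": {key: None for key in settings}}, indent=2)

body = reset_body(
    "cluster.routing.allocation.disk.watermark.low",
    "cluster.routing.allocation.disk.watermark.high",
    "cluster.routing.allocation.disk.watermark.flood_stage",
)
print(body)  # same JSON as the curl payload above, with nulls
```

The output can be passed straight to curl's -d flag.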
denzhel / kafka_connect_partition_assignment.md
Created July 14, 2022 12:36
kafka connect assign partitions to tasks better

We have 1000 topics with 1 partition each. Our kafka connect has the following config:

tasks.max: 10

We expected each task to handle 100 topics, but no! Each topic's single partition is handled by the first task only, meaning the first task handles ALL 1000 topics. Why? RangeAssignor is kafka connect's default partition assignor, and it splits each topic's partitions across consumers independently per topic.

To spread partitions more evenly across all the workers, I configured our kafka connect with:
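(The gist is cut off in this listing, so the actual setting isn't shown.) The skew itself is easy to reproduce: because RangeAssignor divides each topic's partitions among consumers independently, with one partition per topic the first consumer always wins. A simulation of the two strategies, with hypothetical helper names rather than kafka code:

```python
def range_assign(topics, partitions_per_topic, consumers):
    """Mimic RangeAssignor: split each topic's partitions across consumers
    independently, so consumer 0 gets partition 0 of every topic."""
    assignment = {c: [] for c in consumers}
    for topic in topics:
        per = partitions_per_topic // len(consumers)
        extra = partitions_per_topic % len(consumers)
        next_partition = 0
        for i, consumer in enumerate(consumers):
            count = per + (1 if i < extra else 0)
            assignment[consumer].extend(
                (topic, next_partition + k) for k in range(count))
            next_partition += count
    return assignment

def round_robin_assign(topics, partitions_per_topic, consumers):
    """Mimic RoundRobinAssignor: deal all (topic, partition) pairs out evenly."""
    pairs = [(t, p) for t in topics for p in range(partitions_per_topic)]
    assignment = {c: [] for c in consumers}
    for i, pair in enumerate(pairs):
        assignment[consumers[i % len(consumers)]].append(pair)
    return assignment

topics = [f"topic-{i}" for i in range(1000)]
tasks = [f"task-{i}" for i in range(10)]

ranged = range_assign(topics, 1, tasks)
rr = round_robin_assign(topics, 1, tasks)
print(len(ranged["task-0"]), len(ranged["task-1"]))  # 1000 0
print(len(rr["task-0"]))                             # 100
```

The range variant piles all 1000 single-partition topics onto the first task, while the round-robin variant gives each of the 10 tasks 100 of them.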

denzhel / linux_mem_usage_by_process.md
Created June 26, 2022 17:05
linux - show memory usage by process

To show the memory usage by process, use the following:

ps -o pid,user,%mem,command ax | sort -b -k3 -r

You can also use top and press Shift+M to sort by memory usage.
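The same ranking can be produced without ps by reading /proc directly. A Linux-only sketch; `rss_by_process` is a hypothetical helper that reports resident set size in kB from each process's status file.

```python
import os

def rss_by_process(proc="/proc"):
    """Return (rss_kb, pid, name) tuples, largest first, by scanning
    /proc/<pid>/status for VmRSS. Linux-only; kernel threads, which
    have no VmRSS line, are skipped."""
    rows = []
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue  # not a process directory
        try:
            with open(f"{proc}/{entry}/status") as f:
                fields = dict(line.split(":", 1) for line in f if ":" in line)
        except (FileNotFoundError, PermissionError):
            continue  # process exited or is restricted; skip it
        rss = fields.get("VmRSS")
        if rss is None:
            continue
        rows.append((int(rss.split()[0]), int(entry), fields["Name"].strip()))
    return sorted(rows, reverse=True)

# Top 10 memory consumers, roughly what the ps pipeline above prints
for rss_kb, pid, name in rss_by_process()[:10]:
    print(f"{rss_kb:>10} kB  {pid:>7}  {name}")
```

Note that VmRSS counts shared pages fully in every process that maps them, the same caveat that applies to %mem from ps.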

denzhel / travis_ci_list_variables.md
Created June 22, 2022 19:27
check travisCI secured env variables

I wrote a short script that requires a GitHub PAT (personal access token) from your account and checks whether the supplied env variable is configured as a secure env variable in the repository's travisCI settings:

#!/bin/bash
# list all travisCI repositories and check the configured secured env variables

# declare and check some variables
if [ -z "${GITHUB_TOKEN}" ]; then
	echo "ERROR: GITHUB_TOKEN variable was not provided"
	exit 1
fi
denzhel / kc_distributed_status.md
Created June 22, 2022 19:18
kafka connect distributed cluster status

To view the status of your kafka connect distributed cluster's connectors, run the following from inside the pod or machine:

curl -s 'localhost:8083/connectors?expand=status' | jq

This will return something like:

{
  "inbound-kafka-to-s3": {
    "status": {
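With a full response in hand, the per-connector state can be summarized without eyeballing the JSON. A sketch over a made-up but correctly shaped `?expand=status` sample; the connector name, worker addresses, and the `failed_tasks` helper are mine.

```python
import json

# Shaped like `GET /connectors?expand=status`; the values are invented.
sample = json.loads("""
{
  "inbound-kafka-to-s3": {
    "status": {
      "name": "inbound-kafka-to-s3",
      "connector": {"state": "RUNNING", "worker_id": "10.0.0.1:8083"},
      "tasks": [
        {"id": 0, "state": "RUNNING", "worker_id": "10.0.0.1:8083"},
        {"id": 1, "state": "FAILED",  "worker_id": "10.0.0.2:8083"}
      ]
    }
  }
}
""")

def failed_tasks(status_response):
    """Return (connector, task id) pairs for every task not in RUNNING state."""
    return [
        (name, task["id"])
        for name, body in status_response.items()
        for task in body["status"]["tasks"]
        if task["state"] != "RUNNING"
    ]

print(failed_tasks(sample))  # [('inbound-kafka-to-s3', 1)]
```

Piping the real curl output into this gives a quick list of tasks that need a restart.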
denzhel / s3_bucket_remove_objects.md
Created June 12, 2022 07:48
remove all objects in aws s3 bucket

To remove all objects in an AWS S3 bucket, use the following command (shown here for localstack):

aws --endpoint-url=http://localstack:4566 s3 rm s3://<bucketName> --recursive

Removing --endpoint-url will delete REAL S3 objects - BE CAREFUL.

denzhel / cp_kafka_connect_localstack.md
Created June 12, 2022 07:32
use cp-kafka-connect with localstack

To allow cp-kafka-connect connectors (for example the S3 sink) to access localstack resources in the cluster, use the following poorly documented configs:

"store.url": "http://localstack:4566",
"s3.region": "localstackDefinedRegion",
"s3.bucket.name": "someBucketName",
"aws.access.key.id": "test",
"aws.secret.access.key": "test"
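For context, those keys sit alongside the usual Confluent S3 sink settings. A minimal connector body might look like this; it's a sketch, not a tested config — `connector.class`, `storage.class`, `format.class`, and `flush.size` are standard S3 sink connector settings, but verify them against your connector version, and the topic and connector names are placeholders:

```json
{
  "name": "s3-sink-localstack",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "someTopic",
    "store.url": "http://localstack:4566",
    "s3.region": "us-east-1",
    "s3.bucket.name": "someBucketName",
    "aws.access.key.id": "test",
    "aws.secret.access.key": "test",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1000"
  }
}
```

POSTing this to the connect REST API (`/connectors`) creates the connector against the localstack endpoint instead of real S3.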
denzhel / dok_kubeconfig.md
Created May 10, 2022 18:55
digitalocean k8s save your kubeconfig

To add the kube context to your .kube directory, use the following command:

doctl kubernetes cluster kubeconfig show <clusterName> >> ~/.kube/config_<clusterName>

The reason I did not use save is to separate clusters into dedicated files.
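To actually use one of those dedicated files, point kubectl at it via KUBECONFIG. A sketch; `config_my-cluster` is a hypothetical file name standing in for whatever cluster name you saved:

```shell
# Point kubectl at a single cluster's dedicated file
export KUBECONFIG="$HOME/.kube/config_my-cluster"

# Or merge several files for the current shell session; kubectl reads the
# colon-separated list, and `kubectl config get-contexts` then shows all of them
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/config_my-cluster"
echo "$KUBECONFIG"
```

Keeping one file per cluster makes it easy to drop a cluster later by deleting its file instead of editing a merged config.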