To query RabbitMQ's API using curl (e.g., list all queues):
curl -u '<user>:<password>' -i -H "content-type:application/json" -X GET "http://127.0.0.1:15672/api/queues"
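For instance, to extract only the queue names from that response (a small addition of mine, assuming jq is installed; same endpoint as above):
curl -s -u '<user>:<password>' -H "content-type:application/json" "http://127.0.0.1:15672/api/queues" | jq '.[].name'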
In case you want to restore changed cluster settings to their defaults, use the following command:
curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.disk.watermark.flood_stage" : null,
    "cluster.routing.allocation.disk.watermark.low" : null,
    "cluster.routing.allocation.disk.watermark.high" : null
  }
}'
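To verify the reset took effect (an extra check, not part of the original note), you can read back the current cluster settings:
curl -XGET 'http://localhost:9200/_cluster/settings'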
We have 1000 topics, 1 partition each. Our kafka connect has the following config:
tasks.max: 10
We expected each task to handle 100 topics, but no! Instead, all of the single-partition topics are handled by the first task only, meaning the first task handles ALL 1000 topics. Why? RangeAssignor is the default kafka connect partition assignor, and it assigns partitions per topic: with one partition per topic, partition 0 of every topic goes to the first consumer in the group.
To get a more even spread across all the workers, I configured our kafka connect with:
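The exact setting was not preserved in this note, but a common fix (sketched here as an assumption, not necessarily the original config) is to switch the sink consumers to the round-robin assignor:
consumer.partition.assignment.strategy: org.apache.kafka.clients.consumer.RoundRobinAssignor
At the worker level this goes into the Connect worker properties; it can also be set per connector as consumer.override.partition.assignment.strategy, which requires the worker to run with connector.client.config.override.policy=All.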
To show the memory usage by process, use the following:
ps -o pid,user,%mem,command ax | sort -b -k3 -r
You can also use top and press Shift+M to sort processes by memory usage.
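A variation of the same ps approach (my addition, not part of the original tip) sorts numerically by resident memory (RSS, in KiB) and shows only the top consumers:
ps -o pid,user,rss,command ax | sort -b -k3 -rn | head -20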
I wrote a short script that requires a GitHub PAT (personal access token) from your account and checks whether the supplied env variable is configured as a secure env variable in the repository's Travis CI settings:
#!/bin/bash
# list all travisCI repositories and check the configured secured env variables

# declare and check some variables
if [ -z "${GITHUB_TOKEN}" ]; then
  echo "ERROR: GITHUB_TOKEN variable was not provided"
  exit 1
fi
To view the status of your kafka connect distributed cluster connectors, you can use the following command from inside the pod or machine:
curl -s localhost:8083/connectors?expand=status | jq
This will return output similar to:
{
  "inbound-kafka-to-s3": {
    "status": {
To remove all objects in an AWS S3 bucket, use the following command (shown here against localstack):
aws --endpoint-url=http://localstack:4566 s3 rm s3://<bucketName> --recursive
Removing --endpoint-url will run this against real AWS and delete REAL S3 objects - BE CAREFUL.
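Before deleting, it can be worth listing what is about to be removed, for example:
aws --endpoint-url=http://localstack:4566 s3 ls s3://<bucketName> --recursive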
To allow cp-kafka-connect connectors, for example the S3 sink, to access localstack resources in the cluster, use the following (not well documented) configs:
"store.url": "http://localstack:4566",
"s3.region": "localstackDefinedRegion",
"s3.bucket.name": "someBucketName",
"aws.access.key.id": "test",
"aws.secret.access.key": "test"
To add the kube context to your .kube directory you can use the following command:
doctl kubernetes cluster kubeconfig show <clusterName> >> ~/.kube/config_<clusterName>
The reason I did not use save is to separate clusters into dedicated files.
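To use one of these dedicated files, point kubectl at it explicitly, for example:
KUBECONFIG=~/.kube/config_<clusterName> kubectl get nodes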