That's 2.6 TB, which is nonsense because the whole db is less than 4 GB.
I believe the old versions get deleted by default every 5 minutes. The space they occupied gets reclaimed on every defrag.
This seems impossibly high, I agree. It's a worst-case approach, since it assumes all object versions are the same size - that is almost never the case. Some object types, like those managed by an operator, may grow incrementally over time.
However, combined with the object count and number of versions, it can still be useful for spotting potential object abuse in a cluster. If you have a better way, I'd be happy to adopt it.
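To make the worst-case arithmetic concrete: the estimate is just current object size times the version counter, so even a modest object with a runaway version count multiplies out to terabytes. The 22 KiB size below is a made-up figure for illustration; the version count is the one from the etcdctl output further down.

```shell
# Worst-case estimate: current object size times the version counter.
# SIZE_BYTES is hypothetical; VERSIONS is the "version" field reported
# for the ocs-operator key in this thread.
SIZE_BYTES=22528
VERSIONS=123416317
echo $(( SIZE_BYTES * VERSIONS / 1024 / 1024 / 1024 ))  # worst-case total, in GiB
```

Of course almost none of those versions still exist after compaction, which is why the figure is an upper bound and not a measurement.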
The methods above don't actually give you an object count. The only reason it is sometimes 0 is that the key was deleted between when you listed the keys and when the loop got around to asking for it.
To figure out how many revisions are actually being stored, you have to recursively get --rev at each previous mod_revision and count how many historical versions still resolve.
etcdctl get --write-out=json "/kubernetes.io/operators.coreos.com/operators/ocs-operator.openshift-storage" | jq 'del(.kvs[].value)'
{
  "header": {
    "cluster_id": 14841639068965180000,
    "member_id": 10276657743932975000,
    "revision": 1102830774,
    "raft_term": 2
  },
  "kvs": [
    {
      "key": "L2t1YmVybmV0ZXMuaW8vb3BlcmF0b3JzLmNvcmVvcy5jb20vb3BlcmF0b3JzL29jcy1vcGVyYXRvci5vcGVuc2hpZnQtc3RvcmFnZQ==",
      "create_revision": 136141,
      "mod_revision": 1100941289,
      "version": 123416317
    }
  ],
  "count": 1
}
etcdctl get --write-out=json --rev 1100941289 "/kubernetes.io/operators.coreos.com/operators/ocs-operator.openshift-storage" | jq 'del(.kvs[].value)'
{"level":"warn","ts":"2024-01-17T12:42:32.904797-0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00025c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = OutOfRange desc = etcdserver: mvcc: required revision has been compacted"}
Error: etcdserver: mvcc: required revision has been compacted
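That recursive walk can be sketched as a small bash helper (the function name is mine, not part of the gist). It counts how many historical versions of a key still resolve, relying on the get failing with the compaction error once history runs out:

```shell
# count_revisions KEY: fetch the key, then repeatedly ask for the revision
# just below the last seen mod_revision until etcd reports the revision has
# been compacted (etcdctl exits nonzero) or the key no longer exists there.
count_revisions() {
  local key="$1" n=0 rev out
  rev=$(etcdctl get --write-out=json "$key" | jq -r '.kvs[0].mod_revision // empty')
  while [ -n "$rev" ]; do
    n=$((n + 1))
    out=$(etcdctl get --write-out=json --rev $((rev - 1)) "$key" 2>/dev/null) || break
    rev=$(echo "$out" | jq -r '.kvs[0].mod_revision // empty')
  done
  echo "$n"
}
```

Against the key above this stops almost immediately, since everything older than the last compaction is gone.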
I kinda think it's not worth the time/effort to do this because Kubernetes does a compaction every 5 minutes anyway by default.
We can also speed the whole thing up by using the get --from-key feature to iterate through the keys. This iterates through my 150k-key etcd db in about 9s.
Note: you will want to reduce LIMIT to 50 or so if you're running this against an in-use etcd server. I found that restoring a snapshot locally and running reports against that is a much safer and more reliable way to analyze a production server.
Here is a modified version wrapped in bash functions for easier discovery and invocation. Hoping this can help others.
# Produces a file in current directory of format
# - fullkey: full etcd-key
# - k8s api
# - k8s group
# - k8s namespace
# - resource name
# - size in bytes
# - versions
#
# Example:
#
# {
#   "fullkey": "/registry/apiextensions.k8s.io/customresourcedefinitions/grafanas.grafana.integreatly.org",
#   "api": "customresourcedefinitions",
#   "group": "apiextensions.k8s.io",
#   "namespace": null,
#   "resource": "grafanas.grafana.integreatly.org",
#   "size": 336909,
#   "versions": 3
# }
#
# Stored one record per line:
# {"fullkey":"/registry/apiextensions.k8s.io/customresourcedefinitions/grafanas.grafana.integreatly.org","api":"customresourcedefinitions","group":"apiextensions.k8s.io","namespace":null,"resource":"grafanas.grafana.integreatly.org","size":336909,"versions":3}
#
# Credit from https://gist.github.com/dkeightley/8f2211d6e93a0d5bc294242248ca8fbf?permalink_comment_id=4836323#gistcomment-4836323
function extract_k8s_etcd_keys_size_and_versions() {
  LIMIT=500
  TMPFILE=$(mktemp)
  RESULT_FILE=keys_raw.json
  NEXT_KEY=$(etcdctl get --limit 1 --keys-only --prefix /)
  while true; do
    etcdctl get --limit $LIMIT --write-out=json --from-key "$NEXT_KEY" |
      tee >(jq -r '(.kvs[-1].key | @base64d),(.count)' >$TMPFILE) |
      jq -c '.kvs[1:][] | ( (.key | @base64d) as $key | ($key | split("/")) as $keya | { "fullkey": $key, "api": $keya[3], "group": $keya[2], "namespace": $keya[5], "resource": $keya[4], "size": (.value | @base64d | length), "versions": (.version) } )' |
      tee -a $RESULT_FILE |
      jq -r '.fullkey'
    if [ "$(sed -n -e 2p <$TMPFILE)" == "1" ]; then break; fi
    NEXT_KEY="$(sed -n -e 1p <$TMPFILE)"
    echo "Remaining keys: $(sed -n -e 2p <$TMPFILE)"
  done
  rm $TMPFILE
}
function extract_k8s_etcd_keys_size_and_versions_commented() {
  LIMIT=500
  TMPFILE=$(mktemp)
  RESULT_FILE=keys_raw.json
  NEXT_KEY=$(etcdctl get --limit 1 --keys-only --prefix /)
  while true; do
    etcdctl get --limit $LIMIT --write-out=json --from-key "$NEXT_KEY" |
      tee >(jq -r '(.kvs[-1].key | @base64d),(.count)' >$TMPFILE) |
      # Main jq processing pipeline:
      jq -c '.kvs[1:][]                        # Skip the first key-value pair (already processed), iterate over the rest
        | ( (.key | @base64d) as $key          # Decode the base64-encoded key and store it in $key
          | ($key | split("/")) as $keya       # Split the decoded key on "/" and store the array in $keya
          | { "fullkey": $key,                 # The full decoded key path
              "api": $keya[3],                 # API component (4th element, e.g. "customresourcedefinitions")
              "group": $keya[2],               # Group component (3rd element, e.g. "apiextensions.k8s.io")
              "namespace": $keya[5],           # Namespace component (6th element, null for cluster-scoped objects)
              "resource": $keya[4],            # Resource name (5th element, e.g. "grafanas.grafana.integreatly.org")
              "size": (.value | @base64d | length),  # Decode the value and measure its byte length
              "versions": (.version)           # Version counter from the etcd key-value metadata
            } )' |
      tee -a $RESULT_FILE |
      jq -r '.fullkey'
    if [ "$(sed -n -e 2p <$TMPFILE)" == "1" ]; then break; fi
    NEXT_KEY="$(sed -n -e 1p <$TMPFILE)"
    echo "Remaining keys: $(sed -n -e 2p <$TMPFILE)"
  done
  rm $TMPFILE
}
function display_largest_10_groups_by_size() {
jq -s 'group_by(.group) | map({ group: (.[0].group), total: ([.[] | .size] | reduce .[] as $num (0; .+$num)) }) | sort_by(.total) | reverse | .[0:10]' keys_raw.json
}
function display_largest_10_namespaces_by_size() {
jq -s 'group_by(.namespace) | map({ namespace: (.[0].namespace), total: ([.[] | .size] | reduce .[] as $num (0; .+$num)) }) | sort_by(.total) | reverse | .[0:10]' keys_raw.json
}
function display_largest_10_namespaces_by_size_commented() {
  jq -s '
    # Group all objects by their namespace field
    group_by(.namespace)
    # For each group, create a new object with:
    #  - namespace: the namespace name (taken from first element)
    #  - total: sum of all size values in that namespace
    | map({
        namespace: (.[0].namespace),
        total: (
          # Extract all size values from the current group
          [.[] | .size]
          # Use reduce to sum all size values:
          #  - Iterate through each size value (as $num)
          #  - Start with accumulator = 0
          #  - For each iteration, add $num to the accumulator (. + $num)
          #  - Result is the total sum of all sizes in this namespace
          | reduce .[] as $num (0; . + $num)
        )
      })
    # Sort the resulting array by the total field in ascending order
    | sort_by(.total)
    # Reverse to get descending order (largest first)
    | reverse
    # Take only the first 10 elements (top 10 largest namespaces)
    | .[0:10]' keys_raw.json
}
function display_largest_10_namespaces_by_size_time_versions_commented() {
  jq -s '
    # Define a function to format numbers with thousand separators (commas)
    def format_number:
      tostring
      # Split string into array of characters, reverse it
      | explode | reverse
      # Insert comma (ASCII 44) every 3 digits
      | to_entries
      | map( if (.key > 0 and (.key % 3) == 0)
             then [44, .value]   # 44 is the ASCII code for comma
             else [.value]
             end )
      | flatten
      # Reverse back and convert to string
      | reverse | implode;

    # Group all entries by namespace field
    group_by(.namespace)
    # Transform each group into a namespace summary
    | map({
        namespace: (.[0].namespace),
        # Calculate total size: sum of (size * versions) for all resources
        total_bytes: (
          map(.size * .versions)   # Multiply size by version count for each resource
          | add                    # Sum all values (cleaner than reduce)
        )
      })
    # Sort by total size in descending order
    | sort_by(.total_bytes) | reverse
    # Keep only the top 10 namespaces, collected back into an array
    | [limit(10; .[])]
    # Format the totals for readability
    | map({ namespace: .namespace, total_bytes: (.total_bytes | format_number) })' keys_raw.json
}
function display_largest_10_namespaces_by_key_count() {
jq -s 'group_by(.namespace) | map({ namespace: (.[0].namespace), count: (. | length)}) | sort_by(.count) | reverse | .[0:10]' keys_raw.json
}
function display_largest_10_groups_by_key_count() {
jq -s 'group_by(.group) | map({ group: (.[0].group), count: (. | length)}) | sort_by(.count) | reverse | .[0:10]' keys_raw.json
}
function display_highest_10_versions() {
jq -s 'sort_by(.versions) | reverse | .[0:10]' keys_raw.json
}
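One way to sanity-check the report logic without touching a cluster is to run the same grouping expression over a few hand-written records in the shape the extractor emits (the sample values below are made up):

```shell
# Made-up records matching the keys_raw.json shape; only namespace and size
# matter for the by-size report.
cat > /tmp/keys_sample.json <<'EOF'
{"fullkey":"/registry/secrets/ns1/a","api":"secrets","group":"secrets","namespace":"ns1","resource":"a","size":100,"versions":2}
{"fullkey":"/registry/secrets/ns1/b","api":"secrets","group":"secrets","namespace":"ns1","resource":"b","size":50,"versions":1}
{"fullkey":"/registry/pods/ns2/c","api":"pods","group":"pods","namespace":"ns2","resource":"c","size":30,"versions":9}
EOF
# Same expression as display_largest_10_namespaces_by_size, different input file.
jq -s 'group_by(.namespace) | map({ namespace: (.[0].namespace), total: ([.[] | .size] | reduce .[] as $num (0; .+$num)) }) | sort_by(.total) | reverse | .[0:10]' /tmp/keys_sample.json
```

This should rank ns1 first with a total of 150 bytes, ahead of ns2 at 30.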
The object size computations fail if the object count is zero. Here's a chunk that accounts for that:
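A sketch of what such a guard can look like (the function and variable names here are mine, not the exact original chunk): skip any key whose count comes back as 0, since that just means the key was deleted between the listing and the per-key get.

```shell
# Per-key report that tolerates keys vanishing mid-run.
report_key_sizes() {
  local key json count size versions
  for key in $(etcdctl get / --prefix --keys-only); do
    json=$(etcdctl get --write-out=json "$key")
    count=$(echo "$json" | jq -r '.count // 0')
    # count of 0: the key was deleted after we listed it, so there is no
    # kvs entry to measure; skip rather than erroring on an empty array.
    if [ "$count" -eq 0 ]; then continue; fi
    size=$(echo "$json" | jq -r '.kvs[0].value | @base64d | length')
    versions=$(echo "$json" | jq -r '.kvs[0].version')
    echo "$key $size $versions"
  done
}
```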
This calls etcdctl multiple times for each key, which seems kinda inefficient, so I'll look at refactoring it to be better.