
@sanchezzzhak
Created January 18, 2018 13:43
clickhouse get tables size
```sql
SELECT
    table,
    formatReadableSize(sum(bytes)) AS size,
    min(min_date) AS min_date,
    max(max_date) AS max_date
FROM system.parts
WHERE active
GROUP BY table;
```
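As a variation, the same idea can include the database and the compression ratio; a sketch using the `data_compressed_bytes` / `data_uncompressed_bytes` columns of `system.parts` (available on modern ClickHouse versions):

```sql
-- Per-database table sizes with compression ratio, largest first
SELECT
    database,
    table,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed,
    round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(data_compressed_bytes) DESC;
```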
@sanchezzzhak (Author)

If you have run out of disk space, pay attention to the system log tables. Use the query above to identify which table holds an abnormal amount of data.

Find the partition id:

```sql
SELECT * FROM system.parts WHERE table = 'trace_log';
```
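To see which partitions are the largest before dropping anything, the same `system.parts` table can be aggregated by the `partition` column (a sketch):

```sql
-- Size and part count per partition of system.trace_log
SELECT
    partition,
    formatReadableSize(sum(bytes)) AS size,
    count() AS parts
FROM system.parts
WHERE table = 'trace_log' AND active
GROUP BY partition
ORDER BY partition;
```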

Manually drop the partition:

```sql
-- 202504 is a value from the `partition` column above
ALTER TABLE system.trace_log DROP PARTITION 202504;
```

After manual cleaning, you can follow this guide to disable trace_log and the other log tables:
https://kb.altinity.com/altinity-kb-setup-and-maintenance/altinity-kb-system-tables-eat-my-disk/
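Following that guide, disabling the log tables typically means removing their sections via a config override; a sketch of such a file (the path and the exact set of elements are assumptions, check the linked KB article for your version):

```xml
<!-- /etc/clickhouse-server/config.d/disable_logs.xml (assumed path) -->
<clickhouse>
    <!-- remove="1" drops the inherited config section, so the
         corresponding system table is no longer populated -->
    <trace_log remove="1"/>
    <query_thread_log remove="1"/>
    <metric_log remove="1"/>
    <asynchronous_metric_log remove="1"/>
</clickhouse>
```

Restart the server after adding the file for the change to take effect.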

@kislayaaakash
Hi everyone, I'm new to ClickHouse, but my use case is similar. I want to build a time-series dashboard in Grafana that shows how each table's size varies over time. The query above only gives table sizes at the current moment; I want table size over a period of time. Does anyone have an idea how to get that done?
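One common approach (a sketch, not a built-in feature; the table name is hypothetical): snapshot `system.parts` into an ordinary MergeTree table on a schedule (cron via `clickhouse-client`, or Grafana itself), then chart the history.

```sql
-- Hypothetical history table for periodic size snapshots
CREATE TABLE IF NOT EXISTS default.table_sizes_history
(
    ts       DateTime,
    database LowCardinality(String),
    table    LowCardinality(String),
    bytes    UInt64
)
ENGINE = MergeTree
ORDER BY (database, table, ts);

-- Run periodically, e.g. every 5 minutes from cron
INSERT INTO default.table_sizes_history
SELECT now(), database, table, sum(bytes)
FROM system.parts
WHERE active
GROUP BY database, table;
```

Grafana can then plot `bytes` over `ts`, grouped by table.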
