Limiting IO bandwidth of processes with cgroup in Linux

Here's a way to limit the IO usage of a process (or process group).

Using ionice to change the priority did not really make a difference. In this case the reason might be that the background job is a highly concurrent rclone job, which can still dominate the IO queues through the sheer amount of concurrent IO it schedules.

What does work is using cgroups (cgroup v2) to limit the total IO bandwidth for a group.

Create a group:

sudo mkdir -p /sys/fs/cgroup/backup

# Enable io controller if needed
echo +io | sudo tee /sys/fs/cgroup/cgroup.subtree_control
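
To check that the io controller is available and now enabled for child groups, the standard cgroup v2 interface files can be inspected directly:

# controllers supported by the root cgroup
cat /sys/fs/cgroup/cgroup.controllers
# controllers enabled for child groups, should now include "io"
cat /sys/fs/cgroup/cgroup.subtree_control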

Set limits:

# turn down the IO weight from the default 100 to 50
echo 50 | sudo tee /sys/fs/cgroup/backup/io.weight
# 9:0 is the major:minor number of the block device, change as needed;
# rbps is the read limit in bytes/second, wbps the same for writes
echo "9:0 rbps=20000000 wbps=10000000" | sudo tee /sys/fs/cgroup/backup/io.max

Place the backup process into the cgroup (echo $$ moves the current shell into the group, so the script started from it and all its children are limited as well):

echo $$ | sudo tee /sys/fs/cgroup/backup/cgroup.procs && /path/to/backup/script.sh
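
To verify that the limit is in effect while the backup runs, check the group's membership and IO statistics (again standard cgroup v2 files under the group directory):

# PIDs currently in the group, should include the shell and the backup script
cat /sys/fs/cgroup/backup/cgroup.procs
# per-device IO statistics for the group; rbytes/wbytes should grow at the limited rate
cat /sys/fs/cgroup/backup/io.stat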