@2E0PGS
Last active May 29, 2024 13:55
Fixing khugepaged CPU usage with VMware Workstation

If you run VMware Workstation 11 or above, you may encounter high CPU usage from the khugepaged process on Ubuntu 15.04 and later.

The fix is to disable transparent hugepages (THP). Ubuntu seems to have them enabled by default.

You can check the current status on your system by running:

cat /sys/kernel/mm/transparent_hugepage/enabled

cat /sys/kernel/mm/transparent_hugepage/defrag

Fedora outputs: always [madvise] never but Ubuntu outputs: [always] madvise never
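The active mode is the word shown in brackets. For convenience, a small helper like this (hypothetical, not part of the original instructions) prints just the bracketed value:

```shell
# Print only the active THP mode, i.e. the bracketed word in the
# kernel's status line.
thp_active() {
  sed -n 's/.*\[\(.*\)\].*/\1/p'
}

# Example with Ubuntu's reported line:
echo '[always] madvise never' | thp_active   # → always
```

On a real system you would feed it the live file instead, e.g. `thp_active < /sys/kernel/mm/transparent_hugepage/enabled`.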

Fedora seems not to be affected, but I haven't tested it myself.

So I suggest not switching to madvise, but just disabling it entirely.

To disable it, run the following commands as root:

echo never > /sys/kernel/mm/transparent_hugepage/enabled

echo never > /sys/kernel/mm/transparent_hugepage/defrag
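One gotcha if you use sudo rather than a root shell: in `sudo echo never > …` the redirection is performed by your unprivileged shell, so the write fails with "permission denied". The usual workaround is to pipe through tee, which runs as root and performs the write:

```shell
# tee performs the write as root; the echo itself needs no privileges.
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
```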

That will only disable it for the current session.

To make it persistent across reboots, I suggest adding this to your rc.local:

# Fix for VMware Workstation 11+ khugepaged.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

Ensure this goes above the line:

exit 0
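Putting it together, /etc/rc.local might look like this (a sketch; on newer systemd-based releases rc.local may not exist or be enabled by default, and booting with the kernel command-line parameter transparent_hugepage=never is an alternative):

```shell
#!/bin/sh -e
#
# rc.local - executed at the end of the boot process.

# Fix for VMware Workstation 11+ khugepaged.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

exit 0
```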

More info and references:


exeq89 commented Jun 12, 2022

I have lowered the memory for 3D graphics and it's better.
My config is:
Ryzen 5 5600G / 32 GB RAM
Guest is Win10 with 6 CPUs and 16 GB RAM.
Lowered the graphics memory to 256 MB.


clapbr commented Aug 1, 2022

I have lowered the memory for 3D graphics and it's better. My config is: Ryzen 5 5600G / 32 GB RAM. Guest is Win10 with 6 CPUs and 16 GB RAM. Lowered the graphics memory to 256 MB.

Same here, reduced to 1 GB and it's fine now.


thebahadir commented Nov 18, 2022

I've been dealing with this issue for months on Debian 11. None of the mentioned methods were the solution for me, but today I solved my problem. What was described here helped to solve it.


msizanoen1 commented Feb 2, 2023

These three lines as root should fully disable kernel memory defragmentation:

echo never > /sys/kernel/mm/transparent_hugepage/defrag
sysctl -w vm.compaction_proactiveness=0
sysctl -w vm.extfrag_threshold=1000

Note that this will greatly increase memory fragmentation and therefore memory pressure, as compaction is fully disabled, so it should be reverted when VMware is not in use.

How to revert:

sysctl -w vm.compaction_proactiveness=20
sysctl -w vm.extfrag_threshold=500
echo always > /sys/kernel/mm/transparent_hugepage/defrag
sysctl -w vm.compact_memory=1
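One way to avoid forgetting the revert step is a small wrapper (a sketch to be run as root; the `vmware` command name and the default values being 20/500/always are assumptions based on the commands above) that applies the settings, launches Workstation, and restores the defaults when it exits:

```shell
#!/bin/sh
# Hypothetical wrapper: disable kernel memory defragmentation while
# VMware runs, then restore the defaults afterwards. Run as root.

restore() {
  sysctl -w vm.compaction_proactiveness=20
  sysctl -w vm.extfrag_threshold=500
  echo always > /sys/kernel/mm/transparent_hugepage/defrag
  sysctl -w vm.compact_memory=1   # trigger one full compaction pass now
}
trap restore EXIT                 # runs on normal exit and on interrupt

echo never > /sys/kernel/mm/transparent_hugepage/defrag
sysctl -w vm.compaction_proactiveness=0
sysctl -w vm.extfrag_threshold=1000

vmware "$@"                       # blocks until Workstation exits
```

The `trap … EXIT` ensures the restore commands run even if the script is interrupted.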


shuhaowu commented Jan 11, 2024

I tried the above options by @msizanoen1. Indeed the VM is now usable. However, it is still slow and I see that kswapd0 now occasionally will peg 1 CPU at 100%, despite the fact that I have no swap enabled. This process is only running when I run a VMware VM, which suggests this is somehow linked...

@msizanoen1

I tried the above options by @msizanoen1. Indeed the VM is now usable. However, it is still slow and I see that kswapd0 now occasionally will peg 1 CPU at 100%, despite the fact that I have no swap enabled. This process is only running when I run a VMware VM, which suggests this is somehow linked...

It's likely that the absence of swap was the cause of kswapd consuming 100% CPU. Generally it's not recommended to run a Linux system without some kind of swap, and this might be especially true when running with memory defragmentation disabled.
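If the machine currently has no swap at all, a small swapfile is quick to add (a sketch, run as root; the 4 GB size is an arbitrary assumption):

```shell
fallocate -l 4G /swapfile   # reserve space (use dd on filesystems without fallocate support)
chmod 600 /swapfile         # swap files must not be readable by other users
mkswap /swapfile            # write the swap signature
swapon /swapfile            # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```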

@eudocimus

Try setting /proc/sys/vm/compaction_proactiveness to 1. The thing is, you need to compact eventually. What you want to avoid is a war between VMware and the kernel. This will obviously happen if you compact too eagerly, which is the default. But if you compact too lazily, for example by not being proactive at all, you will run into a situation where you must do it reactively, with the same bad result. It seems everyone has kept missing the proactiveness part. Setting it to 1 has now worked for me for some time, at least weeks, if not months.


msizanoen1 commented May 29, 2024

Try setting /proc/sys/vm/compaction_proactiveness to 1. The thing is, you need to compact eventually. What you want to avoid is a war between VMware and the kernel. This will obviously happen if you compact too eagerly, which is the default. But if you compact too lazily, for example by not being proactive at all, you will run into a situation where you must do it reactively, with the same bad result. It seems everyone has kept missing the proactiveness part. Setting it to 1 has now worked for me for some time, at least weeks, if not months.

AFAIK (and through my own testing and reading of the kernel source) setting vm.extfrag_threshold=1000 and disabling transparent hugepage defrag will prevent the kernel from ever compacting memory, reactively or not, and will cause it to fall back to swapping pages out of memory and/or invoking the OOM killer instead.

vm.compaction_proactiveness and /sys/kernel/mm/transparent_hugepage/defrag control different aspects of proactive memory compaction, while vm.extfrag_threshold controls reactive memory compaction (e.g. when the kernel needs to allocate a large chunk of contiguous memory). Setting vm.extfrag_threshold to 1000 disables reactive memory compaction.
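Whichever sysctl values you settle on, they can be made persistent across reboots with a drop-in file (a sketch; the filename and the value 1 from the suggestion above are assumptions):

```
# /etc/sysctl.d/99-vmware-compaction.conf
# Loaded at boot; apply immediately with: sysctl --system
vm.compaction_proactiveness = 1
```

Note this only covers the sysctl knobs; the /sys/kernel/mm/transparent_hugepage settings still need rc.local, a boot script, or the transparent_hugepage= kernel parameter.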
