As I was tweaking the Kong config I use on my VPS, I took the opportunity to capture some screenshots of the Kong container's memory usage pattern. What follows is a brief summary of my findings.
The deployment is pretty small:
$ http :8001/services | jq '.data | length'
11
$ http :8001/routes | jq '.data | length'
11
$ http :8001/plugins | jq '.data | length'
3
# all basic auth
The baseline behavior is as seen in the 12:06-12:26 time window.
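For a point-in-time baseline beyond the dashboard screenshots, Kong's Admin API also reports memory figures on recent versions; a quick sketch, assuming the Admin API is on :8001 as above:

$ http :8001/status | jq '.memory'
# lua_shared_dicts: shm zones (e.g. the core cache sized by mem_cache_size)
# workers_lua_vms: per-worker LuaJIT VM allocations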
At around 12:32, I restarted Kong, changing the mem_cache_size from 128mb (the default) to 32mb. Soon thereafter, consumption grew slowly, as seen below:
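For reference, this kind of change can be made in kong.conf or, in a container setup, through the KONG_-prefixed environment variable Kong reads at startup; a minimal sketch (the exact restart mechanics depend on how the container is managed):

# kong.conf:
#   mem_cache_size = 32m
# or, equivalently, via the environment:
$ export KONG_MEM_CACHE_SIZE=32m
$ kong restart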
Zooming in a bit:
Zooming out, to include the past day:
The memory usage grows and plateaus at ~670mb.
An hour later (after lunch :), I pushed a config change, removing a few unused plugins (see the declarative-config sketch after this list):
- acme
- rate-limiting
- ip-restriction
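For illustration, if the deployment is DB-less/declarative, removing these amounts to deleting their entries from the declarative config and restarting; a hypothetical before-state of kong.yml (the actual scoping of the plugins in the real config may differ):

plugins:
  - name: acme
  - name: rate-limiting
  - name: ip-restriction
# Deleting the three entries above and restarting Kong applies the change.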
A few captures of the moments following the restart (around 13:52):
Zooming in:
Now, zooming out to see more clearly the initial effect of both config changes:
As before, usage kept growing, plateauing in the same vicinity (~ 670mb):
The goal here is only to share what I found interesting while poking around; no significant conclusions were drawn. That said, it would IMO be valuable to study this behavior further and understand the reason for the numbers here, in particular the memory usage plateau. (As Zhgonwei mentioned in his memory usage demo and in today's standup, the memory used by the shms is reported per worker even though it's shared, so the numbers don't match actual memory usage.)
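As a rough way to sanity-check that double-counting, one could compare the dashboard numbers against the actual resident memory of the nginx processes; a sketch (Linux procps, run inside the Kong container):

$ ps -e -o rss=,comm= | awk '$2 == "nginx" { total += $1 } END { print total " kB RSS across nginx processes" }'
# Note: shm pages touched by a worker count toward that worker's RSS,
# so even this sum over-counts the shared memory.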