@holmberd
Last active September 25, 2024 05:44
Adjusting child processes for PHP-FPM (Nginx)


When setting these options consider the following:

  • How long is your average request?
  • What is the maximum number of simultaneous visitors the site(s) get?
  • How much memory on average does each child process consume?

Determine if the max_children limit has been reached.

  • sudo grep max_children /var/log/php?.?-fpm.log.1 /var/log/php?.?-fpm.log
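When the limit has been reached, PHP-FPM logs a warning you can grep for. A self-contained sketch of what to look for (the sample file path, timestamp, and figure are made up; real log paths depend on your PHP version and distribution):

```shell
# Write a sample line in the shape of PHP-FPM's warning, then grep it.
cat <<'EOF' > /tmp/php-fpm-sample.log
[25-Sep-2024 05:44:00] WARNING: [pool www] server reached pm.max_children setting (17), consider raising it
EOF
grep max_children /tmp/php-fpm-sample.log
```

If nothing like this appears in your logs, the pool has not been exhausted and raising `pm.max_children` is unlikely to help.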

Determine system RAM and average pool size memory.

  • free -h
  • All fpm processes: ps -ylC php-fpm7.0 --sort=rss
  • Average memory: ps --no-headers -o "rss,cmd" -C php-fpm7.0 | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }'
  • All fpm processes memory: ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | grep php-fpm
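To see what the averaging pipeline computes, here is the same awk expression fed a fixed sample of RSS values (in KB, one per child) instead of live ps output — a sketch with made-up numbers:

```shell
# Three hypothetical php-fpm children at roughly 85MB RSS each (values in KB).
printf '87040 php-fpm: pool www\n89088 php-fpm: pool www\n84992 php-fpm: pool www\n' |
  awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024, "M") }'
# → 85M
```

The result (average RSS per child, in MB) is the "Max child process size" figure used in the calculation below.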

Calculate max_children

Based on RAM

  • pm.max_children = Total RAM dedicated to the web server / Max child process size

Example:

  • System RAM: 2GB

  • RAM reserved for the OS and other services: ~500MB

  • Average child process size: 85MB

  • pm.max_children = 1500MB / 85MB = 17
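The arithmetic above as a shell sketch. The 500MB reserved for the OS and other services is an assumption; adjust all three figures to your own measurements:

```shell
# Hypothetical figures matching the example above.
total_mb=2000        # system RAM (~2GB)
reserved_mb=500      # assumed headroom for the OS, Nginx, database, etc.
avg_child_mb=85      # average PHP-FPM child size, from the ps/awk step
dedicated_mb=$(( total_mb - reserved_mb ))
echo "pm.max_children = $(( dedicated_mb / avg_child_mb ))"
# → pm.max_children = 17
```

Rounding down is deliberate: one child too many can push the host into swap under load.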

Based on average script execution time

  • max_children = (average PHP script execution time) * (PHP requests per second)
  • visitors = max_children * (seconds between page views) / (avg. execution time)
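With hypothetical numbers (0.2s average execution time, a peak of 60 PHP requests per second), the first formula works out like this — awk handles the fractional seconds:

```shell
# Figures are made up for illustration; measure your own averages.
awk 'BEGIN {
  avg_exec_time = 0.2   # seconds per PHP request (hypothetical)
  req_per_sec   = 60    # peak PHP requests per second (hypothetical)
  printf "max_children >= %d\n", avg_exec_time * req_per_sec
}'
# → max_children >= 12
```

Take the larger constraint into account: the RAM-based limit is a hard ceiling, while this estimate tells you how many children you need to keep up with traffic.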

Configure

sudo vim /etc/php/7.0/fpm/pool.d/www.conf

pm = dynamic
pm.max_children = 17
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 1000
; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives:
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;
;             pm.start_servers     - the number of children created on startup.
;                                    this value must not be less than min_spare_servers 
;                                    and not greater than max_spare_servers.
;
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
; Note: This value is mandatory.
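After editing, it is worth validating the pool configuration before reloading. A sketch assuming PHP 7.0 on a systemd-based distribution (the binary and unit names vary with PHP version and distro):

```shell
# Test the configuration for syntax errors, then reload gracefully.
sudo php-fpm7.0 -t
sudo systemctl reload php7.0-fpm
```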

nabtron commented Feb 14, 2022

@thanksmia this might be because your server creates an individual file for each virtual server / domain that you add. Check the other files in the same pool.d folder, named with random numbers, find the one for your server (it has the name of your server account at the top of the file), and edit it as explained here: https://socalledhacker.com/index.php/2022/02/14/error-fpm-initialization-failed-solved/


CRC-Mismatch commented Jun 7, 2023

@abdennour I'm not sure you'd really want to set anything to "auto", since Kubernetes doesn't "replace" the values a pod sees for available CPU cores or RAM size; it only applies its limits, and that's it. If you run free -h or check the number of CPU cores inside a pod, the reported numbers are those of the actual node the pod is running on. Setting everything to "auto" will therefore lead to a scenario where your HPA scales everything, but your ingress doesn't know those pods are already over their limit. This snowballs inside each pod into OOM kills and broken pipes while PHP-FPM restarts or tries to respawn dead workers that have no mCPUs or RAM available to work with, and the HPA goes crazy, scaling to the max, since from its point of view the scaling isn't solving the excessive resource consumption of some of the pods.
