@ekryski
Forked from jjb/gist:7389552
Created January 9, 2016 20:08
Ruby 2.1 memory configuration

This all applies to Ruby 2.1. In some cases a setting is not available in 2.0; this is noted. There are also differences from 1.9, 1.8, and REE --- these are not noted.

All the relevant code is in https://github.com/ruby/ruby/blob/trunk/gc.c

RUBY_HEAP_MIN_SLOTS

default: 10000

The number of heap slots to start out with. This should be set high enough that your app has enough (or almost enough) memory after loading, so that it doesn't have to allocate more memory on the first request (although this probably isn't such a big deal for most apps).
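You can see how many slots the VM currently has via GC.stat. A minimal sketch; the key names below are from modern MRI (Ruby 2.1 itself used singular names like :heap_live_slot), so check GC.stat.keys on your version:

```ruby
# Print every GC.stat entry whose key mentions "slot" -- e.g.
# heap_available_slots, heap_live_slots, heap_free_slots on modern MRI.
stat = GC.stat
slots = stat.select { |k, _| k.to_s.include?('slot') }
slots.each { |k, v| puts "#{k}: #{v}" }
```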

(todo: figure out how big a slot is. i think the answer can be inferred from this code.)
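One way to check empirically: GC::INTERNAL_CONSTANTS (added in 2.1) exposes the slot size. The key name is hedged here because it differs across versions (:RVALUE_SIZE on older MRI, :BASE_SLOT_SIZE after variable-width allocation landed):

```ruby
# Read the size of one heap slot in bytes -- typically 40 on 64-bit builds.
consts = GC::INTERNAL_CONSTANTS
slot_size = consts[:RVALUE_SIZE] || consts[:BASE_SLOT_SIZE]
puts "one heap slot = #{slot_size} bytes"
```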

RUBY_FREE_MIN

default: 4096

The number of free slots that should be present after GC finishes running.

I don't really understand what this means. Since Ruby never gives back memory that it took, and GC runs independently of allocation, why would the post-GC count of free slots have any meaning?

note: this was formerly known in REE as RUBY_HEAP_FREE_MIN, and some guides I've seen erroneously use it for MRI as well
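To observe the value this setting governs, force a full collection and read the free-slot count. The key name is an assumption: modern MRI calls it :heap_free_slots, while Ruby 2.1 used :heap_free_slot:

```ruby
# Run a full GC, then report how many heap slots are free afterwards --
# this is the number RUBY_FREE_MIN is meant to keep above its threshold.
GC.start
stat = GC.stat
free = stat[:heap_free_slots] || stat[:heap_free_slot]
puts "free slots after GC: #{free}"
```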

RUBY_HEAP_SLOTS_GROWTH_FACTOR

default: 1.8

not available in ruby 2.0

Next time Ruby needs new heap slots, it will use this multiplier to determine how much memory to allocate. See RUBY_HEAP_SLOTS_INCREMENT for discussion of what this is multiplying.
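The factor implies geometric growth: each expansion multiplies the slot count rather than adding a fixed increment. A quick sketch of what the default 1.8 does starting from the default 10,000 slots:

```ruby
# Each heap expansion multiplies the slot count by the growth factor,
# so capacity grows geometrically: 10_000 -> 18_000 -> 32_400 -> ...
slots = 10_000
5.times do
  slots = (slots * 1.8).to_i
  puts slots
end
```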

RUBY_HEAP_SLOTS_GROWTH_MAX

default: 0 (disabled)

not available in ruby 2.0

???

RUBY_GC_MALLOC_LIMIT

default: 16 * 1024 * 1024 (16MB)

The number of bytes that can be allocated outside the Ruby heap (via malloc, for C data structures) before triggering the garbage collector.

In the context of a web application, if this is set lower than the amount allocated in a single request, then Ruby will have to pause to do GC once or more than once for every request. [Here is a discussion of this setting and how to profile it](http://meta.discourse.org/t/tuning-ruby-and-rails-for-discourse/4126).
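You can see this trigger in action by allocating heavily and watching GC.count climb. A rough sketch (how many collections fire depends on your Ruby version and the limit in effect):

```ruby
# Allocating many transient strings grows the malloc'd-bytes counter;
# crossing RUBY_GC_MALLOC_LIMIT forces collections mid-loop.
before = GC.count
200_000.times { "x" * 128 } # roughly 25MB of short-lived string payload
after = GC.count
puts "GC runs during allocation: #{after - before}"
```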

The ideal scenario would be to do GC asynchronously from requests. Phusion Passenger has this feature, and so does unicorn. It seems like it could be achieved with middleware, but I'm surprised there isn't a solution out there.
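A hypothetical sketch of the middleware idea: run GC every N requests after building the response, so collection cost lands at the end of a request rather than in the middle of application logic. (Real out-of-band GC, like unicorn's OobGC, runs after the response has actually been sent to the client, which this simple version does not do.)

```ruby
# Hypothetical Rack middleware: force a GC every `every` requests,
# after the downstream app has produced its response.
class GCAfterResponse
  def initialize(app, every: 10)
    @app = app
    @every = every
    @requests = 0
  end

  def call(env)
    status, headers, body = @app.call(env)
    @requests += 1
    GC.start if (@requests % @every).zero?
    [status, headers, body]
  end
end
```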

RUBY_GC_MALLOC_LIMIT_MAX

default: 32 * 1024 * 1024 (32MB)

not available in ruby 2.0

???

RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR

default: 1.4

not available in ruby 2.0

???

RUBY_HEAP_SLOTS_INCREMENT

Not available in MRI. Instead of a fixed, configurable increment, MRI uses the logic in this code, which I have so far not been able to grok, to determine how many new slots to add.

The number of new slots to allocate when all initial slots are used

config used by famous people

Discourse

source

RUBY_GC_MALLOC_LIMIT=90000000

37signals

listed here, source unknown

RUBY_HEAP_MIN_SLOTS=600000
RUBY_GC_MALLOC_LIMIT=59000000
RUBY_FREE_MIN=100000

twitter

listed here, source unknown

RUBY_HEAP_MIN_SLOTS=500000
RUBY_HEAP_SLOTS_INCREMENT=250000 # not available in MRI
RUBY_HEAP_SLOTS_GROWTH_FACTOR=1
RUBY_GC_MALLOC_LIMIT=50000000
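These are all plain environment variables, so applying any of the configs above just means exporting them before booting your app server. A sketch using the 37signals values (the unicorn invocation is illustrative; substitute your own server and config path):

```shell
# GC tuning vars must be in the environment before the Ruby VM starts.
export RUBY_HEAP_MIN_SLOTS=600000
export RUBY_GC_MALLOC_LIMIT=59000000
export RUBY_FREE_MIN=100000
bundle exec unicorn -c config/unicorn.rb
```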

references / further reading

appendix: memory config in ruby source reference

ENV config                          MRI constant                   MRI variable

RUBY_HEAP_MIN_SLOTS                 GC_HEAP_MIN_SLOTS              initial_heap_min_slots
RUBY_FREE_MIN                       GC_HEAP_MIN_FREE_SLOTS         initial_heap_min_free_slots
RUBY_HEAP_SLOTS_GROWTH_FACTOR       GC_HEAP_GROWTH_FACTOR          initial_growth_factor
RUBY_HEAP_SLOTS_GROWTH_MAX          GC_HEAP_GROWTH_MAX             initial_growth_max
RUBY_GC_MALLOC_LIMIT                GC_MALLOC_LIMIT                initial_malloc_limit
RUBY_GC_MALLOC_LIMIT_MAX            GC_MALLOC_LIMIT_MAX            initial_malloc_limit_max
RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR  GC_MALLOC_LIMIT_GROWTH_FACTOR  initial_malloc_limit_growth_factor