These benchmarks run through a set of common CRUD operations. Each object on both ORMs uses a single model containing 3 attributes: a `String`, a `Fixnum`, and a `Time`. Each benchmark operates on 5000 objects.
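The benchmark's model isn't shown here, but a PORO with that shape might look like this (the class and attribute names are my own invention):

```ruby
# A plain Ruby object with the benchmark's three attribute types:
# a String, a Fixnum (a plain Integer), and a Time.
class Article
  attr_accessor :title, :view_count, :published_at

  def initialize(title, view_count, published_at)
    @title        = title
    @view_count   = view_count
    @published_at = published_at
  end
end

article = Article.new("Benchmarking ORMs", 42, Time.now)
```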
```
           user     system      total        real
AR     0.230000   0.010000   0.240000 (  0.240430)
PORO   0.000000   0.000000   0.000000 (  0.001539)
```
Instantiating ActiveRecord objects appears to be extremely expensive. It took 240ms of CPU time to instantiate 5000 AR objects.
```
                           user     system      total        real
AR                     1.740000   0.340000   2.080000 (  4.078796)
Perpetuity             0.450000   0.120000   0.570000 (  2.603278)
Perpetuity all-in-one  0.240000   0.050000   0.290000 (  0.453151)
```
Looking at the CPU time, Perpetuity is almost 4x as fast at insertion. We look at CPU time because the `real` column includes time spent on I/O. If you subtract `real - total`, you get nearly identical values for Perpetuity and ActiveRecord, because Postgres spends the same amount of time saving the serialized forms regardless of the ORM.
Also, since CPU time is about half of wall-clock time in ActiveRecord's case, AR can only insert about 2 objects at a time in separate threads before running up against MRI's GIL (MRI releases the lock during I/O). In Perpetuity's case, you can insert about 5 objects concurrently before hitting it.
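That estimate falls straight out of the ratio of wall-clock time to CPU time: while one thread is blocked on Postgres I/O (with the GIL released), other threads can use the CPU. Using the numbers above:

```ruby
# Rough concurrency headroom under the GIL: real time / CPU time.
ar_real, ar_cpu = 4.078796, 2.08
pp_real, pp_cpu = 2.603278, 0.57

ar_headroom = ar_real / ar_cpu # about 1.96, so roughly 2 concurrent inserts
pp_headroom = pp_real / pp_cpu # about 4.57, so roughly 5 concurrent inserts
```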
Additionally, Perpetuity can insert several objects in a single SQL query (that's the `all_in_one` benchmark above). I don't know whether this is possible with ActiveRecord, but as you can see, it cuts insertion time dramatically. This is a great feature for seed data, tests, etc.
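The all-in-one variant boils down to a single multi-row `INSERT`. Here's a minimal sketch of building one by hand (the table and column names are made up, and proper value quoting is omitted; never interpolate untrusted input like this):

```ruby
# One INSERT statement carrying many rows, instead of one round trip each.
rows = [
  ["First post",  10, "2013-01-01 00:00:00"],
  ["Second post", 20, "2013-01-02 00:00:00"],
]

values = rows.map do |title, views, time|
  "('#{title}', #{views}, '#{time}')"
end.join(", ")

sql = "INSERT INTO articles (title, view_count, published_at) VALUES #{values}"
```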
```
                  user     system      total        real
ActiveRecord  0.070000   0.000000   0.070000 (  0.081238)
Perpetuity    0.170000   0.020000   0.190000 (  0.183290)
```
At first glance, this benchmark looks like ActiveRecord is a better choice for read-heavy applications (which, I imagine, is most of them) because ActiveRecord retrieves and deserializes database rows over 2x as fast as Perpetuity::Postgres. However, keep reading to see why this is an example of a benchmark that only contains enough information to mislead you.
```
                  user     system      total        real
ActiveRecord  0.110000   0.010000   0.120000 (  0.166371)
Perpetuity    0.180000   0.020000   0.200000 (  0.271154)
```
In a paginated query (250 pages of 20 objects), ActiveRecord is no longer 2x as fast as Perpetuity::Postgres, though it is still significantly faster.
```
                  user     system      total        real
ActiveRecord  0.760000   0.130000   0.890000 (  1.305209)
Perpetuity    0.440000   0.130000   0.570000 (  0.943153)
```
When we get those same 5000 objects one at a time, ActiveRecord loses its lead. I didn't benchmark every step in between to see where it crosses over, but it appears that Perpetuity::Postgres is more efficient at generating SQL queries whereas ActiveRecord wins at deserialization.
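If we assume total cost is roughly per-query overhead plus per-object deserialization, the two read benchmarks give a back-of-the-envelope breakdown (my arithmetic, not from the original benchmarks):

```ruby
# Per-object CPU cost in each scenario, using the CPU totals above.
objects = 5000.0

ar_bulk, ar_single = 0.07, 0.89 # ActiveRecord: one query vs. 5000 queries
pp_bulk, pp_single = 0.19, 0.57 # Perpetuity::Postgres

ar_deserialize = ar_bulk / objects               # about 14 us per object
pp_deserialize = pp_bulk / objects               # about 38 us per object
ar_overhead    = (ar_single - ar_bulk) / objects # about 164 us per query
pp_overhead    = (pp_single - pp_bulk) / objects # about 76 us per query
```

ActiveRecord deserializes each row more cheaply, but pays roughly twice Perpetuity's per-query overhead, which is why it falls behind once every object needs its own query.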
```
                  user     system      total        real
ActiveRecord  0.250000   0.010000   0.260000 (  0.267259)
Perpetuity    0.180000   0.010000   0.190000 (  0.197323)
```
Here is where we see why ActiveRecord being faster at deserialization doesn't matter. Once you begin using the object's attributes, Perpetuity pulls way ahead. This is due to the differences in how ActiveRecord and Perpetuity store data in your objects.
Perpetuity takes all of the data from each result and puts each attribute into an instance variable. This takes time when you first pull it out of the database, but instance variables are extremely quick to access.
ActiveRecord stores all of your object's state in its `@attributes` instance variable. It's all stored as a hash with string keys. So when you say `user.email`, it is functionally equivalent to `user.attributes["email"]`. Dereferencing this hash is significantly slower because of the way Ruby hashes work. Without getting too far into it, Ruby uses the string key's `hash` method to determine its place in the underlying data structure, and computing a string's hash value is not cheap. The worst part is that this hash lookup happens in addition to already having to look up the `@attributes` instance variable.
In some quick benchmarks on my machine, hash lookups appear to take about 3x as long as instance variable reads. The `String#hash` call alone accounts for about 2 of those 3x.
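You can reproduce the gap with a micro-benchmark along these lines (the class names are mine; exact ratios vary by machine and Ruby version):

```ruby
require 'benchmark'

# Two ways of storing object state: one instance variable per attribute
# (Perpetuity's approach) vs. a single string-keyed hash (ActiveRecord's
# @attributes approach).
class IvarUser
  attr_reader :email

  def initialize(email)
    @email = email
  end
end

class HashUser
  def initialize(email)
    @attributes = { "email" => email }
  end

  def email
    @attributes["email"] # String#hash plus a bucket probe on every read
  end
end

ivar_user = IvarUser.new("me@example.com")
hash_user = HashUser.new("me@example.com")
n = 1_000_000

Benchmark.bm(6) do |bm|
  bm.report("ivar") { n.times { ivar_user.email } }
  bm.report("hash") { n.times { hash_user.email } }
end
```

Both readers return the same value; only the lookup path differs.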
```
                                   user     system      total        real
ActiveRecord#save              1.370000   0.320000   1.690000 (  3.599979)
ActiveRecord#update_attributes 1.470000   0.330000   1.800000 (  3.733342)
Perpetuity::Mapper#save        0.340000   0.120000   0.460000 (  2.325159)
Perpetuity::Mapper#update      0.250000   0.100000   0.350000 (  2.215275)
```
Here we compare the different ways of saving updates to an object. When comparing the `save` methods (they have identical functionality: they push updates made to an object since it was loaded back to the database), Perpetuity uses almost 75% less CPU time.
When looking at the `update`/`update_attributes` methods, the difference is even larger: Perpetuity uses over 80% less CPU time there.
Also, even though `update` is a little faster than `save` on Perpetuity (`save` is implemented in terms of `update`), it's highly recommended that you `save` the object instead. `Mapper#update` is faster because it passes the data directly to the database after sanitization, so your domain model cannot do any checks on that data first.
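To illustrate why that matters (a toy sketch, not Perpetuity's API): `save` pushes state that has already passed through your object's own methods, while a direct `update` hands raw data to storage with no chance for the model to object.

```ruby
# Toy illustration: going through the domain model lets it enforce
# invariants; writing raw data straight to storage skips them entirely.
class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
  end

  def balance=(amount)
    raise ArgumentError, "balance cannot be negative" if amount < 0
    @balance = amount
  end
end

storage = {} # stand-in for the database

account = Account.new(100)
account.balance = 50                # the domain model checks this value
storage[:balance] = account.balance # "save": persist validated object state

# A direct "update" bypasses the model, so nothing stops invalid data:
storage[:balance] = -25
```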