
@adamlwatson
Created November 15, 2011 18:19
Testing throughput bottlenecks in Goliath from Mongoid gem
0. gem install goliath (should be v0.9.4 as of writing this)
1. gem install mongoid (should be v2.3.3 as of writing this)
2. create and save test.rb file (see below)
3. % ruby test.rb -sv
4. % ab -n500 -c10 http://localhost:9000/status
5. note the req/s results (somewhere around 80-90 req/s on my machine)
6. edit the test.rb file, remove the "require 'mongoid'" line
7. repeat steps 3 and 4.
8. note the req/s results (somewhere around 550 req/s on my machine)
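The numbered steps above, condensed into a shell transcript (version pins are just what was current at the time of writing):

```shell
gem install goliath                          # v0.9.4 at time of writing
gem install mongoid                          # v2.3.3 at time of writing
ruby test.rb -sv &                           # start the server
ab -n500 -c10 http://localhost:9000/status   # note the req/s figure
# comment out the mongoid require in test.rb, then repeat the
# two commands above and compare req/s
```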
Conclusion: Mongoid is patching something in its initialization that is causing severe performance bottlenecks in Goliath.
$: << '../lib' << 'lib'
require 'goliath'
require 'mongoid'

class HelloWorld < Goliath::API
  use Goliath::Rack::Heartbeat

  def response(env)
    [200, {}, ""]
  end
end

sujal commented Nov 16, 2011

Adam, what ruby version are you using?

@adamlwatson (Author)

Sujal, I tried this under both of these versions:

ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-darwin11.2.0]
ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin11.1.0]

The performance difference was due to running goliath without the "-e production" flag — i.e., the server should be started as: ruby test.rb -sv -e production. Problem solved! (Thanks, Ilya!)


sujal commented Nov 16, 2011

Have you done any testing when a query is involved? I'm not getting anywhere near this kind of performance when talking to the DB.


adamlwatson commented Nov 16, 2011 via email


sujal commented Nov 16, 2011

Thanks for answering all my questions. I really appreciate it.

The thing I'm puzzling through is how real that 500-600 number you're citing is. The key is that your test isn't really taking advantage of what async frameworks bring to the table. Since all your data is running on a local box, IO latency is negligible, so I would expect your test to perform as fast as a threaded server. In other words, if your requests take 2ms, you should get 500rps. If they take less, you should get even more.
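Sujal's latency arithmetic can be sketched with a tiny helper (hypothetical, not from the thread): for a single evented process, if each request occupies the loop for a given time, throughput tops out near the reciprocal of that time.

```ruby
# Throughput ceiling for a single-process event loop: if every request
# holds the loop for latency_s seconds, at most 1/latency_s requests
# can complete per second, regardless of client concurrency (ab -c).
def max_rps(latency_s)
  1.0 / latency_s
end

puts max_rps(0.002).round  # 2 ms per request -> 500 req/s
puts max_rps(0.001).round  # 1 ms per request -> 1000 req/s
```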

I've been testing with MongoDB running across a WAN (from my box here at my friend's house to my server at home), and performance there takes a hit. I'm still trying to validate my testing (I just sent a message to the goliath list), so take this with a grain of salt. For example, my performance issues last night turned out to be a bug in apache bench on OS X Lion. ;-)

Sujal


sujal commented Nov 16, 2011

(the short version of what I'm puzzling through is how do I know that MongoDB calls are really async... that's where this all started)
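One crude, driver-agnostic way to probe that question (a stdlib-only simulation, not real Goliath or Mongo driver code): schedule a "timer" on the loop and measure how late it fires while a call runs on the same thread. A synchronous driver call delays the timer; a truly async one would yield back to the reactor instead.

```ruby
# Simulated reactor tick: a timer is due 10 ms out, but first a call
# (stood in for by sleep) runs on the same thread. The return value is
# how late the timer ends up firing.
def timer_lateness(blocking_call_s)
  due = Time.now + 0.01
  sleep(blocking_call_s)   # stand-in for a synchronous driver call
  Time.now - due
end

puts timer_lateness(0.05) > 0.02  # true: a 50 ms blocking call delays the timer
puts timer_lateness(0.0)  > 0.02  # false: nothing blocked; timer fires on time
```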
