
@chapel
Created May 17, 2011 01:33
[20:18] mraleph: ryah: I have an interesting observation for you about that infamous and sad comparison with erlang. If I throttle accept (i.e. accept returns after 50 accepted connections) in https://github.com/joyent/node/blob/master/lib/net.js#L907 then the response rate seems to improve by 20-30% and the number of errors drops.
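(For context: the change mraleph describes above amounts to bounding how many accept() calls one readiness event may make before yielding back to the event loop. Below is a minimal C sketch of that pattern; MAX_ACCEPTS_PER_TICK and handle_connection() are illustrative names, not node's actual net.js code.)

```c
/* Sketch: cap the number of accept() calls per readability event so a
 * busy listening socket can't monopolize the loop. The constant and
 * helper names here are made up for illustration. */
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

#define MAX_ACCEPTS_PER_TICK 50

static void handle_connection(int fd)
{
    /* placeholder: a real server would register fd with the event loop */
    close(fd);
}

void on_listen_readable(int listen_fd)
{
    for (int i = 0; i < MAX_ACCEPTS_PER_TICK; i++) {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            break;              /* EAGAIN: backlog drained, or a real error */
        handle_connection(fd);
    }
    /* anything still queued is picked up on the next readiness event,
     * after other sockets have had a turn */
}
```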
[20:18] baudehlo: haven't they always?
[20:18] baudehlo: a|i: https://github.com/squaremo/rabbit.js
[20:19] ryah: mraleph: interesting
[20:19] mjr_: mraleph: I figured it was something like that
[20:19] ryah: do you think we're blocked on accepting?
[20:19] kersny has joined the channel
[20:19] ryah: that's an easy fix
[20:19] a|i: baudehlo: any ideas which approach is more efficient?
[20:19] AsDfGh1231 has joined the channel
[20:19] mjr_: Every time I've looked at node performance, it always goes into some kind of concurrency explosion and needs some kind of throttling.
[20:19] baudehlo: in qpsmtpd I had the accept queue automatically ramp up and down.
[20:20] mraleph: well, connections arrive in batches, like 100-200 connections per call to that function.
[20:20] baudehlo: might be worth stealing.
[20:20] mraleph: sometimes it takes up to 30ms to process all of them
[20:20] onar has joined the channel
[20:20] ryah: i wonder if there's a smart way to choose when to stop
[20:20] Lagnus has joined the channel
[20:20] mraleph: I don't think stupid throttling is an answer.
[20:20] mraleph: just wanted to share
[20:20] dnyy has joined the channel
[20:21] ryah: mraleph: yeah, that's interesting
[20:21] baudehlo: ryah: here's what I do in qpsmtpd: https://github.com/smtpd/qpsmtpd/blob/master/qpsmtpd-async#L375
[20:21] dnyy: is this http://stackoverflow.com/questions/4871932/using-npm-to-install-or-update-required-packages-just-like-bundler-for-rubygems (first answer) the standard way of doing it?
[20:21] chapel: those brute force testers aren't very realistic either
[20:22] baudehlo: ryah: the basic idea is that we want to balance getting back to the main loop with accepting as much as we can.
[20:22] chapel: has anyone thought of using node to build a reddit/digg simulator?
[20:22] mraleph: I am curious what will happen if you put more meat into the request handler.
[20:22] ryah: baudehlo: i'd think the timer would be unnecessary
[20:22] chapel: where it could span over multiple servers and hit one target?
[20:22] mjr_: baudehlo: that's interesting. How do you know when to slow it down?
[20:23] ryah: baudehlo: you just want to get back to processing other sockets and not get stuck in an accept loop
[20:23] mraleph: like fetching file or image or something from fs, or rendering a template, etc.
[20:23] xtianw has joined the channel
[20:23] mw____ has joined the channel
[20:23] quackquack: tjholowaychuk: does Jade have a logo?
[20:23] tdegrunt has joined the channel
[20:23] baudehlo: mjr_: it's imperfect.
[20:24] baudehlo: it needs down-throttling as well as up-throttling.
[20:24] mjr_: I see. Tricky problem.
[20:24] baudehlo: right now if there's no more connections in 30 seconds it resets it back to 20.
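(A rough C sketch of the shape of that heuristic, not qpsmtpd's actual code: grow the per-pass accept limit while full batches keep arriving, and fall back to a small baseline after ~30 idle seconds, per baudehlo's description. The struct, constants, and function names are all illustrative.)

```c
/* Sketch of an adaptive per-pass accept limit: ramp up while we keep
 * hitting the cap, fall back to a small baseline after an idle period. */
#include <time.h>

struct accept_throttle {
    int    limit;            /* how many accept()s we allow per pass */
    time_t last_busy;        /* last time a pass actually hit the limit */
};

#define BASE_LIMIT   20
#define MAX_LIMIT    1000
#define IDLE_RESET_S 30

/* Call after each accept pass with the number of connections taken. */
void throttle_update(struct accept_throttle *t, int accepted)
{
    time_t now = time(NULL);

    if (accepted >= t->limit) {
        /* we drained a full batch: the listen queue is probably still
         * backed up, so allow a bigger batch next time */
        if (t->limit < MAX_LIMIT)
            t->limit *= 2;
        t->last_busy = now;
    } else if (now - t->last_busy > IDLE_RESET_S) {
        /* no full batches for a while: shrink back to the baseline */
        t->limit = BASE_LIMIT;
    }
}
```

(As baudehlo says above, something like this still needs explicit down-throttling under sustained overload; growth plus an idle reset alone is imperfect.)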
[20:24] perezd: anyone in here using RQueue?
[20:24] ryah: i wonder if libev priorities could be used here...
[20:25] baudehlo: I tried a bunch of different methods and that just seemed to work without too much hassle.
[20:25] ryah: like give accept fd high priority but decrement it each time you accept a connection in a loop
[20:25] ryah: or something..
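(libev does expose watcher priorities via ev_set_priority(), though the default range is only EV_MINPRI..EV_MAXPRI, i.e. -2..+2, so a per-connection decrement saturates after a couple of steps, and the priority may only be changed while the watcher is stopped. A hedged sketch of the shape of ryah's idea, not a worked-out design.)

```c
/* Sketch: start the listen watcher at high priority and demote it as it
 * accepts connections, so already-accepted sockets get serviced before
 * we go back for more. Illustrative only. */
#include <ev.h>
#include <sys/socket.h>

static void on_accept(struct ev_loop *loop, ev_io *w, int revents)
{
    (void)revents;
    int fd = accept(w->fd, NULL, NULL);
    if (fd >= 0) {
        /* ... hand the socket off to the rest of the server ... */

        /* demote the listener one step; libev requires the watcher to be
         * stopped before its priority is changed */
        int pri = ev_priority(w);
        if (pri > EV_MINPRI) {
            ev_io_stop(loop, w);
            ev_set_priority(w, pri - 1);
            ev_io_start(loop, w);
        }
    }
}

void watch_listener(struct ev_loop *loop, ev_io *w, int listen_fd)
{
    ev_io_init(w, on_accept, listen_fd, EV_READ);
    ev_set_priority(w, EV_MAXPRI);   /* start as the most urgent watcher */
    ev_io_start(loop, w);
    /* something else (an ev_idle or ev_prepare watcher, say) would have
     * to bump the priority back up once the loop has drained its work */
}
```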
[20:25] tjholowaychuk: quackquack: nope
[20:25] quackquack: tjholowaychuk: k, thx
[20:26] mjr_: Honestly, I think node performance is "fine" for now.
[20:26] tjholowaychuk: dont have a express logo either
[20:26] tjholowaychuk: some day
[20:26] baudehlo: ryah: I think just having a maximum number of accept() calls is a good first step
[20:26] mjr_: Lots of bugs out there that are more important IMO than beating Erlang.
[20:26] baudehlo: agreed.
[20:26] mraleph: ryah: I also still see that V8 goes to the runtime for some properties like 'emit', '_headers' and some others pretty often, but that does not seem to be the bottleneck. I tried fighting it by changing code here and there, but it had no effect, so I think the main overhead is in something else
[20:26] ryah: mjr_ definitely
[20:26] chapel: mjr_: I don't think it's to beat erlang, but ryah has always wanted node to be fast
[20:27] mraleph: bugs are always important more than anything else
[20:27] baudehlo: For Haraka, which is entirely unoptimised, it's already as fast as a competitor's C smtp server (that uses async I/O).
[20:27] ryah: especially since im in the middle of the network rewrite from hell
[20:27] chapel: and if a way forward can be found to improve it, then why not?
[20:27] onar has joined the channel
[20:27] baudehlo: ACTION might submit a patch
[20:27] quackquack: tjholowaychuk: i was wondering cause im building a text editor, and putting little icons by the files to indicate type
[20:27] olauzon has joined the channel
[20:28] tjholowaychuk: quackquack: ah cool :)
[20:28] baudehlo: ryah: is the accept code the same for all Net stuff or specific to http?
[20:28] ryah: mraleph: that's promising..