Fibur is a library that gives you concurrency during Ruby I/O operations without resorting to callbacks. Traditionally in Ruby, achieving concurrency during blocking I/O meant wrapping each call in a Fiber plus a callback. Fibur eliminates that wrapping: you write your blocking I/O calls the way you normally would, and still get concurrent execution while they block.
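To see what Fibur saves you, here is a rough sketch of that Fiber-plus-callback style. The async_read helper and the pending queue are invented for illustration (they are not any real library's API), and a helper thread stands in for a real event loop; the callback's only job is to hand the result back so the suspended Fiber can be resumed:

require 'thread'
require 'fiber'
require 'net/http'
require 'uri'

pending = Queue.new

def async_read uri, pending
  fiber = Fiber.current
  # hand the blocking read to a helper thread; its "callback" pushes the
  # result onto the queue so the loop below can resume our Fiber
  Thread.new { pending << [fiber, Net::HTTP.get_response(uri)] }
  Fiber.yield # suspend here until the loop resumes us with the response
end

uri = URI('http://google.com/')

fibers = 3.times.map {
  Fiber.new { puts async_read(uri, pending).code }
}
fibers.each(&:resume)

# toy event loop: resume each Fiber with its result as it arrives
3.times {
  fiber, response = pending.pop
  fiber.resume response
}

All of that plumbing exists just to make a blocking call look synchronous to the caller. Fibur gives you the synchronous look without the plumbing.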
Say you have a method that fetches data from a network resource:
require 'net/http'

def network_read uri
  Net::HTTP.get_response uri
end
We need to fetch that data, say, 100 times, so we'll wrap it in a loop:

100.times { network_read uri }
If we benchmark this code:
require 'benchmark'
require 'net/http'
require 'uri'

def network_read uri
  Net::HTTP.get_response uri
end

uri = URI('http://google.com/')

Benchmark.bm do |x|
  x.report('loop') { 100.times { network_read uri } }
end
On my machine it takes about 5 seconds:
$ ruby test.rb
           user     system      total        real
loop   0.210000   0.070000   0.280000 (  5.731776)
That works out to roughly 57 ms per request, paid serially: each read waits for the previous one to finish before it can start. Now let's modify our benchmark to wrap each call to network_read in a Fibur:
require 'benchmark'
require 'net/http'
require 'uri'
require 'fibur' # use the Fibur gem.

def network_read uri
  Net::HTTP.get_response uri
end

uri = URI('http://google.com/')

Benchmark.bm(5) do |x|
  x.report('loop') { 100.times { network_read uri } }
  x.report('fibur') {
    100.times.map {
      Fibur.new { network_read uri }
    }.map(&:join)
  }
end
Output from our benchmark:
$ ruby -I. test.rb
            user     system      total        real
loop    0.220000   0.070000   0.290000 (  5.732683)
fibur   0.110000   0.050000   0.160000 (  0.197434)
Wrapping each call to network_read in a Fibur brought the time down to about 0.2 seconds, which is roughly the cost of a single request: all 100 reads now wait on the network in parallel. Using Fiburs, we gained full concurrency during our I/O operations, and we didn't have to modify our network_read method.
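If you also need the responses themselves, one sketch (reusing network_read and uri from the benchmark above, and assuming nothing about Fibur beyond the Fibur.new and join calls we've already used, plus a stdlib Mutex) is to collect them into a shared array:

responses = []
mutex     = Mutex.new

fiburs = 100.times.map {
  Fibur.new {
    response = network_read uri
    # guard the shared array while the Fiburs run concurrently
    mutex.synchronize { responses << response }
  }
}
fiburs.each(&:join)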
Fibur only works on Ruby 1.9, and you can get it by installing the fibur gem.
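Installation is the standard gem command:

$ gem install fibur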
I encourage you to check out the source.
I suspect that, given the widespread adoption of evented programming (and, over in Java land, java.util.concurrent with its expansive support for locked and lock-free data types), you'll keep hearing that full processor utilization demands callbacks, a reactor loop decoupled from your central abstraction, and exposure through a dependency injection container. Fibur lets you skip all of that.