
@meson10
Last active August 29, 2015 14:25
Hijacking STDOUT/STDERR that uses reopening (IO#reopen) instead of reassignment
require 'logger'
# require 'stringio'
# StringIO mostly behaves like an IO, but IO#reopen fails on it.
# The problem is that a StringIO cannot exist in the O/S's file descriptor
# table. STDERR.reopen(...) at the low level does a dup() or dup2() to
# copy one file descriptor to another.
#
# I have two options:
#
# (1) $stderr = StringIO.new
#     Then any program which writes to $stderr will be fine. But anything
#     which writes to STDERR will still go to file descriptor 2.
#
# (2) reopen STDERR with something which exists in the O/S file descriptor
#     table: e.g. a file or a pipe.
#
# I cannot use a file here, hence a pipe.
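#
# A minimal sketch of why option (1) is only half a fix (hypothetical
# snippet, not part of the flow below):
#
#   require 'stringio'
#   $stderr = StringIO.new
#   $stderr.puts "captured"   # goes into the StringIO
#   STDERR.puts "escaped"     # the constant still points at fd 2
#   $stderr = STDERR          # undo the reassignment
#
# Option (2) works because a pipe endpoint has a real fileno that
# dup2() can copy; a StringIO has nothing to copy.
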
def make_logger(ident)
  lg = Logger.new(STDERR)
  lg.level = Logger::DEBUG
  original_formatter = Logger::Formatter.new
  lg.formatter = proc do |severity, datetime, _progname, msg|
    original_formatter.call(severity, datetime, ident, msg.dump)
  end
  lg
end
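
# Usage sketch: make_logger("worker-1").debug("hi") emits a line shaped
# roughly like this (timestamp and pid vary):
#
#   D, [2015-08-29T14:25:00.000000 #4242] DEBUG -- worker-1: "hi"
#
# Note that msg.dump quotes the message, so embedded newlines arrive as
# a literal \n instead of breaking the log line.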

def capture_output(pipes)
  streams = [STDOUT, STDERR]
  # Save duplicates of the streams so they can be restored later.
  # (This must be map, not each: each returns the original array, so the
  # ensure block below would "restore" the already-hijacked streams.)
  # Strictly the restore doesn't matter here, because the child process
  # exits anyway after the work is done.
  saved = streams.map(&:dup)
  begin
    streams.each_with_index do |stream, ix|
      # Because STDOUT and STDERR are reopened on separate pipes, their
      # output can arrive out of order. Reopening both on the same pipe
      # would guarantee ordering.
      stream.reopen(pipes[ix])
    end
    yield
  ensure
    # Return what was borrowed. Mostly aesthetic here, for the reason
    # noted above.
    streams.each_with_index do |stream, i|
      stream.reopen(saved[i])
    end
  end
end
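
# Standalone usage sketch for capture_output (hypothetical, single
# process, both streams on one pipe so ordering is preserved):
#
#   rd, wr = IO.pipe
#   capture_output([wr, wr]) do
#     $stdout.sync = true # avoid stdout buffering reordering the writes
#     puts "hello"
#     warn "world"
#   end
#   wr.close
#   rd.each_line { |line| print "captured: #{line}" }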

class Request
  def initialize(pipes)
    @pipes = pipes
  end

  def func_in_fork
    3.times do |i|
      puts "Regular puts message #{i}"
      $stdout.puts "#{i} on stdout"
      $stderr.puts "#{i} on stderr"
    end
  end

  def process
    capture_output(@pipes) do
      func_in_fork
    end
  end
end

def process_request(sender)
  out_r, out_w = IO.pipe
  err_r, err_w = IO.pipe
  parent_io = [out_r, err_r]
  child_io = [out_w, err_w]

  # Logger does the actual trick of attaching a sender to every message.
  lg = make_logger sender

  pid = Process.fork do
    r = Request.new(child_io)
    r.process
  end

  # Close the write ends in the parent; the child holds its own copies,
  # and the readers below only see EOF once every write end is closed.
  child_io.each { |io| io.close }

  io_threads = []
  pub_mutex = Mutex.new

  # Listen on both the stdout and stderr pipes.
  parent_io.each do |reader|
    io_threads << Thread.new do
      loop do
        begin
          data = reader.readline
          pub_mutex.synchronize { lg.debug data }
        rescue EOFError
          break # the child closed its end of the pipe; stop reading
        rescue StandardError => e
          lg.error e.message
          lg.error e.backtrace
        end
      end
    end
  end

  wait_thr = Process.detach(pid)
  wait_thr.join
  pub_mutex.synchronize do
    io_threads.each(&:kill)
  end

  # Cleanup.
  parent_io.each { |io| io.close unless io.closed? }
end
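
# Variant sketch (hypothetical): for strict ordering between the child's
# stdout and stderr, hand the fork the same write end twice and read a
# single pipe:
#
#   def process_request_ordered(sender)
#     rd, wr = IO.pipe
#     lg = make_logger(sender)
#     pid = Process.fork { Request.new([wr, wr]).process }
#     wr.close
#     rd.each_line { |line| lg.debug line }
#     Process.wait(pid)
#     rd.close
#   end
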
process_request("123")
puts "Regular done on stdout!"