package proxy

import (
    "io"
    "log"
    "net"
)

func Proxy(srvConn, cliConn *net.TCPConn) {
    // channels to wait on the close event for each connection
    serverClosed := make(chan struct{}, 1)
    clientClosed := make(chan struct{}, 1)

    go broker(srvConn, cliConn, clientClosed)
    go broker(cliConn, srvConn, serverClosed)

    // wait for one half of the proxy to exit, then trigger a shutdown of the
    // other half by calling CloseRead(). This will break the read loop in the
    // broker and allow us to fully close the connection cleanly without a
    // "use of closed network connection" error.
    var waitFor chan struct{}
    select {
    case <-clientClosed:
        // the client closed first and any more packets from the server aren't
        // useful, so we can optionally SetLinger(0) here to recycle the port
        // faster.
        srvConn.SetLinger(0)
        srvConn.CloseRead()
        waitFor = serverClosed
    case <-serverClosed:
        cliConn.CloseRead()
        waitFor = clientClosed
    }

    // Wait for the other connection to close.
    // This "waitFor" pattern isn't required, but gives us a way to track the
    // connection and ensure all copies terminate correctly; we can trigger
    // stats on entry and deferred exit of this function.
    <-waitFor
}

// This does the actual data transfer.
// The broker only closes the Read side.
func broker(dst, src net.Conn, srcClosed chan struct{}) {
    // We can handle errors in a finer-grained manner by inlining io.Copy (it's
    // simple, and we drop the ReaderFrom or WriterTo checks for
    // net.Conn->net.Conn transfers, which aren't needed). This would also let
    // us adjust buffersize.
    _, err := io.Copy(dst, src)
    if err != nil {
        log.Printf("Copy error: %s", err)
    }
    if err := src.Close(); err != nil {
        log.Printf("Close error: %s", err)
    }
    srcClosed <- struct{}{}
}
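For context, here is a minimal sketch of how Proxy might be wired up. The listen and backend addresses are made up for illustration, and the import path for the proxy package is hypothetical:

package main

import (
    "log"
    "net"

    "example.com/proxy" // hypothetical import path for the package above
)

func main() {
    ln, err := net.Listen("tcp", ":8080") // assumed frontend address
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        cliConn := conn.(*net.TCPConn) // a TCP listener yields *net.TCPConn
        go func() {
            srvConn, err := net.Dial("tcp", "127.0.0.1:9090") // assumed backend
            if err != nil {
                log.Println(err)
                cliConn.Close()
                return
            }
            proxy.Proxy(srvConn.(*net.TCPConn), cliConn)
        }()
    }
}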
Thanks jbardin, but strangely io.Copy or io.CopyN uses a lot of memory. On my side it's a SOCKS5 proxy serving many web connections at a time.
(sorry everyone, notifications still don't work on gists)
@deckarep
Those checks are only valid if you're copying a file. It's how Go interfaces with the Linux sendfile syscall.
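For anyone curious, the shape of that delegation inside io.Copy is roughly the sketch below, a paraphrase of the standard library with our own function name (assuming "io" is imported):

// copyDelegating paraphrases the checks at the top of io.Copy; the name
// is ours, not the standard library's.
func copyDelegating(dst io.Writer, src io.Reader) (int64, error) {
    // if the source can write itself out, let it drive the transfer
    if wt, ok := src.(io.WriterTo); ok {
        return wt.WriteTo(dst)
    }
    // if the destination can read the source in, let it; *net.TCPConn
    // lands here, and on Linux its ReadFrom can use sendfile (and, in
    // later Go releases, splice) when the source permits
    if rf, ok := dst.(io.ReaderFrom); ok {
        return rf.ReadFrom(src)
    }
    // neither applies: fall back to a plain buffered read/write loop
    // (CopyBuffer repeats these checks, but they've already failed here)
    return io.CopyBuffer(dst, src, make([]byte, 32*1024))
}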
@Sen
If memory is increasing, you're leaking connections somewhere. If memory consumption is just high because you have too many connections, you can implement your own io.Copy with a smaller buffer, but that's only going to help for so long.
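A hand-rolled copy with a small, fixed buffer might look like the sketch below; the 4 KiB size is an arbitrary choice and the function name is ours (assuming "io" is imported):

// copySmall is a minimal stand-in for io.Copy with a caller-chosen buffer
// size. It deliberately skips the WriterTo/ReaderFrom delegation, trading
// the sendfile fast path for predictable per-connection memory use.
func copySmall(dst io.Writer, src io.Reader) (written int64, err error) {
    buf := make([]byte, 4*1024) // arbitrary small buffer
    for {
        nr, er := src.Read(buf)
        if nr > 0 {
            nw, ew := dst.Write(buf[:nr])
            written += int64(nw)
            if ew != nil {
                return written, ew
            }
            if nw < nr {
                return written, io.ErrShortWrite
            }
        }
        if er == io.EOF {
            return written, nil
        }
        if er != nil {
            return written, er
        }
    }
}

Note that io.CopyBuffer also accepts a caller-supplied buffer, but it still prefers the WriterTo/ReaderFrom delegation when available, so an explicit loop like this is the way to force a particular buffer size.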
- When the client is closed, I think srvConn should call CloseWrite, which means cliConn will NOT write to srvConn anymore.
- Both srvConn.CloseRead() and cliConn.CloseRead() are unnecessary, because src.Close() in the first broker will trigger an error in the other broker, which will then call dst.Close().
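If I'm reading that right, the simplified variant being suggested is something like this sketch (not jbardin's code, and assuming "io" and "net" are imported): a full Close of both ends after either copy returns forces an error out of the opposite io.Copy, at the cost of the "use of closed network connection" noise that the CloseRead dance avoids:

func proxySimple(srvConn, cliConn net.Conn) {
    done := make(chan struct{}, 2)
    cp := func(dst, src net.Conn) {
        io.Copy(dst, src)
        // a full Close (rather than CloseRead) unblocks the other copy;
        // the second Close of each conn just returns an error we ignore
        dst.Close()
        src.Close()
        done <- struct{}{}
    }
    go cp(srvConn, cliConn)
    go cp(cliConn, srvConn)
    <-done
    <-done
}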
What do you guys think of what I've done here: https://github.com/nhooyr/tlswrapd/blob/965c34913d5c8635b07ec8fb4918174298fa4855/proxy.go#L87-L110 ?
I think it is better than this because it is simpler but accomplishes the same thing. It is also not specific to TCP; any net.Conn (such as the tls.Conn I'm using in there) will work.
@nhooyr To simplify the closing of the connections, we can use sync.Once:
var once sync.Once // renamed from "close" to avoid shadowing the builtin
cp := func(dst io.WriteCloser, src io.ReadCloser) {
    // buffers is assumed to be a sync.Pool handing out []byte slices
    b := buffers.Get().([]byte)
    defer buffers.Put(b)
    io.CopyBuffer(dst, src, b)
    // whichever copy finishes first closes both ends, exactly once
    once.Do(func() {
        dst.Close()
        src.Close()
    })
}
go cp(src, dst)
cp(dst, src)
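For completeness, the buffers value above isn't defined in the snippet; one plausible definition (our assumption, not part of the original comment) is a sync.Pool of byte slices:

var buffers = sync.Pool{
    New: func() interface{} { return make([]byte, 32*1024) }, // 32 KiB, matching io.Copy's default
}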
Per your comment (line 45) about inlining io.Copy and dropping the ReaderFrom/WriterTo checks... it seems that when dealing with a TCPConn you wouldn't want to do that, right? Looking at the io.Copy source code, TCPConn implements ReaderFrom and can forgo the buffer allocation. I could be wrong about this.
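That reading matches the standard library: *net.TCPConn does satisfy io.ReaderFrom, which a compile-time assertion can confirm:

// fails to compile if *net.TCPConn ever stops implementing io.ReaderFrom
var _ io.ReaderFrom = (*net.TCPConn)(nil)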
By the way, thanks for this example, this is exactly what I was looking for from my stackoverflow.com comment!!!