```fsharp
open Hopac
open Hopac.Infixes
open Hopac.Job.Infixes
open Hopac.Extensions

/// A request/reply channel: each request carries an input together with
/// an IVar on which the reply is written.
type IoCh<'i, 'o> = Ch<'i * IVar<'o>>

module IoCh =

  /// Starts a server loop that takes requests from the channel, applies f,
  /// and fills the request's reply IVar with the result.
  let create (f: 'i -> Job<'o>) : Job<IoCh<'i, 'o>> =
    Ch.create () >>= fun ch ->
    let loop () = Job.delay <| fun () ->
      Ch.take ch >>= fun (i: 'i, iv: IVar<'o>) -> f i >>= IVar.fill iv
    loop () |> Job.foreverServer >>% ch

  /// Sends a request on the channel and waits for the reply.
  let sendReceive (ch: IoCh<'i, 'o>) (i: 'i) : Job<'o> =
    Job.delay <| fun () ->
      let iv = ivar ()
      Ch.send ch (i, iv) >>. iv

  /// Wraps f so that all calls are processed one at a time, in FIFO order.
  let fifo (f: 'i -> Job<'o>) : Job<'i -> Job<'o>> =
    create f |>> sendReceive
```
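A minimal usage sketch (not from the gist itself, and assuming the pre-1.0 Hopac API used above, where `job`, `run`, and `Job.result` are in scope via `open Hopac`): `IoCh.fifo` turns an effectful function into one whose calls are serialized through a single server loop.

```fsharp
open Hopac
open Hopac.Job.Infixes

// Hypothetical example: serialize calls to a function through one server.
let demo = job {
  // fifo starts the server and returns a function that posts a request
  // (input paired with a fresh reply IVar) and awaits the reply.
  let! lookup = IoCh.fifo (fun (key: string) -> Job.result key.Length)
  let! a = lookup "hello"
  let! b = lookup "hi"
  return a + b
}

// In this era of Hopac, run blocks until the job completes.
let result = run demo
```

Because every call goes through the same `Ch.take` loop, requests are handled strictly one at a time in arrival order, which is the property the thread below wants from a per-key MBP.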
What is the purpose of the MemoryCache here? My guess is that the number of keys is large enough (perhaps unbounded) that you want to clean up the MBPs to avoid a space leak and that there is no other essential reason for using the MemoryCache class.
Yes, exactly - if all the MBPs were kept in memory, the service would very quickly run out of memory. The timeout is somewhat arbitrary - it is meant to allow inputs read in a batch to be processed before the entry expires, while keeping the lifetime short enough that there isn't a large number of MBPs in memory at any given point. It would be better to have natural expiration through the GC, but an MBP doesn't automatically become eligible for collection. It seems like Hopac would be better suited to this style of expiration.
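An illustrative sketch of the pattern under discussion (not the author's actual code; the names and the 5-second timeout are assumptions taken from the thread): per-key `MailboxProcessor` instances held in a `System.Runtime.Caching.MemoryCache` with a sliding expiration, so idle processors are evicted instead of accumulating.

```fsharp
open System
open System.Runtime.Caching

let cache = MemoryCache.Default

let private policy () =
  CacheItemPolicy(
    // Arbitrary, as discussed: long enough for a batch, short enough
    // to bound how many MBPs are alive at once.
    SlidingExpiration = TimeSpan.FromSeconds 5.0,
    RemovedCallback = CacheEntryRemovedCallback(fun args ->
      // Dispose the MBP on eviction. Note this does NOT wait for queued
      // messages to be processed - that gap is discussed below.
      match args.CacheItem.Value with
      | :? IDisposable as d -> d.Dispose()
      | _ -> ()))

let getProcessor (key: string) : MailboxProcessor<string> =
  let fresh =
    MailboxProcessor.Start(fun inbox -> async {
      while true do
        let! msg = inbox.Receive()
        printfn "[%s] %s" key msg })
  match cache.AddOrGetExisting(key, fresh, policy ()) with
  | null -> fresh                          // our instance was added
  | existing ->                            // another caller added first
    (fresh :> IDisposable).Dispose()       // discard the one we built eagerly
    existing :?> MailboxProcessor<string>
```

The sliding expiration resets on every `getProcessor` hit, so a key only expires after the timeout passes with no traffic.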
WRT the expensive call - yes, the API of MemoryCache is a bit awkward - it doesn't allow lazy creation (as ConcurrentDictionary does, for example). But it did solve the memory issue, and the service's workload is largely IO-bound anyway, so it didn't incur a performance hit overall. I figured writing my own cache would take more time and be more error-prone, so MemoryCache seemed like a good compromise.
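To make the lazy-creation contrast concrete, a hedged sketch (hypothetical names): `ConcurrentDictionary.GetOrAdd` can defer the expensive construction through `Lazy<'T>`, so it runs at most once per key even under a race, whereas `MemoryCache.AddOrGetExisting` takes an already-constructed value, so a losing caller pays the construction cost and must discard the result.

```fsharp
open System
open System.Collections.Concurrent
open System.Runtime.Caching

// ConcurrentDictionary: construction deferred via Lazy. Racing callers may
// insert competing Lazy wrappers, but only the stored one is ever forced.
let dict = ConcurrentDictionary<string, Lazy<MailboxProcessor<string>>>()

let getLazily (key: string) (factory: string -> MailboxProcessor<string>) =
  dict.GetOrAdd(key, fun k -> lazy (factory k)).Value

// MemoryCache: the value must exist before the add is attempted, so under
// a race the expensive factory can run twice and one instance is wasted.
let cache = MemoryCache.Default

let getEagerly (key: string) (factory: string -> MailboxProcessor<string>) =
  let fresh = factory key                  // always constructed, win or lose
  match cache.AddOrGetExisting(key, fresh, CacheItemPolicy()) with
  | null -> fresh
  | existing ->
    (fresh :> IDisposable).Dispose()
    existing :?> MailboxProcessor<string>
```

As the thread notes, for an IO-bound service the occasional wasted construction is usually an acceptable trade for not writing a custom cache.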
BTW, is it guaranteed that an MBP is not processing items when it is being disposed of by the MemoryCache?
That is a good point and I'm not 100% sure - I've been logging the message count during expiration and haven't seen it become an issue, but it is a hole in the design.
Hmm... to press on that point: say a large number of items is queued for an MBP and then there is a >5 second gap before the next item for the same key - the sliding expiration would dispose the MBP while it may still be working through its backlog. That could be another good reason to write your own cache.
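One possible mitigation, sketched here as a hypothetical (it is not from the thread's service, and it narrows the hole rather than closing it): have the removal callback check the MBP's queue length and re-insert the entry instead of disposing while work remains.

```fsharp
open System
open System.Runtime.Caching

let cache = MemoryCache.Default

/// Inserts an MBP under key with a sliding expiration whose removal
/// callback re-inserts the MBP if it still has queued work.
let rec insert (key: string) (mbp: MailboxProcessor<string>) =
  let policy =
    CacheItemPolicy(
      SlidingExpiration = TimeSpan.FromSeconds 5.0,
      RemovedCallback = CacheEntryRemovedCallback(fun args ->
        let m = args.CacheItem.Value :?> MailboxProcessor<string>
        if m.CurrentQueueLength > 0 then
          insert key m                     // still busy: give it another lease
        else
          // Race remains: a message could arrive between the length check
          // and this Dispose, so this is a mitigation, not a guarantee.
          (m :> IDisposable).Dispose()))
  cache.Set(key, mbp, policy)
```

Closing the race entirely needs cooperation from the processor itself (e.g. a shutdown message it acknowledges once drained), which is exactly the kind of rendezvous the Hopac channel code above makes easy to express.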