Increase speed and throughput without sacrificing quality.
A problem with the current solution that uses "small" aggregates:
```mermaid
sequenceDiagram
    participant Bridge
    participant EMPSA Scheme
    participant EMPSA DMS
    participant Bank
    Bridge ->> EMPSA Scheme: authorise
    EMPSA Scheme ->> EMPSA DMS: authorise
    EMPSA DMS ->> Bank: authorise
    note over Bridge: technical timeout on http
    Bridge ->> EMPSA Scheme: release
    Bank -->> EMPSA DMS: authorised
    note over EMPSA Scheme, EMPSA DMS: inconsistent state
    note over EMPSA Scheme, EMPSA DMS: both aggregates need state
    note over EMPSA Scheme, EMPSA DMS: for out of order processing
```
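To make the inconsistency concrete, here is a minimal Kotlin sketch (all aggregate and event names are made up for illustration) of two "small" aggregates that each see only their half of the flow. Neither one alone can notice that the release and the late authorisation conflict.

```kotlin
// Hypothetical, simplified model of the two "small" aggregates.
sealed interface SchemeEvent
object AuthoriseRequested : SchemeEvent
object Released : SchemeEvent

sealed interface DmsEvent
object ForwardedToBank : DmsEvent
object AuthorisedByBank : DmsEvent

class SchemeAggregate {
    var released = false
        private set
    fun on(e: SchemeEvent) { if (e is Released) released = true }
}

class DmsAggregate {
    var authorised = false
        private set
    fun on(e: DmsEvent) { if (e is AuthorisedByBank) authorised = true }
}

fun main() {
    val scheme = SchemeAggregate()
    val dms = DmsAggregate()

    // Ordering from the diagram: the bridge times out and releases before
    // the bank's answer reaches the DMS.
    scheme.on(AuthoriseRequested)
    dms.on(ForwardedToBank)
    scheme.on(Released)        // bridge gave up after the HTTP timeout
    dms.on(AuthorisedByBank)   // late answer from the bank

    // Neither aggregate alone sees the conflict; each would need a copy of
    // the other's state to handle the out-of-order processing.
    println("scheme.released=${scheme.released}, dms.authorised=${dms.authorised}")
}
```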
If we have one big aggregate covering both EMPSA Scheme and DMS, reasoning about interactions with the bank and the bridge gets simpler:
```mermaid
sequenceDiagram
    participant Bridge
    participant Scheme
    participant Bank
    Bridge ->> Scheme: authorise
    Scheme ->> Bank: authorise
    note over Bridge: technical timeout on http
    Bridge ->> Scheme: release
    Bank -->> Scheme: authorised
    note over Scheme: one place to decide what to do
    note over Scheme: with "conflicting" events
```
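A corresponding sketch of the big-aggregate variant, again with illustrative names and an assumed resolution (requesting a reversal at the bank): because one aggregate applies every event of the payment, the conflicting "release, then authorised" ordering is decided in one place.

```kotlin
// Illustrative names; the chosen resolution (reversal) is an assumption.
sealed interface PaymentEvent
object AuthoriseRequested : PaymentEvent
object Released : PaymentEvent           // bridge timed out and released
object AuthorisedByBank : PaymentEvent   // bank answered after the release

class Payment {
    enum class State { NEW, PENDING, RELEASED, REVERSAL_REQUIRED, AUTHORISED }

    var state = State.NEW
        private set

    fun on(event: PaymentEvent) {
        state = when (event) {
            is AuthoriseRequested -> State.PENDING
            is Released -> State.RELEASED
            // The single place that resolves the "conflicting" events: a late
            // authorisation after a release asks for a reversal instead of
            // completing the payment.
            is AuthorisedByBank ->
                if (state == State.RELEASED) State.REVERSAL_REQUIRED else State.AUTHORISED
        }
    }
}

fun main() {
    val payment = Payment()
    listOf(AuthoriseRequested, Released, AuthorisedByBank).forEach(payment::on)
    println(payment.state) // REVERSAL_REQUIRED: the conflict was resolved locally
}
```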
I think we shouldn't necessarily tie code structure to aggregate structure. Aggregates can aggregate multiple independent systems. When teaching Erlang, it is common that students want to tie process structure to code structure (one module per process), but those are different concepts.
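As a rough illustration of that separation (all module and class names here are hypothetical), the decision logic of one aggregate can live in several independent code modules while the consistency boundary stays a single aggregate:

```kotlin
// Module 1: scheme-side rules, knows nothing about DMS internals.
object SchemeRules {
    fun mayRelease(authorised: Boolean): Boolean = !authorised
}

// Module 2: DMS-side rules, knows nothing about the bridge.
object DmsRules {
    fun mayAuthorise(released: Boolean): Boolean = !released
}

// One aggregate composes both modules behind a single consistency boundary.
class PaymentAggregate {
    private var authorised = false
    private var released = false

    // Behaviour taken from the scheme "module".
    fun release(): Boolean =
        SchemeRules.mayRelease(authorised).also { if (it) released = true }

    // Behaviour taken from the DMS "module".
    fun authorise(): Boolean =
        DmsRules.mayAuthorise(released).also { if (it) authorised = true }
}

fun main() {
    val payment = PaymentAggregate()
    println(payment.release())   // true: nothing authorised yet
    println(payment.authorise()) // false: already released; one boundary decides
}
```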
Invariants vs Corrective Policies
IMO, one payment with an authorisation and potentially multiple captures should be one aggregate, because "can't capture more than authorised" is an invariant. We can discuss whether refunds should be part of this big aggregate, but since there are fewer of them, an "over-refund" can be handled with a corrective policy.
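A small sketch of that distinction, with assumed names and amounts: the capture invariant is enforced synchronously inside the aggregate (the command is rejected), while an over-refund is only detected afterwards by a corrective policy that schedules compensation.

```kotlin
import java.math.BigDecimal

class CapturablePayment(private val authorised: BigDecimal) {
    private var captured = BigDecimal.ZERO

    // Invariant: rejected up front, inside the aggregate boundary.
    fun capture(amount: BigDecimal) {
        require(captured + amount <= authorised) { "cannot capture more than authorised" }
        captured += amount
    }

    fun capturedTotal(): BigDecimal = captured
}

// Corrective policy: lives outside the aggregate and reacts to refunds that
// have already happened, e.g. by scheduling a compensating action.
fun overRefundPolicy(capturedTotal: BigDecimal, refundedTotal: BigDecimal): String? =
    if (refundedTotal > capturedTotal) "schedule compensation for over-refund" else null

fun main() {
    val payment = CapturablePayment(authorised = BigDecimal("100.00"))
    payment.capture(BigDecimal("60.00"))
    // payment.capture(BigDecimal("50.00")) // would throw: the invariant protects us
    println(overRefundPolicy(payment.capturedTotal(), refundedTotal = BigDecimal("70.00")))
}
```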
There is no "correct" size of an aggregate; there are only trade-offs. Small aggregates (Predrag's style) help reduce the cognitive overload of tangled code at the micro level. Big aggregates (Daniel's style) help with consistency and reasoning about flows at the macro level.
Bigger aggregates won't require "technical" aggregates, a.k.a. dictionaries.
Daniel said: "Retry-ability on every step is a cognitive overload". I disagree. Retry-ability on each step is invisible in the code and, while debugging, shows up as a plain sequence of events. It prevents temporary setbacks like DB/HTTP timeouts from stopping the processing, and it is easier to resume from a later point if some operations, like "TokenBurned", were already applied successfully.
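A sketch of what that looks like in practice, with hypothetical step and event names: each step appends an event once it succeeds, transient failures are retried outside the step body, and a resumed run skips everything already recorded (e.g. "TokenBurned") instead of starting over.

```kotlin
class TransientFailure(message: String) : RuntimeException(message)

class Process(private val log: MutableList<String> = mutableListOf()) {

    // The step body contains no retry code: retry-ability is invisible here.
    fun step(event: String, action: () -> Unit) {
        if (event in log) return          // already applied on a previous run: skip
        retry(times = 3) { action() }     // transient DB/HTTP hiccups are absorbed
        log += event                      // debugging view: a plain sequence of events
    }

    fun events(): List<String> = log

    private fun retry(times: Int, block: () -> Unit) {
        repeat(times - 1) {
            try { return block() } catch (_: TransientFailure) { /* try again */ }
        }
        block() // last attempt: let a persistent failure propagate
    }
}

fun main() {
    var flaky = true
    val process = Process()

    process.step("TokenBurned") { println("burning token") }
    process.step("BankNotified") {
        if (flaky) { flaky = false; throw TransientFailure("http timeout") }
        println("notifying bank")
    }

    println(process.events()) // [TokenBurned, BankNotified]

    // Resuming the same process re-runs only what is missing: "TokenBurned"
    // is skipped because its event is already in the log.
    process.step("TokenBurned") { error("would not run again") }
}
```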
[[Evented]]