Before delving into how we support database (DB) transactions, let's first consider how we can handle operations that only pertain to the unhappy path. This consideration is important because, as we will see, it will become relevant when we reach the main topic.
Most of the time, the unhappy path is something managed by the caller (e.g., a controller rendering an error in case of failure). However, there are situations where it makes sense to encapsulate part of the unhappy path within the operations class. For instance, you might want to log the failure somewhere.
When relying on the vanilla `#steps` method, the implementation is straightforward:
```ruby
class CreateUser < Dry::Operation
  def call(input)
    steps do
      attrs = step validate(input)
      user = step persist(attrs)
      assign_initial_roles(user)
      step send_welcome_email(user)
      user
    end.tap do |result|
      log_failure(result) if result.failure?
    end
  end
end
```
However, it is beneficial to automatically prepend the `#steps` method in `#call` for a couple of reasons:
- It reduces boilerplate.
- It consolidates the interface of the operation, ensuring it always returns a result object.
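As a rough illustration of how such automatic wrapping can work (a minimal sketch with assumed names, not dry-operation's actual implementation), a prepended module can intercept `#call` and route it through the failure-catching logic:

```ruby
# Minimal sketch: a prepended module wraps the user-defined #call so it
# always goes through the steps/failure-catching logic.
Failure = Struct.new(:error)

module AutoSteps
  def call(*args)
    # Stand-in for #steps: catch a thrown failure and return it as the result.
    catch(:failure) { super }
  end
end

class Operation
  def self.inherited(subclass)
    super
    subclass.prepend(AutoSteps) # runs before the subclass's own #call
  end

  # A step unwraps a plain value or aborts the whole flow.
  def step(result)
    result.is_a?(Failure) ? throw(:failure, result) : result
  end
end

class CreateUser < Operation
  def call(input)
    attrs = step validate(input)
    "created #{attrs[:name]}"
  end

  def validate(input)
    input[:name] ? input : Failure.new("name missing")
  end
end
```

Because method lookup is dynamic, prepending at `inherited` time still wraps a `#call` defined later in the subclass body, and the operation's public interface always returns either the happy-path value or the failure.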
That leaves us with a single option: calling a hook for the failure case:
```ruby
class CreateUser < Dry::Operation
  def call(input)
    attrs = step validate(input)
    user = step persist(attrs)
    assign_initial_roles(user)
    step send_welcome_email(user)
    user
  end

  private

  def on_failure(failure)
    log_failure(failure)
  end
end
```
Instead of allowing the registration of multiple hooks, it is better to allow a single one where users can dispatch to other methods if needed. This approach allows us to skip dealing with hook ordering and makes the flow more linear.
There is no need, at least for now, to make the hook method name configurable; `on_failure` is sufficient.
It's worth noting that we now allow multiple methods to be prepended, as in `operate_on :call, :run`. Therefore, we need a way to specify which hook to call for a given prepended method. We can achieve this by providing a second argument to `on_failure` when the method is defined with arity 2:
```ruby
def on_failure(result, method)
  case method
  when :call
    do_something(result)
  when :run
    do_something_else(result)
  end
end
```
Leaving aside the interface for now, we have two architectural options:
- Wrap the entire `#steps` call in a transaction:
```ruby
class CreateUser < Dry::Operation
  use_db_transaction

  def call(input)
    attrs = step validate(input)
    user = step persist(attrs)
    assign_initial_roles(user)
    step send_welcome_email(user)
    user
  end
end
```
Benefits:
- It supports composing operation classes within a single transaction: `CreateUser.new >> CreatePost.new`
Drawbacks:
- It wraps potentially expensive operations in a transaction, such as `send_welcome_email` in the example.
- It is suboptimal, though not critical, to wrap `validate` in a transaction.
We find the drawbacks to be unacceptable. If we were to support this option, we would need to use hooks for the setup and success cases:
```ruby
class CreateUser < Dry::Operation
  use_db_transaction

  # Receives the output of the #setup hook
  def call(attrs)
    user = step persist(attrs)
    assign_initial_roles(user)
    user
  end

  private

  def setup(input)
    step validate(input)
  end

  def on_success(user)
    step send_welcome_email(user)
  end
end
```
In this case, the introduced indirection is also considered unacceptable. While we need to support a hook for the `on_failure` scenario, dry-operation should prioritize readability when focusing on the happy path.
- Explicitly wrap the steps that need to run in a transaction:
```ruby
class CreateUser < Dry::Operation
  use_db_transaction

  def call(input)
    attrs = step validate(input)
    transaction do
      user = step persist(attrs)
      assign_initial_roles(user)
    end
    step send_welcome_email(user)
    user
  end
end
```
Benefits:
- It is explicit.
- It enhances readability.
Drawbacks:
- It requires manual setup.
- It makes it impossible to compose operation classes within a single transaction.
In this case, the drawbacks are considered acceptable. There is no way to completely conceal the fact that we are dealing with a database transaction, and developers need to consider it. Furthermore, one of the key concepts of dry-operation is the decoupling of individual operations. Therefore, we should encourage the composition of operations rather than groups of operations in the documentation.
A `Dry::Operation.db_adapter` method could be sufficient to configure how `Dry::Operation#transaction` works.
We can think of three ORM-style libraries we want to support: ROM, Sequel, and ActiveRecord. Different libraries might require different options, and we can use different option names in any case. For example:
```ruby
class CreateUser < Dry::Operation
  db_adapter :rom, container: Deps[:rom], gateway: :default

  # ...
end
```
- Support the `#on_failure` hook.
- Support the `#transaction` method through `.db_adapter`.
- Support ROM.
- Support AR.
- Support Sequel.
A few thoughts here. I won't be able to go into perfect detail here, but I figure it's better to get something shared now than keep you waiting any longer, Marc — sorry about that!
Setup of DB adapters
This feels ungainly to me, particularly the part where we're having to pass in a dependency (the rom container) at the class-level:
(Minor note: `Deps[:rom]` won't do what you want here; you probably want something like `AppOrSlice["persistence.rom"]`, with the exact component name to be determined as we go about building Hanami 2.2.)

With Hanami, we're encouraging users to provide dependencies at the instance level, and this is the opposite of that.
So I wonder if there's some way to allow these adapters to work with instance-level deps...
Could it be stripped back such that it's more like this:
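Something along these lines, perhaps (a hedged sketch with all names assumed, where the adapter is nothing more than an includable module):

```ruby
# Sketch: the adapter is just a module that expects a #rom reader on the
# instance and builds a #transaction method on top of it. (All names here
# are assumptions for illustration.)
module ROMTransactions
  def transaction(&block)
    rom.gateways[:default].transaction(&block)
  end
end

class CreateUser
  include ROMTransactions

  attr_reader :rom

  # The rom-like object arrives as an ordinary instance-level dependency,
  # instead of being configured on the class.
  def initialize(rom:)
    @rom = rom
  end
end

# Stand-ins for a real ROM container, just to exercise the shape:
FakeGateway = Class.new do
  def transaction
    yield
  end
end
FakeROM = Struct.new(:gateways)
```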
This would basically be a glorified `include` of a module (perhaps we could actually just allow it to be an `include` as our first iteration?) that adds code expecting a `rom`-like object to be available as an instance method, and adds methods that use it to expose a nice `transaction do` interface to the users writing these operation classes.

Then, in Hanami apps we could add some separate code that makes sure this rom object is provided automatically as an instance-level dependency.
FWIW, this arrangement gets us closer to the kind of idea that @jodosha pitched in his response above, while making sure we still have dry-operation catch the transaction failures and raise errors as required.
To sum it up: I'd like our DB adapters at their core not to manage state: they should simply add the behaviour that expects the relevant database connection objects to be there, and then use them as appropriate. The job of providing those database connection objects is then a layer above these DB adapters.
In terms of that extra layer: to make dry-operation easy-to-use in a wider range of situations, we may indeed want to bundle some code that finds and loads those connection objects, and we might end up adding hooks to enable that as part of that single class-level API, but in putting this together, we should make sure these two functions are cleanly separated internally, and that it's possible to exercise one without the other, for the case you want to take greater control of providing the database connection object (like we would do in Hanami apps).
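In code, that separation might look something like this (a sketch under assumed names: one module that only *uses* a connection, plus an optional layer that *locates* one):

```ruby
# Layer 1: the adapter only uses a connection it expects to find on the
# instance; it holds no state of its own.
module SequelAdapter
  def transaction(&block)
    db.transaction(&block)
  end
end

# Layer 2 (optional): code that provides the connection, e.g. by falling
# back to a globally registered one when none was injected.
ConnectionRegistry = {}

module DefaultConnectionProvider
  def db
    @db || ConnectionRegistry.fetch(:default)
  end
end

class CreateUser
  include SequelAdapter
  include DefaultConnectionProvider

  def initialize(db: nil)
    @db = db # Hanami apps would inject this; others can rely on the registry
  end
end

# Stub connection to exercise both paths:
FakeDB = Struct.new(:name) do
  def transaction
    yield
  end
end
```

Because the two modules are independent, an app that wants full control (like Hanami) can include only the adapter and inject the connection itself.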
In fact, when you boil it down to this being a "module that exposes useful API to users that fits with dry-operation's expectations for failure handling", there's nothing in here that's intrinsically connected to "databases". So I'd encourage us to think about naming this feature so that it could be used for a wider range of use cases. Adapters? Extensions?
Failure hook
I'm fine with the idea of us having this built into dry-operation. Like you outline, this is a necessary hook for us to provide given that our `steps do` behaviour will be prepended over the user's entry point method.

Like @jodosha says, I think we should encourage users to localise their failure handling as much as possible (i.e. directly in the step methods wherever possible, and in the case of result-returning methods from injected dependencies, by wrapping the call to that dependency in another local method, which can catch failures and adjust as required).
But in the case where these direct approaches cannot be taken, the user has `on_failure` as a final hook to inject their customisations before control is returned to the caller.

Some thoughts:
- `def on_failure(value)` — I think this is what you're suggesting in your initial proposal, but I wanted to confirm just in case :)
- An alternative would be `def on_<method_name>_failure`, e.g. `on_call_failure` and `on_run_failure` in the case of classes where both `call` and `run` are declared as steps methods. But I think this would make this feature a lot harder to communicate and document to users, so having the single well-known name makes sense to me.

While I was here, I realised that one thing we're losing from dry-transaction is having a standard failure structure that's exposed to the user. dry-transaction provided a "step matcher" API when calling the transaction and yielding a block:
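From memory, it behaved along these lines (a stand-in sketch of the matcher idea, not dry-transaction's actual implementation; the first matching handler wins):

```ruby
# Minimal stand-in for the step-matcher idea (illustrative only): the
# caller registers handlers, and the first one matching the result's
# state (and step name, if given) runs.
class StepMatcher
  def initialize(state, step_name, value)
    @state = state
    @step_name = step_name
    @value = value
    @matched = false
  end

  def success
    return if @matched || @state != :success
    @matched = true
    yield(@value)
  end

  def failure(name = nil)
    return if @matched || @state != :failure
    return unless name.nil? || name == @step_name
    @matched = true
    yield(@value)
  end
end

# Usage in the dry-transaction block style, here simulating a
# transaction whose :validate step failed:
handled = []
matcher = StepMatcher.new(:failure, :validate, ["name is missing"])

matcher.success { |user| handled << "Created #{user}" }
matcher.failure(:validate) { |errors| handled << "Validation failed: #{errors.join(", ")}" }
matcher.failure { |error| handled << "Something went wrong: #{error}" }
```

The key property is that the caller can branch on *which* step failed, which is exactly the information our unnamed steps no longer carry.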
I'm not proposing we do this for dry-operation, but rather just pointing out that we've lost this kind of information, because our dry-operation steps don't have names, whereas with dry-transaction's class-level DSL, we gave every step a name. I do wonder if we might give ourselves this ability as an opt-in addition somehow...
Maybe? Or some variation on this, possibly?
Then we could not only pass the failure value to `on_failure`, but also the name (or possibly the full set of kwargs, in the case above?) of the failed step, which would give the code in `on_failure` the ability to do even more expressive things, like providing a more structured failure.

No matter what, I want to make sure we still support the most minimal usage, so just `step validate(input)`, but this discussion on the failure hook does make me wonder about our possibilities for "progressive enhancement" here. 🤔
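For instance (a purely speculative sketch: the `label:` keyword and everything else here is hypothetical, not an existing dry-operation API), labelled steps could feed both the failure value and the failed step's name into `on_failure`:

```ruby
Failure = Struct.new(:step_name, :error)

class Operation
  # Hypothetical labelled-step helper: unwraps a success value, or
  # aborts the flow while remembering which step failed.
  def step(result, label: nil)
    result.is_a?(Failure) ? throw(:failure, Failure.new(label, result.error)) : result
  end

  def call(input)
    result = catch(:failure) { run(input) }
    on_failure(result.error, result.step_name) if result.is_a?(Failure)
    result
  end

  def on_failure(_error, _step_name); end
end

class CreateUser < Operation
  def run(input)
    attrs = step validate(input), label: :validate
    step persist(attrs), label: :persist
  end

  def validate(input)
    input[:name] ? input : Failure.new(nil, "name is missing")
  end

  def persist(attrs)
    "user #{attrs[:name]}"
  end

  # The hook now receives both the failure and the failed step's name,
  # enabling a more structured response.
  def on_failure(error, step_name)
    @last_failure = { step: step_name, error: error }
  end

  attr_reader :last_failure
end
```

The label stays strictly opt-in: `step validate(input)` with no label keeps working, and `step_name` is simply `nil` in that case.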