- Leaders within a team who understand the centrality of the domain can put their software project back on course.
- A software developer is like a researcher: both have the responsibility to tackle the messiness of the real world through a complicated domain that has never been formalized.
- There are systematic ways of thinking that developers can employ to search for insight and produce effective models.
- Binding the model and the implementation.
- Cultivating a language based on the model.
- Developing a knowledge-rich model.
- Distilling the model.
- Brainstorming and experimenting.
This kind of crunching turns the knowledge of the team into valuable models.
Effective domain modelers are knowledge crunchers.
Try organizing one idea after another, searching for the simple view that makes sense of the mass.
Knowledge crunching is not a solitary activity.
The interaction between team members changes as all members crunch the model together.
The constant refinement of the domain model forces the developers to learn the important principles of the business they are assisting, rather than to produce functions mechanically.
Because the domain experts are feeding into it, the model reflects deep knowledge of the business. The abstractions are true business principles.
These models are never perfect; they evolve.
They must be practical and useful in making sense of the domain.
They must be rigorous enough to make the application simple to implement and understand.
We never know enough.
Domains that seem less technically daunting can be deceiving: we don’t realize how much we don’t know. This ignorance leads us to make false assumptions.
Highly productive teams grow their knowledge consciously, practicing continuous learning.
For developers, this means improving technical knowledge [...] But it also includes serious learning about the specific domain they are working in.
These self-educated team members form a stable core of people [...] The accumulated knowledge in the minds of this core team makes them more effective knowledge crunchers.
Knowledge crunching is an exploration, and you can’t know where you will end up.
The vocabulary of that UBIQUITOUS LANGUAGE includes the names of classes and prominent operations.
The LANGUAGE includes terms to discuss rules that have been made explicit in the model.
The model relationships become the combinatory rules all languages have.
The model-based language should be used among developers to describe not only artifacts in the system, but tasks and functionality.
The more pervasively the language is used, the more smoothly understanding will flow.
As gaps are found in the language, new words will enter the discussion. These changes to the language will be recognized as changes in the domain model.
One of the best ways to refine a model is to explore with speech [...]
It is vital that we play around with words and phrases, harnessing our linguistic abilities to the modeling effort [...]
Describe scenarios out loud [...]
Find easier ways to say what you need to say [...]
[...] take those new ideas back down to the diagrams and code.
- Be concise
When domain experts use this LANGUAGE in discussions with developers or among themselves, they quickly discover areas where the model is inadequate for their needs or seems wrong to them.
The domain experts can use the language of the model in writing use cases, and can work even more directly with the model by specifying acceptance tests.
These dialects should not contain alternative vocabularies for the same domain that reflect distinct models.
Diagrams are a means of communication and explanation, and they facilitate brainstorming. They serve these ends best if they are minimal.
A well-written implementation should be transparent, revealing the model underlying it.
The diagram’s purpose is to help communicate and explain the model.
A document shouldn’t try to do what the code already does well.
Other documents need to illuminate meaning, to give insight into large-scale structures, and to focus attention on core elements.
By keeping documents minimal and focusing them on complementing code and conversation, documents can stay connected to the project.
- Diagrams need to be minimal
- Documents need to be minimal
Well-written code can be very communicative, but the message it communicates is not guaranteed to be accurate.
It takes fastidiousness to write code that doesn’t just do the right thing but also says the right thing.
To communicate effectively, the code must be based on the same language used to write the requirements - the same language that the developers speak with each other and with domain experts.
[...] one model should underlie implementation, design, and team communication.
Explanatory models also present the domain in a way that is simply different, and multiple, diverse explanations help people learn.
Tightly relating the code to an underlying model gives the code meaning and makes the model relevant.
Whatever the cause, software that lacks a concept at the foundation of its design is, at best, a mechanism that does useful things without explaining its actions.
When a model doesn't seem to be practical for implementation, we must search for a new one.
When a model doesn't faithfully express the key concepts of the domain, we must search for a new one.
The imperative to relate the domain model closely to the design adds one more criterion for choosing the more useful models out of the universe of possible models.
The single model [between design and implementation] reduces the chances of error [...].
The model has to be carefully crafted to make for a practical implementation.
Knowledge crunchers explore model options and refine them into practical software elements.
Development becomes an iterative process of refining the model, the design, and the code as a single activity.
[...] the real breakthrough of object design comes when the code expresses the concepts of a model.
When a design is based on a model that reflects the basic concerns of the users and domain experts, the bones of the design can be revealed to the user to a greater extent than with other design approaches.
Revealing the model gives the user more access to the potential of the software and yields consistent, predictable behavior.
All teams have specialized roles for members, but overseparation of responsibility for analysis, modeling, design, and programming interferes with MODEL-DRIVEN DESIGN.
The overall effect of a model can be very sensitive to details [...]
[...] certain aspects of the model turned out to be wildly inefficient on our technology platform [...] [, however] Relatively minor changes could have fixed the problem, but by then it didn't matter. The developers were well on their way to writing software that did work [...] [, because] They could no longer risk being saddled with dictates of the architect in the ivory tower.
If the people who write the code do not feel responsible for the model, or don’t understand how to make the model work for an application, then the model has nothing to do with the software.
If developers don’t realize that changing code changes the model, then their refactoring will weaken the model rather than strengthen it.
Finally, the knowledge and skills of experienced designers won’t be transferred to other developers if the division of labor prevents the kind of collaboration that conveys the subtleties of coding a MODEL-DRIVEN DESIGN.
- Make developers feel responsible for the model.
- Make developers understand how to make the model work for an application.
- Make developers realize that changing code changes the model.
With MODEL-DRIVEN DESIGN, a portion of the code is an expression of the model;
Any technical person contributing to the model must spend some time touching the code [...]
Anyone responsible for changing the code must learn to express a model through the code.
Every developer must be involved in some level of discussion about the model and have contact with domain experts.
The sharp separation of modeling and programming doesn't work [...]
DOMAIN-DRIVEN DESIGN puts a model to work to solve problems for an application.
Through knowledge crunching, a team distills a torrent of chaotic information into a practical model.
A MODEL-DRIVEN DESIGN intimately connects the model and the implementation.
The UBIQUITOUS LANGUAGE is the channel for all that information to flow between developers, domain experts, and the software.
The result is software that provides rich functionality based on fundamental understanding of the core domain.
[...] practical design and implementation of a model's individual elements can be relatively systematic.
Defining model elements according to certain distinctions sharpens their meanings.
We need to decouple the domain objects from other functions of the system, so we can avoid confusing the domain concepts with other concepts related only to software technology [...]
Software programs involve design and code to carry out many different kinds of tasks.
When the domain-related code is diffused through such a large amount of other code, it becomes extremely difficult to see and to reason about. Superficial changes to the UI can actually change business logic.
With all the technologies and logic involved in each activity, a program must be kept very simple or it becomes impossible to understand.
Creating programs that can handle very complex tasks calls for separation of concerns, allowing concentration on different parts of the design in isolation.
The value of layers is that each specializes in a particular aspect of a computer program. This specialization allows more cohesive designs of each aspect, and it makes these designs much easier to interpret.
[...] the crucial separation of the domain layer that enables MODEL-DRIVEN DESIGN.
Develop a design within each layer that is cohesive and that depends only on layers below.
Follow standard architectural patterns to provide loose coupling to the layers above.
Concentrate all the code related to the domain model in one layer and isolate it from the user interface, application, and infrastructure code.
This allows a model to evolve to be rich enough and clear enough to capture essential business knowledge and put it to work.
Separating the domain layer from the infrastructure and user interface layers allows a much cleaner design of each layer.
Isolated layers are much less expensive to maintain, because they tend to evolve at different rates and respond to different needs.
The separation also helps with deployment in distributed systems, by allowing different layers to be placed flexibly in different servers or clients [...]
Layers are meant to be loosely coupled, with design dependencies in only one direction.
Upper layers can use or manipulate elements of lower ones straightforwardly by calling their public interfaces [...] But when an object of a lower level needs to communicate upward, we need another mechanism, drawing on architectural patterns for relating layers such as callbacks or OBSERVERS.
The infrastructure layer usually does not initiate action in the domain layer. Being "below" the domain layer, it should have no specific knowledge of the domain it is serving.
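A minimal sketch of this layering in Java, using hypothetical shipping-style names (Cargo, CargoEventListener, BookingApplicationService are not from the text): the domain layer contains only business logic and communicates upward through a callback interface it declares, in the OBSERVER style mentioned above.

```java
// --- domain layer: no UI, persistence, or messaging code here ---
interface CargoEventListener {            // callback declared by the domain layer
    void cargoRerouted(String trackingId);
}

class Cargo {
    private final String trackingId;
    private final CargoEventListener listener;

    Cargo(String trackingId, CargoEventListener listener) {
        this.trackingId = trackingId;
        this.listener = listener;
    }

    void reroute() {
        // ... pure business logic ...
        listener.cargoRerouted(trackingId);   // notify upward via the OBSERVER-style callback
    }
}

// --- application layer: coordinates domain objects and infrastructure SERVICES ---
class BookingApplicationService implements CargoEventListener {
    @Override
    public void cargoRerouted(String trackingId) {
        // e.g. ask an infrastructure SERVICE to send a notification
    }
}
```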
The application and domain layers call on the SERVICES provided by the infrastructure layer. But not all infrastructure comes in the form of SERVICES callable from the higher layers.
When applying a framework, the team needs to focus on its goal: building an implementation that expresses a domain model and uses it to solve important problems.
A lot of the downside of frameworks can be avoided by applying them selectively to solve difficult problems without looking for a one-size-fits-all solution. Judiciously applying only the most valuable of framework features reduces the coupling of the implementation and the framework, allowing more flexibility in later design decisions. More important, given how very complicated many of the current frameworks are to use, this minimalism helps keep the business objects readable and expressive.
[...] we must guard against our enthusiasm for technical solutions; elaborate frameworks can also straitjacket application developers.
The domain model is a set of concepts. The “domain layer” is the manifestation of that model and all directly related design elements. The design and implementation of business logic constitute the domain layer. In a MODEL-DRIVEN DESIGN, the software constructs of the domain layer mirror the model concepts.
Isolating the domain implementation is a prerequisite for domain-driven design.
[...] the best part of isolating the domain is getting all that other stuff out of the way so that we can really focus on the domain design.
Focus on making distinctions among the three patterns of model elements that express the model: ENTITIES, VALUE OBJECTS, and SERVICES.
Defining objects that clearly follow one pattern or the other makes the objects less ambiguous and lays out the path toward specific choices for robust design.
A SERVICE is something that is done for a client on request. SERVICES emerge in the domain when some activity is modeled that corresponds to something the software must do, but does not correspond with state.
The design has to specify a particular traversal mechanism whose behavior is consistent with the association in the model.
There are at least three ways of making associations more tractable.
- Imposing a traversal direction
- Adding a qualifier, effectively reducing multiplicity
- Eliminating nonessential associations
Understanding the domain may reveal a natural directional bias.
Pragmatically, we can reduce the relationship to a unidirectional association.
Very often, deeper understanding leads to a “qualified” relationship.
Let’s refine the model by qualifying the association [...] reducing its multiplicity.
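A minimal sketch of a qualified association, assuming a brokerage-account example with hypothetical class and method names: instead of a one-to-many collection that must be searched, the association is keyed by stock symbol, which reduces the multiplicity to at most one per key.

```java
import java.util.HashMap;
import java.util.Map;

class Investment {
    final String stockSymbol;
    final long quantity;

    Investment(String stockSymbol, long quantity) {
        this.stockSymbol = stockSymbol;
        this.quantity = quantity;
    }
}

class BrokerageAccount {
    // the qualifier (stock symbol) is built into the association itself
    private final Map<String, Investment> investments = new HashMap<>();

    void record(Investment investment) {
        investments.put(investment.stockSymbol, investment);
    }

    Investment investmentFor(String stockSymbol) {
        return investments.get(stockSymbol);   // at most one Investment per symbol
    }
}
```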
Carefully distilling and constraining the model’s associations will take you a long way toward a
MODEL-DRIVEN DESIGN
.
Many objects are not fundamentally defined by their attributes, but rather by a thread of continuity and identity.
Some objects are not defined primarily by their attributes. They represent a thread of identity that runs through time and often across distinct representations.
Mistaken identity can lead to data corruption.
An object defined primarily by its identity is called an ENTITY.
An ENTITY is anything that has continuity through a life cycle and distinctions independent of attributes [...]
But sometimes the identity is important only in the context of the system, such as the identity of a computer process.
When an object is distinguished by its identity, rather than its attributes, make this primary to its definition in the model. Keep the class definition simple and focused on life cycle continuity and identity. Define a means of distinguishing each object regardless of its form or history. Be alert to requirements that call for matching objects by attributes. Define an operation that is guaranteed to produce a unique result for each object, possibly by attaching a symbol that is guaranteed unique. This means of identification may come from the outside, or it may be an arbitrary identifier created by and for the system, but it must correspond to the identity distinctions in the model. The model must define what it means to be the same thing.
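A minimal sketch of an ENTITY whose definition centers on identity rather than attributes (Customer and customerId are hypothetical names, not from the text): equality is based only on the identifier, so the object keeps its identity even as its attributes change over its life cycle.

```java
import java.util.Objects;

class Customer {
    private final String customerId;   // identity: assigned once, never changed
    private String name;               // an attribute that may change over the life cycle

    Customer(String customerId, String name) {
        this.customerId = customerId;
        this.name = name;
    }

    void rename(String newName) {
        this.name = newName;           // identity is unaffected by attribute changes
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof Customer)) return false;
        return customerId.equals(((Customer) other).customerId);   // identity only
    }

    @Override
    public int hashCode() {
        return Objects.hash(customerId);
    }
}
```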
[...] the most basic responsibility of ENTITIES is to establish continuity so that behavior can be clear and predictable.
Rather than focusing on the attributes or even the behavior, strip the ENTITY object’s definition down to the most intrinsic characteristics, particularly those that identify it or are commonly used to find or match it.
Add only behavior that is essential to the concept and attributes that are required by that behavior.
Beyond that, look to remove behavior and attributes into other objects associated with the core ENTITY.
Beyond identity issues, ENTITIES tend to fulfill their responsibilities by coordinating the operations of objects they own.
Each ENTITY must have an operational way of establishing its identity with another object [...] An identifying attribute must be guaranteed to be unique [...]
Sometimes certain data attributes, or combinations of attributes, can be guaranteed or simply constrained to be unique within the system. This approach provides a unique key for the ENTITY.
When there is no true unique key made up of the attributes of an object, another common solution is to attach to each instance a symbol (such as a number or a string) that is unique within the class.
The generation algorithm must guarantee uniqueness within the system, which can be a challenge with concurrent processing and in distributed systems.
What does it mean for two objects to be the same thing? It is easy enough to stamp each object with an ID, or to write an operation that compares two instances, but if these IDs or operations don’t correspond to some meaningful distinction in the domain, they just confuse matters more.
Many objects have no conceptual identity. These objects describe some characteristic of a thing.
Tracking the identity of ENTITIES is essential, but attaching identity to other objects can hurt system performance, add analytical work, and muddle the model by making all objects look the same.
Software design is a constant battle with complexity. We must make distinctions so that special handling is applied only where necessary.
However, if we think of this category of object as just the absence of identity, we haven’t added much to our toolbox or vocabulary. In fact, these objects have characteristics of their own and their own significance to the model. These are the objects that describe things.
VALUE OBJECTS are instantiated to represent elements of the design that we care about only for what they are, not who or which they are.
A VALUE OBJECT can be an assemblage of other objects.
VALUE OBJECTS can even reference ENTITIES.
VALUE OBJECTS are often passed as parameters in messages between objects. They are frequently transient, created for an operation and then discarded.
VALUE OBJECTS are used as attributes of ENTITIES (and other VALUES).
Treat the VALUE OBJECT as immutable.
We don’t care which instance we have of a VALUE OBJECT. This lack of constraints gives us design freedom we can use to simplify the design or optimize performance. This involves making choices about copying, sharing, and immutability.
In fact, the two Person objects might not need their own name instances. The same Name object could be shared between the two Person objects (each with a pointer to the same name instance) with no change in their behavior or identity. That is, their behavior will be correct until some change is made to the name of one person. Then the other person’s name would change also! To protect against this, in order for an object to be shared safely, it must be immutable: it cannot be changed except by full replacement.
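A minimal sketch of the Name from that example as an immutable VALUE OBJECT (the field names are assumptions): there are no setters, a "change" produces a new instance, and equality is based purely on the values, so instances can be shared or copied freely.

```java
final class Name {
    private final String first;
    private final String last;

    Name(String first, String last) {
        this.first = first;
        this.last = last;
    }

    // change by full replacement: a new instance instead of mutation
    Name withLast(String newLast) {
        return new Name(first, newLast);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Name)) return false;
        Name other = (Name) o;
        return first.equals(other.first) && last.equals(other.last);   // value equality
    }

    @Override
    public int hashCode() {
        return 31 * first.hashCode() + last.hashCode();
    }
}
```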
The same issues arise when an object passes one of its attributes to another object as an argument or return value. [...] The VALUE could be changed in a way that corrupts the owner, by violating the owner’s invariants. This problem is avoided either by making the passed object immutable, or by passing a copy.
Creating extra options for performance tuning can be important because VALUE OBJECTS tend to be numerous. [...] [, so] [...] we could share just one instance of an outlet and point to it a hundred times (an example of FLYWEIGHT [Gamma et al. 1995]). [...] such an optimization can make the difference between a usable system and one that slows to a crawl, choked on millions of redundant objects.
The economy of copying versus sharing depends on the implementation environment. Although copies may clog the system with huge numbers of objects, sharing can slow down a distributed system.
When a copy is passed between two machines, a single message is sent and the copy lives independently on the receiving machine. But if a single instance is being shared, only a reference is passed, requiring a message back to the object for each interaction.
Sharing is best restricted to those cases in which it is most valuable and least troublesome:
- When saving space or object count in the database is critical
- When communication overhead is low (such as in a centralized server)
- When the shared object is strictly immutable
[...] the lack of direct language support for a conceptual distinction does not mean that the distinction is not useful. It just means that more discipline is needed to maintain the rules that will be only implicit in the implementation.
When a VALUE OBJECT is designated immutable in the design, developers are free to make decisions about issues such as copying and sharing on a purely technical basis, secure in the knowledge that the application does not rely on particular instances of the objects.
Immutability is a great simplifier in an implementation, making sharing and reference passing safe. [...] If the value of an attribute changes, you use a different VALUE OBJECT, rather than modifying the existing one.
Even so, there are cases when performance considerations will favor allowing a VALUE OBJECT to be mutable.
- If the VALUE changes frequently
- If object creation or deletion is expensive
- If replacement (rather than modification) will disturb clustering (as discussed in the previous example)
- If there is not much sharing of VALUES, or if such sharing is forgone to improve clustering or for some other technical reason
Just to reiterate: If a VALUE’s implementation is to be mutable, then it must not be shared.
Defining VALUE OBJECTS and designating them as immutable is a case of following a general rule: Avoiding unnecessary constraints in a model leaves developers free to do purely technical performance tuning.
Explicitly defining the essential constraints lets developers tweak the design while keeping safe from changing meaningful behavior.
[...] bidirectional associations between two VALUE OBJECTS just make no sense.
In some cases, the clearest and most pragmatic design includes operations that do not conceptually belong to any object.
[...] when we force an operation into an object that doesn’t fit the object’s definition, the object loses its conceptual clarity and becomes hard to understand or refactor.
Complex operations can easily swamp a simple object, obscuring its role. And because these operations often draw together many domain objects, [...] the added responsibility will create dependencies on all those objects, tangling concepts that could be understood independently.
A SERVICE is an operation offered as an interface that stands alone in the model, without encapsulating state, as ENTITIES and VALUE OBJECTS do. SERVICES are a common pattern in technical frameworks, but they can also apply in the domain layer.
Unlike ENTITIES and VALUE OBJECTS, it is defined purely in terms of what it can do for a client.
A SERVICE should still have a defined responsibility, and that responsibility and the interface fulfilling it should be defined as part of the domain model.
Operation names should come from the UBIQUITOUS LANGUAGE or be introduced into it. Parameters and results should be domain objects.
SERVICES should be used judiciously and not allowed to strip the ENTITIES and VALUE OBJECTS of all their behavior.
A good SERVICE has three characteristics.
- The operation relates to a domain concept that is not a natural part of an ENTITY or VALUE OBJECT.
- The interface is defined in terms of other elements of the domain model.
- The operation is stateless.
Statelessness here means that any client can use any instance of a particular SERVICE without regard to the instance’s individual history.
[...] it may have side effects. But the SERVICE does not hold state of its own that affects its own behavior [...]
When a significant process or transformation in the domain is not a natural responsibility of an ENTITY or VALUE OBJECT, add an operation to the model as a standalone interface declared as a SERVICE.
It can be harder to distinguish application SERVICES from domain SERVICES.
If a SERVICE were devised to make appropriate debits and credits for a funds transfer, that capability would belong in the domain layer. Funds transfer has a meaning in the banking domain language, and it involves fundamental business logic. Technical SERVICES should lack any business meaning at all.
[...] to put the “transfer” operation on the Account object would be awkward, because the operation involves two accounts and some global rules.
We might like to create a Funds Transfer object to represent the two entries plus the rules and history around the transfer.
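A minimal sketch of that funds transfer as a domain SERVICE (Account, FundsTransfer, and the simplified long-cents amount are assumptions): the operation spans two accounts and a global rule, so it sits in a stateless SERVICE rather than on either Account, and it returns a Funds Transfer object recording the transaction.

```java
class Account {
    private long balanceInCents;

    void debit(long cents)  { balanceInCents -= cents; }
    void credit(long cents) { balanceInCents += cents; }
    long balanceInCents()   { return balanceInCents; }
}

class FundsTransfer {
    final Account from;
    final Account to;
    final long cents;

    FundsTransfer(Account from, Account to, long cents) {
        this.from = from;
        this.to = to;
        this.cents = cents;
    }
}

// Stateless: any instance behaves the same way, regardless of its history.
class FundsTransferService {
    FundsTransfer transfer(Account from, Account to, long cents) {
        if (from.balanceInCents() < cents) {
            throw new IllegalStateException("insufficient funds");   // a global rule
        }
        from.debit(cents);
        to.credit(cents);
        return new FundsTransfer(from, to, cents);
    }
}
```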
Medium-grained, stateless SERVICES can be easier to reuse in large systems because they encapsulate significant functionality behind a simple interface. Also, fine-grained objects can lead to inefficient messaging in a distributed system.
This pattern favors interface simplicity over client control and versatility. It provides a medium grain of functionality very useful in packaging components of large or distributed systems.
There are technical considerations, but cognitive overload is the primary motivation for modularity.
MODULES give people two views of the model: They can look at detail within a MODULE without being overwhelmed by the whole, or they can look at relationships between MODULES in views that exclude interior detail.
The MODULES in the domain layer should emerge as a meaningful part of the model, telling the story of the domain on a larger scale.
Low coupling between MODULES minimizes this cost, and makes it possible to analyze the contents of one MODULE with a minimum of reference to others that interact.
This high cohesion of objects with related responsibilities allows modeling and design work to concentrate within a single MODULE, a scale of complexity a human mind can easily handle.
MODULES and the smaller elements should coevolve, but typically they do not. [...] Letting the MODULES reflect changing understanding of the domain will also allow more freedom for the objects within them to evolve.
Like everything else in a domain-driven design, MODULES are a communications mechanism.
Choose MODULES that tell the story of the system and contain a cohesive set of concepts.
Seek low coupling in the sense of concepts that can be understood and reasoned about independently of each other.
Give the MODULES names that become part of the UBIQUITOUS LANGUAGE. MODULES and their names should reflect insight into the domain.
And when there has to be a trade-off, it is best to go with the conceptual clarity, even if it means more references between MODULES or occasional ripple effects when changes are made to a MODULE.
MODULES need to coevolve with the rest of the model. This means refactoring MODULES right along with the model and code. But this refactoring often doesn’t happen.
Whatever development technology the implementation will be based on, we need to look for ways of minimizing the work of refactoring MODULES, and minimizing clutter in communicating to other developers.
An example of a very useful framework standard is the enforcement of LAYERED ARCHITECTURE by placing infrastructure and user interface code into separate groups of packages, leaving the domain layer physically separated into its own set of packages.
Unless there is a real intention to distribute code on different servers, keep all the code that implements a single conceptual object in the same MODULE, if not the same object.
Use packaging to separate the domain layer from other code. Otherwise, leave as much freedom as possible to the domain developers to package the domain objects in ways that support their model and design choices.
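A minimal sketch of such packaging (all package and class names are hypothetical): the layers get separate package trees, and within the domain layer the packages are MODULES named after domain concepts, so the package structure itself tells the story of the domain.

```java
// com.example.app.ui              -- user interface layer
// com.example.app.application     -- application layer
// com.example.app.infrastructure  -- infrastructure layer
// com.example.app.domain.booking  -- domain layer, divided into MODULES by concept
// com.example.app.domain.routing

package com.example.app.domain.booking;

public class Booking {
    // domain logic for the booking concept only; no UI, persistence, or messaging code
}
```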
Resist the temptation to add anything to the domain objects that does not closely relate to the concepts they represent. These design elements have their job to do: they express the model. There are other domain-related responsibilities that must be carried out and other data that must be managed in order to make the system work, but they don’t belong in these objects.
Every object has a life cycle.
The challenges fall into two categories.
- Maintaining integrity throughout the life cycle
- Preventing the model from getting swamped by the complexity of managing the life cycle
This chapter will address these issues through three patterns. First, AGGREGATES tighten up the model itself by defining clear ownership and boundaries, avoiding a chaotic, tangled web of objects. This pattern is crucial to maintaining integrity in all phases of the life cycle.
[...] using FACTORIES to create and reconstitute complex objects and AGGREGATES, keeping their internal structure encapsulated.
Finally, REPOSITORIES address the middle and end of the life cycle, providing the means of finding and retrieving persistent objects while encapsulating the immense infrastructure involved.
Although REPOSITORIES and FACTORIES do not themselves come from the domain, they have meaningful roles in the domain design. These constructs complete the MODEL-DRIVEN DESIGN by giving us accessible handles on the model objects.
An AGGREGATE is a cluster of associated objects that we treat as a unit for the purpose of data changes.
Each AGGREGATE has a root and a boundary.
The boundary defines what is inside the AGGREGATE.
The root is a single, specific ENTITY contained in the AGGREGATE.
The root is the only member of the AGGREGATE that outside objects are allowed to hold references to, although objects within the boundary may hold references to each other.
ENTITIES other than the root have local identity, but that identity needs to be distinguishable only within the AGGREGATE, because no outside object can ever see it out of the context of the root ENTITY.
Invariants, which are consistency rules that must be maintained whenever data changes, will involve relationships between members of the AGGREGATE.
- The root ENTITY has global identity and is ultimately responsible for checking invariants.
- Root ENTITIES have global identity. ENTITIES inside the boundary have local identity, unique only within the AGGREGATE.
- Nothing outside the AGGREGATE boundary can hold a reference to anything inside, except to the root ENTITY.
- As a corollary to the previous rule, only AGGREGATE roots can be obtained directly with database queries.
- Objects within the AGGREGATE can hold references to other AGGREGATE roots.
- A delete operation must remove everything within the AGGREGATE boundary at once.
- When a change to any object within the AGGREGATE boundary is committed, all invariants of the whole AGGREGATE must be satisfied.
Cluster the ENTITIES and VALUE OBJECTS into AGGREGATES and define boundaries around each. Choose one ENTITY to be the root of each AGGREGATE, and control all access to the objects inside the boundary through the root.
Allow external objects to hold references to the root only. Transient references to internal members can be passed out for use within a single operation only. Because the root controls access, it cannot be blindsided by changes to the internals.
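A minimal sketch of an AGGREGATE in Java (Order, OrderLine, and the approval-limit invariant are hypothetical, not from the text): Order is the root ENTITY with global identity, OrderLine has only local identity and is hidden inside the boundary, and the root re-checks the whole-AGGREGATE invariant on every change.

```java
import java.util.ArrayList;
import java.util.List;

class Order {                                        // AGGREGATE root: global identity
    private static final long APPROVAL_LIMIT_CENTS = 100_000;

    private final String orderId;
    private final List<OrderLine> lines = new ArrayList<>();

    Order(String orderId) { this.orderId = orderId; }

    String orderId() { return orderId; }

    void addLine(String product, long priceCents) {
        lines.add(new OrderLine(lines.size() + 1, product, priceCents));
        checkInvariants();                           // invariant holds after every committed change
    }

    long totalCents() {
        return lines.stream().mapToLong(l -> l.priceCents).sum();
    }

    private void checkInvariants() {
        if (totalCents() > APPROVAL_LIMIT_CENTS) {
            throw new IllegalStateException("order exceeds approval limit");
        }
    }

    private static class OrderLine {                 // local identity, never exposed outside the boundary
        final int lineNumber;
        final String product;
        final long priceCents;

        OrderLine(int lineNumber, String product, long priceCents) {
            this.lineNumber = lineNumber;
            this.product = product;
            this.priceCents = priceCents;
        }
    }
}
```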
When creation of an object, or an entire AGGREGATE, becomes complicated or reveals too much of the internal structure, FACTORIES provide encapsulation.
A FACTORY encapsulates the knowledge needed to create a complex object or AGGREGATE. It provides an interface that reflects the goals of the client and an abstract view of the created object.
Shift the responsibility for creating instances of complex objects and AGGREGATES to a separate object, which may itself have no responsibility in the domain model but is still part of the domain design.
Provide an interface that encapsulates all complex assembly and that does not require the client to reference the concrete classes of the objects being instantiated. Create entire AGGREGATES as a piece, enforcing their invariants.
The two basic requirements for any good FACTORY are:
- Each creation method is atomic and enforces all invariants of the created object or AGGREGATE. A FACTORY should only be able to produce an object in a consistent state.
- The FACTORY should be abstracted to the type desired, rather than the concrete class(es) created.
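A minimal sketch of a standalone FACTORY meeting both requirements (Customer, RegisteredCustomer, and CustomerFactory are hypothetical names): creation is atomic, the invariant is enforced before the object exists, and the client sees only the abstract Customer type, not the concrete class.

```java
interface Customer {
    String customerId();
}

class RegisteredCustomer implements Customer {
    private final String customerId;
    private final String name;

    RegisteredCustomer(String customerId, String name) {
        this.customerId = customerId;
        this.name = name;
    }

    public String customerId() { return customerId; }
}

class CustomerFactory {
    private int nextId = 1;

    Customer register(String name) {
        if (name == null || name.isBlank()) {          // invariant enforced atomically at creation
            throw new IllegalArgumentException("a customer must have a name");
        }
        String id = "C-" + nextId++;                    // the FACTORY controls identity assignment
        return new RegisteredCustomer(id, name);        // abstracted to the type desired
    }
}
```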
[...] if you needed to add elements inside a preexisting AGGREGATE, you might create a FACTORY METHOD on the root of the AGGREGATE. This hides the implementation of the interior of the AGGREGATE from any external client, while giving the root responsibility for ensuring the integrity of the AGGREGATE as elements are added [...]
The trade-offs favor a bare, public constructor in the following circumstances.
- The class is the type. It is not part of any interesting hierarchy, and it isn’t used polymorphically by implementing an interface.
- The client cares about the implementation, perhaps as a way of choosing a STRATEGY.
- All of the attributes of the object are available to the client, so that no object creation gets nested inside the constructor exposed to the client.
- The construction is not complicated.
- A public constructor must follow the same rules as a FACTORY: It must be an atomic operation that satisfies all invariants of the created object.
Avoid calling constructors within constructors of other classes. Constructors should be dead simple. Complex assemblies, especially of AGGREGATES, call for FACTORIES. The threshold for choosing to use a little FACTORY METHOD isn’t high.
When designing the method signature of a FACTORY, whether standalone or FACTORY METHOD, keep in mind these two points.
- Each operation must be atomic. You have to pass in everything needed to create a complete product in a single interaction with the FACTORY. You also have to decide what will happen if creation fails, in the event that some invariant isn’t satisfied. You could throw an exception or just return a null. To be consistent, consider adopting a coding standard for failures in FACTORIES.
- The FACTORY will be coupled to its arguments.
The safest parameters are those from a lower design layer.
Another good choice of parameter is an object that is closely related to the product in the model, so that no new dependency is being added.
So giving control to the AGGREGATE root and encapsulating the AGGREGATE’s internal structure is a good trade-off.
The FACTORY is coupled to the concrete class of the products; it does not need to be coupled to concrete parameters also.
Under some circumstances, there are advantages to placing invariant logic in the FACTORY and reducing clutter in the product.
It is especially unappealing with FACTORY METHODS attached to other domain objects.
VALUE OBJECTS are completely immutable. [...] [, then] In such cases, the FACTORY is a logical place to put invariants, keeping the product simpler.
ENTITY FACTORIES differ from VALUE OBJECT FACTORIES in two ways. VALUE OBJECTS are immutable; the product comes out complete in its final form. So the FACTORY operations have to allow for a full description of the product. ENTITY FACTORIES tend to take just the essential attributes required to make a valid AGGREGATE. Details can be added later if they are not required by an invariant.
Then there are the issues involved in assigning identity to an ENTITY — irrelevant to a VALUE OBJECT.
When the program is assigning an identifier, the FACTORY is a good place to control it.
A FACTORY used for reconstitution is very similar to one used for creation, with two major differences.
- An ENTITY FACTORY used for reconstitution does not assign a new tracking ID.
- A FACTORY reconstituting an object will handle violation of an invariant differently.
A FACTORY encapsulates the life cycle transitions of creation and reconstitution.
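A minimal sketch of those two differences, continuing the hypothetical CustomerFactory above: reconstitution reuses the stored identifier instead of assigning a new one, and it flags an invariant violation rather than simply refusing to rebuild the object.

```java
class CustomerReconstitutionFactory {
    Customer reconstitute(String storedId, String storedName) {
        String name = (storedName == null || storedName.isBlank())
                ? "(name missing)"                       // handle the violation instead of aborting
                : storedName;
        return new RegisteredCustomer(storedId, name);   // no new tracking ID is assigned
    }
}
```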
Associations allow us to find an object based on its relationship to another. But we must have a starting point for a traversal to an ENTITY or VALUE in the middle of its life cycle.
Whether to provide a traversal or depend on a search becomes a design decision, trading off the decoupling of the search against the cohesiveness of the association.
The right combination of search and association makes the design comprehensible.
The goal of domain-driven design is to create better software by focusing on a model of the domain rather than the technology.
Persistent VALUE OBJECTS are usually found by traversal from some ENTITY that acts as the root of the AGGREGATE that encapsulates them.
From this discussion, it is clear that most objects should not be accessed by a global search. It would be nice for the design to communicate those that do.
A REPOSITORY represents all objects of a certain type as a conceptual set (usually emulated). It acts like a collection, except with more elaborate querying capability. [...] This definition gathers a cohesive set of responsibilities for providing access to the roots of AGGREGATES from early life cycle through the end.
REPOSITORIES can implement a variety of queries that select objects based on whatever criteria the client requires.
A REPOSITORY lifts a huge burden from the client, which can now talk to a simple, intention-revealing interface, and ask for what it needs in terms of the model. To support all this requires a lot of complex technical infrastructure, but the interface is simple and conceptually connected to the domain model.
For each type of object that needs global access, create an object that can provide the illusion of an in-memory collection of all objects of that type. Set up access through a well-known global interface.
Provide REPOSITORIES only for AGGREGATE roots that actually need direct access. Keep the client focused on the model, delegating all object storage and access to the REPOSITORIES.
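A minimal sketch of a REPOSITORY for the hypothetical Order AGGREGATE sketched earlier (it assumes that Order's orderId() and totalCents() accessors; the query names are illustrative): the interface speaks in model terms, and the in-memory implementation is the kind of dummy that makes testing easy.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

interface OrderRepository {
    void add(Order order);
    Optional<Order> findById(String orderId);
    List<Order> findExceeding(long cents);            // a hard-coded, intention-revealing query
}

// Dummy implementation backed by an in-memory collection, e.g. for tests.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();

    public void add(Order order) {
        store.put(order.orderId(), order);
    }

    public Optional<Order> findById(String orderId) {
        return Optional.ofNullable(store.get(orderId));
    }

    public List<Order> findExceeding(long cents) {
        return store.values().stream()
                .filter(o -> o.totalCents() > cents)
                .toList();
    }
}
```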
REPOSITORIES have many advantages, including the following:
- They present clients with a simple model for obtaining persistent objects and managing their life cycle.
- They decouple application and domain design from persistence technology, multiple database strategies, or even multiple data sources.
- They communicate design decisions about object access.
- They allow easy substitution of a dummy implementation, for use in testing (typically using an in-memory collection).
Even a REPOSITORY design with flexible queries should allow for the addition of specialized hard-coded queries. They might be convenience methods that encapsulate an often-used query or a query that doesn’t return the objects themselves, such as a mathematical summary of selected objects. Frameworks that don’t allow for such contingencies tend to distort the domain design or get bypassed by developers.
The performance implications can be extreme when REPOSITORIES are used in different ways or work in different ways.
Developers need to understand the implications of using encapsulated behavior. That does not have to mean detailed familiarity with the implementation. Well-designed components can be characterized.
Implementation will vary greatly, depending on the technology being used for persistence and the infrastructure you have. The ideal is to hide all the inner workings from the client (although not from the developer of the client), so that client code will be the same whether the data is stored in an object database, stored in a relational database, or simply held in memory.
The REPOSITORY will delegate to the appropriate infrastructure services to get the job done. Encapsulating the mechanisms of storage, retrieval, and query is the most basic feature of a REPOSITORY implementation.
- Abstract the type.
- Take advantage of the decoupling from the client. [...] You can take advantage of this to optimize for performance, by varying the query technique or by caching objects in memory, freely switching persistence strategies at any time.
- Leave transaction control to the client. [...] It is tempting to commit after saving, for example, but the client presumably has the context to correctly initiate and commit units of work.
In general, don’t fight your frameworks. Seek ways to keep the fundamentals of domain-driven design and let go of the specifics when the framework is antagonistic. Look for affinities between the concepts of domain-driven design and the concepts in the framework.
If you have the freedom, choose frameworks, or parts of frameworks, that are harmonious with the style of design you want to use.
A FACTORY handles the beginning of an object’s life; a REPOSITORY helps manage the middle and the end.
The FACTORY makes new objects; the REPOSITORY finds old objects.
The client of a REPOSITORY should be given the illusion that the objects are in memory.
A FACTORY’s job is to instantiate a potentially complex object from data. If the product is a new object, the client will know this and can add it to the REPOSITORY, which will encapsulate the storage of the object in the database.