The git repo of the kata is here: https://github.com/sandromancuso/cleancoders_openchat/
The "starting-point" branch is where both implementations began: https://github.com/sandromancuso/cleancoders_openchat/tree/starting-point
- The ๐ฌ๐ง
"openchat-outside-in"
branch captures the tomato by tomato history of the London approach. - The ๐บ๐ธ
"openchat-unclebob"
branch captures the tomato by tomato history of the Chicago approach.
- Great command of the IntelliJ IDE (shortcuts, templates, refactoring moves, etc.). BTW, as a viewer I would have really appreciated seeing the actual keys being typed, so I would recommend a plugin like Presentation Assistant for the next video :)
- Almost all IDE commands are given with shortcuts, and most of them are customized, because IntelliJ does not assign a shortcut to some specific moves by default (e.g. splitting the window into two panes, moving files to the right/left pane, closing all files in a pane, etc.).
- Clear ideas on the design and layering of a webapp (where to put things and responsibilities).
- High frequency of commits.
- Having to write all the ITs / ATs before starting to write any UT / production code: to use the "ATs as a probe", you have to have the ATs in place before you start coding, don't you?
- BTW this also means that you actually commit code with red tests (=> broken build): I wonder if this approach could work when working in a team and on a "real project" (something you work on for weeks or months, not days).
- Mocking Spark objects (like `Request` or `Response`) means you have to know the Spark web framework in advance, because otherwise you could not know which methods Spark will really call in the end...
- Sandro's style is a bit erratic sometimes, in my view (e.g. jumping from one UT to another, from production code to ATs, filling the holes left behind); but if you can control this and keep the focus on what you're doing and what you have to do, his style is quite powerful (maybe I could suggest the use of a TODO list with the next things to do and what has been done).
- Related to the previous point, I see that Sandro often works and codes with many still-unresolved compilation errors, which I understand is part of his flow and approach, but sometimes it seems to me that this goes too far (e.g. you have syntax issues you cannot spot because everything is red...). Again, I think you need a strong focus not to get lost while dealing with many compilation errors.
- Sandro creates many small helper methods (`postsTextFrom`, `jsonContaining`, ...) upfront (they do not even compile at the beginning): this seems to come from his great experience and indeed from his style (he knows he'll need them, so he creates them as early as possible). I tend to prefer to let those helper methods emerge when refactoring on a green bar.
- I'm not fully convinced by the use of exceptions as a way to control flow... what are the alternatives?
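One alternative I can think of (this is my own sketch, not from the videos) is to make the failure part of the return type, e.g. with an `Optional` or a minimal Either-style result type:

```java
import java.util.Optional;

// Hypothetical sketch: failure becomes part of the method signature
// instead of an exception used for control flow.
final class Result<T> {
    private final T value;      // present on success
    private final String error; // present on failure

    private Result(T value, String error) { this.value = value; this.error = error; }

    static <T> Result<T> ok(T value) { return new Result<>(value, null); }
    static <T> Result<T> failure(String error) { return new Result<>(null, error); }

    boolean isOk() { return error == null; }
    Optional<T> value() { return Optional.ofNullable(value); }
    String error() { return error; }
}
```

The caller then has to inspect the result explicitly, which makes the failure path visible in the signature rather than in a `throws` clause.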
Unit test classes are named with the `Should` suffix instead of the classic `Test` suffix.
Test method names follow the snake_case convention (both for UTs and ITs).
So, for example, you have: `PostServiceShould`, `LoginAPIShould`, `UserRepositoryShould` (`inform_when_a_username_is_already_taken`, `return_user_matching_valid_credentials`, `return_all_users`, `detect_a_following_already_exists`, ...).
Integration test classes are named with an `IT_` prefix, e.g. `IT_RegistrationAPI`, `IT_UsersAPI`.
Extensive use of constants in the tests to stand for fixed values (e.g. `USER_ID`, `USER` for an instance of `User`, `POST_TEXT`, `POST_ID`, `DATE_TIME`, `PASSWORD`, `USER_CREDENTIALS`, ...) ("I normally create them as constants because I don't care what they are..." says Sandro).
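Putting these conventions together, a test class could look like this (a minimal stand-in repository of my own, not the kata's real code):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in domain and repository, just to illustrate the conventions.
class User {
    final String id;
    final String username;
    User(String id, String username) { this.id = id; this.username = username; }
}

class UserRepository {
    private final List<User> users = new ArrayList<>();
    void add(User user) { users.add(user); }
    boolean isUsernameTaken(String username) {
        return users.stream().anyMatch(u -> u.username.equals(username));
    }
}

// "Should" suffix, snake_case method names, constants for fixed values.
class UserRepositoryShould {
    static final String USER_ID = "user-id";   // "I don't care what they are"
    static final String USERNAME = "Alice";
    static final User USER = new User(USER_ID, USERNAME);

    static boolean inform_when_a_username_is_already_taken() {
        UserRepository repository = new UserRepository();
        repository.add(USER);
        return repository.isUsernameTaken(USERNAME);
    }
}
```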
Sandro prefers to use checked exceptions for "business-related" error conditions, exceptions from which you could recover: in those cases, a checked exception forces the layer above to deal with that condition.
E.g. the `PostsAPI` class needs to deal with an `InappropriateLanguageException`, because it's part of the logic (see the commit).
While he prefers to use runtime exceptions for error conditions from which you cannot recover.
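A sketch of that split, using the class names mentioned above (the method bodies and the "inappropriate word" check are made up by me, not taken from the kata):

```java
// Checked: a business condition the caller can recover from.
class InappropriateLanguageException extends Exception {}

class PostService {
    void createPost(String text) throws InappropriateLanguageException {
        if (text.contains("elephant"))  // stand-in for the real language check
            throw new InappropriateLanguageException();
        // ...store the post...
    }
}

// The compiler forces the layer above (the API) to deal with the condition.
class PostsAPI {
    String createPost(String text) {
        try {
            new PostService().createPost(text);
            return "201 CREATED";
        } catch (InappropriateLanguageException e) {
            return "400 BAD REQUEST";  // recover: report the error to the client
        }
    }
}
```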
It seems Sandro doesn't use interfaces when designing with TDD, meaning that when some new collaborator emerges while writing a unit test, its implementation will be a concrete class, not an interface (a different way is the "interface discovery" approach followed by Steve Freeman and Nat Pryce, explained in their book "Growing Object-Oriented Software, Guided by Tests").
I guess this is due to his "pragmatism" in doing TDD (see also how he avoids baby steps).
This is something I already saw with his way of solving the Bank Account Kata.
- The commit messages used by Sandro often contain the name of the class and methods implemented (e.g. `UserService.createUser(registrationData) implemented.`).
- The verb is in the past tense.
"When writing a test, I always start from the bottom: the assertion (this is what I expect to happen)." (=> ASSERT)
"Then I ask: 'what should I need to trigger this?' (=> ACT)."
"Then I think about the bare minimum I have to do in order to have everything in place (=> ARRANGE)."
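On a toy test (mine, not from the videos), the writing order he describes would be bottom-up:

```java
class Calculator {
    int add(int a, int b) { return a + b; }
}

class CalculatorShould {
    // The comments mark the order in which the lines are *written*, bottom-up.
    static boolean add_two_numbers() {
        Calculator calculator = new Calculator(); // written 3rd: ARRANGE (bare minimum)
        int result = calculator.add(2, 3);        // written 2nd: ACT (what triggers it)
        return result == 5;                       // written 1st: ASSERT (what I expect)
    }
}
```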
The UTs drive the architecture, the ATs probe the architecture.
"Integration Tests / Acceptance Tests act as a probe of where we are and what we still need to finish in order to complete the feature. They show me what is left to be tested, and this guides me to the next step, going inside starting from the outside."
🇺🇸 Uncle Bob describing Sandro's style: "The ATs will probe how deeply you've gone; the UTs mock things out, while the ATs will block as soon as they hit something that the UTs are mocking and that still needs to be implemented."
⭐ => "The Acceptance Tests act as a 'north star', a guiding star: they make sure we don't forget anything, because in the ATs I use the real stuff, not mocks."
The ATs are used as a probe to measure the completeness of the UTs.
Classically, ATs probe the completion of behavior (when we're done).
In Sandro style, ATs probe the completion of architecture (what is the next layer to code, what is still missing in the architecture).
🇺🇸 Uncle Bob on Sandro's style: "You will only write the Unit Tests that cause the ATs to pass, respecting some architectural decisions, the layering, and deriving the architecture from the UTs."
🇺🇸 Uncle Bob: "Here the ATs are testing behaviourally and the UTs are driving the architecture."
Sandro: "The ATs guarantee that the app is providing the expected behaviour to the external world, while through the UTs I derive the design, layer by layer."
"The ATs only test the happy-path scenarios ('sunny day' scenarios). They're not trying to cover all the possible errors and corner cases, because then you would have a big overlap between UTs and ATs, and a combinatorial explosion of ATs (see the 'pyramid of tests')."
"In the ATs I want to verify the most important path only, and leave all the other cases to the UTs."
"When I have two pieces of data that together mean something (e.g. `UserCredentials` instead of two strings for username and password), I prefer to create that thing as an object instead of passing around those two pieces of data."
https://cleancoders.com/episode/comparativeDesign-episode-1/show
I headed to the `starting-point` branch.
Notes on the openchat project:
- Uses ReactJS as the framework for the frontend of the chat: this will be the client of the Java backend API.
- API documented with Swagger; to have the openchat Swagger API available on localhost (port 8080), just run:
  `docker run --rm -p 8080:8080 -v $PWD:/openchat -e SWAGGER_JSON=/openchat/APIs.yaml swaggerapi/swagger-ui`
- Uses different interesting tools in the Java project: `JUnitParams` (to have better parametrized tests), `AssertJ` and `RestAssured`; also `minimal-json` to generate JSON programmatically, and `BDD Mockito`.
- Take a look at how the Maven POM is defined and, more generally, how the project is organized.
- See also how Integration Tests (ITs) are defined in the POM and run with a specific Maven profile.
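I haven't copied the kata's POM here, but a common shape for "ITs under a dedicated Maven profile" (a sketch with guessed profile id, not the kata's exact configuration) binds the Failsafe plugin in a profile and matches the `IT_*` classes:

```xml
<!-- Sketch only: run with `mvn verify -P integration-tests` -->
<profile>
  <id>integration-tests</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <configuration>
          <includes>
            <include>**/IT_*.java</include>
          </includes>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>integration-test</goal>
              <goal>verify</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```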
🇬🇧 "When we call the method under test, you need to understand what the expected side-effect is: what should happen?"
"A test should always have a side-effect: focus on that when you start writing a test."
"I always start with an assertion: what do I want to test? What am I testing?"
"I don't like mocking classes that are not mine, but in this case (the Spark `Request` class) I don't have an option!"
"In my style of TDD, many design decisions happen and take place in the 'RED' step", which BTW takes quite some time in Sandro's way of doing outside-in TDD (I sometimes found this too long; personally I don't feel safe when I stay too long in a 'RED bar' condition).
"We already know that the class will violate the SRP, and we already know that we'll have to keep the delivery mechanism side (which is where `UsersAPI` belongs) separate from the domain side (where e.g. you persist the user, generate an ID for the user, validate the registration request, etc.), so we feel safe taking those design decisions up-front, instead of delaying them to a later moment (e.g. the 'refactoring' step)."
"It's all about confidence. If I don't know, I take small steps, I put all in the same class and then refactor later, when I'll be GREEN."
`UsersAPI` is effectively a controller.
"Design in the RED" ⭐
`UserJson` handles the serialization of the `User` domain class as a JSON string. It has been created in `UsersAPI` as a static inner class.
"Some people use annotations on their domain objects to be serialized as JSON: I don't like that. I don't want to bind the two things." 👍
"The JSON belongs to the delivery mechanism, to the contract we have with the external world.
While the `User` belongs to the domain, and cannot know how it will be transformed to be delivered somewhere."
Having annotations for JSON serialization on the `User` class, or methods like `toJSON()`, binds `User` and my domain to the delivery mechanism!
"Otherwise, you end up with a domain that breaks persistence and API when you e.g. rename a field, because you bound the domain object to the persistence layer from one side and to the presentation layer to the other side as well..."
"I prefer to take data from the delivery mechanism and extract / convert those data into the domain names / data, I don't want to have the same names and conventions."
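A minimal sketch of that separation (my own field names and hand-rolled JSON; the kata uses `minimal-json`): the domain `User` knows nothing about JSON, and the mapping lives in the delivery mechanism.

```java
// Domain object: no JSON annotations, no toJSON() method.
class User {
    final String id;
    final String username;
    User(String id, String username) { this.id = id; this.username = username; }
}

// Delivery-mechanism helper: owns the contract with the external world.
class UserJson {
    static String toJson(User user) {
        return "{\"id\":\"" + user.id + "\",\"username\":\"" + user.username + "\"}";
    }
}
```

Renaming a domain field now only breaks `UserJson`, which is exactly where the external contract is maintained.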
Builders are used in tests, but not in production code.
There's an IntelliJ shortcut to create a builder for a class.
Sandro then does three more things on the builder:
- adds an `aThing()` method returning an instance of, say, `ThingBuilder`, so that the chaining is more fluent,
- renames `createThing()` into `build()`,
- sets default values for all the fields of `Thing`, so that it is safe to use any instance created with the builder.
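Applied to a hypothetical `Post` builder, the three tweaks look like this (all names are mine, for illustration):

```java
class Post {
    final String userId;
    final String text;
    Post(String userId, String text) { this.userId = userId; this.text = text; }
}

class PostBuilder {
    // 3) default values for every field: any built instance is safe to use.
    private String userId = "default-user-id";
    private String text = "default text";

    // 1) a static aPost() entry point makes the chaining read fluently.
    static PostBuilder aPost() { return new PostBuilder(); }

    PostBuilder withUserId(String userId) { this.userId = userId; return this; }
    PostBuilder withText(String text) { this.text = text; return this; }

    // 2) the generated createPost() is renamed to build().
    Post build() { return new Post(userId, text); }
}
```

Usage then reads fluently in tests: `Post post = PostBuilder.aPost().withText("hello").build();`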
How should `UserService` behave when it's not able to create a new user?
- we could have the service return an optional `User`
- we could have the service throw an exception
- ...
Sandro prefers the second option.
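For completeness, the first option (the one Sandro does not pick) would look roughly like this; a sketch with made-up method bodies:

```java
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;

class User {
    final String username;
    User(String username) { this.username = username; }
}

class UserService {
    private final Set<String> takenUsernames = new HashSet<>();

    // Returns an empty Optional instead of throwing when the username is taken.
    Optional<User> createUser(String username) {
        if (!takenUsernames.add(username)) return Optional.empty();
        return Optional.of(new User(username));
    }
}
```

The trade-off: the caller must handle the empty case explicitly, but the "username taken" condition is no longer modeled as an exceptional event.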
"Mocking is not about testing, it is about designing the collaboration between classes"
https://cleancoders.com/episode/comparativeDesign-episode-2/show
Accidental duplication is when two pieces of code happen to be identical but may later change for different reasons, so you decide to keep them duplicated (e.g. the same logic in different APIs that may then change for different reasons).
"A repository is a helper class to the service: everyone that wants to access the domain should go through the service. BUT in this simple app it would be a simple delegation to the repo, so we decide pragmatically to skip that delegation and use the repository directly in the `LoginAPI`."
https://cleancoders.com/episode/comparativeDesign-episode-3/show
Uses `LocalDateTime.of()` and `DateTimeFormatter.ofPattern()`.
I don't like the method names `prepareOKResponse` and `prepareErrorResponse`.
Uncle Bob seems to prefer to write tests in a "top, going forward to the bottom" way (e.g. when creating an object he instantiates its dependencies first, then the object), while Sandro often follows a "bottom, going backwards to the top" way (e.g. starting from the assertion and going backwards, creating helper methods that don't exist yet, creating objects before instantiating their dependencies, ...), like a crayfish swimming backwards.
Tomato Gods! :-)
"In this style every test is a design decision"
`PostService` starts to have too many collaborators (four!).
The `PostRepository.add(post)` method has been left unimplemented intentionally, because we still don't have query methods to fetch posts.
๐ฌ๐ง Sandro seems to uses the term inform
(e.g. inform_when_a_text_has_inappropriate_language
) to name tests that test for boolean methods.
When unit testing `LanguageService`, Sandro and Uncle Bob say: "here we can do some baby steps / classic TDD approach, because we are at the end of the architectural chain".
So, when you don't have any collaborations with other objects it seems natural to use the classic state-based TDD approach.
The API developed so far:

| API | Service | Domain / Infrastructure | JSON |
| --- | --- | --- | --- |
| UsersAPI | UserService | UserRepository, User, IdGenerator, RegistrationData, UsernameAlreadyInUseException | UserJson |
| LoginAPI | | UserRepository, User, UserCredentials | |
| PostsAPI | PostService | PostRepository, Post, LanguageService, Clock, InappropriateLanguageException | PostJson |
https://cleancoders.com/episode/comparativeDesign-episode-4/show
It seems to me that Sandro sometimes takes a very long step to go from RED to GREEN (e.g. see `PostsAPIShould.return_a_json_containing_posts_from_a_given_user`), and when the GREEN arrives it feels kind of magic: you almost don't want to believe it, and you want to review all the code touched by the test to be sure you understand everything clearly (and I guess Uncle Bob has the same feeling :-)
In the test for `PostRepository`, the `add(post)` method is used as the setup for the test of the query method `postsBy(userId)`.
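In other words, the command doubles as the setup of the query test; schematically (simplified stand-in code, mine):

```java
import java.util.ArrayList;
import java.util.List;

class Post {
    final String userId;
    final String text;
    Post(String userId, String text) { this.userId = userId; this.text = text; }
}

class PostRepository {
    private final List<Post> posts = new ArrayList<>();

    void add(Post post) { posts.add(post); }  // the command used as test setup

    List<Post> postsBy(String userId) {       // the query under test
        List<Post> result = new ArrayList<>();
        for (Post post : posts)
            if (post.userId.equals(userId)) result.add(post);
        return result;
    }
}
```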
The posts of a given user should be displayed in reverse chronological order! So where should we put this behaviour? In the repository? Somewhere outside of it?
`return_posts_for_a_given_user_in_reverse_chronological_order` seems a very big test, which is verifying two things:
- that the repo is able to select the posts of a given user from all the posts
- that the repo will fetch those posts in a specific order
In Sandro's view, the size of a test step is all about his confidence: higher confidence means larger steps; lower confidence means tinier steps and smaller increments.
Object mothers! => Builders for tests!
I find it a bit ambiguous to talk about "reverse chronological order" when in fact what the first implementation does is more of a "reverse insertion order": what you are doing is fetching the elements (the user posts, in this case) in the opposite order of insertion, like in a LIFO stack. Maybe it's just a naming issue.
We decided to push this logic down to the repository.
"This behaviour (getting all the user posts in reverse chronological order) is not explicit, it's hidden in the `PostRepository`. It's a compromise."
Alternative: `PostRepository` returns a list of unordered posts and `PostService`, one level higher in the layering, sorts the posts in "reverse chronological order"; so we would move that knowledge up into the service layer.
Another alternative (my two cents here) would be to have the repository expose a finder method that lets the caller specify an ordering strategy (e.g. "reverse chronological order"): the sorting implementation would be performed down in the repository, while the selection of the strategy would be done up in the service.
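A sketch of that idea (all names are mine): the repository sorts with whatever `Comparator` the caller passes, and the service owns the choice of "reverse chronological order".

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class Post {
    final String userId;
    final String text;
    final LocalDateTime dateTime;
    Post(String userId, String text, LocalDateTime dateTime) {
        this.userId = userId; this.text = text; this.dateTime = dateTime;
    }
    // Convenience factory for the example: a post on a given day of Jan 2018.
    static Post postAt(String userId, String text, int dayOfMonth) {
        return new Post(userId, text, LocalDateTime.of(2018, 1, dayOfMonth, 10, 0));
    }
}

class PostRepository {
    private final List<Post> posts = new ArrayList<>();

    void add(Post post) { posts.add(post); }

    // The sorting happens down here, but the *strategy* is chosen by the caller.
    List<Post> postsBy(String userId, Comparator<Post> order) {
        List<Post> result = new ArrayList<>();
        for (Post post : posts)
            if (post.userId.equals(userId)) result.add(post);
        result.sort(order);
        return result;
    }
}

class PostService {
    // Truly chronological (by dateTime), not just reverse insertion order (LIFO).
    static final Comparator<Post> REVERSE_CHRONOLOGICAL =
        Comparator.comparing((Post post) -> post.dateTime).reversed();
}
```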
Uncle Bob's take on this: "The order of the posts is purely a UI issue, it has nothing to do with the internals of the application."
There's a timezone issue! The backend returns the post time with a UTC timezone (`+00`) while (I'm guessing here) the frontend doesn't handle the post time correctly; probably it doesn't know the backend runs with the Chicago timezone, where they have UTC-06 :-)
The UI filters out the current user from all the users to follow.
Uncle Bob: "Things are getting mechanical. No interesting design decisions showing up."
Following API: I would probably have preferred a different API, something like `POST /users/{followerId}/followings` instead of `POST /followings` with both the follower and followee ids in the body.
49:47: Why create the `Following` DTO up-front while writing the UT for the `FollowingAPI` class? Where is the need for that coming from? Do you have any clue that you will need that object right now? Moreover, in the UT you'll have to "unwrap" that object to get the followerId and the followeeId out of it...
59:00: => Gotcha! The need for the `Following` DTO emerges from the new version of the UT, where we verify the interaction with the `UserService`!
Again, the RED step seems indeed to take quite a long time (compared to Uncle Bob's golden rule of 'You are not allowed to write any more of a unit test than is sufficient to fail'). "A lot of people may say that this test has too much stuff in it, but this is design, we are designing, 'Designing in the RED'."
"It's not just writing the test, we are designing while we're writing the test"
https://cleancoders.com/episode/comparativeDesign-episode-5/show
Sandro creates test data with too-generic names, IMO. Specifically, the example is `FOLLOWING` in the `UserServiceShould` test: I prefer to use explicit values so that it is clear that those are test values, e.g. "anyFollowerId" instead of "followerId".
On `UserRepository`, I don't like using the same name (`add`) both for the method that adds users and for the one that adds following relationships.
Again, using exceptions to control flow (`FollowingAlreadyExistsException`): I wonder what alternatives there are to this strategy. Ideas?
Gotta use `doThrow` when mocking an exception thrown by a void method => gosh!
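The reason is a Mockito rule: `when(mock.voidMethod())` cannot even compile, because a void call is not an expression, so stubbing must use the `doThrow(...).when(mock)` form. A hedged sketch (requires Mockito on the classpath; the `UserService` method name is my guess, not the kata's):

```java
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;

UserService userService = mock(UserService.class);
// when(userService.addFollowing(...)).thenThrow(...) would not compile for a void method:
doThrow(new FollowingAlreadyExistsException())
        .when(userService).addFollowing("followerId", "followeeId");
```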
WIP => 13:00
TODO:
- review Sandro's test-and-code flow (e.g. in login and registration APIs)
- remove the AWT `List` implementation from the types suggested by IntelliJ
Hi @konfri, and thanks for taking the time to give me this feedback.
BTW I still haven't completed the full series: I've just watched Mancuso's session and still have to watch Uncle Bob's take 🙂
The episodes are really great indeed, and full of insights on how two different "minds" tackle the same problem... highly recommended!
Coming to your advice, I don't know if it was somehow clear from my words, but my observation was a bit "provocative"... I would feel very weird committing broken code (stuff that breaks some test), even if it reminds me of the "Broken Test" pattern by Kent Beck... so if I decided to do it, I would be very explicit with my teammates, or probably I would just keep the broken code unpushed, off the shared main branch.
I know Accelerate, though I never read it cover-to-cover, but just scanned it quickly on specific topics. I liked the kind of awareness it raised in the community!
Thanks again for sharing your advice!
Pietro