During my experience as a software developer and code reviewer, I have seen many bad practices and collected a lot of tips for building better products. I want to focus on a specific subset of them in order to prepare an enhanced Golang project template merging all these tips and practices.
What is a micro-service (again)?
My own personal definition
A micro-service is an internal, autonomous, observable, immutable, scalable, self-contained unit of deployment of a non-monolithic style architecture. Each micro-service is partially or completely responsible for a part of a business problem, orchestrated and exposed by a Service Gateway.
A micro-service doesn't mean micro "code" infrastructure, because if you consider a micro-service like a microcode, you will end up in a world of pico-services where each service is designed to run roughly one Assembly opcode. I can already hear the tech early-adopter hipsters while writing: "Oh dude, such a good idea, we could have distributed multi-architecture assembly services that could run on Kubernetes".
An opcode is an instruction executed by your CPU
Consider micro-services as deployment units, built and run to achieve a more complex service. Responsibilities are split for deployment reasons (heavy load, sometimes re-usability). So don't design micro-services without a full vision of your business service. I'm not saying to roll back your code to that gorgeous monolith, but consider that splitting it into micro-services should be driven by technical requirements (load balancing, etc.), not just by fancy hype.
Micro-services add network between problems!
And all micro-services are part of a micro-service style architecture that serves an identified target.
Splitting services and making them communicate over an external transport just adds external transport problems (connection resiliency, connection explosion, distributed concurrency, etc.) to the business problems that you are trying to solve.
Once again, you must know what to do BEFORE how to do it!
Architectural patterns
Clean code is not written by following a set of rules. You don't become a software craftsman by learning a list of heuristics. Professionalism and craftsmanship come from values that drive disciplines.
Robert C. Martin, Clean Code: A Handbook of Agile Software Craftsmanship
Architecture patterns are frameworks in which you must understand all the concepts, but use only what you need.
In software development, architecture should not be the first "hot" point to solve; you should be focused on features that add value to your product. In order to save time, many architectural patterns exist so that you can think about "what I want to do" instead of "how am I going to organize my code".
This is the main role of the software architect: balancing between adding features and keeping the code maintainable over time.
The Clean Architecture
The main objective of this architecture pattern is to design software with very low coupling, independent of technical implementations, and fully testable.
Entities: contain no business logic, only intrinsic validation;
Use Cases: contain business logic, completely independent of technical implementations;
Interface Adapters: contain bridges between the external world via presenters (HTTP, gRPC, Subscriber) and Use Cases;
All layers are bound by the dependency rule. The dependency rule tells us that inner layers should not depend on outer layers. That is, our business logic and application logic should not be pegged to the Presenter, UI, databases, or other elements. Nothing from an external layer can be mentioned by the code of an internal layer.
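To make the dependency rule concrete, here is a minimal sketch in Go (all names are illustrative, not part of the project): the inner layer declares the abstraction it needs, and outer layers provide and inject the implementation.

package usecases

import "context"

// User is an inner-layer entity (simplified for this sketch).
type User struct {
    ID string
}

// UserRepository is declared by the inner layer; concrete database or
// HTTP implementations live in outer layers and are injected at runtime.
type UserRepository interface {
    Get(ctx context.Context, id string) (*User, error)
}

// GetUser is application logic: it never mentions a database, a
// framework, or any other element of an outer layer.
func GetUser(ctx context.Context, repo UserRepository, id string) (*User, error) {
    return repo.Get(ctx, id)
}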
Hexagonal Architecture
Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases.
The main objective of this architecture is to isolate business services and rules from the technical environment, using ports and adapters.
When you start a project, you have to make technical choices that could turn out, in time, to be bad ones (project death, license changes, etc.). If you don't isolate these implementations from your code, you will not be able to switch quickly to a new implementation.
For example, say you want to build software using the Function-as-a-Service pattern, but you tie invocation to AWS Lambda and use AWS service objects internally. When you later want to migrate to GCP or elsewhere, your code will be completely and tightly coupled to AWS, instead of having technically independent business logic with an AWS Lambda adapter on top.
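As an illustration (hypothetical names, not the project's code), the business port below never mentions a provider type; a plain HTTP adapter is one way to drive it, and a Lambda adapter would wrap the very same interface with github.com/aws/aws-lambda-go without touching the business logic.

package greeter

import (
    "context"
    "encoding/json"
    "net/http"
)

// Service is the technology-independent business port.
type Service interface {
    Greet(ctx context.Context, name string) (string, error)
}

// HTTPHandler adapts the service to a plain HTTP server, convenient
// for local demonstrations and integration tests.
func HTTPHandler(svc Service) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        msg, err := svc.Greet(r.Context(), r.URL.Query().Get("name"))
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        _ = json.NewEncoder(w).Encode(map[string]string{"message": msg})
    }
}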
By isolating business from technical dependencies, you can test using mocks and stubs. You can also start your project without waiting for all parts to be completed; only the contracts (ports and adapters) are needed. You can start testing your code without deploying the full stack: no need to wait for the final artifact to be built. You can test each feature, layer by layer.
By isolating your project from the infrastructure, you can have a quick demonstration product: run your product as a simple HTTP server in a Docker container by swapping an adapter, then deploy it as a Lambda on AWS as the production target. Don't think that using AWS to test your code is mandatory at the unit test level, or even at the integration level; you don't have to test AWS Lambda invocation, only your own code.
By isolating persistence, you can produce stub repositories to simulate data providers, in order to build a first PoC (for example) while mocking the business service.
Hexagonal architecture products can communicate with each other using adapters. Think about the micro-service architecture style: it is completely applicable if you consider each micro-service as its own hexagonal architecture product, with the transport communication layer as an adapter between them (HTTP, gRPC, Pub/Sub).
Consider architecture patterns as toolboxes or guidelines, never as a source of truth to be applied in every situation.
Conclusion
I know, we have to work faster every day, and quality (also security) is the first feature to be dropped. By staying focused on business problems, and using architectural patterns as tools, you gain back the time generally spent reinventing the wheel, again and again, to produce demonstrable value and make maintainers and auditors happy.
Consider the maintainer as a complete psychopath who knows where you live and wants to kill you ... while you are writing code.
First, I create an internal package containing the complete project implementation, which will then be executed from dedicated runnable artifacts.
$ mkdir spotigraph
$ go mod init go.zenithar.org/spotigraph
$ mkdir internal
The internal package will not be accessible from outside of the project package.
Helpers
Helpers should contain additional functions for models, such as:
Password encoding, verification and policy upgrades (algorithm deprecation)
ID generation and validation
Time function indirection for time-driven tests
$ mkdir internal/helpers
Random ID Generation
For example, id.go should contain all the logic used to generate and verify ID syntax.
package helpers

import (
    "github.com/dchest/uniuri"
    validation "github.com/go-ozzo/ozzo-validation"
    "github.com/go-ozzo/ozzo-validation/is"
)

// IDGeneratedLength defines the length of the id string
const IDGeneratedLength = 32

// IDGeneratorFunc returns a randomly generated string useable as identifier
var IDGeneratorFunc = func() string {
    return uniuri.NewLen(IDGeneratedLength)
}

// IDValidationRules describes identifier contract for syntaxic validation
var IDValidationRules = []validation.Rule{
    validation.Required,
    validation.Length(IDGeneratedLength, IDGeneratedLength),
    is.Alphanumeric,
}
I like to use ozzo-validation because it makes it easy to create composable validation checks.
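For instance, the same rule set can be reused to validate a single value anywhere; ValidateID below is a hypothetical helper, not part of the original code:

// ValidateID checks that the given string matches the identifier contract.
func ValidateID(id string) error {
    return validation.Validate(id, IDValidationRules...)
}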
Principal hashing
Another example with principal handling: you don't need to store the principal as plain text. Think about privacy and database leaks, nobody is perfect ... So you could hash the principal before storing it. It's not the database's role to hash the principal; it's part of your requirements to be privacy compliant.
package helpers

import (
    "encoding/base64"

    "golang.org/x/crypto/blake2b"
)

var principalHashKey = []byte(`7EcP%Sm5=Wgoce5Sb"%[E.<&xG8t5soYU$CzdIMTgK@^4i(Zo|)LoDB'!g"R2]8$`)

// PrincipalHashFunc returns the principal hashed using the Blake2b keyed algorithm
var PrincipalHashFunc = func(principal string) string {
    // Prepare hasher
    hasher, err := blake2b.New512(principalHashKey)
    if err != nil {
        panic(err)
    }

    // Append principal
    _, err = hasher.Write([]byte(principal))
    if err != nil {
        panic(err)
    }

    // Return base64 hash value of the principal hash
    return base64.RawStdEncoding.EncodeToString(hasher.Sum(nil))
}

// SetPrincipalHashKey is used to set the key of the hash function
func SetPrincipalHashKey(key []byte) {
    if len(key) != 64 {
        panic("Principal hash key length must be 64 bytes long.")
    }
    principalHashKey = key
}
principalHashKey is hard-coded as the default behavior; you must call the exported function SetPrincipalHashKey to update the hash key.
Don't use a simple hash (without a key), because in this case the only secret is the hash algorithm you use!
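At startup, the default key can be replaced, for example from an environment variable; this is a sketch under assumptions (the variable name and base64 encoding are illustrative, a real deployment would rather use a secret manager):

package main

import (
    "encoding/base64"
    "os"

    "go.zenithar.org/spotigraph/internal/helpers"
)

// configureSecrets replaces the hard-coded default key at startup.
func configureSecrets() {
    raw, err := base64.StdEncoding.DecodeString(os.Getenv("PRINCIPAL_HASH_KEY"))
    if err != nil || len(raw) != 64 {
        panic("PRINCIPAL_HASH_KEY must be a base64-encoded 64-byte key")
    }
    helpers.SetPrincipalHashKey(raw)
}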
Time indirection
In order to control time during tests, you can use time.Now function indirection: declare an alias and use it everywhere.
package helpers

import "time"

// TimeFunc is used for time based tests
var TimeFunc = time.Now
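A test can then pin the clock to a fixed instant; here is a minimal sketch of the idea (test name and date are arbitrary):

package models_test

import (
    "testing"
    "time"

    "go.zenithar.org/spotigraph/internal/helpers"
)

func TestWithFrozenClock(t *testing.T) {
    // Freeze the clock for deterministic assertions.
    frozen := time.Date(2019, time.January, 1, 0, 0, 0, 0, time.UTC)
    old := helpers.TimeFunc
    helpers.TimeFunc = func() time.Time { return frozen }
    defer func() { helpers.TimeFunc = old }()

    // Any code calling helpers.TimeFunc() now sees the frozen instant.
    if got := helpers.TimeFunc(); !got.Equal(frozen) {
        t.Errorf("expected frozen time, got %v", got)
    }
}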
Password helpers
Passwords have to be carefully processed when you decide to store them.
package helpers

import (
    "fmt"
    "sync"

    "github.com/trustelem/zxcvbn"

    "go.zenithar.org/butcher"
    "go.zenithar.org/butcher/hasher"
    "go.zenithar.org/pkg/log"
)

var (
    once sync.Once

    // butcher hasher instance
    butch *butcher.Butcher

    // pepperSeed is added to password derivation during hash in order to
    // prevent salt/password bruteforce.
    // Do not change this!! It will make all stored passwords unrecoverable.
    pepperSeed = []byte("TuxV%AqKB0|gxjEB!vc~]T8Hf[q|xgS('3S<IEnqOv:jF&F8}+pur)N@DHYulF#")
)

const (
    // PasswordQualityScoreThreshold is the password quality threshold
    PasswordQualityScoreThreshold = 3
)

// Initialize butcher hasher
func init() {
    once.Do(func() {
        var err error
        if butch, err = butcher.New(
            butcher.WithAlgorithm(hasher.ScryptBlake2b512),
            butcher.WithPepper(pepperSeed),
        ); err != nil {
            log.Bg().Fatal("Unable to initialize butcher hasher")
        }
    })
}

// SetPasswordPepperSeed is used to set the peppering parameter for butcher
func SetPasswordPepperSeed(key []byte) {
    if len(key) != 64 {
        panic("Password peppering seed length must be 64 bytes long.")
    }
    pepperSeed = key
}

// PasswordHasherFunc is the function used for password hashing
var PasswordHasherFunc = func(password string) (string, error) {
    return butcher.Hash([]byte(password))
}

// PasswordVerifierFunc is the function used to check a password hash against the cleartext one
var PasswordVerifierFunc = func(hash string, cleartext string) (bool, error) {
    return butcher.Verify([]byte(hash), []byte(cleartext))
}

// CheckPasswordPolicyFunc is the function used to check the password storage policy and
// call the assigner function to set the new encoded password using the updated
// password storage policy, if applicable.
var CheckPasswordPolicyFunc = func(hash string, cleartext string, assigner func(string) error) (bool, error) {
    if butcher.NeedsUpgrade([]byte(hash)) {
        if err := assigner(cleartext); err != nil {
            return false, err
        }
    }
    return true, nil
}

// CheckPasswordQualityFunc is the function used to check password quality regarding security constraints
var CheckPasswordQualityFunc = func(cleartext string) error {
    quality := zxcvbn.PasswordStrength(cleartext, nil)
    if quality.Score < PasswordQualityScoreThreshold {
        return fmt.Errorf("Password quality insufficient, try to complexify it (score: %d/%d)", quality.Score, PasswordQualityScoreThreshold)
    }
    return nil
}
package models

import (
    "time"

    validation "github.com/go-ozzo/ozzo-validation"
    "github.com/go-ozzo/ozzo-validation/is"

    "go.zenithar.org/spotigraph/internal/helpers"
)

// User describes user attributes holder
type User struct {
    ID                 string
    Principal          string
    Created            time.Time
    PasswordModifiedAt time.Time
    Secret             string
}

// NewUser returns a user instance
func NewUser(principal string) *User {
    return &User{
        // Generate an identity from the ID helper
        ID: helpers.IDGeneratorFunc(),
        // Hash the given principal using the helper
        Principal: helpers.PrincipalHashFunc(principal),
        // Set the creation date using the time function helper
        Created: helpers.TimeFunc(),
    }
}
Password management
The password is not persistence-adapter specific; it's an entity attribute of the User model.
// SetPassword updates the password hash of the given account
func (u *User) SetPassword(password string) (err error) {
    // Check password quality
    err = helpers.CheckPasswordQualityFunc(password)
    if err != nil {
        return err
    }

    // Generate password hash
    u.Secret, err = helpers.PasswordHasherFunc(password)
    if err != nil {
        return err
    }

    // Update last modified date
    u.PasswordModifiedAt = helpers.TimeFunc()

    // No error
    return nil
}

// VerifyPassword checks if given password matches the hash
func (u *User) VerifyPassword(password string) (bool, error) {
    // Validate password hash using constant time comparison
    valid, err := helpers.PasswordVerifierFunc(u.Secret, password)
    if !valid || err != nil {
        return false, err
    }

    // Check if password needs upgrades
    return helpers.CheckPasswordPolicyFunc(u.Secret, password, func(pwd string) error {
        return u.SetPassword(pwd)
    })
}
SetPassword is used to update the password hash using the password helpers: the given password is evaluated against the password complexity policy, then hashed using the scrypt-blake2b-512 algorithm.
VerifyPassword is used to verify the given clear-text password against the locally encoded one. If the password encoding strategy has changed since the password was stored, the password is verified using the previous encoding strategy, then upgraded to the latest one if it matches. If the encoded password is modified, User.Secret will be updated via the callback.
The password encoding upgrade strategy is mandatory if you want to be able to update the password encoding policy without asking everyone to change their password.
Validation
For User validation, let's implement a Validate() error function using ozzo-validation.
// Validate entity constraints
func (u *User) Validate() error {
    return validation.ValidateStruct(u,
        // User must have a valid id
        validation.Field(&u.ID, helpers.IDValidationRules...),
        // User must have a principal with valid printable ASCII characters as value
        validation.Field(&u.Principal, validation.Required, is.PrintableASCII),
    )
}
This method will be called from persistence adapters on creation and update requests.
When adding logic to models, you must add unit tests to check for specification compliance.
package models_test

import (
    "testing"

    . "github.com/onsi/gomega"

    "go.zenithar.org/spotigraph/internal/models"
)

func TestUserValidation(t *testing.T) {
    g := NewGomegaWithT(t)

    for _, f := range []struct {
        name      string
        expectErr bool
    }{
        {"[email protected]", false},
    } {
        obj := models.NewUser(f.name)
        g.Expect(obj).ToNot(BeNil(), "Entity should not be nil")

        if err := obj.Validate(); err != nil {
            if !f.expectErr {
                t.Errorf("Validation error should not be raised, %v raised", err)
            }
        } else {
            if f.expectErr {
                t.Error("Validation error should be raised")
            }
        }
    }
}
Repositories
As Hexagonal Architecture defines, this component is a persistence adapter. It's a technical implementation of a models provider. It could be:
A local provider: database, file;
A remote provider: another micro-service.
I usually split repository implementations into dedicated packages according to the technical backend used.
But before starting to implement the persistence adapter, and in order to comply with the dependency rule and loose coupling of The Clean Architecture, we have to declare the adapter interface first, so that all implementations must be compliant with it.
For example, api.go must contain only the repository contract, as sketched below.
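The contract itself is not reproduced in this section, but judging from the calls exercised by the integration test further below, a sketch of api.go could look like this:

package repositories

import (
    "context"

    "go.zenithar.org/spotigraph/internal/models"
)

// User describes the user repository contract; every backend
// implementation (PostgreSQL, in-memory, remote service) must satisfy it.
type User interface {
    Create(ctx context.Context, entity *models.User) error
    Get(ctx context.Context, id string) (*models.User, error)
    FindByPrincipal(ctx context.Context, principal string) (*models.User, error)
    Update(ctx context.Context, entity *models.User) error
    Delete(ctx context.Context, id string) error
}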
Don't forget to pass context.Context as far as possible in your calls.
Unit Tests
For testing, I've added this go:generate step in api.go, to build User mocks from the interface type definition.
//go:generate mockgen -destination test/mock/user.gen.go -package mock go.zenithar.org/spotigraph/internal/repositories User
Repository mocks will be used by service unit tests.
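As a sketch of how such a generated mock could be consumed (standard gomock conventions assumed; the service under test is elided):

package services_test

import (
    "context"
    "testing"

    "github.com/golang/mock/gomock"

    "go.zenithar.org/spotigraph/internal/models"
    "go.zenithar.org/spotigraph/internal/repositories/test/mock"
)

func TestUserRetrieval(t *testing.T) {
    ctrl := gomock.NewController(t)
    defer ctrl.Finish()

    // The generated MockUser satisfies the repositories.User interface.
    users := mock.NewMockUser(ctrl)
    users.EXPECT().
        Get(gomock.Any(), "user-id").
        Return(&models.User{ID: "user-id"}, nil)

    // Inject the mock into the service under test and assert behavior
    // without any real database.
    entity, err := users.Get(context.Background(), "user-id")
    if err != nil || entity.ID != "user-id" {
        t.Fatalf("unexpected result: %v, %v", entity, err)
    }
}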
Integration Tests
These tests are executed against a real backend.
For example, the User integration test generator creates a full test that creates, reads, updates, and deletes a User using the persistence adapter implementation.
// +build integration

package specs

import (
    "context"
    "testing"

    "github.com/google/go-cmp/cmp"
    . "github.com/onsi/gomega"

    "go.zenithar.org/pkg/db"
    "go.zenithar.org/spotigraph/internal/models"
    "go.zenithar.org/spotigraph/internal/repositories"
)

// User returns user repositories full test scenario builder
func User(underTest repositories.User) func(*testing.T) {
    return func(t *testing.T) {
        t.Parallel()

        g := NewGomegaWithT(t)

        // Stub context
        ctx := context.Background()

        // Prepare a new entity
        created := models.NewUser("[email protected]")
        g.Expect(created).ToNot(BeNil(), "Newly created entity should not be nil")

        // Create the entity using repository
        err := underTest.Create(ctx, created)
        g.Expect(err).To(BeNil(), "Error creation should be nil")

        // -------------------------------------------------------------------------

        // Retrieve by id from repository
        saved, err := underTest.Get(ctx, created.ID)
        g.Expect(err).To(BeNil(), "Retrieval error should be nil")
        g.Expect(saved).ToNot(BeNil(), "Saved entity should not be nil")

        // Compare objects
        g.Expect(cmp.Equal(created, saved)).To(BeTrue(), "Saved and Created should be equals")

        // Retrieve by non-existent id
        nonExistent, err := underTest.Get(ctx, "non-existent-id")
        g.Expect(err).ToNot(BeNil(), "Error should be raised on non-existent entity")
        g.Expect(err).To(Equal(db.ErrNoResult), "Error ErrNoResult should be raised")
        g.Expect(nonExistent).To(BeNil(), "Non-existent entity should be nil")

        // Retrieve by principal
        savedPrincipal, err := underTest.FindByPrincipal(ctx, created.Principal)
        g.Expect(err).To(BeNil(), "Retrieval error should be nil")
        g.Expect(savedPrincipal).ToNot(BeNil(), "Saved entity should not be nil")

        // Compare objects
        g.Expect(cmp.Equal(created, savedPrincipal)).To(BeTrue(), "SavedPrincipal and Created should be equals")

        // -------------------------------------------------------------------------

        // Update an entity
        saved, err = underTest.Get(ctx, created.ID)
        g.Expect(err).To(BeNil(), "Retrieval error should be nil")
        g.Expect(saved).ToNot(BeNil(), "Saved entity should not be nil")

        // Update properties
        // Update with repository
        err = underTest.Update(ctx, saved)
        g.Expect(err).To(BeNil(), "Update error should be nil")

        // Retrieve from repository to check updated properties
        updated, err := underTest.Get(ctx, created.ID)
        g.Expect(err).To(BeNil(), "Retrieval error should be nil")
        g.Expect(updated).ToNot(BeNil(), "Saved entity should not be nil")

        // Compare objects
        g.Expect(cmp.Equal(created, updated)).To(BeTrue(), "Saved and Updated should be equals")

        // -------------------------------------------------------------------------

        // Remove an entity
        err = underTest.Delete(ctx, created.ID)
        g.Expect(err).To(BeNil(), "Removal error should be nil")

        // Retrieve from repository to check deletion
        deleted, err := underTest.Get(ctx, created.ID)
        g.Expect(err).ToNot(BeNil(), "Deletion error should not be nil")
        g.Expect(err).To(Equal(db.ErrNoResult), "Error ErrNoResult should be raised")
        g.Expect(deleted).To(BeNil(), "Deleted entity should be nil")

        // Remove a non-existent entity
        err = underTest.Delete(ctx, "non-existent-id")
        g.Expect(err).ToNot(BeNil(), "Removal error should not be nil")
        g.Expect(err).To(Equal(db.ErrNoModification), "Error ErrNoModification should be raised")
    }
}
PostgreSQL
For example, let's build the User persistence adapter for PostgreSQL in internal/repositories/pkg/postgresql.
If the model can't fit directly in the persistence adapter, you have to translate it as needed.
With the given table schema:
-- +migrate Up
CREATE TABLE IF NOT EXISTS squads (
    id VARCHAR(32) NOT NULL PRIMARY KEY,
    name VARCHAR(50) NOT NULL,
    meta JSON NOT NULL,
    product_owner_id VARCHAR(32) NOT NULL,
    member_ids JSON NOT NULL
);

-- +migrate Down
DROP TABLE squads;
In this case, I chose to use the JSON column type to handle the Squad:members association and Squad:metadata. So I have to translate the member_ids array into a JSON object before writing to the database, and read it back from JSON into an array when decoding.
My persistence adapter must transform the model before each SQL operation, as sketched below.
Never update a full object without controlling each key: set up the update function so that only updatable attributes can be modified.
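Here is a sketch of both rules applied to the squads table above (type and helper names are illustrative, and the meta column is omitted for brevity): translate the entity before SQL operations, and route updates through an explicit whitelist of columns.

package postgresql

import "encoding/json"

// Squad is a simplified view of the domain model used by this sketch;
// the real model lives in internal/models.
type Squad struct {
    ID             string
    Name           string
    ProductOwnerID string
    MemberIDs      []string
}

// sqlSquad mirrors the squads table: member_ids is a JSON column.
type sqlSquad struct {
    ID             string
    Name           string
    ProductOwnerID string
    MemberIDs      string // JSON-encoded []string
}

// toSQL translates the domain entity before any SQL operation.
func toSQL(entity *Squad) (*sqlSquad, error) {
    raw, err := json.Marshal(entity.MemberIDs)
    if err != nil {
        return nil, err
    }
    return &sqlSquad{
        ID:             entity.ID,
        Name:           entity.Name,
        ProductOwnerID: entity.ProductOwnerID,
        MemberIDs:      string(raw),
    }, nil
}

// updatableColumns whitelists the attributes an update may touch;
// the id column is immutable, so it is deliberately absent.
func updatableColumns(entity *sqlSquad) map[string]interface{} {
    return map[string]interface{}{
        "name":             entity.Name,
        "product_owner_id": entity.ProductOwnerID,
        "member_ids":       entity.MemberIDs,
    }
}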
Running integration tests
In order to run integration tests with Golang, you must prepare a TestMain. This runner is responsible for building all related persistence adapter instances according to the requested command line flag.
The test specification will be used to generate the full scenario test, by passing the persistence adapter to the generator.
Obviously, all persistence adapter implementations should have the same behavior, validated by your test suite.
// +build integration

package integration

import (
    "context"
    "flag"
    "fmt"
    "math/rand"
    "os"
    "strings"
    "testing"
    "time"

    "go.uber.org/zap"

    "go.zenithar.org/pkg/log"
    "go.zenithar.org/pkg/testing/containers/database"
    "go.zenithar.org/spotigraph/internal/version"
)

var databases = flag.String("databases", "postgresql", "Repositories backend to use, split with a comma ','. Example: postgresql,mongodb,rethinkdb")

func init() {
    flag.Parse()

    ctx := context.Background()

    // Prepare logger
    log.Setup(ctx, &log.Options{
        Debug:     true,
        AppName:   "spotigraph-integration-tests",
        AppID:     "123456",
        Version:   version.Version,
        Revision:  version.Revision,
        SentryDSN: "",
    })

    // Initialize random seed
    rand.Seed(time.Now().UTC().Unix())

    // Set UTC for all time
    time.Local = time.UTC
}

func testMainWrapper(m *testing.M) int {
    if testing.Short() {
        fmt.Println("Skipping integration tests")
        return 0
    }

    log.Bg().Info("Initializing test DB for integration test (disable with `go test -short`)")

    ctx := context.Background()

    backends := strings.Split(strings.ToLower(*databases), ",")
    for _, back := range backends {
        switch back {
        case "postgresql":
            // Initialize postgresql
            cancel, err := postgreSQLConnection(ctx)
            if err != nil {
                log.Bg().Fatal("Unable to initialize repositories", zap.Error(err))
            }
            defer func() {
                cancel()
            }()
        default:
            log.Bg().Fatal("Unsupported backend", zap.String("backend", back))
        }
    }

    defer func() {
        database.KillAll(ctx)
    }()

    return m.Run()
}

// TestMain is the test entrypoint
func TestMain(m *testing.M) {
    os.Exit(testMainWrapper(m))
}
It will initialize a Docker container running a PostgreSQL server, using ory-am/dockertest. This automates the database server deployment when executing tests locally, and the same tests can also target a remote database instance (when managing the database with your CI pipeline, for example).
package database

import (
    "fmt"
    "log"

    "github.com/dchest/uniuri"

    // Load driver if not already done
    _ "github.com/lib/pq"

    dockertest "gopkg.in/ory-am/dockertest.v3"

    "go.zenithar.org/pkg/testing/containers"
)

var (
    // PostgreSQLVersion defines version to use
    PostgreSQLVersion = "10"
)

// postgreSQLContainer represents database container handler
type postgreSQLContainer struct {
    Name     string
    pool     *dockertest.Pool
    resource *dockertest.Resource
    config   *Configuration
}

// newPostgresContainer initializes a PostgreSQL server in a docker container
func newPostgresContainer(pool *dockertest.Pool) *postgreSQLContainer {
    var (
        databaseName = fmt.Sprintf("test-%s", uniuri.NewLen(8))
        databaseUser = fmt.Sprintf("user-%s", uniuri.NewLen(8))
        password     = uniuri.NewLen(32)
    )

    // Initialize a PostgreSQL server
    resource, err := pool.Run("postgres", PostgreSQLVersion, []string{
        fmt.Sprintf("POSTGRES_PASSWORD=%s", password),
        fmt.Sprintf("POSTGRES_DB=%s", databaseName),
        fmt.Sprintf("POSTGRES_USER=%s", databaseUser),
    })
    if err != nil {
        log.Fatalf("Could not start resource: %s", err)
    }

    // Prepare connection string
    connectionString := fmt.Sprintf("postgres://%s:%s@localhost:%s/%s?sslmode=disable", databaseUser, password, resource.GetPort("5432/tcp"), databaseName)

    // Retrieve container name
    containerName := containers.GetName(resource)

    // Return container information
    return &postgreSQLContainer{
        Name:     containerName,
        pool:     pool,
        resource: resource,
        config: &Configuration{
            ConnectionString: connectionString,
            Password:         password,
            DatabaseName:     databaseName,
            DatabaseUser:     databaseUser,
        },
    }
}

// -------------------------------------------------------------------

// Close the container
func (container *postgreSQLContainer) Close() error {
    log.Printf("Postgres (%v): shutting down", container.Name)
    return container.pool.Purge(container.resource)
}

// Configuration returns database settings
func (container *postgreSQLContainer) Configuration() *Configuration {
    return container.config
}
At this point, you must be able to execute all test suites against your persistence adapters. From here, you should be able to create business services by using these adapters through their interfaces, NOT DIRECTLY via adapter instances.
In the previous post (part 2), we completed the first version of our SCRUD persistence adapters. Now we can assemble them as services to produce the expected product value, as sketched below.
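A minimal sketch of such an assembly, assuming the repository contract shown earlier (package layout and function names are illustrative):

package user

import (
    "context"

    "go.zenithar.org/spotigraph/internal/models"
    "go.zenithar.org/spotigraph/internal/repositories"
)

// service implements the business facade on top of the persistence port.
type service struct {
    users repositories.User
}

// New builds the user service from the repository contract, so any
// compliant adapter (PostgreSQL, mock, remote service) can be injected.
func New(users repositories.User) *service {
    return &service{users: users}
}

// Register creates a new user from a principal.
func (s *service) Register(ctx context.Context, principal string) (*models.User, error) {
    entity := models.NewUser(principal)

    // Validate entity constraints before persisting.
    if err := entity.Validate(); err != nil {
        return nil, err
    }

    // Persist through the port, never through a concrete adapter.
    if err := s.users.Create(ctx, entity); err != nil {
        return nil, err
    }
    return entity, nil
}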