
@Zenithar
Last active May 13, 2019 19:14

How I write micro-services?

Over my experience as a software developer and code reviewer, I have seen many bad practices and collected a lot of tips for building better products. I want to focus on a specific part of them in order to prepare an enhanced Golang project template merging all these tips and practices.

What is a micro-service (again)?

My own personal definition

A micro-service is an internal, autonomous, observable, immutable, scalable, self-contained unit of deployment in a non-monolithic architecture. Each micro-service is responsible, partially or completely, for a part of a business problem, orchestrated and exposed by a Service Gateway.

A micro-service doesn't mean a micro "code" infrastructure; if you consider a micro-service like microcode, you end up in a world of pico-services where each service is designed to run roughly one Assembly opcode. I can already hear the tech early-adopt-hipsters: "Oh dude, such a good idea, we could have distributed multi-architecture assembly services running on Kubernetes".

An opcode is an instruction executed by your CPU

Consider micro-services as deployment units, built and run together to achieve a more complex service. Responsibilities are split for deployment reasons (heavy load, sometimes re-usability). So don't design micro-services without a full vision of your business service. I'm not saying to roll back your code to that gorgeous monolith, but splitting it into micro-services should be driven by technical requirements (load balancing, etc.), not just by hype.

Micro-services add network between problems!

And all micro-services are part of a micro-service style architecture that serves an identified target.

Splitting them and connecting them via an external transport just adds transport problems (connection resiliency, connection explosion, distributed concurrency, etc.) to the business problems you are trying to solve.

Once again, you must know what to do BEFORE how to do it!

Architectural patterns

Clean code is not written by following a set of rules. You don’t become a software craftsman by learning a list of heuristics. Professionalism and craftsmanship come from values that drive disciplines. Robert C. Martin, Clean Code: A Handbook of Agile Software Craftsmanship

Architecture patterns are frameworks in which you must understand all the concepts, but use only what you need.

In software development, architecture should not be the first "hot" point to solve; you should be focused on features that add value to your product. Many architectural patterns exist precisely to save time, letting you think about "what I want to do" instead of "how am I going to organize my code".

Balancing between adding features and keeping the code maintainable over time is the main role of the software architect.

Clean Architecture

![Clean Architecture](/home/zenithar/Documents/CleanArchitecture.jpg)

The "Uncle Bob" Clean Architecture (Robert C. Martin) is an architectural toolbox that describes and organizes software components and how they communicate with each other.

The main objective of this architecture pattern is to design software with very low coupling, independent of technical implementation and fully testable.

  • Entities contain no business logic, only intrinsic validation;
  • Use Cases contain business logic, completely independent of any technical implementation;
  • Interface Adapters contain bridges between the external world and the Use Cases, via presenters (HTTP, gRPC, Subscriber);
  • Frameworks & Drivers contain all technical implementations (database repositories, security controls, etc.)

All layers are bound by the dependency rule. The dependency rule tells us that inner layers should not depend on outer layers. That is, our business logic and application logic should not be pegged to the Presenter, UI, databases, or other elements. Nothing from an external layer can be mentioned by the code of an internal layer.
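The dependency rule can be sketched in a few lines of Go: the inner layer owns both the entity and the abstraction it needs, and the outer layer implements that abstraction. All names here (`RegisterUser`, `inMemoryRepository`) are illustrative, not part of any real project.

```go
package main

import (
	"errors"
	"fmt"
)

// --- Inner layer: business logic depends only on abstractions it owns.

// User is the entity.
type User struct {
	ID   string
	Name string
}

// UserRepository is declared by the inner layer; outer layers implement it.
type UserRepository interface {
	Save(u User) error
}

// RegisterUser is a use case: pure business logic, no technical detail.
func RegisterUser(repo UserRepository, id, name string) error {
	if name == "" {
		return errors.New("name is required")
	}
	return repo.Save(User{ID: id, Name: name})
}

// --- Outer layer: a technical implementation of the inner abstraction.

type inMemoryRepository struct {
	data map[string]User
}

func (r *inMemoryRepository) Save(u User) error {
	r.data[u.ID] = u
	return nil
}

func main() {
	// Wiring happens at the outermost layer; the use case never imports it.
	repo := &inMemoryRepository{data: map[string]User{}}
	if err := RegisterUser(repo, "1", "alice"); err != nil {
		panic(err)
	}
	fmt.Println(len(repo.data)) // 1
}
```

Note that the import arrow points inward: `inMemoryRepository` knows about `UserRepository`, never the other way around.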

Hexagonal Architecture

Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases. Alistair Cockburn, Hexagonal Architecture

The main objective of this architecture is to isolate the business services and rules from the technical environment, using ports and adapters.

When you start a project, you have to make technical choices that could turn out, over time, to be bad ones (project death, license changes, etc.). If you don't isolate these implementations from your code, you will not be able to switch quickly to a new implementation.

For example, suppose you build software using the Function-as-a-Service pattern, but you tie the invocation to AWS Lambda and use AWS service objects internally. When you want to migrate to GCP or elsewhere, your code is completely and tightly coupled to AWS, instead of having technically independent business logic with an AWS Lambda adapter on the side.

By isolating the business from technical dependencies, you can test using mocks and stubs. You can also start your project without waiting for the completion of all its parts; only the contracts (adapters) are needed. You can start testing your code without deploying the full stack, with no need to wait for the final artifact to be built. You can test each feature, layer by layer.

By isolating your project from the infrastructure, you can have a quick demonstration product: run it as a simple HTTP server in a Docker container by changing an adapter, then deploy it as a Lambda on AWS as the production target. Don't think that using AWS is mandatory to test your code at the unit level, or even at the integration level; you don't have to test the AWS Lambda invocation, only your own code.

By isolating persistence, you can produce stub repositories to simulate data providers, for example to build a first PoC while mocking the business service.

Hexagonal architecture products can communicate with each other using adapters. Think about the micro-service architecture style: it's completely applicable if you consider each micro-service as its own hexagonal architecture product, with the transport layer as an adapter between them (HTTP, gRPC, Pub/Sub).

Consider architecture patterns as toolboxes or guidelines, never as a source of truth to be applied in every situation.

Conclusion

I know, we have to work faster every day, and quality (along with security) is the first feature to be dropped. By focusing on business problems and using architectural patterns as tools, you free up the time generally spent reinventing the wheel, again and again, to produce demonstrable value and make maintainers and auditors happy.

Consider the maintainer as a complete psychopath who knows where you live and wants to kill you ... while you are writing code.


How I write micro-services? (part 2)

Golang Template Project

We are about to prepare a Golang project according to Clean and Hexagonal Architecture principles.

TL;DR - Code repository on GitHub - <https://github.com/Zenithar/go-spotigraph>

Domain

In order to manipulate these concepts, we need to implement them. Let's use the Spotify agile terminology.

  • User is a system identity
    • User has an id
    • User has a principal describing the user's private identity (email, LDAP dn, etc.)
  • Squad is a group of Users working together on a product
    • Squad has an id
    • Squad has a label
    • Squad has a Product Owner, referenced by a User:id
  • Guild is a group of Users according to their affinities
    • Guild has an id
    • Guild has a label
    • Guild has members, a collection of User:id
  • Tribe is a group of Squads working on a common project
    • Tribe has an id
    • Tribe has a label
    • Tribe has members, a collection of User:id
  • Chapter is a group of Users according to their skill-sets
    • Chapter has an id
    • Chapter has a label
    • Chapter has members, a collection of User:id
    • Chapter has a Chapter Leader, referenced by a User:id

Core Services

I assume from here that you know how to work with Golang - https://tour.golang.org

First, I create an internal package containing the complete project, which will then be executed from dedicated runnable artifacts.

$ mkdir spotigraph
$ go mod init go.zenithar.org/spotigraph
$ mkdir internal

The internal package will not be accessible from outside of the project package.

Helpers

Helpers should contain additional functions for models, such as:

  • Password encoding, verification and policy upgrades (algorithm deprecation)
  • ID generation and validation
  • Time function indirection for time-driven tests
$ mkdir internal/helpers
Random ID Generation

For example, id.go should contain all the logic used to generate and verify ID syntax.

package helpers

import (
	"github.com/dchest/uniuri"
	validation "github.com/go-ozzo/ozzo-validation"
	"github.com/go-ozzo/ozzo-validation/is"
)

// IDGeneratedLength defines the length of the id string
const IDGeneratedLength = 32

// IDGeneratorFunc returns a randomly generated string usable as an identifier
var IDGeneratorFunc = func() string {
	return uniuri.NewLen(IDGeneratedLength)
}

// IDValidationRules describes the identifier contract for syntactic validation
var IDValidationRules = []validation.Rule{
	validation.Required,
	validation.Length(IDGeneratedLength, IDGeneratedLength),
	is.Alphanumeric,
}

I like to use ozzo-validation because it makes it easy to create composable validation checks.
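The composable-rules idea can be sketched without any dependency; this is a toy `Rule` type inspired by ozzo-validation's API, not the library itself (all names below are illustrative).

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// Rule mirrors the composable-check idea behind ozzo-validation.
type Rule func(s string) error

// Required fails on empty or whitespace-only values.
func Required(s string) error {
	if strings.TrimSpace(s) == "" {
		return fmt.Errorf("value is required")
	}
	return nil
}

// Length returns a rule enforcing an exact length.
func Length(n int) Rule {
	return func(s string) error {
		if len(s) != n {
			return fmt.Errorf("length must be %d", n)
		}
		return nil
	}
}

// Alphanumeric rejects any non letter/digit rune.
func Alphanumeric(s string) error {
	for _, r := range s {
		if !unicode.IsLetter(r) && !unicode.IsDigit(r) {
			return fmt.Errorf("must be alphanumeric")
		}
	}
	return nil
}

// Validate applies a rule set, like validation.Validate(value, rules...).
func Validate(s string, rules ...Rule) error {
	for _, r := range rules {
		if err := r(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// A rule slice can be shared across models, like IDValidationRules.
	idRules := []Rule{Required, Length(4), Alphanumeric}
	fmt.Println(Validate("ab12", idRules...))         // <nil>
	fmt.Println(Validate("ab!", idRules...) != nil)   // true
}
```

Because rules are plain values, a slice like `IDValidationRules` can be declared once and reused by every model that carries an identifier.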

Principal hashing

Another example with principal handling: you don't need to store the principal as plain text. Think about privacy and database leaks; nobody is perfect. So you should hash the principal before storing it. It's not the database's role to hash the principal; it's part of your requirements to be privacy compliant.

package helpers

import (
	"encoding/base64"

	"golang.org/x/crypto/blake2b"
)

var principalHashKey = []byte(`7EcP%Sm5=Wgoce5Sb"%[E.<&xG8t5soYU$CzdIMTgK@^4i(Zo|)LoDB'!g"R2]8$`)

// PrincipalHashFunc returns the principal hashed using the Blake2b keyed algorithm
var PrincipalHashFunc = func(principal string) string {
	// Prepare hasher
	hasher, err := blake2b.New512(principalHashKey)
	if err != nil {
		panic(err)
	}

	// Append principal
	_, err = hasher.Write([]byte(principal))
	if err != nil {
		panic(err)
	}

	// Return base64 hash value of the principal hash
	return base64.RawStdEncoding.EncodeToString(hasher.Sum(nil))
}

// SetPrincipalHashKey is used to set the key of the hash function
func SetPrincipalHashKey(key []byte) {
	if len(key) != 64 {
		panic("Principal hash key must be 64 bytes long.")
	}

	principalHashKey = key
}

principalHashKey is hard-coded as the default; you must call the exported function SetPrincipalHashKey to update the hash key.

Don't use a simple (unkeyed) hash, because in that case the secret is only based on the hash algorithm you use!

Time indirection

In order to control time during tests, you can use an indirection of the time.Now function by declaring an alias and using it everywhere.

package helpers

import "time"

// TimeFunc is used for time-based tests
var TimeFunc = time.Now

For more complex clock mocking, I advise you to consider github.com/jonboulle/clockwork.

Password

A password has to be carefully processed when you decide to store it.

package helpers

import (
	"context"
	"fmt"
	"sync"

	"go.zenithar.org/pkg/log"
	"go.zenithar.org/butcher"
	"go.zenithar.org/butcher/hasher"
	"github.com/trustelem/zxcvbn"
)

var (
	once sync.Once

	// butcher hasher instance
	butch *butcher.Butcher

	// pepperSeed is added to the password derivation during hashing in order to
	// prevent salt/password bruteforce.
	// Do not change this!! It will make all stored passwords unrecoverable.
	pepperSeed = []byte("TuxV%AqKB0|gxjEB!vc~]T8Hf[q|xgS('3S<IEnqOv:jF&F8}+pur)N@DHYulF#")
)

const (
	// PasswordQualityScoreThreshold is the password quality threshold
	PasswordQualityScoreThreshold = 3
)

// Initialize butcher hasher
func init() {
	once.Do(func() {
		var err error
		if butch, err = butcher.New(
			butcher.WithAlgorithm(hasher.ScryptBlake2b512),
			butcher.WithPepper(pepperSeed),
		); err != nil {
			log.Bg().Fatal("Unable to initialize butcher hasher")
		}
	})
}

// SetPasswordPepperSeed is used to set the peppering parameter for butcher
func SetPasswordPepperSeed(key []byte) {
	if len(key) != 64 {
		panic("Password peppering seed must be 64 bytes long.")
	}

	pepperSeed = key
}

// PasswordHasherFunc is the function used for password hashing
var PasswordHasherFunc = func(password string) (string, error) {
	return butch.Hash([]byte(password))
}

// PasswordVerifierFunc is the function used to check a password hash against the cleartext one
var PasswordVerifierFunc = func(hash string, cleartext string) (bool, error) {
	return butch.Verify([]byte(hash), []byte(cleartext))
}

// CheckPasswordPolicyFunc is the function used to check the password storage policy and
// call the assigner function to set the new encoded password using the updated
// password storage policy, if applicable.
var CheckPasswordPolicyFunc = func(hash string, cleartext string, assigner func(string) error) (bool, error) {
	if butch.NeedsUpgrade([]byte(hash)) {
		if err := assigner(cleartext); err != nil {
			return false, err
		}
	}
	return true, nil
}

// CheckPasswordQualityFunc is the function used to check password quality against security constraints
var CheckPasswordQualityFunc = func(cleartext string) error {
	quality := zxcvbn.PasswordStrength(cleartext, nil)
	if quality.Score < PasswordQualityScoreThreshold {
		return fmt.Errorf("password quality insufficient, try to complexify it (score: %d/%d)", quality.Score, PasswordQualityScoreThreshold)
	}
	return nil
}

I use zxcvbn as a password strength evaluator (github.com/trustelem/zxcvbn).

Models

$ mkdir internal/models

For example, user.go is defined as follows:

package models

import (
	"fmt"
    "time"

	"go.zenithar.org/spotigraph/internal/helpers"

	validation "github.com/go-ozzo/ozzo-validation"
	"github.com/go-ozzo/ozzo-validation/is"
)

// User describes user attributes holder
type User struct {
    ID                 string
    Principal          string
    Created            time.Time
    PasswordModifiedAt time.Time
    Secret             string
}

// NewUser returns a user instance
func NewUser(principal string) *User {
	return &User{
		// Generate an identity from the ID helper
		ID: helpers.IDGeneratorFunc(),
		// Hash the given principal using the helper
		Principal: helpers.PrincipalHashFunc(principal),
		// Set the creation date using the time function helper
		Created: helpers.TimeFunc(),
	}
}
Password management

The password is not persistence-adapter specific; it's an entity attribute of the User model.

// SetPassword updates the password hash of the given account
func (u *User) SetPassword(password string) (err error) {
	// Check password quality
	err = helpers.CheckPasswordQualityFunc(password)
	if err != nil {
		return err
	}

	// Generate password hash
	u.Secret, err = helpers.PasswordHasherFunc(password)
	if err != nil {
		return err
	}

	// Update last modified date
	u.PasswordModifiedAt = helpers.TimeFunc()

	// No error
	return nil
}

// VerifyPassword checks if the given password matches the hash
func (u *User) VerifyPassword(password string) (bool, error) {
	// Validate password hash using constant time comparison
	valid, err := helpers.PasswordVerifierFunc(u.Secret, password)
	if !valid || err != nil {
		return false, err
	}

	// Check if the password hash needs an upgrade
	return helpers.CheckPasswordPolicyFunc(u.Secret, password, func(pwd string) error {
		return u.SetPassword(pwd)
	})
}
  • SetPassword updates the password hash using the password helpers; the given password is evaluated against the password complexity policy, then hashed using the scrypt-blake2b-512 algorithm.
  • VerifyPassword checks the given clear-text password against the locally encoded one. If the password encoding strategy has changed since the password was stored, the password is verified using the previous encoding strategy, then upgraded to the latest one if it matches. If the encoded password is modified, User:Secret will be updated using the callback.

This password encoding upgrade strategy is mandatory if you want to be able to update the password encoding policy without asking everyone to change their password.

Validation

For User validation, let's implement a Validate() error function using ozzo-validation.

// Validate entity constraints
func (u *User) Validate() error {
	return validation.ValidateStruct(u,
		// User must have a valid id
		validation.Field(&u.ID, helpers.IDValidationRules...),
		// User must have a principal with valid printable ASCII characters as value
		validation.Field(&u.Principal, validation.Required, is.PrintableASCII),
	)
}

This method will be called from the persistence adapters on creation and update requests.

When you add logic to models, you must add unit tests to check for specification compliance.

package models_test

import (
	"testing"

	. "github.com/onsi/gomega"

	"go.zenithar.org/spotigraph/internal/models"
)

func TestUserValidation(t *testing.T) {
	g := NewGomegaWithT(t)

	for _, f := range []struct {
		name      string
		expectErr bool
	}{
		{"[email protected]", false},
	} {
		obj := models.NewUser(f.name)
		g.Expect(obj).ToNot(BeNil(), "Entity should not be nil")

		if err := obj.Validate(); err != nil {
			if !f.expectErr {
				t.Errorf("Validation error should not be raised, %v raised", err)
			}
		} else {
			if f.expectErr {
				t.Error("Validation error should be raised")
			}
		}
	}
}

Note the table-driven test pattern, very useful to decouple test data from test logic (https://dave.cheney.net/2019/05/07/prefer-table-driven-tests).

Repositories

A Repository, as Hexagonal architecture defines this component, is a persistence adapter. It's a technical implementation of a models provider. It could be:

  • A local provider: database, file;
  • A remote provider: another micro-service.

I'm used to splitting repository implementations into dedicated packages according to the technical backend used.

$ mkdir internal/repositories
$ touch internal/repositories/api.go
$ mkdir internal/repositories/pkg
$ mkdir internal/repositories/pkg/postgresql
$ mkdir internal/repositories/pkg/mongodb

But before starting to implement the persistence adapter, and in order to comply with the dependency rule and loose coupling of the Clean Architecture, we have to declare the adapter interface first, so that all implementations must comply with it.

For example, api.go must contain only the repository contract.

package repositories

import (
	"context"

	"go.zenithar.org/pkg/db"
	"go.zenithar.org/spotigraph/internal/models"
)

// UserSearchFilter represents user entity collection search criteria
type UserSearchFilter struct {
	UserID    string
	Principal string
}

// User describes user repository contract
type User interface {
	Create(ctx context.Context, entity *models.User) error
	Get(ctx context.Context, id string) (*models.User, error)
	Update(ctx context.Context, entity *models.User) error
	Delete(ctx context.Context, id string) error
	Search(ctx context.Context, filter *UserSearchFilter, pagination *db.Pagination, sortParams *db.SortParameters) ([]*models.User, int, error)
	FindByPrincipal(ctx context.Context, principal string) (*models.User, error)
}

Don't forget to pass context.Context as far as possible in your calls.

Unit Tests

For testing, I've added this go generate step in api.go, to build User mocks from the interface type definition.

//go:generate mockgen -destination test/mock/user.gen.go -package mock go.zenithar.org/spotigraph/internal/repositories User

Repository mocks will be used by service unit tests.

Integration Tests

These tests are executed using a real backend.

For example, the User integration test generator creates a full test that creates, reads, updates, and deletes a User using the persistence adapter implementation.

//+build integration

package specs

import (
	"context"
	"testing"

	"github.com/google/go-cmp/cmp"
	. "github.com/onsi/gomega"

	"go.zenithar.org/pkg/db"
	"go.zenithar.org/spotigraph/internal/models"
	"go.zenithar.org/spotigraph/internal/repositories"
)

// User returns user repositories full test scenario builder
func User(underTest repositories.User) func(*testing.T) {
	return func(t *testing.T) {
		t.Parallel()

		g := NewGomegaWithT(t)

		// Stub context
		ctx := context.Background()

		// Prepare a new entity
		created := models.NewUser("[email protected]")
		g.Expect(created).ToNot(BeNil(), "Newly created entity should not be nil")

		// Create the entity using repository
		err := underTest.Create(ctx, created)
		g.Expect(err).To(BeNil(), "Error creation should be nil")

		// -------------------------------------------------------------------------

		// Retrieve by id from repository
		saved, err := underTest.Get(ctx, created.ID)
		g.Expect(err).To(BeNil(), "Retrieval error should be nil")
		g.Expect(saved).ToNot(BeNil(), "Saved entity should not be nil")

		// Compare objects
		g.Expect(cmp.Equal(created, saved)).To(BeTrue(), "Saved and Created should be equals")

		// Retrieve by non-existent id
		nonExistent, err := underTest.Get(ctx, "non-existent-id")
		g.Expect(err).ToNot(BeNil(), "Error should be raised on non-existent entity")
		g.Expect(err).To(Equal(db.ErrNoResult), "Error ErrNoResult should be raised")
		g.Expect(nonExistent).To(BeNil(), "Non-existent entity should be nil")

		// Retrieve by principal
		savedPrincipal, err := underTest.FindByPrincipal(ctx, created.Principal)
		g.Expect(err).To(BeNil(), "Retrieval error should be nil")
		g.Expect(savedPrincipal).ToNot(BeNil(), "Saved entity should not be nil")

		// Compare objects
		g.Expect(cmp.Equal(created, savedPrincipal)).To(BeTrue(), "SavedPrincipal and Created should be equals")

		// -------------------------------------------------------------------------

		// Update an entity
		saved, err = underTest.Get(ctx, created.ID)
		g.Expect(err).To(BeNil(), "Retrieval error should be nil")
		g.Expect(saved).ToNot(BeNil(), "Saved entity should not be nil")

		// Update properties

		// Update with repository
		err = underTest.Update(ctx, saved)
		g.Expect(err).To(BeNil(), "Update error should be nil")

		// Retrieve from repository to check updated properties
		updated, err := underTest.Get(ctx, created.ID)
		g.Expect(err).To(BeNil(), "Retrieval error should be nil")
		g.Expect(updated).ToNot(BeNil(), "Saved entity should not be nil")

		// Compare objects
		g.Expect(cmp.Equal(created, updated)).To(BeTrue(), "Saved and Updated should be equals")

		// -------------------------------------------------------------------------

		// Remove an entity
		err = underTest.Delete(ctx, created.ID)
		g.Expect(err).To(BeNil(), "Removal error should be nil")

		// Retrieve from repository to check deletion
		deleted, err:= underTest.Get(ctx, created.ID)
		g.Expect(err).ToNot(BeNil(), "Deletion error should not be nil")
		g.Expect(err).To(Equal(db.ErrNoResult), "Error ErrNoResult should be raised")
		g.Expect(deleted).To(BeNil(), "Deleted entity should be nil")

		// Remove a non-existent entity
		err = underTest.Delete(ctx, "non-existent-id")
		g.Expect(err).ToNot(BeNil(), "Removal error should not be nil")
		g.Expect(err).To(Equal(db.ErrNoModification), "Error ErrNoModification should be raised")
	}
}
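The scenario-builder pattern above can be sketched generically: a function takes the contract under test and returns a reusable scenario, so every backend adapter runs exactly the same checks. `Store`, `Spec` and `memStore` below are illustrative stand-ins for `repositories.User` and `specs.User`.

```go
package main

import "fmt"

// Store is the contract every backend must satisfy.
type Store interface {
	Put(k, v string)
	Get(k string) (string, bool)
}

// Spec returns a reusable scenario closed over the implementation under test,
// mirroring how specs.User wraps a repositories.User.
func Spec(underTest Store) func() error {
	return func() error {
		underTest.Put("id", "value")
		if v, ok := underTest.Get("id"); !ok || v != "value" {
			return fmt.Errorf("round-trip failed")
		}
		return nil
	}
}

// memStore is one backend; a PostgreSQL or MongoDB adapter would be another.
type memStore map[string]string

func (m memStore) Put(k, v string) { m[k] = v }

func (m memStore) Get(k string) (string, bool) {
	v, ok := m[k]
	return v, ok
}

func main() {
	// Each backend reuses the same scenario with its own adapter.
	run := Spec(memStore{})
	fmt.Println(run() == nil) // true
}
```

In the real project the builder returns a `func(*testing.T)`, so each backend package just does `t.Run("postgresql", specs.User(repo))`.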
PostgreSQL

For example, the User persistence adapter for PostgreSQL in internal/repositories/pkg/postgresql:

package postgresql

import (
	"context"
	"strings"

	"go.zenithar.org/pkg/db"
	"go.zenithar.org/pkg/db/adapter/postgresql"
	"go.zenithar.org/spotigraph/internal/models"
	"go.zenithar.org/spotigraph/internal/repositories"
    
	sq "github.com/Masterminds/squirrel"
	"github.com/jmoiron/sqlx"
)

type pgUserRepository struct {
	adapter *postgresql.Default
}

// NewUserRepository returns a Postgresql user management repository instance
func NewUserRepository(session *sqlx.DB) repositories.User {
	// Default columns to retrieve
	defaultColumns := []string{"user_id", "principal", "secret", "creation_date"}

	// Sortable columns for criteria
	sortableColumns := []string{"user_id", "principal", "creation_date"}

	// Initialize repository
	return &pgUserRepository{
		adapter: postgresql.NewCRUDTable(session, "", UserTableName, defaultColumns, sortableColumns),
	}
}

// -----------------------------------------------------------------------------

func (r *pgUserRepository) Create(ctx context.Context, entity *models.User) error {
	if err := entity.Validate(); err != nil {
		return err
	}

	return r.adapter.Create(ctx, entity)
}

func (r *pgUserRepository) Get(ctx context.Context, id string) (*models.User, error) {
	var entity models.User

	if err := r.adapter.WhereAndFetchOne(ctx, map[string]interface{}{
		"user_id": id,
	}, &entity); err != nil {
		return nil, err
	}

	return &entity, nil
}

func (r *pgUserRepository) Update(ctx context.Context, entity *models.User) error {
	return r.adapter.Update(ctx, map[string]interface{}{
		"secret": entity.Secret,
	}, map[string]interface{}{
		"user_id": entity.ID,
	})
}

func (r *pgUserRepository) Delete(ctx context.Context, id string) error {
	return r.adapter.RemoveOne(ctx, map[string]interface{}{
		"user_id": id,
	})
}

func (r *pgUserRepository) Search(ctx context.Context, filter *repositories.UserSearchFilter, pagination *db.Pagination, sortParams *db.SortParameters) ([]*models.User, int, error) {
	var results []*models.User

	count, err := r.adapter.Search(ctx, r.buildFilter(filter), pagination, sortParams, &results)
	if err != nil {
		return nil, count, err
	}

	if len(results) == 0 {
		return results, count, db.ErrNoResult
	}

	// Return results and total count
	return results, count, nil
}

func (r *pgUserRepository) FindByPrincipal(ctx context.Context, principal string) (*models.User, error) {
	var entity models.User

	if err := r.adapter.WhereAndFetchOne(ctx, map[string]interface{}{
		"principal": principal,
	}, &entity); err != nil {
		return nil, err
	}

	return &entity, nil
}

// -----------------------------------------------------------------------------

func (r *pgUserRepository) buildFilter(filter *repositories.UserSearchFilter) interface{} {
	if filter != nil {
		clauses := sq.Eq{
			"1": "1",
		}

		if len(strings.TrimSpace(filter.UserID)) > 0 {
			clauses["user_id"] = filter.UserID
		}
		if len(strings.TrimSpace(filter.Principal)) > 0 {
			clauses["principal"] = filter.Principal
		}

		return clauses
	}

	return nil
}

If the model can't fit directly into the persistence adapter, you have to translate it as needed.

With the given table schema:

-- +migrate Up
CREATE TABLE IF NOT EXISTS squads (
    id                  VARCHAR(32) NOT NULL PRIMARY KEY,
    name                VARCHAR(50) NOT NULL,
    meta                JSON        NOT NULL,
    product_owner_id    VARCHAR(32) NOT NULL,
    member_ids          JSON        NOT NULL
);

-- +migrate Down
DROP TABLE squads;

In this case, I chose to use the JSON column type to handle the Squad:members association and Squad:metadata. So I had to translate the member_ids array into a JSON object before writing to the database, and read it back from JSON into an array when decoding.

My persistence adapter must transform the model before each SQL operation.

package postgresql

import (
	"context"
	"encoding/json"

	db "go.zenithar.org/pkg/db/adapter/postgresql"
	"go.zenithar.org/spotigraph/internal/models"
	"go.zenithar.org/spotigraph/internal/repositories"

	"github.com/jmoiron/sqlx"
	"github.com/pkg/errors"
)

type pgSquadRepository struct {
	adapter *db.Default
}

// NewSquadRepository returns an initialized PostgreSQL repository for squads
func NewSquadRepository(cfg *db.Configuration, session *sqlx.DB) repositories.Squad {
	// Defines allowed columns
	defaultColumns := []string{
		"id", "name", "meta", "product_owner_id", "member_ids",
	}

	// Sortable columns
	sortableColumns := []string{
		"name", "product_owner_id",
	}

	return &pgSquadRepository{
		adapter: db.NewCRUDTable(session, "", SquadTableName, defaultColumns, sortableColumns),
	}
}

// ------------------------------------------------------------

type sqlSquad struct {
	ID             string `db:"id"`
	Name           string `db:"name"`
	Meta           string `db:"meta"`
	ProductOwnerID string `db:"product_owner_id"`
	MemberIDs      string `db:"member_ids"`
}

func toSquadSQL(entity *models.Squad) (*sqlSquad, error) {
	meta, err := json.Marshal(entity.Meta)
	if err != nil {
		return nil, errors.WithStack(err)
	}

	members, err := json.Marshal(entity.MemberIDs)
	if err != nil {
		return nil, errors.WithStack(err)
	}

	return &sqlSquad{
		ID:             entity.ID,
		Name:           entity.Name,
		Meta:           string(meta),
		MemberIDs:      string(members),
		ProductOwnerID: entity.ProductOwnerID,
	}, nil
}

func (dto *sqlSquad) ToEntity() (*models.Squad, error) {
	entity := &models.Squad{
		ID:             dto.ID,
		Name:           dto.Name,
		ProductOwnerID: dto.ProductOwnerID,
	}

	// Decode JSON columns

	// Metadata
	err := json.Unmarshal([]byte(dto.Meta), &entity.Meta)
	if err != nil {
		return nil, errors.WithStack(err)
	}

	// Membership
	err = json.Unmarshal([]byte(dto.MemberIDs), &entity.MemberIDs)
	if err != nil {
		return nil, errors.WithStack(err)
	}

	return entity, nil
}

// ------------------------------------------------------------

func (r *pgSquadRepository) Create(ctx context.Context, entity *models.Squad) error {
	// Validate entity first
	if err := entity.Validate(); err != nil {
		return err
	}

	// Convert to DTO
	data, err := toSquadSQL(entity)
	if err != nil {
		return err
	}

	return r.adapter.Create(ctx, data)
}

func (r *pgSquadRepository) Get(ctx context.Context, id string) (*models.Squad, error) {
	var entity sqlSquad

	if err := r.adapter.WhereAndFetchOne(ctx, map[string]interface{}{
		"id": id,
	}, &entity); err != nil {
		return nil, err
	}

	return entity.ToEntity()
}

func (r *pgSquadRepository) Update(ctx context.Context, entity *models.Squad) error {
	// Validate entity first
	if err := entity.Validate(); err != nil {
		return err
	}

	// Intermediary DTO
	obj, err := toSquadSQL(entity)
	if err != nil {
		return err
	}

	return r.adapter.Update(ctx, map[string]interface{}{
		"name":             obj.Name,
		"meta":             obj.Meta,
		"product_owner_id": obj.ProductOwnerID,
		"member_ids":       obj.MemberIDs,
	}, map[string]interface{}{
		"id": entity.ID,
	})
}
MongoDB

Another example with a NoSQL persistence adapter.

package mongodb

import (
	"context"

	mongowrapper "github.com/opencensus-integrations/gomongowrapper"
	db "go.zenithar.org/pkg/db/adapter/mongodb"
	"go.zenithar.org/spotigraph/internal/models"
	"go.zenithar.org/spotigraph/internal/repositories"
)

type mgoSquadRepository struct {
	adapter *db.Default
}

// NewSquadRepository returns an initialized MongoDB repository for squads
func NewSquadRepository(cfg *db.Configuration, session *mongowrapper.WrappedClient) repositories.Squad {
	return &mgoSquadRepository{
		adapter: db.NewCRUDTable(session, cfg.DatabaseName, SquadTableName),
	}
}

// ------------------------------------------------------------

func (r *mgoSquadRepository) Create(ctx context.Context, entity *models.Squad) error {
	// Validate entity first
	if err := entity.Validate(); err != nil {
		return err
	}

	return r.adapter.Insert(ctx, entity)
}

func (r *mgoSquadRepository) Get(ctx context.Context, id string) (*models.Squad, error) {
	var entity models.Squad

	if err := r.adapter.WhereAndFetchOne(ctx, map[string]interface{}{
		"id": id,
	}, &entity); err != nil {
		return nil, err
	}

	return &entity, nil
}

func (r *mgoSquadRepository) Update(ctx context.Context, entity *models.Squad) error {
	// Validate entity first
	if err := entity.Validate(); err != nil {
		return err
	}

	return r.adapter.Update(ctx, map[string]interface{}{
		"name":             entity.Name,
		"meta":             entity.Meta,
		"product_owner_id": entity.ProductOwnerID,
	}, map[string]interface{}{
		"id": entity.ID,
	})
}

func (r *mgoSquadRepository) Delete(ctx context.Context, id string) error {
	return r.adapter.Delete(ctx, id)
}

func (r *mgoSquadRepository) FindByName(ctx context.Context, name string) (*models.Squad, error) {
	var entity models.Squad

	if err := r.adapter.WhereAndFetchOne(ctx, map[string]interface{}{
		"name": name,
	}, &entity); err != nil {
		return nil, err
	}

	return &entity, nil
}

func (r *mgoSquadRepository) AddMembers(ctx context.Context, id string, users ...*models.User) error {
	// Retrieve squad entity
	entity, err := r.Get(ctx, id)
	if err != nil {
		return err
	}

	// Add user as members
	for _, u := range users {
		entity.AddMember(u)
	}

	// Update members
	return r.adapter.Update(ctx, map[string]interface{}{
		"member_ids": entity.MemberIDs,
	}, map[string]interface{}{
		"id": entity.ID,
	})
}

func (r *mgoSquadRepository) RemoveMembers(ctx context.Context, id string, users ...*models.User) error {
	// Retrieve squad entity
	entity, err := r.Get(ctx, id)
	if err != nil {
		return err
	}

	// Remove user from members
	for _, u := range users {
		entity.RemoveMember(u)
	}

	// Update members
	return r.adapter.Update(ctx, map[string]interface{}{
		"member_ids": entity.MemberIDs,
	}, map[string]interface{}{
		"id": entity.ID,
	})
}

Never update a full object without controlling each key: set up your update functions so that only the attributes meant to be updatable can actually be written.

Running integration tests

In order to run integration tests with Golang, you must prepare a TestMain. This runner is responsible for building all the persistence adapter instances requested via command-line flags.

The test specification is then used to generate the full test scenario by passing each persistence adapter to the generator.

Obviously, all persistence adapter implementations should exhibit the same behavior, and that behavior is validated by the shared test suite.

// +build integration

package integration

import (
	"context"
	"flag"
	"fmt"
	"math/rand"
	"os"
	"strings"
	"testing"
	"time"

	"go.zenithar.org/pkg/testing/containers/database"

	"go.uber.org/zap"
	"go.zenithar.org/pkg/log"
	"go.zenithar.org/spotigraph/internal/version"
)

var databases = flag.String("databases", "postgresql", "Repository backends to use, separated by a comma ','. Example: postgresql,mongodb,rethinkdb")

func init() {
	flag.Parse()

	ctx := context.Background()

	// Prepare logger
	log.Setup(ctx, &log.Options{
		Debug:     true,
		AppName:   "spotigraph-integration-tests",
		AppID:     "123456",
		Version:   version.Version,
		Revision:  version.Revision,
		SentryDSN: "",
	})

	// Initialize random seed
	rand.Seed(time.Now().UTC().Unix())

	// Set UTC for all time
	time.Local = time.UTC
}

func testMainWrapper(m *testing.M) int {
	if testing.Short() {
		fmt.Println("Skipping integration tests")
		return 0
	}

	log.Bg().Info("Initializing test DB for integration test (disable with `go test -short`)")

	ctx := context.Background()
	backends := strings.Split(strings.ToLower(*databases), ",")

	for _, back := range backends {
		switch back {
		case "postgresql":
			// Initialize postgresql
			cancel, err := postgreSQLConnection(ctx)
			if err != nil {
				log.Bg().Fatal("Unable to initialize repositories", zap.Error(err))
			}
			defer func() {
				cancel()
			}()
		default:
			log.Bg().Fatal("Unsupported backend", zap.String("backend", back))
		}
	}

	defer func() {
		database.KillAll(ctx)
	}()

	return m.Run()
}

// TestMain is the test entrypoint
func TestMain(m *testing.M) {
	os.Exit(testMainWrapper(m))
}

It initializes a Docker container running a PostgreSQL server, using ory-am/dockertest. This automates the database server deployment when running tests locally, and a remote database instance can be used instead (for example, when your CI pipeline manages the database lifecycle).

package database

import (
	"fmt"
	"log"

	// Load driver if not already done
	_ "github.com/lib/pq"

	"github.com/dchest/uniuri"
	dockertest "gopkg.in/ory-am/dockertest.v3"

	"go.zenithar.org/pkg/testing/containers"
)

var (
	// PostgreSQLVersion defines version to use
	PostgreSQLVersion = "10"
)

// PostgreSQLContainer represents database container handler
type postgreSQLContainer struct {
	Name     string
	pool     *dockertest.Pool
	resource *dockertest.Resource
	config   *Configuration
}

// newPostgresContainer initializes a PostgreSQL server in a Docker container
func newPostgresContainer(pool *dockertest.Pool) *postgreSQLContainer {

	var (
		databaseName = fmt.Sprintf("test-%s", uniuri.NewLen(8))
		databaseUser = fmt.Sprintf("user-%s", uniuri.NewLen(8))
		password     = uniuri.NewLen(32)
	)

	// Initialize a PostgreSQL server
	resource, err := pool.Run("postgres", PostgreSQLVersion, []string{
		fmt.Sprintf("POSTGRES_PASSWORD=%s", password),
		fmt.Sprintf("POSTGRES_DB=%s", databaseName),
		fmt.Sprintf("POSTGRES_USER=%s", databaseUser),
	})
	if err != nil {
		log.Fatalf("Could not start resource: %s", err)
	}

	// Prepare connection string
	connectionString := fmt.Sprintf("postgres://%s:%s@localhost:%s/%s?sslmode=disable", databaseUser, password, resource.GetPort("5432/tcp"), databaseName)

	// Retrieve container name
	containerName := containers.GetName(resource)

	// Return container information
	return &postgreSQLContainer{
		Name:     containerName,
		pool:     pool,
		resource: resource,
		config: &Configuration{
			ConnectionString: connectionString,
			Password:         password,
			DatabaseName:     databaseName,
			DatabaseUser:     databaseUser,
		},
	}
}

// -------------------------------------------------------------------

// Close the container
func (container *postgreSQLContainer) Close() error {
	log.Printf("Postgres (%v): shutting down", container.Name)
	return container.pool.Purge(container.resource)
}

// Configuration return database settings
func (container *postgreSQLContainer) Configuration() *Configuration {
	return container.config
}

The complete integration test suite can be found here: https://github.com/Zenithar/go-spotigraph/blob/master/internal/repositories/test/integration/main_test.go

Conclusion

At this point, you should be able to execute the full test suite against every persistence adapter of your domain. From here, you can build business services on top of these adapters through their interfaces, NOT DIRECTLY through the adapter instances.

References

How I write micro-services? (part 3)

In the previous post (part 2), we completed the first version of all our SCRUD persistence adapters. Now we can assemble them into services to produce the expected product value.

Golang Template Project

Business

Use Cases (aka Services)

Constraints
Decorators

Service Objects

Infrastructure

Configuration Management

Core Wiring

Observability

Dispatchers

Packaging

Artifacts

Command Line

Deployment

Docker