How do you send information between clients and servers? What format should that information be in? What happens when the server changes the format, but the client has not been updated yet? What happens when the server changes the format, but the database cannot be updated?
These are difficult questions. It is not just about picking a format, but rather picking a format that can evolve as your application evolves.
By now there are many approaches to communicating between client and server. These approaches tend to be well known within specific companies and language communities, but the techniques rarely cross those borders. I will outline JSON, ProtoBuf, and GraphQL here so we can learn from them all.
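As a toy Scala illustration of the "evolve" point (the types below are invented and not tied to any particular format): making a newly added field optional, with a default, is one way a server-side format can change without breaking clients that still produce or expect the old shape.

// Version 1 of a message exchanged between client and server.
final case class UserV1(id: Long, name: String)

// Version 2 adds a field. Because it is optional with a default,
// data written in the V1 shape can still be represented after the upgrade.
final case class UserV2(id: Long, name: String, email: Option[String] = None)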
A function is a mapping from one set, called a domain, to another set, called the codomain. A function associates every element in the domain with exactly one element in the codomain. In Scala, both domain and codomain are types.
val square: Int => Int = x => x * x
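Here both the domain and the codomain are `Int`: applying the function, `square(3)` evaluates to `9`, associating the domain element 3 with exactly one codomain element.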
- Web server: Play (framework) or http4s (library)
- Actors: Akka
- Asynchronous programming: Monix (Task, Observable, Scheduler, etc.; see the sketch after this list)
- Authentication: Silhouette
- Authorization: Deadbolt
- Command-line option parsing: case-app
- CSV parsing: kantan.csv
- Database access: doobie (e.g. for PostgreSQL)
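To make the asynchronous-programming entry concrete, here is a minimal, self-contained sketch using Monix (assuming Monix 3.x; the object name and the values are invented for the example):

import monix.eval.Task
import monix.execution.Scheduler.Implicits.global

import scala.concurrent.Await
import scala.concurrent.duration._

object MonixExample extends App {
  // A Task is a lazy description of a computation; nothing runs until it is executed.
  val add: Task[Int] = Task(20 + 22)

  // Composition stays pure; the println happens only when the task is actually run.
  val program: Task[Unit] = add.map(n => println(s"answer: $n"))

  // Running requires a Scheduler (imported above) and yields a Future we can wait on.
  Await.result(program.runToFuture, 5.seconds)
}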
All things considered, our experience in Scala Native has shown that resource management in Scala is way harder than it should be. This gist presents a simple design pattern that makes resource management absolutely hassle-free: scoped implicit lifetimes.
The main idea behind it is to encode resource lifetimes through the concept of an implicit scope. Scopes are necessary to acquire resources, and they are responsible for disposing of those resources once evaluation exits the scope.
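The gist's actual code is not reproduced here; below is a minimal sketch in plain Scala of how such a pattern can look. The names (`Scope`, `defer`, `FileHandle`) and the file path are illustrative only.

import scala.collection.mutable.ListBuffer
import scala.util.control.NonFatal

// A Scope collects cleanup actions and runs them (in reverse order) on exit.
final class Scope private () {
  private val cleanups = ListBuffer.empty[() => Unit]

  // Register a cleanup action to run when the scope is closed.
  def defer(cleanup: () => Unit): Unit = cleanups += cleanup

  private def close(): Unit =
    cleanups.reverseIterator.foreach(c => try c() catch { case NonFatal(_) => () })
}

object Scope {
  // Resources acquired while `f` runs are disposed when evaluation exits the scope,
  // whether it returns normally or throws.
  def apply[A](f: Scope => A): A = {
    val scope = new Scope
    try f(scope) finally scope.close()
  }
}

// A resource that can only be acquired when an implicit Scope is available.
final class FileHandle private (path: String) {
  def read(): String = s"contents of $path"   // stand-in for real I/O
  private def release(): Unit = println(s"released $path")
}

object FileHandle {
  def open(path: String)(implicit scope: Scope): FileHandle = {
    val handle = new FileHandle(path)
    scope.defer(() => handle.release())
    handle
  }
}

object Example extends App {
  Scope { implicit scope =>
    val f = FileHandle.open("/tmp/data.txt")  // hypothetical path
    println(f.read())
  }                                           // f is released here
}

The key property is that `FileHandle.open` cannot even be called without an implicit `Scope` in scope, so acquisition and disposal are tied together at the type level.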
# Add this snippet to the top of your playbook.
# It will install python2 if missing (but checks first, so no expensive repeated apt updates)
# [email protected]
- hosts: all
  gather_facts: False
  tasks:
    - name: install python 2
      raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
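Saved as, say, `bootstrap.yml` (the file name is just an example), the play runs with `ansible-playbook -i <inventory> bootstrap.yml`. Because `gather_facts` is off and the task uses the `raw` module, it works even on hosts that do not yet have Python installed.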
#!/bin/sh
# Make sure to:
# 1) Name this file `backup.sh` and place it in /home/ubuntu
# 2) Run `sudo apt-get install awscli` to install the AWS CLI
# 3) Run `aws configure` (enter an S3-authorized IAM user and specify the region)
# 4) Fill in the DB host + name
# 5) Create an S3 bucket for the backups and fill it in below (set a lifecycle rule to expire files older than X days in the bucket)
# 6) Run `chmod +x backup.sh`
# 7) Test it out via `./backup.sh`
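Once it works when run by hand, the script can be scheduled; for example, a crontab entry such as `0 2 * * * /home/ubuntu/backup.sh` (the time is arbitrary) would run the backup nightly at 02:00.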
package com.vaughndickson.elasticsearch

import groovy.util.logging.Slf4j
import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus
import org.elasticsearch.client.Client
import org.elasticsearch.client.transport.TransportClient
import org.elasticsearch.common.settings.ImmutableSettings
import org.elasticsearch.common.settings.Settings
import org.elasticsearch.common.transport.InetSocketTransportAddress
import org.springframework.beans.factory.DisposableBean
import scala.collection.immutable.LongMap

/**
 * A Map which tracks the insertion order of entries, so that entries may be
 * traversed in the order they were inserted. Uses just two purely functional
 * maps.
 */
class LinkedMap[K,V]( | |
entries: Map[K,(V,Long)], |