Building Core libraries with Meta Programming
- Re-use across projects is much cleaner
- Separation of responsibility - parts of your code may be better suited to different developers or teams
- You benefit from improvements that other teams make to the libraries without having to locate and change the specific code yourself
- Cleaner design and code - structuring things into libraries groups related functionality together, and you tend to separate generic code (in the libraries) from application specifics (in the app)
- If you use only a small subset of the external library
- If there is any possibility of changing the external library in the future
- Even if there is no possibility of changing the external library, wrapping it still helps (see the sketch after this list):
- Your code base becomes more flexible to changes
- You can define the API of the wrapper independently of the API of the library
- Unit testing is way simpler
- You create a loosely coupled system
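A minimal sketch of the wrapper idea in Python, assuming the application only uses a small slice of an HTTP client library. The `requests` dependency, the `HttpClient` name, and the example URL are all illustrative choices, not something the notes prescribe; the point is that the rest of the codebase depends on the wrapper's API, so swapping the underlying library touches one module.

```python
# payments_http.py - thin wrapper so the app never imports `requests` directly.
import requests


class HttpClient:
    """The only HTTP surface the rest of the codebase is allowed to use."""

    def __init__(self, base_url: str, timeout: float = 5.0):
        self.base_url = base_url.rstrip("/")
        self.timeout = timeout
        self._session = requests.Session()

    def get_json(self, path: str, **params) -> dict:
        # Swapping requests for another client later only changes this module.
        response = self._session.get(
            f"{self.base_url}/{path.lstrip('/')}",
            params=params,
            timeout=self.timeout,
        )
        response.raise_for_status()
        return response.json()


# Usage: client = HttpClient("https://api.example.com"); client.get_json("orders", id=42)
```

Unit tests can now stub `HttpClient` instead of patching library internals, which is what makes the loose coupling and simpler testing above possible.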
Collector - FluentD/Beats (Filebeat/Metricbeat)
Backend store - ES
Visualization - Kibana
- Environment-specific log encoding - JSON in production (for machine consumption), console output in development (for humans)
- Configuration to specify the mandatory parameters to be picked up from thread-local variables (see the sketch after this list)
{
  "level": "info",
  "ip": "127.0.0.1",
  "log": "raw log from source",
  "request_id": "abcdefg",
  "xxx_metadata": {},
  "payload": {}
}
- Flexibility to add new variables
- Strict type checking
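A minimal sketch of injecting the mandatory fields from thread-local context using only the Python standard library. The field names follow the sample document above; `bind_context`, `JsonFormatter`, and the logger name are hypothetical helpers, not an existing API.

```python
import json
import logging
import threading

_context = threading.local()  # holds per-request mandatory fields


def bind_context(**fields):
    """Store mandatory parameters (e.g. request_id, ip) for the current thread."""
    for key, value in fields.items():
        setattr(_context, key, value)


class JsonFormatter(logging.Formatter):
    MANDATORY = ("request_id", "ip")  # fields every log line must carry

    def format(self, record):
        doc = {
            "level": record.levelname.lower(),
            "log": record.getMessage(),
            "payload": getattr(record, "payload", {}),
        }
        for field in self.MANDATORY:
            doc[field] = getattr(_context, field, None)
        return json.dumps(doc)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

bind_context(request_id="abcdefg", ip="127.0.0.1")
logger.info("raw log from source", extra={"payload": {"order_id": 42}})
```

In development the same logger could keep the default console formatter; only the handler configuration changes per environment.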
Platform/Framework
Service essentials
- Independently Developed & Deployed
- Private Data Ownership
If changes to a shared library require all services be updated simultaneously, then you have a point of tight coupling across services. Carefully understand the implications of any shared library you're introducing.
https://www.youtube.com/watch?v=X0tjziAQfNQ
https://dzone.com/articles/microservices-in-practice-1
https://eng.uber.com/building-tincup/
https://eng.uber.com/tech-stack-part-one/
https://konghq.com/webinars-success-service-mesh-architecture-monoliths-microservices-beyond/
For each microservice, track the following (an instrumentation sketch follows the tool list below):
- Overall CPU utilization
- Overall Memory utilization
- Overall Disk utilization
- Latency per API (50th, 95th, 99th percentile)
- Throughput per API (max throughput, avg throughput)
- Newrelic
- Elastic.co APM
- OpenCensus
- Prometheus
- Zipkin
- Jaeger
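A sketch of exposing the per-API latency and throughput metrics with the Prometheus Python client. The `prometheus_client` dependency, metric names, label values, and port are assumptions for illustration only.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Latency per API: p50/p95/p99 are computed from these buckets at query
# time with histogram_quantile().
REQUEST_LATENCY = Histogram(
    "http_request_duration_seconds",
    "Latency per API",
    ["api"],
)
# Throughput per API: rate(http_requests_total[1m]) gives requests/second.
REQUEST_COUNT = Counter("http_requests_total", "Requests per API", ["api"])


def handle_get_order():
    with REQUEST_LATENCY.labels(api="get_order").time():
        REQUEST_COUNT.labels(api="get_order").inc()
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_get_order()
```

Overall CPU, memory, and disk utilization typically come from node_exporter or cAdvisor rather than application code.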
https://www.elastic.co/solutions/apm
https://github.com/kubernetes/heapster
https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus
SPDY was an experimental protocol, developed at Google and announced in mid 2009, whose primary goal was to try to reduce the load latency of web pages by addressing some of the well-known performance limitations of HTTP/1.1.
HTTP/2 reduces latency by enabling full request and response multiplexing, minimizes protocol overhead via efficient compression of HTTP header fields, adds support for request prioritization and server push, and allows multiple concurrent exchanges on the same connection.
RFC 7540 (HTTP/2) and RFC 7541 (HPACK)
HTTP/0.9 was a one-line protocol to bootstrap the World Wide Web.
HTTP/1.0 documented the popular extensions to HTTP/0.9 in an informational standard.
HTTP/1.1 introduced an official IETF standard.
HTTP/1.x clients need to use multiple connections to achieve concurrency and reduce latency; HTTP/1.x does not compress request and response headers, causing unnecessary network traffic; HTTP/1.x does not allow effective resource prioritization, resulting in poor use of the underlying TCP connection; and so on.
Optimized encoding mechanism between the socket interface and the higher HTTP API exposed to our applications: the HTTP semantics, such as verbs, methods, and headers, are unaffected, but the way they are encoded in transit is different. Instead of newline-delimited plaintext, HTTP/2 uses a binary framing layer.
Stream: A bidirectional flow of bytes within an established connection, which may carry one or more messages.
Message: A complete sequence of frames that map to a logical request or response message.
Frame: The smallest unit of communication in HTTP/2, each containing a frame header, which at a minimum identifies the stream to which the frame belongs.
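To make the multiplexing point concrete, a minimal sketch using the httpx library (an assumption; the notes do not name a client). It needs the optional http2 extra installed, and the target URL is only illustrative and must point at an HTTP/2-capable server.

```python
import asyncio

import httpx


async def main():
    # One TCP connection; the three requests ride on it as concurrent
    # HTTP/2 streams instead of three separate HTTP/1.x connections.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(
            *(client.get("https://example.com/") for _ in range(3))
        )
        for response in responses:
            print(response.http_version, response.status_code)


asyncio.run(main())
```

Header compression (HPACK) and stream prioritization happen inside the client and server; application code only sees the familiar HTTP semantics.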
- Canonical
- Performance
- Backward compatibility
- Polyglot
High Performance Browser Networking by Ilya Grigorik
Message Queues - RabbitMQ, Kafka
Consideration | RabbitMQ | Kafka |
---|---|---|
Language | Erlang | Scala |
Organization | | |
- Exchanges
- Queues
- Bindings
Push API (RabbitMQ pushes messages to consumers) vs Pull API (Kafka consumers pull from the broker)
MQTT STOMP
Clustering
Federation
Shovel
Nodes are equal peers, with no master/slave setup. Data is sharded between the nodes and can be viewed by a client from any node.
All data/state for the cluster is replicated across nodes, but the queues themselves are not. Each queue has a master node.
- Mirrored Queues
- Non-Mirrored Queues
Node discovery happens with the Erlang cookie located at /var/lib/rabbitmq/.erlang.cookie, using any one of the standard peer discovery plugins such as rabbit_peer_discovery_k8s
Disk vs RAM nodes - at least one disk node should always be present
How do external clients connect to RabbitMQ?
How does node discovery happen?
Where are messages stored on disk? - /var/lib/rabbitmq/mnesia/rabbit@hostname/queues - file locations
rabbitmq-server
rabbitmqctl status
rabbitmq-plugins list
rabbitmqadmin
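Tying the exchange/queue/binding concepts above together, a minimal publish/consume sketch with the pika client. The pika library, the exchange/queue names, and the localhost broker are assumptions for illustration.

```python
import pika

# Connect to a local broker; credentials/host would come from config in practice.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Exchange, queue, and the binding that routes messages between them.
channel.exchange_declare(exchange="orders", exchange_type="direct", durable=True)
channel.queue_declare(queue="order_created", durable=True)
channel.queue_bind(queue="order_created", exchange="orders", routing_key="created")

# Publish (push model: the broker pushes this to bound consumers).
channel.basic_publish(
    exchange="orders",
    routing_key="created",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist message to disk
)


def on_message(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue="order_created", on_message_callback=on_message)
channel.start_consuming()
```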
- How do we handle errors in logging the data (type errors and so on)? How will we be notified? RDBMS errors will surface in Sentry
- What are the data points?
- How to retain 1 year old data and use it?
- How to add new data?
- How to make breaking changes to the data stored for a better version of it?
- What is the overhead (ms) of using a DB over ES?
- No updates to the call information? Consider this scenario: for a call about an order refund status in the e-commerce domain, we raise a ticket in the e-commerce platform and keep the call details as unresolved. We then need to mark it as resolved for the client, either by contacting the customer manually or in an automated way.
- Multi-tenancy? Separate database for each client?
Possible categories of data
- Write only data.
- Significant/Critical data.
- Not so significant data.
- Structured data.
- Unstructured data.
Possible Architectural Solutions
- Store call logs in an RDBMS - the straightforward approach.
- Push details to a message queue, then consume and store in an RDBMS, to keep the DB write overhead off the request path.
- Push details to a message queue, then consume and store in a NoSQL store like Mongo or Cassandra.
- Push-and-forget all the structured data directly to ES (see the sketch after this list).
- Use application logs and ship them to ES using Beats or Fluentd.
- Keep the data model generic and just use the structs
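A minimal sketch of the push-and-forget option using the official Elasticsearch Python client. The client choice, cluster address, index name, and document fields are assumptions, and parameter names vary slightly between client major versions.

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # cluster address is illustrative

call_log = {
    "request_id": "abcdefg",
    "client": "acme",
    "category": "order_refund_status",
    "resolved": False,
    "payload": {"order_id": 42},
    "@timestamp": datetime.now(timezone.utc).isoformat(),
}

# Write-only workload: index the document and move on; no transactions needed.
es.index(index="call-logs", document=call_log)
```

The activity dashboard then becomes plain searches and aggregations over the call-logs index.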
To build an activity dashboard and query the data:
- RDBMS
- Logging approach
RESEARCH
ES - Primary data store
Good option for cases with only writes (no updates), many reads, and no need for transactions, integrity, constraints (datatype, PK, FK, NOT NULL, DEFAULT, UNIQUE), correctness and robustness
Elasticsearch is commonly used in addition to another database.