Elixir in the Next 5 Years - Jose Valim Video
- Earlier
- Enhanced documentation with `authors` and `deprecated` metadata. Upgraded ExDoc
- Development page on elixir-lang.org
- Community is getting better. New podcasts, frameworks, and faster compilation.
- Professional support of developers by Plataformatec
- Ongoing
- Defining guidelines on new Elixir features. Prefer "Enable" vs "Provide"
- Property-testing. External library
- XHTTP. External library
- Code formatter
- Releases.
- Investigating Type System.
- Future
- Elixir 2.0 to rid deprecations
- Releases into stdlib
- "The next five years are in your hands"
Debugging - Luke Imhoff Video
- Ways to debug:
- IO
- Pry
- Graphical
- Tracing (Not covered)
- Sometimes difficult because your Elixir code doesn't actually exist as-is after compilation.
- Code chunks are compiled to a variety of formats
- IntelliJ-Elixir has access to these code chunks for you to review
- Elixir -> Erlang -> BEAM assembly language -> C -> Assembly
- Yikes
Architecting Flow in Elixir - Rene Fohring Video
- Pipes
- `with` macro
- Token approach
- Create a "Token" struct to hold context as data flows
- Build behaviors/callbacks to enforce consistency
- Use metaprogramming to create small DSL if necessary.
Growing Applications and Taming Complexity - Aaron Renner Video
When a module is reaching into multiple other modules, it's an indicator of complexity.
- Create public API
- Try making implementations swappable. This makes it testable and defines boundaries. Use Mox testing library
- Rinse and repeat for internal APIs
Take your Time - Ben Marx Video
- Schedulers have a run queue. This is good to look at for determining where the bottlenecks are.
- NIFs (Native Implemented Functions) are usually written in C. This is where you lose guarantees when interacting with Erlang.
- Dirty Schedulers. Introduced in OTP 17, but enabled by default in OTP 21. Dirty schedulers are either IO intensive or CPU intensive. Dirty schedulers are unmanaged, like async tasks.
- Since C is hard, use Rust, which provides type-safety and thread-safety.
- Use Rustler to create Rust NIFs for Erlang. Watch "Taking Elixir to the Metal" by Sonny Scroggin
- Avoid NIFs until you really need IO or CPU-intensive work.
Introducing NervesHub - Justin Schneck Video
- Nerves is a set of tooling around releasing into embedded hardware.
- Nerves is not Raspbian or GRiSP, and it is not small enough for Arduinos
- Deploying Nerves to fleet of IoT is difficult, so:
- NervesHub: `mix nerves_hub.firmware publish` publishes firmware
- NervesHub manages secure deployments
- NervesHub can deploy to groups of devices and provide unique serial numbers.
Event Sourcing in Real World Applications - Gaslight Video
- Event Storming helps determine domains
- Event Sourcing is a technique of logging where every change is stored, so that at any time, the current state could be reconstructed
- Event Sourcing is good when auditing is very important.
- Existing Libraries:
- Asynchronous processes make coordinating time-sensitive things difficult
- The database doesn't have to be your only store. It can be in memory too
Docker and OTP: Friends or Foes? - Daniel Azuma Video
- Elixir came out around the same time as Containers and Docker. OTP and container orchestration have some friction; it's a clash of culture.
- Problem 1: Maintaining an Erlang Cluster We can use a library: libcluster.
- Problem 2: Handling node shutdown. For channels, Phoenix already has a solution with Phoenix Presence. For GenServers, an OTP supervisor could restart the process, but it's the container that goes down, not the process. Instead, we want a "Distributed" supervisor. There's a solution: horde
- Problem 3: Preserving process state. When a node is restarted, it'll grab the default state which is likely empty. We need a "Distributed Agent" == CRDT. Swarm provides a CRDT.
- Problem 4: Preserving communication. OTP provides a registry for named processes, but we need a "Distributed Registry". Horde provides a distributed registry.
- Let's be friends :)
Introducing Scenic - A functional UI Framework - Boyd Multerer Video
- Scenic Goals: Small. Fast. Simple. Robust. Remoteable. Approachable. Secure
- Scenic Architecture:
- Scene layer. Genserver process. Analogous to a web page. Defines a graph.
- ViewPort layer. Intermediate management layer
- Driver layer. Specific to hardware. Does actual render. Handles user input
- The service - Hosted service for Scenic applications. User auth, UI remoting, audit trails, and debugging. https://www.kry10.com
- Basic controls are included in the library: buttons, dropdowns, radios, text input, password field, painting images, drawing lines.
- OTP handles crashes and restarts really well. OTP limits the blast radius of a bug
Using Elixir and OTP Behaviors to monitor infrastructure - Jeffry Gillis Video
- Troubleshooting legacy systems is manual. Let's use Elixir to make it easier
- Erlang has SSH server built in. Use https://github.com/rubencaro/sshex as an Elixir wrapper
- Wrap SSH connections with a GenServer
- Track the GenServer SSH processes with the OTP registry. Start up a new registry and supervise it.
- Use the DynamicSupervisor to automatically restart dead processes
- The OTP Registry can also use a Pub/Sub architecture. This is helpful because SSH processes could take a little time to be ready since it's over the network.
- Wrap it with a Phoenix app to visualize metrics. https://github.com/deadtrickster/prometheus.ex
Scaling Concurrency without Getting burned - Alex Garibay Video
- Previous architecture: Rails-esque worker queues. Important data fetched serially
- New architecture: Embrace OTP platform. Catalog entire blockchain as fast as possible. Efficient event dispatching
- GenServer -> Start Task. Recover the Task with saved state in the GenServer.
- Supervised Task -> Monitor but don't link: `Task.Supervisor.async_nolink`
- Maximize database writes: `Repo.insert` can be slow with rapid calls. Instead, use `Ecto.Repo.insert_all/3`
- Maximize database reads: `Repo.all` can be too slow since results are accumulated before they return (the database loads rows into memory, then again into Elixir as structs). `Ecto.Repo.stream/2` is a lazy enumerable and more memory efficient. Pass `timeout: :infinity` to both `Repo.stream` and `Repo.transaction`
- Introduce a read-heavy cache. A common pattern is to store state in an Agent, but don't. Instead, use ETS with the read-concurrency flag: `:ets.new(:name, read_concurrency: true, write_concurrency: true)`
- Efficient event dispatching through the Registry pub/sub
Going Full Circle - Chris McCord Video
- Phoenix 1.4 will be "Out Soon". Release Candidate releasing this week.
- HTTP2 with Cowboy 2
- Fast dev compilation
- Overhauled Transport layer
- Improved Presence API
- Telemetry - separate library: `phoenix_telemetry`
- Tiny Core
- Reporters
- Aggregators
- Introducing Phoenix.LiveView. Not part of Phoenix core. Rich user experiences without the complexity.
- Includes `phx_*` event bindings
- Can it scale? Yes, but it's not free. It can achieve 60fps!
- It's not best for all situations: if you need offline support, have high-latency clients, need to take advantage of native features, or have an incredibly complex UI, then SPAs are better choices. Otherwise, LiveView fits the bill.
- Phoenix is not just about performance. It's also about advancing the state of the art.
https://www.youtube.com/watch?v=m7TWMFtDwHg&index=12&list=PLqj39LCvnOWaxI87jVkxSdtjG8tlhl7U6
- Documentation metadata: `authors`, `since`, and `deprecated`, using these @doc tags in ExDoc
- New podcasts
- New ExDoc versions. Documentation is first-class
- Erlang/OTP 21 released, which leads to 15-20% faster compilation.
- Elixir: A Mini-Documentary
- Membrane Framework. Before it was Ecto, Phoenix, Nerves, and now Membrane. Evidence of extending Elixir.
- New "Development" page that explains next set goals, and how to contribute.
- Started the "Elixir Development Subscription"
- Hired Wojtek Mach for dedicated Elixir language development (up to 2 dedicated developers, plus Jose)
- Guidelines for new features
- Extensibility. Prefer 'Enable' vs 'Provide'
- Conservative. It's very difficult to remove code once added. If unsure, then postpone.
- Code formatter
- Helps you focus on what matters.
- Community consistency.
- No style guide. It's a bad introduction into the language.
- Unreadable code is still unreadable code and the formatter will expose that.
- Property-based testing
- Exposed bugs in stdlib, so the community can benefit too.
- Promote best-practices in the community.
- Not going into the stdlib; remains a library: StreamData, PropEr. (A minimal property-test sketch appears at the end of this section.)
- https://github.com/whatyouhide/stream_data
- XHTTP
- "An HTTP Client that does not prescribe a process architecture."
- Remain as a library outside of Elixir
- https://github.com/ericmj/xhttp
- Releases
- Releases are the best and safest mechanism to deploy systems.
- Continue using Distillery.
- Eventually will be part of core (since it's part of OTP).
- Configuration is a big topic that needs to be "standardized". Experimentation is happening now.
- Type System
- "Detecting errors via type-checking", but no type system will protect from all errors.
- Started development, then stopped development
- Fault-tolerance is necessary in Elixir, so it can't be too strict.
- Intersection types are very expensive in terms of inference to the point of being impractical. An alternative would be to require all inputs to be explicitly typed; but would that be Elixir?
- This is not going into stdlib unless we find a good performant way.
- Elixir 2.0
- If we need a major language version to change the language, then we failed to build an extensible language.
- Since 2014, there is only one breaking change planned for v2.0
- Old practices are deprecated.
- Will solidify best practices and remove deprecated ones.
- We are not in a hurry.
- The last major planned feature for Elixir is "Releases"
- New projects, ideas, and developments belong in the ecosystem.
- "The next five years are in your hands"
https://www.youtube.com/watch?v=w4xMarVUZQ4&list=PLqj39LCvnOWaxI87jVkxSdtjG8tlhl7U6&index=1
- IO
- Pry
- Graphical
- Tracing (Not covered)
- `IO.puts`
- `IO.inspect` options: `:limit`, `:printable_limit`, `:as_strings`, `:pretty`, `:structs` (structs sometimes have protocols that hide private fields), `:width` (default 80)
- `IO.warn`, which also prints in red and with a stacktrace. Originally used for deprecation warnings
- Stop on the line and evaluate code with `require IEx; IEx.pry()`
- `IEx.break!(Module.my_function/1)` sets a breakpoint without editing the source
- `respawn` kills the iex shell process
- `continue` continues the process
- `whereami` gives you the context where the pry is happening
- `open` opens the source file in $ELIXIR_EDITOR or $EDITOR
- CTRL+C in iex actually stops the VM, whereas answering "Allow? [Yn]" lets the VM continue
- Conditional breakpoints, run one test, or use patterns/guards.
- Code chunks are compiled to a variety of formats
- IntelliJ-Elixir has access to these code chunks for you to review
- Elixir -> Erlang -> BEAM assembly language -> C -> Assembly
- Yikes
https://www.youtube.com/watch?v=uU7T-b1k2Ws&list=PLqj39LCvnOWaxI87jVkxSdtjG8tlhl7U6&index=3
http://trivelop.de/2018/05/14/flow-elixir-designing-apis/
http://trivelop.de/2018/04/30/flow-elixir-token-approach-pros-and-cons/
http://trivelop.de/2018/04/09/flow-elixir-metaprogramming/
How do we manage the flow of data? The running example is a business process for image conversion.
Elixir is just modules and functions, but architecting flow is a human problem.
How do we avoid these problems?
- Aim for clarity
- Communicate intent
- Create structures that are flexible enough
- Take care that the code is accessible
- Creating maintainable software is the highest goal
Elixir can help with this:
- Pipes
- `with` macro
- "Token" approach
def do_something(data) do
data
|> function1()
|> function2()
|> function3()
end
#...
def convert_images(data) do
data
|> parse_options
|> validate_options
|> prepare_conversion
|> convert_images
|> report_results
end
I can show these to my manager and they can probably get it. But there is a hidden struct threaded through this pipeline so that each function can hand the next one what it needs.
We use `with` when we have to use 3rd-party functions where we can't control the return structure.
# argument lists below are illustrative
with {glob, target_dir, format} <- parse_options(argv),
     :ok <- validate_options(glob, target_dir, format),
     filenames <- prepare_conversion(glob, target_dir),
     results <- convert_images(filenames, format) do
  report_results(results, target_dir)
else
  {:error, message} -> report_error(message)
end
So which is better?
Pipes for pipelines, high level flow, controlling the interfaces, and dictating the rules.
`with` is a Swiss Army knife: nitty-gritty low-level flow, calling third-party code, anything else that doesn't quite fit.
Popular tokens in Elixir: `Ecto.Changeset`, `Plug.Conn`, `Wallaby.Session`
- `Ecto.Changeset` is a token that goes through a pipeline of transformations
- `Plug.Conn` is a token that goes through a pipeline of transformations
- `Wallaby.Session` is a token that goes through a pipeline of actions
"Token" is board game analogy. A token represents you moving around, intent of movement, and resources which are transformed. (the word "Context" was already taken by Phoenix 🙂)
defmodule Converter.Token do
defstruct [:argv, :glob, :target_dir, :format, :filenames, :options]
end
- Have a `build` function
- Have `put_*` functions to normalize data
- Avoid users messing with the struct manually
- Design tokens around their intended use
- Design token APIs around requirements, not fancy code
- Create tokens using your API
- Write values using your API
- Provide functions for common operations (a small sketch of such helpers follows this list)
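A minimal sketch of what those helpers could look like for the Converter.Token struct shown above; the `build/1` and `put_*` bodies are illustrative assumptions, not code from the talk:

```elixir
defmodule Converter.Token do
  defstruct [:argv, :glob, :target_dir, :format, :filenames, :options]

  @type t :: %__MODULE__{}

  # Callers create tokens through the API instead of building %Token{} literals.
  def build(argv) when is_list(argv), do: %__MODULE__{argv: argv}

  # put_* functions normalize values before writing them into the token.
  def put_glob(%__MODULE__{} = token, glob) when is_binary(glob) do
    %{token | glob: Path.expand(glob)}
  end

  def put_format(%__MODULE__{} = token, format) do
    %{token | format: format |> to_string() |> String.downcase()}
  end
end
```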
defmodule Converter.Step do
# Plug also supports functions as Plugs
# we could do that, but for the sake of this article, we won't :)
@type t :: module
@callback call(token :: Converter.Token.t()) :: Converter.Token.t()
defmacro __using__(_opts \\ []) do
quote do
@behaviour Converter.Step
alias Converter.Token
end
end
end
#...
defmodule Converter.Step.ParseOptions do
use Converter.Step
@default_glob "./image_uploads/*"
@default_target_dir "./tmp"
@default_format "jpg"
def call(%Token{argv: argv} = token) do
{opts, args, _invalid} =
OptionParser.parse(argv, switches: [target_dir: :string, format: :string])
glob = List.first(args) || @default_glob
target_dir = opts[:target_dir] || @default_target_dir
format = opts[:format] || @default_format
%Token{token | glob: glob, target_dir: target_dir, format: format}
end
end
# ...
defmodule Converter.MyProcess do
use Converter.StepBuilder
step Converter.Step.ParseOptions
step Converter.Step.ValidateOptions
step Converter.Step.PrepareConversion
step Converter.Step.ConvertImages
step Converter.Step.ReportResults
end
# ...
defmodule Converter.StepBuilder do
# this macro is invoked by `use Converter.StepBuilder`
defmacro __using__(_opts \\ []) do
quote do
# we enable the module attribute `@steps` to accumulate all its values;
# this means that the value of this attribute is not reset when
# set a second or third time, but rather the new values are prepended
Module.register_attribute(__MODULE__, :steps, accumulate: true)
# register this module to be called before compiling the source
@before_compile Converter.StepBuilder
# import the `step/1` macro to build the pipeline
import Converter.StepBuilder
# implement the `Step` behaviour's callback
def call(token) do
# we defer this call to a function, which we will generate at compile time;
# we can't generate this function (`call/1`) directly because we would get
# a compiler error since the function would be missing when the compiler
# checks run
do_call(token)
end
end
end
# this macro gets used to register another Step with our pipeline
defmacro step(module) do
quote do
# this is why we set the module attribute to `accumulate: true`:
# all Step modules will be stored in this module attribute,
# so we can read them back before compiling
@steps unquote(module)
end
end
# this macro is called after all macros were evaluated (e.g. the `use` statement
# and all `step/1` calls), but before the source gets compiled
defmacro __before_compile__(_env) do
quote do
# this quoted code gets inserted into the module containing
# our `use Converter.StepBuilder` statement
defp do_call(token) do
# we are reading the @steps and hand them to another function for execution
#
# IMPORTANT: the reason for deferring again here is that we want to do
# as little complexity as possible in our generated code in
# order to minimize the implicitness in our code!
steps = Enum.reverse(@steps)
Converter.StepBuilder.call_steps(token, steps)
end
end
end
def call_steps(initial_token, steps) do
# to implement the "handing down" of our token through the pipeline,
# we utilize `Enum.reduce/3` and use the accumulator to store the token
Enum.reduce(steps, initial_token, fn step, token ->
step.call(token)
end)
end
end
This approach works well when:
- there is a very clearly stated requirement
- many parts of your system have to talk about the same thing in different contexts.
- the need of a contract is apparent
- you have to ensure contracts between steps
- you have several 'pipelines' in your business
- you have to be able to add pipeline features later
- extensibility is a major concern
- you can have a Token & API without metaprogramming (it's just modules and functions)
It's probably not worth the overhead when:
- there are very few stakeholders
- there are many stakeholders, but requirements are vague
- the problem domain is very small
- you get the feeling that the overhead is simply not worth it.
Takeaways:
- Think about the flow of your program
- Make that flow easily comprehensible
- There are several options to do that
https://www.youtube.com/watch?v=Ue--hvFzr0o&index=9&list=PLqj39LCvnOWaxI87jVkxSdtjG8tlhl7U6
tldr:
- Create public api
- Try making implementations swappable
- Rinse and repeat for internal APIs
When a module is reaching into multiple other modules, it's an indicator of complexity.
"If your application is difficult to use from iex, your code APIs are probably wrong."
- Define a public API
- Improve the quality of the public API
- Generate ExDocs from it
- Don't require devs to circumvent the API in order to get something to work.
- Make it easy to test against.
"Documentation is for users of your public Application Programming Interface"
- Limit docs to the public API
- `@doc false` hides the function from the docs
- Public API == contract with the outside world
- Only test the current layer
- Use Mox for separating layers. Use a Behavior and split the implementation to the real thing and the mock. This also gives you flexibility to replace your default implementation. This is called Hexagonal Architecture (Ports/Adapters).
- Don't use `Application.get_env` because that breaks compile-time checks and moves them to the runtime.
- Keep implementation details out of the public APIs
- For example, move validation up toward the edge
- For example, push the persistence layer down.
- For example, add domain-layer errors for persistence failures, e.g., `%MyApp.PersistenceError{message: "Email already taken"}`
- Organize modules into hierarchy
- Create internal API layer.
- Define internal APIs
- Make internal APIs swappable (allow Mox)
- Improve swapping performance: don't use `Application.get_env`; use compile-time switches instead (see the sketch below)
- Extracting apps: you can leverage umbrella applications to separate internal domain layers even more
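A minimal sketch of the swappable-implementation pattern above, assuming a hypothetical `MyApp.Mailer` boundary. The module names and the `:mailer_impl` config key are invented for illustration; the compile-time lookup uses `Application.compile_env/3` (Elixir 1.10+; at the time of the talk the same effect was achieved by reading the env into a module attribute at compile time):

```elixir
# The behaviour is the contract for the boundary.
defmodule MyApp.Mailer do
  @callback deliver(to :: String.t(), body :: String.t()) :: :ok | {:error, term()}

  # Resolve the implementation when this module compiles, not on every call.
  @impl_module Application.compile_env(:my_app, :mailer_impl, MyApp.Mailer.SMTP)

  def deliver(to, body), do: @impl_module.deliver(to, body)
end

# The default (real) implementation.
defmodule MyApp.Mailer.SMTP do
  @behaviour MyApp.Mailer
  @impl true
  def deliver(_to, _body), do: :ok
end

# In test_helper.exs, define a mock that satisfies the same behaviour:
#   Mox.defmock(MyApp.Mailer.Mock, for: MyApp.Mailer)
# and in config/test.exs point :mailer_impl at MyApp.Mailer.Mock.
```

Because the implementation is resolved at compile time, tests swap it via config/test.exs rather than through a runtime lookup on every call.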
https://www.youtube.com/watch?v=1bQlc-K6vN0&list=PLqj39LCvnOWaxI87jVkxSdtjG8tlhl7U6&index=10
Algae needs certain things to grow: light, water, temp, etc.
Added sensors. Each sensor is wrapped with a Supervisor Elixir process.
Establish process and determine states for state machine.
Used grovepi to send events for changing states.
Goals:
- Maximize harvest frequency
- Measure a change in oxygen concentration
- Print money
Experience:
- Sensors are finicky. Hard to trust unless you have $$$
- So much easier with Nerves.
- Turns out that algae is really stinky.
https://www.youtube.com/watch?v=_ANg28Pello&list=PLqj39LCvnOWaxI87jVkxSdtjG8tlhl7U6&index=14
pragprog coupon code: AdoptingElixir_Workshop_2018
Schedulers can bounce from core to core, but you can also tell Erlang to bind schedulers to stay on the same core (only on Linux).
Erlang schedulers are pre-emptive (vs. cooperative schedulers).
Each scheduler has a run queue. This is good to look at for determining where the bottlenecks are.
Erlang has migration logic, which is a load balancer between schedulers.
When the schedulers fall apart, it's called a scheduler collapse. NIFs can cause this with misbehaving processes.
NIFs are usually written in C. This is where you lose guarantees when interacting with Erlang. When a NIF crashes, the VM crashes. "A native function doing lengthy work before returning degrades responsiveness of the VM, and can cause miscellaneous and strange behaviors"
Dirty schedulers: introduced in OTP 17, but enabled by default in OTP 21.
Dirty schedulers are either IO-intensive or CPU-intensive.
Dirty schedulers are unmanaged, like async tasks.
Can't really use Dialyzer or compile-time checks with NIFs.
- Rust however has a type system.
- Rust is really fast (on par with C)
- Rust is thread-safe. Concurrency is possible and guaranteed.
Rustler helps create Rust NIFs for Erlang. Watch "Taking Elixir to the Metal" by Sonny Scroggin
Basically, use Rust instead of C.
Nay. But cool.
https://www.youtube.com/watch?v=YAGbvulsIGE&list=PLqj39LCvnOWaxI87jVkxSdtjG8tlhl7U6&index=16
Machine Learning is getting a program to improve without being explicitly programmed.
What is a model? A model represents what we've learned from our dataset.
Going to use the classical ML example of classifying Iris species by measurements.
How we'll interact with the model:
- HTTP (Server example in Python, Client example in Elixir)
- gRPC - Uses HTTP2, long-lived connection, compression, and multiplexed
- Ports - Can call Python functions from Erlang and vice versa, passing messages back and forth. Uses STDIN and STDOUT under the hood. Create a GenServer that starts the Python process and calls the Python methods to get predictions from the model. Use sklearn-porter. (A port-wrapping sketch follows this list.)
- NIFs - Add a mix task to also compile NIF with make
- Elixir - Ported sklearn-porter to also output Elixir from predictions.
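A minimal sketch of wrapping an external Python process in a GenServer via an Erlang port; the `predict.py` script and its line-based request/response protocol are assumptions for illustration, not the speaker's code:

```elixir
defmodule IrisClassifier do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Send one line of features to the Python process and wait for one line back.
  def predict(features), do: GenServer.call(__MODULE__, {:predict, features})

  @impl true
  def init(_opts) do
    # predict.py is assumed to read a CSV line from stdin and print one prediction per line.
    port = Port.open({:spawn, "python3 predict.py"}, [:binary, :exit_status, {:line, 1024}])
    {:ok, port}
  end

  @impl true
  def handle_call({:predict, features}, _from, port) do
    Port.command(port, Enum.join(features, ",") <> "\n")

    receive do
      {^port, {:data, {:eol, prediction}}} -> {:reply, {:ok, prediction}, port}
      {^port, {:exit_status, status}} -> {:stop, {:python_exited, status}, port}
    after
      5_000 -> {:reply, {:error, :timeout}, port}
    end
  end
end
```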
Lightning Talks
- BEAM Wisdom
- BEAM Book
- NewRelic Elixir Agent
It's not about "using this package", it's about seeing it all come together.
It's a set of tooling around releasing into embedded hardware.
- Not Raspbian
- GRiSP. Nerves runs embedded linux with soft guarantees. GRiSP runs on hardware with hard guarantees.
- Does not run on Arduinos. Too small. But you can control them through a raspberry pi home base.
- `mix nerves_hub.firmware publish` publishes firmware to NervesHub
- Deployment groups (test, integration, prod) subscribe via Phoenix Channels for updates.
- Secured with Client SSL, where both ends are validated. Secrets are not stored on NervesHub; they're only on the devices and your dev machine.
- All firmware pushed must be signed. These signing keys can also be associated with deployment groups (a dev might only have access to test)
NERVES_SERIAL_NUMBER=a001 mix firmware.burn
`NERVES_SERIAL_NUMBER` provides a serial number to the firmware burned to the device. But it would be tedious for each device to have its own signed key, so we delegate that to NervesHub.
config :nerves, :firmware,
  rootfs_overlay: "rootfs_overlay",
  provisioning: :nerves_hub
# Introduced: Plug.Conn.get_peer_data/1 (requires Plug >= 1.6 and Cowboy >= 2.1)
# ... endpoint.ex
socket(
  "/socket",
  NervesHubDeviceWeb.UserSocket,
  websocket: [
    # :peer_data exposes what Plug.Conn.get_peer_data/1 returns to the socket's connect/3
    connect_info: [:peer_data]
  ]
)
This allows the end-device to verify that the websocket connection is receiving info from a server that matches the SSL signed key.
Event-oriented architecture
Every change is stored. At any time, the current state could be lost and be reconstructed from the event changes. Greg Young is a leading authority in event-sourcing.
"You probably use event-sourcing every day and don't realize it", like accounting. Each transaction is immutable. Version control. Uh-oh, blockchain?
Event sourcing is a good fit when auditing is very important.
- Asynchronous processes make coordinating time-sensitive things difficult
- Channels really help, but time clocks aren't always in sync.
- CQRS/ES (Command Query Responsibility Segregation and Event Sourcing). Instead, we did CQRS-like and ES
- Testing is difficult with async processes and the Database. Ecto.Sandbox problems would happen. We decided to avoid the database where we could. The database doesn't have to be your only store. It can be in memory too
- Didn't have to worry about scale.
(demo)
ElixirTanx demo.
Tanx Phoenix App with Web client with Websockets for controls. Each game is a GenServer. Stateful application, but without a database.
First problem: Deployment
Elixir came out around the same time as containers and Docker. We wrapped up ElixirTanx in a container and deployed. Great! But the state is inside the containers, which are ephemeral.
OTP and container orchestration have some friction; it's a clash of culture.
Let's explore how to take the good parts of OTP and containers and combine them.
We can use a library: libcluster. It can use the Kubernetes API to find nodes
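A minimal sketch of a libcluster topology using its Kubernetes strategy; the selector, basename, and supervisor names are placeholders, not the talk's actual configuration:

```elixir
# config.exs: describe how nodes find each other via the Kubernetes API.
config :libcluster,
  topologies: [
    k8s: [
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        kubernetes_selector: "app=tanx",
        kubernetes_node_basename: "tanx"
      ]
    ]
  ]
```

The topologies are then handed to `Cluster.Supervisor` in the application's supervision tree, e.g. `{Cluster.Supervisor, [Application.get_env(:libcluster, :topologies), [name: Tanx.ClusterSupervisor]]}`.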
For channels, Phoenix has a solution already. For GenServers, an OTP supervisor could restart the process, but it's the container that goes down, not the process. Instead, we want a "Distributed" supervisor. There's a solution: horde
When a node is restarted, it'll grab the default state which is likely empty. We need a "Distributed Agent" == CRDT. Horde provides a CRDT.
OTP provides a registry for named processes, but we need a "Distributed Registry". Horde provides a distributed registry.
(yes) Live demo.
Small. Fast. Simple. Chips are getting smaller and cheaper, not faster. Cheapest hardware chosen for any given problem. Browsers start at 150MB and go up from there. Target: 20-30MB including linux, erlang, scenic, and more.
Robust. OTP allows us to recover from errors on the server, and clients need it just as much! Bad data happens and errors happen, so devices must recover quickly and independently.
Remoteable
Approachable Like web, but not. Easy to create interesting UI.
Secure Keep things simple. No open ports required. Static asset hashing. Never trust a device if you don't know where it keeps its brain.
Also, avoid support matrix hell. 5 versions x two releases x 10 years = a horrendous matrix of support if the brain is on the server.
- Scene layer. Genserver process. Analogous to a web page. Defines a graph.
- ViewPort layer. Intermediate management layer
- Driver layer. Specific to hardware. Does actual render. Handles user input
- The service - Hosted service for Scenic applications. User auth, UI remoting, audit trails, and debugging. https://www.kry10.com
Services in the cloud
mix scenic.run
(yay)
Basic controls are included in the library: buttons, dropdowns, radios, text input, password field, painting images, drawing lines. OTP handles crashes and restarts really well.
OTP limits the blast radius of a bug
Troubleshooting is a manual process: manually SSH into the box. We ask:
- How many OS worker processes are running?
- What connections do they make to other services?
- Do they consume too many resources?
- We typically answer these with `ps -ejf | grep pid` and `lsof -i`
Instead, use Elixir over SSH. Elixir resource usage is pretty good, and Elixir is fault-tolerant already. Erlang has an SSH server built in; use https://github.com/rubencaro/sshex as an Elixir wrapper. (A minimal GenServer sketch follows the list below.)
- GenServer to start SSH connection to server. Wrap bash commands up to make them convenient in GenServer calls.
- Track the GenServer SSH processes with the OTP registry. Start up a new registry and supervise it.
- Use the DynamicSupervisor to automatically restart dead processes
- The OTP Registry can also use a Pub/Sub architecture. This is helpful because SSH processes could take a little time to be ready since it's over the network.
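A minimal sketch of that pattern: a GenServer that owns one SSH connection (via sshex), registers itself in a Registry by hostname, and runs under a DynamicSupervisor so dead connections are restarted. Hostnames, credentials, and module names are placeholders, and the exact SSHEx calls are assumptions based on its README:

```elixir
defmodule InfraMonitor.SSHWorker do
  use GenServer

  # Each worker is registered by hostname so callers can find it via the Registry.
  def start_link(host) do
    GenServer.start_link(__MODULE__, host, name: {:via, Registry, {InfraMonitor.Registry, host}})
  end

  def run(host, command) do
    GenServer.call({:via, Registry, {InfraMonitor.Registry, host}}, {:run, command})
  end

  @impl true
  def init(host) do
    # sshex expects charlists; credentials here are placeholders.
    {:ok, conn} = SSHEx.connect(ip: to_charlist(host), user: 'monitor')
    {:ok, %{host: host, conn: conn}}
  end

  @impl true
  def handle_call({:run, command}, _from, %{conn: conn} = state) do
    {:reply, SSHEx.cmd!(conn, to_charlist(command)), state}
  end
end

# Start under a DynamicSupervisor so a crashed connection process is restarted:
#   {:ok, _} = Registry.start_link(keys: :unique, name: InfraMonitor.Registry)
#   {:ok, _} = DynamicSupervisor.start_link(strategy: :one_for_one, name: InfraMonitor.Sup)
#   DynamicSupervisor.start_child(InfraMonitor.Sup, {InfraMonitor.SSHWorker, "web-01.internal"})
```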
Wrap it with a Phoenix app.
https://github.com/deadtrickster/prometheus.ex
Lessons learned while pushing an application to the limits. Use case is POA ethereum blockchain
Previous architecture:
- Rails-esque worker queues
- Important data fetched serially
New architecture:
- Embrace OTP platform
- Catalog entire blockchain as fast as possible
- Efficient event dispatching
Bottlenecked API GenServer. Several requests are needed to catalog a block. We can fix this with Elixir's `Task.async_stream` and `Task.yield_many`.
block_range
|> Task.async_stream
|> Task.yield_many
|> (burn)
With great power comes great responsibility. Need to limit the amount of concurrency to calm the CPU. How do you restart tasks? How do you handle other faulty things?
Put it in a GenServer instead!
GenServer -> Start Task. Recover the Task with saved state in the GenServer.
Supervised Task -> Monitor but don't link: `Task.Supervisor.async_nolink`
Maximize Database Writes:
- `Repo.insert` can be slow with rapid calls
- Instead, use `Ecto.Repo.insert_all/3` (see the sketch below)
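A minimal sketch of batching inserts; the Block schema, its fields, and the fetched_blocks variable are placeholders for illustration:

```elixir
# insert_all bypasses changesets, so build plain maps and set timestamps yourself.
now = DateTime.utc_now() |> DateTime.truncate(:second)

entries =
  for block <- fetched_blocks do
    %{number: block.number, hash: block.hash, inserted_at: now, updated_at: now}
  end

# One INSERT statement for the whole batch instead of one Repo.insert per row.
Repo.insert_all(Block, entries)
```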
Maximizing Database Reads:
- `Repo.all` can be too slow: results are accumulated before they return, and the database rows are loaded into memory and then again into Elixir as structs.
- `Ecto.Repo.stream/2` is a lazy enumerable and more memory efficient (see the sketch below).
- Pass `timeout: :infinity` to both `Repo.stream` and `Repo.transaction`.
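A minimal sketch of streaming rows inside a transaction; the Block queryable and process_block/1 are placeholders:

```elixir
# Repo.stream/2 must run inside a transaction; :infinity keeps long scans from timing out.
Repo.transaction(
  fn ->
    Block
    |> Repo.stream(timeout: :infinity)
    |> Stream.each(&process_block/1)
    |> Stream.run()
  end,
  timeout: :infinity
)
```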
Introduce Read-heavy Cache:
- Common pattern is to store state in an Agent, but don't.
- Instead, use ETS with read-concurrency flag
:ets.new(:name, read_concurrency: true, write_concurrency: true)
- Reads no longer require serial access. Can also be used as a rate-limiter.
Introduce efficient event dispatching:
- The Elixir Registry includes a pub/sub (a minimal sketch follows)
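A minimal sketch of Registry-based pub/sub; the registry name, the :blocks topic, and the message payload are placeholders:

```elixir
# A duplicate-key Registry acts as a topic -> subscriber table.
{:ok, _} = Registry.start_link(keys: :duplicate, name: Explorer.PubSub)

# Subscriber: register under a topic, then receive messages in-process.
Registry.register(Explorer.PubSub, :blocks, [])

# Publisher: dispatch invokes the callback with every {pid, value} registered under :blocks.
Registry.dispatch(Explorer.PubSub, :blocks, fn subscribers ->
  for {pid, _value} <- subscribers, do: send(pid, {:new_block, 42})
end)
```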
Phoenix 1.4 is "out soon"; a release candidate ships this week.
- HTTP2 with Cowboy 2
- Fast dev compilation
- Overhauled Transport layer
- Improved Presence API
Programming Phoenix 1.4: 25% off with coupon code Phoenix_Workshop_2018 at phoenixframework.org/book
- Tiny Core
- Reporters
- Aggregators
defmodule MyApp.Logger do
  require Logger

  def handle_event([:web, :request, :stop], latency, meta, _config) do
    Logger.info("#{meta.path} -> #{meta.status} in #{latency}")
  end
end

# ... which gets attached and invoked like this
# (note: the current :telemetry library attaches a function capture via :telemetry.attach/4):
Telemetry.attach("my-logger", [:web, :request, :stop], MyApp.Logger, :handle_event, nil)

Telemetry.execute([:web, :request, :stop], latency, %{
  status: conn.status,
  path: conn.request_path
})
Chris wrote sync.rb. It worked, but it sucked. It led him away from Rails because of websockets and the lack of concurrency.
Why do we write JavaScript? To enrich user experiences.
But the JavaScript tooling is insane.
Introducing Phoenix.LiveView. Not part of Phoenix core. Rich user experiences without the complexity.
`phx_*` event bindings as well, e.g. `phx_submit: :save`
Yea, but it's not free. But it can achieve 60fps!!
It's not best for all situations, like if you need offline support, or have high latency clients, or need to take advantage of native features or have incredibly complex UI, then SPAs are better choices. Otherwise, LiveView fits the bill.
It's not just about performance.
"And, even if it was faster, would it matter? Is there such a thing as a fast enough web application? ... Just how important is performance when choosing a web framework or even a programming language for a web application?
It's incredibly important, if the language allows you to advance the state of the art.