@rponte
Created April 28, 2023 14:01
Content about implementing a database-based queue with SQL and SKIP LOCKED

rponte commented Nov 5, 2024

Excellent article showing how to build a Request-Reply solution using Postgres Pub/Sub (LISTEN/NOTIFY), while also taking advantage of the database itself through:

  • SKIP LOCKED
  • Advisory Locks
  • Unlogged tables
  • CTEs and the RETURNING clause
  • Stored procedures

It also comes with a working example in Elixir plus integration tests 🤤 (a minimal SQL sketch of these techniques appears below).

⭐️ Request-reply in Postgres
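
A minimal sketch, assuming a hypothetical jobs table, of the SKIP LOCKED + CTE + RETURNING dequeue pattern listed above (an illustration only, not the article's Elixir implementation):

```sql
-- Hypothetical queue table; UNLOGGED trades crash durability for write speed,
-- as the article does for its request-reply queue.
CREATE UNLOGGED TABLE IF NOT EXISTS jobs (
    id         bigserial    PRIMARY KEY,
    payload    jsonb        NOT NULL,
    status     text         NOT NULL DEFAULT 'pending',
    created_at timestamptz  NOT NULL DEFAULT now()
);

-- Dequeue one job: the CTE picks the oldest pending row, SKIP LOCKED ignores
-- rows already claimed by other consumers, and RETURNING hands the job back
-- in the same statement.
WITH next_job AS (
    SELECT id
    FROM jobs
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET status = 'processing'
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.id, jobs.payload;

-- Reply side of request-reply: the worker notifies a channel the caller
-- LISTENs on (payload shown as a literal here).
NOTIFY job_done, '42';
```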


rponte commented Nov 6, 2024

Excellent article on the Sequin blog about designing your message queue in the RDBMS, inspired by the design of popular brokers:

  1. SQS (Standard and FIFO);
  2. Kafka;
  3. A combination of Kafka + SQS.

⭐️ Sequin: Build your own SQS or Kafka with Postgres

Interesting points about the SQS-like implementation (a rough visibility-timeout sketch in SQL follows the list):

  • The staple feature of SQS (and most queues) is exactly once delivery within a visibility timeout.

  • Postgres is a great fit for taking on SQS workloads.

    If a non-FIFO SQS queue is sufficient, the biggest advantage SQS has is that it has unlimited throughput (according to AWS). So if your throughput is very high and ordering doesn't matter, SQS might be hard to beat with Postgres.

    But if you need FIFO or your throughput is in the ~thousands per second, Postgres is a great option. FIFO SQS can support up to 3,000 messages per second. A basic machine running Postgres can achieve that easily as well.
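
One way the "exactly once delivery within a visibility timeout" semantics can be expressed in plain SQL, as a sketch with hypothetical names (not Sequin's actual schema):

```sql
-- SQS-like table: a claimed message becomes invisible for 30 seconds;
-- if it is not acked (deleted) in time, it becomes visible again.
CREATE TABLE IF NOT EXISTS messages (
    id                bigserial    PRIMARY KEY,
    body              jsonb        NOT NULL,
    not_visible_until timestamptz  NOT NULL DEFAULT now()
);

-- Receive: claim up to 10 visible messages and push their visibility
-- timeout forward, atomically, skipping rows held by other consumers.
WITH claimed AS (
    SELECT id
    FROM messages
    WHERE not_visible_until <= now()
    ORDER BY id
    LIMIT 10
    FOR UPDATE SKIP LOCKED
)
UPDATE messages m
SET not_visible_until = now() + interval '30 seconds'
FROM claimed
WHERE m.id = claimed.id
RETURNING m.id, m.body;

-- Ack: delete the message once processing succeeds.
DELETE FROM messages WHERE id = $1;
```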

Interesting points about the Kafka-like implementation (a rough log-plus-offsets sketch follows):

  • So, it's a matter of your requirements. If you need massive scale and throughput (> 100k messages/sec), Kafka will be hard to beat. But if you're not operating at that scale, Postgres is hard to beat. You can use one system in your stack (Postgres) instead of two (Postgres and Kafka), and Postgres is far more observable than Kafka (everything is just a table).
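
The Kafka-like shape boils down to an append-only log plus per-consumer-group offsets, both of which are just tables, which is what makes it so observable. A rough sketch with hypothetical names:

```sql
-- Append-only log (the "topic").
CREATE TABLE IF NOT EXISTS log (
    log_offset bigserial    PRIMARY KEY,
    key        text,
    value      jsonb        NOT NULL,
    created_at timestamptz  NOT NULL DEFAULT now()
);

-- Each consumer group tracks how far it has read.
CREATE TABLE IF NOT EXISTS consumer_offsets (
    group_name  text   PRIMARY KEY,
    last_offset bigint NOT NULL DEFAULT 0
);

-- Register a consumer group once.
INSERT INTO consumer_offsets (group_name) VALUES ('billing')
ON CONFLICT (group_name) DO NOTHING;

-- Produce: append to the log.
INSERT INTO log (key, value) VALUES ('order-42', '{"total": 99.90}');

-- Consume: read the next batch past the group's committed offset.
SELECT l.log_offset, l.key, l.value
FROM log l
JOIN consumer_offsets c ON c.group_name = 'billing'
WHERE l.log_offset > c.last_offset
ORDER BY l.log_offset
LIMIT 100;

-- Commit: advance the offset once the batch has been processed.
UPDATE consumer_offsets
SET last_offset = $1  -- highest log_offset processed in the batch
WHERE group_name = 'billing';
```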


rponte commented Nov 6, 2024

Twitter thread by Craig Kerstiens of Crunchy Data with tips on how to design a good message-queue solution in Postgres.

According to Craig, it is not that hard; it basically comes down to following these 3 rules:

  1. Try to have index-only scans and very low BUFFERS numbers in the execution plans (not degrading over time);
  2. Most important (derived from "1", actually): take into consideration the Postgres MVCC model. Don't use DELETE for cleanup. Remember that any UPDATE is physically DELETE+INSERT. Design for low bloat. Most likely, decisions will involve partitioning and/or TRUNCATE;
  3. Consider using SKIP LOCKED to have multiple consumers. In most cases, this is overrated, though, being a premature optimization for throughput – if you have low latency (SQL is written well), a single consumer might be enough;

He also shares a few more tips after I asked him about design practices for avoiding bloat in Postgres. The sketch below shows what rules 1 and 2 can look like in practice.
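
A short sketch of rules 1 and 2. Names are hypothetical (jobs is the same table sketched earlier), and partitioning plus DROP/TRUNCATE is only one of several ways to keep bloat low:

```sql
-- Rule 1: a partial index keeps the hot dequeue query on an index-only scan;
-- EXPLAIN (ANALYZE, BUFFERS) shows whether the buffer counts stay low over time.
CREATE INDEX IF NOT EXISTS jobs_pending_idx ON jobs (id) WHERE status = 'pending';

EXPLAIN (ANALYZE, BUFFERS)
SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1;

-- Rule 2: avoid row-by-row DELETEs for cleanup. Partition the queue by time
-- and drop (or truncate) whole partitions instead: space is reclaimed
-- immediately and no dead tuples are left behind.
CREATE TABLE jobs_partitioned (
    id         bigserial,
    payload    jsonb        NOT NULL,
    status     text         NOT NULL DEFAULT 'pending',
    created_at timestamptz  NOT NULL DEFAULT now()
) PARTITION BY RANGE (created_at);

CREATE TABLE jobs_2024_11 PARTITION OF jobs_partitioned
    FOR VALUES FROM ('2024-11-01') TO ('2024-12-01');

-- Cleanup of a fully processed time range: no DELETE, no bloat.
DROP TABLE jobs_2024_11;
-- or: TRUNCATE jobs_2024_11;
```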
