@exonomyapp
exonomyapp / Roles and Relationships Between Patroni and Bucardo.md
Last active October 15, 2024 10:52
The Distinctions and Interplays Between Patroni and Bucardo for Kubernetes PostgreSQL Clustering

The Distinctions and Interplays Between Patroni and Bucardo for Kubernetes Orchestrated PostgreSQL Clustering

Distinctions between Patroni and Bucardo

Patroni and Bucardo offer different approaches for achieving high availability (HA) and multi-master read/write setups in PostgreSQL, but they are designed for distinct use cases and have different architectures. Here’s a breakdown of what each offers in relation to your project.

What Patroni Offers:

  1. High Availability (HA) and Automatic Failover with a Single Leader (Primary-Replica Architecture):
  • Leader election: Patroni ensures there is always a single write node (the leader) and automatically promotes a replica to leader if the current one fails. This offers high availability, but not true multi-master read/write capabilities.
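In practice, this leader/replica topology can be inspected and exercised with Patroni's `patronictl` tool. A minimal sketch, assuming a cluster configured at `/etc/patroni.yml`; the candidate name `replica-1` is hypothetical, and the block falls back to a message where `patronictl` is not installed:

```shell
# Sketch: inspecting and exercising Patroni failover via patronictl.
# Assumes a running cluster configured at /etc/patroni.yml; the
# candidate name below is hypothetical.
inspect_cluster() {
  if command -v patronictl >/dev/null 2>&1; then
    # Show cluster members, their roles (Leader/Replica), and replication lag
    patronictl -c /etc/patroni.yml list
    # Planned, controlled leader change to a chosen replica
    patronictl -c /etc/patroni.yml switchover --candidate replica-1 --force
  else
    echo "patronictl not installed; commands shown for illustration only"
  fi
}
inspect_cluster
```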

Use Case Scenario: The Synergy of Vault and Consul for PostgreSQL Operations

The Problem: Dynamic Credentials and Service Discovery in a Complex Architecture

In a rapidly scaling, multi-service architecture, where services dynamically scale in and out, maintaining security and service discovery is a significant challenge. Imagine an e-commerce platform that uses PostgreSQL as the primary database backend. This platform experiences high traffic during flash sales or holiday seasons, requiring dynamic scaling of application services that communicate with PostgreSQL. Moreover, each new instance of an application service must securely authenticate and access the database without hardcoding credentials. Similarly, database administrators often need to perform seamless configuration changes or failover procedures for PostgreSQL clusters to ensure high availability and resilience.

The problem becomes complex because:

  1. Dynamic Credentials Management: The application services need secure access to PostgreSQL without relying on long-lived, hardcoded credentials.
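This is the pattern Vault's database secrets engine addresses: each service instance asks Vault for short-lived PostgreSQL credentials on startup, and Vault revokes them when the lease expires. A minimal sketch — the role name `app-role` is hypothetical, and a reachable Vault (`VAULT_ADDR` plus a valid token) is assumed; otherwise the function prints an illustrative placeholder:

```shell
# Sketch: a service instance requesting short-lived PostgreSQL credentials
# from Vault's database secrets engine. Role name "app-role" is hypothetical.
fetch_db_creds() {
  if command -v vault >/dev/null 2>&1 && [ -n "${VAULT_ADDR:-}" ]; then
    # Each read mints a fresh username/password with a TTL;
    # Vault revokes the pair automatically when the lease expires.
    vault read -format=json database/creds/app-role
  else
    echo '{"note":"vault unavailable; illustrative placeholder output"}'
  fi
}
fetch_db_creds
```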
@exonomyapp
exonomyapp / Secrets Management with Coolify.md
Created October 6, 2024 07:22
Secrets Management with Coolify

Managing secrets in Coolify for your production website involves securely storing sensitive information, such as API keys, tokens, and environment variables, in a way that your application can access them during deployment or runtime. Here's a step-by-step guide to manage secrets in Coolify:

1. Access Your Project's Configuration

  • Go to the Coolify dashboard.
  • Navigate to the "Projects" tab.
  • Select your production website project (in your case, exosystems_nuxt).

2. Navigate to Environment Variables

  • Inside your project settings, click on the "Environment Variables" section from the left sidebar.
  • This section allows you to add, update, and remove environment variables (including secrets) for your project.
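Once a variable is saved here, Coolify injects it into the container environment at deploy time, so the application reads it at runtime instead of hardcoding it. A minimal sketch, assuming a hypothetical variable name `NUXT_API_SECRET` (the local default exists only so the snippet runs outside Coolify; never log the secret's value, only its presence):

```shell
# Sketch: reading a secret injected by Coolify as an environment variable.
# NUXT_API_SECRET is a hypothetical name; Coolify sets it in production.
: "${NUXT_API_SECRET:=placeholder-for-local-testing}"
if [ -n "$NUXT_API_SECRET" ]; then
  # Report only presence/length, never the value itself
  printf 'NUXT_API_SECRET is set (%s chars)\n' "${#NUXT_API_SECRET}"
else
  echo 'NUXT_API_SECRET is missing; check the Environment Variables section' >&2
  exit 1
fi
```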
@exonomyapp
exonomyapp / Coolify Orchestrated PostgreSQL Cluster.md
Last active June 14, 2025 22:23
Coolify Orchestrated PostgreSQL Cluster

Coolify Orchestrated DB Cluster

In this project, our goal is to establish a robust and scalable infrastructure for a PostgreSQL database with high availability, seamless security, and integrated monitoring and alerting systems.

Introduction

We'll leverage tools like Patroni, Consul, Vault, Prometheus, Grafana, and Cert-Manager to ensure a comprehensive, modern solution. Coolify will act as our orchestration platform, managing various services and simplifying deployments. We aim to not only build a highly available database cluster but also provide a learning experience for interns that demonstrates best practices in DevOps, security, and observability.

The backbone of our infrastructure will focus on a distributed, high-availability PostgreSQL cluster. To ensure reliability, we’ll introduce Patroni for automating failover, Consul for service coordination, and Vault for managing sensitive information. Monitoring will be handled by Prometheus and visualized using Grafana.

@exonomyapp
exonomyapp / Secrets with Vault.md
Created October 4, 2024 16:06
A Little About Secrets Management with Hashicorp's Vault

HashiCorp's Vault began as an open-source project under the Mozilla Public License (MPL) 2.0; since August 2023, new releases ship under the Business Source License (BUSL) 1.1, which still allows users to read, modify, and distribute the code but restricts commercially competitive use. Vault provides secure secrets management, encryption as a service, and access control mechanisms for dynamic infrastructure.

In addition to the open-source version, HashiCorp offers Vault Enterprise with additional features tailored for larger organizations, such as advanced performance, governance, and disaster recovery features.

Vault has several competitors in the secrets management, encryption, and access control space. Here’s a list of some of its closest competitors, along with brief descriptions:

1. AWS Secrets Manager

  • Overview: A fully managed service from Amazon Web Services (AWS) that helps securely manage and rotate secrets like API keys, database credentials, and more.
  • Strengths: Integrated with AWS services, easy to use within AWS infrastructure, automatic rotation, and auditing.
  • Weaknesses: L

Socket.io and SocketSupply are both tools for network communication, but they differ significantly in their approach, usage, and scope.

Socket.io

Socket.io is a JavaScript library designed to enable real-time, bidirectional communication between clients (typically web browsers) and servers. It’s built on top of WebSockets but also includes features like automatic reconnection, event-based messaging, and fallback mechanisms (like HTTP long polling) to ensure reliable communication even in environments where WebSockets are unavailable.

Key Features:

  • Real-time communication: Primarily used to build web applications that require instant updates, like chat apps, gaming platforms, or live data feeds.
  • Client-server model: Operates in a client-server architecture where the server handles connections and the client (browser or Node.js) communicates over that channel.
  • Automatic reconnections: If the connection is lost, Socket.io can automatically attempt to reconnect.
  • Cross-browser support
@exonomyapp
exonomyapp / Dialectically Decentralized.md
Last active October 1, 2024 22:43
The phrase "dialectically decentralized" suggests an approach to decentralization that emerges through the process of dialectical reasoning—where opposing forces or ideas are examined and reconciled to shape a more nuanced understanding or structure.

Dialectically Decentralized

This document elaborates on centralization as thesis and decentralization as antithesis. The weaknesses of centralized systems (thesis) are contrasted with the strengths of decentralized systems (antithesis), leading to a synthesis that deepens our understanding of both. Neither centralization nor decentralization is lauded or scorned. The analysis focuses on their respective strengths and weaknesses, and on the less-explored dynamics between the two polar views that could illuminate solutions to the problems each has hitherto faced without the other.

Dialectically:

This refers to the method of reasoning through dialogue or the confrontation of contradictory positions (thesis and antithesis) to arrive at a higher truth (synthesis). In this context, the dialectical process would involve examining centralization and decentralization as opposing forces, finding the strengths and weaknesses of each, and understandi

@exonomyapp
exonomyapp / Exonomy Vouchers and IPFS.md
Last active September 26, 2024 02:56
How Exonomy Vouchers are Published to IPFS

Concept of Publishing a Digital Voucher Document to IPFS for global distribution to non-Exonomists

Publishing a document to IPFS means storing the content in a decentralized network. Instead of being stored on a single server, the data is broken into pieces and distributed across multiple nodes. The document is assigned a unique CID (Content Identifier) based on its content, ensuring that anyone with the CID can retrieve the document, no matter where it is stored. For our app, this means the voucher, once broadcast, will be accessible globally and immutably, with its content hash serving as its identifier.

Step-by-Step Process

Step 1: Prepare the Document

  • Why: Before publishing to IPFS, we need to ensure that the data (voucher details) is structured properly for easy retrieval and validation.
  • Options:
  • You can publish the voucher as raw JSON, or you can format it for easier display, like converting it
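The content-addressing idea can be sketched without a running IPFS node. A real CID is produced by `ipfs add voucher.json`; the SHA-256 hash below merely illustrates that the identifier is derived deterministically from the content itself (the voucher fields shown are hypothetical):

```shell
# Sketch: content addressing in miniature. A real CID comes from
# `ipfs add voucher.json`; sha256 here only shows that the identifier
# is a pure function of the content. Voucher fields are hypothetical.
cat > voucher.json <<'EOF'
{"id":"v-001","issuer":"alice","value":10}
EOF
content_hash() { sha256sum "$1" 2>/dev/null || shasum -a 256 "$1"; }
hash1=$(content_hash voucher.json | cut -d' ' -f1)
hash2=$(content_hash voucher.json | cut -d' ' -f1)
# Identical content always yields the identical identifier:
[ "$hash1" = "$hash2" ] && echo "stable identifier: $hash1"
```

Changing even one byte of the voucher would produce a different hash, which is exactly why a CID both locates and validates the document.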

Automating Local Branch Tracking for Remote Branches

Problem Statement

In our project, we have a remote Git repository with several branches that contain important features and updates. However, there are several challenges we face regarding branch management:

  1. Remote Branch Awareness: Developers need to be aware of all the branches available on the remote repository, especially those created by other team members.
  2. Local Branch Tracking: We need to set up local branches that track these remote branches to ensure that we can easily pull updates and contribute to those branches.
  3. Avoiding Conflicts: We want to avoid creating local branches for remote branches that already exist locally to prevent errors and clutter.
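The three goals above can be sketched as a small script: fetch remote state, then create a local tracking branch for every remote branch that does not already exist locally. To stay self-contained, this sketch first builds a throwaway remote; the remote name `origin` and branch `feature-a` are illustrative:

```shell
# Demo setup: a throwaway bare "remote" with two branches, plus a fresh clone.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q --bare remote.git
git clone -q remote.git seed
(
  cd seed
  git config user.email dev@example.com
  git config user.name dev
  git commit -q --allow-empty -m "init"
  git push -q origin HEAD
  git branch feature-a
  git push -q origin feature-a
)
git clone -q remote.git work
cd work

# The actual automation: one local tracking branch per remote branch,
# skipping any branch that already exists locally (avoids conflicts).
git fetch -q origin --prune
for ref in $(git for-each-ref --format='%(refname:short)' refs/remotes/origin); do
  branch=${ref#origin/}
  case "$branch" in HEAD|origin) continue ;; esac   # skip the symbolic HEAD ref
  if git show-ref --verify --quiet "refs/heads/$branch"; then
    echo "skip $branch (already exists locally)"
  else
    git branch -q --track "$branch" "$ref"
    echo "tracking $branch -> $ref"
  fi
done
git branch --list
```

Re-running the script is safe: existing local branches are detected via `git show-ref` and left untouched.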

When we install packages using npm i <package> --save-dev, we are specifically telling npm to install the package as a development dependency:

--save-dev parameter:

  • This adds our package to the devDependencies section in the package.json file.
  • Usage: This means the package is intended only for development purposes, not for production builds. When deploying the application to production (with npm install --production or similar), packages listed under devDependencies won't be installed.
  • Example: Cypress, as a testing framework, is usually not needed in production environments, so it makes sense to include it in the devDependencies section.
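The effect of the flag is easiest to see in package.json itself. A sketch with hypothetical package versions (the file is written by hand here, so no network access is needed):

```shell
# Sketch: where each install flag records a package in package.json.
# Versions are hypothetical.
cat > package.json <<'EOF'
{
  "name": "demo-app",
  "version": "1.0.0",
  "dependencies": {
    "nuxt": "^3.13.0"
  },
  "devDependencies": {
    "cypress": "^13.0.0"
  }
}
EOF
# `npm i nuxt` would have written the "dependencies" entry;
# `npm i cypress --save-dev` the "devDependencies" entry.
grep -n '"cypress"' package.json
```

A production install (`npm install --production`) reads this same file but skips everything under devDependencies.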

Other common parameters:

  • No parameter (npm i <package>): This installs our package and adds it to the dependencies section in package.json, meaning it will be installed in both development and production environments. This is less ideal, for example, for a testing tool like Cypress.
  • --global (npm i -g <package>): Installs the package system-wide rather than into the current project, so it is not recorded in package.json at all.