This gist provides you with the steps to deploy a minimally viable High Availability (HA) Ceph cluster. It follows the MicroCeph multi-node install guide but adds a little more detail to make the deployment simpler.
3 Bare-metal nodes.
You can run this from any node that is part of the Ceph cluster deployed using MicroCeph. The node you run the following commands from becomes the host for the Object/RADOS Gateway (RGW) service.
If you are going to use this cluster as a storage layer for your Kubernetes applications, I recommend doing this from your active leader node.
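On that node, the gateway is enabled through MicroCeph itself. A minimal sketch, assuming a default MicroCeph snap installation:

```shell
# Enable the RADOS Gateway service on the current node.
# Run this on the node you want to host the Object Gateway.
sudo microceph enable rgw

# Confirm the rgw service now appears in the cluster status.
sudo microceph status
```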
You might see an error like the following on a Ceph cluster that you had previously torn down. It is just a false negative:
Error: failed placing service rgw: failed to add DB record for rgw: failed to record role: This "services" entry already exists
Ceph Object Storage user management involves managing users who access the Ceph Object Storage service, not the Ceph Object Gateway itself. To allow end users to interact with Ceph Object Gateway services, create a user along with an access key and secret key. Users can also be organized into Accounts for easier management.
# The alias only needs to be set once
alias radosgw-admin="microceph.radosgw-admin"
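With the alias in place, a user can be created in one step. A minimal sketch; the uid and display name below are example values, not anything mandated by this gist:

```shell
# Create an object storage user; the output includes the
# generated access_key and secret_key for S3-style clients.
radosgw-admin user create \
  --uid="demo-user" \
  --display-name="Demo User"

# Inspect the user (and its keys) again later with:
radosgw-admin user info --uid="demo-user"
```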
In this gist we are going to look at the simple steps required to build a staggered CI process using GitHub Actions that generates multi-platform images/packages/artifacts for any application.
We have an application that is deployed across two environments:
live
preview
We want to generate artifacts targeted at each environment that we can later use for manual and/or continuous deployment. The following represents the directory structure of a sample application.
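Inside such a workflow, the multi-platform build itself usually boils down to a single Buildx invocation. A sketch, assuming Docker Buildx is available; the registry, image name, and tag are placeholder values:

```shell
# Build one image for both amd64 and arm64 and push it to the
# registry; ghcr.io/example/app:preview is a placeholder tag.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag ghcr.io/example/app:preview \
  --push \
  .
```

The same command with a different `--tag` (for example one suffixed `:live`) produces the artifact for the other environment.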
This gist aims to provide the shortest path to deploying MongoDB Community Kubernetes Operator on ARM64 machines with some clarifications in response to the following Open Issues about ARM64 support on the official repository:
You are free to use any Kubernetes installer of your choice. I am using MicroK8s since it is zero-ops and lightweight.
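For reference, the operator's own README installs it via Helm. A sketch, assuming Helm is configured against your cluster and using the chart names published in MongoDB's helm-charts repository:

```shell
# Add MongoDB's Helm repository and install the
# Community Kubernetes Operator into its own namespace.
helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update
helm install community-operator mongodb/community-operator \
  --namespace mongodb --create-namespace
```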
In this gist we are going to use BentoML to locally serve two Machine Learning models as Services for developmental testing.
Any laptop/desktop/cloud VM running an Ubuntu Jammy or Debian Bookworm based system, with a GPU that has at least 6 GB of VRAM.
Nvidia drivers installed. Follow the Ubuntu installation docs or the official Nvidia CUDA installation docs for Debian or other Linux-based OSes.
(Optional but recommended) Conda installed. Refer to the quick one-line installation of Miniconda.
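Once the prerequisites are in place, local serving is one CLI call per service. A sketch, assuming each service file defines a Bento service object named `svc` (the file and object names are illustrative):

```shell
# Serve the first model's service on the default port (3000),
# with --reload for quick iteration during development.
bentoml serve service:svc --reload

# Serve the second service on another port so both run at once.
bentoml serve service2:svc --port 3001
```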
In this gist we are going to containerize a simple Bento service with pre-packaged models.
From the root of the directory containing your Bento file (`bentofile.yaml`) and the Bento service (in most cases `service.py`), run the following command:
Refer to official docs for more information.
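The build-then-containerize flow is typically two commands. A sketch; `my_service` stands in for whatever name your `bentofile.yaml` produces:

```shell
# Package the service and models into a Bento
# (run from the directory containing bentofile.yaml).
bentoml build

# Turn the latest Bento into an OCI/Docker image.
bentoml containerize my_service:latest
```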
In this gist we are going to deploy a containerized BentoML service to Kubernetes as a serverless function using Knative.
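With the image pushed to a registry, deploying it as a Knative Service can be done with the `kn` CLI. A sketch; the service name, image reference, and port are example values:

```shell
# Create a Knative Service from the containerized Bento;
# Knative scales it to zero when idle, like a serverless function.
kn service create bento-svc \
  --image ghcr.io/example/my_service:latest \
  --port 3000
```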
I'm doing this on a small desktop I have at home. It has an old GTX 1660 with 6 GB of VRAM. Since the model we are loading is only 600 MB, this system is enough to run our Prompt Engineering service (detailed in step
Looking to build a high availability Ceph cluster with ease? Ansible Playbooks have your back! Whether you're scaling out storage for a home lab or an enterprise setup, automating your Ceph deployment is key to reliability and efficiency. In this guide, I'll walk you through, step by step, how to set up a resilient, high availability Ceph cluster using Ansible Playbooks, so you can focus on your data, not the details. Let's get your cluster up and running like a pro!