Gourav J. Shah (initcron)

Lab: AI-Augmented DevOps with AWS, GitHub MCP & Claude Permission Modes

Duration: 60 minutes
Difficulty: Beginner–Intermediate
Prerequisites: AWS CLI configured (`aws configure`), Claude Code installed, GitHub personal access token, Node.js 18+
Deliverable: Completed diagnosis report, GitHub issue filed, permission mode journal
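The prerequisites above can be sanity-checked before starting. A minimal sketch, assuming the usual binary names for each tool (`aws`, `claude`, `node`); your install paths may differ:

```shell
# Print a line for each prerequisite binary that is not on PATH.
for cmd in aws claude node; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```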


Lab Objective

Hermes Agent: Feature Analysis & Enterprise Framework Comparison 2026
---
What Kind of System Is Hermes?
Before comparing frameworks, it's important to classify Hermes correctly. It is not the same category of thing as LangGraph or LangChain. Those are orchestration libraries — you build agent
systems with them. Hermes is a fully assembled agentic runtime — it ships with the agent loop, 40+ tools, 19 messaging platform adapters, an execution environment abstraction layer, a
skills marketplace, a context compression engine, multi-model routing, and an OpenAI-compatible API server, all integrated and configured out of the box.
The right comparison is less "LangGraph vs Hermes" and more:
Security & Enterprise-Readiness Report: Claw Code
Subject: claw-code (ultraworkers/claw-code, Rust, v0.1.0)
Comparators: IronClaw, OpenClaw, ZeroClaw
Hermes: Not present in this experiments/ workspace — excluded from comparison.
Date: 2026-04-07
---
TL;DR — Is Claw Code safe?
@initcron
initcron / SKILL.md
Created April 7, 2026 04:18
Sample Skill file for Kubernetes Diagnostics
```yaml
name: k8s-pod-health-investigator
description: >
  Investigate unhealthy pods in a Kubernetes namespace. Use when pods are in
  CrashLoopBackOff, ImagePullBackOff, OOMKilled, or showing high restart
  counts. Covers pod status, log analysis, deployment history, event
  correlation, and self-healing recommendations.
version: 1.0.0
compatibility: kubectl, HERMES_LAB_MODE=mock|live
metadata:
  hermes:
    category: devops
    tags:
      - kubernetes
      - pods
      - health
      - crashloopbackoff
      - oomkilled
      - diagnosis
      - sre
```
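The first step this skill describes, checking pod status and restart counts, can be sketched as a small helper that scans `kubectl get pods -o json` output. The field paths follow the Kubernetes Pod API; the function name and restart threshold are illustrative, not part of the skill file:

```python
# Illustrative helper: flag pods whose container state matches the failure
# reasons this skill targets, or whose restart count is suspiciously high.
BAD_REASONS = {"CrashLoopBackOff", "ImagePullBackOff", "OOMKilled"}

def unhealthy_pods(pod_list: dict, max_restarts: int = 5) -> list[str]:
    flagged = []
    for pod in pod_list.get("items", []):
        name = pod["metadata"]["name"]
        for cs in pod.get("status", {}).get("containerStatuses", []):
            waiting = cs.get("state", {}).get("waiting", {})
            terminated = cs.get("lastState", {}).get("terminated", {})
            if (waiting.get("reason") in BAD_REASONS
                    or terminated.get("reason") in BAD_REASONS
                    or cs.get("restartCount", 0) > max_restarts):
                flagged.append(name)
                break  # one bad container is enough to flag the pod
    return flagged
```

Feed it the parsed output of `kubectl get pods -n <namespace> -o json` to get the list of pods worth a `kubectl describe` and log review.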

Step 1.2

This alarm indicates acute resource pressure on your catalog-api instance. Here's the immediate triage:

  Critical Status

  - Instance: i-0abc123def456001 in us-east-1

In the context of the Common Architecture Language Model (CALM), patterns are described as the project's "real superpower." They serve as a mechanism for architects to convey specific architecture opinions to developers, who then use these patterns to create architectures for their specific business problems.

Core Functions and Structure

  • Defining Expectations: Patterns allow architects to specify exactly what is required for a deployment. For example, an MCP (Model Context Protocol) server deployment pattern might include interfaces for container image names and ports.
  • Constraints and Consistency: By defining specific requirements within a pattern, architects ensure that downstream implementations remain consistent. This allows for the creation of a "secure pattern" that can be used to deploy various services (like different APIs) while maintaining the same security standards.
  • Placeholders: When a developer generates an architecture from a pattern, it often includes placeholder values that the developer fills in for their specific service.
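A sketch of what the MCP server deployment pattern described above might look like. The field names loosely follow CALM's node/interface style, but this is an illustration, not the exact published schema; the placeholder values (`{{ CONTAINER_IMAGE }}`, `-1`) stand in for what a developer would fill in:

```json
{
  "nodes": [
    {
      "unique-id": "mcp-server",
      "node-type": "service",
      "name": "MCP Server",
      "interfaces": [
        { "unique-id": "mcp-image", "image": "{{ CONTAINER_IMAGE }}" },
        { "unique-id": "mcp-port", "port": -1 }
      ]
    }
  ]
}
```

The architect fixes the shape (every MCP server deployment must declare an image and a port); the developer supplies only the values, which is how the same secure pattern stays consistent across different APIs.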
@initcron
initcron / loki_values.yaml
Created January 12, 2026 06:29
Loki Custom Values File
```yaml
loki:
  commonConfig:
    replication_factor: 1
  schemaConfig:
    configs:
      - from: "2024-04-01"
        store: tsdb
        object_store: s3
        schema: v13
        index:
```
@initcron
initcron / Dockerfile
Created January 7, 2026 10:01
vllm CPU image for mac
```dockerfile
FROM openeuler/vllm-cpu:0.9.1-oe2403lts

# Patch the cpu_worker.py to handle zero NUMA nodes
RUN sed -i 's/cpu_count_per_numa = cpu_count \/\/ numa_size/cpu_count_per_numa = cpu_count \/\/ numa_size if numa_size > 0 else cpu_count/g' \
    /workspace/vllm/vllm/worker/cpu_worker.py

ENV VLLM_TARGET_DEVICE=cpu \
    VLLM_CPU_KVCACHE_SPACE=1 \
    OMP_NUM_THREADS=2 \
    OPENBLAS_NUM_THREADS=1 \
```

This Dockerfile builds a container for running vLLM (a large-language-model inference engine) on CPU, with a specific patch and thread-count tuning. Here's a breakdown:

Base Image

FROM openeuler/vllm-cpu:0.9.1-oe2403lts

  • Uses OpenEuler Linux distribution's pre-built vLLM image (version 0.9.1)
  • Built for CPU inference (not GPU)
  • Based on OpenEuler 24.03 LTS

Critical Patch (Lines 4-5)
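The `sed` expression rewrites a single line of Python inside the image. Reduced to its essence (the variable names are taken from the patch itself, and the zero-NUMA case is the situation the gist's "for mac" title points at), the change guards a floor division:

```python
# Before the patch, the CPU worker computed (per the sed pattern above):
#   cpu_count_per_numa = cpu_count // numa_size
# which raises ZeroDivisionError when the environment reports zero NUMA
# nodes. The patched expression falls back to using all CPUs instead:
def cpu_count_per_numa(cpu_count: int, numa_size: int) -> int:
    return cpu_count // numa_size if numa_size > 0 else cpu_count

print(cpu_count_per_numa(8, 2))  # 4
print(cpu_count_per_numa(8, 0))  # 8 (previously a ZeroDivisionError)
```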

Lab: Using kubectl-ai --mcp-server with Cursor to Inspect the atharva-ml Namespace

0. Lab Goals

By the end of this lab you’ll be able to:

  • Run kubectl-ai as an MCP server.
  • Wire it into Cursor via mcp.json.
  • Use Cursor chat + kubectl-ai tools to:
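Wiring kubectl-ai into Cursor comes down to one `mcp.json` entry. A minimal sketch, assuming `kubectl-ai` is on PATH and that `--mcp-server` (from this lab's title) is the flag that starts server mode in your installed version; the `mcpServers` key is Cursor's MCP configuration convention:

```json
{
  "mcpServers": {
    "kubectl-ai": {
      "command": "kubectl-ai",
      "args": ["--mcp-server"]
    }
  }
}
```

After saving this (typically as `.cursor/mcp.json` in the project, per Cursor's docs), restart Cursor and the kubectl-ai tools should appear in the chat tool list.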