AI Engineering: A Complete Overview of LLMs, RAG, MCP, Agents, Fine-Tuning, and Quantization

Artificial Intelligence has rapidly evolved from a niche research field into one of the most transformative technologies in software engineering. Today, developers across Android, backend, web, DevOps, and cloud engineering are increasingly expected to understand AI systems and how to integrate them into real-world products.

This article walks through the core concepts of modern AI engineering: not machine learning research, but the practical discipline of taking pre-built models and shipping them into real products. It covers Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), the Model Context Protocol (MCP), agents, tools, embeddings, fine-tuning, and quantization, and how these pieces fit together in the architecture of AI-powered systems.

Machine Learning vs. AI Engineering

These two disciplines are often conflated, but they are distinct.

import logic from 'logicjs';

var or = logic.or,
    and = logic.and,
    not = logic.not,
    eq = logic.eq,
    run = logic.run,
    lvar = logic.lvar,
    between = logic.between;

var g1_rule = `
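The snippet above wires up logicjs's goal combinators (`eq`, `and`, `or`, `run`, `lvar`). The underlying idea, in which a goal maps a substitution to a stream of extended substitutions, can be sketched in a few lines of self-contained Python. The names below mirror the combinators but are illustrative; this is not the logicjs API:

```python
# Minimal relational-logic sketch: a "goal" is a function from a
# substitution (dict mapping logic variables to values) to a stream
# of extended substitutions.

def lvar(name):
    # Represent a logic variable as a tagged tuple.
    return ('?', name)

def is_var(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == '?'

def walk(t, s):
    # Follow variable bindings in substitution s until a value is found.
    while is_var(t) and t in s:
        t = s[t]
    return t

def eq(a, b):
    # Goal: unify a with b, yielding at most one extended substitution.
    def goal(s):
        a2, b2 = walk(a, s), walk(b, s)
        if a2 == b2:
            yield s
        elif is_var(a2):
            yield {**s, a2: b2}
        elif is_var(b2):
            yield {**s, b2: a2}
    return goal

def and_(g1, g2):
    # Goal: g1 and g2 must both succeed; thread substitutions through.
    def goal(s):
        for s1 in g1(s):
            yield from g2(s1)
    return goal

def or_(g1, g2):
    # Goal: succeed with every solution of g1, then every solution of g2.
    def goal(s):
        yield from g1(s)
        yield from g2(s)
    return goal

def run(goal, vars_):
    # Collect the values of vars_ in every solution of goal.
    return [[walk(v, s) for v in vars_] for s in goal({})]

x, y = lvar('x'), lvar('y')
solutions = run(or_(and_(eq(x, 2), eq(y, 3)),
                    and_(eq(x, y), eq(y, 'dog'))), [x, y])
```

Here `solutions` contains one binding per disjunct: the first goal fixes `x = 2, y = 3`, while the second first unifies `x` with `y` and then binds both to `'dog'`.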

Core Problem: Users need a frictionless way to log daily actions and track their impact without overwhelming complexity.

Invariants for Success:

  1. Users can log actions in under 10 seconds.
  2. The feed clearly shows the relationship between actions and outcomes.
  3. The default view prioritizes today's activities.
  4. Reflection is optional and unobtrusive.

Must-Have Outcome of the MVP: A single screen that allows users to log actions quickly, view a feed of their logs, and see metrics associated with those actions.
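The invariants above can be made concrete with a small data model. This is a sketch only; all names (`ActionLog`, `Feed`, and their fields) are illustrative, not part of the spec:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ActionLog:
    name: str
    logged_on: date
    note: str = ""  # optional reflection, never required (invariant 4)

@dataclass
class Feed:
    logs: list = field(default_factory=list)

    def log(self, name, note=""):
        # One call, only the action name required: fast logging (invariant 1).
        self.logs.append(ActionLog(name, date.today(), note))

    def today(self):
        # Default view prioritizes today's activities (invariant 3).
        return [entry for entry in self.logs if entry.logged_on == date.today()]

    def metrics(self):
        # Count per action, linking actions to outcomes in the feed (invariant 2).
        counts = {}
        for entry in self.logs:
            counts[entry.name] = counts.get(entry.name, 0) + 1
        return counts
```

A single screen backed by this model would call `log` from a quick-entry field, render `today` as the feed, and display `metrics` alongside it.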

| Keyword | Category | Search Intent | Notes |
| --- | --- | --- | --- |
| indie hacker | Indie Development | Community/identity | Very high overlap with IH audience; good for blog/landing pages |
| indie developer tools | Indie Development | Tool discovery | People actively look for resources to support indie projects |
| build side project | Indie Development | How-to/educational | Entry point for your funnel |
| bootstrap startup | Indie Development | Founders searching for growth without VC | Good for blog/landing pages |
| solo founder guide | Indie Development | Educational | Clear match for your positioning |
| startup roadmap | Growth & Roadmap | How-to planning | Competes with Notion/roadmap templates |
| startup checklist | Growth & Roadmap | Tactical | Strong “template” content angle |
| indie hacker roadmap | Growth & Roadmap | How-to planning | Niche-specific variant you can own |
| startup daily tasks | Growth & Roadmap | Practical guidance | Perfect fit for “daily missions” angle |
{ pkgs ? import <nixpkgs> {} }:

with pkgs; mkShell {
  # Include C++ headers for regular clang calls:
  NIX_CFLAGS_COMPILE = lib.optionals stdenv.isDarwin [
    "-I${lib.getDev libcxx}/include/c++/v1"
  ];
  nativeBuildInputs = [
    ...
---- MODULE VirtualMachine ----
VM_Version == "1.0.0"
LOCAL INSTANCE Sequences
LOCAL INSTANCE Integers
LOCAL INSTANCE TLC
CONSTANT PC
CONSTANT SUBJ

Overview

In this document, I propose separating SLI from SLO into a new data kind in the OpenSLO specification. See discussion #29.

SLI

A service level indicator (SLI) describes how to gather data from metric sources.

apiVersion: openslo/v0.2.0-beta
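The example above is truncated in the gist. For illustration only, a standalone SLI object under this proposal might take the following shape. The field names here (`ratioMetric`, `good`, `total`, `source`, `queryType`, `query`) follow common OpenSLO patterns but are a hypothetical sketch, not the final schema:

```yaml
apiVersion: openslo/v0.2.0-beta
kind: SLI                      # SLI as its own data kind, per the proposal
metadata:
  name: http-availability
spec:
  ratioMetric:                 # good-over-total ratio gathered from a metric source
    good:
      source: prometheus
      queryType: promql
      query: sum(rate(http_requests_total{code!~"5.."}[5m]))
    total:
      source: prometheus
      queryType: promql
      query: sum(rate(http_requests_total[5m]))
```

An SLO object could then reference this SLI by name instead of embedding the indicator inline.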
# python3 /tmp/iterategosrc.py /tmp/go/src/
# Check whether the go grammar can pass all golang source files.
import glob
import os
import subprocess
import sys

root_dir = sys.argv[1]
total = 0
failed = 0
# Recursively collect every .go file under root_dir.
for filename in glob.iglob(os.path.join(root_dir, '**', '*.go'), recursive=True):