Mark mvandermeulen

  • Fivenynes
  • Sydney, Australia
@fritzprix
fritzprix / llm-commit.sh
Last active March 25, 2025 08:10
🤖 Generate git commit messages automatically using local LLM (Ollama). Simple bash script that analyzes your git diff and creates meaningful commit messages. No API keys, no cloud - runs locally with Ollama.
#!/bin/bash
# Get the git diff and save it to a temporary file
git diff --cached > /tmp/git_diff.txt

# If there's no diff, exit
if [ ! -s /tmp/git_diff.txt ]; then
    echo "No staged changes to commit"
    exit 1
fi
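The preview cuts off here. A minimal sketch of how the script might continue, assuming the model name "llama3.2" and the prompt wording (both are guesses, not the gist's actual choices):

```shell
# Hypothetical continuation: ask a local Ollama model for a commit message.
COMMIT_MSG=""
PROMPT="Write a one-line git commit message for this diff:"
DIFF=$(cat /tmp/git_diff.txt 2>/dev/null)
if command -v ollama >/dev/null 2>&1; then
    # "llama3.2" is an assumed model name; substitute one you have pulled
    COMMIT_MSG=$(ollama run llama3.2 "$PROMPT $DIFF" 2>/dev/null)
fi
[ -n "$COMMIT_MSG" ] || COMMIT_MSG="chore: update staged files"   # fallback
echo "$COMMIT_MSG"
# To commit with it: git commit -m "$COMMIT_MSG"
```

The fallback keeps the script usable on machines where Ollama is missing or the model has not been pulled.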
@loftwah
loftwah / ollama.md
Last active March 13, 2025 00:51
ollama

Complete Ollama Guide

Running GGUF Models Locally with Ollama

GGUF (GPT-Generated Unified Format) has quickly become the go-to standard for running large language models on your machine. There’s a growing number of GGUF models on Hugging Face, and thanks to community contributors like TheBloke, you now have easy access to them.

Ollama is an application based on llama.cpp that allows you to interact with large language models directly on your computer. With Ollama, you can use any GGUF quantized models available on Hugging Face directly, without the need to create a new Modelfile or download the models manually.

In this guide, we'll explore two methods to run GGUF models locally with Ollama:
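As a preview, the two methods sketched under the assumption of current Ollama CLI behavior; the repository path and file names below are arbitrary examples, not recommendations:

```shell
# Method 1: run a GGUF model straight from Hugging Face by its repo path.
# Method 2: point a Modelfile at a locally downloaded .gguf file.
MODEL="hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M"

# Method 1 (no Modelfile needed):
#   ollama run "$MODEL"

# Method 2 (classic Modelfile route):
#   printf 'FROM ./llama-3.2-1b-instruct-q4_k_m.gguf\n' > Modelfile
#   ollama create my-local-model -f Modelfile
#   ollama run my-local-model
echo "example model reference: $MODEL"
```

The `hf.co/{user}/{repo}:{quant}` form pulls the quantized file directly, which is why no Modelfile is required for method 1.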

@ruvnet
ruvnet / tutorial.md
Created October 23, 2024 13:18
Train Your Own AI Models for Free Using Google AI Studio

How To Train Your Own AI Models for Free Using Google AI Studio

Introduction: Why Fine-Tuning AI Models Matters

This year, we've seen some remarkable leaps in the world of Large Language Models (LLMs). Models like o1, GPT-4o, and Claude 3.5 Sonnet show how far LLM capabilities have come, pushing the boundaries of coding, reasoning, and self-reflection. o1, in particular, is one of the best models on the market, known for self-reflection capabilities that allow it to iteratively improve its reasoning. GPT-4o offers a wide range of capabilities, making it incredibly versatile across tasks, while Claude 3.5 Sonnet excels at coding, solving complex problems with greater efficiency.

What many people don't realize is that these high-performing models are essentially fine-tuned versions of underlying base models. Fine-tuning optimizes a model for specific tasks, making it more useful for things like analysis, coding, and decision-making.

@wong2
wong2 / README.md
Last active January 12, 2025 03:12
How to run Claude computer use demo on macOS

Note

It is necessary to give Terminal (or iTerm or whatever you use) the permission to control the computer. This can be done in System Settings ➔ Privacy & Security ➔ Accessibility.

Guide

  • Install cliclick for mouse & keyboard emulation
    • brew install cliclick
  • Clone Anthropic quickstart repo
    • git clone https://github.com/anthropics/anthropic-quickstarts.git
  • cd computer-use-demo
  • Replace computer-use-demo/computer_use_demo/tools/computer.py with the modified version below
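For a sense of what the modified computer.py drives, here are representative cliclick invocations; the coordinates and text are arbitrary, and the mapping to computer.py is my assumption, not stated in the guide:

```shell
# cliclick takes short commands for mouse & keyboard emulation. The calls
# are guarded so they only execute where cliclick is installed (macOS).
if command -v cliclick >/dev/null 2>&1; then
    cliclick m:100,200   # move the pointer to (100, 200)
    cliclick c:100,200   # click at (100, 200)
    cliclick t:hello     # type the text "hello"
    RAN="yes"
else
    RAN="no"             # not installed; commands above are illustrative
fi
echo "cliclick demo ran: $RAN"
```

This is why the Accessibility permission above is required: without it, macOS silently blocks these synthetic input events.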
@e2r2fx
e2r2fx / image.lua
Created October 22, 2024 16:21
open ai image tool for codecompanion
local xml2lua = require("codecompanion.utils.xml.xml2lua")
local log = require("codecompanion.utils.log")

local function get_secret()
  local cmd = os.getenv("HOME") .. "/scripts/get_secret.sh"
  local handle = io.popen(cmd, "r")
  if handle then
    local result = handle:read("*a")
    log:trace("Executed cmd: %s", cmd)
    handle:close()
    return result
  end
end
@tkunstek
tkunstek / Readme.txt
Created October 21, 2024 10:24
A private team LLM for research
I created the environment in the docker-compose.yaml. The sandbox was only accessible via WireGuard and team members' individual keys. Since everything had to exist in the sandbox, you will see that I included a container to run Firefox. Using that container I downloaded all of the case files.
The case files included PDFs that were image scans of computer print-outs and handwritten notes. There was no native text in any of the files. I attempted using OCR software (Tika, Tesseract, etc.) with poor results. I settled on using AWS Textract in a private account with a private VPC.
Before sending the data to Textract there was some cleanup needed. First, I had to fix the file names; for this I used the detox Linux command. Next, each multi-page PDF had to be split into a separate file. See split.sh for a wrapper script I wrote to automate the job.
The resulting individual pages were then uploaded to S3 using the AWS CLI into a secure S3 bucket. I configured a retention policy on the bucket to delete all files
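The steps above can be sketched as a single guarded script. The tool names come from the write-up; the bucket and directory are placeholders, and pdfseparate is one possible splitter since split.sh itself isn't shown:

```shell
# Preprocessing pipeline sketch: sanitise names, split PDFs, upload pages.
SRC_DIR="./case_files"                          # placeholder directory
BUCKET="s3://example-private-bucket/pages/"     # placeholder bucket

# 1. Fix the file names (detox strips problematic characters):
command -v detox >/dev/null 2>&1 && detox -r "$SRC_DIR"

# 2. Split each multi-page PDF into one file per page:
for pdf in "$SRC_DIR"/*.pdf; do
    [ -e "$pdf" ] || continue
    command -v pdfseparate >/dev/null 2>&1 \
        && pdfseparate "$pdf" "${pdf%.pdf}-%d.pdf"
done

# 3. Upload the individual pages for Textract (left commented to avoid
#    touching a real bucket):
# aws s3 cp "$SRC_DIR" "$BUCKET" --recursive --exclude "*" --include "*-*.pdf"
```

Each step is guarded with `command -v` so the sketch degrades gracefully where a tool is missing.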
@monotykamary
monotykamary / 1_system_second_to_last_three.py
Last active March 12, 2025 07:31
Open WebUI Anthropic with Prompt Caching
"""
title: Anthropic Manifold Pipe
authors: monotykamary
author_url: https://github.com/monotykamary
funding_url: https://github.com/open-webui
version: 0.2.5
required_open_webui_version: 0.3.17
license: MIT
"""
@RomneyDa
RomneyDa / continue-deep-dive.md
Last active January 12, 2025 13:23
A look into each part of Continue's code 2024-10-5

Continue codebase deep dive

Intro

This is a deep dive into the Continue code base, folder by folder and file by file where relevant.

@exonomyapp
exonomyapp / Coolify Orchestrated PostgreSQL Cluster.md
Last active April 25, 2025 23:19
Coolify Orchestrated PostgreSQL Cluster

Coolify Orchestrated DB Cluster

In this project, our goal is to establish a robust and scalable infrastructure for a PostgreSQL database with high availability, seamless security, and integrated monitoring and alerting systems.

Introduction

We'll leverage tools like Patroni, Consul, Vault, Prometheus, Grafana, and Cert-Manager to ensure a comprehensive, modern solution. Coolify will act as our orchestration platform, managing various services and simplifying deployments. We aim to not only build a highly available database cluster but also provide a learning experience for interns that demonstrates best practices in DevOps, security, and observability.

The backbone of our infrastructure will be a distributed, high-availability PostgreSQL cluster. To ensure reliability, we'll introduce Patroni for automated failover, Consul for service coordination, and Vault for managing sensitive information. Monitoring will be handled by Prometheus and visualized using Grafana.
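To make the moving parts concrete, here is a minimal single-node Patroni configuration sketch, assuming Consul as the distributed configuration store; every host name, port, and credential below is a placeholder:

```yaml
# patroni.yml sketch (one node of the cluster); values are placeholders
scope: pg-cluster                       # cluster name shared by all nodes
name: pg-node-1                         # this node's unique name
consul:
  host: consul.service.local:8500       # Consul agent address
restapi:
  listen: 0.0.0.0:8008
  connect_address: pg-node-1:8008       # address other nodes can reach
postgresql:
  listen: 0.0.0.0:5432
  connect_address: pg-node-1:5432
  data_dir: /var/lib/postgresql/data
  authentication:
    superuser:
      username: postgres
      password: change-me               # in practice, inject from Vault
```

Prometheus can then scrape Patroni's REST API (port 8008 here) alongside a postgres_exporter, with Grafana dashboards on top.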

import asyncio
import os
from contextlib import asynccontextmanager
import sqlalchemy as sa
from dependency_injector import providers
from dependency_injector.containers import DeclarativeContainer
from dependency_injector.wiring import Provide, inject
from fastapi import Depends, FastAPI
from sqlalchemy.ext.asyncio import (