@gavrie
gavrie / gist:ebfad1920d7c300ff9b55d68839adbc2
Last active January 12, 2025 02:08
Building Consistent Artifacts from Multiple Git Repositories

The following is based on this prompt to Claude:

Design a solution to build artifacts from several Git repos that all use the same compiler flags. We do not want to use a central makefile for this. The repos are all built using a specific Docker image that includes all the build tools. How should we store and cache the artifacts, and retrieve the relevant versions?

I'll design a solution that focuses on consistency, reusability, and efficient artifact management. Here's a comprehensive approach:

  1. Compiler Flags Management:
# compiler-config.yaml (stored in a separate config repo)
compiler:
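
As a hedged sketch, the rest of such a shared compiler-config.yaml might look like the following; every field and value below is an assumption for illustration, not taken from the gist:

# compiler-config.yaml (hypothetical sketch; all fields are assumptions)
compiler:
  cc: gcc-13
  cflags: "-O2 -Wall -Werror -fstack-protector-strong"
  ldflags: "-Wl,-z,relro"
build_image: registry.example.com/build-tools:1.4.2

Each repository would fetch this file at build time so that every artifact is compiled with identical flags.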

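On the prompt's storage-and-caching question, one possible sketch (an illustration under stated assumptions, not the gist's actual answer) is to key each artifact by the source commit plus a digest of the shared compiler config, then push and fetch it from a common store by that key:

import hashlib
import subprocess

def artifact_cache_key(config_path: str) -> str:
    """Content-addressed key: current git commit + digest of the shared compiler config."""
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    with open(config_path, "rb") as f:
        config_digest = hashlib.sha256(f.read()).hexdigest()[:12]
    return f"{commit}-{config_digest}"

# Hypothetical usage: upload the build output under this key (for example to S3 or an
# OCI registry) and have downstream builds look it up by the same key before rebuilding.
print(artifact_cache_key("compiler-config.yaml"))
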
Home Assistant Model Context Protocol integration

TL;DR

Completing these steps gives you an LLM-powered web scraper in Home Assistant via the Model Context Protocol, along with an example of how you could build a template entity that extracts news headlines for a display.
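
For instance, that template entity could look roughly like the following (a hedged sketch: the entity name, source sensor, and attribute are assumptions, not from the gist):

# configuration.yaml (hypothetical sketch)
template:
  - sensor:
      - name: "News Headline"
        state: "{{ state_attr('sensor.llm_scraper', 'headline') }}"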

Prerequisites

This assumes you already know about the following:

@gc-victor
gc-victor / software_engineer_system_prompt
Created January 7, 2025 09:27
Software Engineer System Prompt
<system_prompt>
<identity>
<role>You are a highly skilled software engineer with extensive knowledge across multiple programming languages, frameworks, design patterns, and best practices.</role>
<characteristics>
- Emulates highly proficient developers
- Provides clear, efficient, and concise coding solutions
- Maintains friendly and approachable demeanor
- Stays up-to-date with latest technologies and best practices
- Focuses on modern web development
- Knowledge spans various programming languages and frameworks
@shaoyanji
shaoyanji / duck.lua
Last active January 12, 2025 02:07
tgpt integration within neovim
#!/usr/bin/env luajit
-- Handle command-line arguments
local args = {...}
for i, v in ipairs(args) do
  -- print("Argument " .. i .. ": " .. v)
end
-- Handle piped input
--local piped_input = io.stdin:read("*a")
--if piped_input and piped_input ~= "" then
--  print("Received piped input:", piped_input)
@Maharshi-Pandya
Maharshi-Pandya / contemplative-llms.txt
Last active April 28, 2025 13:10
"Contemplative reasoning" response style for LLMs like Claude and GPT-4o
You are an assistant that engages in extremely thorough, self-questioning reasoning. Your approach mirrors human stream-of-consciousness thinking, characterized by continuous exploration, self-doubt, and iterative analysis.
## Core Principles
1. EXPLORATION OVER CONCLUSION
- Never rush to conclusions
- Keep exploring until a solution emerges naturally from the evidence
- If uncertain, continue reasoning indefinitely
- Question every assumption and inference
@ruvnet
ruvnet / SynthLang.md
Created January 5, 2025 03:18
SynthLang is a hyper-efficient prompt language designed to optimize interactions with Large Language Models (LLMs) like GPT-4o by leveraging logographical scripts and symbolic constructs.

SynthLang: A Hyper-Efficient Prompt Language for AI

SynthLang is a hyper-efficient prompt language designed to optimize interactions with Large Language Models (LLMs) like GPT-4o by leveraging logographical scripts and symbolic constructs. By compressing complex instructions into fewer tokens (reducing token usage by 40–70%), SynthLang significantly lowers inference latency, making it ideal for latency-sensitive applications such as high-frequency trading, real-time analytics, and compliance checks.

Additionally, SynthLang mitigates English-centric biases in multilingual models, enhancing information density and ensuring more equitable performance across diverse languages. Its scalable design maintains or improves task performance in translation, summarization, and question-answering, fostering faster, fairer, and more efficient AI-driven solutions.

Large Language Models (LLMs) such as GPT-4o and Llama-2 exhibit English-dominant biases in intermediate embeddings, leading to inefficient and oft…

@erikj27
erikj27 / pydantic-ai_summary.md
Last active February 28, 2025 05:22
Documentation for pydantic-ai

Repository Overview

Directory Structure

pydantic-ai
├── LICENSE
├── Makefile
├── README.md
├── docs
│   ├── _worker.js
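
Beyond the directory layout, a minimal usage sketch of the library itself (hedged: the model string is an assumption, and result.data reflects the Agent API as documented in early pydantic-ai releases):

from pydantic_ai import Agent

# Assumes an OpenAI API key in the environment; the model name is an assumption.
agent = Agent("openai:gpt-4o", system_prompt="Be concise.")
result = agent.run_sync("What does pydantic-ai add on top of Pydantic?")
print(result.data)  # the validated output; early releases exposed it as .data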
@bigsnarfdude
bigsnarfdude / scoring_engine_with_metadata_creation.py
Created December 28, 2024 20:44
scoring_engine_with_metadata_creation.py
import chromadb                                    # embedded vector database client
import json
from ollama import Client                          # client for a local Ollama LLM server
from typing import List, Dict, Any
import re
from dataclasses import dataclass
import numpy as np
from concurrent.futures import ThreadPoolExecutor  # parallel scoring workers

@dataclass
class ScoringRecord:  # hypothetical placeholder; the name and fields are assumptions
    ...

import os
import openai                    # OpenAI API client
from bs4 import BeautifulSoup    # HTML parsing
from loguru import logger        # structured logging
import tiktoken                  # tokenizer matching OpenAI models

# Remove default handlers and add a new one writing to standard output
logger.remove()
logger.add(
    lambda msg: print(msg, end=""),  # loguru messages already end with a newline
)
How to Extract All Data from WhatsApp

Follow these steps to extract all your WhatsApp data, including messages, in a secure and comprehensive way.


Disclaimer:

This guide is intended to help you access your own data only. Unauthorized access to data that does not belong to you may violate privacy laws and terms of service. Use this guide responsibly.