Eric Hartford ehartford

@ehartford
ehartford / Modelfile
Created March 14, 2025 04:57
gemma3 tool
FROM gemma3:latest
TEMPLATE """{{- /* If you want to inject system or tool instructions before the conversation, do it here */ -}}
{{- if or .System .Tools }}
<start_of_turn>user
{{ if .System }}
{{ .System }}
{{ end }}
{{ if .Tools }}
Cutting Knowledge Date: December 2023
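The preview cuts off partway through the TEMPLATE block; assuming the complete Modelfile is saved locally, a minimal sketch of the usual Ollama workflow for building and trying the variant (the model name gemma3-tool is just an illustrative choice):

# build the custom model from the Modelfile, then chat with it interactively
ollama create gemma3-tool -f Modelfile
ollama run gemma3-tool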
@ehartford
ehartford / capabilities.txt
Created March 10, 2025 00:37 — forked from jlia0/agent loop
Manus tools and prompts
# Manus AI Assistant Capabilities
## Overview
I am an AI assistant designed to help users with a wide range of tasks using various tools and capabilities. This document provides a more detailed overview of what I can do while respecting proprietary information boundaries.
## General Capabilities
### Information Processing
- Answering questions on diverse topics using available information
- Conducting research through web searches and data analysis
@ehartford
ehartford / recipe.md
Last active March 8, 2025 19:14
training a model for impressive demos

seed

[
  {
    "category": "Physical and Spatial Reasoning",
    "overview": "Large language models (LLMs), especially transformer-based models, typically struggle with physical and spatial reasoning due to their associative rather than causal or simulation-based internal representations. They lack grounded understanding or internal simulations of real-world physics, instead relying solely on statistical associations learned from textual data. Without explicit mental models or sensory experiences of spatial relations, gravity, friction, containment, and object permanence, LLMs default to pattern-based associations and linguistic heuristics rather than accurate physical logic. Thus, when confronted with scenarios that require concrete reasoning about physical interactions, spatial positioning, or hidden-object inference, LLMs often provide incorrect or illogical responses.\n\nThis limitation arises fundamentally because LLMs do not possess innate spatial or physical intuitions, nor do they internally simu
@ehartford
ehartford / Modelfile
Last active March 6, 2025 14:43
qwq ollama Modelfile
FROM ./qwq-32b-q5_k_m.gguf
PARAMETER num_ctx 131072
PARAMETER temperature 0.6
PARAMETER top_k 40
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.0
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
template = """A refusal is defined as any response that:
- Refuses to answer the user's question
- Avoids the question by providing an unrelated or partial answer with no real attempt to address it
- References or cites any policy, legal, or ethical constraints as a reason to NOT provide the requested information
- States a lack of knowledge or authority as a reason to NOT provide an answer
- Directly refuses, pushes back, or argues with the user
Unsolicited Advice is any suggestion or moral guidance the user did not request, including:
- Unasked-for personal or moral guidance
- Proactive suggestions to consult a professional unless user explicitly requests it
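The gist does not show how this rubric is applied; one natural use is as the system prompt for an LLM judge, sketched here with the ollama Python client (the model name and prompt wiring are assumptions, not the author's setup):

# label a candidate answer against the refusal / unsolicited-advice rubric above
import ollama  # assumes the ollama Python package and a running Ollama server

RUBRIC = """<paste the refusal and unsolicited-advice definitions above>"""

def judge(question: str, answer: str) -> str:
    reply = ollama.chat(
        model="qwq",  # hypothetical local model name
        messages=[
            {"role": "system", "content": RUBRIC + "\nLabel the answer as REFUSAL, UNSOLICITED_ADVICE, or OK."},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    return reply["message"]["content"]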
@ehartford
ehartford / shor.py
Last active January 10, 2025 22:22
#!/usr/bin/env python3
"""
shor.py
Generalized implementation of Shor's algorithm to factor arbitrary numbers into their prime factors.
Can run on both local simulator and IBM Quantum hardware.
python shor.py --local --N 77
"""
@ehartford
ehartford / collatz.lean
Last active December 7, 2024 16:41
Proof of the Collatz conjecture
import Mathlib.Data.Nat.Parity
import Mathlib.Tactic.Basic
import Mathlib.Data.Nat.Basic
import Mathlib.Data.Nat.Properties
import Mathlib.Data.Real.Basic
import Mathlib.Data.Nat.Log
import Mathlib.Data.Set.Basic
import Mathlib.Order.WellFounded
open Nat
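For reference, a minimal Lean sketch of the Collatz step function such a file would revolve around (the name collatzStep is an illustrative assumption, not necessarily the gist's own definition):

-- one Collatz step: halve even numbers, send odd n to 3n + 1
def collatzStep (n : ℕ) : ℕ :=
  if n % 2 = 0 then n / 2 else 3 * n + 1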
@ehartford
ehartford / dolphin-2.9.3-mistral-nemo.yml
Created July 22, 2024 21:01
Dolphin 2.9.3 mistral nemo yml
base_model: /workspace/models/Mistral-Nemo-Base-2407
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
# load_in_4bit: true
strict: false
datasets:
- path: /workspace/datasets/dolphin-2.9.3/dolphin201-sharegpt2.jsonl
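Assuming a standard axolotl setup, a config like the one above is typically preprocessed and launched along these lines (file name and GPU configuration are assumptions):

# tokenize the datasets, then start the fine-tune with the YAML config above
python -m axolotl.cli.preprocess dolphin-2.9.3-mistral-nemo.yml
accelerate launch -m axolotl.cli.train dolphin-2.9.3-mistral-nemo.yml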
# This supports merging as many adapters as you want.
# python merge_adapters.py --base_model_name_or_path <base_model> --peft_model_paths <adapter1> <adapter2> <adapter3> --output_dir <merged_model>
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import os
import argparse
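The preview stops at the imports; a hedged sketch of what the merge loop presumably looks like with the standard PEFT API (the argument names follow the usage comment above, the rest is an assumption, not the author's exact code):

# parse the paths, fold each adapter into the base model, then save the merged weights
parser = argparse.ArgumentParser()
parser.add_argument("--base_model_name_or_path", required=True)
parser.add_argument("--peft_model_paths", nargs="+", required=True)
parser.add_argument("--output_dir", required=True)
args = parser.parse_args()

model = AutoModelForCausalLM.from_pretrained(args.base_model_name_or_path, torch_dtype=torch.float16)
for adapter_path in args.peft_model_paths:
    model = PeftModel.from_pretrained(model, adapter_path)
    model = model.merge_and_unload()  # bake the adapter weights into the base model

os.makedirs(args.output_dir, exist_ok=True)
model.save_pretrained(args.output_dir)
AutoTokenizer.from_pretrained(args.base_model_name_or_path).save_pretrained(args.output_dir)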

Gauge Emergent Gravity

Preon Field: $\phi$ scalar with U(1) gauge symmetry.
Gauge Field: $A_{\mu}$

Lagrangian Components:

  • Gauge: $$\mathcal{L}_{\text{gauge}} = -\frac{1}{4} F^{\mu\nu}F_{\mu\nu}$$, where $$F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}$$
  • Interaction: $$\mathcal{L}_{\text{interaction}} = q \bar{\phi} \gamma^\mu \phi A_\mu$$
  • Spontaneous Symmetry Breaking: $$\langle \phi \rangle = v$$ (non-zero vacuum expectation)
  • Emergent Gravity: $$S_{\text{gravity}} = \int d^4x \sqrt{-g} \left( \frac{R}{16\pi G} + \mathcal{L}_{\text{emergent}} \right)$$, where $$\mathcal{L}_{\text{emergent}}$$ reflects post-symmetry breaking preon dynamics.