Adeel Ahmad (adeelahmad)

@jepjoo
jepjoo / llm_utils.py
Created July 4, 2025 12:34
Fix for llama-server
import os
import re
from copy import deepcopy
from functools import cache
from typing import Any, AsyncIterator, Protocol, cast
from mistralai import Mistral
from openai import AsyncOpenAI, OpenAI
from unmute.kyutai_constants import LLM_SERVER
@EricZimmerman
EricZimmerman / Caddy_AuthCrunch.md
Last active February 1, 2026 13:26
Caddy and Authcrunch working example

After pulling everything together, I thought it would be a good idea to document what ended up working for me with the following setup:

  1. *darr apps
  2. Some 3D printers
  3. Mobileraker
  4. NZB360

This stack requires the following:

  • Protecting the sites from unauthorized access
[TASK(s)]
1.
[INPUT]
Design a chatbot interface where, instead of responding with plain text, the AI provides action buttons based on the user’s message. The chatbot should analyze the user input and dynamically generate relevant options as clickable buttons.
Key Features:
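The button-generating behavior described above can be sketched server-side as a small function that maps the user's message to a list of action buttons. The keyword rules and button schema below are illustrative assumptions, not part of any particular chatbot framework:

```python
import json

# Hypothetical intent-to-button rules; a real system might use an LLM or
# intent classifier instead of keyword matching.
BUTTON_RULES = {
    "order": [{"label": "Track order", "action": "track_order"},
              {"label": "Cancel order", "action": "cancel_order"}],
    "refund": [{"label": "Start refund", "action": "start_refund"}],
}

FALLBACK = [{"label": "Talk to a human", "action": "handoff"}]

def buttons_for(message: str) -> list[dict]:
    """Return action buttons relevant to the user's message."""
    text = message.lower()
    matched = [btn for kw, btns in BUTTON_RULES.items() if kw in text
               for btn in btns]
    return matched or FALLBACK

def chatbot_reply(message: str) -> str:
    """Respond with a JSON button payload instead of plain text."""
    return json.dumps({"type": "buttons", "options": buttons_for(message)})
```

A message like "Where is my order?" would then yield track/cancel buttons, while an unrecognized message falls back to a human-handoff button.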
@jonashaag
jonashaag / Use macOS OCR engine from Python.md
Last active June 13, 2025 12:22
Use macOS OCR engine from Python

macOS Live Text has a very good quality/speed tradeoff.

Compared to Tesseract, it has much higher quality and is up to 3x as fast.
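A minimal sketch of calling the macOS OCR engine via PyObjC's Vision bindings is below. It assumes macOS with the `pyobjc-framework-Vision` package installed; the imports are deferred into the function so it can at least be defined on other platforms:

```python
def macos_ocr(image_path: str) -> list[str]:
    """Return recognized text lines from an image via VNRecognizeTextRequest."""
    # Deferred imports: these only resolve on macOS with PyObjC installed.
    import Foundation
    import Vision

    url = Foundation.NSURL.fileURLWithPath_(image_path)
    handler = Vision.VNImageRequestHandler.alloc().initWithURL_options_(url, None)
    request = Vision.VNRecognizeTextRequest.alloc().init()
    # "Accurate" trades a little speed for quality; "Fast" is the alternative.
    request.setRecognitionLevel_(Vision.VNRequestTextRecognitionLevelAccurate)
    handler.performRequests_error_([request], None)
    # Each observation holds ranked candidates; take the top string of each.
    return [obs.topCandidates_(1)[0].string() for obs in request.results()]
```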

@kykim0
kykim0 / conversation.py
Last active April 4, 2025 12:31
Llama3 custom
"""
Conversation prompt templates.
We kindly request that you import fastchat instead of copying this file if you wish to use it.
If you have any changes in mind, please contribute back so the community can benefit collectively and continue to maintain these valuable templates.
"""
import base64
import dataclasses
from enum import auto, IntEnum
@adeelahmad
adeelahmad / README.md
Last active April 8, 2024 11:54
Description of the process for generating an instruction fine-tuning dataset.

An instruction fine-tune that can enhance LLM capabilities and safety.

flowchart
	StartScriptExecution["Start Script Execution"]
	ReadTextFile["Read Text File"]
	ChunkTextIntoSegments["Chunk Text Into Segments"]
	ForEachTextChunk["For Each Text Chunk"]
	PerformNamedEntityRecognition["Perform Named Entity Recognition (NER)"]
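The first steps of the flowchart above can be sketched as a plain Python pipeline. The chunk size and the stubbed-out NER step are illustrative assumptions; the gist's actual script may differ:

```python
def chunk_text(text: str, max_chars: int = 500) -> list[str]:
    """Split text into roughly fixed-size chunks on whitespace boundaries."""
    words, chunks, current = text.split(), [], ""
    for word in words:
        if current and len(current) + 1 + len(word) > max_chars:
            chunks.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        chunks.append(current)
    return chunks

def run_pipeline(path: str) -> list[dict]:
    """Read a text file, chunk it, and process each chunk in turn."""
    with open(path, encoding="utf-8") as fh:
        text = fh.read()
    results = []
    for chunk in chunk_text(text):
        # Placeholder for the NER step named in the flowchart;
        # e.g. a spaCy pipeline or an LLM call would go here.
        entities = []
        results.append({"chunk": chunk, "entities": entities})
    return results
```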

Tokenizer Notes

Praxis Maldevide - Draft A

Introduction

This document is a collection of thoughts and observations about the tokenizers used in llama-rooted large language models.

The Tokenizer

Most language models use the LlamaTokenizer.
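One way to observe this is to load and inspect a Llama-family tokenizer with the Hugging Face `transformers` library. The model id below is a placeholder assumption; any llama-rooted checkpoint you have access to would work, and the import is deferred because `transformers` is a heavy dependency:

```python
def inspect_tokenizer(model_id: str = "meta-llama/Llama-2-7b-hf"):
    """Load a Llama tokenizer and print a few of its basic properties."""
    from transformers import LlamaTokenizer  # deferred: heavy dependency

    tok = LlamaTokenizer.from_pretrained(model_id)
    print("vocab size:", tok.vocab_size)
    print("special tokens:", tok.all_special_tokens)
    return tok
```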

@Artefact2
Artefact2 / README.md
Last active February 22, 2026 16:07
GGUF quantizations overview

LiteLLM: Bypass Cert Verifications

Overview

If you use LiteLLM to proxy requests to Ollama.ai in corporate environments, you may encounter the following error in your Python application:

httpcore.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)
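One generic workaround is to disable certificate verification at the Python `ssl` layer, as sketched below. This only affects stdlib-based HTTP clients; LiteLLM's own HTTP stack (httpx) may need its own setting (recent LiteLLM versions expose an `ssl_verify` option), and the safer fix in a corporate environment is pointing `SSL_CERT_FILE` at your corporate CA bundle rather than disabling verification:

```python
import ssl

# Make the default HTTPS context skip certificate verification process-wide.
# Only appropriate on trusted networks where a corporate proxy re-signs TLS.
ssl._create_default_https_context = ssl._create_unverified_context
```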