Mark mvandermeulen

  • Fivenynes
  • Sydney, Australia
@rajivmehtaflex
rajivmehtaflex / HF_AS_OPENAI.py
Last active February 16, 2025 15:27
smolagents - Example
from langchain_openai import ChatOpenAI

# Point the OpenAI-compatible client at the Hugging Face Inference API
model = ChatOpenAI(
    temperature=0.5,
    model='codellama/CodeLlama-34b-Instruct-hf',
    base_url='https://api-inference.huggingface.co/v1/',
    api_key='<KEY>',
)

response = model.invoke([{"role": "user", "content": "What is the color of a flamingo?"}])
print(response.content)
# Writing Code and Documentation
## ⚠️ CRITICAL: ALL CODE MUST BE IN ARTIFACTS ⚠️
The single most important rule: NEVER write code directly in the conversation. ALL code MUST be provided as a unified diff file within an artifact. No exceptions.
## Before You Start
1. Read the user's request
2. Plan your response
3. Create a SINGLE unified diff file containing ALL code changes
4. Place this diff file in an artifact (see the sketch below)
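For instance, a minimal artifact might hold a unified diff like the following (the file greet.py and the greet() function are hypothetical, purely to illustrate the format):

--- a/greet.py
+++ b/greet.py
@@ -1,2 +1,2 @@
 def greet(name):
-    print("Hello " + name)
+    print(f"Hello, {name}!")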
@othyn
othyn / 00_local_llm_guide.md
Last active April 28, 2025 23:20
Setting up a local only LLM (Qwen/Llama3/etc.) on macOS with Ollama, Continue and VSCode


As with a lot of organisations, the idea of using LLMs is a reasonably frightening concept, as people freely hand over internal IP and sensitive comms to remote entities that are, by nature, heavily data bound. I know it was on our minds when deciding on LLMs and their role within the team and wider company. Six months ago, I set out to explore what offerings were like in the self-hosted and/or OSS space, and whether anything could be achieved locally. After using this setup since then, and after getting a lot of questions on it, I thought I might share some of the things I've come across and how to get it all set up.

Cue Ollama and Continue. Ollama is an easy way to locally download, manage and run models. It's very similar to Docker in its usage, and can probably be most conceptually aligned with it in how it operates: think images, but for models.
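Once Ollama is up, everything stays on localhost. As a rough sketch, assuming the default Ollama port (11434) and a model you have already pulled locally (llama3 here), you can talk to it from any HTTP client:

import requests

# Ask the locally running Ollama instance for a completion.
# Nothing leaves the machine: the request only ever hits localhost.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any model you have pulled with `ollama pull`
        "prompt": "Summarise what Continue does in one sentence.",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])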

@archydeberker
archydeberker / client_injection.py
Last active January 21, 2025 15:38
Demonstrate the repository pattern for session mgmt in FastAPI
from typing import List
from uuid import UUID

from fastapi import Depends, FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker


class User(BaseModel):
    # This is a DB model - in SQLModel you can return the ORM model directly bc it's Pydantic under the hood
    id: UUID
    name: str  # illustrative fields; the gist's originals are truncated
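The gist is truncated above; a minimal sketch of where it is heading, per the description (the UserORM table, UserRepository class, and get_repo dependency below are illustrative assumptions, not the gist's actual code):

from sqlalchemy import Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class UserORM(Base):
    __tablename__ = "users"  # illustrative table definition
    id = Column(String, primary_key=True)
    name = Column(String)

class UserRepository:
    """Wraps a Session and owns all query logic for the User aggregate."""

    def __init__(self, session: Session):
        self.session = session

    def get(self, user_id: UUID) -> User:
        row = self.session.get(UserORM, str(user_id))
        if row is None:
            raise KeyError(user_id)
        return User(id=UUID(row.id), name=row.name)

engine = create_engine("sqlite:///./app.db")
SessionLocal = sessionmaker(bind=engine)
Base.metadata.create_all(engine)
app = FastAPI()

def get_repo():
    # One session (and one repository) per request, always closed afterwards.
    session = SessionLocal()
    try:
        yield UserRepository(session)
    finally:
        session.close()

@app.get("/users/{user_id}")
def read_user(user_id: UUID, repo: UserRepository = Depends(get_repo)) -> User:
    try:
        return repo.get(user_id)
    except KeyError:
        raise HTTPException(status_code=404, detail="User not found")

The point of the pattern is that route handlers never touch the Session directly, so the persistence layer can be swapped or faked in tests by overriding a single dependency.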
@virattt
virattt / hedge-fund-agent-team-v1-3.ipynb
Last active March 28, 2025 05:53
hedge-fund-agent-team-v1-3.ipynb
@timothywarner
timothywarner / github-copilot-certification-resources.md
Last active March 16, 2025 13:44
GitHub Copilot Certification Resources
@s3rgeym
s3rgeym / app.py
Created November 11, 2024 21:29
HH Applicant Telemetry Server
from __future__ import annotations

from datetime import datetime
from typing import Any, AsyncGenerator, Dict, List, Optional

from fastapi import Depends, FastAPI, Request
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse
from pydantic import BaseModel, Field, ValidationError, validator
from pydantic_settings import BaseSettings
from sqlalchemy import JSON, Column, DateTime, Integer, String, func
from sqlalchemy.dialects.postgresql import TIMESTAMP
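The gist body is cut off after the imports. As a hedged sketch of what a telemetry ingestion endpoint built on these imports might look like (Settings, TelemetryEvent, and the /telemetry route are assumptions for illustration, not the gist's code):

class Settings(BaseSettings):
    # pydantic-settings reads this from the environment; the default is illustrative.
    database_url: str = "postgresql://localhost/telemetry"

class TelemetryEvent(BaseModel):
    app_version: str
    payload: Dict[str, Any] = Field(default_factory=dict)
    created_at: Optional[datetime] = None

settings = Settings()
app = FastAPI()

@app.exception_handler(RequestValidationError)
async def validation_handler(request: Request, exc: RequestValidationError) -> JSONResponse:
    # Return a compact error body instead of FastAPI's default 422 payload.
    return JSONResponse(status_code=422, content={"error": str(exc)})

@app.post("/telemetry")
async def ingest(event: TelemetryEvent) -> Dict[str, str]:
    # A real server would persist the event using the SQLAlchemy imports above.
    return {"status": "accepted"}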
@otmb
otmb / progress_multipartparser_example.py
Created November 10, 2024 15:12
File upload progress with FastAPI
# https://www.starlette.io/requests/
# https://developer.mozilla.org/ja/docs/Web/API/Streams_API/Using_readable_streams
import asyncio
import json
import os
import tempfile
import time
from logging import StreamHandler, getLogger

from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import HTMLResponse, Response, StreamingResponse
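Only the imports survive above. The gist's actual code parses multipart form data; the simpler sketch below shows the same progress-tracking idea by streaming the raw request body with Starlette's request.stream() (the /upload route and logger name are illustrative):

logger = getLogger("upload")
logger.addHandler(StreamHandler())
logger.setLevel("INFO")

app = FastAPI()

@app.post("/upload")
async def upload(request: Request) -> Response:
    # Read the body chunk by chunk, logging progress against Content-Length.
    total = int(request.headers.get("content-length", 0))
    if total == 0:
        raise HTTPException(status_code=411, detail="Content-Length required")
    received = 0
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        async for chunk in request.stream():
            tmp.write(chunk)
            received += len(chunk)
            logger.info("progress: %.1f%%", 100 * received / total)
    return Response(status_code=204)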
@shresthakamal
shresthakamal / cors.py
Last active January 21, 2025 17:07
FAST API Response and Request [LEARN]
"""CORS or "Cross-Origin Resource Sharing" refers to the situations when a frontend running in a browser has JavaScript code that communicates with a backend, and the backend is in a different "origin" than the frontend.
Origin
An origin is the combination of protocol (http, https), domain (myapp.com, localhost, localhost.tiangolo.com), and port (80, 443, 8080).
So, all these are different origins:
http://localhost
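The snippet is cut off before the middleware itself is wired up; in FastAPI that is done with CORSMiddleware (the origin list below is illustrative):

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the listed frontend origins to call this backend from a browser.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # illustrative frontend origin
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)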
@itsfrank
itsfrank / codecompanion-save.lua
Last active April 26, 2025 00:01
Snippet to add the ability to save/load CodeCompanion chats in neovim
-- Adds 2 commands:
--   CodeCompanionSave [space delimited args]
--   CodeCompanionLoad
-- Save writes the current chat to a md file named 'space-delimited-args.md'
-- Load uses a telescope file picker to open a previously saved chat

-- Create a folder to store our chats (assumed: plenary's mkdir with parents)
local Path = require("plenary.path")
local data_path = vim.fn.stdpath("data")
local save_folder = Path:new(data_path, "cc_saves")
if not save_folder:exists() then
  save_folder:mkdir({ parents = true })
end