Gary Blankenship garyblankenship

@zackangelo
zackangelo / context.txt
Created December 13, 2024 19:39
Llama 3.3 Multi-tool Use Context Window
<|begin_of_text|><|start_header_id|>system<|end_header_id|>Environment: ipython
Cutting Knowledge Date: December 2023
Today Date: 13 Dec 2024
# Tool Instructions
You may optionally call functions that you have been given access to. You DO NOT have
to call a function if you do not require it. ONLY call functions if you need them. Do NOT call
functions that you have not been given access to.
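For reference, the system header above can be assembled programmatically. This is a hedged Python sketch: the header tokens and dates are taken from the context shown, while the `get_weather` tool definition is a made-up example, not part of the original gist.

```python
# Sketch: assembling a Llama 3.x-style tool-use system prompt.
# Header tokens mirror the context above; the tool definition is hypothetical.
import json

def build_system_prompt(today: str, tools: list) -> str:
    tool_json = "\n".join(json.dumps(t, indent=2) for t in tools)
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>"
        "Environment: ipython\n"
        "Cutting Knowledge Date: December 2023\n"
        f"Today Date: {today}\n"
        "# Tool Instructions\n"
        "You may optionally call functions that you have been given access to. "
        "ONLY call functions if you need them.\n"
        f"{tool_json}\n"
    )

prompt = build_system_prompt(
    "13 Dec 2024",
    [{"name": "get_weather", "parameters": {"city": {"type": "string"}}}],
)
```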
@michabbb
michabbb / Gemini.php
Created October 22, 2024 17:13
Upload Files to the Gemini API
<?php
namespace App\Services\google;
use Exception;
use Illuminate\Http\Client\ConnectionException;
use Illuminate\Support\Facades\Http;
class Gemini
{
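The preview cuts off at the class declaration. As a rough, framework-free sketch of the same idea in Python: the `/upload/v1beta/files` endpoint and the `X-Goog-Upload-Protocol` header are assumptions based on the public Gemini Files API documentation, so verify them before use.

```python
# Sketch: building a Gemini Files API upload request.
# Endpoint and header names are assumptions; check Google's current docs.
BASE = "https://generativelanguage.googleapis.com"

def build_upload_request(api_key: str, path: str, mime_type: str) -> dict:
    return {
        "url": f"{BASE}/upload/v1beta/files?key={api_key}",
        "headers": {
            "X-Goog-Upload-Protocol": "multipart",
            "Content-Type": mime_type,
        },
        "file": path,
    }

req = build_upload_request("API_KEY", "report.pdf", "application/pdf")
```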
@cugu
cugu / README.md
Last active March 25, 2025 16:46
Webhooks for PocketBase

A simple webhook plugin for PocketBase.

Adds a new collection "webhooks" to the admin interface, to manage webhooks.

Example

The webhook record in the following example sends create, update, and delete events in the tickets collection to http://localhost:8080/webhook.
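Based on that description, a webhook record might look something like this (field names are assumptions; check the plugin's README for the actual schema):

```json
{
  "collection": "tickets",
  "destination": "http://localhost:8080/webhook"
}
```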

@oplanre
oplanre / Pipeline.php
Created June 13, 2024 22:17
Simple pipeline implementation in php as a class or function
<?php
class Pipeline {
    public function __construct(
        private mixed $data
    ) {}

    public function pipe(callable ...$callbacks): static {
        // Apply each callable in order, feeding each result to the next.
        foreach ($callbacks as $callback) {
            $this->data = $callback($this->data);
        }
        return $this;
    }
}
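The same idea translated to Python as a quick sketch. The `get()` accessor is my addition, since the preview cuts off before showing how the value is read back.

```python
# Sketch: a minimal pipeline that threads a value through callables.
class Pipeline:
    def __init__(self, data):
        self.data = data

    def pipe(self, *callbacks):
        # Apply each callable in order, feeding each result to the next.
        for callback in callbacks:
            self.data = callback(self.data)
        return self

    def get(self):
        return self.data

result = Pipeline(2).pipe(lambda x: x + 1, lambda x: x * 3).get()
# result is 9: (2 + 1) * 3
```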
@roychri
roychri / README.md
Created May 2, 2024 17:50
Stream Ollama (openai) chat completion API on CLI with HTTPie and jq

Explanation

This command sends a request to the Chat Completion API to generate high-level documentation for the file @src/arch.js. The API is configured to use the llama3-gradient model and to respond in Markdown format.

The messages array contains two elements:

  • The first element is a system message that provides the prompt for the API.
  • The second element is a user message that specifies the file for which to generate documentation.
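The request body described above can be sketched as a plain payload. The model name, streaming, and the two-message structure come from the gist's description; the system prompt wording is a placeholder, not the gist's actual text.

```python
# Sketch: the chat-completion payload described above.
# The system prompt wording is a placeholder assumption.
payload = {
    "model": "llama3-gradient",
    "stream": True,
    "messages": [
        {
            "role": "system",
            "content": "Respond in Markdown with high-level documentation.",
        },
        {"role": "user", "content": "@src/arch.js"},
    ],
}
```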
# Machine Intelligence Made to Impersonate Characteristics: MIMIC
# NOTE: run `conda install -c conda-forge mpi4py mpich` to get MPI working
# accelerate launch --use_deepspeed -m axolotl.cli.train ./config_name_here
base_model: alpindale/Mistral-7B-v0.2-hf
base_model_config: alpindale/Mistral-7B-v0.2-hf
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
@Artefact2
Artefact2 / README.md
Last active April 6, 2025 06:45
GGUF quantizations overview
@garyblankenship
garyblankenship / deployartifact.yml
Created January 6, 2024 03:26
Workflow to Deploy Laravel to Digital Ocean
name: Deploy to DigitalOcean
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
@garyblankenship
garyblankenship / gracemacro.php
Last active January 11, 2024 02:10
Laravel Cache::grace() return stale and dispatch revalidate
<?php
/**
* Graceful cache retrieval and updating macro.
*
* First tries to retrieve the cached value; if it is missing, serves a stale "grace" value instead.
* When a grace value is served, schedules a background job to revalidate the cache.
*
* @param string $key The cache key to retrieve.
* @param int $ttl Time to live for the cache, in seconds.
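The docblock describes a stale-while-revalidate pattern: serve the stale ("grace") value immediately and queue a background refresh. A language-agnostic sketch in Python follows; the dict-backed stores and the job list are illustrative assumptions, not the Laravel macro itself.

```python
# Sketch: stale-while-revalidate ("grace") caching.
# In-memory stand-ins for the cache, the grace store, and the job queue.
cache = {}        # fresh values, keyed by cache key
grace = {}        # stale fallbacks kept past their TTL
queued_jobs = []  # stand-in for a background job queue

def cache_grace(key, ttl, compute):
    # ttl bookkeeping is omitted from this sketch for brevity.
    if key in cache:
        return cache[key]          # fresh hit: return immediately
    if key in grace:
        queued_jobs.append(key)    # stale hit: revalidate in background
        return grace[key]
    value = compute()              # full miss: compute synchronously
    cache[key] = value
    grace[key] = value
    return value

grace["report"] = "stale-report"
result = cache_grace("report", 60, lambda: "fresh-report")
```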
@garyblankenship
garyblankenship / googleai.php
Last active December 15, 2023 22:42
Google AI Gemini Pro Example Laravel Usage
<?php
$response = Http::post(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=" .
        config("services.googleai.key"),
    [
        "contents" => [
            [
                "parts" => [
                    [
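The PHP preview cuts off mid-array. For reference, the generateContent request it is building has roughly this shape, sketched in Python with a placeholder prompt and API key.

```python
# Sketch: the Gemini generateContent request shape the PHP above builds.
# The prompt text and API key are placeholders; the URL mirrors the gist.
API_KEY = "API_KEY"  # stand-in for config("services.googleai.key")
url = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"gemini-pro:generateContent?key={API_KEY}"
)
body = {
    "contents": [
        {"parts": [{"text": "Hello, Gemini!"}]},
    ],
}
```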