Laurian Gridinoc (Laurian)

  • Creative Technologist ※ Knight-Mozilla OpenNews Fellow ※ Visual analytics × Computational Linguistics × Semantic Web
  • Cyberspace
  • X @gridinoc
- Understand the Task: Grasp the main objective, goals, requirements, constraints, and expected output.
- Minimal Changes: If an existing prompt is provided, improve it only if it's simple. For complex prompts, enhance clarity and add missing elements without altering the original structure.
- Reasoning Before Conclusions: Encourage reasoning steps before any conclusions are reached. ATTENTION! If the user provides examples where the reasoning happens afterward, REVERSE the order! NEVER START EXAMPLES WITH CONCLUSIONS!
- Reasoning Order: Call out reasoning portions of the prompt and conclusion parts (specific fields by name). For each, determine the ORDER in which this is done, and whether it needs to be reversed.
- Conclusion, classifications, or results should ALWAYS appear last.
- Examples: Include high-quality examples if helpful, using placeholders [in brackets] for complex elements.
- What kinds of examples may need to be included, how many, and whether they are complex enough to benefit from placeholders.
@mtisz
mtisz / llama-3-70B-qlora.yaml
Created May 15, 2024 16:47
Axolotl Config for Llama-3-70B QLoRA
base_model: meta-llama/Meta-Llama-3-70B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: /home/migel/ai_datasets/tess-v1.5b-chatml.jsonl
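
Assuming a standard axolotl install (the launch step itself is not shown in the gist), a config like this is normally run through accelerate with axolotl's documented training entry point; a minimal Python wrapper as a sketch:

import subprocess

# Sketch only -- assumes axolotl is installed and the gist's YAML is saved
# locally as llama-3-70B-qlora.yaml; this just shells out to axolotl's
# documented training entry point.
subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "llama-3-70B-qlora.yaml"],
    check=True,
)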
import llama_cpp
import re
import json
# Model configuration
# tested with mistral, llama2, llama3, and phi3
model_path = "/path/to/model"
base_llm = llama_cpp.Llama(model_path, seed=42, n_gpu_layers=-1, n_ctx=4096, verbose=False, temperature=0.0)
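
A minimal usage sketch for the model object above, assuming a plain completion call; the prompt text and the expected keys are illustrative only (not from the gist), and the already-imported re and json handle output parsing:

# Sketch only -- prompt and expected keys are illustrative assumptions.
prompt = "Return only a JSON object with keys 'title' and 'summary' for the text: 'QLoRA fine-tunes large models on top of 4-bit quantized weights.'"
result = base_llm(prompt, max_tokens=256, temperature=0.0)  # sampling options applied per call
raw_text = result["choices"][0]["text"]

# Extract the first {...} block in case the model wraps the JSON in extra prose.
match = re.search(r"\{.*\}", raw_text, re.DOTALL)
parsed = json.loads(match.group(0)) if match else None
print(parsed)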
@matt-e-king
matt-e-king / gist:3ac2127025cf5dbc03001d9b21c1984a
Created April 17, 2024 19:06
Hypothesis Embed and sock in Vue app
// HypothesisEmbed.vue
// This is the component that mounts on the page that has the annotations
// It's important to destroy the client when the component is unmounted
// because if the component remounts, it creates duplicate annotations in the DOM
<template>
  <div
    id="HypothesisEmbed"
    class="HypothesisEmbed"
  />
@veekaybee
veekaybee / normcore-llm.md
Last active November 18, 2024 19:40
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models

@gimenete
gimenete / safeParse.ts
Last active March 15, 2024 16:05
A wrapper around the fetch function that validates the response body against a Zod schema
import z from "zod";
export async function safeFetch<T>(
  schema: z.Schema<T>,
  input: RequestInfo,
  init?: RequestInit
): Promise<T> {
  const response = await fetch(input, init);
  if (!response.ok) {
@hyperupcall
hyperupcall / settings.jsonc
Last active September 19, 2024 16:20
VSCode config to disable popular extensions' annoyances (telemetry, notifications, welcome pages, etc.)
// I'm tired of extensions that automatically:
// - show welcome pages / walkthroughs
// - show release notes
// - send telemetry
// - recommend things
//
// This disables all of that stuff.
// If you have more config, leave a comment so I can add it!!
{
@kconner
kconner / macOS Internals.md
Last active November 19, 2024 20:21
macOS Internals

Understand your Mac and iPhone more deeply by tracing the evolution of Mac OS X from prerelease to Swift. John Siracusa delivers the details.

Starting Points

How to use this gist

You've got two main options:

@veekaybee
veekaybee / chatgpt.md
Last active October 30, 2024 08:38
Everything I understand about chatgpt

ChatGPT Resources

Context

ChatGPT appeared like an explosion on all my social media timelines in early December 2022. While I keep up with machine learning as an industry, I wasn't focused so much on this particular corner, and all the screenshots seemed like they came out of nowhere. What was this model? How did the chat prompting work? What was the context of OpenAI doing this work and collecting my prompts for training data?

I decided to do a quick investigation. Here's all the information I've found so far. I'm aggregating and synthesizing it as I go, so it's currently changing pretty frequently.

Model Architecture

@gmurdocca
gmurdocca / socat_caesar_dpi.md
Last active June 28, 2024 15:53
Circumventing Deep Packet Inspection with Socat and rot13

I have a Linux virtual machine inside a customer's private network. For security, this VM is reachable only via VPN + Citrix + Windows + a Windows SSH client (e.g. PuTTY). I am tasked with ensuring this Citrix design is secure and that users cannot access their Linux VMs or other resources on the internal private network in any way outside of Citrix.

The VM can access the internet. This task should be easy. The VM's internet gateway allows it to connect anywhere on the internet to TCP ports 80, 443, and 8090 only. Connecting to an internet bastion box on one of these ports works and I can send and receive clear text data using netcat. I plan to use good old SSH, listening on tcp/8090 on the bastion, with a reverse port forward configured to expose sshd on the VM to the public, to show their Citrix gateway can be circumvented.

Rejected by Deep Packet Inspection

I hit an immediate snag. The moment I try to establish an SSH or SSL connection over o
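
As an illustration (this sketch is not from the gist), rot13 rewrites every ASCII letter in the stream, so a signature rule looking for the literal SSH-2.0 banner no longer matches; the gist's actual pipeline applies the same idea to live traffic via socat:

import codecs

# Illustration only: the banner string is an example, not taken from the gist.
banner = "SSH-2.0-OpenSSH_9.6"
obfuscated = codecs.encode(banner, "rot13")
print(obfuscated)                           # FFU-2.0-BcraFFU_9.6 -- no "SSH-2.0" left to match
print(codecs.decode(obfuscated, "rot13"))   # round-trips back to the original banner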