@celsowhite
celsowhite / shopify-nodejs-file-create-flow.js
Last active September 30, 2025 19:30
Shopify Files API
/*------------------------
Libraries
------------------------*/
const axios = require("axios");
const fs = require("fs");
const FormData = require("form-data");
/*------------------------
Download the file.
Good article on how to download a file and send with form data - https://maximorlov.com/send-a-file-with-axios-in-nodejs/
------------------------*/
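The gist preview cuts off here. As a hedged sketch of where the flow typically goes next, the snippet below follows Shopify's documented staged-upload sequence (the `stagedUploadsCreate` and `fileCreate` Admin API mutations): download the source file, request a staged upload target, POST the form data, then register the upload with the Files API. `SHOP`, `API_VERSION`, `ACCESS_TOKEN`, and `uploadFile` are illustrative placeholders, not names from the gist.

```typescript
import axios from "axios";
import FormData from "form-data";

// Placeholder configuration (not from the gist).
const SHOP = "your-store.myshopify.com";
const API_VERSION = "2024-01";
const ACCESS_TOKEN = process.env.SHOPIFY_ACCESS_TOKEN ?? "";

// Small helper for Admin GraphQL calls.
const graphql = (query: string, variables: object) =>
  axios.post(
    `https://${SHOP}/admin/api/${API_VERSION}/graphql.json`,
    { query, variables },
    { headers: { "X-Shopify-Access-Token": ACCESS_TOKEN } }
  );

async function uploadFile(sourceUrl: string, filename: string, mimeType: string) {
  // 1. Download the source file into memory.
  const download = await axios.get(sourceUrl, { responseType: "arraybuffer" });

  // 2. Ask Shopify for a staged upload target.
  const staged = await graphql(
    `mutation ($input: [StagedUploadInput!]!) {
      stagedUploadsCreate(input: $input) {
        stagedTargets { url resourceUrl parameters { name value } }
      }
    }`,
    { input: [{ filename, mimeType, resource: "FILE", httpMethod: "POST" }] }
  );
  const target = staged.data.data.stagedUploadsCreate.stagedTargets[0];

  // 3. POST the file to the staged target, echoing back the returned parameters.
  const form = new FormData();
  for (const p of target.parameters as { name: string; value: string }[]) {
    form.append(p.name, p.value);
  }
  form.append("file", Buffer.from(download.data), { filename });
  await axios.post(target.url, form, { headers: form.getHeaders() });

  // 4. Register the uploaded file with the Files API.
  const created = await graphql(
    `mutation ($files: [FileCreateInput!]!) {
      fileCreate(files: $files) {
        files { id }
        userErrors { field message }
      }
    }`,
    { files: [{ originalSource: target.resourceUrl }] }
  );
  return created.data.data.fileCreate;
}
```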
@employee451
employee451 / resolveSanityGraphqlReferences.ts
Last active May 7, 2024 10:17
Resolve references in Sanity data (GraphQL)
import { groq } from 'next-sanity' // Replace with another library if you don't use Next.js
import { client } from './client' // Replace this with wherever you have set up your client
/**
* Resolves all unresolved references when fetching from the Sanity GraphQL API.
* There is no native way to do this, so we have to do it manually.
* @param data
*/
export const resolveSanityGraphqlReferences = async <T = unknown>(
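The preview is truncated mid-signature. The sketch below is one plausible shape for such a resolver, assuming unresolved references come back as bare `{ _ref }` stubs that can be fetched by `_id` through the configured client; `isReference` and the GROQ query are illustrative assumptions, not the gist's actual implementation.

```typescript
import { groq } from "next-sanity";
import { client } from "./client";

// Assumed shape of an unresolved reference in the returned data.
type Reference = { _ref: string };

const isReference = (value: unknown): value is Reference =>
  typeof value === "object" && value !== null && "_ref" in value;

// Walk the fetched data; whenever a reference stub is found, fetch the
// referenced document and recurse into it as well. Note: circular
// references would recurse forever; a visited-id set would guard that.
export const resolveSanityGraphqlReferences = async <T = unknown>(
  data: T
): Promise<T> => {
  if (Array.isArray(data)) {
    return (await Promise.all(
      data.map((item) => resolveSanityGraphqlReferences(item))
    )) as T;
  }
  if (isReference(data)) {
    const doc = await client.fetch(groq`*[_id == $id][0]`, { id: data._ref });
    return (await resolveSanityGraphqlReferences(doc)) as T;
  }
  if (typeof data === "object" && data !== null) {
    const entries = await Promise.all(
      Object.entries(data).map(
        async ([key, value]) =>
          [key, await resolveSanityGraphqlReferences(value)] as const
      )
    );
    return Object.fromEntries(entries) as T;
  }
  return data;
};
```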
@yoavg
yoavg / instruct-to-not-hallucinate.md
Created September 9, 2024 20:23
Is telling a model to "not hallucinate" absurd?

Can you tell an LLM "don't hallucinate" and expect it to work? My gut reaction was "oh, this is so silly," but upon some reflection, it really isn't. There is actually no reason why it shouldn't work, especially if the model was preference-fine-tuned on instructions containing "don't hallucinate", and if it is a recent commercial model, it likely was.

What does an LLM need in order to follow an instruction? It needs two things:

  1. an ability to perform the task: something in its parameters/mechanism should be indicative of the task objective, in a way that can be influenced. (In our case, it should "know" when it hallucinates, and/or should be able to change or adapt its behavior to reduce the chance of hallucinations.)
  2. an ability to ground the instruction: the model should be able to associate the requested behavior with its parameters/mechanisms. (In our case, the model should associate "don't hallucinate" with the behavior described in (1).)
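For concreteness, this is all a "don't hallucinate" instruction amounts to at the API level: a sentence of text in the system prompt. A toy sketch follows; the model name, the prompt wording, and the use of the OpenAI Node client are illustrative assumptions, not from the gist.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The "instruction" is just text; whether it changes behavior depends on
// conditions (1) and (2) above being met by the model's training.
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content:
        "Answer the user's question. Do not hallucinate: if you are not " +
        "sure of a fact, say you don't know rather than guessing.",
    },
    { role: "user", content: "Who won the 1998 Fields Medal?" },
  ],
});

console.log(response.choices[0].message.content);
```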