@obycode
obycode / bootup.md
Last active March 8, 2024 14:12
Epoch 3 testnet boot up
  1. The stacks-node mines blocks up to the epoch 2.5 height, at which point pox-4 is deployed (along with the other new boot contracts)
  • Look for:
    Applying epoch transition, new_epoch_id: 2.5, old_epoch_id: 2.4
    
  • Note: This happens during reward cycle 6 with the default test config (20-block cycles with 5-block prepare phases)
  2. Before the prepare phase starts, stackers must successfully call stack-stx (see the sketch after the log excerpt below)
  • Look for something similar to:
    Contract-call successfully processed, txid: adc91171be4f7edb614f80d3707f75f3cc29650f88ff791133f55ebd71e24108, origin: STRYYQQ9M8KAF4NS7WNZQYY59X93XEKR31JP64CP, origin_nonce: 0, contract_name: ST000000000000000000002AMW42H.pox-4, function_name: stack-stx, function_args: [u1000080000000000, (tuple (hashbytes 0x31ef5ee9a226a792b93f2bfbfbc54f523eba7818) (version 0x00)), u109, u2, (some 0x331cb6e41dcb335f6851bb42e6dc39816ad4a2fe3bda4c8836f43e51fec9c2e401a35ef7b676af27214716ce8e22e57fdc60b1a29f087b031a6486e2989d5fcc01), 0x038e3c4529395611be9abf6fa3b6
    
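A rough sketch of what such a stack-stx call looks like from Stacks.js (TypeScript). This is an illustration, not the testnet's actual tooling: the node URL, sender key, signer signature, and signer key are hypothetical placeholders, and the remaining values mirror the log excerpt above.

import {
  makeContractCall, broadcastTransaction, uintCV, tupleCV, bufferCV, someCV, AnchorMode,
} from '@stacks/transactions';
import { StacksTestnet } from '@stacks/network';

const network = new StacksTestnet({ url: 'http://localhost:20443' }); // assumed local node
const stackerPrivateKey = '<stacker-private-key>'; // placeholder
const signerSignature = Buffer.alloc(65);          // placeholder: 65-byte signer signature
const signerPublicKey = Buffer.alloc(33);          // placeholder: 33-byte signer key (truncated in the log)

const tx = await makeContractCall({
  contractAddress: 'ST000000000000000000002AMW42H',
  contractName: 'pox-4',
  functionName: 'stack-stx',
  functionArgs: [
    uintCV(1000080000000000n),                      // amount-ustx, as in the log
    tupleCV({                                       // PoX reward address
      version: bufferCV(Buffer.from('00', 'hex')),
      hashbytes: bufferCV(Buffer.from('31ef5ee9a226a792b93f2bfbfbc54f523eba7818', 'hex')),
    }),
    uintCV(109),                                    // start burn height
    uintCV(2),                                      // lock period, in reward cycles
    someCV(bufferCV(signerSignature)),              // signer signature
    bufferCV(signerPublicKey),                      // signer key
  ],
  senderKey: stackerPrivateKey,
  network,
  anchorMode: AnchorMode.Any,
});
await broadcastTransaction(tx, network);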
@veekaybee
veekaybee / normcore-llm.md
Last active November 18, 2024 19:40
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts

Pre-Transformer Models

@adrienbrault
adrienbrault / llama2-mac-gpu.sh
Last active August 15, 2024 07:10
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
# (The preview is truncated upstream; the rest is a sketch. The URL assumes
# TheBloke's GGML conversion on Hugging Face.)
if [ ! -f "models/$MODEL" ]; then
  curl -L "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/$MODEL" -o "models/$MODEL"
fi
# Run interactively; -ngl offloads layers to the Metal GPU
./main -m "./models/$MODEL" -ngl 1 --color -i
@rain-1
rain-1 / llama-home.md
Last active November 9, 2024 03:49
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text prediction model similar to GPT-2 and to the version of GPT-3 that has not yet been fine-tuned. It should also be possible to run fine-tuned versions with this (such as Alpaca or Vicuna, which are more focused on answering questions).

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
@ericlewis
ericlewis / .directions.md
Last active October 25, 2023 18:25
A conversational chatbot experience.

Prerequisites

  • API key for OpenAI
  • API key for Picovoice
  • API key for ElevenLabs
  • mpg123 installed
  • node 18+

Directions

  • git clone https://gist.github.com/ericlewis/ccd3f0b7a17fcbe2473121a473082c8f
  • edit .env with your keys (a sketch of the resulting pipeline follows below)
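
The gist's code is not shown in this preview, but the prerequisites make the shape of the pipeline clear. Below is a minimal sketch of how the pieces might be glued together in Node 18+/TypeScript; it is not the gist's actual code. The model name, voice ID, and environment variable names are assumptions, and the Picovoice wake-word step is omitted.

import OpenAI from 'openai';
import { spawn } from 'node:child_process';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function respond(userText: string): Promise<void> {
  // 1. Ask OpenAI for a reply.
  const chat = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo', // assumed model
    messages: [{ role: 'user', content: userText }],
  });
  const reply = chat.choices[0]?.message?.content ?? '';

  // 2. Synthesize speech with ElevenLabs ('<voice-id>' is a placeholder).
  const audio = await fetch('https://api.elevenlabs.io/v1/text-to-speech/<voice-id>', {
    method: 'POST',
    headers: {
      'xi-api-key': process.env.ELEVENLABS_API_KEY ?? '', // assumed variable name
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ text: reply }),
  });

  // 3. Pipe the MP3 bytes into mpg123 ('-' means read from stdin).
  const player = spawn('mpg123', ['-'], { stdio: ['pipe', 'ignore', 'ignore'] });
  player.stdin?.end(Buffer.from(await audio.arrayBuffer()));
}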
@steveruizok
steveruizok / rng.ts
Last active July 22, 2023 00:31
Seeded random number generator in TypeScript.
/**
* Seeded random number generator, using [xorshift](https://en.wikipedia.org/wiki/Xorshift).
* Adapted from [seedrandom](https://github.com/davidbau/seedrandom).
* @param seed {string} The seed for random numbers.
*/
function rng(seed = '') {
let x = 0
let y = 0
let z = 0
let w = 0
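The preview cuts off here. For reference, a hedged completion of the generator, following the xorshift128 algorithm that seedrandom uses (my reconstruction, not necessarily the gist's exact code):

function rng(seed = '') {
  let x = 0
  let y = 0
  let z = 0
  let w = 0

  // One xorshift128 step; returns a float in [0, 1)
  function next() {
    const t = x ^ (x << 11)
    x = y
    y = z
    z = w
    w ^= ((w >>> 19) ^ t ^ (t >>> 8)) >>> 0
    return w / 0x100000000
  }

  // Mix the seed's character codes into the state
  for (let k = 0; k < seed.length + 64; k++) {
    x ^= seed.charCodeAt(k) | 0
    next()
  }

  return next
}

const random = rng('hello')
random() // deterministic: the same seed always yields the same sequence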
@GetVladimir
GetVladimir / Force-RGB-Color-on-M1-Mac.md
Last active November 15, 2024 14:00
Force RGB Color on M1 Mac

How to Force RGB Color Output instead of YPbPr on your M1 Apple Silicon Mac for an External Monitor.

This step-by-step video tutorial will guide you through the procedure of forcing RGB color output on your M1 Mac.

Here is the direct link to the video tutorial: https://www.youtube.com/watch?v=Z1EqH3fd0V4

The video also has Closed Captions (subtitles) that you can enable to make it easier to follow if needed.

@sindresorhus
sindresorhus / esm-package.md
Last active November 17, 2024 22:07
Pure ESM package

The package that linked you here is now pure ESM. It cannot be require()'d from CommonJS.

This means you have the following choices:

  1. Use ESM yourself. (preferred)
    Use import foo from 'foo' instead of const foo = require('foo') to import the package. You also need to put "type": "module" in your package.json, among other changes; follow the guide below.
  2. If the package is used in an async context, you could use await import(…) from CommonJS instead of require(…) (see the sketch after this list).
  3. Stay on the existing version of the package until you can move to ESM.
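
A quick illustration of options 1 and 2, using the placeholder package name foo from the text:

// Option 1 (preferred): consume the package from ESM.
// Your own package.json needs "type": "module" (or use an .mjs/.mts file).
import foo from 'foo'; // instead of: const foo = require('foo')

// Option 2: from CommonJS, load the ESM package with dynamic import()
// inside an async context (import() returns a promise of the module namespace):
async function useFoo() {
  const { default: foo } = await import('foo');
  foo();
}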
@nth-commit
nth-commit / VariadicPipe.ts
Last active September 29, 2022 21:47
Function composition in TypeScript 4.1 (on top of ixjs's pipe)
import { OperatorFunction } from 'ix/interfaces';
import { pipe } from 'ix/iterable';
import { map } from 'ix/iterable/operators';
/**
* Creates a new type which is the first element of a non-empty tuple type.
*
* @example type T = Head<[string, number, Object]>; // string
*/
export type Head<Ts extends [any, ...any[]]> = Ts extends [infer T, ...any[]] ? T : never;
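
The preview ends at Head. For context, a variadic pipe built this way typically pairs it with a Tail type so each operator's output type can be threaded into the next operator's input; this companion is my illustration, not necessarily the gist's code:

/**
 * The rest of a non-empty tuple type after its first element.
 *
 * @example type T = Tail<[string, number, Object]>; // [number, Object]
 */
export type Tail<Ts extends [any, ...any[]]> = Ts extends [any, ...infer Rest] ? Rest : never;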