#********************************************************************
#** _00-readme-freebsd-11.1-imac.txt
#** This note describes the process of installing FreeBSD 11.1
#** on an Intel iMac (early 2008)
#** xref _00-readme-freebsd-11.txt
#** last update 20170822.2034
#********************************************************************
---------------------------------------------------------------------
-- System Specifications:
---------------------------------------------------------------------
@wangii
wangii / muttrc
Last active March 18, 2022 10:38 — forked from yangxuan8282/muttrc
Hotmail template config for mutt. Just insert your mail account and password into "imap_user" and "imap_pass"; if 2-factor auth is enabled, generate an app password.
set ssl_starttls=yes
set ssl_force_tls=yes
set imap_user = '[email protected]'
set imap_pass = 'password_here'
set from = $imap_user
set use_from=yes
set realname='Your_Name'
set folder = imaps://imap-mail.outlook.com:993
set spoolfile = "+INBOX"
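
To also send mail through the same account, a matching SMTP block is commonly added. A minimal sketch, assuming Outlook's standard submission endpoint (smtp-mail.outlook.com on port 587 with STARTTLS) and that mutt expands $imap_user and $imap_pass here:

set smtp_url = "smtp://$imap_user@smtp-mail.outlook.com:587/"
set smtp_pass = $imap_pass
set record = "+Sent"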
@wangii
wangii / Thermoeconomics.md
Created January 28, 2023 20:42 — forked from clumma/Thermoeconomics.md
Thermoeconomics references
@wangii
wangii / LLM.md
Created March 29, 2023 07:20 — forked from rain-1/LLM.md
LLM Introduction: Learn Language Models

Purpose

Bootstrap knowledge of LLMs ASAP, with a bias/focus toward GPT.

Avoid being a link dump. Try to provide only valuable, well-tuned information.

Prelude

Neural network links before starting with transformers.

@wangii
wangii / macOS Internals.md
Created May 5, 2023 20:24 — forked from kconner/macOS Internals.md
macOS Internals

Understand your Mac and iPhone more deeply by tracing the evolution of Mac OS X from prerelease to Swift. John Siracusa delivers the details.

Starting Points

How to use this gist

You've got two main options:

@wangii
wangii / llama-home.md
Created May 14, 2023 13:29 — forked from rain-1/llama-home.md
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

It is possible to run LLaMA 13B with a 6GB graphics card now (e.g. an RTX 2060), thanks to the amazing work that has gone into llama.cpp. The latest change adds CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to be run on the GPU. This is perfect for low VRAM.
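
As a taste of the end result, a hedged sketch of the final invocation, assuming the offload count is selected with -ngl / --n-gpu-layers (the flag the cuBLAS change added around this commit). The model path and layer count are illustrative placeholders; a 6GB card fits only part of the 13B model's 40 layers.

  # offload 32 of the 40 transformer layers to the GPU (tune for your VRAM)
  ./main -m ./models/13B/ggml-model-q4_0.bin -ngl 32 -p "The meaning of life is"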

  • Get llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
  • Use the link at the bottom of the page to apply for research access to the llama model: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/
  • Set up a micromamba environment with CUDA/Python/PyTorch in order to run the conversion scripts, and install some packages:
    • micromamba install -c conda-forge -n mymamba pytorch transformers sentencepiece
  • Perform the conversion process: (This will produce a file called `ggml-model