louisremi / animLoopX.js
Created July 29, 2011 17:34
Animation loop with requestAnimationFrame
// Cross browser, backward compatible solution
(function( window, Date ) {
    // feature testing for a vendor-prefixed requestAnimationFrame
    var raf = window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame ||
              window.msRequestAnimationFrame || window.oRequestAnimationFrame;
    window.animLoop = function( render, element ) {
        var running, lastFrame = +new Date;
        // (the gist preview is truncated here; the loop below sketches the rest)
        function loop( now ) {
            if ( running === false ) { return; }
            raf ? raf.call( window, loop, element ) : setTimeout( loop, 16 );
            now = now && now > 1e4 ? now : +new Date;
            running = render( now - lastFrame, now ); // return false to stop
            lastFrame = now;
        }
        loop();
    };
})( window, Date );
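For context, a render callback for the animLoop API above might look like this (the #box element and the 100px-per-second motion are assumptions for illustration, not part of the gist):

var box = document.getElementById( "box" ), elapsed = 0;
animLoop( function( deltaT ) {
    elapsed += deltaT;
    box.style.left = ( elapsed / 10 ) + "px"; // 100px per second
    return elapsed < 3000; // stop after 3 seconds
}, box );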
pascalpoitras / config.md
Last active October 7, 2024 01:35
My WeeChat configuration

[screenshot: WeeChat in action]

Mouse: enable
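(For reference, and as an inference on my part rather than the gist's truncated table: mouse support in WeeChat is toggled at runtime with /mouse enable, or persisted with /set weechat.look.mouse on.)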


DongDiddler / hueg.pl
Created June 28, 2014 21:28
hueg.pl with upside down crosses (^b)
#!/usr/bin/perl
######
# hueg.pl PRO MODE
# modded by ma0 and others
# respekts 2 jakk and others
######
use Irssi;
use vars qw($VERSION %IRSSI);
attacus / riot-matrix-workshop.md
Last active March 13, 2024 00:16
Create your own encrypted chat server with Riot and Matrix

This guide is unmaintained and was created for a specific workshop in 2017. It remains as a legacy reference. Use at your own risk.

Running your own encrypted chat service with Matrix and Riot

Workshop Instructor:

This workshop is distributed under a CC BY-SA 4.0 license.

What are we doing here?

keithpl / arch-linux-install-notes.md
Last active September 12, 2024 18:32
Arch Linux Installation Notes
  • UEFI
  • Systemd-boot (gummiboot)
  • Encrypted root partition (see the sketch after this list)
  • mkinitcpio
  • Unified Kernel Image (UKI)
  • Intel CPU
  • NVIDIA GPU
  • Wayland (sway)
  • Secure Boot (optional)
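As a minimal sketch of how the encrypted root, mkinitcpio, and the UKI fit together (the UUID is a placeholder, not a value from these notes): with the sd-encrypt hook, the LUKS mapping can be declared on the kernel command line that mkinitcpio embeds in the unified kernel image, e.g. in /etc/kernel/cmdline:

rd.luks.name=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX=root root=/dev/mapper/root rw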
rain-1 / llama-home.md
Last active November 9, 2024 03:49
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text prediction model similar to GPT-2, and to the version of GPT-3 that has not yet been fine-tuned. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
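A rough sketch of the steps being described (the model path and the choice of 32 layers are assumptions on my part; the LLAMA_CUBLAS and --n-gpu-layers flags match llama.cpp as of May 2023 and may have changed since):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# build with CUDA/cuBLAS support
make LLAMA_CUBLAS=1
# offload ~32 transformer layers to the GPU; tune this to your VRAM
./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 32 -p "Hello"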
khalidx / node-typescript-esm.md
Last active November 14, 2024 08:26
A Node + TypeScript + ts-node + ESM experience that works.

The experience of using Node.js with TypeScript, ts-node, and ESM is horrible.

There are countless guides on how to integrate them, but none of them seem to work.

Here's what worked for me.

Just add the following files and run npm run dev. You'll be good to go!

package.json
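The file contents are truncated in this preview. As a sketch of one setup along these lines (the fields below are assumptions, not the gist's verbatim files, and src/index.ts is a placeholder entry point), a package.json can pair "type": "module" with ts-node's ESM loader:

{
  "type": "module",
  "scripts": {
    "dev": "node --loader ts-node/esm src/index.ts"
  },
  "devDependencies": {
    "ts-node": "^10.9.0",
    "typescript": "^5.0.0"
  }
}

A matching tsconfig.json would set "module" and "moduleResolution" to NodeNext so that import specifiers line up with what Node's ESM loader expects.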