
Hegyi Áron Ferenc (Rettend)
🚩 Stand on the shoulders of giants

@matthewzring
matthewzring / markdown-text-101.md
Last active November 18, 2024 08:27
A guide to Markdown on Discord.

Markdown Text 101

Want to inject some flavor into your everyday text chat? You're in luck! Discord uses Markdown, a simple plain-text formatting system that'll help you make your sentences stand out. Here's how to do it: just add a few characters before and after the text you want to format. I'll show you some examples...
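For example, a few of the basic styles (these are standard Discord Markdown; the guide covers the full list):

*italics*    **bold**    ***bold italics***
__underline__    ~~strikethrough~~
`inline code`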

What this guide covers:

@josemmo
josemmo / repair-mysql-data.ps1
Created August 28, 2020 18:48
Repair MySQL data directory (for XAMPP)
# Based on this answer: https://stackoverflow.com/a/61859561/1956278
# Backup old data
Rename-Item -Path "./data" -NewName "./data_old"
# Create new data directory
Copy-Item -Path "./backup" -Destination "./data" -Recurse
Remove-Item "./data/test" -Recurse
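# Copy the user databases (everything except the system schemas excluded below) from the old data directory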
$dbPaths = Get-ChildItem -Path "./data_old" -Exclude ('mysql', 'performance_schema', 'phpmyadmin') -Recurse -Directory
Copy-Item -Path $dbPaths.FullName -Destination "./data" -Recurse

Vue-Windi-Capacitor App

This guide should help get you started developing with Vue.js and Windi CSS using Capacitor.

First things first

There are some prerequisites you need to have installed before you can start developing.

Node.js
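Once the prerequisites are installed, here is a hedged sketch of what the initial setup typically looks like with standard Vue, Windi CSS, and Capacitor tooling (the guide's exact package choices and commands are not shown in this preview):

# Scaffold a Vue 3 + Vite project, then add Windi CSS and Capacitor
npm create vue@latest
cd <your-app> && npm install
npm install -D windicss vite-plugin-windicss
npm install @capacitor/core
npm install -D @capacitor/cli
npx cap init
npm run build && npx cap add android && npx cap sync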

@rain-1
rain-1 / llama-home.md
Last active November 9, 2024 03:49
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text prediction model similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna; those versions are more focused on answering questions).

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6 GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
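A minimal sketch of the build-and-run flow described above, assuming a 4-bit quantized 13B model is already in ./models (flag names and the build switch may differ between llama.cpp revisions):

# Build with CUDA/cuBLAS support, then offload some transformer layers to the GPU.
# -ngl / --n-gpu-layers controls how many layers run on the GPU; tune it to fit 6 GB of VRAM.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_CUBLAS=1
./main -m ./models/13B/ggml-model-q4_0.bin -ngl 18 -p "Building a website can be done in 10 simple steps:"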
@cgsdev0
cgsdev0 / house_builder.sh
Created February 3, 2024 16:07
house builder pattern in bash
#!/usr/bin/env bash

function house_builder() {
  # floors,rooms,has_garage
  echo "0,0,0"
}

function set_field() {
  local f r g
  IFS=, read f r g
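  # Hedged sketch of how set_field might continue (the gist preview is truncated above):
  # overwrite the field named in $1 with $2, then echo the CSV record again so the
  # setters can be chained with pipes.
  case "$1" in
    floors)     f="$2" ;;
    rooms)      r="$2" ;;
    has_garage) g="$2" ;;
  esac
  echo "$f,$r,$g"
}

# Usage sketch: build up the house record by piping it through the setters.
house_builder | set_field floors 2 | set_field rooms 5 | set_field has_garage 1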
@Rettend
Rettend / Chainable Console
Last active July 25, 2024 15:24
Allows chaining console logs to print them to the same line
import process from 'node:process'

class ChainableConsole extends console.Console {
  #chain: this
  #isChaining: boolean = false
  #isFirstInChain: boolean = true

  constructor(stdout: NodeJS.WriteStream, stderr: NodeJS.WriteStream, ignoreErrors?: boolean) {
    super(stdout, stderr, ignoreErrors)
    this.#chain = this.createChain()
@karpathy
karpathy / add_to_zshrc.sh
Created August 25, 2024 20:43
Git Commit Message AI
# -----------------------------------------------------------------------------
# AI-powered Git Commit Function
# Copy paste this gist into your ~/.bashrc or ~/.zshrc to gain the `gcm` command. It:
# 1) gets the diff of the currently staged changes
# 2) sends it to an LLM to write the git commit message
# 3) allows you to easily accept, edit, regenerate, cancel
# But - just read and edit the code however you like
# the `llm` CLI util is awesome, can get it here: https://llm.datasette.io/en/stable/
gcm() {
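  # What follows is a hedged sketch of the loop the comments above describe, not
  # the gist's exact code. It assumes the `llm` CLI is installed and configured.
  local msg choice
  while true; do
    # 1) diff of staged changes, 2) piped to the LLM for a commit message
    msg=$(git diff --cached | llm "Write a concise, one-line git commit message for this diff:")
    echo "Proposed commit message:"
    echo "  $msg"
    # 3) accept, edit, regenerate, or cancel
    printf '[a]ccept, [e]dit, [r]egenerate, [c]ancel? '
    read -r choice
    case "$choice" in
      a) git commit -m "$msg"; break ;;
      e) git commit -e -m "$msg"; break ;;
      r) continue ;;
      c) echo "Commit cancelled."; break ;;
      *) echo "Unknown choice." ;;
    esac
  done
}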