Cypherpunk Samurai (CypherpunkSamurai)

@chavinlo
chavinlo / train.py
Last active July 29, 2023 20:15
HIVEMIND_DISTRIBUTED_TRAINING_12_9_2022_8_37PM
# Install bitsandbytes:
# `nvcc --version` to get CUDA version.
# `pip install -i https://test.pypi.org/simple/ bitsandbytes-cudaXXX` to install for current CUDA.
# Example Usage:
# Single GPU: torchrun --nproc_per_node=1 trainer/diffusers_trainer.py --model="CompVis/stable-diffusion-v1-4" --run_name="liminal" --dataset="liminal-dataset" --hf_token="hf_blablabla" --bucket_side_min=64 --use_8bit_adam=True --gradient_checkpointing=True --batch_size=1 --fp16=True --image_log_steps=250 --epochs=20 --resolution=768 --use_ema=True
# Multiple GPUs: torchrun --nproc_per_node=N trainer/diffusers_trainer.py --model="CompVis/stable-diffusion-v1-4" --run_name="liminal" --dataset="liminal-dataset" --hf_token="hf_blablabla" --bucket_side_min=64 --use_8bit_adam=True --gradient_checkpointing=True --batch_size=10 --fp16=True --image_log_steps=250 --epochs=20 --resolution=768 --use_ema=True
import argparse
import socket
import sys
import torch
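For context on the --use_8bit_adam=True flag in the commands above: it generally means the trainer builds its optimizer from bitsandbytes instead of torch.optim, roughly as in this sketch (the model and hyperparameters here are illustrative, not taken from the gist):

import torch
import bitsandbytes as bnb

model = torch.nn.Linear(8, 8)  # placeholder; the real trainer would pass the UNet's parameters
# 8-bit AdamW from bitsandbytes, used as a drop-in replacement for torch.optim.AdamW
optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=5e-6, weight_decay=1e-2)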
@CypherpunkSamurai
CypherpunkSamurai / Cool Snippet Storage Utilities
Created August 9, 2022 08:10
Cool Snippet Storage Utilities for Snippets #snippet-storage #code
# Cool Snippet Storage Utilities
* Gisto - https://web.gistoapp.com/
* Masscode - https://masscode.io/
* 3Cols - https://3cols.com/
* Cacher - https://www.cacher.io/
# What do I prefer?
I'm currently using Gisto, as it's pretty cool and uses GitHub Gists for storage. I also like the VS Code extension for Masscode and would like to try it.
@EcoBay
EcoBay / devuan-bootsrap.md
Last active March 17, 2024 12:21
Devuan bootstrap install
@pouyaardehkhani
pouyaardehkhani / Keep Google Colab from disconnecting markdown.md
Last active April 22, 2025 01:34
Keep Google Colab from disconnecting

Problem

I was training a deep learning model on Google Colab. I had to step away for an emergency, and when I came back I realized that Colab had disconnected, so I had to train the whole model again.

Solution

I searched the Internet for a solution and noticed that Colab disconnects if you don't interact with the page. Here is an answer I found (paste it into the browser's developer-tools console while the notebook tab is open):

function ClickConnect() {
  console.log('Working')
  // click the Connect button; the selector below may differ across Colab versions
  document.querySelector('colab-toolbar-button#connect').click()
}
setInterval(ClickConnect, 60000) // re-run every 60 seconds
from tqdm import tqdm
import requests
# file url
url = 'https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-stub-meta-history.xml.gz'  # file size is ~70.5 GB
# stream=True is required so the body is downloaded in chunks instead of all at once
response = requests.get(url, stream=True)
# total file size in bytes (0 if the server does not report it)
t = int(response.headers.get('content-length', 0))
block_size = 1024 ** 2  # 1 MiB
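The preview ends before the part that actually writes the file; a minimal continuation using the variables above might look like this (the output filename is simply taken from the URL):

# stream the download in 1 MiB chunks while updating a tqdm progress bar
filename = url.split('/')[-1]
with open(filename, 'wb') as f, tqdm(total=t, unit='iB', unit_scale=True) as bar:
    for chunk in response.iter_content(chunk_size=block_size):
        f.write(chunk)
        bar.update(len(chunk))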
import os
sublime_binary_path = "/tmp/sublime_text"  # FIXME: absolute path to a writable sublime_text binary
version_magic_string = "4126"
sz_magic_string = 4
version_magic_string_offset = 0x0002d78a # (Real offset from xxd)
is_file_read = os.access(sublime_binary_path, os.R_OK)
if not is_file_read:
    raise SystemExit(f"Cannot read {sublime_binary_path}")
@aronreisx
aronreisx / tutorial-rename-wsl-disto.txt
Created December 11, 2021 19:16
How to rename WSL Distro on Windows
To rename a WSL distro on Windows, follow these steps (a scripted sketch follows the list):
1. Stop all instances of WSL
In PowerShell, run: wsl --shutdown
2. Open Registry Editor and go to HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss
Each subkey under Lxss represents a distro
3. Locate and rename the distro you want
Inside each distro key there is a DistributionName value; change it and click OK
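Steps 2 and 3 can also be scripted. Here is a minimal Python sketch using the standard-library winreg module, to be run on Windows after wsl --shutdown; the old and new distro names are placeholders:

import winreg

LXSS = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss"
OLD_NAME, NEW_NAME = "Ubuntu", "Ubuntu-Renamed"  # placeholders

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, LXSS) as lxss:
    subkey_count = winreg.QueryInfoKey(lxss)[0]
    for i in range(subkey_count):
        guid = winreg.EnumKey(lxss, i)  # each GUID subkey is one registered distro
        with winreg.OpenKey(lxss, guid, 0, winreg.KEY_READ | winreg.KEY_SET_VALUE) as distro:
            name, _ = winreg.QueryValueEx(distro, "DistributionName")
            if name == OLD_NAME:
                winreg.SetValueEx(distro, "DistributionName", 0, winreg.REG_SZ, NEW_NAME)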
@jramiresbrito
jramiresbrito / wsl2_android_device_via_wifi_or_usb.md
Last active February 26, 2025 15:29
Connect Android devices to WSL2 over Wi-Fi or USB
@dmknght
dmknght / sublimetext_4121_crack.py
Last active January 19, 2022 05:37
Patch the binary of Sublime Text, amd64 Linux build 4121
import os
sublime_binary_path = "/tmp/sublime_text"
version_magic_string = "/updates/4/stable_update_check?version=4121&platform=linux&arch=x64"
sz_magic_string = 67
version_magic_string_offset = 0x000106bd # (Real offset from xxd)
is_file_read = os.access(sublime_binary_path, os.R_OK)
if not is_file_read:
    raise SystemExit(f"Cannot read {sublime_binary_path}")