Mario Klingemann (Quasimondo)

@Quasimondo
Quasimondo / gist:1b49d32c0e8f6031a44384d8bba207e4
Created July 5, 2024 15:07
Claude Proposal for a fair artist compensation system
Designing a fair and implementable compensation model for artists in the age of AI is indeed complex, but here's a proposed framework that attempts to balance various interests:
1. Training Data Licensing:
- Establish a system where artists can opt-in to license their work for AI training.
- Companies using artworks for AI training pay into a collective licensing pool.
- Payment rates based on factors like usage frequency, uniqueness, and influence on AI outputs.
@Quasimondo
Quasimondo / rpi_direct_fb_output.py
Last active October 21, 2024 03:58
Writing directly to the Raspberry PI framebuffer from Python (no GUI or X required)
# After a lot of searching and false or complicated leads I found this brilliant method
# that allows you to use a numpy array to get direct read/write access to the rpi framebuffer
# https://stackoverflow.com/questions/58772943/how-to-show-an-image-direct-from-memory-on-rpi
# I thought it was worth sharing since it might save someone else some research time
#
# The only caveat is that you will have to run this as root (sudo python yourscript.py),
# but you can get around this by adding the current user to the "video" group like this:
# usermod -a -G video [user]
# source: https://medium.com/@avik.das/writing-gui-applications-on-the-raspberry-pi-without-a-desktop-environment-8f8f840d9867
#
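The preview above shows only the comments; a minimal sketch of the memory-mapping approach it describes might look like the following (the resolution and 32-bit pixel format are assumptions, so check /sys/class/graphics/fb0/virtual_size and bits_per_pixel for your display):
import numpy as np
# Assumed geometry: 1920x1080 at 32 bits per pixel (BGRA ordering on most setups).
h, w = 1080, 1920
fb = np.memmap("/dev/fb0", dtype=np.uint8, mode="r+", shape=(h, w, 4))
fb[:, :, :3] = 0                    # clear the screen to black
fb[100:300, 100:300, 2] = 255       # draw a red square (channel 2 = red in BGRA)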
@Quasimondo
Quasimondo / sd15_vae_merge.py
Created October 22, 2022 15:23
Quick script to merge finetuned StabilityAI autoencoder into RunwayML Stable Diffusion 1.5 checkpoint
import torch
#USE AT YOUR OWN RISK
#local path to runwayML SD 1.5 checkpoint (https://huggingface.co/runwayml/stable-diffusion-v1-5)
ckpt_15 = "./v1-5-pruned-emaonly.ckpt"
#local path to StabilityAI finetuned autoencoder (https://huggingface.co/stabilityai/sd-vae-ft-mse)
ckpt_vae = "./vae-ft-mse-840000-ema-pruned.ckpt"
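The preview stops before the merge itself; a rough sketch of the usual approach, assuming the standard SD 1.5 checkpoint layout where VAE weights live under the "first_stage_model." prefix, would be:
# Sketch only, not the gist's exact code; use at your own risk.
sd = torch.load(ckpt_15, map_location="cpu")
vae = torch.load(ckpt_vae, map_location="cpu")
vae_sd = vae["state_dict"] if "state_dict" in vae else vae
for k, v in vae_sd.items():
    if k.startswith("loss."):
        continue  # skip training-only loss/discriminator weights if present
    sd["state_dict"]["first_stage_model." + k] = v  # overwrite the checkpoint's VAE weights
torch.save(sd, "./v1-5-pruned-emaonly-vae-ft-mse.ckpt")  # hypothetical output path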
import json
from urllib.request import urlopen
def get_query_result(query, timeout=10):
    with urlopen(query, timeout=timeout) as request:
        if request.status == 200:
            return json.loads(request.read().decode())
# Get the list of users allowed to vote
hen_users = get_query_result("https://vote.hencommunity.quest/hen-users-snapshot-16-01-2022.json")
I am attesting that this GitHub handle quasimondo is linked to the Tezos account tz1hb9PiWxQEf6J9xevPsUM6dkuCLnhDMvsp for tzprofiles
sig:edsigtoekSaBkHWoVT4WcedEWrgHS2jfG1tsaqUGH3o9ULoFn8yzEMsAa7JsZbXwu4iKPepv98hKX2QSwi1KMkn1YX8B3tSC4be
@Quasimondo
Quasimondo / gist:71d9eb865210cd7e66e4690c28c5e72c
Created July 26, 2021 19:00
Install CUDA 11.1 on Ubuntu 20.04
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt upgrade
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.1.0/local_installers/cuda-repo-ubuntu2004-11-1-local_11.1.0-455.23.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-1-local_11.1.0-455.23.05-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-1-local/7fa2af80.pub
sudo apt-get update
@Quasimondo
Quasimondo / hic_et_nunc_get_all_token_data.py
Created April 2, 2021 12:03
Some basic code to retrieve hic et nunc token data from better-call.dev
import os
import pickle
import requests
#download cached token data here:
#https://drive.google.com/file/d/1g_4w_Re5Y0NmcS2Y55WQzESWDeL2dey6/view?usp=sharing
#and put it into the same folder as this file
cachedTokenData = {"maxTokenID":-1,"knownTokenIds":{},"data":[]}
if os.path.exists("cached_token_data.pickle"):
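    # assumed continuation (not the gist's exact code): load the cached token data
    with open("cached_token_data.pickle", "rb") as f:
        cachedTokenData = pickle.load(f)
# Tokens with IDs above cachedTokenData["maxTokenID"] would then be fetched from the
# better-call.dev API and appended to cachedTokenData["data"] before re-pickling the cache.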
@Quasimondo
Quasimondo / hic_et_nunc_basic_scraper.py
Created March 10, 2021 11:23
This is a very basic no-frills scraper to retrieve the metadata and digital assets from all tokens minted on hicetnunc.xyz. I share this as a starting point for people who want to experiment with building alternative views on the works created on the platform or preserve the data. Feel free to improve upon this or add additional features.
import requests
import os
import ipfsApi
api = ipfsApi.Client(host='https://ipfs.infura.io', port=5001)
url = "https://better-call.dev/v1/contract/mainnet/KT1RJ6PbjHpwc3M5rw5s2Nbmefwbuwbdxton/tokens"
r = requests.get(url)
data = r.json()
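The preview ends after the first API call; a minimal sketch of fetching an asset from IPFS via a public gateway might look like the following (the "artifact_uri" key name is a guess, so inspect `data` to confirm the actual field names):
def fetch_ipfs(uri, out_path):
    # turn an ipfs://<cid> URI into a gateway URL and save the file to disk
    cid = uri.replace("ipfs://", "")
    r = requests.get("https://ipfs.io/ipfs/" + cid, timeout=60)
    r.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(r.content)
# e.g. fetch_ipfs(token["artifact_uri"], os.path.join("assets", str(token["token_id"])))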
'''
Usage:
python quickwatermark.py [path to folder that contains images to watermark]
This will go through all the files in that folder, try to open them and add
the filename as text on top of the image. The watermarked images will be stored
in a subfolder of the chosen folder called "watermarked"
'''
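The script body is not shown in this preview; a minimal sketch of the described behaviour using Pillow (an assumed library choice, not the gist's exact code) could look like:
import os
import sys
from PIL import Image, ImageDraw

def watermark_folder(folder):
    # write watermarked copies into a "watermarked" subfolder of the given folder
    out_dir = os.path.join(folder, "watermarked")
    os.makedirs(out_dir, exist_ok=True)
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        try:
            img = Image.open(path).convert("RGB")
        except Exception:
            continue  # skip files that cannot be opened as images
        draw = ImageDraw.Draw(img)
        draw.text((10, 10), name, fill=(255, 255, 255))  # use the filename as watermark text
        img.save(os.path.join(out_dir, name))

if __name__ == "__main__":
    watermark_folder(sys.argv[1])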
@Quasimondo
Quasimondo / gist:7e1068e488e20f194d37ba80696b55d8
Last active December 9, 2023 09:17
A possible fix for "failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device"
This is a dreaded error that seems to pop up its ugly head again and again, in particular after upgrading CUDA or TensorFlow.
Typically it looks like this:
2020-12-30 17:31:40.829615: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2020-12-30 17:31:42.149768: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-12-30 17:31:42.150368: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2020-12-30 17:31:42.176643: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Here is a solution that currently seems to work on my system
with CUDA 11.0 and TensorFlow 2.4.0; you can try it if all the