@danisla
danisla / README.md
Last active November 7, 2022 13:59
GKE GPU Sharing Daemonset

GPU Sharing on GKE DaemonSet

NOTE: This is not a Google-supported product.

Example Usage

  1. Create a GKE cluster with a GPU node pool:
gcloud container clusters create gpu-sharing-demo --zone us-central1-c
@c-bata
c-bata / lightgbm_rfe.py
Last active June 12, 2024 19:34
Recursive Feature Elimination for LightGBM. This class handles missing values and works with the Optuna LightGBM tuner.
import numpy as np
import pandas as pd

# The Optuna LightGBM integration is used as a drop-in replacement for plain lightgbm,
# so the tuner can be plugged into the elimination loop.
# import lightgbm as lgb
from optuna.integration import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.utils import check_X_y, safe_sqr
from sklearn.feature_selection import SelectorMixin  # older scikit-learn: sklearn.feature_selection.base
from lightgbm import Booster
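
The class body is truncated in this preview. As a rough sketch of the underlying idea, recursively retraining and dropping the lowest-importance features, here is a minimal, hypothetical illustration using plain lightgbm; the function name, parameters, and stopping rule are assumptions, not the gist's actual implementation:

import numpy as np
import pandas as pd
import lightgbm as lgb

def rfe_lightgbm(X: pd.DataFrame, y, n_features_to_select=10, step=1):
    """Recursively drop the weakest features by gain importance (illustrative sketch only)."""
    features = list(X.columns)
    while len(features) > n_features_to_select:
        train_set = lgb.Dataset(X[features], label=y)
        booster = lgb.train({"objective": "regression", "verbosity": -1}, train_set, num_boost_round=50)
        # Rank the surviving features by total gain and drop the weakest ones.
        n_drop = min(step, len(features) - n_features_to_select)
        order = np.argsort(booster.feature_importance(importance_type="gain"))
        dropped = {features[i] for i in order[:n_drop]}
        features = [f for f in features if f not in dropped]
    return features

LightGBM handles NaN inputs natively, which is why an elimination loop of this kind can accept missing values without imputation.
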
@tuxedocat
tuxedocat / Pok3r-macOS-_Karabiner-fn_.kbd.json
Last active June 28, 2020 04:24
Pok3r macOS (Karabiner fn)
[
  {
    "name": "Pok3r macOS (Karabiner fn)",
    "author": "tuxedocat",
    "switchMount": "cherry",
    "switchBrand": "cherry",
    "switchType": "MX3A-L1xx"
  },
  [
    {
@smnbbrv
smnbbrv / promisified-grpc-client.ts
Last active February 19, 2025 18:43
Promisify a @grpc/grpc-js service client with TypeScript
import { Client, ServiceError, Metadata, CallOptions, ClientUnaryCall } from '@grpc/grpc-js';
import { Message } from 'google-protobuf';

// Shape of a generated callback-style unary call on a @grpc/grpc-js client.
type OriginalCall<T, U> = (request: T, metadata: Metadata, options: Partial<CallOptions>, callback: (error: ServiceError, res: U) => void) => ClientUnaryCall;

// The same call with the callback replaced by a returned Promise.
type PromisifiedCall<T, U> = ((request: T, metadata?: Metadata, options?: Partial<CallOptions>) => Promise<U>);

// Maps every service method of client C to its promisified form; `$` keeps the original client available.
export type Promisified<C> = { $: C; } & {
  [prop in Exclude<keyof C, keyof Client>]: (C[prop] extends OriginalCall<infer T, infer U> ? PromisifiedCall<T, U> : never);
}

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case of RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much

import time
import os
import logging
import random
from datasets import load_dataset


class QuantAutoGPTQ:
    def __init__(self, model_name_or_path, output_dir, dataset,
                 num_samples=128, trust_remote_code=False, cache_examples=True,
                 use_fast=True, use_triton=False, bits=[4], group_size=[128], damp=[0.01],
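
No description survives for this snippet, and the constructor is cut off above. For orientation only, here is a minimal sketch of the GPTQ quantization flow that parameters such as bits, group_size, and damp typically map onto, assuming the auto_gptq and transformers packages; the function and its calibration handling are illustrative assumptions, not this script's actual logic:

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

def quantize_gptq(model_name_or_path, output_dir, calibration_texts, bits=4, group_size=128, damp=0.01):
    """Illustrative GPTQ quantization pass (assumes auto_gptq; not the code above)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
    # Calibration examples: tokenized text samples used to estimate quantization error.
    examples = [tokenizer(text, return_tensors="pt") for text in calibration_texts]
    quantize_config = BaseQuantizeConfig(bits=bits, group_size=group_size, damp_percent=damp)
    model = AutoGPTQForCausalLM.from_pretrained(model_name_or_path, quantize_config=quantize_config)
    model.quantize(examples)
    # Persist the quantized weights and the tokenizer together.
    model.save_quantized(output_dir)
    tokenizer.save_pretrained(output_dir)
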
@vaaaaanquish
vaaaaanquish / PythonのPackage Managerを深く知るためのリンク集.md
Last active March 9, 2025 00:01
A collection of links for understanding Python package managers in depth

A collection of links for understanding Python package managers in depth

Useful links for learning about Python package management, compiled for the following talk (2023/10/12).

It is not meant to cover best practices for Python development.
Rather, it is a link collection to fill in the background knowledge needed to build your own package manager or to contribute to one.

@kyo-takano
kyo-takano / making-the-most-of-local-llms.ipynb
Last active May 8, 2025 07:28
This is how you use local LLMs 💢