Supreet Sethi (djinn) · GitHub Gists
djinn / multiget.js
Created June 24, 2024 07:35
How to do a multi-get with Couchbase SDK 4 in Node.js
const users = [
  { id: 'user_111', email: '[email protected]' },
  { id: 'user_222', email: '[email protected]' },
  { id: 'user_333', email: '[email protected]' },
]

// Wait for all the get operations to complete and store the results.
const getResults = await Promise.all(
  users.map((user) => {
    console.log(`Getting document: ${user.id}`)
    return usersCollection.get(user.id, user)
  })
)
djinn / read_benchmark.py
Created May 24, 2024 07:24
MongoDB vs. Couchbase read benchmark
import time
import random
from pymongo import MongoClient
from couchbase.cluster import Cluster
from couchbase.management.buckets import BucketManager
from couchbase.options import ClusterOptions
from couchbase.auth import PasswordAuthenticator
import os
import string
from tqdm import tqdm
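The preview stops at the imports. For the shape of the comparison, here is a minimal sketch of a read-loop benchmark; the connection strings, credentials, bucket and collection names, key scheme and counts are placeholder assumptions, not the gist's own values.

import time
import random

from pymongo import MongoClient
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

NUM_DOCS = 10000   # assumed corpus size, keys doc_0 .. doc_9999
NUM_READS = 50000  # assumed number of random reads per store

# MongoDB: fetch documents by _id from a test collection (placeholder names).
mongo_coll = MongoClient("mongodb://localhost:27017")["benchmark"]["docs"]

start = time.time()
for _ in range(NUM_READS):
    mongo_coll.find_one({"_id": f"doc_{random.randrange(NUM_DOCS)}"})
print(f"MongoDB: {NUM_READS / (time.time() - start):.0f} reads/sec")

# Couchbase: key-value gets from the default collection (placeholder names).
cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("Administrator", "password")))
cb_coll = cluster.bucket("benchmark").default_collection()

start = time.time()
for _ in range(NUM_READS):
    cb_coll.get(f"doc_{random.randrange(NUM_DOCS)}")
print(f"Couchbase: {NUM_READS / (time.time() - start):.0f} reads/sec")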
djinn / var.c++
Created August 23, 2023 06:18
Calculating Value at Risk (VaR) using pure C++
#include <iostream>
#include <vector>
#include <random>
#include <algorithm>
#include <numeric>
#include <thread>
#include <mutex>
const int num_simulations = 10000; // Number of Monte Carlo simulations
const int num_days = 252; // Number of trading days in a year
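The C++ gist is truncated after these constants. To illustrate the Monte Carlo approach itself, here is a minimal single-threaded Python sketch; only the simulation and day counts mirror the constants above, while the portfolio value, drift, volatility and confidence level are assumptions.

import random

NUM_SIMULATIONS = 10000        # Monte Carlo simulations, as above
NUM_DAYS = 252                 # trading days in a year, as above
PORTFOLIO_VALUE = 1_000_000.0  # assumed portfolio value
MU, SIGMA = 0.0002, 0.01       # assumed daily drift and volatility
CONFIDENCE = 0.95              # assumed confidence level

losses = []
for _ in range(NUM_SIMULATIONS):
    value = PORTFOLIO_VALUE
    for _ in range(NUM_DAYS):
        value *= 1.0 + random.gauss(MU, SIGMA)  # one simulated daily return
    losses.append(PORTFOLIO_VALUE - value)      # positive number = loss

losses.sort()
var_95 = losses[int(CONFIDENCE * NUM_SIMULATIONS)]  # loss at the chosen percentile
print(f"1-year VaR at {CONFIDENCE:.0%}: {var_95:,.2f}")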
djinn / example_transformer.py
Created August 10, 2023 04:18
Defines a Transformer using minimal dependencies
import numpy as np
import math
# Define the Transformer model architecture
class Transformer:
    def __init__(self, input_vocab_size, output_vocab_size, max_seq_length, d_model, num_heads, num_layers):
        self.input_vocab_size = input_vocab_size
        self.output_vocab_size = output_vocab_size
        self.max_seq_length = max_seq_length
        self.d_model = d_model
        self.num_heads = num_heads
        self.num_layers = num_layers
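The gist preview ends inside the constructor. As a taste of the minimal-dependency approach, here is a short numpy sketch of scaled dot-product attention, the core operation a Transformer like this builds on; the function and shapes are illustrative, not taken from the gist.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (seq_len, d_model) arrays.
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)     # pairwise similarity, scaled
    weights = softmax(scores, axis=-1)  # attention weights per query
    return weights @ v                  # weighted sum of values

# Tiny usage example with random data.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(q, k, v).shape)  # (4, 8)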
djinn / running_llm_on_aws.sh
Last active May 29, 2023 01:44
Running an LLM on AWS
#!/bin/sh
# The cheapest CUDA instances on AWS are the Graviton2-based g5g instances
# 1) Install the right CUDA drivers on the system. While this script has been tested on Ubuntu 22.04, it should work on other distributions
# 2) Install torch with GPU support. Compiling from source works best
# 3) Preferably Quantize the models - https://huggingface.co/docs/transformers/main/en/main_classes/quantization
#
# Author: Supreet Sethi <[email protected]>
# Web: https://www.linkedin.com/in/djinn
#
# Step 1) is here
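The shell steps are truncated after this point. As an illustration of step 3, here is a hedged Python sketch of loading a model with 8-bit quantization via transformers and bitsandbytes; the model identifier and prompt are placeholders, not part of the gist.

# Illustrative only: assumes torch (with CUDA), transformers, accelerate and
# bitsandbytes are installed; model name and prompt are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder model identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across the available GPU(s)/CPU
    load_in_8bit=True,   # bitsandbytes 8-bit quantization
)

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))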
djinn / sysbench_parser.py
Last active March 20, 2023 06:38
Useful in situations where you are tuning a system and comparing benchmarks
#!/usr/bin/env python3
from json import dumps
test_load = """SQL statistics:
    queries performed:
        read:                            845684
        write:                           241589
        other:                           120831
        total:                           1208104
    transactions:                        60398  (1506.89 per sec.)
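The preview cuts off inside the sample output. A minimal sketch of the kind of parsing the script presumably does, emitting JSON via the dumps import above, might look like this; the regex and field naming are assumptions, not the gist's own code.

import re
from json import dumps

def parse_sysbench(text):
    # Pull "name: value" pairs out of the sysbench summary, keeping any
    # per-second rate given in parentheses, e.g. "transactions: 60398 (1506.89 per sec.)".
    stats = {}
    for line in text.splitlines():
        m = re.match(r"\s*([\w\s.-]+?):\s+([\d.]+)(?:\s+\(([\d.]+) per sec\.\))?\s*$", line)
        if not m:
            continue
        name = m.group(1).strip().lower().replace(" ", "_")
        stats[name] = float(m.group(2))
        if m.group(3):
            stats[name + "_per_sec"] = float(m.group(3))
    return stats

sample = """\
        read:                            845684
        write:                           241589
        other:                           120831
        total:                           1208104
    transactions:                        60398  (1506.89 per sec.)
"""
print(dumps(parse_sysbench(sample), indent=2))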
djinn / yubikey.sh
Last active July 21, 2022 05:28
To reduce battery usage on a Mac when using a YubiKey Nano
#!/bin/sh
# Author Supreet Sethi <[email protected]>
# Dated Thu Jul 21 13:26:06 +08 2022
# Instructions - wget <script>; chmod +x yubikey.sh; ./yubikey.sh
# hibernatemode 25 writes RAM to disk and cuts power to memory on sleep;
# standbydelay 15 lets the machine enter standby 15 seconds after sleeping
sudo pmset -a hibernatemode 25 && sudo pmset -a standbydelay 15
echo "Yubikey hibernation changes installed"
djinn / math_error.cpp
Created August 30, 2021 05:52
A tiny program to show some floating-point errors.
#include <iostream>
#include <string>
#include <cinttypes> // for uint32_t
#include <cfenv>
#include <cmath>

std::string float_to_binary(float f) {
    union { float f; uint32_t i; } u;  // reinterpret the float's bits as an integer
    u.f = f;
    std::string bits;
    for (int b = 31; b >= 0; --b)
        bits += ((u.i >> b) & 1u) ? '1' : '0';
    return bits;
}
#!/usr/bin/env python3
# Copyright(C) 2020 Supreet Sethi <[email protected]>
# I do Hindi, Punjabi and Urdu in Python dude!
बोल = print  # alias print to the Hindi word for "speak"
बोल("ਬੋਲ ਕੇ ਲਾਬ ਆਜ਼ਾਦ ਹੈਂ ਤੇਰੇ - فیض احمد فیض")  # "Speak, for your lips are free" - Faiz Ahmed Faiz
djinn / document_processing.py
Created October 16, 2020 04:58
Parallel processing of CPU-intensive blocking tasks across all available CPUs
#!/usr/bin/env python3
# Author: Supreet Sethi <[email protected]>
# Dated: 16/10/2020
# Please treat it as a working prototype
# A lot more could be done from a process-management and general housekeeping perspective
from multiprocessing import Pool, cpu_count, Manager
from collections import namedtuple
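The gist is cut off after the imports. A minimal sketch of the Pool-based pattern it describes is below; the document record, the work function and the corpus are invented for illustration, not taken from the gist.

from collections import namedtuple
from multiprocessing import Pool, cpu_count

# Hypothetical document record and CPU-bound work function for illustration.
Document = namedtuple("Document", ["doc_id", "text"])

def process_document(doc):
    # Stand-in for a CPU-intensive task, e.g. parsing or feature extraction.
    return doc.doc_id, len(doc.text.split())

if __name__ == "__main__":
    docs = [Document(i, "some text to process " * 100) for i in range(1000)]
    # One worker per available CPU; map blocks until every document is processed.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(process_document, docs)
    print(f"Processed {len(results)} documents on {cpu_count()} CPUs")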