Bishal Santra (bsantraigi)

@bsantraigi
bsantraigi / boot_fix_linux.md
Created July 28, 2024 05:19
Recreating /boot and /boot/efi partition of a broken installation

Recreating or fixing the /boot and /boot/efi partition of a broken installation

  • Situations
    • Deleted /boot partition by mistake or intentionally.
    • /boot partition got corrupted
    • Failed update
  • Commands are for an Arch Linux installation, so you might need to find the equivalents for other distros.

After chroot-ing into the installation from a live ISO (see my other gist), the typical recovery steps are sketched below.
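A minimal sketch of the usual Arch recovery commands, run inside the chroot; the device name /dev/sda1 for the EFI system partition, the mount points, and the package set are assumptions, so adjust them to your layout (mount a separate /boot partition first if you have one):

mount /dev/sda1 /boot/efi                      # mount the (re)created EFI system partition (device is an assumption)
pacman -S linux linux-firmware                 # reinstall the kernel so /boot/vmlinuz-linux and the initramfs are recreated
mkinitcpio -P                                  # rebuild the initramfs images for all installed kernel presets
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg           # regenerate the GRUB configuration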

@bsantraigi
bsantraigi / Encrypted_Linux_Setup.md
Last active July 28, 2024 05:20
LVM on LUKS | Linux Installation with Full Encryption | Any Distro

LVM on LUKS installation process

Shortlink: https://tinyurl.com/lvm-luks

  • Custom partitioning, full system encryption, LVM on LUKS, and booting with GRUB2.
  • Common instructions for all distributions.

Format and partition your disk

Target Installation Disk: /dev/sda (yours may be different)
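A rough sketch of the format step, assuming /dev/sda1 will be the EFI system partition and /dev/sda2 the encrypted container; device names, the volume group name vg0, and sizes are placeholders, not fixed parts of the process:

mkfs.fat -F32 /dev/sda1                       # EFI system partition
cryptsetup luksFormat /dev/sda2               # create the LUKS container
cryptsetup open /dev/sda2 cryptlvm            # unlock it as /dev/mapper/cryptlvm
pvcreate /dev/mapper/cryptlvm                 # LVM physical volume on top of LUKS
vgcreate vg0 /dev/mapper/cryptlvm             # volume group
lvcreate -L 8G vg0 -n swap                    # swap logical volume (size is a placeholder)
lvcreate -l 100%FREE vg0 -n root              # root logical volume
mkfs.ext4 /dev/vg0/root                       # filesystem for root
mkswap /dev/vg0/swap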

@bsantraigi
bsantraigi / timer.py
Last active March 13, 2023 06:51
GAMIFIER: a credit timer system. Run the stopwatch while doing the difficult task (reading); start the timer to spend the earned time on the easy, addictive task (coding).
#!/usr/bin/python3
# A simple timer with the following features:
# 1. It will beep when the timer is done
# 2. Press S to start the stopwatch and accumulate time
# 3. Press S again to stop the stopwatch and print the accumulated time
# 4. Press T to start the timer and spend from the accumulated time
# 5. Press T again to stop the timer and print the accumulated time
#
# Author: Bishal Santra (http://bsantraigi.github.io) MIT License
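# The gist preview ends at the header above; the code below is only a minimal
# sketch of the credit idea (an assumption, not the gist's actual implementation):
# time spent in stopwatch mode earns credit, timer mode spends it.
import time

credit = 0.0  # seconds of credit earned by doing the difficult task
while True:
    key = input("[s] stopwatch (earn)  [t] timer (spend)  [q] quit > ").strip().lower()
    if key == "q":
        break
    start = time.time()
    input("Press Enter to stop...")
    elapsed = time.time() - start
    if key == "s":
        credit += elapsed   # reading time earns credit
    elif key == "t":
        credit -= elapsed   # coding time spends credit
        if credit < 0:
            print("\a", end="")  # beep once the credit runs out
    print(f"Accumulated credit: {credit:.1f} s")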
@bsantraigi
bsantraigi / top-k-top-p-batched.py
Last active October 7, 2024 22:51
Batched top-k and top-p/nucleus sampling in PyTorch!
def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')):
""" Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
Args:
logits: logits distribution shape (vocabulary size)
top_k >0: keep only top k tokens with highest probability (top-k filtering).
top_p >0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
Basic outline taken from https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
"""
@bsantraigi
bsantraigi / algo-a6-2021-life-is-strange-dp.cpp
Last active October 7, 2021 13:28
DP Assignment 2021: Life is Strange
#include <iostream>
#include <cstring>
#include <cmath>
#include <cstdlib>
#include <vector>
#include <ctime>
using namespace std;
const int inf = 1e6;
@bsantraigi
bsantraigi / chem-lab-dp.cpp
Created September 17, 2021 05:22
Solution to Algo DP - 2020
#include <iostream>
using namespace std;
int main(){
    // IO:
    cout << "Enter N and C:" << endl;
    int N, C;
    cin >> N;
    cin >> C;
#!/bin/bash
# Syncs the content of the current folder with onedrive. **Upload only**
rclone copy "./" "onedrive:/Projects Backups/" -P
@bsantraigi
bsantraigi / git_backup_repo.sh
Created January 5, 2021 19:20
Create a backup of a GitHub repo as a .bundle file. It will contain all remote branches and the full commit history.
# How to run
# chmod +x git_backup_repo.sh
# ./git_backup_repo.sh https://github.com/bsantraigi/MyRepo.git
REPO=$1
echo "Backup: $REPO"
DIR=$(grep -o -e "[^/]*\.git$" <<< "$REPO")
# DIR=$?
echo "Repo Cloned To: $DIR"
#!/bin/bash
#$ -N qtest
#$ -j y # Merge stderr into the standard output stream
#$ -pe smp 6
#$ -cwd # Run in current directory. Otherwise defaults to home folder
#$ -V
#$ -l h_vmem=6G
#$ -l gpu=2
# For more options, follow the steps from http://bioinformatics.mdc-berlin.de/intro2UnixandSGE/index.html
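# The job's actual command(s) would follow the directives above; a hypothetical
# example payload and submission (names are assumptions, not from the original script):
# python train.py --config config.yaml
# Submit the script with: qsub qtest.sh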
@bsantraigi
bsantraigi / torch_grad_clip.py
Created November 3, 2019 12:37
Snippets for Gradients in PyTorch | Clip Gradient Norm
# Check Gradient
total_norm = 0.0
for p in model.parameters():
    if p.grad is None:
        continue
    param_norm = p.grad.data.norm(2)
    total_norm += param_norm.item() ** 2
total_norm = total_norm ** (1. / 2)

# Clip Gradient Norm
optimizer.zero_grad()
loss, hidden = model(data, hidden, targets)
loss.backward()
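# The preview cuts off before the clipping call itself; a minimal continuation
# (the max_norm value 0.25 is an assumption, not from the original gist):
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.25)
optimizer.step()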