Shital Shah (sytelus)

@chtzvt
chtzvt / dnsmasq.conf
Created September 20, 2017 20:30
Optimized dnsmasq configuration for use with OpenWrt/DD-WRT/Tomato, etc.
# Charlton Trezevant's Zoomin DNSMasq Config - Version 1.0
# Having a large local cache speeds up subsequent DNS queries significantly (from several hundred ms to around 25-30 ms)
# You may need to adjust this depending on the amount of free space you have
cache-size=10000
# This ensures local reverse lookup queries are never sent upstream (e.g. dig +noall +answer -x 10.0.1.1)
bogus-priv
# Names without a dot or other domain part will also not be forwarded upstream
domain-needed
# We won't need dnsmasq to overwrite the system's resolv.conf, as we have our own cache.
@wassname
wassname / live_plot_notebook.py
Created September 27, 2017 06:14
Live plot using %matplotlib notebook in jupyter notebook
import numpy as np
from matplotlib import pyplot as plt
class LivePlotNotebook(object):
    """
    Live plot using %matplotlib notebook in jupyter notebook

    Usage:
    ```
    import time
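As a rough, hedged sketch of the same idea (not the gist's LivePlotNotebook implementation), the usual pattern under %matplotlib notebook is to keep a single figure and redraw it in place as new data arrives:

```
import time
import numpy as np
from matplotlib import pyplot as plt

# With %matplotlib notebook active, the same figure widget is redrawn in place.
fig, ax = plt.subplots()
line, = ax.plot([], [])

for step in range(50):
    y = np.random.randn(step + 1).cumsum()
    line.set_data(np.arange(len(y)), y)
    ax.relim()            # recompute data limits for the new data
    ax.autoscale_view()   # rescale the axes to those limits
    fig.canvas.draw()     # push the update to the notebook front-end
    time.sleep(0.05)
```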
@terabyte
terabyte / amazon.md
Created December 6, 2017 02:27
Amazon's Build System

Prologue

I wrote this answer on Stack Overflow, here: https://stackoverflow.com/posts/12597919/

Years later it was wrongly deleted for containing "proprietary information". I think that's bullshit, so I am posting it here. Come at me.

The Question

Amazon is an SOA system with hundreds of services (or so says Amazon Chief Technology Officer Werner Vogels). How do they handle build and release?

@Brainiarc7
Brainiarc7 / conky-setup-clevo-p751dm2-g.md
Last active February 24, 2025 00:02
My Conky configuration

Setting up Conky on Ubuntu 16.04 LTS for the Clevo P751DM2-G

System Information:

We extract this with inxi. To install it:

sudo apt-get install inxi
@ctmakro
ctmakro / ipython_display.py
Last active April 15, 2024 03:22
Display numpy ndarray as Image in Jupyter/IPython notebook
# If the input image is in range 0..1, multiply it by 255 first.
# Assumes img is a uint8 ndarray of shape [height, width, channels] with 1 or 3 channels
# (cv2 treats 3-channel arrays as BGR, and JPEG does not carry an alpha channel).
def imshow(img):
    import cv2
    import IPython
    # Encode the array as an in-memory JPEG, then hand the bytes to IPython for display.
    _, encoded = cv2.imencode('.jpg', img)
    i = IPython.display.Image(data=encoded.tobytes(), format='jpeg')
    IPython.display.display(i)
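To try the helper, a quick hedged usage example (the random image and its size are arbitrary; cv2 interprets 3-channel arrays as BGR):

```
import numpy as np

# Arbitrary 8-bit test image: 64x64 pixels, 3 channels (BGR as far as cv2 is concerned).
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
imshow(img)
```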
@soumith
soumith / gist:01da3874bf014d8a8c53406c2b95d56b
Last active March 28, 2022 16:53
Install PillowSIMD+libjpeg-turbo on Conda
conda uninstall --force pillow -y
# install libjpeg-turbo to $HOME/turbojpeg
git clone https://github.com/libjpeg-turbo/libjpeg-turbo
pushd libjpeg-turbo
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=$HOME/turbojpeg
make
make install
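After the install, a quick sanity check can be run from Python; this is a hedged sketch assuming a Pillow/Pillow-SIMD version recent enough to ship PIL.features.pilinfo():

```
import PIL
from PIL import features

# Pillow-SIMD builds usually report a version string with a ".postN" suffix.
print(PIL.__version__)

# Prints build details, including which JPEG library Pillow was compiled against.
features.pilinfo()
```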
@rxwei
rxwei / ad-manifesto.md
Last active December 6, 2024 16:54
First-Class Automatic Differentiation in Swift: A Manifesto
@thomwolf
thomwolf / gpt-2-wikitext-103.py
Last active December 3, 2025 07:57
A very small and self-contained gist to train a GPT-2 transformer model on wikitext-103
# Copyright (c) 2019-present, Thomas Wolf.
# All rights reserved. This source code is licensed under the MIT-style license.
""" A very small and self-contained gist to train a GPT-2 transformer model on wikitext-103 """
import os
from collections import namedtuple
from tqdm import tqdm
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from ignite.engine import Engine, Events
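As context for the ignite imports above, here is a minimal, self-contained sketch of the Engine/Events training pattern they point to; the tiny linear model, SGD optimizer, and random dataset are placeholders, not the gist's GPT-2 setup:

```
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from ignite.engine import Engine, Events

# Placeholder model, optimizer, and data (the gist itself trains a GPT-2 transformer on wikitext-103).
model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 4, (64,)))
loader = DataLoader(dataset, batch_size=8)

def update(engine, batch):
    # ignite calls this once per batch; the return value becomes engine.state.output.
    model.train()
    inputs, labels = batch
    loss = nn.functional.cross_entropy(model(inputs), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(update)

@trainer.on(Events.EPOCH_COMPLETED)
def log_epoch(engine):
    print(f"epoch {engine.state.epoch}: last batch loss {engine.state.output:.4f}")

trainer.run(loader, max_epochs=2)
```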
@shakna-israel
shakna-israel / LetsDestroyC.md
Created January 30, 2020 03:50
Let's Destroy C

I have a pet project I work on, every now and then. CNoEvil.

The concept is simple enough.

What if, for a moment, we forgot all the rules we know? What if we ignored every good idea and accepted all the terrible ones, with nothing off limits? Can we turn C into a new language? Can we do what Lisp and Forth let the over-eager programmer do, but in C?


@tamuhey
tamuhey / tokenizations_post.md
Last active July 27, 2024 14:46
How to calculate the alignment between BERT and spaCy tokens effectively and robustly

site: https://tamuhey.github.io/tokenizations/

Natural Language Processing (NLP) has made great progress in recent years thanks to neural networks, which allow us to solve various tasks with end-to-end architectures. However, many NLP systems still require language-specific pre- and post-processing, especially for tokenization. In this article, I describe an algorithm that simplifies calculating the correspondence between tokens produced by different tokenizers (e.g. BERT vs. spaCy), one such process. I also introduce Python and Rust libraries that implement this algorithm. Here are the library and the demo site links:
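As a short illustration of what such an alignment library exposes, here is a hedged sketch assuming the pytokenizations package (import name `tokenizations`) and its get_alignments function:

```
import tokenizations  # pip install pytokenizations (assumed package name/API)

# Two tokenizations of the same text: a subword-style split vs. a word-level split.
bert_tokens = ["New", "York", "is", "busy", "."]
spacy_tokens = ["New York", "is", "busy", "."]

# a2b[i] lists the indices in spacy_tokens that bert_tokens[i] aligns to, and vice versa.
a2b, b2a = tokenizations.get_alignments(bert_tokens, spacy_tokens)
print(a2b)  # expected: [[0], [0], [1], [2], [3]]
print(b2a)  # expected: [[0, 1], [2], [3], [4]]
```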