Aditya Soni (AdityaSoni19031997) - GitHub Gists
@ottomata
ottomata / spark-amm.sh
Created October 2, 2019 14:25
spark + ammonite
#!/usr/bin/env bash
export SPARK_HOME="${SPARK_HOME:-/usr/lib/spark2}"
export SPARK_CONF_DIR="${SPARK_CONF_DIR:-"${SPARK_HOME}"/conf}"
source "${SPARK_HOME}/bin/load-spark-env.sh"
export HIVE_CONF_DIR="${SPARK_CONF_DIR}"
export HADOOP_CONF_DIR=/etc/hadoop/conf
AMMONITE=~/bin/amm # This is amm binary release 2.11-1.6.7
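Taken together, these exports point a standalone Ammonite binary at the cluster's Spark, Hive, and Hadoop configuration; the preview ends at the AMMONITE variable, before the command that actually launches it.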
@dinhchi27
dinhchi27 / Key Sublime Text 3.2.1 Build 3207 - Sublime Text 3 License Key
Last active July 2, 2025 05:58
Key Sublime Text 3.2.1 Build 3207 - Sublime Text 3 License Key
Key Sublime Text 3.2.1 Build 3207
----- BEGIN LICENSE -----
Member J2TeaM
Single User License
EA7E-1011316
D7DA350E 1B8B0760 972F8B60 F3E64036
B9B4E234 F356F38F 0AD1E3B7 0E9C5FAD
FA0A2ABE 25F65BD8 D51458E5 3923CE80
87428428 79079A01 AA69F319 A1AF29A4
A684C2DC 0B1583D4 19CBD290 217618CD
@dmmeteo
dmmeteo / 1.srp.py
Last active June 27, 2025 07:18
SOLID Principles explained in Python with examples.
"""
Single Responsibility Principle
"...You had one job" (Loki to Skurge in Thor: Ragnarok)
A class should have only one job.
If a class has more than one responsibility, it becomes coupled:
a change to one responsibility results in modification of the other.
"""
class Animal:
    def __init__(self, name: str):
        self.name = name
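The preview cuts off at the constructor. A minimal sketch of the principle the docstring describes, with a hypothetical AnimalDB carrying the persistence job so that Animal keeps only one responsibility:

class Animal:
    """Knows about animal properties only: one job."""
    def __init__(self, name: str):
        self.name = name

    def get_name(self) -> str:
        return self.name

class AnimalDB:
    """Hypothetical companion class: persistence is a separate responsibility."""
    def save(self, animal: Animal) -> None:
        print(f"Saving {animal.get_name()} to the database")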
@nhymxu
nhymxu / README-python-framework-benchmark.md
Last active April 3, 2025 09:58
Flask vs Falcon vs FastAPI benchmark
gunicorn run:app --workers=9
gunicorn run:app --workers=9 --worker-class=meinheld.gmeinheld.MeinheldWorker

MacBook Pro 2015, Python 3.7

Framework | Server | Req/s | Max latency | +/- Stdev
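The table rows are cut off in this preview. For context, both gunicorn commands assume a module named run.py exposing app; a minimal Flask version of such a module might look like this (the route and payload are made up):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return {"hello": "world"}  # Flask >= 1.1 jsonifies dict returns

gunicorn run:app --workers=9 then serves it; the meinheld variant additionally needs pip install meinheld.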
@TrentBrick
TrentBrick / PyTorch_bucket_by_sequence_length.py
Last active June 14, 2024 06:39
PyTorch BatchSampler for bucketing sequences by length
"""
PyTorch has pack_padded_sequence this doesn’t work with dense layers. For sequence data with high variance in its length
the best way to minimize padding and masking within a batch is by feeding in data that is already grouped by sequence length
(while still shuffling it somewhat). Here is my current solution in numpy.
I will need to convert every function over to torch to allow it to run on the GPU and am sure there are many other
ways to optimize it further. Hope this helps others and that maybe it can become a new PyTorch Batch Sampler someday.
General approach to how it works:
Decide what your bucket boundaries for the data are.
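The remaining steps are truncated in this preview. A minimal sketch of the bucketing idea in numpy, under assumed bucket boundaries; this is an illustration, not the gist's full BatchSampler:

import numpy as np

def bucketed_batches(lengths, boundaries, batch_size, seed=0):
    """Yield index batches where every batch comes from one length bucket."""
    rng = np.random.default_rng(seed)
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for idx, length in enumerate(lengths):
        buckets[int(np.searchsorted(boundaries, length))].append(idx)
    for bucket in buckets:
        rng.shuffle(bucket)  # shuffle within the bucket, as the docstring suggests
        for start in range(0, len(bucket), batch_size):
            yield bucket[start:start + batch_size]

# e.g. sequences of lengths 3, 7, 12, 5 and 9, with bucket boundaries at 6 and 10
for batch in bucketed_batches([3, 7, 12, 5, 9], boundaries=[6, 10], batch_size=2):
    print(batch)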
import numpy as np
import faiss  # pip install faiss-cpu (or faiss-gpu)

def search_knn(xq, xb, k, distance_type=faiss.METRIC_L2):
    """Wrapper around the faiss knn functions without an index."""
    nq, d = xq.shape    # queries: (nq, d)
    nb, d2 = xb.shape   # database: (nb, d)
    assert d == d2, "query and database vectors must have the same dimensionality"
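The function body is truncated above. For reference, the same top-k search done the ordinary way, through a temporary flat index (standard faiss API; the sizes below are made up, and faiss expects contiguous float32 arrays):

import numpy as np
import faiss

xb = np.random.rand(1000, 64).astype("float32")  # database vectors
xq = np.random.rand(10, 64).astype("float32")    # query vectors
index = faiss.IndexFlatL2(xb.shape[1])           # exact L2 search, no training needed
index.add(xb)
distances, indices = index.search(xq, 5)         # top-5 neighbours per query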
@anna-hope
anna-hope / structured_self_attention.py
Last active April 22, 2019 06:05
Structured Self-Attention in PyTorch (Lin et al. 2017)
# Implementation of Structured Self-Attention mechanism
# from Lin et al. 2017 (https://arxiv.org/pdf/1703.03130.pdf)
# Anton Melnikov
import torch
import torch.nn as nn
class StructuredAttention(nn.Module):
    def __init__(self, *, input_dim: int, hidden_dim: int, attention_hops: int):
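The preview ends at the constructor signature. A sketch of how the rest of the module typically looks for Lin et al.'s formulation A = softmax(W_s2 tanh(W_s1 H^T)); the layer names are assumptions, not necessarily this gist's code:

import torch
import torch.nn as nn

class StructuredAttentionSketch(nn.Module):
    def __init__(self, *, input_dim: int, hidden_dim: int, attention_hops: int):
        super().__init__()
        self.ws1 = nn.Linear(input_dim, hidden_dim, bias=False)       # W_s1
        self.ws2 = nn.Linear(hidden_dim, attention_hops, bias=False)  # W_s2

    def forward(self, h: torch.Tensor):
        # h: (batch, seq_len, input_dim)
        scores = self.ws2(torch.tanh(self.ws1(h)))  # (batch, seq_len, hops)
        a = torch.softmax(scores, dim=1)            # attention over the sequence
        m = a.transpose(1, 2) @ h                   # (batch, hops, input_dim)
        return m, a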
# From my repo:
# https://github.com/AdityaSoni19031997/Machine-Learning/blob/master/AV/AV_Enigma_NLP_functional_api.ipynb
import re

def preprocess_word(word):
    # Remove surrounding punctuation
    word = word.strip('\'"?!,.():;')
    # Collapse runs of a repeated letter down to two, e.g. funnnnny -> funny
    word = re.sub(r'(.)\1+', r'\1\1', word)
    # Remove - & '
    word = re.sub(r"[-']", '', word)
    return word
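A quick check of the behaviour on made-up inputs:

print(preprocess_word("funnnnny!!!"))  # funny
print(preprocess_word("'cooool',"))    # cool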
@thomwolf
thomwolf / gradient_accumulation.py
Last active November 23, 2024 20:53
PyTorch gradient accumulation training loop
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i + 1) % accumulation_steps == 0:           # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i + 1) % evaluation_steps == 0:         # Evaluate the model when we...
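With a mean-reduced loss, dividing by accumulation_steps makes the accumulated gradient match one big batch: accumulation_steps=4 with a per-step batch of 8 behaves like a single batch of 32. Note that the evaluation check is nested inside the accumulation branch, so evaluation_steps should be a multiple of accumulation_steps for it to ever fire.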

MEDIAN OF MEDIANS
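The entry under this heading is missing from the preview. Since the title names the classic worst-case linear-time selection algorithm, here is a minimal sketch of it (an illustration, not the gist's code):

def select(arr, k):
    """Return the k-th smallest element of arr (0-indexed) in worst-case O(n)."""
    if len(arr) <= 5:
        return sorted(arr)[k]
    # Median of each group of five, then the median of those medians as pivot.
    medians = [sorted(arr[i:i + 5])[len(arr[i:i + 5]) // 2]
               for i in range(0, len(arr), 5)]
    pivot = select(medians, len(medians) // 2)
    lows = [x for x in arr if x < pivot]
    pivots = [x for x in arr if x == pivot]
    if k < len(lows):
        return select(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    highs = [x for x in arr if x > pivot]
    return select(highs, k - len(lows) - len(pivots))

print(select([9, 1, 0, 2, 3, 4, 6, 5, 8, 7], 4))  # 4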