Dani El-Ayyass (dayyass) 🚀 Rocket Science
@yaroslavvb
yaroslavvb / install_pdb_handler.py
Created October 11, 2019 17:53
install_pdb_handler
def install_pdb_handler():
    """Signals to automatically start pdb:
    1. CTRL+\\ breaks into pdb.
    2. pdb gets launched on exception.
    """
    import pdb
    import signal
    import sys

    def handler(_signum, _frame):
        pdb.set_trace()

    signal.signal(signal.SIGQUIT, handler)  # CTRL+\ sends SIGQUIT
    sys.excepthook = lambda t, v, tb: pdb.post_mortem(tb)  # pdb on unhandled exception
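A quick way to exercise it (my example, not part of the gist): call the installer once at startup, then interrupt a busy loop.

install_pdb_handler()

import time
while True:
    time.sleep(1)  # press CTRL+\ here to drop into pdb mid-loop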
@4OH4
4OH4 / tfidf_adv.py
Last active January 27, 2025 07:38
TF-IDF model with stopwords and lemmatizer
import nltk
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# Download tokenizer models, stopwords list and WordNet data
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

stop_words = set(stopwords.words('english'))
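The preview cuts off after the setup. A minimal sketch of how the pieces plausibly fit together, continuing the file above; the LemmaTokenizer class, its token filtering, and the documents variable are my assumptions, not the gist's code:

class LemmaTokenizer:
    """Tokenize a document and lemmatize each token, dropping stopwords."""
    def __init__(self):
        self.lemmatizer = WordNetLemmatizer()

    def __call__(self, doc):
        return [
            self.lemmatizer.lemmatize(token)
            for token in word_tokenize(doc.lower())
            if token.isalpha() and token not in stop_words
        ]

vectorizer = TfidfVectorizer(tokenizer=LemmaTokenizer(), lowercase=False)
tfidf = vectorizer.fit_transform(documents)      # documents: list of raw strings
similarities = linear_kernel(tfidf[0:1], tfidf)  # similarity of doc 0 vs. all docs

Because TfidfVectorizer L2-normalizes rows by default, linear_kernel on the tf-idf matrix is exactly cosine similarity.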
@jamescalam
jamescalam / flask_api.py
Last active September 10, 2024 20:13
An example API using Flask
from flask import Flask
from flask_restful import Resource, Api, reqparse
import pandas as pd
import ast  # used further down in the gist to parse list-valued columns

app = Flask(__name__)
api = Api(app)

class Users(Resource):
    def get(self):
        data = pd.read_csv('users.csv')       # read the local users table
        return {'data': data.to_dict()}, 200  # return data with 200 OK

api.add_resource(Users, '/users')  # expose Users at the /users endpoint

if __name__ == '__main__':
    app.run()
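With the script running, the endpoint can be exercised from another terminal (my example invocation; Flask serves on port 5000 by default):

curl http://127.0.0.1:5000/users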
# Ideally, we would manage async access to stdin/stdout/stderr *without*
# setting them to non-blocking mode, because that can break other processes.
# (See https://github.com/python-trio/trio/issues/174 for much more detail.)
# Of course we can call read/write in a separate thread, but then we lose
# cancellation support.
# This file demonstrates a weird hack to make blocking read/write cancellable,
# and thus at least theoretically possible to integrate into Trio as ordinary
# first-class operations.
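The preview ends here. For contrast, a minimal sketch of the plain thread-based approach the comment finds insufficient, written against Trio's current public API (trio.to_thread.run_sync); this is not the gist's hack, just an illustration of why thread offloading alone is unsatisfying:

import sys
import trio

async def read_stdin(nbytes: int) -> bytes:
    # Offload the blocking read to a worker thread. abandon_on_cancel=True
    # makes the await cancellable, but the abandoned thread keeps running
    # and still consumes whatever bytes it eventually reads -- exactly the
    # loss the comment above is worried about.
    return await trio.to_thread.run_sync(
        sys.stdin.buffer.raw.read, nbytes, abandon_on_cancel=True
    )

async def main():
    with trio.move_on_after(5):  # cancel the read after 5 seconds
        print("got:", await read_stdin(1024))

trio.run(main)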
@dayyass
dayyass / pytorch_onnx_global_pooling.py
Created November 25, 2020 16:49
ONNX doesn't support PyTorch Adaptive Pooling (and, as the special case with output_size=1, Global Pooling). Here is an implementation of Global Pooling that is compatible with ONNX.
import numpy as np
import torch
import torch.nn as nn
import onnx
import onnxruntime

##### INIT 1d, 2d, 3d GLOBAL POOLING MODULES #####
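The preview stops at the banner. A minimal sketch of what an ONNX-exportable global pooling module can look like, continuing the imports above; the class name and constructor are my assumptions, not necessarily the gist's:

class GlobalAvgPooling(nn.Module):
    """Global average pooling as a plain mean over the spatial dims,
    which exports to ONNX as ReduceMean, sidestepping the AdaptivePool
    export gap the description mentions."""

    def __init__(self, n_spatial_dims: int):
        super().__init__()
        # e.g. 2 for images: reduce dims (2, 3) of a [B, C, H, W] tensor
        self.dims = tuple(range(2, 2 + n_spatial_dims))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.mean(dim=self.dims, keepdim=True)

GlobalAvgPooling(2) then behaves like nn.AdaptiveAvgPool2d(1) on a [B, C, H, W] input.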
@jirihnidek
jirihnidek / sub-sub-command.py
Last active September 12, 2025 19:04
Python example of using argparse sub-parsers, sub-commands and sub-sub-commands
"""
Example of using sub-parser, sub-commands and sub-sub-commands :-)
"""
import argparse
def main(args):
"""
Just do something
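A minimal sketch of the nesting itself (my reconstruction; the command names foo and bar are placeholders, not the gist's):

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='command')

# first level: `prog foo`
foo = subparsers.add_parser('foo')
foo_sub = foo.add_subparsers(dest='subcommand')

# second level: `prog foo bar --baz 1`
bar = foo_sub.add_parser('bar')
bar.add_argument('--baz', type=int, default=0)

args = parser.parse_args(['foo', 'bar', '--baz', '1'])
print(args.command, args.subcommand, args.baz)  # -> foo bar 1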
@dayyass
dayyass / pytorch_cross_entropy_loss_for_binary_classification.py
Last active June 17, 2021 15:37
PyTorch nn.BCELoss and nn.CrossEntropyLoss equivalence for binary classification.
"""
In a binary classification problem, a neural network usually returns a vector of logits of shape [batch_size],
while in a multiclass classification problem, logits are represented as a matrix of shape [batch_size, n_classes].
For these tasks, different loss functions are used, and, therefore, the network training pipelines are also different,
which is not convenient when you need to test hypotheses for both problem statements (binary/multiclass).
Pipeline schemes:
- binary classification:
logits (of shape [batch_size]) -> BCEWithLogitsLoss
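The claimed equivalence is easy to check numerically. A small sketch (mine, not the gist's code) using the standard trick of padding a zero logit for the negative class, since softmax([0, z]) equals [1 - sigmoid(z), sigmoid(z)]:

import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(8)            # binary logits, shape [batch_size]
targets = torch.randint(0, 2, (8,))

bce = nn.BCEWithLogitsLoss()(logits, targets.float())

# same batch, recast as 2-class logits [0, z] for CrossEntropyLoss
two_class = torch.stack([torch.zeros_like(logits), logits], dim=1)
ce = nn.CrossEntropyLoss()(two_class, targets)

print(torch.allclose(bce, ce))  # True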
@dayyass
dayyass / pytorch_set_global_seed.py
Created December 20, 2020 10:16
Set global seed for reproducibility.
import random

import numpy as np
import torch


def set_global_seed(seed: int):
    """
    Set global seed for reproducibility.
    """
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch RNGs on all GPUs
@dayyass
dayyass / pytorch_pack_padded_sequence.ipynb
Last active January 12, 2023 12:43
RNN inference time with/without pack_padded_sequence comparison.
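The notebook itself fails to render, so here is a minimal sketch (my reconstruction, not the notebook's cells) of the comparison it describes: time one LSTM forward pass over a padded batch directly, and once via pack_padded_sequence:

import time
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

batch_size, max_len, emb_dim, hidden = 64, 100, 128, 256
lengths = torch.randint(1, max_len + 1, (batch_size,)).sort(descending=True).values
padded = torch.randn(batch_size, max_len, emb_dim)
lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

with torch.no_grad():
    start = time.perf_counter()
    lstm(padded)                     # processes every timestep, padding included
    t_padded = time.perf_counter() - start

    packed = pack_padded_sequence(padded, lengths, batch_first=True)
    start = time.perf_counter()
    lstm(packed)                     # skips each sequence's padded tail
    t_packed = time.perf_counter() - start

print(f"padded: {t_padded:.4f}s  packed: {t_packed:.4f}s")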
@dayyass
dayyass / attention.ipynb
Last active June 17, 2021 15:37
My own implementation of Multihead Attention.
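This notebook doesn't render either, so below is a compact sketch of standard multi-head self-attention (my reconstruction; the notebook's actual names and details are unknown):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiheadAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.d_head = d_model // n_heads
        self.n_heads = n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, seq_len, d_model]
        b, t, _ = x.shape
        # project, then split into heads: [batch, n_heads, seq_len, d_head]
        q, k, v = (
            proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
            for proj in (self.q_proj, self.k_proj, self.v_proj)
        )
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # scaled dot product
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)     # merge heads
        return self.out_proj(out)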