Ritwik Raha (ritwikraha)
🎲 learning is probabilistic.

ritwikraha / mask_dilation.ipynb
Created February 15, 2024 15:13
Mask_Dilation.ipynb
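The notebook preview does not render here. As a stand-in, a minimal sketch of binary mask dilation with OpenCV (file names are hypothetical, assuming this is the operation the notebook demonstrates):

import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input mask
kernel = np.ones((15, 15), np.uint8)                 # square structuring element
dilated = cv2.dilate(mask, kernel, iterations=1)     # grow mask regions outward
cv2.imwrite("mask_dilated.png", dilated)
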
ritwikraha / gradient_descent_script.ipynb
Created January 18, 2024 17:54
gradient_descent_script.ipynb
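This preview does not render either. A minimal gradient-descent sketch on a 1-D quadratic (my illustration, not the notebook's code):

def grad(x):
    # Analytic gradient of f(x) = (x - 3)^2, which has its minimum at x = 3.
    return 2.0 * (x - 3.0)

x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * grad(x)  # step against the gradient
print(x)  # converges to ~3.0
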
ritwikraha / gradio_pixel_selector.py
Created December 30, 2023 20:24
Gradio Pixel Selector Utility
import gradio as gr
import numpy as np
import torch
from PIL import Image
'''
TODOs:
- Fetch the SAM model
- Fetch the inpainting model
'''
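The preview cuts off at the TODO list. A minimal sketch of the pixel-selection piece using Gradio's click-select event (the SAM and inpainting hookups from the TODOs are omitted):

def select_pixel(image, evt: gr.SelectData):
    # For images, evt.index holds the (x, y) coordinates of the clicked pixel.
    x, y = evt.index
    return f"Selected pixel: ({x}, {y})"

with gr.Blocks() as demo:
    img = gr.Image(type="numpy")
    coords = gr.Textbox(label="Selected coordinates")
    img.select(select_pixel, inputs=img, outputs=coords)

demo.launch()
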
ritwikraha / pdf-extractor.ipynb
Created December 5, 2023 06:48
PDF-Extractor.ipynb
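Another notebook that fails to render. A minimal extraction sketch with pypdf (hypothetical file name; the gist's actual approach may differ):

from pypdf import PdfReader

reader = PdfReader("document.pdf")  # hypothetical input file
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(text[:500])
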
ritwikraha / yt-transcript.ipynb
Created December 1, 2023 17:19
yt-transcript.ipynb
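The preview is unavailable. A minimal sketch with the youtube-transcript-api package (placeholder video ID; the gist's actual approach may differ):

from youtube_transcript_api import YouTubeTranscriptApi

# Each entry is a dict with "text", "start", and "duration" keys.
entries = YouTubeTranscriptApi.get_transcript("VIDEO_ID")
transcript = " ".join(entry["text"] for entry in entries)
print(transcript[:500])
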
ritwikraha / semantic_segmentation_deeplab_v3_plus.ipynb
Created November 27, 2023 10:01
semantic_segmentation_deeplab_v3_plus
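The preview is unavailable. As a stand-in, inference with torchvision's DeepLabV3 (plain V3 rather than the V3+ of the title, since torchvision ships no V3+ variant):

import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()
image = torch.rand(1, 3, 512, 512)   # stand-in for a normalized RGB batch
with torch.no_grad():
    logits = model(image)["out"]     # shape: (1, num_classes, 512, 512)
pred = logits.argmax(dim=1)          # per-pixel class predictions
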
ritwikraha / Pretraining-LLM.md
Last active November 16, 2024 06:05
Pretraining of Large Language Models

A Map for Studying Pre-training in LLMs

  • Data Collection
    • General Text Data
    • Specialized Data
  • Data Preprocessing
    • Quality Filtering
    • Deduplication (see the sketch below)
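The preview cuts off here. To illustrate the deduplication step, a minimal exact-match sketch over content hashes (my illustration; production pipelines typically add fuzzy matching such as MinHash):

import hashlib

def deduplicate(docs: list[str]) -> list[str]:
    seen, unique = set(), []
    for doc in docs:
        # Normalize whitespace and case so trivially reformatted copies collide.
        digest = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
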
ritwikraha / Bahdanau.md
Last active August 29, 2023 12:54
Why is Bahdanau Attention Additive?

Bahdanau Attention is often called Additive Attention because of the mathematical formulation used to compute the attention scores. In contrast to Dot-Product (Multiplicative) Attention, Bahdanau Attention relies on addition and a non-linear activation function.

Let's go through the math step-by-step:

Definitions

  • ( h_i ): Hidden state of the encoder for the (i)-th time step in the source sequence.
  • ( s_t ): Hidden state of the decoder for the (t)-th time step in the target sequence.
  • ( W_1 ) and ( W_2 ): Weight matrices.
  • ( b ): Bias term.
  • ( v ): Learned weight vector that projects the activated sum to a scalar score.
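Putting these definitions together, the alignment score for decoder step ( t ) and encoder step ( i ) is:

( e_{t,i} = v^T \tanh(W_1 h_i + W_2 s_t + b) )

The projections ( W_1 h_i ) and ( W_2 s_t ) are added together (hence "additive"), passed through the non-linearity ( \tanh ), and reduced to a scalar by ( v ). The attention weights follow as ( \alpha_{t,i} = \mathrm{softmax}_i(e_{t,i}) ), and the context vector is ( c_t = \sum_i \alpha_{t,i} h_i ). Dot-product attention instead scores with ( s_t^T h_i ), using multiplication alone.
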
ritwikraha / shape_list.py
Created March 24, 2022 15:57
The shape_list function from HuggingFace/transformers
from typing import List, Union
import numpy as np
import tensorflow as tf

def shape_list(tensor: Union[tf.Tensor, np.ndarray]) -> List[int]:
    """
    Deal with dynamic shape in tensorflow cleanly.
    Args:
        tensor (`tf.Tensor` or `np.ndarray`): The tensor we want the shape of.
    Returns:
        `List[int]`: The shape of the tensor as a list.
    """
    if isinstance(tensor, np.ndarray):
        return list(tensor.shape)
    # Dynamic dims are None in the static shape; fall back to tf.shape for those
    # entries (body restored from the transformers source; the preview cut off here).
    dynamic = tf.shape(tensor)
    static = tensor.shape.as_list()
    return [dynamic[i] if s is None else s for i, s in enumerate(static)]
ritwikraha / photoshop-blend-modes.ipynb
Created November 26, 2021 09:53
Photoshop-Blend-Modes.ipynb
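The preview is unavailable. A sketch of the standard blend-mode formulas such a notebook would cover (assuming float images scaled to [0, 1]):

import numpy as np

def multiply(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    return base * blend  # darkens; white (1.0) is the neutral color

def screen(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    return 1.0 - (1.0 - base) * (1.0 - blend)  # lightens; black is neutral

def overlay(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    # Multiplies where the base is dark, screens where it is light.
    return np.where(base < 0.5, 2 * base * blend, 1 - 2 * (1 - base) * (1 - blend))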