Gürkan Soykan (gsoykan)
@gsoykan
gsoykan / kl_div_loss.md
Created March 24, 2024 18:15
Notes on KL Div & KL Div Loss

KL divergence loss, or Kullback-Leibler divergence loss, measures how one probability distribution diverges from a second, reference distribution. It is commonly used in machine learning, typically to train models such as neural networks whose predicted distribution should match a target distribution.

In PyTorch, the KL divergence loss is implemented by the torch.nn.KLDivLoss class. This loss compares a true (target) probability distribution P(x) with a predicted probability distribution Q(x). The formula for KL divergence is:

$$D_{KL}(P \,\|\, Q) = \sum_x P(x) \log\left(\frac{P(x)}{Q(x)}\right)$$

where:

  • P(x) is the true (target) probability distribution.
  • Q(x) is the predicted (approximating) probability distribution.
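A minimal usage sketch (note the PyTorch-specific convention: KLDivLoss expects the prediction as log-probabilities and the target as probabilities; shapes and values below are arbitrary):

import torch
import torch.nn as nn
import torch.nn.functional as F

kl_loss = nn.KLDivLoss(reduction='batchmean')   # 'batchmean' matches the summed definition above

logits = torch.randn(4, 10)                     # arbitrary model outputs
log_q = F.log_softmax(logits, dim=-1)           # predicted distribution Q(x), in log-space
p = F.softmax(torch.randn(4, 10), dim=-1)       # target distribution P(x)

loss = kl_loss(log_q, p)                        # computes D_KL(P || Q)
print(loss)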

In PyTorch, "contiguous" refers to the way an array (or tensor) is stored in memory. When a tensor is contiguous, the data within the tensor is stored in a continuous block of memory. This means that the elements are laid out in memory in the order that they appear in the tensor, row by row, column by column, etc., without any gaps between the elements that belong to different rows or columns.

Non-contiguous tensors can occur after certain operations that change the shape or stride of a tensor, like transpose, permute, or slicing. When a tensor is non-contiguous, the elements are not stored in a linear chunk of memory, which can lead to inefficient memory access and increased computational overhead during operations.

To ensure that a tensor is contiguous, you can call the .contiguous() method on a tensor. This method will make a contiguous copy of the tensor in memory if the tensor is not already contiguous. Here’s how you use it:

tensor = tensor.contiguous()
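For example, a transpose produces a non-contiguous view, and .contiguous() repairs it:

import torch

x = torch.randn(3, 4)
y = x.t()                       # transpose returns a non-contiguous view of x
print(y.is_contiguous())        # False
y = y.contiguous()              # copies the data into one linear block of memory
print(y.is_contiguous())        # True
print(y.view(-1).shape)         # view() requires contiguous memory, so this now works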
@gsoykan
gsoykan / watch_gpu.sh
Created March 20, 2024 15:37
Polling the GPU for continuous resource usage
watch -n0.1 nvidia-smi
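Here watch re-runs nvidia-smi every 0.1 seconds (the -n0.1 flag sets the refresh interval), giving a near-live view of GPU utilization and memory.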
@gsoykan
gsoykan / set_cluster_cuda.md
Created November 4, 2023 11:08
setting cuda driver version on hpc cluster

Q:

I am working on the cluster. There is a module for cuda/12.3, and I want my CUDA version to use that path.

A:

If you're working on a cluster that provides a specific module for CUDA version 12.3, and you want to use that CUDA version in your environment, you can follow these steps to load the CUDA module and set the appropriate environment variables:

  1. First, you should log in to your cluster and open a terminal.

  2. Use the module load command to load the CUDA module for version 12.3. The exact module name and syntax can vary depending on your cluster's configuration, but the command typically looks like this:

     module load cuda/12.3

@gsoykan
gsoykan / blackout_polygons_aug.py
Created July 20, 2023 17:34
black out (mask) polygons from an image with albumentations
import numpy as np
import albumentations as A
import cv2
from albumentations.pytorch import ToTensorV2
from albumentations.augmentations import functional as A_F
from albumentations.core.transforms_interface import ImageOnlyTransform

class BlackOutWithPolygonMasks(ImageOnlyTransform):
    def __init__(self, polygons, always_apply=False, p=1.0):
        # `polygons` (an assumed completion of the truncated preview): a list of (N, 2) point arrays to black out
        super(BlackOutWithPolygonMasks, self).__init__(always_apply, p)
        self.polygons = polygons

    def apply(self, img, **params):
        # fill each polygon with black; copy first, since cv2.fillPoly draws in place
        out = img.copy()
        cv2.fillPoly(out, [np.asarray(poly, dtype=np.int32) for poly in self.polygons], (0, 0, 0))
        return out
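A hypothetical call site for the completed transform above (the polygon coordinates are made up):

transform = BlackOutWithPolygonMasks(polygons=[[(0, 0), (50, 0), (50, 50)]], p=1.0)
blacked_out = transform(image=np.zeros((100, 100, 3), dtype=np.uint8))['image']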
@gsoykan
gsoykan / get_package_source.py
Created July 14, 2023 11:29
prints a python package's source location -> really useful when package import errors happen
import inspect
import ssl  # replace with the name of the imported package you want to check

def get_package_source(package):
    module = inspect.getmodule(package)
    if module is not None:
        source_file = inspect.getfile(module)
        print(f"Source location for package '{package.__name__}': {source_file}")
    else:
        print(f"Package '{package.__name__}' is not found.")

get_package_source(ssl)
@gsoykan
gsoykan / crop_bb_and_mask.py
Created July 7, 2023 16:24
cropping a bounding box and segmentation mask from an image
def crop_all_components(self,
                        page_img_path: str,
                        save_root_folder: str,
                        original_img_shape: Tuple[int, int, int],  # h, w, c
                        transformed_img_shape: Tuple[int, int, int]):
    comic_series, page = page_img_path.split('/')[-2:]
    page = page.split('.')[0]
    scale_h_w = np.array(original_img_shape[:2]) / np.array(transformed_img_shape[:2])
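For context (an illustration, not part of the gist): scale_h_w holds the per-axis ratio between the original and transformed image sizes, so coordinates found in the transformed image can be mapped back to the original, e.g.:

import numpy as np

original_img_shape = (1080, 720, 3)     # hypothetical h, w, c
transformed_img_shape = (512, 512, 3)   # hypothetical resized shape
scale_h, scale_w = np.array(original_img_shape[:2]) / np.array(transformed_img_shape[:2])

x1, y1, x2, y2 = 100, 50, 200, 150      # a box in transformed-image coordinates
box_in_original = (x1 * scale_w, y1 * scale_h, x2 * scale_w, y2 * scale_h)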
@gsoykan
gsoykan / run_all_py.sh
Created June 30, 2023 19:19
run all .py files in a folder (ubuntu)
find . -name "*.py" -exec python {} \;
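find collects every .py file under the current directory and -exec runs python on each match in turn; the escaped \; terminates the -exec argument list.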
@gsoykan
gsoykan / draw_line_in_mask_batched.py
Created May 7, 2023 14:11
given a mask, draws lines from start to end points in a batched manner
def draw_line_in_mask_batched(self, mask, start_point, end_point, light_neighboring=True):
    """
    Draws a line in the given batch of masks.

    Args:
        mask (torch.Tensor): Batch of masks with shape (B, H, W)
        start_point (List of Tuples): Batch of start points with shape (B, 2)
        end_point (List of Tuples): Batch of end points with shape (B, 2)

    Returns:
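The body is cut off in the preview; a call-site sketch that matches the docstring's shapes (hypothetical, for illustration only):

import torch

B, H, W = 2, 64, 64
mask = torch.zeros(B, H, W)
start_point = [(5, 5), (10, 20)]    # one start point per batch element
end_point = [(40, 50), (60, 30)]    # one end point per batch element
# mask = obj.draw_line_in_mask_batched(mask, start_point, end_point)  # `obj` is hypothetical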
@gsoykan
gsoykan / bresenham_torch.py
Created April 7, 2023 15:42
Bresenham's line algorithm in PyTorch to connect points in a mask
def draw_line_in_mask(mask, start_point, end_point):
    # Extract x and y coordinates of start and end points
    x0, y0 = start_point[0], start_point[1]
    x1, y1 = end_point[0], end_point[1]

    # Compute differences between start and end points
    dx = abs(x1 - x0)
    dy = abs(y1 - y0)

    # Determine direction of the line
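The preview ends here. For reference, a minimal self-contained sketch of the classic Bresenham loop the snippet is building toward (an assumption about the continuation, not the gist's own code; integer coordinates inside the mask bounds are assumed):

import torch

def draw_line_in_mask_sketch(mask, start_point, end_point):
    # hypothetical completion following Bresenham's line algorithm
    x0, y0 = int(start_point[0]), int(start_point[1])
    x1, y1 = int(end_point[0]), int(end_point[1])
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1   # step direction along x
    sy = 1 if y0 < y1 else -1   # step direction along y
    err = dx - dy
    while True:
        mask[y0, x0] = 1        # mark the current pixel (row = y, col = x)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return mask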