Siladittya Manna (sadimanna): GitHub Gists
Status: Submitted Ph.D. thesis a few days ago
@echo off
setlocal enabledelayedexpansion
REM Set the paths for the image folder and the train, val, and test folders
set "base_path=C:\Users\ISI_UTS\Siladittya\MIDA2023"
set "image_folder_path=%base_path%\images"
set "train_image_path=%base_path%\train_images"
set "val_image_path=%base_path%\val_images"
set "test_image_path=%base_path%\test_images"
#!/bin/bash
# Set the paths for the train folder, label folder, and target folder
train_folder_path="/path/to/train"
label_folder_path="/path/to/label"
target_folder_path="/path/to/target"
label_ext=".labext"
# Create the target folder if it doesn't exist
mkdir -p "$target_folder_path"
#!/bin/bash
# Set the paths for Folder A and the three target folders
folder_a_path="/path/to/FolderA"
train_folder_path="/path/to/train"
val_folder_path="/path/to/val"
test_folder_path="/path/to/test"
# Create the target folders if they don't exist
mkdir -p "$train_folder_path"
#!/bin/bash
# Set the paths for the image folder, the label folder, and the three target split folders
image_folder_path="/path/to/train"
label_folder_path="/path/to/label"
target_train_folder="/path/to/train_split"
target_val_folder="/path/to/val_split"
target_test_folder="/path/to/test_split"
# Create the target folders if they don't exist
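Because this script splits images and labels together, the essential step is to shuffle and split once and then move each image with its matching label so pairs never land in different splits. A rough Python equivalent (the 80/10/10 ratio and the .labext extension are assumptions):

import random
import shutil
from pathlib import Path

image_folder = Path("/path/to/train")
label_folder = Path("/path/to/label")

images = sorted(image_folder.iterdir())
random.seed(0)
random.shuffle(images)
b1, b2 = int(0.8 * len(images)), int(0.9 * len(images))   # assumed 80/10/10 boundaries
split_map = {
    Path("/path/to/train_split"): images[:b1],
    Path("/path/to/val_split"):   images[b1:b2],
    Path("/path/to/test_split"):  images[b2:],
}
for out_dir, chunk in split_map.items():
    out_dir.mkdir(parents=True, exist_ok=True)
    for img in chunk:
        shutil.copy2(img, out_dir / img.name)              # the image
        label = label_folder / (img.stem + ".labext")      # its label (extension assumed)
        if label.exists():
            shutil.copy2(label, out_dir / label.name)      # kept in the same split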
==========================================================================================================================================================================
Layer (type:depth-idx) Input Shape Output Shape Param # Kernel Shape Mult-Adds
==========================================================================================================================================================================
Model [1, 3, 64, 64] [1, 1000] -- -- --
├─ResNet: 1-1 [1, 3, 64, 64] [1, 1000] -- -- --
│ └─Conv2d: 2-1 [1, 3, 64, 64] [1, 64, 32, 32] 9,408 [7, 7] 9,633,792
│ └─BatchNorm2d: 2-2 [1, 64, 32
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 32, 32] 9,408
BatchNorm2d-2 [-1, 64, 32, 32] 128
ReLU-3 [-1, 64, 32, 32] 0
MaxPool2d-4 [-1, 64, 16, 16] 0
...
Linear-68 [-1, 1000] 513,000
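The table above is the format produced by the torchsummary package; a call along these lines reproduces it (ResNet-18 and the 64x64 input size are inferred from the Conv2d-1 output shape and the 513,000-parameter final Linear layer, so treat them as assumptions):

import torch
from torchvision import models
from torchsummary import summary

model = models.resnet18()
summary(model, input_size=(3, 64, 64), device="cpu")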
sadimanna / torchinfosummaryoutput.txt
Created August 25, 2022 09:26
Output from torchinfo.summary on ResNet18 for input of (3,224,224)
=====================================================================================================================================================================
Layer (type:depth-idx) Input Shape Output Shape Param # Kernel Shape Mult-Adds
=====================================================================================================================================================================
ResNet [1, 3, 224, 224] [1, 1000] -- -- --
├─Conv2d: 1-1 [1, 3, 224, 224] [1, 64, 112, 112] 9,408 [7, 7] 118,013,952
├─BatchNorm2d: 1-2 [1, 64, 112, 112] [1, 64, 112, 112] 128 -- 128
├─ReLU: 1-3 [1, 64, 112, 112] [1, 64, 112, 112
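For reference, a torchinfo call that produces a table with these columns; the extra columns are requested through col_names, and the exact settings here are my reconstruction rather than the original command:

import torch
from torchvision import models
from torchinfo import summary

model = models.resnet18()
summary(
    model,
    input_size=(1, 3, 224, 224),
    col_names=("input_size", "output_size", "num_params", "kernel_size", "mult_adds"),
)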
!pip uninstall -y torch
!pip install torch==1.8.2+cpu torchvision==0.9.2+cpu -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
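After installing the CPU-only LTS build, a quick sanity check is to print the version and confirm CUDA is unavailable:

import torch
print(torch.__version__)            # expected to report 1.8.2+cpu
print(torch.cuda.is_available())    # False for the CPU-only build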
import copy

class ArrayDataset:
    def __init__(self,
                 phase,
                 array,
                 labels,
                 mean, std,
                 transformations = None):
        self.phase = phase
        self.imgarr = copy.deepcopy(array)      # work on copies so the caller's data is untouched
        self.labels = copy.deepcopy(labels)
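The preview cuts off inside __init__; the remainder below is only a sketch of how such an array-backed dataset is usually completed (storing the statistics and implementing __len__/__getitem__, assuming imgarr is a NumPy array of images), not the author's code:

        self.mean = mean
        self.std = std
        self.transformations = transformations

    def __len__(self):
        return len(self.imgarr)

    def __getitem__(self, idx):
        img = self.imgarr[idx].astype("float32")
        img = (img - self.mean) / self.std          # normalise with the stored statistics
        if self.transformations is not None:
            img = self.transformations(img)         # e.g. augmentations in the "train" phase
        return img, self.labels[idx]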
import torch.nn as nn
from torchvision import models

class GradCamModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.gradients = None       # filled by the tensor hook with the gradient of the feature map
        self.tensorhook = []
        self.layerhook = []
        self.selected_out = None    # filled by the forward hook with the feature map itself
        #PRETRAINED MODEL
        self.pretrained = models.resnet50(pretrained=True)
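The preview stops right after the backbone is loaded; the standard Grad-CAM wiring that typically follows is a forward hook on a late convolutional block (to grab its activations) and a tensor hook on that output (to grab its gradients). The sketch below assumes layer4 as the target layer and a (logits, activations) return from forward; it is a reconstruction, not the gist's remaining code:

        self.layerhook.append(
            self.pretrained.layer4.register_forward_hook(self.forward_hook()))
        for p in self.pretrained.parameters():
            p.requires_grad = True

    def activations_hook(self, grad):
        self.gradients = grad                       # gradients w.r.t. the hooked feature map

    def forward_hook(self):
        def hook(module, inp, out):
            self.selected_out = out                 # feature map used to build the CAM
            self.tensorhook.append(out.register_hook(self.activations_hook))
        return hook

    def forward(self, x):
        out = self.pretrained(x)
        return out, self.selected_out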