@iKrishneel
import torch

def trimmed_mae_loss(prediction, target, mask, trim=0.2):
    # Number of valid pixels per image in the batch
    M = torch.sum(mask, (1, 2))
    res = prediction - target
    # Absolute residuals at valid pixels only
    res = res[mask.bool()].abs()
    # Sort residuals ascending and keep the smallest (1 - trim) fraction
    sorted_res, _ = torch.sort(res.view(-1), descending=False)
    trimmed = sorted_res[: int(len(res) * (1.0 - trim))]
    return trimmed.sum() / (2 * M.sum())
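A quick usage sketch (the tensor shapes and mask here are made up for illustration):

# Hypothetical batch of 4 predicted/target maps of size 64x64 with a validity mask
prediction = torch.rand(4, 64, 64, requires_grad=True)
target = torch.rand(4, 64, 64)
mask = (torch.rand(4, 64, 64) > 0.1).float()

loss = trimmed_mae_loss(prediction, target, mask, trim=0.2)
loss.backward()   # gradients flow only through the untrimmed residuals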
@mael
mael / tricks.md
Last active July 16, 2024 11:44
Xcode 10.2 "Unable to boot the Simulator"

Solution "Unable to boot the Simulator"

sudo mkdir /private/tmp

sudo chmod 1777 /private/tmp

Other basic commands

xcrun

xcrun simctl list devices   # list all simulators

xcrun simctl delete <device udid>   # delete a specific device

@felipemoraes
felipemoraes / 0.useful.md
Last active February 17, 2025 14:23
Machine Learning Interview Questions
@ZijiaLewisLu
ZijiaLewisLu / Tricks to Speed Up Data Loading with PyTorch.md
Last active March 10, 2025 00:22
Tricks to Speed Up Data Loading with PyTorch

In most deep learning projects, the training script starts with lines that load the data, which can easily take several minutes. Only after the data is ready can I start testing my buggy code. It happens frustratingly often that I wait ten minutes just to find I made a stupid typo, then I have to restart and wait another ten minutes while hoping there are no other typos.

To make my life easier, I have devoted a lot of effort to reducing the overhead of I/O loading. Here I list some useful tricks I found, and I hope they save you some time too.

  1. Use NumPy memmap to load arrays and say goodbye to HDF5.

    I used to rely on HDF5 to read/write data, especially when loading only a sub-part of all the data. That was before I realized how fast and convenient NumPy's memmap files are. In short, a memmap file does not load the whole array when it is opened; it only "lazily" loads the parts that are actually needed by later operations.

Sometimes I may want to copy the full array into memory at once, as it makes later operations faster.
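A minimal sketch of the idea (the file name, shape, and dtype below are made up for illustration):

import numpy as np

# One-time conversion: dump the dataset into a raw memmap file on disk
data = np.random.rand(1000, 128, 128).astype(np.float32)
fp = np.memmap("train_data.dat", dtype=np.float32, mode="w+", shape=(1000, 128, 128))
fp[:] = data[:]
fp.flush()

# In the training script: opening the memmap is nearly instant, because nothing
# is read from disk until a slice is actually accessed
train = np.memmap("train_data.dat", dtype=np.float32, mode="r", shape=(1000, 128, 128))
batch = np.asarray(train[0:32])   # only these rows are read (and copied) here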

@autosquid
autosquid / blender_cam.py
Last active January 6, 2025 21:17
blender-camera-from-3x4-matrix
# from: http://blender.stackexchange.com/questions/40650/blender-camera-from-3x4-matrix?rq=1
# And: http://blender.stackexchange.com/questions/38009/3x4-camera-matrix-from-blender-camera
# Input: P 3x4 numpy matrix
# Output: K, R, T such that P = K*[R | T], det(R) positive and K has positive diagonal
#
# Reference implementations:
# - Oxford's visual geometry group matlab toolbox
# - Scilab Image Processing toolbox
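For context, a rough sketch of the decomposition those comments describe, using an RQ factorization; this is an illustrative reconstruction (the function name and normalization choices are assumptions), not necessarily the gist's exact code:

import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split a 3x4 projection P into K, R, T with P ~ K @ [R | T]."""
    M, p4 = P[:, :3], P[:, 3]
    # RQ factorization: M = K @ R with K upper triangular, R orthogonal
    K, R = rq(M)
    # Force K to have a positive diagonal (D is its own inverse)
    D = np.diag(np.sign(np.diag(K)))
    K, R = K @ D, D @ R
    # P is only defined up to scale, so flip the rotation and the translation
    # part together if det(R) came out negative
    if np.linalg.det(R) < 0:
        R, p4 = -R, -p4
    T = np.linalg.solve(K, p4)
    K = K / K[2, 2]   # conventional normalization of the intrinsics
    return K, R, T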
@filitchp
filitchp / OpenCV-3.1-Ubuntu-16.04-Cuda-8.md
Last active December 12, 2019 21:36
Installing OpenCV 3.1 on Ubuntu 16.04 with Cuda 8 support

This is a guide for installing OpenCV 3.1 on Ubuntu 16.04 with CUDA 8 support. It has been tested on a system with a GeForce GTX 1060 and on one with a GeForce GTX 1080.

Nvidia Drivers with Compiz

Install Nvidia drivers

# Start clean
sudo apt purge nvidia-*
# Add the PPA
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
@erikbern
erikbern / use_pfx_with_requests.py
Last active April 10, 2025 07:17
How to use a .pfx file with Python requests – also works with .p12 files
import contextlib
import OpenSSL.crypto
import os
import requests
import ssl
import tempfile
@contextlib.contextmanager
def pfx_to_pem(pfx_path, pfx_password):
    ''' Decrypts the .pfx file to be used with requests. '''
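    # --- The preview cuts off here; below is a sketch of how the body typically
    # continues, assuming pyOpenSSL's PKCS#12 helpers (load_pkcs12, dump_privatekey,
    # dump_certificate). These are deprecated in recent pyOpenSSL releases.
    with tempfile.NamedTemporaryFile(suffix='.pem') as t_pem:
        with open(pfx_path, 'rb') as f_pfx:
            p12 = OpenSSL.crypto.load_pkcs12(f_pfx.read(), pfx_password)
        with open(t_pem.name, 'wb') as f_pem:
            f_pem.write(OpenSSL.crypto.dump_privatekey(
                OpenSSL.crypto.FILETYPE_PEM, p12.get_privatekey()))
            f_pem.write(OpenSSL.crypto.dump_certificate(
                OpenSSL.crypto.FILETYPE_PEM, p12.get_certificate()))
            for ca_cert in p12.get_ca_certificates() or []:
                f_pem.write(OpenSSL.crypto.dump_certificate(
                    OpenSSL.crypto.FILETYPE_PEM, ca_cert))
        yield t_pem.name

# Usage sketch (URL and file names are placeholders):
# with pfx_to_pem('client.pfx', b'secret') as cert:
#     requests.get('https://example.com', cert=cert)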
@ottokart
ottokart / nn.py
Last active August 27, 2021 05:52
3-layer neural network example with dropout in 2nd layer
# Tiny example of a 3-layer neural network with dropout in the 2nd hidden layer
# Output layer is linear with L2 cost (regression model)
# Hidden layer activation is tanh
import numpy as np
n_epochs = 100
n_samples = 100
n_in = 10
n_hidden = 5
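# --- The preview stops at the hyperparameters; what follows is a rough sketch of
# the forward pass the header comments describe (tanh hidden layers, dropout on
# the 2nd hidden layer, linear output with an L2 cost). Variable names and the
# dropout rate are illustrative, not the gist's own code.
n_out = 1
dropout_p = 0.5

rng = np.random.RandomState(0)
W1 = rng.randn(n_in, n_hidden) * 0.1
W2 = rng.randn(n_hidden, n_hidden) * 0.1
W3 = rng.randn(n_hidden, n_out) * 0.1

def forward(X, train=True):
    h1 = np.tanh(X.dot(W1))                     # 1st hidden layer (tanh)
    h2 = np.tanh(h1.dot(W2))                    # 2nd hidden layer (tanh)
    if train:
        mask = rng.binomial(1, 1.0 - dropout_p, size=h2.shape)
        h2 = h2 * mask / (1.0 - dropout_p)      # inverted dropout on 2nd layer
    return h2.dot(W3)                           # linear output (regression)

X = rng.randn(n_samples, n_in)
y = rng.randn(n_samples, n_out)
pred = forward(X)
l2_cost = 0.5 * np.mean(np.sum((pred - y) ** 2, axis=1))   # L2 cost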
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example"
    android:versionCode="1"
    android:versionName="1.0">
    <uses-sdk android:minSdkVersion="8"/>
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <application android:label="@string/app_name">
@bhaskara
bhaskara / openni_record_player.launch
Created April 16, 2012 17:38
Example ROS launch file that uses depth_image_proc to convert an RGB-depth image pair into a point cloud
<launch>
  <!--
    To distinguish between the cases where the rgb image is
    1280x1024 versus 640x480. This affects the pipeline.
  -->
  <arg name="high_res_rgb" default="true"/>
  <arg name="cloud_input_ns" value="camera/rgb_downsampled"
       if="$(arg high_res_rgb)"/>
  <arg name="cloud_input_ns" value="camera/rgb" unless="$(arg high_res_rgb)"/>