@daniel-j-h
daniel-j-h / ld.gold.sh
Last active June 19, 2024 00:07
default to ld.gold on Ubuntu'ish
update-alternatives --install "/usr/bin/ld" "ld" "/usr/bin/ld.gold" 20
update-alternatives --install "/usr/bin/ld" "ld" "/usr/bin/ld.bfd" 10
update-alternatives --config ld
ld --version   # should now report "GNU gold"
export CPP=cpp-5 CC=gcc-5 CXX=g++-5
env CXXFLAGS='-march=native -flto -fuse-linker-plugin' cmake .. -DCMAKE_BUILD_TYPE=Release
@alexpana
alexpana / hacks.md
Last active February 23, 2025 10:58
C++ is a hack

Various 'features' of C++ that show the hacky / inconsistent way in which the language was constructed. This is a work in progress, and currently contains some of the reasons I can remember why I've given up on C++. If you want to contribute, leave your favourite "hack" in the comments.

  1. (in)Visibility: C++ allows changing the access modifier of a virtual function in the derived class. Not only does C++ have no notion of interfaces, it actually allows subclasses to hide methods declared public in the superclass.

  2. Operator over-overloading: One of the increment operators takes a dummy int parameter in order to allow overloading. Can you tell which without googling? (hint: it's the postfix one).

  3. Exception unspecifiers: C++ has two kinds of exception specifiers: throw() and noexcept. The first is deprecated (because 'we screwed up, sorry, let's forget about this terrible mess'). The second one guarantees its contract by terminating the application when violated. That's because functions declared

@mrkline
mrkline / parent-polymorphism.cpp
Last active May 13, 2022 08:08
Sean Parent's polymorphism demo
// See http://sean-parent.stlab.cc/presentations/2016-10-10-runtime-polymorphism/2016-10-10-runtime-polymorphism.pdf
// or an earlier version, "Inheritance Is The Base Class of Evil"
#include <iostream>
#include <memory>
#include <string>
#include <vector>
using namespace std;
@meetps
meetps / computeIoU.py
Created February 11, 2017 14:02
Intersection over Union for Python [ Keras ]
import numpy as np

def computeIoU(y_pred_batch, y_true_batch):
    # Mean over the batch. Note the name is misleading: pixelAccuracy below
    # measures per-pixel accuracy over non-background pixels, not a true IoU.
    return np.mean(np.asarray([pixelAccuracy(y_pred_batch[i], y_true_batch[i]) for i in range(len(y_true_batch))]))

def pixelAccuracy(y_pred, y_true):
    y_pred = np.argmax(np.reshape(y_pred, [N_CLASSES_PASCAL, img_rows, img_cols]), axis=0)
    y_true = np.argmax(np.reshape(y_true, [N_CLASSES_PASCAL, img_rows, img_cols]), axis=0)
    y_pred = y_pred * (y_true > 0)
    # The preview cuts off here; the natural completion is the fraction of
    # correctly labelled non-background pixels.
    return 1.0 * np.sum((y_pred == y_true) * (y_true > 0)) / np.sum(y_true > 0)
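The snippet leans on three module-level globals (N_CLASSES_PASCAL, img_rows, img_cols) that the preview never defines. A minimal way to exercise it, with illustrative values (PASCAL VOC has 21 classes including background):

N_CLASSES_PASCAL, img_rows, img_cols = 21, 32, 32  # illustrative sizes

y_true_batch = np.random.rand(4, N_CLASSES_PASCAL * img_rows * img_cols)
y_pred_batch = np.random.rand(4, N_CLASSES_PASCAL * img_rows * img_cols)
print(computeIoU(y_pred_batch, y_true_batch))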
@Starl1ght
Starl1ght / main.cpp
Created March 26, 2017 21:32
Console
#include <vector>
#include <sstream>
#include <iostream>
#include <iterator>
#include <tuple>
template <typename T>
std::vector<std::string> Split(T&& input) {
    std::stringstream ss{ std::forward<T>(input) };
    // A default-constructed istream_iterator is the end-of-stream sentinel.
    return std::vector<std::string>{ std::istream_iterator<std::string>(ss), std::istream_iterator<std::string>{} };
}
@felixgwu
felixgwu / fc_densenet.py
Created April 2, 2017 05:22
FC-DenseNet Implementation in PyTorch
import torch
from torch import nn
__all__ = ['FCDenseNet', 'fcdensenet_tiny', 'fcdensenet56_nodrop',
           'fcdensenet56', 'fcdensenet67', 'fcdensenet103',
           'fcdensenet103_nodrop']
class DenseBlock(nn.Module):
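The preview stops right at the DenseBlock definition, so here is a minimal sketch of what a dense block computes, assuming the standard DenseNet recipe (BN -> ReLU -> 3x3 conv, concatenating features along the channel axis). The names and sizes are illustrative, not the gist's actual configuration:

class DenseBlockSketch(nn.Module):
    def __init__(self, in_channels, growth_rate, n_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            channels = in_channels + i * growth_rate
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # each layer sees the concatenation of everything before it
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlockSketch(in_channels=16, growth_rate=12, n_layers=4)
print(block(torch.randn(1, 16, 32, 32)).shape)  # -> (1, 16 + 4 * 12, 32, 32)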
@oeway
oeway / imageUtils.py
Last active May 8, 2024 14:21
Improved image transform functions for dense predictions (for pytorch, keras etc.)
import numpy as np
import scipy
import scipy.ndimage
from scipy.ndimage.filters import gaussian_filter
from scipy.ndimage.interpolation import map_coordinates
import collections
from PIL import Image
import numbers
__author__ = "Wei OUYANG"
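The imports alone don't show the gist's central idea, so here is a minimal sketch of it (my illustration, not the gist's code): for dense prediction tasks the random transform parameters must be drawn once and applied identically to the image and its label mask, or pixels and labels fall out of alignment.

def random_horizontal_flip(image, mask, p=0.5, rng=np.random):
    # Flip image and mask together so every pixel keeps its label.
    if rng.rand() < p:
        image = image[:, ::-1].copy()
        mask = mask[:, ::-1].copy()
    return image, mask

image = np.zeros((4, 4, 3))
mask = np.arange(16).reshape(4, 4)
image2, mask2 = random_horizontal_flip(image, mask, p=1.0)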
@bartolsthoorn
bartolsthoorn / multilabel_example.py
Created April 29, 2017 12:13
Simple multi-label classification example with PyTorch and MultiLabelSoftMarginLoss (https://en.wikipedia.org/wiki/Multi-label_classification)
import torch
import torch.nn as nn
import numpy as np
import torch.optim as optim
from torch.autograd import Variable
# (1, 0) => target labels 0+2
# (0, 1) => target labels 1
# (1, 1) => target labels 3
train = []
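The preview stops at train = []. A minimal sketch of where this is headed, using current PyTorch idioms (no Variable) and the label pattern from the comments above; the network shape and step count are illustrative:

x = torch.tensor([[1., 0.], [0., 1.], [1., 1.]])
y = torch.tensor([[1., 0., 1., 0.],   # (1, 0) -> labels 0 and 2
                  [0., 1., 0., 0.],   # (0, 1) -> label 1
                  [0., 0., 0., 1.]])  # (1, 1) -> label 3

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 4))
criterion = nn.MultiLabelSoftMarginLoss()  # sigmoid + binary loss per label
optimizer = optim.Adam(model.parameters(), lr=0.01)

for step in range(200):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print(torch.sigmoid(model(x)) > 0.5)  # per-label predictions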
@mbinna
mbinna / effective_modern_cmake.md
Last active July 20, 2025 14:17
Effective Modern CMake

Effective Modern CMake

Getting Started

For a brief user-level introduction to CMake, watch C++ Weekly, Episode 78, Intro to CMake by Jason Turner. LLVM’s CMake Primer provides a good high-level introduction to the CMake syntax. Go read it now.

After that, watch Mathieu Ropert’s CppCon 2017 talk Using Modern CMake Patterns to Enforce a Good Modular Design (slides). It provides a thorough explanation of what modern CMake is and why it is so much better than “old school” CMake. The modular design ideas in this talk are based on the book [Large-Scale C++ Software Design](https://www.amazon.de/Large-Scale-Soft

@peteflorence
peteflorence / pytorch_bilinear_interpolation.md
Last active November 18, 2024 06:10
Bilinear interpolation in PyTorch, and benchmarking vs. numpy

Here's a simple implementation of bilinear interpolation on tensors using PyTorch.

I wrote this up since I ended up learning a lot about the options for interpolation in both the numpy and PyTorch ecosystems. Beyond interpolation itself, it's also a nice case study in how PyTorch can run very numpy-like code on the GPU (and, by the way, do autodiff for you too).

For interpolation in PyTorch, this open issue calls for more interpolation features. There is now a nn.functional.grid_sample() feature but at least at first this didn't look like what I needed (but we'll come back to this later).

In particular I wanted to take an image, W x H x C, and sample it many times at different random locations. Note also that this is different than upsampling which exhaustively samples and also doesn't give us fle
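To make the setup concrete before the truncation, here is a minimal from-scratch sketch of bilinear sampling at arbitrary fractional locations (my illustration, not the gist's benchmark code):

import torch

def bilinear_sample(img, xs, ys):
    # img: (H, W, C); xs, ys: (N,) fractional pixel coordinates.
    H, W, C = img.shape
    x0 = xs.floor().long().clamp(0, W - 2)
    y0 = ys.floor().long().clamp(0, H - 2)
    x1, y1 = x0 + 1, y0 + 1
    wx = (xs - x0.float()).unsqueeze(1)  # horizontal weights, (N, 1)
    wy = (ys - y0.float()).unsqueeze(1)  # vertical weights, (N, 1)
    # weighted sum of the four neighbouring pixels
    return (img[y0, x0] * (1 - wx) * (1 - wy) +
            img[y0, x1] * wx * (1 - wy) +
            img[y1, x0] * (1 - wx) * wy +
            img[y1, x1] * wx * wy)

img = torch.rand(64, 64, 3)
xs, ys = torch.rand(1000) * 63, torch.rand(1000) * 63
print(bilinear_sample(img, xs, ys).shape)  # -> (1000, 3)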