
jcaraballo / git-branch-between-different-repositories.md
Created March 6, 2012 01:05
How to fork a GitHub repository in Bitbucket

# Create Bitbucket branch

## Create local branch

```
$ git checkout -b sync
Switched to a new branch 'sync'
$ git branch
  master
* sync
```
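
With the local `sync` branch in place, the remaining step the title implies is to point it at Bitbucket. This is a minimal sketch, assuming you have already created an empty repository on Bitbucket; the remote name and URL below are placeholders for your own.

```
# Add the Bitbucket repository as a second remote (placeholder URL).
$ git remote add bitbucket git@bitbucket.org:youruser/yourrepo.git

# Push the local 'sync' branch to Bitbucket and track it.
$ git push -u bitbucket sync
```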
roadrunner2 / 0 Linux-On-MBP-Late-2016.md
Last active November 3, 2025 04:13
Linux on MacBook Pro Late 2016 and Mid 2017 (with Touchbar)

Introduction

This is about documenting getting Linux running on the late 2016 and mid 2017 MBPs; the focus is mostly on the MacBookPro13,3 and MacBookPro14,3 (15-inch models), but I try to make it relevant and provide information for the MacBookPro13,1, MacBookPro13,2, MacBookPro14,1, and MacBookPro14,2 (13-inch models) too. I'm currently using Fedora 27, but most things should be valid for other recent distros even if the details differ. The kernel version is 4.14.x (after the latest update).

The state of Linux on the MBP (with particular focus on the MacBookPro13,2) is also being tracked at https://github.com/Dunedan/mbp-2016-linux. And for Ubuntu users there are a couple of tutorials (here and here) focused on that distro and the MacBook.

Note: For those who have followed these instructions earlier, and in particular for those who have had problems with the custom DSDT: modifying the DSDT is not necessary anymore - see below.

kylemcdonald / eos-project.ipynb
Created June 24, 2017 23:50
Calculating facial landmarks with dlib, estimating the face mesh with eos, and projecting the mesh to 2D.
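The notebook itself could not be recovered here, but the first stage its description names, locating facial landmarks with dlib, looks roughly like the sketch below. The image path and predictor file are placeholders, and the eos fitting step is only indicated in a comment because its exact Python call depends on the eos version.

```python
# Minimal sketch of the landmark-detection stage described above.
# Assumes dlib is installed and the 68-point predictor model
# (shape_predictor_68_face_landmarks.dat) has been downloaded separately.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")  # placeholder input image
faces = detector(img, 1)               # upsample once to catch smaller faces

for face in faces:
    shape = predictor(img, face)
    landmarks = np.array([(p.x, p.y) for p in shape.parts()])
    # 'landmarks' is a (68, 2) array of 2D points; the gist then feeds
    # these into eos to fit a 3D face mesh and project it back to 2D.
```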
TengdaHan / ddp_notes.md
Last active June 7, 2025 22:26
Multi-node training on Slurm with PyTorch

Multi-node training on Slurm with PyTorch

What's this?

  • A simple note on how to start multi-node training with the Slurm scheduler and PyTorch.
  • Especially useful when the scheduler is so busy that you cannot get multiple GPUs allocated on one node, or when you need more than 4 GPUs for a single job.
  • Requirement: you have to use PyTorch DistributedDataParallel (DDP) for this purpose; a minimal setup sketch follows this list.
  • Warning: you might need to refactor your own code.
  • Warning: you might be secretly condemned by your colleagues for using too many GPUs.
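
As a rough illustration of the requirement above, here is a minimal sketch of initializing DDP from the Slurm environment. It assumes the job is launched with one task per GPU (e.g. via srun inside an sbatch script that also exports MASTER_ADDR and MASTER_PORT); the model is a placeholder.

```python
# Minimal sketch: initialize PyTorch DDP from Slurm environment variables.
# Assumes one Slurm task per GPU, launched e.g. with:
#   srun python train.py
# and MASTER_ADDR / MASTER_PORT exported in the sbatch script.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_distributed():
    rank = int(os.environ["SLURM_PROCID"])         # global rank across all nodes
    world_size = int(os.environ["SLURM_NTASKS"])   # total number of tasks/GPUs
    local_rank = int(os.environ["SLURM_LOCALID"])  # rank within this node

    dist.init_process_group(
        backend="nccl",
        init_method="env://",  # requires MASTER_ADDR and MASTER_PORT in the env
        rank=rank,
        world_size=world_size,
    )
    torch.cuda.set_device(local_rank)
    return rank, local_rank

rank, local_rank = setup_distributed()
model = torch.nn.Linear(10, 10).cuda(local_rank)  # placeholder model
model = DDP(model, device_ids=[local_rank])
```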