
@awni
awni / mlx_distributed_deepseek.md
Last active March 31, 2025 13:11
Run DeepSeek R1 or V3 with MLX Distributed

Setup

On every machine in the cluster, install openmpi and mlx-lm:

conda install conda-forge::openmpi
pip install -U mlx-lm

Next, download the pipeline parallel run script to the same path on every machine:
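Once the script is in place, a launch across two machines might look like the sketch below; the hosts file contents, script name, model repo, and flags are placeholders rather than values taken from this gist:

# hosts.txt lists one machine per line; keep it identical on every node
cat > hosts.txt <<EOF
machine-1.local
machine-2.local
EOF

# start one rank per machine with Open MPI and run the pipeline-parallel script
mpirun -np 2 --hostfile hosts.txt \
  python pipeline_generate.py \
  --model mlx-community/DeepSeek-R1-4bit \
  --prompt "Write a quicksort in C" \
  --max-tokens 256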

@akshaykhadse
akshaykhadse / README.md
Last active January 22, 2025 19:36
C++ Google Colab Plugin

An example notebook can be found here.

Install Zsh and Oh-my-zsh on CentOS 7

Based on this article

ALL INSTALLATIONS ASSUME YES WHEN PROMPTED; that's what -y does.

This script can be copy-pasted into an SSH session as is. No-hands installation. :-)

yum install zsh -y
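A sketch of the remaining no-hands steps, assuming the stock oh-my-zsh installer and its --unattended flag; the URL and flags follow the upstream oh-my-zsh README rather than the original article:

yum install git curl -y
# run the oh-my-zsh installer without prompts
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended
# make zsh the login shell (may ask for your password over ssh)
chsh -s $(which zsh)
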
@leofavre
leofavre / deepGroupBy.js
Last active January 6, 2023 06:11
Similar to Lodash groupBy(), but with nested groups.
/**
* Part of [Canivete](http://canivete.leofavre.com/#deepgroupby)
*
* Groups the contents of an array by one or more iteratees.
* Unlike Lodash [`groupBy()`](https://lodash.com/docs/4.17.4#groupBy),
* this function can create nested groups, but cannot receive
* strings for iteratees.
*/
const deepGroupBy = (collection, ...iteratees) => {
  // Sketch (not the original Canivete source): walk each value's key path,
  // nesting objects and collecting the values in arrays at the leaves.
  let paths = collection.map(value => iteratees.map(iteratee => iteratee(value))),
      result = {};
  collection.forEach((value, index) => {
    let path = paths[index],
        parent = path.slice(0, -1).reduce((node, key) => (node[key] = node[key] || {}), result),
        leafKey = path[path.length - 1];
    (parent[leafKey] = parent[leafKey] || []).push(value);
  });
  return result;
};
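For instance, grouping a small made-up dataset first by country and then by age yields two levels of nesting, with arrays of the original objects at the leaves:

const people = [
  { name: "Ana", country: "BR", age: 30 },
  { name: "Bob", country: "US", age: 30 },
  { name: "Cid", country: "BR", age: 25 }
];

deepGroupBy(people, p => p.country, p => p.age);
// => { BR: { "30": [Ana], "25": [Cid] }, US: { "30": [Bob] } }  (leaf arrays hold the full objects)
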
@cosmo0920
cosmo0920 / mingw-w64-4.0.4-osx10.11.2.sh
Last active May 28, 2022 03:02 — forked from Drakulix/mingw-w64-3.10-osx10.9.sh
Script to install a Mingw-w64 Cross-Compiler Suite on Mac OS X 10.11.2
#!/bin/sh
# dependencies
echo "Installing dependencies via Homebrew (http://brew.sh)"
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
brew update
brew tap homebrew/versions
@denji
denji / nginx-tuning.md
Last active March 27, 2025 12:41
NGINX tuning for best performance

NGINX Tuning For Best Performance

For this configuration you can use whichever web server you like; I decided to use nginx because it is the server I work with most.

Generally, a properly configured nginx can handle up to 400K to 500K requests per second (clustered); the most I have seen is 50K to 80K requests per second (non-clustered) at around 30% CPU load. Of course, that was on 2x Intel Xeon with Hyper-Threading enabled, but it can work without problems on slower machines.

Keep in mind that this config was used in a testing environment rather than in production, so you will need to find the best way to implement most of these features for your own servers.
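As a concrete starting point, here is a minimal nginx.conf sketch touching the directives this kind of tuning usually adjusts; the numbers are placeholders to benchmark and tune for your own hardware, not values prescribed by this guide:

worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 100000;    # raise the open-file limit per worker

events {
    worker_connections 4000;    # concurrent connections per worker
    use epoll;                  # efficient connection handling on Linux
    multi_accept on;            # accept as many new connections as possible at once
}

http {
    sendfile on;                # zero-copy file transfers
    tcp_nopush on;              # send headers and the start of a file in one packet
    tcp_nodelay on;             # don't delay small writes on keep-alive connections
    keepalive_timeout 30;       # reclaim idle keep-alive connections sooner
    access_log off;             # per-request disk I/O hurts under load
    gzip on;                    # trade CPU for bandwidth
}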