Chris Zubak-Skees (chriszs)

@samselikoff
samselikoff / authed-swr-provider.js
Created July 19, 2021 04:32
Example useAuth hook using Zustand, SWR and Suspense
// Firebase app + auth, GraphQL client, SWR cache config, and Zustand store helpers
import firebase from "firebase/app";
import "firebase/auth";
import { gql, GraphQLClient } from "graphql-request";
import { SWRConfig } from "swr";
import create from "zustand";
import { computed } from "zustand-middleware-computed-state";

// project-specific Firebase settings (elided in the gist)
const firebaseConfig = {
  //
};
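
The preview stops at the config, but a rough sketch of how such a store might be wired up follows (a guess at the shape, not samselikoff's actual code; useAuth and isLoading are illustrative names):

// initialize Firebase once, then mirror auth state into a Zustand store
firebase.initializeApp(firebaseConfig);

const useAuth = create((set) => ({
  user: null,
  isLoading: true,
}));

// keep the store in sync with Firebase's auth state
firebase.auth().onAuthStateChanged((user) => {
  useAuth.setState({ user, isLoading: false });
});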
@slifty
slifty / git-commit-edit-alias.md
Last active December 12, 2023 17:42
Git alias to edit a commit

A git alias to edit the content of a commit

This git alias allows you to remove specific changes from a past commit, placing those changes into your working directory and out of your git history.

For example, maybe a code reviewer has identified a few files or lines that belong in their own commit or pull request. This helps you do surgery on specific commits without needing to manually replay history.

git edit {commithash}

(e.g. git edit HEAD would edit the most recent commit, while git edit c52b7bfe12c2f6082a69ea339eeec95a20532fa5 would edit that specific commit)
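
The alias itself isn't shown in this preview, but one way to implement it is to pause an interactive rebase at the target commit and then reset that commit's changes into the working tree. A sketch under those assumptions (not necessarily slifty's exact alias):

# ~/.gitconfig
[alias]
    # rewrite the rebase todo list so the target commit is marked "edit",
    # pause there, then move its changes back into the working directory
    # (GNU sed; on macOS use sed -i '' instead)
    edit = "!f() { GIT_SEQUENCE_EDITOR='sed -i 1s/^pick/edit/' git rebase -i \"$1~1\" && git reset HEAD^; }; f"

From there, trim what doesn't belong, git add the rest, git commit, and run git rebase --continue to replay the remaining history.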

@sindresorhus
sindresorhus / esm-package.md
Last active November 20, 2024 12:29
Pure ESM package

The package that linked you here is now pure ESM. It cannot be require()'d from CommonJS.

This means you have the following choices:

  1. Use ESM yourself. (preferred)
    Use import foo from 'foo' instead of const foo = require('foo') to import the package. You also need to put "type": "module" in your package.json, among other changes. Follow the guide below.
  2. If the package is used in an async context, you could use await import(…) from CommonJS instead of require(…), as in the sketch after this list.
  3. Stay on the existing version of the package until you can move to ESM.
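
For instance, a minimal sketch of option 2, with 'foo' standing in for whatever package sent you here:

// consumer.cjs — CommonJS code loading a pure ESM dependency
async function main() {
  // dynamic import() is available from CommonJS and returns a promise
  const { default: foo } = await import("foo");
  foo();
}

main().catch(console.error);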
@tmcw
tmcw / optimization.md
Last active February 14, 2021 14:38
Optimization

Correctly prioritizing and targeting performance problems and optimization opportunities is one of the hardest things to master in programming. There are a lot of ways to do it wrong: by prematurely optimizing non-bottlenecks, or preferring fast solutions to clear solutions, or measuring problems incorrectly.

I'll try to summarize what I've learned about doing this right.

First, don't optimize until there's an issue. And issues should be defined as application issues: performance problems that are either detectable by users (lag) or endanger the platform, i.e. problems that cause downtime, like out-of-memory errors. Until there's an issue, don't think about performance at all: just solve the problem at hand, which is "creating value for the end-user," or some less-corporate translation of the same.

Second, only optimize with instruments. By instruments, I mean technology that lets you decipher which sub-part of the stack is the bottleneck. Let's say you see slowness around fet…
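
The preview cuts off there, but the point stands on its own: measure before optimizing. A minimal sketch of what that looks like in Node.js (fetchData is a hypothetical stand-in for the suspect call):

// time the suspected bottleneck instead of guessing at it
// (run inside an async function or an ES module for top-level await)
console.time("fetch");
const data = await fetchData(); // hypothetical slow call
console.timeEnd("fetch"); // logs e.g. "fetch: 1234.567ms"

If the measured time is negligible, the bottleneck is elsewhere; move the timers until the numbers point at the real culprit.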

from collections import Counter

import pandas as pd

# load the training set and group its rows by slug
df = pd.read_hdf('training.h5')
g = df.groupby('slug')

def get_sample(slug):
    # return every row belonging to the given slug
    # (.loc replaces the deprecated .ix indexer)
    return df.loc[g.groups[slug]]
@emanuelfeld
emanuelfeld / gi-lf
Last active April 24, 2017 14:26
As a pre-commit script, automatically adds files larger than a given size to your repository's .git/info/exclude file
#!/bin/bash
# set max file size to include (in MB)
max_size_mb=100
# find(1) size: the trailing "c" means the number is in bytes
max_size_b="$((max_size_mb * 1000000))c"
git_dir="$(git rev-parse --show-toplevel)"
git_exclude="$git_dir/.git/info/exclude"
# list files over the limit (skipping .git itself), strip the repo prefix,
# and backslash-escape spaces so the paths work as exclude patterns
files="$(find "$git_dir" -path "$git_dir/.git" -prune -o -type f -size +"$max_size_b" -print | sed "s%$git_dir/%%g" | sed "s/\ /\\\ /g")"
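
The preview is truncated; presumably the script finishes by appending those paths to the exclude file, along these lines (a guess at the missing tail, not the original):

# append the oversized paths (if any) to .git/info/exclude
[ -n "$files" ] && echo "$files" >> "$git_exclude"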
@thomaswilburn
thomaswilburn / index.js
Last active July 22, 2017 21:23
ASP page scraper with comments
// Built-in modules
var fs = require("fs");
var url = require("url");
// Loaded from NPM
var csv = require("csv"); // CSV parsing and stringifying
var $ = require("cheerio"); // jQuery-like DOM library
var async = require("async"); // Easier concurrency utils
var request = require("request"); // Make HTTP requests simply
@duner
duner / README.md
Last active April 28, 2022 19:48
Twitter Archive to JSON

If you download your personal Twitter archive, you don't quite get the data as JSON, but as a series of .js files, one for each month (these are meant to replicate the Twitter API responses for the front-end part of the downloadable archive).

But if you want to use the data in those files, which is far richer than the CSV data, for some analysis or app, just run this script.

Run sh ./twitter-archive-to-json.sh in the same directory as the /tweets folder that comes with the archive download, and you'll get two files:

  • tweets.json — a JSON list of the objects
  • tweets_dict.json — a JSON dictionary where each Tweet's key is its id_str

You'll also get a /json-tweets directory which has the individual JSON files for each month of tweets.
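
The script itself isn't shown here, but the core of such a conversion is small: each monthly .js file is a JavaScript assignment whose right-hand side is plain JSON, so stripping everything up to the = leaves valid JSON. A rough sketch under that assumption (not duner's actual script; the variable-name pattern and jq steps are illustrative):

# strip the leading "Grailbird.data.tweets_YYYY_MM =" assignment from each month
mkdir -p json-tweets
for f in tweets/*.js; do
  sed '1s/^[^=]*= *//' "$f" > "json-tweets/$(basename "$f" .js).json"
done
# concatenate the monthly arrays into one JSON list (requires jq)
jq -s 'add' json-tweets/*.json > tweets.json
# and build a dictionary keyed by each tweet's id_str
jq -s 'add | map({(.id_str): .}) | add' json-tweets/*.json > tweets_dict.json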

@mbostock
mbostock / .block
Last active November 13, 2016 21:45
U.S. Atlas, Redux [UNLISTED]
license: bsd-3-clause