@brews
brews / dask-test-workflow.yaml
Last active October 31, 2023 16:22
Argo Workflow demo to launch a Kubernetes dask distributed job
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dask-test-
spec:
  entrypoint: dask
  activeDeadlineSeconds: 1800  # Safety first, kids!
  templates:
  - name: dask
    script:
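
The preview cuts off at the script template. As a rough sketch of what such a template's Python body might do (not taken from the gist - the dask-kubernetes KubeCluster usage, the worker-spec.yaml file name, and the toy computation are all assumptions):

# Hypothetical script body: spin up dask workers on the cluster and run a job.
# KubeCluster.from_yaml and the worker-spec.yaml name are assumptions.
from dask_kubernetes import KubeCluster
from dask.distributed import Client
import dask.array as da

cluster = KubeCluster.from_yaml("worker-spec.yaml")  # launch worker pods
cluster.scale(3)                                     # request three workers

with Client(cluster):
    x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
    print(x.mean().compute())                        # computed on the workers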
@katef
katef / plot.awk
Last active November 20, 2024 23:27
#!/usr/bin/awk -f
# This program is a copy of guff, a plot device. https://github.com/silentbicycle/guff
# My copy here is written in awk instead of C and has no compelling benefit.
# Public domain. @thingskatedid
# Run as awk -v x=xyz ... or env variables for stuff?
# Assumptions: the data is evenly spaced along the x-axis
# TODO: moving average
# -*- coding: utf-8 -*-
""" Deletes all tweets below a certain retweet threshold.
"""
import tweepy
from datetime import datetime
# Constants
CONSUMER_KEY = ''
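
The preview stops at the first constant. A minimal sketch of how the rest might look with the tweepy 3.x API - the extra credential constants, the threshold value, and the timeline/delete loop are assumptions, not the gist's actual code:

# Hypothetical continuation; constants and threshold are placeholders.
import tweepy

CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_TOKEN = ''
ACCESS_TOKEN_SECRET = ''
RETWEET_THRESHOLD = 5  # delete tweets with fewer retweets than this

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

# Walk the authenticated user's timeline and delete low-engagement tweets.
for status in tweepy.Cursor(api.user_timeline).items():
    if status.retweet_count < RETWEET_THRESHOLD:
        api.destroy_status(status.id)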
@brews
brews / fillout_gcsfuse_bucket.py
Last active May 8, 2020 18:28
Python script to fill out missing directories in a GCS bucket so the directory tree can be read by gcsfuse.
#! /usr/bin/env python
# 2020-01-16
# Brewster Malevich <[email protected]>
"""Fill out missing directories in GCS bucket (TARGET_BUCKET), for gcsfuse.
Looks at blob names in Google Cloud Storage bucket to pull out all implied
directories. Then add a blob directory placeholder - an empty blob for each
directory that doesn't already have a blob. All directories should then
appear when mounted by gcsfuse.
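
The preview ends inside the docstring. A minimal sketch of the approach it describes, assuming the google-cloud-storage client library and a hypothetical bucket name (neither taken from the gist):

from google.cloud import storage

TARGET_BUCKET = "my-bucket"  # hypothetical bucket name

client = storage.Client()
bucket = client.bucket(TARGET_BUCKET)

# Collect every directory prefix implied by existing blob names.
blob_names = {b.name for b in client.list_blobs(TARGET_BUCKET)}
implied_dirs = set()
for name in blob_names:
    parts = name.split("/")[:-1]
    for i in range(1, len(parts) + 1):
        implied_dirs.add("/".join(parts[:i]) + "/")

# Upload an empty placeholder blob for each directory that lacks one,
# so the full tree shows up when the bucket is mounted with gcsfuse.
for d in sorted(implied_dirs - blob_names):
    bucket.blob(d).upload_from_string(b"")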
@brews
brews / backup_disks.sh
Last active December 2, 2020 21:22
Example bash script to create a basic Google Compute Engine (GCE) disk backup policy (snapshot schedule) and apply the policy to GCE disks in a zone.
#!/usr/bin/env bash
# 2020-01-02
# Create a basic GCE disk backup policy (snapshot schedule) and
# apply the policy to GCE disks in a zone.
set -e
TARGET_POLICY="backup-schedule"
ZONE="us-west1-a"
REGION="us-west1"
@brews
brews / environment.yml
Last active December 27, 2019 01:07
Conda environment file to create a py27 environment for the risingverse. Updates now hosted at https://anaconda.org/ClimateImpactLab/risingverse-py27
name: risingverse-py27
channels:
- conda-forge
- defaults
dependencies:
- click
- emcee
- gspread=3.1.0
- numpy>=1.14
- netCDF4
@brews
brews / GitHub-Forking.md
Created July 11, 2019 00:03 — forked from Chaser324/GitHub-Forking.md
GitHub Standard Fork & Pull Request Workflow

Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork and generate pull requests is essential. Unfortunately, it's quite easy to make mistakes or not know what you should do when you're initially learning the process. I know that I certainly had considerable initial trouble with it, and I found a lot of the information on GitHub and around the internet to be rather piecemeal and incomplete - part of the process described here, another there, common hangups in a different place, and so on.

In an attempt to collate this information for myself and others, this short tutorial is what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.

Creating a Fork

Just head over to the GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your repo or…

@brews
brews / fetchmeta.py
Last active July 22, 2025 08:55 — forked from jrsmith3/doi2bib.py
Python function to access the crossref.org DOI metadata resolver and return a bibliography entry as a dictionary or BibTeX string.
import requests
import json
def fetchmeta(doi, fmt='dict', **kwargs):
"""Fetch metadata for a given DOI.
Parameters
----------
doi : str
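
The preview cuts off in the docstring. One way the body could work is DOI content negotiation against doi.org via requests - a sketch under that assumption, not the gist's exact implementation (the helper name is hypothetical):

import requests

def fetchmeta_sketch(doi, fmt='dict'):
    """Return DOI metadata as a dict (CSL JSON) or a BibTeX string."""
    accept = {
        'dict': 'application/vnd.citationstyles.csl+json',
        'bibtex': 'application/x-bibtex',
    }[fmt]
    r = requests.get('https://doi.org/' + doi, headers={'Accept': accept})
    r.raise_for_status()
    return r.json() if fmt == 'dict' else r.text

# Example: fetchmeta_sketch('10.1038/nphys1170', fmt='bibtex')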
  1. Create an access token at "https://zenodo.org/account/settings/applications/tokens/new/" with the "deposit:write" and "deposit:actions" scopes, and keep it somewhere safe (we'll refer to this token as ZENODO_TOKEN)
  2. Create your deposit via the web interface at "https://zenodo.org/deposit/new", fill in the minimum metadata (title, authors, description, access rights and license) and click "Save".
  3. In your browser's URL bar you will now see the deposit ID, in the form "https://zenodo.org/deposit/<deposit ID>".
  4. The next step is to get the file upload URL. Via curl (or your HTTP client of preference) you can do:
$ # Store the Zenodo token in an environment variable
$ read -s ZENODO_TOKEN
$ curl "https://zenodo.org/api/deposit/depositions/222761?access_token=${ZENODO_TOKEN}"
{ ...  
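
For comparison, the same deposition lookup plus a follow-on file upload in Python with requests - a sketch that assumes the response exposes a links.bucket upload URL (Zenodo's newer files API) and uses a made-up local file name:

import os
import requests

ZENODO_TOKEN = os.environ['ZENODO_TOKEN']
DEPOSITION_ID = '222761'  # deposit ID from the curl example above

# Step 4 via requests: fetch the deposition and read its upload (bucket) URL.
r = requests.get(
    'https://zenodo.org/api/deposit/depositions/' + DEPOSITION_ID,
    params={'access_token': ZENODO_TOKEN},
)
r.raise_for_status()
bucket_url = r.json()['links']['bucket']  # assumes the bucket-style files API

# Upload a file by PUT-ing its contents to the bucket URL under a target name.
with open('data.csv', 'rb') as fp:  # hypothetical local file
    resp = requests.put(
        bucket_url + '/data.csv',
        data=fp,
        params={'access_token': ZENODO_TOKEN},
    )
resp.raise_for_status()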
@junpenglao
junpenglao / OOS_Predict_with_Missing.ipynb
Last active May 9, 2018 22:58
OOS Prediction for regression model with missing data