will wade (willwade)
🌏 Working on dasher and a lot of Ace hardware projects
file = open("usb_hid_keys.h", "r")
for line in file:
    goodline = line[8:].split(" ")
    if line.startswith("#define ") and len(goodline) > 1:
        key = goodline[0]
        value = ""
        for i in range(1, len(goodline)):
            if goodline[i] == "":
                continue
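The preview cuts off mid-loop. As a point of reference only (not the gist's actual code), here is a minimal self-contained sketch of the same idea, assuming the usual `#define KEY_NAME 0x.. // comment` layout of usb_hid_keys.h:

# Hypothetical completion, not the original gist: parse "#define NAME VALUE ..." lines
# from usb_hid_keys.h into a {NAME: VALUE} dict.
def parse_hid_keys(path="usb_hid_keys.h"):
    keys = {}
    with open(path, "r") as f:
        for line in f:
            if not line.startswith("#define "):
                continue
            tokens = line[8:].split()          # split() with no argument also drops empty tokens
            if len(tokens) > 1:
                keys[tokens[0]] = tokens[1]    # keep the value, drop any trailing comment
    return keys

# With the common usb_hid_keys.h header, parse_hid_keys().get("KEY_A") would typically return "0x04".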
@fnurl
fnurl / docsearch-pageindexer.py
Last active January 15, 2020 07:07
A script that produces a JSON page index file for the markdown files (extension `.md`) in a directory and its subdirectories (e.g. a Hugo site's `content` directory, https://gohugo.io/) for use with Algolia DocSearch (https://github.com/algolia/docsearch).
import os
import sys
import yaml
import json
# base url to use
base_url = "http://localhost:1313"
# The attribute mapping for docsearch.
#
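The preview stops at the setup. A rough sketch of the core idea, assuming each markdown file carries YAML front matter and that the index records use a simple url/title/content shape (the gist's actual attribute mapping may differ):

# Hypothetical sketch, not the gist itself: walk a content directory, read each .md
# file's YAML front matter, and emit one JSON record per page for DocSearch-style indexing.
import os
import json
import yaml

def build_index(content_dir, base_url="http://localhost:1313"):
    records = []
    for root, _dirs, files in os.walk(content_dir):
        for name in files:
            if not name.endswith(".md"):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            front_matter, body = {}, text
            if text.startswith("---"):
                # front matter sits between the first two '---' markers
                parts = text.split("---", 2)
                if len(parts) == 3:
                    fm = yaml.safe_load(parts[1])
                    front_matter = fm if isinstance(fm, dict) else {}
                    body = parts[2]
            rel = os.path.relpath(path, content_dir).replace(os.sep, "/")
            records.append({
                "url": base_url + "/" + rel[:-3] + "/",   # assumed URL scheme
                "title": front_matter.get("title", name),
                "content": body.strip(),
            })
    return records

if __name__ == "__main__":
    print(json.dumps(build_index("content"), indent=2))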
@adeekshith
adeekshith / .git-commit-template.txt
Last active October 20, 2024 21:10 — forked from Linell/.git-commit-template.txt
This commit message template helps you write great commit messages and enforce a consistent format across teams.
# <type>: (If applied, this commit will...) <subject> (Max 50 char)
# |<---- Using a Maximum Of 50 Characters ---->|
# Explain why this change is being made
# |<---- Try To Limit Each Line to a Maximum Of 72 Characters ---->|
# Provide links or keys to any relevant tickets, articles or other resources
# Example: Github issue #23
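To actually pick the template up, each developer typically points git at the file with `git config --global commit.template ~/.git-commit-template.txt` (adjusting the path to wherever the file is saved).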
@domenic
domenic / 0-github-actions.md
Last active May 26, 2024 07:43
Auto-deploying built products to gh-pages with Travis

Auto-deploying built products to gh-pages with GitHub Actions

This is a setup for projects that want to check in only their source files, but have their gh-pages branch automatically updated with compiled output every time they push.

A file below this one contains the steps for doing this with Travis CI. However, these days I recommend GitHub Actions, for the following reasons:

  • It is much easier and requires fewer steps, because you are already authenticated with GitHub, so you don't need to share secret keys across services as you do when coordinating Travis CI and GitHub.
  • It is free, with no quotas.
  • Anecdotally, builds are much faster with GitHub Actions than with Travis CI, especially in terms of time spent waiting for a builder.
@rodricios
rodricios / summarize.py
Last active November 18, 2020 17:21
Flipboard's summarization algorithm, sort of
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
pip install networkx distance pattern
In Flipboard's article[1], they kindly divulge their interpretation
of the summarization technique called LexRank[2].
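The preview ends before the implementation. As a compact illustration of the LexRank idea the gist describes (not the gist's own code), this sketch uses plain Jaccard similarity between sentences in place of the `distance`/`pattern` helpers and networkx's PageRank for the centrality step:

# Rough LexRank-style sketch: build a sentence similarity graph and rank
# sentences by PageRank centrality, then return the top-scoring sentences.
import re
import networkx as nx

def jaccard(a, b):
    """Jaccard similarity between two sets of words."""
    return len(a & b) / float(len(a | b)) if (a | b) else 0.0

def summarize(text, n_sentences=3):
    # naive sentence split; the gist relies on the `pattern` library for this step
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [set(re.findall(r"\w+", s.lower())) for s in sentences]

    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            sim = jaccard(words[i], words[j])
            if sim > 0:
                graph.add_edge(i, j, weight=sim)

    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:n_sentences]
    return [sentences[i] for i in sorted(top)]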
@mattzuba
mattzuba / 03_s3fs.config
Created November 24, 2014 16:23
s3fs-fuse on AWS Elastic Beanstalk
packages:
  yum:
    gcc: []
    libstdc++-devel: []
    gcc-c++: []
    fuse: []
    fuse-devel: []
    libcurl-devel: []
    libxml2-devel: []
    openssl-devel: []
@t-paul
t-paul / 00info.txt
Last active March 3, 2019 01:51
openscad
Image link:
![My Image](https://gist.github.com/t-paul/7171783#file-openscad-freetype-4-png)
@postrational
postrational / gunicorn_start.bash
Last active April 4, 2024 12:48
Example of how to set up Django on Nginx with Gunicorn and supervisord: http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/
#!/bin/bash
NAME="hello_app" # Name of the application
DJANGODIR=/webapps/hello_django/hello # Django project directory
SOCKFILE=/webapps/hello_django/run/gunicorn.sock # we will communicate using this unix socket
USER=hello # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=hello.settings # which settings file should Django use
DJANGO_WSGI_MODULE=hello.wsgi # WSGI module name
@willwade
willwade / citeusync.py
Last active April 7, 2019 01:48
Inspired by the perl version, but not quite the same. Simply use it to download your bibtex file and attachments on a regular basis. NB: it only downloads a PDF if it is not already present, so at a minimum only 2 calls are made to CUL (login and download of the bibtex; obviously a lot more if downloading the PDFs). If more than one attachment - will only …
#!/usr/bin/env python
# Contact: Will Wade willwa.de
# Date: April 2013
# Needs mechanize and pybtex
#
# NB: Little error checking going on in this script
# TO-DO: Check whether the last-download date of the bibtex file is later than the last-modified date on CUL. Possible?
#
# With thanks to https://pypi.python.org/pypi/citeulike_api/0.1.3dev for the login part
import mechanize
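Only the imports survive in this preview. A hedged sketch of the login-and-download flow the description refers to, with the CiteULike URLs and form field names treated as assumptions rather than the gist's actual values:

# Hypothetical sketch, not the original script: log in with mechanize and save the
# account's BibTeX export. URL paths and form field names below are assumptions.
import mechanize

def download_bibtex(username, password, out_path="citeulike.bib"):
    br = mechanize.Browser()
    br.set_handle_robots(False)
    br.open("http://www.citeulike.org/login")    # assumed login URL
    br.select_form(nr=0)                         # assumes the first form on the page is the login form
    br["username"] = username                    # assumed field names
    br["password"] = password
    br.submit()
    # assumed per-user BibTeX export endpoint
    resp = br.open("http://www.citeulike.org/bibtex/user/%s" % username)
    with open(out_path, "wb") as f:
        f.write(resp.read())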
@dpapathanasiou
dpapathanasiou / text_grabber.py
Created October 27, 2012 15:18
How to extract just the text from html page articles
"""
A series of functions to extract just the text from html page articles
"""
from lxml import etree
default_encoding = "utf-8"
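# (Added illustration, not part of the original gist.) A generic helper along the
# same lines: parse the page with lxml and join the text of the nodes matched by
# an XPath expression, e.g. extract_text(html, "//p//text()").
def extract_text(html_text, xpath_expr="//p//text()", page_encoding=default_encoding):
    """Return the joined, stripped text of all nodes matching xpath_expr."""
    if isinstance(html_text, bytes):
        html_text = html_text.decode(page_encoding, errors="replace")
    root = etree.HTML(html_text)
    if root is None:
        return ""
    return "\n".join(t.strip() for t in root.xpath(xpath_expr) if t.strip())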
def newyorker_fp (html_text, page_encoding=default_encoding):
"""For the articles found on the 'Financial Page' section of the New Yorker's website