Dane Jensen danecjensen

🎯
Focusing
View GitHub Profile
@norman
norman / earthdistance.rb
Last active October 7, 2024 14:15
Geographic Searches With Postgres's Earthdistance and Cube Extensions
#!/usr/bin/env ruby
=begin
= Geographic Searches With Postgres's Earthdistance and Cube Extensions
This program shows how to easily create a Postgres database that uses the Cube
and Earthdistance extensions to perform fast queries on geographic data.
Briefly, the problem this code solves is "show me all places within 50
kilometers of New York City."
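The preview cuts off here; a minimal sketch of the core idea (not the gist itself), assuming a places table with name, lat, and lng columns and the pg gem:

require "pg"

# hypothetical connection and table; the gist builds its own database
conn = PG.connect(dbname: "earthdistance_demo")
conn.exec("CREATE EXTENSION IF NOT EXISTS cube")
conn.exec("CREATE EXTENSION IF NOT EXISTS earthdistance")

# All places within 50 km (50,000 m) of New York City (40.7128, -74.0060).
# earth_box builds a bounding cube that an index can use; earth_distance then
# filters the candidates down to the exact great-circle radius.
rows = conn.exec_params(<<~SQL, [40.7128, -74.0060, 50_000])
  SELECT name
  FROM places
  WHERE earth_box(ll_to_earth($1, $2), $3) @> ll_to_earth(lat, lng)
    AND earth_distance(ll_to_earth($1, $2), ll_to_earth(lat, lng)) < $3
SQL
rows.each { |r| puts r["name"] }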
@awafer
awafer / bmp.py
Created August 3, 2012 04:05 — forked from voidw0rd/bmp.py
Python bitmap
import struct, random, sys
from PIL import Image
class Bitmap(object):
    def __gen_radom_pixel__(self):
        # return a random 8-bit value for one channel
        return random.randint(0, 255)
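
# (sketch continuation, not part of the original gist) the imported PIL Image
# module can turn a grid of random pixels into a bitmap file for a quick test
if __name__ == '__main__':
    width, height = 64, 64  # arbitrary example size
    img = Image.new('L', (width, height))  # 'L' = 8-bit grayscale
    img.putdata([random.randint(0, 255) for _ in range(width * height)])
    img.save('random.bmp')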
@prognostikos
prognostikos / app.js
Created September 11, 2012 13:45
AngularJS & Rails
/**
* Angular needs to send the Rails CSRF token with each post request.
*
* Here we get the token from the meta tags (make sure <%= csrf_meta_tags %>
* is present in your layout.)
*/
angular.module('myapp',[]).
  // configure our http requests to include the Rails CSRF token
  config(["$httpProvider", function(p) {
    var m = document.getElementsByTagName('meta');
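    // (sketch continuation, not the original gist) find the csrf-token meta tag
    // Rails renders and send its value as the X-CSRF-Token header on every
    // request $http makes
    for (var i = 0; i < m.length; i++) {
      if (m[i].name === 'csrf-token') {
        p.defaults.headers.common['X-CSRF-Token'] = m[i].content;
      }
    }
  }]);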
@alexras
alexras / img_resize.py
Last active January 11, 2025 20:44
Nearest-neighbor image scaling with PIL
#!/usr/bin/env python
from PIL import Image
import sys
im = Image.open(sys.argv[1])
def scale_to_width(dimensions, width):
    height = (width * dimensions[1]) / dimensions[0]
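    # (sketch continuation, not the original gist) return the new size as
    # integers, since Image.resize expects integer dimensions
    return int(width), int(height)

# Nearest-neighbor scaling: Image.NEAREST copies the closest source pixel
# instead of interpolating, which keeps hard edges (useful for pixel art)
target_width = 300  # arbitrary example width
im.resize(scale_to_width(im.size, target_width), Image.NEAREST).save('resized.png')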
@willurd
willurd / web-servers.md
Last active June 14, 2025 07:39
Big list of http static server one-liners

Each of these commands will run an ad hoc http static server in your current (or specified) directory, available at http://localhost:8000. Use this power wisely.

Discussion on reddit.

Python 2.x

$ python -m SimpleHTTPServer 8000
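
The original list continues with many more one-liners; two widely known equivalents, added here for completeness:

Python 3.x

$ python3 -m http.server 8000

PHP (5.4+)

$ php -S 127.0.0.1:8000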
@cheeaun
cheeaun / image-processing-services.md
Last active March 3, 2025 01:50
3rd-party image processing/manipulation/upscaling/enlarging services
@chrisvfritz
chrisvfritz / index.html
Created November 18, 2014 19:22
Simplest possible HTML template
<!doctype html>
<html>
  <head>
    <title>This is the title of the webpage!</title>
  </head>
  <body>
    <p>This is an example paragraph. Anything in the <strong>body</strong> tag will appear on the page, just like this <strong>p</strong> tag and its contents.</p>
  </body>
</html>
@magician11
magician11 / listen-for-shopify-webhooks.js
Created November 21, 2016 08:06
How to listen to Shopify webhook event data with Node.js
/* eslint-disable no-console */
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
/*
Shopify issues an HTTP POST request.
- https://help.shopify.com/api/tutorials/webhooks#receive-webhook
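*/
// (sketch continuation, not the full gist) parse the JSON body Shopify posts
// and acknowledge it; the route path and port are arbitrary examples
app.use(bodyParser.json());

app.post('/webhooks/orders/create', (req, res) => {
  console.log('Webhook received:', req.body);
  res.sendStatus(200); // Shopify expects a 2xx acknowledgement
});

app.listen(3000, () => console.log('Listening for webhooks on port 3000'));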
@jbrown123
jbrown123 / bookmarklet.md
Last active March 6, 2025 16:07
A simple bookmarklet to help clip a webpage to a google doc

Bookmarklet to clip webpages to Google Docs

Below is a simple bookmarklet (see https://en.wikipedia.org/wiki/Bookmarklet) to make it easier to capture the content of a webpage into a Google Doc. It is similar to Evernote Web Clipper (see https://evernote.com/webclipper/), but much simpler and with less functionality.

This bookmarklet copies the URL and title of the current page. Since most browsers forbid directly manipulating the clipboard contents, it instead opens a pop-up with all the data highlighted and asks you to press Control-C (the keyboard copy shortcut) and then Enter. Once you press Enter it creates a new Google document for you in a new tab, where you can simply press Control-V (the keyboard paste shortcut) to paste in the URL and title.

If you press escape or hit 'cancel' in the popup, nothing happens and you return to your original webpage.

If you happen to have some text selected on the current webpage, this will be appended to the end of the pasted block. However,
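
A minimal sketch of the approach described above (not the author's gist; it uses window.prompt as the pop-up, and docs.new as the shortcut for a blank Google Doc is an assumption):

javascript:(function () {
  // gather the title, URL and any selected text on the current page
  var text = document.title + ' - ' + location.href;
  var selection = window.getSelection ? String(window.getSelection()) : '';
  if (selection) text += ' - ' + selection;
  // prompt() shows the data highlighted so Ctrl-C copies it; Enter returns the
  // string, while Escape/Cancel returns null and nothing else happens
  if (prompt('Press Ctrl-C to copy, then Enter to open a new Google Doc:', text) !== null) {
    window.open('https://docs.new', '_blank');   // assumed shortcut for a new blank document
  }
})();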

@msharp
msharp / aws-lambda-unzipper.py
Created February 6, 2017 22:52
unzip archive from S3 on AWS Lambda
import os
import sys
import re
import boto3
import zipfile
def parse_s3_uri(url):
    match = re.search('^s3://([^/]+)/(.+)', url)
    if match:
        return match.group(1), match.group(2)
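
# (sketch continuation, not the full gist) a hypothetical handler that downloads
# the archive to Lambda's writable /tmp directory and extracts it there; the
# event shape and paths are assumptions for illustration
def lambda_handler(event, context):
    bucket, key = parse_s3_uri(event['s3_uri'])
    local_zip = os.path.join('/tmp', os.path.basename(key))
    boto3.client('s3').download_file(bucket, key, local_zip)
    with zipfile.ZipFile(local_zip) as archive:
        archive.extractall('/tmp/unzipped')
    return {'extracted': os.listdir('/tmp/unzipped')}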