
const nrf = require('nrf');
const SPI_DEV = '/dev/spidev0.0';
const CE_PIN = 25;
const IRQ_PIN = 24;
const CHANNEL = 0x4c;
const DATA_RATE = '250kbps';
const CRC_BYTES = 1;
const TX_POWER = 'PA_MAX';
/**
 * attiny85 pin map:
 *                             +-\/-+
 *                    NC PB5  1|o   |8 Vcc --- nRF24L01 3v3,  pin2
 * nRF24L01 CE,  pin3 --- PB3 2|    |7 PB2 --- nRF24L01 SCK,  pin5
 * nRF24L01 CSN, pin4 --- PB4 3|    |6 PB1 --- nRF24L01 MOSI, pin6
 * nRF24L01 GND, pin1 --- GND 4|    |5 PB0 --- nRF24L01 MISO, pin7
 *                             +----+
 */

Notes for Chromebook keyboard settings

Make the search key an additional Escape

The Chromebook search key sends keycode 133, i.e. LWIN. Using XKB, it can be mapped as an additional Escape key (useful for vim users).

Create the file /usr/share/X11/xkb/symbols/chromebook_opts with the following:

// Make LWIN an additional Escape

hidden partial modifier_keys
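The file contents above are cut off after the header lines. A sketch of what a complete symbols file for this could look like (the `search_escape` map name and the exact key definition are assumptions, not the original gist body):

```
// Make LWIN an additional Escape
hidden partial modifier_keys
xkb_symbols "search_escape" {
    key <LWIN> { [ Escape ] };
};
```

To take effect, a map like this is normally referenced from an option in the XKB rules (e.g. rules/evdev), or it can be tried out by compiling a keymap that includes it with xkbcomp.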

bendavis78 / dabblet.css (created February 21, 2017 19:19)
h1 {
border: 1px solid black;
}
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "formatter": "simple",
            "level": "INFO",
            "class": "logging.StreamHandler"
        },
        "logfile": {
            "formatter": "basic",
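A dict in this shape is applied with `logging.config.dictConfig`. A minimal runnable sketch (the `simple` formatter definition and the root-logger wiring are assumptions, since the gist is cut off before those sections):

```python
import logging
import logging.config

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        # Assumed definition; the original "simple"/"basic" formats are not shown.
        "simple": {"format": "%(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {
            "formatter": "simple",
            "level": "INFO",
            "class": "logging.StreamHandler",
        },
    },
    # Route everything through the console handler.
    "root": {"handlers": ["console"], "level": "INFO"},
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger("myapp")
logger.info("logging configured")
```

In a Django project the same dict would simply live in settings.py as `LOGGING`; Django calls `dictConfig` on it at startup.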
bendavis78 / express.md (last active July 22, 2016 15:51)
Getting started with Express and Nunjucks in Cloud9

Installation

Start with the "Blank" Cloud9 template. When it finishes loading, open the terminal and install Yeoman and the Express generator:

npm install -g yo generator-express

Then run the following command to generate the project files:

yo express

Select the "Basic" option, as well as the "Nunjucks" option for the view engine.

namespace :scraper do
  task scrape: :environment do
    # the url we want to scrape
    url = "http://www.amazon.com/s/ref=lp_2619525011_nr_n_5?fst=as%3Aoff&rh=n%3A2619525011%2Cn%3A%212619526011%2Cn%3A2686328011&bbn=2619526011&ie=UTF8&qid=1462292509&rnid=2619526011"
    # get the raw HTML content
    response = HTTParty.get url
    html = response.body
    # get the root document so we can parse it using CSS selectors
    document = Nokogiri::HTML(html)
  end
end
//---- Game utility functions -------------------
function getBounds(id) {
  var x = getXPosition(id);
  var y = getYPosition(id);
  var w = parseInt(getAttribute(id, "width"), 10);
  var h = parseInt(getAttribute(id, "height"), 10);
  return {
    left: x,
    top: y,
    right: x + w,
    bottom: y + h
  };
}
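Bounds objects in this {left, top, right, bottom} shape are typically fed to an axis-aligned overlap test for collision detection. A sketch (the `intersects` helper and the sample bounds are illustrative additions, not part of the original game code):

```javascript
// Hypothetical helper: axis-aligned bounding-box overlap test.
// Two boxes overlap when each one starts before the other ends on both axes.
function intersects(a, b) {
  return a.left < b.right && a.right > b.left &&
         a.top < b.bottom && a.bottom > b.top;
}

// Sample bounds in the same shape getBounds() returns:
var player = { left: 0,  top: 0, right: 10, bottom: 10 };
var enemy  = { left: 5,  top: 5, right: 15, bottom: 15 };
var wall   = { left: 20, top: 0, right: 30, bottom: 10 };

console.log(intersects(player, enemy)); // true
console.log(intersects(player, wall));  // false
```

In the game loop this would be called as `intersects(getBounds(idA), getBounds(idB))` for each pair of sprites that can collide.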