brian piercy bjpcjp

💭
Fully caffeinated for your safety
View GitHub Profile

The Art of Profitability

Original notes by James Clear

  • Do the math yourself. Too many people take numbers from unreliable sources.

  • There are 4 levels of learning: Awareness, Awkwardness, Application, Assimilation

  • Customer-Solution Profit: Know your customers incredibly well and create a solution specifically for them.

@bjpcjp
bjpcjp / dash-hello-world.py
Created February 20, 2019 02:10
$ python app.py -- launches a basic example of the Dash web app framework. View the result on localhost, port 8050.
import dash
import dash_core_components as dcc
import dash_html_components as html

external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)

app.layout = html.Div(children=[
    html.H1(children='Hello Dash'),
])

if __name__ == '__main__':
    app.run_server(debug=True)  # serves on port 8050 by default
@bjpcjp
bjpcjp / housingScrape.py
Created February 8, 2018 22:57 — forked from theriley106/housingScrape.py
Scraping Valid Addresses from all US ZipCodes
# Note: the original gist's Python 2 workaround (reload(sys) +
# sys.setdefaultencoding("utf-8")) is removed; Python 3 handles UTF-8 natively.
import requests
import bs4
import zipcode
import threading
import re
import json
import time
@bjpcjp
bjpcjp / main.r
Created February 8, 2016 02:46
Build your own neural network classifier in R (source: http://junma5.weebly.com/data-blog)
# How to build your own NN classifier in r
# source: http://www.r-bloggers.com/build-your-own-neural-network-classifier-in-r/
# reference: http://junma5.weebly.com/data-blog/build-your-own-neural-network-classifier-in-r
# project:
#  1) build a simple NN with 2 fully-connected layers
#  2) use the NN to classify a 4-class 2D dataset & visualize the decision boundary
#  3) train the NN on the MNIST dataset
# ref: Stanford CS231n: http://cs231n.github.io/
# --- separate snippet: CausalImpact / MarketMatching setup ---
# source: http://multithreaded.stitchfix.com/blog/2016/01/13/market-watch/
# https://github.com/klarsen1/MarketMatching
# from the terminal command line:
#   sudo apt-get install libcurl4-openssl-dev
# from the R command line, install the CausalImpact package:
install.packages("devtools")
devtools::install_github("google/CausalImpact")
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        # yield the absolute URL of each question on the page
        for href in response.css('.question-summary h3 a::attr(href)'):
            yield {'url': response.urljoin(href.extract())}
1) Make a download dir to store the node source and download it:

mkdir downloads
cd downloads
git clone https://github.com/joyent/node.git
cd node

2) Find the latest version: list all of the tags in the repository, and check out the most recent:

git tag
git checkout v0.9.9
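Eyeballing the output of `git tag` is error-prone because plain sorting puts v0.9.10 before v0.9.9. A minimal sketch of automating step 2 with git's version-order sort (the repo and tag names here are illustrative, not the real node tags):

```shell
# Build a throwaway repo with a few tags, then check out the newest one.
demo=$(mktemp -d)
cd "$demo" && git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m init
for v in v0.9.2 v0.9.9 v0.9.10; do git tag "$v"; done

# -v:refname sorts tags as versions, newest first (plain sort would
# wrongly rank v0.9.9 above v0.9.10)
latest=$(git tag --sort=-v:refname | head -n 1)
git checkout -q "$latest"
echo "$latest"
```

The same `git tag --sort=-v:refname | head -n 1` line works in any cloned repository, including the node checkout above.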