Alex Klibisz (alexklibisz)

alexklibisz / SassMeister-input-HTML.html
Created July 1, 2015 15:28
Generated by SassMeister.com.
<p>Test</p>
alexklibisz / tools - dpl - Dockerfile
Created December 27, 2015 09:02
Elastic Beanstalk deployment using the dpl Ruby gem ('-' represents a '/' in the file path)
FROM ruby:2.2.4
# Install apt dependencies in one layer so the package index stays fresh.
RUN apt-get update -y && apt-get install -y ruby-full git
# dpl performs the deployment; the aws-sdk gems are its AWS dependencies.
RUN gem install dpl aws-sdk aws-sdk-v1
RUN mkdir /usr/src/app
alexklibisz / Creating an Effective Firebase Backup Solution.md
Last active July 14, 2020 04:31
Creating an Effective Firebase Backup Solution

Preface and Problem

There is a project that I've spent the last two to three months working on that uses Firebase. The project includes a web app and an iOS app and focuses heavily on real-time user interaction. We've really enjoyed working with Firebase and the Firebase web and iOS SDKs. They make real-time programming much simpler than rolling our own data-syncing solution for the server and multiple client languages.

It will soon (in the next two weeks) be time to release the project, and we have no effective way to back up our data. Firebase offers a "private backups" feature on the "Bonfire" plan, but we obviously don't want to pay $150 / month until we absolutely have to. Until we upgrade to the Bonfire plan, we are forced to roll our own solution.

The Goal

Must-haves:

alexklibisz / resume-sample.md
Created February 12, 2016 19:33
markdown-resume

Alex Klibisz

alexklibisz / confidence-intervals.md
Last active June 26, 2022 04:47
Confidence Intervals
alexklibisz / demo.js
Created March 16, 2016 06:01
quick-and-dirty-multi-property-search-01
const restaurants = [
  {
    "name": "Copper Cellar",
    "food_type": "Burgers",
    "neighborhood": "UT Campus"
  },
  {
    "name": "Soccer Taco",
    "food_type": "Mexican",
    "neighborhood": "Market Square"
  }
];
alexklibisz / demo.js
Created March 16, 2016 06:02
quick-and-dirty-multi-property-search-02
function matchFilter(allItems, query, threshold) {
  // Create an array of properties that are defined in the query.
  // For the example, it will be [food_type, neighborhood]
  const properties = Object.keys(query)
    .filter(key => query[key].trim().length > 0);
  // Create a comparison string for the query item by concatenating the
  // queried values. For the example, it will be "MexicanMarket Square".
  const queryComp = properties.map(p => query[p]).join('');
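The preview stops before the actual comparison. A sketch of how the matching might finish, written in Python and using `difflib`'s similarity ratio in place of whatever distance the original gist used (the normalization and threshold semantics here are assumptions):

```python
# Sketch of the quick-and-dirty multi-property search from the snippet
# above. The similarity measure (difflib ratio) and the lowercase /
# strip-spaces normalization are assumptions; the original is truncated.
from difflib import SequenceMatcher

def match_filter(all_items, query, threshold):
    """Return items whose queried properties are similar to the query.
    Only properties with a non-empty query value are considered."""
    properties = [k for k, v in query.items() if v.strip()]
    # Comparison string for the query, lowercased with spaces removed.
    query_comp = "".join(query[p] for p in properties)
    query_comp = query_comp.replace(" ", "").lower()
    matches = []
    for item in all_items:
        item_comp = "".join(str(item.get(p, "")) for p in properties)
        item_comp = item_comp.replace(" ", "").lower()
        # Keep the item if its comparison string is close enough.
        if SequenceMatcher(None, query_comp, item_comp).ratio() >= threshold:
            matches.append(item)
    return matches
```

With the `restaurants` data above, a slightly misspelled query like `{"food_type": "Mxican", "neighborhood": "market square"}` still matches "Soccer Taco" at a 0.8 threshold.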
alexklibisz / vnet prototxt
Created April 8, 2017 13:53
vnet prototxt
input: "data"
input_shape { dim: 1 dim: 1 dim: 128 dim: 128 dim: 64 }
layer {
  name: "conv_in128_chan16"
  type: "Convolution"
  bottom: "data"
  top: "conv_in128_chan16"
  param {
    lr_mult: 1.0
alexklibisz / weighted_log_loss.py
Created April 11, 2017 20:30
Keras weighted log loss
def weighted_log_loss(yt, yp):
    '''Log loss that weights false positives or false negatives more.
    Punish the false negatives if you care about making sure all the neurons
    are found and don't mind some false positives. Vice versa for punishing
    the false positives. Concept taken from the UNet paper where they
    weighted boundary errors to get cleaner boundaries.'''
    emphasis = 'fn'
    assert emphasis in ['fn', 'fp']
    m = 2
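The gist preview is cut off after `m = 2`. A sketch of the weighting idea in plain NumPy, where the exact scheme, multiplying the emphasized error term by `m`, is an assumption based on the docstring (the original is a Keras backend implementation):

```python
# NumPy sketch of a binary log loss that punishes false negatives
# (emphasis='fn') or false positives (emphasis='fp') m times more.
# Multiplying the emphasized term by m is an assumption; the original
# gist's Keras implementation is truncated.
import numpy as np

def weighted_log_loss(yt, yp, emphasis='fn', m=2, eps=1e-7):
    """Binary cross-entropy where one error type is weighted m times more.
    yt: true labels in {0, 1}; yp: predicted probabilities in (0, 1)."""
    assert emphasis in ('fn', 'fp')
    yp = np.clip(yp, eps, 1 - eps)
    # -yt*log(yp) penalizes misses on positives (false negatives);
    # -(1-yt)*log(1-yp) penalizes hits on negatives (false positives).
    loss_pos = -yt * np.log(yp)
    loss_neg = -(1 - yt) * np.log(1 - yp)
    if emphasis == 'fn':
        loss_pos *= m
    else:
        loss_neg *= m
    return np.mean(loss_pos + loss_neg)
```

With `m=1` both branches reduce to ordinary binary cross-entropy, which makes the weighting easy to sanity-check.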
alexklibisz / unet.prototxt
Created April 14, 2017 17:40
unet prototxt
name: 'phseg_v5'
force_backward: true
layers { top: 'data' top: 'label' name: 'loaddata' type: HDF5_DATA hdf5_data_param { source: 'aug_deformed_phseg_v5.txt' batch_size: 1 } include: { phase: TRAIN }}
layers { bottom: 'data' top: 'd0b' name: 'conv_d0a-b' type: CONVOLUTION blobs_lr: 1 blobs_lr: 2 weight_decay: 1 weight_decay: 0 convolution_param { num_output: 64 pad: 0 kernel_size: 3 engine: CAFFE weight_filler { type: 'xavier' }} }
layers { bottom: 'd0b' top: 'd0b' name: 'relu_d0b' type: RELU }
layers { bottom: 'd0b' top: 'd0c' name: 'conv_d0b-c' type: CONVOLUTION blobs_lr: 1 blobs_lr: 2 weight_decay: 1 weight_decay: 0 convolution_param { num_output: 64 pad: 0 kernel_size: 3 engine: CAFFE weight_filler { type: 'xavier' }} }
layers { bottom: 'd0c' top: 'd0c' name: 'relu_d0c' type: RELU }
layers { bottom: 'd0c' top: 'd1a' name: 'pool_d0c-1a' type: POOLING pooling_param { pool: MA