All instructions/links/versions are valid as of Jan 29, 2014.
Here is how I got up and running with the Xerial JDBC driver and libspatialite on a CentOS 6 x86_64 box.
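The JDBC wiring itself happens on the Java side, but before touching the driver it helps to confirm that libspatialite is present and loadable at all. The snippet below is my own hedged sanity check, not part of the original write-up: it assumes a Python build whose sqlite3 module permits extension loading, and the extension name ("mod_spatialite" vs. "libspatialite") varies by install.

# Hypothetical sanity check (not from the original write-up): confirm SpatiaLite loads into SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)        # requires sqlite3 built with extension-loading support
conn.load_extension("mod_spatialite")   # assumed name; older installs may expose "libspatialite"
print(conn.execute("SELECT spatialite_version()").fetchone())
conn.close()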
# Lua routines for use inside the Redis datastore
# HyperLogLog cardinality estimation
# ported from http://stackoverflow.com/questions/5990713/loglog-algorithm-for-counting-of-large-cardinalities
#
# Dan Brown, 2012. https://github.com/dbro
#
# note that Lua needs to have the bitlib and murmur3 modules built in, and loaded by Redis
#
# suitable for counting unique items from 0 to billions
# choose a k value to balance storage and precision objectives
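
As a plain illustration of the register/rank scheme and the k trade-off described above, here is a minimal pure-Python sketch of the same idea (my own simplification, not the Redis/Lua implementation: SHA-1 stands in for murmur3 and the small- and large-range corrections are omitted).

import hashlib

def hll_estimate(items, k=12):
    m = 1 << k                            # 2^k registers; relative error is roughly 1.04 / sqrt(m)
    registers = [0] * m
    for item in items:
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")  # 64-bit hash
        idx = h & (m - 1)                 # low k bits pick a register
        w = h >> k                        # remaining 64-k bits
        rank = (64 - k) - w.bit_length() + 1   # position of the leftmost 1-bit
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)      # bias-correction constant for m >= 128
    return alpha * m * m / sum(2.0 ** -r for r in registers)

print(round(hll_estimate(range(100000))))  # about 100000, typically within a couple of percent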
import math
import sys

DELTA = 1.5e-8
_INFINITY = 1e+308

class Vec:
    """A simple 3-D vector."""
    __slots__ = ('x', 'y', 'z')

    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x = float(x)
        self.y = float(y)
        self.z = float(z)
#!/usr/bin/env python
# Of course, the author does not guarantee safety.
# I did my best by using SQLite's online backup API.
from __future__ import print_function
import sys, ctypes
from ctypes.util import find_library

# SQLite result codes
SQLITE_OK = 0
SQLITE_ERROR = 1
SQLITE_BUSY = 5
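
An aside that is not part of the original script (which drives the C library through ctypes): on Python 3.7 and later the standard sqlite3 module exposes the same online backup API directly, so a minimal equivalent can be sketched as follows; the file names are only placeholders.

import sqlite3

def backup_db(src_path, dst_path):
    """Copy a live database using SQLite's online backup API."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        with dst:
            src.backup(dst)   # wraps sqlite3_backup_init/step/finish internally
    finally:
        src.close()
        dst.close()

backup_db("live.db", "backup.db")   # placeholder file names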
import ast

# app(name, *args) builds an ast.Call node representing name(arg1, arg2, ...)
app = lambda name, *args: \
    ast.Call(
        func=ast.Name(id=name, ctx=ast.Load(), lineno=0, col_offset=0),
        args=list(args), keywords=[], vararg=None,
        lineno=0, col_offset=0)

# abs(arg, body) builds the AST node for a one-argument lambda abstraction
abs = lambda arg, body: \
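
For context, and as my own sketch rather than anything from the gist (which appears to target an older ast.Call signature): a node built with a helper like app is normally wrapped in an ast.Expression, given source locations, compiled, and evaluated, roughly like this on Python 3.8+.

import ast

node = ast.Call(
    func=ast.Name(id="len", ctx=ast.Load()),
    args=[ast.Constant(value="abc")],
    keywords=[],
)
expr = ast.Expression(body=node)
ast.fix_missing_locations(expr)               # fills in lineno/col_offset automatically
print(eval(compile(expr, "<ast>", "eval")))   # -> 3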
# Makefile template for a shared library in C
# https://www.topbug.net/blog/2019/10/28/makefile-template-for-a-shared-library-in-c-with-explanations/

CC = gcc  # C compiler
CFLAGS = -fPIC -Wall -Wextra -O2 -g  # C flags
LDFLAGS = -shared  # linking flags
RM = rm -f  # rm command
TARGET_LIB = libtarget.so  # target lib

SRCS = main.c src1.c src2.c  # source files
# can split up a long URL into multiple lines in a file url.txt
# and then go to it by typing "ffue url.txt"
function ffue() {
  open -a Firefox "$(paste -s -d '\0' "$1")"
}
I was at Amazon for about six and a half years, and now I've been at Google for that long. One thing that struck me immediately about the two companies -- an impression that has been reinforced almost daily -- is that Amazon does everything wrong, and Google does everything right. Sure, it's a sweeping generalization, but a surprisingly accurate one. It's pretty crazy. There are probably a hundred or even two hundred different ways you can compare the two companies, and Google is superior in all but three of them, if I recall correctly. I actually did a spreadsheet at one point but Legal wouldn't let me show it to anyone, even though recruiting loved it.
I mean, just to give you a very brief taste: Amazon's recruiting process is fundamentally flawed by having teams hire for themselves, so their hiring bar is incredibly inconsistent across teams, despite various efforts they've made to level it out. And their operations are a mess; they don't really have SREs and they make engineers pretty much do everything, which leaves almost no time for coding - though again this varies by group, so it's luck of the draw.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004

Copyright (C) 2011 Tom Robinson <http://tlrobinson.net/>

Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.

DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
#!/usr/bin/env ruby

# MRISC: a tiny interpreter for a minimal instruction set.
class MRISC
  def run(code)
    # strip *comment* blocks and any stray characters, then split on ';' into instructions
    tokens = code.gsub(/(\*.*?\*)|[^a-z0-9,-;@\._]/,'').split(';')
    # _pc is the program counter, _oc counts executed operations
    @vars,stack,i = {:_pc=>-1,:_oc=>0},[],0
    # '@label' tokens record the index just before the labelled instruction;
    # everything else becomes an [op, operand, ...] array of symbols and integers
    tokens.map!{|t| t.chars.first=='@' ? (@vars[t.to_sym]=i-1;nil) : (i+=1;t.split(',').map{|e|numeric?(e) ? e.to_i : e.to_sym})}.compact!
    while @vars[:_pc] < tokens.size-1
      @vars[:_pc] += 1
      @vars[:_oc] += 1