Add with import <nixpkgs> {}; on top of default.nix, or, when nix-build complains about missing arguments, simply run the expression inline instead:

nix-build -E 'with import <nixpkgs> {}; callPackage ./default.nix {}'

(or even a plain import instead of callPackage)
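For reference, a minimal callPackage-style default.nix that this pattern applies to might look like the sketch below (the package name and install phase are placeholders, not from any real project); callPackage then supplies stdenv from nixpkgs automatically:

# Hypothetical default.nix: stdenv is a function argument, which is why a
# bare nix-build complains unless callPackage (or the with-import trick
# above) fills it in from nixpkgs.
{ stdenv }:

stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  src = ./.;

  installPhase = ''
    mkdir -p $out
    cp -r . $out/
  '';
}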
--- a/node_modules/jest-haste-map/build/crawlers/node.js 2019-05-10 13:03:51.000000000 -0300
+++ b/node_modules/jest-haste-map/build/crawlers/node.js 2019-05-10 13:10:49.000000000 -0300
@@ -165,7 +165,7 @@
 }
 function findNative(roots, extensions, ignore, callback) {
-  const args = Array.from(roots);
+  const args = ['-L'].concat(Array.from(roots));
   args.push('-type', 'f');
# Edit this configuration file to define what should be installed on | |
# your system. Help is available in the configuration.nix(5) man page | |
# and in the NixOS manual (accessible by running ‘nixos-help’). | |
{ config, pkgs, ... }: | |
{ | |
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];
When the stream hasn't started yet, you can still prep your page/app/embed if you know a YouTube channel's channel ID:
https://www.youtube.com/embed/live_stream?channel=[channel ID]
IT'S THAT EASY.
Context:
http://stackoverflow.com/questions/39204757/youtube-live-streaming-embed-code-keeps-changing/39235779#39235779
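For example, a page could drop the player in ahead of time with something like the sketch below (the helper name and the placeholder channel ID are mine, not from the answer above):

// Sketch: embed a channel's live stream before it goes live; the iframe
// simply points at the live_stream embed URL for that channel.
function embedLiveStream(channelId: string, container: HTMLElement): void {
  const iframe = document.createElement("iframe");
  iframe.src = `https://www.youtube.com/embed/live_stream?channel=${channelId}`;
  iframe.width = "640";
  iframe.height = "360";
  iframe.allowFullscreen = true;
  container.appendChild(iframe);
}

// Usage (placeholder channel ID):
// embedLiveStream("UCxxxxxxxxxxxxxxxxxxxxxx", document.body);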
file, err := os.Open(path)
if err != nil {
    return err
}
defer file.Close()

// Only the first 512 bytes are used to sniff the content type.
buffer := make([]byte, 512)
_, err = file.Read(buffer)
if err != nil {
    return err
}
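For context, here is a self-contained sketch of how that 512-byte buffer is typically used with http.DetectContentType from net/http; the detectContentType helper, its signature, and the placeholder file name are assumptions for illustration, not part of the original snippet:

package main

import (
    "fmt"
    "net/http"
    "os"
)

// detectContentType sniffs the MIME type of the file at path from its
// first 512 bytes, mirroring the snippet above.
func detectContentType(path string) (string, error) {
    file, err := os.Open(path)
    if err != nil {
        return "", err
    }
    defer file.Close()

    // http.DetectContentType never looks at more than 512 bytes.
    buffer := make([]byte, 512)
    n, err := file.Read(buffer)
    if err != nil {
        return "", err
    }

    // Pass only the bytes actually read so files shorter than 512 bytes
    // are sniffed correctly.
    return http.DetectContentType(buffer[:n]), nil
}

func main() {
    contentType, err := detectContentType("example.pdf") // placeholder path
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(contentType)
}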
SELECT
    tc.constraint_name, tc.table_name, kcu.column_name,
    ccu.table_name AS foreign_table_name,
    ccu.column_name AS foreign_column_name
FROM
    information_schema.table_constraints AS tc
    JOIN information_schema.key_column_usage AS kcu
      ON tc.constraint_name = kcu.constraint_name
    JOIN information_schema.constraint_column_usage AS ccu
      ON ccu.constraint_name = tc.constraint_name
WHERE
    tc.constraint_type = 'FOREIGN KEY';
    -- add: AND tc.table_name = 'your_table' to restrict the list to one table
TL;DR - smelly software engineer discusses using rethinkdb changefeeds for building caches, breaks hearts, shaves the cheerleader, shaves the world.
Let's talk about caches.
Imagine that you build UIs for an ecommerce company, possibly in a fancy office with free coffee and whatnot. You've just been asked to build a way for the marketing / sales folks to change landing pages whenever they're running campaigns. After a number of angry discussions involving the ux team about what they can and cannot change, you settle on a 'document' format for these pages. It could be json describing a tree of widgets of banners and carousels, or html, or yaml, or whatever. Maybe you also invent a dsl that marks out parts of the document as dynamic, based on request parameters or something. I dunno, I'm not your boss. You build a little ui over the weekend (with react? maybe!) that lets these folks login, drag and drop their banners, maybe upload an image or two, and save to database.
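Just to make that concrete, here's a completely made-up sketch of what such a document could look like; none of these field names come from a real tool, it's only to ground the idea:

// Made-up "landing page document": a tree of widgets the marketing folks edit.
interface Widget {
  type: "banner" | "carousel";
  image?: string;          // uploaded asset
  items?: Widget[];        // children, for carousels
  dynamicSource?: string;  // the DSL hook: filled in per request
}

interface LandingPage {
  slug: string;
  widgets: Widget[];
}

const summerSale: LandingPage = {
  slug: "summer-sale",
  widgets: [
    { type: "banner", image: "/assets/summer-hero.png" },
    {
      type: "carousel",
      items: [
        { type: "banner", image: "/assets/deal-1.png" },
        { type: "banner", image: "/assets/deal-2.png" },
      ],
    },
  ],
};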
Yo
A running example of the code from:
- http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang
- http://nesv.github.io/golang/2014/02/25/worker-queues-in-go.html
This gist creates a working example from the blog post, and an alternate example using a simple worker pool.
TLDR: if you want simple and controlled concurrency, use a worker pool.
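As a reference point, here is a minimal sketch of the worker-pool variant; this is not the code from either post, and the worker count, job type, and printed output are made up for illustration:

package main

import (
    "fmt"
    "sync"
)

// A fixed number of worker goroutines drain a shared jobs channel.
func main() {
    const numWorkers = 4
    jobs := make(chan int, 100)
    var wg sync.WaitGroup

    for w := 0; w < numWorkers; w++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for job := range jobs {
                // Replace this with the real work (e.g. an upload or an API call).
                fmt.Printf("worker %d processed job %d\n", id, job)
            }
        }(w)
    }

    for j := 0; j < 20; j++ {
        jobs <- j
    }
    close(jobs) // workers exit once the channel is drained
    wg.Wait()
}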
""" | |
This is a batched LSTM forward and backward pass | |
""" | |
import numpy as np | |
import code | |
class LSTM: | |
@staticmethod | |
def init(input_size, hidden_size, fancy_forget_bias_init = 3): |
package com.company.components;

import android.animation.Animator;
import android.animation.ObjectAnimator;
import android.animation.ValueAnimator;
import android.graphics.Canvas;
import android.graphics.ColorFilter;
import android.graphics.Paint;
import android.graphics.PixelFormat;
import android.graphics.Rect;