start new:
tmux
start new with session name:
tmux new -s myname
This benchmark has been misleading for a while. It was originally made to demonstrate how JIT compilers can do all sorts of crazy stuff to your code - especially LuaJIT - and was meant to be a starting point for discussion about what exactly LuaJIT does and how.
As a result, it's not indicative of the performance you might see on more realistic data. Differences can be expected because
This tutorial guides you through creating your first Vagrant project.
We start with a generic Ubuntu VM, and use the Chef provisioning tool to:
Afterwards, we'll see how easy it is to package our newly provisioned VM
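As a preview of where we're headed (the box name below is a placeholder, not necessarily the one used in this tutorial), the whole workflow comes down to a few standard Vagrant commands:

    vagrant init hashicorp/precise64   # create a Vagrantfile pointing at an Ubuntu box
    vagrant up                         # boot the VM and run the Chef provisioner
    vagrant package                    # package the provisioned VM as a reusable box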
;SMBDIS.ASM - A COMPREHENSIVE SUPER MARIO BROS. DISASSEMBLY
;by doppelganger ([email protected])
;This file is provided for your own use as-is. It will require the character rom data
;and an iNES file header to get it to work.
;There are so many people I have to thank for this, that taking all the credit for
;myself would be an unforgivable act of arrogance. Without their help this would
;probably not be possible. So I thank all the peeps in the nesdev scene whose insight into
;the 6502 and the NES helped me learn how it works (you guys know who you are, there's no
module Alphabet (
    Alphabet,
    encodeWithAlphabet,
    decodeFromAlphabet
  ) where

import Prelude
import Data.List (elemIndex, mapAccumR)
import Data.Maybe (fromMaybe)
I visited with PagerDuty yesterday for a little Friday beer and pizza. While there, I got to talking about Go. Alex, their CEO, asked me why I liked it. Several other people have asked me the same question recently, so I figured it was worth posting.
Goroutines are the first half of Go's concurrency story: lightweight, concurrent function execution. You can spawn tons of them if needed, and the Go runtime multiplexes them onto the configured number of CPUs/threads as needed. They start with a super small stack that can grow (and shrink) via dynamic allocation (and freeing). Spawning one is as simple as go f(x), where f() is a function.
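Here is a minimal runnable sketch of that (the worker function and the loop count are made up for illustration); it spawns a few goroutines with go f(x) and uses a sync.WaitGroup from the standard library so main doesn't exit before they finish:

    package main

    import (
        "fmt"
        "sync"
    )

    // work stands in for any ordinary function you might hand to `go`.
    func work(id int, wg *sync.WaitGroup) {
        defer wg.Done()
        fmt.Println("goroutine", id, "done")
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go work(i, &wg) // spawning is just `go f(x)`
        }
        wg.Wait() // without this, main could exit before the goroutines run
    }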
user www-data;
# As a rule of thumb: one per CPU core. If you are serving a large amount
# of static files, which requires blocking disk reads, you may want
# to increase this beyond the number of CPU cores available on your
# system.
#
# The maximum number of connections for Nginx is calculated by:
#   max_clients = worker_processes * worker_connections
worker_processes 1;
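To make that formula concrete, here is a hedged sketch of the matching events block (1024 is only an illustrative value; worker_connections itself is a standard Nginx directive). With worker_processes 1 as above, max_clients = 1 * 1024 = 1024:

    events {
        # max_clients = worker_processes * worker_connections
        #             = 1 * 1024 = 1024 with the values used here
        worker_connections 1024;
    }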
Eric Bidelman has documented some of the common workflows possible with headless Chrome over in https://developers.google.com/web/updates/2017/04/headless-chrome.
If you're looking at this in 2016 and beyond, I strongly recommend investigating real headless Chrome: https://chromium.googlesource.com/chromium/src/+/lkgr/headless/README.md
Windows and Mac users might find Justin Ribeiro's Docker setup useful here while full support for these platforms is being worked out.
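As a quick taste (the URL is a placeholder, and the binary may be named chrome, google-chrome, or chromium depending on your platform), the flags covered in the write-ups above let you dump the rendered DOM or grab a screenshot straight from the command line:

    chrome --headless --disable-gpu --dump-dom https://example.com/
    chrome --headless --disable-gpu --screenshot https://example.com/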
#!/usr/bin/env python
"""
smem_test.py - test smem's USS/PSS/RSS values
More info on smem: http://www.selenic.com/smem/
Sample run:
# ./smem_test.py
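The rest of the script isn't shown here, but as a rough idea of what smem is reporting, here is a hypothetical companion sketch (not part of the original script) that totals RSS/PSS/USS for a process straight from /proc/<pid>/smaps, the same data smem reads; USS is the sum of the private (unshared) pages:

    #!/usr/bin/env python
    # Hypothetical sketch, not the original smem_test.py: total up
    # RSS/PSS/USS for one process from /proc/<pid>/smaps so the numbers
    # can be compared against what smem prints.
    import os

    def smaps_totals(pid):
        rss = pss = uss = 0
        with open("/proc/%d/smaps" % pid) as f:
            for line in f:
                fields = line.split()
                if not fields or not fields[0].endswith(":"):
                    continue  # skip the mapping header lines
                key, kb = fields[0][:-1], fields[1]
                if key == "Rss":
                    rss += int(kb)
                elif key == "Pss":
                    pss += int(kb)
                elif key in ("Private_Clean", "Private_Dirty"):
                    uss += int(kb)  # USS counts only private pages
        return rss, pss, uss

    if __name__ == "__main__":
        rss, pss, uss = smaps_totals(os.getpid())
        print("RSS: %d kB  PSS: %d kB  USS: %d kB" % (rss, pss, uss))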
Ideas are cheap. Make a prototype, sketch a CLI session, draw a wireframe. Discuss concrete examples, not hand-waving abstractions. Don't just say you did something; provide a URL that proves it.
Nothing is real until it's being used by a real user. This doesn't mean you make a prototype in the morning and blog about it in the evening. It means you find one person you believe your product will help and try to get them to use it.