First add your Twitter username and password. Then run server.rb, and once it's started, open websocket.html in your browser. You should see some tweets appear. If not, take a look at the JavaScript console.
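A minimal run-through, assuming the filenames above (open is the macOS opener; elsewhere, just open the file from your browser):

$ ruby server.rb
$ open websocket.html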
require 'rubygems'
# sudo gem install httparty
require 'httparty'
# sudo gem sources -a http://gems.github.com && sudo gem install multipart-post
require 'net/http/post/multipart'
require 'mime/types'
require 'pp'

class Dropio
  include HTTParty
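The class is cut off at this point in the gist. As a rough sketch of where an HTTParty-based client like this typically goes next (the base_uri, the /drops path, and the api_key parameter are my assumptions about the Drop.io API, not part of the original):

class Dropio
  include HTTParty
  base_uri 'http://api.drop.io'  # assumed endpoint

  # Illustrative only: fetch a drop's metadata via HTTParty's class-level get
  def self.find_drop(name, api_key)
    get("/drops/#{name}", :query => { :api_key => api_key })
  end
end

pp Dropio.find_drop('mydrop', 'YOUR_API_KEY')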
=Navigating=
visit('/projects')
visit(post_comments_path(post))

=Clicking links and buttons=
click_link('id-of-link')
click_link('Link Text')
click_button('Save')
click('Link Text') # Click either a link or a button
click('Button Value')
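A few of these combined into one hypothetical Cucumber step, using only the calls listed above plus Webrat's standard fill_in helper (the route and form labels are made up):

When /^I create a project named "(.*)"$/ do |name|
  visit('/projects')
  fill_in('Name', :with => name)  # fill_in is part of Webrat's form API
  click_button('Save')
end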
#!/bin/sh
####################################
# Output file for HTML5 video      #
# Requirements:                    #
#  - handbrakecli                  #
#  - ffmpeg                        #
#  - ffmpeg2theora                 #
#                                  #
# usage:                           #
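The script body is cut off in this gist. A plausible core, given the three tools the header lists (filenames, the HandBrakeCLI binary name, and the 2010-era ffmpeg flag spellings -vcodec/-acodec are assumptions):

IN="$1"
BASE="${IN%.*}"

# H.264/MP4 via HandBrake's CLI
HandBrakeCLI -i "$IN" -o "$BASE.mp4"

# Ogg Theora via ffmpeg2theora
ffmpeg2theora "$IN" -o "$BASE.ogv"

# WebM (VP8/Vorbis) via ffmpeg
ffmpeg -i "$IN" -vcodec libvpx -acodec libvorbis "$BASE.webm"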
I was at Amazon for about six and a half years, and now I've been at Google for that long. One thing that struck me immediately about the two companies -- an impression that has been reinforced almost daily -- is that Amazon does everything wrong, and Google does everything right. Sure, it's a sweeping generalization, but a surprisingly accurate one. It's pretty crazy. There are probably a hundred or even two hundred different ways you can compare the two companies, and Google is superior in all but three of them, if I recall correctly. I actually did a spreadsheet at one point but Legal wouldn't let me show it to anyone, even though recruiting loved it.
I mean, just to give you a very brief taste: Amazon's recruiting process is fundamentally flawed by having teams hire for themselves, so their hiring bar is incredibly inconsistent across teams, despite various efforts they've made to level it out. And their operations are a mess; they don't really have SREs and they make engineers pretty much do everything, which leaves almost no time for coding - though again this varies by team, so it's luck of the draw.
# daily backup for 7 days
"/file/to/backup.log" {
    daily                  # rotate once per day
    copytruncate           # truncate the live log in place so the writing process keeps its file handle
    rotate 7               # keep the last 7 rotations
    compress
    compresscmd /bin/bzip2 # bzip2 instead of the default gzip
    compressext .bz2
}
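To preview what this config would do without actually rotating anything, logrotate's debug flag prints the planned actions (the config path here is an assumption):

$ logrotate -d /etc/logrotate.d/backup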
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                         0.5 ns
Branch mispredict                          5   ns
L2 cache reference                         7   ns                 14x L1 cache
Mutex lock/unlock                         25   ns
Main memory reference                    100   ns                 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy           3,000   ns      3 us
Send 1K bytes over 1 Gbps network     10,000   ns     10 us
Read 4K randomly from SSD*           150,000   ns    150 us       ~1GB/sec SSD
Each of these commands will run an ad hoc HTTP static server in your current (or specified) directory, available at http://localhost:8000. Use this power wisely.
$ python -m SimpleHTTPServer 8000
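The list is truncated here; two more equivalents I can vouch for (SimpleHTTPServer is Python 2; Python 3 renamed the module, and Ruby's bundled un library does the same trick):

$ python3 -m http.server 8000
$ ruby -run -e httpd . -p 8000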
RDBMS-based job queues have been criticized recently for being unable to handle heavy loads. And they deserve it, to some extent, because the queries used to safely lock a job have been pretty hairy. SELECT FOR UPDATE followed by an UPDATE works fine at first, but then you add more workers, and each is trying to SELECT FOR UPDATE the same row (and maybe throwing NOWAIT in there, then catching the errors and retrying), and things slow down.
On top of that, they have to actually update the row to mark it as locked, so the rest of your workers are sitting there waiting while one of them propagates its lock to disk (and to the disks of however many servers you're replicating to). QueueClassic got some mileage out of the novel idea of randomly picking a row near the front of the queue to lock, but I still can't seem to get more than an extra few hundred jobs per second out of it under heavy load.
So, many developers have started going straight to dedicated queues like Redis instead.
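For reference, a minimal sketch of the naive locking pattern criticized above, using the pg gem against an assumed jobs(id, locked) table (table, column, and database names are mine, not from QueueClassic or any other library):

require 'pg'

conn = PG.connect(dbname: 'myapp')  # assumed database

begin
  conn.exec('BEGIN')
  # Every worker contends for the same row at the front of the queue;
  # NOWAIT raises immediately if another worker already holds the lock.
  job = conn.exec(
    'SELECT id FROM jobs WHERE NOT locked ORDER BY id LIMIT 1 FOR UPDATE NOWAIT'
  ).first
  if job
    # This UPDATE is the lock propagation the prose complains about:
    # everyone else waits while it hits disk (and any replicas).
    conn.exec_params('UPDATE jobs SET locked = true WHERE id = $1', [job['id']])
  end
  conn.exec('COMMIT')
rescue PG::LockNotAvailable
  conn.exec('ROLLBACK')
  retry  # a real worker would back off here instead of spinning
end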
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns on recent CPU
L2 cache reference ........................... 7 ns 14x L1 cache
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs