It's now here, in The Programmer's Compendium. The content is the same as before, but being part of the compendium means that it's actively maintained.
Flame graphs are a nifty debugging tool for determining where CPU time is being spent. Using the Java Flight Recorder, you can do this for Java processes without adding significant runtime overhead.
Shivaram Venkataraman and I have found these flame recordings useful for diagnosing coarse-grained performance problems. We started using them at the suggestion of Josh Rosen, who quickly made one for the Spark scheduler when we were talking to him about why the scheduler caps out at a throughput of a few thousand tasks per second. Josh generated a graph similar to the one below, which illustrates that a significant amount of time is spent in serialization (if you click in the top right-hand corner and search for "serialize", you can see that 78.6% of the sampled CPU time was spent in serialization). We used this insight to speed up the scheduler.
Pop open "filter preferences" in adblock plus, and add the following rules to hide mentions from people who don't follow you (and who you don't follow).
For the interactions/notifications page:
twitter.com##.interaction-page [data-follows-you="false"][data-you-follow="false"]:not(.my-tweet)
For the mentions page:
twitter.com##.mentions-page [data-follows-you="false"][data-you-follow="false"]:not(.my-tweet)
Around 2006-2007, it was a bit of a fashion to hook lava lamps up to the build server. Normally, the green lava lamp would be on, but if the build failed, it would turn off and the red lava lamp would turn on.
By coincidence, around that time I met (probably) the first person to hook up a lava lamp to a build server: Alberto Savoia, who had founded a testing-tools company (one that did some very interesting work on generative testing that went largely unnoticed). Alberto had noticed that people didn't react with any urgency when the build broke. They'd check in broken code and move on to something else, only dealing with the breakage they'd caused when some other programmer pulled the change and ran into problems.
These come from @tsantero, with the last two additions courtesy of @ifesdjeen, in reply to this question from @SeanTAllen.
- Explain the life of an HTTP request.
- What does the FLP result teach us?
- What is a Byzantine failure?
- Explain CRDTs (a minimal G-counter sketch follows this list).
- Explain linearizability.
- How does DNS work?
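
For the CRDT question, a concrete example helps. Below is a minimal, illustrative sketch of a grow-only counter (G-counter), one of the simplest state-based CRDTs; it isn't tied to any particular library. Each replica increments only its own slot, and merging takes the element-wise maximum, so merges are commutative, associative, and idempotent, and replicas converge regardless of delivery order or duplication.

```python
class GCounter:
    """Grow-only counter CRDT: one slot per node, merge = element-wise max."""

    def __init__(self, node_id, num_nodes):
        self.node_id = node_id
        self.counts = [0] * num_nodes

    def increment(self, amount=1):
        # A replica only ever bumps its own slot.
        self.counts[self.node_id] += amount

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so replicas converge no matter how merges are ordered or repeated.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)


# Two replicas converge to the same total regardless of merge order.
a, b = GCounter(0, 2), GCounter(1, 2)
a.increment()
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 3
```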
```ruby
#!ruby
# Requirements:
#   brew install trash

casks_path = '/opt/homebrew-cask/Caskroom'

# Version numbers that sort component-by-component (e.g. "1.2.10" > "1.2.9").
class Version < Array
  def initialize(s)
    super(s.split('.').map { |e| e.to_i })
  end
end
```
```clojure
(defn mostly-small-nonempty-subset
  "Returns a subset of the given collection, with a logarithmically decreasing
  probability of selecting more elements. Always selects at least one element.

      (->> #(mostly-small-nonempty-subset [1 2 3 4 5])
           repeatedly
           (map count)
           (take 10000)
           frequencies
           sort)"
  [xs]
  ;; Pick a count in [1, (count xs)], skewed toward small counts, then take
  ;; that many elements of a shuffle.
  (-> (count xs) inc Math/log rand Math/exp long (take (shuffle xs))))
```
```
burp-2:foo jd$ python probabilistic_strategy.py 1000 16 64
request_count: 1000  cluster_size: 16  threshold: 64
probability of being quotad at a given local concurrency:
1  0.015070684079
2  0.07629533815
3  0.202571187171
4  0.378831226431
mean observed global concurrency limit: 64.033
```
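
The script itself isn't shown above, so take the following purely as an illustration of the shape of such a strategy; the rejection rule here is an assumption, not necessarily what `probabilistic_strategy.py` implements. The idea: a node that already holds `local` concurrent requests rejects (quotas) a new one with the probability that an even random spread of the global budget (`threshold` requests over `cluster_size` nodes) would have given it fewer than `local` requests.

```python
from math import comb

def quota_probability(local, cluster_size, threshold):
    """P(Binomial(threshold, 1/cluster_size) < local): the chance that an even
    random spread of the global budget would give this node fewer requests
    than it currently holds. (Assumed rule, for illustration only.)"""
    p = 1.0 / cluster_size
    return sum(comb(threshold, k) * p ** k * (1 - p) ** (threshold - k)
               for k in range(local))

if __name__ == "__main__":
    cluster_size, threshold = 16, 64   # same parameters as the run above
    for local in range(1, 5):
        print(local, round(quota_probability(local, cluster_size, threshold), 4))
```

The appeal of a rule shaped like this is that each node needs only its own counter plus the static cluster parameters, with no coordination; the exact probabilities it produces won't match the table above.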