GitHub supports several lightweight markup languages for documentation; the most popular ones (generally, not just at GitHub) are Markdown and reStructuredText. Markdown is often considered easier to use, and is usually preferred when the goal is simply to generate HTML. reStructuredText, on the other hand, is more extensible and powerful, with native markup (not just embedded HTML) for tables, as well as features such as automatic table-of-contents generation.
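For instance, here is a small illustrative reStructuredText snippet showing both of those native features, a table-of-contents directive and a simple table (the table contents are just an example):

```rst
.. contents:: Table of Contents
   :depth: 2

=========  =================
Feature    Native in reST?
=========  =================
Tables     Yes
TOC        Yes
=========  =================
```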
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                 14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us     ~1GB/sec SSD
import com.twitter.finagle.http.path._
import com.twitter.finagle.http.service.RoutingService
import com.twitter.finagle.http.{Request, Response, RichHttp, Http}
import com.twitter.finagle.{Service, SimpleFilter}
import org.jboss.netty.handler.codec.http._
import org.jboss.netty.handler.codec.http.HttpResponseStatus._
import org.jboss.netty.handler.codec.http.HttpVersion.HTTP_1_1
import org.jboss.netty.buffer.ChannelBuffers.copiedBuffer
import org.jboss.netty.util.CharsetUtil.UTF_8
import com.twitter.util.Future
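These imports set up a Netty-3-era Finagle HTTP server, but the excerpt stops at the imports. Below is a minimal sketch of how they are typically wired together, assuming the pre-Finagle-6 `RoutingService`/`ServerBuilder` API; `EchoService`, the route, and the port are all illustrative, and the two extra imports are mine:

```scala
import java.net.InetSocketAddress
import com.twitter.finagle.builder.ServerBuilder

object EchoServer {
  // A trivial service that echoes back the message captured from the URL path.
  class EchoService(message: String) extends Service[Request, Response] {
    def apply(request: Request): Future[Response] = {
      val response = Response(HTTP_1_1, OK)
      response.setContent(copiedBuffer(message, UTF_8))
      Future.value(response)
    }
  }

  def main(args: Array[String]): Unit = {
    // Route requests by destructuring the path; e.g. GET /echo/hello -> EchoService("hello").
    val router = RoutingService.byPathObject {
      case Root / "echo" / message => new EchoService(message)
    }

    ServerBuilder()
      .codec(RichHttp[Request](Http()))
      .bindTo(new InetSocketAddress(8080))
      .name("echoserver")
      .build(router)
  }
}
```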
- https://dancres.github.io/Pages/
- https://ferd.ca/a-distributed-systems-reading-list.html
- http://the-paper-trail.org/blog/distributed-systems-theory-for-the-distributed-systems-engineer/
- https://github.com/palvaro/CMPS290S-Winter16/blob/master/readings.md
- http://muratbuffalo.blogspot.com/2015/12/my-distributed-systems-seminars-reading.html
- http://christophermeiklejohn.com/distributed/systems/2013/07/12/readings-in-distributed-systems.html
- http://michaelrbernste.in/2013/11/06/distributed-systems-archaeology-works-cited.html
- http://rxin.github.io/db-readings/
- http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html
- http://pdos.csail.mit.edu/dsrg/papers/
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext
import scala.concurrent.Future
import akka.pattern.after
import akka.actor.Scheduler
/**
 * Given an operation that produces a T, returns a Future containing the result of T, unless an exception is thrown,
 * in which case the operation will be retried after _delay_ time, as long as there are retries remaining, which is
 * configured through the _retries_ parameter. If the operation does not succeed and there are no retries left, the
 * resulting Future will contain the last failure.
 */
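The excerpt is cut off right after the scaladoc; the implementation it describes is essentially a one-liner built on Akka's `after` pattern. A sketch, with the signature inferred from the comment and the imports above:

```scala
def retry[T](op: => T, delay: FiniteDuration, retries: Int)(implicit ec: ExecutionContext, s: Scheduler): Future[T] =
  Future(op) recoverWith {
    // Only retry on failure while attempts remain; otherwise the last failure propagates.
    case _ if retries > 0 => after(delay, s)(retry(op, delay, retries - 1))
  }
```

Called as, say, `retry(riskyCall(), 250.millis, 3)` (`riskyCall` being whatever by-name operation you want retried), it makes up to four attempts in total, spaced `delay` apart.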
Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork and generate pull requests is essential. Unfortunately, it's easy to make mistakes, or simply not to know what to do, when you're first learning the process. I certainly had trouble with it at first, and I found much of the information on GitHub and around the internet to be piecemeal and incomplete: part of the process described here, another part there, common hangups somewhere else entirely.
In an attempt to collate this information for myself and others, this short tutorial covers what I've found to be a fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.
Just head over to the GitHub page and click the "Fork" button. It's that simple. Once you've done that, you can use your favorite git client to clone your repo, or jump straight to the command line, as in the sketch below.
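A minimal command-line version of that clone step, plus an `upstream` remote so you can pull in changes from the original project later (both URLs are placeholders):

```sh
# Clone your fork locally (placeholder URL):
git clone https://github.com/YOUR-USERNAME/FORKED-REPO.git
cd FORKED-REPO

# Add the original repository as a remote named "upstream":
git remote add upstream https://github.com/ORIGINAL-OWNER/ORIGINAL-REPO.git

# Later, fetch upstream changes to keep your fork in sync:
git fetch upstream
```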
# first:
lsbom -f -l -s -pf /var/db/receipts/org.nodejs.pkg.bom | while read f; do sudo rm /usr/local/${f}; done
sudo rm -rf /usr/local/lib/node /usr/local/lib/node_modules /var/db/receipts/org.nodejs.*
# To recap, the best way (I've found) to completely uninstall node + npm is to do the following:
# go to /usr/local/lib and delete any node and node_modules
cd /usr/local/lib
sudo rm -rf node*
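If you want to confirm the uninstall actually took, a quick check (both commands should print nothing once node is fully removed):

```sh
command -v node   # no output means node is gone from your PATH
command -v npm
```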
## TCP FLAGS ##
Unskilled Attackers Pester Real Security Folks
==============================================
TCPDUMP FLAGS
Unskilled = URG = (Not Displayed in Flag Field, Displayed elsewhere)
Attackers = ACK = (Not Displayed in Flag Field, Displayed elsewhere)
Pester = PSH = [P] (Push Data)
Real = RST = [R] (Reset Connection)
Security = SYN = [S] (Start Connection)
Folks = FIN = [F] (Finish Connection)
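To put the mnemonic to work, pcap filter syntax lets you match on these flag bits directly. Two one-liners (the interface name is illustrative):

```sh
# Show only connection openings (SYN set):
tcpdump -i eth0 'tcp[tcpflags] & tcp-syn != 0'

# Show connection openings and teardowns (SYN or FIN set):
tcpdump -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-fin) != 0'
```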
import scalaz.\/
import scalaz.syntax.either._

object Example2 {
  // This example simulates error handling for a simple three tier web application
  //
  // The tiers are:
  // - the HTTP service
  // - a user authentication layer
  // - a database layer
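The excerpt cuts off here, so the following is a self-contained sketch (all names are invented) of how the three tiers might compose with `\/`: each layer returns `AppError \/ A`, and the for-comprehension short-circuits on the first left:

```scala
import scalaz.\/
import scalaz.syntax.either._

object ThreeTierSketch {
  // A hypothetical error type shared by all three tiers:
  sealed trait AppError
  case object Unauthorized extends AppError
  case object NotFound extends AppError

  // Authentication layer: turn a token into a user, or fail.
  def authenticate(token: String): AppError \/ String =
    if (token == "secret") "alice".right else Unauthorized.left

  // Database layer: look up a record for the user, or fail.
  def fetchProfile(user: String): AppError \/ String =
    if (user == "alice") "alice's profile".right else NotFound.left

  // HTTP layer: compose the tiers; the first left short-circuits the rest.
  def handleRequest(token: String): AppError \/ String =
    for {
      user    <- authenticate(token)
      profile <- fetchProfile(user)
    } yield profile
}
```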
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once, to be merged and aggregated in interesting ways, and output can be written to many different sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.
The scalaz-stream project is an attempt to make it easy to construct, test and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just the notion of some number of discrete values being sequentially pulled out of some unspecified data source. On top of this abstraction, scalaz-stream provides combinators for transforming those streams and wiring them to sources and sinks.
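For a flavor of the API, here is a lightly adapted version of the canonical scalaz-stream example: an incremental file-to-file pipeline in which nothing executes until the final `Task` is run (the file paths are illustrative):

```scala
import scalaz.concurrent.Task
import scalaz.stream._

object StreamExample {
  def fahrenheitToCelsius(f: Double): Double = (f - 32.0) * (5.0 / 9.0)

  // The whole pipeline is a description of a computation;
  // no file is opened and no line is read until it is run.
  val converter: Task[Unit] =
    io.linesR("testdata/fahrenheit.txt")           // source: lazily read lines
      .filter(line => !line.trim.isEmpty)          // drop blank lines
      .map(line => fahrenheitToCelsius(line.toDouble).toString)
      .intersperse("\n")
      .pipe(text.utf8Encode)                       // String => bytes
      .to(io.fileChunkW("testdata/celsius.txt"))   // sink: write the output file
      .run                                         // a Task, still unexecuted

  def main(args: Array[String]): Unit =
    converter.run                                  // effects happen here, incrementally
}
```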