Product: Sagitta Brutalis 1080 (PN S3480-GTX-1080-2697-128)
Software: Hashcat v3.00-beta-145-g069634a, Nvidia driver 367.18
Accelerator: 8x Nvidia GTX 1080 Founders Edition
The Federal Aviation Administration is posting PDFs of the Section 333 exemptions that it grants, i.e. the exemptions for operators who want to fly drones commercially before the FAA finishes its rulemaking. A journalist wanted to look for exemptions granted to operators in a given U.S. state. But the FAA doesn't appear to have an easy-to-read data file to use and doesn't otherwise list exemptions by location of operator.
However, since their exemptions page is just one giant HTML table listing the PDFs, we can use wget to fetch all the PDFs, run pdftotext on each file, and then [grep](https://medium.com/@rualthanzauva/grep-was-a-private-command-of-m) the resulting text for the state we're interested in.
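A rough sketch of what that pipeline might look like; the URL and the state are placeholders, not the FAA's actual page address:

```sh
# Fetch every PDF linked from the exemptions table (placeholder URL).
wget -r -l 1 -nd -A '*.pdf' 'https://www.faa.gov/...'

# pdftotext writes a .txt file next to each .pdf.
for f in *.pdf; do pdftotext "$f"; done

# List the exemptions that mention a given state (example: Colorado).
grep -l -i 'colorado' *.txt
```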
In a project I'm working on, I ran into the requirement of having some sort of persistent FIFO buffer or pipe in Linux, i.e. something file-like that can accept writes from one process and persist them to disk until a second process reads (and acknowledges) them. The persistence should hold across both process restarts and OS restarts.
AFAICT, such a primitive unfortunately does not exist in the Linux world (named pipes/FIFOs do not persist their contents to disk, let alone across reboots).
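For illustration, here is one way this is often approximated in userspace: a spool directory where each message is its own fsynced file, named by an increasing sequence number, and the consumer deletes a file only after processing it. This is a minimal sketch, not an existing API; spoolQueue, Put, Next, and Ack are hypothetical names, and recovering the sequence counter after a restart is left out.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

type spoolQueue struct {
	dir string
	seq uint64
}

// Put durably writes one message as its own file: write to a temp file,
// fsync it, then atomically rename it into place.
func (q *spoolQueue) Put(msg []byte) error {
	q.seq++
	tmp := filepath.Join(q.dir, fmt.Sprintf(".%016d.tmp", q.seq))
	name := filepath.Join(q.dir, fmt.Sprintf("%016d.msg", q.seq))
	f, err := os.Create(tmp)
	if err != nil {
		return err
	}
	if _, err := f.Write(msg); err != nil {
		f.Close()
		return err
	}
	if err := f.Sync(); err != nil { // survive an OS restart
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	return os.Rename(tmp, name) // atomic publish
}

// Next returns the oldest unacknowledged message and its file name.
func (q *spoolQueue) Next() (string, []byte, error) {
	names, err := filepath.Glob(filepath.Join(q.dir, "*.msg"))
	if err != nil {
		return "", nil, err
	}
	if len(names) == 0 {
		return "", nil, os.ErrNotExist
	}
	sort.Strings(names) // zero-padded names sort in FIFO order
	data, err := os.ReadFile(names[0])
	return names[0], data, err
}

// Ack removes a message once the consumer has finished with it.
func (q *spoolQueue) Ack(name string) error {
	return os.Remove(name)
}

func main() {
	q := &spoolQueue{dir: "/tmp/spool"}
	os.MkdirAll(q.dir, 0o755) // error handling elided for brevity
	q.Put([]byte("hello"))
	if name, msg, err := q.Next(); err == nil {
		fmt.Printf("got %q\n", msg)
		q.Ack(name)
	}
}
```

The write-fsync-rename dance is what gives the "acknowledged" semantics: a message either exists as a complete .msg file or not at all, and it stays on disk until the reader explicitly removes it.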
Never break backcompat, keep the API nimble
An extension of SemVer with a stricter (yet more realistic) backwards-compatibility guarantee that provides more flexibility to change the API, for libraries that are packaged and downloaded rather than services accessed remotely over the Internet (see Note 4).
# Ubuntu Server automated installation
# by Scott Lowe ([email protected])
d-i debian-installer/locale string en_US
d-i console-setup/ask_detect boolean false
d-i keyboard-configuration/layoutcode string us
d-i netcfg/choose_interface select eth0
d-i netcfg/get_hostname string hostname
d-i netcfg/get_domain string domain.com
d-i netcfg/wireless_wep string
github.com/twotwotwo/sorts is a Go package with parallel radix- and quicksorts. It can run up to 5x faster than stdlib sort on the right kind of large sort task, so it could be useful for analysis and indexing/database-y work in which you have to sort millions of items. (To be clear, I don't recommend most folks drop stdlib sort, which is great, and which sorts depends on.)
While the process of writing it is fresh in my mind, here are some technical details, some things that didn't make the cut, and some thoughts about the process:
Concretely, what this looks like inside:
Both number and string versions are in-place MSD radix sorts that look at a byte at a time and, once the range being sorted gets down to 128 items, call (essentially) the stdlib's quicksort.
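That core technique is sketchable from the description above. Below is a minimal, single-threaded illustration of an in-place MSD (byte-at-a-time) radix sort on uint64 keys that hands ranges of 128 or fewer elements to the standard library sort. This is not the package's actual code or API; msdRadixSort and the cutoff constant are hypothetical names, and the real package additionally parallelizes the work and handles string/[]byte keys.

```go
package main

import (
	"fmt"
	"sort"
)

const cutoff = 128

// msdRadixSort sorts a[lo:hi) by the byte selected by shift (56, 48, ..., 0).
func msdRadixSort(a []uint64, lo, hi, shift int) {
	if hi-lo <= cutoff || shift < 0 {
		s := a[lo:hi]
		sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
		return
	}

	// Count how many keys fall into each of the 256 buckets for this byte.
	var counts [256]int
	for i := lo; i < hi; i++ {
		counts[byte(a[i]>>uint(shift))]++
	}

	// Turn the counts into bucket boundaries.
	var starts, ends [256]int
	pos := lo
	for b := 0; b < 256; b++ {
		starts[b] = pos
		pos += counts[b]
		ends[b] = pos
	}

	// Permute in place (American-flag style): swap each key into its bucket.
	next := starts
	for b := 0; b < 256; b++ {
		for next[b] < ends[b] {
			dst := int(byte(a[next[b]] >> uint(shift)))
			if dst == b {
				next[b]++
				continue
			}
			a[next[b]], a[next[dst]] = a[next[dst]], a[next[b]]
			next[dst]++
		}
	}

	// Recurse into each bucket on the next, less significant byte.
	for b := 0; b < 256; b++ {
		if ends[b]-starts[b] > 1 {
			msdRadixSort(a, starts[b], ends[b], shift-8)
		}
	}
}

func main() {
	a := make([]uint64, 1000)
	for i := range a {
		a[i] = uint64(len(a) - i) // reverse-sorted input
	}
	msdRadixSort(a, 0, len(a), 56)
	fmt.Println(sort.SliceIsSorted(a, func(i, j int) bool { return a[i] < a[j] }))
}
```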
The [parallelization code
Fingerprint | Count
---|---
dc:14:de:8e:d7:c1:15:43:23:82:25:81:d2:59:e8:c0 | 245272
32:f9:38:a2:39:d0:c5:f5:ba:bd:b7:75:2b:00:f6:ab | 197846
d0:db:8a:cb:74:c8:37:e4:9e:71:fc:7a:eb:d6:40:81 | 152046
34:47:0f:e9:1a:c2:eb:56:eb:cc:58:59:3a:02:80:b6 | 140777
df:17:d6:57:7a:37:00:7a:87:5e:4e:ed:2f:a3:d5:dd | 91904
81:96:a6:8c:3a:75:f3:be:84:5e:cc:99:a7:ab:3e:d9 | 80499
7c:a8:25:21:13:a2:eb:00:a6:c1:76:ca:6b:48:6e:bf | 78172
1c:1e:29:43:d2:0c:c1:75:40:05:30:03:d4:02:d7:9b | 71851
8b:75:88:08:41:78:11:5b:49:68:11:42:64:12:6d:49 | 70786
c2:77:c8:c5:72:17:e2:5b:4f:a2:4e:e3:04:0c:35:c9 | 68654
# Hello, and welcome to makefile basics.
#
# You will learn why `make` is so great, and why, despite its "weird" syntax,
# it is actually a highly expressive, efficient, and powerful way to build
# programs.
#
# Once you're done here, go to
# http://www.gnu.org/software/make/manual/make.html
# to learn SOOOO much more.
I hereby claim:
To claim this, I am signing this object:
At Timeline Labs, we are continuously looking at new technologies to see what fits our needs. We are especially excited about Kubernetes from Google to manage our services atop Docker and CoreOS.
This process for installing Kubernetes on CoreOS uses Flannel for Kubernetes networking and should be cloud-provider agnostic. To deploy the Kubernetes master functionality into the cluster, it uses fleetctl.
Thanks to Kelsey Hightower and his blog posts! They served as a great starting point for this process.
Add the cloud config below to your own and bring up your cluster using a CoreOS version with Docker 1.3 (currently v472.0.0 in alpha). During that initial boot, the download-kubernetes and download-flannel units will download binaries from the latest project release and use those.
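As a rough illustration (not taken from the post itself), deploying the master units with fleetctl might look like the following; the unit file names are placeholders, not necessarily the names used in the cloud config:

```sh
# Confirm the CoreOS nodes have registered with the fleet cluster.
fleetctl list-machines

# Submit and start the Kubernetes master units (placeholder unit names).
fleetctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service

# Verify the units are loaded and running.
fleetctl list-units
```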