Thanks to the work in the first article, 《使用树莓派3B打造超强路由之一:初装》 ("Building a Super Router with a Raspberry Pi 3B, Part 1: Initial Setup"), the Raspberry Pi 3B can already serve as an ultra-low-power, carry-anywhere development server. But for our goal of building a super router, this is only the beginning. Next, we need to shape it into a basic wireless router.
WARNING
All commands in this article are for reference only; never copy and paste them blindly!
sudo apt-get update
sudo apt-get install build-essential chrpath libssl-dev libxft-dev -y
sudo apt-get install libfreetype6 libfreetype6-dev -y
sudo apt-get install libfontconfig1 libfontconfig1-dev -y
cd ~
export PHANTOM_JS="phantomjs-2.1.1-linux-x86_64"
wget https://github.com/Medium/phantomjs/releases/download/v2.1.1/$PHANTOM_JS.tar.bz2
sudo tar xvjf $PHANTOM_JS.tar.bz2
sudo mv $PHANTOM_JS /usr/local/share
sudo ln -sf /usr/local/share/$PHANTOM_JS/bin/phantomjs /usr/local/bin
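A quick sanity check once the symlink is in place:

phantomjs --version
# expected output: 2.1.1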
#!/usr/bin/python
# -*- coding: utf-8 -*-
import pprint
import subprocess

def get_processes():
    """
    Parse the output of `ps aux` into a list of dictionaries representing the parsed
    process rows, keyed by the column headers from the first line of output.
    """
    lines = subprocess.check_output(['ps', 'aux']).decode('utf-8').strip().split('\n')
    headers = lines[0].split()
    # the COMMAND column may contain spaces, so split each row into at most len(headers) fields
    return [dict(zip(headers, line.split(None, len(headers) - 1))) for line in lines[1:]]
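A quick way to exercise the function, using the pprint import above:

if __name__ == '__main__':
    pprint.pprint(get_processes())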
import re

from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

def regex_filter(x):
    regexs = ['.*ALLYOURBASEBELONGTOUS.*']
    if x and x.strip():
        for r in regexs:
            if re.match(r, x, re.IGNORECASE):
                return True
    return False

filter_udf = udf(regex_filter, BooleanType())
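A usage sketch, where the DataFrame df and its string column val are assumed placeholders:

df_filtered = df.filter(filter_udf(df.val))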
Nothing gives you more detail about Spark internals than actually reading its source code. In addition, you get to learn many design techniques and improve your Scala coding skills. These are random notes I made while reading the Spark code. The best way to follow them is to load the Spark source into an IDE, e.g. IntelliJ, and navigate the code alongside.
The scripts for creating a Spark cluster are start-master.sh and start-slave.sh. Read them carefully, and you can see that the two scripts are nearly identical except for the value of the $CLASS variable. For start-master.sh, the value is CLASS="org.apache.spark.deploy.master.Master", while the value for start-slave.sh is shown below with more context.
# NOTE: This exact class name is matched downstream by SparkSubmit.
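CLASS="org.apache.spark.deploy.worker.Worker"

So a master and a worker are launched through the same generic machinery; the only real difference between the two scripts is the main class they hand to the launcher.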
""" | |
A weighted version of categorical_crossentropy for keras (2.0.6). This lets you apply a weight to unbalanced classes. | |
@url: https://gist.github.com/wassname/ce364fddfc8a025bfab4348cf5de852d | |
@author: wassname | |
""" | |
from keras import backend as K | |
def weighted_categorical_crossentropy(weights): | |
""" | |
A weighted version of keras.objectives.categorical_crossentropy | |
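A usage sketch for a 3-class model; the weight values and the `model` object are placeholders:

import numpy as np
weights = np.array([0.5, 2.0, 10.0])  # e.g. up-weight rare classes
model.compile(loss=weighted_categorical_crossentropy(weights), optimizer='adam')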
# Redis Cheatsheet
# All the commands you need to know

redis-server /path/redis.conf  # start redis with the related configuration file
redis-cli                      # opens a redis prompt

# Strings.
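The excerpt breaks off after the Strings header; a few canonical string commands, added here as illustration:

SET mykey "Hello"        # set mykey to hold the string value "Hello"
GET mykey                # fetch the value stored at mykey
APPEND mykey " World"    # append to an existing string value
INCR counter             # atomically increment the integer stored at counter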
UPDATE: The instructions here are no longer necessary! Resizing the disk image is possible right from the UI as of Docker for Mac Version 17.12.0-ce-mac49 (21995).

If you are getting the error "No space left on device", configuring the qcow2 size cap is possible in current versions:
# my disk is currently 64GiB
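One common approach, sketched under the assumption that qemu is installed and Docker has been stopped first; the path below is the usual Docker for Mac location and may differ on your install:

cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux
qemu-img info Docker.qcow2          # reports the current virtual size, e.g. 64G
qemu-img resize Docker.qcow2 +16G   # raise the size cap by 16GiB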
FWIW: I (@rondy) am not the creator of the content shared here, which is an excerpt from Edmond Lau's book. I simply copied and pasted it from another location and saved it as a personal note before it gained popularity on news.ycombinator.com. Unfortunately, I cannot recall the exact origin of the original source, nor was I able to find the author's name, so I can't provide the appropriate credits.
#!/bin/bash
# Minimum TODOs on a per-job basis:
# 1. define name, application jar path, main class, queue and log4j-yarn.properties path
# 2. remove properties not applicable to your Spark version (Spark 1.x vs. Spark 2.x)
# 3. tweak num_executors, executor_memory (+ overhead), and backpressure settings
# the two most important settings:
num_executors=6
executor_memory=3g
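These variables feed a spark-submit call further down in the script, which is not part of this excerpt. A minimal sketch of what that call might look like, assuming YARN cluster mode and placeholder variables ($name, $main_class, $queue, $app_jar) for the items in TODO 1:

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --name "$name" \
  --class "$main_class" \
  --queue "$queue" \
  --num-executors $num_executors \
  --executor-memory $executor_memory \
  --conf spark.streaming.backpressure.enabled=true \
  --files log4j-yarn.properties \
  "$app_jar"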