First, run the command below so that pg_config becomes available:
brew install postgresql
Then link openssl:
brew link openssl
#!/usr/bin/env bash
set -ex
PWD=$(pwd)
mkfifo /tmp/servicelogs.pipe || echo "pipe already exists"
runningPIDs=()
exitfn () {
  # stop every service we started before exiting
  for pid in "${runningPIDs[@]}"; do
    kill "$pid" 2>/dev/null || true
  done
}
trap exitfn EXIT
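The exitfn handler above only does its job if it is registered with trap, so it runs whenever the script exits. A minimal, self-contained sketch of that pattern (the cleanup function here is hypothetical, not from the script above):

```shell
# run a throwaway bash script that registers an EXIT trap;
# "cleaned" prints even though the script never calls cleanup directly
out=$(bash -c 'cleanup() { echo cleaned; }; trap cleanup EXIT; echo working')
echo "$out"
```

In the script above, trap exitfn EXIT plays the same role, killing every PID collected in runningPIDs.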
tree . -aF --dirsfirst -I '.git|node_modules|.idea' | sed -e 's/│/|/g' -e 's/└/+/g' -e 's/├/+/g' -e 's/──/--/g'
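The sed part just rewrites Unicode box-drawing characters into ASCII. You can see the substitutions in isolation on a sample line (the input string here is made up):

```shell
# each -e expression replaces one box-drawing character globally
echo '│ ├── src └──' | sed -e 's/│/|/g' -e 's/└/+/g' -e 's/├/+/g' -e 's/──/--/g'
# -> | +-- src +--
```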
// Future versions of Hyper may add additional config options,
// which will not automatically be merged into this file.
// See https://hyper.is#cfg for all currently supported options.
module.exports = {
  config: {
    // choose either `'stable'` for receiving highly polished,
    // or `'canary'` for less polished but more frequent updates
    updateChannel: 'stable',
  },
};
/* MIT License
 *
 * Copyright (c) 2017 Roland Singer [[email protected]]
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
server {
    root /var/www/v1.polymerapp.io/html;
    index index.html index.htm index.nginx-debian.html;
    server_name v1.polymerapp.io;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
}
The logs directory will be $PWD/.zeppelin_logs, and the notebook directory will be $PWD/.notebook, mounted into the container as /notebook.
If you don't create the folders before running the next command, they will be created as the root user, which will be problematic when the container tries to create .git/ and other folders in there. You can simply fix that too, by doing sudo chown -R $USER .zeppelin_logs .notebook

mkdir -p $PWD/.zeppelin_logs $PWD/.notebook
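The run command itself isn't shown in these notes; a hypothetical invocation consistent with the mounts described above might look like this (the image tag, port mapping, and /logs target are assumptions, not from the notes):

```shell
# hypothetical sketch: ZEPPELIN_LOG_DIR/ZEPPELIN_NOTEBOOK_DIR tell Zeppelin
# to use the mounted directories; tag and port are assumptions
docker run -d -p 8080:8080 \
  -e ZEPPELIN_LOG_DIR=/logs -e ZEPPELIN_NOTEBOOK_DIR=/notebook \
  -v "$PWD/.zeppelin_logs:/logs" \
  -v "$PWD/.notebook:/notebook" \
  apache/zeppelin:0.10.1
```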
First of all, generate the data. An interesting tool for that is csvfaker: https://pypi.org/project/csvfaker/
Create a test_data/ folder and use csvfaker to put data in there:

csvfaker -r 100000 first_name last_name email ssn job country phone_number user_name zipcode invalid_ssn credit_card_number credit_card_provider credit_card_security_code bban > test_data/pii_data.csv

Then create a Docker container using the mysql:5.7 image and mount the test_data/ volume.
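A hypothetical sketch of that container, assuming a throwaway root password and using /var/lib/mysql-files (MySQL 5.7's default secure_file_priv location) as the mount target so LOAD DATA INFILE can read the CSV:

```shell
# hypothetical: container name, password, and mount target are assumptions
docker run -d --name pii-mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/test_data:/var/lib/mysql-files" \
  mysql:5.7
```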
# create from a list
mkdir -p temp/{api,logs,whatnot}
# create all from A to Z
mkdir -p temp/{A..Z}
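Brace expansion is done by the shell before mkdir ever runs, so mkdir just receives one argument per expanded name. Swapping in echo makes the expansion visible:

```shell
# the shell expands the braces, then passes the resulting words as arguments
bash -c 'echo temp/{api,logs,whatnot}'   # -> temp/api temp/logs temp/whatnot
bash -c 'echo temp/{A..C}'               # -> temp/A temp/B temp/C
```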
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.ArrayType

def flattenDataFrame(df: DataFrame): DataFrame = {
  val fields = df.schema.fields
  val fieldNames = fields.map(_.name)
  for (i <- fields.indices) {
    val field = fields(i)
    field.dataType match {
      case _: ArrayType =>
        // explode the array column in place, then flatten the result recursively
        val exploded = df.selectExpr(
          fieldNames.map(n => if (n == field.name) s"explode($n) as $n" else n): _*)
        return flattenDataFrame(exploded)
      case _ => // leave non-array columns as they are
    }
  }
  df
}