1. Launch and connect to an EC2 instance running Amazon Linux 2.
2. Become root (or use sudo) and edit /etc/ssh/sshd_config:
## sudo vi /etc/ssh/sshd_config
3. Find the #Port 22 line (usually around line 17). Un-comment it and change the port to whatever you like, e.g.:
## Port 9222
4. Save changes and exit
## :wq
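5. The change won't take effect until the SSH daemon is restarted (on Amazon Linux 2 it is managed by systemd). You'll also want to allow the new port in the instance's security group before disconnecting:
## sudo systemctl restart sshd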
#listener {
  width: 45px;
  height: 45px;
  border: 1px solid black;
}
// Assumed completion of the truncated original:
// splits a path such as "a.b[0].c" into ["a", "b", "0", "c"].
function splitPath(path) {
  const paths = [];
  let segment = '';
  for (let i = 0; i < path.length; i++) {
    const c = path[i];
    if (c === ']') continue;            // ']' is dropped, as in the original
    if (c === '[' || c === '.') {       // '[' and '.' both end the current segment
      if (segment) paths.push(segment);
      segment = '';
    } else segment += c;
  }
  if (segment) paths.push(segment);
  return paths;
}
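With that assumed completion, splitPath('user.tags[0]') returns ['user', 'tags', '0'].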
.file-node-content {
  padding: 9px 4px;
  margin: 0;
  display: flex;
  align-items: center;
}

.file-tree-node > .file-node-children {
  padding-left: 12px;
}
export HOME="/Users/<USERNAME>"
export PATH=$HOME/bin:/usr/local/bin:$PATH
export PATH="/opt/local/bin:/opt/local/sbin:$PATH"
export PATH="$HOME/.yarn/bin:$HOME/.config/yarn/global/node_modules/.bin:$PATH"

# Path to your oh-my-zsh installation.
export ZSH="$HOME/.oh-my-zsh"

export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
{
  "Use Non-ASCII Font" : true,
  "Tags" : [
  ],
  "Ansi 12 Color" : {
    "Green Component" : 0.57647058823529407,
    "Red Component" : 0.74117647058823533,
    "Blue Component" : 0.97647058823529409
  },
Parameters:
  EnvironmentName:
    Description: An environment name that will be prefixed to resource names
    Type: String

  VpcCIDR:
    Description: Please enter the IP range (CIDR notation) for this VPC
    Type: String
    Default: 10.0.0.0/16
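Elsewhere in the same template, these parameters would typically be consumed via !Ref. A minimal sketch (the VPC resource below is illustrative, not part of the original template):

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR
      Tags:
        - Key: Name
          Value: !Ref EnvironmentName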
- Install Git
- Install Nginx
- Set up Nginx as a reverse proxy for your Node.js application (a server-block sketch follows the config below)
- Install Node using NVM
- Install PM2
- Run a dummy API server using Express
- Start the server using PM2
- Auto-start PM2 after a server reboot (see the command sketch right after this list)
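For the PM2 steps, the usual commands look like this (a sketch; app.js stands in for your actual entry file):

pm2 start app.js --name my-api   # run the app under PM2
pm2 startup                      # prints a command that installs the boot-time init script; run it
pm2 save                         # freeze the current process list so it is restored on reboot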
user nginx;
worker_processes auto;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
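For the reverse-proxy step, a server block along these lines would go inside the http block above (a sketch assuming the Node.js app listens on port 3000; adjust listen, server_name, and the upstream port to your setup):

    server {
        listen 80;
        server_name _;

        location / {
            proxy_pass http://127.0.0.1:3000;   # forward requests to the Node.js app
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;   # allow WebSocket upgrades
            proxy_set_header Connection 'upgrade';
            proxy_cache_bypass $http_upgrade;
        }
    }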
Reading big files in Node.js is a little tricky. Node.js is built to handle I/O tasks efficiently, not CPU-intensive computation. It is still doable, though I'd prefer to do such tasks in languages like Python or R. Reading, parsing, transforming, and then saving large data sets (I'm talking millions of records here) can be done in a lot of ways, but only a few of them are efficient. The following snippet can parse millions of records without burning much CPU (15%-30% at most) or memory (40 MB-60 MB at most). It is based on Streams.

The program expects the input to be a CSV file, e.g. big-data.unpr.csv. It saves the result as NDJSON rather than JSON, because huge data sets are much easier to work with line by line in the NDJSON format.
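The snippet itself isn't reproduced here, so below is a minimal sketch of the same approach using Node's built-in fs and readline modules. The file names, the header-row assumption, and the naive comma split are mine, not the original author's; a real CSV parser (e.g. csv-parse) should be used for quoted fields:

const fs = require('fs');
const readline = require('readline');

// Stream the CSV line by line instead of loading it into memory.
const input = fs.createReadStream('big-data.unpr.csv');
const output = fs.createWriteStream('big-data.ndjson');
const rl = readline.createInterface({ input, crlfDelay: Infinity });

let headers = null;
rl.on('line', (line) => {
  const cells = line.split(','); // naive split; breaks on quoted commas
  if (!headers) {
    headers = cells;             // first line is assumed to be the header row
    return;
  }
  const record = {};
  headers.forEach((h, i) => { record[h] = cells[i]; });
  // NDJSON: one JSON object per line.
  if (!output.write(JSON.stringify(record) + '\n')) {
    rl.pause();                  // honour backpressure from the write stream
    output.once('drain', () => rl.resume());
  }
});
rl.on('close', () => output.end());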