To set up, I used the Developer Onboarding setup, occasionally referencing the System76 setup. The SSH step from that guide is done. In my ~/.ssh directory, I have four files, and this is what they look like:
~/.ssh/config:
Host *
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_ed25519
~/.ssh/id_ed25519:
-----BEGIN OPENSSH PRIVATE KEY-----
abunchanumbersandletters
-----END OPENSSH PRIVATE KEY-----
~/.ssh/id_ed25519.pub:
I can't open this one by double-clicking..
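It should just be the public key as a single line of plain text, though; printing it from the terminal would show something shaped like this (key body elided, trailing comment is whatever was set at key creation):
cat ~/.ssh/id_ed25519.pub
# ssh-ed25519 abunchanumbersandletters optional-comment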
~/.ssh/known_hosts:
github.com, then an IP address, then ssh-rsa, then a long unique host key. This entry repeats three or four times.
So far, this feels correct.
I did not add a .githubconfig, but rather a .gitconfig. I imagined the guide's name was a typo, but maybe not.
~/.gitconfig:
[user]
name = Gregory Anderson
email = [email protected]
I logged on to the AWS console and set up a unique access key.
Terminal command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Terminal response:
==> Downloading and installing Homebrew...
remote: Enumerating objects: 37, done.
remote: Counting objects: 100% (37/37), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 23 (delta 15), reused 17 (delta 11), pack-reused 0
Unpacking objects: 100% (23/23), 4.12 KiB | 120.00 KiB/s, done.
From https://github.com/Homebrew/brew
a65c5d685..86cbfd73e master -> origin/master
* [new branch] update-manpage -> origin/update-manpage
HEAD is now at 86cbfd73e Merge pull request #10719 from Homebrew/sorbet-files-update
Updated 1 tap (homebrew/core).
==> Installation successful!
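Not in the guide, but as a quick sanity check that brew actually landed on my PATH:
brew --version   # should print a Homebrew version string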
Terminal command:
echo $HOME
Terminal response:
/Users/picogreg
I decided to use zsh and was given the following commands:
echo "export PICO_HOME='$HOME/pico'" >> $HOME/.zshrc
echo "export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")" [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" --no-use # This loads nvm'" >> $HOME/.zshrc source ~/.zshrc
After running those commands, my .zshrc file looks like this:
export PICO_HOME='/Users/picogreg/pico'
export NVM_DIR=/Users/picogreg/.nvm
[ -s /Users/picogreg/.nvm/nvm.sh ] && \. /Users/picogreg/.nvm/nvm.sh --no-use # This loads nvm
This seems right, but I am pretty unfamiliar with zsh and bash. I have a basic understanding of what is happening, but I am not entirely sure.
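A quick way to confirm the .zshrc actually took effect (my own sanity check, not from the guide):
source ~/.zshrc
echo $PICO_HOME    # should print /Users/picogreg/pico
command -v nvm     # nvm is a shell function, so this should print "nvm"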
Terminal command:
brew install nvm
Terminal response:
nvm 0.37.2 is already installed and up-to-date.
Terminal command:
nvm install --lts
Terminal response:
Installing latest LTS version.
v14.16.0 is already installed.
Terminal command:
npm install npm@latest -g
Terminal response:
changed 14 packages, and audited 254 packages in 2s
11 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
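To confirm which node and npm the shell resolves to (note: since my .zshrc loads nvm with --no-use, I may need to activate a version first):
nvm use --lts   # activate the LTS install, since --no-use skips this
node -v         # expecting v14.16.0, the LTS version nvm reported
npm -v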
Terminal command:
brew install pyenv
Terminal response:
Warning: pyenv 1.2.23 is already installed and up-to-date.
Terminal command:
brew install awscli
Terminal response:
Warning: awscli 2.1.28 is already installed and up-to-date.
From here, there is the aws configure command. This prompts for four inputs within the terminal and creates a hidden ~/.aws directory.
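The prompt session looks roughly like this (key values are placeholders, matching the files below):
aws configure
# AWS Access Key ID [None]: MYACCESSKEY...
# AWS Secret Access Key [None]: MYSECRETACCESSKEY...
# Default region name [None]: us-east-1
# Default output format [None]: json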
Within that directory there are two files:
~/.aws/config:
[default]
region = us-east-1
output = json
~/.aws/credentials:
[default]
aws_access_key_id = MYACCESSKEY$#$(#*$#(
aws_secret_access_key = MYSECRETACCESSKEY$H#FUI$J#F
Again, not incredibly familiar with this territory, annnd I was using zsh before. Nevertheless..
Terminal command:
echo "export PICO_HOME='$HOME/pico'" >> $HOME/.bash_profile
source ~/.bash_profile
This resulted in my .bash_profile looking like this:
export PICO_HOME='/Users/picogreg/pico'
I do not know if there are issues with using both zsh and bash, buuuut with the commands given, it seems like that should be what my .bash_profile looks like.
There are six commands that were each individually added in the terminal:
sudo -- sh -c -e "echo '127.0.0.1 wordpress.local' >> /etc/hosts"
sudo -- sh -c -e "echo '::1 wordpress.local' >> /etc/hosts"
sudo -- sh -c -e "echo '127.0.0.1 wrapper.local' >> /etc/hosts"
sudo -- sh -c -e "echo '::1 wrapper.local' >> /etc/hosts"
sudo -- sh -c -e "echo '127.0.0.1 host.docker.internal' >> /etc/hosts"
sudo -- sh -c -e "echo '::1 host.docker.internal' >> /etc/hosts"
This resulted in /etc/hosts looking like this:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
127.0.0.1 wordpress.local
::1 wordpress.local
127.0.0.1 wrapper.local
::1 wrapper.local
127.0.0.1 host.docker.internal
::1 host.docker.internal
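To spot-check that the new entries actually resolve (my own verification, not from the guide):
ping -c 1 wordpress.local    # should resolve to 127.0.0.1
ping -c 1 wrapper.local      # likewise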
mkdir $PICO_HOME/api && cd $PICO_HOME/api
git clone [email protected]:PicoNetworks/API.git .
aws s3 cp s3://pico-secret-keys/dev.pico.tools.crt ./data/nginx/certs
aws s3 cp s3://pico-secret-keys/dev.pico.tools.key ./data/nginx/certs
make env
When I first did this, the lowercase api and uppercase API from lines 1 and 2 caused a discrepancy, creating a folder inside the folder, and I felt as though that was causing issues. I then turned to cd pico and cloned from there, which automatically created a pico/API folder.
Terminal command:
git clone [email protected]:PicoNetworks/API.git
Terminal response:
Cloning into 'API'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (41/41), done.
remote: Total 55027 (delta 14), reused 31 (delta 6), pack-reused 54980
Receiving objects: 100% (55027/55027), 277.02 MiB | 2.04 MiB/s, done.
Resolving deltas: 100% (40189/40189), done.
I then cd into API and run the following commands.
Terminal command:
aws s3 cp s3://pico-secret-keys/dev.pico.tools.crt ./data/nginx/certs
Terminal response:
download: s3://pico-secret-keys/dev.pico.tools.crt to data/nginx/certs/dev.pico.tools.crt
Terminal command:
aws s3 cp s3://pico-secret-keys/dev.pico.tools.key ./data/nginx/certs
Terminal response:
download: s3://pico-secret-keys/dev.pico.tools.key to data/nginx/certs/dev.pico.tools.key
I can then check on this by going to pico/API/data/nginx/certs and seeing that there are two files.
dev.pico.tools.crt:
-----BEGIN CERTIFICATE-----
blahblablah
blahblabl
blahblablah
blahblablahblahblablah
blahblablah
blahblablah
-----END CERTIFICATE-----
dev.pico.tools.key:
-----BEGIN PRIVATE KEY-----
blahblahblahblahblahblah
blahblahblah
blahblahblah
blahblahblah
blahblahbl
-----END PRIVATE KEY-----
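If I wanted to verify the cert beyond eyeballing it (an extra step; openssl ships with macOS), this would print its subject and validity window:
openssl x509 -in dev.pico.tools.crt -noout -subject -dates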
This feels good so far. Now I want to make an env.
Terminal command:
make env
Terminal response:
==> Extracting keys from SSM ...
grep: .env: No such file or directory
==> .env updated
This creates a large .env file, 72 lines long. I added these lines to the bottom of the .env:
AWS_ACCESS_KEY_ID=my_key
AWS_SECRET_ACCESS_KEY=my_secret_key
DEBUG_QUEUE=publisher_reindex
QUEUE_URL=http://localstack.pico.local:4576/queue
QUEUE_WORKERS=publisher_reindex,taxonomy_update,hubspot,metrics,signup_link_sync,csv_export
WEBHOOKS=false
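To confirm docker-compose will pick these up (it reads .env from the project directory), rendering the resolved config is a decent check, at least for variables the compose file actually references:
docker-compose config | grep -i queue   # resolved values appear here if referenced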
Followed by an unzip command within the config folder:
Terminal command:
cd config && unzip mmdb.zip
Terminal response:
Archive: mmdb.zip
inflating: GeoIP2-City.mmdb
This had me within the config folder. I confirmed against the System76 setup, and it seems that is where I should be when running the build.
Terminal command:
docker-compose build api
Terminal response:
Successfully built b3313b1b3057
Successfully tagged pico-api:development
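To confirm the image landed where expected:
docker images pico-api
# REPOSITORY   TAG           IMAGE ID
# pico-api     development   b3313b1b3057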
Just in case..
Terminal command:
git status
Terminal response:
On branch dev
Your branch is up to date with 'origin/dev'.
nothing to commit, working tree clean
# widget
# https://widget.dev.pico.tools
mkdir $PICO_HOME/widget && cd $PICO_HOME/widget
git clone [email protected]:PicoNetworks/widget.git .
docker-compose build
I ran the mkdir command, then the clone. To confirm that expected behavior was met, I ran a few things.
Terminal command:
pwd
Terminal response:
/Users/picogreg/pico/widget
Terminal command:
ls
Terminal response:
CHANGELOG.md README.md config docker-compose.yml npmrc package.json scripts tests
Dockerfile babel.config.js development.config.js karma.conf.js package-lock.json public src
This felt good. I ran the build:
Terminal command:
docker-compose build
Terminal response:
Successfully built 4db291049217
Successfully tagged widget_pico_widget:latest
# gadget
# https://gadget.dev.pico.tools
mkdir $PICO_HOME/gadget && cd $PICO_HOME/gadget
git clone [email protected]:PicoNetworks/gadget.git .
docker-compose build
Similar to above, after mkdir and clone.
Terminal command:
pwd
Terminal response:
/Users/picogreg/pico/gadget
Terminal command:
ls
Terminal response:
Dockerfile __tests__ docker-compose.yml jest.config.js localServer plugins src webpack.legacy.js
README.md babel.config.json enzyme.setup.js jsconfig.json package.json scripts webpack.dev.js webpack.prod.js
Then build.
Terminal command:
docker-compose build
Terminal response:
Successfully built c0715ae385c8
Successfully tagged gadget_gadget:latest
# publisher
# https://publisher.dev.pico.tools
mkdir $PICO_HOME/publisher && cd $PICO_HOME/publisher
git clone [email protected]:PicoNetworks/publisher.git .
docker-compose build
Similar to above, after mkdir and clone.
Terminal command:
pwd
Terminal response:
/Users/picogreg/pico/publisher
Terminal command:
ls
Terminal response:
Dockerfile __tests__ config docker-compose.yml package-lock.json public semantic yarn.lock
README.md babel.config.js development.config.js jest.config.js package.json scripts src
Then build.
Terminal command:
docker-compose build
Terminal response:
Successfully built 713bb2aa9d79
Successfully tagged publisher_publisher:latest
# dashboard
# https://dashboard.dev.pico.tools
mkdir $PICO_HOME/dashboard && cd $PICO_HOME/dashboard
git clone [email protected]:PicoNetworks/dashboard.git .
docker-compose build
Similar to above, after mkdir and clone.
Terminal command:
pwd
Terminal response:
/Users/picogreg/pico/dashboard
Terminal command:
ls
Terminal response:
Dockerfile __mocks__ codecov.yml docker-compose.yml jsconfig.json package-lock.json public src
README.md __tests__ commitlint.config.js jest.config.js next.config.js package.json server.js
Then build:
Terminal command:
docker-compose build
Terminal response:
Successfully built d6af1f94d5a4
Successfully tagged dashboard_dashboard:latest
# local widget/gadget clients
# http://wordpress.local
mkdir $PICO_HOME/widget-clients
git clone [email protected]:PicoNetworks/widget-clients.git $PICO_HOME/widget-clients
docker-compose build
docker-compose up composer
The cadence is off on this: the commands are structured slightly differently than in the previous setups (cloning into a path rather than cd-ing in first), but it should not cause an error. Just noting.
Similar to above, after mkdir and clone.
Terminal command:
pwd
Terminal response:
/Users/picogreg/pico/widget-clients
Terminal command:
ls
Terminal response:
README.md composer.json docker-compose.yml wrapper
Then build.
Terminal command:
docker-compose build
Terminal response:
wordpressdb uses an image, skipping
wordpress uses an image, skipping
wrapper uses an image, skipping
composer uses an image, skipping
This doesn't seem totally right. Additionally, the commands for the System76 setup are very different. I will continue with the Mac instructions and address the other commands later on if issues arise.
There is also a composer command.
Terminal command:
docker-compose up composer
Terminal response:
Starting widget-clients_composer_1 ... error
ERROR: for widget-clients_composer_1 Cannot start service composer: error while creating mount source path '/host_mnt/Users/picogreg/pico/widget-clients': mkdir /host_mnt/Users/picogreg/pico/widget-clients: no such file or directory
ERROR: for composer Cannot start service composer: error while creating mount source path '/host_mnt/Users/picogreg/pico/widget-clients': mkdir /host_mnt/Users/picogreg/pico/widget-clients: no such file or directory
ERROR: Encountered errors while bringing up the project.
This seems not great, but I want to carry on, as this seems more like an aberration than a crucial element.
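From what I can tell, this "error while creating mount source path" failure is a known Docker Desktop for Mac quirk (a stale file-sharing mount), and the usual remedy is to tear everything down and restart Docker Desktop before retrying, i.e., something like:
docker-compose down
# restart Docker Desktop from the menu bar, then:
docker-compose up composer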
# onboarding
# https://onboarding.dev.pico.tools/signup
mkdir $PICO_HOME/onboarding
git clone [email protected]:PicoNetworks/onboarding.git $PICO_HOME/onboarding
docker-compose build
Again, the cadence is slightly different from the first directories, but that shouldn't cause errors.
Similar to above, after mkdir and clone.
Terminal command:
pwd
Terminal response:
/Users/picogreg/pico/onboarding
Terminal command:
ls
Terminal response:
Dockerfile __tests__ docker-compose.yml layouts next.config.js scripts
Makefile components favicon.ico lib package.json styles
README.md containers helpers modules pages trypico_com
Then build.
Terminal command:
docker-compose build
Terminal response:
Successfully built 7baa0fe009c0
Successfully tagged onboarding_onboarding:latest
Awesome.
Terminal command:
pwd
Terminal response:
/Users/picogreg/pico/API
Terminal command:
git status
Terminal response:
On branch dev
Your branch is up to date with 'origin/dev'.
nothing to commit, working tree clean
Terminal command:
docker-compose up api
Terminal response:
WARNING: The NGROK_TOKEN variable is not set. Defaulting to a blank string.
Removing localstack.pico.local
proxy is up-to-date
redis.dev.pico.tools is up-to-date
Recreating 587c4e8e3d9f_localstack.pico.local ...
db.dev.pico.tools is up-to-date
Recreating 587c4e8e3d9f_localstack.pico.local ... error
ERROR: for 587c4e8e3d9f_localstack.pico.local Cannot start service localstack: error while creating mount source path '/host_mnt/Users/picogreg/pico/API/scripts/localstack_init.d': mkdir /host_mnt/Users/picogreg/pico/API: no such file or directory
ERROR: for localstack Cannot start service localstack: error while creating mount source path '/host_mnt/Users/picogreg/pico/API/scripts/localstack_init.d': mkdir /host_mnt/Users/picogreg/pico/API: no such file or directory
ERROR: Encountered errors while bringing up the project.
The error points at /host_mnt/Users/picogreg/pico/API/scripts/localstack_init.d (the /host_mnt prefix is how Docker Desktop mounts the host filesystem inside its VM). I navigate to pico/API/scripts/localstack_init.d in my own directory, and it looks like this:
#!/bin/bash
if [ ! -z $QUEUE_WORKERS ]; then
  # create the queues we're asked to create
  for i in ${QUEUE_WORKERS//,/ }; do
    awslocal sqs create-queue --queue-name development_$i.fifo --attributes '{"FifoQueue":"true", "ContentBasedDeduplication":"true"}'
  done
fi
#webhooks lambda
awslocal s3api create-bucket --bucket pico-serverless-deployments
awslocal sqs create-queue --queue-name development_webhooks_contact.fifo --attributes '{"FifoQueue":"true", "ContentBasedDeduplication":"true"}'
awslocal sqs create-queue --queue-name development_webhooks_payment.fifo --attributes '{"FifoQueue":"true", "ContentBasedDeduplication":"true"}'
awslocal iam create-role --role-name development-serverless-role --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}' || true
sleep 5
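For my own understanding: with the QUEUE_WORKERS value I put in the .env, that loop should expand into one create-queue call per worker, e.g.:
# QUEUE_WORKERS=publisher_reindex,taxonomy_update,... expands to:
awslocal sqs create-queue --queue-name development_publisher_reindex.fifo --attributes '{"FifoQueue":"true", "ContentBasedDeduplication":"true"}'
awslocal sqs create-queue --queue-name development_taxonomy_update.fifo --attributes '{"FifoQueue":"true", "ContentBasedDeduplication":"true"}'
# ...and so on for hubspot, metrics, signup_link_sync, csv_export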
At this point, I am beginning to deviate from the setup set forth in the Developer Onboarding.
Terminal command:
docker-compose down
Terminal response:
WARNING: The NGROK_TOKEN variable is not set. Defaulting to a blank string.
Stopping db.dev.pico.tools ... done
Stopping proxy ... done
Stopping redis.dev.pico.tools ... done
Stopping elastic.dev.pico.tools ... done
Removing localstack.pico.local ... done
Removing api.dev.pico.tools ... done
Removing db.dev.pico.tools ... done
Removing proxy ... done
Removing 587c4e8e3d9f_localstack.pico.local ... done
Removing redis.dev.pico.tools ... done
Removing elastic.dev.pico.tools ... done
Removing network api_picoweb
Removing network api_internal_network
Terminal command:
docker ps
Terminal response:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
There are no processes running, and that is confirmed by looking at my Docker Dashboard.
I run the command one more time, because you never know.. annnnnd maybe it's working!
Terminal command:
docker-compose up api
Terminal response:
api.dev.pico.tools | --- PM2 development mode ------------------------------------------------------
api.dev.pico.tools | Apps started : API
api.dev.pico.tools | Processes started : 1
api.dev.pico.tools | Watch and Restart : Enabled
api.dev.pico.tools | Ignored folder : node_modules
api.dev.pico.tools | ===============================================================================
api.dev.pico.tools | API-0 | Debugger listening on ws://0.0.0.0:9229/70ba2ff2-9e9d-4b2b-8654-7c29571bb596
api.dev.pico.tools | API-0 | For help, see: https://nodejs.org/en/docs/inspector
api.dev.pico.tools | [rundev] App API restarted
api.dev.pico.tools | API-0 | {
api.dev.pico.tools | API-0 | message: 'api.dev.pico.tools is now Listening on :::80',
api.dev.pico.tools | API-0 | level: 'info'
api.dev.pico.tools | API-0 | }
api.dev.pico.tools | API-0 | Cannot delete or update a parent row: a foreign key constraint fails
api.dev.pico.tools | API-0 | Constraint audienceidentifications_ibfk_1 on table AudienceIdentifications does not exist
api.dev.pico.tools | API-0 | Cannot delete or update a parent row: a foreign key constraint fails
api.dev.pico.tools | API-0 | Cannot delete or update a parent row: a foreign key constraint fails
api.dev.pico.tools | API-0 | Cannot delete or update a parent row: a foreign key constraint fails
api.dev.pico.tools | API-0 | Cannot delete or update a parent row: a foreign key constraint fails
api.dev.pico.tools | API-0 | Cannot delete or update a parent row: a foreign key constraint fails
api.dev.pico.tools | API-0 | Constraint audiencemonetizations_ibfk_2 on table AudienceMonetizations does not exist
api.dev.pico.tools | API-0 | Constraint audiencemonetizations_ibfk_1 on table AudienceMonetizations does not exist
api.dev.pico.tools | API-0 | Constraint linkedaccounts_ibfk_1 on table LinkedAccounts does not exist
api.dev.pico.tools | API-0 | Constraint identificationrules_ibfk_1 on table IdentificationRules does not exist
api.dev.pico.tools | API-0 | Constraint articles_ibfk_1 on table Articles does not exist
api.dev.pico.tools | API-0 | Constraint articletaxonomies_ibfk_1 on table ArticleTaxonomies does not exist
api.dev.pico.tools | API-0 | Constraint audiences_ibfk_1 on table Audiences does not exist
api.dev.pico.tools | API-0 | Constraint audienceidentifications_ibfk_2 on table AudienceIdentifications does not exist
api.dev.pico.tools | API-0 | Constraint shares_ibfk_1 on table Shares does not exist
api.dev.pico.tools | API-0 | Constraint shares_ibfk_2 on table Shares does not exist
api.dev.pico.tools | API-0 | Constraint shareclicks_ibfk_1 on table ShareClicks does not exist
api.dev.pico.tools | API-0 | Constraint shareclicks_ibfk_2 on table ShareClicks does not exist
api.dev.pico.tools | API-0 | Constraint userarticles_ibfk_1 on table UserArticles does not exist
api.dev.pico.tools | API-0 | Constraint userarticles_ibfk_2 on table UserArticles does not exist
api.dev.pico.tools | API-0 | Constraint userarticles_ibfk_3 on table UserArticles does not exist
api.dev.pico.tools | API-0 | Constraint balancechanges_ibfk_1 on table BalanceChanges does not exist
api.dev.pico.tools | API-0 | Constraint balancechanges_ibfk_2 on table BalanceChanges does not exist
api.dev.pico.tools | API-0 | Constraint balancechanges_ibfk_3 on table BalanceChanges does not exist
api.dev.pico.tools | API-0 | Constraint BalanceChange_publisher_id_foreign_idx on table BalanceChanges does not exist
api.dev.pico.tools | API-0 | { message: 'Legacy Seeding starting', level: 'info' }
api.dev.pico.tools | API-0 | { name: 'TimeoutError', level: 'error' }
api.dev.pico.tools | API-0 | { level: 'error' }
api.dev.pico.tools | API-0 | Error: TimeoutError: ResourceRequest timed out
api.dev.pico.tools | API-0 | at seed (/usr/src/app/services/dataAccess/Seed.js:54:15)
api.dev.pico.tools | API-0 | at processTicksAndRejections (internal/process/task_queues.js:93:5)
api.dev.pico.tools | API-0 | at async Object.up (/usr/src/app/migrations/20210331000000-offical-v2-legacy-seed.js:11:9)
api.dev.pico.tools | API-0 | From previous event:
api.dev.pico.tools | API-0 | at asyncGeneratorStep (/usr/src/app/node_modules/umzug/lib/migration.js:9:227)
api.dev.pico.tools | API-0 | at _next (/usr/src/app/node_modules/umzug/lib/migration.js:11:194)
api.dev.pico.tools | API-0 | at processImmediate (internal/timers.js:461:21)
I get an error, but everything is green on the Docker Dashboard. That has not happened before; I will press forward with the next instruction.
cd $PICO_HOME/publisher
docker-compose up -d
https://publisher.dev.pico.tools
I open a new terminal window alongside the one that I hope API is running in, and cd into publisher. From here I want to run the docker-compose command.
Terminal command:
docker-compose up -d
Terminal response:
Creating publisher.dev.pico.tools ... done
I then went to https://publisher.dev.pico.tools/ and was hoping for success, but I received a 504 Gateway Timeout. The only entry in the network tab was the 504 for the favicon.ico.
I am defeated. I stop to have lunch.
I come back to refresh the page. It works. I see the publisher on https://publisher.dev.pico.tools/. Moving on, I guess.
cd $PICO_HOME/onboarding
docker-compose up -d
Go to https://onboarding.dev.pico.tools/signup and signup for a new account. When trying to redeem the verify link from your email, replace trypico.dev.pico.tools with onboarding.dev.pico.tools. This will redirect you to https://publisher.dev.pico.tools at the end, already logged in.
Terminal command:
docker-compose up -d
Terminal response:
Creating onboarding.dev.pico.tools ... done
I navigate to https://onboarding.dev.pico.tools/signup. I get an error, but upon refreshing, I see the sign-up page.
This feels like a win.
I sign up, but I am unable to get an email for verification. I will wait a bit longer to see if that changes.
cd $PICO_HOME/widget-clients
docker-compose up -d wordpress
Go to http://wordpress.local.
Terminal command:
docker-compose up -d wordpress
Terminal response:
Creating wordpress-db.pico.local ... done
Creating wordpress.local ... done
I navigate to http://wordpress.local and I am presented with a WordPress page asking about language preferences. It asks for further setup, and I decide to hold off on that.
At this point, onboarding seems to be complete. I think there may be some holes, but the base is surely there. I can get to publisher and to onboarding, although I have yet to see an email from onboarding.