RTL relay testing
https://github.com/facebook/relay/blob/master/packages/relay-test-utils/__tests__/RelayMockEnvironmentWithComponents-test.js
https://github.com/entria/entria-fullstack/pull/109/files
Very easy native splash screen on Xcode
https://medium.com/@kelleyannerose/react-native-ios-splash-screen-in-xcode-bd53b84430ec
Init RN app on a specific RN version
(rn cli 2.0.1+)
react-native-cli init --version="[email protected]" myproject
// correct CRA usage
yarn create react-app my-app
// mongo regex example
db.Event.find({ title: {$regex: new RegExp("tom", "i")} })[0]
// rn network debug
https://github.com/jhen0409/react-native-debugger/issues/382
```
// add to index.js
global.XMLHttpRequest = global.originalXMLHttpRequest || global.XMLHttpRequest;
global.FormData = global.originalFormData || global.FormData;
if (window.FETCH_SUPPORT) {
  window.FETCH_SUPPORT.blob = false;
} else {
  global.Blob = global.originalBlob || global.Blob;
  global.FileReader = global.originalFileReader || global.FileReader;
}
```
Renato Bohler's tmux config
```
set-option -g default-shell /bin/zsh
set -g utf8
set-window-option -g utf8 on
set -g status off
set -g default-terminal "screen-256color"
set -g prefix C-a
unbind C-b
set -sg escape-time 1
set-option -g base-index 1
setw -g pane-base-index 1
bind r source-file ~/.tmux.conf \; display "Reloaded!"
bind | split-window -h
bind - split-window -v
```
// calculate ssh key fingerprint
- cat the public key, remove the leading type string, copy the base64 payload
- echo -n $base64payload | base64 -D | md5 # macOS flags; on Linux: base64 -d | md5sum
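The two steps above wrapped in a small helper (a sketch assuming Linux coreutils, i.e. base64 -d and md5sum; the path in the usage comment is a placeholder):

```shell
# fingerprint = md5 of the decoded base64 payload (2nd field of the .pub file)
fingerprint() {
  awk '{ print $2 }' "$1" | base64 -d | md5sum | cut -d' ' -f1
}
# usage: fingerprint ~/.ssh/id_ed25519.pub
```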
// replaceAtIndex helper
export const replaceAtIndex = <Item = any>(array: Item[], index: number, item: Item): Item[] => {
  return [...array.slice(0, index), item, ...array.slice(index + 1)];
};
export const ONE_SECOND_IN_MILLISECONDS = 1000;
export const ONE_MINUTE_IN_MILLISECONDS = 60 * ONE_SECOND_IN_MILLISECONDS;
export const ONE_HOUR_IN_MILLISECONDS = 60 * ONE_MINUTE_IN_MILLISECONDS;
export const ONE_DAY_IN_MILLISECONDS = 24 * ONE_HOUR_IN_MILLISECONDS;
export const SEVEN_DAYS_IN_MILLISECONDS = 7 * ONE_DAY_IN_MILLISECONDS;
json to typescript type conversion
https://transform.tools/json-to-typescript
awesome phone regex
https://regex101.com/r/MNWXbW/3
convert salaries between year/hour/month (1920.17... is roughly working hours per year, ~40h x 48wk)
new Array(130).fill(0).map((x, i) => (i+4) * 5000).map(year => ({ year, hour: (year/1920.1755589082431).toFixed(0), month: (year/12).toFixed(0) }))
cool thoughts articles tweets ideas
https://stopa.io
github gif how to
![](name-of-giphy.gif)
kill branches that have no remotes
```bash
git branch -vv | grep ': gone]' | grep -v "\*" | awk '{ print $1; }' | xargs -r git branch -D
```
twitter bot translation google translate npm library
https://github.com/vitalets/google-translate-api
how to grep git source code
```
git grep <regexp> $(git rev-list --all)
```
how to grep git commit msg
```
<your fav git log alias> --grep="foo"
```
git push a few commits to origin
```
git push origin 91143c3:fix-52-2
```
where 91143c3 is the last commit you want to push, and fix-52-2 is the branch on origin to push to.
this actually pushes 91143c3 and all of its ancestor commits to origin, because commits are chained together.
so, if you'd like to push a range, push the head commit of the range with this technique.
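The same technique wrapped in a helper (a sketch; the refs/heads/ prefix is needed when pushing a raw SHA to a branch that does not yet exist on the remote):

```shell
# push everything up to and including <sha> to remote branch <branch>
push_up_to() { # usage: push_up_to <sha> <branch>
  git push origin "$1:refs/heads/$2"
}
```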
when doing complex stuff, find the simplest code that works and build from there
linux gpg keys handling
when installing packages, if you run into gpg errors importing keys, check this: https://sks-keyservers.net/status and look for a server that is green in the HKPS column. then import the key using that server: gpg --keyserver sks.pod02.fleetstreetops.com --recv-keys <pub key>
pubkey encrypt
gpg --sign --encrypt --armor file
will output file.asc
and ask for a recipient email and the key password
provide a gpg key you own
pubkey decrypt
gpg -d file.asc > out
provided you have the private key of the recipient the data was encrypted to!
set key trust level
gpg --edit-key $EMAIL
select key in prompt by typing a number (usually 1)
type trust
to set trust level
quit
tar
tar -cvf out.tar in.file
tar -xvf out.tar
linux dns
use resolvconf -l
and resolvconf -a $INTERFACE < file
where file is like /etc/resolv.conf:
# resolv.conf from wlp3s0.ra
# Generated by dhcpcd from wlp3s0.ra
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 2804:14d:1:0:181:213:132:2
nameserver 2804:14d:1:0:181:213:132:3
options timeout:1
if ip link shows interfaces are up, but ping still fails, try dhcpcd <interface>
resolv.conf conflict
NetworkManager wants to start dhcpcd as a child process as part of the connection process. If dhcpcd is running as a systemd service it will conflict, the child loses, and the connection fails.
Disabling NetworkManager's dns config solves the conflict (no more child, and the dhcpcd service handles DNS), but causes the tray icon to bug out, and Spotify doesn't work.
Disabling the dhcpcd service allows connection, but NetworkManager writes a bugged, garbage DNS server to resolv.conf that causes several issues. Additionally, dhcpcd is required to connect from a CLI environment.
Fix: set NetworkManager to use resolvconf (rc-manager=resolvconf) and set dhcp=dhclient to resolve the dhcpcd conflict. Enable dhcpcd so CLI environments can connect normally to networks.
CLI dns and brazil registro.br
dig a.dns.br // get dns server ip
dig @200.219.148.10 golang.dev.br // use the server IP to query a domain it hosts
calls to dig will return more information each time; keep adjusting the invocation based on the output
linux networking
if using a GUI, NetworkManager should take care of everything. Any issues can probably be resolved with its config file.
on console
- stop NetworkManager.service to avoid conflicts over the interface. it relies on kwallet which relies on x11.
- use wpa_supplicant to connect
- use iw dev $INTERFACE link to check link status
- use ip link and ip link set $INTERFACE up to manage interfaces
- check that dhcpcd.service is running on all interfaces, or run dhcpcd $INTERFACE
Find your internal ip address
hostname -i # hostname -I (capital) lists all assigned addresses
linux broadcom wifi
kernel module is wl
check dmesg | grep wl
to find if it's loaded or not
also check lspci -k
look for Network controller
good:
Kernel driver in use: wl
Kernel modules: bcma, wl
bad (no driver in use):
Kernel modules: bcma, wl
fix 1: unload wl and load it again
sudo rmmod b43
sudo rmmod ssb
sudo rmmod bcma
sudo rmmod wl
sudo modprobe wl
fix 2: edit mkinitcpio blacklist to avoid problems causing wl to not load
see arch wiki broadcom wifi
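The blacklist for fix 2 might look like this (hypothetical file name; same conflicting modules unloaded in fix 1, minus wl itself):

```
# /etc/modprobe.d/broadcom-wl.conf
blacklist b43
blacklist ssb
blacklist bcma
```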
linux importing pfx windows key files
IN=path/to/pfx
openssl pkcs12 -in "$IN" -legacy -nocerts -nodes -out out-filename.key
openssl pkcs12 -in "$IN" -legacy -clcerts -nokeys -out out-filename.cer
files will be in $PWD
store in /etc/ssl
cert and pem might be the same thing
systemd
/etc/systemd/system
has symlinks to enabled services, unlink to disable service
journal
When checking journalctl, use journalctl -r -u <unit>
as root to see all output for <unit>
-r means reverse (new on top)
running as regular user hides output
rsync
https://lzone.de/cheat-sheet/rsync
community
German Guaigua linux specialist on react br
Amdgpu
low level gpu overclock/underclock
echo "manual" > /sys/class/drm/card0/device/power_dpm_force_performance_level
echo "s 0 300 750" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "s 1 588 750" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "s 2 952 800" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "s 3 1041 800" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "s 4 1106 800" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "s 5 1168 800" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "s 6 1209 800" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "s 7 1270 800" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "c" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "m 0 300 750" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "m 1 1000 800" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "m 2 1000 800" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "c" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "r" > /sys/class/drm/card0/device/pp_od_clk_voltage
OD_SCLK:
0: 300MHz 750mV
1: 588MHz 765mV
2: 952MHz 912mV
3: 1041MHz 981mV
4: 1106MHz 1037mV
5: 1168MHz 1106mV
6: 1209MHz 1150mV
7: 1270MHz 1150mV
OD_MCLK:
0: 300MHz 750mV
1: 1000MHz 800mV
2: 1750MHz 900mV
OD_RANGE:
SCLK: 300MHz 2000MHz
MCLK: 300MHz 2250MHz
VDDC: 750mV 1150mV
pacman
404 on package fix
- google the package, find a dl link with a higher version, copy the link
- pacman -U <link>
gpg key error
pacman -S archlinux-keyring will refresh your keys
groups
add a user to a group
- usermod -a -G adm vacation
hibernation
https://wiki.archlinux.org/title/Power_management#Sleep_hooks
https://bbs.archlinux.org/viewtopic.php?id=247036
https://www.kernel.org/doc/Documentation/power/swsusp-and-swap-files.txt
https://btrfs.readthedocs.io/en/latest/Swapfile.html#hibernation
# no btrfs
sudo -i
truncate -s 0 /swapfile
chattr +C /swapfile
fallocate -l 2G /swapfile
chmod 0600 /swapfile
mkswap /swapfile
swapon /swapfile
# btrfs
btrfs filesystem mkswapfile --size 2G /swapfile
hibernation needs offset for file as cmdline boot arg or in /sys/power/resume_offset
sudo -i
btrfs inspect-internal map-swapfile -r /swapfile > /sys/power/resume_offset
btrfs resize
use sudo btrfs filesystem show /dev/nvme0n1p5
to get the devid of the fs to be resized
to resize a partition, delete and recreate it.
btrfs filesystem resize 1:max /mnt
where /mnt contains a btrfs and devid is 1
for some specific size, try btrfs filesystem resize 1:+300G
if expanding, expand partition first, then expand fs. if shrinking, shrink fs then partition
btrfs subvolumes
btrfs subvolume set-default /path/to/subvol # will be mounted by kernel
btrfs subvolume snapshot -r /source/subvol /snap/name # read only
btrfs subvolume get-default / # currently @root is default
btrfs subvolume list /
to restore a backup create a rw snapshot from a ro snapshot
useful commands
grep @ /proc/mounts | bat --language fstab
sudo mount -o "remount,exec" LABEL=Archlinux /home/vacation/.local/share/Steam
/usr/bin/btrfs check --check-data-csum /dev/disk/by-label/Archlinux # fsck equivalent
root btrfs send --compressed-data /toplevel/@root | btrfs receive /media/data/toplevel # send ro snapshot, pass folder to receive
Infra
AWS
samples
https://github.com/aws-samples
cdk
https://github.com/aws-samples/aws-cdk-examples/tree/master/typescript
K8S
exec command in pod
kubectl exec --stdin --tty $POD -c $CONTAINER -- /bin/sh
psql
PGPASSWORD=$(gcloud auth print-access-token) psql -U [email protected] -d hub -h /home/vacation/.eleanor_sql_sockets/ele-qa-436057:us-east1:eleanor-postgres
get secrets
kubectl get secrets $SECRET_NAME -o jsonpath='{.data.FOO_DATA}'
kubectl edit secrets
kubectl get pods
kubectl describe pod
kubectl logs --tail=100 $POD -c $CONTAINER
k8s scheduling/cron scheduling syntax helper comment
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
k8s jobs and crons
# job from cron
kubectl create job --from=cronjob/hub-server-sync-athena-ticklers my-job
gcloud install
plugin update
gcloud components install gke-gcloud-auth-plugin
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud components update
gcloud --project=ele-qa-436057 container clusters --region=us-east1 get-credentials cluster
pod details
kubectl get pod hub-server-insurance-cleanup-4wb86-9nw44 --output=yaml
Typescript
extract type from relay type
type FragmentRefs<T extends string> = T
type FeedList_query = {
readonly posts: {
readonly endCursorOffset: number;
readonly startCursorOffset: number;
readonly count: number | null;
readonly pageInfo: {
readonly hasNextPage: boolean;
readonly hasPreviousPage: boolean;
readonly startCursor: string | null;
readonly endCursor: string | null;
};
readonly edges: ReadonlyArray<{
readonly node: {
readonly id: string;
readonly " $fragmentRefs": FragmentRefs<"Post_post">;
} | null;
} | null>;
};
readonly me: {
readonly " $fragmentRefs": FragmentRefs<"Post_me">;
} | null;
readonly " $refType": "FeedList_query";
};
type ExtractNode<T extends {edges: any}> = NonNullable<NonNullable<T['edges'][number]>['node']>
type Post = ExtractNode<FeedList_query['posts']>
tsconfig.json typeroots needs to manually add node_modules/@types, like a default that is lost when you specify manually another path
"typeRoots": ["src/types", "node_modules/@types"]
https://stackoverflow.com/questions/39040108/import-class-in-definition-file-d-ts
ambient vs local type declarations and the different import syntax inside them
declare module 'repl.history' {
  // import some types from node via inline import()
  type AddHistory = (repl: import('repl').REPLServer, file: import('fs').PathLike) => void
  const def: AddHistory
  export default def
}
type Primitive =
| string
| boolean
| number
| bigint
| symbol
| null
| undefined;
the object type = any non-primitive value.
Sql and psql with docker
connect to postgres using psql inside docker
sudo docker-compose exec postgres psql -U postgres -d hub
describe table
\d table
connect to db
\c
quit
\q
list all tables
\dt
list all dbs
\l
delete row
DELETE FROM table_name WHERE condition;
clone db
CREATE DATABASE new_database_name WITH TEMPLATE old_database_name; -- new db is cloned from the old one
kill off containers and volumes using docker-compose
docker-compose down --rmi all -v
if that fails, you may need to docker ps -a
and remove any containers that errored with docker rm <container id>
dump docker db to local file
sudo docker-compose exec postgres pg_dump -h localhost -U "postgres" "hub" --format plain > "hub.dump.sql"
to restore
psql -d hub -f hub.dump.sql
docker cp from container to local fs
use sudo docker cp 5810d91ee2e5:/out.txt $PWD/out.txt
where
postgres is the container name for docker-compose, and 5810d91ee2e5 is the container id for docker
5810d91ee2e5 is from sudo docker ps
/out.txt is from sudo docker-compose exec postgres pwd
which replies with /
and out.txt is the file I want. exec postgres ls also works btw!
docker nuke
removecontainers() {
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
}
armageddon() {
removecontainers
docker network prune -f
docker rmi -f $(docker images --filter dangling=true -qa)
docker volume rm $(docker volume ls --filter dangling=true -q)
docker rmi -f $(docker images -qa)
}
postgres
-- seq error fix
SELECT MAX(the_primary_key) FROM the_table;
SELECT nextval('the_primary_key_sequence');
-- nextval needs to be higher than the table's max value
SELECT setval('the_primary_key_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
-- just calling nextval can fix an off-by-one, or set the value explicitly with setval
-- drop all tables quickly
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO public;
psql
psql -U postgres -d db-name-here
\dt
to list tables
psql -U hub-server -W -d hub -h /home/vacation/eleanor_sql_sockets/ele-qa-436057:us-east1:eleanor-postgres
-U user
-W prompt pass
-d dbname
-h host/socket
column information
SELECT column_name, is_nullable, data_type, column_default FROM information_schema.columns WHERE table_name = 'foo';
postgres upgrade
systemctl stop postgresql
sudo -i
mv /var/lib/postgres/data /var/lib/postgres/olddata
mkdir /var/lib/postgres/data /var/lib/postgres/tmp
chown postgres:postgres /var/lib/postgres/data /var/lib/postgres/tmp
exit
sudo -iu postgres
cd /var/lib/postgres/tmp
initdb -D /var/lib/postgres/data --encoding=UTF8
# -b points at the old version's binaries you are upgrading from; ls /opt might reveal it
pg_upgrade -b /opt/pgsql-14/bin -B /usr/bin -d /var/lib/postgres/olddata -D /var/lib/postgres/data
exit
systemctl start postgresql
# 6 is number of cores
sudo -u postgres vacuumdb --all --analyze-in-stages --jobs=6
echo echoing cleanup commands
echo sudo -i
echo rm -fr /var/lib/postgres/olddata
echo rm -fr /var/lib/postgres/tmp
echo exit
x-work
get list from website
// fragile one-off, do not reuse; needs concatenating with paragraphs and titles
re = []
sel = ".JobOpeningListItem_JobDescription__1DPoi"
document.querySelectorAll(sel).forEach(x => re.push(x.innerText))
('vaga: ' + re.join('\n\n vaga: ')).replace(/Learn more >>/g, '')
// real dolar
var makeBrlToUsd = rate => brl => Math.round(brl*rate)
High level
speed
In posts about how someone made something fast, I see the same ideas repeat time after time: linear time complexity, great cache locality, and saturating all available processors.
Interview questions
technical, high-level
How do you handle the diversity of patterns within the JS ecosystem? maybe follow-up: What are some good solutions or patterns you have used that worked for you?
What are your thoughts on JS itself and its current direction?
execution and team work
Have you worked with designers? what were some highlights there?
We all know it's very hard to give estimates, how do you approach that? what's your attitude?
What is a big mistake in product process you've seen or committed yourself?
Your favorite meeting and why?
Tell me a good idea for a project that your company should take on, but will not. Explain from both sides, even if they are wrong.
Walk me through a project you enjoyed start to finish. Explain design decisions (why x and not y?)
code review
https://www.michaelagreiler.com/respectful-constructive-code-review-feedback/
https://phauer.com/2018/code-review-guidelines/
https://mtlynch.io/code-review-love/
Datadog
APM traces
query samples
Service:hub-server @url:*drivers-license*
trace_id:2934837843
tricks
search for logs using kubectl, find trace id, then do https://app.datadoghq.com/apm/trace/{id}
if code looks like this
infra.Logger.Info().Str("event", "message.new")
query to find field would be @event:message.new
RUM custom actions
find with @action.target.name:foobar
Cookie clicker
// node
prestigeLvls = Math.cbrt((allTimeCookies + poppingWrinklers + sellingBuildings + difference) * 10 ** 18 /* 18 for quintillion cookies */) / 10000
// load cookie monster into cookie clicker
Game.LoadMod('https://cookiemonsterteam.github.io/CookieMonster/dist/CookieMonster.js');
Go
Installing private repos as packages
update ~/.gitconfig
with
[url "ssh://[email protected]/"]
insteadOf = https://github.com/
run go env -w GOPRIVATE=github.com/eleanorhealth/*
and add to shell init file export GOPRIVATE=github.com/eleanorhealth/*
run ssh-keyscan github.com >> ~/.ssh/known_hosts (append; a single > would overwrite your existing known hosts)
try to run go mod download "github.com/eleanorhealth/member-server@<latest commit on main>"
and see if you get no errors, if so, run go get "github.com/eleanorhealth/member-server@<latest commit on main>"
to add the package
notes
omitting the commit hash doesn't work
ssh-key add may be needed on mac, it will prompt you for password
code must be merged to main
coverage unit
go get golang.org/x/tools/cmd/cover
go test -race -coverprofile .coverage ./svc/server/application/application/...
go tool cover -html=.coverage
/**
 * Maybe captures the result of some operation that may fail.
 * If there is a non-null error, attempting to retrieve the result will throw.
 */
export class Maybe<T> {
#result: T | null
#error: Error | null
// constructor(data: Data | null, error: Error | null) {
constructor() {
this.#result = null
this.#error = null
}
/**
* throws unless error() returns null.
*/
result(): T {
if (this.#error) {
throw this.#error
}
if (this.#result === null) {
throw new Error("null data")
}
return this.#result
}
/**
* if null, result() is safe to call
*/
error() {
return this.#error
}
/**
* error must be null for result to be read
*/
setResult(data: T) {
this.#result = data
}
/**
* blocks result from being read
*/
setError(message: string) {
this.#error = new Error(message)
}
}
AI
52.7k words in the member-server codebase
244 non-mock, non-test files
find . -type f -name '*.go' | grep -Ev 'test\.go' | grep -v mock
avg 314 words per file (handlers)
avg 215 words per file (codebase)
say 5k words for a massive conversation context with ai
have 59k words of context to use with ai
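The counts above can be reproduced with something like this (a sketch using the same find filters as the note; divide the total by the file count for the per-file average):

```shell
# total words across non-test, non-mock .go files under a directory
count_words() {
  find "$1" -type f -name '*.go' \
    | grep -Ev 'test\.go' | grep -v mock \
    | xargs wc -w | tail -n 1 | awk '{ print $1 }'
}
# usage: count_words .
```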
linux input hardware key remapping
How to remap keys from something such as a keyboard or gaming mouse.
scancodes to keycodes
This step maps scancodes (the hex values the hardware sends) to keycodes (system-defined key names).
use sudo evemu-describe
to get a dump of your current keyboard. you might need to guess which input device is your keyboard here, it's an event* number (event15, event4). also try
ls /dev/input/by-path
the files here can be used as arguments to sudo udevadm info <file>
this will dump useful info about the input device. The /dev/input/event* number is the path of your hardware in the system. see this
run sudo evtest
on a new tab, choose the input device from the list. Now when you press a key, it will dump scancodes (called value in the output) and keycodes as strings. Scancodes are hexadecimal values.
create /etc/udev/hwdb.d/70-custom-keyboard.hwdb
In this example, the 700e3 scancode is the left windows key; it should send a now. The evdev:input* line is a hardware selection, note the *. See link for more info. Try lsusb to find the bus ID. udevadm also helps here. Check with evtest.
a is a keycode. It may be found in the keymap.xkb file created by xkbcomp $DISPLAY keymap.xkb, or in the header file /usr/include/linux/input-event-codes.h, like this: KEY_A -> a
run sudo systemd-hwdb update && sudo udevadm trigger
to load the changes into the kernel. Wait around 5s.
sudo udevadm info /dev/input/by-path/<your keyboard>
expect to see KEYBOARD_KEY_700e3=a (your changes) in the output. Using the keyboard should also reflect the remapped keys at this point. If you don't see changes in the output, try a reboot; sometimes it works with no reboot, sometimes it doesn't.
keycodes to symbols
You might not need this step; if it works already, good.
This step maps keycodes to symbols, the locale- and user-oriented input characters.
dump keymap.xkb with xkbcomp $DISPLAY keymap.xkb
if you have not already. Open it in a text editor.
edit the mapping for <LWIN> now. Expect something like this
load your edits with xkbcomp output.xkb $DISPLAY
testing
run xev
and put the mouse on top of the little window to see what keycodes and symbols are being sent.
Ports and processes
netstat -tulpn # dump all processes using ports
lsof -i :3100 # what's on port 3100 on localhost?
sudoers
fstab tips & tricks
list open files
files preventing a partition from unmounting
lsof /media/diskX