Bootstrap knowledge of LLMs as fast as possible, with a bias/focus toward GPT.
Avoid being a link dump; provide only valuable, well-tuned information.
Cover neural-network links before starting with transformers.
```python
import argparse
import sys
import threading
import time

import etw            # pywintrace's bindings for Event Tracing for Windows
import etw.evntrace


class RundownDotNetETW(etw.ETW):
    # ETW consumer for .NET (CLR) runtime rundown events.
    def __init__(self, verbose, high_risk_only):
        self.verbose = verbose
        self.high_risk_only = high_risk_only
        # ... (the rest of the constructor is truncated in the original)
```
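For context, here is a minimal sketch of how a consumer like this is typically driven with pywintrace. The provider name and GUID below are placeholders, not the real .NET rundown provider, and the start/stop flow is assumed from pywintrace's documented API rather than taken from the snippet above.

```python
import etw

# Hypothetical provider registration; this GUID is a placeholder,
# not the actual .NET runtime rundown provider GUID.
providers = [etw.ProviderInfo(
    'Example Provider',
    etw.GUID('{11111111-1111-1111-1111-111111111111}'),
)]

# pywintrace's ETW base class takes a provider list and a per-event
# callback; start() opens the trace session and stop() tears it down.
job = etw.ETW(providers=providers, event_callback=print)
job.start()
# ... let events accumulate ...
job.stop()
```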
At the command line, run the following commands; `brew doctor` checks your Homebrew installation for problems, and `brew update` fetches the latest package definitions:

```sh
brew doctor
brew update
```

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Placeholder body: the original is truncated here; this stub just
	// exercises all four imports so the file compiles.
	var buf bytes.Buffer
	json.NewEncoder(&buf).Encode(map[string]string{"answer": strconv.Itoa(42)})
	fmt.Print(buf.String())
}
```
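Assuming the sketch above is saved as `main.go`, `go run main.go` compiles and executes it. Since the original fragment breaks off at the opening brace of `main`, the stub body exists only so the four declared imports compile; Go rejects unused imports.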
Latency numbers every programmer should know:

```
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
```
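To make the relative magnitudes concrete, here is a small sketch (a hypothetical helper, not part of any tool above) that rescales the table so an L1 cache reference takes one second of "human time":

```python
# Latencies from the table above, in nanoseconds.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes with Zippy": 3_000,
    "Send 2K bytes over 1 Gbps network": 20_000,
    "SSD random read": 150_000,
    "Read 1 MB sequentially from memory": 250_000,
}

SCALE = 1 / 0.5  # seconds of "human time" per nanosecond of real time

for name, ns in LATENCIES_NS.items():
    # e.g. main memory: 100 ns -> 200 s, i.e. over three minutes of waiting
    print(f"{name:38s} {ns:>12,.1f} ns  ->  {ns * SCALE:>12,.1f} s")
```

On this scale a main-memory reference takes over three minutes and an SSD random read takes roughly three and a half days, which is why cache behavior dominates performance tuning.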