This will deadlock the process if we run it in the same process as JS.
###################################################################
        Writing C software without the standard library
                        Linux Edition
###################################################################

There are many tutorials on the web that explain how to build a
simple hello world in C without the libc on AMD64, but most of them
stop there.

I will provide a more complete explanation that will allow you to
build yourself a little framework to write more complex programs.
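
As a taste of what that framework boils down to, here is a minimal sketch of a libc-free hello world for AMD64 Linux. It is my own illustration rather than this article's code: the names syscall3() and hello.c are arbitrary, while the syscall numbers (1 for write, 60 for exit) come from the AMD64 Linux ABI.

/* hello.c - a minimal sketch of hello world without libc on AMD64 Linux.
 * Build: gcc -nostdlib -static -o hello hello.c */

/* Invoke a 3-argument Linux system call via the syscall instruction.
 * The kernel clobbers rcx and r11, so they are listed as clobbers. */
static long syscall3(long n, long a1, long a2, long a3)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(n), "D"(a1), "S"(a2), "d"(a3)
                      : "rcx", "r11", "memory");
    return ret;
}

/* With -nostdlib there is no crt0, so the linker's default entry
 * point _start is ours to define. */
void _start(void)
{
    static const char msg[] = "Hello, world!\n";
    syscall3(1, 1, (long)msg, sizeof msg - 1); /* write(2) */
    syscall3(60, 0, 0, 0);                     /* exit(2) */
    __builtin_unreachable();
}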
#!/usr/bin/env python3
import socket
import struct
import fcntl

# Ask the vsock driver for this machine's local context ID (CID).
with open("/dev/vsock", "rb") as fd:
    # The ioctl fills in a 32-bit unsigned integer, so pass a 4-byte buffer.
    r = fcntl.ioctl(fd, socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID, b"\0" * 4)
    cid = struct.unpack("I", r)[0]
print("Local CID: {}".format(cid))
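
Run inside a guest VM, this should print the guest's CID (3 or higher); run on the host, it should print 2, the well-known VMADDR_CID_HOST value.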
This aims to be factual information about the sizes of large language models. None of this document was written by AI, and I do not include any information from leaks or rumors. The focus of this document is on base models (the raw text-continuation engines, not 'helpful chatbot/assistants'). This is a view, from a few years ago to today, of one very tiny fraction of the larger LLM story.
- GPT-2, -medium, -large, -xl (2019): 137M, 380M, 812M, 1.61B parameters. Source: openai-community/gpt2. Trained on the unreleased WebText dataset, said to be 40GB of Internet text, which I estimate at roughly 10B tokens (assuming about 4 bytes of text per token). You can see a list of the websites that went into that dataset here: domains.txt.
- GPT-3 aka davinci, davinci-002 (2020): 175B parameters. There is a good breakdown of how those parameters are 'spent' here [How d
# $ apt install rdiff
# $ rdiff --help
# Usage: rdiff [OPTIONS] signature [BASIS [SIGNATURE]]
#              [OPTIONS] delta SIGNATURE [NEWFILE [DELTA]]
#              [OPTIONS] patch BASIS [DELTA [NEWFILE]]
# Options:
#   -v, --verbose             Trace internal processing
#   -V, --version             Show program version
#   -?, --help                Show this help message
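#
# The three subcommands chain into rdiff's usual round trip; a sketch
# with hypothetical file names:
# $ rdiff signature old.bin old.sig           # summarize the basis file
# $ rdiff delta old.sig new.bin patch.delta   # diff a new file against it
# $ rdiff patch old.bin patch.delta new2.bin  # reconstruct the new file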
import { Stats } from './lib/bench.js'
import { SQL } from "bun";

const pool_size = 4

/*
CREATE TABLE Test (
  id integer NOT NULL,
  PRIMARY KEY (id)
);
*/
/**
 * Display a string with padding
 *
 * @param {String} str String to pad
 * @param {Number} l padding length
 * @param {Boolean} [r] true for right padding
 *
 * @return {String}
 */
function pad(str, l, r) {
  // Space padding is assumed; the doc comment only fixes the length
  // and the side.
  return r ? String(str).padEnd(l) : String(str).padStart(l)
}
The Scalable Vector Extension (SVE) is ARM's latest SIMD extension to their instruction set, announced back in 2016. A follow-up SVE2 extension was announced in 2019, designed to incorporate all functionality from ARM's current primary SIMD extension, NEON (aka ASIMD).
Despite being announced 5 years ago, there is currently no generally available CPU which supports any form of SVE (which excludes the [Fugaku supercomputer](https://www.fujitsu.com/global/about/innovation/
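
SVE's headline difference from NEON is that code is vector-length agnostic: hardware may implement any vector width from 128 to 2048 bits, and the same binary adapts at run time. A minimal sketch using C intrinsics (the function name and flags are illustrative; assumes a toolchain shipping <arm_sve.h>, e.g. gcc -march=armv8-a+sve):

/* vla_add.c - a minimal sketch of a vector-length-agnostic loop with
 * SVE intrinsics. */
#include <arm_sve.h>
#include <stdint.h>

void add_arrays(float *dst, const float *a, const float *b, int64_t n)
{
    /* svcntw() is the number of 32-bit lanes in the hardware's
     * vector, whatever that happens to be. */
    for (int64_t i = 0; i < n; i += svcntw()) {
        /* The predicate switches off lanes past the end of the
         * arrays, so no scalar tail loop is needed. */
        svbool_t pg = svwhilelt_b32(i, n);
        svfloat32_t va = svld1_f32(pg, a + i);
        svfloat32_t vb = svld1_f32(pg, b + i);
        svst1_f32(pg, dst + i, svadd_f32_m(pg, va, vb));
    }
}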
Hello libtls - LibreSSL libtls API sample program
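
Below is a minimal sketch of what a hello-world libtls client can look like, assuming LibreSSL's libtls (link with -ltls); the host, port, and request are placeholders, and error handling is kept deliberately terse.

/* hello_libtls.c - a minimal sketch of a libtls client.
 * Build: cc -o hello_libtls hello_libtls.c -ltls */
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <tls.h>

int main(void)
{
    struct tls_config *cfg;
    struct tls *ctx;
    char buf[1024];
    ssize_t n;
    const char *req = "GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n";

    if (tls_init() == -1)
        return 1;
    if ((cfg = tls_config_new()) == NULL || (ctx = tls_client()) == NULL)
        return 1;
    if (tls_configure(ctx, cfg) == -1 ||
        tls_connect(ctx, "www.example.com", "443") == -1 ||
        tls_write(ctx, req, strlen(req)) == -1) {
        fprintf(stderr, "libtls: %s\n", tls_error(ctx));
        return 1;
    }
    /* Drain the response to stdout. */
    while ((n = tls_read(ctx, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    tls_close(ctx);
    tls_free(ctx);
    tls_config_free(cfg);
    return 0;
}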
all: default

default: waitpid waitpid_optimized

waitpid: waitpid.c Makefile
	$(CC) -Wall -Werror -std=gnu17 -ggdb -o waitpid waitpid.c

waitpid_optimized: waitpid.c Makefile
	$(CC) -Wall -Werror -std=gnu17 -Ofast -o waitpid_optimized waitpid.c
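
The two targets build the same waitpid.c twice, once with -ggdb for debugging and once with -Ofast, presumably so the debug and optimized binaries can be timed against each other after a plain make run.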