Created
July 13, 2022 13:44
-
-
Save lifthrasiir/24be669646d399362d4c20b7c8022332 to your computer and use it in GitHub Desktop.
epyt: a simple yet challenging typing game for JS1024 2022 https://js1024.fun/demos/2022/18/readme
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
// _ | |
// ___ __ _ _ _ _| | | |
// / _ \ / _` | | | |__ | by Kang Seonghoon (@senokay, mearie.org) | |
// \__ | (_| | |_| |_| | 2022-07-13, for JS1024 2022 | |
// |___/ \__, | .__/|__/ Public Domain (CC0 1.0) | |
// |_|\___| | |
// | |
// This is a simple typing game where you have lots of short English words to | |
// quickly type before a timer runs out. Perhaps most surprisingly, the game | |
// seems to have an access to lots of real English words (1,903 to be exact). | |
// Given it's a 1 KB of JavaScript, this only leaves at most 0.792 bits per | |
// character (!!!!) which sounds very implausible. While it is thought to be | |
// technically possible to compress a _large enough_ text down to this ratio, | |
// such a ratio is only possible with a very advanced algorithm and lots of | |
// computing power (as of 2022-07, the best ratio for the Large Text Compression | |
// Benchmark is only 0.867 bpc and that requires 1 GB of input and ~60 hours of | |
// heavy GPU computation using RTX 3090). This is not surprising at all; the | |
// data compression is all about exploiting the input distribution and predict | |
// as accurate as possible, and a small data simply lacks any prior context to | |
// estimate the distribution. | |
// | |
// What if there is a way to provide the prior context? Indeed, this is possible | |
// and called the dictionary compression. Normally this works by providing _your | |
// own_ dictionary to be used multiple small-sized data, but for a certain (and | |
// common enough) type of data like text it is worthwhile to provide the default | |
// dictionary. To my knowledge there is the only one such compression format | |
// available from web browsers, namely Brotli (RFC 7932). Brotli's default | |
// dictionary is pretty beefy (120+ KB), is stable unlike other browser-provided | |
// environments, and contains lots of common English words amongst others, which | |
// are exactly what we need. | |
// | |
// Extracting this dictionary is not easy. In fact I'm still amazed that this | |
// is actually (only barely) possible. Brotli is nowadays most commonly used in | |
// HTTP compression, but you can't control the exact HTTP byte stream for web | |
// browsers. The only possibility is to use the WOFF2 font file format which | |
// Brotli was originally designed for, but you need to make a whole font file | |
// to leverage this. This got more complicated recently by the fact that modern | |
// browsers sanitize font files, typically by the OpenType Sanitizer (OTS), as | |
// it is very insecure to put untrusted font files directly to the system. | |
// Therefore we need to make an WOFF2 file that is sane enough to be accepted by | |
// OTS _and_ has a desired byte sequence inside which can be somehow extracted. | |
// After lots of failed experiments, I settled on the glyph widths ("advance") | |
// which get encoded in a sequence of two-byte signed integers with almost no | |
// other restrictions. One of the craziest consequences of this design is that | |
// we can actually pack the source code alongside with the extracted dictionary. | |
// | |
// That's all I can say at the moment. The remainder of this code will require | |
// you a basic understanding of the OpenType file format, the WOFF2 file format, | |
// the Brotli compression format and enough C++ experience to read the OTS code. | |
// Sorry for that, but I thought about this possibility for literally years! | |
// | |
// ------------ | |
// How To Build | |
// ------------ | |
// | |
// The code should be prepared in this order: | |
// | |
// - Prepare the embedded code, which will be run in `setInterval`. For testing | |
// this code can be also given without embedding (set `REPLACE_EMBEDDED_CODE` | |
// to true), but in the final version the embedded code should be extracted | |
// and minified separately. | |
// | |
// - Run this code in node.js with the `embeddedCode` variable set accordingly. | |
// You need to have a Brotli preset dictionary file in the current directory. | |
// The packing process is quite computation intensive, and sometimes can fail | |
// just by a pure chance, so be prepared for that. | |
// | |
// - The resulting code fragment should be pasted in an appropriate place and | |
// the whole source code should be minified. | |
// | |
// This code is designed to be copied directly to the minifier, assuming that | |
// the minimier has a knowledge of `window` being defined in web browsers. | |
// I specifically used Terser and RegPack via Maxime Euziere's terser-online | |
// (https://xem.github.io/terser-online/) but anything else would work. | |
// You will need to set `compress.global_refs` to `{"window": true}` though. | |
// Note that you should NOT use RegPack to minifiy the embedded code, because | |
// Brotli would do a way better job than RegPack there. I also had to make a | |
// final tweak to the RegPack output because it did a suboptimal job when the | |
// deduplicate fragment starts at the first byte. | |
// | |
// node.js has its own Brotli interface so this process should work without | |
// any additional tool, but it is extremely recommended to have a copy of | |
// OTS (https://github.com/khaledhosny/ots; specifically ots-sanitize), WOFF2 | |
// (https://github.com/google/woff2) and Brotli in your command line. You would | |
// have to build at least OTS manually in this case, so make sure to have a copy | |
// of meson and ninja as well. You can get lots of information from enabling | |
// debug outputs of OTS, WOFF2 and Brotli. | |
// | |
// There are lots of things to tune. For example, the packing process will give | |
// you multiple code fragments and some may perform better than others---the | |
// final version for example happened to have more `AA`s in the base64 encoding | |
// and they got deduplicated. There are certain fragments of code that I know | |
// they look longer than alternatives but result in a shorter code when packed. | |
// Ultimately, code golfing is an endless iterative process that has to stop at | |
// some point, and I'm pretty happy with where I stopped at. | |
//////////////////////////////////////////////////////////////////////////////// | |
// creates a _template_ of an WOFF2 font file, which should be reconstructed | |
// in the following way: | |
// | |
// +-----------------------------+ -. | |
// | WOFF2 header | | | |
// +-----------------------------+ | | |
// | WOFF2 table list | | | |
// +-----------------------------+ | | |
// | Brotli stream | | | |
// | .--------------------------+ | `prefix` (base64-encoded) | |
// | | head, hhea, maxp, os2, | | | |
// | | glyf, loca, name, post | | | |
// | | (order can vary) | | | |
// | +--------------------------+ | | |
// | | cmap .-------------------+ -: | |
// | | | glyph IDs | | consecutive 16-bit big endian integers | |
// | | | | | from 0 to `sequenceCount - 1` | |
// | +------+-------------------+ -: | |
// | | hmtx .-------------------+ | | |
// | | | advances | | `infix` (base64-encoded) | |
// | | | | | | |
// | | | (actually source | -: | |
// | | | code followed by | | multiple consecutive 16-bit little endian | |
// | | | dictionary refs) | | integers according to `encodedRange` | |
// +--+------+-------------------+ -' | |
// | |
// all but last segment is guaranteed to be aligned to the 3-byte boundary, | |
// so dynamic segments can be encoded to base64 and everything can be | |
// concatenated with a simple operation. the caller should ensure that | |
// the resulting WOFF2 file is aligned to the 4-byte boundary, otherwise | |
// this function will report a failure. | |
// | |
// due to the dependency this function can be only run in node.js, and also | |
// requires dictionary.bin from the Brotli source code to be available in | |
// the current directory. | |
async function makeTemplate(params) { | |
const zlib = require('node:zlib'), fs = require('node:fs'), crypto = require('node:crypto'); | |
const { | |
embeddedCode, // will be padded to a line comment | |
dictionaryRefs, // [ {length, startDistance, count}, ...] | |
} = params; | |
let dictionary; | |
try { | |
dictionary = String.fromCharCode(...fs.readFileSync('dictionary.bin')); | |
} catch (e) { | |
console.warn('this packer requires dictionary.bin from Brotli in the current directory.'); | |
console.warn('https://github.com/google/brotli/blob/master/c/common/dictionary.bin'); | |
throw e; | |
} | |
const dictionaryRanges = []; | |
let cumulOffset = 0; | |
[0, 0, 0, 0, 10, 10, 11, 11, 10, 10, 10, 10, 10, 9, 9, 8, 7, 7, 8, 7, 7, 6, 6, 5, 5].forEach((ndbits, i) => { | |
dictionaryRanges.push({ offset: cumulOffset, count: ndbits ? 1 << ndbits : 0 }); | |
if (ndbits) cumulOffset += i << ndbits; | |
}); | |
if (cumulOffset !== dictionary.length) throw 'dictionary size does not match'; | |
//-------------------------------------------------------------------------------- | |
let input = ''; | |
for (const dref of dictionaryRefs) { | |
const { offset: wordOffset, count: wordCount } = dictionaryRanges[dref.length]; | |
if (dref.startDistance + dref.count > wordCount) { | |
// transformed dictionary words have varying lengths and can't be reliably used | |
throw 'dictionary reference no longer corresponds to a single word'; | |
} | |
input += dictionary.slice( | |
wordOffset + dref.length * dref.startDistance, | |
wordOffset + dref.length * (dref.startDistance + dref.count)); | |
} | |
const dictionaryInputLength = input.length; | |
// make sure that the final input length is always a multiple of 6 | |
const excessSlashCount = (6 - (embeddedCode.length + 2 + input.length) % 6) % 6; | |
input = embeddedCode + '//' + '/'.repeat(excessSlashCount) + input; | |
// we will have [startCode, endCode] mapping to actual glyphs we want, | |
// and [startCode - excessGlyphs, startCode) used as a placeholder. | |
// excessGlyphs should be at least 1 because the first glyph is effectively unusable. | |
const excessGlyphs = 1; | |
const startCode = 20000; // U+4E20 | |
const endCode = startCode + input.length / 2 - 1; | |
const numGlyphs = input.length / 2 + excessGlyphs; | |
// the second dynamic segment is a sequence of IaC (insert-and-copy) and | |
// distance codes. an IaC code dictates the number of literal bytes to | |
// insert (0 in this case) and the number of past bytes to copy (4---7 in | |
// this case). a distance code is how far those past bytes should be, and | |
// since we need to refer to the built-in dictionary the encoded distance | |
// should be large enough to get past the LZ77 window. | |
// | |
// since the size of LZ77 window will increase as new word gets decoded, | |
// we need to track both the size of window and the distance code and that's | |
// pretty cumbersome. but there is a trick: we can simply increase the | |
// window size up to the maximum allowed and work from there. for example, | |
// given WBITS=14 the maximum window size is 2^14 - 16, so distance codes | |
// from 2^14 - 15 always refer to the built-in dictionary. | |
// | |
// distance codes are organized in the way that large enough consecutive | |
// distances are groupped into a single code plus extra bits, so if a code | |
// corresponds to distances starting from 16389 or lower we can make that | |
// code zero bits long and entirely work with extra bits. in Brotli distance | |
// code trees can be configured via two parameters NPOSTFIX and NDIRECT, | |
// but unfortunately the minimum distance possible seems to be 2^14 - 3, so | |
// we have to give up a few (12 to be exact) starting distances. in return | |
// the resulting extra bits are much smaller and easier to work with. | |
// | |
// long story short, we will use the following prefix codes: | |
// - no literal codes. | |
// - IaC codes 130..133, i.e. insert length 0 and copy lengths 4--7. | |
// - a single implied distance code spanning (2^WBITS - 3) through | |
// (2^WBITS + 2^(WBITS-1) - 4), using (WBITS - 1) extra bits. | |
// this is possible when NPOSTFIX=0, NDIRECT=0 and the resulting | |
// code will be (12 + 2 * WBITS). | |
// | |
// these result in each dictionary word being represented as a 16-bit | |
// little endian integer in this way: | |
// - 2 low bits are IaC codes, which copy lengths are 00=4, 10=5, 01=6 and | |
// 11=7. these are reversed because, while prefix codes are sorted | |
// lexicographically they are read from the least significant bit. | |
// - 14 high bits are extra bits for the implied distance code. since this | |
// should be also (WBITS - 1) bits long, WBITS should be set to 15. | |
// | |
// there are other possible options. for example, WBITS can be lowered | |
// if we use more IaC codes (which will be mostly unused but that's fine). | |
// on the other hand, WBITS can't be higher because we then have to use | |
// shorter IaC codes to fit to two-byte integers, but that'd mean that we | |
// can only have two kind of lengths and that's kinda boring for this game. | |
const windowBits = 15; | |
// this can be made larger, at the expense of more complex packer code | |
const minTargetSizeBeforeDynamic2 = (1 << windowBits) - 16; | |
const encodedRanges = []; // {start, step, count} | |
for (const { length, startDistance, count } of dictionaryRefs) { | |
if (length < 4 || length > 7) throw 'dictionary word length should be 4--7'; | |
if (startDistance < 12) throw 'some initial dictionary words cannot be used'; | |
encodedRanges.push({ | |
start: (startDistance - 12) << 2 | [0, 2, 1, 3][length - 4], | |
step: 4, | |
count, | |
}); | |
} | |
//-------------------------------------------------------------------------------- | |
// two-byte big endian integer | |
const two = v => [v >>> 8 & 255, v & 255]; | |
// four-byte big endian integer | |
const four = v => [v >>> 24 & 255, v >>> 16 & 255, v >>> 8 & 255, v & 255]; | |
// variable-length 32-bit integer, UIntBase128 in the WOFF2 specification | |
const varfour = v => { | |
const b = []; | |
while (v > 127) b.unshift(128 | v & 127), v >>>= 7; | |
b.unshift(128 | v); | |
b[b.length - 1] &= 127; | |
return b; | |
}; | |
const repeat = (n, f) => { | |
const r = []; | |
for (let i = 0; i < n; ++i) r.push(...f(i)); | |
return r; | |
}; | |
const maxWidth = 0x7fff; // this is signed and can't be 0xffff | |
// Now we have to fill all the required table. | |
// font header | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/head | |
const head = [ | |
...two(1), // majorVersion | |
...two(0), // minorVersion | |
...four(0), // fontRevision | |
...four(0), // checksumAdjustment (WOFF2 recomputes this, so we don't have to be correct) | |
...four(0x5f0f3cf5), // magicNumber | |
...two(1<<0 | 1<<1 | 1<<3), // flags | |
...two(16), // unitsPerEm (the minimum possible) | |
0,0,0,0,0,0,0,0, // created | |
0,0,0,0,0,0,0,0, // modified | |
...two(0), ...two(0), // xMin, yMin | |
...two(maxWidth), ...two(0), // xMax, yMax (excess xMax is okay, as long as it's positive) | |
...two(0), // macStyle | |
...two(0), // lowestRectPPEM (0 is probably okay) | |
...two(0), // fontDirectionHint (deprecated) | |
...two(0), // indexToLocFormat | |
...two(0), // glyphDataFormat | |
]; | |
// horizontal header | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/hhea | |
const hhea = [ | |
...two(1), // majorVersion | |
...two(0), // minorVersion | |
...two(0), // ascender | |
...two(0), // descender | |
...two(0), // lineGap | |
...two(maxWidth), // advanceWidthmax | |
...two(0), // minLeftSideBearing | |
...two(0), // minRightSideBearing | |
...two(maxWidth), // xMaxExtent | |
...two(1), // caretSlopeRise | |
...two(0), // caretSlopeRun | |
...two(0), // caretOffset | |
...two(0), ...two(0), ...two(0), ...two(0), // reserved0..3 | |
...two(0), // metricDataFormat | |
...two(numGlyphs), // numberOfHMetrics | |
]; | |
// maximum profile | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/maxp | |
const maxp = [ | |
...four(0x00010000), // version | |
...two(numGlyphs), // numGlyphs | |
...two(1), // maxPoints (at least 1) | |
...two(1), // maxContours (at least 1) | |
...two(0), // maxCompositePoints | |
...two(0), // maxCompositeContours | |
...two(2), // maxZones | |
...two(0), // maxTwilightPoints | |
...two(0), // maxStorage | |
...two(0), // maxFunctionDefs | |
...two(0), // maxInstructionDefs | |
...two(0), // maxStackElements | |
...two(0), // maxSizeOfInstructions | |
...two(0), // maxComponentElements | |
...two(0), // maxComponentDepth | |
]; | |
// OS/2 and Windows metrics | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/os2 | |
const os2 = [ | |
...two(0), // version | |
...two(0), // xAvgCharWidth | |
...two(500), // usWeightClass (anything from 100 to 900 will work) | |
...two(5), // usWidthClass (normal) | |
...two(0), // fsType | |
...two(0), // ySubscriptXSize | |
...two(0), // ySubscriptYSize | |
...two(0), // ySubscriptXOffset | |
...two(0), // ySubscriptYOffset | |
...two(0), // ySuperscriptXSize | |
...two(0), // ySuperscriptYSize | |
...two(0), // ySuperscriptXOffset | |
...two(0), // ySuperscriptYOffset | |
...two(0), // yStrikeoutSize | |
...two(0), // yStrikeoutPosition | |
...two(0), // sFamilyClass | |
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, // panose (matches any spec) | |
...four(0), ...four(0), ...four(0), ...four(0), // ulUnicodeRange1..4 | |
...four(0x00000000), // achVendorId | |
...two(0x0040), // fsSelection (regular) | |
...two(0), // usFirstCharIndex (OTS does fix this so leave it as 0) | |
...two(0), // usLastCharIndex (ditto) | |
...two(0), // sTypoAscender | |
...two(0), // sTypoDescender | |
...two(0), // sTypoLineGap | |
...two(0), // usWinAscent | |
...two(0), // usWinDescent | |
]; | |
// glyph data | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/glyf | |
// | |
// WOFF2 provides two ways to represent the glyf table, but the no-op | |
// transform (version 3) is not usable due to a possible WOFF2 bug. | |
// so we need to use the default transform (version 0) instead. | |
// this also means that the loca table (index to location) is entirely | |
// reconstructed from this transformed data and never stored. | |
// | |
// we don't use TrueType outlines at all, but OTS will complain when every | |
// glyph is empty, so we put a single point to the first glyph. | |
const glyfBboxBitmapSize = ((numGlyphs + 31) >>> 5) * 4; | |
const glyf = [ | |
...two(0), // reserved | |
...two(0), // optionFlags | |
...two(numGlyphs), // numGlyphs | |
...two(0), // indexFormat (= head.indexToLocFormat) | |
...four(2 * numGlyphs), // nContourStreamSize | |
...four(1), // nPointsStreamSize | |
...four(1), // flagStreamSize | |
...four(2), // glyphStreamSize | |
...four(0), // compositeStreamSize | |
...four(glyfBboxBitmapSize), // bboxStreamSize | |
...four(0), // instructionStreamSize | |
// nContourStream = [1, 0...] | |
...repeat(numGlyphs, i => two(i === 0 ? 1 : 0)), | |
// nPointsStream = [1] | |
1, | |
// flagStream = [0] | |
// flag 0: x = 0, y = 0 - (8 additional bits) | |
0, | |
// glyphStream | |
0b00000000, // additional bits for flag 0 | |
0, // instruction length for the glyph | |
// bboxBitmap = [0...] | |
...repeat(glyfBboxBitmapSize, _ => [0]), | |
]; | |
// naming table | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/name | |
// | |
// names are actually irrelevant in the context of web fonts because they | |
// are always overriden by CSS-provided names. this is also why we can't | |
// use the name table to extract the dictionary, as there is no interface | |
// to access the original name of given web font. (amusingly there *is* | |
// an interface for variable font metadata, but I don't think they can be | |
// exploited for compression purpose.) OTS will synthesize any required | |
// names anyway. | |
const name = [ | |
...two(0), // version | |
...two(0), // count | |
...two(6), // storageOffset (can't be zero, OTS will complain) | |
// plus padding; see below | |
]; | |
// PostScript table | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/post | |
const post = [ | |
...four(0x00030000), // version | |
...four(0), // italicAngle | |
...two(0), // underlinePosition | |
...two(0), // underlineThickness | |
...four(0), // isFixedPitch | |
...four(0), ...four(0), // min/maxMemType42 (suggestive, can be left as 0) | |
...four(0), ...four(0), // min/maxMemType1 (ditto) | |
]; | |
// character to glyph index mapping table | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/cmap | |
// | |
// now here's actually an interesting bit. we need to associate glyphs with | |
// actual characters (of given encoding, here Unicode). the mapping itself | |
// is trivial, but there are multiple possible encodings of given mapping | |
// and the degree of support varies. specifically OTS only accepts a handful | |
// number of selected combinations which we have to consider. | |
// | |
// TODO format 12 is actually easier to build, but I couldn't get it working | |
// in Firefox despite of it using the same OTS library as other browsers. | |
const sequenceCount = numGlyphs + (3 - numGlyphs) % 3; // for the 3-byte alignment | |
const cmap1 = [ | |
...two(0), // version | |
...two(1), // numTables | |
...two(3), // platformID (Microsoft) | |
...two(1), // encodingID (Unicode) | |
...four(12), // subtableOffset | |
// cmap format 4 subtable: | |
...two(4), // format | |
...two(32 + sequenceCount * 2), // length | |
...two(0), // language (default) | |
// segCount = 2 | |
// maps [startCode - excessGlyphs, startCode + input.length / 2) to glyph ids [0, numGlyphs) | |
...two(4), // segCountX2 | |
...two(4), // searchRange | |
...two(1), // entrySelector | |
...two(0), // rangeShift | |
...two(startCode - excessGlyphs + numGlyphs - 1), ...two(0xffff), // endCode[segCount] | |
...two(0), // reservedPad | |
...two(startCode - excessGlyphs), ...two(0xffff), // startCode[segCount] | |
...two(0x10000 - (startCode - excessGlyphs)), ...two(1), // idDelta[segCount] | |
...two(0), ...two(0), // idRangeOffsets[segCount] | |
// glyphIdArray[] is encoded in the first dynamic segment following | |
]; | |
const cmap2 = repeat(sequenceCount, i => two(i)); | |
// horizontal metrics | |
// https://docs.microsoft.com/en-us/typography/opentype/spec/hmtx | |
// | |
// this table encodes a sequence of advance widths and left side bearings, | |
// and those are almost directly visible to the 2D canvas context interface. | |
// these are interleaved in the original TrueType format; for the simplicity | |
// we only use advance widths which are encoded as a sequence of signed | |
// 16-bit integers when the transform version 1 is in use. | |
const hmtx1 = [ | |
0x03, // flags (both lsb and leftSideBearing inferred) | |
// advanceWidth[numberOfHMetrics] follows | |
...repeat(excessGlyphs, i => two(0)), | |
...[...input.slice(0, -dictionaryInputLength)].map(c => c.charCodeAt(0)), | |
// the second dynamic segment follows | |
]; | |
const hmtx2 = [...input.slice(-dictionaryInputLength)].map(c => c.charCodeAt(0)); | |
// ensure that the font size is at least the target size. while many tables | |
// can be fine with excess data, glyf is already large enough so enlarging | |
// glyf has a smaller effect than enlarging other smaller tables. | |
let uncompressedSize = | |
head.length + hhea.length + maxp.length + os2.length + | |
glyf.length + name.length + post.length + | |
cmap1.length + cmap2.length + hmtx1.length + hmtx2.length; | |
let glyfPadSize = minTargetSizeBeforeDynamic2 - (uncompressedSize - hmtx2.length); | |
if (glyfPadSize < 0) throw `target size ${minTargetSizeBeforeDynamic2}B cannot be met (${uncompressedSize - hmtx2.length}B already)`; | |
for (let i = 0; i < glyfPadSize; ++i) glyf.push(0); | |
uncompressedSize += glyfPadSize; | |
const tables = [ | |
{ tag: 1, transform: 0, size: head.length }, | |
{ tag: 2, transform: 0, size: hhea.length }, | |
{ tag: 4, transform: 0, size: maxp.length }, | |
{ tag: 6, transform: 0, size: os2.length }, | |
// unlike other tables, the glyf table can be reconstructed in multiple ways | |
// and WOFF2 specifically ignores the original size of the glyf table. | |
{ tag: 10, transform: 0, size: glyf.length, originalSize: 0 }, | |
{ tag: 11, transform: 0, size: 0, originalSize: 2 + 2 * numGlyphs }, | |
{ tag: 5, transform: 0, size: name.length }, | |
{ tag: 7, transform: 0, size: post.length }, | |
{ tag: 0, transform: 0, size: cmap1.length + cmap2.length }, | |
{ tag: 3, transform: 1, size: hmtx1.length + hmtx2.length, originalSize: 4 * numGlyphs }, | |
]; | |
const uncompressedStatic1 = new Uint8Array([...head, ...hhea, ...maxp, ...os2, ...glyf, ...name, ...post, ...cmap1]); | |
const uncompressedDynamic1 = new Uint8Array(cmap2); | |
const uncompressedStatic2 = new Uint8Array(hmtx1); | |
const uncompressedDynamic2 = new Uint8Array(hmtx2); | |
const uncompressedHeader = [ | |
...four(0x774f4632), // signature | |
...four(0x00010000), // flavor (sfnt version) | |
...four(0), // total size (patched later) | |
...two(10), // # tables | |
...two(0), // reserved | |
...four(0x00010000), // total uncompressed size (unused, but should be larger than the font itself) | |
...four(0), // total compressed size (patched later) | |
...two(0), // font major version | |
...two(0), // font minor version | |
...four(0), // metadata block offset | |
...four(0), // metadata compressed size | |
...four(0), // metadata uncompressed size | |
...four(0), // private data block offset | |
...four(0), // private data block size | |
]; | |
for (const { tag, transform, size, originalSize } of tables) { | |
uncompressedHeader.push(tag | transform << 6); | |
if (typeof originalSize === 'number') uncompressedHeader.push(...varfour(originalSize)); | |
uncompressedHeader.push(...varfour(size)); | |
} | |
//-------------------------------------------------------------------------------- | |
class CompressingStream { | |
constructor(params) { | |
this.c = zlib.createBrotliCompress({ params }); | |
} | |
writeAtomically(buf) { | |
this.c.write(buf); | |
this.c.flush(zlib.constants.BROTLI_OPERATION_FLUSH); | |
return new Promise((resolve, reject) => { | |
// assumes that buf is small enough that this is triggered only once! | |
this.c.once('data', chunk => resolve(chunk)); | |
this.c.on('error', e => reject(e)); | |
}); | |
} | |
endAtomically(buf) { | |
if (buf) this.c.write(buf); | |
this.c.end(); | |
return new Promise((resolve, reject) => { | |
const chunks = []; | |
this.c.once('data', chunk => chunks.push(chunk)); | |
this.c.on('end', () => resolve(Buffer.concat(chunks))); | |
this.c.on('error', e => reject(e)); | |
}); | |
} | |
} | |
// counts the number of extra bits that can be safely removed from given | |
// partial compressed data, which should NOT have been completed but be | |
// known to be aligned to the byte boundary somehow. | |
const countPadding = compressed => { | |
// the byte boundary alignment means that the last metablock happens | |
// to be aligned or an empty metablock with empty metadata has been added: | |
// | |
// ----+-------------+-----+ ----+-------------------+ | |
// ... | metablock N |empty| or ... | metablock N | | |
// ----+-------------+-----+ ----+-------------------+ | |
// | |
// the empty metablock consists of 6 header bits (000110; corresponding to | |
// ISLAST=1, MNIBBLES=0, MSKIPBYTES=0 and thus MSKIPLEN=0) and any padding | |
// which should be always zero. so we first determine the padding and then | |
// check if the header bits are as expected. | |
if (compressed.length === 0) return 0; | |
// append an empty metablock (ISLAST=1, ISLASTEMPTY=1)... | |
const lastByte = compressed.length - 1; | |
compressed = new Uint8Array([...compressed, 3]); | |
// ...so that this will succeed | |
const input = zlib.brotliDecompressSync(compressed); | |
let padding = 0; | |
for (; padding < 8 && (compressed[lastByte] >> (7 - padding)) === 0; ++padding) { | |
const mask = 1 << (7 - padding); | |
compressed[lastByte] |= mask; | |
try { | |
zlib.brotliDecompressSync(compressed); | |
// if this succeeds, it means that the last metablock is valid | |
// regardless of the final bit, but that final bit is not padding | |
break; | |
} catch (e) { | |
// unexpected code means that we have altered something other than | |
// the metadata padding after MSKIPLEN | |
if (e.code !== 'ERR_PADDING_1') break; | |
} finally { | |
compressed[lastByte] &= ~mask; | |
} | |
} | |
if (padding === 8) throw 'metadata padding spans one full byte?!'; | |
// try to remove header bits | |
let bits = (compressed[lastByte] << 8 | (lastByte > 0 ? compressed[lastByte - 1] : 0)); | |
const possiblePadding = padding + 6; | |
if ((lastByte > 0 || possiblePadding <= 8) && (bits >> (16 - possiblePadding)) === 0b000110) { | |
let filteredBits = bits; | |
filteredBits &= (1 << (16 - possiblePadding)) - 1; // remove the possible metadata metablock | |
filteredBits |= 3 << (16 - possiblePadding); // and add an empty metablock instead | |
compressed[lastByte] = filteredBits >> 8; | |
if (lastByte > 0) compressed[lastByte - 1] = filteredBits & 255; | |
try { | |
if (zlib.brotliDecompressSync(compressed).equals(input)) padding = possiblePadding; | |
} catch (e) {} | |
} | |
// verify that the new padding is indeed correct | |
bits &= (1 << (16 - padding)) - 1; | |
bits |= 3 << (16 - padding); | |
compressed[lastByte] = bits >> 8; | |
if (lastByte > 0) compressed[lastByte - 1] = bits & 255; | |
if (!zlib.brotliDecompressSync(compressed).equals(input)) throw 'verification failed'; | |
return padding; | |
}; | |
// remove enough bytes from `buf` so that there are at most 7 padding bits | |
// in `buf`. returns the updated number of padding bits. | |
const shrinkPadding = (buf, padding) => { | |
if (padding > buf.length * 8) throw 'padding is too large'; | |
while (padding >= 8) { | |
buf.pop(); | |
padding -= 8; | |
} | |
return padding; | |
}; | |
// append given number of bits to a byte array, in which the last `padding` | |
// bits should be ignored. if the resulting byte array is not aligned to | |
// the byte boundary it will be padded to zeroes. | |
const appendBits = (buf, padding, bits, nbits) => { | |
padding = shrinkPadding(buf, padding); | |
bits = BigInt(bits); | |
buf[buf.length - 1] = (buf[buf.length - 1] & (255 >> padding)) | Number(bits << BigInt(8 - padding) & 255n); | |
bits >>= BigInt(padding); | |
nbits -= padding; | |
while (nbits > 0) { | |
// it's possible that nbits < 8, in which case this is the last | |
// iteration and missing bits are automatically padded to zeroes | |
buf.push(Number(bits & 255n)); | |
bits >>= 8n; | |
nbits -= 8; | |
} | |
}; | |
// true if this number of bits results in, after padded to the next byte | |
// boundary, no `=` padding after base64 encoding. | |
const base64PaddingOkay = nbits => nbits % 24 === 0 || nbits % 24 > 16; | |
// there are multiple possible encodings for static segments, | |
// so we vary Brotli parameters to get the shortest result (in bits). | |
let compressedStatic1, compressedStatic2, padding1, padding2, size = Infinity; | |
const randomDynamic1 = crypto.randomBytes(uncompressedDynamic1.length); | |
for (const mode of [ | |
zlib.constants.BROTLI_MODE_GENERIC, | |
zlib.constants.BROTLI_MODE_TEXT, | |
zlib.constants.BROTLI_MODE_FONT | |
]) { | |
// lower quality levels result in a different bitstream organization | |
for (let quality = 8; quality <= 11; ++quality) { | |
// unfortunately for us, metablocks share not only the LZ77 window | |
// but also the past distance buffer which we have no control over. | |
// what we are trying to do here is to put random bytes in place of | |
// the 1st dynamic segment, so that Brotli hopefully compresses this | |
// as an uncompressed metablock just like what we will do later. | |
// this is arguably a hack but we have a final verification so the | |
// worst possible outcome is just a spontaneous verification error. | |
const stream = new CompressingStream({ | |
[zlib.constants.BROTLI_PARAM_QUALITY]: quality, | |
[zlib.constants.BROTLI_PARAM_MODE]: mode, | |
[zlib.constants.BROTLI_PARAM_LGWIN]: windowBits, | |
}); | |
const s1 = await stream.writeAtomically(uncompressedStatic1); | |
const rd1 = await stream.writeAtomically(randomDynamic1); | |
if (rd1.length !== randomDynamic1.length + 3) throw 'rd1 is probably not an uncompressed metablock'; | |
const s2 = await stream.writeAtomically(uncompressedStatic2); | |
const end = await stream.endAtomically(); | |
if (end.length !== 1 || end[0] !== 3) throw 'writeAtomically somehow did not work well'; | |
// for the same reason, s2 now depends on s1 and they can't be | |
// handled independently. | |
const p1 = countPadding(s1); | |
const p2 = countPadding([...s1, ...rd1, ...s2]); | |
if (size > (s1.length + s2.length) * 8 - p1 - p2) { | |
size = (s1.length + s2.length) * 8 - p1 - p2; | |
compressedStatic1 = s1; | |
compressedStatic2 = s2; | |
padding1 = p1; | |
padding2 = p2; | |
} | |
} | |
} | |
// various bit patterns to build a compressed Brotli bitstream. | |
// | |
// note that the bitstream is always read as if all bytes are concatenated | |
// in the way that bytes `12 34 56` are interpreted as 0x563412, and least | |
// significant bits are read each time (or each bit for prefix codes). | |
// so the convention here is to progress from right (LSB) to left (MSB). | |
const emptyMetablockSize = 6, emptyMetablock = // empty metablock | |
// 0 - ISLAST | |
// 11 - MNIBBLES (0) | |
// 0 - reserved | |
//00 - MSKIPBYTES (padding to the next byte boundary follows) | |
0b000110; | |
const s1headerSize = 20, s1header = // metablock header for the 1st dynamic segment | |
// 0 - ISLAST | |
// 00 - MNIBBLES (4) | |
// xxxxxxxxxxxxxxxx - MLEN-1 | |
//1 - ISUNCOMPRESSED (padding to the next byte boundary follows) | |
0b10000000000000000000 | ((uncompressedDynamic1.length - 1) << 3); | |
// the second static segment ends with a metablock header, which has to be | |
// manually aligned to the 3-byte boundary. we first put enough empty | |
// metablocks _before_ the segment (which is guaranteed to be byte-aligned) | |
// to do a coarse-grained adjustment and pick an appropriate bit sequence | |
// to exactly align to the next boundary. quite annoying, but it's possible. | |
const s2headerSize = 33, s2header = // metablock header for the 2nd dynamic segment | |
// 1 - ISLAST | |
// 0 - ISLASTEMPTY | |
// 00 - MNIBBLES (4) | |
// xxxxxxxxxxxxxxxx - MLEN-1 | |
// 0 - NBLTYPESL (1) | |
// 0 - NBLTYPESI (1) | |
// 0 - NBLTYPESD (1) | |
// 00 - NPOSTFIX (0) | |
// 0000 - 4 most significant bits of NDIRECT (0) | |
// 00 - context modes for literal block 0 (unused) | |
// 0 - NTREESL (1) | |
//0 - NTREESD (1) | |
0b000000000000000000000000000000001n | (BigInt(uncompressedDynamic2.length - 1) << 4n); | |
const s2literalTreeMinSize = 12, s2literalTreeOptions = new Map([ // literal tree (unused) | |
// 00 - HSKIP (all tree codes are present) | |
// 0111 - tree code 1: length 1 (0) | |
// 0111 - tree code 2: length 1 (1) | |
// 0 - literal 0: length 1 (0) | |
// 0 - literal 1: length 1 (1) | |
[12, 0b000111011100n], | |
// ... | |
// 0 - literal 0: length 1 (0) | |
// 1 - literal 1: length 2 (01) | |
// 1 - literal 2: length 2 (11) | |
[13, 0b0110111011100n], | |
// ... | |
// 1 - literal 0: length 2 (00) | |
// 1 - literal 1: length 2 (10) | |
// 1 - literal 2: length 2 (01) | |
// 1 - literal 3: length 2 (11) | |
[14, 0b11110111011100n], | |
// 15 is probably impossible, which makes different IaC trees necessary. | |
// 00 - HSKIP (all tree codes are present) | |
// 011 - tree code 1: length 2 (01) | |
// 0111 - tree code 2: length 1 (0) | |
// 011 - tree code 3: length 2 (11), unused | |
// 01 - literal 1: length 1 (0) | |
// 0 - literal 2: length 2 (01) | |
// 0 - literal 3: length 2 (11) | |
[16, 0b0001011011101100n], | |
// 00 - HSKIP (all tree codes are present) | |
// 0111 - tree code 1: length 1 (0) | |
// 011 - tree code 2: length 2 (01) | |
// 011 - tree code 3: length 2 (11), unused | |
// 0 - literal 1: length 1 (0) | |
// 01 - literal 2: length 2 (01) | |
// 01 - literal 3: length 2 (11) | |
[17, 0b01010011011011100n], | |
]); | |
const s2iacTreeMinSize = 36, s2iacTreeOptions = new Map([ // IaC (insert-and-copy) code tree | |
// | | | | 00 - HSKIP (all tree codes are listed) | |
// | | | | 00 - tree code 1: length 0 | |
// | | | |0111 - tree code 2: length 1 (0) | |
// | | | 00| - tree code 3: length 0 | |
// | | | 00 | - tree code 4: length 0 | |
// | | | 00 | - tree code 0: length 0 | |
// | | |00 | - tree code 5: length 0 | |
// | | 0111| | - tree code 17: length 1 (1), zero repeat | |
// | | | | IaC code 0..129: length 0 | |
// | |0001 | | - zero repeat count = 3 + 0b000 = 3 | |
// | 1101| | | - zero repeat count = 8 * (3 - 2) + 3 + 0b110 = 17 | |
// |1111 | | | - zero repeat count = 8 * (17 - 2) + 3 + 0b111 = 130 | |
// 0| | | | - IaC code 130 (insert 0, copy 4): length 2 (00) | |
// 0 | | | | - IaC code 131 (insert 0, copy 5): length 2 (10) | |
// 0 | | | | - IaC code 132 (insert 0, copy 6): length 2 (01) | |
// 0 | | | | - IaC code 133 (insert 0, copy 7): length 2 (11) | |
[36, 0b0000_11111101_00010111_00000000_01110000n], | |
// | | | | 00 - HSKIP (all tree codes are listed) | |
// | | | | 00 - tree code 1: length 0 | |
// | | | |0111 - tree code 2: length 1 (0) | |
// | | | 00| - tree code 3: length 0 | |
// | | | 00 | - tree code 4: length 0 | |
// | | | 00 | - tree code 0: length 0 | |
// | | 0|11 | - tree code 5: length 2 (01), unused | |
// | | 011 | | - tree code 17: length 2 (11), zero repeat | |
// | | | | IaC code 0..129: length 0 | |
// | 0|0011 | | - zero repeat count = 3 + 0b000 = 3 | |
// | 11011 | | | - zero repeat count = 8 * (3 - 2) + 3 + 0b110 = 17 | |
// 111|11 | | | - zero repeat count = 8 * (17 - 2) + 3 + 0b111 = 130 | |
// 0 | | | | - IaC code 130 (insert 0, copy 4): length 2 (00) | |
// 0 | | | | - IaC code 131 (insert 0, copy 5): length 2 (10) | |
// 0 | | | | - IaC code 132 (insert 0, copy 6): length 2 (01) | |
// 0 | | | | - IaC code 133 (insert 0, copy 7): length 2 (11) | |
[39, 0b0000111_11110110_00110110_11000000_01110000n], | |
// | | | | 00 - HSKIP (all tree codes are listed) | |
// | | | | 011 - tree code 1: length 2 (01), unused | |
// | | | |011 - tree code 2: length 2 (11) | |
// | | | 00| - tree code 3: length 0 | |
// | | | 00 | - tree code 4: length 0 | |
// | | | 00 | - tree code 0: length 0 | |
// | | |00 | - tree code 5: length 0 | |
// | | 0111| | - tree code 17: length 1 (0), zero repeat | |
// | | | | IaC code 0..129: length 0 | |
// | |0000 | | - zero repeat count = 3 + 0b000 = 3 | |
// | 1100| | | - zero repeat count = 8 * (3 - 2) + 3 + 0b110 = 17 | |
// |1110 | | | - zero repeat count = 8 * (17 - 2) + 3 + 0b111 = 130 | |
// 11| | | | - IaC code 130 (insert 0, copy 4): length 2 (00) | |
// 11 | | | | - IaC code 131 (insert 0, copy 5): length 2 (10) | |
// 11 | | | | - IaC code 132 (insert 0, copy 6): length 2 (01) | |
// 11 | | | | - IaC code 133 (insert 0, copy 7): length 2 (11) | |
[40, 0b11111111_11101100_00000111_00000000_01101100n], | |
]); | |
const s2distanceSize = 10, s2distance = // distance code tree | |
// 01 - HSKIP (simple tree) | |
// 00 - NSYM-1 | |
//xxxxxx - distance code: length 1 (0) | |
0b0000000001n | BigInt((12 + 2 * windowBits) << 4); | |
// we guarantee that there is a combination of s2literal/iacTree for | |
// all bit sizes between s2minSize and s2minSize + 7 | |
const s2minSize = s2headerSize + s2literalTreeMinSize + s2iacTreeMinSize + s2distanceSize; | |
// prepare the first static segment | |
const static1 = [...uncompressedHeader, ...compressedStatic1]; | |
padding1 = shrinkPadding(static1, padding1); | |
if (!base64PaddingOkay(static1.length * 8 - padding1 + 20)) { | |
// append empty metablocks to satisfy the base64 padding requirement. | |
// each metablock with empty metadata is 6 bits long so it should be | |
// possible to pad the buffer to any number of integral bytes. | |
// (note that a non-empty metablock should be at least 22 bits long, | |
// so we necessarily need to use multiple empty metablocks here.) | |
appendBits(static1, padding1, emptyMetablock, emptyMetablockSize); | |
padding1 = 0; | |
while (!base64PaddingOkay(static1.length * 8 + 20)) { | |
static1.push(emptyMetablock); | |
} | |
} | |
appendBits(static1, padding1, s1header, s1headerSize); | |
// prepare the second static segment | |
const static2 = [...compressedStatic2]; | |
padding2 = shrinkPadding(static2, padding2); | |
while (!base64PaddingOkay(static2.length * 8 - padding2 + s2minSize)) { | |
// we know that the beginning of static2 is aligned to the byte boundary, | |
// so this is fine and should not alter padding2 | |
static2.unshift(emptyMetablock); | |
} | |
let s2literalTreeSizeUsed = Infinity, s2iacTreeSizeUsed = Infinity; | |
for (const [s2literalTreeSize, s2literalTree] of s2literalTreeOptions.entries()) { | |
for (const [s2iacTreeSize, s2iacTree] of s2iacTreeOptions.entries()) { | |
const size = static2.length * 8 - padding2 + | |
s2headerSize + s2literalTreeSize + s2iacTreeSize + s2distanceSize; | |
if (size % 24 !== 0) continue; | |
if (s2literalTreeSizeUsed + s2iacTreeSizeUsed > s2literalTreeSize + s2iacTreeSize) { | |
s2literalTreeSizeUsed = s2literalTreeSize; | |
s2iacTreeSizeUsed = s2iacTreeSize; | |
} | |
} | |
} | |
appendBits( | |
static2, | |
padding2, | |
s2header | | |
s2literalTreeOptions.get(s2literalTreeSizeUsed) << BigInt(s2headerSize) | | |
s2iacTreeOptions.get(s2iacTreeSizeUsed) << BigInt(s2headerSize + s2literalTreeSizeUsed) | | |
s2distance << BigInt(s2headerSize + s2literalTreeSizeUsed + s2iacTreeSizeUsed), | |
s2headerSize + s2literalTreeSizeUsed + s2iacTreeSizeUsed + s2distanceSize); | |
// now patch the uncompressed header in static1 to correct compressed sizes | |
let totalSize = static1.length + sequenceCount * 2 + static2.length; | |
for (const { count } of encodedRanges) totalSize += count * 2; | |
if (totalSize % 4 !== 0) throw `WOFF2 file size should be a multiple of 4 bytes, the actual size ${totalSize}B`; | |
static1.splice(8, 4, ...four(totalSize)); | |
static1.splice(20, 4, ...four(totalSize - uncompressedHeader.length)); | |
//-------------------------------------------------------------------------------- | |
const prefix = btoa(String.fromCharCode(...static1)); | |
const infix = btoa(String.fromCharCode(...static2)); | |
if (prefix.endsWith('=') || infix.endsWith('=')) throw 'padding compensation did not work!'; | |
// try to reconstruct the font file | |
const recons1 = btoa(String.fromCharCode(...repeat(sequenceCount, i => [i >> 8, i & 255]))); | |
if (recons1.endsWith('=')) throw 'padding compensation did not work!'; | |
const recons2buf = []; | |
for (const { start, step, count } of encodedRanges) { | |
for (let i = 0, v = start; i < count; ++i, v += step) recons2buf.push(v & 255, v >> 8); | |
} | |
const recons2 = btoa(String.fromCharCode(...recons2buf)); | |
const recons = Uint8Array.from(atob(prefix + recons1 + infix + recons2), c => c.charCodeAt()); | |
const expectedDecompressed = new Uint8Array([...uncompressedStatic1, ...uncompressedDynamic1, ...uncompressedStatic2, ...uncompressedDynamic2]); | |
try { | |
const actualDecompressed = zlib.brotliDecompressSync(recons.slice(uncompressedHeader.length)); | |
if (!actualDecompressed.equals(expectedDecompressed)) { | |
if (actualDecompressed.length !== expectedDecompressed.length) throw 'decompressed data mismatch (length)'; | |
for (let i = 0; i < actualDecompressed.length; ++i) { | |
if (actualDecompressed[i] !== expectedDecompressed[i]) { | |
const encodePrintable = s => [...s].map(c => { | |
return 0x20 <= c && c <= 0x7e && c !== 0x5c /*\*/ ? String.fromCharCode(c) : '\\x' + c.toString(16).padStart(2, '0'); | |
}).join(''); | |
const actual = encodePrintable(actualDecompressed.slice(Math.max(0, i - 32), i + 32)); | |
const expected = encodePrintable(expectedDecompressed.slice(Math.max(0, i - 32), i + 32)); | |
throw `decompressed data mismatch around ${i}:\nexpected ${expected}\nactual ${actual}`; | |
} | |
} | |
throw 'somehow bits got flipped'; | |
} | |
} catch (e) { | |
fs.writeFileSync('failed.br', recons.slice(uncompressedHeader.length)); | |
fs.writeFileSync('failed.woff2', recons); | |
throw e; | |
} | |
const base64Size = prefix.length + infix.length; | |
const uncompressedHeaderSize = uncompressedHeader.length; | |
return { | |
// to be actually used by the external code | |
prefix, infix, sequenceCount, encodedRanges, startCode, endCode, | |
// diagnostic purpose and not actually used | |
base64Size, uncompressedHeaderSize, | |
}; | |
} | |
//////////////////////////////////////////////////////////////////////////////// | |
// the following code is for node.js packer. basically you are supposed to paste | |
// anything printed by this code below. | |
if (typeof window === 'undefined') { | |
(async () => { | |
const embeddedCode = ` | |
a.style.transform="scaleX(-1",a.width|=0,c.globalCompositeOperation="xor",c.font="20vw serif",c.fillText(r=r||"type",(a.width-c[m](r).width)/2,a.height/2),c.fillText(q,(a.width-c[m](r).width)/2,a.height/2),i=k&&(+new Date-t)/4e5*(40+k),c.fillRect(0,0,i*a.width,a.height),i>1&&alert("GAME OVER\nScore: "+k,k=0,r=q=""),onkeydown=a=>{if("Backspace"==(a=a.key)&&(q=q.slice(0,-1)),a.match(/^[a-z]+$/)&&(q+=a).length==r.length){if(q==r){t=+new Date,++k;do{r=p.substr(-22*(COUNT*Math.random()|0)-140778%(i=4*Math.random()|28),i%8)}while(!r.match(/^[a-z]+$/))}q=""}} | |
`.trim() | |
.replace('\n', '\\n') | |
//.replace('r=r||', 'r||=') | |
.replace(/Math\.random()\*COUNT/g, 'COUNT*Math.random()'); | |
let limit, tmpl, bestBase64Size = Infinity; | |
// the starting limit of 670 was chosen because the 671st 4-letter word would be `dick` | |
nextLimit: for (limit = 670; limit >= 600; --limit) { | |
retry: for (let trial = 0; trial < 5; ++trial) { | |
try { | |
tmpl = await makeTemplate({ | |
embeddedCode: embeddedCode.replace(/COUNT/g, limit - 12), | |
dictionaryRefs: [ | |
// we intentionally reorder lengths to match the IaC code order | |
{ length: 4, startDistance: 12, count: limit - 12 }, | |
{ length: 6, startDistance: 12, count: limit - 12 }, | |
{ length: 5, startDistance: 12, count: limit - 12 }, | |
{ length: 7, startDistance: 12, count: limit - 12 }, | |
], | |
}); | |
} catch (e) { | |
const msg = e.toString(); | |
// simply too large to continue | |
if (msg.match(/target size.*cannot be met/)) continue nextLimit; | |
// this is normally a dead end, but there is a possibility of spontaneous | |
// failure so we try to retry a bit more | |
if (msg.match(/WOFF2 file size should be a multiple of 4 bytes/)) continue retry; | |
// this is definitely a spontaneous failure | |
if (msg.match(/Decompression failed/)) continue retry; | |
throw e; | |
} | |
// only print the best solution so far | |
if (bestBase64Size >= tmpl.base64Size) { | |
bestBase64Size = tmpl.base64Size; | |
console.log(`// ${tmpl.base64Size}B in total (limit=${limit})`); | |
console.log(`var PREFIX = ${JSON.stringify(tmpl.prefix)};`); | |
console.log(`var INFIX = ${JSON.stringify(tmpl.infix)};`); | |
console.log(`var SEQUENCE_COUNT = ${tmpl.sequenceCount};`); | |
for (let i = 0; i < tmpl.encodedRanges.length; ++i) { | |
const { start, step, count } = tmpl.encodedRanges[i]; | |
if (step !== 4 || start >= 4 || count !== limit - 12) throw 'unexpected'; | |
} | |
console.log(`var COUNT = ${limit - 12};`); | |
console.log(`var START_CODE = ${tmpl.startCode}, END_CODE = ${tmpl.endCode};`); | |
console.log(); | |
} | |
break retry; | |
} | |
} | |
})(); | |
} else { | |
//////////////////////////////////////////////////////////////////////////////// | |
var REPLACE_EMBEDDED_CODE = false; | |
var DUMP_WORDS = false; | |
//-------------------------------------------------------------------------------- | |
// paste the packer output here: | |
// 720B in total (limit=664) | |
var PREFIX = "d09GMgABAAAAAFC0AAoAAAABAAAAAFBmAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAATYCJAQgBk4KAIGEawv0PAAFBgcgAPRkQ4HodPQ7cRAOQTU8T3/fzv3L1gryJ/F2oDhrPIFMAgkx8OKnkzMK6NvNMgz4nmPSPJIAKdU8xuK4jm07HwqYPTbHv4Ut86sARY1wAJtnwYCKAQ35abG+whCx36PQj48BqN8geMx/o75OvY1asNTs72g0wvNQCrdd8vMV4EYn"; | |
var INFIX = "kBEAQJZsE9ppoVdL7vwrmuqwdRINfuBBfOCcoMOBYXRb5KuwArGUz1eXC5d3EfSowRum5pHa3w6w3jsyCRa0WX3/HVxSloXlF1uKtmb9fvA67As/Exu/hUe8HIoqQqJyeVHE+aIyaUhujju51wuOrSMk260SyqJbwF5yta/+DwiOFBgKyx78Q/seAlxi/Np4pu7JCRiGAN/dFMN540LDeTRdUPCtURuJNtqsDxIGXoGHhLkZg/jlhaReqn4PeEZatxV1vbZdilZoTEqhZV6A9MF/TUwPm9wE9cK1roqWNhEwsS/q48kJjGkc6sZKtC4hWqZ38CL8frFb/5ijUJobYuQU4RPJE4Pw0ZhITnNKE/zVGtO9qw83QiP3n6bAFYWGy4ub3Q79/9pGdOk8Xl1dH0Om84JGcY02Hy/szoXkJwfycQruptzZ7UIIIQ5wAAB3wAFc9EOo"; | |
var SEQUENCE_COUNT = 7452; | |
var COUNT = 652; | |
var START_CODE = 20000, END_CODE = 27451; | |
//-------------------------------------------------------------------------------- | |
// a: canvas element (from shim) | |
// c: 2D canvas context (from shim) | |
// d: document (from shim) | |
// p: source code and extracted dictionary | |
// q: typed word | |
// r: current word | |
// i: general temporary variable | |
// k: score | |
// t: start time for current word; unused when k = 0, i.e. before the first word | |
// m: 'measureText' | |
// reconstruct a data URI for the font file. `p` is for the 1st dynamic segment | |
// (big-endian) and `q` is for the second (little-endian). | |
// | |
// most importantly first n words of certain length correspond to two-byte | |
// integers from k to 4(n-1)+k in the second segment, so we can simply emit all | |
// integers from 0 to 4*COUNT-1 to put every required word into the decompressed | |
// string. because IaC codes are shuffled, words are also shuffled in the same | |
// way and the embedded code should account for this: | |
// | |
// +---------------------------------+ | |
// | a.style.transform="...";... | | |
// | | embeddedCode | |
// | .../// | | |
// +------+--------+-------+---------+ | |
// | just | global | music | country | 13th word of each length | |
// | like | medium | field | account | 14th word of each length | |
// | free | filter | order | created | 15th word of each length | |
// | work | number | point | stories | 16th word of each length | |
// | : | : | : | : | | |
// | : | : | : | : | | |
// | : | : | : | : | | |
// | HTTP | ****** | i++){ | bearing | 663rd word of each length | |
// | -201 | ****** | adobe | mapping | 664th word of each length (* unprintable) | |
// +------+--------+-------+---------+ | |
r = q = p = ''; | |
for (i = 0; i < SEQUENCE_COUNT; ++i) | |
p += String.fromCharCode(i >> 8, 255 & i), | |
i < COUNT * 4 && (q += String.fromCharCode(i & 255, i >> 8)); | |
// we have made sure that all but last segment has no `=` padding after base64. | |
// the correct argument should be `url(data:font/woff2;base64,...)`, but the | |
// last `)` can be omitted per the CSS syntax, and a media type doesn't seem to | |
// be necessary for all modern browsers. | |
d.fonts.add(new FontFace('t', 'url(data:;base64,' + PREFIX + btoa(p) + INFIX + btoa(q))); | |
// ensure that the font is ready for use, and also set the initial font size. | |
// this initial font size should be same to head.unitsPerEm. | |
// (FontFaceSet.load is not guaranteed to be synchronous; Safari requires this.) | |
d.fonts.load(c.font = '16px t').then(_ => { | |
// extract advance widths from each glyph. this loop is intentionally made | |
// to look identical to the prior URI reconstruction loop. | |
r = q = p = ''; | |
for (i = 0; i < END_CODE - START_CODE + 1; ++i) | |
p += String.fromCharCode( | |
// we will use `c.measureText` later so let's cache the method name. | |
// note that metrics are limited in size and anything above 0x7fff | |
// will be mangled, but we only use ASCII letters both in the | |
// embedded code and in the dictionary so that should be fine; | |
// in the other words, this mangling won't make valid words invalid. | |
(k = c[m = 'measureText'](String.fromCharCode(i + START_CODE)).width) | |
>> 8, | |
255 & k); // this shares a common `>> 8, 255 &` fragment | |
// the actual source code should be in the reconstructed string, but for | |
// testing we can replace it with an arbitrary function. | |
if (REPLACE_EMBEDDED_CODE) p = '(' + (() => { | |
// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | |
// the following code should be minified and copied to `embeddedCode` later | |
// this code assumes the following assignments in the very first call: | |
// | |
// p: source code and extracted dictionary | |
// q: '' | |
// r: '' (immediately reset to 'type') | |
// k: 0 | |
// t: 0 | |
// m: 'measureText' | |
// mirror the canvas globally, `)` not required for the same reason | |
a.style.transform = 'scaleX(-1'; | |
// clear the canvas (also resets 2D context) | |
a.width |= 0; | |
// everything will be XORed | |
c.globalCompositeOperation = 'xor'; | |
// enlarge and set the font. | |
// | |
// it seems that serif is generally harder than sans-serif in this game. | |
// there are many factors, including general shapes and ligatures. | |
// browsers have different default fonts, which can be either serif or | |
// sans-serif and leaving them as is would mean a varying difficulty. | |
c.font = '20vw serif'; | |
// draw the current word and the typed word in the same position. | |
// we put the current word in center and then align the beginning of | |
// the typed word with that current word, so we need the width of the | |
// current word. the y coordinate in comparison corresponds to the | |
// alphabetic baseline by default so there is no need for adjustment. | |
// | |
// in principle XOR should mean that matching characters completely | |
// disappear, but antialiasing generally leaves a faint outline, | |
// which is in my opinion better than no antialiasing. | |
c.fillText(r = r || 'type', (a.width - c[m](r).width) / 2, a.height / 2); | |
c.fillText(q, (a.width - c[m](r).width) / 2, a.height / 2); | |
// draw the timer bar. `i` would be a fraction of the screen width. | |
// | |
// the first word (always `type`) serves both as a splash screen and | |
// a self-check for keyboard, so it has no time limit. | |
// | |
// the time limit is set to ~10 seconds for the second word, then | |
// slowly decaying (8s after 10 words, 5s after 40 words, 2.5s after | |
// 120 words, 1s after 360 words etc). this does feel enough for | |
// first words but too lenient later, but the game is already hard | |
// enough so I'm okay with that. | |
i = k && (+new Date - t) / 4e5 * (40 + k); | |
c.fillRect(0, 0, i * a.width, a.height); | |
// if that fraction exceeds the screen width, the game is over | |
if (i > 1) | |
alert('GAME OVER\nScore: ' + k, | |
// reset the score, current and typed word. | |
// the current word will be reset to `type` in the next call. | |
k = 0, | |
r = q = ''); | |
// this sets `window.onkeydown` | |
onkeydown = e => { | |
e = e.key; | |
// respond to the backspace | |
if (e == 'Backspace') { | |
q = q.slice(0, -1); | |
} | |
// `KeyboardEvent.key` is only lowercase when it's a single key. | |
// this prevents all special keys, including "Backspace" above. | |
// `+` is not required, but kept to exploit a common code fragment. | |
if (e.match(/^[a-z]+$/)) { | |
// append the key to the typed word, and check the length | |
if ((q += e).length == r.length) { | |
// we've now got an attempt and the typed word should be | |
// reset later, check if the typed word is correct first | |
if (q == r) { | |
// set or reset the start time, and increase the score | |
t = +new Date; | |
++k; | |
// pick a word from the word list. this may fail, so | |
// we may have to pick a word multiple times. | |
do { | |
// the first random call picks a group of 4 words, | |
// the second random call picks one out of them and | |
// also sets the (biased) length variable `i` which | |
// is finally used to get the correct word. | |
// | |
// this particular code was found with an exhaustive | |
// search and results in the following mapping: | |
// | |
// +------------+----+----+----+----+ | |
// | i | 28 | 29 | 30 | 31 | | |
// +------------+----+----+----+----+ | |
// | i % 8 | 4 | 5 | 6 | 7 | <- length | |
// +------------+----+----+----+----+ | |
// | 140778 % i | 22 | 12 | 18 | 7 | <- offset | |
// +------------+----+----+----+----+ | |
r = p.substr( | |
-(7 + 5 + 6 + 4) * (COUNT * Math.random() | 0) - | |
140778 % (i = 4 * Math.random() | 28), | |
i % 8); | |
} while ( | |
// again, a common code fragment for compession | |
!r.match(/^[a-z]+$/) | |
); | |
} | |
q = ''; | |
} | |
} | |
}; | |
// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | |
}) + ')()' + p.slice(-((4 + 5 + 6 + 7) * COUNT)); | |
// extract the source code and execute it periodically. `setInterval` can be | |
// given a source code string instead of a string, and the interval can be | |
// omitted for the default 0. in reality browsers will enforce the minimal | |
// interval, but that interval seems small enough for smooth animation so we | |
// don't use requestAnimationFrame here. | |
setInterval(p.slice(t = k = 0, -((4 + 5 + 6 + 7) * COUNT))); | |
// dump all allowed words for testing | |
if (DUMP_WORDS) { | |
let numWords = 0, numBytes = 0, lines = []; | |
for (const [length, offset] of [[4, 7+5+6+4], [5, 7+5], [6, 7+5+6], [7, 7]]) { | |
let words = []; | |
for (let i = 0; i < COUNT; ++i) { | |
words.unshift(p.substr(-(7 + 5 + 6 + 4) * i - offset, length)); | |
} | |
words = words.filter(s => s.match(/^[a-z]+$/)); | |
const wordsPerLine = 78 / (length + 1) | 0; | |
for (let i = 0; i < words.length; i += wordsPerLine) { | |
lines.push(words.slice(i, i + wordsPerLine).join(' ')); | |
} | |
lines.push(''); | |
numWords += words.length; | |
numBytes += words.join('').length; | |
} | |
console.log(lines.join('\n')); | |
console.log(numWords, numBytes); | |
} | |
}); | |
//////////////////////////////////////////////////////////////////////////////// | |
} | |
// 1022B | |
// ---------------- | |
// Bonus: Word List | |
// ---------------- | |
// | |
// This word list of 1,903 words was constructed by taking 13th through 664th | |
// words of lengths 4--7 from the Brotli preset dictionary, that exclusively | |
// contain lowercase Latin letters. Since the dictionary itself was (thought to | |
// be) generated from a large corpus of Internet texts, there is some bias | |
// towards Internet-specific words like `user`, `popup`, `nowrap` etc. It also | |
// contains some part-of-words-but-not-words like `ation`, `ition` or `pacity`. | |
// But overall they are mostly genuine English words, probably because | |
// the cutoff precluded slightly common but not genuine words and most non- | |
// English words towards the end. | |
// | |
// I initially thought that some manual filtering would be required, like `fuck` | |
// which is the 760th 4-letter word. But it seems the cutoff is low enough that | |
// it's the least concern. And honestly speaking, when a word is mirrored and | |
// a clock is ticking you have no time to think about problematic words ;-) | |
// | |
// just like free work text year over body love form book play live line help | |
// home side more word long them view find page days full head term each area | |
// from true mark able upon high date land news even next case both post used | |
// made hand here what name blog size base held make main user hold ends with | |
// read were sign take have game seen call path well plus menu film part join | |
// this list good need ways west jobs mind also logo rich uses last team army | |
// food king will east ward best fire know away move than load give self note | |
// much feed many rock icon once look hide died rule host ajax info club laws | |
// less half some such zone ones care race blue four week face hope gave hard | |
// lost when park kept pass ship room plan done save keep flag link sold five | |
// took rate town jump thus dark card file fear stay kill that fall auto ever | |
// talk shop vote deep mode rest turn born band fell rose skin role come acts | |
// ages meet gold item vary felt then send drop copy stop else lies tour pack | |
// past gray mean ride shot late said road feel john rick port fast dead poor | |
// bill type wood must rank wide want wall lead paul wave sure wait mass arms | |
// goes gain lang paid lock unit root walk firm wife song test kind rows tool | |
// font mail safe star maps core rain flow baby span says arts foot real wiki | |
// heat step trip lake weak told cast fans bank very runs july task goal grew | |
// slow edge sets soon seat none tube zero sent reed fact into gift harm came | |
// hill bold zoom void easy ring fill peak init cost jack tags bits roll edit | |
// knew near grow duty sale lots pain jazz cold eyes fish risk tabs prev rise | |
// ding ball ford earn wild fair lack vers pair june tech pick evil warm lord | |
// does pull idea draw huge spot fund burn href cell keys tick hour loss fuel | |
// suit deal aged grey ease aims girl aids navy grid tips wars lady cars hell | |
// tall whom hall push chat crew hash flat rare tell camp onto laid miss skip | |
// tent fine male gets plot cool feet eric most guid bell desc hair math atom | |
// luck cent tiny gone html sell drug node nick lose null vast wind wear rely | |
// been same duke nasa cape wish gulf hits slot gate kick blur they msie wins | |
// bird sort beta seek ords tree mall farm boys bear kids mary tend quad prop | |
// lift vice andy debt pool neck blow door eval lets fail oral poll nova cols | |
// gene soft rome till ross pour fade pink mini mine bars hear milk iron fred | |
// disk went soil puts holy adam sees json cont loop asia moon soul fort cart | |
// mike nice inch rice pure mage para tone bond tank yard bowl bush jeff cash | |
// visa golf snow quer sick meat bind dell hire pics rent | |
// | |
// music field order point value level table board house group works years state | |
// today water start style death power phone night error input about terms title | |
// tools event local times large words games short space focus clear model block | |
// guide radio share women again money image names young lines later color green | |
// front watch force price rules begin after visit issue areas below index total | |
// hours label print press built links speed study trade found sense under shown | |
// forms range added still moved taken above flash fixed often other views check | |
// legal river items quick shape human exist going movie third basic peace stage | |
// width login ideas wrote pages users drive store break south voice sites month | |
// where build which earth forum three sport party lower lives class layer entry | |
// story usage sound court birth popup types apply being upper notes every shows | |
// means extra match track known early began super paper north learn given named | |
// ended parts brand using woman false ready audio takes while lived cases daily | |
// child great judge those units never broad coast cover apple files cycle scene | |
// plans click write queen piece email frame older photo limit cache civil scale | |
// enter theme there touch bound royal asked whole since stock faith heart empty | |
// offer scope owned might album think blood array major trust canon union count | |
// valid stone happy occur fresh quite films grade needs urban fight basis hover | |
// route mixed final slide topic brown alone drawn split reach dates march quote | |
// goods doubt async thumb allow chief youth novel serve until hands query james | |
// equal twice panel songs round eight shift worth posts leads weeks avoid these | |
// miles plane smart alpha plant marks rates plays claim sales texts stars wrong | |
// thing multi heard stand token solid bring ships staff tried calls fully facts | |
// agent admin egypt cross spent blogs noted leave china sizes guest robot heavy | |
// seven grand crime signs aware dance phase latin enjoy ation smith holds peter | |
// india chain score comes doing prior roman lists japan falls trial owner agree | |
// abuse alert opera cards hills teams truth clean saint metal louis meant proof | |
// brief genre truck looks makes costs plain adult quest train labor helps cause | |
// magic motor their least steps could glass sides funds hotel award mouth moves | |
// paris gives dutch texas fruit ocean floor speak depth banks catch chart align | |
// deals would parks mouse among brain based carry draft refer meter delay dream | |
// prove joint drugs april ideal allen exact forth codes logic seems blank ports | |
// saved goals grant greek homes rings rated whose parse linux jones pixel david | |
// horse raise boxes ement tower cable henry setup italy sharp minor taste wants | |
// reset wheel girls clubs stuff bible votes korea bands queue cking ahead clock | |
// irish ratio stats yahoo finds debug tasks cells prime tells turns spain beach | |
// taxes micro angel gifts steve mount roger frank feeds scott tests drink lewis | |
// shall loved waste simon reply meets unter cheap tight dress clips rooms onkey | |
// mobil plate funny trees wmode param idden virus chair trans worst ition patch | |
// firms tours asian adobe | |
// | |
// global medium filter number change result public screen choose normal travel | |
// issues source target spring module mobile switch photos border region itself | |
// social active column record follow either length family friend layout author | |
// create review summer server played player expand policy format double points | |
// series person living design months forces unique weight people energy nature | |
// search figure having custom offset letter window submit render groups upload | |
// health method videos school future shadow debate values others rights league | |
// chrome simple notice shared ending season report online square button images | |
// enable moving latest winter period strong repeat detail formed demand secure | |
// passed toggle places device static cities stream yellow attack street flight | |
// hidden opened useful valley causes leader secret second damage sports except | |
// rating signed things effect fields states office visual editor volume museum | |
// movies parent access mostly mother market ground chance survey before symbol | |
// moment speech motion inside matter object exists middle growth legacy manner | |
// enough career answer origin portal client select random closed topics coming | |
// father option simply raised escape chosen church define reason corner output | |
// memory iframe police models during offers styles killed listed called silver | |
// margin delete better browse limits single widget center budget nowrap credit | |
// claims engine safety choice spirit spread making needed russia please extent | |
// broken allows charge divide factor member theory config around worked helped | |
// impact should always bottom prefix orange couple garden bridge launch taking | |
// vision little dating beauty themes forgot anchor almost loaded return string | |
// reload income supply orders viewed course island cookie amazon modern advice | |
// dialog houses starts centre height adding assets effort direct nearly manual | |
// joined awards handle import regard skills nation degree weekly behind doctor | |
// logged united begins plants assist artist issued canada agency scheme remain | |
// sample beyond accept served marine camera leaves stress onload loader sister | |
// surviv listen female appeal levels thanks higher forced animal anyone agreed | |
// recent wonder prices turned inline sunday failed census minute beacon quotes | |
// estate remote linked signal formal signup prince papers sounds extend slider | |
// studio owners manage profit annual params bought famous google longer israel | |
// saying decide header ensure branch pieces stated racing resize pacity sexual | |
// bureau obtain titles amount comedy lyrics indeed county looked turkey forest | |
// giving errors insert footer faster agents pragma friday junior dollar placed | |
// covers plugin boston avatar tested forums schema filled shares reader appear | |
// seeing jersey verify expert injury across thread native pocket cancer tables | |
// proved really driver boards colors campus guitar finish showed assume layers | |
// wilson stores relief sweden easily taylor resort french though buying brands | |
// opping sector vspace poster coffee martin mature happen kansas hspace jordan | |
// miller senior guides ection repair within virgin phones bahasa brasil galego | |
// magyar polski srpski | |
// | |
// country account created stories results running process writing objects | |
// visible welcome article unknown network company dynamic browser privacy | |
// problem respect display request reserve website history friends options | |
// working version million channel address visited weather correct product | |
// edirect forward removed subject control archive current reading library | |
// limited manager further summary machine minutes private context program | |
// society numbers written enabled trigger sources loading element partner | |
// finally perfect meaning systems keeping culture journal project surface | |
// expires reviews balance through opinion contact average primary village | |
// gallery decline meeting mission popular quality measure general species | |
// session section writers counter initial reports figures members holding | |
// dispute earlier express digital picture married traffic leading changed | |
// central victory reasons studies feature listing schools usually episode | |
// playing growing obvious overlay present actions wrapper already certain | |
// reality storage another desktop offered pattern unusual capital failure | |
// connect reduced decades regular animals release getting methods nothing | |
// caption letters capture science license changes updated require comment | |
// warning toolbar remains because elected finance workers quickly between | |
// exactly setting disease weapons exhibit classes covered outline attacks | |
// devices purpose killing showing dropped heavily effects confirm advance | |
// sharing opening drawing billion ordered related include whether defined | |
// catalog buttons largest uniform journey sidebar holiday passage animate | |
// feeling arrived passing natural roughly density tribute factors receive | |
// husband affairs radical brought finding landing leaders planned premium | |
// package complex looking station believe smaller records similar studied | |
// maximum heading rapidly climate kingdom emerged amounts founded pioneer | |
// formula dynasty revenue economy brother soldier largely calling segment | |
// efforts learned nations applied acquire massive granted treated biggest | |
// benefit driving minimum perhaps morning selling reverse variant missing | |
// achieve promote student someone extreme restore evolved sitemap english | |
// symbols matters musical against serving payment trouble concept compare | |
// parents players regions monitor winning explore adapted produce ability | |
// enhance careers collect ancient existed handler printed console exports | |
// windows illegal neutral suggest signing settled western causing claimed | |
// chapter victims mozilla promise parties edition outside hundred authors | |
// reached chronic demands seconds protect adopted prepare neither greatly | |
// greater overall improve command special worship funding thought highest | |
// instead utility quarter testing clearly exposed liberal example answers | |
// allowed defense serious freedom trained prevent observe provide borders | |
// artists powered perform fiction medical tickets opposed witness justice | |
// twitter notably waiting warfare ranking phrases mention survive scholar | |
// ignored strange stopped islands notable carried several becomes wedding | |
// monarch teacher biology plusone hunting joining circles vehicle crystal | |
// enjoyed assumed foreign retired however battles seeking cabinet conduct | |
// happens turning lacking typical extract enemies generat decided beliefs | |
// located convert violent entered circuit chemist divided mystery falling | |
// railway college monster descent nuclear protest flowers predict reforms | |
// lecture instant suicide generic periods markets fishing combine graphic | |
// winners cookies outcome resolve briefly depicts columns housing scripts | |
// bearing mapping | |
// | |
// Here is a list of words that are not common English words to my knowledge. | |
// | |
// cont desc eval guid href html init json msie nasa ords para pics prev quer | |
// vers async ation cking ement idden ition onkey param unter wmode bahasa | |
// config ection galego hspace iframe lingle nowrap onload opping pacity params | |
// polski pragma signup srpski surviv vspace edirect generat plusone sitemap |
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
_='`i^i),i<2608&&(q+=~255&i,i>>8));d_s.add(new FontFace("t","url(data:;base64,d09GMgAB|AFC0?o|B|AFBm||||||||ATYCJAQgBk4KAIGEawv0P?FBgcgAPRkQ4HodPQ7cRAOQTU8T3/fzv3L1gryJ/F2oDhrPIFMAgkx8OKnkzMK6NvNMgz4nmPSPJIAKdU8xuK4jm07HwqYPTbHv4Ut86sARY1wAJtnwYCKAQ35abG+whCx36PQj48BqN8geMx/o75OvY1asNTs72g0wvNQCrdd8vMV4EYn@p)+"kBEAQJZsE9ppoVdL7vwrmuqwdRINfuBBfOCcoMOBYXRb5KuwArGUz1eXC5d3EfSowRum5pHa3w6w3jsyCRa0WX3/HVxSloXlF1uKtmb9fvA67As/Exu/hUe8HIoqQqJyeVHE+aIyaUhujju51wuOrSMk260SyqJbwF5yta/+DwiOFBgKyx78Q/seAlxi/Np4pu7JCRiGAN/dFMN540LDeTRdUPCtURuJNtqsDxIGXoGHhLkZg/jlhaReqn4PeEZatxV1vbZdilZoTEqhZV6A9MF/TUwPm9wE9cK1roqWNhEwsS/q48kJjGkc6sZKtC4hWqZ38CL8frFb/5ijUJobYuQU4RPJE4Pw0ZhITnNKE/zVGtO9qw83QiP3n6bAFYWGy4ub3Q79/9pGdOk8Xl1dH0Om84JGcY02Hy/szoXkJwfycQruptzZ7UIIIQ5w?B3wAFc9EOo@q))),d_s.load(c_="16px t").then(A=>{`(k=c[m="measureText"](~i+2e4)).width)^k);setInterval(p.slice(t=k=0,-14344))})~String.fromCharCode(|??`for(r=q=p="",i=0;i<7452;++i)p+=~_.font^>>8,255&@"+btoa(?AA';for(i of'?@^_`|~')with(_.split(i))_=join(pop());eval(_) |
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment