
The Enduring Legacy of the 88x31 Pixel Web Button: A Historical and Technical Analysis

1. Executive Summary

The 88x31 pixel button, a seemingly archaic digital artifact, is an iconic symbol of the early World Wide Web. This report investigates its origins, explores the technical and design principles that drove its proliferation, and contextualizes its profound cultural significance. The analysis reveals that the button was not the result of a single design decision but rather a perfect confluence of multiple, reinforcing factors.

In an era defined by slow dial-up connections and low screen resolutions, the 88x31 button emerged as a pragmatic solution to a fundamental problem: how to create a visually distinct link that was lightweight and efficient. Its dimensions struck a strategic compromise between the larger, commercially oriented 468x60 banner ad and a smaller, nondescript icon.

"Deconstructing HDR: Dispelling the Fog of High Dynamic Range" (an in-depth summary of a presentation by Steve Yedlin)

Introduction: A Core Paradox

This presentation by renowned cinematographer Steve Yedlin aims to fundamentally challenge and reshape how we think about the reproduction of tone, color, and contrast in motion imaging. Its central goal is to debunk several widespread but demonstrably false beliefs about "High Dynamic Range" (HDR) imaging in the current industry. Yedlin states at the outset that once we discard these fallacies and align our mental models with objective reality, we gain unprecedented freedom and control, whether as image makers or as viewers.

At the demonstration, Yedlin and a number of veteran filmmakers watched a series of uncompressed, master-grade footage spanning a variety of scenes, lighting conditions, and color styles. The images were displayed simultaneously on two top-tier Sony X310 mastering monitors, both precisely calibrated: one was fed a so-called "SDR" (standard dynamic range) signal, the other an "HDR" signal. Yet a striking phenomenon, the cornerstone of the entire demonstration, emerged: the pictures on the two monitors looked exactly identical.

Together, these images exhibited the very qualities we are usually told are "HDR-exclusive": a glossy sheen; a bright, clear picture (even in a grading environment that is not fully dark); deep blacks that retain detail; and color that is at once subtle and vivid. This phenomenon of "SDR and HDR looking exactly the same" is precisely the entry point Yedlin uses to deconstruct the entire HDR discourse.

A Comprehensive Analysis of AVX2 and AVX-512 Performance: Gains, Penalties, and Architectural Nuances

Section 1. The Principles of Vectorization: From Scalar to SIMD

1.1 The Limits of Scalar Processing and the Birth of SIMD

In modern computing, the pursuit of performance is unending. Traditionally, processors followed a model known as "scalar processing," in which one instruction operates on a single datum (or a single pair of operands) per clock cycle. This model is intuitive and easy to program for, but its performance has an inherent ceiling. As growth in processor clock frequencies slowed, raising frequency alone became an unsustainable path to higher performance. To break through this bottleneck, computer architecture introduced a revolutionary parallel-computing paradigm: Single Instruction, Multiple Data (SIMD).

The core idea of SIMD is to perform the same operation on multiple data elements with a single instruction. Imagine adding two arrays of eight floating-point numbers element by element. Under the scalar model, this takes eight independent add instructions. Under the SIMD model, the processor can issue one vector add instruction and complete all eight additions in one or a few clock cycles. This data-level parallelism dramatically increases throughput for compute-intensive workloads, especially graphics, scientific computing, and multimedia encoding/decoding.
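The eight-element example above can be sketched in code. JavaScript exposes no user-visible SIMD registers, so this is purely a conceptual model: one 256-bit AVX2 register is simulated as 8 float lanes, and `simdAdd8` stands in for the single vector instruction a real CPU would execute.

```javascript
// Conceptual sketch only: we *model* a 256-bit vector register as 8 lanes.
// On real hardware, all 8 additions happen in one vector instruction.
function simdAdd8(a, b) {
  // One simulated vector instruction: lane-wise add of 8 lanes at once.
  return Array.from({ length: 8 }, (_, lane) => a[lane] + b[lane]);
}

// Scalar model: one separate add instruction per iteration.
function scalarAdd(a, b) {
  const out = [];
  for (let i = 0; i < a.length; i++) out.push(a[i] + b[i]);
  return out;
}

const a = [1, 2, 3, 4, 5, 6, 7, 8];
const b = [10, 20, 30, 40, 50, 60, 70, 80];
console.log(simdAdd8(a, b)); // [11, 22, 33, 44, 55, 66, 77, 88]
```

The results are identical; only the number of issued instructions differs, which is exactly the source of SIMD's throughput advantage.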

x264 Adaptive Quantization (AQ mode)

In x264, the --aq-mode option controls macroblock-level adaptive quantization. In mode 0 (disabled), all blocks use the frame’s base QP. In mode 1 (variance AQ) and mode 2 (auto-variance AQ), x264 measures each block’s activity (roughly its AC variance) to adjust its QP: complex blocks get higher QP (fewer bits) while flat/dark blocks get lower QP (more bits). The goal is to maintain overall bitrate while improving perceptual quality (less banding in flat areas). x264’s code was tuned so that AQ modes use roughly the same total bits as no-AQ (see comment in code). Below we detail each mode’s formula, code logic, and effects.

Mode 0: Disabled

AQ mode 0 turns off adaptive quantization. In this case x264 simply sets all MB-level offsets to zero. In code, if i_aq_mode == X264_AQ_NONE or the AQ strength is 0, x264 does:

memset(frame->f_qp_offset, 0, mb_count*sizeof(float));
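For contrast with mode 0, the mode-1 (variance AQ) offset can be sketched as follows. This is a hedged reconstruction from memory of the 8-bit formula in x264's adaptive-quantization code, not a verbatim excerpt; the pivot constant 14.427 and the exact energy measure may differ across x264 versions and bit depths. `strength` corresponds to --aq-strength.

```javascript
// Hedged sketch of x264 mode-1 (variance AQ) per-macroblock QP offset,
// reconstructed from memory for 8-bit video; NOT a verbatim x264 excerpt.
// energy ~ the macroblock's AC energy (variance); strength ~ --aq-strength.
function aqOffsetMode1(energy, strength) {
  // Blocks whose log2-energy exceeds the pivot (~14.427) get a positive
  // offset (higher QP, fewer bits); flat blocks get a negative offset
  // (lower QP, more bits), reducing banding in flat areas.
  return strength * (Math.log2(Math.max(energy, 1)) - 14.427);
}

console.log(aqOffsetMode1(1 << 20, 1.0) > 0); // true: busy block -> higher QP
console.log(aqOffsetMode1(16, 1.0) < 0);      // true: flat block -> lower QP
```

Note that strength 0 makes every offset zero, matching the mode-0 behavior above.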

The following is a comprehensive summary of the packJPG v2.5k source code, detailing the essential steps of its compression and decompression processes.

Introduction: The Core Value of packJPG

packJPG is a highly specialized, lossless re-compression utility designed exclusively for JPEG files. The term "lossless" here is critical: it does not mean converting the JPEG to a pixel-based format and then compressing it (like PNG). Instead, it means the program can take a JPEG file, compress it into a .pjg file, and then decompress it back into a JPEG file that is bitwise identical to the original. On average, it achieves a file size reduction of around 20%.

The program's effectiveness stems from its deep understanding and exploitation of the inherent inefficiencies within the standard JPEG compression scheme. While JPEG is excellent for lossy image compression, its final entropy coding stage (using Huffman coding) is not optimally efficient. packJPG replaces this stage with more advanced probability modeling and arithmetic coding.

How to transcode Dolby Vision with correct color on non-Dolby device?

Below is a step‑by‑step workflow to “flatten” Dolby Vision into a single‑layer HDR10 (or SDR) encode that preserves the correct color on devices without Dolby Vision support.

At a high level, Dolby Vision streams consist of a backward‑compatible HDR10 base layer plus one or two enhancement layers (dynamic metadata) that non‑Dolby decoders ignore; our goal is to merge and tone‑map these into a single output the target device can render accurately.

1. Understand Dolby Vision’s Dual‑Layer Architecture

Dolby Vision encodes a mandatory HDR10-compatible base layer plus dynamic metadata (the RPU, the only extra layer in Profile 8.1) and, in Profile 7.6, an additional enhancement layer (MEL or FEL).

/*
Y Combinator: From Factorial to Fixed-point Combinator
Modern JS Implementation
https://picasso250.github.io/2015/03/31/reinvent-y.html
https://gist.github.com/igstan/388351
*/
/* STEP 1: Basic recursive factorial */
const fact1 = n => n < 2 ? 1 : n * fact1(n - 1);
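The gist continues the derivation past this excerpt; a plausible next step (the standard move in such derivations, sketched here as an assumption about the rest of the file, not a quote from it) removes the free self-reference by passing the function to itself:

```javascript
/* STEP 2 (sketch): eliminate the named self-reference. fact2 never
   mentions its own name; instead it receives itself as `self` and
   recurses via self(self). This is the key move toward the Y combinator. */
const fact2 = self => n => n < 2 ? 1 : n * self(self)(n - 1);

console.log(fact2(fact2)(5)); // 120
```

From here the usual derivation abstracts the `self(self)` application out of the body, arriving at the fixed-point combinator itself.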