The Executable and Linkable Format (ELF) is the default binary format on Linux-based systems.
Intel added the Galois Field instruction set (GFNI) extensions to their Sunny Cove and Tremont cores. What’s particularly interesting is that GFNI is the only new SIMD extension to come with SSE and VEX/AVX encodings (in addition to EVEX/AVX512), which allows it to be supported on all future Intel cores, including those that don’t support AVX512 (such as the Atom line, as well as Celeron/Pentium-branded “big” cores).
I suspect GFNI was aimed at accelerating SM4 encryption; however, one of its instructions can be used for many other purposes. The extension includes three instructions, but of particular interest here is the affine transformation instruction (GF2P8AFFINEQB), aka bit-matrix multiply.
There have been various articles which discuss out-of-band
The trailing zero count (TZCNT) of each byte can be computed by isolating the lowest set bit, then depositing that bit into the bit positions that encode its index. The leading zero count (LZCNT) can be computed by reversing the bits and then taking the TZCNT of the result.
#include <immintrin.h>
__m128i _mm_tzcnt_epi8(__m128i a) {
    // isolate lowest set bit (a & -a)
    a = _mm_andnot_si128(_mm_add_epi8(a, _mm_set1_epi8(0xff)), a);
    // convert lowest bit to index: the matrix maps the byte 1<<p to p, and the imm8 of 8 maps a zero byte to 8
    return _mm_gf2p8affine_epi64_epi8(a, _mm_set1_epi64x(0xaaccf0ff00000000), 8);
}
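As a sketch of the second half of the trick (the helper name _mm_lzcnt_epi8 is just illustrative, and it reuses the routine above), the per-byte LZCNT reverses each byte with the well-known bit-reversal matrix 0x8040201008040201 and then falls back to the TZCNT:

__m128i _mm_lzcnt_epi8(__m128i a) {
    // reverse the bits of each byte so leading zeroes become trailing zeroes
    a = _mm_gf2p8affine_epi64_epi8(a, _mm_set1_epi64x(0x8040201008040201), 0);
    return _mm_tzcnt_epi8(a);
}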
#if 0
(g++-9 $0 || g++ $0) && \
./a.out > output.tex && \
pdflatex output && \
exec convert -density 400 -flatten output.pdf -resize 25% output.png
exit 1
#endif
#include <cmath>
#include <cstdio>
// x64 encoding
enum Reg {
  RAX, RCX, RDX, RBX, RSP, RBP, RSI, RDI,
  R8, R9, R10, R11, R12, R13, R14, R15,
};
enum XmmReg {
  XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7,
  XMM8, XMM9, XMM10, XMM11, XMM12, XMM13, XMM14, XMM15,
The trick to designing transpose algorithms for both small and large problems is to recognize their simple recursive structure.
For a matrix A, let's denote its transpose by T(A) as a shorthand. First, suppose A is a 2x2 matrix:
    [ A00  A01 ]
A = [ A10  A11 ]
Then we have:

       [ A00  A10 ]
T(A) = [ A01  A11 ]

that is, the diagonal entries stay in place while the two off-diagonal entries swap. The same identity holds when the Aij are themselves sub-blocks, except that each sub-block must additionally be transposed: T(A) = [T(A00) T(A10); T(A01) T(A11)]. That block form is exactly the recursion a transpose routine can follow.
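A minimal sketch of that recursion, assuming a square, power-of-two, row-major matrix of doubles (the function and parameter names are illustrative, not from any particular library):

#include <stddef.h>

/* Swap two h x h blocks that live in the same row-major matrix. */
static void swap_blocks(double *a, double *b, size_t h, size_t stride) {
    for (size_t i = 0; i < h; i++) {
        for (size_t j = 0; j < h; j++) {
            double t = a[i * stride + j];
            a[i * stride + j] = b[i * stride + j];
            b[i * stride + j] = t;
        }
    }
}

/* In-place transpose of the n x n block at m (n a power of two),
 * following T(A) = [T(A00) T(A10); T(A01) T(A11)]. */
static void transpose_rec(double *m, size_t n, size_t stride) {
    if (n <= 1)
        return;
    size_t h = n / 2;
    transpose_rec(m,                  h, stride); /* A00 */
    transpose_rec(m + h,              h, stride); /* A01 */
    transpose_rec(m + h * stride,     h, stride); /* A10 */
    transpose_rec(m + h * stride + h, h, stride); /* A11 */
    swap_blocks(m + h, m + h * stride, h, stride); /* exchange T(A01) and T(A10) */
}

In practice the recursion would bottom out at a small fixed tile handled by a SIMD kernel, which is where the "small problems" mentioned above come back into play.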
#define RUN_ME /*
exec cc -g -ggdb -O2 -W -Wall -std=c99 $0 -o "$(basename $0 .c)"
*/
/*
 * Copyright 2020 Paul Khuong
 * SPDX-License-Identifier: BSD-2-Clause
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
#define RUN_ME /*
exec cc -O2 -W -Wall -std=c99 -shared $0 -o "$(basename $0 .c).so" -fPIC
*/
/*
 * Copyright 2019 Paul Khuong
 * SPDX-License-Identifier: BSD-2-Clause
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
The counters that are the easiest to understand and the best for making ratios that are internally consistent (i.e., always fall in the range 0.0 to 1.0) are the mem_load_retired events, e.g., mem_load_retired.l1_hit and mem_load_retired.l1_miss.
These count at the instruction level, i.e., over the universe of retired instructions. For example, you could make a reasonable hit ratio from mem_load_retired.l1_hit / mem_inst_retired.all_loads, and it will be sane (it will never indicate a hit rate of more than 100%, for example).
That ratio isn't perfect, though, in that it may not reflect the true cost of cache misses or the behavior of the program, for at least the following reasons:
- It applies only to loads and can't catch misses imposed by stores (AFAICT there is no event that counts store misses).
- It only counts loads that retire - a lot of the load activity in your process may be due to loads on a speculative path that never retire. Loads on a speculative path may bring in data that is never used, causing misses and displacing data that is still needed, none of which shows up in a retired-loads ratio.
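One way to collect these counters (a sketch; ./your_program is a placeholder, and the event names assume a core that exposes the mem_load_retired/mem_inst_retired events, e.g. recent Intel parts):

perf stat -e mem_load_retired.l1_hit,mem_load_retired.l1_miss,mem_inst_retired.all_loads ./your_program

The hit ratio discussed above is then mem_load_retired.l1_hit / mem_inst_retired.all_loads, computed from the reported counts.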
- 2011 - A trip through the Graphics Pipeline 2011
- 2015 - Life of a triangle - NVIDIA's logical pipeline
- 2015 - Render Hell 2.0
- 2016 - How bad are small triangles on GPU and why?
- 2017 - GPU Performance for Game Artists
- 2019 - Understanding the anatomy of GPUs using Pokémon
- 2020 - GPU ARCHITECTURE RESOURCES