- Install Xcode (Available on the Mac App Store)
- Install Xcode Command Line Tools (Preferences > Downloads)
- Install depot_tools
$ git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
$ nano ~/.zshrc
- Add path=('/path/to/depot_tools' $path)
$ source ~/.zshrc
- From the directory you want to install V8 into, run
$ gclient
$ fetch v8
$ cd v8
$ gclient sync
For Intel Macs:
$ ./tools/dev/gm.py x64.optdebug
$ ninja -C out.gn/x64.optdebug
For ARM (M1) Macs:
$ ./tools/dev/gm.py arm64.optdebug
$ ninja -C out.gn/arm64.optdebug
I'd also recommend adding these to your .zshrc:
Note: replace "x64" in these paths with "arm64" on M1 Macs.
$ nano ~/.zshrc
- Add alias d8=/path/to/v8/repo/out/x64.optdebug/d8
- Add alias tick-processor=/path/to/v8/repo/tools/mac-tick-processor
- Add export D8_PATH="/path/to/v8/repo/out/x64.optdebug"
$ source ~/.zshrc
Note: many of these examples have become outdated as v8
continues to evolve. Hoping to update them in the near future, but for now please be aware that some may not work as expected or may reference optimizations/deoptimizations that no longer happen.
Create test.js with the following code:
function test( obj ) {
  return obj.prop + obj.prop;
}
var a = { prop: 'a' }, i = 0;
while ( i++ < 10000 ) {
  test( a );
}
Run $ d8 --trace-opt-verbose test.js
You should see that the test function was optimized by V8, along with an explanation of why. "ICs" stands for inline caches, which are one of the ways that V8 performs optimizations. Generally speaking, the more "ICs with typeinfo" the better.
Now modify test.js to include the following code:
function test( obj ) {
  return obj.prop + obj.prop;
}
var a = { prop: 'a' }, b = { prop: [] }, i = 0;
while ( i++ < 10000 ) {
  test( Math.random() > 0.5 ? a : b );
}
Run $ d8 --trace-opt-verbose test.js
So, you'll see that this time, the test function was never actually optimized. That's because it's being passed objects with different hidden classes. Try changing the value of prop in a to an integer and run it again. You should see that the function was able to be optimized.
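For reference, the modified test.js might look something like this (1 is just an arbitrary example integer):
function test( obj ) {
  return obj.prop + obj.prop;
}
var a = { prop: 1 }, b = { prop: [] }, i = 0;
while ( i++ < 10000 ) {
  test( Math.random() > 0.5 ? a : b );
}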
Modify the contents of test.js:
function test( obj ) {
  return obj.prop + obj.prop;
}
var a = { prop: 'a' }, b = { prop: [] }, i = 0;
while ( i++ < 10000 ) {
  test( i !== 8000 ? a : b );
}
Run $ d8 --trace-opt --trace-deopt test.js
You should see that the optimized code for the test function was thrown out. What happened here was that V8 kept seeing test being passed an object that looked like {prop: <String>}. But on the 8000th round of the while loop, we gave it something different. So V8 had to throw away the optimized code, because its initial assumptions were wrong.
Modify test.js:
function factorial( n ) {
  return n === 1 ? n : n * factorial( --n );
}
var i = 0;
while ( i++ < 1e7 ) {
  factorial( 10 );
}
Run $ time d8 --prof test.js (Generates v8.log)
Run $ tick-processor (Reads v8.log and cats the parsed output)
This'll show you where the program was spending most of its time, by function. Most of it should be under LazyCompile: *factorial test.js:1:19. The asterisk before the function name means that it was optimized.
Make a note of the execution time that was logged to the terminal. Now try modifying the code to this dumb, contrived example:
function factorial( n ) {
  return equal( n, 1 ) ? n : multiply( n, factorial( --n ) );
}
function multiply( x, y ) {
  return x * y;
}
function equal( a, b ) {
  return a === b;
}
var i = 0;
while ( i++ < 1e7 ) {
  factorial( 10 );
}
Run $ time d8 --prof test.js
Run $ tick-processor
Roughly the same execution time as the last version, even though all the extra function-call overhead seems like it should have made this one slower. You'll also notice that the multiply and equal functions are nowhere on the list. Weird, right?
Run $ d8 --trace-inlining test.js
Okay. So, we can see that the optimizing compiler was smart here and completely eliminated the overhead of calling both of those functions by inlining them into the optimized code for factorial.
The optimized code for both versions ends up being basically identical (which you can check, if you know how to read assembly, by running d8 --print-opt-code test.js).
Modify test.js:
function strToArray( str ) {
  var i = 0,
      len = str.length,
      arr = new Uint16Array( str.length );
  for ( ; i < len; ++i ) {
    arr[ i ] = str.charCodeAt( i );
  }
  return arr;
}
var i = 0, str = 'V8 is the coolest';
while ( i++ < 1e5 ) {
  strToArray( str );
}
Run $ d8 --trace-gc test.js
You'll see a bunch of Scavenge... [allocation failure] lines.
Basically, V8's GC heap has different "spaces". Most objects are allocated in the "new space". It's super cheap to allocate here, but it's also pretty small (usually somewhere between 1 and 8 MB). Once that space gets filled up, the GC does a "scavenge".
Scavenging is the fast part of V8 garbage collection. Usually somewhere between 1 and 5ms from what I've seen -- so it might not necessarily cause a noticeable GC pause.
Scavenges can only be kicked off by allocations. If the "new space" never gets filled up, the GC never needs to reclaim space by scavenging.
Modify test.js:
function strToArray( str, bufferView ) {
  var i = 0,
      len = str.length;
  for ( ; i < len; ++i ) {
    bufferView[ i ] = str.charCodeAt( i );
  }
  return bufferView;
}
var i = 0,
    str = 'V8 is the coolest',
    buffer = new ArrayBuffer( str.length * 2 ),
    bufferView = new Uint16Array( buffer );
while ( i++ < 1e5 ) {
  strToArray( str, bufferView );
}
Here, we use a preallocated ArrayBuffer and an associated ArrayBufferView (in this case a Uint16Array) in order to avoid reallocating a new object every time we run strToArray(). The result is that we're hardly allocating anything.
Run $ d8 --trace-gc test.js
Nothing. We never filled up the "new space", so we never had to scavenge.
One more thing to try in test.js:
function strToArray( str ) {
  var i = 0,
      len = str.length,
      arr = new Uint16Array( str.length );
  for ( ; i < len; ++i ) {
    arr[ i ] = str.charCodeAt( i );
  }
  return arr;
}
var i = 0, str = 'V8 is the coolest', arr = [];
while ( i++ < 1e6 ) {
  strToArray( str );
  if ( i % 100000 === 0 ) {
    // save a long-term reference to a random, huge object
    arr.push( new Uint16Array( 100000000 ) );
    // release references about 5% of the time
    Math.random() > 0.95 && ( arr.length = 0 );
  }
}
Run $ d8 --trace-gc test.js
Lots of scavenges, which is expected since we're no longer using a preallocated buffer. But there should also be a bunch of Mark-sweep lines.
Mark-sweep is the "full" GC. It gets run when the "old space" heap reaches a certain size, and it tends to take a lot longer than a scavenge. If you look at the logs, you'll probably see Scavenge at around ~1.5ms and Mark-sweep closer to 25 or 30ms.
Since the frame budget in a web app is about 16ms (1000ms / 60fps ≈ 16.7ms), you're pretty much guaranteed to drop at least one frame every time Mark-sweep runs.
There are a ton of flags available (run $ d8 --help to see them all), but you can usually find what you're looking for with something like d8 --help | grep memory or whatever.
The --allow-natives-syntax flag actually lets you call V8 internal methods from within your JS file, like this:
function factorial( n ) {
  return n === 1 ? n : n * factorial( --n );
}
var i = 0;
while ( i++ < 1e8 ) {
  factorial( 10 );
  // run a full Mark-sweep pass every 10MM iterations
  i % 1e7 === 0 && %CollectGarbage( null );
}
...and run $ d8 --allow-natives-syntax --trace-gc test.js
Native functions are prefixed with the % symbol. A (somewhat incomplete) list of native functions is available here.
d8 doesn't have a console object (or a window object, for that matter). But you can log to the terminal using print().
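For example, a minimal sketch (the object here is just made up for illustration):
var obj = { prop: 'a' };
// print() writes to stdout in d8; there's no console.log
print( 'obj is: ' + JSON.stringify( obj ) );
Run $ d8 test.js and you'll see the serialized object in your terminal.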
Comparing Hidden Classes
This is probably my favorite one. I actually just found it.
So in V8, there's this concept of "hidden classes" (Good explanation a couple paragraphs in). You should read that article – but basically, hidden classes are how V8 (SpiderMonkey and JavaScriptCore use similar techniques, too) determines whether or not two objects have the same "shape".
All things considered, you always want to pass objects of the same hidden class as arguments to functions.
Anyway, you can actually compare the hidden classes of two objects:
function Class( val ) {
  this.prop = val;
}
var a = new Class('foo');
var b = new Class('bar');
print( %HaveSameMap( a, b ) );
b.prop2 = 'baz';
print( %HaveSameMap( a, b ) );
Run $ d8 --allow-natives-syntax test.js
You should see true, then false. By adding b.prop2 = 'baz', we modified its structure and created a new hidden class.
A lot of these flags (but not all of them) work with Node, too. --trace-opt, --prof, and --allow-natives-syntax are all supported.
That can be helpful if you want to test something that relies on another library, since you can use Node's require().
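For example, here's a rough sketch of the hidden-class check from above, adapted for Node (the file name is arbitrary, and console.log stands in for d8's print(), which Node doesn't have):
// node-test.js
function Class( val ) {
  this.prop = val;
}
var a = new Class('foo');
var b = new Class('bar');
console.log( %HaveSameMap( a, b ) ); // true
b.prop2 = 'baz';
console.log( %HaveSameMap( a, b ) ); // false
Run $ node --allow-natives-syntax node-test.js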
A list of supported V8 flags can be accessed with node --v8-options.
Performance Tips for JavaScript in V8 (Good basic intro to Hidden Classes)
Use forensics and detective work to solve JavaScript performance mysteries
Breaking the JavaScript Speed Limit with V8
V8 - A Tale of Two Compilers (Good explanation of Inline Caches)
Anyway, this is all still pretty new to me, and there's a lot I haven't figured out yet. But the stuff I've found so far is pretty cool, so I wanted to write something up and share it.
Oh, and I'm sure there's stuff in here that I'm wrong about, because I'm honestly a little out of my depth here. Feedback is appreciated.
If you have trouble compiling v8 on macOS with Xcode, I suppose it depends on the Xcode developer tools. If the path is in "/Library/Developer/~" (you can check that with "xcode-select --print-path"), it will end in some confusing errors. Simply run "xcode-select -r" and compile v8 again. That worked for me.
Version: macOS Mojave 10.14.2
Xcode: 10.1 (10B61)