I hereby claim:
- I am crazypython on github.
- I am crazypython (https://keybase.io/crazypython) on keybase.
- I have a public key ASBoEYW7zHd-NPg0WIpavH6srAP2S9M4MB7JOtaLPphEqAo
To claim this, I am signing this object:
{
  "name": "ccapp",
  "version": "0.1.0",
  "private": true,
  "devDependencies": {
    "babel-preset-react-native-stage-0": "^1.0.1",
    "jest-expo": "^22.0.0",
    "react-native-cli": "^2.0.1",
    "react-test-renderer": "16.0.0-beta.5"
  },
from collections import defaultdict

# Example input consistent with the output below (assumed shape: one dict per pet)
data = [
    {"owner": "Kent", "pet": "Shiner"},
    {"owner": "Mary", "pet": "Pumpkin"},
    {"owner": "Mary", "pet": "Tasha"},
    {"owner": "Paige", "pet": "Sushi"},
]

output = defaultdict(list)
for item in data:
    output[item["owner"]].append(item["pet"])

# output is =>
# defaultdict(list,
#             {'Kent': ['Shiner'],
#              'Mary': ['Pumpkin', 'Tasha'],
#              'Paige': ['Sushi']})
Putting the code within for (let j = 0; j < 100; j++) in its own function did not affect results in either benchmark.
Removing this.x = 0; this.y = 0; did not affect results for the JS Array benchmark.
Passing --trace-opt and --trace-deopt shows that the JIT optimizes all lines of code for both benchmarks in the first few iterations; it does not repeatedly try to re-optimize for either.
The results scale with the iteration count. When passing j < 300, the JS Array takes 15-17 seconds while the Int8Array takes 0.54 seconds. That's roughly a 28-31x performance improvement for a pure read/write/multiply ALU test.
More complex loop benchmark (if/else branch)
Surprisingly, the Int8Array (now turned into a Float64Array) isn't just faster at loads and stores. Here's a floating point arithmetic benchmark (code identical to the previous benchmark has been omitted):
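For reference, a minimal sketch of this kind of benchmark under Node.js — not the original harness; the buffer size and iteration counts are illustrative assumptions — contrasting a plain Array with a Float64Array on the same read/write/multiply loop:

```javascript
// Hedged sketch, not the original benchmark: times one buffer type on a
// pure read/write/multiply loop. Sizes and iteration counts are made up.
function bench(buf, iters) {
  const start = process.hrtime.bigint();
  for (let j = 0; j < iters; j++) {
    for (let i = 0; i < buf.length; i++) {
      // pure read/write/multiply, as in the ALU test above
      buf[i] = (buf[i] * 1.0000001) % 1000;
    }
  }
  return Number(process.hrtime.bigint() - start) / 1e6; // milliseconds
}

const N = 1 << 16;
console.log('Array:        ', bench(new Array(N).fill(1), 200).toFixed(1), 'ms');
console.log('Float64Array: ', bench(new Float64Array(N).fill(1), 200).toFixed(1), 'ms');
```

Running this under node --trace-opt --trace-deopt is how you'd watch whether the JIT optimizes the loop once or keeps deoptimizing, as described above.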
// GUN DEFINITIONS
const dfltskl = exports.dfltskl = 7;
exports.combineStats = function(arr) {
    try {
        // Build a blank array of the appropriate length
        let data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
        arr.forEach(function(component) {
            for (let i = 0; i < data.length; i++) {
                data[i] = data[i] * component[i];
            }
        });
        return data;
    } catch (err) {
        console.log(err);
    }
};
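A self-contained restatement of what the fragment above does — element-wise multiplication of each component's stat multipliers onto a baseline of 1s (the function name is from the source; the example inputs are hypothetical):

```javascript
// Assumed behavior of combineStats: reduce an array of 13-entry stat
// vectors by element-wise product, starting from all 1s.
function combineStats(arr) {
  const data = new Array(13).fill(1);
  arr.forEach((component) => {
    for (let i = 0; i < data.length; i++) {
      data[i] *= component[i];
    }
  });
  return data;
}

// Two hypothetical components that each double stat 0 and halve stat 1:
const combined = combineStats([
  [2, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
  [2, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
]);
console.log(combined[0], combined[1]); // 4 0.25
```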
// gamemode files would be best in CoffeeScript
// real width is 3000
// food was originally 800; food would add an element of dodging
// this mode should have the leaderboard disabled on the client
// npm install driftless for more accurate timing
// could driftless also help make arras faster, by making the game loop execute more accurately?
exports.flagCarriers = []
exports.flagTimes = { [-1] /* blue */: 5 * 60, [-3] /* red */: 5 * 60 }
const isFlag = e => e.type === 'flag'
const { genericEntity } = require('../lib/basedefinitions.js')
module.exports.flag = {
    LABEL: 'Flag',
    ACCEPTS_SCORE: false,
    TYPE: 'flag',
    SHAPE: 6,
    PARENT: [genericEntity],
    SIZE: 50,
    DANGER: 0,
    BODY: {
-march=native -mtune=native
I want read-only access to your GitLab repository, with the permission to propose merge requests. Do that and I'll consider staying on arras-dev.
If not, I'm gonna ask to join Fillygroove/Hellcat's server as a co-dev, and I'm gonna contribute my anti-lag efforts and my CTHD there.
I have three prongs of evidence that strongly lead me to believe that GC is the issue. And although I can't disprove the null hypothesis like you want me to, intuition trumps proof when you want speed.
Testing
We can do the Measuring / Testing after you try out my fix in prod. Until then, it's not worth it for me to replicate your freeze-every-15-seconds behavior and run a controlled trial just to prove what my intuition already knows. I can't reproduce the freeze on my server just by adding lots of bots; to do that I'd probably have to set up a VM (or register and wait for an OpenShift account) with extremely limited memory and allocate some swap to replicate it.