Created November 17, 2017 07:15
Node.fz vs. nodejs master conflicts
diff --cc .gitignore | |
index 5951901b65,3d6b7a51a3..0000000000 | |
--- a/.gitignore | |
+++ b/.gitignore | |
@@@ -1,20 -1,4 +1,24 @@@ | |
++<<<<<<< HEAD | |
+# Whitelist dotfiles | |
+.* | |
+!deps/**/.* | |
+!test/fixtures/**/.* | |
+!tools/eslint/**/.* | |
+!tools/doc/node_modules/**/.* | |
+!.editorconfig | |
+!.eslintignore | |
+!.eslintrc.yaml | |
+!.gitattributes | |
+!.github | |
+!.gitignore | |
+!.gitkeep | |
+!.mailmap | |
+!.nycrc | |
+!.remarkrc | |
+ | |
++======= | |
+ cscope* | |
++>>>>>>> Node.fz changes | |
core | |
vgcore.* | |
v8*.log | |
diff --cc README.md | |
index db22c07cd4,7106a3efd2..0000000000 | |
--- a/README.md | |
+++ b/README.md | |
@@@ -1,592 -1,64 +1,653 @@@ | |
+<p align="center"> | |
+ <a href="https://nodejs.org/"> | |
+ <img alt="Node.js" src="https://nodejs.org/static/images/logo-light.svg" width="400"/> | |
+ </a> | |
+</p> | |
+<p align="center"> | |
+ <a title="CII Best Practices" href="https://bestpractices.coreinfrastructure.org/projects/29"><img src="https://bestpractices.coreinfrastructure.org/projects/29/badge"></a> | |
+</p> | |
+Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. Node.js | |
+uses an event-driven, non-blocking I/O model that makes it lightweight and | |
+efficient. The Node.js package ecosystem, [npm][], is the largest ecosystem of | |
+open source libraries in the world. | |
+ | |
+The Node.js project is supported by the | |
+[Node.js Foundation](https://nodejs.org/en/foundation/). Contributions, | |
+policies, and releases are managed under an | |
+[open governance model](./GOVERNANCE.md). | |
+ | |
++<<<<<<< HEAD | |
+**This project is bound by a [Code of Conduct][].** | |
+ | |
+If you need help using or installing Node.js, please use the | |
+[nodejs/help](https://github.com/nodejs/help) issue tracker. | |
+ | |
+ | |
+# Table of Contents | |
+ | |
+* [Resources for Newcomers](#resources-for-newcomers) | |
+* [Release Types](#release-types) | |
+ * [Download](#download) | |
+ * [Current and LTS Releases](#current-and-lts-releases) | |
+ * [Nightly Releases](#nightly-releases) | |
+ * [API Documentation](#api-documentation) | |
+ * [Verifying Binaries](#verifying-binaries) | |
+* [Building Node.js](#building-nodejs) | |
+ * [Security](#security) | |
+ * [Current Project Team Members](#current-project-team-members) | |
+ * [TSC (Technical Steering Committee)](#tsc-technical-steering-committee) | |
+ * [Collaborators](#collaborators) | |
+ * [Release Team](#release-team) | |
+ | |
+## Resources for Newcomers | |
+ | |
+### Official Resources | |
+ | |
+* [Website][] | |
+* [Node.js Help][] | |
+* [Contributing to the project][] | |
+* IRC (node core development): [#node-dev on chat.freenode.net][] | |
+ | |
+### Unofficial Resources | |
+ | |
+* IRC (general questions): [#node.js on chat.freenode.net][]. Please see | |
+<http://nodeirc.info/> for more information regarding the `#node.js` IRC | |
+channel. | |
+ | |
+_Please note that unofficial resources are neither managed by (nor necessarily | |
+endorsed by) the Node.js TSC. Specifically, such resources are not | |
+currently covered by the [Node.js Moderation Policy][] and the selection and | |
+actions of resource operators/moderators are not subject to TSC oversight._ | |
+ | |
+## Release Types | |
+ | |
+The Node.js project maintains multiple types of releases: | |
+ | |
+* **Current**: Released from active development branches of this repository, | |
+ versioned by [SemVer](http://semver.org/) and signed by a member of the | |
+ [Release Team](#release-team). | |
+ Code for Current releases is organized in this repository by major version | |
+ number. For example: [v4.x](https://github.com/nodejs/node/tree/v4.x). | |
+ The major version number of Current releases will increment every 6 months | |
+ allowing for breaking changes to be introduced. This happens in April and | |
+ October every year. Current release lines beginning in October each year have | |
+ a maximum support life of 8 months. Current release lines beginning in April | |
+ each year will convert to LTS (see below) after 6 months and receive further | |
+ support for 30 months. | |
+* **LTS**: Releases that receive Long-term Support, with a focus on stability | |
+ and security. Every second Current release line (major version) will become an | |
+ LTS line and receive 18 months of _Active LTS_ support and a further 12 | |
+ months of _Maintenance_. LTS release lines are given alphabetically | |
+ ordered codenames, beginning with v4 Argon. LTS releases are less frequent | |
+ and will attempt to maintain consistent major and minor version numbers, | |
+ only incrementing patch version numbers. There are no breaking changes or | |
+ feature additions, except in some special circumstances. | |
+* **Nightly**: Versions of code in this repository on the current Current | |
+  branch, automatically built every 24 hours where changes exist. Use with | |
+ caution. | |
+ | |
+More information can be found in the [LTS README](https://github.com/nodejs/LTS/). | |
++======= | |
+ Node.fz | |
+ ===== | |
- ## Download | |
+ This repository contains a modified version of the popular [Node.js](https://github.com/nodejs/node) JavaScript runtime framework, version 0.12.7. | |
+ The modifications are exclusively (I think) to the libuv subsystem, which fundamentally provides Node.js's event-driven nature. | |
+ Their purpose is to increase the non-determinism already inherent in Node.js's asynchronous, event-driven nature. | |
+ The goal of increasing the non-determinism is to flush out *race conditions* in the Node.js application being executed. | |
- Binaries, installers, and source tarballs are available at | |
- <https://nodejs.org>. | |
+ The changes I made to Node.js are detailed in the associated [paper](http://dl.acm.org/citation.cfm?id=3064188). | |
+ Essentially, I probabilistically re-order the event queue, constrained by legal or practical ordering guarantees prescribed by the Node.js and libuv documentation. | |
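*(Editorial aside, not part of the diff: to give a flavor of the idea, here is a simplified sketch of bounded, probabilistic re-ordering. It is not the actual Node.fz scheduler code; the names `pick_next_event` and `deg_freedom` are made up for illustration. Instead of always dispatching the head of a ready queue, the scheduler may dispatch any of the first few entries, chosen at random.)*

```c
#include <stddef.h>
#include <stdlib.h>

/* Editorial sketch: bounded, probabilistic re-ordering of a ready queue.
 * Instead of always taking the head, pick uniformly among the first
 * `deg_freedom` entries. deg_freedom == 1 degenerates to plain FIFO. */
struct event {
  struct event* next;
  /* ... event payload ... */
};

static struct event* pick_next_event(struct event** head, int deg_freedom) {
  struct event* e;
  struct event** pp;
  struct event* picked;
  int avail;
  int choice;

  /* How many entries are we allowed to choose from? */
  avail = 0;
  for (e = *head; e != NULL && avail < deg_freedom; e = e->next)
    avail++;
  if (avail == 0)
    return NULL;

  /* Pick one of the first `avail` entries uniformly at random. */
  choice = rand() % avail;
  pp = head;
  while (choice-- > 0)
    pp = &(*pp)->next;

  picked = *pp;
  *pp = picked->next;  /* unlink the chosen event */
  picked->next = NULL;
  return picked;
}
```

*(The real scheduler constrains this choice further so that the orderings it produces remain legal under the Node.js and libuv documentation, as the paragraph above notes.)*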
+ ## Does it work? | |
++>>>>>>> Node.fz changes | |
+ | |
+ As described in the associated [paper](http://dl.acm.org/citation.cfm?id=3064188), Node.fz can: | |
+ - reproduce race conditions more effectively than Node.js | |
+ - increase the schedule space explored by a test suite | |
+ | |
+ Node.fz will make a great addition to your testing methodology, continuous integration pipelines, etc. | |
+ Once you've worked out obvious bugs in your program, replace the installation of Node.js with Node.fz, and then re-run your test suite to look for race conditions. | |
+ | |
++<<<<<<< HEAD | |
+#### Current and LTS Releases | |
+**Current** and **LTS** releases are available at | |
+<https://nodejs.org/download/release/>, listed under their version strings. | |
+The [latest](https://nodejs.org/download/release/latest/) directory is an | |
+alias for the latest Current release. The latest LTS release from an LTS | |
+line is available in the form: latest-_codename_. For example: | |
+<https://nodejs.org/download/release/latest-argon>. | |
+ | |
+#### Nightly Releases | |
+**Nightly** builds are available at | |
+<https://nodejs.org/download/nightly/>, listed under their version | |
+string which includes their date (in UTC time) and the commit SHA at | |
+the HEAD of the release. | |
+ | |
+#### API Documentation | |
+**API documentation** is available in each release and nightly | |
+directory under _docs_. <https://nodejs.org/api/> points to the API | |
+documentation of the latest stable version. | |
- | |
- ### Verifying Binaries | |
- | |
++======= | |
+ ## How do I use it? | |
+ | |
+ Follow the standard Node.js compilation procedure, as described elsewhere in this repository. | |
+ Then replace the version of Node.js installed on your machine with the resulting binary (e.g. with `make install`). | |
+ | |
+ Node.fz's behavior can be controlled using the following environment variables: | |
++>>>>>>> Node.fz changes | |
+ | |
+ | Parameter | Recommended value | | |
+ | ----------------------------------- | :----------------------: | | |
+ | UV_THREADPOOL_SIZE\* | 1 | | |
+ | UV_SCHEDULER_TYPE\* | "TP_FREEDOM" | | |
+ | UV_SCHEDULER_MODE\* | "RECORD" | | |
+ | UV_SILENT\* | "1" | | |
+ | UV_SCHEDULER_TP_DEG_FREEDOM | 5 | | |
+ | UV_SCHEDULER_TP_MAX_DELAY | 100 | | |
+ | UV_SCHEDULER_TP_EPOLL_THRESHOLD | 100 | | |
+ | UV_SCHEDULER_TIMER_LATE_EXEC_TPERC | 250 | | |
+ | UV_SCHEDULER_IOPOLL_DEG_FREEDOM | -1 | | |
+ | UV_SCHEDULER_IOPOLL_DEFER_PERC | 10 | | |
+ | UV_SCHEDULER_RUN_CLOSING_DEFER_PERC | 5 | | |
+ | |
++<<<<<<< HEAD | |
+Current, LTS and Nightly download directories all contain a _SHASUMS256.txt_ | |
+file that lists the SHA checksums for each file available for | |
+download. | |
+ | |
+The _SHASUMS256.txt_ can be downloaded using curl. | |
+ | |
+```console | |
+$ curl -O https://nodejs.org/dist/vx.y.z/SHASUMS256.txt | |
+``` | |
+ | |
+To check that a downloaded file matches the checksum, run | |
+it through `sha256sum` with a command such as: | |
+ | |
+```console | |
+$ grep node-vx.y.z.tar.gz SHASUMS256.txt | sha256sum -c - | |
+``` | |
++======= | |
+ \*Feel free to tune all of the parameters except these starred ones; leave them at the recommended values. | |
- _(Where "node-vx.y.z.tar.gz" is the name of the file you have | |
- downloaded)_ | |
+ See deps/uv/src/uv-common.c for an in-depth explanation of each parameter. | |
++>>>>>>> Node.fz changes | |
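*(Editorial aside, not part of the diff: for orientation, here is a minimal sketch of how integer parameters like those in the table above are typically pulled from the environment. The real parsing, and the precise meaning of each knob, lives in deps/uv/src/uv-common.c; the helper names below are invented for illustration.)*

```c
#include <stdlib.h>

/* Editorial sketch: read an integer scheduler parameter from the
 * environment, falling back to a default when it is unset or empty.
 * The actual Node.fz parsing lives in deps/uv/src/uv-common.c. */
static int scheduler_env_int(const char* name, int dflt) {
  const char* val = getenv(name);
  if (val == NULL || *val == '\0')
    return dflt;
  return atoi(val);
}

void read_scheduler_params(void) {
  /* Two of the knobs from the table above, with their recommended defaults. */
  int tp_deg_freedom = scheduler_env_int("UV_SCHEDULER_TP_DEG_FREEDOM", 5);
  int tp_max_delay = scheduler_env_int("UV_SCHEDULER_TP_MAX_DELAY", 100);
  (void) tp_deg_freedom;
  (void) tp_max_delay;
}
```

*(In practice you export these variables in the shell that launches your test suite, e.g. `UV_THREADPOOL_SIZE=1 UV_SCHEDULER_TYPE=TP_FREEDOM UV_SCHEDULER_MODE=RECORD UV_SILENT=1` before running the tests.)*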
+ ## Limitations | |
+ | |
++<<<<<<< HEAD | |
+Additionally, Current and LTS releases (not Nightlies) have the GPG | |
+detached signature of SHASUMS256.txt available as SHASUMS256.txt.sig. | |
+You can use `gpg` to verify that SHASUMS256.txt has not been tampered with. | |
+ | |
+To verify SHASUMS256.txt has not been altered, you will first need to import | |
+all of the GPG keys of individuals authorized to create releases. They are | |
+listed at the bottom of this README under [Release Team](#release-team). | |
+Use a command such as this to import the keys: | |
+ | |
+```console | |
+$ gpg --keyserver pool.sks-keyservers.net --recv-keys DD8F2338BAE7501E3DD5AC78C273792F7D83545D | |
+``` | |
++======= | |
+ Node.fz is a dynamic test tool. It is only as good as the test suite being used to drive it. | |
+ If there's a bug in function *B* but your test suite never calls this function, obviously Node.fz won't help you. | |
+ | |
+ ## Contributing | |
+ | |
+ The initial prototype of Node.fz was implemented by [James (Jamie) Davis](https://github.com/davisjam). | |
++>>>>>>> Node.fz changes | |
- _(See the bottom of this README for a full script to import active | |
- release keys)_ | |
+ Please reach out to me to discuss any pull requests you'd like to submit. | |
+ The most needed change is porting it to a more recent version of Node.js (and its libuv). | |
+ I think this should be pretty straightforward. | |
++<<<<<<< HEAD | |
+Next, download the SHASUMS256.txt.sig for the release: | |
+ | |
+```console | |
+$ curl -O https://nodejs.org/dist/vx.y.z/SHASUMS256.txt.sig | |
+``` | |
+ | |
+After downloading the appropriate SHASUMS256.txt and SHASUMS256.txt.sig files, | |
+you can then use `gpg --verify SHASUMS256.txt.sig SHASUMS256.txt` to verify | |
+that the file has been signed by an authorized member of the Node.js team. | |
+ | |
+Once verified, use the SHASUMS256.txt file to get the checksum for | |
+the binary verification command above. | |
+ | |
+## Building Node.js | |
+ | |
+See [BUILDING.md](BUILDING.md) for instructions on how to build | |
+Node.js from source. The document also contains a list of | |
+officially supported platforms. | |
+ | |
+## Security | |
+ | |
+All security bugs in Node.js are taken seriously and should be reported by | |
+emailing [email protected]. This will be delivered to a subset of the project | |
+team who handle security issues. Please don't disclose security bugs | |
+publicly until they have been handled by the security team. | |
+ | |
+Your email will be acknowledged within 24 hours, and you’ll receive a more | |
+detailed response to your email within 48 hours indicating the next steps in | |
+handling your report. | |
+ | |
+There are no hard and fast rules to determine if a bug is worth reporting as | |
+a security issue. The general rule is any issue worth reporting | |
+must allow an attacker to compromise the confidentiality, integrity | |
+or availability of the Node.js application or its system for which the attacker | |
+does not already have the capability. | |
+ | |
+To illustrate the point, here are some examples of past issues and what the | |
+Security Response Team thinks of them. When in doubt, however, please do send | |
+us a report nonetheless. | |
+ | |
+ | |
+### Public disclosure preferred | |
+ | |
+- [#14519](https://github.com/nodejs/node/issues/14519): _Internal domain | |
+ function can be used to cause segfaults_. Causing program termination using | |
+  either the public JavaScript APIs or the private bindings layer APIs requires | |
+  the ability to execute arbitrary JavaScript code, which is already the highest | |
+ level of privilege possible. | |
+ | |
+- [#12141](https://github.com/nodejs/node/pull/12141): _buffer: zero fill | |
+ Buffer(num) by default_. The buffer constructor behaviour was documented, | |
+ but found to be prone to [mis-use](https://snyk.io/blog/exploiting-buffer/). | |
+ It has since been changed, but despite much debate, was not considered misuse | |
+ prone enough to justify fixing in older release lines and breaking our | |
+ API stability contract. | |
+ | |
+### Private disclosure preferred | |
+ | |
+- [CVE-2016-7099](https://nodejs.org/en/blog/vulnerability/september-2016-security-releases/): | |
+ _Fix invalid wildcard certificate validation check_. This is a high severity | |
+ defect that would allow a malicious TLS server to serve an invalid wildcard | |
+ certificate for its hostname and be improperly validated by a Node.js client. | |
+ | |
+- [#5507](https://github.com/nodejs/node/pull/5507): _Fix a defect that makes | |
+ the CacheBleed Attack possible_. Many, though not all, OpenSSL vulnerabilities | |
+  in the TLS/SSL protocols also affect Node.js. | |
+ | |
+- [CVE-2016-2216](https://nodejs.org/en/blog/vulnerability/february-2016-security-releases/): | |
+ _Fix defects in HTTP header parsing for requests and responses that can allow | |
+ response splitting_. While the impact of this vulnerability is application and | |
+ network dependent, it is remotely exploitable in the HTTP protocol. | |
+ | |
+When in doubt, please do send us a report. | |
+ | |
+ | |
+## Current Project Team Members | |
+ | |
+The Node.js project team comprises a group of core collaborators and a sub-group | |
+that forms the _Technical Steering Committee_ (TSC) which governs the project. | |
+For more information about the governance of the Node.js project, see | |
+[GOVERNANCE.md](./GOVERNANCE.md). | |
+ | |
+### TSC (Technical Steering Committee) | |
+ | |
+* [addaleax](https://github.com/addaleax) - | |
+**Anna Henningsen** <[email protected]> (she/her) | |
+* [ChALkeR](https://github.com/ChALkeR) - | |
+**Сковорода Никита Андреевич** <[email protected]> (he/him) | |
+* [cjihrig](https://github.com/cjihrig) - | |
+**Colin Ihrig** <[email protected]> | |
+* [evanlucas](https://github.com/evanlucas) - | |
+**Evan Lucas** <[email protected]> (he/him) | |
+* [fhinkel](https://github.com/fhinkel) - | |
+**Franziska Hinkelmann** <[email protected]> (she/her) | |
+* [Fishrock123](https://github.com/Fishrock123) - | |
+**Jeremiah Senkpiel** <[email protected]> | |
+* [indutny](https://github.com/indutny) - | |
+**Fedor Indutny** <[email protected]> | |
+* [jasnell](https://github.com/jasnell) - | |
+**James M Snell** <[email protected]> (he/him) | |
+* [joshgav](https://github.com/joshgav) - | |
+**Josh Gavant** <[email protected]> | |
+* [joyeecheung](https://github.com/joyeecheung) - | |
+**Joyee Cheung** <[email protected]> (she/her) | |
+* [mcollina](https://github.com/mcollina) - | |
+**Matteo Collina** <[email protected]> (he/him) | |
+* [mhdawson](https://github.com/mhdawson) - | |
+**Michael Dawson** <[email protected]> (he/him) | |
+* [mscdex](https://github.com/mscdex) - | |
+**Brian White** <[email protected]> | |
+* [MylesBorins](https://github.com/MylesBorins) - | |
+**Myles Borins** <[email protected]> (he/him) | |
+* [ofrobots](https://github.com/ofrobots) - | |
+**Ali Ijaz Sheikh** <[email protected]> | |
+* [rvagg](https://github.com/rvagg) - | |
+**Rod Vagg** <[email protected]> | |
+* [targos](https://github.com/targos) - | |
+**Michaël Zasso** <[email protected]> (he/him) | |
+* [thefourtheye](https://github.com/thefourtheye) - | |
+**Sakthipriyan Vairamani** <[email protected]> (he/him) | |
+* [trevnorris](https://github.com/trevnorris) - | |
+**Trevor Norris** <[email protected]> | |
+* [Trott](https://github.com/Trott) - | |
+**Rich Trott** <[email protected]> (he/him) | |
+ | |
+### TSC Emeriti | |
+ | |
+* [bnoordhuis](https://github.com/bnoordhuis) - | |
+**Ben Noordhuis** <[email protected]> | |
+* [chrisdickinson](https://github.com/chrisdickinson) - | |
+**Chris Dickinson** <[email protected]> | |
+* [isaacs](https://github.com/isaacs) - | |
+**Isaac Z. Schlueter** <[email protected]> | |
+* [nebrius](https://github.com/nebrius) - | |
+**Bryan Hughes** <[email protected]> | |
+* [orangemocha](https://github.com/orangemocha) - | |
+**Alexis Campailla** <[email protected]> | |
+* [piscisaureus](https://github.com/piscisaureus) - | |
+**Bert Belder** <[email protected]> | |
+* [shigeki](https://github.com/shigeki) - | |
+**Shigeki Ohtsu** <[email protected]> (he/him) | |
+ | |
+### Collaborators | |
+ | |
+* [abouthiroppy](https://github.com/abouthiroppy) - | |
+**Yuta Hiroto** <[email protected]> (he/him) | |
+* [addaleax](https://github.com/addaleax) - | |
+**Anna Henningsen** <[email protected]> (she/her) | |
+* [ak239](https://github.com/ak239) - | |
+**Aleksei Koziatinskii** <[email protected]> | |
+* [andrasq](https://github.com/andrasq) - | |
+**Andras** <[email protected]> | |
+* [AndreasMadsen](https://github.com/AndreasMadsen) - | |
+**Andreas Madsen** <[email protected]> (he/him) | |
+* [AnnaMag](https://github.com/AnnaMag) - | |
+**Anna M. Kedzierska** <[email protected]> | |
+* [apapirovski](https://github.com/apapirovski) - | |
+**Anatoli Papirovski** <[email protected]> (he/him) | |
+* [aqrln](https://github.com/aqrln) - | |
+**Alexey Orlenko** <[email protected]> (he/him) | |
+* [bengl](https://github.com/bengl) - | |
+**Bryan English** <[email protected]> (he/him) | |
+* [benjamingr](https://github.com/benjamingr) - | |
+**Benjamin Gruenbaum** <[email protected]> | |
+* [bmeck](https://github.com/bmeck) - | |
+**Bradley Farias** <[email protected]> | |
+* [bmeurer](https://github.com/bmeurer) - | |
+**Benedikt Meurer** <[email protected]> | |
+* [bnoordhuis](https://github.com/bnoordhuis) - | |
+**Ben Noordhuis** <[email protected]> | |
+* [brendanashworth](https://github.com/brendanashworth) - | |
+**Brendan Ashworth** <[email protected]> | |
+* [BridgeAR](https://github.com/BridgeAR) - | |
+**Ruben Bridgewater** <[email protected]> | |
+* [bzoz](https://github.com/bzoz) - | |
+**Bartosz Sosnowski** <[email protected]> | |
+* [calvinmetcalf](https://github.com/calvinmetcalf) - | |
+**Calvin Metcalf** <[email protected]> | |
+* [ChALkeR](https://github.com/ChALkeR) - | |
+**Сковорода Никита Андреевич** <[email protected]> (he/him) | |
+* [chrisdickinson](https://github.com/chrisdickinson) - | |
+**Chris Dickinson** <[email protected]> | |
+* [cjihrig](https://github.com/cjihrig) - | |
+**Colin Ihrig** <[email protected]> | |
+* [claudiorodriguez](https://github.com/claudiorodriguez) - | |
+**Claudio Rodriguez** <[email protected]> | |
+* [danbev](https://github.com/danbev) - | |
+**Daniel Bevenius** <[email protected]> | |
+* [DavidCai1993](https://github.com/DavidCai1993) - | |
+**David Cai** <[email protected]> (he/him) | |
+* [edsadr](https://github.com/edsadr) - | |
+**Adrian Estrada** <[email protected]> (he/him) | |
+* [eljefedelrodeodeljefe](https://github.com/eljefedelrodeodeljefe) - | |
+**Robert Jefe Lindstaedt** <[email protected]> | |
+* [estliberitas](https://github.com/estliberitas) - | |
+**Alexander Makarenko** <[email protected]> | |
+* [eugeneo](https://github.com/eugeneo) - | |
+**Eugene Ostroukhov** <[email protected]> | |
+* [evanlucas](https://github.com/evanlucas) - | |
+**Evan Lucas** <[email protected]> (he/him) | |
+* [fhinkel](https://github.com/fhinkel) - | |
+**Franziska Hinkelmann** <[email protected]> (she/her) | |
+* [firedfox](https://github.com/firedfox) - | |
+**Daniel Wang** <[email protected]> | |
+* [Fishrock123](https://github.com/Fishrock123) - | |
+**Jeremiah Senkpiel** <[email protected]> | |
+* [gabrielschulhof](https://github.com/gabrielschulhof) - | |
+**Gabriel Schulhof** <[email protected]> | |
+* [geek](https://github.com/geek) - | |
+**Wyatt Preul** <[email protected]> | |
+* [gibfahn](https://github.com/gibfahn) - | |
+**Gibson Fahnestock** <[email protected]> (he/him) | |
+* [gireeshpunathil](https://github.com/gireeshpunathil) - | |
+**Gireesh Punathil** <[email protected]> (he/him) | |
+* [hashseed](https://github.com/hashseed) - | |
+**Yang Guo** <[email protected]> (he/him) | |
+* [iarna](https://github.com/iarna) - | |
+**Rebecca Turner** <[email protected]> | |
+* [imran-iq](https://github.com/imran-iq) - | |
+**Imran Iqbal** <[email protected]> | |
+* [imyller](https://github.com/imyller) - | |
+**Ilkka Myller** <[email protected]> | |
+* [indutny](https://github.com/indutny) - | |
+**Fedor Indutny** <[email protected]> | |
+* [italoacasas](https://github.com/italoacasas) - | |
+**Italo A. Casas** <[email protected]> (he/him) | |
+* [JacksonTian](https://github.com/JacksonTian) - | |
+**Jackson Tian** <[email protected]> | |
+* [jasnell](https://github.com/jasnell) - | |
+**James M Snell** <[email protected]> (he/him) | |
+* [jasongin](https://github.com/jasongin) - | |
+**Jason Ginchereau** <[email protected]> | |
+* [jbergstroem](https://github.com/jbergstroem) - | |
+**Johan Bergström** <[email protected]> | |
+* [jhamhader](https://github.com/jhamhader) - | |
+**Yuval Brik** <[email protected]> | |
+* [jkrems](https://github.com/jkrems) - | |
+**Jan Krems** <[email protected]> (he/him) | |
+* [joaocgreis](https://github.com/joaocgreis) - | |
+**João Reis** <[email protected]> | |
+* [joshgav](https://github.com/joshgav) - | |
+**Josh Gavant** <[email protected]> | |
+* [joyeecheung](https://github.com/joyeecheung) - | |
+**Joyee Cheung** <[email protected]> (she/her) | |
+* [julianduque](https://github.com/julianduque) - | |
+**Julian Duque** <[email protected]> (he/him) | |
+* [JungMinu](https://github.com/JungMinu) - | |
+**Minwoo Jung** <[email protected]> (he/him) | |
+* [kfarnung](https://github.com/kfarnung) - | |
+**Kyle Farnung** <[email protected]> (he/him) | |
+* [kunalspathak](https://github.com/kunalspathak) - | |
+**Kunal Pathak** <[email protected]> | |
+* [lance](https://github.com/lance) - | |
+**Lance Ball** <[email protected]> | |
+* [lpinca](https://github.com/lpinca) - | |
+**Luigi Pinca** <[email protected]> (he/him) | |
+* [lucamaraschi](https://github.com/lucamaraschi) - | |
+**Luca Maraschi** <[email protected]> (he/him) | |
+* [matthewloring](https://github.com/matthewloring) - | |
+**Matthew Loring** <[email protected]> | |
+* [mcollina](https://github.com/mcollina) - | |
+**Matteo Collina** <[email protected]> (he/him) | |
+* [mhdawson](https://github.com/mhdawson) - | |
+**Michael Dawson** <[email protected]> (he/him) | |
+* [micnic](https://github.com/micnic) - | |
+**Nicu Micleușanu** <[email protected]> (he/him) | |
+* [mikeal](https://github.com/mikeal) - | |
+**Mikeal Rogers** <[email protected]> | |
+* [misterdjules](https://github.com/misterdjules) - | |
+**Julien Gilli** <[email protected]> | |
+* [mscdex](https://github.com/mscdex) - | |
+**Brian White** <[email protected]> | |
+* [MylesBorins](https://github.com/MylesBorins) - | |
+**Myles Borins** <[email protected]> (he/him) | |
+* [not-an-aardvark](https://github.com/not-an-aardvark) - | |
+**Teddy Katz** <[email protected]> | |
+* [ofrobots](https://github.com/ofrobots) - | |
+**Ali Ijaz Sheikh** <[email protected]> | |
+* [orangemocha](https://github.com/orangemocha) - | |
+**Alexis Campailla** <[email protected]> | |
+* [othiym23](https://github.com/othiym23) - | |
+**Forrest L Norvell** <[email protected]> (he/him) | |
+* [phillipj](https://github.com/phillipj) - | |
+**Phillip Johnsen** <[email protected]> | |
+* [pmq20](https://github.com/pmq20) - | |
+**Minqi Pan** <[email protected]> | |
+* [princejwesley](https://github.com/princejwesley) - | |
+**Prince John Wesley** <[email protected]> | |
+* [Qard](https://github.com/Qard) - | |
+**Stephen Belanger** <[email protected]> (he/him) | |
+* [refack](https://github.com/refack) - | |
+**Refael Ackermann** <[email protected]> (he/him) | |
+* [richardlau](https://github.com/richardlau) - | |
+**Richard Lau** <[email protected]> | |
+* [rmg](https://github.com/rmg) - | |
+**Ryan Graham** <[email protected]> | |
+* [robertkowalski](https://github.com/robertkowalski) - | |
+**Robert Kowalski** <[email protected]> | |
+* [romankl](https://github.com/romankl) - | |
+**Roman Klauke** <[email protected]> | |
+* [ronkorving](https://github.com/ronkorving) - | |
+**Ron Korving** <[email protected]> | |
+* [RReverser](https://github.com/RReverser) - | |
+**Ingvar Stepanyan** <[email protected]> | |
+* [rvagg](https://github.com/rvagg) - | |
+**Rod Vagg** <[email protected]> | |
+* [saghul](https://github.com/saghul) - | |
+**Saúl Ibarra Corretgé** <[email protected]> | |
+* [sam-github](https://github.com/sam-github) - | |
+**Sam Roberts** <[email protected]> | |
+* [santigimeno](https://github.com/santigimeno) - | |
+**Santiago Gimeno** <[email protected]> | |
+* [sebdeckers](https://github.com/sebdeckers) - | |
+**Sebastiaan Deckers** <[email protected]> | |
+* [seishun](https://github.com/seishun) - | |
+**Nikolai Vavilov** <[email protected]> | |
+* [shigeki](https://github.com/shigeki) - | |
+**Shigeki Ohtsu** <[email protected]> (he/him) | |
+* [silverwind](https://github.com/silverwind) - | |
+**Roman Reiss** <[email protected]> | |
+* [srl295](https://github.com/srl295) - | |
+**Steven R Loomis** <[email protected]> | |
+* [stefanmb](https://github.com/stefanmb) - | |
+**Stefan Budeanu** <[email protected]> | |
+* [targos](https://github.com/targos) - | |
+**Michaël Zasso** <[email protected]> (he/him) | |
+* [thefourtheye](https://github.com/thefourtheye) - | |
+**Sakthipriyan Vairamani** <[email protected]> (he/him) | |
+* [thekemkid](https://github.com/thekemkid) - | |
+**Glen Keane** <[email protected]> (he/him) | |
+* [thlorenz](https://github.com/thlorenz) - | |
+**Thorsten Lorenz** <[email protected]> | |
+* [TimothyGu](https://github.com/TimothyGu) - | |
+**Timothy Gu** <[email protected]> (he/him) | |
+* [tniessen](https://github.com/tniessen) - | |
+**Tobias Nießen** <[email protected]> | |
+* [trevnorris](https://github.com/trevnorris) - | |
+**Trevor Norris** <[email protected]> | |
+* [Trott](https://github.com/Trott) - | |
+**Rich Trott** <[email protected]> (he/him) | |
+* [tunniclm](https://github.com/tunniclm) - | |
+**Mike Tunnicliffe** <[email protected]> | |
+* [vkurchatkin](https://github.com/vkurchatkin) - | |
+**Vladimir Kurchatkin** <[email protected]> | |
+* [vsemozhetbyt](https://github.com/vsemozhetbyt) - | |
+**Vse Mozhet Byt** <[email protected]> (he/him) | |
+* [watilde](https://github.com/watilde) - | |
+**Daijiro Wachi** <[email protected]> (he/him) | |
+* [whitlockjc](https://github.com/whitlockjc) - | |
+**Jeremy Whitlock** <[email protected]> | |
+* [XadillaX](https://github.com/XadillaX) - | |
+**Khaidi Chu** <[email protected]> (he/him) | |
+* [yorkie](https://github.com/yorkie) - | |
+**Yorkie Liu** <[email protected]> | |
+* [yosuke-furukawa](https://github.com/yosuke-furukawa) - | |
+**Yosuke Furukawa** <[email protected]> | |
+ | |
+### Collaborator Emeriti | |
+ | |
+* [isaacs](https://github.com/isaacs) - | |
+**Isaac Z. Schlueter** <[email protected]> | |
+* [lxe](https://github.com/lxe) - | |
+**Aleksey Smolenchuk** <[email protected]> | |
+* [monsanto](https://github.com/monsanto) - | |
+**Christopher Monsanto** <[email protected]> | |
+* [Olegas](https://github.com/Olegas) - | |
+**Oleg Elifantiev** <[email protected]> | |
+* [petkaantonov](https://github.com/petkaantonov) - | |
+**Petka Antonov** <[email protected]> | |
+* [piscisaureus](https://github.com/piscisaureus) - | |
+**Bert Belder** <[email protected]> | |
+* [rlidwka](https://github.com/rlidwka) - | |
+**Alex Kocharin** <[email protected]> | |
+* [tellnes](https://github.com/tellnes) - | |
+**Christian Tellnes** <[email protected]> | |
+ | |
+Collaborators follow the [COLLABORATOR_GUIDE.md](./COLLABORATOR_GUIDE.md) in | |
+maintaining the Node.js project. | |
+ | |
+### Release Team | |
+ | |
+Node.js releases are signed with one of the following GPG keys: | |
+ | |
+* **Colin Ihrig** <[email protected]> | |
+`94AE36675C464D64BAFA68DD7434390BDBE9B9C5` | |
+* **Evan Lucas** <[email protected]> | |
+`B9AE9905FFD7803F25714661B63B535A4C206CA9` | |
+* **Gibson Fahnestock** <[email protected]> | |
+`77984A986EBC2AA786BC0F66B01FBB92821C587A` | |
+* **Italo A. Casas** <[email protected]> | |
+`56730D5401028683275BD23C23EFEFE93C4CFFFE` | |
+* **James M Snell** <[email protected]> | |
+`71DCFD284A79C3B38668286BC97EC7A07EDE3FC1` | |
+* **Jeremiah Senkpiel** <[email protected]> | |
+`FD3A5288F042B6850C66B31F09FE44734EB7990E` | |
+* **Myles Borins** <[email protected]> | |
+`C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8` | |
+* **Rod Vagg** <[email protected]> | |
+`DD8F2338BAE7501E3DD5AC78C273792F7D83545D` | |
+ | |
+The full set of trusted release keys can be imported by running: | |
+ | |
+```shell | |
+gpg --keyserver pool.sks-keyservers.net --recv-keys 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 | |
+gpg --keyserver pool.sks-keyservers.net --recv-keys FD3A5288F042B6850C66B31F09FE44734EB7990E | |
+gpg --keyserver pool.sks-keyservers.net --recv-keys 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 | |
+gpg --keyserver pool.sks-keyservers.net --recv-keys DD8F2338BAE7501E3DD5AC78C273792F7D83545D | |
+gpg --keyserver pool.sks-keyservers.net --recv-keys C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 | |
+gpg --keyserver pool.sks-keyservers.net --recv-keys B9AE9905FFD7803F25714661B63B535A4C206CA9 | |
+gpg --keyserver pool.sks-keyservers.net --recv-keys 56730D5401028683275BD23C23EFEFE93C4CFFFE | |
+gpg --keyserver pool.sks-keyservers.net --recv-keys 77984A986EBC2AA786BC0F66B01FBB92821C587A | |
+``` | |
+ | |
+See the section above on [Verifying Binaries](#verifying-binaries) for details | |
+on what to do with these keys to verify that a downloaded file is official. | |
+ | |
+Previous releases may also have been signed with one of the following GPG keys: | |
+ | |
+* **Chris Dickinson** <[email protected]> | |
+`9554F04D7259F04124DE6B476D5A82AC7E37093B` | |
+* **Isaac Z. Schlueter** <[email protected]> | |
+`93C7E9E91B49E432C2F75674B0A78B0A6C481CF6` | |
+* **Julien Gilli** <[email protected]> | |
+`114F43EE0176B71C7BC219DD50A3051F888C628D` | |
+* **Timothy J Fontaine** <[email protected]> | |
+`7937DFD2AB06298B2293C3187D33FF9D0246406D` | |
+ | |
+### Working Groups | |
+ | |
+Information on the current Node.js Working Groups can be found in the | |
+[TSC repository](https://github.com/nodejs/TSC/blob/master/WORKING_GROUPS.md). | |
+ | |
+[npm]: https://www.npmjs.com | |
+[Website]: https://nodejs.org/en/ | |
+[Contributing to the project]: CONTRIBUTING.md | |
+[Node.js Help]: https://github.com/nodejs/help | |
+[Node.js Moderation Policy]: https://github.com/nodejs/TSC/blob/master/Moderation-Policy.md | |
+[#node.js on chat.freenode.net]: https://webchat.freenode.net?channels=node.js&uio=d4 | |
+[#node-dev on chat.freenode.net]: https://webchat.freenode.net?channels=node-dev&uio=d4 | |
+[Code of Conduct]: https://github.com/nodejs/TSC/blob/master/CODE_OF_CONDUCT.md | |
++======= | |
+ ## Citing this work | |
+ | |
+ If you use this work, please give credit to the associated [paper](http://dl.acm.org/citation.cfm?id=3064188), e.g. | |
+ | |
+ *Davis, James, Arun Thekumparampil, and Dongyoon Lee. "Node.fz: Fuzzing the Server-Side Event-Driven Architecture." Proceedings of the Twelfth European Conference on Computer Systems. ACM, 2017.* | |
++>>>>>>> Node.fz changes | |
diff --cc common.gypi | |
index 5cc75d763f,df8fa66ee8..0000000000 | |
--- a/common.gypi | |
+++ b/common.gypi | |
@@@ -135,7 -89,7 +135,11 @@@ | |
'variables': { | |
'v8_enable_handle_zapping': 0, | |
}, | |
++<<<<<<< HEAD | |
+ 'cflags': [ '-O3' ], | |
++======= | |
+ 'cflags': [ '-O3', '-ffunction-sections', '-fdata-sections', '-fstack-protector-all' ], | |
++>>>>>>> Node.fz changes | |
'conditions': [ | |
['target_arch=="x64"', { | |
'msvs_configuration_platform': 'x64', | |
@@@ -291,8 -221,10 +291,15 @@@ | |
'cflags': [ '-pthread', ], | |
'ldflags': [ '-pthread' ], | |
}], | |
++<<<<<<< HEAD | |
+ [ 'OS in "linux freebsd openbsd solaris android aix cloudabi"', { | |
+ 'cflags': [ '-Wall', '-Wextra', '-Wno-unused-parameter', ], | |
++======= | |
+ [ 'OS in "linux freebsd openbsd solaris android aix"', { | |
+ 'cflags': [ '-Wall', '-Wextra', '-Wno-unused-parameter', | |
+ '-DJD_SILENT_NODE', | |
+ ], | |
++>>>>>>> Node.fz changes | |
'cflags_cc': [ '-fno-rtti', '-fno-exceptions', '-std=gnu++0x' ], | |
'ldflags': [ '-rdynamic' ], | |
'target_conditions': [ | |
diff --cc deps/uv/include/uv-unix.h | |
index 6565ff441e,84c0f3fbf1..0000000000 | |
--- a/deps/uv/include/uv-unix.h | |
+++ b/deps/uv/include/uv-unix.h | |
@@@ -208,9 -227,6 +211,12 @@@ typedef struct | |
void* check_handles[2]; \ | |
void* idle_handles[2]; \ | |
void* async_handles[2]; \ | |
++<<<<<<< HEAD | |
+ void (*async_unused)(void); /* TODO(bnoordhuis) Remove in libuv v2. */ \ | |
+ uv__io_t async_io_watcher; \ | |
+ int async_wfd; \ | |
++======= | |
++>>>>>>> Node.fz changes | |
struct { \ | |
void* min; \ | |
unsigned int nelts; \ | |
diff --cc deps/uv/src/threadpool.c | |
index 108934112c,e2559164e4..0000000000 | |
--- a/deps/uv/src/threadpool.c | |
+++ b/deps/uv/src/threadpool.c | |
@@@ -127,9 -280,11 +268,11 @@@ UV_DESTRUCTOR(static void cleanup(void) | |
#endif | |
-static void init_once(void) { | |
+static void init_threads(void) { | |
unsigned int i; | |
const char* val; | |
+ | |
+ initialize_fuzzy_libuv(); | |
nthreads = ARRAY_SIZE(default_threads); | |
val = getenv("UV_THREADPOOL_SIZE"); | |
@@@ -164,28 -320,29 +308,50 @@@ | |
initialized = 1; | |
} | |
+ void uv__work_done(uv_async_t *handle) { | |
+ struct uv__work *w = NULL; | |
+ uv__work_async_t *uv__work_async = NULL; | |
+ int err = 0; | |
+ | |
+ assert(handle != NULL && handle->type == UV_ASYNC); | |
+ | |
+ uv__work_async = container_of(handle, uv__work_async_t, async_buf); | |
+ w = uv__work_async->uv__work; | |
+ mylog(LOG_THREADPOOL, 1, "uv__work_done: handle %p w %p\n", handle, w); | |
+ | |
+ err = (w->work == uv__cancelled) ? UV_ECANCELED : 0; | |
+ #ifdef UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) w->done, UV__WORK_DONE, (long int) w, (long int) err); | |
+ #else | |
+ w->done(w, err); | |
+ #endif | |
+ | |
+ /* We're done with handle. */ | |
+ mylog(LOG_THREADPOOL, 1, "uv__work_done: calling uv_close on handle %p w %p\n", handle, w); | |
+ uv_close((uv_handle_t *) handle, uv__work_async_close); | |
+ } | |
+#ifndef _WIN32 | |
+static void reset_once(void) { | |
+ uv_once_t child_once = UV_ONCE_INIT; | |
+ memcpy(&once, &child_once, sizeof(child_once)); | |
+} | |
+#endif | |
+ | |
+ | |
+static void init_once(void) { | |
+#ifndef _WIN32 | |
+ /* Re-initialize the threadpool after fork. | |
+ * Note that this discards the global mutex and condition as well | |
+ * as the work queue. | |
+ */ | |
+ if (pthread_atfork(NULL, NULL, &reset_once)) | |
+ abort(); | |
+#endif | |
+ init_threads(); | |
+} | |
+ | |
+ | |
void uv__work_submit(uv_loop_t* loop, | |
struct uv__work* w, | |
void (*work)(struct uv__work* w), | |
@@@ -223,30 -387,6 +396,33 @@@ static int uv__work_cancel(uv_loop_t* l | |
return 0; | |
} | |
++<<<<<<< HEAD | |
+ | |
+void uv__work_done(uv_async_t* handle) { | |
+ struct uv__work* w; | |
+ uv_loop_t* loop; | |
+ QUEUE* q; | |
+ QUEUE wq; | |
+ int err; | |
+ | |
+ loop = container_of(handle, uv_loop_t, wq_async); | |
+ uv_mutex_lock(&loop->wq_mutex); | |
+ QUEUE_MOVE(&loop->wq, &wq); | |
+ uv_mutex_unlock(&loop->wq_mutex); | |
+ | |
+ while (!QUEUE_EMPTY(&wq)) { | |
+ q = QUEUE_HEAD(&wq); | |
+ QUEUE_REMOVE(q); | |
+ | |
+ w = container_of(q, struct uv__work, wq); | |
+ err = (w->work == uv__cancelled) ? UV_ECANCELED : 0; | |
+ w->done(w, err); | |
+ } | |
+} | |
+ | |
+ | |
++======= | |
++>>>>>>> Node.fz changes | |
static void uv__queue_work(struct uv__work* w) { | |
uv_work_t* req = container_of(w, uv_work_t, work_req); | |
diff --cc deps/uv/src/unix/async.c | |
index 45c088ea1b,031434939b..0000000000 | |
--- a/deps/uv/src/unix/async.c | |
+++ b/deps/uv/src/unix/async.c | |
@@@ -33,17 -34,38 +34,49 @@@ | |
#include <string.h> | |
#include <unistd.h> | |
++<<<<<<< HEAD | |
+static void uv__async_send(uv_loop_t* loop); | |
+static int uv__async_start(uv_loop_t* loop); | |
++======= | |
+ /* Returns a nonblocking read-write'able fd. | |
+ * It's either from eventfd or from pipe. | |
+ */ | |
++>>>>>>> Node.fz changes | |
static int uv__async_eventfd(void); | |
+ /* Handler passed to uv__io_start. | |
+ * w is part of a uv_async_t. | |
+ */ | |
+ static void uv__async_io(uv_loop_t* loop, | |
+ uv__io_t* w, | |
+ unsigned int events); | |
+ | |
+ /* Begin monitoring for uv_async_send's on handle. */ | |
+ static int uv__async_start(uv_loop_t* loop, uv_async_t *handle); | |
+ | |
+ any_func uv_uv__async_io_ptr (void) | |
+ { | |
+ return (any_func) uv__async_io; | |
+ } | |
int uv_async_init(uv_loop_t* loop, uv_async_t* handle, uv_async_cb async_cb) { | |
int err; | |
+ static int here_before = 0; | |
++<<<<<<< HEAD | |
+ err = uv__async_start(loop); | |
+ if (err) | |
+ return err; | |
++======= | |
+ /* Node calls this on uv_default_loop() before initializing TP and before calling uv_run(). */ | |
+ if (!here_before) | |
+ initialize_fuzzy_libuv(); | |
+ else | |
+ here_before = 1; | |
+ | |
+ mylog(LOG_UV_ASYNC, 1, "uv_async_init: initializing handle %p\n", handle); | |
+ memset(handle, 0, sizeof(*handle)); | |
++>>>>>>> Node.fz changes | |
uv__handle_init(loop, (uv_handle_t*)handle, UV_ASYNC); | |
handle->async_cb = async_cb; | |
@@@ -61,28 -89,60 +100,76 @@@ int uv_async_send(uv_async_t* handle) | |
if (ACCESS_ONCE(int, handle->pending) != 0) | |
return 0; | |
+ /* Mark this handle as pending in a thread-safe manner. */ | |
if (cmpxchgi(&handle->pending, 0, 1) == 0) | |
++<<<<<<< HEAD | |
+ uv__async_send(handle->loop); | |
++======= | |
+ { | |
+ /* Write a byte to io_watcher's fd so that epoll will see the event. | |
+ * Since fd might be an eventfd, write an 8-byte integer. | |
+ * This is mostly harmless if it's a pipe instead. */ | |
+ static const uint64_t val = 1; | |
+ const void *buf = &val; | |
+ ssize_t len = sizeof(val); | |
+ int r; | |
++>>>>>>> Node.fz changes | |
+ | |
+ mylog(LOG_UV_ASYNC, 1, "uv_async_send: handle %p is now pending, and writing to fd %i\n", handle, handle->io_watcher.fd); | |
+ do | |
+ r = write(handle->io_watcher.fd, buf, len); | |
+ while (r == -1 && errno == EINTR); | |
+ if (r == len) | |
+ return 0; | |
+ | |
+ if (r == -1) | |
+ if (errno == EAGAIN || errno == EWOULDBLOCK) | |
+ return -1; | |
+ | |
+ abort(); | |
+ } | |
+ | |
++<<<<<<< HEAD | |
++static void uv__async_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) { | |
++======= | |
return 0; | |
} | |
void uv__async_close(uv_async_t* handle) { | |
- QUEUE_REMOVE(&handle->queue); | |
+ assert(handle->type == UV_ASYNC); | |
+ | |
+ uv__io_stop(handle->loop, &handle->io_watcher, UV__POLLIN); | |
+ uv__close(handle->io_watcher.fd); | |
uv__handle_stop(handle); | |
- } | |
+ mylog(LOG_UV_ASYNC, 1, "uv__async_close: closed handle %p (fd %i)\n", handle, handle->io_watcher.fd); | |
+ return; | |
+ } | |
+ /* This is the IO function for uv_async_t's after they've been uv_async_send'd. | |
+ * It empties its fd (eventfd or pipe), then invokes the handle's async_cb. | |
+ */ | |
static void uv__async_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) { | |
+ uv_async_t *h = NULL; | |
++>>>>>>> Node.fz changes | |
char buf[1024]; | |
ssize_t r; | |
+ QUEUE queue; | |
+ QUEUE* q; | |
+ uv_async_t* h; | |
+ | |
+ assert(w == &loop->async_io_watcher); | |
++<<<<<<< HEAD | |
++======= | |
+ h = container_of(w, uv_async_t, io_watcher); | |
+ mylog(LOG_UV_ASYNC, 1, "uv__async_io: IO on handle %p (fd %i)\n", h, h->io_watcher.fd); | |
+ assert(h->type == UV_ASYNC); | |
+ | |
+ /* Empty the fd. */ | |
++>>>>>>> Node.fz changes | |
for (;;) { | |
r = read(w->fd, buf, sizeof(buf)); | |
@@@ -101,26 -161,21 +188,44 @@@ | |
abort(); | |
} | |
++<<<<<<< HEAD | |
+ QUEUE_MOVE(&loop->async_handles, &queue); | |
+ while (!QUEUE_EMPTY(&queue)) { | |
+ q = QUEUE_HEAD(&queue); | |
+ h = QUEUE_DATA(q, uv_async_t, queue); | |
+ | |
+ QUEUE_REMOVE(q); | |
+ QUEUE_INSERT_TAIL(&loop->async_handles, q); | |
+ | |
+ if (cmpxchgi(&h->pending, 1, 0) == 0) | |
+ continue; | |
+ | |
+ if (h->async_cb == NULL) | |
+ continue; | |
+ | |
+ h->async_cb(h); | |
+ } | |
+} | |
+ | |
+ | |
+static void uv__async_send(uv_loop_t* loop) { | |
++======= | |
+ /* Invoke the uv_async_cb. */ | |
+ #if defined(UNIFIED_CALLBACK) | |
+ invoke_callback_wrap ((any_func) h->async_cb, UV_ASYNC_CB, (long) h); | |
+ #else | |
+ h->async_cb(h); | |
+ #endif | |
+ | |
+ return; | |
+ } | |
+ | |
+ #if 0 | |
+ /* TODO Do I want to add 'whodunnit' dependency edges? Would need to include all send'ers, not just the first one. | |
+ The current dependency edges indicate a list of "X must go before me" nodes. | |
+ In contrast, async edges would indicate "one of these Xes must go before me" nodes. */ | |
+ void uv__async_send(struct uv__async* wa) { | |
++>>>>>>> Node.fz changes | |
const void* buf; | |
ssize_t len; | |
int fd; | |
@@@ -132,35 -187,46 +237,75 @@@ | |
#if defined(__linux__) | |
if (fd == -1) { | |
++<<<<<<< HEAD | |
+ static const uint64_t val = 1; | |
+ buf = &val; | |
+ len = sizeof(val); | |
+ fd = loop->async_io_watcher.fd; /* eventfd */ | |
+ } | |
+#endif | |
+ | |
+ do | |
+ r = write(fd, buf, len); | |
+ while (r == -1 && errno == EINTR); | |
+ | |
+ if (r == len) | |
+ return; | |
+ | |
+ if (r == -1) | |
+ if (errno == EAGAIN || errno == EWOULDBLOCK) | |
++======= | |
+ static const uint64_t val = 1; | |
+ buf = &val; | |
+ len = sizeof(val); | |
+ fd = wa->io_watcher.fd; /* eventfd */ | |
+ } | |
+ #endif | |
+ | |
+ do | |
+ r = write(fd, buf, len); | |
+ while (r == -1 && errno == EINTR); | |
+ | |
+ if (r == len) | |
++>>>>>>> Node.fz changes | |
return; | |
+ | |
+ if (r == -1) | |
+ if (errno == EAGAIN || errno == EWOULDBLOCK) | |
+ return; | |
+ | |
+ abort(); | |
+ } | |
- abort(); | |
- } | |
- | |
+ #endif | |
++<<<<<<< HEAD | |
+static int uv__async_start(uv_loop_t* loop) { | |
+ int pipefd[2]; | |
+ int err; | |
+ | |
+ if (loop->async_io_watcher.fd != -1) | |
+ return 0; | |
+ | |
++======= | |
+ void uv__async_init(struct uv__async* wa) { | |
+ wa->io_watcher.fd = -1; | |
+ wa->wfd = -1; | |
+ } | |
+ | |
+ /* wa is the loop->async_watcher. | |
+ * If not already started, uv__io_start it (adding it to the list of fds being monitored by the loop). | |
+ * When any uv_async_send is called (-> uv__async_send), a byte will be written to the | |
+ * handle being send'd as well as to the loop->async_watcher, causing uv__io_poll to call loop->async_watcher's CB (uv__async_io). | |
+ * wa's cb = cb == uv__async_event, which is invoked in uv__async_io and which loops over the async handles looking for those that are pending. | |
+ */ | |
+ static int uv__async_start(uv_loop_t* loop, uv_async_t *handle) { | |
+ int pipefd[2]; | |
+ int err; | |
+ | |
+ /* Obtain a nonblocking read-write fd for handle->io_watcher. */ | |
+ assert(handle->io_watcher.fd == -1); | |
++>>>>>>> Node.fz changes | |
err = uv__async_eventfd(); | |
if (err >= 0) { | |
pipefd[0] = err; | |
@@@ -192,41 -258,33 +337,60 @@@ | |
if (err < 0) | |
return err; | |
++<<<<<<< HEAD | |
+ uv__io_init(&loop->async_io_watcher, uv__async_io, pipefd[0]); | |
+ uv__io_start(loop, &loop->async_io_watcher, POLLIN); | |
+ loop->async_wfd = pipefd[1]; | |
++======= | |
+ /* Start monitoring the fd. */ | |
+ uv__io_init(&handle->io_watcher, uv__async_io, pipefd[0]); | |
+ uv__io_start(loop, &handle->io_watcher, UV__POLLIN); | |
+ | |
+ assert(handle->io_watcher.fd == pipefd[0]); | |
+ mylog(LOG_UV_ASYNC, 1, "uv__async_start: handle %p fd %i\n", handle, handle->io_watcher.fd); | |
++>>>>>>> Node.fz changes | |
return 0; | |
} | |
++<<<<<<< HEAD | |
+ | |
+int uv__async_fork(uv_loop_t* loop) { | |
+ if (loop->async_io_watcher.fd == -1) /* never started */ | |
+ return 0; | |
+ | |
+ uv__async_stop(loop); | |
+ | |
+ return uv__async_start(loop); | |
+} | |
+ | |
+ | |
+void uv__async_stop(uv_loop_t* loop) { | |
+ if (loop->async_io_watcher.fd == -1) | |
++======= | |
+ /* Called with wa as loop->async_watcher. Stops async monitoring. */ | |
+ void uv__async_stop(uv_loop_t* loop, struct uv__async* wa) { | |
+ if (wa->io_watcher.fd == -1) | |
++>>>>>>> Node.fz changes | |
return; | |
- if (wa->wfd != -1) { | |
- if (wa->wfd != wa->io_watcher.fd) | |
- uv__close(wa->wfd); | |
- wa->wfd = -1; | |
+ if (loop->async_wfd != -1) { | |
+ if (loop->async_wfd != loop->async_io_watcher.fd) | |
+ uv__close(loop->async_wfd); | |
+ loop->async_wfd = -1; | |
} | |
- uv__io_stop(loop, &wa->io_watcher, UV__POLLIN); | |
- uv__close(wa->io_watcher.fd); | |
- wa->io_watcher.fd = -1; | |
+ uv__io_stop(loop, &loop->async_io_watcher, POLLIN); | |
+ uv__close(loop->async_io_watcher.fd); | |
+ loop->async_io_watcher.fd = -1; | |
} | |
++<<<<<<< HEAD | |
+ | |
+static int uv__async_eventfd(void) { | |
++======= | |
+ static int uv__async_eventfd() { | |
++>>>>>>> Node.fz changes | |
#if defined(__linux__) | |
static int no_eventfd2; | |
static int no_eventfd; | |
diff --cc deps/uv/src/unix/core.c | |
index d64593a313,e117dfd980..0000000000 | |
--- a/deps/uv/src/unix/core.c | |
+++ b/deps/uv/src/unix/core.c | |
@@@ -87,14 -77,9 +89,20 @@@ | |
#include <sys/ioctl.h> | |
#endif | |
++<<<<<<< HEAD | |
+#if !defined(__MVS__) | |
+#include <sys/param.h> /* MAXHOSTNAMELEN on Linux and the BSDs */ | |
+#endif | |
+ | |
+/* Fallback for the maximum hostname length */ | |
+#ifndef MAXHOSTNAMELEN | |
+# define MAXHOSTNAMELEN 256 | |
+#endif | |
++======= | |
+ #include "uv-common.h" | |
+ #include "../mylog.h" | |
+ #include "scheduler.h" | |
++>>>>>>> Node.fz changes | |
static int uv__run_pending(uv_loop_t* loop); | |
@@@ -114,7 -99,8 +122,12 @@@ uint64_t uv_hrtime(void) | |
void uv_close(uv_handle_t* handle, uv_close_cb close_cb) { | |
++<<<<<<< HEAD | |
+ assert(!uv__is_closing(handle)); | |
++======= | |
+ assert(handle != NULL); | |
+ assert(!(handle->flags & (UV_CLOSING | UV_CLOSED))); | |
++>>>>>>> Node.fz changes | |
handle->flags |= UV_CLOSING; | |
handle->close_cb = close_cb; | |
@@@ -522,10 -609,12 +660,11 @@@ int uv__close_nocheckstdio(int fd) | |
saved_errno = errno; | |
rc = close(fd); | |
+ mylog(LOG_MAIN, 7, "uv__close: %i = close(%i)\n", rc, fd); | |
if (rc == -1) { | |
rc = -errno; | |
- if (rc == -EINTR) | |
- rc = -EINPROGRESS; /* For platform/libc consistency. */ | |
+ if (rc == -EINTR || rc == -EINPROGRESS) | |
+ rc = 0; /* The close is in progress, not an error. */ | |
errno = saved_errno; | |
} | |
@@@ -775,7 -864,13 +915,17 @@@ static int uv__run_pending(uv_loop_t* l | |
QUEUE_REMOVE(q); | |
QUEUE_INIT(q); | |
w = QUEUE_DATA(q, uv__io_t, pending_queue); | |
++<<<<<<< HEAD | |
+ w->cb(loop, w, POLLOUT); | |
++======= | |
+ | |
+ #ifdef UNIFIED_CALLBACK | |
+ w->iocb_events = UV__POLLOUT; | |
+ invoke_callback_wrap((any_func) w->cb, UV__IO_CB, (long) loop, (long) w, (long) UV__POLLOUT); | |
+ #else | |
+ w->cb(loop, w, UV__POLLOUT); | |
+ #endif | |
++>>>>>>> Node.fz changes | |
} | |
return 1; | |
@@@ -859,26 -955,35 +1010,40 @@@ void uv__io_start(uv_loop_t* loop, uv__ | |
* every tick of the event loop but the other backends allow us to | |
* short-circuit here if the event mask is unchanged. | |
*/ | |
++<<<<<<< HEAD | |
+ if (w->events == w->pevents) | |
+ return; | |
++======= | |
+ if (w->events == w->pevents) { | |
+ if (w->events == 0 && !QUEUE_EMPTY(&w->watcher_queue)) { | |
+ QUEUE_REMOVE(&w->watcher_queue); | |
+ QUEUE_INIT(&w->watcher_queue); | |
+ } | |
+ goto DONE; | |
+ } | |
++>>>>>>> Node.fz changes | |
#endif | |
- if (QUEUE_EMPTY(&w->watcher_queue)) | |
+ if (QUEUE_EMPTY(&w->watcher_queue)) /* JD: i.e. if w not already in loop->watcher_queue */ | |
QUEUE_INSERT_TAIL(&loop->watcher_queue, &w->watcher_queue); | |
if (loop->watchers[w->fd] == NULL) { | |
loop->watchers[w->fd] = w; | |
loop->nfds++; | |
} | |
- } | |
+ DONE: | |
+ ENTRY_EXIT_LOG((LOG_UV_IO, 9, "uv__io_start: returning\n")); | |
+ } | |
+ /* Stop listening on w for events. */ | |
void uv__io_stop(uv_loop_t* loop, uv__io_t* w, unsigned int events) { | |
- assert(0 == (events & ~(UV__POLLIN | UV__POLLOUT))); | |
+ assert(0 == (events & ~(POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI))); | |
assert(0 != events); | |
+ ENTRY_EXIT_LOG((LOG_UV_IO, 9, "uv__io_stop: begin: loop %p w %p events %i\n", loop, w, events)); | |
if (w->fd == -1) | |
- return; | |
+ goto DONE; | |
assert(w->fd >= 0); | |
diff --cc deps/uv/src/unix/fs.c | |
index e0969a4c2f,a789664f2a..0000000000 | |
--- a/deps/uv/src/unix/fs.c | |
+++ b/deps/uv/src/unix/fs.c | |
@@@ -1286,13 -1114,11 +1340,14 @@@ int uv_fs_mkdtemp(uv_loop_t* loop | |
uv_fs_t* req, | |
const char* tpl, | |
uv_fs_cb cb) { | |
+ uv_work_t *work_req; | |
INIT(MKDTEMP); | |
req->path = uv__strdup(tpl); | |
- if (req->path == NULL) | |
+ if (req->path == NULL) { | |
+ if (cb != NULL) | |
+ uv__req_unregister(loop, req); | |
return -ENOMEM; | |
+ } | |
POST; | |
} | |
@@@ -1317,8 -1144,7 +1373,12 @@@ int uv_fs_read(uv_loop_t* loop, uv_fs_t | |
unsigned int nbufs, | |
int64_t off, | |
uv_fs_cb cb) { | |
++<<<<<<< HEAD | |
+ INIT(READ); | |
+ | |
++======= | |
+ uv_work_t *work_req; | |
++>>>>>>> Node.fz changes | |
if (bufs == NULL || nbufs == 0) | |
return -EINVAL; | |
@@@ -1456,8 -1279,7 +1525,12 @@@ int uv_fs_write(uv_loop_t* loop | |
unsigned int nbufs, | |
int64_t off, | |
uv_fs_cb cb) { | |
++<<<<<<< HEAD | |
+ INIT(WRITE); | |
+ | |
++======= | |
+ uv_work_t *work_req; | |
++>>>>>>> Node.fz changes | |
if (bufs == NULL || nbufs == 0) | |
return -EINVAL; | |
diff --cc deps/uv/src/unix/internal.h | |
index 3df5c4c3eb,5ff150a9b3..0000000000 | |
--- a/deps/uv/src/unix/internal.h | |
+++ b/deps/uv/src/unix/internal.h | |
@@@ -203,14 -181,9 +204,17 @@@ void uv__io_stop(uv_loop_t* loop, uv__i | |
void uv__io_close(uv_loop_t* loop, uv__io_t* w); | |
void uv__io_feed(uv_loop_t* loop, uv__io_t* w); | |
int uv__io_active(const uv__io_t* w, unsigned int events); | |
+int uv__io_check_fd(uv_loop_t* loop, int fd); | |
void uv__io_poll(uv_loop_t* loop, int timeout); /* in milliseconds or -1 */ | |
+int uv__io_fork(uv_loop_t* loop); | |
/* async */ | |
++<<<<<<< HEAD | |
+void uv__async_stop(uv_loop_t* loop); | |
+int uv__async_fork(uv_loop_t* loop); | |
+ | |
++======= | |
++>>>>>>> Node.fz changes | |
/* loop */ | |
void uv__run_idle(uv_loop_t* loop); | |
@@@ -316,6 -285,17 +320,20 @@@ static const int kFSEventStreamEventFla | |
#endif /* defined(__APPLE__) */ | |
++<<<<<<< HEAD | |
++======= | |
+ UV_UNUSED(static void uv__req_init(uv_loop_t* loop, | |
+ uv_req_t* req, | |
+ uv_req_type type)) { | |
+ req->type = type; | |
+ | |
+ req->magic = UV_REQ_MAGIC; | |
+ uv__req_register(loop, req); | |
+ } | |
+ #define uv__req_init(loop, req, type) \ | |
+ uv__req_init((loop), (uv_req_t*)(req), (type)) | |
+ | |
++>>>>>>> Node.fz changes | |
UV_UNUSED(static void uv__update_time(uv_loop_t* loop)) { | |
/* Use a fast time source if available. We only need millisecond precision. | |
*/ | |
diff --cc deps/uv/src/unix/kqueue.c | |
index 5e89bdced4,97216279cc..0000000000 | |
--- a/deps/uv/src/unix/kqueue.c | |
+++ b/deps/uv/src/unix/kqueue.c | |
@@@ -274,9 -201,13 +274,18 @@@ void uv__io_poll(uv_loop_t* loop, int t | |
} | |
if (ev->filter == EVFILT_VNODE) { | |
++<<<<<<< HEAD | |
+ assert(w->events == POLLIN); | |
+ assert(w->pevents == POLLIN); | |
++======= | |
+ assert(w->events == UV__POLLIN); | |
+ assert(w->pevents == UV__POLLIN); | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap(w->cb, UV_FS_EVENT_CB, loop, w, ev->fflags); | |
+ #else | |
++>>>>>>> Node.fz changes | |
w->cb(loop, w, ev->fflags); /* XXX always uv__fs_event() */ | |
+ #endif | |
nevents++; | |
continue; | |
} | |
@@@ -334,20 -248,14 +343,29 @@@ | |
if (revents == 0) | |
continue; | |
++<<<<<<< HEAD | |
+ /* Run signal watchers last. This also affects child process watchers | |
+ * because those are implemented in terms of signal watchers. | |
+ */ | |
+ if (w == &loop->signal_io_watcher) | |
+ have_signals = 1; | |
+ else | |
+ w->cb(loop, w, revents); | |
+ | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ w->iocb_events = revents; | |
+ invoke_callback_wrap(w->cb, UV__IO_CB, loop, w, revents); | |
+ #else | |
+ w->cb(loop, w, revents); | |
+ #endif | |
++>>>>>>> Node.fz changes | |
nevents++; | |
} | |
+ | |
+ if (have_signals != 0) | |
+ loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); | |
+ | |
loop->watchers[loop->nwatchers] = NULL; | |
loop->watchers[loop->nwatchers + 1] = NULL; | |
diff --cc deps/uv/src/unix/linux-core.c | |
index 4d480ce10a,d46e93097c..0000000000 | |
--- a/deps/uv/src/unix/linux-core.c | |
+++ b/deps/uv/src/unix/linux-core.c | |
@@@ -18,13 -18,11 +18,16 @@@ | |
* IN THE SOFTWARE. | |
*/ | |
+/* We lean on the fact that POLL{IN,OUT,ERR,HUP} correspond with their | |
+ * EPOLL* counterparts. We use the POLL* variants in this file because that | |
+ * is what libuv uses elsewhere and it avoids a dependency on <sys/epoll.h>. | |
+ */ | |
+ | |
#include "uv.h" | |
#include "internal.h" | |
+ #include "scheduler.h" | |
+ | |
+ #include "statistics.h" | |
#include <stdint.h> | |
#include <stdio.h> | |
@@@ -206,8 -165,7 +210,12 @@@ void uv__io_poll(uv_loop_t* loop, int t | |
sigset_t sigset; | |
uint64_t sigmask; | |
uint64_t base; | |
++<<<<<<< HEAD | |
+ int have_signals; | |
+ int nevents; | |
++======= | |
+ int nevents; /* The number of fds whose CBs we invoked, per epoll iter. */ | |
++>>>>>>> Node.fz changes | |
int count; | |
int nfds; | |
int fd; | |
@@@ -307,13 -302,15 +352,25 @@@ | |
if (nfds == 0) { | |
assert(timeout != -1); | |
++<<<<<<< HEAD | |
+ if (timeout == 0) | |
+ return; | |
+ | |
+ /* We may have been inside the system call for longer than |timeout| | |
+ * milliseconds so we need to update the timestamp to avoid drift. | |
+ */ | |
+ goto update_timeout; | |
++======= | |
+ timeout = real_timeout - timeout; | |
+ if (timeout > 0) | |
+ { | |
+ mylog(LOG_MAIN, 5, "uv__io_poll: False timeout, epoll'ing again\n"); | |
+ continue; | |
+ } | |
+ | |
+ mylog(LOG_MAIN, 5, "uv__io_poll: Real timeout\n"); | |
+ goto DONE; | |
++>>>>>>> Node.fz changes | |
} | |
if (nfds == -1) { | |
@@@ -387,24 -407,23 +468,40 @@@ | |
* needs to remember the error/hangup event. We should get that for | |
* free when we switch over to edge-triggered I/O. | |
*/ | |
- if (pe->events == UV__EPOLLERR || pe->events == UV__EPOLLHUP) | |
- pe->events |= w->pevents & (UV__EPOLLIN | UV__EPOLLOUT); | |
+ if (pe->events == POLLERR || pe->events == POLLHUP) | |
+ pe->events |= w->pevents & (POLLIN | POLLOUT | UV__POLLPRI); | |
if (pe->events != 0) { | |
++<<<<<<< HEAD | |
+ /* Run signal watchers last. This also affects child process watchers | |
+ * because those are implemented in terms of signal watchers. | |
+ */ | |
+ if (w == &loop->signal_io_watcher) | |
+ have_signals = 1; | |
+ else | |
+ w->cb(loop, w, pe->events); | |
+ | |
+ nevents++; | |
+ } | |
+ } | |
+ | |
+ if (have_signals != 0) | |
+ loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); | |
++======= | |
+ #ifdef UNIFIED_CALLBACK | |
+ w->iocb_events = pe->events; | |
+ mylog(LOG_MAIN, 7, "uv__io_poll: Next work item: fd %i w %p\n", fd, w); | 
+ invoke_callback_wrap((any_func) w->cb, UV__IO_CB, (long) loop, (long) w, (long) w->iocb_events); | |
+ mylog(LOG_MAIN, 7, "uv__io_poll: Done with work item fd %i w %p\n", fd, w); | |
+ #else | |
+ w->cb(loop, w, pe->events); | |
+ #endif | |
+ statistics_record(STATISTIC_EPOLL_EVENTS_EXECUTED, 1); | |
+ nevents++; | |
+ } | |
+ } | |
+ mylog(LOG_MAIN, 7, "uv__io_poll: %i fds, ran (nevents) %i\n", nfds, nevents); | |
++>>>>>>> Node.fz changes | |
loop->watchers[loop->nwatchers] = NULL; | |
loop->watchers[loop->nwatchers + 1] = NULL; | |
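
Note on the linux-core.c hunks: the upstream side of the conflict introduces the have_signals flag so the signal watcher's callback (and therefore child-process watchers) runs only after every other fd in the same epoll batch has been handled. A rough sketch of that defer-one-special-watcher pattern, with stand-in types:

    #include <stdio.h>

    struct watcher {
      void (*cb)(struct watcher *w, int events);
      const char *name;
    };

    static void generic_cb(struct watcher *w, int events)
    {
      printf("%s: events=%d\n", w->name, events);
    }

    static void poll_once(struct watcher **fired, int nfired,
                          struct watcher *signal_watcher)
    {
      int have_signals = 0;
      int i;

      for (i = 0; i < nfired; i++) {
        if (fired[i] == signal_watcher)
          have_signals = 1;           /* remember it, run it last */
        else
          fired[i]->cb(fired[i], 1);  /* ordinary watcher: run immediately */
      }

      if (have_signals)
        signal_watcher->cb(signal_watcher, 1);  /* POLLIN in the real code */
    }

    int main(void)
    {
      struct watcher a = { generic_cb, "tcp" };
      struct watcher sig = { generic_cb, "signal" };
      struct watcher *fired[] = { &sig, &a };
      poll_once(fired, 2, &sig);
      return 0;
    }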
diff --cc deps/uv/src/unix/linux-inotify.c | |
index 5934c5d8cb,6a4461f3b3..0000000000 | |
--- a/deps/uv/src/unix/linux-inotify.c | |
+++ b/deps/uv/src/unix/linux-inotify.c | |
@@@ -237,31 -160,14 +238,38 @@@ static void uv__inotify_read(uv_loop_t | |
*/ | |
path = e->len ? (const char*) (e + 1) : uv__basename_r(w->path); | |
- QUEUE_FOREACH(q, &w->watchers) { | |
+ /* We're about to iterate over the queue and call user's callbacks. | |
+ * What can go wrong? | |
+ * A callback could call uv_fs_event_stop() | |
+ * and the queue can change under our feet. | |
+ * So, we use the QUEUE_MOVE() trick to safely iterate over the queue. | 
+ * And we don't free the watcher_list until we're done iterating. | |
+ * | |
+ * First, | |
+ * tell uv_fs_event_stop() (that could be called from a user's callback) | |
+ * not to free watcher_list. | |
+ */ | |
+ w->iterating = 1; | |
+ QUEUE_MOVE(&w->watchers, &queue); | |
+ while (!QUEUE_EMPTY(&queue)) { | |
+ q = QUEUE_HEAD(&queue); | |
h = QUEUE_DATA(q, uv_fs_event_t, watchers); | |
++<<<<<<< HEAD | |
+ | |
+ QUEUE_REMOVE(q); | |
+ QUEUE_INSERT_TAIL(&w->watchers, q); | |
+ | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap ((any_func) h->cb, UV_FS_EVENT_CB, (long) h, (long) path, (long) events, (long) 0); | |
+ #else | |
++>>>>>>> Node.fz changes | |
h->cb(h, path, events, 0); | |
+ #endif | |
} | |
+ /* done iterating, time to (maybe) free empty watcher_list */ | |
+ w->iterating = 0; | |
+ maybe_free_watcher_list(w, loop); | |
} | |
} | |
} | |
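
Note on the linux-inotify.c hunk: the comment above describes the QUEUE_MOVE() trick, and the conflict itself is only about whether each callback goes through invoke_callback_wrap(). A compact sketch of the trick, using a plain singly linked list instead of libuv's QUEUE macros (head insertion is used here for brevity; the real code re-inserts at the tail):

    #include <stddef.h>
    #include <stdio.h>

    struct node {
      struct node *next;
      const char *name;
    };

    /* Detach the whole chain from *src into *dst, leaving *src empty. */
    static void list_move(struct node **src, struct node **dst)
    {
      *dst = *src;
      *src = NULL;
    }

    static void on_event(struct node *n) { printf("event for %s\n", n->name); }

    static void run_all(struct node **watchers)
    {
      struct node *pending;
      list_move(watchers, &pending);

      while (pending != NULL) {
        struct node *n = pending;
        pending = n->next;

        /* Re-insert into the live list before the callback runs, so a callback
         * that removes this watcher cannot invalidate our iteration. */
        n->next = *watchers;
        *watchers = n;

        on_event(n);   /* may do the equivalent of uv_fs_event_stop() */
      }
    }

    int main(void)
    {
      struct node b = { NULL, "b" };
      struct node a = { &b, "a" };
      struct node *watchers = &a;
      run_all(&watchers);
      return 0;
    }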
diff --cc deps/uv/src/unix/loop-watcher.c | |
index 340bb0dfa1,b0ee14b63c..0000000000 | |
--- a/deps/uv/src/unix/loop-watcher.c | |
+++ b/deps/uv/src/unix/loop-watcher.c | |
@@@ -45,24 -52,48 +52,67 @@@ | |
return 0; \ | |
} \ | |
\ | |
++<<<<<<< HEAD | |
+ void uv__run_##name(uv_loop_t* loop) { \ | |
+ uv_##name##_t* h; \ | |
+ QUEUE queue; \ | |
+ QUEUE* q; \ | |
+ QUEUE_MOVE(&loop->name##_handles, &queue); \ | |
+ while (!QUEUE_EMPTY(&queue)) { \ | |
+ q = QUEUE_HEAD(&queue); \ | |
+ h = QUEUE_DATA(q, uv_##name##_t, queue); \ | |
+ QUEUE_REMOVE(q); \ | |
+ QUEUE_INSERT_TAIL(&loop->name##_handles, q); \ | |
+ h->name##_cb(h); \ | |
+ } \ | |
+ } \ | |
+ \ | |
+ void uv__##name##_close(uv_##name##_t* handle) { \ | |
+ uv_##name##_stop(handle); \ | |
++======= | |
+ void uv__##_name##_close(uv_##_name##_t* handle) { \ | |
+ uv_##_name##_stop(handle); \ | |
++>>>>>>> Node.fz changes | |
} | |
+ /* Define separate versions of uv__run_X to handle CB invocation. */ | |
+ #ifdef UNIFIED_CALLBACK | |
+ #define UV_LOOP_WATCHER_DEFINE_2(_name, _type) \ | |
+ void uv__run_##_name(uv_loop_t* loop) { \ | |
+ uv_##_name##_t* h; \ | |
+ QUEUE* q; \ | |
+ \ | |
+ ENTRY_EXIT_LOG((LOG_MAIN, 9, "uv__run_" #_name ": begin: loop %p\n", loop)); \ | |
+ \ | |
+ QUEUE_FOREACH(q, &loop->_name##_handles) { \ | |
+ h = QUEUE_DATA(q, uv_##_name##_t, queue); \ | |
+ invoke_callback_wrap((any_func) h->_name##_cb, UV_##_type##_CB, (long) h); \ | |
+ } \ | |
+ \ | |
+ ENTRY_EXIT_LOG((LOG_MAIN, 9, "uv__run_" #_name ": returning\n")); \ | |
+ } | |
+ #else | |
+ #define UV_LOOP_WATCHER_DEFINE_2(_name, _type) \ | |
+ void uv__run_##_name(uv_loop_t* loop) { \ | |
+ uv_##_name##_t* h; \ | |
+ QUEUE* q; \ | |
+ \ | |
+ ENTRY_EXIT_LOG((LOG_MAIN, 9, "uv__run_" #_name ": begin: loop %p\n", loop)); \ | |
+ \ | |
+ QUEUE_FOREACH(q, &loop->_name##_handles) { \ | |
+ h = QUEUE_DATA(q, uv_##_name##_t, queue); \ | |
+ h->_name##_cb(h); \ | |
+ } \ | |
+ \ | |
+ ENTRY_EXIT_LOG((LOG_MAIN, 9, "uv__run_" #_name ": returning\n")); \ | |
+ } | |
+ #endif | |
+ | |
UV_LOOP_WATCHER_DEFINE(prepare, PREPARE) | |
+ UV_LOOP_WATCHER_DEFINE_2(prepare, PREPARE) | |
+ | |
UV_LOOP_WATCHER_DEFINE(check, CHECK) | |
+ UV_LOOP_WATCHER_DEFINE_2(check, CHECK) | |
+ | |
UV_LOOP_WATCHER_DEFINE(idle, IDLE) | |
+ UV_LOOP_WATCHER_DEFINE_2(idle, IDLE) | |
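
Note on the loop-watcher.c hunk: both sides generate uv__run_prepare/check/idle from a macro; Node.fz adds a second macro so the UNIFIED_CALLBACK build can wrap each invocation. The token-pasting shape, reduced to a stand-alone sketch with invented names:

    #include <stdio.h>

    struct fake_handle { void (*cb)(struct fake_handle *h); };

    #define DEFINE_RUNNER(name)                                      \
      static void run_##name(struct fake_handle **hs, int n) {       \
        int i;                                                       \
        for (i = 0; i < n; i++)                                      \
          hs[i]->cb(hs[i]); /* direct call; Node.fz wraps this */    \
      }

    DEFINE_RUNNER(prepare)
    DEFINE_RUNNER(check)
    DEFINE_RUNNER(idle)

    static void say(struct fake_handle *h) { (void) h; printf("cb ran\n"); }

    int main(void)
    {
      struct fake_handle h = { say };
      struct fake_handle *hs[] = { &h };
      run_prepare(hs, 1);
      run_check(hs, 1);
      run_idle(hs, 1);
      return 0;
    }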
diff --cc deps/uv/src/unix/loop.c | |
index 5b5b0e095b,37a461e5fb..0000000000 | |
--- a/deps/uv/src/unix/loop.c | |
+++ b/deps/uv/src/unix/loop.c | |
@@@ -53,8 -51,6 +54,11 @@@ int uv_loop_init(uv_loop_t* loop) | |
loop->closing_handles = NULL; | |
uv__update_time(loop); | |
++<<<<<<< HEAD | |
+ loop->async_io_watcher.fd = -1; | |
+ loop->async_wfd = -1; | |
++======= | |
++>>>>>>> Node.fz changes | |
loop->signal_pipefd[0] = -1; | |
loop->signal_pipefd[1] = -1; | |
loop->backend_fd = -1; | |
@@@ -145,7 -100,6 +142,10 @@@ int uv_loop_fork(uv_loop_t* loop) | |
void uv__loop_close(uv_loop_t* loop) { | |
uv__signal_loop_cleanup(loop); | |
uv__platform_loop_delete(loop); | |
++<<<<<<< HEAD | |
+ uv__async_stop(loop); | |
++======= | |
++>>>>>>> Node.fz changes | |
if (loop->emfile_fd != -1) { | |
uv__close(loop->emfile_fd); | |
diff --cc deps/uv/src/unix/signal.c | |
index cb09ead50a,f955766b86..0000000000 | |
--- a/deps/uv/src/unix/signal.c | |
+++ b/deps/uv/src/unix/signal.c | |
@@@ -58,19 -54,13 +59,27 @@@ RB_GENERATE_STATIC(uv__signal_tree_s | |
uv_signal_s, tree_entry, | |
uv__signal_compare) | |
++<<<<<<< HEAD | |
+static void uv__signal_global_reinit(void); | |
++======= | |
+ any_func uv_uv__signal_event_ptr (void) | |
+ { | |
+ return (any_func) uv__signal_event; | |
+ } | |
+ | |
++>>>>>>> Node.fz changes | |
static void uv__signal_global_init(void) { | |
+ if (!uv__signal_lock_pipefd[0]) | |
+ /* pthread_atfork can register before and after handlers, one | |
+ * for each child. This only registers one for the child. That | |
+ * state is both persistent and cumulative, so if we keep doing | |
+ * it the handler functions will be called multiple times. Thus | |
+ * we only want to do it once. | |
+ */ | |
+ if (pthread_atfork(NULL, NULL, &uv__signal_global_reinit)) | |
+ abort(); | |
+ | |
if (uv__make_pipe(uv__signal_lock_pipefd, 0)) | |
abort(); | |
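
Note on the signal.c hunk: the comment explains that pthread_atfork() registrations are cumulative, so the child-reinit handler must be registered exactly once. The real code keys its guard off the lock pipe; a sketch of the same idea using pthread_once for the guard:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_once_t atfork_once = PTHREAD_ONCE_INIT;

    static void reinit_after_fork(void)
    {
      /* Re-create locks/pipes in the child here. */
      fprintf(stderr, "child: reinitializing signal state\n");
    }

    static void register_atfork(void)
    {
      /* pthread_atfork registrations accumulate, so do this only once. */
      if (pthread_atfork(NULL, NULL, reinit_after_fork))
        abort();
    }

    static void signal_global_init(void)
    {
      pthread_once(&atfork_once, register_atfork);
      /* ...create the lock pipe, etc... */
    }

    int main(void)
    {
      signal_global_init();
      signal_global_init();   /* second call does not register a second handler */
      return 0;
    }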
diff --cc deps/uv/src/unix/stream.c | |
index 672a7e2d6c,d3a7bb1d69..0000000000 | |
--- a/deps/uv/src/unix/stream.c | |
+++ b/deps/uv/src/unix/stream.c | |
@@@ -440,7 -469,9 +472,13 @@@ void uv__stream_flush_write_queue(uv_st | |
void uv__stream_destroy(uv_stream_t* stream) { | |
++<<<<<<< HEAD | |
+ assert(!uv__io_active(&stream->io_watcher, POLLIN | POLLOUT)); | |
++======= | |
+ ENTRY_EXIT_LOG((LOG_UV_STREAM, 9, "uv__stream_destroy: begin: stream %p\n", stream)); | |
+ | |
+ assert(!uv__io_active(&stream->io_watcher, UV__POLLIN | UV__POLLOUT)); | |
++>>>>>>> Node.fz changes | |
assert(stream->flags & UV_CLOSED); | |
if (stream->connect_req) { | |
@@@ -514,7 -556,10 +563,14 @@@ void uv__server_io(uv_loop_t* loop, uv_ | |
int err; | |
stream = container_of(w, uv_stream_t, io_watcher); | |
++<<<<<<< HEAD | |
+ assert(events & POLLIN); | |
++======= | |
+ | |
+ ENTRY_EXIT_LOG((LOG_UV_STREAM, 9, "uv__server_io: begin: loop %p w %p events %i stream %p fd %i\n", loop, w, events, stream, stream->io_watcher.fd)); | |
+ | |
+ assert(events == UV__POLLIN); | |
++>>>>>>> Node.fz changes | |
assert(stream->accepted_fd == -1); | |
assert(!(stream->flags & UV_CLOSING)); | |
@@@ -555,8 -610,8 +621,13 @@@ | |
if (stream->accepted_fd != -1) { | |
/* The user hasn't yet called uv_accept() */ | 
++<<<<<<< HEAD | |
+ uv__io_stop(loop, &stream->io_watcher, POLLIN); | |
+ return; | |
++======= | |
+ uv__io_stop(loop, &stream->io_watcher, UV__POLLIN); | |
+ goto DONE; | |
++>>>>>>> Node.fz changes | |
} | |
if (stream->type == UV_TCP && (stream->flags & UV_TCP_SINGLE_ACCEPT)) { | |
@@@ -572,8 -634,9 +650,12 @@@ any_func uv_uv__server_io_ptr (void | |
int uv_accept(uv_stream_t* server, uv_stream_t* client) { | |
- int err; | |
+ int err = -1; | |
++<<<<<<< HEAD | |
++======= | |
+ ENTRY_EXIT_LOG((LOG_UV_STREAM, 9, "uv_accept: begin: server %p client %p\n", server, client)); | |
++>>>>>>> Node.fz changes | |
assert(server->loop == client->loop); | |
if (server->accepted_fd == -1) | |
@@@ -601,11 -667,10 +686,12 @@@ | |
break; | |
default: | |
- return -EINVAL; | |
+ err = -EINVAL; | |
+ goto RETURN; | |
} | |
+ client->flags |= UV_HANDLE_BOUND; | |
+ | |
done: | |
/* Process queued fds */ | |
if (server->queued_fds != NULL) { | |
@@@ -630,8 -695,11 +716,11 @@@ | |
} else { | |
server->accepted_fd = -1; | |
if (err == 0) | |
- uv__io_start(server->loop, &server->io_watcher, UV__POLLIN); | |
+ uv__io_start(server->loop, &server->io_watcher, POLLIN); | |
} | |
+ | |
+ RETURN: | |
+ ENTRY_EXIT_LOG((LOG_UV_STREAM, 9, "uv_accept: returning err %i\n", err)); | |
return err; | |
} | |
@@@ -660,11 -731,13 +752,13 @@@ int uv_listen(uv_stream_t* stream, int | |
static void uv__drain(uv_stream_t* stream) { | |
- uv_shutdown_t* req; | |
+ uv_shutdown_t* req = NULL; | |
int err; | |
+ ENTRY_EXIT_LOG((LOG_UV_STREAM, 9, "uv__drain: begin: stream %p\n", stream)); | |
+ | |
assert(QUEUE_EMPTY(&stream->write_queue)); | |
- uv__io_stop(stream->loop, &stream->io_watcher, UV__POLLOUT); | |
+ uv__io_stop(stream->loop, &stream->io_watcher, POLLOUT); | |
uv__stream_osx_interrupt_select(stream); | |
/* Shutdown? */ | |
@@@ -750,8 -832,9 +853,10 @@@ static void uv__write(uv_stream_t* stre | |
int iovmax; | |
int iovcnt; | |
ssize_t n; | |
+ int err; | |
+ ENTRY_EXIT_LOG((LOG_UV_STREAM, 9, "uv__write: begin: stream %p\n", stream)); | |
+ | |
start: | |
assert(uv__stream_fd(stream) >= 0); | |
@@@ -859,9 -939,15 +973,21 @@@ | |
} | |
if (n < 0) { | |
++<<<<<<< HEAD | |
+ if (errno != EAGAIN && errno != EWOULDBLOCK && errno != ENOBUFS) { | |
+ err = -errno; | |
+ goto error; | |
++======= | |
+ if (errno != EAGAIN && errno != EWOULDBLOCK) { | |
+ /* Error */ | |
+ req->error = -errno; | |
+ uv__write_req_finish(req); | |
+ uv__io_stop(stream->loop, &stream->io_watcher, UV__POLLOUT); | |
+ if (!uv__io_active(&stream->io_watcher, UV__POLLIN)) | |
+ uv__handle_stop(stream); | |
+ uv__stream_osx_interrupt_select(stream); | |
+ goto DONE; | |
++>>>>>>> Node.fz changes | |
} else if (stream->flags & UV_STREAM_BLOCKING) { | |
/* If this is a blocking stream, try again. */ | |
goto start; | |
@@@ -926,15 -1013,8 +1053,20 @@@ | |
/* Notify select() thread about state change */ | |
uv__stream_osx_interrupt_select(stream); | |
++<<<<<<< HEAD | |
+ return; | |
+ | |
+error: | |
+ req->error = err; | |
+ uv__write_req_finish(req); | |
+ uv__io_stop(stream->loop, &stream->io_watcher, POLLOUT); | |
+ if (!uv__io_active(&stream->io_watcher, POLLIN)) | |
+ uv__handle_stop(stream); | |
+ uv__stream_osx_interrupt_select(stream); | |
++======= | |
+ DONE: | |
+ ENTRY_EXIT_LOG((LOG_UV_STREAM, 9, "uv__write: returning\n")); | |
++>>>>>>> Node.fz changes | |
} | |
@@@ -1010,11 -1090,15 +1151,15 @@@ uv_handle_type uv__handle_type(int fd) | |
static void uv__stream_eof(uv_stream_t* stream, const uv_buf_t* buf) { | |
stream->flags |= UV_STREAM_READ_EOF; | |
- uv__io_stop(stream->loop, &stream->io_watcher, UV__POLLIN); | |
- if (!uv__io_active(&stream->io_watcher, UV__POLLOUT)) | |
+ uv__io_stop(stream->loop, &stream->io_watcher, POLLIN); | |
+ if (!uv__io_active(&stream->io_watcher, POLLOUT)) | |
uv__handle_stop(stream); | |
uv__stream_osx_interrupt_select(stream); | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) stream->read_cb, UV_READ_CB, (long) stream, (long) UV_EOF, (long) buf); | |
+ #else | |
stream->read_cb(stream, UV_EOF, buf); | |
+ #endif | |
stream->flags &= ~UV_STREAM_READING; | |
} | |
@@@ -1112,17 -1196,12 +1257,17 @@@ static int uv__stream_recv_cmsg(uv_stre | |
} | |
+#ifdef __clang__ | |
+# pragma clang diagnostic push | |
+# pragma clang diagnostic ignored "-Wgnu-folding-constant" | |
+#endif | |
+ | |
static void uv__read(uv_stream_t* stream) { | |
uv_buf_t buf; | |
- ssize_t nread; | |
+ ssize_t nread, tot_nread; | |
struct msghdr msg; | |
char cmsg_space[CMSG_SPACE(UV__CMSG_FD_SIZE)]; | |
- int count; | |
+ int count, succ_reads; | |
int err; | |
int is_ipc; | |
@@@ -1143,12 -1225,20 +1291,26 @@@ | |
&& (count-- > 0)) { | |
assert(stream->alloc_cb != NULL); | |
++<<<<<<< HEAD | |
+ buf = uv_buf_init(NULL, 0); | |
+ stream->alloc_cb((uv_handle_t*)stream, 64 * 1024, &buf); | |
+ if (buf.base == NULL || buf.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) stream->alloc_cb, UV_ALLOC_CB, (long) (uv_handle_t*) stream, (long) 64 * 1024, (long) &buf); | |
+ #else | |
+ stream->alloc_cb((uv_handle_t*)stream, 64 * 1024, &buf); | |
+ #endif | |
+ mylog(LOG_UV_STREAM, 7, "uv__read: buf %p buf.base %p buf.len %li\n", &buf, buf.base, buf.len); | |
+ if (buf.len == 0) { | |
++>>>>>>> Node.fz changes | |
/* User indicates it can't or won't handle the read. */ | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) stream->read_cb, UV_READ_CB, (long) stream, (long) UV_ENOBUFS, (long) &buf); | |
+ #else | |
stream->read_cb(stream, UV_ENOBUFS, &buf); | |
- return; | |
+ #endif | |
+ goto DONE; | |
} | |
assert(buf.base != NULL); | |
@@@ -1181,22 -1282,25 +1354,32 @@@ | |
if (errno == EAGAIN || errno == EWOULDBLOCK) { | |
/* Wait for the next one. */ | |
if (stream->flags & UV_STREAM_READING) { | |
- uv__io_start(stream->loop, &stream->io_watcher, UV__POLLIN); | |
+ uv__io_start(stream->loop, &stream->io_watcher, POLLIN); | |
uv__stream_osx_interrupt_select(stream); | |
} | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) stream->read_cb, UV_READ_CB, (long) stream, (long) 0, (long) &buf); | |
+ #else | |
stream->read_cb(stream, 0, &buf); | |
++<<<<<<< HEAD | |
+#if defined(__CYGWIN__) || defined(__MSYS__) | |
+ } else if (errno == ECONNRESET && stream->type == UV_NAMED_PIPE) { | |
+ uv__stream_eof(stream, &buf); | |
+ return; | |
++======= | |
++>>>>>>> Node.fz changes | |
#endif | |
} else { | |
/* Error. User should call uv_close(). */ | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) stream->read_cb, UV_READ_CB, (long) stream, (long) -errno, (long) &buf); | |
+ #else | |
stream->read_cb(stream, -errno, &buf); | |
+ #endif | |
if (stream->flags & UV_STREAM_READING) { | |
stream->flags &= ~UV_STREAM_READING; | |
- uv__io_stop(stream->loop, &stream->io_watcher, UV__POLLIN); | |
- if (!uv__io_active(&stream->io_watcher, UV__POLLOUT)) | |
+ uv__io_stop(stream->loop, &stream->io_watcher, POLLIN); | |
+ if (!uv__io_active(&stream->io_watcher, POLLOUT)) | |
uv__handle_stop(stream); | |
uv__stream_osx_interrupt_select(stream); | |
} | |
@@@ -1212,35 -1316,19 +1395,46 @@@ | |
if (is_ipc) { | |
err = uv__stream_recv_cmsg(stream, &msg); | |
if (err != 0) { | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) stream->read_cb, UV_READ_CB, (long) stream, (long) err, (long) &buf); | |
+ #else | |
stream->read_cb(stream, err, &buf); | |
- return; | |
+ #endif | |
+ goto DONE; | |
} | |
} | |
++<<<<<<< HEAD | |
+ | |
+#if defined(__MVS__) | |
+ if (is_ipc && msg.msg_controllen > 0) { | |
+ uv_buf_t blankbuf; | |
+ int nread; | |
+ struct iovec *old; | |
+ | |
+ blankbuf.base = 0; | |
+ blankbuf.len = 0; | |
+ old = msg.msg_iov; | |
+ msg.msg_iov = (struct iovec*) &blankbuf; | |
+ nread = 0; | |
+ do { | |
+ nread = uv__recvmsg(uv__stream_fd(stream), &msg, 0); | |
+ err = uv__stream_recv_cmsg(stream, &msg); | |
+ if (err != 0) { | |
+ stream->read_cb(stream, err, &buf); | |
+ msg.msg_iov = old; | |
+ return; | |
+ } | |
+ } while (nread == 0 && msg.msg_controllen > 0); | |
+ msg.msg_iov = old; | |
+ } | |
+#endif | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) stream->read_cb, UV_READ_CB, (long) stream, (long) nread, (long) &buf); | |
+ #else | |
++>>>>>>> Node.fz changes | |
stream->read_cb(stream, nread, &buf); | |
+ #endif | |
/* Return if we didn't fill the buffer, there is no more data to read. */ | |
if (nread < buflen) { | |
@@@ -1305,7 -1399,8 +1508,12 @@@ static void uv__stream_io(uv_loop_t* lo | |
assert(uv__stream_fd(stream) >= 0); | |
/* Ignore POLLHUP here. Even if it's set, there may still be data to read. */ | 
++<<<<<<< HEAD | |
+ if (events & (POLLIN | POLLERR | POLLHUP)) | |
++======= | |
+ if (events & (UV__POLLIN | UV__POLLERR | UV__POLLHUP)) | |
+ /* This causes a sequence of calls to stream->alloc_cb and stream->read_cb, if defined. */ | |
++>>>>>>> Node.fz changes | |
uv__read(stream); | |
if (uv__stream_fd(stream) == -1) | |
@@@ -1326,10 -1421,13 +1534,17 @@@ | |
} | |
if (uv__stream_fd(stream) == -1) | |
- return; /* read_cb closed stream. */ | |
+ goto DONE; /* read_cb closed stream. */ | |
++<<<<<<< HEAD | |
+ if (events & (POLLOUT | POLLERR | POLLHUP)) { | |
++======= | |
+ if (events & (UV__POLLOUT | UV__POLLERR | UV__POLLHUP)) { | |
+ /* Pops the first request off of stream->write_queue, fulfills it, | |
+ and calls uv__write_req_finish which puts it on stream->write_completed_queue. */ | |
++>>>>>>> Node.fz changes | |
uv__write(stream); | |
+ /* Iterates over stream->write_completed_queue, removing requests and calling their UV_WRITE_CBs. */ | |
uv__write_callbacks(stream); | |
/* Write queue drained. */ | |
@@@ -1421,13 -1542,10 +1659,20 @@@ int uv_write2(uv_write_t* req | |
* which works but only by accident. | |
*/ | |
if (uv__handle_fd((uv_handle_t*) send_handle) < 0) | |
++<<<<<<< HEAD | |
+ return -EBADF; | |
+ | |
+#if defined(__CYGWIN__) || defined(__MSYS__) | |
+ /* Cygwin recvmsg always sets msg_controllen to zero, so we cannot send it. | |
+ See https://github.com/mirror/newlib-cygwin/blob/86fc4bf0/winsup/cygwin/fhandler_socket.cc#L1736-L1743 */ | |
+ return -ENOSYS; | |
+#endif | |
++======= | |
+ { | |
+ rc = -EBADF; | |
+ goto DONE; | |
+ } | |
++>>>>>>> Node.fz changes | |
} | |
/* It's legal for write_queue_size > 0 even when the write_queue is empty; | |
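
Note on the stream.c hunks: most of these conflicts have the same shape — upstream returns early, while Node.fz funnels every exit through a DONE/RETURN label so it can emit ENTRY_EXIT_LOG lines and keep cleanup in one place. A trivial sketch of that single-exit style (names and error values invented):

    #include <stdio.h>

    static int do_accept(int server_fd, int pending_fd)
    {
      int err = -1;

      fprintf(stderr, "do_accept: begin: server %d\n", server_fd);

      if (pending_fd == -1) {
        err = -11;            /* nothing queued; -EAGAIN in the real code */
        goto done;
      }

      /* ...hand pending_fd to the client handle... */
      err = 0;

    done:
      fprintf(stderr, "do_accept: returning err %d\n", err);
      return err;
    }

    int main(void)
    {
      do_accept(3, -1);
      do_accept(3, 12);
      return 0;
    }

Either style works; the conflict is purely structural, which is why nearly every early return in this file collides.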
diff --cc deps/uv/src/unix/timer.c | |
index f46bdf4bf5,427dbc9928..0000000000 | |
--- a/deps/uv/src/unix/timer.c | |
+++ b/deps/uv/src/unix/timer.c | |
@@@ -22,17 -22,85 +22,85 @@@ | |
#include "internal.h" | |
#include "heap-inl.h" | |
+ #include "scheduler.h" | |
+ #include "statistics.h" | |
+ | |
#include <assert.h> | |
#include <limits.h> | |
+ #include <stdlib.h> /* qsort */ | |
+ #include <unistd.h> /* usleep */ | |
+ | |
+ /* Keeping pointers to timers in an array makes it easy to shuffle them. */ | |
+ struct heap_timer_ready_aux | |
+ { | |
+ uv_timer_t **arr; | |
+ unsigned size; | |
+ unsigned max_size; | |
+ }; | |
+ | |
+ /* Wrapper around scheduler_thread_yield(SCHEDULE_POINT_TIMER_READY, ...) for use with heap_walk. | 
+ * Identifies ready timers according to the scheduler's whims (i.e. time dilation). | |
+ * AUX is a heap_timer_ready_aux. | |
+ */ | |
+ static void uv__heap_timer_ready (struct heap_node *heap_node, void *aux) | |
+ { | |
+ uv_timer_t *handle = NULL; | |
+ struct heap_timer_ready_aux *htra = (struct heap_timer_ready_aux *) aux; | |
+ spd_timer_ready_t spd_timer_ready; | |
+ | |
+ assert(heap_node); | |
+ assert(aux); | |
+ | |
+ handle = container_of(heap_node, uv_timer_t, heap_node); | |
+ | |
+ /* Ask the scheduler whether this timer is ready. */ | |
+ spd_timer_ready_init(&spd_timer_ready); | |
+ spd_timer_ready.timer = handle; | |
+ spd_timer_ready.now = handle->loop->time; | |
+ spd_timer_ready.ready = -1; | |
+ scheduler_thread_yield(SCHEDULE_POINT_TIMER_READY, &spd_timer_ready); | |
+ assert(spd_timer_ready.ready == 0 || spd_timer_ready.ready == 1); | |
+ | |
+ if (spd_timer_ready.ready) | |
+ { | |
+ assert(htra->size < htra->max_size); | |
+ htra->arr[htra->size] = handle; | |
+ htra->size++; | |
+ } | |
+ | |
+ return; | |
+ } | |
+ | |
+ /* a and b are uv_timer_t **'s. "two arguments that point to the objects being compared." | |
+ * Returns -1 if a times out before b, 1 if b times out before a, 0 in the event of a tie (is this possible?). | |
+ */ | |
+ static int qsort_timer_cmp (const void *a, const void *b) | |
+ { | |
+ const uv_timer_t *a_timer = *(const uv_timer_t **) a; | |
+ const uv_timer_t *b_timer = *(const uv_timer_t **) b; | |
+ | |
+ if (a_timer->timeout < b_timer->timeout) | |
+ return -1; | |
+ if (b_timer->timeout < a_timer->timeout) | |
+ return 1; | |
+ /* Compare start_id when both have the same timeout. start_id is | |
+ * allocated with loop->timer_counter in uv_timer_start(). | |
+ */ | |
+ if (a_timer->start_id < b_timer->start_id) | |
+ return -1; | |
+ if (b_timer->start_id < a_timer->start_id) | |
+ return 1; | |
- static int timer_less_than(const struct heap_node* ha, | |
- const struct heap_node* hb) { | |
- const uv_timer_t* a; | |
- const uv_timer_t* b; | |
+ return 0; | |
+ } | |
+ | |
+ static int heap_timer_less_than (const struct heap_node* ha, const struct heap_node* hb) | |
+ { | |
+ const uv_timer_t *a = NULL, *b = NULL; | |
- a = container_of(ha, const uv_timer_t, heap_node); | |
- b = container_of(hb, const uv_timer_t, heap_node); | |
+ a = container_of(ha, uv_timer_t, heap_node); | |
+ b = container_of(hb, uv_timer_t, heap_node); | |
if (a->timeout < b->timeout) | |
return 1; | |
@@@ -135,38 -247,76 +247,82 @@@ int uv__next_timeout(const uv_loop_t* l | |
if (heap_node == NULL) | |
return -1; /* block indefinitely */ | |
++<<<<<<< HEAD | |
+ handle = container_of(heap_node, uv_timer_t, heap_node); | |
+ if (handle->timeout <= loop->time) | |
+ return 0; | |
++======= | |
+ handle = container_of(heap_node, const uv_timer_t, heap_node); | |
++>>>>>>> Node.fz changes | |
- diff = handle->timeout - loop->time; | |
- if (diff > INT_MAX) | |
- diff = INT_MAX; | |
+ spd_timer_next_timeout_init(&spd_timer_next_timeout); | |
+ spd_timer_next_timeout.timer = (uv_timer_t * /* I promise not to modify it */) handle; | |
+ spd_timer_next_timeout.now = loop->time; | |
+ scheduler_thread_yield(SCHEDULE_POINT_TIMER_NEXT_TIMEOUT, &spd_timer_next_timeout); | |
- return diff; | |
- } | |
+ /* We have to return an int, so cap if needed. */ | |
+ if (INT_MAX < spd_timer_next_timeout.time_until_timer) | |
+ spd_timer_next_timeout.time_until_timer = INT_MAX; | |
+ mylog(LOG_TIMER, 7, "uv__next_timeout: time_until_timer %llu\n", spd_timer_next_timeout.time_until_timer); | |
+ return spd_timer_next_timeout.time_until_timer; | |
+ } | |
void uv__run_timers(uv_loop_t* loop) { | |
- struct heap_node* heap_node; | |
- uv_timer_t* handle; | |
- | |
- for (;;) { | |
- heap_node = heap_min((struct heap*) &loop->timer_heap); | |
- if (heap_node == NULL) | |
- break; | |
+ unsigned i; | |
+ spd_timer_run_t spd_timer_run; | |
+ int *should_run = NULL; | |
+ struct heap_timer_ready_aux htra; | |
+ | |
+ /* Calculating ready timers here means that any timers registered by ready timers | |
+ * won't be candidates for execution until the next time we call uv__run_timers. | |
+ * This is appropriate since it matches the Node.js docs (timers have a minimum timeout of 1ms in the future). | |
+ */ | |
+ htra = uv__ready_timers(loop); | |
+ should_run = (int *) uv__malloc(htra.size * sizeof(int)); | |
+ assert(should_run != NULL); | |
+ memset(should_run, 0, sizeof(int)*htra.size); | |
+ | |
+ spd_timer_run_init(&spd_timer_run); | |
+ /* Ask scheduler which timers to handle in what order. | |
+ * Scheduler will defer every timer after the first deferred one, | |
+ * so that there is no shuffling due to deferral. */ | |
+ spd_timer_run_init(&spd_timer_run); | |
+ spd_timer_run.shuffleable_items.item_size = sizeof(uv_timer_t *); | |
+ spd_timer_run.shuffleable_items.nitems = htra.size; | |
+ spd_timer_run.shuffleable_items.items = htra.arr; | |
+ spd_timer_run.shuffleable_items.thoughts = should_run; | |
+ scheduler_thread_yield(SCHEDULE_POINT_TIMER_RUN, &spd_timer_run); | |
+ | |
+ for (i = 0; i < spd_timer_run.shuffleable_items.nitems; i++) | |
+ { | |
+ uv_timer_t *timer = ((uv_timer_t **) spd_timer_run.shuffleable_items.items)[i]; | |
+ if (spd_timer_run.shuffleable_items.thoughts[i] == 1) | |
+ { | |
+ mylog(LOG_TIMER, 7, "uv__run_timers: running timer %p, time is %llu, timer has timeout %llu\n", timer, timer->loop->time, timer->timeout); | |
+ uv_timer_stop(timer); | |
+ uv_timer_again(timer); | |
+ | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) timer->timer_cb, UV_TIMER_CB, (long) timer); | |
+ #else | |
+ timer->timer_cb(timer); | |
+ #endif | |
+ statistics_record(STATISTIC_TIMERS_EXECUTED, 1); | |
+ } | |
+ else | |
+ { | |
+ mylog(LOG_TIMER, 7, "uv__run_timers: deferring ready timer %p and all subsequent timers\n", timer); | |
+ usleep(1000*5); /* Sleep 5 ms to let more events occur, since uv__io_poll will have timeout 0 while there are ready timers. */ | |
+ } | |
+ } | |
- handle = container_of(heap_node, uv_timer_t, heap_node); | |
- if (handle->timeout > loop->time) | |
- break; | |
+ uv__free(htra.arr); | |
+ uv__free(should_run); | |
- uv_timer_stop(handle); | |
- uv_timer_again(handle); | |
- handle->timer_cb(handle); | |
- } | |
+ return; | |
} | |
- | |
void uv__timer_close(uv_timer_t* handle) { | |
uv_timer_stop(handle); | |
} | |
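
Note on the timer.c hunks: instead of popping the heap until the minimum timer lies in the future, Node.fz collects the ready timers into an array, asks its scheduler which to run, and orders them with qsort_timer_cmp — by timeout first, then by start order. A self-contained sketch of that two-key ordering (the struct and values are stand-ins for uv_timer_t):

    #include <stdio.h>
    #include <stdlib.h>

    struct fake_timer {
      unsigned long long timeout;
      unsigned long long start_id;
      const char *name;
    };

    static int timer_cmp(const void *a, const void *b)
    {
      const struct fake_timer *ta = *(const struct fake_timer * const *) a;
      const struct fake_timer *tb = *(const struct fake_timer * const *) b;

      if (ta->timeout != tb->timeout)
        return ta->timeout < tb->timeout ? -1 : 1;
      if (ta->start_id != tb->start_id)       /* tie-break on start order */
        return ta->start_id < tb->start_id ? -1 : 1;
      return 0;
    }

    int main(void)
    {
      struct fake_timer t1 = { 100, 2, "second" };
      struct fake_timer t2 = { 100, 1, "first" };
      struct fake_timer t3 = {  50, 3, "soonest" };
      struct fake_timer *ready[] = { &t1, &t2, &t3 };

      qsort(ready, 3, sizeof(ready[0]), timer_cmp);

      for (int i = 0; i < 3; i++)
        printf("%s (timeout=%llu, start_id=%llu)\n",
               ready[i]->name, ready[i]->timeout, ready[i]->start_id);
      return 0;
    }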
diff --cc deps/uv/src/unix/udp.c | |
index a475bf5741,f62f29bbd3..0000000000 | |
--- a/deps/uv/src/unix/udp.c | |
+++ b/deps/uv/src/unix/udp.c | |
@@@ -168,10 -176,17 +179,23 @@@ static void uv__udp_recvmsg(uv_udp_t* h | |
h.msg_name = &peer; | |
do { | |
++<<<<<<< HEAD | |
+ buf = uv_buf_init(NULL, 0); | |
+ handle->alloc_cb((uv_handle_t*) handle, 64 * 1024, &buf); | |
+ if (buf.base == NULL || buf.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) handle->alloc_cb, UV_ALLOC_CB, (long) (uv_handle_t*) handle, (long) 64 * 1024, (long) &buf); | |
+ #else | |
+ handle->alloc_cb((uv_handle_t*) handle, 64 * 1024, &buf); | |
+ #endif | |
+ if (buf.len == 0) { | |
+ #if UNIFIED_CALLBACK | |
+ invoke_callback_wrap((any_func) handle->recv_cb, UV_UDP_RECV_CB, (long) handle, (long) UV_ENOBUFS, (long) &buf, (long) NULL, (long) 0); | |
+ #else | |
++>>>>>>> Node.fz changes | |
handle->recv_cb(handle, UV_ENOBUFS, &buf, NULL, 0); | |
+ #endif | |
return; | |
} | |
assert(buf.base != NULL); | |
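
Note on the udp.c hunk (and on the win/pipe.c, win/tcp.c, win/tty.c, and win/udp.c hunks below, which have the same shape): upstream zero-initializes the buffer and treats either a NULL base or a zero length from alloc_cb as "no buffer", reporting UV_ENOBUFS, while Node.fz checks only the length and wraps both callbacks. A sketch of that allocation handshake with simplified types (EX_ENOBUFS is a stand-in constant):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define EX_ENOBUFS (-105)

    struct buf { char *base; size_t len; };

    typedef void (*alloc_cb)(size_t suggested, struct buf *out);
    typedef void (*read_cb)(long nread, const struct buf *buf);

    static void do_read(alloc_cb alloc, read_cb read_done)
    {
      struct buf buf = { NULL, 0 };           /* upstream zero-initializes first */

      alloc(64 * 1024, &buf);
      if (buf.base == NULL || buf.len == 0) { /* upstream checks both fields */
        read_done(EX_ENOBUFS, &buf);
        return;
      }

      /* ...recv into buf.base here... */
      memcpy(buf.base, "hi", 2);
      read_done(2, &buf);
      free(buf.base);
    }

    static void my_alloc(size_t suggested, struct buf *out)
    {
      out->base = malloc(suggested);
      out->len = out->base ? suggested : 0;
    }

    static void my_read(long nread, const struct buf *buf)
    {
      (void) buf;
      printf("read returned %ld\n", nread);
    }

    int main(void)
    {
      do_read(my_alloc, my_read);
      return 0;
    }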
diff --cc deps/uv/src/win/pipe.c | |
index 642213bc88,6e3202cbb1..0000000000 | |
--- a/deps/uv/src/win/pipe.c | |
+++ b/deps/uv/src/win/pipe.c | |
@@@ -1640,10 -1640,17 +1660,23 @@@ void uv_process_pipe_read_req(uv_loop_t | |
} | |
} | |
++<<<<<<< HEAD | |
+ buf = uv_buf_init(NULL, 0); | |
+ handle->alloc_cb((uv_handle_t*) handle, avail, &buf); | |
+ if (buf.base == NULL || buf.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_ALLOC_CB, handle->alloc_cb, (uv_handle_t*) handle, avail, &buf); | |
+ #else | |
+ handle->alloc_cb((uv_handle_t*) handle, avail, &buf); | |
+ #endif | |
+ if (buf.len == 0) { | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_READ_CB, handle->read_cb, (uv_stream_t*) handle, UV_ENOBUFS, &buf); | |
+ #else | |
++>>>>>>> Node.fz changes | |
handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &buf); | |
+ #endif | |
break; | |
} | |
assert(buf.base != NULL); | |
diff --cc deps/uv/src/win/tcp.c | |
index e63a63e771,ed0f34e8cc..0000000000 | |
--- a/deps/uv/src/win/tcp.c | |
+++ b/deps/uv/src/win/tcp.c | |
@@@ -496,11 -501,17 +501,24 @@@ static void uv_tcp_queue_read(uv_loop_t | |
*/ | |
if (loop->active_tcp_streams < uv_active_tcp_streams_threshold) { | |
handle->flags &= ~UV_HANDLE_ZERO_READ; | |
++<<<<<<< HEAD | |
+ handle->tcp.conn.read_buffer = uv_buf_init(NULL, 0); | |
+ handle->alloc_cb((uv_handle_t*) handle, 65536, &handle->tcp.conn.read_buffer); | |
+ if (handle->tcp.conn.read_buffer.base == NULL || | |
+ handle->tcp.conn.read_buffer.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_ALLOC_CB, handle->alloc_cb, (uv_handle_t*) handle, 65536, &handle->tcp.conn.read_buffer); | |
+ #else | |
+ handle->alloc_cb((uv_handle_t*) handle, 65536, &handle->tcp.conn.read_buffer); | |
+ #endif | |
+ if (handle->tcp.conn.read_buffer.len == 0) { | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_READ_CB, handle->read_cb, (uv_stream_t*) handle, UV_ENOBUFS, &handle->tcp.conn.read_buffer); | |
+ #else | |
++>>>>>>> Node.fz changes | |
handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &handle->tcp.conn.read_buffer); | |
+ #endif | |
return; | |
} | |
assert(handle->tcp.conn.read_buffer.base != NULL); | |
@@@ -1001,10 -1030,17 +1032,23 @@@ void uv_process_tcp_read_req(uv_loop_t | |
/* Do nonblocking reads until the buffer is empty */ | |
while (handle->flags & UV_HANDLE_READING) { | |
++<<<<<<< HEAD | |
+ buf = uv_buf_init(NULL, 0); | |
+ handle->alloc_cb((uv_handle_t*) handle, 65536, &buf); | |
+ if (buf.base == NULL || buf.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_ALLOC_CB, handle->alloc_cb, (uv_handle_t*) handle, 65536, &buf); | |
+ #else | |
+ handle->alloc_cb((uv_handle_t*) handle, 65536, &buf); | |
+ #endif | |
+ if (buf.len == 0) { | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_READ_CB, handle->read_cb, (uv_stream_t*) handle, UV_ENOBUFS, &buf); | |
+ #else | |
++>>>>>>> Node.fz changes | |
handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &buf); | |
+ #endif | |
break; | |
} | |
assert(buf.base != NULL); | |
diff --cc deps/uv/src/win/tty.c | |
index 05a11e8830,373bd64b74..0000000000 | |
--- a/deps/uv/src/win/tty.c | |
+++ b/deps/uv/src/win/tty.c | |
@@@ -563,10 -457,15 +563,22 @@@ static void uv_tty_queue_read_line(uv_l | |
req = &handle->read_req; | |
memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); | |
++<<<<<<< HEAD | |
+ handle->tty.rd.read_line_buffer = uv_buf_init(NULL, 0); | |
+ handle->alloc_cb((uv_handle_t*) handle, 8192, &handle->tty.rd.read_line_buffer); | |
+ if (handle->tty.rd.read_line_buffer.base == NULL || | |
+ handle->tty.rd.read_line_buffer.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_ALLOC_CB, handle->alloc_cb, (uv_handle_t*) handle, 8192, &handle->tty.rd.read_line_buffer); | |
+ #else | |
+ handle->alloc_cb((uv_handle_t*) handle, 8192, &handle->tty.rd.read_line_buffer); | |
+ #endif | |
+ if (handle->tty.rd.read_line_buffer.len == 0) { | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_READ_CB, handle->read_cb, (uv_stream_t*) handle, UV_ENOBUFS, &handle->tty.rd.read_line_buffer); | |
+ #else | |
++>>>>>>> Node.fz changes | |
handle->read_cb((uv_stream_t*) handle, | |
UV_ENOBUFS, | |
&handle->tty.rd.read_line_buffer); | |
@@@ -871,10 -824,17 +904,23 @@@ void uv_process_tty_read_raw_req(uv_loo | |
if (handle->tty.rd.last_key_offset < handle->tty.rd.last_key_len) { | |
/* Allocate a buffer if needed */ | |
if (buf_used == 0) { | |
++<<<<<<< HEAD | |
+ buf = uv_buf_init(NULL, 0); | |
+ handle->alloc_cb((uv_handle_t*) handle, 1024, &buf); | |
+ if (buf.base == NULL || buf.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_ALLOC_CB, handle->alloc_cb, (uv_handle_t*) handle, 1024, &buf); | |
+ #else | |
+ handle->alloc_cb((uv_handle_t*) handle, 1024, &buf); | |
+ #endif | |
+ if (buf.len == 0) { | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_READ_CB, handle->read_cb, (uv_stream_t*) handle, UV_ENOBUFS, &buf); | |
+ #else | |
++>>>>>>> Node.fz changes | |
handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &buf); | |
+ #endif | |
goto out; | |
} | |
assert(buf.base != NULL); | |
@@@ -949,15 -926,14 +1011,26 @@@ void uv_process_tty_read_line_req(uv_lo | |
} | |
} else { | |
++<<<<<<< HEAD | |
+ if (!(handle->flags & UV_HANDLE_CANCELLATION_PENDING)) { | |
+ /* Read successful */ | |
+ /* TODO: read unicode, convert to utf-8 */ | |
+ DWORD bytes = req->u.io.overlapped.InternalHigh; | |
+ handle->read_cb((uv_stream_t*) handle, bytes, &buf); | |
+ } else { | |
+ handle->flags &= ~UV_HANDLE_CANCELLATION_PENDING; | |
+ handle->read_cb((uv_stream_t*) handle, 0, &buf); | |
+ } | |
++======= | |
+ /* Read successful */ | |
+ /* TODO: read unicode, convert to utf-8 */ | |
+ DWORD bytes = req->u.io.overlapped.InternalHigh; | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_READ_CB, handle->read_cb, (uv_stream_t*) handle, bytes, &buf); | |
+ #else | |
+ handle->read_cb((uv_stream_t*) handle, bytes, &buf); | |
+ #endif | |
++>>>>>>> Node.fz changes | |
} | |
/* Wait for more input events. */ | |
diff --cc deps/uv/src/win/udp.c | |
index 21348f3796,0c1906e1ef..0000000000 | |
--- a/deps/uv/src/win/udp.c | |
+++ b/deps/uv/src/win/udp.c | |
@@@ -288,10 -289,17 +288,23 @@@ static void uv_udp_queue_recv(uv_loop_t | |
if (loop->active_udp_streams < uv_active_udp_streams_threshold) { | |
handle->flags &= ~UV_HANDLE_ZERO_READ; | |
++<<<<<<< HEAD | |
+ handle->recv_buffer = uv_buf_init(NULL, 0); | |
+ handle->alloc_cb((uv_handle_t*) handle, 65536, &handle->recv_buffer); | |
+ if (handle->recv_buffer.base == NULL || handle->recv_buffer.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_ALLOC_CB, handle->alloc_cb, (uv_handle_t*) handle, 65536, &handle->recv_buffer); | |
+ #else | |
+ handle->alloc_cb((uv_handle_t*) handle, 65536, &handle->recv_buffer); | |
+ #endif | |
+ if (handle->recv_buffer.len == 0) { | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_5(UV_UDP_RECV_CB, handle->recv_cb, handle, UV_ENOBUFS, &handle->recv_buffer, NULL, 0); | |
+ #else | |
++>>>>>>> Node.fz changes | |
handle->recv_cb(handle, UV_ENOBUFS, &handle->recv_buffer, NULL, 0); | |
+ #endif | |
return; | |
} | |
assert(handle->recv_buffer.base != NULL); | |
@@@ -505,10 -522,17 +526,23 @@@ void uv_process_udp_recv_req(uv_loop_t | |
/* Do a nonblocking receive */ | |
/* TODO: try to read multiple datagrams at once. FIONREAD maybe? */ | |
++<<<<<<< HEAD | |
+ buf = uv_buf_init(NULL, 0); | |
+ handle->alloc_cb((uv_handle_t*) handle, 65536, &buf); | |
+ if (buf.base == NULL || buf.len == 0) { | |
++======= | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_3(UV_ALLOC_CB, handle->alloc_cb, (uv_handle_t*) handle, 65536, &buf); | |
+ #else | |
+ handle->alloc_cb((uv_handle_t*) handle, 65536, &buf); | |
+ #endif | |
+ if (buf.len == 0) { | |
+ #if UNIFIED_CALLBACK | |
+ INVOKE_CALLBACK_5(UV_UDP_RECV_CB, handle->recv_cb, handle, UV_ENOBUFS, &buf, NULL, 0); | |
+ #else | |
++>>>>>>> Node.fz changes | |
handle->recv_cb(handle, UV_ENOBUFS, &buf, NULL, 0); | |
+ #endif | |
goto done; | |
} | |
assert(buf.base != NULL); | |
diff --cc deps/uv/uv.gyp | |
index 9d9bb4b735,0010a76a40..0000000000 | |
--- a/deps/uv/uv.gyp | |
+++ b/deps/uv/uv.gyp | |
@@@ -137,6 -130,24 +152,27 @@@ | |
], | |
}, | |
}, { # Not Windows i.e. POSIX | |
++<<<<<<< HEAD | |
++======= | |
+ 'cflags': [ | |
+ '-g', | |
+ '--std=gnu89', | |
+ '-pedantic', | |
+ '-Wall', | |
+ '-Wextra', | |
+ '-Werror', | |
+ '-fstack-protector-all', | |
+ '-DJD_DEBUG', | |
+ '-DENABLE_SCHEDULER_VANILLA', | |
+ '-DENABLE_SCHEDULER_FUZZING_TIME', | |
+ '-DENABLE_SCHEDULER_TP_FREEDOM', | |
+ #'-fstack-protector-strong', # Not portable, Ubuntu ships with older gcc | |
+ #'-DJD_DEBUG_FULL', | |
+ #'-DJD_UT', | |
+ #'-DJD_LOG_EE', | |
+ '-Wno-unused-parameter', | |
+ ], | |
++>>>>>>> Node.fz changes | |
'sources': [ | |
'include/uv-unix.h', | |
'include/uv-linux.h', | |
diff --cc deps/v8/src/heap/heap.cc | |
index b13ec784f5,655a44bec6..0000000000 | |
--- a/deps/v8/src/heap/heap.cc | |
+++ b/deps/v8/src/heap/heap.cc | |
@@@ -1064,69 -865,38 +1064,78 @@@ void Heap::CollectAllAvailableGarbage(G | |
UncommitFromSpace(); | |
} | |
- | |
-void Heap::EnsureFillerObjectAtTop() { | |
- // There may be an allocation memento behind every object in new space. | |
- // If we evacuate a not full new space or if we are on the last page of | |
- // the new space, then there may be uninitialized memory behind the top | |
- // pointer of the new space page. We store a filler object there to | |
- // identify the unused space. | |
- Address from_top = new_space_.top(); | |
- // Check that from_top is inside its page (i.e., not at the end). | |
- Address space_end = new_space_.ToSpaceEnd(); | |
- if (from_top < space_end) { | |
- Page* page = Page::FromAddress(from_top); | |
- if (page->Contains(from_top)) { | |
- int remaining_in_page = static_cast<int>(page->area_end() - from_top); | |
- CreateFillerObjectAt(from_top, remaining_in_page); | |
- } | |
+void Heap::ReportExternalMemoryPressure() { | |
+ const GCCallbackFlags kGCCallbackFlagsForExternalMemory = | |
+ static_cast<GCCallbackFlags>( | |
+ kGCCallbackFlagSynchronousPhantomCallbackProcessing | | |
+ kGCCallbackFlagCollectAllExternalMemory); | |
+ if (external_memory_ > | |
+ (external_memory_at_last_mark_compact_ + external_memory_hard_limit())) { | |
+ CollectAllGarbage( | |
+ kReduceMemoryFootprintMask | kFinalizeIncrementalMarkingMask, | |
+ GarbageCollectionReason::kExternalMemoryPressure, | |
+ static_cast<GCCallbackFlags>(kGCCallbackFlagCollectAllAvailableGarbage | | |
+ kGCCallbackFlagsForExternalMemory)); | |
+ return; | |
} | |
-} | |
- | |
+ if (incremental_marking()->IsStopped()) { | |
+ if (incremental_marking()->CanBeActivated()) { | |
+ StartIncrementalMarking(i::Heap::kNoGCFlags, | |
+ GarbageCollectionReason::kExternalMemoryPressure, | |
+ kGCCallbackFlagsForExternalMemory); | |
+ } else { | |
+ CollectAllGarbage(i::Heap::kNoGCFlags, | |
+ GarbageCollectionReason::kExternalMemoryPressure, | |
+ kGCCallbackFlagsForExternalMemory); | |
+ } | |
+ } else { | |
+ // Incremental marking is turned on and has already been started. | 
+ const double kMinStepSize = 5; | |
+ const double kMaxStepSize = 10; | |
+ const double ms_step = | |
+ Min(kMaxStepSize, | |
+ Max(kMinStepSize, static_cast<double>(external_memory_) / | |
+ external_memory_limit_ * kMinStepSize)); | |
+ const double deadline = MonotonicallyIncreasingTimeInMs() + ms_step; | |
+ // Extend the gc callback flags with external memory flags. | |
+ current_gc_callback_flags_ = static_cast<GCCallbackFlags>( | |
+ current_gc_callback_flags_ | kGCCallbackFlagsForExternalMemory); | |
+ incremental_marking()->AdvanceIncrementalMarking( | |
+ deadline, IncrementalMarking::GC_VIA_STACK_GUARD, | |
+ IncrementalMarking::FORCE_COMPLETION, StepOrigin::kV8); | |
+ } | |
+} | |
+ | |
++<<<<<<< HEAD | |
+void Heap::EnsureFillerObjectAtTop() { | |
+ // There may be an allocation memento behind objects in new space. Upon | |
+ // evacuation of a non-full new space (or if we are on the last page) there | |
+ // may be uninitialized memory behind top. We fill the remainder of the page | |
+ // with a filler. | |
+ Address to_top = new_space_->top(); | |
+ Page* page = Page::FromAddress(to_top - kPointerSize); | |
+ if (page->Contains(to_top)) { | |
+ int remaining_in_page = static_cast<int>(page->area_end() - to_top); | |
+ CreateFillerObjectAt(to_top, remaining_in_page, ClearRecordedSlots::kNo); | |
+ } | |
++======= | |
+ static void mylog_0 (char *fmt) | |
+ { | |
+ #ifndef JD_SILENT_NODE | |
+ fprintf(stderr, fmt); | |
+ #endif | |
++>>>>>>> Node.fz changes | |
} | |
-bool Heap::CollectGarbage(GarbageCollector collector, const char* gc_reason, | |
- const char* collector_reason, | |
+bool Heap::CollectGarbage(AllocationSpace space, | |
+ GarbageCollectionReason gc_reason, | |
const v8::GCCallbackFlags gc_callback_flags) { | |
+ mylog_0("Heap::CollectGarbage: Collecting garbage\n"); | |
// The VM is in the GC state until exiting this function. | |
- VMState<GC> state(isolate_); | |
+ VMState<GC> state(isolate()); | |
+ | |
+ const char* collector_reason = NULL; | |
+ GarbageCollector collector = SelectGarbageCollector(space, &collector_reason); | |
#ifdef DEBUG | |
// Reset the allocation timeout to the GC interval, but make sure to | |
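
Note on the heap.cc hunk (the same helpers reappear in isolate.cc, env.cc, and node.cc below): Node.fz adds mylog_0/1/2, fprintf wrappers that compile out when JD_SILENT_NODE is defined; the fixed-arity helpers sidestep variadic forwarding. A single variadic version of the same guard, sketched here for comparison — only the macro name is taken from the diff, the rest is illustrative:

    #include <stdarg.h>
    #include <stdio.h>

    static void mylog(const char *fmt, ...)
    {
    #ifndef JD_SILENT_NODE
      va_list ap;
      va_start(ap, fmt);
      vfprintf(stderr, fmt, ap);
      va_end(ap);
    #else
      (void) fmt;   /* logging compiled out */
    #endif
    }

    int main(void)
    {
      mylog("Heap::CollectGarbage: Collecting garbage\n");
      mylog("Isolate::RunMicrotasks: pending_microtask_count %i\n", 3);
      return 0;
    }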
diff --cc deps/v8/src/isolate.cc | |
index c8f0fd0c78,d615afedb3..0000000000 | |
--- a/deps/v8/src/isolate.cc | |
+++ b/deps/v8/src/isolate.cc | |
@@@ -8,12 -6,11 +8,13 @@@ | |
#include <fstream> // NOLINT(readability/streams) | |
#include <sstream> | |
+ #include <assert.h> | |
-#include "src/v8.h" | |
- | |
-#include "src/ast.h" | |
+#include "src/assembler-inl.h" | |
+#include "src/ast/ast-value-factory.h" | |
+#include "src/ast/context-slot-cache.h" | |
+#include "src/base/adapters.h" | |
+#include "src/base/hashmap.h" | |
#include "src/base/platform/platform.h" | |
#include "src/base/sys-info.h" | |
#include "src/base/utils/random-number-generator.h" | |
@@@ -3480,79 -2627,68 +3481,124 @@@ void Isolate::EnqueueMicrotask(Handle<O | |
set_pending_microtask_count(num_tasks + 1); | |
} | |
+ static void mylog_0 (char *fmt) | |
+ { | |
+ #ifndef JD_SILENT_NODE | |
+ fprintf(stderr, fmt); | |
+ #endif | |
+ } | |
+ | |
+ static void mylog_1 (char *fmt, int arg1) | |
+ { | |
+ #ifndef JD_SILENT_NODE | |
+ fprintf(stderr, fmt, arg1); | |
+ #endif | |
+ } | |
+ | |
+ static void mylog_2 (char *fmt, int arg1, int arg2) | |
+ { | |
+ #ifndef JD_SILENT_NODE | |
+ fprintf(stderr, fmt, arg1, arg2); | |
+ #endif | |
+ } | |
void Isolate::RunMicrotasks() { | |
++<<<<<<< HEAD | |
++======= | |
+ mylog_0("Isolate::RunMicrotasks: begin\n"); | |
+ // %RunMicrotasks may be called in mjsunit tests, which violates | |
+ // this assertion, hence the check for --allow-natives-syntax. | |
+ // TODO(adamk): However, this also fails some layout tests. | |
+ // | |
+ // DCHECK(FLAG_allow_natives_syntax || | |
+ // handle_scope_implementer()->CallDepthIsZero()); | |
+ | |
++>>>>>>> Node.fz changes | |
// Increase call depth to prevent recursive callbacks. | |
v8::Isolate::SuppressMicrotaskExecutionScope suppress( | |
reinterpret_cast<v8::Isolate*>(this)); | |
+ is_running_microtasks_ = true; | |
+ RunMicrotasksInternal(); | |
+ is_running_microtasks_ = false; | |
+ FireMicrotasksCompletedCallback(); | |
+} | |
+ | |
++<<<<<<< HEAD | |
+void Isolate::RunMicrotasksInternal() { | |
+ if (!pending_microtask_count()) return; | |
+ TRACE_EVENT0("v8.execute", "RunMicrotasks"); | |
+ TRACE_EVENT_CALL_STATS_SCOPED(this, "v8", "V8.RunMicrotasks"); | |
++======= | |
+ mylog_1("Isolate::RunMicrotasks: pending_microtask_count %i\n", pending_microtask_count()); | |
++>>>>>>> Node.fz changes | |
while (pending_microtask_count() > 0) { | |
+ mylog_1("Isolate::RunMicrotasks: loop: pending_microtask_count %i\n", pending_microtask_count()); | |
HandleScope scope(this); | |
int num_tasks = pending_microtask_count(); | |
+ // Do not use factory()->microtask_queue() here; we need a fresh handle! | |
Handle<FixedArray> queue(heap()->microtask_queue(), this); | |
DCHECK(num_tasks <= queue->length()); | |
set_pending_microtask_count(0); | |
heap()->set_microtask_queue(heap()->empty_fixed_array()); | |
++<<<<<<< HEAD | |
+ Isolate* isolate = this; | |
+ FOR_WITH_HANDLE_SCOPE(isolate, int, i = 0, i, i < num_tasks, i++, { | |
++======= | |
+ for (int i = 0; i < num_tasks; i++) { | |
+ mylog_2("Isolate::RunMicrotasks: Running task %i/%i\n", i+1, num_tasks); | |
+ /* JD: Any microtasks are fatal for now -- not included in record/replay system. Have not yet considered how to deal with them. | |
+ I also don't understand what kinds of things might be put into this microtask queue. Node.js stuff? Or just V8 stuff? */ | |
+ assert(!"Isolate::RunMicrotasks: Not yet supported"); | |
+ HandleScope scope(this); | |
++>>>>>>> Node.fz changes | |
Handle<Object> microtask(queue->get(i), this); | |
- if (microtask->IsJSFunction()) { | |
- Handle<JSFunction> microtask_function = | |
- Handle<JSFunction>::cast(microtask); | |
+ | |
+ if (microtask->IsCallHandlerInfo()) { | |
+ Handle<CallHandlerInfo> callback_info = | |
+ Handle<CallHandlerInfo>::cast(microtask); | |
+ v8::MicrotaskCallback callback = | |
+ v8::ToCData<v8::MicrotaskCallback>(callback_info->callback()); | |
+ void* data = v8::ToCData<void*>(callback_info->data()); | |
+ callback(data); | |
+ } else { | |
SaveContext save(this); | |
- set_context(microtask_function->context()->native_context()); | |
+ Context* context; | |
+ if (microtask->IsJSFunction()) { | |
+ context = Handle<JSFunction>::cast(microtask)->context(); | |
+ } else if (microtask->IsPromiseResolveThenableJobInfo()) { | |
+ context = | |
+ Handle<PromiseResolveThenableJobInfo>::cast(microtask)->context(); | |
+ } else { | |
+ context = Handle<PromiseReactionJobInfo>::cast(microtask)->context(); | |
+ } | |
+ | |
+ set_context(context->native_context()); | |
+ handle_scope_implementer_->EnterMicrotaskContext( | |
+ Handle<Context>(context, this)); | |
+ | |
+ MaybeHandle<Object> result; | |
MaybeHandle<Object> maybe_exception; | |
- MaybeHandle<Object> result = | |
- Execution::TryCall(microtask_function, factory()->undefined_value(), | |
- 0, NULL, &maybe_exception); | |
+ | |
+ if (microtask->IsJSFunction()) { | |
+ Handle<JSFunction> microtask_function = | |
+ Handle<JSFunction>::cast(microtask); | |
+ result = Execution::TryCall( | |
+ this, microtask_function, factory()->undefined_value(), 0, | |
+ nullptr, Execution::MessageHandling::kReport, &maybe_exception); | |
+ } else if (microtask->IsPromiseResolveThenableJobInfo()) { | |
+ PromiseResolveThenableJob( | |
+ Handle<PromiseResolveThenableJobInfo>::cast(microtask), &result, | |
+ &maybe_exception); | |
+ } else { | |
+ PromiseReactionJob(Handle<PromiseReactionJobInfo>::cast(microtask), | |
+ &result, &maybe_exception); | |
+ } | |
+ | |
+ handle_scope_implementer_->LeaveMicrotaskContext(); | |
+ | |
// If execution is terminating, just bail out. | |
- Handle<Object> exception; | |
if (result.is_null() && maybe_exception.is_null()) { | |
// Clear out any remaining callbacks in the queue. | |
heap()->set_microtask_queue(heap()->empty_fixed_array()); | |
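
Note on the isolate.cc hunk: both sides drain the microtask queue in batches, re-reading the pending count each pass because a running task may enqueue more work; the Node.fz side simply asserts that no microtasks occur at all, since they are not covered by its record/replay scheme. The batch-drain shape, reduced to a small C sketch with stand-in types:

    #include <stdio.h>

    #define MAX_TASKS 16

    typedef void (*task_fn)(void);

    static task_fn queue[MAX_TASKS];
    static int pending;

    static void enqueue(task_fn t) { queue[pending++] = t; }

    static void second(void) { printf("second task\n"); }
    static void first(void)  { printf("first task, enqueueing another\n"); enqueue(second); }

    static void run_microtasks(void)
    {
      while (pending > 0) {
        task_fn batch[MAX_TASKS];
        int n = pending, i;

        for (i = 0; i < n; i++)        /* take the current batch */
          batch[i] = queue[i];
        pending = 0;                   /* tasks run below may enqueue more */

        for (i = 0; i < n; i++)
          batch[i]();
      }
    }

    int main(void)
    {
      enqueue(first);
      run_microtasks();
      return 0;
    }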
diff --cc src/env.cc | |
index c7bda01303,1724f27bc3..0000000000 | |
--- a/src/env.cc | |
+++ b/src/env.cc | |
@@@ -220,143 -56,48 +220,164 @@@ void Environment::PrintSyncTrace() cons | |
fflush(stderr); | |
} | |
++<<<<<<< HEAD | |
+void Environment::RunAtExitCallbacks() { | |
+ for (AtExitCallback at_exit : at_exit_functions_) { | |
+ at_exit.cb_(at_exit.arg_); | |
+ } | |
+ at_exit_functions_.clear(); | |
+} | |
+ | |
+void Environment::AtExit(void (*cb)(void* arg), void* arg) { | |
+ at_exit_functions_.push_back(AtExitCallback{cb, arg}); | |
++======= | |
+ static void mylog_0 (char *fmt) | |
+ { | |
+ #ifndef JD_SILENT_NODE | |
+ fprintf(stderr, fmt); | |
+ #endif | |
++>>>>>>> Node.fz changes | |
} | |
-bool Environment::KickNextTick() { | |
- TickInfo* info = tick_info(); | |
+void Environment::AddPromiseHook(promise_hook_func fn, void* arg) { | |
+ auto it = std::find_if( | |
+ promise_hooks_.begin(), promise_hooks_.end(), | |
+ [&](const PromiseHookCallback& hook) { | |
+ return hook.cb_ == fn && hook.arg_ == arg; | |
+ }); | |
+ if (it != promise_hooks_.end()) { | |
+ it->enable_count_++; | |
+ return; | |
+ } | |
+ promise_hooks_.push_back(PromiseHookCallback{fn, arg, 1}); | |
- if (info->in_tick()) { | |
- return true; | |
+ if (promise_hooks_.size() == 1) { | |
+ isolate_->SetPromiseHook(EnvPromiseHook); | |
} | |
+} | |
+ | |
+bool Environment::RemovePromiseHook(promise_hook_func fn, void* arg) { | |
+ auto it = std::find_if( | |
+ promise_hooks_.begin(), promise_hooks_.end(), | |
+ [&](const PromiseHookCallback& hook) { | |
+ return hook.cb_ == fn && hook.arg_ == arg; | |
+ }); | |
- if (info->length() == 0) { | |
- isolate()->RunMicrotasks(); | |
+ if (it == promise_hooks_.end()) return false; | |
+ | |
+ if (--it->enable_count_ > 0) return true; | |
+ | |
+ promise_hooks_.erase(it); | |
+ if (promise_hooks_.empty()) { | |
+ isolate_->SetPromiseHook(nullptr); | |
} | |
- if (info->length() == 0) { | |
- info->set_index(0); | |
- return true; | |
+ return true; | |
+} | |
+ | |
+bool Environment::EmitNapiWarning() { | |
+ bool current_value = emit_napi_warning_; | |
+ emit_napi_warning_ = false; | |
+ return current_value; | |
+} | |
+ | |
+void Environment::EnvPromiseHook(v8::PromiseHookType type, | |
+ v8::Local<v8::Promise> promise, | |
+ v8::Local<v8::Value> parent) { | |
+ Environment* env = Environment::GetCurrent(promise->CreationContext()); | |
+ for (const PromiseHookCallback& hook : env->promise_hooks_) { | |
+ hook.cb_(type, promise, parent, hook.arg_); | |
} | |
+} | |
- info->set_in_tick(true); | |
+void CollectExceptionInfo(Environment* env, | |
+ v8::Local<v8::Object> obj, | |
+ int errorno, | |
+ const char* err_string, | |
+ const char* syscall, | |
+ const char* message, | |
+ const char* path, | |
+ const char* dest) { | |
+ obj->Set(env->errno_string(), v8::Integer::New(env->isolate(), errorno)); | |
++<<<<<<< HEAD | |
+ obj->Set(env->context(), env->code_string(), | |
+ OneByteString(env->isolate(), err_string)).FromJust(); | |
+ | |
+ if (message != nullptr) { | |
+ obj->Set(env->context(), env->message_string(), | |
+ OneByteString(env->isolate(), message)).FromJust(); | |
+ } | |
++======= | |
+ mylog_0("Environment::KickNextTick: Calling tick_callback_function\n"); | |
+ | |
+ // process nextTicks after call | |
+ TryCatch try_catch; | |
+ try_catch.SetVerbose(true); | |
+ tick_callback_function()->Call(process_object(), 0, nullptr); | |
+ | |
+ mylog_0("Environment::KickNextTick: Done calling tick_callback_function\n"); | |
+ | |
+ info->set_in_tick(false); | |
++>>>>>>> Node.fz changes | |
- if (try_catch.HasCaught()) { | |
- info->set_last_threw(true); | |
- return false; | |
+ v8::Local<v8::Value> path_buffer; | |
+ if (path != nullptr) { | |
+ path_buffer = | |
+ Buffer::Copy(env->isolate(), path, strlen(path)).ToLocalChecked(); | |
+ obj->Set(env->context(), env->path_string(), path_buffer).FromJust(); | |
} | |
- return true; | |
+ v8::Local<v8::Value> dest_buffer; | |
+ if (dest != nullptr) { | |
+ dest_buffer = | |
+ Buffer::Copy(env->isolate(), dest, strlen(dest)).ToLocalChecked(); | |
+ obj->Set(env->context(), env->dest_string(), dest_buffer).FromJust(); | |
+ } | |
+ | |
+ if (syscall != nullptr) { | |
+ obj->Set(env->context(), env->syscall_string(), | |
+ OneByteString(env->isolate(), syscall)).FromJust(); | |
+ } | |
+} | |
+ | |
+void Environment::CollectExceptionInfo(v8::Local<v8::Value> object, | |
+ int errorno, | |
+ const char* syscall, | |
+ const char* message, | |
+ const char* path) { | |
+ if (!object->IsObject() || errorno == 0) | |
+ return; | |
+ | |
+ v8::Local<v8::Object> obj = object.As<v8::Object>(); | |
+ const char* err_string = node::errno_string(errorno); | |
+ | |
+ if (message == nullptr || message[0] == '\0') { | |
+ message = strerror(errorno); | |
+ } | |
+ | |
+ node::CollectExceptionInfo(this, obj, errorno, err_string, | |
+ syscall, message, path, nullptr); | |
+} | |
+ | |
+void Environment::CollectUVExceptionInfo(v8::Local<v8::Value> object, | |
+ int errorno, | |
+ const char* syscall, | |
+ const char* message, | |
+ const char* path, | |
+ const char* dest) { | |
+ if (!object->IsObject() || errorno == 0) | |
+ return; | |
+ | |
+ v8::Local<v8::Object> obj = object.As<v8::Object>(); | |
+ const char* err_string = uv_err_name(errorno); | |
+ | |
+ if (message == nullptr || message[0] == '\0') { | |
+ message = uv_strerror(errorno); | |
+ } | |
+ | |
+ node::CollectExceptionInfo(this, obj, errorno, err_string, | |
+ syscall, message, path, dest); | |
} | |
} // namespace node | |
diff --cc src/node.cc | |
index 85fc23bbd9,721db6c74a..0000000000 | |
--- a/src/node.cc | |
+++ b/src/node.cc | |
@@@ -108,15 -69,12 +108,16 @@@ typedef int mode_t | |
#include <unistd.h> // setuid, getuid | |
#endif | |
-#if defined(__POSIX__) && !defined(__ANDROID__) | |
+#if defined(__POSIX__) && !defined(__ANDROID__) && !defined(__CloudABI__) | |
#include <pwd.h> // getpwnam() | |
#include <grp.h> // getgrnam() | |
+ #include <sys/time.h> // gettimeofday() | |
#endif | |
+#if defined(__POSIX__) | |
+#include <dlfcn.h> | |
+#endif | |
+ | |
#ifdef __APPLE__ | |
#include <crt_externs.h> | |
#define environ (*_NSGetEnviron()) | |
@@@ -209,214 -144,368 +210,378 @@@ std::string icu_data_dir; // NOLINT(ru | |
// used by C++ modules as well | |
bool no_deprecation = false; | |
++<<<<<<< HEAD | |
+#if HAVE_OPENSSL | |
+// use OpenSSL's cert store instead of bundled certs | |
+bool ssl_openssl_cert_store = | |
+#if defined(NODE_OPENSSL_CERT_STORE) | |
+ true; | |
+#else | |
+ false; | |
++======= | |
+ // process-relative uptime base, initialized at start-up | |
+ static double prog_start_time; | |
+ static bool debugger_running; | |
+ static uv_async_t dispatch_debug_messages_async; | |
+ | |
+ static Isolate* node_isolate = nullptr; | |
+ static v8::Platform* default_platform; | |
+ | |
+ static void mylog_0 (char *fmt) | |
+ { | |
+ #ifndef JD_SILENT_NODE | |
+ fprintf(stderr, fmt); | |
+ #endif | |
+ } | |
+ | |
+ static void mylog_1 (char *fmt, int arg1) | |
+ { | |
+ #ifndef JD_SILENT_NODE | |
+ fprintf(stderr, fmt, arg1); | |
+ #endif | |
+ } | |
+ | |
+ static void CheckImmediate(uv_check_t* handle) { | |
+ Environment* env = Environment::from_immediate_check_handle(handle); | |
+ HandleScope scope(env->isolate()); | |
+ Context::Scope context_scope(env->context()); | |
+ mylog_0("node::CheckImmediate: MakeCallback\n"); | |
+ MakeCallback(env, env->process_object(), env->immediate_callback_string()); | |
+ mylog_0("node::CheckImmediate: MakeCallback done\n"); | |
+ } | |
+ | |
+ | |
+ static void IdleImmediateDummy(uv_idle_t* handle) { | |
+ // Do nothing. Only for maintaining event loop. | |
+ // TODO(bnoordhuis) Maybe make libuv accept nullptr idle callbacks. | |
+ } | |
+ | |
+ | |
+ static inline const char *errno_string(int errorno) { | |
+ #define ERRNO_CASE(e) case e: return #e; | |
+ switch (errorno) { | |
+ #ifdef EACCES | |
+ ERRNO_CASE(EACCES); | |
+ #endif | |
+ | |
+ #ifdef EADDRINUSE | |
+ ERRNO_CASE(EADDRINUSE); | |
+ #endif | |
+ | |
+ #ifdef EADDRNOTAVAIL | |
+ ERRNO_CASE(EADDRNOTAVAIL); | |
+ #endif | |
+ | |
+ #ifdef EAFNOSUPPORT | |
+ ERRNO_CASE(EAFNOSUPPORT); | |
+ #endif | |
+ | |
+ #ifdef EAGAIN | |
+ ERRNO_CASE(EAGAIN); | |
+ #endif | |
+ | |
+ #ifdef EWOULDBLOCK | |
+ # if EAGAIN != EWOULDBLOCK | |
+ ERRNO_CASE(EWOULDBLOCK); | |
+ # endif | |
+ #endif | |
+ | |
+ #ifdef EALREADY | |
+ ERRNO_CASE(EALREADY); | |
+ #endif | |
+ | |
+ #ifdef EBADF | |
+ ERRNO_CASE(EBADF); | |
+ #endif | |
+ | |
+ #ifdef EBADMSG | |
+ ERRNO_CASE(EBADMSG); | |
+ #endif | |
+ | |
+ #ifdef EBUSY | |
+ ERRNO_CASE(EBUSY); | |
+ #endif | |
+ | |
+ #ifdef ECANCELED | |
+ ERRNO_CASE(ECANCELED); | |
+ #endif | |
+ | |
+ #ifdef ECHILD | |
+ ERRNO_CASE(ECHILD); | |
+ #endif | |
+ | |
+ #ifdef ECONNABORTED | |
+ ERRNO_CASE(ECONNABORTED); | |
+ #endif | |
+ | |
+ #ifdef ECONNREFUSED | |
+ ERRNO_CASE(ECONNREFUSED); | |
+ #endif | |
+ | |
+ #ifdef ECONNRESET | |
+ ERRNO_CASE(ECONNRESET); | |
+ #endif | |
+ | |
+ #ifdef EDEADLK | |
+ ERRNO_CASE(EDEADLK); | |
+ #endif | |
+ | |
+ #ifdef EDESTADDRREQ | |
+ ERRNO_CASE(EDESTADDRREQ); | |
+ #endif | |
+ | |
+ #ifdef EDOM | |
+ ERRNO_CASE(EDOM); | |
+ #endif | |
+ | |
+ #ifdef EDQUOT | |
+ ERRNO_CASE(EDQUOT); | |
+ #endif | |
+ | |
+ #ifdef EEXIST | |
+ ERRNO_CASE(EEXIST); | |
+ #endif | |
+ | |
+ #ifdef EFAULT | |
+ ERRNO_CASE(EFAULT); | |
+ #endif | |
+ | |
+ #ifdef EFBIG | |
+ ERRNO_CASE(EFBIG); | |
+ #endif | |
+ | |
+ #ifdef EHOSTUNREACH | |
+ ERRNO_CASE(EHOSTUNREACH); | |
+ #endif | |
+ | |
+ #ifdef EIDRM | |
+ ERRNO_CASE(EIDRM); | |
+ #endif | |
+ | |
+ #ifdef EILSEQ | |
+ ERRNO_CASE(EILSEQ); | |
+ #endif | |
+ | |
+ #ifdef EINPROGRESS | |
+ ERRNO_CASE(EINPROGRESS); | |
+ #endif | |
+ | |
+ #ifdef EINTR | |
+ ERRNO_CASE(EINTR); | |
+ #endif | |
+ | |
+ #ifdef EINVAL | |
+ ERRNO_CASE(EINVAL); | |
+ #endif | |
+ | |
+ #ifdef EIO | |
+ ERRNO_CASE(EIO); | |
+ #endif | |
+ | |
+ #ifdef EISCONN | |
+ ERRNO_CASE(EISCONN); | |
++>>>>>>> Node.fz changes | |
#endif | |
-#ifdef EISDIR | |
- ERRNO_CASE(EISDIR); | |
-#endif | |
+# if NODE_FIPS_MODE | |
+// used by crypto module | |
+bool enable_fips_crypto = false; | |
+bool force_fips_crypto = false; | |
+# endif // NODE_FIPS_MODE | |
+std::string openssl_config; // NOLINT(runtime/string) | |
+#endif // HAVE_OPENSSL | |
-#ifdef ELOOP | |
- ERRNO_CASE(ELOOP); | |
-#endif | |
+// true if process warnings should be suppressed | |
+bool no_process_warnings = false; | |
+bool trace_warnings = false; | |
-#ifdef EMFILE | |
- ERRNO_CASE(EMFILE); | |
-#endif | |
+// Set in node.cc by ParseArgs when --preserve-symlinks is used. | |
+// Used in node_config.cc to set a constant on process.binding('config') | |
+// that is used by lib/module.js | |
+bool config_preserve_symlinks = false; | |
-#ifdef EMLINK | |
- ERRNO_CASE(EMLINK); | |
-#endif | |
+// Set in node.cc by ParseArgs when --experimental-modules is used. | |
+// Used in node_config.cc to set a constant on process.binding('config') | |
+// that is used by lib/module.js | |
+bool config_experimental_modules = false; | |
-#ifdef EMSGSIZE | |
- ERRNO_CASE(EMSGSIZE); | |
-#endif | |
+// Set in node.cc by ParseArgs when --loader is used. | |
+// Used in node_config.cc to set a constant on process.binding('config') | |
+// that is used by lib/internal/bootstrap_node.js | |
+std::string config_userland_loader; // NOLINT(runtime/string) | |
-#ifdef EMULTIHOP | |
- ERRNO_CASE(EMULTIHOP); | |
-#endif | |
+// Set by ParseArgs when --pending-deprecation or NODE_PENDING_DEPRECATION | |
+// is used. | |
+bool config_pending_deprecation = false; | |
-#ifdef ENAMETOOLONG | |
- ERRNO_CASE(ENAMETOOLONG); | |
-#endif | |
+// Set in node.cc by ParseArgs when --redirect-warnings= is used. | |
+std::string config_warning_file; // NOLINT(runtime/string) | |
-#ifdef ENETDOWN | |
- ERRNO_CASE(ENETDOWN); | |
-#endif | |
+// Set in node.cc by ParseArgs when --expose-internals or --expose_internals is | |
+// used. | |
+// Used in node_config.cc to set a constant on process.binding('config') | |
+// that is used by lib/internal/bootstrap_node.js | |
+bool config_expose_internals = false; | |
-#ifdef ENETRESET | |
- ERRNO_CASE(ENETRESET); | |
-#endif | |
+bool v8_initialized = false; | |
-#ifdef ENETUNREACH | |
- ERRNO_CASE(ENETUNREACH); | |
-#endif | |
+bool linux_at_secure = false; | |
-#ifdef ENFILE | |
- ERRNO_CASE(ENFILE); | |
-#endif | |
+// process-relative uptime base, initialized at start-up | |
+static double prog_start_time; | |
-#ifdef ENOBUFS | |
- ERRNO_CASE(ENOBUFS); | |
-#endif | |
+static Mutex node_isolate_mutex; | |
+static v8::Isolate* node_isolate; | |
+ | |
+node::DebugOptions debug_options; | |
+ | |
+static struct { | |
+#if NODE_USE_V8_PLATFORM | |
+ void Initialize(int thread_pool_size) { | |
+ if (trace_enabled) { | |
+ tracing_agent_.reset(new tracing::Agent()); | |
+ platform_ = new NodePlatform(thread_pool_size, | |
+ tracing_agent_->GetTracingController()); | |
+ V8::InitializePlatform(platform_); | |
+ tracing::TraceEventHelper::SetTracingController( | |
+ tracing_agent_->GetTracingController()); | |
+ } else { | |
+ tracing_agent_.reset(nullptr); | |
+ platform_ = new NodePlatform(thread_pool_size, nullptr); | |
+ V8::InitializePlatform(platform_); | |
+ tracing::TraceEventHelper::SetTracingController( | |
+ new v8::TracingController()); | |
+ } | |
+ } | |
-#ifdef ENODATA | |
- ERRNO_CASE(ENODATA); | |
-#endif | |
+ void Dispose() { | |
+ platform_->Shutdown(); | |
+ delete platform_; | |
+ platform_ = nullptr; | |
+ tracing_agent_.reset(nullptr); | |
+ } | |
-#ifdef ENODEV | |
- ERRNO_CASE(ENODEV); | |
-#endif | |
+ void DrainVMTasks(Isolate* isolate) { | |
+ platform_->DrainBackgroundTasks(isolate); | |
+ } | |
-#ifdef ENOENT | |
- ERRNO_CASE(ENOENT); | |
-#endif | |
+ void CancelVMTasks(Isolate* isolate) { | |
+ platform_->CancelPendingDelayedTasks(isolate); | |
+ } | |
-#ifdef ENOEXEC | |
- ERRNO_CASE(ENOEXEC); | |
-#endif | |
+#if HAVE_INSPECTOR | |
+ bool StartInspector(Environment *env, const char* script_path, | |
+ const node::DebugOptions& options) { | |
+ // Inspector agent can't fail to start, but if it was configured to listen | |
+ // right away on the websocket port and fails to bind/etc, this will return | |
+ // false. | |
+ return env->inspector_agent()->Start(platform_, script_path, options); | |
+ } | |
-#ifdef ENOLINK | |
- ERRNO_CASE(ENOLINK); | |
-#endif | |
+ bool InspectorStarted(Environment *env) { | |
+ return env->inspector_agent()->IsStarted(); | |
+ } | |
+#endif // HAVE_INSPECTOR | |
-#ifdef ENOLCK | |
-# if ENOLINK != ENOLCK | |
- ERRNO_CASE(ENOLCK); | |
-# endif | |
-#endif | |
+ void StartTracingAgent() { | |
+ tracing_agent_->Start(trace_enabled_categories); | |
+ } | |
-#ifdef ENOMEM | |
- ERRNO_CASE(ENOMEM); | |
-#endif | |
+ void StopTracingAgent() { | |
+ tracing_agent_->Stop(); | |
+ } | |
-#ifdef ENOMSG | |
- ERRNO_CASE(ENOMSG); | |
-#endif | |
+ NodePlatform* Platform() { | |
+ return platform_; | |
+ } | |
-#ifdef ENOPROTOOPT | |
- ERRNO_CASE(ENOPROTOOPT); | |
-#endif | |
+ std::unique_ptr<tracing::Agent> tracing_agent_; | |
+ NodePlatform* platform_; | |
+#else // !NODE_USE_V8_PLATFORM | |
+ void Initialize(int thread_pool_size) {} | |
+ void Dispose() {} | |
+ void DrainVMTasks(Isolate* isolate) {} | |
+ void CancelVMTasks(Isolate* isolate) {} | |
+ bool StartInspector(Environment *env, const char* script_path, | |
+ const node::DebugOptions& options) { | |
+ env->ThrowError("Node compiled with NODE_USE_V8_PLATFORM=0"); | |
+ return true; | |
+ } | |
-#ifdef ENOSPC | |
- ERRNO_CASE(ENOSPC); | |
-#endif | |
+ void StartTracingAgent() { | |
+ fprintf(stderr, "Node compiled with NODE_USE_V8_PLATFORM=0, " | |
+ "so event tracing is not available.\n"); | |
+ } | |
+ void StopTracingAgent() {} | |
-#ifdef ENOSR | |
- ERRNO_CASE(ENOSR); | |
-#endif | |
+ NodePlatform* Platform() { | |
+ return nullptr; | |
+ } | |
+#endif // !NODE_USE_V8_PLATFORM | |
-#ifdef ENOSTR | |
- ERRNO_CASE(ENOSTR); | |
-#endif | |
+#if !NODE_USE_V8_PLATFORM || !HAVE_INSPECTOR | |
+ bool InspectorStarted(Environment *env) { | |
+ return false; | |
+ } | |
+#endif // !NODE_USE_V8_PLATFORM || !HAVE_INSPECTOR | |
+} v8_platform; | |
-#ifdef ENOSYS | |
- ERRNO_CASE(ENOSYS); | |
+#ifdef __POSIX__ | |
+static const unsigned kMaxSignal = 32; | |
#endif | |
-#ifdef ENOTCONN | |
- ERRNO_CASE(ENOTCONN); | |
-#endif | |
+static void PrintErrorString(const char* format, ...) { | |
+ va_list ap; | |
+ va_start(ap, format); | |
+#ifdef _WIN32 | |
+ HANDLE stderr_handle = GetStdHandle(STD_ERROR_HANDLE); | |
+ | |
+ // Check if stderr is something other than a tty/console | |
+ if (stderr_handle == INVALID_HANDLE_VALUE || | |
+ stderr_handle == nullptr || | |
+ uv_guess_handle(_fileno(stderr)) != UV_TTY) { | |
+ vfprintf(stderr, format, ap); | |
+ va_end(ap); | |
+ return; | |
+ } | |
-#ifdef ENOTDIR | |
- ERRNO_CASE(ENOTDIR); | |
-#endif | |
+ // Fill in any placeholders | |
+ int n = _vscprintf(format, ap); | |
+ std::vector<char> out(n + 1); | |
+ vsprintf(out.data(), format, ap); | |
-#ifdef ENOTEMPTY | |
-# if ENOTEMPTY != EEXIST | |
- ERRNO_CASE(ENOTEMPTY); | |
-# endif | |
-#endif | |
+ // Get required wide buffer size | |
+ n = MultiByteToWideChar(CP_UTF8, 0, out.data(), -1, nullptr, 0); | |
-#ifdef ENOTSOCK | |
- ERRNO_CASE(ENOTSOCK); | |
-#endif | |
+ std::vector<wchar_t> wbuf(n); | |
+ MultiByteToWideChar(CP_UTF8, 0, out.data(), -1, wbuf.data(), n); | |
-#ifdef ENOTSUP | |
- ERRNO_CASE(ENOTSUP); | |
+ // Don't include the null character in the output | |
+ CHECK_GT(n, 0); | |
+ WriteConsoleW(stderr_handle, wbuf.data(), n - 1, nullptr, nullptr); | |
#else | |
-# ifdef EOPNOTSUPP | |
- ERRNO_CASE(EOPNOTSUPP); | |
-# endif | |
-#endif | |
- | |
-#ifdef ENOTTY | |
- ERRNO_CASE(ENOTTY); | |
-#endif | |
- | |
-#ifdef ENXIO | |
- ERRNO_CASE(ENXIO); | |
-#endif | |
- | |
- | |
-#ifdef EOVERFLOW | |
- ERRNO_CASE(EOVERFLOW); | |
-#endif | |
- | |
-#ifdef EPERM | |
- ERRNO_CASE(EPERM); | |
-#endif | |
- | |
-#ifdef EPIPE | |
- ERRNO_CASE(EPIPE); | |
-#endif | |
- | |
-#ifdef EPROTO | |
- ERRNO_CASE(EPROTO); | |
-#endif | |
- | |
-#ifdef EPROTONOSUPPORT | |
- ERRNO_CASE(EPROTONOSUPPORT); | |
-#endif | |
- | |
-#ifdef EPROTOTYPE | |
- ERRNO_CASE(EPROTOTYPE); | |
-#endif | |
- | |
-#ifdef ERANGE | |
- ERRNO_CASE(ERANGE); | |
-#endif | |
- | |
-#ifdef EROFS | |
- ERRNO_CASE(EROFS); | |
-#endif | |
- | |
-#ifdef ESPIPE | |
- ERRNO_CASE(ESPIPE); | |
-#endif | |
- | |
-#ifdef ESRCH | |
- ERRNO_CASE(ESRCH); | |
-#endif | |
- | |
-#ifdef ESTALE | |
- ERRNO_CASE(ESTALE); | |
-#endif | |
- | |
-#ifdef ETIME | |
- ERRNO_CASE(ETIME); | |
+ vfprintf(stderr, format, ap); | |
#endif | |
+ va_end(ap); | |
+} | |
-#ifdef ETIMEDOUT | |
- ERRNO_CASE(ETIMEDOUT); | |
-#endif | |
-#ifdef ETXTBSY | |
- ERRNO_CASE(ETXTBSY); | |
-#endif | |
+static void CheckImmediate(uv_check_t* handle) { | |
+ Environment* env = Environment::from_immediate_check_handle(handle); | |
+ HandleScope scope(env->isolate()); | |
+ Context::Scope context_scope(env->context()); | |
+ MakeCallback(env->isolate(), | |
+ env->process_object(), | |
+ env->immediate_callback_string(), | |
+ 0, | |
+ nullptr, | |
+ {0, 0}).ToLocalChecked(); | |
+} | |
-#ifdef EXDEV | |
- ERRNO_CASE(EXDEV); | |
-#endif | |
- default: return ""; | |
- } | |
+static void IdleImmediateDummy(uv_idle_t* handle) { | |
+ // Do nothing. Only for maintaining event loop. | |
+ // TODO(bnoordhuis) Maybe make libuv accept nullptr idle callbacks. | |
} | |
const char *signo_string(int signo) { | |
@@@ -976,9 -956,10 +1141,11 @@@ void SetupDomainUse(const FunctionCallb | |
args.GetReturnValue().Set(Uint32Array::New(array_buffer, 0, fields_count)); | |
} | |
+ | |
void RunMicrotasks(const FunctionCallbackInfo<Value>& args) { | |
+ mylog_0("node::RunMicrotasks: Running tasks\n"); | |
args.GetIsolate()->RunMicrotasks(); | |
+ mylog_0("node::RunMicrotasks: Done running tasks\n"); | |
} | |
@@@ -1636,17 -1477,17 +1803,19 @@@ static Local<Value> ExecuteString(Envir | |
// we will handle exceptions ourself. | |
try_catch.SetVerbose(false); | |
- Local<v8::Script> script = v8::Script::Compile(source, filename); | |
+ ScriptOrigin origin(filename); | |
+ MaybeLocal<v8::Script> script = | |
+ v8::Script::Compile(env->context(), source, &origin); | |
if (script.IsEmpty()) { | |
ReportException(env, try_catch); | |
+ /* TODO Need to uv_mark_exit_end() ? */ | |
exit(3); | |
} | |
- Local<Value> result = script->Run(); | |
+ Local<Value> result = script.ToLocalChecked()->Run(); | |
if (result.IsEmpty()) { | |
ReportException(env, try_catch); | |
+ /* TODO Need to uv_mark_exit_end() ? */ | |
exit(4); | |
} | |
@@@ -2143,33 -1930,46 +2312,73 @@@ static void InitGroups(const FunctionCa | |
} | |
} | |
-#endif // __POSIX__ && !defined(__ANDROID__) | |
+#endif // __POSIX__ && !defined(__ANDROID__) && !defined(__CloudABI__) | |
+ | |
+ | |
+static void WaitForInspectorDisconnect(Environment* env) { | |
+#if HAVE_INSPECTOR | |
+ if (env->inspector_agent()->IsConnected()) { | |
+ // Restore signal dispositions, the app is done and is no longer | |
+ // capable of handling signals. | |
+#if defined(__POSIX__) && !defined(NODE_SHARED_MODE) | |
+ struct sigaction act; | |
+ memset(&act, 0, sizeof(act)); | |
+ for (unsigned nr = 1; nr < kMaxSignal; nr += 1) { | |
+ if (nr == SIGKILL || nr == SIGSTOP || nr == SIGPROF) | |
+ continue; | |
+ act.sa_handler = (nr == SIGPIPE) ? SIG_IGN : SIG_DFL; | |
+ CHECK_EQ(0, sigaction(nr, &act, nullptr)); | |
+ } | |
+#endif | |
+ env->inspector_agent()->WaitForDisconnect(); | |
+ } | |
+#endif | |
+} | |
++<<<<<<< HEAD | |
+static void Exit(const FunctionCallbackInfo<Value>& args) { | |
+ WaitForInspectorDisconnect(Environment::GetCurrent(args)); | |
+ exit(args[0]->Int32Value()); | |
++======= | |
+ /* JSEmitExit and EmitExit both come here. | |
+ From node.js, we may call process.emitExit without actually setting process.exitCode. | |
+ However, EmitExit is an external function, so presumably sometimes the caller | |
+ has set process.exitCode. */ | |
+ static int _EmitExit(Environment *env, int code) { | |
+ // process.emit('exit') | |
+ HandleScope handle_scope(env->isolate()); | |
+ Context::Scope context_scope(env->context()); | |
+ Local<Object> process_object = env->process_object(); | |
+ process_object->Set(env->exiting_string(), True(env->isolate())); | |
+ | |
+ Local<Value> args[] = { | |
+ env->exit_string(), | |
+ Integer::New(env->isolate(), code) | |
+ }; | |
+ | |
+ mylog_1("_EmitExit: process.emit('exit', %i)\n", code); | |
+ uv_mark_exit_begin(); | |
+ MakeCallback(env, process_object, "emit", ARRAY_SIZE(args), args); | |
+ | |
+ // Reload exit code, it may be changed by `emit('exit')` | |
+ Local<String> exitCode = env->exit_code_string(); | |
+ return process_object->Get(exitCode)->Int32Value(); | |
+ } | |
+ | |
+ static void JSEmitExit(const FunctionCallbackInfo<Value>& args){ | |
+ _EmitExit(Environment::GetCurrent(args), args[0]->Int32Value()); | |
+ return; | |
+ } | |
+ | |
+ static void _Exit (int code) { | |
+ uv_mark_exit_end(); | |
+ exit(code); | |
+ } | |
+ | |
+ void Exit(const FunctionCallbackInfo<Value>& args) { | |
+ _Exit(args[0]->Int32Value()); | |
++>>>>>>> Node.fz changes | |
} | |
@@@ -2230,55 -2029,48 +2439,71 @@@ static void Kill(const FunctionCallback | |
// Hrtime exposes libuv's uv_hrtime() high-resolution timer. | |
// The value returned by uv_hrtime() is a 64-bit int representing nanoseconds, | |
-// so this function instead returns an Array with 2 entries representing seconds | |
-// and nanoseconds, to avoid any integer overflow possibility. | |
-// Pass in an Array from a previous hrtime() call to instead get a time diff. | |
-void Hrtime(const FunctionCallbackInfo<Value>& args) { | |
- Environment* env = Environment::GetCurrent(args); | |
- | |
+// so this function instead fills in an Uint32Array with 3 entries, | |
+// to avoid any integer overflow possibility. | |
+// The first two entries contain the second part of the value | |
+// broken into the upper/lower 32 bits to be converted back in JS, | |
+// because there is no Uint64Array in JS. | |
+// The third entry contains the remaining nanosecond part of the value. | |
+static void Hrtime(const FunctionCallbackInfo<Value>& args) { | |
uint64_t t = uv_hrtime(); | |
- if (args.Length() > 0) { | |
- // return a time diff tuple | |
- if (!args[0]->IsArray()) { | |
- return env->ThrowTypeError( | |
- "process.hrtime() only accepts an Array tuple."); | |
- } | |
- Local<Array> inArray = Local<Array>::Cast(args[0]); | |
- uint64_t seconds = inArray->Get(0)->Uint32Value(); | |
- uint64_t nanos = inArray->Get(1)->Uint32Value(); | |
- t -= (seconds * NANOS_PER_SEC) + nanos; | |
+ Local<ArrayBuffer> ab = args[0].As<Uint32Array>()->Buffer(); | |
+ uint32_t* fields = static_cast<uint32_t*>(ab->GetContents().Data()); | |
+ | |
+ fields[0] = (t / NANOS_PER_SEC) >> 32; | |
+ fields[1] = (t / NANOS_PER_SEC) & 0xffffffff; | |
+ fields[2] = t % NANOS_PER_SEC; | |
+} | |
+ | |
+// Microseconds in a second, as a float, used in CPUUsage() below | |
+#define MICROS_PER_SEC 1e6 | |
+ | |
+// CPUUsage use libuv's uv_getrusage() this-process resource usage accessor, | |
+// to access ru_utime (user CPU time used) and ru_stime (system CPU time used), | |
+// which are uv_timeval_t structs (long tv_sec, long tv_usec). | |
+// Returns those values as Float64 microseconds in the elements of the array | |
+// passed to the function. | |
+static void CPUUsage(const FunctionCallbackInfo<Value>& args) { | |
+ uv_rusage_t rusage; | |
+ | |
+ // Call libuv to get the values we'll return. | |
+ int err = uv_getrusage(&rusage); | |
+ if (err) { | |
+ // On error, return the strerror version of the error code. | |
+ Local<String> errmsg = OneByteString(args.GetIsolate(), uv_strerror(err)); | |
+ args.GetReturnValue().Set(errmsg); | |
+ return; | |
} | |
- Local<Array> tuple = Array::New(env->isolate(), 2); | |
- tuple->Set(0, Integer::NewFromUnsigned(env->isolate(), t / NANOS_PER_SEC)); | |
- tuple->Set(1, Integer::NewFromUnsigned(env->isolate(), t % NANOS_PER_SEC)); | |
- args.GetReturnValue().Set(tuple); | |
+ // Get the double array pointer from the Float64Array argument. | |
+ CHECK(args[0]->IsFloat64Array()); | |
+ Local<Float64Array> array = args[0].As<Float64Array>(); | |
+ CHECK_EQ(array->Length(), 2); | |
+ Local<ArrayBuffer> ab = array->Buffer(); | |
+ double* fields = static_cast<double*>(ab->GetContents().Data()); | |
+ | |
+ // Set the Float64Array elements to be user / system values in microseconds. | |
+ fields[0] = MICROS_PER_SEC * rusage.ru_utime.tv_sec + rusage.ru_utime.tv_usec; | |
+ fields[1] = MICROS_PER_SEC * rusage.ru_stime.tv_sec + rusage.ru_stime.tv_usec; | |
} | |
+ // HrtimeWall returns the current time of day ("wall clock") to us granularity. | |
+ // This function instead returns an Array with 2 entries representing seconds | |
+ // and microseconds, to avoid any integer overflow possibility. | |
+ // JD: Clearly this is wildly unportable, which is why Hrtime relies on libuv. | |
+ void HrtimeWall(const FunctionCallbackInfo<Value>& args) { | |
+ Environment* env = Environment::GetCurrent(args); | |
+ | |
+ struct timeval tv; | |
+ assert(!gettimeofday(&tv, NULL)); | |
+ | |
+ Local<Array> tuple = Array::New(env->isolate(), 2); | |
+ tuple->Set(0, Integer::NewFromUnsigned(env->isolate(), (uint64_t) tv.tv_sec)); | |
+ tuple->Set(1, Integer::NewFromUnsigned(env->isolate(), (uint64_t) tv.tv_usec)); | |
+ args.GetReturnValue().Set(tuple); | |
+ } | |
+ | |
extern "C" void node_module_register(void* m) { | |
struct node_module* mp = reinterpret_cast<struct node_module*>(m); | |
@@@ -2500,37 -2231,35 +2725,59 @@@ void FatalException(Isolate* isolate | |
// failed before the process._fatalException function was added! | |
// this is probably pretty bad. Nothing to do but report and exit. | |
ReportException(env, error, message); | |
++<<<<<<< HEAD | |
+ exit_code = 6; | |
++======= | |
+ uv_mark_exit_end(); | |
+ exit(6); | |
++>>>>>>> Node.fz changes | |
} | |
- TryCatch fatal_try_catch; | |
+ if (exit_code == 0) { | |
+ TryCatch fatal_try_catch(isolate); | |
- // Do not call FatalException when _fatalException handler throws | |
- fatal_try_catch.SetVerbose(false); | |
+ // Do not call FatalException when _fatalException handler throws | |
+ fatal_try_catch.SetVerbose(false); | |
- // this will return true if the JS layer handled it, false otherwise | |
- Local<Value> caught = | |
- fatal_exception_function->Call(process_object, 1, &error); | |
+ // this will return true if the JS layer handled it, false otherwise | |
+ Local<Value> caught = | |
+ fatal_exception_function->Call(process_object, 1, &error); | |
++<<<<<<< HEAD | |
+ if (fatal_try_catch.HasCaught()) { | |
+ // the fatal exception function threw, so we must exit | |
+ ReportException(env, fatal_try_catch); | |
+ exit_code = 7; | |
+ } | |
+ | |
+ if (exit_code == 0 && false == caught->BooleanValue()) { | |
+ ReportException(env, error, message); | |
+ exit_code = 1; | |
++======= | |
+ if (fatal_try_catch.HasCaught()) { | |
+ // the fatal exception function threw, so we must exit | |
+ ReportException(env, fatal_try_catch); | |
+ uv_mark_exit_end(); | |
+ exit(7); | |
+ } | |
+ | |
+ if (false == caught->BooleanValue()) { | |
+ ReportException(env, error, message); | |
+ if (abort_on_uncaught_exception) { | |
+ ABORT(); | |
+ } else { | |
+ uv_mark_exit_end(); | |
+ exit(1); | |
++>>>>>>> Node.fz changes | |
} | |
} | |
+ | |
+ if (exit_code) { | |
+#if HAVE_INSPECTOR | |
+ env->inspector_agent()->FatalException(error, message); | |
+#endif | |
+ exit(exit_code); | |
+ } | |
} | |
@@@ -3446,9 -2971,8 +3694,10 @@@ void SetupProcessObject(Environment* en | |
env->SetMethod(process, "_debugEnd", DebugEnd); | |
env->SetMethod(process, "hrtime", Hrtime); | |
+ env->SetMethod(process, "hrtimeWall", HrtimeWall); | |
+ env->SetMethod(process, "cpuUsage", CPUUsage); | |
+ | |
env->SetMethod(process, "dlopen", DLOpen); | |
env->SetMethod(process, "uptime", Uptime); | |
@@@ -3520,12 -3048,19 +3769,13 @@@ void LoadEnvironment(Environment* env) | |
Local<Value> f_value = ExecuteString(env, MainSource(env), script_name); | |
if (try_catch.HasCaught()) { | |
ReportException(env, try_catch); | |
+ /* TODO Need to uv_mark_exit_end() ? */ | |
exit(10); | |
} | |
+ // The bootstrap_node.js file returns a function 'f' | |
CHECK(f_value->IsFunction()); | |
- Local<Function> f = Local<Function>::Cast(f_value); | |
- | |
- // Now we call 'f' with the 'process' variable that we've built up with | |
- // all our bindings. Inside node.js we'll take care of assigning things to | |
- // their places. | |
- // We start the process this way in order to be more modular. Developers | |
- // who do not like how 'src/node.js' setups the module system but do like | |
- // Node's I/O bindings may want to replace 'f' with their own function. | |
+ Local<Function> f = Local<Function>::Cast(f_value); | |
// Add a reference to the global object | |
Local<Object> global = env->context()->Global(); | |
@@@ -3552,29 -3087,45 +3802,60 @@@ | |
env->SetMethod(env->process_object(), "_rawDebug", RawDebug); | |
++<<<<<<< HEAD | |
+ // Expose the global object as a property on itself | |
+ // (Allows you to set stuff on `global` from anywhere in JavaScript.) | |
+ global->Set(FIXED_ONE_BYTE_STRING(env->isolate(), "global"), global); | |
++======= | |
+ Local<Value> arg = env->process_object(); | |
+ mylog_0("node::LoadEnvironment: f->Call\n"); | |
+ f->Call(global, 1, &arg); | |
+ mylog_0("node::LoadEnvironment: Done with f\n"); | |
+ } | |
+ | |
+ static void PrintHelp(); | |
+ | |
+ static bool ParseDebugOpt(const char* arg) { | |
+ const char* port = nullptr; | |
+ | |
+ if (!strcmp(arg, "--debug")) { | |
+ use_debug_agent = true; | |
+ } else if (!strncmp(arg, "--debug=", sizeof("--debug=") - 1)) { | |
+ use_debug_agent = true; | |
+ port = arg + sizeof("--debug=") - 1; | |
+ } else if (!strcmp(arg, "--debug-brk")) { | |
+ use_debug_agent = true; | |
+ debug_wait_connect = true; | |
+ } else if (!strncmp(arg, "--debug-brk=", sizeof("--debug-brk=") - 1)) { | |
+ use_debug_agent = true; | |
+ debug_wait_connect = true; | |
+ port = arg + sizeof("--debug-brk=") - 1; | |
+ } else if (!strncmp(arg, "--debug-port=", sizeof("--debug-port=") - 1)) { | |
+ port = arg + sizeof("--debug-port=") - 1; | |
+ } else { | |
+ return false; | |
+ } | |
++>>>>>>> Node.fz changes | |
- if (port != nullptr) { | |
- debug_port = atoi(port); | |
- if (debug_port < 1024 || debug_port > 65535) { | |
- fprintf(stderr, "Debug port must be in range 1024 to 65535.\n"); | |
- PrintHelp(); | |
- exit(12); | |
- } | |
- } | |
+ // Now we call 'f' with the 'process' variable that we've built up with | |
+ // all our bindings. Inside bootstrap_node.js and internal/process we'll | |
+ // take care of assigning things to their places. | |
+ | |
+ // We start the process this way in order to be more modular. Developers | |
+ // who do not like how bootstrap_node.js sets up the module system but do | |
+ // like Node's I/O bindings may want to replace 'f' with their own function. | |
+ Local<Value> arg = env->process_object(); | |
- return true; | |
+ auto ret = f->Call(env->context(), Null(env->isolate()), 1, &arg); | |
+ // If there was an error during bootstrap then it was either handled by the | |
+ // FatalException handler or it's unrecoverable (e.g. max call stack | |
+ // exceeded). Either way, clear the stack so that the AsyncCallbackScope | |
+ // destructor doesn't fail on the id check. | |
+ // There are only two ways to have a stack size > 1: 1) the user manually | |
+ // called MakeCallback or 2) user awaited during bootstrap, which triggered | |
+ // _tickCallback(). | |
+ if (ret.IsEmpty()) | |
+ env->async_hooks()->clear_async_id_stack(); | |
} | |
static void PrintHelp() { | |
@@@ -4458,35 -3827,7 +4739,38 @@@ int EmitExit(Environment *env) | |
Local<String> exitCode = env->exit_code_string(); | |
int code = process_object->Get(exitCode)->Int32Value(); | |
++<<<<<<< HEAD | |
+ Local<Value> args[] = { | |
+ env->exit_string(), | |
+ Integer::New(env->isolate(), code) | |
+ }; | |
+ | |
+ MakeCallback(env->isolate(), | |
+ process_object, "emit", arraysize(args), args, | |
+ {0, 0}).ToLocalChecked(); | |
+ | |
+ // Reload exit code, it may be changed by `emit('exit')` | |
+ return process_object->Get(exitCode)->Int32Value(); | |
++======= | |
+ return _EmitExit(env, code); | |
++>>>>>>> Node.fz changes | |
+} | |
+ | |
+ | |
+IsolateData* CreateIsolateData(Isolate* isolate, uv_loop_t* loop) { | |
+ return new IsolateData(isolate, loop, nullptr); | |
+} | |
+ | |
+IsolateData* CreateIsolateData( | |
+ Isolate* isolate, | |
+ uv_loop_t* loop, | |
+ MultiIsolatePlatform* platform) { | |
+ return new IsolateData(isolate, loop, platform); | |
+} | |
+ | |
+ | |
+void FreeIsolateData(IsolateData* isolate_data) { | |
+ delete isolate_data; | |
} | |
@@@ -4527,95 -3881,82 +4811,102 @@@ Local<Context> NewContext(Isolate* isol | |
} | |
-Environment* CreateEnvironment(Isolate* isolate, | |
- uv_loop_t* loop, | |
- Local<Context> context, | |
- int argc, | |
- const char* const* argv, | |
- int exec_argc, | |
- const char* const* exec_argv) { | |
+inline int Start(Isolate* isolate, IsolateData* isolate_data, | |
+ int argc, const char* const* argv, | |
+ int exec_argc, const char* const* exec_argv) { | |
HandleScope handle_scope(isolate); | |
- | |
+ Local<Context> context = NewContext(isolate); | |
Context::Scope context_scope(context); | |
- Environment* env = Environment::New(context, loop); | |
+ Environment env(isolate_data, context); | |
+ CHECK_EQ(0, uv_key_create(&thread_local_env)); | |
+ uv_key_set(&thread_local_env, &env); | |
+ env.Start(argc, argv, exec_argc, exec_argv, v8_is_profiling); | |
- isolate->SetAutorunMicrotasks(false); | |
+ const char* path = argc > 1 ? argv[1] : nullptr; | |
+ StartInspector(&env, path, debug_options); | |
- uv_check_init(env->event_loop(), env->immediate_check_handle()); | |
- uv_unref( | |
- reinterpret_cast<uv_handle_t*>(env->immediate_check_handle())); | |
- | |
- uv_idle_init(env->event_loop(), env->immediate_idle_handle()); | |
- | |
- // Inform V8's CPU profiler when we're idle. The profiler is sampling-based | |
- // but not all samples are created equal; mark the wall clock time spent in | |
- // epoll_wait() and friends so profiling tools can filter it out. The samples | |
- // still end up in v8.log but with state=IDLE rather than state=EXTERNAL. | |
- // TODO(bnoordhuis) Depends on a libuv implementation detail that we should | |
- // probably fortify in the API contract, namely that the last started prepare | |
- // or check watcher runs first. It's not 100% foolproof; if an add-on starts | |
- // a prepare or check watcher after us, any samples attributed to its callback | |
- // will be recorded with state=IDLE. | |
- uv_prepare_init(env->event_loop(), env->idle_prepare_handle()); | |
- uv_check_init(env->event_loop(), env->idle_check_handle()); | |
- uv_unref(reinterpret_cast<uv_handle_t*>(env->idle_prepare_handle())); | |
- uv_unref(reinterpret_cast<uv_handle_t*>(env->idle_check_handle())); | |
- | |
- // Register handle cleanups | |
- env->RegisterHandleCleanup( | |
- reinterpret_cast<uv_handle_t*>(env->immediate_check_handle()), | |
- HandleCleanup, | |
- nullptr); | |
- env->RegisterHandleCleanup( | |
- reinterpret_cast<uv_handle_t*>(env->immediate_idle_handle()), | |
- HandleCleanup, | |
- nullptr); | |
- env->RegisterHandleCleanup( | |
- reinterpret_cast<uv_handle_t*>(env->idle_prepare_handle()), | |
- HandleCleanup, | |
- nullptr); | |
- env->RegisterHandleCleanup( | |
- reinterpret_cast<uv_handle_t*>(env->idle_check_handle()), | |
- HandleCleanup, | |
- nullptr); | |
+ if (debug_options.inspector_enabled() && !v8_platform.InspectorStarted(&env)) | |
+ return 12; // Signal internal error. | |
- if (v8_is_profiling) { | |
- StartProfilerIdleNotifier(env); | |
+ env.set_abort_on_uncaught_exception(abort_on_uncaught_exception); | |
+ | |
+ if (no_force_async_hooks_checks) { | |
+ env.async_hooks()->no_force_checks(); | |
} | |
- Local<FunctionTemplate> process_template = FunctionTemplate::New(isolate); | |
- process_template->SetClassName(FIXED_ONE_BYTE_STRING(isolate, "process")); | |
+ { | |
+ Environment::AsyncCallbackScope callback_scope(&env); | |
+ env.async_hooks()->push_async_ids(1, 0); | |
+ LoadEnvironment(&env); | |
+ env.async_hooks()->pop_async_id(1); | |
+ } | |
- Local<Object> process_object = process_template->GetFunction()->NewInstance(); | |
- env->set_process_object(process_object); | |
+ env.set_trace_sync_io(trace_sync_io); | |
- SetupProcessObject(env, argc, argv, exec_argc, exec_argv); | |
- LoadAsyncWrapperInfo(env); | |
+ { | |
+ SealHandleScope seal(isolate); | |
+ bool more; | |
+ PERFORMANCE_MARK(&env, LOOP_START); | |
+ do { | |
+ uv_run(env.event_loop(), UV_RUN_DEFAULT); | |
- return env; | |
+ v8_platform.DrainVMTasks(isolate); | |
+ | |
+ more = uv_loop_alive(env.event_loop()); | |
+ if (more) | |
+ continue; | |
+ | |
+ EmitBeforeExit(&env); | |
+ | |
+ // Emit `beforeExit` if the loop became alive either after emitting | |
+ // event, or after running some callbacks. | |
+ more = uv_loop_alive(env.event_loop()); | |
+ } while (more == true); | |
+ PERFORMANCE_MARK(&env, LOOP_EXIT); | |
+ } | |
+ | |
+ env.set_trace_sync_io(false); | |
+ | |
++<<<<<<< HEAD | |
+ const int exit_code = EmitExit(&env); | |
+ RunAtExit(&env); | |
+ uv_key_delete(&thread_local_env); | |
+ | |
+ v8_platform.DrainVMTasks(isolate); | |
+ v8_platform.CancelVMTasks(isolate); | |
+ WaitForInspectorDisconnect(&env); | |
+#if defined(LEAK_SANITIZER) | |
+ __lsan_do_leak_check(); | |
+#endif | |
+ | |
+ return exit_code; | |
} | |
+inline int Start(uv_loop_t* event_loop, | |
+ int argc, const char* const* argv, | |
+ int exec_argc, const char* const* exec_argv) { | |
++======= | |
+ // Entry point for new node instances, also called directly for the main | |
+ // node instance. | |
+ static void StartNodeInstance(void* arg) { | |
+ NodeInstanceData* instance_data = static_cast<NodeInstanceData*>(arg); | |
++>>>>>>> Node.fz changes | |
Isolate::CreateParams params; | |
- ArrayBufferAllocator* array_buffer_allocator = new ArrayBufferAllocator(); | |
- params.array_buffer_allocator = array_buffer_allocator; | |
- Isolate* isolate = Isolate::New(params); | |
+ ArrayBufferAllocator allocator; | |
+ params.array_buffer_allocator = &allocator; | |
+#ifdef NODE_ENABLE_VTUNE_PROFILING | |
+ params.code_event_handler = vTune::GetVtuneCodeEventHandler(); | |
+#endif | |
+ | |
+ Isolate* const isolate = Isolate::New(params); | |
+ if (isolate == nullptr) | |
+ return 12; // Signal internal error. | |
+ | |
+ isolate->AddMessageListener(OnMessage); | |
+ isolate->SetAbortOnUncaughtExceptionCallback(ShouldAbortOnUncaughtException); | |
+ isolate->SetAutorunMicrotasks(false); | |
+ isolate->SetFatalErrorHandler(OnFatalError); | |
+ | |
if (track_heap_objects) { | |
isolate->GetHeapProfiler()->StartTrackingHeapObjects(true); | |
} | |
@@@ -4631,29 -3969,91 +4922,104 @@@ | |
Locker locker(isolate); | |
Isolate::Scope isolate_scope(isolate); | |
HandleScope handle_scope(isolate); | |
++<<<<<<< HEAD | |
+ IsolateData isolate_data( | |
+ isolate, | |
+ event_loop, | |
+ v8_platform.Platform(), | |
+ allocator.zero_fill_field()); | |
+ exit_code = Start(isolate, &isolate_data, argc, argv, exec_argc, exec_argv); | |
+ } | |
++======= | |
+ Local<Context> context = Context::New(isolate); | |
+ Environment* env = CreateEnvironment(isolate, context, instance_data); | |
+ array_buffer_allocator->set_env(env); | |
+ Context::Scope context_scope(context); | |
+ if (instance_data->is_main()) | |
+ env->set_using_abort_on_uncaught_exc(abort_on_uncaught_exception); | |
+ // Start debug agent when argv has --debug | |
+ if (instance_data->use_debug_agent()) | |
+ StartDebug(env, debug_wait_connect); | |
+ | |
+ mylog_0("node::StartNodeInstance: Loading the environment\n"); | |
+ LoadEnvironment(env); | |
+ mylog_0("node::StartNodeInstance: Done loading the environment\n"); | |
+ | |
+ env->set_trace_sync_io(trace_sync_io); | |
+ | |
+ // Enable debugger | |
+ if (instance_data->use_debug_agent()) | |
+ EnableDebug(env); | |
+ | |
+ { | |
+ SealHandleScope seal(isolate); | |
+ bool more; | |
+ | |
+ uv_mark_init_stack_end(); | |
+ mylog_0("node::StartNodeInstance: Initial stack is definitely over now\n"); | |
+ do { | |
+ mylog_0("node::StartNodeInstance: beginning of loop\n"); | |
+ | |
+ mylog_0("node::StartNodeInstance: pumping message loop\n"); | |
+ v8::platform::PumpMessageLoop(default_platform, isolate); | |
+ mylog_0("node::StartNodeInstance: done pumping message loop\n"); | |
+ | |
+ mylog_0("node::StartNodeInstance: uv_run\n"); | |
+ uv_mark_main_uv_run_begin(); | |
+ more = uv_run(env->event_loop(), UV_RUN_ONCE); | |
+ uv_mark_main_uv_run_end(); | |
+ mylog_1("node::StartNodeInstance: uv_run done (more %i)\n", more ? 1 : 0); | |
+ | |
+ if (more == false) { | |
+ v8::platform::PumpMessageLoop(default_platform, isolate); | |
+ EmitBeforeExit(env); | |
+ | |
+ // Emit `beforeExit` if the loop became alive either after emitting | |
+ // event, or after running some callbacks. | |
+ more = uv_loop_alive(env->event_loop()); | |
+ mylog_0("node::StartNodeInstance: uv_run after EmitBeforeExit\n"); | |
+ uv_mark_main_uv_run_begin(); | |
+ if (uv_run(env->event_loop(), UV_RUN_NOWAIT) != 0) | |
+ more = true; | |
+ uv_mark_main_uv_run_end(); | |
+ mylog_1("node::StartNodeInstance: uv_run after EmitBeforeExit done (more %i)\n", more ? 1 : 0); | |
+ } | |
+ mylog_0("node::StartNodeInstance: end of loop\n"); | |
+ } while (more == true); | |
+ } | |
+ | |
+ env->set_trace_sync_io(false); | |
+ | |
+ int exit_code = EmitExit(env); | |
+ if (instance_data->is_main()) | |
+ instance_data->set_exit_code(exit_code); | |
+ RunAtExit(env); | |
+ | |
+ #if defined(LEAK_SANITIZER) | |
+ __lsan_do_leak_check(); | |
+ #endif | |
++>>>>>>> Node.fz changes | |
- array_buffer_allocator->set_env(nullptr); | |
- env->Dispose(); | |
- env = nullptr; | |
+ { | |
+ Mutex::ScopedLock scoped_lock(node_isolate_mutex); | |
+ CHECK_EQ(node_isolate, isolate); | |
+ node_isolate = nullptr; | |
} | |
- CHECK_NE(isolate, nullptr); | |
isolate->Dispose(); | |
- isolate = nullptr; | |
- delete array_buffer_allocator; | |
- if (instance_data->is_main()) | |
- node_isolate = nullptr; | |
+ | |
+ return exit_code; | |
} | |
int Start(int argc, char** argv) { | |
++<<<<<<< HEAD | |
+ atexit([] () { uv_tty_reset_mode(); }); | |
++======= | |
+ mylog_0("node::Start: Initial stack code will run soon\n"); | |
+ uv_mark_init_stack_begin(); | |
++>>>>>>> Node.fz changes | |
PlatformInit(); | |
+ node::performance::performance_node_start = PERFORMANCE_NOW(); | |
CHECK_GT(argc, 0); | |
@@@ -4680,33 -4070,32 +5046,55 @@@ | |
// V8 on Windows doesn't have a good source of entropy. Seed it from | |
// OpenSSL's pool. | |
V8::SetEntropySource(crypto::EntropySource); | |
-#endif | |
- | |
+#endif // HAVE_OPENSSL | |
+ | |
++<<<<<<< HEAD | |
+ v8_platform.Initialize(v8_thread_pool_size); | |
+ // Enable tracing when argv has --trace-events-enabled. | |
+ if (trace_enabled) { | |
+ fprintf(stderr, "Warning: Trace event is an experimental feature " | |
+ "and could change at any time.\n"); | |
+ v8_platform.StartTracingAgent(); | |
++======= | |
+ mylog_0("node::Start: Initializing V8\n"); | |
+ const int thread_pool_size = 4; | |
+ default_platform = v8::platform::CreateDefaultPlatform(thread_pool_size); | |
+ V8::InitializePlatform(default_platform); | |
+ V8::Initialize(); | |
+ | |
+ int exit_code = 1; | |
+ { | |
+ NodeInstanceData instance_data(NodeInstanceType::MAIN, | |
+ uv_default_loop(), | |
+ argc, | |
+ const_cast<const char**>(argv), | |
+ exec_argc, | |
+ exec_argv, | |
+ use_debug_agent); | |
+ mylog_0("node::Start: Calling StartNodeInstance\n"); | |
+ StartNodeInstance(&instance_data); | |
+ exit_code = instance_data.exit_code(); | |
+ mylog_1("node::Start: node instance exited with %i\n", exit_code); | |
++>>>>>>> Node.fz changes | |
} | |
+ V8::Initialize(); | |
+ node::performance::performance_v8_start = PERFORMANCE_NOW(); | |
+ v8_initialized = true; | |
+ const int exit_code = | |
+ Start(uv_default_loop(), argc, argv, exec_argc, exec_argv); | |
+ if (trace_enabled) { | |
+ v8_platform.StopTracingAgent(); | |
+ } | |
+ v8_initialized = false; | |
V8::Dispose(); | |
- delete default_platform; | |
- default_platform = nullptr; | |
+ // uv_run cannot be called from the time before the beforeExit callback | |
+ // runs until the program exits unless the event loop has any referenced | |
+ // handles after beforeExit terminates. This prevents unrefed timers | |
+ // that happen to terminate during shutdown from being run unsafely. | |
+ // Since uv_run cannot be called, uv_async handles held by the platform | |
+ // will never be fully cleaned up. | |
+ v8_platform.Dispose(); | |
delete[] exec_argv; | |
exec_argv = nullptr; | |
* Unmerged path src/node.js |