Node.js is a platform built on Chrome's JavaScript runtime for easily building fast and scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
Following are some of the important features that make Node.js the first choice of software architects.
Asynchronous and Event Driven − All APIs of the Node.js library are asynchronous, that is, non-blocking. It essentially means a Node.js based server never waits for an API to return data; it moves on to the next API after calling it, and an event-based notification mechanism delivers the response from the previous API call.
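To see the non-blocking style in action, here is a minimal sketch using the built-in fs module (the file name input.txt is just an illustrative placeholder) −

var fs = require('fs');

// Start reading the file; Node.js does not wait for the read to finish
fs.readFile('input.txt', 'utf8', function (err, data) {
    if (err) return console.error(err);
    console.log('File contents: ' + data);
});

// This line runs before the callback above fires
console.log('Read initiated, moving on...');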
Very Fast − Being built on Google Chrome's V8 JavaScript Engine, Node.js library is very fast in code execution.
Single Threaded but Highly Scalable − Node.js uses a single-threaded model with event looping. The event mechanism lets the server respond in a non-blocking way and makes it highly scalable, as opposed to traditional servers that create a limited number of threads to handle requests. A single-threaded Node.js program can serve a much larger number of requests than traditional servers like Apache HTTP Server.
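As a sketch of this model, the following single-threaded HTTP server serves every request from one thread through the event loop (port 8080 is an arbitrary choice) −

var http = require('http');

// One thread and one event loop handle all incoming requests
http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from a single-threaded server\n');
}).listen(8080);

console.log('Server listening on port 8080');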
No Buffering − Node.js applications never buffer any data. These applications simply output the data in chunks.
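For example, a file can be forwarded to a consumer chunk by chunk with the built-in stream API (again, input.txt is an illustrative placeholder) −

var fs = require('fs');

// Read the file as a stream of chunks instead of buffering it whole
var readable = fs.createReadStream('input.txt');

readable.on('data', function (chunk) {
    console.log('Received ' + chunk.length + ' bytes');
});

readable.on('end', function () {
    console.log('Done.');
});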
License − Node.js is released under the MIT license.
The Node.js wiki on GitHub contains an exhaustive list of projects, applications, and companies that use Node.js. This list includes eBay, General Electric, GoDaddy, Microsoft, PayPal, Uber, Wikipins, Yahoo!, and Yammer, to name a few.
Following are the areas where Node.js is proving itself as a perfect technology partner.
I/O bound Applications
Data Streaming Applications
Data Intensive Real-time Applications (DIRT)
JSON API-based Applications
Single Page Applications
It is not advisable to use Node.js for CPU intensive applications.
Node Package Manager (NPM) provides two main functionalities −
Online repositories for Node.js packages/modules, which are searchable at npmjs.com
Command line utility to install Node.js packages and to manage versions and dependencies of Node.js packages.
NPM comes bundled with the Node.js installer since v0.6.3. To verify, open a console, type the following command, and check the result −
$ npm --version
2.7.1
Use the following syntax to install any Node.js module −
$ npm install <Module Name>
For example, to install the well-known web framework module express −
$ npm install express
Now you can use this module in your js file as follows −
var express = require('express');
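As a minimal sketch (assuming express was installed locally as shown above), the required module can power a simple web server −

var express = require('express');
var app = express();

// Respond to GET requests on the root path
app.get('/', function (req, res) {
    res.send('Hello World');
});

app.listen(3000, function () {
    console.log('Express app listening on port 3000');
});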
By default, NPM installs any dependency in the local mode. Here, local mode refers to the package installation in the node_modules directory located in the folder where the Node application is present. Locally deployed packages are accessible via the require() method. For example, when we installed the express module, it created a node_modules directory in the current directory and installed the express module there.
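You can list all locally installed modules by running the following command from the application folder −
$ npm ls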
Globally installed packages/dependencies are stored in a system directory. Such dependencies provide command-line (CLI) functionality, but they cannot be imported with require() in a Node application directly. Now let's try installing the express module using global installation.
$ npm install express -g
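Globally installed modules can be listed in the same way −
$ npm ls -g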
package.json is present in the root directory of any Node application/module and is used to define the properties of a package. Example:
{
  "name": "express-template-file",
  "version": "0.0.0",
  "description": "{dna:micro}",
  "main": "app.js",
  "scripts": {
    "start": "node ./bin/www",
    "dev": "NODE_ENV=development node ./bin/www",
    "prod": "NODE_ENV=production node ./bin/www",
    "pm2_prod": "NODE_ENV=production pm2 start ./bin/www.js --name=expressApp",
    "pm2_dev": "NODE_ENV=development pm2 start ./bin/www.js --name=expressApp"
  },
  "repository": {
    "type": "git",
    "url": "git@git.grumla.com:web-team/express_template.git"
  },
  "author": "{dna:micro}",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.12.4",
    "compression": "^1.6.2",
    "cookie-parser": "^1.3.5",
    "debug": "^2.2.0",
    "express": "^4.12.4",
    "jade": "^1.10.0",
    "moment": "^2.13.0",
    "morgan": "^1.5.3",
    "nconf": "^0.7.1",
    "node-zoho": "0.0.18",
    "serve-favicon": "^2.2.1"
  }
}
To add a module to your package.json when installing it from npm, use the --save option −
$ npm install express --save
Following are the attributes of package.json −
name − name of the package
version − version of the package
description − description of the package
homepage − homepage of the package
author − author of the package
contributors − names of the contributors to the package
dependencies − list of dependencies. NPM automatically installs all the dependencies mentioned here in the node_modules folder of the package.
repository − repository type and URL of the package
main − entry point of the package
keywords − keywords
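As a small sketch, a Node application can read its own package.json through require(), since JSON files can be required directly (assuming the script runs from the package root) −

// require() parses package.json into a plain object
var pkg = require('./package.json');

console.log(pkg.name + ' v' + pkg.version);
console.log('Entry point: ' + pkg.main);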
1. Do read about git
Knowing where to look is half the battle. I strongly urge everyone to read (and support) the Pro Git book. The other resources are highly recommended by various people as well.
2. Do commit early and often
Git only takes full responsibility for your data when you commit. If you fail to commit and then do something poorly thought out, you can run into trouble. Additionally, having periodic checkpoints means that you can understand how you broke something.
People resist this out of some sense that it is ugly, limits git-bisection functionality, is confusing to observers, and might lead to accusations of stupidity. Well, I'm here to tell you that resisting this is ignorant. Commit Early And Often. If, after you are done, you want to pretend to the outside world that your work sprang complete from your mind into the repository in utter perfection, with each concept fully thought out and divided into individual concept-commits, well, git supports that: see Sausage Making below. However, don't let tomorrow's beauty stop you from performing continuous commits today.
Personally, I commit early and often and then let the sausage making be seen by all, except in the most formal of circumstances (public projects with large numbers of users or developers, or high developer turnover). For less formal usage, like, say, this document, I let people see what really happened.
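As a sketch of the habit, checkpoint work-in-progress freely; the commit message below is just a placeholder:
$ git add -A
$ git commit -m "WIP: checkpoint before refactoring"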
3. Don't panic
As long as you have committed your work (or in many cases even added it with git add) your work will not be lost for at least two weeks unless you really work at it (run commands that manually purge it).
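For example, seemingly lost commits can usually be found again in the reflog, and even unreferenced objects can be hunted down:
$ git reflog
$ git fsck --lost-found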
4. Do backups
Everyone always recommends backups as a best practice, and I am going to do the same. However, you may already have a highly redundant, distributed, ad-hoc backup system in place, because essentially every clone is a backup. In many cases, you may want to use a clone for git experiments to perfect your method before trying it for real. This is most useful for git filter-branch and similar commands, where your goal is to permanently destroy history; if you mess it up, you may have no recourse. Still, you probably need a more formal system as well.
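A sketch of the clone-as-sandbox idea (the directory names are illustrative):
$ git clone project project-experiment
$ cd project-experiment
Anything you break inside project-experiment leaves the original clone untouched.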
5. Don't change published history
Once you git push (or in theory someone pulls from your repo, but people who pull from a working repo often deserve what they get) your changes to the authoritative upstream repository or otherwise make the commits or tags publicly visible, you should ideally consider those commits etched in diamond for all eternity. If you later find out that you messed up, make new commits that fix the problems (possibly by revert, possibly by patching, etc.).
Yes, of course git allows you to rewrite public history, but it is problematic for everyone and thus it is just not best practice to do so.
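For instance, instead of rewriting pushed history, create a new commit that undoes the bad one (the hash below is a placeholder):
$ git revert abc1234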
6. Do make useful commit messages
Creating insightful and descriptive commit messages is one of the best things you can do for others who use the repository. It lets people quickly understand changes without having to read code. When doing history archeology to answer some question, good commit messages likewise become very important.
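A common convention (a convention, not a git requirement) is a short summary line, a blank line, and then an explanatory body; the message below is purely illustrative:

Fix crash when the config file is missing

The loader assumed config.json always exists. Fall back to
built-in defaults and log a warning instead.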
7. Do keep up to date
This section has some overlap with workflow; exactly how and when you update your branches and repositories is closely tied to your desired workflow. I will also note that not everyone agrees with these ideas (but they should!).