Version 0.6.16 (stable)

2012.04.30 Version 0.6.16 (stable)

  • Upgrade V8 to

  • Upgrade npm to 1.1.18

  • Windows: add mappings for UV_ENOENT (Bert Belder)

  • linux: add IN_MOVE_SELF to inotify event mask (Ben Noordhuis)

  • unix: call pipe handle connection cb on accept() error (Ben Noordhuis)

  • unix: handle EWOULDBLOCK (Ben Noordhuis)

  • map EWOULDBLOCK to UV_EAGAIN (Ben Noordhuis)

  • Map ENOMEM to UV_ENOMEM (isaacs)

  • Child process: support the gid and uid options (Bert Belder)

  • test: cluster: add worker death event test (Ben Noordhuis)

  • typo in node_http_parser (isaacs)

  • http_parser: Eat CRLF between requests, even on connection:close. (Ben Noordhuis)

  • don't check return value of unsetenv (Ben Noordhuis)

Source Code:

Windows Installer:

Windows x64 Files:

Macintosh Installer (Universal):

Other release files:



Posted in release | Leave a comment

Profiling Node.js

It’s incredibly easy to visualize where your Node program spends its time using DTrace and node-stackvis (a Node port of Brendan Gregg’s FlameGraph tool):

  1. Run your Node.js program as usual.
  2. In another terminal, run:
    $ dtrace -n 'profile-97/execname == "node" && arg1/{
        @[jstack(100, 8000)] = count(); } tick-60s { exit(0); }' > stacks.out

    This will sample about 100 times per second for 60 seconds and emit results to stacks.out. Note that this will sample all running programs called “node”. If you want a specific process, replace execname == "node" with pid == 12345 (the process id).

  3. Use the “stackvis” tool to transform this directly into a flame graph. First, install it:
    $ npm install -g stackvis

    then use stackvis to convert the DTrace output to a flamegraph:

    $ stackvis dtrace flamegraph-svg < stacks.out > stacks.svg
  4. Open stacks.svg in your favorite browser.

You’ll be looking at something like this:

This is a visualization of all of the profiled call stacks. This example is from the “hello world” HTTP server on the Node.js home page under load. Start at the bottom, where you have “main”, which is present in most Node stacks because Node spends most on-CPU time in the main thread. Above each row, you have the functions called by the frame beneath it. As you move up, you’ll see actual JavaScript function names. The boxes in each row are not in chronological order, but their width indicates how much time was spent there. When you hover over each box, you can see exactly what percentage of time is spent in each function. This lets you see at a glance where your program spends its time.

That’s the summary. There are a few prerequisites:

  • You must gather data on a system that supports DTrace with the Node.js ustack helper. For now, this pretty much means illumos-based systems like SmartOS, including the Joyent Cloud. MacOS users: OS X supports DTrace, but not ustack helpers. The way to get this changed is to contact your Apple developer liaison (if you’re lucky enough to have one) or file a bug report; I’d suggest referencing existing bugs 5273057 and 11206497. More bugs filed (even if closed as dups) show more interest and make it more likely Apple will choose to fix this.
  • You must be on 32-bit Node.js 0.6.7 or later, built --with-dtrace. The helper doesn’t work with 64-bit Node yet. On illumos (including SmartOS), development releases (the 0.7.x train) include DTrace support by default.

There are a few other notes:

  • You can absolutely profile apps in production, not just development, since compiling with DTrace support has very minimal overhead. You can start and stop profiling without restarting your program.
  • You may want to run the stacks.out output through c++filt to demangle C++ symbols. Be sure to use the c++filt that came with the compiler you used to build Node. For example:
    c++filt < stacks.out > demangled.out

    then you can use demangled.out to create the flamegraph.

  • If you want, you can filter stacks containing a particular function. The best way to do this is to first collapse the original DTrace output, then grep out what you want:
    $ stackvis dtrace collapsed < stacks.out | grep SomeFunction > collapsed.out
    $ stackvis collapsed flamegraph-svg < collapsed.out > stacks.svg
  • If you’ve used Brendan’s FlameGraph tools, you’ll notice the coloring is a little different in the above flamegraph. I ported his tools to Node first so I could incorporate them more easily into other Node programs, but I’ve also been playing with different coloring options. The current default uses hue to denote stack depth and saturation to indicate time spent. (These are also indicated by position and size.) Other ideas include coloring by module (so V8, JavaScript, libc, etc. show up as different colors).

For more on the underlying pieces, see my previous post on Node.js profiling and Brendan’s post on Flame Graphs.

Dave Pacheco blogs at

Posted in Uncategorized | 8 Comments

Version 0.7.8 (unstable)

2012.04.18, Version 0.7.8, (unstable)

  • Upgrade V8 to

  • Upgrade OpenSSL to 1.0.0f

  • Upgrade npm to 1.1.18

  • Show licenses in Binary installers

  • Domains (isaacs)

  • readline: rename "end" to "close" (Nathan Rajlich)

  • tcp: make getsockname() return address family as string (Shigeki Ohtsu)

  • http, https: fix .setTimeout() (ssuda)

  • os: add cross platform EOL character (Mustansir Golawala)

  • typed arrays: unexport SizeOfArrayElementForType() (Aaron Jacobs)

  • net: honor 'enable' flag in .setNoDelay() (Ben Noordhuis)

  • child_process: emit error when .kill fails (Andreas Madsen)

  • gyp: fix 'argument list too long' build error (Ben Noordhuis)

  • fs.WriteStream: Handle modifications to (isaacs)

  • repl, readline: Handle newlines better (Nathan Rajlich, Nathan Friedly)

  • build: target OSX 10.5 when building on darwin (Nathan Rajlich)

  • Fix #3052 Handle errors properly in zlib (isaacs)

  • build: add support for DTrace and postmortem (Dave Pacheco)

  • core: add reusable Slab allocator (Ben Noordhuis)

Source Code:

Windows Installer:

Windows x64 Files:

Macintosh Installer (Universal):

Other release files:



Posted in release | Leave a comment

Version 0.6.15 (stable)

2012.04.09 Version 0.6.15 (stable)

  • Update npm to 1.1.16

  • Show licenses in binary installers.

  • unix: add uv_fs_read64, uv_fs_write64 and uv_fs_ftruncate64 (Ben Noordhuis)

  • add 64bit offset fs functions (Igor Zinkovsky)

  • windows: don't report ENOTSOCK when attempting to bind an udp handle twice (Bert Belder)

  • windows: backport pipe-connect-to-file fixes from master (Bert Belder)

  • windows: never call fs event callbacks after closing the watcher (Bert Belder)

  • fs.readFile: don't make the callback before the fd is closed (Bert Belder)

  • windows: use 64bit offsets for uv_fs apis (Igor Zinkovsky)

  • Fix #2061: segmentation fault on OS X due to stat size mismatch (Ben Noordhuis)

Source Code:

Windows Installer:

Windows x64 Files:

Macintosh Installer (Universal):

Other release files:



Posted in release | 2 Comments

Version 0.7.7 (unstable)

2012.03.30, Version 0.7.7 (unstable)

  • Upgrade V8 to

  • Upgrade npm to 1.1.15

  • Handle Emoji characters properly (Erik Corry, Bert Belder)

  • readline: migrate ansi/vt100 logic from tty to readline (Nathan Rajlich)

  • readline: Fix multiline handling (Alex Kocharin)

  • add a -i/--interactive flag to force the REPL (Nathan Rajlich)

  • debugger: add breakOnException command (Fedor Indutny)

  • cluster: kill workers when master dies (Andreas Madsen)

  • cluster: add graceful disconnect support (Andreas Madsen)

  • child_process: Separate 'close' event from 'exit' (Charlie McConnell)

  • typed arrays: add Uint8ClampedArray (Mikael Bourges-Sevenier)

  • buffer: Fix byte alignment issues (Ben Noordhuis, Erik Lundin)

  • tls: fix CryptoStream.setKeepAlive() (Shigeki Ohtsu)

  • Expose http parse error codes (Felix Geisendörfer)

  • events: don't delete the listeners array (Ben Noordhuis, Nathan Rajlich)

  • process: add process.config to view node's ./configure settings (Nathan Rajlich)

  • process: process.execArgv to see node's arguments (Micheil Smith)

  • process: fix process.title setter (Ben Noordhuis)

  • timers: handle negative or non-numeric timeout values (Ben Noordhuis)

Source Code:

Windows Installer:

Windows x64 Files:

Macintosh Installer (Universal):

Other release files:



Posted in release | 2 Comments

Service logging in JSON with Bunyan

Service logs are gold, if you can mine them. We scan them during occasional debugging. Perhaps we grep them looking for errors or warnings, or set up an occasional Nagios log regex monitor. If that. This is a waste of the best channel for data about a service.

“Log. (Huh) What is it good for. Absolutely …”

  • debugging
  • monitoring (tools that alert operators)
  • non-real-time analysis (business or operational analysis)
  • historical analysis

These are what logs are good for. The current state of logging is barely adequate for the first of these. Doing reliable analysis, and even monitoring, of varied “printf-style” logs is a grueling or hacky task that most either don’t bother with, fall back to paying someone else to do (viz. Splunk’s great successes), or, for web sites, punt and use the plethora of JavaScript-based web analytics tools.

Let’s log in JSON. Let’s format log records with a filter outside the app. Let’s put more info in log records by not shoehorning into a printf-message. Debuggability can be improved. Monitoring and analysis can definitely be improved. Let’s not write another regex-based parser, and use the time we’ve saved writing tools to collate logs from multiple nodes and services, to query structured logs (from all services, not just web servers), etc.

At Joyent we use node.js for running many core services — loosely coupled through HTTP REST APIs and/or AMQP. In this post I’ll draw on experiences from my work on Joyent’s SmartDataCenter product and observations of Joyent Cloud operations to suggest some improvements to service logging. I’ll show the (open source) Bunyan logging library and tool that we’re developing to improve the logging toolchain.

Current State of Log Formatting

# apache access log
- - [15/Oct/2010:11:46:46 -0700] "GET /favicon.ico HTTP/1.1" 404 209
fe80::6233:4bff:fe29:3173 - - [15/Oct/2010:11:46:58 -0700] "GET / HTTP/1.1" 200 44

# apache error log
[Fri Oct 15 11:46:46 2010] [error] [client] File does not exist: /Library/WebServer/Documents/favicon.ico
[Fri Oct 15 11:46:58 2010] [error] [client fe80::6233:4bff:fe29:3173] File does not exist: /Library/WebServer/Documents/favicon.ico

# Mac /var/log/secure.log
Oct 14 09:20:56 banana loginwindow[41]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
Oct 14 12:32:20 banana[25]: UID 501 authenticated as user trentm (UID 501) for right 'system.privilege.admin'

# an internal joyent agent log
[2012-02-07 00:37:11.898] [INFO] AMQPAgent - Publishing success.
[2012-02-07 00:37:11.910] [DEBUG] AMQPAgent - { req_id: '8afb8d99-df8e-4724-8535-3d52adaebf25',
  timestamp: '2012-02-07T00:37:11.898Z',

# typical expressjs log output
[Mon, 21 Nov 2011 20:52:11 GMT] 200 GET /foo (1ms)
Blah, some other unstructured output from a console.log call.

What’re we doing here? Five logs at random. Five different date formats. As Paul Querna points out, we haven’t improved log parsability in 20 years. Parsability is enemy number one. You can’t use your logs until you can parse the records, and faced with the above, the inevitable solution is a one-off regular expression.

The current state of the art is various parsing libs, analysis tools and homebrew scripts ranging from grep to Perl, whose scope is limited to a few niche log formats.

JSON for Logs

JSON.parse() solves all that. Let’s log in JSON. But it means a change in thinking: The first-level audience for log files shouldn’t be a person, but a machine.

That is not said lightly. The “Unix Way” of small focused tools lightly coupled with text output is important. JSON is less “text-y” than, e.g., Apache common log format. JSON makes grep and awk awkward. Using less directly on a log is handy.

But not handy enough. That 80’s pastel jumpsuit awkwardness you’re feeling isn’t the JSON, it’s your tools. Time to find a json tool — json is one; bunyan, described below, is another. Time to learn your JSON library instead of your regex library: JavaScript, Python, Ruby, Java, Perl.

Time to burn your log4j Layout classes and move formatting to the tools side. Creating a log message with semantic information and throwing that away to make a string is silly. The win from being able to trivially parse log records is huge. The possibilities opened up by being able to add ad hoc structured information to individual log records are interesting: think program state metrics, think feeding to Splunk, or loggly, think easy audit logs.
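To sketch what that buys you (hypothetical records; the numeric levels follow Bunyan's convention, where info is 30 and error is 50), filtering a JSON log is just JSON.parse plus a predicate, not a regex:

```javascript
// Hypothetical JSON log records, one per line, as a JSON-logging
// library would emit them.
var lines = [
  '{"name":"hello","level":30,"msg":"hi paul","time":"2012-03-28T17:25:37.050Z"}',
  '{"name":"hello","level":50,"msg":"upstream timeout","time":"2012-03-28T17:25:38.001Z"}'
];

// Parse every record, then keep only error-level and above.
var errors = lines
  .map(function (line) { return JSON.parse(line); })
  .filter(function (rec) { return rec.level >= 50; });

console.log(errors[0].msg);  // "upstream timeout"
```

The same two lines adapt to any ad hoc field you decide to log; no parser changes needed.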

Introducing Bunyan

Bunyan is a node.js module for logging in JSON and a bunyan CLI tool to view those logs.

Logging with Bunyan basically looks like this:

$ cat hi.js
var Logger = require('bunyan');
var log = new Logger({name: 'hello' /*, ... */});
log.info("hi %s", "paul");

And you’ll get a log record like this:

$ node hi.js
{"name":"hello","hostname":"banana.local","pid":40026,"level":30,"msg":"hi paul","time":"2012-03-28T17:25:37.050Z","v":0}

Pipe that through the bunyan tool that is part of the “node-bunyan” install to get more readable output:

$ node hi.js | ./node_modules/.bin/bunyan       # formatted text output
[2012-02-07T18:50:18.003Z]  INFO: hello/40026 on banana.local: hi paul

$ node hi.js | ./node_modules/.bin/bunyan -j    # indented JSON output
{
  "name": "hello",
  "hostname": "banana.local",
  "pid": 40087,
  "level": 30,
  "msg": "hi paul",
  "time": "2012-03-28T17:26:38.431Z",
  "v": 0
}

Bunyan is log4j-like: create a Logger with a name, call log.info(...), etc. However it has no intention of reproducing much of the functionality of log4j. IMO, much of that is overkill for the types of services you’ll tend to be writing with node.js.
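The numeric "level" values in these records are how Bunyan encodes its named levels, so a one-line lookup table recovers the names when you're poking at raw records:

```javascript
// Bunyan writes levels as numbers in each record; the standard mapping:
var LEVEL_NAMES = {
  10: 'trace', 20: 'debug', 30: 'info',
  40: 'warn', 50: 'error', 60: 'fatal'
};

// So the "level": 30 in the record above is an info-level message.
console.log(LEVEL_NAMES[30]);  // "info"
```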

Longer Bunyan Example

Let’s walk through a bigger example to show some interesting things in Bunyan. We’ll create a very small “Hello API” server using the excellent restify library — which we use heavily here at Joyent. (Bunyan doesn’t require restify at all, you can easily use Bunyan with Express or whatever.)

You can follow along in the hello-json-logging repo if you like. Note that I’m using the current HEAD of the bunyan and restify trees here, so details might change a bit. Prerequisite: a node 0.6.x installation.

git clone
cd hello-json-logging

Bunyan Logger

Our server first creates a Bunyan logger:

var Logger = require('bunyan');
var log = new Logger({
  name: 'helloapi',
  streams: [
    {
      stream: process.stdout,
      level: 'debug'
    },
    {
      path: 'hello.log',
      level: 'trace'
    }
  ],
  serializers: {
    req: Logger.stdSerializers.req,
    res: restify.bunyan.serializers.response,
  },
});

Every Bunyan logger must have a name. Unlike log4j, this is not a hierarchical dotted namespace. It is just a name field for the log records.

Every Bunyan logger has one or more streams, to which log records are written. Here we’ve defined two: logging at DEBUG level and above is written to stdout, and logging at TRACE and above is appended to ‘hello.log’.

Bunyan has the concept of serializers: a registry of functions that know how to convert a JavaScript object for a certain log record field to a nice JSON representation for logging. For example, here we register the Logger.stdSerializers.req function to convert HTTP Request objects (using the field name “req”) to JSON. More on serializers later.
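To illustrate the idea with a hypothetical serializer (a "user" field is my invention here, not part of Bunyan's standard set): a serializer is just a function from a rich object to the plain object you want in the JSON record.

```javascript
// Hypothetical "user" serializer: pick the fields worth logging,
// drop secrets and anything that won't serialize cleanly.
function userSerializer(user) {
  return { id: user.id, login: user.login };
}

// It would be registered as:  serializers: { user: userSerializer }
// and triggered by logging:   log.info({user: someUser}, 'logged in')
var rec = userSerializer({ id: 7, login: 'paul', password: 'hunter2' });
console.log(JSON.stringify(rec));  // {"id":7,"login":"paul"}
```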

Restify Server

Restify 1.x and above has bunyan support baked in. You pass in your Bunyan logger like this:

var server = restify.createServer({
  name: 'Hello API',
  log: log   // Pass our logger to restify.
});

Our simple API will have a single GET /hello?name=NAME endpoint:

server.get({path: '/hello', name: 'SayHello'}, function(req, res, next) {
  var caller = req.params.name || 'caller';
  req.log.debug('caller is "%s"', caller);
  res.send({"hello": caller});
  return next();
});

If we run that, node server.js, and call the endpoint, we get the expected restify response:

$ curl -iSs
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version
Access-Control-Expose-Headers: X-Api-Version, X-Request-Id, X-Response-Time
Server: Hello API
X-Request-Id: f6aaf942-c60d-4c72-8ddd-bada459db5e3
Access-Control-Allow-Methods: GET
Connection: close
Content-Length: 16
Content-MD5: Xmn3QcFXaIaKw9RPUARGBA==
Content-Type: application/json
Date: Tue, 07 Feb 2012 19:12:35 GMT
X-Response-Time: 4


Setup Server Logging

Let’s add two things to our server. First, we’ll use server.pre to hook into restify’s request handling before routing, where we’ll log the request.

server.pre(function (request, response, next) {
  log.info({req: request}, 'start');        // (1)
  return next();
});

This is the first time we’ve seen this style with an object as the first argument. Bunyan logging methods (log.trace, log.debug, …) all support an optional first object argument with extra log record fields:

log.info(<object> fields, <string> msg, ...)

Here we pass in the restify Request object, req. The “req” serializer we registered above will come into play here, but bear with me.

Remember that we already had this debug log statement in our endpoint handler:

req.log.debug('caller is "%s"', caller);            // (2)

Second, use the restify server “after” event to log the response:

server.on('after', function (req, res, route) {
  req.log.info({res: res}, "finished");             // (3)
});

Log Output

Now let’s see what log output we get when somebody hits our API’s endpoint:

$ curl -iSs
HTTP/1.1 200 OK
X-Request-Id: 9496dfdd-4ec7-4b59-aae7-3fed57aed5ba


Here is the server log:

[trentm@banana:~/tm/hello-json-logging]$ node server.js
... intro "listening at" log message elided ...
{"name":"helloapi","hostname":"banana.local","pid":40341,"level":30,"req":{"method":"GET","url":"/hello?name=paul","headers":{"user-agent":"curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3","host":"","accept":"*/*"},"remoteAddress":"","remotePort":59831},"msg":"start","time":"2012-03-28T17:37:29.506Z","v":0}
{"name":"helloapi","hostname":"banana.local","pid":40341,"route":"SayHello","req_id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","level":20,"msg":"caller is \"paul\"","time":"2012-03-28T17:37:29.507Z","v":0}
{"name":"helloapi","hostname":"banana.local","pid":40341,"route":"SayHello","req_id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","level":30,"res":{"statusCode":200,"headers":{"access-control-allow-origin":"*","access-control-allow-headers":"Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version","access-control-expose-headers":"X-Api-Version, X-Request-Id, X-Response-Time","server":"Hello API","x-request-id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","access-control-allow-methods":"GET","connection":"close","content-length":16,"content-md5":"Xmn3QcFXaIaKw9RPUARGBA==","content-type":"application/json","date":"Wed, 28 Mar 2012 17:37:29 GMT","x-response-time":3}},"msg":"finished","time":"2012-03-28T17:37:29.510Z","v":0}

Let’s look at each in turn to see what is interesting — pretty-printed with node server.js | ./node_modules/.bin/bunyan -j:

{                                                   // (1)
  "name": "helloapi",
  "hostname": "banana.local",
  "pid": 40442,
  "level": 30,
  "req": {
    "method": "GET",
    "url": "/hello?name=paul",
    "headers": {
      "user-agent": "curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3",
      "host": "",
      "accept": "*/*"
    },
    "remoteAddress": "",
    "remotePort": 59834
  },
  "msg": "start",
  "time": "2012-03-28T17:39:44.880Z",
  "v": 0
}

Here we logged the incoming request with log.info({req: request}, 'start'). The use of the “req” field triggers the “req” serializer registered at Logger creation.

Next the req.log.debug in our handler:

{                                                   // (2)
  "name": "helloapi",
  "hostname": "banana.local",
  "pid": 40442,
  "route": "SayHello",
  "req_id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
  "level": 20,
  "msg": "caller is \"paul\"",
  "time": "2012-03-28T17:39:44.883Z",
  "v": 0
}

and the response logged in the “after” event:

{                                                   // (3)
  "name": "helloapi",
  "hostname": "banana.local",
  "pid": 40442,
  "route": "SayHello",
  "req_id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
  "level": 30,
  "res": {
    "statusCode": 200,
    "headers": {
      "access-control-allow-origin": "*",
      "access-control-allow-headers": "Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version",
      "access-control-expose-headers": "X-Api-Version, X-Request-Id, X-Response-Time",
      "server": "Hello API",
      "x-request-id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
      "access-control-allow-methods": "GET",
      "connection": "close",
      "content-length": 16,
      "content-md5": "Xmn3QcFXaIaKw9RPUARGBA==",
      "content-type": "application/json",
      "date": "Wed, 28 Mar 2012 17:39:44 GMT",
      "x-response-time": 5
    }
  },
  "msg": "finished",
  "time": "2012-03-28T17:39:44.886Z",
  "v": 0
}

Two useful details of note here:

  1. The last two log messages include a “req_id” field (added to the req.log logger by restify). Note that this is the same UUID as the “X-Request-Id” header in the curl response. This means that if you use req.log for logging in your API handlers you will get an easy way to collate all logging for particular requests.

    If yours is an SOA system with many services, a best practice is to carry that X-Request-Id/req_id through your system to enable collating the handling of a single top-level request.

  2. The last two log messages include a “route” field. This tells you to which handler restify routed the request. While possibly useful for debugging, this can be very helpful for log-based monitoring of endpoints on a server.
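Collating by req_id then needs no parsing cleverness at all; once records are plain objects, it is one filter (hypothetical records and req_id values below):

```javascript
// Hypothetical parsed log records from two interleaved requests.
var recs = [
  { req_id: '9496dfdd', route: 'SayHello', msg: 'caller is "paul"' },
  { req_id: '1b2e4a77', route: 'SayHello', msg: 'caller is "mark"' },
  { req_id: '9496dfdd', route: 'SayHello', msg: 'finished' }
];

// All logging for one top-level request:
var one = recs.filter(function (r) { return r.req_id === '9496dfdd'; });
console.log(one.length);  // 2
```

The same filter works across log files from multiple services once they all carry the request id through.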

Recall that we also set up all logging to go to the “hello.log” file. This was set at the TRACE level. Restify will log more detail of its operation at the trace level. See my “hello.log” for an example. The bunyan tool does a decent job of nicely formatting multiline messages and “req”/”res” keys (with color, not shown in the gist).

This is logging you can use effectively.

Other Tools

Bunyan is just one of many options for logging in node.js-land. Others (that I know of) supporting JSON logging are winston and logmagic. Paul Querna has an excellent post on using JSON for logging, which shows logmagic usage and also touches on topics like the GELF logging format, log transporting, indexing and searching.

Final Thoughts

Parsing challenges won’t ever completely go away, but they can for your logs if you use JSON. Collating log records across logs from multiple nodes is facilitated by a common “time” field. Correlating logging across multiple services is enabled by carrying a common “req_id” (or equivalent) through all such logs.

Separate log files for a single service are an anti-pattern. The typical Apache example of separate access and error logs is legacy, not an example to follow. A JSON log provides the structure necessary for tooling to easily filter for log records of a particular type.

JSON logs bring possibilities. Feeding to tools like Splunk becomes easy. Ad hoc fields allow for a lightly spec’d comm channel from apps to other services: records with a “metric” field could feed to statsd, records with “loggly: true” could feed to loggly.

Here I’ve described a very simple example of restify and bunyan usage for node.js-based API services with easy JSON logging. Restify provides a powerful framework for robust API services. Bunyan provides a light API for nice JSON logging and the beginnings of tooling to help consume Bunyan JSON logs.

Update (29-Mar-2012): Fix styles somewhat for RSS readers.

Posted in module | 7 Comments

Version 0.6.14 (stable)

2012.03.22 Version 0.6.14 (stable)

  • net: don't crash when queued write fails (Igor Zinkovsky)

  • sunos: fix EMFILE on process.memoryUsage() (Bryan Cantrill)

  • crypto: fix compile-time error with openssl 0.9.7e (Ben Noordhuis)

  • unix: ignore ECONNABORTED errors from accept() (Ben Noordhuis)

  • Add UV_ENOSPC and mappings to it (Bert Belder)

  • http-parser: Fix response body is not read (koichik)

  • Upgrade npm to 1.1.12

    – upgrade node-gyp to 0.3.7
    – work around AV-locked directories on Windows
    – Fix isaacs/npm#2293 Don't try to 'uninstall' /
    – Exclude symbolic links from packages.
    – Fix isaacs/npm#2275 Spurious 'unresolvable cycle' error.
    – Exclude/include dot files as if they were normal files

Source Code:

Windows Installer:

Windows x64 Files:

Macintosh Installer (Universal):

Other release files:



Posted in release | 2 Comments

Version 0.6.13 (stable)

2012.03.15 Version 0.6.13 (stable)

  • Windows: Many libuv test fixes (Bert Belder)

  • Windows: avoid uv_guess_handle crash when fd < 0 (Bert Belder)

  • Map EBUSY and ENOTEMPTY errors (Bert Belder)

  • Windows: include syscall in fs errors (Bert Belder)

  • Fix ENOSYS on Linux kernel version mismatch (Ben Noordhuis)

  • Update npm to 1.1.9

    – upgrade node-gyp to 0.3.5 (Nathan Rajlich)
    – Fix isaacs/npm#2249 Add cache-max and cache-min configs
    – Properly redirect across https/http registry requests
    – log config usage if undefined key in set function (Kris Windham)
    – Add support for os/cpu fields in package.json (Adam Blackburn)
    – Automatically node-gyp packages containing a binding.gyp
    – Fix failures unpacking in UNC shares
    – Never create un-listable directories
    – Handle cases where an optionalDependency fails to build

Source Code:

Windows Installer:

Macintosh Installer:



Posted in release | Leave a comment

Version 0.7.6 (unstable)

2012.03.13, Version 0.7.6 (unstable)

  • Upgrade v8 to 3.9.17

  • Upgrade npm to 1.1.8

    – Add support for os/cpu fields in package.json (Adam Blackburn)
    – Automatically node-gyp packages containing a binding.gyp
    – Fix failures unpacking in UNC shares
    – Never create un-listable directories
    – Handle cases where an optionalDependency fails to build

  • events: newListener emit correct fn when using 'once' (Roly Fentanes)

  • url: Ignore empty port component (Łukasz Walukiewicz)

  • module: replace 'children' array (isaacs)

  • tls: parse multiple values of a key in ssl certificate (Sambasiva Suda)

  • cluster: support passing of named pipes (Ben Noordhuis)

  • Windows: include syscall in fs errors (Bert Belder)

  • http: #2888 Emit end event only once (Igor Zinkovsky)

  • readline: add multiline support (Rlidwka)

  • process: add process.hrtime() (Nathan Rajlich)

  • net, http, https: add localAddress option (Dmitry Nizovtsev)

  • addon improvements (Nathan Rajlich)

  • build improvements (Ben Noordhuis, Sadique Ali, T.C. Hollingsworth, Nathan Rajlich)

  • add support for "SEARCH" request methods (Nathan Rajlich)

  • expose the zlib and http_parser version in process.versions (Nathan Rajlich)

Source Code:

Windows Installer:

Windows x64 Files:

Macintosh Installer (Universal):

Other release files:



Posted in release | Leave a comment

Version 0.6.12 (stable)

2012.03.02 Version 0.6.12 (stable)

  • Upgrade V8 to

  • dtrace ustack helper improvements (Dave Pacheco)

  • API Documentation refactor (isaacs)

  • #2827 net: fix race write() before and after connect() (koichik)

  • #2554 #2567 throw if fs args for 'start' or 'end' are strings (AJ ONeal)

  • punycode: Update to v1.0.0 (Mathias Bynens)

  • Make a fat binary for the OS X pkg (isaacs)

  • Fix hang on accessing process.stdin (isaacs)

  • repl: make tab completion work on non-objects (Nathan Rajlich)

  • Fix on OS X (Ben Noordhuis)

  • Fix #2515 nested setTimeouts cause premature process exit (Ben Noordhuis)

  • windows: fix time conversion in stat (Igor Zinkovsky)

  • windows: fs: handle EOF in read (Brandon Philips)

  • windows: avoid IOCP short-circuit on non-ifs lsps (Igor Zinkovsky)

  • Upgrade npm to 1.1.4 (isaacs)

    – windows fixes
    – Bundle nested bundleDependencies properly
    – install: support --save with url install targets
    – shrinkwrap: behave properly with url-installed modules
    – support installing uncompressed tars or single file modules from urls etc.
    – don't run make clean on rebuild
    – support HTTPS-over-HTTP proxy tunneling

Source Code:

Windows Installer:

Macintosh Installer:



Posted in release | Leave a comment