Forcing Garbage Collection with Node.js and V8

I’m in the middle of looking for some memory leaks at the moment. In order to isolate them I wanted to confirm exactly how much memory was being used by a given line of code after garbage collection.

Fortunately V8 allows you to manually force garbage collection from within Javascript.

When you run your Node script, just add the option: --expose-gc


node --expose-gc test.js

And then from within the Javascript just call:


global.gc();
That’s it, good luck finding those leaks!
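
To confirm how much memory a given piece of code holds onto, something like the following sketch works (run with --expose-gc; the gc call is guarded so the file also runs, less usefully, without the flag — the file name and numbers here are my own):

```javascript
// Run with: node --expose-gc measure.js
// global.gc() only exists when --expose-gc is set, so it is guarded.
function heapUsedMB() {
  return process.memoryUsage().heapUsed / 1024 / 1024;
}

var before = heapUsedMB();
var big = new Array(1e6).fill('x'); // allocate something sizeable
big = null;                         // drop the only reference to it

if (typeof global.gc === 'function') {
  global.gc(); // force a full collection
}

console.log('heap used: ' + heapUsedMB().toFixed(2) +
            ' MB (was ' + before.toFixed(2) + ' MB)');
```

Comparing heapUsed before and after the forced collection tells you how much of the allocation was actually reclaimable.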


Express.js public/static directory in one line

If you want to serve a static directory from express you can do it with the following line of code.

The first parameter is the route; the second is the path to the directory.

app.use('/public', express.static(__dirname + '/public'));

Posting here for my own convenience.

HTTP requests with Node.js

I couldn’t find this code anywhere on the internets yesterday, so I’m posting it here for my own convenience.

 var http = require('http');

 var options = {
   host: '', // your host here
   port: 80,
   path: '/path',
   method: 'GET',
   encoding: 'utf8'
 };

 http.get({ host: options.host, path: options.path }, function(res2) {
   res2.setEncoding(options.encoding);
   var data = [];
   res2.on('data', function(chunk) {
     data.push(chunk); // collect each chunk as it arrives
   });
   res2.on('end', function() { // wait for the request to finish
     console.log(data.join(''));
   });
 });

Learning from Single Page Web Applications

I’ve spent the last three years working on single page applications of various shapes and sizes. I don’t like them; this post isn’t about why, but I will just say I like data to be exposed at the lowest level (HTTP) and not require Javascript to turn it into something useful. All that being said, I’ve been lucky enough to work with some clever folks, and the end results have all been very interesting and pushed boundaries in their own way.

At Full Frontal I enjoyed listening to Nicholas Zakas talking about Scalable Javascript Application Architecture. In many ways he described the architecture we used for the Volkswagen Configurator. Earlier in the day Phil Hawksworth had been talking about Excessive Enhancement.

It struck me that while most front end MVC frameworks make it easy to build Javascript applications, making them work as web applications (without Javascript) becomes much more difficult. This is largely because you define and build your application and templates/page structure using the browser’s Javascript interpreter.

I’m of the view that if you architect something properly you can get the same full experience provided by Javascript applications but still have a reliable fallback for those who do not have Javascript enabled. That is what I call a proper web application.

In the back of my mind I’ve been thinking about Charlie Robbins’ excellent post, Scaling Isomorphic Javascript Code, which talks elegantly about why MVC might not be the best pattern in an environment where you can execute Javascript on the server. He suggests the Resource-View-Presenter pattern.

It seemed that the way we currently split our frameworks between front and back end reduces reuse of configuration/code and actually encourages duplication.

I wanted to try defining all of these things on the server, so they could be consumed on the server and client.

As soon as the talk finished I found myself writing code. This blog post talks about what I have been building and why.

Sharing URLs between client and server

Allow me to deviate for a moment.

On large applications it’s common practice to split the code and teams between front end and back end developers. This often results in duplication and unnecessary bugs. A common example is maintaining URLs in two places. In the browser we might have a URLs object:

namespace.urls = {
     login: '/login',
     user: '/user/:username'
};

And then on the server the same URLs would be defined possibly using a different syntax.  Changing one does not change the other, and with split teams this can result in unnecessary bugs.

With Node.js it’s particularly easy to share exactly the same URL object on the client and server. If you ensure your HTML links are populated from the same URL object your application will continue to function when URLs change.

This is what I’ve been doing in some of my Node apps:

(function(exports) {
    exports.HOME = '/';
    exports.LOGIN = '/login';
    exports.USERS = '/users';
    exports.USER = '/users/:user';
    // also add URL functions here that can be shared between client and server.
    // ('populate' is a stand-in name; the original was lost)
    exports.populate = function(str, tokens) {
        for (var token in tokens) {
            str = str.replace(':' + token, tokens[token]);
        }
        return str;
    };
})(typeof exports === 'undefined' ? namespace.urls = {} : exports);

That allows it to be used as a Node module via require('./urls.js') and, when served to the browser, the URLs are available at namespace.urls.
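
The token replacement at the heart of this can be exercised on its own. Here is a standalone version of that replace loop (the function name is my own):

```javascript
// Expand ':token' placeholders in a shared route pattern,
// e.g. '/users/:user' + { user: 'dave' } -> '/users/dave'.
function populateUrl(str, tokens) {
  for (var token in tokens) {
    str = str.replace(':' + token, tokens[token]);
  }
  return str;
}

console.log(populateUrl('/users/:user', { user: 'dave' })); // /users/dave
```

Because the loop is plain Javascript with no DOM or Node dependencies, exactly the same function can run on the server and in the browser.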

I’ve been using Express.js; the parameter syntax seems to work nicely with Levi routes on the front end.

This seems like common sense. It’s a fine example of DRY.  Define as much as possible in server-side JS and then allow it to be consumed by the client-side JS reusing as much logic as possible.

Reusable modules

TiddlyWiki has the concept of plugins, which are essentially just a chunk of HTML/CSS/JS relevant to a particular piece of functionality and which could be added to HTML using a special syntax.


On the Volkswagen Configurator we also had the concept of UIs: reusable chunks of HTML/CSS/JS which could be appended into any DOM node.

Both are essentially variants of the Module Pattern with added support for HTML and CSS as part of the module.  Both methods work really nicely just so long as you’ve got Javascript turned on.

I started to build a simple Node.js app which could parse a modules folder and serve each module’s resources at appropriate URLs.


There is an app.js file in the flickr module’s folder which contains the server-side JS required by the module. Later this will provide a getData method to asynchronously generate a templating object which can populate the HTML.

Define your views on the server

In browserland it’s really easy to mess around with the DOM. You can completely transform a page, especially when you start appending modules of HTML/CSS/JS to DOM nodes. The problem comes when you want to show the same view to something that doesn’t understand Javascript, an RSS feed or search engine for example.

I decided to create a view specification which could be read by the server and also served to the browser. Doing so should make it really easy to render the exact same HTML with or without Javascript enabled.  Using Sizlate I started with this (it has since changed):

var views = [{
    url: '/user/:user',
    view: 'userpage',
    modules: ['photos', 'login', 'timeline']
}];
In the following example I have left out the JS and CSS files. The view is just a folder containing an HTML file of the same name and an optional app.js (as with the modules). It assumes the view HTML file contains a DIV tag with an id corresponding to each of the modules specified for the view.

I like this simple approach, but in order for it to scale I may soon need to start using HTML data attributes instead of ids. This is what the folder structure looked like:

project/views/userpage/userpage.html (the view HTML file) contains:

    <div id="photos"></div>
    <div id="login"></div>
    <div id="timeline"></div>

The end result would look something like this:

    <div id="photos">
        .. contents of /modules/photos/photos.html appended in here
    </div>
    <div id="login">
        .. contents of /modules/login/login.html appended in here
    </div>
    <div id="timeline">
        .. contents of /modules/timeline/timeline.html appended in here
    </div>

Concatenation vs Caching

At this stage I was able to build a sample app with my re-usable modules and a view. I thought it would be useful to concatenate all the JS and CSS files together for each view. It turns out I was wrong.

In a situation where you’re reusing modules across multiple views, concatenating per view makes no sense because you end up serving the same CSS across multiple URLs. The CSS/JS will be cached per view, not per module.

I decided it was actually better to use the URLs generated for the CSS/JS by the modules so they can be cached at the most granular level.  Both methods will be possible.  Currently CSS modules are served inline with the module HTML (not the document HEAD).  There will be some work to ensure that all CSS LINK tags are moved to the document HEAD before the view is rendered. JSDOM should make that quite easy.

History API

With views and modules defined on the server I’ve started to put together a front end framework which can consume the same application specification and make the whole experience a bit more pleasant.  I’m currently experimenting with generating popstate listeners from the specification which can then fire off the default/custom transitions between views.

What’s going on here?

Essentially I have started building a framework for mixing up reusable modules of HTML/CSS/JS in ways usually associated with Javascript applications but with progressive enhancement as one of its core values.

The functionality described above will almost certainly change.  What I have built so far is a simple proof of concept to test the best way to define views in this way using Sizlate.

At the moment I’m working on a sample app pulling in data from Flickr to demonstrate how it might all fit together in the real world. It’s pretty messy and requires lots of work, but you can see the code here.

I thought it was worth blogging about my approach just to get some feedback. Please do let me know what you think. All ideas, contributions and criticisms welcomed.

I’m going to be talking about Sizlate at the London Node user group on the 25th January at the Forward Offices in Camden, London. Please register here if you would like to attend.


Over the past year I have been experimenting with Node.js. It’s been a pretty interesting journey and I have learned a great deal.

One of my more interesting experiments has been Sizlate.

On projects at work I often find myself doing things like this:

 domNode.find('div.class').html('<b>INSERT SOME STUFF HERE</b>'); 

It’s a really powerful way to populate HTML. What I really like is that there is no need to add any crazy syntax into my templates.  Templates are just valid HTML and the point of insertion is specified by the jQuery selector.

From the developer’s point of view this is really simple; it does, however, introduce problems when Javascript is not turned on. I found myself wondering how this technique might be transferred onto the server.

After some experimentation I came up with Sizlate, an HTML templating engine for Express.js.

It’s pretty easy to get jQuery running on Node.js, but I decided that jQuery wasn’t a good fit for my use case. Sizzle is the selector engine used by jQuery, so I decided to investigate using Sizzle to provide the selector functionality. It turned out that this works quite nicely using the JSDOM project.

To use Sizlate you simply need to register it as your templating engine:

var sizlate = require('sizlate'); 
app.register('.html', sizlate); 

And then just call res.render as you would normally with Express:

 res.render('template.html',  { selectors: { 'a': 'hi there' } }); 

That’s the most basic example. On GitHub I have provided an example of passing Sizlate an object, allowing more complex data structures to be used. There is also an example using partials.

At the moment Sizlate only works on the server side, but it should be quite easy to get it working in the browser.

Feel free to have a play and let me know if you have any feedback.

Sizlate is available as an npm package and can be installed using the command:

 npm install sizlate

For more details please read the readme on GitHub.

Cross Browser CSS3 Gradient

Publishing this here for my own convenience.

background: #FF8D2C; /* for non-css3 browsers */
/* For WebKit (Safari, Google Chrome etc) */
background: -webkit-gradient(linear, left top, left bottom, from(#FFFFFF), to(#FF8D2C));
/* For Mozilla/Gecko (Firefox etc) */
background: -moz-linear-gradient(top, #FFFFFF, #FF8D2C);
/* For Internet Explorer 5.5 - 7 */
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr=#FFFFFF, endColorstr=#FF8D2C);
/* For Internet Explorer 8 */
-ms-filter: "progid:DXImageTransform.Microsoft.gradient(startColorstr=#FFFFFF, endColorstr=#FF8D2C)";

Building on the Web


When building a house, the foundations are fundamental to its structural integrity. Without good strong foundations the house is weak and liable to fall over at any moment.

Things on the web also have foundations in the form of HTML and URIs.  JavaScript can then be layered on top to improve the experience. It’s called progressive enhancement and people seem to be forgetting about it these days.

It’s pretty simple, build your HTML and CSS  first and then override the default behaviours with JavaScript.  This will ensure you are building on solid foundations.

When generating HTML on the server, you can easily re-use it with JavaScript. It’s far better than generating the HTML in the browser (with JavaScript) and either ignoring search engines or having to duplicate your logic on the server for SEO, accessibility and things like RSS feeds.

Here are some rather sweeping statements:

1. JavaScript should NEVER be used to process data in the browser.

2. JavaScript should rarely be used in the browser to generate HTML (sharing code with server-side JavaScript is acceptable).

I did warn you they were rather sweeping.

I’ve heard it said that if you want to provide an app- or Flash-like experience you need to use JavaScript to render your pages: you need to build single page JavaScript apps.

history.pushState() tells us otherwise. You can read about it here.

Basically it makes it possible for what we now call single page web apps to exist across multiple pages while still providing nice page transitions (no page refresh).

history.pushState() – A Fallback

history.pushState is all well and good, but it’s only available in WebKit and Firefox 4 at the moment. Maybe that is why people are seeing hashbangs as an alternative solution. Personally I would rather fall back to a fragment identifier (#) only in situations where history.pushState is not available.

There would need to be a bit of JavaScript at the top of each page redirecting users to the appropriate fragment identifier in browsers that do not support pushState.

So when pushState is not available, a page such as /photos/123 (for example)

might redirect to:

/#/photos/123

which would then go and fetch the contents from /photos/123.

If a page is loaded with the fragment identifier in a browser that supports pushState the hash should be removed and pushState used.
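
That decision can be written as a small testable function run on page load. The shape, names and example URLs below are my own, not from any library:

```javascript
// Decide what to do on page load, given the current URL and
// whether history.pushState is available.
// Returns { redirect: url }, { replaceState: url }, or null.
function hashFallbackAction(pathname, hash, hasPushState) {
  if (!hasPushState && pathname !== '/') {
    return { redirect: '/#' + pathname };    // fall back to the fragment form
  }
  if (hasPushState && hash.indexOf('#/') === 0) {
    return { replaceState: hash.slice(1) };  // upgrade the fragment to a real URL
  }
  return null; // nothing to do
}

console.log(hashFallbackAction('/photos/123', '', false));
// { redirect: '/#/photos/123' }
console.log(hashFallbackAction('/', '#/photos/123', true));
// { replaceState: '/photos/123' }
```

In the browser the returned action would be applied with window.location.replace or history.replaceState respectively; keeping the decision pure makes it easy to unit test.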


When you require JavaScript for templating and data processing you are on a very slippery slope to writing JavaScript applications.

We have for a long time been able to obfuscate our data with technologies like Flash. I for one have avoided these technologies because I believe that when we publish data properly on the web it becomes more re-usable, findable and accessible, and actually has far greater potential.

By all means use JavaScript, but please don’t rely on it.