CRSH your assets into tiny blocks!

Just now, I released version 0.2.0 of my nifty asset compression library, crsh. This new version comes with some majorly nifty stuff, niftiest of all being the new filter system. Much like the middleware system used in connect/express, crsh now supports filters to add support for any type of file you’d want to use! Here’s a quick sample of the csv-to-json filter that I’ve got in the readme.

// Let's try a csv-to-json filter
Crsh.addFilter('csv', function () {
  this.addType('json', 'csv')

  return function (data, next) {
    var pattern = /(?:^|,)("(?:[^"]+)*"|[^,]*)/g
      , lines = data.split("\n")
      , keys = lines.shift().split(pattern)
          .map(function (key) {
            return key.toLowerCase()
          })
      , rows = lines.map(function (line) {
          var res = {}
          line.split(pattern).forEach(function (val, i) {
            if (keys[i]) {
              res[keys[i]] = val.replace(/"/g, '')
            }
          })
          return res
        })

    next(null, JSON.stringify(rows))
  }
})
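To see what the filter body boils down to, here's the same csv-to-json conversion as a standalone function (simplified: it splits on plain commas, so quoted fields containing commas aren't handled):

```javascript
// Standalone version of the filter's csv-to-json logic.
function csvToJson (data) {
  var lines = data.trim().split('\n')

  // First line holds the column names; lowercase them for keys.
  var keys = lines.shift().split(',').map(function (key) {
    return key.toLowerCase().replace(/"/g, '')
  })

  // Each remaining line becomes an object keyed by the headers.
  var rows = lines.map(function (line) {
    var res = {}
    line.split(',').forEach(function (val, i) {
      if (keys[i]) res[keys[i]] = val.replace(/"/g, '')
    })
    return res
  })

  return JSON.stringify(rows)
}

console.log(csvToJson('Name,Age\nAlice,30\nBob,25'))
// → [{"name":"Alice","age":"30"},{"name":"Bob","age":"25"}]
```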

As you can see, crsh just got pretty darn flexible, thanks to filters. Next up is going to be output filters so you can specify how stuff gets joined. For example you could make a jsonp output filter to match the csv-to-json input filter above. The future is awesome. :D
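For what it's worth, the jsonp output filter mentioned above could be as simple as wrapping the joined output in a callback call (a sketch only; crsh has no output-filter API yet, so the shape here is invented):

```javascript
// Hypothetical jsonp output filter: wrap the joined JSON in a callback.
function jsonpOutput (callbackName) {
  return function (joined) {
    return callbackName + '(' + joined + ');'
  }
}

console.log(jsonpOutput('handleData')('[{"name":"Alice"}]'))
// → handleData([{"name":"Alice"}]);
```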

You are not lean, stop pretending.

I promised a post on my views of the “Lean Religion”, as I refer to it. Here it is. Don’t get me wrong; I agree with the intent, just not with the execution.

The Origin

IMVU built a religion to market their product, and the Cult of Lean did the footwork. The question is: how many subscribers of the lean religion actually use the product that IMVU is pushing? No one? Yeah, that’s what I thought.

They followed the tried-and-true method of getting as many eyeballs on them as possible. In a way you could say it worked; they are still in business, after all. But with that many eyeballs on them, why are they not competitive with Zynga?

Because their product sucks. They focused so much on structuring their work that they forgot they need to feel it too. That’s the lean curse, and even they fell into that trap. Without feeling, you have a bland product and you can’t structure feeling. You need to just let it flow. If you have a brilliant idea, let it out. If you don’t, you can’t force it.

You think you are lean?

A third of your employees are managers? You are not lean. Cut the managers and get heads-down coding. Your coders don’t need babysitters, and if they do, fire them.

Regular brief meetings to determine direction and discuss targets? You are not lean. Dump your shit in a todo list. If it really needs to be fleshed out, just group chat between deploys.

Carefully structured release cycles? You are not lean. Release it when it works; not a moment sooner, nor later.

Screw your lean, screw your scrum, screw your kanban. To put it bluntly: just fucking do it. I do not subscribe to the lean religion, I just get shit done. A one-man team can make a scalable service just as well as a hundred-man team. The hundred-man team just sounds more impressive.

Hack and slash, break and smash

If you find that your product requires more programmers than you have fingers, it is too complex. Break it up. Smash it to pieces. Spin off the pieces into totally separate projects, worked on by totally different teams.

If they need to interact regularly, you haven’t isolated the components well enough. Think of the platform. The platform rant that Steve Yegge posted on Google+ had it absolutely right: if your internal interfaces aren’t generic and boring, or are in any way different from your public-facing interfaces, you are doing it wrong.

Do what works for you

I’ve seen many companies actually add complexity to fit within the lean paradigm. Few seem to recognize that the lean methodology is merely a set of guidelines, not rules. The lean approach is not a one-size-fits-all business model; such a thing does not exist. Just do your work, and the methods ideal to your specific situation will surface organically.

Today I went to a job interview.

I went to a job interview today. Admittedly, I went in knowing little about the company beforehand, but with the intent of getting a feel for their workplace and seeing if it is somewhere I might actually enjoy working. I didn’t expect much, but I gave it a go anyway.

As I sat by the entrance, waiting for the HR person to prepare for the interview, I was passed by a total of 22 people. Only one of them took the extra breath to simply say “hello” as they briskly walked past. Now that’s certainly not a point of enormous consequence, but it does give one a bleak view of their passion for what they do. Happy people are social people.

The Interview

Eventually the interview began. They bandied about Microsoft Certifications and Partner Programs, as if those actually meant something. There were the usual questions about “Why do you want to work here?” and “What are your career goals?”, but I had no answer for the first one; I didn’t really know what they did yet, so they explained it to me. Or rather, they tried to.

After a ten minute long pitch of their product/service/whatever-the-heck-it-is, I still had little idea what they as an entity actually did. I knew what the end result was, but was it their product? Was it partial work contracted by some external entity? What were they doing that they could actually put their name on?

I couldn’t figure out what exactly it was they did. But I got the distinct feeling that they were basically a glorified contract-based development shop. I hope they can dispute that.

The Conflict

They also asked me what my criteria were for an ideal workplace. I talked to them about my history of open source contributions and my development style. They seemed visibly at odds with my views, which I was expecting, but not to such an extent.

They openly condemned the idea of open source simply on the grounds that “Enterprise clients don’t like that.” First: that sort of submissive attitude is what encourages stagnation of technologies. Second: no, enterprise does not inherently hate open source. Were that the case, Linux and the various Apache Foundation projects would not be so prolific. What enterprise clients don’t like is half-assed use of open source technologies simply to save money. If you use open source code, you’d better understand what code it’s replacing.

As for development style: they, like many, have hopped aboard the lean bandwagon, following the Scrum variety in particular. I’ll leave my opinions on the “Lean Religion” for another post, but to put it bluntly, I’m more a follower of the “programming, motherfucker” methodology. I don’t think they liked that.

The Epic Conclusion

It was becoming obvious to me that this was not a place I wanted to work. I respect that they have built this company and kept it running for so long. That’s not yet something I can say I’ve outdone, so it’s only fair that I recognize their accomplishment.

But our views on the “right” way to run a tech business are radically different. Some people just aren’t meant to work together, and this was one of those cases. I have politely chosen not to name the company, though they will probably read this and know who I am referring to. I hope they have a compelling counter-argument to my views on their methods.

Games don’t need to be social

Social games have been a big trend in recent years. Zynga struck it big and now everyone else is trying to emulate them. Unfortunately, the first thing that pops into anyone’s head when a Zynga game is mentioned is Facebook. Facebook is the platform upon which their success stories like FarmVille were built, but it’s not the reason for their success.

Zynga’s games work because they are fun. The social connectivity is merely a mechanism to share your enjoyment of the game with others. It becomes utterly useless if there is no enjoyment to share.

Sadly, many focus on social connectivity out of some misguided delusion of necessity. People actually believe their game needs to be social to be successful. This is simply not the case. At best, it’s a distraction from what really matters: fun. At worst, it’s lipstick on a pig. If your game sucks, social connectivity isn’t going to magically make people not notice.

Minecraft sold millions. No Facebook, no Twitter. It’s not even out of beta. Braid did incredibly well too. As did Bejeweled. No social connectivity there. Just good, old-fashioned fun. That’s all games need.

Stop for a moment and think; if Facebook didn’t exist, could my game still work? Would I still love playing it? Would I still tell my friends about that cool thing I did in it last night? If the answer to any of those is no, you have lost. Game over. Retry.

How to make Socket.IO work behind nginx (mostly)

UPDATE: News from jpetazzo:

dotCloud now has beta websockets support, so this hack should no longer be necessary. Just point your custom domain at the new endpoint and you will be using a websockets-aware load balancer instead of the default one running Nginx.

Most web hosts with node.js support host it behind an nginx proxy. Sadly, Socket.IO doesn’t work at all behind nginx without a bit of hacking. Currently there’s no vhost-supported way to run websockets through nginx, but we can at least get the xhr transport to work properly–basically everything can do xhr-polling.

Turns out that nginx doesn’t really like how Socket.IO uses the “Connection: keep-alive” header, so let’s just remove that. All we need to do is overwrite a function in the xhr-polling transport. This should do it:

io.configure(function () {
  io.set("transports", ["xhr-polling"]);
  io.set("polling duration", 10);

  var path = require('path');
  var base = path.dirname(require.resolve('socket.io'));
  var HTTPPolling = require(path.join(base, 'lib', 'transports', 'http-polling'));
  var XHRPolling = require(path.join(base, 'lib', 'transports', 'xhr-polling'));

  XHRPolling.prototype.doWrite = function (data) {
    // Run the stock HTTPPolling write first, then replace the headers.
    HTTPPolling.prototype.doWrite.call(this, data);

    var headers = {
      'Content-Type': 'text/plain; charset=UTF-8',
      'Content-Length': (data && Buffer.byteLength(data)) || 0
    };

    if (this.req.headers.origin) {
      headers['Access-Control-Allow-Origin'] = '*';
      if (this.req.headers.cookie) {
        headers['Access-Control-Allow-Credentials'] = 'true';
      }
    }

    this.response.writeHead(200, headers);
    this.response.write(data);
    this.log.debug(this.name + ' writing', data);
  };
});
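This override is plain prototype monkey-patching. In miniature, the pattern looks like this (Greeter is a made-up stand-in, not part of Socket.IO):

```javascript
// A plain constructor with one prototype method.
function Greeter (name) { this.name = name }
Greeter.prototype.greet = function () { return 'hello ' + this.name }

// Keep a reference to the original, then overwrite it on the prototype.
var original = Greeter.prototype.greet
Greeter.prototype.greet = function () {
  // Call through to the original first, then adjust the result,
  // just like doWrite calls HTTPPolling's implementation before
  // writing its own headers.
  return original.call(this) + '!'
}

console.log(new Greeter('nginx').greet()) // → hello nginx!
```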

Finally open-sourced some new stuff


A handy middleware utility for node.js and express that queues scripts and styles to be rendered later. It supports supplying JavaScript via anonymous functions, to avoid multi-line string issues, and also has built-in minification which can be enabled when you render the scripts.


This is the expanded source of the chat demo I made for my Node.js presentation at OKDG. I added CouchDB user persistence, S3-backed file uploading and also inline media embedding via another open-source project of mine called Embedify.


A jQuery plugin for xhr-based upload management with support for drag-and-drop regions. I used this for handling file uploads to S3 in Chattan.

An eventful week…

It’s been an interesting week so far. I had a job interview at Yammer in San Francisco on Tuesday morning, so I was to fly out from Kelowna on Monday. My passport, however, had expired. I ended up having to drive to Vancouver through the middle of the night and go to the passport office there at 7:30AM Monday morning. The process would take a while, so I slept in the van for a few hours while I waited and hoped. They managed to have the passport put together just in time for me to pick it up and go straight to the airport.

Eventually I made it to San Francisco, but the problems still hadn’t ended. From the airport I took a shuttle…to the wrong hotel. Then I caught a cab to the right one. Finally at my hotel, I checked in and went up to my room. Much to my dismay, however, their internet login system didn’t work with Linux, so my “complimentary” internet went entirely to waste. I had to resort to my Galaxy S for checking emails–no doubt my next bill is going to be scary expensive. In typical hotel fashion, they charge you to do pretty much anything but sleep, so I slept.

The next day was somewhat better. I was to meet at the Yammer HQ, in the same building as TechCrunch, for a few hours at 10AM. Checkout time at the hotel was noon, so I had to pack my luggage along with me. I quickly packed up my things and took a cab over to their offices, which were just a few minutes away.

The interview seemed to go pretty well; hopefully I can get the job, because they’d be a pretty awesome crew to work with. There were several sets of interviews with people from the various teams at Yammer. They asked me various questions about JavaScript to test my knowledge, which was refreshing–sometimes I just get disregarded because of my lack of formal training. Most of the questions were pretty easy, but there were a few things they were a bit vague about, so it took a bit of prodding to figure out what exactly they were fishing for. I got to share a tasty catered lunch with them too.

After the interview I basically stepped outside the office, jumped into a cab and went straight to the airport. It was a much smoother exit from San Francisco than the entrance. There was a stop in Seattle, though, that was a bit on the long side; I sat at the gate for my next flight for about 3 hours. Fortunately there is plenty to do at the SeaTac airport, so I managed to keep myself entertained. I got to try out a PlayBook while I was there, which was neat. I can kind of see where people are coming from with the complaints about it being “incomplete”, but it’s still a pretty neat piece of tech. I think application development should make it a pretty exciting platform.

Anyway, eventually I got back to Kelowna and finally got the sleep I had been longing for since Monday morning. I slept until almost noon on Wednesday, which I haven’t done in a while. But eventually I felt compelled to get up and get some programming in. I wanted to get some more work done on the engine for a web game I’ve been making. I made a few updates to my Actor class to add RM2K support and animations, so I decided I’d do some benchmarks. I managed to get 484 unique Actors on a grass-tiled screen at 1920×968 before the rendering started to drop below 30 fps. That’s some pretty decent performance. Now I need to work on layer grouping and pre-rendering so I’m not drawing each tile of each layer every frame. I’m convinced I can make an infinite scrolling tilemap render fast enough to be viable for large-scale game development.
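The pre-rendering idea boils down to arithmetic on draw calls: composite the static tiles once into an offscreen buffer, then each frame costs a single blit instead of one draw per tile. Here is a tiny sketch with a plain counter standing in for a canvas context (all names are made up for illustration; this is not the engine’s actual code):

```javascript
// A "layer" that tracks how many draw calls it has issued.
function makeLayer (tileCount) {
  return { tiles: tileCount, cached: false, draws: 0 }
}

function render (layer) {
  if (!layer.cached) {
    // One-time cost: composite every tile onto an offscreen buffer.
    layer.draws += layer.tiles
    layer.cached = true
  }
  // Every frame afterwards: a single blit of the cached buffer.
  layer.draws += 1
}

var layer = makeLayer(1000)
for (var frame = 0; frame < 60; frame++) render(layer)
console.log(layer.draws) // → 1060 (vs 60000 without caching)
```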

Thursday morning was the second Evoke Game Group meeting. Unfortunately there were only three of us this time–the weather was a bit ugly, and I think there was some confusion with moving the meet to Thursday because of Good Friday. It should pick up again for the next meet though. A few people have invited some programmers to join us, so I’m optimistic that I can get a hacknight going soon. I’ll keep my fingers crossed.

Today was more game dev stuff and deploying Chattan to DuoStack. Hopefully I can get Cloudant working too so I can deploy another secret project I’ve been working on. Cloudant was one of the many SaaS companies hit hard by the EC2 outage today; hopefully they recover soon, as it’s a pretty awesome service. I also got a few more calls from some web companies in San Francisco, including Opzi, one of the startups that presented at this year’s TechCrunch Disrupt. They are somewhat smaller than Yammer, but it sounds like they are trying to do some pretty neat stuff. I can’t wait to see where that goes.

A simple explanation of “new” in Javascript.

There is a major feature of Javascript that is sorely misunderstood–the “new” keyword. Declaring a variable with the “new” keyword assigns it whatever “this” refers to when the function completes. Without “new”, the variable simply gets the function’s return value, or ‘undefined’ if nothing was returned. I wrote a simple example below to illustrate the effects of using the new keyword.

function Test(name){
    this.test = function(){
        return 'This will only work through the "new" keyword.';
    };
    return name;
}

var test = new Test('test');
test instanceof Test; // returns true
test.test(); // returns 'This will only work through the "new" keyword.'
test; // the instance object created by Test()

var test2 = Test('test');
test2 instanceof Test; // returns false
test2.test(); // throws a TypeError; test2 is the string 'test', which has no test() method
test2; // returns 'test'

As Andrew kindly pointed out, if you return an object, the result of “new” will be the returned object rather than an instance of the constructor.

function Test(name){
    this.test = function(){
        return 'This will only work through the "new" keyword.';
    };
    return {name:name};
}

var test = new Test('test');
test instanceof Test; // returns false
test.test(); // throws a TypeError; the returned object has no test() method
test; // returns {name:'test'}
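A common safeguard follows from all this: have the constructor detect when it was called without “new” and fix things up itself. This idiom isn’t from the examples above, but it’s widely used:

```javascript
function Safe (name) {
  // If called without "new", "this" won't be a Safe instance,
  // so re-invoke properly and return the result.
  if (!(this instanceof Safe)) return new Safe(name)
  this.name = name
}

var a = new Safe('a')
var b = Safe('b') // forgot "new", still works

console.log(a instanceof Safe) // → true
console.log(b instanceof Safe) // → true
console.log(b.name) // → b
```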

Peint Update!

At last, Peint is usable enough to be called “beta” software. I was working on other things for awhile so it took a bit of time to get back to it.

I’ve made many changes, including:
- Moved the module system parts into a separate Utils library, which can easily attach the module system to any object
- Changed Peint.image.load() to allow loading multiple images in the same way as Peint.require()
- Changed the Peint.require() and Peint.image.load() callbacks to receive the loaded items as arguments
- Added a region module that allows defining events based on regions of the screen, used for buttons and such
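For illustration, a variadic loader whose callback receives the loaded items as arguments might look something like this (a hypothetical sketch of the pattern, not Peint’s actual source):

```javascript
// Hypothetical variadic loader: every argument but the last is a name
// to load; the last argument is a callback that receives the loaded
// items as individual arguments.
function load () {
  var names = [].slice.call(arguments, 0, -1)
  var done = arguments[arguments.length - 1]

  // Stand-in for an actual image load.
  var results = names.map(function (name) {
    return { name: name }
  })

  // Spread the results across the callback's parameters.
  done.apply(null, results)
}

load('player.png', 'tiles.png', function (player, tiles) {
  console.log(player.name) // → player.png
  console.log(tiles.name)  // → tiles.png
})
```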

As usual, the current demo is viewable at and the code is available on GitHub.

The next destination for the open web?

Those of you aware of OpenID likely feel it is the greatest thing since sliced bread. The internet is becoming open and data is being shared. While the open web movement is accomplishing some great things, I still find it lacking in actual social connectivity. It’s still difficult to find your friends anywhere other than Twitter and Facebook. It’s still difficult to share things around the web with your friends. It’s still difficult to recognize the responses to your blog posts or news articles that aren’t made directly as replies in whatever comment system is being used. OpenID simplifies the entry point of signing up for a new website or service. But where can we go from there? What more can we simplify? What if we could share more than just identity? There are two other important constants between social services that are not effectively being shared: connections and interactions.

Connections
Connections, friends, associates–whatever you call it, it is a list of people. Most of these people likely exist on many of your friend lists. Why do we need to maintain so many different lists of the same people? Obviously your connections to these people will vary to an extent, but that is where I think this idea could help. Some people are business associates, some people are family members, some people are poker buddies. These are different groups, but the important thing is that they are definable. A service could be made to allow people to track and categorize their many connections and share them with other social services. Say, for example, that you work for a big company and are a member of the company softball team. One of your coworkers would perhaps be in your contact list under the categories “coworker” and “softball team”, possibly even “friends”…or possibly not? Maybe he’s a jerk.

Interactions
How many people out there have a blog, a Facebook account and a Twitter account that feel frustrated when they want to say something to everyone? Many services have begun to include “post to twitter” and “post to facebook” buttons which is a step in the right direction, but is not especially intuitive. The biggest issue with this approach is that the association is not retained. What if I later notice I need to fix a typo? Now I need to go to all the services I pushed to and make the edit. What if I make a horrible drunken rant in the middle of the night and decide, in my embarrassment the next morning, that I need to delete it? Now I need to go back through my updates on all these services and delete it.

Where am I going with all this?

I’ve been bouncing around the idea of building a web service inspired by the OpenID model. The service would provide a universal method of sharing connection and interaction data between connected services, allowing users to administer this data from any of those services and automatically identify which of their connections use that particular service. A master service would store a copy of all the data and function as a pass-through, relaying any additions, updates and deletions to all other connected services. It would likely require a verification method so it knows whether the change has actually been made on each service; if a service happens to be down, it could store the change in a message queue and try again later.
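The pass-through relay described above can be sketched in a few lines (everything here is hypothetical: the service shape, the update format, all of it):

```javascript
// Hypothetical relay: fan an update out to connected services,
// queueing any that fail for a later retry.
function relay (update, services, queue) {
  services.forEach(function (service) {
    try {
      service.push(update)
    } catch (err) {
      // Service is down; remember the change and try again later.
      queue.push({ service: service.name, update: update })
    }
  })
}

var queue = []
var services = [
  { name: 'blog', push: function (u) { this.got = u } },
  { name: 'micro', push: function () { throw new Error('down') } }
]

relay({ id: 1, text: 'hello' }, services, queue)
console.log(services[0].got.text) // → hello
console.log(queue.length)         // → 1
console.log(queue[0].service)     // → micro
```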

Ideas? Suggestions? Drunken rants? Any input is appreciated.