Scales of Preference

I’m constantly in situations where I’m working with another person to try to make some decision.  This could be figuring out where to go out to eat with my wife, or which vendor to go with for a major purchase at work.  Either way, when there are multiple people involved in the decision, it’s easy to find yourself at an impasse where everyone has a different preference.

One of the tricks I’ve learned for making these situations easier is to routinely give an indication of how strongly I feel about any particular option.  Of course, that can range from absolute certainty that something is a terrible idea to a positive and unshakable conviction that it’s the best thing ever.  So, when talking about how I feel about a certain option, I try to use language which gives a clear indication of where I am on the scale.  For example:

  • I’m vehemently opposed to __________.
  • I completely disagree with __________.
  • I don’t think __________ is right.
  • I’d prefer not to __________.
  • I’m not convinced that __________ is a good idea.
  • I’m not convinced that __________ is the best option.
  • I don’t really have a preference about __________.
  • I’m slightly inclined toward __________.
  • I think __________ is the best option on the table.
  • I think __________ is a good plan.
  • I really like the idea of doing __________.
  • I’m super excited about going with __________!
  • I think __________ is the perfect choice!

As you can tell, these are arranged to scale from strong disagreement to strong agreement.  And, of course, this is barely more than a starting point for the kind of language you can use to place yourself on the scale.  While there are certainly a whole lot of other excellent options, there are a few things these particular ones all have in common:

  1. By starting with “I…”, they express my own opinion of the idea without judging the person who suggested it.  This makes it clear that I’m only expressing my own opinion: not passing judgement on someone else.
  2. They provide a wide range of shading on how much you like or don’t like the idea: not merely whether you’re in agreement or not.

These are both incredibly important when trying to come to a decision with other people.  The first one attempts to ensure that the conversation stays friendly.  It’s much harder to come to a win-win decision with another person when you’ve managed to get them pissed off at you.  The second allows you to each gauge whether there’s a large disparity in passion.  If one person is strongly in favor of an idea, while another person is mildly against it, the best course of action may well be to just go with it (so long as the decision is sufficiently reversible).  If one person is violently opposed, while the other person is so-so… it’s almost certainly best to give it a pass.


I first thought of using this technique when I was in a start-up with just two other people.  It was incredibly helpful in unravelling decisions where we didn’t have anywhere near enough data for any of us to really convince the others objectively.  In those cases, it was often just each of us with our own intuition about how a certain course would turn out, and this tool made it a lot easier for us to express how strongly that “gut” feeling was.

Since then, I’ve used it quite a bit on software engineering teams when trying to figure out exactly how best to build various features or solve certain technical challenges.  Again, these were often cases where clear, objective answers were hard to come by (e.g., what would users think about X change to a feature?).  Using this technique allowed each person to weigh their own ideas against the others in a productive way.

Canvas vs. SVG

When I began my start-up, I knew the product was going to focus heavily on drawing high-quality graphs, so I spent quite a while looking at the various options.


Server Side Rendering

The first option, and the one that’s been around the longest, is to do all the rendering on the server and then send the resulting images back to the client.  This is essentially what Google Maps did (though they’ve added more and more client-rendered elements over the years).  To do this, you’ll need some kind of image rendering environment on the server (e.g., Java2D), and, of course, a server capable of serving up the images thus produced.

I decided to skip this option because it adds latency and dramatically cuts down on interactivity and the ability to animate transitions.  Both were very important to me, so this solution was clearly not a good fit.



The second option was to use HTML5’s Canvas element.  This allows you to specify a region of your webpage as a freely drawn canvas which supports all the typical 2D drawing operations.  The result is a raster image which is drawn completely using JavaScript operations.

While Canvas has the advantage of giving you a lot more control on the client side, you’re still stuck with a raster image which needs to be redrawn from scratch for each change.  You also lose the structure of the scene (since it’s all reduced to pixels), and therefore lose the functionality provided by the DOM (e.g., CSS and event handlers).



The final option I considered (and therefore the one I chose) was using Scalable Vector Graphics (SVG).  SVG is a mark-up language, much like HTML, which is specifically designed to represent vector graphics.  You get pretty much all the same primitives as with Canvas, except these are all represented as elements in the DOM, and remain accessible via JavaScript even after everything is rendered.  In particular, both CSS and JavaScript can be applied to the elements of an SVG document, making it much more suitable for the kind of interactive graphics I had in mind.
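
As a tiny illustration of what that buys you (the class name and handler here are made up), a shape drawn in SVG remains a live DOM element which CSS rules and event handlers can target directly:

```
<svg width="200" height="100">
  <circle class="data-point" cx="50" cy="50" r="10"/>
</svg>

<style>
  .data-point { fill: steelblue; }
  .data-point:hover { fill: orange; }
</style>

<script>
  document.querySelector('.data-point')
    .addEventListener('click', () => console.log('point clicked'));
</script>
```

Doing the equivalent with Canvas would mean re-drawing the pixels yourself and doing your own hit-testing on mouse events.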

As an added incentive for using SVG, there is an excellent library called Data Driven Documents (D3) which provides an amazing resource for manipulating SVG documents with interactivity, animations, and a lot more.  More than anything else, the existence of D3 is what sold me on using SVG for my custom graphics needs.

Reversibility and Fast Decision Making

There are a number of different circumstances when it’s important to distinguish between reversible and irreversible decisions.  First, though it may seem pretty obvious, let me be clear about what I mean by each of those.

A decision is only irreversible if there’s absolutely no way to take it back again.  The somewhat clichéd example is that you can’t un-ring a bell.  However, there are plenty of more consequential decisions which are also quite permanent.  Drive drunk?  You may not have an opportunity to repent of that decision.  Unprotected sex?  You might have no way to undo the damage you’ve done to your body.

Fortunately, most decisions aren’t completely irreversible: although they may be more or less difficult / expensive to fix.  Get married to the wrong person?  That could have massive consequences, but it’s possible to fix.  Paint your house a color which turns out to be ugly?  Probably less difficult to unwind, but still not consequence-free.  Get a bad haircut?  You’re out a little bit of money, but it will fix itself in time.

And, of course, there are a huge number of decisions we make every day which are completely and readily reversible.  Not enjoying the channel you’re watching on TV?  Just flip to the next one.


One way to use this distinction is to guide you in how much effort to spend ahead of time trying to make a certain decision.  When I find myself faced with a decision, part of the process is to make exactly this evaluation.  If it’s a pretty reversible decision, I won’t let myself get too caught up in making it.  I choose the first option which seems pretty reasonable, and I move along.  On the other hand, I’ll spend quite a bit of time evaluating houses before buying one, and even more when considering changing jobs.

Another useful way to use this distinction is when you’re responsible for guiding another person (e.g., as a parent, guardian, coach, manager, executive, etc.).  If a decision is reversible, and your other person is set on making a choice you’re skeptical of, perhaps you let them go ahead anyway.  If they’re wrong, then they’ll learn something from the attempt in a much deeper way than they might have otherwise.  If they’re right, then you’ve learned something instead.  On the other hand, for a more irreversible decision, you may decide you need to intervene (e.g., a toddler climbing up on a coffee table vs. running out into the street).

Finally, this distinction can also be useful when trying to make a decision as a group.  As often happens, different people will offer up differing suggestions on how to proceed, and the best answer isn’t always clear.  In such cases, it can be very helpful to gauge how reversible the decision is.  When it’s pretty easy to back away from, it’s fine to just pick a solution (probably from the most insistent person in the group), and see how it works out.  On the other hand, if it’s a fairly irreversible decision, you may all want to slow down, gather more data, and try to be a lot more careful.

Streams in Node.js, part 2: Object Streams

In my first post on streams, I discussed the Readable, Writable and Transform classes and how you override them to create your own sources, sinks and filters.

However, where Node.js streams diverge from more classical models (e.g., from the shell) is in object streams.  Each of the three types of stream objects can work with objects (instead of buffers of bytes) if you set the objectMode parameter to true in the options argument passed to the parent class constructor.  From that point on, the stream will deal with individual objects (instead of groups of bytes) as the medium of the stream.

This has a few direct consequences:

  1. Readable objects are expected to call push once per object, and each pushed object is treated as a new element in the stream.
  2. Writable objects will receive a single object at a time as the first argument to their _write methods, and the method will be called once for each object in the stream.
  3. Transform objects get both of the changes above: on their writable and readable sides, respectively.


Application: Tax Calculations

At first glance, it may not be obvious why object streams are so useful.  Let me provide a few examples to show why.  For the first example, consider performing tax calculations for a meal in a restaurant.  There are a number of different steps, and the outcome for each step often depends upon the results of another.  The whole thing can get very complex.  Object streams can be used to break things down into manageable pieces.

Let’s simplify a bit and say the steps are:

  1. Apply item-level discounts (e.g., mark the price of a free dessert as $0)
  2. Compute the tax for each item
  3. Compute the subtotal by summing the price of each item
  4. Compute the tax total by summing the tax of each item
  5. Apply check-level discounts (e.g., a 10% discount for poor service)
  6. Add any automatic gratuity for a large party
  7. Compute the grand total by summing the subtotal, tax total, and auto-gratuity

Of course, bear in mind that I’m actually leaving out a lot of detail and subtlety here, but I’m sure you get the idea.

You could, of course, write all this in a single big function, but that would be some pretty complicated code, easy to get wrong, and hard to test.  Instead, let’s consider how you might do the same thing with object streams.

First, let’s say we have a Readable which knows how to read orders from a database.  Its constructor is given a connection object of some kind, and the order ID.  The _read method, of course, uses these to build an object which represents the order in memory.  This object is then given as an argument to the push method.

Next, let’s say each of the calculation steps above is separated into its own Transform object.  Each one will receive the object created by the Readable, and will modify it by adding on the extra data it’s responsible for.  So, for example, the second transform might look for an items array on the object, and then loop through it, adding a taxTotal property with the appropriate computed value for each item.  It would then call its own push method, passing along the primary object for the next Transform.

After having passed from one Transform to the next, the order object created by the Readable would wind up with all the proper computations having been tacked on, piece-by-piece, by each object.  Finally, the object would be passed to a Writable subclass which would store all the new data back into the database.

Now that each step is nicely isolated with a very clear and simple interface (i.e., pass an object, get one back), it’s very easy to test each part of the calculation in isolation, or to add in new steps as needed.

Streams in Node.js, part 1: Basic concepts

When I started with Node.js, I came with the context of a lot of different programming environments, from Objective C to C# to Bash.  Each of these has a notion of processing large data sets by operating on little bits at a time, and I expected to find something similar in Node.  However, given Node’s way of embracing the asynchronous, I’d expected it to be something quite different.

What I found was actually more straightforward than I’d expected.  In a typical stream metaphor, you have sources which produce data, filters which modify data, and sinks which consume data.  In Node.js, these are represented by three classes from the stream module: Readable, Transform and Writable.  Each of them is very simple to override to create your own, and the result is a very nicely factored set of classes.

Overriding Readable

As the “source” part of the stream metaphor, Readable subclasses are expected to provide data.  Any Readable can have data pushed into it manually by calling the push method.  The addition of new data immediately triggers the appropriate events, which make the data trickle downstream to any listeners.

When making your own Readable, you override the pseudo-hidden _read(size) function.  This is called by the machinery of the stream module whenever it determines that more data is needed from your class.  You then do whatever it is that you have to do to get the data and end by calling the push method to make it available to the underlying stream machinery.

You don’t have to worry about pushing too much data (multiple calls to push are handled gracefully), and when you’re done, you just push null to end the stream.

Here’s a simple Readable (in CoffeeScript) which returns words from a given sentence:

class Source extends Readable
    constructor: (sentence)->
        super()
        @words = sentence.split ' '
        @index = 0
    _read: ->
        if @index < @words.length
            @push @words[@index]
            @index += 1
        else
            @push null

Overriding Writable

The Writable provides the “sink” part of the stream metaphor.  To create one of your own, you only need to override the _write(chunk, encoding, callback) method.  The chunk argument is the data itself (typically a Buffer with some bytes in it).  The encoding argument tells you the encoding of the bytes in the chunk argument if it was translated from a String.  Finally, you are expected to call callback when you’re finished (with an error if something went wrong).

Overriding Writable is about as easy as it gets.  Your _write method will be called whenever new data arrives, and you just need to deal with it as you like.  The only slight complexity is that, depending upon how you set up the stream, you may get a Buffer, a String, or a plain JavaScript object, so you may need to be ready to deal with multiple input types.  Here’s a simple example which accepts any type of data and writes it to the console:

class Sink extends Writable

    _write: (chunk, encoding, callback)->
        if Buffer.isBuffer chunk
            text = chunk.toString encoding
        else if typeof(chunk) is 'string'
            text = chunk
        else
            text = chunk.toString()

        console.log text
        callback()

Overriding Transform

A Transform fits between a source and a sink, and allows you to transform the data in any way you like.  For example, you might have a stream of binary data flowing through a Transform which compresses the data, or you might have a text stream flowing through a Transform which capitalizes all the letters.

Transforms don’t actually have to output data each time they receive data, however.  So, you could have a Transform which breaks up an incoming binary stream into lines of text by buffering enough raw data until a full line is received, and only at that point, emitting the string as a result.  In fact, you could even have a Transform which merely counts the lines, and only emits a single integer when the end of the stream is reached.

Fortunately, creating your own Transform is nearly the same as writing a class which implements both Readable and Writable.  However, in this case instead of overriding the _write(chunk, encoding, callback) method, you override the _transform(chunk, encoding, callback) method.  And, instead of overriding the _read method to gather data in preparation for calling push, you simply call push from within your _transform method.

Here’s a small example of a transform which capitalizes letters:

class Capitalizer extends Transform
    _transform: (chunk, encoding, callback)->
        text = chunk.toString encoding
        text = text.toUpperCase()
        @push text
        callback()


All this is very interesting, but hardly unique to the Node.js platform. Where things get really interesting is when you start dealing with Object streams. I’ll talk more about those in a future post.

Gotchas: Searching across Redis keys

A teammate introduced me to Redis at a prior job, and ever since, I’ve been impressed with it. For those not familiar with it, it’s a NoSQL, in-memory database which stores a variety of data types, from simple strings all the way to sorted lists and hashes, and which nevertheless has a solid replication and back-up story.

In any case, as you walk through the tutorial, you notice the convention is for keys to be separated by colons, for example: foo:bar:baz. You also see that nearly every command expects to be given a key to work with. If you want to set and then retrieve a value, you might use:

> SET foo:bar:baz "some value to store"
> GET foo:bar:baz
"some value to store"

Great. At this point, you might want to fetch or delete all of the values pertaining to the whole “bar” sub-tree. Perhaps, you’d expect it to work something like this:

> GET foo:bar:*
> DEL foo:bar:*

Well… too bad; it doesn’t work like that. The only command which accepts a wildcard is the KEYS command, and it only returns the keys which match the given pattern: not the data. Without getting into too much detail, there are legitimate performance reasons not to support this, but it was something of a surprise to me to find out.

However, all is not lost. Redis does support a Hash data structure which allows accessing some or all properties related to a specific key. Along with this data structure come commands to manipulate both individual properties and the entire set.
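
For example, the whole “bar” sub-tree above could be stored as fields of a single hash instead (the field names here are just for illustration):

```
> HSET foo:bar baz "some value to store"
> HSET foo:bar qux "another value"
> HGETALL foo:bar
1) "baz"
2) "some value to store"
3) "qux"
4) "another value"
> DEL foo:bar
```

HGETALL fetches every field and value stored under the key at once, and a single DEL drops the whole set.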

Avoiding Us vs. Them in Problem Solving

I can’t count the number of times when I’ve seen two people trying to solve a technical problem where the real conflict is anything but technical.  Often, this starts when one person brings an idea to the table, and is trying to promote it to the group.  Then, someone else proposes a different idea.  Each goes back and forth trying to shoot down the other person’s idea and promote their own.  Perhaps there was a pre-existing rivalry, or some political maneuvering, or private agenda.  Whatever.  Sides form, and before long, the debate isn’t really about a technical solution anymore… it’s about who’s going to “win”.

Of course, no one “wins” such a contest.  It’s stressful to be part of.  It’s stressful to watch. And, worst of all, it kills the collaboration which could have produced a better solution than either “side” was proposing.  Fortunately, I’ve run across an excellent way to avoid the problem nearly 100% of the time.

The Technique in Theory

The key is to get everyone involved in the process to bring multiple ideas to the table.  This seems too simplistic to possibly work, but it does.  It could be that they come with these ideas ahead of time, or that they come up with them in a meeting.  That part doesn’t matter.  What matters is that each person comes up with more than one.  The reasons this works have a lot to do with how it forces people to approach the whole brainstorming / problem-solving process.

The Opened Mind

The first thing that happens if you insist on each person bringing multiple solutions is that it opens up each person’s mind to the possibility of there being multiple solutions.  If you, yourself, brought three different approaches to the table, it’s very hard to imagine there’s only one way to do it.  And, if you’ve already formed favorites among your own ideas, you’ve probably started to develop some criteria / principles to apply to making that judgement.  At that point, it becomes a lot easier to weigh the pros & cons of another idea you didn’t originate, and fit it into the proper place among the rest of the ideas.

Breaking Up False Dichotomies

The second useful trait of this approach is that the decision is no longer an either-or decision (a “dichotomy”).  Instead, you, yourself, have already thought up a few different approaches, and your thought partners probably came with a bunch of their own.  With that many potential solutions on the table (even if some of them are obviously very bad) it becomes a lot easier to see the complexities of the problem, and how there are a whole range of ways to solve it to a better or worse degree: or even solve parts of it to a better or worse degree in a whole variety of combinations.

Cross-Breeding Solutions

Another handy aspect of having a lot of potential solutions, and seeing the many different aspects of the problem they tackle, is being able to mix and match.  Perhaps you grab a chunk of your solution, throw in a little bit of someone else’s, and then add in a piece which neither of you thought of ahead of time.  And again: but with a different mix.  And again… and again.  Before long, you’ve all moved past the original set of ideas, and have generated a whole bunch of new ones which are each better than anything in the original set.  At this point, the question of “us vs. them” is impossible to even identify clearly.  And… you’ve got a much better solution than anyone generated alone.

Goodwill Deposits

In my last post about “goodwill accounting”, I talked about how fun experiences strengthen relationships.  The process of brainstorming with a partner who isn’t defensive, and who is eager to help, is extremely exciting and fun.  This makes substantial “deposits” for the future.


The Technique in Action

In practice, the technique is dead simple.  Guide your problem solving with three questions:

  • How many ways could we do this?
  • What does a good solution look like?
  • How do these potential solutions stack up?


When I find myself in a brainstorming / problem solving context, I’ll often start out by literally saying: “Okay… so how many ways could we do this?”  Then, I’ll start to rattle off, as quickly as I can, every different solution I can imagine: even really obviously bad ones.  Sometimes, I’ve come in with a few I thought of ahead of time, but very often not.  Others soon join in, tossing out more ideas.  I keep pushing everyone (especially myself) until everyone has given 2–3 solutions.  At this point, we really do want all the possible ways: even the ludicrous or impossible ones.

Once everyone is mute & contemplative, I’ll echo back all the ideas.  At this point, the order is so jumbled, it’s impossible to tell who added which.  Then I’ll ask: “So, what does a good solution look like?”  Now that we’ve had a chance to see some proposed solutions, it’s a lot easier to figure out what we like about some and not others.  This rapidly crystallizes into a set of criteria we can use to rank the various options.

At this point, I start pointing out how various solutions match up with our criteria.  Some get crossed off right away, and others take some discussion and even research.  We might even loop back to re-consider our criteria.  The major difference is that everyone is focused on the problem, and not on the personalities or politics.


I use this technique a lot.  Early in my career, I was (embarrassingly) often one of the pig-headed people at the table insisting on getting my own way.  Now, I jump in with my “how many ways…” question, and I’m almost never even tempted.  My own relationships with my teammates are better, and I’m often able to use this technique to help smooth over difficulties between other members of my team.  I find it invaluable, and I think you will, too.

Goodwill Accounting on Teams

A while ago, I was working at a very early stage start-up with two friends.  It was just the three of us, but both of them were in San Francisco, while I was living in Seattle.   Every month or so, I’d travel down to stay in San Francisco for a few days so that we could catch up and spend some real-world time together.  It was during this time that I noticed the phenomenon I call “goodwill accounting”.

Naturally, starting a company is stressful for anyone, and we were no different.  The odd thing I noticed was that the longer it had been since I was last down in SF, the more irritated I’d get with my teammates over little things.  Then, when I would go down to SF, I’d feel a lot closer to my friends again, and it would be a lot easier to deal with the myriad little disagreements you naturally have to sort out when building a company.

What I noticed was that it was the outside-of-work time which really made the improvement, and it started to occur to me that those times were filled with good feelings: often as a strong contrast to the stress of the work.  We’d go out to eat together, chat about things we share outside of work, laugh, joke around, get out and enjoy the city… all sorts of things.  And those events would fill up a kind of emotional reservoir of friendship and trust between us.  Or, to put it another way, we’d make a “deposit” in a “goodwill account”.

Then, when we got back to work, the natural rigors of the work would start to “withdraw” from that account.  Each disagreement, though perfectly natural and necessary for the business, would count as a “withdrawal”.  So long as we kept making “deposits”, things would generally be fine, and we could sustain the stress of building a business together.  It was only when we let the account drain off that our “checks” would start to “bounce”, and we’d start to have unpleasant disagreements and ill-feeling.

As things went along, I started actively thinking about my relationship with my two friends along these lines quite explicitly, and I’d start to deliberately arrange to make “deposits” in the accounts I had with each of them as much as possible.  Sadly, sometimes the rigors of founding a company would result in “bounced checks”, but it helped a lot to have this metaphor for understanding what was going on and a way to correct the situation.


The lesson here is to keep in mind your “goodwill account” with the people around you.  This is part of the one-on-one relationship you have with each person you spend time with (whether a colleague, spouse, child, or friend).  Any time you do fun things, share something special, or enjoy time spent together, you make “deposits”.  Any time you disagree, get on each other’s nerves, or cause one another bad feelings, you make a “withdrawal”.  For some relationships, it’s easy to keep the books balanced, but for other relationships (especially work relationships), you have to be more deliberate.  When you notice a relationship starting to go south, consider whether you’re “in the red” with that person, and what you can do to add something back into the account.

Recipes: Chicken Tikka Masala

One of my other hobbies is cooking, and I’ve been collecting recipes for a while now.  As I try new recipes, I re-write the ones I like into my own online cookbook.  Naturally, being a software engineer, I created a little website to keep them organized.  Of course, having the source code on my hard drive, I was just one command away from having a version-controlled repository using git.  From there, it was an easy step to publish it to GitHub, and from there to activate the GitHub Pages feature to make it a public website.

Besides being intensely geeky, this has actually been super helpful!  I’ve found myself at a friend’s house where we decide we want to cook something together, or I’ve wanted to share a recipe with someone.  This makes it super easy to do so.  So… I’m sharing my latest addition with you!

I didn’t actually make up the recipe (you can see the source at the bottom), but I have adjusted it a bit to suit my own taste, and I re-arranged it to fit into my cookbook’s design.  Enjoy!

Building a Board & Weekly Progress Reports

In starting a self-funded company, I didn’t automatically get a board by virtue of taking on an investor, but I still found that I really wanted a way to get feedback and advice from more experienced people I know and trust.

To that end, I started asking friends and colleagues if they’d be willing to receive a weekly progress report from me, and occasionally field a few questions. Having worked with a number of generous and savvy people, I quickly got about two dozen people on my list.

Starting from my first week “on the job”, I sent an email to this group roughly¹ once a week detailing how things were going.  This email has three broad sections:

  • A narrative description of the big issue(s) I dealt with that week. This might be describing some engineering issue, a new feature, or how some recent customer interviews went.  If applicable, it contained pictures and/or videos of recent progress.
  • A bulleted list in three sections: what I accomplished, what I’m still working on, and what’s next.
  • An evaluation of my own emotional state (optimistic? disheartened? frustrated? elated?) and a brief outline of why and how I got that way.

Including the middle section was especially helpful as it forced me to get very clear about past/present/future each week.  It also gave me an impetus to really complete things so I could include them in the accomplishments section, thereby giving me a sense of accountability.

When I sent these emails out, I almost always got back a few responses, which ranged from actual solutions to issues I was having to simple words of encouragement (which were still much appreciated!).

¹ except when nothing much had happened, and I was just working through the same plan as the previous week