Using Exceptions Robustly

Before object-oriented programming (OOP), error conditions were commonly reported via a return code, an OS signal, or even by setting a global variable. One of the most useful notions introduced by OOP is the ‘exception’, because exceptions drastically reduce the mental load of handling error cases.

In the old style of error handling, any time a function was called which could result in an error, the programmer had to remember to check whether an error occurred. If he forgot, the program would probably fail in some mysterious way at some point later down the line. There was no mechanism built into the language to aid the programmer in managing all the possible error conditions. This often meant that error handling was forgotten.

The situation is much improved with exceptions, primarily because they offer a fool-proof way to ensure that error-handling code is invoked (even if it is code for reporting an unhandled exception). This both makes it unnecessary to remember to check for errors and increases the cohesion of such code (i.e. it can be gathered into “catch” blocks instead of mixed in with the logic of the function). Both of these help preserve the unit economy of the author and reader of the code.

Unfortunately, despite being such a useful innovation, exceptions are often abused. We’ve all seen situations where one must catch three different exceptions and do the same thing for each. We’ve all seen situations where only a single exception is thrown no matter what goes wrong, and it doesn’t tell us anything about the problem. Both ends of the spectrum reflect a failure to use exceptions with the end user of the code in mind.

When throwing an exception, one should always keep two questions in mind: “Who is going to catch this?” and “What will they want to do with it?”. With this in mind, here are a number of best practices I’ve seen:

Each library should have a superclass for its exceptions.

Very frequently, users of a library aren’t going to be interested in what specific problem occurred within the library; all they’re going to want to know is that the library either did or didn’t do its job. In the latter case, they will want the process of error handling to be as simple as possible. Having all exceptions in the library inherit from the same superclass makes that much easier.
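As a rough sketch in Java (the class name here is invented for illustration), the superclass itself can be trivial:

// Superclass for every exception this hypothetical library throws.
public class WidgetLibraryException extends Exception {
    public WidgetLibraryException(String message, Throwable cause) {
        super(message, cause);
    }
}

A caller who only cares whether the library did its job can then write a single catch (WidgetLibraryException e) block, no matter which specific subclass was actually thrown.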

Create a new subclass for each distinct outcome.

Most often, exception subclasses are created for each distinct problem which can arise. This makes a lot of sense to the author, but it usually doesn’t match what the user of the code needs. Instead of creating an exception subclass for each problem, create one for each possible solution. This may mean having exceptions to represent: permanent errors, temporary errors, errors in input, etc. Try to consider what possible users of the component will want to do with the exception, not what the problem originally was.
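Continuing the Java sketch from above (again, with invented names), the subclasses are named for the caller’s recovery path rather than for the internal failure:

// The caller can safely retry the operation later.
public class TransientLibraryException extends WidgetLibraryException {
    public TransientLibraryException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Retrying won’t help; the caller should report the failure and give up.
public class PermanentLibraryException extends WidgetLibraryException {
    public PermanentLibraryException(String message, Throwable cause) {
        super(message, cause);
    }
}

A caller can now retry on the first and alert someone on the second, without knowing anything about what actually went wrong inside the library.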

Remember that exceptions can hold data.

In most languages, exceptions are full-fledged classes, and your subclasses can extend them like any other parent class. This means that you can add your own data to them. Whether it is an error code for the specific problem, the name of the resource which was missing, or a localization key for the error message, including specific data in the exception object itself is often an invaluable means of communicating data which would otherwise be inaccessible from the ‘catch’ block where the exception is handled.
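For instance (continuing the sketch, with a hypothetical resource-lookup failure), the exception can carry the name of the missing resource along with it:

public class ResourceMissingException extends PermanentLibraryException {
    // The name of the missing resource, available to the catch block.
    private final String resourceName;

    public ResourceMissingException(String resourceName, Throwable cause) {
        super("missing resource: " + resourceName, cause);
        this.resourceName = resourceName;
    }

    public String getResourceName() {
        return resourceName;
    }
}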

Exceptions should be self-describing in logs.

In most applications, when an exception is finally caught (i.e., not to be re-thrown or wrapped in another exception), it should be logged. The output produced should be as descriptive as possible, including:

  • a plain-English description of what happened
  • the state of any relevant variables in play
  • a full stack trace of where the error occurred
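With a logging library such as SLF4J, for example, all three can be captured in a single call (a sketch; the logger, library, widgetId, and userId are assumed to exist in the surrounding code):

try {
    library.render(widgetId);
} catch (WidgetLibraryException e) {
    // The message gives the plain-English description and the relevant state;
    // passing the exception as the final argument logs the full stack trace.
    logger.error("Failed to render widget {} for user {}", widgetId, userId, e);
}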

The Secret History of Women in Coding

For years now, I’ve been following the thread of stories about the overwhelming imbalance of white men in the software industry. Over and over, I’ve seen the questions: “How did this start? Where do we fix it?”. This is the first article I’ve read which makes a serious, well-researched effort to actually answer those questions. If, like me, you’ve been concerned about this subject, I highly recommend spending some quality time reading it over.

I’ll take a stab at summarizing some of the high points, but I urge you to go read the full article yourself.

In the early days of computing, the hard part was the hardware. So, that was an area which was pretty strictly segregated along sexist lines. However, writing the software was seen as less challenging, and therefore the process of identifying programmers was mostly done using purely objective measures: usually a battery of tests which determined an applicant’s abilities to solve logic problems. In these tests, men and women tended to score equally well, and so the earliest programmers included a great many women.

In fact, sexism crept in here as well… to favor more women programmers. Since the software part was seen (erroneously) as being largely secretarial in nature (it definitely isn’t), women were often favored for such positions. It wasn’t uncommon to see these massive, room-sized machines tended to by all-female teams of programmers.

This persisted right up to around the personal computer revolution sparked by Apple and IBM in the late 70’s and early 80’s. Before this point, computers were exclusively found at large companies and universities. All students or job applicants hoping to work with them were expected to come in with no experience, and to be taught along the way. This all changed when most families could afford to have a computer in the home.

With computers being commonplace in the home, the gender biases of the typical American family started to exert a massive influence over who would eventually become programmers. Young boys would be encouraged to play with the new machines, while girls would be steered toward more typically feminine pursuits. By the late 80’s and early 90’s, this led to college computer science departments starting to see a huge influx of freshmen who already knew a fair bit about computers: most of them men.

According to the article, this is what started the feedback loop which drove women out of computer science. Overwhelmingly, the students who seemed the most precocious were male, and the professors started to offer preferential treatment to those students. Not surprisingly, this created a very hostile environment for students starting the program without any computer experience: largely women.

Predictably, 4–5 years later, the same situation started to play itself out in industry. Programming jobs were increasingly seen as the province of men, and those women talented and brave enough to break into the industry faced an increasingly uphill battle as the balance of men to women continued to shift. Combined with programming increasingly being seen as a challenging intellectual endeavor, the latent sexism already present led to women being increasingly ignored, trivialized, and passed over. Over the next 10 years, you start to see the alarming figures of 80%–90% men in technical positions at the largest, top-tier companies in the software industry.

What to do?

Even better than merely recounting this dismal history, the article actually talks about some places which are successfully counteracting this trend. In particular, I was impressed with Carnegie Mellon’s approach (which has resulted in 40% of students in the CS department being female). Recognizing the difficulty faced by new students entering the program without prior programming experience, they’ve started offering different classes to incoming freshmen based upon their level of prior experience. By the time these students have gotten through the first half of the degree, they’ve pretty much all equalled out based upon their own natural talent.

The article talks about a number of other positive steps being taken as well, but not without pointing out that many other problems remain unsolved.

✧✧✧

I do not accept guilt by association. So, despite the villains of this piece being male, I do not personally feel guilty for sharing some coincidental characteristics with them. Instead, I see a group of people who have been treated unfairly for far too long, and I find myself in the position of being able to say something against it. This is not about hating men, nor wanting to coddle women. It’s about hating injustice, and wanting to put a stop to it in whatever form, no matter how similar the perpetrator, or how different the victim.

I am proud to look back to all the women mentioned in this article (and the much larger number who weren’t) as fellow engineers who made my career possible. I appreciate them for it, and I’m glad to pass along word of their accomplishments.

One of my parents died

I just found out that one of my parents died last night.

Fortunately, it wasn’t my father. He’s been the one solid backdrop of my life from the moment I was born. For a long stretch between when he divorced when I was 5 and remarried when I was 12, he was the only solid thing in my life. He’s a man of simple virtues, deeply held. He instilled in me, from my earliest memories, a deep and life-long value of honesty, hard work, humor, and family. To this day, I marvel at how he raised two challenging young boys on his own, while dealing with the pain of a messy divorce. He means a great deal to me, and I’d be a wreck right now if I’d just lost him. I’m getting choked up right now even thinking about the possibility. Fortunately, it wasn’t my father.

Fortunately, it wasn’t my mother. I only met her at 11 years old when she and my father started dating. I’m not sure I would have had the courage to jump into a family with two almost feral boys and with a father working two jobs to keep it all together, but she did. She instilled in me a love of culture: fine music, theater, fine dining, cultured manners. She also turned me from a bright, but indifferent student to someone who excelled in school and graduated with multiple honors. On my 18th birthday, we went down to the city hall and adopted each other, and she’s been my mother ever since. I’d be devastated if I’d just lost her. Fortunately, it wasn’t my mother.

It was my father’s first wife, my natural mother. She walked out on our family when I was 5. She was already seeing another man who was an alcoholic and heavily into drugs. Soon after, they married and moved away. My memories of the rare occasions when my brother and I would go stay with them are not pleasant. He was abusive to my mother, and while not abusive to us, still terrifying. She continued down the path of drugs and alcoholism as I became an adult and started my own family. Right around the time my own son was born, I broke off my relationship with her forever.

That was over 15 years ago. I haven’t seen her, or talked to her, since. The little bits and pieces I hear through my cousins have made it clear that nothing had changed with her. And, last night, her long-abused body finally gave up.

Am I sad? No, not really. That may seem callous now, but I did my grieving for her as a 5-year-old boy. And, how I grieved. I didn’t understand who she was, or why she was gone, but my mother—half of my universe of trusted people—was gone. I wished and longed for my parents to be reunited, no matter the shouting and arguments. Eventually, as I grew older and started to understand more, that feeling settled into anger. Finally, as an adult facing those same choices, my feelings changed into disgust.

So, to me, my natural mother died over 30 years ago. I cried. I mourned. And I finished a long time ago. Now, I don’t have anything left for the person who didn’t want to be my mother all those years ago.

✧✧✧

My real parents are those people who choose to love me. They are the people who gave of their own character to shape mine, and to set me on the best course in life that they could possibly manage. They are the people who have walked the tightrope with me of growing up and striking out on my own. They’ve been with me as I’ve built my career, and as I’ve grown my own family. I literally have tears streaming down my face as I write this: the depth of feeling I have for these two people is so overwhelming.

I regret that my experience with my natural mother has made it difficult to say in person to my real parents what they so richly deserve:

I love you both much more deeply than I could ever express in person, and much more than these written words convey. Thank you for choosing to be my parents.

Presentation timing card

I recently gave a newly-developed presentation for the first time, and I was concerned that I had too much material to fit everything in. I was going to be giving this presentation at work, and it was supposed to fit in people’s lunch hour, so it was particularly important that I get the timing right. To be sure I got everything to fit, I developed a “presentation timing card” for myself.

To start, I opened a spreadsheet and listed out all the major sections of the presentation. For each one, I went back through my slides for that section and came up with an estimate of how long I thought I’d need to cover each. I wrote that down next to the name of the section.

Next, I used the spreadsheet to sum up all the estimates. No surprise, I was way over the time I actually had. So, I went back through each section, and adjusted the timings to deliberately allocate the number of minutes I actually had across all of them. There was no way to avoid having less time than I thought I’d want to have in each section, but it forced me to make some careful choices about what was really important to cover at what depth.

Now that I had some realistic numbers, I added two more columns to the spreadsheet: the times I should start and stop each section. The first section’s start time was just the beginning of the talk. Each section’s end time was its start time plus the duration I’d assigned, and each subsequent section began when the previous one ended.
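For a lunch-hour talk, the finished card might look something like this (the section names and numbers are invented for illustration):

Section               Minutes   Start   End
Introduction                5   12:00   12:05
Core concepts              20   12:05   12:25
Worked example             15   12:25   12:40
Audience exercise          10   12:40   12:50
Wrap-up & questions         5   12:50   12:55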

✧✧✧

Having finished with the spreadsheet, I printed it out so that it just about fit on a playing card. When I actually gave the presentation, I propped the card up where I could keep an eye on it as I went along. I was surprised and very pleased with how easy it became to hit the right pace for each section. Even with two exercises for the audience and random questions throughout, I was able to finish each of three presentations of the material a few minutes before the end of the session. I will definitely be using this technique again!

Git 201: Safely Using Rebase

This post is part of a Git 201 series on keeping your commit history clean.  The series assumes some prior knowledge of Git, so you may want to start here if you’re new to git.

✧✧✧

The rebase tool in git is extremely powerful, and therefore also rather dangerous. In fact, I’ve known engineers (usually those new to git) who won’t touch it at all. I hope this will convince you that you really can use it safely, and it’s actually very beneficial.

What is rebase?

To start, I’ll very briefly touch on what a rebase actually is: a way to rewrite the history of a branch. Let’s assume you’re starting out with a standard working branch from master:
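A---B---D---F        ← master
     \
      C---E---G      ← your branch

(A rough sketch of the commit graph; the letters name commits, and master has gained D and F since you branched at B.)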

At this point, let’s say you want to update your branch to contain the new commits on master (i.e., D and F), but you don’t want to create a merge commit in the middle of your work. You could rebase instead:

git rebase master

This command will rewrite your current branch as though it had originally been created starting from F (the tip of master). In order to do that, though, it will need to re-create each of the commits on your branch (C, E, and G). Remember that a commit is a set of changes applied to some prior commit. In our example, C is the changes applied to B, E contains changes applied to C, and G contains changes applied to E. Rebasing means changing things around so that C actually has F as a parent instead of B.

The problem is that git can’t just change C’s parent because there’s no guarantee that the changes represented by C will result in the same codebase when applied to F instead of B. It might be that you’d wind up with some completely different code if you did that. So, git needs to figure out what result C creates, and then figure out what changes to apply to F in order to create the same result. That will yield a completely new commit which we’ll call CC. Since E was based upon C, which has been replaced, git will need to create a new commit using the same process, which we’ll call EE. And, since G was based upon E, we’ll likewise need to replace it with GG. Once all of the commits have been created, git moves the branch pointer to the end of the newest commit:
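A---B---D---F                  ← master
             \
              CC---EE---GG     ← your branch (after the rebase)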

While all this seems complicated, it’s all hidden inside of git, and you don’t really have to deal with any of it.  In the end, using rebase instead of merge just means changing a single command, and your commit history is simpler because it appears as though you created your branch from the right place all along.  If you’d like a much fuller tutorial with loads of depth, I’d recommend you head over here.

Rebasing across repos

If you’re working on a branch which only exists locally, then rebasing is pretty straight-forward to work with. It’s really when you’re working across multiple clones of a repo (e.g., your local clone, and the one up on GitHub) that things become a little more complicated.

Let’s say you’ve been working on a branch for a while, and somewhere along the way, you pushed the branch back to the origin (e.g., GitHub). Later on, though, you decide you want to rebase to pick up some changes from master. That leaves you in the state we see in this diagram:
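origin:  A---B---C---E---G                 ← the branch as you pushed it
local:   A---B---D---F---CC---EE---GG      ← the branch after your rebase

(A rough sketch of the two copies of the branch, using the same commit names as before.)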

If you were to pull right now, git would freak out just a bit. Your local version of the branch seems to have three new commits on it (CC, EE, and GG) while it’s missing three others (C, E, and G). Then, when git checks for merge conflicts, there’s all sorts of things which seem to conflict (C conflicts with CC, E conflicts with EE, etc.). It’s a complete mess.

So, the normal thing to do here is to force git to push your local version of the branch back to origin:

git push -f

This is telling git to disregard any weirdness between the local version of the branch and origin’s version of the branch, and just make origin look like your local copy. If you’re the only one making changes to the branch, this works just fine. The origin gets your new branch, and you can move right along. But… what if you aren’t the only one making changes?

Where rebasing goes wrong

Imagine if someone else noticed your branch, and decided to help you out by fixing a bug. They clone the repository, checkout your branch, add a commit, and push. Now the origin has all your changes as well as the bug fix from your friend. Except, in the meantime, you decided to rebase. That would mean you’re in a situation like this:
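origin:  A---B---C---E---G---H             ← with your friend’s fix (H)
local:   A---B---D---F---CC---EE---GG      ← your rebased branch, missing H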

Now you’re stuck. If you pull in order to get commit H, you’re going to have all sorts of nasty conflicts. However, if you force push your branch back to origin (to avoid the conflicts), you’re going to lose commit H since you’re telling git to disregard the version of the branch on origin. And, if your friend neglected to tell you about the bug fix, you might do exactly that and never even realize.

Solution 1: Communicate

The best way to fix the problem is to avoid it in the first place. Communicate clearly with your teammates that this branch is a working branch, and that they shouldn’t push commits onto it. It’s a good idea for teams to adopt some clear conventions around this to make this kind of mistake hard to make (e.g., any branch starting with a username should only be changed by that user, while branches with “shared”, “team”, or some other agreed prefix are expected to have multiple contributors).

If you can’t be sure you’re the only one working on a branch, the next best thing is, before starting the rebase, to talk with anyone who might be working with the branch. Say that you’re going to rebase it, and what they should expect. If anyone speaks up that they’re working on changes to that branch, then you know to hold off.

Once everyone has pushed up any outstanding changes, pull down the latest version of the branch, rebase, and then push everything back up as soon as possible. That looks like this:

git checkout mybranch
git pull
git rebase master
git push -f

Once you’ve finished, you’ll want to tell the other people working on the branch that they need to get the fresh version of the branch for themselves. That looks like:

git checkout master
git branch -D mybranch
git checkout mybranch

Solution 2: Restart the rebase

If you find yourself having just rebased and only then learn there are upstream changes you’re missing, the simplest way out of this difficulty is to ditch your rebase. Go back, pull down the changes from the origin, and start over (after referring to solution 1). That would look something like this:

git checkout master
git branch -D mybranch
git checkout mybranch
git rebase master
git push -f

This will switch back to master (1), so that you can delete your local copy of the branch (2), and then grab the most recent version from the origin (3). Now, you can re-apply your rebase (4), and then push up the rebased branch before anyone else has a chance to mess things up again (5).

Solution 3: Start a new branch

If you find that you want to rebase right away, and don’t want to wait to coordinate with others who might be sharing your branch, a good plan is to isolate yourself from the potentially shared branch first, and then do your rebase.

git checkout mybranch
git checkout -b mybranch-2

At this point, you’ve got a brand new branch which only exists on your local machine, so no one else could possibly have done anything to it. That means you can go ahead and rebase all you like. When you push the branch back up to origin (e.g., GitHub), it will be the first time that particular branch has been pushed.

Of course, if someone else has added a commit to the old branch, it will still be stuck over there, and not on your new branch. If you want to get their commit on your new branch, use git’s cherry-pick feature:

git cherry-pick <hash>

This will create a new commit on your branch which will have the exact same effect on your branch as it did on the old one. Once you’ve rescued any errant commits, you can delete the old branch and continue from the new one.

✧✧✧

I hope this makes rebasing less scary, and helps you get a sense of when you’d use it and when not. And, of course, should things go wrong, I hope this gives you a good sense of how to recover.

Two last bits of advice… First, before rebasing, create a new branch from the head of the branch you’re going to rebase (a single command; see below). That way, should things go completely wrong, you can just delete the rebased branch, and use your backup. And, finally, if you’re in the middle of a rebase which seems to be going a little nuts, you can always bail out by using:

git rebase --abort
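For that backup branch, by the way, one command run before you start is enough (assuming your working branch is called mybranch; the backup name is up to you):

git branch mybranch-backup mybranch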

So, feel free to experiment!

The Importance of Context

When studying history, the first rule of intellectual honesty is to never drop the context of the time period being studied. We stand at the end of a long line of people who screwed things up, figured out what went wrong, and came up with a better solution. We are the inheritors of thousands of years of learning in every area of human endeavor: including morality. When studying history, any time you indignantly ask the question “How could they?”, it is imperative to stop yourself and ask the question again with curiosity instead. Really… how did it come to pass that people in a prior age thought it right and natural to act in ways we find foreign or even immoral now?

We can (and should) look back with our modern eyes and pass judgement on the moral systems people have used in the past. Most moral codes for most of history were atrocious by our modern moral understanding. However, when judging individual members of those societies, we must not lose our perspective and judge them by standards they never even knew existed. One can only judge a person from a prior historical period by asking whether they faithfully adhered to the best moral code they knew about and/or whether they helped to advance our understanding of morality as such.

This does mean that certain historical figures, though perhaps despicable when judged by our modern standards, were moral and virtuous in their own time. It is important that we judge the moral system, not the person who could have known no better.

Considering Women in History

When thinking of the treatment of women through history (just to pick one minority), we must apply the same respect for context we would for any other historical study. We can (and should) judge historical societies’ moral codes based upon their respect for women. However, we can only judge individual people for having better or worse views and actions compared to others who shared their context.

For example, a person who was skeptical of a woman’s right to vote in England of 1880 is hardly a villain when judged by the moral standards of that time. We now find that position repugnant, but not the person who holds it. Needless to say, a person in a modern context who held such a view would (rightfully) be considered morally bankrupt. Conversely, a person who was enlightened enough, in that place and time, to support women’s suffrage wasn’t merely a normal, decent person (as they would be today), but one of unusual foresight and virtue.

Notice that I very deliberately used the word “person” throughout that example. We must remember that the suffragettes were themselves unusually foresighted and virtuous even among the women of their day. Many women of the time were as skeptical of such things as “votes for women” as their spouses. They too were not villains, but people of ordinary character and understanding: for their own time.

But what about…

The really interesting question is: what other moral issues were, at one point, perfectly acceptable, but are not any longer? For example, homophobia was once not only perfectly acceptable, but actively encouraged and legally enforced. However, in the United States today, LGBT+ people are legally protected (in many jurisdictions) and homophobia (in most communities) is actively regarded as backward and immoral. When did that moral stance shift? How did it happen? At what point do we consider someone who was slow to make the shift immoral?

Thoughts on Toxic Masculinity

I recently saw the Gillette commercial about toxic masculinity, and it’s gotten me thinking, especially when viewed alongside the Egard Watches response video. I highly recommend you go watch both of them before continuing to read here.

The perspectives in both are reflected by the polarized responses I’ve been seeing since the Me Too movement picked up steam. Any time I see such extreme reactions to the same thing (the commercial, especially) among people who normally agree about many things, it makes me stop to ponder what’s going on.

Personally, I find it very easy to have enormous sympathy with the Me Too movement.  It is sadly all too easy to find many, many examples of women being treated unjustly in every era, and in every civilization which has ever existed.  Indeed, “unjust” hardly begins to describe centuries of disregard, disenfranchisement, oppression, torment, slavery, mutilation, rape, and murder which women have suffered across the span of human history. Given that the perpetrators have been overwhelmingly male, it’s all too easy to take a dim view of masculinity in general.

However, it is also true that many brilliant, talented, moral, and courageous men have moved our species forward in leaps and bounds. Many of these men were the ones who fought against oppressors of every sort (both literally and figuratively). Indeed, many of them fought, specifically, to oppose the tremendous injustice meted out to women by other men of their time. Taking either the view that all men are monsters or that all men are innocent is too simplistic.

I view “toxic masculinity” as being what the philosopher Ayn Rand called a package deal: that is, a bunch of concepts grouped together with the effect (usually deliberate) of damning the good by linking it with the evil. In this case, the “package” contains a lot of attitudes, beliefs, and cultural norms, each of which has been held by some individual men. However, not all men exhibit all these traits, and, in fact, it’s very common for the negative traits to be concentrated in certain individuals, and the positive ones in others.

But let’s get specific here. When I think of traits considered typically “masculine”, I get something like this:

  • physical traits (size, strength, body shape, genitals)
  • self-control
  • competence
  • courage
  • protectiveness
  • resilience

However, when I think of the kind of behaviors associated with the phrase “toxic masculinity”, I get a very different (and mostly incompatible) list:

  • sexism & misogyny
  • homophobia
  • bullying
  • excessive use of drugs & alcohol
  • macho toughness

I think this is the heart of the division I see between people reacting to this issue. When someone says “masculine”, which of these two lists pops up in their head? You can easily tell by the litmus test of these two videos.

What I find especially fascinating and useful, is to construct a similar list using the phrases “feminine” and “toxic femininity”. To my mind, the first list is nearly identical, while the second list has its own (and different) set of revolting behaviors.

My point, really, is that using deliberately leading phrases like “toxic masculinity” or “toxic femininity” doesn’t actually help what is really an admirable goal: to eliminate the specific nasty behaviors associated with those phrases. At best, they serve to stir up animosity and misunderstanding between people who probably have the same goals at heart. At worst, they create a completely useless debate between people wanting to define “masculine” as meaning the first list versus the second.

Instead, I would urge people to discard the “package deal”, and focus on the real problems specifically, and one-by-one: sexual harassment, homophobia, bullying, and all those other behaviors we should no longer tolerate as a rational, civil society.

What I learned from Edward Tufte

I recently was able to attend Edward Tufte’s seminar on presentations and data graphic design.  This blog post covers the essential elements I took away from the lecture.

On Space vs. Time

When one has a great deal of content to convey to an audience, it cannot be blurted out all at once and in the same spot: it must be spread out over either space or time (or both). A slide deck spreads out content in time, using the same space over and over. A document or web page shows everything at once, spread out in space. Documents play to human beings’ strengths, while slide decks play to their weaknesses.

Humans have a natural ability to visually consume a complex field of data by instantly shifting modes from high-level scanning to detailed inspection and back again.  This makes it possible—and even quite easy—for a person to scan through a long document, identify sections of interest, and dive into that piece for a closer look. The same is true for presenting many hundreds or thousands of data points in a graph; the viewer can rapidly scan the overall structure of the data and zoom in to particular interesting details.  With information arranged with spatial adjacency, it is easy for people to compare and contrast, scan and examine, and learn most efficiently.  This leads to a very high throughput of data transfer from the author to the audience.

Humans, on the other hand, have a limited ability to precisely remember detailed data for any length of time. They also have a limited attention span: particularly when presented with data which is either confusing or boring to them. This makes it very difficult for a person to hold the context required to compare data presented sequentially over a series of slides. The needs of a slide presentation (i.e., a limited space per slide which must be legible at a distance) mean that the content is broken into tiny chunks and widely distributed over time as the presenter talks over each point: often re-iterating the content on each slide slowly (relative to the speed of reading). It is impossible for any individual listener to speed up or slow down the presentation to suit their needs, or to scan ahead to answer a question, or to skip back to revisit an unclear point. This leads to a very low throughput of relevant data transfer from the author to the audience.

It is highly preferable, therefore, that information displays maximize “spatial adjacency” of material with a visually dense presentation with varying levels of headers, “data paragraphs”, and whitespace to allow viewers to readily identify and select from large blocks of content at a single glance.

On Giving Presentations

For presentations of virtually any scale, it is far better to provide a narrative document (i.e., like this one) instead of a slide deck. The document should be 2–6 pages long, and include all the information to be discussed, integrated into a single flow. Tables of numbers, charts, graphs, pictures, etc. should all be integrated with the narrative description of the subject matter. In all cases, references should be included to source materials, primary sources, etc. The document should be written to be a permanent record of what was discussed, and therefore should be complete and self-contained.

For the actual presentation, the meeting should begin by handing out copies of the actual document to each attendee. This is followed by a study-hall session long enough for everyone to carefully read the document and make note of any questions, thoughts, or disagreements. Once everyone is ready, the remainder of the meeting is not spent re-hashing what was just read, but instead is spent discussing those questions, thoughts, and disagreements each person noted while reading.

The primary advantages of this style of meeting are:

  1. People can read through the document at their own pace, and to serve their own needs.  Sections irrelevant to a certain person can be skimmed, while sections of intense interest can be lingered over carefully.
  2. People can easily jump ahead to see if a question is answered later, or skip back for extra clarity on a point they may have misunderstood.
  3. People read much more quickly and with much greater throughput than can be presented aloud, so meetings can often be shorter.
  4. The document serves as a permanent record of what was presented which everyone can take away with them to refresh their memories later.

On Judging an Information Display

“The purpose of information display is to assist people in reasoning about the content.”

— Edward Tufte

When judging an information display (i.e., charts, graphs, tables, etc.), people judge both the quality of the data and the reliability of the presenter.  To establish both, apply these six principles both when making and consuming an information display.

show comparisons, contrasts, and differences

The information display should be deliberately designed to make it easy to compare various data sets or points within each data set.  The author should be thoroughly conversant with the data, and deliberately highlight those points of contrast which are most surprising, interesting, or useful.

show causality, mechanism, explanation, and systematic structure

Information displays should endeavor to show how certain data sets were the cause of other data sets.  In charts, for example, one can use labeled arrows to not only show the direction of causality, but also to describe the mechanism or process by which it happened.  On graphs, this can be a block of text describing some causal connection with an arrow pointing to where this is shown in the data.

show multivariate data (i.e., 3 or more variables)

The real world is complex, and includes a lot of interconnections between different data sets.  Information displays should attempt to draw in as many of these various data sets as possible to show the interconnections between them (see: Minard).

completely integrate words, numbers, images, diagrams, etc.

When helping someone understand a data set, it is very unhelpful to segregate data based upon its source, format, or media.  Instead, pull all sources of data into the single information display so that they can be compared side-by-side with the other data relevant to the story.  Data labels and other text should be integrated into the data display whenever possible instead of being relegated to sidebars, legends, or other documents.

document the display thoroughly

The reader should be left with no questions about what it is they are seeing or where it came from. This often requires extensive textual, even narrative, explanations included within and alongside the information display. A title, the author’s name, units for all numbers, and links to source data are a minimum. One may also find it helpful to include a paragraph explaining the principal features of the data set, interesting comparisons to make, or surprising results.

presentations stand or fall based on the quality, relevance, and integrity of the content

Showing the content in the clearest and most accessible fashion should be the only purpose of an information display.  Design for the sake of design should be avoided at all costs. Any extra line, letter, or decoration should be eliminated if it doesn’t serve to help the reader understand the data better.  The data will tell the story better without confusing or distracting embellishment.

✧✧✧

Naturally, this only covers the most essentialized version of what Tufte presents over the course of his full-day lecture. Along with the lecture, you receive his four published works on data display.

I found the lecture both extremely informative and productive. I came home bursting with ideas on how to improve the data displays of the various projects I was working on, and I’ve been able to put his precepts to good use on a number of occasions since. I highly recommend attending if he comes to a city near you.

Atomic Habits by James Clear

I recently picked up the audiobook version of “Atomic Habits” by James Clear. While I’ve only just started my second listen through, I already think it will become one of the most influential books I’ve read: right behind “Getting Things Done” by David Allen, and “The Fountainhead” by Ayn Rand.

The basic premise of the book is that while goals are great for setting a direction, they are really lousy as a means of achieving anything. Instead, success comes from changing your daily habits—sometimes in very tiny ways—so that they accumulate, inevitably, and almost without effort, into success. This is accomplished by dissecting the life-cycle of a habit, and taking specific actions for each stage to ensure a new habit sticks. The same applies to habits you’d like to break: just apply the opposite actions for each stage to break the habit.

What impresses me the most about this book is its specificity. A lot of self-help books do a great job of laying out some interesting ideas or principles, but then fail to help the reader make the jump to practicing what is written. Not here. Every chapter starts with some motivating anecdote, then describes the principle involved, and then works through several different ways to put things into practice. Each chapter includes various kinds of mental exercises, checklists, and specific actions to take.

Another thing I like is that the author fully understands how challenging it is to jump in at the deep end of creating some complex new habit (or breaking a very familiar one). He talks through various ways to simplify the process of easing into the new habit so that it doesn’t require tremendous willpower to accomplish. Just a slow process of continuous improvement from very easy steps to more complex ones.

It’s not a long book at all, just 5½ hours in the audiobook version, and it’s caught my brain on fire with future possibilities. I highly recommend giving it a read.

GTD: Mastering Capture

I started using the Getting Things Done (GTD) method for staying organized almost 10 years ago now. Since then, I’ve learned a lot. This is part of a series describing where I’ve gotten to with my own GTD practice.

✧✧✧

The first step in the GTD workflow is capture. This means writing down anything and everything you run across which may have value at some point in the future. These days, this is almost 100% electronic. In order of frequency, I capture by:

  • adding an entry to my Things app (on iOS or MacOS)
  • sending an email to myself
  • telling Siri on my iPhone: “Hey Siri, remind me to…”
  • asking my wife to email me a reminder
  • putting a physical object in a conspicuous place (e.g., my inbox)

Given my job, I spend most of my time in front of a computer, and any time I’m not actually in front of a computer, I pretty much always have my phone on me. Since I use Things as my “trusted system”, it’s often most useful to just use the capture tool built into that app (whether on the computer or on the phone).

If, for some reason, I’m not sitting at a computer with Things on it, I’ll just email myself and process it later.

If I’m driving, or otherwise not able to type, I’ll tell Siri to remind me, and use Things’ integration with the Reminders feature to automatically sync.

My wife also uses GTD, so we very often discuss things with one another and then send an email to capture the request.

And, finally, if the thing to do actually involves a physical object, I’ll use the object itself as the means of capture. For paper mail, I have a wooden box on my desk. This also works for items to repair, and other small items which need my attention. If it’s something I need to take into work, for example, I’ll just make sure to put it next to where I put on my shoes and coat.

✧✧✧

While that pretty well covers what I do for my capture step, it’s worth noting that I find it absolutely essential to separate the capture step from the processing step. If I mix these up, it creates a real impediment to capturing effectively and/or processing effectively.

The processing step is really the most thought-intensive part of GTD, and often requires a decent amount of time which you don’t always have in the moment you need to capture something. For example, if I’m in a meeting and hear something I need to follow up on, I need to make the shortest and quickest note possible so that I can return my attention to the person speaking. I literally don’t have time to think through all the processing steps.

By keeping the two separate, I can record some very tiny number of words in my inbox, and return my attention to what’s going on. I avoid both missing out on capturing altogether as well as entering something half-baked into my trusted system… which would, of course, become less trustworthy as a result!