I just found out that one of my parents died last night.
Fortunately, it wasn’t my father. He’s been the one solid backdrop of my life from the moment I was born. For the long stretch between his divorce when I was 5 and his remarriage when I was 12, he was the only solid thing in my life. He’s a man of simple virtues, deeply held. He instilled in me, from my earliest memories, a deep and lifelong value of honesty, hard work, humor, and family. To this day, I marvel at how he raised two challenging young boys on his own, while dealing with the pain of a messy divorce. He means a great deal to me, and I’d be a wreck right now if I’d just lost him. I’m getting choked up even thinking about the possibility. Fortunately, it wasn’t my father.
Fortunately, it wasn’t my mother. I first met her when I was 11, when she and my father started dating. I’m not sure I would have had the courage to jump into a family with two almost feral boys and with a father working two jobs to keep it all together, but she did. She instilled in me a love of culture: fine music, theater, fine dining, cultured manners. She also turned me from a bright but indifferent student into someone who excelled in school and graduated with multiple honors. On my 18th birthday, we went down to the city hall and adopted each other, and she’s been my mother ever since. I’d be devastated if I’d just lost her. Fortunately, it wasn’t my mother.
It was my father’s first wife, my natural mother. She walked out on our family when I was 5. She was already seeing another man, an alcoholic who was heavily into drugs. Soon after, they married and moved away. My memories of the rare occasions when my brother and I would go stay with them are not pleasant. He was abusive to my mother, and while not abusive to us, still terrifying. She continued down the path of drugs and alcoholism as I became an adult and started my own family. Right around the time my own son was born, I broke off my relationship with her forever.
That was over 15 years ago. I haven’t seen her or talked to her since. The little bits and pieces I hear through my cousins have made it clear that nothing had changed with her. And, last night, her long-abused body finally gave up.
Am I sad? No, not really. That may seem callous now, but I did my grieving for her as a 5-year-old boy. And, how I grieved. I didn’t understand who she was, or why she was gone, but my mother—half of my universe of trusted people—was gone. I wished and longed for my parents to be reunited, no matter the shouting and arguments. Eventually, as I grew older and started to understand more, that feeling settled into anger. Finally, as an adult facing those same choices, my feelings changed into disgust.
So, to me, my natural mother died over 30 years ago. I cried. I mourned. And I finished a long time ago. Now, I don’t have anything left for the person who didn’t want to be my mother all those years ago.
My real parents are those people who choose to love me. They are the people who gave of their own character to shape mine, and to set me on the best course in life that they could possibly manage. They are the people who have walked the tightrope with me of growing up and striking out on my own. They’ve been with me as I’ve built my career, and as I’ve grown my own family. I literally have tears streaming down my face as I write this: the depth of feeling I have for these two people is so overwhelming.
I regret that my experience with my natural mother has made it difficult to say in person to my real parents what they so richly deserve:
I love you both much more deeply than I could ever express in person, and much more than these written words convey. Thank you for choosing to be my parents.
I recently gave a newly-developed presentation for the first time, and I was concerned that I had too much material to fit everything in. I was going to be giving this presentation at work, and it was supposed to fit in people’s lunch hour, so it was particularly important that I get the timing right. To be sure I got everything to fit, I developed a “presentation timing card” for myself.
To start, I opened a spreadsheet and listed out all the major sections of the presentation. For each one, I went back through my slides for that section and came up with an estimate of how long I thought I’d need to cover each. I wrote that down next to the name of the section.
Next, I used the spreadsheet to sum up all the estimates. No surprise, I was way over the time I actually had. So, I went back through each section, and adjusted the timings to deliberately allocate the number of minutes I actually had across all of them. There was no way to avoid having less time than I thought I’d want to have in each section, but it forced me to make some careful choices about what was really important to cover at what depth.
Now that I had some realistic numbers, I added two more columns to the spreadsheet: the times I should start and stop each section. The first one’s start time was just the beginning of the talk. Each end was just the start time plus the duration I’d assigned. Each subsequent beginning was really just the end of the previous section.
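The arithmetic in those two columns is simple enough to sketch as a tiny script. The section names and minute counts below are invented for illustration, not the ones from my actual talk:

```shell
# Print a timing card: each section's start time is the previous section's
# stop time; the first section starts at minute 0 of the talk.
start=0
while IFS=, read -r name minutes; do
  stop=$((start + minutes))
  printf '%3d-%3d min  %s\n' "$start" "$stop" "$name"
  start=$stop                     # the next section begins where this one ends
done <<'EOF'
Introduction,5
Core concepts,20
Live demo,25
Wrap-up and questions,10
EOF
```

Because each start is derived from the previous stop, trimming one section’s duration automatically reflows the start and stop times of everything after it, just like the spreadsheet did.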
Having finished with the spreadsheet, I printed it out so that it just about fit on a playing card. When I actually gave the presentation, I propped the card up where I could keep an eye on it as I went along. I was surprised and very pleased with how easy it became to hit the right pace for each section. Even with two exercises for the audience and random questions throughout, I was able to finish each of three presentations of the material a few minutes before the end of the session. I will definitely be using this technique again!
This post is part of a Git 201 series on keeping your commit history clean. The series assumes some prior knowledge of Git, so you may want to start here if you’re new to git.
The rebase tool in git is extremely powerful, and therefore also rather dangerous. In fact, I’ve known engineers (usually those new to git) who won’t touch it at all. I hope this will convince you that you really can use it safely, and it’s actually very beneficial.
What is rebase?
To start, I’ll very briefly touch on what a rebase actually is: a way to rewrite the history of a branch. Let’s assume you’re starting out with a standard working branch from master:
At this point, let’s say you want to update your branch to contain the new commits on master (i.e., D and F), but you don’t want to create a merge commit in the middle of your work. You could rebase instead:
git rebase master
This command will rewrite your current branch as though it had originally been created starting from F (the tip of master). In order to do that, though, it will need to re-create each of the commits on your branch (C, E, and G). Remember that a commit is the difference applied to some prior commit. In our example, C is the changes applied to B, E contains changes applied to C, and G contains changes applied to E. Rebasing means we need to change things around so that C actually has F as a parent instead of B.
The problem is that git can’t just change C’s parent because there’s no guarantee that the changes represented by C will result in the same codebase when applied to F instead of B. It might be that you’d wind up with some completely different code if you did that. So, git needs to figure out what result C creates, and then figure out what changes to apply to F in order to create the same result. That will yield a completely new commit which we’ll call CC. Since E was based upon C, which has been replaced, git will need to create a new commit using the same process, which we’ll call EE. And, since E has been replaced, we’ll need to replace G with GG as well. Once all of the commits have been created, git moves the branch pointer to point at the newest commit:
While all this seems complicated, it’s all hidden inside of git, and you don’t really have to deal with any of it. In the end, using rebase instead of merge just means changing a single command, and your commit history is simpler because it appears as though you created your branch from the right place all along. If you’d like a much fuller tutorial with loads of depth, I’d recommend you head over here.
Rebasing across repos
If you’re working on a branch which only exists locally, then rebasing is pretty straightforward to work with. It’s really when you’re working across multiple clones of a repo (e.g., your local clone and the one up on GitHub) that things become a little more complicated.
Let’s say you’ve been working on a branch for a while, and somewhere along the way, you pushed the branch back to the origin (e.g., GitHub). Later on, though, you decide you want to rebase to pick up some changes from master. That leaves you in the state we see in this diagram:
If you were to pull right now, git would freak out just a bit. Your local version of the branch seems to have three new commits on it (CC, EE, and GG) while it’s missing three others (C, E, and G). Then, when git checks for merge conflicts, there are all sorts of things which seem to conflict (C conflicts with CC, E conflicts with EE, etc.). It’s a complete mess.
So, the normal thing to do here is to force git to push your local version of the branch back to origin:
git push -f
This is telling git to disregard any weirdness between the local version of the branch and origin’s version of the branch, and just make origin look like your local copy. If you’re the only one making changes to the branch, this works just fine. The origin gets your new branch, and you can move right along. But… what if you aren’t the only one making changes?
Where rebasing goes wrong
Imagine if someone else noticed your branch, and decided to help you out by fixing a bug. They clone the repository, checkout your branch, add a commit, and push. Now the origin has all your changes as well as the bug fix from your friend. Except, in the meantime, you decided to rebase. That would mean you’re in a situation like this:
Now you’re stuck. If you pull in order to get commit H, you’re going to have all sorts of nasty conflicts. However, if you force push your branch back to origin (to avoid the conflicts), you’re going to lose commit H since you’re telling git to disregard the version of the branch on origin. And, if your friend neglected to tell you about the bug fix, you might do exactly that and never even realize.
Solution 1: Communicate
The best way to fix the problem is to avoid it in the first place. Communicate clearly with your teammates that this branch is a working branch, and that they shouldn’t push commits onto it. It’s a good idea for teams to adopt some clear conventions that make this kind of mistake hard to make (e.g., any branch starting with a username should only be changed by that user, while branches with “shared”, “team”, or some other prefix are expected to have multiple contributors).
If you can’t be sure you’re the only one working on a branch, the next best thing is to talk with anyone who might be working with the branch before you start the rebase. Tell them you’re going to rebase it, and what they should expect. If anyone speaks up to say they’re working on changes to that branch, then you know to hold off.
Once everyone has pushed up any outstanding changes, pull down the latest version of the branch, rebase, and then push everything back up as soon as possible. That looks like this:
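Since the exact commands depend on your branch names, here’s a runnable sketch: the top half just builds a throwaway example repository (all names invented), and the last four commands are the pull/rebase/push flow itself.

```shell
# Throwaway example setup (invented names); the flow itself is the last four commands.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q --bare origin.git                 # stand-in for GitHub
git clone -q origin.git me && cd me
git config user.email me@example.com && git config user.name me
git checkout -qb master
echo base > base.txt && git add . && git commit -qm "B"
git push -qu origin master
git checkout -qb mybranch                     # the shared working branch
echo work > work.txt && git add . && git commit -qm "C"
git push -qu origin mybranch
git checkout -q master                        # meanwhile, master moves on
echo more > more.txt && git add . && git commit -qm "F"
git push -q origin master

# The flow: once everyone has pushed, pull, rebase, and push back up quickly.
git checkout -q mybranch
git pull -q                                   # grab everyone's latest commits
git rebase -q master                          # rewrite the branch on top of master
git push -qf                                  # replace origin's copy right away
```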
Solution 2: Ditch your rebase
If you find yourself having just rebased, only to learn that there are upstream changes you’re missing, the simplest way out of this difficulty is to ditch your rebase. Go back, pull down the changes from the origin, and start over (after referring to solution 1). That would look something like this:
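As a concrete sketch (all branch and file names invented), the following self-contained script reproduces the stuck state, a teammate’s commit included, and then runs the recovery; the extra fetch just makes sure the re-created branch includes anything pushed since you last pulled.

```shell
# Self-contained sandbox (invented names): reproduce the stuck state, then recover.
set -e
sandbox=$(mktemp -d) && cd "$sandbox"
git init -q --bare origin.git                  # stand-in for GitHub
git clone -q origin.git me && cd me
git config user.email me@example.com && git config user.name me
git checkout -qb master
echo base > base.txt && git add . && git commit -qm "B"
git push -qu origin master
git checkout -qb mybranch                      # the shared working branch
echo work > work.txt && git add . && git commit -qm "C"
git push -qu origin mybranch

# A teammate pushes a bug fix (H) to the shared branch...
git clone -q "$sandbox/origin.git" "$sandbox/friend"
cd "$sandbox/friend"
git config user.email friend@example.com && git config user.name friend
git checkout -q mybranch
echo fix > fix.txt && git add . && git commit -qm "H"
git push -q origin mybranch

# ...while we rebase our (now stale) local copy onto a newer master.
cd "$sandbox/me"
git checkout -q master
echo more > more.txt && git add . && git commit -qm "F"
git push -q origin master
git checkout -q mybranch && git rebase -q master

# The recovery: ditch the local rebase and start over from origin's version.
git checkout -q master         # (1) step off the branch
git branch -qD mybranch        # (2) delete the local, rebased copy
git fetch -q origin            #     refresh origin/mybranch first
git checkout -q mybranch       # (3) re-create it from origin's version
git rebase -q master           # (4) redo the rebase, teammate's fix included
git push -qf origin mybranch   # (5) push it back up right away
```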
This will switch back to master (1), so that you can delete your local copy of the branch (2), and then grab the most recent version from the origin (3). Now, you can re-apply your rebase (4), and then push up the rebased branch before anyone else has a chance to mess things up again (5).
Solution 3: Start a new branch
If you find that you want to rebase right away, and don’t want to wait to coordinate with others who might be sharing your branch, a good plan is to isolate yourself from the potentially shared branch first, and then do your rebase.
git checkout mybranch
git checkout -b mybranch-2
At this point, you’ve got a brand new branch which only exists on your local machine, so no one else could possibly have done anything to it. That means you can go ahead and rebase all you like. When you push the branch back up to origin (e.g., GitHub), it will be the first time that particular branch has been pushed.
Of course, if someone else has added a commit to the old branch, it will still be stuck over there, and not on your new branch. If you want to get their commit onto your new branch, use git’s cherry-pick feature:
git cherry-pick <hash>
This will create a new commit on your branch which will have the exact same effect on your branch as it did on the old one. Once you’ve rescued any errant commits, you can delete the old branch and continue from the new one.
I hope this makes rebasing less scary, and helps you get a sense of when you’d use it and when you wouldn’t. And, of course, should things go wrong, I hope this gives you a good sense of how to recover.
Two last bits of advice… First, before rebasing, create a new branch from the head of the branch you’re going to rebase. That way, should things go completely wrong, you can just delete the rebased branch and use your backup. And, finally, if you’re in the middle of a rebase which seems to be going a little nuts, you can always bail out by using:
git rebase --abort
When studying history, the first rule of intellectual honesty is to never drop the context of the time period being studied. We stand at the end of a long line of people who screwed things up, figured out what went wrong, and came up with a better solution. We are the inheritors of thousands of years learning in every area of human endeavor: including morality. When studying history, any time you indignantly ask the question “How could they?”, it is imperative to stop yourself and ask the question again with curiosity instead. Really… how did it come to pass that people in a prior age thought it right and natural to act in ways we find foreign or even immoral now?
We can (and should) look back with our modern eyes and pass judgement on the moral systems people have used in the past. Most moral codes for most of history were atrocious by our modern moral understanding. However, when judging individual members of those societies, we must not lose our perspective and judge them by standards they never even knew existed. One can only judge a person from a prior historical period by asking whether they faithfully adhered to the best moral code they knew about and/or whether they helped to advance our understanding of morality as such.
This does mean that certain historical figures, though perhaps despicable when judged by our modern standards, were moral and virtuous in their own time. It is important that we judge the moral system, not the person who could have known no better.
Considering Women in History
When thinking of the treatment of women through history (just to pick one minority), we must apply the same respect for context we would for any other historical study. We can (and should) judge historical societies’ moral codes based upon their respect for women. However, we can only judge individual people for having better or worse views and actions compared to others who shared their context.
For example, a person who was skeptical of a woman’s right to vote in England of 1880 is hardly a villain when judged by the moral standards of that time. We now find that position repugnant, but not the person who held it. Needless to say, a person in a modern context who held such a view would (rightfully) be considered morally bankrupt. Conversely, a person who was enlightened enough, in that place and time, to support women’s suffrage wasn’t merely a normal, decent person (as they would be today), but one of unusual foresight and virtue.
Notice that I very deliberately used the word “person” throughout that example. We must remember that the suffragettes were themselves usually foresighted and virtuous even among the women of their day. Many women of the time were as skeptical of such things as “votes for women” as their spouses. They too were not villains, but people of ordinary character and understanding: for their own time.
But what about…
The really interesting question is: what other moral issues were, at one point, perfectly acceptable, but are not any longer? For example, homophobia was once not only perfectly acceptable, but actively encouraged and legally enforced. However, in the United States today, LGBT+ people are legally protected (in many jurisdictions) and homophobia (in most communities) is regarded as backward and immoral. When did that moral stance shift? How did it happen? At what point do we consider someone who is slow to make the shift to be immoral?
I recently saw the Gillette commercial about toxic masculinity, and it’s gotten me thinking, especially when viewed alongside the Egard Watches response video. I highly recommend you go watch both of them before continuing to read here.
The perspectives in both are reflected by the polarized responses I’ve been seeing since the Me Too movement picked up steam. Any time I see such extreme reactions to the same thing (the commercial, especially) among people who normally agree about many things, it makes me stop to ponder what’s going on.
Personally, I find it very easy to have enormous sympathy with the Me Too movement. It is sadly all too easy to find many, many examples of women being treated unjustly in every era, and in every civilization which has ever existed. Indeed, “unjust” hardly begins to describe centuries of disregard, disenfranchisement, oppression, torment, slavery, mutilation, rape, and murder which women have suffered across the span of human history. Given that the perpetrators have been overwhelmingly male, it’s all too easy to take a dim view of masculinity in general.
However, it is also true that many brilliant, talented, moral, and courageous men have moved our species forward in leaps and bounds. Many of these men were the ones who fought against oppressors of every sort (both literally and figuratively). Indeed, many of them fought, specifically, to oppose the tremendous injustice meted out to women by other men of their time. Taking either the view that all men are monsters or that all men are innocent is too simplistic.
I view “toxic masculinity” as being what the philosopher, Ayn Rand, called a package deal. That is, a bunch of concepts grouped together with the effect (usually deliberate) of damning the good by linking it with the evil. In this case, the “package” contains a lot of elements which are, in fact, attitudes, beliefs, and cultural norms which each have been held by individual men. However, not all men exhibit all these traits, and, in fact, it’s very common for the negative traits to be concentrated in certain individuals, and positive ones in others.
But let’s get specific here. When I think of traits considered typically “masculine”, I get something like this:
physical traits (size, strength, body shape, genitals)
However, when I think of the kind of behaviors associated with the phrase “toxic masculinity”, I get a very different (and mostly incompatible) list:
sexism & misogyny
excessive use of drugs & alcohol
I think this is the heart of the division I see between people reacting to this issue. When someone says “masculine”, which of these two lists pops up in their head? You can easily tell by the litmus test of these two videos.
What I find especially fascinating and useful, is to construct a similar list using the phrases “feminine” and “toxic femininity”. To my mind, the first list is nearly identical, while the second list has its own (and different) set of revolting behaviors.
My point, really, is that using deliberately leading phrases like “toxic masculinity” or “toxic femininity” doesn’t actually help what is really an admirable goal: to eliminate the specific nasty behaviors associated with those phrases. At best, they serve to stir up animosity and misunderstanding between people who probably have the same goals at heart. At worst, they create a completely useless debate between people wanting to define “masculine” as meaning the first list versus the second.
Instead, I would urge people to discard the “package deal”, and focus on the real problems specifically, and one-by-one: sexual harassment, homophobia, bullying, and all those other behaviors we should no longer tolerate as a rational, civil society.
I recently was able to attend Edward Tufte’s seminar on presentations and data graphic design. This blog post covers the essential elements I took away from the lecture.
On Space vs. Time
When one has a great deal of content to convey to an audience, it cannot be blurted out all at once and in the same spot: it must be spread out over either space or time (or both). A slide deck spreads out content in time, using the same space over and over. A document or web page shows everything at once, spread out in space. Documents play to human beings’ strengths, while slide decks play to their weaknesses.
Humans have a natural ability to visually consume a complex field of data by instantly shifting modes from high-level scanning to detailed inspection and back again. This makes it possible—and even quite easy—for a person to scan through a long document, identify sections of interest, and dive into that piece for a closer look. The same is true for presenting many hundreds or thousands of data points in a graph; the viewer can rapidly scan the overall structure of the data and zoom in to particular interesting details. With information arranged with spatial adjacency, it is easy for people to compare and contrast, scan and examine, and learn most efficiently. This leads to a very high throughput of data transfer from the author to the audience.
Humans, on the other hand, have a limited ability to precisely remember detailed data for any length of time. They also have a limited attention span: particularly when presented with data which is either confusing or boring to them. This makes it very difficult for a person to hold the context required to compare data presented sequentially over a series of slides. The needs of a slide presentation (i.e., a limited space per slide which must be legible at a distance) mean that the content is broken into tiny chunks and widely distributed over time as the presenter talks over each point: often reiterating the content on each slide slowly (relative to the speed of reading). It is impossible for any individual listener to speed up or slow down the presentation to suit their needs, or to scan ahead to answer a question, or to skip back to revisit an unclear point. This leads to a very low throughput of relevant data transfer from the author to the audience.
It is highly preferable, therefore, that information displays maximize “spatial adjacency” of material with a visually dense presentation with varying levels of headers, “data paragraphs”, and whitespace to allow viewers to readily identify and select from large blocks of content at a single glance.
On Giving Presentations
For presentations of virtually any scale, it is far better to provide a narrative document (i.e., like this one) instead of a slide deck. The document should be from 2–6 pages long, and include all the information to be discussed integrated into a single flow. Tables of numbers, charts, graphs, pictures, etc. should all be integrated with the narrative description of the subject matter. In all cases, references should be included to source materials, primary sources, etc. The document should be written to be a permanent record of what was discussed, and therefore should be complete and self-contained.
For the actual presentation, the meeting should begin by handing out copies of the document to each attendee. This is followed by a study hall session long enough for everyone to carefully read the document and make note of any questions, thoughts, or disagreements. Once everyone is ready, the remainder of the meeting is not spent re-hashing what was just read, but instead is spent discussing those questions, thoughts, and disagreements each person noted while reading.
The primary advantages of this style of meeting are:
People can read through the document at their own pace, and to serve their own needs. Sections irrelevant to a certain person can be skimmed, while sections of intense interest can be lingered over carefully.
People can easily jump ahead to see if a question is answered later, or skip back for extra clarity on a point they may have misunderstood.
People read much more quickly and with much greater throughput than can be presented aloud, so meetings can often be shorter.
The document serves as a permanent record of what was presented which everyone can take away with them to refresh their memories later.
On Judging an Information Display
“The purpose of information display is to assist people in reasoning about the content.”
When judging an information display (i.e., charts, graphs, tables, etc.), people judge both the quality of the data and the reliability of the presenter. To establish both, apply these six principles both when making and consuming an information display.
show comparisons, contrasts, and differences
The information display should be deliberately designed to make it easy to compare various data sets or points within each data set. The author should be thoroughly conversant with the data, and deliberately highlight those points of contrast which are most surprising, interesting, or useful.
show causality, mechanism, explanation, and systematic structure
Information displays should endeavor to show how certain data sets were the cause of other data sets. In charts, for example, one can use labeled arrows to not only show the direction of causality, but also to describe the mechanism or process by which it happened. On graphs, this can be a block of text describing some causal connection with an arrow pointing to where this is shown in the data.
show multivariate data (i.e., 3 or more variables)
The real world is complex, and includes a lot of interconnections between different data sets. Information displays should attempt to draw in as many of these various data sets as possible to show the interconnections between them (see: Minard).
completely integrate words, numbers, images, diagrams, etc.
When helping someone understand a data set, it is very unhelpful to segregate data based upon its source, format, or media. Instead, pull all sources of data into the single information display so that they can be compared side-by-side with the other data relevant to the story. Data labels and other text should be integrated into the data display whenever possible instead of being relegated to sidebars, legends, or other documents.
document the display thoroughly
The reader should be left with no questions about what it is they are seeing or where it came from. This often requires extensive textual, even narrative, explanations included within and alongside the information display. A title, the author’s name, units for all numbers, and links to source data are a minimum. One may also find it helpful to include a paragraph explaining the principal features of the data set, interesting comparisons to make, or surprising results.
presentations stand or fall based on the quality, relevance, and integrity of the content
Showing the content in the clearest and most accessible fashion should be the only purpose of an information display. Design for the sake of design should be avoided at all costs. Any extra line, letter, or decoration should be eliminated if it doesn’t serve to help the reader understand the data better. The data will tell the story better without confusing or distracting embellishment.
Naturally, this only covers the most essentialized version of what Tufte presents over the course of his full-day lecture. Along with the lecture, you receive his four published works on data display:
I found the lecture both extremely informative, and productive. I came home bursting with ideas on how to improve the data displays of the various projects I was working on, and I’ve been able to put his precepts to good use on a number of occasions since. I highly recommend attending if he comes to a city near you.
I recently picked up the audiobook version of “Atomic Habits” by James Clear. While I’ve only just started my second listen through, I already think it will become one of the most influential books I’ve read: right behind “Getting Things Done” by David Allen, and “The Fountainhead” by Ayn Rand.
The basic premise of the book is that while goals are great for setting a direction, they are really lousy as a means of achieving anything. Instead, success comes from changing your daily habits—sometimes in very tiny ways—so that they accumulate, inevitably, and almost without effort, into success. This is accomplished by dissecting the life-cycle of a habit, and taking specific actions for each stage to ensure a new habit sticks. The same applies to habits you’d like to break: just apply the opposite actions for each stage to break the habit.
What impresses me the most about this book is its specificity. A lot of self-help books do a great job of laying out some interesting ideas or principles, but then fail to help the reader make the jump to practicing what is written. Not here. Every chapter starts with some motivating anecdote, then describes the principle involved, and then works through several different ways to put things into practice. Each chapter includes various kinds of mental exercises, checklists, and specific actions to take.
Another thing I like, is that the author fully understands how challenging it is to jump in at the deep end of creating some complex new habit (or breaking a very familiar one). He talks through various ways to simplify the process of easing into the new habit so that it doesn’t require tremendous willpower to accomplish it. Just a slow process of continuous improvement from very easy steps to more complex ones.
It’s not a long book at all, just 5½ hours in the audiobook version, and it’s caught my brain on fire with future possibilities. I highly recommend giving it a read.
I started using the Getting Things Done (GTD) method for staying organized almost 10 years ago now. Since then, I’ve learned a lot. This is part of a series describing where I’ve gotten to with my own GTD practice.
The first step in the GTD workflow is capture. This means writing down anything and everything you run across which may have value at some point in the future. These days, this is almost 100% electronic. In order of frequency, I capture by:
adding an entry to my Things app (on iOS or MacOS)
sending an email to myself
telling Siri on my iPhone: “Hey Siri, remind me to…”
asking my wife to email me a reminder
putting a physical object in a conspicuous place (e.g., my inbox)
Given my job, I spend most of my time in front of a computer, and any time I’m not actually in front of a computer, I pretty much always have my phone on me. Since I use Things as my “trusted system”, it’s often most useful to just use the capture tool built in to that app (whether on the computer or on the phone).
If, for some reason, I’m not sitting at a computer with Things on it, I’ll just email myself and process it later.
If I’m driving, or otherwise not able to type, I’ll tell Siri to remind me, and use Things’ integration with the Reminders feature to automatically sync.
My wife also uses GTD, so we very often will discuss things with one another and then send one another an email to capture the request.
And, finally, if the thing to do actually involves a physical object, I’ll use the object itself as the means of capture. For paper mail, I have a wooden box on my desk. This also works for items to repair, and other small items which need my attention. If it’s something I need to take into work, for example, I’ll just make sure to put it next to where I put on my shoes and coat.
While that pretty well covers what I do for my capture step, it's worth noting that I find it absolutely essential to separate the capture step from the processing step. If I mix these up, it creates a real impediment to capturing effectively and/or processing effectively.
The processing step is really the most thought-intensive part of GTD, and often requires a decent amount of time which you don’t always have in the moment you need to capture something. For example, if I’m in a meeting and hear something I need to follow up on, I need to make the shortest and quickest note possible so that I can return my attention to the person speaking. I literally don’t have time to think through all the processing steps.
By keeping the two separate, I can record some very tiny number of words in my inbox, and return my attention to what’s going on. I avoid both missing out on capturing altogether as well as entering something half-baked into my trusted system… which would, of course, become less trustworthy as a result!
This is a guest post by Joe Wilding, the CTO and Co-Founder of Boom Supersonic.
I was asked by someone recently: “How do you know so much about your field?” My short answer was, “I read a lot.” To that he replied: “Yeah, but how do you retain all of that knowledge?” I didn’t have a crisp answer at the time. But, as I have thought about that question since, I have come to realize I have developed a pattern over the years which has allowed me to retain much of the knowledge I have read.
While I have to admit this method requires additional effort, I’m personally convinced it’s required for long-term retention.
The method consists of two elements. The first is ensuring that you deeply understand the content when you first read the material. The second element is making the content sticky by refreshing your memory of it in a deliberate recurring process.
Deeply learning the topic
There are many ways to fully understand a topic when first exposed to it. If the topic is simple enough, the act of reading it, watching a video, or hearing it explained may be sufficient. For more complex topics, other tactics may be required. For me, they all come down to forming some sort of a model of the concept that makes sense to me. This model can be mental, or something that you actually sketch or turn into a diagram. Good books or other sources will do this for you, but not always.
I prefer to understand how the concept works based on the fundamental governing principles, whether that be physics, math, psychology, economics, etc. If math is involved, I do not gloss over the formulas. I pay attention to the inputs, the units, and the exponents on each variable. I try to deduce why each variable is there, and why others are not. I try to get a feel for how the answer would change based on different values of the inputs. If it is a topic I really want to understand, I will enter the formula into a spreadsheet, plot it, and watch how the results change with different inputs. This "live feedback" method can increase your understanding tremendously and very quickly.
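As a concrete sketch of this "live feedback" approach, here is the spreadsheet idea done in a few lines of Python. The formula used (kinetic energy, E = ½mv²) is a hypothetical example of my own choosing, not one from the text:

```python
# A minimal sketch of the "live feedback" idea: tabulate a formula across a
# range of inputs and watch how it responds. Kinetic energy is used here as
# an illustrative example formula.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """E = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

# Vary one input at a time to build intuition. The exponent on velocity
# means doubling v quadruples E, while doubling m only doubles it.
for v in (10, 20, 40):
    print(f"m=2 kg, v={v} m/s -> E={kinetic_energy(2, v):.0f} J")
```

Paying attention to the exponents this way makes the shape of the formula stick: you can see, not just read, that one variable dominates.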
I also tend to formulate an understanding of the topic such that I can explain it to someone else. Often, I will literally do that: either out of necessity, or because I am typically surrounded by others who love to learn. It is very powerful to express a concept in your own words and to be prepared to answer questions or explain the parts that are not obvious.
Making it stick
A very unfortunate drawback of the human brain is that the knowledge it contains tends to fade over time. This is particularly true for concepts that are learned and then not accessed again before they have been consolidated into long-term memory. This means that all of the time and effort you put into learning a new topic could be lost if you don't take action to make it stick.
This is less of an issue if the learned topic is something you will be using frequently for an extended period of time (such as in your daily job). However, much of what I read is a little more obscure, or something I will need only on infrequent occasions. To ensure this knowledge is not lost, I employ a method I read about many years ago: the SuperMemo method, developed by the Polish researcher Piotr Woźniak.
The following graph shows how this works:
The graph shows that as a new topic is learned, but then not used again, the brain starts to lose that information on a decaying curve called the “curve of forgetting”. Nearly all of the knowledge on a topic can be lost in a few months. For example, try to remember something you might have heard on the news or read in a paper from a few months ago. If you didn’t have a direct connection to it, you probably can’t.
The happy side of this story is that if you refresh your memory of the topic, not only do you quickly get back to the 100% status, the rate of decay on the curve also decreases. If you can remember to do this three or four times, the decay curve becomes very flat and the information will be accessible nearly forever.
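The dynamic described above can be sketched with a simple exponential retention model, R = e^(−t/S), where S is a "stability" that grows with each successful review. The initial stability of 2 days and the doubling factor below are illustrative assumptions, not values taken from SuperMemo itself:

```python
import math

# Hedged sketch of the spacing effect: retention decays exponentially, and
# each review both resets retention to 100% and flattens the decay curve.
# The numbers here are illustrative, not SuperMemo's actual parameters.

def retention(days_since_review: float, stability: float) -> float:
    """Fraction of the material still recallable, R = exp(-t / S)."""
    return math.exp(-days_since_review / stability)

stability = 2.0          # days; higher = slower forgetting
for review in range(1, 5):
    print(f"review {review}: retention after 7 days = "
          f"{retention(7, stability):.0%}")
    stability *= 2.0     # each successful review flattens the curve
```

After a handful of reviews the simulated 7-day retention climbs from a few percent toward near-total recall, which is exactly the "flattening" behavior the graph illustrates.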
At this point you might be saying: “Great, I have to read everything four times if I really am going to learn it?!” Not at all. If you fully understand the topic the first time—in the way I talk about above—the refresh effort can be very quick. You just have to have a method of making the information quickly available and developing the discipline to actually go back and review it.
My favorite method for easy future accessibility is to take notes when I am first learning the topic. I summarize the key concepts, keep the sketch or diagram if there is one, hang onto the spreadsheet, and list all the references on where the knowledge came from. After that, it is just going back and rereading it a few times in the future. You could schedule these in your calendar, but I usually just keep the notes file on my desktop and then go back and reread it from time to time until I feel like it is fully committed to long term memory. At that point, I usually file it away in a folder for future reference.
There are other methods for reviewing material, including reading other sources on the same topic, teaching it to others, or using the knowledge on a recurring basis. It doesn't really matter what the method is. It is just important to refresh your memory routinely until you've really mastered it.
I’ll admit that this method requires effort, and I certainly don’t use it for everything I read. But if I have a topic that I really want to master long term, I have found that this method works every time.
Estimating most projects is necessarily an imprecise exercise. The goal of this post is to share some tools I've learned for reducing the sources of that imprecision. Not all of these tools will apply to every project, though, so use this more as a reminder of things to consider when estimating than as a strict checklist of things you must do for every project. As always, you are the expert doing the estimating, so it is up to your own best judgment.
Break things into small pieces
When estimating, error is generally reduced by dividing tasks into more and smaller pieces of work. As the tasks get smaller, several beneficial things result:
Smaller tasks are generally better understood, and it is easier to compare the task to one of known duration (e.g., some prior piece of work).
The error on a smaller task is generally smaller than the error on a larger task. That is, if you're off by 50% on an 8 hour task, you're off by 4 hours. If you're off by 50% on an 8 day task, you're off by 4 days.
You’re more likely to forget to account for some part of work in a longer task than a shorter one.
As a general rule, it’s a good idea to break a project down into tasks of less than 2 days duration, but your project may be different. Pick a standard which makes sense for the size of project and level of accuracy you need.
Count what can be counted
When estimating a large project, it is often the case that it is made up of many similar parts. Perhaps it's an activity which is repeated a number of times, or perhaps there's some symmetry to the overall structure of the thing being created. Either way, try to figure out if there's something you already know which is countable, and then try to work out how much time each one requires. You may even be able to time yourself doing one of those repeated items so your estimate is that much more accurate.
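The arithmetic behind "count what can be counted" is mostly multiplication plus a one-off overhead term. A tiny sketch, with entirely hypothetical numbers:

```python
# Sketch of "count what can be counted": if a project has N similar pieces
# and you've timed one representative piece, the bulk of the estimate is a
# multiplication. All numbers here are made up for illustration.

screens_to_build = 14
hours_per_screen = 3.5      # measured by timing one representative screen
setup_overhead_hours = 8    # one-off work that doesn't scale with the count

estimate_hours = screens_to_build * hours_per_screen + setup_overhead_hours
print(estimate_hours)  # 57.0
```

The point of timing a real sample is that it replaces the fuzziest input (per-unit effort) with a measurement, leaving only the count and the overhead as guesses.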
Establish a range
When estimating individual tasks (i.e., those which can't be further subdivided), it is often beneficial to start out by figuring out the range of possible durations. Start by asking yourself: "If everything went perfectly, what is the shortest time I could imagine this taking?" Then, turn it around: "If everything went completely pear-shaped, what is the shortest duration I'd be willing to bet my life on?" This gives you a best/worst-case scenario. Now, with all the ways it could go wrong in mind, make a guess about how long you really think it will take.
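This best/likely/worst range lines up with classic three-point estimation. One common way to combine the three numbers is the PERT weighted mean; the sketch below uses PERT as an illustration of the idea, not as the author's own formula:

```python
# Three-point (PERT-style) estimation from a best/likely/worst range.
# PERT is a standard technique; it is shown here as one possible way to
# collapse the range into a single number, with a rough uncertainty.

def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> float:
    """PERT expected duration: (O + 4M + P) / 6."""
    return (optimistic + 4 * likely + pessimistic) / 6

def pert_stddev(optimistic: float, pessimistic: float) -> float:
    """Rough uncertainty of the estimate: (P - O) / 6."""
    return (pessimistic - optimistic) / 6

# Hypothetical task: best case 2 hours, realistic guess 4, disaster case 12.
print(pert_estimate(2, 4, 12))  # 5.0 hours expected
print(pert_stddev(2, 12))       # about 1.7 hours of spread
```

Note how the long pessimistic tail pulls the expected value above the "likely" guess, which is exactly why thinking through the worst case first improves the final number.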
Get a second opinion
It’s often helpful to get multiple people to estimate the same project, but you can lose a lot of the value in doing so if the different people influence each other prematurely. To avoid that, consider using planning poker. With this technique, each estimator comes up with their own estimate without revealing it to the others. Then, once everyone is finished, they all compare estimates.
Naturally, there are going to be some differences from one person to the next. When these are small, taking an average of all the estimates is fine. However, when the differences are large, it’s often a sign that there’s some disagreement about the scope of the project, what work is required to complete it, or the risks involved in doing so. At this point, it’s good for everyone to talk about how they arrived at their own estimates, and then do another round of private estimates. The tendency is for the numbers to converge pretty rapidly with only a few rounds.
Perform a reality check
Oftentimes, one is asked to estimate a project which is at least similar to a project one has already completed. However, when coming up with a quick estimate, it's easy to just trust one's intuition about how long things will take rather than really examining specific knowledge of particular past projects to see what you can learn. Here's a set of questions you can ask yourself to try to dredge up that knowledge:
The last time you did this, how long was it from when you started to when you actually moved on to another project?
What is the riskiest part of this project? What is the worst-case scenario for how long that might take?
The last time you did this, what parts took longer than expected?
The last time you did this, what did you forget to include in your estimate?
How many times have you done this before? How much “learning time” will you need this time around?
Do you already have all the tools you need to start? Do you already know how to use them all?
There are loads of other questions you might ask yourself along these lines, and the really good ones will be those which force you to remember why that similar project you’re thinking of was harder / took longer / was more expensive than you expected it to be.
Create an estimation checklist
If you are planning to do a lot of estimating, it can be immensely helpful to cultivate an estimation checklist. This is a list of all the "parts" of the projects you've done before. Naturally, this will vary considerably from one kind of project to the next, and not every item in the checklist will apply to every new project, but it can be immensely valuable in helping you not forget things. In my personal experience, I've seen more projects run late because of things which were never in the plan than because of things which took longer than expected.
Estimation is super hard, and there's really no getting around that. You're always going to have some error bars around your estimates, and, depending upon the part of the project you're estimating, perhaps some considerably large ones. Fortunately, a lot of people have been thinking about this for a long while, and there are a lot of tricks you can use, and a lot of books on the subject you can read, if you'd like to get better. Here's one I found particularly useful which describes a lot of what I've just talked about, and more: