Where to define S4 generics

In S4 you must declare a generic function before you can define methods for it. If no generic exists, you will see the following error:

> setClass("track", slots = c(x="numeric", y = "numeric"))
> setMethod("foo", signature(x = "track"))
Error in setMethod("foo", signature(x = "track"), definition = function(x) cat(x)) : 
  no existing definition for function ‘foo’

Generic functions are declared with the setGeneric() function, which must precede the call to setMethod():

> setClass("track", slots = c(x="numeric", y = "numeric"))
> setGeneric("foo", def = str)
[1] "foo"
> setMethod("foo", signature(x = "track"), definition = function(x) cat(x))

But when you develop an R package you may have several classes that define their own methods for the same generic function. So where should the definition of the generic function go? One solution is to put all your classes in the same source file, and have the call to setGeneric() precede all calls to setMethod(). This works, but for maintainability each class should really live in its own source file, named R/<class name>.R.

Instead, you might consider placing all calls to setGeneric() in a source file guaranteed to be loaded before all other files. For example, you might call that file __generics.R, which will be loaded first because when R loads a package it reads the source files in alphabetical order. This will work too but there’s a more elegant way.

R will read the source files in alphabetical order unless a Collate field in the DESCRIPTION file says otherwise. That field lets you specify the order in which you want your source files to be loaded. If present, it must list all source files.

Maintaining such a field for more than about 5 source files quickly becomes tedious. Fortunately, the roxygen2 package has a little-known feature that will generate the Collate field for you. If you use the @include tag anywhere in a file, roxygen will generate a Collate field listing all source files, ordered so that every included file comes before the file that includes it.

For example, if you have a file R/generics.R with the following declaration:

setGeneric("foo", ...)

and your MyClass class defines a method for that generic, use the following:

#' @include generics.R

MyClass <- setClass(...)

setMethod("foo", ...)

When you now run roxygen2 (don’t forget this step; it won’t happen automatically), it will generate a Collate field listing all your R source files, correctly ordered.
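
For illustration, assuming the package contains just the two files from the example above, the generated field in DESCRIPTION would look something like this (the exact file list naturally depends on your package):

Collate:
    'generics.R'
    'MyClass.R'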

Connecting to SQL Server from R on a Mac with a Windows domain user

Connecting to an SQL Server instance as a Windows domain user is relatively straightforward when you run R on Windows, you have the right ODBC driver installed, and your network is set up properly. You normally don’t need to supply credentials, because the ODBC driver uses the built-in Windows authentication scheme. Assuming your odbcinst.ini file includes an entry for SQLServer, you typically just need the following:

con <- odbc::dbConnect(
  odbc::odbc(),                 
  Driver = "SQLServer",
  Server = "mysqlhost",
  Database = "mydbname",
  Port = 1433
)

But if you want to connect to SQL Server from a Mac, things are less simple.

Don’t bother installing the ODBC driver supplied by Microsoft; it just doesn’t work with a Windows domain user. No matter what I tried, I always got the following error message: Error: nanodbc/nanodbc.cpp:950: 28000: [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Login failed for user 'dlindelof'. I tried setting the user id to domain\\username, and I tried passing an extra DOMAIN parameter, all to no avail.

As far as I could determine, it simply is not possible to connect to SQL Server with a domain user using the ODBC driver supplied by Microsoft. Instead, obtain the open-source FreeTDS driver. If you use Homebrew, this is done with brew install freetds. Once installed, you should find the driver library at /usr/local/lib/libtdsodbc.so. Edit your /usr/local/etc/odbcinst.ini file accordingly:

freetds                              = Installed
# ...
[freetds]
Driver = /usr/local/lib/libtdsodbc.so

You can, but don’t need to, also edit /usr/local/etc/odbc.ini and /usr/local/etc/freetds.conf if you want to define human-friendly aliases for specific database connections. I never needed that.

You can now create a database connection in R using the usual connection parameters, with the important gotcha that (unless you edit freetds.conf) you must specify the port number. The username must be prefixed by the domain and a backslash, which has to be doubled in R strings because the backslash is the escape character. Putting it all together, my connection call looks like this:

con <- odbc::dbConnect(
  odbc::odbc(),
  Driver = "freetds",
  Server = "mysqlhost",
  Database = "mydbname",
  uid = "domainname\\username",
  pwd = "somepassword",
  Port = 1433
)
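
Once the connection is established, the usual DBI functions work as with any other backend; for example (the table name below is just a placeholder):

DBI::dbListTables(con)
result <- DBI::dbGetQuery(con, "SELECT TOP 10 * FROM some_table")
DBI::dbDisconnect(con)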

Learning Gregg shorthand

This piece is a writing assignment for the Learning How To Learn online class, in which we are asked to reflect on a recent learning challenge.

Shorthand—the ability to write at possibly over 200 words per minute—is a dying skill. The ubiquitous use of computers and laptops for taking notes and meeting minutes has turned shorthand into a curiosity, a skill reserved for a dying generation or some die-hard hobbyists. Which is a shame—there’s a kind of elegance and beauty to some of the shorthand systems out there, and who wouldn’t want to be able to write and read scripts like this:

The Lord’s Prayer in Gregg Shorthand. Public Domain, https://commons.wikimedia.org/w/index.php?curid=306847

Shorthand belongs to a family of skills that were considered essential perhaps 50 years ago but that technology has made all but obsolete:

  • Using a slide rule
  • Note taking
  • Touch typing
  • Handwriting
  • Mnemotechnics
  • Shorthand

Yet I claim that many of these, if not most, should still be taught in our primary schools; in this piece I reflect on my experience in learning the Gregg Shorthand system.

As far as knowledge work goes, I’ve had a rather typical education: a Master’s in Physics, a PhD in Physics, self-taught in Computer Programming, Statistics, and Data Science. I’ve always taken my professional development seriously and almost always have some MOOC going on.

Being something of a compulsive note taker, I became interested in the various shorthand systems in 2005. I researched the different systems and concluded that the Gregg system would be ideal for me, striking a good balance between ease of learning and writing speed. So I began to learn the system, relying at first on the vast collection of free resources available online.

But in the 14 years or so since, my enthusiasm for learning shorthand has ebbed and flowed, and my commitment went through spikes and valleys. I never lost interest, but other interests would inevitably take priority. With hindsight, I believe the three largest mental hurdles were the following:

  • No incentive: I never entertained any illusion of gaining something tangible from learning shorthand, so my only motivation was my own curiosity.
  • Lack of resources: in spite of the free resources mentioned above, I feel that there aren’t that many resources out there for learning shorthand. I couldn’t find any reading material written in shorthand, for example. Nor could I find any online class.
  • Lack of priority: just as with anything else, the first excuse for dropping out will be the lack of time. But that’s seldom the root cause. More likely, I would frequently let other things take priority over the regular practice time needed for learning a new skill such as shorthand.

So what to do? How to get good at shorthand, when the only tangible benefit, to be honest, is the satisfaction of having learned something cool? Here is what seems to be working for me:

  • The book: the free resources available online are absolutely incredible, but they’re, well, free. When I download a free book I’m not invested in it; there’s no sunk cost, so no compulsion to make something good come out of my “loss”. Not so with a physical book. I bought The GREGG Shorthand Manual Simplified, so that I would feel bad whenever I saw the book on my desk gathering dust.
  • The community: sharing a ridiculous obsession with others is always more fun than being alone. I discovered a Reddit group dedicated to shorthand in general, and I joined it. Being part of such a community was a great boost to my motivation, and provided me with a place to ask questions about difficult reading exercises.
  • Self-testing: the book mentioned above features many reading exercises, but doesn’t give the answers, which made it difficult for me to assess whether I was making progress. Instead, I discovered that AnkiApp, one of many flashcard apps out there, would let me download and install a deck of flashcards for practising shorthand reading.
    But what about the book’s reading exercises? How could I make sure I understood them correctly without bothering the Reddit community? I discovered a website where you can enter text in English and have it rendered into Gregg shorthand (to this day, I have been unable to locate a tool that reads Gregg and turns it into English). I now had all the necessary means to test myself.

Practising Gregg shorthand has now been part of my daily routine for the past couple of months; I can read the Lord’s Prayer above, albeit slowly. I am still far from being able to take meeting notes in shorthand, but I’m confident I will be able to do so in a few months.

Notes from the “Learning How To Learn” course

“Learning How To Learn” (LHTL), an online course freely available on Coursera, teaches techniques for becoming a better learner and thinker. Given by Dr Barbara Oakley (McMaster University) and Dr Terrence Sejnowski (University of California San Diego), the course covers the latest research on how the brain works and suggests practices to make the best of it.

Many of these practices are covered in numerous popular self-improvement books and you may be familiar with some of them, but it was great to have them collected in a single place, including explanations on why they work. Here are some that resonated the most with me.

Focused & Diffuse Modes

Keeping focused on a single task for hours ends up working against you. The brain needs time away to form new connections and get a sense of the big picture. Going against that flow will likely hinder your progress and leave you exhausted at the end of the day.

The evidence suggests that the brain works in either of two modes, which the instructors call “focused” and “diffuse”. In focused mode you concentrate on your task at hand, while in the diffuse mode you let go and give your mind a break; this is the kind of thinking you do when you are not consciously thinking, like when you go for a walk or take a shower. If you have ever enjoyed flashes of insights that came during such activities, it was probably diffuse mode at work.

I couldn’t help drawing obvious parallels with the rich vs linear modes of thinking described by Betty Edwards in “Drawing on the Right Side of the Brain”, and also neatly summarized by Andy Hunt in “Pragmatic Thinking and Learning”. It is surely the explanation for the techniques used by unusually creative people like Thomas Edison or Salvador Dali, who would take naps and had figured out tricks to wake up just at the onset of dreaming; whatever they were thinking at that moment was probably the output of the diffuse mode. I believe Andy Hunt called that technique an “onar”, a portmanteau of oniric (which pertains to dreams) and sonar.

Deliberate practice

You might be familiar with the idea that it takes roughly 10'000 hours of practice to become a world-class expert at anything. Malcolm Gladwell is generally credited with popularizing this idea in his book “Outliers”, but later research has qualified it. As pointed out by Geoff Colvin in “Talent Is Overrated”, by Anders Ericsson in “Peak”, and by many others, just any kind of practice is not enough. Merely repeating the same skill over and over won’t do; one needs to be intentional, even deliberate, about one’s practice. Hence the name Deliberate Practice.

I suspect that Deliberate Practice is far easier to apply in sports or arts than in knowledge work. Most sports and most arts have a long teaching tradition—there are moves, techniques, steps, swings, katas, scales, chords, chess tactics, strokes that can be practiced over and over again, often under a coach’s supervision. Identifying your weaknesses, and developing a workout routine to address them, seems to me to be far easier in, say, freestyle swimming than in computer programming.

So I’m not so sure how to best apply Deliberate Practice when learning a new subject such as mathematics, physics, computer programming, or any similar mostly-intellectual topic. I’m not sure there exists a body of, say, mathematical drills one can perform in order to become better at it. Computer programming might be an exception here: people (including me) have experimented with so-called code katas, where you solve (sometimes repeatedly) some programming problem. But that’s the closest thing to Deliberate Practice we have to date.

Procrastination

The course explains that procrastination is the brain’s natural defense mechanism against unpleasant tasks, such as sitting down to study. A valuable technique against this is the Pomodoro technique: you make a deal with yourself that you will only work for 25 minutes, then take a break. You might even bribe your brain with the promise of some treat after work: perhaps some social media time, or reddit, or twitter. (A very similar idea was proposed in The Power of Now.)

I’ve been using variations on the Pomodoro technique for many years now (I was introduced to it by the book of the same name published by the Pragmatic Programmers). I regularly work in 25-minute bursts, punctuated by 5-minute breaks. During the breaks I will frequently walk around, perhaps fetch fruit from the cafeteria three floors upstairs, or visit the bathroom. Of late I’ve experimented with Brain.fm, setting its timer to 30 minutes. I find this very effective at improving my focus and blocking out distractions from the surrounding open space.

Metaphors

Metaphors are said to be a great way to internalize what you learn, though I don’t think I use that technique very much in my own learning. I have, however, been told that I’m pretty good at using metaphors and analogies when I explain technical concepts to a non-technical audience. Recently I’ve been working on a statistical model of the effectiveness of calls to action sent to customers, using a class of models called Survival Analysis. Originally developed to model the survival of patients in a clinical setting, it was rather easy to build the right metaphors: an email you send to your customer gives “birth” in his mind to a certain inclination to do something; that inclination can either “die” when it is acted upon, or “live on” forever if the customer never does anything about it. That kind of metaphor made it easy to communicate the gist of a highly technical subject.

Self-testing

When you sit down and study something, you will frequently end the study session overestimating what you’ve really internalized. This is also known as the illusion of competence. The best defense against this, and also a great way to consolidate what you have learned, is to test yourself: not only right after the course, but also at regular intervals thereafter. This is sometimes called Spaced Repetition.

Self-testing is, indeed, why the Cornell system of note taking works so well: you’re supposed to summarize, in your own words, the content of your notes at the end of the note-taking session. Recently I came across a fine piece on Medium by Robyn Scott, who describes the habit of spending 30 seconds (no more, no less) after every important event in your life writing down your own summary of it.

Sleep

Nobody questions the benefits of sleep for thinking and learning, so I won’t belabour the point. But the instructors included a little nugget of wisdom: before you go to sleep, they recommend going over your current to-do list (or personal kanban board, which is the only system that has ever worked for me) and selecting 3-5 items that you commit to doing the next day. That way your brain won’t worry about what to do the next day; your to-dos will be sitting there ready for you, and your brain will have more freedom to mull over more important things while you sleep—such as internalizing what you’ve learned during the day.

Conclusion

I don’t have time to cover all the tips and tricks, so I’ll have to stop here. The course is not over yet but I’m thoroughly enjoying it. There’s some material that I was already more or less aware of, and it’s great to review it again (spaced repetition, remember?) But there’s also plenty of genuinely new material, and I appreciate having it presented in such a clear and lucid manner by the instructors.

Why I (still) use C++

When I joined Neurobat in 2010, the company’s vision was to develop an add-on component that would compute optimal setpoints for your heating system. Such a device had to be small, cheap, and run reasonably fast. That ruled out modern embedded PCs that nowadays can comfortably run Python; the entire application, including the “smart” model-predictive control library, had to be programmed in a language compiled to native instructions.

Most of that library was initially written in C. Soon, we realized that the design could greatly benefit from full object-orientation. So once we made sure that our toolchain supported C++ we began to port many modules to that language. (That was C++98, the most recent version of C++ our toolchain would support.)

But around 2016 the company’s strategy began to shift away from single homes towards large commercial buildings, whose facility managers were less price-sensitive than homeowners. It had become economical to move to more powerful devices that could run Python, or even to cloud-based platforms. There was less of an imperative to stick to C++, and some team members experimented with porting our library to Python. In the end, I was perhaps the only one in a team of six engineers who knew the library, and C++, well enough to maintain it.

And yet, I’ve made a concerted effort to keep my C++ skills sharp, not only before we shut down the company but also afterwards. I’ve read Effective Modern C++, Modern C++ Design, and I listen to every episode of both CppCast and cpp.chat. For the first time, I used C++ instead of Python for the annual Google Code Jam competition. Why do I do that? And more importantly, should you?

If you’re doing any kind of programming, I believe you would benefit from knowing C++, and here’s why.

  1. C++ has been around for 30 years, and is likely to remain highly relevant for many more years. It is probably the most popular language that gets compiled to machine instructions, as opposed to “managed” languages that are either interpreted or compiled to bytecode. And it clearly dominates that niche; I can’t think of any alternative besides D, Rust, and possibly Go. C++ is probably the most expressive language that runs on bare metal, and that alone would be reason enough to learn it.
  2. C++ remains the best option for writing extensions to other languages or environments. Without C or C++, the only way to extend an existing system is to write libraries in the host language itself, which will only ever be as fast as that language’s platform allows. For example, I currently work predominantly with R, and C++ is a very natural choice for writing high-performance extensions for that environment; a minimal sketch follows this list.
  3. For better or for worse, C++ features almost all programming language features known to man. It’s been called a multi-paradigmatic language and with good reason. It combines aspects of procedural, functional, object-oriented, and generic programming. And it is currently undergoing something of a renaissance, judging from the rate of new features being considered and released. If you’re looking for some intellectual challenge, do your brain a favour and feed it some C++.
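
To illustrate point 2, here is a minimal sketch of an R extension written in C++ via the Rcpp package (the function and its computation are invented for the example; this is not code from our library):

library(Rcpp)  # assumes the Rcpp package and a working C++ toolchain are installed

# Compile a small C++ function and expose it to R
cppFunction("
  double sum_sq(NumericVector x) {
    double total = 0;
    for (int i = 0; i < x.size(); ++i) total += x[i] * x[i];
    return total;
  }
")

sum_sq(c(1, 2, 3))  # returns 14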

I know that C++ has acquired a bad reputation because of its complexity and syntax, but I believe most of these criticisms are ill-founded. I know just as many programmers who sincerely claim they love C++ as who claim they love Python. The learning curve is steep, but well worth the climb. Will you join me on the journey?

Our first 3D game programming project

My son Nathan made this:

We made it by following the first project in the book 3D Game Programming for Kids by Chris Strom.

Predicting where the bugs are

Adam Tornhill’s Your Code as a Crime Scene (YCAACS) has lain open next to my laptop for several months now. I’m usually a fast reader and a technical book rarely lasts that long, unless the book is crammed with practical tips and advice that I want to try as I go along. YCAACS is no exception.

The book introduces a technique completely new to me: the mining of your code repository’s history for patterns known to correlate with code defects. For example, do the most complex modules in your project tend to become even more complex over time, suggesting that your technical debt is growing out of control? Each self-contained chapter presents a different analysis you can try out. In this post I will walk through the most simple example: correlating the number of revisions to a module with that module’s complexity.

I’ll start with one of our current internal projects, called romulus. We begin the analysis by extracting the repository log for the last two months, formatted in a way that makes the analysis easier:

git log --pretty=format:'[%h] %aN %ad %s' --date=short --numstat --after=2016-05-01 > romulus.log

The key argument here is --numstat: this reports the number of lines added or deleted for each file. It will tell us how frequently a given file, or module, has changed during that reporting period.

Next we use the code-maat tool written by the author of YCAACS. It’s a tool that will analyse the log of a code repository and extract different summary statistics. For our example, all we want to know is how frequently each module has been changed:

maat -l romulus.log -c git -a revisions > romulus_revs.csv

Next we need to correlate those changes with the complexity of each file. We won’t be using any fancy complexity metric here: the number of lines of code will suffice. We use cloc:

cloc * --by-file --csv --quiet > romulus_sizes.csv

We now have two CSV files:

  • romulus_revs.csv: the number of revisions of each file in our repository
  • romulus_sizes.csv: the size of each file

By doing the equivalent of a SQL JOIN on these files, you obtain, for each file, its number of revisions and its size. You can do this in the analysis tool of your choice. I do it in Tableau and show the result as a heatmap, where each rectangle represents a module: the size of the rectangle is proportional to the size, or complexity, of the module, and its color darkness to the number of times it has changed over the period. In Tableau you can hover over any of these rectangles and a window will pop up with detailed information about that module.
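
If you would rather do the join in R than in a BI tool, here is a minimal sketch. The column names are those I would expect from code-maat (entity, n-revs) and from cloc --by-file --csv (language, filename, blank, comment, code); check them against your own output before running this:

# read.csv turns code-maat's "n-revs" header into the syntactic name n.revs
revs  <- read.csv("romulus_revs.csv")
sizes <- read.csv("romulus_sizes.csv")

# Join revision counts and line counts on the file name
hotspots <- merge(revs, sizes, by.x = "entity", by.y = "filename")

# Sort by change frequency, then size, to surface the likely hotspots
hotspots <- hotspots[order(-hotspots$n.revs, -hotspots$code), ]
head(hotspots[, c("entity", "n.revs", "code")])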

So what does this heatmap tell me? There’s no obvious outlier here; a couple of modules in the upper right corner have recently seen a lot of change, but I know that these modules implement some of the stories we are currently working on, so no surprise there. The map does, however, tend to become darker towards the left side, where the largest modules are shown. This suggests that some modules have been growing over time, possibly out of control. Clearly this must be investigated, and these modules perhaps deserve more testing and/or refactoring than the average.

“Your Code as a Crime Scene” is a fantastic book. Every chapter has a couple of ideas that you can try right away on your project. I suspect this will be of most value to technical leads and testers, both of whom I consider the guardians of code quality. I’m less sure that the other developers will be able to apply the ideas from the book that easily though. Doing it properly does take time, requires a certain mindset, and a certain familiarity with data analytics. But if your team includes someone willing and capable of doing it, I’m sure you will all benefit from it.

How I review papers

Once you publish a paper in a journal, you are expected to regularly review papers for that journal. It’s part of the normal scientific process. Some may consider it a chore, but I see it as an opportunity to keep in touch with my field and to help quality papers get published.

When I was first asked to review a paper there was very little help available on the subject. Things have improved considerably since then; for example, Elsevier maintains an Elsevier for Reviewers website with plenty of information. I recommend you start there for some basic reviewer training. But the last time I checked, that site did not yet tell you anything about how to read a paper or how to actually write a reviewer report.

Here is a workflow that works for me. Once I receive a reviewer invitation, here’s what I do:

Accept a reviewer invitation immediately

The whole scientific process depends on reliable and speedy reviewers. Do unto others as you would have them do to you. When I am invited to review an article, that takes priority over any other writing.

I usually read articles as PDFs on my iPad, with the GoodReader app. I immediately accept reviewer invitations, download the PDF version of the article, save it to an iCloud folder where GoodReader can find it, and download it to my iPad.

Read a first time generously

As soon as possible I read through the article, from beginning to end. Ideally in a single sitting, but if that’s not possible I do it in several. The goal is to form a general idea of what the article is about, how it is structured, and to prime my mind for what to look out for on the next reading.

Read a second time more critically

Next I re-read the article, but far slower and more critically. This is where I use GoodReader’s annotation tools: I highlight passages that I think need to be mentioned in my report; I strike through passages that I think can be omitted; I underline with squiggly lines passages that don’t read well and deserve to be changed. Sometimes I add a comment box summarising a point I don’t want to forget in my report.

When I highlight a passage I seldom record why I highlighted it. If I cannot remember the reason by the time I write the report, it probably wasn’t important.

Write the report

I don’t know how it goes for other journals, but the one I review most frequently for (Energy & Buildings) provides the reviewer with a free-form text field in which to enter their observations. (There is also a text form for private comments to the editor, but I seldom use that.) It’s important to realise that the comments from all the reviewers will be collated together and sent to the author, and sometimes also to the reviewers to notify them of the editor’s decision.

You can also include supplementary files with your review. The only time I’ve found this useful was when I needed to typeset mathematics in my review. However, I discovered that the supplementary files are not forwarded to the other reviewers, and I now avoid them.

Your report will therefore be written in plain text. I try to stick to the following template:

<express thanks and congratulations for the paper>

<summarise the paper’s main points>

<if there are major concerns about the paper, enumerate them here as a numbered list, most important ones first>

<for each section of the paper, enumerate the other (minor) suggestions/remarks as a numbered list, in the order in which they are found in the paper>

Keep in mind that the author will be required to respond to each reviewer comment. Providing your comments as a numbered list makes their life simpler.

When I write the report I go through each of my annotations, one by one, and write a comment for each of them, either to the list of minor comments or to the major ones. By the time I reach the end of the paper, all my annotations will have a corresponding comment.

I write my report in Markdown with Vim. That way I do not need to worry about getting the numbering of the comments right; I am free to re-order my comments, especially the ones that deal with major concerns, so that the most important ones come first. When I am satisfied I run the report through pandoc from within Vim (where % expands to the current file name and %:r to that name without its extension) to generate a text file:

pandoc -o %:r.txt %

After a final check I copy/paste the contents of that text file into the review submission platform.

Language issues

To this day I’m not sure whether the reviewer or the editor is responsible for fixing typos and other language errors. These days I tend to skip them, unless the meaning of a sentence has become completely obscure. Instead, I usually add to my list of major concerns a sentence such as:

There are many typos and grammatical mistakes throughout the paper. For example the last sentence of the first paragraph of the Introduction reads as follows:

> … that allows for a more active participation of the demand side in the operation a control task of the power system.

or even:

The language quality of this paper does not meet the standards for an international journal, and I found the paper very hard to follow.

In general I do not try to reformulate any passages. For many authors, English is a second language and I appreciate how hard it can be to communicate with clarity, even for native speakers. When necessary I might suggest that the authors have the paper reviewed by a native speaker.

Summary

That, in a nutshell, is how I review papers. I know it can feel like a chore, but I strongly encourage you to participate in the process. I hope this workflow helps you get started. If you have any comments, I’d love to hear them.

The DEBORAH project kick-off meeting

We are involved in DEBORAH, Eurostars project no. E!10286, led by EQUA Simulation AB, the vendor of the highly regarded IDA ICE building simulation software. Together with CSEM and Bengt Dahlgren AB, a Swedish consultancy firm specialised in buildings, the project partners aim to optimise the design and operation of district thermal energy systems.

We held the project’s kick-off meeting on Thursday 16th June 2016, in EQUA’s offices in Stockholm. Neurobat’s role in the project will consist in providing short- and long-term estimates of heating loads, and in extending IDA ICE with the Neurobat control algorithms.

A pilot site has been identified in Krokslätt, a district in the city of Göteborg, where heating for several buildings is provided by heat pumps combined with a system of boreholes: narrow shafts drilled through the rocky ground, in which the water fed to the heat pumps has its temperature raised by the surrounding heat. Besides “pre-heating” the water, this also has the benefit of improving the heat pumps’ coefficient of performance (COP). But few studies have been done on the optimal design (and operation) of such a system of boreholes, a gap that this project hopes to address.

This three-year project is a great opportunity for us to work with some of the domain’s thought leaders, and to integrate IDA ICE into our own product development workflow.

Being blocked doesn’t mean you cannot work

If you’ve been on a Scrum team for some time, you will inevitably hear someone at the stand-up say:

Today I cannot work on <some feature> because of <some reason>, but that’s all right. I’m not otherwise blocked because I can also work on <some unrelated thing>.

There are two (very human) factors at play here: 1) the desire to be seen as a productive team member, and 2) the unwillingness to deal with bad news. Admitting to being blocked can even become a taboo in some teams. Yet what is the purpose of the stand-up, if not to bring such issues out in the open?

What’s wrong with having everybody always making some kind of progress? Isn’t that indeed one of the patterns in Coplien’s Organizational Patterns of Agile Software Development? The problem is that having your work blocked while you work on something else increases the amount of work in progress, or WIP. And WIP, in a software team, is waste: it costs time, effort and money. Not all work is useful; working on non-priority items while a priority item goes unattended is the worst thing you can do.

Our team discussed this point at our last retrospective. No one contested the reality of this taboo in our team, and we resolved that from now on everyone should be open about his inability to progress.

As a team member, it’s ultimately your responsibility to be on the lookout for any such pattern; it’s not the ScrumMaster’s alone. Never let a teammate hide his impediments under a carpet of busyness; ultimately he, you, and the whole team will suffer.