C++: when delete doesn’t delete

We once spent almost a week chasing a mysterious memory leak in our application, built on top of the highly regarded eCos real-time operating system. The leak appeared after we had rewritten some of our code in C++, having recognised that emulating object orientation in plain C was no longer adequate for our needs.

After about half a day we could reproduce the memory leak on the target system with code that essentially looked like this:

Controller* controller = new Controller();
delete controller;

What baffled us most was that running this code in unit tests on our development machines exposed no such memory leak. We routinely run all our unit tests under Valgrind to catch memory usage errors, and in this case it reported nothing. It therefore seemed very unlikely that the leak was caused by a defect in our own code.

What’s more, the leak was almost-but-not-quite consistent: we leaked about 952 bytes on average, but that figure could be as low as 920 or as high as 968, and it was always a multiple of 8 bytes. After about 8.5 hours the system would reboot, presumably because it had run out of memory. We used the mallinfo() function to display the amount of available memory.
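
For the curious, here is roughly how we kept an eye on the heap. This is a simplified sketch rather than our actual test harness: the controller.h header name is made up, and the exact fields exposed by struct mallinfo depend on the C library in use.

#include <malloc.h>       // mallinfo(); on some platforms it is declared in <stdlib.h> instead
#include <cstdio>

#include "controller.h"   // hypothetical header declaring our Controller class

// Print how much free heap is lost across a single new/delete cycle.
void check_controller_leak()
{
    struct mallinfo before = mallinfo();

    Controller* controller = new Controller();
    delete controller;

    struct mallinfo after = mallinfo();
    std::printf("free heap: %d bytes before, %d bytes after\n",
                before.fordblks, after.fordblks);
}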

After almost a week we found the answer. According to the documentation, the default implementation of the delete operator in eCos is a no-op! I suppose the rationale is that most developers of embedded systems tend to shy away from dynamic memory allocation, and that it is better to reduce the size of the firmware by not providing a (rarely-needed) delete operator.

Except when we need one, of course.

To enable a proper delete operator, simply disable the CYGFUN_INFRA_EMPTY_DELETE_FUNCTIONS option in your eCos configuration.
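
For us that meant an entry along these lines in the generated ecos.ecc file, shown here from memory; the exact layout varies, and the same change can be made from the graphical configuration tool:

cdl_option CYGFUN_INFRA_EMPTY_DELETE_FUNCTIONS {
    user_value 0
};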

Going for one-week sprints: a good wrong idea

A few weeks ago, our team held a sprint retrospective (which I unfortunately couldn’t attend) during which it was decided to shorten our sprints from two weeks to one. The team made the right decision, but probably for the wrong reasons, and here’s why I think so.

The main driver behind this decision was Neurobat’s involvement in the Aargau Heizt Schlau project: a canton-wide project to measure the efficiency of our system in 50-100 individual houses in the Aargau canton during the 2015-2016 winter. The goal is to obtain an independent assessment of the energy-savings potential of our product, replicating our own peer-reviewed investigation. The project is mostly driven by a team in Brugg, with ample support from our R&D team in Meyrin.

Keeping the project on track and on schedule turned out to be extremely challenging. Very soon, urgent support requests began to come at unpredictable times, and we were having trouble keeping our sprint commitments.

The realisation that urgent, random support requests were going to be the norm for this heating season is the main reason the team decided to experiment with one-week sprints. They have a five-year history of sprint retrospectives, and I’m convinced they collectively understand the principles underlying the practice of timeboxed iterations. (A practice always proceeds from a principle, and can be modified only when the principle is fully understood.) A less mature team should not make this decision and should stick with two-week sprints; but I believe ours was mature enough to carry out this experiment.

Many teams, when they begin with Scrum, will object to the “overhead” introduced by daily standups, sprintly retrospectives and planning meetings. And since they are likely to miss their sprint commitments during the first few months, they will very probably ask for longer sprints. Resist this temptation.

Mike Cohn tells the story of a team facing exactly this problem: good quality work but systematic overcommitments. He agreed to let the team change the sprint duration, but went against the team’s request for longer sprints. Instead, they went for shorter ones. His rationale against longer sprints is simple:

The team was already pulling too much work into a four-week sprint. They were, in fact, probably pulling six weeks of work into each four-week sprint. But, if they had gone to a six-week sprint, they probably would have pulled eight or nine weeks of work into those!

So if shorter sprints are generally to be preferred over longer ones, why do I think the decision was a solution to the wrong problem? Because I believe that switching to shorter sprints will only perpetuate the root cause of the situation we are in. We decided to go for shorter sprints because urgent support requests were coming in more frequently than ever. How will switching to shorter sprints solve that problem?

I’m reminded of this quote, which I believe came from Mike Cohn’s Succeeding with Agile:

Few organizations are in industries that change so rapidly that they cannot set priorities at the start of a two-week sprint and then leave them alone. Many organizations may think they exist in that environment; they don’t.

If your organisation has trouble planning more than a week ahead, then do your development team a favour and try, at all costs, to address that underlying problem. Your team members should not be the ones whose productivity suffers because of a lack of foresight elsewhere in the organisation.

Scrum stories that are juuuust right

One thing has been bugging me for quite some time now as I observe our team at Neurobat. Most stories on our sprint board are worked on by one developer each, leading to daily scrums where everyone reports on work that is completely independent of everyone else’s.

Even though we encourage people to pair program, the fact remains that most stories are such that one person can implement them by himself, with the possible exception of testing. (We have a rule that a developer may not write the acceptance tests for his own story, much less execute them.)

That, in turn, leads to a very quiet office. We work in an open-space office where the six of us are in direct line of sight of one another. Yet most of the time there is very little chatter, as each of us is busy with “his bit”.

Perhaps Mike Cohn summarises the issue best, in his User Stories Applied:

Most user stories should be written such that they need to be worked on by more than one person, such as a user interface designer, programmer, database engineer, and a tester. If most of your stories can be completed by a single person, you should reconsider how your stories are written. Normally, this means they need to be written at a higher level so that work from multiple individuals is included with each.

I’m not a big fan of hyperbole, but this passage was a bit of a revelation to me. Here we had been faithfully trying hard to break up stories that were too large into tiny weeny stories that a motivated developer could implement in a couple of days; and now I’m being told that there is such a thing as a story that is too small? Talk about being in a Goldilocks-ish fix.

Very well Goldilocks er… I mean Mr Cohn, I’ll bring this up at our next retrospective and we’ll see whether our stories are really too small.

Running CARNOT models under OSX

CARNOT (Conventional And Renewable eNergy systems Optimization Toolbox) is a set of MATLAB & Simulink models for simulating buildings and building systems, e.g. boilers, heat distributors etc. It’s been developed by a collaboration involving several companies and universities and is generally well-regarded. It’s one of several MATLAB toolboxes dedicated to the problem of simulating building physics; other toolboxes with a similar goal include the International Building Physics Toolbox and SIMBAD.

CARNOT is in the process of being moved to a new hosting provider. In the meantime, I’ve recently obtained a copy of this toolbox and here is my experience in getting it to run under OSX.

CARNOT is distributed as a zip file, carnot_60_2013b_public_22oct2015.zip. I decompress it and find what looks like a Simulink top-level model called carnot.slx, and several sub-folders. Very encouragingly, I see there’s an installation guide:

[Screenshot: the CARNOT root folder]

I move the decompressed folder to the folder where I keep all my in-progress projects, and create a symbolic link to it named more simply carnot.

The installation guide is very well written, and the main steps consist of:

  1. Decompressing the toolbox;
  2. Running the init_carnot.m script, which will set up all the paths correctly;
  3. Compiling all the MEX files with the provided script (a rough sketch of steps 2 and 3 at the MATLAB prompt follows this list).
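
From the MATLAB prompt, steps 2 and 3 amount to something like this (my own shorthand, not commands quoted verbatim from the guide):

% Step 2: from the CARNOT root folder, set up the toolbox paths.
init_carnot

% Step 3: from the version_manager folder, compile the MEX files.
MakeMEX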

When I ran init_carnot.m for the first time on my Mac, it didn’t work, and the error messages made it very clear that most file paths had been written with the assumption that the toolbox was going to be used on Windows. At this point I could have done either of two things:

  1. Fix the issues myself as quickly as possible and get on with the work;
  2. Fix the issues myself carefully, making sure that my fixes could then be sent back to the maintainer of the package.

Being on a business trip at the time, I felt I had the leisure to go for option 2. Seeing that this toolbox was going to need some fixing, I made a git repository out of it and added all its files. (I didn’t know at this point which files, if any, were generated; I figured I could worry about that later and remove them from the repo then.)

I fixed the path problems, which mostly lay in path_carnot.m, called by init_carnot.m. I created a patch from the fix and sent it to one of the maintainers. The paths were now correctly set up.

Next I ran mex -setup, which went fine: MATLAB picked up my Xcode installation with the Clang compiler. The next step in the installation guide was to run MakeMEX.m from the version_manager directory. That ran fine for several .c files, until it tried to compile dir2_mex.c. When I opened that file in the editor I saw that it depended on the Windows API. Here I had two options:

  1. Try to understand what this file was doing and rewrite it without using the Windows API;
  2. Skip the compilation of this file and hope that the rest of the toolbox would run fine without it.

The problem with option 1 was that I didn’t know at this point whether there were going to be other files with the same problem. I certainly didn’t want to enter that kind of endless loop of fixing file after file that depended on the Windows API. And since the build process stopped on encountering the first error, I had no way of knowing whether many other problems were waiting further down the line.

So it felt more appropriate to find where in the script mex was being called, and wrap that call in a try/catch block that issues a warning whenever a file fails to compile. Re-running MakeMEX now compiled all the files correctly except dir2_mex.c, which I hoped would not be needed for any of the simulations I was planning to run.
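
The change itself was tiny; the relevant part of the compilation loop in MakeMEX.m ended up looking roughly like this (variable names from memory, not verbatim):

% For each MEX source file, attempt compilation but carry on if it fails.
try
    mex(filename);   % compile the current .c file
catch err
    warning('Could not compile %s: %s', filename, err.message);
end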

Once this was done, I could finally type carnot at the command line and the toolbox would open:

[Screenshot: the CARNOT toolbox window]

I was immediately drawn to the box that says double click to open examples, which yielded another set of errors, again related to file paths. After fixing those I could open an example model, example_House_SFH45, click run, and watch the simulation run. I was all set.

A couple of days after writing the first draft of this article, I learned from one of the main developers that they plan to set up a proper SVN repository for the code, and that the whole toolbox is going to be released under a BSD licence instead of the current LGPL. Until this is done, I’ve pushed my fixes to a public repository on GitHub, where I have since added a few more small fixes. But keep in mind that this is in no way the official repository for CARNOT; that will be announced shortly.

Book review: Advanced R

I would like to call this the best second book on R, except that I wouldn’t know what the first one would be. I learned R from classes and tutorials about 10 years ago, used it on my PhD and four articles, and use it today on a daily basis at work; yet only now, after reading this book, do I feel like I could possibly be called an R programmer rather than just a user.

The book deals with a variety of topics that are seldom discussed in the R tutorials you are likely to find freely available. Some are perhaps unnecessary in a book like this (Ch. 5, Style Guide), some could easily deserve an entire book (Ch. 7, OO field guide), but the chapters on Functions and Environments, the three chapters in Part II (Functional Programming) and the chapter on Non-standard evaluation are easily reason enough to buy this book.

How many times, indeed, have you spent frustrating hours trying to write a function to act as a wrapper around, say, lattice plotting functions that use non-standard evaluation? Or tried to call subset() from another function, only to be met with cryptic error messages? Now, for the first time, the solution is not only clear to me; I feel I could also explain to a colleague why things work the way they do.
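
If the subset() scenario sounds abstract, here is a minimal illustration; my_subset() is a hypothetical wrapper, and the exact wording of the error depends on your R version:

my_subset <- function(data, condition) {
  # subset() uses non-standard evaluation: it captures the expression
  # 'condition' and evaluates it inside 'data', so the expression we
  # actually passed (cyl == 4) ends up being evaluated in an environment
  # where 'cyl' does not exist.
  subset(data, condition)
}

my_subset(mtcars, cyl == 4)
## Error: object 'cyl' not found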

R is incredibly powerful and dynamic and will, most of the time, do just what you expect it to do. But if you want to really understand what is going on under the hood, or if you want to write your own packages (whether for yourself, for internal use, or for wider distribution), you owe it to yourself to read this book.

The only problem with daily scrums

Over the past five years, our team has attended more than 120 daily standup meetings, carefully following the “canonical” format and having each team member answer the usual questions:

  1. What did you do yesterday?
  2. What will you do today?
  3. Any impediments?

There seems to be one flaw with this format, however: you cannot say what you will do for the day before you have heard whether anyone else has an impediment.

For example, suppose Alice says her bit, announcing what she intends to do for the day and mentioning no impediments. When Bob’s turn comes and he reports that he is having some trouble and could use some help, Alice has to revisit what she has just said and amend her plans for the day.

At Neurobat we’ve implemented a partially debugged hack around this problem by having two standup rounds. During the first round we do the canonical standup meeting; then the ScrumMaster asks whether anyone needs a second round. The goal of this second round is twofold:

  1. To let anyone say something he may have forgotten about during the first round.
  2. To let anyone amend their plans for the day due to something they may have heard during the first round.

This solution is far from ideal, and sounds annoyingly like a two-pass compiler. But it is, for now, the best approach we have found to deal with what I perceive to be the main drawback of the canonical form of the daily scrum.