7 numbers that show why building automation can save the world

Automating buildings costs money. Lots, lots of money. The return on investment (ROI) is usually very low, and it takes a long, long time (on the order of 5 to 10 years) for such an investment to pay for itself.

To make matters worse, people who rent the home or apartment they live in have little incentive to make it energy-efficient: they have no guarantee they will still live in the same place 10 years from now. And landlords? Why would they invest? Energy costs are borne by the tenants, so landlords too have little incentive.

If financial considerations won’t motivate people to invest in smarter buildings, let me propose another incentive: building automation, if implemented globally, is one of the most cost-effective strategies for keeping the atmospheric CO2 concentration at safe levels for the next 50 years.

I reviewed Thomas L. Friedman’s Hot, Flat, and Crowded in an earlier post. In that book, Mr Friedman refers to a paper published by Pacala and Socolow in Science in August 2004.

I’ve tracked that paper down. You can find it here: Stabilization Wedges: Solving the Climate Problem for the Next 50 Years with Current Technologies. Even if you don’t read the full paper, please do read the first couple of pages. The authors do a fantastic job of summarizing our current situation with respect to CO2 emissions and where we are headed if we do not act now. The abstract speaks for itself:

Humanity already possesses the fundamental scientific, technical, and industrial know-how to solve the carbon and climate problem for the next half-century. A portfolio of technologies now exists to meet the world’s energy needs over the next 50 years and limit atmospheric CO2 to a trajectory that avoids a doubling of the preindustrial concentration. Every element in this portfolio has passed beyond the laboratory bench and demonstration project; many are already implemented somewhere at full industrial scale. Although no element is a credible candidate for doing the entire job (or even half the job) by itself, the portfolio as a whole is large enough that not every element has to be used.

Let me summarize the key figures, and please commit them to memory:

280 ppm CO2 atmospheric concentration

For most of human history, the CO2 concentration in the atmosphere remained relatively stable at 280 ppm (parts per million). The industrial revolution coincided with the start of a clear increase in CO2 concentration.

375 ppm CO2 atmospheric concentration

This was the CO2 concentration at the time of the article (2004). And remember that the CO2 concentration has risen steadily ever since careful measurements began in the late fifties:

[Figure: CO2 atmospheric concentration measured at Mauna Loa (Hawaii) for the past 50 years, adapted from my thesis.]

500 ppm CO2 atmospheric concentration

Even allowing for (healthy) skepticism, most scientists believe that mankind must at all costs prevent CO2 levels from reaching double the preindustrial concentration, or about 560 ppm. To err on the side of caution, we as a species should pledge never to let CO2 levels cross the 500 ppm limit.

7 billion tons of carbon per year

When Pacala and Socolow wrote the article, mankind was releasing into the atmosphere 7 billion tons of carbon per year (7 GtC/year); emitted as CO2, that is enough gas to fill well over a billion hot-air balloons each year. It is also the upper limit on global emissions if we are to keep the CO2 concentration below the 500 ppm limit for the next 50 years.

14 billion tons of carbon per year

If we fail to act now, by 2054 we will be pumping 14 billion tons of carbon per year (14 GtC/year) into the atmosphere, according to the so-called Business As Usual (BAU) scenarios. Such an emission rate will almost certainly push the CO2 concentration beyond 500 ppm, i.e. past the safe upper limit. The consequences for global warming can only be disastrous.

[Figure: Average global temperatures for the last 150 years, adapted from my thesis.]

50 years

Stabilizing CO2 emissions is only the first half of the battle. Our goal is to stabilize them at their current levels for the next 50 years, but after that we must devise solutions to reduce them.

7 wedges

The paper proposes 15 potential solutions (or “wedges”) for stabilizing our CO2 emissions. Each one is technologically feasible and has been commercially demonstrated, and each, fully implemented, avoids 1 GtC/year of emissions by 2054. Since the gap between the business-as-usual 14 GtC/year and a flat 7 GtC/year is exactly 7 GtC/year, keeping our emissions at current levels through 2054 means implementing at least 7 of these 15 strategies on a global scale.
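The wedge count falls out of simple triangle arithmetic: under BAU the gap between actual and stabilized emissions grows linearly from 0 to 7 GtC/year over 50 years, while each wedge grows linearly from 0 to 1 GtC/year. A quick back-of-the-envelope check (a Java sketch, purely for illustration):

    // Back-of-the-envelope arithmetic behind the "7 wedges" figure
    // of Pacala and Socolow (2004). Purely illustrative.
    public class Wedges {
        public static void main(String[] args) {
            double years = 50.0;                 // planning horizon
            double gap = 14.0 - 7.0;             // BAU minus flat emissions in 2054, GtC/year
            double triangle = 0.5 * gap * years; // "stabilization triangle": 175 GtC
            double wedge = 0.5 * 1.0 * years;    // one wedge avoids 25 GtC over 50 years
            System.out.printf("Wedges needed: %.0f%n", triangle / wedge); // prints 7
        }
    }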

Wedge 3

The third wedge proposed by the authors strikes me as the easiest to implement:

Cut carbon emissions by one-fourth in buildings and appliances projected for 2054.

Yes, that’s right. If we or our children are to make it safely through the second half of this century, we must implement at least 7 of the 15 strategies, one of which is a 25% cut in carbon emissions from buildings and appliances.

And how, you may ask, can we achieve this? Well, there are really only two solutions. We may switch to more carbon-neutral energy sources, or we may reduce our energy demand. As I’ve argued in a previous post, we should prefer the latter option for the following reasons:

  • Our fundamental problem is our dependency on cheap sources of energy. Carbon-neutral energy sources, although much cheaper than only ten years ago, are still far from competitive.
  • We have enjoyed cheap sources of energy for so long that we have never had to consider the need to reduce our demand. In other words, we are addicted to energy, not oil.
  • It is much, much more cost-effective to reduce the energy demand of buildings and appliances, particularly through better home and building automation, than attempting to replace our current sources of energy with carbon-neutral ones.

Conclusion: an elevator pitch for building automation

We currently emit 7 GtC/year into the atmosphere. If we fail to act now, we will be emitting 14 GtC/year in 2054, and the CO2 concentration will exceed twice its preindustrial level. Building automation, if implemented on a global scale, can make buildings at least 25% more energy efficient, avoiding 1 GtC/year of emissions by 2054, one of the 7 GtC/year of cuts required to hold emissions at current levels. It is arguably the most cost-effective strategy for mitigating climate change.

MATLAB, Java, Spring and dynamic classloading

I have sort of a love-hate relationship with MATLAB, and have had ever since I read its tutorial in January 2003.

On one hand it’s a proprietary, closed-source system, which in my book rules it out for any scientific work. My one and only encounter with a Mathworks sales representative did nothing to ease my misgivings. It is virtually impossible to reproduce any scientific work done on the MATLAB platform without a licence, rendering it almost by definition unscientific.

Furthermore, MATLAB as a language is very low-level, and lacks constructs found in most modern languages. Object-orientation is a joke. You can’t loop over anything but vectors. There are no primitives per se apart from the matrix (a single scalar is a 1×1 matrix; come ooooon).

These gripes aside, MATLAB is remarkably useful in its own domain, namely technical computing. And the more I use Simulink, its companion simulation tool, the more impressed I am.

A side project of mine involves a Simulink-based physical model of an office room, complete with wall and air temperatures, daylight illuminances, heating elements and power consumption. But what’s really cool about this project is that we’ve extended the Simulink model with Java code running on the JVM that ships with MATLAB. This code uses RMI to expose the room’s heating, lighting and blind controls to any remote process.
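To give an idea of what such a remote interface looks like, here is a minimal sketch; the interface and method names are mine, for illustration, not the project’s actual API:

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical sketch of an RMI-exposed control interface for the
    // office-room model; names are illustrative, not the real API.
    public interface RoomControl extends Remote {
        double getAirTemperature() throws RemoteException;      // degrees Celsius
        double getDaylightIlluminance() throws RemoteException; // lux on the workplane
        void setHeatingPower(double watts) throws RemoteException;
        void setBlindPosition(double fraction) throws RemoteException; // 0 = open, 1 = closed
        void setLightingLevel(double fraction) throws RemoteException; // 0 = off, 1 = full on
    }

Any remote process (an optimization routine, say, or a hand-written controller) can then look this interface up in an RMI registry and drive the simulated room.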

I used this model quite extensively for my PhD thesis, and I’m now trying to make it somewhat more useful to the wider community of building control algorithm designers. To that end I have explored the possibility of basing this Java code on the Spring framework, in particular for the parts that require database queries.

Well, the proof of concept I wrote worked out just fine, with the following caveats. First, forget about dynamically loading Java classes from MATLAB. Second, FORGET ABOUT DYNAMICALLY LOADING JAVA CLASSES FROM MATLAB. It just doesn’t work beyond baby-style HelloWorldish applications. And it certainly will not work for any code relying heavily on the Class.forName() construct, which is vital for Spring.
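To see why, consider how Spring bootstraps: the XML configuration names bean classes as plain strings, which the framework must then load reflectively. A minimal sketch, with hypothetical bean and class names:

    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class Bootstrap {
        public static void main(String[] args) {
            // beans.xml (hypothetical) would contain something like:
            //   <bean id="dao" class="com.mycompany.JdbcMeasurementDao"/>
            // Spring resolves that class attribute reflectively, via
            // Class.forName(), for every bean it instantiates.
            ClassPathXmlApplicationContext context =
                    new ClassPathXmlApplicationContext("beans.xml");
            Object dao = context.getBean("dao");
            System.out.println("Loaded bean: " + dao);
        }
    }

In my experience, run something like this from MATLAB with the jars on the dynamic path, and the class resolution fails.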

I think part of the reason is that classes on the dynamic path and classes on the static path are handled by two different classloaders. According to this bug report, if your classes are on the dynamic path then you must use the following signature:

java.lang.Class.forName('com.mycompany.MyClass',true,cloader)

where cloader is the classloader that loads classes from the dynamic path. But who’s going to dive in and change the Spring code for that?

So forget about dynamic classloading in MATLAB. Just edit classpath.txt to point to your jarfiles and everything will be fine. Really. I have packaged the Java code using Maven’s jar-with-dependencies predefined assembly descriptor, yielding a single jarfile with all my dependencies, including Spring and the MySQL connector. Just splendid.
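For the record, that assembly needs nothing beyond the stock plugin configuration in the POM (this is standard Maven, nothing project-specific):

    <!-- pom.xml fragment: bundle the project and all its dependencies
         (Spring, MySQL connector, ...) into a single jarfile. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptorRefs>
          <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
      </configuration>
    </plugin>

Running mvn assembly:assembly then yields something like mylib-1.0-jar-with-dependencies.jar (the exact name depends on your artifactId and version), and its full path is the one line you add to classpath.txt.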

We even have a webpage for this project, but be warned that there’s not much there yet. The URL is http://smartbuildings.sf.net/coolcontrol/

The OpenRemote.org project

Recently I stumbled upon a blog entry by Marc Fleury, the founder of the JBoss Application Server project. In that post he describes his new pet project, OpenRemote, which has of late bloomed into a full-blown affair.

There’s an official website, and the project seems to be buzzing with activity. From what I understand, the goal of OpenRemote is to build an open-source universal remote control for the home, supporting all home automation protocols known to man: X10, KNX/EIB, Lon, etc. They are talking about a reference implementation, apparently targeting the iPhone.

I think a major challenge these guys will face is how to make a truly usable UI for home automation. I’ve quoted Donald Norman before and related his opinions on this thorny issue. But I truly hope the OpenRemote people will eventually solve this problem.

Trends in Smart Buildings Meeting, August 2008

Four people again attended this second meeting, the purpose of which is to share information among people interested in home and building automation. As previously, here are photos of the group’s collective memory along with my comments.

[Photo: 20080804_1837]

Fred told us first about a conference he had attended, organized and hosted by De Montfort University in Leicester, England. The conference was mainly about user comfort, but according to Fred there’s a certain Cooper (or Copper, sorry if I got this wrong) who’s doing fairly detailed CFD simulations coupled with state-of-the-art thermal comfort models.

Friedrich, who joined us for the first time, told us about his research. He’s working on the non-visual aspects of indoor lighting, in particular the important topic of spectral control of indoor lighting. For instance, he’s investigating whether “cold” fluorescent lights could have an impact on people’s feeling of well-being.

LESO has obviously been busy while I was away, as I learned that their sky scanner was being used again. It’s a highly reflective spherical mirror, laid flat on LESO’s roof, and a digital camera takes pictures of it from above. The mirror reflects the whole sky vault, so a computer that analyzes these pictures can then measure the sky luminance distribution. The sky can then be classified according to the standard CIE skies. (One ambition of LESO is to construct a catalog of representative skies for Lausanne, such as already exist for certain major cities.)

Friedrich is also using a head-mounted illuminance sensor that’s sensitive to the spectral quality of its incoming light. With this device, as I understood it, you can measure over a full day the quality of the eye-level light a person receives.

Very exciting work, I must say. I’m looking forward to seeing published papers.

[Photo: 20080804_1838]

The (non-invasive) automatic data acquisition on an inhabited building has always been a strong point of LESO. I’ve myself been somewhat involved in that effort, and one of my great regrets was not having had the time to work on a canonical data format for data recorded on a building. I personally believe there’s a need for one, because at the moment it is impossible for separate research groups to share their data without a major translation effort. It would benefit industry too, in the same way that a standard XML format for business activities helps the integration of legacy systems.

Someone raised a very interesting question at this point, the gist of which was “What’s wrong with Excel files?”. I have ranted against the use of Excel in academia before, but to these arguments I would add the following.

Scientific data in our field is almost always structured (i.e., not tree-like, and without arbitrary fields for each data item). So how you store your data essentially boils down to a flat file, a relational database (RDB), or a proprietary program (Excel being the obvious example, but MATLAB and Igor Pro are others).

I dislike proprietary programs being used in scientific work, on the principle that any such work must be repeatable and verifiable. This is by definition impossible with a proprietary program whose source code nobody can inspect, and whose licence costs might be a barrier.

Flat files and RDBs are my own, humble, personal preference, with a slight bias towards RDBs for any long-running measurement campaign that can yield thousands or millions of datapoints (as at LESO). Flat files are the format of choice for analysis, since they can be freely shared among co-workers and colleagues, and even published along with the peer-reviewed article.

[Photo: 20080804_1839]

Finally we talked about SUNtool, a simulation package to which LESO has contributed in the past but which has been plagued with difficulties. It appears that one commercial partner of that project has withdrawn its support, preventing the other partners from using their code. One ambition of LESO is now to start more-or-less from scratch and to develop a more open version of that tool.

That concluded our meeting, the next installment of which is tentatively scheduled for Monday, September 1st.

Smart houses discussed in the New York Times

A fun online article in the New York Times discusses the current public perception of smart houses.

The gist of the article was that home automation is, despite appearances, perfectly attainable provided someone pays for it; you no longer have to be either filthy rich or a geek to get it. There are, however, other reasons why people do not invest in home automation.

The most interesting (to me) facts from the article were:

  1. A complete home automation solution can be bought for between US$5,000 and US$10,000;
  2. There is no evidence of demand for smart homes from average buyers, except when energy savings are part of the package;
  3. The average American household contains 46 electronic devices. There’s great potential for home automation solutions that integrate all these ‘gadgets’.

Interview: H. Michael Newman

I have just come across a short but interesting interview with H. Michael Newman, the “Father” of BACnet. The interview can be downloaded from here.

Two points in particular caught my interest. One is that, according to Newman, BACnet is now an EN standard and thus the “law of the land” in the 28 member states of Europe. And since control algorithms cannot be entirely decoupled from the underlying communication philosophy, I believe algorithm developers would do well to familiarize themselves with the principles of BACnet.

The other is the existence of a textbook on building control written by Newman himself, Direct Digital Control of Building Systems: Theory and Practice. Textbooks on building control being such a rarity, I think this book will be worth a trip to the library.

Computing sustainability and building automation

The energy demand of computers (including PCs, peripherals, and corporate data centers) produced about 830 million tons of CO2 in 2007, according to a report by the Global eSustainability Initiative (GeSI), a group of technology firms interested in the potential impact of information and communication technologies on climate change. But computers can also help us save energy; the question has always been how, and how much.

The June 21st issue of The Economist comments on this report, summarizing the areas in which computers can help us achieve CO2 savings. The estimated savings for 2020, in gigatonnes of CO2, are as follows:

  1. Smart grid: 2.03
  2. Smart buildings: 1.68
  3. Smart logistics: 1.52
  4. Smart motors and industrial processes: 0.97
  5. Transport optimisation: 0.60
  6. Teleworking: 0.22
  7. Videoconferencing: 0.14

Notice that smart buildings occupy the number two spot. Buildings that switch off heating and ventilation when nobody is around will, according to the report, reduce our emissions by more than 1.6 billion tons of CO2. Smart buildings have long been touted as an effective way to reduce CO2 emissions, but this is, as far as I know, the first time a concrete figure has been given for those savings. The total emissions from ICT itself by 2020 are estimated at 1.4 gigatonnes, or roughly one-fifth of the total savings (7.8 gigatonnes).

One should, of course, be extremely suspicious of such data. I have not read the report itself and can’t comment on the methods used to derive these figures. But even if the absolute numbers are wrong, it is encouraging to see that smart buildings are estimated to contribute about 20% of all CO2 savings from ICT by 2020 (1.68 of 7.8 gigatonnes).

Trends in Smart Buildings Meeting, July 2008

On 4 July 2008 we held at LESO-PB the first of (hopefully) a series of meetings for people interested in home/building automation. The idea is to give people of widely different backgrounds a venue, time and opportunity to share, discuss and explore new ideas.

It was my pleasure to facilitate this meeting, and although I did not keep any minutes, you can find here pictures of the notes I took during the meeting.

[Photo: 20080704_1819]

We started with introductions. It was great to have people from industry and academia, as well as plain hobbyists (like yours truly), interested in this subject. One thing we agreed on quite early was to distinguish between building automation (BA) and home automation (HA): the two typically use completely different hardware and control algorithms, so when a distinction needs to be made we agreed to treat HA as a subset of BA.

We framed the discussion around two questions: 1) what is the state of BA today? and 2) what is the role of building simulation in BA?

[Photo: 20080704_1820]

David started by telling us about the preliminary research he’s been doing for his PhD work at EPFL. He’s looking for building simulation software modular enough to easily allow testing of different algorithms. This problem was similar to one I’d been working on during my own PhD, so we talked a bit about the software I had used, SIMBAD, and in particular how it had been extended with Java/RMI to allow remote processes to connect to it.

[Photo: 20080704_1821]

One thing that Antoine stressed was the importance of building simulations for BA designers. The nature of the problem makes it impossible to run tests of control algorithms on real, occupied buildings and to get feedback in a timely manner. And the results would need to be compared to some base case anyway.

[Photo: 20080704_1822]

We talked a lot about the academic efforts in building simulation, especially the need for a good model of the users’ behaviour. The fact is that modern simulation packages do not have a good user model, and it is very difficult to estimate the errors this introduces into energy demand predictions.

On the other hand, it was very unclear whether such user models could be directly used by BA systems to anticipate user actions. Users usually act after some discomfort threshold has been exceeded, but any BA system should try to act before that point is reached.

[Photo: 20080704_1823]

We briefly reviewed the user behaviour models that LESO had been working on for the past years, most notably Jessen’s occupancy model (the subject of his PhD thesis) and Fred’s window opening model.

[Photo: 20080704_1824]

Someone mentioned a research group in Zurich. I’m not 100% positive about this but I think this could be the group of Prof. Morari, with whom I had had a brief email exchange a couple of years ago.

Antoine stressed again the importance of reliability in BA systems. The reliability issue brought up a discussion on centralized vs distributed control systems.

[Photo: 20080704_1825]

We were by then running short of time, so I asked the audience for their recommendations on academic or trade publications of interest to BA. We concluded the meeting by regretting the lack of real innovation in BA, both academic and industrial, and by observing that the big challenge facing building simulation today is the modelling of human factors and the urban environment.

Thanks to everyone who participated, and see you next time!

Donald Norman on user interface design issues with smart houses

Let’s face it: we geeks have a tendency to accept computer interfaces barely more user-friendly than, say, a shell. And that’s fine by me; it doesn’t really matter that Emacs maps C-w, M-w and C-y to cut, copy and paste instead of the more familiar C-x, C-c and C-v. But if we are to make the smart home a reality, we’ll have to give its users something other than a screen and keyboard to interact with it.

There’s a great passage in Donald A. Norman’s classic The Design of Everyday Things that addresses just this point. It is so good that I’ll just let it speak for itself:

Even as this book is being completed, new sources of pleasure and frustration are entering our lives. Two developments are worthy of mention, both intended to serve the ever-promised “house of the future.” One most wonderful development is the “smart house,” the place where your every want is taken care of by intelligent, omniscient appliances. The other promised development is the house of knowledge. […] Both developments have great potential to transform lives in just the positive ways promised, but they are also apt to explode every fear and complexity discussed in this book into reality a thousand-times over.

Imagine all our electric appliances connected together via an intelligent “information bus.” This bus (the technical term for a set of wires that act as communication channels among devices) allows home lamps, ovens, and dishwashers to talk to one another. The central home computer senses the car pulling into the driveway, so it signals the front door to unlock, the hall lights to go on, and the oven to start preparing the meal. By the time you arrive in the house, your television set has already turned on to your favorite news station, your favorite appetizer is available in the kitchen, and the cooking of the meal has begun. Some of these systems “speak” to you (with voice-synthesizers inside their computer brains), most have sensors that detect room temperature, the outside weather, and the presence of people. All assume a master controlling device through which all the house occupants inform the system of their every want. Many allow for telephone control. Going to miss your favorite show on television? Call home and instruct your VCR to record it for you. Coming home an hour later than expected? Call your home oven and delay the starting time of the meal.

Can you imagine what it would take to control these devices? How would you tell your oven when to turn on? Would you do this through the buttons available at your friendly pay telephone? Or would you lug around a portable controlling unit? In either case, the complexity boggles the mind. Do the designers of these systems have some secret cure for the problems described in this book or have they perhaps already mastered the lessons within? Hardly. An article entitled “The ‘smartest house’ in America” in the technical magazine for design engineers, Design News, shows the normal set of arbitrary control devices, overly complex panels, and conventional computer screens and keyboards. The modern cooktop (accompanied by the caption “for the ultimate chef”) has two gas burners, four electric burners, and a barbecue grill controlled through a row of eight identical-looking, evenly spaced knobs.

It is easy to imagine positive uses for intelligent home appliances. The energy-saving virtues of a home that turns on the heat only for rooms that are occupied, or waters the yard only when the ground is dry and rain does not threaten, seem virtuous indeed. Not the most critical of the problems facing mankind, perhaps, but reassuring nonetheless. But it is difficult to see how the complex instructions required for such a system will be conveyed. I find it difficult to instruct my children how to do these tasks appropriately and I often fail at them myself. How will I manage the precise, clear instructions required for my intelligent dishwasher, especially through the very limited control mechanisms I am sure to be provided with? I do not look forward to the day.

Amen to that.