Neurobat, day one

Yesterday marked my first day as Chief Technology Officer at Neurobat AG, a young company formed in Switzerland to industrialize and market advanced building control algorithms, such as the ones commonly researched and developed at my former laboratory, the Solar Energy and Building Physics Laboratory at EPFL.

This also marks the end of almost three years spent building enterprise integration systems in Java for a certain coffeeshop. I’m now moving back to my original topics of interest, namely the intelligent control and simulation of buildings. Indeed, without disclosing too much, the very first project I will be working on is the implementation of certain ideas formulated during the Neurobat research project carried out aeons ago at LESO-PB. Except this time the systems won’t be running in the quiet and safe environment of an experimental building whose occupants have a history of forgiveness towards enthusiastic graduate students (myself included) and their ideas. No, this time we mean business: embedded systems that must be built rock-solid and run unattended for years, possibly decades.

One issue that’s come up more than once is whether we should keep MATLAB as our lingua franca for prototyping and trying out new ideas and concepts before porting them to languages that are, shall we say, closer to the machine. Or should we just dump it (along with its non-negligible licensing costs, especially for a non-academic organization) and work as close to the metal as we dare?

Personally, without wanting to sound overly smug or anything, I think that someone asking this question has obviously never tried multiplying two matrices in C. The implementation contributed by James Trevelyan to the Numerical Recipes in C website runs to about 33 lines:

void dmmult( double **a, int a_rows, int a_cols,
             double **b, int b_rows, int b_cols, double **y)
/* multiply two matrices a, b, result in y. y must not be same as a or b */
{
    int i, j, k;
    double sum;

    if ( a_cols != b_rows ) {
        fprintf(stderr, "a_cols b_rows (%d,%d): dmmult\n", a_cols, b_rows);
        exit(1);
    }

#ifdef V_CHECK
    if ( !valid_dmatrix_b( a ) )
        nrerror("Invalid 1st matrix: dmmult\n");
    if ( !valid_dmatrix_b( b ) )
        nrerror("Invalid 2nd matrix: dmmult\n");
    if ( !valid_dmatrix_b( y ) )
        nrerror("Invalid result matrix: dmmult\n");
#endif

/*  getchar();
    dmdump( stdout, "Matrix a", a, a_rows, a_cols, "%8.2lf");
    dmdump( stdout, "Matrix b", b, b_rows, b_cols, "%8.2lf");
    getchar();
*/
    for ( i=1; i<=a_rows; i++ )
        for ( j=1; j<=b_cols; j++ ) {
            sum = 0.0;
            for ( k=1; k<=a_cols; k++ ) sum += a[i][k]*b[k][j];
            y[i][j] = sum;
        }
}

Give me instead MATLAB's

y = a * b

anytime. Now of course I realize the comparison is completely unfair: the C version includes error checking, comments, and so on. But still, C is, after all, originally a systems programming language, while MATLAB-the-language is a DSL for doing precisely this sort of thing. I never wanted to prove that C sucks at linear algebra; I just wanted to show that even the most trivial MATLAB operations would have to be re-implemented by us in C before we could even begin using them. And I don't think we have that sort of time. Not outside of academia.
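To be clear, even if we stay in C we would not have to hand-roll dmmult; we would link against a numerical library. Here is a minimal sketch, assuming the GNU Scientific Library is available (compiled with something like -lgsl -lgslcblas -lm); the matrix dimensions and fill values are placeholders I made up for illustration.

#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

int main(void)
{
    /* allocate a (2x3), b (3x2) and the result y (2x2) */
    gsl_matrix *a = gsl_matrix_alloc(2, 3);
    gsl_matrix *b = gsl_matrix_alloc(3, 2);
    gsl_matrix *y = gsl_matrix_alloc(2, 2);

    gsl_matrix_set_all(a, 1.0);   /* dummy contents */
    gsl_matrix_set_all(b, 2.0);

    /* y = 1.0 * a * b + 0.0 * y; GSL checks the dimensions for us */
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0, a, b, 0.0, y);

    gsl_matrix_free(a);
    gsl_matrix_free(b);
    gsl_matrix_free(y);
    return 0;
}

That is already a big improvement over the 33 lines above, but it is still a long way from y = a * b, which rather proves the point.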

DB4ALL: reformatting the mess that the Internet has become

I always try very hard to keep my posts within the main topic of this blog, namely computers in the context of building automation and simulation. Occasionally I fail, as with today’s post.

I’d like to tell you about a software company co-founded by a friend and fellow Toastmaster of mine, David Portabella. The company’s name is [DB4ALL](http://www.db4all.com), and they specialize in software for retrieving structured data from the web.

(Disclaimer: I am not affiliated with this company. I have had the opportunity to play with their tool, which I sincerely think is a high-quality one, but I derive no remuneration from writing this piece.)

They’ve developed ‘Webminer’, a Java library for extracting data in a structured manner from any website. Suppose, for instance, that you need a relational database with the data from the [CIA World Factbook](https://www.cia.gov/library/publications/the-world-factbook/). That data, though in the public domain, cannot be obtained in the form of a relational database, but only by clicking around on the CIA website. But with ‘Webminer’, the smart guys at DB4ALL can write a custom application that knows how to navigate such websites, ‘scrape’ and ‘normalize’ their data, and save it all to a relational database for you.
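To make the general pattern concrete (and to be clear, this is not Webminer, whose Java API I have not seen, but my own rough sketch of the fetch-and-store workflow): download a page with libcurl, extract something from it, and persist it with SQLite. The URL, table and record below are made up, and the parsing and normalization step, which is where the real work lies, is left out.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>
#include <sqlite3.h>

/* libcurl write callback: append received bytes to a growing, NUL-terminated buffer */
static size_t collect(void *data, size_t size, size_t nmemb, void *userp)
{
    size_t n = size * nmemb;
    char **buf = (char **)userp;
    size_t old = *buf ? strlen(*buf) : 0;
    char *tmp = realloc(*buf, old + n + 1);
    if (!tmp) return 0;
    memcpy(tmp + old, data, n);
    tmp[old + n] = '\0';
    *buf = tmp;
    return n;
}

int main(void)
{
    char *page = NULL;

    /* 1. Navigate: fetch one page (placeholder URL) */
    CURL *curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.org/factbook/switzerland.html");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &page);
    if (curl_easy_perform(curl) != CURLE_OK) { fprintf(stderr, "fetch failed\n"); return 1; }
    curl_easy_cleanup(curl);

    /* 2. Scrape and normalize: a real tool would parse the page here; we fake one record */
    const char *country = "Switzerland";
    double population_millions = 7.6;

    /* 3. Save: write the record into a relational table */
    sqlite3 *db;
    sqlite3_open("factbook.db", &db);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS country(name TEXT, population_millions REAL);",
                 NULL, NULL, NULL);
    char sql[256];
    snprintf(sql, sizeof sql, "INSERT INTO country VALUES('%s', %.1f);",
             country, population_millions);
    sqlite3_exec(db, sql, NULL, NULL, NULL);
    sqlite3_close(db);

    free(page);
    return 0;
}

Doing this robustly over thousands of pages, with session handling and ever-changing page layouts, is of course exactly the hard part that a tool like Webminer is meant to take off your hands.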

On [DB4ALL’s website](http://www.db4all.com) you will find references to [the two most popular datasets](http://db4all.com/databases/) that they’ve mined: the above-mentioned CIA World Factbook, and the SourceForge database of open-source projects. Having such data in relational form is invaluable for any researcher or marketing analyst. Suppose, for instance, that you want data on the popularity of different programming languages over time in open-source projects. Well, with these datasets you have all you need to get started.

This, for instance, is a screenshot of the SourceForge dataset opened in Excel:

All in all, if you need publicly available data from a website stored in a relational database form, you should definitely consider using [DB4ALL](http://www.db4all.com)’s services.

Software engineering best practices in academia

As you might know, my primary background is in academia and research, but over the past years my interests have focused increasingly on software engineering.

With the benefit of hindsight, it’s clear that had I known then what I know now about software, I would without doubt have been a much, much more productive researcher and graduate student. It’s simply not possible today to carry out research without programming. And research itself, to be considered valuable, requires exactly the same qualities demanded of modern software engineering: repeatability, versioning, and safe exploration.

I’m convinced that researchers would benefit if practicing software engineers gave them some feedback on how they solve these problems. And I’ve often pondered whether I should begin writing on software engineering topics that I think could be relevant for scientists and/or engineers, particularly in academia. It could even form the basis for a series of blog posts.

I’d rather ask you, dear reader, for advice on this. **Would you like me to begin a series of posts on software engineering topics relevant to scientists and engineers in academia?** And if yes, which particular subjects would you like to see me discuss?

I’m really, really looking forward to reading your comments on this matter.

The OpenRemote.org project

Recently I stumbled upon a blog entry by Marc Fleury, who I believe is one of the lead developers on the JBoss Application Server project. In this post he describes his new pet project, OpenRemote, which has of late bloomed into a full-blown affair.

There’s an official website, and the project seems to be buzzing with activity. From what I understood, the goal of OpenRemote is to build an open-source universal remote control for your home, supporting all home automation protocols known to man: X10, KNX/EIB, Lon, etc. They are talking about a reference implementation, apparently targeting the iPhone.

I think a major challenge these guys will face is how to make a truly usable UI for home automation. I’ve quoted Donald Norman and his opinions on this thorny issue before. But I truly hope the OpenRemote people will eventually solve this problem.

Article watch: Building Simulation vol 1 nr 3

Two articles from the new issue of Building Simulation sound particularly interesting:

Comparing computer run time of building simulation programs, by Tianzhen Hong, Fred Buhl, Philip Haves, Stephen Selkowitz and Michael Wetter.

This paper presents an approach for comparing the computer run time of building simulation programs. The computing run time of a simulation program depends on several key factors, including the calculation algorithm and modeling capabilities of the program, the run period, the simulation time step, the complexity of the energy models, the run control settings, and the software and hardware configurations of the computer used to run the simulation. To demonstrate this approach, we ran simulations for several representative DOE-2.1E and EnergyPlus energy models. We then compared and analyzed the computer run times of these energy models.
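As an aside, the basic measurement behind such a comparison is easy to script; a minimal sketch of wrapping a batch simulation run in a wall-clock timer could look like this (the command line below is a placeholder, not the invocation used in the paper):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* placeholder command; in a real comparison this would be the DOE-2 or EnergyPlus run */
    int rc = system("./run_simulation model.inp weather.epw");
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("exit status %d, wall-clock run time %.1f s\n", rc, elapsed);
    return 0;
}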

DeST—An integrated building simulation toolkit Part II: Applications, by Xiaoliang Zhang, Jianjun Xia, Ziyan Jiang, Jiyi Huang, Rong Qin, Ye Zhang, Ye Liu and Yi Jiang.

This is the companion paper of part I of DeST overview. DeST was developed as a building simulation tool with the aim of benefiting both design of and research on building energy efficiency. During its development, DeST has been applied to many projects, development of building regulations, and research. This paper gives examples of several areas in which DeST has been applied, including building design consultation, building commissioning, building energy conservation assessment, a building energy labeling system, and scientific research. Examples from a demonstration building are presented to demonstrate the entire process of aiding design with DeST. Additional projects and regulations are also mentioned to introduce other applications of DeST.

Java GNU Scientific Library 0.2 released

I have released version 0.2 of the Java GNU Scientific Library (JGSL) project, its second public alpha release. Please visit the JGSL project website for more information.

This second release provides additional wrapper classes for the GSL stats module (mean, variance, standard deviation, etc.). Feel free to try it out and get back to me with questions or comments.
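For readers who have not used the GSL before, this is roughly what the wrapped stats functionality looks like from plain C (the data values below are arbitrary); the new JGSL classes expose the same operations to Java code:

#include <stdio.h>
#include <gsl/gsl_statistics_double.h>

int main(void)
{
    double data[] = { 17.2, 18.1, 16.5, 18.3, 12.6 };
    size_t n = sizeof data / sizeof data[0];

    /* stride of 1: the samples are contiguous in the array */
    double mean = gsl_stats_mean(data, 1, n);
    double var  = gsl_stats_variance(data, 1, n);
    double sd   = gsl_stats_sd(data, 1, n);

    printf("mean = %g, variance = %g, sd = %g\n", mean, var, sd);
    return 0;
}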

Journal of Building Performance Simulation – Volume 1 Issue 2 article watch

More papers directly relevant to the main topic of this blog:

Monitoring and modelling of manually-controlled Venetian blinds in private offices: a pilot study by Vorapat Inkarojrit

This study presents results from a window blind usage survey and field study that was conducted in California, USA during a period spanning from the vernal equinox to the winter solstice. A total of 113 office building occupants participated in the survey. Twenty-five occupants participated in the field study. In this study, 83 measurements of physical environmental conditions were cross-linked with participants’ window blind controlling preferences. A total of 13 predictive window blind control bivariate and multivariate logistic models were derived. As hypothesised, the probability of a window blind closing event increased as the magnitude of physical environmental and confounding factors increased (p < 0.01). The main predictors were window/background luminance level and vertical solar radiation at the window. The confounding factors included MRT, direct solar penetration and participants’ self-reported sensitivity to brightness. The results showed that the models correctly predict between 72-89% of the observed window blind control behaviour. This research extends the knowledge of how and why building occupants manually control window blinds in private offices, and provides results that can be directly implemented in energy simulation programs.
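Models of this form are straightforward to embed in a simulation program. As a purely illustrative sketch (the coefficients below are placeholders of my own, not the values fitted in the paper), a bivariate logistic model of the blind-closing probability might be evaluated like this:

#include <stdio.h>
#include <math.h>

/* P(closing) = 1 / (1 + exp(-(b0 + b1*log10(L_window) + b2*E_vertical))) */
static double p_blind_closing(double window_luminance_cd_m2, double vertical_solar_w_m2)
{
    const double b0 = -6.0, b1 = 1.2, b2 = 0.004;   /* placeholder coefficients */
    double z = b0 + b1 * log10(window_luminance_cd_m2) + b2 * vertical_solar_w_m2;
    return 1.0 / (1.0 + exp(-z));
}

int main(void)
{
    printf("P(closing) at 2000 cd/m2 and 300 W/m2: %.2f\n",
           p_blind_closing(2000.0, 300.0));
    return 0;
}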

Article watch: Building and Environment, December 2008

The following papers from Building and Environment Volume 43, Issue 12, December 2008 are relevant to the field of building automation and simulation.

Photometry and colorimetry characterisation of materials in daylighting evaluation tools by M. Bodart, R. de Penaranda, A. Deneyer, G. Flamant.

This paper presents a methodology for evaluating the photometric and colorimetric characteristics of internal building materials, for daylight evaluation. The assessment of these characteristics is crucial both for modelling materials accurately in daylight simulation tools and for building correct daylight mock-ups. The essential photometric and colorimetric parameters that influence the reflection of light from and its transmission through building materials are identified and described. Several methods for evaluating these parameters qualitatively and quantitatively are then proposed and discussed. Our new methodology was used to create a database of materials in a freely accessible web tool which compares full-size materials to scale-model materials in order to help architects and lighting designers choose materials for building daylight scale models.

On the behaviour and adaptation of office occupants by Frederic Haldi, Darren Robinson.

During the warm summer of 2006 a comprehensive longitudinal field survey of the adaptive actions of occupants, their thermal satisfaction and the coincident environmental conditions was conducted in eight Swiss office buildings. We have applied logistic regression techniques to these results to predict the probability of occupants’ actions to adapt both personal (clothing, activity and drinking) and environmental (windows, doors, fans and blinds) characteristics. We have also identified, for each type of control action exercised, the increases in temperature at which thermal sensation votes are reported. These “empirical adaptive increments” have also been defined for combinations of control action. In this paper we present the field survey methodology as well as the results relating to the above, which we discuss along with scope for further related work.

Minimisation of life cycle cost of a detached house using combined simulation and optimisation by Ala Hasan, Mika Vuolle, Kai Siren.

In the current study, minimisation of life cycle cost (LCC) for a single family detached house is achieved by combined simulation and optimisation. The house has a typical Finnish construction with initial U-values in accordance with the Finnish National Building Code C3 of 2003. The implemented approach is coupling the IDA ICE 3.0 building performance simulation program with the GenOpt 2.0 generic optimisation program to find optimised values of five selected design variables in the building construction and HVAC system. These variables are three continuous variables (insulation thickness of the external wall, roof and floor) and two discrete variables (U-value of the windows and type of heat recovery).

This investigation shows the advantages gained from the implemented approach of combining simulation and optimisation. The solution suggests lowering the U-values for the external wall, roof, floor and the window from their initial values. The exact values of the optimised design variables depend on the set up of the LCC data for each case. Reduction of 23–49% in the space heating energy for the optimised house is obtained compared with the reference case. Verification of the GenOpt results is made by comparison with results from a brute-force search method, which indicates that GenOpt has found, or has come very close to, the global minimum in the current study.
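The brute-force verification the authors mention is conceptually simple. Here is a toy sketch with only two design variables and a made-up cost function standing in for the simulation (a real study would, of course, call the building simulation program inside the loop):

#include <stdio.h>
#include <float.h>

/* made-up life cycle cost surrogate: investment rises with insulation thickness
   and better glazing, heating cost falls with both; 20-year horizon, no discounting */
static double lcc(double d_wall, double u_window)
{
    double investment     = 1500.0 * d_wall + 120.0 / u_window;
    double annual_heating = 800.0 / (1.0 + 8.0 * d_wall) + 90.0 * u_window;
    return investment + 20.0 * annual_heating;
}

int main(void)
{
    double best_cost = DBL_MAX, best_d = 0.0, best_u = 0.0;

    /* exhaustive grid: wall insulation 0.10..0.50 m, window U-value 0.8..2.0 W/(m2 K) */
    for (double d = 0.10; d <= 0.50 + 1e-9; d += 0.01)
        for (double u = 0.8; u <= 2.0 + 1e-9; u += 0.1) {
            double c = lcc(d, u);
            if (c < best_cost) { best_cost = c; best_d = d; best_u = u; }
        }

    printf("minimum LCC %.0f at d = %.2f m, U = %.1f W/(m2 K)\n",
           best_cost, best_d, best_u);
    return 0;
}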

Modeling sky luminance using satellite data to classify sky conditions by S. Janjai, I. Masiri, M. Nunez, J. Laksanaboonsong.

Many traditional models of vegetation canopy reflectance have commonly used one of two approaches. Either the canopy is assumed to consist of discrete objects of known reflectance and geometric-optics are then used to calculate shading effects, or, as in the turbid medium approach, the canopy is treated as a horizontally homogeneous layer of small elements of known optical properties and radiative transfer theory is used to calculate canopy reflectance. This paper examines the effect of solar zenith angle on the reflectance of red and near-infrared radiation from forests using a combination of these modelling approaches. Forests are first modelled as randomly spaced eucalypt crowns over a homogeneous understorey and the fractional coverage of four components: shaded and sunlit canopy and shaded and sunlit understorey are calculated. Reflectance from each fraction is then modelled for a range of solar zenith angles using the Verhoef SAIL model. The overall scene reflection as seen by a nadir viewing satellite sensor is compared for three forest types representing a gradient of crown density from open dry grassy woodlands to dense wetter closed forest with an understorey of mesophytic plants. Modelled trends in scene reflectance change are consistent with aircraft measurements carried out at three different solar zenith angles. Results indicate that an increase in both tree density and solar zenith angle leads to an increase in the dominance of shaded components. In the visible band, both the sparsely treed woodland and the medium density dry forest show similar trends to that predicted by a turbid medium model, however, the wet forest shows a less rapid decrease in reflectance with solar zenith angle. In the near-infrared band, as tree density increases from woodland to wet forest, overall scene reflectance shows increased departure from that modelled using the traditional assumption of smooth homogeneous canopies, changing from an increase with solar zenith angle for the woodland to a decrease with solar zenith angle for the forest types.

Monitoring home electricity usage

If you can measure it, you can control it, and that is also true of your energy consumption.

Two energy-monitoring devices that are not only easy to use but also easy to install have recently been brought to my attention. The first one is the OWL, which consists of a wireless clip you put around your house’s main electricity cable. Data is then transmitted to a portable LCD device. I suppose it works by measuring the magnetic field induced by the current.

The other one is the Wattson. It works along similar principles, but is maybe just a bit more stylish.
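For the curious, here is a rough sketch of what I imagine either device has to do internally: sample the current picked up by the clamp, compute its RMS value, and multiply by a nominal mains voltage (230 V here, my assumption) to estimate the power drawn. The sample values are synthetic.

#include <stdio.h>
#include <math.h>

/* apparent power estimate from sampled current and a nominal mains voltage
   (the power factor is ignored, since a current clamp alone cannot measure it) */
static double estimate_power_w(const double current_samples_a[], int n, double mains_v)
{
    double sum_sq = 0.0;
    for (int i = 0; i < n; i++)
        sum_sq += current_samples_a[i] * current_samples_a[i];
    return mains_v * sqrt(sum_sq / n);
}

int main(void)
{
    const double pi = 3.14159265358979;
    double samples[20];

    /* one cycle of a 10 A-peak sine wave, sampled 20 times */
    for (int i = 0; i < 20; i++)
        samples[i] = 10.0 * sin(2.0 * pi * i / 20.0);

    printf("estimated power: %.0f W\n", estimate_power_w(samples, 20, 230.0));
    return 0;
}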

I welcome the introduction of such devices on the market. I’ve heard that when consumption meters were introduced in cars, their owners started paying attention to them and adjusted their driving accordingly. Some do it for genuine economic reasons, but others do it simply for the fun of it. This sort of toy is simply irresistible to us grown-up children.

Perhaps the only small gripe I have against the OWL or the Wattson is their lack of granularity. You measure the total energy consumption of the house, but not the consumption by appliance (fridge, oven, etc.) or by kind (lighting, heating, etc.). And I don’t suppose there’s any way to record historical data from them. But never mind, they’re cool nevertheless.

OptiControl project website

Recently I got this email from Dr. Dimitrios Gyalistras of ETH Zurich:

I would like to draw your attention to the newly launched website of the project Use of Weather and Occupancy Forecasts for Optimal Building Climate Control (OptiControl):

http://www.sysecol.ethz.ch/OptiControl/

OptiControl is a collaborative effort by the ETH Zurich, EMPA Dübendorf, MeteoSwiss and Siemens Building Technologies. The project is sponsored by “swisselectric research”, the ETH Domain’s “Competence Center Energy and Mobility” and Siemens Building Technologies.

I hope that the website will be of interest to you. Its contents will be updated regularly to reflect the project’s results and the newest literature in the field of predictive building climate control.