My Name is Embedded Industry, and I'm an...

It is known that admitting you have a problem is the first step in the healing process. Ken Orr compared management that refuses to admit it has a process problem to an alcoholic who strongly denies having any problems.

The Product Development and Management Association (PDMA) claims that new products currently have a success rate of only 59 percent at launch, up only one percentage point since 1990. Cancelled or failed (in the market) products consumed an estimated 46% of product development resources (Cooper, Winning at New Products, 2001).

Stephen Balacco, an embedded analyst at VDC, was quoted in the December 2002 issue of SD Times: “Embedded [developers are] frustrated by inadequate or changing specifications during product development”. The article mentions that two-thirds of the 400 respondents cited changes in specification as the number one cause of delays.

Do we need more evidence to justify a strong intervention by advocates of more flexible practices, so that we can better cope with change? Change is evident today and should be considered an opportunity instead of a threat.

In embedded systems it is not change alone that causes big up-front design processes to fail so miserably. It is the characteristic of very late learning. "Final" designs from all the different engineering disciplines arrive at the (delusional) end of the project. We could compare that to a software integration phase, magnified tenfold. Component tolerances conflict with the mechanics, microcontroller I/O characteristics were misunderstood, up-front designed protocols have performance shortcomings, and so on.

A new round is called for.

At this point wouldn't it be nice if there was a flexible process to do this with ease?


The firmware "hassle" with TDD - #2

In his Embedded Systems Design Magazine article "Bugopedia", Niall Murphy lays out a few types of common defects in embedded software projects. Many of them are such common pitfalls that seasoned embedded programmers avoid them without even paying attention. The ones that go beyond this are the "Nasty Compiler" type of defects. We have had our share of these just recently: compilers from two vendors for two different targets have been responsible for mysterious behaviour of compiled C source, C source which works fine on a PC. The article works as an introduction for part 2 of our analysis of unit testing and TDD of firmware.

Why would it be impossible, or worthless?
"The compiler for the target may have a bug in it, and the code will not work even if it passes tests on a PC."

Why would it still be worth it, and what kind of hassle is there?
It is true that because of "Nasty Compilers" we cannot be sure that source code passing unit tests on a PC will work on the final embedded target. There are reported experiences of running the unit tests on the actual HW target. This, however, only partly solves the problem. Since all the tests cannot fit in target memory alongside all the production code, the test suite is a special build and will be linked differently. Especially if the final production code needs to be compiled with optimization turned on, severe problems may arise from the slightest change at the linking phase. This makes trusting the suite difficult. These issues should be considered when figuring out what to test and on what level.

But this is by no means a showstopper for unit testing as a whole, nor for TDD, because there are other benefits:

TDD enforces "the simplest thing that could possibly work" thinking in development. In firmware development (as, I would think, in any development) developers tend to take bigger bites than they can chew, ending up spending a lot of time debugging. With TDD we write a little, test a little, and if the damn thing breaks we only need to go back a couple of minutes' worth of effort. That is much less painful than realizing you have spent an entire day developing something that you just cannot fix!

By writing unit tests we enforce simple interfaces to modules (C files). If a file is used in different configurations, then a complete test suite protects us from making changes that would break the other configurations.

Again I want to remind you that unit testing and TDD are very well worth considering in embedded software and firmware development, as well as in mainstream programming, but you have to make your decision based on your own case.


CruiseControl is Great for Firmware, Too

I have been playing with the idea of setting up a nightly build server for the embedded project. I started hacking together a simple Python script to execute an automated GNU Make build, analyse the output, and generate notification emails to stakeholders. After a while I stopped and thought "this sounds all too familiar". I decided to take another look at CruiseControl, which would also get us straight to continuous integration instead of just nightly builds.

With great help from Lasse Koskela's article "Driving on CruiseControl" I was able to create a simple Java project compiled with Ant and automated with CruiseControl. That was the out-of-the-box solution. I was then able to proceed to my original problem, automating an embedded project integration. Lasse's article refers to an older version of CC, which did not have the "exec" builder plugin, so I downloaded the latest version. (It, by the way, comes with a ready-made Java example that you can launch right after extracting the thing.) I changed the config.xml to use CVS instead of Subversion and then used the "exec" builder plugin to launch the GNU Make build for our C source. I opened my browser, pointed it at the local server, and ta-daa: our first project was driving on CruiseControl! I will make the config.xml available after I clean it up.
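Until then, the relevant part looks roughly like the sketch below. This is simplified and from memory: the project name, paths and addresses are placeholders, and the exact element and attribute names should be checked against the CruiseControl documentation for your version:

```xml
<cruisecontrol>
  <project name="firmware">
    <!-- Poll CVS for commits; build only when the tree has been quiet. -->
    <modificationset quietperiod="60">
      <cvs localworkingcopy="checkout/firmware"/>
    </modificationset>
    <!-- The "exec" builder launches plain GNU Make on the C source. -->
    <schedule interval="300">
      <exec command="make" args="all" workingdir="checkout/firmware"/>
    </schedule>
    <publishers>
      <htmlemail mailhost="localhost" returnaddress="cc@example.com"
                 buildresultsurl="http://localhost:8080/buildresults/firmware"/>
    </publishers>
  </project>
</cruisecontrol>
```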

We have written a few unit tests in the project (some even in TDD fashion), but they use the embUnit framework, which is not able to spit out XML, so unit test results are not published by our CC yet. At the moment I'm considering two options:

1. updating the existing tests to use CUnit
2. writing a script to translate the results from embUnit into simple XML output


We are looking for an experienced chair buyer (three years minimum)

I wrote earlier about managers becoming coaches in an agile environment. Vasco Duarte also wrote about agile management. He pointed out this article by Joel Spolsky about "the development abstraction layer". It is entertaining reading, as it usually is at Joel on Software. The posting does a great job of describing the new era of the manager's role. However, I would personally like to see a bit more from management than getting me a new chair when I'm in need. I would prefer top management to create the grand vision and middle management to organize programs that put this vision into practice. I do not mean traditional middle management, which would drive innovation by command, but middle management that builds the abstraction layer enabling us to harness the innovation power of the developer team. This is a difficult task, and it is what makes finding a good manager still a tough job. Chair buyers are much easier to recruit.


Three day faceoffs

My home town ice-hockey team lost the fifth and deciding semi-final game last Thursday. They did, however, win the faceoff statistics. Who cares!

In the current project we have had a couple of faceoffs, and I think we have won both. We got a wireless communication protocol from an outside party. We call this a stack, and it is delivered as a C library. We decided to bring the software developer responsible for the stack to our premises for three days. We spent those three days around the hardware and laptops, not in a meeting room with PowerPoints. On the fourth day we were developing at full speed on top of the stack API. Could we have achieved this with paper documents? I don't think so. There were documents available, but the knowledge of the necessary initialization routines, timings, interrupt requirements, and so on, was gained during these three days, not by experimenting according to the documents.

In another episode we needed to implement protocol support for a communication gateway. This time we brought the developer responsible for developing this gateway to our premises for three days. Again, on the fourth day we were able to continue development at full speed. This time the documents were even admitted to be "a little outdated".

On an earlier occasion, when the hardware for the wireless communication was contracted to an outside party, we used paper documents and emails to explain what we wanted from our prototype. That period took a couple of weeks; at times we waited three days for a single email.

I believe winning faceoffs is easy, even though it alone does not guarantee winning the game, as we saw in this year's semi-finals.


Scrum - Plain and Simple

Conchango has put up a nice presentation on Scrum.


XP2006 in Oulu, Finland. Program available!

XP 2006 will be held in June in Oulu, Finland. The program for the tutorials and the conference has now been announced.
I was hoping for experience papers from the embedded/firmware domain, but it seems that I won't be getting any. However, another interesting field is strongly present: scaling agile methods and applying them in real-life project environments. Impressive roster, too: Kent Beck, Barry Boehm, the Poppendiecks, Mike Cohn; the list goes on. Good times ahead!


The firmware "hassle" with TDD?

I got a few questions about what exactly is the hassle I was writing about. Fine, I'll try to paint the picture for you big boys with your fancy workstations and Smalltalk, Java, Ruby, etc.

My current hassle is caused by mains-controlling devices; you might call these "light switches". These products, however, do a bit more: they can control the load (actually, different types of loads), be adjusted by parameterization, perform sophisticated self-diagnosis, be networked via RF technology, and the list goes on. A fully featured "light switch" will be a 3-microcontroller multi-processor system connected internally over an I2C bus and RF-networked with its buddies. A typical choice of controller is the Microchip PIC16F876A, which has 8 kB of flash memory (actually 14-bit words in the PIC architecture) and 368 bytes of RAM. The monster runs at 4 MHz. Everything is programmed in C. In parallel evolve new mechanics, electronics, thermal management, etc. This is called co-design and is a known characteristic of embedded development. At the end, official certification procedures need to be passed.

Okay, you probably are starting to get the picture. Now we bring in agile development. I like to divide this into (at least) two forms of being agile: 1) agile project management and 2) agile software development. Here we talk about the latter, and actually only about unit testing. We might get to automation, TDD, etc. a bit later...

Why would it be impossible, or worthless?
"We can't unit test because everything depends on HW"

Why would it still be worth it, and what kind of hassle is there?
At first thought the above is true, but let's think a bit further. I am talking about workstation-based unit testing of code that is going to run in an embedded system later. We gain from unit testing at this higher level, as it enforces a simple hardware driver interface design. I have seen hardware drivers (and written many more) that are initialized in one place, started in another, and every now and then adjusted from wherever. If we develop our architecture this way, it is obviously a hassle to unit test the code with any meaningful coverage.

Traditionally, when the hardware was not available, we started writing detailed specifications for the application/firmware/hardware interfaces. Everything was based on a strong belief that we would get it right. When the hardware arrived, we of course found out that we had got it all wrong. This is similar to the big-bang integration phase in early software processes. This is how the "dispersed HW drivers" are born.

With the unit test ideology we ensure that we have as little coupling with hardware-related stuff as possible. When we write tests for the higher levels, we also learn the requirements for our SW/HW interface. We are actually doing the same thing as before, writing detailed specifications - EXCEPT - this time it is fun, and a "working prototype is the primary measure of progress" (adapted from agile manifesto principle 7).


TDD in firmware development - worth the hassle?

Today James Grenning is giving a workshop on Test-Driven Development (TDD) of embedded software. Too bad I'm stuck in Finland and the Embedded System Conference is held in San Jose.

Applying TDD to embedded software has been discussed quite a bit before:

1. Nohau offers a course on TDD. The exercises are done in Python, but there is an introduction on how to do it for embedded software in C.
2. Object Mentor offers articles about embedded software and TDD. Many of them are written by James Grenning, who will be speaking about the subject today. They offer training courses as well.
3. Nancy Van Schooenderwoert, founder of Agile Rules/XP Embedded, has done lots of experimenting with TDD and embedded system development, talked about it, written about it, and taught it to others. Her publications can also be found online. They have also developed their own testing framework, Catsrunner.

So, I was inspired to blog today because James Grenning is giving his presentation. I'm working in a firmware environment which is not quite equal to any of the environments reported in the papers above. In my case the hassle is even bigger. Will it be rewarding enough to be worth it? That is the question I'm trying to find an answer to. Today I do not have enough experience to say for sure. However, I do believe in TDD in general.
I will blog about this in the forthcoming months.