5/13/2006
Listen to Evolving Excellence
5/12/2006
Objects and Firmware - Polymorphism
***** worker.h *****
typedef struct _worker
{
    void (*theJob)( void );
} worker;
extern void WORKER_construct( BYTE meIndex,
                              void (*function)( void ) );
extern void WORKER_doTheJob( BYTE meIndex );
extern void setPORTA( void );
extern void clearPORTA( void );
***** worker.c *****
#include "uc_defs.h"
#include "worker.h"
worker theWorker[2];
#define me theWorker[meIndex]
void WORKER_construct( BYTE meIndex,
                       void (*function)( void ) )
{
    me.theJob = function;
}

void WORKER_doTheJob( BYTE meIndex )
{
    me.theJob();
}

void setPORTA( void )
{
    PORTA = 0xFF;
}

void clearPORTA( void )
{
    PORTA = 0x00;
}
***** main.c *****
#include "uc_defs.h"
#include "worker.h"
void main( void )
{
    WORKER_construct( 0, &clearPORTA );
    WORKER_construct( 1, &setPORTA );

    while( 1 )
    {
        if( PORTB & 0x01 )
        {
            WORKER_doTheJob( 0 ); // executes clearPORTA
        }
        else
        {
            WORKER_doTheJob( 1 ); // executes setPORTA
        }
    }
}
I use the WORKER_doTheJob function to wrap the function pointer in order to keep the interface consistent. I know this adds a little overhead, but hey, we are all artists here, and in my opinion it reads more clearly.
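The same pattern scales naturally to several function pointers per instance, which starts to look like a hand-rolled vtable. A minimal sketch (all names here are illustrative, not from the original code; a small log variable stands in for PORTA so the behavior can be checked on a workstation):

```c
#include <assert.h>

typedef unsigned char BYTE;

/* Hypothetical extension of the worker pattern: two "methods"
   per instance instead of one, i.e. a hand-rolled vtable. */
typedef struct _worker
{
    void (*start)( void );
    void (*stop)( void );
} worker;

static worker theWorker[2];
#define me theWorker[meIndex]

static BYTE lastAction; /* records which fake method ran last */

static void fastStart( void ) { lastAction = 1; }
static void fastStop( void )  { lastAction = 2; }

void WORKER_construct( BYTE meIndex,
                       void (*start)( void ),
                       void (*stop)( void ) )
{
    me.start = start;
    me.stop  = stop;
}

void WORKER_start( BYTE meIndex ) { me.start(); }
void WORKER_stop( BYTE meIndex )  { me.stop(); }

BYTE WORKER_lastAction( void ) { return lastAction; }
```

Swapping the pointers at construction time gives each instance its own behavior behind the same call sites, which is essentially what a C++ compiler generates for virtual calls.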
I have started several series of posts, but have not been able to finish any of them. Good news: I will soon use this post as a starting point to finish the TDD hassle series... :-)
If you got interested in Object-Oriented C Programming (OOCP), here are my tips for a starting point (online):
Evanthelix OO C page
Axel-Tobias Schreiner has done a great job in this book
5/09/2006
Link: TDD Benefits the Creative Flow
5/07/2006
Agile Development Has Crossed the Chasm
This phenomenon of agile development becoming mainstream can also be seen in the forthcoming XP2006 in Oulu, Finland. Lots of presentations focus on issues around scaling agile development.
The Invisible Visibility
I have been on a well-deserved vacation for a week now. Yesterday, for some weird reason (being on vacation, and on a Saturday), I wanted to know what was going on in the project I'm involved with.


This is the visibility I'm talking about. It just cannot get any more transparent than that. I know exactly what is going on in my project. No emails, no phone calls or any other disturbance to the flow. Then again, I represent the technical developer/leader level.
I think the traditional role of the project manager consists of three components: leadership, management and administration. The bigger the project, the team and the organization, the bigger these individual responsibilities get. In a large organization the number of parallel projects may be high, and backlog-type reporting gives just way too much detail - too much visibility. The development abstraction layers described by Joel Spolsky also apply here. I blogged about them a while ago.
Schwaber explained an easy way of turning backlogs into Gantt charts in his book. Microsoft has already released a Scrum plugin for MS Project. This, however, for some reason does not satisfy me; some small piece is missing before it makes sense to me. You have to be very careful when building up administrative or executive reporting mechanisms for an agile project. If you don't watch out, you may end up doing all the traditional reporting (Gantt, Earned Value, WBS, PERT, CPM, you name it...) AND agile reporting like backlogs from Scrum and/or parking lot reports from Feature-Driven Development. That means you have lost the concept of traveling light that is so fundamental to the agile philosophy.
5/05/2006
Objects and Firmware - Basics
- there is no implicit inheritance
- there is no polymorphism
- only a very reduced subset of the available values are objects (typically the GUI components)
Object-oriented problem #1 in small (or tiny) scale firmware development is that dynamic memory allocation and small microcontrollers - hmm, well, they just don't mix! Even if you have a C compiler that will generate code for malloc()'s and free()'s, using them as intended will make you run out of memory because of fragmentation. You have a couple of options here:
1. Don't care if you run out of memory; just manage the reset
2. Write your own simplified memory allocator, capable of allocating, for example, only three different object sizes
3. Use static memory allocation, but still program in an "object'ish" fashion
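Option 2 can be sketched as a fixed-block pool: every block has the same size, so freeing never fragments memory. This is a minimal illustration, not code from any of our projects; the names and sizes are made up, and a real system would keep one such pool per object size:

```c
#include <assert.h>
#include <stddef.h>

#define POOL_BLOCK_SIZE  8  /* bytes per block  */
#define POOL_NUM_BLOCKS  4  /* blocks in pool   */

static unsigned char poolMemory[POOL_NUM_BLOCKS][POOL_BLOCK_SIZE];
static unsigned char poolUsed[POOL_NUM_BLOCKS]; /* 0 = free, 1 = taken */

/* First-fit allocation: all blocks are interchangeable, so any
   free slot will do and fragmentation cannot occur. */
void *POOL_alloc( void )
{
    unsigned char i;
    for( i = 0; i < POOL_NUM_BLOCKS; i++ )
    {
        if( !poolUsed[i] )
        {
            poolUsed[i] = 1;
            return poolMemory[i];
        }
    }
    return NULL; /* out of blocks: caller decides what to do */
}

void POOL_free( void *block )
{
    unsigned char i;
    for( i = 0; i < POOL_NUM_BLOCKS; i++ )
    {
        if( poolMemory[i] == (unsigned char *)block )
        {
            poolUsed[i] = 0;
            return;
        }
    }
}
```

The cost is a fixed RAM budget decided at compile time, which on a 368-byte device is arguably a feature rather than a limitation.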
We have used option 3 for a couple of projects now. I'll explain briefly:
***** object.h *****
typedef struct _object
{
    BYTE theValue;
} object;

extern void OBJECT_construct( BYTE meIndex );
extern BYTE OBJECT_getValue( BYTE meIndex );
extern BYTE OBJECT_addValue( BYTE meIndex, BYTE v );
***** object.c *****
#include "uc_defs.h"
#include "object.h"
object myObjects[2];
#define me myObjects[meIndex]
void OBJECT_construct( BYTE meIndex )
{
    me.theValue = 0;
}

BYTE OBJECT_getValue( BYTE meIndex )
{
    return me.theValue;
}

BYTE OBJECT_addValue( BYTE meIndex, BYTE v )
{
    me.theValue += v;
    return me.theValue;
}
***** main.c *****
#include "uc_defs.h"
#include "object.h"
void main( void )
{
    OBJECT_construct( 0 );
    OBJECT_construct( 1 );

    while( 1 )
    {
        OBJECT_addValue( 0, 1 );
        OBJECT_addValue( 1, 2 );
        PORTA = OBJECT_getValue( 0 );
        PORTB = OBJECT_getValue( 1 );
    }
}
We use the word 'me' instead of 'this' because sometimes the code is compiled with a C++ compiler. With some more macros you can make the code even more readable.
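One hypothetical way to take the macro idea further is to turn 'me' into a local pointer declared at the top of each "method", so the array lookup happens once and the bodies read like ordinary struct code. This is only a sketch of the idea, not how our actual projects do it; the ME macro name is made up:

```c
#include <assert.h>

typedef unsigned char BYTE;

typedef struct _object
{
    BYTE theValue;
} object;

static object myObjects[2];

/* Every "method" starts with ME; afterwards me-> reads like this-> */
#define ME object *me = &myObjects[meIndex]

void OBJECT_construct( BYTE meIndex )
{
    ME;
    me->theValue = 0;
}

BYTE OBJECT_getValue( BYTE meIndex )
{
    ME;
    return me->theValue;
}

BYTE OBJECT_addValue( BYTE meIndex, BYTE v )
{
    ME;
    me->theValue += v;
    return me->theValue;
}
```

A pointer also makes it easier to later swap the static array for some other storage scheme, since only the ME macro would change.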
I don't know if this even counts as object-based (probably lots of people would say no), but it works for us. At the very least it enforces encapsulation better than relying only on developers' discipline. There is overhead from passing the meIndex parameter (though it is a BYTE instead of a pointer) and from the setters and getters, but nothing's free here - if you haven't noticed.
4/28/2006
My Name is Embedded Industry, and I'm an...
It is well known that admitting you have a problem is the first step in the healing process. Ken Orr compared management that refuses to admit it has a process problem to an alcoholic strongly denying having any problems.
The Product Development and Management Association (PDMA) claims that new products currently have a success rate of only 59 percent at launch, up only one percent since 1990. Cancelled or failed (in the market) products consumed an estimated 46% of product development resources (Cooper, Winning at New Products, 2001).
Stephen Balacco, embedded analyst at VDC, was quoted in the December 2002 issue of SD Times: "Embedded [developers are] frustrated by inadequate or changing specifications during product development". The article mentions that two-thirds of the 400 respondents cited changes in specification as the number one cause of delays.
Do we need more evidence to justify a strong intervention by advocates of more flexible practices, in order to cope better with change? Change is evident today and should be considered an opportunity instead of a threat.
In embedded systems it is not change alone that causes big up-front design processes to fail so miserably. It is the characteristic of very late learning. "Final" designs from all the different engineering disciplines arrive at the (delusional) end of the project. We could compare that to software integration, magnified by a power of ten. Component tolerances conflict with the mechanics, microcontroller I/O characteristics were misunderstood, up-front designed protocols have performance shortcomings, and so on.
A new round is called for.
At this point wouldn't it be nice if there was a flexible process to do this with ease?
4/23/2006
The firmware "hassle" with TDD - #2
Why would it be impossible, or worthless?
"The compiler for the target may have a bug in it, and the code will not work even if it passes tests on a PC."
Why would it still be worth it, and what kind of hassle is there?
It is true that because of "nasty compilers" we cannot be sure that source code passing unit tests on a PC will work on the final embedded target. There are reports of running the unit tests on the actual HW target. This, however, only solves the problem partly. As all the tests cannot fit into target memory together with all the production code, the test suite is a special build and will be linked differently. Especially if the final production code needs to be compiled with optimization turned on, severe problems may arise from the slightest change at the linking phase. This makes trusting the suite difficult. These issues should be considered when figuring out what to test and at what level.
But by all means this is not a show stopper for unit testing as a whole, nor for TDD, because there are other benefits:
TDD enforces "the simplest thing that could possibly work" thinking in development. In firmware development (as, I would think, in any development) developers tend to take bigger bites than they can chew, ending up spending a lot of time debugging. With TDD we write a little, test a little, and if the damn thing breaks we only need to go back a couple of minutes' worth of effort. That is much less painful than realizing you have spent an entire day developing something that you just cannot fix!
By writing unit tests we enforce simple interfaces to modules (C files). If a file is used in different configurations, then a complete test suite protects us from making changes that would break other configurations.
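A PC-side unit test for a module like the OBJECT example can be as simple as plain assert() calls; no framework is strictly required to start. This is a hedged sketch: in a real build object.c would be compiled for the host and linked against the test file, but here the module is inlined to keep the example self-contained, and the test names are made up:

```c
#include <assert.h>

typedef unsigned char BYTE;

/* --- the module under test, normally compiled from object.c --- */
typedef struct _object { BYTE theValue; } object;

static object myObjects[2];
#define me myObjects[meIndex]

void OBJECT_construct( BYTE meIndex )          { me.theValue = 0; }
BYTE OBJECT_getValue( BYTE meIndex )           { return me.theValue; }
BYTE OBJECT_addValue( BYTE meIndex, BYTE v )   { me.theValue += v;
                                                 return me.theValue; }

/* --- the tests: each one sets up, acts, and asserts --- */
void test_constructResetsValue( void )
{
    OBJECT_addValue( 0, 42 );
    OBJECT_construct( 0 );
    assert( OBJECT_getValue( 0 ) == 0 );
}

void test_addAccumulates( void )
{
    OBJECT_construct( 1 );
    OBJECT_addValue( 1, 3 );
    OBJECT_addValue( 1, 4 );
    assert( OBJECT_getValue( 1 ) == 7 );
}
```

A small main() calling the test functions in order, compiled with the host compiler, gives the fast red/green cycle that TDD relies on.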
Again I want to remind you that unit testing and TDD are very well worth consideration in embedded software and firmware development, as well as in mainstream programming, but you have to make your decision based on your own case.
4/21/2006
Cruisecontrol is Great for Firmware, Too

We have written a few unit tests in the project (some even in TDD fashion), but they use the embUnit framework, which is not able to spit out XML, so unit test results are not published by our CC yet. At the moment I'm considering two options:
1. updating the existing tests to use CUnit
2. writing a script to translate embUnit results into simple XML output
4/19/2006
We are looking for an experienced chair buyer (three years minimum)
4/18/2006
Three day faceoffs

In the current project we have had a couple of faceoffs, and I think we have won both. We got a wireless communication protocol from an outside supplier. We call this a stack; it is delivered as a C library. We decided to bring the software developer responsible for the stack to our premises for three days. We spent those three days around the hardware and laptops, not in a meeting room with PowerPoints. On the fourth day we were developing at full speed on top of the stack API. Could we have achieved this with paper documents? I don't think so. There were documents available, but the knowledge of necessary initialization routines, timings, interrupt requirements, and so on, was gained during these three days, not by experimenting according to the documents.
In another episode we needed to implement protocol support for a communication gateway. This time we brought the developer responsible for the gateway to our premises for three days. Again, on the fourth day we were able to continue development at full speed. This time the documents were even admitted to be "a little outdated".
On an earlier occasion, when the hardware for the wireless communication was contracted to an outside supplier, we used paper documents and emails to explain what we wanted from our prototype. This period took a couple of weeks; at times we waited three days for a single email.
I believe winning at faceoffs is easy, even though it alone does not guarantee winning a game as we saw in this year's semi-finals.
4/15/2006
4/11/2006
XP2006 in Oulu, Finland. Program available!
I was hoping for experience papers from the embedded/firmware domain, but it seems that I won't be getting any. However, another interesting field is strongly present: scaling agile methods and applying them in real-life project environments. Impressive roster: Kent Beck, Barry Boehm, the Poppendiecks, Mike Cohn - the list goes on. Good times ahead!
4/09/2006
The firmware "hassle" with TDD?
I got a few questions about what exactly is the hassle I was writing about. Fine, I'll try to paint the picture for you big boys with your fancy workstations and Smalltalk, Java, Ruby, etc.
My current hassle is caused by mains-controlling devices; you may call these "light switches". These products, however, do a bit more: they can control the load (actually different types of loads), they can be adjusted by parametering, they have sophisticated self-diagnosis, they can be networked via RF technology, and the list goes on. A fully featured "light switch" will be a 3-microcontroller multi-processor system connected internally with an I2C bus and RF-networked with its buddies. A typical choice of controller is the Microchip PIC16F876A, which has 8 kB of flash memory (actually 14-bit words in the PIC architecture) and 368 bytes of RAM. The monster runs at 4 MHz. All programmed in C. In parallel evolve new mechanics, electronics, thermal management, etc. This is called co-design and is a known embedded development characteristic. At the end, official certification procedures need to be passed.
Okay, you probably are starting to get the picture. Now we bring in agile development. I like to divide this into (at least) two forms of being agile: 1) agile project management and 2) agile software development. Here we talk about the latter, and actually only about unit testing. We might get to automation, TDD, etc. a bit later...
Why would it be impossible, or worthless?
"We can't unit test because everything depends on HW"
Why would it still be worth it, and what kind of hassle is there?
At first thought the above seems true, but let's think a bit further. I'm talking about workstation-based unit testing of code that is going to run in an embedded system later. We gain from unit testing at this higher level, as it enforces a simple HW driver interface design. I have seen hardware drivers (and written many more) that are initialized in one place, started from another, and every now and then adjusted from wherever. If we develop our architecture this way, it is obviously a hassle to unit test the code with any meaningful coverage.
Traditionally, when the hardware was not available, we started writing detailed specifications for the application/firmware/hardware interfaces. Everything was based on a strong belief that we would get it right. When the hardware arrived, we of course found out that we had got it all wrong. This is similar to the big-bang integration phase in early software processes. This is how the "dispersed HW drivers" are born.
With the unit test ideology we ensure that we have as little coupling with hardware-related stuff as possible. When we write tests for the higher levels, we also learn the requirements for our sw/hw interface. We are actually doing the same thing as traditionally, writing detailed specifications - EXCEPT - this time it is fun, and the "working prototype is the primary measure of progress" (adapted from agile manifesto principle 7).
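The decoupling can be sketched with link-time substitution: application code calls a narrow hw interface, which on the target is implemented by writing to a port, and on the workstation by a fake that merely records the last value so tests can assert on it. All names here are hypothetical, invented for illustration:

```c
#include <assert.h>

typedef unsigned char BYTE;

/* --- led_hw.h: the narrow sw/hw interface --- */
void LEDHW_write( BYTE pattern );

/* --- application code under test: pure logic, no port access --- */
static BYTE blinkState;

void BLINK_toggle( void )
{
    blinkState = (BYTE)( blinkState ? 0x00 : 0xFF );
    LEDHW_write( blinkState );
}

/* --- workstation fake, linked instead of the real driver.
       On target, LEDHW_write would instead do: PORTA = pattern; --- */
static BYTE fakePortA;

void LEDHW_write( BYTE pattern )
{
    fakePortA = pattern; /* record instead of touching hardware */
}

BYTE FAKE_lastWrite( void )
{
    return fakePortA;
}
```

The test build links the fake, the production build links the real driver; the application object code is identical in both, which is exactly the coupling discipline the tests enforce.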
4/05/2006
TDD in firmware development - worth the hassle?
Applying TDD to embedded software has been talked about quite a bit earlier:
1. Nohau is offering a course on TDD. Exercises are done in Python, but there is an introductory part on how to do it for embedded software in C.
2. Object Mentor offers articles about embedded software and TDD. Many of them are written by James Grenning, who will be speaking about the topic today. They offer training courses as well.
3. Nancy Van Schooenderwoert, founder of Agile Rules/XP Embedded, has done lots of experimenting with TDD and embedded system development, talked about it, written about it, and taught it to others. Her publications can also be found online. They have also developed their own testing framework, Catsrunner.
So, I was inspired to blog today because James Grenning is giving his presentation. I'm working in a firmware environment which is not quite equal to any of the environments reported in the papers above. In my case the hassle is even bigger. Will the reward be worth it? That is the question I'm trying to find an answer to. Today I do not have enough experience to say for sure. However, I do believe in TDD in general.
I will blog about this in the forthcoming months.