5/27/2006

Get Work Done -Documentation Considered Harmful

James Shore writes about two kinds of documentation and agile development.

All of us who have tried to convince someone to value agile methods over plan-driven processes have encountered the phrase "but we need documentation". I really saw the light while reading Mr. Shore's article. It had been vaguely clear to me before, but dividing documentation into these two categories gave me the simple structure I have been looking for:

1. Get Work Done
2. Enable Future Work


The following situation is not too uncommon. An embedded system project is getting closer to a process gate. All of a sudden it is time to write documentation because documents are "demanded by the process". This is ridiculous! A process does not demand or want anything, even less does it NEED anything. The dominating parts of a project system are the people, not the process. What do they NEED in order to progress effectively? That's correct from the back row - communication. Everyone agrees with that, but why is communication so strongly associated with paper documents in engineering and project management? Are we still paying the price of the stereotype created decades ago, that engineers cannot communicate? If so, what makes project managers think that engineers could write?


So back to the project. The project has built working prototypes with incremental development and short time-boxing in cross-disciplined engineering teams. Electronic schematics, part lists, price estimates, firmware, mechanical drawings, thermal management results, power handling results, EMC data etc. are available as a natural outcome of experimenting. What is missing if we need to COMMUNICATE the project, its status, the risks etc.?

I heard that! "Documentation" you whispered. This is what the process demands, and for some weird reason this is interpreted as Word documentation. So the team gets into this unproductive mode and writes Word documentation at a velocity of 2 pages/month. Gate passed, everyone happy?

Dead wrong. The first downside is that now the documentation is "done". We proceed with prototypes and create more new knowledge at the detailed level. This is the spearhead of new knowledge created in this project. Unfortunately, since time was wasted at the early stages, we will be in a hurry towards the end of the project. This means documentation at this crucial stage, at the end, gets less focus. Not to mention the frustrated engineers who have already documented a lot without any feedback (the dinosaur process just swallows the papers). It is harder to convince them of the importance of documentation at this later stage. This documentation, which James Shore calls Enable Future Work -documentation, remains missing in many so-called traditional projects.


The second downside comes from the fact that we now actually have the Get Work Done -documentation, but not the high-quality Enable Future Work -documentation. This documentation is passed to the next project team as a starting point for their work. It contains obvious facts, which are worthless. More importantly, it has flaws and speculations which were found to be wrong during further experimentation. Unfortunately this new information typically never gets updated into these documents. So there is a lot of information that is not true and does not correspond one-to-one with the actual final design. The new team now spends more time figuring out whether to believe the document or the design. That is, if they are not experienced enough to always go with the actual design.


My Three Day Faceoffs -post describes the power of face-to-face communication over paper documents in practice.

Get Work Done -(Word) documentation is considered harmful.

Agile Methods Fit Firmware as Well

David J. Anderson informs us that HP has shown significant productivity gains from agile management techniques in firmware development. There is not much detail available, but here is the short story...

5/21/2006

Agile Boot Camp

The Pygmalion effect (or self-fulfilling prophecy) is a phenomenon according to which a group behaves as it is expected to. The phenomenon was demonstrated in a study of military training (Eden, D., "Pygmalion without Interpersonal Contrast Effects: Whole Group Gain from Raising Management Expectations", Journal of Applied Psychology, 75, 1990). In the study two teams were observed during their training period. Platoon leaders of one group were told that their men were above average, while leaders of the other group received no information. In reality the two groups of men were equal in potential. However, the group that was said to be above average outperformed the other group at the end of the training period.

This supports my belief and observations that, to some degree, a person has the potential that the leader is willing to see in her.

Based on these ideas I challenge the need for above-average (or level 3-5) developers in agile development. The agile framework also works very well as a training camp for junior developers. After all, if we said that all developers in an agile environment need to be above average, where would the 50.0001% of developers at and below the average go? To marketing? No can do, we need agile customers as well! In these settings the ScrumMaster's (or XP coach's, or...) role just needs to extend to facilitate this type of learning. Maybe she needs to be more technical than with senior developers, being able to guide with simple design, unit testing, refactoring etc., but it is essential to avoid micromanaging and to accept failure as a method of learning. It is also important to keep an eye open for the existing knowledge of junior developers. In most cases it's there, enabling a two-way learning experience.

5/16/2006

Scrum Gives More Time to Surf

"It [Scrum] gives me more time to surf...they bought it...it's great". I think I'm still missing some of the fundamental aspects of the Scrum framework. Check if you got it right!

The Firmware "Hassle" with TDD - #3

This post concludes the theoretical justification of firmware TDD.

Why would it be impossible, or worthless?
"In embedded system we may not have anything to signal the test results with."

Why would it still be worth it, and what kind of hassle is there?
We have to, I seem to repeat myself, distinguish two things in the discussion of unit tests in embedded (firmware) system projects:
  1. host run
  2. target run

1. I have been mostly writing about running the tests on the host (PC). We are using the Cygwin environment. We compile our unit tests with gcc, and tests are written on top of the embUnit framework. Reporting is done both on screen and as XML.
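To make the host-run idea concrete, here is a minimal sketch of what such a test looks like. This is not embUnit's actual API (its TEST_ASSERT macros and runners are richer); the `median3` function is a hypothetical module under test, and the `CHECK_EQUAL` macro is a stand-in to show the shape of host-side checking:

```c
#include <stdio.h>

/* Hypothetical firmware module under test: a 3-sample median filter. */
static unsigned char median3(unsigned char a, unsigned char b, unsigned char c)
{
    if (a > b) { unsigned char t = a; a = b; b = t; }
    if (b > c) { unsigned char t = b; b = c; c = t; }
    if (a > b) { unsigned char t = a; a = b; b = t; }
    return b;
}

static int failures = 0;

/* Minimal stand-in for a unit test framework assertion macro. */
#define CHECK_EQUAL(expected, actual) \
    do { \
        if ((expected) != (actual)) { \
            printf("FAIL %s:%d: expected %d, got %d\n", \
                   __FILE__, __LINE__, (int)(expected), (int)(actual)); \
            failures++; \
        } \
    } while (0)

void test_median3(void)
{
    CHECK_EQUAL(2, median3(1, 2, 3));
    CHECK_EQUAL(2, median3(3, 2, 1));
    CHECK_EQUAL(5, median3(5, 9, 1));
}
```

Because nothing here touches the hardware, the test compiles and runs with plain gcc under Cygwin, which is the whole point of the host-run approach.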

This article at Object Mentor by James Grenning defines an embedded TDD cycle. It demonstrates the fact that embedded programming is the extreme end of programming. We not only have to worry about all the normal problems in software engineering, but we need to fight these challenges with limited processor resources, poor tools, hard real-time deadlines, stakeholders from other engineering disciplines (usually without formal computer science education) etc. This means that having a unit test suite alone is not enough for testing an embedded system. The other arrows in the embedded TDD cycle illustrate this.

As can be seen in Grenning's article, unit testing on the host is an important corner piece in the puzzle. Doing TDD has so many other benefits that come as side products that it is worth the hassle. Unit testing the modules will help you write clear, consistent interfaces and enforce good programming practices.

2. If the modules are kept small and simple, it should not be too much work to port the tests to run in the target environment as well. Nancy has done some work on running the tests in the target. In today's microcontrollers it is easy to find some kind of serial port to report the results back to the PC for further formatting. Even if you do not have a HW peripheral on board, it's not a big deal to write a serial communication driver with bit-banging. This, however, always needs some arrangements (getting the code to the target, resetting the target etc.). In my (limited) experience these little arrangements may just be too much of a barrier, causing the tests not to be run by developers - at least not regularly enough for TDD. Fully automating this at a high level is a bit difficult. It also needs to be remembered that this would still only partly resolve the problem, since the unit test code seldom fits the target memory together with the production code.
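The framing half of such a bit-banged transmit can be sketched on the host. The port writes and the baud-rate delay loop are target-specific and omitted here; instead the frame (one start bit, 8 data bits LSB first, one stop bit - a common 8N1 layout) is built into an array so the logic itself can be unit tested on the PC. The function name is illustrative:

```c
/* Sketch of the framing logic behind a bit-banged serial transmit.
   On a real target each bit would be written to a port pin with a
   delay matching the baud rate; here the frame is captured into an
   array so it can be checked on the host. */
#define FRAME_BITS 10

void uart_frame(unsigned char byte, unsigned char bits[FRAME_BITS])
{
    int i;
    bits[0] = 0;                        /* start bit */
    for (i = 0; i < 8; i++)
        bits[1 + i] = (byte >> i) & 1;  /* data bits, LSB first */
    bits[9] = 1;                        /* stop bit */
}
```

Splitting the driver this way keeps the timing-dependent pin wiggling as a thin layer on top of host-testable code.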

That said, when I get the unit testing on the host really going, I will focus on running the same tests on the target - and automated. There just is no rush; you cannot get to nirvana overnight.

Nancy Van Schooenderwoert will give a presentation at ESC Boston 2006 about their CATS C unit test framework. That should be interesting.

5/13/2006

Listen to Evolving Excellence

Kevin Meyer and Bill Waddell have added a "Listen Now" link to their Evolving Excellence blog. You can listen to every post as a podcast feed. Give it a try!

5/12/2006

Objects and Firmware - Polymorphism

I mentioned earlier that the constraints of object-based programming can be overcome in C in order to do full OO programming. This time I will review a simple polymorphism technique in C. This is of course an extremely simplified example, but polymorphism in C can be done with function pointers. The example below defines a class called worker which has only one method (function) called doTheJob. We create two instances of worker, but define different implementations for doTheJob. The implementation is selected with the constructor and stored as the function pointer theJob.

***** worker.h *****

typedef struct _worker
{
    void (*theJob)( void );
} worker;

extern void WORKER_construct( BYTE meIndex,
                              void (*function)( void ) );
extern void WORKER_doTheJob( BYTE meIndex );
extern void setPORTA( void );
extern void clearPORTA( void );

***** worker.c *****

#include "uc_defs.h"
#include "worker.h"

worker theWorker[2];
#define me theWorker[meIndex]

void WORKER_construct( BYTE meIndex,
                       void (*function)( void ) )
{
    me.theJob = function;
}

void WORKER_doTheJob( BYTE meIndex )
{
    me.theJob();
}

void setPORTA( void )
{
    PORTA = 0xFF;
}

void clearPORTA( void )
{
    PORTA = 0x00;
}

***** main.c *****

#include "uc_defs.h"
#include "worker.h"

void main( void )
{
    WORKER_construct( 0, &clearPORTA );
    WORKER_construct( 1, &setPORTA );

    while( 1 )
    {
        if( PORTB & 0x01 )
        {
            WORKER_doTheJob( 0 );  /* executes clearPORTA */
        }
        else
        {
            WORKER_doTheJob( 1 );  /* executes setPORTA */
        }
    }
}

I use the WORKER_doTheJob -function to wrap the function pointer in order to keep the interface consistent. I know this adds unnecessary overhead, but hey, we are all artists here, and in my opinion it looks clearer.

I have started several series of posts, but have not been able to finish any of them. Good news: I will soon use this post as a starting point to finish the TDD hassle -series... :-)

If you got interested in Object-Oriented C Programming (OOCP), here are my tips for a starting point (online):
Evanthelix OO C page
Axel-Tobias Schreiner has done a great job in this book

5/09/2006

Link: TDD Benefits the Creative Flow

Eric Hodel writes nicely about the benefits of Test-Driven Development for creative flow. This is good reading if you are still judging whether TDD offers something for your project, be it an embedded project or not.

5/07/2006

Agile Development Has Crossed the Chasm

Scott W. Ambler (Agile Modeling and Ambysoft) used Moore's technology adoption curve to describe agile development adoption in a Dr. Dobb's Journal article. According to Ambler we have already clearly crossed the chasm, the divide beyond which the spread of a technology or methodology really takes off. He also reformulates the agile cost of change curve and gives tips for all of us who are on a crusade to promote agile methods in our own organizations, and for consultants who do this in someone else's organization.

This phenomenon of agile development becoming mainstream can also be seen in the forthcoming XP2006 in Oulu, Finland. Lots of presentations focus on issues around scaling agile development.

The Invisible Visibility

"Total Visibility" is an often-heard argument from agile advocates. Interestingly enough, when I recently interviewed two project managers in a large organization about their year-long agile experience, they mentioned lack of visibility as the main problem! When backlogs and burndown charts are explained, sure, they give you detailed information on a daily basis. When you look at your product backlog, yes, you could give a rough project end date estimate based on the release burndown using team velocity. Still, that seems to fall short for some purposes.

I have been on a well-deserved vacation for a week now. Yesterday, for some weird reason (being on vacation, and on a Saturday), I wanted to know what is going on in the project I'm involved with.



I first took out the latest version of the Excel sheet that serves as our backlog. I checked the Sprint burndown chart - fine, nothing to worry about. OK, what's been done? Let's see the Sprint backlog: OK, a couple of features finishing nicely. Is there new stuff to do? Nope, the Product backlog looks the same. I knew pretty much what had been done, as I had build reports from CruiseControl in my email, but I wanted a summary...


I connected to the CVS repository server, updated my sandbox, ran StatCVS and checked the commit history. Fine, I know that everything is going smoothly. I also know for a fact that in two weeks I'm going to see working prototypes with some test results in the next Sprint review. What I missed is a list of impediments to see what's going wrong, but we have not found this meaningful so far.


This is the visibility I'm talking about. It just cannot get any more transparent than that. I know exactly what is going on in my project. No emails, no phone calls or any other disturbance to the flow. Then again, I'm a representative of the technical developer/leader level.

I think the traditional role of project manager consists of three components: leadership, management and administration. The bigger the project, the team and the organization, the bigger these individual responsibilities get. In a large organization the number of parallel projects may be high, and backlog-type reporting gives just way too much detail - too much visibility. The development abstraction layers described by Joel Spolsky apply here as well. I blogged about them a while ago.

Schwaber explained an easy way of turning backlogs into Gantt charts in his book. Microsoft has already released a Scrum plugin for MS Project. For some reason this does not satisfy me; there is some small piece missing before it makes sense to me. You have to be very careful when building up administrative or executive reporting mechanisms for an agile project. If you don't watch out, you may end up doing all the traditional reporting (Gantt, Earned Value, WBS, PERT, CPM, you name it...) AND agile reporting like backlogs from Scrum and/or parking lot reports from Feature-Driven Development. This means that you have lost the concept of traveling light that is so fundamental to the agile philosophy.

5/05/2006

Objects and Firmware - Basics

Several years ago this article got me thinking about object-oriented programming in my work as a firmware developer. As the title "Object Based Development Using UML and C" says, it is talking about object-based, not object-oriented development. According to Wikipedia, object-based programming has one or more of three constraints compared to an object-oriented programming language:

  1. there is no implicit inheritance
  2. there is no polymorphism
  3. only a very reduced subset of the available values are objects (typically the GUI components)
In firmware projects written in C it is obvious that this definition typically holds. However, it is important to notice that all of these constraints can be overcome. It is quite a bit of work, and may even get complex, but it can be done. This post is about the basics; we will come to the other stuff later.

Object-oriented problem #1 in small (or tiny) scale firmware development is that dynamic memory allocation and small microcontrollers - hmm, well, they just don't mix and match! If you ever have a C compiler that will generate code for malloc()'s and free()'s and you use them as intended, you will end up running out of memory because of fragmentation. You have a couple of options here:

1. Don't care if you run out of memory, just manage the reset
2. Write your own simplified memory allocator, which is capable of allocating only, for example, three different object sizes.
3. Use static memory allocation, but program still in "object'ish fashion"
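For completeness, option 2 can be sketched as a fixed-block pool. This is a minimal illustration for a single block size (a fuller version would keep one such pool per supported size); the names and sizes are made up for the example. Fragmentation cannot occur because every block is identical:

```c
#include <stddef.h>

/* Sketch of option 2: a fixed-block allocator for one object size.
   Sizes here are arbitrary example values. */
#define BLOCK_SIZE  8
#define BLOCK_COUNT 4

static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];
static unsigned char used[BLOCK_COUNT];

void *pool_alloc(void)
{
    int i;
    for (i = 0; i < BLOCK_COUNT; i++) {
        if (!used[i]) {          /* first free block wins */
            used[i] = 1;
            return pool[i];
        }
    }
    return NULL;                 /* pool exhausted - caller must handle it */
}

void pool_free(void *p)
{
    int i;
    for (i = 0; i < BLOCK_COUNT; i++) {
        if (p == pool[i]) {
            used[i] = 0;
            return;
        }
    }
}
```

The linear scans are fine for a handful of blocks on a small micro; the price is that allocation requests larger than BLOCK_SIZE simply cannot be served.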

We have used option 3 for a couple of projects now. I'll explain briefly:


***** object.h *****

typedef struct _object
{
    BYTE theValue;
} object;

extern void OBJECT_construct( BYTE meIndex );
extern BYTE OBJECT_getValue( BYTE meIndex );
extern BYTE OBJECT_addValue( BYTE meIndex, BYTE v );

***** object.c *****

#include "uc_defs.h"
#include "object.h"

object myObjects[2];
#define me myObjects[meIndex]

void OBJECT_construct( BYTE meIndex )
{
    me.theValue = 0;
}

BYTE OBJECT_getValue( BYTE meIndex )
{
    return me.theValue;
}

BYTE OBJECT_addValue( BYTE meIndex, BYTE v )
{
    me.theValue += v;
    return me.theValue;
}

***** main.c *****

#include "uc_defs.h"
#include "object.h"

void main( void )
{
    OBJECT_construct( 0 );
    OBJECT_construct( 1 );

    while( 1 )
    {
        OBJECT_addValue( 0, 1 );
        OBJECT_addValue( 1, 2 );

        PORTA = OBJECT_getValue( 0 );
        PORTB = OBJECT_getValue( 1 );
    }
}


We use the word 'me' instead of 'this' because sometimes the code is compiled with a C++ compiler. With some more macros you can make the code more readable.
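As one illustration of that macro idea, a token-pasting macro can give the calls a method-like look. The METHOD macro below is hypothetical, not from our actual codebase, and requires C99 variadic macros; the object module is repeated in simplified form so the sketch is self-contained:

```c
typedef unsigned char BYTE;

/* Simplified repeat of the object module from the post. */
typedef struct _object
{
    BYTE theValue;
} object;

object myObjects[2];
#define me myObjects[meIndex]

void OBJECT_construct( BYTE meIndex )          { me.theValue = 0; }
BYTE OBJECT_getValue( BYTE meIndex )           { return me.theValue; }
BYTE OBJECT_addValue( BYTE meIndex, BYTE v )   { me.theValue += v; return me.theValue; }

/* Hypothetical sugar layer: METHOD(OBJECT, addValue, 0, 5)
   expands to OBJECT_addValue(0, 5). */
#define METHOD(Class, method, ...) Class##_##method(__VA_ARGS__)
```

Usage then reads closer to a method call, e.g. METHOD(OBJECT, construct, 0), at zero runtime cost since it is pure preprocessing - though older embedded compilers without C99 support cannot use it.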

I don't know if this is even object-based (probably lots of people would say no), but it works for us. It at least enforces encapsulation better than relying only on developers' discipline. There is overhead from passing the meIndex -parameter (but it is a BYTE instead of a pointer) and from the setters and getters, but nothing's free here - if you haven't noticed.

4/28/2006

My Name is Embedded Industry, and I'm an...

It is known that admitting you have a problem is the first step in the healing process. Ken Orr compared management that refuses to admit they have a process problem to an alcoholic strongly denying having any problems.

The Product Development and Management Association (PDMA) claims that new products currently have a success rate of only 59 percent at launch, up only one percent since 1990. Cancelled or failed (in the market) products consumed an estimated 46% of product development resources (Cooper, Winning at New Products, 2001).

Stephen Balacco, embedded analyst at VDC, was quoted in the December 2002 issue of SD Times: "Embedded [developers are] frustrated by inadequate or changing specifications during product development". The article mentions that two-thirds of the 400 respondents cited changes in specification as the number one cause of delays.

Do we need more evidence to justify a strong intervention by advocates of more flexible practices in order to better cope with change? The change that is evident today and should be considered an opportunity instead of a threat.

In embedded systems it is not change alone that causes big up-front design processes to fail so miserably. It is the characteristic of very late learning. "Final" designs from all the different engineering disciplines arrive at the end (the delusional end) of the project. We could compare that to software integration, magnified tenfold. Component tolerances conflict with the mechanics, microcontroller I/O characteristics were misunderstood, up-front designed protocols have performance shortcomings etc.

A new round is called for.

At this point wouldn't it be nice if there was a flexible process to do this with ease?

4/23/2006

The firmware "hassle" with TDD - #2

In the Embedded Systems Design Magazine article Bugopedia, Niall Murphy lays out a few types of common defects in embedded software projects. Many of them are such common pitfalls that seasoned embedded programmers avoid them without paying attention. The ones that go beyond this are the "Nasty Compiler" -type of defects. We have had our share of these just recently. Compilers from two vendors for two different targets have been responsible for mysterious behaviour of compiled C source - C source which works fine on the PC. The article works as an introduction to part 2 in our analysis of unit testing and TDD of firmware.

Why would it be impossible, or worthless?
"The compiler for the target may have a bug in it, and the code will not work even if it passes tests on the PC."

Why would it still be worth it, and what kind of hassle is there?
It is true that because of "Nasty Compilers" we cannot be sure that source code passing unit tests on the PC will work on the final embedded target. There are reports of running the unit tests on the actual HW target. This, however, only solves the problem partly. As all tests cannot fit the target memory together with all the production code, the test suite is a special build and will be linked differently. Especially if the final production code needs to be compiled with optimization turned on, severe problems may arise from the slightest change at the linking phase. This makes trusting the suite difficult. These issues should be considered when figuring out what to test and at what level.

But by all means this is not a showstopper for unit testing as a whole, nor for TDD, because there are other benefits:

TDD enforces the "simplest thing that could possibly work" -thinking in development. In firmware development (as, I would think, in any development) developers tend to take bigger bites than they can chew, ending up spending a lot of time debugging. With TDD we write a little - test a little, and if the damn thing breaks we only need to go back a couple of minutes' worth of effort. It's much less painful than realizing that you have spent an entire day developing something that you just cannot fix!

By writing unit tests we enforce simple interfaces to modules (C files). If a file is used in different configurations, then a complete test suite protects us from making changes that would break other configurations.

Again I want to remind you that unit testing and TDD are very well worth consideration in embedded software and firmware development as well as in mainstream programming, but you have to make your decision based on your own case.

4/21/2006

CruiseControl is Great for Firmware, Too

I have been playing with the idea of setting up a nightly build server for the embedded project. I started hacking together a simple Python script to execute an automated GNU Make build, analyse the output, and generate notification emails to stakeholders. After a while I stopped and thought "this sounds all too familiar". I decided to take another look at CruiseControl, which would also get us straight to continuous integration instead of just nightly builds.

With great help from Lasse Koskela's article "Driving on CruiseControl" I was able to create a simple Java project compiled with Ant and automated with CruiseControl. This was the out-of-the-box solution. I was then able to proceed to my original problem, automating an embedded project integration. Lasse's article refers to an older version of CC, which did not have the "exec" builder plugin, so I downloaded the latest version. It BTW has a ready-made Java example that you can launch right after extracting the thing. OK, so I changed the config.xml to use CVS instead of Subversion and then used the "exec" builder plugin to launch the GNU Make script for our C source. I opened my browser, directed it to the local server, and ta'daa - our first project was driving on CruiseControl! I will make the config.xml available after I clean it up.




We have written a few unit tests in the project (some even in TDD fashion), but they use the embUnit framework, which is not able to spit out XML, so unit test results are not published by our CC yet. At the moment I'm considering two options:

1. updating the existing tests to use CUnit
2. writing a script to translate results from embUnit to simple XML output

4/19/2006

We are looking for an experienced chair buyer (three years minimum)

I wrote about managers becoming coaches in an agile environment. Vasco Duarte also wrote about agile management. He pointed out this article by Joel Spolsky about "the development abstraction layer". It is entertaining reading, as it usually is at Joel on Software. The posting does a great job of describing the new era of the manager's role. However, I would personally like to see a bit more from management than getting me a new chair when I'm in need. I would prefer top management to create the grand vision and middle management to organize programs to put this vision into practice. I do not mean traditional middle management, which would drive innovation by command, but middle management that builds the abstraction layer enabling us to harness the innovation power of the developer team. This is a difficult task, and it is what makes finding a good manager still a tough job. Chair buyers are much easier to recruit.