5/27/2006

Get Work Done Documentation Considered Harmful

James Shore writes about two kinds of documentation and agile development.

All of us who have tried to convince someone to value agile methods over plan-driven processes have encountered the phrase "but we need documentation". I really saw the light while reading Mr. Shore's article. The idea had been clear to me, but dividing documentation into these two categories gave me the simple structure I had been looking for:

1. Get Work Done
2. Enable Future Work


The following situation is not too uncommon. An embedded system project is getting closer to a process gate. All of a sudden it is time to write documentation because documents are "demanded by the process". This is ridiculous! A process does not demand or want anything, and it is even less likely to NEED anything. The dominant part of a project system is the people, not the process. What do they NEED in order to progress effectively? That's correct from the back row - communication. Everyone agrees with that, but why is communication so strongly associated with paper documents in engineering and project management? Are we still paying the price of the stereotype created decades ago, that engineers cannot communicate? If so, what makes project managers think that engineers could write?


So back to the project. The project has built working prototypes with incremental development and short time-boxing in cross-disciplinary engineering teams. Electronic schematics, part lists, price estimates, firmware, mechanical drawings, thermal management results, power handling results, EMC data, etc. are available as a natural outcome of experimenting. What is missing if we need to COMMUNICATE the project, its status, the risks, etc.?

I heard that! "Documentation" you whispered. This is what the process demands, and for some weird reason this is interpreted as Word documentation. So the team gets into this unproductive mode and writes Word documentation at a velocity of 2 pages/month. Gate passed, everyone happy?

Dead wrong. The first downside is that now the documentation is "done". We proceed with prototypes and create more new knowledge at a detailed level. This is the spearhead of new knowledge created in this project. Unfortunately, since time was wasted at the early stages, we will be in a hurry towards the end of the project. This means documentation at this crucial stage, at the end, gets less focus. Not to mention the frustrated engineers who have already documented a lot without any feedback (the dinosaur process just swallows the papers). It is harder to convince them of the importance of documentation at this later stage. This documentation, which James Shore calls Enable Future Work documentation, remains missing in many so-called traditional projects.


The second downside comes from the fact that we now actually have the Get Work Done documentation, but not the high-quality Enable Future Work documentation. This documentation is passed to the next project team as a starting point for their work. It contains obvious facts, which are worthless. More importantly, it has flaws and speculations that were proven wrong during further experimentation. Unfortunately, this new information typically never gets updated into these documents. So there is a lot of information that is not true and does not correspond one-to-one with the actual final design. The new team now spends more time figuring out whether to believe the document or the design. That is, if they are not experienced enough to always go with the actual design.


My Three Day Faceoffs post describes the power of face-to-face communication over paper documents in practice.

Get Work Done (Word) documentation is considered harmful.

Agile methods fit firmware as well

David J. Anderson informs us that HP has shown significant productivity gains from agile management techniques in firmware development. There is not much detail available, but here is the short story...

5/21/2006

Agile Boot Camp

The Pygmalion effect (or self-fulfilling prophecy) is a phenomenon according to which a group behaves the way it is expected to. The phenomenon was demonstrated in a study of military training (Eden, D., "Pygmalion without Interpersonal Contrast Effects: Whole Group Gain from Raising Management Expectations," Journal of Applied Psychology, 75, 1990). In the study two teams were observed during their training period. The platoon leaders of one group were told that their men were above average, while the leaders of the other group received no information. In reality the two groups of men were equal in potential. However, the group that was said to be above average outperformed the other group at the end of the training period.

This supports my belief and observations that, to some degree, a person has the potential that the leader is willing to see in her.

Based on these ideas I challenge the need for above-average (or level 3-5) developers in agile development. The agile framework also works very well as a training camp for junior developers. After all, if we said that all developers in an agile environment need to be above average, where would all the 50.0001% of developers at and below the average go? To marketing? No can do, we need agile customers as well! In these settings the ScrumMaster's (or XP coach's, or...) role just needs to extend to facilitate this type of learning. Maybe she needs to be more technical than with senior developers, being able to guide with simple design, unit testing, refactoring, etc., but it is essential to avoid micromanaging and to accept failure as a method of learning. It is also important to keep an eye out for the existing knowledge of junior developers. In most cases it's there, enabling a two-way learning experience.

5/16/2006

Scrum Gives More Time to Surf

"It [Scrum] gives me more time to surf...they bought it...it's great". I think I'm still missing some of the fundamental aspects of Scrum framework. Check if you got it right!

The Firmware "Hassle" with TDD - #3

This post wraps up the theoretical justification of firmware TDD.

Why would it be impossible, or worthless?
"In an embedded system we may not have anything to signal the test results with."

Why would it still be worth it, and what kind of hassle is there?
We have to - I seem to repeat myself - distinguish between two things in any discussion of unit tests in embedded (firmware) system projects:
  1. host run
  2. target run

1. I have been mostly writing about running the tests on the host (PC). We are using the Cygwin environment. We compile our unit tests with gcc, and the tests are written on top of the embUnit framework. Reporting is done both on screen and as XML.
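To give an idea of what this looks like, here is a minimal host-run test sketch following the embUnit sample code. The module under test (adder.h and ADDER_add) is hypothetical; only the fixture and runner calls come from the framework.

***** adder_test.c (sketch) *****

#include "embUnit/embUnit.h"
#include "adder.h"   /* hypothetical module under test */

static void setUp( void )
{
    /* runs before each test */
}

static void tearDown( void )
{
    /* runs after each test */
}

static void testAddTwoBytes( void )
{
    TEST_ASSERT_EQUAL_INT( 5, ADDER_add( 2, 3 ) );
}

TestRef ADDER_tests( void )
{
    EMB_UNIT_TESTFIXTURES( fixtures ) {
        new_TestFixture( "testAddTwoBytes", testAddTwoBytes ),
    };
    EMB_UNIT_TESTCALLER( AdderTest, "AdderTest", setUp, tearDown, fixtures );
    return (TestRef)&AdderTest;
}

int main( void )
{
    TestRunner_start();
    TestRunner_runTest( ADDER_tests() );
    TestRunner_end();
    return 0;
}

Compiled with gcc under Cygwin this reports on screen; the XML report comes from a separate outputter in the framework's text-UI part.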

This article at Object Mentor by James Grenning defines an embedded TDD cycle. It demonstrates the fact that embedded programming is the extreme end of programming. We not only have to worry about all the normal problems in software engineering, but we need to fight these challenges with limited processor resources, poor tools, hard real-time deadlines, stakeholders from other engineering disciplines usually without formal computer science education, etc. This means that having a unit test suite alone is not enough for testing the embedded system. The other arrows in the embedded TDD cycle illustrate this.

As can be seen in Grenning's article, unit testing on the host is an important corner piece of the puzzle. Doing TDD has so many other benefits that come as side products that it is worth the hassle. Unit testing the modules will help you to write clear, consistent interfaces and enforce good programming practices.

2. If you keep the modules small and simple, it should not be too much work to port the tests to run in the target environment as well. Nancy has done some work on running the tests in the target. In today's microcontrollers it is easy to find some kind of serial port to report the results back to the PC for further formatting. Even if you do not have a HW peripheral on board, it is not a big deal to write a serial communication driver with bit-banging (a sketch follows below). This, however, always needs some arrangements (getting the code to the target, resetting the target, etc.). In my (limited) experience these little arrangements may be just enough of a barrier that developers do not run the tests - at least not regularly enough for TDD. Fully automating this at a high level is a bit difficult. It also needs to be remembered that this would only partly resolve the problem, since the unit test code seldom fits into the target memory together with the production code.
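As a sketch of how little such a driver needs, here is a minimal bit-banged transmit routine for 8N1 framing, LSB first. The TX pin location and the delay constant are assumptions; a real driver would calibrate the delay to the baud rate and guard each character against interrupts.

***** bitbang.c (sketch) *****

#include "uc_defs.h"   /* assumption: defines PORTA */

#define TX_HIGH()  ( PORTA |= 0x01 )    /* assumption: TX on PORTA bit 0 */
#define TX_LOW()   ( PORTA &= ~0x01 )

static void delayBitTime( void )
{
    /* crude busy-wait; tune the count to one bit period,
       e.g. ~104 us for 9600 baud */
    volatile unsigned int i;
    for( i = 0; i < 100; i++ )
        ;
}

void BITBANG_putchar( unsigned char c )
{
    unsigned char bit;

    TX_LOW();                        /* start bit */
    delayBitTime();

    for( bit = 0; bit < 8; bit++ )   /* 8 data bits, LSB first */
    {
        if( c & 0x01 )
            TX_HIGH();
        else
            TX_LOW();
        delayBitTime();
        c >>= 1;
    }

    TX_HIGH();                       /* stop bit */
    delayBitTime();
}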

This said, when I get the unit testing on the host really going, I will focus on running the same tests on the target - and automated. There just is no rush; you cannot get to nirvana overnight.

Nancy Van Schooenderwoert will give a presentation at ESC Boston 2006 about their CATS C unit test framework. That should be interesting.

5/13/2006

Listen to Evolving Excellence

Kevin Meyer and Bill Waddell have added a "Listen Now" link to their Evolving Excellence blog. You can listen to every post as a podcast feed. Give it a try!

5/12/2006

Objects and Firmware - Polymorphism

I mentioned earlier that the constraints of object-based programming can be overcome in C in order to do full OO programming. This time I will review a simple polymorphism technique in C. This is of course an extremely simplified example, but polymorphism in C can be done with function pointers. The example below defines a class called worker which has only one method (function) called doTheJob. We create two instances of worker, but define a different implementation of doTheJob for each. The implementation is selected with the constructor and stored in the function pointer theJob.

***** worker.h *****

typedef struct _worker
{
    void (*theJob)( void );   /* the "method", bound at construction time */
} worker;

extern void WORKER_construct( BYTE meIndex,
                              void (*function)( void ) );
extern void WORKER_doTheJob( BYTE meIndex );
extern void setPORTA( void );
extern void clearPORTA( void );

***** worker.c *****

#include "uc_defs.h"
#include "worker.h"

worker theWorker[2];
#define me theWorker[meIndex]

void WORKER_construct( BYTE meIndex,
void (*function)(void) )
{
me.theJob = function;
}

void WORKER_doTheJob( BYTE meIndex )
{
me.theJob();
}

void setPORTA( void )
{
PORTA = 0xFF;
}

void clearPORTA( void )
{
PORTA = 0x00;
}

***** main.c *****

#include "uc_defs.h"
#include "worker.h"

void main( void )
{
WORKER_construct(0, &clearPORTA );
WORKER_construct(1, &setPORTA );

while( 1 )
{
if( PORTB & 0x01 )
{
WORKER_doTheJob(0); // executes clearPORTA
}
else
{
WORKER_doTheJob(1); // executes setPORTA
}
}
}

I use the WORKER_doTheJob function to wrap the function pointer in order to keep the interface consistent. I know this adds unnecessary overhead, but hey, we are all artists here, and in my opinion it looks clearer.

I have started several series of posts, but have not been able to finish any of them. Good news: I will soon use this post as a starting point to finish the TDD hassle series... :-)

If you got interested in Object-Oriented C Programming (OOCP), here are my tips for a starting point (online):
EventHelix OO C page
Axel-Tobias Schreiner has done a great job in this book

5/09/2006

Link: TDD Benefits the Creative Flow

Eric Hodel writes nicely about the benefits of Test-Driven Development for creative flow. This is good reading if you are still deciding whether TDD offers something for your project, be it an embedded project or not.

5/07/2006

Agile Development Has Crossed the Chasm

Scott W. Ambler (Agile Modeling and Ambysoft) used Moore's technology adoption curve to describe agile development adoption in a Dr. Dobb's Journal article. According to Ambler we have already clearly crossed the chasm, the divide beyond which the spread of a technology or methodology really takes off. He also reformulates the agile cost of change curve and gives tips for all of us who are on a crusade to promote agile methods in our own organizations, and for consultants who do this in someone else's organization.

This phenomenon of agile development becoming mainstream can also be seen in the forthcoming XP2006 in Oulu, Finland. Lots of presentations focus on issues around scaling agile development.

The Invisible Visibility

"Total Visibility" is an often heard argument by the agile advocants. Interestingly enough when I recently interviewed two large organization project managers about how they have experienced an year long agile experience, they mentioned lack of visibility as the main problem! When backlogs and burndown charts are explained, sure, they give you the detailed information on daily basis. When you look at your product backlog, yes, you could give an rough project end date estimate based on the Release burndown using team velocity. Still that seems to fall short for some purposes.

I have been on a well-deserved vacation for a week now. Yesterday, for some weird reason (being on vacation, and on a Saturday), I wanted to know what was going on in the project I'm involved with.



I first took out the latest version of the Excel sheet that serves as our backlog. I checked the Sprint burndown chart - fine, nothing to worry about. OK, what's been done? Let's see the Sprint backlog - OK, a couple of features finishing nicely. Is there new stuff to do? Nope, the Product backlog looks the same. I knew pretty much what had been done, as I had build reports from CruiseControl in my email, but I wanted a summary...


I connected to the CVS repository server, updated my sandbox, ran StatCVS and checked the commit history. Fine, I know that everything is going smoothly. I also know for a fact that in two weeks I'm going to see working prototypes with some test results in the next Sprint review. What I missed was a list of impediments to see what is going wrong, but we have not found this meaningful so far.


This is the visibility I'm talking about. It just cannot get any more transparent than that. I know exactly what is going on in my project. No emails, no phone calls, nor any other disturbance to the flow. Then again, I represent the technical developer/leader level.

I think the traditional role of project manager consists of three components: leadership, management and administration. The bigger the project, the team and the organization, the bigger these individual responsibilities get. In a large organization the number of parallel projects may be high, and backlog-type reporting gives just way too much detail - too much visibility. The development abstraction layers described by Joel Spolsky also apply here. I blogged about them a while ago.

Schwaber explained an easy way of turning backlogs into Gantt charts in his book. Microsoft has already released a Scrum plugin for MS Project. For some reason this does not satisfy me; there is some small piece missing before it makes sense to me. You have to be very careful when building up administrative or executive reporting mechanisms for an agile project. If you are not careful you may end up doing all the traditional reporting (Gantt, Earned Value, WBS, PERT, CPM, you name it...) AND agile reporting like backlogs from Scrum and/or parking lot reports from Feature-Driven Development. This means that you have lost the concept of traveling light that is so fundamental to the agile philosophy.

5/05/2006

Objects and Firmware - Basics

Several years ago this article got me thinking about object-oriented programming in my work as a firmware developer. As the title "Object Based Development Using UML and C" says, it is talking about object-based, not object-oriented development. According to Wikipedia, object-based programming has one or more of three constraints compared to an object-oriented programming language:

  1. there is no implicit inheritance
  2. there is no polymorphism
  3. only a very reduced subset of the available values are objects (typically the GUI components)
In firmware projects written in C it is obvious that this definition typically holds. However, it is important to notice that all of these constraints can be worked around. It is quite a bit of work, and maybe even complex, but it can be done. This post is about the basics; we will come to the other constraints later.

Object-oriented problem #1 in small (or tiny) scale firmware development is that dynamic memory allocation and small microcontrollers - hmm, well, they just don't mix and match! If you have a C compiler that will generate code for malloc()'s and free()'s and you use them as intended, you will end up running out of memory because of fragmentation. You have a couple of options here:

1. Don't care if you run out of memory, just manage the reset.
2. Write your own simplified memory allocator, capable of allocating only, for example, three different object sizes (see the sketch after this list).
3. Use static memory allocation, but still program in an "object'ish" fashion.
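For option 2, a minimal fixed-block pool allocator sketch could look like the one below. The names and sizes are hypothetical, and a real version would have one pool per supported object size; the point is that fixed-size blocks cannot fragment, they can only run out.

***** pool.c (sketch) *****

#define POOL_BLOCK_SIZE  16   /* hypothetical block size in bytes */
#define POOL_NUM_BLOCKS  8    /* hypothetical pool depth */

static unsigned char pool[POOL_NUM_BLOCKS][POOL_BLOCK_SIZE];
static unsigned char poolUsed[POOL_NUM_BLOCKS];

void *POOL_alloc( void )
{
    unsigned char i;

    for( i = 0; i < POOL_NUM_BLOCKS; i++ )
    {
        if( !poolUsed[i] )
        {
            poolUsed[i] = 1;
            return pool[i];
        }
    }
    return 0;   /* out of blocks - a hard limit, but no fragmentation */
}

void POOL_free( void *block )
{
    unsigned char i;

    for( i = 0; i < POOL_NUM_BLOCKS; i++ )
    {
        if( pool[i] == (unsigned char *)block )
        {
            poolUsed[i] = 0;
            return;
        }
    }
}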

We have used option 3 in a couple of projects now. I'll explain briefly:


***** object.h *****

typedef struct _object
{
    BYTE theValue;
} object;

extern void OBJECT_construct( BYTE meIndex );
extern BYTE OBJECT_getValue( BYTE meIndex );
extern BYTE OBJECT_addValue( BYTE meIndex, BYTE v );

***** object.c *****

#include "object.h"

object myObjects[2];
#define me myObjects[meIndex]

void OBJECT_construct( BYTE meIndex )
{
me.theValue = 0;
}

BYTE OBJECT_getValue( BYTE meIndex )
{
return me.theValue;
}

BYTE OBJECT_addValue( BYTE meIndex, BYTE v )
{
me.theValue += v;
}

***** main.c *****

#include "uc_defs.h"
#include "object.h"

void main( void )
{
while( 1 )
{
OBJECT_addValue( 0, 1 );
OBJECT_addValue( 1, 2 );

PORTA = OBJECT_getValue( 0 );
PORTB = OBJECT_getValue( 1 );
}
}


We use the word 'me' instead of 'this' because the code is sometimes compiled with a C++ compiler, where 'this' is a reserved word. With some more macros you can make the code more readable.
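As a sketch of what such macros could look like (these names are my own invention, not from our codebase), token pasting can make call sites read more like method invocations:

***** call_macros.h (sketch) *****

/* cls##_##method pastes e.g. OBJECT and getValue into OBJECT_getValue */
#define CALL( cls, method, obj )        cls##_##method( obj )
#define CALL1( cls, method, obj, arg )  cls##_##method( obj, arg )

/* usage:
   CALL1( OBJECT, addValue, 0, 1 );      expands to OBJECT_addValue( 0, 1 )
   PORTA = CALL( OBJECT, getValue, 0 );  expands to OBJECT_getValue( 0 )  */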

I don't know if this is even object based (propably lots of people say no), but it works for us. It at least enforces capsulation better than relying only on developers discipline. There is an overhead from passing the meIndex -parameter (but it is a BYTE instead of a pointer) and the setters and the getters, but nothings free here - if you haven't noticed.