I got a few questions about what exactly the hassle is that I was writing about. Fine, I'll try to paint the picture for you big boys with your fancy workstations and Smalltalk, Java, Ruby, etc.
My current hassle is caused by mains-controlling devices; you might call these "light switches". These products, however, do a bit more: they can control the load, actually different types of loads, they can be adjusted via parameters, have sophisticated self-diagnosis, be networked via RF technology, and the list goes on. A fully featured "light switch" is a 3-microcontroller multi-processor system connected internally with an I2C bus and RF-networked with its buddies. A typical choice of controller is the Microchip PIC16F876A, which has 8kB of flash memory (actually 14-bit words in the PIC architecture) and 368 bytes of RAM. The monster runs at 4MHz. All programmed in C. In parallel, new mechanics, electronics, thermal management, etc. evolve alongside the software. This is called co-design and is a well-known characteristic of embedded development. At the end, official certification procedures need to be passed.
Okay, you probably are starting to get the picture. Now we bring in agile development. I like to divide this into (at least) two forms of being agile: 1) agile project management and 2) agile software development. Here we talk about the latter, and actually only about unit testing. We might get to automation, TDD, etc. a bit later...
Why would it be impossible, or worthless?
" We can't unit test because everything is depending on HW"
Why would it still be worth it, and what kind of hassle is there?
At first thought the above seems true, but let's think a bit further. I'm talking about workstation-based unit testing of code that will later run in the embedded system. We gain from unit testing at this higher level because it enforces a simple HW driver interface design. I have seen hardware drivers (and written many more) that are initialized in one place, started in another, and every now and then adjusted from wherever. If we develop our architecture this way, it is obviously a hassle to unit test the code with any meaningful coverage.
Traditionally, when the hardware was not yet available, we started writing detailed specifications for the application/firmware/hardware interfaces, everything based on a strong belief that we would get it right. When the hardware arrived, we of course found out that we had gotten it all wrong. This is similar to the big-bang integration phase in early software processes. This is how the "dispersed HW drivers" are born.
With the unit test ideology we ensure that we have as little coupling with hardware-related stuff as possible. When we write tests for the higher levels, we also learn the requirements for our SW/HW interface. We are actually doing the same thing as traditionally, writing detailed specifications - EXCEPT - this time it is fun, and the "working prototype is the primary measure of progress" (adapted from agile manifesto principle 7).