3/05/2015

Agile Hardware - Support from a Surprising Source


“Agile development may be suitable for software development, but for hardware development we have to use something more formalized and waterfall just works for this”.

Some time ago I stumbled across the above comment. I couldn't disagree more.
 
I have an electrical engineering background, and for two decades I have been involved in embedded systems development. I have never seen an electronics specification that was complete and frozen before design began. To me that is a completely irrational idea. Further, I have yet to witness a project during which change and discovery did not affect the PCB and mechanical co-design. Fast-paced iterative process models fit this challenge much better than up-front, plan-driven ones. I have been lucky to work for more than a decade with people who think the same way (at least on this topic).

The software development domain faced the same challenge and answered with Agile software development. Based on my own observation, product development is a very similar ball game regardless of the engineering discipline. Yet in the hardware development domain it is still not rare to hear comments like the one above, and for some reason the waterfall mindset prevails. This leads us to the main point of this post: flexible hardware development is actually supported by a very surprising source, namely the very authors who are cited as "demanding the waterfall process model"!

To begin with, of course we, members of the agile community, know that the original "waterfall paper" by Winston Royce actually argued against using the waterfall approach (link).

Second, the N&N paper that inspired the Scrum framework for software development did not talk about software development at all. The case projects developed copiers, cameras and cars. And this was decades ago, when these products were not that software-intensive either (link).

Further, PMI's PMBOK is not that exclusive either. It presents the waterfall approach, but only as one alternative process model. As equally viable alternatives, the guide presents models with overlapping phases and an iterative approach.

Finally, the Stage-Gate process model introduced by Robert Cooper has gone through several generations since its first introduction in 1986 and has continuously moved towards recommending fast-paced iterative development. In fact, the very first edition already recommended an iterative, collaborative approach:

“…parallel marketing and operations activities are also undertaken. For example, market-analysis and customer-feedback work continue concurrently with the technical development, with customer opinion sought on the product as it takes shape during development. These activities are back-and-forth or iterative, with each development result – for example, rapid prototype, working model, or first prototype – taken to the customer for assessment and feedback”.

After this brief review, I would summarize the situation as being exactly like the misunderstanding of Winston Royce's paper, only on a larger scale. People read the material to find what they want to find; beauty is in the eye of the beholder. Or they skip reading altogether and just go with their hard-wired mental model. As an example, several years ago I gave a presentation with quotes from the process manual of a very large enterprise. The quotes were 100% supportive of agile development. I presented this to a room full of project managers and PMO directors and asked if they knew where the quotes were from. No one was able to answer. I then asked who had read the complete process manual (their own, by the way). No one had. Despite this, they were very consistent in arguing that this very manual "demanded waterfall".

This would be funny, if only it wasn’t true.

Incremental and iterative models are recommended for hardware development in the literature. For some reason, the adoption rate seems much higher in the software development domain. I think it is worth examining what actually made agile software development methods so popular in the industry, and whether there is something to be learned for the systems development domain. Yes, software development has certain characteristics that increase flexibility, but my belief is that this is not the whole picture. Further, in systems development the product development challenges are shared between the engineering disciplines, and it would be beneficial to have a shared approach to development.

Luckily, there have been signs of more and more people getting interested in transferring agile development knowledge from the software development domain to the systems development domain (covering, for example, electronics and mechanical engineering); see for example (link).

Knowledge about iterative hardware development exists, but learning together across the domains can't hurt anyone. Agile Alliance has just recently initiated a program for Agile Engineering aiming at exactly this (link).

12/05/2013

Scaling the Product Owner Role

At XP Days Benelux 2013 we talked about why and how the Product Owner role is scaled in organizations. You can find my slides here (pdf).
 
[EDIT 1.2.2014]
InfoQ Interview on the topic can be found here.
 
Abstract
 
Come to this workshop to learn how to scale the Product Owner role in order to harness the true potential of self-organizing Agile teams. You will learn by doing, but you will also hear a real-life story.
 
Using Agile development methods, development teams improve their capability to deliver through transparency and predictability. However, this does not bring the outcome companies are seeking if the developed features are not the right ones. The Product Owner role in Scrum simplifies the interface between business and development. This obviously brings immediate relief in many cases, but in practice the responsibility of the Product Owner is far too wide for a single person in all but the most simplistic scenarios. Naturally, the pragmatic Product Owner works with a number of domain experts and other stakeholders. Things in real life get even more complicated, as organizations have multiple product lines, yet development should be aligned with the company's strategy. To add to the challenge, embedded system development that includes in-house hardware platform development, for example, brings additional concerns, such as less flexibility than software development and dependency on external suppliers. To date, practical information on how to do this remains limited.
 
During this workshop you will create a framework for scaling the Product Owner role. Continuous customer collaboration at different levels, with different focus, will also be covered. Your work will be guided by a real-life story linking the exercise to reality.

10/11/2013

My Hardware and Co-Design talk at Scrum Gathering


A couple of weeks ago I attended my first Scrum Gathering in the city of light, Paris. On the first day I had the opportunity to present my Hardware and Co-Design talk. You can find the abstract below, and the slides are available here.

Summary

Agile methods are gaining a foothold in embedded software development. Embedded software is not developed in isolation; it has dependencies on hardware development. System development is facing demands from an ever-increasing amount of change and learning, and agile methods aim to help with these challenges. This talk summarizes the author's observations on hardware development teams using Scrum during the past 10 years. The teams have varied in terms of disciplines involved and collocation.

Come to this session to get the practitioner’s view on using Scrum beyond embedded software development.

Description 


Agile software development is getting more and more attention in embedded software development as well. Embedded system development, on the other hand, requires different engineering disciplines working together towards a shared goal. When embedded software development begins using agile methods, it triggers a need for change in the other disciplines too. Agile development emphasizes continuous learning through experimenting and collaboration instead of following a detailed up-front plan, and an agile embedded software team expects matching behavior in system co-design.

In addition to the above, product development in general, not only software development, is facing demands from an ever-increasing amount of change and learning. Change happens in several areas, such as technology, competition and the marketplace. This is what agile methods aim to tackle, which implies that new product development in general could benefit from the knowledge created on agile development.

This presentation summarizes the author's observations on hardware development team members and hardware teams using Scrum and agile methods during the past 10 years. Team configurations range from a collocated cross-disciplined team (electronics, printed circuit board, mechanics and embedded software) to globally distributed teams of different disciplines. Several real-life products will be used as examples.

10/08/2012

Invasion of Agile Hardware at Design East, Boston

Topics around Agile Development have slowly but steadily been making their way into the Embedded Systems Conference program. A couple of weeks ago, Design East in Boston had several sessions on Agile Development, most noticeably three talks on Agile Hardware, one of them by yours truly. You can find the slides and the associated technical paper via these links (slides, technical paper).


Agile methods are gaining a foothold in embedded software development. Embedded software is not developed in isolation; it often has strong dependencies on hardware development. System development is facing demands from an ever-increasing amount of change and learning, and agile methods aim to help with these challenges. This session summarizes the author's observations on hardware development team members and hardware teams working with Agile methods during the past 10 years. Team configurations range from a collocated cross-disciplined team (electronics, printed circuit board, mechanics and embedded software) to globally distributed teams of different disciplines. This session will give you the practitioner's view of the applicability of Agile methods beyond embedded software development.

12/28/2011

TDD and design


Earlier I wrote about the long distance I went with the MAC driver design (link). The current design is sketched below with examples of functions and responsibilities (green). The clouds are C files. The design has 95% unit test line coverage.

The tests have proven their power. I refactored the code using a local repository while traveling on vacation on a remote island (during off-days from diving), with obviously no access to the real target for testing. I made 53 commits. My commit frequency is very high, so many of the refactorings were just renaming and extracting helpers, but there were also more fundamental design changes. When I finally, and sadly, made it back to the lab, I was somewhat afraid that the code wouldn't run, in which case the fastest thing to do would be to throw away all the refactorings, and the next fastest to repeat them one by one in the real repository. I have to say I was surprised when the code worked right out of the cross-compiler, and all I needed to do was one massive merge from the local to the real repository. This is very rewarding. During the refactoring there were a handful of incidents where the tests caught a stupid mistake of mine. This is worthwhile even if you have access to the real target; the nice thing is that unit tests on the development environment tell you right away. There is no need to make tradeoffs or large, long changes without feedback because of lengthy burn times or a lengthy stepping path to debug newly written code.

[Design sketch: clouds are C files, with example functions and responsibilities in green]

But that wasn't the biggest learning. The biggest learning was that the resulting design was quite different from what one would expect. I base this claim on an investigation of several example MAC driver source codes available on the internet. The design has proven to be good in terms of testability (that was the driver) and adaptability (that was the proof). More about adaptability below. The thing that differentiates this style of design is the emphasis it puts on testing. A design is good if it is easy to test. If the tests are complicated to understand, or difficult to write altogether, then there is a good chance the design has flaws.

I think what I experienced is well explained by Michael Feathers in his talk "The Deep Synergy Between Testability and Good Design" (video). Take a look, and don't assume this applies only to OO languages; you'd be wrong. The driver we are talking about here is written in C.

The current design was put to the test when the hardware team decided to have a second option for the MAC. They wanted the final PCB so they could proceed with emission tests, and together we did not have enough information to make the decision either way. The candidate for the production PCB has routing for both options: one implementation of set-based design. But back to the software side of it...

The concepts in the driver design kept most of the files completely untouched. The SPI and DMA drivers were independent compilation units and needed no changes. This also means that no code was duplicated in the production code base.

The original design was done with just testability in mind. At that time there was no knowledge of the extra hardware the design would need to comply with.

In my opinion the code became adaptable and reusable by designing it for testability.

I don't think writing code this way (separating concerns, focusing on single responsibility, and not mixing abstraction levels) is slower. Is it different? Oh yeah. You have to really develop a new sense for good coding. One remark, which I made earlier: I wouldn't refactor the code produced while fiddling around, but would write a decent design based on the learning from exploring, a.k.a. a spike.

In the current MAC driver code one detailed design decision may make you raise your eyebrows: a single function, ClockByte(), sits in a separate file. This is because I wanted to assert on just the bytes being sent, not on processor register dummies. The other option would have been to inject a function pointer for this; I chose to use a link-time seam.
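To illustrate the idea, here is a minimal sketch of a link-time seam. Only ClockByte() comes from the actual driver; the file names, register addresses and the recording fake are hypothetical:

/* clock_byte.h -- the seam: one tiny interface, no hardware types */
#include <stdint.h>
uint8_t ClockByte(uint8_t byte_out);

/* clock_byte.c -- production implementation, linked into the target
   build; the only code touching the SPI registers (the register
   names and addresses below are made up for this sketch) */
#include "clock_byte.h"
#define SPI_DATA    (*(volatile uint8_t *)0x4000000C)
#define SPI_STATUS  (*(volatile uint8_t *)0x40000008)
#define SPI_DONE    0x01u

uint8_t ClockByte(uint8_t byte_out)
{
    SPI_DATA = byte_out;               /* start the transfer */
    while (!(SPI_STATUS & SPI_DONE))   /* busy-wait until complete */
        ;
    return SPI_DATA;                   /* byte clocked in at the same time */
}

/* fake_clock_byte.c -- linked into the unit test build instead, so
   tests can assert on exactly the bytes sent */
#include "clock_byte.h"
uint8_t sent_bytes[64];
int sent_count = 0;

uint8_t ClockByte(uint8_t byte_out)
{
    sent_bytes[sent_count++] = byte_out;
    return 0x00;  /* or a canned response prepared by the test */
}

Which implementation a given binary gets is decided purely by the linker, so the rest of the driver never knows the difference.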

On the other hand, just a few simple tests for the basic correctness of the ClockByte() function itself are enough. This can also be seen as the principle of separation of concerns and of keeping the files at the same level of abstraction.

The current design is far from perfect, and it is not what I think should ultimately be achieved. It keeps offering me opportunities to understand TDD, and the synergy between design and tests, more deeply. The next lesson will be available when the code gets refactored to enable IRQ-based interfacing between the uC and the MAC driver. So far it has been just message polling.


9/10/2011

Slides from Agile 2011

Agile 2011 ended in Salt Lake City, Utah a few weeks ago. It was again great to meet the growing circle of friends in the agile community. This year my own conference was a bit different, as I submitted two talks and participated in the review process for the Agile for Embedded Systems Development track.

Indeed, we had a dedicated track for embedded topics. It was the 10-year anniversary of the agile manifesto, and this was the first time embedded got this much attention. The track had quality sessions and averaged around 20 attendees per session.

You can find my slide decks via the links below:



Embedded Testing Cycle - the First 3 Years, Markku Åhman and Timo Punkka

James Grenning presented the embedded test-driven development (TDD) cycle as early as 2004. Whispers in the hallways of conference hotels tell that somebody is actually implementing this idea. However, there are only a few documented implementation details available. Schneider Electric's fire security team has been implementing the TDD cycle as an integral part of its development process for 3 years. Come learn from their real-life experience and mistakes in automated testing at different levels: unit testing, acceptance testing using simulation, and testing on the real target hardware.




Agile Hardware and Co-Design, Timo Punkka

Agile software development is getting attention in embedded software development as well. Embedded system development, on the other hand, requires different engineering disciplines working together. When an embedded software team starts using agile methods, it affects the other disciplines too. Agile development emphasizes continuous learning through experimenting and collaboration instead of following a detailed up-front plan, and an agile embedded software team expects matching behavior in system co-design. This talk discusses reasons and ways to adapt agile development to the co-design of system development.



Lots of other presentations, including all other embedded track presentations, are available via the conference program site.



8/18/2011

A New Home


A friend of mine did a fantastic job on my new site. My fuzzy ideas and his zero-whining attitude made a good pair. It was truly an agile project: the site was up and running from day 1, and it emerged during a number of customer-developer pairing sessions. The final look is the result of countless "you know, why don't we just try it"s.


12/17/2010

Embedded Agile, ESC2010, Boston

Here are the slides and technical paper from my talk on Embedded Agile at Embedded Systems Conference 2010, Boston.

I had a good time at the conference. Hope you can find the material useful!

Abstract. New product development (NPD) is getting more and more challenging. Change happens all the time in all dimensions, including one's own organization, technology, competition, and the marketplace. Agile development is targeted at working in a turbulent environment driven by continuous learning. Having originated in the software industry, its applicability to embedded system development has been analyzed over the years. In this paper, I present some observations on the implications of embedded system development for agile development. I introduce findings on frequent releasing, automated testing, co-design including non-SW development, and quality systems like ISO 9001.

11/08/2010

See you in Grenoble

I'll be in Grenoble, France on November 22-24, 2010 to attend Agile Grenoble.

If you happen to be around, get in touch.


9/11/2010

See you in Boston


I'll be attending ESC 2010 in Boston in a couple of weeks. You can catch me speaking about embedded agile on Tuesday the 21st, and you can find me somewhere at the conference throughout the week. Let me know if you're around.

8/27/2010

Fiddling around before TDD

The first of Uncle Bob's Three Rules of TDD states:

1. You are not allowed to write any production code unless it is to make a failing unit test pass.

A few recent discussions among embedded developers revealed that this rule has caused some confusion among fellow beginners of embedded TDD. So, let me first point you to another tip from the same source: Uncle Bob advises fiddling around with things when you are not sure how they work. Then I will share a short story:

I had been writing a proof-of-concept driver for a serial-to-Ethernet controller. It was not certain whether the serial port on the uC could be configured to work with the MAC controller, or whether it would be possible to use the DMA controller to manage longer transfers. I needed the proof of concept for the hardware team fast. I learned how to use the controller by trial and error, gluing together several bits from application notes and examples and running them on a combination of two evaluation kits. Needless to say, it was a mess. It even turned out I couldn't get the job done without a few circuits from my hardware pals. It would have been really awkward and laborious to write tests during this fast-paced, back-and-forth experimenting based on sample code which, of course, didn't come with tests. All this was done in C, with the tools for C.

After I knew which bits worked and which didn't, I wanted to capture this learning in tests. I harnessed the quickly hammered-together code with tests and then massaged the tests and the code, hand in hand, into better shape. In retrospect, I should have treated the original code as a throwaway prototype (i.e., code from a spike). I thought I would be faster by continuing to work with the code I had. Sad to say, the lack of discipline probably made my overall cycle time massively longer. I believe this is the rule rather than the exception.

It might be from the first XP book, I'm not sure, but when I was first introduced to agile methods people always listed the last rule: rules are just rules. Based on this experience I do believe it is pragmatic to fiddle around without tests. But when you start sculpting the solution towards production code, you should take a fresh start and drive it test-first based on your newly acquired knowledge. I know I will next time.

4/03/2010

Test Driven Development for Embedded C in beta

James Grenning's book Test Driven Development for Embedded C is now available in beta from The Pragmatic Bookshelf.

I have taken a peek and checking it out is strongly recommended.

12/19/2009

Bowling Game Kata in C

Olve Maudal has shared his presentation of Bowling Game Kata using C. You can find the slides directly here (pdf).

I'll add a link to it on my TDD in C Delicious list.

6/30/2009

Coverage with lcov, and so what?


A while back we ran an experimental line coverage analysis on our acceptance test suite. The result was 68% on the code for the main control board. I got the result from the nightly build, mentioned it in the Daily Scrum, and prompted: "So what do we think about it, should we track it?" Everyone on the team had a blank stare, until finally a team member came forward: "Yeah, that's a good question. So what?"

Coverage is information. It is just that, an additional piece of information, not by any means the final truth. I don't remember who taught me this, but:


"If you take all your assert's away you still have the same coverage. You just ain't testing anything at all."


This has been explained here and of course in Brian Marick's classic How to Misuse Code Coverage (pdf).

Well, maybe good coverage cannot say anything about the quality of your tests, but poor coverage can certainly say a thing or two of the opposite nature. If your coverage is 20%, we can say quite confidently that you ain't there yet.
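To make the point concrete, here is a minimal sketch (both the function and the test are made up): the "test" below gives average() full line coverage while asserting nothing, so a broken implementation would still pass.

int average(const int *values, int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++)
        sum += values[i];
    return sum / count;
}

/* executes every line of average(), so line coverage reports 100%,
   but the result is ignored and nothing is verified */
void test_average_runs(void)
{
    int values[] = { 1, 2, 3 };
    average(values, 3);
}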

I started with acceptance test line coverage, but the rest of this post is about unit test line coverage. Some embedded teams use gcov, and I have heard of people massaging the data to generate fancier reports. Being lazy, I didn't do that myself. I did what I'm good at and searched for what others had already done. I found lcov, a Perl tool that formats gcov data.

We run lcov under Cygwin. You can get lcov for example from here; extract it and execute "make install". Next, compile and link your unit tests with gcc using the flags "-fprofile-arcs" and "-ftest-coverage". We have a special build target for instrumenting the unit test executables with debug and coverage information, so that we don't unnecessarily slow down the bulk of the builds. Then execute your full suite just like you normally would.
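As a sketch, the instrumented build of a single test executable could look like this (the file names are hypothetical; the flags are the ones mentioned above):

# build one instrumented test executable (hypothetical paths)
gcc -g -fprofile-arcs -ftest-coverage \
    -o bin/utils_tests tests/utils_tests.c src/utils.c
# running the suite writes the .gcda/.gcno coverage data for lcov
./bin/utils_tests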

In our case all the .obj files from the test build are in the ./bin directory, and that's where all the coverage data files go too. Our unit test script moves them into the ./bin/code_coverage directory, away from the .obj files, and we want the final HTML report to end up in ./build/test/code_coverage. Now we have everything necessary to create a shell script that does the actual analysis and reporting of the coverage data:

path=`dirname $0`
lcov --directory $path/bin/code_coverage -b $path --capture --output-file $path/ic.info
genhtml -o $path/build/test/code_coverage $path/ic.info

Voilà, your disappointing(?) results are ready to be browsed, like so:

[Screenshot: lcov HTML coverage report, all files showing green]

What the heck, it's all green, while you only have tests for the simple utils files? This approach has a limitation: you only get coverage information for the files that are involved in your test suite. With a huge legacy code base this would paint too promising a picture early on. Again, you need to think for yourself.

Experiment with coverage in your team. I think it's worth every penny, but even when you start closing in on 100%, remember to keep asking: "So what?"

2/06/2009

Learning to cope with legacy C

New responsibilities during the past year have been a great learning experience. The key learning is that now I really know how incompetent I am. I can't wait to move again and learn how many more things I do really badly, or, even better, can't do at all. This is a brief story of one such finding during this journey.

For the past year we have focused on ATDD with our own framework written in Python. We have 200+ automated acceptance tests for the system. With unit tests, however, we have struggled. While we have over 100 of them (well, it's a start), with the exception of the latest ones they are not really meaningful.

What's different about the latest tests, then? They focus on a higher level. I'm not sure what these tests should be called, but "programmer test" will do for now. I do believe unit tests should be focused when doing TDD, but, wait, wait, I have an excuse... The code is old. It has its dependencies, and while maybe not the worst case in the world, it is a pain to get anything compiled in isolation. The code has a responsibility-based structure (or should have had), and this structure is expressed in the source code folder structure. Each of the responsible "modules", or folders, typically contains its own task. A typical task looks something like this:

task_specific_inits();

for (;;) {
    s = OS_wait_for_something();
    switch (s) {
    case 1:
        do_something1(s);
        break;
    }
}

Sometimes do_something1(s) is inlined, and you may get a bittersweet taste of those infamous 1000+ line functions. Other times you are lucky and the whole high-level event parsing is already done in its own function, along the lines of do_something_with_the_event_from_X(s). That function continues the handling with a loooong switch-case, hopefully just calling further functions.

So, when we decide to test something inside a selected "module", or folder in our case, we compile and link a single test file, all the production code from that single module/folder, the production code for everything considered utils (like linked lists, etc.), and fake everything else. For faking we use Atomic Object's CMock, plus manually written stubs when appropriate. We choose the task's event handling as the place to inject the test actions.
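As a sketch, a test in such a build might look like the following. All names here are hypothetical; CMock generates the mock_*.h file and the _ExpectAndReturn function from the faked module's header:

#include "unity.h"
#include "mock_neighbour.h"  /* generated by CMock from neighbour.h */

void test_event_triggers_notification(void)
{
    /* teach the mock: the code under test must call
       neighbour_notify(42) exactly once; the mock returns 0 */
    neighbour_notify_ExpectAndReturn(42, 0);

    handle_event(42);  /* entry point of the module under test */
}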

We arrange the test execution environment as we wish by initializing all the parties to the expected state and teaching the mocked neighbours accordingly. We inject a single event, or a short sequence of events, into the task's handling routine, and we try to find ways to assert whether everything went as we wished. Sometimes we can use this to learn what really happens when you feed in such and such an event. After all, the default assumption is that the code works, as it has been in production for years; we want to make sure it stays that way when we change it. We have several options for observing the behavior:

1. Automatically generated mocks will tell us if the interaction was as expected
2. We can use the getters of utilities, like linked lists
3. We can sense the internal state of any of the production code files with a few nasty little tricks like #define STATIC (see the sketch below)
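The #define STATIC trick from item 3 is roughly this (a sketch; the variable and values are made up, and the assertion uses a Unity-style macro, since CMock pairs with Unity):

/* at the top of the production source file */
#ifndef STATIC
#define STATIC static
#endif

STATIC int current_mode;  /* file-scope state worth sensing in tests */

/* in the test file; the test build is compiled with -DSTATIC= so the
   variable gets external linkage and can be reached from here */
extern int current_mode;

/* a test can now assert on the internal state directly */
TEST_ASSERT_EQUAL(2 /* e.g. MODE_ARMED, made up */, current_mode);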

When the first test, and maybe her friend, is running, it is time to start refactoring your code. Refactoring your test code, that is. If you take a closer look at what you have done, you will most likely see one or two 300-line test cases that look pretty much the same. Now is a good time to start extracting helpers. When creating an event sequence to be run, you probably build similar data structures; these can be extracted into functions. You probably make a bunch of similar assertions in many of your tests; these can be extracted into helper functions (see the sketch below). And so on, and so on. Each refactoring is likely to reveal more opportunities for cleaning the code. This can't be emphasized enough: it is important to keep the test code clean from the beginning. Otherwise you will end up with a 10 KLOC test file on your hands, and it is much more work to start cleaning it only at that point.
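As a sketch of where such extraction can take the tests (every name here is hypothetical, and the production entry points are assumed to exist elsewhere):

/* assumed production interface; real or faked depending on the build */
typedef struct { int type; int id; } event_t;
void task_handle_event(const event_t *e);
int  led_state(int led);

/* extracted event-builder helper */
static void given_button_press(int id)
{
    event_t e = { 1 /* e.g. EVT_BUTTON */, id };
    task_handle_event(&e);
}

/* extracted assertion helper */
static void then_led_is(int led, int expected)
{
    TEST_ASSERT_EQUAL(expected, led_state(led));
}

/* after extraction, each test reads as a short scenario
   instead of a 300-line wall of setup and checks */
void test_button_press_turns_led_on(void)
{
    given_button_press(3);
    then_led_is(3, 1);
}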

This is very far from TFD (test-first design). It is a battle to get some tests going, in order to be in a better place to continue improving and changing the code. The code is not going to disappear anywhere soon, so there will be lots of changes.

Why did it take us a year to get to this point? The blame is on me. I got bitten by the test bug while writing a really hard real-time firmware app with a former colleague a bunch of years back, and we learned that small, exact tests leading to small steps of coding resulted in zero debugging time. This was the type of software where we had earlier spent the majority of our time debugging the code with an oscilloscope and manually monitoring LED blinks with throwaway debugging code. During that experiment I saw the light (as did my colleague) and thought that this is how firmware should be written too: write each line of code to make a small test pass. However, it is fairly rare in the embedded domain to get your hands on a greenfield project. This may not be a characteristic of just embedded software, but of software in general today: we mostly write enhancements to existing products. Existing products in 2009 are typically not delivered with automated tests, and even less often developed with this in mind. There are going to be plenty of opportunities for battles like this. Letting go of the ideal of very low-level unit testing took a year for me. It is still my ideal way of coding, but we cannot get there overnight with legacy code.

If getting the first tests in place sounds easy(?), calm down. It is only a starting point. You will notice how hard it is to test your code, for example because of scattered initialization routines, or because there is no structure in the first place. You should consider all these problems good things: they are indicators of places to improve. The problems are in the code; building tests only makes them more visible. If you work on those problems, you should be able to see more dependency-breaking opportunities and eventually get to more focused tests. That's the plan at the moment.

Michael Feathers uses the term pinch point in his book about working with legacy code. A pinch point is a function, or a small collection of functions, that you can write tests against and thereby cover changes in many more functions. I guess the event handlers for tasks are our first natural pinch points. This, at least, is the current step on the learning ladder for me. I hope the ladder won't fall.

James Grenning has also done a nice job of articulating the whole legacy-code testing process in the C language (link).

Atomic Object has also presented the importance of refactoring the test code from the beginning (link).