CHAPTER 4: Learning and adapting



Software development is a continuous learning activity between the customer, who knows the business domain, and the developers, who know the software. Since the software product must combine the two areas of knowledge, the customers must teach the developers about the business domain, and the developers must teach the customer enough about software that he can better express his requirements. This learning goes on throughout the project, with the business domain experts and the software developers communicating and collaborating all the time, teaching each other what is needed in order to build the best software.

Circle of life – learning and adapting

On most occasions, at the beginning of a software project, the customer does not really know what he wants, or cannot express it very well. The intangible, virtual nature of software is often the main cause of this problem. Traditional, predictability-based processes, which need all the customer requirements written down at the start, are exposed to high risks in projects where the customer cannot express his needs from the beginning. Not knowing what he wants at first means that the customer will want changes later, and in waterfall-based processes the cost of change is very high late in development:

[Cost of change in traditional processes, by Scott W. Ambler]

Agile methodologies have acknowledged the problem of customers who cannot express their wishes at first, and have built, at the base of their processes, a way of developing that allows continuous learning and adapting throughout the project lifecycle, handling the cost of change very well:

[Cost of change in XP, by Scott W. Ambler]

In “Extreme Programming Explained: Embrace Change” [], Kent Beck explains how XP handles the problem of customers who don’t know what they want and ask for changes late in development:

For decades, programmers have been whining, “The customers can’t tell us what they want. When we give them what they say they want, they don’t like it.” This is an absolute truth of software development. The requirements are never clear at first. Customers can never tell you exactly what they want.

The development of a piece of software changes its own requirements. As soon as the customers see the first release, they learn what they want in the second release…or what they really wanted in the first. And it’s valuable learning, because it couldn’t have possibly taken place based on speculation. It is learning that can only come from experience. But customers can’t get there alone. They need people who can program, not as guides, but as companions.

What if we see the “softness” of requirements as an opportunity, not a problem?

In his book “Agile and Iterative Development: A Manager’s Guide”, Craig Larman gives a very good example of how, iteration by iteration, requirements evolve:

[Craig Larman, 2004]

The iterative and incremental nature of the agile methodologies allows the customers and the developers to continually adapt the software to their needs. Kent Beck compares software development with driving a car:

We need to control the development of software by making many small adjustments, not by making a few large adjustments, kind of like driving a car. This means that we will need the feedback to know when we are a little off, we will need many opportunities to make corrections, and we will have to be able to make those corrections at a reasonable cost.

The software process is a continuous cycle of a few simple steps: the developers and the customer decide what to do in the next iteration; the developers do it and then show the result to the customer, who can now see the software and learn from it. The customer can then decide what needs to be corrected or improved and give feedback about the product, which makes the programmers understand better what he really wants. After such a cycle, everything starts from the beginning again. This is a continuous learning activity between the customer and the developers. Ron Jeffries calls this “the circle of life”, saying that:

On an XP project, the customer defines business value by writing stories, and the programmers implement those stories, building business value. But there’s an important caveat: on an XP project, the programmers do what the customer asks them to do!

Every time we go around the circle, we learn. Customers learn how valuable their proposed features really are, while programmers learn how difficult the features really are. We all learn how long it takes to build the features we need.

As important as it is to work closely with the customer, learning and adapting after every iteration in order to improve the software, it is even more important that the developers collect feedback from the real users at every release, when the software is put into production and used daily. Throughout the development of a release, the learning and adapting is done together with the customer, but nothing gives better feedback than real use of the software after a release has been delivered to the real end users.

Development and production

Mary and Tom Poppendieck consider that one of the most important principles in lean thinking (adapted to software development) is amplifying learning, and that this is done as described above, using the main tools: iterations and feedback.

They also show that it is very important to make the distinction between software development and product manufacturing, saying:

Think of development as creating a recipe and production as following the recipe. … Developing a recipe is a learning process involving trial and error. You would not expect an expert chef’s first attempt at a new dish to be the last attempt. In fact, the whole idea of developing a recipe is to try many variations on a theme and discover the best dish

The main differences between development and production are:

Development – Designs the Recipe

  • Quality is fitness for use
  • Variable results are good
  • Iteration generates value

Production – Produces the Dish

  • Quality is conformance to requirements
  • Variable results are bad
  • Iteration generates waste (called rework)

[Mary and Tom Poppendieck, 2003]

Not seeing these very important differences means not seeing development as a continuous learning and adapting process, and thus working against its nature, increasing the risk of failure in software projects.

A game of invention and communication

Alistair Cockburn defines software development in Agile Software Development [] as follows:

Software development is a (resource limited) cooperative game of invention and communication. The primary goal is to deliver useful, working software. The secondary goal, the residue of the game, is to set up for the next game. The next game may be to alter and replace the system or to create a neighboring system

He presents software development as a cooperative group game between developers and customers, in which the participants need to learn the problem together, invent or imagine a solution for it, and constantly readapt that solution to fit the needs.

3.3 Reflective improvement

One of the most important aspects of the agile movement is that its followers constantly look back at their activity, learn from what they do about what is beneficial and what isn’t, and, based on this information, adapt and improve their working process. The need to improve the process is very well described by the last principle of the agile manifesto:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Alistair Cockburn describes a very pragmatic way to put this self-improving technique into practice in the book “Crystal Clear” []. At the end of each iteration, the team gets together and lists the activities of the last iteration, dividing them into two columns: to keep (activities that worked well and should stay in the process) and to drop (activities and practices that didn’t work so well and should no longer be used). Besides these two, a third column is added, “to try”, where the team writes down ideas for different practices that could be tried. Alistair calls this technique reflective improvement.

The intervals at which the team reflects should not be very long; Alistair suggests twice per iteration: once in the middle of the iteration, when things can still be improved as the iteration progresses, and once at the end, in a “post mortem” reflection workshop that looks back on the entire iteration, including the delivery to the client and his satisfaction.

The activities and techniques discussed in a reflection workshop do not necessarily need to come from the past iteration or even be related to a single project. They can be general conventions used on a larger scale than the actual team, such as code conventions or database versioning, and the participants do not necessarily need to be team members; such sessions could even be held together with the customer, to improve the whole collaboration and development process.
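The outcome of such a workshop is simply three lists. A minimal sketch of the keep/drop/try board, where the practice names are hypothetical examples and not recommendations:

```python
# Minimal sketch of a reflective-improvement board with the three
# columns described above: keep, drop, and try. The practice names
# below are invented for illustration.

def run_reflection(observations, ideas):
    """Sort each observed practice into keep/drop depending on
    whether the team felt it worked well; 'try' collects the new
    ideas proposed during the workshop."""
    board = {"keep": [], "drop": [], "try": list(ideas)}
    for practice, worked_well in observations:
        board["keep" if worked_well else "drop"].append(practice)
    return board

board = run_reflection(
    observations=[
        ("daily stand-up", True),
        ("weekly status report", False),
        ("shared code conventions", True),
    ],
    ideas=["pair on tricky modules"],
)
```

The value is not in the data structure, of course, but in the conversation; the board only records the team’s decisions so they can be revisited at the next workshop.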

Eliminating waste and cutting overhead

In order for a team to be able to drop some of the techniques they are using, they first need to be aware that those techniques are not useful. Although this sounds incredibly simple, it proves to be one of the hardest techniques to put into practice. Seeing what generates waste is very difficult and needs to be discussed in detail, starting from what waste actually is.

Mary and Tom Poppendieck have adapted a method for seeing waste in software development from lean manufacturing, based on Taiichi Ohno’s first steps at Toyota in the 1940s:

The 7 wastes of lean manufacturing and the corresponding 7 wastes of software development:

  • Inventory – Partially Done Work
  • Extra Processing – Extra Processes
  • Overproduction – Extra Features
  • Transportation – Task Switching
  • Waiting – Waiting
  • Motion – Motion
  • Defects – Defects

On many occasions we write code, especially when designing software, to allow flexibility for future changes. Some of the investment in a flexible design pays off well, but on other occasions those changes never occur, and the investment in the flexibility of the design is waste.

Religiously following a software process does not guarantee success. However, when software development fails, in many cases the developers and the management conclude that the process wasn’t followed, and they decide to make it stricter: more documentation is produced, more meetings are scheduled, the plans and designs go into further detail. This can be good in some cases, but extra process overhead is one of the biggest problems in software development today. Programmers who spend six hours a day writing documents that comply with the process and two hours actually producing software are one of the best-hidden wastes in software development, especially in large organizations.

Moving programmers from one project to another requires time for them to get used to the new project. If the project is big, understanding it takes a long time, and if the task the moved programmer performs is small, it does not cover the initial investment in understanding. Even worse, because the software wasn’t understood properly, the programmers decrease the internal quality of the project, making it harder to extend and maintain.

One of the biggest problems with waste is that most programs have features that are never used. These contribute substantially to the cost, length and complexity of a software project, and not using them is simply waste.

Self improving sessions

An example from our own work:

Some time ago, we started to use a new open source ORM on .NET, called NH. It was a fairly small (2.5 months), non-complex project, built by me and another colleague, let’s call him John. I was needed for the first 3 iterations (6 weeks) to help get the project started; it would then be continued by John. The project was delivered on time to the customer and it still works. Then John and another colleague, Michael, started a new web project, quite similar to the previous one, and developed it for two months. Then Michael and another colleague, George, started the third project.

In a discussion outside the company, Michael and George were complaining about the ORM, saying that in many situations it proved to be an overhead. Suddenly, I detected a potential problem: I had been using it for the same period, and my conclusion was exactly the opposite. Listening further, we decided to take a look at the code, and it was then that I realized that some parts of it were completely misused, becoming an overhead. Going back to the second project, I noticed the same mistake there. And, quite surprisingly, in the first project also. I realized that one mistake made by John in using NH was considered the right way to go by Michael and later by George, resulting in much more code than was actually needed and in the impression that it wasn’t a good technology.

I decided that I needed to do something about this, so I told the entire team that the next day at 2pm everyone needed to be available for a meeting to sort out the problem; however, I warned everyone that it wasn’t going to be a conventional meeting. When everyone came, I gave them a document of about 40 pages that described in detail how NH really worked, and I asked everyone to read it. While they were reading, I made a list of questions to see if they had understood the technology; we were supposed to discuss the responses at the end. When someone finished reading, I sent them the questions by email, expecting the answers the same way. Of course, we were all co-located and everyone had his own computer. When all the answers came in, I was surprised that they were 99% right.

So, after 2.5 hours that might be considered lost, because instead of working we read about a technology that was not yet used by everyone, everyone started to understand it. Had these 2.5 hours been invested at the beginning of the first project, they might have saved days or maybe weeks of wasted resources, caused by a technology that wasn’t properly understood by someone and by misunderstandings the others took over as the proper way to use it.

We decided that this kind of “school-like session” could also have tremendous potential in our attempt to become more agile, so we started with Extreme Programming, reading, answering questions and discussing a few pages every day, spreading information and knowledge much faster than ever before, which later avoided many overheads and wasted resources.

CHAPTER 3 (continued): 3.4 Collaborating



One of the values of the agile manifesto states: “Customer collaboration over contract negotiation”, completed later in the principles by: “Business people and developers must work together daily throughout the project”. It is very clear that day-by-day collaboration, between team members or between programmers and customers, can deliver better software, and, as a result, close collaboration stands at the base of all agile methodologies.

Why is collaboration important?

Building software is a continuous learning process between the customers, who know the business constraints, and the programmers, who can develop the software. The customer must learn about software: how it can be built, what can be done and what can’t, and the cost of the different pieces of functionality, because he is the one who “steers” the software toward what he needs for his business. To build the right software, the programmers must learn about the business domain of the customer. This learning can only be done by working together daily.

Robert C. Martin notes in “Agile Software Development: Principles, Patterns, and Practices” [] that managers tend to tell the programmers what needs to be done, then go away for as long as they think is necessary for the software to be built, and then come back expecting to find everything working. This tendency lies in human nature, and it exists in the case of customers as well; however, it has been proven over and over that having everything written down in a functional specification document, signed off at the beginning, can lead to misunderstanding the software’s purpose from the client’s point of view.


Delivering software that fails to comply with the business needs of the customer contributes to an atmosphere of distrust between software developers and customers. Both sides then try to compensate for this lack of trust with very detailed contracts on what can and what cannot be done in future projects.

Customers are afraid that

• They won’t get what they asked for.

• They’ll ask for the wrong thing.

• They’ll pay too much for too little.

• They must surrender control of their career to techies who don’t care.

• They won’t ever see a meaningful plan.

• The plans they do see will be fairy tales.

• They won’t know what’s going on.

• They’ll be held to their first decisions and won’t be able to react to changes in the business.

• No one will tell them the truth.

Developers are afraid, too. They fear that

• They will be told to do more than they know how to do.

• They will be told to do things that don’t make sense.

• They are too stupid.

• They are falling behind technically.

• They will be given responsibility without authority.

• They won’t be given clear definitions of what needs to be done.

• They’ll have to sacrifice quality for deadlines.

• They’ll have to solve hard problems without help.

• They won’t have enough time to succeed.

Unacknowledged Fear Is the Source of All Software Project Failures

If these fears are not put on the table and dealt with, then developers and customer each try to protect themselves by building walls.

[Fowler , Beck – Planning Extreme Programming, 2001]


Protecting by building walls usually means contracts. Trying to replace trust and close collaboration with contracts does not solve the problem, but amplifies it. Mary and Tom Poppendieck describe very well why trust cannot be replaced, with examples from car manufacturing and from software, enumerating the different types of contracts and showing the flaws they have when it comes to “enforcing” trust.

Can the parties involved in a software project as buyer and as vendor collaborate efficiently without each merely following its own interests? Because of this great fear, contracts are signed and later expected to replace trust between the companies involved on each side. If, as quoted in Lean Software Development, trust means “one party’s confidence that the other party … will fulfill its promises and will not exploit its vulnerabilities”, then contracts are considered by many people a way to enforce it.

Let’s briefly study the contract types used mostly between software companies, and examine how they can enforce trust, or achieve exactly the opposite:

  1. Fixed price contracts: The most commonly used type of contract. It seems to give the customer faith that the cost won’t be bigger than the one planned, but this means that the burden is moved onto the shoulders of the software vendor, who will try to find ways to compensate eventual losses (if any) by overcharging for changes. These contracts tend to push an atmosphere of distrust between the parties, each trying to follow its own interest, and thus favor the failure of software projects.
  2. Time and materials contracts: This type of contract deals better with unpredictability and uncertainty, but simply shifts the risk from the vendor to the buyer. The longer the contract takes to complete, the more the customer pays, in a way encouraging the vendor to charge more for the work. The customer is always under the pressure that things might be done faster, and this favors, just as the fixed price contract, an atmosphere of distrust.
  3. Multistage contracts: These contracts try to merge the benefits of the fixed price and time and materials types, minimizing the risks. Usually a big fixed-price contract is divided into multiple succeeding stage contracts governed by a master contract. There is a certain risk in this approach as well: if the contract is fully renegotiated after each small stage is delivered, it can become very expensive and will not bring the two parties closer together.
  4. Target cost contracts: This type is the hardest to put into practice, but also the fairest for both parties, which means that it favors trust. These contracts are structured in such a way that the total cost, including changes, is a joint responsibility of the customer and the vendor. This forces them to collaborate and take decisions together, negotiating so that the target cost is met.
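The shared-responsibility idea behind target cost contracts can be made concrete with a little arithmetic. A common arrangement is to split any overrun or underrun against the agreed target between the two parties; the 50/50 ratio and the figures below are assumptions for illustration, not terms from any contract discussed above:

```python
# Illustrative arithmetic for a target cost contract: customer and
# vendor share any difference between the agreed target and the
# actual cost. The 50/50 share ratio is an assumed example; real
# contracts negotiate this ratio.

def target_cost_settlement(target, actual, vendor_share=0.5):
    """Return (customer_pays, vendor_adjustment). A positive
    adjustment is a bonus for finishing under target; a negative
    one is a penalty for overrunning it."""
    difference = target - actual            # > 0 means under target
    vendor_adjustment = difference * vendor_share
    customer_pays = actual + vendor_adjustment
    return customer_pays, vendor_adjustment

# Under target: both sides gain. The customer pays 95k instead of
# the 100k target, and the vendor earns a 5k bonus over its 90k cost.
pays, adj = target_cost_settlement(target=100_000, actual=90_000)
```

With an overrun the same formula penalizes both sides, which is exactly what pushes customer and vendor to negotiate changes together instead of against each other.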

Although some types of contracts can be seriously flawed when it comes to software development and to building programs that have value for the customers, it has been proven that they can be adapted to work well in software, as long as they do not try to replace trust. Agile methodologies tend to favor the types of contracts aimed at building trust rather than distrust. Robert C. Martin gives a very good example of an agile contract:

As an example of a successful contract, in 1994 I negotiated a contract for a large, multi-year, half-million-line project. We, the development team, were paid a relatively low monthly rate. Large payouts were made to us when we delivered certain large blocks of functionality. Those blocks were not specified in detail by the contract. Rather the contract stated that the payout would be made for a block when the block passed the customer’s acceptance test. The details of those acceptance tests were not specified in the contract.

During the course of this project we worked very closely with the customer. We released the software to him almost every Friday. By Monday or Tuesday of the following week he would have a list of changes for us to put into the software. We would prioritize those changes together, and then schedule them into subsequent weeks. The customer worked so closely with us that acceptance tests were never an issue. He knew when a block of functionality satisfied his needs because he watched it evolve from week to week.

The requirements for this project were in a constant state of flux. Major changes were not uncommon. There were whole blocks of functionality that were removed, and others that were inserted. And yet the contract, and the project, survived and succeeded. The key to this success was the intense collaboration with the customer; and a contract that governed that collaboration rather than trying to specify the details of scope and schedule for a fixed cost.

[Robert C. Martin 2005]

The agile methodologies come with a series of practices aimed at building trust between the vendor and the buyer of the software, frequent delivery being the most important of them, because the customer can see the software, at small intervals, with the planned features built and working. Risks are also addressed by developing the most important features (from the client’s point of view) first, and by allowing the client to make mistakes, to change his mind, and to figure out how things should really be done later, after he sees what the software looks and feels like.

Customer and programmer rights

Ron Jeffries, in “Extreme Programming Installed” [], and Martin Fowler and Kent Beck, in “Planning Extreme Programming” [], built lists of rights for the parties usually involved in a software project, which, if known and respected from the beginning, can enhance collaboration.

Customer Bill of Rights

• You have the right to an overall plan, to know what can be accomplished when and at what cost.

• You have the right to get the most possible value out of every programming week.

• You have the right to see progress in a running system, proven to work by passing repeatable tests that you specify.

• You have the right to change your mind, to substitute functionality, and to change priorities without paying exorbitant costs.

• You have the right to be informed of schedule changes, in time to choose how to reduce the scope to restore the original date. You can cancel at any time and be left with a useful working system reflecting investment to date.

Programmer Bill of Rights

• You have the right to know what is needed, with clear declarations of priority.

• You have the right to produce quality work at all times.

• You have the right to ask for and receive help from peers, managers, and customers.

• You have the right to make and update your own estimates.

• You have the right to accept your responsibilities instead of having them assigned to you.

Customer collaboration

In “Extreme Programming Explained: Embrace Change” [], Kent Beck says:

XP planning assumes that the customer is very much part of the team, even if the customer works for a different company and the rest of the team works for a contractor. The customer must be part of the team because his role is far too important to leave to an outsider. Any (XP) project will fail if the customer isn’t able to steer.

So the customer’s job is a huge responsibility. All the best talent and technology and process in the world will fail when the customer isn’t up to scratch. Sadly, it’s also a role on which we can’t offer particularly sage advice. After all, we’re nerds, not business people. But here’s what we know for sure.

[Kent Beck, 1999]

This shows very well the role of the customer in an agile project, and how close collaboration is actually done, with each party having its responsibilities and rights, collaborating to “steer” and implement the project successfully.

The most obvious way in which customers and programmers collaborate is agile planning, which is done together. SCRUM iteration (sprint) planning summarizes the whole concept behind agile planning, which is also used in all the other agile “flavors”. The customers and the programmers sit together. In the first part of the meeting, the customer explains where he wants to go, showing the “shopping list” he wants for the next sprint and explaining what each item means, so that in the second part the programmers can come up with the cost (estimated time) of those items. If the cost is over the budget (30 days in a sprint), then, based on the customer’s priorities, some of the low-priority items are dropped so that the cost matches the budget. If the cost is under the budget, the customer is asked to come up with the items that are next most important to him, so the gap between cost and budget is filled.
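The fit-to-budget negotiation described above can be sketched as a simple greedy selection over a prioritized list. The item names, priorities and estimates below are hypothetical; the point is only the mechanics of dropping what doesn’t fit and filling the gap with the next most important items:

```python
# Sketch of the sprint-planning negotiation: fit the customer's
# prioritized backlog into a fixed budget (30 ideal days in a
# sprint). Items and estimates are invented for illustration.

def plan_sprint(backlog, budget=30):
    """backlog: list of (name, priority, estimated_days); a lower
    priority number means more important to the customer. Returns
    the items that fit, in priority order, plus the unused budget.
    An item too big for the remaining budget is skipped (dropped),
    and smaller lower-priority items may fill the gap instead."""
    planned, remaining = [], budget
    for name, _, days in sorted(backlog, key=lambda item: item[1]):
        if days <= remaining:
            planned.append(name)
            remaining -= days
    return planned, remaining

backlog = [
    ("invoice printing", 1, 12),
    ("customer search", 2, 10),
    ("CSV export", 3, 15),      # doesn't fit: dropped this sprint
    ("login audit log", 4, 6),  # smaller item fills the gap
]
planned, spare = plan_sprint(backlog)
```

In a real planning meeting the selection is of course a conversation, not an algorithm: the customer may re-prioritize or split an oversized item rather than simply drop it.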

One unwanted possibility (but, since agile methodologies are very pragmatic, it is taken into account) is when the team realizes that they won’t finish the iteration in time. Since honesty sits at the base of the agile principles, the customer is informed right away and shown the situation. He is then asked to pick one or more of the planned features and “drop” them, so that the remaining ones can be delivered successfully. In this way the crisis is solved fast and together, whereas the alternative, lying, would be much more costly. It is important to remember that in this “negotiating scope” session, for example, 8 features that are 100% finished will be delivered out of 10 planned, rather than 10 features that are 80% finished and would be unusable. The “dropped” features will then be planned for the next iteration.

The two methods described above are incredibly efficient at reducing the risk of project failure, since the most important features are built and delivered first, showing how close collaboration between the customer and the programmers can lead to success.

CHAPTER 3: Communicating and Collaborating

3.1 Introduction

Communication is considered to be the main factor in a project’s success or failure. Alistair Cockburn calls software development “a cooperative game of invention and communication” in his book “Agile Software Development”, where a big part is dedicated to explaining communication problems and how communication influences the development of a software project. In all agile methodologies, communication and close collaboration with the customer are emphasized as “must do” practices: in XP, two of the five values are communication and feedback; among SCRUM’s values we find communication; Crystal comes with osmotic communication between team members and with early access to expert users; and so on.

Bad communication between the people involved directly in a software project can create a lot of overhead, waste important resources, and in many cases cause the failure of the project. For this reason, all software methodologies have developed standards for communicating, formal and informal, between the different stakeholders while working on a project.

Communication can be divided into communication inside the development team and communication outside the development team, with the customer and with the upper management.

3.2 Communicating with the customer

3.2.1 Requirements

Functional specification documents signed off at the beginning

Traditional software methodologies need to gather the client’s requirements at the beginning of the project, putting everything in a big functional specification document that needs to be signed off by the client before development can start. This approach seems fair and quite straightforward, but practice over the years has shown that it can have some very serious flaws. The biggest problem is that, due to the difficulties in understanding software and how it works, the customer cannot always tell what he wants from the beginning, and might miss some very important aspects when the functional specification document is signed off.

There have been numerous cases in the past when projects were developed within budget, delivered on time and exactly as specified by the functional specification document, but when put into production were completely useless for the customer. Yes, the customer could be blamed for not knowing what he wants when he needs to, but the plain fact is that he is unhappy about paying for unusable software. A customer who has had such an experience will be very reluctant to sign off another functional specification document, and in many cases a lot of overhead will be generated in development because of his fear of being forced to pay for unusable software.

Although big functional specification documents seem very good to the untrained eye, they do have serious flaws:

– They are very costly to build (how much time do you need to write 100-600 pages of text?).

– They might miss important aspects of the program. Written text is not very good at communicating things: the writer cannot capture everything he needs to capture, and the reader tends to imagine the rest. Imagination is good, but not in this case, because here it is very costly.

– They tend to push toward customer proxies. The more indirection and the more steps through which something passes from the customer to the developer actually building the software, the more misunderstandings can, and usually do, occur. When the information needs to travel through several nodes from the ones who know the business to the one who builds the program, important time is lost, and important precision.

– Big documents are hard to read. Someone might say it is important to capture everything there, otherwise the programmers would complain about incompleteness, but over-completing a document is just as bad.

– What if something changes? What if at the time the document was written there was one situation and it just changed? What do you do? Throw away part of the document, or all of it? If you wrote it in one month and it cost you x, you have just lost 10-20-30-…100% of x and need to pay again for the new version.

Big functional documents are often written to replace trust between the parties involved: we write everything in the specification document and cast it in stone, so it can’t change and we can’t be held responsible by the customer later. Where there isn’t trust, there isn’t good business, and lack of trust is usually compensated with enormous overhead in work and cost. On the other hand, not everyone is to be trusted.

Agile requirements

Mike Cohn wrote in User Stories Applied []:

I felt guilty throughout much of the mid-1990s.[…]

I knew that what we were doing worked. Yet I had this nagging feeling that if we’d written big, lengthy requirements documents we could be even more successful. After all, that was what was being written in the books and articles I was reading at the time. If the successful software development teams were writing glorious requirements documents then it seemed like we should do the same. But we never had the time. Our projects were always too important and were needed too soon for us to delay them at the start.

Because we never had the time to write a beautiful, lengthy requirements document, we settled on a way of working in which we would talk with our users. Rather than writing things down, passing them back and forth, and negotiating while the clock ran out, we talked. We’d draw screen samples on paper, sometimes we’d prototype, often we’d code a little and then show the intended users what we’d coded. At least once a month we’d grab a representative set of users and show them exactly what had been coded. By staying close to our users and by showing them progress in small pieces, we had found a way to be successful without the beautiful requirements documents.

Agile methodologies have developed methods of capturing requirements that address very well the problems and risks of the methods presented above. Agile thinking accepts the fact that requirements change in time and works with human nature, allowing the customer not to know exactly what he wants from the beginning. In every agile methodology, a very strong emphasis is put on close and direct communication with the customer, so the misunderstandings generated by customer proxies are eliminated. The most efficient methods of communication are used: verbal, real-time, face to face communication is favored over written documentation, which is kept to a minimum.

Intermediate work products (documentation, use cases, UML diagrams, database schemas etc.) are not important as models of reality, nor do they have intrinsic value. They have value only as they help the team make a move in the game. Thus, there is no point in measuring intermediate work products for completeness or perfection. An intermediate work product is to be measured for sufficiency: is it sufficient to remind or inspire the involved group?

[Alistair Cockburn, 2003]

User stories, backlog items, acceptance tests

Since the emphasis is put on close collaboration and face to face, real time communication with the customer, written requirements are kept as brief as possible. They act mostly as a reminder of something that needs to be discussed between the customer and the developers.

The best known ways of capturing requirements from the customer in agile methodologies are user stories, originating from XP, and backlog items, originating from SCRUM. Both describe functionality that will be valuable to the customer and the users of the system.

Mike Cohn defines in ‘User Stories Applied’ [], the following characteristics of a good user story: independent, negotiable, valuable to users or customers, estimable, small and testable.

Because of their light nature, user stories and backlog items act more like an instrument for planning than for describing what is wanted. They give the direction of what is wanted, while the details are mostly discussed with the customer and expressed as acceptance tests.

Acceptance tests are a feedback mechanism defined by the customers (with the help of programmers) that lets programmers know whether a feature is finished and whether it has been implemented exactly as the customer wants it. Instead of asking the customer to describe what he wants, acceptance tests are built by asking him: “how will I know that what we did is according to what you want?”. He will then start saying what needs to be done for him to be able to tell whether what was requested was built and finished. This question achieves two purposes at once, without heavy descriptions: you know what needs to be done, and you know how to test it. Ambiguity also tends to dissolve under acceptance tests, because in order to know how to test something, the customer must have a very intimate knowledge and understanding of that thing.

The user story mini-sample

Case 1: Requirements document

4.3 Outstanding contracts report

The system will allow the users to see a report of outstanding contracts. An outstanding contract is a contract for which a predefined number of days has passed since it was sent to the customer and no response has been received.

The outstanding contracts page:

[Screen mock-up: a “Contract management system” page with menu items “My contracts” (1) and “Reports” (2); an “Outstanding contracts report” section with a “Choose the number of days” input field (3) and a “Search” button (4); below them, a results table with columns “Customer” (5), “Contract reference” (6) and “Number of days passed” (7), listing rows such as Customer #1 / Ref #1, Customer #2 / Ref #2, Customer #3 / Ref #3.]

My contracts

Description: Menu item that allows navigation to the page where my contracts can be managed

Reports

Description: Allows the user to navigate to the reports screen where all reports reside

Number of days

Description: Allows entry of the number of days that needs to be exceeded for a contract to be displayed

Function: The user types the number of days as an integer value

Search button

Description: Allows the query to be performed

Function: On click, the system searches the database for the contracts specified and displays them in the table below the button. If no results are found, the message “No search results.” is displayed

Customer

Description: Displays the customer name

Contract reference

Description: Displays the contract reference number

Number of days passed

Description: Displays the number of days that have passed since the contract was sent to the customer

Case 2: User story and acceptance test

Outstanding contracts report

Report all contracts that were sent to the customer and for which no answer has been received for more than a given number of days.

Acceptance test

For a list of contracts:

[Table of contracts: customer (e.g. Coca Cola), contract reference and sent date for each]

Test case 1: Filtering at 2005.10.20, for contracts above 100 days, would result in:

No results to display.

Test case 2: Filtering at 2006.04.20, for contracts above 15 days, would result in:

[Results table: contract reference and number of days passed]

Test case 3: Filtering at 2006.04.30, for contracts above 10 days, would result in:

[Results table: contract reference and number of days passed]
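With a concrete list of contracts in hand, the acceptance test above translates almost directly into an automated test. The sketch below assumes hypothetical contract data and a hypothetical outstanding_contracts function; only the filter dates and day thresholds are taken from the test cases.

```python
# A sketch of the acceptance test as an automated test. The data, the
# function and the field names are illustrative assumptions, not taken
# from the original example.
from datetime import date

def outstanding_contracts(contracts, today, min_days):
    """Return (reference, days passed) for every unanswered contract
    sent more than min_days days before 'today'."""
    report = []
    for reference, sent, answered in contracts:
        days = (today - sent).days
        if not answered and days > min_days:
            report.append((reference, days))
    return report

# Hypothetical contract list: (reference, sent date, answer received?)
contracts = [
    ("REF-001", date(2006, 3, 20), False),
    ("REF-002", date(2006, 4, 5), False),
    ("REF-003", date(2006, 4, 25), True),   # answered, never outstanding
]

# Test case 1: at 2005.10.20 nothing has been sent yet, so filtering
# for contracts above 100 days yields no results
assert outstanding_contracts(contracts, date(2005, 10, 20), 100) == []

# Test case 2: at 2006.04.20, only REF-001 has waited above 15 days
assert outstanding_contracts(contracts, date(2006, 4, 20), 15) == [("REF-001", 31)]
```

Written this way, the customer’s test cases become executable: running them tells the programmers directly whether the report is finished and correct.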

3.2.2 Set-based development

Communicating the requirements can be done in two ways: communicating the constraints and allowing multiple solutions to be developed by the programmers until one emerges, or communicating the solution and refining it until it works. Mary Poppendieck calls this set based vs. point based development.

As a simple example, the customer can say: we need a desktop application that enables us to see our sales, or: we need a way to see our sales. The first statement enforces a desktop application solution, while the second leaves open the possibility of developing more solutions: web based, desktop based, mobile etc. If the second requirement is detailed as: we need to be able to see our sales in real time, as quickly as possible, because some of them need a sales manager’s approval and he needs to give it no matter where he is, then suddenly, given the mobility constraint, the mobile solution might seem better. As the constraints are further defined, one solution will emerge. If we take the point based approach and go directly for a desktop application, when the mobility constraint is revealed we might be forced to go back, rethink the desktop solution and throw away its code.

Agile methodologies have built into their practices ways to communicate and implement using set based development: developing multiple solutions in the early iterations and letting one emerge; communicating constraints instead of solutions, a very good example being acceptance tests, which communicate testing constraints rather than implementation solutions; and designing for change, where extensive refactoring and fast shifts based on constraints let a solution emerge from a larger range of possibilities.

3.2.3 Feedback

Customer feedback

It’s two in the morning and you are driving home. The traffic light is red, and there’s not another car in sight. But the traffic light is red, so you stop. And wait. And wait. Finally, the light changes, after allowing time for lots of nonexistent cross-traffic. You think to yourself, it’s going to be a long drive home. And sure enough, the next light is also red. But as you approach the light, it turns green. Ah ha! you think, An automatic sensor. That light is smart enough to know I’m here and there’s no one else around. I hope the rest of the lights are like that!

The difference between the two lights is feedback. The first light was preprogrammed.

[Poppendieck 2003]

Improvement can only come as a result of feedback. The development team writes a piece of the system, the customer sees it and tells them whether that program is good or not. He can also tell the developers what else could improve the program’s quality.

The virtual nature of software makes it very hard for customers to understand how it is built, so it is very hard for them to communicate what they really want. To address this extremely important problem, software should be shown to the customer very often. This has two very important benefits:

– he can see where you are, the real progress, and this increases his trust that you are doing what you planned to do and that his money is not wasted

– he can see if something is not what he wanted and tell the development team, so they can fix it; or, after seeing it, he can start to “feel” the software and give you much better information about what he wants. The smaller the delivery cycles, the better.

Small delivery cycles are incredibly important in a project’s communication. As previously stated, they are a very good technique to keep the project from derailing, and they embrace changing requirements.

When the first requirements are being gathered from the customer, the developers explain to him how something can be done, but it is very easy for him to imagine a completely different thing. It is also very hard for him to understand what can and what cannot be done, or why one thing costs more than another. After the team delivers the first result of an iteration, he can see a computer program, and everything starts to take shape in his mind. He can use this as a basis for future requirements, and can see roughly how much time one thing or another takes to be implemented. The second iteration is even better, and as the project progresses, the dark fog that the software seemed to be at the beginning becomes clearer and easier to control, and the customer becomes less and less stressed. He starts to understand how things are done and he sees real progress. There is nothing more honest than showing the customer the real program, from which he can see the progress.

Code feedback

If feedback is the basis of all improvement, how can feedback help the internal quality of a project? What we discussed earlier was the external quality, the part that the customer understands and helps improve. But what is internal quality in a software project?

A project is of high internal quality if it can be tested and debugged easily and can be changed with very little effort. Being easy to change assures the program a much longer life in production, which makes it cheaper. If the customer pays $1,000,000 for a piece of software and the software is used for 5 years, it costs $200,000 every year. If the same software can be fixed, readapted and kept in use longer, for instance for 10 years, it costs $100,000 per year: half the cost.

But how can we make the software give us feedback about its health? In agile methodologies, the best technique for receiving feedback from the code itself about its health is automated tests. They can detect problems very early and report them to the developers, who can then act on them very fast. The automated tests become break detectors. If the developers can detect breaks in existing functionality fast, they can fix them fast, and that gives them great flexibility. A programmer makes a change in the software, even in code he didn’t write, then runs the tests. If they detect a break, he knows it and can choose either to reverse the change or to repair the affected part.
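The break-detector idea can be sketched as a minimal automated test suite; the price-calculation example and all the names here are illustrative assumptions, not taken from the text.

```python
# A minimal sketch of automated tests acting as break detectors. The
# function under test is a hypothetical order-line price calculation.

def total_price(quantity, unit_price, discount=0.0):
    """Price of an order line; discount is a fraction, e.g. 0.1 for 10%."""
    return quantity * unit_price * (1.0 - discount)

# Each assertion pins down existing behaviour. A programmer makes a
# change anywhere in the code base, reruns the suite, and any break in
# this functionality is reported within seconds, so he can immediately
# repair the affected part or reverse the change.
assert total_price(3, 10.0) == 30.0
assert abs(total_price(2, 50.0, discount=0.1) - 90.0) < 1e-9
```

The value is not in any single test but in running the whole suite after every change, so that a break anywhere in the system is reported at once.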

A big problem with a program’s flexibility is that as it becomes big, it becomes very rigid, and making a change can affect parts of it that cannot be easily predicted. So the developers become afraid to change it, and the project starts to be doomed. The harder it is to change, the harder it is for the program to cope with market requirements, and the shorter its life will be. Having an army of break detectors in the form of automated tests ensures great flexibility and diminishes the fear of change, even in big and complex systems, because they can give the developers feedback about their actions very fast.

3.2.4 Showing progress

Working software doesn’t lie. One of the agile manifesto principles states: “Working software is the primary measure of progress”. Showing the customer the product instead of reports after each iteration is completed (2-4 weeks) is a great way to communicate progress and to build a product of much greater value to the customer. Putting that software into production every few months, seeing how it is used and collecting what the users think can further improve the software and build trust between the client and the programmers, driving the software to success.

For communication inside the team or with upper management, big visible charts on a wall, showing very clearly which features are implemented and which are not, can be a very effective and silent technique to show everyone passing by that wall the current status of a project.

3.3 Communicating inside the team

Project related information must travel very fast inside the team. Crystal calls this osmotic communication, and it is at the very core of the methodology. It says that people working on the same project need to be able to use the best communication technique: face to face communication, with and without a whiteboard. For this, the best approach is to have the entire team collocated in the same room as much as possible, all facing each other, able to ask each other questions and to overhear relevant discussions. The team must also have a whiteboard in their proximity, where they can design or sketch to communicate better.

Extreme Programming goes even further, addressing the communication problem mainly with user stories written on pieces of paper and stuck to the wall, and by writing all the business code in pair programming. This means that two programmers work together at the same computer, each understanding 100% of what is done, and not 50% as if they were working separately.

The techniques above mainly address communication when time is crucial, but there are other communication problems within the organization where time is not that crucial. This is best described by a situation in which a team develops a program for a few months and delivers it to the client at the end. Now the client wants to continue the development of the program, to add extensions to it, to improve certain parts etc., but the team that built it is no longer available, or is only partially available. How will the new programmers who come in contact with the program understand what was required, what was done, and how it was done?

Extreme Programming says that the best way to document a computer program is the code itself and its automated tests, not documents describing what was done, as such documents need to be written by someone, which is costly, and they become obsolete very soon. In most cases I find this approach very good, as I have noticed that whenever I download an open source framework or look at code written in the past, the first thing I instinctively do when I want to learn how to use it is to look at the unit tests. They show me how the software can be used and what its limitations are, in the same language in which I want to use it: code.

3.4 Improving communication: quantity vs. quality

If communication plays such an important role in all aspects, how can we improve it? For the sake of improving communication, many people fall into a very big trap, believing that improving communication means increasing quantity: more meetings, more documentation, more emails and more phone calls. This is profoundly wrong: communication can only be improved through quality, and in many cases increasing quantity amplifies the communication problem rather than solving it.

Meetings are failures when participants leave the meeting feeling no productive results were achieved; or, that the results that were derived were accomplished without the full engagement of the participants. In short, these useless meetings do not engage collaboration and participative decision making.

[ Booch and Brown, Collaborative Development Environments]

In order to improve the quality of communication, we must see which types of communication are better than others. Following this idea, Alistair Cockburn ranks the communication channels in Agile Software Development, concluding that interactive, face-to-face communication is the cheapest and fastest channel for exchanging information.

If verbal, face to face communication is not at hand, there are other great ways to deal with it, like visual communication: web based collaboration tools acting as whiteboards and allowing people to chat in real time, application sharing, instant messengers, etc.

As an example of visual communication, a single screenshot can show even the context in which a bug appeared and give the developers a great deal of information about the user actions. Screenshots or even small recorded movies of user actions, are very cheap, and communicate a lot of useful information, which in some cases might be missed just by sending a plain text email or document.

Another example of improving communication are customer improvement programs, in which the user is asked to add suggestions on how he would like the program to be improved, or, when a bug appears, the software automatically detects it and sends information about the problem (maybe even screenshots) by email or other means to the developers, to be analyzed and solved.

Recorded Macromedia Flash movies are becoming a more and more popular way to show how software can be used, instead of plain text or “classical” help documents, because their visual nature enhances the user’s experience when learning about the program.

Table based communication is one of the most used in business at this time (mostly due to Microsoft Excel) because of its simplicity and concreteness. The same approach was brought into software development by Ward Cunningham, who invented a brilliant communication, organization and testing tool based on tables, called Fit. Fit is a great framework for acceptance tests. His work is now continued by a wiki based tool developed at ObjectMentor under the supervision of Robert C. Martin, called FitNesse.

New ways of enhancing communication are emerging, allowing people to communicate better, thus making them able to collaborate closely even if they are not situated in the same room, and obtaining better software.

CHAPTER 2: Agile methodologies

2.1 Introduction

What is a methodology?

A software methodology is a set of related rules, principles and practices that, once put into practice, can help deliver valuable software to the client, on time, over and over again.

Your methodology is everything you do regularly to get your software out. It includes who you hire, what you hire them for, how they work together, what they produce and how they share.

[Cockburn, 2004]

Characteristics of an agile methodology

Agile methodologies are characterized by close collaboration with the customer and by delivering working, tested software within fixed time intervals, called iterations, which usually last from one to four weeks. These practices help ensure that the software is developed according to the customer’s wishes and that projects stay on track, while also reacting well to changing requirements and priorities.

Working software is shown to the customer at the end of each iteration, which provides a tangible means to track project progress, rather than charts and reports. Written documentation is kept to a minimum, in favor of real-time, face to face verbal communication and close collaboration both within the team and with the customer. This close communication and frequent feedback loop with the client enables a continuous learning process, in which the customer understands and participates in software development and the developers stay close to the business domain of the customer.

2.2 Agile manifesto and principles

Throughout the 90’s, several new “lightweight” software methodologies emerged in reaction to the problems of the heavy methodologies used throughout the industry.

In 2001, Jim Highsmith and Robert C. Martin organized a meeting of 17 people, many of them founders of these “new” methodologies, to discuss their similarities and differences and to establish a set of principles common to all. Thus the “agile” movement was born, with the establishment of a manifesto and a set of 12 common principles that expressed the essence of agile development.

The agile manifesto

1. Individuals and interactions over processes and tools

The heavy, high ceremony processes, with lots of process and documentation overhead, were found to produce less valuable software for the customers than self-organized teams of open minded individuals who interacted and focused on delivering software rather than on strictly following the process.

2. Working software over comprehensive documentation

In many cases, vast amounts of documentation were produced for different purposes, from gathering requirements to tracking and showing progress to the client or documenting what had been done, with people focusing on getting the right documents rather than the right software, for the customer, on time. They discovered that there is no better way to show progress than showing the customer the software, because working, tested software leaves no room for misinterpretation.

Documentation can be helpful, but it is very important to do just enough of it for the needed purpose, staying focused on delivering software, instead of focusing on making the best documentation possible.

3. Customer collaboration over contract negotiation

In many cases, because of the virtual nature of software, clients cannot express their requirements very well at first, but they tend to become good at it as they see the software being built. This continuous learning, of the customer about software and of the developers about the customer’s business, proved to be a very good way to build software. With everything negotiated and signed off from the beginning, the software’s purpose, from the client’s point of view, could be misunderstood. Collaborating directly with the client is usually a guarantee that the software he needs is developed and that it is shaped as close to what he wants as possible.

4. Responding to change over following a plan

Plans are used to calm us down, to give us a direction to follow and, as we go, to let us know where we are by comparing reality with the plan. But we need to be able to readapt the plan quickly if the direction of the software being built has changed. Following a rigid plan loses its calming purpose, for both developers and customer, when the plan no longer reflects reality.

The principles

The principles of the agile manifesto explain the practical side of the manifesto. They show which principles are followed in all agile methodologies and represent a more pragmatic approach to what must be done to become agile.

1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity–the art of maximizing the amount of work not done–is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

The agile alliance

The agile alliance defines itself as:

“a non-profit organization that supports individuals and organizations who use agile approaches to develop software. Driven by the simple priorities articulated in the agile manifesto, agile development approaches deliver value to organizations and end users faster and with higher quality”.

2.3 Agile methodologies, practices, properties and tools

There are, as Martin Fowler said, many “flavors” of agile methodologies:
-XP – Extreme Programming
-SCRUM
-Crystal Family
-Lean Software Development
-Adaptive Software Development
-AMDD – Agile Modeling Driven Development
-FDD – Feature Driven Development

All agile methodologies are based on a set of rules, values, principles and practices specific to each of them, or that have been influenced or evolved with practices from other agile methodologies. Each methodology has a set of “must do” and a set of “could try” practices and tools, thus showing what the core of the methodology is and how it could be customized and adapted.

Most of the authors invite practitioners to use practices from the other methodologies, when it comes to an area that is not specifically covered by the methodology they propose, or where they leave much flexibility.

SCRUM and XP dominate the market at this time, being the best known and the most used, but it is important to have an overview of the entire range of agile methodologies. Mixing practices from several agile methodologies can better fit one’s needs, making the process more customized and more efficient.

2.4 XP

Extreme programming is probably the best known (and most controversial) agile methodology. It was defined by Kent Beck in 1999 in “Extreme Programming Explained: Embrace Change” (first edition), after being shaped together with Ron Jeffries, Ward Cunningham, Martin Fowler and Robert C. Martin throughout the late 90’s, especially in a famous Chrysler project called C3, where its foundations were put into practice.

Extreme programming is based on a set of values, principles and practices that are recommended to be followed, covering all aspects from communication to organization, quality control and coding. Of all known software disciplines, it is the methodology that puts the strongest emphasis on testing, especially automated testing.

Values, Variables, Principles and Practices

XP has 5 values, on which it is based: communication, simplicity, feedback, courage and respect.

The most important practices of XP are:
– small releases
– the planning game
– refactoring
– test driven development
– pair programming
– sustainable pace
– collective code ownership
– coding standard
– simple design
– metaphor
– continuous integration
– on-site customer

The key principles on which XP is based are: rapid feedback, assume simplicity, incremental change, embrace change and quality work.

How do people work in XP?

In XP, projects are developed by collocated teams, usually of 4-10 developers, delivering the work to the client or onto the market every 2 to 4 months; these timed units are called releases. Each release is divided into a set of iterations, each lasting from one to four weeks (2 and 3 week iterations are the most common), at the end of which a part of the entire product is delivered. The end result of such an iteration should be a high quality piece of software that could potentially be deployed into a production environment.

In XP, the client has to be collocated with the team, to be able to answer questions and “steer” the direction of the project at all times. The client is called “the customer” and the technique described above is called “on-site customer”. As a consequence of this collocation, feedback is very fast, so the project doesn’t derail, and the need for written functional specifications is less obvious. The unit for gathering requirements from the client, also used later in planning, is the user story, which usually represents a valuable feature for the client. It is not written in much detail and is most often very brief, acting more like a reminder for the customers and programmers to talk about it, so that they can estimate and implement it.

In XP, software is developed in cycles: every few months a release is planned and then built through several succeeding iterations; the product developed so far is then deployed to the customer, finalizing one cycle and starting another if necessary. At the beginning, the project is planned for the next release (2-4 months in advance). The user stories are written with the customer and then estimated by the programmers, providing the client with the “cost” of each piece of functionality. Based on these costs, the customer sets the priorities, putting the most important stories first, so that the most valuable features for the client are developed and delivered fastest. After being estimated and prioritized, the user stories in the plan are divided between the iterations of the planned release.

At the beginning of each iteration or release, the customer can change the plan and readapt it to his needs, re-prioritizing the list of user stories and changing direction by bringing forward user stories from other iterations, adding new ones or dropping some of them; the programmers then re-estimate the work and the plan is redefined.

Going deeper into the details of a user story is done through verbal communication between programmers and the customer. Programmers estimate the user stories by dividing them into programming tasks. The customer writes acceptance tests, or customer tests as they are called in XP, which demonstrate, by passing, that what was requested was really implemented. Many times these acceptance tests become automated tests.

Once the iteration has been planned, the programmers divide the user stories or tasks among themselves and then write the code in pairs of two (pair programming), writing automated tests for anything that could possibly break, integrating the pieces of code that each pair delivers, and building the system and running the test suite automatically as often as possible. The code is written following a coding standard document, and the pairs are changed often, so that each programmer works on more parts of the system and pairs with more colleagues throughout the project, thus knowing better what is happening in it.
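The test-first half of this practice can be sketched in a few lines. The `Money` class and its test are invented for illustration; the point is only that the automated test exists alongside the code and runs on every integration:

```python
# A minimal test-first sketch: an automated test is written for
# "anything that could possibly break", and the code is grown to
# keep it passing. Money/add are hypothetical names.
import unittest

class Money:
    def __init__(self, amount):
        self.amount = amount
    def add(self, other):
        return Money(self.amount + other.amount)

class MoneyTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(Money(5).add(Money(7)).amount, 12)

# In XP this suite runs at every integration; here we run it directly.
suite = unittest.TestLoader().loadTestsFromTestCase(MoneyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

When the whole suite runs on every build, a change that breaks existing behavior is detected within minutes rather than weeks.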

In XP, any programmer can change any piece of code if he needs to. There is no individual code ownership; all the developers own all the code. This technique is called collective ownership.

The architecture or design of the code is not built up front as in traditional software engineering models, although risks are evaluated (especially by building prototypes called spikes, and not necessarily at the beginning). Instead, the design emerges from the code, which is continually refactored to be the simplest possible (simplicity being one of XP’s values), applying the YAGNI (“You Ain’t Gonna Need It”) technique: focusing on what the code needs to deliver now rather than on what could be needed in the future. When changes are needed, the code is refactored to accommodate them. Refactoring is easy, first because the extensive automated test suites can instantly tell whether a change has broken existing functionality, and second because rotating the programmers in the pairs gives all of them better knowledge of the whole system.
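A tiny, invented example shows YAGNI and refactoring working together: write only what is needed now, then reshape the code when the real need arrives, with the old test pinning down existing behavior:

```python
# A hypothetical YAGNI illustration. First, only what is needed
# today: a flat shipping fee, with no speculative parameters.
def shipping_fee(order_total):
    return 5.0

# Later, when weight-based pricing is actually requested, the code
# is refactored; the old behaviour (weight 0) must stay unchanged,
# which the regression test below verifies.
def shipping_fee_v2(order_total, weight_kg=0.0):
    return 5.0 + 0.5 * weight_kg

assert shipping_fee(100) == shipping_fee_v2(100)  # old behaviour preserved
```

The speculative version (weight, currency, carrier flags added "just in case") would have been more code to maintain for a need that might never have materialized.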

A special case is when a customer wants software with a deadline that cannot be negotiated, but the amount of features he wants clearly needs more time than the deadline allows. In such a scenario, the customer and the developers sit together: they first list the user stories the customer wants, and the developers estimate the cost of each. The customer is then asked to choose from that list enough user stories to fill the time until the first deadline. The rest of the user stories will be developed in another release, after the first has been delivered. This special case illustrates very well the practice called negotiating scope.

XP (like other methodologies) strongly believes in sustainable pace: projects are developed at a sustainable, constant rhythm of work, doing as much as possible in 40-hour weeks, without lows and highs in the number of hours worked throughout the project. This can be achieved by applying the practices described above.


SCRUM is an iterative and incremental methodology that was developed, starting from project management problems, by three people: Ken Schwaber, Jeff Sutherland and Mike Beedle. Since 1993 it has proven a great way to continuously produce successful software, in small and large projects, at a wide range of companies.

SCRUM is based on 5 values: commitment, focus, openness, respect and courage.

How does a SCRUM team work?

SCRUM teams and key roles

A SCRUM team usually consists of four to seven developers. They all work together, self-organized, without any intervention from upper management on individual responsibilities (who does what, who designs, who tests); all these activities are decided and performed inside the team. The developers have no fancy job titles like architect, tester or programmer.

A SCRUM master is the equivalent of a project manager in traditional software development processes, but the way he does his job is very different. He is more a leader than a manager, letting the team self-organize, his primary goal being to remove whatever obstacles the team encounters when trying to deliver valuable software to the client. The SCRUM master serves the team rather than imposing directions on it.

In SCRUM, the role of the client or customer is called the product owner. He works with the team on defining and refining requirements, which are kept in a product backlog, each feature or item in it being called a backlog item. His job is to produce the backlog items and to prioritize them, so that the team delivers the highest value first.

Planning sprints

The product backlog is the master list of all functionality desired in the product.

[Mike Cohn, 2004]

A SCRUM project starts by gathering all the important features the product owner thinks will be needed in the product and putting them in the product backlog. Unlike traditional requirements gathering techniques, it is not important to get all the features into the product backlog at the beginning, because the backlog can be added to, updated and re-prioritized in the future, as the product owner starts to see the product and learns more about it, while the developers also learn more about the business. This allows the customer to drive the product according to the real needs he has at a given time. It is important, however, to get enough features into the backlog at first for the product owner to be able to prioritize the work and for the developers to start building in the first monthly iteration, called a sprint.
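The key property described above, that the backlog is a living, re-sortable list rather than a frozen specification, can be sketched as follows (class and item names are invented):

```python
# A hypothetical sketch of a SCRUM product backlog: items can be
# added, updated and re-prioritized at any time, so the list never
# needs to be complete up front.
class ProductBacklog:
    def __init__(self):
        self.items = []          # list of (priority, feature)

    def add(self, feature, priority):
        self.items.append((priority, feature))
        self.items.sort()        # keep highest value (priority 1) on top

    def reprioritize(self, feature, new_priority):
        self.items = [(p, f) for p, f in self.items if f != feature]
        self.add(feature, new_priority)

backlog = ProductBacklog()
backlog.add("case search", 2)
backlog.add("intake form", 1)
backlog.reprioritize("case search", 1)   # the owner changes his mind
```

Every sprint planning meeting then simply reads the current top of this list, whatever it has become since the project started.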

Projects are developed in SCRUM using one-month iterations called sprints. At the end of each sprint a potentially shippable product is delivered and presented to the product owner. Then another sprint is planned, built and delivered in the next 30 days, and so on until the project is finished.

Each sprint starts with a sprint planning meeting in which all the parties involved (the product owner, the developers and the SCRUM master) participate; other interested parties, like management or customer representatives, can also be present. Such a meeting usually lasts up to an entire day and is divided into two parts.

In part one of the planning meeting, the product owner describes the most valuable items among the remaining features in the product backlog. The features are described in more detail here so that the development team can estimate them more easily later. When the team needs something cleared up, this is done through questions and direct conversation with the product owner. Not all remaining features need to be described, only enough of the product backlog to possibly fit in this sprint, choosing the most valuable first. The remaining items will be discussed in future sprint planning meetings.

Through this activity, the team and the product owner define where they want to be when the sprint is finished. Mike Cohn refers to this as defining the sprint goal. This is the expected result, which will be compared at the end of the iteration with the actual one.

Part two of the sprint planning meeting lets the team see how much of the product backlog they can build in this sprint. The team discusses what they have heard, estimating how much effort each item will need and how many items fit into the sprint. Since the product backlog is sorted by the product owner in descending order of priority, the team can draw a line under the last item that can be included in the sprint. All these items are then moved to the sprint backlog in order to be built, with the product owner approving the list. In some cases the list can be negotiated, as some items may be more costly (in time) than others, and based on this the product owner can re-prioritize what needs to be done.
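"Drawing the line" is essentially a cut through the prioritized list at the team's capacity, which can be sketched like this (features, estimates and capacity are invented numbers):

```python
# A sketch of "drawing the line": the team walks the prioritized
# backlog from the top and includes items until the sprint capacity
# (here in estimated days) is exhausted.
def draw_the_line(prioritized_items, capacity_days):
    """prioritized_items: list of (feature, estimate), highest value first."""
    sprint_backlog, used = [], 0.0
    for feature, estimate in prioritized_items:
        if used + estimate > capacity_days:
            break                      # the line is drawn here
        sprint_backlog.append(feature)
        used += estimate
    return sprint_backlog

chosen = draw_the_line([("intake form", 8), ("case search", 7),
                        ("report export", 9)], capacity_days=18)
# chosen → ["intake form", "case search"]
```

In practice the cut is then negotiated with the product owner, who may swap a costly item below the line for cheaper ones above it.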

After the product backlog items are added to the sprint backlog, the list is sealed. Only in exceptional cases can it be modified or items added to it, and only with the explicit agreement of the team.

The sprint backlog differs from the product backlog in that, after the items have been moved over, the sprint backlog items can be divided into programming tasks, which are also items. The product backlog needs to be understood by the customer, but the sprint backlog can and should include tasks that only the programmers understand and that help them organize their work better.

Daily SCRUM meetings

At the beginning of each day, the team gets together for 15-30 minutes to discuss the status of the project. This meeting cannot be derailed in any way and is usually held with everyone standing in a circle, each team member being asked the following questions by the SCRUM master:
– What did you do yesterday?
– What will you do until tomorrow’s daily SCRUM?
– What prevents you from doing your work?

The daily meeting’s main purpose is for everyone to know where the project stands and for each member to commit to something in front of the others, not just the SCRUM master. The SCRUM master must do whatever is possible to eliminate the obstacles raised by the answers to the third question.

The daily meetings can be attended by other stakeholders, such as managers or even the CEO, to see progress; however, they are not allowed to speak, only to listen. This is also known as the “pigs and chickens” rule, with the team being the pigs and the management the chickens, inspired by a short but very relevant story:

A pig and a chicken discussed the name of their new restaurant. The chicken suggested Ham n’ Eggs. “No thanks,” said the pig, “I’d be committed, but you’d only be involved!”

[Larman, 2005]


SCRUM does not impose strict engineering practices as XP does, but it emphasizes the need for quality work, automated tests and daily builds of the system. Many teams use TDD, continuous integration, and even user stories and other XP practices with great success, mixing agile methodologies; these methodologies have evolved together (XP’s planning game being influenced by SCRUM), and very successful projects have been developed that way.

Product presentation meeting or sprint review meeting

When a sprint has finished (at the end of the 30 days), a sprint review meeting is held to compare the expected result with what was actually built. The entire team, the SCRUM master and the product owner are present, along with other stakeholders. The meeting is very informal; PowerPoint slides and presentations are strictly prohibited, the focus being on showing progress by showing the software, going through all the backlog items newly developed in this sprint. After all, what better way to show progress than the product itself?

An emphasis is also put on getting feedback after the product demo. Craig Larman says:

Feedback and brainstorming on future directions is encouraged, but no commitments are made during the meeting. Later at the next Sprint Planning meeting, stakeholders and the team make commitments

[Larman, 2005]

Doing it all over again

After the sprint has finished and the actual result has been presented to the product owner and the other interested parties, it is time to start over, the next sprint beginning with its own sprint planning meeting. This is repeated until the customer considers the product finished.

Scaling SCRUM

Usually a SCRUM team consists of four to seven members. This raises the question of how SCRUM works when more than seven developers are needed. The answer is to have more SCRUM teams, with daily meetings also held across teams, each team sending a representative to these meetings.

Projects with several hundred developers have been built using SCRUM, from small projects to large multi-country teams with 330 to 600 developers involved, such as the IDX project [Larman, 2005], [Cohn, 2004].

SCRUM is a project management method that exceeds the borders of software development, being used in other domains as well. It is a “simple process for managing complex projects” in general, very well suited to software.

2.6 Crystal Clear


Since 1990, methodology anthropologist Alistair Cockburn has studied, all over the world, how successful teams of developers build and deliver software, and has collected his findings in a methodology family called Crystal. The family comprises several methodologies, divided by the size of the team and by the criticality of the project, varying from the lightest, Crystal Clear, to heavier methodologies that get darker colors (Yellow, Orange, Red, etc.). The author believes that heavier processes are needed for larger projects, starting from the concept that no single methodology can cover all aspects of software development.

What is Crystal Clear?

CC aims to be a simple and tolerant set of rules that puts the project on the safety zone. It is, if you will, the least methodology that could possibly work, because it shows much tolerance (Cockburn, 2005)

How do Crystal Clear teams work?

Properties, Tools and Practices

CC is based on a set of seven properties which deeply influence development: frequent delivery, reflective improvement, close communication, personal safety, focus, easy access to expert users, and a technical environment with automated tests, configuration management and frequent integration; only the first three are a must.

Following CC, developers are organized in teams of 2-8 members, with at least one level-3 developer, usually sitting in the same room, facing each other, in order to facilitate verbal communication as much as possible. They sit side by side and collaborate often, a technique called side-by-side programming. This is very similar to XP’s pair programming, but differs from it in that each team member has his own computer and can develop alone, or with the colleague on his right or left.

Development is divided into periods of 1-4 months called deliveries, at the end of which the software is deployed to the end users. Between deliveries, at regular intervals called iterations, the customer representatives are shown the working product and their feedback is gathered.

Requirements are obtained from the customer as “light” use cases, gathered and prioritized together with the customer; a delivery plan is then built for the next delivery.

CC teams constantly look back and improve the way they work, both at the end of an iteration and in the middle of it. This technique is put into practice by dividing actions into three lists: things that worked and should be kept, things that didn’t work and should be dropped, and things that could be tried next.

The design activity, continuous throughout the project, is done on a whiteboard; if there is a need to keep a record of it, it is photographed and included in a document straight from the whiteboard.

The code is deliberately architected so that it can support a fully automated regression testing suite. These regression tests make it possible to do lots of changes and to adapt easily to shifts in the direction of the product, but the coding style is in the hands of the developers, who decide what best suits them. The parts of the system are also integrated as often as possible, running the whole regression suite each time.

Any team member can always feel safe saying what he thinks, like telling a manager that a schedule is unrealistic or a colleague that his design needs rework. Alistair Cockburn calls this the first step toward trust.

CC teams always focus on the most important thing that needs to be taken care of now, and always have easy access to expert users or the client, who can be asked to clear up misunderstandings, to plan the next deliverable, or to give feedback.

Besides the seven properties, CC also comes with a series of recommended strategies and techniques for delivering value to the client fast. Some of them are very similar to, inspired by, or have evolved together with practices from other agile methodologies.

There are four strategies: exploratory 360°, walking skeleton, incremental rearchitecture and information radiators; and nine techniques: methodology shaping, reflection workshop, blitz planning, Delphi estimation using expertise rankings, daily stand-up meetings, essential interaction design, process miniature, side-by-side programming and burn charts.
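The last technique, burn charts, reduces to a simple series: the remaining estimated work recorded over time. A sketch, with invented numbers:

```python
# A hypothetical burn-down sketch: remaining estimated work is
# recorded after each day; the resulting series is what a burn
# chart plots (often on a whiteboard or wall as an information
# radiator).
def burn_down(total_work, completed_per_day):
    remaining, series = total_work, [total_work]
    for done in completed_per_day:
        remaining = max(0, remaining - done)
        series.append(remaining)
    return series

chart = burn_down(40, [5, 3, 6, 4])
# chart → [40, 35, 32, 26, 22]
```

The slope of the plotted series gives anyone walking past the room an honest, at-a-glance answer to "will we finish on time?".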

2.7 Lean Software Development

Lean software development is not a strict day-to-day methodology, but rather a set of 7 principles and 22 tools that can be used to successfully manage a project. Lean software development complements the practices of other agile methodologies with lessons learned in other industries that are more advanced and mature than software development. Jim Highsmith said that lean software development presents a toolkit for project managers, team leaders, and technology managers who want to add value rather than become roadblocks to their project teams.


Long before the software industry, the auto industry gave up the waterfall model in favor of a more lightweight but efficient model that produces results faster and in a concurrent manner. Mary and Tom Poppendieck published “Lean Software Development: An Agile Toolkit” in 2003, doing a wonderful job of adapting the lean principles used in manufacturing to software development. They have proven these practices by developing and managing software projects, demonstrating that the mistake was not taking inspiration from another industry (some advocates of agile methodologies have claimed that the inspiration taken in the ’60s and ’70s from mechanical and construction engineering caused many of software’s problems), but rather failing to evolve along with manufacturing and the other industries when they dropped the waterfall model.

Mary and Tom also showed that some previous attempts to adapt lean principles to software failed because they started out wrong, without realizing the difference between development and production.

The 7 principles

Eliminate waste – Taiichi Ohno (the father of the Toyota Production System) said that anything that does not add value to a product, as perceived by the customer, is waste. All agile methodologies emerged as a response to chaos, where the amount of wrongly used resources generates lots of overhead and waste; thus a method to identify and cut unproductive activities is considered, for software development as well, a very important tool.

In software development, the following are considered waste: partially done work, extra processes, extra features (never needed or used), task switching (people learning one system, then moving to another where another learning curve is needed), waiting, motion and defects.

Value stream mapping is a chart that lets you see, at a glance, the periods that add value and the periods where time is wasted. Put into practice, this is a tremendous tool, often revealing huge gaps between value and waste.
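Stripped of its chart form, a value stream map reduces to simple arithmetic over a timeline of activities. The sample activities and durations below are invented:

```python
# A sketch of value stream mapping reduced to arithmetic: given a
# timeline of activities tagged as value-adding or waste, compute
# the value-added ratio that the chart makes visible.
def value_added_ratio(activities):
    """activities: list of (hours, adds_value: bool)."""
    total = sum(h for h, _ in activities)
    value = sum(h for h, adds_value in activities if adds_value)
    return value / total

timeline = [(2, True),   # coding a requested feature
            (6, False),  # waiting for sign-off
            (1, True),   # testing
            (8, False)]  # document hand-offs
ratio = value_added_ratio(timeline)   # ≈ 0.18, i.e. ~18% value-added time
```

In traditional processes such a map typically shows that most of the elapsed time is spent waiting between activities, not working, which is exactly where lean thinking directs the first cuts.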

[Value stream mapping in traditional processes, from “Lean Software Development: An Agile Toolkit”]

Amplify learning

Software development is a continuous learning and adapting activity between the customer and the developers, who learn from each other, communicate better, and thus build better and more useful software. Feedback (tool 3) amplifies this learning process.

Synchronization (tool 5) allows concurrent development. Good synchronization is achieved with the customer through iterations (tool 4), and with colleagues mostly through frequent code integration and testing sessions. Synchronization works better if we start from the point of interaction first, from the interfaces of the different components.

Set-based development (tool 6) is one of the greatest lessons we can learn about communication. Stating the constraints of a system that needs to be built allows its developer to come up with more choices, with a set of solutions. Combined with further requirements, the solution that passes all constraints will emerge from that set. If we ask specifically for one choice, we might miss several alternatives and find ourselves stuck.
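The idea of narrowing a set of candidates by constraints, rather than committing to one choice up front, can be sketched directly. The candidate designs and constraints below are invented for illustration:

```python
# A sketch of set-based thinking: keep a set of candidate solutions
# and let each new constraint narrow it; the surviving option
# satisfies all constraints at once.
candidates = [
    {"name": "in-memory cache", "cost": 1, "durable": False},
    {"name": "sql database",    "cost": 3, "durable": True},
    {"name": "document store",  "cost": 2, "durable": True},
]

constraints = [
    lambda c: c["durable"],      # data must survive a restart
    lambda c: c["cost"] <= 2,    # budget limit
]

surviving = [c for c in candidates if all(check(c) for check in constraints)]
# surviving → [{"name": "document store", ...}]
```

Had we asked up front for "a cheap solution", the in-memory cache would have been picked and the durability requirement would have forced a costly rework later.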

Decide as late as possible

Deciding for the future has been proven wrong, as it can never predict changes and it makes the cost of change enormous. Deciding later, based on more information, and keeping your options open (options thinking – tool 7) is a better approach. The last responsible moment (tool 8) enables you to make responsible decisions (making decisions – tool 9) late, but not too late.

Deliver as fast as possible

Until recently rapid software development has not been valued; taking a careful, don’t make any mistakes approach has seemed to be more important

[Poppendieck and Poppendieck, 2003]

Lean manufacturing and agile software development have proven that fast delivery is possible and leads to more satisfying results for the customer. Recommended tools inspired by lean manufacturing are: pull systems (tool 10), queuing theory (tool 11) and cost of delay (tool 12).

Empower the team

Highly motivated, self-organized teams that have the power to make decisions and put their ideas into practice, led by people with expertise, have always been a recipe for success (self-determination – tool 13, motivation – tool 14, leadership – tool 15, expertise – tool 16).

Build integrity in

Perceived and conceptual integrity are what we understand as the external and internal quality of a software product. If the integrity perceived by the customer (perceived integrity – tool 17) is good, then the software that was asked for was built and is of high quality; if the conceptual integrity (tool 18) is good, then the product’s internal quality is good, meaning that it can be extended, maintained and kept in use longer. The two fundamental practices at the base of integrity are testing (tool 20) and refactoring (tool 19).

See the whole

Making the parts of a whole at the highest quality does not necessarily mean that, when assembled, the resulting system is of the highest quality and as required. Focusing on the whole system helps make the parts better suited to work with each other and to compose a better system. The tools mentioned for achieving this are measurements (tool 21) and contracts (tool 22).

Lean software development’s principles and tools should be in every agilist’s pocket, no matter what methodology they follow, as they represent invaluable advice and proven practices for enhancing efficiency.

2.8 AMDD – Agile Model Driven Development

AMDD is a lightweight approach for enhancing the modeling and documentation efforts of other agile software processes such as XP and RUP [Ambler 2005], being extremely valuable in situations where written documentation and modeling are required. It is based on a set of principles, values and practices which explain and enhance the role of modeling and documentation in agile methodologies, following the sufficiency principle: leaving unnecessary detail out.

Its author, Scott Ambler, a very active member of the agile community with great contributions especially on agile modeling, agile databases and database refactoring techniques, emphasizes in AMDD that all agile methodologies include modeling, from simple user stories and CRC cards to automated tests as primary artifacts: acceptance tests as primary requirements artifacts and unit tests as detailed design artifacts.

2.9 DSDM – Dynamic Systems Development Method

DSDM is an agile methodology, developed since 1994 and continuously improved by a non-profit organization called the DSDM Consortium.

DSDM breaks with traditional software methodologies, looking at the work from an entirely different perspective.

DSDM is based on the following core concepts: active user involvement, a team empowered to make decisions, frequent releases, iterative development driven by user feedback, reversible changes, requirements defined early at a high level, integrated testing, essential communication and collaboration, the 20%/80% rule, and the goal of fitting the business purpose.

2.10 FDD – Feature Driven Development

FDD is an agile methodology, developed by Jeff DeLuca and Peter Coad since around 1997, that puts a bigger emphasis on modeling and offers good predictability in projects with more stable requirements. It consists of two main stages: discovering the features to implement, and implementing them one by one.

The first stage, discovering the features, is done with the customers on site and models the features to be implemented using UML.

The second stage develops the features in 1-3 week iterations, into which the features to be done are packaged. At the end of each iteration, the software produced is considered shippable.

2.11 Adaptive Software Development

Jim Highsmith is one of the most respected book authors and lightweight methodology advocates in the agile community. He wrote a series of books on agile software development and agile software management, defining the Adaptive methodology, in which he promotes the need for continuous evolution and adaptation of the process used. He states that software should be developed incrementally and iteratively, through repetitive cycles that include three stages: speculate, collaborate and learn.

CHAPTER 1: Problems and causes with the way we develop software

Throughout this chapter I will try to show briefly how the software industry went from one extreme to another in the processes used to develop software. If at first chaos was the main problem, the industry then went straight to the other extreme, adopting the extremely heavyweight “waterfall” process, under which software was very hard to develop because there were too many rules. As an example, at the beginning of Mary and Tom Poppendieck’s book “Lean Software Development: An Agile Toolkit” the following story is presented:

Jim Johnson, chairman of the Standish Group, told an attentive audience the story of how Florida and Minnesota each developed its Statewide Automated Child Welfare Information System (SACWIS). In Florida, system development started in 1990 and was estimated to take 8 years and to cost $32 million. As Johnson spoke in 2002, Florida had spent $170 million and the system was estimated to be completed in 2005 at the cost of $230 million. Meanwhile, Minnesota began developing essentially the same system in 1999 and completed it in early 2000 at the cost of $1.1 million. That’s a productivity difference of over 200:1. Johnson credited Minnesota’s success to a standardized infrastructure, minimized requirements, and a team of eight capable people.

In many organizations, the way projects are organized and managed can very well determine the end result. Lack of good organization is the “cancer” of any business, sooner or later ending in failure. So what can we do as software developers?


At the beginning of the software industry, due to lack of guidance and maturity, most projects were developed without any clear process. The difference was made by the people who developed the projects and by their decisions, without any outside guidance.

As the software industry developed, the number and complexity of projects increased, and soon enough the need for a software process to follow emerged. Many projects were developed in completely chaotic conditions, and this made most of them end up as failures. Project managers wanted more visibility into their projects, and this need for predictability made them look at other engineering disciplines and adapt the management practices found there.

At the time, most industries had already developed and used, for decades, predictable processes where everything was carefully planned from the beginning and then designed, implemented and tested before being deployed.

In 1970, W. Royce published a paper describing the so-called “waterfall model”, in which a project is developed in several stages in a very clear order, no stage starting before the completion of the former. The waterfall model soon began to be adopted by many software organizations, being a massive step forward from the chaos that used to characterize the software industry.

The waterfall model

The waterfall model presumes four consecutive phases:
1. requirements & analysis,
2. design,
3. implementation
4. testing.

The process takes its followers from one phase to the next sequentially, each phase needing to be finished before the next can be started.

The first phase presumes obtaining the requirements from the customers and is finished only after these requirements are analyzed and approved by the client, usually by signing off a requirements specification document that needs to be carved in stone before the next phase can start: designing the software. This phase presumes that the whole system is designed and architected in the smallest detail before it goes into production. This approach is often called big design up front, and it needs to find all the risks involved and solve all the problems before the process can go to the next phase. The design is a very detailed plan, for the coders, of how the requirements should be implemented.

Following the detailed design, the programmers implement the software. Usually the implementation is done as many different components that are integrated after they have been built. When all the components have been built and integrated into a whole, the next phase can start: testing and debugging.

The obtained system is now thoroughly tested and the bugs found are fixed, so the software can be released. When this phase ends the software is ready, and it can be shipped and/or deployed to the client.

By the early ’80s, many software developers had realized that waterfall-based processes have some very serious problems, especially when predictability is hard to achieve and lots of changes are required late in development, changes that were considered extremely expensive.

Risks and smells associated with the waterfall model

The waterfall model seems very good, predictive and easy to grasp and follow, but practice has proved that it has some serious risks associated with it. The fundamental idea behind waterfall is that finding flaws earlier in the process is more economical and thus less risky. Finding a design flaw in the design phase is the least expensive; finding it in a later phase like implementation or testing can generate serious problems and derail the project. Many times this is very hard, even for very experienced architects such as Martin Fowler, who said:

Even skilled designers such as I consider myself to be, are often surprised when we turn such designs into software

[Martin Fowler, 2005].

Big design up front is driven by the enthusiasm that risks can be hunted down and eliminated early and a very good design produced. But …

…the designs produced, can in many cases look very good on paper, or as UML designs, but be seriously flawed when you actually have to program the thing

[Martin Fowler, 2005].

The most serious risk of the waterfall model is that it is very change-averse. Since it is based on the presumption that everything can be predicted, changes can be very hard to make once the process has started, and extremely costly in later phases.

In many cases, there is a lot of overhead associated with the process. Usually lots of documentation is produced: to plan, to track, to demonstrate progress, to test, and so on. In many cases producing the documents becomes a purpose in itself, with people producing very professional documents, very detailed plans and very eye-catching charts that have a lot of time invested in them but no direct business value for the client, or value below their cost.

In a business process there are documents, reports and charts that can be very helpful for organizing and managing the work, but it is also very important to constantly keep an eye on the amount of documentation, the time spent in meetings and, basically, anything else that neither has direct value for the client nor keeps the whole process together so that it can deliver running, tested features.

Being fundamentally a sequential process, which presumes that one phase must be completely finished before the next can be started, the waterfall model propagates delays from one phase to another, every delay adding to the overall length of the project.

Although the risks were being noticed from the late 70’s and early 80’s, in the mid 80’s the US Department of Defense adopted waterfall processes as its standard for developing software. The standard was later extended to NATO, and most western European armies adopted it. With the military making waterfall a standard, the entire industry was influenced, so waterfall-based processes found new heights of adoption and promotion.

The need to evolve

In the early ’90s, many developers independently started to develop new processes, as a response both to chaos and to the flaws of waterfall processes.

In their quest to find an answer to the problems associated with chaos and with the heavyweight processes used throughout the industry, these developers started to question the very foundation of the waterfall model: its inspiration from other industries. Martin Fowler, one of the most respected authors in the programming field, wrote in an article called “The New Methodology” []:

When you build a bridge, the cost of the design effort is about 10% of the job, with the rest being construction. In software the amount of the time spent in coding is much, much less. McConnell suggests that for a large project, only 15% of the project is code and unit test, an almost perfect reversal of the bridge building ratios. Even if you lump all testing as part of the construction, the design is still 50% of the work. This raises a very important question about the nature of design in software compared to its role in other branches of engineering.

On the other hand, Craig Larman says that software development is more like new product development, where there are lots of uncertainties, rather than product manufacturing, which is much more straightforward because it means doing something that has been done before.

Mary and Tom Poppendieck continue the idea that software is a development activity, very different from production environments in other industries, making the following comparison of development and production:

Think of development as creating a recipe and production as following the recipe. … Developing a recipe is a learning process involving trial and error. You would not expect an expert chef’s first attempt at a new dish to be the last attempt. In fact, the whole idea of developing a recipe is to try many variations on a theme and discover the best dish

The idea that software is, by its nature, more a continuous learning and adapting process than a predictive one sits at the base of many new processes that started to be developed in the early 90’s and which, in 2001, were named agile.

On the other hand, some people coming from manufacturing showed that it was not the inspiration from other industries that was wrong, but the fact that the software industry did not see that those other industries had dropped waterfall-based processes 10–20 years earlier, in favor of concurrent and lean development. Mary Poppendieck states:

I had been out of the software development industry for a half dozen years, and I was appalled at what I found when I returned. Between PMI (Project Management Institute) and CMM (Capability Maturity Model) certification programs, a heavy emphasis on process definition and detailed, front-end planning seemed to dominate everyone’s perception of best practices. Worse, the justification for these approaches was the lean manufacturing movement I knew so well.

I was keenly aware that the success of lean manufacturing rested on a deep understanding of what creates value, why rapid flow is essential, and how to release the brainpower of the people doing the work. In the prevailing focus on process and planning I detected a devaluation of these key principles. I heard, for example, that detailed process definitions were needed so that “anyone can program,” while lean manufacturing focused on building skill in frontline people and having them define their own processes.

I heard that spending a lot of time and getting the requirements right upfront was the way to do things “right the first time.” I found this curious. I knew that the only way that my code would work the first time I tried to control a machine was to build a complete simulation program and test the code to death. I knew that every product that was delivered to our plant came with a complete set of tests, and “right the first time” meant passing each test every step of the way. You could be sure that next month a new gizmo or tape length would be needed by marketing, so the idea of freezing a product configuration before manufacturing was simply unheard of.

Even worse, it turned out that the 1970 paper by W. Royce, in which the waterfall model was first defined, was actually showing the risks associated with the model, and its author urged software developers toward a more iterative way of working, which he thought was better. Reading that paper as an endorsement of waterfall may be the biggest misunderstanding in software development.


Clearly, good software cannot be developed over the long term in chaotic conditions; at the same time, waterfall processes are too heavyweight, in many cases simply overkill. The new business world we live in, where dramatic market changes happen very fast, requires that software development come up with a new, evolved method, where software can be delivered faster, on less stable requirements, and where the direction can be changed, if needed, in the middle of the way.

Agile methodologies have the same goal as all the other processes: producing software. However, the problems they aim to solve are very different from those of traditional processes: fast delivery in changing, unpredictable conditions, adapting to changes rather than trying to predict the future.

The agile mini book – a 6 week series

About two years ago, I spoke with someone about writing an agile mini book. I wrote the book, but things changed and it wasn’t published. Now I have decided to publish it, chapter by chapter, week by week, so for the next 6 weeks this is what can be expected:

CH1: Problems and causes
CH2: Agile methodologies
2.1 Introduction
2.2 Agile manifesto and principles
2.3 Agile methodologies, practices, properties and tools
2.4 XP
2.6 Crystal Clear
2.7 Lean Software Development
2.8 AMDD
2.9 DSDM
2.10 FDD
2.11 Adaptive Software Development
CH3: Communicating
3.1 Introduction
3.2 Communicating with the customer
3.2.1 Requirements
3.2.2 Set based development
3.2.3 Feedback
3.2.4 Showing progress
3.3 Communicating inside the team
3.4 Improving communication: quality vs quantity
3.5 Collaborating
CH4: Learning and adapting
4.1 Introduction
4.2 Circle of life – learning and adapting
4.3 Reflective improvement
CH5: Managing and organizing
5.1 A small agile process practice sample
5.2 Iterative and Incremental process
5.3 Adaptive planning strategy
5.4 Evolutionary design strategy
5.5 Fast delivery strategy
5.6 People first strategy

CH6: Quality and testing
6.1 Internal and external quality of a system
6.2 Automated tests and manual testing
6.3 Test Driven Development

I hope you’ll enjoy reading the 80 pages. Each chapter will also be available for download, free, as a PDF.


Lucene indexes as agile databases

Download Source code


In product development, as opposed to bespoke project development where each client gets its own software, you deliver the same software to every client. This has advantages: a single development stream with multiple sales. In reality, though, each client will want to customize its own product. Each will divide its products differently and will have different fields, and if your product is not flexible enough to allow these policies, they will reject it.

Relational databases are not agile enough

In an agile manner, when developing a software project, you plan for change. In Extreme Programming, they say a piece of code has good internal quality if it can be easily maintained, changed and debugged. In product development, the data store changes from one version to another, and this is often a big burden. Relational databases simply haven’t been designed to make structural change easy. Once an ALTER deletes a column, the entire application becomes unstable: you have to check every SELECT, UPDATE and INSERT statement to make sure it doesn’t crash. Migration scripts are hard to create and very error prone.
There are many workarounds, but none of them is trivial work.

ActiveDocument: Indexes as truly dynamic databases

Indexes were designed with one thing in mind: searching large repositories extremely fast. Look at Google: it indexes billions of web pages and it is FAST.

Could an index be structured better than just holding large quantities of text documents, for instance having types and properties, relations between entities, pretty much what a common relational database would have, and beyond that a truly flexible infrastructure, where types can be created, modified and destroyed on the fly, and where their properties can be added, removed or renamed without fear? Something that in C# would look like this:
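A minimal sketch of how such an API might look. The Store and Document types and all their members below are illustrative assumptions for the scenario described, not the actual ActiveDocument API:

```csharp
// Hypothetical dynamic-document API (illustrative names): no schema,
// types and properties appear on the fly.
Store store = new Store("data/index");

Document laptop = store.New("Product");   // creates the "Product" type on the fly
laptop["Name"] = "Laptop";
laptop["Category"] = "Computers";         // only this instance has a Category
laptop.Save();

Document coffee = store.New("Product");
coffee["Name"] = "Coffee";                // no Category property - and that's fine
coffee.Save();

// the query still works, although the instances have different properties
foreach (Document product in store.Query("Product", "Name:*"))
    Console.WriteLine(product["Name"]);
```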

As can be easily seen, we create a type (Product) on the fly, with no need to define a table or any kind of structure, and then we add properties to the instances of this type, which can differ. The structure is actually defined and changed dynamically at runtime. The two product instances above both have a Name, but only one has a Category, and the query still works.

Ladies and gentlemen, this technology is possible. It is called ActiveDocument, and the sources can be downloaded from here.

Besides being able to store typed data and retrieve it fast and easily, we can also relate data:


Is it a breakthrough? Will this kill relational databases? Is this 100% safe? I have no idea. I am discovering the advantages and disadvantages as I go, and I am sure some others will certainly like to help me on this journey.

Pros and cons to using indexes instead of databases

Pros:
-they don’t break

Cons:
-no joins
-no transactions
-no foreign keys

Please feel free to add :)

Testing drag’n’drop with WatiN

Many of the recent Web 2.0 applications require drag and drop functionality. Sometimes it is useful, sometimes it is just marketing, but if you have to do it and at the same time you want to write all your code test first (TDD), a new problem arises: how do you write a functional test for the drag and drop?

Starting from a wonderful article on the ‘Z Bar Zone’ blog, I modified the approach to work with WatiN in C#. The code tests the wonderful script.aculo.us based drag’n’drop shop demo:

using System.Threading;
using mshtml;
using NUnit.Framework;
using WatiN.Core;

namespace TestDragDropWatiN
{
    [TestFixture]
    public class DragDropScriptaculousTests
    {
        IE ie = null;

        [Test]
        public void TestDragDrop()
        {
            ie = new IE("");

            //add some items to the cart
            int[] ids = new int[] { 1, 1, 2, 1, 2 };
            foreach (int id in ids)
            {
                string product = "product_" + id;
                RunScript("document.blah = document.createEventObject(); document.blah.button = 1");//create the event object
                RunScript("document.getElementById('" + product + "').fireEvent('onmousedown', document.blah)");//click on the product - fire mouse down on the product_id div
                RunScript("document.blah = document.createEventObject(); document.blah.clientX = document.getElementById('cart').offsetLeft; document.blah.clientY = document.getElementById('cart').offsetTop");//obtain the position of the cart
                RunScript("document.getElementById('" + product + "').fireEvent('onmousemove', document.blah)");//move the product div to the cart - fire onmousemove
                RunScript("document.getElementById('" + product + "').fireEvent('onmouseup', document.blah)");//drop - fire onmouseup
            }

            //remove some items from the cart
            string[] items = { "item_1_1", "item_2_0" };
            foreach (string item in items)
            {
                RunScript("document.blah = document.createEventObject(); document.blah.button = 1");//create the event object
                RunScript("document.getElementById('" + item + "').fireEvent('onmousedown', document.blah)");//click on the item in the shopping cart
                RunScript("document.blah = document.createEventObject(); document.blah.clientX = document.getElementById('wastebin').offsetLeft; document.blah.clientY = document.getElementById('wastebin').offsetTop");//obtain the position of the wastebin
                RunScript("document.getElementById('" + item + "').fireEvent('onmousemove', document.blah)");//move the item to the wastebin
                RunScript("document.getElementById('" + item + "').fireEvent('onmouseup', document.blah)");//drop - fire onmouseup
            }
        }

        private void RunScript(string js)
        {
            IHTMLWindow2 window = ((HTMLDocument)ie.HtmlDocument).parentWindow;
            window.execScript(js, "javascript");
        }
    }
}
Enjoy :)

Test First Web Applications: TDDing a Castle MonoRail application with C# and Selenium


Download Source code


TDD samples are mostly based on very simple unit tests. But how could we do something like building a web application test first?

What we need to do

Let’s say that we need to write, test first, the following feature for an application:

-manage users (add new, delete, edit user details, list all).

Each user will have a Full Name, a Username, a Password and an Email, all mandatory.

What is TDD

Following TDD steps:

1. write the test
2. make it fail
3. write the code to make the test succeed
4. refactor
5. go to 1

Our first test

The first test we could write would be the test for adding a new user. Test Driven Development is a design technique rather than a testing technique, because when writing the test we define how the code/page will work, designing it.

In order to be able to add a new user, we think we will need a form like this:

So for our functional test we need to open the add page (prepare stage), fill in the fields and save (action stage), and verify that the user was actually saved (verification stage). In order to do that, we would probably need to update our page and add a list of the users on the left side, where we can later check that our user exists after clicking save.

Selenium comes into action

But in order to do something like this, we need a tool that can actually perform these actions on our behalf, in a browser. For this task there is an excellent open source tool called Selenium, which allows this kind of web-based functional test. It also allows the tests to be written as simple HTML tests, with an interpreter that runs those actions for us:

The great news for people who want to integrate their tests into a continuous integration tool is that they can write the tests in their own preferred language, like C#, Java, VB.NET, Ruby or Python, using an extension of Selenium called Selenium RC.

Using Selenium RC, .NET version our test would look like:
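A sketch of such a test, assuming the Selenium RC .NET client driver; the application URL, element locators and field names are illustrative assumptions:

```csharp
using NUnit.Framework;
using Selenium;

[TestFixture]
public class ManageUsersTests
{
    private ISelenium selenium;

    [SetUp]
    public void SetUp()
    {
        // assumes the Selenium RC server is running locally on the default port
        selenium = new DefaultSelenium("localhost", 4444, "*iexplore",
                                       "http://localhost:8080/");
        selenium.Start();
    }

    [TearDown]
    public void TearDown()
    {
        selenium.Stop();
    }

    [Test]
    public void TestAddNewUser()
    {
        // prepare: open the add page
        selenium.Open("/users/list.rails");
        selenium.Click("link=Add new user");
        selenium.WaitForPageToLoad("30000");

        // action: fill in the fields and save (field ids are assumptions)
        selenium.Type("user.FullName", "John Doe");
        selenium.Type("user.Username", "jdoe");
        selenium.Type("user.Password", "secret");
        selenium.Type("user.Email", "jdoe@example.com");
        selenium.Click("btnSave");
        selenium.WaitForPageToLoad("30000");

        // verification: the new user shows up in the list
        Assert.IsTrue(selenium.IsTextPresent("John Doe"));
    }
}
```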

At this stage we haven’t written any code or page, so our test should fail. Let’s see…

First we start the Selenium RC server (a small Java server that handles the Selenium commands and transmits them to the browser):

java -jar selenium-server.jar

and running the test fails:

Yes, it does. This is a good sign, as it means our test fails when it has to; otherwise our test wouldn’t really test anything and would be worthless.

At step 3 of our TDD steps, we need to start writing the code until we make the test pass. So we create the UsersController and the view and run the tests:
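Just enough controller to make the page exist; a sketch, assuming MonoRail’s SmartDispatcherController base class:

```csharp
using Castle.MonoRail.Framework;

public class UsersController : SmartDispatcherController
{
    public void Add()
    {
        // nothing needed yet; MonoRail renders the add.vm view by convention
    }
}
```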

and we just create an empty add.vm and rerun the test:

Selenium.SeleniumException: ERROR: Element link=Add new user not found

at Selenium.HttpCommandProcessor.DoCommand(String command, String[] args)
at Selenium.DefaultSelenium.Click(String locator)
at MRProjectTest.Functionals.Selenium.ManageUsersTests.TestAddNewUser() in ManageUsersTests.cs:line 34

Now, since the error says that it couldn’t find the element on the page, we add the needed elements in add.vm:

and retest:

… error again, since clicking the button submits the form content to create.aspx, which is not implemented yet. So we add the code to save the data:

but wait, because we don’t have the User class, the list action or the database yet.

TDD-ing the layers under the presentation layer

But in order to build this code, we need to build it test first. Although in some cases this isn’t really necessary, since ActiveRecord is already quite well tested and this code is also covered by our functional test, we still do it, to show how it should be done in more complicated situations.

The test, which is not a functional test but an integration test (a unit test that also uses the database):

We test whether it fails. In fact, it doesn’t even compile, so the first thing to do is create a User class with empty methods, to make the code compile:

Now, we run the test:

Castle.ActiveRecord.Framework.ActiveRecordException: An ActiveRecord class (UserManagement.Model.User) was used but the framework seems not properly initialized. Did you forget about ActiveRecordStarter.Initialize() ?

at Castle.ActiveRecord.ActiveRecordBase.EnsureInitialized(Type type)
at Castle.ActiveRecord.ActiveRecordBase.Save(Object instance)
at Castle.ActiveRecord.ActiveRecordBase.Save()
at MRProjectTest.Database.UsersDataAccessTests.TestSaveNewUser() in UserDataAccessTest.cs:line 23

We do not have the User class registered with ActiveRecord, so we adapt our test:
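A sketch of how the adapted test setup and the attributed class could look, assuming NUnit 2.x and an ActiveRecord configuration read from the application config file; the property names are illustrative:

```csharp
using Castle.ActiveRecord;
using Castle.ActiveRecord.Framework.Config;
using NUnit.Framework;

[TestFixture]
public class UsersDataAccessTests
{
    [TestFixtureSetUp]
    public void Init()
    {
        // register the User type with ActiveRecord; the connection settings
        // are read from the application config file here
        ActiveRecordStarter.Initialize(
            ActiveRecordSectionHandler.Instance, typeof(User));
    }
}

// the User class, decorated with the appropriate ActiveRecord attributes
[ActiveRecord("Users")]
public class User : ActiveRecordBase
{
    private long id;
    private string fullName;

    public User() {}

    public User(string fullName)
    {
        this.fullName = fullName;
    }

    [PrimaryKey]
    public long Id
    {
        get { return id; }
        set { id = value; }
    }

    [Property]
    public string FullName
    {
        get { return fullName; }
        set { fullName = value; }
    }
}
```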

and also the User class, adding the appropriate ActiveRecord attributes and the constructors. Rerunning the test, it tells us that we do not have the corresponding database table. So we add the following line to the test:

ActiveRecordStarter.CreateSchema();//create the database schema

Running the test, we see that the database table was created, but we still have a problem:

System.NotImplementedException: todo

at UserManagement.Model.User.Find(Int64 id) in User.cs:line 72
at MRProjectTest.Database.UsersDataAccessTests.TestSaveNewUser() in UserDataAccessTest.cs:line 41

We finish implementing the Find method in User class:

public static User Find(long id)
{
    return (User) FindByPrimaryKey(typeof(User), id, false);
}

and finally we have a database test that works!

Top down TDD approach

Usually tests like this one aren’t really required, for the two reasons mentioned earlier, but it was done here so that the flow of test-first, vertical development for an n-tier application is understood.

Back to the functional test

Now that we know that the User class exists and the database access works, it is time to continue on the presentation layer.

We now implement the list action and view:

public void List()
{
    PropertyBag["users"] = User.FindAll();
}

and create a list.vm:

For the view we could have used a GridComponent. Running the test, we see that for the first time we have a UI test working :)

Edit functionality

Now we need to add the edit user functionality to our site. Basically, the functionality will be like this: on the list of users page, each user will have an Edit link which, when clicked, takes you to an editing form where the user details can be modified. When the form is saved, you are sent back to the list. Now let’s write the test:
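A sketch of how the edit test could look, reusing the Selenium session from the add test; the locators, field ids and names are illustrative assumptions:

```csharp
[Test]
public void TestEditUser()
{
    // prepare: make sure there is a user in the database to edit
    User user = new User("Jane Doe");
    user.Create();

    // open the list and click the Edit link
    selenium.Open("/users/list.rails");
    selenium.Click("link=Edit");
    selenium.WaitForPageToLoad("30000");

    // action: change a detail and save
    selenium.Type("user.FullName", "Jane Smith");
    selenium.Click("btnSave");
    selenium.WaitForPageToLoad("30000");

    // verification: back on the list, with the updated name
    Assert.IsTrue(selenium.IsTextPresent("Jane Smith"));
}
```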

As you can see, we added a User to the database so that when we open the list page we have something to edit. But wait, we have a problem: if we run the test twice, the user will be inserted twice in the database. In order to avoid that, we do the following:

Running all the tests, we see that the edit test fails:

Selenium.SeleniumException: ERROR: Element link=Edit not found

at Selenium.HttpCommandProcessor.DoCommand(String command, String[] args)
at Selenium.DefaultSelenium.Click(String locator)
at MRProjectTest.Functionals.Selenium.ManageUsersTests.TestEditUser() in ManageUsersTests.cs:line 28

we now add the Edit link in the list.vm:

and an Edit action in the controller:

public void Edit(long id)
{
    PropertyBag["user"] = User.Find(id);
}

now the view for this action: edit.vm

and since the values will be saved in the update action, we’ll also have:

public void Update([DataBind("user")] User user)

All tests passed. :)

All tests run. Refactoring

We can easily see that there are a few refactorings we can make. First, the TestAddNew and TestEdit methods are almost identical, so we can do:

having also:

We run the tests; they still pass. Now we go further, into the views, which have the same problem: add.vm and edit.vm are almost identical. We’ll separate the common part into _form.vm. Running the tests again confirms that we haven’t ruined anything we did before:

For the delete, we use the same test-first principle, until we make it work. To add validations, or any other functionality the user will be able to use, we add new tests and then the code to make them pass.


We have shown an example of designing an application with a method called incremental architecture, where the actual architecture of the system does not need to be thought out a month in advance, but is constructed as the code is constructed, because changes are very easy to make when the code is continuously refactored, all backed by our tests.

1. Selenium – open source web functional testing tool
2. Castle Project (MonoRail and ActiveRecord) – open source lightweight ASP.NET/ADO.NET alternative

Model View Presenter – is testing the presenter enough?

Source code: Download

Lately, I have noticed that the Humble Dialog Box, or Model View Presenter, is gaining more and more acceptance among software developers, especially in agile communities, because of the very good separation it gives between the view and the behavior, and because it can be very easily unit tested, in a problematic area: user interfaces.

Let’s meet Model View Presenter

In Model View Presenter, as Martin Fowler [1] and Michael Feathers [2] describe it, the logic of the UI is separated into a class called the presenter, which handles all the input from the user and tells the “dumb” view what and when to display. The special testability of the pattern comes from the fact that the entire view can be replaced with a mock object, and in this way the presenter, which is the most important part, can be easily unit tested in isolation. Let’s take a small sample.

A client writes in a user story:

User management

The system will support managing users. Each user can have a username and a password. The system will present a list of users and, when one is selected, it can be edited. There will also be functionality to add new users, and to select a user from the list and delete it.

Ok, now let’s get to work:

Our view will need the following data, which we put into an interface:

The UsersList represents the list of users, and User is the user currently selected at a given time.

When the UI is displayed, we want to see the list of available users, with the first one in the list selected by default. Let’s write a small test for that. Having no view yet, we will make a simple implementation of IUsersView, called UsersMockView. Using it, we will be able to see whether the presenter, when initialized, sets the needed list and selects the first user in it.
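A sketch of the interface, the hand-written mock and the first test; the presenter class name (UsersPresenter, to be written next) and the member details are assumptions based on the description above:

```csharp
using System.Collections;
using NUnit.Framework;

// the data the view needs, as described above
public interface IUsersView
{
    IList UsersList { get; set; }
    User User { get; set; }
}

// hand-written mock: it simply records what the presenter pushes into it
public class UsersMockView : IUsersView
{
    private IList usersList;
    private User user;

    public IList UsersList
    {
        get { return usersList; }
        set { usersList = value; }
    }

    public User User
    {
        get { return user; }
        set { user = value; }
    }
}

[TestFixture]
public class UsersPresenterTests
{
    [Test]
    public void TestInit()
    {
        UsersMockView view = new UsersMockView();
        UsersPresenter presenter = new UsersPresenter(view);

        presenter.Init();

        // the presenter must push the list into the view
        // and select the first user by default
        Assert.IsNotNull(view.UsersList);
        Assert.AreSame(view.UsersList[0], view.User);
    }
}
```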

where we have:

And for the data we use a separate model class, like:

To make my test compile, I must write my presenter, see the test fail, then keep working on it until the test passes:

and now it is time for a new test, and a small refactoring in the test code:

Now we can go further and add the missing code… until it works. We then follow the same procedure until we have quite a suite that tests selecting a user in the list, updating its details, deleting a user, deleting all the users, and more. Now we know we have a presenter that works fine, at least where it is tested.

Could we have used a dynamic mock?

In some cases I like writing the mock objects by hand. However, there are a few dynamic solutions that can prove quite helpful, like NMock [3]. NMock can create mock objects from your interfaces “on the fly”, using Reflection.Emit. It is also very useful when it comes to checking which methods were invoked on the mock object by your class under test. You can also set predefined responses for when the class under test invokes your mock object, but that is another matter. Let’s build a small sample, just as we did before manually. For TestInit, using NMock, it will look like this:

It works. The presenter invoked the properties of the mock object once, as expected, and set the expected values. This way we can throw away our previous tests and build a new presenter based on testing with dynamic mock objects.

Implementing the real view

If you read Michael Feathers’ “Humble Dialog Box” article, you’ll see very well how the real view should be built. The presenter sets the values into the view, which are then propagated to the properties of the controls, much like:

.NET, however, comes with the concept of data binding, and some developers do not want to ignore it in favor of the “old and safe” ways above.

How does MVP cope with data-binding?

.NET 2.0 comes with a new and much better concept for data binding than 1.0 or 1.1 had: the BindingSource. It can be connected to different kinds of data sources, including business objects. For our interface we create two: bsUser for the User and bsUserList for the list of users. Then we can wire up the binding in the designer, as:

Now we go back to the properties that the presenter uses to “push” data into the view to be displayed, and modify the code to use the binding sources as follows:
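A sketch of what those properties could look like on the real view, plus an example of delegating a user action; the presenter field and its DeleteCurrentUser method are illustrative assumptions:

```csharp
// the view now pushes values into the BindingSources instead of into
// individual controls; the controls bound in the designer pick them up
public IList UsersList
{
    get { return (IList)bsUserList.DataSource; }
    set { bsUserList.DataSource = value; }
}

public User User
{
    get { return (User)bsUser.Current; }
    set { bsUser.Position = bsUser.IndexOf(value); }
}

// user actions on the view are simply delegated to the presenter
private void btnDelete_Click(object sender, EventArgs e)
{
    presenter.DeleteCurrentUser();
}
```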

As you can also see, user actions on the view are delegated to the presenter, which is now fully in control of the behavior of our user management interface.

If we run the tests again, they are all green; all succeed. This is great, but the green is misleading if you take it to mean that the data binding works, because the tests do not touch the actual view in any way. So what do we do?

A first option would be to leave it like that. But since a view for a desktop application is not exactly as “dumb” as we would like, and can quickly become a source of errors, we need to add some tests.

Using a functional testing framework like NUnitForms or SharpRobo could be a solution, but this can result in tests that test the same thing twice, and that is not exactly maintenance friendly. But wait: what if our stub were not exactly a stub, but an extension of the view itself, to which we can add testing code? For this we inherit from the real view and create ExtendedUsersView. Now we want to modify the testing code in such a way that the existing code is still reused. For this we inherit from the manual test we have written and just override the SetUp method:

If we run it, it will run the initial tests, but on the actual view. The view needs to be shown, since data binding to the controls is done only when the controls become visible, but that is not a problem. The problem can be speed, but if you run the code you’ll see that the difference is not that significant. If speed becomes a problem, you can run the tests that use the actual view less frequently, as you can get quite enough feedback from the mock tests.

The process

Regarding the process of building desktop application pieces: the way I constructed the application above might seem wrong, as, using TDD, I built the presenter without having the view yet, and the view was constructed later on the exact same set of tests. In practice I wouldn’t recommend this, as building the presenter first, without the view, might lead to code that needs big refactorings (in both the presenter and the tests) when the view is built. A better approach would be to construct two test cases, one inheriting from the other, with two setup methods, one using the mock view and one using the actual extended view; then write a test in the first one, build some code on it in both the presenter and the view, then add a new test, and so on, until we have tested both the presenter and the view, having both fast tests that isolate the presenter’s behavior and tests that also cover the view.


I have tried to present a solution that can cover more of the code with unit tests than only the presenter, as for desktop applications this can be an important issue. I have also tried to find a solution that works with data binding, and what I have found seems OK so far, but only time and more research and testing can really tell whether this was a good choice…

1. Martin Fowler – Model View Presenter
2. Michael Feathers – The humble dialog box
3. Peter Provost – TDD in .NET
4. Jeremy Miller – A simple example of “humble dialog box”
5. Ben Reichelt – Learning the model view presenter pattern