User Stories

Category: Common Practices

Written by: Mark Levison and Charles Bradley

Introduction

The real challenge in writing software isn’t the time spent writing the code itself. Instead it’s the time spent deciding what software we should build, and perhaps just as importantly what we shouldn’t build.

Traditional development focuses on getting everything right up front. It’s based on Boehm’s Cost of Change curve, whose point is that change gets more expensive over time. Basic architectural principles (e.g., isolation), modern coding practices (Test Driven Development, Refactoring, Acceptance Test Driven Development, etc.) and modern tools (IDEs with built-in refactoring tools, Ruby vs. Fortran/C) have largely flattened the curve.

Figure 1: Barry Boehm's Cost of Change Curve, 1981

We spend so much time up front gathering traditional requirements, trying to get the details right, that we’re reluctant to change them or throw them away as the user’s needs evolve.

User Stories are an Agile approach to solving this problem; they’re lightweight, simple requirements. We don’t try to get the details right up front, because inevitably they will change. They typically replace Use Cases and other heavyweight requirements. Our goal is to place the focus on the needs of the User. Instead of trying to write down everything that needs to be said about a requirement, a User Story takes the opposite approach: it’s intended to start a conversation between the people who will implement the Story and the Customer/Product Owner.

Overview

In this article, we’ll start by giving you some of the background on User Story origins. We’ll then talk about the three components of all User Stories, and dive deeper for more details on each component. Along the way, we’ll introduce a fictional book store project as a backdrop for giving concrete examples of the three User Story components.

User Story Background

The term “User Story” was first used by Kent Beck in 1996, and became popular through its inclusion in the first Extreme Programming project. The idea has been further refined by Ron Jeffries, Chet Hendrickson, Bill Wake, Mike Cohn, and Alistair Cockburn. User Stories, while not required by Scrum, are often used by Scrum Teams as a way to represent Product Backlog Items.

The Three Components of User Stories

There are three components of User Stories, often referred to as the three Cs: Card, Conversations and Confirmations (from Ron Jeffries).

Card
A token, with a story title and description, used for planning and acting as a reminder to have conversations. The card can be a physical index card or simply a sentence or two of descriptive text representing the Story. Either way, it is good practice to keep the title and description to as few words as possible: the card does not capture the requirement itself; its purpose is to remind us to have conversations.
Conversations
Conversations discuss the story details, and result in one or more test confirmations.
Confirmations
Acceptance Criteria that can be turned into automated acceptance tests. These automated tests are vital, and they are what enable the simple and light approach implemented by the first two Cs (Card and Conversations).

World’s Smallest Online Bookstore

We will use this example to help illustrate the major points of User Stories.

Product Vision

We are building Smallestonlinebookstore.com.

Smallestonlinebookstore.com caters to the view that Amazon is wrong: an infinite supply of books is too much. Readers don’t want an infinite supply, just the right choice for their next book. They don’t want to spend hours agonizing; instead they want to spend that time reading.

All books are read and reviewed by our staff. In addition we verify that public reviews come from real readers and not authors/publishers. When you have questions or problems, we have real staff on hand to answer all of your questions. Our goal is to save you time (by not asking you to sift through hundreds of books) and money (by not selling you boring books).

Users

The “user” in User Stories can be any end user or stakeholder that is trying to get business value from the system. Examples from Smallestonlinebookstore.com could be: Book Buyer (First Time, Frequent, Casual), Warehouse Staff, Site Team, Publisher (Small, Large), Director of Marketing.

“The Card”: Initial User Stories at Smallestonlinebookstore.com

This is the initial product backlog that was created by the Team in their release planning session.

As a first time book buyer, I want to purchase my first book so that I can read it.

As a first time book buyer, I want to find the perfect mystery novel so I can while away the time on my next plane flight.

As a frequent book buyer, I want to buy my book with a minimum of hassle so that I don’t waste my time.

As a first time book buyer, I want to find another book by my favorite author Dan Ariely so that I won’t waste my time reading junk.

As a shipping clerk, I want shipping labels printed on a self-adhesive label so that I can easily add it to the final package.

As a system administrator, I want to be able to reset lost passwords with a minimum of hassle so I don’t waste my time on trivial things.

Other Styles for Expressing Stories

There are other styles we can use to express User Stories. Here are two versions of the same User Story, described with different styles:

  1. As a frequent book buyer, I want strong passwords so that my credit card information remains secure.

  2. In order to keep my credit card information secure as a frequent book buyer, I want the site to use strong passwords.

Conversations

User Story conversations are the second component of User Stories. They can happen at any time, and the nature of these conversations will vary widely from Team to Team. The main focus of the conversations will be to discuss details of the story, and to create test confirmations: Acceptance Criteria that can be turned into automated tests.

Who participates in these conversations?

This varies widely as well, but frequently the Product Owner is involved, since they have the final say on product vision and requirements. Sometimes users, stakeholders, testers, programmers, or even the whole Scrum Team are involved. Whenever possible, getting key Stakeholders and Users involved in these conversations is highly encouraged.

When do these conversations happen?

They happen on an ongoing basis, at any time, but especially as a part of Product Backlog Refinement. Product Backlog Refinement is an ongoing activity throughout a Scrum project, and includes adding, changing, removing, splitting and estimating Product Backlog items. It also includes defining Acceptance Criteria. Scrum Teams often allocate up to 10% of the time in a given Sprint to prepare for the next Sprint, and this preparation can include Product Backlog Refinement.

What is the expected outcome of these conversations?

User Story conversations, whether they occur during Product Backlog Refinement or not, should result in shared Team understanding, and the Team and Product Owner coming up with test confirmations for each story. While conversations and minor changes to stories will continue inside of the Sprint, every reasonable attempt should be made to bring the Team to a shared understanding of the vast majority of story details and test confirmations—at least the Acceptance Criteria—before the Sprint begins.

Conversations at Smallestonlinebookstore.com

Returning to our example product, the Team generally implements the Conversation component of User Stories in these ways:

  • The Development Team and Product Owner meet to do Product Backlog Refinement once a week (for an hour or two) and refine the stories that will be in the next Sprint.
  • At spontaneous and often random times, conversations will happen with Stakeholders, Team members, etc. If there are any material updates to a story or its test confirmations, those involved in the conversation must be sure to communicate the updates to the rest of the Team.

Confirmations

Typically, test confirmations are created in two steps:

  1. Acceptance Criteria are communicated, and
  2. The Acceptance Criteria are turned into automated tests that verify that each has been met.

Acceptance Criteria

The goals of Acceptance Criteria are:

  • To clarify what the Team should build, in code and automated tests, before they start work;
  • To ensure that everyone has a common understanding of the problem;
  • To help the Team members know when any given story is complete;
  • To help verify the story via automated tests.

When are Acceptance Criteria created?

As mentioned above, they should be well known before implementation of a User Story begins. In addition, experience shows us that discovering these Acceptance Criteria at least a few days before the Sprint Planning meeting makes for a shorter, more productive meeting. Discovering them in advance of the Sprint also usually means the Team’s commitment is closer to their real capacity. Discovering numerous Acceptance Criteria after the Sprint Planning meeting usually leads to over-commitment and waterfall type behavior. Acceptance Criteria are typically created during User Story conversations and Product Backlog Refinement.

How do we communicate Acceptance Criteria?

There are numerous ways to communicate them that are beyond the scope of this article. Below are four examples for the User Story described above: “As a frequent book buyer, I want strong passwords so that my credit card information remains secure”.

  1. Acceptance Criteria are composed and managed in an electronic form, like on a wiki.

    • Test that the system allows passwords with all of the following characteristics:
      • At least 8 characters
      • Contains at least one character from each of the following groups:
        • Lower case alphabet
        • Upper case alphabet
        • Numbers
        • Special Characters (!,@,#,$,%,^,&,*)
    • Test that the system disallows passwords that are lacking any one of the above characteristics.
    • Test that … (and so on)
  2. Acceptance Criteria written on a card. In this example the Team has had lots of conversations, taken some notes on the card, and felt very confident that everyone has a shared understanding of what the Acceptance Criteria would be, based on the notes on the card.

  3. Acceptance Criteria described in a “Specification By Example” style (also on a card).

  4. Acceptance Criteria composed and managed in an electronic form, like on a wiki or spreadsheet, using the same “Specification By Example” style.

    Data             Expected Result   Expected Message
    Aa9ab$           Fail              Too Short
    AAbbCC11         Fail              No Special Characters
    $$$bbb111        Fail              No Upper Case
    AAA%%            Fail              No Lower Case
    AAAA%%%%bbbbb    Fail              No numbers
    IsThis$AGood11   Pass
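Under the Specification By Example style, a table like the one above can translate almost directly into an automated test. Here is a minimal Python sketch; the function name check_password and the order of the checks are illustrative assumptions on our part, not something the story prescribes:

```python
# Sketch of a password checker derived from the Acceptance Criteria above.
# The name check_password and the order of the checks are illustrative
# assumptions; a real Team would agree on these during conversations.

SPECIAL_CHARACTERS = set("!@#$%^&*")

def check_password(password):
    """Return (passed, message), mirroring the Specification By Example table."""
    if not any(c.islower() for c in password):
        return (False, "No Lower Case")
    if not any(c.isupper() for c in password):
        return (False, "No Upper Case")
    if not any(c.isdigit() for c in password):
        return (False, "No numbers")
    if not any(c in SPECIAL_CHARACTERS for c in password):
        return (False, "No Special Characters")
    if len(password) < 8:
        return (False, "Too Short")
    return (True, "")

# The example rows become executable test data: (input, passes, message).
EXAMPLES = [
    ("Aa9ab$",         False, "Too Short"),
    ("AAbbCC11",       False, "No Special Characters"),
    ("$$$bbb111",      False, "No Upper Case"),
    ("AAA%%",          False, "No Lower Case"),
    ("AAAA%%%%bbbbb",  False, "No numbers"),
    ("IsThis$AGood11", True,  ""),
]

if __name__ == "__main__":
    for data, expected_pass, expected_message in EXAMPLES:
        passed, message = check_password(data)
        assert (passed, message) == (expected_pass, expected_message), data
    print("All Specification By Example rows verified")
```

Note that the order of the checks was chosen so that each example produces the expected message; settling details like this is exactly what the Conversation component is for.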

Should we create documentation to communicate the Acceptance Criteria?

User Story practitioners should be careful about creating elaborate documentation to describe Acceptance Criteria. We don’t want to fall into traditional, wasteful, requirement-driven habits. The User Story practice strongly emphasizes and prefers conversations over documents. The point of User Stories is to communicate efficiently, and adding documentation can lead to waste and confusion. User Story practitioners should strive for the minimum amount of documentation that could possibly work.

Having said that, it is OK to create very light documentation around Acceptance Criteria—but again, the Team should strive for the minimum documentation that can possibly work.

Automated Tests

The second typical step towards building test confirmations is automated tests. Automated acceptance tests are what allow the User Story practice to thrive without the extensive documentation required by traditional requirement-driven practices.

Who automates the tests?

It doesn’t matter who automates the tests, so long as someone automates tests that verify every Acceptance Criterion. Automated tests help flatten out the cost of change curve, as discussed at the beginning of this article.

When are the tests automated?

The ideal situation is to have them automated before the programmers begin implementing a story. That way, the programmers can get immediate feedback by running the tests against the code as they progress. The second-best situation is when someone is automating the tests while the programmers are implementing the User Story, in parallel. Teams can also automate tests after the User Story has been implemented if they are disciplined about always doing a thorough job. On the other hand, automating after the fact often leads Teams to shortchange the automation of tests, and this can lead to technical debt buildup and waterfall-type behavior. Automated tests act as a long term, real-time verification of correct system behavior, and since the User Story practice encourages very little documentation, leaving the automated testing piece out can be very wasteful and risky.
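As a sketch of what such an automated acceptance test might look like, here is a small Python unittest for the password-reset story from the backlog above. The AccountService class, its methods, and the temporary-password behavior are all hypothetical details invented for illustration; in test-first style the test would be written before the stub and would fail until the behavior exists:

```python
import unittest

# Hypothetical implementation stub. In test-first style this class would
# start empty and the acceptance test below would fail until the behavior
# described in the story is implemented.
class AccountService:
    def __init__(self):
        self._passwords = {}

    def set_password(self, user, password):
        self._passwords[user] = password

    def reset_password(self, user):
        # “As a system administrator, I want to be able to reset lost
        # passwords...” -- resetting discards the old password and
        # issues a temporary one (an invented detail for this sketch).
        temporary = "Temp$123Pass"
        self._passwords[user] = temporary
        return temporary

class ResetPasswordAcceptanceTest(unittest.TestCase):
    def test_reset_replaces_lost_password(self):
        service = AccountService()
        service.set_password("pat", "Forgotten$1Pw")
        temporary = service.reset_password("pat")
        # The old password no longer works; the temporary one does.
        self.assertNotEqual(temporary, "Forgotten$1Pw")
        self.assertEqual(service._passwords["pat"], temporary)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Once a test like this passes, it keeps verifying the Acceptance Criterion on every run, which is what lets the written documentation stay light.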

User Story Key Points

  • User Stories are relatively small: a few days’ effort for one or a pair of Team members
  • User Stories are focused on the what (the needs of the user), not the how (the technology).
  • User Stories are the starting point for an ongoing collaboration between the Product Owner and the entire Scrum Team.
  • User Stories are best framed in language that users and stakeholders familiar with the domain would understand.
  • Once implemented with supporting Automated Acceptance tests, User Story documentation can be reduced.
  • Many Teams use a template to help formulate the description of a User Story (the card): As <a user> I want to <do something> so that <value statement>. The user in question is able to achieve something they value with the functionality captured by the “I want” statement. Not all Teams use this template, and as discussed above there are several other ways of communicating the User Story description.
  • Not everything in the Product Backlog needs to be a User Story.
  • Not everything a Scrum Team does is associated with a specific User Story or Product Backlog Item, though all work of the Scrum Team should be made visible to the entire Team.

INVEST

Good User Stories can be measured against this simple set of criteria, originally proposed by Bill Wake:

Independent
Dependencies between stories limit the flexibility of both the Product Owner and development Team. The Product Owner should be able to ask for stories in whatever order makes the most sense to them.
Some stories are independent of one another and can be built in whichever order the Product Owner sees fit. For example: “As a Canadian Book Buyer I want my book shipped to my Canadian address so that I can order painlessly” and “As an American Book Buyer I want my book shipped to my US address so that I can read it quickly”. However, the Team may want to point out to the Product Owner that whichever one is implemented first will be more expensive than the second, since it involves building the infrastructure to support addresses. Subsequent address stories will be cheaper, because the hard work has already been done.
Negotiable
The elegance of a User Story is that the precise details are left until later. This gives the Product Owner and Team a chance to delay unnecessary decision-making until implementation begins. It allows the Team to discover new options right up until they’re done.
Valuable
Each Story needs to deliver at least some value, all on its own. In other words, the customer has to be able to see the value in every Story. This pushes us towards slicing our work into vertical[1] chunks, and not horizontal technological layers.
In addition, this prevents the creation of “Infrastructure” User Stories. Scrum and XP take the view that Infrastructure should only be built to solve the current problem.
Estimable
If the Team can’t estimate a Story, whether through lack of experience or differences in technical understanding, then they shouldn’t fake it. Instead they should run a short experiment to gain the experience they need to estimate its difficulty. These experiments are called Spikes.
Sized Appropriately
Stories at the top of the Backlog (roughly the next 3 Sprints) should be small: so small that the Team should be able to get 5–10 similar sized stories completed every Sprint.[2] Stories in the middle of the Backlog (roughly 4 to 10 Sprints out) can be larger; the Team might only complete 1–2 of these in a Sprint. Further out, the Stories can be very large.
Testable
It is clear how you will test the Story.

Epics & Themes

Epics are Stories that won’t be implemented until further into the future (i.e., 4 or more Sprints away). They are often too large or too vague to be completed in one Sprint. In a way, it’s good that Stories this far out are often vague and/or large, because it saves us the effort of refining Stories which describe uncertain or contingent value that might change often, or even disappear, in the coming weeks.

A Theme is just a collection of User Stories that have some unifying trait or concept. It is sometimes useful to talk in terms of Epics and Themes when doing long-range planning, like release planning and roadmapping.

Conclusions

One way to tell if your Team is implementing the User Story practice well is to look at the questions below. If you answer “No” to any of them, your Team needs to inspect and adapt its User Story practices.

  • When we demonstrate and deliver the software functionality of a User Story, is it only a rare occasion when we fail to meet the Product Owner, user, or other stakeholders’ desires?
  • Do we have automated tests that verify the Acceptance Criteria from previously implemented User Stories?
  • Are we using the least amount of documentation that could possibly work, while also satisfying the above two questions?
  1. “Vertical” here refers to a story that slices its way through all the software layers, i.e. Database, Business Logic, UI. “Horizontal” refers to specific software layers, i.e. the Database, Business Logic etc. For more detail see http://scrumftw.blogspot.ca/2008/10/slices-verticals-user-stories-and-scrum.html

  2. Interestingly, this seems to vary with team size more than with Sprint length. Mike Cohn writes, “Based on data I analyzed on successfully finished sprints, I determined that a team should average around 1 to 1.5 user stories (product backlog items of any sort, really) per person per sprint. So, a six-person team should get somewhere around 6–9 user stories done per sprint.”