Story Mapping: Forget computers exist

Since I moved closer to the product discovery side of things, I've been spending quite a lot of time participating in story mapping. I have thoughts I want to share, but before I do, let's try to answer two questions: what is a user story, and what is a user story map, exactly?

You keep using the word...

These days, it's probably understood through the prism of how companies use Jira. So a User Story is a kind of ticket which describes a particular feature in detail, along with its interface and acceptance criteria. And, obviously, the mandatory story points. That's how we arrive at what I call "feature-level waterfall". Amusingly, the "story" part doesn't really exist in this scenario; it's just a different name for requirements.

In reality, the key is literally in the name. A User Story (as a piece of paper with text on it) is simply a record of a short story told by a user about what they do and maybe why they do it. That's all it is. In fact, it's not even that, because as Jeff Patton reminds us, the actual written piece is just a recall device. It's supposed to kick your brain into remembering past conversations and be a starter for new ones, but "the value is in the conversation", not in the written piece.

Notice one thing above, though. We have the "what" and the "why"; I've purposefully left out the "how". That's because, until you get to the nitty-gritty, a Story Map describing a physical store should be nearly identical to one describing an online store. At least until the very, very last moment.

But how does that even make sense? I mean... we're building software, shouldn't our designs reflect that? And how does innovation happen when all stores are created equal?! If the innovation is in the UI, it will simply come in much later on. If it's in the process, though, it will be reflected here if you do it right ;). The point is, if you get into the how too early, you might not notice your innovation even if it's right in front of you. Or, even worse, you might solve the wrong problem...
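To make that a little more concrete, here's what the top of such a map might look like if you jotted it down as data (a made-up backbone of my own, not any prescribed format). Notice that nothing in it says "screen", "button", or even "website"; it reads the same whether the store is a building or a web app:

    // A hypothetical story-map backbone for "buying something".
    // It captures activities and steps (the what and the why), not
    // screens, so it fits a physical store and an online one equally well.
    const backbone = [
      { activity: "Discover products", steps: ["browse around", "compare options", "ask for advice"] },
      { activity: "Decide", steps: ["pick an item", "check the price"] },
      { activity: "Pay", steps: ["provide payment", "get proof of purchase"] },
      { activity: "Take it home", steps: ["receive the goods"] },
    ];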

Analog problem solving

One of the things I've learned about story mapping is that, initially, it really helps to forget computers even exist. Which is harder than you might think, and it's only going to get harder. I mean, the question you should ask your user (not yourself) when starting to story map is "how would you do it if computers didn't exist?", and the answer is increasingly "what do you mean?". If my math is right, I have 7 computers of various shapes and sizes at home right now, which would've been a bonkers image for my 10-year-old self. No wonder we're getting increasingly bad at thinking outside the... screen?

Right off the bat, we always have some idea of what the product will look and feel like at the end. We have an image in our head, made of forms and buttons and animations which we believe will facilitate a process and solve the client's problem. But the story mapping stage is too early for that because it's when we're trying to understand the actual problem.

If we enclose it in a particular interface too early, there's a high chance we'll either miss the process or find bits and pieces of it that can't fit into our understanding. We would simply trip over the interface decisions already stuck in our heads, trying to cram the new information where there's no space for it. We'd be creating rigid, legacy software before it's even software!

If you don't start with a really good understanding of the process, the interface you create to realize it may look great and modern, and it may even make sense on the surface, but it will fall apart when confronted with day-to-day use. This is exactly what happened many years ago at my Mom's job.

Anecdotal evidence #1

For many years, her job was all pen and paper. Lots and lots of paper. At some point, the management decided to modernize it, and they hired some company to write bespoke software for this process. What this company did was spend an enormous amount of time understanding the intricacies of the manual process they were about to automate.

The result was a DOS program, which my Mom absolutely loved. Seeing her and her colleagues use it was like watching masters of Dance Dance Revolution: they didn't even acknowledge the screen was there, they just knew all the movements by heart. It was immediately obvious the software fit the process like a glove.

They kept using it, with modifications dictated by new laws and taxes, well past the apparent death of DOS, until the (very different) management stepped in to modernize their work once again with a brand new solution. Apparently it was off the shelf, with just some customization to fit the legal requirements, so it's not entirely apples to apples, but the point stands: it was awful.

The software looked a lot more modern, but the way it modeled the process was just terrible. It had a fancy UI, but nobody had understood how the users actually worked, which led to a lot of wasted time and frustration. No wonder it was dropped after a while, only to be replaced with an equally bad solution...

Now, I obviously don't know how the people responsible for the DOS-based system did their job. I highly doubt they used story mapping, or that the term even existed back then. But they clearly looked at the documents, talked to people, and did their homework before they started designing whatever little interface the application had.

Legacy systems in your brain

This is all anecdotal, of course, but it serves as a nice reminder that understanding the process should always come first and the software should grow out of that understanding. Trying to retrofit a process into a piece of software that can't really facilitate it simply doesn't work... And that's equally important whether you build a bespoke solution or take one off the shelf.

The moment your story says "email a document to accounting" instead of "deliver a document to accounting" you've locked yourself into a very particular mindset. This must happen at some point, but if you do it too early on you'll have to work around this mindset as new information comes in. It's like creating a rigid, legacy system in your brain without writing a single piece of code.

What I keep learning is that forgetting computers exist really helps bring the process to the foreground and makes it easier to grow the right interface around that process. Even if the exercise itself feels more and more bizarre as time goes on and we accumulate more computers on our desks, in our pockets, and on our wrists.

Same ideas, different contexts

This is a very similar idea to how acceptance tests should work, at least according to Dave Farley, and I trust his opinion on that. The outer layers are supposed to remain invariant under changes to your UI, not to mention the actual implementation. It's all about decoupling the what and the why from the how.
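To sketch what that decoupling might look like in practice (toy names of my own invention, not Farley's actual framework): the test below speaks only the domain language of "deliver a document to accounting", while the "how", email in this case, lives in a swappable driver behind an interface.

    // A hypothetical domain-level interface: pure "what", no "how".
    interface DocumentDelivery {
      deliverToAccounting(documentId: string): Promise<void>;
      receivedByAccounting(documentId: string): Promise<boolean>;
    }

    // One possible "how". A web-form driver (or a person with a trolley)
    // could implement the same interface without touching the test.
    class EmailDriver implements DocumentDelivery {
      async deliverToAccounting(documentId: string): Promise<void> {
        // ...send the document as an email attachment
      }
      async receivedByAccounting(documentId: string): Promise<boolean> {
        // ...check the accounting mailbox
        return true;
      }
    }

    // The test itself stays invariant under UI and implementation changes.
    async function invoiceReachesAccounting(delivery: DocumentDelivery): Promise<void> {
      await delivery.deliverToAccounting("invoice-123");
      if (!(await delivery.receivedByAccounting("invoice-123"))) {
        throw new Error("invoice never reached accounting");
      }
    }

Swap EmailDriver for something else and the test doesn't change, just like the story map shouldn't.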

PS. It depends…

Time for a disclaimer. Sometimes it will be very hard to take computers out of the picture. Other times, it will be impossible. The point is to err on the side of not bringing them up, not to avoid them at all costs.