When teams create new solutions (a product, service, or experience) or iterate on their current product, a discussion often arises: should we take an audience-first or a problem-first approach? On the surface, it seems like a fundamental discussion that links back to the mantra that product teams should obsess over the problem. In reality, these are two entirely different discussions. Let's start with the difference between a problem-first and an audience-first approach and how they link to obsessing over the problem.
What is an audience-first approach, what is a problem-first approach, and what's the difference between them? Let's ask our artificial friends at ChatGPT:
An audience-first approach focuses on understanding the needs, wants, and limitations of the target audience when creating a product or solution. This approach aims to create something that resonates with the audience and meets their needs. On the other hand, a problem-first approach focuses on identifying and solving a specific problem or issue. This approach typically involves identifying a problem, brainstorming solutions, and then creating a product or solution that addresses the problem. The main difference between the two approaches is the starting point: the audience-first approach starts with the audience, while the problem-first approach starts with the problem.
So why shouldn’t I use one or the other?
Both are theoretically and methodologically lovely approaches. However, from a practitioner's point of view, they often feel like either working with a high margin of error or waiting until something exciting passes along the way. That doesn't reflect reality, where technology, your own problems, or even larger-scale societal problems are often the starting point.
When I asked teams why they would go for one of the approaches in that way, they often quoted "obsessing over the problem". I found this very strange. If you already have a solution, you're reverse engineering; and if you don't, I wonder how you will secure funding for your project. The latter leads to retrofitting problems onto the ideas you had anyway. It also doesn't guarantee that you're obsessing over the problem: you're obsessing over the methodology.
That doesn't mean you should throw these approaches out the door. There are some solutions where you must dive very deep into the context or procedures before moving on (though applying the following approach can be very beneficial there as well). They can also be helpful for much more mature solutions (although with growth the iterations often become smaller and more feature-focused, and therefore require crazy budgets at the top level). An audience-first approach is wonderful when you actually know your audience quite well, which is often the case with more mature products. The same goes for a problem-first approach when you know you're working on the right problem. But until you have one or the other, that high margin of error remains.
"OK, Joeri, that's true, but how else could we move forward and remain focused on the problem?" I hear you think. Obsessing over the problem means that you love to understand what goes on and make sure that your solution solves something that people perceive to be solvable. As a side note, this is arguable, though, as there are many services where people love to pay and where you would wonder if this really 'solves a problem'. Hence, I love to take a more practical approach to obsess over the problem.
The result is the same, if not much better (at least from my experiences), and it keeps teams and businesses growing. I developed this as a response to being part of teams that ran into similar situations as I described above. Each time I tried to find a way to avoid running into the same position while keeping the same quality of results and output.
At the core of this view are three principles: gather insights with 80/20 effort, launch to learn, and know when to follow the data and when to follow your gut.
In all the projects and products I'm involved in, I swear by the 80/20 principle (also known as the Pareto principle). For those who don't know it yet, the concept is simple (and well documented; search on Google and you'll find plenty of cases): 20% of the input will generate 80% of the output. Conversely, if you want to gain that remaining 20% of output, it will in most cases require 80% of the total input. There are many variants (70/30, 90/10, 99/1), but the concept remains the same: a minimum of input generates most of the output.
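To make the arithmetic concrete, here is a tiny sketch of my own (an illustration, not part of any formal methodology) that models output as a diminishing-returns curve calibrated so that 20% of the effort yields 80% of the output:

```python
import math

# Toy diminishing-returns curve: output = effort ** k, with k chosen so
# that 20% of the effort produces 80% of the output. The curve itself is
# an illustrative assumption, not an empirical law.
k = math.log(0.8) / math.log(0.2)  # ~0.139

for effort in (0.05, 0.20, 0.50, 1.00):
    output = effort ** k
    print(f"{effort:>4.0%} of the effort -> {output:.0%} of the output")

# Prints roughly: 5% -> 66%, 20% -> 80%, 50% -> 91%, 100% -> 100%.
# The last 20% of the output costs the remaining 80% of the effort.
```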
Try to gather insights by spending only 20% of your effort (resources, time, and budget). That output (insights and learnings) will account for around 80% of what you need to get started with your work. In the beginning, this will be more directional; as you mature and progress, it will become more focused and narrow. Once you've gathered your insights, move on to the following principle: launch to learn.
Which tools do I use for that, and how do I make sure I'm obsessing over the problem? There is an excellent book that goes into much more detail: The Mom Test by Rob Fitzpatrick. Please read it. It will help you become better at obsessing over the problem, because that is nothing more and nothing less than leaving yourself out of the equation and learning about others.
Another learning (and I had to learn it the hard way): you only actually know what people think of your solution once you launch it. You can set up many validation tracks involving smoke tests, surveys, search analysis, interviews, test panels, and user tests. They all have in common that none of the participants stands to lose anything. You only truly learn once there is a final economic transaction, meaning a payment has been processed and the money-back guarantee has expired. Only then will you know whether people will stick with your solution. And even then, the first round of customers and your returning customers are often entirely different groups. Once people have engaged in an economic transaction and started using your solution, you will see actual behaviour and get real feedback based on real frustrations.
Sometimes you have to use validation techniques before you can move on. There are many good reasons for that, and I still use them with clients when they add value. But this principle is about giving context to what you learn through validation: it's just validation, not the same as launching. Once your first insights are confirmed, make them tangible in a prototype or something shareable. Learn from it, and move on.
If you can launch, launch. It will be the best school (there is a whole movement around it: building in public). If needed, you can always launch under another brand to avoid risks to your current brand. That is entirely fine as well.
Whatever you learn during validation or an actual launch, act on it. See what it tells you, then iterate to improve (or at least try to, because you'll only know whether it is an improvement once people start using it). Keep the previous principle in mind, though.
In the past, I've had many instances where I had gathered data and still knew something was wrong. You have to move on, so you take a decision. That decision often proved to be the best thing I could have done. That doesn't mean you will feel better about it. It also comes with a lot of pushback from teams: "the data says so." But everyone knows the data is incorrect or not telling the whole story. That is a downside of data-driven approaches: sometimes you only find out, after gathering the data, that some data is missing or that you were looking at it the wrong way. That doesn't mean you can stand still, though; someone needs to take a decision.
Though I've applied this principle for many years, it became much clearer to me once I read Build by Tony Fadell. He spends a lot of time on this type of decision-making in his book. In short, Tony distinguishes two kinds of decisions: data-driven and gut-driven. There is a time for one and a time for the other; both are great, and neither is inherently better. Read his book to learn much more about it.
I apply this by first looking at whether data is available. Then I consider what the data tells me and whether my gut feels the same. If my gut tells me something about the data is off, and I cannot find other data to resolve that doubt, it's time for a gut-based decision. If my gut is fine with the data, I follow the data. I've had times when the data seemed perfect, yet my gut said otherwise, so I chose not to follow it. Sometimes this approach proves wrong; that's part of being in business, but more often than not it turns out to have been the right call. And if the data is good but your gut still doesn't feel right, either you need to visit a doctor or something else is going on. In both cases, I wouldn't take a decision yet and would investigate further.
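If it helps, here is how I would sketch that heuristic in code. This is a loose illustration of my own reading of it; the parameter names and branch order are assumptions, not a formal framework:

```python
def decide(data_available: bool, gut_agrees: bool,
           more_data_resolves_doubt: bool, must_move_on: bool) -> str:
    """A rough sketch of the heuristic above; all names are illustrative."""
    if not data_available:
        return "gut-driven decision"      # no data: your gut is all you have
    if gut_agrees:
        return "data-driven decision"     # data and gut align: follow the data
    if more_data_resolves_doubt:
        return "data-driven decision"     # extra data settled the doubt
    if must_move_on:
        return "gut-driven decision"      # no time to wait: trust the gut
    return "investigate further"          # good data, uneasy gut: dig deeper

# Example: data looks fine, gut disagrees, no corroborating data, deadline.
print(decide(data_available=True, gut_agrees=False,
             more_data_resolves_doubt=False, must_move_on=True))
# -> gut-driven decision
```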
One caveat with this principle: only take gut-based decisions if you have the authority to do so. In some organisations, someone other than you may be the one who can take that decision. In that case, talk transparently with the decision-makers and try to move them towards that gut-based decision. That way it is still your gut-based decision, and even if they don't follow you, your gut will at least be calm because you've done the right thing. Then it's time to learn how to gain the authority to move others in the direction you prefer, but that's another topic.
I hope that sharing my past experiences helps many people, from solopreneurs to larger product teams and even senior executives, move things forward much quicker. Trust me, it will improve results and help you build better solutions for your clients. You may also disagree with me, and that's fine. I'm happy to hear your stories, as this was based on my experiences, and I'd love to improve my approach every day.
The header image of this post was generated with DALL•E 2 and edited afterwards by me.