
How to Build a Great Product

Published at 11:15 PM

What makes for a great product?

When I think about a great product I think about many things. Has it impacted society, or does it at least hold that potential? Is it defined by its ubiquity, or by its quotidian nature? Maybe it doesn’t have to be invisible, just bring a lot of value to its users, whether one or many. Maybe it leads to a bright future, even justifying adverse effects in the present. Maybe it’s marked by quality, if you ask an engineer: the product’s ability to not break often. From a more philosophical view, maybe it challenges the status quo as it improves on the past, maybe it is perceived as magic by its first users, or maybe it simply promises to do something outrageous. I think we can agree that a great product can be defined in a few ways, possibly many. But one thing remains constant: great products aren’t easy to build.

Let’s prove this point by assuming the opposite.

Hypothesis: Great products are easy to build.

Let’s consider this scenario—suppose it’s easy to build a great product.

Empirically, here’s what needs to happen for a great product to be built easily. The problem it solves must be a good problem, because bad problems lead either to bad products or to no product at all. Further, the problem should be easy to identify, its solution should be easy to imagine, the solution should be easy to build into a product, the product should be easy to sell, and, by implication, the value it provides needs to be tangible. Failing at any of these steps makes the product hard to build. But why does it need to be a good problem?

Let’s start by first understanding: what is a good problem?

Rittel and Webber define good problems as inherently complex: interconnected, evolving, and systemic. In contrast, bad problems are characteristically superficial or misdiagnosed. Further, they argue that good problems generally lack a single “root cause” and defy linear, easy-to-find solutions, requiring collaboration and iteration instead. Firestein argues that good problems need to be grounded in data rather than assumptions; this leads to better explainability, establishing the “apparentness” of the problem. According to Einstein, good problems are layered and contain granularities that reveal themselves the more time you spend exploring and understanding them.

Simply put, good problems are hard to understand, which makes building a product around them complex. They require a deliberate, iterative approach to understand and solve. This directly affects the ease with which a great product can be built. But what if you didn’t really need the problem to be “good” for the product to be great?

Hypothesis: Great products can still be built from a bad problem.

A bad problem, according to Rittel and Webber, is superficial, misdiagnosed, or both. It is based on assumptions and can’t be easily defended. So, to explore our new hypothesis, we need an answer to the following question:

Can misidentified problems, based on assumptions, be used to build something useful?

Simply put, a misidentified (superficial and/or misdiagnosed) problem diverges from the actual problem, and basing it on untested assumptions widens the gap further. A product solving a non-existent problem shouldn’t be considered a good product. It can be a great engineering feat, yes, or work like magic, but if it is not useful, it becomes an artefact.

Having established that great products are hard to build and stem from good problems, an important question now becomes: how are great products found?

Good problem to a great product!

Good problems are hard to find. But once you find one, how do you go about creating a great product? Where does one even start? Can a great product be derived, or does it always need to be novel? Can it be built in isolation?

Great products don’t always have to be technically novel; they can be derived from existing technology or solutions. They do have to be “significantly” different to prompt a unidirectional shift from the old, rugged solution to the new one. High significance comes from focusing on a good problem. Solving a superficial problem won’t lead to significant outcomes.

You could argue that the easiest, no-brainer way to build a working product would be to build anything that closely resembles the required solution and iterate from there, effectively searching for the optimum. Alternatively, to save iteration time, you could prioritize understanding the problem better so you don’t have to cold start. Furthermore, you could save research time by asking an expert in the domain, contingent on their existence and willingness to help. None of these offers relief from further iterations, as solving a good problem isn’t a trivial task.

We know that great products rely on a deep understanding of the problem, which needs iterative refinement. Establishing a fast feedback loop therefore becomes critical: faster feedback on iterations leads to a better understanding, making feedback speed proportional to the speed of achieving a great product. We also know that good problems are complex and offer multiple directional opportunities (perspectives from which to approach a solution); it is therefore also important to test assumptions and keep narrowing down towards a well-constrained problem, something actionable and achievable. Incomplete insights from feedback could lead to myopic convergence on the wrong solution. The feedback needs to be external and unbiased: open and honest feedback helps separate the right assumptions from the wrong ones, keeping you from narrowing in on the wrong solution. This iterative and collaborative refinement makes a great product practically impossible to create in isolation.

But, let’s still entertain the thought—what if you could achieve the greatest possible product in the very first iteration?

While not impossible, the probability of that happening drops drastically the “better” the problem is. It is in the very nature of a good problem to be vague and easily misunderstood, and simple Bayesian inference tells you to collect more information to improve your odds of achieving a great product in the first iteration. Even if you could collect all the necessary information online (keeping true to working in isolation), separating the noise from real insights and building a truly great product wouldn’t be trivial. Generally, secondary information doesn’t effectively replace real-world insights gathered through iteration.
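To make the Bayesian point concrete, here is a minimal sketch with made-up numbers (it is an illustration, not a model of any particular product): treat “this assumption about the problem holds” as a hypothesis, and each round of feedback as an observation. A beta-binomial update shows how every extra observation sharpens the odds:

```python
from fractions import Fraction

def update(a, b, supports, contradicts):
    """Update a Beta(a, b) belief about an assumption after feedback.

    Each supporting observation bumps a, each contradicting one bumps b."""
    return a + supports, b + contradicts

def belief(a, b):
    """Posterior mean: current estimated probability the assumption holds."""
    return Fraction(a, a + b)

# Start maximally uncertain: Beta(1, 1) is a uniform prior.
a, b = 1, 1
print(float(belief(a, b)))  # 0.5, a coin flip before any feedback

# Three rounds of feedback support the assumption, one contradicts it.
a, b = update(a, b, supports=3, contradicts=1)
print(float(belief(a, b)))  # ~0.667, confidence grows with evidence
```

The vaguer the problem, the weaker the prior you start from, which is exactly why first-iteration greatness becomes unlikely: you simply haven’t observed enough yet.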

Mathematically, the solution space of a good problem can be pictured as something like a multivariate Gaussian distribution: a realm of possible solutions whose quality varies wildly. Most of the solutions are worthless, many are merely plausible, some are useful, and the great ones are rare. Finding the rare solution is hard and, much like how machine learning models train, it has to be found gradually through careful iteration. Given the high uncertainty around getting to the perfect product in a few iterations, it’s easy to argue that iterations matter.
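As a toy illustration of that search (the quality landscape, the starting point, and every number here are invented), consider finding a rare peak in a solution space by proposing Gaussian-perturbed candidates and keeping only the improvements, one “product iteration” per proposal:

```python
import random

def quality(x, y):
    """Toy quality landscape: a rare peak at (2, -1) amid a worthless plain.

    Stands in for 'how great is this candidate product'."""
    return -((x - 2) ** 2 + (y + 1) ** 2)

def iterate_toward_peak(steps=200, sigma=1.0, seed=0):
    """Propose Gaussian-perturbed candidates; keep only improvements.

    Each proposal plays the role of one product iteration with feedback."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0                  # an arbitrary first attempt
    best = quality(x, y)
    for _ in range(steps):
        cx, cy = rng.gauss(x, sigma), rng.gauss(y, sigma)
        score = quality(cx, cy)
        if score > best:             # feedback says this candidate is better
            x, y, best = cx, cy, score
    return (x, y), best

point, score = iterate_toward_peak()
print(point, score)  # should land near the peak at (2, -1)
```

No single proposal is expected to be great; the search only works because bad candidates are cheap to discard, which is the whole argument for iteration.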

Iterations are useless without feedback, and so the key to building a truly great product from a good problem is to start, then refine the product through feedback. But if the key to building great products is iteration, what could possibly stop people from iterating?

Challenges to building a great product

Iterating and collecting feedback isn’t anything new; it has been the norm among builders, good or great. But if iterating is the key to great products, and iterating is the norm, why aren’t all products great?

Let’s define a team’s iteration competence as its ability to handle uncertain feedback and keep moving effectively in the direction of the great product. Merely being aware of, or even believing in, iteration isn’t enough. Above all, you have to be skilled at recognizing patterns in noisy data. Secondary issues might stem from not having the right resources to collect feedback, or from having exhausted all resources in collecting it. These resources might be the people you consult; in engineering, the physical or digital resources you use to perform tests; or simply time and its associated deadlines. So competence also depends on how efficiently the team utilizes those resources while extracting insights from the feedback.

A less important but still significant source of resistance is a limited empirical understanding of the problem at hand, which bottlenecks the iteration cycles and can severely stretch the product timeline.

While practical limitations constrain an individual’s or team’s ability to iterate, how do many teams still come through and build game-changing products?

Accuracy vs. Iteration

To understand how people build great products, we need to ask what effective teams do differently, and how they become effective at solving an uncertain problem.

To explore these questions, let’s start by circling back to the “good problem”. As defined by Rittel and Webber, good problems are vague and hard to understand; they are not well understood and admit multiple possible solutions. While highly reductive, it is tempting to represent the process of building a great product as a hill-climbing endeavour. But finding the best solution isn’t merely finding the right direction (there are multiple right directions) and iteratively moving towards it. Rather, it is a random, at times contradictory, explorative process, in which you form assumptions, test them, and update them through feedback.

Constructive feedback in one direction shouldn’t be equated with the right direction unless the other possible directions prove relatively ineffective. First, this means running a lot of experiments. Second, it means accepting that many of these experiments and assumptions will be proven wrong, and that the resources they consume are a necessary evil. Lastly, it means ensuring all the possibilities are exhausted before taking the next step.
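Those three points can be sketched as a tiny simulation (the direction names and payoffs are invented purely for illustration): probe every direction with cheap, noisy experiments, accept that most probes are “wasted” on losing directions, and commit only once every direction has been measured against the others:

```python
import random
import statistics

def pick_direction(true_payoffs, trials=50, noise=1.0, seed=1):
    """Probe every direction with cheap noisy experiments before committing.

    true_payoffs maps a direction to its hidden real value; each trial
    returns that value plus noise, like ambiguous real-world feedback."""
    rng = random.Random(seed)
    results = {d: [] for d in true_payoffs}
    for _ in range(trials):
        for d, payoff in true_payoffs.items():
            results[d].append(payoff + rng.gauss(0, noise))  # one experiment
    # Commit only after all directions are measured: most experiments were
    # "wasted" on losing directions, and that waste is the necessary evil.
    return max(results, key=lambda d: statistics.mean(results[d]))

# Hypothetical product directions; the payoffs are purely illustrative.
directions = {"simplify_ui": 1.0, "new_market": 0.1, "cut_price": -0.5}
print(pick_direction(directions))
```

Committing after one encouraging probe in a single direction is exactly the mistake the paragraph above warns against; with enough trials per direction, the noise averages out.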

Teams with iteration competence understand these concepts and are effective at maximizing insights per iteration while minimizing resource utilization. To consolidate the point further, let’s study the relationship between accuracy and iterations.

Let’s define some axioms. Accuracy is directly proportional to resource requirements; “accuracy” and “resource requirements” are used loosely here, meaning that to achieve better insights on the problem at hand you need to put in more effort. We also know that it is hard to retrace or reverse a big step in a given direction: the more you account for a piece of “directional” feedback and incorporate it into your next step, the harder it becomes to change direction if a better one is revealed later. This is partly sunk cost, but also a matter of practical resource requirements.

Given that finding the best product requires optimizing resource usage while maximizing insights, it can be said that effective teams perform many fast, inaccurate iterations. Inaccuracy here shouldn’t be confused with a lack of insight; it means sacrificing a granular understanding of the solution in exchange for faster exploration.
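A toy budget simulation makes the trade-off visible (every number is invented, and the step size is held fixed purely to isolate the budget arithmetic): under the same resource budget, cheap noisy iterations buy far more steps than costly precise ones.

```python
import random

def walk(budget, cost_per_step, noise, step=0.5, seed=2):
    """Walk toward a hidden target using noisy directional feedback.

    cost_per_step: resources one iteration consumes (higher = more accurate);
    noise: how noisy the feedback signal is at that level of accuracy."""
    rng = random.Random(seed)
    target, x = 10.0, 0.0
    steps = budget // cost_per_step
    for _ in range(steps):
        feedback = (target - x) + rng.gauss(0, noise)  # noisy direction signal
        x += step if feedback > 0 else -step           # small committed step
    return steps, abs(target - x)

# Same budget, two strategies: many rough iterations vs. few precise ones.
fast = walk(budget=100, cost_per_step=1, noise=5.0)   # 100 inaccurate steps
slow = walk(budget=100, cost_per_step=20, noise=0.5)  # 5 accurate steps
print(fast, slow)  # fast ends far closer to the target
```

The precise strategy never even gets within reach: five steps of 0.5 cover at most 2.5 of the 10 units of distance, no matter how accurate each reading is.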

Many fast inaccurate iterations

Good, actionable, atomic steps that provide insights come in all shapes and sizes; depending on your problem, choosing them is a judgement best left to the reader. However, some rules can come in handy to ensure you aren’t over-utilizing resources while getting diminishing insights in return.

A step shouldn’t need “more” external resources until it truly has to: needing external resources means taking a big step, making it slow and effort-intensive. Avoiding large steps for as long as possible keeps it easy to iterate and improve.

Tracking the evolution of your understanding of the problem should be a regular task; the effectiveness of your iterations manifests as a better understanding of the problem. Over time, you should stack up lists of validated and invalidated assumptions, and through these you should increasingly be able to explore further unknowns, iteratively experimenting with newer hypotheses.

Building a product definitely doesn’t happen in a day; nevertheless, a day might be a long time for a single iteration. It’s a long process, and making many mistakes while iteratively getting closer to the goal should always trump a one-shot or few-shot attempt at the great product.