Why Products Fail: Betting on the Wrong Solution (Part 2 of 4)

In which we explore how one problem can have multiple solutions and how to pick the right one

Hi, I’m Sara! In this newsletter, I share my musings at the intersection of tech, product, and human tinkering, with the aim of navigating business and life in the Technology Era with purpose. Subscribe to join me on this journey and check out the Polyweb podcast.

If you have been building products for a while, you have been there. At some point in your career, a product you worked on failed. So let’s talk about it.
Welcome to the second issue of a 4-part series about why products fail and how to avoid it.

Here are 4 big reasons why your product might fail:

  1. Failure in premise: You picked the wrong problem to solve.

  2. Failure in execution: You picked the right problem to solve, but you built the wrong solution.

  3. Failure in launch: You picked the right problem to solve, you built the right solution, but you launched it wrong.

  4. Failure in scalability: You built a product that works initially but fails to scale with increased users or complexity.

In this issue, we will focus on the second type of failure, the failure in execution.

The Fickle Dance of Problems and Solutions

Identifying the right problem is thrilling - it’s like discovering the secret formula to unlock a treasure. But having the formula in your hands is only the beginning. The real magic lies in the execution - going from problem to solution.

During this shift, Failure in Execution occurs when you craft a misaligned solution despite a well-defined problem. 

This is because a single problem can have multiple viable solutions.

A problem can have multiple solutions

History is littered with examples of product efforts failing in execution, even among highly successful companies.

Take Google+. Launched in 2011, it aimed to be Google's response to Facebook. But while the problem was clearly validated, the solution was a complex array of features users didn't grasp. It was retired in 2019.


In the upcoming sections, we'll examine all the nuances of Failure in Execution, exploring its causes and how we can sidestep this common pitfall.

When and Why Does Failure in Execution Occur?

Failure in execution can result from both internal and external missteps. Often, it's a combination of both. Here are some key reasons why it happens:

External issues:

  • Unclear solution/market fit: users don't immediately grasp why the solution is valuable.

  • Misinterpretation of user needs

  • Lack of validation: little or no testing prior to full release

  • Ignoring feedback, especially negative feedback

  • Copying competitors without understanding why a certain solution works for them

Internal issues:

  • Overcomplicating the product and assuming more features equate to more value

  • Resource constraints (e.g., time, team, money)

  • Shifting priorities on a whim rather than on data

  • Ineffective communication among teams

A 3-Step Approach to Turning Problems into Solutions That Convert

In a recent role evolving an on-demand visual marketing platform into a SaaS model, I faced a critical question with my team - why would clients pay a monthly premium for added services?

After identifying the right problem and avoiding failure in premise, we uncovered the key user need - selling assets faster. While we couldn't directly impact sales, we found opportunities to streamline how users obtained and collaborated on marketing visuals for those assets.

The core job-to-be-done was: “When selling their assets, our clients need a way to obtain and collaborate on marketing visuals efficiently, so that they can accelerate the sales process.”

This job had multiple potential solutions, and as a resource-constrained startup, we felt that choosing a direction was risky.

My method of narrowing down the options is to begin with four questions.

Step 1: The 4 questions framework

Evaluating each option against the four questions below significantly narrows down the choices. Afterwards, I assign each option a score per question based on that assessment; a short sketch of this scoring appears below.


  1. Will this solution effectively address my problem?

    If you look at the solution objectively and from a distance, without any vested attachment, do you believe that this solution would really solve the problem?

  2. Is this solution significant?

    Does this solution matter? Is it relevant to your users/company/society? Are there other solutions that might have a greater impact?

  3. How certain are we? Is there any evidence?

    At this juncture, you may lack empirical evidence, which is acceptable as we aim to gather this in subsequent steps. Here, evidence may be derived from analyzing past data, examining competitors, engaging with customers and stakeholders, and identifying consistent patterns.

  4. Is this feasible?

    Considering the effort and time required to bring this to market, can we do it? Do we have the right people and resources? How soon can we bring it to market?
    A solution may appear excellent on paper, but it's crucial to evaluate whether you possess the necessary resources and capabilities for execution. Generally, a swift market entry for early testing is advisable, provided the costs of rollback in case of failure are not exorbitant.

In responding to these questions, I’m looking to identify asymmetric bets.

What scenarios exist where, even in loss, there's a win?
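
To make the scoring concrete, here is a minimal sketch of how the four questions could be turned into a simple scorecard. The candidate solutions, the 1-5 scale, and the scores below are illustrative assumptions, not a prescription:

```python
# A minimal sketch: score each candidate solution from 1 (weak) to 5 (strong)
# per question, then compare totals. Candidate names and scores are hypothetical.

QUESTIONS = [
    "addresses_the_problem",  # 1. Will this solution effectively address my problem?
    "significant",            # 2. Is this solution significant?
    "evidence",               # 3. How certain are we? Is there any evidence?
    "feasible",               # 4. Is this feasible?
]

candidates = {
    "Guaranteed 24-hour asset delivery": {"addresses_the_problem": 4, "significant": 5, "evidence": 3, "feasible": 4},
    "In-app collaboration on visuals":   {"addresses_the_problem": 4, "significant": 3, "evidence": 2, "feasible": 3},
}

def total_score(scores: dict) -> int:
    """Sum the per-question scores for one candidate solution."""
    return sum(scores[q] for q in QUESTIONS)

# Rank candidates, highest total first, to pick one or two to test next.
for name, scores in sorted(candidates.items(), key=lambda kv: total_score(kv[1]), reverse=True):
    print(f"{name}: {total_score(scores)} / {len(QUESTIONS) * 5}")
```

However you record it, a spreadsheet works just as well; the point is to force an explicit, comparable assessment per question rather than a gut ranking.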

Step 2: Gathering Evidence: The Test Card

After selecting the one or two highest-ranking solutions from the 4 questions framework, use test cards to state how you are going to gather evidence. A test card is formulated as follows:

We believe that… (the hypothesis)

To verify that, we will… (the experiment)

And measure… (the metrics you wish to impact)

We are right if… (the success measure)

For example:

  • Hypothesis: We believe that some users will pay $20 a month for our subscription to market their properties faster.

  • Experiment: To verify that, we will guarantee 24-hour asset delivery for subscribers.

  • Metrics: And measure the % of users subscribing and retained for 3+ months.

  • Success measure: We are right if at least 20% of subscribers are retained for 3+ months.

The advantage of a test card lies in its ability to compel you to specify the most crucial metric you aim to monitor and to define, together with the entire company, what success looks like.
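
If you like keeping test cards next to your analytics or experiment code, here is one possible way to express the card above as a small data structure. The field names simply mirror the template; this is a sketch, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class TestCard:
    """One test card: hypothesis, experiment, metric, and success measure."""
    hypothesis: str
    experiment: str
    metric: str
    success_measure: str

# The example card above, expressed as data.
subscription_card = TestCard(
    hypothesis="Some users will pay $20/month for our subscription to market their properties faster.",
    experiment="Guarantee 24-hour asset delivery for subscribers.",
    metric="% of users subscribing and retained for 3+ months",
    success_measure="At least 20% of subscribers retained for 3+ months",
)

print(subscription_card.hypothesis)
```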

Step 3: Setting up the experiment

Once you have your test card, it’s time to set up the experiment.

In our case, we developed a basic MVP for beta testing, collected user feedback via surveys and interviews, and tracked usage analytics to decide which iterations to implement.
I wish I could say that our MVP launch was an overnight success, but at the beginning the reception was lukewarm. Clients weren’t used to paying a premium and converting them necessitated offering more benefits than initially provided. Yet, we eventually discovered our product-market fit.
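
For the measurement side, here is a rough sketch of how the success measure from the test card could be checked against subscription data. The subscriber records and the 90-day cutoff below are hypothetical assumptions, shown only to illustrate the calculation:

```python
from datetime import date

# Hypothetical subscriber records: (signup_date, cancellation_date or None).
subscribers = [
    (date(2023, 1, 10), None),              # still active
    (date(2023, 1, 15), date(2023, 2, 1)),  # churned before 3 months
    (date(2023, 2, 3), date(2023, 6, 30)),  # retained 3+ months, then churned
]

def retained_3_months(signup, cancel, today):
    """Count a subscriber as retained if they stayed at least ~90 days."""
    end = cancel if cancel is not None else today
    return (end - signup).days >= 90

today = date(2023, 7, 1)
retained = sum(retained_3_months(s, c, today) for s, c in subscribers)
retention_rate = retained / len(subscribers)

# Success measure from the test card: at least 20% retention.
print(f"3-month retention: {retention_rate:.0%} -> {'pass' if retention_rate >= 0.20 else 'fail'}")
```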

This stage of testing was pivotal because, based on internal projections, we had assumed certain features would resonate strongly with users. Real-world testing yielded contrasting outcomes: some aspects we anticipated would greatly impress users did not meet expectations, while others we had deemed nice-to-haves emerged as core features.

By testing solutions iteratively before over-investing, we avoided wasted effort. Here are some key learnings:

Validate Continuously

Test early concepts and prototypes with real users to get feedback on whether the solution resonates. Use methods ranging from simple prototypes to higher-fidelity MVP beta tests. We initiated beta testing as soon as we had a basic product to gather insights.

Solicit Ongoing Customer Input

Embrace pivoting based on user feedback. We engaged clients via regular interviews and surveys to understand their evolving needs. This loop helps realign the solution to stay relevant.

Keep It Simple, Sherlock (KISS)

There's a certain elegance in simplicity. Resist overcomplicating. Nail one core aspect before expanding features.

Leverage Data

Let metrics guide iterations and enhancements. Track user engagement and the other key metrics you defined in your test card.

Remain Flexible

Have a roadmap but adapt based on user insights. Balance long-term plans with short-term adjustments.

Anticipate Issues

Conduct "pre-mortems" to get ahead of potential problems. If your product were to fail, what would be most likely to go wrong?

Align Cross-Functionally

Educate all teams on the core problem and solution. Even amidst a rush to build swiftly, this focus speeds execution and prevents misalignment.

If you liked this newsletter and found the content useful, please consider sharing it 🙏