Shipitwise, founded in 2016, built software for the transit industry.
They wrote a complex algorithm for the transportation industry and developed a universal API to connect retailers and e-commerce platforms. Their work was, by several accounts, well ahead of their time.
But in the market, Shipitwise learned transportation companies weren’t interested in improving customer experience. Many of them preferred manual solutions, and enterprises in that industry weren’t looking to shake up their internal structures.
Shipitwise shut down in 2019. On Failory, their listed cause of failure is “No Market Need.”

Teaforia won the 2016 Best Tea Innovation Award from the World Tea Expo. Its two founders raised $17.1 million and delivered a “perfect” cup of tea via an internet-connected tea infuser. Their cutting-edge tea packages were 90% compostable and 100% recyclable. The glass in the machine was hand-blown by an artisan. The mobile app started the infuser for you.

In 2017, Teaforia closed its doors. Tea enthusiasts weren’t interested in a $1,000 high-tech brewing machine.
While I don’t know the ins and outs of Shipitwise and Teaforia, I’ve worked with many other startups firsthand.
One thing I’ve noticed is this: startups place big bets on products they have little evidence will succeed.
The main reason these bets are so dangerous is the opportunity cost you’re gambling. First, there’s the cost of building the actual product or feature. You pay x engineers, y designers, and z managers to spend h total hours on the idea. (We’re talking thousands to hundreds of thousands of dollars here.) But that’s only part of your gamble; the rest is the time and money you could have spent on a better solution. In other words, you’re betting on two fronts:

the cost of building the wrong product
the cost of not building the right product
In a startup, where time and cash are tight, betting on the wrong product usually sinks the business.
Data backs this up, too. CBInsights analyzed 101 post-mortems and found 42% of startups fail because there’s no market need. Autopsy analyzed 300 failed startups and found 11.1% failed for the same reason. Either way you slice it, that’s a lot of startups placing some really bad bets.
Fortunately, there’s a better way to gamble.
Hiten Shah started Crazy Egg and KISSmetrics, is a co-founder at FYI, and serves as an advisor or investor in over 120 companies. He’s experienced, to say the least. Yet every time he starts something new, he assumes, “I don’t know anything, and neither do you.”
The clearest example of this is his latest company, FYI. When Shah and his co-founder, Marie Prokopets, started working on FYI, they didn’t immediately place a big product bet. In fact, they didn’t even start with building; they started with learning.
Before writing any code, Shah and Prokopets gathered qualitative data from potential customers. They sent out an early-access survey, and the responses revealed something interesting. The number one problem their prospective customers face is finding documents across different apps.
When Shah looked at the market more closely, he discovered the Net Promoter Scores of document apps are low. Of the apps he and his co-founder surveyed, most fell into the lowest satisfaction category. The market wasn’t effectively meeting the customer problem.
With data supporting a painful problem and a real market gap, Shah and Prokopets had enough evidence to place a smart product bet. Their team spun up an MVP of FYI in 5 days, and they used the MVP to gather additional information from a small batch of customers. Those insights determined the direction of the current product.
When it was time to launch publicly, Shah and Prokopets didn’t cross their fingers and pray that FYI met a market need; they knew it did.
This is how you place a smart bet.
Most founders start with their idea. But an idea doesn’t tell you whether customers have the pain you’re solving or if they want the outcome you’re delivering. This makes ideas a terribly fragile and risky starting point.
Even if you craft a beautiful design, a flawless interface, and lightning-fast tech, the product can still flop. As Martin Christensen, a UX researcher and agile coach, observed, “You can build a product in the right way, but if it is not the right product (and it rarely is, we as humans have too many biases), that is just colossal waste.”
To avoid colossal wastes, start with a hypothesis instead.
A hypothesis codifies what you don’t know into a theory you can test. With a hypothesis, you’re looking to make observations and see if you’re right or wrong. In case this seems like semantics, here’s how a hypothesis is fundamentally different from an idea:
And here’s another set of key differences:
Assumptions are all the things you don’t know. And when you’re working with a product, you have a lot of things you don’t know:
Some of these assumptions are a bigger deal than others. For example, solving a painful problem matters more than getting the product in front of customers (at least at first). Because getting the product in front of customers doesn’t matter at all if you’re not solving a problem they’ll whip out their wallet for.
A strong hypothesis will focus on the most crucial assumption. Or, as I like to think of it, the riskiest assumption. Many SaaS founders assume that usability (can customers use it?) or feasibility (can we build it?) is their biggest and riskiest assumption. But 99% of the time, your riskiest assumption is going to be one that deals with value. As in, “will customers buy or choose to use the product?”
Below are example assumptions that deal with value. These are great candidates for a product hypothesis.
And, for clarity, here are some assumptions that don’t deal with value:
It’s unlikely any of those are the riskiest assumption you’re facing. But they could make great hypotheses later, once you verify value.
The first time you construct a hypothesis, use this formula:
We believe doing x will result in outcome y for the customer, which will have z business results.
Where x is a specific action, y is a trackable outcome, and z is a measurable impact on the business. This isn’t the only formula that works, but it will keep you focused on creating a hypothesis that’s specific and measurable.
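If it helps to see the formula’s three slots made concrete, here’s a minimal sketch in Python. The class, field names, and example values are all my own illustration, not from any real product or library; the point is simply that each slot must be filled with something specific before the sentence holds together.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    action: str            # x: the specific thing you will do or build
    customer_outcome: str  # y: the trackable outcome for the customer
    business_result: str   # z: the measurable impact on the business

    def statement(self) -> str:
        # Fill the three slots of the formula into one testable sentence.
        return (
            f"We believe doing {self.action} "
            f"will result in {self.customer_outcome} for the customer, "
            f"which will have {self.business_result} as a business result."
        )

# Hypothetical example values, chosen only to show the shape of a
# specific, measurable hypothesis.
h = Hypothesis(
    action="a redesign of document search",
    customer_outcome="finding files 50% faster",
    business_result="a 10% lift in weekly retention",
)
print(h.statement())
```

Notice that if any slot is vague (“improve the product,” “make users happier”), the resulting sentence immediately reads as untestable, which is exactly the failure mode the formula guards against.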
Here are a few examples of hypotheses that follow this formula:
From experience, I know it’s easy to be lazy and haphazard with these. If your hypothesis looks like any of the examples below, keep working; you’re not there yet.
The examples above are bad hypotheses because they’re not nearly specific enough. They fail in one or more ways to outline what you’re changing, who you’re changing it for, or how you’ll know if your hypothesis is true or false. Avoid these pitfalls.
Finally, once you think you have a good option, run it through the checklist below. A great hypothesis will tick every box.
Once you have your hypothesis, there are a variety of ways you can go about testing it. (Remember, testing means looking to prove OR disprove. We’re not playing the validation game here.)
The best testing method for you depends on the time, resources, and budget at your disposal.
It’s helpful to picture the different testing options as a spectrum. On one end of the spectrum are surveys. These are relatively low effort and fast to distribute. On the opposite end of the spectrum is a variety of MVPs. You can (and should) build these in a week or two, but they’ll take much more effort than distributing a survey. In between surveys and MVPs are options like low-fidelity wireframes and user walkthroughs.
Keep in mind the best method will also depend on what you’re testing. Some questions are best answered by qualitative data, others by quantitative data.
Regardless of what method you choose, the end goal is to learn quickly. Ideally, you want to discover whether you have a good or bad bet on your hands. If a good bet, you keep going. If a bad bet, you pivot and save yourself thousands of dollars and months of time.
I’ll dive into choosing the right method in another post. For now, give the hypothesis formula a go, and let me know how it goes.