This.
That's one big part of testing the hypothesis that I feel a lot of people (including mentoring organizations) really don't get.
In an ideal scenario you've clearly identified a problem and can accurately gauge the validity of your solution by just talking about it with prospects, as they can easily relate. I'll label this approach the "Let's-imagine" validation.
In other situations, though, target customers aren't even aware that they have a problem (maybe the problem is not immediately apparent), and it can be difficult for them to understand your value proposition, since they've "very successfully been using Excel for that for the past 20 years". You still need to somehow convey that there's a reality where things are significantly better than the status quo. In some such circles, relying on simple conversations or mock-ups can at best prove very difficult (people won't give you time to wander in fuzzy hypotheticals) or at worst be plain misleading (they will dismiss your idea as irrelevant). In situations where you can't rely on your prospects' imagination, you need to bring proof, and the MVP plays a more important role. I'll label this the "Show-me" validation.
"Let's-imagine" and "show-me" are not absolutes. They're on a spectrum, but it seems that the advice to validate by talking to customers is being taken to some dogmatic extremes sometimes. It's something for founders to be aware of when they receive canned advice. Regarding your own solution, you need to be smart to identify where on the above range you fall.
> ... started by 2 famous HN commenters.

Why did you choose not to name these HN commenters?
The point is that you can't simply make a mockup, assume it proves your hypothesis, and then go heads-down building the exact product shown in the mockups for 6-24 months and expect great results. Instead, you need to iterate both the mockups and the product in parallel, engaging with the market along the way.
As in the example I provided, sometimes customers will claim to love your mockups but then decline to pay for the product when it arrives. It's one of the first things every product manager learns in the real world.
> It's a good way to separate the technology decisions from the business decisions.
Trying to separate technology decisions from business decisions is a mistake. At the start of the process, you need technology, product, sales, and business to be tightly intertwined.
Dividing the labor and sending different roles in different directions is a mistake at an early stage company. You need to get everyone working together to iterate over and over again.
Also, user research can be done in a way that accounts for the fact that users will say they love something but never actually use it. I've done it many times.
All of the things you've mentioned should be done in tandem, but in practice that's pretty rare. The good thing is that it creates market opportunities for other companies; Amazon is a more extreme example, but it's basically the business model of AWS. Another example is Terraform. HashiCorp has done a poor job of keeping up with user needs, so add-ons and straight-up replacements are coming out of the woodwork to fill the feature and experience gaps that have existed in TF for years. You will see more competitors to TF and other HashiCorp products for exactly this reason. They aren't even close to the most glaring example, just the first one that came to mind.
Instead, it needs to be an iterative process. Build a little bit of product, tease it to the market, gauge interest, engage with customers, adjust trajectory as necessary, and continue to iterate.
Testing your business hypothesis before building anything is prone to generating false signals. Sometimes your customers don't realize they need what you're building until they see it and use it. Many times your customers will praise an idea, right up until you ask them to use it or pay for it.
A good example might be the recruiting company started by a famous HN commenter around 2015 that tried to use programming games as a recruiting filter. The idea received huge amounts of praise on HN and large numbers of waiting-list signups before they released anything. The demand and hypothesis appeared to be validated. Then they switched to the other extreme, building enormously complex systems for years without actually selling anything to their customers. Eventually they shut down because they realized the market didn't actually want what they were building. The initial signal was misleading, but going heads-down to build a product according to that initial signal was also a mistake. A better approach would have been to start recruiting up front and slowly iterate on improving it with programming games, rather than going off into the weeds to build a product that no one actually wanted.