You might be speaking a little more broadly than I am interpreting.
Startups are a great example. When you raise your first chunk of money, its size isn't really driven by a carefully considered, detailed plan with engineering hours estimated per task. What you get is basically determined by what's currently fashionable among angels and small-end VCs, plus who's doing your fundraising. (If you're Jeffrey Katzenberg and Meg Whitman, you can raise $1bn. [1] https://en.wikipedia.org/wiki/Quibi But the rest of us have to make do with what we can get.)
So at that point you have a strong constraint (whatever you raised) and some relatively clear goal. As I said, cost isn't nearly as relevant as ROI, and nobody can put real numbers on the R in a startup. From there you have two choices.
One is just to build whatever the CEO (or some set of HiPPOs) wants. Then you launch and find out whether or not you're fucked. The other is to take something akin to the Lean Startup approach, where you iteratively chase your goal, testing product and marketing hypotheses by shipping early and often.
In that latter context, are people making intuitive ROI judgments? Absolutely. Everything you try has people doing what you could casually call estimating. But does that require an estimation practice, where engineers carefully examine the work and produce numbers? Not at all. Again, I've done it many times. Especially in a startup context, the effort required for estimation is much better put into maximizing learning per unit of spending.
And how do you do that? Relentless prioritization. I was once part of a team that was so good at it that they launched with no login system. Initial users just typed their names into text fields. They wanted proper auth, and eventually they built it, but for demonstrating traction and raising money there were higher priorities. It worked out for them; they grew to millions of users and were eventually acquired for tens of millions. On very little investor money.
Being great at prioritization makes estimation way less necessary. The units of work get small enough that the law of large numbers is on your side. And the amount of learning gained from each release changes both the R and I numbers frequently enough that formal estimates don't have a long shelf life.
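(To make the law-of-large-numbers point concrete, here's a minimal, hypothetical simulation. It assumes each small task's actual effort is an unbiased random multiple of its rough guess; none of these numbers come from a real project.)

    import random

    # Hypothetical model: each task's actual effort is its rough guess times a
    # random multiplier between 0.5x and 1.5x (unbiased noise, for illustration).
    # Summing many small tasks averages out the per-task error, so the total
    # lands much closer, in relative terms, than a single big-bang guess would.
    random.seed(0)

    def relative_error(n_tasks, guess_per_task=1.0):
        actual = sum(guess_per_task * random.uniform(0.5, 1.5) for _ in range(n_tasks))
        guessed = guess_per_task * n_tasks
        return abs(actual - guessed) / guessed

    for n in (1, 10, 100, 1000):
        trials = [relative_error(n) for _ in range(2000)]
        print(f"{n:5d} tasks: mean relative error {sum(trials) / len(trials):.1%}")

The exact numbers don't matter; the point is that slicing the work finely means the aggregate drifts far less than any single estimate does, so precise per-task estimation buys you very little.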
So I get what you're saying in theory, but I'm telling you in practice it's different.
I think you are entirely missing the point: everything you put into your rebuttal is about estimates, whether in time, money, or resources.
What I am talking about here, though, is a practice of software estimation where programmers produce detailed numbers on the amount of time requested work will take. Which is certainly the common meaning of estimating around here, and also what the original article is about.
Estimates are difficult, and in unhealthy environments are weaponized against developers. That doesn't mean they're unnecessary or impossible.
If developers (or anyone giving estimates) discover that the initial estimate was based on faulty information, then they need to push that information back to whoever they're reporting to (Team Lead, Product Owner, Manager, customer, angel investor...). The receiver of that information then needs to decide how to react to the changes.
The entire premise of a project is "look at this, with the intent to find X, and, if that's not possible, break it down so we can create more projects to work toward that goal," which is itself an estimate, or a breakdown into sub-projects that each come with estimates.
There is no scenario, when developing software professionally or even on a side project where others expect you to complete work at some point, in which estimation isn't appropriate or necessary.
One of the many misconceptions in the original comment in this thread is that "worthwhile software is usually novel", which is not the case without a very specific and novel definition of worthwhile that I don't believe was intended.
I think that writing software that isn't novel fails to be worthwhile by a perfectly ordinary, mainstream definition of "worthwhile".
That's a completely valid definition of worthwhile software, but to claim it's impossible to create an estimate to complete said development is absurd.
I hope this isn't a semantics game where things like "1 - 6 months" counts as an estimate in this context.
The point way back up this thread was that accurate timelines for complicated, novel work have large error bars, but those error bars aren't as bad as the equivalent error bars on estimating whatever "return" it's being pitted against.
I've written what is probably several pages now in response to two individuals who are redefining terms in order to play the exact semantic games you mentioned, all to claim that no estimation of any sort needs to be done. We seem to be done talking past each other now that I've explicitly pointed out their use of non-standard terms and my suspicions of why (having also, unfortunately, lived through software development managed by Gantt chart and other unpleasant experiences where someone who had no idea what they were managing was in control of a project), which is fine with me.
Feel free to describe your experience in practice when working in an organization where software developers answer to no one but themselves and are never asked for any justification for their progress or any projections of when they will be finished (both of which would require estimation to provide).
If you are able to tell stakeholders something like "you'll be done in 1-6 months," or provide no insight at all into when your tasking will be done, do no tracking of progress internally, and perform no collaboration around the completion of synchronous tasks within your team, then I'll acknowledge that no estimation is taking place during that process.