Maximum impact is a long-term proposition.
Here's a real scenario. I worked for a company that evaluated by impact. They had a cellular modem with known issues in the field. A replacement was designed to fix those issues, but it couldn't be backwards compatible. The cost of immediately upgrading the field units was high, so deployment was delayed to a cheaper, later date. One way of looking at this is that the decision saved millions of dollars, and that argument was made. Then, after the evaluation but before the deployment, a set of the older field units failed in a way that made headlines across the country, a failure the new units would have prevented.
So, was the impact of those decisions negative the whole time in an unknowable way? Did their impact become negative as soon as the incident occurred? If the incident was still possible but hadn't occurred, would the impact be different?
People aren't good at evaluating things that haven't happened yet, so they tend to focus on what they can see immediately in front of them. That incentivizes engineers to build things with immediate, short-term impact and to discount long-tail risks that can't be reliably evaluated.
Did these people have a business impact? I guess Tesla made Westinghouse a lot of money at one point, but that seems far from the most distinguishing thing that made him great at what he did. If anything, he was mediocre at business.
Even looking at current titans of the computing industry: I admire the work done by orgs like Nvidia and humans like Geoff Hinton, but they also got lucky. What they were doing for completely different reasons ended up benefiting tremendously from the galaxy-scale data harvesting that came with the Internet becoming primarily ad-monetized, which they couldn't have known would happen. How many equally great engineers toiled in obscurity on dead ends while doing equally great work? Doug Lenat was just as great an AI engineer as Geoff Hinton, if not better. History just went one way and not the other, due to factors completely outside the control of either of them.
You can build systems for efficiency, or build them to minimize disaster. It's really hard to see the negative business impact that was prevented.
the exception was places where leadership already thought in the same terms about software quality/etc, which meant I didn't have to do much convincing :P
how would you build teams or structures to support that sort of holistic thinking about software?
this is why i advocate the engineer/manager pendulum so strongly. we get better results when management has strong tech skills (and staff+ engineers have organizational skills as well).