My approach is to slap together a bunch of stuff to get a better feel for what's really needed. Once things solidify, you can often see that you only need a few things, which you can then either custom develop or build with fewer components.
Starting from scratch without knowing exactly what's needed seems extremely slow, inflexible, and prone to wrong decisions.
For a big company it can make sense to take on a big fixed cost (custom software) to reduce a variable cost (CPU/storage/bandwidth consumption). A small company may not have the economies of scale to justify it.
I agree with the points being made here, but there's also the issue that your custom creation is the next employee's third-party solution.
And any time a dependency breaks, there's a chance that the external fix won't arrive in a timely fashion, or that whatever fix is right for you won't be compatible with that dependency's overall goals, so you'll end up maintaining an internal fork anyway.
I don't think there's a way around it. Professionals don't shy away from taking responsibility for all the code that goes into their product.
That's why I fab my own chips!
If I had unlimited time and resources that’s exactly what I’d do. Big companies often have both so that’s exactly what they do.
*applauds loudly*
It might even be bad for the project and/or client, but let's not forget these TV dinner people can't even cook!
It's far better, if you have the time and resources, to develop your own software from as low a level as possible. Each layer of abstraction you can shed is an opportunity to tailor your solution more closely to your problem and to build expertise in-house. Big tech companies know this, and it's why they do a lot of stuff in-house. The key is in knowing what to develop in-house, what to punt on, and when.