Real security has a quantifiable difficulty to break through. Security through obscurity means the amount of effort needed to break through is unknown.
Example:
We know what it takes to break bcrypt. So if you've implemented bcrypt for password storage, great: not obscure, but known to be safe.
We don't know how long it'll take a random black hat to find out you're storing passwords in plaintext while hiding the fact cleverly.
If you release your source code, auditors and the community can quickly see that "oh, storing plaintext passwords is a bad idea" and fix the bug. If you don't, you might never know you're vulnerable, and the obscurity will ultimately cost you.
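To make the contrast concrete, here's a minimal sketch using the Python `bcrypt` package (the variable names and the commented-out plaintext line are mine, purely for illustration):

```python
import bcrypt

# bcrypt salts automatically and has a tunable work factor, so the
# effort needed to brute-force a leaked hash is quantifiable.
password = b"correct horse battery staple"
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verifying a login attempt against the stored hash.
assert bcrypt.checkpw(password, hashed)

# The "cleverly hidden" alternative any auditor would flag on sight:
# db.store(username, password)  # plaintext -- one leak and it's over
```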
> Obfuscation as a way to purposefully hide security holes is terrible.
I misspoke; I meant to say 'obscurity', which is the relevant concept in this thread, and there are most certainly reasons to have security through obscurity: once you've found a flaw, you must fix it before its obscurity vanishes. This is especially relevant in the development of fuzzers, where novel approaches can reveal 0-days.
MD4 and SHA-0 were both once believed to be good...
A bug would be us continuing to use those algorithms without being able to mitigate their flaws.
The fact that we can discover that these functions are not as good as we hoped, and then improve upon them, is an argument against obscurity. You can't do either unless the functions themselves are common knowledge.
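That kind of migration only works if your system is algorithm-agile. Here's a minimal sketch of one way to do it (my own toy scheme, not anything from the thread): tag each digest with the algorithm that produced it, so records made with a broken function can be found and re-hashed.

```python
import hashlib

CURRENT_ALG = "sha256"  # swap this out when the algorithm is deprecated

def make_digest(data: bytes, alg: str = CURRENT_ALG) -> str:
    # Store the algorithm name alongside the digest, e.g. "sha256$ab12...",
    # so records produced under an older algorithm stay identifiable.
    return f"{alg}${hashlib.new(alg, data).hexdigest()}"

def needs_rehash(stored: str) -> bool:
    # A record made with a now-deprecated algorithm (think MD4, SHA-0)
    # gets flagged for re-hashing the next time the data passes through.
    alg, _ = stored.split("$", 1)
    return alg != CURRENT_ALG
```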
Hell, if you're talking about a timescale of hours (not uncommon with 0-days), even a trivial cipher could slow down the people trying to understand, and then fix, your vulnerability for long enough to "get away" with the data/transfer/rootkit/whatever.
Obfuscation has its role; it's to retard understanding, not to prevent it.
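To illustrate how cheap that kind of speed bump is, here's a toy repeating-key XOR (my own example, not anything endorsed as real protection): it's trivially reversible and stops no determined analyst, but it does cost time that matters on a timescale of hours.

```python
from itertools import cycle

def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR is its own inverse: apply it twice with the
    # same key and you get the original back. It delays analysis;
    # it provides no real confidentiality.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

payload = xor_obfuscate(b"exfiltrated secret", b"\x5f\xa3")
assert xor_obfuscate(payload, b"\x5f\xa3") == b"exfiltrated secret"
```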