
Because the fuzzers can be used by blackhats to find vulnerabilities to exploit. The blackhats surely write their own fuzzers, but why give them a head start?

SEJeff
Say it with me, "Security through obscurity is no security at all". Surely you'd know this working for Mozilla :)

Probably for business reasons: it isn't meant to be a general-purpose fuzzer and is likely tied to internal Google infra.

nickpsecurity
Say it with me: "Obfuscation on top of solid security practices provably reduces risk against talented hackers." It's why the Skype attack took forever. They spent so long on reverse engineering where open software with obvious weaknesses would've been cracked almost instantly. Now, apply that concept to runtime protections, NIDS, recovery mechanisms, etc. The hacker has much more work to do, which increases the odds of discovery when something fails.

Obfuscation was also critical to spies helping their nations win many wars. There are so many ways for well-funded, smart attackers to beat you when they know your whole playbook. When they don't, you can last a while. If your setup changes a lot, you might last a lot longer. If the attackers aren't well funded or are maximizing ROI (most malware/hackers), they might move on to other targets that aren't as hard.

SEJeff
That's fair, but I still believe in the idea of Kerckhoffs's principle. The gist is that you should build a system on the assumption that all parts are open, because any security researcher (whitehat OR blackhat) worth their salt will eventually figure it out. Of all the layers in good defense in depth, obscurity is generally a mediocre one.

Edit: Spelled Auguste Kerckhoffs's name correctly.

bitexploder
I am with you, and I think everyone in infosec generally agrees with the principle, but I just wanted to add that from this whole "attacker math" perspective, often you just want to make your app a little less fun to play with than the next guy's. The bored security researcher will just move on to a less hardened target. When you design for real security and mitigate low- and medium-risk issues with obfuscation where it makes sense, it can lower the total cost of your information security program, because you control the exposure and the information out there.

As long as you know why and when to obfuscate vs. implement real security measures, it is a valuable tool. If I can make something that much more expensive for an attacker, I haven't bought any ironclad security, but I have moved a system from "likely to be hacked by a bored security researcher" to "likely to be hacked only by a highly motivated or well-funded attacker".

When doing time-boxed black-box assessment work, if I have no choice but to expend effort figuring things out (stripped symbols, obfuscated names, purposefully unfriendly backend things, etc.), it means I have less time to play with the important stuff. This exactly simulates, and helps inform, the "real attacker" picture.

So, it can be mediocre technically (no actual security increase in a technical sense), but perfect when it was really low cost to implement. When you spot those low-cost obfuscation opportunities, they are often worth doing :)

And on the content of the article, I was right there saying, "Okay, just because you now have your own pile of code doesn't mean it is any more secure!" In an absolute sense that's true. But if you can save a bundle of money by not having to rush out an emergency patch for some feature, distracting from important work, because QEMU got a new widely disclosed vuln... you are winning :)

nickpsecurity
Kerckhoffs's principle is similar to the principle of TCBs in high-assurance security. You want your main trust to be something small and highly vetted. The openness part adds extra attack or defense potential on top of that. It can only help if significantly more effort goes into bug-fixing the open tool than into exploiting it. Currently, that's backwards for software that isn't incredibly popular.

It should be straightforward to test our hypotheses with a few examples plus a highly simplified deployment. The scenario: private information served via a web server over TLS, with people only seeing what their credentials authorize them to. A NIDS watches for baseline activity and attack profiles. The attacker wants all of the data and has one month to get in.

Option 1: Regular HTTP server on Linux leveraging OpenSSL for protection of secrets. All configuration data except passwords and private keys are published. This includes what the traffic looks like.

Option 2: Unknown, non-mainstream OS w/ decent quality & defaults running unknown server with unknown crypto & compiler-assisted protections in unknown configuration with unknown traffic patterns.

Against which do you think the attacker will succeed most easily? Your interpretation of Kerckhoffs suggests No. 1 is going to be hardest for the attacker. Mine says No. 2 is going to give them a lot of detective work that's also likely to set off the NIDS. We do know No. 1 gets smashed regularly and with low effort. The evidence leans in my favor so far.

Option 3: Runs OpenBSD on POWER processors with an HSM doing the crypto. The HSM is a black box with tamper resistance that's also a black box. It cost a fortune, like the POWER server itself. Reverse engineering the HSMs, especially if each attempt bricks one, can take tons of time and money. The server is advertised as an Intel server running FreeBSD or Linux with options turned off. Think your attacker will hack it, since obfuscation = obscurity = no security? And if they could, how many are even able to try, given the economic cost and skills required?

Option 4: Runs on a Boeing SNS server w/ HSM. That's one of the earliest systems in high-assurance security, certified through NSA pentesting in the early 1990s. No reported hacks to this day (20+ years), although there's undoubtedly something to hit in there. It's also unavailable for purchase outside defense. If it were available, you'd probably be spending $60-150k a unit, if the XTS-400 is any indicator of the cost of low-volume, high-security servers. Docs say it has Xeon CPUs, custom firmware, a "transactional" kernel (also tiny), and MLS policy. Think your attacker will do better than the others did over two decades?

Option 5: Uses the LOCK platform, an ancestor of SELinux made by Secure Computing Corporation. It did Type Enforcement at the level of CPU & memory interactions, with a security kernel at the software layer and a built-in crypto-processor called SIDEARM. A UNIX layer runs deprivileged for the server-side app. Security-critical components were developed & reviewed in a rigorous way. Although it's no longer available, the organization in this scenario still has the installation media, which it uses on obsolete computers it buys off the Internet. The untrusted networking interface just says it's a BSD. So, it's a high-security product that's not available for sale or on eBay and that looks like an old UNIX box from the outside. How do you think the remote attacker will get in?

I hope I've amply demonstrated that Kerckhoffs's principle, or at least the interpretation you're bringing, is incorrect. The best approach is a combo of solid, vetted security with obfuscation. Some obfuscations can even make it impossible for the vast majority of attackers to hack the system. They'll go for supply-chain poisoning or infiltration before trying to hack SNS or LOCK. If physical and personnel security are good, then that obfuscation just bought you a lot.

zenlikethat
> They spent so long on reverse engineering where open software with obvious weaknesses would've been cracked almost instantly

Not necessarily true. If the software is available to everyone, white hats are much more likely to find bugs and help fix them. They might actually have skin in the game alongside you.

White hats won't bother with proprietary software at all, and baddies sure aren't going to turn in their exploits; they'll just sit on them or sell them. If you're being targeted by sophisticated nation-state attackers, keeping the code private isn't going to help you. These are people who make worms like Stuxnet, MITM major Internet services, and pop government employees' Gmail accounts for their full-time job.

You're just reciting the same tired old rhetoric that security through obscurity is a valid defense mechanism. It's just not.

nickpsecurity
" If the software is available to everyone white hats are much more likely to find bugs and help fix them. They might actually have skin in the game alongside you."

The state of most FOSS security says otherwise. A better assumption is that virtually nobody will review the code for security unless you get lucky. If they do, they won't review much of it. Additionally, unless its design is rigorous, new features will often add vulnerabilities faster than casual reviewers can spot and fix them. This situation is best for the malware authors.

"White hats won't bother with proprietary software at all"

You mean there's never been a DEFCON or Black Hat conference on vulnerabilities found in proprietary systems + responsible disclosure following? I swore I saw a few.

Regardless, proprietary software should be designed with good QA plus pentesting contracts. Those relying on white hats to dig through their slop are focusing on extra profit instead of security. ;) White hats will also definitely improve proprietary software for little or no payment if they can build a name finding flaws in it. Some even do it on their own for the same reason. This effect goes up if the proprietary software is known for good quality, where finding a bug is more bragworthy.

"You're just reciting the same tired old rhetoric that security through obscurity is a valid defense mechanism. It's just not."

You're misstating my points to create a strawman that's easier to knock down. I said attacking unknowns takes more effort than attacking knowns. I also said that, if monitoring is employed, the odd behavior that comes with exploration increases the odds that alarms will be set off. These are both provably true. That means obfuscation provably can benefit security. Whether it will varies on a case-by-case basis per obfuscation, protected system, and use case.

Feel free to look at my obfuscated options in my recent reply to SEJeff and tell me how you'd smash them more easily than a regular box running Linux and OpenSSL whose source & configs are openly published to allegedly benefit their security.

cpeterso OP
There's a difference between the system being open and the test tools being open. Mozilla has open sourced most of its fuzzers [1], but only after they are no longer finding existing bugs. The fuzzers are then used to prevent regressions.

[1] https://github.com/MozillaSecurity

felipemnoa
Not sure why you are getting down-voted. "Security through obscurity is no security at all" is a very true statement. It is the reason why the code for AES is known to everybody.

dgfgfdagasdfgfa
Except it's not true. Obscurity still has a cost to decipher, and that cost may be exactly the end you're looking for. Not all security-oriented goals are make-or-break.

Hell, if you're talking about a time scale of hours (not uncommon with 0-days), even a trivial cipher could slow down the people attempting to understand (and then fix) your vulnerability for long enough to "get away" with the data/transfer/rootkit/whatever.

Obfuscation has its role; it's there to retard understanding, not to prevent it.
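
A purely illustrative sketch of that kind of speed bump (hypothetical, in Python; nobody is claiming this is real security): a repeating-key XOR "cipher" that any analyst will eventually undo, just not instantly.

```python
# Hypothetical illustration: trivial XOR obfuscation. It provides no
# cryptographic strength, only a delay for whoever has to recognize and
# reverse it -- i.e., it retards understanding rather than preventing it.
KEY = b"not-a-secret"

def xor_obfuscate(data: bytes, key: bytes = KEY) -> bytes:
    # XOR each byte with the repeating key; applying the function twice
    # restores the original input (XOR is its own inverse).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"config blob an analyst wants to read"
blob = xor_obfuscate(payload)

assert blob != payload                 # no longer readable at a glance
assert xor_obfuscate(blob) == payload  # symmetric: the same call decodes
```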

kajecounterhack
Obfuscation as a way to prevent copyright violation makes sense. Obfuscation as a way to purposefully hide security holes is terrible. "Security through obscurity is not real security" is true, and has nothing to do with obfuscation in general. It has more to do with auditability.

Real security has a quantifiable difficulty to break through. With security through obscurity, the quantity of effort needed to break through is an unknown.

Example:

We do know what it takes to break bcrypt. So if you've implemented bcrypt for security, great. Not obscure, but known to be safe.

We don't know how long it'll take a random black hat to find out you're storing passwords in plaintext but hiding the fact cleverly.
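
A minimal sketch of the known-cost side of that contrast, using the third-party `bcrypt` package for Python (hypothetical illustration, not code from the thread):

```python
# Hypothetical sketch: the quantifiable-cost approach to password storage.
import bcrypt  # third-party package: pip install bcrypt

password = b"correct horse battery staple"

# The work factor (rounds) makes the cost per guess explicit and tunable;
# that known cost is what "known to be safe" refers to above.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification re-derives the hash from the salt embedded in `hashed`.
assert bcrypt.checkpw(password, hashed)

# The obscure alternative -- storing the plaintext and cleverly hiding it --
# has no quantifiable breaking cost at all, only an unknown discovery time.
```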

If you release your source code, auditors / the community can quickly see that "oh, storing plaintext passwords is a bad idea" and fix the bug. If you don't, you might not know you're vulnerable, and the obscurity will ultimately cost you for your ineptitude.

dgfgfdagasdfgfa
I guess you can call certain forms of protection people use "real" security versus "unreal" security, but I don't see your demarcation in practice.

> Obfuscation as a way to purposefully hide security holes is terrible.

I misspoke; I meant to say 'obscurity', which is the relevant concept in this thread, and there are most certainly reasons to have security through obscurity: once you've found a flaw, you must fix it before its obscurity vanishes. This is certainly relevant to the development of fuzzers, where novel approaches could reveal 0-days.

londons_explore
Hashing algorithms have historically been mostly obscurity. It turns out we're really good at coming up with functions we think are one-way and later find out aren't.

MD4 and SHA-0 were both once believed to be good...

kajecounterhack
I don't think it's so much about obscurity as it is about an arms race. Hashing algorithms are constantly being measured up against new exploit methods, faster cracking speeds, etc. It's a feature that we found collisions and other problems, not a bug.

A bug would be us continuing to use those algorithms without being able to mitigate their flaws.

The fact that we can find out that these functions are not as good as we hoped, and improve upon them, is an argument against obscurity. You can't do those things unless knowledge of these functions is common knowledge.
