It wasn't in their product. It was just on a dev's machine.

I think the OP is aware of that and I agree with them that it’s bad practice despite how common it is.

For example, with AWS you can use the AWS CLI to sign in, and that goes through the HTTPS auth flow to provide you with temporary access keys. Which means:

1. You don’t have any access keys in plain text

2. Even if your env vars are also stolen, those AWS keys expire within a few hours anyway.
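
To make that concrete, here's a minimal sketch of what the SSO-backed config looks like; the account ID, role name, region and start URL are placeholders, not anything from the article:

```
# ~/.aws/config
[profile dev]
sso_session = my-sso
sso_account_id = 111111111111
sso_role_name = DeveloperAccess
region = eu-central-1

[sso-session my-sso]
sso_start_url = https://example.awsapps.com/start
sso_region = eu-central-1
```

`aws sso login --profile dev` then runs the browser auth flow, and the CLI/SDKs pick up short-lived credentials for that profile from there.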

If the cloud service you’re using doesn’t support OIDC or any other ephemeral access keys, then you should store them encrypted. There are numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren’t pasting them into your shell, otherwise you’ll then have those keys in plain text in your .history file.
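
As a rough sketch of the GPG route (the file name here is made up, and the exact workflow will vary):

```
# one-off: encrypt the keys, then get rid of the plain text copy
gpg --encrypt --recipient you@example.com aws-keys.env
rm aws-keys.env

# when needed: load them into the current shell only, so nothing lands in .history
eval "$(gpg --quiet --decrypt aws-keys.env.gpg)"
```

where `aws-keys.env` holds the usual `export AWS_ACCESS_KEY_ID=...` / `export AWS_SECRET_ACCESS_KEY=...` lines. A password manager CLI (e.g. `pass` or 1Password's `op`) gets you the same effect with less ceremony.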

I will agree that it does take effort to get your cloud credentials set up in a convenient way (easy to access, but without those access keys in plain text). But if you’re doing cloud stuff professionally, like the devs in the article, then you really should learn how to use these tools.

> If the cloud service you’re using doesn’t support OIDC or any other ephemeral access keys, then you should store them encrypted. There are numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren’t pasting them into your shell, otherwise you’ll then have those keys in plain text in your .history file.

This doesn't really help against a supply chain attack though, because you're still going to need to decrypt those keys for your code to read at some point, and the attacker has visibility on that, right?

Like, the shell isn't the only thing the attacker has access to; they also have access to variables set in your code.

I agree it doesn’t keep you completely safe. However, scanning the file system for plain text secrets is significantly easier than the alternatives.

For example, for vars to be read, you’d need the compromised code to be part of the same project. But if you scan the file system, you can pick up secrets for any project written in any language, even those which differ from the code base that pulled the compromised module.

This example applies directly to the article; it wasn’t their core code base that ran the compromised code but instead an experimental repository.

Furthermore, we can see from these supply chain attacks that they do scan the file system. So we do know that encrypting secrets adds a layer of protection against the attacks happening in the wild.

In an ideal world, we’d use OIDC everywhere and not need hardcoded access keys. But in instances where we can’t, encrypting them is better than not.

It's certainly a smaller surface that could help. For instance, a compromised dev dependency that isn't used in the production build would not be able to get to secrets for prod environments at that point. If your local tooling for interacting with prod stuff (for debugging, etc.) is set up in a more secure way that doesn't leave long-lived, high-value secrets sitting on the filesystem, then other compromised things have less access to them. Add good, phishing-resistant 2FA on top, and even with a keylogger to grab your web login creds for that AWS browser-based auth flow, an attacker couldn't reuse them remotely.

(And that sort of ephemeral-login-for-aws-tooling-from-local-env is a standard part of compliance processes that I've gone through.)

> 1. You don’t have any access keys in plain text

That's not correct. The (ephemeral) keys are still available. Just do `aws configure export-credentials --profile <YOUR_OIDC_PROFILE>`

Sure, they'll likely expire in 1-24 hours, but that can be more than enough for the attacker.

You also can try to limit the impact of the credentials by adding IP restrictions to the assumed role, but then the attacker can just proxy their requests through your machine.
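
(For reference, that kind of restriction is usually just a deny statement with an `aws:SourceIp` condition attached to the role; the CIDR below is a placeholder, and as said it does nothing against traffic proxied through the victim's machine.)

```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyRequestsFromOutsideTheOffice",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": { "aws:SourceIp": ["203.0.113.0/24"] }
    }
  }]
}
```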

> That's not correct. The (ephemeral) keys are still available. Just do `aws configure export-credentials --profile <YOUR_OIDC_PROFILE>`

That’s not on the file system though. Which is the point I’m directly addressing.

I did also say there are other ways to pull those keys and that this isn’t a complete solution. But it’s still vastly better than having those keys in clear text on the file system.

Arguing that there are other ways to circumvent security policies is a lousy excuse to remove security policies that directly protect you against known attacks seen in the wild.

> Sure, they'll likely expire in 1-24 hours, but that can be more than enough for the attacker.

It depends on the attacker, but yes, in some situations that might be more than long enough. Which is why I would strongly recommend people don’t set their OIDC creds to 24 hours. 8 hours is usually long enough, and shorter should be required if you’re working on sensitive/high-profile systems. And in the case of this specific attack, 8 hours would have been sufficient given the attacker probed AWS while the German team were asleep.
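
For anyone wondering where that knob lives: with IAM Identity Center the session length is configured per permission set. A rough sketch (the ARNs are placeholders):

```
aws sso-admin update-permission-set \
  --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
  --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
  --session-duration PT8H
```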

But again, I do agree it’s not a complete solution. However, it’s still better than hardcoded access keys saved in plain text on the file system.

> You also can try to limit the impact of the credentials by adding IP restrictions to the assumed role, but then the attacker can just proxy their requests through your machine.

In practice this (attackers proxying) never happens in the wild. But you’re right, that might be another countermeasure they employ one day.

Security is definitely a game of “cat and mouse”. But I wouldn’t suggest people use hardcoded access keys just because there are counter attacks to the OIDC approach. That would be like “throwing the baby out with the bath water.”

> That’s not on the file system though.

They are. In `~/.aws/cli/cache` and `~/.aws/sso/cache`. AWS doesn't do anything particularly secure with its keys. And none of the AWS client libraries are designed for the separation of the key material and the application code.
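
If anyone wants to verify that on their own machine, something like this shows it (assuming `jq` is installed; the exact cache layout can differ between CLI versions):

```
ls ~/.aws/cli/cache ~/.aws/sso/cache
jq '.Credentials | keys' ~/.aws/cli/cache/*.json
```

As far as I can tell, the cached role credentials are just JSON files containing `AccessKeyId`, `SecretAccessKey`, `SessionToken` and `Expiration`.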

I also don't think it's even possible to use the commonly available TPMs or Apple's Secure Enclave for hardware-assisted signatures.

> 8 hours is usually long enough. And in the case of this specific attack, 8 hours would have been sufficient given the attacker probed AWS while the German team were asleep.

They could have just waited a bit. 8 hours does not materially change anything, the credential is still long-lived enough.

I love SSO and OIDC but the AWS tooling for them is... not great. In particular, they have poor support for observability. A user can legitimately have multiple parallel sessions, and it's more difficult to parse the CloudTrail. And revocation is done by essentially pushing the policy to prohibit all the keys that are older than some timestamp. Static credentials are easier to manage.

> In practice this (attackers proxying) never happens in the wild. But you’re right, that might be another countermeasure they employ one day.

If I remember correctly, LastPass (or was it Okta?) was hacked by an attacker spying on the RAM of the process that had credentials.

And if you look at the timeline, the attack took only minutes to do. It clearly was automated.

I tried to wargame some scenarios for hardware-based security, but I don't think it's feasible at all. If you (as a developer) have access to some AWS system, then the attacker running code on your behalf can also trivially get it.

You can use keyring/keychain with `credential_process`, although it's only a minor shift in security from "being able to read from the fs" to "being able to execute a binary".
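
Roughly like this in `~/.aws/config`, where the helper binary is whatever wraps your keychain (the name here is made up) and just has to print the documented JSON shape to stdout:

```
[profile work]
credential_process = /usr/local/bin/aws-creds-from-keychain
```

The helper fetches the secret from the OS keychain and emits something like `{"Version": 1, "AccessKeyId": "...", "SecretAccessKey": "...", "SessionToken": "...", "Expiration": "..."}`.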

> They are. In `~/.aws/cli/cache` and `~/.aws/sso/cache`. AWS doesn't do anything particularly secure with its keys.

Thanks for the correction. That’s disappointing to read. I’d have hoped they’d have done something more secure than that.

> And none of the AWS client libraries are designed for the separation of the key material and the application code.

The client libraries can read from env vars too. Which isn’t perfect either, but on some OSs it can be more secure than reading from the FS.
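
As a rough example of what I mean on macOS (the keychain item names are made up), you can pull the keys out of the keychain into the environment of a single command, so nothing sits on disk:

```
AWS_ACCESS_KEY_ID="$(security find-generic-password -s aws-access-key-id -w)" \
AWS_SECRET_ACCESS_KEY="$(security find-generic-password -s aws-secret-access-key -w)" \
aws sts get-caller-identity
```

Though of course anything that can execute code as you can make the same keychain lookup, so it’s only a marginal improvement.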

> If I remember correctly, LastPass (or was it Okta?) was hacked by an attacker spying on the RAM of the process that had credentials.

That was a targeted attack.

But again, I’m not suggesting OIDC solves everything. But it’s still more secure than not using it.

> And if you look at the timeline, the attack took only minutes to do. It clearly was automated.

Automated doesn’t mean it happens the moment the host is compromised. If you look at the timeline, you see that the attack happened overnight, hours after the system was compromised.

> They could have just waited a bit. 8 hours does not materially change anything, the credential is still long-lived enough.

Except when you look at the timeline of this specific attack, they probed AWS more than 8 hours after the start of the working day.

A shorter TTL reduces the window of attack. That is a material change for the better. Yes, I agree that on its own it’s not a complete solution. But saying “it has no material benefit, so why bother” is clearly ridiculous. By the same logic, you could argue “why bother rotating keys at all, we might as well keep the same credentials for years”…

Security isn’t a Boolean state. It’s incremental improvements that leave the system, as a whole, more of a challenge to attack.

Yes there will always be ways to circumvent security policies. But the harder you make it, the more you reduce your risk. And having ephemeral access tokens reduces your risk because an attacker then has a shorter window for attack.

> I tried to wargame some scenarios for hardware-based security, but I don't think it's feasible at all. If you (as a developer) have access to some AWS system, then the attacker running code on your behalf can also trivially get it.

The “trivial” part depends entirely on how you access AWS and what security policies are in place.

It can range anywhere from "forced to proxy from the host's machine from inside their code base while they are actively working" to "has indefinite access from any location at any time of day".

A sufficiently advanced attack can gain access but that doesn’t mean we shouldn’t be hardening against less sophisticated attacks.

To use an analogy, a burglar can break a window to gain access to your house, but that doesn’t mean there isn’t any benefit in locking your windows and doors.

Agreed.

> A sufficiently advanced attack can gain access but that doesn’t mean we shouldn’t be hardening against less sophisticated attacks.

I'm a bit worried that with the advent of AI, there won't be any real difference between these two. And AI can do recon, choose the tools, and perform the attack all within a couple of minutes. It doesn't have to be perfect, after all.

I've been thinking about it, and I'm just going to give up on trying to secure the dev environments. I think it's a done deal that developers' machines are going to be compromised at some point.

For production access, I'm going to gate it behind hardware-backed 2FA with a separate git repository and build infrastructure for deployments. Read-write access will be available only via RDP/VNC through a cloud host with mandatory 2FA.

And this still won't protect against more sophisticated attackers that can just insert a sneaky code snippet that introduces a deliberate vulnerability.

They are on the filesystem though.

Log in, then check your `.aws/login/cache` folder.

Oh that’s disappointing. Thanks for the correction.
