
Do you have a practical sense of the level of mischief possible in the sandbox? It seems like a game of regexp whack-a-mole to me, a predictable recipe for decades of security problems. Allow- and deny-lists for files and domains seem about as secure as backslash-escaping user input before passing it to the shell.

If you configure it with the "no network access" environment, there's nothing bad that can happen. The worst case is that you waste a bunch of CPU cycles in a container somewhere in Anthropic's infrastructure.

Their "restricted network access" setting looks questionable to me - it allow-lists a LOT of stuff: https://docs.claude.com/en/docs/claude-code/claude-code-on-t...

If you configure your own allow-list you can restrict it to just the domains you trust - the list is enforced by a separate HTTP/HTTPS proxy, described here: https://docs.claude.com/en/docs/claude-code/claude-code-on-t...
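
To make that concrete, here is a minimal sketch of the kind of allow-listing proxy that implies: a CONNECT proxy that only tunnels TLS traffic to domains on an explicit list and refuses everything else. The domains, port, and overall structure are illustrative assumptions, not Anthropic's actual implementation.

    # Illustrative sketch only: a tiny HTTPS (CONNECT) proxy that refuses
    # any domain not on an explicit allow-list. Not the real Claude Code
    # proxy; ALLOWED_DOMAINS and the port are made-up example values.
    import socket
    import threading

    ALLOWED_DOMAINS = {"api.anthropic.com", "github.com"}  # hypothetical

    def allowed(host: str) -> bool:
        # Permit exact matches and subdomains of allow-listed entries.
        return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes one way until either side closes.
        try:
            while data := src.recv(65536):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def handle(client: socket.socket) -> None:
        request = client.recv(65536).decode("latin-1", errors="replace")
        if not request:
            client.close()
            return
        # Expect "CONNECT host:port HTTP/1.1" from an HTTPS client.
        parts = request.split("\r\n", 1)[0].split()
        if len(parts) != 3 or parts[0] != "CONNECT":
            client.sendall(b"HTTP/1.1 405 Method Not Allowed\r\n\r\n")
            client.close()
            return
        host, _, port = parts[1].partition(":")
        if not allowed(host):
            client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")
            client.close()
            return
        upstream = socket.create_connection((host, int(port or 443)))
        client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
        # Tunnel bytes in both directions; TLS stays end-to-end.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    def main() -> None:
        server = socket.create_server(("0.0.0.0", 3128))
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()

Because the proxy only sees the CONNECT target and never decrypts the traffic, it can enforce *which hosts* the sandbox talks to without inspecting what is said to them.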

How do you run a remote LLM with no network access?
OpenAI Codex, Claude Code for web and Gemini Jules have all managed that.

You use firewalls to prevent code running inside the container from opening outbound network connections. The harness that surrounds it can still be made accessible over the network.
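
As a rough illustration of that split, an egress lockdown step inside the container might drop all new outbound connections except to a designated proxy, while still letting replies flow for connections the harness accepts from outside. This is only a sketch; the proxy address and port are made up, and none of the vendors mentioned necessarily implement it this way.

    # Rough sketch of container egress lockdown via iptables. Not any
    # vendor's actual setup; PROXY_IP and PROXY_PORT are example values.
    import subprocess

    PROXY_IP = "10.0.0.2"   # hypothetical address of the allow-listing proxy
    PROXY_PORT = "3128"

    RULES = [
        # Default: refuse every outbound packet.
        ["iptables", "-P", "OUTPUT", "DROP"],
        # Keep loopback working for local tooling.
        ["iptables", "-A", "OUTPUT", "-o", "lo", "-j", "ACCEPT"],
        # Let replies flow for connections initiated from outside (the harness).
        ["iptables", "-A", "OUTPUT", "-m", "conntrack",
         "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
        # The only new outbound connections allowed are to the proxy.
        ["iptables", "-A", "OUTPUT", "-p", "tcp",
         "-d", PROXY_IP, "--dport", PROXY_PORT, "-j", "ACCEPT"],
    ]

    def lock_down_egress() -> None:
        for rule in RULES:
            subprocess.run(rule, check=True)

    if __name__ == "__main__":
        lock_down_egress()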
