selinkocalar
Delve is an AI-native compliance platform that helps hundreds of fast-growing companies get SOC 2, HIPAA, ISO, etc. compliant in days, not months.

https://delve.co/book-demo


  1. P2P app distribution is cool in theory but the security model gets complex fast. Without centralized review, you're basically trusting individual developers to not ship malicious code.
  2. The technical implementation is messy too. Most age verification systems either don't work well or create massive privacy risks by requiring government ID uploads.
  3. As someone who's built an entire business on "anti-screenshots," this is brilliant.

    PDF redaction fails are everywhere and it's usually because people don't understand that covering text with a black box doesn't actually remove the underlying data.

    I see this constantly in compliance. People think they're protecting sensitive info but the original text is still there in the PDF structure.
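
    A minimal sketch of the failure mode with PyMuPDF, assuming a hypothetical redacted.pdf where someone drew a black rectangle over the sensitive text instead of running real redaction:

      import fitz  # PyMuPDF: pip install pymupdf

      doc = fitz.open("redacted.pdf")  # hypothetical file, box drawn over text
      for page in doc:
          # get_text() reads the PDF's text layer directly, so it returns
          # the "hidden" text even though a rectangle is painted on top
          print(page.get_text())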

  4. The M-series chips really changed the game here.
  5. This is the kind of thing that works until it spectacularly does not. XML parsing with regex is fine for simple, well-controlled cases but breaks as soon as you hit edge cases. We learned this the hard way trying to parse security questionnaire exports. Started with regex, ended up rewriting with a proper XML parser after hitting too many weird formatting issues.
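
    A toy illustration (made-up XML, not the real questionnaire format) of the kind of edge case that finally pushed us to a real parser:

      import re
      import xml.etree.ElementTree as ET

      doc = '<answer id="q1"><![CDATA[Use <strong>TLS 1.2+</strong>]]></answer>'

      # The regex approach: fine until CDATA, nesting, or entities show up.
      # Here it hands back the raw CDATA wrapper instead of the answer text.
      match = re.search(r"<answer[^>]*>(.*)</answer>", doc)
      print(match.group(1))  # <![CDATA[Use <strong>TLS 1.2+</strong>]]>

      # A real parser resolves CDATA and escaping for you
      root = ET.fromstring(doc)
      print(root.text)  # Use <strong>TLS 1.2+</strong>
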
  6. CLI tools have weaker security models than their GUI counterparts because the assumption is usually that if you have terminal access, you already have elevated privileges.

    But in shared environments or CI/CD pipelines, this doesn’t work. And the credential exposure through process lists is pretty bad.
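
    A rough sketch of both points in Python; "some-cli" and the variable names are stand-ins, not any real tool's interface:

      import os
      import subprocess

      # Leaky: the secret lands in argv, visible to anyone via `ps aux`
      # subprocess.run(["some-cli", "--api-key", "sk-live-123"])

      # Better: pass it through the environment (or stdin), which doesn't
      # show up in the process list
      subprocess.run(
          ["some-cli", "deploy"],
          env={**os.environ, "SOME_CLI_TOKEN": os.environ["API_KEY"]},
          check=True,
      )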

  7. We've seen cases where AI-generated code includes snippets that look suspiciously like they came from proprietary codebases. If an AI model was trained on copyrighted code and reproduces patterns from it, who's liable? The training process makes it really hard to trace back to original sources.
  8. The compute requirements for these models are getting wild!! We're already seeing costs become a real constraint for smaller companies trying to build AI features.

    And if you're building anything serious with AI, you're basically dependent on a handful of cloud providers who control the GPU supply.

  9. The combination of LLMs and formal verification tools is pretty interesting. We've been thinking about this for compliance automation - there are a lot of regulatory requirements that could theoretically be expressed as formal constraints. Curious about the performance though. Z3 can be really slow on complex problems, and if you're chaining that with LLM calls, the latency could get rough for interactive use cases.
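
    For a flavor of what that could look like, here's a toy Z3 encoding of a made-up retention policy (the numbers are illustrative, not actual regulatory text):

      from z3 import Int, Bool, Solver, Implies  # pip install z3-solver

      retention_days = Int("retention_days")
      contains_phi = Bool("contains_phi")

      s = Solver()
      # Made-up rule: records containing PHI must be kept at least six years
      s.add(Implies(contains_phi, retention_days >= 6 * 365))
      # A proposed system configuration to check against the rule
      s.add(contains_phi == True, retention_days == 365)

      print(s.check())  # unsat -> this configuration violates the policy
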
  10. Garbage collection improvements are always welcome. We've had some .NET services where GC pauses were causing noticeable latency spikes under load.

    I think the regional GC approach is promising for applications with large heaps. I'll bet most web apps won't notice much difference though.

  11. We've experimented with different formats for feeding data to LLMs and markdown tables usually work pretty well. JSON is more structured but harder for the model to parse visually.

    CSV works okay but you lose a lot of context about what the columns actually represent. The model performs better when it can 'see' the structure clearly.
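
    A minimal version of the markdown-table helper we ended up with (column names here are just examples):

      def to_markdown_table(rows: list[dict]) -> str:
          """Render rows as a markdown table so the model 'sees' the columns."""
          headers = list(rows[0])
          lines = [
              "| " + " | ".join(headers) + " |",
              "| " + " | ".join("---" for _ in headers) + " |",
          ]
          lines += ["| " + " | ".join(str(r[h]) for h in headers) + " |" for r in rows]
          return "\n".join(lines)

      print(to_markdown_table([
          {"control": "MFA", "status": "pass"},
          {"control": "Encryption at rest", "status": "fail"},
      ]))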

  12. This is a real concern. We use AI for a lot of our development work now, and I've noticed people are less likely to dig deep into problems before asking Claude.

    The trick is using AI to handle the grunt work while still maintaining the critical thinking skills. But it's so easy to slip into autopilot mode.

  13. Context management is one of those things that seems simple until you actually build with it. We've run into issues where Claude loses important context halfway through complex tasks. Love the idea here: being able to mark parts of the conversation as 'resolved' or 'outdated' would be huge.
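
    A rough sketch of the client-side version, assuming a hypothetical 'resolved' flag you attach to your own message history (no LLM API supports this natively today):

      def prune_context(messages: list[dict]) -> list[dict]:
          # Drop turns we've marked resolved before the next API call
          return [m for m in messages if not m.get("resolved")]

      history = [
          {"role": "user", "content": "Fix the failing test", "resolved": True},
          {"role": "assistant", "content": "Done, it passes now", "resolved": True},
          {"role": "user", "content": "Now refactor the auth module"},
      ]
      print(prune_context(history))  # only the still-relevant turn remains
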
  14. The concept makes sense but the execution is always where these things fall apart. Most people don't want to manage their own data infrastructure.

    The bigger issue is interoperability. Your personal data store is only useful if apps actually integrate with it, and getting developers to adopt new standards is tough.

  15. Omg this is so annoying. The number of sites that break basic browser functionality for no good reason drives me crazy.

    I think it's because they use JavaScript to prevent 'content theft,' but it just makes the site harder to use. Like if someone wants to copy your text, they'll find a way.

  16. Wonder how they're handling attribution and false positives. Threat intel quality can vary so wildly between sources.
  17. The sandboxing benefits are real, especially for multi-tenant environments where you can't trust user code. Performance is still going to be hit-or-miss depending on the workload.
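
    A minimal example of that model using the wasmtime Python bindings: the guest module can only touch what the host explicitly hands it.

      from wasmtime import Engine, Store, Module, Instance  # pip install wasmtime

      engine = Engine()
      store = Store(engine)
      # Untrusted guest code: no filesystem, network, or host memory
      # unless those capabilities are explicitly imported
      module = Module(engine, """
          (module
            (func (export "add") (param i32 i32) (result i32)
              local.get 0
              local.get 1
              i32.add))
      """)
      instance = Instance(store, module, [])
      add = instance.exports(store)["add"]
      print(add(store, 2, 3))  # 5
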
  18. Truly. Most companies are still treating it as a box to check while figuring out basic data hygiene. Real value is gonna come from automating specific workflows.
  19. Ok but what if you don't have a smartphone?
  20. The migration costs are probably huge though. Retraining users and converting all the existing documents is going to take years.
  21. Hmm, curious how it handles commands that could be destructive. Like an AI suggesting 'rm -rf' commands seems risky without good guardrails.
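
    A naive sketch of the denylist approach; real guardrails should probably be allowlist-based, since patterns like these are easy to slip past:

      import re

      # Deliberately crude patterns for AI-suggested shell commands --
      # misses plenty (e.g. `find ... -delete`), which is the point
      DANGEROUS = [r"\brm\s+-[a-z]*r", r"\bmkfs\b", r"\bdd\s+if=", r">\s*/dev/sd"]

      def needs_confirmation(cmd: str) -> bool:
          return any(re.search(p, cmd) for p in DANGEROUS)

      print(needs_confirmation("rm -rf /tmp/build"))  # True -> ask a human first
      print(needs_confirmation("ls -la"))             # False
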
  22. 2M devices is a maaaaassive attack surface.

    This is why zero trust networking makes sense. You can't assume the network layer is secure when the infrastructure itself is compromised.

  23. We've definitely fallen into the trap of performative code reviews where everyone feels obligated to find something to comment on.

    We quickly learned the best code reviews focus on logic and architecture, not formatting. But it's easy to slip into nitpicking because those comments are easier to write.

  24. Real-time usage monitoring for AI APIs makes sense. We've had issues where OpenAI goes down and it takes us way too long to notice because the failures aren't obvious.

    The cost tracking piece is probably more valuable though. AI API bills can get expensive really quickly.
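
    A rough sketch of the wrapper we have in mind, against the OpenAI Python SDK; the pricing constant is a placeholder, not a real rate:

      import time
      import logging

      logging.basicConfig(level=logging.INFO)
      PRICE_PER_1K_TOKENS = 0.01  # placeholder -- look up your model's actual rate

      def tracked_call(client, **kwargs):
          """Wrap a chat completion with latency, failure, and cost logging."""
          start = time.monotonic()
          try:
              resp = client.chat.completions.create(**kwargs)
          except Exception:
              logging.exception("LLM call failed after %.2fs", time.monotonic() - start)
              raise
          tokens = resp.usage.total_tokens
          logging.info("latency=%.2fs tokens=%d est_cost=$%.4f",
                       time.monotonic() - start, tokens,
                       tokens / 1000 * PRICE_PER_1K_TOKENS)
          return resp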

  25. We've tried to automate similar processes and always end up needing human review because websites are inconsistent and messy.
  26. Email infrastructure is one of those things that seems simple until you actually try to do it.
  27. This is going to break so many things. End-to-end encryption either works or it doesn't. There's no middle ground where you can scan messages but keep them 'private.'

    Compliance overhead alone will kill a bunch of smaller messaging apps that can't afford all the regulatory stuff.

  28. Labor agreements in tech are soooo difficult to monitor. How do you collectively bargain around stock options, or remote work policies, or the pace of AI automation? The traditional labor playbook doesn't really apply here.
  29. This is exactly why App Tracking Transparency was never going to work. Apple gave users a consent dialog but didn't actually prevent the data collection - they just made it slightly more annoying. Device fingerprinting, session replay, and a dozen other techniques make the whole 'ask permission to track' model fundamentally ineffective. The data is still flowing, just through different pipes.
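
    To make the fingerprinting point concrete, a deliberately crude sketch: a handful of passively observed request attributes, hashed together, is often enough to re-identify a device, and no permission dialog is ever involved.

      import hashlib

      def fingerprint(headers: dict, ip: str) -> str:
          # Real trackers use far more signals (canvas, fonts, timing);
          # even this little is surprisingly stable across sessions
          raw = "|".join([
              ip,
              headers.get("User-Agent", ""),
              headers.get("Accept-Language", ""),
              headers.get("Accept-Encoding", ""),
          ])
          return hashlib.sha256(raw.encode()).hexdigest()[:16]

      print(fingerprint({"User-Agent": "Mozilla/5.0 ...", "Accept-Language": "en-US"}, "203.0.113.7"))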
