kissgyorgy
karma: 7,807
SysAdmin, Web and App Developer with Python.
https://kissgyorgy.me
[ my public key: https://keybase.io/kissgyorgy; my proof: https://keybase.io/kissgyorgy/sigs/_h3FGDf9mSW2s3hEXdZaErfM_wow-flxro_dDRxnnlA ]
- I find templates atrocious to use for component fragments like this, which is why I wrote a Python component library when I started using Django with HTMX. It's an order of magnitude more pleasant to use, and it works with _every_ Python web framework, not just Django: https://compone.kissgyorgy.me/
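For a flavour of the approach, here is a minimal sketch of the components-as-plain-Python idea (a hypothetical API for illustration, not necessarily compone's actual one): fragments are just functions returning markup, so they compose like any other Python code.

    # Sketch only: components as plain functions returning HTML strings.
    # Real libraries handle escaping and attributes properly; this does not.
    def Button(label: str, url: str) -> str:
        return f'<a class="btn" href="{url}">{label}</a>'

    def TodoItem(text: str, done: bool = False) -> str:
        checked = "checked" if done else ""
        return f'<li><input type="checkbox" {checked}> {text}</li>'

    def TodoList(items: list[tuple[str, bool]]) -> str:
        rows = "".join(TodoItem(text, done) for text, done in items)
        return f"<ul>{rows}</ul>" + Button("Add item", "/todos/new")

    # Works with any framework that can return a string as an HTML response,
    # e.g. HttpResponse(TodoList(items)) in Django.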
- It's just simple validation with some error logging. It should be done the same way as for human input or any other input that goes into your system.
An LLM provides input to your system like any human would, so you have to validate it. Something like pydantic or Django forms is good for this.
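A minimal sketch of what I mean with pydantic (the schema here is made up for illustration): parse the LLM output like any other untrusted input and log whatever fails.

    import logging
    from pydantic import BaseModel, Field, ValidationError

    logger = logging.getLogger(__name__)

    # Hypothetical schema for a structured LLM reply.
    class TicketSummary(BaseModel):
        title: str = Field(min_length=1, max_length=200)
        priority: int = Field(ge=1, le=5)
        tags: list[str] = []

    def parse_llm_reply(raw_json: str) -> TicketSummary | None:
        try:
            return TicketSummary.model_validate_json(raw_json)
        except ValidationError as exc:
            # Treat it like any other bad input: log it and reject it.
            logger.warning("LLM output failed validation: %s", exc)
            return None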
- But hey! At least these four AI components made it in, so the important stuff is okay...
- What's the equivalent of prepared statements when using AI agents?
- Maybe I'm wrong on this, but I'd rather have one tool that everyone else is using. Cargo in the Rust ecosystem works really well, and everyone loves it.
- I simply forbid dangerous commands, or force Claude Code to ask for permission before running them. Here are my command validation rules:
find and bfs with -exec are forbidden, because when the model notices it can't delete, it works around it with very creative solutions :)

    (
        r"\bbfs.*-exec",
        decision("deny", reason="NEVER run commands with bfs"),
    ),
    (
        r"\bbfs.*-delete",
        decision("deny", reason="NEVER delete files with bfs."),
    ),
    (
        r"\bsudo\b",
        decision("ask"),
    ),
    (
        r"\brm.*--no-preserve-root",
        decision("deny"),
    ),
    (
        r"\brm.*(-[rRf]+|--recursive|--force)",
        decision("ask"),
    ),

- Why is that a good thing?
- I strongly disagree with the author about not using /init. It takes a minute to run and Claude provides surprisingly good results.
- I think (hope) it's meant to be a joke.
- Scott Hanselman has a good blog post about this, suggesting you should detach yourself from your code: https://www.hanselman.com/blog/you-are-not-your-code
It's especially true when working as an employee, where you don't own your code.
- This prompt: "What do you have in User Interaction Metadata about me?"
reveals that your approximate location is included in the system prompt.
- I just tried it out, and docling finished the same document in 20s (with pretty good results), while in Tensorlake it has been pending for 10 minutes. I won't even wait for the results.
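For reference, the docling side was roughly this (a sketch from memory, the exact API may differ slightly):

    from docling.document_converter import DocumentConverter

    # Convert a local PDF and export the parsed document as Markdown.
    converter = DocumentConverter()
    result = converter.convert("document.pdf")
    print(result.document.export_to_markdown())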
- There is also the llm tool written by simonwillison: https://github.com/simonw/llm
I personally use "claude -p" for this
- I was excited because it looks really good. Then I looked into the backend code and it's vibe coded with Claude. All the terrible exception handling patterns, the useless comments, and the sycophancy are left in there. I can't trust this codebase. :(
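To illustrate the kind of pattern I mean (a made-up example, not taken from that codebase):

    import json

    # Load the config file  <- comment that just restates the function name
    def load_config(path):
        try:
            # Open the file and parse it
            with open(path) as f:
                return json.load(f)
        except Exception:
            # Swallow every possible error and silently fall back
            return {}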
- It's MIT licensed, can be easily picked up by someone else.
- If you don't want to bother with upgrading and following model development so much, I would just pay one provider and stick with them.
This model is worth knowing about, though, because it's 3x cheaper and 2x faster than the previous Claude model.
- I think clean code is more important than ever. LLMs work better with good code (no surprise), and they are trained on so much shit code that they produce garbage in terms of clean code. They also don't have good taste or a deeper architectural understanding of big codebases, where it's even more important.
What you've learned over the years, you can now scale up with agents.