rdos
Joined 10 karma

  1. > Are you saying this from experience?

    Yes. I mostly work on Quarkus microservices and use Cursor in auto agent mode.

    > we wouldn't give an AI some vague requirements and ask it to build something
    > we would discuss as a team

    Seems like a reasonable workflow. It's the polar opposite of what was written in the blog post. That is the usual, easy way people use agents, and what I think is the wrong path. May I also ask what language and/or framework you work with where so much context works well enough?

    > Asking AI to explain code and help me learn how it works means I can pick up new systems significantly quicker.

    Summarization is generally a great task for LLMs.

  2. I didn't read the blog yet because I clicked on cat pics and there weren't any!!!
  3. LLMs are good at making stuff from scratch and perfect when you don't have to worry about the code's future. 'Research' can be a great tool. But LLMs are terrible in big codebases and across multiple microservices. Also at making decisions: never let one make a decision for you. You need to know what's happening, and you can't ship straight AI code. It can save time, but it's not a lot and it won't replace anyone.
  4. I don't want to sound rude, but what was your reason to go from scratch instead of joining an already established, open source effort? The likes of Cline, Roo, Continue, ...
  5. nice, what's your approach? Graphs?
  6. this is going straight into my funny folder
  7. It won't load for me right now
  8. Hello, would you add something to this list? I think it's pretty good

    > Over‑polished prose – flawless grammar, overly formal tone, and excessive wordiness.

    > Repetitive buzzwords – phrases like “delve into,” “navigate,” “vibrant,” “comprehensive,” etc.

    > Lack of perspective shifts – AI usually sticks to a single narrative voice; humans naturally mix first, second, and third person.

    > Excessive em‑dashes – AI tends to over‑use them, breaking flow.

    > Anodyne, neutral stance – AI avoids strong opinions, trying to please every reader.

    > Human writing often contains minor errors, idiosyncratic punctuation, and a more nuanced, opinionated voice.

    > It's not just x, it's y

  9. Hello, I am interested in this topic. What would you say were the telltale signs of AI-generated text for you? Apart from:

    - excessive em-dashes

    - useless words, verbosity

  10. There wouldn't be a problem if there was transparency and clear boundaries. The future is simply enjoying what you want, but we have to get there past these first steps.
  11. Benchmarks show that open models are equal to SOTA closed ones but own experience and real world use shows the opposite. And I really wish they were closer, I run GPT-OSS 120b as a daily driver
  12. casual workstation flex to kick off the blog
  13. my bad, but you know what I meant
  14. Anthropic has much more funding than that. The most recent round was at $13B and the one before was at $3.5B. Now imagine that OpenAI received $40B in one round!
  15. There is no point in using a low-bandwidth card like the B50 for AI. Attempting to use 2x or 4x cards to load a real model will result in poor performance and low generation speed. If you don't need a larger model, use a 3060 or 2x 3060, and you'll get significantly better performance than the B50, so much better that the higher power consumption won't matter (70W vs. 170W for a single card). Higher VRAM won't make the card 'better for AI'.
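    A rough back-of-the-envelope for the bandwidth point above: single-batch LLM decoding is mostly memory-bandwidth-bound, because roughly all model weights are read once per generated token, so tokens/s is bounded by bandwidth divided by model size. A minimal sketch; the bandwidth and model-size numbers below are illustrative assumptions, not measured figures:

    ```python
    # Rough decode-speed ceiling: tokens/s ~= memory bandwidth / bytes read per token.
    # Single-batch decoding streams (approximately) all weights once per token.

    def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        """Upper-bound estimate for memory-bandwidth-bound decoding."""
        return bandwidth_gb_s / model_size_gb

    # Illustrative numbers (assumptions, check the actual spec sheets):
    # a lower-bandwidth card vs. an RTX 3060, running an ~8 GB quantized model.
    low_bw = est_tokens_per_sec(224, 8)   # ceiling for a ~224 GB/s card
    rtx3060 = est_tokens_per_sec(360, 8)  # ceiling for a ~360 GB/s card
    print(f"low-bandwidth card: ~{low_bw:.0f} tok/s, 3060: ~{rtx3060:.0f} tok/s")
    ```

    Real throughput lands below these ceilings, and splitting a model across 2x or 4x cards adds interconnect overhead on top.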
  16. In that case you should run a model locally, this one for example: https://huggingface.co/ds4sd/docling-models
  17. I'm from a shitty part of Europe and I've never seen a beehive that looked different from those in the presentation. I looked up 'American beehive' and they look roughly the same. So isn't this already the standard in use?
  18. Qwen3 32B is a hybrid reasoning model and is very good. You have to generate a lot of think tokens for any agentic activity, but you will probably run the model locally and it won't be a problem. If you need something quick and simple, /no_think is good enough in my experience. It might also be because it's not a MoE architecture.
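    To illustrate the /no_think soft switch mentioned above: with Qwen3's hybrid reasoning, appending /no_think to a user turn asks the model to skip the thinking block for that turn. A minimal sketch of building such a request payload; the helper name and prompts are made up for illustration:

    ```python
    # Sketch: toggling Qwen3's reasoning via the /no_think soft switch.
    # Appending "/no_think" to the user message disables think tokens for that turn.

    def build_messages(prompt: str, think: bool = True) -> list[dict]:
        """Build an OpenAI-style message list, optionally disabling reasoning."""
        suffix = "" if think else " /no_think"
        return [{"role": "user", "content": prompt + suffix}]

    # Quick, simple task: skip reasoning for faster responses.
    quick = build_messages("Rename this variable to snake_case.", think=False)
    # Agentic task: keep the default thinking behavior.
    agentic = build_messages("Plan the refactor of this service.", think=True)
    print(quick[0]["content"])
    ```

    The resulting message list can be sent to any local OpenAI-compatible server (llama.cpp, vLLM, etc.) serving the model.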
  19. The list has 'Watership Down' that I found very quickly and liked
  20. Why make an AI GPU and give it only 644.6 GB/s of bandwidth?
