rdos
- LLMs are good at making stuff from scratch, and perfect when you don't have to worry about the code's future. 'Research' can be a great tool. But LLMs are horrible in big codebases and across multiple microservices. Also at making decisions: never let one make a decision for you. You need to know what's happening, and you can't ship straight AI code. It can save time, but not a lot, and it won't replace anyone.
- Hello, would you add something to this list? I think it's pretty good
> Over‑polished prose – flawless grammar, overly formal tone, and excessive wordiness.
> Repetitive buzzwords – phrases like “delve into,” “navigate,” “vibrant,” “comprehensive,” etc.
> Lack of perspective shifts – AI usually sticks to a single narrative voice; humans naturally mix first, second, and third person.
> Excessive em‑dashes – AI tends to over‑use them, breaking flow.
> Anodyne, neutral stance – AI avoids strong opinions, trying to please every reader.
> Human writing often contains minor errors, idiosyncratic punctuation, and a more nuanced, opinionated voice.
> It's not just x, it's y
- There is no point in using a low-bandwidth card like the B50 for AI. Attempting to use 2x or 4x cards to load a real model will result in poor performance and low generation speed. If you don't need a larger model, use a 3060 or 2x 3060 and you'll get significantly better performance than the B50, so much better that the higher power consumption won't matter (70 W vs. 170 W for a single card). Higher VRAM won't make the card 'better for AI'.
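The point above can be sketched with a back-of-envelope estimate: single-stream token generation is memory-bandwidth bound, since roughly all of the model's weights are read per token, so tokens/sec ≈ bandwidth / model size. The bandwidth and model-size numbers below are assumptions for illustration; check the vendor spec sheets.

```python
# Back-of-envelope: decode speed is roughly memory bandwidth divided by the
# bytes of weights read per token. All spec numbers here are assumptions.

def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on single-stream decode speed for a dense model."""
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 8.0  # e.g. a ~13B model quantized to ~4-5 bits (assumed)

b50 = est_tokens_per_sec(224.0, MODEL_GB)      # Arc Pro B50: ~224 GB/s (assumed)
rtx3060 = est_tokens_per_sec(360.0, MODEL_GB)  # RTX 3060: ~360 GB/s (assumed)

print(f"B50:  ~{b50:.0f} tok/s upper bound")
print(f"3060: ~{rtx3060:.0f} tok/s upper bound")
```

This is only an upper bound (it ignores KV-cache reads and compute), but it shows why bandwidth, not VRAM size, sets the ceiling on generation speed.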
- In that case you should run a model locally, this one for example: https://huggingface.co/ds4sd/docling-models
- Qwen3 32B is a hybrid reasoning model and is very good. You have to generate a lot of think tokens for any agentic activity, but you will probably run the model locally, so it won't be a problem. If you need something quick and simple, /no_think is good enough in my experience. That may also be because it's not a MoE architecture.
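A minimal sketch of using Qwen3's documented /no_think soft switch: it just gets appended to the user turn, so toggling it is a one-line helper. This only builds the message; how you send it to your local endpoint is up to you.

```python
# Sketch: build an OpenAI-style chat message for a local Qwen3 endpoint,
# optionally disabling thinking mode via the /no_think soft switch.

def build_user_message(prompt: str, think: bool = True) -> dict:
    """Return a chat message dict; append /no_think to skip reasoning tokens."""
    content = prompt if think else f"{prompt} /no_think"
    return {"role": "user", "content": content}

msg = build_user_message("Summarize this diff.", think=False)
print(msg["content"])  # user text with the /no_think switch appended
```

If you serve Qwen3 through `transformers` yourself, the same effect is available via `enable_thinking=False` on the chat template instead of the in-prompt switch.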
Yes. I mostly work on Quarkus microservices and use Cursor with auto agent mode.
> we wouldn't give an AI some vague requirements and ask it to build something
> we would discuss as a team
seems like a reasonable workflow. It's the polar opposite of what was written in the blog post. That is the usual, easy way people use agents, and what I think is the wrong path. May I also ask what language and/or framework you work with where so much context works well enough?
> Asking AI to explain code and help me learn how it works means I can pick up new systems significantly quicker.
Summarization is generally a great task for LLMs.