- Would SRAM make weight updates prohibitive vs DRAM?
- Gemini is absolutely not Gryffindor since it auto-opts users into training AI on their codebases without informed consent
- Anthropic also has legal terms that say no one is allowed to use the service for anything work-related, but nobody seems to care
- Cursor still wins over Claude Code because Cursor has privacy mode
- I think it’s more about deciding how much to think about a given request, not a model router per se. GPT-5 and 5.1 get progressively better-calibrated reasoning token budgets. Also, o3 bringing “reasoning with tools” to a massive consumer audience was a major advance, and a fairly recent one
- Model provider CLIs are a trap: less freedom of choice, less privacy, and way more prohibitions buried in the fine print
- Good reason to use Cursor: you can insta-switch to whatever model you want and even run diverse models from different providers at the same time. If one of ’em isn’t working, you can try something else instead of being stuck with a single model provider
- And here I thought it was a pedantic word for “data box”
- Is using these terminal agents, with their customer noncompetes and lack of privacy, questionable when Cursor has the same models and a privacy mode?
- The point is it’s sucking your data into some amorphous Big Brother dataset without explicitly asking whether you want that to happen first. Opt-out AI features are generally rude, trashy, low-class, money-grubbing data grabs
- Rust for everything; I could explain, but it would get rude
- Rust feels easy to me; could it be we’re just used to whatever we use more?
Anyway, it’s all pretty easy. What’s the use of arguing which of multiple easy things is easiest?
- Wouldn’t a breakpoint or log message require the code to compile and run in order to work?
- Doesn’t compare, because Cursor has a privacy mode. Why would you want to pay OpenAI or Anthropic to train their bots on your business codebase? You know where that leads? Unemployment!
- Meh, what’s the point if it’s got no privacy? Which companies want to let OpenAI read their codebase? Cursor keeps winning because of privacy mode, IMHO; there is no level of capability that outweighs privacy mode
- That time to first token is impressive; it seems to respond immediately
- Is this a “bullshit injection”?
- Is this really surprising given how VC-funded capitalism works? Spend money to build amazing technology and gain market share, then eventually flip into extraction mode.
Yes, a pullback will kill some weaker companies, but not the ones with enough true fans. Plus, we’re talking about a wide-ranging technological revolution with unknown long-term limits and economics; you don’t just give up because you’re afraid to spend some money.
I don’t want to pay Anthropic, because I don’t trust them, but I will absolutely pay Cursor, because I trust them, and I doubt I’m alone. My Cursor usage goes to GPT-5, too, so it’s definitely not 100% Anthropic, even if I’m the only idiot using GPT-5 on Cursor
It’s fun to innovate. Making money is a happy byproduct of value creation. Isn’t the price of success always paid in advance, anyway? Why would winning AI tech companies pack it up and stop crushing it over the long term just because they’re afraid to lose someone else’s money in the short term? Wouldn’t capitulation guarantee losses more so than continued effort?
- What about Amazon’s work in formal verification research and Apple’s machine learning research?