
behnamoh · 22,745 karma

  1. Before anyone says "yes you can", read this: https://blog.seemsgood.com/posts/installing-old-versions-wit...

    In 2025/2026, you still can't simply do `brew install opencode@1.0.190`...
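    Since Homebrew dropped easy versioned installs, the usual workaround is to dig the old formula file out of the tap's git history and install from the local `.rb` file. A minimal sketch of that technique below, using a throwaway git repo as a stand-in for the tap (the tap layout, formula path, and version strings are assumptions for illustration):

```shell
#!/bin/sh
set -e
# Stand-in repo simulating a tap's Formula/ history.
tmp=$(mktemp -d)
git init -q "$tmp/tap" && cd "$tmp/tap"
mkdir Formula
echo 'version "1.0.190"' > Formula/opencode.rb
git add -A && git -c user.email=x@x -c user.name=x commit -qm 'opencode 1.0.190'
echo 'version "1.0.200"' > Formula/opencode.rb
git add -A && git -c user.email=x@x -c user.name=x commit -qm 'opencode 1.0.200'
# Find the oldest commit touching the version string you want (git pickaxe),
# then check out just that one file from history.
old=$(git log --format=%H -S '1.0.190' -- Formula/opencode.rb | tail -n 1)
git show "$old:Formula/opencode.rb" > opencode-1.0.190.rb
cat opencode-1.0.190.rb
# In a real tap you'd then run: brew install ./opencode-1.0.190.rb
```

    Whether the resulting formula still builds depends on how much its dependencies have moved on, which is exactly the pain the linked post describes.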

  2. does it violate ISP terms of service (e.g. AT&T's)? how would one make it less obvious to them?
  3. I mean, OpenCode has had this feature for a while: https://opencode.ai/docs/lsp/
  4. No Python LSPs yet!
  5. > I mean, what's the point of using local models if you can't trust the app itself?

    and you think ollama doesn't do telemetry/etc. just because it's open source?

  6. > LMStudio is not open source though, ollama is

    and why should that affect usage? it's not like ollama users fork the repo before installing it.

  7. the ability to change one's email address is not so complicated a feature that it needs to be postponed.

    maybe they should ask CC to fix this...

  8. My expectations from M5 Max/Ultra devices:

    - Something like a DGX-style QSFP link (200Gb/s or 400Gb/s) instead of TB5. Otherwise the economics of this RDMA setup, while impressive, don't make sense.

    - Neural accelerators to get prompt prefill time down. I don't expect RTX 6000 Pro speeds, but something like 3090/4090 would be nice.

    - 1TB of unified memory in the maxed out version of Mac Studio. I'd rather invest in more RAM than more devices (centralized will always be faster than distributed).

    - 1TB/s+ memory bandwidth. For the past 3 generations, it has been stuck at 800GB/s...

    - The ability to overclock the system? I know it probably will never happen, but my expectations of a Mac Studio are not the same as of a laptop, and I'm TOTALLY okay with it drawing 600W+ of power. Currently it's capped at ~250W.

    Also, as the OP noted, this setup can support only up to 4 Mac devices, because each Mac must be connected directly to every other Mac! All the more reason for Apple to invest in something like QSFP.
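    The full-mesh constraint above is just port arithmetic: an n-node mesh needs n(n-1)/2 cables and n-1 dedicated ports per machine. A small sketch (the 3-ports-free-for-clustering figure is an assumption chosen to match the 4-device ceiling mentioned, not a measured spec):

```python
def mesh_links(n: int) -> int:
    """Total point-to-point cables in a full mesh of n machines."""
    return n * (n - 1) // 2

def ports_per_machine(n: int) -> int:
    """Ports each machine must dedicate to reach every peer directly."""
    return n - 1

PORTS_AVAILABLE = 3  # assumption: Thunderbolt ports free for clustering

for n in range(2, 7):
    fits = ports_per_machine(n) <= PORTS_AVAILABLE
    print(f"{n} Macs: {mesh_links(n)} cables, {ports_per_machine(n)} "
          f"ports each -> {'fits' if fits else 'exceeds ports'}")
```

    A switch (which is what QSFP fabrics give you) makes port count per machine constant, which is why the mesh topology, not bandwidth alone, is the scaling wall here.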

  9. I like Anthropic's approach: Haiku, Sonnet, Opus. Haiku is pretty capable still and the name doesn't make me not wanna use it. But Flash is like "Flash Sale". It might still be a great model but my monkey brain associates it with "cheap" stuff.
  10. > Don’t let the “flash” name fool you

    I think it's bad naming on google's part. "flash" implies low quality, fast but not good enough. I get less negative feeling looking at "mini" models.

  11. > I don't understand how they expected to sustain the advantage against Google's infinite money machine.

    I ask this question about Nazi Germany. They adopted the Blitzkrieg strategy and expanded unsustainably, but it was only a matter of time before powers with effectively infinite resources (US, USSR) put an end to it.

  12. > OpenAI made a huge mistake neglecting fast inferencing models.

    It's a lost battle. It will always be cheaper to use an open-source model hosted by providers like Together/Fireworks/DeepInfra/etc.

    I've been maining Mistral lately for low-latency stuff, and the price-to-quality ratio is hard to beat.

  13. the bigger question is: what business does the Netherlands have all the way across the ocean on an island? Who gave them the "right" to own it?

This user hasn’t submitted anything.
