
xyzzy123
Joined · 6,653 karma
contact: zxcdotmx@gmail.com

  1. Open cameras take information that was previously local and difficult to collect and make it global and easy to collect. In relative terms, they reduce the privacy and power of people on the ground in your neighbourhood and increase the power of more distant actors, which doesn't seem very socially desirable as an outcome. They also increase the relative power of people with the technical capacity and capital for storage, processing, etc.

    I do buy your argument that open access could help check the worst abuses. But, if widespread, it'd be so catastrophic for national security that I can't see how it would ever fly.

  2. One of the worst things IMHO is a loss of... hard to explain, but optionality? When the price of shelter is high and the need for occupational specialisation is also high, you feel more "trapped", situationally. It seems like people used to be able to YOLO things a lot more.
  3. It seems to me like the use case for local GPUs is almost entirely privacy.

    If you buy a 15k AUD RTX 6000 with 96GB, that card will _never_ pay for itself on a gpt-oss:120b workload versus just using OpenRouter, no matter how many tokens you push through it: the cost of residential power in Australia means you cannot generate tokens cheaper than the cloud even if the card were free.
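
    A rough back-of-envelope in Python makes the break-even argument concrete. Every number here (card draw, throughput, tariff, cloud price) is an illustrative assumption, not a measurement or a quote:

    ```python
    # Back-of-envelope: marginal electricity cost of local inference per
    # million tokens vs a cloud API price. Every figure below is an
    # illustrative assumption, not a measurement or a quoted price.

    CARD_POWER_KW = 0.45        # assumed sustained draw under load, kW
    TOKENS_PER_SECOND = 80      # assumed gpt-oss:120b throughput on one card
    POWER_AUD_PER_KWH = 0.35    # assumed Australian residential tariff
    CLOUD_AUD_PER_MTOK = 0.50   # assumed OpenRouter-style output price

    seconds_per_mtok = 1_000_000 / TOKENS_PER_SECOND
    kwh_per_mtok = CARD_POWER_KW * seconds_per_mtok / 3600
    local_power_aud = kwh_per_mtok * POWER_AUD_PER_KWH

    print(f"electricity per 1M tokens: {local_power_aud:.2f} AUD")
    print(f"cloud price per 1M tokens: {CLOUD_AUD_PER_MTOK:.2f} AUD")
    # If electricity alone costs more than the cloud price, the 15k AUD
    # card can never amortise, no matter how many tokens you push through.
    ```

    Under these assumptions the electricity alone comes to roughly 0.55 AUD per million tokens, which already exceeds the assumed cloud price, hence "never pays for itself".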

  4. Amazon is the second-largest retailer in the world; 250 billion dollars of revenue is a lot of anecdotes, and IMHO it does kinda add up to being data.
  5. Given Docker's track record, it won't be free indefinitely; this is a move to gauge demand and generate leads.
  6. Yeah, the cyberpunk part is that you can compute without explicitly needing someone's permission.
  7. You also have the issue of information creating liability. A thing that nobody knows about is nobody's problem; a thing that COULD be a problem but probably isn't, in someone's professional judgement, creates liability for the decision maker.
  8. Maybe? But it also seems like you are not accounting for new information at inference time. Let's pretend I agree that the LLM is a plagiarism machine that can produce no novelty in and of itself beyond what it was trained on, and that it produces mostly garbage (I only half agree lol, and I think "novelty" is under-specified here).

    When I apply that machine (with its giant pool of pirated knowledge) _to my inputs and context_, I can get results applicable to my modestly novel situation, which is not in the training data. Perhaps the output is garbage. Naturally, if my situation is way out of distribution I cannot expect very good results.

    But I often don't care if the results are garbage some (or even most!) of the time if I have a way to ground-truth whether they are useful to me. This might be via running a compiler, a test suite, a theorem prover, or the mk1 eyeball. Of course the name of the game is to get agents to do this themselves, and this is now fairly standard practice.
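
    A toy sketch of that loop in Python; `unreliable_generator` is a hypothetical stand-in for an LLM call, and the ground-truth check here is just a spec test:

    ```python
    import random

    # Toy generate-then-verify loop. The "generator" stands in for an LLM
    # and is deliberately unreliable; the point is that garbage output is
    # fine as long as a cheap ground-truth check filters it.

    def unreliable_generator() -> str:
        # Hypothetical LLM stand-in: usually wrong, sometimes right.
        return random.choice([
            "lambda x: x + 1",   # wrong
            "lambda x: x * x",   # wrong
            "lambda x: x * 2",   # satisfies the spec checked below
        ])

    def ground_truth(candidate) -> bool:
        # The cheap check: in real use this is a compiler, a test suite,
        # a theorem prover, or the mk1 eyeball.
        return all(candidate(x) == 2 * x for x in range(10))

    def solve(attempts: int = 20) -> str | None:
        for _ in range(attempts):
            src = unreliable_generator()
            candidate = eval(src)  # fine for a toy; never eval untrusted output
            if ground_truth(candidate):
                return src
        return None

    print(solve())  # almost always prints "lambda x: x * 2"
    ```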

  9. Using this reasoning, would you argue that a new proof of a theorem adds no new information that was not present in the axioms, rules of inference and so on?

    If so, I'm not sure it's a useful framing.

    For novel writing, sure, I would not expect much truly interesting progress from LLMs without human input because fundamentally they are unable to have human experiences, and novels are a shadow or projection of that.

    But in math, and a lot of programming, the "world" is chiefly symbolic. The whole game is searching the space for new and useful arrangements, and you don't need to create new information in an information-theoretic sense to do that. Even on the non-symbolic side of computing (say, diagnosing a network issue), AIs can interact with things almost as directly as we can by running commands, so they are not fundamentally disadvantaged in terms of "closing the loop" with reality or conducting experiments.

  10. One interesting angle for me is that I am seldom given complete specs or requirements when asked for an estimate. Of course you ask questions to try to pin down key information that hasn't been specified, but often the answers are not available or fully reliable.

    So any estimate has to include uncertainty about _the scope of the work itself_ as well as the uncertainties involved in delivering the work.

    The natural follow-on question when you present a range as an estimate is: what would help you narrow this range? Sometimes it is "find out this thing about the state of the world" (how long will the external team take to do their bit), but sometimes it is "provide better specs".
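
    A minimal Monte Carlo sketch of how the two uncertainties stack (the distributions and parameters are illustrative assumptions, not calibrated data):

    ```python
    import random

    # Toy Monte Carlo: an estimate stacks two uncertainties, the scope of
    # the work itself and the delivery of that work. All parameters are
    # illustrative assumptions.

    N = 100_000
    totals = sorted(
        # Scope: how much work is really there, as a multiple of the spec
        # (median 1.0x, long right tail), times delivery time for the
        # spec'd amount of work in days (median ~10 days).
        random.lognormvariate(0.0, 0.4) * random.lognormvariate(2.3, 0.3)
        for _ in range(N)
    )
    p10, p50, p90 = (totals[int(N * q)] for q in (0.10, 0.50, 0.90))
    print(f"P10 {p10:.1f}d  P50 {p50:.1f}d  P90 {p90:.1f}d")
    # "Provide better specs" shrinks the scope distribution; "find out this
    # thing about the world" shrinks the delivery one. Either narrows the range.
    ```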

  11. Yeah, I think the caveat is that the compressor, and maybe the seals, lights and a few other bits, are the ONLY repairable parts of most fridges. The whole structure of a modern fridge is foam panels and sheet-metal folds that aren't ever meant to come apart after being assembled.
  12. Totally fair. There are some situations where you can "undercut" cloud-native object storage on a per-TB basis (e.g. you have a big dedi at Hetzner with 50TB or 100TB of mirrored disk), but you pay a cost in operational overhead and durability versus a managed object store. It's really hard to make the economics work at a $20 price point; if you get up to a few $100 or more, there are some situations where it can make sense (rough numbers sketched below).

    For backup to a dedi you don't really need to bother running the object store though.
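
    Rough per-TB arithmetic under assumed prices (the server cost, capacity, and managed-storage rate below are placeholders, not current quotes):

    ```python
    # Back-of-envelope: effective $/TB/month for a big mirrored dedi vs
    # managed object storage. All prices are illustrative assumptions.

    DEDI_USD_PER_MONTH = 120   # assumed Hetzner-class box with large disks
    RAW_TB = 100               # assumed raw disk capacity
    USABLE_TB = RAW_TB / 2     # mirroring halves usable space

    MANAGED_USD_PER_TB = 23    # assumed S3-style object storage rate

    dedi_usd_per_tb = DEDI_USD_PER_MONTH / USABLE_TB
    print(f"dedi:    {dedi_usd_per_tb:.2f} USD/TB/month")
    print(f"managed: {MANAGED_USD_PER_TB:.2f} USD/TB/month")
    # ~10x cheaper per TB on paper, but durability, replication, monitoring
    # and drive replacement are now your problem: the operational overhead.
    ```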

  13. I'm confused: why would you want to turn an expensive thing (cloud block storage) into a cheaper thing (cloud object storage) with worse durability, in a way that is more effort to run?

    I'm not saying it's wrong, since I don't know what it's for; I'm just wondering what the use case could be.

  14. Certificate per request
  15. This can lead to weird dynamics. In a lot of workplaces, no one seems to have the direct power (or incentive!) to say "yes" to anything, but lots of people (including 3 teams you weren't even aware existed) are able to "provide feedback" or say no.

    This leads to all progress being achieved very slowly, if at all, or by using the element of surprise and then seeking forgiveness.

  16. I think the figure Musk circulated was that it was losing $4M/day, although that probably includes the interest bill on the debt used for financing and might be inflated.
  17. I would legitimately be worried if they were doctors, but they're philosophers (in medical ethics). Their job is to come up with insane moral edge cases and then try to follow them to their logical conclusions - the sillier and more unhinged the better. This is absolutely expected of them.

    Many of their other papers have a similar flavour:

      - How do we justify research into enhanced warfighters?
      - Compulsory moral bioenhancement should be covert
      - Abolishing morality in biomedical ethics
