
codeflo
Joined 9,230 karma

  1. > Apple cares a lot about phone gaming

    The kind of gacha games that dominate the in-app sales charts, sure. Actual gaming, they don't care about or even understand.

  2. My theory: Some manager's KPI is to increase the number of sold GitHub runner minutes. So they did some market research -- not enough to have a clear picture, but barely enough to be dangerous -- and found that some companies use self-hosted runners for cost reasons. So they deploy a two-pronged strategy: lower the cost of GitHub runners, and charge for the use of self-hosted runners, to incentivize switching.

    This fails for several reasons that someone who actually uses the product might have intuited:

    (a) For some use-cases, you can't switch to GitHub's runners. For us, it's a no-go for anything that touches our infrastructure.

    (b) Switching CI providers isn't hard; we've already had to do it twice. Granted, most of our CI logic lives in a custom build script that you can run locally, not in the proprietary YAML file. But honestly, I'd recommend that sort of setup for any CI provider, since you always want the ability to debug things locally.

    (c) GitHub Actions doesn't get the amount of love you'd expect from something billed as a "premium service". In fact, it often feels quite abandoned, barely kept working. Who knows what they're brewing internally, but they didn't coordinate this with a major feature announcement, and didn't rush to announce anything now that they got backlash, which leads me to believe they don't have anything major planned.

    (d) Paying someone -- by the minute, no less -- to use my own infrastructure feels strange and greedy. GitHub has always had per-user pricing, which feels fair and predictable. If for some reason they need more money, they can always increase that price. The fact that they didn't do that leads me to believe this wasn't about cost per se. Hence the KPI theory I mentioned above: this wasn't well-coordinated with any bigger strategy.

  3. They have all kinds of costs hosting GitHub, which is why there's per seat pricing for companies. If those prices are too low, they can always increase them. Charging on top of that per minute of using your own infrastructure felt greedy to me. And the fact that this was supposed to be tied to one of the lesser-maintained features of GitHub raised eyebrows on top of that.
  4. One problem is that GitHub Actions isn't good. It's not like you're happily paying for some top tier "orchestration". It's there and integrated, which does make it convenient, but any price on this piece of garbage makes switching/self-hosting something to seriously consider.
  5. In the sense that when people want to use a piece of MIT-licensed software in another piece of software, they don't in practice find themselves restricted from doing so by the conditions of the license. "Permissive" might have been the better word.
  6. That's a good point: entropy is only a heuristic for the thing you actually want to optimize, the worst-case number of guesses (though it's probably a very good heuristic).

    > Basically, using the entropy produces a game tree that minimises the number of steps needed in expectation

    It might be even worse than that for problems of this kind in general. You're essentially using a greedy strategy: you optimize early information gain.

    It's clear that this doesn't optimize the worst-case, but it might not optimize the expected number of steps either.

    I don't see why it couldn't be the case that an expected-steps-optimal strategy gains less information early on, and thus produces larger sets of possible solutions, but through some quirk those larger sets are easier to separate later.
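
    To make the greedy step concrete, here's a minimal sketch (Python, with a toy word list and a simplified feedback function of my own; real Wordle scoring also handles duplicate letters):

    ```python
    import math
    from collections import Counter

    def feedback(guess, solution):
        """Wordle-style pattern: 'g' = right spot, 'y' = elsewhere
        in the word, '.' = absent. Simplified: ignores the
        duplicate-letter subtleties of the real game."""
        return ''.join(
            'g' if g == s else ('y' if g in solution else '.')
            for g, s in zip(guess, solution)
        )

    def entropy_of_guess(guess, candidates):
        """Expected information gain (bits) of `guess`: the entropy of
        the distribution of feedback patterns it induces over the
        current set of possible solutions."""
        counts = Counter(feedback(guess, c) for c in candidates)
        n = len(candidates)
        return -sum((k / n) * math.log2(k / n) for k in counts.values())

    # Greedy step: pick the guess that maximizes expected information gain.
    words = ["crane", "slate", "crate", "stone", "pride"]
    best = max(words, key=lambda w: entropy_of_guess(w, words))
    ```

    The greedy player just repeats this step after filtering the candidate set by the observed feedback; nothing in it guarantees that the resulting game tree is optimal in expectation or in the worst case.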

  7. > For wordle, «most probable» is mostly determined by letter frequency

    I don't think that's a justified assumption. I wouldn't be surprised if wordle puzzles intentionally don't follow common letter frequency to be more interesting to guess. That's certainly true for people casually playing hangman.

  8. IANAL either, so my own legal theories are as creative as yours, but I'd like to offer the following data point: All unrestricted open-source licenses that were written by actual lawyers, from MIT to CC0, have found it necessary to include such a liability clause.
  9. Before Minecraft, basically all voxel engines used some form of non-axis-aligned normals to hide the sharp blocks. Those engines did this either through explicit normal mapping, or at the very least, by deriving intermediate angles from the Marching Cubes algorithm. Nowadays, the blocky look has become stylish, and I don't think it really even occurs to people that they could try to make the voxels smooth.
  10. To my eyes, this author doesn't write like ChatGPT at all. Too many people focus on the em-dashes as the giveaway for ChatGPT use, but they're a weak signal at best. The problem is that the real signs are more subtle, and the em-dash is very meme-able, so of course, armies of idiots hunt down any user of em-dashes.

    Update: To illustrate this, here's a comparison of a paragraph from this article:

    > It is a new frontier of the same old struggle: The struggle to be seen, to be understood, to be granted the same presumption of humanity that is afforded so easily to others. My writing is not a product of a machine. It is a product of my history. It is the echo of a colonial legacy, the result of a rigorous education, and a testament to the effort required to master the official language of my own country.

    And ChatGPT's "improvement":

    > This is a new frontier of an old struggle: the struggle to be seen, to be understood, to be granted the easy presumption of humanity that others receive without question. My writing is not the product of a machine. It is the product of history—my history. It carries the echo of a colonial legacy, bears the imprint of a rigorous education, and stands as evidence of the labor required to master the official language of my own country.

    Yes, there's an additional em-dash, but what stands out to me more is the grandiosity. Though I have to admit, it's closer than I would have thought before trying it out; maybe the author does have a point.

  11. Once you notice the pattern, you see it everywhere:

    > Stability isn’t declared; it emerges from the sum of small, consistent forces.

    > These aren’t academic exercises — they’re physics that prevent the impossible.

    > You don’t defend against the impossible. You design a world where the impossible has no syntax.

    > They don’t restrain motion; they guide it.

    I don't just ignore this article. I flag it.

  12. Are the rumors still hinting at a VR-only experience, as they did a couple of years ago when Half-Life: Alyx was released, or is that no longer the speculation? Because that would be unfortunate for me; I'd have to play with a bucket in hand.
  13. YouTube Premium was originally called YouTube Red. Grandparent poster may have made a Freudian slip. :)
  14. The HN moderation system seems to hold, at least mostly. But I have seen high-ranking HN submissions with all the subtler signs of LLM authorship that have managed to get lots of engagement. Granted, it's mostly people pointing out the subtle technical flaws or criticizing the meandering writing style, but that works to get the clicks and attention.

    Frankly, it only takes a few times of "falling" for an LLM article -- that is, spending time engaging with an author in good faith, trying to help improve their understanding, only to find out they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might simply kill social media of any kind.

  15. You were proven right three minutes after you posted this. Something happened, I'm not sure what and how. Hacking became reduced to "hacktivism", and technology stopped being the object of interest in those spaces.
  16. Obviously?
  17. "Matter" can in practice also mean "Matter over Wi-Fi", and lots of vendors use it that way.
  18. It's easily imaginable that there are new CPU features that would help with building an efficient Java VM, if that's the CPU's primary purpose. Just off the top of my head, one might want a form of finer-grained memory virtualization that could enable very cheap concurrent garbage collection.

    But having Java bytecode as the actual instruction set architecture doesn't sound too useful. It's true that any modern processor has a "compilation step" into microcode anyway, so in an abstract sense, that might as well be some kind of bytecode. But given the high-level nature of Java's bytecode instructions in particular, there are certainly some optimizations that are easy to do in a software JIT, and that just aren't practical to do in hardware during instruction decode.

    What I can imagine is a purpose-built CPU that would make the JIT's job a lot easier and faster than compiling for x86 or ARM. Such a machine wouldn't execute raw Java bytecode, but rather something a tiny bit more low-level.
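
    As a concrete illustration of the software-JIT point, here's a toy sketch (Python, purely illustrative) of a monomorphic inline cache, the kind of profile-driven shortcut a JIT can apply to a high-level dispatch like `invokevirtual`, but that a hardware decoder, seeing one instruction at a time with no accumulated profile, cannot:

    ```python
    # Toy inline cache: remember the last receiver type and its resolved
    # method, so repeated calls on the same type skip the full lookup.
    class InlineCache:
        def __init__(self, method_name):
            self.method_name = method_name
            self.cached_type = None
            self.cached_method = None

        def call(self, receiver):
            t = type(receiver)
            if t is self.cached_type:              # fast path: cache hit
                return self.cached_method(receiver)
            method = getattr(t, self.method_name)  # slow path: full dispatch
            self.cached_type, self.cached_method = t, method
            return method(receiver)

    class Dog:
        def speak(self):
            return "woof"

    cache = InlineCache("speak")
    cache.call(Dog())  # first call takes the slow path and fills the cache
    ```

    The interesting part is the state it carries between calls: the cache is only worth anything because the same call site tends to see the same type over and over, which is exactly the kind of dynamic knowledge that lives in the JIT, not in the instruction stream.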

  19. Logical contradictions in AI slop? Unthinkable!

    But to address the serious question: We can't have all three of: a simple language, zero-cost abstractions, and memory safety.

    Most interpreted languages pick simplicity and memory safety, at a runtime cost. Rust picks zero-cost abstractions and memory safety, at an increasingly high cost in language complexity. C and Zig choose zero-cost abstractions in a simple language, but as a consequence, there's no language-enforced memory safety.

    (Also, having a simple language doesn't mean that any particular piece of code is short. Often, quite the opposite.)
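
    For illustration, here's a toy sketch (Python, my own example) of the "memory safety via runtime checks" corner of that trade-off, where every access pays for its check at run time instead of the check being discharged at compile time:

    ```python
    # Toy bounds-checked buffer: the check runs on every single access.
    # A zero-cost-abstractions language would want this guarantee proven
    # at compile time so the check can be eliminated.
    class CheckedBuffer:
        def __init__(self, size):
            self._data = bytearray(size)

        def read(self, i):
            if not 0 <= i < len(self._data):   # paid on every access
                raise IndexError(f"index {i} out of range")
            return self._data[i]

        def write(self, i, value):
            if not 0 <= i < len(self._data):   # paid on every access
                raise IndexError(f"index {i} out of range")
            self._data[i] = value

    buf = CheckedBuffer(4)
    ```

    An out-of-range access fails loudly instead of corrupting memory; that's the safety, and the per-access branch is the runtime cost.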

This user hasn’t submitted anything.