- Technically (or, at least, historically), they should have used the indefinite pronoun "one", i.e. "...because their defense systems seem overly sensitive to one's email address". But I imagine that would've got more comments than using you/your.
- > Hacker news is there to promote ycombinator companies. So long as you know and avoid this it's surprisingly high quality. But that's there to lend more legitimacy to ycombinator.
Everything has a cost. For the web, that's typically monetary or your data and attention to advertisers. I think you're right that the cost of Hacker News is that my participation is lending some (tiny incremental) legitimacy to Y Combinator. It's also costing some tiny amount of my attention, in the sense that I may not have heard of Y Combinator if it weren't for Hacker News. For me personally, that is absolutely fine – but I'm glad you made it explicit so that it's a conscious choice.
[Edit: Of course it costs an absolutely vast amount of my attention :-) but I mean only a teeny tiny fraction of that is "payment" in the sense of noticing that Y Combinator exists.]
- Absolutely fabulous work.
Ludicrously unnecessary nitpick for "Remove all the brown pieces of candy from the glass bowl":
> Gemini 2.5 Flash - 18 attempts - No matter what we tried, Gemini 2.5 Flash always seemed to just generate an entirely new assortment of candies rather than just removing the brown ones.
The way I read the prompt, it demands that the candies should change arrangement. You didn't say "change the brown candies to a different color", you said "remove them". You can infer from the few brown ones that you can see that there are even more underneath - surely if you removed them all (even just by magically disappearing them) then the others would tumble down into a new location? The level of the candies is lower than before you started, which is what you'd expect if you remove some. Maybe it's just coincidence, but maybe this really was its reasoning. (It did unnecessarily remove the red candy from the hand though.)
I don't think any of the "passes" did as well as this, including Gemini 3.0 Pro Image. Qwen-Image-Edit did at least literally remove one of the three visible brown candies, but just recolored the other two.
- > It simply isn't possible to do serious math with vectors that are ambiguously column vs. row ... if you have gone through proper math texts
(There is unhelpful subtext here that I can't possibly have done serious math, but putting that aside...) On the contrary, most actual linear algebra is easier when you have real 1D arrays. Compare an inner product form in Matlab:
    x' * A * y

vs numpy:

    x @ A @ y

OK, that saving of one character isn't life changing, but the point is that you don't need to form row and column vectors first (x[None,:] @ A @ y[:,None] - which BTW would give you a 1x1 matrix rather than the 0D scalar you actually want). You can just shed that extra layer of complexity from your mind (and your formulae). It's actually Matlab where you have to worry more - what if x and y were passed in as row vectors? They probably won't be but it's a non-issue in numpy.

> math texts ... are all extremely clear about column vs row vectors and notation too, and all make it clear whether column vs. row vector is the default notation, and use superscript transpose accordingly.
That's because they use the blunt tool of matrix multiplication for composing their tensors. If they had an equivalent of the @ operator then there would be no need, as in the above formula. (It does mean that, conversely, numpy needs a special notation for the outer product, whereas if you only ever use matrix multiplication and column vectors then you can do x * y', but I don't think that's a big deal.)
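To make the shapes concrete, here's a quick sketch (toy values, obviously):

    import numpy as np

    A = np.arange(9.0).reshape(3, 3)
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])

    s = x @ A @ y                    # true 1D arrays give a 0D scalar: shape ()
    m = x[None, :] @ A @ y[:, None]  # forced row/column vectors give a 1x1 matrix
    print(s.shape, m.shape)          # () (1, 1)

    outer = np.outer(x, y)           # numpy's spelling of Matlab's x * y'
    print(outer.shape)               # (3, 3)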
> This is also a constant issue working with scikit-learn, and if you regularly read through the source there, you see why.
I don't often use scikit-learn but I tried to look for 1D/2D agreement issues in the source as you suggested. I found a couple, and maybe they weren't representative, but they were for functions that can operate on a single 1D vector or can be passed a 2D numpy array whose meaning, philosophically, is more like "a list of vectors to operate on in parallel" than an actual matrix. So if you only care about 1D arrays then you can just pass one in (there's an np.newaxis in the implementation, but you as the user don't need to care). If you do want to take advantage of passing multiple vectors then, yes, you need to care about whether they're treated column-wise or row-wise, but that's no different from having to check the same thing in Matlab.
Notably, this fuss is precisely not because you're doing "real linear algebra" - again, those formulae are (usually) easiest with real 1D arrays. It's when you want to do software-ish things, like vectorising operations as part of a library function, that you might start to worry about axes.
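To illustrate the pattern I mean, here's a hypothetical library-style helper (made up by me, not actual scikit-learn code): the np.newaxis lives inside the function, so a caller with a single 1D vector never sees it.

    import numpy as np

    def normalize_rows(X):
        # Accepts one 1D vector, or a 2D "list of vectors" to process
        # in parallel; unit-normalizes each row.
        X = np.asarray(X, dtype=float)
        was_1d = (X.ndim == 1)
        if was_1d:
            X = X[np.newaxis, :]  # promoted internally; the caller needn't care
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        out = X / norms
        return out[0] if was_1d else out

    print(normalize_rows([3.0, 4.0]))                # 1D in, 1D out
    print(normalize_rows([[3.0, 4.0], [1.0, 0.0]]))  # rows treated in parallel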
> unless you ingrain certain habits to always call e.g. .ravel or .flatten or [:, :, None] arcana
You shouldn't have to call .ravel or .flatten if you want a 1D array - you should already have one! Unless you needlessly went to the extra effort of turning it into a 2D row/column vector. (Or unless you want to flatten an actual multidimensional array to 1D, which does happen; but that's the same as doing A(:) in Matlab.)
Writing foo[:, None] vs foo[None, :] is no different from deciding whether to make a column or row vector (respectively) in MATLAB. I will admit it's a bit harder to remember - I can never remember which index is which (but I also couldn't remember without checking back when I used Matlab either). But the numpy notation is just a special case of a more general and flexible indexing system (e.g. it works for higher dimensions too). Plus, as I've said, you should rarely need it in practice.
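Since I've admitted I can never remember which is which, here it is for the record (made-up foo):

    import numpy as np

    foo = np.array([1, 2, 3])
    print(foo[:, None].shape)   # (3, 1) - column vector
    print(foo[None, :].shape)   # (1, 3) - row vector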
- It means the same thing in MATLAB and numpy:
    Z = np.array([[1,2,3]])
    W = Z + Z.T
    print(W)

Gives:

    [[2 3 4]
     [3 4 5]
     [4 5 6]]

It's called broadcasting [1]. I'm not a fan of MATLAB, but this is an odd criticism.

[1] https://numpy.org/devdocs/user/basics.broadcasting.html#gene...
- > Big disadvantages of matlab:
I will add to that:
* it does not support true 1d arrays; you have to artificially choose them to be row or column vectors.
Ironically, the snippet in the article shows that MATLAB has forced them into this awkward mindset; as soon as they get a 1d vector they feel the need to artificially make it into a 2d column. (BTW (Y @ X)[:,np.newaxis] would be more idiomatic for that than Y @ X.reshape(3, 1) but I acknowledge it's not exactly compact.)
They cleverly chose column concatenation as the last operation, hardly the most common matrix operation, to make it seem like it's very natural to want to choose row or column vectors. In my experience, writing matrix maths in numpy is much easier thanks to not having to make this arbitrary distinction. "Is this 1D array a row or a column?" is just one less thing to worry about in numpy. And I learned MATLAB first, so I don't think I'm saying that just because it's what I'm used to.
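A minimal sketch of that point (Y, X and the matrix being concatenated onto are made up): you can stay 1D for all the actual algebra and only add an axis, if at all, at the final column-stacking step.

    import numpy as np

    Y = np.arange(6.0).reshape(2, 3)
    X = np.array([1.0, 2.0, 3.0])
    B = np.ones((2, 2))

    v = Y @ X                              # stays a true 1D array, shape (2,)

    C = np.hstack([B, v[:, np.newaxis]])   # axis added only for the concatenation
    C2 = np.column_stack([B, v])           # or let numpy treat the 1D array as a column
    print(np.array_equal(C, C2))           # True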
- Good advice but there's a bit of a difference between a device (or even several) you can knock together yourself and throw out of the side of a (surface) boat vs access to a whole undersea cable which (I have just learned) is what you need for DAS. Plus, if you can do it yourself with virtually no resources, it's a safe bet that any potential adversaries are already doing something many orders of magnitude greater.
Supposedly new submarines are so quiet that they can't be detected anyway. I'm sure there's a large element of exaggerating abilities here, but there's definitely an element of truth: in 2009, two submarines carrying nuclear weapons (not just nuclear powered) collided [1], presumably because they couldn't detect each other. If a nuclear submarine cannot detect another nuclear submarine right next to it then it's unlikely your $5 hydrophone will detect one at a distance.
Of course, none of this means that the military will be rational enough not to be annoyed with you.
[1] https://en.wikipedia.org/wiki/HMS_Vanguard_and_Le_Triomphant...
- I don't see a reply to nateb2022 by you.
- JNI for io_uring is not trivial code.
- Ah, I see. The last sentence in your previous comment makes more sense now ("Mapping is great, but ... you can violate it at run time"). A type checker would normally catch violations but I can still see a frozendict would be useful.
- That is still true and still irrelevant here. The comment we're talking about was not written by a bot with a disclaimer at the start. They just asked about its output. They didn't even quote its output - they paraphrased it and added their own commentary!
I know HN rules prohibit saying "did you even read it?" but you surely can't have read the comment to have come to this view, or at least significantly misread it. Have another look.
Most of all, HN guidelines are about encouraging thoughtful discussion. sundarurfriend's comment asked a genuinely interesting question and inspired interesting discussion. This subthread of "but AI!" did not.
- True, but the original comment that we're talking about here (by sundarurfriend) just mentioned an LLM's output in passing as part of their (presumably) human-written comment. Nothing you've linked to prohibits that.
- > There’s a huge difference between functions that might mutate a dictionary you pass in to them and functions that definitely won’t.
Maybe I misunderstood, but it sounds to me like you're hoping for the following code to work:
    def will_not_modify_arg(x: frozendict) -> Result:
        ...

    foo = {"a": 1, "b": 2}  # type of foo is dict
    r = will_not_modify_arg(foo)

But this won't work (as in, type checkers will complain) because dict is not derived from frozendict (or vice-versa). You'd have to create a copy of the dict to pass it to the function. (Aside from presumably not being what you intended, you can already do that with regular dictionaries to guarantee the original won't change.)

- Great point, I can't believe I missed that.
- > "arbitrary keys, decided at runtime" vs "fixed set of fields decided at definition time" (can't an NT's keys also be interpolated from runtime values?)
If you want to create a named tuple with arbitrary field names at runtime then you need to create a new named tuple type before you create the instance. That is possible, since Python is a very dynamic language, but it's not particularly efficient and, more importantly, just feels a bit wrong. And are you going to somehow cache the types and reuse them if the field names match? It's all a bit of a mess compared to just passing the entries to the frozendict type.
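A rough sketch of that dance (field names and values are made up; frozendict is the third-party type under discussion):

    from collections import namedtuple

    fields = ["alpha", "beta"]         # imagine these only arrive at runtime
    values = [1, 2]

    # Named tuple: a whole new *type* has to be created first...
    Row = namedtuple("Row", fields)    # ...and cached/reused somehow, if you care
    row = Row(*values)

    # ...versus just passing the entries straight to the frozendict type:
    # from frozendict import frozendict   # third-party package
    # fd = frozendict(zip(fields, values))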
- The sibling comment already answered your question, but just to add: As I mentioned earlier, this was actually how old programming languages worked. Famously(ish), Dijkstra secretly snuck recursive functions into the ALGOL 60 standard, thus forcing compiler authors to use a stack!
- Do you mean changing the default text box / picture / drawing canvas mode from "in line" to "in front of text" (which lets you put it anywhere, including over margins)? You can actually do that in advanced options. "DTP mode" sounds like marketing overkill for a simple option but maybe it would help.
- Taken literally, your statement said that [non-pro] DTP died because it had good tooling. I don't know what tooling for DTP is, but it seems unlikely to be so good that it would kill the software it supports, so your comment seems like nonsense. Why bother posting it if you're perfectly happy with that?
The real truth is more boring: DTP didn't die at all, it just merged as a category of software with word processors because computers got powerful enough to run programs with a union of their features. Whether the programs in this new combined category got called one thing or the other mainly depended on their history: Word and InDesign today have a lot more in common with each other than either does with programs from the early 1990s that are nominally in their respective categories. Whatever you were saying, it didn't seem to be that, so it was wrong anyway! But I asked nicely because I was curious if there was some substance there.
But even in complex applications, there's still truth to the idea that your code will get simpler over time. Mostly because you might come up with better abstractions so that at least the complex bit is more isolated from the rest of the logic. That way, each chunk of code is individually easier to understand, as is the relationship between them, even if the overall complexity is actually higher.