- Eh, I got a cheap degree from a public school (URI), albeit in Mathematics and not Comp Sci, and it hasn't stopped me getting good tech jobs over the last decade or so. I'm currently working at a FAANG. Maybe I'm just extra hard-working, smart, or lucky? Or maybe your pedigree isn't as big a deal as it once was? Hard to say from my N=1 data point.
- If you live in a jurisdiction where there is a speed limit enforced by law, you likely have driven above it at some point. By definition, this is a violation of the law. Yet you have observed that you have never been arrested (perhaps never even ticketed?) as a result of this. Is this a logical contradiction? Obviously not. The law isn't always enforced, and not every violation of the law is punished.
I can't speak for where you live, but in America there are many, many traffic laws. They differ greatly by jurisdiction, and most of them are not enforced. Sometimes the non-enforcement is explicit -- for example, my city recently announced it would no longer detain people for certain minor traffic violations -- but usually it's implicit which violations go unpunished. It's also selective: by creating an unseen web of violations, the law hands the detaining officer all the tools needed to make each stop as painful or as peaceful as they'd like.
- Sure, you can ask the agents to "identify and remove cruft" but I never have any confidence that they actually do that reliably. Sometimes it works. Mostly they just burn tokens, in my experience.
> And it's not like any of your criticisms don't apply to human teams.
Every time the limitations of AI are discussed, we see this unfair standard applied: ideal AI output is compared to the worst human output. We get it, people suck, and sometimes the AI is better.
At least the ways that humans screw up are predictable to me. And I rarely find myself in a gaslighting session with my coworkers where I repeatedly have to tell them that they're doing it wrong, only to be met with "oh my, you're so right!" and watch them re-write the same flawed code over and over again.
- Yeah, I see quite a lot of misanthropy in the rhetoric people sometimes use to advance AI. I'll say something like "most people are able to learn from their mistakes, whereas an LLM won't" and then some smartass will reply "you think too highly of most people" -- as if this simple capability is just beyond a mere mortal's abilities.
- I'll never doubt the ability of people like yourself to consistently mischaracterize human capabilities in order to make it seem like LLMs' flaws are just the same as (maybe even fewer than!) humans'. There are still so many obvious errors (noticeable by just using Claude or ChatGPT to do some non-trivial task) that the average human would simply not make.
And no, just because you can imagine a human stupid enough to make the same mistake, doesn't mean that LLMs are somehow human in their flaws.
> the gap is still shrinking though
I can tell this human is fond of extrapolation. If the gap is getting smaller, surely soon it will be zero, right?
- IMO, Python should only be used for what it was intended for: as a scripting language. I tend to use it as a kind of middle ground between shell scripting and compiled languages like Rust or C. It's a truly phenomenal language for gluing together random libraries and data formats, and whenever I have some one-off task where I need to request some data from some REST API, build a mapping from the response, categorize it, write the results as JSON, then push some result to another API -- I reach for Python.
But as soon as I suspect the task will involve any non-trivial computation, or once the structure of the program starts to grow beyond a couple of files, Python no longer feels suited to the task.
- Mathematical notation isn't at all backwards compatible, and it certainly isn't consistent. It doesn't have to be, because the execution environment is the abstract machine of your mind, not some rigidly defined ISA or programming language.
> Everyone seems to agree tau is better than pi. How much adoption has it seen?
> It took hundreds of years for Arabic numerals to replace Roman numerals in Europe.
What on earth does this have to do with version numbers for math? I appreciate this is Hacker News and we're all just pissing into the wind, but this is extra nonsensical to me.
The reason math is slow to change has nothing to do with backwards compatibility. We don't need to institute Math 2.0 to change mathematical notation. If you want to use tau right now, the only barrier is other people's understanding. I personally like to use it, and if I anticipate its use will be confusing to a reader, I just write `tau = 2pi` at the top of the paper. Still, others have their preference, so I'm forced to understand papers (i.e. the vast majority) which still use pi.
Which points to the real reason math is slow to change: people are slow to change. If things seem to be working one way, we all have to be convinced to do something different, and that takes time. It also requires there to actually be a better way.
> Is this what "heavy optimization" looks like?
I look forward to your heavily-optimized Math 2.0 which will replace existing mathematical notation and prove me utterly wrong.
- I weep for a world that is increasingly dominated by corporations, filled with people who are insistent (probably correctly) that they are being taken advantage of, doing the bare minimum, all resulting in an awful experience for everyone. Behind every support ticket that you just can't seem to get resolved, every horrible experience trying to use some product seemingly designed to drive you insane, behind every hare-brained decision that makes your life miserable for seemingly no reason, there's an apathetic worker who's taken your mindset. The impact of your efforts doesn't just affect your employer. We all work together to create the world. What kind of world do you want to live in?
I would hope there's a healthy medium between "pulling all-nighters" and "doing the bare minimum" -- somewhere where we all try to do our best, but don't push ourselves too hard for no reason. I mean, that's more reasonable than imagining we'll one day overthrow our corporate overlords. Probably I'm naive and idealistic. But I can't help but feel that the result of apathy is not satisfaction.
- The surprise is the federal government acting like an unfair negotiator, substantially altering the deal after it had already been struck. Equity in return for investment grants was never a part of CHIPS, and was only made part of it by Trump, who seems to have originally wanted to kill the deal because it wasn't made by him.
- Depends on who you ask. Trump himself seems to think the US is getting 10% for free. I think that's a fair assessment given that these grants were already supposed to be paid out to Intel, without any kind of equity stake promised.
Worth noting that Intel is the only company that had these kinds of shenanigans pulled with their grant. Samsung, TSMC, Micron and others were granted similar funds without any kind of withholding or demands for equity from the federal government.
- I think this takes an unnecessarily narrow view of what "intelligence" implies. It conflates "intelligence" with fact-retention and communicative ability. There are many other intelligent capabilities that most normally-abled human beings possess, such as:
- Processing visual data and classifying objects within their field of vision.
- Processing auditory data, identifying audio sources and filtering out noise.
- Maintaining an ongoing and continuous stream of thoughts and emotions.
- Forming and maintaining complex memories on long-term and short-term scales.
- Engaging in self-directed experimentation or play, or forming independent wants/hopes/desires.
I could sit here all day and list the forms of intelligence that humans and other intelligent animals display which have no obvious analogue in an AI product. It's true that individual AI products can do some of these things, sometimes better than humans ever could, but there is no integrated AGI product that has all of these capabilities. Let's give ourselves a bit of credit and not ignore or flippantly dismiss our many intelligent capabilities as "useless."
- I feel like, if nothing else, this new wave of AI products is rapidly demonstrating the lack of faith people have in their own intelligence -- or maybe, just the intelligence of other human beings. That's not to say that this latest round of AI isn't impressive, but legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles.
- > Is the critical difference here that the array access happens outside of the "for" expression?
Precisely: this means that `d[k]` is guaranteed to execute before the check that `k < 16`. In general, if you have some access like `d[k]` where `k` is some integer and `d` is some array of size `N`, you can assume that `k < N` on all paths which are dominated by the statement containing `d[k]`. In simpler terms, the optimizer will assume that `k < N` is true on every path after the access to `d[k]` occurs.
To make this clearer, consider an equivalent, slightly-transformed version of the original code:
```
int d[16];
int SATD (void) {
  int satd = 0, dd, k;
  dd = d[k=0];
  do {
    satd += (dd < 0 ? -dd : dd);
    k = k + 1;
    dd = d[k];      // At this point, k must be < 16.
  } while (k < 16); // The value of `k` has not changed, thus our previous
                    // assumption that `k < 16` must still hold. Thus `k < 16`
                    // can be simplified to `true`.
  return satd;
}
```
Now consider a slightly-transformed version of the correct code:
```
int d[16];
int SATD (void) {
  int satd = 0, k = 0;
  do {
    satd += d[k] < 0 ? -d[k] : d[k]; // At this point, k must be < 16.
    k = k + 1;
  } while (k < 16); // The value of `k` has changed -- at best, we can assert
                    // that `k < 17` since its value increased by 1 since we
                    // last assumed it was less than 16. But the assumption
                    // that `k < 16` doesn't hold, and this check cannot be
                    // simplified.
  return satd;
}
```
It's important that this is understood in terms of dominance (in the graph-theoretical sense), because statements like "k < 16 can never be false because it's used in d[k] where k == 16" or "the compiler will delete checks for k < 16 if it knows that d[16] occurs", which seem equivalent to the previously-stated dominance criterion, simply are not. It's not that the compiler is detecting UB, thus deleting your checks -- it's that it assumes UB never occurs in the first place.

- Leaving aside the fact that that code reads an array out-of-bounds (which is not a trivial security issue), that's a ridiculously obtuse way to write that code. For-loop conditions should almost always be expressed in terms of their induction variable. A much cleaner and safer version is
```
int d[16];
int SATD (void) {
  int satd = 0, k = 0;
  for (k = 0; k < 16; ++k)
    satd += d[k] < 0 ? -d[k] : d[k];
  return satd;
}
```
- I particularly like how they use `space` in two different ways, from Q-Tip's second verse:

> These notions and ideas and citizens live in space
> I chuckle just like all of y'all, absurdity, after all
> Takes money to get it running and money for trees to fall
> Imagine for one second all the people are colored, please
> Imagine for one second all the people in poverty
> No matter the skin tone, culture or time zone
> Think the ones who got it
> Would even think to throw you a bone?
> Moved you out your neighborhood, did they find you a home?
> Nah cypher, probably no place to
> Imagine if this shit was really talkin' about space, dude

While the rich and powerful imagine blasting off to outer space, advancing the frontiers of humanity, boldly going where no man has gone before -- the poor still find themselves squabbling for just a little space on the planet we all call home. Imagine if 'space program' really meant 'space' program?

- Just to further illustrate what I'm saying, are you really trying to say that
```
// explicitly annotating this struct is default initializable and copyable
#[derive(Default, Copy, Clone)]
struct Foo { ... }
```
is actually worse than
```
struct Foo {...}; // rule of zero: copy/move/default are defined/deleted based on
                  // arcane rules predicated on the contents of Foo
```
- You're mistaken. Rust does not require you to define all constructors. Rust does not have constructors.
All structs in Rust must be initialized using brace syntax, e.g. `Foo { bar: 1, baz: "" }`. This is commonly encapsulated in associated functions (e.g. `Foo::new(1, "")`) that act similarly to constructors, but which are not special in any way compared to other functions. This avoids a lot of the strangeness in C++ that arises from constructors being "special" (they can't be named, don't have a return type, and use initializer-list syntax that appears nowhere else).
This, combined with move-by-default semantics, means you also don't have to worry about copy constructors or copy-assignment operators (you opt into copy semantics by deriving Clone and explicitly calling `.clone()` to create a copy, or by deriving Copy for implicit copy-on-assign), or about move constructors and move-assignment operators (all non-Copy assignments are moves by default).
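To make that concrete, here's a minimal sketch of what it looks like in practice (the `Config` and `Point` types are made up for illustration, not from anything upthread):
```
// Hypothetical types for illustration only.

// A struct that opts into explicit copies via Clone.
#[derive(Debug, Clone)]
struct Config {
    retries: u32,
    name: String,
}

impl Config {
    // An ordinary associated function acting as a "constructor" --
    // nothing special about it compared to any other function.
    fn new(retries: u32, name: &str) -> Self {
        Config { retries, name: name.to_string() } // plain brace initialization
    }
}

// A small value type that opts into implicit copy-on-assign via Copy.
#[derive(Debug, Clone, Copy)]
struct Point { x: i32, y: i32 }

fn main() {
    let a = Config::new(3, "primary");
    let b = a.clone();       // explicit copy
    let c = a;               // move: `a` is no longer usable after this line
    // println!("{:?}", a);  // error[E0382]: borrow of moved value: `a`
    println!("{:?} {:?}", b, c);

    let p = Point { x: 1, y: 2 };
    let q = p;               // Copy type: `p` is still usable
    println!("{:?} {:?}", p, q);
}
```
No copy/move constructors, no assignment operators to keep straight: the derives and the move rules cover all of it.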
It's actually rather refreshing, and I find myself writing a lot of my C++ code in imitation of the Rust style.
- "Seventeen-hundreds" doesn't sound stilted at all to this native English speaker. I would use it even in a very formal context, and certainly no one would bat an eye. "One-thousand-seven-hundreds" is almost certainly incorrect in English -- I would actually find it to be a very clear marker of not speaking English very well. Are you a native speaker? I find your claims rather bold and, quite frankly, incorrect.
Also, something curious I learned elsewhere in this thread: Finnish does not typically use centuries either. Rather, it uses a construction that maps directly to "1700s" (1700-luku). I would be careful about accidentally applying your own cultural bias when accusing others of the same ;)