
emn13
Joined 3,622 karma

  1. Create an image that displays two seven-pointed stars, two eight-pointed stars, and two nine-pointed stars. All stars are connected to each other, except for the ones with the same number of points. The lines connecting the stars must NOT intersect.
  2. I think it's not at all a marshmallow test; quite the opposite - docs used to be written way, way in advance of their consumption. The problem that implies is twofold. Firstly, and less significantly, it's just not a great return on investment to spend tons of effort now to maybe help slightly in the far future.

    But the real problem with docs is that for MOST use cases, the audience and context of the readers matter HUGELY. Most docs are bad because we can't predict those. People waste ridiculous amounts of time writing docs that nobody reads or nobody needs, based on hypotheses about the future that turn out to be false.

    And _that_ is completely different when you're writing context-window documents. These aren't really documents describing a codebase, or the context within which the codebase exists, in some timeless fashion; they're better understood as part of a _current_ plan for action on an acute, real concern. They're battle-tested the way docs only rarely are. And as a bonus, sure, they're retainable and might help for the next problem too, but that's not why they work; they work because they're useful in an almost testable way right away.

    The exceptions to this pattern kind of prove the rule - people for years have done better at documenting isolatable dependencies, i.e. libraries - precisely because those happen to sit at boundaries where it's both easier to make decent predictions about future usage, and often also because those docs might have far larger readership, so it's more worth it to take the risk of having an incorrect hypothesis about the future wasting effort - the cost/benefit is skewed towards the benefit by sheer numbers and the kind of code it is.

    Having said that, the dust hasn't settled on the best way to distill context like this. It'd be a mistake to overanalyze the current situation and conclude that documentation is certain to be the long-term answer - it's definitely helpful now, but it's certainly conceivable that more automated and structured representations might emerge, perhaps in forms better suited for machine consumption that look a little more alien to us than conventional docs.

  3. Yep. Still, I think it's a pretty decent benchmark in the sense that it's fairly short, quite repeatable, has quite a few subtests, and it's not horribly different from the nebulous concept that is "typical workloads". It's suspiciously memory-latency bound, perhaps more than most workloads, but that's a quibble. If they'd simply labelled it "lightly threaded" instead of "multithreaded", it would have been fine.

    As it is, it's just clearly misleading to people that haven't somehow figured out that it's not really a great test of multithreaded throughput.

  4. It's not trash - it's quite nice for its niche. It's just not very scalable with cores, so it's best interpreted as a benchmark of lightly threaded workloads - like lots of typical consumer workloads are (gaming, web browsing, light office work). Then again, it's not hard to find workloads that scale much better, and geekbench 6 doesn't really have a benchmark for those.

    For the first 8 threads or so, it's fine. Once you hit 20 or so it's questionable, or at least that's my impression (the rough sketch below shows why a mostly-serial score saturates).
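
    To illustrate (rough numbers of my own, not Geekbench's): if only a fraction p of a workload actually runs in parallel, its speedup on n cores is bounded by 1 / ((1 - p) + p / n), so a lightly threaded score stops tracking core count pretty quickly.

      using System;

      class ScalingSketch
      {
          // Speedup bound for a workload where only fraction p parallelizes.
          static double Speedup(double p, int cores) => 1.0 / ((1.0 - p) + p / cores);

          static void Main()
          {
              foreach (var cores in new[] { 4, 8, 20, 64 })
                  Console.WriteLine($"{cores,2} cores: {Speedup(0.75, cores):F2}x");
              // 4 cores: 2.29x, 8 cores: 2.91x, 20 cores: 3.48x, 64 cores: 3.82x
              // -> past ~8 cores the score barely moves, even though throughput-bound
              //    workloads would keep scaling.
          }
      }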

  5. I mean, reliably tracking ownership and therefore knowing that e.g. an aliased write must complete before a read is surely helpful?

    It won't prevent all races, but it might help avoid mistakes in a few of them. And concurrency is such a pain; any such machine-checked guarantees are probably nice to have for those dealing with them - caveat being that I'm not such a person.

  6. There's also the alternative of announcing this breakage publicly to Electron beforehand; and the alternative of having a hack and publicly announcing it will be removed in a year. There's even the alternative of just announcing the caveat at all, so your users aren't unwitting guinea pigs. If they don't want to support a million workarounds forever, they don't have to; it's not all or nothing.
  7. Put it this way: if I were in charge of a major OS and hadn't had one of the major app frameworks used on my OS tested against my annual upgrade, I'd feel pretty embarrassed, even if there's a fig-leaf excuse why it's not my fault.

    This doesn't exactly instill confidence in Apple's competence.

  8. Hyper amusing, thanks for sharing! Doesn't really improve the analogy, but fun quirk of history :-D.
  9. So on the one hand we have a product which isn't even remotely designed for the use case (hamsters), and which during normal use shows obvious behaviour (cooking) that should imply risk to said hamsters. On the other side, we have a product designed to be installed in an electrical system, which shows no signs during normal use that it's installed unsafely, and whose advertised specs are not actually safe for normal usage.

    Whether or not the company in this case shares some or most of the blame with novice users - the analogy is not a great one.

  10. The author's examples of rough edges are, however, no better when hosted on Vercel. The architecture seems... overly clever, leading to all kinds of issues.

    I'm sure commercial incentives would lead to issues that affect paying (hosted) customers getting better resolutions than those that only affect self-hosters, but that's not enough to explain this level of pain, especially not for issues that would affect paying customers just as much.

  11. If you care about absolute accuracy, I'm skeptical you want floats at all. I'm sure it depends on the use case.

    Whether it's the standard's fault or the language's fault for following the standard in terms of preventing auto-vectorization is splitting hairs; the whole point of the standard is to have predictable and usually fairly low-error ways of performing these operations, which only works when the order of operations is defined. That very aim is the problem; to the extent the standard becomes harmless because ordering guarantees no longer apply, you're essentially applying some of those tricky -ffast-math sub-optimizations (the small sketch at the end of this comment shows why reordering changes results).

    But to be clear in any case: there are obviously cases where order-of-operations is relevant enough that accuracy-altering reorderings are not valid. It's just that those are rare enough that for many of these features I'd much prefer that to be the opt-in behavior, not opt-out. There's absolutely nothing wrong with having a classic IEEE 754 mode, and I expect it's an essential feature in some niche corner cases.

    However, given the obviously huge application of massively parallel processors and algorithms that accept rounding errors (or sometimes, conversely, overly precise results!), clearly most software is willing to generally accept rounding errors to be able to run efficiently on modern chips. It just so happens that none of the computer languages that rely on mapping floats to IEEE 754 floats in a straightforward fashion are any good at that, which seems like a bad trade-off.

    There could be multiple types of floats instead; or code-local flags that delineate special sections that need precise ordering; or perhaps even expressions that clarify how much error the user is willing to accept and then just let the compiler do some but not all transformations; and perhaps even other solutions.
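
    For concreteness, here's a minimal sketch (my own, in C#) of why reordering changes results: float addition isn't associative, so a strict left-to-right sum and a pairwise, "vectorized-style" sum legitimately disagree.

      using System;

      class ReassociationDemo
      {
          static void Main()
          {
              // Values chosen so rounding is visible: 1f is below the precision
              // of a float near 1e8f, so it gets absorbed depending on ordering.
              float[] xs = { 1e8f, 1f, -1e8f, 1f };

              // Strict left-to-right order, as language semantics that map floats
              // directly onto IEEE 754 operations require.
              float sequential = 0f;
              foreach (var x in xs) sequential += x;

              // Pairwise order, roughly what a vectorizing compiler would emit by
              // summing independent lanes and combining them at the end.
              float pairwise = (xs[0] + xs[1]) + (xs[2] + xs[3]);

              Console.WriteLine(sequential); // 1
              Console.WriteLine(pairwise);   // 0 (the exact answer is 2)
          }
      }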

  12. I get the feeling that the real problem here is the IEEE specs themselves. They include a huge bunch of restrictions that each individually aren't relevant to something like 99.9% of floating point code, and probably even in aggregate not a single one is relevant to a large majority of code segments out in the wild. That doesn't mean they're not important - but some of these features should have been locally opt-in, not opt-out. And at the very least, standards need to evolve to support the hardware realities of today.

    Not being able to auto-vectorize seems like a pretty critical bug given hardware trends that have been going on for decades now; on the other hand sacrificing platform-independent determinism isn't a trivial cost to pay either.

    I'm not familiar with the details of OpenCL and CUDA on this front - do they have some way to guarantee a specific order of operations such that code always has a predictable result on all platforms and nevertheless parallelizes well on a GPU?

  13. Yeah, before required properties/fields, C#'s nullability story was quite weak; required is a pretty critical part of making the annotations cover enough of a codebase to really matter. (Technically constructors could have done what required does, but that implies _tons_ of duplication and boilerplate if you have a non-trivial number of such classes, records, structs and properties/fields within them; not really viable.)

    TypeScript's Partial can however do more than that - required means you can practically express a type that cannot be instantiated partially (without absurd amounts of boilerplate anyhow), but if you do, you can't _also_ express that same type in a partially initialized form. There are lots of really boring everyday cases where partial initialization is very practical. Any code that collects various bits of required input, but needs to set aside and express the intermediate state of that collection while it's still being gathered or when it fails to complete, wants something like Partial (see the sketch at the end of this comment).

    E.g. if you're using the most common C# web platform, ASP.NET Core, to map inputs into a typed object, you're now forced to express "semantically required but not type-system required" via some other path. Or, if you use C# required, you must choose between unsafe code that nevertheless allows access to objects that never had those properties initialized, or safe code that then blocks access to the rest of the input too, which is annoying for error handling.

    TypeScript's type system can, on the other hand, express the notion that all or even just some of those properties are missing; it's even pretty easy to express the notion of a mapped type wherein all of the _values_ are replaced by strings - or, say, by a result type. And flow-sensitive type analysis means that sometimes you don't even need any kind of extra type check to "convert" from such a partial type into the fully initialized flavor; that's implicitly deduced simply because once all properties are statically known to be non-null, well, at that point in the code the object _is_ of the fully initialized type.

    So yeah, C#'s nullability story is pretty decent really, but that doesn't mean it's perfect either. I think it's important to mention stuff like Partial because sometimes features like this are looked at without considering the context. Most of these features sound neat in isolation, but are also quite useless in isolation. The real value is in how it allows you to express and change programs whilst simultaneously avoiding programmer error. Having a bit of unsafe code here and there isn't the end of the world, nor is a bit of boilerplate. But if your language requires tons of it all over the place, well, then you're more likely to make stupid mistakes and less likely to have the compiler catch them. So how we deal with the intentional inflexibility of non-nullable reference types matters, at least, IMHO.

    Also, this isn't intended to imply that TypeScript is "better". It has even more common holes that are also unfixable given where it came from and the essential nature of so much interop with type-unsafe JS, and a bunch of other challenges. But in order to mitigate those challenges TS implemented various features, and then we're able to talk about what those features bring to the table and conversely how their absence affects other languages. Nor is "MOAR FEATURES" a free lunch; I'm sure anybody that's played with almost any language with heavy generics has experienced how complicated it can get. IIRC didn't somebody implement DOOM in the TS type system? I mean, when your error messages are literally demonic, understanding the code may take a while ;-).
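
    To make the required-vs-Partial point concrete, here's a minimal C# sketch (type and property names invented for illustration):

      #nullable enable
      using System;

      public class SignupInput
      {
          // 'required' forces callers to set these at construction time, which is
          // exactly what you want for the fully validated case...
          public required string Email { get; init; }
          public required string DisplayName { get; init; }
      }

      // ...but there's no built-in way to say "the same shape, only partially
      // filled in". In practice you hand-write a second type - the boilerplate
      // that TypeScript's Partial<T> generates for you.
      public class PartialSignupInput
      {
          public string? Email { get; set; }
          public string? DisplayName { get; set; }
      }

      class Demo
      {
          static void Main()
          {
              // var bad = new SignupInput();  // compile error: required members not set

              // Fine: we're still collecting input.
              var draft = new PartialSignupInput { Email = "a@example.com" };

              if (draft.Email is not null && draft.DisplayName is not null)
              {
                  // Only now can the "complete" type be built safely.
                  var complete = new SignupInput { Email = draft.Email, DisplayName = draft.DisplayName };
                  Console.WriteLine(complete.Email);
              }
          }
      }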

  14. I love building libraries, so talking about the gotchas with things like this is a fun chance to reflect on what is and is not possible with the tools we have. I guess my favorite "feature" in C# is how willing they are to improve, and that many of the improvements really matter, especially when accumulated over the years. A C# 13 codebase can be so much nicer than a C# 3 codebase... and faster and more portable too. But nothing's perfect!
  15. "Recovered" sounds so binary.

    I think it's pretty usable now, but there is scarring. The solution would have been much nicer had it been around from day one, especially surrounding generics and constraints.

    It's not _entirely_ sound, nor can it warn about most mistakes when those are in the "here-be-dragons" annotations in generic code.

    The flow sensitive bit is quite nice, but not as powerful as in e.g. typescript, and sometimes the differences hurt.

    It's got weird gotcha interactions with value types - for instance (but likely not limited to) generics that aren't constrained to struct but _do_ allow nullable usage for ref types; see the sketch at the end of this comment.

    Support in reflection is present, but it's not a "real" type, so everything works differently there; you'll see that reflection-based code dealing with this kind of stuff tends to need special handling for ref-type vs. value-type nullability, and it often leaks out into API consumers too - not sure if that's just a practical limitation or a fundamental one, but it's very common anyhow.

    Last I looked, there was no way to get runtime checking for incorrect nulls in fields marked non-nullable, which is particularly annoying if there's even an iota of not-yet-annotated or incorrectly annotated code involved, including e.g. deserialization.

    Related features like TS Partial<> are missing, and that means that expressing concepts like POCOs that are in the process of being initialized but aren't yet is a real pain; most code that does that in the wild is not typesafe.

    Still, if you engage constructively and are willing to massage your patterns and habits you can surely get to something like 99% type-checkable code, and that's still a really good help.
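
    A small sketch (my own, names invented) of the value-type/generic and reflection quirks mentioned above:

      #nullable enable
      using System;

      class Box<T>
      {
          // For an unconstrained T, "T?" does NOT become Nullable<T> when T is a
          // value type; it just means "T, possibly default". So Box<int>.Value is
          // a plain int, while Box<string>.Value is an annotation-only string?.
          public T? Value;
      }

      class Demo
      {
          static void Main()
          {
              var ints = new Box<int>();
              var strs = new Box<string>();

              Console.WriteLine(ints.Value);          // 0, not null: no Nullable<int> involved
              Console.WriteLine(strs.Value is null);  // True: reference nullability is metadata only

              // At runtime, string and string? are the same System.String; int? is a
              // genuinely different type (Nullable<int>). Reflection-based code has to
              // handle the two kinds of "nullable" completely differently.
              Console.WriteLine(typeof(Box<int>).GetField("Value")!.FieldType);    // System.Int32
              Console.WriteLine(typeof(Box<string>).GetField("Value")!.FieldType); // System.String
          }
      }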

  16. While I'm most familiar with C#, and haven't used Ruby professionally for almost a decade now, I think we'd be better off looking at typescript, for at least 3 reasons, probably more.

    1. Flow sensitivity: it's a sure thing that in a dynamic language people use coding conventions that fit naturally with the runtime-checked nature of those types. That makes flow-sensitive typing really important.

    2. Duck typing: dynamic languages - and certainly the Ruby codebases I knew - often use duck typing. That works really well in something like TypeScript, including via really simple features such as type intersections and unions, but those features aren't present in C#.

    3. Proof by survival: TypeScript is empirically a huge success. They're doing something right when it comes to retrospectively bolting static types onto a dynamic language. Almost certainly there are more things than I can think of off the top of my head.

    Even though I prefer C# to TypeScript or Ruby _personally_ for most tasks, I don't think it's perfect, nor is it likely a good crib sheet for historically dynamic languages looking to add a bit of static typing - at least, IMHO.

    Bit of a tangent, but there was a talk by Anders Hejlsberg on why they're porting the TS compiler to Go (and implicitly not C#) - https://www.youtube.com/watch?v=10qowKUW82U - I think it's worth recognizing the kind of stuff that goes into these choices that's inevitably not obvious at first glance. It's not about the "best" language in a vacuum, it's about the best tool for _your_ job and _your_ team.

  17. Of course they had a choice: they could have stuck with Google Maps for longer, and they probably also could have invested more in data and UI beforehand. They could have launched a submarine, non-Apple-branded product to test the waters. They could likely have done other things we haven't thought of here, in this thread.

    Quite plausibly they just didn't realize how rocky the start would be, or perhaps they valued that immediate strategic autonomy in the short term more than we think, and willingly chose to take the hit to their reputation rather than wait.

    Regardless, they had choices.

  18. While some of what you say is an interesting thought experiment, I think the second half of this argument has, as you'd put it, a low symbolic coherence and low plausibility.

    Recognizing the relevance of coherence and plausibility does not need to imply that other aspects are any less relevant. Redefining truth merely because coherence is important and sometimes misinterpreted is not at all reasonable.

    Logically, a falsehood can validly be derived from assumptions when those assumptions are false. That simple reasoning step alone is sufficient to explain how a coherent-looking reasoning chain can result in incorrect conclusions. Also, there are other ways a coherent-looking reasoning chain can fail. What you're saying is just not a convincing argument that we need to redefine what truth is.

  19. Perhaps the solution(s) need to focus less on output quality and more on having a solid process for dealing with errors. Think undo, containers, git, CRDTs or whatever, rather than zero tolerance for errors. That probably also means some kind of review for the irreversible bits of any process, and perhaps even process changes where possible to make common processes more reversible (which sounds like an extreme challenge in some cases).

    I can't imagine we're anywhere even close to the kind of perfection required not to need something like this - if it's even possible. Humans use all kinds of review and audit processes precisely because perfection is rarely attainable, and that might be fundamental.

  20. Is the advantage over an enum not kind of small? We're seeing bugs here because people tried to do the right thing but the tooling has absolutely no way of helping anybody do that; simply preventing accidental mistakes would prevent these. Adding complexity to make it harder (though never impossible) for consumers to misuse the API in a complex way seems like it's potentially going too far.

    Then again, it's been years since I used this kind of C, so maybe my instincts are rusty here (no rust-pun intended!)

  21. While a humorous response by Milton and an interesting debating point, the argument he makes is pretty weak because it almost inevitably reduces to complete lawlessness, doesn't really define which government "granted" monopolies he's willing to give up, and ultimately relies on a fairly arbitrary definition of what government even is - and one that if you really let it go to the extreme not only obviously just doesn't work well for most people, it also does not avoid monopolies as is witnessed every day around the globe.

    After all, the natural inclination of a powerful elite is to protect their interests. It's business 101 to want a moat, and tearing down one set of artificial legal protections that allows for a moat allows, on the other hand, for the far more extreme and quite physically violent moat in the form of a Putin-esque kleptocracy.

    The argument merely sounds convincing because it very selectively implies that certain monopolies are created by state power and might be weakened by free market principles, without considering what a free market even is (generally a regulated one), nor addressing the fact that other monopolies will arise precisely because the lack of regulation allows winner-take-all brute force strategies to work.

    That doesn't mean Milton's ideas are without merit - but there is a breaking point; dogmatically hoping for anarchy to avoid harmful centralization of power is problematic because of the dogma, not because it's never a valid approach.

    But sure, if you're going to embrace Milton's (intentionally) vague proposition in the way it was likely intended - to provoke thought - then sure; there are state regulations that are in part to blame for some of today's near monopolies - the interaction between intellectual property, incorporation, and state-enforced contract law. As a matter of debate, sure, it'd be interesting to weaken all three and in particular their interactions. I just highly doubt that's very practical, nor would it be very easy to predict the outcome, especially once international power-plays start circumventing even the best of intentions.

  22. I'm curious what you base that on. For instance, we've never really allowed literally cutthroat competition, nor things like fraud, and we've generally not allowed misrepresentation. Governments intervene heavily - and always have - to set those kinds of boundary conditions, and there are really lots of them. Economies of scale seem to be very, very common ever since the industrial revolution, and even more so in today's information-economy platform era.

    I'm sure there are plenty of cases where significant competition is a natural end state, but how common are those in comparison? I'm curious.

  23. One race-to-the-bottom phenomenon that (to me at least) appears to aggravate the impact of "corporate greed" is the social loop that goes as follows:

    1. company decides to push the boundaries of the socially acceptable when it comes to cutting corners (e.g. screwing their customers, or employees, or environment, or debtors)

    2. People don't like it, but rationalize this as being a natural consequence of incorporation and the profit motive. Hence while they grumble, the negative impacts to public perception don't actually cost the company as much as you might think.

    2b. Even if there's a boycott, there will be a vocal minority that thinks it's all a bunch of whiny <target audience we're better than>. They'll actively harass or undermine said boycott or backlash, even if in a purely egotistical sense their interests are actually aligned with the boycotters'.

    3. The social norm is reset; we all collectively expect even less from companies. That doesn't however mean the new norm is stable, because as soon as there's some new major conflict between short-term profit and maintaining a decent reputation in public, we go back to step 1 from the new, lower baseline.

    Stuff like increasing partisanship, and decreasing incentives for journalists (whether professional, citizen or influencer) to maintain their professional standing (as opposed to chasing clickbait), probably greases those gears nicely.

    Many companies have historically clearly paid well over the odds to maintain their reputation, and done well doing so. It's just not true that nihilistic short-term greed has always paid; obviously it didn't, and it still doesn't really. It's profitable to do the little, cheap things that materially affect public standing - as many of them as possible.

    By promoting the profit motive past a merely utilitarian means to an efficiency-optimizing end, into a matter of national identity and a point of distinction vs. in particular the USSR, we've shifted our culture beyond what's really rational. We (as a society) don't merely respect and understand the profit motive; we see it as a sign of merit - and significant enough merit that "winning" on that scale excuses a lot of other bad behavior.

  24. Outright monopolistic pricing is also "the market rate". Frankly, virtually any price somebody is willing to pay is almost by definition "the market rate". It's a meaningless defense for an artificially high price.
  25. Certainly helped!
  26. The article lists two: firstly, simply that ML models are now feasible at a scale they weren't only a few years ago. Secondly, compute power has improved enough that it can now simulate more realistic environments, which enables sim-based (pre)training to work better. That second one is potentially particularly alluring to Nvidia, given how it plays to two of their unique strengths - AI and graphics.
  27. Presumably the author really is missing a driver; I doubt he'd have missed the fact that he could install without it. If he really does need such a driver, then the exact name of it, or the details of Dell's BIOS options and whether they help, sound fairly incidental to the underlying story.

    Your criticism may be reasonable, but does it really cut to the heart of the issue? Also, some of these options are occasionally oddly named, so let's not ignore the possibility that the article's author is right on this.

  28. If you're literally using it just once, why not stick it in a local variable instead? You're still getting the advantage of naming the concept that it represents, without eroding code locality (see the sketch at the end of this comment).

    However, the example is a slightly tricky basis for forming an opinion on best practice: you're proposing that a clearly named example function, is_enabled, is better than an expression built from symbols with gibberish names. Had those names (x, foo, bar, baz, etc.) instead been well-chosen, meaningful names, then perhaps the inline expression would have been just as clear, especially if the body of the if makes it obvious what's being checked.

    It all sounds great to introduce well-named functions in isolated examples, but examples like that are intrinsically so small that the costs of extra indirection are irrelevant. Furthermore, in these hypothetical examples we're kind of assuming that there _is_ a clearly correct and unique definition for is_enabled, but in reality many ifs like this have more nuance. The if may well not represent is-enabled; it might be more something like was-enabled-last-app-startup-assuming-authorization-already-checked-unless-io-error. And the danger of leaving out implicit context like that is precisely that it sounds simple, is_enabled, but that simplicity hides corner cases and unchecked assumptions that may be invalidated by later code evolution - especially if the person changing the code is _not_ changing is_enabled and is therefore at risk of assuming it really means whether something is enabled, regardless of context.

    A poor abstraction is worse than no abstraction. We need abstractions, but there's a risk of doing so recklessly. It's possible to abstract too little, especially if that's a sign of just not thinking enough about semantics, but also to abstract too much, especially if that's a sign of thinking superficially, e.g. to reduce syntactic duplication regardless of meaning.
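
    A tiny sketch (my own, names invented, C#-flavored) contrasting the two options:

      record Settings(bool FeatureFlagOn);
      record User(bool HasOptedIn, bool IsSuspended);

      static class Example
      {
          // Option A: extract a tiny predicate. The call site reads nicely, but the
          // reader must jump away to learn exactly which assumptions it bakes in.
          static bool IsEnabled(Settings settings, User user) =>
              settings.FeatureFlagOn && user.HasOptedIn && !user.IsSuspended;

          static void HandleWithFunction(Settings settings, User user)
          {
              if (IsEnabled(settings, user)) { /* ... */ }
          }

          // Option B: a well-named local keeps the definition at its single call
          // site, so the corner cases stay visible exactly where they're used.
          static void HandleWithLocal(Settings settings, User user)
          {
              bool isEnabled = settings.FeatureFlagOn && user.HasOptedIn && !user.IsSuspended;
              if (isEnabled) { /* ... */ }
          }
      }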

  29. Those deductions in no way change the basics of income (and sales) tax, which are levied on revenue, not profit. A person with a good wage will pay a significant amount in tax even if at the end of a year they have no more wealth than before it; i.e. no profit. And while of course there _exist_ places that don't work this way or don't tax real estate, that doesn't diminish the fact that there exist places that _do_ work this way - which demonstrates that there's no broad agreement that taxation must be limited to, and occur after, profits. Norwegian self-proclaimed entrepreneurs aren't unique in their "victimhood", which seems to be the angle of the original article.

    Consider a thought experiment: in a fast-growing world, taxation limited to profits, when honestly applied and without exploitable loopholes (not an obviously satisfied precondition), might be able to cover the costs of shared concerns, i.e. government's primary business. But imagine for a moment that that growth were to significantly slow or even stop - without profits, taxation would fall to a trickle (limited to those niches that have zero-sum profits yet lack the ability to amortize over loss-making periods and lack the ability to strike a deal to fiscally merge with a loss-making business for tax purposes). Clearly, that's not sustainable. I think it's hard to imagine that the only "just" way to tax is one that is fundamentally dependent on permanent, significant growth, even if we've been lucky enough to live in such a world for quite a while, at least on paper. Given how some costs (e.g. depletion of natural resources and pollution) aren't on the fiscal books, and that from an idealistic free market stance one might prefer to include those costs on the books, true global growth is surely already lower than it looks on paper, even if it's still hopefully positive.
