Research interests: programming languages, compilers and runtimes; parallel, distributed and high-performance computing. Feel free to contact me (see about page below) if you have interesting problems to talk about.
https://elliottslaughter.com/
Contact: https://elliottslaughter.com/about/
[ my public key: https://keybase.io/slaughter; my proof: https://keybase.io/slaughter/sigs/pfzdOL11OUYKphzUo2UqlDmZz37F1iYPq7omhhQtgjM ]
- The solution I've found is to make using the API a hard error with an explicitly temporary and obnoxiously named workaround environment variable.
It's loud, there's an out if you need your code working right now, and when you finally act on the deprecation, if anyone complains, they don't really have a leg to stand on.
    WORKAROUND_URLLIB3_HEADER_DEPRECATION_THIS_IS_A_TEMPORARY_FIX_CHANGE_YOUR_CODE=1 python3 ...
Of course you can layer it with warnings as a first stage, but ultimately it's either this or remove the code outright (or never remove it and put up with whatever burden that imposes).
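For concreteness, a minimal sketch of that kind of gate in Python (the function names here are hypothetical; only the environment variable is the one from the command above):
    import os

    _WORKAROUND = "WORKAROUND_URLLIB3_HEADER_DEPRECATION_THIS_IS_A_TEMPORARY_FIX_CHANGE_YOUR_CODE"

    def _old_implementation(*args, **kwargs):
        ...  # whatever the deprecated code path used to do

    def deprecated_api(*args, **kwargs):
        # Hard error by default; the obnoxious variable is the only escape hatch.
        if not os.environ.get(_WORKAROUND):
            raise RuntimeError(
                "deprecated_api() has been removed; update your code, or set "
                f"{_WORKAROUND}=1 to temporarily restore the old behavior."
            )
        return _old_implementation(*args, **kwargs)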
- It depends on what you care about. If you're writing purely for yourself, then by all means, go ahead and do so.
I've found there's a balance to strike between listening to others and listening to yourself. Usually, if multiple people give you the same feedback, there is some underlying symptom they are correctly picking up on. But they may not have the correct diagnosis, or even be able to articulate the symptoms clearly. The real skill of an author/editor is in figuring out the true diagnosis and what to do about it.
In the communication example, this means rooting conflicts in the true personalities of the characters and/or their context, so that even if they sat down to have a deep chat, they still wouldn't agree. E.g., character A has an ulterior motive to see character B fail. Now you hint at that motive in a subtle way that telegraphs to readers that something is going on, without stopping the action for what would turn into a pedantic conversation. At least, that's what I'd do.
- It's because if you explain what's going on, you stop the action. And viewers/readers don't like that.
In fiction it's called an info dump. As an aspiring science fiction author, I've had virtually every beta reader tell me they don't like them. I want my fiction to make sense, but you have to be subtle about it. To avoid readers complaining, you have to figure out how to explain things to the reader without it being obvious that you're explaining things to the reader, or stopping the action to do it.
Movies are such a streamlined medium that usually this gets cut entirely. At least in books you can have appendices and such for readers who care.
- To me it's interesting that (a) most people die of old age, and (b) the leading causes of death are either essentially preventable (heart disease being highly lifestyle-related) or plausibly curable in the future (I certainly hope we'll see progress on cancer in my lifetime).
That was very much not the case historically; you can Google numbers yourself but the percentage of childhood deaths prior to modern medicine was truly shocking.
It also seems to indicate that, with some thought and care, a meaningful impact (both at individual and societal levels) is possible by altering our lifestyles to be healthier.
- Conda doesn't do lock files. If you look into it, the best you can do is freeze your entire environment. Aside from this being an entirely manual process, and thus having all the problems that manual processes bring, this comes with a few issues:
1. If you edit any dependency, you have to re-solve the environment from scratch. There is no way to update just one dependency.
2. Conda "lock" files are just the hashes of all the packages you happened to get, which means they're non-portable (see the example below). If you move from x86 to ARM, or Mac to Linux, or CPU to GPU, you have to throw everything out and re-solve.
Point (2) has an additional hidden cost: unless you go massively out of your way, all your platforms can end up on different versions. That's because solving each environment is a manual process, and it's unlikely you're taking the time to run through 6+ different platform combinations all at once. So if different users solve the environments on different days from the same human-readable environment file, there's no reason to expect them to be in sync. They'll slowly drift apart over time, and you'll start to see breakage as the versions diverge.
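For concreteness, this is roughly the shape of a frozen Conda environment (e.g., the output of something like `conda list --explicit`); the URLs and hashes below are abbreviated placeholders, but the point is that every line is pinned to one platform:
    # platform: linux-64
    @EXPLICIT
    https://conda.anaconda.org/conda-forge/linux-64/python-3.11.5-<build>.conda#<md5>
    https://conda.anaconda.org/conda-forge/linux-64/numpy-1.26.0-<build>.conda#<md5>
    ...
Nothing in a file like that survives a move to, say, osx-arm64; you throw it out and solve again.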
P.S. if you do want a "uv for Conda packages", see Pixi [1], which has a lot of the benefits of uv (e.g., lock files) but works out of the box with Conda's package ecosystem.
- If you're going to do this, why not generate Pandoc ASTs directly? You can do so from a number of languages, and they support (by definition) a superset of any given markup's features, with raw blocks for calling out directly to LaTeX for things you can only do there.
I assume the original question is asking about programmatic document generation, in which case working with a real AST is probably a productivity and reliability win as well.
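As a rough sketch (in Python, though any language with a JSON library works): build the JSON form of the Pandoc AST and pipe it to pandoc to render. Note that the "pandoc-api-version" below is an assumption and has to match your installed pandoc.
    import json
    import subprocess

    doc = {
        # Must match the pandoc-types version of your pandoc install.
        "pandoc-api-version": [1, 23, 1],
        "meta": {},
        "blocks": [
            {"t": "Header", "c": [1, ["intro", [], []], [{"t": "Str", "c": "Intro"}]]},
            {"t": "Para", "c": [{"t": "Str", "c": "Hello,"}, {"t": "Space"},
                                {"t": "Str", "c": "world."}]},
            # Escape hatch for LaTeX-only features:
            {"t": "RawBlock", "c": ["latex", "\\newpage"]},
        ],
    }

    # Pipe the JSON AST into pandoc and ask for LaTeX output.
    latex = subprocess.run(
        ["pandoc", "-f", "json", "-t", "latex"],
        input=json.dumps(doc), capture_output=True, text=True, check=True,
    ).stdout
    print(latex)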
- I'd try to find ways to stack everything else in your favor as much as possible. If X is difficult, you try to optimize Y, Z, etc. so that at least you're not coping with multiple adverse environmental factors at the same time.
For me personally, the best-case scenario seems to be intentionally scheduled, one-on-one interactions in "clean" environments (i.e., quiet, unscented, no smoke/incense, dressed casually for maximum comfort, etc.). The next best would be some sort of group setting with structured, intentional sharing (i.e., not just doing something together but explicitly organized for the purpose of sharing). It can be a bit hit or miss to find these, so it can take some iteration to figure out what actually works.
Otherwise, "escalating" (i.e., inviting someone into a deeper/more meaningful interaction) is a skill you can practice, but if you're dealing with the rest of it at the same time, you're basically playing with a handicap. So incrementalize your goals as much as possible, practice in small, regular intervals with sufficient breaks for recovery, and don't compare yourself to anyone else, no matter how tempting that might be.
Hope that helps, and feel free to contact me on Keybase (in profile) or email (run the Perl script on my website) if you want help brainstorming.
Disclaimer: not a therapist.
- Recently I was introduced to the distinction between anxiety and dread. Anxiety is, essentially, a form of fear. You fear a worst-case consequence that isn't actually that likely. If you put up with your anxiety and just go and do the thing (on average) you'll do just fine, or at least ok-ish. Over time your body learns that the anxious activity is ok and the anxiety is reduced.
Dread is different. Dread is the expectation of a bad situation. It's not a worst-case scenario, it's a typical scenario. If what you are experiencing is dread, then pushing yourself into that situation will confirm to your body that, yup, it really is as bad as you thought, and will amplify the dread rather than diminish it.
A classic example is that certain forms of neurodivergence create sensory overload in typical "social" environments. This is likely to result in dread rather than anxiety. Your body is literally telling you that this situation is problematic, and repeat exposure isn't going to improve anything.
In our modern culture the language of anxiety is widespread but the language of dread much less so, and I think that's unfortunate because a lot of advice centers around "just get over it", which works only if what you're experiencing is anxiety. Personally, learning about this gave me permission to do "social" activities on my own terms and stop worrying about what other people think "social" means; turns out the social anxiety I had was relatively minimal and what I was experiencing was mostly the dread from environments where social activities often occur.
- One thing I've always wondered is what fraction of c is realistically achievable with current technology? (Maybe with separate scenarios for manned/unmanned spacecraft.)
Like are we at 0.1% or 0.01% or more orders of magnitude off?
- Or any company in its first 5 years of operation. (Or any company, period, within the first 5 years of the law being introduced.)
It takes 5 years to fill the pipeline, so even if the steady state would be fine, getting to that state might be impossible.
- The difference is that the abstraction provided by compilers is much more robust. Not perfect: sometimes programmers legitimately need to drop into assembly to do various things. But those instances have been rare for decades and to a first approximation do not exist for the vast majority of enterprise code.
If AI gets to that level we will indeed have a sea change. But I think the current models, at least as far as I've seen, leave open the question of whether they'll ever get there.
- Async vs. non-async is the main example today. There are libraries that support one or the other, or sometimes one library will have two usage modes (effectively two different code bases) because you can't really mix them.
In the future who knows, because we don't know what features will get added to the language.
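To make the split concrete, here's how it shows up in Python's asyncio (the same "coloring" problem exists in other languages; this is just an illustration):
    import asyncio

    async def fetch_async() -> str:
        # "Async-colored": can only be awaited from other async code
        # or driven by an event loop.
        await asyncio.sleep(0.1)
        return "data"

    def fetch_sync() -> str:
        # Plain function: callable anywhere, but it can't await anything.
        return "data"

    def caller() -> None:
        fetch_sync()                 # fine
        # fetch_async()              # just creates a coroutine object, does nothing
        # await fetch_async()        # SyntaxError: 'await' outside async function
        asyncio.run(fetch_async())   # works, but has to spin up an event loop

    caller()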
- How do you not completely destroy your concentration when you do this though?
I normally build things bottom up so that I understand all the pieces intimately and when I get to the next level of abstraction up, I know exactly how to put them together to achieve what I want.
In my (admittedly limited) use of LLMs so far, I've found that they do a great job of writing code, but that code is often off in subtle ways. But if it's not something I'm already intimately familiar with, I basically need to rebuild the code from the ground up to get to the point where I understand it well enough so that I can see all those flaws.
At least with humans I have some basic level of trust, so that even if I don't understand the code at that level, I can scan it and see that it's reasonable. But every piece of LLM generated code I've seen to date hasn't been trustworthy once I put in the effort to really understand it.
- I'm not sure what the author intended, but one way to implement atomics at the microarchitectural level is via a load-linked/store-conditional pair of instructions, which often involves tracking the cache line for modification.
https://en.wikipedia.org/wiki/Load-link/store-conditional
It's not "24/7" but it is "watching" in some sense of the word. So not entirely unfair.
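As a toy illustration of the idea (a simulation in Python, not how real atomics are implemented; the version counter stands in for the hardware's monitor on the cache line):
    class CacheLine:
        def __init__(self, value=0):
            self.value = value
            self.version = 0  # bumped on every write; stands in for "watching" the line

        def load_link(self):
            return self.value, self.version

        def store_conditional(self, new_value, linked_version):
            # The store succeeds only if nobody wrote the line since load_link.
            if self.version != linked_version:
                return False
            self.value = new_value
            self.version += 1
            return True

    def atomic_increment(line):
        while True:  # retry loop, just like real LL/SC instruction sequences
            value, version = line.load_link()
            if line.store_conditional(value + 1, version):
                return

    line = CacheLine()
    atomic_increment(line)
    print(line.value)  # 1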
- There has been talk of new language frontends for C++:
Cpp2 (Herb Sutter's brainchild): https://hsutter.github.io/cppfront/
Carbon (from Google): https://github.com/carbon-language/carbon-lang
In principle those could enable a safe subset by default, which would (except when explicitly opted out) provide safety guarantees similar to Rust's, at least at the language level. It's still up to the community to design safe APIs around those features, even if the languages exist. Rust has a massive advantage here in that the community built the ecosystem with safety in mind from day 1, so it's not just the language that's safe: the APIs of various libraries are often designed in an abuse-resistant way. C++ is too much of a zoo to ever do that in a coherent way. And even if you wanted to, the "safe" variants are still in their infancy, so the foundations aren't there yet to build upon.
I don't know what chance Cpp2 or Carbon have, but I think you need something as radical as one of these options to ever stand a chance of meaningfully making C++ safer. Whether they'll take off (and before Rust eats the world) is anyone's guess.
- How do you even get 5.2 billion unique users when the total population on the internet is 5.5 billion?
https://www.statista.com/statistics/617136/digital-populatio...
- Signal is distributed as an app. Furthermore, the client is open source; you can see the repositories here:
I don't know the latest details about Android/iOS app signing, but presumably reproducible builds + sufficiently strong signing would make it secure enough for most users. Those who are truly paranoid can build it themselves (subject to their own device OS's requirements, which are hardly a problem unique to Signal).
In short, Signal's security should be as good as any mobile app can be, and can be even better if you're willing to put in legwork.
- Maybe not fully ergonomic yet, but this exists today (at least for max):
https://docs.rs/nonmax/latest/nonmax/
If you're really attached to it being min you'd have to copy that library.
Edit to add: we actually use these in one of my main Rust codes; they're useful, but I'm not sure they're so useful I'd want them built into the language.
- Confidence hijacks the human brain. Without direct, personal expertise or experience to the contrary, spending time around your hypothetical "friend who's very well-read and talkative but is also extremely confident and loves the sound of their own voice" is going to subconsciously influence your opinions, possibly without you even knowing.
It's easy to laugh and say, well I'm smart enough to defeat this. I know the trick. I'll just mentally discount this information so that I'm not unduly influenced by it. But I suspect you are over-indexing on fields where you are legitimately an expert—where your expertise gives you a good defense against this effect. Your expertise works as a filter because you can quickly discard bad information. In contrast, in any area where you're not an expert, you have to first hold the information in your head before you can evaluate it. The longer you do that, the higher the risk you integrate whatever information you're given before you can evaluate it for truthfulness.
But this all assumes a high degree of motivation and effort. Like the opening to this article says, all empirical evidence clearly points in the direction of people simply not trying when they don't need to.
Personally, I solve the problem in my friend circle by avoiding overconfident people and cultivating friendships among people who have a good understanding of their own certainty and degree of expertise, and the humility to admit when they don't know something. I think we need the same with these AIs, though as far as I understand getting the AI to correctly estimate its own certainty is still an open problem.
- There was some research on parsing C++ with GLR, but I don't think it ever made it into production compilers.
Other, more sane languages with unambiguous grammars may still choose to hand-write their parsers for all the reasons mentioned in the sibling comments. However, I would note that, even when using a parsing library, almost every compiler in existence will use its own AST, and not reuse the parse tree generated by the parser library. That's something you would only ever do in a compiler class.
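A minimal sketch of that split (the ParseNode shape is hypothetical; real libraries like lark, ANTLR, or tree-sitter each have their own concrete-tree types):
    from dataclasses import dataclass

    @dataclass
    class ParseNode:          # what a parser library hands back: a concrete parse tree
        kind: str
        children: list
        text: str = ""

    @dataclass
    class Num:                # the compiler's own AST
        value: int

    @dataclass
    class BinOp:
        op: str
        lhs: object
        rhs: object

    def lower(node):
        # Drop punctuation, fold literals, keep only what later passes need.
        if node.kind == "number":
            return Num(int(node.text))
        if node.kind == "binary_expr":
            lhs, op, rhs = node.children
            return BinOp(op.text, lower(lhs), lower(rhs))
        raise ValueError(f"unexpected node kind: {node.kind}")

    tree = ParseNode("binary_expr", [
        ParseNode("number", [], "1"),
        ParseNode("op", [], "+"),
        ParseNode("number", [], "2"),
    ])
    print(lower(tree))  # BinOp(op='+', lhs=Num(value=1), rhs=Num(value=2))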
Also, I wouldn't say that frontend/backend is an evolution of previous terminology; it's just that parsing is not considered an "interesting" problem by most of the community, so the focus has moved elsewhere (to everything from AST design through optimization and code generation).