- I use firefox for three things.
1. The ad blocker.
2. The sense of superiority over the normies (because of the ad blocker).
3. Theming.
If ad blockers are killed, that removes points 1 and 2. I am pretty sure I can do the same theming in Chrome (I have simple tastes), so that makes 3 a non-factor. And combined with the companies that refuse to make their sites work with Firefox, there is no reason not to use Chrome. Privacy is a non-factor since my identity is already wholly linked to my Google account; I would have to first switch off of there, and I am not putting in the effort for that.
- I am not the author haha. Just someone who found and really liked that library.
- And for the opposite, where you keep your main pipeline in shell but want to use python for some parts of it, there is pypyp.
https://pypi.org/project/pypyp/
It takes care of the input and output boilerplate so you can focus on the actual code that you wanted python for.
```
> seq 1 5 | pyp 'sum(map(int, lines))'
> ls | pyp 'Path(x).suffix'
```
- The various projects that say something is deprecated but then don't give a removal timeline, or keep delaying the removal (or even explicitly say it won't be removed and will just remain deprecated), are the cause of this problem.
IMO, any deprecation should go through the following steps:
1. Decide that you want to deprecate the thing. This also includes steps on how to migrate away from the thing, what to use instead, and how to keep the existing behaviour if needed. This step would also decide on the overall timeline, starting with the decision and ending with the removal.
2. Make the code give out big warnings for the deprecation. If there's a standard build system, it should have support for deprecation warnings.
3. Break the build in an easy-to-fix way. If there is too much red tape to take one of the recommended migration steps, the old API is still there, just behind a `deprecated` flag or path. Importantly, this means that at this step, 'fixing' the build doesn't require any change in dependencies or any (big) change in code. It should be a one-line change to make it work.
4. Remove the deprecated thing. This step is NOT optional! Actually remove it. The compiler / library / etc. can still know about the item so it can give a good error, but the implementation is deleted. Fixing the build now requires some custom code or an extra dependency. It is no longer a trivial fix (at least not as trivial as the previous step).
Honestly, the build system should provide the tools for this: being able to say that some item is deprecated and should warn, or that it is deprecated and should only be accessible if a flag is set, or that it was removed and the error message should say "function foo() was removed in v1.4.5. Refer to the following link: ..." instead of just "function foo() not found" (see the sketch below).
If the build system has the option to treat warnings as errors, it should also have the option to exempt specific warnings from that treatment (so that package updates can still happen while CI keeps surfacing the warning). The warning itself shouldn't be silenced.
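Rust's `#[deprecated]` attribute is an existing example of tooling for steps 2 and 3; a minimal sketch, with made-up function names:
```
// Step 2: the compiler itself knows the item is deprecated and warns at
// every call site (or fails the build under `#![deny(deprecated)]`).
#[deprecated(since = "1.2.0", note = "use `frobnicate_v2` instead")]
pub fn frobnicate() {}

pub fn frobnicate_v2() {}

fn main() {
    frobnicate_v2();
    legacy_path();
}

// Step 3 in miniature: an explicit one-line opt-in keeps the old call
// compiling while the warning stays visible everywhere else.
#[allow(deprecated)]
fn legacy_path() {
    frobnicate();
}
```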
- Why not use apt?
- Rust's design eliminates data races entirely, at least in safe code. It also makes it much easier to write thread-safe code from the start. Race conditions are still possible, but they are generally less of a problem than in C++ (at least that's my impression).
Nothing is preventing you from writing correct C++ code. Rust is strictly less powerful (in terms of possible programs) than C++. The problem with C++ is that the easiest way to do anything is often the wrong way to do it. You might not even realize you are sharing a variable across threads and that it needs to be atomic.
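A minimal sketch of that (the counter is made up): safe Rust rejects the accidental sharing at compile time and forces an explicitly synchronised type instead:
```
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

fn main() {
    // The accidental version does not compile: two closures cannot both
    // mutably borrow `count`, so the data race is caught at build time.
    //
    //   let mut count = 0;
    //   thread::scope(|s| {
    //       s.spawn(|| count += 1); // error: cannot borrow `count` as
    //       s.spawn(|| count += 1); //        mutable more than once
    //   });

    // The fix has to name the sharing explicitly: an atomic (or a Mutex).
    let count = AtomicUsize::new(0);
    thread::scope(|s| {
        s.spawn(|| { count.fetch_add(1, Ordering::Relaxed); });
        s.spawn(|| { count.fetch_add(1, Ordering::Relaxed); });
    });
    assert_eq!(count.load(Ordering::Relaxed), 2);
}
```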
- Why not have a transient, client-generated ID for idempotency but a server-generated ID for long-term reference and storage?
- Never use someone else's synthetic key as your primary key. If you want ordered keys, then even if the API is handing out sequential integers, you should still use your own sequential IDs.
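A sketch of what that can look like as a record type; every name here is hypothetical:
```
// The upstream API's key is stored and indexed for lookups, but our own
// sequential `id` is the primary key everything else references.
struct OrderRecord {
    id: u64,                   // our own sequential primary key
    upstream_order_id: String, // someone else's synthetic key: lookups only
}

fn main() {
    let order = OrderRecord {
        id: 1, // generated by us, never by the upstream API
        upstream_order_id: "ord_8f3k2".to_string(),
    };
    println!("order {} maps to upstream {}", order.id, order.upstream_order_id);
}
```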
- Off-topic: for this kind of pointer casting, shouldn't you be using a union? I believe this is undefined behaviour as written.
- Pure js without typescript also has "types". Typescript doesn't give you nominal types either; its types are only structural. So when you say that you "know it's already been processed", you just have a mental type of "Parsed" vs "Raw". With a type system, it's like having a partner dedicated to tracking that. But without one, that doesn't mean you aren't doing any parsing or type tracking of your own.
- Why does "true" parsing have to error out on the very first problem? It is more than possible (though maybe not easy) to keep parsing and collecting errors as they appear. Zod, as the given example in the post, does it.
- The difference, in my opinion, is that you received the cli args in the form
```
some_cli <some args> --some-option --no-some-option
```
Before parsing, the argument array contains both the flag that enables the option and the flag that disables it. Validation would either throw an error or accept the array as meaning enabled or disabled. But importantly, it wouldn't change the arguments. If the assumption is that the last flag overwrites anything before it, then the cli command is valid with the option disabled.
And now, correct behaviour relies on all the code that uses that option always making the same assumption.
Parsing, on the other hand, would create a new config where `option` is an enum: enabled, disabled, or not given. No confusion about multiple flags or anything. It provides a single view, for the rest of the program, of what the input config was (see the sketch below).
Whether that parsing is done by a third-party library or first-party code, declaratively or imperatively, is beside the point.
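A minimal sketch of that enum view, with made-up flag and type names:
```
// The tri-state that parsing produces; downstream code only ever sees this.
#[derive(Debug, PartialEq)]
enum SomeOption {
    Enabled,
    Disabled,
    NotGiven,
}

fn parse_some_option(args: &[&str]) -> SomeOption {
    // "last flag wins" is decided exactly once, right here, instead of
    // being re-assumed by every piece of code that reads the option.
    let mut state = SomeOption::NotGiven;
    for arg in args {
        match *arg {
            "--some-option" => state = SomeOption::Enabled,
            "--no-some-option" => state = SomeOption::Disabled,
            _ => {}
        }
    }
    state
}

fn main() {
    let args = ["--some-option", "--no-some-option"];
    assert_eq!(parse_some_option(&args), SomeOption::Disabled);
}
```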
- Fwiw, the two feed into each other. I read this same thing somewhere else before and have since tried to not take my phone with me into the bathroom. I started doing that because I would spend a lot of time in there. But once I have my phone, if I start reading something a bit too long, I can easily spend 20 minutes in there before realising it.
- I want an option to give fake permissions. A lot of apps are pretty much necessary (due to network effects). I don't want to give them my contact or location data, but they also refuse to work without it, even though they don't need it for what I am doing. So just let me provide fake data instead. As far as the app is concerned, it has the permissions it so wanted.
- For the current category of LLM-based AI, "AI optimised" means "old and popular". Even if you add a layer that has much more detail but may be a lot more verbose or whatever, that layer would not be "AI optimised".
- Instead of reference counting, consider having two types: an "owner" type which actually contains the resource and whose destructor releases it, and "lender" types which hold a reference to the resource (a pointer, or just a logical one: e.g., an fd can be copied into the lender but only closed by the owner) and release nothing on destruction.
Same thing as what Rust does with `String` and `str`.
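A minimal sketch of the split, using a made-up integer handle as the resource (Rust's standard `OwnedFd`/`BorrowedFd` pair is this same pattern for file descriptors):
```
// `Owner` releases on drop; `Lender` is a plain copy of the handle and
// never releases anything.
struct Owner {
    handle: i32, // stand-in for an fd or similar raw resource
}

impl Owner {
    fn lend(&self) -> Lender {
        Lender { handle: self.handle }
    }
}

impl Drop for Owner {
    fn drop(&mut self) {
        // only the owner closes the resource, exactly once
        println!("closing handle {}", self.handle);
    }
}

#[derive(Clone, Copy)]
struct Lender {
    handle: i32, // no Drop impl, so copies are free to create and discard
}

fn main() {
    let file = Owner { handle: 3 };
    let a = file.lend();
    let b = a; // lenders copy freely; no counting anywhere
    println!("using {} and {}", a.handle, b.handle);
} // `file` drops here and the handle is closed, once
```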
- FWIW, team screen resolution is pretty much already there when the company provides the laptops (+ screens).
- The script example in TFA is just a starting point. I believe you would still manually go through all the columns it finds and decide which ones are actually supposed to be nullable and which ones don't need to be. As the article said, nullable fields that don't contain null could be a sign of incomplete migrations and such.
- What they are saying is that the field is always present in the domain model, but we don't have the information to backfill it. For example, say you have a customers table. Originally, it just stored their name and internal ID. But now you are adding their government ID as well, except that you already have thousands of customers whose government ID you don't have. So you either make the column nullable and slowly backfill it over time, or you find some default value which isn't null but which the code understands to still mean empty. And again, you slowly backfill over time.
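A sketch of what that looks like in the model, with hypothetical names:
```
// Domain-wise, every customer has a government ID; storage-wise, the
// historical rows predate the column, so it stays optional until the
// backfill completes.
struct Customer {
    id: u64,
    name: String,
    government_id: Option<String>, // None means "not backfilled yet",
                                   // not "this customer has no ID"
}

fn main() {
    let customers = [
        Customer { id: 1, name: "Ada".into(), government_id: None },
        Customer { id: 2, name: "Grace".into(), government_id: Some("X-1234".into()) },
    ];
    for c in &customers {
        match &c.government_id {
            Some(gid) => println!("customer {} ({}): id on file: {gid}", c.id, c.name),
            None => println!("customer {} ({}): pending backfill", c.id, c.name),
        }
    }
}
```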
- If you want the builtin interpolation to become a noop in the face of runtime log disabling, then the logging library has to be a builtin too.