- Maybe it's better now in some distros. I'm not sure about other distros, but I don't like Ubuntu's Snap packages. Snap packages typically start slower, use more RAM, require sudo privileges to install, and run in an isolated environment only on systems with AppArmor. Snap also tends to slow things down somewhat at boot and shutdown. People report issues like theming mismatches and permissions/file-access friction; Firefox theming complaints are a common example. It's almost like running a Docker container for each application. Flatpaks seem slightly better, but they're still a band-aid. It's just that nobody is going to fix the compatibility problems in Linux.
- This might offend some people, but even Linus Torvalds thinks that ABI compatibility in Linux distros is not good enough, and that this is one of the main reasons Linux is not popular on the desktop. https://www.youtube.com/watch?v=5PmHRSeA2c8&t=283s
- IMHO this also means custom tags aren't very useful beyond custom HTML components (and JS is required for those anyway). The standard tags provide good semantics, SEO and accessibility out of the box.
- Yes, I know about them. But I don't think they are very useful outside custom HTML components (where JS is required anyway). I'd rather use the standard semantic elements as much as possible, since they provide good SEO / accessibility out of the box.
- There is an :nth-child selector with a filter; for example, you can write :nth-child(3 of .red)
https://waspdev.com/articles/2025-06-29/css-features-web-dev...
- Yes, HTML & CSS alone won't replace JS. Of course, for complicated form validation HTML is not sufficient. But IMHO it's very important to provide as much basic functionality in HTML / CSS as is possible / reasonable. Moving functionality to HTML / CSS can potentially improve SEO.
As for positioning, there is an experimental feature, @position-try. Here I made a small demo where it handles overflows:
https://waspdev.com/articles/2025-06-29/css-features-web-dev...
But yeah, that's kind of limited if you need nice animations or some other complicated thing. It's fun, though.
- Maybe. But I remember one game developer telling me that they face an even more challenging problem: synchronization between players in real-time multiplayer games. Just imagine different users having significantly different network latencies in a multiplayer shooter where a couple of milliseconds can be decisive. Someone makes a headshot when the game state is already outdated. If you think about this, you can appreciate how complicated it is just to make the gameplay not awful...
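To make the problem concrete, here's a minimal sketch (my own illustration; all names are hypothetical) of the usual "lag compensation" workaround: the server keeps a short history of world snapshots and rewinds to the state the shooter actually saw before validating the hit:

    #include <cstdint>
    #include <deque>

    // Hypothetical types, for illustration only.
    struct WorldState { uint64_t tick; /* player positions at this tick */ };
    struct Shot { uint64_t client_tick; int target_id; /* aim data */ };

    class LagCompensator {
        std::deque<WorldState> history_;   // last ~1 second of snapshots
    public:
        void record(const WorldState& s) {
            history_.push_back(s);
            if (history_.size() > 128)
                history_.pop_front();      // bound the history
        }

        // Validate a shot against the snapshot the shooter was actually
        // looking at, not against the current (newer) authoritative state.
        bool validate(const Shot& shot) const {
            for (const WorldState& s : history_)
                if (s.tick == shot.client_tick)
                    return hit_test(s, shot);   // rewind + re-run the hit test
            return false;   // snapshot too old and already dropped: reject
        }

    private:
        static bool hit_test(const WorldState&, const Shot&) {
            return true;   // stub: real code would redo the ray/hitbox check
        }
    };

And even this only trades one problem for another: a low-latency player can now be "shot around a corner" by a high-latency opponent.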
- When I visit such pages, my impression is that someone wants to break my browser.
- In this case, yes. But on the other hand, Red Hat won't give you the RHEL source code unless you have the binaries. The GPLv2 license requires you to provide the source code only if you provide the compiled binaries. In theory, Meta could apply its own proprietary patches to Linux and not publish the source code, as long as it runs that patched Linux only on its own servers.
- > I don't perceive that as an ordering issue. "Pure mathematics" has multiple division definitions; what we see here is the definition you use in class 1: integer division. The issue here is not associativity, it is that the inverse of an integer division is NOT integer multiplication; the inverse of division is the sum of multiplication and the modulo. Integer division is an information-destroying operation.
Agreed, I went too far with integer division. But a similar problem exists for floats as well. In abstract mathematics the order of some operations between real numbers doesn't matter, but since CPU floats have limited size and accuracy, it does. This is why, when calculating a decreasing convergent series, you'd better start from the smallest terms: accuracy is lost during float normalization when adding a tiny term to an already large accumulated sum. A compiler is unlikely to do any optimization here, and people should be aware of this. Compilers can't assume the intention of your code, so they make sure the program's behavior isn't affected by the optimizations.
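To illustrate (a minimal C++ sketch of my own; the series 1/n^2 is just an example), summing the same convergent series in float gives a noticeably different result depending on the order:

    #include <cstdio>

    // Sum 1/n^2 for n = 1..N in float, in two different orders.
    // Mathematically the order is irrelevant; in float it isn't,
    // because tiny terms added to an already large accumulated sum
    // are rounded away during normalization.
    int main() {
        const int N = 1000000;

        float largest_first = 0.0f;        // 1/1 + 1/4 + 1/9 + ...
        for (int n = 1; n <= N; ++n)
            largest_first += 1.0f / ((float)n * (float)n);

        float smallest_first = 0.0f;       // ... + 1/9 + 1/4 + 1/1
        for (int n = N; n >= 1; --n)
            smallest_first += 1.0f / ((float)n * (float)n);

        // The smallest-first sum is much closer to pi^2/6 ~= 1.6449341.
        printf("largest first:  %.7f\n", largest_first);
        printf("smallest first: %.7f\n", smallest_first);
    }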
> Yes, this is because optimizing compilers are not optimizers in the mathematical sense, but heuristics and sets of folk wisdoms. This doesn't make them any less impressive.
I'm not implying that it's not impressive; I'm implying that compilers still aren't magic wands, and you should still optimize your algorithms (to a reasonable degree). Just let the compiler do the micro-optimizations (all the register-allocation golf, instruction reordering, caching, the discussed division trick, etc.). IMHO the suboptimal output in this particular case was somewhat expected, because it's a "niche" case, although an obvious one. I'm not blaming the compiler people. Yes, someone could add an optimization rule for my case, but as I said, it's quite rare, and it's probably not worth adding optimization rules for such cases and making the optimizer more bloated and complicated.
- > That's only the problem of floats, with ints this issue doesn't exist.
With ints the results can be dramatically different (often even worse than with floats), even though in pure mathematics the order doesn't matter:

    1 * 2 * 3 * 4 / 8  --> 3
    3 * 4 / 8 * 1 * 2  --> 2

This is a trivial example, but it shows why it's extremely hard for compilers to optimize expressions and why they usually leave this task to humans. But x % 2 == 0 && x % 3 == 0 isn't such a case: swapping the operands of && has no side effects, nor does swapping the operands of each ==.
> Are you sure, that dividing by 6 is actually faster
Compilers usually transform divisions into multiplications when the denominator is a constant.
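To illustrate (a sketch of my own, not from the thread), here is roughly what that transformation looks like for x / 6 on unsigned 32-bit ints: a multiply by a precomputed "magic" reciprocal plus a shift, which is much cheaper than a hardware divide:

    #include <cassert>
    #include <cstdint>

    // x / 6 rewritten as a multiply + shift, the way compilers do it for
    // constant denominators. 0xAAAAAAAB == ceil(2^34 / 6); the 64-bit
    // product shifted right by 34 gives the exact quotient for any uint32_t.
    uint32_t div6(uint32_t x) {
        return (uint32_t)(((uint64_t)x * 0xAAAAAAABu) >> 34);
    }

    int main() {
        for (uint32_t x = 0; x < 100000000u; ++x)
            assert(div6(x) == x / 6);   // spot-check against real division
    }

(Signed division needs an extra fixup for negative values, which is why the signed versions the compiler emits look a bit more involved.)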
I wrote another example in another comment, but I'll write it again.
I also tried this:
is_divisible_by_15 still has a branch, while is_divisible_by_15_optimal does not:

    bool is_divisible_by_15(int x) {
        return x % 3 == 0 && x % 5 == 0;
    }

    bool is_divisible_by_15_optimal(int x) {
        return x % 15 == 0;
    }

My point is that the compiler still doesn't notice that the 2 functions are equivalent. Even when choosing 3 and 5 (to eliminate the questionable bit-check trick for 2), the 1st function appears less optimal (more code + a branch):

    is_divisible_by_15(int):
            imul    eax, edi, -1431655765
            add     eax, 715827882
            cmp     eax, 1431655764
            jbe     .LBB0_2
            xor     eax, eax
            ret
    .LBB0_2:
            imul    eax, edi, -858993459
            add     eax, 429496729
            cmp     eax, 858993459
            setb    al
            ret
    is_divisible_by_15_optimal(int):
            imul    eax, edi, -286331153
            add     eax, 143165576
            cmp     eax, 286331153
            setb    al
            ret

- LOL, that was good.
- We are definitely better at survival and safety. In modern societies we are less likely to starve or die in infancy / childhood, we have longer life expectancy, etc.
But when we compare by other metrics, such as mental and physical health, it becomes more complicated. The problem is that our brains and bodies aren't well adapted to the modern world. In the past there were stresses (predators, hunger, conflict), but they were more acute: a big spike of stress, after which you usually had a lot of time to recover. For example, a predator appears, huge spike in stress, run/fight, and either you die or it's over. But afterwards (if you survived) you usually had a lot of rest. Also, you more or less directly saw the results of your actions: if you hunted, you ate; if you built a shelter, you stayed dry; etc.
Meanwhile, modern people tend to have chronic low-level stress caused by a complicated and fast-paced society: money worries, the grind, bureaucracy, deadlines, school / college / university, burnout, job insecurity, notifications, news doomscrolling. Our stress systems are constantly activated, which is devastating for long-term mental health. It's no wonder that we have higher rates of depression, anxiety and suicidality. Today's stress is more akin to death by a thousand small cuts. The same goes for our physical health.
I'm not claiming hunter-gatherers' lives weren't challenging. There were a lot of risks, physical hardship, famines, etc. But evolutionarily speaking, our bodies / minds were better equipped to deal with those types of stresses. Here is a good video that talks about this: https://www.youtube.com/watch?v=Mo1A45ShcMo
- I think what has definitely improved is survival. We are less likely to starve or die in infancy / childhood, we have longer life expectancy, etc. In the past there were also stresses, but I think the stresses were different then. They were less chronic and more occasional (although probably more intense). However, after an acute stress you had a lot of time to recover. Evolutionarily speaking, our brains have been adapted for that. It was necessary for our survival.
However, nowadays the stresses are different: they are more chronic / frequent, and you have less time to recover from them. This is partially the result of our more complex and fast-paced society / economy. Our brains are not well adapted to modern work / educational environments and to the stresses associated with them, even though those stresses are usually milder in intensity. Today's stress is more like death by a thousand small cuts. Nowadays people have more anxiety, depression and suicidality. Here is a good video that talks about modern stress: https://www.youtube.com/watch?v=Mo1A45ShcMo
- Yeah, this one as well:
Mathematically x % 2 == 0 && x % 3 == 0 is exactly the same as x % 6 == 0 for all C/C++ int values, but the compiler doesn't see them as identical, and produces less optimal code for is_divisible_by_6 than for is_divisible_by_6_optimal:

    bool is_divisible_by_6(int x) {
        return x % 2 == 0 && x % 3 == 0;
    }

    bool is_divisible_by_6_optimal(int x) {
        return x % 6 == 0;
    }

- The compiler didn't recognize that x % 2 == 0 && x % 3 == 0 is exactly the same as x % 6 == 0 for all C/C++ int values. In theory a compiler could detect that and generate identical code for both functions, but it isn't done, because this case is "niche" despite being trivial. My point is not to over-rely on the optimizer for math expressions and algorithms.
- So you claim that the compiler "knows about this but doesn't optimize because of some safety measures"? As far as I remember, compilers don't optimize math expressions / brackets, probably because the order of operations might affect the precision of ints/floats, and also because of the complexity.
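For floats specifically, even plain addition isn't associative, so rebracketing isn't safe (a tiny C++ illustration of my own):

    #include <cstdio>

    int main() {
        // A compiler can't freely rewrite (a + b) + c into a + (b + c):
        double a = 0.1, b = 0.2, c = 0.3;
        printf("%.17g\n", (a + b) + c);   // 0.60000000000000009
        printf("%.17g\n", a + (b + c));   // 0.59999999999999998
    }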
But my example is trivial (x % 2 == 0 && x % 3 == 0 is exactly the same as x % 6 == 0 for all C/C++ int), yet the compiler produced different outputs (and most likely is_divisible_by_6 is slower). Also, what null (you mean 0?) checks are you talking about? The denominator is not null/0. Regardless, my point about not over-relying on compiler optimization (especially for macro-level algorithms (big-O) and math expressions) remains valid.
This is not a big problem if it's hard/unlikely enough to write code that accidentally relies on raw syscalls. At least MS's dev tooling doesn't provide an easy way to bypass the standard DLLs.
> makes me wonder how exactly Windows containers work
I guess containers make their syscalls through the standard Windows DLLs, like any regular userspace application. If it's a Linux container on Windows, it probably goes through the WSL syscalls, which, I guess, are stable.
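For what it's worth, a tiny sketch of that layering (my own illustration): a regular program calls the documented Win32 API, and it's kernel32/ntdll that end up issuing the actual (unstable-numbered) syscall:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // CreateFileW is the stable, documented entry point (kernel32.dll).
        // Internally it ends up in NtCreateFile in ntdll.dll, which executes
        // the raw syscall instruction with whatever number this particular
        // kernel build assigned -- the part Microsoft is free to change.
        HANDLE h = CreateFileW(L"test.txt", GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
    }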