- I wonder if anyone has used effect handlers for error logging. It sounds like a natural and modular way to handle this problem.
- It has many language bindings, including Python and JS, though the JS backend is not parallel because it uses wasm, and we had problems with mimalloc's memory usage when pthreads were enabled.
- At least one would not be enough. So how many branches are enough? And what about people with less money and time available?
- But this is not related. You still have to pay the APC.
- I thought this meant it was for category theory people.
Anyway, quite cute :)
- I remember people saying that Chromium is better at sandboxing than Firefox, and therefore more secure.
- If what they did is never revealed to anyone else, what is the problem here? It is not like we have no way to hide stuff without cryptography, and people are not advocating for the police to search every apartment once in a while to look for illegal stuff.
- Authorities cannot tap into your brain, cannot tap into physical face-to-face conversations, and people can plan out crimes using those means. It is not like there was no way to hide stuff before the birth of modern cryptography.
And who wants everything to be open and transparent? I am not aware of anyone who wants that.
- What I miss from VSCode is the remote functionality; can you do that with Emacs? For Neovim there is distant.nvim, but idk if it is mature enough, and the configuration seems a bit annoying...
- You can statically link.
- C/C++ dependency management is easy on Windows? Seriously? What software did you build from source there?
- I feel like this is the stuff the C-suite needs to justify their pay. If it is "boring browser development", it will look like they are doing nothing and are redundant, and then they cannot get bonuses and salary raises.
- The nice thing is that a fully stopped clock is accurate more often than a slightly deviated one.
- I am curious how the last algorithm is an order of magnitude faster than the sorting-based one. There is no benchmark data, and ideally there would be data for different mesh sizes, as size affects the timing a lot (cache vs. RAM).
I work on https://github.com/elalish/manifold which operates on triangular meshes, and one of the slowest operations we currently have is halfedge pairing; I am interested in making it faster. We already use a parallel merge sort for the stable sort; switching to a parallel radix sort, which works well on random distributions, did not help, and I think we are currently bandwidth-bound. If building an edge list for each vertex can improve cache locality and reduce bandwidth, that would be very interesting.
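For context, here is a minimal sketch of what I mean by sort-based halfedge pairing (not Manifold's actual code; `Halfedge` and `PairHalfedges` are made up for illustration). Each halfedge is keyed by its undirected vertex pair, the indices are stable-sorted, and adjacent entries are paired; this assumes a closed 2-manifold, so every key appears exactly twice.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical sketch of sort-based halfedge pairing. Opposite halfedges
// have swapped endpoints, so normalizing each key to (min, max) makes the
// two halves of an edge collide under the sort.
struct Halfedge { int from, to; };

std::vector<int> PairHalfedges(const std::vector<Halfedge>& he) {
    const int n = static_cast<int>(he.size());
    std::vector<int> order(n);
    for (int i = 0; i < n; ++i) order[i] = i;
    auto key = [&](int i) {
        int a = he[i].from, b = he[i].to;
        return std::make_pair(std::min(a, b), std::max(a, b));
    };
    // A parallel stable sort would go here; std::stable_sort for brevity.
    std::stable_sort(order.begin(), order.end(),
                     [&](int i, int j) { return key(i) < key(j); });
    std::vector<int> paired(n, -1);
    for (int i = 0; i + 1 < n; i += 2) {  // closed manifold: keys come in pairs
        paired[order[i]] = order[i + 1];
        paired[order[i + 1]] = order[i];
    }
    return paired;
}
```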
- Yeah, in that case I guess it is usually memory-bound? When it is memory-bound you don't really care that much about scheduling etc.
- They won't use it intentionally, but it is not like it can never happen. E.g., https://reproducible.nixos.org/nixos-iso-minimal-r13y/ is still only 96.59% reproducible.
- While I think the OP did not mean that the compilation process is nondeterministic, I would not be surprised if it actually is. A lot of algorithms and data structures rely on nondeterminism for performance or for security (by default). It is too easy to introduce nondeterminism accidentally, and it is tempting to use it to speed up algorithms. Also, if the compiler relies on floating point, results on different machines and environments may differ (depending on the libm and hardware implementation), which is, in some sense, nondeterministic.
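As a concrete illustration of how easily this sneaks in (a hypothetical example of mine, not from the thread): any pass that iterates over a container keyed by pointers emits results in an order that depends on heap addresses, which can vary across runs under ASLR.

```cpp
#include <cstdio>
#include <map>
#include <string>

// Hypothetical illustration: a pass iterating over a map keyed by node
// pointers. std::map orders keys by pointer value, i.e. by heap address,
// and addresses can vary across runs under ASLR, so the output order can
// differ between runs. (Nodes are deliberately leaked; this is a demo.)
struct Node { std::string name; };

int main() {
    std::map<Node*, int> ids;
    for (int i = 0; i < 4; ++i)
        ids[new Node{"n" + std::to_string(i)}] = i;
    for (const auto& [node, id] : ids)  // order follows allocation addresses
        std::printf("%s -> %d\n", node->name.c_str(), id);
}
```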
- While it is decidable, people essentially never produce optimal programs, even for the hot path. It is simply intractable and too slow to do right now.
For register allocation and instruction selection there is hope, because the problems are fixed-parameter tractable (FPT): there are algorithms that solve them optimally in polynomial time for a fixed parameter, albeit with a constant factor large enough to be impractical for compilers as of today. For instruction scheduling, it is just too hard. If you read the literature on scheduling algorithms, it is NP-hard even for apparently simple instances, e.g. two parallel identical machines with no preemption, bounding the maximum completion time (makespan) (https://www2.informatik.uni-osnabrueck.de/knust/class/), while actual microarchitectures are much more complicated than this... (a toy instance of this scheduling problem is sketched after this comment)
Needless to say, these are already the simpler problems. The longer the program, or the more profiling data you can optimize against, the more tricks you can throw at it, and most of them are NP-hard to apply optimally.
Being NP-hard doesn't imply that you can't obtain the optimal result, but the compilers I know of do not attempt it, because most users are not willing to wait for days for a compilation to complete. Ideally, one would build something that runs on clusters of CPUs or GPUs to do this, and people who have such clusters would typically be willing to use it, since they want to optimize the programs they will later run on those clusters. However, to my knowledge, no one is working on this at the moment.
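To make that "simple instance" concrete, here is a toy sketch (my own example, not from the linked site): P2||Cmax, scheduling independent jobs on two identical machines with no preemption to minimize the makespan, already contains PARTITION, so the brute force below is exponential in the number of jobs.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy instance of P2||Cmax: try every assignment of jobs to machine 1 vs.
// machine 2 and keep the assignment minimizing the makespan (the larger of
// the two machine loads). Exponential in the job count, which is exactly
// why nobody does this inside a compiler's scheduler.
int main() {
    std::vector<int> jobs = {3, 1, 4, 1, 5, 9, 2, 6};
    const int n = static_cast<int>(jobs.size());
    int best = 1 << 30;
    for (int mask = 0; mask < (1 << n); ++mask) {  // bit i: job i on machine 1
        int m1 = 0, m2 = 0;
        for (int i = 0; i < n; ++i)
            (mask >> i & 1 ? m1 : m2) += jobs[i];
        best = std::min(best, std::max(m1, m2));
    }
    std::printf("optimal makespan: %d\n", best);   // 16 for this instance
}
```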
- Will they really get paid less? The feeling I have now is that people are paid a lot not because of what they do, but because of the potential damage they can do in case they fuck up, e.g. CEOs, lawyers, etc. Moving some of the work to AI doesn't reduce that risk, so in my mental model they should get the same pay.
Plus, C-level executives typically don't lower their own pay, and IMO investors apparently don't care that much about it, so I can't see a reason why their pay would be reduced (significantly).
- It is quite annoying when you do parallelization, and idk if that many people care about bitwise reproducibility, especially when it requires sacrificing a bit of performance.
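A minimal sketch of the usual culprit (my own example, not from the thread): floating-point addition is not associative, so a parallel reduction that regroups a sum can change the result's bit pattern even though every partial sum is computed correctly.

```cpp
#include <cstdio>
#include <vector>

// Floating-point addition is not associative, so regrouping a reduction
// (as a parallel scheduler would) can change the result.
int main() {
    std::vector<double> xs = {1e17, 1.0, -1e17, 1.0};
    // Sequential left-to-right: 1e17 + 1.0 rounds back to 1e17 (the spacing
    // between doubles near 1e17 is 16), so the first 1.0 is lost.
    double seq = ((xs[0] + xs[1]) + xs[2]) + xs[3];   // = 1.0
    // A 2-way parallel reduction might group the terms like this instead:
    // the large terms cancel exactly and both 1.0s survive.
    double par = (xs[0] + xs[2]) + (xs[1] + xs[3]);   // = 2.0
    std::printf("sequential: %g\nregrouped:  %g\n", seq, par);
}
```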