@mastodon.social/@kornel
- In the sidebar of the Amazon website (which may vary by device/locale), I get Prime, Echo, Alexa, Fire TV/Tablets, Kindle, and Audible listed before all other product categories. The special treatment is so explicit that I didn't think anybody would even doubt it, but rather reply "duuuh, of course Amazon.com is for selling Amazon's own stuff".
I'm a follower of Cory Doctorow's anti-enshittification ideology (https://www.youtube.com/watch?v=FwkaS389W-g). Amazon is well-known for giving preferential treatment to its own products, while squeezing other sellers to pay for placement (Amazon ads).
If you want something more data-driven, see "Self-Preferencing at Amazon: Evidence from Search Rankings" (DOI 10.1257/pandp.20231068), though that one is about everyday products. I'd expect Roombas to get the more blatant promos that Kindle, Fire, and Ring products get. For example, if I search for "doorbell" on Amazon, the very first thing I get is a huge promo for Blink products (an Amazon company), then four results from random brands nobody's heard of, and then another huge promo for Ring (another Amazon brand).
- There's also a pure-Rust implementation of a syntax highlighter, which uses TextMate/SublimeText grammars: https://lib.rs/syntect
- Maybe instead of ad-hoc fear-driven interventions in the market when Chinese companies do too well (TikTok), the US should have some general data protection laws, and not allow making surveillance devices that are locked down and serve their corporate overlords?
Amazon isn't on your side. They would have sold this access to China if they could make a buck on it.
- Amazon wouldn't have kept the manufacturer alive by making Roombas better, but by making it more expensive for other manufacturers to sell their vacuums through Amazon.
- There's a hybrid approach of C -> WASM -> C compilation, which ends up controlling every OS interaction and sandboxing memory access like WASM, while technically remaining C code.
- There's a need for some portable and composable way to do sandboxing.
Library authors can't configure seccomp themselves, because the allowlist must be coordinated with everything else in the whole process, and there's no established convention for negotiating that.
Seccomp has its own pain points, like being sensitive to libc implementation details and kernel versions/architectures (it's hard to know what syscalls you really need). It can't filter by inputs behind pointers, most notably can't look at any file paths, which is very limiting and needs even more out-of-process setup.
This makes seccomp sandboxing something you add yourself to your application, for your specific deployment environment, not something that's a language built-in or an ecosystem-wide feature.
- If a codebase is being maintained and extended, it's not all code with 20 years of testing.
Every change you make could be violating some assumption made elsewhere, maybe even 20 years ago, and subtly break code at a distance. C's type system doesn't carry much information, and is hostile to static analysis, which makes changes in large codebases difficult, laborious, and risky.
Rust is a linter and static analyzer cranked up to maximum. The whole language has been designed around having the information necessary for static analysis easily available and reliable. Rust is built around disallowing or containing coding patterns that create dead ends for static analysis. C never had this focus, so even trivial checks devolve into whole-program analysis and quickly hit undecidability (e.g. in Rust, whenever you have a `&mut` reference, you know for sure that it's valid, non-null, initialized, that no other code anywhere can mutate it at the same time, and that no other thread will even look at it. In C, when you have a pointer to an object, eh, good luck!)
- It can be done in 100% safe code as far as Rust is concerned (if you use `dyn Fn` type instead of c_void).
The only unsafe here is to demonstrate it works with C/C++ FFI (where void* userdata is actually not type safe)
- Many of these mistakes weren't even made by any committee, but were stuff shipped in a rush by Netscape or Microsoft to win the browser wars.
There was some (academic) research behind the early CSS concept, but the original vision for it didn't pan out ("cascading" was meant to blend the style preferences of users, browsers, and page authors, but all we got is selector-specificity footguns).
Netscape was planning to release their own imperative styling language, and ended up shipping a buggy CSS hackjob instead.
Once IE was dominant, Microsoft didn't think they had to listen to anybody, so for a while the W3C was writing CSS specs that nobody implemented. It's hard to do user research when nothing works and 90% of CSS devs' work is fighting browser bugs.
- They are wrong, and didn't get the point of separating semantics and presentation.
- There's table-layout:fixed, which makes rendering of large tables much faster.
I'd argue that if you have so many rows that the DOM can't handle them, humans won't either. Then you need search, filtering, and data exports, not JS attaching a faked scrollbar to millions of rows.
- Yeah, Rust closures that capture data are fat pointers { fn*, data* }, so you need an awkward dance to make them thin pointers for C.
It requires a userdata arg for the C function, since there's no allocation or executable-stack magic to give a unique function pointer to each data instance. OTOH it's zero-cost. The generic make_trampoline inlines code of the closure, so there's no extra indirection.

```rust
use std::ffi::c_void;

fn make_trampoline<C: FnMut()>(closure: &mut &mut C) -> (unsafe fn(*mut c_void), *mut c_void) {
    let fnptr = |userdata: *mut c_void| {
        let closure: *mut &mut C = userdata.cast();
        (unsafe { &mut *closure })()
    };
    (fnptr, closure as *mut _ as *mut c_void)
}

fn main() {
    let mut state = 1;
    let mut fat_closure = || state += 1;
    let (fnptr, userdata) = make_trampoline(&mut &mut fat_closure);
    unsafe { fnptr(userdata); }
    assert_eq!(state, 2);
}
```

- I've had a similar experience when the compiler immediately found unsynchronized state deep inside a 3rd-party library I'd been using. It was a 5-minute fix for what otherwise could have been mysterious, unreproducible data corruption.
These days even mobile phones have multicore CPUs, so it's getting hard to find excuses for single-threaded programs.
- There's Rust for Dreamcast (https://dreamcast.rs) via Rust's GCC backend.
- If you create wrappers that provide additional type information, you do get extra safety and nicer interfaces to work with.
- Apple is extending Swift specifically for kernel development.
- In Rust dev, I haven't needed Valgrind or gdb in years, except in some projects integrating C libraries.
Kernel dev probably isn't as easy, but for application development Rust really shifts the majority of problems from debugging to compile time.
- There's always someone willing to write COBOL for the right premium.
I work on Rust projects, so I may have an incomplete picture, but from what I see, when devs have a choice, they prefer working with Rust over C++ (if not due to the language, at least due to the build tooling).
- The idea behind the safe/unsafe split is to provide safe abstractions over code that has to be unsafe.
The unsafe parts have to be written and verified manually very carefully, but once that's done, the compiler can ensure that all further uses of these abstractions are correct and won't cause UB.
Everything in Rust becomes "unsafe" at some lower level (every string has unsafe in its implementation, the compiler itself uses unsafe code), but as long as the lower-level unsafe is correct, the higher-level code gets safety guarantees.
This allows kernel maintainers to (carefully) create safe public APIs, which will be much safer to use by others.
C doesn't have such an explicit split, and its abstraction powers are weaker, so it doesn't let maintainers create APIs that can't cause UB even if misused.
- The split between tag and branch pipelines seems like intentional obfuscation with no upsides (you can't build a non-latest commit from a branch, and when you use a tag to select the commit, GitLab intentionally hides all branch-related info, and skips jobs that depend on branch names).
"CI components" are not really components, but copy-paste of YAML into global state. Merging of jobs merges objects but not arrays, making composition unreliable or impossible.
The `steps` are still unstable/experimental. Composing multiple steps is either a mess of appending lines of bash, or you have to go all the way in the other direction and build layered Docker images.
I could go on all day. Programming in YAML is annoying, and GitLab is full of issues that make it even clunkier than it needs to be.