[ my public key: https://keybase.io/karlding; my proof: https://keybase.io/karlding/sigs/6RYb_SBJ3cAPDcj228kqB9ivjISZddnsSETccp_mJz4 ]
justkding.me
github.com/karlding
linkedin.com/in/karlding
- I'm not sure if you're aware, but there's the Wheel Variants proposal [0] from the WheelNext initiative, presented at PyCon 2025 [1][2], which hopes to solve some of those problems.
uv has implemented experimental support, which they announced here [3].
[0] https://wheelnext.dev/proposals/pepxxx_wheel_variant_support...
[1] https://us.pycon.org/2025/schedule/presentation/100/
- If you watch the video, one of the reasons the AI was winning is that it was using "meta" information from the Street View camera images, not necessarily that it was successfully identifying locations purely from the landmarks in the image.
> I realized that the AI was using the smudges on the camera to help make an educated guess here.
- Things like cargo-crev [0] or cargo vet [1] aim to tackle a subset of that problem.
There are also alternative implementations of crev [2] for other languages, but I'm not sure how mature those integrations and their ecosystems are.
[0] https://github.com/crev-dev/cargo-crev
- The University of Waterloo has a similar course, CS452: Real-time Programming.
It’s not quite the same as having physical access to the train set, but a student eventually wrote a simulator for the Märklin train set [0]. Another student wrote an emulator for the TS-7200 used for the class [1] if you don’t want to test your kernel in QEMU.
- I don't own AirPods, but one thing I've struggled with since headphone jacks started disappearing is that on every pair of Bluetooth headphones/earbuds I've tried, the lowest volume setting is still too loud. I normally use Shure SE215s wired, but I've tried the Sennheiser PXC550, Sony WH-1000XM3, and Jabra Elite 7 Sport with similar impressions, and I've also tried using my work 2021 MacBook Pro as the audio source instead of my phone. Surely I'm not the only one who feels this way?
On my Samsung phone, I've had to manually set individual app volumes to 80% via Sound Assistant, have additional volume steps enabled, and have the system sound set to the lowest setting when using Bluetooth.
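Part of why the lowest step can still feel loud: Bluetooth absolute volume is quantized into a small number of steps, and sensitive IEMs need far more attenuation than the bottom step provides. A back-of-the-envelope sketch (the 16-step count is illustrative, not measured from any of these devices):

```python
import math

def step_attenuation_db(step: int, max_steps: int) -> float:
    # Attenuation in dB relative to full volume, assuming volume
    # steps scale amplitude linearly.
    return 20 * math.log10(step / max_steps)

# With 16 steps, the lowest non-mute setting only attenuates ~24 dB:
print(round(step_attenuation_db(1, 16), 1))  # -24.1
# A high-sensitivity IEM may need 40+ dB of attenuation to sound
# quiet, which no step above "mute" provides.
```

This is why per-app volume scaling (e.g. capping apps at 80%) helps: it adds attenuation on top of the coarse system steps.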
- Rob ter Horst's (The Quantified Scientist) test results compared against a fingertip pulse oximeter, which include readings both at ground level and on flights, seem to indicate that they're okay for detecting whether your SpO2 is normal or abnormal. Basically, a one-off abnormal reading can be a false positive, but you're unlikely to get consistently false positive results.
See the videos for the Apple Watch Series 6 [0] and Series 7 [1].
There are also tests for the Series 8 [2], although those don't include data collected in a low-oxygen environment.
[0] https://youtube.com/watch?v=8HIcwMhEny0
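The one-off vs. consistent distinction is just independence of errors: if a single reading is a false positive with probability p, then n independent readings are all false positives with probability p^n, which shrinks fast. A toy calculation (p = 0.10 is a made-up rate for illustration, not a number from the videos):

```python
p = 0.10  # assumed per-reading false-positive rate (illustrative only)
for n in (1, 3, 5):
    # Probability that n independent readings are ALL false positives.
    print(n, p ** n)
# With p = 0.10, three consecutive false positives already have
# probability ~0.001, so repeated abnormal readings are far more
# likely to reflect a real issue than measurement noise.
```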
- To me, it sounds more like the old system that WaterlooWorks replaced (JobMine). JobMine was just Oracle/PeopleSoft's PeopleTools under the hood.
Some of my friends who graduated earlier told stories about how JobMine at one point accepted resumes in HTML. Of course, this also meant that it was vulnerable to XSS attacks. The eventual fix was just to only allow PDF resumes.
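The fix generalizes: validating that an upload actually is a PDF, rather than trusting the filename or the submitted markup, can be sketched by checking the file's magic bytes. A minimal illustration of the idea, not JobMine's actual code:

```python
def looks_like_pdf(data: bytes) -> bool:
    # Every valid PDF starts with the magic bytes "%PDF-".
    # Checking content (not just the extension) rejects an HTML file
    # renamed to resume.pdf, which a browser could otherwise render
    # and execute as a stored XSS payload.
    return data.startswith(b"%PDF-")

# An HTML "resume" carrying a script tag is rejected:
assert not looks_like_pdf(b"<html><script>alert(1)</script></html>")
# A real PDF header passes:
assert looks_like_pdf(b"%PDF-1.7 ...")
```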
- There's a "Typo-tolerance" option [0] HN exposes in the search settings [1]. If you disable that, then those results no longer show up.
[0] https://www.algolia.com/doc/guides/managing-results/optimize...
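Under the hood, that checkbox corresponds to Algolia's typoTolerance search parameter. A sketch of the query-time parameters such a UI would send (typoTolerance is a documented Algolia parameter; the query string is just an example):

```python
# Query-time search parameters for an Algolia index with typo
# tolerance turned off. Setting typoTolerance to False restricts
# results to exact matches, dropping the fuzzy hits.
search_params = {
    "query": "hackernews",
    "typoTolerance": False,
}
assert search_params["typoTolerance"] is False
```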
- The Slippi [0] devs also wrote a write-up about how they handle desyncs in netcode [1].
[0] https://slippi.gg/
[1] https://medium.com/project-slippi/fighting-desyncs-in-melee-...
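For context on what "handling desyncs" usually involves: a common technique in lockstep/rollback netcode is for each peer to hash its deterministic game state every frame and compare digests; a mismatch on the same frame means the simulations have diverged. A minimal sketch of that idea (not Slippi's actual implementation; the state fields are made up):

```python
import hashlib
import struct

def frame_checksum(frame: int, player_x: float, player_y: float,
                   stocks: int) -> str:
    # Serialize the deterministic game state for this frame and hash it.
    # Peers exchange these digests; differing digests for the same
    # frame indicate a desync.
    payload = struct.pack("<Iffi", frame, player_x, player_y, stocks)
    return hashlib.sha1(payload).hexdigest()

# Two peers simulating identically produce matching checksums...
assert frame_checksum(120, 1.5, 0.0, 4) == frame_checksum(120, 1.5, 0.0, 4)
# ...while any divergence in state is caught on that frame.
assert frame_checksum(120, 1.5, 0.0, 4) != frame_checksum(120, 1.5, 0.0, 3)
```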