- Thanks for your informative reply. I see now I was approaching this incorrectly. I was considering drawing conclusions from a high RTT rather than an RTT so small it would be impossible to have covered the distance.
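For anyone following along, the impossibility bound is just speed-of-light arithmetic; a quick sketch (the 5 ms figure is made up):

```python
# An RTT covers the client-server distance twice, at best at light speed,
# so distance <= c * RTT / 2. An RTT below that bound for a claimed
# location is physically impossible.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def max_one_way_distance_km(rtt_seconds: float) -> float:
    """Upper bound on one-way distance implied by a round-trip time."""
    return C_KM_PER_S * rtt_seconds / 2

# Example: a 5 ms RTT caps the endpoint at ~750 km away, so a server
# claiming to be on another continent can't be where it says it is.
print(max_one_way_distance_km(0.005))  # ~749.5
```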
- Contrasting take: RTT plus a service providing black-box knowledge is not equivalent to knowledge of the backbone. Assuming traffic is always efficiently routed seems dubious at a global scale. The supporting infrastructure of telecom is likely shaped by the volume/size of traffic rather than by shortest paths. I'll confess my evaluation here might be overlooking some details; I'm curious about others' thoughts on this.
- 35m ago edit: Apple uses many predictive systems for typing. My sentiment in singling out slide to type might be misguided, as it does not exist in a vacuum. I'd love to see these tests redone with slide to type disabled. I'm leaving the original comment below for reference.
Slide to type. This "issue" is at most 6 years old for iOS users.
Turn off slide to type if you do not use it. Slide to type performs key-resizing logic, which is the direct cause of this issue. Please upvote this comment for visibility.
Please reply if you think I'm wrong. I see this get posted frequently enough I'm actually losing it.
Please refer to https://youtu.be/hksVvXONrIo?si=XD7AKa8gTl85_rJ6&t=72 (timestamp 1:12) to see that slide to type is enabled.
- I didn't see any reference to a sender or to actively blasting RF from the same access point. I think the approach relies on other signal sources creating reflections that a passively monitoring access point then attempts to make sense of.
- 5GHz WiFi has a wavelength of ~6cm and 2.4GHz ~12.5cm. Anything achieving finer resolution is a result of interferometry or a non-WiFi signal. Mentioning this might not add much substance to the conversation, but it felt worth adding.
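The numbers above are just wavelength = c / f; a quick sanity check:

```python
C = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_hz: float) -> float:
    """Free-space wavelength in centimeters."""
    return C / freq_hz * 100

print(wavelength_cm(5.0e9))  # ~6.0  (5 GHz WiFi)
print(wavelength_cm(2.4e9))  # ~12.5 (2.4 GHz WiFi)
```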
- I'm interested but also incredibly dubious. Not because it seems impossible, but the opposite. On one hand, an open source repo like this, built to be hackable and extensible, should be praised, but the "Why Built WiFi-3D-Fusion" section[0] gives me very, very bad vibes. Here are some excerpts I especially take issue with:
> "Why? Because there are places where cameras fail, dark rooms, burning buildings, collapsed tunnels, deep underground. And in those places, a system like this could mean the difference between life and death."
> "I refuse to accept 'impossible.'"
WiFi sensing is an established research domain that has long struggled with line-of-sight requirements, signal reflection, interference, etc. This repo has the guise of research, but it seems to ignore the prior work of the very field it sits in. It's one thing to detect motion or approximately track a connected device through space, but "burning buildings, collapsed tunnels, deep underground" are exactly the kind of non-standardized environments where WiFi sensing performs especially poorly.
I hate to judge so quickly based on a readme, but I'm not personally interested in digging deeper or spinning up an environment. Consider this before aligning with my sentiment.
[0] https://github.com/MaliosDark/wifi-3d-fusion/blob/main/READM...
- I really want to love Rust, and I understand the niches it fills. My temporary allegiance to it comes down to performance, but I'm also drawn by the crate ecosystem and the support provided by cargo.
What's so damning to me is how debilitatingly unopinionated it is in situations like error handling. I've used it enough to at least approximate its advantages, but the ecosystem strongly hinting at pulling in a crate (though it's not required) to help with error processing mirrors the inconvenience of having to define an exception type in another language. I don't think it would be the end of the world if it came with some creature comforts here and there.
- I'll provide a contrasting, pessimistic take.
> How do you write programs when a bug can kill their user?
You accept that you will have a hand in killing users, and you fight like hell to prove yourself wrong. Every code change, PR approval, process update, unit test, hell, even meetings all weigh heavier. You move slower, leaving no stone unturned. To touch on the pacemaker example, even buggy code that kills X% of users will keep Y% alive or improve their QoL. Does the good outweigh the bad? Even small amounts of complexity can bubble up and lead to unintended behavior. To borrow the vibrator example: what if the frequency grows so large it overflows and ends up burning the user? Youch.
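To make that overflow hazard concrete, here's a toy sketch (a hypothetical 16-bit firmware register, not any real device's code):

```python
def bump_frequency(freq_hz: int, step_hz: int) -> int:
    """Toy model of a 16-bit unsigned register: it wraps instead of saturating."""
    return (freq_hz + step_hz) & 0xFFFF  # silent 16-bit wraparound

freq = 65_000
freq = bump_frequency(freq, 1_000)
print(freq)  # 464, not 66,000 -- whatever consumes this value now misbehaves
```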
The best insight I have to offer is that time is often overlooked and taken for granted. I'm talking Y2K data types, time drift, time skew, special relativity, precision, and more. Some of the most interesting and disturbing bugs I've come across all occurred because of time. "This program works perfectly fine, but after 24 hours it starts infinitely logging." If time is an input, do not underestimate time.
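One concrete way time-as-input bites: measuring elapsed time with the wall clock, which NTP or a user can step backwards. A minimal sketch:

```python
import time

# Buggy: the wall clock can jump (NTP step, manual change, DST handling),
# so "elapsed" can go negative or huge, and timeout logic quietly breaks.
start = time.time()
elapsed = time.time() - start  # not guaranteed sane across a clock step

# Safer: a monotonic clock only ever moves forward.
start = time.monotonic()
elapsed = time.monotonic() - start  # always >= 0
```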
> How do we get to a point to `trust` it?
You traverse the entire input space to validate the output space. This is not always possible. In those cases, audit compliance can take the form of traversing a subset of the input space deemed "typical/expected" and moving forward with the knowledge that edge cases can exist. Even with fully audited software, oddities like a cosmic-ray bit flip can occur. What then? At some point, in this beautifully imperfect world, one must settle for good enough over perfection.
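For small enough domains, traversing the whole input space is literally doable; a sketch with a made-up function and property:

```python
def saturating_add_u8(a: int, b: int) -> int:
    """Add two bytes, clamping at 255 instead of wrapping."""
    return min(a + b, 255)

# The entire input space is only 256 * 256 = 65,536 cases, so check them all.
for a in range(256):
    for b in range(256):
        out = saturating_add_u8(a, b)
        assert 0 <= out <= 255
        assert out == a + b or out == 255
print("entire input space validated")
```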
Astute readers might be furiously pounding their keyboards about the halting problem right now. In general we can't even verifiably prove that a particular input will produce an output at all, much less verify an entire space.
> I am convinced that open code, specs and (processes) must be requirement going forward.
I completely agree, but I don't believe this will outright prevent user deaths. Having open code, specs, etc. aids accountability, transparency, and external verification. I must add that I feel there are pressures against this, as there is monumental power in being the only party able to ascertain the facts.
- elzbardico is pointing out that the author has the model generate a confidence value as part of the response text, rather than that value being the actual confidence of the output.
- I too once fell into the trap of having an LLM generate a confidence value in a response. This is a very genuine concern to raise.
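To make the distinction concrete: a confidence the model prints is just more sampled text, while something closer to a real confidence lives in the token log-probabilities. A rough sketch, assuming the OpenAI Python client and its logprobs option (model name and prompt are arbitrary):

```python
import math
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Is 97 prime? Answer Yes or No."}],
    logprobs=True,
    max_tokens=1,
)

# Asking the model to print "confidence: 0.95" just samples more tokens.
# The probability it actually assigned to its answer is in the logprobs:
tok = resp.choices[0].logprobs.content[0]
print(tok.token, math.exp(tok.logprob))  # e.g. "Yes" 0.98
```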
- I think it's fair to say AI-generated code isn't visibly making a meaningful impact in open source. Absence of evidence is not evidence of absence, but that shouldn't be read as a defense of orgs or of the fanciful predictions made by tech CEOs. In its current forms, AI feels comparable to piracy: the real impact is fuzzy, and companies claim the number is higher or lower depending on the weather.
Yes, open source projects would be the main place where these claims could be publicly verified, but established open source projects aren't just code--they're usually complex, organic, ever-shifting organizations of people. I'd argue that productively interacting with a large group of people who have cultivated their own working process and internal communication patterns is closer to AGI than to a coding assistant, so maybe the goalposts we're using for AI PRs are too grand. I think it's expected to hear claims of AI making an unverifiable splash from within walled gardens, where processes and teams can be upended at will, because those are precisely the environments where AI could be the most disruptive.
Additionally, I think we're willfully looking in the wrong places when trying to measure AI impact by looking for AI PRs. Programmers don't flag PRs when they use IntelliJ or confer with X flavor of LLM(tm), and expecting mature open source projects to have AI PRs seems as dubious as expecting them to use blockchain or any other technology that could be construed as disruptive. It just may not be compatible with or reasonable for their current process. Calculated change is often incremental and boring, and real progress is only felt by looking away.
I made a really simple project that automatically forwarded browser console logs to a central server, programmatically pulled the file(s) named in the trace, and had an LLM consume a templated prompt + error + file. It'd make a PR with what it thought was the correct fix. Sometimes it was helpful. The problem was it needed to do more than code, because the utility of a one-shot prompt-to-PR is low.
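For anyone curious, the server-side plumbing for something like this is small; a minimal sketch (the payload shape, regex, and template are my guesses, not the original project's code):

```python
# Sketch: given a forwarded console error, pull the file named in the
# stack trace and assemble the prompt an LLM call would consume.
import re
from pathlib import Path

PROMPT_TEMPLATE = """Fix the bug causing this browser console error.

Error:
{error}

Source file ({filename}):
{source}
"""

def build_prompt(payload: dict, repo_root: Path) -> str:
    # Naive: grab the first project file mentioned in the trace.
    match = re.search(r"(src/[\w/.-]+\.(?:js|ts))", payload["stack"])
    filename = match.group(1) if match else "unknown"
    source = (repo_root / filename).read_text() if match else ""
    return PROMPT_TEMPLATE.format(
        error=payload["message"], filename=filename, source=source
    )
```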
- I've been off socials and on forums for 8+ years now for the same reason. I share a similar sentiment to Bizzy's sibling reply. I say these things because lately I've been thinking a lot about dead internet theory and how strongly some believe it.
One of the most profound realizations I've had lately is that the perception of the medium of communication itself is a well that can be poisoned with artificial interactions. Major emphasis on perception. The mere presence of the artificial can immediately taint real interactions; you don't need a majority to poison the well.
How many spam calls does it take for you to presume spam? How many LinkedIn auto-reply AI comments does it take to presume all comments are AI? How many emails before you immediately presume phishing? How many rage-baiting social posts do you need to see before you believe the entire site is composed of synthetic engagement? How many Tinder bots do you need to interact with before you feel the entire app is dead? How many auto-deny job application responses until you assume the next one is a ghost job posting? How many interactions with greedy people does it take to presume that greed is human nature?
- Thanks for your reply. The target market is anyone who has interest, as I've open sourced the tooling I used for translating text to braille and generating the geometry of the molds. I'm hoping to create a follow-up post announcing a web UI that will make the functionality more accessible to a broader audience.
It's sad to hear about the poor performance of embossers. In terms of lifetime, I only have about 150 or so pages created with my prototype molds, but I haven't been able to detect any degradation in the embossing quality. At that number of copies, with a set of molds only costing $1.50, I feel I've already gotten my value out of them. I think someone could buy the equipment I'm using for around $800 in total. Obviously there's a bit of a tradeoff, and at some point the cost of the molds would exceed the cost of an embosser, but I'd argue this approach is pretty quick for a manual one (for now).
The pain of the approach is the printing times, for sure. One bit of confusion I can clear up: while the single 3D printer I use can only print 4 unique sets of page molds a day, using the molds is fast! My time to beat is 15 seconds to manually position a sheet of paper between the mold bodies and roll it through a cheap roller press. 4 pages a minute is pretty quick for an embosser, and this is still just manual. Now imagine it automated :) Expecting a human to do 15 seconds/press for a full work day makes me sad, but that'd be a lot of birthday cards.
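Spelling out the throughput and breakeven math above (the embosser price is a placeholder, not a quote):

```python
SETUP_COST = 800        # printer + roller press, rough total from the post
MOLD_SET_COST = 1.50    # per unique page mold set, from the post
SECONDS_PER_PRESS = 15

pages_per_minute = 60 / SECONDS_PER_PRESS           # 4.0
EMBOSSER_PRICE = 3_000  # placeholder; real embosser prices vary widely
# Unique pages at which total mold cost catches up to an embosser:
breakeven_unique_pages = (EMBOSSER_PRICE - SETUP_COST) / MOLD_SET_COST
print(pages_per_minute, breakeven_unique_pages)  # 4.0 ~1467
```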
I really appreciate the link to the BrailleRap project. It's cool to see innovation in the space, and I subscribe to the RepRap philosophy. I'm currently prototyping designs to automate the pressing process, but until that component is automated, this approach lies somewhere between manual and automated embossing in terms of utility. That said, it's still much more accessible and cost-effective.
- Let's say you're a global VPN provider and you want to reduce as much traffic as possible. A user accesses the entry point of your service to reach a website that's blocked in their country. For the benefit of this thought experiment, let's say the content is static/easily cacheable, or that dynamic content becomes cached because the user is testing multiple times. Could this play into the results presented in this article? Again, I know I'm moving goalposts here, but I'm just trying to be critical of how the author arrived at their conclusion.