- How do people find these trackpads? I’ve seen them, or at least similar ones, in the Kyria et al. keyboards [0] and am intrigued but suspicious too.
- Agreed. And I feel it fair to argue that this is the intended interface between proprietary software and its users, categorically.
And more so with AI software/tools, and IMO frighteningly so.
I don’t know where the open models people are up to, but as a response to this I’d wager they’ll end up playing the Linux desktop game all over again.
All of which strikes at one of the essential AI questions for me: do you want humans to understand the world we live in or not?
Doesn’t have to be individually, as groups of people can be good at understanding something beyond any individual. But a productivity gain isn’t, on its own, a sufficient response to this question.
Interestingly, it really wasn’t long ago that “understanding the full computing stack” was a topic around here (IIRC).
It’d be interesting to see if some “based” “vinyl player programming” movement evolved in response to AI in which using and developing tech stacks designed to be comprehensively comprehensible is the core motivation. I’d be down.
- Salient quote under the “AI” question in the FAQ:
> we aim for a computing system that is fully visible and understandable top-to-bottom — as simple, transparent, trustable, and non-magical as possible. When it works, you learn how it works. When it doesn’t work, you can see why. Because everyone is familiar with the internals, they can be changed and adapted for immediate needs, on the fly, in group discussion.
Funny for me, as this is basically my principal problem with AI as a tool.
It’s likely very aesthetic or experiential, but for me it’s strong: a fundamental value of wanting the system and the work to be transparent, shared/sharable and collaborative.
Always liked Bret Victor a great deal, so it wasn’t surprising, but it was satisfying to see alignment on this.
- Agreed!
The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability and the navigation of pain points generally, and maybe even on all the dusty presumptions around the education / vocational / professional-development pipeline.
Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise and quality are desirable, maybe paying attention to how well the broader system cultivates and harnesses these attributes can be of value.
Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.
- Rings true for my impression too. In the end, she’s a YouTuber now, for better or worse, but still puts out what look like thoughtful and informative enough videos, whatever personal grudges she holds.
I suspect for many who’ve touched the academic system, a popular voice that isn’t anti-intellectual or anti-expertise (or out to trumpet their personal theory), but critical of the status quo, would be viewed as a net positive.
- > Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs.
Not sure exactly what irony you mean here, but I’ll bite on the anti-LLM bait …
Surely it matters where the LLM sits against these values, no? Even if the program the LLM gave you is yours, so long as you may need alterations, maintenance, debugging or even an understanding of its nuances, the nature of the originating LLM, as a program, matters too … right?
And in that sense, are we at all likely to get to a place where LLMs aren’t simply the new mega-platforms (while we await the year of the local-only/open-weights AI)?
- Are there good deep dives on how far you can practically take this? Especially in combination with headless-browser PDF generation?
Last time I looked into it, a while ago, my impression was that it would get rickety too soon. It’d be a good place to be, I think, if the web and “document” tech stacks could converge nicely and practically.
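For context, a minimal sketch of the headless-browser route using Puppeteer (the file paths and option choices here are my own illustration, not something from the thread):

```typescript
// HTML-to-PDF via Puppeteer's bundled Chromium.
// Paths and options below are illustrative assumptions.
import puppeteer from "puppeteer";

async function htmlToPdf(inputUrl: string, outputPath: string): Promise<void> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Wait for the network to go idle so late-loading fonts/CSS make it in.
    await page.goto(inputUrl, { waitUntil: "networkidle0" });
    await page.pdf({
      path: outputPath,
      format: "A4",
      printBackground: true,   // keep CSS backgrounds in the output
      preferCSSPageSize: true, // honour @page rules if the document defines them
    });
  } finally {
    await browser.close();
  }
}

htmlToPdf("file:///path/to/document.html", "document.pdf").catch(console.error);
```

The “rickety” part, in my experience, is everything past this point: pagination, running headers/footers and page breaks all hinge on how far the browser’s CSS paged-media support actually goes.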
- Yea.
Having intentionally stayed away from the PDF rabbit hole, but now confronting it again recently … what’s the deal with how sparsely populated the space is with solid and (relatively) lightweight rendering solutions/backends?
Am I missing something, or am I right in thinking that there’s a kind of pandoc/FFmpeg-shaped hole in the document-tooling space that no one wants to (or can’t) fill? Where TeX- and Chrome-based solutions are arguably just too heavy for a number of needs, but are all we really have?
- In the abstract this is an excessive take.
The point is that trust is a major component of scientific work and how it functions collectively. An effect of this is that when trust is violated, a lot breaks down, with a good amount of collateral damage.
With increasing complexity in research and pressure to produce and publish more, it’s a growing weakness.
Bottom line is that teamwork and trust are now indelible parts of research, in a culture predicated on individual success and contributions.
Honestly not sure how science adjusts.
- I feel like the new kinda-alternative on the rise is to support federation with either ActivityPub, BlueSky/ATProto or both.
That is, instead of going for search engines, go for open social. It’s obviously a new and relatively unpopular ecosystem, but makes much more sense than this SEO stuff IMO.
WordPress has rolled out its ActivityPub support and it seems to be working well so far. I just “replied” to a blog post on Mastodon today without even realising it was from a blog.
- > 1. Always sad for me to know how much popular are wireless chargers, wasting 47% more energy aprox for charging the same as a wired charger.
Lots of sibling replies pointing out that the absolute energy loss is negligible and a reasonable price for the convenience.
That’s fine.
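As a back-of-envelope check (with my own assumed numbers: a ~15 Wh phone battery, one full charge a day, electricity at \$0.15/kWh):

$$
15\,\mathrm{Wh} \times 365 \approx 5.5\,\mathrm{kWh/yr}, \qquad 0.47 \times 5.5\,\mathrm{kWh} \approx 2.6\,\mathrm{kWh/yr} \approx \$0.39\ \text{per year}.
$$

So yes, on those assumptions the waste really is cents per year.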
But there’s a bigger point. This convenience is being used as a justification for sticking with big-brand phones, which maybe tips the balance on the reasonableness and, more broadly, raises the general issue of how much buying for convenience is a slippery slope. Maybe just charge with a cable?
- The idea of GPT or code generators supplanting the need for libraries and languages with large standard libraries is likely jumping ahead with optimism.
But … I think the idea is relevant and, security/QA issues aside, this is a real hard-to-see shift that might be brought on by AI: a shift in the equilibria and practicalities around which parts of the craft and profession are worth caring about and which are best left to the computer/AI.
Dependencies vs “write it yourself” (with an AI). Syntax/API design for readability/comprehension vs for computer/AI parseability and thoroughness. Rewrite something to be better vs train an AI on its usage and move on.
- > it's hard to imagine a line between "it is possible to build a computer capable of computing <X>" and "it is expensive, on the scale of reasonably-advanced civilizations, to build a computer capable of computing <X>."
Well, it’s not so much about whether there is a line, but what the probability distributions are, and whether it continues to make sense to think that, of all sentient beings, the majority are likely simulated.
And while I personally get the argument you and the parent post make, I think it’s worthwhile highlighting that it’s likely not a simple matter of whether it’s possible and that the biases/utopianism that facilitate making that leap are also factors and worth making explicit.
Personally, I find it hard to conclude that a sufficiently advanced civilisation would necessarily be concerned with running so many simulations when there are probably a number of things they could spend time on that we can think of and many more we can’t because we’re not that advanced.
- > If it is possible that to make a simulation which matches our experience, then it is likely possible to make an unbounded number of such simulations.
Why? This seems to me to be the weakness in the argument.
Of all the universes in which it is possible for a technological species to evolve and create a simulation of our universe, what’s the probability of said simulations having a given incentive or a conducive cost/benefit ratio for said species?
Theoretically this could range from “we can only do it once before our budget runs out and we move on” to your “unbounded” claim. But with what distribution?
This question seems fundamental, and so reduces the initial question to a more complex one than the one you pose: is it possible, and if so, how plausible?
Unless I’m missing something, leaping over this factor, as seems to be the mainstream approach, indicates to me that some techno-utopic-transcendentalism bias is at play.
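To make that concrete (the notation is mine): let $p(N)$ be the distribution over the number of simulations $N$ a capable civilisation actually runs, $n$ the number of observers per simulation, and $H$ the number of unsimulated observers. The mainstream conclusion needs

$$
f_{\mathrm{sim}} = \frac{\mathbb{E}[N]\,n}{\mathbb{E}[N]\,n + H} \approx 1,
$$

which only holds if $\mathbb{E}[N]\,n \gg H$. If incentives and cost/benefit concentrate $p(N)$ near zero (the “budget runs out after one” end of the range), the conclusion evaporates; the leap is in assuming the tail of the distribution, not in the bare possibility.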
- For me, getting LaTeX out of the dependency chain is a huge attraction. Just too much cruft, slowness and mysterious errors; it seems way too outdated today. I hope something like typst can provide a nice, fast and modern PDF-generation backend. My understanding is that pandoc already supports typst.
- There are theories and principles behind what an AI is doing and a growing craft around how to best use AI that may very well form relatively established “best practices” over time.
Yes, there’s a significant statistical aspect to the workings of an AI, which distinguishes it from something more deterministic like syntactic sugar or a garbage collector. But I think one could argue that that’s the trade-off for a more general tool like AI, in the same way that giving a task to a junior dev is going to involve some noisiness in need of supervision. In the grand scheme of software development, devs are in the end tools too, a part of the grand stack, and I think it’s reasonable to consider AI as just another tool in that stack. This is especially so if devs are already using it as a tool.
Dwelling on the principled-vs-statistical distinction, while salient, may very well be a fallacy, or irrelevant to the extent that we want to talk about the stack of tools and techniques software development employs. How much does the average developer understand, or employ an understanding of, a principled component of their stack? How predictable is that component, at least in the hands of the average developer making average but real software? When the end of the pipeline is a human, and a human organisation of other humans, whether a tool is principled or statistical may not matter much so long as it’s useful and productive.
- Yep! It’s something some aren’t seeing.
The AI coding assistant is now part of the abstraction layers over machine code. Higher-level languages, scripting languages, all the happy paths we stick to (in bash, for example), memory management with GCs and borrow checkers, static analysis … now just add GPT. Just as you no longer have to master memory management and assembly instructions, now you also don’t have to master the fiddly bits of coreutils and bash and various other things.
Like memory management, whole swathes of programming are being taken care of by another program now, a Garbage Collector, if you will, for all the crufty stuff that made computing hard and got in between intent and assessment.
For better or worse, and whether completely so or not, the time of the professional keyboard-driven mechanical-logic problem solver may simply have come and gone in ~4 generations (70 years?).
By 2050 it may be more or less as niche as it was in 1950??
Personally, I find the relative lack of awareness and attention on the human aspect of it all a bit disappointing. Being caught in the tides of history is a thing, and can be a tough experience, worthy of discourse. And causing and even forcing these tides isn’t necessarily a desirable thing, maybe?
Beyond that, mapping out the different spaces that such movements bring to light (e.g., the various sets of values that may drive one, and the various ways those may be applied to different realities) would also certainly be valuable.
But alas, “productivity” rules I guess.