SirensOfTitan
Joined 4,444 karma

  1. I’ve been working on a product for a little while that Ivan Illich would call a “convivial tool”: one that doesn’t take from the user but makes them more effective, independent, and creative through its use. I’ve been interested in these kinds of tools for a long time, but I feel some sense of urgency in the LLM era, where I’ve already seen peers lose their edge by offloading the cognitive work.

    These are the tools that actually act as a bicycle for the mind. Most apps forgo the metacognitive and emotional labor that actually helps people learn effectively in favor of gamification, because: 1. modeling these skills is hard, and 2. the first step to building effective learning habits is restoring the so-called “learn drive,” the love of learning, play, and tinkering that underlies most effective learning, which gamification restores only at an artificial level.

    There is so much content out there, and a sufficiently motivated person will find it and make meaning out of it. But most people are not motivated and don’t know how to motivate themselves, meander, or explore without gritting their teeth, and I think you’ll probably just see churn without gamification unless you deal with that side of the process.

    Since I've tried to ship such tools before and ultimately failed, I’m explicitly not doing the whole SV fail fast and iterate thing here: I’m meandering, taking my time, letting motivation move me when it strikes versus going for the easiest or most obvious thing.

    (also sorry if this is itself meandering: I’m lifting while typing this on my phone)

  2. If the world is too complex for a “regular person” to understand then universal suffrage is a mistake.

    Just say what you mean: you want technocracy or some other non-representative, non-democratic form of government.

  3. This essay rubs me the wrong way in that it continues to invest in this coastal elite attitude that the masses should do what we say because we are the experts. These people continue to miss the forest for the trees by avoiding the question: why have Americans lost faith in institutions?

    I largely consider Trump a symptom of a larger disorder; I think it is lazy to assume that he and his administration are the source of the breakdown here.

    Two thinkers come to mind to me in this case:

    1. Hannah Arendt, particularly her writing in The Human Condition (and, as analogues, perhaps Anthony Downs's Inside Bureaucracy and Jacques Ellul's The Technological Society):

    > Bureaucracy is the form of government in which everybody is deprived of political freedom, of the power to act; for the rule by Nobody is not no-rule, and where all are equally powerless we have a tyranny without a tyrant.

    Another comment talks about accountability, but a bureau is composed of people "just doing their jobs" without the personal accountability that keeps systems honest.

    Per Downs, bureaus eventually become mainly obsessed with their own survival over their original mandate, and it requires careful design to avoid this consequence.

    2. Christopher Lasch: The idea that government institutions are required to enforce a centralized objectivity for democracy to survive is just about the opposite of what I think we actually need, per Lasch:

    > "[Specialized expertise is] the antithesis of democracy."

    > "Democracy works best when men and women do things for themselves, with the help of their friends and neighbors, instead of depending on the state."

    The attitude espoused in this essay will not do any work to re-establish trust with Americans; it continues a long line of unaccountability and unreflectiveness from the "adults in the room" about their own contributions to the degradation of the system, by pretending Republicans or Trump are a unique aberration.

  4. I’ve been working on a learning / incremental reading tool for a while, and I’ve found LLM and LLM-adjacent tech useful, but mainly as a way of resolving ambiguity within a product that doesn’t otherwise show any use of LLMs. It’s like LLM-as-parser.
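
    A minimal sketch of the LLM-as-parser idea: the model only turns ambiguous free text into a structured value that the rest of the product consumes, and the user never sees model output directly. The `llm` argument here is a hypothetical injected callable, not any specific provider's API.

```python
import json

def parse_reminder(text, llm):
    """Resolve ambiguous free text into structured fields via an LLM.

    `llm` is any callable taking a prompt string and returning a string;
    the rest of the product only ever sees the structured result.
    """
    prompt = (
        "Extract a reminder from the text below as JSON with keys "
        '"task" (string) and "minutes_from_now" (integer). '
        "Reply with JSON only.\n\n" + text
    )
    try:
        data = json.loads(llm(prompt))
        return {"task": str(data["task"]),
                "minutes_from_now": int(data["minutes_from_now"])}
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        # Fall back to deterministic parsing, or ask the user to rephrase.
        return None

# Usage with a stubbed model, since no real provider is assumed here:
fake_llm = lambda _prompt: '{"task": "call mom", "minutes_from_now": 90}'
print(parse_reminder("remind me to call mom in an hour and a half", fake_llm))
```

    Validating and coercing the model's output before it touches product state is what makes this safe to ship without exposing the LLM itself.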
  5. While I don’t think proper pluralization is indicative of anything outside of real world time constraints, I am a fan of these kinds of tacit signals.

    Last week, my wife and I toured a school for our daughter. The school gave us these pretty notebooks with a Blackwing pencil, saying that they “take writing seriously here.” I noticed that the students, however, did not use Blackwings but cheap, low-quality yellow pencils. This signal prompted me to pay closer attention, and I found half a dozen things that affirmed the bad feeling I had about the place.

    It’s a simple rule, but in an era where everyone is trying to sell me something, I use Bill Hamilton’s Say Mean Do rule from his “Saints and Psychopaths,” about finding real spiritual mentors. Broadly: saints say what they mean and do what they say. Unfortunately, it’s probably just as hard to find tech companies who are honest as it is to find a true spiritual mentor. The B2B SaaS sales cycle is usually just checkbox hunting and CYA.

  6. These tools have already peaked in usage, and even their greatest proponents are questioning their viability; see:

    https://garymarcus.substack.com/p/is-vibe-coding-dying

    I'm under the impression a lot of these tools are:

    1. Aggressively pushed by VCs on company boards.

    2. Productive of code that is not maintainable and becomes very difficult to deal with when it is non-trivial.

    3. Not what customers want.

    Don't get me wrong, I use LLMs and LLM tech in my work*; they are useful and interesting products, but they are a small part of the work and a small part of the product. Sure, there are people who use them extremely effectively, but those people are offset by those who have LLMs write code they don't understand and do not review (leading to a scenario where code is effectively ghost code, often without even provenance back to the LLM that wrote it).

    It seems to me that layoffs in tech are partly a cultural contagion in executive circles, but more importantly it seems to me like offshoring is much more responsible.

    [*]: Mainly for codemods and reorganization of code, where I'm not really changing the intent of the code but its structure.
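
    As an illustration of that kind of structure-only change (a hedged sketch, not the tooling I actually use), here is a codemod built on Python's stdlib `ast` that alphabetizes top-level function definitions without touching their bodies:

```python
import ast

def sort_top_level_functions(source: str) -> str:
    """Reorder top-level function defs alphabetically; bodies are untouched.

    A structural codemod: the intent of each function is preserved,
    only the layout of the file changes.
    """
    tree = ast.parse(source)
    funcs = [n for n in tree.body if isinstance(n, ast.FunctionDef)]
    others = [n for n in tree.body if not isinstance(n, ast.FunctionDef)]
    # Keep non-function statements first, then functions in name order.
    tree.body = others + sorted(funcs, key=lambda f: f.name)
    return ast.unparse(tree)  # requires Python 3.9+

src = "def zebra():\n    return 1\n\ndef apple():\n    return 2\n"
print(sort_top_level_functions(src))
```

    Because the transform round-trips through the AST, it is easy to verify mechanically that only structure changed, which is exactly what makes this class of task a good fit for automation.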

  7. LLMs are the latest progression in decades of technology and social changes that leave people less connected and less capable in exchange for more comfort. I think it's likely that AI technology eclipses humans at least partially by atrophying our own skills and abilities, particularly 1. our ability to endure discomfort in service of a goal and 2. our capacities to make decisions.

    I don't really know what to do about it. Even with ground rules of engagement, we all still need to participate in a larger culture where it seems like a runaway guarantee that LLMs erode more critical skills, leaving us with less and the handful of companies who develop this tech with more.

    I'm slowly changing my life around what LLMs tell me, but not necessarily in the ways you'd expect:

    1. I have a very simple set of rules of engagement for LLMs. For work, I don't let LLMs write code, and I won't let myself touch an LLM before suffering on a project for at least an hour.

    2. I am an experienced meditator with a lot of experience in the Buddhist tradition. I've dusted off my Christian roots and started exploring these ideas with new eyes, partially through a James Hillman-esque / Rob Burbea Soulmaking Dharma lens. I've found a lot of meaning in personal fabrication and myth, and my primary practice now is Centering Prayer.

    3. I've been working for a little while on a personal edu-tech idea with the goal of using LLM tech as an auxiliary tech to help people re-develop lost metacognitive skills and not use LLMs as a crutch. I don't know if this will ever see the light of day, it is currently more of a research project than anything, and it has a certain kind of iconoclastic frame like Piotr Wozniak's around what education is and what it should look like.

  8. A good anchor here is that cigarette smokers are 15-30 times more likely to get or die from lung cancer compared to non-smokers.

    Effect size and baseline risk matter a lot, and while the idea that alcohol was pro-health always felt a little suspect, I don't think this kind of risk profile is significant enough for people to change their habits over.

    I also didn't read too deeply into this study, but there is a stark difference between old-age dementia and younger dementia. My mom developed dementia symptoms at 58, which is so much more devastating than another family member who started showing symptoms at 97.
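
    To make the effect-size point concrete, here is a back-of-the-envelope comparison. Every baseline number below is an illustrative assumption for the arithmetic, not a figure from the study: the point is only that multiplying a tiny baseline by a modest relative risk yields a tiny absolute change.

```python
# Illustrative baselines (assumptions, not study data):
never_smoker_risk = 0.01   # assumed lifetime lung-cancer risk, never-smokers
smoker_rr = 20             # roughly the middle of the 15-30x range above
small_rr = 1.2             # a "20% increased risk" style headline figure

smoker_risk = never_smoker_risk * smoker_rr
absolute_increase_smoking = smoker_risk - never_smoker_risk   # ~0.19

baseline_dementia = 0.02   # assumed baseline risk over some period
exposed_dementia = baseline_dementia * small_rr
absolute_increase_small = exposed_dementia - baseline_dementia  # ~0.004

print(f"smoking adds {absolute_increase_smoking:.1%} absolute risk")
print(f"a 1.2x relative risk adds {absolute_increase_small:.1%}")
```

    The smoking anchor adds on the order of twenty percentage points of absolute risk; the headline-style small relative risk adds a fraction of one. That gap is why the anchor matters.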

  9. My Pulse today is just a mediocre rehash of prior conversations I’ve had on the platform.

    I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.

    I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways. Both will stick around, but fall well short of the tulip mania VCs and tech leaders have pushed.

    I’ve long contended that tech has lost any soulful vision of the future, it’s just tactical money making all the way down.

  10. > Each of these 'phases' of LLM growth is unlocking a lot more developer productivity, for teams and developers that know how to harness it.

    I still find myself incredibly skeptical that LLM use is increasing productivity. Because AI reduces cognitive engagement with tasks, it feels to me like AI increases perceived productivity but actually decreases it in many cases (and this probably compounds as AI-generated code piles up in a codebase, since there isn't an author who can attach context as to why decisions were made).

    https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

    I realize the author qualified his or her statement with "know how to harness it," which feels like a cop-out I'm seeing an awful lot in recent explorations of AI's relationship with productivity. In my mind, like TikTok or online dating, AI is just another product motion toward maximizing comfort above all else, since cognitive engagement is difficult and not always pleasant. In a nutshell, it is another instant gratification product from tech.

    That's not to say that I don't use AI, but I use it primarily as search to see what is out there. If I use it for coding at all, I tend to use it mainly for code review. Even when AI does a good job implementing a feature, unless I put in the same cognitive engagement I typically put in during code review, its code feels alien to me and I feel uncomfortable merging it.

  11. I’m convinced all of the major LLM providers silently quantize their models. The absolute worst was Google’s transition from Gemini 2.5 Pro 3-25 checkpoint to the May checkpoint, but I’ve noticed this effect with Claude and GPT over the years too.

    I couldn’t imagine relying on any closed models for a business because of this highly dishonest and deceptive practice.

  12. I do this too, but annotate books heavily by writing in the margins (digitally, through my reMarkable) and only very rarely ever revisit them.

    Writing while reading is a way of focusing on what either resonates with me or confounds me.

  13. I use AI almost exclusively for search, and usually force myself to grind against a problem a little before engaging it. When I do use it for software development, I treat AI as a smart codemod tool: for easily verifiable, well-defined tasks that are low in mental effort but high in time commitment.

    I keep a list of "rules of engagement" with AI that I try to follow so it doesn't rob me of cognitive engagement with tasks.

  14. I’m tired of the hype, which is highly disconnected from reality, and I think there is a long tail of tasks in information work, requiring context and relationships, that AI will struggle with for a while.

    But it is also a fun and interesting technology that, when used appropriately, can reduce barriers around exploration and learning new things.

    Unfortunately, like most technology nowadays, engagement at the expense of everything else will continually eat away at peoples’ remaining attention skills, or worse, reduce peoples’ capacity to form relationships with real, flawed humans.

    I recently embarked on a personal journey of reading the Gnostic Nag Hammadi collection and a modern translation of the Christian gospels from a James Hillman multiplicity frame (and from my direct experience from thousands of hours of Buddhist meditation), and it’s been tremendously fruitful. I mark up the documents I read and then every so often discuss them with an LLM, and I find it a lovely experience. I’ve started seeing Christianity from a new frame (having been raised Presbyterian against my will when younger) as something much more lovely than I considered before. Christian mysticism reminds me a lot now of non-dual Buddhist traditions, and Centering Prayer has reignited a meditation practice stalled from a bad retreat experience years ago. LLMs didn’t give me anything I wasn’t giving them; they just acted as a mirror and helped bootstrap a weird interest I have that I’m now sharing with friends.

    I also use LLMs in a very limited way when writing software:

    1. As a rubber duck, and often I don’t even click enter, I figure out what I needed by writing it down.

    2. For explorations of new concepts I’m not yet familiar with, like a glorified search engine.

  15. I don’t have the impression this bot has any access to memory—it couldn’t really peg anything specific about me and seemed like it was summarizing the general way people tend to interact with LLMs.
  16. There was a significant nerf of Gemini 3-25 a little while ago, so much so that I detected it without knowing there was even a new release.

    Totally convinced they quantized the model quietly and improved on the coding benchmark to hide that fact.

    I’m frankly quite tired of LLM providers changing the model I’m paying for access to behind the scenes, often without informing me, and in Gemini’s case on the API too—at least last time I checked they updated the 3-25 checkpoint to the May update.

  17. This is even the case with Gemini:

    The Gemini 2.5 Pro 05/06 release, by Google’s own reported benchmarks, was worse in 10 of 12 cases than the 3/25 version. Google re-routed all traffic for the 3/25 checkpoint to the 05/06 version in the API.

    I’m also unsure who needs all of these expanded quotas because the old Gemini subscription had higher quotas than I could ever anticipate using.

  18. They're using these "Preview" models in their non-technical, user-facing Gemini app and product. "Preview" is entirely meaningless if Google themselves use the model for production workloads.
  19. Anyone who sees Trump as either an aberration or a savior is deeply deluded on the state of America.

    In my opinion, the US world order’s decay was unmasked in 2008, and it has been accelerating since. The divergent economic realities of poor rural America and the rich coastal cities (and even within them there is so much clear wealth disparity) have only gotten worse, and the political and bureaucratic system isn’t really capable of dealing with this skillfully.

    Trump actually speaks to the realities that few politicians will (Bernie Sanders did too in 2016, hence his appeal), though his prescribed solutions are likely just accelerating the country’s demise.
