agentcoops
Karma: 551
Working on something new... Contact @gmail.

Ex-Stripe


  1. I definitely agree on the importance of personalized benchmarks for really feeling when, where and how much progress is occurring. The standard benchmarks are important, but it’s hard to really feel what a 5% improvement on exam X means beyond the hype. I have a few projects across domains that I’ve been working on since ChatGPT 3 launched, and I quickly give them a try on each new model release. Despite popular opinion, I could really tell a huge difference between GPT 4 and 5, but nothing compared to the current delta between 5.1 and Gemini 3 Pro…

    TL;DR: I don’t think personal benchmarks should replace the official ones, of course, but the former are invaluable for building your intuition about the rate of AI progress beyond the hype. (A rough sketch of what such a personal harness can look like is below.)
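
    To make that concrete, here is a minimal sketch of the kind of harness I mean, assuming the OpenAI Python client; the model name and the prompts.json layout are illustrative placeholders, not a recommendation.

      # Personal-benchmark sketch: replay a fixed set of personal prompts against
      # a newly released model and save the answers for side-by-side comparison.
      # Assumes the `openai` package, OPENAI_API_KEY in the environment, and a
      # hypothetical prompts.json of [{"name": ..., "prompt": ...}, ...] entries.
      import json
      from openai import OpenAI

      client = OpenAI()

      def run_personal_benchmark(model: str, prompts_path: str = "prompts.json") -> None:
          with open(prompts_path) as f:
              cases = json.load(f)
          for case in cases:
              response = client.chat.completions.create(
                  model=model,
                  messages=[{"role": "user", "content": case["prompt"]}],
              )
              answer = response.choices[0].message.content or ""
              # One file per model and case, so releases can be diffed later.
              with open(f"results_{model}_{case['name']}.txt", "w") as out:
                  out.write(answer)

      # Example: rerun the same personal cases whenever a new model ships.
      # run_personal_benchmark("gpt-4o")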

  2. Yes, it’s a narrow-minded perspective that can’t see the second-order implications of this development beyond one’s own experience as a seasoned developer. For the sake of argument, let’s imagine that the quality of software at the top Valley firms is just phenomenal (a stretch, as we all know, even as a hypothetical). That is obviously not the case for the quality of software at 99% of firms. One could argue that the dominance of SaaS this past decade is an artifact of the software labor market: any vaguely talented engineer could easily get a ridiculously well-paid position in the Valley at a firm that sold software at great margins to all the other firms that were effectively priced out of the market for engineers. I think the most interesting case study of this is actually the gaming industry, since it’s a highly technical engineering domain where margins are quickly eroded by paying the actual market wage for enough engineers to ship a good product, leading to the decline of the AAA studios. Carmack’s career trajectory from the gaming industry to Meta is paradigmatic of the generational shift here.

    TL;DR: in my opinion, the interesting question is less what happens at the top firms or to top engineers than what happens as the rest of the world gains access to engineering skills well above the previous floor, at a reasonable price point.

  3. I didn’t say to use ChatGPT as a therapist; I said don’t use a human therapist who is worse than it.
  4. As long as their service offering is better than me typing the question into my phone, I’m fine with it.
  5. Indeed. Similarly, I like now having ChatGPT as the absolute lowest bar of service I’m willing to accept. If you aren’t a better therapist, lawyer or programmer than it, then why would I think about paying you?
  6. I agree with your point about the Herculaneum, but my understanding is that research into the form of the Quipus has come far enough to know that they aren't simply either a linear script or merely accounting data.

    For a long time, it was thought that they indeed contained only the latter, but my certainly non-specialist grasp on the matter is that we now know they were used to encode much more than that. In addition to being used directly to verify calculations [0], they contained "histories, laws and ceremonies, and economic accounts" or, as the Spanish testified at the time: "[W]hatever books can tell of histories and laws and ceremonies and accounts of business is all supplied by the quipus so accurately that the result is astonishing" [1]. My---again crude---understanding is that this was done by embedding categories, names and relational data in addition to numbers, signaled not least through texture and color [2].

    I likely come across as if I'm trying to over-inflate the Incan knots, but really it's just to say they appear to sit at a rather fascinating point between the legal-administrative inscriptions whose discovery transformed our understanding of Roman institutions over the last century or so and the straightforward manuscripts of Herculaneum.

    [0] My elementary and probably out-of-date recollection: an emissary would come to towns with Quipus containing work orders, which would be validated with the community on the spot.

    [1] https://www.jstor.org/stable/27087183

    [2] https://read.dukeupress.edu/ethnohistory/article-abstract/65..., https://www.jstor.org/stable/483319

  7. I've looked at the data for some of the Russell Group and, coming from a US perspective, I was rather shocked at how reliant even top UK universities are on tuition. Apart from Oxbridge, they mostly don't have anywhere near the cash flow from endowments or alumni donations that the US Ivy League does.
  8. For this reason, one of the most fascinating historical relics to me is the Incan Quipu [0]. Not only because their logic appears to be 'proto-computational' (at the very least a very complex system for encoding numeric and narrative information through sequences of knots that were also used directly for calculations), but also because, being neither made of a valuable material like gold nor obviously books to be destroyed, a large enough number of them survived to this day. There are few known traces of the past that might contain everything from astronomical calculations to old social-institutional histories.

    They're comparable in that sense to the Herculaneum manuscripts, on which researchers have lately made great progress with deep learning [1]. I hope an equivalent initiative someday starts on the Quipu.

    [0] https://en.wikipedia.org/wiki/Quipu [1] https://www2.cs.uky.edu/dri/herculaneum-papyrus-scrolls/

  9. I bought my dad a Mac laptop when I got my first job out of college and he used it for well over a decade. I later got him a MacBook Air, and yet he kept using the old one for years out of habit… I imagine that’s not an uncommon pattern for non-programmers who aren’t gamers.
  10. The question of origin is still pretty unclear. There seems to be a tension between aspects that are more developmental (if you have mental imagery, for example, you seem to be able to get better or worse at it) and those that are likely genetic (research does suggest a connection between aphantasia and the autism spectrum, etc.).

    As someone said below, I suggested that figuring it out early is best because a lot of things just work differently, especially in learning. There also seems to be a real selection bias: most people who learned they were aphantasic from reading a New Yorker article, say, by definition figured out how to make it work somewhere along the line. Aphantasia isn’t at all a learning disability in a real sense, but you definitely have to approach things differently.

  11. Weak mental imagery and no visual imagery are distinct.

    The connection of aphantasia to strongly deficient autobiographical memory (SDAM) is well-attested now. You can find numerous clinical studies on the matter.

  12. No, certainly not. I was trying to pose a thought experiment that draws your attention to the how of your thinking more than "think of an apple" does. Even if you can't figure out the person's eye color, did you bring to mind a blurred workplace image that just didn't have enough detail in the right place? For an aphantasic, especially if you don't even know this person's name, it's really a sort of experience of an empty thought in a way that thinking about an apple isn't.

    It's hard to write about these things...

  13. The _absence_ of visual imagery is binary: either you cannot see images at all or, to whatever extent, you can. Those who have any mental imagery at all, however, fall on a scale. There are numerous studies of certain real downsides to aphantasia, notably tied to episodic memory, which don't seem to be present in those with merely diminished visual imagery.
  14. Comically, though, programming communities really seem to have a statistical overrepresentation of both aphantasics and hyperphantasics. One of these articles comes out every few years, and I've witnessed at numerous workplaces how quickly a large portion of the engineers realize they're aphantasic while everyone else is aghast that they can't rotate complete architectural diagrams, etc.

    That said, whether you can see images in your head at all really is binary, and there are, in fact, some very real downsides related to episodic memory. As someone who realized I was aphantasic late in life, I think it's pretty important to realize you are if in fact you are---ideally as early in your educational process as possible. For everyone else, it's interesting to realize some people have more vivid imagery than you and some people less, but that probably doesn't change very much about your life.

  15. There is really a fundamental difference, as many studies have now shown---and I can attest to it from personal experience. Honestly, if you have to ask the question, there's a pretty high chance you are: everyone at some level believes that their own inner experience generalizes to the rest of humanity, but it's those with aphantasia who thereby believe that everyone else's descriptions are just a manner of speaking ("they, like me, surely don't really think in pictures").

    I find the typical thought experiment of "picture an apple" less illustrative than something like "picture the face of a co-worker you see every day but aren't friends with and tell me the color of their eyes." In the apple case, everyone has a "concept" of an apple and an experience of "thinking about an apple"---the difference is really in what you can deduce from that thinking and how, if that makes sense. Are you reasoning on the basis of an image or from more or less linguistic facts ("apples are red, therefore..." etc.)?

    The main difference---one that's more than an "implementation detail" of how you think, so to speak, but a real limit---concerns what's called "episodic memory." People with aphantasia rather singularly cannot re-experience the emotions of past experiences. There are a lot of studies on this and I can look up the references if you're interested.

    When I was really trying to make sense of my own aphantasia, I found https://www.hurlburt.faculty.unlv.edu/codebook.html to be one of the most fascinating resources: it's essentially a catalog of all the different modalities of inner experience a large study found. Probably there are critiques of his methodology etc, but regardless it's an invaluable aid for trying to figure out how exactly you think.

  16. Honestly, I wouldn't be surprised if OpenAI has done the math and determined that even releasing frontier-quality open-weight models wouldn't put much of a dent in either their B2B or B2C business. Or, rather, that any such dent would be vastly overshadowed by the value of fending off potential competitors.

    I haven't looked too much into DeepSeek's actual business, but Mistral, at least, seemed to be positioning themselves as a professional-services shop integrating their own open-weight models, compliant with EU regulations, etc., at a huge premium. Any firm that has the SOTA open model could do the same and cannibalize OpenAI's B2B business---perhaps even eventually pivoting into B2C---especially if regulations, downtime or security issues make firms more cloud-skeptical with respect to AI. As long as OpenAI can establish and hold the lead for the best open-weight/on-premise model, it will be hard for anyone else to justify the premium pricing needed to generate sufficient cash flow to train their own models.

    I can even imagine OpenAI eventually deciding that B2C is so much more valuable to them than B2B that it's worth completely sinking the latter market...

  17. Staying in his lane, living his best life—dropping incredible things on humanity every now and then. I had to check, since I hadn’t thought about it for… a decade apparently, but it looks like TAOCP Volume 4B came out a couple of years ago.
  18. I think the big issue with design is the current UX of LLMs: it’s really hard to iterate on a design, see the output, make changes, etc. I’ve had terrible luck getting good design out of ChatGPT/Codex, but V0 is probably one of the most impressive AI UX experiences I’ve encountered — I often show it to non-technical friends who are AI-skeptical.
  19. Yeah, I mean the stock market is made either to pay passive income if you have millions or to slowly accumulate value through compound interest—expecting anything else is just gambling. If you’re living paycheck to paycheck, neither of the first two is particularly helpful even in the medium term (rough numbers sketched below) — and it’s not… entirely irrational to go all in on option (C). I’d be really curious to actually know the scale of how many people became millionaires from crypto — I have no intuition for what order of magnitude it is. Regardless, there’s clearly a growing belief that the world is now full of such moonshots.
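
    To put rough numbers on “not particularly helpful even in the medium term”: the figures below are purely illustrative (an assumed $100/month contribution and an assumed 7% annual return, compounded monthly), not a claim about actual market returns.

      # Illustrative compound-interest arithmetic with assumed inputs:
      # $100/month at an assumed 7% annual return, compounded monthly, for 10 years.
      monthly_contribution = 100.0
      annual_rate = 0.07
      months = 10 * 12

      monthly_rate = annual_rate / 12
      # Future value of a series of end-of-month contributions.
      future_value = monthly_contribution * ((1 + monthly_rate) ** months - 1) / monthly_rate
      contributed = monthly_contribution * months

      print(f"contributed:    ${contributed:,.0f}")                 # ~$12,000
      print(f"ending balance: ${future_value:,.0f}")                # ~$17,300
      print(f"compound gain:  ${future_value - contributed:,.0f}")  # ~$5,300

    Under those assumptions, a decade of compounding adds only about $5k on top of $12k contributed, which is the sense in which it doesn’t move the needle for someone living paycheck to paycheck.
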
  20. I saw an absolutely shocking number of posts from people clearly on minimum wage at best with 20k or so in CS skins, buying loot boxes every week, and no other investments. Obviously no way to verify the accuracy of such statements, but my sense is you would be horrified to know the scale of the market.
  21. Yeah, I was trying to think about that when I wrote my comment. I do think there’s at the very least a change of scale between the proliferation of unregulated “markets” these days (crypto, CS skins, Pokémon cards, and I’m sure others) and the “your Beanie Babies will be worth a fortune some day” of the past. Perhaps what’s more surprising, though, is how consumer behavior leads these markets to actually move up and to the right for remarkably long stretches. My sense is you were always, in fact, delusional to think your Beanie Babies would hold value, but perhaps people are not entirely crazy to look at the charts of skin prices over the past X years and expect the trend to continue. Perhaps I’m being too harsh on the collectors and too charitable to our contemporaries…
  22. I mean, I think they’ve proven over the last century that the single thing they’re good at is protecting the regular payment of dividends (and of course buybacks more recently)… One might not be entirely mistaken to compare expecting much more than that from the modern state to expecting Valve to protect your skin investments.
  23. I waded through some TikTok comment threads on this change and it was so eye-opening: there are a shocking number of people without disposable income who were seriously investing in CS items, thinking of them as a retirement portfolio. I fear the crypto era has led to even further diminished financial literacy at large… Blessed be compound interest and financial regulation.
  24. I used LaTeX for decades and had convinced myself nothing could ever replace it. Just this month, however, I converted to Typst for a large project. Absolutely no regrets: undying respect to the great Knuth, but the experience with Typst is already simply better on almost every axis. I use Tinymist with VS Code and the development experience is terrific. I was modifying templates within a day of picking it up, something that (skill issue, undoubtedly) always gave me nightmares in LaTeX.
  25. Like many other aphantasics below, just my eyelids. It’s ironic because I’ve always had really good (better than 20/20) eyesight, but I can only remember words.

    The dreaming question is really fascinating: it doesn’t seem to be affected in its essence by all the incredibly diverse structures of inner experience. It’s clearly a function of the brain much older than conscious experience [1], and I’ve also read research supporting its necessary role in learning (roughly equivalent to reinforcement learning on synthetic data). There are very rare periods in my life when I’ve remembered my dreams often (which definitely suggests it’s a skill I could refine), but generally I recall one or two a year.

    One of the interesting questions is which properties of inner experience are genetic, which are early developmental, and which are skills one can refine at any point in life. Before I knew I was aphantasic, I had a phase studying chess and tried so hard to “get better” at visualizing games (one of the most frustrating experiences of my life!). Once you know your limitations, you can refine appropriate techniques, like algebraic representations.

    [1] GPT found some terrific papers on this question. In fact, dreaming (as measured by two-phase REM sleep cycles) goes back to vertebrates — and seems to have evolved convergently in insects and cephalopods. Jellyfish appear to be the limiting case, with only a single sleep phase. https://www.nature.com/articles/s41586-023-06203-4 is fascinating.

  26. Same—including the time I dabbled in “experience-altering” compounds when much younger. I always find it so strange that many people, including in this thread, find the presence of language in their inner experience unsettling or “imperfect”; I really wouldn’t trade my inner monologue for anything…
  27. For me, it depends on waking up in a half-dreaming state. Then I’m able to sort of “translate” the dream into language, which I remember, and sometimes from there I can get back to parts of the dream I didn’t think I remembered. It’s still very rare for me and I’ll go years without remembering a single dream; in fact, mentioning this to friends when I was younger was one of the first ways I learned my conscious experience was so different. I imagine getting better at it would be similar to getting better at lucid dreaming.
  28. It’s the same for me and every other aphantasic I’ve spoken with. I go years and years without remembering dreams, but there are distinct periods in my life when I remembered them often. For me it essentially happens when I wake up in a dreaming state and can quickly “translate” the dreams into language. Strange to describe, but I do sometimes have a very distinct experience of déjà vu, which I’ve come to believe is tied to latent dream memories—curious if you have anything like that?

    It actually says something very interesting about the function of dreaming in the brain that this is the case: there is such insane variability in the structure of conscious experience and memory, yet the imagistic quality of dreaming fulfills a necessary role for everyone. I’ve read reputable studies suggesting it’s crucial for learning, something similar to training on synthetic data.

  29. I’m aphantasic with no mental imagery at all, so my inner experience could not be more different: it’s strange to explain, but I experience “unvocalized” language, which means the words are sort of just there without my “hearing” them in my head. I don’t have inner sound at all either, so the words don’t have an accent, for example. My thought moves at a speed much faster than speaking and I can read fast with high comprehension, but it takes me incredible effort to remember the color of someone’s eyes, for example. I more or less skip descriptions in novels and prefer to read philosophy.

    I’ve always found it interesting that in programming communities the two extremes of aphantasic and hyperphantasic seem to both be very overrepresented.

  30. Agreed. Crucially, it doesn’t ask _why_ they want this line of credit and assumes it’s meant to serve as a source of financing on par with capital investment. Yet I think the reason for this credit line is rather straightforward risk management: it is not at all inconceivable that one of the numerous legal proceedings the firm is already entangled in (to say nothing of those surely to come) concludes in a settlement that would be existential without it. If I were an OpenAI investor, I would certainly want a story for how they would handle such a foreseeable emergency. A few other high-growth startups are publicly known to have obtained such a line of credit at a similar stage.
