- Because the concept of skills is not tied to code development :) Of course if that's what you're talking about, you are already very close to the "interface" that skills are presented in, and they are obvious (and perhaps not so useful)
But think of your dad or grandma using a generic agent, simply selecting which skills they want available to it. Don't even think of it as a chat interface. This is just some option that they set in their phone assistant app. Or, rather, it may be that they actually selected "Determine the best skills based on context", and the assistant has "skill packs" which it periodically determines it needs to enable based on key moments in the conversation or latest interactions.
These are all workarounds for the problems of learning, memory...and, ultimately, limited context. But they for sure will be extremely useful.
- I'm so excited for the future, because _clearly_ our technology has loads to improve. Even if new models don't come out, the tooling we build upon them, and the way we use them, is sure to improve.
One particular way I can imagine this is with some sort of "multipass makeshift attention system" built on top of the mechanisms we have today. I think for sure we can store the available skills in one place and look only at the last part of the query, asking the model: "Given this small, self-contained bit of the conversation, do you think any of these skills is a prime candidate to be used?" or "Do you need a little bit more context to make that decision?". We then pass along that model's final answer as a suggestion to the actual model creating the answer. There is a delicate balance between "leading the model on" with imperfect information (because we cut the context) and actually "focusing it" on the task at hand and on skill selection. Well, and, of course, there's the issue of time and cost.
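A minimal sketch of that first pass, assuming a hypothetical `ask_model` helper that sends a prompt to a small, fast model and returns its text reply (the skill names here are made up for illustration):

```python
# Toy skill list -- in practice this would come from the user's "skill packs".
SKILLS = ["calendar", "unit-conversion", "code-review"]

def suggest_skill(conversation_tail, ask_model):
    """First pass: a small model sees only the tail of the conversation
    plus the skill list, and either picks a skill or declines to decide."""
    prompt = (
        "Given this small, self-contained bit of the conversation:\n"
        f"{conversation_tail}\n"
        f"Available skills: {', '.join(SKILLS)}\n"
        "Reply with exactly one skill name, or MORE_CONTEXT if you cannot decide."
    )
    answer = ask_model(prompt).strip()
    # Only forward a suggestion if the reply is actually a known skill.
    return answer if answer in SKILLS else None

# The suggestion (if any) is then prepended to the big model's prompt as a
# hint, not a command -- the "focusing" versus "leading on" balance above.
```
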
I actually believe we will see several solutions make use of techniques such as this, where some model determines what the "big context" model should be focusing on as part of its larger context (in which it may get lost).
In many ways, this is similar to what modern agents already do. Cursor doesn't keep files in the context: it constantly re-reads only the parts it believes are important. But I think it might be useful to keep the files in the context (so we don't make an egregious mistake) while also identifying which parts of the context matter most and re-feeding them to the model, or highlighting them somehow.
- You mean it hasn’t been that way for the last 14 years and it hasn’t survived 5 different computer changes and a dozen or so OS migrations and you don’t still have a tiny document with “fun business ideas” to start that company with your fresh out of college gang right next to that “how to organize a great 10 year reunion” sheet from 3 years ago?
Must be a me thing, then.
- While I was writing the reply, I considered giving the absolutely exact same explanation, which I agree with, down to the example you gave.
Two examples in Portuguese that always trip me up now, and absolutely never used to, are (i) "voz" (voice) and "vós" (you); and (ii) "trás" (back) and "traz" (bring).
I also do a lot more code-switching.
- I'm 32, but I noticed I started making similar mistakes around 28 or so. Occasionally I write out words which are completely wrong, but sound similar.
It's as if one part of the brain is doing the thinking, and another one is "listening" to it and transcribing/typing it out, making mistakes.
For a little while I was a bit worried, but I then realized nothing else had changed, so I've just gotten used to it and like to jokingly say "I've become so fast at thinking that even I can't keep up!"
- > Then somebody funds an RCE in server components.
I'd say they found it, but I love the conspiracy theory :D :D :D
- I feel you. It's definitely a tradeoff. SPAs do tend to be buggier but I can't deny that, when done right, they also tend to be better.
Unfortunately, there's _more_ people, building _more_ stuff, so there's _more_ terrible stuff out there. The amount of new apps (and new developers, especially ones with quite limited skills) is immense compared to something like ten years ago. This means that there's just more room for things to be poorly-built.
- I think the logic can be applied to humans as well as AI:
Sure, the AI _can_ code integrations, but it now has to maintain them, and might be tempted to modify them when it doesn't need to (leaky abstractions), adding cognitive load (in LLM parlance: "context pollution") and leading to worse results.
Batteries-included = AI and humans write less code, get more "headspace"/"free context" to focus on what "really matters".
As a very very heavy LLM user, I also notice that projects tend to be much easier for LLMs (and humans alike) to work on when they use opinionated well-established frameworks.
Nonetheless, I'm positive in a couple of years we'll have found a way for LLMs to be equally good, if not better, with other frameworks. I think we'll find mechanisms to have LLMs learn libraries and projects on the fly much better. I can imagine crazy scenarios where LLMs train smaller LLMs on project parts or libraries so they don't get context pollution but also don't need a full-retraining (or incredibly pricey inference). I can also think of a system in line with Anthropic's view of skills, where LLMs very intelligently switch their knowledge on or off. The technology isn't there yet, but we're moving FAST!
Love this era!!
- - Strict team separation (frontend versus backend)
- Moving all state management out of the backend and onto the frontend, into a supposedly easier-to-manage system
- Page refreshes are indeed jarring to users and more prone to leading to sudden context losses
- Desktop applications did not behave like web apps: they are "SPA"s in their own sense, without jarring refreshes or code that gets "yanked" out of execution. Since the OS has been increasingly abstracted under the browser, and the average computer user has moved more and more towards web apps[1], it stands to reason that the behavior of web apps should become more like that of desktop apps (i.e. "SPA"s)[2]
(Not saying I agree with these, merely pointing them out)
[1] These things are not entirely independent. It can be argued that the same powers that be (big corps) that pushed SPAs onto users are also pushing the "browser as OS" concept.
[2] I know you can get desktop-like behavior from non-SPAs, but it is definitely not as easy to do it or at least to _learn it_ now.
My actual opinion: I think it's a little bit of everything, with a big part of it coming from the fact that the web was the easiest way to build something that you could share with people effortlessly. Sharing desktop apps wasn't particularly easy (different targets, Java never truly ran everywhere, etc.), but to share a web app you just put it online very quickly and have someone else point their browser to a URL -- often all they'll do is click a link! And in general it is definitely easier to build an SPA (from the frontender's perspective) than something else.
This creates a chain:
If I can create and share easily
-> I am motivated to do things easily
-> I learn the specific technology that is easiest
-> the market is flooded with people who know this technology better than everything else
-> the market must now hire from this pool to get the cheapest workers (or those who cost less to acquire due to quicker hiring processes)
-> new devs know that they need to learn this technology to get hired
-> the cycle continues
So, TL;DR: Much lower barrier to entry + quick feedback loops
P.S. (and on topic): I am an extremely satisfied Django developer, and very very very rarely touch frontend. Django is A-M-A-Z-I-N-G.
- 100% agree. This is exactly the kind of big-picture thinking that so many people often seem to miss. I did too, when I was young and thought the world was just filled with black-and-white, good-vs-evil dichotomies.
- I very unironically think everyone should watch The Good Place to get an initial feel for ethics.
- As a (rather obsessive, perhaps compulsive) poet, I will indeed remain in my camp :)
(I do get what you're saying. Yet, I am not convinced that processes such as tokenization, and the inherent discretization it entails, are incompatible with creativity. We barely understand ourselves, and even then we know that we do discretize several things in our own processes, so it's really hard for me to just believe that tokenization inherently means no creativity).
- It is a tale as old as time, and one which no doubt all of us repeat at some point in our lives. There are hundreds of clichéd books, hundreds of songs, and thousand of letters that echo this sentiment.
To be privileged is to live at the quiet centre of a never-ending cycle: between taking a freedom for granted (only to eventually lose it), and fighting for that freedom, which we by then so desperately need.
And as Thomas Paine put it: "Those who expect to reap the blessings of freedom, must, like men, undergo the fatigues of supporting it."
- Utilitarianism is a bitch, huh?
(I make this remark merely as a gag. I think you pinpoint an issue which has been unresolved for ages and is knee-deep into ethics. One could argue that many of our disagreements about AI and progress (in a broader sense) stem from different positions on ethics, including utilitarianism).
- I disagree with your view on creativity, and indeed find LLMs to be remarkably creative. Or perhaps, to sidestep the anthropomorphization issue, I find that LLMs can be used as tools to produce creative works, greatly amplifying one's creativity to the point that a "near 0 creativity" subject (whatever that is) can create works that others will view as profoundly creative.
In truth, I don't think there's likely to be a correct definition of "creativity". We're all just moving each other's goalposts and looking at different subjective aspects of our experience.
What I do know is that I have talked to dozens of people, and have friends who have talked to hundreds. When shown what LLMs and generative models can do, there's a significant portion of them who label this work as creative. Rather than deem these people ignorant, I would rather consider that if enough people see creativity somewhere, perhaps there is something to it: LLMs creating poems and stories, connecting concepts you wouldn't otherwise see connected, birthing ideas in images and videos that had never existed before.
Of course, I'm aware that by this logic many things fall apart (for example, one might be tempted to believe in god because of it). Nonetheless, on this issue, I am deeply in the creativity camp. And while I am there, the things I get LLMs to create (and the things people I know create with them) will continue to induce deep emotion and connection among ourselves.
If that's not creativity to some, well, that's life, full of creative ways to define creativity or eschew it from the world.
- 100% this.
Just don't expect to get decent code often if you mostly rely on something like Cursor's default model.
You literally get what you pay for.
- Maybe it's a matter of the websites I use and my specific usage patterns.
I've used many browsers throughout the years: Chrome, Safari, Firefox, Arc, Zen, Orion. For many years I ran safari because it was so energy-efficient and the integration was absolutely great. I would LOVE to get back to safari!...
For my usage patterns, though, Safari is noticeably slower and much more sluggish. I can't really put it any other way.
Things that are pretty terrible for me in Safari: YouTube, Google Docs, and the GitHub diff viewer, just to name a few. Safari was also noticeably terrible on pages that do HTML animations via JS and not CSS (they shouldn't do it, but they do, and I can instantly tell on Safari).
I will add that although I did have Safari as my main browser several years ago, it was never for its speed. It felt "OK" in terms of speed (a bit slower, but not too noticeable back then), but it felt AMAZING in terms of battery life and OS integration.
- I've tried Orion a couple of times. I even used it as my default browser for ~3 months about two years ago. Most recently, I tried to use it again about 2 months ago, but it still had loads of bugs and, most of all, was painfully slow.
The truth is, Orion being based off of WebKit comes with the obvious limitation that...it's based off of WebKit! So much slower than Chrome or Firefox, and plagued with decisions that are just not to my taste. For example, just the way it behaves when I hit the back button (or, rather, when I swipe back) feels incredibly sluggish. Loading is often terrible, with constant repaints of the screen as well. A bunch of websites don't work properly either.
The only true reason why I wanted Orion to work was because I wanted a browser that would be good for my battery life and "optimized for the mac". But, since then, I've realized I don't really use the battery that much (or that I don't notice it being a problem), and that, whatever "optimized for the mac" means, it definitely isn't speed.
After Arc went around and poo-pooed on its users, I migrated to Zen (I did try Orion again, like I mentioned). Zen is also filled with bugs, but at least I don't want to throw my computer out the window because of it being slow.
- Oh, sure!
1. The Book of Disquiet by Bernardo Soares/Fernando Pessoa ("Livro do Desassossego" in the original Portuguese)
2. 1984 by George Orwell
3. The Unbearable Lightness of Being by Milan Kundera
4. The Design of Everyday Things by Don Norman
5. (As mentioned) The Picture of Dorian Gray by Oscar Wilde
It doesn't make much sense, but between 1. and 2. I'd put all of the currently known poetry and prose by Álvaro de Campos, Alberto Caeiro, and Ricardo Reis (in that order). I have a 3-book collection with this.
Gemini 3 has this extremely annoying habit of bleeding its reasoning process into comments which are hard to read and not very human-like (they're not "reasoning", they're "questioning for the sake of questioning", which I get as part of the process, but not as a comment in the code!). I've seen it do things like these many times:
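A made-up illustration of the pattern (hypothetical comments in the style described, not verbatim model output):

```python
def normalize(values):
    # Wait, should this be min-max scaling or z-scores? The user said min-max.
    # Actually, let me check whether the list can be empty first.
    if not values:  # Hmm, is returning [] the right call here? Probably.
        return []
    lo, hi = min(values), max(values)
    # But what if lo == hi? Then we'd divide by zero. Guard it.
    if lo == hi:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

The code itself is fine; it's the self-interrogation dumped into the comments that shouldn't be there.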
Beyond that, it just does not produce what I consider good Python code. I daily-drove Gemini 2.5 before I realized how good Anthropic's models were (or perhaps before they punched back after 2.5?) and haven't been able to go back.

As for GPT 5.2, I just feel like it doesn't really follow my instructions or way of thinking. Like it's dead set on following whatever best practices it has learned, and if I disagree with them, well, tough luck. Plus, and I have no better way of saying this, it's just rude and cold, and I hate it for it.