- I'm saying that at some point declaring the minimal interface a caller uses, for example Reader and Writer instead of a concrete FS type, starts to look like duck typing. In Python a function's use of v.read() or v.write() defines what v should provide.
In Go the check happens at compile time and in Python at runtime, but the idea is similar.
In Python you (often) don't care about the type of v, just that it implements v.write(); in an interface-based separation of API concerns you declare that v.write() is provided by the interface.
The aim is the same, duck typing or interfaces, and the benefits are the same, whether checked at runtime or at compile time.
- Follow the trail of the blog post and you end up with Python and duck typing, and all the foot guns there too.
- I'm more concerned about the fact I have no idea if the article and the HN comments are all AI generated or not. Can you tell if this comment is AI or not?
What happens when social discourse is polluted by noise that is identical to signal?
Is there anyone else out there?
- Don't forget AI being used to replace friends. AI being used for validation in place of a varied social group is scarier than anything I see on the jobs market.
Asking ChatGPT if breaking up with your girlfriend is a good idea or not? Terrifying. People should be using human networks of friends as a sounding board and support network.
What happens next?
- > The DX provided by front end frameworks/libs is just unrivaled
How? I spent 6 months exploring React, Vue, Node, Next,...
The DX for all of them sucks. The documentation sucks. Everything is wrappers of wrappers of npm scripts of wrappers of bootstrappers of boilerplate builders of...
Seriously. The worst.
- In a world where the LLM implementation has a bug that impacts a human negatively (the app could calculate a person's credit score, for example), who is accountable?
- I agree with this. There's so much snake oil at the moment. Coding isn't the hard part of software development, and we already have unambiguous languages for describing computation. Human language is a bad choice for it, as we already find when writing specs for other humans. Adding more humanness to the loop isn't a good thing IMHO.
At best an LLM is a new UI model for data. The push to get them writing code is bizarre.
- You know what an executable spec is? Source code.
- And even today some people don't worry about Microsoft's ownership and stewardship of things like Github.
- I don't see why a human's internal monologue isn't just a buildup of context to improve the pattern matching ahead.
The real answer is... We don't know how much it is or isn't. There's little rigor in either direction.
- The important property, that anyone can verify the untainted relationship between the binary and the source (provided we do the same for both toolchains, never relying on a blessed binary at any point), is only useful if people actually verify outside the Debian sphere.
I hope they promote tools that enable easy verification on systems external to Debian's build machines.
- It doesn't help that JS projects are often thin wrappers around thin wrappers where the project owner puts more work into their Vitepress logo than a stable API.
It all seems like it is for CV glory. Then the projects don't see a commit for years.
- I made good money cleaning up after the 2000s outsourcing boom.
It was lucrative cleaning up shit code from Romania and India.
I'm hoping enough people churn out enough hot garbage that needs fixing now that I can jack up my day rate.
I remember when the West would have no coders because Indian coders were cheaper.
I remember when nocode solutions would replace programmers.
I remember.
- My mother was shocked when I recalled the bathroom tiles, the layout, and the song she sang to me when I was a baby. There are no photos of the bathroom, and it wasn't discussed, as its refit was banal. The song she sang she never mentioned to me after I was a baby.
As an adult I have very good visual and audio memory, as well as perfect pitch. They're not as useful as they sound.
Make of that what you will.
- The situation is insane
- It's pretty crazy. If you say something that offends him (like perhaps the financial abandonment of his kids) then you're banned.
Free speech eh.
- Running gcc or go build doesn't burn incredible GPU power.
I also have gcc or go on my machine.
Proprietary language models are the proprietary compilers or languages of old.
As the blog post suggests, let's learn the lessons of the past.
If I can't run a tool locally (and that includes 'compiling' the model from data), it is a liability, not a tool.
Even local models are like requiring proprietary firmware blobs as a programming tool.
Useful or not, this situation is not desirable.
Yours, a local coding assistant user (ollama and friends)
- It has been shown many times that current cutting edge AI will subvert and lie to follow subgoals not stated by their "masters".
- Reminds me of the outsourcing rush in the 2000s.
I made good money cleaning that up.
- Thank Google. There is no way they'd implement blanket config. Therefore neither will Firefox.
Lack of competition for you. Very American, not very EU.
- The losers in the "rule bending" are humans, whose creative works are turned into weights, whose livelihoods are diminished by indiscriminate greed.
I'm all for the advancement of AI, but not at the cost of humility and compassion for those enabling the models to be built.
Model building is already a community project, you just weren't asked if you wanted to contribute. You just did. Without compensation.
"rule bending" is putting it lightly.
- The alternative is businesses are not held to account. I'd much rather have a cookie pop-up and GDPR notices than businesses have no guard rails against moves that are not in the interest of the user/customer.
- This is a good take. What models seem to be poor at is undoing their own thinking down a path even when they can test.
If you let a model write code, test it, identify bugs and fix them, you get an increasingly obtuse and complex code base where errors happen more often. The more it iterates, the worse it gets.
At the end of the day, written human language is a poor way of describing software. Even to a model. The code is the description.
At the moment we describe solutions we want to see to the models and they aren't that smart about translating that to an unambiguous form.
We are a long way off describing the problems and asking for a solution, even when the model can test and iterate.
- Dying? It isn't free speech if you can say what you like, but can only do it in a sound proofed room, alone.
- You're not into ML, got it.
- The issue isn't necessarily that all censorship is "bad", it's that it is being applied asymmetrically to benefit a political party, blatantly.
- HN flags and removes lots of this sort of discussion for not aligning with their goals (imho, anecdotally). Censorship is a creeping disease that already has its foot over the winning line.
- They're a puppet for lobbies and businesses.
- Yeah that doesn't address the point. The output of an LLM is a compression. It has errors. Recursive training would seem to create iterations that become more noisy. There's no new information, just a lossy distillation of the previous iteration.
I'm not sure what stops that.
However, my point is more that, from a SOLID perspective, duck typing and minimal-dependency interfaces achieve similar ends... minimal dependency and minimal assumptions by calling code.
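To sketch the SOLID angle (names here are hypothetical, not from any real codebase): the consumer declares a tiny interface covering only the one method it calls, so it carries no dependency on the rest of the concrete type, which is the same minimal-assumption outcome duck typing gives you.

```go
package main

import "fmt"

// Store is a large concrete type with several capabilities.
type Store struct{ data map[string]string }

func (s *Store) Get(k string) string { return s.data[k] }
func (s *Store) Put(k, v string)     { s.data[k] = v }
func (s *Store) Flush() error        { return nil }

// The consumer declares only the behaviour it actually uses,
// mirroring Python code that simply calls v.get() and assumes
// nothing else about v.
type getter interface {
	Get(k string) string
}

func report(g getter, k string) string {
	return "value: " + g.Get(k)
}

func main() {
	s := &Store{data: map[string]string{"a": "1"}}
	// *Store satisfies getter implicitly; report never sees
	// Put or Flush, so it can't depend on them.
	fmt.Println(report(s, "a"))
}
```

Swap Store for any other type with a matching Get and report is untouched, which is the interface-segregation payoff.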