- I was an engineer on the Visual Studio team when we first introduced syntax highlighting and code completion. The rollout triggered quite a bit of internal controversy. A sizable group of developers strongly opposed these features—syntax coloring, parameter completion, signature validation—arguing that “real programmers write their code unaided.”
I can’t help but wonder how those same engineers are adapting to the current wave of AI-powered development tools like Claude Code and Cursor.
- Ah... we found the person who thinks they can pass judgement on how people choose to live their lives. I didn't say that my friend doesn't love his job (he does) - I said that he'll probably die before retiring.
Stephen Hawking, Einstein, Marie Curie, and Linus Pauling never retired. Did they not "truly live"?
- > I'm a person that wants to learn anything and everything.
That is exactly what I do now. Every question I've ever had, I now have the time to devote to answering. I take classes, I volunteer, I mentor Comp. Sci. students. But, more than anything, I still write code. I spent the last few months building an LLM from scratch, which was incredibly fun.
That said, I have a friend who will probably work until he dies. His only real interest in life is his job. I'm not suggesting that is a bad thing; it's more to the point that "retirement" isn't a panacea for everyone.
- I'm in my 50s. About two months into retirement I fell into the deepest depression of my life because I couldn't shake the "who am I without my job?" question. It took almost a year (and therapy) to accept that I still add value without working.
- Agree... but that is exactly what MVPs are. Humans have been shipping MVPs while calling them production-ready for decades.
- I really wonder what this means for software moving forward. In the last few months I've used Claude Code to build personalized versions of Superwhisper (voice-to-text), CleanShot X (screenshot and image markup), and TextSniper (image to text). The only cost was some time and my $20/month subscription.
- I've been using git worktrees with Claude and it's pretty awesome:
https://www.youtube.com/watch?v=up91rbPEdVc
Pair worktrees with the ralph-wiggum plugin and I can have Claude work for hours without needing any input:
https://looking4offswitch.github.io/blog/2026/01/04/ralph-wi...
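The basic pattern is simple enough to script. Here's a minimal sketch of the worktree half (the branch names and paths are placeholders of mine, and it assumes you're inside a git repo with the claude CLI on your PATH):

```python
# Minimal sketch: one git worktree per task so parallel Claude Code
# sessions each get an isolated checkout. Branch names are hypothetical.
import subprocess

tasks = ["fix-auth-bug", "add-dark-mode"]

for task in tasks:
    # Create ../wt-<task> checked out on a new branch named <task>.
    subprocess.run(
        ["git", "worktree", "add", f"../wt-{task}", "-b", task],
        check=True,
    )
    # Then start an independent Claude Code session in each worktree:
    #   cd ../wt-fix-auth-bug && claude
```

Each session works on its own branch, so nothing collides until you're ready to merge.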
- This is fantastic. I’m currently building a combustion engine simulator doing exactly what you did. In fact, I found a number of research papers, had Claude implement the included algorithms, and then incorporated them into the project.
What I have now is similar to https://youtu.be/nXrEX6j-Mws?si=XdPA48jymWcapQ-8 but I haven’t implemented a cohesive UI yet.
- If the goal is to learn how to solve a Rubik's Cube when you've never seen a Rubik's Cube before, you have no idea what "halfway solved" even looks like.
This is precisely how RL worked for learning Atari games: you don't start with the game halfway solved and then claim the AI solved the end-to-end problem on its own.
The goal in these scenarios is for the machine to solve the problem with no prior information.
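To make the "no prior information" point concrete, here's a toy tabular Q-learning sketch (an illustration, not the actual Atari DQN): the value table starts at zero everywhere, so nothing resembling a "halfway solved" state is encoded up front.

```python
# Toy Q-learning sketch: Q starts at 0.0 for every (state, action)
# pair, i.e. the agent begins with zero knowledge of the task.
from collections import defaultdict

Q = defaultdict(float)        # all values default to 0.0
alpha, gamma = 0.1, 0.99      # learning rate and discount factor

def q_update(state, action, reward, next_state, actions):
    # Standard one-step Q-learning backup; every bit of task knowledge
    # the agent ever has is accumulated through rewards like this one.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```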
- My brother left school after ninth grade and struggles financially — he can’t afford basic health insurance, yet he’ll spend $100 on lottery tickets whenever possible.
I understand the utility he’s purchasing: a temporary sense of hope. What concerns me is the implicit misunderstanding of probability. The difference in the odds of winning between buying one ticket and fifty is negligible in absolute terms. This isn’t about elitism; it’s simply about recognizing orders of magnitude and the arithmetic reality of vanishingly small odds.
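The arithmetic is easy to sanity-check. A quick sketch, assuming Powerball-style jackpot odds of roughly 1 in 292 million (my assumption, not the actual game he plays):

```python
# Compare the odds of winning with 1 ticket vs. 50 independent tickets,
# assuming jackpot odds of ~1 in 292 million (Powerball-style).
p = 1 / 292_000_000

p_one = p                         # one ticket
p_fifty = 1 - (1 - p) ** 50       # fifty independent tickets

print(f"1 ticket:   {p_one:.12f}")    # ~0.000000003425
print(f"50 tickets: {p_fifty:.12f}")  # ~0.000000171233
```

Fifty times a number that rounds to zero is still a number that rounds to zero.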
- Was anything he claimed in the article incorrect? Personally, I enjoy these types of historical stories.
- If the goal is to achieve end-to-end learning, that would be cheating.
If you sat down to solve a problem you’ve never seen before, you wouldn’t even know what a valid “later state” looks like.
- I define art as something that evokes an emotion or feeling. I’ve seen people wax poetic about the “meaning” of an image, only to find out that the image was created synthetically.
Were those “feelings” not authentic?
- Isn’t that just an “I can’t regulate my media intake” problem?
My wife and I watched the entire series over a year and loved every minute.
- I've been using 22.04 for about six months (AI development and some Steam games) - I really enjoy it. The 24.04 upgrade was flawless.
It may sound a little odd, but I'd describe my time with Pop!_OS as "quiet". It feels good to be in total control again. I don't have to constantly disable things, and there isn't a Copilot icon on my dock that comes back from the dead every few days.
Obsidian, 1Password, VS Code, Warp, etc. all work without issue.
- Excuse me? Algorithms are invented. The transformer architecture was invented.
The transformer was not some aspect of nature that was waiting to be discovered. The human mind created it.
- There are human beings who believe absolutely insane and easily disprovable things. Even in the face of facts they continue to remain willfully ignorant.
Humans can convince themselves of almost anything. So I don’t understand your point.
- So, if we find a way to ensure that transformers balk at hallucinating will you then say that they “understand” what they’re saying?
Because that’s what your comment indicates.
- I’ve worked with many neuroscience researchers during my career. At a minimum, I’m extremely well-read on the subject of cognition.
I am not going to lie or hide my experience. The world is a fucked up place because we no longer respect “authority”. I helped build one of these systems; my opinion is as valid as yours.
Yours is the standard, meaningless response that adds zero technical insight. Let’s talk about supportive tracing or the optimization of KV values during pre-training, and how those factors impact the apparent “understanding” of the resulting model.
- We had no idea that there would be emergent properties. AlexNet was not designed for language translation and yet it worked.
If we go with your joke, we’d never build or create anything.
- > Where the humans aren't omniscient it fills the blanks with nonsense
As do most humans. People lie. People make things up to look smart. People fervently believe things that are easily disproved. Some people are willfully ignorant, anti-science, anti-education, etc.
The problem isn't the transformer architecture... it is the humans who advertise capabilities that are not there yet.
- > as someone who is a sociopath completely devoid of ethics
Ah yes... the hundred thousand researchers and engineers who work at MS are all evil. Many people who've made truly significant contributions to AI have worked either directly (through MS Research) or indirectly (OpenAI, Anthropic, etc.) at MS. ResNet and concepts like Differential Privacy were invented there.
What about the researchers at Stanford, Carnegie Mellon, and MIT who receive funding from companies like MS? Are they all evil sociopaths, too? Geoff Hinton's early research was funded by Microsoft, btw.
I originally joined MS in the early 90s (then retired) and came back to help build Copilot. The tech was fantastic to work with, we had an amazing team, and I am proud of what we accomplished.
You seem to be confusing the people who invent technology with the assholes who use it for evil. There is nothing evil about the transformer. Humans are the problem.
- As someone who was an engineer on the original Copilot team, yes I understand how tech works.
You don’t know how your own mind “understands” something. No one on the planet can even describe how human understanding works.
Yes, LLMs are vast statistical engines but that doesn’t mean something interesting isn’t going on.
At this point I’d argue that humans “hallucinate” and/or provide wrong answers far more often than SOTA LLMs.
I expect to see responses like yours on Reddit, not HN.
- I was an engineer on the Visual Studio team at Microsoft from 2002-2009. If you had told me it would eventually run in a web browser, I would have thought you were crazy. Back then we had to work closely with the core Windows team (memory optimizations, etc.) just so that it could run natively on a semi-beefy PC. I'm pumped that the team has been able to pull this off!
- It's 2018 and we still can't get a decent video card in a $3k-$4k+ Apple laptop? The new MBP will still be horrible at delivering a high-ish-end gaming experience, and I'll continue to be forced to perform all of my GPU-based machine/deep learning work on a separate machine (or eGPU).
- I'm a software developer too. I've held senior engineering roles at Microsoft, Apple, and Intel (in that order). I grew up loving Windows, but soon after I got really deep into development, that love vanished.
Anything I do today (everything from Intel microcode and x86 assembly to C/C++) happens on macOS, simply because I can do every single thing I need to in one place. Most devs I bump into who really hate macOS have no idea it is really just (open source) BSD with Apple's oddly unique visual facade. There is literally nothing I can't do on my Mac, even when I need to run Visual Studio, VTune, etc. I also find it amusing when devs tell me that macOS isn't as customizable as Windows. Sure... sure... ;-)
Also, VS (codename Boston) was used as the de facto internal development IDE for a few years before we released it to the public. There were also arguments about shipping those types of features publicly.