- Okay, what we're saying is slightly different: you mean reaching a certain bar. I kind of agree with that.
Though the marginal improvement from knowing how the tools work and how to use them effectively is still pretty high, to the point that people who spend time with the tools will be _more_ effective.
- I would argue that for all of these there's still a skill element involved.
If I give an accountant an electronic calculator and a problem to solve, they'll be more efficient than me
If I give someone who spent thousands of hours on a computer a task on it, they'll be able to do more than my parents
If I give someone who writes a lot a ballpoint pen, their writing will be faster and more legible than that of someone like me who barely writes on paper.
- I don't think dropping a random software developer into a random project to do their open source duty would end well
It takes a _lot_ of time for someone to meaningfully contribute to a project, and would just result in maintainers having the overhead of training that many new people on a project
I'd much rather figure out a way to finance those open source projects in a sustainable way where those projects can decide to hire full time employees.
- And they do not link out to anything that even remotely validates this
> Various recently published COBOL articles state the following…
For all we know, those articles could be LLM-written blogspam that just happens to support what the authors, who run a COBOL consulting firm, want us to believe.
- On the other hand, I know a lot of people who spend more time / salary messing around with their infra than the couple hundred bucks they've saved by not pressing a couple of buttons on Vercel / Cloudflare.
There's a time and place for just deploying quickly to a cloud provider versus trying to manage your infra. It's a nuanced tradeoff that rarely has a clear winner.
- I don't know, companies investing in AI with the goal of AGI are now allowing me to effortlessly automate a whole suite of small tasks that weren't feasible before. (After all, I pinged a bot on Slack from my phone to add a field to an API, and got a pull request a couple of minutes later that did exactly that.)
Maybe it's a scam for the people investing in the company with the hopes of getting an infinite return on their investments, but it's been a net positive for humans as a whole.
- Couldn't you extend this line of thinking to any language-specific syntax on any programming language?
Don't use `match`, macros, lifetimes, ... in Rust; someone coming from another language without them might not get what they mean. Instead write the equivalent C-looking code and don't take advantage of anything Rust-specific.
Don't use Lisp; someone coming from another language might not be able to read it.
Etc..
At some point, if you write code and want to be productive, you need to accept that someone who is not familiar with the language you're using _might_ have to look up syntax to understand what's going on.
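To make the `match` example concrete, here's a hedged sketch (illustrative only, not from the original thread): the same trivial logic written once with Rust's `match` and once in the "C-looking" style the argument describes. The function names and the exit-code scenario are invented for the illustration.

```rust
// Rust-specific style: classify an optional exit code with `match`.
fn describe_match(code: Option<i32>) -> &'static str {
    match code {
        Some(0) => "success",
        Some(_) => "failure",
        None => "still running",
    }
}

// The equivalent "C-looking" version that avoids `match` entirely.
fn describe_c_style(code: Option<i32>) -> &'static str {
    if code.is_none() {
        return "still running";
    }
    if code.unwrap() == 0 {
        return "success";
    }
    "failure"
}

fn main() {
    // Both styles compute the same answer for every input.
    for c in [Some(0), Some(1), None] {
        assert_eq!(describe_match(c), describe_c_style(c));
    }
    println!("both styles agree");
}
```

The `match` version is what an unfamiliar reader might have to look up, but it's also the one that the compiler checks for exhaustiveness, which is exactly the trade-off being argued about.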
- In my opinion, tooling like sbt, scalafmt, ... is actually semi-decent, but the LSP is a pain to use.
The only way I'm productive in Scala is in IntelliJ; as soon as I have to even glance at the Metals LSP implementation, my productivity nosedives.
This is compounded by the fact that most good AI editors are based on VS Code, which means I either have to suffer through it or alt-tab to IntelliJ any time I want to edit code.
- But the point here is not that you need to realize you typed something wrong and then cancel (in that case just don't enable the setting if you always want to abort). The point is that you need to decide if the autocorrect suggestion was the right one. Which you can't know until it tells you what it wants to autocorrect to.
- If I follow your reasoning: paper has an obvious positive impact in schools, or we can at least imagine that positive impact, so banning paper would very likely not result in an improvement if studied. And like any smartphone ban, it _should_ be studied rigorously before being implemented.
But let's say they do find that smartphones during class _are_ good and just social media is bad; then it also sounds reasonable to me that a kid's phone might be required to have some type of block on social media apps during class time. Just like it sounds reasonable for a school to ban paper _with porn printed on it_ during class time. There's no issue, besides a practical one, with getting more fine-grained and isolating the impact.
Or do you also oppose the latter? Is your kid printing porn on paper and bringing it to school part of the personal freedom you want control over, and which the school should not have the authority to ban?
- I'm going to go to an extreme, but if we had solid research showing that banning phones at school resulted in some extreme improvement, let's say 50%, in kids' ability to learn, would you support the ban of phones during school time? Would you expect the school to _not_ implement a policy that would benefit learning that much?
What if we swapped this out for "not taking edibles during class", would that infringe on your kids personal freedom too much?
In a world where parents feed their children fast food all the time and let them play mindless iPad games from an early age, I have lower faith in every parent reading the relevant literature and implementing best practices than in academic institutions figuring out how to optimize learning (not that I have a huge amount of faith in that either, just more).
- You just went to lengths that 99% of parents wouldn't even know how to approach. Do you think it should be expected of parents to figure out how to use whatever "PiHole" is to protect their kids?
I admire your personal dedication to making it as hard as possible to be exploited, but we really can't expect non-tech people to go to the same lengths. And at some point, we might have to admit that parents who spend 99% of their time struggling to even get by and do the basics for their kids need schools and other resources to help out by doing things such as banning phones.
- Quite honestly, as long as the UX is _actually_ improving, I'm completely fine with having to adapt. I don't want to live in a world where things stay the same just because it's comfortable.
Having said that, at least 50% of the time that people change the experience, they make it worse. So I agree that for companies that don't know how to design interfaces, this is maybe a benefit.
- I disagree with this. The touchscreen on my phone allows for far more versatile applications than are possible with physical buttons.
I really don't miss the days where applications had to retrofit their controls onto a fixed physical setting.
Sure, maybe for dialling a phone number or texting it was better. But for everything else I do on a phone, give me a touchscreen.
- Ya, even Claude artifacts have replaced scripting for me to some extent. For example, if I need some data transformation: "Create a web app that takes text input and does X, Y, and Z".
It's correct for me 99% of the time, and for the remainder I can trivially ask it to tweak something (especially since for those kinds of one-off tools, I really don't care about the actual UI and styling).
Beats figuring out the right incantation of jq, regex, or whatever other tool I only use every 3 months. And I can trivially go back to that artifact and iterate on it later.
This really sucks for him. Though should Microsoft _not_ lay off specific people due to health conditions? Is that something we require of companies?