
cleandreams
Joined 951 karma

  1. For the first time I have begun to doubt Microsoft's chosen course. (I am a retired MS principal engineer.) Their integration of Copilot shows all the taste and good tradeoff choices of Teams, but to far greater consequence. Copilot is irritating. MS's dependence on OpenAI may well become dicey, because that company is going to be more impacted by the popping of the AI bubble than any other large player. I've read that MS can "simply" replace ChatGPT by rolling their own -- maybe they can. I wouldn't bet the company on it. Is Google going to be eager to license Gemini? Why would they?
  2. This is great. Thanks for sharing. Should be a book someday.
  3. I'm amazed no one brings up the obvious: the need for reactions that are reliable at high speed. There is no way I will trust my tuckus to a freeway-driving Waymo for a couple of years.
  4. The judge IIRC found that training models using copyrighted materials was fair use. I disagree. Furthermore this will be a problem for anyone who generates text for a living. Eventually LLMs will undercut web, news, and book publishing because LLMs capture the value and don't pay for it. The ecosystem will be harmed.

    The only problem the judge found here was training on pirated texts.

  5. I think the universities made a mistake in becoming too culturally left wing. The faculty (and students) in the humanities in particular are far to the left of the political mainstream. (I am on the left though more focused on labor rights and good jobs than identity politics.)

    The attack on the universities is fueled by this divergence, now that the right is firmly in power. This will just hurt the country in the long run. There was so much groupthink and silencing happening on the left over the last decade. It seems now to have been self-destructive.

  6. Less than 10 years ago but not recent.
  7. My base salary was fine but the magic was in the stock.

    I got a payout on acquisition by a FAANG+ (as first employee). It was only 300K but I put 50K of that into Nvidia. Actually I invested all my payout from my startup stock into tech stocks. And I got a terrific golden handcuffs deal.

    After that I could afford to retire and I did.

  8. At first it seemed LLMs were a perfect assist for coding because they are trained on text and generate text. But code isn't typical text. It's basically a machine that requires a very high degree of precision and accuracy. Seen this way, LLMs are suited for coding only at specific stages -- to generate something like boilerplate, to brainstorm, to evaluate diverse approaches, to identify missing tests. Anything that ties LLMs to actual code implementation is asking for trouble in my view.
  9. The weird thing about AI is that it doesn't learn over time but just in context. It doesn't get better the way a 12 year old learning to play the saxophone gets better.

    But using it heavily has a corollary effect: engineers learn less as a result of their dependence on it.

    Less learning all around equals enshittification. Really not looking forward to this.

  10. A paper to make the teachers I know weep.
  11. This type of thing really incentivizes founding a startup. If you are a very senior developer, who needs the corporate stupid factory? You can do a lot of work with half the people and work for yourself.
  12. A new study urges a move away from metaphysical debates over consciousness and instead toward a systems-theoretical perspective that redefines AI as a structurally unique communicative system.
  13. I am going through Math Academy and I like it very much. I have done advanced technical work in my field but my math background had weaknesses from my public schooling in a large urban area and some experimental math instruction in high school. The ability to do it over is oddly exhilarating.
  14. Why the hate? This is good. And very funny too.

    It's not just professionalism. It's the challenge of removing irritation from one's communications, because irritation generally doesn't get the best cooperation.

  15. Sounds about right. The startup I worked for (acquired by a FANG) turned over the whole code base, for example.
  16. My current solution is the Freedom app. I have all social media blocked during work hours and after 10:30 at night. I am mostly susceptible to Reddit, Twitter, and Instagram Reels. I track some issues on Reddit and Twitter that I am genuinely interested in and impacted by. Freedom will block on both the phone and the laptop.

    Last time this didn't work because I kept turning Freedom off. (Sigh.) This time I seem to be holding the line though. I'm getting more done and feel better.

  17. I think this pardon just reflects Trump's transactional politics. Ulbricht has sympathizers in high places now because crypto is all over this administration.

    In the long run letting political influence trump (no pun intended) the criminal justice system is a very bad thing.

    By world standards our criminal justice system is a strength of the country. A pity if we lose that.

  18. There are strong signals that continuing to scale up in data is not yielding the same reward (Moore's Law anyone?) and it's harder to get quality data to train on anyway.

    Business Insider had a good article recently on the customer reception to Copilot (underwhelming: https://archive.fo/wzuA9). For all the reasons we are familiar with.

    My view: LLMs are not getting us to AGI. Their fundamental issues (black box + hallucinations) won't be fixed until there are advances in technology, probably taking us in a different direction.

    I think it's a good tool for stuff like generating calls into an unfamiliar API - a few lines of code that can be rigorously checked - and that is a real productivity enhancement. But more than that is thin ice indeed. It will be absolutely treacherous if used extensively for big projects.

    Oddly, I think it will be a more useful tool for free-flowing, brainstorm-like associations than for the tasks we are accustomed to using computers for, which require extreme precision and accuracy.

    I was an engineer in an AI startup, later acquired.

  19. I once worked for a manager who had 5 highly skilled AI engineers quit in two years. Somehow I thought I would not be impacted; I just wasn't used to working for dysfunctional personalities. He did stab me in the back when I brought in (as tech lead) a complex project maybe 5 months late. He had managed an earlier iteration and it was over 2 years late. I got a lot of blame in my immediate management chain, but outside that the project was seen as critical and important. So weird. The other thing he did, my god how petty, was to refuse to approve a development environment for me. I used the freebie and had to reauthorize every 2 hours. Believe it or not, I now think this was because I was so much better a coder than him that I scared him. I never had to deal with dynamics like this before. I was an innocent.
  20. I'm a poet (published, etc.) and this was a treat to read. For me the parallel between coding and writing poetry is the iterative quality. As I work on my code I simplify, tighten, and share code. I use expressive names, etc. I will think, "This is it!" and then discover the need for another iteration. With poetry it is the same.
  21. They also get the most revenue and users.
  22. I describe it as "appetite calming." Not sure how else to put it. I used to have a low grade constant craving for food. I wasn't obese but the weight had crept up. Losing weight was impossible. I had high blood sugar, blood pressure, and cholesterol. (Now on meds for all those things and they are under control.)

    Immediately after going on metformin my appetite calmed. Slowly, over the next year, I lost 25 pounds. I did not try to diet. My BMI is now 23.7.

    After a few months I started weight training. I changed my diet to lower carbs. I haven't lowered my HbA1c to a totally comfortable level (it is 5.6, and 5.7 and above is prediabetic). But I am better.

    But before any of those changes, just the metformin alone calmed my appetite. I can go hours without thinking about food. I am just less concerned. I feel normal.

    Just as a data point, the last time I got Covid, it was quite mild. It's at least possible that the better metabolic health and weight loss had an impact.

  23. I'm retired and on metformin. I feel great. The wage slave problem is a separate problem for which there is only a political, not medical, solution.
  24. Thanks. It's hard to connect to self-care when you are in a tech grind. You are expected to grind it out and I enjoyed that but at some point in my life that didn't work anymore. Taking even a month off can help. By the way I went back to work about 3 weeks after my spouse died. It wasn't the wrong thing to do. But a couple years after that I needed a real break.
  25. I got burned out from the combination of a high-intensity job, caretaking a dying spouse, and managing the decline and dementia of a parent. The two deaths occurred close together. I had always been highly productive and successful, but at a certain point I couldn't really focus anymore. My brain felt empty. I had trouble learning things. Luckily I had had one of those golden jobs in which I made a small fortune. I retired early. Now I feel fresh again and I want to get back into the mix. It took a couple of years though.

    What did I do? Grief workshops, therapy, gym, meditation, grief groups, community service, deeper friendships. It works. It just takes time.

  26. The problem is that current generative AI is not actually intelligent.

    Yann LeCun had a great tweet on this: "Sometimes, the obvious must be studied so it can be asserted with full confidence:
    - LLMs can not answer questions whose answers are not in their training set in some form,
    - they can not solve problems they haven't been trained on,
    - they can not acquire new skills or knowledge without lots of human help,
    - they can not invent new things.
    Now, LLMs are merely a subset of AI techniques. Merely scaling up LLMs will not lead to systems with these capabilities."

    link https://x.com/ylecun/status/1823313599252533594?ref_src=twsr...

    To focus on two of those: LLMs can not answer questions whose answers are not in their training set in some form, and they can not solve problems they haven't been trained on.

    Given that we are close to the maximum size of the training set, this means they are not going to improve without some technical breakthrough that is completely unknown at the moment. Going from "not intelligent" to "intelligent" is a massive shift.

  27. Sure. It's coming with GPT-5.
  28. I worked for an AI startup that got bought by a big tech company and I've seen the hype up close. In the inner tech circles it's not exactly a big lie. The tech is good enough to make incredible demos but not good enough to generalize into reliable tools. The gulf between demo and useful tool is much wider than we thought.
  29. I use eBay a lot. I think it is fine.

    The differences between eBay and the others are several: much better search, set up to let customers target exactly what they want; better filters; the capability to save a complex user-defined search that emails the customer any new items meeting the criteria; and layered search, where one set of results can be searched to produce another. It's amazing that other sites don't do this.

    The result is that eBay provides benefit to customers that in practice is not so easily gamed by big sellers. It gives you very, very, very niche results, e.g. Arts and Crafts pottery of a particular vintage, maker, and color.

    I've found replacement parts for kitchen implements that haven't been manufactured in 20 years.

