RevEng
Joined 240 karma

  1. This is exactly where I find myself. I've been asked several times to take on management, but I have no interest in it. I got to be a principal after 18 years of experience by being good at engineering, not management. Like you said, I can and do help with leadership through mentorship, offering guidance and advice, giving presentations on technical topics, and leading technical projects.
  2. While I have no particular love for AI-generated code, I think this has nothing to do with AI. Software has been unreliable for over a decade. Companies have been rushing out half-baked products and patching them continually for years. And it's our fault, because we have just come to accept it.
  3. How exactly does one make software that makes it impossible to view or transmit CSAM? What is going to look at each picture, each video, each text message to make that determination?

    This is the same classic "wiretap everything" solution that never works and just undermines people's rights. There's no way they would ever use this ability for anything else, right? And nobody else would ever abuse it either, right?

    How long until people start using this to capture and produce CSAM?

  4. Between you and your provider, the downloads are over HTTP. Distribution of content between Usenet providers happens over the Usenet protocol (NNTP), which predates HTTP and the WWW.
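
    For the curious, NNTP is a plain-text exchange of commands and numeric status codes, much like its contemporaries SMTP and FTP. A hypothetical session (server name and group invented) looks like this:

      $ telnet news.example.com 119
      200 news.example.com NNTP service ready
      GROUP comp.lang.c
      211 4521 100 4620 comp.lang.c
      ARTICLE 4620
      220 4620 <msg-id@example.com> article follows
      ...
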
  5. Having worked with this stuff a lot, privacy isn't the biggest problem (though it is a problem). This shit just doesn't work. Wide-eyed investors might be willing to overlook the 20% failure rates, but ordinary people won't, especially when a single mistake can cost you millions of dollars. In most places I've seen AI shoved - especially Copilot - it takes more time to read and dismiss its crappy suggestions than it does to just do the work without it. But the really insidious case is when you don't realize it is making shit up and then you act on it. If you are lucky you embarrass yourself in front of a customer. If you are unlucky you unintentionally wipe out the production database. That's much more of an overt and immediate concern than leaking some PII.
  6. What a world we will live in when everyone with skill and experience has moved into management. Then who will be doing all the work?
  7. I argue that the first dark pattern is the "hallucination" that we all just take for granted.

    LLMs are compulsive liars: they will confidently and eloquently argue for things that are clearly false. You could even say they are psychopathic, because they do so without concern or remorse. This is a horrible combination that you would normally see in a cult leader or a CEO, but now we are all confiding in them and asking them for help with everything from medical issues to personal relationships.

    Bigger models aren't helping the problem; they're making it worse. Now models will give you longer arguments with more facts deployed to push their false conclusion, and they will even insist that you are wrong for disagreeing with them.

  8. That's not a matter of training; it's inherent to the architecture. The model has no idea of its own confidence in an answer. The server gets a full distribution over possible output tokens and picks one (often the highest-ranking one), but there is no way of knowing whether that token represents reality or just a plausible continuation. The distribution is never fed back to the model, so there is no possible way it could know how confident it was in its own answer.
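
    A minimal sketch of that sampling step in Python, assuming a generic next-token model (the names here are hypothetical, not any particular library's API):

      import numpy as np

      def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
          # The full distribution over the vocabulary exists right here.
          probs = np.exp((logits - logits.max()) / temperature)
          probs /= probs.sum()
          token = rng.choice(len(probs), p=probs)
          # Only the chosen token id is appended to the context. The
          # distribution itself is discarded, so the model keeps no record
          # of whether this choice was near-certain or a coin flip.
          return token

      # A peaked and a flat distribution both yield a single token id;
      # downstream, the model cannot tell them apart.
      confident = sample_next_token(np.array([9.0, 0.1, 0.2]))
      guessing  = sample_next_token(np.array([1.0, 1.1, 0.9]))
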
  9. They gave up on that a long time ago.
  10. These growth requirements are simply infeasible.

    Semiconductor density, speed, and power efficiency grow far more slowly than doubling every six months. Creating custom silicon for this won't help - plenty of companies, including Nvidia, are already optimizing their hardware for these workloads, and they are very good at it. Production capacity can't scale nearly that fast either, for myriad reasons: access to materials, the capacity of upstream suppliers, the time and complexity of building new fabs, the shortage of available experts, and the long time frames for training new ones.
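
    To put numbers on that, compare compound growth at a six-month doubling against the oft-cited two-year Moore's-law doubling (back-of-the-envelope only, no specific forecast assumed):

      # Doubling every 6 months vs. doubling every ~2 years.
      for years in (2, 5, 10):
          six_month = 2 ** (years / 0.5)
          moore = 2 ** (years / 2.0)
          print(f"{years:>2} yr: {six_month:>12,.0f}x vs {moore:,.0f}x")
      #  2 yr:           16x vs 2x
      #  5 yr:        1,024x vs 6x
      # 10 yr:    1,048,576x vs 32x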

    This, to me, is the biggest sign that this is a bubble. Even if demand ends up far smaller than forecast, it could still be huge. Even if many use cases prove impractical, we will still find some where it's valuable. But the market is basing its valuations on forecasts of tremendous growth that simply can't be supported physically.

  11. I'm finding myself in this position right now. For almost two decades I have worked as a software engineer, slowly learning more and becoming more capable at designing and developing software. Yet we have a significant lack of leaders and managers. I have been asked several times to take on such a lead role, but what makes me great at developing software does not make me great at organizing and managing a group of people. I have tried it - it didn't go well. Those aren't my interests or my aptitudes. Still, management keeps trying to get me to take on these vacant roles because they know I have done well for them in everything else, overlooking that they are asking me to do something very different from the things I have shown I do well.
  12. Hosts files alone won't block many ads. Plenty of companies serve their own annoying content from their own domains. uBlock lets you get far more fine-grained, blocking specific paths.
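
    For example (domain and selectors invented), a couple of static filters can target just the offending paths and elements while leaving the rest of the site alone:

      ||example.com/ads/^
      ||example.com/js/tracker.js
      example.com##.sponsored-banner
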
  13. Idealism dies with capitalism. Executives and shareholders don't care about ideals: they want money. All of the questionable things Google has done have been in pursuit of ever-larger profits.
  14. The big reason for me was that it "just worked". AltaVista was the biggest player at the time, but you had to learn a whole query language to get good results. Google's search engine took plain-English keywords and gave relevant results.
  15. Every meeting I'm in where we talk numbers or strategy starts with someone saying, "Please don't record or share this." The documents all say CONFIDENTIAL all over them. That's not true of all our presentations, just the ones we really wouldn't want our competition to see.

    Many people still take screenshots of things they think are useful. Things still get shared through email and occasionally posted on social media.

    I have worked with various secure-chamber VPN and VNC systems that make it quite difficult to record or screenshot. These are companies whose IP is worth billions of dollars, and everyone wants a piece of it. It's difficult enough that it's not worth the effort to try to work around it. On the rare occasion I really need something for debugging, I'll take a photo with my camera phone, but it rarely comes to that.

    Because it's that much harder, I record a lot less of it. Likewise for all the other engineers I work with. Friction won't stop it entirely, but it will make it far less frequent.

  16. It happens. Our CTO "resigned" about six years after we started our VC-funded startup. He sold his shares to the rest of the investors. It wasn't his choice to leave.
  17. I'm seeing a lot of that in my own job. Every company has a top-down mandate to use more AI and a budget to back it up, my employer included. But we write software, so now we are chasing the AI dragon, making LLM-based software in the hopes that we can capitalize on the hype. I would be more charitable in my description if I thought we were going after the high-value targets, but our marketing strategy is literally "AI in everything". I'm all for selling what the customer wants to buy, but I don't think that's actually what we are making. Trying to make everything at once means making none of it well.

    There's a huge disconnect between what customers think AI will do for them and what we are actually capable of delivering. When customers and colleagues alike complain that something doesn't work well and ask how we are going to fix it, I have to keep reminding them: "This is an open area of research in the field; nobody has found a solution to this yet." Nobody likes hearing that, but it's the truth.

    It reminds me a lot of the dotcom bubble. People thought they could get rich just by making a web site that talked about their company. Even many of the biggest, most successful early movers failed completely. We can all agree today that the Internet has provided tremendous value and really has changed the world, but not in the way most people thought it would back then. Most startups of that era didn't have a business plan beyond "it's on the web", and that's where I see a lot of AI development right now. They think building the technology makes it profitable, without asking what problems it solves better than our existing solutions. Some of what we are developing shows potential, but a lot of it is a solution in search of a problem.

    It's definitely resulting in opportunity costs as we pour money and people into projects that only exist because we can put AI in the name, letting them take priority over projects we had already planned and vetted with customers and market research. For a company that prides itself on slow and steady progress and long-term stability, we sure are jumping headfirst into AI solutions, and we are doing so at a rate that precludes any kind of design, testing, or quality control. We are selling shaky prototypes and our customers are happily paying for them. Everyone on both sides is so blinded by the hype that they are setting aside everything they've learned over decades of experience and success.

    I can't wait for the hype to end so we can start talking about where LLMs and other generative AI are actually useful.

  18. Not entirely. Since generation is autoregressive, the next token depends on the previous tokens. Whatever analysis and decisions the model has already spit out will influence what it does next. This tends to make it self-reinforcing.

    But it's also chaotic. Small changes in input or token choices can give wildly different outcomes, particularly if the sampling distributions are fairly flat (no single right answer). So restarting the generation with a slightly different input, such as a different random seed (or, in OP's case, a different temperature), can lead somewhere entirely different.

    If you try this, you'll see some runs vehemently arguing it is right and others just as vehemently arguing it is wrong. This is why LLM-as-judge is so poor by itself, but also why sampling multiple generations, as in self-consistency, can be quite useful for estimating variance and therefore uncertainty.
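
    A rough sketch of the self-consistency idea, assuming a hypothetical generate(prompt, seed, temperature) wrapper around whatever model is in use:

      from collections import Counter

      def self_consistency(generate, prompt, n=10, temperature=0.8):
          # Sample n independent generations; the spread of the final
          # answers is a crude but useful proxy for uncertainty.
          answers = [generate(prompt, seed=i, temperature=temperature)
                     for i in range(n)]
          best, votes = Counter(answers).most_common(1)[0]
          return best, votes / n  # 1.0 = unanimous, ~1/n = pure noise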

  19. The article rightly points out that people don't enjoy just being reviewers: we like to take an active role in playing, learning, and creating. It notes the need to find a solution to this, but then never follows up on the idea.

    This is perhaps the most fundamental problem. In the past, tools took care of the laborious and tedious work so we could focus on creativity. Now we are letting AI do the creative work and asking humans to become managers and code reviewers. Maybe that's great for some people, but it's not what most problem solvers want to be doing. And the people who know how to judge such work are the same people who have years of experience doing that work. Without that experience you can't have good judgement.

    Let the AI make it faster and easier for me to create; don't make it replace what I do best and leave me as a manager and code reviewer.

    The parallels with grocery checkouts are worth considering. Humans are great at recognizing things, handling unexpected situations, and being friendly and personable. People working checkouts are experts at these things.

    Now replace that with self-serve checkouts. Random customers are forced to do all of this themselves. They are not experts at it. The checkouts are less efficient because they have to accommodate these non-experts. People have to pack their own bags. And they do all of this while punching buttons on a soulless machine instead of getting some social interaction.

    But worst off is the employee who manages these checkouts. Instead of being social, they are now security guards and tech support. They are constantly having to troubleshoot computer issues and teach disinterested, frustrated beginners how to do something that should be simple. The employee spends most of their time as a manager and watchdog, staring at a screen that shows the status of all the checkouts and scanning for problems, like a prison security guard. This work is passive and unengaging, yet it requires constant attention - something humans aren't good at. What little interaction they do get with others comes in situations where those people are upset.

    We didn't automate anything here; we just changed who does what. We made customers into the people doing checkouts, and we made entry-level staff into their managers, plus tech support.

    This is what companies are trying to do with AI. They want fewer employees, whose job is to manage the AIs and direct them to produce. The human is left assigning tasks and checking the results - a manager of thankless, soulless machines. The credit for the creation goes to the machines, while the employees are seen as low-skilled and replaceable.

    And we end up back at the start: trying to find high-skilled people to perform low-skilled work based on experience they would only have if they had been doing high-skilled work to begin with. When everyone is just managing an AI, no one will know what it is supposed to do.

  20. Not 24 hours ago he said the tariffs were non-negotiable; now he says this was always the plan. There's no reason to believe a word he says.
