
827a · karma: 1,700

  1. Holy crap. This is going to trigger a five-alarm fire at Spotify Engineering. This has got to be among the largest proprietary datasets ever unintentionally publicized by a company.
  2. Google attempting to claim any percentage of revenue from an external transaction will never fly. I believe the current situation with the App Store is that US courts have barred Apple from charging a similar fee, though it still does in the EU. USG antitrust, especially in the current administration, hates Google far more than Apple; this structure will never survive being challenged.

    Charging a modest fee for the installation of an app can be, IMO, a fair and reasonably cost-correlative way for app store providers to be compensated for what few services they do provide application developers. Such a fee is within an order of magnitude of what the bandwidth would cost at market cloud rates, and there are certainly other services rendered, like search indexing.
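
    To put rough numbers on that (both figures are my assumptions, purely illustrative):

    ```typescript
    // Back-of-envelope: what one install costs the store in bandwidth at
    // market cloud egress rates. Both numbers are assumptions.
    const apkSizeGB = 0.1;        // ~100 MB download
    const egressUsdPerGB = 0.08;  // ballpark public-cloud egress pricing

    console.log(`~$${(apkSizeGB * egressUsdPerGB).toFixed(3)} of bandwidth per install`); // ~$0.008
    ```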

    I would emphasize to the people at Google, however, that your customers bought the phone, which came with the operating system, and thus ethically the core technology your application developers depend on has already been paid for. In Google's case, this happens through Samsung/etc.'s Android licensing; a relationship which landed them on the wrong side of antitrust lawsuits in the US quicker than Apple's racket did. They dip further by charging developers a direct fee to publish on their stores ($100/year for Apple, $25 one-time for Google). Attempting to triple-dip by "reflecting the value provided by Android and Play and support our continued investments across Android and Play" convinces exactly no one of your benign intent; not your investors, nor the US government, nor consumers, nor developers.

    The only person who may be convinced that any of this makes sense is some nameless VP in some nameless org at your mothership, who can pat themselves on the back and say "at least it's legal's problem now". It's possible no one at all in this business unit remembers what the words "produce value" even mean, let alone has any remote understanding of what it takes to do so. Exactly everyone who has ever interacted with it knows this; your CEO certainly knows this, given how much investment he's made in AI and not in the Play Store. Continuing to cause so many global legal problems, for such an unpromising, growth-stunted business unit, is not generally a good recipe for keeping your job or saving your people from layoffs.

  3. Archetypes of prompts that I find AI to be quite good at handling:

    1. "Write a couple lines or a function that is pretty much what four years ago I would have gone to npm to solve" (e.g. "find the md5 hash of this blob")

    2. "Write a function that is highly represented and sampleable in the rest of the project" (e.g. "write a function to query all posts in the database by author_id" (which might include app-specific steps like typing it into a data model)).

    3. "Make this isolated needle-in-a-haystack change" (e.g. "change the text of such-and-such tooltip to XYZ") (e.g. "there's a bug with uploading files where we aren't writing the size of the file to the database, fix that")

    I've found that it can definitely do wider-ranging tasks than that (e.g. "implement all API routes for this new data type per this description of the resource type and desired routes"), and it can absolutely work. But here are the problems I run into:

    1. Because I don't necessarily have a grokable handle on what it generated, I don't have a sense of what it's missing or what follow-on prompts are needed. E.g.: I tell it to write an endpoint that allows users to upload files. A few days later, we realize we aren't MD5-hashing the files that get uploaded; there was a field in the database & resource type to store this value, but it didn't pick up on that, and I didn't prompt it to, so it's not an unreasonable miss. But oftentimes when I'm writing routes by hand, I'm spending so much time in that function body that follow-on requirements naturally occur to me ("Oh that's right, we talked about needing this route available to both of these two permissions, crap, let me implement that"). With AI, it finishes so fast that my brain doesn't have time to remember all the requirements.

    2. We've tried to mitigate this by pushing more development into the specs and requirements up-front. This is really hard to get humans to do, first of all. But more critically: none of our data supports the hypothesis that this has shortened cycle times. It mostly just trades writing TypeScript for reading & writing English (which few engineers I've ever worked with are actually all that good at). The engineers still end up needing long back-and-forth cycles with the AI to get correct results, and long cycles in review.

    3. The more code you ask it to generate, the more vibeslop you get: deeply-nested try/catch statements with multiple levels of error handling & throwing, comments everywhere, the same helper functions reimplemented five times. These things, we have found, raise the cost and lower the reliability & performance of future prompting, and quickly morph parts of the system into a no-man's-land (literally) where only AIs can really make changes; and every change, even by the AIs, gets harder and harder to ship. Our reported customer issues on these parts of the app are significantly higher than on others, and our ability to triage those issues is also impaired, because we no longer have SMEs who can just brain-triage issues in our CS channels; everything now requires a full engineering cycle, with AI involvement, to solve.

    Our engineers run the gamut from "never wanted to touch AI, never did" to "earnestly trying to make it work". Ultimately I think the consensus position is: it's a tool that's nice to have in the toolbox, but any assertion that it's going to fundamentally change the profile of work our engineers do, or even seriously impact hiring over the long term, is outside the realm of foreseeable possibility. The models and surrounding tooling are not improving fast enough.

  4. These exist for apex domains; the real use-case is subdomains.
  5. Thousands of systems, from Google to script kiddies to OpenAI to Nigerian call scammers to cybersecurity firms, actively watch the certificate transparency logs for exactly this reason. Yawn.
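
    For the curious, "watching" a log mostly means polling its public RFC 6962 HTTP API. A minimal sketch (the log URL is a placeholder; substitute any log from the known-logs list):

    ```typescript
    // Tail the newest entries of a Certificate Transparency log (RFC 6962 API).
    const LOG_URL = "https://ct.example.com/log"; // placeholder, not a real log

    async function latestEntries(count: number) {
      // get-sth returns the signed tree head, including the current tree_size
      const sth = await (await fetch(`${LOG_URL}/ct/v1/get-sth`)).json();
      const end = sth.tree_size - 1;            // newest leaf index
      const start = Math.max(0, end - count + 1);
      const res = await fetch(`${LOG_URL}/ct/v1/get-entries?start=${start}&end=${end}`);
      return (await res.json()).entries;        // base64-encoded MerkleTreeLeaf structs
    }

    latestEntries(10).then(e => console.log(`fetched ${e.length} recent (pre)cert entries`));
    ```
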
  6. This is a semi-solved problem, e.g. https://www.sonnetstore.com/products/thunderlok-a

    Apple’s chassis do not support it. But conceptually that’s not a Thunderbolt problem, it’s an Apple problem. You could probably drill into the Mac Studio chassis to create mount points.

  7. They do still sell the Mac Pro in a rack mount configuration. But, it was never updated for M3 Ultra, and feels not long for this world.
  8. Yeah, it's bad out there. At my company, we have a team of security professionals who focus on keeping our systems (and others') secure. AI for them has gone from "using it to script together nmap" to "we really need the platform your team is working on to do X, Y, and Z, so we vibed up this PR". On the engineering side, I don't have the political power to tell them no, because we don't really have senior leadership and we're behind schedule on everything. Why? Well, I spent two hours today resolving dozens of vulnerabilities our code scanners found in some vibed security-team PR. The scanners that they set up, and demanded we use. Half the stuff they vibe, we literally have to feature-flag off immediately after release because they didn't QA it; but they rarely revisit the feature, because to them it's always either "on to the next big idea" or, more often, "we're just security, platform isn't our responsibility".

    The thing is: I know you might read that and think I'm anti-AI. In this specific situation, at my company: we gave nuclear technology to a bunch of teenagers, then acted surprised when they blew up the garage. This is a political/leadership problem; because everything, nine times out of ten, is a political/leadership problem. But the incentives just aren't there yet for a generalized understanding of the responsibility it takes to leverage these tools in a product environment that's expected to last years-to-decades. I think we'll get there, but along that road will be gallons of blood from products killed, ironically, by their inability to be dynamic and reliable under the weight of the additive-biased, purple-tailwind-drenched world of LLM vibeput. But there's probably an end to that road, and I hope when we get there I can still have an LLM, because it's pretty nice to be able to say "heyo, i copy pasted this JSON but it has javascript single quotes instead of double quotes so it's not technically JSON, can you fix that thanks"
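
    (For that last one you don't even strictly need the LLM; here's a quick-and-dirty sketch that treats the paste as a JS object literal. Only use it on text you trust, since it executes the input:)

    ```typescript
    // Evaluate the pasted text as a JavaScript expression, then re-serialize
    // as strict JSON. Unsafe for untrusted input: this executes the text.
    function jsLiteralToJson(text: string): string {
      const value = new Function(`"use strict"; return (${text});`)();
      return JSON.stringify(value, null, 2);
    }

    console.log(jsLiteralToJson("{ name: 'HN', tags: ['forum', 'tech'] }"));
    // -> { "name": "HN", "tags": ["forum", "tech"] }
    ```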

  9. Interestingly, some of Apple’s devices do already serve a special purpose like this in their ecosystem. The HomePod, HomePod Mini, and Apple TV act as Home Hubs for your network, which proxy WAN Apple Home requests to your IoT devices. No other Apple devices can do this.

    They also already practice a concept of computational offloading with the Apple Watch and iPhone: more complicated fitness calculations, like VO2Max, rely on watch-collected data, but evidence suggests they're calculated on the phone (new VO2Max algorithms arrive when you update iOS, not watchOS).

    So yeah; I can imagine a future where Apple devices could offload substantial AI requests to other devices on your Apple account, to optimize for both power consumption (plugged in versus battery) and speed (if you have a more powerful Mac versus your iPhone). There’s good precedent in the Apple ecosystem for this. Then, of course, the highest tier of requests are processed in their private cloud.

  10. I said "Consumer AI". Even Apple is likely beating Google in consumer AI DAUs, today. Google has the Pixel and gemini.google.com, and that's it; practically zero strategy.
  11. People said the same things about mobile gaming [1] and mainframes. Technology keeps pushing forward. Neural coprocessors will get more efficient. Small LLMs will get smarter. New use-cases will emerge that don't need 160-IQ super-intellects (most use-cases, even today, do not).

    The problem for other companies is not necessarily that data center-borne GPUs aren't technically better; it's that the financials might never make sense, much like the financials behind Stadia never did, or at least require Google-levels of scale to bring in advertising and ultra-enterprise revenue.

    [1] https://apps.apple.com/us/app/resident-evil-3/id1640630077

  12. I would bet significant money that, within two years, it will become Generally Obvious that Apple has the best consumer AI story among any tech company.

    I can explain the reasoning in more depth, but the most critical point: Apple builds the only platform where developers can construct a single distributable that works on mobile and desktop with standardized, easy access to a local LLM, and a quarter billion people buy into this platform every year. The degree to which no one else on the planet is even close to this cannot be overstated.

  13. This is a way of attributing where the comment is coming from, which is better than responding with what the AI says and not attributing it. I would support a guideline that discourages posting the output from AI systems, but ultimately there's no way to stop it.
  14. Agreed. It would be extremely strange if he were even in the running, let alone picked.
  15. Look outside the HackerNews/Silicon Valley bubble: Apple is doing very well. Consumers broadly don't care whether their phone has AI, as long as it has the ChatGPT/etc apps. iMessage and FaceTime have a stranglehold on, uh, everyone in America. They sell more iPhones every quarter. Their services revenue keeps going up. Mac sales are up big. Apple Silicon is so far ahead of anything else on the market, they could stay on the M5 platform for three years and still be #1. Apple Watch is the most popular watch brand in the world (and it's not close; sensing a pattern?). AirPods, alone, make more money than Texas Instruments or Supermicro. Yes, Vision Pro and iPhone Air sold poorly. Who cares? They're both obvious stepping stones to products that will sell well (Vision Pro -> a glasses-style AR device; iPhone Air -> thin engineering that will help with the iPhone Fold). Apple can afford to take risks and adjust.

    Sure, there can be cultural things going on. But at the senior leadership level, the degree to which those would have to be bad, in the absence of major revenue problems, to cause this reaction is... unheard of.

  16. IMO: Cook is going to announce his retirement by the end of Q1, they've already selected a CEO (probably Ternus), the incoming CEO wants leadership change, and some of these departures are happening because it's better that this purge happens before the CEO change than after. I think this explains Giannandrea, Williams, and Jackson.

    Dye may have also been involved in that, given how unpopular he was internally at Apple; but more likely it was just personal, or Meta offered him a billion dollars. Maestri's departure was probably entirely unrelated.

    Srouji is the weirdest case, and I'm hesitant to believe it's even true, given it's just a rumor at this point. It's possible he was angry about being passed over for CEO, but realistically, it was always going to be Ternus, Williams, or Federighi. If Ternus is the next CEO, it's likely we'll see Apple combine the Hardware Technologies and Hardware Engineering divisions and have Srouji lead both. I really do not see him leaving the company.

    The other less probable theory is that they actually picked Fadell, and this deeply pissed off many people in Apple's senior leadership. So, what we're seeing is more chaos than it first seems.

    Generally, as long as Srouji doesn't leave, these changes feel positive for Apple, and especially if there's a CEO change in early 2026: This is what "the fifth generation of Apple Inc" looks like. I don't understand the mindset of people who complain about Apple's products and behavior over the past decade, then don't receive this news as directionally positive.

  17. Java is not for sale.
  18. One of the things I dislike about the YouTube app on Apple TV is how it appears to maintain an entirely separate list of recommended videos, specific to the kinds of videos I tend to watch on TV, versus the phone and desktop (which might each have their own recommendation algorithm too, but my behavior there is similar enough that I don't notice).

    The difference is stark. I use YouTube on the Apple TV to play mostly background videos: 8-hour AI-generated lofi mixes, burning fireplaces, things like that. Ambiance. It's all that gets recommended now when I pull up the app; but only on the TV.

    This behavior is somewhat desirable; but the issue is, the YouTube Apple TV app is an abhorrent experience that feels deeply tailored to stop you from getting to any content that is not expressly recommended. And these videos are all that get recommended. A new Linus Tech Tips video might be in my feed on desktop/mobile; but finding that video on the TV literally requires me to search "Linus Tech Tips" and go to their channel -> all videos.

    I certainly don't mind the platform raising the prominence of videos I tend to watch on that platform; but to me it feels like I should be able to at least scroll down on the home page a bit to get a more "centralized" view into everything my account watches and would be recommended.

  19. There isn't necessarily rationality behind venture deals; it's just a numbers game combined with the rising tide of the sector. These firms are not Berkshire. If the tide stops rising, some of the companies they invested in might actually be OK, but the venture boat sinks; the math of throwing millions at everyone and hoping for one 200x exit does not work if the rising tide stops.
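
    A toy model of that math (all numbers invented for illustration):

    ```typescript
    // Power-law fund math: one outlier carries the fund, or nothing does.
    const fundSize = 500e6;              // hypothetical $500M fund
    const checks = 100;
    const checkSize = fundSize / checks; // $5M per company

    const outlier = 200 * checkSize;             // the one 200x exit: $1B
    const rest = (checks - 1) * checkSize * 0.5; // everyone else returns 50c on the dollar
    console.log(`${((outlier + rest) / fundSize).toFixed(1)}x with the outlier`); // ~2.5x
    console.log(`${(rest / fundSize).toFixed(2)}x without it`);                   // ~0.5x
    ```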

    They'll say things like "we invest in people", which is true to some degree; being able to read people is roughly the only skill VCs actually need. You could probably put Sam Altman in any company on the planet and he'd grow the crap out of it. But a16z would not give him ten billion to go grow Pepsi. This is the revealed preference intrinsic to venture: they'll say it's about the people, but their choices are utterly dominated by the sector, because the sector is the predominant driver of the multiples.

    "Not investing" is not an option for capital firms. Their limited partners gave them money and expect super-market returns. To those ends, there is no rationality to be found; there's just doing the best you can of a bad market. AI infrastructure investments have represented like half of all US GDP growth this year.

  20. Slightly related but unpopular opinion I have: I think software, broadly, today is the highest quality it's ever been. People love to hate on specific issues, like how the Windows file explorer takes 900ms to open instead of 150ms, or how an iOS 26 liquid-glass animation is sometimes a bit janky... we're complaining about so much minutiae instead of seeing the whole forest.

    I trust my phone to work so much that it is now the single, non-redundant source for keys to my apartment, keys to my car, and payment method. Phones could only even hope to do all of these things as of like ~4 years ago, and only as of ~this year do I feel confident enough to not even carry redundancies. My phone has never breached that trust so critically that I feel I need to.

    Of course, this article talks about new software projects. And I think the truth of the matter lies in this asymmetry: Android/iOS are not new. Giving an engineering team agency and a well-defined mandate that spans a long period of time oftentimes produces fantastic software. If that mandate changes often, or is unclear in the first place, or there are middlemen stakeholders involved, you run the risk of things turning sideways. The failure of large software systems is rarely an engineering problem.

    But, of course, it sometimes is. It took us ~30-40 years of abstraction/foundation building to get to the pretty darn good software we have today. It'll take another 30-40 years to add one or two more nines of reliability. And that's ok; I think we're trending in the right direction, and we're learning. Unless we start getting AI involved; then it might take 50-60 years :)
