
abixb
3,481 karma
Balaji "Abi" Abishek

Cybersecurity/SWE. Madison, WI.


  1. One could argue that the quality of life per horse went up, even if the total number of horses went down. Lots more horses now get raised on farms and are trained to participate in events like dressage and other equestrian sports.
  2. >" One interpretation is that the extra $10 billion from the price increases will offset some of the red ink Microsoft is bleeding because of the investments they’re making in datacenter capacity, hardware, and software needed to make Copilot useful"

    Saying the quiet part out loud. Looks like the O365 folks will have to subsidize the losses MSFT takes giving Azure compute away to its LLM customers. Not great.

  3. I get that, but what I'm saying is that it's anticompetitive as heck. In a fair system, profits from NVDA's revenue growth should've been distributed to shareholders as dividends or reinvested into the company itself, not spent buying its own customers -- that's my (and countless others') biggest gripe with the whole AI bubble bs.

    Antitrust regulators must be asleep at the wheel.

  4. Anthropomorphizing non-human things is only human.
  5. The first step in building a large language model. That's when the model is initialized and trained on a huge dataset to learn patterns and whatnot. The "P" in "GPT" stands for "pre-trained."
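    To make the "learn patterns from a huge dataset" part concrete, here's a toy sketch. This is emphatically not how GPT-style pretraining works (that's gradient descent over a transformer optimizing next-token prediction); a simple bigram counter just illustrates the idea of a model absorbing statistical patterns from raw text. All names and the tiny corpus are made up for illustration:

```python
from collections import Counter, defaultdict

def pretrain_bigram(corpus):
    """'Pre-train' a toy bigram model: count which token follows which."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, token):
    """Return the most frequently observed next token, or None if unseen."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

# A (hypothetical) two-sentence "dataset" to learn patterns from.
corpus = [
    "the model learns patterns",
    "the model predicts the next token",
]
model = pretrain_bigram(corpus)
print(predict_next(model, "the"))  # "model" follows "the" most often
```

    Real pretraining differs in scale and mechanism, but the shape is the same: ingest data once, distill its regularities into parameters, then reuse them for prediction.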
  6. >They’re absolutely going to get bailed out and socialize the losses somehow.

    I've had that uneasy feeling for a while now. Just look at Jensen and Nvidia -- they're trying to get their hooks into every major critical sector they can (Nokia last month, Synopsys just recently). When the chickens come home to roost, my guess is that they'll pull out the "we're too big to fail, so bailout pls" card.

    Crazy times. If only we had regulators with more spine.

  7. Switching to a Japanese dumbphone (Kyocera) was the best thing I ever did. Eliminating your smartphone (as inconvenient and life-altering as it may be -- you'll need to figure out a path) is probably the single most effective thing you can do to get into the top 1-10%... of people with properly functioning cognition.
  8. Glad to see a fellow Madisonian make it to HN frontpage. Great work!
  9. I feel it's strategic, like a massive DDoS/"shock and awe" style attack on competitors. Gotta love it as PROsumers though!
  10. Insightful paper. Policymakers need to take much more input from high-quality, publicly funded (aka unbiased) research and make informed decisions on restricting content types. The social media companies rn are akin to tobacco companies selling products/services to kids (and adults!) with zero meaningful restrictions or warnings. There's a mountain of research showing cognitive-performance impacts from content consumed through smartphones, especially fluffy, low-quality "algorithmic feed" content.

    BTW, I still need to use YouTube and this one extension has protected my YouTube experience from being TikTok-ified -- "ShortsBlocker - Remove Shorts from YouTube" [0]

    When people do send me random Shorts, I use another browser (consciously) to watch that particular video and shut it back down. You can also pair that with "Block YouTube Feed - Homepage, Sidebar Videos" [1] for another layer of YouTube cruft removal.

    Finally, I've also installed "Turn Off YouTube Comments & Live Chat" [2] which keeps me from scrolling down to comments and letting that 'color' my perception of the video -- has restored my own ability to judge the value of a video.

    [0] https://chromewebstore.google.com/detail/shortsblocker-remov...

    [1] https://chromewebstore.google.com/detail/block-youtube-feed-...

    [2] https://chromewebstore.google.com/detail/turn-off-youtube-co...

  11. Okay, Gemini 3.0 Pro has officially surpassed Claude 4.5 (and GPT-5.1) as the top-ranked model on my private evals (multimodal reasoning with images/audio files, solving complex Caesar/transposition ciphers, etc.).

    Claude 4.5 solved them as well (the Caesar/transposition ciphers), but Gemini 3.0 Pro's approach was a lot more elegant. Just my $0.02.
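    For anyone unfamiliar with the eval task: a Caesar cipher shifts every letter by a fixed amount, so a classical brute-force solver fits in a few lines (the models, of course, get no crib or hint like this -- the sketch below is just to show what's being solved, with a made-up plaintext):

```python
import string

def caesar_shift(text, k):
    """Shift each letter in text forward by k positions, wrapping A-Z/a-z."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + k) % 26 + base))
        else:
            out.append(ch)  # leave spaces/punctuation untouched
    return ''.join(out)

def brute_force(ciphertext, crib):
    """Try all 26 shifts; return (shift, plaintext) where the crib appears."""
    for k in range(26):
        candidate = caesar_shift(ciphertext, -k)
        if crib in candidate.lower():
            return k, candidate
    return None

ct = caesar_shift("attack at dawn", 7)   # hypothetical test plaintext
print(brute_force(ct, "attack"))
```

    The interesting part of the eval isn't the cipher itself (trivially breakable) but whether the model reasons its way to the shift rather than pattern-matching, which is where the "elegance" differences show up.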

  12. We might be on to creating a new crowd-ranked LLM benchmark here.
  13. >Qwen 2.5's clocks, on the other hand, look like they never make it out of the womb.

    More like fell headfirst into the ground.

    I'm surprised at the disappointment with Gemini 2.5 (not sure if they mean Pro or Flash) -- I've personally had _fantastic_ results with Gemini 2.5 Pro building PWAs, especially since the May 2025 "coding update." [0]

    [0] https://blog.google/products/gemini/gemini-2-5-pro-updates/

  14. As someone with a barebones understanding of "world models," how does this differ from sophisticated game engines that generate three-dimensional worlds? Is it simply the use of a transformer architecture to generate the 3-D world vs. a static/predictable script as in game engines (learned dynamics vs. deterministic simulation mimicking 'generation')? Would love an explanation from SMEs.

This user hasn’t submitted anything.