- Just to set the record straight on how and why these acquisitions happen at IBM. This is a first-hand account of working at and with IBM and its competitors, and of being in the room as the tech-guy accessory to murder.
IBM lives off huge multi-year contract deals with its customers, each worth many millions of dollars. IBM has many of these contracts, maybe ~2000 of them around the planet, including with your own government wherever it is that you live. This is ALL that matters to IBM. ALL. That. Matters.
These huge contracts get renegotiated every X years. IBM renewal salespeople are tough and rough, and they spend every minute of every hour between renewals grooming the decision makers, sponsors, champions and stakeholders (and their families) within these big corporations. Every time you see an IBM logo at a sports event (and there are many IBM-sponsored events), that's not IBM marketing to you, the ad-viewer. They are there to groom their stakeholders, who fight hard to be in the best IBM-sponsored seats at those venues and at the glamorous pre- and after-parties, celebs included. IBM also sponsors other stuff, even special programs at universities. Who goes to these universities? Oh, you bet: the stakeholders' kids, who get the IBM treatment and the IBM scholarship at those places.
But the grooming is not enough. The renewal itself is not usually at risk - who has the balls to uninstall IBM out of a large corp? What is at risk is IBM's growth, which is fueled by price increases at every renewal point, not by the sale of new software or new clients - there are no new clients for IBM anywhere anymore! These price increases need to happen, not just because of inflation but because of the stock price and the bonuses that keep the renewal army and management going strong, since this is a who-knows-who business. To justify the price increase internally at those huge client corps (not to the stakeholder but to their bosses, boards, users, etc), IBM needs to throw a bone into these negotiations. The bone is whatever acquisition you see them make: Red Hat, HashiCorp... Or developments like Watson. Or whatever. They are only interested in acquiring products or entering markets that can be thrown at those renewal negotiations, with very few exceptions. Why Confluent? Well, because they probably did their research and decided that existing Confluent licenses can be applied to one (yeah, one) or many renewal contracts as growth fuel for at least 1-to-N iterations of renewals.
Renewal contracts account for anywhere from 60% to 95% of IBM's revenue, depending on how you count the consulting arm and "new" software/hardware sales and subscriptions. I personally have not seen many companies hiring IBM consultants "just because we love IBM consultants and their rates", so consulting at a site is always tied to the renewal somehow, even if billed separately or not billed at all. Same for new software sales: if a company wants something from IBM's catalog of its own whim and will, that will probably just get packed into the next renewal, because that's stakeholder leverage for justifying the renewal's increased base rate. Remember, a lot of IBM's mainframes are not even sold, they are just rentals.
Most IBM investment in research programs, new tech (quantum computing!) etc. is there just to help the renewals and secure a new government deal here and there. How? Well, maybe the price increase in the renewal of, say, the State of Illinois contract gets a bone thrown in: a new "Quantum Research Center (by IBM)" at some U of I campus or tech park, where the now-visionary Governor will happily cut the ribbon, do the photo op and give the speech. Oh wait! I swear I made this up as an example, but this one is actually true, lol:
https://newsroom.ibm.com/2024-12-12-ibm-and-state-of-illinoi...
You get the drill?
- Yes, there was a reason: Perl took inspiration from Lisp - everything is a list - and everyone knows how quickly C's variadic arguments get nasty.
So @_ was a response to that issue, given that Perl was about being dynamic and untyped, and there were no IDEs or linters that would type-check and refactor code based on function signatures.
JS had the same issue forever and finally implemented a rest/spread operator in ES6. Python had variadics from the start but no rest operator until Python 3. Perl already had spread/rest for varargs in the late 80s. For familiarity, Perl chose the @ sigil that had meant varargs in the Bourne shell since the 70s.
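For anyone who never wrote it, a minimal sketch of @_ in action (plain Perl, nothing exotic):

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Every sub receives its arguments in the @_ list - no signature required.
  sub total {
      my $sum = 0;
      $sum += $_ for @_;    # walk however many args the caller passed
      return $sum;
  }

  print total(1, 2, 3), "\n";     # 6
  my @more = (4, 5, 6);
  print total(0, @more), "\n";    # arrays flatten into the call (spread): 15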
- Not only the movie theater: Netflix killed social life. Well, streaming, feeds and their algorithms in general, but Netflix is very much the one that owned the narrative of what to do on a weekend night.
This is very anecdotal, certainly, but I've spoken with (or overheard) a few neighborhood hospitality business owners who had to close down or cut back due to the constant decline of people leaving the house just to meet in a bar or coffee shop. Only sports nights keep them going, because sports online remain expensive in most places.
Maybe it's just my observation or my neck of the woods, but it seems to fit the general sentiment of a reduced social environment on the streets in certain parts of the world.
- Fine, but there's a noticeable asymmetry in how the three languages get treated. Go gets dinged for hiding memory details from you. Rust gets dinged for making mutable globals hard and for conceptual density (with a maximally intimidating Pin quote to drive it home). But when Zig has the equivalent warts they're reframed as virtues or glossed over.
Mutable globals are easy in Zig (presented as freedom, not as "you can now write data races"; see the sketch below).
Runtime checks you disable in release builds are "highly pragmatic," with no mention of what happens when illegal behavior only manifests in production.
The standard library having "almost zero documentation" is mentioned but not weighted as a cost the way Go's boilerplate or Rust's learning curve are.
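For contrast, here's the "hard" Rust side of that first point (a minimal sketch; the Zig equivalent is just a file-scope `var counter: u64 = 0;` that any code can mutate):

  // Rust pushes you to wrap mutable global state in a synchronization
  // primitive; a bare `static mut` would require `unsafe` at every touch.
  use std::sync::Mutex;

  static COUNTER: Mutex<u64> = Mutex::new(0); // const-constructible since Rust 1.63

  fn bump() {
      *COUNTER.lock().unwrap() += 1; // the lock makes the data-race cost explicit
  }

  fn main() {
      bump();
      println!("{}", COUNTER.lock().unwrap()); // prints 1
  }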
The RAII critique is interesting but also somewhat unfair, because Rust has arena allocators too, and nothing forces fine-grained allocation. The difference is that Rust makes the safe path easy and the unsafe path explicit, whereas Zig trusts you to know what you're doing. That's a legitimate design, hacking-a!
The article frames Rust's guardrails as bureaucratic overhead while framing Zig's lack of them as liberation, which is grading on a curve. If we're cataloging trade-offs honestly,
> you control the universe and nobody can tell you what to do
...that cuts both ways...
- Yeah, now they are part of Anthropic, who haven't figured out monetization themselves. Yikes!
I'm a user of Bun and an Anthropic customer. Claude Code is great and it's definitely where their models shine. Outside of that, Anthropic sucks: their apps and web UI are complete crap, borderline unusable, and the models are just meh. I get it: CC's head probably got a power play here, given that his department is towing the company, and his secret sauce, according to marketing from Oven, was Bun. In fact, VS Code's Claude backend is distributed as a bun-compiled binary, and the guy has been featured on the front page of the Bun website for at least a week or so. So they bought the kid the toy he asked for.
What Anthropic urgently needs instead is to acquire a good team behind a good chatbot and make something minimally decent. Then make their models work for everything else as well as they do for code.
- As far as the data goes, adjusted for inflation, tuition and fees have eased up in the last ~5 years [1]. But overall college enrollment has been going down anyway [2], except for 2025, which hints at a slight rebound.
So I'd say we have to consider the full set of drivers that can correlate: the overall rising cost of living making it very expensive to attend a university full-time, general labor-market sentiment, which has been mostly down since covid, interest rates and debt risk, which are still high despite recent cuts, etc.
1. https://www.nbcnews.com/news/education/college-costs-working...
- Not very encouraging to imagine ChatGPT as the first earthling to reach another star system, but that's an option we'll have to keep on the table, at least for the time being...
- Around that time in the video, what I see is a journalist who did not do his homework, as he crumbled under the CEO's snarky "do you know this research company went out of business?" - he should have just started reading the report's findings and asked whether they are true [1], or brought up the 16 public arrests [2] tied to Roblox in the US of A.
Both journalists were VERY agreeable and seemed to be trying not to pick a fight. Want to talk about the fun stuff, Mr. CEO? There's no fun when so many kids are being systematically harassed by evil adults on the platform.
[1] https://hindenburgresearch.com/roblox/
[2] https://thebearcave.substack.com/p/problems-at-roblox-rblx-4
- Red Hot Chili Peppers!
- Perl was the internet in the 1990s. People (me) who were doing unix systems work (C, shell, Perl and some DBs and FTPs) could now quickly throw a CGI script behind an Apache HTTP server, which tended to be up and running on port 80 of many unixes back then (Digi, HP, Sun, etc). Suddenly I had a working app that would generate reports directly to people's browsers, or full-blown apps on the internet! But Perl CGI did not scale at all (spawning one short-lived process per request will choke a unix fast), and even after mod_perl [1], it got quickly superseded by PHP, which was really built for the web (of the 1990s) [2]. Web frameworks and FastCGI arrived too late for Perl, so internet Perl was practically dead at the turn of the century.
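To make that concrete, a 1990s-style CGI "app" was about this much code (a sketch; Apache would fork a fresh perl process for every single request, which is exactly the scaling problem):

  #!/usr/bin/perl
  # cgi-bin/report.cgi - one short-lived process per browser hit
  use strict;
  print "Content-type: text/html\n\n";   # CGI header, then a blank line
  print "<html><body><h1>Daily Report</h1>";
  print "<p>Generated at ", scalar localtime, "</p>";
  print "</body></html>\n";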
The enterprise, which either did not have any webapps or had tried Perl CGI first and suffered it dearly, got pinged by their sales reps that Java and .NET (depending on whether you were an IBM, Sun or MS shop) were the way to go, and there they went with their patterns and anti-patterns for "scalable" million-dollar web stacks. That kicked off the age of the famed application servers that persist up until today (WebSphere, WebLogic, etc).
So Perl went back to being a glue language for stitching up data, C/C++ and shell, and that's how the 2000s went by. But by then Ruby and Python had saner communities, and Ruby was exciting and Python was simpler - Perl folks were just too peculiar, funny and nerdy to be taken seriously by a slick new generation that coded fast and had startup aspirations of the "only $1B is cool" type. Also, the Perl6 delusion was too distracting for anyone to even care about giving Perl5 some good love (the real perl keeping servers running worldwide), so by the 2010s Perl was sliding into collective ostracism, even though it still runs extremely well, fast and reliably in production. By the 2020s, the release cycles had improved after Perl6 became a truly separate project (Raku, renamed in 2019), the core had gone through a relative cleanup, and it finally got a few popular features that were in demand [3]. The stack and ecosystem are holding up fine, although CPAN probably needs some good tidying up.
The main issue with Perl at this point is that it is not a target for any of the new stuff that comes out: any cool module, library, database, etc. that launches does not put out a Perl API or a simple example of any kind, so it's up to the Perl community to build and maintain APIs and integrations to the popular stacks on its own, which is a losing game and ends up being the nail in the coffin. By the way, nothing (OSS) that comes out today is even written in Perl. That reduces the appeal of learning Perl even further.
Strangely enough, Perl has lately seen a sudden rise in the TIOBE index [4], back to a quite respectable 9th position. TIOBE ranks search queries for a given language and is not much of an indicator, being quite noisy and unreliable. My guess is that those queries are issued by AI agents/chats desperately scraping information so that they can answer questions and help humans code in a language that is not well-represented in the training datasets.
[1] mod_perl was released in 1996, and became popular around 1999: https://perl.apache.org/about/history.html
[2] PHP was released in 1994 and took off ~1998 with PHP3: https://www.php.net/manual/en/history.php.php
[3] Perl's version changes simplified: https://en.wikipedia.org/wiki/Perl_5_version_history
- This is the multi-million-dollar .unwrap() story. In a critical path of infrastructure serving a significant chunk of the internet, calling .unwrap() on a Result means you're saying "this can never fail, and if it does, crash the thread immediately." The Rust compiler forced them to acknowledge this could fail (that's what Result is for), but they explicitly chose to panic instead of handling it gracefully. This is the textbook "parse, don't validate" anti-pattern.
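Roughly the difference, as a sketch (hypothetical names, not the actual outage code):

  use std::num::ParseIntError;

  fn parse_limit(raw: &str) -> Result<u32, ParseIntError> {
      raw.trim().parse() // may legitimately fail on bad input
  }

  fn main() {
      // What shipped, in spirit: "this can never fail" - and the thread dies when it does.
      // let limit = parse_limit(raw_config).unwrap();

      // Graceful handling: log it, fall back, keep serving traffic.
      let limit = parse_limit("oops").unwrap_or_else(|e| {
          eprintln!("bad limit in config ({e}), falling back to 100");
          100
      });
      println!("effective limit: {limit}");
  }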
I know, this is "Monday morning quarterbacking", but that's what you get for an outage this big that had me tied up for half a day.
- This. There are just as many human commenters and content creators generating plenty of human slop. And there is a lot of AI-produced content that is very, very interesting. I've subscribed to a couple of AI-generated newsletters which are brilliant. Lots of project documentation is now generated by AI, which, if well prompted, is capable of great docs that are deeply rooted in the code-as-primary-source and are easier to keep up to date. AI content is good if the human behind it is committed to producing good content.
Hack, that's why I use ChatGPT and other LLM chats: to have AI generate content tailored to my reading pleasure and specific needs. Some of the longer AI research-mode generations I did lately are among my personal best reads of the year - all filled with links to their sources and with verified good info.
I wish people generating good AI responses would just feel free to publish them and not be bullied by "AI slop detectors by Kagi" that promise to demote your domain ranking. Kagi: just rank the quality and veracity of the content, independently of whether it's AI or not. It's not the em-dashes that make it bad, it's the sloppy human behind the curtain.
- I'm also a synthetic.new user, as a backup (and for larger contexts) for my Cerebras Coder subscription (zai-glm-4.6). I've been using the free Chatbox client [1] for ~6 months and it works really well as a daily driver. I've tested the Romanian football player question with 3 different models (K2 Instruct, Deepseek Terminus, GLM 4.6) just now, and they all went straight to my Brave MCP tool to query and all replied with the same correct answer.
The issue with OP and GPT-5.1 is that the model may decide to trust its own knowledge and not search the web, and that's a prelude to hallucinations. Requesting links to the background information in the system prompt helps make the model more "responsible" about invoking tool calls before settling on something. You can also start your prompt with "search for what Romanian player..."
Here's my chatbox system prompt:

  You are a helpful assistant be concise and to the point, you are writing for smart pragmatic people, stop and ask if you need more info. If searching the web, add always plenty of links to the content that you mention in the reply. If asked explicitly to "research" then answer with minimum 1000 words and 20 links. Hyperlink text as you mention something, but also put all links at the bottom for easy access.

1. https://chatboxai.app
- I wish @pg would just add "Replace YouTube" to his Frighteningly Ambitious Startup Ideas.
- I've been a customer of the $200 max plan for 2 months. I fell in love with the Qwen3 Coder 480B model (Q3C), which was fast: twice the speed of GLM. GLM 4.6 is just meh - I mean, way faster than competitors, and practically at Sonnet 4.x level in coding and tool use, but not a life-changing difference.
Yes, Qwen3 made more mistakes than GLM, around 15% more in my quick throwaway evals, but it was a more professional model overall, more polished in some aspects, better with international languages, and, being non-reasoning, ideal for a lot of tasks through the API that could be run instantaneously. I think the Qwen line is a more consistent offering, with other versions of the model at 32B and VL, now an 80B one, etc. I guess the problem was that Qwen Max was closed source, signalling that Qwen might not offer Cerebras a way forward. GLM 4.6 covers precisely that hole. Not that Cerebras is much of a model provider: their service levels are "buggy" (right now it's been down for 1h and probably won't be fixed until California wakes up at 9am PST). So it does feel like we are not the customers but the product - a marketing stunt for them to get visibility for their tech.
GLM feels like they (Z.ai) are just distilling whatever they can get into it. GLM switches to Chinese sometimes, or just cuts off. It does have a bit more "intelligence" than Q3C, but not enough to say it solves the toughest problems. Regardless, for tough nuts to crack I use my Codex Plus plan.
Ex: in one of my evals, it took 15 turns to solve an issue using Cerebras Q3C. It took 12 turns with GLM, but overall GLM takes 2x the time per turn, so instead of doing a full task from zero-to-commit in, say, 15 minutes, it takes 24 minutes.
In another eval (Next.js CSS editing), my task with Q3C was done in 1:30. GLM 4.6 took 2:24. The same task in Codex took 5:37, with maybe 1 or 2 turns. The Codex DX is that of working unattended: prompt it and go do something else; there's a good chance it will get it right after 0, 1 or 2 nudges. With CC+Cerebras it's a completely different DX: given the speed, it feels just like programming, but super-fast. Prompt, read the change, accept (or don't), accept, accept, accept, test it out, accept, prompt, accept, interrupt, prompt, accept, and 1:30 later we're done.
Like I said, I use Claude Code + a proxy (llmux). The coding agent makes a HUGE difference, and CC is hands-down the best agent out there.
- For the fans of the genre, some more speculation by the master speculator himself:
https://avi-loeb.medium.com/post-perihelion-data-on-3i-atlas...
- I've used it with CC and the match was great, not a lot of issues; I believe Qwen had a clear focus on distilling Anthropic models. GLM 4.6 is maybe slightly better, but the speed dropped to half on Cerebras, so that's the price for a ~15% improvement in overall model quality. This quality does not necessarily mean the end product (the code) is 15% better, just that I now take 12 turns with GLM instead of 15 turns with Qwen to get something done; but turn speed has halved on Cerebras, so my TTC (time-to-completion) has actually gone from 15 min to 24 min!
- Gosh, I really wish Mozilla would just dig into their user base and find a way to become adequately sustainable... or find a way to work better as a foundation that is NOT maintained by Google, like the Wikimedia Foundation. I do spend a LOT of time in FF; can't anyone see there's value beyond selling ads and personal info that could make Mozilla more sustainable, dependable and resilient?