- Should be far, far larger news, to be honest.
- "You are entirely correct!"
- Exactly.
- Fair. A good callout. And maybe the right move. However, a healthy IBM would not have needed to calve off its entire Global Technology Services business.
- Thank you for the correction.
- So much that we presume in the modern cloud wasn't a given when Apache Kafka was first released in 2011.
kevstev wrote just above about Kafka being written to run on spinning disks (HDDs), while Redpanda was written to take advantage of the latest hardware (local NVMe SSDs). He has some great insights.
As well, Apache Kafka was written in Java, back in an era when you weren't quite sure what operating system you might be running on. For example, when Azure first launched, it was a Windows NT-based system called Windows Azure. Most everyone else had already decided to roll Linux. Microsoft refused to budge on Linux until 2014, and didn't release its own Azure Linux until 2020.
Once everyone decided to roll Linux, the "write once, run everywhere" promise of Java was obviated. But because you were still locked into a Java Virtual Machine (JVM), your application couldn't be tuned to the underlying hardware and operating system it was running on.
Redpanda, for example, is written in C++ on top of the Seastar framework (seastar.io), the same framework at the heart of ScyllaDB. Seastar is a thread-per-core, shared-nothing architecture that allows Redpanda to wring performance out of the underlying hardware in ways a Java app can only dream of. CPU utilization, memory usage, IO throughput: it all just performs better on Redpanda.
It means you're actually getting better utility out of the servers you deploy. Fewer wasted, idle CPU cycles, so better price-performance. Faster writes. Lower p99 latencies. It's just... better.
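To make "thread-per-core, shared-nothing" concrete, here is a minimal sketch in plain C++ with Linux thread affinity. This is not Seastar or Redpanda code, just an illustration of the idea: one pinned thread per core, each owning its own shard of data, so nothing is shared and nothing is locked across cores.

    // Minimal thread-per-core, shared-nothing sketch (illustrative only, Linux-specific).
    // One thread is pinned to each CPU and owns its own shard of the data;
    // no locks or state are shared across cores until the final combine step.
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    struct alignas(64) Shard {   // cache-line aligned to avoid false sharing
        long sum = 0;            // owned exclusively by one core's thread
    };

    static void pin_to_core(std::thread& t, unsigned core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
    }

    int main() {
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 1;
        std::vector<Shard> shards(cores);
        std::vector<std::thread> workers;

        for (unsigned c = 0; c < cores; ++c) {
            workers.emplace_back([&shards, c] {
                Shard& s = shards[c];                  // this core's shard only
                for (long i = 0; i < 1000000; ++i)     // no cross-core sharing,
                    s.sum += i % 7;                    // so no locks, no contention
            });
            pin_to_core(workers.back(), c);            // pin the worker to core c
        }
        for (auto& w : workers) w.join();

        long total = 0;
        for (auto& s : shards) total += s.sum;         // combine per-shard results
        std::printf("total across %u shards: %ld\n", cores, total);
        return 0;
    }

Seastar goes much further (its own per-shard memory allocation, task scheduler, and explicit message passing between shards), but the core principle is the same.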
Now, I am biased. I work at Redpanda now. But I've been a big fan of Kafka since 2015. I am still bullish on data streaming. I just think that Apache Kafka, as a Java-based platform, needs some serious rearchitecture.
Even Confluent doesn't use vanilla Kafka. They rewrote their own engine, Kora. They claim it is 10x faster. Or 30x faster. Depending on what you're measuring.
1. https://www.confluent.io/confluent-cloud/kora/
2. https://www.confluent.io/blog/10x-apache-kafka-elasticity/
- There have been annual layoffs at Red Hat since 2023, and this year they just laid off more. The latest round is expected to be "a low single digit percentage of our global workforce," which will likely mean hundreds of folks at Red Hat.
1. https://www.cio.com/article/4084855/ibm-to-cut-thousands-of-...
2. https://www.newsobserver.com/news/business/article312796900....
- I have thought quite a bit today about the news from Confluent and IBM. I have friends and colleagues at both companies. When I was an undergrad at Carnegie Mellon University in the 1980s I used to wear a big brown and tan IBM button that said "THINK."
And here is a picture of Ben Lorica 罗瑞卡 interviewing Jay Kreps and other industry leaders at The Hive back on the evening of 25 February 2015. I believe they were talking about strategies for implementing Lambda Architecture.
All of which is to say: I have been a big fan of both companies for a long, long time. While today I am employed at Redpanda Data, a direct competitor of Confluent, I hope to set aside any "team"-based bias to provide a sober and honest appraisal.
First, IBM has been shrinking. They were at 345,000 employees as of their 2020 Annual Report. But the COVID-19 pandemic was only one of many setbacks the company faced when Arvind Krishna took the helm as CEO. By December 2024 the employee base had shrunk to 270,000, a drop of nearly 22%.
IBM revenue in 2020: $73.6B.
IBM revenue in 2024: $62.75B — a less-precipitous drop of 15%.
Revenue per employee over that period rose from $213k to $232k.
Confluent on its own? $400k.
And to compare: Amazon earns $580k per employee. Microsoft generates over $1M per. Nvidia? $4M-$5M.
And now, in November, they announced thousands more layoffs. No one seems safe, regardless of job title. Those cut include positions in "artificial intelligence, marketing, software engineering and cloud technology."
Next, IBM has had a mixed record as a steward of acquisitions. Red Hat has doubled its revenue since the 2019 acquisition. For a while its headcount kept growing too, reaching roughly 19,000 by 2023. But parent IBM forced it into layoffs in April of that year, and again each year since, even though it remains one of the highest-margin businesses in IBM's portfolio.
SoftLayer — "IBM Cloud Classic" — also suffered significant layoffs in early 2025, with offshoring sending jobs to India.
DataStax had layoffs in 2023-2024, even before its acquisition was announced. Maybe they were "trimming the fat" to get into shape to be acquired.
As a person with a long career in marketing, I know that many of the first roles to be jettisoned at a newly-acquired company tend to be in go-to-market organizations. Sales, Marketing, Developer Relations, Documentation, Training, Community, Customer Service. These tend to be seen as "nice to haves" by upper management. But their loss guts organizations and hollows out user-facing teams and open source communities.
My hope is that Confluent is spared as much of the pain and turmoil as possible, and that, like Red Hat, it is run as autonomously as possible.
[Crossposted from LinkedIn here, where you can see the photo mentioned: https://www.linkedin.com/feed/update/urn:li:activity:7404052...]
- You can have DeepWiki literally scan the source code and tell you:
> 2. Delayed Sync Mode (Default)
> In the default mode, writes are batched and marked with needSync = true for later synchronization filestore.go:7093-7097 . The actual sync happens during the next syncBlocks() execution.
However, if you read DeepWiki's conclusion, it is far more optimistic than what Aphyr uncovered in real-world testing.
> Durability Guarantees
> Even with delayed fsyncs, NATS provides protection against data loss through:
> 1. Write-Ahead Logging: Messages are written to log files before being acknowledged
> 2. Periodic Sync: The sync timer ensures data is eventually flushed to disk
> 3. State Snapshots: Full state is periodically written to index.db files filestore.go:9834-9850
> 4. Error Handling: If sync operations fail, NATS attempts to rebuild state from existing data filestore.go:7066-7072
https://deepwiki.com/search/will-nats-lose-uncommitted-wri_b...
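To illustrate the pattern the quoted summary describes, here is a rough C++ sketch. It is not NATS code; names like need_sync_ and sync_blocks() are just stand-ins for the behavior described above. The point is the window between the ack and the next periodic fsync: anything acknowledged inside that window can be lost on a crash or power failure, which is exactly what real-world testing tends to surface.

    // Illustrative sketch of a "delayed sync" store (not NATS source code).
    // Writes are acknowledged after write(2) but before fsync(2); a periodic
    // timer flushes later. Anything acked inside that window can vanish if
    // the machine loses power before the next flush.
    #include <fcntl.h>
    #include <unistd.h>
    #include <atomic>
    #include <chrono>
    #include <string>
    #include <thread>

    class DelayedSyncStore {
    public:
        explicit DelayedSyncStore(const char* path)
            : fd_(::open(path, O_CREAT | O_WRONLY | O_APPEND, 0644)) {}

        // Returns "acked" as soon as the data is in the OS page cache.
        bool append(const std::string& msg) {
            if (::write(fd_, msg.data(), msg.size()) < 0) return false;
            need_sync_ = true;   // real durability is deferred to sync_blocks()
            return true;         // <-- ack happens here, before any fsync
        }

        // Called periodically by a timer thread, mirroring a delayed-sync design.
        void sync_blocks() {
            if (need_sync_.exchange(false))
                ::fsync(fd_);    // only now is earlier acked data durable
        }

        ~DelayedSyncStore() { ::close(fd_); }

    private:
        int fd_;
        std::atomic<bool> need_sync_{false};
    };

    int main() {
        DelayedSyncStore store("/tmp/delayed.log");
        std::thread timer([&store] {
            for (int i = 0; i < 5; ++i) {
                std::this_thread::sleep_for(std::chrono::seconds(2));
                store.sync_blocks();
            }
        });
        store.append("acked-but-not-yet-durable\n");  // lost if we crash right now
        timer.join();
    }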
- French is actually <30%. There is a well-sourced Wikipedia article about this.
• French (including Old French: 11.66%; Anglo-French: 1.88%; and French: 14.77%): 28.30%;
• Latin (including modern scientific and technical Latin): 28.24%;
• Germanic languages (including Old English, Proto-Germanic and others: 20.13%; Old Norse: 1.83%; Middle English: 1.53%; Dutch: 1.07%; excluding Germanic words borrowed from a Romance language): 25%;
• Greek: 5.32%;
• no etymology given: 4.04%;
• derived from proper names: 3.28%; and
• all other languages: less than 1%
https://en.wikipedia.org/wiki/Foreign-language_influences_in...
Also, one could argue French itself is an agglomeration of Vulgar Latin (87%), its own Frankish Germanic roots (10%), and a few words of Gaulish and Breton Celtic origin.
https://en.wikipedia.org/wiki/List_of_French_words_of_German...
- Thanks.
KubeCon is going to be affected. I can imagine what will happen when speakers, vendors, and attendees miss their flights.
- Anyone have eyes on the enumerated list of airports affected?
- Seems all Microsoft-related domains are impacted in some way.
• https://www.xbox.com/en-US also doesn't fully render. The header comes up, but not the rest of the page.
• https://www.minecraft.net/en-us is extremely slow, but eventually came up.
- True. Redpanda does not use Zookeeper.
Yet to be fair to the Kafka folks, ZooKeeper is no longer part of Kafka at all as of the March 2025 release of Apache Kafka 4.0:
"Kafka 4.0's completed transition to KRaft eliminates ZooKeeper (KIP-500), making clusters easier to operate at any scale."
Source: https://developer.confluent.io/newsletter/introducing-apache...
- The only thing that might take "weeks" is procrastination. Presuming absolutely no background other than general data engineering, a decent beginner online course in Kafka (or Redpanda) will run about 1-2 hours.
You should be able to install within minutes.
- Correct. Redpanda is source-available.
When you have a C++ code base, the number of external folks who want to contribute, and who can do so effectively, drops considerably. Our "cousins in code" at ScyllaDB announced last year that they were moving to source-available because of the lack of OSS contributors:
> Moreover, we have been the single significant contributor of the source code. Our ecosystem tools have received a healthy amount of contributions, but not the core database. That makes sense. The ScyllaDB internal implementation is a C++, shard-per-core, future-promise code base that is extremely hard to understand and requires full-time devotion. Thus source-wise, in terms of the code, we operated as a full open-source-first project. However, in reality, we benefitted from this no more than as a source-available project.
Source: https://www.scylladb.com/2024/12/18/why-were-moving-to-a-sou...
People still want to get free utility out of the source-available code. Less commonly, they want to be able to see the code to understand it and potentially troubleshoot it. Yet asking for active contribution is, for almost all, a bridge too far.
- Yes, for Redpanda. There's a blog about that:
"The use of fsync is essential for ensuring data consistency and durability in a replicated system. The post highlights the common misconception that replication alone can eliminate the need for fsync and demonstrates that the loss of unsynchronized data on a single node still can cause global data loss in a replicated non-Byzantine system."
For all that, though, Redpanda is still blazingly fast.
https://www.redpanda.com/blog/why-fsync-is-needed-for-data-s...
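As a contrast to the delayed-sync sketch further up: this is the "fsync before acknowledging" pattern the blog argues for, again only an illustrative sketch rather than Redpanda's actual code. The append doesn't report success until the bytes are flushed, so a node that crashes and recovers can't silently forget data it already acknowledged.

    // Illustrative "fsync before ack" append (not Redpanda's actual code).
    // The caller only sees success once the bytes are durable on this node,
    // so a crash-and-recover node cannot silently forget acknowledged data.
    #include <fcntl.h>
    #include <unistd.h>
    #include <string>

    // Returns true only after the record is both written and flushed to disk.
    bool append_durable(int fd, const std::string& record) {
        const char* p = record.data();
        size_t remaining = record.size();
        while (remaining > 0) {                     // handle short writes
            ssize_t n = ::write(fd, p, remaining);
            if (n < 0) return false;
            p += n;
            remaining -= static_cast<size_t>(n);
        }
        return ::fdatasync(fd) == 0;                // durability before the ack
    }

    int main() {
        int fd = ::open("/tmp/durable.log", O_CREAT | O_WRONLY | O_APPEND, 0644);
        if (fd < 0) return 1;
        bool acked = append_durable(fd, "only-acked-once-on-disk\n");
        ::close(fd);
        return acked ? 0 : 1;
    }

In practice a fast system batches many records into each fsync (group commit) to amortize its cost, which is how you can keep fsync on the write path and still post low latencies.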