
Kudos to Cloudflare for clarity and diligence.

When talking of their earlier Lua code:

> we have never before applied a killswitch to a rule with an action of “execute”.

I was surprised that a rules-based system was not tested completely, perhaps because the Lua code is legacy relative to the newer Rust implementation?

It tracks with what I've seen elsewhere: quality engineering can't keep up with production engineering. It's just that I think of Cloudflare as an infrastructure company, where that shouldn't be true.

I had a manager who came from defense electronics in the 1980s. He said that in that context, the quality engineering team was always in charge, and always more skilled. For him, software is backwards.


"Kudos"? This is like the South Park episode in which the oil company guy just excuses himself while the company just continues to fuck up over and over again. There's nothing to praise, this shouldn't happen twice in a month. Its inexcusable.
twice in a month _so far_
We still have two holidays and associated vacations and vacation brain to go. And then the January hangover.

Every company that has ignored my following advice has experienced a day-for-day slip in first-quarter scheduling. And that advice is: not much work gets done between Dec 15 and Jan 15. You can rely on a week's worth; more than that is optimistic. People are taking it easy, and when they need to verify things with someone who is on vacation, they are blocked. And when that person gets back, it's two days until their vacation, so it's a crapshoot.

NB: there’s work happening on Jan 10, for certain, but it’s not getting finished until the 15th. People are often still cleaning up after bad decisions they made during the holidays and the subsequent hangover.

Those AI agents are coding fast, or am I missing some obvious concept here?
reaching for that _one 9 of uptime_
It's weird reading these reports because they don't seem to test anything at all (or at least there's very little mention of testing).

Canary deployment, testing environments, unit tests, integration tests, anything really?

It sounds like they test by merging directly to production, but surely they don't.

The problem is that Cloudflare do incremental rollouts and loads of testing for _code_. But they don't do the same thing for configuration - they globally push out changes because they want rapid response.

It's still a bit silly though; their claimed reasoning probably doesn't really stack up for most of their config changes - I don't think it's that likely that a 0.1->1->10->100 rollout over the course of 10 minutes would be a catastrophically bad idea for them for _most_ changes.

And to their credit, it does seem they want to change that.
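
For what it's worth, the machinery for that kind of staged config rollout isn't huge. A minimal sketch of the idea in Python (the stage percentages mirror the 0.1->1->10->100 example above; the function names, soak times, and health check are made up for illustration, not Cloudflare's actual tooling):

```python
import hashlib
import time

# Stages: (% of fleet, soak time in seconds before widening further).
STAGES = [(0.1, 120), (1.0, 120), (10.0, 120), (100.0, 0)]


def in_cohort(machine_id: str, percent: float) -> bool:
    """Deterministically place a machine into the first `percent` of the fleet."""
    bucket = int(hashlib.sha256(machine_id.encode()).hexdigest(), 16) % 10_000
    return bucket < percent * 100  # e.g. 0.1% of the fleet -> buckets 0..9


def staged_rollout(machine_ids, apply_config, is_healthy):
    """Push a config change out stage by stage, aborting if health degrades."""
    for percent, soak_seconds in STAGES:
        for machine in machine_ids:
            if in_cohort(machine, percent):
                apply_config(machine)  # assumed idempotent; earlier cohorts get re-applied harmlessly
        time.sleep(soak_seconds)       # let errors surface before widening the blast radius
        if not is_healthy():
            raise RuntimeError(f"aborting rollout at the {percent}% stage")
```

With two-minute soaks that's roughly a 6-10 minute rollout, which is exactly the trade-off against "we need this everywhere right now" that the security-fix argument is about.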

Yeah, to me it doesn't make any sense - configuration changes are just as likely to break stuff (as they've discovered the hard way), and both of these issues could have been found in a testing environment before being deployed to production.
In the post they described that they observed errors happening in their testing env, but decided to ignore them because they were rolling out a security fix. I am sure there is more nuance to this, but I don't know whether that makes it better or worse.
> but decided to ignore because they were rolling out a security fix.

A key part of secure systems is availability...

It really looks like vibe-coding.

This is funny, considering that someone who worked in the defense industry (guided missile systems) found a memory leak in one of their products back then. They told him that they knew about it, but that it was timed just right for the range the system would be used at, so it didn't matter.
This paraphrased urban legend has nothing to do with quality engineering though? As described, it's designed to the spec and working as intended.
It tracks with my experience in software quality engineering. Asked to find problems with something already working well in the field. Dutifully find bugs/etc. Get told that it's working though so nobody will change anything. In dysfunctional companies, which is probably most of them, quality engineering exists to cover asses, not to actually guide development.
It is not dysfunctional to ignore unreachable "bugs". A memory leak that will never be hit, because the missile explodes long before that much time has passed, is not a bug.
It's a debt though, because people will forget it's there, and then at some point someone changes a counter from milliseconds to microseconds and the issue happens 1000 times sooner.

It's never right to leave structural issues even if "they don't happen under normal conditions".
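
To put rough numbers on that unit change (all figures hypothetical, just to show the arithmetic):

```python
# Hypothetical figures: a leak that is only "fine" because of the assumed
# tick rate and maximum flight time.
LEAK_PER_TICK_BYTES = 64
MAX_FLIGHT_TIME_S = 600                  # 10-minute flight, then the problem "solves itself"
AVAILABLE_HEAP_BYTES = 64 * 1024 * 1024  # 64 MiB


def bytes_leaked(tick_period_s: float) -> int:
    ticks = MAX_FLIGHT_TIME_S / tick_period_s
    return int(ticks * LEAK_PER_TICK_BYTES)


for period, label in [(1e-3, "millisecond ticks"), (1e-6, "microsecond ticks")]:
    leaked = bytes_leaked(period)
    fits = "fits in the heap" if leaked < AVAILABLE_HEAP_BYTES else "exhausts the heap"
    print(f"{label}: {leaked / 2**20:,.0f} MiB leaked ({fits})")
```

The same leak goes from roughly 37 MiB over the flight (harmless) to a thousand times that, with no change to the line of code that actually leaks.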

The way it always seemed to go for me, when I was in that role, is that the product is already complete, development is done, you're handed all the tests/etc that the disinterested developers care to give you, and you're told to make those tests presentable and robust, and increase test coverage. The process of doing that inevitably uncovers issues, but nobody cares because the thing is already done and working, so what was the point of any of it? The point was just to check off a box. At companies like this, the role is bullshit work.
Having observed an average of two management rotations at most of the clients our company works for, this comes as absolutely no surprise to me. Engineering is acting perfectly reasonably, optimizing for cost and time within the constraints they were given. Constraints are updated on a whim (to please marketing or investors) without consulting engineering; cue disaster. Not even surprising to me anymore...
... until the extended-range version is ordered and no one remembers to fix the leak. :]
Ariane 5 happens.
They will remember, because it'll have been measured and documented, rigorously.
I've found that the real trick with documentation isn't creation, it's discovery. I wonder how that information is easily found afterwards.
> I wonder how that information is easily found afterwards.

Military hardware is produced with engineering design practices that look nothing at all like what most of the HN crowd is used to. There is an extraordinary amount of documentation, requirements, and validation done for everything.

There is a MIL-SPEC for pop tarts which defines all part sizes, tolerances, etc.

Unlike a lot of the software world, military hardware gets DONE with design and then they just manufacture it.

By reading the documentation thoroughly as a compulsory first step to designing the next system that depends on it.

I realise this will probably boggle the mind of the modern software developer.

For the new system to be approved, you need to document the properties of the software component that are deemed relevant. The software system uses dynamic allocation, so "what do the allocation patterns look like? are there leaks, risks of fragmentation, etc, and how do we characterise those?" is on the checklist. The new developer could try to figure this all out from scratch, but if they're copying the old system's code, they're most likely just going to copy the existing paperwork, with a cursory check to verify that their modifications haven't changed the properties.

They're going to see "oh, it leaks 3MiB per minute… and this system runs for twice as long as the old system", and then they're going to think for five seconds, copy-paste the appropriate paragraph, double the memory requirements in the new system's paperwork, and call it a day.

Checklists work.

If ownerless code doesn’t result in discoverability efforts then the whole thing goes off the rails.

I won't remember this block of code, because five other people have touched it. So I need to be able to see what has changed and what it talks to, so that I can quickly verify whether my old assumptions still hold true.

When people don't read the documentation, discovery is a real problem. When people do read the documentation, things are different. Many software engineers do not read the documentation, and then complain to you if they break something in a documented way. Compare that to hardware engineers, whose vendors put out tens of thousands of pages of documentation for single parts: they have a lot of skill at reading documentation (and the vendors at writing it).
Was this one measured and documented rigorously?

Well obviously not, because the front fell off. That’s a dead giveaway.

My hunch is that we do the same with memory leaks or other bugs in web applications, where the lifetime of a request is short.
