I don’t quite agree with that reasoning, however, because a test that fails to check the property it is supposed to check is a very different kind of error from an error in the implementation of that property. You don’t have to make the “same” error on both sides for an error to remain unnoticed. Compared to bookkeeping, a single random error in either the tests or the implementation is more likely to remain unnoticed.
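To make that concrete, here's a hedged sketch (hypothetical `median` function and tests): a single error on the implementation side can go unnoticed because tests only sample a few inputs; no matching mistake on the test side is needed.

```python
def median(values):
    s = sorted(values)
    mid = len(s) // 2
    # Bug: for even-length lists this should average the two middle
    # elements, but it just returns the upper one.
    return s[mid]

# The test suite happens to use only odd-length inputs, so it is green.
assert median([3, 1, 2]) == 2
assert median([5, 1, 9, 7, 3]) == 5
print("tests pass")  # despite median([1, 2, 3, 4]) returning 3, not 2.5
```

The tests themselves are correct here; they simply never exercise the input that triggers the bug, which is exactly the asymmetry double-entry bookkeeping doesn't have.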
Yeah, but it's very different from tests vs. code, right? Every entry has at least two sides and you record them together; they are not independent the way tests and code are.
You can easily make a mistake: if you write a wrong entry, it will still balance. Balanced books ≠ accurate books is my point. And there is no distinction between "code" and "tests" in double entry; it's all just "code".
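A minimal sketch of what I mean (account names and amounts are made up): a mistyped amount gets posted to both sides, as double entry requires, so the books balance perfectly while both accounts are wrong.

```python
from collections import defaultdict

balances = defaultdict(int)

def post(debit_account: str, credit_account: str, amount: int) -> None:
    """Record one entry: debit one account, credit another."""
    balances[debit_account] += amount
    balances[credit_account] -= amount

# Intended: a $50 supplies purchase paid from cash.
# Typo: $500 is posted instead -- to BOTH sides, as double entry requires.
post("supplies", "cash", 500)

# The books still balance perfectly...
assert sum(balances.values()) == 0
# ...even though both account balances are off by $450.
print(balances["supplies"], balances["cash"])  # 500 -500
```

The balancing check catches entries that were recorded inconsistently, not entries that were wrong to begin with.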
So it seems like the person who made the metaphor doesn't really know how double entry works, or maybe took one accounting class.
The point of the current thread is that the use of AI coding agents threatens to disrupt that. For example, they could observe a true-positive test failure and opt to modify the test so it passes instead.
Naturally. Hence "high confidence" and not "full confidence". But let's not travel too far into the weeds here. Getting us back on track, what about the concept of "high confidence" is not understandable?
Also, tests and code are independent, while in double entry you always affect both sides. Audits exist for a reason.