I agree, it can be. My example above matches yours (fuzzing a data structure).
I coded some tests that ensure the data structure is useful, many of them test examples from papers describing the data structure, but that don't necessarily cover all the corner cases.
Then I fuzzed it. I used Go's fuzzer, which is geared towards parsers and stuff: it generates a stream of bytes and feeds that to your fuzz target. The data structure is a set/map, so I interpret the stream of bytes as commands to add/remove random elements. After I add an element, contains needs to return true; after I remove one, contains needs to return false; if I put in a mapping, looking it up must return what I just put there, etc. And at every step all the data structure invariants (the ones that ensure logarithmic search, etc.) need to hold.
That was stupidly effective at finding a few bugs, all within seconds, all with sequences of less than a dozen operations. Then it stopped finding anything, and coverage was at 100%.
Apart from the ergonomics (I kind of build my own state machine transitions out of a stream of bytes), the tooling actually seems more effective than property testing libraries.
Still curious to understand what I'm missing out.