I believe it wasn't even clear back then that big-O complexity is important in assessing how efficient an algorithm is. This is pretty much obvious to us now (sometimes too obvious: there are edge cases where the constant factor wins over the asymptotic complexity).
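To make that edge case concrete (my own sketch, not from the comment above): an O(n^2) insertion sort with a tiny constant factor routinely beats an O(n log n) mergesort on small inputs, which is why real standard-library sorts (e.g. Timsort, introsort) fall back to insertion sort for short runs. The crossover point below is machine-dependent.

    import random
    import timeit

    def insertion_sort(a):
        # O(n^2) worst case, but very little work per step
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def merge_sort(a):
        # O(n log n), but pays for recursion and list allocation
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    for n in (8, 64, 1024):
        data = [random.random() for _ in range(n)]
        t_ins = timeit.timeit(lambda: insertion_sort(data[:]), number=50)
        t_mrg = timeit.timeit(lambda: merge_sort(data[:]), number=50)
        print(f"n={n:4d}  insertion sort: {t_ins:.4f}s  mergesort: {t_mrg:.4f}s")

On typical hardware insertion sort wins at n=8 and n=64 and loses badly at n=1024, even though its asymptotic bound is strictly worse.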
You cannot really say that cache-aware algorithms have a higher asymptotic bound. Some of them may have a worse-than-optimal asymptotic bound when analyzed in a cache-unaware memory model, but judging them in that model rather misses the point of the algorithms being aware of the cache.
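A minimal illustration of why the flat-memory model misleads here (my sketch, not the parent's): the two loops below perform identical operation counts, so a cache-unaware RAM model calls them equivalent, yet the strided one misses cache on nearly every access. CPython's interpreter overhead mutes the gap; in compiled code it is far larger.

    import timeit
    from array import array

    n = 2048
    # n x n matrix of doubles, zero-filled, stored row-major in one flat buffer
    a = array("d", bytes(8 * n * n))

    def sequential():
        s = 0.0
        for i in range(n):
            for j in range(n):
                s += a[i * n + j]  # stride 1: walks memory in layout order
        return s

    def strided():
        s = 0.0
        for j in range(n):
            for i in range(n):
                s += a[i * n + j]  # stride n: new cache line almost every access
        return s

    print("sequential:", timeit.timeit(sequential, number=3))
    print("strided:   ", timeit.timeit(strided, number=3))

Same algorithm on paper, different running time in practice; cache-aware designs optimize for exactly this, which the asymptotic analysis in a flat model never sees.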
I think that one of the differences is that, in 1954, computer programs were single entities written by one person or by a team working closely together. So the "between two pieces of working code" bugs, which tend to be the nastiest kind in modern development, weren't really seen yet. Million-line codebases weren't even on the table as a reasonable concept, and the idea of a program depending on 120 other libraries or frameworks was unimaginable.
"I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs."

— Maurice Wilkes