However, when someone says an operation is O(1) vs O(log N), it still tells you something important. Very broadly speaking (tons of caveats depending on problem domain, of course), O(log N) usually implies some kind of tree traversal, while O(1) implies a very simple operation or lookup. And with tree traversal, you're chasing pointers all over memory, making your cache hate you.
So, like, if you have a binary tree with 65000 elements in it, we're talking a height of 15 or 16, something like that. That's not that much, but it is 15 or 16 pointers you're chasing, possibly cache-missing on a significant number of them. Versus a hash-table lookup, where you do a single hash + one or two pointer dereferences. If this is in a hot path, you're going to notice a difference.
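If you want to see the shape of that difference, here's a minimal sketch in Python. Everything in it (the unbalanced BST, the 65000-key/100000-probe sizes) is my own made-up micro-benchmark, and interpreter overhead swamps the actual cache effects, so treat the numbers as an illustration of the pointer-chasing pattern, not a serious measurement:

    import random
    import time

    # Minimal unbalanced BST -- a stand-in for the pointer-chasing
    # structure described above; random insertion order keeps the
    # expected depth logarithmic, but a real tree would self-balance.
    class Node:
        __slots__ = ("key", "left", "right")
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        if root is None:
            return Node(key)
        node = root
        while True:
            if key < node.key:
                if node.left is None:
                    node.left = Node(key)
                    return root
                node = node.left
            else:
                if node.right is None:
                    node.right = Node(key)
                    return root
                node = node.right

    def contains(root, key):
        # One pointer chase per level: this loop is the "15 or 16
        # dereferences" from the comment above.
        node = root
        while node is not None:
            if key == node.key:
                return True
            node = node.left if key < node.key else node.right
        return False

    keys = random.sample(range(1_000_000), 65_000)
    root = None
    for k in keys:
        root = insert(root, k)
    table = set(keys)  # hash-based lookup for comparison

    probes = random.choices(keys, k=100_000)

    t0 = time.perf_counter()
    for k in probes:
        contains(root, k)
    t1 = time.perf_counter()
    for k in probes:
        k in table
    t2 = time.perf_counter()

    print(f"tree: {t1 - t0:.3f}s")
    print(f"hash: {t2 - t1:.3f}s")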
Again, lots of caveats; this article provides a good exception. In this case, the sorting approach has much more cache-friendly behavior than the hash table, which makes sense. But in general, log(N) hints at some kind of tree, and that's not always what you want.
But yes, don't be afraid of log(N). log(N) is tiny, and log(N) operations are very fast. log(N) is your friend.
What's the complexity of computing the nth Fibonacci number? Make a graph of computation time with n=1..300 that visualizes your answer.
There are those who very quickly reply "linear" but admit they can't get a graph to corroborate it, and there are those who very quickly say "linear" and even produce the graph! (though not with correct Fibonacci numbers...)
However you do it, it probably can't be linear: fib(n) grows like φ^n, so the result itself has Θ(n) bits, and the fast methods boil down to multiplying numbers that size. Multiplication is probably at best O(n log(n)), though that lower bound hasn't been proven. A naive recursive calculation will be even worse, since that has exponential complexity.
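For what it's worth, here's roughly what that graph exercise might look like in Python (assuming matplotlib; the names fib_pair/fib_iter are mine, not from the question). The fast-doubling version is what the multiplication bound applies to: O(log n) multiplications of Θ(n)-bit numbers. And at n <= 300 the timings are mostly noise, which is arguably the point of the question:

    import time
    import matplotlib.pyplot as plt

    def fib_pair(n):
        # Fast doubling: returns (fib(n), fib(n+1)) using
        # O(log n) big-int multiplications.
        if n == 0:
            return (0, 1)
        a, b = fib_pair(n >> 1)
        c = a * (2 * b - a)  # fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
        d = a * a + b * b    # fib(2k+1) = fib(k)^2 + fib(k+1)^2
        return (d, c + d) if n & 1 else (c, d)

    def fib_iter(n):
        # n big-int additions; since fib(n) has Theta(n) bits,
        # this is about O(n^2) bit operations overall.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Sanity check that both versions agree (unlike some of the graphs
    # produced in interviews...).
    assert all(fib_iter(n) == fib_pair(n)[0] for n in range(50))

    ns = list(range(1, 301))
    times = []
    for n in ns:
        t0 = time.perf_counter()
        fib_iter(n)
        times.append(time.perf_counter() - t0)

    plt.plot(ns, times)
    plt.xlabel("n")
    plt.ylabel("seconds")
    plt.title("time to compute fib(n), n = 1..300")
    plt.show()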
How can I unread this?