How can I unread this?
However, when someone says an operation is O(1) vs O(log N), it still tells you something important. Very broadly speaking (tons of caveats depending on problem domain, of course) O(log N) usually implies some kind of tree traversal, while O(1) implies a very simple operation or lookup. And with tree traversal, you're chasing pointers all over memory, making your cache hate you.
So, like, if you have a binary tree with 65000 elements in it, we're talking a height of 15 or 16, something like that. That's not that much, but it is 15 or 16 pointers you're chasing, possibly cache-missing on a significant number of them. Versus a hash-table lookup, where you do a single hash + one or two pointer dereferences. If this is in a hot path, you're going to notice a difference.
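If you want to see the shape of that, here's a rough Python sketch (my own, and Python's constant factors blur the cache story, but the pointer-hop count vs. single-lookup contrast is the point):

    import math, random, timeit

    N = 65_000
    print(math.log2(N))  # ~15.99, so a balanced tree is about 16 levels deep

    # A deliberately plain linked BST; a real one would rebalance itself.
    class Node:
        __slots__ = ("key", "left", "right")
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root

    def contains(root, key):
        while root is not None:
            if key == root.key:
                return True
            root = root.left if key < root.key else root.right
        return False

    keys = random.sample(range(10 * N), N)
    root = None
    for k in keys:
        root = insert(root, k)  # random insertion order keeps the tree roughly balanced
    table = set(keys)

    probe = random.choice(keys)
    # Tree lookup: chases a chain of pointers, on the order of log2(N) hops.
    print(timeit.timeit(lambda: contains(root, probe), number=100_000))
    # Hash lookup: one hash plus a probe or two.
    print(timeit.timeit(lambda: probe in table, number=100_000))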
Again, lots of caveats; this article provides a good exception. In this case, the sorting has much more beneficial cache behavior than the hash table, which makes sense. But in general, log(N) hints at some kind of tree, and that's not always what you want.
But yes, don't be afraid of log(N). log(N) is tiny, and log(N) operations are very fast. log(N) is your friend.
What's the complexity of computing the nth Fibonacci number? Make a graph of computation time with n=1..300 that visualizes your answer.
There are those who very quickly reply "linear" but admit they can't get a graph to corroborate it, and there are those who very quickly say "linear" and even produce the graph! (though not with correct Fibonacci numbers...)
However you do it, it probably can't be linear, since multiplication is probably at best O(n log(n)), though that lower bound hasn't been proven. A naive recursive calculation will be even worse, since it has exponential complexity.
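For the curious, here's roughly how I'd run that experiment in Python (my own sketch, not the poster's code; the repeat count and plotting choices are arbitrary). Note that for n up to 300 the Fibonacci numbers only reach a couple hundred bits, so the growing cost of big-int addition is barely visible and the curve can look deceptively straight; the superlinear behavior only becomes obvious at much larger n.

    import time
    import matplotlib.pyplot as plt

    def fib(n: int) -> int:
        # Iterative big-int Fibonacci: n additions, each on numbers whose
        # bit length grows linearly with n, so the total work is superlinear.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    ns = list(range(1, 301))
    times = []
    for n in ns:
        start = time.perf_counter()
        for _ in range(500):  # repeat so small n gives measurable times
            fib(n)
        times.append(time.perf_counter() - start)

    plt.plot(ns, times)
    plt.xlabel("n")
    plt.ylabel("time for 500 calls (s)")
    plt.title("Computation time of fib(n)")
    plt.show()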
Another reason to do this is that O(1) is typically a lie. Basic operations like addition are assumed to be constant time, but in practice, even writing down a number n is O(log(n)), more commonly thought of as O(b), where b is the bit length of n.
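A quick way to convince yourself of this in Python (my own sketch; the bit sizes are arbitrary): Python integers are arbitrary precision, so the cost of a single addition visibly grows with the bit length once the operands stop fitting in a machine word.

    import timeit

    for bits in (64, 1_000, 100_000, 1_000_000):
        x = (1 << bits) - 1  # a number that is `bits` bits long
        y = (1 << bits) - 3
        t = timeit.timeit(lambda: x + y, number=10_000)
        print(f"{bits:>9} bits: {t:.4f} s for 10,000 additions")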
N is the number of bits, not the number of elements, so no.
It is helpful to use N = # of elements, since the elements are often of fixed/limited size. If the elements aren't a fixed size, it's necessary to drop down to # of bits.
Interestingly, I've recently been thinking that Big-O notation is essentially a scam, in particular the log(N) part.
For any N up to about 2^32, log(N) is essentially a constant, <= 32, so we can just disregard it, making sorting effectively O(N).
For large values, even so-called linear algorithms (e.g. linear search) are actually O(N log(N)), as the storage requirement for a single element grows with log(N) (i.e. to store N = 2^32 distinct elements you need N log(N) = 2^32 * 32 bits, but to store N = 2^64 distinct elements you need 2^64 * 64 bits).
Cache locality considerations make this effect even more pronounced.
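Spelled out (my own back-of-the-envelope arithmetic, just restating the numbers above): N distinct values need at least log2(N) bits each, so N*log2(N) bits in total.

    for exp in (32, 64):
        n = 2 ** exp
        bits_per_element = exp  # log2(N) when N is a power of two
        total_bits = n * bits_per_element
        print(f"N = 2^{exp}: {bits_per_element} bits/element, "
              f"{total_bits:,} bits ≈ {total_bits / 8 / 2**30:,.0f} GiB")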