Even before LLMs, most coding problems were really a test of efficient pointer manipulation: you move a pointer across the characters of a string, step an index through an array, or traverse an n-node linked list.
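For reference, the canonical shape of those problems, as a quick sketch:

```python
# Classic interview fare: reverse a sequence in place with two pointers.
def reverse_in_place(xs: list) -> None:
    i, j = 0, len(xs) - 1
    while i < j:
        xs[i], xs[j] = xs[j], xs[i]  # swap and move both pointers inward
        i += 1
        j -= 1
```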
This level of pointer manipulation is rarely needed these days; sorting, searching, and parsing are all handled by library functions. An LRU cache is literally a function decorator in Python.
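In Python that decorator is functools.lru_cache, straight from the standard library:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n: int) -> int:
    # Results are memoized, so repeated calls are cache lookups
    # rather than recomputations.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```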
What I care about more is whether someone understands that network calls take time, that data processing is fast, and how to optimize that pipeline. Threading is not a requirement; they can do it with async, or even without async with just smart scheduling.
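A minimal sketch of the async approach, with a hypothetical fetch standing in for a real network call:

```python
import asyncio

async def fetch(url: str) -> bytes:
    # Hypothetical network call; the sleep stands in for latency.
    await asyncio.sleep(0.5)
    return b"..."

async def main(urls: list[str]) -> list[bytes]:
    # Issue all requests concurrently: total wall-clock time is
    # roughly one round trip, not len(urls) round trips.
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(main([f"https://example.com/{i}" for i in range(10)]))
```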
What makes the problem LLM-proof is that it doesn't disclose anything about network latencies or the structure of the data. Standard iterative solutions from an LLM usually end up taking longer because they get stuck waiting on a single network response. So you can clearly tell who understands the core operations at hand versus who memorized a bunch of patterns and/or is leaning on an LLM.
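For contrast, the sequential version an LLM typically produces (same hypothetical fetch, blocking this time):

```python
import time

def fetch_sync(url: str) -> bytes:
    # Hypothetical blocking call; the sleep stands in for latency.
    time.sleep(0.5)
    return b"..."

urls = [f"https://example.com/{i}" for i in range(10)]

# The textbook loop: ~5 s wall-clock for 10 calls at ~0.5 s each,
# because every call waits for the previous response to arrive.
results = [fetch_sync(u) for u in urls]
```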