I actually have the feeling it’s not as hardcore as it used to be on average. E.g. OpenAI doesn’t have a straight-up LC interview even though they’re probably the most sought-after company. Google and MS and others still do it, but it feels like it has less weight in the final feedback than it did before. Most en-vogue startups have also ditched it for real-world coding exercises.
Probably due to the fact that LC has been thoroughly gamed and is even less of a useful signal than it was before.
Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
>Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
Should be illegal honestly.
Thankfully not everything from SV culture gets adopted.
That wouldn't be hard to do. Given the disparate impact standard, everything is biased against a protected class.
I can't imagine this kind of entitlement. If you don't want to work for them, don't study leetcode. If you want to work for them (and get paid tons of money), study leetcode. This isn't a difficult Aristotelian ethics/morals question.
Ten years ago it was more based on Cracking the Coding Interview.
So I'd guess what you're referring to is even older than that.
Apart from those companies where social capital counts for more ...
Based on my own experiences, that was true 25 years ago. 20 years ago, coding puzzles had become a standard part of interviewing, but it was pretty lightweight. 5 years ago (covid!) everything was leetcode just to get to the interview stage.
The FAANGs jump and then the rest of the industry does some dogshit imitation of their process.
It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can.
I’ve seen how the FAANGs' interview processes account for those types of bias and mental blindness and are actually effective, but their solutions require time and/or money, so everywhere I’ve been implements the first 80% that’s cheap and then skips the rest that makes it work.
Any way to reach out? :)
I think it boils down to companies not wanting to burn money and time on training, and trying to come up with all sorts of optimized (but ultimately contrived) interview processes. Now both parties are screwed.
>It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can.
Tell me about it! Who were you explaining this to?
Ah, but, the road to becoming good at Leetcode/100m sprint is:
>a slow arduous never ending jog with multiple detours and stops along the way
Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.
Startups that wanted to emulate FAANGs then cargo-culted them, particularly if they were also founded by CS students or ex-FAANG (which describes a lot of them). Very, very few of these actually try any other way of hiring and compare them.
Being able to study hard and learn something well is certainly a great skill to have, but leetcode is a really poor one to choose. It's not a skill that you can acquire on the job, so it rules out anyone who doesn't have time to spend months studying something in their own time that's inherently not very useful. If they chose to test skills that are hard and take effort to learn, but are also relevant to the job, then they can also find people who are good at learning on the job, which is what they are actually looking for.
Memorizing the Top 100 list from Leetcode only works for a few companies (notably and perplexingly, Meta) but doesn't for the vast majority.
Also, just solving the problem isn't enough to perform well on the interview. Getting the optimal solution is just the table stakes. There's communication, tradeoffs between alternative solutions, coding style, follow-up questions, opportunities to show off language trivia etc.
Memorizing problems is wholly not the point of Leetcode grinding at all.
In terms of memorizing "patterns": in mathematics and computer science, all new discovery is just a recombination of what was already known. There's virtually no information coming from outside the system like in, say, biology or physics. The whole field is just memorized patterns being recombined in different ways to solve different problems (a concrete sketch follows below).
I guess it's a matter of opinion but my point is, this is probably the right metric. Arguably, the kind of people who shut up and play along with these stupid games because that's where the money is make better team players in large for-profit organizations than those who take a principled stance against ever touching Leetcode because their efforts wouldn't contribute anything to the art.
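To make the "memorized patterns recombined" claim concrete, here's a minimal sketch (names and test strings are just illustrative): the classic "longest substring without repeating characters" question is nothing more than a sliding window glued to a hash map, two patterns every grinder has memorized.

    # A sliding window plus a hash map: two memorized patterns recombined
    # to solve "longest substring without repeating characters".
    def longest_unique_substring(s: str) -> int:
        last_seen = {}  # char -> index of its most recent occurrence
        start = 0       # left edge of the current window
        best = 0
        for i, ch in enumerate(s):
            # Repeat inside the window? Shrink the window past it.
            if ch in last_seen and last_seen[ch] >= start:
                start = last_seen[ch] + 1
            last_seen[ch] = i
            best = max(best, i - start + 1)
        return best

    assert longest_unique_substring("abcabcbb") == 3  # "abc"
    assert longest_unique_substring("bbbbb") == 1

Neither half is novel; the "discovery" is noticing they compose.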
That's literally what CS teaches you too. Which is what "leetcode" questions are: fundamental CS problems that you'd learn about in a computer science curriculum.
It's called "reducing" one problem to another. We had a mandatory class that spent a big chunk of a semester on reducing problems, i.e. figuring out how you can solve a new type of question/problem with an algorithm or two that you already know.
Like showing that "this is just bin packing". The exact algorithms for that "suck" in the CS kind of sense, but there are real-world heuristics that are "good enough" to get shit done (sketched below).
Or showing that something "doesn't work, period" by reducing the halting problem to it (assuming that nobody has solved that yet - oh and good luck btw. if you want to try ;) )
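For the bin-packing aside, a minimal sketch of first-fit decreasing (function name and toy numbers are made up for illustration): exact bin packing is NP-hard, but this textbook heuristic is provably within roughly 11/9 of the optimal bin count and is usually good enough in practice.

    # First-fit decreasing: sort items largest-first, drop each into the
    # first bin with room, open a new bin when none fits.
    def first_fit_decreasing(items, capacity):
        bins, spare = [], []  # contents and remaining capacity per bin
        for item in sorted(items, reverse=True):
            for i, space in enumerate(spare):
                if item <= space:
                    bins[i].append(item)
                    spare[i] -= item
                    break
            else:  # no existing bin fits this item
                bins.append([item])
                spare.append(capacity - item)
        return bins

    print(first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6], 1.0))
    # -> [[0.7, 0.2, 0.1], [0.6, 0.4], [0.5, 0.5], [0.5, 0.2]]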
Then comes the ability/memorization to actually code it: e.g. if I knew it needed coding a red-black tree, I wouldn't even start.
Architecture is not part of leetcode.
People complain, rightly so in some cases, that their "interview" is really doing some (unpaid) work for the company.
Math is like that as well though. It's about learning all the prior axioms, laws, knowing allowed simplifications, and so on.
or that writing a new book is the same.
I.e. it's not about that. Like sure it helps to have a base set of shared language, knowledge, and symbols, but math is so much more than just that.
In all seriousness, the intersection between correctness and project delivery is where engineering sits. Solutions must be good enough, correct enough, and cheap enough to fit the use case, but ideally no more than that.
This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
My company uses LeetCode. All I want is sane interfaces and good documentation. What we get is far more likely to be something clever, broken, and poorly documented than something "excellent", so something is missing from this correlation.
The sentence I've singled out above is a very confident statement, considering that inertia in large companies is a byword at this point. Further, "work" could conceivably mean many things in this context, from "per se narrows our massive applicant pool" to "selects for factor X," X being clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure fitness for almost any real-world job.
You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one?
If it didn't work, these companies wouldn't be able to function at all.
It must be the case that it works better than running an RNG on everyone who applied.
Does it mean some genius software engineer who wrote a fundamental part of the Linux kernel but never learned about Minimum Spanning Trees got filtered out? Probably. But it's okay. That guy would've been a pain in the ass anyway.
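For anyone curious what that hypothetical engineer got filtered on: an MST question is the textbook Kruskal-plus-union-find pattern. A minimal sketch (the example graph is made up):

    # Kruskal's MST: sort edges by weight, greedily keep any edge that
    # doesn't close a cycle, with a union-find tracking components.
    def minimum_spanning_tree(n, edges):  # edges: (u, v, weight)
        parent = list(range(n))

        def find(x):  # root of x's component, with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst, total = [], 0.0
        for u, v, w in sorted(edges, key=lambda e: e[2]):
            ru, rv = find(u), find(v)
            if ru != rv:  # different components: edge is safe to take
                parent[ru] = rv
                mst.append((u, v, w))
                total += w
        return mst, total

    print(minimum_spanning_tree(4, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 2.5), (2, 3, 1.5)]))
    # -> ([(0, 1, 1.0), (2, 3, 1.5), (1, 2, 2.0)], 4.5)

Twenty lines once you know the trick, and very hard to invent in an hour if you don't.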
When I look at the messy Android code, Fuchsia's commercial failure, Dart being almost killed by politics, Go's marvellous design, WinUI/UWP's catastrophic failure, how C++/CX got replaced with C++/WinRT, the ongoing issues with macOS Tahoe...
I am glad that apparently I am not good enough for such projects.
The fact that they fail is not evidence that leetcode interviews fail to select for high-quality engineers.
I see it differently. I wouldn't say it's reasonably good, I'd say it's a terrible metric that's very tenuously correlated with on the job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind the one eyed man is king.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
Eh. As someone who did tech and then medicine, a lot great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, are going to increase your odds of success at nearly any task, but there's plenty other skills that are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist.
In my experience, it's totally not true.
Many college students of my generation are pretty good with LC hards these days purely due to FOMO-induced obsessive practice, which doesn't translate to a practical understanding of the job (or any other part of CS like OS/networks/languages/automata either).
I will give you an exercise: pick an LC hard problem, and it's very likely that an experienced engineer who has only done "real work" will not know the "trick" required to solve it. (Unless it's something common like BFS or backtracking; see the sketch below.)
I say this as someone with "knight" badge on leetcode, whatever that means, lest you think it's a sour grapes fallacy.
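For contrast, roughly the "common" end of the spectrum conceded above, a bread-and-butter BFS shortest path that an experienced engineer can reconstruct on the spot (the grid and coordinates are just a toy example):

    # Breadth-first search: shortest path length in an unweighted grid,
    # where '#' cells are walls. Queue of (cell, distance) plus a visited set.
    from collections import deque

    def shortest_path(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            (r, c), dist = queue.popleft()
            if (r, c) == goal:
                return dist
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != '#' and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append(((nr, nc), dist + 1))
        return -1  # goal unreachable

    print(shortest_path(["..#", ".#.", "..."], (0, 0), (2, 2)))  # -> 4

No hidden insight required, which is exactly why it doesn't discriminate the way an LC hard does.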
That makes the assumption that company hiring practices are evidence based.
How many companies continue to use pseudo-scientific Myers-Briggs-style tests?
I've always explained it as demonstrating your ping pong skills to get on the basketball team.
Microsoft, Google, Meta, Amazon, I'm guessing... but, what are the other two?
But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.
I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)
https://github.com/ensisoft/detonator
but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.