- Thanks for the details.
It's definitely true that common stock gets $0 if the acquisition price is <= (sum raised + debt).
That sort of sounds like the startup wasn't doing well, and the acquisition wasn't for a lot of money (relative to amount raised), which seems very different from these Groq/Windsurf situations.
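To make the arithmetic concrete, here is a toy waterfall in Python with made-up numbers, assuming a 1x non-participating preference and ignoring option pools, participation, caps, and conversion mechanics:

```python
def common_proceeds(sale_price: float, total_raised: float, debt: float) -> float:
    """Rough dollars left for common stock after debt and a 1x preference stack.

    Toy model only: ignores option pools, participation, caps, and the
    convert-vs-take-the-preference choice that preferred holders actually make.
    """
    after_debt = sale_price - debt
    if after_debt <= total_raised:  # the preference stack absorbs everything
        return 0.0
    return after_debt - total_raised


# Fire sale: raised $500M, carrying $100M of debt, sold for $550M -> common gets $0.
print(common_proceeds(550e6, 500e6, 100e6))  # 0.0

# Large exit (hypothetical numbers): raised $3B, sold for $20B -> ~$17B left over,
# so common stockholders get paid (preferred would convert rather than take the 1x).
print(common_proceeds(20e9, 3e9, 0.0))  # 17000000000.0
```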
- I think grit and hard work will still be valuable attributes, even if AI starts producing perfect software tomorrow.
The world also just doesn't change that quickly.
Even under the rosiest projections, there is no way that software engineers are unnecessary in 2-3 years. Go have a look at METR's projections: even the optimistic ones don't get us to software that can replace engineers in a few years, let alone to that change rippling through the economy.
And nobody actually knows how far AI progress will go on the current trajectory. Moore's law was a steady march for a long time, until it wasn't.
- Liquidation preferences are typically 1x these days, so they only matter when companies are sold at fire sale prices where basically nobody is making any money.
The deals are all weird so it's hard to really know what's happening, but if Groq gets $20b, I don't see how common stock holders don't get paid.
- Can you say more about why mechanically she didn't get anything?
If you exercise your options you have real stock in the company, so I don't see how you can get shafted here.
Did investors do some sort of dividend cash out before employees were able to exercise their options? (Obviously shady, but more about investors/leadership being unethical than the deal structure).
Would love to know more about how this played out.
- I think this is a not insane prediction, but much like truck driving and radiology the timeline is likely not that short.
Waymo has been about to replace the need for human drivers for more than a decade and is just starting to get there in some places, but has had basically no impact on demand yet, and that is a task with much less skill expression.
- I haven't actually dug into it, but I would assume that open redirects would strip a Sec-Fetch-Site: cross-site header and replace it with none or same-site or something. The same goes for things like allowing users to specify image URLs, etc. And if you rely on Sec-Fetch-Site for security on GETs, these turn into actual vulnerabilities.
I think these sorts of minor web app issues are common enough that state-changing GETs should be explicitly discouraged if you are relying on Sec-Fetch-Site.
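As a rough sketch of what relying on the header looks like in practice (Flask purely for illustration; the routes and policy here are my own assumptions, not anything from a real codebase):

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Values treated as "not a cross-site request". "none" covers bookmarks and
# URLs typed into the address bar; a missing header (older browsers, curl) is
# allowed through here, which is itself a policy decision you have to make.
ALLOWED_SITES = {"same-origin", "none"}

@app.before_request
def reject_cross_site_writes():
    # Only guards methods that are supposed to mutate state. If the app also
    # mutates state on GET, anything that coaxes the browser into issuing a
    # same-origin GET (open redirects, user-supplied image URLs, injected
    # <img> tags) walks right past this check.
    if request.method in {"POST", "PUT", "PATCH", "DELETE"}:
        site = request.headers.get("Sec-Fetch-Site")
        if site is not None and site not in ALLOWED_SITES:
            abort(403)

@app.post("/transfer")
def transfer():
    return "ok"
```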
- It's good that folks working on browsers are working on making this easier, but I don't think you can really rely on this for GET requests.
It's often easier to smuggle a same-origin request than to steal a CSRF token, so you're widening the set of things you're vulnerable to by hoping that this can protect state mutating GETs.
The bugs mentioned in the GitHub issue are some of the sorts of issues that will hit you, and common things like open redirects also turn into a real problem.
Not that state-mutating GETs are a common pattern, but the pattern is encoded as a test case in the blog post's web framework.
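A contrived sketch of the failure mode, with hypothetical routes (Flask again, just for illustration): a user-controlled image URL embedded in a same-origin page makes the browser issue a GET that arrives with Sec-Fetch-Site: same-origin, so a header check alone does nothing for a state-mutating GET endpoint.

```python
from flask import Flask, request

app = Flask(__name__)

@app.get("/profile")
def profile():
    # "Minor" feature: users can point their avatar at any URL, and the page
    # embeds it. (Also an injection hazard as written; ignored for brevity.)
    avatar = request.args.get("avatar", "")
    return f'<img src="{avatar}">'

@app.get("/account/delete")
def delete_account():
    # State-mutating GET. If someone sets their avatar (or a comment image,
    # forum signature, etc.) to this URL, a victim's browser fetches it from
    # a same-origin page, so the request carries Sec-Fetch-Site: same-origin
    # and passes any cross-site check. A CSRF token or POST-only mutation
    # would not be fooled this way.
    return "account deleted"
```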
- I generally recommend the book Founding Sales (available for free online), but it's targeted at SaaS founders.
But you're actually doing something even more common: running a consulting business. There's plenty of content on that for exactly that reason, so I would go find material on how to scale a consulting business; e.g., this seems like the start of a thread to pull on: https://training.kalzumeus.com/newsletters/archive/consultin...
- This does not currently show up on Google Trends, so unless you're trying to go full conspiracy mode here I don't see why we should trust this random screenshot to be accurate: https://trends.google.com/trends/explore?date=now%207-d&geo=...
- > To me, this heavily biases towards engineers that have already built or at least designed a similar system to the one you're presenting them.
Yes, this is not an IQ test; we are trying to see how people react to problems in our domain, not to measure some generalized form of reasoning. The advantage of picking a problem as close to our real problems as possible is that I don't have to worry about how performance generalizes from the interview to the actual work.
In general, my experience with system design interviews is that people make bad designs, and when you drill down on them they give bad rationales. Similar to coding screens, people regularly out themselves as not very good at their jobs.
> Those are interesting challenges, and if the interviewer is an experienced developer, I don't think a candidate could really bullshit through them to a significant degree.
It's not really about "bullshit" per se, but about whether their understanding of their context is correct or not. They can tell you fully reasonable sounding things that are just wrong. In a mock interview, you can see if they ask good questions about their context.
> I personally wouldn't care if the ideas originated with the candidate vs. another member of the team. I'd be looking for how well the candidate understood the problem and the solution they implemented, the other solutions considered, the tradeoffs, etc.
I totally disagree with this. Being able to remember the project's design doc and parrot back the things that were talked about is very different from actually writing it.
If I want to hire someone who can design things well from scratch and I get someone who makes bad decisions unless someone is supervising them, I will be very disappointed.
In general, I have given both interviews to the same candidate, and after they say a bunch of reasonable things about their existing work, when I ask them how to design something I quickly find that they are less impressive than they seemed. Again, maybe I'm bad at giving experiential interviews, but being hard to administer is a point against them.
My experience of hiring is also that I am generally not looking to take chances, unwinding bad hires is super painful.
- Maybe I am just bad at interviewing people, but I have tried giving the experiential interviews Casey describes and I find it quite hard to get signal out of them.
You run into questions of how well a candidate remembers a project, and that memory may not be perfect. You may end up drilling into a project that is trivial. The candidate may simply parrot things that someone else on the team came up with. And when candidates say things, you really have no way to tell whether what they're saying is true, particularly when internal systems are involved.
I have found system design interviews much, much better at getting signal. I pick a real problem we had, start people with a simplified architecture diagram of our actual system, and ask them how they would solve it for us. I am explicitly not looking for people to over-design it. At the start of every skills interview I give people the advice to treat this as a real work problem at my startup, not a hypothetical exercise.
I have had a lot more luck identifying the boundaries of people's knowledge/abilities in this setting than when asking people about their projects.
And while everyone interviewing hates this fact, false positives are very expensive. They can be particularly painful if the gap is "this person is not a terrible programmer, just more junior than we wanted," because now you either have to fire someone who would be fine in another role (if you had the headcount for it) or live with a misshapen team.
- This was a major issue, but it wasn't a total failure of the region.
Our stuff is all in us-east-1. Ops was a total shitshow today (mostly because many 3rd-party services besides AWS were down or slow), but our prod service was largely "ok": fewer than 5% of customers were significantly impacted, because existing instances got to keep running.
I think we got a bit lucky, but no actual SLAs were violated. I tagged the postmortem as Low impact despite the stress this caused internally.
We definitely learnt something here about both our software and our 3rd party dependencies.
- Yeah, it's nonsense.
I think the core problem is that innovators typically capture only a low single-digit percentage of the value they generate for society.
Bell Labs existed in an anomalous environment where their monopoly allowed them to capture more of the value of R&D, so they invested more into it.
This is the standard argument for public subsidy of R&D in both public and private settings: because so little of the value is captured, R&D is underprovisioned relative to what would benefit society.
- Despite people mocking the parallel, I think it's actually pretty apt. RHNA is basically a socialist planning exercise because people were unwilling to stomach a market economy for housing construction and demanded state control through zoning.
It's better than the alternative of letting local governments do what they want, but it very much is a socialist planning exercise.
- The note about economists and data science in the article felt weird, because the title "data scientist" was invented to get non-CS PhDs to do analyst work, since companies wanted smarter people doing it.
The point of hiring an economics PhD in industry is largely not that they learned something specific but that the degree is a strong and expensive signal.
Claude is extremely verbose when it generates code, but this is something that should take a practicing software engineer an hour or so to write, with a lot less code than Claude produces.
I like all the LLM coding tools, they're constantly getting better, but I remain convinced that all the people claiming massive productivity improvements are just not good software engineers.
I think the tools are finally at the point where they are generally a help, rather than a net waste of time for good engineers, but it's still marginal atm.