- > What a weird blast furnace! Would anyone try to use this tool in such a scenario? Not most experienced metalworkers.
Absolutely wrong. If this blast furnace cost a fraction of what other blast furnaces cost, and allowed you to produce certain metals that were previously too expensive to produce (even with a high error rate), almost everyone would use it.
Which is exactly what we're seeing right now.
Yes, you have to distinguish the marketing message from the real value. But in terms of bang for buck, Claude Code is an absolute blast (pun intended)!
- Funny, I spoke about something like this with a colleague a couple of weeks ago. This could be the future of software development we're headed towards, if the DX is done right.
There are definitely cases where the spec is much easier to understand than the code that implements it.
Think systems with complex lifecycles or lots of required boilerplate.
Have you thought of embedding the specs into existing code?
E.g.

    # @spec: if any method takes longer than 1s to execute, a warning must be logged
    class X:
        ...
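One way this could work (a rough sketch; the spec_timing decorator and its behavior are hypothetical, not an existing tool) is to turn the annotation into a runtime check that wraps the class's public methods and logs a warning when a call exceeds the 1-second budget:

    import functools
    import logging
    import time

    logger = logging.getLogger(__name__)

    def _wrap(method, max_seconds):
        @functools.wraps(method)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return method(*args, **kwargs)
            finally:
                elapsed = time.monotonic() - start
                if elapsed > max_seconds:
                    logger.warning("%s took %.2fs, spec allows %.1fs",
                                   method.__qualname__, elapsed, max_seconds)
        return wrapper

    def spec_timing(max_seconds: float = 1.0):
        """Hypothetical class decorator enforcing the @spec comment above."""
        def decorate(cls):
            for name, attr in list(vars(cls).items()):
                if callable(attr) and not name.startswith("_"):
                    setattr(cls, name, _wrap(attr, max_seconds))
            return cls
        return decorate

    @spec_timing(max_seconds=1.0)
    class X:
        def slow_method(self):
            time.sleep(1.5)  # would log a warning when called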
- > React feels natural because it never asks you to stop writing JavaScript

I want to increment some counter on the webpage. Which approach feels natural?

No one wakes up saying "please let me mutate simple state with function calls".

    increment = () => {
      this.setState((prevState) => ({ count: prevState.count + 1 }));
    };

    const increment = () => setCount((count) => count + 1);

    function increment() {
      count += 1;
    }

- Have you read the whole section?
> Documentation for parameters and return values ([23])
> Let IDEs show what types a function expects and returns ([16])
> For example, one library might use string-based annotations to provide improved help messages, like so:
    def compile(source: "something compilable",
                filename: "where the compilable thing comes from",
                mode: "is this a single statement or a suite?"):

- > It's just not as good
Again, the evidence (as limited as it is) suggests otherwise. You are more likely to succeed if you go with a dynamic language and skip the "proper engineering". This was widely accepted before the type-checker era, and I see no reason why it would be different now. Utilize the type checker when it's free, but don't waste time on type puzzles.
"Proper engineering" doesn't get you to product-market fit faster. All it does is tickle your ego.
- I would expect the dynamic-typing crowd to embrace microservices first, given how everybody says that dynamic codebases are a huge mess.
Regardless, to me enterprise represents legacy, bureaucracy, incidental complexity, heavy typing, stagnation.
I understand that some people would like to think that heavy type-reliance is a way for enterprise to address some of its inherent problems.
But I personally believe that it's just another symptom of the enterprise mindset. Long-ass upfront design documents and "designing the layout of the program in types first" are clearly of the same nature.
It's no surprise that Typescript was born at Microsoft.
You want your company to stagnate sooner? Hyperfixate on types. Now your startup can feel the "joys" of enterprise even at the seed stage.
- Large programs are harder to maintain because people don't have the balls to break them into smaller ones with proper boundaries. They prefer incremental bandaids like type hints or unit tests that make it easier to deal with the big ball of mud, instead of not building the ball in the first place.
- How come all those unicorns were built with intolerable Python/Ruby, not Java/C#/Go?
https://charliereese.ca/y-combinator-top-50-software-startup...
- > Most code with type hints is easier to read
That has not been my experience in the past few years.
I've always been a fan of type hints in Python: the intention behind them was to contribute to readability, and when the developer had that intention in mind, they worked really well.
However, with the release of mypy and Typescript, engineering culture largely shifted towards a "typing is a virtue" mindset. Type hints are no longer a documentation tool; they are a constraint-enforcement tool. And that tool is often at odds with readability.
Readability is subjective and ephemeral; type constraints (and intellisense) are very tangible. Naturally, developers fail to find a balance between the two.
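To make that tradeoff concrete, here's a contrived sketch (the function, its names, and both signatures are invented for illustration): the first set of hints reads like documentation, the second gives the checker more to verify but takes longer for a human to scan.

    from collections.abc import Callable, Iterable, Mapping, Sequence
    from typing import TypeVar

    def apply_discounts(orders: list, discounts: dict) -> list:
        """Hints as documentation: the types orient the reader, details live in the docstring."""

    K = TypeVar("K")
    V = TypeVar("V")

    def apply_discounts_strict(
        orders: Iterable[Mapping[K, V]],
        discounts: Mapping[K, Callable[[Mapping[K, V]], V]],
    ) -> Sequence[Mapping[K, V]]:
        """Hints as constraints: the checker can verify more, but the signature is harder to scan."""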
- What's even worse, when typing is treated as an indisputable virtue (and not a tradeoff), pretty much every team starts sacrificing readability for the sake of typing.
And lo and behold, they end up with _more_ design bugs. And the sad part is that they will never even recognize that too much typing is to blame.
- > Writing software without types lets you go at full speed. Full speed towards the cliff.
Isn't it strange that back when Python (or Ruby) didn't even have type hints (not type checkers, type hints!), it would easily outperform pretty much every heavily typed language?
Somehow when types weren't an option we weren't going towards the cliff, but now that they are, not using them means jumping off a cliff? Something doesn't add up.
- First of all, this is a really dumb point. If it is so easy to find good assets, why don't you buy those, take a loan against them like the reddit post suggests, then buy even more assets with that money, take another loan, and keep doing that to generate an infinite amount of money? The only thing stopping you is admitting that every asset carries risk.
But whether it's on you or not is beside the point. Being taxed on top of a loss is not a good thing.
Also, there are plenty of reasons not to sell a depreciating asset. For example, a CEO wanting to maintain control over the company, or simply not wanting to send bad signals to the public by selling their shares (because that would depreciate the asset even more).
A better argument would be to adjust basis for inflation instead of resetting it.
- This strategy is usually presented as a way for billionaires to avoid paying taxes on their wealth, but that's a blatant manipulation. The reality is that it allows you to introduce a bit of risk to potentially save taxes on day-to-day expenses. Which, even for billionaires, are not high enough for the savings to have a noticeable impact on society.
Nobody is going to take a loan like the one in the reddit example, since even a minor market fluctuation would trigger a margin call and cause you to lose all the assets used as collateral (and pay taxes on them).
The higher the loan amount, the higher the risks. The longer you have left to live, the higher the risks.
Since you have to commit to this strategy till the end of your life for it to work, you're essentially betting that your assets will always appreciate faster than interest compounds on the loan. Making a bet like that for the rest of your life is quite the gamble (unless you're planning to die in the near future).
This strategy is only viable if you use it for a tiny fraction of your wealth, so it can potentially be used to fund your day-to-day expenses. But it's still a lifelong gamble. What if you happen to die during a market crash?
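To put rough numbers on that bet, here's a toy sketch (every rate and amount is an assumption picked purely for illustration, not advice): when the loan compounds faster than the collateral appreciates, the drawdown cushion before a margin call shrinks every year.

    collateral = 10_000_000   # assumed value of the pledged shares
    loan = 3_000_000          # assumed loan taken for living expenses
    asset_growth = 0.04       # assumed yearly appreciation of the collateral
    loan_rate = 0.07          # assumed yearly interest on the loan
    maintenance = 0.50        # assumed rule: loan must stay under 50% of collateral

    for year in range(1, 21):
        collateral *= 1 + asset_growth
        loan *= 1 + loan_rate
        # how big a market drop this year would push the loan over the maintenance limit
        cushion = 1 - loan / (maintenance * collateral)
        print(f"year {year:2d}: loan is {loan / collateral:.0%} of collateral, "
              f"a {max(cushion, 0.0):.0%} drop triggers a margin call")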
- Apologies, but such a mindset is the essence of the worst programming traits.
> The code that is more easily unit testable, is the code I care about.
The author argues that his code is more readable. Sounds like you're saying that being unit-testable is more important than being readable.
> Neither example is easily tested.
Only if you're a unit testing zealot. Integration/E2E testing is easy for both.
> Neither support injecting the dependencies, which make mocking really difficult.
Mocking is not a virtue. Also, if mocking is the sole reason you're using DI, you're doing it wrong.
- > What you're doing by breaking things into functions is trying to prevent it's eventual growth into a bug infested behemoth
Not every piece of code grows into a bug-infested behemoth. A lot of code doesn't grow for years. We're biased to think that every piece of code needs to "scale", but the reality is that most of it doesn't.
Instead of trying to fix issues in advance you should build a culture where issues are identified and fixed as they come up.
This piece of code will be a pain to maintain when the team gets bigger? So fix it when it actually gets bigger. Create space for engineers to talk about their pains and give them time to address those. Don't assume you know all their future pains and fix them in advance.
> In my experience, nearly every case where an area of a code base has become unmaintainable - it generally originates in a large, stateful piece of code that started in this fashion
In my experience, it gets even worse with tons of prematurely-abstracted functions. Identifying and fixing large blocks of code that are hard to maintain is way easier than identifying and fixing premature abstractions. If you have to choose between the two (and you typically do), you should always choose large blocks of code.
The great thing about big blocks of code is that their flaws are so obvious. Which means they are easy to fix when the time comes. The skill every team desperately needs is identifying when the time comes, not writing code that scales from scratch (which is simply impossible).
- > For example, lets say I gave a simple coding puzzle (think leetcode) to 10 python engineers. I would get at least 8 different responses. A few might gravitate around similar concepts but they would be significantly different.
This is true for any language. Arguably, what's different about Python is that the more senior the engineers you're interviewing, the more likely their solutions are to converge.
Just because something is in the Zen of Python, doesn't mean it automatically gets followed by every Python engineer. It's Zen, after all - you don't automatically get enlightened just by reading it.
- Not only do they do this, they do it way more successfully. I'll never get tired of repeating that among top YC startups, Java as a primary language contributes roughly 1% of the value, while Python + Ruby are at almost 70%.
https://charliereese.ca/y-combinator-top-50-software-startup...
- > Python is very difficult to maintain when you program goes over 100k lines of code, while the static type system of c++ is good for millions
I see this argument a lot, but people often forget that Python is very concise (yet readable) compared to other languages.
100k LOC of Python typically contains way more business logic than 100k LOC of C++, so it's only natural that it's harder to maintain.
- If you have services with lots of dependencies on other services, then you've probably ended up with the worst of both worlds - a distributed monolith.
Typically, one-way vs two-way is not a choice you can make as an engineer; most processes in your business require two-way, because they are initiated by a user and the user wants a definitive response.
The "potentially delayed" approach is very tempting for engineers, but it should be exception rather than the rule.
It
- dilutes responsibilities between services (who is responsible for delays? is service A producing too many messages or is service B too slow to process them?)
- makes your SLA vague (message was processed 3 days later, do we treat it as downtime or not?)
- requires more infrastructure & processes (every service has a queue, dead-letter queue, and a process to deal with dead letters)
- requires a ton of monitoring overhead (what delay is acceptable? how do we even measure delay? what if we have different SLAs for different messages? do we need a monitor per message type?)
- introduces a lot of unnecessary complexity and rules (how do you deal with TOCTOU, e.g. an admin deactivates a user, but by the time the message gets processed the admin who requested it has lost their admin rights - see the sketch after this list)
- ruins user experience (we received your payment information, but we won't immediately tell you that it's wrong).
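Here's a minimal sketch of that TOCTOU case (all names and data structures are made up for illustration): the admin check passes when the message is enqueued, but by the time a worker processes it the actor has lost admin rights, and the delayed design has to decide what to do about that.

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class User:
        id: int
        is_admin: bool = False
        active: bool = True

    users = {1: User(1, is_admin=True), 2: User(2)}
    queue: Queue = Queue()

    def enqueue_deactivation(actor_id: int, target_id: int) -> None:
        if users[actor_id].is_admin:                     # time of check
            queue.put({"actor_id": actor_id, "target_id": target_id})

    def process_deactivation() -> None:
        msg = queue.get()
        actor = users[msg["actor_id"]]
        if not actor.is_admin:                           # time of use: state has changed
            raise PermissionError("actor lost admin rights before the message was processed")
        users[msg["target_id"]].active = False

    enqueue_deactivation(1, 2)
    users[1].is_admin = False   # admin rights revoked while the message sits in the queue
    process_deactivation()      # raises PermissionError: the delayed design must handle this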
Despite its downsides, the potentially delayed approach can be a fine tradeoff when it saves you 7-8 figures per year. Most companies never reach this phase.
- Unfortunately, it's not a popular belief anymore, but an [asynchronous] central event bus is a terrible idea on its own, and it goes against what [micro]services are about.
If you want microservices to work you shouldn't have anything central in them, and you should avoid async as much as possible.
- > For instance, I’ve never encountered a JavaScript codebase that I have thought couldn’t benefit from just being statically typed with Typescript
That's the type bias. If you look at an untyped codebase, it always feels like it would be better with types. But if you had a chance to go back in time and start the same codebase in Typescript, it would actually come out way worse than what you have today.
Types can be great when used sparingly, but with Typescript everyone seems to fall into a trap of constantly creating and then solving "type puzzles" instead of building what matters. If you're doing Typescript, your chances of becoming a product engineer are slim.
- Most engineers feel like Claude Code is a multiplier for their productivity, despite all the flaws it has. You're arguing that CC is unusable and a net negative for productivity, but that is the opposite of what people are experiencing. I am able to tackle problems I wouldn't even have attempted previously (sometimes to my detriment).