- I won't go into too much detail on the topic, as it's loaded with triggering elements. Let's just say that if you were to study how different cultures apprehend and conceptualize life and death (whether philosophically or religiously), I'm fairly sure you'd come out the other end questioning a lot of your original assumptions (which I only presume you hold based on your comment). Our collective outlook can have a significant and far-reaching influence on individual decisions.
- You're both right, but talking past each other. You're right that shared dependencies create a problem, but they can be the problem without semantically redefining the services themselves as a shared monolith. Imagine someone came to you with a similar problem and you concluded "distributed monolith", which might lead them to believe that their services should be merged into a single monolith. What if they then told you that this would be tough because these were truly separate apps: one ran on Django/Postgres, another on Flask/SQLite, and another on FastAPI/Mongo, but they all used the same OS-wide Python install and relied on some of the same underlying libs that are frequently updated. The more accurate finger points to bad dependency management, and you'd tell them about virtualenv or Docker.
- > I guess the word contemporary has been misused to the point of just meaning current or modern and I shouldn't nitpick it!
According to at least a few references, it very clearly carries both meanings. I couldn't find a single dictionary that excludes one or seems to favor one over the other.
- As they said, it depends on the task, so I wouldn't generalize, but based on the examples they gave, it tracks. Even when you already know what needs to be done, some undertakings involve a lot of yak shaving. I think transitioning to new tools that do the same thing as the old ones but with a different DSL (or to newer versions of existing tools) qualifies.
Imagine that you've built an app with libraries A, B, and C and conceptually understand all that's involved. But now you're required to move everything to X, Y, and Z. There won't be anything fundamentally new or revolutionary to learn, but you'll have to sit and read those docs, potentially for hours (cost of task switching and all). Getting the AI to execute the changes lets you skip much of the tedium. And even though you still won't really know much about the new libs, you'll get the gist of most of the produced code. You can go through the docs piecemeal to review the code at sensitive boundaries. And for the rest, you'll paint inside the frames as you normally would when joining a new project.
Even as a skeptic of the general AI productivity narrative, I can see how that could squeeze a week's worth of "ever postponed" tasks into a day.
- Even though I'm using the second person, I'm not actually trying to convince you in particular. You sound pretty set in your ways and that's perfectly fine. But there are other readers on HN who are already pretty efficient at log debugging, or who are developing the required analytical skills, and for them I wanted to debunk the unsubstantiated and possibly misleading claims in your comments that a debugger is somehow superior.
The logger vs debugger debate is decades old, with no argument suggesting that the latter is a clear winner; quite the contrary. An earlier comment explained the log-debugging process: carefully thinking about the code and choosing the right spots to log the data structures under analysis. The link I posted confirms it as a valid methodology. Overall code analysis is the general debugging skill you want to sharpen. If you have it and decide to work with a debugger, it will look like log debugging, which is why many skilled programmers choose to revert to just logging after a while. Usage of a debugger then tends to be reserved for situations where the code itself escapes you (e.g. bad code, intricate code, foreign code, etc.).
If you're working on your own software and feel that you often need a debugger, maybe your analytical skills are atrophying and you should work on thinking more carefully about the code.
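For illustration, a minimal sketch of the log-debugging process described above, with a made-up Python function: you reason about where the state could go wrong, log the data structures at exactly those boundaries, and then read the trace.

    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    log = logging.getLogger(__name__)

    def reconcile(orders, payments):
        # Hypothetical function, purely for illustration.
        # Hypothesis: the mismatch appears before matching, so log the inputs once.
        log.debug("orders=%r payments=%r", orders, payments)
        matched = {o["id"]: p for o in orders for p in payments if p["order_id"] == o["id"]}
        # Well-chosen spot: the intermediate structure under analysis.
        log.debug("matched=%r", matched)
        return [o for o in orders if o["id"] not in matched]

A couple of well-placed lines like these usually tell you more per run than stepping through the same code by hand.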
- The debugger is fine, but it's not the key to some secret skill level that you make it out to be. https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/
- In the original article, the author never actually went into why they disliked their experience with Ruby. They listed some historical shortcomings, which we can only presume they were experiencing on the code base they found themselves working on. I think the best point of that first article was: don't pick Ruby as a development language in 2025; there are better options for any advantage you might think it will give you.
I think that should've been the main point to attack.
In the present article, the author went for pathos instead and in some ironic sense confirmed the previous article's notion that Ruby is powered by sentimentality.
Many people who adore Elixir also think Ruby is a no-go, despite the latter being a strong influence on it. Arguments against Elixir tend to revolve around its lack of traction, not its lack of seriousness.
- Could you elaborate?
- I think the point is "ok, account is free, then what?"
At $5/month I might give the paid subscription a try.
- > Remote work eliminates a lot of problems with office work: commutes, inefficient use of real estate, and land value distortion. But software development is better when you breathe the same air as the folks you work with.
It's pretty hard to tell where the opinion is in that.
The whole paragraph reads as though the author is relating the known symptoms of a disease. We're never really sure which ones they themselves actually experienced. The claims look more like arguments in support of a cause.
The author is totally entitled to open that door, but then it also becomes fair game to attack that perspective.
- Giving a blank check to anything someone says because they disclaimed that they'd be uttering opinions? That sounds kind of naive. Have you never heard someone include facts to support their opinions? Would you disagree that it's fair game to attack opinions presented as facts? The "problematic" paragraph jumps out because its assertive generalizations undercut the earlier premise that the author is merely sharing their experience. The proclamations aren't subjective; they're presented as facts. Perhaps re-read that passage yourself while donning your own critical-thinking hat.
- > Gradual growth inevitably results in loads of technical debt.
Why is this stated as though it's some de facto law of software? The argument is not whether it's possible to waterfall a massive software system. It clearly is possible, but the failure ratios have historically been uncomfortable enough to give rise to entirely different (and evidently more successful) project development philosophies, especially when the promoters were more sensitive to the massive sums involved (which in my opinion also helps explain why there are so many wasteful government examples). The Lean Startup did not appear in a vacuum. "Do things that don't scale" did not become a motto in these parts without reason. In case some are still confused about the historical purpose of this benign-sounding advice: no, it wasn't originally addressed at entrepreneurs aiming to run "lifestyle" businesses.
- If the last 15 years are any indication, in less than 15 years Microsoft will make Azure Linux their main OS and skin the desktop edition to resurrect Lindows. It'll take a 2-5 year campaign to move most of the remaining Windows user base to it. Oh, and it'll probably be free.
- > Knowing that there are two constructors that exist for normal, non-native, Python classes, and that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
I disagree that this is basic knowledge. In Python, a callable is an object whose type has a __call__() method. So when we see Class(), it's just a syntactic proxy for Metaclass.__call__(Class). That's the true (first of three?) constructor, the one that then calls instance = Class.__new__(Class), and soon after Class.__init__(instance), to finally return instance.
That's not basic knowledge.
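To make the chain visible, here's a minimal sketch (the class names are made up, but the mechanics are standard Python):

    class Meta(type):
        def __call__(cls, *args, **kwargs):
            # This is what actually runs when you write Class().
            print(f"Meta.__call__ for {cls.__name__}")
            instance = cls.__new__(cls, *args, **kwargs)
            if isinstance(instance, cls):
                cls.__init__(instance, *args, **kwargs)
            return instance

    class Class(metaclass=Meta):
        def __new__(cls):
            print("Class.__new__")
            return super().__new__(cls)

        def __init__(self):
            print("Class.__init__")

    obj = Class()
    # Meta.__call__ for Class
    # Class.__new__
    # Class.__init__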
- I call it "read-optimized code". Inheritance is biased toward conservative writing. Once your mind becomes so enmeshed with the code base that you can no longer fathom a future where you might fall out of sync with it, inheritance becomes extremely appealing. It's all in your head! You pull the Razzle parent, sprinkle in a bit of Dazzle mixin, everything is alchemized into a Fizzle class, and abracadabra. Meanwhile, newbies on the team have their eyes welling up from having to deal with your declarative mess.
- Inheritance being "sane" in Python is a red herring that many smart people have fallen for (e.g. https://www.youtube.com/watch?v=EiOglTERPEo). It's like saying that building a castle out of sand is not a very good idea because, first, it's going to be very difficult to extract the pebbles (the technical difficulty), and also because sand has generally been found to be a complicated and tedious material to work with and maintain. Then someone discovers a way to extract the pebbles. Now we have a whole bunch of castles sprouting up that are really difficult to maintain.
- > Look, I understand duck typing. Go doesn’t have it, the fact you keep calling go’s dynamic dispatch duck typing is a bit of a red flag
No, it's more like statically typed duck typing, or more accurately its close cousin, structural typing.
But if you need to hear it from the horse's mouth: https://research.swtch.com/interfaces
> You’re saying, essentially, that you just use the objects method and you don’t need to read its implementation to understand what it does
Your recent points have nothing to do with mine. What you think I'm saying is not what I'm saying. I'm still very much aligned with the original topic of the article, Interface Segregation as it's done in Go, an article to which you reacted adversely while demonstrating that you're clearly still looking for Java where it doesn't exist. I'll leave things at that, since I'd just be repeating myself at this point.
- I see a few of my words and others that I neither said nor thought. With respect, perhaps you're rushing down a particular unrelated tangent and drawing conclusions.
The point I was trying to draw your attention to is that duck typing as it's done in Go (structural typing, to be more exact) is at the crux of its approach to interfaces. Do you understand duck typing or structural typing?
To summarize what I've already tried to say before: Go interfaces are not Java interfaces. Java cares about the interface as a type, while Go cares only about the listed methods (the behavior). These are two completely different paradigms, and they don't mix so well, as former Java programmers doing Go are discovering. In Java, interfaces themselves are important because they're treated as types, and programmers tend to be economical about them. They're used everywhere to say that a class implements them and that they're a dependency of some consumer. In Go, the interface is just a convenient but unimportant label that points to a list of methods a consumer expects a particular dependency to have. Only the list of methods is meaningful. The label isn't. That's it. Done.
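Since Python happens to have both styles, here's a rough analogy in Python rather than Go (names are made up, so treat it as a sketch): abc.ABC behaves like the Java mindset, where a class must explicitly declare the relationship, while typing.Protocol behaves like the Go mindset, where only the listed methods matter.

    from abc import ABC, abstractmethod
    from typing import Protocol

    # Java-style (nominal): a class must explicitly declare the relationship.
    class JavaStyleWriter(ABC):
        @abstractmethod
        def write(self, data: bytes) -> int: ...

    # Go-style (structural): the consumer lists the methods it needs;
    # nobody has to declare anything.
    class GoStyleWriter(Protocol):
        def write(self, data: bytes) -> int: ...

    class File:  # never mentions either interface
        def write(self, data: bytes) -> int:
            return len(data)

    def consume(w: GoStyleWriter) -> None:
        w.write(b"hello")  # only the method list matters

    consume(File())                             # fine: File has the right method
    print(isinstance(File(), JavaStyleWriter))  # False: the relationship was never declared

The consumer happily accepts anything with the right method, while the nominal check fails because File never opted in.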
Again, completely different paradigms. If you embrace Go interfaces, the way you read, write, and think about Go code also changes. But complaining about them with a Java mindset is like complaining that a wrench is a bad screwdriver.
At the end of the day, it's up to you to decide whether you can open your mind and actually learn to use the tool as it was meant to be used, or just assume that its creator and all the people who claim to use it successfully are in denial for not admitting they share your pains.
- > I think you skipped over the commenter's main point while replying to them: you need to be able to have a good mental model of the code.
I actually addressed the root cause of the main point: a misunderstanding of the purpose of interfaces in Go. To me, these complaints are analogous to someone saying that they're not able to move fast enough while trying to run underwater. Why don't you try swimming? The fact that whenever a complainer elaborates a bit, there are usually signs that they're looking for Java in Go also leads me to connect the original difficulty to that misunderstanding.
> A few well defined interfaces have the advantage of being easy to understand and see usages around the codebase without the overhead of many different variants of an interface.
Interfaces in Go are not a thing in themselves. They're a notice from a consumer saying "I'll be using these methods on whatever object you pass in this slot". Not much more. They're closely tied to the consumer (or to a closely related set of consumers, if you don't want to be too zealous). It's a different mental model, but if you embrace it, it changes the way you write, read, and think about code.
> I've found it much more difficult to get my IDE to point out all the different ways some interface could be implemented.
Implemented? Forget that word completely. Ask instead: "does the object I'm about to send have all the required methods?" If not, add the missing ones. That's it. It's all about the methods. Forget the interface itself; it's a label on a napkin, a tag listing the set of methods the consumer requires of a particular dependency.
I think Python's duck-typing philosophy is a much better access door to Go's interfaces than Java's interfaces are. You just care about how a dependency will be used by its consumer. Now, if as a language designer you wanted to add the discipline of static typing on top of duck typing, the logical conclusion would be either a syntax for "anonymous" interfaces that lets you duck-type,

    func Consumer(obj interface{ doThis(string); doThat(int) }) {
        obj.doThis("foo")
        obj.doThat(123)
    }

or the less ad-hoc style we've come to know from Go.
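For comparison, the plain Python duck-typed version of that same consumer (the method names just mirror the sketch above and are purely illustrative):

    def consumer(obj):
        # No interface declared anywhere: the consumer simply calls the
        # methods it needs, and any object that has them will do.
        obj.do_this("foo")
        obj.do_that(123)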
- Why would you say pretending? I would say remembering.