One of the biggest learning moments in my work has been collaborating with my past self. You need to have stepped away from a piece of code for a while to get this. If the code is not well organized, you will think an alien wrote it.
It's only over time that you realize "it works because I know how to use it" is actually a problem. Over time you also figure out how to write code in a way where you aren't surprised by your own decisions.
Basically, yes.
In a large codebase the steps in your example could well be separated by thousands of lines of code, with much branching, or perhaps complicated inheritance hierarchies.
In that case, leveraging type hints to catch logic errors can prevent a lot of hard-to-spot bugs.
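A minimal sketch of what that looks like (all names here are hypothetical, not from the codebase being discussed): annotating a return type as `Optional` forces every caller, however far away, to handle the missing case before a checker like mypy will pass.

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    """Look up a username; None means 'not found'."""
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

def greet(user_id: int) -> str:
    name = find_user(user_id)
    # mypy forces you to handle the None branch here; without the
    # hint, the bug only surfaces at runtime, possibly thousands
    # of lines away from the function that produced the None.
    if name is None:
        return "hello, stranger"
    return f"hello, {name}"
```

The checker does this across module boundaries, which is exactly where eyeballing the sequence of calls stops scaling.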
I worked on a team with a low skill level. One of our flagship apps had a lot of inherent complexity and was coded by an over-promoted fool. He wrote spaghetti that was vile even at launch.
To my complete lack of surprise, after launch, the users wanted an enormous list of changes and new features. When you try to add features to code that's already spaghetti, the complexity compounds.
There's only one way to manage that, and it's to refactor. But to refactor you need good tests, and your team needs to accept the use of resources for refactoring.
This dungheap had objects with fucky interlocking responsibilities, e.g., scheduling was partly done by the ORM classes and partly by the class for customer output.
This app was also extremely time-based. A CRUD app doesn't really require you to think about how state changes over time; but in the dungheap, you couldn't "see" the state from the code — you also had to understand when in the sequence of operations a method was called. This is one of the hardest types of complexity to deal with.
The app was completely untyped. Some functions took enums and others took strings. There were state enums and also free-text strings for state, so you might see both "FAILED" and "FAILURE" for the same state concept. A huge amount of data was passed as nested dicts, without the benefit of consistent kwarg names, so you would not know whether the variable you were looking at had an "error" or an "err_msg" key, or an "output" key containing a dict with an "error" key. To find out, you had to run the app for several minutes with a bunch of print statements, and the answer you got might vary depending on any of the 20 input flags.
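The FAILED/FAILURE drift is the kind of thing an enum kills outright. A tiny sketch (names are illustrative, not from the actual app):

```python
from enum import Enum

class TaskState(Enum):
    RUNNING = "running"
    FAILED = "failed"  # one canonical spelling; no "FAILURE" variant can exist

def is_terminal(state: TaskState) -> bool:
    return state is TaskState.FAILED

# A typo'd free-text string is now a static checker error instead of a
# silently unmatched branch at runtime:
# is_terminal("FAILURE")  # mypy: expected TaskState, got str
```

With free-text strings, the misspelled comparison just evaluates to False and the bug hides for months; with the enum, it never type-checks in the first place.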
I generated a call graph to try to grok the sequence statically but the graph was spaghetti and the sequence contained loops. A few code paths called methods once with one datatype and then again with another type.
Type hints massively reduced the cognitive burden. My violent impulses towards our dickhead "architect" reduced from daily occurrences to weekly.
Fwiw, TypedDict turned out to be a great fit for the use case.
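For anyone who hasn't used it: TypedDict lets you keep passing plain dicts around while giving the checker a fixed shape to verify. A sketch of how it addresses the "error vs err_msg" guessing game (the keys here are made up for illustration):

```python
from typing import TypedDict

class TaskResult(TypedDict):
    # One canonical shape: no more guessing between "error",
    # "err_msg", or a nested "output" dict with its own "error" key.
    output: str
    error: str

def report(result: TaskResult) -> str:
    if result["error"]:
        return f"failed: {result['error']}"
    return result["output"]

ok: TaskResult = {"output": "done", "error": ""}
# A misspelled key is a static error, not a runtime KeyError:
# bad: TaskResult = {"output": "done", "err_msg": ""}  # mypy flags this
```

At runtime it's still just a dict, so you can adopt it incrementally without touching the call sites.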
The separate types will come in handy once you end up using that client object throughout the codebase and you are no longer sure who is connected, who is authenticated, and so on. By pushing that onto the type system you can make fewer mistakes.
It is also more convenient for your own private libraries, but it doesn't make as much of a difference for one-off scripts you maintain alone (unless you are also the type that forgets what you were even doing in that script a few months from now, in which case you could benefit like me). It's about lowering cognitive overhead and the chance of development errors in the long run, if everything is nicely abstracted.
In simpler situations, where you always just run through the same sequence, you would probably want to combine some of these calls anyway.
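To make the "separate client types" idea concrete, here is a minimal typestate sketch in Python. Every name is illustrative, not from a real library: each connection phase is its own class, and each transition returns the next phase, so "send before connect" is impossible to even write.

```python
class AuthenticatedClient:
    """Phase 3: only this type can send or close."""
    def send_message(self, msg: str) -> str:
        return f"sent: {msg}"

    def close(self) -> None:
        pass  # release the (hypothetical) connection

class ConnectedClient:
    """Phase 2: connected but not yet authenticated."""
    def authenticate(self, token: str) -> AuthenticatedClient:
        # ... perform the auth handshake here ...
        return AuthenticatedClient()

class Client:
    """Phase 1: nothing established yet."""
    def connect(self) -> ConnectedClient:
        # ... open the socket here ...
        return ConnectedClient()

# The only way to reach send_message is through the valid sequence:
client = Client().connect().authenticate("token")
client.send_message("hi")
client.close()
# Client().send_message("hi")  # AttributeError at runtime, and mypy
#                              # flags it statically before you ever run it
```

Nothing enforces the order except the types themselves, which is the whole point: the invalid sequence is unrepresentable rather than merely discouraged.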
The same reasoning applies to other developers working on your code next month. However, if your Python turns out to be very far from idiomatic, you're not helping them in the slightest. You're doing harm to the team and to your career, unless everybody agrees to make it the company standard and you have your back covered.
But I have a question. I'm a junior dev (gimme some leeway here).
I don't really understand how important these design patterns are, because in the programs I write, I usually write the classes and call them at runtime myself. I think usually we write the servers and clients ourselves.
Let's take the different client types example. You are making an assumption that users might call close on a closed client. Is it so hard to just follow the sequence?
```
client = Client()
client.connect()
client.authenticate()
client.send_message()
client.close()
```
Isn't this overengineering? Perhaps I haven't worked in large enough codebases to understand the problems the typestate pattern, or these design patterns more generally, solve.
I understand that these patterns are elegant and make future modifications or enhancements easier, but I have never seen enough tangible value in real life.