So in .NET, like Java as you mention, we have attributes,
e.g.
[JsonPropertyName("username")]
[JsonIgnore]
etc. This is simple and obvious. The JsonPropertyName attribute is an override; you can set naming policies for the whole class: camelCase by default, with kebab-case, snake_case, etc. as alternative defaults.
C#/.NET of course has the benefit of having public properties, which are serialised by default, and private properties, which aren't, so you're unlikely to be exposing things you don't want to expose.
This contrasts with Go's approach, much like Python's, of using a casing convention to determine private vs public fields. (Please correct me if I'm wrong on this?)
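For comparison, here's what I understand to be the rough Go counterpart of those two attributes; the struct and field names are made up, so treat it as a sketch:

```go
package model

// User is a hypothetical struct mirroring the C# example above.
type User struct {
	Username string `json:"username"` // roughly [JsonPropertyName("username")]
	Token    string `json:"-"`        // roughly [JsonIgnore]: exported, but skipped
	password string                   // unexported: never serialised at all
}
```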
The first example still confuses me though, because either you want IsAdmin to come from the user, in which case you still want to deserialise it, or you don't, in which case it shouldn't even be in your DTO at all.
Deserialisation there is a bit of a red herring, as there should be a validation step which includes "Does this user have the rights to create an admin?".
The idea of having a user class, which gets directly updated using properties straight from deserialized user input, feels weird to me, but I'd probably be dismissed as an "enterprise programmer" who wants to put layers between everything.
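To sketch what I mean by layers (all the names here are hypothetical): a request DTO that simply has no IsAdmin field, with admin status decided by an explicit authorization check rather than by anything deserialised from the request body.

```go
package users

import "errors"

// CreateUserRequest is the wire-level DTO: IsAdmin is deliberately absent,
// so no request body can smuggle it in.
type CreateUserRequest struct {
	Username string `json:"username"`
	Email    string `json:"email"`
}

type User struct {
	Username string
	Email    string
	IsAdmin  bool
}

// CreateUser maps the DTO onto the domain object. wantAdmin would come from
// something explicitly validated (a separate field, an admin-only endpoint),
// never straight off the deserialised payload.
func CreateUser(req CreateUserRequest, callerMayCreateAdmins, wantAdmin bool) (User, error) {
	if wantAdmin && !callerMayCreateAdmins {
		return User{}, errors.New("caller may not create admin users")
	}
	return User{Username: req.Username, Email: req.Email, IsAdmin: wantAdmin}, nil
}
```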
I think calling it a convention is misleading.
In Python, you can access an `_field` just by writing `obj._field`. It's not enforced, only a note to the user that they shouldn't do that.
But in Go, `obj.field` is a compiler error. Fields that start with a lowercase letter really are private, and this is enforced.
So I think it's better to think of it as true private fields, just with a... unique syntax.
Go actually ties visibility to casing, instead of using separate annotations. And it will not serialise private fields, only public.
Python has no concept of visibility at all; conventionally you should not access attributes prefixed with `_`, but nothing will stop you.
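A quick illustration of the Go side (the struct is made up): encoding/json silently skips unexported fields, and from another package the unexported field wouldn't even compile.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Account struct {
	Name   string // exported: visible to other packages and serialised
	secret string // unexported: from another package, a.secret is a compile error
}

func main() {
	out, _ := json.Marshal(Account{Name: "alice", secret: "hunter2"})
	fmt.Println(string(out)) // {"Name":"alice"} ; secret never appears
}
```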
The reason it's like that is that Go is philosophically very much against the idea of annotations and macros, and very strongly in favour of clear, upfront control flow, and this is one of the reasons I love the language. But it does come at the cost of a few highly useful use cases for annotations (like mapping JSON and XML) becoming obtuse to use.
The idea of more compile-time macros in Go is interesting to me, but at the same time the ease of debugging and understanding the Go control flow in my programs is one of the reasons I love it so much, and I would not want to invite the possibility of "magic" web frameworks that would inevitably result from more metaprogramming ability in Go. So I guess I'm prepared to live with this consequence. :/
Annotations have no control flow; they just attach metadata to items. The difference from struct tags is that the metadata is structured.
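To be concrete about how unstructured the struct tag side is: the whole tag is one raw string, and the key:"value" layout is just a convention that reflect.StructTag knows how to parse. A minimal sketch:

```go
package main

import (
	"fmt"
	"reflect"
)

type User struct {
	Name string `json:"name" validate:"required"` // the entire tag is a single raw string
}

func main() {
	f, _ := reflect.TypeOf(User{}).FieldByName("Name")
	fmt.Println(f.Tag)                 // json:"name" validate:"required"
	fmt.Println(f.Tag.Get("json"))     // name
	fmt.Println(f.Tag.Get("validate")) // required
}
```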
The solution is usually to have an even better language: one where the type system is so powerful that such hacks are not necessary. Unfortunately, that also means you have to learn that type system to be productive in the language, and you have to learn it more or less upfront - which is not something that Google wanted for golang, given the turnover.
What might be interesting is a language ecosystem, where one can write parts of a system in one language and other parts in another. The BEAM and JVM runtimes allow for this but I don't think I've seen any good examples of different languages commingling and playing to their strengths.
> The BEAM and JVM runtimes allow for this but I don't think I've seen any good examples of different languages commingling and playing to their strengths.
Probably because the runtime is always the lowest common denominator. That being said, there are lots of tools written in Scala but used from Java, such as Akka or Spark. And the other way around, of course.
It's one of many examples of 80/20 design in Go: 80% of functionality with 20% of complexity and cost.
Struct tags address an important scenario in an easy-to-use way.
But they don't try to address other scenarios, like annotations do. They are not function tags. They're not variable tags. They are not general purpose annotations. They are annotations for struct fields and struct fields only.
Are they as powerful as annotations or macros? Of course not, not even close.
Are they as complex to implement, understand, and use? Also no.
80/20 design. 80% of functionality at 20% of cost.
There's no free lunch here, and the compromises Go makes to achieve its outcomes have shown themselves to be error-prone in ways that were entirely predictable at design time.
It does occasionally, although I'll push back on the "often". Go's simplifications allow most of the codebase to be... well... simple.
This does come at the cost of some complexity in the edge cases. That's a trade-off I'm perfectly willing to make: the weird parts being complex is something I'm willing to accept in exchange for the normal parts being simple, as opposed to constantly dealing with a higher amount of complexity to make the edge cases easier.
> There's no free lunch here
This I'll agree with as well. The lunch is not free, but it's very reasonably priced (like one of those hole in the wall restaurants that serves food way too good for what you pay for it).
> the compromises Go makes to achieve its outcomes have shown themselves to be error-prone in ways that were entirely predictable at design time.
I also agree here, although I see this as a benefit. The things that are error-prone are clear enough that they can be seen at design time. There's no free lunch here either: something has to be error-prone, and I like the trade-offs Go has made about which parts are. Adding significant complexity to reduce those error-prone places has, in my experience, just increased the surface area of the error-prone sections of other languages.
Could you make the case that some other spot in design space is a better trade-off? Absolutely, especially for a particular problem. But this spot seems to work really well for ~95% of things.
Exactly this.
Basically: have a complex compression algorithm? Yes, it's complex, but the resulting filesize (= program complexity) will be low.
If you use a very basic compression algorithm, it's easier to understand the algorithm, but the file size will be much bigger.
It's a trade-off. However, as professionals, I think we should really strive to put in the time to properly learn the good, complex compression algorithm once and then benefit from it in all the programs we write.
[insert Pike's Google young programmers quote here]
That's just not the philosophy of the language. The convention in Go is to be as obvious as possible, at the cost of more efficient designs. Some people like it, others don't. It bothers me, so I stopped using Go.
You can just not use them though - you can unmarshal to a map instead, select the keys you want, perform validation, etc., and then set the values.
Same when publishing - I prefer to have an explicit view which defines the keys exposed, rather than publishing everything by default based on these poorly understood string keys attached to types.
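A minimal sketch of the unmarshal-to-a-map approach (the User type and the allowed keys are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type User struct {
	Name    string
	Email   string
	IsAdmin bool
}

// applyUpdate unmarshals into a map first, then copies only the keys we
// explicitly allow onto the domain object; anything else is ignored.
func applyUpdate(u *User, body []byte) error {
	var fields map[string]json.RawMessage
	if err := json.Unmarshal(body, &fields); err != nil {
		return err
	}
	if raw, ok := fields["name"]; ok {
		if err := json.Unmarshal(raw, &u.Name); err != nil {
			return fmt.Errorf("name: %w", err)
		}
	}
	if raw, ok := fields["email"]; ok {
		if err := json.Unmarshal(raw, &u.Email); err != nil {
			return fmt.Errorf("email: %w", err)
		}
	}
	// "isAdmin" is never read from the payload, so it can't be set this way.
	return nil
}

func main() {
	u := User{Name: "old"}
	_ = applyUpdate(&u, []byte(`{"name":"new","isAdmin":true}`))
	fmt.Printf("%+v\n", u) // {Name:new Email: IsAdmin:false}
}
```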
Are you somehow under the impression that Go is unique in having a terse way to map fields to fields?
> It’s really quite novel once you understand it.
It's the opposite of novel; putting ad-hoc annotations in unstructured contexts is what people used to do before Java 5.
This allows you to derive a safe parser from the structural data, and you can make said parser really strict. See, e.g., Wuffs or LangSec for examples of approaches here.
The accidental `omitempty` and `-` are a good example of the weirdness, even if they might not cause problems in practice.
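For anyone who hasn't hit these: with encoding/json, forgetting the leading comma in `json:"omitempty"` renames the key to "omitempty" instead of enabling the option, and `-` only means "skip" when it's the whole name part. A small demo (the struct is made up):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Payload struct {
	// Intended: omit when empty. Actual: the key is renamed to "omitempty",
	// because the first comma-separated item in a tag is the name.
	A string `json:"omitempty"`

	// What was probably meant:
	B string `json:"b,omitempty"`

	// "-" means "never serialise this field"...
	C string `json:"-"`

	// ...while "-," is how you get a key literally named "-".
	D string `json:"-,"`
}

func main() {
	out, _ := json.Marshal(Payload{A: "", B: "", C: "c", D: "d"})
	fmt.Println(string(out)) // {"omitempty":"","-":"d"}
}
```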