Go: proposal: Go 2: spec: introduce structured tags

Created on 31 Jan 2018 · 70 Comments · Source: golang/go

This proposal is for a new syntax for struct tags, one that is formally defined in the grammar and can be validated by the compiler.

Problem

The current struct tag format is defined in the spec as a string literal. The spec doesn't go into any detail about what the format of that string might look like. If the user somehow stumbles upon the reflect package, a simple space-separated key:"value" convention is mentioned. It doesn't go into detail about what the value might be, since that format is at the discretion of the package that uses said tag. There will never be a tool that helps the user write the value of a tag, similar to what gocode does with regular code. The format itself might be poorly documented, or hard to find, leading one to guess what can be put as a value. The reflect package itself is probably not the most user-facing package in the standard library either, leading to a plethora of Stack Overflow questions about how multiple tags can be specified. I myself have made the error a few times of using a comma to delimit the different tags.

Proposal

EDIT: the original proposal introduced a new type. After the initial discussion, it was decided that there is no need for a new type, as a struct type, or custom types whose underlying types can be constant (string/numeric/bool/...), will do just as well.

A tag value can be either a struct whose field types can be constant, or a custom type whose underlying type can be constant. According to the Go spec, that means a field or custom type can be a string, a boolean, a rune, an integer, a floating-point number, or a complex number. Example definition and usage:

package json

type Rules struct {
    Name string
    OmitEmpty bool
    Ignore bool
}

func processTags(f reflect.StructField) {
    // reflect.StructField.Tags []interface{}
    for _, t := range f.Tags {
        if jt, ok := t.(Rules); ok {
            ...
            break
        }
    }
}
package sqlx

type Name string

Users can instantiate values of such types within struct definitions, surrounded by [ and ] and delimited by ,. The type cannot be omitted when the value is instantiated.

package mypackage

import "json"
import "sqlx"

type MyStruct struct {
      Value      string [json.Rules{Name: "value"}, sqlx.Name("value")]
      PrivateKey []byte [json.Rules{Ignore: true}]
}

Benefits

Tags are just types: they are clearly defined and are part of a package's types. Tools (such as gocode) may now be made to assist in using such tags, reducing the cognitive burden on users. Package authors will not need to create "value" parsers for their supported tags. As a type, a tag is now a first-class citizen in godoc. Even if a tag lacks any kind of documentation, a user still has a fighting chance of using it, since they can now easily go to the definition of a tag and just look up its fields, or see the definition in godoc. Finally, if the user has misspelled something, the compiler will now inform them of an error, instead of it occurring at runtime or being silently ignored, as is the case right now.

Backwards compatibility

To preserve backwards compatibility, string-based tags will not be removed, but merely deprecated. To ensure a unified behavior across libraries, their authors should ignore any string-based tags if any of their recognized structured tags have been included for a field. For example:

type Foo struct {
    Bar int `json:"bar" yaml:"bar,omitempty"` [json.OmitEmpty]
}

A hypothetical json library, upon recognizing the presence of the json.OmitEmpty tag, should not bother looking for any string-based tags. The yaml library in this example, however, will still use the defined string-based tag, since no structured yaml tags it recognizes have been included by the struct author.
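A sketch of how a library could implement that precedence rule. Since reflect.StructField.Tags does not exist, a plain []interface{} stands in for the proposed vector of structured tags; the Rules type and the helper function are hypothetical:

```go
package main

import (
	"fmt"
	"reflect"
)

// Rules mirrors the proposal's hypothetical structured json tag.
type Rules struct {
	Name      string
	OmitEmpty bool
}

// fieldName applies the proposed precedence: if a recognized structured tag
// is present, the string-based tag is ignored entirely; otherwise the library
// falls back to the legacy string tag.
func fieldName(structured []interface{}, legacy reflect.StructTag, def string) string {
	for _, t := range structured {
		if r, ok := t.(Rules); ok {
			if r.Name != "" {
				return r.Name
			}
			return def
		}
	}
	if v, ok := legacy.Lookup("json"); ok {
		return v // a real library would also strip options like ",omitempty"
	}
	return def
}

func main() {
	// A recognized structured tag wins over the string-based one.
	fmt.Println(fieldName([]interface{}{Rules{Name: "bar"}}, `json:"old"`, "Bar"))
	// No recognized structured tag: fall back to the string tag.
	fmt.Println(fieldName(nil, `json:"old"`, "Bar"))
}
```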

Side note

This proposal is strictly for replacing the current struct tags. While the tag grammar could be extended to apply to many more things than struct tags, this proposal is not suggesting that it should; such a discussion should happen in a different proposal.

Go2 LanguageChange NeedsInvestigation Proposal

Most helpful comment

Having the field tags before the field declaration is not very readable, compared to having them afterwards, for the same reason that it is more readable to have the type of a variable after the variable. When you start reading the struct definition, you come upon some meta information about something you haven't yet read. Currently, you know that a PrimaryKey is a byte array, and it is ignored for json marshaling. With your suggestion, you know that something will be ignored for json marshaling, and only afterwards do you learn that what is ignored is a primary key, which is a byte array.

All 70 comments

Related to #20165, which was recently declined. But this version is better, because it proposes an alternative.

I don't see any special need for a new tag type. You may as well simply say that a struct field may be followed by a comma separated list of values, and that those values are available via reflection on the struct.

On the other hand, something this proposal doesn't clearly address is that those values must be entirely computable at compile time. That is not a notion that the language currently defines for anything other than constants, and it would have to be carefully spelled out to decide what is permitted and what is not. For example, can a tag, under either the original definition or this new one, have a field of interface type?

@ianlancetaylor
You raise an interesting point. A struct will pretty much have the same benefits as a new tag type would. I imagine it would probably make the implementation a bit simpler. Other types might only be useful if they are the underlying type of a custom one, and as such one would have to use them explicitly, otherwise there might be ambiguity when a constant is provided directly:

package sqlx

type ColumnName string
...

package main
import "sqlx"

type MyStruct struct {
    Total int64 [sqlx.ColumnName("total")]
}

vs what I would consider an invalid usage:

package main
import "sqlx"

type MyStruct struct {
    Total int64 ["total"]
}

For your second point, I assumed that it would be clear that any value for any field of a tag has to be a constant. Such a "restriction" makes it clear what can and cannot be a field type, and will rule out having a struct as a field type (or an interface, in your example).

I wonder if we could solve this without having to change the language, and even better, in Go 1.X rather than waiting for Go 2. As such, I've tried to understand the problem as well as the proposed solution and came up with a different approach to the problem, please see below.

First, the problem.
I think the description starts from the wrong set of assumptions:

There will never be a tool that helps the user write the value of a tag, similar to what gocode does with regular code.

There can totally be a tool that understands how these tags work and allows users to define custom tags and have them validated.
One such tool might, for example, benefit from "magic" comments in the code; the proposed structure could be "annotated" with a comment like // +tag.

This would of course have the advantage of not forcing a change in the toolchain, with the downside that you'd need the tool to validate this instead of the compiler. The values could be JSON, for example:

package mypackage

import "json"
import "sqlx"

type MyStruct struct {
    Value string `json:"{\"Name\":\"value\"}" sqlx:"{\"Name\":\"value\"}"`
    PrivateKey []byte `json:"{\"Ignore\":true}"`
}
package json

// +tag
type Tag struct {
    Name string
    OmitEmpty bool
    Ignore bool
}
package sqlx

// +tag
type SQLXTag struct {
    Name string
}

More details can be put into this, such as how there should be a single tag per package, how the struct must be exported, and so on (which the current proposal also does not address).

The format itself might be poorly documented, or hard to find, leading one to guess what can be put as a value. The reflect package itself is probably not the most user-facing package in the standard library either, leading to a plethora of Stack Overflow questions about how multiple tags can be specified.

This sounds like a problem one could fix with a CL / PR to the documentation of the package, specifically improving it by documenting the available tags or how to use these struct tags.

I myself have made the error a few times of using a comma to delimit the different tags.

Should the above proposal with "annotating" a struct in a package work, this means that the tools could also fix the problem of navigating to the definition of tag.

Furthermore, the original proposal adds the problem that tag is now a keyword and cannot be used as an identifier in regular code. Imho, should any new keyword be added in Go 2, there should be a _really_ good reason to do so, and it should be kept in mind that it would make porting existing Go 1 sources that much harder, given how the code would need to be refactored before being ported over.

The downside of my proposal is that this requires people to use non-compiler tools. But given how govet is now partially integrated in go test, this check could also be added to that list.

Tools that offer completion to users can be adapted to fulfill the requirement of assisting the user in writing the tag, all without having the language changed with an extra keyword added.

And should the compiler ever want to validate these tags, the code would already be there in govet.

@dlsniper

Furthermore, the original proposal adds the problem that tag is now a keyword and cannot be used as an identifier in regular code

You can always make the compiler a bit smarter to understand context and when a keyword is a keyword. The tag keyword would appear in a very specific context which the compiler could easily detect and understand. No code could ever be broken by that new keyword. Other languages do this and have no problem adding new contextual keywords without breaking backwards compatibility.

As for your proposal, adding another set of magic comments further establishes, for me, the opinion that there's something wrong with the design of the language. Every time I look at these comments they look out of place, like someone forgot to add some feature and, in order not to break things, shoved everything into comments. There are plenty of magic comments already. I think we should stop and implement proper Go features instead of continuing to develop another language on top of Go.

You can always make the compiler a bit smarter to understand context and when a keyword is a keyword.

As I said above, though, I see no advantage at all to using tag rather than struct.

This proposal still needs more clarity on precisely what is permitted in a tag type, whatever we call it. It's not enough to say "it has to be a constant." We need to spell out precisely what that means. Can it be a constant expression? What types are the fields permitted to have?

One such tool might, for example, benefit from "magic" comments in the code; the proposed structure could be "annotated" with a comment like // +tag.

This seems to make the problem worse, to be honest. Instead of making tags easier to write and use, you are now introducing more magic comments. I'm sure I'm not the only one opposed to such solutions, as such comments are very confusing to users (plenty of questions on stackoverflow). Also, what happens when someone puts // +tag before multiple types?

Value string json:"{\"Name\":\"value\"}" sqlx:"{\"Name\":\"value\"}"

This not only ignores a major part of the problem illustrated by the proposal (the syntax), but also makes it harder to write.

More details can be put into this on how this should be a single tag per package, the struct must be exportable, and so on (which the current proposal also does not address).

Should we address the obvious? Honest question, I skipped some things as I thought they were too obvious to write.

This sounds like a problem one could fix with a CL / PR to the documentation of the package, specifically improving it by documenting the available tags or how to use these struct tags.

It's still in the reflect package. Why would an average user ever go and read the reflect package? Its index alone is larger than the documentation of some packages.

Furthermore, the original proposal adds the problem that tag is now a keyword and cannot be used as an identifier in regular code. Imho, should any new keyword be added in Go 2, there should be a really good reason to do so, and it should be kept in mind that it would make porting existing Go 1 sources that much harder, given how the code would need to be refactored before being ported over.

I edited my original proposal to remove the inclusion of a new type. This was discussed in the initial discussion with @ianlancetaylor, and I was hoping further discussion would include that as well.

This proposal still needs more clarity on precisely what is permitted in a tag type, whatever we call it. It's not enough to say "it has to be a constant." We need to spell out precisely what that means. Can it be a constant expression? What types are the fields permitted to have?

I've edited the proposal to add more information as to what types are permitted as tags.

I like the idea of the proposal. I would love to be able to write metadata that is then checked at compile time, without needing to parse the metadata at runtime.

The updated proposal makes sense to me. A metadata entry is simply a struct.
The syntax for the tags is also ok. It even lets me write the metadata line by line:

1. initial proposal

type MyStruct struct {
    Value string [
        json.Rules{Name: "value"}, 
        sqlx.Name("value")
    ]
    PrivateKey []byte [
        json.Rules{Ignore: true}
    ]
}

That appears quite readable (at least to me). Fields and annotations are clearly distinguishable, even without syntax highlighting.

Now, I know this is about tags only, but if we wanted to add metadata to things other than struct fields, I would suggest putting the metadata above the thing that it annotates instead of behind it.

Two examples that come to mind are C# attributes and Java annotations. Let's see how they would look on Go struct fields.

2. C# like

type MyStruct struct {
    [json.Rules{Name: "value"}]
    [sqlx.Name("value")]
    Value string

    [json.Rules{Ignore: true}]
    PrivateKey []byte 
}

That is less readable than 1.

3. Java like

type MyStruct struct {
    @json.Rules{Name: "value"}
    @sqlx.Name("value")
    Value string

    @json.Rules{Ignore: true}
    PrivateKey []byte 
}

That is less readable than 1. but way cleaner than 2.

Now, in Go we already have a syntax for multiple imports and multiple constants. Let's try that:

4. Go like

type MyStruct struct {
    meta (
        json.Rules{Name: "value"}
        sqlx.Name("value")
    )
    Value string

    meta (json.Rules{Ignore: true})
    PrivateKey []byte 
}

That is less compact. Removing the meta keyword wouldn't help, I think. Neither would using square brackets:

5. with square brackets

type MyStruct struct {
    [
        json.Rules{Name: "value"}
        sqlx.Name("value")
    ]
    Value string

    [json.Rules{Ignore: true}]
    PrivateKey []byte 
}

The single-statement form looks like 2 (C# like), and the multiline statement is still not as compact as I would like it to be.

So far I still like the 3. Java-like style best. However, if metadata should not be applied to anything other than struct fields (ever), then I prefer the 1. initial proposal style. Now, if there are some legal issues with stealing a syntax from another language (I am not a lawyer), then I could think of the following:

6. hash

type MyStruct struct {
    # json.Rules{Name: "value"}
    # sqlx.Name{"value"}
    Value string

    # json.Rules{Ignore: true}
    PrivateKey []byte 
}

Having the field tags before the field declaration is not very readable, compared to having them afterwards, for the same reason that it is more readable to have the type of a variable after the variable. When you start reading the struct definition, you come upon some meta information about something you haven't yet read. Currently, you know that a PrimaryKey is a byte array, and it is ignored for json marshaling. With your suggestion, you know that something will be ignored for json marshaling, and only afterwards do you learn that what is ignored is a primary key, which is a byte array.

I understand your point and I partially agree. With metadata on tags only, your suggestion looks best (I might want to omit the comma in the multiline form, though).

My intention is to suggest a syntax that might work on structs and methods. Those elements have their own body. Adding metadata behind the body might push it out of sight if the implementation is lengthy. I think metadata should live somewhere near the name of the element that it annotates, and my natural choice is above that name, since below is the body.

Syntax highlighting helps to spot the parts of the code you are interested in. So if you are interested in reading the struct definition, your eye will skip the metadata syntax.

I don't think tags are that important to care about them being out of sight. For the most part, I only care about actual fields and look at tags only in very specific cases. It's actually a good thing that they're out of the way because most of the time you don't need to look at them.

Your examples with @ and # prefixes look good and readable, but I don't think it's that important to pursue changing the existing syntax. Even the C# syntax is easy to read for me, being a C# programmer.

Just wanted to add another quick anecdote. I recently saw this committed by a colleague of mine, a seasoned developer:

ID int `json: "id"`

Obviously this wasn't tested at runtime, but it's clear that even the best of us can overlook the syntax, especially since we are used to catching 'syntax' errors at compile time.

I like this proposal for property tags. Would it be too much to ask for a similar feature for struct-level tags?

@kprav33n I'm not sure what you mean by struct-level tags, but I'm guessing it's something that the language does not support today. It sounds like an orthogonal idea that should be discussed separately from this issue.

@ianlancetaylor Thanks for the comment. Yes, this feature doesn't exist in the language today. Will discuss as a separate issue.

I really like this proposal since I encountered problems with the current implementation of tags

type MyStruct struct {
    field string `tag:"This is an extremely long tag and it's hard to view on smaller screens since it is so incredibly long, but it can't be broken using a line break because that would prevent correct parsing of the tag."`
}

Adding line breaks will prevent the tag from being parsed correctly:

package main

import (
    "fmt"
    "reflect"
)

type MyStruct struct {
    field string `tag:"This is an extremely long tag and it's hard to view on smaller 
screens since it is so incredibly long, but it can't be broken using a 
line break because that would prevent correct parsing of the tag."`
}

func main() {
    v, ok := reflect.ValueOf(MyStruct{}).Type().Field(0).Tag.Lookup("tag")
    fmt.Printf("%q, %t", v, ok)
    // Output: "", false
}

This is kind of documented at https://golang.org/pkg/reflect/#StructTag, which clearly forbids line breaks between the "tag-pairs" and says that the quoted string part is in "Go string literal syntax", which means "interpreted string literal" per the specification.
In this case a compile time check could have saved me some debugging time.

I'm not sure if this is worth the increase in complexity to the language, etc., etc. But the potential benefits to tooling and improvement to developer experience do seem quite nice, though. I think it is worth exploring the idea.

I'm not concerned about the particulars of the syntax, so I'll stick with the convention in the first post. (To give the [] syntax a distinct name, I'll call it a vector.)

There's discussion about extending what kinds of types can be constants—which, currently, seems to be mostly taking place in #21130. Let's assume, for the moment, that it's extended to allow a struct whose fields all have types that can be constants.

While I agree that a tag should be a defined type, I don't think that should be enforced by the compiler—that can be left to linters.

With the above, the proposal reduces to: any vector of constants is a valid tag.

This also allows an old-style tag to be easily converted to a new-style tag.

For example, say we have some struct with a field like

Field Type `json:",omitempty" something:"else" another:"thing"`

Given a tool with a complete understanding of the tags defined in the stdlib but no third party libraries, this could be automatically rewritten to

Field Type [
    json.Rules{OmitEmpty: true},
    `something:"else"`,
    `another:"thing"`,
]

Then, the third party tags could be manually rewritten or rewritten further by tools provided by the third party packages.

It would also be possible for the reflect API to work with both old- and new-style tags: Get and Lookup would search for a tag that is an untyped string with the old-style format in the vector of constants while a new API allowed introspection of the new-style tags.
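A sketch of how such a backwards-compatible Lookup could behave, simulating the vector of constants with a []interface{}; none of this is a real reflect API, only an illustration of searching the vector for an untyped-string entry in the old format:

```go
package main

import (
	"fmt"
	"reflect"
)

// lookup scans a vector of tag constants for a string entry and parses it
// with the existing old-style reflect.StructTag rules, leaving structured
// (non-string) entries to a hypothetical new introspection API.
func lookup(vec []interface{}, key string) (string, bool) {
	for _, t := range vec {
		if s, ok := t.(string); ok {
			if v, ok := reflect.StructTag(s).Lookup(key); ok {
				return v, true
			}
		}
	}
	return "", false
}

func main() {
	// A mixed vector: one structured tag value plus two legacy strings.
	vec := []interface{}{
		struct{ OmitEmpty bool }{true},
		`something:"else"`,
		`another:"thing"`,
	}
	fmt.Println(lookup(vec, "another")) // thing true
}
```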

I'd also note that most of the benefits of this proposal are for tooling and increased compile time checking. There's little benefit for the program at runtime, but there are some:

  1. No parser needed in already reflect-heavy code, reducing the bug/security surface while requiring fewer tests
  2. Tags can have methods, potentially allowing a better factoring of the code for handling the tags.

Some points brought up in #24889 by myself, @ianlancetaylor, @balasanjay, and @bcmills

If the tags are any allowed constant they could also be named constants and not just constant literals, for example:

const ComplicatedTag = pkg.Qual{ /* a lot of fields */ }
type S struct {
  A int [ ComplicatedTag ]
  B string [ ComplicatedTag, pkg.Other("tag") ]
  C interface{} [ ComplicatedTag ]
}

which allows both reuse and documentation on ComplicatedTag

Tags, being ordinary types, can use versioning which in turn allows them to be standardized and collected in central locations.

I dislike the idea of binding tags to external packages. While field tags are very widely used by those external packages, binding them this way is very limiting.

Having a json:... tag does not mean the struct (or the package holding the struct) should have a dependency on encoding/json: since Go currently has no way to modify field tags, it makes sense for them to be there for other packages (usually in the same program) to marshal/unmarshal with. Depending on encoding/json does not make sense.

I think field tags have problems that need to be addressed, as the OP said: they need syntax checking and tools to help with that. But binding them to a dependency feels like overdoing it.

@leafbebop

Why would importing the json package pose problems? Or, in fact, any package that provides tags? The only problem with dependencies is circular imports, which would never happen here. And so far, I haven't seen many third-party packages that use other third-party packages' tag definitions, which means that more or less all current tags, like json, already effectively place a dependency on the package that defines them, since you are more than likely importing said package somewhere else in your code in order to work with the structs that have these tags.

@urandom

Say I am writing a website, and since Go supports Wasm, I am going full-stack. Because of that, I isolate my data model code for re-usability.

Since there is no way to add field tags to a struct after the fact, for my server code to be able to co-operate with SQL, I add field tags for sqlx. To do so, I imported sqlx, of course.

And then I decide to re-use the data model code for my Wasm-based frontend, to truly enjoy the benefit of full-stacking. But here is the problem. I imported sqlx, and sqlx has an init function, which means the whole package cannot be eliminated by DCE; that means the binary size increases for no gain at all, and for Wasm, binary size is critical. The worst part is yet to come: sqlx uses cgo, which cannot be compiled to Wasm (yet, but I do not think Go will ever reach a point where it compiles C to Wasm; that's pure magic).

Sure, I can just copy my code, delete all the tags, and build it again. But why should I? The code suddenly becomes non-cross-platform (now that JS is a GOOS) just because of something trivial. It does not make sense.

Alternatively, I think it can remain in a keyword style: instead of a package, use a string-typed keyword.

@leafbebop
I'd say, first of all, that this is an incredibly specific example without a lot of real-world consequence. That being said, since you are worried about the resulting file size (again, not something a lot of us care about on a daily basis), you can also copy the struct and omit the tags. As you said yourself, that would work just fine.

Such code would be in the extreme minority. Not only would it target wasm, but it would also have to import cgo modules. Not a lot of people will have to deal with that. And why should everyone else have to suffer the current error-prone, structureless mess that you have to triple-check as you write it, because no tool will be able to help you, and then pray that down the line you won't get any runtime errors because a field didn't match?

@urandom

No. The problem here is just this: data modeling should not depend on an external package, in theory or in practice. And requiring a dependency because of a piece of meta info that may or may not be used is non-idiomatic and does not feel like Go.

You don't need to import io to implement an io.Reader, but you need to import sqlx to define a data model that might be used by sqlx? It seems wrong to me.

And about the error-prone part: detecting errors before running is what Go is good at. But not all of that detection happens when you run go build. There are many other tools, including go vet, that check things like that and, as far as I am concerned, a Go tool is not hard to write.

I am not against having better meta-info (be it a tag or not) about fields, because the old way is not expressive and is hard for tools to check. But binding a package to it? That is another problem.

What I propose as the keyword style is that somehow we have a syntax like:

type S struct {
    F T meta (
        "json" {
            omitempty bool = true
        }
    )
}

P.S.: I don't think that re-using code between front-end and back-end, especially data model code, is rare; and from what I read, Go with Wasm is widely welcomed and far from a minority. But those are beyond the scope of this issue.

@leafbebop Alternately, it could just be best practice to put the tag structs in a separate package when using init to avoid the coupling.

You don't need to import io to define an io.Reader, but you do need to import time to have a time.Duration field, which seems the more apt analogy here.

@jimmyfrasche That requires rewrites to all packages using database/sql (and image), which does not seem good to me. Furthermore, the distinction does make sense:

Field tags and field types are very different in terms of code logic. Field types determine how the struct is organized and how the logic of that struct is written. Field tags, on the other hand, are descriptive info about that field, often offered to other packages.

That means that when you declare a field as a time.Duration, the data structure holds a time.Duration and the logic of that type uses time.Duration. But if you have a field tag like json:omitempty, its logic often has absolutely nothing to do with json. There is a reason the current spec allows assigning between structs with different tags.

On the other hand, field tags are more like interfaces: both are about how "outside" code uses the type. An io.Writer does not care about how the data to write is produced as long as it is a []byte, just as a field with a yaml: name tag does not care how, and by which package (yaml has multiple non-official packages to parse it), any value is unmarshalled into that field, as long as its name is name.

@leafbebop Why would packages using database/sql or image need to be rewritten? They use init but they don't expose any struct tags (unless I missed something re-skimming their docs to double check).

sqlx might need a separate package to define the types to use for the tags it provides, but it would need to define those types somewhere as part of the transition, regardless, so it would mostly just be a matter of what directory the file containing those definitions goes in.

I do get the yaml problem, though. It would be bad if every yaml parser was dependent on every other yaml parser just so that they could understand each others tags. Of course, it would also be possible for them to work together on a central repository that just contains the least-common-denominator tag definitions that they all agree on. They might need to define additional tag types locally but those could be upstreamed to the central repo later. This would have been an unmanageable mess before type aliases and vgo, but it no longer seems like it would be much of an issue. The tags being ordinary types allows them to use these existing mechanisms.

@jimmyfrasche I think you missed my points, so I summarize them here:

  1. By philosophy, field tags are descriptive pieces of info, oriented to the outer program. Much like implementing an io.Reader is not specific to package io, having a json tag does not necessarily mean the model is for json. It is philosophically wrong to bind such information to a concrete use, let alone a specific package.

  2. In theory, even when a field is meant for a certain topic (json, sql, yaml and so on), that topic does not bind to a certain package. Versioning and type aliases might simplify a unified tag system, but that seems complicated and non-idiomatic for no reason. As I put it, it is literally just a topic, and to address a general topic, a keyword is clear, to the point, and easy to implement.

  3. In practice, having an extra dependency in a data model package can cause problems. Every single sql driver has an init function (to register with database/sql), and if it also has a tag to interpret, that means overhead.

And there is really no drawback to using a keyword-styled structured tag. The old field tag spec is weak in two respects: expressiveness and error checking.

The expressiveness problem is solved in an almost identical way to the "package" way, so I'll leave it there. And as I said in a previous comment, error checking can happen outside of compilation. It is not hard at all to write a go tool that checks field tags, and it can easily scale to custom schemas.

@leafbebop That counter-proposal seems to lack a significant amount of detail, which makes it hard to evaluate; you should consider opening a separate proposal if you think that is the way to go. Among other things, I don't understand how such "error checking" could be implemented, nor do I understand how packages would read that data (e.g. how would the json package read whether a field is tagged with omitempty). If the bindings here are just strings, then how would we globally agree on who gets the "sql" string, or the "json" string, when validating a struct with a particular tag? There are multiple packages that deal with these subjects, and they may well want different data structures. It's also unclear how to version changes to these structures.

To discuss your objections:

1) I agree that field tags are descriptive pieces of info, oriented to an "outer" program. But I don't understand how it follows that it's philosophically wrong to bind the information to a concrete use. I also see little difference between saying "json" as a string, or json as a package reference (which you import as a string "encoding/json"). In either case, you're clearly describing the exact same thing semantically. (And here, there is no init function, nor any loss in dead-code elimination.)

2) Versioning and type aliases were presumably referenced to discuss how a change like this (or any approach which namespaces symbols) enables evolution in the ecosystem, using the same tools that we use for evolving structs. And until you address how your counterproposal would deal with that problem, it is hard to evaluate its effectiveness.

3) Let's say that we grant that this use-case is a valid one[1]; to me, it seems that it's a straightforward anti-pattern to define tag-structs in packages that have init functions (or call cgo, or use unsafe, or include a giant dependency tree). First, the language does not need to prevent every anti-pattern; it can't. Second, consider if you're the author of such a sql package and you get a bug report to this effect ("I'd like to use the tag without pulling in the whole DB driver"); it is rather straightforward to fix in a completely backwards-compatible manner. Simply add a new "sqltag" package (or whatever you want to call it), move the tag types in there, and leave forwarding type aliases in the sql package for compatibility. Similarly, if many packages want to loosely couple a common set of tags, you could imagine defining an interface to represent the tag, and having the independent tag structs implement that interface. This opens up all of our usual tools in API design, and makes tags amenable to them. That seems to me to be a huge advantage of this proposal, and one that would be hard to replicate without inventing lots of concepts that would be very similar to the familiar notion of structs/interfaces/type-aliases/etc.

[1] For the record, I really don't think this sort of sharing is a good idea. I'm fairly sure one of the very first entries on the protobuf API best-practices list is "don't use the same protos for storage and for clients", or something to that effect; it introduces a lot of coupling between systems that evolve very differently (compare how often a backend is deployed, maybe daily, with how often a client might update: maybe never, for users who don't update the apps on their phone, or if the WASM is cached by the browser), and does not allow you to evolve your storage (e.g. denormalize data) without making the difference visible to clients.

@leafbebop

The more I think about it, the more I don't see importing packages for tags as a problem.
Everything else staying the same, if a package with side effects has tags, they can be defined in a separate package so as not to cause problems when used in model definitions. Since this is a new concept, targeted at Go 2, potential rewrites to accommodate it shouldn't count against it.

And even if they don't, perhaps the compiler will be able to remove the coupling when compiling your modules. It will know that the struct tags are only used at runtime, and if it sees that these structs are not passed to any other piece of code from the third-party package, it could probably deduce that the import is only for tags and essentially remove it, thus not executing its init functions at all.

Finally, considering the equivalent use cases of annotations in other languages (Java and C# come to mind from the ones I've used), I've not seen problems raised by having such coupling. There, annotations are defined structures that you have to import from their defining packages.

As for code reuse, I've only read about that in articles that deal with nodejs for now. So far I haven't seen the same codebase being shared in any other environment. Of course, that's all anecdotal.

EDIT: As for the yaml problem, I don't really see that as a problem either. First, there are no guarantees that all the yaml parsers out there use the same tag, let alone the same format even if the name were the same. Second, once you settle on a yaml parser, it's unlikely that you will switch to a different one later down the road. Same with databases: once you pick one, you usually stick with it. I've personally yet to see a client want to switch to a different database down the road (anecdotal).

@leafbebop Alright, I think we're going to have to agree to disagree.

Most of these seem either misapplied (e.g. LSP isn't saying that any two arbitrary software constructs should be substitutable, just types and subtypes; where for packages, there's no such thing as subpackages, and it just generally seems to be talking about an entirely different domain) or contradictory (e.g. this definition of LSP contradicts the desire stated in OCP where some package invents their own notion of json's omitempty tag) or are "issues" with the current state of the world (e.g. SRP is equally violated if your data model has any functionality _and_ has json string tags on it).

And there are more fundamental problems with the current state of the world than are dreamt of by the authors of these principles; for instance, the compiler cannot help you if you typed "jsn" (after all, some "outer" package might legitimately be interpreting these), or if you use commas instead of spaces (see the bug linked above).

I'm happy to admit the Proverb is relevant; it is legitimately a downside that there are potentially more dependencies being taken (though, again, keep in mind that this could be entirely mitigated if authors feel strongly via isolating tag structs in their own packages). But compared to the upside, this downside feels incredibly minor (to the point of insignificance).

@leafbebop

which will be hindered because encoding/json asks json:omitempt

I don't see how. The proposal follows the usual Go syntax for referencing package identifiers. If you have a conflict you can use the same tools that Go provides now.

Data model codes should not be forced to depend on models they do not use.

depend on low-level modules, be it sqlx or yaml

As was mentioned, in the context of this proposal you would define tags in a separate package.

Could a single mechanism address the problems of both tags and magic comments?

Consider:
// go:generate vs @go:generate
- go:generate could take an interface for something that generates code

// +build i686 vs @build(i686)

For an example of a user extension consider go contracts

// requires:
// * x > 1

might look better as: @requires(x > 1)

These things can then be checked by the compiler and accessed via reflection or the AST instead of by parsing comments.

Also, would it be fair to refer to this proposal as generalized attributes/annotations for Go, in line with what other languages call this? It might be worth considering whether compiler pragmas are just a subclass.
In C++ attributes are replacing pragmas in some uses. C++ also had a rule that programs should be interpreted the same if attributes are removed. Something which I think may not be the case for comments in go.
I realize attributes/annotations are one of the horrors some other languages have that golang would like to improve on but the current state of struct tags and magic comments is inferior to attributes.
There really ought to be a better solution with more thought. Perhaps requiring that a tag/attribute/annotation is a constant value of type that satisfies a special interface may be sufficient for most cases. The "requires" syntax above for adding design by contract would require it to be an expression however (and a way to convert that expression to a string). C++20 includes contracts as attributes.

I think it might be good to update the original proposal to include the following points mentioned throughout the discussion:

  1. Libraries, especially those with side-effects (i.e. in init()), should probably expose a separate package for struct tags. For example, you may want to define a field with a JSON tag (i.e. Go v1 syntax: json:",omitempty"). This does not necessarily mean that you want to depend on the encoding/json package. By separating tag stuff into a package like encoding/json/tag, you can now write:
import json "encoding/json/tag"

type MyStruct struct {
      Value string [json.Rules{OmitEmpty: true}]
}

The idea would be to make encoding/json/tag extremely lightweight, so it can be imported without any bloat or side-effects.

  2. To avoid doing the above everywhere and still reduce coupling, the Go compiler could detect when an import is used exclusively for tags and remove it where possible.

Both of these points are summarized in this comment: https://github.com/golang/go/issues/23637#issuecomment-404397329

Another idea to take this further... Suppose you import a package that's not even found by the compiler? If it's only used for tags, you might decide to silently ignore the import statement (gasp!). Obviously, in this case, no compile-time error checking is performed for that tag. A variant of this idea could be to put a "tag" keyword in the import declaration itself to indicate that it's only being imported for tags. I'm not really sure that I like any of these ideas, but it's food for thought. ;)

import "encoding/json" tag

I agree that coupling is not what we want, especially in annotations, but I believe the advantages of structured tag annotations and compile-time error checking outweigh this.

Any movement on this issue? I love the simplicity of tags in Go, but I hate having to use reflection to make use of them.

@domdom82 There has been no movement on this issue other than what is recorded above.

Personally I think there may be something here but it seems to me that the proposal is not fully clear. It requires a notion of constant value that is not currently in the language. The suggested syntax may be ambiguous; consider

type S struct {
    f func() [2]T // Is this a result type or a tag?

Also I think @leafbebop raises some valid points.

Finally, this proposal does not save you from using reflect. You still need to use reflect to look at tags, you just get an interface{} rather than a string. So this proposal doesn't address your main concern.

It requires a notion of constant value that is not currently in the language.

I think I clarified this in the EDIT, where I mention that after discussions here, there is no need for such a construct. A struct with fields whose types can be constants, or other types whose underlying types can be constants is sufficient.

The suggested syntax may be ambiguous;

It might be ambiguous to the lexer. I'm not sure. It's not ambiguous for the reader. Your example cannot possibly be a tag.

Also I think @leafbebop raises some valid points.

I think @balasanjay and @bminer did a good job addressing these.

Finally, this proposal won't save you from using reflection, since that is the fundamental way of consuming tags. It will, however, make the subsequent code simpler, since it will no longer need to parse the obtained tag string to produce something more descriptive.

@ianlancetaylor

It requires a notion of constant value that is not currently in the language.

I think something like the extension to constants described here https://github.com/golang/go/issues/6386#issuecomment-406824755 would suffice.

The suggested syntax may be ambiguous

I'm sure a syntax could be found if the core idea is worth doing. Since the individual tags would be regular constants it would be a matter of demarcating/enclosing the list of tags. Just to throw something out there you could do field T / [list, of, tags].

Finally, this proposal does not save you from using reflect. You still need to use reflect to look at tags, you just get a interface{} rather than a string. So this proposal doesn't address your main concern.

I don't think the main concern in the first post was having to use reflect. It looks like it can be broken down into some more concrete points:

  • hard to know how to write a struct tag since the library has to define and document its own parser
  • hard for tools to help writing a struct tag
  • hard for tools to validate a struct tag's syntax
  • requires the library to parse the struct tag (which can introduce errors)

If the struct tags are a sequence of constants then, aside from whatever syntax is used to enclose the tags:

  • you write each struct tag like a regular constant literal (using the extended notion linked above), so you don't have to worry about bespoke or ad hoc formats—it's just regular code. The types used for the tags show up in godoc like anything else.
  • tools can be written to autocomplete the tags like any regular constant literal
  • the struct tags are validated at compile time as much as any regular constant literal is
  • the library can type assert the struct tag to get the overall structure instead of having to define and implement its own parser

If the struct tag has complex validation logic that can't be expressed in the type system you still have the last two problems to a degree, of course, but it's much less of a degree.

Also since most packages would be looking for a single tag of some specific type, there could be a helper like

var tag Tag
ok := field.Tag.OfType(&tag)
// if ok, tag is filled in

OK, I think I got it. Upgrading tags from simple strings to types makes them easier to parse and check at compile time. The reflection issue should still be addressed IMHO, though in another issue. bump.

The tags are a property of the type so reflection will always be needed to access them.

@rsc @robpike

How come this wasn't settled before Go 1, when json is part of the standard library?

There's one problem none of those proposals address.

Sometimes we want to completely decouple the declaration of a type from the (potentially large) number of packages that do various things with it. I believe that the information we currently pass using tags should be passed using normal Go types, wherever that information is actually needed. Sample use cases where this approach is better are:

  • Splitting the app into various layers, where the entities layer should not know anything about the kind of database used, the format used for returning API responses etc. Telling the JSON decoder or the DB package how to interact with our types should not be the responsibility of the entity layer, but of the controllers, which actually deal with the API/db.
  • Marshaling/unmarshaling data from packages we have no control over. We can't add struct tags to types someone else has created. However, we could pass an additional argument to a method in package json, telling it to e.g. omit a particular field.
  • Different rules in different scenarios. We may want to marshal/unmarshal one type in multiple ways. For example, a guest, a random logged-in user and an author might need completely different information about a post. We want to hide some fields for one group of users while exposing them to others. The same goes for old and new API versions, backwards compatibility, renaming fields for clients that support the new API, etc.
  • Security-sensitive situations and other situations where struct-wide rules should be applied. On some types, we would like to tell packages to never marshal a field unless we specifically tell them to do so. For example, we might add a sensitive field to a struct, forget the appropriate tag and unwittingly expose it to users. Another common example is returning all JSON fields in lowercase, which is what users usually want, without declaring alternative field names on each field.

So, instead of writing this:

// This lives in entities and shouldn't have anything to do with marshaling JSON.
type User struct {
    Name string `json:"name"` // We'd like to return Username to newer clients instead, but we can't, not without introducing a new field here, just for json.
    Age int `json:"age"`
    Email string `json:"email"` // Not everyone should see this.
    PasswordHash string `json:"-"`
    CreditCard stripe.CC `json:"credit_card"` // Maybe we want to omit some of the subfields, but we can't.

    // Ten other fields...
    SSN string // We've forgotten the tag! Now all guests can see the SSNs of our users!
}

// ... 
func (u Users) Show(w http.ResponseWriter, r *http.Request) {
    // We get the requested user from the db, save into u.

    if !isAdmin(r) {
        u.Email = ""
    }

    // We forgot about SSN, somebody is going to get hacked soon.
    b, err := json.Marshal(u)
    // Do whatever.
}

We would write:

// This lives in entities and doesn't have anything to do with marshaling JSON.
type User struct {
    Name string  
    Age int
    Email string
    PasswordHash string
    CreditCard stripe.CC

    // Ten other fields...
    SSN string
}

// ...
func (u Users) Show(w http.ResponseWriter, r *http.Request) {
    // We get the requested user from the db, save into u.

    rules := []json.Rule{
        json.UseNamingStrategy(myutilspackage.SnakeCase),
        // If we forget to allow a field, we omit it, but that's better than transmitting too much.
        json.Allow("name", "age"),
    }
    if isNewClient(r) {
        rules = append(rules, json.Rename("name", "username"))
    }

    if isAdmin(r) {
        rules = append(rules, json.Allow("email"))
    }

    b, err := json.Marshal(u, rules...)
    // Do whatever.
}

This is example code for illustration purposes only; function names would likely be different. We could also declare some of those options on an encoder, e.g. to always omit some fields of the stripe.CC type. For those who want rules to belong with the types, we could say that if a type implements a JSONRules method returning a slice of JSON rules, package json will call it and follow the rules declared there. Similar changes could be made to other packages using tags.

All I'm saying is that we should move code responsible for encoding, databases, validation etc. where it belongs, instead of putting it all in the type declarations. The approach I propose is more flexible than tags, type-checked at compile time, and allows for greater extensibility, security, testability and introspection. It's also one less concept to learn for new go developers, and that's an important thing, considering how much confusion tags cause. There's no new syntax, absolutely no changes to the language spec for now, everything can be done via the standard library and external packages. Sure, tags will still need to be supported by the Go compiler for a very long time, but they can be marked deprecated. Go2 could remove them, theoretically, but I think it would be better to just let them stay, to minimize effort when upgrading programs. Sure, warn about their usage in go vet, write a gofix rule that would convert them into real go code for standard library packages and let other package authors do the same, but that's about it.

I'm surprised no one has suggested this before, as doing things this way seems pretty obvious to me. It seems pretty go-like. We reapply existing concepts (plain go code, functions, variadic parameters, interfaces etc) to achieve something, without needing one more magical language feature that's easy to misuse and needs to be learned. This seems almost too simple to not be implemented already and I'm wondering where the error in my reasoning lies, but I can't find it myself.

@devil418
This is an already-solved problem, since the current string tag system already produces coupling that may need to be avoided. And we have several solutions; two off the top of my head are: creating a local struct with the needed tags, and creating a new type using the desired one as a base and implementing whatever interface the encoding/decoding library is using. Of course, nothing is stopping anyone from implementing what you wrote as a library, since it doesn't require any changes to the language.

@devil418

type-checked at compile time

A lot of your example is stringly typed. json.Allow and json.Rename refer to struct fields by name; nothing about that is type-checked at compile time.

(Apologies if I repeat what has been said - I've skimmed the thread for the points I made, but it's very possible that I overlooked something)

I agree with the general criticism that how a type is encoded into JSON shouldn't really be a property of the type, but a property of whatever does the encoding (that is, what field-names to use should be something that's somehow passed to json.Marshal, not something that is put on the struct). But I think that's likely a lost battle by now.

I also agree with the criticism that typed struct-tags require an import that shouldn't be necessary. If I annotate a struct with json-tags, that doesn't necessarily mean it ever gets encoded into json. And a program that doesn't do it, shouldn't have to compile the json package in when using the type. Importing isn't really the relevant question here, though. It is very possible for a program to import a package without it actually being compiled in - the linker can sometimes determine that a package isn't actually used and strip it out. So even if we use typed struct fields, a sufficiently clever linker might solve this problem. But as the tags are available via reflection, I think that's really hard (I assume, for example, that passing the type to fmt would then also trigger the "we need to include type-info about the tags" check).

Lastly: I also agree that it would be better, in general, if struct-tags would have more structure. One way to do this, that addresses at least some of the issues brought up by @ianlancetaylor, is to allow them to be a list of constant-expressions instead of a single string-literal. Untyped constants get their default type assigned (to deal with the question of precision and such for runtime-representation). The way this could work, is that json declares something like this

package json

type Name string

type ImaginaryTag int

type boolTag bool

type omitTag bool

const (
    OmitEmpty = boolTag(true)
    // A distinct type, so reflection can tell Omit apart from OmitEmpty.
    Omit = omitTag(true)
)

which could be used as

type Foo struct {
    Foo int json.Name("bar"),json.OmitEmpty,bson.Name("bar")
    Bar float64 json.Name("baz"),bson.Name("baz")
    Baz string json.ImaginaryTag(42)
    EmbeddedThing json.Omit
}

(there is a parsing-ambiguity with embedded fields, if the constant-expression is an identifier. This could either be resolved in the type-checking phase, or there might be some color of bikeshed that doesn't have this problem)

The json-package could then, via reflection, see the type of the struct-tag and switch behavior based on it. Tags would still be compile-time computable (as they are constants) and type-checked. You can't have composite types as struct-tags, but I believe use-cases that require that are very rare - if they exist at all. Complicated use-cases might even still use some custom grammar inside a stringly-typed tag. But I believe for 90%+ of use-cases the types which can already be constants in Go would suffice.

@Merovius
Is there a technical reason why the tags can't be composite, if they are only made up of constant values? Otherwise, as the proposal says, regular constants are also fine and would be an improvement over the current string-based tags

@urandom The technical reason is "constants can't be composite values". We'd need to introduce either some general way to have composite constants (there are proposals about that elsewhere) or a separate notion of a pseudo-constant literal only used in struct tags to the spec, if we want that. The complexity of doing that hardly seems worth the benefit. Constants are already well-defined for strings, bools and numeric types, so we'd really only need to touch the field-tag part of the spec to do what I suggested :)

I am still not convinced about having to take an extra dependency just because one of my dependencies has tags on one of its fields. I think during the development of go mod, many great articles were written on why dependencies can be problematic.

@Merovius Importing is the issue. I am not confident that the yaml or sql world would have a common package for tags, and that means that when designing the data model, the library used for the field tag must be determined, or, when writing implementation details, the model code must be changed accordingly. And since string tags can be read as long as the keywords are the same, I very much doubt structured (or typed) tags in different packages can understand each other, so it is extra bad if there are two different packages (usually different types of sql) using the model. That is a no for me.

On the other hand, I would say a linter, possibly supported by go vet, allowing some kind of package-defined protocol for tag fields is more interesting (and probably with help from gopls, which I still haven't had time to look into enough to comment on). But as the discussion shows, it is probably just me.

Another thought just occurred to me before hitting the comment button: if we are asking packages to create a new package for tags, how about we ask them to provide a test function that can be plugged into unit tests, which simply accepts an interface{} and validates whether the tag field is type safe, well-formed and good?

How about actually not allowing packages with tag definitions to contain anything else? That not only encourages but forces people to define tags in very small separate packages that can be imported everywhere. The compiler wouldn't even need to compile those; it only needs to parse them and use the information for syntax checks.

If libraries for SQL, yaml and other field-tag-heavy domains can each agree to have a common, well maintained package of tags, I would be happy (though not sold on it, for a dependency is still a dependency; well, I know I am stubborn). But from what I see on the net, this is hard.

If it is just type checking, I think linters and unit tests can do the job (with the help of the package author). But they need to be easier to write (I can't think of a situation where I would require a tool to help me write field tags, but probably it's just me), and I do not know enough about whether gopls can dip into this (again, with the help of the package author).

My uneducated guess is that if auto-completion can work across packages, surely it can be done for field tags, though it still requires the library author to impose some kind of rules describing the desired tags (and their types), perhaps in the same way as proposed in this proposal, with some environment variable pointing to the library in use (without importing it).

On the other hand, if a tag is so complicated to write by hand (needing tools, not just type checking), perhaps the logic is sophisticated enough to deserve its own code instead of hiding in tag fields?


@leafbebop why do they need to write a common package? SQL, yaml, json, they all need completely different tags. They should be in separate packages. And if a package is not allowed to contain anything other than tag definitions, you don't pay the usual price for the dependency. This eliminates the main problem: transitive dependencies. You can treat this tag package as a tag protocol, like you described earlier.

if the tag is so complicated to write by hand

Even with simple tags like json you can make mistakes. Gopls recently started giving warnings about json tags and caught a few typos in my code. With libraries like gorm things get even worse.

@creker Sorry if I did not explain clearly.

I meant a common sql package and a common yaml package and so on. Especially in the sql case, it is not uncommon to have multiple versions of a sql library using the same data model.

I am aware that the current way of writing tags is prone to errors, and thus I propose solving these problems with linters (just like gopls did for you) and with unit tests (asking the package author to provide a utility to validate field tags).

I was exploring what would still be too complicated with this build-time (slightly different from compile-time) type checking and error reporting, and claimed that such cases might deserve their own logic in code.

@leafbebop
This is a bit anecdotal, but I've noticed that we never add tags to a struct unless the primary user plans to use the package that requires those tags in the first place. In terms of dependencies, if the tags were structured, no extra dependencies would be added because of them, since the dependent package is already used (i.e. the struct is already decoded via yaml, or a db, by the primary user / in the same package). We would also not be adding extra dependencies because of secondary users (users of the struct in a completely different package/module), since we don't add tags that aren't used by the primary user itself. That is to say, if the primary user of the struct isn't using a package that defines some tags, those tags aren't added in the first place. We always copy the structs and add different tags, since we don't want to couple structs to such packages unnecessarily, even when the tag is just a string.

So, at least for our shop, we would never see the problem you are describing.

On the subject of linters, such a solution would not be able to provide completion of non-structured tags. One would have to hard-code the tags into gopls, which is unfeasible for non-stdlib tags.

@urandom
I don't know about your code base, or many others', so I really can't comment on that. In my experience, however, it is pretty common to see data, and basic domain-specific data behaviour, separated from decoding/encoding/storing logic. The data definitions often describe a few requirements in tag fields, but are not specific to any implementation of them.

I am not certain how much of the user base my shop or yours accounts for, but that's some concern.

As for the linter, I can't see why that would be impossible (maybe I should read into gopls soon). I am not saying this is feasible right now, but if a package defined the typed tags it supports, via magic comments or some other proposed or conventionalized protocol, a linter could read about them and provide completion of non-structured tags according to that info.

On the other hand, I am curious to see a use case where type checking [1] would not suffice and that does not deserve its own logic, but where auto-completion and other gopls features would help greatly.

[1] which can easily be done by unit tests, with the package providing a validation facility, probably helped by some "x/tools/" utility. Again, this requires changes to the package, but so does the proposal.

And another note: while this proposed spec change can be done in a backward compatible way, I would see many tools that built on purely unstructured tags (of other packages) would be broken, for people will rarely keep both unstructured tags and structured tags (and the current proposal does not support keeping them both).

I am not saying there is a significant number of tools out there reading unstructured tags (nor that there isn't; I am not sure), but that's another concern if we are doing this.

@leafbebop
The proposal only wants to deprecate the string-based tags, not remove them, as that would be backwards incompatible.

What I said is:
1) the proposal does not mention how to keep both string-based and structured field tags, and even though there is a way to keep both, people would rarely do it; and
2) there are libraries/tools that depend on reading string-based tags they did not define themselves (e.g., every linter that reads field tags today), and they would break even though the language change is backward compatible.

I hope this time it is explained clearly.


FWIW, my rough design would make it relatively simple to keep both (at least for a transition period). StructField.Tag could refer to the first string-typed tag, formatted in the old way. A new field, Tags []Value, could be added that lists all tags. New packages defining tags would only use the latter and select those of the appropriate type (a helper method func (StructField) TypedTag(Type) Value could be added for convenience). Packages like json, that want to stay backwards-compatible, would look in both to decide what to do.

That would enable a transition from

type Foo struct {
    Foo           int     `json:"foo,omitempty" bson:"foo"`
    Bar           float64 `json:"bar"`
    EmbeddedThing         `json:"-"`
}

to

type Foo struct {
    Foo           int     json.Name("foo"), json.OmitEmpty, `bson:"foo"`
    Bar           float64 json.Name("bar")
    EmbeddedThing         json.Omit
}

This would still work as expected with the updated json package and the (presumed pre-structured) bson package for both versions. Of course, the second version doesn't work with a pre-structured json-package - the assumption is, that an appropriate go version would be specified in go.mod, when making this change.

(Not commenting on the import-issue, because as I said, I tend to agree it's a problem to be solved and IMO that's everything I can say about it for now)

@Merovius

By keeping both, I am not referring to json.Name and bson:"foo", but rather to json.Name("foo") and json:"foo". And yes, it seems that in your pseudo-proposal both can be kept, but it is awkward (as it should be), and I think most people would not really do it. Even if they did, inconsistency bugs between the redundant tags would be easy to introduce and hard to catch.

This will break code that reads the string json:"foo" tag but is not part of the json library (at least not the imported json library).

This might be a very rare and impractical case, but I think unofficial implementations of the json package would suffer from it. Not that it is hard to fix, or that I'm sure it will have an impact, but it is another concern.

Since tags are essentially a string and structured tags are essentially []interface{}, I feel like it would be relatively easy for a package to support both, especially if string-based tags are already supported and actively being parsed by the package. Deprecating the unstructured tags seems like the right approach since structured tags provide many more benefits, and this is probably the road most packages will follow.

One unanswered question is... what should happen if a struct field tries to use structured and unstructured tags at the same time? Surely this will happen as libraries slowly migrate to structured tags. Should conflicting options in the structured tag override the string-based tags? Should the library make this decision? Should there be some sort of convention? If so, what?

@bminer
Since string tags would be deprecated, the ideal solution would be for libraries to consider the structured tags first, falling back to string tags when the former aren't available. Of course, this isn't enforceable, so it's really up to the library itself.

@urandom - I agree, and it's probably not enforceable. Just pointing out the corner case that might warrant a new guideline / convention -- perhaps even something that is checked by a linter. Should a library completely ignore the string / unstructured tags when both structured and unstructured tags are present?

Example:

type Person struct {
  FirstName string `json:"firstName" bson:"first"` [json.Name("fName")]
}

EDIT: Fixed invalid string tag. :)

  • Does the JSON encoder encode the FirstName field as "firstName" or "fName" above?
  • What does the BSON encoder do since there are structured tags present?
  • Suppose a library only supports parsing unstructured / string tags. Can structured tags be "stringified" to address this? Can this be done at compile time?

I think it would help to address these questions in the proposal -- even if it's just a convention that isn't enforceable. Thoughts?

I think building this idea of stringification in would mostly defeat the point of the proposal. If you stringify the typed tags, that mapping has to be injective, so you don't get collisions. But if you have an injective mapping, why not just use that mapping for your string tags in the first place? Or to put it another way: to be well-defined, the stringification would likely need to include the import path of the type defining the tag, and if the package using it knows that it needs to specify the import path, why not move it over to the structured tagging API while you're at it?

Do any of y'all actually know a case where you'd need to support both simultaneously? ISTM that usually whoever adds the struct-tags will have a certain amount of control (and/or expectations) over what packages that type is used with - and at the end of the day, the syntax is still bound to the package defining the tags, so if a third-party library wants to drop-in replace that package (say, a third-party json encoder) it seems reasonable to expect it to conform to that new API in a timely manner (after all, support for structured tags in json could be known at least 3 months in advance to any third-party replacement, who could then start adding that support themselves guarded by build tags). There'll probably still be some percentage of cases where it makes sense to use both - but IMO it's fine to just expect those cases to put both kinds of tags into their structs and keep them in sync.

At the end of the day, I don't really think the goal here should be to prescribe how packages actually end up performing that move in all eventualities. It should really just be about making a reasonable path possible.

@bminer
Only the fName structured tag would be used regardless, since your stringified tag is invalid anyway :D (hence the proposal itself).
But if it were valid, the theoretical json library would ideally discard any stringified tags when structured tags exist. E.g.: if reflect.StructField.Tags contains a value that matches any of its types, it would not bother looking at the old reflect.StructField.Tag.

@urandom - Ha! Nice catch. I didn't realize the tag was invalid. :) Anyway, I think that I agree with your convention: ignore string-based tags entirely if structured tags exist. This can lead to strange situations, though...

Suppose the BSON library does not support structured tags yet. So we write:

type Person struct {
  FirstName string `bson:"firstName"` [json.Name("firstName")]
}

Then, suddenly the author adds support for structured tags and pushes a minor version release. The code above might silently break because the string-based tag will be ignored (by convention) by the BSON library.

Anyway, maybe this is worth discussing further? Sadly, since Go does not support structured tags and unstructured tags won't be going anywhere, I think it's really important to understand how we migrate between the two.

@Merovius - you are right that stringification can be weird, but current libraries have their own opinions about the namespace already. For example, for the sake of brevity, one writes json:"fieldName", not encoding/json:"fieldName". So, certainly namespace collisions are possible with unstructured tags. Again, I don't want to dive into the weeds too much either here, but it's worth considering exactly how the Go community would move forward (end users and library creators alike) if structured tags became a thing.

As a follow-up to my last comment, perhaps the convention should be:

If a pertinent structured tag is used, all unstructured tags should be ignored.

In the example above, there are no BSON-specific structured tags, so unstructured tags would get processed. If there were 1 or more BSON-specific structured tags, the BSON encoder library would ignore all unstructured tags. I also like this approach because linters can warn users when both tag types are being used (at least for commonly-used libraries like encoding/json).

Still, using the word "pertinent" makes for rather loose language, and as a computer scientist, I don't like it much.

@bminer, with your bson example, your code would have to contain a single bson.Tag() for the bson library to start ignoring string tags. It's not enough that it contains _any_ tags at all. So you, as a writer, will have to make the change yourself as well.

@bminer my point was exactly that you are diving too far into the weeds. What to do when there are both is only relevant if a) there have to be both and b) they differ. IMO a) won't be the case in >99% of use-cases, and thus for b) it's fine to rely on "don't do that then".

IMO you are complicating the design enormously for very little benefit, trying to address something that I believe to be entirely uncommon. And not just that: IMO the case you worry about even gets worse. If there are rules about the compiler sometimes rewriting tags, or about tags sometimes being ignored, then someone who sees that their encoding process (or whatever) doesn't work as expected needs to worry about and understand those rules too.

If there are simply two APIs, one for typed and one for string tags, then thinking through how to move from one to the other is a pretty simple matrix: does the declaration contain typed/string tags, yes/no, and does the consumer site use the typed-tag API, yes/no. A package that wants to move to typed tags just has to look at this matrix, decide which cases are relevant to its particular use-case, and decide whether it's happy with what will happen. If, however, there is some magical system in place to translate from one system to the other, that system will have to be understood and taken into account, making the decision about what to do much harder. And if some user wonders why their encoding (or whatever) is broken, they also have to understand this system, and users already have problems understanding the current string-tag syntax on its own.

IMO, if you just treat them as separate systems, it is pretty clear what happens when you add typed tags to a struct - code will simply continue to work as before, until it explicitly adopts the typed tag API. And it's pretty clear what happens when you remove string tags from a struct - code that is still relying on the string tag API will break. That's clear, it's simple and it gives a good migration path.

Note, that without any magic there is a pretty clear and obvious strategy for tag-consumers to adopt typed tags that makes the problem you identified vanish: Look for typed tags you are interested in, if you can't find any, use the string-tag. When bson in your example above adopts typed tags, it doesn't matter that there are some typed tags and a string-tag - it will ignore the json tags and not find one of the type it's interested in and thus use the string-tag. Even the case mentioned where you might have independent tag-consumers works fine under this strategy - if there is a struct-declaration that cares about interoperability with multiple such packages, it can just leave both the typed and the string-tag there. Any tag-consumer that knows about typed tags will ignore the string-tag altogether and a tag-consumer that doesn't will ignore the typed ones (obviously).

I think we have all reached the same conclusion: a library should ignore all string tags if a relevant structured tag exists.

@urandom Perhaps this should be included in the original proposal for clarity?
