Go: proposal: Go 2: function overloading

Created on 28 Aug 2017 · 67 comments · Source: golang/go

This has been talked about before, and is often answered by referring to the FAQ, which says this:

Method dispatch is simplified if it doesn't need to do type matching as well. 
Experience with other languages told us that having a variety of methods with
the same name but different signatures was occasionally useful but that it could
also be confusing and fragile in practice. Matching only by name and requiring
consistency in the types was a major simplifying decision in Go's type system.

Regarding operator overloading, it seems more a convenience than an absolute
requirement. Again, things are simpler without it.

Perhaps this was an acceptable decision at the point of the initial design. But I'd like to revisit it, as I question its relevance to the state of Go today, and I'm not sure a section in the FAQ fully addresses the problem.

Why?

The complexity of implementing this in Go is low

Function overloading doesn't have to complicate the language much. Class polymorphism and implicit conversions do, and combining those with overloading complicates things even more. But Go doesn't have classes or similar polymorphism, and isn't as complex as C++. I feel that, in the spirit of stripping things down to simplify, the need for overloading was overlooked. An overloaded function is, for all practical purposes, just another function with a suffixed internal name (except that you don't have to think about naming it). The compiler analysis isn't too complicated, and nearly every mainstream language has it, so there's a wealth of knowledge on how to do it right.

Simpler naming and more sensible API surfaces

Naming things is one of the most difficult parts of designing any API. Without function overloading it becomes even harder, which in turn encourages APIs with confusing names and propagates questionable designs so widely that their repeated use makes them seem okay.

I'll address some cases, going from debatable to the more obvious ones:

  • Case 1: Go's standard library is filled with functions such as this

    func (re *Regexp) FindAll(b []byte, n int) [][]byte { ... }
    func (re *Regexp) FindAllIndex(b []byte, n int) [][]int { ... }
    func (re *Regexp) FindAllString(s string, n int) []string { ... }
    func (re *Regexp) FindAllStringIndex(s string, n int) [][]int { ... }
    

    I believe this is confusing, and there is no way to know what the functions' intents really are unless we read the docs. (This is a common pattern we've become accustomed to over time, but it still isn't convenient in any way.) Even worse, by avoiding function overloading the language itself unfortunately encourages people to create APIs with confusing names.

    With function overloading, this could be

    func (re *Regexp) FindAll(b []byte, n int) [][]byte { ... }
    func (re *Regexp) FindAll(s string, n int) []string { ... }
    func (re *Regexp) FindAllIndex(b []byte, n int) [][]int { ... }
    func (re *Regexp) FindAllIndex(s string, n int) [][]int { ... }
    

    I think this is much nicer; it simplifies the API surface to essentially just FindAll and FindAllIndex. The API can be reduced, and this also has implications for auto-completion, grouping in the API docs, etc.

    The other part of the redundancy in this specific example relates to generics or variant types, but that's out of scope of this issue.

  • Case 2: The above example is actually one of the better ones.

    http package:

    func Handle(pattern string, handler Handler) { ... }
    func HandleFunc(pattern string, handler func(ResponseWriter, *Request)) { ... }
    

    We've used it long enough that habit has ingrained what this API does. But let's think about it from a purely semantic perspective for a moment. What does HandleFunc mean? Does it mean "handle the func itself", or "handle it using the func"? While the team has done the best it could without the capability of overloading, it's still confusing, and it's poor API design.

  • Case 3:

    Even worse:

    func NewReader(rd io.Reader) *Reader { ... }
    func NewReaderSize(rd io.Reader, size int) *Reader { ... }
    

    I'm saddened that this style became popular and made its way into the standard library. Does NewReaderSize mean "create a new reader of the given size", or "create a new reader size" (that is, semantically, an integer)? And the list goes on; it encourages users to adopt the same design, which creates a mess rather quickly.

Better default patterns for APIs

Currently, one of the patterns considered a good way to pass options into APIs is the "functional options" pattern, described by Dave Cheney here: https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis.

While it's an interesting pattern, it's basically a workaround. It does a lot of unnecessary processing, such as taking variadic args and looping through them, it adds many more function calls, and, more importantly, it makes reusing options very difficult. I don't think using this pattern everywhere is a good idea, with the exception of the few APIs that really fit the bill and keep their configuration internal. (Whether configuration is internal or not is a choice that cannot be generalized.)
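
For reference, here is a minimal sketch of the shape of that pattern; the names (Server, Option, WithTimeout) are illustrative and not taken from any real package:

    package server

    import "time"

    type Server struct{ cfg config }

    type config struct{ timeout time.Duration }

    // Option mutates the configuration.
    type Option func(*config)

    // WithTimeout returns an Option that overrides the default timeout.
    func WithTimeout(d time.Duration) Option {
        return func(c *config) { c.timeout = d }
    }

    // NewServer applies each option over a default configuration;
    // this is the variadic-args-plus-loop cost described above.
    func NewServer(opts ...Option) *Server {
        c := config{timeout: 30 * time.Second}
        for _, o := range opts {
            o(&c)
        }
        return &Server{cfg: c}
    }

Callers write NewServer(WithTimeout(10 * time.Second)), which reads well at the call site, but, as noted, building up and reusing a set of options elsewhere is awkward.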

The most common pattern that I'd consider a general fit would be:
    type Options struct { ... opts }

    func DefaultOptions() Options { return Options{ ... } }

    func NewServer(options *Options) { ... }

Because this way you can neatly reuse the option structures, manipulate them elsewhere, and just pass them in like this:

    opts := DefaultOptions()
    NewServer(&opts)

But it still isn't as nice as Dave Cheney's example, because there the default is taken care of implicitly.

With function overloading, this could easily be reduced to:

    func NewServer() { opts := DefaultOptions(); NewServer(&opts) }
    func NewServer(options *Options) { ... }

This allows me to reuse the structures, manipulate them, and provide nice defaults, and you can still use the functional options pattern for the APIs that fit it. This is much nicer, and it facilitates an ecosystem of better-designed libraries.

Versioning

Overloading also helps with changes to APIs. Take, for instance, https://github.com/golang/go/issues/21322#issuecomment-321404418, an issue about inconsistent platform handling in the OpenFile function. Rust has a much nicer pattern that solves this beautifully: OpenFile takes just the path, and everything else is handled using a builder pattern with OpenOptions. Let's say, hypothetically, we decide to implement that in Go. The perm FileMode parameter is useless on Windows, so to make the API better designed, let's hypothetically remove that parameter, since the OpenOptions builder now handles all of it.

The problem? You can't just remove it, because that would break everyone. Even across major versions, the better approach is to deprecate it first. But if you deprecate it here, you don't really provide a way for programs to migrate during the transition period, unless you bring in another function altogether named, say, OpenFile2. That is how it's likely to end up without overloaded functions. The best-case scenario is that you find a clever new name, but you can never reuse the same good original name. This is just awful. This particular scenario is hypothetical only because Go is still in v1; scenarios like it are very common and will have to happen for the libraries to evolve.

The right approach, if overloaded functions are available: just deprecate the old signature, let the same function name be used with the right parameters at the same time, and remove the deprecated variant in the next major version.
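
To make the hypothetical concrete, the transition could look roughly like this. Note that this is not valid Go today; it mirrors the declaration-only style used elsewhere in this issue, and OpenOptions is an assumed type, not an actual proposal for the os package:

    // Deprecated: perm is ignored on some platforms; use the OpenOptions form below.
    func OpenFile(name string, flag int, perm FileMode) (*File, error) { ... }

    // Same name, new shape, available in parallel during the transition period.
    func OpenFile(name string, opts *OpenOptions) (*File, error) { ... }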

I think it's naive to assume that standard libraries will never need such breaking changes, or that go fix can simply change them all in one shot. It's always better to give users time to transition this way, which is currently impossible for any major change in the standard library. Go has only hit v1, so the implications perhaps aren't felt strongly yet.

Performance optimizations

Consider fmt.Printf, fmt.Println, and all your favorite logging APIs. They could all have string specializations (and many more like them), which opens up a whole new set of optimization possibilities: avoiding the variadic argument slices, and better code paths.
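
As a sketch of what that could look like, here are hypothetical overloads; this is not the real fmt API, and it is not valid in today's Go:

    // General path: allocates a []interface{} and boxes every argument.
    func Println(args ...interface{}) (n int, err error)

    // Specialized path: no slice allocation, no boxing, a direct write.
    func Println(s string) (n int, err error)

The specialized variant could skip the variadic slice and much of the interface-based dispatch for the common case of printing a single string.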


I understand the initial designers stayed away from overloading, but I feel it's time that old design decisions the language has outgrown, or that are harmful to the ecosystem, are reopened for discussion; the sooner the better.

Considering all of this, I think it's easy to say that a small amount of added language complexity is more than worth the problems it solves. And it has to happen sooner rather than later, to prevent newer APIs from falling into the same naming traps.

Edits:

  • Changed a few words that had incorrectly set an offensive tone for the issue. Thanks to those who pointed this out, and my apologies to anyone I may have inadvertently offended; that was not my intention.
  • Added versioning point
  • Added optimizations point
  • Better formatting
FrozenDueToAge Go2 LanguageChange Proposal


All 67 comments

In Go a function can be assigned to a variable or passed as a parameter. If functions can be overloaded, how will programmers indicate which variant of an overloaded function they meant?


@davecheney - Solutions to keep things compatible, and to keep the semantics of the language from changing more than they have to, can be discussed. But I feel it's only productive to do so if the problem is acknowledged and there is a willingness to solve it. So far, I've unfortunately only seen deflection, with the problem simply marked as unfortunate rather than any attempt to solve it.

One of the reasons for me to open this, is to indicate the critical need for the discussion on a reasonable approach to this.

That being said, I can think of a few ways; the obvious one is to pass the signature along, which might seem rather tedious, but the compiler can quite easily infer it, and in cases of ambiguity the code simply won't compile. These are "solvable" problems. The API design restrictions imposed by the language itself are not.

While the proposal author is completely entitled to any opinion, the form of its presentation makes me uncomfortable.

Random samples:

Function overloading doesn't really complicate the language...

Opinion, but presented like a fact.

I don't think anyone would call this as good API design in anyway.

Wrong, I do.

I think we can all agree that the naming is just horrific, ...

Wrong, I disagree.

We've used it long enough so that this ridiculous API has been ingrained in our brains as to what it does.

Using words like ridiculous with regard to someone else's work really doesn't help your case.

And so on.

Calling something badly designed is easier than demonstrating it by example. If I see FindIndexString, I know I have a string. In your proposal, I have to look further and continue using human cycles to determine which FindIndex I'm looking at. I generally read more Go than I write Go, by an order of magnitude, so that "String" suffix has saved me quite a bit of time over the years.

I don't think the language will improve as people start populating packages with overloaded functions. The proposal lacks a basis for its assertion that this design is ridiculous. The _as-seen-on-TV_ tone of the proposal doesn't help either.

@cznic My apologies. Disregarding someone else's work wasn't my intention; if I've come across that way, I do apologize. However, my use of the words "ridiculous" and "horrific" wasn't meant to disregard the work; I wanted to emphasize the limiting nature of the language and how it has caused such APIs to propagate to the point that they're seen as okay. I've unfortunately come to see a lot of dogmatic views in the Go community, without the willingness to see the problems for what they are in many cases.

I strongly believe that APIs such as this

    func NewReader(rd io.Reader) *Reader { ... }
    func NewReaderSize(rd io.Reader, size int) *Reader { ... }

are horrible APIs when they're part of the language's standard library. They are highly misleading; I recognize that they were created because of this restriction in the language. My point is not to disregard the work of those who created them, but to insist on the restrictions of the language that forced them to be created this way.

I do apologize if my words came across as offensive rather than conveying my intention correctly. Hopefully we can look past that and at the actual problem. Cheers. :)

@as, The semantic meaning of FindIndexString is "find the index string", not "find the index from a string" (which is what the API means). That is my point: misguided naming has become so common that it has become OKAY to do it, which is not good.

You are missing the point. There is no FindIndexString. The function is regexp.FindStringIndex. A specific function is not relevant to the proposal.

@as, With reference to your readability remark: I would argue that reading FindIndex conveys all the meaning, and even simplifies the code. It doesn't take any more human cycles, as you already know that it finds the index of whatever you pass into it, regardless of what it is. The intent is very clear. And if you later change the argument to a byte representation, the semantics are preserved with no code changes, which is an added advantage.

Even if you need to know the exact type, let's face it, we're not in the '80s. We have language tools that assist. Constraining a language because of these limitations seems counter-intuitive to me. Almost any decent editor with language support can take you right to the function. And less code that still conveys intent is always cleaner: "FindIndex" is simply shorter than "FindIndexString", and the 'String' suffix is semantically an implementation detail, not part of the intent.

I've relabeled this as a proposal, but in fact there is no full proposal here. Perhaps this discussion should be moved to the golang-nuts mailing list.

I understand that the goal is to first say that function overloading could be added to Go 2, and to then discuss how it might work. But Go language changes don't work that way. We aren't going to say "it's OK in principle, let's figure out the details." We want to see how it really works before deciding whether it is a good idea.

Any proposal for function overloading needs to address how to handle cases like

func Marshal(interface{}) []byte
func Marshal(io.Reader) []byte

in which for a given argument type there are multiple valid choices of the overloaded function. I understand that others will disagree, but I believe this is a serious problem with C++ function overloading in practice: it's hard for people to know which overloaded function will be selected by the compiler. The problem in Go is clearly not as bad as the problem in C++, as there are many fewer implicit type conversions in Go. But it's still a problem. Go is intended to be, among other things, a simple language. It should never be the case that people are confused about which function is being called.

And lest anyone produce a simple answer for the above, please also consider

func Marshal(interface{}, io.Reader) []byte
func Marshal(io.Reader, interface{}) []byte

@ianlancetaylor - Thanks. Correct me if I'm wrong, but I understood this to be more of a philosophical change than a technical one, and too controversial a topic to start a full proposal on without some feedback first.

Technically, as you said, Go doesn't suffer from the complexities of C++. In fact, both of the cases you stated can be solved by simply disallowing all implicit conversions while matching, and by refusing to compile at all when there is any doubt of ambiguity.

Very high-level matching:

  1. Match concrete types -> if they're exact matches - function identified.
  2. Match specific interfaces -> Check interface compatibility -> If compatible - refuse to compile, else function identified. (This disallows both the above cases, as io.Reader is compatible with interface{})
  3. Match to interface{}, if and only if the matches above didn't succeed or error, and such a match exists.

The above matches are always only entered when an overloaded method exists - so the compiler effectively only pays these costs when needed.

This should work nicely, because a large number of the scenarios covered above simply deal with concrete types.

Allowed:

func Marshal(string) []byte
func Marshal(byte) []byte
func Marshal(io.Reader) []byte

Allowed:

func Marshal(string) []byte
func Marshal(byte) []byte
func Marshal(interface {}) []byte


Refuses to compile:

func Marshal(interface{}) []byte
func Marshal(io.Reader) []byte
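
To illustrate one reading of these rules at a call site (again hypothetical, since none of this compiles today), assume the first "Allowed" set above is in scope:

    Marshal("hello")      // rule 1: exact concrete match, picks Marshal(string)
    Marshal(byte(0x7f))   // rule 1: exact concrete match, picks Marshal(byte)
    Marshal(os.Stdin)     // rule 2: *os.File satisfies io.Reader, picks Marshal(io.Reader)
    Marshal(42)           // no concrete or interface match, and no interface{} overload
                          // in this set, so this would be a compile error

With the second "Allowed" set, Marshal(42) would instead fall through to the interface{} overload under rule 3.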

Side-note: All this opens up room for some very easy optimizations by means of specializations for APIs like "fmt.Printf" for strings, etc without unnecessary allocations.

Would the suggested very easy optimization of Printf e.g. fmt.Printf(string, string) need to also duplicate and specialise the underlying interface{} functions that are used by fmt.Printf:
Fprintf, doPrintf, ... to not just allocate later?

@martisch - Not duplicate, but refactor. Of course, it will have to go all the way down.

Everything beyond the type assertion for the string (which happens today) would have to be refactored into its own small method, which the underlying implementation of the specialization would then call directly.
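
As a rough, self-contained sketch of that shape, with entirely hypothetical names (this is not how fmt is actually structured):

    package printspec // illustrative only

    import (
        "bytes"
        "strconv"
    )

    type printer struct{ buf bytes.Buffer }

    // printString is the string-only fast path, factored out so that a
    // specialized entry point could call it directly.
    func (p *printer) printString(s string) { p.buf.WriteString(s) }

    // printArg is the general path: it type-switches and delegates the
    // string case to the extracted helper.
    func (p *printer) printArg(arg interface{}) {
        switch v := arg.(type) {
        case string:
            p.printString(v)
        case int:
            p.buf.WriteString(strconv.Itoa(v))
        default:
            p.buf.WriteString("<unsupported>")
        }
    }

A hypothetical string-specialized Println would then call printString directly, skipping both the []interface{} allocation at the call site and the type switch.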

@prasannavl There's no doubt that it's possible to come up with rules that somehow handle overloaded functions (C++ can do it...). That said, there really is a complexity argument, as @ianlancetaylor has mentioned already. As it is, a function or method is uniquely identified by its name; resolving it is a purely syntactic operation. The benefit of that simplicity, which you may consider perhaps too simple, is not to be underestimated.

There's one form of overloading which I believe you haven't mentioned (or perhaps I missed it), and which also can be decided syntactically, and that is permitting the same function (or method) name as long as the number of arguments is different. That is:

func f(x int)
func f(x int, s string)
func f()

would all be permitted together because at any one time, depending on the number of arguments it is very clear which function is meant. It doesn't require type information (well, almost, see below). Of course, even in this simple case we have to take care of variadic functions. For instance:

func f(format string, args ...interface{})

would automatically consume all the functions f with 2 or more arguments.

Overloading based on argument count is like adding the number of arguments to the function name. As such, it's easy to determine which function or method is called. But even in this simple case of overloading we have to consider situations such as:

f(g())

What if g() returns more than one result? Still not hard, but it goes to show that even the simplest form of overloading can quickly lead to confusing code.
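
To make that concrete in the arity-based scheme (hypothetical, since duplicate names don't compile today):

    func f(x int)           { ... }
    func f(x int, s string) { ... }
    func g() (int, string)  { ... }

    f(g()) // resolving this requires knowing how many results g returns,
           // so the choice of f is no longer visible from the call site alone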

In summary, I believe for this proposal to be considered more seriously, there's a couple of things missing:
1) A concrete detailed description of the overloading rules proposed with a discussion of special cases (if any).
2) One or multiple concrete examples of existing code where overloading would solve a real problem (such as making code significantly easier to read, write, understand, or whatever the criteria).

Currently, the proposal seems to lack the engineering salience to be seriously considered. Keep in mind that unless you are planning to create an experimental compiler and eventually working code, someone else from the Go team will have to implement this proposal.

In order to do that, there needs to be a perceived benefit that outweighs the cost of putting the change into the language. Ignoring the chaff (e.g., ridiculous, horrible), you submit that the language designers did not know what they were doing. Not really a good start for a proposal.

You need to _prove_ your problem is everyone's problem. Then you need to provide a solution that stands up to technical scrutiny. Then the powers that be decide if that solution is simple and coherent enough to make it into the language.

I think you are suggesting that it will be impossible to overload functions based on different interface types when it is the case that one of the interface types is assignment compatible with the other. That seems like an unfortunate restriction. It's already always possible to use a type switch to select different behavior at run time. The only gain from using overloaded functions is to select different behavior at compile time. It is often natural to handle different interface types differently, and there are plenty of examples in existing Go code of doing that at run time. Being able to overload some types at compile time but not others seems to me to be a weakness.

I forgot to mention another problem you will need to consider when it comes to writing a proper proposal, which is that Go constants are untyped.

func F(int64)
func F(byte)
func G() { F(1) } // Which F is called?

Also it will presumably be possible to overload methods, not just functions, so it becomes necessary to understand the meaning of method expressions and method values when they refer to an overloaded method.

func F(int64)
func F(byte)
func G() { F(1) } // Which F is called?

Here the rule could possibly be to use the same default type that constants have in short variable declarations, i.e. F(int) (not listed in the example, so a compile error in this case).

@cznic consider this code

package foo
func F(int64)

package bar // in a different repo
import "foo"
func G() { foo.F(1) }

That works but then foo adds func F(byte). Now bar.G() no longer compiles until it's rewritten to use an explicit conversion.

There's also the case of embedding an interface in an interface when they have overlapping overloads.

That works but then foo adds func F(byte). Now bar.G() no longer compiles until it's rewritten to use an explicit conversion.

Nice catch.

In Go a function can be assigned to a variable or passed as a parameter. If functions can be overloaded, how will programmers indicate which variant of an overloaded function they meant?

@davecheney I think this can be resolved using the type of the variable, to which a function pointer is assigned. Can you give an example?

func Add(int, int)
func Add(float64, float64)

g := Add

What is the type of g?

func (S) F1()
func f(interface{ F1() })
func f(interface{ F2() })

var s S
f(s)

This works until S implements F2.

I think that every time I've wished for some form of function overloading, it has been because I was adding a new parameter to a function.

func Add(x, y int) int {}

type AddOptions struct {
  SendMailOnOverflow bool
  MailingAddress string // default: 1600 Amphitheatre Parkway 94093
}

// AddWithOptions exists because I didn't think Add would need options.
// If I was starting anew, it would be called Add.
func AddWithOptions(x, y int, opts AddOptions) int {}

The cost of adding a ...WithOptions function isn't very high, so this generally isn't a huge concern for me. However overloading where the number of parameters must differ, as mentioned by @griesemer, would admittedly be useful in this case. Some form of default value syntax would also serve.

// Add(x, y) is equivalent to Add(x, y, AddOptions{}).
func Add(x, y int, opts AddOptions = AddOptions{}) int {}

To echo @neild's comment, it seems like all of these cases would be better addressed by either simpler or more general features proposed elsewhere.

Case 1 is really two issues: []byte vs string (which already has a cluster of related issues, exemplified by https://github.com/golang/go/issues/5376), and Index vs. Value (which, as @cznic notes, is a reasonable API distinction).

Generics (https://github.com/golang/go/issues/15292) would address the issue more clearly than overloading, because they would make it obvious that the only difference is in the types (not other behaviors):

func <T> (re *Regexp) FindAll(b T, n int) []T { ... }
func <T> (re *Regexp) FindAllIndex(b T, n int) [][]int { ... }

Case 2 is arguably better handled (ha!) by sum types (#19412):

func Handle(pattern string, handler Handler | func(ResponseWriter, *Request)) {
    ...
}

But it's not obvious that even those are necessary: HandleFunc is already a bit redundant with the HandlerFunc type.

Case 3 is the default-parameter case that @neild mentions.

func NewReader(rd io.Reader, size=defaultBufSize) *Reader { ... }
r := bufio.NewReader(r, size=1<<20)



or perhaps:

r := bufio.NewReader(r, {Size: 1<<20})

@ianlancetaylor, @griesemer - Doesn't https://github.com/golang/go/issues/21659#issuecomment-325396314 solve all of what you have mentioned? In fact, I think it pretty much addresses most of the following comments.

@griesemer - Overloading on just the number of arguments is a good compromise, except that the simplicity is not what it appears at first, due to the cases you already mentioned around variadics and interfaces. However, a rule set like the one I suggested makes overloading possible while still retaining a good amount of simplicity.

Internally, the implementation could look something along the lines of FunctionName_$(Type1)_$(Type2)_$(Type3)() { } when overloaded functions are found; at its simplest this compromises on the symbol names (though it could be optimized numerically later with a bit of added complexity).

@ianlancetaylor

I think you are suggesting that it will be impossible to overload functions based on different interface types when it is the case that one of the interface types is assignment compatible with the other. That seems like an unfortunate restriction.

Yes, it is restricted. But I see no reason that overloading has to be "all or nothing". I think concrete-type specializations, plus interface{} and non-compatible interfaces, would solve most use cases without the complications and the 'ick' factor of dealing with overlapping interfaces.

Besides, even if the full form is later found necessary, which I doubt, it should be far easier to go from a limited set to an all-encompassing set than the other way around. Judging from other languages, searching for a perfect solution is probably not going to take us far; it will end up like the case of generics: 10 years and counting.

@bcmills, I see you've provided a set of potential workarounds for the cases in the first section. Except for the first, pretty much all of them rely on language features that do not exist at the moment. For the sake of argument, let's hypothetically assume all of the proposed features work and exist, with the exception of generics (I'll come to this in a bit).

Considering that, your solutions arguably solve one scenario, but things like the specialization optimizations and the versioning concerns I've added in the Why section still cannot be addressed.

Now, generics would get you quite far, no doubt. But why don't I include them? They have been under exploration for 10 years with still no clear roadmap. 10 years is a long time. If the language designers are searching for a perfect solution, I don't think they're going to find it unless they move forward with some real implementation. Take Rust and C#, for example: they ship some form of nightly or preview versions that make experimental features available to the community, and learn from real use cases. Here, it's nice to see occasional articles from people like @rsc saying they've learnt more every year. But learnt, and used that knowledge, how? With all this extended experience and knowledge, at best the result would be marginally better than the above-mentioned languages, at worst the same. With a decade gone and still no significant drive to solve the problem, I don't feel comfortable counting on that feature for any practical purpose.

Don't mistake me, I'm all for waiting to learn more. But it's not very encouraging when it takes a decade for a problem that has already been solved many times before, and all the waiting is for marginal improvement, especially when you're not really learning from real use, which only happens once the feature is out there and people are using it. Languages have to "evolve".

@prasannavl It seems plausible that the restrictions you suggest will solve some of the problems mentioned above. But they introduce new problems, which I described as "an unfortunate restriction" and which @griesemer described as "a complexity argument", adding "the benefit of that simplicity, which you may consider perhaps too simple, is not to be underestimated."

To put it another way: you can solve certain problems by adding restrictions. But now everybody writing Go has to understand those restrictions. One of the several goals of Go is to provide a simple programming language. The rule "no overloads" is very simple. The rule "overloads are OK except that certain somewhat complex cases are prohibited" is less simple.

In other words, I'm explicitly disagreeing with your suggestion that it is easier to go from a limited set to an all-encompassing set. It is not easier, because the intermediate state is a language that is harder to understand. All language decisions are a balance of benefits and costs. Increasing the complexity of the language is a significant cost. It must be balanced by a more important benefit. I understand that you are in favor of overloading. But ultimately it adds no power to the language; it just lets you reuse a name rather than inventing a new one. To me the benefits seem relatively small, and the costs must therefore be relatively small.

Separately, I think you still haven't addressed method expressions and method values in the presence of overloading. For that matter, I don't understand how they work with @griesemer's suggestion of overloading on the basis of argument count. I think this matters in conjunction with the Go 1 compatibility rule, because it means that a program using a method expression or method value may unexpectedly break due solely to adding a new function overload. And the same seems to apply to a reference to a function in any way other than calling it. In other words, even if we do add function overloading, it may be impossible to overload an exported function or method in the standard library, or in any other library that cares about source code compatibility. That seems to me to be a considerable obstacle that needs to be understood first, not later.

@ianlancetaylor Thank you for your explanation - it helped me understand your approach to this better. To be honest, I almost want to agree with you on most of it.

However,

it just lets you reuse a name rather than inventing a new one. To me the benefits seem relatively small, and the costs must therefore be relatively small.

I'm rather of the opinion that you're underestimating the benefits of naming. While I don't want to put too much emphasis on the saying "there are only two hard problems in computing: cache invalidation and naming things", I'm sure you recognize the value in it, and it didn't become a common cliché for no reason.

Naming by itself probably doesn't add a lot of value. But naming propagates its benefits and problems into the design of APIs, which in turn affects "discoverability". Language simplicity and ease of use are two separate things. I personally come from a heavy C# and C/C++ background. To me C is simple, but C# has far better ease of use, which makes it simpler for most general use cases. One of the largest gaps I see coming from that world into Go is that its API discoverability is painful at best. People who are used to a library tend to look past its discoverability, but without actually spending a lot of time in the docs, or more naturally just drilling into the source code, it's very difficult to understand many libraries. The regexp package is a good example: to this day I still get confused and find myself looking at its docs, despite repeated use, as opposed to a library like C#'s, which provides overloading.

I still remember my initial days with C# almost two decades ago, being able to navigate the standard library with just its naming, organization, and the docs provided through IntelliSense, something I honestly find impossible with Go, partly due to design restrictions in the standard library attributable to the lack of overloading. Today, with the abundance of language tools, I would like to insist that tooling be part of the consideration in language design. For example, overloaded functions usually achieve one goal and can therefore be grouped together in auto-completion, which shows much shorter completion lists that are far more comprehensible. There are many smaller side benefits like this that I feel many overlook.

It's human tendency that once we've lived with something long enough, we simply get used to it. I could be wrong here, but my personal suspicion is that many Go programmers have just "gotten used to it", and as a result discount the value derived from better naming. It doesn't just change a name; it changes the paradigm with which APIs are designed.

That aside, I completely missed method expressions; let me sleep on it.

To me C is simple, but C# has far better ease of use, which makes it simpler for most general use cases.

(Emphasis mine) This is the key. To _me_ C# is an unusable mess, partly because of name overloading. Arguing about whether overloading is better or not is pointless; it's just a personal preference. For _me_, giving functions that do different things the same name just because they have different arguments makes the language much harder to read and reason about, especially in someone else's code.

The regexp package is a good example: to this day I still get confused and find myself looking at its docs, despite repeated use, as opposed to a library like C#'s, which provides overloading.

Do you mean by this, that it would be better _for you_ if all the regexp matching functions had the same name? I must be misunderstanding you. (?)

@cznic - Haha, touche on the emphasis point.

For me, giving functions that do different things the same name just because they have different arguments makes the language much harder to read and reason about, especially in someone else's code.

I don't think I ever claimed that functions should be overloaded to do different things. It's well accepted that that's not good practice regardless of the language. I'm talking about functions that do the same thing, but with small differences based on their inputs, which should still be considered one unit.

Examples range from NewReader/NewReaderSize above to regexp, NewServer/NewServerWithOptions, and fmt.Print* specializations for string, etc.

I've found that non-overloaded code accomplishing the same thing is needlessly verbose, and as such is counter-intuitive and harder to quickly understand, with argument names littered through the function names, compared to well-designed overloads that establish what the function does far more effectively. And the exact call is just a key press away with language-assist tools. Practically, I've found that I need to know the precise variant far less often than the task accomplished by the function; or rather, I'm okay with the extra key press when I need to know more, as opposed to the verbosity the names introduce.

Do you mean by this, that it would be better for you if all the regexp matching functions had the same name? I must be misunderstanding you. (?)

Nope. What I suggest lies somewhere in the middle, as suggested in the issue post: functions that do the same thing (for example, finding an index based on different inputs) can be grouped into one. Though in this specific case it could also be solved to the same extent by generics, whenever that happens.

C# is not a reference point for good naming. I have to install a 12GiB editor with IntelliSense to even use the language because its libraries are so convoluted. Users commonly use long Hungarian-ish variable names in short function bodies to compensate for the confusion created by overloading and other wizardry. The net result, in my experience, is not only an unusable mess, but an unreadable one.

@as, well, this is not relevant to the issue, however I feel like I should perhaps question the validity of your experience today, since it's likely extremely outdated.

No, there's no Hungarian-ish naming, at least not in corefx (the std lib), unless it's still in transition. If you encountered it as part of a public API surface, it's most likely a library of questionable design, or there were good reasons like platform compatibility (just as Go has constant names like O_RDWR). The official coding guidelines recommend against it.

And no, you haven't needed VS, or any large IDE, ever since the lightweight language service, which by the way is far superior to gocode or any viable alternative at the moment of this discussion; they've in fact set a standard for what's expected from language services. So it would be unproductive to just diss a language rather than keeping an eye open for taking the positives from it.

So, while I agree C# is not very relevant to this discussion and I used it only as an example, I feel you've misused the example by stating points that are either not entirely true or not very relevant today (admittedly they were, once upon a time, before Go even existed).


Unfortunately, my experience is not outdated, which is why I am against this proposal. I am all for improving the language, but this proposal hasn't demonstrated what is improved as an end result. You can convince me with practical examples demonstrating it.

@as, suggesting that you NEED Visual Studio just to use the language is simply not true. You can easily write in any text editor. And we have the lightweight Visual Studio Code with OmniSharp, or other supported editors. You have to remember your APIs or use the docs in exactly the same way in either Go or C#.

That's not to say I'm in favor of overloading. I'm not against it, but I don't really see big value in it. It can do nice things here and there but ultimately brings very little to the table; it brings little value, but it also brings a little confusion sometimes. At least in C#; C++ is a very different beast.

And if we look at the C# List as an example: it's nice that we have three BinarySearch overloads all doing the same thing but with different levels of control. But it's not more than that, just nice.

And we have the lightweight Visual Studio Code with OmniSharp, or other supported editors.

IIRC, VSC uses some JS editor...? If that is so, then lightweight does not apply, I'm afraid.

The conversation seems to be taking a detour. Don't know why editors are being discussed.

Just to bring the detour to an end: @cznic https://github.com/OmniSharp/omnisharp-vim - that's as lightweight as you can get with language assist. This is no different from C/C++, or pretty much any language, and arguably far more efficient and seamless than Go's language assist at the moment, due to a full-fledged Roslyn compiler providing the assist rather than the AST or syntax tree being parsed multiple times.

@cznic, lightweight in terms of size. If you don't like VSC for some reason, you can choose other supported editors; OmniSharp supports Emacs and Vim, for example. C# has come a long way since the days when you couldn't do anything without Visual Studio.

if you don't like VSC for some reason

I don't know if I like it because I would never touch it ;-) </offtopic>

@creker

And if we look at the C# List as an example: it's nice that we have three BinarySearch overloads all doing the same thing but with different levels of control. But it's not more than that, just nice.

Then again, C# has generics. You also use overloading in various places quite naturally, like new List() and new List(initialSize), or Dictionary, or the many collections that follow similar APIs. Imagine if C# had new ListSize(): I, for one, would definitely think it's an API that creates some numerical value holding a size. But that's roughly what Go does today; you just end up "getting used" to it, which isn't necessarily intuitive. The same goes for APIs that take options. Overloading removes unnecessary verbosity from the code in many places.

@prasannavl, it sometimes removes verbosity in exchange for clarity. Let's assume that you remember the whole API. Without overloading you have functions with different names, and the function name tells you exactly which function it is. With overloading you have to look at the arguments: how many of them there are and which types they have. If it's more complicated, you have to remember the resolution rules to figure out exactly which function is going to be called.

It's not always the case, but it happens even in C#, where that kind of thing is significantly easier than in C++. Also, C# is helped by the fact that pretty much everyone uses IntelliSense. I'm personally in favor of tool assistance and don't like writing even Go without it. But still, for me the gain is too little to justify the added complexity and confusion, even when you have tools to help you.

And if we compare that to the generics you mentioned, they have clear value in that they can do something you can't otherwise do without writing complex and slow code. The added complexity to the language there is clearly justified. Go even has them in a very limited way: slices, maps, channels, etc. I would argue that if generics were implemented, overloading would lose even more value.

@creker,

With overloading you have to look at the arguments: how many of them there are and which types they have. If it's more complicated, you have to remember the resolution rules to figure out exactly which function is going to be called.

I find this interesting. In my personal experience with ReSharper, or even with plain OmniSharp IntelliSense, I've never found this to be a problem: all the possible functions pop up with docs instantly as I type, and are contextually filtered down to the relevant ones as I type more. Even without tools, I've found it easier to figure out "related" functions in the source code or docs this way, rather than having to scan every one of them.

I'm personally in favor of tool assistance and don't like writing even Go without it.

Yup, I agree. It's important to recognize the value of tooling in language design today. While it's a personal choice for some to use a bare editor, I wouldn't want the language to be constrained much because of that. Pseudo-IDEs like VS Code start up pretty much instantly on today's systems with full tooling support.

I would argue that if generics were implemented, overloading would lose even more value.

Honestly, in Go, generics (which one could argue are fundamental to modern languages), thanks to the tease of exploratory proposals, just seem like a promised land that's always in the distant future (https://github.com/golang/go/issues/21659#issuecomment-328266346).

While generics do help, APIs like NewReaderSize will continue to exist in Go without overloading. And while generics don't seem to be happening any time soon in Go, overloading is, in my opinion, a far simpler implementation. The scheme I suggested, by removing the need to deal with complicated interfaces at all, can be done quite easily. (Admittedly, I still have to think about method expressions.) The maintenance cost would also not be too high. (I might get in trouble for throwing this out there, but I'd say it's probably reasonable to allot a few days to a week at most for someone familiar with the codebase; put that in perspective against generics, stuck in design for 10 years.) The real practical cost would be in adapting the analysis tooling, which would take significant time and work to cascade, but that's a one-time process. When and if generics come to Go, there would then be two ways to solve a small class of problems (i.e., some specialized functions that could be made generic), but overloading complements the other cases.

With generic APIs we also open up a larger class of naming difficulties and versioning transitions. I'd hate to have a nice generic API and still have to write WithOptions and the like, or even worse, a FindAllGeneric method to complement FindAll, because names cannot be overloaded and deprecated methods and new APIs cannot be provided at the same time. Perhaps there are clever ways to resolve this particular problem by having generics take on more of these challenges, but it certainly doesn't make things easier.

In short, the value of overloading isn't overshadowed by generics; rather, the two complement each other nicely.

@prasannavl

I find this interesting. In my personal experience with ReSharper, or even with plain OmniSharp IntelliSense, I've never found this to be a problem: all the possible functions pop up with docs instantly as I type, and are contextually filtered down to the relevant ones as I type more. Even without tools, I've found it easier to figure out "related" functions in the source code or docs this way, rather than having to scan every one of them.

Overloading is nice when you're writing code, but when you're reading it you have to deal with the problem I mentioned. The problem becomes even bigger when you have operator overloading: not only are function calls no longer obvious, but arithmetic operations aren't either.

Overloading is nice when you're writing code, but when you're reading it you have to deal with the problem I mentioned. The problem becomes even bigger when you have operator overloading: not only are function calls no longer obvious, but arithmetic operations aren't either.

Unless someone comes up with some spectacular innovation, I don't think anyone here is even attempting to venture near operator overloading (with notable exceptions like strings, of course). This issue, to my knowledge is only about functions.

@prasannavl

https://github.com/golang/go/issues/21659#issuecomment-328283293 reminds me that overloading is fairly closely tied to function naming, and especially to constructors. When you only get one name to work with for constructor functions — the name of the type itself — you end up needing to work out some way to squeeze lots of different initialization options into just that one name.

Thankfully, Go does not have distinguished “constructors”. The functions we write to return new Go values are just functions, and we can write as many different functions as we want to return a given type: they don't all have to have the same name, so we don't have to work as hard to squeeze everything into that name.

Function overloading makes code harder for me to read in Java and C#. It is difficult to find the implementing code, since you need to know the types of the function arguments. I will not use function overloading in Java and C#, but others will, so it just makes it easier to write unreadable code.

Here is my workaround for the function-overloading need (adding arguments after the function is already being called by outside callers):

type OpenRequest struct {
  FileName   string   // required; must be passed
  IsReadOnly bool     // default: false
  IsAppend   bool     // default: false
  Perm       FileMode // default: 0777
}

func Open(req OpenRequest) (*File, error) {
  if req.FileName == "" {
    return nil, errors.New(`req.FileName==""`)
  }
  if req.Perm == 0 {
    req.Perm = 0777
  }
  ...
}

Open(OpenRequest{
  FileName: "a.txt",
})

I just found that you can add an almost unlimited number of arguments to this function; the caller can pass some of the arguments and ignore the others, and the arguments are more readable at the call site.

Here is my workaround for the function-overloading need (adding arguments after the function is already being called by outside callers):

Facepalm.

If we are considering function overloading, are we also going to consider pattern matching? In my opinion, they are related: The same concept implemented differently.

If I may be so bold, I don't think that the Go authors are "considering" function overloading. I also enjoy Go's simplicity, I think as much as anyone. Personally, I can say that it changed my thinking about programming in a good way.

Has anyone mentioned that overloading will require _additional_* name mangling for Go symbols?
With generics on board, this is even more exciting.

(*) As far as I know, right now only the package prefix is prepended to the symbol.

The context package adds repeated methods to the database/sql API: https://golang.org/pkg/database/sql/

func (db *DB) Exec(query string, args ...interface{}) (Result, error)
func (db *DB) ExecContext(ctx context.Context, query string, args ...interface{}) (Result, error)
func (db *DB) Ping() error
func (db *DB) PingContext(ctx context.Context) error
func (db *DB) Prepare(query string) (*Stmt, error)
func (db *DB) PrepareContext(ctx context.Context, query string) (*Stmt, error)
func (db *DB) Query(query string, args ...interface{}) (*Rows, error)
func (db *DB) QueryContext(ctx context.Context, query string, args ...interface{}) (*Rows, error)
func (db *DB) QueryRow(query string, args ...interface{}) *Row
func (db *DB) QueryRowContext(ctx context.Context, query string, args ...interface{}) *Row
...there's more for other types

@pciet, great example! And we're only in v1 of the language. I'm scared of how these APIs will evolve without overloading.

For the context-in-database/sql case I'm not convinced function overloading is the right pattern to fix repeating APIs. My thought is to change *DB to a struct of reference/pointer types and call methods on an sql.DB value instead, where the context is an optional field assigned before making the calls to Exec, Ping, Prepare, Query, and QueryRow, each of which has an if db.context != nil { block with the context-handling behavior. A rough sketch follows.
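A rough sketch of that alternative, assuming a hypothetical wrapper type around *sql.DB (the wrapper and its WithContext method are illustrative, not a proposal for the actual package):

package sqlwrap

import (
	"context"
	"database/sql"
)

// DB wraps *sql.DB and carries an optional context used by its methods.
type DB struct {
	inner *sql.DB
	ctx   context.Context // nil means "no context"
}

// WithContext returns a copy of db that uses ctx for subsequent calls.
func (db DB) WithContext(ctx context.Context) DB {
	db.ctx = ctx
	return db
}

// Exec dispatches to ExecContext only when a context has been set.
func (db DB) Exec(query string, args ...interface{}) (sql.Result, error) {
	if db.ctx != nil {
		return db.inner.ExecContext(db.ctx, query, args...)
	}
	return db.inner.Exec(query, args...)
}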

The database/sql case seems to me like an argument for better API versioning rather than for function overloading.

@neild, precisely. And overloading is one of the most helpful ways to achieve API versioning. (As mentioned in the initial post already)

The problem? You can't just go and remove it, because that would break everyone. Even with major versions, the better approach is to deprecate it first. But if you deprecate it here, you don't really give programs a way to migrate during the transition period, unless you bring in another function altogether named, say, OpenFile2. That is how it's likely to end up without overloaded functions. The best case is that you find a clever new name, but you cannot reuse the same good original name again. This is just awful. While this particular scenario is hypothetical, it's only so because Go is still in v1. These are very common scenarios that will have to happen for the libraries to evolve.

This just happens to be an example similar to the hypothetical scenario I mentioned. It's not that you can't do it without overloading, but you have to jump through hoops, possibly even with new packages, to correct the mistakes of old. A minimal sketch of the rename-based transition follows.
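A minimal sketch of the rename-based transition, using the standard "Deprecated:" doc-comment convention; the fileapi package, Options type, and OpenFile2 name are made up for illustration:

package fileapi

import "os"

// Options is a hypothetical options struct for the newer API.
type Options struct {
	ReadOnly bool
	Perm     os.FileMode
}

// Open opens the named file for reading.
//
// Deprecated: use OpenFile2. Without overloading, the replacement cannot
// reuse this name while both functions coexist during the transition.
func Open(name string) (*os.File, error) {
	return os.Open(name)
}

// OpenFile2 is the replacement that was forced to take a new name.
func OpenFile2(name string, opts Options) (*os.File, error) {
	flag := os.O_RDWR
	if opts.ReadOnly {
		flag = os.O_RDONLY
	}
	return os.OpenFile(name, flag, opts.Perm)
}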

@neild, precisely. And overloading is one of the most helpful ways to achieve API versioning. (As mentioned in the initial post already)

@prasannavl, as already pointed out, this is not a fair solution, as it can break existing code that assigns a function by referring to it by its name.
The whole "versioning" part can be misleading, as overloading can be backwards-incompatible in unexpected ways (think of C code that calls your Go functions, as a less expected/popular example).

If you know the solutions to those complications, or if this is an acknowledged risk/tradeoff, please mention it in the first message.
Current solutions do not cause these troubles, so the alternative should account for that.

That being said, I can think of a few ways - the obvious one being to pass the signature along - which might seem rather tedious, but the compiler can quite easily infer it, and in cases of ambiguity the code won't compile. These are "solvable" problems. But API design restrictions imposed by the language itself are not.

I also don't get your point about how function overloading helps tooling.

  1. "Goto definition" for an overloaded function will get clumsier because of the candidate list.

  2. If the compiler does a lot of smart inference, a public API should be provided for tools to use that information. Otherwise very few tools will adopt it properly.

  3. Tools that rely on the uniqueness of {package name}+{function name} combinations will break.
    And I would not presume that it is trivial to fix all of those cases. It can be impossible to remedy with go fix.

I have a feeling that you underestimate the associated complexity of the whole picture.
Sorry if I am wrong, but things like "quite easily" or "Complexity level is low to implement it in Go"/"just another function with a suffixed internal name" are confusing.

Off topic:

But Go doesn't have classes, or similar polymorphism, and isn't as complex as C++.

My understanding is that C++ overload resolution is hard due to other reasons (templates and namespaces/dependent name lookup are better examples).

as already pointed out, this is not a fair solution, as it can break existing code that assigns a function by referring to it by its name.
The whole "versioning" part can be misleading, as overloading can be backwards-incompatible in unexpected ways (think of C code that calls your Go functions, as a less expected/popular example).

Can it be backwards-incompatible? Yes.
Should it be backwards-incompatible? Not really.

The key here is in how it's implemented. Let's think about what the variables are: only the function parameters. So, if we can come up with a way where the generated function names are tethered to the function parameters, this can be solved nicely. (I vaguely remember mentioning something along these lines in one of the comments before.) That said, I can imagine this being a big problem in a language like C where dynamic linking is extensive. In Go, the majority of use cases are static, and ABI compatibility can also be forgiving (though I don't mean that things should break).

Let's take an oversimplified example (oversimplified because it can't work as-is, due to other, yet-unsolved problems discussed in the comments above):

func DoSomething(in string)
func DoSomething(in int32)

And its internal implementation would look something along these lines:

func DoSomething__In_String(in string)
func DoSomething__In_Int32(in int32)

This shouldn't break C calling Go, or Go calling Go, or binary compatibility.

I also don't get your point about how function overloading helps tooling.

I think it's fair to just refer to C# here. The tooling around C# does exactly what I mention, and it's a language with some of the most stellar language services and tooling. (PS: It handles a lot more complexity than is needed in Go, as the language is far more complex.)

"Goto definition" for an overloaded function will get clumsier because of the candidate list.

Sorry, I really don't see how. Each reference is directly tied to one definition, or it's incorrect code that won't compile.

If the compiler does a lot of smart inference, a public API should be provided for tools to use that information. Otherwise very few tools will adopt it properly.

Possibly. While I'd like to explore how the "tax" from this can be reduced, I am fairly certain that in most scenarios the best approach is what you mention.

Tools that rely on the uniqueness of {package name}+{function name} combinations will break.
And I would not presume that it is trivial to fix all of those cases. It can be impossible to remedy with go fix.

No cases will break existing code. Go 1 tooling will remain compatible with Go 1 code. Hypothetically, if Go 2 implements function overloading, unchanged tooling just won't detect the new functions that use overloading, which will of course require updated tooling.

Sorry, I really don't see how. Each reference is directly tied to one definition, or it's incorrect code that won't compile.

Well, maybe that point is not very important for others anyway.

In very, very short form: in Emacs (or any other editor/IDE, actually),
M-x find-function foo.Bar can't work without an additional prompt if multiple foo.Bar exist. This boils down to the fact that you need two things instead of one to find the function: its name and the actual arguments (or their types).

@Quasilyte - Ah. Thanks for that. I was thinking only of goto-definition by pointing at a function use and navigating to the definition from there (F12 from the actual function usage in VS Code, for example).

Thinking of the scenario you mentioned, yes, an additional prompt would be needed when appropriate. Though most tooling-assisted editors I tend to use (VS Code, Gogland, etc.) have as-you-type fuzzy search anyway that narrows it down to the point where I wonder if this is even noticeable. (C# has it, and I never felt it hindered ease of use or speed of navigation. So do C++, Rust (traits), JavaScript, etc.) But yes, I suppose it might involve an extra key-press, and some people value that a great deal more than others - though I really wish one wouldn't state that as an argument against overloading.

@prasannavl, did you answer https://github.com/golang/go/issues/21659#issuecomment-325485157?
Please consider this case.
It's a technical detail, not subjective or "religious".

I am not a good advisor here, but maybe a technical design document would be a better argument than repeating ones that have proven not to work (not everyone will agree that "the API just gets better").
You may browse existing design documents for inspiration.

Can be useful: ad hoc polymorphism in the context of Haskell.
Not all kinds of polymorphism play well with each other.
If generics are desired, the two should somehow be designed together.

There is a backwards-compatible way to introduce ad hoc polymorphism into Go without some of the problems mentioned above, though.

Introduce a new keyword that defines an overloadable function.
Such a function cannot be assigned unless type elision is possible (see https://github.com/golang/go/issues/12854).
No existing code is affected.

xfunc add1(x int) int { return x + 1 }
xfunc add1(x float32) float32 { return x + 1 }

f := add1                           // Error: can't assign overloadable function
var f1 func(int) int = add1         // OK
var f2 func(float32) float32 = add1 // OK

func highOrder(f func(int) int) {}

highOrder(add1) // OK

xfunc is just a placeholder for a better (and new) keyword.

When an overloadable function is assigned, its type is concrete and it can't be distinguished from a normal function.
The only differences are in the way such functions are defined (their names are tagged and function info is stored in a separate data structure during compilation) and assigned.
There are no issues with "overloadable functions as parameters", because a value can't have an "overloadable function" type.

Overloadable functions share the same namespace as normal functions,
hence it is not possible to have f as both an overloadable and an ordinary function (a short sketch follows).
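For example, continuing with the xfunc placeholder above, such a clash would be rejected (hypothetical, like the rest of this sketch):

func add1(x int) int        { return x + 1 } // ordinary function
xfunc add1(x string) string { return x }     // error: add1 is already declared as an ordinary function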

If considered in the context of the original proposal's goals, there is a downside:
the programmer must know beforehand which functions may require overloading in the future.

@Quasilyte I'm not @prasannavl but the answer is obvious: a compile-time error. If type inference doesn't work, then the compiler should generate an error. Go has var [name] [type] syntax to solve that. No need for new keywords or anything like that. Existing code will continue to compile as long as you don't start adding overloads. And when something breaks, it will be trivial to fix by hand, though probably not something that can be fixed by tooling.

@creker, maybe it's my personal problem with the "compatibility" term and the "API versioning" solution.
If overloading is sold as a solution for that, why does it make it so easy to break code that depends on the library?

This is why a "compile error" is obvious but suboptimal, in my opinion.
Also, some languages do it another way and defer the error until there is no way to infer the actual type.
These cases should be part of the proposal to avoid misunderstanding.
Obvious things are no exception.

@Quasilyte type inference can be as sophisticated as you like, but when there's no other way, the compiler should throw an error. The comment that you mentioned is ambiguous without proper context and should not compile. How sophisticated the inference is should probably be specified in the proposal document. I agree with you on that.

About breaking clients' code: that's definitely something to think about and should be covered extensively in the proposal. Even a language like C# has problems with that. But in most cases such breakages say more about bad library design than about a problem with overloading. MS adds overloads with every .NET release and doesn't break anything, because if you design your API properly then adding an overload is not a breaking change.

if you design your API properly then adding an overload is not a breaking change.

That depends on how strictly you want to define “breaking change”. If someone, say, assigns a function to a variable, and later invokes that function by name, the two references may resolve to the same overload in one version and to different overloads in the next.

That pretty much implies that if you want a strong compatibility guarantee, you must prohibit users from ever referring to a specific overload, including by assigning it to a variable. The compatibility guidelines for Google's Abseil C++ libraries make that explicit: “You cannot take the address of APIs in Abseil (that would prevent us from adding overloads without breaking you).”

@bcmills you could still design your API to handle that. If your overloads are ambiguous, then it's your problem to make them compatible. Even if a reference resolves to a different overload, the client's code keeps working.

Say, you had one function

func foo(interface{})

Now you add an overload

func foo(io.Reader)

Obviously, client code that passed an io.Reader before will now reference the latter overload. It's your problem to retain compatibility between the two even if they're used interchangeably. If your overloads are so incompatible that you can't even manage that, then it's obviously an API design problem. (A sketch of the re-resolution is below.)
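A hypothetical sketch (non-compiling, since Go has no overloading) of how that re-resolution plays out at a call site:

package lib

import (
	"bytes"
	"io"
)

func foo(v interface{}) {} // v1: the only overload
func foo(r io.Reader)   {} // v2: newly added overload (not legal Go today)

func caller() {
	r := bytes.NewReader(nil)
	// Compiled against v1, this call resolved to foo(interface{}).
	// Recompiled against v2, the more specific foo(io.Reader) is chosen
	// instead: no error, but possibly different behavior.
	foo(r)
}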

Another solution could be at the language level: forbid referencing overloads without explicitly specifying the exact overload. In other words, f := foo will not compile in any case, even if you can infer the type. Also, forbid type conversion between overloads even if the arguments are compatible. In the case above, the two overloads have different and incompatible types, even though you can pass an io.Reader as interface{}.

@prasannavl, did you answer #21659 (comment)?
Please consider this case.
It's a technical detail, not subjective or "religious".

It just doesn't compile. If there's ambiguity, you add explicit signatures so it compiles. (But this does suffer from the same compatibility issue. More on that below.)

@Quasilyte type inference can be as sophisticated as you like, but when there's no other way, the compiler should throw an error.

@creker, I think @Quasilyte was referring to the case where a function that isn't overloaded is later overloaded, which could prevent code that compiled before from compiling. While this isn't really an issue when statically linking, it is a significant issue with dynamic linking.

Also, since you mentioned C#: while I think it's fair to compare tooling, I don't think it's a fair comparison at a deeper level. Since C# has IL code in the middle, it has more flexibility than Go can achieve.

maybe a technical design document would be a better argument

@Quasilyte - I do agree. I think significant feedback has been gathered in this thread, enough to start thinking about a technical document. I hope to find time soon to collect things from this thread, along with potential implementation possibilities, into a document. 22/30 for/against at the time of this comment. Hmm.

PS: While I'm quite obviously advocating FOR overloading, if it HAS to introduce a new keyword, that would personally tip the scales for me, as it defeats the outer-language simplicity enough for me not to pursue it.

@bcmills - now, coming to the compatibility issue - thinking out loud here, there is one approach to make sure overloading doesn't break compatibility, but it will break compatibility with existing Go. It's rather simple, but radical: change the internal representation of every Go function to include its parameters as part of the name.

E.g.:

func Hello(text string)
func Hello1(text string)
func Hello1(name string, text string)

Now compatibility is an issue only when Hello and Hello1 follow different methodologies for internal naming. But if you keep it consistent and change the internal representation of both to look like this:

func Hello__$CMagic__string()
func Hello1__$CMagic__string()
func Hello1__$CMagic__string_!_string()

This solves compatibility, but introduces slightly more complicated debug tooling (not debugging, but debug tooling): tools would then have to actively convert the internal representation back to the human-readable form Hello1(name, text).

This raises the internal-complexity cost more than I'd like, but it does solve compatibility.

  1. Any approach here will add significant complexity to the spec. We will need to define something along the lines of the complex C++ rules to choose which overload is desired. It won't be as complex as C++ but it will be complex. One of the key reasons that Go code is easy to read is that it is easy to understand what every name refers to. This proposal will lose that property to some extent.
  2. In code like f := F where F is overloaded you will need to explicitly state the type of f, which is unfortunate.
  3. In code that uses method expressions that refer to an overloaded method, you will have to specify the type of the desired method, but how? This is a long issue, but I don't see any good suggestion above for this problem (see the sketch after this list).
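A sketch of the difficulty in item 3, reusing the overloaded Regexp.FindAll proposed earlier in this thread (hypothetical, since such overloads are not legal Go):

// Which overload does this method expression refer to?
find := (*regexp.Regexp).FindAll
// It could be func(*regexp.Regexp, []byte, int) [][]byte
// or          func(*regexp.Regexp, string, int) []string.
// Some extra syntax, such as an explicit variable type, would be needed:
var findBytes func(*regexp.Regexp, []byte, int) [][]byte = (*regexp.Regexp).FindAll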

Proposal declined.
