Proposal: Less error-prone loop variable scoping (github.com/golang)
85 points by yujian on May 11, 2023 | 73 comments


I have to admit that, as a C/C++ programmer, I find this entire thing a bit bizarre.

In C and C++, of course a for loop induction variable is reused for each iteration. And if you take the address of the variable and keep it around for too long, you get to keep all the pieces and possibly the exploitable UAF vulnerability, too. (Never mind that, prior to C99, C induction variables were always in an enclosing scope, and this would be fairly obvious from reading the code.)

In Rust, code like the examples wouldn’t compile. If you want to copy a value to the heap and take a reference, you need to say so. And it will be quite clear whether you are keeping a boxed copy or whether you are keeping a reference to the original object. (And the object can’t mutate out from under you unexpectedly in either case — a non-mutable reference prevents mutation!)

So the fact that, in Go, one might rely on automatic promotion of an induction variable (which obviously gets mutated every iteration!) to the heap, and thus get confused about precisely which value is promoted to the heap, seems weird from my perspective. IMO one might reasonably argue that the problem is that this pattern works at all, not that it works in the way that people usually don’t want.
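To make the footgun concrete, here is a minimal sketch of the pattern in question (hypothetical code, under the current pre-proposal semantics):

    package main

    import "fmt"

    func main() {
        var fns []func()
        for i := 0; i < 3; i++ {
            // Every closure captures the same variable i, which the
            // loop mutates on each iteration.
            fns = append(fns, func() { fmt.Println(i) })
        }
        for _, f := range fns {
            f() // prints 3, 3, 3 under the current semantics, not 0, 1, 2
        }
    }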

edit: in C++, with range-based for, the induction variable is gone after each iteration. So taking a reference that outlives the iteration is invalid, and if you’re lucky, the compiler will warn. The confusing cases from the Go proposal are simply invalid in C++, not confusing.


It really is remarkable that Go has repeated all these mistakes, even with the benefit of having all of our field's history to learn from.

Loop variable scoping is not the only area where the designers of Go have failed to learn from past experience and instead opted for a design that lends itself to a more convenient implementation at the expense of exposing foot guns to the user: zero values are absolute dynamite, especially when combined with other language features like implicit zero initialisation of structs, multiple return values instead of sum types, or reflection-based magical JSON deserialization. When you have an empty string or a zero integer in Go, you can never be quite sure that the value you're holding is supposed to be what it is, rather than being an implicit zero value that snuck in at some point in place of a missing value.
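To make the deserialization point concrete, a minimal sketch (the Payload type is hypothetical): after unmarshalling, an explicit zero and an absent field are indistinguishable.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Payload struct {
        Count int `json:"count"`
    }

    func main() {
        var a, b Payload
        json.Unmarshal([]byte(`{"count": 0}`), &a) // explicit zero
        json.Unmarshal([]byte(`{}`), &b)           // field missing entirely
        fmt.Println(a.Count == b.Count)            // true: no way to tell apart
    }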

Go also fails to offer any facilities to aid programmers in writing safe and correct concurrent code, beyond some fundamentally superficial language features like syntax extensions for channel types and spawning tasks. Channels are difficult to use correctly, usually having to be used in conjunction with some other synchronisation primitive like a WaitGroup, and compose poorly with implicit zero values resulting in the need to define semantics for fundamentally nonsense operations [1]. The language offers no facilities for restricting mutation, indeed in this respect, it's even worse than C. It's too easy to write buggy, racy code in Go.
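As a tiny example of those defined-but-nonsense semantics from [1] (a sketch: a channel's zero value is nil, and operations on a nil channel are defined to block forever):

    package main

    func main() {
        var ch chan int // implicit zero value: nil
        <-ch            // receive from a nil channel blocks forever; the
                        // runtime aborts with "all goroutines are asleep - deadlock!"
    }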

There's no denying Go's initial allure, but in reality it fails to address in any systemic way the problems that have plagued the software development practice for decades. While other languages offer facilities for eliminating entire classes of bugs, all Go has to offer is an attitude that says "simply don't make mistakes." The whole language is a bitterly disappointing pile of missed opportunities. In the words of fasterthanlime, it's the billion dollar mistake all over again.

[1]: https://dave.cheney.net/2014/03/19/channel-axioms


> There's no denying Go's initial allure, but in reality it fails to address in any systemic way the problems that have plagued the software development practice for decades

I find comments like this illustrate perfectly the gap between those focused on language theory and those focused on building software.

Go is successful because it strikes a strong balance between what practical software engineering organizations need from a language, of which safety is only one constraint.

Languages also need to be:

* Easy "enough" to learn
* Easy to read
* Easy to write
* Performant
* Compile quickly
* Have strong tooling
* Have a strong standard library
* Provide good solutions for its problem domains
* Support large code bases with many dependents
* ...
* And be "safe"

Easy here is also influenced by what people already know, syntax and basic working methods need to be familiar to the vast majority of software engineers.

Go is successful because for many engineers it provides enough of all of the above with "enough" safety.

P.s. complex, difficult to understand languages incur more bugs than simple ones, even with additional safety.

PPS: If you think multiple returns and sum types solve the same thing, you've not thought very hard on the subject...


> P.s. complex, difficult to understand languages incur more bugs than simple ones, even with additional safety.

Do you have any info on this? My understanding of this has been that some types of complexity make code worse, and other types make code better. For example, ruby's metaprogramming has been a source of bugs because of its complexity.

On the other hand, haskell is "complex and difficult" in terms of having a powerful type system. In my personal experience, I've found that haskell code has far fewer bugs than go code, and a large fraction of the bugs I encounter in go would not have been written in haskell or rust, i.e. in a language with a more powerful type system.

Do you have any info on this link between "complexity" and bugs?

> Go is successful [...]

That's unrelated to whether it has systematic problems. As is much of your post to be honest. Like, yeah, javascript is successful. Bash is successful. No one will argue that those languages don't have systematic problems.

The parent poster wasn't saying no one uses go, but that people use go despite its systematic problems, and tbh your comment mostly doesn't argue against that thesis, just arguing that it's successful. Which sure, yes, it is, that's not really related to the parent comment's claim that it's a poorly designed language.


> Do you have any info on this?

There have been some studies, and while all methodologies have tradeoffs, go and clojure sit near the bottom of this list in bugs/commit: https://arxiv.org/pdf/1901.10220.pdf

Which is fascinating because they are very different languages whose unifying theme is simplicity.

But this actually fits well if you know another important fact: bugs occur along organizational lines. https://augustl.com/blog/2019/best_bug_predictor_is_organiza... If you've been around long enough, you realize that many, if not most, bugs come from not understanding someone else's code. Type systems aren't going to help you when you don't understand what the code they are "protecting" is doing.

> The parent poster wasn't saying no one uses go, but that people use go despite its systematic problems, and tbh your comment mostly doesn't argue against that thesis, just arguing that it's successful

But if go has so many problems why do people use it? Javascript, bash, and go have a unifying trait. They are useful, they solve a problem. But unlike javascript and bash, go wasn't and isn't integrated anywhere. No ecosystem forced people into go. They chose it, writ large. Why? Well, because languages are more than their type systems.


> all methodologies have tradeoffs, go and clojure sit near the bottom of this list in bugs/commit: https://arxiv.org/pdf/1901.10220.pdf

You have linked a study which, if you read the abstract, effectively claims "we tried to reproduce a study about bugs-by-programming-language, and could not. We have concluded their methodology was flawed".

I can't find information in your linked source that supports anything about type-system complexity resulting in more bugs.

> But if go has so many problems why do people use it?

If candy is bad for your health, why do people eat it? Surely people eating candy is proof it's good for your health. That's the argument you're making here.

We can show candy is bad for your health, and the comment you originally replied to provided a mix of reasons why they saw go as having problems and ignoring programming research. All of those are more evidence than what you've replied with, which is essentially "but go is popular", entirely ignoring their concrete points.

Anyway, if people choosing a language is proof that a language doesn't have problems, that surely means PHP, Java, python, C, etc have fewer problems than Go since they all are used much more than it.

Again, a language being good and people using it are not necessarily related. People eat candy despite it being bad for them. People code in Go despite it being designed like "C, but with GC"

Perhaps rob pike said it best:

> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt

As rob pike so clearly said, perhaps go is successful because it is intentionally a poorly designed language, intentionally one that eschews programming research in order to appeal to python programmers.


Rob Pike did not "clearly say" anything of the kind. You are badly distorting what he said. Maybe try reading it again without bias?

As for the rest of your post, you dismiss the study as flawed. Fine. But most of the rest is your candy analogy, which is also flawed. One could just as easily say that people like you like... whatever languages you like... in the same way that people like cocaine, because it makes them feel smarter than everyone else. Such "proof by analogy" doesn't say anything meaningful or arguable.

The one other thing you said is that "the language being good" and "being used" are two different things. But I have a different view of programmers than you seem to. They aren't toddlers with no self-control reaching for what's bad for them (or drug addicts either, in my flawed analogy). Programmers use languages because, despite their flaws, the languages are still better for their use case than the alternatives. You have to judge a language as a whole package - syntax, semantics, built-in libraries, third-party libraries, ability to find people who know it, help available on Stack Overflow, all of it. Go (and PHP and Java and C/C++) have significant market share because, for significant classes of problems, they still are better than the available alternatives.

I trust the people actually building stuff. They are the hardest people to fool.


> That's unrelated to whether it has systematic problems. As is much of your post to be honest. Like, yeah, javascript is successful. Bash is successful. No one will argue that those languages don't have systematic problems.

Right, bash and js have pretty significant problems, but they remain popular because they were developed as exclusive languages for platforms which became wildly popular; essentially they benefit from monopolies and network effects in a way that Go never has. The fact that Go is a popular language and still rapidly growing indicates that its issues are relatively minor compared with its advantages over other candidates (e.g., Haskell).

> On the other hand, haskell is "complex and difficult" in terms of having a powerful type system. In my personal experience, I've found that haskell code has far fewer bugs than go code, and a large fraction of the bugs I encounter in go would not have been written in haskell or rust, i.e. in a language with a more powerful type system.

Granted, but in my experience those languages trade off a lot of productivity in order to reduce bugs, and that leaves me a whole lot of time to debug Go programs and achieve a similar degree of correctness. Moreover, I often spend so much of my energy thinking about the type system in Rust/Haskell that I make silly mistakes that the type system doesn't catch (I've written a static site generator in Go and then in Rust, and I wrote a bunch of URL/filepath handling bugs in the Rust version that weren't present in the Go version--despite the benefit of hindsight--because I was focusing so much on the type system). Rust also makes it a lot more painful to create and use new integer types compared to Go (e.g., you have to define a ton of traits and even then you still have to wrap integer literals in MyIntegerType::new(0) and so on), so I see a lot of code that just passes around u64s as identifiers for different resource types and it's super easy to use the wrong u64 to index into the wrong resource collection.
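For contrast, a sketch of how cheap a defined integer type is in Go (names here are hypothetical):

    package main

    import "fmt"

    type UserID uint64
    type OrderID uint64

    func fetchUser(id UserID) { fmt.Println("user", id) }

    func main() {
        fetchUser(UserID(42)) // fine
        // fetchUser(OrderID(42)) // compile error: OrderID is not UserID
    }

(To be fair, untyped constants still convert freely, so fetchUser(42) would also compile; the protection only kicks in between typed values.)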

I'm not saying that Rust results in more bugs on balance (I don't think it does) but that Go wins overall for most ordinary application code (but not for systems code, or high performance code, or real time / mission critical code).


> Languages also need to be:

> * Easy to read

The motivating examples of this whole proposal are examples where one reads the code and the semantics are unclear.

> PPS: If you think multiple returns and sum types solve the same thing, you've not thought very hard on the subject...

The stereotypical example of multiple returns in Go is functions that return val, err when they mean that they return one of a value or an error. Which is what sum types do.


> The stereotypical example of multiple returns in Go is functions that return val, err when they mean that they return one of a value or an error. Which is what sum types do.

The point is that with real sum types, the compiler will enforce that you return exactly one of a value or an error, whereas in Go, you're just supposed to do so, and it's possible to write code that mistakenly returns both or neither.
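A minimal sketch of that hazard (the function and types are hypothetical); all of these shapes compile:

    package main

    import "errors"

    type Result struct{ Value string }

    func lookup(ok bool) (*Result, error) {
        if ok {
            return &Result{Value: "v"}, errors.New("oops") // both: compiles
        }
        return nil, nil // neither: also compiles
    }

    func main() {
        _, _ = lookup(false)
    }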


Yeah, but multiple returns are _used for other things_. So they encompass error returns _and other things_. Multiple returns by the way would be product types that are being automatically unpacked.

One of Go's core missions is to keep the number of concepts users need low. Multiple returns encompass the error use case and others.


Sure, and everything people do with sum types can also be done with plain C structs, and has been done like that for decades. But I think that saying that multiple returns (which are a sort of product type) “encompass” the error use case (logically a sum type) is overstating the case a bit. I would accept that returning errors can be achieved without sum types (and obviously is achieved without sum types in practice in the huge amount of code in languages that don’t have sum types).

But I would not personally design a new language without sum types or some other similarly strict solution for the kind of programming that will encounter errors.


I would suggest dialling the condescension down a bit mate.

There are many successful modern industrial languages out there that don't repeat the kinds of mistakes Go does. Kotlin is a strong example, and the popularity of TypeScript likewise shows that we can successfully and fruitfully apply innovations from modern theory to industrial practice.

There's a tonne of real software being built in both of those languages. You don't need to ignore decades of research to build a successful and productive language.


You wrote the post you wrote. And then you accuse zbobet2012 of condescension? Physician, heal thyself.

Don't be so condescending to assume that Go's designers ignored decades of PL research. They knew it far better than you do.

And don't be condescending to the programmers, either. They aren't sheep who are led astray by shininess and PR. If that many programmers are using Go, then many of them find genuine value in Go. It solves enough actual problems well enough for it to get used, a lot.


> Go is successful because it strikes a strong balance between what practical software engineering organizations need from a language, of which safety is only one constraint.

This is a laughable claim when you've seen other languages (which have resulted in billion dollar companies with low engineering burden) that have done much better than go at this.

Go is successful for other reasons, mainly that it plays into the biases of a certain class of developers and managers.


> Go is successful for other reasons, mainly that it plays into the biases of a certain class of developers and managers.

While there is definitely an element of that, I think the main reasons Go is successful are completely unrelated to the language itself. The main advantage of Go is that it is the one GC language with a slim runtime. Go programs are tiny compared to Java, C#, Python, Haskell etc. They start instantly. And they do so even if they are implementing an HTTP server or other non-trivial things.

So Go is excellent at something like containerized microservices, where you want to easily run dozens of containers on a single host, and rely on quick restarts for the occasional bug.


"The main advantage of Go is that it is the one GC language with a slim runtime"

I have worked in places (only startups, no big tech) that deployed go (okay, low n=2) and this was not a consideration.


> There's no denying Go's initial allure, but in reality it fails to address in any systemic way the problems that have plagued the software development practice for decades.

Eh, loop variable scoping has bitten me a single digit number of times in the eleven years I’ve been using it. Zero values are definitely more frustrating. Lack of Rust-like enums is another pain point.

But these are relatively minor compared to what Go gets right that other languages don’t—I would happily jump ship from Go the very moment something arrives that outperforms it for my use cases, and indeed I try out every language that I hear better-than-Go hype about.

So far Rust wins at systems programming, but even with the enums and lack of zero values I spend so much time fighting the language (not just the borrow checker) or implementing traits, etc—far more than I spend debugging nil pointers and such in Go. And frankly Rust is the only thing I’ve found that comes close. Go is just an extremely productive language even if it has a few sharp edges (after all, people still write really important code in C, C++, or even fully dynamic languages—compare that with zero values!).


> Eh, loop variable scoping has bitten me a single digit number of times in the eleven years I’ve been using it.

Go ahead and ask them to remove this from the Frequently Asked Questions section, then.

https://go.dev/doc/faq#closures_and_goroutines


Its presence in the FAQ indicates that a lot of people have run into it, not that people commonly repeat the mistake. What exactly do you think we're debating here?


This is a very one sided view. It reads like you have not familiarized yourself much with the internal reasoning behind these decisions. You can disagree with the balance that has been struck, but you don't appear to be aware of any balancing at all.


In Go, you can take the address of a local variable and return it. It will be allocated on the heap automatically.

This is weird for C++ programmers that think about the stack vs heap, but perfectly natural for Go. In Go, stack allocation for variables is a compiler optimization, not part of the language semantics. Go also only uses scope for visibility, not RAII.
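For example, this is perfectly idiomatic Go (a minimal sketch):

    package main

    import "fmt"

    // Returning the address of a "local" is fine; escape analysis arranges
    // for n to live on the heap because its address escapes the function.
    func newCounter() *int {
        n := 0
        return &n
    }

    func main() {
        c := newCounter()
        *c = *c + 1
        fmt.Println(*c) // 1
    }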


I think the weirdness there is not with returning a local variable (C++ allows you to do that too, via RVO and NRVO), but the idea that you'd return a local variable by address.

That's fundamentally very weird from any systems background, and suggests a language design error: if people are returning pointers to "locals" because returning by value does an implicit copy, then maybe it shouldn't have been specified to do that.


You are coming from a C++ background too much. In the semantics of Go (and many other GC-based languages) there is no such thing as a local object. All objects in general are allocated in the same place, and you can always safely share a reference to an object (from a use-after-free perspective, not parallelism etc). So there is nothing weird, at all, about returning the address of an object you created locally in a function.

That the runtime can allocate objects on the stack instead is an optimization, not a part of the language semantics. Letting the reference escape the enclosing function does not "automatically promote the object to the heap", it simply prevents the optimization above.

Also, none of this has anything to do with the problem of taking the address of loop variables. The confusing thing there is not the lifetime of the loop variable per se. It is the fact that the range-based loop variable is mutated to contain a copy of the next element in the range at the next iteration. It doesn't work like the C++ range-based loop, where the range_declaration is considered part of the loop body, for example.
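To illustrate that mutation semantics under the current (pre-proposal) range loop, a minimal sketch:

    package main

    import "fmt"

    func main() {
        vs := []string{"a", "b", "c"}
        var ps []*string
        for _, v := range vs {
            ps = append(ps, &v) // v is one variable, overwritten each iteration
        }
        for _, p := range ps {
            fmt.Print(*p, " ") // current semantics: "c c c", not "a b c"
        }
    }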


> All objects in general are allocated in the same place, and you can always safely share a reference to an object (from a use-after-free perspective, not parallelism etc).

Every other language I can think of that does this spells it in a materially different way, kind of like this (intentionally not matching any particular language):

    x = func();
    y = x;
Now x and y are references to the same thing, each one is sufficient to keep the referent alive, and they are on equal footing.

This includes every GC language I’m familiar with except Go. It also includes C++ when using any sort of pointer (including smart pointers) and even plain references (with the obvious caveats about returning a reference to a local variable being nonsense in C++). It includes Swift with ARC. And it includes functional languages, pure and otherwise. If you return a (correct) reference to an object that would otherwise be local, you are returning the same type as the original local variable.

I think go is unusual in that it looks more like this:

    x = func();
    y = &x;
Here x has type int (or whatever). y has type *int. Yet x keeps that integer alive, and so does y. x and y refer to the same thing. &x and &y are not equivalent.

So I think woodruffw is reasonable to find returning the address of a local variable weird coming from pretty much any background.


The one thing that Go supports that I think no other GC language does is the ability to create a reference to a value at will.

However, similar things can be done in C# for example by capturing a local variable in a closure, even if that variable is of a value type (struct). The closure then becomes something similar to a Go reference to that value, and you are still free to return it with no UB or other problems.

Either way, if your mental model is adjusted to the GC memory model, there is nothing special going on here. The weirdness comes only out of tying an object's lifetime to its scope. Scope is irrelevant to lifetime in GC languages, where lifetime is tied to (live) references to that object. So, if a reference to an object "escapes" its scope, that object will live beyond the scope where it was originally declared - nothing strange about that.


I don't agree and my job requires that I program in C++. It's different, yes, but it's only strange if you learn about C or C++ first.

Whether you agree or not, I definitely think that promoting loop variables to be heap allocated is consistent with the rest of Go.


I don't disagree that it's consistent! Consistency and weirdness are separate dimensions; it's possible to be consistently weird, and IMO Go's pattern of returning local-looking-but-actually-heap addresses is pretty weird, especially when you consider that they could have made return-by-value a transparently optimized move instead.


There is no such thing as "local-looking" in Go. That's the point -- the compiler and runtime make the decisions as needed and the programmer doesn't care.

(Obviously there are some consequences to this, wherein you may find yourself with heap-based allocation in hot paths that you didn't expect. But in terms of productivity, I think that not caring and having no distinction certainly reduces one thing that the programmer has to think about.)

I think there would be analogous design decisions that remove various low-level concepts in many languages. For example, Python ints are arbitrary precision, which removes a whole set of concerns from the programmer. Rather than being "weird", I think this is just optimizing for a particular kind of programmer productivity.


It's only weird because you are expecting a local variable to not be on the heap. If local variables were always on the heap, you wouldn't notice. That's precisely what is happening here.


Sorta related: I was very surprised that Rust supports variable shadowing.

But this blog article satisfied me: https://ntietz.com/blog/rust-shadowing-idiomatic/


> So the fact that, in Go, one might rely on automatic promotion of an induction variable (which obviously gets mutated every iteration!) to the heap, and thus get confused about precisely which value is promoted to the heap, seems weird from my perspective.

Automatic promotion is not that big of a factor; fundamentally this is a common issue with closures over mutable bindings, and similar issues can be observed in other languages. Most (but not all) have mitigations already because it’s such a common issue, especially around loops: e.g. C# breaking-changed the semantics of range loops in 5.0, and javascript added “let” and “const”, which have block and per-iteration scoping, …

C, C++, and Rust don’t suffer from it (ish) because one doesn’t have closures and the other two require more explicit management of the capture and its lifetime.

C++ could have something similar with capture by reference but the code would probably not be straightforward, and the more straightforward code would have much bigger issues of dangling references.


After coding in higher level languages the notion of routinely writing for loops like I used to do in C now seems so backwards. Iteration in general should almost always be pushed down into language constructs or libraries.


How often do you access your loop variables asynchronously?


Rarely, if ever. But aliasing and unexpected mutability don’t need asynchrony or threading to be a problem.

For example, the entire problem leading to this proposal is about unexpected mutability. You have a variable i, which presently has a specific value. You take a reference to it, and you give that reference to some other code that saves it. Then you mutate i, and you are suddenly surprised that the external code now has a reference to the new value.


In most cases the reference is hidden in the context of a lexical closure. And arguably it's not about unexpected mutability, but the context of mutability. Nobody is expecting `i' to be immutable, but in the context of a for loop it's demonstrably the case that most people intuitively expect `i' to be scoped to the inner block and not the looping statement itself; and that intuition is so strong that even people who thoroughly understand the implementation details still sometimes run afoul of the issue when mechanically banging out code.

One can disagree with the wisdom of lexical closures, as opposed to other languages which require explicit declarations when binding variables or capturing values, but ultimately it's a tradeoff and lexical closures are a key architectural construct in Go. In this case, the implementation details resulted in poor cost/benefit, and tweaking the semantics of loop variables restores the cost/benefit in the context of Go's calculus.


Yes, but the reference isn't explicitly created. Rather, the reference is implicitly captured by a goroutine, so this happens in Go much more often than other languages because goroutines are a core feature of the language.


I’m not sure what you mean.

In the OP, the examples involve closures that capture an induction variable. I don’t think goroutines per se are involved. (Although, again, in C++ you had better not capture a reference and then have the closure in question outlive the referent, and you need to literally type & to get a closure to capture by reference. In Rust, references created by closure follow the same rules as any other reference.) And the earlier proposal:

https://github.com/golang/go/discussions/56010

gives an explicit reference (with an &) as its motivating example.

So I still believe that the whole issue being addressed is bizarre from the perspective of a C, C++ or Rust programmer.

(I’m having trouble thinking of another language that promotes a local variable to be a heap-allocated slot when one explicitly takes a reference to it. Most GC’d languages have local variables that are, themselves, references to the heap such that the referent (the heap slot / box / whatever you want to call it) will outlive the variable if the reference is copied. Pure languages may not strongly distinguish between a value and a reference to a value because the referent can’t change, so a copy is as good as the original. I think that Go is rather odd here, but I could easily be missing something.)

(P.S. C++ also extends the lifetime of temporaries if a const reference is taken, but only to the end of the scope. This is too magical for my taste. At least the heap isn’t involved.)


It’s true that the variant of the issue involving explicit references seems pretty unique to Go. But there are plenty of languages that have the variant of the issue where a closure implicitly captures a local variable by reference. JavaScript used to have it until `let` was created (and still does have it if you don’t use `let`). And I ran into it recently in some closure-heavy Python code I wrote. (Python makes it relatively ugly to work around, too.)


> this happens in Go much more often than other languages

I mean, not really; a big reason for let and const in javascript was this specific issue.


Definitely a good change, as this fixes an issue I (and I think most Go developers) have been bitten by more than once. It was really interesting to read the response of someone from the C# team in the original discussion, of how C# had gone through almost exactly the same change and how that went down: https://github.com/golang/go/discussions/56010#discussioncom...


It is so useful to compare analogous experiences from other languages, and I find it's not done often enough. For instance, I feel like ruby and python could probably learn a lot from each other, but it can seem like there just aren't enough people (and I'm not one either) who are sufficiently expert at both to identify the lessons.


Yup, not a Go developer, but I ran into exactly this years ago when I wanted to get a feel for it. Took a while to figure out what was wrong!


Not exactly. C# never went through this change for C-like for-loops.


The slightly less obvious story here: this is a breaking change to Go, and they’ve managed to find a policy whereby such a change is permissible. I think that’s the more substantial development, though this is quite an ugly wart and I’m glad it’s off.


This doesn’t sound that different from e.g. Rust’s Editions, though it seems like it can occur much more frequently


I don't really know much about Rust editions, but what little I do know sounds at least similar. Here's the issue where they changed the backcompat policy: https://github.com/golang/go/issues/56986


Yeah, I like this change, but I would rather have seen it in a Go 2.0.


"Go 2" will most likely never happen. It was always a placeholder for "yeah, this might be a good idea, but we can't do it now as it would break compatibility".


Agreed. The change (especially to for-range) is a good one, but IMO they shouldn’t be breaking the Go 1.x Compatibility Promise in a Go 1.x release.


I would rather not have this.

The worst thing is: the old code still compiles under both the new and the old version. There is no error or warning.

Updating the version looks so innocent. Nobody will be able to catch this error.


I can't tell if you've read the issue based on your comment, but I recommend it. They go to great lengths to address this concern.


Ahh - I hit this exact problem just last week. I was taking the address of the loop variable while ranging over a slice, and was surprised when my resulting slice was full of pointers to the same thing. I wasn’t aware of the v := v idiom.
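For anyone else who hasn’t seen it, the idiom is just a per-iteration shadow copy (a minimal sketch under the current semantics):

    package main

    import "fmt"

    func main() {
        vs := []int{1, 2, 3}
        var out []*int
        for _, v := range vs {
            v := v // shadow the loop variable with a fresh copy per iteration
            out = append(out, &v)
        }
        for _, p := range out {
            fmt.Println(*p) // 1, 2, 3 as intended
        }
    }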

Fortunately it was pretty obvious what was going on, but certainly the behaviour wasn’t what I expected while I was “in the flow”.

Super happy to see these little cases getting fixed!


The proposal has two parts: one is to change the semantics of "for-range" loops, the other is to change the semantics of traditional C-like "for;;" loops.

The first part is good and will fix your problem. However, the second part will produce some surprises for many people.


I hadn't seen that, but I've since read the comments from rsc and others and I'm curious why you think it will be a problem?

From what I can gather (I'm too busy/lazy/both to dive into the formalities), the for;; loop will have a new variable allocated at the start of each iteration, similar to the proposed range loop behaviour change. The strong argument for this is that you'd expect both loop forms to behave similarly; if they were to remain different, which would be the case if they didn't make this change, then the principle of least surprise would be violated.

However, reading the followup comments, some people seem to have missed that the value of the variable at the end of one iteration will be used to re-initialise the new variable at the start of the next iteration. This seems to mean that the usual semantics of a for;; loop will not change, other than if you're doing some really strange stuff, which doesn't seem to be the case when reviewing large existing code bases.
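A rough sketch of my reading of that, written out by hand (this is not the official desugaring from the proposal, just an illustration):

    package main

    import "fmt"

    func main() {
        // Roughly equivalent to: for i := 0; i < 3; i++ { ... }
        iCarry := 0
        for {
            i := iCarry // fresh variable each iteration, so &i differs
            if !(i < 3) {
                break
            }
            fmt.Println(i, &i)
            i++        // the post statement operates on the fresh copy
            iCarry = i // its final value seeds the next iteration
        }
    }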

I had misgivings about the behavioural changes being triggered by the Go version number in go.mod, but rsc makes a good argument that any dependency change carries some risk, intentional or not.



Yeah - but I agree with rsc, it’s easy to come up with constructed examples of behaviour changes, much harder to come up with examples you’d find in production code.

I mean - maybe you feel differently, but if I saw that code in a PR, I would reject it. It’s way too tricky for me.


:)


I wish Go also had this:

    for v := range vs {}

instead of this:

    for _, v := range vs {}
How often do you need to have both the element and its index? I'd say it's not the majority of cases. And now you have to type "_," in 99% of loops.

That would be a far more serious breaking change, however. Although, in practice, it would only affect integer lists (i.e. the code silently compiles). For other types, the compiler would complain about type mismatch errors and refuse to compile (which is better than nothing).


> How often do you need to have both the element and its index? I'd say it's not the majority of cases. And now you have to type "_," in 99% of loops.

Needing either the index or both is likely a lot more common in Go than in other languages owing to the lack of any sort of iterator or composition. So e.g. any sort of zipping or in-place mapping requires the index.
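For instance, in-place mapping has to go through the index, since the ranged element variable is only a copy (a minimal sketch):

    package main

    import "fmt"

    func main() {
        xs := []int{1, 2, 3}
        for i := range xs {
            xs[i] *= 2 // mutating xs[i]; a ranged v would be a throwaway copy
        }
        fmt.Println(xs) // [2 4 6]
    }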

> Although, in practice, it would only affect integer lists (i.e. the code silently compiles).

Not so. It would also affect strings (runes are integers), as well as maps with the same key and value types. Plus it could affect every other case depending how the item is used.


>Needing either the index or both is likely a lot more common in Go than in other languages owing to the lack of any sort of iterator or composition

In our codebase, there are very few instances of needing the index. Usually you just want to iterate over a list and apply some function(s) to the elements or add them to another list.


I have the same feeling as you.

Go indeed has

    for v := range vs {}
But unfortunately, here v is the element index, not the element.


Many people think the change will only be applied to "for-range" loops. However, in fact it will also be applied to traditional C-like "for;;" loops, which will produce at least two surprises for old Go programmers: https://github.com/golang/go/issues/60078#issuecomment-15443...


What surprise is that? It would appear that no one will be surprised because the scenario necessary is never actually used anywhere. It doesn't exist.

For a change to be breaking it needs the potential to be able to break. As the potential approaches zero, so too does the coefficient of breakage.


One surprise is that the following loop will print false (today it prints true).

    for i, p := 0, (*int)(nil); p == nil; print(p == &i) {
        p = &i
    }
Another surprise is that the array a in the following loop will be copied twice in each iteration (today it is never copied).

    func foo(constFunc func(*[100000]int)) {
        for a, i := [100000]int{}, 0; i < len(a); i++ {
            constFunc(&a)
        }
    }


That first example seems rather artificial; I can't recall I've ever had to do anything like that. What are real-world use cases? That second example seems like a rather convoluted way of looping 100000 times.


They are just demonstration code. Similar code in production would certainly look different.


There is no such code in production, which is obvious if you read the proposal in good faith. If you can find a real-world use case that is impacted, the authors are keen to understand it.

The examples you share are in the proposal discussion, so there is little value in rehashing your apparently bad-faith argument here. It's not a breaking change if nothing breaks.

Go is and has always strived to be pragmatic. This is an incredibly pragmatic approach to solving a real problem. Being pedantic about this actually breaking a theoretical program which either makes no sense or doesn't exist does not add value to a proposal which already addresses this concept very well from the onset.


What would that look like then? How often is such code written?

Obviously you can write it like that, my point is that it's not clear to me people actually are writing code like that. "If a footgun is never encountered then is it really a footgun?"


This is not a problem of frequency. This is a problem of what is possible. Language design is a serious thing.


Every language makes it possible to do something potentially unexpected, which is also rather subjective because different people have different expectations. None of your examples are "broken" or exhibit inherently undesired behaviour. So it's very much about frequency: how often do people expect A? And how often do they expect B?


Your logic is so weird. Done here. :)


I run into this maybe one time in ten, but it's annoying for those who haven't seen it and aren't using tests. It's really good to see focus on this; I just hope they resolve it well with the Go version mechanism.



