C++ Coroutines Do Not Spark Joy (probablydance.com)
148 points by ingve on Nov 1, 2021 | 250 comments


There were three competing coroutine designs in front of the standards committee: stackful coroutines, from Boost; C#-like stackless coroutines from MS; and zero-overhead stackless coroutines initially from chriskohlhoff (but then picked up by many others).

After a lot of back and forth, the committee decided to pick the proposal (i.e. the MS one) with the most complete implementation and the less ambitious scope. Those are both good qualities, but we ended up with a very complex, hard-to-use design that requires unspecified compiler magic to remove all overhead.

Interestingly enough, the committee recently went in the opposite direction for networking, rejecting the battle-tested ASIO, again from chriskohlhoff, in favor of a yet-unproven design initially from Facebook, now NVIDIA.

edit: personally I'm a huge fan of stackful coroutines, but I could have lived with zero-overhead stackless coroutines. The current design is IMHO the worst solution. Then again, sometimes it is better to ship something than argue forever.


I've been on the fence for a while about coroutines, but I have to say that libraries like folly coro do a very good job at hiding all the complexity (the library itself though is somewhat impenetrable).

Facebook is very invested in coroutines, all major internal libraries at this point have a coroutine interface (in particular Thrift auto-generates coro client interfaces), and they have enjoyed wide adoption, way beyond the small cabal of C++ gurus.

Personally what sold me is cancellation handling. It really is magical.

Are your concerns about frame allocation? Is this something that the nanosecond-latency crowd particularly cares about? :)


For networking, allocating the frame is perfectly reasonable because you need to allocate some sort of control structure anyway and you can do it from a custom allocator. In fact I think that stackful coroutines are probably a better fit.

My issue is stackless coroutines would be perfect for generators but the required allocation kind of kills it :(.
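
For what it's worth, here is a rough, non-production sketch of what such a generator looks like with C++20 <coroutine> (the generator type and fibs are made up for the example); every call to fibs() needs a coroutine frame, which is heap-allocated unless the compiler can prove the allocation can be elided (HALO):

    #include <coroutine>
    #include <cstdio>
    #include <exception>
    #include <utility>

    template <typename T>
    struct generator {
        struct promise_type {
            T value{};
            generator get_return_object() {
                return generator{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() noexcept { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            std::suspend_always yield_value(T v) { value = std::move(v); return {}; }
            void return_void() {}
            void unhandled_exception() { std::terminate(); }
        };

        std::coroutine_handle<promise_type> h;
        explicit generator(std::coroutine_handle<promise_type> h) : h(h) {}
        generator(generator&& other) noexcept : h(std::exchange(other.h, {})) {}
        ~generator() { if (h) h.destroy(); }

        bool next() { h.resume(); return !h.done(); }   // run until the next co_yield
        T current() const { return h.promise().value; }
    };

    generator<int> fibs() {          // the frame holding a and b is the allocation in question
        int a = 0, b = 1;
        for (;;) {
            co_yield a;
            int next = a + b;
            a = b;
            b = next;
        }
    }

    int main() {
        auto g = fibs();
        for (int i = 0; i < 8 && g.next(); ++i)
            std::printf("%d ", g.current());   // 0 1 1 2 3 5 8 13
    }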


> In fact I think that stackful coroutines are probably a better fit.

My experience with fibers is that they're stack overflow generators. Presumably you want lots of them running concurrently, so you'll need a small stack size, and now you have to be extremely careful about what code you run in them as it's very easy to run out of stack.

You can't rely on overcommit because if you actually end up touching the pages you're stuck with them.

> My issue is stackless coroutines would be perfect for generators but the required allocation kind of kills it :(.

That's true, though it depends on the granularity of the operations you want to run in your generators. Ranges+algorithms should cover most of the simple/small cases?
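
For instance (a small sketch assuming C++20 <ranges>; nothing here is specific to the libraries discussed above), a lazy sequence built out of views needs neither a coroutine frame nor an allocation:

    #include <cstdio>
    #include <ranges>

    int main() {
        // Lazily produce the first five squares; no coroutine, no heap allocation.
        auto squares = std::views::iota(0)
                     | std::views::transform([](int i) { return i * i; })
                     | std::views::take(5);
        for (int x : squares)
            std::printf("%d ", x);   // 0 1 4 9 16
    }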


You probably have more experience than me on this. I play with coroutines in my free time, but at $WORK it's all state machines or explicit continuations (I want to avoid the hammer/nail problem).

Having said that, I think that stackful coroutines can be no worse than stackless coroutines for stack usage. There are multiple options, including segmented stacks (which do have performance and ABI implications, of course).

My preferred solution would be to have first-class continuations and have the compiler convert them to stackless coroutines if the calling continuation never escapes the top-level frame. In fact even yielding from inlined, non-recursive functions would still allow the conversion to be performed while giving better ergonomics than the current implementation. The trick here is how to extend the language in such a way that this can be a guaranteed optimization instead of QoI. If this can be made to work, I think the same exact optimization could remove the frame allocation for stackless coroutines. Unfortunately I suspect this might require adding the equivalent of Rust lifetimes to C++.

You can still avoid overflows even in a pure library implementation without compiler help by switching to the main thread continuation (and its stack) for any stack-consuming operation (or any operation that is known not to context switch). This can be extremely cheap, just an additional register swap over the normal function calling convention.

YMMV.


Well, sure, you just need a sufficiently smart compiler :)

> You can still avoid overflows even in a pure library implementation without compiler help by switching to the main thread continuation

Yes in fact that's what we had to do with fibers (see for example folly::runInMainContext()) but that means that all code that may be called from a fiber needs to be fiber-aware, which is not ideal.


In my experience, you overcommit virtual address space and have the compiler generate max stack size information. You then work to remove giant allocations. Not ideal, but the price for bespoke speed and debuggability.


ASIO is a horrible godawful mess, in this I agree with them.


I disagree. ASIO has been (mostly) intuitive for me to use. The hardest part is that it doesn't have great documentation and that it sometimes changes paradigms (e.g. the recent change in executors has been super annoying and led to problems in migrations). But of course it really depends on how you think when you write software. If you don't really think asynchronously then you're going to have trouble.


I can't agree with this. If you deviate from the ASIO examples even a little bit it blows up in your face spectacularly, and I sunk a lot of time into reading the implementation to see how it works.

They should have stayed away from ASIO. It looked like they were going to, for a while, and it looked like it was going to be better than Boost's implementation. But here I find out that was never the case and the portions that needed to be filled in required compiler support.


To clarify -- there's some stub of a new Boost ASIO kicking around that's good, or the unfinished C++ standard stuff looks OK.

But straight ASIO is pretty bad.


Any supposedly high-performance async abstraction is almost necessarily going to be complex and flexible, otherwise people will keep using the low-level system API.


I doubt you will like S/R then.


Eric Niebler recently switched from Facebook to NVIDIA. I wonder if that has anything to do with it.


Yes of course, he is the main author of the S/R proposal.

Edit: btw, I'm a huge fan of Eric's work and he was my GSoC mentor back in '06. It is just sad that we got into an either-or situation and we couldn't get the Net TS into the standard.


Since when is the Net TS dead?


They talked about it in the latest cppcast episode. It's not dead officially yet, but it's dead.


Would you mind giving me the TLDR on no overhead? Seems impossible to me


Why does it seem impossible? You can easily implement zero-overhead coroutines yourself with macros and Duff's device. They are just cumbersome.
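
For the curious, a minimal sketch of that trick (the macro names and the Counter type are made up; real libraries such as protothreads handle the edge cases more carefully). The "frame" is an ordinary struct the caller owns, so there is no hidden allocation:

    #include <cstdio>

    // Resume by switch-ing on a saved line number (the Duff's-device trick).
    // Limitation: at most one CO_YIELD per source line.
    #define CO_BEGIN(state)        switch (state) { case 0:
    #define CO_YIELD(state, value) do { state = __LINE__; return (value); case __LINE__:; } while (0)
    #define CO_END()               }

    struct Counter {
        int state = 0;   // where to resume; 0 means "not started yet"
        int i = 0;       // "locals" live in the struct, not on the stack

        int next() {
            CO_BEGIN(state);
            for (i = 0; ; ++i)
                CO_YIELD(state, i);
            CO_END();
            return -1;   // not reached
        }
    };

    int main() {
        Counter c;
        for (int k = 0; k < 3; ++k)
            std::printf("%d\n", c.next());   // prints 0, 1, 2
    }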

Chris's proposal was "just" syntactic sugar over that. He went so far as to produce a working implementation on a clang branch. The most important difference from what was eventually standardized is that the coroutine object is a non-type-erased value type (basically very similar to a lambda, and in fact an extension on top of it). The big issue is that it was very hard to use safely, as the address of local variables could potentially change between invocations if the coroutine is copied around.


C++ has already got to the point where teams inside my company can't even talk to each other properly (it's R&D and very heterogeneous, so not that bad), but this doesn't happen with any other language. It's ridiculous.

The newcomers struggle so much to learn the language, and tbh I just say to them to learn the sane 5% of the language and ignore the rest. It's a monstrosity, extremely hard to bring up and generally not nice to read nor write.

I really wish a good, well-maintained, modern (as in, actually break stupid compatibility, just fuck it. It's 2021 for God sakes) C++ existed.


> I really wish a good, well-maintained, modern (as in, actually break stupid compatibility, just fuck it. It's 2021 for God sakes) C++ existed.

As a C++ programmer I would say this language is Ada 2012 (see https://pyjarrett.github.io/programming-with-ada/four-months...). It's hard to get people past the initial shock of the Pascal family syntax, but once you do, you'll find it ludicrously close feature-wise to the base C++ feature set that gets used day-to-day (RAII, templates, etc.) but with a Pascal skin. It also comes with built-in concurrency.


> Pascal family syntax

I have fantasies of doing a "new" programming language which is literally a lexical translation of Ada into curly-brace syntax and calling it Curla. I wonder if anyone would notice that it was just Ada!


This has already been done (http://www.adapplang.com/index.html).


Which is actually an April Fools' joke. They only bothered to replace begin/end with {/} and "is" with ":", which results in delightfully asymmetric examples like this case statement:

  case Foo:
    when Bar => ...
  }
They also eliminated the need to both "with" and "use" if you use a package (the only positive change suggested on that site), and added some abbreviations like pkg and priv.


>It's hard to get people past the initial shock of the Pascal family syntax

What's so shocking about Pascal?


I don't think it's a matter of shock so much as people parroting the manufactured corporate resistance to Pascal that sneakily came from AT&T (birthplace of C, C++, and Unix). Being "anti Pascal" or feeling its syntax is "alien" was something embedded into the C family language culture, to the extent that some people act like their brain might explode if they can't have a language with curly braces, or that any new language should go in that direction.


People throw their hands up because there's `begin...end` instead of curly braces.


If it was just that small difference, I would have picked up C decades ago, instead of despising it. There are all the .h files fractured off randomly from the source files, littering the landscape; the make system which never quite seems to work; the abuse of case sensitivity as a feature; the stupidity of uncounted strings ending with (maybe) a null; the abuses of macros which make things unreadable; and then the pointers that are never quite explicit... char* or *char, it never, ever made sense to me. In Pascal, it's simple... @x is the address of X, x^ is what X points to. In Pascal, you pass the value of an argument by default, or optionally you can pass it by reference, and the syntax doesn't change, and you don't have to play with the footgun that is pointers.

But sure, using the reserved words BEGIN and END at the bounds of a block of code breaks people's brains ;-)


I thought Rust was that better C++. It doesn't help existing large projects because nobody will ever rewrite them. However, if you're starting a C++ project in 2021 and have the option of using Rust, you might want to think twice.


There certainly are C++ programmers who express this sentiment. e.g. https://www.thecodedmessage.com/posts/hello-rust/

On the other hand, if the thing you don't like about C++ is that it keeps changing you may be unsatisfied in Rust too because Rust has in some ways an even faster pace of changes. Rust as it was written in 2015 still compiles on a modern Rust compiler (with edition set to "2015" and modulo any security issues) but it is no longer idiomatic Rust.


I think the OP is not complaining about change. Arguably, change is inevitable. He seems to dislike trying to keep backwards compatibility at the cost of sanity and usability, and the fact that C++ is an ever-growing pile of features that doesn't make sense anymore.

This is precisely what Rust editions are there to solve: your legacy code will work, but the language will change with each edition to reflect the state of the art, and if needed, it will break backwards compatibility.


While I agree that Rust changed a lot, it was mostly fixing edge cases (like borrow checker limitations or associated types not being allowed to have generics) or obvious missing features in the language (const fn, const generics, async await) that have been known for a long time. Most of these have been implemented since then (at least in MVP form) to the point where there's barely any new ideas anymore and most old ones got implemented. So Rust really feels like it's nearing a "point of completion". You can see this with the Rust 2021 edition as well that contained very few changes.


> Rust as it was written in 2015 still compiles on a modern Rust compiler... but it is no longer idiomatic Rust.

The changes since the 2015 edition have all been relatively minor in terms of idiomatic code, I think. I mean sure, you can use the '?' operator instead of doing a manual match and return, but most of the changes have just been making the compiler more permissive in terms of what it accepts.

The only new concept to be learnt would be async-await, and that should be familiar to most from other languages and only applies if you're doing IO.


Today an idiomatic function that twiddles some foozles has a signature like this in Rust:

fn twiddle(foozles: impl IntoIterator<Item=Foozle>)

But in 2015 you couldn't have written that. It works in the 2015 edition today, sure enough, but it would not have worked in any 2015 Rust compiler, and so it was not idiomatic Rust in 2015.

Instead in 2015 you'd have either chosen to specify what sort of container the Foozles live in, or, if you believe freedom to choose different containers is important to your users, you have to ask the caller to Box up an Iterator over their Foozles so you don't need to care what container they used.

so e.g.

fn twiddle(foozles: Vec<Foozle>) // Hope your foozles are in a Vector or else you'll need to write an adaptor function

or

fn twiddle(foozles: Box<Iterator<Item=Foozle>>) // Now we're incurring a heap allocation


In 2015 Rust you could write:

    fn twiddle<T: Foozle, IterT: IntoIterator<Item=T>>(foozles: IterT)
which I believe would still be considered idiomatic today. (this assumes that Foozle is a trait. If Foozle is a struct then you can simplify this). You can also now use the impl syntax you posted but it's just sugar and it's less flexible than the older method.


That's how I used to write it in 2018, before I discovered the `impl` keyword. It's the obvious way of doing it. (Though I'd've used `Iterator` rather than `IntoIterator` because I came from Python. (Yes, I know `IntoIterator` is the version best matching Python's behaviour.))


Note that impl is not exactly equivalent. If the type passed in is ambiguous, with an explicit template the caller can disambiguate, but with impl there's no template argument.

So sometimes you still want to use the "old" syntax in your public APIs, even when you could be using an impl trait


For what it's worth, I suspect most Rust programmers would consider twiddle(foozles: &[Foozle]) to be the most idiomatic option for general use both in 2015 and today.


Mmm, but not everything we might iterate is actually eligible to be a slice.


The only thing new in Rust 2018 is the impl Trait in argument position.

Generics and trait bounds are in the language from the start.

So you can write this function like this:

  fn twiddle<T: IntoIterator<Item = Foozle>>(foozles: T)
Which is 100% idiomatic today and is even preferred by some people over using impl Trait


impl Trait as the return type actually added quite a bit more than in argument position. In particular, it allows you to return a lambda without boxing since you cannot get the exact return type of a lambda (at least not yet). impl return types are sort of a special case of existential typing.


or

    fn twiddle<I>(foozles: I)
    where
      I: Iterator<Item=Foozle>
(or probably IntoIterator<Item=Foozle>). `impl T` is (modulo some details I don't remember right now) just syntactic sugar for this.


> modulo any security issues

I'm not sure there are any security issues associated with using an older edition. I think when soundness holes and other things like that are fixed, they're usually fixed across all editions. Sometimes that's technically a backwards-incompatible change/fix, but Rust's back compat policy considers that acceptable if the breakage isn't widespread.


Rust is a better C++. The problem is that it's a better C++, i.e. it seems to be following the same path (only faster). If in ten to fifteen years Rust is in the same position (only with a smaller market share), it isn't attractive enough to pay the high migration cost.


What's missing from Rust? Or what is in it that will become a problem later?


Two limitations with the type system currently are a lack of existential types as well as a lack of GADTs (generalized algebraic data types). Existential types would make it easier to work with unboxed lambdas when implementing a trait (since then you would be able to use the type of the lambda for things like associated data types).

GADTs are useful for extremely generic code and are required if you want to write a general Monad trait. There is currently no way to write a trait which says return Self, but with a different generic argument. And not just Self, any associated type on a trait is a single type so you can't have a requirement for an associated type to be generic.

Finally, the procedural macro system is kind of a mess at the moment. Last time I checked, you really couldn't write a hygienic procedural macro (though I think this can be added later on without breaking compatibility). A bigger limitation is that you don't really have any way to access type information in a procedural macro. Obviously type information is only available after parsing unless you make the macro system in Rust way more complicated, but it would be nice to have static type info for other modules. Having this information would allow you to do stuff like generating serialization code for some library that you don't control.


It's probably the same as that which was missing from C++ in, say, 1990 -- the needs (and fashions) of tomorrow. But it's not specifically Rust. All languages are built for the needs of their time, and then have to adapt and change as requirements and fashions change. The question is, how do they adapt? That's defined by the language's ethos. C, C++, Java, Python, and C# have all survived and adapted for some time, but they've had somewhat different adaptation approaches. Over time, their domains also shifted (in fact, contracted compared to their peak; Python is exceptional, perhaps, in that its peak domain wasn't its initial domain).


Rust is C++ for OCaml developers.


> tbh I just say to them to learn the sane 5% of the language and ignore the rest

I thought I knew C++ well enough to get by. But in my current company, there's no way you can do anything in the C++ codebase without a serious understanding of the language's latest features. Basically, it seems they use every single feature available.

And it seems some people love this mess.


In my early 20's I got the appeal of that I think. I wrote a lot of C++, learned all the (then) new features and design patterns. I had a bunch of C++ books covering various advanced topics. I felt like a rockstar implementing ultra-sophisticated generic libraries.

I think I remember what the breaking point was for me. I had a custom serialization format (something YAML-like, because of course I had to implement my own custom format) and I wanted to pretty print it. I needed to implement some kind of indentation routine, you know, add a number of spaces at the start of a line depending on the current nesting level.

I wrote an indentation class that was fully generic. As in, the whole concept of indentation was abstracted away; I wasn't dealing with line buffers and characters, I had generic streams of objects and everything was templated and Boosted to the max. Must have taken about 20 minutes to compile on the CPUs of that time. I was proud of my mad cpp skillz too.

And then the absurdity of it struck me. Over the next few years I found myself returning to the simplicity of C more and more (not that I would recommend C as a replacement for C++ in the general case, mind you).

Often when I read C++ discussions these days I think I detect the same kind of hubris I used to experience. Just writing ultra-complicated code for the sake of some perverted notion of purity and elegance.

So when you mix that mindset with the "and the kitchen sink too" ethos of the C++ language committee, you end up with the unmaintainable monstrosity that's modern C++.


You just described me in my 20s as well. I used to revel in my knowledge of C++ minutiae and being able to quote parts of the standard and recognize obscure undefined behavior.

Then I started a company and hired people to work with me and it hit me just how much of an absolute waste of time it is to know that stuff. It doesn't translate or generalize well, most C++ is a special kind of complexity that is only relevant to C++, and then when you hire people you can actually assign a dollar cost to the complexity and that cost is simply not worth it.

I still use C++ at my company, but I now use a very simple and straight forward subset of it, basically along the lines of C with classes and namespaces. No more boost, no more elaborate SFINAE, no more writing things to be so generic and obtuse and avoiding parts of the language that are too abstract.

In that vein I also avoid all of the mess introduced by C++20. Concepts were a missed opportunity, modules are beyond useless, and the fact that C++20 has both coroutines and ranges just shows how the language is designed in a Frankenstein manner. A good solution for coroutines would make the use of ranges unnecessary, and a good solution for ranges would make coroutines unnecessary.

Instead C++20 provides half-assed implementations of both.


I find myself largely agreeing with you about the hubris and complexity behind most C++14/17/20 discussions today, and often feel adding complexity for the sake of intellectual satisfaction is pointless and should be avoided.

My view is that we mostly read code a lot more than we write it, and code is basically the low-level fabric on which we build higher-level abstractions like business logic (except for some ultra-specialized cases like shaders or vector assembly). My problem is that a senior engineer shouldn't have to waste brain cycles figuring out the language's antics when reading code, because that takes you away from what you really want to get to (the business logic or the intent of the code). For a reasonably senior coder the language really should be out of the picture. These antics often end up wasting attention and sometimes resources, because somebody will likely use that code in a way it's not intended and it will have a subtle failure, like using extra resources, etc.

PS: apologies for misspells in above para, was hoping to make a point.


I believe that deep down it's a question of identity and incentives. The right thing to do is reduce complexity.

But the thing that optimizes for nerd status and job security and resume building is to increase complexity, because most people cannot appreciate the sophistication of simplicity in some technological niche, and it is not easily communicated.

And then there is the question of what gives you joy? I find that for most coders, their primary motivation is not to solve a customer problem in the quickest, simplest and most efficient way, but to challenge themselves with riddles and learn new tech and follow fashion. This definitely varies by culture - Golang and Haskell are on opposite ends in that spectrum.

Overcoming delusion means to exit one's own mind and observe the real world. Looking at useful and well-made software, in which language is it implemented in? A Haskell fan will have no trouble talking for hours about how its concepts are superior. But he will not notice that there is barely any popular software written in Haskell out there, and he will not understand what that implies.


You hit the nail on the head I think. A few years into my career I was enamored by learning new programming languages, in particular functional ones. I was fully drinking the koolaid, too - pure functions and immutability would surely make writing web crud a blissful experience, right?

At some point I realized that the nice examples from the books never mapped to the real world as cleanly. Exceptions were pretty useful, local mutability made lots of things easier, and category theory has no place in calling JSON apis over http.

I think a great number of talented programmers never learn that lesson, and continue futzing about with theoretical purity for its own sake.


On the other hand there is value in learning different languages and concepts, and possibly applying them in your day to day job if it makes sense. It makes you a better programmer. But one is a means to an end, and the other is... I don't know, a philosophy? L'art pour l'art?


"But the thing that optimizes for nerd status and job security and resume building is to increase complexity, because most people cannot appreciate the sophistication of simplicity in some technological niche, and it is not easily communicated."

I think you're on to something there, except for the "job security" part. I've met very few people in tech who are concerned about losing their jobs, and fewer of them would make things much more complex---they didn't have the skills.

In fact, most of the majority I have known who produced complexity were optimizing for nerd status and resume building, but would quit rather than maintain their own complex code.


You can limit your code to a subset of C++, make it like C with free ctors/dtors and simple classes, and then it's not that complex.


I think this kind of philosophy is what ended up driving Go's simplicity.

Less infatuation with the tool, more focus on shipping maintainable code the first time.


And you can't really have a sane conversation with these guys either. I've never met a more arrogant programming community than the C++ one.

I wanted to learn more low level programming when I was a C# dev, and coming from an OOP background, I chose C++ (also I had encountered c++ in school as a first language) . Man, was it humiliating asking questions even on stack overflow. Apparently everything is obvious and I'm an idiot.

In any case, I dropped C++ (not because of the obnoxious community but because the language felt so poorly designed, coming from C#) and turned to C. It's been an amazing experience so far learning from the C community. They are deeply insightful about computers and genuinely willing to help new engineers. Very experienced and very high-calibre C engineers were helping me out without trying to show off how much smarter they are than the rest of the world, despite being clearly far better and smarter than I am.


There are a lot of hardware engineers at my company that more or less occasionally use Python or C# to get their job done. I understand that those languages are the better choice for them and gladly help them with getting C bindings done for the odd library in our industry, and to understand some low level concepts such as CPU cache when they run into problems due to those.

I turned away from the C++ community towards the Rust community because I saw the same issue as you, and am very happy about that decision.


It is somehow ironic to read this, given the heat given to newcomers on comp.lang.c and even the moderated one on Usenet.


I'm sorry, but using stackoverflow as a proxy for any "community" is giving that s(h)ite too much credit. It is infected with arrogant assholes regardless of the field.

(I guess I'm old and misanthropic but I would not seek out answers by asking on the internet, but instead by reading one of the many excellent books on C++. It's not some obscure hobby where you need the collective wisdom of old hands.)

Now, I've been doing C++ for 20+ years, and the only real way to get help/mentorship in my experience is people you work with. There are too many ways to do any given thing in C++ and what you want is to follow the coding practices of the team.


Well said.

One thing I have noticed is that many people trying to learn C++ (or any other language for that matter) don't even do the basic homework (e.g. read a book) before asking questions. They expect everything to be spoon-fed and to "easily" understand it all. Even a little effort is too much. They simply lack the commitment and persistence to learn anything non-trivial.


So, your answer to somebody that found it difficult to get help from the C++ community is to first admit you are a misanthrope, then direct the other guy to: go read some books and learn it for yourself!

Kind of proving the other person's point for them...


Read what I said:

1. stackoverflow is a bad "community" for any field.

2. I described how I would approach learning C++, based on my personality. I didn't "direct" anyone to do anything. Especially since I phrased it as a literal parenthetical.


For me, the hints that something might go wrong came with the STL seeing more and more use, and then boost, various pointer types... and it kept piling on. I'd say as soon as template metaprogramming became mainstream, in a language where it looked like a tumor, that was the end of it for me personally. It was not a language I knew anymore, nor cared about. C is a bit more verbose but concise enough to get around (if there's no abuse of the pre-processor). With Rust, the jury is still out.


Yes. The explosion of meta programming in C++ was what killed my interest in that language. Too many layers of worthless abstraction that looked like they could have been written by an APL programmer having a seizure.


>there's no way you can do anything in the C++ codebase without a serious understanding of the language's latest features

I don't mind using the latest features, there are some really nice things like iterators, compile time expressions and so on. I just don't want it to be hard to read or understand (like you said, using ALL the latest features).

So a good rule of thumb for me has been "if I give this to a newcomer, will he be able to infer what this is doing?". If yes, it's generally "safe" to use it.


> Basically, it seems they use every single feature available.

And nobody is worried that the compiler implementations are not battle-tested and/or fully optimized yet? Because that alone would scare me off of using the newest features in production.


They're not using all the features for rational reasons in the first place! It's a rat race, plain and simple.


> But in my current company, there's no way you can do anything in the C++ codebase without a serious understanding of the language's latest features.

I feel your anecdotal example is not fair, as you're trying to pass off problems created by your team with its ill-advised approach to adopting a programming language's latest features as problems with said programming language.

For instance, I feel you could make precisely the same case with a Java or Python project if the project's coding guidelines forced the adoption of all the new bells and whistles. But that wouldn't mean the languages have a problem.

Meanwhile, the golden rule of C++ continues to be favouring the principle of least surprise, picking a language version (often C++11, and not C++14/17/20) and a subset of features to be enforced in the project's coding guidelines, and building your project around those.

Frankly, your anecdotal example sounds more like resume-driven development claiming yet another victim than a programming language being too unwieldy.


I agree I was a bit unfair, in the sense that it's not a problem specific to C++. It's extremely tempting for developers to use all tools and features available to them, and without very strong guidelines, you can be sure that it will happen.

That being said, C++ is an extremely complex language, so there's much more room for the code to become insanely complex than in more modern languages.


Just wanted to say that I have seen this happen in my current company, so much so that certain portions of code basically look like Boost headers. Many folks don't want to touch that code, so they end up duplicating functionality with slight changes, because that one minor change would take on a life of its own.

In my previous company I saw this happen (to a lesser extent) as a way of marking territory, since such spaghetti code sans detailed documentation basically means nobody else will touch it.


The Dave Abrahams comment in the questions here:

https://www.youtube.com/watch?v=raB_289NxBk

"We got a lot of mileage in Swift, saying no to things"

and

"You're not going to get to a simpler language by adding features".

I've taken two comments (I don't feel that they're out of context), and they're 100% correct. As is your comment about learning the 5% part.

> I really wish a good, well-maintained, modern (as in, actually break stupid compatibility, just fuck it. It's 2021 for God sakes) C++ existed.

I often joke to my colleagues and say that C++42 will be equal to D today, or in fact D from 5 years ago.


Isn't Swift considered to be one of the more complex languages?

https://www.quora.com/Which-features-overcomplicate-Swift-Wh...


Swift is indeed a complex language, although relatively simple when compared against C++20.


It depends what you're comparing it to. If you look at every computer language, then perhaps, but C++ is still leagues ahead in complexity.


I also liked his take on safety.


I haven't coded in either¹, but I got the impression that Rust tries to supersede C++, and that Zig tries to supersede C, and that they're both slowly getting there. Is that a bad oversimplification?

¹ nor seriously coded in C++ in a long time, for that matter, so take this statement as an outsider impression


Do people really think Rust isn't a replacement for C? For the work I do - optimisation algorithms that must run As Fast As Possible - it absolutely is a much-superior replacement for C.

I think what people mean is that Rust violates their perception of what C 'is' - 'simple, bare-metal, low-abstraction'. Well I don't think C is any of these things and Rust's memory model (borrow-checker) is a much more explicit and precise representation of the actual (implicit, undocumented) invariants of your C program. In that sense it is 'simpler'.


To reword what you are saying in a slightly less dismissive tone: there are a lot people who stick to C because they don't like the added complexity of C++ (real or perceived). The same people are likely to look at Rust and perceive a lot of added complexity on their first impression, while also already feeling comfortable with C's complexity. With that combination switching over becomes a hard sell.

Meanwhile Zig has a lot less perceived complexity, and most of it appears to be similar enough to the "C complexity" that C programmers are already familiar with to not scare them off.

(Again, I'm just giving outsider impression of someone who just read a ton of blog posts and basic tutorials + standard library stuff without really coding in either language)


If you're really, really good at C, you'll have realized that Rust doesn't add much to what you need to do. You're already doing what Rust does, you're just doing it manually without compiler support. Rust didn't create the issues associated with borrowing things, it merely surfaced them. They're always there, in any language you use, and since C is so manual, you better have figured that out if you plan on using it at a high level. I write mostly in Go, but I'm constantly tracking "ownership" issues in my language despite not having a "borrow checker", because the borrow checker doesn't create the issue.

Plus if you're using C at a high level you ought to be using a static analyzer anyhow, that may not impose the exact same restrictions on you as Rust, but should be imposing enough discipline on you that using Rust doesn't seem so foreign.


I think this is definitely a reasonable take. Having said that, Rust is not quite as complex as C++ so a lot of people who have rejected C++ are more open to Rust (the linux kernel being a prominent example).


> Rust is not quite as complex as C++

Yet. C++ wasn't as complex as today's C++ back when it was Rust's age. It wasn't even as complex as Rust. But if you pick a language for the next 20-30 years, you don't just care about where it is now, but also about where it is headed.


If you get things right first time, you don't need to add as much complexity on the journey to ship something usable.

This is obviously easier coming from behind. You simply don't have to (and shouldn't) repeat other people's mistakes.

> It wasn't even as complex as Rust

I'm particularly dubious about this claim. Even counting from the genesis of C++ rather than standardisation, yet counting Rust as starting only in 2015, C++ after six years not only has multiple inheritance and overloading of everything, it also already has templates. That's a lot of complexity.


> If you get things right first time, you don't need to add as much complexity on the journey to ship something usable.

There's no such thing as "getting things right the first time," because what's right depends on external, mutable, factors. The best anyone can do -- and most successful languages achieve -- is being well adapted to the environment. But the environment changes. For example, Java's object model was well adapted to an environment where memory access and computation were of roughly similar speed, but hardware changed so that computation is now faster than memory access, so Java is now changing to adapt to the current environment. On the other hand, its bet that memory is cheaper than computation has so far proven relatively long-lasting. But there's no telling what needs new hardware architectures or new software requirements will bring tomorrow.

Both Java and C have adapted better, in my opinion, than C++, and it's not because of specific features, but because of their evolution ethos, which is different not only from C++'s but also from each other. My personal opinion is that C++ has tried for too long to be both high and low level, and that has shaped its particular evolution. I feel that Rust is repeating that same fundamental mistake, but, of course, others may disagree.


I don't agree that it's somehow impossible to get things right and I'm not even sure you really believe that.

Take "explicit". In hindsight I suspect even many C++ proponents would agree that actually you don't want C++ conversion behaviour (when calling a function that takes a parameter of type X and you've provided a parameter of type Y, the C++ compiler will look for any way to construct an X that only needs a Y and then call that constructor rather than report that you've got the wrong type) by default. Perhaps C++ should have implicit conversion you can opt into, but it certainly should not have (as it does today) implicit conversion you must know about and opt out of to avoid blowing your feet off by mistake.

But I argue that it goes much further than such superficial mistakes. Multiple inheritance is a mistake. Personally I'm sold on the theory that all type inheritance is a mistake full stop (heritability should be a property only of interfaces), lots of people don't agree, but far fewer support the choice C++ made to embrace multiple inheritance of types. It's just so rarely even useful let alone the best option to solve a problem.


> I don't agree that it's somehow impossible to get things right and I'm not even sure you really believe that.

That depends what you mean by "things." Obviously, languages created in 2015 might fix mistakes made in 1985, and recognised as mistakes, in, say, 1995. But the things that a language needs to solve over its lifetime are mostly unknown when it's created. How the language handles the problems that are known when it's created is less interesting. If it doesn't do it well, it will never become popular. What's interesting is how it adapts when those problems change with time and new ones appear.


They can even do use-after-free errors just like in C.


Oh sure, but how many people who choose to be C programmers do you know who would be scared off by that? That's exactly what "familiar complexity" is about: people being comfortable with and not worrying about things that they're used to, even though they maybe should feel uncomfortable and worry a bit.


Many people that choose to be C programmers think they can manage it, until their code runs under SonarQube or similar tools.

It is like handling butcher knives without metal gloves; they think the gloves are worthless until they repent not wearing them.


Rust is, or was, explicitly a replacement for C++, and many of its features, including the borrow checker, are more supporting C++ generics/STL collections "done right".


Not sure about Zig, but Rust is a pretty successful replacement for C++ by and for all the people who want low-level efficiency of C/C++ but with safety guarantees and reasonable semantics that can be understood without long and painful training.


I really want to take a good look at Rust... it does seem to have very reasonable syntax and cargo looks nice.


The bigger sell to me is that you can actually give safe rust code to a newcomer, and they won’t cut themselves on the most basic of tasks. Code review and work delegation have become soooo much easier for me as software dev lead.


It is a successful language, a successful C++ replacement remains to be seen, as per companies whose SDKs favour only C, C++ and their managed stacks.


I think Rust has an ambition to supersede C++, but in its current form it has only superseded C. C++ is still significantly more expressive than Rust, which limits the ability of Rust to replace it.


There are some attempts to keep C++ restrained to a sane level, for example Orthodox C++ https://gist.github.com/bkaradzic/2e39896bc7d8c34e042b. I suggest that you stick to them.

And for everyone who feels they have to incorporate the newest features of modern C++, don't. Avoid modern fetishism and make decisions based off things that have survived the Lindy effect.


It does exist. It's your regular C++17 with Google Style on top (though perhaps with exceptions allowed), ruthlessly enforced through reviews. For larger teams to talk to each other use Protocol Buffers and GRPC to decouple interfaces. For build use Bazel. For utility code use Abseil. Do not use Boost for anything. For editing, set up Vim/Nvim + YouCompleteMe.

With this setup I find C++ pretty easy to work in. And I've built a team of ~50 devs that were productive on the same multiplatform C++ codebase (+ all sorts of language bindings).


My only gripe with gRPC is that there is no way to integrate it into a foreign event loop. I'm a big fan of single threaded event driven architecture ( I mostly do embedded work)


That is by far not the only gripe I have with gRPC, but yeah, it's not designed for embedded. It's too large and too full featured for that. Maybe there's a cut down variant somewhere though, it is an open standard, it is conceivable that someone could implement a portion of its features compactly. For instance for Protocol Buffers there's c-only nanopb.


What happens when you have to use something that isn't on your list? Do you fire people who don't like Vim?


They can figure out how to set up VSCode or CLion. Wrt everything else - tough luck. OP asked for a usable subset, I gave them the usable subset, battle-tested for 15 years on a multimillion-LOC codebase, quite possibly the largest and most impressive in the world.


I think the nice language is in there, but it’s buried within the backwards compatibility as you mentioned.

I’ve started putting together https://cppbyexample.com as a way for newcomers to learn because pointing them to a reference or outdated examples isn’t helping anyone.


> C++ has already got to the point where teams inside my company can't even talk to each other properly (it's R&D and very heterogeneous, so not that bad), but this doesn't happen with any other language. It's ridiculous.

Yeah, that's totally true. But I would also say that a lot of it in my experience is people not even trying to keep up with the standards and barely even writing C++11 style code. I wouldn't expect to go into a Python shop (my other language) and work with people who hadn't tried to keep up with that language for 10 years but many C++ devs seem to have that attitude.


Have you looked at Nim? Simple like Python, but generates optimised machine code. It is also statically typed, unlike Python.


D tried, but somehow failed.


D suffered from too many options, multiple standard libraries and indecision between GC and manual memory management.


> D suffered from [...] indecision between GC and manual memory management.

It is still suffering from that. If you want to manage memory yourself you can't use some parts of the standard library + you can't use a lot of the community packages since they assume there is a GC present.

So far it looks like the best alternative is Zig, but sadly it doesn't have some of the higher-level language features such as function overloading, compile-time interfaces (Rust traits) or macros. Jai seems to have most D features and more without the GC [0], but Zig is more likely to win popular mindshare since it already is in the open + comes with a C/C++ compiler, which is pretty cool.

[0] https://github.com/Jai-Community/Jai-Community-Library/wiki


Exciting. The tutorial says download the compiler, somehow cannot find it, do you have a link?


Jai isn't available to the public, sadly. That's why I said Zig is "already in the open". Jai has been in private beta since early 2020 and there are about 150 people that have access to the compiler beta AFAIK.

The reason for the language not being available is that the designer (Jonathan Blow) doesn't want to release something half-baked to the world (because "there is already so much garbage software out in the world", or some line of reasoning similar to that). I don't agree with his reasoning, but I understand it TBH. He also has the reputation of taking too long to develop his games, but when they're finally released they're pretty good.

Anyway, it doesn't look like it'll be released to the public soon so like I said, Zig is the best alternative right now -- and it's gaining more and more users, so when Jai finally gets released, Zig might have already won.


Jai and Zig are my next two bets (after exploring some Nim and concluding that it needs more time to develop), but Jai has made design decisions that are much more appealing to gamedev, and Zig seems less pragmatic in this regard. Both seem to be promising languages in general, I'm only comparing them from a perspective of a specific domain of programming.

- Zig doesn't have operator overloading while Jai does. Maybe a minor nitpick compared to the other ones, but this is a must-have for any math-heavy code; even crufty Orthodox C++ practitioners seem to use it.

- Jai's dev team is also developing a compiler backend that is much faster than LLVM - it's only for debug builds, but that compile speed is very important for any iteration-based workflow.

- Jai's compile-time evaluation is far more powerful - it can access the disk/internet and call any libraries. This is probably a horror show for any webdevs who like to do "npm install" without any thought, but in the case of gamedev, where you have precise control over your dependencies, this gives a much higher degree of freedom. (For example, with this I imagine not only Jai code replacing all your build system scripts, but also asset management, packaging, localization, and publishing as well.) Jai's compile-time evaluation also runs in a custom-made bytecode interpreter, which is probably going to be way faster than interpreting raw LLVM IR (which is what I think Zig's implementation is doing).

- Jai's support for switching allocators on the fly seems a bit less elegant than Zig's approach, but for actual usage I imagine it will be a lot more ergonomic to use.

As you have said, you can't dismiss the community traction of Zig. But for me the most important thing will be: which language will actually ship a (reasonably sized) commercial game first. In that regard, as Jai's release will coincide with the Sokoban game that Blow is making with it, I think Jai will be the next language I try after C++ (well, if it releases, though...)


> - Jai's dev team is also developing a compiler backend that is much faster than LLVM - it's only for debug builds, but that compile speed is very important for any iteration-based workflow.

So is Zig. In fact Zig might be even more aggressive about this - they chose to aim for an in-place binary patching approach explicitly for the purpose of fast rebuilds and simplifying live-reloading.


Oh, you’re right, my bad. If Zig achieves this then it will be a boon to all gamedevs using the language.


> Jai's dev team is also developing a compiler backend that is much faster than LLVM

I suppose you mean compile time when you say "faster". But LLVM also gives you many backend CPU (and even GPU) targets out of the box. Rolling your own backend probably means x86 and maybe ARM. Lots of work targeting new architectures that is saved by going with LLVM.


The custom compiler backend is only intended to use for development, so limiting it to x86 makes sense (well, unless you are planning to develop with one of the new M1 Macs…)

For actual deployment (release builds for various architectures), the LLVM compiler can be used for Jai.


> Zig doesn't have operator overloading

Zig, by design, doesn't have overloading of any name, period (which also includes operators).


Sure, but I also proposed an equivalent way to get "custom operators without overloading" (lexical scope-namespacing in @ functions with a special parser rule for certain infix binary forms that do not conflict with builtin operators) and this, too, was rejected.


I am much happier using D than I have ever been using C++. It is a perfect replacement, with none of the issues mentioned in the GP comment.


D isn't a replacement for C++ anymore than Go is. If you like it that's fantastic, but it's targeting a fairly different intersection of tradeoffs.


Have you used both?


That's what code standards are for, so you're all writing code that everyone can understand. Seems like a failure of the tech managers


Coroutines let you control where allocations go though? Just override operator new/delete on your promise types. Honestly the feature has been amazing and has cleaned up my code a ton. It’s not for average users. It’s for library writers who then build abstractions on it to make async code easier to write for others. Exposing the coroutine frame as a non opaque structure is simply not practical. Not a super well-informed article IMO.
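
A rough sketch of what overriding the promise's operator new/delete looks like (task, pool_alloc, and pool_free are made-up stand-ins for whatever allocator you actually want; assumes C++20 <coroutine>):

    #include <coroutine>
    #include <cstdlib>

    void* pool_alloc(std::size_t n) { return std::malloc(n); }  // stand-in for a real pool
    void  pool_free(void* p)        { std::free(p); }

    struct task {
        struct promise_type {
            // The compiler routes the coroutine frame allocation through these.
            void* operator new(std::size_t size) { return pool_alloc(size); }
            void  operator delete(void* ptr)     { pool_free(ptr); }

            task get_return_object() { return {}; }
            std::suspend_never initial_suspend() noexcept { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() {}
        };
    };

    task example() { co_return; }   // frame comes from pool_alloc, freed via pool_free

    int main() { example(); }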


> It’s not for average users. It’s for library writers

Justifying the complexity of an interface by appeal to caste system is pretty poor, IMO.

A very common methodology for solving big problems is to break them down into smaller problems, solve each one in turn, and compose them together. In other words, we use library-oriented programming. A productive feature in a programming language makes it easy to both create the library elements of the solution and to compose them together.


There are tons of facilities in the language intended for library authors and abstractions that are absolutely intended to be used out of the box. This is just how things are, features are provided at various levels of abstractions. I know HN loves to collectively hate on C++ but “caste system?” Lol


I think "caste system" is the perfect word for it. I don't think it is a bad thing though, I am fine with having such castes in c++. First time I saw coroutines of c++ I was definitely like "yep, this is for library writers". I am not excited about it, but I am excited about libraries that will use it.

It is similar to how templated code basically becomes "magic" for an average programmer like me. I can't wrap my head around such complex templated libraries either, but the simplicity of using such libraries is surely welcomed by me. I think it is perfectly fine to have some language features for more advanced users.

Hey, at least they are adding concepts. Maybe I will move up to another caste and will be able to write more complex templated code now.


Yeah, it's part of the charm of C++. The language is almost horrifyingly modifiable. As a library author, that gives you a lot of control over the functionality of the abstractions you expose.

For a day to day programmer, that sort of concern is (usually) overkill.


The difference is in the capabilities of what a library can do. C++ goes through a lot of effort to ensure any library has all the same building blocks as the standard library. Almost no other language does this. Instead, they just grant the standard library magic powers that they don't trust you with or just refuse to provide.

The "caste system" as a result exists in all languages. Just the vast majority prevent you from ever being a duke much less a king. C++ doesn't stop you. Whether or not this is valuable to you is then personal preference, but it's not complexity for the fun of it, either.


> Justifying the complexity of an interface by appeal to caste system is pretty poor, IMO.

Ahhh, I mean, isn't that a decent-sized chunk of template-based code as well? I would hazard a guess that even for a "simple" one like std::vector<foo>, there are way more C++ developers who can use std::vector than who could implement a templated vector from scratch.

I, myself, fall kind of in the zone between. I'm not a template guru, but I work on the foundational/library-type code on my team and have to do my absolute best to make sure that the stuff I build for everyone is usable without needing to know the minutia about how all that stuff works.


Yes, and that's why templates are also often criticised as a poor design.


I think many C++ devs haven't read Stroustrup's book. The intro to C++ chapter covers most of what they need to know and then the rest of the book goes into beautiful detail with examples.

Most of the C++ developers I worked with were C++ developers in name only and wrote appalling code. I was one of these developers until reading Stroustrup's book and doing my own side projects to improve skills.


I think the fundamental flaw of C++ is the bifurcation of its target users into elite (i.e. library) programmers and "average" programmers. It makes the C++ standard library incomprehensible for normal users. The C++ committee members are composed of elite programmers, of course, so they keep adding features which are inscrutable to most users, and the complexity of the language spirals out of control.

I have a soft spot in my heart for C++. I have used it on and off for about 30 years, 8-9 of those years professionally. But the complexity of the language has increased so much that I am actually looking forward to not writing it anymore.


I wonder if Rust is any different. I wouldn't know.


I regularly read source code of the Rust standard library. With C++ I don't even bother, it's futile.


Rust has a similar bifurcation. Most "regular users" of rust will not write macros, for example. Although my impression is that the divide is not as big as C++'s


Rust has two macro systems. First it has declarative "by example" macros which I wouldn't give to an absolute beginner but they're very safe and give a flavour of meta-programming. A programmer with a good general knowledge of Rust, and a nagging feeling that there must be a better way to make these several almost-identical-but-not-quite new types they're working on can learn how to write "by example" macros and while learning probably won't set anything on fire. Rust even provides, out of the box, an easy way to see what the result was of your macro substitution.

Second, modern Rust has stable procedural macros. The procedural macros have essentially unlimited power, since they literally run inside the compiler processing the tokens of the program. Still, two kinds of proc macro, derive macros and attribute macros are fairly tame and, with due caution, merely competent programmers can experiment for themselves. It's really only the function-like proc macro that makes unlimited chaos likely and wants an expert. Stuff like whichever_compiles! (a macro which takes a series of code blocks and your program has the first one that compiled successfully...) is in this last category and is clearly toxic. Anybody who could write such things hopefully knows enough to do so only as an elaborate joke.


I don't know that I would consider writing macros a sign of language expertise. I work on a large, intricate Rust project and macro stuff, while it does come up occasionally, is not what I would see as the "elite" bit of writing Rust. The best Rust folks I have seen understand Rust's type system, how it relates to the memory model and how those relate to the underlying machine.


I feared macros initially, but when I tried I was actually surprised by how easy they were. Way easier than Scala 2.x macros.


I've been saying this for quite a while - new C++ features seem to be added by people who don't actually use the language for any real job, or who do but who wish they were using C# instead.

I work in games development and the rate of adoption of new C++ features is incredibly slow. Only in the last couple of years has it sort of become OK to use auto (although it's still frowned upon). Foreach loops are still out of the question though. I don't see the need for higher and higher levels of abstraction in C++. If we need that, we just use C#.


I mean, is the tradeoff here really that bad?

The central point of this article is "C++ coroutines may allocate". Apart from Rust's, are there a lot of languages where you can enforce that coroutines don't? I'm confident that this is not the case for any JVM or .NET based language; let's not even talk about Python or functional-ish languages.

For me, for instance, as bad as C++ coroutines may be, doing in C# what I'd do in C++ is literally not even on the radar; we just set a much higher bar for what is "good" in C++, which leads to a lot of deprecating-tone articles, but one must not lose sight of the whole picture; almost no coroutine implementation in the world sparks joy when the same criteria this article uses are applied.


Amen. As I said elsethread, what we got is better than nothing (and a huge amount of work was put in by Gor et al.). It is a shame we could have gotten much more.


C++ is trying to cater to every usecase. You don't need to use every feature, and if you move to a different field you may encounter different subsections of C++ usage.

For example, C# doesn't run on 16bit microcontrollers, and many things in the language make it unsuitable for this. If you added this as another of the design constraints on C# then it would be more complicated for desktop application devs, or server devs. C++ handles all of these cases, so is necessarily more complex.

IMO what C++ needs is compiler support for warnings when "old" bad practices are used, which would allow the language to remain flexible and backwards compatible, while still providing guidance for the future.


Check netduino.


> Foreach loops are still out of the question though

For real? What's the argument there? I've never seen anyone complain about range-for, so I'm curious what the perceived problem there is...


We use our own game engines, which still use lots of custom containers that won't necessarily work with a foreach loop. In the best case you will get a cryptic error message; in the worst case it will look like it's working but actually be very inefficient or crash (that's what happens when games programmers start overriding Count to mean something else depending on usage). It's a self-inflicted wound, basically.


Doesn't foreach expand to the same `for(it = collection.begin(); it != collection.end(); ++it)` loop you'd otherwise write manually?

Or are iterator based loops also avoided in that codebase?
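
For reference, my understanding is that range-for desugars to roughly this (simplified; the real rules also consider member begin()/end() and ADL):

    #include <iostream>
    #include <iterator>
    #include <vector>

    int main() {
        std::vector<int> myContainer{1, 2, 3};

        // for (const int& x : myContainer) { ... }  roughly becomes:
        {
            auto&& range = myContainer;
            auto first = std::begin(range);
            auto last  = std::end(range);
            for (; first != last; ++first) {
                const int& x = *first;
                std::cout << x << '\n';
            }
        }
    }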


My complaint is that it sucks to write your own iterators that can be used in a range-for loop, unlike rust where it is a single interface. You even have a standard library function that converts a closure into an iterator.
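
To make the complaint concrete, here is a minimal sketch (illustrative names, not from any real codebase) of what a custom container has to provide before range-for accepts it: begin()/end() plus an iterator type with operator*, operator++ and operator!=.

    #include <iostream>

    struct IntBuffer {
        int data[4] = {1, 2, 3, 4};

        struct iterator {
            int* p;
            int& operator*() const { return *p; }
            iterator& operator++() { ++p; return *this; }
            bool operator!=(const iterator& other) const { return p != other.p; }
        };

        iterator begin() { return {data}; }
        iterator end()   { return {data + 4}; }
    };

    int main() {
        for (int x : IntBuffer{}) std::cout << x << '\n';
    }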


This is sometimes evident in the standard library. I think they chose just about the most developer-unfriendly interface for `std::visit` of variants. Yes, that interface covers some edge cases that I've _never_ seen in practice (visiting two variants at once??), and it results in users writing ugly, verbose code. Then they proceed to have an `overloaded` example on cppreference that's a useful template to at least make that more palatable for the common case, but naturally that's not in the standard library.

It annoyed me so much I implemented what I consider the sane/common interface: `cvisit(variant, [](Type1) {...}, [](Type2) {...}, etc)`.

Yes, I realize my usage might not match with everyone's, but from my own experience and code I've read, this covers almost every usage (visiting sum types).


> Yes that interface covers some edge cases that I've _never_ seen in practice (visiting two variants at once??),

heh, this was literally one of my first uses of C++ variants: that job needed a generic set of mathematical functions in a way that emulated a weaker type system (with automated conversions left and right), while preserving the original types as far as possible, and did meaningful conversions otherwise e.g. more-or-less

    using value = variant<int, float, string, vector<float>>;
    value clamp(value x, value a, value b);


Ah yeah numeric sort of variants totally make sense for that! In all of my use cases they have been types that don't have operations defined among each other. But really I'd just appreciate the standard library providing at least a convenience function for the common case over their more powerful but obtuse std::visit.


See `overload` in the example[1] at cppreference. `overload` is an extremely useful helper that should be in any C++ programmer's toolbox.

[1] https://en.cppreference.com/w/cpp/utility/variant/visit
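
For reference, a minimal version of that helper and its use with std::visit, along the lines of the cppreference example (the deduction guide is unnecessary from C++20 on):

    #include <iostream>
    #include <string>
    #include <variant>

    // inherit from each lambda and expose all their operator()s
    template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
    template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;  // deduction guide, not needed in C++20

    int main() {
        std::variant<int, std::string> v = 42;
        std::visit(overloaded{
            [](int i)                { std::cout << "int: " << i << '\n'; },
            [](const std::string& s) { std::cout << "string: " << s << '\n'; }
        }, v);
    }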


Thanks for posting the link, I forgot to include it!


If c++ is being designed by people who wish they were using c# then at least we know the designers of c++ are a sensible bunch of folk :)


Why is for-each out of the question?

for(const thingy &x : myContainer)

What's so illegal about that??? Are you also not writing your own move constructors and assignment operators?


Nothing is illegal about it, I'm just saying that in games our engines have tonnes of various containers, some of which don't work with a for each loop because they require iterators to be implemented and we don't have them, or containers which are containers but which aren't actually meant to ever be iterated upon.

Again, there's nothing wrong about foreach loops in general. I'm just saying that in my industry and in the engines that we work with even the "basic" features of modern C++ are frowned upon for <reasons>.


Very odd and amusing at the same time!


These 2 might not be the best examples, because "auto" and "foreach" are, without doubt, among the important improvements of C++ 2011.

There are many other more obscure new C++ features, whose usefulness is doubtful.

Having to write the type again, when the type is already provided by an initialization value, does not provide any kind of useful redundancy.

The word "auto" is also redundant, but there is no way to eliminate it without breaking the traditional syntax.

"foreach" also provides a long-needed simplification. It makes no sense to waste time with writing iteration limits and also expressions with either pointers or indices for accessing the elements of a data aggregate, when all these are things that the compiler knows better.

When C (1974) introduced its more powerful "for" syntax, instead of the traditional "for" with arithmetic progressions, that was a partial mistake.

The C "for" allows 2 features that were not available with traditional "for", assigning to multiple loop variables (using the comma operator) and other loop variable updating operations besides addition/subtraction, e.g. indirect addressing through pointer links.

Nevertheless, these 2 features are needed infrequently, but the programmer is forced in 99% of the cases, when a simple "for" is needed, to write a lot of redundant information in comparison with the traditional "for" (e.g. "for i in 0 to 100 by 3 do" vs. "for (i = 0; i < 100; i += 3) {" ) (with traditional "for", the most frequent simpler loops can be written in a simplified way, e.g. "for i to 100 do", when the initial value is 0 and the increment is 1, while in the corresponding C "for" you can only replace "+= 3" by "++").

For the 2 special cases, a better syntax could be found, without altering the syntax for the common case. At around the time when C was developed, the concept of iterators was introduced in Alphard and CLU, which is a better solution than the extended syntax of the C "for".

Introducing "foreach" in C++ 2011, has corrected a mistake dating from C 1974, by providing a simple syntax for the simple common case, so that the complex syntax can be used only when really needed.

The correction is not total, because there are still many "for" loops that cannot be written with "foreach" but for which the C "for" is still overkill, e.g. loops that access multiple arrays.

Nevertheless, not using "foreach" nowadays would be a weird choice (notwithstanding the fringe cases with RHS functions returning references to temporaries, which were discussed on HN some time ago).


Sorry, I should have made it clearer - there is no issue with either one of these. I was trying to point out that in some industries(like video games, at least where I work - big AAA studio with proprietary engines) the adoption of new C++ features is so glacial, that even relatively inoffensive things like auto or foreach are still being frowned upon. I said in another comment that there is a technical reason why foreach loops are not used sometimes - because we make custom containers for everything and sometimes these containers don't work well in foreach loops(or at all) - it's fixable, but it's easier for the engine to just implement a blanket ban. For us, visibility and explicitness of everything you do is absolute king, and anything that hides types or number of iterations in a loop(!!!) is met with suspicion. I don't necessarily agree, but I see a lot of comments in this thread saying "oh C++ is such a difficult language to learn as a newcomer" - I'm like.....no? Just because these new features exist doesn't mean you have to use them.


In the average case it's generally not an issue, but in a game loop I could see it being a source of issues.

It's really easy to make a mistake and accidentally copy where you meant to take a reference, or to obfuscate conversions... which, I can imagine, in the game-engine world is quite a massive footgun.


I agree with most of the author’s reasoning and 100% of their recommendations.

> As the coroutines are, I don’t quite know what they’re for. They seem like they might be useful, but nobody seems excited by them.

It’s interesting that these two sentences were written next to each other.

I get the (outsider’s) impression that they were written using something like ASIO as the standard use case, which is indeed one of long-lived coroutines. I’m not sure anyone else participated who had a different application.

The second observation, “nobody seems excited by them”, is generally waved away by saying “get it into the standard so library writers can add sugar” (which implies that such sugar won't make it into the standard until C++26 or 29, if at all).

I’m generally pretty happy with the direction the committee has been moving in since C++17 or even 14. But I agree coroutines don’t seem to have quite made it.


A coworker added coroutines to KJ [1] so for our purposes it’s “made it”. It’ll be more broadly useful obviously with library support in the standard. I think that’s targeted for c++23

[1] https://github.com/capnproto/capnproto/blob/master/kjdoc/tou...


I have been using C++ coroutines in all of my development work for the past two years, and I am "excited by them", FWIW; they have changed C++ so drastically for me that I have no longer felt the need to work with any other random scripting languages (such as Python, which I used to use extensively): their ergonomics are actually pretty awesome.

Most of my projects, however, were things I would have been willing to develop in languages that heap-allocate constantly, so the fact that heap allocations I am poorly controlling are happening here isn't driving me insane (though I agree it sucks).

(The argument about competition is just strange to me, as that's just how generators as an abstraction--in any language--work... if you want to be able to have a generator yield to a separate generator the way you do that is by first developing a flatmap function that takes a generator of generators and yields its elements.)


> I have no longer felt the need to work with any other random scripting languages

Compile times are pretty bad though :) Especially with coroutines: at least clang (don't know other compilers) AFAIK works by copying the same function n times for each resumption point, and that's extremely slow.


competition -> composition

yields its elements -> yields its element's elements


I am very surprised how complicated C++ is becoming in every revision.

It began with the simple goal of adding classes to C. The STL, inline functions, "true constants" were all nice stuff. C++11 introduced nice things like "auto" and constexpr. But now the new features are increasing the complexity so much that it is becoming harder to learn the language. Is so much complexity worth it? How can someone new to programming ever hope to learn this monstrosity?


I do agree that C++ is a very big and complicated (and ugly) language, and because they value backwards compatibility the only way it can go is toward even more complexity, because they cannot strip out stuff.

But your last question can be answered in a couple of directions:
1. You don't HAVE to use C++; if there's another language that's easier, or better suited to your goals, use that. I think as long as C++ is the best choice for some domains it will stay relevant.
2. You don't HAVE to know/use all features of C++. If you're working solo, just ignore the parts you want. If you're in a big team, hopefully the codebase is such that you don't need to know everything about it to be productive.
3. Is adding features worth it? Well, if you take 1 & 2 into account, I think the downsides are capped a bit, and the upside is: more options. But of course whether you prefer many options with a lot of complexity, or a straightforward language, is subjective.


It’s actually fairly rare that you get to choose what language and language features are used in the code base that you’re responsible for. Sure, if you’re the tech lead at a startup, or working on a personal project, but for all my day jobs, I’ve not once gotten to choose anything that fundamental about the code base.

So just in general, I do think it’s fair criticism of a language for offering crappy features, even if it is “optional” to use them. (Not arguing either way for coroutines specifically, as I’m not a c++ dev).


> the only way it can go is even more complicated, because the cannot strip out stuff.

The irony is of course that they have stripped stuff out. I spent a fair amount of time going through an old code base in my last job stripping out std::unary_function and std::binary_function. Not saying that the replacement isn't better however.


My least favourite example of bad C++ change is deprecating and removing the overload of `std::random_shuffle` that doesn't require a new random device and a random function [1].

Sure, that version was cryptographically insecure, but 95% of the use cases of shuffling have nothing to do with cryptography and don't need to be secured. This is not a case like PHP's `mysql_query` which has an easy replacement: now you must declare an `std::random_device` and `std::mt19937` and pass them to your functions.

This nearly made me fail a job interview in an embarrassing way.

[1] https://en.cppreference.com/w/cpp/algorithm/random_shuffle
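
For anyone hitting the same wall, the minimal migration looks roughly like this; you now have to bring your own engine:

    #include <algorithm>
    #include <random>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3, 4, 5};

        // Removed in C++17:
        // std::random_shuffle(v.begin(), v.end());

        // The replacement: declare an engine (seeded from a random_device) and pass it in.
        std::random_device rd;
        std::mt19937 gen(rd());
        std::shuffle(v.begin(), v.end(), gen);
    }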


Apart from not being very random, it also wasn't thread-safe.


I don't think any of the C++11 RNG stuff is thread-safe; you need to lock around it or have an RNG per thread.


Becoming harder? Last time I wrote C++ was in 2010 or so. There's so much new stuff I already consider it impossible to learn. It will become even more complex with this update.


I think it's not a coincidence that 99% of the complaints about modern C++ on HN are from people who...don't use C++. You don't get anything remotely like this tone of discussion on r/cpp or whatever.

I've been continuously employed as primarily a C++ developer since around 2001. The language was pretty bad back then, but it's been getting better and better since C++11. The vast majority of the new stuff is absolutely wonderful to use, a huge improvement in productivity and code readability.


Big +1 to this. Most of the changes to C++ have been fixing old problems. Concepts are a complex feature sure, but before having named concepts in code, we had unnamed concepts in code that may or may not be named in the documentation. Being able to name these things in code and get reasonable error messages from the compiler / just actually be able to understand it in my brain is great!

Same with constexpr. People used to do all sorts of stupid math / string concatenation in "template metaprogramming" to get compile time execution. Now they write (mostly) normal functions in standard C++ that I can actually read. The language gets more complex sure, but the code I read and write gets simpler.

C++20 is larger than C++11, sure. But for most users, it's simpler too.
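
A small illustrative sketch of both points (hypothetical names, nothing from a real codebase): a named, compiler-checked concept where there used to be only a documentation note, and a constexpr function where there used to be template recursion.

    #include <concepts>
    #include <cstddef>
    #include <functional>

    // the requirement used to be an unnamed "must be hashable" note in the docs;
    // now it can be spelled out and checked by the compiler
    template <typename T>
    concept Hashable = requires(T t) {
        { std::hash<T>{}(t) } -> std::convertible_to<std::size_t>;
    };

    template <Hashable T>
    void insert(const T& value) { /* ... */ }

    // compile-time computation as a (mostly) normal function instead of template recursion
    constexpr int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }
    static_assert(factorial(5) == 120);

    int main() { insert(42); }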


> Most of the changes to C++ have been fixing old problems.

It will not fix all the pre-existing legacy code that people deal with on a daily basis. Languages are like infrastructure. People need to get it right on day one since it's exceedingly expensive to fix later on after people have already started to build on top of it.


This is a selection bias. People generally aren't forced to use C++, so those who can't stand it don't take C++ jobs, or leave to use something else.


I wasn't even complaining though. I'm just demoralized due to the extreme and ever increasing complexity. I don't feel like it's worth it to update my C++ skills.

My real complaint about C++ is the utterly insane ABI situation. Only C++ code dares to touch C++ code, no other language can interface with it without a C interface in between. The new standards forced even C++ compilers to break backwards compatibility with themselves due to requiring certain data structure performance characteristics. There is such a thing as C++ code that won't even work with other C++ code produced by the same compiler.


Yes, visitors to Disneyland don’t complain about Mickey Mouse.


Could you explain what kind of projects nowadays use C++? I'm not up to date at all anymore. Used a lot of C++ until about 2010 (for simulations and Windows programming).


Still used a lot in scientific programming. Especially with long-lived projects, but even now new C++ projects start up.

It’s a combination of institutional knowledge, domain-specific libraries, and low-level, hpc capabilities that keep C++ being used.

(C++ for low-level stuff, python for the glue is a common paradigm in scientific computing)


> Could you explain what kind of projects nowadays use C++? I'm not up to date at all anymore. Used a lot of C++ until about 2010 (for simulations and Windows programming).

All browsers you use are written in C++ (Firefox, Chrome...). Node.js is written in C++.

We live in that bizarre world where on one side we have C, an antiquated and simplistic language, and on the other C++, a monster of complexity that also provides basic modern stuff C will never have.

Rust has complexity of its own but it's more like a "philosophical" complexity rather than syntactic. A lot of C and C++ developers just don't like how Rust works.


High-performance database engines and similar data infrastructure are almost exclusively written in modern C++ these days. They benefit immensely from new C++ features.


Game development using Unreal Engine, which has a C++ API [0]

[0] https://www.unrealengine.com/en-US/features/c-api


Read Stroustrup's blue book written after C++11 was released. Not only is the font nicer than the white 2003 edition, it goes into sufficient detail to understand the new (now old!) features of C++11.

I couldn't imagine writing C++2003 any more.


You could learn the stuff added since 2010 in like a couple days if you had a solid understanding of C++98, it really isn't that hard.


Could we please keep the conversation on (the coroutine) topic for once instead of degenerating into the usual C++ bashing?


I think it's therapeutic for those that have to deal with C++ on a daily basis haha.


I don't think so. The complaints always seem to come from people who parted ways with C++ a long time ago. Given that C++ remains popular, it feels like some are unsure whether it was the right choice and they need to publicly self-justify it.


What do you base that on? Just because someone chooses other languages whenever they can, doesn't mean they're not forced to use C++ at work.


Many of the complaints about C++ really don't line up with day-to-day experiences using C++, IMO. They focus on problems and complexities that just don't come up often, if at all, unless you're really going into the weeds of implementing your own "STL-like" types.

It'd be like if every time Java is mentioned everyone just bashes how terrible JNI's API is. Yes, it is bad. It's frankly horrifying. But it's also basically 0% of where you spend your time when writing Java on the daily.

That's not to say C++ is rosy in practice, just that the problems actually encountered on a modern code base almost never get mentioned here, while all these esoteric edge cases that never come up in practice are beaten to death.

For example take C++ templates. In this thread they are slammed repeatedly. In practice you know what? They're fine. It's not that bad. Template metaprogramming is brain bending and hard, and I wouldn't suggest it. But basic templated classes or functions? No harder or more complex than the exact same thing in Java or C#. And in practice that's really all you do the overwhelming majority of the time.


This article discusses things that are totally irrelevant to someone who will actually use coroutines, because they will use https://github.com/lewissbaker/cppcoro or wait for C++23 to give them the library support for coroutines, which will make usage simple, like the first code snippet from the article.


The unreliable allocation elision is very relevant for most coroutine users.

edit: also the lack of composability is relevant.


It is possible to compose them more easily than described in the article; Lewis Baker's cppcoro library, for example, provides a recursive_generator<> type[0] that allows this without using any macros. It's up to the library part of coroutines to make things easy; end users are not expected to write low-level coroutine code themselves.

I wonder about the allocation elision. Return value optimization became mandatory, and some compilers can already elide calls to new/delete and malloc()/free() in normal code, so perhaps it will be possible to guarantee allocation elision in the future in the most used cases.

[0]: https://github.com/lewissbaker/cppcoro#recursive_generatort
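
For illustration, usage looks roughly like this (a sketch assuming cppcoro is available; the header and spelling are cppcoro's, not the standard's):

    #include <cppcoro/recursive_generator.hpp>

    cppcoro::recursive_generator<int> range(int lo, int hi) {
        for (int i = lo; i < hi; ++i)
            co_yield i;
    }

    cppcoro::recursive_generator<int> nested() {
        co_yield 1;
        co_yield range(10, 13);  // yield every element of another generator, no macros needed
        co_yield 2;
    }

    // for (int x : nested()) { ... }  // yields 1, 10, 11, 12, 2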


Unfortunately the C++ standards process is completely out of control and driving the language to total chaos


"Do you know how the Orcs first came into being? They were elves once, taken by the dark powers, tortured and mutilated. A ruined and terrible form of life."


I see this C++ coroutines thing as a desperate cry for real AST macros, but the godawful syntax of c++ just can't really make that work. And besides, people would complain even more if everyone made their own version of coroutines.

Maybe it would be possible to make a s-expression language with sane syntax that transpiled to C++ code and let you do arbitrary compile-time AST macros. You wouldn't have to rely on Turing-complete template insanity or whatever half-baked feature the committee shat out last. In principle it wouldn't even be that difficult to make something like this, since it wouldn't have to be an actual lisp. Might be an interesting project


I was a fan of async code 10 years ago, but now I'm getting more and more a believer that all those coroutine/fiber/etc models "do not spark joy". It's not just the C++ version.

One thing which usually happens is that engineers get overwhelmed with 2 threading models: they might already have a hard time understanding how normal threading works, and coroutines add another dimension on top of that. That's a non-issue for the engineers working on the proposals and some of the affected libraries, since they are domain experts, but only a minority of the implementors of an API server or users of an HTTP client are actually IO experts.

The second thing which usually happens is that a couple of ecosystems start to exist on top of the foundations which are incompatible with one another. So you no longer write code in C++, Rust, Python, Java etc.; you are writing code in asio, tokio, asyncio or netty. While most of these have some interoperability or can coexist with other async runtimes running on different threads, it's also not something easy.

And last but not least every one of the implementations has another set of limitations that one or the other person will be unhappy about. Whether its additional heap allocations, mandatory synchronization, lazy vs immediate execution, cancellation behaviors, etc.

All in all, coroutines seem like good tools for reaching performance goals in a way that's a bit more pleasant than what we had 10 years ago (callbacks), but they always seem to be a compromise.

I'm wondering whether at some point we will see an uptick in fixing the shortcomings of plain threads, to avoid having to reach for coroutines, and a decline in the latter. But security challenges and mitigations make it continuously harder to decrease system-call overhead.


> One thing which usually happens is that engineers get overwhelmed with 2 threading models: They might already have a hard time to understand how normal threading works, and coroutines add another dimension on top of that. That's a non-issue for the engineers working on the proposals and some of the affected libraries since they are domain experts, but the minority of implementors of an API server or users of a HTTP client are actually IO experts.

IMO it's traditional threading that's hard to get your head around not async. Pre-emption makes things much more complex.


I see where you are coming from. Preemptive multitasking definitely introduces room for errors which are not possible in pure cooperative multitasking - like Javascript. However with multi-threaded languages like C++, C#, Java, Rust, etc. you always have to care about all of that complexity, including cases where your cooperative coroutine might be resumed on a different OS thread. So it remains harder.

Also after moving to Rust, traditional threading lost a lot of its scariness. The type system is just so good at preventing the multithreading errors.


Isn't it the std lib's (and the compiler's) job to provide abstractions that allow the programmer to not think about preemption?


It is sad. Once again C++ is foisted with an overly complex and ugly standard that will make programmers unhappy for decades and likely cost billions in lost productivity. The proposal got pushback from a number of expert sources as well as from the community at large and yet here we are.


I wonder if this will go the way of std::auto_ptr and get replaced by a more sane version 10 years later.


It seems like a fool's errand for C++ to keep trying to add higher-level abstractions while maintaining a zero-overhead principle.


It is. It is an open secret that the STL isn't all that efficient, due to issues with e.g. iterator-invalidation guarantees. You can crank out a hash table that outperforms std::unordered_map in pretty much every situation in a weekend.

What C++ people need to realize is that they can't keep their cake and eat it too.


The single biggest contribution of the STL are not the data structures nor the algorithms. It is the design, the shared vocabulary and protocols. This is why I can replace almost any use of unordered_map with, say the corresponding abseil implementation with very little changes to the code.

Stepanov has always seen the STL as a starting point, not a finished library.
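
A tiny sketch of the kind of swap I mean, assuming abseil is available; the point is that the call sites don't have to change:

    #include <string>
    #include <unordered_map>
    // #include "absl/container/flat_hash_map.h"

    using Counts = std::unordered_map<std::string, int>;
    // using Counts = absl::flat_hash_map<std::string, int>;  // near drop-in replacement

    int main() {
        Counts counts;
        counts["coroutines"]++;
        counts["coroutines"] += 2;
        return 0;
    }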


Any C++ code I have ever worked with is littered with `std::`. Replacing all that with `otherlib::` would be a herculean effort at best. You might be tempted to do a simple search and replace, but let's say there's also instances of `mystd::` (probably rare). Or perhaps you don't want _all_ `std::` to be replaced. Anyway, my point is that this whole ordeal gets hairy quickly. And in practice, I only ever see `std::`. And not only does it give off horrible noise in the code, in my mind I think "this data structure is probably not optimal". "This algorithm is probably not optimal".


Most data structures in a project are not performance bottlenecks, so that's alright. While there are much better hash table implementations, std::unordered_map is still usable, although not great.


If I as a C++ dev come to a new project, I'd much rather that someone was using std::vector or std::array or std::unordered_map or whatever other container because I know how it works, and I know that any other new developer will know how it works.

Personally, I would expect very heavy justification (incl. profiling results showing orders of magnitude differences in total task performance) and a very strongly tested + well documented implementation for anyone sending me a PR with their own implementation of a container.


> Any C++ code I have ever worked with is littered with `std::`. Replacing all that with `otherlib::` would be a herculean effort at best. You might be tempted to do a simple search and replace, but let's say there's also instances of `mystd::` (probably rare). Or perhaps you don't want _all_ `std::` to be replaced. Anyway, my point is that this whole ordeal gets hairy quickly.

Replacing a std:: type with a semi-equivalent non-std:: one has never taken more than a few minutes with my IDE's find-and-replace, even on MLOC codebases.


What about putting using std::vector or whatever in the header or implementation?

Saves littering the codebase with std:: everywhere without having to do a blanket using namespace std;
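
Something like this, presumably (a sketch: pull in single names with using-declarations, or an alias so the implementation could be swapped later, rather than a blanket using namespace std):

    #include <string>
    #include <vector>

    // pull in just the names you use...
    using std::string;
    // ...or alias them, so the implementation could be swapped later:
    template <class T> using Vec = std::vector<T>;

    int main() {
        Vec<string> names{"alpha", "beta"};
        return 0;
    }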


> This is why I can replace almost any use of unordered_map with, say the corresponding abseil implementation with very little changes to the code.

How is that useful? For "easy" optimization? Because that's not how you optimize code.


I haven't made any claim about performance.


Iterator invalidation can easily be caught with debug builds.

Not every C++ application needs to win microbenchmarks when using hash tables.


> Not every C++ application needs to win microbenchmarks when using hash tables.

strawman and you know it.


Not at all, rather the C micro benchmarks culture that sadly infects some C++ subcultures.

I never had any performance issue using the STL to deliver projects under the customer acceptance criteria.


Why do people think it's too hard to learn? I've been writing in it and learning the new stuff and I'm impressed. Is it just overchoice? Where is all this extra complexity I keep hearing about hiding? To me it seems like the language keeps getting free DLC that you can optionally learn. It's the language that keeps on giving. My main complaint is the verbosity of it, even with the new stuff that adds brevity, like range-based loops. Can someone provide a coherent example where it was too complex for their needs?


From experience, you’re in the phase of your C++ skill development before you realize all the things you didn’t know you don’t know. Everyone goes through what you’re feeling, and with seniority in the language depression sets in how broken and inconsistent everything is.


Yes, this is me :)

There have certainly been times where I was SMH, thinking "that could have been simpler" maybe that's the beginning of a new phase.


I think the problem you miss (as have I for many years) is that it's not only about you learning the new stuff. When you work in a team, writing "new" C++ code forces everyone to learn the new stuff.

I too enjoy learning the new tools, their tradeoffs and how to best apply them to solve a given problem. But to someone who has not invested this effort (and might even be reading the syntax for the first time), they might as well be reading a completely different language. I believe that is the complexity that is talked about.


That just seems to be a reluctance to learn and stay relevant? You can indeed get away with writing C++98 but the rest of the world keeps turning.

I used to work with a guy who would hate all the "new stuff" and then belittle and berate newbies for not understanding his "simpler" code full of void* and COM code. A bizarre tradeoff of not learning anything new himself for 20 years, and then demanding everyone else knew what he learned 30 years ago. Inflexible. He'd still be using his Amiga if he had his way, cursing every new OS and computer system.

The interesting thing about the newer C++ stuff is that it really does look like a new language - because it is.


This was the complaint I'm curious to dig in to the most.. I suspect it will be divisive. Business goals and life goals don't necessarily align and having to learn a lot and often in order to remain "current" probably runs counter to some goals of businesses/teams where it simply becomes an additional cost.


Have you actually looked at the co-routine stuff? It is another level of awful, even for C++. The committee screwed that up.


You may think you need async, but you don't. This isn't Python or Javascript, the use case for C++ coroutines is microcontrollers and other things without a functioning modern OS.


Until you happen to do modern Windows with WinUI/UWP in C++, as not all APIs are exposed to .NET.


The main target for coroutines was networking. And, given enough library sugar, they actually work decently enough there.

The other use case is generators, and there the hard-to-remove allocation hurts a lot.
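
For context, here is a simplified sketch of what a generator looks like on top of the raw machinery (real libraries such as cppcoro, or the std::generator proposed for the standard, handle iterators, exceptions and const-correctness properly). The coroutine frame behind it is the allocation that may or may not be elided:

    #include <coroutine>
    #include <exception>
    #include <utility>

    template <typename T>
    struct generator {
        struct promise_type {
            T current{};
            generator get_return_object() {
                return generator{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() noexcept { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            std::suspend_always yield_value(T v) { current = std::move(v); return {}; }
            void return_void() {}
            void unhandled_exception() { std::terminate(); }
        };

        std::coroutine_handle<promise_type> h;
        explicit generator(std::coroutine_handle<promise_type> h) : h(h) {}
        generator(const generator&) = delete;
        generator(generator&& other) noexcept : h(std::exchange(other.h, {})) {}
        ~generator() { if (h) h.destroy(); }

        bool next() { h.resume(); return !h.done(); }
        const T& value() const { return h.promise().current; }
    };

    generator<int> iota(int n) {
        for (int i = 0; i < n; ++i)
            co_yield i;   // the state of this loop lives in the coroutine frame
    }

    int main() {
        auto g = iota(3);
        int sum = 0;
        while (g.next()) sum += g.value();   // 0 + 1 + 2
        return sum;
    }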


This isn't Python. Nobody needs or asked for coroutines to do networking in C++.


many many many people did ask for them.

Also, of course you do not need them. You also do not need C++, just a needle and steady hand.


That is a bold thing to say. Why do you think this is the case?


Not yet, no. I will look out for it. I am definitely in an early phase where I'm just finding things "cool", without noticing why they might be awful. But I do wonder if there are senior devs on each side of this C++-is-awful fence?


I have done C++ for 15 years; it's the only choice in the HFT domain I used to work in. There is nothing that can replace it right now because of its performance, the quality of the compilers and the available library ecosystem. But that does not mean that some of C++'s corners are not inexcusably obtuse. Coroutines are a new one.


I think C++ coroutines is best understood not as something you'd use directly, but as plumbing that makes existing async libraries interoperable and easier to use.

For example, Facebook's Folly has been updated to use coroutines and is being widely adopted at FB:

https://github.com/facebook/folly/tree/main/folly/experiment...


The snide response to this is does anything in C++ spark joy?

More seriously, I've come to believe that most people shouldn't be writing multithreaded code. More specifically, if you ever find yourself instantiating a thread directly you've probably made a mistake. It's possible you're writing something sufficiently low-level or a library or framework that justifies that of course.

From using Hack, I've come to really appreciate the single-threaded cooperative multitasking programming model for client code [1]. It's not unique to Hack obviously.

Part of the power of C++ is the ability to allocate things on the stack or the heap. This is powerful but the complexity cost of this is so incredibly high that I honestly question if it's worth it.

It's unsurprising to me to see the complexity that comes with coroutines as a result of this.

[1]: https://docs.hhvm.com/hack/asynchronous-operations/introduct...


Is it that complicated?

    #include <iostream>
    #include <thread>

    std::thread t([](){
        std::cout << "thread function\n";
    });
    t.join();
With sane ground rules threading is not so hard. Don't try to use subtle atomics. Do use a mutex to protect all shared state. Do use scoped locks. Do use thread safety annotations.
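
And a minimal sketch of those ground rules in practice: shared state behind a mutex, and a scoped lock so unlocking can't be forgotten on any return path.

    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex m;
    std::vector<int> shared;   // all access goes through the mutex

    void worker(int id) {
        std::scoped_lock lock(m);   // released automatically at end of scope
        shared.push_back(id);
    }

    int main() {
        std::thread a(worker, 1), b(worker, 2);
        a.join();
        b.join();
    }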


The difficulty of multi-threaded code, to me, is not about crashes or protecting shared state or ownership. It is about reasoning about how execution might interleave between these protected critical sections. For that, mutexes / locks / atomics are the wrong level of abstraction to reason about on a daily basis.

I haven't used Structured Concurrency long enough to form an opinion whether that is the right level of abstraction or not.


Structured concurrency only partially helps. You still have to be aware of what state is being used and modified within the threads of execution so that you properly lock them. It does not remove the need for some equivalent to mutexes and locks unless each thread of execution is known to use and modify different pieces of state (like thread 0 modifies a[0], 1 modifies a[1], etc.).

If there's any overlap you have to have a lock-equivalent that ensures the desired section of code runs atomically (relative to others using the same pieces of state). Or you need some kind of transaction system that can retry a stateful change.


It is ... complicated when talking in the context of "threads modifying states".

I am thinking more in terms of "shared state" that needs to be published after exiting a critical section [1], and other state that is passed along from one critical section to another (either as copyable values, or immutable state objects). At least this is currently the easiest way for me to reason about a piece of concurrency code.

[1] I am using critical section here to loosely describe a block of code needs to be executed together. It can be code between two yield points (in coroutine), a "task" object (in traditional task based scheduling), or section of code protected by a mutex.


I personally prefer OpenMP to other threading models, but I come from a scientific programming background. Very simple (or at least it was til accelerators came along), same in C/C++/Fortran.


That's probably OK since C++ does not spark joy either - the language is mostly a superset of some things that have been lying around for a while which you may or may not want, combined into a somewhat usable system.

At times I almost think there might be subset of C++ that sparks more joy - sometimes I try to program in it!


The call to new can't really be avoided with this style of coroutine. You have to put the state somewhere.

I'll agree there's some ugliness, but it's way nicer than trying to do something without compiler support. I looked into coroutine libraries previously, and I just gave up on them as being too much of a hack.


My understanding was that the aim of the implementation here isn't really to be used by users, but to allow library maintainers to use them as the basis for user consumable coroutines.


Nothing in C++ sparks joy so that fits right in.


I agree, but I think in 5 years we will be using coroutines regularly. Just need to let the standard library catch up.


As they say, if you are starting a project to solve a problem, and choose C++ as a language, then you now have three problems: the original problem, the C++ language complexity, and the third is the maintenance hell afterwards.

C++ language development is a classic case of what feature creep looks like for a software project that is on the path to self-harm.


Surely you can replace "C++" with any other language and it'd still be true?

I look at the colossal ancient Delphi codebase that my old employer had as a great example.

Or the Magento system in PHP as another example.

Or all the Objective C in the world that won't work on modern macos or iOS/iPadOS as another example.


Does any element of C++ spark joy?


C++ does not spark joy.


test


Google needs to get off its ass and open source Fibers (not to be confused with Google Fiber). Fibers sparked plenty of joy in me back when I was at Google. It's just a more dignified way of doing things with threads, IMO. Here's the talk which, unfortunately, does not show the higher level APIs I'm talking about: https://www.youtube.com/watch?v=KXuZi9aeGTw. If this were released today everyone would slap their foreheads and say "of course that's how it should be done".

Google folks have already removed the obstacle in that direction by adding userspace switching to the kernel in recent kernel releases. Just go all the way now, and release the kraken.

Last update I've seen on this is this article from June of this year: https://www.phoronix.com/scan.php?page=news_item&px=Google-F...

Imagine how excruciating it must have been to all the Google people on the C++ Standard committee to work on this stuff that comes nowhere close to what they've been using internally for years, and not be able to talk about it in any kind of detail.


Fibers were seriously awesome; nice that userland preemption has finally been accepted upstream too. These days I just gave up on C++ and use Go.


Yeah, I also don't advise my clients to start new C++ projects and steer them towards either Go or Rust (most pick the former, since it's easy to learn). If I were to start a product today, I'd likely be using Rust myself, purely due to the fact that once I need to hire people the Rust compiler won't let them do stupid stuff with shared state. What Go and Rust taught me more than other languages is that there is value in _not_ providing certain capabilities, such as inheritance or (in Rust) the ability to unsafely modify shared state.



