> But why can’t I just quit then? What’s the matter? The matter is, none of the languages, especially the so-called “C++ killers” give any real advantage over C++ in the modern world. All those new languages are mostly focused on holding a programmer on a leash for their own good.
This, very specifically, is the advantage these languages have over C++.
The benefit isn't the universe of things you can do (they're all Turing complete, you can do anything) - it's what the language stops you from doing. This is the thing the C++ committee never understood or at the very least never appreciated.
In other words, the “feature” those languages have is reducing the cognitive complexity and more tightly bounding the risk.
Newer languages add friction to, or drop altogether, things that you rarely or never want to do, to more tightly constrain the space of possible actions to the set of desired actions.
So experienced C++ devs don't need it, and the dream is mostly about creating experienced devs who write fast software, damage-limited by the guardrails of the environment? So it's basically the C#/Java world of cheap OO software crawling over Moore's plateau, hoping for one more silver bullet that will fix low-cost development, trying to solve the expensive-quality-dev shortage once more by cheaping out.
At least in aerospace we use C++ because we need a bare metal language and C++ is the most full featured/widely used one. Rust is great but toolchain support just isn’t there yet and ADA/Spark aren’t widely used anymore for new development.
Because I believe safety critical places like aviation are the ones that actually get things right. You can't pile on every demand on the human in charge and expect improvement. We're just not great at taking every possible detail into account. Doing lots of grunt work all the time reliably is what we have computers for.
In my old age, I find the more permissive a language is, the more painful it is to code in long term. Perl is great for simple things, but C++ was a vast improvement because it catches a lot of nonsense that Perl will compile just fine.
And debugging is so expensive that eventually writing code fast is near worthless. Whatever you gain you more than pay for in debugging afterwards.
One could actually have both.
Spiral provides a lot of its performance by correctly specifying error constraints, which is a form of safety.
You could specify architectural constraints by hand too, for performance and portability.
(With some good defaults.)
One could technically also get memory and reference safety this way. This is the opposite of what Rust does, for example, where it defaults to a particular model of referencing memory and makes using any other painful.
Rust also does not offer any constraints on timing or redundancy, should those be necessary.
Automate some of this handling but still keep it explicit and you have a potential winner.
yes - i switched from C to C++ a long, long time ago (late 1980s) because C++ was so much more strongly typed (i still don't understand why other C programmers have stuck with C). i'm less than convinced by the promises of rust, for example, but have only written trivial programs in it.
C isn't a simple language, really, because it has a ton of implicit assumptions and inconsistencies, undefined behavior and all the failure modes you have to keep in your head at all times.
It's not that C is simpler, it's less expressive.
I don't think this answers the parent's question though, which is: what does it mean for a language to be simpler? I think that's a good question. This is just a list of examples.
It almost seems like simple is a bad concept applied to languages because what's simple and complex is the idea you're trying to express, accurately, in its totality - not necessarily the language.
And even simpler is the iota combinator, which is by itself Turing complete. So a single letter plus matching parens is about as simple a language as you can get. But I can't imagine anyone wanting, or even being able, to program in it.
"Simpler" isn't really a good adjective to classify languages. You have to be way more specific on what "simple" means. Are we talking characters needed to implement 'hello world?' Time to first memory access segfault? Number of pages in the specification (which, at this point, I'm not sure whether Python or C++ wins out...)? Lines of code in the compiler / interpreter?
i don't see those languages as being "simple". quick to program in, yes. but assembler is really pretty transparent, once you get into it. heck, i used to program in machine code (i don't recommend it) without too many problems.
They don’t. What they do is create a system where the system protects itself from human error. They don’t know better than me: I agree with what they’re doing. Automatic enforcement of good things, plus an escape hatch for emergencies, is better than no restrictions ever.
I assume you program in assembly language, then? Can’t trust those structured programming types to know best how to manage your instruction pointer after all. “Functions”, “loops”, “conditionals”, all just fancy words for straitjackets.
I’d love a demographic study of folks who are really unhappy about languages like Rust and Go specifically because they believe they’re crushing their creative spirit. The “I need MAXIMAL EXPRESSIVENESS at all times! Don’t tell me how to do things!” sort
I hesitate to even guess what the findings would look like, but I bet it’d be interesting.
I agree the results would be fascinating. The perception from the other side is that Rustaceans crave to be bound in the constrictive swaddling of authoritarianism. Which seems to be borne out in both the language itself and the foundation.
I'm pretty much static guarantees to the bone. But I sort of get where the dynamic camp is coming from.
With something like python, you need to understand lists, dictionaries, loops, and the call stack. And then you're off to the races.
Meanwhile with ocaml you need to understand type theory, type inference, algebraic data types, the module system, maybe row poly, (insert a bunch of other things), and now also algebraic effects. And there's no guarantee that the well formed logic of your domain problem is compatible with the type theory your language is using (although, adt + generics almost always does the job if you take a minute).
That being said, the way that's being expressed here does leave something to be desired. Like, if someone doesn't jibe well with static techniques, that's one thing. But I've never really understood decrying people who get it as suffering from some sort of Stockholm syndrome.
I agree that there is a productive discussion to be had about all of these things, including their limits, for sure. Just that this way (my original parent, not you) isn’t it.
I mean, I don't need maximum expressiveness, but languages with nearly over-engineered type systems like Rust, to me, lack almost any expressiveness without having to get into macros.
But when programming in assembly you're still beholden to the CPU designers in regards to out of order execution and branch prediction and all those fancy layers taking runtime control away from you. Better to just print your own silicon.
This is such a silly send-up. They don't know what's good for me. They tell me what their language is good for. I decide if what the language is good for is good for me.
This is the key insight. I think the reason we have so many C++ replacement languages is that C++ is okayish for basically anything. Replacing it completely will necessitate several different languages that are good for different applications.
Okayish was fine or even necessary in the past but the industry has matured to the point where we need more purpose built tools for more specific projects.
Rust is going to be necessary, but so are Zig, Hylo, Vale, P, Odin, and Jai (assuming we ever get a release date). A language able to do everything that C++ can is almost definitely only able to do it all okayish.
This is a bit of a mischaracterization; languages don't generally stop you from doing things. They just make really damn sure that you aren't doing it by accident.
One thing I love about C++ is that it (mostly) stops me making type errors, compared to dynamic languages - "compile and it works". Adding type constraints makes programming safer and easier, and results in better compilation.
You can add other constraints (e.g. temporal constraints) and get similar benefits, on top of the above.
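A minimal sketch of that idea, reading "temporal constraints" as time units carried in the type system via std::chrono (one possible interpretation; pause_for is a made-up function for illustration):

    #include <chrono>
    using namespace std::chrono_literals;

    // Hypothetical API: the parameter type carries the unit.
    void pause_for(std::chrono::milliseconds d) { /* ... */ }

    int main() {
        // pause_for(500);   // error: a bare int has no unit; the compiler refuses to guess
        pause_for(500ms);    // ok
        pause_for(2s);       // ok: seconds convert to milliseconds losslessly
    }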
Language implementers ought to know better than me (the user) about programming language semantics, in this case specifically things like memory safety.
Memory safety is a very small fraction of the problems a computer program can have, and replacing them with an immediate shutdown is hardly a very useful improvement in program quality.
"Software quality", in my mind, includes "nobody can send me a crafted text message that gives them remote root access on my phone."
Yes, there are security issues beyond memory safety bugs. But these are the issues that are most regularly turned into the most serious exploits and they are hellishly common.
All systems security is about layered defenses. "Oh, log4j exists" is not a compelling reason to avoid changes that can mitigate very large portions of security risk.
There are places where you'll truly never encounter untrusted input and a crash is just as bad as blasting off and performing whatever unexpected computation, but that's nowhere near the entire existing C++ landscape.
well, i would not recommend shared_ptr except under rare circumstances, but i don't see what can be wrong with RAII. in fact, it is the languages that don't have RAII that have the problems, IMHO.
Is a bounds check on every container access good for me? Not always. Sometimes my code is correct. But security vulnerabilities are observably a huge problem. And you only need one vuln in iMessage or whatever to get a dissident journalist's phone hacked and then get them murdered by the Saudi government. Is "I know what I'm doing, I promise" good enough for you in that case?
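To make the trade-off concrete, a minimal illustration:

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        // int a = v[10];   // unchecked: undefined behavior, the classic exploit primitive
        int b = v.at(10);   // checked: throws std::out_of_range instead of corrupting memory
        return b;
    }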
I find it interesting because until about three years ago, C++ was always on the other side of this argument. We fight about static type systems. You could shout at the compiler that you know your code is correct even though it doesn't pass the type checker. Why does the committee think it knows better than me! Everything is just bits, after all!
Now we are on the other side. C++ people saying "bro I totally promise I'll initialize this data member before it is used, stop making me initialize it in my constructor"
> Now we are on the other side. C++ people saying "bro I totally promise I'll initialize this data member before it is used, stop making me initialize it in my constructor"
C++ has never said this - if a type has a constructor, that constructor will always be used in a data member (or elsewhere).
zabzonk said "if a type has a constructor", but I don't think the int type does, which is why you get uninitialized memory here.
Any instance of Foo will run Foo's constructor because Foo has one. The problem is that that instance of Foo won't definitely have x initialized, because int doesn't.
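A minimal sketch of the situation being described (this Foo is illustrative, not from any real codebase):

    struct Foo {
        int x;       // no default member initializer
        Foo() {}     // user-provided constructor that never touches x
    };

    int main() {
        Foo f;       // Foo's constructor runs...
        return f.x;  // ...but this is UB: f.x was never initialized
    }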
Sure. If for some reason you only care about this with regards to struct types whose constructors correctly initialize all of their members, then you are good by default (sort of; there are weird edge cases here with globals). But "safe for this, not for that" is not especially compelling to me when it is hard to imagine a C++ codebase that doesn't use primitive/POD types somewhere.
Plus, there are STL types that don't guarantee initialization of members. So it isn't as if only ever resting on top of STL types is sufficient. std::optional has a non-default constructor, but it can happily blow up because of a read of uninitialized data.
And the fact that this is the silent default behavior in C++ is wild to me. Even in languages where it is possible to leave data in an uninitialized state, you should need to loudly signal it.
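The std::optional case, sketched minimally:

    #include <optional>

    int main() {
        std::optional<int> o;  // a constructor ran, but o is disengaged
        return *o;             // UB: operator* is unchecked and reads the
                               // uninitialized payload
    }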
i said "if a type has a constructor". but if you are fiddling around with ints and chars and the like, then you will need to initialise them. i have been writing c++ since the mid-80s and have never found this to be a problem, or at least no more than it would be to use an uninitialised int variable in C.
The fact that this is even a distinction anyone has to keep in mind is again an example of the problem. People are awful at remembering these weird little details. We shouldn't expect people to do things they're bad at, all the time, and to get them right. And I'll go a step further and say we shouldn't let them by default. They should have to make it super explicit that they want to opt into the weird behavior nobody expects.
You can do this in Rust if you want! You just have to use unsafe { mem::uninitialized() } or even better, the newer unsafe { mem::MaybeUninit::uninit() }
The whole point is that you probably don't ever want to have an uninitialized variable you can access willy-nilly, and if you do, you should be very explicit.
Even your statement about C (and C++) isn't quite accurate, right, because anything static is guaranteed to be initialized to zero on creation. So while `int x` is a free for all, `static int x` is gonna be zero. Another thing people shouldn't have to think about!
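A tiny illustration of that distinction:

    int g;                 // static storage duration: guaranteed zero-initialized

    int main() {
        int x;             // automatic storage: indeterminate; reading it is UB
        (void)x;           // discarded, not read
        static int y;      // static storage again: guaranteed zero
        return g + y;      // well-defined: returns 0
    }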
"In some cases you are safe in other cases you aren't" is not good enough for real safety.
Congrats if you never write bugs. For the rest of us, we will use the widely available data that demonstrates that these bugs recur over and over and over and over even in organizations with strong developers, strong testing culture, and where the use of static analyzers and fuzzing is widespread.
Yes, C also has these problems. Both C and C++ represent significant safety risks, despite powering some of our absolute most security critical applications (the linux kernel and our web browsers). Other languages don't give you these footguns.
They might know how to design a language such that it tends to produce exceptionally readable codebases, good performance, and improved safety, though. Maybe not all three at once, but some mix of those that’s better than existing languages, perhaps.
Looks like the author is very focused on maximally optimizing some very compute-heavy signal-processing code, and doesn’t care much what is used outside of the compute kernel.
So it may be very true that for their work it’s all the same whether they use C++, or Python with numeric libraries, or some niche DSL, or a macroassembler.
But use of C++ is much broader than pure compute, so this perspective does not generalize to C++ or Rust as a whole.
Little by little, domains where either C or C++ would have been the go-to option during the 1990s, with the exception of areas where VB and Object Pascal (Mac/PC variants) were enjoying adoption, have been slowly taken away from them.
Bjarne created C with Classes for his distributed systems research at Bell Labs; nowadays the majority of the CNCF project landscape is written in something else.
C++ GUI frameworks were all over the place during the 1990s; nowadays one has to go to third parties, as platform vendors rather focus on managed languages. Even C++/WinRT going into maintenance is an acknowledgement from Microsoft that, outside Redmond, XAML/C++ was a failure in regards to adoption.
Long term we might be left with LLVM, GCC, Unreal, Skia, CUDA, and a couple of other key projects that keep C++ relevant and that is about it.
Yeah if you define the utility of C++ so narrowly to these very highly micro-optimized use cases then Rust can easily capture 100x the mind share of C++ because programmers generally just don't need to care that much. And really seeing what advanced Rust programmers can do when armed with godbolt I'm not convinced even those use cases demand using C++.
> Do you know that in MSVC uint16_t(50000) + uin16_t(50000) == -1794967296?
There are two typos in this statement: "uin16_t" should have been "uint16_t", and the "+" should have been "*". Searching for 1794967296 yields the answer: both uint16_t operands are promoted to (signed) int before the arithmetic, and 50000 * 50000 overflows a 32-bit int.
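A minimal reproduction of the corrected claim (the specific wraparound value assumes 32-bit int with two's-complement wrapping, as MSVC produces; strictly speaking the signed overflow is undefined behavior):

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint16_t a = 50000, b = 50000;
        // Integral promotion turns both operands into (signed) int, so the
        // multiplication happens in int: 2'500'000'000 > INT_MAX, and MSVC's
        // wrapping behavior yields -1794967296.
        std::cout << a * b << '\n';
    }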
Take C++ vs. Rust, for instance. If I look at this from a business perspective:
- Rust compiler can likely help confirm reference correctness, so this is a plus.
- Rust language exchanges the cognitive load of memory allocation for tracking reference ownership, so this is a neutral. (In C++ anything that has access to the pointer can technically "own" it so there is no need to track function call traces for ownership, but lifetimes do need to be analyzed against the semantics of the program. Potato, po-tah-to.)
- The pool of Rust developers available to work on a project is much smaller.
If I'm writing software where memory safety is of high importance and I can afford to take the risk with a smaller developer pool, Rust makes sense. If I'm writing a game? It's going to make far more business sense to hire a bunch of C++ developers that can work with Unreal or C# developers to work with Unity or Python developers to work with Godot or whatever.
And for real mission-critical software, memory allocation isn't used, so in those cases Rust has zero advantage. (Real mission-critical software lays out a static memory map up front, both to avoid exactly the problems Rust is trying to avoid and to ensure real-time, predictable response times.)
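A rough sketch of that static-memory-map style (all names here are made up for illustration):

    #include <array>
    #include <cstdint>

    // All storage fixed at build time: no heap, so no allocation failure,
    // fragmentation, or use-after-free, and timing stays predictable.
    static std::array<std::uint8_t, 1024> rx_buffer;
    static std::array<float, 256>         sample_log;
    static std::size_t                    sample_count = 0;

    void record(float s) {
        if (sample_count < sample_log.size())
            sample_log[sample_count++] = s;  // bounded by construction
    }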
So we're left with niche software like OpenSSL or similar that can benefit from Rust. But it's more likely that we want a language that supports proofs of correctness (think Coq or seL4) in those cases and Rust can't even support that.
C++ itself seems to be working pretty hard, year after year, on this task of killing C++, with endless half-baked features that fail to address real problems.
they do address real problems - specifically those of people trying to write very efficient libraries. if you are not in that group, simply don't use those features.
Sadly, the C++ standard bodies are actually breaking old code in new versions of the language.
For example, C++11 introduced UTF-8 string literals. Great feature which does what you'd expect – declare a string literal in source code, get a const pointer to a null-terminated array of UTF-8 bytes.
Then a decade later, in C++20, they redefined these UTF-8 literals to evaluate to const pointers to the new, incompatible data type char8_t.
It’s so bad that compiler developers had to implement switches to disable the new BS. Unfortunately, these switches are incompatible across compilers, -fno-char8_t in gcc, /Zc:char8_t- in msvc.
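The breakage in miniature, guarded so the same snippet compiles under both standards (note MSVC reports __cplusplus accurately only with /Zc:__cplusplus):

    #if __cplusplus >= 202002L
    // C++20: u8"..." has type const char8_t[N], a distinct, incompatible type.
    const char8_t* s = u8"text";
    // const char* t = u8"text";  // error in C++20: no char8_t* -> char* conversion
    #else
    // C++11/14/17: u8"..." has plain type const char[N].
    const char* s = u8"text";
    #endif

    int main() { (void)s; }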
They address the problems of people that care enough to show at WG21 meetings, endure the multiple rewrites of their papers, and gather as many votes as they can.
As simple as that.
Many of the modern features aren't even exposed to the community as preview features before being voted into the standard.
And yet they can't update the standard library types to not be slow as hell because of ABI concerns.
If C++ was really all about speed then we'd have destructive move letting you pass unique_ptr in a register without compiler trickery. The fundamental number one constraint on new C++ behaviors is ABI compatibility. The rest is secondary.
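A sketch of the cost being alluded to; the register/memory detail is the commonly cited calling-convention behavior, not something the code itself can show:

    #include <memory>

    void sink(std::unique_ptr<int> p) { /* takes ownership */ }

    int main() {
        auto p = std::make_unique<int>(42);
        sink(std::move(p));
        // p is moved from, but not "destructively" moved: its (now empty)
        // destructor still runs at the end of this scope. And because
        // unique_ptr is not trivial for purposes of calls, the common ABIs
        // pass it in memory rather than a register, despite it being
        // pointer-sized.
    }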
The C++ standard absolutely cares deeply about ABI compatibility. The committee can't even update intmax_t because it is an ABI break. "Code compiled today and linked against code compiled years and years ago must behave correctly" is an absolutely premier design constraint when it comes to papers, and it has prevented oodles of low-hanging performance opportunities.
It's not in the standard itself, because the standard doesn't directly talk about ABI. But there was a vote within the committee not to break ABI, as you can see in the Prague 2020 meeting here: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/n48... (scroll down to page 11). And if you ever listen into any committee meetings or try to present papers, ABI breaks are very serious concerns that can torpedo proposals.
Not only that, many of the Carbon devs are former Clang contributors, and one reason that Clang is now in third place regarding ISO support is that other devs had to step into what was previously contributed by Apple (now busy with Swift) and Google (busy with Carbon/Rust).
Great article, more about different approaches to building software than about the languages themselves.
My two cents: it is improbable that C++ will ever be replaced, because it is a mastodon of a programming language: super complex, where complex covers both difficulty and sheer vastness. This doesn't mean that its usage will increase, since it is common nowadays to use specific programming languages (e.g. Python) for specific purposes and rely on C++ and others for very specific areas. Personally, I really liked SWIG [1] as a wrapper and interface generator. Don't know how much it is used today.
A different experience, but a similar conclusion to the article's.
C++ is kind of a “language of last resort” type thing where you use it if there isn’t another good option for what you want to do. E.g. most desktop development was done in C++, but now C# and newer languages have killed off C++ for most desktop apps. So C++ is still widely used in embedded/performance-critical use cases, but Rust will replace a lot of it over the long term.
New C++ projects are started every day. And billions of lines of C/C++ runs the world. So C/C++ will be around and used heavily for a very long time. Probably way beyond the retirement age of anybody reading this.
Good question. My personal take is that C++ makes many developers happy, comfortable, and you can develop in C++ using different approaches, that is enough. There is no programming language for all and I think this is why we have a lot of them now and even DSLs.
I consider myself far from what is considered professional C++ knowledge, but for cryptography I always relied on Crypto++ [1], and that made me decide on the use of C++, at least partially. I think we can apply the same thinking to JavaScript or TypeScript; personally I don't like those programming languages too much, but if I need to write a RESTful backend I go that route because they have good modules for that, or sometimes I decide on Python.
> I made a simple Lisp-style interpreter to help game designers automate resource loading, and went on a vacation. When I was back, they were writing the whole game scenes in this interpreter so we had to support it for a while.
> Just as Latin never actually died, just like COBOL, Algol 68, and Ada, – C++ is doomed to eternal half-existence between life and death.
I'm almost afraid to ask, but I'd love to know: are there any Algol 68 projects in active use? Anyone have war stories of maintaining them (bug fixes, new features, improvements to interface with more modern systems) in the last couple decades?
A company I used to work at had an Algol system running until 2005.
A rewrite had begun in the 1990s and went live in 2001, and the two systems were run side by side until then to make sure the new system was correct.
certainly not active, but back in the 1980s i got dumped on a program that used algol, bcpl, fortran and cobol to drive a phototypesetter to do ... something. this took advantage of the fact that the dec10 allowed you to mix & match languages, much like the vax. i suppose i was lucky the idiot that originally wrote it didn't throw lisp into the pot.
anyway, as i was hired as a microcomputer specialist, and as two dec10 systems programmers had ducked it, i simply said "no can do" - a valuable lesson.
this is what i like about HN - recalling things i had completely forgotten!
Of course I know that uint16_t+uint16_t gives an int. If you're gonna be programming in C or C++, you need to know these things.
And yes, I do agree the fact that I know so much minutiae about those languages is part of the reason why I'm reluctant to go for something else I have a more shallow knowledge of. But that level of attention to detail is also what allows me to deliver high-quality code that does exactly what I intend it to do.
> If you're gonna be programming in C or C++, you need to know these things.
You have to, not you need to. Of course I know plenty of them (so much so that I won the IOCCC a decade ago), but most of them are just pure hindrances and do not contribute anything positive.
> Improving the efficiency of algorithms for fundamental computations can have a widespread impact, as it can affect the overall speed of a large amount of computations. Matrix multiplication is one such primitive task, occurring in many systems—from neural networks to scientific computing routines. [...] We further showcase the flexibility of AlphaTensor through different use-cases: algorithms with state-of-the-art complexity for structured matrix multiplication and improved practical efficiency by optimizing matrix multiplication for runtime on specific hardware. Our results highlight AlphaTensor’s ability to accelerate the process of algorithmic discovery on a range of problems, and to optimize for different criteria.
And I'll emphasize this sentence, as it seems to be the main goal/criticism of the author in this post:
> improved practical efficiency by optimizing matrix multiplication for runtime on specific hardware.
> Do you know that in MSVC uint16_t(50000) + uin16_t(50000) == -1794967296?
Not only is that insane code and both my compiler and static analysis go for my throat, it's also not true. I think the argument is supposed to be about unsigned integer overflow, but the numbers don't add up. The big negative number wraps to 63744. And the sum (100K) wraps to 34464.
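Checking that arithmetic directly (conversion to a narrower unsigned type is well defined, modulo 2^16):

    #include <cstdint>
    #include <iostream>

    int main() {
        std::cout << std::uint16_t(-1794967296) << '\n';  // prints 63744
        std::cout << std::uint16_t(100000) << '\n';       // prints 34464
    }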
Numba is great but is missing explicit SIMD, stack-allocated arrays, and user-defined types, to name a few things. I try to keep my colleagues in Jax these days.
It's also a weird thing to bring up (Numba being great because it can jit-compile python to any arch, including GPUs) when the author discounted Julia... which has exactly the same property.
The difference is uptake. Julia's good and it's out there, but relative to the users of Python... How many people care how portable the Julia code they aren't writing is? The existence of a tool to jit-compile Python is more useful to a lot more engineers than the existence of another language that is nicely jit-compileable.
Right, except the author also mentions two obscure languages with very little uptake at all, so it can't simply be a popularity thing - they're not useful at all, by that limited metric.