Being new to C++, one of my biggest gripes is that I often feel unsure whether I'm doing things the right way, or whether I'm using an old method that has since been replaced by a new one. Especially given that I never took the time to properly learn the language, but mostly piece together SO answers...
Is there a C++17-exclusive compiler-flag? Or linters that inform you about outdated concepts? Some way to get rid of the backwards compatible cruft when writing new things.
My opinion is that there is no "right way" of doing C++, unlike, say, python.
You can write a kernel in C++, or high-level UI, or a game engine, and these will be very different; you could almost say each is a different language.
For example, to allocate an array you can use a vector, an array, new, or malloc. Most people will tell you to use a vector, but a vector uses a bit more memory than a plain array, so you might want an array instead; but maybe you don't want to link against the STL at all, so you go with new; but if you don't want to deal with exceptions, or just want a block of raw memory, you may want malloc instead.
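Roughly, the spectrum looks like this (a sketch, with sizes made up for illustration):

```cpp
#include <array>
#include <cstdlib>
#include <vector>

int main() {
    std::vector<int> v(1024);   // growable, owns its memory, STL-friendly
    std::array<int, 1024> a{};  // fixed size, no heap allocation at all

    int* p = new int[1024];     // manual heap allocation; you own the lifetime
    delete[] p;

    // Raw, uninitialized bytes from the C allocator; no STL, no exceptions:
    int* q = static_cast<int*>(std::malloc(1024 * sizeof(int)));
    std::free(q);
}
```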
Just use the new features if they make life easier for you; that's the idea. As for linters, I know that cppcheck can warn you about C-isms (e.g. C-style casts), but I don't know of any that target modern C++ variants.
> My opinion is that there is no "right way" of doing C++, unlike, say, python.
There are, however, definitely wrong ways of doing things in C++. There's quite a few things in the STL that should have long since been deprecated if not outright removed.
Simple example is std::map. Absolutely horrible data structure. std::unordered_map is what you actually want, but to a beginner nothing really tells you that. You just think "I need a map, oh hey there's a std::map, perfect!" and go about your day completely unaware of how awful a data structure you just chose.
Another example, a mistake made in C++ hello worlds of all things, is std::endl. std::endl does not mean "end line", it means "'\n' + flush" (specifically '\n' - it does not do '\n\r' or '\r\n' conversions at all, the underlying file system layer does that). And flushing on every new line only makes sense for line-buffered streams like std::cout, but a line-buffered std::cout already flushes on '\n'. So in practice std::endl just leads to a ton of unnecessary & unexpected flushes.
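To make it concrete:

```cpp
#include <iostream>

int main() {
    std::cout << "hello" << std::endl; // writes '\n' AND forces a flush
    std::cout << "hello" << '\n';      // writes '\n'; flushing is left to the stream
}
```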
> so you go with new, but if you don't want to deal with exceptions or just want a block of raw memory, you may want to go with malloc instead.
There's actually very little reason to ever use malloc. If you globally don't want exceptions, just use -fno-exceptions. If you don't want exceptions in the small scope you're working in, then use std::nothrow, e.g. "new (std::nothrow) int[1024 * 1024]".
As for why you'd do this over malloc, the reason is simple: it avoids overflow bugs. new int[count] never overflows silently. malloc(count * sizeof(int)), well, that overflows trivially, and that overflow results in security bugs.
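A rough sketch of the difference (`count` here is a stand-in for any externally influenced size):

```cpp
#include <cstdlib>
#include <new>

void demo(std::size_t count) {
    // If count > SIZE_MAX / sizeof(int), this multiplication wraps and
    // malloc happily returns a buffer far smaller than the caller expects:
    int* a = static_cast<int*>(std::malloc(count * sizeof(int)));

    // new[] does the same size computation internally, but checks it:
    // on overflow it throws std::bad_array_new_length, or with
    // std::nothrow returns nullptr, instead of under-allocating.
    int* b = new (std::nothrow) int[count];

    std::free(a);
    delete[] b;
}
```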
Raw malloc would arguably be another case of "this is just wrong C++" rather than "this is a perfectly valid alternative". Raw free would belong there too, except that calloc is still very useful (and avoids the common overflow bug that malloc has). malloc's only other saving grace is realloc, but at that point you're pretty deep into a specific edge case.
Another reason to prefer `new`/`delete` over `[std::]malloc`/`[std::]free` (and `std::vector` over `[std::]realloc`) is that `malloc`/`realloc`/`free` don't call constructors and destructors, which means using them with non-POD types is UB (more concretely, you'll get bugs if the constructors or destructors manage some resource like a mutex, heap allocation or file handle).
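A minimal illustration of why, with std::mutex standing in for any managed resource:

```cpp
#include <cstdlib>
#include <mutex>

struct Guarded {
    std::mutex m; // constructor/destructor manage an OS resource
};

int main() {
    // UB: malloc hands back raw bytes; no Guarded object was ever
    // constructed, so the mutex is garbage:
    // Guarded* bad = static_cast<Guarded*>(std::malloc(sizeof(Guarded)));
    // bad->m.lock();

    Guarded* good = new Guarded; // constructor runs
    good->m.lock();
    good->m.unlock();
    delete good;                 // destructor runs
}
```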
As for outdated stuff in the standard library, that's something that affects every language as it ages. `map`/`set` aren't, IMO, the worst offenders; they were misleadingly-named (like `std::endl` and `std::move`), but RB-trees are a useful data structure, and while they are slower than hash tables (`unordered_map`/`unordered_set`) for the typical map/set use case, it's not like they're buggy.
The one that IMO most cries out for replacement is `[std::]rand` (inherited from the C standard library), whose RNG is abysmally low quality/predictable, very slow, and thread-unsafe. C++11 did introduce a new RNG API[1], but it's quite cumbersome and verbose to use, when they really need to introduce a simple API like BSD/Mac `arc4random_uniform`[2] so people will stop using `rand`.
Edit: I see that this is being addressed, although it didn't make it into C++17.[3]
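For comparison, here's roughly what a simple "roll a die" helper looks like with the C++11 API. It's correct and high quality, but a lot of ceremony next to arc4random_uniform(6) + 1:

```cpp
#include <random>

int roll_die() {
    // thread_local also fixes rand()'s thread-safety problem:
    static thread_local std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<int> dist(1, 6);
    return dist(gen);
}
```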
While not buggy, iostreams are also getting a bit long in the tooth and could stand to be replaced with a more modern API, the iterator APIs (in the <algorithm> and <numeric> headers, etc) should also be methods on STL types, and `std::string` isn't quite deprecation-worthy, but it would be nice if the STL were augmented with a state-of-the-art encoding-aware string implementation like Swift's. And it would be nice if the STL acknowledged the existence of the internet (with high-level libcurl-like functionality in addition to the very low-level Networking TS).
std::map will often have better cache locality, especially if you're using a custom allocator or have a small data set.
Also not sure what you mean when you say operator new never overflows. Can you elaborate?
It would be nice if C++ gave you an operator new/allocator that put the object right at the end of a page so that the hardware could catch any overflows. Sort of like how the atomic operations are cross-platform but hardware-specific.
If mySize is computed at runtime, that computation should most likely be checking for overflow. If it is a constant, declare it constexpr and get a compile-time warning for the overflow.
In any case, it's the multiplication that has a bug, not malloc (though I agree that the usage you mention is unfortunately common).
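Something along these lines (checked_alloc is a made-up name). One caveat on the constexpr route: it only catches signed overflow, because unsigned arithmetic (including size_t) wraps silently even in a constant expression:

```cpp
#include <cstdint>
#include <cstdlib>

// Runtime check for the multiplication:
void* checked_alloc(std::size_t n, std::size_t elem_size) {
    if (elem_size != 0 && n > SIZE_MAX / elem_size)
        return nullptr;               // the multiplication would wrap: refuse
    return std::malloc(n * elem_size);
}

// Compile-time version: signed overflow is ill-formed in a constant
// expression, so the compiler rejects it outright:
constexpr long long elems = 4'000'000'000'000'000'000LL;
// constexpr long long bytes = elems * 4; // error: overflow in constant expression
```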
Yes it's the multiplication that has the bug, but malloc makes that error common as you said. An API whose usage commonly has trivial security bugs is not a good API. new just wholesale avoids that common category of bugs, so why would you avoid it in favor of malloc? They do the same thing, one just does it safer.
Boost, Loki, and I'm sure others have a "flat" map, which is just an array-backed ordered map. Slower insertions than the red-black map, but because of the cache locality it's much faster overall.
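For anyone curious, the core idea as a minimal sketch (Boost's real implementation is boost::container::flat_map):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Sorted key/value pairs in one contiguous allocation, binary-searched:
using Entry = std::pair<int, int>;

int* find(std::vector<Entry>& m, int key) {
    auto it = std::lower_bound(m.begin(), m.end(), key,
        [](const Entry& e, int k) { return e.first < k; });
    return (it != m.end() && it->first == key) ? &it->second : nullptr;
}

void insert(std::vector<Entry>& m, int key, int value) {
    auto it = std::lower_bound(m.begin(), m.end(), key,
        [](const Entry& e, int k) { return e.first < k; });
    if (it != m.end() && it->first == key) it->second = value;
    else m.insert(it, {key, value}); // O(n) shift, but contiguous memory
}
```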
Actually, you should see the pain you get when dictionaries are unordered by default, which is the case in Python. You get awesome effects where the iteration order of the unordered dictionary depends on the random hash seed, and where library implementors serialize a dict into a sequential structure without sorting by key. Running the same code on the same data twice can get you different results.

Now here comes the best part: most of the time in Python you don't see the difference, the output is exactly the same, but the order is nevertheless not guaranteed.

So I believe an ordered-by-default data store would help the ecosystem. Calling it unordered explicitly helps, because it makes sure that anybody serializing the structure has to make a conscious choice about how to sort it.
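In C++ terms, the explicit choice looks something like this (a sketch, sorting by key before output):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Sort the keys before serializing so two runs produce identical output:
void dump_sorted(const std::unordered_map<std::string, int>& m) {
    std::vector<std::string> keys;
    keys.reserve(m.size());
    for (const auto& kv : m) keys.push_back(kv.first);
    std::sort(keys.begin(), keys.end());
    for (const auto& k : keys)
        std::cout << k << '=' << m.at(k) << '\n';
}
```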
A beginner may use std::map without reading even the first sentence of its documentation and get a red-black tree when they thought they were getting a hash table. The program will work correctly, but with an extra O(log n) factor in complexity that they didn't expect. That's not a problem worth breaking all the programs that correctly use std::map.
> That's not a problem worth breaking all the programs that correctly use std::map.
I challenge you to find a single correct use of std::map to begin with.
Is this severe enough to be worth breaking programs? No. But was it a mistake? Yes, absolutely. And deprecation is how mistakes are fixed over time, so why not fix the mistake? Old things can stay on their old STL and live with the old std::map. Maintained things will get warnings that they can go address in their own time. And new things are guided to the proper choice.
> I challenge you to find a single correct use of std::map to begin with.
Before C++11, you only had std::map, so every use was correct. After C++11, you may still care about iterator invalidation or actually maintaining ordering. These properties are too useful to be deprecated.
As for beginners, their priority should be learning the core language and making a habit of browsing the std documentation to see what's there. Then they'll stumble upon unordered map too.
EDIT: as for cache coherency, the std requirements effectively mandate separate chaining. If you really care about cache, you'll have to implement open addressing yourself.
> Before C++11, you only had std::map, so every use was correct. After C++11, you may still care about iterator invalidation or actually maintaining ordering. These properties are too useful to be deprecated.
There's no shortage of third-party map implementations. Nothing forced you to use std::map pre-C++11, so it still wasn't correct to use std::map prior to C++11; it was just easier.
If you need ordered access a hashmap + key list is a superior implementation to red-black trees, so std::map still loses there.
The only thing std::map does have is that iterators stay valid during inserts, but this doesn't really seem useful, and indeed it's something most maps in most languages don't have. Heck, Java's red-black java.util.TreeMap doesn't even bother to keep iterators valid during inserts even though it could. How is this a useful property in practice, much less one "too useful to be deprecated"? Do you have any examples?
If we're just complaining about performance and third-party libraries are an in-scope solution, I don't see why anyone would settle for std::unordered_map. It trails on every hashmap benchmark. Chandler Carruth blames it on the C++ standard, which mandates chaining for collision handling.
I mean, I wouldn't complain if with C++23 we introduce std2 and make things better. While we're at it, we could also try to fix std::list and std::deque which are nearly useless too.
> If you need ordered access a hashmap + key list is a superior implementation to red-black trees, so std::map still loses there.
Superior in what way?
> Do you have any examples?
It's a couple of years since, but I remember having relied on both ordering and iterator stability in certain algorithms related to topological meshes. Basically, I'd traverse a map and delete some elements during traversal.
Could I have done it in some way with a hash table? Yes. Would it have been more complicated? Yes.
Superior in performance, and you can pick if you want to optimize for iteration (put the keys in a sorted vector) or insertion (linked list). Either way you'll trivially beat traversing, inserting, and removing elements from a red-black tree in runtime.
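A rough sketch of the idea, optimizing for ordered iteration (names made up):

```cpp
#include <algorithm>
#include <string>
#include <unordered_map>
#include <vector>

// O(1) average lookups from the hash table, ordered iteration from a
// separately maintained sorted key vector:
struct OrderedMap {
    std::unordered_map<std::string, int> values;
    std::vector<std::string> sorted_keys;

    void insert(const std::string& k, int v) {
        if (values.emplace(k, v).second) { // new key: splice into the order
            sorted_keys.insert(
                std::lower_bound(sorted_keys.begin(), sorted_keys.end(), k), k);
        } else {
            values[k] = v;                 // existing key: just overwrite
        }
    }
};
```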
> Basically, I'd traverse a map and delete some elements during traversal.
That works fine with unordered_map, too. You can keep the iterator when you remove. It's only inserts where map's iterators stay valid but unordered_map's don't.
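For reference, the erase-while-iterating idiom; since C++11, erase returns the iterator to the next element:

```cpp
#include <unordered_map>

void prune(std::unordered_map<int, int>& m) {
    for (auto it = m.begin(); it != m.end(); ) {
        if (it->second < 0)
            it = m.erase(it); // erase hands back the next valid iterator
        else
            ++it;
    }
}
```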
I was quite invested in C++ 1998 back in the day, but didn't keep in touch with the language much after 2003.
I am now back into high-performance scientific computing, and I am glad to see C++ has evolved much, in pretty good ways. However, is it worth getting back into C++ when things like Julia and Rust are around? Or are these still too immature?
For me, as a native companion to Java and .NET stacks, Rust still cannot take C++'s place regarding mixed-debugging experience, IDE integration, GUI designer support, distribution of binary libraries, and, most importantly, making teammates or customers (especially their IT) accept yet another tool into the projects.
For other people Rust is already mature enough and they are delivering production code with it.
In any case, until Julia and Rust stop using LLVM as their backend, being comfortable with C++ is a good idea.
My use case is rapid development of new statistical methods.
Julia seems ideal in some ways, as performance can be quite close to C / C++ / Fortran, while the code is more high-level. I don't care about GUIs, distribution, teamwork or maintenance.
Julia seems promising, and there are some nice usage stories already.
I am quite proficient with JVM-based stacks (Java, Scala & Clojure). But the level of performance, especially in terms of memory usage, cannot come anywhere close to C++.
> My opinion is that there is no "right way" of doing C++, unlike, say, python.
As a longtime hater of C++, I do want to point out that this is one of my big issues with it. Every time I mention my various gripes with C++, I get the same response from the apologists: "Well, if you just do C++ the right way, that's not an issue"
Fine! Great! Then just tell me what the right way is! But no one can (or if they do, then someone else comes along and immediately tells them they're wrong). If there were some agreed-on idiomatic subset of C++ that avoided all the pitfalls, or even just a book that everyone agreed was "good", that would go a long way toward warming me on the language. But so far as I can tell, no such thing exists.
I'm just a hobbyist, but "new" and especially "malloc" are not the "right way" of doing C++. Use of raw arrays isn't idiomatic either. Instead you should always aim for vector and the smart-pointer "factory" functions make_shared or make_unique.
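E.g., roughly:

```cpp
#include <memory>
#include <vector>

int main() {
    std::vector<int> v(1024);                          // instead of new int[1024]
    auto p = std::make_unique<int[]>(1024);            // if you truly need an owning pointer
    auto s = std::make_shared<std::vector<int>>(1024); // shared ownership
}   // all three freed automatically; no delete anywhere
```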
I'd pretty strongly disagree with the notion that stack or static-allocated arrays (especially `std::array`) aren't idiomatic C++; rather it's (unnecessarily) allocating everything on the heap that's the anti-pattern, since it defeats the whole purpose (fast, fine-grained, deterministic memory model) of using C++ in the first place. (And implies lots of memory leaks and global state).
Also, `new`/`delete` aren't unidiomatic when strictly adhering to RAII and
- Writing exception-safe code (with the `new (std::nothrow) T` form).
- Writing custom smart pointers or data structures, or high-performance containers (especially with C++17 aligned `new`/`delete` and C++14 sized `delete`).
- Using memory pools (with placement `new`/`delete`).
`[std::]malloc`/`[std::]free` aren't necessarily unidiomatic either, such as
- When using a C API (or one in pre-standard C++ or a third language) which is incompatible with `new`/`delete`.
- When you need `[std::]realloc`, and for whatever reason a `std::vector` isn't an acceptable substitute.
- In embedded or other environments that don't provide a STL.
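To make the memory-pool bullet above concrete, a minimal placement-new sketch (std::byte is C++17):

```cpp
#include <cstddef>
#include <new>

struct Widget { int x = 42; };

int main() {
    alignas(Widget) std::byte storage[sizeof(Widget)]; // pretend this came from a pool
    Widget* w = new (storage) Widget;                  // placement new: construct, no allocation
    w->~Widget();                                      // manual destructor call; no delete
}
```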
I think what he's saying is that using `std::vector` is better than using pointers and `malloc` from the point of view of doing simple things. (Simple is not the right term, but I can't think of a better word.) I mean, 80% of the time.
With that line of argumentation, why not use sbrk() or poke the syscall directly? :)
I'd say prefer the safest, highest-level of abstraction that you can get away with. Which would generally mean std::vector over new/malloc.
I have no problem with the use of std::vector (in 90% of cases it is all you need), my problem is with the new trend of shunning any use of new/delete or naked pointers in modern C++.
Sure, dogma is not very useful. But when people ask "what should I use" saying "well, there are valid cases for everything" (like someone else in the thread basically did) does not help much either.
Sure but they are implemented by C++ specialists that work on this code all the time. I'd trust them to use malloc more correctly and bug free than I'd ever trust myself to do. As long as we're not talking embedded here I'd rather rely on work already done by experts than reinvent the wheel, perhaps poorly.
Umm, no. A vector allows you to dynamically change the length of an array. If you need it, then you pretty much need a vector (or some other kind of smart container). Otherwise, there's no reason to prefer a vector to C-style array.
Also, shared_ptrs make the ownership relation harder to understand. (There's also an additional synchronization cost for copying the pointers.) Of course there are situations where you need them, but in general I think it's better to avoid them.
I don't think anyone could answer that question, except with anecdotes. Personally, I think there is plenty of room for disagreement about code "goodness" in Python, C++, or any other general purpose language.
For those that aren't aware: some of the input from the Microsoft side was actually based on the work done with Midori and applying System C# ideas to C++.
VC++ with the code checkers and clang with clang-tidy.
Not sure about the current state of gcc or other commercial C++ compilers.
Don't worry about not taking the time to properly learn C++. It is a huge, huge language, and trying to learn it all at once from first principles will take you forever. Just try to do things in the most memory-safe way possible.
I don't use C++ anymore - but Scott Meyers' books on C++ were the best investments I could have made back in the early 2000's. He focused on some real pain/productivity points of the language and standard library that made a huge difference in my approach to the problems I was working on.
Effective Modern C++ is a great book, but it is in no way an intro to anything. It keeps with the tradition of exposing the warts in C++ that you should know about, but the issues covered in the newest one are very complex compared to the previous books, and they're not at all something that someone just learning the language, or even just catching up on the newest standards, should start with.
If you want to catch up on C++11 and parts of C++14 I recommend Stroustrup's A Tour of C++.
I'd probably read the other Effective C++ books (even though they are dated now) before cracking open Effective Modern C++.
>Is there a C++17-exclusive compiler-flag? Or linters that inform you about outdated concepts? Some way to get rid of the backwards compatible cruft when writing new things.
clang-tidy might be the closest thing you're looking for:
We started our current codebase with a blank Emacs buffer back in February and went straight to (draft) C++17. Everything is compiled with both gcc and clang. Flags like -Wall, -Wpedantic, -Weverything, -Wno-c++98-compat, -Wno-c++98-compat-pedantic, and -Wno-c++11-extensions are your friends.
But apart from that, languages have “cultural dialects”. So for example I essentially always loop with for(;;) and so a while (or do while!) looks alien to me. Likewise, a destructuring bind is not the only way to unpack a loop variable, so no compiler would mandate its use.
It's a large systems programming language, so it will take a while to learn a lot of the nuances... but you can get a ton of great code written without learning that stuff.
That's an example of what I meant by "cultural dialect": I learned C in the context of the Unix sources so for(;;) was the canonical way to make an infinite loop. People who came from other "cultures" (e.g. Pascal, like many Mac programmers) used loop constructs common to them.
And it's not like I don't use 'while' -- in a bash script it looks normal to me -- it's just I would never even think to use one in C or C++. And when I read it it looks weird to me.
I realized two other reasons (I can't edit my comment any more):
- 'for' is simply my only looping construct. Now that C++ supports 'for (auto& x : some_container) x.blahblah();' this is especially true.
- the old 'for(init; test; increment)' syntax puts all your control right up front, so as you read the code you already have in mind the range of when the body will run. A 'while(not_exit) { ... }' doesn't tell you that you're, say, striding over the even elements or whatnot -- it's really just the equivalent of an open loop with an 'if(...) break;' somewhere in the body. And let's not get into the botch that is 'do..while'.
For an infinite loop, some people prefer it because the lack of a condition means it looks simpler than while(true), and therefore signals its meaning more effectively.
I'd be very surprised if there was any difference in terms of the generated code. Personally I use while(true).
There was a very interesting little side note in John Regehr's recent CppCon talk[1] on undefined behavior: Apparently, C11 added language that forbids compilers from eliding side-effect free infinite loops if they're of the form `while (<constant expression>)` but not other loop forms. C++ has not yet adopted this language, but it might in the future.
A strange choice by the committee I'd be interested in reading more about.
Not a compilers guy, so when you say "eliding loops", do you refer to optimizing out loops that the compiler assumes are side-effect free?
Coming from an embedded background, the form "while (true) { // do nothing of interest }" is seen a lot in embedded code. If your code is interrupt driven, or running certain RTOSes, you would usually see this at the end of main(), since everything happens in interrupts. You might also see this in error handlers, and "unimplemented" interrupt handlers during development, as a way to trap the program without halting / resetting the system completely. If this kind of loop was optimized out / you returned-from-main on your embedded system, interesting and unintended things may happen.
While I've never had problems with these particular loops being optimized out (that I can remember!), I do remember having to do other very silly hacks to try and tell the compiler not to optimize things out, because something was happening in hardware that it wasn't aware of. So I wonder if this is just a practical way of making sure that this kind of code keeps working with newer compilers, even in the face of aggressive optimization.
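FWIW, a common way to keep such a trap loop from being optimized away is to give it a side effect the optimizer can't see through, e.g. with an empty inline-asm statement (GCC/Clang syntax):

```cpp
[[noreturn]] void trap() {
    for (;;) {
        // An empty asm statement is an opaque side effect the optimizer
        // must preserve, so the loop can't be elided:
        asm volatile("");
    }
}
```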
Yes, the C and C++ standards allow compilers to eliminate loops which have no side effects (It's actually slightly more tricky than this but whatever, check the spec for the exact details). This can be tough in situations like you say, but also in other places: for example, Rust's semantics do not allow you to elide them, but because we use LLVM, we incidentally inherited this behavior: https://github.com/rust-lang/rust/issues/28728
for(;;) allows you to put the loop variable initialization, exit condition, and increment statement on the same line, so you easily get a clear picture of all three.
I don't know why this got downvoted; true, I was referring to the infinite loop case, but this is a legit (if not C++-specific) case of a benefit of for over while.
Likewise do...while is in retrospect, a mistake, because it documents the control strategy in a bad place from a humans-understanding-code point of view.
>I don't know why this got downvoted; true, I was referring to the infinite loop case, but this is a legit (if not C++-specific) case of a benefit of for over while.
Thanks gumby.
I simply didn't catch that they were referring only to infinite loops. I though they were debating the usefulness of while versus for, in general.
This is actually really troublesome. I coded a lot of C++ about 15 years ago, so initially I tend to write code that's old-style. Googled samples are all over the place in terms of which version they're using. It would be nice if someone built a linter around this.
I guess if you set the compiler's standard flag to C++14 or C++11, etc. and it compiles without warning or error, then you are not using any C++17 features. How exactly this would be useful remains to be seen!
Not directly related, but a question that popped up while reading this:
As a C++ programmer I'm quite happy to see the language keep improving in quite a good pace, but I'm wondering if anyone can shine some light on how this evolution compares with other widely used languages.
Of course newer languages can change drastically quite fast (Swift comes to mind), but I don't follow more accomplished languages (PHP, Java, JavaScript, Python, Even C, ..., sorry if I didn't name your language) close enough to get an idea.
That's quite a list of languages. Here's my opinion on a few that I know:
My impression of C++ is that it evolves fairly quickly, but also includes everything, the kitchen sink, and the highway, making for an expansive and complex language. Scala gets some of the same flak.
Java evolves glacially, and outside of very rare large improvements (e.g. lambdas in 8) it's still the same (verbose) language and will remain so for a long time.
PHP evolves fairly quickly, but the APIs aren't consistent, leading to a bit of a "flavor of the week" feeling for new additions. I don't know how this has been since PHP 7+.
Javascript is evolving at a rapid pace, but despite that they seem to move toward a clear goal, with nice functional APIs and constructs being added to the language.
PHP 7+ hasn't added that much in terms of new APIs — (e.g. libsodium in 7.2, and it's typical function(target, arguments) stuff). It added strict typing and a very significant bunch of ergonomic/performance changes.
I'd say half the reason to call it a quick evolution is how fast it's being adopted compared with the glacial progress from 5.2 to 5.6, or Python 2 to 3.
Also a third half of the freshness has been adoption of packaging and coding standards.
C++ is old. It inherits most of C, which is also old. No other language has had to maintain backwards compatibility for roughly 40 years, through generations of programming paradigms, while also trying to stay relevant. Modern C++ is a ton of new functionality mostly because it's adding another paradigm to the language: since C++11 it has gotten much more functional.
Just look at how Python ended up if you want to see what happens when you try to backwards-break a language. And C++ is oh so much worse at that - if you make a non-compatible language iteration it would also (probably) break the ABI, which means "new" C++ cannot link old C++. It would make two languages. And nowadays, you really only have one of three reasons to write C++:
* You already have a C++ project.
* You need to use a C++ library / supporting code / integrate with C++ directly (and not through a C API).
* You want a feature only a C++ construct / library can provide. Qt is a great example of this, because there really is nothing comparable to it on any other platform in terms of a cross platform native UI toolkit, and using bindings can work but introduces a lot of build system complexities.
So C++ has all the reasons to adopt new paradigms to make life easier for those who have to use C++, but breaking backwards compatibility defeats all the core reasons you want to use the language. A prettier language is useless if you cannot get real work done in it.
Hopefully Rust will eat up the real-time domain, at least for tasks which aren't best served with formal or model-based methods.
It might already be usable, most of the essential pieces seem to be in place. No GC, many parts of stdlib, typed arena for memory pools etc. Move operations and immutable by default should reduce tendencies for defensive copying.
But best-practices, documentation and tooling for real-time is not on par with C++ yet. Though real-time is an underdocumented art in general..
Faster[1] than Java (glacial, albeit with warmer weather recently) or C (continental drift), slower than Swift, PHP (which truly has become a "kitchen sink" language equal to C++'s reputation for the same) or Python, about the same as Javascript.[2]
[1] By which I mean the languages themselves and minimally-conforming standard libraries rather than ecosystems or quasi-standard libraries like J2SE (Java), Cocoa (Swift) or the browser DOM (Javascript). C++ has a fairly slow-moving (but mature) ecosystem and its "quasi-standard" libraries (Boost, common compiler extensions, Windows and POSIX APIs) aren't as tightly-coupled to the language as the previous three examples.
[2] Both hindered by standards-body bureaucracy and the need to keep backwards compatibility with mistakes, but both popular enough that bigcorps are incentivized to keep things moving along; Javascript is more popular but that's offset by C++'s more efficient de facto process for standards-tracking library improvements (get them into Boost first).
Javascript actually probably moves a bit faster but Internet Explorer compatibility (still likely to be relevant for another 5 years or so) is an even bigger drag on adopting bleeding-edge features than waiting for (your company adopting) new Visual Studio/CentOS GCC/[Some embedded architecture] GCC versions is for C++.
Safari (specifically iOS Safari) is definitely a bigger drag on newer features than IE at the moment. IE is a known quantity and Edge has particularly good support. Safari, however, seems to be a case of Apple, ostensibly, making completely arbitrary decisions on what to include or not.
C++98 (1998) was the first C++ standard. There was a minor update in 2003 to fix the most obvious defects. The next standard was C++11. If it felt stuck in the 2000s, it's because it was stuck in the 2000s. And it wasn't just Windows/Microsoft: to this day, gcc 4 (-std=c++0x if you want some C++11 features) is the standard compiler at many Linux sites.
The reason that the standard stayed unchanged for so long after 2003 is that the committee had a misunderstanding about some ISO rule and thought they couldn't release new versions for some period of time. I can't remember the details but Herb Sutter has mentioned it during conferences.
Sorry but I am going to complain again about modules. I mean I'm inches from making a petition so that compiler vendors can agree on something and move on. This would make the language much more attractive for new users, reduce compile time, etc.
What good does a petition do for this. If you want them so badly, why not use them, and give feedback to the vendors about what approach they should standardise on?
I do really love how C++ is evolving. The language is fighting and adapting, and I think in a very smart way. This old dog can learn new tricks. I also love how often it makes me feel stupid at the incredible knowledge of its maintainers and implementers.
Wow, just thinking about C++ while slacking, was enough to inspire a solution to a race condition in my Clojurescript project.
Earlier, I made some notes and walked away from the problem, because it was getting frustrating. Then, while I was checking this out, I started thinking "Imagine how complex this would have been if I had done it in C++" Then the immediately following thought was "Well, in C++, I would instead build it like so, because it needs to be thread safe." Problem solved, obvious-in-hindsight solution.
Even though my problem wasn't caused by a low level threading mistake, thinking about the problem in the context of a more primitive tool helped me to better understand how my own tools are working.
Stroustrup said he was disappointed with C++17, as it didn't introduce anything massive. You can see him mentioning this widely and openly at CppCon last year.
Maybe there weren't any big things in C++17, but I do think they added smaller features that have a big impact, in ways such as letting us begin to replace our parameters (where appropriate) with `std::string_view` and things of that nature. Moreover, there are many libraries and packages that could see a lot of benefit from a sort of "C++17 refactor".
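For instance, a `std::string_view` parameter accepts string literals, char*, and std::string alike, without allocating or copying (count_spaces is just a toy example):

```cpp
#include <cstddef>
#include <string>
#include <string_view>

std::size_t count_spaces(std::string_view s) {
    std::size_t n = 0;
    for (char c : s)
        if (c == ' ') ++n;
    return n;
}
// count_spaces("a b c");            // literal: no temporary std::string built
// count_spaces(std::string("a b")); // string: views the buffer, no copy
```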
Modules (in the C++ terminology) and a package manager like npm are two different beasts.
Modules are not part of C++17, although Clang and MSVC both bake their own bread with implementations of the current modules draft.
There are still issues being worked out. They are solvable, but there is a challenge in getting consensus on a particular set of decisions about how to implement modules.