Since the questions neither specified a compiler/platform nor offered Undefined or Unspecified Behavior as an option, I assumed this was an informal quiz and the author had just tested the programs on their specific compiler.
Turns out the author does mean according to the standard, but thinks that “I don’t know” is a synonym for both undefined and unspecified behavior. It seems weird to use imprecise terminology in a post that’s all about lecturing about standards compliance.
I'm really surprised at how many people are annoyed by this quiz. It doesn't specify a compiler/platform, so all the definite answers are clearly wrong.
I think most C programmers know that the size of ints is implementation defined, and I've programmed on platforms where it was 2 bytes instead of 4. But when someone promises a brain teaser, and then asks an unclear question, you read that and go "What the author actually means can't be that dumb, can it?" It's annoying when a charitable reading of an unclear statement leads to an aggressive "gotcha".
On a TI DSP I used this millennium, sizeof(char) == sizeof(short) == sizeof(int) == sizeof(float) == sizeof(double) == sizeof(void *) == 1. Every type was 32 bits wide. Each memory address pointed to a unique 32 bits, i.e. (int *)0 and (int *)1 did not overlap on that system. Since sizeof measures addressable units ("bytes" in the standard's sense), not octets, they were all size 1.
Even on more standard systems, int is 4 bytes on systems using an ILP32 or LP64 convention, but ILP64 is a thing too, making int 8 bytes.
Nope. On 8-bit platforms int is 16 bits wide, and on some 16-bit platforms too. And this is not just history: embedded toolchains are prepared to handle code as if it were platform-independent. The C99 uint32_t type is defined as 'unsigned int' on most sane platforms (not sure about ILP64; maybe it's unsigned short, but then what's uint16_t?). But in the arm-none-eabi toolchain it's defined as 'unsigned long', because it is assumed that the same code is built on both 8-bit and 32-bit platforms, and only 'long' guarantees the 32-bit range. And to avoid format-string warnings, printf format strings should use the PRI*32 macros like PRIu32 instead of the raw %u / %lu.
The crazy one is `long`, which is 32 bits on some platforms and 64 bits on other platforms.
I solved that problem by never using `long`, opting instead for `int` for 32 bits and `long long` for 64. `long` should be deprecated on 32- and 64-bit platforms; it's not fixable.
In D, we use `int` for 32 bits, `long` for 64 bits, and `size_t` for a pointer index. All the craziness just melts away. You can port the code back and forth between 32 and 64 bit platforms, and it just works. All those `int32_t` types are out back in the bin along with the whiteout.
I suspect the impetus for suffixing the number of bits is a bit of a backlash from C, where you never know how many bits are in a type. That has caused C programmers a lot of trouble and extra work.
People do tend to get annoyed when you don't communicate something, and then use it as a gotcha. The author was aware of the audience's assumptions, but instead of communicating with those assumptions in mind, they communicated with different assumptions and then criticized their audience for not understanding the questions (i.e., "You didn't know C after all!").
The quiz is exactly right: yes, the answers are all "I don't know", because the behavior is either implementation-defined or undefined for each of them, and a couple of these reflect mistakes that I have had to point out in code reviews in recent years: expressions with multiple side effects and no intervening sequence point, and shifting an N-bit integral type by N bits (yes, that is undefined, because some processor instruction sets mess up that case). C and C++ programmers need to be taught that they must not write that.
> C and C++ programmers need to be taught that they must not write that.
Things stabilized quite a while ago. 99% of people who write C or C++ in 2024 will never use any computers where sizeof(int) != 4, or big-endian processors, or systems where floats don't conform to the IEEE-754 standard.
Why shouldn’t they write code which requires these particular details from their compiler and the target processor?
I'd say there's no issue with writing such code, so long as it's protected by a compile-time assertion.
That said, if you use a variable-width integer type (`char`, `short`, `int`, `long`, `long long`) instead of a fixed-width type (`int32_t` & such) for anything other than passing parameters to existing libraries (including the standard library) I'd say you're Doing It Wrong. If you actually intend to have a variable-width, use one of the `_least` or `_fast` types to make it clear that you didn't just screw up.
One thing I think C++ (or at least, most C++ users) gets right is suggesting that you should use 'auto' if you don't really care that much about bit width, and the specified sizes otherwise.
Thankfully C23 takes this approach, although it'll probably take forever until it's as widely adopted as C99 is now, and even that's still not widespread enough.
Such an explicit option might have given the game away. I haven’t programmed in C much but my thought would be, “wait is the gimmick here that everything is UB or implementation defined?”