
>Of course they had to take a stand for Biden, the only other choice was a lunatic. Love him or hate him, one thing Trump is, objectively, is a bullshitter (in the academic sense of the word). And scientists really just don't like bullshit.

There was an obvious third option: not endorsing anyone. There's no law that requires every publication to endorse a presidential candidate. In fact, most of them don't do that.

>If you get the feeling that Nature is politically biased (and I mean this in the usual everyday person sense, because of course any politics that affects science will be met with strong views), I think that should serve as a signal to check what your biases are.

I'm sorry, but this feels like gaslighting. GP has listed numerous examples of editorials that were biased in favor of a certain political platform, including an explicit endorsement of a presidential candidate. I really don't understand how, in spite of that, you could arrive at the conclusion they aren't politically biased.

>Remember that the people reading this are all the top experts in their own fields, so you can bet they'd love to write back and argue if some editor wrote something stupid.

Only if they don't mind committing career suicide.


> Only if they don't mind committing career suicide.

Disagreeing with other scientists is not going to lead to career suicide. It's pretty much the norm.

Which is why, when a scientific consensus forms, it tends to be mainly the crazy ones doing bad science who go against the grain. And oftentimes their careers are doing just fine, because in science we really value academic freedom.

The public has a very distorted view of this, mainly informed by bad priors and odd examples.


>Thanks to them being yet another attack vector and funny stuff like on this post, got demoted to optional on C11.

Sadly, the C committee doesn't really understand what was wrong with VLAs and a sizable group of its members wants to make them mandatory again:

https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2921.pdf ("Does WG14 want to make VLAs fully mandatory in C23")


What's wrong with VLAs is their syntax. They really shouldn't use the same syntax as regular C arrays; with a distinct, scary enough keyword they would be fine. They are also more general than alloca: an alloca allocation lives until the function returns, while a VLA is scoped to the innermost block that contains it.
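The scoping difference shows up in a loop. Below is a minimal sketch (function name and sizes are made up for illustration): the VLA's storage is reclaimed at the end of every iteration, whereas alloca() in the same spot would keep accumulating allocations until the function returned.

```c
#include <string.h>

/* Sketch of the scoping difference: the VLA below is scoped to the
 * loop body, so its storage is reclaimed at the end of every
 * iteration and stack usage stays bounded. alloca() in the same
 * position would keep accumulating until the function returned. */
size_t zero_blocks(size_t n, int iterations)
{
    size_t total = 0;
    for (int i = 0; i < iterations; i++) {
        char buf[n];                /* VLA: lives only inside this block */
        memset(buf, 0, n);
        total += n;
        /* buf's storage is released here, each time around the loop */
    }
    return total;
}
```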


Syntax, no protection against stack corruption,...


You can corrupt the stack without VLAs just fine. What else?


VLAs make it a lot easier to corrupt the stack by accident. Unless you're quite a careful coder, stuff like:

  void f (size_t n)
  {
    char str[n];
    /* ... */
  }
leads to a possible exploit where the input is manipulated so n is large, causing a DoS attack (at best) or a full exploit (at worst). I'm not saying that banning VLAs solves every problem though.

However the main reason we forbid VLAs in all our code is because thread stacks (particularly on 32 bit or in kernel) are quite limited in depth and so you want to be careful with stack frame size. VLAs make it harder to compute and thus check stack frame sizes at compile time, making the -Wstack-usage warning less effective. Large arrays get allocated on the heap instead.
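A common mitigation in this spirit (a sketch; the function name and the limit are made up) is to cap the stack portion and fall back to the heap for large sizes, which also keeps the maximum stack frame size computable at compile time:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of a common mitigation (names and the limit are made up):
 * use a small fixed buffer for typical sizes and fall back to the
 * heap when n is large, so the stack frame stays a known size. */
#define STACK_LIMIT 256

int f(size_t n)
{
    char stack_buf[STACK_LIMIT];
    char *str = stack_buf;

    if (n > STACK_LIMIT) {
        str = malloc(n);            /* large requests go to the heap */
        if (str == NULL)
            return -1;              /* failure is detectable, unlike a VLA */
    }

    memset(str, 0, n);              /* ... use str as before ... */

    if (str != stack_buf)
        free(str);
    return 0;
}
```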


> stuff like ... leads to a possible exploit where the input is manipulated so n is large

The same is true for most recursive calls; should recursion also be banned in programming languages?


When writing secure C? In most cases, absolutely.


That's not really a fair comparison though. Recursion is strictly necessary to implement several algorithms. Even if "banned" from the language, you would have to simulate it using a heap allocated stack or something to do certain things.

None of this applies to VLA arguments.


It's not strictly necessary precisely because all recursions can be "simulated" with a heap allocated stack. And in fact, the "simulated" approach is almost always better, from both a performance and a maintenance perspective.


This is simply nonsense. In cases with highly complex recursive algorithms, "unrecursing" would make the code a completely unmaintainable mess, requiring an immensely complicated state machine, which is why something like Stockfish doesn't do that in its recursive search function even though the code base is extremely optimised. And yes, some algorithms are inherently recursive, and don't gain any meaningful performance from the heap stack + state machine approach.


> In cases with highly complex recursive algorithms, "unrecursing" would make the code a completely unmaintainable mess, requiring an immensely complicated state machine

Nothing about it is "immensely complicated". Rather than store your recursion state in a call stack, you can store it in a stack of your own, i.e. a heap-allocated container. The state of a cycle of foo(a,b,c) -> bar(d,e,f) -> baz(g,h) -> foo(...) becomes expressible as an array of tagged union of (a,b,c), (d,e,f) and (g,h).

And there is nothing inherently unmaintainable about this approach. I would hope that it's a commonly taught pattern, but even if it's not, that doesn't make it impossible to understand. Picking good names and writing explanatory comments are 90% of the battle of readability.
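As a minimal sketch of the pattern (illustrative types and names, error handling kept minimal): a recursive tree sum re-expressed with an explicit, heap-allocated stack of frames. Here one frame type suffices; the mutually recursive foo/bar/baz case would use a tagged union of frame types instead.

```c
#include <stdlib.h>

/* Illustrative types; error handling is minimal for brevity. */
struct node { int value; struct node *left, *right; };

long tree_sum_iterative(struct node *root)
{
    size_t cap = 64, top = 0;
    struct node **stack = malloc(cap * sizeof *stack);
    long sum = 0;

    if (stack == NULL)
        return -1;                      /* allocation failure */
    if (root != NULL)
        stack[top++] = root;            /* push the initial frame */

    while (top > 0) {
        struct node *n = stack[--top];  /* pop a frame */
        sum += n->value;
        if (top + 2 > cap) {            /* grow the heap-allocated stack */
            cap *= 2;
            stack = realloc(stack, cap * sizeof *stack);
            if (stack == NULL)
                return -1;
        }
        if (n->left)  stack[top++] = n->left;
        if (n->right) stack[top++] = n->right;
    }
    free(stack);
    return sum;
}
```

Each element stores only the state that is actually needed (here, one pointer), rather than a full call-stack frame.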

> which is why something like Stockfish doesn't do that in its recursive search function even though the code base is extremely optimised

I can't speak for what Stockfish devs do, as I have no insight into which particular developers made what set of tradeoffs in which parts of their codebase. But it doesn't change the reality that using your own stack container is almost always more performant and more extensible:

1. Your own stack container can take up less space per element than a call stack does per stack frame. A stack frame has to store all local variables, which is wasteful. The elements of your own stack container can store just the state that is necessary.

2. Your own stack container can recurse much farther. In addition to the previous point, call stacks tend to have relatively small memory limits by default, whereas heap allocations do not. In addition, you can employ tricks like serializing your stack container to disk, to save even more memory and allow you to recurse even farther.

3. Your own stack container can be deallocated, shrunk, or garbage collected to free memory for further use, but a call stack typically only grows.

4. Your own stack container can be easily augmented to allow for introspection, which would require brittle hackery with a traditional call stack. This can be an extremely useful property, e.g. for a language parser resolving ambiguities in context sensitive grammars.

> And yes, some algorithms are inherently recursive, and don't gain any meaningful performance from the heap stack + state machine approach.

Using a heap-allocated stack container is recursion. It is the state machine. The only fundamental implementation difference between the approach I describe and traditional recursion is that the former relies on the programmer using an array of algorithm state, and the latter relies on the runtime using an array of stack frames.


First of all, my original point was only that comparing recursion to VLAs was not a reasonable comparison, not to make some profound point about your favourite way to implement recursion. So, chill dude.

1. This is often irrelevant, depending on your priorities. Taking Stockfish as an example again, memory is not what's at a premium; searched nodes are. The search space is inherently intractable. You're never gonna be able to recurse meaningfully deeper by shaving off some stack space, because the breadth of the tree grows exponentially in the number of plies. The only optimisations that help here are caching and various heuristics to avoid searching certain nodes at all.

2. You know you can change the stack size, right? This is what Stockfish does for its search threads. No need for fancy dynamic allocation here. Also, have fun watching shit get interesting when you have to realloc() ncpu huge chunks of contiguous memory balls deep into a search, when the engine is in time trouble... Sometimes resizing allocated memory is simply not an option.

3. Again this is not always relevant. Stockfish needs its big stacks all the time. So big whoop.

4. This Stockfish does need to do, which is why it keeps an array of structs (never resized) for that purpose. But Stockfish also needs to make decisions about when things move between the different stacks, which is why it uses recursive calls despite also having a stack on the heap.

5. It's obvious I am aware of this. My original comment literally said that banning recursion would force you to implement recursion manually anyway using a state machine. Like, dude, you're literally repeating my own comment back at me as if I didn't already know it. What's up with that?

The point here is: yes, in some specialised cases it might be preferable to implement the recursion yourself if the problem calls for it. But other times, and I'd argue most of the time, this is not necessary. So just use the already available abstraction provided by the language. Your line of reasoning is a bit like arguing for not using C at all, because it will be slower than assembly in some cases. Sure, write hand-optimised assembly in your hot paths if you need to, but most people don't. Abstractions are generally our friends; they help us write clearer, more concise code.


> It's not strictly necessary precisely because all recursions can be "simulated" with a heap allocated stack.

This just moves the problem from a stack blowout to a heap blowout.

> And in fact, the "simulated" approach is almost always better, from both a performance and a maintenance perspective.

I am unsure about the performance, but turning recursive code implementing a recursive procedure into iterative code which has to maintain a stack by hand cannot possibly improve readability unless the programmers involved are pathologically afraid of seeing recursive code.


Computers have many GiBs of heap space. Your thread has a few MiB of stack. Tell me: which is the bigger problem?

This is also ignoring the fact that the memory usage for recursive algorithms is higher because there's a bunch of state for doing the function call being pushed onto the stack that you just don't see (return address, potentially spilled registers depending on the compiler's ability to optimize, etc). Unless you stick with tail recursion, but that's just a special case where the loop method would be similarly trivial. Case in point: I implemented depth-first search initially as a recursive thing and blew out the stack on an embedded system. Switched to an iterative depth-first search with no recursion. No problem.

OP said “it’s the only way to solve certain problems”. That’s clearly not true because ALL recursive algorithms can be mapped to non recursive versions.

I never got the fascination with implicit recursion. It’s just a slightly different way to express the solution. Personally I find it usually harder to follow / fully understand than regular iterative methods that describe the recursion state explicitly (ie time and space complexity in particular I find very hard to reason about for recursion.)


Definitely not.


For the "can be", the performance, or the maintenance perspective? With an example, please...


You’ll find it difficult to beat a construct that has dedicated language support. Trying to fake function calls without actually making function calls is difficult and going to have poor ergonomics.


MISRA C bans recursion for instance.


Doesn't a similar DoS risk (from allowing users to allocate arbitrarily large amounts of memory) also apply to the heap? You shouldn't be giving arbitrary user-supplied ints to malloc either.


> Doesn't a similar DoS risk (from allowing users to allocate arbitrarily large amounts of memory) also apply to the heap?

DoS risk? No one cares too much about that; the problem with VLAs is stack smashing, which then allows arbitrary user-supplied code to be executed.

You cannot do that with malloc() and friends.


VLAs don’t smash the stack.


Depends on the number you put inside, and the linker settings for stack size.


No.


How does a huge VLA corrupt the stack? If there's not enough space but code keeps going then isn't that a massive bug with your compiler or runtime?


Okay. How do you tell the kernel that? Sure, the kernel will have put a guard page or more at the end of the stack, so that if you regularly push onto the stack, you will eventually hit a guard page and things will blow up appropriately.

But what if the length of your variable length array is, say, gigabytes, you've blown way past the guard pages, and your pointer is now in non-stack kernel land.

You'd have to check the stack pointer all the time to be sure, that's prohibitive performance-wise. Ironically, x86 kind of had that in hardware back when segmentation was still used.


I think the normal pattern is a stack probe every page or so when there's a sufficiently large allocation. There's no need to check the stack pointer all the time.

But that's not my point. If the compiler/runtime knows it will blow up if you have an allocation over 4KB or so, then it needs to do something to mitigate or reject allocations like that.


> I think the normal pattern is a stack probe every page or so when there's a sufficiently large allocation.

What exactly are you doing there, in kernel code?

> But that's not my point. If the compiler/runtime knows it will blow up if you have an allocation over 4KB or so, then it needs to do something to mitigate or reject allocations like that.

Do what exactly? Just reject stack allocations that are larger than the cluster of guard pages? And keep book of past allocations? A lot of that needs to happen at runtime, since the compiler doesn't know the size with VLAs.

It's not impossible and mitigations exist, but it is pretty "extra". gcc has -fstack-check that (I think) does something there.


> What exactly are you doing there, in kernel code?

In kernel code?

What you're doing is triggering the guard page over and over if the stack is pushing into new territory.

> Do what exactly? Just reject stack allocations that are larger than the cluster of guard pages? And keep book of past allocations? A lot of that needs to happen at runtime, since the compiler doesn't know the size with VLAs.

Just hit the guard pages. You don't need to know the stack size or have any bookkeeping to do that, you just prod a byte every page_size. And you only need to do that for allocations that are very big. In normal code it's just a single not-taken branch for each VLA.
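Conceptually (this is a sketch of the idea, not actual compiler output), the emitted probe touches one byte in every page of a large new allocation, so a guard page is faulted on before any code can use memory beyond it:

```c
#include <stddef.h>

/* Conceptual sketch (not real compiler output) of a stack probe:
 * touch one byte per page across a large new allocation, so a guard
 * page is hit before the stack pointer can leap past it.
 * PAGE_SIZE is assumed to be 4 KiB here. */
#define PAGE_SIZE 4096

void probe_stack(volatile char *base, size_t size)
{
    for (size_t off = 0; off < size; off += PAGE_SIZE)
        base[off] = 0;              /* faults here if we crossed a guard page */
    if (size > 0)
        base[size - 1] = 0;         /* touch the final byte as well */
}
```

In normal code the probe loop is skipped entirely for small allocations, so the common-case cost is a single not-taken branch.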


That seems to be what -fstack-check for gcc is doing:

"If neither of the above are true, GCC will generate code to periodically “probe” the stack pointer using the values of the macros defined below."[1]

I guess I'm wondering why this isn't always on if it solves the problem with negligible cost? Genuine question, not trying to make a point.

[1] https://gcc.gnu.org/onlinedocs/gccint/Stack-Checking.html


What I'm finding in a quick search is:

* It should be fast, but I haven't found a benchmark.

* There appear to be some issues of signals hitting at the wrong time vs. angering valgrind, depending on probe timing.

* Probes like this are mandatory on windows to make sure the stack is allocated, so it can't be that bad.


I'm mostly interested in it for kernel code though, so the second point at least does not apply, at least not directly. Maybe there is something analogous when preempting kernel threads, I haven't thought it through at all. But interesting.


Because it's on by default in MSVC [0], and we all know that whatever technical decisions MS makes, they're superior to whatever technical decision the GNU people make. /s

Speaking seriously, I too would like an answer.

[0] https://docs.microsoft.com/en-us/windows/win32/devnotes/-win...


One decision Microsoft made was not to support VLAs at all, even after their new found C love.


An attacker would first trigger a large VLA-allocation that puts the stack pointer within a few bytes of the guard page. Then they would just have the kernel put a return address or two on the stack and that would be enough to cause a page fault. The only way to guard against that would be to check that every CALL instruction has enough stack space which is infeasible.


But that's the entire point of the guard page, it causes a page fault. That's not corruption.

Denial of service by trying to allocate something too big for the stack is obvious. I'm asking about how corruption is supposed to happen on a reasonable platform.


Perhaps they're trying to guard against introducing easy vulnerabilities on unreasonable platforms. With VLAs unskilled developers can perhaps more easily introduce this problem. It would be a case of bad platforms and bad developers ruining it for the rest.


An attacker could trigger a large VLA allocation that jumps over the guard page, and a write to that allocation. That write would start _below_ the guard page, so damage would be done before the page fault occurs (ideally, that write wouldn’t touch the guard page and there wouldn’t be a page fault but that typically is harder to do; the VLA memory allocation typically is done to be fully used)

Triggering use of the injected code may require another call timed precisely to hit the changed code before the page fault occurs.

Of course, the compiler could and should check for stack allocations that may jump over guard pages and abort the program (or, if in a syscall, the OS) or grow the stack when needed. Also, VLAs aren’t needed for this. If the programmer creates a multi-megabyte local array, this happens, too (and that can happen accidentally, for example when increasing a #define and recompiling)

The lesson is, though, that guard pages alone don’t fully protect against such attacks. The compiler must check total stack space allocated by a function, and, if it can’t determine that that’s under the size of your guard page, insert code to do additional runtime checks.

I don’t see that as a reason to outright ban VLAs, though.


VLAs give the attacker an extra attack vector. The size of the VLA is runtime-determined and potentially controlled by user input. Thus, the only safe way to handle VLAs is to check that there is enough stack space for every VLA allocation. Which may be prohibitively expensive and even impossible on some embedded platforms. Stack overflows may happen for other reasons too, but letting programmers put dynamic allocations on the stack is just asking for trouble.


I don’t think “may be prohibitively expensive and even impossible on some embedded platforms” is a strong argument for not including it in C. There are many other features in C for which that holds, such as recursion, dynamic memory allocation, or even floating point.


Welcome to the world of undefined behavior. Anything can happen....


I think this is a common misunderstanding about UB. It's not that anything can happen, just that the standard doesn't specify what happens, meaning whatever happens is compiler/architecture/OS dependent. So you can't depend on UB in portable code. But something definite will happen, given the current state of the system. After all, if it didn't, these things wouldn't be exploitable either.


> But something definite will happen, given the current state of the system.

This is only true in the very loose and more or less useless sense that the compiler is definitely going to emit some machine code. What does that machine code do in the UB case? It might be absolutely anything.

One direction you could go here is you insist that surely the machine code has a defined meaning for all possible machine states, but that's involving a lot of state you aren't aware of as the programmer, and it's certainly nothing you can plan for or anticipate so it's essentially the same thing as "anything can happen".

Another is you could say, no, I'm sure the compiler is obliged to put out specific machine code, and you'd just be wrong about that, Undefined Behaviour is distinct from Unspecified Behaviour or merely Platform Dependant behaviour.

Many C and C++ programmers have the mistaken expectation that if their program is incorrect it can't do anything really crazy, like if I never launch_missiles() surely the program can't just launch_missiles() because I made a tiny mistake that created Undefined Behaviour? Yes, it can, and in some cases it absolutely will do that.
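A classic small example (illustrative only; the exact behavior depends on compiler and flags): a naive signed-overflow check that is itself UB, which an optimizer is entitled to fold to false.

```c
#include <limits.h>

/* Illustrative only: the check below is itself UB when x == INT_MAX,
 * because signed overflow is undefined, so an optimizer may assume
 * x + 1 > x always holds and fold this function to return 0. */
int will_increment_overflow(int x)
{
    return x + 1 < x;               /* naive, UB-based overflow check */
}

/* A well-defined way to ask the same question: */
int will_increment_overflow_safe(int x)
{
    return x == INT_MAX;
}
```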


I'm aware you can get some pretty crazy behaviours, say if you end up overwriting a return address and your code begins to jump around like crazy. Even that could reproduce the same behaviour consistently though.

I once had a bug like that in a piece of AVR C code where the stack corruption would happen in the same place every time and the code would pathologically jump to the same places in the same order every time. It's worth noting though that when there's an OS, usually what will happen is just a SIGABRT. See the OpenBSD libc allocator for a masterclass in making misbehaving programs crash.

I was never advocating to rely on UB, btw. But yes, UB can be understood in many cases.


You are confusing the C standard and actual platforms/C implementations. A lot of things are UB in the standard but perfectly well defined on your platform. Standards don’t compile code, real compilers do. The standard doesn’t provide standard library implementations, the actual platform does.

Targeting the standard is nice, but if all of your target platforms guarantee certain behaviors, you might consider using those. A lot of UB in the C standard is perfectly defined and consistent across MSVC, GCC, Clang, and ICC.


> A lot of UB in the C standard is perfectly defined and consistent across MSVC, GCC, Clang, and ICC.

Do you have examples of this "a lot of UB in the C standard" which is in fact guaranteed to be "perfectly defined and consistent" across all the platforms you listed ? You may need to link the guarantees you're relying on.


Okay so take the two most complained about UBs, improper aliasing and signed integer overflow. Every compiler I’ve ever used lets you turn both into defined behavior.


These things aren't the default, the compiler may "let you" but it doesn't do it until you already know you've got a problem and you explicitly tell it you don't want standard behaviour. However lets ignore that for a moment:

My GCC offers to turn signed integer overflow into either an abort or wrapping, either of which is defined behaviour, but how do I get MSVC to do precisely the same thing?

Likewise for aliasing rules. GCC has a switch to have the optimiser not assume the language's aliasing rules are actually obeyed, but what switch in MSVC does exactly the same thing?


MSVC doesn’t enforce strict aliasing to begin with. And passing /d2UndefIntOverflow makes signed integer overflow well-defined. Even if you don’t pass it, MSVC is very conservative in exploiting that UB, precisely to avoid breaking code that was valid before they started doing this optimization (2015 or so).


What you are describing is unspecified and implementation-defined behavior [0].

Avoiding UB (edit: in general) doesn't have anything to do with the code being portable and everything with the code not being buggy [1][2].

[0] https://en.cppreference.com/w/c/language/behavior

[1] https://blog.regehr.org/archives/213

[2] http://blog.llvm.org/2011/05/what-every-c-programmer-should-...


Oh really? Then why does every compiler I use have a parameter to turn off strict aliasing?

You cite a source that contradicts you. In the llvm blog post: "It is also worth pointing out that both Clang and GCC nail down a few behaviors that the C standard leaves undefined."


Sometimes a compiler gives a guarantee that a particular UB is always handled in a specific way, but you cannot generalize this to all UB.

Added 'in general' to my comment to make this explicit.


What is undefined about a large VLA? It shouldn't be undefined.

According to wikipedia "C11 does not explicitly name a size-limit for VLAs"


The C standard has no mentions of a program stack. This isn’t undefined behavior.


You shouldn't be writing C if you're not a careful coder.


Yeah, right.

https://msrc-blog.microsoft.com/2019/07/16/a-proactive-appro...

https://research.google/pubs/pub46800/

https://support.apple.com/guide/security/memory-safe-iboot-i...

Maybe you could give a helping hand to Microsoft, Apple, and Google; they are in need of careful C coders.


I'm not sure if you intentionally missed my point. Everything in C requires careful usage. VLAs aren't special: they're just yet another feature which must be used carefully, if used at all.

Personally, I don't use them, but I don't find "they're unsafe" to be a convincing reason for why they shouldn't be included in the already-unsafe language. Saying they're unnecessary might be a better reason.


The goal should be to reduce the amount of sharp edges, not increase them even further.


VLAs are unsafe in the worst kind of way, as it is not possible to check whether it is safe to use them. alloca() can at least in theory return NULL on stack overflow, but there is no such provision for VLAs.


They're not unsafe (in the memory sense) as long as they check for overflow and reliably crash if there is one.


If a lot of platforms don't implement this check reliably, then it's unsafe in practice at this time, even if not in theory.


Who out there has a version of stack checking that doesn't actually check the stack…? If it doesn't check by default, as C doesn't, then it's not "as long as".


And if you're a careful coder writing C, you should give the VLA the stink eye unless it's proving its worth.


Hint, that means nobody should be writing C.


Where is the lie?


Too bad we have all that legacy C code that won't just reappear by itself on a safer language.

That means there are a lot of not careful enough developers (AKA, human ones) that will write a lot of C just because they need some change here or there.


With VLAs:

1. The stack-smashing pattern is simple, straightforward and sure to be used often. Other ways to smash the stack require some more "effort"...

2. It's not just _you_ who can smash the stack. It's the fact that anyone who calls your function will smash the stack if they pass some large numeric value.


They can overflow the stack. They cannot smash the stack.


Fair enough; I had the mistaken idea that the two terms are interchangeable, but apparently stack smashing is only used for the attack involving the stack:

https://en.wikipedia.org/wiki/Stack_buffer_overflow

so, pretend I said "overflow" instead of "smash" in my post.


Useless semantic pedantry at best, and arguably wrong, as there isn't some sort of ISO standard on dumb hacking terms.


Overflowing the stack gives you a segfault. Smashing the stack lets hackers pop a shell on your computer. They are incredibly different. VLAs can crash your program, but they do not give attackers the ability to scribble all over the stack.


> Overflowing the stack gives you a segfault.

Maybe. If the architecture supports protected memory and the compiler has placed an appropriately sized guard page below the stack. If it doesn't then overflowing the stack via a VLA gives you easy read and write access to any byte in program memory.


If your architecture does not support this then you’re at risk whenever you make a function call.


Backpedal harder!


I’m not backpedaling. If your environment has guard pages and does probing then it protects equally well against VLAs and function calls overflowing the stack. If it has neither then both are liable to overwrite other memory. Obviously I would prefer that you have the protections, or some sort of equivalent, but they have nothing to do with VLAs.


Unless they happen to be enjoying kernel space.


There is no difference.


What about not adding even more ways how we should avoid using C?


> What about not adding even more ways how we should avoid using C?

That's a mute point for C's target audience because they already understand that they need to be mindful of what the language does.


What the heck. It's "moot", not "mute".


I'm curious, are there accents in which those two words are homophones? Given the US tendency to pronounce new/due/tune as noo/doo/toon I can imagine some might say mute as moot but I can't find anything authoritative online.


According to Wikipedia, East Anglia does universal yod-dropping, so mute/moot would be homophonic. (See https://en.wiktionary.org/wiki/Appendix:English_dialect-depe...).

Personally, I haven't come across anyone who pronounces 'mute' without the /j/.


They are not perfect homophones. There is a slight i (IPA j) in "mute".

https://en.wiktionary.org/wiki/mute

https://en.wiktionary.org/wiki/moot


That is like saying that if sushi knives are already sharp enough, there is no issue cutting fish with a samurai sword instead, except at least with the knife maybe the damage isn't as bad.


The difference between the largest sushi knives and a katana is more about who wields them than the blade involved.


One ends up cutting quite a few pieces either way.


When you really need a samurai, nothing less will do. Arguably most of us need sushi chefs these days.


> That is like saying if sushi knifes are already sharp enough (...)

No, it's like saying that professional people understand the need to learn what their tools of the trade do beyond random stackoverflow search on how to print text to stdout.

It seems you have an irrational dislike of C. That's perfectly ok. No need to come up with excuses though.


It doesn't seem, I do.

Ever since I got my hands on Turbo C++ 1.0, back in 1993, I have seen no reason why one should downgrade oneself to C.

At least C++ gives us the tools to be a bit more secure, even if tainted with C's copy-paste compatibility.

You will find posts from me on Usenet, standing on the C++ frontline of C vs C++ flamewars.

No one is making excuses; it should be nuked. Unfortunately, it will outlive all of us.


I like your comparison of a C programmer with a samurai.


Including that most of them end up doing Seppuku on their applications.


while we're hugging them from behind


It's more like the C programmer is a sushi master. They can make a delicious, beautifully crafted snack. But if the wrong ingredients are used you'll get very sick.


VLAs are no more unsafe than standard C is for stack corruption.


Just one additional attack vector more to add to the list, who's still counting them?


It’s not an additional attack vector.


    int A[100000000];
Also has no protection.


the only result of banning VLAs is to force everyone to use alloca, which is even less safe.

exhibit A: https://lists.freedesktop.org/archives/mesa-commit/2020-Dece...

exhibit B: https://github.com/neovim/neovim/issues/5229

exhibit C: https://github.com/sailfishos-mirror/llvm-project/commit/6be...

etc etc


Nobody is forced to use alloca, which is not less safe, only equally disastrous. Just use malloc, already.


ah yes, why didn't I think of it, let me just try:

    #include <cstdlib>
    #include <span>
    
    __attribute__((annotate("realtime")))
    void process_floats(std::span<float> vec) 
    {
      auto filter = (float*) malloc(sizeof(float) * vec.size());
    
      /* fill filter with values */
    
      for(int i = 0; i < vec.size(); i++)
        vec[i] *= filter[i];
    
      free(filter);
    }

    $ stoat-compile++ -c foo.cpp -emit-llvm -std=c++20
    $ stoat foo.bc
    Parsing 'foo.bc'...

    Error #1:
    process_floats(std::span<float, 18446744073709551615ul>) _Z14process_floatsSt4spanIfLm18446744073709551615EE
    ##The Deduction Chain:
    ##The Contradiction Reasons:
     - malloc : NonRealtime (Blacklist)
     - free : NonRealtime (Blacklist)

oh noes :((


Here's a nickel, kid.

The line about "oh, my embedded system doesn't have dynamic memory" is bullshit. You either know how big your stack is and how many elements there are, and you make the array that big. Or you don't know, and you're fucked.

You can't clever your way out of not knowing how big to make the array with magic stack fairy pretend dynamic memory. You can only fuck up. Is there room for 16 elements? The array is 16. Is there room for 32? It's 32.


I think the parent comment was about malloc not being real-time? Not about storage space.

Though I do wonder why there can't be a form of malloc that allocates in a stack-like fashion in real time to satisfy the formal verifier?
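There can be; it's usually called a bump (or arena) allocator. A rough sketch, not any particular library's API: allocation is a constant-time pointer bump out of a fixed preallocated buffer, and "free" is resetting the offset, which is why it only suits LIFO / per-frame usage patterns:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of a stack-like "malloc": O(1) bump allocation from a fixed
   buffer. No locks, no syscalls, no growth, so the worst case is bounded. */
typedef struct {
    uint8_t *base;   /* fixed backing buffer */
    size_t   cap;    /* total capacity in bytes */
    size_t   off;    /* current high-water mark */
} arena;

void *arena_alloc(arena *a, size_t n) {
    size_t aligned = (a->off + 15u) & ~(size_t)15u;  /* 16-byte alignment */
    if (aligned > a->cap || n > a->cap - aligned)    /* out of space: fail,
                                                        never grow or block */
        return NULL;
    a->off = aligned + n;
    return a->base + aligned;
}

void arena_reset(arena *a) { a->off = 0; }           /* "free" everything */
```

A real-time thread would give each worker its own arena (sidestepping locking entirely) and reset it at the end of every processing frame.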


Real time also generally means your input sizes are bounded and known, otherwise the algorithm itself isn't realtime and malloc isn't the reason why.

But strictly speaking the only problem is a malloc/free that can lock (you can end up with priority inversion). So a lock-free malloc would be realtime just fine, it doesn't have to be stack growth only.


> Real time also generally means your input sizes are bounded and known, otherwise the algorithm itself isn't realtime and malloc isn't the reason why.

I think you meant to say something else? Real-time is a property of the system indicating a constraint on the latency from the input to the output—it doesn't constrain the input itself. (Otherwise even 'cat' wouldn't be real-time!)


If your input is not bounded, you can't know in advance the time needed to process it. In other words, you cannot be realtime.

`cat` can be realtime, but only by fixing the size of the internal buffer it reads into and writes from. In that case it can, in theory, bound the time needed to process a fixed block of input.

But if for some reason `cat` tried to read/write by lines whose size isn't known in advance, it would fail to be realtime.


I think we're not disagreeing on the actual constraints, but the terminology. The "internal buffer" is not part of the system's "input". It's part of the system's "state".


We're talking about function inputs, not system inputs.


Real-time is a property of the whole system though (?), not individual functions. But even if you want to reframe it to be about functions, small input is neither necessary nor sufficient for it being "real time". Your function might receive an arbitrarily large n-element array a[] and just return a[a[n-1]], and it would be constant time. Again, the size is simply not the correct property to look at.


> Real-time is a property of the whole system though (?) not individual functions.

As I see it, that leads to a contradiction. `cat` has an unbounded execution time; it depends on the size of the input. That would mean `cat` cannot be realtime. But the same logic leads to the conclusion that no OS kernel can be realtime, because it runs for an unbounded time.

That is nonsense. Realtime is about latency: how long it takes to react to an input. `cat` may or may not be realtime; it depends on how we define "input". We need to define it in a way that makes it bounded, 4 KB chunks of bytes, for example.

> Like your function might receive an arbitrarily large n-element array a[] and just return a[a[n-1]], and it would be constant time.

No. Receiving an n-element array is an O(n) operation; it needs to be copied into memory. We can of course pick one function that just gets a pointer to an array, but if real-time is a property of the whole system, then the system needs to copy the array from the outside world into memory, and that is an O(n) operation. So for any latency requirement there exists an N such that when n > N the requirement cannot be met. An unbounded array as input is therefore incompatible with real-time.


> Though I do wonder why there can't be a form of malloc that allocates in a stack like fashion in real time

I think that's basically what the LLVM SafeStack pass does -- stack variables that might be written out of bounds are moved to a separate stack so they can't smash the return address.


How would you implement that in the face of multiple threads? You can't use TLS, as it will have to initialize your malloc_stack on first access in a given thread, which may or may not be safe in real-time-ish contexts (I think it's definitely not on Windows; not sure about Linux).


> how would you implement that in the face of multiple threads?

I imagine you could allocate it at the same time as you allocate the thread's own stack?


> You either know how big your stack is

that's in most systems I target a run-time property, not a compile-time one


> wants to make them mandatory again

What does 'mandatory' mean? Like if I write a C compiler without them... what are they going to do about it?


Code that complies with the standard will be rejected by your compiler. The effect would probably be that few people would use your compiler.


Most mainstream compilers aim to be standards compliant


You don't understand! Nadella's Microsoft open-sourced a few applications and created a popular text editor, which makes him basically the second coming of Christ in the eyes of a large portion of HN users.

Their numerous anti-user and anti-privacy practices are unimportant. Nadella's Microsoft is "cool" and that's all that matters.


This could mean that information literacy is on the rise.


If only! The "traditional media" bubble is merely getting augmented with the help of "influencers", sketchy Twitter accounts, state-run networks, fringe outlets, social networks, web celebrities and so on. Someone thinking that TV isn't "trustworthy" doesn't mean much; chances are they're getting even lower-grade reporting under the guise of it being "non-traditional media" (that must mean something... right??).


Or the opposite (that information literacy is on the decline, and people who are more intelligent and better educated trust the news more, at least from credible sources like the New York Times, the Washington Post, NPR, and CNN).


I’m not sure there are any news outlets remaining that I would consider credible, at least not consistently. They all have substantial black marks.


Reuters is credible IMO.


DC and NYC coverage (especially of political demonstrations) can get a little dicey but otherwise they seem mostly ok.


>Reuters is credible IMO.

Can't say I agree:

Syrian man denied asylum killed in German blast

https://www.reuters.com/article/us-germany-blast-minister-id...


All are the same level? Seems unlikely


If those are your picks for credible sources, you might be less informed than you believe yourself to be. I don't think they're the _most_ biased, but they're certainly not the _least_ biased.


Credibility is not about absence of bias, which is not a reasonable thing to expect from anyone. Credibility is about journalistic practices like checking sources and using editorial discretion about what level of certainty is needed to print something. Surely you have seen one of those charts that plots credibility vs political bias? e.g. https://adfontesmedia.com/static-mbc


The AP calling Abe names right after he was assassinated was stunning to me. AP news wire has traditionally been very neutral, given its wide syndication.


I saw this on /r/conservative today


What did they say?


They called him names?


They called him (in the assassination article) a "divisive arch-conservative" then revised it without issuing a correction or retraction.

How it comes up in search results: https://twitter.com/jeremyreporter/status/154545145897532211...

How it appears now: https://apnews.com/article/japan-shinzo-abe-shooting-22ec224...


Please share your list of least biased for comparison. This would be very helpful.

Thank you


I don't consider any single source or group of agreeing sources as credible in isolation. I've witnessed bad reporting from all major news sources. In cases where I wish to be well-informed, I'll seek out multiple sources, see if there's any disagreement, and try to figure out where and why the disagreement happens. When there is disagreement, I'll also examine the primary sources of the articles I've read. Do this enough, and you'll start to smell BS in most places.


OK. Great.

However, you said _those_ weren't good, or at least not unbiased enough. So which sources are unbiased, or at least better at handling bias than the ones they posted?


It's not that you can't believe anything they report so much as relying on only those sources is going to give you a distorted and biased representation of ideas and events, same as somebody who only reads Fox News. I read articles from those publishers. I don't _only_ read articles from those publishers.


There is no such thing. That is the point. It's up to you to filter out the bias and figure out what happened.


CNN is credible? According to whom?


The top headlines on cnn.com at the moment are:

"Highland Park gunman's family was in turmoil for years before shooting" <-- neutral

"Shinzo Abe's assassination shocks Japan" <-- neutral

"Musk tells Twitter he wants out of deal to buy it. Twitter says it will force him to close the sale" <-- neutral

"Here's what's in Biden's executive order on abortion rights" <-- neutral

"Trump considering waiving executive privilege claim for Bannon but prosecutors say he was never shielded" <-- neutral

The top headlines on foxnews.com at the moment are:

"Dems reportedly full of 'outright worry' president won't be able to rescue plunging polls by midterms" <-- neutral

"Twitter squawks back after Musk reveals he's terminating $44B purchase" <-- neutral

"FBI director issues stern warning about biggest long-term threat to US" <-- neutral

"Mom gets terrifying call after dropping young daughter off at airport gate" <-- neutral but click-baity

"BORDER BATTLE: Biden admin fires back as governor takes spiraling migrant crisis into his own hands" <-- biased (alternative version: Biden admin criticizes Texas governor after executive order)

"GAFFER-IN-CHIEF: Biden widely mocked after he appears to read instruction right off teleprompter" <-- biased (alternative: none, because it's not news worthy)

"'Joe Biden and the Democrats are lying' to the American people: Rep. Malliotakis" <-- biased (alternative: none, because it's not news worthy)

"School choice advocate slams 'despicable' criticism from unions of Arizona school voucher bill" <-- biased (alternative: none, because it's not news worthy)

Fox and other conservative sources publish a lot of "conservative personality SLAMS liberal personality"-type articles. Left-wing publications do the opposite of course, but credible sources rarely publish this kind of article unless it's from someone important and about an important subject, and try not to use formulae like "as governor takes SPIRALING migrant crisis INTO HIS OWN HANDS" or "widely mocked".


What is being identified here isn't bias though. You're picking up style, target audience and how blatant it is.

E.g., "Highland Park gunman's family was in turmoil for years before shooting" isn't neutral, it is narrative building. It isn't relevant to anything important - there are lots of families in turmoil out there. Most families, I suspect, face some sort of turmoil every few years. But the style is more high-brow, and they're clearly targeting people who are, or want to be, emotionally sensitive.

Bias is different from whether the headline is pitched at high- or low-class audiences. You're probably picking up that Fox News isn't written with people like you in mind, and CNN might be.


I would compare CNN/WSJ, and Fox/MSNBC. Much closer in terms of tone and target audience.

Reason Magazine is probably the pinnacle of very good right-leaning journalism, filling the same niche as VICE on the left. I really enjoy the excellent journalism of both.

EDIT: I have reproduced your original comparison with WSJ taking the place of CNN

CNN:

"Highland Park gunman's family was in turmoil for years before shooting" <-- neutral

"Shinzo Abe's assassination shocks Japan" <-- neutral

"Musk tells Twitter he wants out of deal to buy it. Twitter says it will force him to close the sale" <-- neutral

"Here's what's in Biden's executive order on abortion rights" <-- neutral

"Trump considering waiving executive privilege claim for Bannon but prosecutors say he was never shielded" <-- neutral

WSJ:

Japan’s Shinzo Abe, Former Premier, Is Assassinated <-- neutral

Musk Moves to End Deal for Twitter <-- neutral

Google Offers Concessions to Fend Off Antitrust Suit <-- neutral

U.S. Jobs Market Remains Robust <-- neutral

Investors Bet Euro’s Woes Are Far From Over <-- Somewhat editorial, but no clear bias

Housing-Affordability Index At Lowest Level Since 2006 <-- neutral


FYI:

If you're interested in looking at broadcast rundowns by day, the Vanderbilt Television News Archive is invaluable. It permits keyword searches (for how a specific story was covered), or looking at broadcasts from the major networks (NBC, ABC, CBS, CNN, and Fox News), dating to as early as 1969 (for ABC, CBS, and NBC, with CNN and Fox being added at later dates).

https://tvnews.vanderbilt.edu/

For websites, navigating archives by date on the Internet Archive can be useful. It's difficult to pick a canonical time of day for stories, and archive timing varies, but you might choose a target such as 6pm US/Eastern to designate the end of a daily news cycle and find the copy that most nearly matches that.

As noted, there are organisations which perform this work themselves, including Ad Fontes (which I've already mentioned), Media Bias Fact Check (https://mediabiasfactcheck.com/), and more. Sourcewatch (https://sourcewatch.org) is another.

There are of course partisan bias-check organisations (e.g., the Media Resource Center on the right, Fairness and Accuracy in Reporting on the left). There are also groups which look for under-reported stories, most notably Project Censored (https://www.projectcensored.org/).


Thank you! I'm the director of public policy for a non-profit firearms advocacy group, and we do quite a bit of news analysis. This could be super helpful for a piece I'm writing on how gun owners of color have been treated by the media at large over time.


Looking at just headlines doesn't give you the full picture of bias, but it's a good start.

The most notable CNN bias indicator would be having to pay out nearly half a billion to a minor for slandering him. It's not the only event of that nature either.

But usually the bias is omission of facts and even stories, speculation, and slant within the story. In the Sandmann case, they showed only partial footage that cut out him being approached, making him look like the aggressor.


So you just go and pick today's headlines and call it a day?


The effort Victerius expended was markedly greater than yours.

You're more than welcome to provide your own data, or at a bare minimum, state your standards or criteria for what a sufficient or credible response might be.

Keep in mind that there are in fact organisations which do just this over time, with Ad Fontes Media being among the better known and more credible:

https://adfontesmedia.com/


I'll give you a hint: the results would have been the same on any other day. Clearly you're only interested in having your opinions validated. I'm sorry that reality is so disappointing to you.


You're right, of course, but the effort to check each day would be excessive. However, what would you think if the last 30 days looked similar to today? Would you switch to CNN? Probably not.


Mr. Nobody ;)


I might have agreed with you on the first 3 many years ago. Even NPR has gotten really bad with the burying of headlines.

At this point, the best thing people can do is to just glance at the news when it presents itself and then otherwise move on with their lives. If there's any good to come out of the age of disinformation, it's that maybe people will stop doomscrolling the news.


I’ve been saying for a long time now: if it’s actually important, it will reach you by word of mouth.

Now if only I could break the news consumption habit!


Wikipedia Current Events portal is an alternative to scanning CNN, Fox, BBC, NPR etc. I like that you can tab open links to further research around a topic.

https://en.m.wikipedia.org/wiki/Portal:Current_events


There are certainly people who think they're intelligent. They like to congratulate each other.

Taleb would call them IYI (intellectual yet idiot).


He calls everyone that unless you do nothing but slavishly agree with him. Speaking about the quality of social media compared to traditional media...


>There are certainly people who think they're intelligent. They like to congratulate each other.

>Taleb would call them IYI (intellectual yet idiot).

Lenin's "useful idiots" term is always apt.


> "credible sources"

Only for the credulous


All the credible sources you mentioned are not credible at all. That's an interesting choice you made there.


Who are then?


Why?


True, but that's because history as a discipline has an incredibly low standard of evidence. Historians collectively decided that since reliable evidence is often very difficult to produce, they will settle for what they can get. A lot of antique or even medieval historical figures are known from a single sentence in some chronicle written 100 years after their death.


Even the most ardent Christian biblical scholars admit the overwhelming majority of writings claimed to be 1st century are much later forgeries. They have picked out a few scraps they have not been able to prove were forged, and based everything on those.

But the most favored bit of positive evidence is a single paragraph that everybody agrees was badly doctored up. They have "reconstructed" what they think the original must have actually said. But the text before and after it would flow neatly one to the next without it.

Next best is a line in Paul where he mentions somebody is Jesus's brother.


Both pieces of evidence are from Josephus' Antiquities. Not the best evidence but also not the worst.


That would make it unknowable whether he existed, it would not mean we have reason to believe he didn't.

This is not like physics where it's natural to assume something didn't exist if the proof for its existence is not strong enough.


Only because a lot of people want to believe.

We are all confident Adam, Noah, and Moses were made up, with none of the proof positive that you are demanding for this one case.


Adam, Noah and Moses were only attested and believed to exist by groups living in the Kingdom of Judah.

For Moses, people have looked a lot for any kind of evidence that there was some significant Jewish presence in Egypt, or Egyptian migration to the area of Israel, and nothing of the kind has been found, either in Egyptian documents or in archaeological evidence - which is actually evidence of absence when expecting a significant population to have migrated that way. Further accounts from the books of Exodus and Deuteronomy, such as the battle of Jericho, have also been somewhat conclusively debunked (the city of Jericho hadn't existed, at least not with walls, for a few hundred years before the time the conquest is supposed to have taken place).

Similarly, we have looked long and hard for evidence of a massive flood that could have lent some credence to the story of Noah, and nothing of any significant magnitude was found for that time - and here, we know for sure that a flood would have left significant geological evidence, so we know the flood can't have existed.

Adam has so little information associated with him that it's hard to even define what it would have meant for him to exist. We do know for sure, based on DNA evidence, that there is no single father + mother pair from which all humans living today have sprung, definitely not anyone living anywhere near close to the Jewish account of Adam.

In contrast, the idea of a founder of the Christian sect, one who was killed under Pontius Pilate around the year 33, has no major evidence against it, and is a somewhat plausible account of how the Christian sect could have come to be. There are no sources asserting a different origin, and there are no sources that contradict the possibility that Pontius Pilate and the Jewish authorities would have punished someone behaving like Jesus did. So, the neutral position is to say that he may or may not have existed, we don't know.

If you further believe the biblical or non-biblical sources attesting to his existence, even if you think they are weak, you can even say that it's more likely that he existed than that he didn't.


No one who lived then and wrote anything about Jesus, including the (unknown) authors of the Gospels, ever claimed to have met Him.

We can be confident Paul existed, or anyway somebody we know of as Paul, who wrote his Letters. Likewise Homer, the Iliad. Tacitus, Pliny, Horace, Plato, Euripides. But there is nothing traceable to any Jesus. You certainly can choose to believe He existed, but objectively, the evidence is too thin to support it.

Funny thing about Noah's flood. The water is all still there. We call it the sea. Sea level rose 120 meters in the past 20,000 years, up until 8000 years ago. Many millions of square miles of what was rich river bottom land is now sea floor. People whose family had lived there for tens of thousands of years had to keep moving inland (where other people already lived!) as the sea swallowed their ancestral homes. For 12000 years. It must have made an impression.


>The language used on the main vlang site also seems calm, clear and unsensational (at least to me).

It wasn't always like this. Back in 2019, its website looked like this:

https://web.archive.org/web/20190303184805/https://vlang.io/

As you can see, the website claimed that its compiler is "200x faster" than C compilers, while neglecting to mention that it merely translates V code to C, so you still have to run a C compiler.

"400 KB compiler with zero dependencies" (apart from a C compiler and libc).

"As fast as C" - a lie.

Apart from deceptive marketing, there were serious issues with the code quality of the compiler:

https://github.com/vlang/v/blob/d32e538073e55c603992b5b65ebc...


"As fast as C" is not a lie. It literally translates to C.

The 200x compile speed up referred to C++ compilers, this was vague and hard to measure, since the languages are so different, and was removed.

The code may not have been perfect at the time of the 0.0.1 release, but it worked. V could compile itself.

Now it's much better and more organized.


>"As fast as C" is not a lie. It literally translates to C.

No, just because it outputs C it doesn't mean it's as fast as hand-written C. Using that logic, every language that outputs machine code is as fast as assembly, which is obviously not true.


If you look at the C that's generated using the -keepc flag, I think you will be hard pressed to find any glaring inefficiencies compared to "hand-coded C", especially none that GCC's optimizer wouldn't handle with the -prod compiler flag. Even if there were, for the very large majority of users who are not highly proficient in C (which is becoming more and more common these days; greybeards are rare), the V way is going to outperform novice- to intermediate-skilled C programmers hands down.


I don't see how that really follows. V heap-allocates any value whose address is taken. You don't need to be an advanced C programmer to use pointers.


But it is true. V doesn't add any overhead. You can verify with `v -o file.c file.v`.

As long as you use the same data structures and algos, you'll get the same perf.

There's stuff like bounds checking, but it can be disabled, and it adds like 5%.


>and if necessary, turn off Fox News

Quips like this aren't very productive if you want to convince others of your viewpoint.


That's how it already works in Europe. Prices are set by the manufacturer and most cars are built to order. Dealerships rarely have their own branding, they basically work as franchises integrated with the manufacturer's network.

Their role is to collect orders, provide test rides and service the cars.

