The goal of giving money is not to make those who receive it more educated, better parents, or more politically active. I mean, why did I have to say that?
(The amount of money is also unlikely to enable them to save for the future.)
The goal is that their immediate poverty decreases. Why does the author insist on measuring unrelated stats? Because he has an ax to grind.
So, to be clear, you would consider UBI a success if the only detectable change in metrics is simple arithmetic in the form of “we gave them money and then they had more money”?
I’m sorry to sound snarky, but I’m struggling to read this comment any other way. You seem to claim that any metric that isn’t a number representing dollars a person has (however ephemeral) is “unrelated”, which seems completely bonkers to me.
Early advocates (e.g. Friedman) for UBI (and also NIT) often focused on broader distributional outcomes for society and the economy as a whole. One is the reduction of the "welfare cliff". Another is the increase in relative spending power of the lower class as a group, which could lead to growth in businesses that serve these cohorts. Neither of these effects is assessed at all by RCTs that look at a small population of individuals for a period of less than a decade, because they are effects that occur over a long time and among a necessarily large (millions) group of people.
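A toy illustration of the "welfare cliff" with entirely invented numbers (no real program or proposal is being quoted here): a benefit that is simply cut off above an earnings threshold can make extra work lower net income, whereas an NIT-style 50% phase-out keeps net income increasing with earnings.

    % hypothetical numbers, for illustration only
    \text{cliff:}\quad \mathrm{net}(w) = \begin{cases} w + 8000 & w \le 15000 \\ w & w > 15000 \end{cases}
    \quad\Rightarrow\quad \mathrm{net}(15000) = 23000 \;>\; \mathrm{net}(16000) = 16000

    \text{NIT:}\quad \mathrm{net}(w) = w + 0.5\,\max(0,\; 30000 - w)
    \quad\Rightarrow\quad \mathrm{net}(15000) = 22500 \;<\; \mathrm{net}(16000) = 23000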
There is a sort of unstated assumption among some social policy critics that goes, roughly, "we can test most policy effects that matter with well-designed trials". People believe this not because there is much evidence for it, but because modern experimental science is impressive and successful in many ways and therefore, surely, must have the answer to any question. However, many policy questions remain "wicked" and outside of a reasonable experimental domain.
OTOH, 99% of use cases don't care about performance and just want a portable implementation.
While it could be useful to have a "fast" variant that offers no guarantees at all, what you would end up with (because people are vain) is too many people using it instead "because perf", even though their actual usage is not performance critical, and code that breaks whenever the compiler or platform changes.
> OTOH, 99% of use cases don't care about performance
For many other languages I'd agree with '99%', but in the case of C++, performance is one of the main reasons it's still used, so I doubt the number is anywhere near that high.
With hindsight, I'd say that std::unordered_map is fine for what it is, but there are a lot of cases where it's really not suitable, due to the separate dynamic allocation for each element and the resulting lack of cache locality, and because of that many people have to go looking elsewhere for a usable hash map. There are good reasons why we have both std::vector and std::list.
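To make the per-element allocation point concrete, here is a small sketch (C++17; CountingAlloc is my own toy allocator, not a standard facility) that counts heap allocations: the node-based std::unordered_map does roughly one allocation per inserted element plus its bucket arrays, while a std::vector holding the same pairs only reallocates a handful of times.

    #include <cstddef>
    #include <iostream>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    std::size_t g_allocations = 0;  // counts every allocation made through CountingAlloc

    template <class T>
    struct CountingAlloc {
        using value_type = T;
        CountingAlloc() = default;
        template <class U> CountingAlloc(const CountingAlloc<U>&) {}
        T* allocate(std::size_t n) {
            ++g_allocations;
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t) noexcept { ::operator delete(p); }
    };
    template <class T, class U>
    bool operator==(const CountingAlloc<T>&, const CountingAlloc<U>&) { return true; }
    template <class T, class U>
    bool operator!=(const CountingAlloc<T>&, const CountingAlloc<U>&) { return false; }

    int main() {
        using KV = std::pair<const int, int>;
        std::unordered_map<int, int, std::hash<int>, std::equal_to<int>,
                           CountingAlloc<KV>> m;
        for (int i = 0; i < 1000; ++i) m.emplace(i, i);
        std::cout << "unordered_map allocations: " << g_allocations << "\n";  // ~1000, one node each

        g_allocations = 0;
        std::vector<std::pair<int, int>, CountingAlloc<std::pair<int, int>>> v;
        for (int i = 0; i < 1000; ++i) v.emplace_back(i, i);
        std::cout << "vector allocations: " << g_allocations << "\n";  // ~a dozen geometric growths
    }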
Knowing that a typical maze will have many branching paths at the beginning but necessarily only one good path at the end, I find it easier to start from the goal and work my way backward.
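FWIW, the work-backward idea translates directly to code as well: just run the search outward from the goal instead of the entrance. A minimal sketch (C++17; the maze below is my own toy example, not taken from the article):

    #include <array>
    #include <iostream>
    #include <queue>
    #include <string>
    #include <utility>
    #include <vector>

    // Breadth-first search flooded outward from the goal; dist[r][c] then tells
    // you, from any cell, how far you still are from the goal, so the path is
    // recovered by always stepping to a neighbour with a smaller distance.
    std::vector<std::vector<int>> distancesFromGoal(const std::vector<std::string>& maze,
                                                    int goalR, int goalC) {
        const int rows = static_cast<int>(maze.size());
        const int cols = static_cast<int>(maze[0].size());
        std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
        std::queue<std::pair<int, int>> frontier;
        dist[goalR][goalC] = 0;
        frontier.push({goalR, goalC});
        const std::array<std::pair<int, int>, 4> steps{{{1, 0}, {-1, 0}, {0, 1}, {0, -1}}};
        while (!frontier.empty()) {
            auto [r, c] = frontier.front();
            frontier.pop();
            for (auto [dr, dc] : steps) {
                const int nr = r + dr, nc = c + dc;
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
                    maze[nr][nc] != '#' && dist[nr][nc] == -1) {
                    dist[nr][nc] = dist[r][c] + 1;
                    frontier.push({nr, nc});
                }
            }
        }
        return dist;
    }

    int main() {
        // '#' are walls, '.' are open cells; entrance top-left, goal bottom-right.
        const std::vector<std::string> maze = {
            "....#",
            ".##.#",
            ".#...",
            ".#.#.",
            "...#.",
        };
        const auto dist = distancesFromGoal(maze, 4, 4);
        std::cout << "steps from entrance to goal: " << dist[0][0] << "\n";  // prints 8
    }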
But your example is not reflective of the study. Are you saying that the 17% reduction is for some reason significant but the other ones, all of which would inconveniently disagree with the result you want, are not, even though they are in the same study?
IOW, you're saying that among the study results, all that agree with your POV are valid, all that don't are invalid. That's quite some bias there.
Your question is literally answered by the comment of mine that you’re replying to. Frequentist statistics cannot be used to affirm the null. That is, you cannot say “cardiovascular deaths were not significantly associated, therefore SFA does not cause CVD mortality”.
So I’m not disagreeing with or omitting anything in the study. The study said no significant association with CVD mortality. Ok, no problem. That doesn’t mean SFA doesn’t cause CVD mortality.
However, the study does show that SFA is associated with CVD events, so there is a significant finding. It’s not cherry-picking; this is just how frequentist statistics works.
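To illustrate that last point with purely invented numbers (these are not from the study):

    % hypothetical result for CVD mortality
    \mathrm{HR} = 1.10,\qquad 95\%\ \mathrm{CI} = [0.93,\; 1.30],\qquad p > 0.05
    % "not significantly associated", yet the data remain compatible with anything
    % from a 7% lower to a 30% higher mortality risk, so the result cannot be
    % read as "SFA does not cause CVD mortality"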
Well, we read the article, which cites many studies. Maybe she is doing a super selective review of the field, but she does not merely quote one study, but several, all of which indicate that there is no correlation between saturated fat and cardiovascular problems.
IOW, we did not merely "read and trusted one random article" but assessed the presented evidence. You, OTOH, merely provided an ad hominem attack on both the author and anyone who dared believe the presented evidence, which smacks of trying to shame people into not voicing their opinion.
Did you actually follow up and read the studies she cited in detail, though? Much of what she claims is misrepresentation, such as the insinuation that the Cochrane review found no evidence of SFA consumption being associated with risk.
You can embed/tunnel any network transport inside another. There is nothing magical about the internet and IP. IP is actually being tunneled when you're using a cable modem. WiFi is a horrible hack that encapsulates IP in a very ugly way to get it onto its wireless tech.
You could tunnel ATM over IP, I'm pretty sure of it. The depiction seems to me like a flattering extolment of IP.
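A toy sketch of what tunnelling amounts to (this is not GRE, VXLAN, DOCSIS, or any real header layout, just the shape of the idea): the inner protocol's frame is an opaque blob of bytes, and the outer protocol prepends its own header to it.

    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <vector>

    // Made-up outer header, purely illustrative.
    struct OuterHeader {
        std::uint16_t protocol_id;  // says what kind of frame the payload is (ATM, Ethernet, ...)
        std::uint16_t length;       // payload length in bytes
    };

    // Tunnelling in a nutshell: the inner frame is just bytes to the outer layer.
    std::vector<std::uint8_t> encapsulate(std::uint16_t protocolId,
                                          const std::vector<std::uint8_t>& innerFrame) {
        OuterHeader hdr{protocolId, static_cast<std::uint16_t>(innerFrame.size())};
        std::vector<std::uint8_t> packet(sizeof hdr);
        std::memcpy(packet.data(), &hdr, sizeof hdr);
        packet.insert(packet.end(), innerFrame.begin(), innerFrame.end());
        return packet;
    }

    int main() {
        const std::vector<std::uint8_t> atmCell(53, 0x42);   // pretend this is an ATM cell
        const auto packet = encapsulate(/*protocolId=*/7, atmCell);
        std::cout << "outer packet size: " << packet.size() << " bytes\n";  // 4 + 53
    }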
I wonder why they chose to represent rationals by storing the denominator minus one. It makes human parsing of the value harder and in many cases makes the implementation code slightly harder; for example, the equality op needs to increment both denominators before using them. I suspect such increments must constantly be needed left and right?
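For readers who skipped the article, the representation under discussion looks roughly like this (my own C++ sketch, not the article's actual code), and the "+ 1" indeed shows up in every operation that needs the real denominator, equality included:

    #include <cassert>
    #include <cstdint>

    // Illustrative only: a rational stored as numerator plus (denominator - 1),
    // so that a zero denominator is unrepresentable by construction.
    struct Rat {
        std::int64_t num;
        std::uint64_t den_minus_one;  // real denominator is den_minus_one + 1
    };

    // a/b == c/d  <=>  a*d == c*b; both denominators must be re-incremented first.
    bool eq(const Rat& x, const Rat& y) {
        return x.num * static_cast<std::int64_t>(y.den_minus_one + 1) ==
               y.num * static_cast<std::int64_t>(x.den_minus_one + 1);
    }

    int main() {
        Rat half{1, 1};          // 1 / (1 + 1) = 1/2
        Rat twoQuarters{2, 3};   // 2 / (3 + 1) = 2/4
        assert(eq(half, twoQuarters));
    }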
Yes, this is mostly a leftover from initial versions that used a natural number as denominator. It doesn't seem to make a noticeable difference in performance though, since increments are a very basic operation.
I think leaving this in the article makes the non-zero denominator more explicit. It also allows easier adaptation to other numeral systems :)
... or simply that the LOGO language's syntax and choice of commands are confusing? Without a formal explanation, how surprising is it really that a child would assume that STOP means stop?
I'd bet that if LOGO had used RETURN, like many other languages, then the children's reasoning would likely have been more accurate. Or go the other way and make them tell you what this or that brainfuck[1] program does. So, to me, this research says more about LOGO's choices than anything else.
But STOP does mean stop. Stop executing this subroutine.
If the program were instead (as a set of commands for a person, not a turtle) START WALKING; STOP; START CLAPPING; STOP; ... any child would understand what was intended. It would be more confusing if the first STOP here meant "stop all program execution, never proceed to the next step".
So the problem isn't STOP, it's the fact that there's more program to execute, hidden in the call stack.
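A rough analogue in a more familiar language (C++ here, with a recursive procedure of my own invention, not the program from the article): the "STOP" is a return, and the "more program to execute" is sitting in the pending callers on the stack.

    #include <cstdio>

    // The early return plays the role of LOGO's STOP: it only ends the current
    // call, and every pending caller still finishes its remaining commands.
    void draw(int depth) {
        if (depth == 0) return;                    // "STOP": leave this call only
        std::printf("forward (depth %d)\n", depth);
        draw(depth - 1);                           // more work is left waiting here
        std::printf("back (depth %d)\n", depth);   // still runs after the "STOP"
    }

    int main() {
        draw(3);
        std::printf("main continues after draw()\n");
    }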