I only save the full page with Evernote (the default when you save using their clipper). That means I have it offline and available even if the page changes. If it's something I want to stay current, I'll bookmark it in my browser instead.
I agree with you about the news feed. FB Messenger is still the easiest way to contact people (almost everyone has it, and you don't need any information beyond a name). Events are great for easily informing people of what you're doing. Groups are great for alerting a group of people to some information (basically better mailing lists).
For example, Chrome eats pretty much all free memory keeping tabs loaded, but surrenders it gracefully when needed. That's a great use of memory.
But for an OS, and specifically for a feature that doesn't back off unless the user goes and manually changes it, it's obnoxious. The OS is inherently a support layer for the things the user opens by choice; I'd argue any expansion of its resource footprint ought to have a clear justification.
I mean, it does. The "fairy tale" that one hears in high school doesn't, but we put aside fairy tales when we became adults.
If you are done with the "undergrad level" of Popper and Kuhn, it is worth reading Imre Lakatos's work on the philosophy of science. It contains a moment where one realizes that research programs live or die by this "impact factor", and that this living or dying is a key part of the overall methodology of science. The gist is that science is actually participating in a survival-of-the-fittest evolution, with certain foundational ideas as the "genes" which "reproduce". So scientific ideas are good or bad in no small part due to their ability to create further scientific research along similar lines. A low impact factor therefore directly says "along this particularly important-to-science axis, this journal sucks."
You are right that "the impact factor of a journal is meaningful and provides a simple/preliminary heuristic for measuring up _some_ aspects of a paper published in it" - but this is not what you wrote.
1. Imre Lakatos and maturity are great, both implying that you should not apply the aforementioned rule of thumb to an individual paper - an individual in the population - whether it was published in Nature or an insignificant contender.
2. Your memetic approach is also good, but incomplete: the objective function in the case of these journals is maximizing the impact factor - so we can conclude that "PrevMed is less successful at maximizing the impact factor than some competitors, or it is a younger journal, or ..." Yes, impact factor and quality correlate in the long run, but we are not at undergrad level.
3. "A low impact-factor ... directly says" - Not directly. Also, most of the journals - not to mention conferences - do not even have an impact factor.
4. "...this journal sucks" - Most of the people writing in these kinds of journals have given up a lot to contribute something modest. The editor of such a journal is probably emailing with reviewers at 1am or so. Just saying...
But to stretch the analogy a bit further, would it be fair to say that impact factor is very like sexual selection for extreme display traits that are otherwise detrimental to the wellbeing of the species?
Yes, impact factor matters to current science as practiced, but there is plenty of good criticism showing that (at least as it is currently calculated) it is a lousy measure of what is likely to end up being true, reproducible and useful.
> would it be fair to say that impact factor is very like sexual selection for extreme display traits that otherwise are detrimental to the wellbeing of the species?
If I read every PoS article vaguely related to my research, I'd never get anything done. In practice, I don't pay attention to impact factor. But I do pay attention to who's publishing. And that's basically the same as impact factor, in practice.
> that it is a lousy measure of what is likely to end up being true, reproducible and useful.
I don't think so.
High impact factor publications are MUCH more likely to be quality science than low impact factor publications (at least in my area).
The major venues would have to get at least two orders of magnitude worse before they became bad indicators of quality.
Of course, and obviously, that does not entail that all work published in high impact factor journals is high-quality.
I think the fundamental problem is just that you vastly underestimate the enormous volume of utter crap out there.
You have N groups of something (persons, sales units, cars, tools, whatever), and you need to find the values that are present in all of them. This problem comes up all the time, in one form or another.
With sets it's trivial: take the first set and compute a cascading intersection with every other set. At the end you're left with only the values that appeared in every one. In very clear Python the code might look like this:
common = ALL_SETS[0]
for _set in ALL_SETS[1:]:
    common = common.intersection(_set)
return common
Now, you can argue that under the hood that's still going to do N expensive iterations, and you'd be right. This is where proper optimisations come in - when provided by a robust stdlib, we expect the set operations to be implemented with bitmaps.[0, 1] These are not something you'd see in a casual implementation.
Btw, in the above code example I have deliberately omitted a trivial optimisation. Let's leave that as an exercise for the reader. :)
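For completeness, here's a self-contained, runnable version of the snippet above, wrapped in a function and exercised with made-up sample data (the group contents are purely illustrative):

```python
def common_values(all_sets):
    """Return the values present in every set in all_sets."""
    # Start from the first set and intersect with each remaining one.
    common = all_sets[0]
    for s in all_sets[1:]:
        common = common.intersection(s)
    return common

# Illustrative data: tools present in three workshops.
groups = [
    {"hammer", "saw", "drill", "wrench"},
    {"saw", "drill", "pliers"},
    {"drill", "saw", "level"},
]
print(sorted(common_values(groups)))  # ['drill', 'saw']
```

(As a side note, Python's built-in `set.intersection` accepts multiple arguments, so `ALL_SETS[0].intersection(*ALL_SETS[1:])` does the same thing in a single call.)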
You're right that part of the problem is, indeed, theoretical (it remains to be shown that BPP ≠ BQP, even if it is widely believed to be true), but part of the problem is also physical (we need to build a model of BQP---i.e. a quantum computer---to show that there is a physically realizable model of it).