Science needs an intervention similar to what the CRM process (https://en.wikipedia.org/wiki/Crew_resource_management) did to tamp down cowboy pilots who flew their planes into the sides of mountains because they wouldn't listen to copilots, who in turn were too timid to speak up.
> ...on the evening of Dec 28, 1978, they experienced a landing gear abnormality. The captain decided to enter a holding pattern so they could troubleshoot the problem. The captain focused on the landing gear problem for an hour, ignoring repeated hints from the first officer and the flight engineer about their dwindling fuel supply, and only realized the situation when the engines began flaming out. The aircraft crash-landed in a suburb of Portland, Oregon, over six miles (10 km) short of the runway.
It has been applied to other fields:
> Elements of CRM have been applied in US healthcare since the late 1990s, specifically in infection prevention. For example, the "central line bundle" of best practices recommends using a checklist when inserting a central venous catheter. The observer checking off the checklist is usually lower-ranking than the person inserting the catheter. The observer is encouraged to communicate when elements of the bundle are not executed; for example, if a breach in sterility has occurred.
Maybe not this system exactly, but a new way of doing science needs to be found.
Journals, scientists, funding sources, universities and research institutions are locked in a game of publish-or-perish incentives that encourages data hiding and non-reproducible results.
The current system relies on the marketplace of ideas - i.e. if you publish rubbish, a competitor lab will call you out. So it's not the same as the two people in an aircraft cockpit - in the research world that plane crashing is all part of the market adjustment, weeding out bad pilots/academics.
However it doesn't work all the time, for the same reason that markets don't work all the time - people tend to form cosy cartels to avoid that harsh competition.
In academia these cartels form around grants, either directly (are you inside the circle?) or indirectly ("the idea obviously won't work, as the 'true' cause is X").
Not sure you can fully avoid this - but I'm sure there might be ways to improve it around the edges.
> The current system relies on the marketplace of ideas - i.e. if you publish rubbish, a competitor lab will call you out.
Does not happen in practice. Unless you're driven by spite, fanatical rigor, or just hate their guts, there is zero incentive to call out someone's work. Note that very little of what is published is obvious nonsense. But a lot has issues like "these energy measurements are ten times lower than what I can get, how on earth did they get that?" Maybe they couldn't, or maybe you misunderstood and need to be more careful when replicating? Are you going to spend months verifying that some measurements in a five-year-old paper are implausible, or do you have better things to do?
Sure - such direct contradiction is rare - "call out" was the wrong phrase - that mostly only happens when people try to replicate extraordinary claims.
Much more common is that another paper is published with a different conclusion in that particular area of science, which may or may not reference the original paper - i.e. the wrong stuff gets buried over time by the weight of others' findings.
You could say that part of the problem is that correction is often incremental.
In the end the manipulation by Masliah et al. came out - science tends to be incremental rather than all big breakthroughs, and I'd say any system will struggle to deal with bad-faith actors.
In terms of bad-faith actors - you have two approaches - looking at better ways to detect them, and looking at the properties of the system that perhaps create perverse incentives - but I always think it's a bad idea to focus too much on the bad actors - you risk creating more work for those who operate in good faith.
How is that correction mechanism supposed to work though? Do you mean the peer review process?
Friends in big labs tell me they often find issues with competitor lab papers, not necessarily nefarious but like "ah no, they missed thing x here so their conclusion is incorrect"... but the effect of that is just that they discard the paper in question.
In other words: the labs I'm aware of filter papers themselves on the "inbound" path in journal clubs, creating a vetted stream of papers they trust or find interesting for themselves... but that doesn't provide any immediate signal to anyone else about the quality of the papers.
> How is that correction mechanism supposed to work though? Do you mean the peer review process?
No. I meant somebody else publishes the opposite.
One of the things you learn if you are a world expert in a tiny area (i.e. a PhD student) is that half the papers published in your area are wrong/misleading in some way (not necessarily knowingly - they might just not know some niche problem with the experimental technique they used).
I agree peer review is far from perfect, and there is a problem in that a paper being wrong is still a paper in your publication stats, but in the end you'd hope the truth will out.
People got all excited about cold fusion - then cold reality set in - I don't think the initial excitement about it was a bad thing - sometimes it takes other people to help you understand how you've fooled yourself.
I expressed the same idea here not too long ago - the value of any one individual paper is exactly 0.0 - and was downvoted for it, but I believe this is almost the second thing you learn after you publish, and what seems to confuse the "masses" the most.
You (as a mortal human being) are not going to be able to extract any knowledge whatsoever from an academic article. They are _only_ of value to (a) the authors, and (b) people/entities who have the means to reproduce/validate/disprove the results.
The system fails when people who can't really verify the results still use them. Which happens frequently... (e.g. the news).
I'm in academia, and I think it has many good points.
The number one issue in my mind is that competitor labs don't call you out. It's extremely unusual for people to say, publicly, "that research was bad". Only in the event of the most extreme misconduct do people get called out, rather than for just shoddy work.
Yeah I don't think CRM is the correct thing in this case... I just think that there needs to be some new set of incentives put in place such that the culture reinforces the outcomes you want.
There actually are checklists you have to fill out when publishing a paper. You have to certify that you provided all relevant statistics, have not doctored any of your images, have provided all relevant code and data presented in the paper, etc. For every paper I have ever published, every last item on these checklists was enforced rigorously by the journal. Despite this, I routinely see papers from "high-profile" researchers that obviously violate these checklists (e.g. no data released, and not even a statement explaining why data was withheld), so it seems that they are not universally enforced. (And this includes papers published in the same journals around the same time, so they definitely had to fill out the same checklists as I did.)
Not to mention that scientists spend a crazy amount of time writing grant proposals instead of doing science. Imagine if programmers spent 40% of their time writing documents asking for money to write code. Madness.
Indeed. You do need some idea of what you are going to do before being funded.
The tricky bit is that in research - and this is a bit like the act of programming - you often discover important stuff in the process of doing, and the more innovative the area, the more likely this is to happen.
Big labs deal with this by having enough money to self-fund prospective work or support things for extra time - the real problem is that new researchers, who often have the new ideas, are the most constrained.
If you work at a large company, it could consider thousands of different new major features or new products. But it only has the budget to pay for 50 per year.
So obviously there's a whole process of presentations, approvals, refinement, prototypes, and whatnot to ensure that only the best ideas actually make it to the stage where a programmer is working on them.
Same thing with a startup, but it's the founders spending months and months trying to convince VCs to invest more, using data and presentations and whatnot.
It's not a problem -- it's the foundation of any organization that spends money and wants to try new things.
How else would it work? The onus needs to be on someone to make sure we are doing worthwhile things. Like anything else in life, you need to prove you deserve the money before you get it. Often that means you need to refine your ideas and pitches to match what the world thinks it needs. Then once you get a track record it lowers your risk profile and money comes more easily.
Sounds sensible, but the major unasked question it avoids is: was the current funding and organizational structure of science in place when past scientific achievements were made?
The impression I get from anecdotes and remarks is that pre-1990s, university departments used to be the major scientific social institution, providing the organization where the science was done, with a feedback cycle measured in careers. Faculty members would socialize and collaborate or compete with other members. Most of the scientific norms were social, which was possible because the stakes were low (measured in citations, influence and prestige only).
That is quite unlike the current system centered on research groups formed around PIs: a machine optimized for gathering temporary funding for non-tenured staff so that they can produce publications and 'network', using all that to gather more funding before the previous grant runs out. No wonder social norms like "don't falsify evidence; publish when you have true and correct results; write and publish your true opinions; don't participate in citation laundering circles" can't last. The possibility of failure is much more frequent (every grant cycle), and the environment is so competitive that you get only a few shots at a scientific career before you are out.
Imagine if everybody in every software company was an "engineer," including the executives, salespeople, and market researchers. Imagine if they only ever hired people trained as software engineers, and only hired them into software development roles, and staffed every other position in the company from engineering hires who had skill and interest at performing other roles. That's how medical practices, law firms, and some other professions work.
For example -- my wife is an architect, so I'm aware of specific examples here -- there are many architecture firms that have partners whose role consists of bringing in big clients and managing relationships with them. They are never called "sales executives" or "client relationship management specialists." If you meet one at a party, they'll tell you they're an architect.
Apparently it's the same thing with scientific research. When a lab gets big enough, people start to specialize, but they don't get different titles. If you work at an arts nonprofit writing grant applications, they will call you a grant writer, but a scientist is always a scientist or a "researcher" even if all they do is write grant applications.
> Imagine if programmers spent 40% of their time writing documents asking for money to write code.
The daily standup I'm no longer taking part in at work started today at 9:30 as always, and currently (11:50) has people excusing themselves because they have other meetings...
We need a revolution in exposing bad managers and making sure they lose their jobs. For every kind of manager. But that situation isn't very far from normal.
If this were applied in science we'd still be flying blind with regard to stomach ulcers, because a lot of 'researchers' thought bacteria couldn't live in the stomach (which is obviously a BS reason).
Yes, CRM procedures are very good in some cases, and I would definitely apply them in healthcare for things like procedures, the issues mentioned, etc.