
nah it's not top tier college, it's just being insanely smart in the mathy way.


I work at one of these funds in the UK. The majority of our quants -- and, indeed, our front office engineers working on the trading stack -- are Oxbridge grads.

If you want to be a quant, getting a post-graduate education at a top-tier university is pretty much a requirement. If you want to be a software engineer/SRE/etc. in the front office, you might be able to find your way in by going to the tech industry proper and then pivoting.


Huh, in my circles the real trophy jobs are quant roles at top firms. And given the caliber of the people there - and the compensation - they're arguably not trophy roles. Tbh ppl going for the pm/vc path are seen mostly as tools.


The people I know working as quants are jealous of me getting free food & working from my bed.


so based. But I do have to say that when the president of one of your top institutions resigns over misconduct, the level of misconduct is probably a fair amount over optimal. And speaking with my friends in, eg, alzheimer's research, the fraud and inability to trust the veracity of unreplicated results does really slow down work in the field.


On the whole, all these scandals in manipulated research have deeply shaken my trust in many of our scientific institutions. It's clear by now this isn't the case of a few bad apples - our scientific institutions are systemically broken in ways that promote spreading fraudulent results as established scientific truth.


As an aside, the phrase "a few bad apples" is actually originally "a few bad apples spoil the barrel", referencing the fact that a bad/overripe apple causes nearby apples to quickly ripen and go bad, which is now known to be due to ripe apples producing ethylene gas that accelerates the ripening of other nearby apples. The phrase originally meant that one bad thing corrupts and destroys everything associated with it. The discovery of a bad apple actually means everything is already irrevocably destroyed, which is the reason for not tolerating even a single bad apple.

A modern metaphor with a somewhat similar meaning to the original is: "A fish rots from the head down." It points out that organizational failures are usually the result of bad leadership. A rotten leadership will quickly result in a rotten organization. Therefore, it is important to make sure the leadership of an organization is not rotten. It also points out that low-level failures indicate there are deeper high-level failures. If the line level is screwed up, the leadership is almost certainly just as screwed up. The fix is replacing the rotten leadership with a new one, as lower-level fixes will not fix the rotten head.

Another, more direct equivalent metaphor is a Chinese saying translated as: "One piece of rat poop spoils the pot of soup." That is hopefully self-explanatory. We should probably use it instead of "a few bad apples" as nobody will reverse the meaning of that one.


As an aside to your aside, it's also the case that phrases/words change meaning over time, as one usage grows above another.

In this case, the "a few bad apples are not representative of a group" meaning has grown above the "One bad apple spoils the barrel" meaning, and so the phrase has changed, for better or worse.

Maybe it would be best if everyone used the long version instead of the short one. When you say/write "A few bad apples", the meaning is ambiguous, but if you use the long version, it's not. Problem solved :)


> In this case, the "a few bad apples are not representative of a group" meaning has grown above the "One bad apple spoils the barrel" meaning

Most of the time when I hear the "only a few bad apples, the rest of us are fine" meaning it's coming right from the mouths of badly spoiled apples twisting the meaning of those words and popularizing that usage to suit their agendas.

Generally, I think that there's nothing wrong with pushing back against words and phrases used incorrectly. We get to decide how words are used, and a large part of that decision making process involves social pressure and education. I think it's particularly useful to defend the meaning of words and phrases when they're being deceptively misused and promoted.


> the "a few bad apples are not representative of a group"

I have never heard that phrase; it has always been that the few spoil the whole.

I have heard people say "the proof is in the pudding", which means nothing at all, when the real phrase is "The proof of the pudding is in the tasting".

I'm from England and I speak English, so maybe it hasn't translated well to Americlish.


I think the larger "issue" is that the phrase colloquially means the exact opposite of the original observation, that a bad apple MEANS the bunch is spoiled. It's worse because this changing of the meaning is perpetuated by those same bad apples themselves.

"the proof is in the pudding" is a much more benign change. It's literally just a shortening, but no meaning is lost... if you want the proof, you'll find it in the pudding (implying you should try the pudding to verify your assumptions)


"Literally" is another word where the meaning changed from being the literal opposite of what it was "meant" to originally mean, not sure one is "worse" than the other. It's just change, which will continue to happen.


I would argue that the meaning has never changed. There is just an additional slang variation used by a subset of English speakers. Much like “wicked” was once slang for “good” and how Londoners don’t literally ring people using the bones of dogs (“dog and bone” is Cockney rhyming slang for “phone”, in case the reference doesn’t translate).


> the original observation, that a bad apple MEANS the bunch is spoiled

Too few people have enough apple trees in their lives to preserve the meaning.


"It's just a few bad apples" is a common response to police misconduct here in the States, with the attitude of "why are you making such a big deal out of this?"

The original saying, of course, is all about why you have to make a big deal out of this, for reasons that apply to both apples and cops.


> I have heard people say "the proof is in the pudding", which means nothing at all, when the real phrase is "The proof of the pudding is in the tasting".

To be fair, the "real" phrase you give here doesn't make much more sense to me. Even assuming the term "pudding" across the pond refers to more than just the fairly niche dessert it is in America, what does it mean for pudding to have "proof"? Is it some sort of philosophical thing where you don't accept that the pudding exists unless you taste it (which I feel isn't super convincing, since if we're going to have a discussion, we kind of have to accept that each other exists without having similar first-hand "proof", so we might as well accept that pudding exists as well)? I know there's a concept of something called "proofing" in baking, but I'm pretty sure that happens long before people taste the final product.

In general, I don't find most cliches to be particularly profound. "It is what it is" is just a weird way to state an obvious tautology, but somehow it's supposed to convince me that I should just passively accept whatever bad thing is happening? "You can’t teach an old dog new tricks" isn't universally true, but it apparently also is supposed to be a convincing argument in favor of inaction. "You can’t have your cake and eat it too" is probably the most annoying to me, because the only way anyone ever wants to "have" cake is by eating it; no one actually struggles to decide between eating their cake or keeping it around as a decoration or whatever.

There's something about stating something vaguely or ambiguously that seems to make it resonate with people as profound, and I've never been able to understand it. In my experience, thought-terminating cliches are by far the most common kind.


It’s “proof” as in to test. Like “proof reading”. The point being, the real test of how good something is, is to use it (for its intended purpose).

A vaguely similar sentiment to when people say “eating your own dog food” (or words to that effect) to mean testing something by using it themselves. Albeit the pudding proverb doesn’t necessitate the prover to be one’s self like “dog fooding” does.


This is just an excuse for ignorance and the annoying habit people have of repeating something they heard but don't understand.

I think it's right to correct it because when people misuse this phrase, it isn't gaining a new meaning--it's making it meaningless. Why apples? The comparison to apples adds no information or nuance.

Like when there's a story about police corruption, and someone says "they're just a few bad apples, not all cops are bad." Again, why compare them to apples? Why not just say a few bad cops?

This isn't words/phrases changing meaning, it's losing meaning.


> This isn't words/phrases changing meaning, it's losing meaning.

It is literally not, it still means something, just not the same as it originally meant. This happens all the time, with "literally" being one of the best examples of something that literally means the opposite of what it used to mean.


The problem with this is that it creates ambiguity in communication. Both the old meaning and the new one will circulate together, especially among different demographics, and cause potentially severe misunderstandings.


I don't think the phrase has changed meaning?

> The phrase originally meant that one bad thing corrupts and destroys all associated.

It's saying if you don't remove the bad apple, you will get a lot of bad apples in the future. The presence of a bad apple doesn't imply all of them are already spoiled right now. If you're seeing other apples that still haven't spoiled, it suggests you still have time to do damage control. That seems consistent with how the metaphor is used nowadays.


I think there's a transition phase between the two that people miss out on. I recall hearing "let's not let a few bad apples spoil the bunch" which is an acknowledgement of the original phrase, reworked to implore listeners not to throw it all away. You could say "let's not throw the baby out with the bathwater" but I guess some people are apple enthusiasts?


I would like to point out that "scientific truth" does not really exist, or at least is far from straightforward to define and establish. Basically, you should see each piece of research as evidence for a certain hypothesis, and the more evidence is available, the more that hypothesis is believable.

But the larger issue here is that all public institutions are, by that definition, broken. For example, businesses also won't hesitate to spread falsehoods to sell their stuff, governments will try to convince their people that they are needed through propaganda and policing, and so on.

How do we solve these problems? We have laws to regulate what businesses can't do (nevermind lobbying), and we split governments' responsibility so that no single branch becomes too powerful. In general, we have several independent institutions that keep an eye on each other.

In case of science, we trust other scientists to replicate and confirm previous findings. It is a self-correcting mechanism, whereby sloppy or fraudulent research is eventually singled-out, as it happened in this and many other cases.

So I guess the gist of what I want to say is that you're right in not trusting a piece of research just because it was done by a reputable institution, but look for solid results that were replicated by independent researchers (and the gold standard here is replication, not peer review).


> businesses also won't hesitate to spread falsehoods to sell their stuff

They do hesitate. It's quite hard to catch businesses openly lying about their own products because, as you observe, there are so many systems and institutions out there trying to get them. Regulators but also lawyers (class action + ambulance-chasers), politicians, journalists, activists, consumer research people. Also you can criticize companies all day and not get banned from social media.

A good example of what happens when someone forgets this is Elizabeth Holmes. Exposed by a journalist, prosecuted, jailed.

Public institutions are quite well insulated in comparison. Journalists virtually never investigate them, preferring to take their word as gospel. There are few crimes on the books that can jail them regardless of what they say or do, they are often allowed to investigate themselves, criticism is often branded misinformation and then banned, and many people automatically discard any accusation of malfeasance on the assumption that as the institutions claim to be non-profit, corruption is nearly impossible.

> It is a self-correcting mechanism, whereby sloppy or fraudulent research is eventually singled-out, as it happened in this and many other cases.

It's not self correcting sadly, far from it. If it were self-correcting then the Stanford President's fraud would have been exposed by other scientists years ago, it wouldn't be so easy to find examples of it and we wouldn't see editors of famous journals estimate that half or more of their research is bad. In practice cases where there are consequences are the exception rather than the norm, it's usually found by highly patient outsiders and it almost always takes years of effort by them to get anywhere. Even then the default expected outcome is nothing. Bear in mind that there had been many attempts to flag fraud at the MTL labs before and he had simply ignored them without consequence.


Alternatively there is a baseline of fraudulent behavior in any human organization of 1-5% and since there are tens of thousands of high-profile researchers this sort of thing is inevitable. The question you should be asking is whether the field is able to correct and address its mistakes. Ironically cases like this one are the success stories: we don’t have enough data to know how many cases we’re missing.


I don't think the baseline is the same. The more competition, the more temptation to cheat. When the margins to win are small enough, cheaters are disproportionately rewarded.

Think of Tour de France. Famously doping-riddled. There are a lot of clean cyclists, but they are much less likely to be able to compete in the tour.

You can fight cheating with policing: doping controls, etc. But as the competition gets more extreme, the more resources you need to spend on policing. There's a breaking point, where what you need to spend on policing exceeds what you get from competition.

This is why almost no municipalities have a free-for-all policy for taxis. There are too many people technically able to drive people for money. All that competition drives prices lower, sure, but asymptotically: you get smaller and smaller price reductions the more competition you pile on, while the incentives for taxi drivers to cheat (by evading taxes, doing money laundering as a side gig, etc.) keep growing. London did an interesting thing - with their gruelling geography knowledge exam, they tried to use all that competitive energy to buy something other than marginally lower prices. Still an incentive to cheat, of course, but catching cheaters on an exam is probably cheaper and easier than catching cheaters in the economy.

(Municipalities that auction taxi permits get to keep most of the bad incentives, without the advantage of competition on price.)


It's only a story because he's president, if he were only a researcher/professor this would not even be a story. This is NOT a success story, it shows that this fraudulent behavior is endemic and an effective strategy for climbing the academic ladder.

A success story would be this is exposed at large... we work out some kind of effective peer-reproduced tests... and the hundreds/thousands of cheating professors are fired.


Endemic means "regularly occurring". How many examples of this kind of misconduct are you aware of? Ok, now, what's the denominator? How much research is actually conducted? I'm personally familiar with 3 fields (CS, bio, and geology) and what I've learned is that the number of labs --- let alone projects --- is mind-boggling. If your examples constitute 1% of all research conducted --- which would represent a cosmic-scale fuckload of research projects --- how much should I care about it?


BMJ: Time to assume fraud? https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-hea...

Study claims 1 in 4 cancer research papers contains faked data https://arstechnica.com/science/2015/06/study-claims-1-in-4-...


So let's talk about misleading headlines and citations in journal articles. I would argue that arstechnica is one of the better news sources. Despite that, if we go to the article, there is a link claiming there has been "a real uptick in misconduct". Now if we click through that link, it does claim that there has been an increase in fraud as a lead-in (this time without a link), but the article is about something completely different (i.e. that almost half the retracted papers are retracted due to fraud).

As an aside, the article cites that there have been a total of 2000 retracted papers in the NIH database. Considering that there are 9 Million papers in the database overall, that is a tiny percentage.


> ... if we click through ...

So you deflect from the entire content of the article with that distraction? And then an additional misdirection regarding retraction? Why?


> > ... if we click through ...

> So you deflect from the entire content of the article with that distraction? And then an additional misdirection regarding retraction? Why?

What do you mean? I take issue with the headlines and reporting. And I believe if one claims lack of evidence, sloppy evidence or fraudulent evidence, one should be pretty diligent about one's own evidence.

Regarding the claims in the article. If you look at the 1 in 4 article you find that the reality is actually much more nuanced, which is exactly my point. The reporting does not necessarily reflect the reality.

If you call that deflection...


The ArsTechnica article was about a paper by Morten Oksvold that claimed that 25% of cancer biology papers contain duplicated data.

One nuance is that his approach only focused on one easily identifiable form of fraud: Western blot images that can be shown to be fraudulent because they were copies of images used in different papers. Of all the potential opportunities for fraud, one would think this represents just a small portion.

If there are other nuances you care to mention, I'm all ears.

Instead, you refer to an entirely different article, as if the article I cited has no relevant content, which misleads casual readers of this comment stream. To paraphrase your comment in a less misleading way: "Inside this article you can find a link to an entirely different article whose content does not support the headline of the original article."


Well, one thing you might want to do before doubling down on the Oksvold study is work out the percentage of those papers that were likely to have misused western blot images (it's the bulk of the paper, impossible to miss), and then read the part of the Ars article (again: the bulk of the article) that discusses reasons why different experiments might have identical western blot images (one obvious one being that multiple experiments might get run in the same assay).

Instead, you're repeatedly citing this 25% number as if it was something the paper established. Even the author of the paper disagrees with you.


Double down? Repeatedly? I posted a link to an article with its headline, and only later, when rebutting a comment that implied the article was about something "completely different", I mention that the article is about the Oskvold study and its finding of duplication in 25% of papers. The paper did in fact establish that number (unless you want to quibble about 24% vs. 25%).

Yes, the ArsTechnica headline is poorly written, and not supported by the content of the article, because not all instances of duplication are fraud, but we can clarify that issue by quoting the article itself: "... the fact that it's closer to one in eight should still be troubling."


devil's advocate - '1 in 4 studies are fake, says "study"'


So just because one person is cheating, it means all academics are cheating?

FWIW, most top-ranked CS conferences have an artifact evaluation track, and it doesn't look good if you submit an experimental paper and don't go through the artifact evaluation process. Things are certainly changing in CS, at least on the experimental side.

It's also possible that theorems are incorrect, but subsequent work that figures this out will comment on it and fix it.

The scientific record is self-correcting, and fraud / bullshit does get caught out.


It's not just "one person", there is wide-spread fraud across many disciplines of academia. The situation, of course, is vastly different across subjects/disciplines, e.g. math and CS are not really much affected and I would agree they're self-correcting.

I might agree they're self-correcting in the (very) long-term, but we're seeing fictitious results fund entire careers. We don't know the damage that having 20+ years of incorrect results being built upon will have... And that's not to speak of those who were overlooked, and left academia, because their opportunities were taken by these cheaters (who knows what cost that has for society).


The very fact that the fraud is discovered, that reporters amplify it, and that it can bring down the president of the university, is evidence to me that the system still works.


Maybe? I'd want to see a clear model of flows and selection biases before I concluded that.

Another way to look at it: perhaps Tessier-Lavigne only got this scrutiny because he was president of the university. And the fact that they didn't guarantee anonymity when "not guaranteeing anonymity in an investigation of this importance is an 'extremely unusual move'" might be a sign that the scrutiny was politically diminished.

So it could be that most of the equally dubious researchers don't get caught because not enough attention is paid to patterns like this except when it's somebody especially prominent. Or it could be that this one was not as well covered up, perhaps because of the sheer number of issues. Or that the cross-institution issues made Stanford more willing to note the wrongdoing. Or that Stanford is less likely to sweep things under the rug because of its prominence. Or just that there was some ongoing tension between the trustees and the president and that this was an opportunity to win a political fight.


These are good points and hard to know. But Retraction Watch tracks stories of both mistakes and fraud in published research, across universities:

https://retractionwatch.com/


A tenacious undergrad doing journalism as a hobby is not a system.


The fate of the world lies in the hands of the young and inexperienced.

Grad students, Supreme Court clerks, 19-year-old soldiers.


Sure, any system with a false negative and false positive rate 'works'.


No. This level of scrutiny and diligence is rare, and was selectively applied based on the target's profile. The "field" did nothing about this over 20 years. A computer science freshman did this as a hobby, not as a participant in neuroscience.

Perhaps "nothing" is too harsh. Various people in the field raised concerns on several occasions. But the journals did nothing. The "field" still honoured him. And _Stanford_ did nothing (except enable him and pay him well) until public embarrassment revealed the ugliness.


This is the important and troubling point. Everyone trumpets science as a model of a rational, self-correcting social enterprise. But we see time and time again that it takes non-scientists to blow the whistle and call foul and gin up enough outside attention before something gets done to make the correction. That puts the lie to the notion of self-correction.


This is an issue at the department politics level. For the scientific field, once someone starts retracting papers (and arguably, even before this), everybody knows that you should take person X's papers with a huge grain of salt.

E.g., in math / theory, if someone has a history of making big blunders that invalidate their results, you will be very hesitant to accept results from a new paper they put on arXiv until your community has vetted the result.

So yes, I do trumpet science as a model of a rational, self-correcting social enterprise, at least in CS.

Other sciences like biology and psychology have some way to go.


The thing is that replication is inherently easy in CS. Especially now that people are expected to post code online.

Forcing authors to share raw data and code in all papers would already be a start. I don't know why top impact-factor journals don't require this already.
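
To make that concrete, here is a minimal sketch of what a replicable experiment script can look like (the file names, seed and "analysis" are hypothetical stand-ins, not anything from a real paper): pin the randomness, load the shared raw data, and write the reported numbers to a file a reviewer can diff against the paper.

    # repro_run.py -- hypothetical minimal example of a shippable experiment.
    import json

    import numpy as np

    SEED = 1234
    rng = np.random.default_rng(SEED)   # any stochastic step below reproduces exactly

    def run_experiment(data: np.ndarray) -> dict:
        # Stand-in analysis: the mean plus a bootstrap standard error.
        boot = rng.choice(data, size=(1000, data.size), replace=True).mean(axis=1)
        return {"mean": float(data.mean()), "bootstrap_se": float(boot.std())}

    if __name__ == "__main__":
        # Raw data shipped alongside the paper, assumed to be one number per line.
        data = np.loadtxt("dataset.csv")
        results = run_experiment(data)
        # Results written next to the code so a reviewer can diff them against the paper.
        with open("results.json", "w") as f:
            json.dump({"seed": SEED, "results": results}, f, indent=2)
        print(results)

Nothing here is hard, which is rather the point the parent is making.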


I completely agree. It's a pity that this isn't becoming standard in fields affected by the replication crisis. I would be happy to be corrected if someone has heard / experienced otherwise.


> you will be very hesitant to accept results from a new paper they put on arXiv until your community has vetted the result.

Forgive my ignorance but I thought that was SOP for all papers. Is it not?


Well not really, right? Let's suppose some well known, well respected author that has a history of correct results puts up a new paper. I (and I think most people) will assume that the result is correct. We start to apply more doubt once the claimed result is a solution to a longstanding open problem, or importantly, if the researcher has a spotty track record for correctness (in math/TCS) or falsifying results (in experimental fields).

But really we shouldn't be talking about math errors and falsification in the same category.


The problem is that we don't know what the baseline really is. We know that between a third and a half of results from peer reviewed papers in many domains cannot be replicated. Looking closer, we see what look like irregularities in some of them, but it's harder to say which of them are fraud, which are honest mistakes, and which of them just can't be replicated due to some other factors. But because so many of these studies just don't pay off for one reason or another, I would agree that it is getting really hard to rely on a process which is, if nothing else, supposed to result in reliable and trustworthy information.


Where is that number of 1/3-1/2 coming from? And which fields? I find that very hard to believe (at least if we exclude the obvious fraudulent journals, where no actual research gets published)


I think he's referencing the replication crisis that was a big deal a few years ago. Psychology was hit hard (unsurprisingly), but a few other fields in the biology area were also hit.


It's worst in Psychology and the Social Sciences, but it's not limited to them. Per Wikipedia:

> A 2016 survey by Nature on 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers have tried and failed to reproduce another scientist's experiment results (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others), and more than half have failed to reproduce their own experiments. But fewer than 20% had been contacted by another researcher unable to reproduce their work. The survey found that fewer than 31% of researchers believe that failure to reproduce results means that the original result is probably wrong, although 52% agree that a significant replication crisis exists. Most researchers said they still trust the published literature

Not sure if the results of that online study have (or can) themselves be reproduced, however. It's turtles all the way down.


Skimmed the wiki on the replication crisis, and people have actually tried to systemically replicate popular studies and found similar results. You could say there has been a successful replication of failure to replicate.


If a field takes two decades to "correct" its mistakes, then there are several things wrong with it. And if we have top positions held by unethical people, who have got away with it, and possibly climbed to the top because of it, then I do not know what to feel or say about this.


It's taken String Theory a few decades to correct itself.


Any human organization?

I don't expect 1-5% fraud in airline pilots, bank tellers, grocery store clerks, judges, structural engineers, restaurant chefs, or even cops (they can be assholes but you don't have to bribe them in functional countries).

I think academics can do better than 1-5% fraudulent.


What? In all of the ones you mentioned there is a known significant amount of fraudulent behaviour.

Store clerks, theft is about 1-2% of sales typically. It has been said for years that the majority of that theft is from employees. Airline pilots have been known to drink during their flights (or leave their seat for other reasons that are not in the rules).

Cops, I mean don't get me started, just the protection of a cop who has done something wrong by the other cops would count as fraudulent, but I don't see many cops going after their own black sheep.

Judges, in Germany deals (i.e. the accused pleads guilty to lesser charges so the bigger ones get dropped) are only legal under very limited circumstances (almost never, and they need to be properly documented). Nevertheless, in studies >80% of lawyers reported that they had encountered these deals.

I think you seriously underestimate the amount of fraudulent behaviour.


Also coming back to judges: the behaviour by Thomas and Alito regarding presents etc. would count as serious scientific misconduct in academia. So there's a significant percentage just there already.


I expect far higher levels of fraud in these professions.


I've come to believe that science is mostly about popularity and not about truth-finding. As long as peers like what you write, then you will get through the reviews and get cited. Feynman called this Cargo Cult Science. I think much of science is like this; see also "Why Most Published Research Findings Are False". Not much has changed since the publication of that paper. A few Open Science checks are not gonna solve the fundamental misalignment of incentives.


Wholeheartedly agree, really a shame to see what it’s become. Wish I could still see research the way I dreamed it was as a child.


it is impossible for most scientists to understand / critically think about all the research coming out from so many institutions, so most of these academics mainly focus on research coming from someone they respect / institutions they respect. so yes, it is kind of like a popularity contest, but i would argue that most things in life are. due to the limited nature of the human brain, we cannot think independently about everything for ourselves and have to rely on external judgements about what is important / true etc...


It is absolutely a popularity contest. The biggest problem is that many academics are reluctant to deviate too far from current consensus in fear of damaging their reputation.

The result is that research in many fields tends to stagnate and reinforce old ideas, regardless of whether they are right or wrong.


"It is difficult to get a man to understand something when his salary depends on his not understanding it." -- Upton Sinclair


The peer review system is not designed to catch fraud, it's designed to catch scientific or experimental errors.

Giving up on science is such a vast overgeneralization. You could take your statement and replace "manipulated research", "scientific institutions" and "established scientific truth" with just about any negative article in any domain. You could just as easily make this statement about startups (Theranos, Juicero), or government, or religion, or suburbs, or cities...


> The peer review system is not designed to catch fraud, it's designed to catch scientific or experimental errors.

Yes.

> Giving up on science is such a vast overgeneralization. You could take your statement and replace "manipulated research", "scientific institutions" and "established scientific truth" with just about any negative article in any domain. You could just as easily make this statement about startups (Theranos, Juicero), or government, or religion, or suburbs, or cities...

Institutions go through similar cycles of breaking and systemic reform. Not surprised that you can see patterns in other domains.


It often does neither :( The only real protection from fraud, mistakes and poor science is replication. If results can't be replicated by others, it is not science.


If you implement a strategy such as publish-or-perish, exceedingly smart people will game the system to win. Any metric gets gamed.

Look at papers that have real impact: they get cited. Look at ones that don't…


And not just that, but rewarding outsized effect sizes so that you reward folks who create the biggest lies with fraudulent stats.


And you have some of the smartest brains gaming it too... Such a sad use of good neurons :(


I wouldn’t presume that the smartest brains are gaming the system. Most likely, it’s mediocre hucksters who have bullied and networked their way into a position of authority. Being good at social engineering != to being the best researcher.


I've seen some situations where smart people did bad research because of deadlines related to work visas. Science doesn't care how smart you are or if you could end up without a home. It will take as many iterations over an experiment design as it takes before being fruitful.


I would. Lots of "the smartest brains" are mediocre hucksters who have bullied and networked their way into a position of authority. This doesn't mean they aren't "the smartest brains".

IME some of the more effective engineers I've worked with have gravitated towards politics, not "raw technical skill". It's not because they prefer it. They use their "smartness" to win.

The problem is that being good at social engineering >> anything else. Intelligent people often look at the system and say: "What's the point in naively following this when no one is successful unless they game the system?"

What's the point of putting one of your great, well-considered ideas into the fold? It's far more effective to be a mediocre huckster. You don't have to deal with the uncertainty, giving your idea to someone else, etc. Better to work the social game and phone in the rest.

It works better and you don't have to deal with the crushing disappointment that goes with fighting for an idea in a horrifying bureaucracy.


Being smarter doesn't make you more moral either.


It may enable you to do sufficiently well without resorting to immoral methods. Interesting how these things can go.


It may, but it doesn't provide the motivation to bother, especially if you only ever get caught at the end of your career.


I've seen this more often go in the other direction, but I think it can be either way.



I can't speak for other fields but in Neuro there's plenty of this but often one learns how to catch it before using it in your own research, even if it never becomes a matter of public scrutiny. Unfortunately, I can't reassure you that bad research gets caught all the time. However, there's usually at least a couple of experts in a given sub field of Neuro that quickly call BS before something goes too far.


This is an excellent point. A lot of crappy research goes on, and nobody pays it any attention (except, occasionally, when cranks outside the field want to prove that "peer-reviewed research proves the Earth is flat").

It's frankly not worth the effort to debunk a shitty piece of research in a low-profile journal that's never been cited in a decade.


> in Neuro there's plenty of this but often one learns how to catch it before using it in your own research, even if it never becomes a matter of public scrutiny.

And what happens when it is caught, it is just quietly ignored by the field, right? How often are there retractions?


Depends on the situation. If no one cites it then it drifts into obscurity quickly. If it was actually cited frequently it leads to an investigation of work by all authors on the paper along with a retraction.


A vast amount of "science" is being done at all times. You can likely count the scandals cognitively available to you on one hand; even if it took dozens of hands, you'd still be talking about an infinitesimal sliver of science on the whole. What's actually happening here is an availability bias: you remember scandals, because they're scandalous and thus memorable. You don't know anything about the overwhelming majority of scientific work that is being done, so you have no way of weighting it against the impression those scandals create in your mind.


Via HN yesterday [1] - an editor of _Anaesthesia_ did a meta-study of the papers he handled that conducted RCTs. He had data from 150 of them and concluded:

> ...26% of the papers had problems that were so widespread that the trial was impossible to trust, he judged — either because the authors were incompetent, or because they had faked the data.

This is not a one off.

[1] https://www.nature.com/articles/d41586-023-02299-w


I didn't say it was a one-off. But 150 papers is, to a first approximation, a one-off of all the science done in a given year. We produce millions of journal articles every year.


There's something to be said about a defense of this that doesn't account for random sampling.

Assuming that they did a proper sample of said papers, that implies that for whatever domain they sampled, 26% is likely a decent estimate of actual issues. Increasing the scale doesn't make a proportional estimate any better.


Maybe we shouldn't. What's the point of all of that data if a good portion of it can't be trusted?


Here we're talking about a proportion significantly less than 1%.


No one is shocked by the concept of misconduct occurring, the issue here is that it is no longer surprising when those committing the misconduct end up running the organization. You can pretend that the conversation is about whether scientific misconduct is endemic, but that conversation being had is about the failure of these hierarchies to actually succeed in promoting the best from among their ranks.

Of course misconduct is unavoidable, that doesn't mean you should become president. The politics aren't working.


Are you commenting on the wrong subthread? I do that all the time. This subthread is about whether the foundations of science itself are stable.


You just did it again, trying to steer the conversation to something that is not at the heart of the discussion. This is the parent:

It's clear by now this isn't the case of a few bad apples - our scientific institutions are systemically broken in ways that promote spreading fraudulent results as established scientific truth

This is a concern about the corrupted institutions, with the downstream concern that science itself may be under threat. The primary concern is the systemically broken institutions that promote the fraudulent to the top of their hierarchies. Not sure why you insist on strawmanning this thing, but clearly you have some personal reason for doing so, and I wish you luck in that endeavor.


We disagree about what the implications of a single university president surrendering their post are to the whole of science. You're asked not to write comments imputing personal motives:

https://news.ycombinator.com/newsguidelines.html

If you want to argue this further, you should probably snipe the swipes off the end of your comment.


Is that the correct conclusion to draw? I mean there are definitely big problems on how we conduct and fund scientific research (which might also contribute to fraud), but the number of research scandals is a tiny fraction to the amount of research being done.

Considering that we get fraud every time we have humans, prestige, and money, I would really like to see some statistics comparing science against other human activities. I suspect science still has some of the lowest fraud rates and the strongest mechanisms to detect and deal with it.


The problem is that a tiny percentage gets any attention whatsoever. It's the same with police and doctor abuse. These things are hugely prevalent, with 30-80% of professionals engaging in some form of abuse... same with fraud.

People know about police abuse. We don't talk about doctor abuse. I'm honestly not confident that there's any police/doctors that don't engage in abusive practices (or it's a tiny percentage of the population).


It's the same everywhere not just science. The fake-it-till-you-make-it type-A charismatic bullshitters rise up the ranks in all organizations.


I feel this trend taking root in academia is still a new-ish thing. The boundaries of academia and research, especially for computer science, really started blurring 15-20 years ago as Big Tech overtook Oil as the best-paying source of jobs / grants.

The decay has been super fast though. Maybe some academics will find the courage to do a longitudinal study of this decay. Now that'll be an interesting paper to read.


It's most certainly not a new trend, but is perhaps a quintessentially American disease. But one need only look at the so-called “luminaries” in many fields during the mid-20th century to see that this is not in any way a novel phenomenon. Once you get slightly afield from the hard sciences, it's charlatans all the way down, especially in fields like psychology and economics.


Especially since the folks committing the fraud are rising to high places. It goes to show that we have systemic problems. This isn't a failure of a few individuals but a failure of our institutions. Clearly our incentive structure is messed up if people like this are in positions like this. Clearly we need to not only address this individual's actions, but also the systemic issues that allowed him to do what he did and still rise to the position he did.


Scientific institutions aren’t perfect. They’re made up of people like anywhere else. And where there are people there will be politics and gamesmanship. That doesn’t mean science isn’t our best shot at figuring out how the world works.

The fact that a Stanford president can be pushed out for bad research conducted before he was even there? It tells me there’s still some integrity left.


The article from Nature yesterday found that 26% of the peer-reviewed published papers they examined (all RCTs) were untrustworthy based on close examination of their data. Without the data they could only invalidate 2%.

I personally believe this is an underestimate.


The problem is not with science, it is your science illiteracy. You never accept scientific research as "true" until it has been verified and repeated by independent sources. It has become the culture to glorify unexpected and "interesting" results in the media and society at large. But you should find these "interesting" and not necessarily believe them to be true.

We do get unbelievable findings such as CRISPR, but beware these will be very few and far between.


>our scientific institutions are systemically broken in ways that promote spreading fraudulent results as established scientific truth

Scientific consensus is still very reliable and if 95% of accredited scientists in a field say something is true it is in society's best interest to consider that to be the truth.


I truly hope they toss every single paper, and the citations to them, that ever crossed this asshole's desk. This misconduct literally should be treated the same as a dirty detective's cases being reviewed and tossed out since they are no longer trustworthy.


I hope you are forced to live in an authoritarian situation: so you may truly learn what it is like to be punished for the mistakes of others.

The point here is to save the good apples - not throw out the whole barrel for zero gain.


Yep. After years of pushing back against claims that researchers skewed scientific results to fit their agenda, this is a huge, demoralizing blow. Even if it isn't widespread, how can you honestly blame anyone for being skeptical anymore?


Widespread? PIs are required to publish. It is impossible to maintain the quality of papers via peer review at scale, so bad papers usually get through simply because of the volume. Throw in a profit motive and people get creative about hiding it.

See this recently published article https://www.nature.com/articles/d41586-023-02299-w.

One would think that clinical trials would be documented and scrutinized out the yin-yang but they are not.


But it was caught, demonstrating that what we're constantly assured is true is actually true: science may not be perfect, but it catches all of its mistakes, therefore we should trust it above all(!) other disciplines.


How? Peer-review, re-review, journalism, and reproduction of results are the systems the scientific community is built upon. The system does its job of finding the bad apples, as it did here.

Bad things are gonna happen in every single institution ever created. A better measure is how long those things persist.

Science is about getting closer to "the truth". Sometimes science goes further away from the truth, sometimes it gets closer. Sometimes bad actors get us further away from the truth. It gets reconciled eventually.


A combination of "publish or perish" and papers not accepting "negative results" (which results in a ton of repeated research) has led to this.


You mean we can’t just Trust the Science™?!

I completely agree. Seeing the Slack chats and emails regarding Proximal Origin, and how the researchers were disagreeing with it all the way up until they published a paper that served who knows what purpose, is really disheartening. Instead of guiding future research toward preventing similar outcomes, countless scientists spent untold years of combined effort on a theory the authors didn't even believe.


It should be noted that the volume of corruption coming out of state-run schools is much smaller than that from private institutions.


I'd wager it is just better covered up


It strengthens mine. Science is self-correcting in a way that religion and politics never can be; they keep making the same faith-based mistakes over and over, while science continues to progress. Evidence of that is everywhere you look, whereas politics and religion have barely made any progress in hundreds of years.


To the contrary, it increases my trust.

Just the fact that Stanford managed to conduct an independent investigation against its OWN PRESIDENT, tells very positive things about the University.

After this episode, I might trust Stanford research even a bit more than any University that never caught fraudsters.


Agreed. The system is flawed. And as a result, many scientific "findings" simply can't be trusted. And there is no solution in sight.


"A database of retractions shows that only four in every 10,000 papers are retracted."

Every time a plane crashes it's international news. But just because you regularly hear about plane crashes doesn't mean flying is unsafe.


do me a favor and look up all the papers in thinking fast and slow that failed to replicate


One has to ask what there is left to trust at all?


But they got caught, they retracted, the system works. It's not a perfect system; in a perfect system people wouldn't be incentivized to publish publish publish or be damned to the backwaters. The institution is broken, but the safety nets work.


ya so notice the pattern before anyone else and extract money by filling the inefficiency. this isn't a profound insight, the nature of alpha is that it's temporary and limited. doesn't mean it doesn't exist and you can't take advantage of it - as long as you're better than everyone else (faster, smarter, cheat-ier). tbh it's just like entrepreneurship - starting a business will attract competitors. but you can just outcompete them.


You don't know if "anyone else" already noticed the pattern. That does not reflect in the historical prices you can see. It will reflect in future prices which you can't see. And waiting for those does not help either. You are always at square one. Because others are sitting somewhere, watching the data and thinking about it just like you.


They don’t all notice a long time pattern at exactly the same time. Alpha decays over time as more people notice and exploit it, or the market slowly changes.

When alpha disappears overnight, it’s almost surely for reasons unrelated to other statistical event players - e.g. a tectonic shift caused by some bankruptcy, interest change, political decision, etc.

However, many traders do not know how to properly model and backtest. It works on backtest, fails in the real world, and they explain that “alpha is gone” when in reality it wasn’t there to begin with - it’s just that their backtest was bad - usually overfit or unrealistic assumptions.


You describe a world where you can watch the pattern slowly fade out and stop when it is gone.

But in reality, there is noise.

Say you start exploiting the pattern. You buy on Tuesday and sell on Friday. After 3 weeks of doing so, you lost money every time. Is the pattern gone? Or is this just statistical noise? Should you stop or plow through? You don't know.

Another way to look at it: We would have the exact same discussions if stock prices were just random walk series.

To make a point in favor of pattern arbitrage, one would have to show that stocks differ from random walk series. Enough to be worth trading against this difference. As far as I know, nobody ever came up with a good argument in favor of this assumption.
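
To illustrate that last point with a toy simulation (every parameter here is made up): searching a pure random walk for the best buy-day/sell-day rule reliably turns up an apparent pattern in-sample that vanishes out-of-sample.

    # Toy demo: on a pure random walk, picking the best "buy on day i,
    # sell on day j" weekly rule looks like a real pattern in-sample
    # and evaporates out-of-sample. All parameters are arbitrary.
    import numpy as np

    rng = np.random.default_rng(7)
    n_weeks, daily_sigma = 150, 0.01

    def weekly_pnl(returns, buy_day, sell_day):
        # P&L of holding from buy_day to sell_day within each 5-day week.
        return returns[:, buy_day:sell_day].sum(axis=1)

    train = rng.normal(0.0, daily_sigma, size=(n_weeks, 5))  # in-sample daily returns
    test = rng.normal(0.0, daily_sigma, size=(n_weeks, 5))   # out-of-sample

    # Pick the (buy, sell) pair with the best in-sample total P&L.
    pairs = [(i, j) for i in range(5) for j in range(i + 1, 6)]
    best = max(pairs, key=lambda p: weekly_pnl(train, *p).sum())

    print("best in-sample rule:", best)
    print("in-sample P&L:     ", weekly_pnl(train, *best).sum())
    print("out-of-sample P&L: ", weekly_pnl(test, *best).sum())

By construction there is nothing to find here, so whatever the in-sample number shows is pure selection bias.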


The way to tell signal from noise is with enough statistics. That’s one benefit HFT has over “traditional” trading - it gives you thousands of data points per day. The other way to quickly get thousands of data points per day is to trade slowly but across many assets (which is a much harder game, granted).

Either way, if you know what you are doing, your backtest should also give you a good idea of the variance, and that should tell you if a 3 week loss is statistically probable or not. Personally, I guess I’d stay away from such strategies - I’d prefer a much lower alpha with much lower variance - so that I have effective feedback from the market.
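
As a toy example of that kind of check (the numbers are invented, not from any real strategy), you can ask how likely a losing 3-week window is given the mean and variance your backtest reports:

    # Toy numbers: given a backtest's estimated daily edge and volatility,
    # how surprising is a losing 3-week window?
    import math

    mu = 0.0004        # assumed mean daily return from the backtest
    sigma = 0.01       # assumed daily standard deviation from the backtest
    n_days = 15        # roughly 3 weeks of trading days

    # Expected P&L and its standard deviation over the window (i.i.d. assumption).
    window_mean = mu * n_days
    window_std = sigma * math.sqrt(n_days)

    # z-score of finishing the window at or below zero, then the normal CDF via erf.
    z = (0.0 - window_mean) / window_std
    p_loss = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    print(f"z = {z:.2f}, P(losing 3-week window) = {p_loss:.1%}")

With those particular numbers the probability comes out near 44%, i.e. a 3-week loss on its own tells you almost nothing; a strategy with a tighter alpha-to-variance ratio gives much faster feedback.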

Some firms, e.g. RenTech and Virtu, manage to have very consistent alpha. You haven't seen a good argument because people who make money don't care to convince you.


> You describe a world where you can watch the pattern slowly fade out and stop when it is gone.

Yes, this is a phenomenon that happens very often during alpha research. You discover something that is decaying already, and you join the wagon until there is no anomaly to correct anymore, at which point you should have found other wagons to join. It's an eternal race of finding new alphas while your previous ones decay.

The rest of your argument doesn't really make sense. You seem to be just against any form of statistical inference.

"Alpha" is called like that on purpose because it is _not_ noise anymore. If you regress it against your benchmark, you should definitely see a difference between alpha and epsilon, given you have enough points to reach statistical significance.

> one would have to show that stocks differ from random walk series

There is easily 40 years of literature on the subject. You can convince yourself in 5m by running a PCA of stock returns against beta, sector and country. Then you can run a second round of PCA of these residualized returns against momentum, size, value and quality. Quant funds find alpha against these latter residualized returns.
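
For a rough sense of the residualization step described above (synthetic data; a plain cross-sectional regression stands in for the PCA mentioned), something like this strips out beta, sector, momentum and size exposure, leaving residual returns in which any remaining persistent structure would be candidate alpha:

    # Sketch with synthetic data: residualize a cross-section of stock returns
    # against factor exposures. Factor names and numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_stocks = 500

    # Assumed per-stock exposures: market beta, sector membership, momentum, size.
    beta = rng.normal(1.0, 0.3, n_stocks)
    sector = rng.integers(0, 10, n_stocks)          # 10 hypothetical sectors
    momentum = rng.normal(0.0, 1.0, n_stocks)
    size = rng.normal(0.0, 1.0, n_stocks)

    # Assemble the exposure matrix; the sector dummies act as per-sector intercepts.
    sector_dummies = np.eye(10)[sector]
    X = np.column_stack([beta, momentum, size, sector_dummies])

    # Synthetic cross-section of returns driven by those factors plus noise.
    factor_returns = rng.normal(0.0, 0.01, X.shape[1])
    returns = X @ factor_returns + rng.normal(0.0, 0.02, n_stocks)

    # Cross-sectional least squares: returns explained by exposures.
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    residuals = returns - X @ coef

    # These residuals are the "residualized returns" referred to above.
    print("explained variance:", 1 - residuals.var() / returns.var())

A real pipeline would run this per date and feed the residuals into whatever signal research comes next; the final print is just a sanity check that the factors explain part of the cross-section.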


? Personally I'm comfortable uploading my resume wherever - it's my resume, it exists to advertise my skills. I'd give it freely to anyone who asks, there's nothing on there I'd consider private almost by definition.


the nba player's association has secured nba players a nice share of revenue. seems to me like engineers could do something similar.


If you're one of a dozen engineers in the world who can do what you do, then sure, you can get a similar piece of the pie as NBA players.

But most tech workers aren't like that. They are relatively easily replaceable in comparison.


If US tech workers could be replaced for a tenth of the compensation they would’ve been. Maybe that will change in the future but until then the fact that there is some scarcity means unionization is possible.


Who said anything about a tenth of the wages?

There are plenty of US programmers making way less than FAANG wages happy to take those jobs.


I've never felt unsafe in an alleyway in sf. but probably the same as any other city - don't be a dumbass, avoid the alleyways in shady parts of the city, esp at night.


Very interesting approach to gaining a comparative advantage in the labor market: pay everyone less.


Oxide is able to do this because they are founded by folks who have (well-deservedly) an incredible brand and following. For example, I've had a few interactions with Bryan Cantrill over the years, and he's come off as super smart, down to earth, empathetic and fun. It's a good combination.

They're a cool tech company doing cool tech stuff, so folks want to work there and are willing to make salary tradeoffs to make it happen.


Pay has never been an accurate reflection of skill and skill has never been an accurate reflection of what a person can bring to a company. Big Tech thought they could use a combination of paying more and a stringent interviewing process to get the best talent. It turns out in the era of gamified interviewing (LeetCode and friends) that they were wrong and now are laying off tens of thousands of people. Having done over a hundred interviews at one of those big tech companies I was shocked by the people we were hiring and how much we were paying them towards the end of my time there.

Interviewing is a game of generalizations without any one-size-fits-all solutions. If you're a startup doing something ambitious, such as a from the ground up hyperscaler rack and software system, you want to attract extremely experienced and qualified people. Experienced and qualified people with a track record are likely on decent enough financial footing that they can get by on a salary with potential for future upside and an interesting problem to solve and mission.

Also later in your career you can't put a dollar amount on looking around at your co-workers and being thankful for working with high caliber people.


>It turns out in the era of gamified interviewing (LeetCode and friends) that they were wrong

Interviews have been gamified for thousands of years; there are private tutors that will teach your child how to pass interviews into the most elite institutions.

Leetcode just made it accessible to unwashed masses.


It most certainly is correlated. Look at averages not individual data points. Clearly a junior engineer makes less than a mid level which makes less than senior, etc. I agree completely that there is a lot of noise within that though.


This is significantly more than a lot of people who will read this thread makes. Might want to check your obviously insane amount of privilege.

I hope to make this much annually sometime before I die, but I am not totally confident I will get there.


Having fun and making a cool product is worth more than more money and having to optimize a shitty ad-algo... for example at Meta/Google.


I personally don't like the undertone of the class (tho very grateful that this material exists!!!) - this idea that universities are failing their students by not teaching them necessary material. I think a better phrasing is that students are failing themselves by not learning the material. I've personally never considered it the responsibility of my university to educate me - some of the classes are certainly useful for learning, but the ultimate onus falls on me to gain the skills that will lead me to success. I find it kind of distasteful how classes encourage a sort of passive victim mentality when it comes to learning - as if students need to be bribed with credits and cudgeled with a gpa to be forced to learn genuinely useful things.


You don't consider paying tens of thousands of dollars as creating responsibility to educate?

Of course, students need to be active in the learning process. But in my experience, it is more likely that professors and departments are terrible at educating than it is for students to not be motivated to learn.


> You don't consider paying tens of thousands of dollars as creating responsibility to educate?

I get OP's point. It's like getting an English literature degree without ever having read a book on your own.

My guess is that most people needing the Missing Semester never coded outside of their assigned tasks. Which is fair enough, but it's surprising to me to meet PhD candidates who marvel over the Missing Semester (I've met two).


To be clear, I was responding to the commenter's general point and not regarding this specific class or its contents.


There’s absolutely a responsibility to educate on the topics needed for the degree to be granted.

This class is adjacent material for an EE or CS candidate. Are universities also failing their students by not offering/requiring a touch-typing class? I don't think so, in large part because computer science is not programmer occupational training.


A CS course is "not programmer occupational training" in name only. Practically, there aren't many CS research jobs, and working as a programmer is more often than not the career path for someone with a CS degree.

Universities can choose to be purists about what CS is, as you seem to be advocating, or they can be realists and fill a very real gap in skills and knowledge.

Your point about "the topics needed for the degree to be granted" is also a very purist view of the role of a university. Is the role of a university solely to teach a curriculum that aligns with some abstract ideal of what a particular degree title means? Partly it is. But again, that doesn't match the expectations or the practical reasons why students choose a course. There are very few students studying CS for the beauty of it. Those who do probably end up in academia and don't need this course. The rest are there for jobs, and they certainly could benefit from this.


What occupation are the vast majority of CS students intending to pursue when they enter a computer science program? What occupation did the vast majority of computer science graduates end up pursuing?

I'm willing to bet the answer to both of those questions is a computer programmer.


> Are universities also failing their students by not offering/requiring a touch-typing class?

Universities make assumptions based on the larger student body as to what requirements are needed for admission. Generally, we assume students can read and write and have general computer literacy, but that last assumption is starting to fray a bit; more and more, students are coming into school without basic desktop computer literacy. This wasn't a problem for decades, as students tended to just pick up skills like touch typing. But today, some students are hard pressed to save a file to the desktop.

I could see universities might actually have to start adding computer literacy as an entrance requirement, the same way we require basic reading and writing and English speaking, so we don't have to teach those things.


A degree already takes 3 or 4 years. In order to incorporate this "missing semester", universities would have to either a) remove existing material to make space for it, or b) extend the degree by one more semester.

I don't think universities should, in general, remove existing material to incorporate "bash 101", mainly because learning bash is easy and one can learn it by oneself without a professor. Extending the degree by one more semester doesn't make much sense either.


Half the value of having the material in a course is that it specifically highlights what should be learned.


>a) remove existing material to make space for it, or b) extend the degree one more semester.

It's not literally an entire semester's worth of material. "The Missing Semester" is just a catchy name they gave it.

The site says:

>The class consists of 11 1-hour lectures, each one centering on a particular topic. The lectures are largely independent, though as the semester goes on we will presume that you are familiar with the content from the earlier lectures. We have lecture notes online, but there will be a lot of content covered in class (e.g. in the form of demos) that may not be in the notes. We will be recording lectures and posting the recordings online.

And in that paragraph, the word "semester" doesn't mean a normal full-length semester:

>The class is being run during MIT’s “Independent Activities Period” in January 2020 — a one-month semester that features shorter student-run classes.

So, 11 lectures over the course of a month (actually three weeks if you look at the listed dates). And it's an unofficial class taught by grad students, alongside other classes.

If a CS program made this official, it could fit into the first two weeks of the course. And that'd be a great thing, since these tools make you way more productive in everything computer-sciency you do. It's like compound interest: the earlier you get good at the shell, the bigger the returns.
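
To put rough numbers on that "compound interest" point (every figure here is a made-up assumption of mine, purely to illustrate the shape of the argument, not data), here's a quick back-of-the-envelope in Python:

    # Hypothetical numbers: pure assumptions, just to show how small daily
    # savings from knowing your tools accumulate over a degree.
    minutes_saved_per_day = 15      # assumed gain from being fluent with the shell and friends
    working_days_per_year = 200     # assumed days spent on coursework/projects
    years_in_degree = 4

    hours_saved = minutes_saved_per_day * working_days_per_year * years_in_degree / 60
    print(f"~{hours_saved:.0f} hours saved over the degree")  # ~200 hours

Even with fairly conservative guesses it adds up to weeks of reclaimed time, which is why front-loading the material in the first couple of weeks pays off.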

I think they call it the "Missing Semester" because

a) it's as useful as an entire semester

b) when you don't already know this stuff, it seems much bigger and more difficult than it really is, and your fellow students who already do know it seem like they're a semester ahead of you in comparison

c) it might take you a semester to learn the material if you don't have instruction, feedback, or a roadmap while you're juggling your other academic obligations. People remember the things they succeeded in teaching themselves but forget the immense time wasted on rabbit holes they went down because they didn't have a mentor to guide them.

-----

I didn't study CS, I studied Physics instead. My hands-down favourite course, the one whose material I still use even though my day job has nothing to do with physics, was called something like "Problem Solving for Physicists" (google "university of sheffield PHY340", you can find PDFs of past exam papers to see what I'm talking about). It was this lovely hodge-podge of material, much of which had nothing specifically to do with physics at all. It had stuff like dimensional analysis, how to come up with sensible approximations and Fermi estimates, how to sanity check your calculations, coming up with lower and upper bounds, how to rule out certain classes of solutions even when you can't find the exact answer, that kind of thing. It was in the second or third year of my course, I forget which, but either way nothing in it had more than high-school level mathematics, so it could have been taught in the very first part of the first year, before we even did mechanics 101. That would have been tremendously helpful for everything that came afterwards. That was my "Missing Semester" (or perhaps, "Misplaced Semester").
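
For anyone who hasn't seen that style of material, here's a minimal sketch of a Fermi estimate in Python (the classic "how many piano tuners are in Chicago?" exercise); every input below is an order-of-magnitude guess of mine, not a sourced number, which is rather the point of the technique:

    # Fermi estimate: roughly how many piano tuners work in Chicago?
    # All inputs are rough assumed orders of magnitude, not facts.
    population            = 3_000_000   # people in Chicago, give or take
    people_per_household  = 2
    piano_ownership_rate  = 1 / 20      # guess: 1 in 20 households has a piano
    tunings_per_year      = 1           # each piano tuned about once a year
    tunings_per_tuner     = 2 * 5 * 50  # 2 a day, 5 days a week, 50 weeks a year

    pianos = population / people_per_household * piano_ownership_rate
    tuners = pianos * tunings_per_year / tunings_per_tuner
    print(f"roughly {tuners:.0f} piano tuners")  # ~150, i.e. order of magnitude 10^2

The point isn't the exact answer; it's that chaining defensible guesses gets you within an order of magnitude, and a quick dimensional sanity check (pianos x tunings per piano per year, divided by tunings per tuner per year, really does give tuners) tells you the estimate is at least structurally right.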


I'm surprised anyone would object to this. Universities have a responsibility to prepare their students.

When I saw the title I figured it was another "computer science" class. But the curriculum covers a significant portion of what I lacked when I graduated, gaps which prevented me from finding work for a year.

Had someone at university told me before I graduated that I'd have no chance of finding work if I didn't know Git, Linux, REST, how to use the command-line, how to use an IDE, how to use an editor on the command line, and bash, I would have prepared myself for those things.


Can you elaborate on how not knowing those things specifically is what led to you not being able to find work?

Did your interviews ask specific questions about Git, Linux topics not covered in a standard operating systems course, and command-line editing?


Go on Indeed, search for "software engineer", and look at how often those things crop up in the "essentials" section.


I'm just not sure how not knowing these things practically stopped you from getting a job.

Did they specifically ask about them in an interview?


Yes, they were asked about in interviews, when I managed to get interviews.

Also, since I wasn't using git (and hadn't even heard of git or GitHub until after I graduated; mind you, this was in 2012), I didn't have any visible work to highlight when responding to job listings. I also didn't have any demonstrable ability to collaborate, dive into existing projects and find my way around, or do things like make PRs.

I didn't have anything to say about my ability to work with software tooling, or about using an operating system beyond just opening a web browser, Notepad, or Borland C++... I think I also used NetBeans for a group Java project.

I definitely wouldn't have been able to demonstrate editing a file on the command line. Most tutorials were too dense for me without significant head-desk banging, because they assumed you knew how to compile a file, or run make.

Yes, we did do a lot of these things in a very limited way for our classes, but the teacher would always tell us exactly what we needed to do, and the very small amount of actual programming we did in our program didn't require me to know how to do something like debug the code, enable linting or error highlighting in the IDE (it was mostly simple enough that I could get it right, or close to right, with a few tries anyway).

I know these things contributed to me not getting a job, because I spent almost a year learning them and more, and was able to get jobs after.

All that said, I didn't do an internship, I didn't have a good GPA, and my school wasn't considered good. I was about as unattractive a candidate as I could be while still having a CS degree.


What school did you go to?

Regardless, I have my doubts that not knowing about git or software tooling held you back. I know many recent grads who don't include anything about Git on their resumes, and it doesn't come up in interviews.


> the ultimate onus falls on me to gain the skills that will lead me to success

I see where you're coming from, but sometimes you don't even know what the necessary skills are. Even if you're very self-motivated and enthusiastic, you can still benefit by being pointed in the right direction. That's part of what a good school or teacher should do for you. (And while they're at it, they can provide materials that smooth out the path to get there.)

You should never expect them to cover 100% of that, but if they're aware of a way that they can get closer to 100% than they currently are, then it's a good thing for them to do it.


I think you’re conflating two different things: universities selecting and presenting a syllabus needed to earn a certain degree, and students actually learning the material.

The latter is solely the responsibility of each student, but I don't understand why the former would be. Some of the content in this course strikes me as unknown unknowns for new programmers. Why would they be to blame if no one told them to learn a particular skill?


Honestly, because it's something that should be a prerequisite for starting the degree program, in the same way basic algebra is a prerequisite. Likewise, not knowing that you need to know this stuff is a sign that you are probably not at the point where you should even have been able to declare the major. The fact that colleges allow this at all is doing a disservice to students, many of whom will go on to permanently damage their academic records.


We don't expect med students to have spent their teenage years doing experimental surgeries on their friends. Or accounting students to have taught themselves by doing accounting for a major business in their after-school time. Nor do we expect a microbiology student to have spent their childhood experimenting with infectious viruses and bacteria in their garage.

I think we expect that in comp sci just because many of us did happen to grow up doing that. But it's a weird and unusual expectation, and probably not a good one.

It also certainly wouldn't have been expected a few decades ago. You just wouldn't assume that a kid had a mainframe in their house to have learned on. Now that PCs have been around for a while, we make that assumption, but again I don't think it's a good assumption. Certainly not for the less affluent, nor for the younger ones who grew up with smartphones and tablets instead of a PC.

I think there's also a bit of a disconnect about what the purpose of the major is. Having a separate 'Software Engineering' major is relatively new, and generally Comp Sci was what everybody took if they wanted to learn to work on software. But now some people think it's a totally academic thing, while others think it's industry training, and that always confuses the discussion. But even in spite of that, it's just a bad assumption/expectation.


So, beyond a fairly standard high school curriculum and not too much distaste for math, what implicit requirements should we add for physics, mechanical engineering, chemistry, chemical engineering, materials science, etc.? Because where I went to school, there were no special requirements for those majors -- nor for CS/EE. Is CS today unique for some reason among STEM majors?

Different majors have varying degrees of difficulty for different people. By and large schools don't (and shouldn't) get into the business of heavily policing who gets to give a particular major a whirl.


People are going to really dislike what you said, but I agree to a certain extent, especially when it comes to the basics of working at the command line. If somebody can't read the manual on that and figure it out, then they are going to be so hopeless at so many other things that I don't want anything to do with them.


If someone has literally never opened a terminal with a command line, you're probably being rather dismissive of how unintuitive it will be for a lot of people at first.


"I've personally never considered it the responsibility of my university to educate me" - you're going to have to explain yourself

