This seems wrong. If they're using code to arrive at their findings, it should be high-quality, no less so than their lab technique. One can lead to bogus results just as easily as the other.
This actually made me laugh a bit. This just isn't how it works...
Researchers are not judged by the quality of their code -- they're judged on ideas (and more specifically, papers). And to be fair... have you ever written a quick, hacked-together script to prove a point and then moved on? That's exactly what researchers are doing. If you want "high-quality" code out of research, that will probably only happen as the research systems are hardened and/or commercialized.
I should say, I'm still a big proponent of open-sourcing it all anyway -- perhaps just a few months later to maintain competitive advantages (or file for IP protections). All my dissertation code, hardware designs, etc. are online and documented for posterity. And I find that some other researchers genuinely find it useful (which kinda scares me). But I try to be a good citizen and support 'em anyway.
Probably not too interesting, but a start. It looks like the design files for the old robot power-supply boards and force-torque sensor boards reside on my old lab's "internal" wiki. That's no good! I'll have to ask 'em about moving the files over to the public one. The latest designs (an FPGA software-defined radio) are still being tested, so they've got a while before they'll be released. ;-)
It may seem wrong, but in practice it isn't. My experience with researcher-written code is that it does things in a roundabout, ugly, inefficient, reimplement-the-library-function way, but an ultimately correct one.
The programs are not complicated. They are usually just some implementation of an equation or some other method for transforming input into output. Researchers don't have hundreds of hours to invest in learning the nuances of the const keyword in C++ or whatever, so they hack it. It works.
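To make that concrete, here's a hypothetical sketch of the kind of script I mean (the file name and column index are made up, and it deliberately reimplements what numpy or the statistics module already provide):

    import csv

    # Typical one-off research script: hand-rolled mean and standard
    # deviation, hard-coded file name, no error handling. Roundabout
    # and duplicative, but correct.
    def load_column(path, col):
        values = []
        with open(path) as f:
            for row in csv.reader(f):
                values.append(float(row[col]))
        return values

    def mean(xs):
        total = 0.0
        for x in xs:
            total += x
        return total / len(xs)

    def stddev(xs):
        m = mean(xs)
        acc = 0.0
        for x in xs:
            acc += (x - m) ** 2
        return (acc / (len(xs) - 1)) ** 0.5  # sample standard deviation

    data = load_column("trial3_final_FIXED.csv", 2)  # made-up file name
    print("mean =", mean(data), "stddev =", stddev(data))

It's not pretty, but it transforms input into output correctly, and that's all the paper needs.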
Checking extensional equality of programs (i.e., whether two programs compute the same function) is undecidable in general, a consequence of Rice's theorem. If code is not well written, there ARE bugs lurking in the source files that simply go unnoticed. Only computer scientists and mathematicians seem to understand this and try to prove the correctness of their programs/results.
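As a small sketch of why testing alone can't establish that two programs are equal: these two variance functions are algebraically identical and agree on toy inputs, yet the one-pass formula quietly falls apart on data with a large offset (catastrophic cancellation):

    # Two "equal" variance implementations; they agree on small toy inputs.
    def var_two_pass(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def var_one_pass(xs):
        n = len(xs)
        s, sq = sum(xs), sum(x * x for x in xs)
        return sq / n - (s / n) ** 2  # E[x^2] - E[x]^2, same algebra

    small = [1.0, 2.0, 3.0]
    print(var_two_pass(small), var_one_pass(small))    # both ~0.667

    # Same spread, large offset: the one-pass answer degrades to noise
    # (it can even come out negative) while the two-pass answer holds.
    shifted = [1e9 + x for x in small]
    print(var_two_pass(shifted), var_one_pass(shifted))

No casual spot check would catch that, which is exactly the kind of bug that lurks in hastily written analysis code.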
Publishing a version-controlled source repository is actually more stringent than publishing the results of a lab assay, because it contains a record of everything you tried, including older versions and errors (that have hopefully been fixed).
When publishing on wet-lab data, you only publish the assays that worked (i.e., the ones where you didn't contaminate the samples, etc.). The wet-lab equivalent of a source repo would be something like publishing a video recording of everything that happens in your lab.