
> You cannot make edge cases and strange unique behavior happen in a controlled environment.

Sure you can. Go iterate through the GNU Coreutils bug tracker, find weird bugs, create test cases, and feed them to your implementation. Bonus points if you can find an existing test suite (e.g., xfstests turns out to be good for testing arbitrary filesystems). Granted, there will always be edge cases that you don't catch until they show up live in prod, but you can hit a chunk of the space without that.
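The approach above can be sketched as a tiny differential-testing harness: run the same invocation against the reference GNU tool and a candidate implementation, then compare output and exit status. `CANDIDATE_DIR` is a hypothetical variable pointing at the candidate binaries; it defaults to the system tools here so the sketch runs anywhere.

```shell
#!/bin/sh
# Differential regression harness (sketch). CANDIDATE_DIR is an assumption:
# point it at e.g. a uutils install; it defaults to /usr/bin so the harness
# is self-checking when no candidate is available.
CANDIDATE_DIR=${CANDIDATE_DIR:-/usr/bin}

check() {
    tool=$1; shift
    # Capture output and exit status from the reference implementation...
    expected=$("/usr/bin/$tool" "$@" 2>&1); e=$?
    # ...and from the candidate, then compare both.
    actual=$("$CANDIDATE_DIR/$tool" "$@" 2>&1); a=$?
    if [ "$expected" = "$actual" ] && [ "$e" -eq "$a" ]; then
        echo "PASS: $tool $*"
    else
        echo "FAIL: $tool $*"
    fi
}

# Invocations in the style of historical bug reports (edge-case arguments):
check seq 0.5 0.5 2
check basename /usr/lib/
check printf '%5d\n' 42
```

Each `check` line is one reproduced bug report; accumulating them as bugs are mined from the tracker gives a regression suite that covers the known part of the edge-case space.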



Also, there are plenty of package systems that use coreutils in their build instructions. Just build the whole thing with coreutils aliased to the Rust implementation and check for errors.
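One way to do that aliasing without touching the package recipes is PATH shadowing: symlink the candidate tools into a temporary directory and put it first on `PATH` for the duration of the build. A minimal sketch, assuming the candidate binaries live in a hypothetical `CANDIDATE_DIR` (defaulting to the system tools so the sketch runs as-is):

```shell
#!/bin/sh
# PATH-shadowing sketch: make every build step pick up the candidate
# coreutils instead of the system ones. CANDIDATE_DIR is an assumption;
# set it to your Rust coreutils install prefix.
CANDIDATE_DIR=${CANDIDATE_DIR:-/usr/bin}
shadow=$(mktemp -d)

for tool in cat cp ls mkdir mv rm sort; do
    ln -s "$CANDIDATE_DIR/$tool" "$shadow/$tool"
done

# Anything run with the shadowed PATH now resolves to the candidate tools;
# a real run would invoke the package build here, e.g.
#   PATH="$shadow:$PATH" make 2> build-errors.log
PATH="$shadow:$PATH" command -v sort

rm -rf "$shadow"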


You have just contradicted yourself by saying,

"there will always be edge cases that you don't catch until they show up live in prod..."

in reply to,

"How do you measure robustness except with failures / years of service x times deployed?"

The only viable live testing environment that springs to mind is running your test code synchronously at the atomic level with production, which I'm convinced only IBM z/OS on a Parallel Sysplex cluster running CICS can do.

Ed. spelling


I'm not seeing a contradiction. The original claim was that you can't measure robustness without testing live, because you can't reproduce edge cases in a lab. But that's not true; you can reproduce the edge cases that are known in a lab. This isn't 100% effective, granted, but it's effective enough that you certainly can test robustness to a reasonable degree. It's like saying that you don't know how safe a car is until you've driven it 100,000 miles on real roads; Real Life™ will find things you missed in testing, but you can still run enough crash tests to get a decent idea of how safe the car is.



