
If you write unit tests for what you're trying to achieve, and then after you achieve it, it turns out you didn't get everything right and the code was supposed to do something different, then those unit tests were almost pure waste: you have tests that assert your code is doing the wrong thing (which you thought was the right thing initially). So not only is the code wasted, the unit tests that tested it, which are likely much more code overall, are added waste.


>If you write unit tests for what you're trying to achieve, and then after you achieve it, it turns out you didn't get everything right and the code was supposed to do something different, then those unit tests were almost pure waste

And the code changes were pure waste too.

That's why it's important to try to reduce the risk of building to the wrong requirements wherever possible: fail fast and often, use spikes and experiments, apply lean practices, etc.

It's why it's important to reduce the cost of writing tests and code as much as possible too.

Writing the test itself and showing bits of it to others can actually help uncover many requirements bugs too. That's called BDD.
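
For example, a spec-style test can read well enough to walk a non-developer through. A minimal pytest sketch; Customer, Order, and the discount rule are all invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Customer:
        years_active: int

    @dataclass
    class Order:
        customer: Customer
        total: float

        def total_after_discount(self) -> float:
            # rule under discussion: loyal customers get 10% off orders over $100
            if self.customer.years_active >= 2 and self.total > 100:
                return round(self.total * 0.9, 2)
            return self.total

    # the test reads as a spec you can review with a product owner
    def test_loyal_customer_gets_ten_percent_off_orders_over_100():
        # Given a customer with three years of history
        customer = Customer(years_active=3)
        # When they place a $120 order
        order = Order(customer=customer, total=120.00)
        # Then they pay $108
        assert order.total_after_discount() == 108.00

Reviewing the Given/When/Then steps with a stakeholder is often where you discover the rule itself was wrong ("wait, the discount should only apply once per month"), before any production code exists.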

However, if it turned out it was the right thing and you didn't write a test, you've just made it much harder to change that code in the future without breaking something. The cost of code changes went up.


> And the code changes were pure waste too.

Yes, I said so as well. Though it's also worth noting that the code changes are likely to be the thing that reveals the feature was misunderstood or badly specified. Lots of people can take a working feature and tell you whether it addresses their problem; far fewer can look at a set of unit tests and tell you the same.

> However, if it turned out it was the right thing and you didn't write a test, you've just made it much harder to change that code in the future without breaking something. The cost of code changes went up.

Very much debatable. If the code needs to change because the requirements themselves change in the future, tests that validate the old requirements are not helpful. And many kinds of refactoring break most kinds of unit tests anyway.
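
For example, a mock-heavy test that pins internal structure fails the moment you inline the helper, even though behaviour is identical. A rough sketch with unittest.mock; PriceService and TaxCalculator are made up:

    from unittest.mock import patch

    class TaxCalculator:
        def tax_for(self, amount: float) -> float:
            return amount * 0.2

    class PriceService:
        def __init__(self) -> None:
            self.calc = TaxCalculator()

        def total(self, amount: float) -> float:
            return amount + self.calc.tax_for(amount)

    def test_total_delegates_to_tax_calculator():
        svc = PriceService()
        # the mock pins the *structure*: total() must call calc.tax_for()
        with patch.object(svc.calc, "tax_for", return_value=20.0) as mock_tax:
            assert svc.total(100.0) == 120.0
            mock_tax.assert_called_once_with(100.0)
        # refactor total() to compute the tax inline (behaviour unchanged)
        # and this test fails: it asserted implementation, not requirements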

In my experience, unit tests are most useful for pinning down regression cases and for validating constrained, well-defined parts of the code, like implementations of an algorithm. They're much less useful for regular business logic; integration tests are a much better fit there.
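
A pure function is where they shine: the regression test only touches the public contract, so it survives refactoring. A sketch; the slug function and the bug it pins are hypothetical:

    import re

    def slug(title: str) -> str:
        # lowercase, collapse runs of non-alphanumerics into single hyphens
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    def test_slug_basic():
        assert slug("Hello, World!") == "hello-world"

    def test_slug_regression_trailing_punctuation():
        # regression: "v1.0?" used to come out as "v1-0-" before the strip
        assert slug("v1.0?") == "v1-0"

You can rewrite slug() entirely (say, with a manual loop instead of a regex) and both tests keep protecting you, which is exactly what you want from a regression suite.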



