
Fast and comprehensive are not mutually exclusive goals. Having fast tests makes it more likely you'll add more and better tests as well, because the cost of doing so gets lower. I have some pretty comprehensive tests that set up fairly complicated scenarios, and the cost of that is low.

A slow test is something you'll avoid running when you should. When it takes 20 minutes to validate a change, it gets tempting to skip it, or you postpone it, or you run it and get sidetracked by something else. The ability to run high-quality integration tests quickly is a superpower. We're used to these things running slowly, but my point is that you can engineer them to be fast, and that's worth doing.

IMHO a key mistake is treating integration tests as unit tests, which then dictates silly things like running expensive cleanup logic, isolating tests from each other, and giving each test its own virgin system to run against. That actually makes your tests less valuable and more tedious to run.

The real world is messy. A good integration test benefits from the noise created by lots of other tests running. It's the closest you can get to a real live system without using the actual live system and testing in production. Real users will never see a virgin system, and they won't be alone in the system. It's OK for there to be data in the database. You can isolate through other means: give tests their own users, randomize key identifiers so they don't clash with other tests, and so on. This results in better tests that actually run faster.
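As a minimal sketch of that isolation style (pytest, with a hypothetical ApiClient stand-in for whatever test client you actually use), each test gets its own user and randomized keys instead of a wiped database:

    # Sketch: isolate tests via their own users and random suffixes,
    # not via expensive cleanup or a virgin system per test.
    # ApiClient is a hypothetical stand-in for your real API test client.
    import uuid
    import pytest

    class ApiClient:
        """Hypothetical stand-in for the real API client used in tests."""
        def create_user(self, name):
            return {"name": name}
        def place_order(self, user, sku):
            return {"user": user["name"], "sku": sku, "status": "accepted"}

    @pytest.fixture
    def api_client():
        return ApiClient()

    @pytest.fixture
    def test_user(api_client):
        # Unique name per test, so leftover data from other tests never clashes.
        return api_client.create_user(name=f"test-user-{uuid.uuid4().hex[:8]}")

    def test_place_order(api_client, test_user):
        order = api_client.place_order(user=test_user, sku=f"sku-{uuid.uuid4().hex[:8]}")
        assert order["status"] == "accepted"

No teardown is needed: the data the test leaves behind is just more of the realistic noise the next tests run against.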

I love my unit tests as well. But I don't unit test things that I cover with an integration test anyway. I reserve unit tests for things that are proper units I can easily test in isolation: anything with complicated logic, regular expressions, or algorithms, basically. Testing that with an integration test is counterproductive, because your goal is to test the logic, and you probably want to do that with lots of different inputs. And then you mock/fake anything that just gets in the way of testing that.
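A minimal sketch of the kind of unit I mean: a pure function with tricky logic, hammered with many inputs via parametrize. The slug regex and function here are hypothetical examples, not anything from a real codebase:

    # Sketch: unit test a proper unit (a regex) in isolation, with many inputs.
    import re
    import pytest

    SLUG_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

    def is_valid_slug(value: str) -> bool:
        return bool(SLUG_RE.match(value))

    @pytest.mark.parametrize("value,expected", [
        ("hello-world", True),
        ("hello--world", False),   # double dash
        ("-hello", False),         # leading dash
        ("Hello", False),          # uppercase
        ("a", True),
        ("", False),
    ])
    def test_is_valid_slug(value, expected):
        assert is_valid_slug(value) == expected

Running a full integration scenario for each of those inputs would be pointless; this is exactly where a fast, isolated unit test earns its keep.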

But unit testing APIs is silly if you are in any case writing proper full-blown integration / scenario tests that use those APIs. I don't need to unit test my database layer with an in-memory database either. If it's at all important, that functionality will be exercised as part of my integration tests triggering logic that needs a database. And it will run on a proper database. And I can use my API from the outside to evaluate the output and assert everything is as it should be, without poking around in that database. This adds more API calls and realism to the test and ensures I don't need to make assumptions about the implementation. Which then means I can change the implementation and validate that it didn't break anything important.
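As a minimal sketch of that outside-in style (the endpoints, base URL, and response fields are hypothetical; the point is the test only knows the API contract, not the schema):

    # Sketch: assert through the public API instead of querying the database.
    # Assumes a test deployment is reachable at BASE_URL; endpoints are hypothetical.
    import uuid
    import requests

    BASE_URL = "http://localhost:8080"

    def test_created_order_is_visible_via_api():
        sku = f"sku-{uuid.uuid4().hex[:8]}"
        created = requests.post(f"{BASE_URL}/orders", json={"sku": sku}).json()

        # Read it back through the API rather than the orders table, so the
        # test keeps passing when the schema or implementation changes.
        fetched = requests.get(f"{BASE_URL}/orders/{created['id']}").json()
        assert fetched["sku"] == sku
        assert fetched["status"] == "accepted"

Because nothing here touches the database directly, the same test validates any implementation that honors the API.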


