>Reminds me of TDD bandwagon which was all the rage when I started programming. It took years to slowly die out and people realized how overhyped it really was.
It never really went away. The problem is that there is a dearth of teaching materials telling people how to do it properly:
* E2E test first
* Write high level integration tests which match requirements by default
* Only start writing lower level unit tests once a clear and stable API emerges (see the sketch below).
and most people when they tried it didn't do that. They mostly did the exact opposite:
* Write low level unit tests which match the code by default.
* Never write higher level tests (some people don't even think it's possible to write an integration or e2e test with TDD because "it has to be a unit test").
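For a concrete picture, here's a minimal sketch of that outside-in flow using pytest. The `total_due` function and the discount requirement are hypothetical, invented purely for illustration:

```python
# Step 1 (red): write a requirement-level test first, before any code exists.
# It asserts observable behaviour ("10% off orders over 100"), not internals.

def test_discount_applies_to_orders_over_100():
    assert total_due(order_value=200) == 180.0

def test_no_discount_at_or_below_threshold():
    assert total_due(order_value=100) == 100.0

# Step 2 (green): only now write the simplest implementation that passes.
# In a real project this would live in its own module; it's inlined here
# so the sketch is self-contained and runnable with `pytest`.

def total_due(order_value: float) -> float:
    return order_value * 0.9 if order_value > 100 else order_value
```

The point is the order: the tests encode the requirement first, and the implementation (and any lower level unit tests) come only after the behaviour is pinned down.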
For something complex, it’s kinda hard to write and debug high level tests when all the lower level functionality is missing and just stubbed out.
We don’t expect people to write working software without being able to execute it, yet we expect people to write (and complete) all tests before the actual implementation exists.
Sure, for trivial things it’s definitely doable. But then such things wouldn’t need extensive tests in the first place!
Imagine someone developing an application where the standard C library was replaced with a stub implementation… That wouldn’t work… Yet TDD says one should be able to do pretty much the same thing…
>Imagine someone developing an application where the standard C library was replaced with a stub implementation… That wouldn’t work… Yet TDD says one should be able to do pretty much the same thing…
No, it doesn't say you should do that. TDD says red-green-refactor; that is all. You can and should do that with an e2e test or integration test and a real libc; to do otherwise would be ass-backwards.
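For instance, a red-green cycle against a real dependency rather than a stub might look like this. This is a hedged sketch: `save_report` is a hypothetical function, and Python's stdlib file I/O stands in for the "real libc":

```python
import pathlib
import tempfile

def save_report(path: pathlib.Path, lines: list[str]) -> None:
    # Green-phase implementation, written only after the test below went red.
    path.write_text("\n".join(lines) + "\n")

def test_report_round_trips_through_the_real_filesystem():
    # Integration-style test: exercises real file I/O in a temp directory,
    # no stubbed-out filesystem anywhere.
    with tempfile.TemporaryDirectory() as tmp:
        target = pathlib.Path(tmp) / "report.txt"
        save_report(target, ["alpha", "beta"])
        assert target.read_text() == "alpha\nbeta\n"
```

Nothing about red-green-refactor requires mocking the world; the test just has to fail before the code exists and pass after.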
Yours is exactly the unit-testing dogma I was referring to, which people have misunderstood as being part of TDD due to bad education.
Would you be able to share any links that expand on your recommended approach? It makes complete sense to me as a self-taught dev, and it's what I've always done (most recently, an e2e test of a realtime CDC ETL pipeline, checking for, logging, and fixing various things along the way until I was getting the right final output). I rarely write unit tests. It would be good to read something more formal in support of what I've naturally gravitated towards.
TDD failed because it was sold as a method for writing better tests, when in reality it was a very challenging skill to learn: a way of writing software that involved a fundamental change in how you approached requirements engineering, software development, iteration, and testing. Even with a skilled team, the cost to adopt TDD would be very high for an uncertain outcome. So people tried shortcuts like you described, and you can't blame them. The whole movement was flawed and unrealistic in its expectations and communication.