
The proof for TDD is usually looking at bug detection rates. Similar for code review. OTOH, the "design damage" of TDD is something overlooked by those metrics.

What it boils down to:

- TDD in the hands of a junior is very good. Drastically reduces bugs, and teaches the junior how to write code that can be tested and is not just one big long method of spaghetti with every data structure represented as another dimension on some array.

- TDD in the hands of a midlevel can be a mixed bag. They've learned how to do TDD well, but have not learned when and why TDD can go bad. This creates design damage, where everything is shoe-horned into TDD and the goal of 90% line coverage is a real consideration. This is maximum correctness but also potentially maximum design damage.

- TDD in the hands of a senior is a power tool. The "right" tests are written for the right reasons with the right level of coupling and the tests overall are useful. Every really complicated algorithm I've had to write, TDD was a life saver for getting it landed.

Feels a lot like asking someone if they prefer X or Y and they say X is the industry best practice. My response now is universally an eyebrow raise: "Oh, is it? For which segments of the industry? Why? How do we know it's actually a best practice? Okay, given our context, why would it be a best practice for US?" Juniors don't know the best practices, mid-levels apply them everywhere, seniors evaluate and consider when best practices are not best practices.

TDD slows development when tests are written blindly, with an eye on code coverage rather than correctness and design. TDD speeds up development by catching errors early and is one of the best ways to ensure correctness.



Developers have selective amnesia and only count dev time when working on what they want to work on rather than including time spent fixing things they’ve already mentally marked as done.

The worst actors find ways to make other people responsible for fixing their bugs.


Your comment doesn’t distinguish between having a robust automated test suite and doing TDD.

I’ll take your comment as: testing is good, and constraining your workflow to TDD is worthless.


TDD is really commonly misunderstood to be a testing strategy that helps reliability; it's not. It's supposed to guide your software design.


But that’s just it: as a design aid it can really go off the rails, but as a testing strategy it’s really useful in one domain: defect fixing. If I can convince a junior engineer that when he gets a bug report to first write a test that shows the problem and then fix it, using the test to prove it’s fixed, it provides immense benefits.
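A minimal sketch of that bug-first workflow, with a hypothetical `parse_price` function and bug report invented for illustration:

```python
# Hypothetical bug report: parse_price("1,200") blows up (or returns the
# wrong number) because the original code didn't handle thousands separators.

def parse_price(text):
    # The fix: strip thousands separators before converting.
    return int(text.replace(",", ""))

# Step 1: write a test that reproduces the report, and run it against the
# unfixed code first so you see it fail -- that proves the test actually
# catches the bug.
def test_parse_price_handles_thousands_separator():
    assert parse_price("1,200") == 1200

# Step 2: apply the fix, re-run, watch it pass. The test then stays in the
# suite permanently as a regression guard.
test_parse_price_handles_thousands_separator()
```

The point is the ordering: failing test first, fix second, so the test is proven to detect the defect before it "passes".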


> If I can convince a junior engineer that when he gets a bug report to first write a test that shows the problem and then fix it, using the test to prove it’s fixed, it provides immense benefits.

That's just writing a regression test and making sure it catches the regression. What does that have to do with TDD? Does the philosophy of TDD lay claim to any test written before the bugfix, regardless of how much or little someone subscribes to TDD overall?


It's pretty bad at this. It's much better used as a testing methodology than a design methodology.

It can provide high level guardrails confirming implementation correctness that are as indifferent to software design as possible (giving freedom to refactor).
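A sketch of what such a guardrail test looks like, using a hypothetical LRU cache as the unit under test: the test exercises only the public interface, so the internals can be rewritten freely without touching it.

```python
from collections import OrderedDict

class LRUCache:
    """Hypothetical unit under test. The internals (an OrderedDict here)
    are free to change; the test below never looks inside."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # drop least recently used

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)
        return self._data[key]

def test_evicts_least_recently_used():
    # Asserts only observable behavior, never internal state, so any
    # refactor that preserves the contract keeps this test green.
    cache = LRUCache(capacity=2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")      # touch "a"; "b" is now least recently used
    cache.put("c", 3)   # exceeds capacity, so "b" is evicted
    assert cache.get("b") is None
    assert cache.get("a") == 1

test_evicts_least_recently_used()
```

Because the test couples only to the public contract, swapping the OrderedDict for a dict-plus-linked-list implementation requires no test changes at all.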




