Hi, I'm the person who wrote this. So far I've earned a reputation for fixing problems, not for causing them. (Yes, I am in contact with former coworkers.)
I think we may be in violent agreement: I also think kludges and hacks are unprofessional. In the post, I was writing about my experience of how models are exposed as imperfect over time and then fixed to deal with it. As much as I gripe about exceptions and outliers, capturing them is a big part of design.
...Except the 'fixes' you've given aren't fixes: they're patchwork, on-the-spot kludges that let you ignore the problem until it crops up again. A few years of that will give you spaghetti code that's a nightmare to maintain.
How about that user who has two locations? Sure, you overlooked that in the initial design, but a permanent fix is needed to ensure the system accommodates the next one that arrives.
A profile for someone located outside the US? Pretty likely you'll see that happen again if the system grows. Why not account for it correctly rather than rely upon duplicate data or another such hack?
Your urge to 'fudge and hack' is natural but the professional solution is to resolve the root cause by correcting the data model, not fiddling with the data to 'just make it work'.
After the initial glory of creating Software v1.0, much of the remaining work involves fixing the system to better reflect reality. The reason mature software takes 'exceptions and outliers' in stride is that it's been coded to do so after a great deal of trial and error in ongoing analysis and discovery.
Ignoring those exceptions and working around them rather than fixing them permanently leads to madness.
The correct fix for the guy with two locations was to remove the location_id column and create a join table from People to Locations. The correct fix for the guy outside the US was to alter the Location model to accommodate him, which included making US postal codes optional.
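In rough SQL terms the fix looks something like this; I don't know your actual schema, so the column names (and the table names beyond People and Locations) are just guesses:

    -- Locations stand alone; postal_code is nullable because not every
    -- country has one. The CHECK (optional) keeps it mandatory for US rows.
    CREATE TABLE locations (
        id           INTEGER PRIMARY KEY,
        street       TEXT NOT NULL,
        city         TEXT NOT NULL,
        region       TEXT,     -- state/province; also optional outside the US
        postal_code  TEXT,
        country_code CHAR(2) NOT NULL DEFAULT 'US',
        CHECK (country_code <> 'US' OR postal_code IS NOT NULL)
    );

    -- The join table replaces the old people.location_id column,
    -- so a person can have any number of locations.
    CREATE TABLE people_locations (
        person_id   INTEGER NOT NULL REFERENCES people(id),
        location_id INTEGER NOT NULL REFERENCES locations(id),
        PRIMARY KEY (person_id, location_id)
    );

Once that's in place, the next user with three addresses or a foreign postcode is just another row, not another fudge.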
Your urge to 'fudge and hack' is natural but the professional solution is to resolve the root cause by correcting the data model, not fiddling with the data to 'just make it work'.
We agree again; that's why I wrote that they're "like freezing a bottle of water, it always grows and breaks worse in the end."
The impression I gained from your blog post was one of willy-nilly denormalization and data duplication: accommodating edge cases (which often turn out to be anything but) as they're discovered, just to make problems go away quickly. After working on a few development, refactoring, and data-conversion projects, I was astounded to discover how often those sorts of kludges crop up and how costly they become.
Perhaps you could write a sequel pointing out the pitfalls of database fixes that appear to be easy answers to design errors but turn out, in the long run, to be anything but.
Sorry I left you with that impression; I certainly didn't mean to. I only mentioned those as great ways to shoot yourself in the foot when you don't have the patience/skill to do the Right Thing.
Sounds like you're better prepared to write that follow-up than I am, but I'll ponder it. Maybe "Database Anti-Patterns". Actually, just typing the name makes me suspect someone's already done this sort of thing, but I don't have time to check.