Hacker News

It's better because if you have one application that has authority over that area, you only have to answer this question once.

The answer will depend on the data in question, of course; maybe it is fine to serve stale data for a while, maybe you need to write to one DB, read from another, and combine in-process, etc., until the change fully propagates. But the impact needs to be localized to whatever extent is possible.
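As a minimal sketch of that idea (all names and the primary/replica setup are my assumptions, not from the thread): the one owning service can answer the staleness question in a single place, so callers never have to know a migration is in flight.

```python
# Hypothetical sketch: the owning service decides, in one place, how reads
# behave while a change propagates. Names and structure are illustrative.
import time

class UserStore:
    """Single authority over user data; callers never touch the DBs."""

    def __init__(self, primary, replica, staleness_budget_s=30):
        self.primary = primary                    # the DB being migrated to
        self.replica = replica                    # the old DB, which may lag
        self.staleness_budget_s = staleness_budget_s

    def get_user(self, user_id):
        # Prefer the primary; fall back to possibly-stale data only while
        # it is within the budget this service has decided is acceptable.
        try:
            return self.primary.get(user_id)
        except ConnectionError:
            record = self.replica.get(user_id)
            age = time.time() - record["updated_at"]
            if age <= self.staleness_budget_s:
                return record
            raise  # too stale: surface the failure instead of lying
```

Because the fallback policy lives inside the one service, changing the budget (or the whole strategy) later is a one-line, one-deploy change rather than a coordination problem across every consumer.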

If you have several applications accessing the database directly, it makes the database everyone's problem, instead of just that one application's problem. Then everyone has to know about the downtime and come up with their own strategy to mitigate it. They can't say "Well, we'll just trust what we get from Service A", because they don't actually get info from Service A; they get info from Service A's underlying datastore.

Worse, in most cases like this, there will just be one global database for everything, so schema changes, database restarts, etc., necessary for one thing can have negative effects, both direct and indirect, across the entire ecosystem. If Bob's Service decides it needs to do a massive reindexing and Alice's Service is on the same DB, even if they're using completely independent tables, etc., the performance hit is going to affect both. If Bob changes his schema and Alice reads or writes directly to those tables (e.g., Alice's service updates a column in records originally inserted by Bob's service), now Alice has to know about the change, plan for it, and coordinate her deployment in sync with Bob, etc.

That kind of thing is what people mean when they say "distributed monolith". There is no real "private" and "public" space where one service provider could reasonably offer a stable API but change things as necessary on the back-end. Nothing is really independent. All you've done is make a monolith that is much harder to coordinate, manage, debug, and understand.
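To make the missing "private vs. public" boundary concrete, here is a hedged sketch (table, column, and function names are all hypothetical): the owning service exposes one stable function, and its schema stays private behind it, so a column rename never reaches the callers.

```python
# Hypothetical sketch of a public/private boundary. The table layout is a
# private detail; only the shape of the returned dict is the stable contract.
import sqlite3

def _open_db():
    # Private detail: today the column is "email_addr". The owner can rename
    # it tomorrow, as long as get_user() keeps returning the same shape.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER, email_addr TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'alice@example.com')")
    return db

def get_user(user_id):
    """Public contract: returns {'id': ..., 'email': ...}."""
    db = _open_db()
    row = db.execute(
        "SELECT id, email_addr FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return {"id": row[0], "email": row[1]}  # stable shape, private column name
```

In the shared-database setup the thread describes, Alice's service would be reading `email_addr` straight out of Bob's table, so Bob's rename becomes her deployment problem too; behind an API like this, it never is.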



This was the problem at my last job. We had a few applications and each had their own database, but they were all on the same server.

Over time they all learned to reach into each other's databases. The truth is we had ONE database arbitrarily divided into three schemas, each with its own conventions.

As load increased it became a nightmare and a literal single point of failure. If one app misbehaved or took a load spike, all the rest would slow down or fall over. Even though huge chunks of the applications had nothing to do with each other, they couldn't be scaled independently.

We were working very hard, slowly, to detangle it without blowing everything up.

No application should ever have direct access to another application's database. It's going to go wrong. The temptation is too great. And by the time you realize it, the technical debt it has caused may be MASSIVE.



