
Thanks, and to add to that: it occurred to me that services may differ in criticality too. Some services you may want to scale very aggressively because a failure would be catastrophic, whereas others may be even more CPU-intensive on average, but failures are acceptable, so you let them run at 90% load. I'd imagine this would be a far harder balancing act if they were both in the same process.
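To make the idea concrete, here's a toy sketch of a criticality-aware scaling rule. The target-utilization values (50% vs. 90%) and the function itself are invented for illustration; a real autoscaler (e.g. Kubernetes HPA) applies a similar ratio-based formula.

```python
import math

def desired_replicas(current_replicas: int, cpu_load: float, critical: bool) -> int:
    """Toy scaling policy: critical services scale out early,
    failure-tolerant ones are allowed to run hot."""
    target = 0.5 if critical else 0.9  # target CPU utilization (assumed values)
    # Scale so that the per-replica load would land near the target.
    return max(1, math.ceil(current_replicas * cpu_load / target))

# Same load, different criticality, different answer:
desired_replicas(4, 0.6, critical=True)   # scales up aggressively
desired_replicas(4, 0.6, critical=False)  # tolerates running near 90%
```

Running both services in one process would force a single policy on workloads that want opposite trade-offs.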

For example, if your chess AI engine ran in the same monolith as your web server, it could slow your response times to the point of timeout. If they were separate services, your web server could stay snappy and return a meaningful response despite the problem: "Our AI service is overloaded right now, but here is a nice haiku while you wait."
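The pattern above, calling the slow service with a deadline and degrading gracefully, can be sketched like this. The engine stub, timeout value, and haiku are all made up for illustration; in practice the slow call would be an RPC or HTTP request to the separate service.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

FALLBACK_HAIKU = "Our AI service is overloaded right now; enjoy this haiku."

def query_chess_engine(position: str) -> str:
    """Stand-in for a call to the separate chess AI service."""
    time.sleep(1)  # simulate an overloaded engine
    return "e2e4"

def handle_request(position: str, timeout_s: float = 0.5) -> str:
    # Give the engine a deadline so the web server stays snappy;
    # on timeout, serve the fallback instead of failing the request.
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(query_chess_engine, position)
    try:
        return future.result(timeout=timeout_s)
    except TimeoutError:
        return FALLBACK_HAIKU
    finally:
        pool.shutdown(wait=False)
```

In a monolith, that `time.sleep(1)` would be CPU work stealing cycles from every request in the process; as a separate service, only the engine call is affected and the fallback path stays fast.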

Though still, I'd think of that as a fairly advanced use case. Not something small projects should have to think about.



I think your original question is a good one. It must be thoroughly proven and not just taken as gospel.

You may find cases where decoupling a service is a good idea. That doesn't justify decoupling everything by default: the more you decouple, the more rigid the whole system becomes.



