
Someone please write an article: "Don't start with the architecture some dude suggested because of his ego." There are cases where a monolith is bad, and cases where microservices are a must. Take the healthy approach.

The article is garbage, written with no real understanding of how real microservices work:

The deployment: modern microservices work from templates. You deploy directly from your GitLab/GitHub; you copy your artifact over and it's there. Builds take 2 minutes, sometimes less, which means you can react quickly to an issue, as opposed to a 30-minute build on an old-school Java monolith. Deployments are built in the same cluster you use for everything else, and the CI job runner is just another application in your cluster. So if your cluster is down, everything is down.

The culture part:

We use templates where all the libraries and tracing are already in place. In fact, when a request like this comes in, we usually have similar functionality written already, so we reply to product: "oh, this feature is very similar to feature X, we'll copy it." While we're still discussing the schedule, our developer has renamed the similar project, committed, and it's already deployed automatically to the dev cluster, and the rest of the team has joined the development. There is one bad pattern: when you need to update your templates. That's the tradeoff of this approach, since you don't have libraries as a concept. The upside is that you can have half your services migrated and half not. The downside is that you need scripts to push updates everywhere at once.

Better Fault Isolation:

Yes, you might have settings down while core functionality keeps working, which means fewer SLA-breaking events. That saves you money and customers. Same thing with error handling: if it's just tooling, you copy in a different set of tooling. If error logging isn't implemented properly in the code, that's no different from a monolith; it's just bugs in the code. But things like tracing are already part of the template, so basic event handlers are traced from deploy #1.



This article sounds like someone who's never successfully implemented either solution. Things that are wrong so far:

Monolithic apps need monitoring (e.g. Prometheus) just as much as microservices.

Monolithic apps can have scaling issues if you only have one database instance (especially if you're write-heavy), so you may need to shard anyway.

Monolithic apps will probably want a messaging system or queue to handle asynchronous tasks (e.g. sending e-mail, exporting data).

Microservices do not require kubernetes. You can run them fine on other systems; at my last job, we just ran uwsgi processes on bare metal.

Microservices do not require a realtime messaging system. You can just use API calls over HTTP, or gRPC, or whatever else. You'll probably want some kind of message queue as above, but not as an integral part of your architecture, meaning it can just be Redis instead of Kafka.
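The "it can just be Redis" point is really about keeping the queue behind a small interface so the backend stays swappable. A minimal sketch (the `TaskQueue` name is made up, and an in-memory deque stands in for what would be Redis `LPUSH`/`BRPOP` in production):

```python
from collections import deque
from typing import Optional

class TaskQueue:
    """Minimal queue interface; swap the backend (Redis, SQS, Kafka)
    without touching any caller."""
    def __init__(self):
        self._items = deque()  # stand-in for a Redis list

    def push(self, task: str) -> None:
        # A Redis backend would do: client.lpush("tasks", task)
        self._items.append(task)

    def pop(self) -> Optional[str]:
        # A Redis backend would do: client.brpop("tasks", timeout=...)
        return self._items.popleft() if self._items else None

q = TaskQueue()
q.push("send_email:42")
q.push("export_data:7")
print(q.pop())  # send_email:42
```

Callers that only see `push`/`pop` don't care whether the queue is Redis or Kafka, which is exactly why it doesn't have to be an integral part of your architecture.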

Microservices do not require separate DB instances. If your microservices are all using separate schemas (which they should), you can migrate a service's schema to a new DB primary/replica set whenever you want. In fact, if you have one MySQL primary (for example), you can have multiple secondaries, each replicating only one schema, to handle read load from individual services (e.g. a primary write node and then separate read nodes for user data, product information, and everything else). When it's time to break the user data out into a separate database, just make that read replica the new primary for that DB and add a new read replica off it.
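The "one primary, per-schema read replicas" setup boils down to a tiny connection router. A sketch under assumed names (all hostnames and schema names here are invented for illustration):

```python
# Hypothetical per-schema read routing: every service writes to the one
# primary; reads for a given schema fan out to that schema's replica.
PRIMARY = "primary.db.internal"
DEFAULT_REPLICA = "replica-misc.db.internal"
READ_REPLICAS = {
    "user":    "replica-user.db.internal",     # replicates only `user`
    "product": "replica-product.db.internal",  # replicates only `product`
}

def db_host(schema: str, write: bool = False) -> str:
    """All writes hit the primary; reads are routed by schema."""
    if write:
        return PRIMARY
    return READ_REPLICAS.get(schema, DEFAULT_REPLICA)

print(db_host("user"))                 # replica-user.db.internal
print(db_host("payment", write=True))  # primary.db.internal
```

Breaking a schema out later means promoting its replica to primary and updating one entry in this map; no service code changes.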

This dude just straight up doesn't know what he's talking about, and it sounds like his experience with microservices is following a bad 'microservices with golang, kafka, cassandra, prometheus, and grafana on k8s' tutorial.

Here's how you write baby's first microservices architecture (in whatever language you use):

1. Decide where the boundaries are in your given application; e.g. user data, product data, frontend rendering, payment systems

2. Write those services separately with HTTP/gRPC/whatever APIs as your interface.

3. For each API, also write a lightweight native interface library, e.g. user_interface, product_interface, payment_interface. Your services use this to call each other, and the method by which they communicate is an implementation detail left up to the interface library itself.

4. Each service gets its own database schema; e.g. user, product, payment, which all live on the same MySQL (or RDS or whatever) instance and read replica.

5. Everything has either its own port or its own hostname, so that your nginx instances can route requests correctly.

There, now you have a working system which behaves like a monolith (working via what seems like internal APIs) but is actually a microservice architecture whose individual components can be scaled, refactored, or rewritten without any changes to the rest of the system. When you swap out your django protobuf-over-HTTP payment processing backend for a Rust process taking gRPC calls over Redis queues, you change your interface file accordingly and literally no one else has to know.
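The interface-library idea from step 3 can be sketched like this, with the transport pluggable so that swapping protobuf-over-HTTP for gRPC-over-Redis only ever touches this one file (module, function, and endpoint names are illustrative; the fake transport is a stand-in so the sketch runs without a server):

```python
# user_interface.py (sketch): services import this, never the transport.
import json
from typing import Callable, Dict

# The transport is an implementation detail of the interface library.
# In production this might be urllib over HTTP or a gRPC stub; here it
# is injectable so the example is self-contained.
Transport = Callable[[str, dict], str]

def _fake_transport(endpoint: str, params: dict) -> str:
    # Stand-in for the real wire call to the user service.
    return json.dumps({"id": params["user_id"], "name": "demo"})

_transport: Transport = _fake_transport

def get_user(user_id: int) -> Dict:
    """The only surface callers see; the wire format can change
    underneath without any caller knowing."""
    raw = _transport("/users/get", {"user_id": user_id})
    return json.loads(raw)

print(get_user(42))  # {'id': 42, 'name': 'demo'}
```

When the payment or user backend gets rewritten, only `_transport` changes; every caller of `get_user` is untouched, which is the whole point of step 3.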

It also means that your deployment times are faster, your unit testing is faster, your CI/CD is faster, and your application startup time is faster when you do have to do a service restart.

I'm not sure why this is so hard for people to understand.



