1. auth? probably an internal service, so don't expose it to the outside network.
2. monitoring? if the service is being used anywhere at all, the client will throw some sort of exception if it's unreachable.
memory problem? it should take <1 day to ensure the code for such a small service does not leak memory. if it does have memory leaks anyways, just basic cpu/mem usage monitoring on your hosts will expose it. then ssh in, run `top`, and voila, now you know which service is responsible (a rough cron version of this check is sketched below the list).
3. deployment? if it's a Go service, literally a bash script to scp over the binary and an upstart job to monitor/restart the binary.
rollback? ok, check out the previous version in git, recompile, redeploy. maybe the whole process is wrapped in a bash script or assisted by a CI/CD build job (rough sketch below the list).
4. security? well ok, PDFs can be vulnerable to parser attacks. so lock down the permissions and network rules on the service (see the lockdown sketch below).
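to make point 2 concrete, here's roughly what the "basic cpu/mem monitoring" could look like as a cron job on the host, so nobody has to ssh in and run `top` by hand. the `/health` endpoint, the `pdf-service` binary name, the port, the threshold, and the mail alert are all assumptions for the sketch, not something from the setup above:

```bash
#!/usr/bin/env bash
# crude reachability + memory check, meant to run on the same host as the service,
# e.g. every 5 minutes from cron:
#   */5 * * * * /opt/scripts/check-pdf-service.sh
set -u

PORT=8080                    # assumed port
MAX_RSS_KB=$((512 * 1024))   # alert if the process grows past ~512 MB (arbitrary threshold)
ALERT_ADDR="ops@example.com" # assumed alert address

# is it reachable at all? (assumes the service exposes a /health endpoint)
if ! curl -fsS --max-time 5 "http://127.0.0.1:${PORT}/health" > /dev/null; then
    echo "pdf-service health check failed" | mail -s "pdf-service down" "${ALERT_ADDR}"
fi

# is it leaking? total RSS in KB for the process (assumes the binary is named pdf-service)
rss=$(ps -C pdf-service -o rss= | awk '{sum += $1} END {print sum + 0}')
if [ "${rss}" -gt "${MAX_RSS_KB}" ]; then
    echo "pdf-service RSS is ${rss} KB" | mail -s "pdf-service memory high" "${ALERT_ADDR}"
fi
```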
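and the bash deploy/rollback script from point 3 would be something in this spirit — the host names, paths, package path, and upstart job name are invented for illustration:

```bash
#!/usr/bin/env bash
# build, ship, restart. pass a git ref to deploy (defaults to HEAD);
# passing an older tag, e.g. ./deploy.sh v1.4, is the rollback path.
set -euo pipefail

REF="${1:-HEAD}"
HOSTS="app1.internal app2.internal"   # assumed target hosts
BIN="pdf-service"                     # assumed binary name == upstart job name

git checkout "${REF}"
go build -o "${BIN}" ./cmd/pdf-service   # assumed package path

for h in ${HOSTS}; do
    scp "${BIN}" "deploy@${h}:/opt/pdf-service/${BIN}.new"
    # swap the binary in place, then let upstart restart (or start) the job
    ssh "deploy@${h}" "mv /opt/pdf-service/${BIN}.new /opt/pdf-service/${BIN} && sudo restart ${BIN} || sudo start ${BIN}"
done
```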
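and for points 1 and 4, the "don't expose it / lock it down" part at the host level might look like this — port 8080, the 10.0.0.0/8 internal range, the user name, and the install path are all assumptions:

```bash
# allow the pdf service's port only from the internal network, drop everything else
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP

# run the binary as an unprivileged user with no login shell,
# so a compromised PDF parser can't do much on the box
useradd --system --shell /usr/sbin/nologin pdfsvc
chown -R pdfsvc:pdfsvc /opt/pdf-service
```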
Overall this setup would work perfectly fine in a small/medium company and take 5-10x less time than doing everything the FAANG way. i don't think we should jump to calling these best practices without understanding the context in which the service lives.
I more or less agree with 1 and 4. But for monitoring, you would either have to monitor the service calling this microservice or have some other way to detect errors.
> if it does have memory leaks anyways, just basic cpu/mem usage monitoring on your hosts
Who keeps monitoring like this? How frequently would you do it? In a startup there are somewhere in the range of 5 microservices of that scale per programmer, and checking each one daily by running `top` is not feasible.
> 3. deployment? if it's a Go service, literally a bash script to scp over the binary and an upstart job to monitor/restart the binary.
Your solution is literally more complex than a simple Jenkins or Ansible script for the build followed by `kubectl rollout restart`, yet it is a lot more fragile. Anyway, the point stands that you need to have a way to deploy.
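For reference, the k8s path being compared here is roughly this, assuming the service is already containerized and running as a Deployment called `pdf-service` (that name is made up):

```bash
# build + push the image from Jenkins/CI, then:
kubectl rollout restart deployment/pdf-service   # restart pods so they pull the new image (assumes a reused tag, e.g. :latest)
kubectl rollout status deployment/pdf-service    # block until the rollout is done
kubectl rollout undo deployment/pdf-service      # one-command rollback if it goes bad
```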
My larger point is basically just against dogma and “best practices”. Every decision has tradeoffs and is highly dependent on the larger organizational context.
For example, `kubectl rollout` assumes that your service is already packaged as a container, that you are already running a k8s cluster, and that the team knows how to use it. In that context, maybe your method is a lot better. But in another context where k8s is not adopted and the ops team is skilled at Linux admin but not at k8s, my way might be better. There’s no one true way and there never will be. Technical decisions cannot be made in a vacuum.
> Overall this setup would work perfectly fine in a small/medium company and take 5-10x less time than doing everything the FAANG way.
The point was never to compare it to the FAANG way. The point is: it's easier (at the beginning) to maintain ONE monolith (and all the production stuff related to it) than N microservices.