
What has worked well for us, something that IMO combines the best of both worlds:

* break down the problem into sensible components. For a reporting system I'm working on atm, we're using one component per type of source data (Postgres, XLSX files, XML), one component for transformations (based on pandas), and one component for the document exporter.

* let those components talk to each other through HTTP requests, with an OpenAPI specification that we use both to generate a simple Swagger GUI for each endpoint and to validate and test response schemas.
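To illustrate the response-schema checking this enables, here is a minimal, hand-rolled sketch (the schema and payload fields are made up; in practice the schema would come from your OpenAPI spec and a proper validator library would do this):

```python
# Illustrative subset of a JSON Schema as embedded in an OpenAPI spec.
# Field names are hypothetical, not from any real spec.
RESPONSE_SCHEMA = {
    "type": "object",
    "required": ["rows", "generated_at"],
    "properties": {
        "rows": {"type": "array"},
        "generated_at": {"type": "string"},
    },
}

def validate(payload: dict, schema: dict) -> list:
    """Return a list of schema violations (empty list means valid)."""
    errors = []
    for field in schema.get("required", []):
        if field not in payload:
            errors.append("missing required field: " + field)
    type_map = {"array": list, "string": str, "object": dict}
    for field, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if field in payload and expected and not isinstance(payload[field], expected):
            errors.append("wrong type for field: " + field)
    return errors

print(validate({"rows": [], "generated_at": "2024-01-01"}, RESPONSE_SCHEMA))  # []
print(validate({"rows": "oops"}, RESPONSE_SCHEMA))
```

A real setup would use a library such as jsonschema against the schemas extracted from the spec; the point is that every endpoint's response can be checked mechanically in tests.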

* deploy as AWS Lambda functions - let Amazon worry about scaling this instead of running our own Kubernetes etc.

* BUT we have a very thin shim in front that locally emulates API gateway and runs all the lambda code in a single process.

   - (big) advantage: we can easily debug & test business code changes locally.
     Only once that is correct do we start worrying about deployment, which is all done as IaC and rarely gives us trouble.
   - (minor) disadvantage: the various dependencies of lambdas can conflict with each other locally, so gotta be careful with library versions etc.
Done this way, scaling works quite well, there is no up-front infrastructure cost, and code tends to stay nice and clean because devs CANNOT just import something from another component - common functionality first needs to be refactored into a commons module that we add to each lambda, which gives us a nice point for sanity-checking what goes in there.
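The thin local shim can be sketched roughly like this (handler names, routes, and payloads are all hypothetical; real handlers would live in their own components and real requests would go through API Gateway):

```python
import json

# Two pretend lambda handlers, standing in for real components.
def extract_handler(event, context=None):
    # "source data" component: would read Postgres/XLSX/XML in reality
    return {"statusCode": 200, "body": json.dumps({"rows": [1, 2, 3]})}

def transform_handler(event, context=None):
    # "transformations" component: would use pandas in reality
    rows = json.loads(event["body"])["rows"]
    return {"statusCode": 200, "body": json.dumps({"rows": [r * 2 for r in rows]})}

# Route table the shim uses instead of API Gateway.
ROUTES = {
    ("GET", "/extract"): extract_handler,
    ("POST", "/transform"): transform_handler,
}

def invoke(method, path, body=None):
    """Build an API-Gateway-style event and call the handler in-process."""
    handler = ROUTES[(method, path)]
    event = {"httpMethod": method, "path": path, "body": body}
    return handler(event)

# Chain two "lambdas" locally in a single process, no AWS involved.
resp = invoke("GET", "/extract")
resp = invoke("POST", "/transform", body=resp["body"])
print(json.loads(resp["body"]))  # {'rows': [2, 4, 6]}
```

Because everything runs in one process, you can set breakpoints across component boundaries; the same handler functions are then deployed unchanged as separate lambdas behind API Gateway.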

