AWS’s usual moat doesn’t really apply here. AWS is Hotel California — if your business and data are in AWS, the cost of moving any data-intensive portion out of AWS is absurd due to egress fees. But LLM inference is not data-transfer intensive at all: a relatively small number of bytes/tokens go to the model, it does a lot of compute, and a relatively small number of tokens come back. So a business that’s stuck in AWS can cost-effectively outsource its LLM inference to a competitor without paying any substantial egress fees.
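A quick back-of-the-envelope sketch makes the asymmetry concrete. The numbers here are assumptions (roughly the first-tier internet egress rate and an average bytes-per-token), not anyone's actual bill:

```python
# Rough sketch comparing egress cost of token traffic vs. moving bulk data.
# All constants are assumptions for illustration, not quoted AWS pricing.

EGRESS_PER_GB = 0.09     # assumed ~$0.09/GB first-tier internet egress
BYTES_PER_TOKEN = 4      # rough average for English text

def token_egress_cost(requests: int, tokens_per_request: int) -> float:
    """Cost of shipping prompts out of AWS to an external inference provider."""
    gb = requests * tokens_per_request * BYTES_PER_TOKEN / 1e9
    return gb * EGRESS_PER_GB

def bulk_egress_cost(terabytes: float) -> float:
    """Cost of moving a data-heavy workload (the Hotel California part) out."""
    return terabytes * 1000 * EGRESS_PER_GB

# A million requests with ~2,000-token prompts is only ~8 GB of egress.
print(f"LLM traffic: ${token_egress_cost(1_000_000, 2_000):,.2f}")
# Moving a 500 TB dataset out is a different story.
print(f"Bulk data:   ${bulk_egress_cost(500):,.2f}")
```

Under these assumptions the token traffic costs well under a dollar while the bulk move costs tens of thousands, which is the whole point: the lock-in is on the data, not on the inference.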
RAG is kind of an exception, but RAG still splits the database part from the inference part, and the inference part is what needs lots of inference-time compute. AWS may still have a strong moat for the compute needed to build an embedding database in the first place.
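A minimal sketch of that split (the names and the toy in-memory "store" are purely illustrative, not any real library's API): retrieval against the embedding database stays inside AWS, and only a few KB of assembled prompt crosses the boundary to whoever does the heavy inference-time compute.

```python
from typing import List

def retrieve_in_aws(question: str, store: List[str], k: int = 3) -> List[str]:
    """Stand-in for the AWS-side step: query the embedding database built
    (expensively) inside AWS and return the top-k matching chunks."""
    # Toy relevance score: shared word count; a real system would use
    # vector similarity against precomputed embeddings.
    words = set(question.lower().split())
    return sorted(store, key=lambda c: -len(words & set(c.lower().split())))[:k]

def call_external_llm(prompt: str) -> str:
    """Stand-in for the outsourced inference call. The prompt is a few KB of
    text, so the egress cost of this hop is effectively zero."""
    return f"<answer generated from {len(prompt)} bytes of prompt>"

store = ["Q3 revenue grew 12% year over year.", "Headcount was flat in Q3."]
chunks = retrieve_in_aws("How did revenue change in Q3?", store)
print(call_external_llm("Context: " + " ".join(chunks)))
```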
Simple, cheap, low-compute inference on large amounts of data is another exception, but this use will strongly favor the “cheap” part, which means there may not be as much money in it for AWS. No one is about to do o3-style inference on each of 1M old business records.