

Ankush has moved on to Snowflake now, but he was previously at AWS.


Yes, it's a large chunk, but not everything! Marc had a comment on Bluesky regarding this:

> Many SQL aggregations are monotonic operations (e.g. MAX, SUM, etc) that can be partially completed on each node and then post-merged. Some (e.g. DISTINCT) can be transformed into monotonic ops with some effort. Some aren't possible to do this way. (Ref on monotonicity: arxiv.org/pdf/1901.01930)

The benefit of this is that a lot more work is done _close_ to the data. The trend is that bandwidth is getting larger in data centers, but latency isn't improving at the same rate. Reducing the number of round trips between QP and storage greatly improves the overall query latency, even if you have to do more work on the storage.
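
A rough sketch of the partial-complete-then-merge idea (the shard contents below are made up for illustration; this isn't DSQL's actual storage interface):

    # Hedged sketch of per-node partial aggregation with a post-merge.
    # The shard contents are invented; this is not DSQL's actual API.
    shards = [
        [3, 9, 4],     # rows held by storage node 1
        [7, 2],        # rows held by storage node 2
        [5, 8, 1, 6],  # rows held by storage node 3
    ]

    # Monotonic aggregates (SUM, MAX, ...) run on each storage node, close to
    # the data; the query processor only merges the small partial results.
    partial_sums  = [sum(rows) for rows in shards]
    partial_maxes = [max(rows) for rows in shards]
    total_sum = sum(partial_sums)    # post-merge
    total_max = max(partial_maxes)   # post-merge

    # DISTINCT isn't monotonic as-is, but each node can ship its set of
    # distinct values and the merge becomes a set union.
    distinct_count = len(set().union(*map(set, shards)))

    print(total_sum, total_max, distinct_count)  # 45 9 9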


> The benefit of this is that a lot more work is done _close_ to the data.

But isn't that fundamentally at odds with the central idea of disaggregation?

> At a fundamental level, scaling compute in a database system requires disaggregation of storage and compute. If you stick storage and compute together, you end up needing to scale one to scale the other, which is either impossible or uneconomical.

So either you can get good perf by doing the work close to data, or get good scalability by separating compute and data. But I can't see how you can do both.


There's a blog with more technical details: https://aws.amazon.com/blogs/database/introducing-amazon-aur...

It provides strong consistency for cross-region transactions.


It's Postgres-compatible, so not exactly non-portable.



Why would this product having only a subset of PostgreSQL features make it less portable? If anything, that makes it more portable.


Not if you're an existing postgres user considering DSQL - then it's effectively an entirely different database. Compatibility goes both ways.


There's a blog on the technical details here: https://aws.amazon.com/blogs/database/introducing-amazon-aur...


The most interesting page is always quotas and limits: https://docs.aws.amazon.com/aurora-dsql/latest/userguide/CHA...

Seeing "No" under "Configurable" for some of the settings is the most telling part: those are hard limits that we can see up front.

Some very notable ones are: maximum size of all data modified within a write transaction: 10 MiB; maximum transaction time: 5 minutes.
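
For what it's worth, here's one way an application might cope with a cap like that, batching a large write so each commit stays under the limit (the size estimator and commit hook are hypothetical stand-ins, not a DSQL API):

    # Hedged sketch: split a large write into batches that each stay under a
    # per-transaction size cap. estimate_size and commit_batch are hypothetical
    # stand-ins for whatever driver you actually use.
    MAX_TXN_BYTES = 10 * 1024 * 1024  # 10 MiB, per the quotas page

    def batch_rows(rows, estimate_size, limit=MAX_TXN_BYTES):
        batch, batch_bytes = [], 0
        for row in rows:
            size = estimate_size(row)
            if batch and batch_bytes + size > limit:
                yield batch
                batch, batch_bytes = [], 0
            batch.append(row)
            batch_bytes += size
        if batch:
            yield batch

    rows = [{"id": i, "payload": "x" * 512} for i in range(100_000)]
    for batch in batch_rows(rows, estimate_size=lambda r: len(repr(r))):
        pass  # commit_batch(batch)  -- one transaction per batch (hypothetical)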


The 10 MiB transaction size limit smells a lot like FoundationDB's transaction size limit.


10 MiB is an appealing number for humans; it can easily appear across different designs.


According to page 128 of their S-1 filing [0], their claims bot "AI Jim" processes claims without human intervention in a third of cases, including automatically denying claims. So either the second tweet in the thread is a lie, or their S-1 filing is a lie. This was pointed out by Rachel Metz [1].

[0]: https://sec.report/Document/0001047469-20-003416/#cy40510_bu... [1]: https://twitter.com/rachelmetz/status/1397625251257753606?s=...


I think the real story will be that their business is a financial engineering free lunch that is going to blow up spectacularly.

They say they are different because they reinsure most of their risk. So why doesn't everybody do this, print money and let someone else take the risk? Just skimming the S-1 makes me think of Greensill. It's some sort of arbitrage that makes it hard to put your finger on where the risk is.


Columns: they were using a column per case instead of a row per case.


Is that definitive?


Nope! Actually, going off better info, it now looks like they WERE using rows, but the file was in the old .xls format instead of .xlsx, which limits you to 2^16 rows.

https://twitter.com/standupmaths/status/1313149987707072512?...
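
The cap itself is easy to eyeball (the daily case count below is invented, just to show the scale of silent truncation):

    # 2^16 rows is the hard cap in the legacy .xls format.
    XLS_MAX_ROWS = 2 ** 16            # 65,536
    header_rows = 1
    cases_that_day = 70_000           # made-up figure for illustration

    rows_available = XLS_MAX_ROWS - header_rows
    cases_silently_dropped = max(0, cases_that_day - rows_available)
    print(rows_available, cases_silently_dropped)  # 65535 4465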

