I think this is true for every row-oriented database that stores all the values for a single row together within a fixed-size structure. Columnar stores, on the other hand, may or may not do this.
I have built a database engine that is a columnar store. All the values for each column are stored together. This means almost every query has to reassemble rows by fetching each column's value separately, but it has proven to be incredibly fast. Queries on big tables run several times faster than on Postgres and about 5x faster than on SQLite. https://www.youtube.com/watch?v=Va5ZqfwQXWI
Mine is not. If a table has 4 columns (e.g. name, address, phone, email), then all the names are stored separately from the addresses. Likewise, all the phone numbers are stored separately from the emails. The data is de-duped, so it is incredibly easy to find out how often each value occurs in each column (e.g. there are 1,234,567 rows in the table where name = 'John').
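A minimal sketch of that de-duplicated, column-at-a-time layout (my own illustration, not the engine's actual code): each column keeps one copy of every distinct value plus an occurrence count, so the "how many rows where name = 'John'" query is a single lookup, while projecting a row has to touch every column separately.

```python
from collections import Counter

class Column:
    def __init__(self):
        self.values = []         # de-duped value storage
        self.index = {}          # value -> position in self.values
        self.counts = Counter()  # value -> number of rows containing it
        self.rows = []           # per-row reference into self.values

    def append(self, value):
        if value not in self.index:
            self.index[value] = len(self.values)
            self.values.append(value)
        self.counts[value] += 1
        self.rows.append(self.index[value])

    def count(self, value):
        # equivalent of: SELECT count(*) FROM t WHERE col = value
        return self.counts[value]

class Table:
    def __init__(self, column_names):
        self.columns = {name: Column() for name in column_names}

    def insert(self, row):
        for name, value in row.items():
            self.columns[name].append(value)

    def project(self, row_id, names):
        # reassembling a row touches each column store separately
        return tuple(self.columns[n].values[self.columns[n].rows[row_id]]
                     for n in names)

t = Table(["name", "address", "phone", "email"])
t.insert({"name": "John", "address": "1 Main St", "phone": "555-1234", "email": "j@x.com"})
t.insert({"name": "John", "address": "2 Elm St", "phone": "555-9876", "email": "j2@x.com"})
t.insert({"name": "Mary", "address": "3 Oak St", "phone": "555-4444", "email": "m@x.com"})

print(t.columns["name"].count("John"))  # 2
print(t.project(1, ["name", "phone"]))  # ('John', '555-9876')
```

The dedup buys both compression (each distinct value stored once) and free per-value counts, at the cost of an extra indirection on every row projection.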
The downside is that projecting a row requires random I/O across a larger number of pages, which also means more evictions from the in-memory buffer and worse cache efficiency. Apache Arrow, Parquet, Redshift, Bigtable/Spanner, and Snowflake, for example, are all hybrid columnar. You get good row data locality while still being able to exploit SIMD/vectorized ops.
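The hybrid ("row group" / PAX-style) idea can be sketched roughly like this: rows are chunked into fixed-size groups, and within each group the values are stored column-wise. A whole row lives in one group (good locality for projection), while a scan over one column still walks contiguous runs that a vectorizing engine can hand to SIMD. The group size and helper names here are illustrative, not any particular engine's format.

```python
GROUP_SIZE = 2  # tiny for the demo; real formats use thousands of rows

def to_row_groups(rows, columns):
    """Rewrite a list of row dicts into column-wise row groups."""
    groups = []
    for start in range(0, len(rows), GROUP_SIZE):
        chunk = rows[start:start + GROUP_SIZE]
        groups.append({c: [r[c] for r in chunk] for c in columns})
    return groups

def project_row(groups, row_id, columns):
    # One group holds every column of the row: roughly one "page" of I/O.
    g, off = divmod(row_id, GROUP_SIZE)
    return tuple(groups[g][c][off] for c in columns)

def scan_column(groups, column):
    # Each group's column is a contiguous run, amenable to vectorization.
    for g in groups:
        yield from g[column]

rows = [
    {"name": "John", "phone": "555-1234"},
    {"name": "John", "phone": "555-9876"},
    {"name": "Mary", "phone": "555-4444"},
]
groups = to_row_groups(rows, ["name", "phone"])
print(project_row(groups, 2, ["name", "phone"]))                    # ('Mary', '555-4444')
print(sum(1 for n in scan_column(groups, "name") if n == "John"))   # 2
```

The trade-off is tunable through the group size: bigger groups favor column scans, smaller groups favor row projection.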
* Except, maybe, TOAST, HOT, or compressed rows.