Edit: While Horizon assigns versions internally, it looks like they are not currently used for catching concurrent client-side modifications by different users. (I previously wrote that they would cause the "losing" write to fail if it was based on an outdated version of the data, but that doesn't seem to be the case yet.)
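To make the mechanism concrete, here is a minimal sketch of the version-based conflict detection described above (which, per the edit, Horizon does not currently perform). The `Store` class and its methods are hypothetical, invented purely for illustration:

```python
# Sketch of optimistic concurrency via per-document versions.
# Illustrative only -- not Horizon's actual implementation.

class Store:
    def __init__(self):
        self.docs = {}  # id -> (version, value)

    def read(self, doc_id):
        return self.docs[doc_id]

    def write(self, doc_id, value, based_on_version):
        current_version, _ = self.docs.get(doc_id, (0, None))
        if based_on_version != current_version:
            # The "losing" write was based on outdated data: reject it.
            raise ValueError("stale write rejected")
        self.docs[doc_id] = (current_version + 1, value)

store = Store()
store.write("a", "first", based_on_version=0)
v, _ = store.read("a")                            # two clients both read version 1
store.write("a", "second", based_on_version=v)    # first writer wins, bumps to 2
try:
    store.write("a", "third", based_on_version=v) # second writer is stale
except ValueError:
    print("conflict detected")
```

With this scheme, concurrent modifications by different users surface as an explicit error on the stale write instead of silently overwriting each other.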
We're constantly improving performance, and a lot has happened within the past year. At this point I think RethinkDB's features and performance for analytics are on par with those of many other general-purpose databases.
From what I can tell, there are still two main limitations that apply in some, but not all, scenarios:
* Grouping big results without an associated aggregation requires the full result to fit into RAM. I believe this was the limitation that you ran into a year ago, which led to RAM exhaustion. This limitation is still there ( https://github.com/rethinkdb/rethinkdb/issues/2719 in our issue tracker). However, we're shipping a new command `fold` with the upcoming 2.3 release of RethinkDB, which can be used in the vast majority of cases to perform streaming grouped operations (in conjunction with a matching index). See https://github.com/rethinkdb/rethinkdb/issues/3736 for details.
* Scanning data sets that don't fit into memory on rotational disks is still inefficient. Most SQL databases deploy sophisticated optimizations to structure their disk layout in order to minimize the effects of high seek times. RethinkDB's disk layout is built with a stronger focus on SSDs. Hence this limitation doesn't apply if the data is stored on SSDs.
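The idea behind the first bullet's index-backed workaround can be illustrated with plain Python: if rows arrive already ordered by the grouping key (analogous to reading through a matching index), each group can be aggregated as it streams past, so only one group's state is ever in memory. The field names below are made up:

```python
from itertools import groupby

# Hypothetical rows, pre-sorted by the grouping key -- analogous to
# scanning a table through an index on "show".
rows = [
    {"show": "a", "views": 3},
    {"show": "a", "views": 5},
    {"show": "b", "views": 2},
    {"show": "b", "views": 4},
]

# Because input arrives in key order, each group is folded into its
# aggregate in a single streaming pass; the full result set never
# needs to fit into RAM at once.
totals = {
    key: sum(r["views"] for r in group)
    for key, group in groupby(rows, key=lambda r: r["show"])
}
print(totals)  # {'a': 8, 'b': 6}
```

This is the general streaming-aggregation pattern, not the actual `fold` API; see the linked issue for the real command's semantics.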
Out of curiosity, why would you prefer this sort of implementation to something more in line with MogileFS? [e.g. Metadata storage with the actual file stored independently on the local file system of multiple physical nodes]
> That's not what http://rethinkdb.com/docs/quickstart/ says. It shows a connection that exposes context-bound Builder methods with trigger (insert, changes, run, etc) methods.
I think the confusion stems from the fact that the Quickstart guide assumes that you're running queries in the Data Explorer, a web frontend for prototyping queries. In the Data Explorer, clicking the "Run" button is what triggers the execution of the AST.
If you wrote something like `r.table("tv_shows").insert(...)` in your application code, it wouldn't do anything except return an AST object. You can store that object, or call the `run(conn)` method on it to send it over a RethinkDB connection and execute it.
Note that the `r` object in these queries has no state. You can think of it as a namespace that serves as a starting point for building queries.
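A toy sketch may make the laziness clearer. The classes and AST encoding below are invented for illustration; this is not the real RethinkDB driver, just the same design idea:

```python
# Toy sketch of a stateless query namespace whose methods only build
# an AST; nothing touches the database until run() is called.

class Query:
    def __init__(self, ast):
        self.ast = ast  # nested tuples describing the query

    def insert(self, doc):
        # Returns a NEW Query object; no I/O happens here.
        return Query(("insert", self.ast, doc))

    def run(self, conn):
        # Only here is the AST serialized and sent over the connection.
        return conn.execute(self.ast)

class r:  # the "namespace" starting point; it holds no state
    @staticmethod
    def table(name):
        return Query(("table", name))

q = r.table("tv_shows").insert({"name": "Star Trek TNG"})
print(q.ast)  # ('insert', ('table', 'tv_shows'), {'name': 'Star Trek TNG'})
```

Until `q.run(conn)` is invoked, `q` is just a description of a query, which is why evaluating the expression alone appears to "do nothing".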
We've since simplified the build process, and it can now build most dependencies automatically.
The only exceptions right now are the web UI assets, which still need to be downloaded separately or copied from Linux (building these on Windows will come later).
We don't link against or compile in any Cygwin code; we use the Windows APIs directly. The build system does use some Cygwin tools, though.
Incidentally we considered using Cygwin to achieve Windows compatibility at some point, but found that it didn't implement some of the lower-level APIs that RethinkDB uses on Linux.
We originally developed our evolution-inspired tool to optimize LLM prompts. To our surprise, we found that the same method also worked well for getting better performance out of a base model on ARC-AGI tasks.
We're open-sourcing the evolver tool today. It's built to be adapted to many different optimization problems (some coding required). You can read more about it at https://imbue.com/research/2026-02-27-darwinian-evolver/
Happy to answer questions!
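For readers unfamiliar with the general approach, here is a minimal sketch of the kind of evolutionary loop such a tool runs: mutate candidates, score them, keep the fittest. The target string and fitness function are toy stand-ins for a real objective (e.g. a prompt scored against a benchmark); none of this is the actual evolver implementation:

```python
import random

random.seed(0)
TARGET = "arc"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Count of positions matching the target -- a toy objective.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Change one random position to a random letter.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# (1 + lambda) evolution strategy: keep the best candidate, generate
# mutated children from it, and replace it only when a child scores
# at least as well.
best = "xxx"
for _ in range(1000):
    children = [mutate(best) for _ in range(8)]
    challenger = max(children, key=fitness)
    if fitness(challenger) >= fitness(best):
        best = challenger

print(best, fitness(best), "/", len(TARGET))
```

The same skeleton generalizes by swapping in a different candidate representation, mutation operator, and fitness function, which is roughly the "some coding required" part.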