> What would be the cargo doc or rust-analyzer equivalent for good architecture?
Well, this is where you still need to know your tools. You should understand what ECS is and why it is used in games, so that you can push the LLM to use it in the right places. You should understand idiomatic patterns in the languages the LLM is using. Understand YAGNI, SOLID, DDD, etc etc.
Those are where the LLMs fall down, so that's where you come in. The individual lines of code, once it's been told what architecture to use and what's idiomatic, are where the LLM shines.
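For anyone who hasn't bumped into it, "ECS" here is the entity-component-system pattern common in game engines: entities are just ids, data lives in per-component storages, and behaviour lives in systems that iterate over the components they need. A minimal hand-rolled sketch (assumed names and types, not any particular engine's API):

```rust
// Minimal ECS-style sketch: entities are indices, each component type has its
// own parallel storage, and "systems" are plain functions over those storages.

#[derive(Clone, Copy, Debug)]
struct Position { x: f32, y: f32 }

#[derive(Clone, Copy, Debug)]
struct Velocity { dx: f32, dy: f32 }

#[derive(Default)]
struct World {
    // Index i in each Vec belongs to entity i; None means "lacks this component".
    positions: Vec<Option<Position>>,
    velocities: Vec<Option<Velocity>>,
}

impl World {
    fn spawn(&mut self, pos: Option<Position>, vel: Option<Velocity>) -> usize {
        self.positions.push(pos);
        self.velocities.push(vel);
        self.positions.len() - 1 // the new entity's id
    }
}

// A "system": acts on every entity that has both a Position and a Velocity.
fn movement_system(world: &mut World, dt: f32) {
    for (pos, vel) in world.positions.iter_mut().zip(&world.velocities) {
        if let (Some(p), Some(v)) = (pos.as_mut(), vel) {
            p.x += v.dx * dt;
            p.y += v.dy * dt;
        }
    }
}

fn main() {
    let mut world = World::default();
    let player = world.spawn(Some(Position { x: 0.0, y: 0.0 }), Some(Velocity { dx: 1.0, dy: 2.0 }));
    world.spawn(Some(Position { x: 5.0, y: 5.0 }), None); // static prop: no velocity

    movement_system(&mut world, 0.016); // one ~60 fps frame
    println!("player position: {:?}", world.positions[player]);
}
```

The point is the data-oriented layout: a movement system only touches positions and velocities, which is cache-friendly and easy to parallelize. That's the kind of "why" you need to know so you can tell the LLM where the pattern belongs.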
What you describe is how I use LLM tools today, but the reason I'm approaching my project this way is that I feel I need to brace myself for a future where developers are expected to "know your tools".
When I look around today, it's clear more and more people are diving head first into fully agentic workflows, and I simply don't believe they can churn out 10k+ lines of code and still be intimately familiar with the code base. Therefore you are left with two futures:
* Agentic-heavy SWEs will eventually blow up under the weight of all their tech debt
* Coding models are going to continue to get better, to the point where tech debt won't matter.
If the answer is (1), then I do not need to change anything today. If the answer is (2), then you need to prepare for a world where almost all code is written by an agent, but almost all responsibility is shouldered by you.
In kind of an ignorant way, I'm actually avoiding trying to properly learn what an ECS is and how the engine is structured, as sort of a handicap. If in the future I'm managing a team of engineers (however that looks) who are building a metaphorical Tower of Babel, I'd like to develop the heuristics for navigating that mountain.
A lot of current LLM work is basically emergent behavior. They use a really simple core algorithm and scale it up, and interesting things happen. You can read some of Anthropic's recent papers to see this: for example, they didn't expect LLMs could "look ahead" when writing poetry. However, when they actually went in and watched what was happening (there are details on how this "watching" works on their blog and in their studies), they found the LLM actually was planning ahead! That's emergent behavior; they didn't design it to do that, it just started doing it due to the complexity of the model.
If (BIG if) we ever do see actual AGI, it is likely to work like this. It's unlikely we're going to make AGI by designing some grand Cathedral of perfect software; it's more likely we're going to find the right simple principles and scale them big enough for AGI to emerge. This is similar.
Perception and interpretation can very much be influenced by language (the Sapir-Whorf hypothesis), so to the extent that perception and interpretation influence intelligence, it's not clear that the relationship runs in only one direction.
"Sapir-Whorf" is named after Sapir and Whorf, but neither of them postulated it as a single theory. It's just a colloquialism for Linguistic Relativity (vs Universality). In its weak form, there are many examples of Linguistic Relativity.
Am I the exception? When thinking, I don't conceptualize things in words; the compression would be too lossy. Maybe because I'm fluent in three languages (one Germanic, one Romance, one Slavic)?
Our brains reason in many domains depending on the situation.
For domains built primarily on linguistic primitives (e.g. legal writing), we do often reason through language. In other domains (e.g. spatial ones), we reason through vision or sound.
We experience this distinction when we study the formula vs the graph of a mathematical function: the former is linguistic, the latter is visual-spatial.
And learning multiple spoken languages is a great way to break out of particularly rigid reasoning patterns and, just as importantly, to counter biases influenced by your native language.