So DeepSeek, GPT, and presumably many other LLMs are capable of solving this problem and even producing independent, unique proofs. I wonder if this particular Erdős problem is unusual in its solvability.
The SA-67 is essentially a hybrid surface-to-air missile and loitering drone that operates like an airborne mine. It’s a pretty innovative weapon: instead of relying on a fast, highly detectable rocket motor, it uses a small gas turbine and passive infrared seeker to silently loiter in a combat zone and then ambush aircraft without ever triggering their traditional radar warning receivers.
But it seems they are pretty pissed off with the Chinese, since they spent a few hundred million on those defense systems, which turned out to be a complete failure. This was also after the HQ-9B failed to adequately protect high-value targets in Pakistan during India's Operation Sindoor.
I don't believe the SA-67 is the most likely weapon used here. Given that it's turbojet-powered, that missile is almost certainly subsonic and better suited to taking out prop-driven drones like the Predator. Even at sea level, the F-15E would probably outrun it at low cruise speed.
You're definitely right that passive seekers are playing a huge role here, though. Many people online (and on HN) bought into the air-dominance shtick just because major radar sites were taken offline. It was always the road-mobile launchers and TELARs that would be a threat.
Early-2000s RTS games (Starcraft 1, Warcraft 3, the CnC franchise) continue to amaze me with how well their seemingly comical "game physics" model the intrinsic dynamics of real-world conflicts, almost prophetically.
We have attacked their “legacy” air defense systems. We cannot really degrade their ability to use their anti-aircraft loitering missiles which don’t rely on radar.
They mention "not malicious", but I wonder if current controls are strong enough to prevent malice if the objective is to "interpret intentions disastrously". Isn't this irresponsible?
The idea of stateful models/interactions in an enterprise is extremely powerful. Is anyone aware of open source projects that have a similar goal? I'm looking for stateful conversations, with collaborative agent/skill refinement.
To head off the semantics debate: I don't mean a model rewriting its own source code. I'm asking about 'process recursion'—systems that analyze completed work to autonomously generate new agents or heuristics for future tasks.
-ish. I often keep md files around, and after a successful task I ask Codex to write the important bits down. Then, when I come around to a similar task in the future, I have it start from the md file. It's like context that grows and is very localized. It helps when I'm going through multiple repos at multiple levels.
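Roughly what that loop looks like, as a minimal Python sketch. `run_agent` is just a stand-in for however you actually invoke Codex (or any other agent), and `NOTES.md` is an illustrative filename, not anything the tools require:

```python
from pathlib import Path

NOTES = Path("NOTES.md")  # hypothetical per-repo memory file


def run_agent(prompt: str) -> str:
    """Stand-in for whatever coding agent you drive (Codex, etc.)."""
    raise NotImplementedError


def start_task(task: str) -> str:
    # Prime the agent with the accumulated notes, if any exist yet.
    notes = NOTES.read_text() if NOTES.exists() else ""
    return run_agent(f"Project notes so far:\n{notes}\n\nTask:\n{task}")


def finish_task(transcript: str) -> None:
    # Ask the agent to distill the important bits, then append them.
    summary = run_agent(
        "Summarize what future sessions should remember from this work, "
        "as terse markdown bullets:\n" + transcript
    )
    with NOTES.open("a") as f:
        f.write("\n" + summary.strip() + "\n")
```

The point is that the notes file, not the chat transcript, is the thing that persists between sessions.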
I'm also doing something similar with fairly decent results. AGENTS.md grows after each session that produces worthwhile knowledge future sessions can take advantage of. At some point I assume it will get too big, and then it's back to the Stone Age for new agents, in order to free up some context for the actual work.
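One way to push the Stone Age back a bit is to compact the file instead of abandoning it. A rough sketch, again with a hypothetical `run_agent` call and an arbitrary size budget you'd tune to your own context window:

```python
from pathlib import Path

AGENTS = Path("AGENTS.md")
MAX_CHARS = 8_000  # rough budget; pick whatever fits your agent's context


def run_agent(prompt: str) -> str:
    """Stand-in for the model call used to compress the notes."""
    raise NotImplementedError


def compact_agents_md() -> None:
    text = AGENTS.read_text() if AGENTS.exists() else ""
    if len(text) <= MAX_CHARS:
        return  # still small enough to load wholesale
    compressed = run_agent(
        "Rewrite these project notes to roughly half the length. "
        "Keep commands, file paths, and hard-won gotchas; drop narration:\n" + text
    )
    AGENTS.write_text(compressed.strip() + "\n")
```

Lossy, obviously, but it keeps the file useful instead of forcing a cold start.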