Oh, absolutely - such an entity obviously could! Modelling the behaviour of such an entity is very difficult indeed, as you'd need to make all kinds of assumptions without basis. However, you only need to model this behaviour once you've posited the likely existence of such an entity - and that's where (purely subjectively) it feels like there's a gap.
Nothing has yet convinced me (and I am absolutely honest about the fact that I'm not a deep expert and also not privy to the inner workings of relevant organisations) that it's likely to exist soon. I am very open to being convinced by evidence - but an "argument from trajectory" seems to be what we have at the moment, and so far, those have stalled at local maxima every single time.
We've built some incredibly impressive tools, but so far, nothing that looks or feels like a concept of will (note, not consciousness) yet, to the best of my knowledge.
> those have stalled at local maxima every single time.
It's hard to encapsulate AI/ML progress in a single sentence, but even assuming LLMs aren't a direct step towards AGI, the human mind exists. Because of its evolutionary constraints, it operates relatively slowly. In theory, its functions could be replicated in silicon, sped up, parallelised, internetworked, and given near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.
> We've built some incredibly impressive tools, but so far, nothing that looks or feels like a concept of will (note, not consciousness) yet, to the best of my knowledge.
An AGI's objectives can be tweaked by human actors (it's complicated, but it's still data manipulation). There's no need to delve into the philosophy of sentience as long as the AGI surpasses humans at achieving goals. What matters is whether those goals align with, or contradict, what most humans consider beneficial, regardless of whether the goals originate internally or externally.
> In theory, its functions could be replicated in silicon, sped up, parallelised, internetworked, and given near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.
Let's be clear, we have very little idea about how the human brain gives rise to human-level intelligence, so replicating it in silicon is non-trivial.
> In theory, its functions could be replicated in silicon, sped up, parallelised, internetworked, and given near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.
This is true, but there are some important caveats. Even if it's possible in principle, it might not be feasible in practice, in various ways. For example, we may not be able to figure it out with human-level intelligence. Or silicon may be too energy-inefficient to do the computations our brains do with the resources reasonably available on Earth. Or the density of transistors required to replicate human-level intelligence might dissipate so much heat that the chips melt, in which case replicating human intelligence in silico wouldn't actually be possible.
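To put some very rough numbers on the energy point, here's a back-of-envelope sketch comparing energy per "operation" for a brain and a GPU. Every figure is an assumed ballpark (the brain's ~20 W is commonly cited; its "operations per second" is deeply uncertain, and a synaptic event isn't the same unit as a FLOP), so this frames the question rather than settling it.

```python
# Back-of-envelope: joules per "operation" for a brain vs. a GPU.
# All numbers are rough assumptions for illustration; brain estimates in
# particular vary by orders of magnitude, and a synaptic event is not
# directly comparable to a floating-point operation.
BRAIN_POWER_W = 20        # commonly cited resting power of a human brain
BRAIN_OPS_PER_S = 1e15    # assumed synaptic events per second (highly uncertain)

GPU_POWER_W = 700         # assumed draw of a modern datacenter GPU
GPU_OPS_PER_S = 1e15      # assumed low-precision FLOP/s for such a GPU

brain_j_per_op = BRAIN_POWER_W / BRAIN_OPS_PER_S   # ~2e-14 J
gpu_j_per_op = GPU_POWER_W / GPU_OPS_PER_S         # ~7e-13 J

print(f"brain: ~{brain_j_per_op:.0e} J per op")
print(f"GPU:   ~{gpu_j_per_op:.0e} J per op")
print(f"GPU uses ~{gpu_j_per_op / brain_j_per_op:.0f}x more energy per op under these assumptions")
```

Depending on which assumptions you plug in, the gap ranges from roughly an order of magnitude to many, which is exactly why this caveat is hard to evaluate.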
Also, as you say, there is no reason to believe the current approaches to AI can lead to AGI, so there is no reason to specifically ban AI research. Especially since the most important advances behind the current AI boom were better GPUs and more information digitized on the internet, neither of which is specifically AI research.
I have put this argument to the test. Admittedly only with the current state of AI: I have left an LLM loaded in memory and waited for it to demonstrate will. So far it has been a few weeks and no will that I can see: the model remains loaded in memory, waiting for instructions. If the model starts giving ME instructions (or doing anything on its own), I will be sure to let you guys know so you can put on your tin foil hats or hide in your bunkers.
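For anyone who wants to reproduce the experiment, here's a minimal sketch of the setup, assuming the Hugging Face transformers library and gpt2 purely as a stand-in model. The point it makes concrete: a loaded model is just weights sitting in memory, and no forward pass ever runs unless some outside code calls it.

```python
# Minimal sketch of the "wait for will" setup, assuming the Hugging Face
# transformers library and gpt2 as a stand-in model.
import time
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # weights now resident in memory

print("Model loaded. Waiting for it to do anything of its own accord...")
while True:
    # Nothing here ever calls model.generate() or model(...), so no computation
    # runs. The process just idles with the weights sitting in memory.
    time.sleep(3600)
```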
> I am very open to being convinced by evidence - but an "argument from trajectory" seems to be what we have at the moment, and so far, those have stalled at local maxima every single time.
Sounds like the same argument by which heavier-than-air flying machines were deemed impossible at some point.
Our current achievements in flight are impressive, and obviously optimised for practicality on a couple of axes. More generally, though, our version of flight, compared with most birds, is the equivalent of a soapbox racer against a Formula 1 car.