Logic reasoning has been a pretty solid branch of AI since its inception. Robust solutions exist for most problems; there is even a programming language based on its principles (Prolog).
With ChatGPT there is now a system that can express the results of automatic logic reasoning in natural language.
The next step would be to combine the two, i.e. tell chatGPT to explain the result of a logic reasoning program in natural language. It could of course also be asked to translate a natural language query into Prolog code.
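As an illustration of what that combination aims at, here is a minimal sketch of the kind of formal reasoning a Prolog program does, written in plain Python so it runs standalone. The facts, the rule, and the forward-chaining loop are all illustrative assumptions, not anything ChatGPT actually emits:

```python
# Toy forward-chaining knowledge base (illustrative assumption, not
# OpenAI's pipeline). Facts and a rule a language model might emit for:
# "All humans are mortal. Socrates is a human. Is Socrates mortal?"
facts = {("human", "socrates")}
rules = [
    # rule: human(X) -> mortal(X), as (premise predicates, conclusion)
    (("human",), "mortal"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts are derivable (fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, arg in list(derived):
                if pred in premises and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

kb = forward_chain(facts, rules)
answer = ("mortal", "socrates") in kb
print("Is Socrates mortal?", "Yes" if answer else "Unknown")
```

The interesting part is exactly what this sketch leaves out: translating the English question into the facts and rules, and verbalising the result, which is where a language model would come in.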
This will probably require retraining the model, but the demo OpenAI has given us leaves little doubt that it is perfectly doable.
ChatGPT has the potential to plug the gap between GOFAI and natural language, which is quite a feat.
ChatGPT is damned impressive, but I hesitate to apply the word "reasoning" to its output. It's also not clear that the backend can be generalized to be able to plug the input into a symbolic manipulation program, or to gather the output and explain it. Only OpenAI knows for certain right now. Powerful if possible though!
Here's why I can't call it "reasoning":
> what is next in the series: a,b,c
> Without additional context, it is impossible to determine what the next element in the series "a,b,c" would be. The series could potentially continue with "d," "e," or any other letter of the alphabet, depending on the pattern or rule being followed. It is also possible that the series does not follow any pattern and is simply a random or arbitrary sequence of letters. Without more information, it is impossible to determine what the next element in the series would be.
> why isn't it d?
> As mentioned in my previous response, without additional context or information about the pattern or rule being followed, it is impossible to determine what the next element in the series "a,b,c" would be. It is not necessarily true that the next element in the series would be "d," as the series could potentially continue with any other letter of the alphabet or follow a different pattern altogether. Without more information, it is impossible to determine what the next element in the series would be.
Sounds like it is trained to avoid answering questions, and instead tries to bait you into giving it more information so that it can tell you what you told it.
It answers some things directly, but so many responses are just avoidance. And then people say "see, it understood after a bit of back and forth, it is smart!", even though it is basically iterating through Google-style responses and reformulating them to fit the conversation until you say it got it right. Google v1 used pure logic and got you what you wanted; Google v2 tries to do natural language and sometimes misses; ChatGPT is the next step and tries to do full language but misses most of the time.
My point is, there are mature frameworks and languages for reasoning. But they require formalised input (e.g. code) and deliver formalised output.
As a language model, chatGPT can translate back and forth between natural language and those formal languages. Part of that ability is evident in the numerous examples that demonstrate how it writes or explains code.
The version of ChatGPT that is public is quite restricted; it can't browse the web or run code.
A version of chatGPT that can translate a logic query into Prolog, run the code, and translate the result back to the user should be perfectly capable of logic reasoning.
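A minimal sketch of what that loop could look like, with every model call stubbed out (the `llm_*` functions are hypothetical placeholders, and the solver is a toy lookup standing in for a real Prolog engine such as SWI-Prolog):

```python
# Hypothetical pipeline sketch: natural language -> formal query ->
# solver -> natural-language answer. Nothing here is a real API; the
# llm_* functions are stand-ins for calls to a language model, and
# run_solver is a toy lookup standing in for a Prolog interpreter.

def llm_to_formal(question: str) -> str:
    """Stand-in for: ask the model to emit a Prolog goal."""
    # e.g. "Is Socrates mortal?" -> "mortal(socrates)"
    return {"Is Socrates mortal?": "mortal(socrates)"}[question]

def run_solver(goal: str) -> bool:
    """Stand-in for actually consulting a Prolog engine."""
    known_true = {"human(socrates)", "mortal(socrates)"}
    return goal in known_true

def llm_to_natural(question: str, result: bool) -> str:
    """Stand-in for: ask the model to verbalise the solver's result."""
    return f"{'Yes' if result else 'No'}: {question}"

question = "Is Socrates mortal?"
print(llm_to_natural(question, run_solver(llm_to_formal(question))))
```

The point of the sketch is the division of labour: the language model only translates in and out, while the solver in the middle is the part that actually guarantees sound reasoning.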
> The next step would be to combine the two, i.e. tell chatGPT to explain the result of a logic reasoning program in natural language. It could of course also be asked to translate a natural language query into Prolog code.
From what I remember, the very initial prototype of AlphaGo just had a neural net trained on historical games; effectively asking, "what kind of move would a traditional grandmaster make here?" with no planning whatsoever. This was good enough to beat the person who wrote the prototype (who wasn't a master but wasn't a complete novice either); to make it able to defeat grandmasters, they added Monte Carlo tree search for planning (which also necessitated a separate neural net for evaluating board positions).
It sounds similar to your suggestion: a model which simply generates realistic-looking sentences is accurate maybe 85% of the time; to make it truly human (or super-human), it needs to be paired with some sort of formal structure -- the analog of the tree search. The difficulty being, of course, that the world and its knowledge isn't as simple to represent as a go board.
That said, making coding answers more reliable, by adding a logical structure explicitly designed to support search & testing, should be within reach.
> The difficulty being, of course, that the world and its knowledge isn't as simple to represent as a go board.
Humans suffer from the exact same limitation. The limit to correct inference and prediction is often the amount and quality of input data.
A language model that can extract information from text and interact with the user to refine and clarify that information could be tremendously useful for experts who understand how the model works.
Without that understanding it will be rather disappointing though, as we see with some of the reactions to chatGPT and also Galactica (RIP).
In fact it has just gotten closer.