>if they're "Semantic Cloud of Words", they're still a hundred thousand dimensional clouds of words, and in those hundred thousand dimensions, any relationship you can think of, no matter how obscure, ends up being reflected as proximity along some subset of dimensions.
Yes, exactly that. That's what GPT-4 is doing, across billions of parameters and many layers stacked on top of one another.
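To make "proximity along some subset of dimensions" concrete, here is a toy sketch in Python. The vectors and dimension count are made up for illustration; real embeddings have hundreds or thousands of dimensions, but the principle is the same.

```python
# Toy illustration: relationships show up as proximity along some dimensions.
# These 6-dimensional vectors are invented for the example.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3, 0.7, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.2, 0.9, 0.1, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" and "queen" agree on most dimensions and differ mainly along
# two of them; "apple" is far away on most of them.
print(cosine(emb["king"], emb["queen"]))  # high similarity
print(cosine(emb["king"], emb["apple"]))  # low similarity
```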
Let me give you one more tangible example. Suppose Stable Diffusion generated images with humans in them in two steps. Step one takes as input an SVG file with some simple lines describing the human anatomy: body position, joints, dots for eyes, etc. Something very simple, xkcd-style. From there, it generates the full human corresponding exactly to the input SVG.
Instead of SD being a single model, it could be a multi-model pipeline, and it should work a lot better in that respect. Every image generator suffers from this problem; human anatomy is very difficult to get right.[1] GPT-4 could function the same way: a pipeline instead of a single model, with the two steps discrete from one another. A rough sketch of the idea follows.
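The closest real-world analogue I know of is ControlNet pose conditioning in Hugging Face diffusers: the "stick figure" there is an OpenPose skeleton image rather than an SVG, but the two-step principle is the same, so treat this as a sketch of the idea rather than the exact proposal. The pose file name is hypothetical.

```python
# Sketch of the two-step idea via ControlNet pose conditioning.
# Stage one fixes the anatomy (skeleton); stage two paints the human.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Stage-one output: a simple skeleton describing body position and joints.
pose_image = load_image("pose_skeleton.png")  # hypothetical local file

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Stage two: generate the full human that matches the skeleton.
image = pipe("a photo of a person walking in a park", image=pose_image).images[0]
image.save("output.png")
```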
So, in some use cases, we could generate some semantic clouds first, and generate syntax and grammar as a second step. And if we don't care that much about perfect syntax and grammar, we feed it to GPT-2, which is much cheaper to run and much faster. When I used the paid GPT-3 service back in 2020, the Ada model was the worst one, but it was the cheapest and fastest. And it was fast; I mean instantaneous.
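In code, the "cheap second step" might look something like this, using the legacy OpenAI completions API roughly as it existed back then. The prompt and the cheap/expensive split are my own illustration, not a tested recipe.

```python
# Sketch: a small, cheap model cleans up syntax and grammar in step two.
# Uses the legacy (pre-1.0) openai library's Completion API.
import openai

openai.api_key = "sk-..."  # your API key

def polish_grammar(semantic_draft: str, cheap: bool = True) -> str:
    # Step one produced a rough "semantic cloud" draft; step two only has
    # to fix syntax and grammar, so a small model like Ada may be enough.
    engine = "ada" if cheap else "davinci"
    response = openai.Completion.create(
        engine=engine,
        prompt=f"Rewrite with correct grammar:\n\n{semantic_draft}\n\nRewritten:",
        max_tokens=128,
        temperature=0.0,
    )
    return response.choices[0].text.strip()
```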
>the very structure of reasoning as humans do it
I don't agree that the machine reasons anywhere close to a human as of today. It will get better over time, of course. In some infrequent cases it comes close. Sometimes it seems like it does, but only superficially, I would argue; upon closer inspection the machine spits out nonsense.
[1] Human anatomy is very difficult to get right the way an artist does. Many, if not all, artists point out that A.I. art has no soul in its pictures. I share the same sentiment.