We are 100% going to get 'hey remember when LLMs were pure and not explicitly (or more dangerously: subtly) recommending things' nostalgia in years to come.
There are parallels to early web here I'm sure of it.
I think I'm a little more worried about AI being subtly influenced via its training data -- models can't explain why they emit the tokens they do, and even chain-of-thought / "explain your working" output is similarly made up and hallucination-prone.