> For simple tasks where we would alternatively hire only ordinary humans AIs have similar error rates.
Yes, if a task requires deep expertise or great care, an AI is a bad choice. But many tasks don't, and for those tasks even ordinary humans are already too expensive to be economically viable.
Do you have good examples of tasks for which a dubious verbal response would be an acceptable outcome?
By the way, I noticed:
> AI
Do not confuse LLMs with general AI. Notably, AI has also been deployed in systems where critical failures would be intolerable - i.e., it was made to be reliable, or made part of a process that is reliable overall.