Yeah, I saw that, but I'm not sure how accurate that is. A few large apps/companies I know to be 100% on AWS in us-east-1 are cranking along just fine.
It's easy: we have reached AGI when there are zero jobs left. Or at least zero non-manual-labor jobs. If there is a single non-physical job left, then that person must be doing something that AI can't, so by definition it's not AGI.
I think it'll be a steep sigmoid function. For a long time it'll be a productivity booster, but without enough "common sense" to replace people. We'll all laugh about how silly it was to worry about AI taking our jobs. Then some AI model will finally get over that last hump, maybe 10 or 20 years from now (or 1000, or 2), and it will be only a couple of months before everything collapses.
I dislike your definition. There are many problems besides economic ones. If you define "general" to mean "things the economy cares about", then what do you call the sorts of intelligences that can do things the economically relevant ones can't?
A specific key opens a subset of locks; a general key would open all locks. General intelligence, then, can solve all solvable problems. It's rather arrogant to suppose that we humans have it ourselves, or that we can create something that does.
It also partitions jobs into physical and intellectual aspects alone. Lots of jobs have huge emotional/relational/empathetic components too. A teacher could get by being purely intellectual, but the really great ones have motivational/inspirational/caring qualities that an AI never could match. Even if an AI says the exact same things, it doesn't have the same effect, because everyone knows it's just an algorithm.
And most people get by in those jobs by faking the emotional component, at least some of the time. AGI could presumably fake it perfectly and never burn out.
Have a long talk with any working teacher or therapist. If you think the regular workload is adequate for them to offer enough genuine emotional support to all the people they work with, always, every day, regardless of their personal circumstances, you're mistaken. Or the person you're talking with is incredibly lucky.
It doesn't have to be much, or intentional, or even good for that matter. My kids practice piano because they don't want to let their teacher down. (Well, one does. The other is made to practice because WE don't want to let the teacher down.)
If the teacher was a robot, I don't think the piano would get practiced.
IDK how AI gains that ability. The requirement is basically "being human". And it seems like there's always going to be a need for humans in that space, no matter how smart AI gets.
Something still feels off if the formal proof can't be understood. I don't dispute its correctness, but there's a big jump from the four color theorem, where at least mathematicians understood the program, to this, where GPT did the whole thing. If GPT ceased to exist, nobody would have a clue how to recreate the formalization. Or maybe there's a step in there that's a breakthrough for some other problem, but since it was generated, we'll never notice it.
The computer-assisted component of the Noperthedron proof is a reasonably small SageMath program that was (as far as I know) written by humans: https://github.com/Jakob256/Rupert
Perhaps you have confused this article with a recent unrelated announcement about a vibe-coded proof of an Erdos conjecture? https://borisalexeev.com/pdf/erdos707.pdf
Oops you're right! I read these both yesterday and they blended together in my memory by the time I made this comment this morning. I knew something felt "off".
Tangentially, I'll have to reconsider my position on LLMs with long but lossy contexts.
I wonder if append-only will continue to be important. As agents get more capabilities, their actions will likely be the bottleneck, not the LLM itself. And given that attention cost scales roughly with n², recomputing a whole new context might not take much longer than just computing the delta, and could even save time if the new context is shorter.
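Rough numbers, just to convince myself (everything below is made up and only counts the quadratic attention term; the linear per-token MLP work actually favors appending):

    # Back-of-envelope only: hypothetical lengths, attention term only.
    # The point is the rough scaling, not a benchmark.

    def attention_units(prefix_len: int, new_tokens: int) -> int:
        """Rough attention work for processing new_tokens on top of a
        cached prefix of prefix_len tokens: each new token attends to
        everything before it."""
        total = prefix_len + new_tokens
        return total * total - prefix_len * prefix_len

    n = 100_000         # current context length (hypothetical)
    delta = 2_000       # tokens added by the next agent step (hypothetical)
    compacted = 20_000  # length of a rewritten, shorter context (hypothetical)

    append_cost = attention_units(n, delta)         # keep the KV cache, append
    recompute_cost = attention_units(0, compacted)  # throw it away, re-prefill

    print(f"append delta:      ~{append_cost:.1e}")     # ~4.0e+08
    print(f"recompute shorter: ~{recompute_cost:.1e}")  # ~4.0e+08

With those (made-up) sizes the two come out comparable, and the agent's tool calls would dwarf either one in wall-clock time anyway.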
True. Generally it seems like you're visualizing things, moving stuff around, seeing vague patterns and trying to make them clearer. IDK how a transformer architecture would fit all of that in its context, or use it productively once it's there. You can't just keep appending forever, but you can't delete stuff either, because unlike with humans, a deletion is a hard delete; there's no fuzzy remembrance left to rely on, so even deleting bad ideas is dangerous, because the model will forget it was a bad idea and go into an infinite loop. Symbol manipulation doesn't come until the end, after you have a good idea of what that part will look like.
Hmm, I wonder what happens if you let them manipulate their own context symbolically, maybe something like a stack machine. Perhaps all you need is a "delete" token, or a "replace" flag. That way the context doesn't fill up with irrelevant information.
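Something like this toy sketch, maybe (the token names and editing rules are entirely made up, just to make the idea concrete):

    # Toy context editor: the model emits a stream of items, and special
    # control tokens edit the context in place instead of only appending.
    # Hypothetical tokens and rules; nothing here is a real API.

    DELETE = "<del>"       # drop the most recent context entry
    REPLACE = "<replace>"  # swap the most recent entry for the next item emitted

    def apply_stream(stream):
        """Fold an emitted stream into an editable context (a stack)."""
        context = []
        replace_pending = False
        for item in stream:
            if item == DELETE:
                if context:
                    context.pop()
            elif item == REPLACE:
                replace_pending = True
            elif replace_pending:
                if context:
                    context[-1] = item
                else:
                    context.append(item)
                replace_pending = False
            else:
                context.append(item)
        return context

    # The model explores a dead end, prunes it, then refines its next idea:
    stream = ["goal: prove lemma", "try induction", "<del>",
              "try contradiction", "<replace>", "try contradiction on n"]
    print(apply_stream(stream))
    # ['goal: prove lemma', 'try contradiction on n']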
I guess the challenge is, where would the training data come from? Data on the internet is in its final form, so "next token" is never a delete.
Edit: I guess, in essence, that's what reasoning LLMs already do. IIUC the thought blocks are ephemeral, and only the response is maintained for the chat. Maybe there'd be some benefit to doing this recursively? But that's also kind of what subagents are for. So, perhaps nothing new here.
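For concreteness, the pattern I mean looks roughly like this (a generic toy with a stubbed model, not any particular vendor's API):

    # Sketch of the "ephemeral reasoning" pattern: the thinking text is
    # produced and used within a turn, but only the final answer is carried
    # forward into the chat history.

    from dataclasses import dataclass

    @dataclass
    class Turn:
        thinking: str  # scratch work, discarded after the turn
        answer: str    # the only part that persists

    def stub_model(history, user_msg):
        # Stand-in for a real reasoning model.
        return Turn(thinking="(long chain of thought...)",
                    answer=f"Answer to: {user_msg}")

    history = []

    def chat(user_msg):
        turn = stub_model(history, user_msg)
        history.append({"role": "user", "content": user_msg})
        history.append({"role": "assistant", "content": turn.answer})
        # turn.thinking is dropped here; the context never accumulates it.
        return turn.answer

    chat("first question")
    chat("follow-up")
    # history now holds only the questions and final answers.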