The sense was that it got a lot better late last year. There hasn't been enough time to make this call holistically, from development to market reaction to turndown. It also doesn't address the Jevons paradox take that with more people and AI, they can get even more done. Every place I've worked has had an idea and tech debt backlog far deeper than the company's capacity to clear it.
An external tech appears that eliminates the need for human labor. But a CEO acknowledging its arrival and adjusting for it is a 'massive failure of leadership'?
Oh really? You think an entity that knows everything, oversees its own development and upgrades itself, understands human psychology perfectly and knows its users intimately, but isn't aligned with human interest wouldn't be 'much of a threat'?
Or to be more optimistic, that the same entity directed 24/7 in unlimited instances at intractable problems in any field, delivering a rush of breakthroughs and advances wouldn't be a type of 'salvation'?
Yes, neither of these outcomes, nor the self-updating omniscient genius itself, is certain. Perhaps there's some imminent wall we can't see right now (though it doesn't look like it). But the rate of advance in AI is so extreme that it's only responsible to try to avoid the darker outcome.
There should always be web fallbacks for those opposed to smartphones or unable to use them. Insisting the elderly use them for any type of bureaucratic flow results too often in stress and confusion.
If Wagner's rapturous welcome in Rostov was typical elsewhere, he wouldn't have needed to run a military dictatorship.
He seemed to lose his nerve after his call with Lukashenko. It's tough to imagine it being anything other than a threat to his family, in which case it was a schoolboy error to fail to secure them before launching the rebellion.
The elimination of jobs necessarily 'makes a path' to a post-work society. Post-work couldn't exist without it. Beyond that, it isn't in AI companies' power to shape economies and societies for post-work (which is what I assume you're really getting at here). All Altman, Amodei, Hassabis and the others can do is alert policymakers to what's coming, and they're trying pretty hard to do that, aren't they? - often in the teeth of the skepticism we see so much of on this site. Really, if policymakers won't look ahead, the AI companies can't be blamed for the bumps we're going to hit.
Do you really pay so little attention to the space that you think this is all they do? Almost every public discussion or interview involving these figures turns at some point to society's unpreparedness for what's coming, for instance Amodei's interview last week.
How do these interviews magically make the hard economics of UBI viable? Read up on UBI a little and you'll quickly realize that it's far more expensive than universal healthcare, and we can't even get our politicians on board with that.
That's uncertain in a post-work economy, and even for the transition to it. Some mechanism will need to exist to distribute the abundance resulting from automation fairly, along with measures to ensure production of essential goods that might otherwise disappear under deflation. This is all out of scope for AI companies, unless you fancy putting off a response until full automation and anointing them as (fingers crossed) benign dictators for life?
Yes, these people are publicly warning about the risks of AI. Altman is promoting regulation that clearly favors OpenAI. This is called regulatory capture. It aims to strengthen one's own position. Furthermore, the claim that these companies cannot shape economies is simply false. They decide how quickly they deploy, which industries they automate, whether they cooperate with unions, etc. These are all decisions that shape the economy.
Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill. You would have to be from another planet (or a sociopath) not to understand that this violates boundary conditions that we implicitly want to leave intact.
> They decide how quickly they deploy, which industries they automate, whether they cooperate with unions, etc. These are all decisions that shape the economy.
They control how quickly they deploy, but I don't see how they control the rest: "which industries they automate" is a function of how well the model has generalised. On all the medical information, laws and case histories, and all the source code, the models are still only "OK". And how are they, as a model provider in the US, supposed to cooperate (or not) with a trade union in e.g. Brandenburg whose bosses are using their services?
> Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill.
Certainly what I fear.
Any given UBI is only meaningful if it is connected to the source of economic productivity; if a government is offering it, it must control that source; if the source is AI (and robotics), that government must control the AI/robots.
If governments wait until the AI is ready, the companies will have the power to simply say "make me"; if the governments step in before the AI is ready, they may simply find themselves out-competed by businesses in jurisdictions whose governments are less interested in intervention.
And even if a government pulls it off, how does that government remain, long-term, friendly to its own people? Even democracies do not last forever.
I want to live. And if you threaten my life, I will defend myself with whatever means I have at my disposal. It makes no difference whether you threaten me by taking away my livelihood or by withholding it from me. You therefore have a choice. Either you value my life as you value your own, or there will be war between us. And that is a war you will not win, because you are not only waging it against me, but against all people whose right to life you wish to deny.
Notwithstanding that I do not believe he is competent, Musk is currently talking about turning the entire moon into a space data center factory, specifically with a capacity so large that the resulting products of said factory could freeze the tropics just by blocking out the sun.
It is fortunate for him that those of us who understand the implications of this, do not believe he can do it.
Do you believe he could do it? Would you act against him now, when most people think his success in this endeavour is implausible? Or wait until he demonstrates all the parts necessary, at which point action against him is impossible? Or do you believe his claims that him doing this will render work unnecessary rather than, as I fear, making it impossible without also making it unnecessary?
What about everyone else that you think would be on your side? If you need everyone on-side, timing matters too.
Sorry, man, but I can't follow the plot. Why exactly do data centers from the moon block out the sun and freeze the tropics and make work unnecessary? Serious question: Are you okay? I hope you're just making fun of my last answer a little.
> Why exactly do data centers from the moon block out the sun
Musk wants to build a data center *factory* on the Moon, producing satellites at a rate of 1000 TW of capacity per year, which are (supposedly) going to be launched from the Moon.
I have done the maths on this (and suspect Musk used Grok for the plan): those numbers are on the edge of what's plausible given the thermodynamic limits of rearranging atoms, even with engineering that nobody has actually designed yet. But let's disregard my mere opinion that this is beyond him and say he solves all those technical difficulties.
If you built that much each year, then given how long each satellite lasts, the physical size of that many watts of PV-powered satellites is enough to block enough sunlight to lower the average temperature of planet Earth by 33°C immediately, before accounting for any additional effects from ice reflecting more light than unfrozen land and water. Those feedback mechanisms could plausibly push the cooling to more like 48°C.
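For intuition, here's my own back-of-envelope sketch (not Musk's figures) of the relationship between blocked sunlight and Earth's effective radiating temperature, using the Stefan-Boltzmann law with standard textbook values for the solar constant and Bond albedo. It ignores the greenhouse effect and all feedbacks, so treat it as a zeroth-order estimate only:

```python
# Zeroth-order estimate: what fraction of sunlight must be blocked
# to cool Earth's effective (blackbody) temperature by a given amount?
# S0 and ALBEDO are standard textbook values; no greenhouse or
# ice-albedo feedback is modelled.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W/m^2
ALBEDO = 0.30      # Earth's approximate Bond albedo

def effective_temp(blocked_fraction: float) -> float:
    """Effective radiating temperature (K) with a fraction of sunlight blocked."""
    # Average absorbed flux per m^2 of surface: divide by 4 (sphere vs. disc).
    absorbed = S0 * (1.0 - blocked_fraction) * (1.0 - ALBEDO) / 4.0
    return (absorbed / SIGMA) ** 0.25

t0 = effective_temp(0.0)                 # baseline, roughly 255 K
blocked = 1.0 - ((t0 - 33.0) / t0) ** 4  # fraction needed for a 33 K drop
print(f"baseline T_eff ~ {t0:.0f} K; blocking ~ {blocked:.0%} of sunlight cools it by 33 K")
```

Under these assumptions, a 33°C drop corresponds to blocking a large double-digit percentage of incoming sunlight; the real response would differ once feedbacks and the greenhouse effect are included.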
> and make work unnecessary
Note: I am not making that claim, Musk is. Musk doesn't have a good answer to this, just vague platitudes about how AI can do all the work, not why his AI and his robots are going to give everyone (and not just his fans) luxury.
> Serious question: Are you okay?
No. I see the world's richest man sowing chaos and demanding the removal of all checks on his plan to gain even more power, both by political campaigning and by using phrases such as "robot army" within his own companies. And when his AI calls itself "Mecha Hitler", the military of the world's largest economy decides to pay for its use, then goes on to make threats against competing AI companies that don't want to be involved with the military.
We are living through a time that seems like a completely crazy sci-fi plot. I don't understand why Musk is currently the richest person in the world. I don't understand what is going on politically, especially in the US and around the world, geopolitically, economically, socially, and in terms of information technology. It's as if the world I've known for the first two-thirds of my life has completely drifted away into an absurd alternate reality. It takes a bit of effort for me to keep a clear head.

What I can say with some certainty is that someone who actually intends to do what Musk is announcing would behave differently in many ways than Musk does. Musk is ultimately a (rather successful) impostor. I assume that his communication is aimed at eliciting certain reactions from the public and is less about negotiating plausible realities on a factual level. That's why I'm not so interested in playing out scenarios based on the content of his grandiose announcements. I am more concerned about the destabilizing effect and about a third world war, which we may already be in the midst of.
The earlier question is: when do you decide to believe someone like him? When do you act against someone like him whom you do believe? Waiting until he is credible is waiting too late; acting before then makes you look like the villain, and you don't get much support.
What do you mean? I have the day off today. I'm sitting here in my underwear listening to my washing machine in the background. The sun is shining outside. I went for a walk in the park next door earlier. In an hour (Germany time), I'll cook something for lunch and then go to the garage to put a new rear tire on my motorcycle. Tomorrow, sauna; Sunday, bike ride; and on Monday, back to the office. What I'm trying to say is: I'm not the protagonist whose decision determines whether Musk f*ks up the world or not. And that's not a question of my priorities, but of a realistic assessment of the real scope for action.
If you want to have a real chance of putting someone like Musk in his place, you need to join the largest possible political collective with the right agenda. But looking at the course of the conversation, my respectful recommendation (assuming you're not trolling) would be to focus on your own well-being first.
> I want to live. And if you threaten my life, I will defend myself with whatever means I have at my disposal. It makes no difference whether you threaten me by taking away my livelihood or by withholding it from me. You therefore have a choice. Either you value my life as you value your own, or there will be war between us. And that is a war you will not win, because you are not only waging it against me, but against all people whose right to life you wish to deny.
Like, OK, is that just you blowing off steam or do you have a specific threshold where you'll do anything?
Okay, I understand. The person who wrote the parent post seems to believe that people do not fundamentally have a right to survive, but must assert and maintain this claim transactionally in a market context.

I think that every person has an intrinsic and incommensurable right to survive, and that this right also includes the right to defend oneself when the right to life is questioned or even endangered by others, not only through actions but also through omissions. For example: I must help you in an emergency, and you must help me in an emergency. I must not let you starve, and you must not let me starve. In a good society, these things are regulated institutionally. In this way, individuals are not burdened with the corresponding moral dilemma.

The question of who pays for me to live and why they should do so points in the opposite direction: it suggests that this question needs to be clarified and that I (or any other person) should simply die if I cannot afford to live. I wanted to express that there is an ideological conflict here that could well take on the character of a war, and that my side does not consist of peace-loving hippies, but of people who are prepared to defend themselves very effectively against such a misanthropic ideology.
> do you have a specific threshold where you'll do anything?
This conflict is not fought only once a certain threshold has been reached, but from the outset and continuously, in political struggles, in the struggle for social values and prevailing ethics, etc. Only when there is really no other option is it fought with fists and weapons. If you ask me specifically when the masses will storm the palaces of people like Musk with pitchforks, I can't answer that. For myself, I can say that I still see a lot of scope for political action within the legal frameworks that have been established (at least here in Europe). After World War II, there was a comprehensive redistribution policy throughout the Western world (especially in the US) that we could certainly repeat: top tax rates above 90%, enormous power for trade unions, a rapidly growing middle class, and historically low income concentration. The constraints are different today than they were then, but the only thing that is really necessary is the willingness to put things that are currently upside down back on their feet.
Post-work? Is this from the same lot who can't work from an office because they'd have a nervous breakdown? Who exactly pays for my existence in this world where I don't have to work?