Total compensation involves more than just wages. Providing benefits such as healthcare coverage is inherently expensive, since productivity gains in healthcare have been limited.
At least in the United States, we are not getting this benefit.
If AI does begin to really crater the job market, only owners of AI (yes, including shareholders) will benefit, but most folks do not own stock, or at least do not own any significant amount of it.
That's not such an ironclad argument, lmao. If we are to believe Baumol's cost disease, rising productivity in other sectors is partly responsible for rising healthcare costs: as productive sectors bid up wages economy-wide, sectors with stagnant productivity, like healthcare, have to raise pay too, so their relative prices climb.
Obviously I don't seriously believe we should depress productivity so that nurses make less money and hospital stays are cheaper. But, you know, it doesn't make it untrue.
The people you are replying to are trying to have a meaningful discussion by providing references and some basic argumentation. Can you add some links or arguments that explain your point of view more convincingly, instead of making strong assertions ('misinformation', 'debunked', 'nonsensical') without any trace of argumentation and no references at all?
Railroads need repair too? Not sure if it's every 4 years. Also, the trains I take to/from work are super slow because there is no money to upgrade.
I think we may not upgrade every 4 years, but instead upgrade when the AI models are not meeting our needs AND we have the funding & political will to do the upgrade.
Perhaps the singularity is just a sigmoid with the top of the curve being the level of capex the economy can withstand.
For what it's worth, they cost a lot less than highways to maintain. Something like the 101 in the Bay Area runs about $40,000 per lane-mile per year, or about $240,000 per mile across six lanes.
Rail is closer to $50,000-$100,000 per mile per year.
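As a rough sketch of that comparison (the six-lane figure is my assumption, inferred from $240,000 / $40,000):

    # Rough annual maintenance cost per mile (figures from the comment above).
    highway_per_lane_mile = 40_000          # $/lane-mile/yr, e.g. US-101
    lanes = 6                               # assumed: 240,000 / 40,000
    rail_low, rail_high = 50_000, 100_000   # $/mile/yr range for rail

    highway_per_mile = highway_per_lane_mile * lanes   # $240,000/mile/yr
    print(f"Highway: ${highway_per_mile:,}/mi/yr; rail: ${rail_low:,}-${rail_high:,}/mi/yr")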
If there's no money for the work it's a prioritization decision.
This is the thing with AI: We can always come up with a new architecture with different inputs & outputs to solve lots of problems that couldn't be solved before.
People equating AI with other single-problem-solving technologies are clearly not seeing the bigger picture.
> Yes, and where do you suppose experienced developers come from?
Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
Don't get me wrong, it will take huge social upheaval to replace the current economic system.
But at least it's an honest assessment -- criticizing the humans that are using AI to replace workers, instead of criticizing AI itself -- even if you fear biting the hands that feed you.
> criticizing the humans that are using AI to replace workers, instead of criticizing AI itself
I think you misunderstand OP's point. An employer saying "we only hire experienced developers [therefore worries about inexperienced developers being misled by AI are unlikely to manifest]" doesn't seem to realize that AI is what produces those inexperienced developers in the first place. In particular, using AI to learn the craft will not let prospective developers learn the fundamentals that would help them understand when the AI is being unhelpful.
It's not so much to do with roles currently being performed by humans instead being performed by AI. It's that the experienced humans (engineers, doctors, lawyers, researchers, etc.) who can benefit the most from AI will eventually retire and the inexperienced humans who don't benefit much from AI will be shit outta luck because the adults in the room didn't think they'd need an actual education.
1. How it's gonna be used, and how it'll be a detriment to quality and knowledge.
2. How AI models are trained, with great disregard for consent, ethics, and licenses.
The technology itself, the idea, what it can do, is not the problem; how it's made and how it's gonna be used will be a great problem going forward, and none of the suppliers say it should be used in moderation or admit it will be harmful in the long run. Plus, the same producers are ready to crush/distort anything to get their way.
... which smells very similar to the tobacco/soda industries. Both created faux-research institutes to further their causes.
Data centers account for around 2% of global electricity demand now. I’m not sure we can really say that AI, which represents a fraction of that, constitutes a huge environmental problem.
An NVIDIA H200 draws around 2.3x the power (700W) of a Xeon 6748P (300W). You generally put 8 of these cards into a single server, which adds up to 5.6 kW just for the GPUs. With losses and other support equipment, that server uses ~6.1 kW at full load, around 8.5x a CPU-only server (assuming 700W or so at full load).
Considering HPC clusters are half CPU and half GPU (more like 66% CPU and 33% GPU, but I'm being charitable here), I'd expect an average power draw of ~3.4 kW per server in a cluster. Moreover, most of these clusters run targeted jobs; prototyping and trial runs use far more limited resources.
On the other hand, AI farms run all these GPUs at full power almost 24/7, both for training new models and for inference. Before you ask: if you have a GPU farm for training, buying inference-focused cards doesn't make sense, because NVIDIA cards can be partitioned with MIG. You can set aside some of the training cards, split each into 6-7 instances, and run inference on them, yielding ~45 virtual cards for inference per server, again at ~6.1 kW load.
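A quick back-of-the-envelope sketch of that power math (wattages are the approximate figures above, not measurements; the 50/50 CPU/GPU split is the charitable assumption from the comment):

    # Back-of-the-envelope power comparison, using the approximate figures above.
    GPU_W = 700            # NVIDIA H200, approx. full-load draw
    GPUS_PER_SERVER = 8
    CPU_SERVER_KW = 0.7    # assumed full-load draw of a CPU-only server

    gpus_only_kw = GPU_W * GPUS_PER_SERVER / 1000   # 5.6 kW just for the GPUs
    gpu_server_kw = 6.1                             # kW with losses and support gear

    print(f"GPU server vs CPU-only server: {gpu_server_kw / CPU_SERVER_KW:.1f}x")  # ~8.7x

    # Charitable 50/50 split between CPU-only and GPU servers in an HPC cluster
    avg_kw = 0.5 * gpu_server_kw + 0.5 * CPU_SERVER_KW
    print(f"Average draw per server: {avg_kw:.1f} kW")  # ~3.4 kW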
Data centres in general are an issue that contributes to climbing emissions; two percent globally is not trivial. And it's "additional" demand on top of what we had a decade and more ago, another sign that we are globally increasing demand.
Emissions aside, many data centres (and the associated bitcoin-mining and AI clusters) are a significant local issue because of their demands on local water and energy supplies.
> Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
This has pretty consistently been my viewpoint, and many others', since 2023. We were assured many times over that this time it would be different. I found this unconvincing.
> I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
Something very similar can be said about the issue of guns in America. We live in a profoundly sick society where the airwaves fill our ears with fear, envy and hatred. The easy availability of guns might not have been a problem if it didn't intersect with a zero-sum economy.
Couple that with the unavailability of community and social supports and you have a recipe for disaster.
> In this world people become more like pests. They offer no economic value yet demand that AGI owners (wherever publicly or privately owned) share resources with them. If people revolted any AGI owner would be far better off just deploying a bioweapon to humanely kill the protestors rather than sharing resources with them.
This is a very doomer take. The threats are real, and I'm certain some people feel this way, but eliminating large swaths of humanity is something dictatorships have tried in the past.
Waking up every morning means believing there are others who will cooperate with you.
Most of humanity has empathy for others. I would prefer to have hope that we will make it through, rather than drown in fear.
>but eliminating large swaths of humanity is something dictatorships have tried in the past.
Technology changes things, though. Things aren't "the same as it ever was". The Napoleonic Wars killed 6.5 million people with muskets and cannons. The total warfare of WWII killed 70 to 85 million people with tanks, heavy bombers, aircraft carriers, and 36 kilotons (TNT equivalent) of atomic bombs, among other weaponry.
Total war today includes modern thermonuclear weapons. In 60 seconds, just one Ohio-class submarine can launch 80 independent warheads totaling over 36 megatons of TNT equivalent. That is over 20 times more than all the explosives used by all sides over all of WWII, including both atomic bombs.
AGI is a leap forward in power equivalent to what thermonuclear bombs are to warfare. Humans have been trying to destroy each other for all of time but we can only have one nuclear war, and it is likely we can only have one AGI revolt.
I don't understand the psychology of doomerism. Are people truly so scared of these futures that they are incapable of imagining an alternate path where anything less than total human extinction occurs?
Like if you're truly afraid of this, what are you doing here on HN? Go organize and try to do something about this.
I don’t see it as doomerism, just realism. Looking at the realities of nuclear war shows that it is a world-ending holocaust that could happen by accident, or by the launch of a single nuclear ICBM by North Korea, and there is almost no chance of de-escalation once a missile is in the air. There is nothing to be done, other than advocate for nuclear arms treaties in my own country, but that has no effect on Russia, China, North Korea, Pakistan, India, or Iran. Bertrand Russell said, "You may reasonably expect a man to walk a tightrope safely for ten minutes; it would be unreasonable to do so without accident for two hundred years." We will either walk the tightrope for another 100 years or so, until global society progresses to where there is nuclear disarmament, or we won’t.
It is the same with Gen AI. We will either find a way to control an entity that rapidly becomes orders of magnitude more intelligent than us, or we won’t. We will either find a way to prevent the rich and powerful from controlling a Gen AI that can build and operate anything they need, including an army to protect them from everyone without a powerful Gen AI, or we won’t.
I hope for a future of abundance for all, brought to us by technology. But I understand that some existential threats only need to turn the wrong way once, and there will be no second chance ever.
I think it's a fallacy to equate pessimistic outcomes with "realism".
>It is the same with Gen AI. We will either find a way to control an entity that rapidly becomes orders of magnitude more intelligent than us, or we won’t. We will either find a way to prevent the rich and powerful from controlling a Gen AI that can build and operate anything they need, including an army to protect them from everyone without a powerful Gen AI, or we won’t
Okay, you've laid out two paths here. What are *you* doing to influence the course we take? That's my point. Enumerating all the possible ways humanity faces extinction is nothing more than doomerism if you aren't taking any meaningful steps to lessen the likelihood any of them may occur.
> This is a very doomer take. The threats are real, and I'm certain some people feel this way, but eliminating large swaths of humanity is something dicatorships have tried in the past.
Tried, and succeeded. In times when people held more power than they do today. Not sure what point you're trying to make here.
> Most of humanity has empathy for others. I would prefer to have hope that we will make it through, rather than drown in fear.
I agree that most of humanity has empathy for others — but it's been shown that the prevalence of psychopaths increases as you climb the leadership ladder.
Fear or hope are the responses of the passive. There are other routes to take.
It's a hypothetical deployment but it's reasonable to expect. These robots will be very valuable, and everyone will want one. It's not going to become a housemaid in a few years. But will they be making car parts? Almost certainly. Moravec's paradox is still in play, but advancement in AI chips will slowly overcome it.
> But will they be making car parts? Almost certainly.
Worth calling out that Hyundai is a major investor in Boston Dynamics.
FTA: This journey will start with Hyundai—in addition to investing in us, the Hyundai team is building the next generation of automotive manufacturing capabilities, and it will serve as a perfect testing ground for new Atlas applications.
> But will they be making car parts? Almost certainly.
I believe robots are currently making car parts in abundance. The robots usually are like a box with a hydraulic arm or something equivalent.
The especially hard part of humanoid robots is justifying the cost and complexity of their construction by having them be "walk-on replacements" for humans, and they have failed entirely at being that.
Reading that line made me cringe. Memorizing APIs comes in handy for interviews, or perhaps for fixing bugs in production, but not so much for day-to-day work.
Now, math equations or the minute details of data structures and algorithms... those can be hard to internalize, and knowing them well is very helpful when reading papers or diving into open source projects that make critical use of advanced, specialized knowledge.
> Memorizing APIs comes in handy for interviews, or perhaps for fixing bugs in production, but not so much for day-to-day work.
Some interviews, maybe. As you say, it doesn't matter much for day-to-day work -- and so I put no weight on it when rating interviews either. If the candidate remembers the right method names, great. If they don't, who cares.
I always make a point of explaining that I'm not testing them on API memorization.
Both. Tech companies are ravenous for anyone and everyone who clears the bar; “choose the best from N applicants” is not a model for the hiring process.
Was it Seneca who mocked this line of thinking? If I remember right he mocked it by saying either
1) a good man is a good man and thus equal to other good men
Or
2) you proceed through so many qualifications and "but what if this guy was prettier or had a nicer tone of voice than the other good man, all else equal" etc., until you admit that you include minute details like the exact placement of every hair follicle on some dude's head in your proposed total ordering of humanity
I'll admit there's probably an argument against that method of evaluation, but I'm not exactly blown away by Seneca's arguments.
In particular, I'm not convinced the evaluation must extend from arguably relevant features to obviously irrelevant features. Even if I'm ultimately wrong, I can mount a defensible argument that memory is relevant to programming ability. I do not see a way to mount a defensible argument that (for example) facial features are relevant to programming ability.
Here's one - attractiveness is roughly correlated with intelligence, and in the general case positive traits tend to correlate. An interview is an attempt to extract the maximum amount of information about a candidate in a short space of time. Virtually any positive trait is evidence that somebody will be a better candidate. Weak evidence, yes - but if you have two otherwise completely identical candidates (a silly hypothetical) it's not illogical to choose the one with better hair follicles (a silly outcome of a silly hypothetical).
His pretty face might leave your superiors with a better impression of your group. A friend manages a software group and actually told me that one of the best things to increase chances of getting a job with her group would be to pay more attention to my appearance and smile more. Also mentioned that just being interpersonally nice was much more important than actual abilities in her organization as long as you were good enough that the owners believed you knew what you were doing and she could justify keeping you there. Etc
Hopefully the candidates realize that the interview is only the first of many shitty zero-sum games, and they opt for growing positive-sum companies instead.
Improving oneself is a good frame for this. There are so many important and subtle and deep aspects of the craft to improve at. I want to work with people who recognize this and allocate their self-improvement efforts wisely.
Studying API trivia to show off in interviews is about as far off as it gets. So probably I’d choose the candidate who hadn’t done that. I’d rather risk a lazy coworker than one who fixates (even if competently) on the wrong problems. I’ve seen coworkers with that trait seriously drag down a team with low-signal nitpicking on others’ pull requests and designs, for example.
API knowledge is ideally a consequence of familiarity with the craft. Sort of like how vocabulary is a consequence of reading, writing, and interacting with well-read people. I’d rather have a conversation with someone who reads a moderate amount than with someone who decided to memorize the dictionary. The former knows fewer words, but I expect they’ll have more interesting things to say, even if none come across in the first 30 seconds.
Yes, studying API trivia doesn't have much use. That stuff doesn't need to be memorized. Ideally you memorize the stuff that is important to you and likely to be useful.
Let's use your example of memorizing the dictionary of some foreign language X. That person will be able to read faster and more efficiently, without getting stuck on some unknown word and reaching for the dictionary every 2 minutes. At that point the person is simply reading, focusing their brain on the content itself.
Likewise with memorizing programming material: the person can now simply code, freeing their brain to focus on the problem, and consequently has more time to code a moderate amount and build familiarity with the craft. I think this is the so-called 10x engineer the author mentioned.
This relates to my most hated interview question: "If someone else can do what you do but they also have X quality, why should we pick you?" Which always makes me want to contemptuously exclaim, "Well obviously you shouldn't, fool!"
I have never actually gotten a job where the interview had a variation of that question.
on edit: changed explain to exclaim, don't think it significantly changed the meaning in context however.
Remembering stuff is important, but in an interview all I expect is that the candidate can remember what we discussed 10-20 minutes ago (in the same interview), that they use credible-looking syntax that's self-consistent, and that they can explain anything they wrote.
I also give a problem that doesn't use a lot of library functions.
Why should workers care about being more productive if they do not reap the rewards in terms of wages?
https://en.wikipedia.org/wiki/Decoupling_of_wages_from_produ...