
AI hype is real, but we ought to start also examining anti-AI-hype-hype. It's become fashionable to rage against AI as a whole with about the same amount of understanding that the MBA hype edgelords have when they push AI as a cure-all, and both are a bad look.

To balm the enraged: look, I agree with you, the hype is indeed out of control. But like, let the vultures spend all their monies. Eventually the bubble itself will go the way of NFTs and we'll all be able to buy GPUs and SSDs again. Hopefully.

That said, there's an important chunk of discourse that gets shouted down and it really shouldn't be. For just a moment, table the issues that come out of "AI as an Everything Replacement" and think of the new things that come out of this tech. On-demand tutors that never tire. An actually viable replacement for search. Large heterogeneous datasets that can now be rapidly parsed, by an individual, for specific insights. Personal dev teams at a fraction of the cost, which now make it possible for people with absolutely bugfuck ideas to actually try them without worrying about wasted time or resources - we are going to see a vibrancy in the world that was not there before.

It is not an unqualified or unmitigated good. Hell, I'll even grant that it may be a net negative - but I don't know either way, and I don't think anyone else does either. Not with any significant confidence. It just feels like we've skipped the part of the discussion where discourse occurs and gone right to "Pro" and "Anti" camps with knives at throats and flushed, sweaty faces.



Two factors here.

1) "Tech bro" AI hype in keynotes and online forums is annoying. It usually contains a degree of embellishment and drama; kinda feels like reality TV but for software developers. Instead of Hollywood socialites, we get Sam Altman and the gang. Honestly, this annoys me but I ignore it beyond key announcements.

2) This hype cycle, unlike NFTs, is putting our economy in serious danger. This gets repeated ad nauseam on YouTube, and while there's some hype on that topic too, the implications are serious and real. I won't go into details, but I restructured my portfolio to harden it against an AI collapse. I didn't want to do that, but I did. I want to retire someday.

Considering point 2, I'd guess some of the anti-AI "hype" is really frustration, since I can't be the only person who's had to do that.


Yeah, I see both those points and really I agree with both. Actually, I think problem 1 is exacerbating problem 2 by a lot - I get just as mad at the postmillennial dudebro with the get-rich-quick-on-AI scam video as I do at the AI-MBAs of the world.

Actually, that's a lie. The MBAs are still worse. They ought to know better at least.

All I'm getting at is that while we put totally legitimate backpressure on the hype cycle, we should at the same time be able to talk about and develop those elements of this new tech that will benefit us. Not "us the tech VCs" (I am not one of them) but "us the engineers and creatives".

Yes it's disruptive. Yes it's already caused significant damage to our world, and in a lot of ways. I'm not at all trying to downplay that. But we have two ways this goes:

- people (individuals) manage to adopt and leverage this tech to their own benefit and the benefit of the commons. Large AI companies develop their models and capture large sectors of industry, but the diffusion of the disruption means that individuals also have been empowered, in many ways that we can't even predict yet.

- people (individuals) fight tooth and nail against this tech, and lose the battle to create laws that would contain it (because let's be honest, our leadership was captured by private interests long ago, and OpenAI / MSFT / Google / Meta have deep enough pockets to buy the legislature). Large AI companies still develop their models and capture whole sectors of industry, but this time they go unchecked because the open, commons-side AI ecosystem has been left fragile and damaged. We learn too late that the window to make use of this stuff has closed: all the powerful stuff is gated behind corporate doors, and the AI laws that DO exist mostly make it impossible to challenge the entrenched powers (kinda like patent law and legal threats to power do now with pre-AI tech - the stuff the EFF is constantly battling).

If we do not begin to steer towards a robust, open conversation about creating and using these models, it's only going to empower the people we are already worried about empowering. Yes, we need to check the spread of "AI in fucking everything". Yes, we need to do something about scraping all data everywhere all the time for free. But if we don't adopt the new weapon in the information space, we'll just be left with digital muskets versus armies of indefatigable robots with heat-seeking satellite munitions. Metaphorically(?) speaking.


> Actually, I think problem 1 is exacerbating problem 2 by a lot

100%. The fear mongering is just there to trigger rallies of investment, both in stock price and in funding. What sounds bad to us - "AI took my jerb!" - sounds great to the C-suite.

I think you might be overestimating the power of AI a little. It's really good at creating flashy things - nice-looking videos and code - but the reasoning and logic are still lacking. I don't see it replacing human oversight anytime soon.


Oh, neither do I. We see eye to eye on this point - it isn't good at the things people have learned to be good at, and that's a good thing.

What it excels at is empowering people with good ideas about architecture and function to explore them without being burdened by SCRUM, or managers, or other such trappings of large orgs. A solo dev, who has a hot take on a new way to structure a cluster or iterate on a dev tool, can just throw the pasta rather than spend tons of time nitpicking boilerplate and details with a team of 10. Someone who uses computers a lot but doesn't know how to do specific thing x or y can now discover that in seconds, with full documentation and annotations and (most importantly) links to relevant non-AI learning material.

What I feel like people are getting wrong most is this idea that AI is coming for your job and is going to be a powerslave to the MBA types, who can then kick the engineers out of the picture. It's not happening (if anything, enabling smaller teams to get more done is going to deprecate the large org everywhere it isn't actually needed). That's the bubble, and while gargantuan amounts of money go to these AI startups, it's all going to fall on its face when they realize that what AI allows us to do is bootstrap good projects without megalith VC bucks.



