
While we’re at it let’s encourage people to stop making music and peanut butter as well.


Your point?


dinvald said, “The problem is, of course, these systems are fundamentally incapable of human-level intelligence and cognition. There will be a lot of wasted effort in pursuit of this unreachable goal (with LLM technology), an effort better spent elsewhere, like solving cancer or climate change, and stealing young and naive people’s minds away from these problems.”

In response I said, “While we’re at it let’s encourage people to stop making music and peanut butter as well.”

Then dinvald said, “Your point?”

Can you explain to dinvald what my point was?

ChatGPT:

Absolutely. Your response is a form of reductio ad absurdum, a strategy where one extends an argument to an absurd extreme to point out its flaws. In this case, you are challenging dinvald's claim that efforts towards developing Large Language Models (LLMs) are a waste of resources that could be better spent on solving other problems.

By suggesting we also discourage people from making music and peanut butter, you are demonstrating that dinvald's argument could be applied to any human endeavor that isn't directly tied to solving major global issues. Your point is that not every pursuit needs to be measured against solving the world's largest problems, like cancer or climate change. You imply that just like music or peanut butter, AI development also has its place and value.


The problem is not with the existence of LLMs, it's with how they're marketed - as "reasoning" engines, as something that can "think". There's no cognition in them, and the information they produce is very unreliable.

If you want to eat a peanut butter sandwich that's marketed as a panacea that can cure all diseases, while not even knowing if the butter has long expired, then all props to you, but I don't.

We've all lamented the disinformation campaigns mounted by humans recently; well, this is like a disinformation engine on atomic steroids! And no, they're not going to "solve it" by feeding it more data and parameters - this problem is inherent in LLMs, and in fact gets harder the more data there is.

Of course, as always, anything that comes out of Silicon Valley is a fad that is shoved down our throats as the best thing since sliced bread. But in reality, it's completely unnecessary, as it only marginally improves the quality of life of the 0.1% of the population, while simultaneously decreasing it for the rest. That is the sad reality we live in, my friend.


This is a very impassioned post, but I'd note, as a researcher who's been working in the field for a while -- LLMs achieve, to a degree, an approximation of the basis of the task space that generates the distribution at hand, under a minimum description length bound encouraged by the L2 penalty. This can be shown reasonably straightforwardly, up to a point.

Hence, as a distilled set of operators in task space, that disambiguation then becomes "reasoning" for a number of people. We're not saying they're human. Under reasonable mathematical constraints you can show reasoning pretty clearly. Heck, even without that you can run good inductive tests.
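To make the L2-penalty point concrete (a toy sketch of my own, not something from the thread): ridge regression adds a squared-norm penalty to the fit, which bounds the "size" of the learned weights -- the complexity-bound intuition behind the minimum-description-length framing above. The data, `lam` value, and helper name here are all illustrative assumptions.

```python
import numpy as np

# Toy sketch: an L2 (ridge) penalty bounds the norm of the fitted
# weights -- loosely, the "description length" of the model.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]              # only 2 of 10 features matter
y = X @ w_true + 0.1 * rng.normal(size=50)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_free = ridge_fit(X, y, lam=0.0)     # unpenalized least squares
w_reg = ridge_fit(X, y, lam=10.0)     # L2-penalized fit

# The penalized weights have a strictly smaller norm, i.e. a
# "shorter description" of the same task.
print(np.linalg.norm(w_free), np.linalg.norm(w_reg))
```

The shrinkage is monotone in `lam`: larger penalties squeeze the weight norm further, trading fit quality for a simpler model.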

I'm not sure of the core pith of what you're getting at, but I have a suspicion that there's something else under there beyond the raw mechanics of how LLMs do or don't work. Would that be a correct assertion on my end? :) <3 :D :)


I prefer not to engage further on this topic, to be honest. It's not a productive use of our time.

Instead, to get more educated on it, I'll read Gary Marcus's "Rebooting AI: Building Artificial Intelligence We Can Trust" that discusses research on AI reasoning in more detail ;)

I will probably also look into "Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass" by Mary L. Gray and Siddharth Suri, which discusses already-existing huge ethical violations around building these AI systems by large corporations like Google and Microsoft-backed OpenAI :D <3

If you have any other good reads on this (particularly the 2nd topic) based on your experience, I'd greatly appreciate it!


I think this is more the societal layer of it, which is definitely important. There are a lot of wannabe "Impact of AI" people, so I tend to be generally distrustful of that space due to the huckster coefficient. I think the basic summary is that we can look back to the Industrial Revolution, though this time it hits white-collar work with a few things updated for the present, and that will land near the general consensus a lot of people will have.

If you're looking for the more technical side of things (reasoning, etc) then I'd recommend taking a look at the original Shannon and Weaver paper, plus information topology from there on out. That field alone is interesting enough to dive down to a Ph.D. in.
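If that reading list feels abstract, the central quantity in Shannon's work is easy to compute directly. This is a minimal sketch (the function name is my own, not from any source in the thread) of discrete entropy, the average number of bits needed per symbol:

```python
import math

def entropy_bits(probs):
    # Shannon entropy H(p) = -sum p_i * log2(p_i) over the support.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per flip; a biased coin carries less.
print(entropy_bits([0.5, 0.5]))   # 1.0
print(entropy_bits([0.9, 0.1]))
```

Entropy is maximized by the uniform distribution and drops to zero for a deterministic outcome, which is the starting point for everything downstream in information theory.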


Re the societal layer, it's not about the wannabe people though - there are real, documented cases of how these big systems were trained using deeply unethical practices (e.g. OpenAI outsourcing the processing of highly disturbing images to "ghost" workers in Kenya while paying them <$2/hour). These have been studied by folks with PhDs in the field, so it's not something to be simply discounted (take a look at https://www.dair-institute.org/publications, for example).

Re reasoning, I'm more curious what you think of the non-statistical school of thought, e.g. the arguments against LLMs as "reasoning" engines, as popularized by Noam Chomsky and Gary Marcus.



