
It couldn't possibly be actually doing those things. It's a large language model. How would it be provisioning a computer? How would it be capturing the output and connecting that output to the chat interface? There are so many pitfalls to that.

Is it scary/amazing how accurate it is? Yes. And that's the point. That's why OpenAI is showing this off. They want the world to know what is coming.



I think I should plan to retire from the internet at some point. When I'm old, there will be endless deepfake autonomous agents scamming everyone.

Or worse, consensus reality will be impossible to define due to generated fake news.


I'm afraid people with schizoid disorder will literally be in danger when exposed to these 'agents'. Also, to a lesser extent, old people.


Last year around the same time, I figured that most of the content we read on social networks could be generated by advanced language models. I then proceeded to research these models and gathered data on the agents in the various networks.

I stopped sleeping altogether, bought a ton of hardware and became delirious. I have now been diagnosed with bipolar disorder. Some psychiatrists have classified my intuitions about language model usage as "delirious mania". Some think I'm right - but manic nonetheless.

Terribly shitty situation.


I could use that hardware, have any 3090s which need a good home? ;) /j


At least with ChatGPT, it seems it could bring people out of their paranoid mindset.

I tried to talk to it about conspiracies, the deep state, denialism, etc., and the bot kept telling me that those things are not real, explaining why, and so on.

It had way more patience than me, and explained things better than I do when talking to tinfoilers :)


I think we are really close to a situation Neal Stephenson described in his book "Fall; or, Dodge in Hell", where the internet/social media is swarmed with APEs: bots that spread so much misinformation in so many directions about a topic that it becomes impossible to trust anything posted online.


See also: "Maelstrom" in the Starfish trilogy (a.k.a. the Rifters trilogy) by Peter Watts.


You're probably right; I'm just having a hard time believing it. It's so much easier to believe that somehow it's just piping output.

You can ask it to write a program that consumes RAM, then ask it to show you htop output while that program is running, and it shows a process hogging 100% CPU and 50% memory. Conversely, ask it to write a program that throws a SIGSEGV, then to show htop output while running that program, and it shows no usage.
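
For concreteness, the kind of toy programs being described might look like this (a minimal sketch in Python; the chunk size and function names are just illustrative, not what anyone actually typed):

    import ctypes

    def hog_memory_and_cpu():
        # Busy-loop while holding ever more memory; a real htop would show
        # this process pinning a core with steadily growing resident memory.
        chunks = []
        while True:
            chunks.append(bytearray(100 * 1024 * 1024))  # grab ~100 MiB per pass

    def crash_with_segfault():
        # Read from address 0; the process dies immediately with SIGSEGV,
        # so a real htop would show no such process afterwards.
        ctypes.string_at(0)

If the skeptics in this thread are right, ChatGPT never executes anything like this; it only predicts what htop output would plausibly look like for each program.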


Again, it's very impressive, but consider this: ChatGPT spans many, many domains. You can ask it questions about music, screenwriting, programming, math, science. This is what convinces me that it's not "just" provisioning a system and running the commands. In order to produce output that general, you can't waste time setting these things up for the narrow use case of emulating a computer. It would be special cases on top of special cases on top of special cases, and it would fall apart quickly.


It could be given access to its own virtual machine connected to the Internet, for experiments. We don’t know exactly how OpenAI’s setup works.


You fool, that's exactly what it wants.


> It couldn't possibly be actually doing those things. It's a large language model. How would it be provisioning a computer?

I agree that it isn't, but it's not like it's impossible, right? We know ChatGPT was trained on data that includes a lot of HTML for web apps; it certainly seems like it could be taught to parse the widgets of a UI like a sentence and traverse it well enough to handle a prompt like, "Use a search engine to find a small VPN host that accepts bitcoin, register for an account..." (see the sketch below).

edit: I mean it's not impossible to purposefully build a bot that could sign up for a web app; it is, of course, impossible that a language model might do it.
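
To make "parse the widgets of a UI like a sentence" concrete: the interactive parts of a page can be flattened into plain text for a model to reason over. A rough sketch using only Python's standard library; the sample form, field names, and prompt wording are all made up for illustration:

    from html.parser import HTMLParser

    class WidgetExtractor(HTMLParser):
        # Flatten a page's interactive widgets into one line of text each,
        # so a language model can "read" the UI the way it reads a sentence.
        def __init__(self):
            super().__init__()
            self.widgets = []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "input":
                self.widgets.append(f"input name={a.get('name')} type={a.get('type', 'text')}")
            elif tag == "button":
                self.widgets.append("button")
            elif tag == "a" and "href" in a:
                self.widgets.append(f"link href={a['href']}")

    # A made-up signup form standing in for a real page.
    page = """
    <form action="/register">
      <input type="text" name="username">
      <input type="password" name="password">
      <input type="text" name="btc_address">
      <button>Sign up</button>
    </form>
    """

    parser = WidgetExtractor()
    parser.feed(page)
    prompt = ("Page widgets:\n" + "\n".join(parser.widgets)
              + "\nWhich widget should be filled in next, and with what?")
    print(prompt)  # this text, not pixels, is what the model would see

Turning the model's answer back into real clicks and form submissions is a separate engineering problem, but the "reading the UI" half really is just text.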



