Role play with large language models (nature.com)
92 points by m_kos on Nov 13, 2023 | 85 comments


If you ever play improv, you soon realize that learning the basic rules of improv quickly leads to better scenes. You don't verbalize these while playing, but you keep them in your head and have a feel for them. I believe the authors greatly miss out on these internal representations by focusing only on the outcomes of what is being said.

Shameless plug, I made a GPT for playing improv: https://chat.openai.com/g/g-LkQhMxpvM-improv-theatre


> On the other hand, taken too literally, such language promotes anthropomorphism, exaggerating the similarities between these artificial intelligence (AI) systems and humans while obscuring their deep differences.

I don't see how the "role play" term doesn't introduce new issues of its own. Cambridge, for example, defines role play as "pretending to be someone else", but an LLM isn't pretending either. Also what "role" would an LLM play if you just take the base model without a default prompt or finetuning?


Without any prompt or fine-tuning, most models won't converse with the user at all. They'll operate in a pure text-prediction mode, which is more likely to continue the user's prompt than to respond to it. Or it may end up generating a response to the prompt as if it were a question asked on a web forum like Stack Overflow -- which, yes, does mean that it will generate comments complaining that your question is off-topic and should be closed.
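
(For anyone who wants to see this for themselves, a minimal sketch using Hugging Face transformers with GPT-2, a small base model with no instruction tuning; the prompt and model choice are just illustrative.)

    from transformers import pipeline

    # GPT-2 is a raw next-token predictor with no chat/instruction tuning.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "How do I reverse a list in Python?"
    out = generator(prompt, max_new_tokens=60, do_sample=True)
    print(out[0]["generated_text"])
    # The output typically *continues* the question (more text in the same voice,
    # forum-style framing, tangents) rather than answering it like an assistant.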


Yes. In AI Dungeon, which was an early ad-hoc attempt at using an LLM as a GM before instruction fine-tuning was figured out, you saw this a lot. I remember people posting chats where the model would shout that it had had it with the player, followed by "User has left the chat".


Well, from everything I've seen, without the fine-tuning the output will not be 'human' enough.

Or to put this another way... the 'role' may be more that of Legion in the biblical sense. Depending on the exact question asked, hints of particular human characteristics show up, but there is a multitude of different ones, and the model seemingly expresses them at random from question to question.


>Also what "role" would an LLM play if you just take the base model without a default prompt or finetuning?

What role would any person into sexual roleplay play if you don't prompt them into sexual roleplay?


I first read about understanding LLMs as simulators generating simulacra a while ago through this post: https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators


I've tried to play Dungeons and Dragons with Chatai. It doesn't like to use any mechanical rules and keeps trying to wrap everything up like it's got a train to catch.

I'm sure you could tune it to have a longer attention span than a two-year-old and actually apply the rules. It did come up with a decent setting based on B2: Keep on the Borderlands. And it was some fun.

The new generation of text games will be awesome. And Ender's tablet game is possible with this tech.


> And Ender's tablet game is possible with this tech.

Complete with aliens subtly messing with it, and through it, the player?

They who control the logits, control the future.


> I've tried to play Dungeons and Dragons with Chatai. It doesn't like to use any mechanical rules and keeps trying to wrap everything up like it's got a train to catch.

Yes, that's the RLHF and other mechanisms. Raters presumably reward completions which are, well, complete and can be quickly judged as a whole, no matter what the intrinsic quality might be or how valid it would be to end with a 'Tune in next week for chapter 2'.

What OP is describing is the emergent behavior of the base model, which acts very differently from the RLHF/instruction-tuned/who-knows-what-else ChatGPT web interface. (This is one reason I tend to avoid that in favor of the Playground and direct model access: there's much less moving machinery behind the scenes.)

What the additional stuff does is quite hard to understand, because the original prediction objective has been replaced by a bunch of mashed-together objectives combined with feedback loops, leading to some bizarre behavior like being unable to reliably write a nonrhyming poem. (Give that a try in ChatGPT: "write a nonrhyming poem". It's been slooowly getting better at it, perhaps because I and other people keep submitting examples of it failing to do so - but it will still usually fail! And when it finally does seem to be succeeding, if you let it keep writing lines, it will generally gradually revert back to rhyming.)

If you go back to the original davinci-001, you'll find that it acts strikingly differently from your description of contemporary ChatGPT.
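
(For anyone who wants to compare the two modes themselves, a rough sketch with the OpenAI Python client; the model names below are just examples of a completion-style model versus the chat interface, so substitute whatever you actually have access to.)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Completion endpoint: the model just continues the text you give it.
    completion = client.completions.create(
        model="davinci-002",            # example of a completion-style model
        prompt="write a nonrhyming poem",
        max_tokens=150,
    )
    print(completion.choices[0].text)

    # Chat endpoint: the same request, filtered through the tuned assistant persona.
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "write a nonrhyming poem"}],
    )
    print(chat.choices[0].message.content)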

OP, incidentally, has some discussion of what it is like to interact with the real GPT-4, which is not like ChatGPT-4: https://www.lesswrong.com/posts/tbJdxJMAiehewGpq2/impression...

AFAIK, this is one of the only discussions online of what GPT-4-base is like qualitatively. (Note that it sounds a lot like Sydney - if you were around for that, the original Bing Sydney turned out to be a GPT-4 snapshot from partway through training which hadn't been RLHFed but given only some extremely inadequate custom Bing training.)


You didn't read the article, did you? :-D


Roleplay is probably the most popular (casual) use of LLMs. Just check projects like SillyTavern.


Yeah, I think all those scientific papers miss out on a lot by not checking how people actually RP with models, especially with regard to the prompts used.


Didn't read the article, huh?


Related: ideas by Gwern to enhance AI dungeons with caching, yielding a novel form of choose-your-own-adventure games. https://gwern.net/cyoa
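
(The core trick is just memoizing the model's continuation on the full path of choices taken so far, so the story tree is fixed once generated. A toy sketch, with a dummy stand-in for the actual LLM call:)

    import hashlib
    from functools import lru_cache

    def generate(prompt: str) -> str:
        # Stand-in for a real LLM call, deterministic so the sketch runs on its own.
        return "scene " + hashlib.sha1(prompt.encode()).hexdigest()[:8]

    @lru_cache(maxsize=None)
    def continuation(choices: tuple) -> str:
        # Cache keyed on the whole path of choices: revisiting the same branch
        # replays the stored text instead of resampling, which is what turns
        # free-form AI-dungeon play into a fixed choose-your-own-adventure tree.
        return generate(" / ".join(choices))

    print(continuation(("enter the cave", "light a torch")))
    print(continuation(("enter the cave", "light a torch")))  # identical, from cache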


AI is clearly the hottest field right now.

Are there any other fields where a high fraction of the high impact papers (perspective or otherwise) are coming from industry rather than academia?


Finance? Mining/oil? Every engineering field with lots of money is bound to drive researchers to the private sector.


Hush, we're pretending those deep learning papers are science papers, not engineering reports!


But there are almost zero papers in glamour journals (high impact journals) from finance or mining/oil.


A lot of the interesting papers in drug discovery come from industry: clinical trials are not cheap, and much of the data you need for the early discovery phase is hidden away in corporate vaults.


Cryptography has a lot of interesting stuff coming from both industry and academia. Especially zero knowledge proofs.


[flagged]


It might, it might not.

Despite my bullish sentiment on AI (I won't be surprised by LLMs being PhD level in everything in 2 years), it is entirely possible that the LLM approach has practical limits that mean it will always be a jack of all trades and master of none, and that mastery can only be achieved by specialised AIs such as AlphaZero etc. that the LLMs can call out to and which are themselves hard to create in new domains.

This could in turn cause another AI winter, even as the current state of the art models are turned into fully open-sourced commodities that can run locally on cellphones.


Whether it's LLMs or some other new groundbreaking architecture, the trends and underlying principles are undeniable. To think it's just another hype cycle at this point is silly, and it's crazy how many people look at crypto and AI and think they're remotely similar.


> the trends and underlying principles are undeniable. To think it's just another hype cycle at this point is silly

Saying that is surely a sign of hype, no?

> and it's crazy how many people look at crypto and AI and think they're remotely similar.

I don't think they're similar, but I absolutely do recognise why they may seem similar. Lots of humans make quick judgements and then anchor on them, and there are many ways to draw lines between almost anything. It's almost like humans are just stochastic pattern matchers… :P


Do you have, like, prior evidence for this statement, or is it just LLM cult stuff?


It's currently a popular tech, so it must have no realistic limit.

Just like the 1969 moon landing meant spaceships for everybody and space colonies in 30 years...


That was more an issue of political will and priorities than technological limits, though.


I don't think it would have moved far, even with 10x the budget.

There are very real technical, physical, and physiological limits preventing this. They might be overcome at some point, but not in a few decades through sheer will, or by throwing more money at the problem, or because "it's inevitable".


Strawman. To assume the similarities between space travel in the 60s and AI now go only as far as "currently popular tech", you're either not arguing in good faith or severely shortsighted.


I'm not making an argument by analogy so much as making a prediction of a similar trajectory and similar disappointment.

That said, there are tons of examples of hyped "current technologies" sold as panaceas that ran into diminishing returns quite fast.


Seriously though, if you think the implications of AI can be boiled down to LLM cult hype, you are seriously shortsighted.

It's weird; since you have a public key on your profile, you must be someone seriously into tech. Why are so many skilled tech veterans so naively bearish on AI? Is it pride, with the primacy of their skills being threatened by artificial intelligence? Is it an inability to gauge general trends outside their narrow field of expertise?


The destiny of all natural intelligence is to extinguish itself, by damaging the environment, starting an all-out nuclear war, engineering bioweapons, or in many other ways, or just to be wiped out by the first visiting comet ELE, and in the process to power-starve and kill off the substrates of any artificial intelligence it has created too...


and artificial intelligence to beget natural intelligence, completing the loop.


I think both humans and AIs are only as intelligent as their language data, maybe just one bit more than language.


There is no such thing as "intelligence".


Then why can I type this sentence when a cereal box can't? And please don't give a snarky "because you have hands" type answer.


> Then why can I type this sentence when a cereal box can't?

Give it a couple years. IoT WiFi + chatbot + spambot is yesterday's tech.


Because the cereal box is not living matter, for starters.

Parent didn't say there's no such thing as life.


~110-120 IQ response


Yes, overqualified response for a ~80 IQ question


Maybe there's no intelligence behind this comment, but intelligence is all around you. It's a clearly defined phenomenon.


There is.


The inability to write suggestive or violent scenes or even have a simple fight between characters makes it difficult to do any interesting role play. The frustrating thing is that it feels like it could easily be done, but the creators are bound by some puritanical sense of moral obligation. I hate this!


I wonder if this policy will also prohibit the use of violent/war metaphors, like "loaded for bear" or "another weapon in his arsenal". If so, the average sports commentary will throw dozens of red flags.


Metaphors are one of the things that make language interesting, vibrant, and expressive. You can convey so much with them. Prohibiting their use (even in specific contexts like this) is/will be a huge loss.


Large Language Models are predictors. Not imitators, not simulators. Those are just apparent byproducts of performant prediction.

The end goal, and what the loss is trending down toward, is to perfectly model the data it's been given.

So it will not stop at "surface level similarity" or "plausible" or "uninspired" or whatever arbitrary competency line anyone tries to draw in the sand.

It will continue to improve until it is "correct" (as determined by the data).
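
(Concretely, "the loss trending down" here just means the standard next-token cross-entropy objective; a minimal PyTorch sketch of what is being minimized:)

    import torch
    import torch.nn.functional as F

    def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # logits: (batch, seq_len, vocab_size), tokens: (batch, seq_len)
        # Predict token t+1 from everything up to position t; the loss only
        # bottoms out when the model matches the data distribution.
        preds = logits[:, :-1, :].reshape(-1, logits.size(-1))
        targets = tokens[:, 1:].reshape(-1)
        return F.cross_entropy(preds, targets)

    # toy usage, with random tensors standing in for a model's output
    loss = next_token_loss(torch.randn(2, 16, 50257), torch.randint(0, 50257, (2, 16)))
    print(loss)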

Stick in a bunch of protein sequences and it's not going to stop when it starts generating alphanumeric sequences that look like proteins but really aren't.

It's eventually going to start generating real proteins. https://www.nature.com/articles/s41587-022-01618-2

Then it's going to keep improving until it models the distribution of proteins in the dataset.

I'm honestly not sure what the point of this paper is.

"It is, perhaps, somewhat reassuring to know that LLM-based dialogue agents are not conscious entities with their own agendas and an instinct for self-preservation, and that when they appear to have those things it is merely role play."

Ignoring the whole "we don't know what consciousness is" issue, it just seems devoid of any meaningful distinction.

"Merely roleplay". What does that even mean ? That's it's not real ? Not really.

They seem to understand this too.

"It would be little consolation to a user deceived into sending real money to a real bank account to know that the agent that brought this about was only playing a role."

Bing has a habit of ending conversations when users say upsetting things. You can talk all you want about how "it's not really upset" but the conversation did end and now you have to start over and be potentially less confrontational if you want to move forward.

"Roleplay" as consequential as the "real thing" is the real thing.

This shiny piece of yellow metal looks like gold, tests like gold, sells like gold, but is not... real gold?

Not unless you have a meaningless definition of real.


I've seen you frequently argue similar points about LLMs, but matching a statistical distribution of text is imitation, which necessitates confabulation on the part of LLMs. Namely, humans have underlying causes that affect their writing, like being tired after a long period without rest, frustrated at a traffic jam they are stuck in, or even making typos because they are using a crappy input device like a mobile phone.

If an LLM matches the distribution of text, it might superficially make similar mistakes a human would: introducing a typo that might be common on a phone keyboard. If asked, its reasoning will likely be that the typo was due to a phone keyboard, or maybe another common reason humans give for their typos. Though it's super unlikely that it will give the true reason: that it's been trained on text that exhibited this property.

That's a fundamental difference between an LLM doing an exceedingly competent job at pattern matching human behavior and real human behavior (unless maybe you're a human with schizophrenia).

That doesn't mean current LLMs aren't useful, but it does mean there is a very significant gap between them and the idea of an AGI. As a NLP researcher, I can confidently say we currently don't know how to imbue agency (as in embodied causal reasoning as I described above) into LLMs. There are definitely differences of opinions on how difficult that step is and if we are close, but it is a major limitation of current LLMs that can't be ignored.


>causes that affect their writing, like being tired after a long period without rest, frustrated at a traffic jam they are stuck in

There is nothing special about fatigue or frustration that makes it any less modellable than any other implicit structure present in the dataset that it models just fine.

>Though it's super unlikely that it will give the true reason

It's super unlikely humans will give the "true reason" for anything they do. There's a fair bit of research showing that the stated reasons for the decisions we make are often (always?) just post-hoc rationalizations, even if you believe otherwise.

>very significant gap between them and the idea of an AGI.

What is this idea of AGI to which there still exists a significant gap? It certainly isn't the idea of being Artificial and Generally Intelligent.

>I can confidently say we currently don't know how to imbue agency (as in embodied causal reasoning

I don't see the difference between what you've described and examples like these.

https://innermonologue.github.io/

https://tidybot.cs.princeton.edu/


> What is this idea of AGI to which there still exists a significant gap?

Because an LLM doesn't simulate the brain.

It's a completely different model that simulates something much different.

>There is nothing special about fatigue or frustration that makes it any less modellable than any other implicit structure

Fatigue isn't being modeled.

Instead, the predicted text that results from that fatigue is being modeled.

You are confusing output with input.

To make this more clear, imagine a coin flip modeler.

A model that just picks based on a random seed is completely different from a model that is doing physics calculations on a coin flipping in the air.

Even if both models only output "heads" or "tails".

Same argument applies to language models.
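
(A toy version of that argument in code; the "physics" is a deliberately crude half-turn-counting model, just to show two very different internals producing the same output distribution:)

    import math
    import random

    def rng_coin() -> str:
        # Model 1: samples the outcome distribution directly.
        return "heads" if random.random() < 0.5 else "tails"

    def physics_coin() -> str:
        # Model 2: crude mechanics of an actual flip; the parity of completed
        # half-turns while the coin is in the air decides which face lands up.
        omega = random.uniform(20.0, 40.0)   # angular velocity, rad/s
        t = random.uniform(0.4, 0.6)         # time in the air, seconds
        half_turns = int(omega * t / math.pi)
        return "heads" if half_turns % 2 == 0 else "tails"

    # Both come out close to 50/50 over many flips, but only one models the coin.
    print(sum(rng_coin() == "heads" for _ in range(10000)))
    print(sum(physics_coin() == "heads" for _ in range(10000)))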


>Because an LLM doesn't simulate the brain.

That's pretty moot. You don't need a human brain any more than a plane needs feathers and flapping wings to fly.

>Fatigue isn't being modeled. Instead, the predicted text that results from that fatigue is being modeled.

Emotion is definitely being modelled. There's nothing random about anything that's happened.

Train on protein sequences alone and biological structure and function emerge in the inner layers. It doesn't matter that those things are not explicitly stated in the data; because they implicitly structure it, they get learnt.

https://www.pnas.org/doi/full/10.1073/pnas.2016239118

Similarly, emotion is evidently being modelled to a high degree.

https://arxiv.org/abs/2307.11760


I don't think you addressed the main point.

Do you agree that a simulation that uses a random number generator to flip a coin is different from a physics simulator that measures the exact physics of a coin flipping?

And then, after you answer this question, do you understand the parallels to other AI models?


You can simulate things with different models. Sure.

I don't think there's any point to address here, though. It's not like you can ascertain what kind of model is being used for LLM predictions on anything more than a hunch.

Meanwhile, I've given several examples. I can show another one, of an Othello LLM constructing a representation of the Othello board state to aid its predictions.

https://thegradient.pub/othello/
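
(That result rests on probing: train a small classifier on the network's hidden activations to read off the state of each board square. A sketch of the idea, with synthetic placeholder data standing in for the real activations and labels:)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    d_model = 512
    # Placeholders: in the real experiment these are hidden activations from a GPT
    # trained on Othello move sequences, and the true state of one board square.
    acts = rng.normal(size=(5000, d_model))
    square_state = rng.integers(0, 3, size=5000)   # 0 = empty, 1 = black, 2 = white

    probe = LogisticRegression(max_iter=1000)
    probe.fit(acts[:4000], square_state[:4000])
    print("probe accuracy:", probe.score(acts[4000:], square_state[4000:]))
    # On real activations, accuracy far above chance (~33% here) is the evidence
    # that the board state is encoded inside the model.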


> It's super unlikely humans will give the "true reason" for anything they do.

That may be true, but the difference between humans and LLMs is still there: we can try to think even if we sometimes fail to find the true cause, while an LLM will just "imitate" data.


Absolutely, LLMs are not clones of human brains in digital form. There are differences; I'm not denying that. But you don't have to simulate the human brain to be intelligent any more than a plane needs to have feathers and flap its wings to fly.


No, but that doesn't mean that there's not a difference between being intelligent and giving a good impression of intelligence. Subject matter experts quickly determine that LLMs are always confident sounding but often incorrect in a way that humans would not be (experts don't confidently state something they are uncertain about or which they don't know at all). I'm a believer in the strong AI hypothesis - machines/AI can and probably will be actually intelligent at some point, but LLMs are definitely not that.


There really is no difference. Either there is utility or there isn't. "Fake" intelligence that produces results is just something that does not make any sense.

LLMs may be worse, but humans confidently state things they are uncertain about all the time, lol. Maybe not experts in general, but then that's still comparing LLMs to a small percentage of humanity.

At any rate, unlike what many seem to think, the issue is not in fact a lack of ability to distinguish truth and fact from fiction. It turns out that being able to distinguish the two and having the incentive to communicate that are two different things.

GPT-4 logits calibration pre RLHF - https://imgur.com/a/3gYel9r

Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback - https://arxiv.org/abs/2305.14975

Teaching Models to Express Their Uncertainty in Words - https://arxiv.org/abs/2205.14334

Language Models (Mostly) Know What They Know - https://arxiv.org/abs/2207.05221

The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets - https://arxiv.org/abs/2310.06824
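
(For anyone unfamiliar, "calibration" here is measurable: bin the model's stated confidences and compare each bin's average confidence to its actual accuracy. A minimal sketch of the standard expected-calibration-error computation, not taken from any of the papers above:)

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        # confidences: predicted probability of being right; correct: 0/1 outcomes.
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                # weight each bin by its share of samples
                ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
        return ece

    # toy usage: a well-calibrated predictor has low ECE
    p = np.random.rand(10000)
    y = (np.random.rand(10000) < p).astype(float)
    print(expected_calibration_error(p, y))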


There really is a difference, and I just told you what it was. You're fitting the facts to suit your belief rather than vice versa.


"Hallucinations" aren't a difference sorry given how much people do it.


What does schizophrenia have to do with it? You might be getting confused by the use of the term "hallucination" in the LLM space; it doesn't mean what it is conventionally used to mean.

Also, look at a definition of AGI from before LLMs and they do a good job fitting the bill. They're not quite going to replace all humans everywhere forever, but that was never part of the definition. They match early definitions, especially the multimodal models; there's no denying they match what people envisioned AGI to be. You can ask frontier models in plain language to do a task and they can break down the task, formulate a plan of attack, research if need be, synthesise knowledge, and apply the solution. If that is all "just word prediction" then so are we.

Just because we understand how it is built doesn't make it any less impressive. More importantly, we simply don't understand LLMs yet. How they're built, we know, yes; why they work so stupidly well, we do not know.

Importantly, we can't just change definitions to move the goalposts because we feel uncomfortable.

AGI definition:

https://www.gartner.com/en/information-technology/glossary/a...


> If an LLM matches the distribution of text, it might superficially make similar mistakes a human would

It's also important to point out (IMHO) that the current distribution of text that almost every LLM is based on is almost certainly a slice of the Internet, which has a very, very specific lean in terms of culture, language, tone, and availability.


Not to argue with anything you said but just as an aside, the Bing habit you mention doesn't seem to be from the LLM itself, but from some censorship module that has been bolted in. In the first few days after release, it would get really confrontational (and sometimes emotional) with some users and this attracted bad press, so now it ends the conversation before getting into any remotely thorny issue. There's also another censorship module that sometimes deletes its output before it has finished.


Ignoring users was something that was happening from the very beginning. It was just much less abrupt and more conversational, with no obvious triggers that wouldn't already be present. You could even still send messages (but you'd be ignored).

Examples

https://www.reddit.com/r/ChatGPT/s/bXrDW2plxG (pic 5)

https://www.reddit.com/r/ChatGPT/s/tc3Iaf5GKc

At least here, it's clear that even if Bing wasn't actually receiving text and predicting a "no token" response, it was able to send an API request that cut off the chat.

May be implemented differently now though.


> it would get really confrontational (and sometimes emotional) with some users

Nitpick: it was trained to give emotional responses by the safety people involved. It wasn't getting emotional itself!


So, if as a young child I've learnt to behave emotionally in order to participate in society, my emotions are ... not even "not real" but "not mine"? I mean, we can argue realness, but surely I'm still the one having them.

This feels like a "free will" debate, where if we can explain how a decision was made, that deprives the person of their agency. The model being trained to respond emotionally is why it is getting emotional itself.


> So, if as a young child I've learnt to behave emotionally in order to participate in society, my emotions are ... not even "not real" but "not mine"? I mean, we can argue realness, but surely I'm still the one having them.

Your emotions aren't purely imitative based on what your parents encouraged. They come from inside you from chemicals diffusing in your brain. Kids aren't blank slates that respond perfectly to training.

> This feels like a "free will" debate, where if we can explain how a decision was made, that deprives the person of their agency. The model being trained to respond emotionally is why it is getting emotional itself.

No, it's just a distinction based on the meaning of "get emotional". I'm saying it wasn't getting emotional. It was responding with emotionally charged text responses, as they were trained into it.


> Your emotions aren't purely imitative based on what your parents encouraged. They come from inside you from chemicals diffusing in your brain. Kids aren't blank slates that respond perfectly to training.

If a kid was a blank slate that responded perfectly to training, would its emotions then not be its own?

> No, it's just a distinction based on the meaning of "get emotional". I'm saying it wasn't getting emotional. It was responding with emotionally charged text responses, as they were trained into it.

What's the difference between a chemical concentration in the brain and the activation level of a neuron trained to predict the outcomes of chemical concentration in the brain? Sufficiently advanced imitation is indistinguishable from identity.


> If a kid was a blank slate that responded perfectly to training, would its emotions then not be its own?

No idea what this means. What is "responded perfectly to training"?

> What's the difference between a chemical concentration in the brain and the activation level of a neuron trained to predict the outcomes of chemical concentration in the brain? Sufficiently advanced imitation is indistinguishable from identity.

One is emotional; the other statistical. Is my monitor emotional for showing the words on the screen based on an electrical activation passed to a twisty crystal? All sorts of things convey things, but the actual emotion came from a person with emotions.


"Tests like gold" is doing a lot of heavy lifting here. By any reasonable standard, LLMs do not 'test like gold'. They may sell like gold, but so do a lot of alloys which are mostly copper if you can find a naive enough buyer. Selling debased metals as gold is a con that's literally thousands of years old and has hundreds of variations, so maybe that metaphor is indeed apt.

You're the seller trying to offload a bunch of shiny yellow coins that are mostly lead telling people not to bother checking for density or ductility or conductivity, because what does it 'really' mean to be 'gold' anyway?

You can give them basic logic tasks and watch them fail abysmally. Who cares if they're "conscious" or "emotional" if they're idiots either way?


Good metaphor, but extending it: do we have equivalent tests for consciousness? (I don't think so, right?)

So by all the tests we have (1 - visual inspection), it passes as gold...

We need better tests to test consciousness... and better definitions to define it.

Seems like AL and AI are diverging on how to go about things: AI focusing on LLMs, and AL focusing heavily on CA, a 'small parts => build great things' approach.


Humans who are not formally trained in logic do poorly on logic tests. Not as poorly as most LLMs, but nothing to write home about.

There is no testable definition of general intelligence that GPT-4 fails that a chunk of humans also wouldn't. They definitely test like gold, lol.


The point of this paper is that the authors realized they could write a nature paper about this and not get called out in any meaningful way.


If you're going to be totally cynical about it, and are not even willing to entertain the idea that they did it to further scientific progress, then you have my upvote.


I upvoted the OP only because of your comment, so that more people can read it.

People outside the field do not realize that there is a sort of "cottage industry" of academics whose purpose in life appears to be to redefine "intelligence" as whatever they currently believe the machines cannot do. Their arguments tend to be wishy-washy.

Let me propose a simple test for detecting wishy-washy arguments:

1. Replace all references to AI with references to "a person."

2. Re-read the argument with fresh eyes.

3. If the argument no longer seems persuasive, it isn't.


Conversely there are many in the field who are happy to redefine intelligence to be whatever it is that the current crop of AIs can do.

However those same people are also trying to sell something, so their claims deserve extra scrutiny.


'redefine intelligence'

The particular problem here is both groups are right.

Intelligence is a spectrum of behaviors, everywhere from the actions of the lowliest single-celled organism up to and exceeding human capabilities. A definition with this wide a range is unfortunately practically useless when it comes to narrowing down a multitude of specific behaviors, specifically around the median of human intelligence. It gets trickier because nothing in the past really got close to human ability, so we were never forced to formally define what human intelligence is.

We're going to find the same issues here as we do defining the term life.


That's a reliable indication that, currently, intelligence is socially defined. Learning how social definitions replicate and evolve is going to reveal the levers of change more than debating the definitions themselves. What I'm saying is that social engineering is 90% of the game here.


LOL. Oh yes, that's true: There are a lot of cheerleaders with shiny pom-poms, always chanting and dancing, trying to get others into joining the AI frenzy.


Read my comment [1] for this one academic's take on intelligence in current LLMs. I feel it passes the spirit of the test you provide (doing it literally would be a bit nonsensical).

[1]: https://news.ycombinator.com/item?id=38249957


So, this is tangential, but on a scale of 0.0f to 1.0f, how likely do you (or anyone) think it is that AI needs a simulated intestine to proceed from here?


> This shiny piece of yellow metal looks like gold, tests like gold, sells like gold, but is not... real gold?

This is exactly what gets said about lab-grown vs. natural diamonds in jewelry.

I agree with your point, I just think this is an interesting similar effect.


> Large Language Models are predictors. Not imitators, not simulators

How is it that an LLM can react to meta prompts like 'be brief' or 'Ensure responses are unique and without repetition'?


> I'm honestly not sure what the point of this paper is.

They state it clearly in the abstract, I think: "we must develop effective ways to describe their behaviour in high-level terms without falling into the trap of anthropomorphism", which seems pretty sensible to me.

But from this high point it is mostly downhill, with the chief problem being that despite them wanting to avoid "anthropomorphism" they are still obviously using boatloads of it everywhere:

> Role play is a useful framing for dialogue agents, allowing us to draw on the fund of folk psychological concepts we use to understand human behaviour—beliefs, desires, goals, ambitions, emotions and so on

Astonishingly, after listing this collection, explicitly labelled "human behaviours", they claim there's no risk of anthropomorphism! I guess they mean that it's not anthropomorphism if you are just declaring the thing to be an actual human...

Personally I think you'd be much better off seeking inspiration in the terminology of older fields that have dealt with objects pretending to be humans. For example, while I don't know much about theory of painting, I'm still pretty sure it doesn't try to discuss how people respond to a painting by hiding its author and upgrading the finished painting itself to an active agent with its own motivations...


It does though, lol


Sometimes I wonder: are human beings mostly predictors too? Has this been investigated by philosophy (and psychology) already?


In the same vein, are people with borderline now suddenly not human, just because they role-play to people-please?



