
>GPT can't perform to the level of a middle market copywriter or content marketer. I am convinced that people who think LLMs can write have simply not understood what professional writers do.

GPT's rigid "robot butler" style is not "just how LLMs write". OpenAI deliberately tuned it to sound that way. Even much weaker models that aren't tuned to write in a particular way can easily pass for human writing.



This is part of the problem with the whole discourse of comparing human writers to LLMs. Superficial things like style and tone aren't the problem, but they are overwhelmingly the focus of these discussions.

It's funny to see, because developers are so sensitive about being treated like code monkeys by their non-technical colleagues. But these same devs turn around and treat other professionals as word monkeys, or pixel monkeys, or whatever else, not realizing that they're only seeing the tip of the iceberg of someone else's profession.

Professional writers don't take prompts and shit out words. They work closely with their clients to understand the important outcomes, then work strategically towards them. The dead giveaway of LLM writing isn't the style. It's the lack of coherent intent behind the words, and low information density of the text. A professional writer works to communicate a lot with very little. LLMs work in the opposite way: you give it a prompt, then it blows it out into verbiage.

Sit down for coffee with a professional copywriter (not the SEO content marketing spammers), and see what they have to say about LLMs.


>and low information density of the text.

Personally, I group all these things under 'style'. Perhaps I should have said 'presentation' instead. You've latched onto that specific word and run with it. My point is that the post-training of these models, especially OpenAI's GPT, does a lot to shape how the writing (the default, at least) presents long strings of text. Like how GPT-4 is almost compelled to end bouts of fiction prematurely in sunshine and rainbows. That technically isn't style, but it's part of what I was talking about.

>A professional writer works to communicate a lot with very little. LLMs work in the opposite way: you give it a prompt, then it blows it out into verbiage.

There's no reason you have to work this way with an LLM.


> You've latched on that specific word and gone off.

No, I haven't. I'm not talking about style, but something deeper. What I'm talking about is something you don't even seem to realize exists in professional writing - which is why you keep thinking I'm misunderstanding you when I am not.

I've worked with professional writers, and nothing in the LLM space even comes close to them. It's not a matter of low quality vs high quality, or benchmarking, or style. It's simply an apples and oranges comparison.

The economics of LLMs for shortform copy will never make sense, because producing the words is the cheapest part of that process. They might become the best way for writers themselves to produce longform copy on the execution side, but they can't replace the writer's ability to work with the client to figure out exactly what they are trying to write, and why, and what a good result even looks like. And no, this isn't a prompting issue, or a UI issue, or a context window length issue, or anything like that.

Elsewhere in this thread someone mentioned how invaluable LLMs are for producing internal business copy. I could easily see these amateur writing tasks being replaced by LLMs. But the implication there isn't that LLMs are any good at writing, but that these tasks don't require good writing to begin with.


>What I'm talking about is something you don't even seem to realize exists in professional writing

I've read hundreds of books, fiction and otherwise. This isn't a brag, it's just to say, believe me, I know what professional writing looks like and I know where LLMs currently stand because I've used them a lot. I know the quality you can squeeze out if you're willing to let go of any presumptions.

You'll notice that not once did I say current LLMs could wholesale replace professional writers, any more than they can currently replace professional software devs. I just disagree with the "not a good writer" bit.

If it's the opinion of professional writers you're looking for then you can find some who disagree too.

Rie Kudan won an award for a novel in which roughly 5% was ghostwritten verbatim by GPT (essentially no edits). Her words, not mine. Who knows how much more of the novel is edited GPT output.


>Rie Kudan won an award for a novel in which roughly 5% was ghostwritten verbatim by GPT (essentially no edits). Her words, not mine. Who knows how much more of the novel is edited GPT output.

That a professional human novelist was able to leverage GPT for her book doesn't disprove the grandparent's post. She knew what good looks like, and if it wasn't good she wouldn't have kept it in the book.


That's my point. Good writing can come out of LLMs and nobody has to take my word for it.

When part of the OP's point seems to be that LLMs can't write good stuff, that's proof enough.

If you're talking about replacing professionals wholesale then I never made that argument.


Good writing can also come out of Markov chains. Or even RNGs - if your novelist has enough time to filter the output.

LLMs can't write good stuff. Human writers can write good stuff. When a good writer uses an LLM in their writing process, that writer can certainly produce good writing.

When an AI hypebro who is otherwise a bad writer uses an LLM in their writing process, they still produce bad writing.


>Good writing can also come out of Markov chains.

Still waiting for the author who has used a Markov chain to ghostwrite.

>LLMs can't write good stuff. Human writers can write good stuff. When a good writer uses an LLM in their writing process, that writer can certainly produce good writing.

Give it a rest. The author was quite clear that she copy-pasted sections of the writing in.


I actually agree with you that professional writers _can_ write and communicate much better than LLMs. However, I've read way too many articles and book chapters that are full of needless fluff before they get to the point. It's almost as if the authors wanted to show off that they could write all that and somehow connect it to the main part of the article. I'm not reading the essay to appreciate the writer's ability to narrate things; I care about what they have to say on the topic that brought me to the essay.


Perhaps the pointless fluff you're describing is actually chaff: countermeasures strategically deployed ahead of time by IQ 180 writers in order to preemptively water down any future LLMs trained on their work.

Then the humans can make a heroic return, write surgical prose like Hemingway to slice through the AI drivel, and keep collecting their paychecks.

Bonus points if you can translate this analogy to software development...


lmfao :¬)



