
> Automate Content: Like this very post. I use Wispr Flow to talk with Claude, explain the topic and tell it to read my past blog posts to write in my style.

This post is generated. Meaning, it wasn't written with the same aim towards truth and relevance that we assume writers have. It looks like writing, and fools readers into thinking it's writing.

But it's just text. What's the purpose in reading it as opposed to any other generated text?



It took 50+ prompts to get it to this state. I’d say that counts as putting enough care into it to be my voice.


What matters to me is that a human writer has verified the content and is ready to stake their reputation on it being worth my time to read.

It sounds like that's what you've done here, in which case I don't feel that you are wasting my time by having me read something that you haven't even reviewed yourself.


I've been trying to organize my thoughts on how I feel about consuming AI generated content. This comment really encapsulates how I feel.

As long as a human put time and effort into making something, then I'm willing to consider putting my effort toward reading/watching. If someone just spends 5 seconds to throw a prompt out there, that's when I get annoyed.


> As long as a human put time and effort into making something, then I'm willing to consider putting my effort toward reading/watching. If someone just spends 5 seconds to throw a prompt out there, that's when I get annoyed.

Why? Do you care more about the origin than the quality?


> Why? Do you care more about the origin than the quality?

Because they are linked. AI content can be generated so frivolously and at such volume that you can easily be overwhelmed by low-quality garbage. Humans can also generate crap, but at a much slower pace, and AI is so good at crap generation that it will push out the humans who used to eke out work in that space. So what we are left with is AI content that is mostly low-effort crap, with maybe some rigorously reviewed bits that are good here and there, and human content mostly from people who care enough to make quality work, since otherwise they would already be posting AI schlock.

The end result is that using AI as a proxy indicator for garbage will be right more often than it's wrong. So if I see something is AI generated, I should give it a pass and not waste my limited time on it.


It does seem weird for someone to expect others to spend their time fully reading something… when they also have access to the same tools and can just tell it to summarize it back.


That would be true for writing where the author typed a sentence and the LLM expanded it to multiple paragraphs.

That is not what happened here: the author provided a lot more input than the finished article, and used the LLM to help crunch that down to as good a version as possible of the points they wanted to make.


Or so the author claims…



Where does it show “that the author provided a lot more input than the finished article“?


Every one of those 72 commits represents the author making a decision and having the LLM make edits based on that decision.


So, are you vouching for this entire text, and are you happy for us to treat it as if you wrote it?

So that if it has sloppy mistakes or doesn't reflect your thinking well, you're fine with others trusting your work less?


Readers would have been better served with the prompts you wrote than the AI generated output.


I don't think that's true. What matters to me is the human editorial touch: I don't want to wade through 50 prompts and responses; I want a human author to have resolved that process into a final output that they think is worth sharing with me.


I think the correct benchmark is `len()`. Give me your prompts or your output, whichever is shorter.


No


Try reading a manuscript copy of a book before it's been edited. Yes, I know some people do this out of interest, but for most people it's not the type of writing they are interested in reading or would get the most out of.


All ~50 prompts would take you half an hour to read and wouldn't bring across my point nearly as well.


But it would provide a better illustration of how you’re actually working.


If you're interested in seeing the process behind this piece of writing you can read through a lot of the details in the 71 commits that went into creating the story in the PR: https://github.com/steipete/steipete.me/pull/106/commits


Well…


So your argument is that only text written by hand can convey useful information?

If it was dictated or transcribed, it's somehow lesser and unworthy of attention?


> So your argument is that only text written by hand can convey useful information?

No. I don't know where you got that from.


So you're suggesting that using an LLM is the same as transcribing? Then I guess Dragon Naturally Speaking was way ahead of the curve.


If using LLMs for writing, you should provide your prompts up front so we can see your actual thinking, and then ignore the rest of the content. Or better yet, synthesize those prompts into a writing style that we like more!


Not the prompts exactly, but these commits should give you a good idea about what they were: https://github.com/steipete/steipete.me/pull/106/commits



