> Automate Content: Like this very post. I use Wispr Flow to talk with Claude, explain the topic and tell it to read my past blog posts to write in my style.
This post is generated. Meaning, it wasn't written with the same aim towards truth and relevance that we assume writers have. It looks like writing, and fools readers into thinking it's writing.
But it's just text. What's the purpose in reading it as opposed to any other generated text?
What matters to me is that a human writer has verified the content and is ready to stake their reputation on it being worth my time to read.
It sounds like that's what you've done here, in which case I don't feel that you are wasting my time by having me read something that you haven't even reviewed yourself.
I've been trying to organize my thoughts on how I feel about consuming AI-generated content. This comment really encapsulates how I feel.
As long as a human put time and effort into making something, then I'm willing to consider putting my effort toward reading/watching. If someone just spends 5 seconds to throw a prompt out there, that's when I get annoyed.
> As long as a human put time and effort into making something, then I'm willing to consider putting my effort toward reading/watching. If someone just spends 5 seconds to throw a prompt out there, that's when I get annoyed.
Why? Do you care more about the origin than the quality?
> Why? Do you care more about the origin than the quality?
Because they are linked. AI content can be generated so frivolously and at such volume that you can easily be overwhelmed by low-quality garbage. Humans can also generate crap, but at a much slower pace, and I think AI is so good at crap generation that it will push out any humans who used to eke out work in that space. So what we are left with is AI content that is mostly low-effort crap, with maybe some rigorously reviewed bits that are good here and there, and human content that will mostly come from people who care enough to make quality work, because otherwise they would already be posting AI schlock.
The end result is that using AI as a proxy indicator for garbage will be right more often than it's wrong. So if I see something is AI generated, I should pass on it and not waste my limited time on it.
It does seem weird for someone to expect others to spend their time fully reading something… when they also have access to the same tools and can just tell it to summarize it back.
That would be true for writing where the author typed a sentence and the LLM expanded it to multiple paragraphs.
That is not what happened here: the author provided a lot more input than the finished article, and used the LLM to help crunch that down to as good a version as possible of the points they wanted to make.
I don't think that's true. What matters to me is the human editorial touch: I don't want to wade through 50 prompts and responses, I want a human author to have resolved that process into a final output that they think is worth sharing with me.
Try reading a manuscript copy of a book before it's been edited. Yes, I know some people do this out of interest, but for most people it's not the type of writing they are interested in reading or would get the most out of.
If you're interested in seeing the process behind this piece of writing you can read through a lot of the details in the 71 commits that went into creating the story in the PR: https://github.com/steipete/steipete.me/pull/106/commits
If using LLMs for writing, you should provide your prompts up front so we can see your actual thinking, and then ignore the rest of the content. Or better yet, synthesize those prompts in a writing style that we like more!