
Honestly, I couldn't care less if an author uses AI, as long as I can understand what I'm reading and it's interesting. They still have to instruct the AI.


If you're reading nonfiction, it means you're wasting time reading a lot more words when you could have just read the prompt.


Remind you of some entire genres of book?

That’s right, business and self-help books!

Any of these by an author who had actual accomplishments and money before writing the book was almost certainly ghostwritten from an outline (and so are lots of other books, you'd be surprised; it's not just these genres). Successful CEOs, or people you've heard of generally, don't write their own books. Often they're terrible writers, and even if they're not, writing is time-consuming, and as with everything else that actually creates something, they prefer to pay someone else to do it.

As of last year, new books in that category are written by AI and edited by one or more humans. With each editor handling just two or three chapters, one of these books can be finished in a month or less.


Well, to err is human; to truly screw up you need a computer.

We're going to be blasted to smithereens with LLM-generated "80% should be good enough" garbage.


It’s fortunate we have mountains of human-written books, film, television, radio programs, music, and video games from Before AI. Just the good stuff could occupy several lifetimes.

Pity we killed most of the good used book stores already, though.

Also, shame about journalism and maybe also democracy. That’s too bad.


In my case, I talk a lot and write a TON. My use for AI is really "can you say the same information with fewer words", and then I tweak what it gives me. To be fair, I'm not a paid writer, just a dev writing emails to business people. I used to rewrite emails like 20 times before sending them. ChatGPT has helped me write them once and have them summarized. I usually keep confidential details out and add them in after if needed.


Indeed you can losslessly "compress" an LLM's spew into just the prompt (plus any other inputs like values of random variables).

But you can also compress a book's entire content into just its ISBN.

It's just that books are hopefully more than statistical mashups of existing content (some books, like textbooks and encyclopaedias, are a kind of mashup, though one hopes the editors apply more than statistically-based critical input!).


You can't regenerate the book from the ISBN. But you can generate the text from the prompt.


You can go and fetch the book from a book store using that information. Fundamentally there's not much difference between that and "fetching" the output from some model using the matching prompt. In both cases there is some kind of static store of latent information that can be accessed unambiguously using a (usually) shorter input.

I'm not saying the value of the returned information is equivalent, of course. But being "just a pointer" into a larger store isn't, in itself, the problem to me.


You realise that you can't fetch a new ISBN without altering the archive, while this is not the case for every new prompt you come up with?


I don't understand the distinction. If the book archive is electronic, like many in fact are, why can you not get a copy of the book with a given ISBN without altering anything? Even if it's not electronic, does the acquisition of a book by an individual meaningfully change the overall disposition of available information? If you took the last one in your local Waterstones, I can still get one elsewhere.


> If the book archive is electronic, like many in fact are, why can you not get a copy of the book with a given ISBN without altering anything?

Because new books are written?

It feels to me that you are set on insisting that a prompt and an ISBN are the same, and no amount of logic will move you from there.


Models can be trained more and fine-tuned, though, if we're going to stick to the analogy. But in the context of the analogy, the LLM won't be materially updated between two prompts, in roughly the way that telling you the answer you seek is in a book with a specific ISBN isn't materially affected by someone publishing a new book at that moment.

You are quite right that you're not convincing me of your original thesis: that a prompt contains the entire content of the reply in a way that some other reference to an entity in some other pool of information doesn't. That's not the same as saying "ISBNs and LLM prompts are the same thing", which is a strawman. It's saying that they're both unambiguous (assuming determinism) pointers to information.

Of course, no one is disagreeing that a reply from a deterministic LLM would add no more information to the global system (you, an LLM's model, a prompt) than just the prompt would. But I still think the same is true for the content of a book not adding to the system of (you, a book store, an ISBN).

In fact, since random numbers don't contain new information if you know the distribution, one can even extend it to non-deterministic LLMs: the reply still adds no information to the system. The analogy would then be that the book store gives you at random a book from the same Dewey code as the ISBN you asked for. Which still doesn't increase the information in the system.


Can you, though? I thought LLMs, just by virtue of how they work, are non-deterministic. Let alone if new data is added to the LLM, further retraining happens, etc.

Is it possible to get the same output, 1:1, from the same prompt, reliably?


They are assuming a lot of things, like that the LLM doesn't change and that you have full control over the randomness. This might be possible if you are running the LLM locally.


Well, if we assume the LLM doesn't change, why not assume the index system doesn't change either?

And yeah, I guess if you control the seed, an LLM would be deterministic.
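A minimal sketch of that point, with a toy sampler standing in for a real model (the vocabulary and `toy_generate` function here are made up for illustration, not any real inference API): once the randomness is derived from a fixed seed, the same prompt yields the same token sequence every time. Real inference stacks additionally need nondeterministic GPU kernels disabled for this to hold exactly.

```python
import hashlib
import random

def toy_generate(prompt: str, seed: int, n_tokens: int = 5) -> list[str]:
    # Stand-in for an LLM: sample tokens from a tiny fixed vocabulary.
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    # Derive the RNG state from (seed, prompt) via hashlib, which is
    # stable across runs, unlike Python's built-in hash() for strings.
    digest = hashlib.sha256(f"{seed}:{prompt}".encode()).digest()
    rng = random.Random(int.from_bytes(digest, "big"))
    return [rng.choice(vocab) for _ in range(n_tokens)]

# Same prompt + same seed -> identical output, run after run.
assert toy_generate("write a haiku", seed=42) == toy_generate("write a haiku", seed=42)
```

With the seed held fixed, the "reply" is a pure function of the prompt, which is the sense in which the prompt alone determines the output.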


Not true if the author of the prompt used an iterative approach. Write the initial prompt, get the result, "simplify this, put more accent on that, make it less formal", get the result, and so on, and edit the final output manually anyway.


Depends on your own level of background knowledge vs. the author's.



