
> 1.7 Before delving into code...

Did the authors use an LLM to write or improve the text? I have no problem with that, but I'd like to know how much of the work is LLM-based before reading.



The proclivity to suggest something is LLM generated when it isn't is such a fun one. Almost like a Rorschach test for literary exposure.

The answer in this context is no (you might not have been exposed to enough fiction).


Why does it matter? My English is poor, so when I write long articles or posts, I ask GPT to fix errors. I do this because I respect my readers and don't want their eyes to bleed from reading my text.


AI-generated text doesn't just make my eyes bleed; it makes my blood boil. I haven't read much of your English specifically, so I can't say for sure, but generally non-native speakers get a ton of leeway in my book. I do not speak your language anywhere near as well as you speak mine, and your words will not make me feel frustrated even if I occasionally have to pause to figure out the intended meaning.

(Also, IMHO, your comment history is perfectly readable without being distracting.)


Why would "Before delving into code..." be a red flag that marks the text as LLM-generated?


Someone said that the word "delve" is a favourite of AI and a sign that something was AI-written.


I don't usually suspect AI unless I see in a closing paragraph "However, it is important to note..."


Really... It's also one of non-native speakers' favorite words.


All I can tell you is that it was already written this way in 2021: https://github.com/sysprog21/lkmpg/blob/2246e208093876de4c3b...


That LLMs like to use "delve" doesn't mean every use of "delve" implies an LLM.


I wouldn't think it matters as long as the [human] authors review it for accuracy.


Perfectly valid synonym for 'dive' in this context.



