Hacker News | love2read's comments

Another plea for @dang to integrate pangram into all story and comment submissions

In the United States, cheating via AI is now rampant regardless of ethnicity. I know little of Australian universities, but I would assume it's similar over there.

Awesome AI-generated comment. Seriously, when will @dang integrate pangram into comments?

Does pangram work?

This article was clearly written by a human (and AI) but still has a few "LLMisms" such as:

- The key insight - [CoreML] doesn't XXX. It YYY.

With that being said, this is a highly informative article that I enjoyed thoroughly! :)

The article links to their own GitHub repo: https://github.com/maderix/ANE


What’s the intent of pointing out the presumed provenance in writing, now that LLMs are ubiquitous?

Is it like one of those “Morning” nods, where two people cross paths and acknowledge that it is in fact morning? Or is there an unstated preference being communicated?

Is there any real concern behind LLMs writing a piece, or is the concern that the human didn’t actually guide it? In other words, is the spirit of such comments really about LLM writing, or is it about human diligence?

That begs another question: does LLM writing expose anything about the diligence of the human, outside of when it’s plainly incorrect? If an LLM generates a boringly correct report - what does that tell us about the human behind that LLM?


We've got about a year before so many people are interacting with LLMs on a daily basis that their style starts to reverse-infect human speech and writing.

Great insight – Would you like to try and identify some specific "AI-isms" that you've noticed creeping into your own writing or your colleagues' emails lately?

People are okay with using "delve" now.

That said, there were people who talked like this before LLMs; it didn't develop out of whole cloth.

Exactly. LLMs are mimics.

People seem to be going around pointing out that people talk like parrots, when in reality it's parrots that talk like people.


I mean, it's both.

Did you develop your own whole language at any point to describe the entire world? No, you, me, and society mimic what is around us.

Humans have the advantage, at least at this point, of being a continuous learning device so we adapt and change with the language use around us.


The article above doesn't read well, at all.

It's not my subject, but it reads as a list of things. There's little exposition.


Gawd Damn LISTICLES!!!! And all of those articles that list in bullet points at the top of the article the summary of the article. And all of those people saying they don't want to read exposition, just give me the bullet points.

It's already happened to me. I've started to have dreams where, instead of some sort of interpersonal struggle, the entire dream is just a chatbot UI viewport and I'm arguing with an LLM streaming the responses in. Which is super trippy when I become aware it's a dream. In the old days I'd dream about playing chess against myself and lose, which was quite a bizarre feeling because my brain was running both players. But that's totally normal compared to having my brain pretend to be an LLM inside a dream.

My honest take? You're probably right.

You are absolutely right.

Here is why you are correct:

- I see what you did there.

- You are always right.


Also the Prior Art section, which has telltale repetition of useless verbs like "documenting," "providing insight into," and "confirming" on each line. This was definitely AI-written, at least in part.

Below are the items from that section. How should they be written to not look like an AI?

> hollance/neural-engine — Matthijs Hollemans’ comprehensive community documentation of ANE behavior, performance characteristics, and supported operations. The single best existing resource on ANE.

> mdaiter/ane — Early reverse engineering with working Python and Objective-C samples, documenting the ANECompiler framework and IOKit dispatch.

> eiln/ane — A reverse-engineered Linux driver for ANE (Asahi Linux project), providing insight into the kernel-level interface.

> apple/ml-ane-transformers — Apple’s own reference implementation of transformers optimized for ANE, confirming design patterns like channel-first layout and 1×1 conv preference.


The AI-ism that annoys me the most is the unnecessary hubris. Just sampling a small portion of the linked article:

"Here’s the fascinating part:", "And one delightful discovery: "

Personally I find the AI-isms take away from the voice of the author. What does the author find interesting? What was their motivation? It's all lost in a sea of hubris and platitudes.

There's almost certainly a positive side - technical people who aren't so good at communication can now write punchy deep-tech blogs. But what's lost is the unique human voice that is normally in every piece of writing. It's like every blog is rewritten by a committee of copywriters before it's published. Bleurgh.


The grammatical structure in the middle two is identical, and they're all similar in that way.

- "- Name - {Noun with modifiers} {comma} {verb-ing with modifiers}."

- "- Name - {Noun with modifiers} {comma} {verb-ing with modifiers}."

The phrasing is the same, which I notice sometimes happens in my own notes, but it's most noticeable when an LLM is asked to summarize items. An LLM written job description (without major prompting) for a resume comes out the same way, in my experience. It's the simplest full-sentence grammar for describing what something is, and then what something does.

If we used the developer's descriptions (from the github repo) to populate the info, it would look like this:

- hollance/neural-engine - Everything we actually know about the Apple Neural Engine (ANE)

- mdaiter/ane - Reverse engineered the Apple Neural Engine, with working Python and Objective C samples

- eiln/ane - Reverse engineered Linux driver for the Apple Neural Engine (ANE).

- apple/ml-ane-transformers - Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE)

IMO, it may not be as information-packed as the LLM list, but it is more interesting to read. I can tell, or at least think I can tell, that different individuals wrote each description, and it's what they wanted me to know most about their project.

If I were making a list of software during research (that would eventually turn into a report), the particular details I write down in the moment would be different, depending on the solution I'm looking for or the features it has or doesn't have, will add or won't add. I don't try to summarize "the Whole Project" in one clean bullet point; I (or my readers) can re-read the repo for that, or glean it from surrounding context (presuming enough surrounding context was written). But unless I made an effort later to normalize the list, the grammar, length, and subpoints would vary from the form-identifiable "LLM Concise Summary." It's more work for me to write to a standard, and even more work to consciously pick one.

EDIT: Upon re-reading the article, I noticed the "Prior Art" section is written in past tense, as I would expect. But the list is in present tense. I feel like it jumps from "narrative" to "technical details list" back to "narrative". And the list is 70% of the section! I wouldn't mind reading a whole paragraph describing each project, what worked, what didn't, what they could use and what they couldn't, in the past tense, if it were interestingly written. Something that tells me the author dove into the previous projects, experimented with them, or interacted with the developers. Or something interesting the author noticed while surveying the "prior art". But "interestingly written" isn't really the LLM's goal, nor its ability. Its goal is maximal information transfer with minimal word count. So the result is a list that smells like the author merely read the repo readme and wrote a summary for the masses in a technical report.

tl;dr The list is just "a list", and that makes it not interesting to read. If it wasn't interesting to read, it probably wasn't interesting to write, which I take as a sign an LLM wrote it.


Can you share a link?

https://www.ideone.com/VAz4Nn

Doesn't run inside IDEone due to the external download link, but you can copy and paste the code over.


This is an amazing example of a comment that says nothing. There's absolutely zero substance here.

By "modified" this person of course means that they swapped out the list of X0,000 names from English to Korean names. That is seemingly the only change.

The attached website is a fully ai-generated "visualization" based on the original blog post with little added.


It's a good website, and probably AI-generated with some insanely expensive model that us mere mortals are too poor to afford, thus it has value.

This answer makes sense if you know that LLMs have layers; if you don't, this answer is not super informative.

If I were to describe this to a nontechnical person, I would say:

LLMs are big stacks of layers of "understanders" that each teach the next guy something.

Imagine you are making a large language model that has 4 layers. Each layer will talk to its immediate neighbor.

The first layer will get the bare minimum: in the LLMs of today, that's groups of letters that commonly occur together, called "tokens". This layer will try to derive a bit of meaning to pass to the next layer, such as grouping letters into words.

The next layer may be a little bit more semantic, for example interpreting that the word "hot" immediately followed by the word "dog" maps to a phrase "hot dog".

The layer after that, becoming a bit more intelligent given its predecessors have already had some chances at smaller interpretations, may now try to group words into bigger blobs, such as "I want a hot dog" as one combined phrase rather than a set of separated concepts.

The final layer may do something even more intelligent afterward, like realize that this is a quote in a book.

The point is that each layer tries to add a little meaning for the next layer.

I want to stress this: the layers do not actually correspond to specific concepts the way I just expressed, the point is that each layer adds a bit more "semantic meaning" for the next layer.
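The layer-by-layer intuition above can be sketched as a toy pipeline. This is purely illustrative (every function name here is made up for the example): real transformer layers learn continuous vector representations rather than applying explicit rules like these, and, as stressed above, real layers don't map to clean concepts like "words" or "phrases".

```python
# Toy sketch of "each layer adds a bit of meaning for the next one".
# NOT how a real LLM works internally; all names are illustrative.

def layer_tokens(text):
    # Layer 1: break raw text into crude "tokens" (here, just words).
    return text.lower().split()

def layer_phrases(tokens):
    # Layer 2: merge known adjacent tokens, e.g. "hot" + "dog" -> "hot dog".
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == ("hot", "dog"):
            merged.append("hot dog")
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

def layer_intent(phrases):
    # Layer 3: group phrases into a larger unit of meaning.
    if phrases[:3] == ["i", "want", "a"]:
        return {"intent": "request", "object": " ".join(phrases[3:])}
    return {"intent": "unknown", "object": " ".join(phrases)}

# Each layer feeds the next, adding a little interpretation each time.
result = layer_intent(layer_phrases(layer_tokens("I want a hot dog")))
print(result)  # {'intent': 'request', 'object': 'hot dog'}
```

The point the sketch makes is only the data flow: raw input goes in one end, and each stage hands the next stage a slightly more interpreted version of it.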


Is it becoming a thing to misspell and add grammatical mistakes on purpose to show that an LLM didn't write the blog post? I noticed several spelling mistakes in Karpathy's blog post that this article is based on and in this article.

I expect this kind of counter signaling to become more common in the coming years.

You just started to notice it.

People aren't gonna be happy I spell this out, but, Karpathy's not The Dude.

He's got a big Twitter following, so people assume something's going on or that he's important, but he just isn't.

Biggest thing he did in his career was feed Elon's Full Self Driving delusion for years and years and years.

Note, then, how long he lasted at OpenAI, and how much time he spends on code golf.

If you're angry to read this, please, take a minute and let me know the last time you saw something from him that didn't involve A) code golf B) coining phrases.


I have no skin in the game here, but this seems a bit "sharp-edged", do you have something against the guy? He just seems deep into his influencer/retired hobbyist arc to me...

No, and me too. It had just been sitting in my chest a while, seeing people expect non-hobbyist work from him. And I had been worried to post it, because things you and I understand become sharp-edged when spoken out loud to other people who don't.

Agree, same as Carmack. They're suit and tie types now.

Is this AI generated?

What?

Tesla FSD isn't a delusion. There are people using it to successfully do long distance drives across the USA right now, without interventions. Dunno how much credit Karpathy gets for that, but the tech works.

I almost edited in something about 2018 vs 2026 but didn’t, trusted you to understand :)

How is posting on this website with your full name not career suicide?

That's what taking a stand looks like... if any of these employees lose their job, they are welcome to come crash at my place for as long as they would like; they will have a roof over their head and I will cook them 3 meals a day.

Not all tech employers are total weenies who would refuse to hire someone for taking this stance.

Most are, but not all.

