
I can't help but think that Steam Machine/SteamOS/Linux gaming in general is severely bottlenecked by anti-cheat. Nearly all serious multiplayer games require Windows-specific anti-cheat.

Maybe there's a critical mass of Linux users that will force AC support. Maybe new cheating paradigms (DMA) will make local AC obsolete. I suppose one of those could happen in the next 10 years.


Arc Raiders on Linux is fully supported and a lot of fun. Lots of people have Steam Decks, and lots of people will have a Steam Machine. There will be multiplayer FPSes on Linux. The larger studios might not support it, but many more will.


I don't know the gullibility of the average tech CEO but this doesn't strike me as a very convincing phishing attempt.

* "We've received reports about the latest content" - weird copy

* "which doesn't meet X Terms of Service" - bad grammar lol

* "Important:Simply ..." - no spacing lol

* "Simply removing the content from your page doesn't help your case" - weird tone

* "We've opened a support portal for you " - weird copy

There are so many red flags here if you're a native English speaker.

There are some UX red flags as well, but I admit those are much less noticeable.

* Weird and inconsistent font size/weight

* Massive border radius on the twitter card image (lol)

* Gap sizes are weird/small

* Weird CTA


I think you'll be led astray thinking this is CEO-specific.

The whole theory of phishing, and especially targeted phishing, is to present a scenario that tricks the user into ignoring the red flags. Usually, this is an urgent call to action that something negative will happen, coupled with a tie-in to something that seems legit. In this case, it was referencing a real post that the company had made.

A parallel example is when parents get phone calls saying "hey it's your kid, I took a surprise trip to a tiny island nation and I've been kidnapped, I need you to wire $1000 immediately or they're going to kill me". That interaction is full of red flags, but the psychological hit is massive and people pay out all the time.


I razz CEOs in jest, but my point is: This is an example of a good phishing attempt? ChatGPT could surely find and fix most of the red flags I called out. Perhaps the red flags ensure they don't phish more people than they can productively exploit.


There are certainly phishing attempts that are pixel perfect, but I'd say way more energy tends to go into making phishing websites perfect. The goal of the email is to flip people into action as quickly as possible with as little validation.


> We've been on the hunt for this AI host since opting into the test several hours ago, but the robot has yet to appear.

An entire article about a beta feature they haven’t even seen? I normally wouldn’t read Ars, but I’m on a flight with nothing else to do and still feel swindled.


I went straight to HN for commentary because I know exactly what is happening on Reddit and for the first time can't bear to look.


Two of the default front page posts were from the conservative sub, complaining about all of the insensitive comments on Reddit.


And yet when the two Minnesota politicians were assassinated, that subreddit was full of its own blend of insensitive comments. Complete drivel all around.


Indeed, on Hacker News all posts about the two assassinated Minnesota politicians were immediately flagged and buried. It's clear where biases lie, both here and on Reddit.


Oh wow - I wanted to build a game just like this not too long ago but never found the time. Wishlisted!


The "critique" is nuts. Surely AI generated. If I didn't trust the domain, I'd assume the author to be incredible for seriously referencing something like this.

Look at the critique [0] and then look at the code [1].

[0] https://web.archive.org/web/20250423135719/https://github.co...

[1] https://github.com/ricci/async-ip-rotator/blob/master/src/as...


Yea, clearly AI, with the keyword bolding, numbered arguments, and so on. Feels like lots of AI-produced content follows this structured response pattern.


It uses a simple, purpose-focused template of a type that is commonly recommended for clear communication, with outline numbering, and it highlights keywords in monospaced text, as is common practice in technical writing. None of that is unusual for a human to do, especially when writing something they know is going to be high visibility.

Modestly competent presentation is now getting portrayed as an "AI tell".


The format doesn’t itself indicate AI, but when combined with the fact that the critique is mostly nonsense it does appear to strongly suggest it.


It has excellent presentation, excess verbosity, and is wholly nonsensical. Read the code. It uses generous whitespace, doing things like function calls/declarations with one parameter per line, so it's probably like 100 lines of "real" code of mostly tight functions -- the presentation/objections make no sense whatsoever.

I was able to generate extremely comparable output from ChatGPT by telling it to create a hyper-negative review, engage in endless hyperbole, and focus on danger, threats, and the obvious inexperience of the person who wrote it. Such is the nature of LLMs that it'd happily produce a similar sort of nonsense for even the cleanest and tightest code ever written. I'll just quote its conclusion, because LLM verbosity is... verbose.

---

Conclusion

This code is a ticking time bomb of security vulnerabilities, AWS billing horrors, concurrency demons, and maintenance black holes. It would fail any professional code review:

Security: Fails OWASP Top 10, opens SSRF, IP spoofing, credential leakage

Reliability: Race conditions, silent failures, unbounded threading

Maintainability: Spaghetti architecture, no documentation, magic literals

Recommendation: Reject outright. Demolish and rewrite from scratch with proper layering, input validation, secure defaults, IAM roles, structured logging, and robust error handling.

---

Oooo sick burn. /eyeroll


> I was able to generate extremely comparable output from ChatGPT by telling it

Just to check, you know that ChatGPT is fully built on human writing right?

Wouldn't it be ironic to claim "what you write looks like what the tool can output, so you used the tool" when the tool was built to output stuff that looks like what you write?

Fun fact: anything you or I write looks like ChatGPT too. It would be surprising if it didn't, given that people spent billions and stole truckloads of scraped, unlicensed content (including content created by you and me) to get the tool to do literally just this.


I’m not arguing that it’s unusual for humans to write in this manner, but when you use something like ChatGPT with some frequency and see that as a common response template, it’s an obvious pattern.


People say emdashes are a signal that something's from chatgpt also — yet people forget that the cliches or patterns of LLMs are learned from real-world patterns. What is common in something like ChatGPT has a good chance to also be common outside of it, and _lots_ of false positives (and false negatives) are bound to creep up frequently when trying to do any sort of pattern-based "detection" here.


I’d never encountered em dashes in emails from my colleagues before ChatGPT was available, and it’s obvious now that where there are em dashes, the content is at least in part AI generated. Same with semicolons. Yes, proper grammar and syntax use semicolons, but in most casual business communication those rules are relaxed for simplicity.


Yes, emdashes are inserted automatically by iOS when a user inputs a double dash: —


>Modestly competent presentation is now getting portrayed as an "AI tell".

This. Someone on a Reddit gamedev sub the other day was showing how his game got review-bombed because his own store description used polished copy and bulleted lists. It seems like anytime a bulleted list is used now, people assume it's because of AI.


I'm relatively confident this critique is AI-powered. The dead giveaways:

1. Verbosity. Developers are busy people, and security-researcher devs even more so. Someone that skilled wouldn't spend more than 2-3 sentences' worth of time critiquing this repo.

2. Hostility. Writing bug free code is hard, even impossible for most. Unless your name is Linus Torvalds, Richard Hipp, or maybe Dan Abramov, most devs are not comfortable throwing stones while knowing they live in glass houses.

3. Ownership. "Killshot" comments like this are only ever written by frustrated gatekeepers against weak PRs that would hurt "their baby". Nobody would get emotionally invested in other people's random utility projects. This is just a single python file here without much other context.

4. Author. The author is still an aspiring developer. See their starred repo highlighting adherence to SOLID/DRY principles as a primary feature of their project. Not something you'd expect to see from a seasoned security researcher. https://github.com/SSD1805/EchoFlow

5. Content. The critique is... wrong. It says the single file, utility repo is "awful" for being a "less maintainable" monolith. Hilariously, it calls the code bad because it does not need dependency injection. This was a top critique in the comment!

--

Regardless of political persuasion, I hope this trend of using AI to cyberbully people you don't like goes away.


I hope this trend of DOGE using the US Government to cyberbully people they don't like goes away.


Once you've read enough ChatGPT slop, you know it when you see it:

- Massive verbosity.

- Flawless spelling and grammar.

- Grandiose tone.

- Robotic cadence where every paragraph and sentence has similar length (particularly obvious in longer text.)

- Em dashes everywhere.

- The same few stock phrases or sentence structures used over and over - e.g. "This isn't X—it's Y", which that issue uses twice in two paragraphs:

    There is nothing "hardcore" about writing fragile, insecure, and unscalable code. This isn’t pushing boundaries—it’s demonstrating a lack of engineering fundamentals.

    If this is what was learned at previous jobs, then it’s time to unlearn it and start following best practices. Because right now, this is not just bad engineering—it’s reckless.

If AI didn't write that snippet then I'll permanently retire from internet commenting.

(None of what I just wrote is intended as a defence of DOGE.)


These are all good points, and I agree. I've noticed the em dash a lot. One addition is the overuse of adjectives like "robust", or words that came out of the third option in a thesaurus.

As someone who has been using regular dashes and words like "robust" for years, I've had to purposefully dumb down things like my resume/CV and internet comments. Like many of us here, I'm coming from a generation that actually had to write 100% of the research paper instead of an AI generating it for me. So I always took great care to aim for something close to perfection in writing.


Point 2 makes me think you haven't read what developers write on the internet, in particular in flame wars, in particular when they have beef with whoever they're arguing with.

Verbose hostility of that kind, throwing stones, even nitpicking with exaggerated outrage: these are the norm, not the exception. And lack of experience never stopped people from feeling and behaving like a god-given gift to the programming profession.


Apropos of number 2: I think this humility is only a feature of seasoned developers who have managed to outgrow their own high opinions of themselves. I've met plenty of younger devs who would totally write something like this, taking down the work of someone whose style did not align exactly with what they considered "good".


I agree on all counts. The readme of the repo you link also smacks of an AI generated summary of the codebase. (Frankly, I don’t think the AI was able to understand what the code in that repo does, which is my guess as to why it talked much about form rather than function.)


> Developers are busy people and security researcher devs are busy even moreso.

Neither the critique, the critiquer's profile, nor even the Krebs article says that the critiquer is a security researcher, and it definitely isn't the case that all devs are particularly "busy people". You yourself argue later, in fact, that the signs point to the author not being an experienced dev or security researcher, so it is nonsense (even beyond assuming an average rules out an exception in the group) to argue that the critique is AI-written based on the assumption that a security researcher would normally be too busy to write it.

> Hostility. Writing bug free code is hard, even impossible for most. Unless your name is Linus Torvalds, Richard Hipp, or maybe Dan Abramov, most devs are not comfortable throwing stones while knowing they live in glass houses.

If you've been online more than about 5 minutes, you know that there is no shortage of hostility, and that even if it isn't most of any given community, it's a highly visible subset of any community online.

> "Killshot" comments like this are only ever written by frustrated gatekeepers against weak PRs that would hurt "their baby". Nobody would get emotionally invested in other people's random utility projects.

The only reason we are talking about this on HN is that this isn't some random "other people's random utility project". The critique was posted while the author of the code being critiqued was a high profile figure in current news stories, and the critiquer posted a more explicitly political followup the day after the original critique addressing the author's highly-publicized resignation due to the news coverage.

> The author is still an aspiring developer. See their starred repo highlighting adherence to SOLID/DRY principles as a primary feature of their project.

That...doesn't support the critique being AI. In fact, it undercuts it, because it provides a simpler explanation than AI for your next bullet point, that the critique is wrong (the SOLID/DRY focus combined with the "aspiring dev" status you describe is particularly consistent with the specific things you say the critique got wrong). It also undercuts your first bullet point, as already discussed, which hinges on the assumption that the critique was written by a very busy, experienced security researcher, and not an aspiring dev.

I mean, if excess verbosity, a more regularized format than is typical for the venue, and being wrong together are hallmarks of an AI written critique, then I'd say your post is at least as much AI-suspicious as the critique under discussion.


Lol that's so funny. Can't imagine writing that. (the critique, not the code).


"Where are the examples" is a straw man. Imagine the ways a political enemy might exploit limitless access to the attention of 140M Americans. The calculus seems to be that a false negative will be much more catastrophic than a false positive.


I understand what you're saying, but I don't think that argument should apply here. Having some kind of evidence to back up a drastic action like this is not something that should have to be argued for; it should be a given. I've asked at least 5 different times for people to point to anything material, and no one has come up with anything. I'm not saying there is no threat. I could be wrong and there could be a massive threat, but if there is one, shouldn't we be able to point to something more than "it could happen" and being paranoid about it? I'm being asked to have faith in institutions/politicians that have a long, long, long proven track record of not having my best interests at heart, and I can't accept that when they have clearly conflicting interests/motives.


> Just trying to shoehorn alexa into as many domains as possible

It happened outside of Alexa too. Every team with a public facing product was directed (it seemed) to come up with some sort of Alexa integration. It was usually dreamed up and either a) never prioritized or b) half assed because nobody (devs, PMs, etc.) actually thought it made any sense.


Someone should honestly script this. Assuming this is not already that


Not a script but if you're reading on phone with the Harmonic app, there's a "View on archive.org" button for every post. It works pretty well for me.


Just treat archive.ph as a second-level browser.

If a URL doesn't work in the regular browser, copy it into that.

Maybe add that as a feature request for Brave.
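For what it's worth, the "copy the URL into archive.ph" step is trivially scriptable. A minimal sketch (assuming archive.today's `/newest/<url>` path, which serves the latest snapshot of a page if one exists):

```python
import sys
import webbrowser

def archive_url(url: str) -> str:
    # archive.ph exposes the most recent snapshot of a page under /newest/
    return "https://archive.ph/newest/" + url

# Open the archived copy of a URL passed on the command line, if any.
if len(sys.argv) > 1:
    webbrowser.open(archive_url(sys.argv[1]))
```

Saved as a small CLI tool (or bound to a hotkey), this covers the "second-level browser" workflow without the manual copy-paste.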


I just checked the feature requests for the iOS client I’m using, and this has been requested [1] …three years ago.

[1] https://github.com/dangwu/Octal/issues/228


Brave Search has been so terrible for me. I’ve very quickly been conditioned to append “!g” to all omnibar searches, even in non-Brave browsers! (This tells Brave to use Google.)

