48terry's comments

With the current atmosphere around technology, I feel like "digitize air traffic control" is an idea that will be both executed terribly by the money-grubbing lunatics in control of the government and tech corporations, AND received poorly by the public.

> Or can they stay on at Ars if for example, it was explained as an unintentional mistake while using an LLM to restructure his own words and it accidentally inserted the quotes and slipped through.

No. Don't give people free passes because of LLMs. Be responsible for your work.

They submitted an article with absolute lies and now the company has a reputational problem on its hands. No one cares whether that happened because they set out to publish lies or because they made a tee-hee whoopsie-doodle with an LLM. They screwed up; look at the consequences they've caused for the company.

> I think for some people this could be a redeemable mistake at their job. If someone turns in a status report with a hallucination, that’s not good clearly but the damage might be a one off / teaching moment.

Why would you keep someone around who:

1. Lies

2. Doesn't seem to care enough to do their work personally, and

3. Doesn't check their work for the above-mentioned lies?

They have proven, right then, right there, that you can't trust their output because they cut corners and don't verify it.


This entire article is the dude jerking himself off about how smart he is, with amazing anecdotes on the level of a third-grade spelling bee.


> I'd say otherwise - it's a reach out to have a relationship.

They HAD a relationship: it was 20 years of volunteer work. That relationship was broken by Mozilla's actions.


Question: What makes LLMs well-suited for the task of poker compared to other approaches?


They are not, and that's the whole point of doing this research. If we can build a good benchmark, model developers would have a clear goal.


I think the post was just really bad, myself.


I have a better idea: random.randint(1,10)
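For the avoidance of doubt, that one-liner is just Python's standard-library PRNG; a minimal sketch of what it does (the seed call here is my own addition, only for reproducibility):

```python
import random

# Optional: seed the generator so runs are reproducible.
random.seed(42)

# Draw a uniform integer from 1 to 10, both endpoints inclusive.
value = random.randint(1, 10)
print(value)
```

Note that `randint` is inclusive on both ends, unlike `range()`, which excludes its upper bound.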


That requires tool use or some similar specific action at inference time.

The technique I suggested would, I think, work on existing model inference methods. The ability already exists in the architecture. It's just a training adjustment to produce the parameters required to do so.


You absolutely need to get an accessibility professional (one you pay, not free labor you try to crowdsource) to review your site. Your site excludes disabled people from participating.


Hi, OP here. Thank you for the feedback. Which specific issues jump out at you?


That's overstating the problem, but accessibility is very important.


That was the joke.


Thanks <3


Yeah, like, there's straight-up photos of penises and vaginas and naked people of all sorts on Wikipedia. It absolutely should be considered a NSFW app.

