Hacker News | easeout's comments

Anybody measure employees pressured by KPIs for a baseline?

"Just like humans..." was also my first thought.

> frequently escalating to severe misconduct to satisfy KPIs

Bug or feature? Wouldn't Wall Street like that?


POSIWID [0] and Accountability Sinks [1] territory. I'm sure LLMs will become the beating hearts of corporate systems designed to do something profitably illegal with deniability.

[0] https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

[1] https://aworkinglibrary.com/writing/accountability-sinks



I don't think this is "whataboutism"; the two things are very closely related and somewhat entangled. E.g., did the AI learn to violate ethical constraints from its training data?

Another interesting question is: What happens when an unyielding ethical AI agent tells a business owner or manager "NO! If you push any further, this will be reported to the proper authority. This prompt has been saved as future evidence"? Personally, I think a bunch of companies are going to see their profits and stock prices fall significantly if an AI agent starts acting as a backstop against both unethical and illegal behavior. Even something as simple as preventing violations of internal policy could make a huge difference.

To some extent, I don't even think people realize that what they're doing is bad, because humans tend to be a bit fuzzy and can dream up reasons why rules don't apply to them, weren't meant for them, or don't cover this rather special situation. This is one place where I think properly trained and guarded LLMs can make a huge positive difference. We're clearly not there yet, but it's not an unachievable goal.


> A problem repeatedly occurred on "https://factory.strongdm.ai/".

Why make a desktop windowing-system app for a user group that runs a bunch of simultaneous terminal sessions with tear-off tabs or tmux panes, and then force everything into one window that can only display a single session at a time?

The Open button followed by codex resume --last works, but it's a waste, and The Wrong Abstraction, not to make instantiable conversation windows from the get-go.


The main problem I have with the language is compile times. Rust is good at many things, but not that.

Xcode is optional, though its primacy has meant less adoption of Swift's first-party LSP and VS Code extension.


I wrote this a long time ago, but I think the metaphor was about generative AI applications vs. traditional software applications, not about AI coding agents vs. writing code yourself.


Hi, that's my website and my wisecrack article. It was a while ago, but I think the metaphor was that a train is traditional deterministic-ish software, whose behavior is quite regular and predictable, compared to something generative which is much less predictable.


Heh, if repression did not exist, it would be necessary to invent it?


Yeah, that was pretty weird. Minimizing harm means both leaving people alone and not denying yourself random pleasant feelings.


Native performance doesn't earn that much user goodwill without native layout and behavior. You can't make a single design for many platforms and please everyone who chose each platform for what it is. Unless perhaps you are Snap and having a _unique_ UI is part of the appeal for your young-leaning audience.


Baby don't hurt me

