Domain-Specific Language. It's when you invent a tiny programming language with nouns and verbs appropriate for some niche. Like, say, for wedding planning you wouldn't use JSON or YAML, but some custom format that lets users define people, who has to sit where, and who can't sit together, without being professional programmers.
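As a sketch, a seating-plan DSL for that niche might look like the hypothetical format below (the grammar, keywords, and names are all made up for illustration), with a few lines of Python to parse it:

```python
# A made-up seating DSL: "person", "table", "seat ... at ...", and "apart"
# are hypothetical keywords, not any real format.
DSL = """\
person Alice
person Bob
person Carol
table head seats 8
seat Alice at head
apart Bob Carol
"""

def parse(src):
    """Parse the toy DSL into plain Python structures."""
    people, tables, seats, apart = set(), {}, {}, []
    for line in src.splitlines():
        words = line.split()
        if not words:
            continue
        if words[0] == "person":
            people.add(words[1])
        elif words[0] == "table":        # table NAME seats N
            tables[words[1]] = int(words[3])
        elif words[0] == "seat":         # seat PERSON at TABLE
            seats[words[1]] = words[3]
        elif words[0] == "apart":        # these two must not share a table
            apart.append((words[1], words[2]))
    return people, tables, seats, apart
```

The point of the DSL: the planner never touches JSON; they write `apart Bob Carol` and the tool enforces it.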
My toilet doesn't officially support crapping without an internet connection either. I'd argue that in both cases it's implicit unless very explicitly disclaimed.
I mean, as a snarky hyperbole about how ridiculous consumer products have become, sure. In reality, I would be very surprised if Oral B decided I needed Internet access to use my toothbrush.
1) I only see static until I fiddle with the mute button, which, besides muting as expected, also makes the image work. As soon as I change the channel, it's all static again until I hit mute. I'm on Chrome on Windows, using a corporate network.
2) The info button shows a reasonable email address, and, under "Support", the string "bc1q4s2f6df2cqa8stenwp8y5tlmd5pywy8dwqqxvh". I have no idea what to do with that string.
Startup idea: online text editor that logs every keystroke and blockchains a hash of all logs every day. If you're accused of AI use, you can pull up the whole painstaking writing process and prove it's real.
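The "blockchains a hash of all logs every day" part is just a hash chain; a minimal sketch (the function name and log format are invented for illustration):

```python
import hashlib
import json

GENESIS = "0" * 64  # arbitrary starting digest

def chain_day(prev_digest, day_log):
    """Fold one day's keystroke log into the running hash chain."""
    payload = prev_digest.encode() + json.dumps(day_log, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

day1 = [{"t": 0.00, "key": "H"}, {"t": 0.18, "key": "i"}]
d1 = chain_day(GENESIS, day1)
d2 = chain_day(d1, [{"t": 0.05, "key": "!"}])
# Publishing only the latest digest commits to the whole history:
# altering day 1 changes d1, which changes d2, and so on.
```

You'd only need to post the newest digest somewhere public each day; the raw logs stay private until you're accused.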
Startup idea: keep humans in liquid-filled pods, connecting sensors to their central nervous system, and record every nerve impulse they generate. This way we can be 100% sure that those nerve impulses were generated by humans, and not an AI.
If you are accused of using AI, is proving you didn't really a defense? It changes the trespass from making something using AI to making something that looks like AI was used, but given the extent to which some subcultures are against the use of AI, merely appearing to have used it, even with proof you didn't, isn't going to be accepted.
So much of the discussion focuses on the creators of works, but what about the changes in consumers, who seem to be splitting between those who don't mind AI and those who oppose anything involving AI (including merely looking like AI)? Are there enough consumers in the group that opposes AI but is okay with AI-looking content as long as it is proven not to be AI?
"AI-looking content" would be decided on an individual-by-individual basis, with some percentage using AI detection software in their decision-making process, with that software being varying degrees of snake oil.
The rest is silly, because you can emulate the whole writing process by combining backtracking https://arxiv.org/abs/2306.05426 and a rewriting/rewording loop.
With not much effort we can make LLM output look incredibly painstaking.
I doubt that this is a problem in need of a technical solution.
In any case, this system can easily be circumvented by emulating the key presses on that website.
Stupid startup killing idea: an open-source script that runs LLM in the background and streams its output as input events, so the idiotic keylogger thinks it's all written by hand.
Just writing this down here instantly invalidates the premise.
An overkill variant to rub salt in the wounds of duped investors: make the script control a finger bot on an X/Y harness, so it literally presses the physical keys of a physical keyboard according to LLM output.
Bonus points for making a Kickstarter out of it and getting some YouTubers to talk about it (even as a joke) - then sitting back to watch as some factories in China go brrrr, and dropshippers flood the market with your "solution" before your fundraising campaign even ends.
>An overkill variant to rub salt in the wounds of duped investors: make the script control a finger bot on an X/Y harness, so it literally presses the physical keys of a physical keyboard according to LLM output.
That's how the first automated trading firms operated in the '80s. NASDAQ required all trades to be input via physical terminals, so they built an upside-down "keyboard" with linear actuators in place of the keys, which would then be placed on top of the terminal keyboard and could input trades automatically.
It's often enough the case. Our own industry has plenty of examples of things that are a net win when they exist in small quantities, or are available to a small group of people, but rapidly become a net tragedy when scaled up and made available to everyone. I keep pondering whether the ethically correct choice always needs to be either everyone having something, or no one at all.
> make the script control a finger bot on an X/Y harness,
Too many points of mechanical failure. Just use a RPi Pico W (or other USB HID capable microcontroller) to emulate a keyboard and have it stream key codes at a human pace. Make it wifi or bluetooth enabled to stream key codes from another computer and no trace of an LLM would ever be on the target system.
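The "human pace" part of such a bridge could be sketched as below. The log-normal gap model and all names are assumptions for illustration; the hardware side (sleeping for each delay and emitting a USB HID report per character) is left out:

```python
import math
import random

def human_paced_stream(text, mean_gap=0.15, jitter=0.6, seed=None):
    """Yield (char, delay_seconds) pairs with log-normally distributed gaps,
    roughly mimicking human typing cadence. A real bridge would sleep for
    each delay and send one HID key report per character; here we only
    produce the schedule."""
    rng = random.Random(seed)
    for ch in text:
        yield ch, rng.lognormvariate(math.log(mean_gap), jitter)

plan = list(human_paced_stream("hello world", seed=1))
```

A log-normal distribution is a common rough stand-in for human inter-keystroke timing: mostly short gaps with an occasional long pause.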
> online text editor that logs every keystroke and blockchains a hash of all logs
Do you really think it would help? The kind of people who believe an "AI detector" works will just ignore your complicated attempts to prove otherwise; it's the word of your complex system (which requires manual analysis) against the word of the "AI detector" (a simple system in which you just have to press a button and it says "guilty" or "not guilty").
The more complicated you make your system (and adding a blockchain makes it even more complicated!), and the more it needs human judgment (someone has to review the keystrokes, to make sure it's not the writer manually retyping the output of a LLM), the less it will be believed.
That's a dehumanizing system. Have we lost our way, HN? Are we so immersed in the bleakness of tech that it comes naturally to us to propose "hey, let's create surveillance machines to perpetually watch people working, for the rest of their productive lives" without even pausing to think about it?
Let's not build Hell on Earth for whatever reason it momentarily seems to make business sense.
We could make a killing selling companies the software and then again charging “privacy fees” to users. We have a moral duty to our shareholders to do this as soon as possible.
If you feel compelled to surveil yourself so as not to be arbitrarily fired by an algorithm, I do consider that dystopian, yes. You're not "in control" of data you're expected to turn over to your employer to keep your job. Worse still if these keyloggers become normalized: they'll shift from being "optional" to "professionally expected" to "mandated".
This (IMHO) is an example of an attempt at a technical solution for a purely social problem—the problem that employers are permitted to make arbitrary firing decisions on the basis of an opaque algorithm that makes untraceable errors. Technical solutions are not the answer to this. There should be legally-mandated presumptions in favor of the worker—presumptions in the direction of innocence, privacy, and dignity.
This stuff's already illegal on several levels, in some of the more pro-worker countries. It's illegal to make hiring/firing decisions solely on the basis of an algorithm output (EU-wide, IIRC?). And in several EU countries it's illegal to have surveillance cameras pointed at workers without an exceptional reason—and it's not something a worker can consent/opt-in to, it's an unwaivable right. I believe—well, I hope—the same laws extend to software surveillance like keyloggers.
Surveillance is something you do to someone else. If it's yourself, you're just keeping records. It's common for proving the validity of something to involve the records of its creation. Is registering for copyright surveillance?
> data you're expected to turn over to your employer
If you got paid to make something, that would be your employer's data anyway.
> Worse still if these keyloggers become normalized, and they'll shift from being "optional" to "professionally expected" to "mandated"
You think a brainstorm about using a blockchain in a Hacker News comment is going to suddenly become "mandated"?
> And in several EU countries it's illegal to have surveillance cameras pointed at workers without an exceptional reason
They described logging their own keystrokes and encrypting them to have control over them. It isn't a camera and it isn't controlled by someone else. Also they said in an editor, so it isn't every keystroke, it would only be the keystrokes from programming.
Crap. Most of the substitutes have a bigger ecological footprint, even if you don't care at all about cost and comfort. Those reusable shopping bags need to be actually reused, several dozen if not hundreds of times, to be ecological. How many times have you used your last one?
> Those reusable shopping bags [...] How many times have you used your last one?
This reply blew my mind ... when you realize someone must have the exact opposite experience ...
Of course I use the reusable bags. They're in the trunk, and the first step after pulling into a parking spot is to take them out of the trunk and head into the store. In the last 5 years I've used my reusable shopping bags every time, except in a few cases when I forgot to put them back in the trunk for some reason (usually because I needed to transport bigger things).
I've used reusable bags for many years as well, and I wash them every so often. I, of course, wash all the produce before using it anyway, so a little bit of dirtiness on the bag is negligible.
> Most of the substitutes have a bigger ecological footprint
Only if you insist on single-use things, in many cases. A reasonable substitute for a single-use plastic bag isn't a single-use paper bag: It's a multi-use bag.
> even if you don't care at all about cost and comfort.
I care a lot about comfort, which is why a single-use utensil ban can't come soon enough in the US.
This argument is so absurd. I have reused my reusable bags more times than I can possibly count. It's really not hard or unrealistic. How does everyone else seem to have such a hard time with this?
To pile on here. I reuse my reusable bags hundreds if not thousands of times. I purchased a set of high quality bags that stuff into a small bag. They are better in all ways than any other bags I've used. The ergonomics are excellent.