> Byterun is a Python interpreter written in Python. This may strike you as odd, but it's no more odd than writing a C compiler in C.
I'm not so sure. The difference between a self-hosted compiler and a circular interpreter is that the compiler has a binary artifact that you can store.
With an interpreter, you still need some binary to run your interpreter, which will probably be CPython, making the new interpreter redundant. And if you add a language feature to the custom interpreter, and you want to use that feature in the interpreter itself, you need to run the whole chain at runtime: CPython -> Old Interpreter That Understands New Feature -> New Interpreter That Uses New Feature -> Target Program. And the chain only gets longer, with each iteration exponentially slower.
Meanwhile with a self-hosted compiler, each iteration is "cached" in the form of a compiled binary. The chain is only in the history of the binary, not part of the runtime.
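For anyone who hasn't looked inside a project like Byterun: the core of a bytecode interpreter is just a loop dispatching on opcodes. A minimal sketch (the opcodes here are invented for illustration; CPython's real bytecode is much richer, see the standard-library `dis` module):

```python
# Toy stack-machine interpreter in the spirit of Byterun.
# Opcodes are made up for this sketch, not CPython's real ones.

def interpret(code, constants):
    stack = []
    for op, arg in code:
        if op == "LOAD_CONST":   # push a constant onto the stack
            stack.append(constants[arg])
        elif op == "ADD":        # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "RETURN":     # return the top of the stack
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode: {op}")

# Computes 7 + 5
program = [("LOAD_CONST", 0), ("LOAD_CONST", 1), ("ADD", None), ("RETURN", None)]
print(interpret(program, constants=[7, 5]))  # -> 12
```

Running this loop on top of CPython is exactly the "interpreter on an interpreter" layering described above, which is why each added layer multiplies the runtime cost.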
---
Edit since this is now a top comment: I'm not complaining about the project! Interpreters are cool, and this is genuinely useful for learning and experimentation. It's also nice to demystify our tools.
I was never a user of PyPy, but I really appreciated the (successful) effort to cleanly extract from Python a layer of essential primitives upon which the rest of the language's features and sugar could be implemented.
It's about more than just what counts as syntax or a language feature. For example, RPython provides classes, but only very limited multiple inheritance; all the MRO stuff is implemented in RPython for PyPy itself.
This is the case only if the new interpreter does not simply include the layer that the old interpreter has for translating bytecode to native instructions. Once you have that, you can simply bootstrap any new interpreters from previous ones. Even in the case of supporting new architectures, you can still work at the Python level to produce the necessary binary, although the initial build would have to be done on an already supported architecture.
The usual understanding of "interpreter" in a CS context is a program that executes source code directly, without a compilation step. However, the binary that translates an intermediate bytecode to native machine code is at least sometimes called a "bytecode interpreter".
This is still incorrect. A bytecode interpreter, as its name indicates, interprets a bytecode. Typically, compiling a bytecode to native machine code is the work of a JIT compiler.
Yes, that's another great example of the same kind of thing - creating a JIT from an interpreter. It remains true that interpreters do not directly generate machine code.
I've seen computer games where any input device is accepted, and on-screen instructions refer to the last type of device used. Seemed like a good idea. And how does input-based parental control work? Do you hide the adult's controller?
On first use of the controller after a reboot, you're prompted to select which user is playing. Saved games and achievements and whatnot are per-user.
If you've got a child in the household, you're expected to tag their user as such, which imposes some restrictions on their account. Then set up an access code on your user, so the child can't log in as you.
I wish it mentioned how safe this is. Some years ago I got banned for just logging in with a third-party client, without sending any messages. Given how critical WhatsApp is for some people, and how permanent the bans are, that's a big risk.
You should use a separate WhatsApp account for bot purposes.
Recently, I used a separate WhatsApp account to interact with a group chat that I have with my friends. After about a week, they disabled the account, with no way to re-enable it.
Since WhatsApp accounts are bound to phone numbers, getting a new phone number is a significant hurdle in many jurisdictions.
An easier solution is to just not use WhatsApp at all and look for the alternatives for bot purposes. Telegram explicitly encourages bot usage with no risk of bans.
In my case I did, but it's still wasted time and money. And when breaking TOS there's always a chance of getting related accounts also banned, though I don't know if that has already happened with WhatsApp or not.
Happened to me on Telegram with their official app. Just because I opened it once when travelling. I got the account back, after 2 years. That's apparently the period to clean banned accounts. Tried to restore it by all means, just to see if that's possible. It wasn't. Though I didn't have anything important, I actually never sent a message. Now I use it for Openclaw :D
It's very different for Google, the giant faceless corporation, to know someone's search history. Making it _public_ is a different ballgame.
I can't believe I have to say this, but you can't simply delete an important facet of society (the expectation of privacy) and expect things to turn out alright. People will still have hangups around prudish topics and traditions. And privacy has always worked as an escape hatch for people in bad situations, either locally (controlling parents and partners) or society-wide (fascist governments, genocides).
Just because we can imagine a society where this information is public and everything still works, doesn't mean that there's a path from here to there.
YouTube search is one of those services that is pointlessly hostile. Most recently, they've removed the "order by upload date" filter and changed the way blurring works. Previously, sensitive videos had blurred thumbnails and a toggle to remove the blur (though there was no way to disable blurring permanently). Now the UI looks the same, but the "toggle" reloads the page without any filters, and adding a filter re-blurs them. So it's impossible to filter results and see unblurred thumbnails.
These changes baffle me. It's not even enshittification because I cannot see any benefit to YouTube at all.
You'll never win this battle, so why waste feelings and energy on it? That's where the internet is headed. There's no magical human verification technology coming to save us.
I can prove all contributions to stagex are by humans because we all belong to a 25-year-old web of trust with 5444 endorser keys, including most Red Hat, Debian, Ubuntu, and Fedora maintainers, with all of our own maintainer keys in smartcards we tap to sign every review and commit, and we do background checks on every new maintainer.
I am completely serious. We have always had a working proof of human system called Web of Trust and while everyone loves to hate on PGP (in spite of it using modern ECC crypto these days) it is the only widely deployed spec that solves this problem.
You can prove the commits were signed by a key you once verified. It is your trust in those people which allows you to extend that to “no LLM” usage, but that’s reframing the conversation as one of trust, not human / machine. Which is (charitably) GP's point: stop framing this as machine vs human — assume (“accept”) that all text can be produced by machines and go from there: what now? That’s where your proposal is one solution: strict web of trust. It has pros and cons (barrier to entry for legitimate first timers), but it’s a valid proposal.
All that to say “you’re not disagreeing with the person you’re replying to” lol xD
I can prove that code was signed by a key that was verified to belong to a single human body by lots of in-person high reputation humans.
How the code was authored, who cares, but I can prove it had multiple explicit cryptographic human signoffs before merge, and that is what matters in terms of quality control and supply chain attack resistance.
Exactly. So in the words of the comment you replied to: why are we wasting energy on worrying about Claude code impersonating humans? We have that solution you proposed.
That’s what I mean by “you agree with the person to whom you replied”
I suppose you are correct. I am agreeing that if one widely deployed the defense tactics projects like stagex use, then asshats using things like undercover will not be trusted.
Unfortunately outside of classic Linux packaging platforms, useful web of trust and signing is very rare, so I expect things like undercover mode being popular are going to make everything a lot worse before it gets better.
The people who signed my keys trust me to be an honest human actor that chose this as the singular identity they signed for the human body they met in person.
I -could- burn my 16+ years of reputation by letting a bot start signing commits as me, and I could also set my house on fire. I have very strong incentive not to do so as my aggregate trust is very expensive and the humans that signed me would be unlikely to sign a second if I ruined the reputation of my first.
This incentive structure is why web of trust actually works pretty well, and is the best "proof of human" we are likely ever going to have while respecting privacy and anonymity for those that need it.
You can only prove that all contributions are pushed by those humans, and you can quite explicitly/clearly not prove that those humans didn't use any AI prior to pushing.
I absolutely do not care what autocomplete tools someone used. Only that they as humans own and sign what is submitted so it is attached to their very expensive reputations they do not want to lose.
That’s great, and I also don’t care. But I think all people are saying is that by most definitions you cannot “prove all contributions to stagex are by humans”.
Or are you saying you can prove that aliens and cats didn’t make them? Because I’m not sure that’s true either.
And once you find out someone has trained their dog to commit something, how exactly do you revoke your trust?
I think if you answer these questions you’ll see pretty quickly why this solution isn’t the silver bullet you think it is.
It is not a silver bullet by itself, but when combined with the other tactics in stagex I believe it gives us a very strong supply chain attack defense.
I can not prove the tools used, but I can prove multiple humans signed off on code with keys they stake their personal reputations on that I have confirmed they maintain on smartcards.
While nothing involving humans is perfect I feel it is best effort with existing tools and standards and makes us one of the hardest projects to deploy a successful supply chain attack on today.
With 5400+ people I am betting that you have at least one person in your 'web of trust' that no longer deserves that trust.
That's one of the intrinsic problems with webs of trust (and with democracy...), you extend your trust but it does not automatically revoke when the person can no longer be trusted.
Of course! There are always edge cases, but I would suspect the number of bots signed by reputable keys to be near 0%, and the honest human score in this trust graph to be well over 90%.
Compare to how much we should trust any random unsigned key signing commits, or unsigned commits, in which the trust should be 0% unless you have reviewed the code yourself.
The problem is all it really takes is one edge case to successfully break a web of trust to the point that the web of trust becomes a blind spot. Instead of distrusting everybody (which should be the default) the web of trust attempts to create a 'walled garden of trust' and behind that wall everybody can be friendly. That gives a successful attacker a massive advantage.
If we were talking about any linux distribution before stagex, I would agree with you.
Stagex however expects that at least one maintainer may at any time engage in reputation-ending dishonesty, or may simply be threatened or coerced. This is why every single release is signed by a -quorum- of code reviewers and code reproducers that must all build locally and get identical hashes, so no single points of failure exist in our trust graph.
Our last release was signed by four geodistributed maintainers that all attest to having built the entire distribution from 180 bytes of machine code all the way up with the same hashes.
All of their keys being compromised at once strains credulity.
While I appreciate all of the effort you put in this and respect that you trust this to be bulletproof I'm always going to be skeptical of silver bullets.
Your level of certainty is the thing that frightens me more than the confidence I have in the quality of your work.
Do you think it is likely that the majority of the people that spent decades building this trust graph and gaining the trust needed to be release engineers on the packages that power the whole internet are just going to hand off control of that key to a bot?
Anyone doing so would be setting their professional reputations completely on fire, and burning your in-person-built web of trust is a once in a lifetime thing.
Basically, we trust the keys belong to humans and are controlled by humans because to do otherwise would be a violation of the universally understood trust contract and would thus be reputational bankruptcy that would take years to overcome, if ever.
Even so, we assume at least one maintainer is dishonest at all times, which is why every action needs signatures from two or more maintainers.
It also makes no sense! "Fuck this, it doesn't matter - but I'll happily spend effort communicating that to others, because apparently making others not care about something I don't care about is something I do care about." Wut?!
Well, I say it makes no sense. Alternatively, it makes a lot of sense, and these people actually just wanna destroy everything we hold dear :-(
Then do something about it. Vote for better politicians. Donate money to causes that you think are important. If you think you can do it better, and this isn't meant to be facetious, run for political office.
Being fatalistic can be a great excuse not to do anything.
I cannot. I can only vote for better politicians if they exist. That is without even going into the minefield of what "better" means. My implication is that I have no confidence whatsoever in any current politician in my state.
> Donate money to causes that you think are important.
I have no money.
> If you think you can do it better, and this isn't meant to be facetious, run for political office.
I have no money, no visibility and no connections. Even if I was magically given tons of money, I would still need a strong network to attempt any real change, even without taking into consideration the strong networks already in place preventing it.
Telling random citizens "run for office" is facetious, whether you mean it or not.
> Telling random citizens "run for office" is facetious, whether you mean it or not.
Hard disagree. At least where I live, "random citizens" run for local office and succeed all the time.
Also, complaining that you "have no network" is a you problem, not a system problem. I'm truly sorry if you feel you have no friends, but you'll be better off at least trying to get some (independent of politics). And if that's something you've tried and failed at before, I do feel pity. But I don't think hope is lost for anyone. And even if it were lost, please don't actively spread the misery!
You are kind of proving my point. You are actively justifying doing literally nothing about what bothers you and acting indignant and self righteous about it.
I didn’t mean that you should strive to do as little as possible; rather that if you have 2 choices, do more or do less - then I would be biased towards doing less.
Of course not always a realistic option
You can probably find a million situations where doing less is terrible.
I think the first step would be to define for yourself what doing less actually means - it could mean taking a walk instead of chasing dopamine -> doing less but moving more.
But whatever it’s a philosophical question and there aren’t any right or true answers
It's fun to pet the cat. It's not fun to rage against an unstoppable force. Well, maybe it is for some people. But I find people often underestimate the detrimental effects.
Yes, of course. Do you prefer to die? Those are the only two alternatives, and a decision that you don't want one is a decision that you prefer the other.
No, there is no alternative. Everything eventually dies, so you better make peace with it. The only people who believe that they won't die are religious people who believe in an afterlife (which is a preposterous position) and the people who have their heads or whole bodies frozen because they think they are so special that the future will honor their contracts and revive them.
Both of these are bound to lead to the exact same outcome, so it doesn't really matter what you believe, but accepting reality absent proof to the contrary may guide you to wiser decisions while you are alive.
I'm sorry to hear that you don't want to exist in the future. I do. I have thought about it extensively, and there is literally no scenario in which I consider not-existing better than existing.
There is an essentially infinite amount of creativity and interesting complexity available in the richness of interactions with other people and the things people create. What, exactly, are you "horrified" about?
I guess I could just curl up into fetal position and watch the world go by. But that's no fun. Why not dream big and shoot for the moon with kooky goals like, say, having an underground, community-supported internet where things are falling less to shit?
Belief in inevitability is a choice (except for maybe dying, I guess).
Why stop at one? Make more such underground community supported internets. The more the merrier. Monoculture ends with death. The only question is how long it will take.
IDK. I sort of like the idea that now instead of dead internet theory being a joke, that it’ll be a well known fact that a minority of people are not real and there is no point in engaging… I look forward to Social 3… where people have to meet face to face.
Even if it is impossible to win, I am still feeling bad about it.
And at this point it is more about how large a space will be usable and how much will be bot-controlled wasteland. I'd prefer the spaces important to me to survive.
You said "can’t change". I observed that deciding you can't change something is self-fulfilling. Your argument from that point still relied on the assumption that you can't change it.
Before you decide not to care about something, you are supposed to make a deep assessment to see whether you can change it. It is only after you’ve determined that the thing can’t be changed that you can choose not to care about it.
Funny story: when I was younger, I trained a basic text-predictor deep learning model on all my conversations in a group chat I was in. It was surprisingly good at sounding like me, and sometimes I'd use it to generate text to submit to the chat.
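The simplest version of that idea needs no deep learning at all: a word-level Markov chain (a much cruder model than the one described above, and the `chat_log` string here is a stand-in for real exported logs) already produces passable chat-flavored text:

```python
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=0):
    """Random-walk the model; repeated followers get picked more often."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)

# Placeholder corpus; a real version would train on exported chat history.
chat_log = "lol that is so true lol that game was so good"
model = train(chat_log)
print(generate(model, "lol"))
```

The deep-learning version captures longer-range style much better, but even this sketch shows why short chat messages are so easy to imitate.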
I don't see what the value of this would be. Why would I want to automate talking to my friends? If I'm not interested in talking with them, I could simply not do it. It also carries the risk of not actually knowing what was talked about or said, which could come up in real life and lead to issues. If a "friend" started using a bot to talk to me, they would no longer be considered a friend. That would be the end.
I think you underestimate how many people already run their opinions and responses through LLMs, even if the LLM is not writing them wholesale. Intelligence is part of the social game, so appearing to have it matters to people. Friend groups are just social groups of a certain kind, they're not really removed from all this.
It was for fun, to see if it was possible and whether others could detect they were talking to a bot or not - you know, the hacker ethos and all. It's not meant to be taken seriously, although it looks like these days people unironically have LLM "friends."
I think you're underestimating the difficulty, even for exact copies of text (which AI mostly isn't doing).
What sort of Orwellian anti-cheat system would prevent copy and paste from working? What sort of law would mandate that? There are elaborate systems preventing people from copying video but they still have an analog hole.
Human verification technology absolutely exists. Give it some time and people who sell ai today are going to shoehorn it everywhere as the solution to the problem they are busy creating now.
Nothing like throwing in the towel before a battle is ever fought. Let's just sigh and wearily march on to our world of AI slop and ever higher bug counts and latency delays while we wait for the five different phone homes and compilations through a billion different LLM's for every silly command.
I am actively building non-magical human verification technology that doesn't require you uploading your retinal scans or ID to billionaires or incompetent outsourcing firms.
Not parent poster but I am a maintainer of software powering significant portions of the internet and prove my humanity with a 16 year old PGP key with thousands of transitive trust signatures formed through mostly in-person meetings, using IETF standards and keychain smartcards, as is the case for everyone I work with.
But, I do not have an Android or iOS device as I do not use proprietary software, so a smartphone based solution would not work for me.
Why re-invent the wheel? Invest in making PGP easier and keep the decades of trust building going anchoring humans to a web of trust that long predates human-impersonation-capable AI.
So, you haven't thought about it yet? Your counterquestion is insufficient to get any further, because "use" is a very relative term. Whether I can "use" a smartphone depends on the way you code your app. If you use inherently inaccessible UI elements, I can't "use" your app, no matter whether I own a smartphone or not.
I might reply with a similarly useless question: Can you write accessible smartphone apps?
We already have it and we use it to validate the trusted human maintainer involvement behind the linux packages that power the entire internet: PGP Web Of Trust. Still works as designed and I still go to keysigning parties in person.
Say a regular human wanted to join and prove their humanhood status (expanding the web of trust). How would they go about that? What is the theoretical ceiling on the rate of expansion of this implementation?
They need to generate their key, ideally offline, with an offline CA backup, and subkeys on a Nitrokey or YubiKey smartcard with a touch requirement enabled for all key operations for safe workstation use. One can use keyfork on AirgapOS to do this safely, as a once-ever operation.
From there they set up their workstation tools to sign every ssh connection, git push, commit, merge, review, secret decryption, and release signature with their PGP smartcard, which is all very well supported. This offers massive damage control if they get malware on their system, in addition to preventing online impersonation.
From there they ideally link it to all their online accounts with keyoxide to make it easy to verify as a single long lived identity, then start seeking out key signing parties locally or at tech conferences, hackerspaces etc.
We run one at CCC most years at the Church Of Cryptography.
Think of it like a long-term digital passport that requires a few signatures by an international set of human notaries before anyone significantly trusts it.
Yes it requires a manual set of human steps anchored to human reputation online and offline, which is a doorway swarms of made up AI bot identities cannot pass through.
Do I expect most humans to do this? Absolutely not. However I consider it _negligent_ for any maintainer of a widely used open source software project to _not_ do this or they risk an impersonator pushing malware to their users.
No idea on theoretical rate of expansion but all the major security conscious classic linux distros mandate this for all maintainers. There are only maybe 20k people on earth that significantly contribute to FOSS internet foundations and Linux distros, so it scales just fine there.
Note: with the exception of stagex, most modern distros like alpine and nix have a yolo wikipedia style trust model, so never ever use those in production.
Hardly. They're clearly trying to influence a group at large using a flawed logical tactic. I'm pointing that out with a device suggesting that the futility they expect others to adopt should primarily be experienced by them instead.
If pointing out bullies is bullying then you're in a ridiculous mindset.
I assume we're heading to a place where keyboards will all have biometric sensors on every key and measure weight fluctuations in keystrokes, actually.
I wager that "describe only what the code change does" was someone's attempt to invert "don't add the extra crap you often try to write", not some 4d chess instruction that makes claude larp like a human writing a crappy commit message.
Yes, this is a trend I've noticed strongly with Claude Code: it really struggles to explain why. Especially in PR descriptions, it has a strong bias to just summarize the commits and not explain at all why the PR exists.
No, I think a lot of humans can explain why they're adding a new button to the checkout page, or why they're removing a line from the revenue reconciliation job. There's always a reason a change gets made, or else nobody would be working on it at all :)
But will this be released as a feature? For me it seems like it's an Anthropic internal tool to secretly contribute to public repositories to test new models etc.
All these companies use AIs for writing these prompts.
But AI aren't actually very good at writing prompts imo. Like they are superficially good in that they seem to produce lots of vaguely accurate and specific text. And you would hope the specificity would mean it's good.
But they sort of don't capture intent very well. Nor do they seem to understand the failure modes of AI. The "describe only what the code change does" line is a good example. This is specific, but it also distinctly seems like someone who doesn't actually understand what makes AI writing obvious.
Hey LLM, write me a system prompt that will avoid the common AI 'tells' or other idiosyncrasies that make it obvious that text or code output was generated by an AI/LLM. Use the referenced Wikipedia article as a must-avoid list, but do not consider it exhaustive. Add any derivations or modifications to these rules to catch 'likely' signals as well.
Hey, LLM, take a look at these multiple hundred emails and docs in my docs folder from the last few years, before I started using AI, that I wrote personally. create a list of all of the idiosyncrasies that I have in my writing. Create a file to remember that. And then use that to write any new text that'll be published so it sounds like my authentic voice. Thank you.
All the prompts I've ever written with Claude have always worked fine the first time. Only revised if the actual purpose changes, I left something out, etc. But also I tend to only write prompts as part of a larger session, usually near the end, so there's lots of context available to help with the writing.
As someone who often fails to read subtext, I would estimate that most people expect you to participate in mind reading as a natural part of conversation.
So it is no surprise that many people have difficulty switching gears to literal mode when interacting with these models.
This is my pet peeve with LLMs: they almost always fail to write like a normal human would, mentioning logs or other meta-things that are not at all interesting.
I had a problem to fix and one not only mentioned these "logs", but went on about things like "config", "tests", and a bunch of other unimportant nonsense words. It even went on to point me towards the "manual". Totally robotic monstrosity.
1) This seems to be strictly for Anthropic-internal tooling.
2) It does not "pretend to be human" it is instructed to "Write commit messages as a human developer would — describe only what the code change does."
Since when is "describe only what the code change does" pretending to be human?
You guys are just mining for things to moan about at this point.
1) It's not clear to me that this is only for internal tooling, as opposed to publishing commits on public GitHub repos. 2) Yes, it does explicitly say to pretend to be a human. From the link on my post:
> NEVER include in commit messages or PR descriptions:
> [...]
> - The phrase "Claude Code" or any mention that you are an AI
Heh, this is what people who are hostile against AI-generated contributions get. I always figured it'd happen soon enough, and here it is in the wild. Who knows where else it's happening...
How so? A good bit of my global claude.md is dedicated to fighting the incessant attribution in git commits. It is on the same level as the "sent from my iPhone" signature - I'm not okay with my commits being an advertising board for Anthropic.
The difference in response time - especially versus a regex running locally - is really difficult to express to someone who hasn't made much use of LLM calls in their natural language projects.
Someone said 10,000x slower, but that's off - in my experience - by about four orders of magnitude. And that's average, it gets much worse.
Now personally I would have maybe made a call through a "traditional" ML widget (scikit, numpy, spaCy, fastText, sentence-transformers, etc.), but - for me anyway - that whole stack is Python. Transpiling all that to TS might be a maintenance burden I don't particularly feel like taking on. And in client-facing code I'm not really sure it's even possible.
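To put rough numbers on the gap: a compiled regex check runs in microseconds even over a couple of kilobytes of text, versus hundreds of milliseconds or more for a network round trip to an LLM. A quick sketch (the word list is a placeholder, and absolute timings vary by machine):

```python
import re
import timeit

# Placeholder block list for illustration.
pattern = re.compile(r"\b(?:darn|heck|frick)\b", re.IGNORECASE)
text = "a perfectly innocent sentence " * 50  # ~1.5 KB of clean text

# Average time for one full scan that finds no match (the worst case,
# since the engine must examine the whole string).
runs = 2_000
per_call = timeit.timeit(lambda: pattern.search(text), number=runs) / runs
print(f"{per_call * 1e6:.1f} microseconds per check")
```

Even a pessimistic result here sits several orders of magnitude below typical LLM API latency, which is the difference the parent comment is describing.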
So, think of it as a businessman: you don't really care if your customers swear or whatever, but you know that it'll generate bad headlines. So you gotta do something. Just like a door lock isn't designed to stop a master criminal, you don't need to design your filter for some master swearer; no, you design it well enough that it gives the impression that further tries are futile.
So yeah, you do what's less intensive for the CPU, but also, you do what's enough to prevent the majority of the concerns where a screenshot or log ends up showing blatantly "immoral" behavior.
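A "good enough" filter in that spirit can be a word-boundary regex over a block list (the words below are mild placeholders; a real deployment would load per-language lists):

```python
import re

# Placeholder block list; production filters load per-language word lists.
BLOCKED = ["darn", "heck", "frick"]

# \b word boundaries avoid the classic "Scunthorpe problem" of matching
# a blocked word inside an innocent one.
pattern = re.compile(
    r"\b(" + "|".join(map(re.escape, BLOCKED)) + r")\b",
    re.IGNORECASE,
)

def censor(text):
    """Replace each blocked word with asterisks of the same length."""
    return pattern.sub(lambda m: "*" * len(m.group()), text)

print(censor("Heck, that was a darn good game"))  # -> ****, that was a **** good game
```

It won't stop a determined "master swearer" (leetspeak, spacing tricks), but it does keep the obvious cases out of screenshots, which is exactly the door-lock standard described above.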
The upside of the US market is that (almost) everyone there speaks English. The downside is that includes all the well-networked pearl-clutchers. Europe (including France) will have the same people, but it's harder to coordinate a network of pearl-clutching between some saying "Il faut protéger nos enfants de cette vulgarité !" ("We must protect our children from this vulgarity!") and others saying "Η τηλεόραση και τα μέσα ενημέρωσης διαστρεβλώνουν τις αξίες μας!" ("Television and the media are distorting our values!"), even when they care about the exact same media.
For headlines, that's enough.
As for what's behind the pearl-clutching, what makes the headlines pandering to it worth writing, I agree with everyone else in this thread saying a simple word list is weird and probably pointless. Not just for false negatives, but also false positives: the Latin influence on many European languages leads to one very big politically-incorrect-in-the-USA problem for all the EU products talking about anything "black". That includes what's printed on some brands of dark chocolate, one of which I saw in Hungary, even though Hungarian is not a Latin language but a Ugric one and only takes loanwords from Latin.
I just went through quite an adventure trying to translate back and forth from/to Hungarian to/from different languages to figure out which Hungarian word you meant, and arrived at the conclusion that this language is encrypted against human comprehension.
Dark chocolate is "étcsokoládé" in Hungarian, literally "edible chocolate".
I heard it was usually the throat-clearing "Negró" candy (marketed with a chimney-sweep mascot with a soot-covered face) that hurt English-speaking people's sensitivities.
If it's good enough it's good enough, but just like there are many more options than going full-blown LLM or just using a regex, there are more options than transpiling a massive Python stack to TS or giving up.
Supporting 10 different languages in regex is a drop in the ocean. The regex can be generated programmatically and you can compress regexes easily. We used to have a compressed regex that could match any placename or street name in the UK in a few MB of RAM. It was silly quick.
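A minimal sketch of generating such an alternation programmatically (the phrase list and the `re` usage here are illustrative; the original UK system's internal representation isn't described):

```python
import re

# Build one case-insensitive alternation from a list of phrases.
# Sorting longest-first stops shorter alternatives shadowing longer ones.
names = ["High Street", "High Road", "Station Road", "Mill Lane"]
pattern = re.compile(
    r"\b(?:"
    + "|".join(re.escape(n) for n in sorted(names, key=len, reverse=True))
    + r")\b",
    re.IGNORECASE,
)

print(bool(pattern.search("turn left onto high street")))  # True
print(bool(pattern.search("turn left onto high lane")))    # False
```

A compiled alternation like this is a single linear scan per input; the list can be regenerated from data whenever the source gazetteer changes.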
I think it will depend on the language. There are a few non-Latin-script languages where a simple word search likely won't be enough for a regex to apply properly.
You can literally | together every street address or other string you want to match in a giant disjunction, and then run a DFA/NFA minimization over that to get it down to a reasonable size. Maybe there are some fast regex simplification algorithms as well, but working directly with the finite automata has decades of research and probably can be more fully optimized.
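A rough sketch of the prefix-sharing half of that idea (a real pipeline would run true DFA/NFA minimization, e.g. Hopcroft's algorithm, over the automaton; the street names below are made up):

```python
import re

def trie_regex(words):
    # Insert every word into a character trie; "" marks end-of-word.
    trie = {}
    for w in words:
        node = trie
        for ch in w:
            node = node.setdefault(ch, {})
        node[""] = {}

    # Emit a regex that shares common prefixes instead of a flat disjunction.
    def emit(node):
        end = "" in node
        branches = [re.escape(ch) + emit(child)
                    for ch, child in sorted(node.items()) if ch != ""]
        if not branches:
            return ""
        body = branches[0] if len(branches) == 1 else "(?:" + "|".join(branches) + ")"
        return "(?:" + body + ")?" if end else body

    return emit(trie)

streets = ["High Street", "High Road", "Low Road"]
pattern = re.compile(r"\b(?:" + trie_regex(streets) + r")\b")

print(bool(pattern.search("High Street")))  # True
print(bool(pattern.search("High Lane")))    # False
```

Prefix sharing alone already collapses "High Street" and "High Road" into `High (?:Road|Street)`; full minimization would additionally merge shared suffixes like the two "Road" tails.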
We're talking about Claude Code. If you're coding and not writing or thinking in English, the agents and people reading that code will have bigger problems than a regexp missing a swear word :).
I talk to it in non-English, but I have rules that everything in code and documentation stays in English; only when speaking with me should it use my native language. Why would that be a problem?
In my experience these models work fine using another language, if it’s a widely spoken one. For example, sometimes I prompt in Spanish, just to practice. It doesn’t seem to affect the quality of code generation.
Thank you. +1.
There are obviously differences and things getting lost or slightly misaligned in the latent space, and these do cause degradation in reasoning quality, but the decline is very small in high resource languages.
It just can’t be the case, simply because of how ML works. In short, the more diverse, high-quality, reasoning-rich texts a language has in the training set, the better the model performs in that language.
So unless the Spanish subset had far more quality-dense examples to make up for the smaller volume, there is no way the quality of reasoning in Spanish is on par with English.
I apologise for the rambling explanation; I’m sure someone with ML expertise here can explain it better.
I saw a curious post recently that explored this idea, and showed that it isn’t really the case. The internal layers of the model aren’t really reasoning in English, or in any human language.
Translation in/out of human languages only happens at the edges of the model.
Internal-layer activations for the same concept are similar regardless of language, while activations at the top/bottom layers diverge. Meanwhile, the pattern is reversed for same-language, different-content pairs.
So we do at least agree that the quality of the human-language <-> embeddings transition depends on how well the target language is represented in the training dataset?
Even if it happens at the edges, it happens on every conversation turn, and I'd assume small uncaptured subtleties of meaning can accumulate over time into significant error.
The agent may be translating to English behind the scenes. There are all kinds of hacks layered on top of the underlying models. But however exactly it works, it’s clearly not the case that these things only make use of Spanish training data when you prompt them in Spanish.
Let's take a language with orders of magnitude less training data, like Nahuatl. The following is what happens if I ask Claude Code in Nahuatl to give me an example of a simple program that I can execute inside a web browser (avoiding any direct keyword prompts like 'Javascript' or 'Hello World'). Interestingly, it responds in Spanish. I doubt that it has any relevant training data in Nahuatl. (Apologies if the Nahuatl is wrong – I just used Google Translate.)
¿Uelis techmakas se neskayotl tlen se programa tlen amo ouij tlen ueliskia nijchiuas ipan se navegador web?
-----
Entiendo tu pregunta en náhuatl. Aquí un ejemplo sencillo — una calculadora en HTML puro que puedes abrir directo en el navegador, sin servidor: ("I understand your question in Nahuatl. Here is a simple example: a pure-HTML calculator you can open directly in the browser, no server needed:")
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Calculadora</title>
  <style>
    body { font-family: sans-serif; display: flex; justify-content: center; padding: 40px; }
    .calc { border: 1px solid #ccc; padding: 20px; border-radius: 8px; width: 200px; }
    input { width: 100%; margin-bottom: 10px; font-size: 1.2em; text-align: right; }
    button { width: 22%; margin: 2px; padding: 8px; font-size: 1em; cursor: pointer; }
    .wide { width: 47%; }
  </style>
</head>
<body>
  <div class="calc">
    <input type="text" id="pantalla" readonly>
    <div>
      <button onclick="tecla('7')">7</button>
      <button onclick="tecla('8')">8</button>
      <button onclick="tecla('9')">9</button>
      <button onclick="tecla('/')">÷</button>
      <button onclick="tecla('4')">4</button>
      <button onclick="tecla('5')">5</button>
      <button onclick="tecla('6')">6</button>
      <button onclick="tecla('*')">×</button>
      <button onclick="tecla('1')">1</button>
      <button onclick="tecla('2')">2</button>
      <button onclick="tecla('3')">3</button>
      <button onclick="tecla('-')">−</button>
      <button onclick="tecla('0')">0</button>
      <button onclick="tecla('.')">.</button>
      <button onclick="tecla('+')">+</button>
      <button onclick="limpiar()">C</button>
      <button class="wide" onclick="calcular()">=</button>
      <button class="wide" onclick="borrar()">⌫</button>
    </div>
  </div>
  <script>
    const pantalla = document.getElementById('pantalla');
    function tecla(val) {
      pantalla.value += val;
    }
    function calcular() {
      try {
        pantalla.value = eval(pantalla.value);
      } catch {
        pantalla.value = 'Error';
      }
    }
    function limpiar() { pantalla.value = ''; }
    function borrar() { pantalla.value = pantalla.value.slice(0, -1); }
  </script>
</body>
</html>
Guarda esto como calculadora.html y ábrelo en cualquier navegador — no necesita servidor ni dependencias. Es un buen punto de partida para aprender HTML, CSS y JavaScript. ("Save this as calculadora.html and open it in any browser; it needs no server or dependencies. It's a good starting point for learning HTML, CSS and JavaScript.")
> it’s clearly not the case that these things only make use of Spanish training data when you prompt them in Spanish.
It’s not! And I’ve never said that.
Anyway, I’m not even sure what we’re arguing about, as it’s a 100% fact that SOTA models perform better in English; the only interesting question here is how much better, whether it’s negligible or whether it actually makes a difference in real-world use cases.
It’s negligible as far as I can tell. If the LLM can “speak” the language well then you can prompt it in that language and get more or less the same results as in English.
In my experience agents tend to (counterintuitively) perform better when the business language is not English / does not match the code's language. I'm assuming the increased attention mitigates the higher "cognitive" load.
Why do you need to do it at the client side? You are leaking so much information on the client side.
And considering the speed of Claude Code, if you really want to do it on the client side, a few extra seconds won't be a big deal.
Depends on what it's used by. If I recall, there's an `/insights` built-in command/skill (or whatever you want to call it) that generates an HTML file. I believe it gives you stats on when you're frustrated with it and (useless) suggestions on how to "use claude better".
Additionally, after looking at the source, it looks like a lot of Anthropic's own internal test tooling/debug code (i.e. stuff normally stripped out at build time) is in this source mapping. There's one part that prompts their own users (or whoever) to use a report-issue command whenever frustration is detected. It's possible it's being used for this.
This is assuming the regex is doing a good job. It is not. Also, you could embed a very tiny model if you really want to flag as many negatives as possible (I don't know Anthropic's goal with this); it would be quick and free.
It is not a tricky problem because it has a simple and obvious solution: do not filter or block usage just because the input includes a word like "gun".
It is exceedingly obvious that the goal here is to catch at least 75-80% of negative sentiment and not to be exhaustive and pedantic and think of every possible way someone could express themselves.
75-80% [1], 90%, 99% [2]. In other words, no one has any idea.
I doubt it's anywhere near that high, because even if you don't write anything fancy and simply capitalize the first word, like you'd normally do at the beginning of a sentence, the regex won't flag it.
Anyway, I don't really care, might just as well be 99.99%. This is not a hill I'm going to die on :P
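The case-sensitivity point is easy to check; the pattern below is just a stand-in for an entry from the list in question:

```python
import re

pat = re.compile(r"\bffs\b")                  # case-sensitive, like a bare pattern
pat_ci = re.compile(r"\bffs\b", re.IGNORECASE)

print(pat.search("ffs, not again") is not None)     # True
print(pat.search("Ffs, not again") is not None)     # False: sentence-case slips through
print(pat_ci.search("Ffs, not again") is not None)  # True once IGNORECASE is added
```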
Except that it's a list of English keywords. Swearing at the computer is the one thing I'll hear devs switch back to their native language for, constantly.
They evidently ran a statistical analysis and determined that virtually no one uses those phrases as a quick retort to a model's unsatisfying answer... so they don't need to optimize for them.
Ha! Where I'm from a "dolly" was the two-wheeled thing. The four-wheeler thing wasn't common before big-boxes took over the hardware business, but I think my dad would have called it a "cart", maybe a "hand-cart".
Cloud hosted call centers using LLMs is one of my specialties. While I use an LLM for more nuanced sentiment analysis, I definitely use a list of keywords as a first level filter.
Actually, this could be a case where it's useful. Even if it only catches half the complaints, that's still a lot of data, far more than ordinary telemetry used to collect.
> The issue is that you shouldn't be looking for substrings in the first place.
Why? They clearly just want to log conversations that are likely to display extreme user frustration with minimal overhead. They could do a full-blown NLP-driven sentiment analysis on every prompt but I reckon it would not be as cost-effective as this.
It's fast, but it'll miss a ton of cases. This feels like it would be better served by a prompt instruction, or an additional tiny neural network.
And some of the entries are too short and will create false positives. It'll match the word "offset" ("ffs"), for example. EDIT: no it won't, I missed the \b. Still sounds weird to me.
I swear this whole thread about regexes is just fake rage at something, and I bet it'd be reversed had they used something heavier (omg, look they're using an LLM call where a simple regex would have worked, lul)...
The pattern only matches if both ends are word boundaries. So "diffs" won't match, but "Oh, ffs!" will. It's also why they had to use the pattern "shit(ty|tiest)" instead of just "shit".
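Quick demonstration of the boundary behavior described above:

```python
import re

pat = re.compile(r"\bffs\b")
assert pat.search("diffs") is None          # 'ffs' inside 'diffs': left side is a word char
assert pat.search("offset") is None         # same for 'ffs' inside 'offset'
assert pat.search("Oh, ffs!") is not None   # space and '!' both count as boundaries

# With \b on both ends, a bare 'shit' pattern would reject 'shitty',
# so the suffix group spells the variants out explicitly.
pat2 = re.compile(r"\bshit(ty|tiest)\b")
assert pat2.search("shitty code") is not None
assert pat2.search("shit") is None          # bare word not matched by this pattern
```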
You have a semi-expensive process, but you want to keep particular known contexts out of it, so you put a quick-and-dirty check in front of the expensive step. Instead of just 'figure sentiment (20 seconds)', you have 'quick sentiment check (<1 second)' and then 'figure sentiment v2 (5 seconds)'. Now if it were just pure regex and nothing else, then your analogy would hold up just fine.
I could see me totally making a design choice like that.
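A sketch of that two-tier shape (the keyword list and `expensive_sentiment` are hypothetical placeholders, not Anthropic's actual pipeline):

```python
import re

# Cheap gate: a handful of hypothetical frustration keywords.
FRUSTRATION = re.compile(r"\b(?:ffs|wtf|useless|garbage)\b", re.IGNORECASE)

def expensive_sentiment(text: str) -> str:
    # Stand-in for an NLP model or LLM call that takes seconds.
    return "negative"

def classify(text: str) -> str:
    if not FRUSTRATION.search(text):   # sub-millisecond regex gate
        return "neutral"
    return expensive_sentiment(text)   # only pay for flagged prompts

print(classify("please refactor this module"))  # neutral, no model call
print(classify("wtf, it deleted my tests"))     # runs the expensive pass
```

The gate trades recall for cost: anything it misses never reaches the expensive pass, which is exactly the tradeoff being debated in this thread.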
Excellent project, unfortunate title. I almost didn't click on it.
I like the tradeoff offered: full access to the current directory, read-only access to the rest, copy-on-write for the home directory. With stricter modes to (presumably) protect against data exfiltration too. It really feels like it should be the default for agent systems.
I'm not so sure. The difference between a self-hosted compiler and a circular interpreter is that the compiler has a binary artifact that you can store.
With an interpreter, you still need some binary to run your interpreter, which will probably be CPython, making the new interpreter redundant. And if you add a language feature to the custom interpreter, and you want to use that feature in the interpreter itself, you need to run the whole chain at runtime: CPython -> Old Interpreter That Understands New Feature -> New Interpreter That Uses New Feature -> Target Program. And the chain only gets longer, each iteration exponentially slower.
Meanwhile, with a self-hosted compiler, each iteration is "cached" in the form of a compiled binary. The chain lives only in the history of the binary, not in the runtime.
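Back-of-envelope on why the chain compounds: if each interpreter level costs a roughly constant dispatch factor k, an n-deep chain runs about k**n times slower than native (the value of k below is made up purely for illustration):

```python
# Assumed per-level slowdown for a naive dispatch-loop interpreter.
k = 30
for depth in range(4):
    print(f"{depth} interpreter level(s): ~{k ** depth:,}x slowdown over native")
```

A self-hosted compiler pays each factor once, at build time, which is the "caching" described above.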
---
Edit since this is now a top comment: I'm not complaining about the project! Interpreters are cool, and this is genuinely useful for learning and experimentation. It's also nice to demystify our tools.