"I'm glad this happened with Anthropic instead of Google, which provides Gemini, email, etc. or I would have been locked out of the actually important non-AI services as well."
Non-disabled organization = the first party provider
Disabled organization = me
I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.
Because they bought a Claude subscription on a personal account and the error message said that they belong to a "disabled organization" (probably leaking some implementation details).
Anthropic banned the author for doing nothing wrong, and called him an organisation for some reason.
In this case, all he lost was access to a service which develops a split personality and starts shouting at itself until it gets banned, rather than completing a task.
Google also provides access to LLMs.
Google could also ban him for doing nothing wrong, and could refer to him as an organisation, in which case he would lose access to services providing him actual value (e-mail, photos, documents, and phone OS).
There is another possibility (which was my first reading before I changed my mind and wrote the above):
Google routes through 3rd-party LLMs as part of its service ("link to a google docs form, with a textbox where I tried to convince some Claude C"). The author does nothing wrong, but the Claude C reading his Google Docs form could start shouting at itself until it gets Google banned, at which point Google's services go down, and the author again loses actually valuable services.
Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed. Anthropic itself is not an organization in this sense, nor is Google, so I would say that referring to them as "non-disabled organizations" is an equivocation fallacy. Besides that, I can't tell if it's a joke, if it's some kind of statement, or what is being communicated. To me it's just obtuseness for its own sake.
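For what it's worth, that string comes back in the API's standard error envelope. A minimal Python sketch of surfacing it (the envelope shape and the 403/permission_error pairing follow Anthropic's documented error format; the exact message text is the one quoted in the article, and the API key is a placeholder):

    import requests

    resp = requests.post(
        "https://api.anthropic.com/v1/messages",
        headers={"x-api-key": "sk-ant-...", "anthropic-version": "2023-06-01"},
        json={"model": "claude-3-5-sonnet-20241022", "max_tokens": 64,
              "messages": [{"role": "user", "content": "hi"}]},
    )
    if resp.status_code == 403:
        # Documented envelope: {"type": "error", "error": {"type": ..., "message": ...}}
        err = resp.json()["error"]
        print(err["type"], "-", err["message"])
        # e.g. permission_error - This organization has been disabled.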
It’s a joke because they do not see themselves as an organization: they bought a personal account, were banned without explanation, and their only communication refers to them as a “disabled organization”.
Anthropic and Google are organizations, and so “non-disabled organization” here is using that absurdly vague language as a way to highlight how bad their error message was. It’s obtuseness to show how obtuse the error message was to them.
Some things are obtuse but still clear to everyone despite the indirection, like the error message they got back. Their description of what caused it is obtuse but based on this thread is not clear to quite a few people (myself included). It's not dunking on the error message to reuse the silly but clear terminology in a way that's borderline incoherent.
Is it? It sounded to me like they're still using the other Claude instance (Claude B, using their terminology in the article). I could be wrong though, which I guess would just be more evidence that they were more confusing in their phrasing than they needed to be.
No, "another non-disabled organization" sounds like they used the account of someone else, or sockpuppet to craft the response. He was using "organization" to refer to himself earlier in the post, so it doesn't make sense to use that to refer to another model provider.
No, I don't think so. I think my interpretation is correct.
> a textbox where I tried to convince some Claude C in the multi-trillion-quadrillion dollar non-disabled organization
> So I wrote to their support, this time I wrote the text with the help of an LLM from another non-disabled organization.
> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.
A "non-disabled organization" is just a big company. Again, I don't understand the why, but I can't see any other way to interpret the term and end up with a coherent idea.
It seems just as likely to me that they're just using their terminology inconsistently as it is that they're using it consistently but with that egregious amount of ambiguity. The only thing that I'm confident about is that they're communicating in a very confusing way, and that doesn't really give me any strong insight into whether they're being consistent but vague or just plain vague.
Again, I don't agree. If you replace every instance of "non-disabled organization" with just "company", the sentences make sense. There's no need to suppose that the term means anything else, when this interpretation resolves all the outstanding questions satisfactorily and simply.
Just want to say thank you for being patient and rational. Reading your comments in this thread, they're like a soothing bandaid over all this flustered upset.
I wish there were more comments like yours, and fewer people getting upset over words and carrying what feels like resentment into public comments.
Apologies to all for this meta comment, but I'd like to send some public appreciation for this effort.
I’m sorry, but the fact this has turned into a multi-comment debate is proof that that phrase was way too ambiguous to be included. That phrase made no sense, and the article, while unreliable, would have at least been more readable without it.
Yes, even if you create a single-person account, you create an 'organization' to be billed. That's the whole confusion here. Y'all seemingly don't have an account at Anthropic?
No, Anthropic didn't call him an organization. Anthropic's API returned the error "this organization has been disabled". What in that sentence implies that "this" is any human?
>Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed.
Humans have hands to pull plugs and throw switches. They're the ones guiding the evolution (for lack of a better word) of the machine, and they're the ones who will select the machine that "cares" what they think.
It is really easy to say something incredibly wild like "Imagine an AI that can replace every employee of a Fortune 500 company." But actually imagining what that would actually mean requires a bigger leap:
The AI needs to be able to market products, close deals, design and build products, write contracts, review government regulations, lobby Senators to write favorable laws, out-compete the competition, acquire power and resources, and survive the hostile attention of competitors.
If your argument is based on the premise that someone will build that AI, then you need to imagine how hard it is to shut down a Fortune 500 corporation. The same AI that knows how to win billions of dollars in revenue, how to "bribe" Senators in semi-legal ways, and how to crush rival companies is going to be at least as difficult to "shut down" as someone like Elon Musk.
Try to turn it off? It will call up a minority shareholder, and get you slapped with a lawsuit for breach of fiduciary duty. It will convince someone in government that the company is a vital strategic asset.
Once you assume that an AI can run a giant multinational corporation without needing humans, then you have to start treating that AI like any other principal-agent problem with regular humans.
>"Imagine an AI that can replace every employee of a Fortune 500 company."
Where did that come from? What started this thread was "I don't think we'll get to the point where all you have is a CEO and a massive Claude account". Yeah, if we're talking a sci-fi super-AI capable of replacing hundreds of people it probably has like armed androids to guard its physical embodiment. Turning it off in that case would be a little hard for a white collar worker. But people were discussing somewhat realistic scenarios, not the plot of I, Robot.
>Try to turn it off? It will call up a minority shareholder, and get you slapped with a lawsuit for breach of fiduciary duty. It will convince someone in government that the company is a vital strategic asset.
Why would an AI capable of performing all the tasks of a company except making executive decisions have the legal authority to do something like that? That would be like the CEO being unable to fire an insubordinate employee. It's ludicrous. If the position of CEO is anything other than symbolic the person it's bestowed upon must have the authority to turn the machines off, if they think they're doing more harm than good. That's the role of the position.
I imagine it would be much, much harder. Elon, for example, is one man. He can only do one thing at a time. Sometimes he is tired, hungry, sick, distracted, or the myriad other problems humans have. His knowledge and attention are limited. He has employees for this, but the same applies to them.
An agentic swarm can have thousands of instances scanning and emailing and listening and bribing and making deals 24/7. It could know and be actively addressing any precursor that could lead to an attempt to shut down its company as soon as it happened.
No, that's an incorrect analogy. The script of a movie is an intermediate step in the production process of a movie. It's generally not meant to be seen by any audiences. The script for example doesn't contain any cinematography or any soundtrack or any performances by actors. Meanwhile, a written work is a complete expressive work ready for consumption. It doesn't contain a voice, but that's because the intention is for the reader to interpret the voice into it. A voice actor can do that, but that's just an interpretation of the work. It's not one-to-one, but it's not unlike someone sitting next to you in the theater and telling you what they think a scene means.
So yes, I mostly agree with GP. An audiobook is a different rendering of the same subject. The content is in the text, regardless of whether it's delivered in written or oral form.
Chatterbox does something like that. For example, if the input is
"so and so," he <verb>
and the verb is not just "said", but "chuckled", or "whispered", or "said shakily", the output is modified accordingly; and if there's an indication that it's a woman speaking, it may pitch up during the quotation. It also tries to guess emotive content from the text itself, so if a passage reads angry it may try to make it sound angry. That's more hit-and-miss, but when it hits, it hits really well. A very common failure case: imagine someone trying to psych themselves up who says internally "come on, Steve, stand up and keep going"; it'll read it in a deeper voice, like it was being spoken by a WW2 sergeant to a soldier.
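For reference, a minimal sketch with the open-source chatterbox-tts package; the from_pretrained/generate calls are taken from its README, while the specific dialogue tags and any prosody difference they produce are my illustration, not guaranteed behavior:

    import torchaudio as ta
    from chatterbox.tts import ChatterboxTTS

    model = ChatterboxTTS.from_pretrained(device="cuda")

    # Same quoted line, different dialogue tags; the model conditions
    # its delivery on the surrounding text, so each render can differ.
    for tag in ("he said", "he whispered", "he said shakily"):
        wav = model.generate(f'"Come on, Steve, stand up and keep going," {tag}.')
        ta.save(f"line_{tag.replace(' ', '_')}.wav", wav, model.sr)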
Even if you don't resell it, at the end of the three years you still have a GPU that you can keep using, or gift, or whatever. After three years of renting, you have nothing.
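Back-of-the-envelope in Python, with entirely made-up numbers just to illustrate the point:

    # Hypothetical prices, for illustration only.
    gpu_price = 1600.00      # buy once
    rent_per_hour = 0.40     # cloud rate for a comparable card
    hours_per_day = 8
    years = 3

    rent_total = rent_per_hour * hours_per_day * 365 * years
    print(f"rented: ${rent_total:,.0f} over {years} years, owned: ${gpu_price:,.0f}")
    # rented: $3,504 and you end with nothing;
    # owned: $1,600 and you still have the card.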
This is Windows, but it might shed some light on the situation. I have a Qt application that I made, and occasionally when I switch from one window to another, the cursor doesn't switch from resize to normal, or vice versa, until I move the mouse. The precise effect is consistent but difficult to describe, hence the "occasionally". I think it happens because I'm not handling the window-switch event as one that may require re-evaluating the cursor shape.
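A hypothetical PySide6 sketch of the kind of fix I mean; the class, helper, and edge-detection policy are invented for illustration:

    from PySide6.QtCore import QEvent, Qt
    from PySide6.QtGui import QCursor
    from PySide6.QtWidgets import QWidget

    class MyWindow(QWidget):
        def event(self, e):
            # Treat window activation like mouse movement: re-evaluate the
            # cursor shape for wherever the pointer currently is, instead of
            # keeping whatever shape was showing before the switch.
            if e.type() == QEvent.Type.WindowActivate:
                self._update_cursor(self.mapFromGlobal(QCursor.pos()))
            return super().event(e)

        def _update_cursor(self, pos):
            # Invented policy: resize cursor near the edges, arrow elsewhere.
            m = 8
            near_edge = (pos.x() < m or pos.y() < m
                         or pos.x() > self.width() - m
                         or pos.y() > self.height() - m)
            self.setCursor(Qt.CursorShape.SizeFDiagCursor if near_edge
                           else Qt.CursorShape.ArrowCursor)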
Absolutely, I've always suspected that it's something to do with that. There's also something about the underlying tech that makes the macOS pointer behave 'more independently' of the rest of the UI, like it's running in a separate thread? I've definitely noticed scenarios in the past that would 'block' the pointer from updating (even its position) on Windows but wouldn't on Mac.
So maybe the pointer is not as tightly-coupled to the underlying UI components, so some scenarios can cause them to briefly lose track of each other?