Hacker News | 6Az4Mj4D's comments

Nice, using Claude to build a tool to fool Claude :)

How does it decide which model to use per invocation?


The caller specifies the model in the request body (just like a normal OpenAI API call). OCP maps it to the corresponding Claude CLI flag:

- claude-sonnet-4-6 → claude -p --model sonnet
- claude-opus-4-6 → claude -p --model opus

If you don't specify, it defaults to Sonnet. There's no automatic model selection yet — that's coming in v4 with agent-aware routing (different agents get different models based on their role).
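A minimal sketch of that routing (the mapping and default are as described above, but the function name and structure are hypothetical, not OCP's actual code):

```python
# Hypothetical sketch of OCP-style model routing: map an OpenAI-style
# model name from the request body to the Claude CLI arguments it
# invokes, falling back to Sonnet when no model is specified.

MODEL_MAP = {
    "claude-sonnet-4-6": "sonnet",
    "claude-opus-4-6": "opus",
}

def build_cli_args(requested_model=None):
    """Return the `claude` CLI arguments for a requested model name."""
    alias = MODEL_MAP.get(requested_model, "sonnet")  # default to Sonnet
    return ["claude", "-p", "--model", alias]

print(build_cli_args("claude-opus-4-6"))  # ['claude', '-p', '--model', 'opus']
print(build_cli_args(None))               # ['claude', '-p', '--model', 'sonnet']
```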


The vote failed by a small margin, I think 53 / 47.


Leaving autonomous weapons aside, how does Anthropic justify having signed up with surveillance company Palantir while now raising concerns about the same surveillance by the DoD?

It doesn't match.


This is very easy to explain. Anthropic outlines some limitations in their terms of service. Palantir accepted those terms. The DoD did not.

OpenAI claims their terms of service for the DoD contain the same limitations as Anthropic's proposed service agreement. Anthropic claims that this is untrue.

Now given that (a) the DoD terminated their deal with Anthropic, (b) stated that they terminated because Anthropic refused to modify their terms of service, and (c) then signed a deal with OpenAI; I am inclined to believe that there is in fact a substantial difference between the terms of service offered by Anthropic and OpenAI.


Yeah, it never made sense when Sam immediately said that they had the same constraints, yet the DoW immediately agreed with that.

From what I can see, OpenAI’s terms basically say “need to comply with the law”, which provides them with plenty of wiggle room with executive orders and whatnot.


I think they said they will comply with the law and Pentagon policies.

And:

1. there is no law currently prohibiting autonomous weapons platforms

2. the Pentagon can create policies overnight allowing all kinds of stuff

So yeah, OpenAI is going to make a lot of money from actually doing what the military asks from them.


Secret FISA court decisions are also law, the public just can’t see or challenge them. So we really have no idea what is considered lawful.

If the contract says “all lawful use” it’s a blank check to the state.


Are you sure about that? All the information I’ve seen suggests that the DoD has been using Anthropic’s models through Palantir.

My understanding is that Anthropic requested visibility and a say into how their models were being used for classified tasks, while the DoD wanted to expand the scope of those tasks into areas that Anthropic found objectionable. Both of those proposals were unacceptable for the other side.


Wasn’t the trigger for all this what happened with Maduro earlier this year? From what I understood, Anthropic wasn’t very happy with how their systems were being used by the DoW through Palantir, which caused this whole feud.


Reportedly, Anthropic didn't know about Claude's role in capturing Maduro until they saw it in the headlines.


And why would they have an objection to that? They sold a product to a customer. They should have no business in how that customer uses their software.


> And why would they have an objection to that? They sold a product to a customer. They should have no business in how that customer uses their software.

They sold a service to a customer, contractually subject to terms they both agreed upon. How do people keep missing this? The government changed their mind after agreeing to the restrictions and tried to alter the deal with Anthropic ex-post-facto.


It’s a bit more complex than that, but to be fair I don’t know what they were expecting after they integrated a purpose-built model with Palantir to be deployed in high-security networks to carry out classified tasks.


TBH I don’t know what they were expecting when closing that $200 million DoD contract last year.


Licensing is a thing. See requirements that, for example, GPL3 places on customers.


I'd hate to break it to you, but companies do have a right to determine how their products are used. You were subject to that when you wrote that comment. Did you not notice that?


No, I do not think they do. If I buy a car and run somebody over on purpose, the manufacturer has no right to come take my car away. Even if it were to be written in a contract.


If you tell the car dealership that your plan is to run someone over with the car you are buying, they 100% have the right to refuse to sell you the car.

If you tell a gun dealer you're going to kill someone when you walk out of the shop, they have a right and an obligation to refuse the sale.

Please feel free to tell me how these analogies are incorrect.


You're confusing physical goods transactions with subscription access to a service.

One of the many reasons every company has tried to shift their business model to the latter: greater control over users.


The GGP did not make that distinction, they made a statement about all companies and all products.


It's different with services. If you sign a mobile phone contract and use it for spamming, the supplier can cancel your contract.


So firearms dealers should be fine with their customers going on mass murder sprees?


Is this a rhetorical question?


Is your original question rhetorical? Because it ain't very... smart


“We’ve actually held our red lines with integrity rather than colluding with them to produce ‘safety theater’ for the benefit of employees (which, I absolutely swear to you, is what literally everyone at [the Pentagon], Palantir, our political consultants, etc, assumed was the problem we were trying to solve),” Amodei reportedly wrote.

“The real reasons [the Pentagon] and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot),” he wrote, referring to Greg Brockman, OpenAI’s president, who gave a PAC supporting Trump $25m in conjunction with his wife.

https://www.theguardian.com/technology/2026/mar/04/sam-altma...


> we haven’t donated to Trump

Another reason is that Sam Altman has been willing to "play ball" like providing high-profile (though meaningless) big announcements Trump likes to tout as successes. For example:

> "The Stargate AI data center project worth $500 billion, announced by US President Donald Trump in January 2025, is reportedly running into serious trouble.

More than a year after the announcement, the joint venture between OpenAI, Oracle, and Softbank hasn't hired any staff and isn't actively developing any data centers, The Information reports, citing three people involved in the "shelved idea."

https://the-decoder.com/stargates-500-billion-ai-infrastruct...


Reminds me of when they cut the camera to Zuck and he made the $600 Billion Deal announcement, but was hot mic'd after and said "I'm sorry I wasn't ready... I wasn't sure what number you wanted to go with". I will be extremely surprised if half of these deals actually go through


Sam donated $1M to Trump's inaugural fund. Dario did not.

http://magamoney.fyi/executives/samuel-h-altman/


> signed up with surveillance company Palantir

Just to nitpick, Palantir isn't doing surveillance like Flock. They do data integration the way IBM does, under contract for governments. Some data pipelines include law enforcement surveillance data, which gets integrated with other software/databases to help police analyze it. There's no evidence they are collecting it themselves, despite recent headlines. It's a relatively minor but important distinction IMO.

https://www.wired.com/story/palantir-what-the-company-does/


They are providing the software to do surveillance. They are definitely bad actors; you can dance around this all you want, but they are in it.


It is an important distinction.

It’s the same with Facebook selling user data. Neither selling your data, like the carriers do, nor selling the ability to target you with your data, like Facebook does, is very nice. But legally they are separate things that need to be regulated differently. As is the case with Flock and Palantir.


I'm not so sure Facebook is an apt analogy. Have we forgotten all the times Facebook has actually sold personal data?


Nice assertion. Please provide citations, substance, or anything other than “you’re wrong definitely.”



Wow... See. I didn't even know it was this bad. You don't need much to silence these people that are supporting authoritarian collaborators.


I always just say Palantir is IBM 2.0.

IBM, of course, has a problematic history.


Iunno, this seems pretty dystopian to me: https://www.eff.org/deeplinks/2026/01/report-ice-using-palan...


The government knowing where you live is neither surveillance nor dystopian.


That depends very much on how they use and disseminate that information.


The government is building databases of people and where they live, without consent of the governed. If that isn't dystopian, I don't know what is.


Their data integration and sales allow the government to surveil citizens without probable cause or warrants.


The solution is still no different than a decade ago. Far stricter laws on intelligence, federal and local police surveillance, and a reduction in executive power which oversteps checks and balances.

There will always be another IT company willing to do integrations even if Palantir dies. Software isn’t going away.


Right. But this is about Anthropic, a company that frames itself as a responsible and ethical steward of LLM technology. They can't pretend that OpenAI is somehow morally bankrupt here while continuing to deal with companies that undermine people's civil liberties.

I'm also a little unsure what you're saying here. Are you saying that it's futile to rely on corporate leaders to commit to ethical acts, as there's always someone else who will debase themselves to make money? I think that solely relying on the state to regulate itself with respect to civil liberties is a fast path to despotism. The well-regulated state was always a partnership between ordinary people bravely standing up for their rights and the norms of the rules and laws that made it socially acceptable to do so.

If I'm grasping you correctly, I think you're right; however, this points to the rottenness of our culture's way of organizing labor: the optimization of the shareholder over everyone else leads to some really awful effects.


I think a company which provides a sensor fusion dragnet for a government-run mass domestic civilian surveillance system is at least as culpable (and odious) as the ones supplying the data.


It's funny you'd pick IBM:

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

Though, I guess IBM did get away with lots of stuff that... Actually, did any supply companies in the WWII German war machine actually get in trouble for war crimes, or did they just go after officers and the people actually working in the camps?

The company selling punchcards that were used for logistics was apparently fine. What about the people making the gas canisters, or supplying plumbing fixtures? The plumbers? Where's the line?

Wondering, since this is increasingly becoming a current events question instead of an academic concern.


There were the so-called Subsequent Nuremberg Trials (12 of them). Among them were the trials of IG Farben (gas chamber supplies, Zyklon B) and Krupp (armament of the German military forces in preparation for an aggressive war).

I'm under no illusion that all the perpetrators of war crimes were held accountable, but it's not a bad model.


Sure, but it's not as if the DoD was planning on using Anthropic to _collect_ the data either? I assume that the hypothetical DoD use case Anthropic shied away from dealt with the processing of surveillance data, just like what Palantir does.


https://www.washingtonpost.com/technology/2026/03/04/anthrop...

> The military’s Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system...

> As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people.


> They do data integration the way IBM does under contract for the governments

Good thing IBM's data integration was never used for ill!

Oh, wait https://en.wikipedia.org/wiki/IBM_and_World_War_II


Oracle started by building databases for the CIA


Basically it’s glorified Excel.

Take it out on the database purveyors, not Palantir.


Sure, Palantir is just one tool in the chain, and it's a lot more boring than people make it out to be.

On the other hand, a comment like yours does smack a bit of "Once the rockets are up, who cares where they come down."


It might match. The red line was domestic surveillance. You don't know what deal they had. Giving Anthropic the benefit of the doubt, perhaps Palantir said "Deal, we won't use your tool domestically".


Every single time the box is flipped over, what's inside is "more domestic surveillance". Who in their right mind would give the benefit of the doubt?


Well, I think a company that stood their ground knowing full well they'd be designated a SCR deserves the benefit of the doubt.


> Who in their right mind would give the benefit of the doubt?

I'm saying that we should give Anthropic the benefit of the doubt that when they say "our deal with Palantir doesn't cross our red line", we should believe Anthropic, that they have gotten an assurance from Palantir that they wouldn't use it domestically. I'm NOT saying we should give Palantir the benefit of the doubt.

I wasn't commenting on "is giving AI to Palantir a good idea" (I don't think it is), I was commenting on "should we conclude that Anthropic is being dishonest because they claimed they have red lines but work with Palantir" (I think it's unclear, but there's a plausible explanation in which they're not being dishonest, but possibly naive, so give them the benefit of the doubt).


Whether or not you think it truly aligns with their stated values, in their partnership with Palantir (making Claude available within their AI platform) they requested consistent restrictions:

> “[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.

Source: https://techcrunch.com/2024/11/07/anthropic-teams-up-with-pa...


Why do you assume the contract with Palantir doesn't have similar terms? Weird assumption.


The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.


> The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.

Anthropic is a Public Benefit Corporation chartered in Delaware, with an expressed commitment to "the responsible development and maintenance of advanced AI for the long-term benefit of humanity."

So in theory (IANAL), investors can't easily bully Anthropic into abandoning their mission statement unless they can convince a court that Anthropic deliberately aimed to prioritize the cause over profit.


> So in theory (IANAL), investors can't easily bully Anthropic into abandoning their mission statement unless they can convince a court that Anthropic deliberately aimed to prioritize the cause over profit.

So why were they ever working with the military in the first place, if that's the case? In case you didn't glean it from OpenAI: it doesn't matter. Everyone is greedy and will jump ship for money if Anthropic does not get it for them.


They are all guilty.


[flagged]


I wish people like you would actually talk to people at Anthropic, maybe interview with the company, actually engage with the real humans there before making blithe comments like this.

Seriously, you're on HN, you can't possibly be that many degrees removed from someone at the company.

In any case it's absolutely not "just marketing", it suffuses their whole culture, and it is genuine.


[flagged]


Just have an actual, good faith conversation with a real human working there instead of fighting/making assumptions about a strawman in your head.


I'm not talking about employees, I'm talking about the CEO. The fact that employees believe it means the marketing works. Everything about your posts makes my point. Anthropic is a business, and if you believe they have a serious commitment to the PBC or any of that other stuff, then you have drunk the kool-aid, full stop.


Really not sure how you can reconcile that with them making decisions that got them designated a SCR.


Then you're not thinking very hard, when this thread is full of people saying "I'm deleting my OpenAI account RIGHT NOW." Which isn't a surprise, because you are also buying this hook, line, and sinker.


[flagged]


"The law" is the contract. The Pentagon agreed to terms of service. The law is not on the Pentagon's side. The contract did not change; what changed is the Pentagon breaking the contract.

Perhaps you think the law shouldn't allow such a contract; that's a valid position. But that's not what the law currently says.


I'm saying they shouldn't write in their contract that they have some veto power over how their software is used if it's within the law of the land (i.e. laws written by Congress).

Is that more clear?


Sure. And since they can't reach a contract they do agree on, there is no sale. They cannot be compelled to sign a contract that they do not agree to.


Agree. Anthropic shouldn't require that in their contract (it is stupid). I'm glad the government resisted, as it was an insane overreach. But since Anthropic insisted, there should be no contract.


> if it's within the law.

The current administration has been caught flouting court orders in dozens of cases, to the point that courts are no longer even granting them the assumption that they’re operating in good faith.

I can think of a million good reasons not to give these people the tools to implement automated totalitarianism. Your proposal that they simply refuse service to the government entirely would be ideal.


Yes we obv need large corporations to exert some kind of control over our elected officials.


Our elected officials shouldn’t violate contracts. This isn’t rocket surgery.


They can have a contract that says whatever they want. My argument is that they shouldn't try to push one of these contracts, and the government shouldn't agree to such a contract.

Nowhere did I say elected officials should violate contracts.


The government works for the people, not the other way around. For the people, by the people and of the people.

If you don't question people in positions of power they will just do whatever they want. Democracy is sustained by action, not by acquiescence.

And with the lawlessness of this administration, I would make it a point to hold them accountable. I'm not going to let them do mass surveillance when they decide to change the law.

Are you naive, or just ignoring what is going on?


I want people to question people in power. That's kind of the point of democracy. But it's good to remember corporations aren't people :-)


It’s a service. Democracy doesn’t give the government the right to force you to perform a service.

The technology isn’t suitable for the purposes the regime wants.


They can choose to sell to government agencies or not. But selling to them and then trying to have some veto power is wrong. So it sounds like we're in agreement.

I would like Western democratic powers to have the most advanced technology, personally, but you may disagree.


Basically, yes.

I've worked in government outside of the Federal level. The government has a moral and often legal incentive to do inefficient things for the simple reason that the work they do needs to be safe, controlled and deterministic.

Any US state maintains a birth registry, death registry and DMV. But firewalls exist so that live links don't exist between these and other programs. It's inefficient, but avoids many hazards and conflicts in regulatory or legal compliance. For example, income tax information is secret, and cannot be shared outside of the tax processing scenario. Police investigatory data should not be linked to your unemployment claim. Fundamentally, those are examples of why the stuff that Palantir is doing is problematic.

With military applications, it's even more fraught, and human life is in peril by design. It's important for a professional army like the US Army that strict discipline and rules of engagement are followed. Soldiers may find themselves in situations where people are shooting at them, and they are ordered to take no action.

AI is not capable of functioning in that environment.

My point is that these are complex issues, and we are in a political environment where people seeking simple answers are looking at technology like AI to disconnect them from accountability. There's a nuance there, and a reason why Anthropic is willing to partner with Palantir for their work, but hesitant to power drones that are dropping Hellfire missiles on people.


That is crazy. You are suggesting that corporations should have no power over their own IP.

Are you really saying that if Anthropic sells a limited version of their product to Palantir at a certain price, the government should be able to demand access to an unlimited version of Anthropic's product for free because they are a customer of Palantir?

That would effectively mean the government gets an unlimited license to all IP of companies that do business with government suppliers... that would be terrible.


Imagine if a gun manufacturer sold weapons to the military but said "don't use them in unjustified wars as we deem fit". That seems wrong, as we don't want gun manufacturers setting our foreign policy. Choose not to sell to them, sure, but this isn't "ownership of IP". If the feds were to ask for the weights and torrent them out, sure, IP. But this ain't that.


Guns aren’t a service, which is what Anthropic sells.

Anthropic has a contract for how their service is to be used, the government committed itself to following the contract by signing. Then it violated the contract.

Basically the government committed fraud by signing a contract that it clearly intended to violate. Then they tried to bully Anthropic into not doing anything about their breach of contract.

It’s mobster behavior. You’re saying Anthropic should just not sell services if it’s going to enforce the terms of service. You have it backwards: the government shouldn’t enter into contracts that it intends to violate.


[flagged]


If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.

They've done lots wrong and maybe they shouldn't have gotten in bed with the military to begin with, but this illegal war is not theirs. It rests squarely with the President who declared it. (And with the military officers who are going along with it despite the violation of international law.)


> If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.

Anthropic claim that superintelligence is coming, that unaligned AI is an existential threat to humanity, and they are the only ones responsible enough to control it.

If that's your world view, why would you be willing to accept someone's word that they'll only Do Good Things with it? And not just "someone", someone with access to the world's most powerful nuclear arsenal? A contract is meaningless if the world gets obliterated in nuclear war.


Anybody who works with the military has to deal with that moral dilemma. Many people believe that the military has some legitimate use. They have to figure out for themselves how to deal with the possibility that it can also be used illegitimately.

So I don't blame Anthropic for getting into bed with the military, and getting out when it got bad for them. A lot of military suppliers are facing a similar dilemma, I suspect. The army runs on its stomach, and I do not envy the people delivering pizzas to the Pentagon, knowing what room those pizzas are consumed in.


I don't think any AI company should get in bed with the military. That being said, if the terms of service have been violated, the account should be canceled.


They basically are cancelling the contract, but there are some nuances on Anthropic's side. The contract probably has stipulations that prevent them from doing it overnight, so it might be illegal (but ethical) for them to just turn off the API keys.

Also, doing that might have bad second order effects with bad ethical implications.

For example, when Musk decided to pull the plug on a bunch of starlink terminals, he (intentionally and knowingly) blocked a US-funded attack that would have sunk a big chunk of the Russian navy, which certainly prolonged the Ukraine war. That was clearly an act of treason (illegal).

Anyway, just turning off Claude could kill a bunch of civilians in the region or something. It depends on how deeply it's integrated into military logistics at this point.

Anyway, your point certainly holds for OpenAI:

They walked into a "use ChatGPT for war crimes, and illegal domestic surveillance / 'law enforcement'" deal with open eyes, and pretty obviously lied about it while the deal was being signed. I don't see any ethical nuance that would even partially excuse their actions.


This exchange between Anthropic and OpenAI feels a lot like theater. If I were really trying to stop abuses I wouldn't go out of my way to talk about it. The "public sees us as the heroes" bullshit feels like a smoke screen. I'd make one statement, keep silent, and let the public do the math without getting involved.


Is there any high level "How does it work" document?


We have some docs here: https://github.com/imbue-ai/sculptor

If you have any specific questions that aren't covered there, please let us know in Discord!


People who are paying $100K are not qualified per the education or job-sponsorship standards of the H-1B. So they are two different audiences.

You can compare the $100K payers with the new Trump gold card scheme; many will take it if there is no condition other than money.


I see there are lots of courses being sold for Evals in Maven. Some are as costly as USD 3500. Are they worth it? https://maven.com/parlance-labs/evals


As I was reading that prompt, it looked like a large blob of if/else case statements.


Maybe we can train a simpler model to come up with the correct if/else-statements for the prompt. Like a tug boat.


Hobbyists (random dudes who use LLM models to roleplay locally) have already figured out how to "soft-prompt".

This is when you use ML to optimize an embedding vector to serve as your system prompt instead of guessing and writing it out by hand like a caveman.

Don't know why the big cloud LLM providers don't do this.
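As a toy sketch of the idea (everything here is an assumed illustration: a tiny fixed linear map stands in for the frozen model, and plain gradient descent plays the role of the ML optimization):

```python
# Toy illustration of soft-prompting: treat the system prompt as a free
# embedding vector and optimize it numerically against a frozen model,
# rather than hand-writing text. Deliberately tiny and assumed: the
# "model" is a fixed 2x2 linear map, not a real LLM.

M = [[1.0, 0.5],
     [0.2, 1.0]]           # frozen "model" weights
target = [1.0, -1.0]       # behaviour we want the prompt to elicit

def forward(p):
    """Apply the frozen model to the soft prompt p."""
    return [sum(M[i][j] * p[j] for j in range(2)) for i in range(2)]

p = [0.0, 0.0]             # the soft prompt, initialized blank
lr = 0.1
for _ in range(500):
    out = forward(p)
    err = [out[i] - target[i] for i in range(2)]
    # gradient of the squared error w.r.t. p is 2 * M^T * err
    grad = [2 * sum(M[i][j] * err[i] for i in range(2)) for j in range(2)]
    p = [p[j] - lr * grad[j] for j in range(2)]

out = forward(p)
residual = sum((out[i] - target[i]) ** 2 for i in range(2))
print(residual)  # shrinks toward 0 as the learned vector reproduces the target
```

The same loop, with a real LLM's embedding layer and backprop in place of the 2x2 toy, is roughly what the hobbyists' soft-prompt setups do.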


This is generally how prompt engineering works

1. Start with a prompt

2. Find some issues

3. Prompt against those issues*

4. Condense into a new prompt

5. Go back to (1)

* ideally add some evals too
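The loop above can be sketched as plain code (everything here is a toy stand-in: `run_model` fakes the LLM call, and "refining" just appends an instruction per failing case):

```python
# Toy sketch of the iterate-on-a-prompt loop described above. A real
# workflow would use actual model calls and proper evals in place of
# these hypothetical helpers.

def run_model(prompt, case):
    # Stand-in for an LLM call: the "model" only handles a case if the
    # prompt mentions it explicitly.
    return "ok" if case in prompt else "fail"

def find_issues(prompt, cases):
    # Step 2: run the evals, collect the cases the prompt gets wrong.
    return [c for c in cases if run_model(prompt, c) != "ok"]

def refine(prompt, issues):
    # Steps 3-4: prompt against the issues, condense into a new prompt.
    return prompt + " " + " ".join(f"Handle {c}." for c in issues)

prompt = "You are a helpful assistant."
cases = ["dates", "currencies"]               # the eval set
while issues := find_issues(prompt, cases):   # step 5: loop until clean
    prompt = refine(prompt, issues)

print(prompt)  # "You are a helpful assistant. Handle dates. Handle currencies."
```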


Mistakes are new signal.

If it is too polished, then chances are it's AI :)


But what if AI screens the resume for candidates?


Maybe correct-but-not-too-polished sentences are a better giveaway that it's not AI generated, while still being good enough to get through ATS or AI screening?


I prefer Reddit communities over SO any day. On SO, folks are so high-handed they will bash you with anything that doesn't suit their framework. I am sure that with GPTs they will slowly lose traffic.


Reddit has been 1000x worse than S.O. for me. (and S.O. sucks)

Reddit = question asked 6 months to 5 years ago. Auto-closed because of age. Answer is out of date. Ask again, gets closed as already asked.

Reddit has all the same mod problems as S.O., but it's worse because its goal isn't to provide info, it's to be social media.


Threads don't get closed due to age on Reddit (they used to be archived but this stopped a while back). Mods can lock threads but this is used to moderate content.

And which subreddit locked your thread because a similar question was asked six months ago? I find that difficult to believe.


In the end, in a few years, whoever has the better AI will win in all fields. Monopoly sort of thing. In the finance world, maybe they win most of the trades.


> In the finance world, maybe they win most of the trades.

Every trade has two participants.

