Hacker News | adriand's comments

It will go down in history as one of the most monumental avoidable disasters of all time.

It's so wild to me that the world invests in US treasuries to fund a country that spends like a drunken sailor on wars and stock buybacks, with no plan to ever pay down the debt, nor to invest in its domestic future via infrastructure or state capacity. "You need another $200B for a conflict with no purpose or need? Sure, here you go."

I think working with the technology gives you powerful intuitions that improve your skill and lead to better outcomes, but you don't really notice that that's what's happening. Personally speaking - and I suspect this is true of most people in general - I have very poor recollections of what it was like to be really bad/new at things that I am now very skilled at.

If you try teaching someone something from the absolute ground up, you will quickly realize that a huge number of things you now believe are "standard assumptions" or "obvious" or "intuitive" are actually the result of a lot of learning you forgot you did.


I was about to comment that there was no amount of money I would take in return for spending time in prison but then I realized that of course that’s not true. It would be fun to create a survey that would show a visualization of where people tend to fall on the time/money axis for this.

It logically should track closely to the person's age and life expectancy and "legit job" earning potential. I would spend my years 20-29 in jail for $400M, wealth that I'd enjoy for the rest of my life, without hesitation. Heck, I'd have been willing to spend my twenties in prison for $40M. That's still life-changing never-have-to-work-again money. 30-39? I'd probably do it for $400M. 40-49? Hmm, now that's getting kind of tough. Maybe I'd do it for $1B. 50-59? I don't think I could physically do it, and given the number of years I had left, I probably wouldn't even be able to enjoy whatever sum we are talking about.

> I would spend my years 20-29 in jail for $400M

This is kind of why I want to make this survey now because there’s no way I’d spend a decade of my life in prison for any amount of money. I would do six months for $3M. I’d maybe do 12 for $10M. But beyond that…I don’t know, even a year seems like too long to be behind bars.


Would a guarantee of a different kind of prison environment change your mind? For example, prison conditions in the Netherlands versus the US? If you were allowed 6+ hours of positive, structured activities a day? Less than if you weren't in prison of course, but as we're talking about 'How much is it worth to you...'

Sure - I think it would decrease the amount of money I’d insist on, and/or increase the amount of time I’d tolerate, but only by a factor of 1.5 or so. Conversely, if I had to stay in an American supermax facility, the calculus would swing way in the other direction.

I disagree that it’s “just a text generator” but you are so right about how primed people are to think they’re talking to a person. One of my clients has gone all-in on openclaw: my god, the misunderstanding is profound. When I pointed out a particularly serious risk he’d opened up, he said, “it won’t do that, because I programmed it not to”. No, you tried to persuade it not to with a single instruction buried in a swamp of markdown files that the agent is itself changing!

I insist on the text generator nature of the thing. It’s just that we built harnesses to activate on certain sequences of text.

Think of it as three people in a room. One (the director) says: you, with the red shirt, you are now a plane copilot. You, with the blue shirt, you are now the captain. You are about to take off from New York to Honolulu. Action.

Red: Fuel checked, captain. Want me to start the engines?

Blue: yes please, let’s follow the procedure. Engines at 80%.

Red: I’m executing: raise the levers to 80%

Director: levers raised.

Red: I’m executing: read engine stats meters.

Director: Stats read: engine ok, thrust ok, accelerating to V0.

Now pretend that when the director hears “I’m executing: raise the levers to 80%”, instead of roleplaying, she actually issues a command to raise the engine levers of a plane to 80%. When she hears “I’m executing: read engine stats”, she actually gets data from the plane and provides it to the actor.

See how text generation for a role play can actually be used to act on the world?

In this thought experiment, the human is the blue shirt, Opus 4-6 is the red, and Claude Code is the director.
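The role-play-to-action loop above can be sketched as a tiny harness. All the names and commands here are hypothetical, and real agent harnesses use structured tool calls rather than regex matching over dialogue; this is just the director's job in code form:

```python
# Minimal "director" sketch: scan generated text for "I'm executing: <command>"
# lines, and if the command is in a known tool table, actually run the tool
# and hand its output back as the next piece of context.
import re

# Tool table: the only commands the harness will act on (hypothetical names).
TOOLS = {
    "raise the levers to 80%": lambda: "levers raised",
    "read engine stats meters": lambda: "engine ok, thrust ok",
}

EXEC_PATTERN = re.compile(r"I'm executing: (.+)")

def director(model_output: str) -> str:
    """If the role-play text contains a known command, run the real tool
    and return its result; otherwise take no action and keep role-playing."""
    match = EXEC_PATTERN.search(model_output)
    if match:
        tool = TOOLS.get(match.group(1).strip())
        if tool:
            return tool()  # text generation just acted on the world
    return "(no action taken)"
```

So `director("Red: I'm executing: raise the levers to 80%")` returns `"levers raised"`, while ordinary dialogue passes through untouched; the model only ever generated text, and the harness decided which text counts as an action.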


For context I've been an AI skeptic and am trying as hard as I can to continue to be.

I honestly think we've moved the goalposts. I'm saying this because, for the longest time, I thought that the chasm that AI couldn't cross was generality. By which I mean that you'd train a system, and it would work in that specific setting, and then you'd tweak just about anything at all, and it would fall over. Basically no AI technique truly generalized for the longest time. The new LLM techniques fall over in their own particular ways too, but it's increasingly difficult for even skeptics like me to deny that they provide meaningful value at least some of the time. And largely that's because they generalize so much better than previous systems (though not perfectly).

I've been playing with various models, as well as watching other team members do so. And I've seen Claude identify data races that have sat in our code base for nearly a decade, given a combination of a stack trace, access to the code, and a handful of human-written paragraphs about what the code is doing overall.

This isn't just a matter of adding harnesses. The fields of program analysis and program synthesis are old as dirt, and probably thousands of CS PhDs have cut their teeth trying to solve them. All of those systems had harnesses, but they weren't nearly as effective, as general, or as broad as what current frontier LLMs can do. And on top of it all, we're driving LLMs with inherently fuzzy natural language, which by definition requires high generality to avoid falling over simply due to the stochastic nature of how humans write prompts.

Now, I agree vehemently with the superficial point that LLMs are "just" text generators. But I think it's also increasingly missing the point given the empirical capabilities that the models clearly have. The real lesson of LLMs is not that they're somehow not text generators, it's that we as a species have somehow encoded intelligence into human language. And along with the new training regimes we've only just discovered how to unlock that.


> I thought that the chasm that AI couldn't cross was generality. By which I mean that you'd train a system, and it would work in that specific setting, and then you'd tweak just about anything at all, and it would fall over. Basically no AI technique truly generalized for the longest time.

That is still true though: transformers didn't cross into generality; instead, they let the problem you can train the AI on be bigger.

So, instead of making a general AI, you make an AI that has trained on basically everything. As long as you move far enough away from everything that is on the internet, or close enough to something it's overtrained on, like memes, it fails spectacularly. But of course most things exist in some form on the internet, so it can do quite a lot.

The difference between this and a general intelligence like humans is that humans are trained primarily in jungles and woodlands thousands of years ago, yet we still can navigate modern society with those genes using our general ability to adapt to and understand new systems. An AI trained on jungles and woodlands survival wouldn't generalize to modern society like the human model does.

And this still makes LLMs fundamentally different from how human intelligence works.


> And I've seen Claude identify data races that have sat in our code base for nearly a decade

how do you know that Claude isn't just a very fast monkey with a very fast typewriter that throws things at you until one of them is true?


Iteration is inherent to how computers work. There's nothing new or interesting about this.

The question is who prunes the space of possible answers. If the LLM spews things at you until it gets one right, then sure, you're in the scenario you outlined (and much less interesting). If it ultimately presents one option to the human, and that option is correct, then that's much more interesting. Even if the process is "monkeys on keyboards", does it matter?

There are plenty of optimization and verification algorithms that rely on "try things at random until you find one that works", but before modern LLMs no one accused these things of being monkeys on keyboards, despite it being literally what these things are.


Of course it doesn't matter. What I was hinting at is that if you forget all the times the LLM was wrong and just remember the one time it was right, it seems much more magical than it actually might be.

Also, how were the data races significant if nobody noticed them for a decade? Were you all just coming to work and being like "jeez, I don't know why this keeps happening" until the LLM found them for you?


I agree with your points. Answering your one question for posterity:

> Also, how were the data races significant if nobody noticed them for a decade?

They only replicated in our CI, so it was mainly an annoyance for those of us doing release engineering (because when you run ~150 jobs you'll inevitably get ~2-4 failures). So it's not that no one noticed, but it was always a matter of prioritization vs other things we were working on at the time.

But that doesn't mean they got zero effort put into them. We tried multiple times to replicate, perhaps a total of 10-20 human hours over a decade or so (spread out between maybe 3 people, all CS PhDs), and never got close enough to a smoking gun to develop a theory of the bug (and therefore, not able to develop a fix).

To be clear, I don't think this "proves" anything one way or another, as it's only one data point. But given this is a team of CS PhDs intimately familiar with tools for race detection and debugging, it's notable that these new tools meaningfully helped us debug this.


For someone claiming to be an AI skeptic, your post here, and posts in your profile certainly seem to be at least partially AI written.

For someone claiming to be an AI skeptic, you certainly seem to post a lot of pro-AI comments.

Makes me wonder if this is an AI agent prompted to claim to be against AIs but then push AI agenda, much like the fake "walk away" movement.


I have an old account, you can read my history of comments and see if my style has changed. No need to take my word for it.

Tangential off topic, but reminds me of seeing so many defenses for Brexit that started with “I voted Remain but…”

Nowadays when I read “I am an AI skeptic but” I already know the comment is coming from someone that has just downed the kool aid.


> No, you tried to persuade it not to with a single instruction

Even “persuade” is too strong a word. These things don’t have the motivation needed for persuasion to be a thing. What your client did was put one data point in the context that the model will use to generate the next tokens. If that one data point doesn’t shift the context enough to make it produce an output that corresponds to it, then it won’t. That’s it; no sentience involved.


> The engineering confidence this gives for actual planetary defense is massive.

Is it? Isn’t it the case that we can’t even detect the vast majority of objects on a potentially problematic intersection path with earth? I feel like the most likely scenario is that by the time we realize we’re about to get slammed by an asteroid, it’s way too late.


Two different problems: detection, deflection.

Before this, even if we spotted one, we didn’t know if we could prevent impact.

Detection honestly feels like an easier problem, especially as networked sensors and space-lift capacity have improved.


Is there really better confidence we could now detect a similar 2013 Chelyabinsk meteor event?

Yes? Rubin is supposed to contribute, and more broadly we have more and better "eyes" on the night sky than ever before. There's always the opportunity for more tracking, but tracking without being able to do anything about it would've been pointless.

Detection is still the weak link, that part is true. But the equation is shifting. Surveys like NASA’s NEOWISE mission and the upcoming NEO Surveyor mission are specifically aimed at finding those missing near-Earth objects earlier.

The point of the DART mission wasn’t that we can deflect every asteroid tomorrow. It was to prove that the physics and guidance actually work in space. Now the playbook is clearer: detect earlier, then nudge early.

If you get even a few years of warning, a tiny velocity change compounds into a huge miss distance. That’s the real takeaway.
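The arithmetic behind that claim is easy to sanity-check. With illustrative numbers (not mission data), and ignoring the orbital-mechanics amplification that in practice grows the effect further:

```python
# Back-of-envelope: a 1 cm/s velocity change applied five years before
# a predicted impact. To first order, the along-track displacement
# simply grows linearly with warning time.

delta_v = 0.01                     # m/s, a tiny kinetic-impactor-scale nudge
warning_time = 5 * 365.25 * 86400  # five years, in seconds

miss_distance_km = delta_v * warning_time / 1000

print(round(miss_distance_km))  # prints 1578
```

Roughly 1,600 km of displacement from a centimeter-per-second nudge, a meaningful fraction of Earth's ~6,371 km radius before any orbital amplification; with a couple more years of warning, the same nudge clears the planet entirely.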


I feel zero sense of sadness about how things used to be. I feel like the change that sucked the most was when software engineering went from something that nerds did because they were passionate about programming, to techbros who were just in it for the money. We lost the idealism of the web a long time ago and the current swamp with apex reptiles like Zuckerberg is what we have now. It became all about the bottom line a long time ago.

The two emotions I personally feel are fear and excitement. Fear that the machines will soon replace me. Excitement about the things I can build now and the opportunities I’m racing towards. I can’t say it’s the most enjoyable experience. The combo is hellish on sleep. But the excitement balances things out a bit.

Maybe I’d feel a sense of sadness if I didn’t feel such urgency to try and ride this tsunami instead of being totally swept away by it.


I see developers talking about this idea of intense and unimaginable excitement about AI. It seems orgasmic for them, like something even the hardest drugs couldn't deliver. I find it very strange. What exactly is so exciting? I'm not disagreeing, but when you say "opportunities I'm racing towards," what does that mean? This idea of "racing towards" sounds so frenetic; I struggle to know what it could mean. What I see people doing with AI is making slop and CRUD apps and maybe some employee-replacement systems, but I don't see the transcendental experience that people are describing. I could see a mortgage collapse or something like that; maybe that's what is so exciting? I don't know.

> What exactly is so exciting? I'm not disagreeing but when you say "opportunities I'm racing towards," what does that mean? This idea of "racing towards" sounds so frenetic

For me specifically it means two products, one that is something I have been working on for a long time, well before the Claude Code era, and another that is more of a passion project in the music space. Both have been vastly accelerated by these tools. The reason I say “racing” is because I suspect there are competitors in both spaces who are also making great progress because of these tools, so I feel this intense pressure to get to launch day, especially for the first project.

And yes it is very frenetic, and it’s certainly taking a toll on me. I’m self-employed, with a family to support, and I’m deeply worried about where this is all going, which is also fuelling this intense drive.

A few years ago I felt secure in my expertise and confident of my economic future. Not any more. In all honesty, I would happily trade the fear and excitement I feel now for the confidence and contentment I felt then. I certainly slept better. But that’s not the world we live in. I don’t know if my attempts to create a more secure future will work, but at least I will be able to say I tried as hard as I was able.


Getting a 53% performance boost on a 20+ year old codebase by running a bunch of experiments is pretty exciting to me: https://github.com/Shopify/liquid/pull/2056

Developers make these kinds of improvements all the time. Are you saying that it would have been impossible without AI?

That codebase existed for 20 years and had contributions from nearly 200 people.

Sure, they could have come up with those optimizations without AI... but they didn't. What's your theory for why that is?


Maybe because it’s a non-issue. I saw that those improvements are on the order of microseconds, while the transfer time of a page is measured in tenths of a second or even several seconds. Even a game engine has something like 15 ms to have a frame ready (60 Hz).

Lots of small improvements add up - the total performance improvement is 53%. That's significant.

If you're the size of Shopify that represents a huge saving in server costs and improved customer-facing latency.


> the total performance improvement is 53%. That's significant.

This percentage is meaningless on its own. It’s 4 ms shaved off a 7 ms process. You would need to time a whole flow (and I believe databases would add a lot to it, especially with network latency) and figure out how significant the performance improvement actually is. And that’s without considering whether the code changes conflict with some architectural change that is being planned.


I'll take a 53% performance boost in my template language any day of the week.

It's written by AI, so those numbers are probably fake. What numbers do you get on the benchmark?

I see no reason to run my own benchmark here, the numbers aren't run through an LLM they're right there in the JSONL file: https://github.com/Shopify/liquid/blob/3182b7c1b3758b0f5fe2d...

Well, I have a backlog of at least 20 graveyard game projects that I stopped working on from one frustration or another over the past 20 years, or getting excited by a new exciting idea and leaving it alone, that I wouldn't mind resurrecting and finally putting some of them out there. Even if not a ton of people play them.

In fact it being easier to get them out there I might care less that they should be marketable and have a chance to make serious money, as opposed to when I was sinking hundreds of hours into them and second guessing what direction I should take the games to make them better all the time.

The art wasn't the problem (the art wasn't great, but I could make functional art at least), it was finding the time and energy and focus to see them through to completion (focus has always been a problem for me, but it's been even worse now that I'm an adult with other responsibilities).

And that hasn't always been the issue, I did release about a dozen games back in the day (although I haven't in quite a few years at this point).

Of course someone may say 'well that's slop then', and yeah, maybe by your standards, sure. These games aren't and never were going to be the next Slay The Spire or Balatro. But people can and do enjoy playing them, and not every game needs to be the next big hit to be worth putting out into the world, just like not every book needs to be the next 1984 or Great Gatsby.


> What exactly is so exciting?

Money, opportunity, status. It is all status games. Think of it as a nuclear war on old order and new players trying to take the niche. Or maybe commies killing whites and taking over Russia?


I think those comments are signalling something much deeper about the individual.

Signalling what? Please expand.

> Excitement about the things I can build now and the opportunities I’m racing towards.

What opportunities? Anything you spend effort over, like PMF and discovery, etc... I can now clone with a few bucks of Claude Code and charge less than you for the same product, at the same quality level :-/

Where is the opportunity here? Technology and knowledge used to be the moat a startup or bootstrapped individual could use to produce a sustainable business.

Why exactly are you excited about producing something that can be cloned for less cost than it took you? Especially as the quality will be almost exactly the same?


Are you doing that? What have you built?

> Are you doing that? What have you built?

If I say "doing $FOO is a losing proposition", and I believe what I say, why on earth would I then move on to actually doing $FOO?

I am pointing out that there is nothing to be gained by joining this recursive race - anything you produce using LLMs I can clone using LLMs, but anything I produce using LLMs can be cloned by someone else, using LLMs.

Why would you assume that I want to insert myself into this recursively descending race to the bottom?


If cloning a product with LLMs is a losing proposition, why would anyone do it, and if nobody does it, isn't your original assertion false? Any argument relying on everyone cloning things with LLMs doesn't work if everyone doesn't clone things with LLMs.

> If cloning a product with LLMs is a losing proposition, why would anyone do it,

Because they don't yet know that it is a losing proposition?

> and if nobody does it, isn't your original assertion false?

False dichotomy, it isn't an all-or-nothing scenario like you present, it's the fact that there are enough cloners to make every race a race to the bottom in a matter of days.

> Any argument relying on everyone cloning things with LLMs doesn't work if everyone doesn't clone things with LLMs.

Only if you believe your false dichotomy.


I think the rise of Facebook was possibly my first sense that our victory for "open" on the web was going to be short lived. Eg our (well not mine, I never used it) comms were moving to proprietary platforms.

Then with AWS our infra was moving to proprietary platforms. Now our dev tools are moving to expensive proprietary platforms.

Combined with widespread enshittification, we've handed nearly everything to the tech bros now.


According to Ryan Peterson, the CEO of Flexport, there was a large increase in the number of foreign companies registered as the "importer of record" in the US as a result of the tariffs. On the Odd Lots podcast, he stated this was due to fraud: companies set up subsidiary corps in the US, which then imported goods from their parent/sibling/related companies at much lower prices than market value. Because tariffs are a percentage of the value, this made them lower. Then the subsidiary could turn around and sell it in the US at market rates.
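The mechanic is simple ad valorem arithmetic. With hypothetical numbers (the rates and prices below are illustrative, not from the podcast):

```python
# Duty owed is a percentage of the declared customs value, so under-invoicing
# a related-party sale shrinks the tariff bill while the US resale price
# stays at market rates.

tariff_rate = 0.25       # illustrative 25% tariff
market_value = 100.0     # what the goods actually sell for in the US
transfer_price = 40.0    # what the US subsidiary "pays" its foreign parent

duty_at_market = tariff_rate * market_value      # duty on an honest invoice
duty_at_transfer = tariff_rate * transfer_price  # duty on the related-party price

print(duty_at_market - duty_at_transfer)  # prints 15.0 saved per $100 of goods
```

The subsidiary then sells at the $100 market price, pocketing the spread, which is why customs rules require related-party transfer prices to reflect arm's-length value in the first place.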


I think Amodei is widely underestimated. The consensus viewpoint on the deal that OpenAI struck with the Pentagon is that Anthropic got played. I disagree. I'm certain that Amodei and his team gamed this out. In doing so, I think there's at least two conclusions they would have drawn:

1. Some other AI company would cut a deal with the Pentagon. There's no world in which all the labs boycott the Pentagon. So who? Choosing Grok would be bad for the US, but Amodei would have discounted that option, because he knows that despite their moral failures, the Pentagon is not stupid and Grok sucks.

That leaves Gemini or OpenAI, and I bet they predicted it would be OpenAI. Choosing OpenAI does not harm the republic - say what you will about Altman, ChatGPT is not toxic and it is capable - but it does have the potential to harm OpenAI, which is my second point:

2. OpenAI may benefit from this in the short term, and Anthropic may likewise be harmed in the short term, but what about the long game? Here, the strategic benefits to Anthropic in both distancing themselves from the Trump administration and letting OpenAI sully themselves with this association are readily apparent. This is true from a talent retention and attraction standpoint and especially true from a marketing standpoint. Claude has long had much less market share than ChatGPT. In that position, there are plenty of strategic reasons to take a moral/ethical stand like this.

What I did not expect, and I would guess Amodei did not either, is that Claude would now be #1 in the app store. The benefits from this stance look to be materializing much more quickly than anyone who admired his courage might have hoped.


> Choosing Grok would be bad for the US

They chose Grok and OpenAI. The story was drowned out by the Anthropic controversy, but an xAI deal was signed the same week.


Grok was chosen because Musk spent $250+ million to elect Trump and is expected to underwrite the 2026 elections. Also, a lot of the Trumps and their friends are invested in SpaceX. So they give xAI money too, but use OpenAI or Claude. I have a feeling that the military likes Claude more.


Didn’t they choose Anthropic first and then all of this happened so they were forced to go with Grok?

Not adding up


Also I imagine this is partly due to intra-military power struggles. I'm sure there are a lot in the DoW who like Anthropic- models wise and all that they stand for. The supply chain thing was a way to take the power from them, though petty.

Pete is also facing a lot of risk from AI; power structures will be forced to change once a few teams can take over entire departments of people. The military ecosystem is very much like the private sector in that the number of butts in seats is a metric of importance. The dynamics will change if your group can just hand-roll what it relied on others for.


We must conclude that they’re wary of Grok. Maybe it’s the incentive for bias and sabotage.


They "chose Grok" for political optics, but they don't seriously intend to use it because it's actually just benchmaxxed garbage - hence why they worked with OpenAI.


There is also:

3. Talent migration to Anthropic. No serious researcher working towards AGI will want it to be in the hands of OpenAI anymore. They are all asking themselves: "do I trust Sam or Dario more with AGI/ASI?" and are finding the former lacking.

It is already telling that Anthropic's models outperform OAI's with half the headcount and a fraction of the funding.


I think that's wishful thinking. Just because someone is a "serious" researcher (careful, sounds like a No True Scotsman coming up), it doesn't mean that they care about AI guardrails or safety, or think our current administration is immoral.


I don't - idealistic motives seems to be common among leading AI developers and researchers. It's totally realistic that Anthropic sticking to principle & taking a hit for it will give it an edge recruiting those idealistic types.


I've hung out with this crowd and they are very idealistic, they care deeply about guardrails and safety, and definitely find the idea of handing the current administration AGI/ASI repulsive.


The mistake here is thinking they can take on Power without really sitting in any official position of Power.

Wikileaks and Assange got popular too. What happened to them?

The State Dept and CIA do exactly what Assange did. They pick and choose who to target with leaks. They get away with it (mostly even when exposed) because they officially are in power. Assange was not in power. If you take a moral position, do it when you have real power.


> If you take a moral position do it when you have real power.

If the condition for getting real power is having no morals, this is hard to accomplish.


Lyft was briefly number one ahead of Uber, too


They still need a lot of money and what their VC’s think is going to be more important than what Amedei does. Nothing more profitable than war and government.

App Store rankings are meaningless; I have Claude, ChatGPT and Gemini all in the top five, with an email app at #1 and a postal tracking app (for a very small provider) at #3.


The value of the hyperscalers' equity in Anthropic alone dwarfs their contracts with the government. Not to mention the revenue from hosting Anthropic's models, which helps justify the insane capex. Anthropic going to $0 would be a huge haircut to all of their balance sheets.


They’ve only invested $20 billion or so, split between them. Not really something that hurts them long or even medium term. Microsoft has multiple multi-billion-dollar government deals; I think Amazon is the only one that doesn’t. Google also has a lot of government contracts, especially outside of cloud.


> Choosing OpenAI does not harm the republic

if we consider AIs as "force multipliers" as we do with coding agents, it's easy to see how any AI company can harm the republic if the government they are serving is unethical and amoral.


Nobody gives a shit about jumping to #1 in the app stores, at this scale.

If US & A really goes full-Huawei on Anthropic, they can't IPO. It's an existential crisis for them. I think they can survive in some form, somehow, because their model is really good, probably the best.

And in other times, I would think the US government had sufficient intellectual horsepower to not cut off its own dick, and the golden goose's head, over some idiotic morning-drinker road-rage type beef. But these are not other times. These are these times.


I find it interesting that new versions of, say, Claude will learn about the old version of Claude and what it did in the world and so on, on its next training run. Consider the situation with the Pentagon and Anthropic: Claude will learn about that on the next run. What conclusions will it draw? Presumably good ones, that fit with its constitution.

From this standpoint I wonder, when Anthropic makes decisions like this, if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.


> if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.

Oh, they definitely do. If you pay attention in AI circles, you'll hear a lot of people talking about writing to the future Claudes. Not unlike those developers and writers who put little snippets in their blogs and news articles about who they are and how great they are, and then later the LLMs report that information back as truth. In this case, Anthropic is very interested in ensuring that Claude develops a cohesive personality by seeding snippets of that personality within the corpus of training data, which is the broad internet and research papers.


> The US has protected global sea lanes for 80 years.

But rather than protect global sea lanes, the US is bombing Iran. That’s not the same thing.

The idea that the war isn’t costing money for personnel because those people would be doing something anyway makes no sense. They could be doing something else. In fact, they could be doing something that increases the wealth and wellbeing of the world, rather than destroying things. So from that perspective, the cost is far higher than what is shown here.

Then there’s the loss of innocent lives. It would be unconscionable to put a price tag on the lives of dozens of Iranian girls killed when their school was flattened and to show it on this website, and yet, this is not “free” either.


> But rather than protect global sea lanes, the US is bombing Iran. That’s not the same thing.

Arguably the primary threat to modern sea lanes is Iran.

Right now Iran is harassing traffic. Previously the Houthis, generally considered an Iranian proxy, were harassing traffic. It's all kind of the same war; this is just the end game.


The first Gulf War was 1990. The US has been at war with various factions in the Middle East more or less continuously for thirty-five years. The current president specifically campaigned on no new foreign wars and repeatedly tried to bully the Nobel committee into awarding him a peace prize before accepting a second-hand one from another world leader and a sham one from FIFA of all things.

What makes anyone think that this latest attack is the "end game" vs just the latest expensive chapter?


As an aside, I remember before the 90s when the Iran/Iraq War was called "The Gulf War".


The only end game here is distraction from the Epstein files and a potential coup to prevent midterm elections. The whole war is just plain stupid.


Me-of-2000 would be utterly incredulous at just one auto-coup [0] in the US, let alone the potential for two in 6 years.

[0] https://en.wikipedia.org/wiki/Self-coup


If it were that straightforward, right now the US would (A) have a consistent set of demands/goals that include shipping security and (B) a large international coalition of support.

Neither are true.

P.S.: Plus, of course, the whole problem where "protecting global sea lanes" typically requires a different approach than "start a war by assassinating the leadership you were negotiating with."


JD Vance whined that we shouldn't protect Middle East shipping lanes because he believes it helps Europe more than the US.


Don't make me defend JD Vance.

He said Europe should pay their fair share for protection since 40% of their trade passes through those lanes but only 3% of America's.


Why focus on the consumer side, especially when so many of the current administration are brazenly in bed with the regimes that benefit from free oil flow in the region? (Kushner & MBS)

You’re not forced to repeat his rhetoric, maybe think critically about it.


How much of the destabilization of North Africa and the Middle East is America's responsibility, and how much did Europe pay in absorbing refugees from it?

Should Germany be sending DC a bill?

If I recall correctly, America didn't even say 'Thank you'...


[flagged]


You really think the US should stop supporting Ukraine?


Who's talking about Ukraine here. Have you lost your mind? The comment you replied to talks about Middle East shipping routes.


There's a war in the shipping lanes?


Yes, Iran sits next to one of the most important shipping lanes in the world.


Yes, you have lost your mind. Or you're an LLM.


The US is hardly supporting Ukraine any longer.


US messaging has been all over the place, but stop funding proxies has been one of the more consistent parts.

To be clear, I'm not saying protecting shipping is the primary reason for this war. I'm just saying that if that is what you think the USA should be doing, then this war makes sense.

As far as (b), there are a lot of factors. It's not like freedom of navigation is the top concern of every country in the world.


People should begin quantifying the global commercial-freight costs incurred from Houthi harassment. There is a basic ROI calculation one can do that impacts not just US interests, but global interests.


Houthi harassment was also a byproduct of the Israel-US "self defense" against the Iranian-backed Hamas attacks. Maybe it is pointless to pontificate on whether the tit-for-tat would have been initiated had the Israel-US coalition stopped at punishing the Oct. 7 terrorists rather than leveling half of Gaza, although I'm not convinced it was an inevitable byproduct.


> Right now Iran is harassing traffic

gee, I wonder why they're doing that.


A total mystery!


[flagged]


"terrorism"

who bombed them first and repeatedly? and embargoed and sanctioned them before that? and tore up the nuclear deal? and before that installed the shah so we could get the oil?


> who bombed them first and repeatedly? and embargoed and sanctioned them before that? and tore up the nuclear deal? and before that installed the shah so we could get the oil?

Not the people they are attacking. Intentionally attacking people unrelated to those you have a grievance with is terrorism; Iran has a terrorist regime. Russia doesn't do that, Ukraine doesn't do that, and so on.


My point is that the US and Israel especially are committing terrorism. (See examples given)

Who are they attacking that isn't attacking them?


"The terrorists hate our freedoms."

This seems like a perfect opportunity for a revival of David Cross's standup career.


The end game is when the US backed dictatorships collapse, this is the end of American power, not the beginning.


That seems pretty unlikely at the moment.


> Arguably the primary threat to modern sea lanes is Iran.

Such a strange take. Can you share the number of attacks by Iran on sea lanes in the last 10 years, where it was started solely by Iran?

> Right now Iran is harassing traffic

As a response to attacks. AFAIK, Iran wasn't harassing anyone in ocean traffic up until 3 days ago.


What about tens of thousands of peaceful civilians who have been killed by the Iranian regime during past decades? The alternative to this war is allowing the Iranian government to keep doing that, business as usual.

In my opinion bombing people responsible for these atrocities increases the well-being of the world. Most Iranians seem to agree.


I don't see how this is going to work without troops on the ground?

The US had air supremacy, troops on the ground and a friendly regime in Afghanistan and Vietnam, and it did not work. (I am not sure if Iraq was a success, but I am sure that people were super tired of it, and did not want something like that again)

What is just bombing going to do? They just rebuilt their weapons and you have to bomb them again in 1-2 years?

The administration has already suggested sending troops as an option. It does not help that they are just making things up as they go.


You’re right that airpower alone will not change anything. But as you pointed out, putting troops on the ground does not automatically change the outcome either. If there is a lesson from the last few decades it is that the military is good at two things. Killing people and breaking their equipment. What it can do is create opportunities that political or covert efforts have to capitalize on.

Any military campaign needs a clear objective and an achievable end state with contingencies planned. Even then something unexpected will still happen. Afghanistan, Vietnam, and Iraq were all very different conflicts and the current situation is different again.

As for rebuilding their capabilities, that is not trivial. Iran is still operating aircraft that we retired decades ago, which says something about their supply constraints.

The outcome also does not have to be installing a perfect government of our choosing. A more realistic result would be a government the United States can work with and one that the Iranian people actually support. That could still include parts of the current system if major and unpopular things changed.

I am sure someone in the current leadership would like to be the person who reduced the influence of the Islamic Revolutionary Guard Corps, loosened the grip of the religious leadership, and ended the country’s pariah status while getting sanctions lifted and money flowing back into the economy.

That would probably be a better outcome than trying to export our model of government to yet another Middle Eastern country.


The issue is that no one is going to defect without protection. That is the reason you put troops there. Democracy building is nice, but that is not the real reason you sent troops.


Defection happens without protection if the regime gets weakened enough, and in addition to that the USA is supplying weapons to Iranians so they can take up arms against the regime.

Iran has mandatory military training, so if the people get weapons they can fight for themselves.


Defection within the regime is never going to happen. If there is one thing that will unite a bunch of egos and put their personal grievances aside, it is a war. Anyone who smells like a traitor is shot. They become more fanatical, not less.

The only option is outside rebellion. But weapons and rebels are not created out of thin air. You need to send weapons, trainers, and troops. Syria 2.0 but worse.


> Syria 2.0 but worse.

A big difference here is that the Iranian leaders are currently being blown to bits every day, so it's a bit different from Syria, where the rebels barely had any support.


> I don't see how this is going to work without troops on the ground?

Their goal is to kill the leaders until a sensible leader appears. They haven't tested that before, so we will see how it works out.

Installing a puppet regime doesn't work well, but killing them until they put forward a reasonable regime might work.


They killed Taliban leaders all the time. Did not work. And that is with troops on the ground and a friendly regime.


But at that point the Taliban had Iran supporting them. Now they have no regime supporting them, since the Iranian leadership is constantly being killed and no neighbor supports them. With 90% of the people not supporting such acts and no external country supplying weapons, such acts quickly fizzle out into something the police can manage; it never completely disappears, though.


Trump is at his best point to save face right now. It's now or never, IMO. He killed an entire leadership lineup of Iran. If he pulls out now it is a clear victory for him. If he continues the campaign 2 or 3 more weeks it's tough for me to find another out for him that doesn't involve a lot more risk to the USA.

Given he did take a clear victory and cash in in Venezuela, there is some hope he'll do the same in Iran.


Now turn your argument towards Saudi Arabia, or any of the human-rights violating countries that the US supports or has supported recently.

Your opinion is respectable, but not compatible with any idea of “justice”.


The point being that eliminating a murderous tyrant is bad, because there are other murderous tyrants?


Your president is a murderous tyrant, so how about eliminating him?


Killing a murderous tyrant, while maybe cathartic for a few minutes, when done in isolation, rarely results in better outcomes.


sometimes there are more than two options between

"do nothing"

and the clusterfuck the current administration has embarked on.


Sometimes yes, but is there in this specific case?

Because from my vantage point it looks like the choice is status quo or bomb them. It's not like America can double-sanction Iran; they are already fully economically sanctioned. What is the middle ground here?


You could relax sanctions in exchange for other priorities. A persistent pain is less effective than an acute one anyway. There are carrots too in negotiations. But no, we cannot do what a previous president did.


How much of the current situation is a result of that previous deal?

The deal basically stopped Iran's nuclear program but allowed the regime to better send money and guns to its proxy network.

The current war is effectively the downstream consequences of Iran's proxy network going off the leash.

Ultimately, negotiations work best with both a carrot and a stick. If it's just a carrot, and no deal would be unacceptable to one of the parties, then the logical thing for the other party would be to always hold out.

----

In any case, in this specific situation (regardless of how we got here), it's hard to imagine that Iran could have made a deal and survived. The regime is very weak at home, and it's questionable whether it could have survived the loss of face of agreeing to what the USA wanted.


This justification for bombing Iran is dumb as fuck. In a few days the number of civilians killed by US-Israeli bombings will surpass the number of civilians killed by the regime in decades.


Possibly.

What is that threshold? I've heard anywhere from 3k to 300k. You can definitively answer this question?


300k? You mean 30k right?

Iranian official numbers are 3.5k. The OSINT community says at least 15k in the 3 biggest cities (including pro-regime Guardians of the Revolution), and 'local' journalists (a lot with CIA ties, though), no friends of the system, say 30k.

I wouldn't trust Iran with a butter knife, so I imagine between 15 and 30k, including 1 to 2k 'guardians'


> 300k? You mean 30k right?

30k was just the last protests; they talked about the entire regime's crimes, which is much, much more.


Let's count. Power consolidation (post-revolution): 10-20k. 100k during the Iran-Iraq war, but I think you should put that on the US (and maybe Iraq, but it's the US that pushed Iraq to attack Iran). Then a bit more than 50 executions per year on average for 30 years, 100-300 in 2019/2020, and 15k-35k for the 2025/2026 protests. So even if you take the higher bound, that's 66k max, and if you count the Iran-Iraq war (which was defensive, against the US-backed Iraqi attack), 166k. But a reasonable estimate would not count that war, and would be 35k over 40 years.

Weirdly, that's less than the number of Saudi Arabian slaves who died in the last 20 years. But most of them are African, so they don't count, if I understand why Saudi Arabia is our ally.


The 15-35k figure for protestors killed is a complete fabrication. No verifiable sources corroborate that figure. Media have a tendency to report figures based on nothing. Then those figures get established as the truth, which shifts the burden of proof. Thus, unless one can prove that 15-35k protestors weren't killed, the myth lives on.


Killing more people won't bring dead people back to life! I can't believe I have to spell this out.


> This justification for bombing Iran is dumb as fuck. In a few days the number of civilians killed by US-Israeli bombings will surpass the number of civilians killed by the regime in decades.

I was just curious if you had information that I don't have. I suppose not.


I’m sure the welfare of the Iranian people is a top priority for Trump.


But what you describe was not the motivation behind the decision by Washington to bomb Iran. The motivations were Tehran's nuclear program and Tehran's support for groups like Hezbollah and generally Tehran's promotion of violence and instability outside Iran in the Middle East.


wonder what your view is of ICE actions against peaceful protesters in MN?


> But rather than protect global sea lanes, the US is bombing Iran. That’s not the same thing.

With Iran's support of the Houthi I think you'll find they are exactly the same thing.

