I don't want to paint the decision to work on military and weapons projects as black and white; the ethics are difficult. However, this is one of the oldest and, at this point, least interesting arguments in favor of doing so. It is the height of human hubris to engage in this kind of speculation. Anyone with even a mediocre grasp of science and history can tell you it is almost impossible to foresee how the course of any invention, project, company, or even government will shape the future, for better or worse.
EDIT: To put this more succinctly: reason has serious limits when applied to macro human behavior, and I feel this author's attempt to apply reason to macro human behavior has grossly exceeded those limits.
I would dispute the argument about reason at length, but disputing with self-described unreasonable people is a waste of time, whatever species of logic they employ.
I'll say only this: the only problem with reason is lack of perfect information. At some point all is an estimate.
Ok. Isn't that like saying the only problem with perpetual motion is entropy?
My point is that there is so much imperfect information in geopolitics that trying to draw large scale conclusions about it from minor events is silly.
Basically, this is an argument from final consequences in which the final consequences are virtually impossible for this one person to discern with any real certainty.
These people are looking for improvement at the top of Maslow's pyramid ... [1]
In other words: they want, above anything else, to be able to say "that wasn't me".
From that perspective, this decision makes a lot of sense. As for the fact that it destroys other people's lives through inaction ... well, they buy clothes made in Bangladesh too.
The leadership's motivations are neither here nor there to my argument. My point is that you shouldn't waste time making arguments from final consequences without any empirical methods. There are many tasks which are beyond reason. Predicting the future is among them. I feel like this should be obvious but I'll give an example because I feel like my point isn't coming across.
During the time between WWI and WWII, governments around the world were contemplating the use of the aircraft in warfare. Particularly the consequences of aerial bombing. Some people opposed it because it was dishonorable to kill from the air. Some people doubted its effectiveness, especially after the way WWI entrenchments withstood massive artillery bombardments. And some people made exactly the argument in this article. That the bomber would save lives because it would shorten wars.
By 1945, dozens of cities had been flattened by aerial bombing and hundreds of thousands of civilians had perished from it. What people got wrong was that they reasoned about aerial bombing in the context of past conflicts. The notion of "justifiable" ubiquitous total war was mostly unanticipated. Essentially, they were reasoning in a world of past conflict and had not anticipated a conflict that would shift the window of what is moral, or even logical.
Now, who was right? Clearly those insisting on bombing as a humane measure were wrong. But at the same time, if England, France and the US had divested themselves of aerial bombing on moral grounds, Nazi Germany and Imperial Japan would've had a major advantage that might've tipped the war in their favor. I personally think I'm probably happier in the world we have rather than living "The Man in the High Castle". So who was right?
It's my contention that no one was right. There's no reason to believe that a sizable portion of any group considering the issue could anticipate what happened in WWII. It's my contention, that reason alone is not enough to predict the specific future consequences of our daily actions. I'm not just rebuking this writing but nearly any writing of its kind.
I love how you use WW2 as an argument. Let me ask you about your example.
Suppose those people had won out on the British side, and Britain had not developed air power. We can ask the same question of the other side too. It's not so easy to see without a good knowledge of the history, and of the animosity and threats Germany faced between WW1 and WW2, but if Germany had refused to create air power, it would have been bombed into the ground and its people destroyed.
So if one side had refused to create air power ... what would have happened? Because we both know the answer here.
What's more difficult to see is what would have happened if everyone (magically) refused to create it. But that would have enabled random other parties to quickly become a major threat, which would have had equally devastating consequences.
The problem you have is that the logic is strongly on the side of creating more powerful weapons. A fair study of the Cold War will reveal that the main reason we did not have WW3 is ... nuclear weapons. Einstein prevented it.
There should be a forced amendment every time we institute the draft: deferments are not allowed for descendants of politicians who voted to institute said conscription.
For the record, I do think WW2 was worth fighting. It's about as clear-cut case of national self defense as you'll find in the history books. Even people who are normally quite pacifist don't really dispute this in the general case.
But we're talking about extrajudicial termination of classified targets, in civilian settings, against whom war has not been formally declared. Some of them are US citizens or citizens of our allies. This is indefensible.
I very much doubt you would have found much support anywhere in 1939. Not in France, Britain, Belgium, Netherlands and certainly not in the US. Not even after annexing the Sudetenland, not even after the Polish invasion.
There is decision friction either way. Either you are free to engage without permission as dictated by the rules of engagement on the ground (people are shooting at you) or there is a matrix of agreement that must be met in accordance with law.
The only real economic result attributable to precision is the degree of collateral damage. Is the life of a third-world civilian worth the extra expense of more precise technology? As a war fighter, I absolutely believe so. Civilian life is simply more valuable than the lives of the belligerents.
I think your perspective assumes that nation states should neutralize most threats.
I don't think that's the case, because it ignores blowback.
For example, if a military uses a drone to precision kill enemies in foreign territory 10 times over a 6-month period, what effects does that have on the local population?
Anyone got a transcript of the whole piece? I am in the mood to read a poorly argued position on a topic I have strong feelings about and this person's viewpoint will fit the bill nicely.
Google’s Anti-Pentagon Decision Will Kill More People
Precision bombing in war saves the lives of soldiers and civilians.
Author: Andy Kessler
Earlier this year more than 3,000 Google employees signed a letter to chief executive Sundar Pichai demanding the company halt work on the Defense Department’s Project Maven, which applies algorithms to warfare. The disgruntled employees also wanted their boss to pledge “that neither Google nor its contractors will ever build warfare technology.” In June the company announced it would not renew its Project Maven contract. This is incredibly shortsighted and will increase the likelihood of war and civilian deaths.
Past warfare was described primarily by tonnage and throw weights, because precision was almost nonexistent. But ever since humans started dropping bombs out of airplanes, they’ve been aiming for more precision.
On June 15, 1944, a squadron of 75 American Superfortress B-29s left China to destroy the Imperial Iron and Steel Works in Yawata, Japan. The site manufactured about a quarter of Japanese steel at the time. The 47 bombers that made it to Yawata dropped more than 365 bombs. One accidentally destroyed a power house more than a kilometer away from the complex. The rest missed.
It wasn’t for lack of trying. In 1943, behavioral scientist B.F. Skinner demonstrated new guidance technology to track simulated Japanese destroyers. He then revealed that inside the nose cone of the bomb were three pigeons trained to peck away at silhouettes of Japanese warships. But real technology advances. Wartime news reports claimed the highly complex Norden bombsight could hit a pickle barrel from 20,000 feet above. But in 1944 bombardiers recorded that “75% of Norden bombsights fell short of specifications,” missing by more than 300 feet.
For the rest of the war, the city of Yawata was firebombed in an unsuccessful campaign to destroy the iron and steel works. Notably, it was a target on Aug. 9, 1945, three days after Little Boy was dropped on Hiroshima. A B-29 carrying the atomic bomb Fat Man made runs over Yawata, but thick smoke from the ground made targeting impossible, and the bombers headed to the next target on the list, the Mitsubishi Steel and Arms Works in Nagasaki. The bombardier, Capt. Kermit Beahan, used a Norden bombsight to target the factory. He wasn’t even close—off by almost two miles. Horseshoes and hand grenades!
During the Cold War, the U.S. and Russia often negotiated over throw weight, the payload that a missile carries—power over precision. In hot wars, conventional bombing continued. The U.S. dropped seven million tons of bombs during the Vietnam War, 10 times as much as during the Korean War and twice as much as the Allies dropped in World War II. There are no good statistics, but most think more than 90% of bombs missed across those wars.
During the first Gulf War, the paradigm changed. Tomahawk cruise missiles would read the terrain preloaded from satellite imagery and accurately find their targets. Even aerial bombing improved. Some 17,000 precision bombs, mostly laser-guided, almost always hit their targets. But 210,000 conventional bombs still missed by an average of 300 feet.
By the late 1990s, the Joint Direct Attack Munition kit was developed. It was bolted onto conventional 500-pound bombs, which updated their position via GPS as they dropped. They almost always landed within 10 feet of their target. It was first used in Kosovo in 1999 and perfected in the Iraq War.
There is so much more to do. The Israelis successfully deploy the Iron Dome missile interceptor. They claim it is 90% accurate, but many dispute this. Last week a new interceptor, known as David's Sling, co-developed with Raytheon, was launched against two Syrian SS-21 Tochka missiles headed toward Israel. Neither missile interceptor hit its target.
Precise weapons set expectations that war can be fought without civilian casualties. Which brings us back to Google. The U.S. has used drones successfully against al Qaeda and Islamic State. Drone pilots identify and eliminate terrorists from thousands of miles away. Collateral damage and civilian deaths still happen, but they’re often the result of faulty intelligence, not the bombing technology.
Project Maven, or the Algorithmic Warfare Cross-Function Team, was set up by the Defense Department last year. Working with private industry, the group applies machine learning and artificial intelligence to read drone footage and “autonomously extract objects of interest from moving and still imagery.” This probably doesn’t include facial recognition. It eventually will.
I don’t think we’ll ever get to fully autonomous weapons, the fever dreams of Terminator’s Cyberdyne Systems and the Campaign to Stop Killer Robots. Treaties will be signed to outlaw them.
Still, it’s naive to think war is obsolete—bad actors will always be around. But technology will soon surgically zap bad guys. Project Maven gets us closer. Which is why Google engineers’ refusal to develop precision technology means more civilians will die or the military will be reluctant to use force. That’s wrong. They should Google Si vis pacem, para bellum.
==========================================================
Back in the day, around 10 years ago, I was working a lot in clean energy. One of the big projects was modifying human behaviour through social networks, using targeted messaging and peer pressure via Twitter, back when it was still kind of big. Send messages to a big enough network of organically influential people, and all of a sudden you have a full city saving some MW per day; put on a sweater if you are cold, turn off the lights when not in use, drink more water if you are hot, close your blinds to keep your house cool, etc...
I was happy, doing something good for humanity and the environment... then DARPA took our research, and I was disheartened... Now I have no idea where or how it is being used, but it doesn't seem to be helping humanity. I notice the grandchildren of this research coming up in the news; the whole Cambridge Analytica scandal seemed quite familiar. If anything, I was wondering why it took so long for people to notice...
Anyway, moral of the story: no matter what we build, and how good our intentions, somebody will figure out how to do evil with it.
As long as the technology is open sourced and given to our enemies for free I think it's a decent argument.
I mean anyone who doesn't support giving this away for free presumably wants our enemies to kill more Americans than otherwise necessary, unless of course they're lying about their reasons for supporting this in the first place.
This is an excellent point and I’m sorry it’s being flagged.
If you think that as humans we all have moral value that transcends our citizenship, and you ALSO believe that violence is a legitimate way to settle grievances, then ensuring that all violence is precise makes a lot of sense.
Of course those two axioms are inconsistent so this argument exposes the parent post for the jingoistic bullshit that it is.