Document created: June 1, 2009
Air & Space Power Journal - Español, Second Quarter 2009


Wired for War: Welcome to Tomorrow’s Battlefield, Today

P.W. Singer

Wired for War

There was little to warn of the danger ahead. The Iraqi insurgent had laid his ambush with great cunning. Hidden along the side of the road, the bomb looked like any other piece of trash. American soldiers call these jury-rigged bombs “IEDs,” official shorthand for Improvised Explosive Devices.

The team hunting for the bomb was an Explosive Ordnance Disposal (EOD) team, the pointy end of the spear in the effort to suppress roadside bombings. By 2006, there were 2,500 of these attacks a month, and they were the leading cause of casualties among American troops, as well as Iraqi civilians. In a typical tour in Iraq, each EOD team would go on more than 600 calls, defusing or safely exploding about two devices a day. Perhaps the most telling sign of how critical the teams’ work was to the American war effort is that insurgents reportedly began offering a $50,000 bounty for killing an EOD soldier.

Unfortunately, this particular IED call would not end well. By the time the soldier had advanced close enough to see the telltale wires protruding from the bomb, it was too late. There was no time to defuse the bomb and no time to escape. The IED erupted in a wave of flame.

Depending on how much explosive has been packed into an IED, a soldier must be as far as 50 yards away to escape death and as far as a half a mile away to escape injury from the blast and bomb fragments. Even if you are not hit, the pressure from the blast by itself can break bones. This soldier, though, had been right on top of the bomb. Shards of metal shrapnel flew in every direction at bullet speed. As the flames and debris cleared, the rest of the team advanced. But they found little left of their teammate. Hearts in their throats, they loaded the remains onto a helicopter, which took them back to the base camp near Baghdad International Airport.

That night, the team’s commander, a Navy chief petty officer, did his sad duty and wrote home about the incident. The effect of this explosion had been particularly tough on his unit. They had lost their most fearless and technically savvy soldier. More important, they had also lost a valued member of the team, a soldier who had saved the others’ lives many times over. The soldier had always taken the most dangerous roles, always willing to go first to scout for IEDs and ambushes. Yet the other soldiers in the unit had never once heard a complaint.

In his condolences, the chief noted the soldier’s bravery and sacrifice. He apologized for his inability to change what had happened. But he also expressed his thanks and talked up the silver lining he took away from the loss. At least, he wrote, “When a robot dies, you don’t have to write a letter to its mother.”

The “soldier” in this case was, in fact, a 42-pound robot called a PackBot. Just about the size of a lawnmower, the PackBot mounts all sorts of cameras and sensors, as well as a nimble arm with four joints. It moves using four “flippers.” These are tiny tank treads that can also rotate on an axis, allowing the robot not only to roll forward and backward using the treads like a tank would, but also to flip its tracks up and down (almost like a seal moving) to climb stairs, rumble over rocks, squeeze down twisting tunnels, and even swim underwater. The cost of this “death” to the U.S. was $150,000.

The destination of the chief’s letter was not some farmhouse in Iowa, as it would have been in an old war movie. Instead, it arrived at a standard two-story concrete office building, located across from a Macaroni Grill and a Men’s Wearhouse in a drab office park just outside Boston, Massachusetts. On the corner is a sign for a company called iRobot, the maker of the PackBot. The name is inspired by Isaac Asimov’s 1950 science fiction classic I, Robot, in which robots of the future not only carry out mundane chores but make life-and-death decisions. It is at places like this that the future of war is being written.

Unmanned War

The PackBot is only one of many new unmanned systems operating in the wars in Iraq and Afghanistan today. When U.S. forces went into Iraq in 2003, they had zero robotic units on the ground. Today, there are over 12,000 in the inventory. And these are just the first generation. Already in the prototype stage are varieties of unmanned weapons and exotic technologies, from automated machine guns and robotic stretcher-bearers to tiny but lethal robots the size of insects, which often look like they are straight out of the wildest science fiction. As a result, Pentagon planners are not merely having to figure out how to use such machines as the PackBot in the wars of today, but also how they should plan for battlefields in the near future that will be, as one officer put it, “largely robotic.”

The most apt historical parallel to the current period in the development of robotics may well turn out to be World War I. Back then, strange, exciting new technologies that had been science fiction just years earlier were introduced and used in increasing numbers on the battlefield. Indeed, it was H.G. Wells’ 1903 short story “The Land Ironclads” that inspired Winston Churchill to champion the development of the tank. Another story, by A.A. Milne, creator of the beloved Winnie the Pooh series, was among the first to raise the idea of using airplanes in war, while Arthur Conan Doyle (in “Danger”) and Jules Verne (Twenty Thousand Leagues Under the Sea) pioneered the idea of submarines in war.

When these new technologies were used in actual war, they didn’t really change the fundamentals of war. But even their earliest models quickly proved useful enough to make it clear that they weren’t going back to the realm of fiction any time soon. And, more important, their effects began to ripple out, raising questions not only about how best to use them in battle, but also generating an array of new political, moral, legal, and ethical challenges. For instance, differing interpretations between the United States and Germany over how submarines were allowed to fight became one of the issues that drew America into the world war, and ultimately toward its own superpower status. Airplanes, meanwhile, proved useful not just for spotting and attacking troops at greater distances, but also enabled the new phenomenon of strategic bombing, which created a profoundly new link between the fighting and the public at home.

Much the same thing is just starting to happen with regard to robotics today. On the civilian side, experts such as Microsoft’s Bill Gates describe robotics as being close to where computers were in the early 1980s, still rare, but poised for a breakout. On the military side, our new unmanned systems are rapidly becoming present in almost every realm of war, moving more and more soldiers out of danger, and, in turn, allowing their enemies to be targeted with increasing precision.

And they are also changing the experience of war itself. This is leading some of the first generation of soldiers working with robots to worry that war waged by remote control from distant locations will become too easy, too abstract, too tempting. More than a century ago, General Robert E. Lee famously observed, “It is well that war is so terrible, otherwise we should grow too fond of it.” He didn’t contemplate a time when a pilot could “go to war” by commuting to work each morning in his Toyota Camry to a cubicle where he could shoot missiles at an enemy 7,500 miles away and then make it home in time for his kid’s soccer practice.

As our weapons are designed to have greater and greater autonomy, even deeper questions arise. How can the new armaments reliably separate friend from foe? What laws and ethical codes apply? What does it say about us when we send out unmanned machines to fight for us? In turn, what is the “message” that those on the other side actually receive? Ultimately, how will humans remain masters of weapons that are immeasurably faster and more “intelligent” than they are?

The unmanned systems that have already been deployed to Iraq and Afghanistan today come in all sorts of shapes and sizes. All told, some 22 different robot systems are now operating on the ground. One retired Army officer speaks of these new forces as “the Army of the Grand Robotic.”

The world of unmanned systems at war doesn’t end at ground level. One of the most familiar of these Unmanned Aerial Vehicles (UAVs) is the Predator. At 27 feet in length, the propeller-powered drone is just a bit smaller than a Cessna. Perhaps its best quality is that it can spend some 24 hours in the air, flying at heights of up to 26,000 feet.

Predators are flown by what are called “reach-back” or “remote-split” operations. While the drone flies out of bases in the war zone, the human pilot and sensor operator are physically located 7,500 miles away, flying the planes via satellite from a set of converted single-wide trailers located mostly at Nellis and Creech Air Force bases, just outside of Las Vegas and Indian Springs, Nevada, respectively. Such operations have created the novel experience of pilots juggling the psychological disconnect of being “at war” while still dealing with the pressures of home. In the words of one Predator pilot, “You see Americans killed in front of your eyes and then have to go to a PTA meeting.” Says another, “You are going to war for 12 hours, shooting weapons at targets, directing kills on enemy combatants and then you get in the car, drive home and within 20 minutes you are sitting at the dinner table talking to your kids about their homework.”

Each Predator costs just under $4.5 million, which sounds like a lot until you compare it to the costs of other military aircraft. Indeed, for the price of one new F-35, the Pentagon’s next-generation manned fighter jet (which hasn’t even taken flight yet), you can buy 30 unmanned Predators. More important, the low price and lack of a human pilot mean that the Predator can be used for missions in which there is a high risk of being shot down, such as traveling low and slow over enemy territory. As happened with the first planes in World War I, Predators originally were designed for reconnaissance and surveillance, but now some are armed with laser-guided Hellfire missiles on the wings. In addition to its deployments in Iraq and Afghanistan, the Predator, along with its larger, heavier-armed sibling, the Reaper, has been used with increasing frequency to attack suspected terrorists in Pakistan. According to media reports, the drones are now carrying out cross-border strikes at the rate of one every other day, operations which the Pakistani Prime Minister describes as the biggest point of concern between the U.S. and Pakistan.
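The arithmetic implied by that comparison, worked out from the article’s own figures:

\[
30 \ \text{Predators} \times \$4.5\ \text{million} \approx \$135\ \text{million} \approx \text{the price of one F-35.}
\]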

In addition to the Predator and Reaper, a veritable menagerie of unmanned drones now circles in the skies over war zones. Small UAVs such as the Raven, which is just over three feet long, or the even smaller Wasp (which carries a camera the size of a peanut) are tossed into flight by individual soldiers and fly just above the rooftops, sending back video images of what’s on the other side of the street or hill. Medium-sized drones such as the Shadow circle over entire neighborhoods, at heights above 1,500 feet, and are tasked out by commanders at brigade headquarters to monitor for anything suspicious. The larger Predators and Reapers roam over entire cities at 5,000 to 15,000 feet, hunting for targets to strike. Finally, sight unseen, 40-foot-long jet-powered Global Hawks zoom across much larger landscapes at 60,000 feet, monitoring electronic signals and capturing reams of detailed imagery for intelligence teams to sift through. Each Global Hawk can stay in the air as much as 35 hours, meaning it can fly 3,000 miles, spend 24 hours mapping out a target area of some 3,000 square miles, and then fly 3,000 miles back home.

A massive change has thus happened in the airspace above wars. Only a handful of drones were used in the 2003 invasion of Iraq, with just one supporting all of V Corps, the primary U.S. Army unit. Today, there are more than 7,000 drones in the U.S. military’s total inventory, and not a mission happens without them. One Air Force lieutenant general forecasts that “given the growth trends, it is not unreasonable to postulate future conflicts involving tens of thousands.”

The result is that a significant military robotics industry is beginning to emerge. The World War I parallel is again instructive. As a report by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) noted, only 239 Ford Model T cars were sold in 1908. Ten years later, more than a million were. “Just as World War I accelerated automotive technology, the war on terrorists will accelerate the development of humanoid robot technology,” the report said.

It’s not hard to see the appeal of robots to the Pentagon. Above all, they save lives. But they also don’t come with some of our human frailties and foibles. "They don't get hungry," says Gordon Johnson of the Pentagon's Joint Forces Command. "They're not afraid. They don't forget their orders. They don't care if the guy next to them has just been shot. Will they do a better job than humans? Yes.”

Robots are particularly attractive for roles dealing with what people in the field call the “Three D’s”—tasks that are dull, dirty, or dangerous.

Many military missions can be incredibly boring as well as physically taxing. Can you keep your eyes open for 30 hours watching empty desert sands? A robot can. Can you operate in “dirty” environments, such as inclement weather or battle zones filled with biological or chemical weapons, without a bulky suit and protective gear? Can you see at night or in multiple spectrums? A robot can. Finally, can your commander send you out and not have to worry about the repercussions, personal and political, of your being killed?

And with advancing research in AI, machines may even one day soon surpass our main comparative advantage today, the mushy grey blob inside our skull. This is not just a matter of raw computing power. If a soldier learns French or marksmanship, he cannot easily pass on that knowledge to other soldiers. Computers have faster learning curves. They not only speak the same language, but can be connected directly via a wire or a network, which means they have sharable intelligence.

The ability to compute and then act at digital speed is another unmanned advantage. Humans, for example, can only react to incoming artillery fire by taking cover at the last second. But the Counter Rocket, Artillery, and Mortar (C-RAM) system uses radar to direct the rapid fire of its Phalanx 20mm Gatling guns against incoming rockets and mortar rounds, achieving a 70 percent shoot-down capability. More than 20 C-RAMs, known affectionately as “R2-D2s” after the little Star Wars robot they resemble, are now in service in Iraq and Afghanistan. Some think such weapons are only the start. One Army colonel says, “The trend towards the future will be robots reacting to robot attack, especially when operating at technologic speed…. As the loop gets shorter and shorter, there won’t be any time in it for humans.”
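To see how little room that shrinking loop leaves a person, consider a rough timing sketch (the round numbers here are illustrative assumptions, not figures from the article). A mortar round fired on a simple 45-degree ballistic arc from about 2 kilometers away obeys

\[
R = \frac{v^{2}}{g} \;\Rightarrow\; v = \sqrt{Rg} \approx \sqrt{2000 \times 9.8} \approx 140\ \text{m/s},
\qquad
t_{\text{flight}} = \frac{2v\sin 45^{\circ}}{g} \approx 20\ \text{s}.
\]

Subtract the seconds the radar needs to detect and classify the round, and the seconds the guns need to put shells onto its trajectory, and the window left for a human decision all but vanishes.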

As new prototypes of unmanned planes hit the battlefield, the trend will be for the size extremes to be pushed in two directions. Some drone prototypes have wings the length of football fields. Powered by solar energy and hydrogen, they are designed to stay in the air for days and weeks, acting as mobile spy satellites or even aerial gas stations. At the other size extreme are what technology journalist Noah Shachtman describes as “itty-bitty, teeny-weeny UAVs.” The military’s estimation of what is possible with micro air vehicles is illustrated by a contract DARPA let in 2006. It sought an insect-size drone that weighs less than 10 grams, is smaller than 7.5 centimeters, has a speed of 10 meters per second and a range of 1,000 meters, and can hover in place for at least a minute.

As our machines get smaller, they will move into the realm of nanotechnology, once thought theoretical but now becoming all too real. A major advance came in 2007, when David Leigh, a professor at the University of Edinburgh, revealed that he had built a “nanomachine” whose parts consisted of single molecules. When asked to describe the significance of his discovery to a layperson, Leigh said it would be difficult to predict. “It is a bit like when stone-age man made his wheel, asking him to predict the motorway,” he said. Leigh did venture one prediction, however: “…Things that seem like a Harry Potter film now are going to be a reality.”

The Closing Loop

Despite all the enthusiasm in military circles for the next generation of unmanned vehicles, ships, and planes, there is one question that people are generally reluctant to talk about. It is the equivalent of Lord Voldemort in Harry Potter, the issue That-Must-Not-Be-Discussed. What happens to the human role in war as we arm ever more intelligent, more capable, and more autonomous robots?

When this issue comes up, both specialists and military folks tend to either change the subject or speak in absolutes. “People will always want humans in the loop,” says Eliot Cohen, a noted military expert who served in the State Department under President George W. Bush. An Air Force captain similarly writes in his service’s professional journal, “In some cases, the potential exists to remove the man from harm’s way. Does this mean there will no longer be a man in the loop? No. Does this mean that brave men and women will no longer face death in combat? No. There will always be a need for the intrepid souls to fling their bodies across the sky.”

All the rhetoric ignores the reality that humans started moving out of “the loop” of war long before robots made their way onto battlefields. As far back as World War II, the Norden bombsight took over calculations of height, speed, and trajectory too complex for a human alone, automatically deciding when a B-17 should release its bombs. By the time of the first Gulf War, Captain Doug Fries, a radar navigator, could write this description of what it was like to bomb Iraq in his B-52: “The navigation computer opened the bomb bay doors and dropped the weapons into the dark.”

The trend toward growing computer autonomy has also been in place at sea since the Aegis computer system was introduced in the 1980s. Designed to defend U.S. Navy ships against missile and plane attacks, the system operates in four modes: Semi-Automatic, in which humans work with the system to judge when and at what to shoot; Automatic Special, in which human controllers set the priorities, such as telling the system to destroy bombers before fighter jets, but the computer decides how to do it; Automatic, in which data goes to human operators in command but the system works without them; and Casualty, in which the system just does what it calculates is best to keep the ship from being hit. Humans can override the Aegis system in any of its modes, but experience shows this is often beside the point, sometimes with tragic consequences.
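The escalation across those four modes is easier to see laid out schematically. Below is a minimal sketch in Python, a conceptual illustration with hypothetical names, emphatically not the actual Aegis software:

```python
from enum import Enum
from typing import Optional

class AegisMode(Enum):
    # The four doctrine modes described above, in increasing order of autonomy.
    SEMI_AUTOMATIC = 1     # humans judge when and at what to shoot
    AUTOMATIC_SPECIAL = 2  # humans set priorities; the computer decides how
    AUTOMATIC = 3          # humans are kept informed; the system works without them
    CASUALTY = 4           # the system does whatever it calculates will save the ship

def fires_on_threat(mode: AegisMode, human_input: Optional[bool] = None) -> bool:
    """Would the system engage a detected threat? (Hypothetical helper.)"""
    if mode is AegisMode.SEMI_AUTOMATIC:
        # Least autonomy: nothing fires without an explicit human "yes."
        return human_input is True
    # In every other mode the computer initiates on its own; the human holds
    # only a veto -- which, as the Vincennes episode shows, is rarely used.
    return human_input is not False

# In Automatic mode, crew silence means the system fires;
# in Semi-Automatic, silence means it holds.
assert fires_on_threat(AegisMode.AUTOMATIC) is True
assert fires_on_threat(AegisMode.SEMI_AUTOMATIC) is False
```

The design point the sketch isolates is the one this section turns on: past the first mode, the human’s role flips from authorizing action to merely interrupting it.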

The most notable of these incidents occurred on July 3, 1988, when the U.S.S. Vincennes was patrolling in the Persian Gulf. The ship had been nicknamed “Robo-cruiser,” both because of the new Aegis radar system it was carrying and because its captain had a reputation for being overly aggressive. That day, the Vincennes’s radars spotted Iran Air Flight 655, an Airbus passenger jet. The jet was on a consistent course and speed and was broadcasting a radar and radio signal that showed it to be civilian. The automated Aegis system, though, had been designed for managing battles against attacking Soviet bombers in the open North Atlantic, not for dealing with skies crowded with civilian aircraft like those over the Gulf. The computer system registered the plane with an icon on the screen that made it seem to be an Iranian F-14 fighter (a plane half the size), and hence an “Assumed Enemy.”

Even though the hard data were telling the human crew that the plane wasn’t a fighter jet, they trusted what the computer was telling them more. Aegis was in Semi-Automatic mode, giving it the least amount of autonomy, but not one of the 18 sailors and officers on the command crew was willing to challenge the computer’s wisdom. They authorized it to fire. (That they even had the authority to do so without seeking permission from more senior officers in the fleet, as any other ship would have had to, was again only because the Navy had greater confidence in Aegis than in a human-manned ship without it.) Only after the fact did the crew members realize that they had accidentally shot down an airliner, killing all 290 passengers and crew, including 66 children.

The tragedy of Flight 655 was no isolated incident. Indeed, much the same scenario was repeated just a few years ago, when U.S. Patriot missile batteries accidentally shot down two allied planes during the Iraq invasion of 2003. The Patriot systems classified the craft as Iraqi rockets and there were only a few seconds to make a decision. So, machine judgment trumped any human decisions. In both of these cases, the human power “in the loop” was actually only veto power, and even that was a power that military personnel were unwilling to use against the quicker (and what they viewed as superior) judgment of a computer.

The point is not that the Matrix or Cylons are taking over, but rather that a redefinition of what it means to have humans “in the loop” of decision-making in war is under way, with the authority and autonomy of machines ever expanding. There are myriad pressures to give war-bots greater and greater autonomy. The first is simply the push to make more capable and more intelligent robots. But as psychologist and artificial intelligence expert Robert Epstein notes, this comes with a built-in paradox. “The irony is that the military will want it [a robot] to be able to learn, react, etc., in order for it to do its mission well. But they won’t want it to be too creative, just like with soldiers. But once you reach a space where it is really capable, how do you limit them? To be honest, I don’t think we can.”

Simple military expediency also widens the loop. To achieve any sort of personnel savings from using unmanned systems, one human operator has to be able to “supervise” (as opposed to control) a larger number of robots. But researchers are finding that humans have a hard time controlling multiple units at once (imagine playing five different video games at the same time). Even having human operators control two UAVs at a time rather than one reduces performance levels by an average of 50 percent. As a NATO study concluded, the goal of having one operator control multiple vehicles is “currently, at best, very ambitious, and, at worst, improbable to achieve.” And this is with systems that aren’t shooting or being shot at. As one Pentagon-funded report noted, “Even if the tactical commander is aware of the location of all his units, the combat is so fluid and fast-paced that it is very difficult to control them.” So, a push is made to give even more autonomy to the machine.
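Human-robot interaction researchers capture this supervision ceiling with a “fan-out” estimate; the formula below comes from that research literature, not from this article, and the numbers plugged in are illustrative assumptions:

\[
FO = \frac{NT + IT}{IT}
\]

Here NT is “neglect time,” how long a robot performs acceptably while ignored, and IT is “interaction time,” how long the operator must attend to it before neglecting it again. A robot that runs acceptably for 30 seconds unattended and then needs 15 seconds of attention yields a fan-out of (30 + 15)/15 = 3 robots per operator. Combat compresses neglect time toward zero, driving the ratio toward one robot per human and erasing the promised personnel savings.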

And then there is the fact that an enemy is involved. If the robots aren’t going to fire unless a remote operator authorizes them to, then any foe need only disrupt that communication. Military officers respond to this problem by saying that, while they don’t like the idea of taking humans out of the loop, there has to be an exception, a backup plan for when communications are cut and the robot is “fighting blind.” So another exception is then made.

Even if the communications link is not broken, there are combat situations in which there is not enough time for the human operator to react, even if the enemy is not operating at digital speed. For instance, a number of robot makers have added “counter-sniper” capabilities to their machines, enabling them to automatically track down and target with a laser beam any enemy who shoots. But those precious seconds while the human decides whether or not to fire back could let the enemy get away. So, as one U.S. military officer observes, there is nothing technical to prevent one from rigging the machine to shoot something more lethal than light. “If you can automatically hit it with a laser range finder, you can hit it with a bullet.”

This creates a powerful argument for another exception to the rule that humans must always be “in the loop,” giving robots in such settings the ability to fire back on their own. This kind of autonomy is generally seen as more palatable than other types. “People tend to feel a little bit differently about the counterpunch than the punch,” notes Noah Shachtman.

Each exception, however, pushes one further and further from an absolute and instead down a slippery slope. And at each step, once robots "establish a track record of reliability in finding the right targets and employing weapons properly,” says John Tirpak, editor of Air Force Magazine, the “machines will be trusted.”

The reality is that the human location “in the loop” is already becoming, as retired Army colonel Thomas Adams notes, that of “a supervisor who serves in a fail-safe capacity in the event of a system malfunction.” Even then, he thinks the speed, confusion, and information overload of modern-day war will soon move the whole process outside of “human space.” He describes how the coming weapons “will be too fast, too small, too numerous, and will create an environment too complex for humans to direct.” As Adams concludes, the various new technologies “are rapidly taking us to a place where we may not want to go, but probably are unable to avoid.”

The irony is that for all the claims by military, political, and science leaders that “humans will always be in the loop,” as far back as 2004 the U.S. Army was carrying out research on armed ground robots which found that “instituting a ‘quickdraw’ response made them much more effective than an unarmed variation that had to call for fires from other assets.” Similarly, a 2006 study by the Defense Safety Working Group, a body in the Office of the Secretary of Defense, discussed how the concerns over potential killer robots could be allayed by giving “armed autonomous systems” permission to “shoot to destroy hostile weapons systems but not suspected combatants.” That is, they could shoot at tanks and jeeps, just not the people in them. By 2007, the U.S. Army had solicited proposals for a system that could carry out “[f]ully autonomous engagement without human intervention.” The next year, the U.S. Navy circulated research on a “Concept for the Operation of Armed Autonomous Systems on the Battlefield.” Perhaps most telling is a report that the Joint Forces Command drew up in 2005, which suggested autonomous robots on the battlefield will be the norm within 20 years. Its title was somewhat amusing, given the official line one usually hears on the issue of ensuring absolute human control of armed robots: “Unmanned Effects: Taking the Human Out of the Loop.”

So, despite what one article called “all the lip service paid to keeping a human in the loop,” autonomous armed robots are coming to war. They simply make too much sense to the people that matter. A Special Operations Forces officer put it this way, “That’s exactly the kind of thing that scares the shit out of me. . . . But we are on the pathway already. It’s inevitable.”

Replacing Warriors?

With robots taking on more and more roles, and humans ever further out of the loop, some wonder whether human warriors will eventually be rendered obsolete. Describing a visit he had with the graduating class at the Air Force Academy, a retired Air Force officer says, “There is a lot of fear that they will never be able to fly in combat.”

The most controversial role for robots in the future would be as replacements for the human grunt in the field. But even in the military, people are starting to discuss having machines move in. In 2004, DARPA researchers surveyed a group of U.S. military officers and robotic scientists about the roles they thought robots would take over in the near future. The officers predicted that the first functions turned over to robots would be countermine operations, followed by reconnaissance, forward observation, logistics, and then infantry. Oddly, among the last they thought would be turned over to autonomous robots were air defense, driving or piloting vehicles, and food service—each of which has already seen automation. Special Forces roles were felt, on average, to be least likely to ever be delegated to robots.

The average year the soldiers predicted that humanoid robots would start to be used in infantry combat roles was 2025. Their projection wasn’t much different from the scientists’, who predicted 2020. To be clear, these numbers reflect only the opinions of those in the survey, and could prove to be way off. Robert Finkelstein, a veteran engineer who now heads Robotic Technologies Inc. and helped conduct the survey, thinks they are highly optimistic: it won’t be until “2035 [that] we will have robots as fully capable as human soldiers on the battlefield.” But the broader point is that many are starting to contemplate a world where robots replace the grunt in the field, well before many of us will pay off our mortgages.

However, as H.R. “Bart” Everett, a Navy robotics pioneer, explains, the future is unlikely to see the full-scale replacement of humans in battle anytime soon. Instead, the human use of robots in war will evolve “to more of a team approach.” His center, the Space and Naval Warfare Systems Command, has joined with the Office of Naval Research (ONR) to support the activation of a “Warfighter’s Associate concept” within the next 10 to 20 years. Humans and robots would be integrated into a team that shares information and coordinates action toward a common goal.

 A solicitation by the Pentagon to the robotics industry captures the vision: “The challenge is to create a system demonstrating the use of multiple robots with one or more humans on a highly constrained tactical maneuver. . . . One example of such a maneuver is the through-the-door procedure often used by police and soldiers to enter an urban dwelling . . . [where] one kicks in the door then pulls back so another can enter low and move left, followed by another who enters high and moves right, etc. In this project the teams will consist of robot platforms working with one or more human teammates as a cohesive unit.”

Another U.S. military–funded project envisions the creation of “playbooks” for tactical operations by a robot-human team. Much like a football quarterback, the human soldier would call the “play” for robots to carry out, but like the players on the field, the robots would have the latitude to change what they do if the situation shifts.

“Just see it and shoot it is not the future,” Thomas McKenna of the ONR explains. Instead, the robots in these teams will be expected to interact with humans naturally, perform tasks reliably, and predict what the human will ask of them. “The robot will do what robots do best. People will do what people do best.”

The military, then, doesn’t expect to replace all its soldiers with robots anytime soon, but rather sees a process of integration into a force that will become over time, as Joint Forces Command projected in its 2025 plans, “largely robotic.” The individual robots would “have some level of autonomy—adjustable autonomy or supervised autonomy or full autonomy within mission bounds.” It is worth noting, though, that the autonomy of the human soldiers in such units is also circumscribed; they, too, have limits placed on them by their orders and rules.

Where Does It Take Our Politics?

Lawrence J. Korb is one of the deans of Washington’s defense policy establishment. A former Navy flight officer, he served as assistant secretary of defense during the Reagan administration. Now he is a senior fellow at the Center for American Progress, a left-leaning think tank. In between, Korb has seen presidential administrations, and their wars, come and go. And, having written 20 books and more than 100 articles and made more than a thousand TV news-show appearances, Korb has also helped shape how the American media and public understand these wars. In 2007, I asked him what he thought was the most important overlooked issue in Washington defense circles. He answered, “Robotics and all this unmanned stuff. What are the effects? Will it make war more likely?”

Korb is a great supporter of unmanned systems for a simple reason: “They save lives.” But he worries about their effect on the perceptions and psychologies of war. As more and more unmanned systems are used, robotics “will further disconnect the military from society. People are more likely to support the use of force as long as they view it as costless.” Even more worrisome, a new kind of voyeurism enabled by the new technologies will make the public more susceptible to attempts to sell the ease of a potential war. “There will be more marketing of wars. More ‘shock and awe’ talk to defray discussion of the costs.”

Korb believes that political Washington today has been “chastened by Iraq.” But he worries about the next generation of policymakers. Technology such as unmanned systems can be seductive, feeding overconfidence that can lead nations into wars for which they aren’t ready. “Leaders without experience tend to forget about the other side, that it can adapt. They tend to think of the other side as static and fall into a technology trap.”

“We’ll have more Kosovos and less Iraqs,” is how Korb sums up where he thinks we are headed. That is, he predicts more punitive interventions like the Kosovo strikes of 1999, launched without ground troops, and fewer operations like the invasion of Iraq. As unmanned systems become more prevalent, we’ll be likelier to use force, but also see the bar raised on anything that exposes human troops to dangers. Korb envisions a future in which the United States is willing to fight, but only from afar, in which it is more willing to punish via war, but less to face the costs of war.

Colonel R. D. Hooker Jr. is an Iraq veteran and the commander of an Army airborne brigade. As he explains, the people and their military in the field should be linked in two ways. The first is the direct stake that the public has in the government’s policies. “War is much more than strategy and policy because it is visceral and personal. . . . Its victories and defeats, joys and sorrows, highs and depressions are expressed fundamentally through a collective sense of exhilaration or despair. For the combatants, war means the prospect of death or wounds and a loss of friends and comrades that is scarcely less tragic.” Because it is their blood personally invested, citizen-soldiers, as well as their fathers, mothers, uncles, and cousins who vote, combine to dissuade leaders from foreign misadventures and ill-planned aggression.

The second link is supposed to come indirectly, through a democracy’s free media, which widens the impact of those investments of blood to the public at large. “Society is an intimate participant [in war] too, through the bulletins and statements of political leaders, through the lens of an omnipresent media, and in the homes of the families and the communities where they live. Here, the safe return or death in action of a loved one, magnified thousands of times, resonates powerfully and far afield.” It may not be your son or daughter at risk in a particular battle, but you’re supposed to care because those at risk are part of your community, and it might just be someone you know the next time.

Robotics, though, takes trends that are already operative in our body politic to their final logical ending place. With no draft, no need for congressional approval (the last formal declaration of war was in 1941), no tax increases or war bonds, and now the knowledge that the Americans at risk are increasingly just American machines, the already lowered bars to war may well hit the ground. A leader needn’t carry out the kind of consensus building that is normally required before a war, and doesn’t even need to unite the country behind the effort. In turn, the public truly does become the equivalent of sports fans watching war, rather than citizens sharing in its importance.

But our new technologies don’t merely remove human risk, they also record all they experience, and in so doing reshape the public’s link to war. The Iraq war is literally the first conflict in which you could download video of combat from the Web. As of June 2007, there were more than 7,000 video clips of combat footage from Iraq on YouTube alone. Much of this footage was captured by drones and unmanned sensors and then posted online. Some of the videos were released from official sources, but many were not.

This trend could build connections between the war front and home front, allowing the public to see what is happening in actual battle as never before. But, inevitably, the ability to download the latest snippets of robotic combat footage to home computers and iPhones turns war into a sort of entertainment. Soldiers call such clips “war porn.” Particularly interesting or gruesome combat footage, such as an insurgent blown up by a UAV, is posted on blogs and forwarded to friends, family, and colleagues with subject lines like “Watch this!” much the same way an amusing clip of a nerdy kid dancing in his basement is e-mailed around. A typical clip making the rounds showed people’s bodies being blown into the air by a Predator strike, set to Sugar Ray’s snappy pop song “I Just Want to Fly.” In sum, the ability to watch more but experience less has a paradoxical effect. It widens the gap between our perceptions and war’s realities.

Such changed connections don’t just make a public less likely to wield its veto power over its elected leaders. As the former Pentagon official Korb observed, they also alter the calculations of the leaders themselves.

Nations often go to war because of overconfidence. This makes perfect sense; few leaders choose to start a conflict thinking they will lose. Historians have found that technology can play a big role in feeding overconfidence; new weapons and capabilities breed new perceptions, as well as misperceptions, about what might be possible in a war. Today’s new technologies are particularly liable to feed overconfidence. They are perceived to help the offensive side in a war more than the defense, plus they are advancing at an exponential pace. The difference of just a few years of research and deployment can create vast differences in capabilities. But this can create a sort of “use it or lose it” mentality, as even the best of technologic advantages can prove quickly fleeting (a major concern for the U.S., as 42 countries are now working on military robotics, from Iran and China to Belarus and Pakistan). Finally, as one roboticist explains, a vicious circle is generated. Scientists and companies often overstate the value of new technologies in order to get governments to buy them, but if leaders believe the hype they may be more likely to feel adventurous.

When faced with a dispute or crisis, policymakers have typically regarded the use of force as the “option of last resort.” Now unmanned systems might help that option move up the list, with each step up making war more likely. That returns us to Korb’s scenario of “more Kosovos, less Iraqs.”

While avoiding the mistakes of Iraq certainly sounds like a positive result, the other side of the tradeoff would not be without its problems. The 1990s were not the halcyon days some recall. Lowering the bar to allow for more unmanned strikes from afar would lead to an approach resembling the so-called “cruise missile diplomacy” of that period. Such strikes may leave fewer troops stuck on the ground (a result whose importance many have taken as a lesson from Iraq), but, like the strikes against Al Qaeda camps in Sudan and Afghanistan in 1998, the Kosovo war in 1999, and perhaps now the drone strikes in Pakistan, they are military endeavors without any true sense of commitment, lash-outs that yield incomplete victories at best. As one U.S. Army report notes, such operations “feel good for a time, but accomplish little.” They involve the country in a problem, but do not resolve it.

Even worse, Korb may be wrong, and the dynamic could yield not fewer Iraqs but more of them. It was the lure of an easy preemptive action that helped get the United States into such trouble in Iraq in the first place. As one robotics scientist says of the new technology he is building: “The military thinks that it will allow them to nip things in the bud, deal with the bad guys earlier and easier, rather than having to get into a big ass war. But the most likely thing that will happen is that we’ll be throwing a bunch of high tech against the usual urban guerillas. . . . It will stem the tide [of U.S. casualties], but it won’t give us some asymmetric advantage.”

Thus, robots may entail a dark irony. By appearing to lower the human costs of war, they may seduce us into more wars.


 Contributor

Dr. Peter W. Singer is director of the 21st Century Defense Initiative at the Brookings Institution and the author of Children at War (2005) and Corporate Warriors: The Rise of the Privatized Military Industry (2003). This article is adapted from his new book, Wired for War: The Robotics Revolution and Conflict in the 21st Century, just published by Penguin. Further information is available at http://wiredforwar.pwsinger.com

Disclaimer

The conclusions and opinions expressed in this document are those of the author, cultivated in the freedom of expression, academic environment of Air University. They do not reflect the official position of the U.S. Government, the Department of Defense, the United States Air Force, or Air University.

