Haugen: Facebook has become the Terminator
https://www.juancole.com/2021/10/haugen-facebook-terminator.html
Wed, 06 Oct 2021

Ann Arbor (Informed Comment) – Frances Haugen’s whistleblower testimony on Tuesday before the Senate Commerce, Science and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security highlighted Facebook’s interactions with children, but it was also a devastating indictment of the pernicious effects of the company’s algorithms on human societies across the planet. Haugen is extremely brave and a true American hero, having brought to light thousands of pages of internal memos that document the company’s bad faith.

This damaging activity is part of what brings the company $40 billion a year in profits. Haugen maintains that Facebook would still be very profitable if it dialed back the negativity, just not quite as profitable.

Haugen is a specialist in algorithmic products that underlie search and recommendation systems. Such algorithms are what make the Facebook feed go. Haugen worked at other platforms, including Google+, and she concludes that of them all, Facebook is the most evil: “the choices being made inside of Facebook are disastrous for our children, for our public safety, for our privacy, and for our democracy.”

Haugen hangs a lot on the algorithm, and not everyone may have a firm idea of what that is. The word comes from the last name of the Baghdad-based Iranian Muslim mathematician Muhammad ibn Musa al-Khwarizmi (c. 780-850 CE), whose book on algebra was extremely influential. His Latinized name was corrupted in medieval Europe into “algorithm.”

An algorithm is just a set procedure for performing a calculation. We don’t think about it, but we deploy an algorithm every time we do long division.
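
To make the idea concrete, here is a minimal sketch of that long-division procedure written out as Python; the function is purely illustrative and appears nowhere in Haugen’s testimony.

    def long_division(dividend, divisor):
        """Schoolbook long division: bring down one digit at a time, record how
        many times the divisor fits, and carry the remainder forward."""
        if divisor == 0:
            raise ValueError("cannot divide by zero")
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):              # work left to right, digit by digit
            remainder = remainder * 10 + int(digit)
            quotient_digits.append(str(remainder // divisor))
            remainder = remainder % divisor
        quotient = "".join(quotient_digits).lstrip("0") or "0"
        return quotient, remainder

    # Example: 7825 divided by 6 gives quotient "1304" and remainder 1.
    print(long_division(7825, 6))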

You could write code that says: every time a customer searches for conflict, show them ads for guns; when they click on guns, show them ads for bullets; and when they click on bullets, show them ads for funeral homes and pictures of shooting victims. At each stage, this algorithm escalates toward violence.
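
As a hedged sketch, that hypothetical rule might look like the following; the category names and the chain itself are invented here to illustrate the article’s example, and have nothing to do with Facebook’s actual code.

    # Hypothetical escalation chain following the article's example: each click
    # moves the user one step closer to violent content.
    ESCALATION_CHAIN = {
        "conflict": "gun_ads",
        "gun_ads": "bullet_ads",
        "bullet_ads": "funeral_homes_and_victim_photos",
    }

    def next_content(last_click):
        """Return the next, more extreme category; stay put at the end of the chain."""
        return ESCALATION_CHAIN.get(last_click, last_click)

    # A user who searches for "conflict" and keeps clicking is walked down the chain.
    interest = "conflict"
    for _ in range(4):
        interest = next_content(interest)
        print(interest)   # gun_ads, bullet_ads, funeral_homes_and_victim_photos, then stays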

It gets worse, because another thing that keeps people on the site is “Meaningful Social Interactions” (MSI). What Facebook wants is to foster more MSIs downstream, i.e. as time goes on. It can be innocent. Reaching out to an old high school friend is MSI. But the problem is that a knock-down drag-out shouting match between two users counts as MSI.

Haugen is saying that this is the kind of algorithm Facebook uses. The company makes money by showing you ads, and it can make more money if it can keep you on the site longer. Human beings’ most powerful feelings come from fear and anger, the fight-or-flight mechanisms that depend on pumping adrenaline into the bloodstream. They are what keep you most riveted to a scene. Facebook’s procedures, or algorithms, serve up an escalating series of images, posts and ads that push you toward these negative emotions, or they encourage negative interactions with other users that spill a lot of vitriol, making you angry or afraid, but definitely engaged.
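
One way to see why a shouting match “counts” is a toy engagement score like the one below; the weights and field names are invented for illustration and are not Facebook’s actual MSI formula, which Haugen describes only in general terms.

    from dataclasses import dataclass

    @dataclass
    class Post:
        likes: int
        comments: int          # every reply counts, friendly or furious
        reshares: int
        angry_reactions: int

    def engagement_score(post):
        """Toy 'meaningful social interaction' score: comments and reshares weigh
        most heavily, so a flame war easily outranks a quiet, pleasant post."""
        return (1.0 * post.likes + 5.0 * post.comments
                + 15.0 * post.reshares + 2.0 * post.angry_reactions)

    calm_reunion = Post(likes=40, comments=3, reshares=0, angry_reactions=0)
    flame_war = Post(likes=5, comments=60, reshares=8, angry_reactions=25)

    # The flame war scores far higher (475 vs. 55), so it is what the feed promotes.
    print(engagement_score(calm_reunion), engagement_score(flame_war))

Under any weighting along these lines, the ranking rewards whatever generates replies, and nothing generates replies like a fight.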

Haugen noted, “Engagement-based ranking and these processes of amplification, they impact all users of Facebook. The algorithms are very smart in the sense that they latch on to things that people want to continue to engage with. And unfortunately, in the case of teen girls and things like self-harm, they develop these feedback cycles where children are using Instagram to self-soothe, but then are exposed to more and more content that makes them hate themselves.”

Facebook is a machine for ramping up adrenaline, which it does daily in billions of human beings, making them more and more frightened and more and more angry. Haugen is saying that Facebook’s software engineers didn’t necessarily set out to produce these results, but they know that the site has this effect, and since it produces advertising dollars they refuse to dial it back:

    “I don’t think Facebook ever set out to intentionally promote divisive, extreme, polarizing content. I do think though that they are aware of the side effects of the choices they have made around amplification, and they know that algorithmic-based rankings, so engagement-based ranking, keeps you on their sites longer. You have long — you have longer sessions, you show up more often, and that makes them more money.”

One of the scarier implications of Haugen’s testimony is that Facebook administrators are not entirely in control of the algorithms. They can try to dial things back, but the algorithms have subroutines and they can continue to push your buttons hard even if the company puts in some dampers. The artificial intelligence knows what sets you off, and it serves more and more of that to you. When I said this on Twitter, one of my readers suggested that we are already in a Skynet scenario, from the James Cameron Terminator movies starring Arnold Schwarzenegger, where an artificial neural net develops consciousness and comes after human beings.
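
A toy model of the damper problem Haugen describes might look like this; the numbers and the compounding rule are invented purely to show how a per-user amplification term can outrun a company-wide dial-back.

    def feed_weight(clicks_on_topic, global_damper):
        """Per-user amplification compounds with every interaction; the company-wide
        damper scales it down but cannot stop the personal term from growing."""
        personal_amplification = 1.1 ** clicks_on_topic   # roughly +10% per click
        return global_damper * personal_amplification

    # Even after the platform halves its global damper, an engaged user's weight keeps climbing.
    for clicks in (0, 10, 20, 30):
        print(clicks, round(feed_weight(clicks, global_damper=0.5), 2))
    # prints: 0 0.5, 10 1.3, 20 3.36, 30 8.72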

She said,

    “During my time at Facebook, first working as the lead product manager for civic misinformation, and later on counter-espionage, I saw Facebook repeatedly encounter conflicts between its own profits and our safety. Facebook consistently resolves these conflicts in favor of its own profits.

    The result has been more division, more harm, more lies, more threats, and more combat. In some cases, this dangerous online talk has led to actual violence that harms and even kills people. This is not simply a matter of certain social media users being angry or unstable or about one side being radicalized against the other.

    It is about Facebook choosing to grow at all costs, becoming an almost trillion-dollar company by buying its profits with our safety. During my time at Facebook, I came to realize the devastating truth. Almost no one outside of Facebook knows what happens inside of Facebook. The company intentionally hides vital information from the public, from the US government, and from governments around the world.”

Haugen refers to the role Facebook has had in fanning ethnic violence:

    “What we saw in Myanmar and are now seeing in Ethiopia are only the opening chapters of a story so terrifying, no one wants to read the end of it. Congress can change the rules that Facebook plays by and stop the many harms it is now causing.”

The Muslim Rohingya minority has been ethnically cleansed and some would say genocided in Buddhist Burma or Myanmar, with 700,000 people chased out. Facebook was a primary means by which the genociders whipped up hate against the Rohingya, all of whom were blamed for the actions of a handful of radicals. It would be like all white people being blamed for the Capitol insurrection.

Facebook keeps apologizing when it is called on the carpet: oops, sorry about that genocide we helped foment. Haugen is saying that these apologies are entirely insincere. Facebook makes money by keeping people glued to its site, and it accomplishes that by making them adrenaline junkies, whimpering with terror and bursting with rage. Haugen is saying that this is the business model, and Facebook is not going to give it up unless someone makes it.

—–

Bonus video:

Facebook Whistleblower Frances Haugen testifies before Senate Commerce Committee

Artificial Intelligence Wants You (and Your Job): We’d Better Control Machines Before They Control Us
https://www.juancole.com/2021/07/artificial-intelligence-machines.html
Fri, 23 Jul 2021

By John Feffer | –

( Tomdispatch.com ) – My wife and I were recently driving in Virginia, amazed yet again that the GPS technology on our phones could guide us through a thicket of highways, around road accidents, and toward our precise destination. The artificial intelligence (AI) behind the soothing voice telling us where to turn has replaced passenger-seat navigators, maps, even traffic updates on the radio. How on earth did we survive before this technology arrived in our lives? We survived, of course, but were quite literally lost some of the time.

My reverie was interrupted by a toll booth. It was empty, as were all the other booths at this particular toll plaza. Most cars zipped through with E-Z passes, as one automated device seamlessly communicated with another. Unfortunately, our rental car didn’t have one.

So I prepared to pay by credit card, but the booth lacked a credit-card reader.

Okay, I thought, as I pulled out my wallet, I’ll use cash to cover the $3.25.

As it happened, that booth took only coins, and who drives around with 13 quarters in his or her pocket?

I would have liked to ask someone that very question, but I was, of course, surrounded by mute machines. So, I simply drove through the electronic stile, preparing myself for the bill that would arrive in the mail once that plaza’s automated system photographed and traced our license plate.

In a thoroughly mundane fashion, I’d just experienced the age-old conflict between the limiting and liberating sides of technology. The arrowhead that can get you food for dinner might ultimately end up lodged in your own skull. The car that transports you to a beachside holiday contributes to the rising tides — by way of carbon emissions and elevated temperatures — that may someday wash away that very coastal gem of a place. The laptop computer that plugs you into the cyberworld also serves as the conduit through which hackers can steal your identity and zero out your bank account.

In the previous century, technology reached a true watershed moment when humans, harnessing the power of the atom, also acquired the capacity to destroy the entire planet. Now, thanks to AI, technology is hurtling us toward a new inflection point.

Science-fiction writers and technologists have long worried about a future in which robots, achieving sentience, take over the planet. The creation of a machine with human-like intelligence that could someday fool us into believing it’s one of us has often been described, with no small measure of trepidation, as the “singularity.” Respectable scientists like Stephen Hawking have argued that such a singularity will, in fact, mark the “end of the human race.”

This will not be some impossibly remote event like the sun blowing up in a supernova several billion years from now. According to one poll, AI researchers reckon that there’s at least a 50-50 chance that the singularity will occur by 2050. In other words, if pessimists like Hawking are right, it’s odds on that robots will dispatch humanity before the climate crisis does.

Neither the artificial intelligence that powers GPS nor the kind that controlled that frustrating toll plaza has yet attained anything like human-level intelligence — not even close. But in many ways, such dumb robots are already taking over the world. Automation is currently displacing millions of workers, including those former tollbooth operators. “Smart” machines like unmanned aerial vehicles have become an indispensable part of waging war. AI systems are increasingly being deployed to monitor our every move on the Internet, through our phones, and whenever we venture into public space. Algorithms are replacing teaching assistants in the classroom and influencing sentencing in courtrooms. Some of the loneliest among us have already become dependent on robot pets.

As AI capabilities continue to improve, the inescapable political question will become: to what extent can such technologies be curbed and regulated? Yes, the nuclear genie is out of the bottle as are other technologies — biological and chemical — capable of causing mass destruction of a kind previously unimaginable on this planet. With AI, however, that day of singularity is still in the future, even if a rapidly approaching one. It should still be possible, at least theoretically, to control such an outcome before there’s nothing to do but play the whack-a-mole game of non-proliferation after the fact.

As long as humans continue to behave badly on a global scale — war, genocide, planet-threatening carbon emissions — it’s difficult to imagine that anything we create, however intelligent, will act differently. And yet we continue to dream that some deus in machina, a god in the machine, could appear as if by magic to save us from ourselves.

Taming AI?

In the early 1940s, science fiction writer Isaac Asimov formulated his famed three laws of robotics: that robots were not to harm humans, directly or indirectly; that they must obey our commands (unless doing so violates the first law); and that they must safeguard their own existence (unless self-preservation contravenes the first two laws).

Any number of writers have attempted to update Asimov. The latest is legal scholar Frank Pasquale, who has devised four laws to replace Asimov’s three. Since he’s a lawyer not a futurist, Pasquale is more concerned with controlling the robots of today than hypothesizing about the machines of tomorrow. He argues that robots and AI should help professionals, not replace them; that they should not counterfeit humans; that they should never become part of any kind of arms race; and that their creators, controllers, and owners should always be transparent.

Pasquale’s “laws,” however, run counter to the artificial-intelligence trends of our moment. The prevailing AI ethos mirrors what could be considered the prime directive of Silicon Valley: move fast and break things. This philosophy of disruption demands, above all, that technology continuously drive down labor costs and regularly render itself obsolescent.


In the global economy, AI indeed helps certain professionals — like Facebook’s Mark Zuckerberg and Amazon’s Jeff Bezos, who just happen to be among the richest people on the planet — but it’s also replacing millions of us. In the military sphere, automation is driving boots off the ground and eyes into the sky in a coming robotic world of war. And whether it’s Siri, the bots that guide increasingly frustrated callers through automated phone trees, or the AI that checks out Facebook posts, the aim has been to counterfeit human beings — “machines like me,” as Ian McEwan called them in his 2019 novel of that title — while concealing the strings that connect the creation to its creator.

Pasquale wants to apply the brakes on a train that has not only left the station but is no longer under the control of the engine driver. It’s not difficult to imagine where such a runaway phenomenon could end up, and techno-pessimists have taken a perverse delight in describing the resulting cataclysm. In his book Superintelligence, for instance, Nick Bostrom writes about a sandstorm of self-replicating nanorobots that chokes every living thing on the planet — the so-called grey goo problem — and an AI that seizes power by “hijacking political processes.”

Since they would be interested only in self-preservation and replication, not protecting humanity or following its orders, such sentient machines would clearly tear up Asimov’s rulebook. Futurists have leapt into the breach. For instance, Ray Kurzweil, who predicted in his 2005 book The Singularity Is Near that a robot would attain sentience by about 2045, has proposed a “ban on self-replicating physical entities that contain their own codes for self-replication.” Elon Musk, another billionaire industrialist who’s no enemy of innovation, has called AI humanity’s “biggest existential threat” and has come out in favor of a ban on future killer robots.

To prevent the various worst-case scenarios, the European Union has proposed to control AI according to degree of risk. Some products that fall in the EU’s “high risk” category would have to get a kind of Good Housekeeping seal of approval (the Conformité Européenne). AI systems “considered a clear threat to the safety, livelihoods, and rights of people,” on the other hand, would be subject to an outright ban. Such clear-and-present dangers would include, for instance, biometric identification that captures personal data by such means as facial recognition, as well as versions of China’s social credit system where AI helps track individuals and evaluate their overall trustworthiness.
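
Reduced to a rough table, the tiered approach might look like the sketch below; the tier names track public summaries of the EU proposal, but the example mapping is an illustration, not the regulation’s legal text.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "allowed only after conformity assessment (CE marking)"
        LIMITED = "transparency obligations"
        MINIMAL = "no new obligations"

    # Illustrative examples only, grouped by the proposal's broad categories.
    EXAMPLES = {
        "social credit-style scoring of citizens": RiskTier.UNACCEPTABLE,
        "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
        "AI used in hiring or credit decisions": RiskTier.HIGH,
        "chatbots that must disclose they are not human": RiskTier.LIMITED,
        "spam filters and video-game AI": RiskTier.MINIMAL,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.name} -> {tier.value}")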

Techno-optimists have predictably lambasted what they consider European overreach. Such controls on AI, they believe, will put a damper on R&D and, if the United States follows suit, allow China to secure an insuperable technological edge in the field. “If the member states of the EU — and their allies across the Atlantic — are serious about competing with China and retaining their power status (as well as the quality of life they provide to their citizens),” writes entrepreneur Sid Mohasseb in Newsweek, “they need to call for a redraft of these regulations, with growth and competition being seen as at least as important as regulation and safety.”

Mohasseb’s concerns are, however, misleading. The regulators he fears so much are, in fact, now playing a game of catch-up. In the economy and on the battlefield, to take just two spheres of human activity, AI has already become indispensable.

The Automation of Globalization

The ongoing Covid-19 pandemic has exposed the fragility of global supply chains. The world economy nearly ground to a halt in 2020 for one major reason: the health of human workers. The spread of infection, the risk of contagion, and the efforts to contain the pandemic all removed workers from the labor force, sometimes temporarily, sometimes permanently. Factories shut down, gaps widened in transportation networks, and shops lost business to online sellers.

A desire to cut labor costs, a major contributor to a product’s price tag, has driven corporations to look for cheaper workers overseas. For such cost-cutters, eliminating workers altogether is an even more beguiling prospect. Well before the pandemic hit, corporations had begun to turn to automation. By 2030, up to 45 million U.S. workers will be displaced by robots. The World Bank estimates that they will eventually replace an astounding 85% of the jobs in Ethiopia, 77% in China, and 72% in Thailand.

The pandemic not only accelerated this trend, but increased economic inequality as well because, at least for now, robots tend to replace the least skilled workers. In a survey conducted by the World Economic Forum, 43% of businesses indicated that they would reduce their workforces through the increased use of technology. “Since the pandemic hit,” reports NBC News,

“food manufacturers ramped up their automation, allowing facilities to maintain output while social distancing. Factories digitized controls on their machines so they could be remotely operated by workers working from home or another location. New sensors were installed that can flag, or predict, failures, allowing teams of inspectors operating on a schedule to be reduced to an as-needed maintenance crew.”

In an ideal world, robots and AI would increasingly take on all the dirty, dangerous, and demeaning jobs globally, freeing humans to do more interesting work. In the real world, however, automation is often making jobs dirtier and more dangerous by, for instance, speeding up the work done by the remaining human labor force. Meanwhile, robots are beginning to encroach on what’s usually thought of as the more interesting kinds of work done by, for example, architects and product designers.

In some cases, AI has even replaced managers. A contract driver for Amazon, Stephen Normandin, discovered that the AI system that monitored his efficiency as a deliveryman also used an automated email to fire him when it decided he wasn’t up to snuff. Jeff Bezos may be stepping down as chief executive of Amazon, but robots are quickly climbing its corporate ladder and could prove at least as ruthless as he’s been, if not more so.

Mobilizing against such a robot replacement army could prove particularly difficult as corporate executives aren’t the only ones putting out the welcome mat. Since fully automated manufacturing in “dark factories” doesn’t require lighting, heating, or a workforce that commutes to the site by car, that kind of production can reduce a country’s carbon footprint — a potentially enticing factor for “green growth” advocates and politicians desperate to meet their Paris climate targets.

It’s possible that sentient robots won’t need to devise ingenious stratagems for taking over the world. Humans may prove all too willing to give semi-intelligent machines the keys to the kingdom.

The New Fog of War

The 2020 war between Armenia and Azerbaijan proved to be unlike any previous military conflict. The two countries had been fighting since the 1980s over a disputed mountain enclave, Nagorno-Karabakh. Following the collapse of the Soviet Union, Armenia proved the clear victor in the conflict that followed in the early 1990s, occupying not only the disputed territory but parts of Azerbaijan as well.

In September 2020, as tensions mounted between the two countries, Armenia was prepared to defend those occupied territories with a well-equipped army of tanks and artillery. Thanks to its fossil-fuel exports, Azerbaijan, however, had been spending considerably more than Armenia on the most modern version of military preparedness. Still, Armenian leaders often touted their army as the best in the region. Indeed, according to the 2020 Global Militarization Index, that country was second only to Israel in terms of its level of militarization.

Yet Azerbaijan was the decisive winner in the 2020 conflict, retaking possession of Nagorno-Karabakh. The reason: automation.

“Azerbaijan used its drone fleet — purchased from Israel and Turkey — to stalk and destroy Armenia’s weapons systems in Nagorno-Karabakh, shattering its defenses and enabling a swift advance,” reported the Washington Post‘s Robyn Dixon. “Armenia found that air defense systems in Nagorno-Karabakh, many of them older Soviet systems, were impossible to defend against drone attacks, and losses quickly piled up.”

Armenian soldiers, notorious for their fierceness, were spooked by the semi-autonomous weapons regularly hovering above them. “The soldiers on the ground knew they could be hit by a drone circling overhead at any time,” noted Mark Sullivan in the business magazine Fast Company. “The drones are so quiet they wouldn’t hear the whir of the propellers until it was too late. And even if the Armenians did manage to shoot down one of the drones, what had they really accomplished? They’d merely destroyed a piece of machinery that would be replaced.”

The United States pioneered the use of drones against various non-state adversaries in its war on terror in Afghanistan, Iraq, Pakistan, Somalia, and elsewhere across the Greater Middle East and Africa. But in its 2020 campaign, Azerbaijan was using the technology to defeat a modern army. Now, every military will feel compelled not only to integrate increasingly more powerful AI into its offensive capabilities, but also to defend against the new technology.

To stay ahead of the field, the United States is predictably pouring money into the latest technologies. The new Pentagon budget includes the “largest ever” request for R&D, including a down payment of nearly a billion dollars for AI. As TomDispatch regular Michael Klare has written, the Pentagon has even taken a cue from the business world by beginning to replace its war managers — generals — with a huge, interlinked network of automated systems known as the Joint All-Domain Command-and-Control (JADC2).

The result of any such handover of greater responsibility to machines will be the creation of what mathematician Cathy O’Neil calls “weapons of math destruction.” In the global economy, AI is already replacing humans up and down the chain of production. In the world of war, AI could in the end annihilate people altogether, whether thanks to human design or computer error.

After all, during the Cold War, only last-minute interventions by individuals on both sides ensured that nuclear “missile attacks” detected by Soviet and American computers — which turned out to be birds, unusual weather, or computer glitches — didn’t precipitate an all-out nuclear war. Take the human being out of the chain of command and machines could carry out such a genocide all by themselves.

And the fault, dear reader, would lie not in our robots but in ourselves.

Robots of Last Resort

In my new novel Songlands, humanity faces a terrible set of choices in 2052. Having failed to control carbon emissions for several decades, the world is at the point of no return, too late for conventional policy fixes. The only thing left is a scientific Hail Mary pass, an experiment in geoengineering that could fail or, worse, have terrible unintended consequences. The AI responsible for ensuring the success of the experiment may or may not be trustworthy. My dystopia, like so many others, is really about a narrowing of options and a whittling away of hope, which is our current trajectory.

And yet, we still have choices. We could radically shift toward clean energy and marshal resources for the whole world, not just its wealthier portions, to make the leap together. We could impose sensible regulations on artificial intelligence. We could debate the details of such programs in democratic societies and in participatory multilateral venues.

Or, throwing up our hands because of our unbridgeable political differences, we could wait for a post-Trumpian savior to bail us out. Techno-optimists hold out hope that automation will set us free and save the planet. Laissez-faire enthusiasts continue to believe that the invisible hand of the market will mysteriously direct capital toward planet-saving innovations instead of SUVs and plastic trinkets.

These are illusions. As I write in Songlands, we have always hoped for someone or something to save us: “God, a dictator, technology. For better or worse, the only answer to our cries for help is an echo.”

In the end, robots won’t save us. That’s one piece of work that can’t be outsourced or automated. It’s a job that only we ourselves can do.

Copyright 2021 John Feffer

Via Tomdispatch.com

Do Robot Generals dream of Deceased Humans?
https://www.juancole.com/2020/08/robot-generals-humans.html
Wed, 26 Aug 2020

( Tomdispatch.com ) – With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as “autonomous weapons systems,” include robotic sentries, battlefield-surveillance drones, and autonomous submarines. So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields. Now, however, in a giant leap of faith, the Pentagon is seeking to take this process to an entirely new level — by replacing not just ordinary soldiers and their weapons, but potentially admirals and generals with robotic systems.

Admittedly, those systems are still in the development stage, but the Pentagon is now rushing their future deployment as a matter of national urgency. Every component of a modern general staff — including battle planning, intelligence-gathering, logistics, communications, and decision-making — is, according to the Pentagon’s latest plans, to be turned over to complex arrangements of sensors, computers, and software. All these will then be integrated into a “system of systems,” now dubbed the Joint All-Domain Command-and-Control, or JADC2 (since acronyms remain the essence of military life). Eventually, that amalgam of systems may indeed assume most of the functions currently performed by American generals and their senior staff officers.

The notion of using machines to make command-level decisions is not, of course, an entirely new one. It has, in truth, been a long time coming. During the Cold War, following the introduction of intercontinental ballistic missiles (ICBMs) with extremely short flight times, both military strategists and science-fiction writers began to imagine mechanical systems that would control such nuclear weaponry in the event of human incapacity.

In Stanley Kubrick’s satiric 1964 movie Dr. Strangelove, for example, the fictional Russian leader Dimitri Kissov reveals that the Soviet Union has installed a “doomsday machine,” capable of obliterating all human life, that would detonate automatically should the country come under attack by American nuclear forces. Efforts by crazed anti-Soviet U.S. Air Force officers to provoke a war with Moscow then succeed in triggering that machine and so bring about human annihilation. In reality, fearing that they might experience a surprise attack of just this sort, the Soviets later did install a semi-automatic retaliatory system they dubbed “Perimeter,” designed to launch Soviet ICBMs in the event that sensors detected nuclear explosions and all communications from Moscow had been silenced. Some analysts believe that an upgraded version of Perimeter is still in operation, leaving us in an all-too-real version of a Strangelovian world.

In yet another sci-fi version of such automated command systems, the 1983 film WarGames, starring Matthew Broderick as a teenage hacker, portrayed a supercomputer called the War Operations Plan Response, or WOPR (pronounced “whopper”), installed at North American Aerospace Defense Command (NORAD) headquarters in Colorado. When the Broderick character hacks into it and starts playing what he believes is a game called “World War III,” the computer concludes an actual Soviet attack is underway and launches a nuclear retaliatory response. Although fictitious, the movie accurately depicts many aspects of the U.S. nuclear command-control-and-communications (NC3) system, which was then and still remains highly automated.

Such devices, both real and imagined, were relatively primitive by today’s standards, being capable solely of determining that a nuclear attack was under way and ordering a catastrophic response. Now, as a result of vast improvements in artificial intelligence (AI) and machine learning, machines can collect and assess massive amounts of sensor data, swiftly detect key trends and patterns, and potentially issue orders to combat units as to where to attack and when.

Time Compression and Human Fallibility

The substitution of intelligent machines for humans at senior command levels is becoming essential, U.S. strategists argue, because an exponential growth in sensor information combined with the increasing speed of warfare is making it nearly impossible for humans to keep track of crucial battlefield developments. If future scenarios prove accurate, battles that once unfolded over days or weeks could transpire in the space of hours, or even minutes, while battlefield information will be pouring in as multitudinous data points, overwhelming staff officers. Only advanced computers, it is claimed, could process so much information and make informed combat decisions within the necessary timeframe.

Such time compression and the expansion of sensor data may apply to any form of combat, but especially to the most terrifying of them all, nuclear war. When ICBMs were the principal means of such combat, decisionmakers had up to 30 minutes between the time a missile was launched and the moment of detonation in which to determine whether a potential attack was real or merely a false satellite reading (as did sometimes occur during the Cold War). Now, that may not sound like much time, but with the recent introduction of hypersonic missiles, such assessment times could shrink to as little as five minutes. Under such circumstances, it’s a lot to expect even the most alert decision-makers to reach an informed judgment on the nature of a potential attack. Hence the appeal (to some) of automated decision-making systems.

“Attack-time compression has placed America’s senior leadership in a situation where the existing NC3 system may not act rapidly enough,” military analysts Adam Lowther and Curtis McGiffin argued at War on the Rocks, a security-oriented website. “Thus, it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

This notion, that an artificial intelligence-powered device — in essence, a more intelligent version of the doomsday machine or the WOPR — should be empowered to assess enemy behavior and then, on the basis of “predetermined response options,” decide humanity’s fate, has naturally produced some unease in the community of military analysts (as it should for the rest of us as well). Nevertheless, American strategists continue to argue that battlefield assessment and decision-making — for both conventional and nuclear warfare — should increasingly be delegated to machines.

“AI-powered intelligence systems may provide the ability to integrate and sort through large troves of data from different sources and geographic locations to identify patterns and highlight useful information,” the Congressional Research Service noted in a November 2019 summary of Pentagon thinking. “As the complexity of AI systems matures,” it added, “AI algorithms may also be capable of providing commanders with a menu of viable courses of action based on real-time analysis of the battlespace, in turn enabling faster adaptation to complex events.”

The key wording there is “a menu of viable courses of action based on real-time analysis of the battlespace.” This might leave the impression that human generals and admirals (not to speak of their commander-in-chief) will still be making the ultimate life-and-death decisions for both their own forces and the planet. Given such anticipated attack-time compression in future high-intensity combat with China and/or Russia, however, humans may no longer have the time or ability to analyze the battlespace themselves and so will come to rely on AI algorithms for such assessments. As a result, human commanders may simply find themselves endorsing decisions made by machines — and so, in the end, become superfluous.

Creating Robot Generals

Despite whatever misgivings they may have about their future job security, America’s top generals are moving swiftly to develop and deploy that JADC2 automated command mechanism. Overseen by the Air Force, it’s proving to be a computer-driven amalgam of devices for collecting real-time intelligence on enemy forces from vast numbers of sensor devices (satellites, ground radars, electronic listening posts, and so on), processing that data into actionable combat information, and providing precise attack instructions to every combat unit and weapons system engaged in a conflict — whether belonging to the Army, Navy, Air Force, Marine Corps, or the newly formed Space Force and Cyber Command.

What, exactly, the JADC2 will consist of is not widely known, partly because many of its component systems are still shrouded in secrecy and partly because much of the essential technology is still in the development stage. Delegated with responsibility for overseeing the project, the Air Force is working with Lockheed Martin and other large defense contractors to design and develop key elements of the system.

One such building block is its Advanced Battle Management System (ABMS), a data-collection and distribution system intended to provide fighter pilots with up-to-the-minute data on enemy positions and help guide their combat moves. Another key component is the Army’s Integrated Air and Missile Defense Battle Command System (IBCS), designed to connect radar systems to anti-aircraft and missile-defense launchers and provide them with precise firing instructions. Over time, the Air Force and its multiple contractors will seek to integrate ABMS and IBCS into a giant network of systems connecting every sensor, shooter, and commander in the country’s armed forces — a military “internet of things,” as some have put it.

To test this concept and provide an example of how it might operate in the future, the Army conducted a live-fire artillery exercise this August in Germany using components (or facsimiles) of the future JADC2 system. In the first stage of the test, satellite images of (presumed) Russian troop positions were sent to an Army ground terminal, where an AI software program called Prometheus combed through the data to select enemy targets. Next, another AI program called SHOT computed the optimal match of available Army weaponry to those intended targets and sent this information, along with precise firing coordinates, to the Army’s Advanced Field Artillery Tactical Data System (AFATDS) for immediate action, where human commanders could choose to implement it or not. In the exercise, those human commanders had the mental space to give the matter a moment’s thought; in a shooting war, they might just leave everything to the machines, as the system’s designers clearly intend them to do.
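
In stripped-down form, that exercise’s pipeline can be sketched as the three stages below; the function names merely echo the reported components (Prometheus for target selection, SHOT for weapon matching, AFATDS for execution), and everything here is a simplified illustration rather than the actual software.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        target_id: str
        confidence: float        # how sure the image-analysis stage is

    @dataclass
    class FireMission:
        target_id: str
        weapon: str

    def select_targets(detections, threshold=0.8):
        """Stage 1 (the 'Prometheus' role): keep only high-confidence detections."""
        return [d for d in detections if d.confidence >= threshold]

    def match_weapons(targets, available_weapons):
        """Stage 2 (the 'SHOT' role): naive pairing of targets to available weapons."""
        return [FireMission(t.target_id, w) for t, w in zip(targets, available_weapons)]

    def execute(missions, human_approves):
        """Stage 3 (handoff to fire control): the human gate the article worries about."""
        return [m for m in missions if human_approves(m)]

    detections = [Detection("grid-041", 0.93), Detection("grid-077", 0.55)]
    missions = match_weapons(select_targets(detections), ["battery-A"])
    approved = execute(missions, human_approves=lambda m: True)   # the human just says yes
    print(approved)

The telling line is the last one: once the approval callback always returns True, the loop has effectively closed without a human in it, which is exactly the drift the article describes.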

In the future, the Army is planning even more ambitious tests of this evolving technology under an initiative called Project Convergence. From what’s been said publicly about it, Convergence will undertake ever more complex exercises involving satellites, Air Force fighters equipped with the ABMS system, Army helicopters, drones, artillery pieces, and tactical vehicles. Eventually, all of this will form the underlying “architecture” of the JADC2, linking every military sensor system to every combat unit and weapons system — leaving the generals with little to do but sit by and watch.

Why Robot Generals Could Get It Wrong

Given the complexity of modern warfare and the challenge of time compression in future combat, the urge of American strategists to replace human commanders with robotic ones is certainly understandable. Robot generals and admirals might theoretically be able to process staggering amounts of information in brief periods of time, while keeping track of both friendly and enemy forces and devising optimal ways to counter enemy moves on a future battlefield. But there are many good reasons to doubt the reliability of robot decision-makers and the wisdom of using them in place of human officers.

To begin with, many of these technologies are still in their infancy, and almost all are prone to malfunctions that can neither be easily anticipated nor understood. And don’t forget that even advanced algorithms can be fooled, or “spoofed,” by skilled professionals.

In addition, unlike humans, AI-enabled decision-making systems will lack an ability to assess intent or context. Does a sudden enemy troop deployment, for example, indicate an imminent attack, a bluff, or just a normal rotation of forces? Human analysts can use their understanding of the current political moment and the actors involved to help guide their assessment of the situation. Machines lack that ability and may assume the worst, initiating military action that could have been avoided.

Such a problem will only be compounded by the “training” such decision-making algorithms will undergo as they are adapted to military situations. Just as facial recognition software has proved to be tainted by an over-reliance on images of white males in the training process — making them less adept at recognizing, say, African-American women — military decision-making algorithms are likely to be distorted by an over-reliance on the combat-oriented scenarios selected by American military professionals for training purposes. “Worst-case thinking” is a natural inclination of such officers — after all, who wants to be caught unprepared for a possible enemy surprise attack? — and such biases will undoubtedly become part of the “menus of viable courses of action” provided by decision-making robots.

Once integrated into decision-making algorithms, such biases could, in turn, prove exceedingly dangerous in any future encounters between U.S. and Russian troops in Europe or American and Chinese forces in Asia. A clash of this sort might, after all, arise at any time, thanks to some misunderstanding or local incident that rapidly gains momentum — a sudden clash between U.S. and Chinese warships off Taiwan, for example, or between American and Russian patrols in one of the Baltic states. Neither side may have intended to ignite a full-scale conflict and leaders on both sides might normally move to negotiate a cease-fire. But remember, these will no longer simply be human conflicts. In the wake of such an incident, the JADC2 could detect some enemy move that it determines poses an imminent risk to allied forces and so immediately launch an all-out attack by American planes, missiles, and artillery, escalating the conflict and foreclosing any chance of an early negotiated settlement.

Such prospects become truly frightening when what’s at stake is the onset of nuclear war. It’s hard to imagine any conflict among the major powers starting out as a nuclear war, but it’s far easier to envision a scenario in which the great powers — after having become embroiled in a conventional conflict — reach a point where one side or the other considers the use of atomic arms to stave off defeat. American military doctrine, in fact, has always held out the possibility of using so-called tactical nuclear weapons in response to a massive Soviet (now Russian) assault in Europe. Russian military doctrine, it is widely assumed, incorporates similar options. Under such circumstances, a future JADC2 could misinterpret enemy moves as signaling preparation for a nuclear launch and order a pre-emptive strike by U.S. nuclear forces, thereby igniting World War III.

War is a nasty, brutal activity and, given almost two decades of failed conflicts that have gone under the label of “the war on terror,” causing thousands of American casualties (both physical and mental), it’s easy to understand why robot enthusiasts are so eager to see another kind of mentality take over American war-making. As a start, they contend, especially in a pandemic world, that it’s only humane to replace human soldiers on the battlefield with robots and so diminish human casualties (at least among combatants). This claim does not, of course, address the argument that robot soldiers and drone aircraft lack the ability to distinguish between combatants and non-combatants on the battlefield and so cannot be trusted to comply with the laws of war or international humanitarian law — which, at least theoretically, protect civilians from unnecessary harm — and so should be banned.

Fraught as all of that may be on future battlefields, replacing generals and admirals with robots is another matter altogether. Not only do legal and moral arguments arise with a vengeance, as the survival of major civilian populations could be put at risk by computer-derived combat decisions, but there’s no guarantee that American GIs would suffer fewer casualties in the battles that ensued. Maybe it’s time, then, for Congress to ask some tough questions about the advisability of automating combat decision-making before this country pours billions of additional taxpayer dollars into an enterprise that could, in fact, lead to the end of the world as we know it. Maybe it’s time as well for the leaders of China, Russia, and this country to limit or ban the deployment of hypersonic missiles and other weaponry that will compress life-and-death decisions for humanity into just a few minutes, thereby justifying the automation of such fateful judgments.

Michael T. Klare, a TomDispatch regular, is the five-college professor emeritus of peace and world security studies at Hampshire College and a senior visiting fellow at the Arms Control Association. He is the author of 15 books, the latest of which is All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change.

Follow TomDispatch on Twitter and join us on Facebook. Check out the newest Dispatch Books, John Feffer’s new dystopian novel (the second in the Splinterlands series) Frostlands, Beverly Gologorsky’s novel Every Body Has a Story, and Tom Engelhardt’s A Nation Unmade by War, as well as Alfred McCoy’s In the Shadows of the American Century: The Rise and Decline of U.S. Global Power and John Dower’s The Violent American Century: War and Terror Since World War II.

Copyright 2020 Michael T. Klare

Via Tomdispatch.com

—–

Bonus Video added by Informed Comment:

CNET Highlights: “Watch DARPA’s AI vs. Human in Virtual F-16 Aerial Dogfight (FINALS)”
