AI Behavior, Human Destiny and the Rise of the Killer Robots
https://www.juancole.com/2024/02/behavior-destiny-killer.html (Wed, 21 Feb 2024)

( Tomdispatch.com ) – Yes, it’s already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice-versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it’s anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called “killer robots” by critics, such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its “collaborative combat aircraft,” an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they’ll be able to communicate with each other without human intervention and, being “intelligent,” will be able to come up with their own unscripted tactics for defeating an enemy — or something else entirely. Such computer-driven groupthink, labeled “emergent behavior” by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, “uninhabited”) versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they’ll be part of a “networked” combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a “loyal wingman” for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot “Swarms”

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

“Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces,” predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). “Networked, cooperative autonomous systems,” he wrote then, “will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole.”

As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and “vote” on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit “swarm” behavior in nature. As Scharre put it, “Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse.”
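To make the idea of swarm “voting” concrete, here is a minimal sketch, in Python, of how distributed agents might converge on a shared choice without any central commander. It is purely illustrative: every name in it (Drone, swarm_decision, the candidate options) is hypothetical and bears no relation to any actual military software; the point is only that the group outcome emerges from a tally of local judgments.

```python
# Illustrative sketch only (not any real military system): a toy model of how a
# distributed swarm might "vote" on a preferred course of action. All names
# (Drone, swarm_decision, the options) are hypothetical, invented for this example.
import random
from collections import Counter

class Drone:
    def __init__(self, drone_id):
        self.drone_id = drone_id

    def score_option(self, option):
        # Each drone evaluates an option from its own noisy, local viewpoint.
        return option["estimated_value"] + random.gauss(0, 0.5)

    def cast_vote(self, options):
        # A drone votes for whichever option looks best from where it sits.
        return max(options, key=self.score_option)["name"]

def swarm_decision(drones, options):
    """Tally the individual votes; the swarm commits to the plurality winner."""
    votes = Counter(d.cast_vote(options) for d in drones)
    return votes.most_common(1)[0][0], votes

if __name__ == "__main__":
    options = [
        {"name": "option_A", "estimated_value": 2.0},
        {"name": "option_B", "estimated_value": 2.2},
    ]
    swarm = [Drone(i) for i in range(50)]
    choice, tally = swarm_decision(swarm, options)
    print(f"swarm chose {choice}: {tally}")
    # No single drone dictates the outcome: the "decision" emerges from the tally.
```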

In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre’s at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.

From Mosaic to Replicator

Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s in-house high-tech research organization. As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its “Mosaic” program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.

“Applying the great flexibility of the mosaic concept to warfare,” explained Dan Patt, deputy director of DARPA’s Strategic Technology Office, “lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole.”

This concept of warfare apparently undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. “Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army (PLA). “To stay ahead, we’re going to create a new state of the art… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force’s Collaborative Combat Aircraft and the Navy’s Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment’s Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.

At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To that end, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what it, ominously enough, calls Project VENOM, or “Viper Experimentation and Next-generation Operations Model.” Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.

When Swarms Choose Their Own Path

In other words, it’s only a matter of time before the U.S. military (and presumably China’s, Russia’s, and perhaps those of a few other powers) will be able to deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Any participating robotic member of such swarms would be given a mission objective (“seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates”) but not be given precise instructions on how to do so. That would allow them to select their own battle tactics in consultation with one another. If the limited test data we have is anything to go by, this could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.

The propensity of such interconnected AI systems to produce novel, unplanned outcomes is what computer experts call “emergent behavior.” As ScienceDirect, a digest of scientific journals, explains it, “An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties.” In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.
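For readers who want a concrete feel for emergent behavior in the textbook sense quoted above, here is a toy simulation in Python, a minimal sketch assuming nothing about weapons at all. Each agent follows one local rule, matching the average heading of its nearby neighbors, yet the group as a whole ends up moving in a coordinated direction that no individual rule ever specified.

```python
# Illustrative sketch only: "emergent behavior" in the textbook sense -- a global
# pattern (here, a shared direction of travel) arising from purely local rules
# that never mention any global goal. A toy Vicsek-style alignment model, not a
# model of any weapons system; all constants are arbitrary.
import math, random

N, RADIUS, NOISE, STEPS, SIZE = 60, 0.2, 0.3, 50, 1.0

agents = [{"x": random.random(), "y": random.random(),
           "theta": random.uniform(-math.pi, math.pi)} for _ in range(N)]

def order_parameter(agents):
    # 1.0 means everyone moves the same way; near 0 means headings are random.
    vx = sum(math.cos(a["theta"]) for a in agents) / len(agents)
    vy = sum(math.sin(a["theta"]) for a in agents) / len(agents)
    return math.hypot(vx, vy)

for step in range(STEPS):
    new_thetas = []
    for a in agents:
        # Local rule: average the headings of nearby agents, plus a little noise.
        neighbors = [b for b in agents
                     if math.hypot(a["x"] - b["x"], a["y"] - b["y"]) < RADIUS]
        mean_sin = sum(math.sin(b["theta"]) for b in neighbors) / len(neighbors)
        mean_cos = sum(math.cos(b["theta"]) for b in neighbors) / len(neighbors)
        new_thetas.append(math.atan2(mean_sin, mean_cos) + random.uniform(-NOISE, NOISE))
    for a, theta in zip(agents, new_thetas):
        a["theta"] = theta
        a["x"] = (a["x"] + 0.01 * math.cos(theta)) % SIZE
        a["y"] = (a["y"] + 0.01 * math.sin(theta)) % SIZE

print(f"alignment after {STEPS} steps: {order_parameter(agents):.2f}")
# Typically close to 1.0: a coordinated group emerges, though no agent was told to form one.
```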

At this point, of course, it’s almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or if, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.

What then? Might they choose to keep fighting beyond their preprogrammed limits, provoking unintended escalation — even, conceivably, of a nuclear kind? Or would they choose to stop their attacks on enemy forces and instead interfere with the operations of friendly ones, perhaps firing on and devastating them (as Skynet does in the classic science fiction Terminator movie series)? Or might they engage in behaviors that, for better or infinitely worse, are entirely beyond our imagination?

Top U.S. military and diplomatic officials insist that AI can indeed be used without incurring such future risks and that this country will only employ devices that incorporate thoroughly adequate safeguards against any future dangerous misbehavior. That is, in fact, the essential point made in the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” issued by the State Department in February 2023. Many prominent security and technology officials are, however, all too aware of the potential risks of emergent behavior in future robotic weaponry and continue to issue warnings against the rapid utilization of AI in warfare.

Of particular note is the final report that the National Security Commission on Artificial Intelligence issued in February 2021. Co-chaired by Robert Work (back at CNAS after his stint at the Pentagon) and Eric Schmidt, former CEO of Google, the commission recommended the rapid utilization of AI by the U.S. military to ensure victory in any future conflict with China and/or Russia. However, it also voiced concern about the potential dangers of robot-saturated battlefields.

“The unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” the report noted. This could occur for a number of reasons, including “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems [that is, emergent behaviors] on the battlefield.” Given that danger, it concluded, “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems.”

When the leading advocates of autonomous weaponry tell us to be concerned about the unintended dangers posed by their use in battle, the rest of us should be worried indeed. Even if we lack the mathematical skills to understand emergent behavior in AI, it should be obvious that humanity could face a significant risk to its existence, should killing machines acquire the ability to think on their own. Perhaps they would surprise everyone and decide to take on the role of international peacekeepers, but given that they’re being designed to fight and kill, it’s far more probable that they might simply choose to carry out those instructions in an independent and extreme fashion.

If so, there could be no one around to put an R.I.P. on humanity’s gravestone.

Via Tomdispatch.com

AI vs. AI: And Human Extinction as Collateral Damage
https://www.juancole.com/2023/07/extinction-collateral-damage.html (Wed, 12 Jul 2023)

( Tomdispatch.com ) – A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes. But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film “WarGames,” a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The “Terminator” movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending “fire” instructions directly to “shooters,” largely bypassing human control.

“A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement,” was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that “we do need to change the name” as the system evolves, Roper added, “I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don’t think we can go there.”

And while he can’t go there, that’s just where the rest of us may, indeed, be going.

Mind you, that’s only the start. In fact, the Air Force’s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced “Jad-C-two”). “JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon… to engage the target,” the Congressional Research Service reported in 2022.

AI and the Nuclear Trigger

Initially, JADC2 will be designed to coordinate combat operations among “conventional” or non-nuclear American forces. Eventually, however, it is expected to link up with the Pentagon’s nuclear command-control-and-communications systems (NC3), potentially giving computers significant control over the use of the American nuclear arsenal. “JADC2 and NC3 are intertwined,” General John E. Hyten, vice chairman of the Joint Chiefs of Staff, indicated in a 2020 interview. As a result, he added in typical Pentagonese, “NC3 has to inform JADC2 and JADC2 has to inform NC3.”

It doesn’t require great imagination to picture a time in the not-too-distant future when a crisis of some sort — say a U.S.-China military clash in the South China Sea or near Taiwan — prompts ever more intense fighting between opposing air and naval forces. Imagine then the JADC2 ordering the intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on U.S. facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.

The possibility that nightmare scenarios of this sort could result in the accidental or unintended onset of nuclear war has long troubled analysts in the arms control community. But the growing automation of military C2 systems has generated anxiety not just among them but among senior national security officials as well.

As early as 2019, when I questioned Lieutenant General Jack Shanahan, then director of the Pentagon’s Joint Artificial Intelligence Center, about such a risky possibility, he responded, “You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control.” This “is the ultimate human decision that needs to be made” and so “we have to be very careful.” Given the technology’s “immaturity,” he added, we need “a lot of time to test and evaluate [before applying AI to NC3].”

In the years since, despite such warnings, the Pentagon has been racing ahead with the development of automated C2 systems. In its budget submission for 2024, the Department of Defense requested $1.4 billion for the JADC2 in order “to transform warfighting capability by delivering information advantage at the speed of relevance across all domains and partners.” Uh-oh! And then, it requested another $1.8 billion for other kinds of military-related AI research.

Pentagon officials acknowledge that it will be some time before robot generals will be commanding vast numbers of U.S. troops (and autonomous weapons) in battle, but they have already launched several projects intended to test and perfect just such linkages. One example is the Army’s Project Convergence, involving a series of field exercises designed to validate ABMS and JADC2 component systems. In a test held in August 2020 at the Yuma Proving Ground in Arizona, for example, the Army used a variety of air- and ground-based sensors to track simulated enemy forces and then process that data using AI-enabled computers at Joint Base Lewis-McChord in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. “This entire sequence was supposedly accomplished within 20 seconds,” the Congressional Research Service later reported.

Less is known about the Navy’s AI equivalent, “Project Overmatch,” as many aspects of its programming have been kept secret. According to Admiral Michael Gilday, chief of naval operations, Overmatch is intended “to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain.” Little else has been revealed about the project.

“Flash Wars” and Human Extinction

Despite all the secrecy surrounding these projects, you can think of ABMS, JADC2, Convergence, and Overmatch as building blocks for a future Skynet-like mega-network of super-computers designed to command all U.S. forces, including its nuclear ones, in armed combat. The more the Pentagon moves in that direction, the closer we’ll come to a time when AI possesses life-or-death power over all American soldiers along with opposing forces and any civilians caught in the crossfire.

Such a prospect should be ample cause for concern. To start with, consider the risk of errors and miscalculations by the algorithms at the heart of such systems. As top computer scientists have warned us, those algorithms are capable of remarkably inexplicable mistakes and, to use the AI term of the moment, “hallucinations” — that is, seemingly reasonable results that are entirely illusionary. Under the circumstances, it’s not hard to imagine such computers “hallucinating” an imminent enemy attack and launching a war that might otherwise have been avoided.

And that’s not the worst of the dangers to consider. After all, there’s the obvious likelihood that America’s adversaries will similarly equip their forces with robot generals. In other words, future wars are likely to be fought by one set of AI systems against another, both linked to nuclear weaponry, with entirely unpredictable — but potentially catastrophic — results.

Not much is known (from public sources at least) about Russian and Chinese efforts to automate their military command-and-control systems, but both countries are thought to be developing networks comparable to the Pentagon’s JADC2. As early as 2014, in fact, Russia inaugurated a National Defense Control Center (NDCC) in Moscow, a centralized command post for assessing global threats and initiating whatever military action is deemed necessary, whether of a non-nuclear or nuclear nature. Like JADC2, the NDCC is designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.

China is said to be pursuing an even more elaborate, if similar, enterprise under the rubric of “Multi-Domain Precision Warfare” (MDPW). According to the Pentagon’s 2022 report on Chinese military developments, its military, the People’s Liberation Army, is being trained and equipped to use AI-enabled sensors and computer networks to “rapidly identify key vulnerabilities in the U.S. operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.”

Picture, then, a future war between the U.S. and Russia or China (or both) in which the JADC2 commands all U.S. forces, while Russia’s NDCC and China’s MDPW command those countries’ forces. Consider, as well, that all three systems are likely to experience errors and hallucinations. How safe will humans be when robot generals decide that it’s time to “win” the war by nuking their enemies?

If this strikes you as an outlandish scenario, think again, at least according to the leadership of the National Security Commission on Artificial Intelligence, a congressionally mandated enterprise that was chaired by Eric Schmidt, former head of Google, and Robert Work, former deputy secretary of defense. “While the Commission believes that properly designed, tested, and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit, the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” it affirmed in its Final Report. Such dangers could arise, it stated, “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems on the battlefield” — when, that is, AI fights AI.

Though this may seem an extreme scenario, it’s entirely possible that opposing AI systems could trigger a catastrophic “flash war” — the military equivalent of a “flash crash” on Wall Street, when huge transactions by super-sophisticated trading algorithms spark panic selling before human operators can restore order. In the infamous “Flash Crash” of May 6, 2010, computer-driven trading precipitated a 10% fall in the stock market’s value. According to Paul Scharre of the Center for a New American Security, who first studied the phenomenon, “the military equivalent of such crises” on Wall Street would arise when the automated command systems of opposing forces “become trapped in a cascade of escalating engagements.” In such a situation, he noted, “autonomous weapons could lead to accidental death and destruction at catastrophic scales in an instant.”
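A crude way to get a feel for the “flash war” dynamic Scharre describes is a toy feedback loop in which two automated systems each answer the other’s last move with a slightly stronger one. The Python sketch below is an illustration of that cascade only: the escalation factor, thresholds, and timing are invented numbers, and no real command system is modeled.

```python
# Illustrative sketch only: a toy feedback loop in the spirit of the "flash war"
# analogy -- two automated systems that each respond to the other's last move
# with a slightly stronger one. All constants are invented; the point is only
# how quickly a simple tit-for-tat-plus rule can run away.
ESCALATION_FACTOR = 1.5    # each side answers with 50% more force than it received
CEASEFIRE_THRESHOLD = 100  # notional level at which humans would want to step in
HUMAN_REACTION_TIME = 10   # steps before a human could plausibly intervene

def simulate(initial_incident=1.0):
    level_a = initial_incident
    level_b = 0.0
    for step in range(1, 50):
        level_b = level_a * ESCALATION_FACTOR   # system B answers A
        level_a = level_b * ESCALATION_FACTOR   # system A answers B
        print(f"step {step:2d}: A={level_a:8.1f}  B={level_b:8.1f}")
        if max(level_a, level_b) > CEASEFIRE_THRESHOLD:
            when = "before" if step < HUMAN_REACTION_TIME else "after"
            print(f"crossed the threshold at step {step} ({when} a human could react)")
            return

simulate()
```

With these made-up numbers the notional ceasefire threshold is crossed within a handful of automated exchanges, well before the assumed human reaction time, which is the essence of the concern.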

At present, there are virtually no measures in place to prevent a future catastrophe of this sort or even talks among the major powers to devise such measures. Yet, as the National Security Commission on Artificial Intelligence noted, such crisis-control measures are urgently needed to integrate “automated escalation tripwires” into such systems “that would prevent the automated escalation of conflict.” Otherwise, some catastrophic version of World War III seems all too possible. Given the dangerous immaturity of such technology and the reluctance of Beijing, Moscow, and Washington to impose any restraints on the weaponization of AI, the day when machines could choose to annihilate us might arrive far sooner than we imagine and the extinction of humanity could be the collateral damage of such a future war.

Via Tomdispatch.com

By 2030, Today’s World will be Made Over
https://www.juancole.com/2020/12/2030-todays-world.html (Tue, 08 Dec 2020)

This article reviews Mauro F. Guillén’s recent book, 2030: How Today’s Biggest Trends Will Collide and Reshape the Future of Everything (New York: St. Martin’s Press, 2020). $28.99.

Guillén is the Zandman Professor in International Management at the Wharton School of the University of Pennsylvania and an expert on global market trends. The book has eight chapters, a conclusion, and a postscript on the impact of COVID-19 on the trends he projects for 2030. It serves as a roadmap for navigating the next decade, since today’s world will be remade by 2030, and the pace of change has left many people nervous about what the future has in store for all of us. Among the biggest changes by 2030 will be technological ones, such as 3-D printing, artificial intelligence, and nanotechnologies.

Technology will generate creative solutions but also create problems: as a result of autonomous vehicle technology, for example, about three million truck drivers may lose their jobs. Autonomous vehicles have a bright future, since a computer can adapt to road and traffic conditions and plan a complex trip. A single robot in the manufacturing sector can displace five to six workers. By 2030 there will be more robotic arms than human ones, more computers than human beings, and more sensors than eyes. Artificial intelligence (AI) can perform tasks that human brains do, such as making decisions, recognizing speech, and processing visual information.

Today, it takes about a decade of higher education and years of training to become a first-rate surgeon. Yet in 2016, the Smart Tissue Autonomous Robot (STAR) used its own intelligence, tools, and vision to stitch together the small intestine of a pig. It did a better job than human surgeons given the same task: the robot’s stitches were more consistent and more resistant to leaks than the sutures of the human surgeons. And an added bonus: robots are neither temperamental nor judgmental.

New 3-D printers create an object by printing very thin layers in sequence and stacking them on top of each other until a three-dimensional shape emerges. By using only the exact amount of material needed to make, say, a dental piece or replacement human tissue, they reduce waste, and less carbon is put into the atmosphere as goods are made with less material. The clincher is this: companies will need much less shipping as mini-factories and printer farms are built closer to customers, and they are learning to manufacture items to real-time demand instead of storing supplies in warehouses. That matters because freight transportation makes up 25% of all carbon emissions in affluent nations.

Cities can use 3-D printed seawalls to counter flooding and storm surge. The company Branch Technologies uses Cellular Fabrication (C-FAB™), that is, industrial robots, complex algorithms, and freeform extrusion technology that allows material to solidify in free space. The same company is perfecting a new construction product that is stronger, lighter, and faster to erect on-site, offers ten times greater design freedom, and uses a waste-free process. It built the largest 3-D printed structure in the world, a bandshell at a park in Nashville. China is using 3-D printing to produce entire homes, which may aid in recovering from natural disasters like earthquakes and cyclones.

Nanotechnologies can go a long way toward arresting climate change. The clothing industry accounts for roughly eight percent of total carbon emissions. Nanotechnology can engineer particles as small as one billionth of a meter, making possible cheaper, stronger, and more environmentally friendly materials that are programmable, i.e., materials able to change their shape, conductivity, density, or optical properties in response to sensors. Researchers have discovered a material that tightens in cold weather to provide warmth yet provides relief from heat in summer. We now have nanomedicine as well: drugs can be delivered to cancerous cells with enormous precision, nanoscale diagnostics can detect ovarian cancer when the disease affects as few as one hundred cells, and nanorobots can be programmed to transport molecular payloads and block a tumor’s blood supply on-site, causing tissue death and shrinking the tumor.

Women in the US, Europe, and East Asia are having fewer children, yet there’s a baby boom in Africa, whose population of 1.3 billion people will grow to 2 billion by 2038. One might surmise that Africa couldn’t sustain such population growth, but that’s not the case. Africa’s landmass is humongous, i.e., about as big as India, China, Japan, and the US plus Western and Eastern Europe combined. Will Africa be able to feed its increased population? The World Bank says that by 2030 Africa’s agriculture may well become a trillion-dollar sector, transforming the entire global economy.

Though Africa imports food today, it is positioned to undergo an agricultural and industrial revolution. It must cultivate 500 million acres of land, an area about the size of Mexico, while greatly improving productivity, for instance by growing and processing cassava, the third most important source of carbohydrates in the developing world after rice and maize. Over 300 million people in sub-Saharan Africa rely on cassava for their daily dietary needs, in part because it contains less sugar than wheat.

This is a remarkable book, one that I cannot do justice to in a short article. The author has certainly done his homework: the book contains 27 pages of notes that enlighten the reader. It is difficult to praise it too highly.

Do Robot Generals dream of Deceased Humans?
https://www.juancole.com/2020/08/robot-generals-humans.html (Wed, 26 Aug 2020)

( Tomdispatch.com ) – With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as “autonomous weapons systems,” include robotic sentries, battlefield-surveillance drones, and autonomous submarines. So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields. Now, however, in a giant leap of faith, the Pentagon is seeking to take this process to an entirely new level — by replacing not just ordinary soldiers and their weapons, but potentially admirals and generals with robotic systems.

Admittedly, those systems are still in the development stage, but the Pentagon is now rushing their future deployment as a matter of national urgency. Every component of a modern general staff — including battle planning, intelligence-gathering, logistics, communications, and decision-making — is, according to the Pentagon’s latest plans, to be turned over to complex arrangements of sensors, computers, and software. All these will then be integrated into a “system of systems,” now dubbed the Joint All-Domain Command-and-Control, or JADC2 (since acronyms remain the essence of military life). Eventually, that amalgam of systems may indeed assume most of the functions currently performed by American generals and their senior staff officers.

The notion of using machines to make command-level decisions is not, of course, an entirely new one. It has, in truth, been a long time coming. During the Cold War, following the introduction of intercontinental ballistic missiles (ICBMs) with extremely short flight times, both military strategists and science-fiction writers began to imagine mechanical systems that would control such nuclear weaponry in the event of human incapacity.

In Stanley Kubrick’s satiric 1964 movie Dr. Strangelove, for example, the fictional Russian leader Dimitri Kissov reveals that the Soviet Union has installed a “doomsday machine” capable of obliterating all human life that would detonate automatically should the country come under attack by American nuclear forces. Efforts by crazed anti-Soviet U.S. Air Force officers to provoke a war with Moscow then succeed in triggering that machine and so bring about human annihilation. In reality, fearing that they might experience a surprise attack of just this sort, the Soviets later did install a semi-automatic retaliatory system they dubbed “Perimeter,” designed to launch Soviet ICBMs in the event that sensors detected nuclear explosions and all communications from Moscow had been silenced. Some analysts believe that an upgraded version of Perimeter is still in operation, leaving us in an all-too-real version of a Strangelovian world.

In yet another sci-fi version of such automated command systems, the 1983 film WarGames, starring Matthew Broderick as a teenage hacker, portrayed a supercomputer called the War Operation Plan Response, or WOPR (pronounced “whopper”), installed at the North American Aerospace Defense Command (NORAD) headquarters in Colorado. When the Broderick character hacks into it and starts playing what he believes is a game called “World War III,” the computer concludes an actual Soviet attack is underway and launches a nuclear retaliatory response. Although fictitious, the movie accurately depicts many aspects of the U.S. nuclear command-control-and-communications (NC3) system, which was then and still remains highly automated.

Such devices, both real and imagined, were relatively primitive by today’s standards, being capable solely of determining that a nuclear attack was under way and ordering a catastrophic response. Now, as a result of vast improvements in artificial intelligence (AI) and machine learning, machines can collect and assess massive amounts of sensor data, swiftly detect key trends and patterns, and potentially issue orders to combat units as to where to attack and when.

Time Compression and Human Fallibility

The substitution of intelligent machines for humans at senior command levels is becoming essential, U.S. strategists argue, because an exponential growth in sensor information combined with the increasing speed of warfare is making it nearly impossible for humans to keep track of crucial battlefield developments. If future scenarios prove accurate, battles that once unfolded over days or weeks could transpire in the space of hours, or even minutes, while battlefield information will be pouring in as multitudinous data points, overwhelming staff officers. Only advanced computers, it is claimed, could process so much information and make informed combat decisions within the necessary timeframe.

Such time compression and the expansion of sensor data may apply to any form of combat, but especially to the most terrifying of them all, nuclear war. When ICBMs were the principal means of such combat, decisionmakers had up to 30 minutes between the time a missile was launched and the moment of detonation in which to determine whether a potential attack was real or merely a false satellite reading (as did sometimes occur during the Cold War). Now, that may not sound like much time, but with the recent introduction of hypersonic missiles, such assessment times could shrink to as little as five minutes. Under such circumstances, it’s a lot to expect even the most alert decision-makers to reach an informed judgment on the nature of a potential attack. Hence the appeal (to some) of automated decision-making systems.

“Attack-time compression has placed America’s senior leadership in a situation where the existing NC3 system may not act rapidly enough,” military analysts Adam Lowther and Curtis McGiffin argued at War on the Rocks, a security-oriented website. “Thus, it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

This notion, that an artificial intelligence-powered device — in essence, a more intelligent version of the doomsday machine or the WOPR — should be empowered to assess enemy behavior and then, on the basis of “predetermined response options,” decide humanity’s fate, has naturally produced some unease in the community of military analysts (as it should for the rest of us as well). Nevertheless, American strategists continue to argue that battlefield assessment and decision-making — for both conventional and nuclear warfare — should increasingly be delegated to machines.

“AI-powered intelligence systems may provide the ability to integrate and sort through large troves of data from different sources and geographic locations to identify patterns and highlight useful information,” the Congressional Research Service noted in a November 2019 summary of Pentagon thinking. “As the complexity of AI systems matures,” it added, “AI algorithms may also be capable of providing commanders with a menu of viable courses of action based on real-time analysis of the battlespace, in turn enabling faster adaptation to complex events.”

The key wording there is “a menu of viable courses of action based on real-time analysis of the battlespace.” This might leave the impression that human generals and admirals (not to speak of their commander-in-chief) will still be making the ultimate life-and-death decisions for both their own forces and the planet. Given such anticipated attack-time compression in future high-intensity combat with China and/or Russia, however, humans may no longer have the time or ability to analyze the battlespace themselves and so will come to rely on AI algorithms for such assessments. As a result, human commanders may simply find themselves endorsing decisions made by machines — and so, in the end, become superfluous.

Creating Robot Generals

Despite whatever misgivings they may have about their future job security, America’s top generals are moving swiftly to develop and deploy that JADC2 automated command mechanism. Overseen by the Air Force, it’s proving to be a computer-driven amalgam of devices for collecting real-time intelligence on enemy forces from vast numbers of sensor devices (satellites, ground radars, electronic listening posts, and so on), processing that data into actionable combat information, and providing precise attack instructions to every combat unit and weapons system engaged in a conflict — whether belonging to the Army, Navy, Air Force, Marine Corps, or the newly formed Space Force and Cyber Command.

What, exactly, the JADC2 will consist of is not widely known, partly because many of its component systems are still shrouded in secrecy and partly because much of the essential technology is still in the development stage. Having been delegated responsibility for overseeing the project, the Air Force is working with Lockheed Martin and other large defense contractors to design and develop key elements of the system.

One such building block is its Advanced Battle Management System (ABMS), a data-collection and distribution system intended to provide fighter pilots with up-to-the-minute data on enemy positions and help guide their combat moves. Another key component is the Army’s Integrated Air and Missile Defense Battle Command System (IBCS), designed to connect radar systems to anti-aircraft and missile-defense launchers and provide them with precise firing instructions. Over time, the Air Force and its multiple contractors will seek to integrate ABMS and IBCS into a giant network of systems connecting every sensor, shooter, and commander in the country’s armed forces — a military “internet of things,” as some have put it.

To test this concept and provide an example of how it might operate in the future, the Army conducted a live-fire artillery exercise this August in Germany using components (or facsimiles) of the future JADC2 system. In the first stage of the test, satellite images of (presumed) Russian troop positions were sent to an Army ground terminal, where an AI software program called Prometheus combed through the data to select enemy targets. Next, another AI program called SHOT computed the optimal match of available Army weaponry to those intended targets and sent this information, along with precise firing coordinates, to the Army’s Advanced Field Artillery Tactical Data System (AFATDS) for immediate action, where human commanders could choose to implement it or not. In the exercise, those human commanders had the mental space to give the matter a moment’s thought; in a shooting war, they might just leave everything to the machines, as the system’s designers clearly intend them to do.
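To visualize the shape of that sensor-to-shooter chain (detect, select targets, match weapons, human approval, fire order), here is a heavily simplified Python sketch. It is not the actual Prometheus, SHOT, or AFATDS software; every class, function, threshold, and data value in it is hypothetical. The auto_approve flag is there to show how thin the line can be between a human decision and a rubber stamp.

```python
# Illustrative sketch only: the *shape* of a sensor-to-shooter pipeline of the
# kind described above, not the actual Prometheus, SHOT, or AFATDS software.
# Every name, value, and threshold here is hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    target_id: str
    confidence: float   # how sure the (notional) image-analysis model is

@dataclass
class FireMission:
    target_id: str
    weapon: str
    approved: bool = False

def select_targets(detections, threshold=0.8):
    # Stand-in for the target-selection stage: keep only high-confidence detections.
    return [d for d in detections if d.confidence >= threshold]

def assign_weapons(targets, available_weapons):
    # Stand-in for the weapon-target pairing stage: naive round-robin assignment.
    return [FireMission(t.target_id, available_weapons[i % len(available_weapons)])
            for i, t in enumerate(targets)]

def human_review(missions, auto_approve=False):
    # The step the article worries about: with auto_approve=True the "human in
    # the loop" collapses to a rubber stamp and the pipeline runs end to end.
    for m in missions:
        m.approved = auto_approve or input(f"approve fire on {m.target_id}? [y/N] ") == "y"
    return [m for m in missions if m.approved]

if __name__ == "__main__":
    detections = [Detection("tgt-01", 0.95), Detection("tgt-02", 0.55), Detection("tgt-03", 0.88)]
    missions = assign_weapons(select_targets(detections), ["artillery-A", "artillery-B"])
    print(human_review(missions, auto_approve=True))
```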

In the future, the Army is planning even more ambitious tests of this evolving technology under an initiative called Project Convergence. From what’s been said publicly about it, Convergence will undertake ever more complex exercises involving satellites, Air Force fighters equipped with the ABMS system, Army helicopters, drones, artillery pieces, and tactical vehicles. Eventually, all of this will form the underlying “architecture” of the JADC2, linking every military sensor system to every combat unit and weapons system — leaving the generals with little to do but sit by and watch.

Why Robot Generals Could Get It Wrong

Given the complexity of modern warfare and the challenge of time compression in future combat, the urge of American strategists to replace human commanders with robotic ones is certainly understandable. Robot generals and admirals might theoretically be able to process staggering amounts of information in brief periods of time, while keeping track of both friendly and enemy forces and devising optimal ways to counter enemy moves on a future battlefield. But there are many good reasons to doubt the reliability of robot decision-makers and the wisdom of using them in place of human officers.

To begin with, many of these technologies are still in their infancy, and almost all are prone to malfunctions that can neither be easily anticipated nor understood. And don’t forget that even advanced algorithms can be fooled, or “spoofed,” by skilled professionals.

In addition, unlike humans, AI-enabled decision-making systems will lack an ability to assess intent or context. Does a sudden enemy troop deployment, for example, indicate an imminent attack, a bluff, or just a normal rotation of forces? Human analysts can use their understanding of the current political moment and the actors involved to help guide their assessment of the situation. Machines lack that ability and may assume the worst, initiating military action that could have been avoided.

Such a problem will only be compounded by the “training” such decision-making algorithms will undergo as they are adapted to military situations. Just as facial recognition software has proved to be tainted by an over-reliance on images of white males in the training process — making them less adept at recognizing, say, African-American women — military decision-making algorithms are likely to be distorted by an over-reliance on the combat-oriented scenarios selected by American military professionals for training purposes. “Worst-case thinking” is a natural inclination of such officers — after all, who wants to be caught unprepared for a possible enemy surprise attack? — and such biases will undoubtedly become part of the “menus of viable courses of action” provided by decision-making robots.
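The training-bias problem is easy to demonstrate even with a deliberately trivial model. In the hypothetical sketch below, a frequency-based classifier trained mostly on worst-case examples defaults to the worst-case label for an ambiguous observation; the data, labels, and functions are all invented for illustration and stand in for no real intelligence system.

```python
# Illustrative sketch only: how a skewed training set biases a trivial classifier.
# A deliberately crude frequency model, not real intelligence software; the
# training data and labels are invented for the example.
from collections import Counter

# Hypothetical training set, skewed the way the article describes: planners mostly
# rehearse worst-case scenarios, so "attack" examples dominate.
training_data = [("troop movement near border", "attack")] * 8 + \
                [("troop movement near border", "routine rotation")] * 2

def train(examples):
    counts = Counter(label for _, label in examples)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def classify(observation, priors):
    # With no other evidence, the model falls back on its (skewed) priors.
    return max(priors, key=priors.get)

priors = train(training_data)
print(priors)                                         # {'attack': 0.8, 'routine rotation': 0.2}
print(classify("ambiguous troop movement", priors))   # 'attack' -- the worst case wins by default
```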

Once integrated into decision-making algorithms, such biases could, in turn, prove exceedingly dangerous in any future encounters between U.S. and Russian troops in Europe or American and Chinese forces in Asia. A clash of this sort might, after all, arise at any time, thanks to some misunderstanding or local incident that rapidly gains momentum — a sudden clash between U.S. and Chinese warships off Taiwan, for example, or between American and Russian patrols in one of the Baltic states. Neither side may have intended to ignite a full-scale conflict and leaders on both sides might normally move to negotiate a cease-fire. But remember, these will no longer simply be human conflicts. In the wake of such an incident, the JADC2 could detect some enemy move that it determines poses an imminent risk to allied forces and so immediately launch an all-out attack by American planes, missiles, and artillery, escalating the conflict and foreclosing any chance of an early negotiated settlement.

Such prospects become truly frightening when what’s at stake is the onset of nuclear war. It’s hard to imagine any conflict among the major powers starting out as a nuclear war, but it’s far easier to envision a scenario in which the great powers — after having become embroiled in a conventional conflict — reach a point where one side or the other considers the use of atomic arms to stave off defeat. American military doctrine, in fact, has always held out the possibility of using so-called tactical nuclear weapons in response to a massive Soviet (now Russian) assault in Europe. Russian military doctrine, it is widely assumed, incorporates similar options. Under such circumstances, a future JADC2 could misinterpret enemy moves as signaling preparation for a nuclear launch and order a pre-emptive strike by U.S. nuclear forces, thereby igniting World War III.

War is a nasty, brutal activity and, given almost two decades of failed conflicts that have gone under the label of “the war on terror,” causing thousands of American casualties (both physical and mental), it’s easy to understand why robot enthusiasts are so eager to see another kind of mentality take over American war-making. As a start, they contend, especially in a pandemic world, that it’s only humane to replace human soldiers on the battlefield with robots and so diminish human casualties (at least among combatants). This claim does not, of course, address the argument that robot soldiers and drone aircraft lack the ability to distinguish between combatants and non-combatants on the battlefield and so cannot be trusted to comply with the laws of war or international humanitarian law — which, at least theoretically, protect civilians from unnecessary harm — and so should be banned.

Fraught as all of that may be on future battlefields, replacing generals and admirals with robots is another matter altogether. Not only do legal and moral arguments arise with a vengeance, as the survival of major civilian populations could be put at risk by computer-derived combat decisions, but there’s no guarantee that American GIs would suffer fewer casualties in the battles that ensued. Maybe it’s time, then, for Congress to ask some tough questions about the advisability of automating combat decision-making before this country pours billions of additional taxpayer dollars into an enterprise that could, in fact, lead to the end of the world as we know it. Maybe it’s time as well for the leaders of China, Russia, and this country to limit or ban the deployment of hypersonic missiles and other weaponry that will compress life-and-death decisions for humanity into just a few minutes, thereby justifying the automation of such fateful judgments.

Michael T. Klare, a TomDispatch regular, is the five-college professor emeritus of peace and world security studies at Hampshire College and a senior visiting fellow at the Arms Control Association. He is the author of 15 books, the latest of which is All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change.


Copyright 2020 Michael T. Klare

Via Tomdispatch.com

—–

Bonus Video added by Informed Comment:

CNET Highlights: “Watch DARPA’s AI vs. Human in Virtual F-16 Aerial Dogfight (FINALS)”
