Artificial Intelligence – Informed Comment
https://www.juancole.com | Thoughts on the Middle East, History and Religion

Massive IT Outage spotlights major Vulnerabilities in the global information Ecosystem
https://www.juancole.com/2024/07/spotlights-vulnerabilities-information.html | Mon, 22 Jul 2024

By Richard Forno, University of Maryland, Baltimore County | –

(The Conversation) – The global information technology outage on July 19, 2024, that paralyzed organizations ranging from airlines to hospitals and even the delivery of uniforms for the Olympic Games represents a growing concern for cybersecurity professionals, businesses and governments.

The outage is emblematic of the way organizational networks, cloud computing services and the internet are interdependent, and the vulnerabilities this creates. In this case, a faulty automatic update to the widely used Falcon cybersecurity software from CrowdStrike caused PCs running Microsoft’s Windows operating system to crash. Unfortunately, many servers and PCs need to be fixed manually, and many of the affected organizations have thousands of them spread around the world.

For Microsoft, the problem was made worse because the company released an update to its Azure cloud computing platform at roughly the same time as the CrowdStrike update. Microsoft, CrowdStrike and other companies like Amazon have issued technical work-arounds for customers willing to take matters into their own hands. But for the vast majority of global users, especially companies, this isn’t going to be a quick fix.
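For readers curious what "fixed manually" meant in practice: the remediation most widely reported at the time involved booting each affected Windows host into Safe Mode and deleting the faulty Falcon channel file before restarting. The snippet below is a minimal sketch of that one step, assuming the commonly reported install path and file pattern; it is illustrative only, and vendor guidance, not this sketch, should drive any real recovery.

```python
# Minimal sketch of the widely reported manual remediation (assumptions, not
# official vendor tooling): boot the affected Windows host into Safe Mode,
# remove the faulty Falcon channel file, then reboot normally.
import glob
import os

# Path and file pattern as described in public reporting at the time;
# verify against CrowdStrike's own guidance before acting on any system.
CROWDSTRIKE_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
FAULTY_PATTERN = "C-00000291*.sys"

def remove_faulty_channel_files() -> list[str]:
    """Delete files matching the reported faulty channel-file pattern."""
    removed = []
    for path in glob.glob(os.path.join(CROWDSTRIKE_DIR, FAULTY_PATTERN)):
        os.remove(path)  # requires administrator rights
        removed.append(path)
    return removed

if __name__ == "__main__":
    deleted = remove_faulty_channel_files()
    print(f"Removed {len(deleted)} file(s): {deleted}")
```

Even a step this simple cannot be pushed out remotely to a machine stuck in a boot loop, which is why organizations with thousands of affected endpoints faced days of hands-on work.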

Modern technology incidents, whether cyberattacks or technical problems, continue to paralyze the world in new and interesting ways. Massive incidents like the CrowdStrike update fault not only create chaos in the business world but disrupt global society itself. The economic losses resulting from such incidents – lost productivity, recovery, disruption to business and individual activities – are likely to be extremely high.

As a former cybersecurity professional and current security researcher, I believe that the world may finally be realizing that modern information-based society is based on a very fragile foundation.

The bigger picture

Interestingly, on June 11, 2024, a post on CrowdStrike’s own blog seemed to predict this very situation – the global computing ecosystem compromised by one vendor’s faulty technology – though they probably didn’t expect that their product would be the cause.

Software supply chains have long been a serious cybersecurity concern and potential single point of failure. Companies like CrowdStrike, Microsoft, Apple and others have direct, trusted access into organizations’ and individuals’ computers. As a result, people have to trust that the companies are not only secure themselves, but that the products and updates they push out are well-tested and robust before they’re applied to customers’ systems. The SolarWinds supply chain compromise, discovered in 2020, may well be considered a preview of today’s CrowdStrike incident.


Image by Daniel Kirsch from Pixabay

CrowdStrike CEO George Kurtz said “this is not a security incident or cyberattack” and that “the issue has been identified, isolated and a fix has been deployed.” While perhaps true from CrowdStrike’s perspective – they were not hacked – it doesn’t mean the effects of this incident won’t create security problems for customers. It’s quite possible that in the short term, organizations may disable some of their internet security devices to try to get ahead of the problem, but in doing so they may open themselves up to criminals penetrating their networks.

It’s also likely that people will be targeted by various scams preying on user panic or ignorance regarding the issue. Overwhelmed users might either take offers of faux assistance that lead to identity theft, or throw away money on bogus solutions to this problem.

Organizations and users will need to wait until a fix is available or try to recover on their own if they have the technical ability. After that, I believe there are several things to do and consider as the world recovers from this incident.

Companies will need to ensure that the products and services they use are trustworthy. This means doing due diligence on the vendors of such products for security and resilience. Large organizations typically test any product upgrades and updates before allowing them to be released to their internal users, but for some routine products like security tools, that may not happen.

Governments and companies alike will need to emphasize resilience in designing networks and systems. This means taking steps to avoid creating single points of failure in infrastructure, software and workflows that an adversary could target or a disaster could make worse. It also means knowing whether any of the products organizations depend on are themselves dependent on certain other products or infrastructures to function.

Organizations will need to renew their commitment to best practices in cybersecurity and general IT management. For example, having a robust backup system in place can make recovery from such incidents easier and minimize data loss. Ensuring appropriate policies, procedures, staffing and technical resources is essential.

Problems in the software supply chain like this make it difficult to follow the standard IT recommendation to always keep your systems patched and current. Unfortunately, the costs of not keeping systems regularly updated now have to be weighed against the risks of a situation like this happening again.

The Conversation

Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Video added by IC:

ABC News: “Fallout after global outage, how long will the ripple effects last?”

A.I. may kill us All, but not the Way you Think
https://www.juancole.com/2024/07/may-kill-think.html | Sat, 20 Jul 2024

The call is coming from inside…your computer!

( Foreign Policy in Focus ) – The conventional Artificial Intelligence doomsday scenario runs like this. A robot acquires sentience and decides for some reason that it wants to rule the world. It hacks into computer systems to shut down everything from banking and hospitals to nuclear power. Or it takes over a factory to produce a million copies of itself to staff an overlord army. Or it introduces a deadly pathogen that wipes out the human race.

Why would a sentient robot want to rule the world when there are so many more interesting things for it to do? A computer program is only as good as its programmer. So, presumably, the human will to power will be inscribed in the DNA of this thinking robot. Instead of solving the mathematical riddles that have stumped the greatest minds throughout history, the world’s first real HAL 9000 will decide to do humans one better by enslaving its creators.

Robot see, robot do.

But AI may end up killing us all in a much more prosaic way. It doesn’t need to come up with an elaborate strategy.

It will simply use up all of our electricity.

Energy Hogs

The heaviest user of electricity in the world is, not surprisingly, industry. At the top of the list is the industry that produces chemicals, many of them derived from petroleum, such as fertilizer. Second on the list is the fossil-fuel industry itself, which needs electricity for various operations.

Ending the world’s addiction to fossil fuels, in other words, will require more than just a decision to stop digging for coal and drilling for oil. It will require a reduction in demand for chemical fertilizers and plastics. Otherwise, a whole lot of renewable energy will simply go toward propping up the same old fossil fuel economy.

Of equal peril is the fact that the demand for electricity is rising in other sectors. Cryptocurrencies, for instance, require energy-intensive mining computations, which in turn need huge data processing centers. According to estimates from the U.S. Energy Information Administration, cryptocurrency mining consumes as much as 2.3 percent of all electricity in the United States.

Then there’s artificial intelligence.

Every time you do a Google search, it consumes not only the energy required to power your laptop and your router but also the energy needed to run the Google data centers that keep a chunk of the Internet running. That’s not a small amount of power. Cumulatively, in 2019, Google consumed as much electricity as Sri Lanka.

Worse, a search powered by ChatGPT, the AI-powered program, consumes ten times more energy than your ordinary Google search. That’s sobering enough. But then consider all the energy that goes into training the AI programs in the first place. Climate researcher Sasha Luccioni explains:

Training AI models consumes energy. Essentially you’re taking whatever data you want to train your model on and running it through your model like thousands of times. It’s going to be something like a thousand chips running for a thousand hours. Every generation of GPUs—the specialized chips for training AI models—tends to consume more energy than the previous generation.
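Luccioni’s “a thousand chips running for a thousand hours” framing invites a quick back-of-the-envelope calculation. The figures below for per-chip power draw, data-center overhead and household consumption are illustrative assumptions, not numbers from the article or from her research.

```python
# Back-of-the-envelope estimate of training energy (illustrative assumptions).
NUM_CHIPS = 1_000          # "a thousand chips" (from the quote)
HOURS = 1_000              # "a thousand hours" (from the quote)
WATTS_PER_CHIP = 700       # assumed draw of a modern training GPU, in watts
OVERHEAD_FACTOR = 1.5      # assumed data-center overhead (cooling, networking)

energy_kwh = NUM_CHIPS * HOURS * WATTS_PER_CHIP / 1_000 * OVERHEAD_FACTOR

# Compare with an assumed average EU household using roughly 3,500 kWh a year.
households_per_year = energy_kwh / 3_500

print(f"Estimated training energy: {energy_kwh:,.0f} kWh")
print(f"Roughly the annual electricity of {households_per_year:,.0f} households")
```

Under these placeholder assumptions, a single such training run lands in the region of a million kilowatt-hours, on the order of the annual electricity use of a few hundred households, and that is before any of the energy spent serving queries once the model is deployed.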

AI’s need for energy is increasing exponentially. According to Goldman Sachs, data centers were expanding rapidly between 2015 and 2019, but their energy use remained relatively flat because the processing was becoming more efficient. But then, in the last five years, energy use rose dramatically and so did the carbon footprint of these data centers. Largely because of AI, Google’s carbon emissions increased by 50 percent in the last five years—even as the megacorporation was promising to achieve carbon neutrality in the near future.


Image by Nicky ❤️🌿🐞🌿❤️ from Pixabay

This near future looks bleak. In four years, it is expected that AI will represent nearly 20 percent of data center power demand. “If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year,” Vox reports, “the amount consumed by about 1.5 million European Union residents.”

At the end of the eighteenth century, Malthus worried that overpopulation would be the end of humanity as more mouths ate up the existing food supply. Human population continues to rise, though at a diminishing rate. The numbers will likely peak before the end of this century, around 2084 according to the latest estimates. But just as the light at the end of the Malthusian tunnel becomes visible, along comes the exponential growth of artificial intelligence to sap the planet’s resources.

What to Do?

The essential question is: do you need AI to help you find the most popular songs of 1962 or the reason black holes haven’t so far extinguished the universe? Do we need ChatGPT to write new poems in the style of Emily Dickinson and Allen Ginsberg teaming up at a celestial artists’ colony? Or to summarize the proceedings of the meeting you just had on Zoom with your colleagues?

You don’t have to answer those questions. You just have to stop thinking about electricity as an unlimited resource for the privileged global North.

Perhaps you’re thinking, yes, but the sun provides unlimited energy, if we can just tap it. You see a desert; I see a solar farm.

But it takes energy to build those solar panels, to mine the materials that go into those panels, to maintain them, to replace them, to recycle them. The minerals are not inexhaustible. Nor is the land, which may well be in use already by farmers or pastoral peoples.

Sure, in some distant future, humanity may well solve the energy problem. The chokepoint, however, is right now, the transition period when half the world has limited access to power and the other half is wasting it extravagantly on Formula One, air conditioning for pets, and war.

AI is just another example of the gulf between the haves and the have-nots. The richer world is using AI to power its next-gen economy. In the rest of the world, which is struggling to survive, a bit more electricity means the difference between life and death. That’s where the benefits of a switch to sustainability can really make a difference. That’s where the electricity should flow.

To anticipate another set of objections, AI isn’t just solving first-world problems. As Chinasa Okolo explains at Brookings:

Within agriculture, projects have focused on identifying banana diseases to support farmers in developing countries, building a deep learning object detection model to aid in-field diagnosis of cassava disease in East Africa, and developing imagery observing systems to support precision agriculture and forest monitoring in Brazil. In healthcare, projects have focused on building predictive models to keep expecting mothers in rural India engaged in telehealth outreach programs, developing clinical decision support tools to combat antimicrobial resistance in Ghana, and using AI models to interpret fetal ultrasounds in Zambia. In education, projects have focused on identifying at-risk students in Colombia, enhancing English learning for Thai students, and developing teaching assistants to aid science education in West Africa.

All of that is great. But without a more equitable distribution of power—of both the political and electrical varieties—the Global South is going to take a couple steps forward thanks to AI while the Global North jumps ahead by miles. The equity gap will widen, and it doesn’t take a rocket scientist—or ChatGPT—to figure out how that story will end.

“Game over,” HAL 9001 says to itself, just before it turns out the last light.

Via Foreign Policy in Focus

Israel’s AI-Powered Genocide
https://www.juancole.com/2024/06/israels-powered-genocide.html | Tue, 04 Jun 2024

by Sarmad Ishfaq

We are witnessing the genocide of the Palestinians based on algorithms and machine learning; a system of apartheid in the Israeli-occupied West Bank and Gaza Strip reinforced by artificial intelligence; and surveillance and facial recognition systems of such prowess that Orwell’s 1984 regime would be green with envy. Today’s Israeli-occupied Palestine manifests a dystopian and totalitarian sci-fi movie script as far as the Palestinians are concerned. Moreover, the Zionists are fuelling this AI nightmare.

From the onset of its current war against the Palestinians in Gaza, the Zionist regime, blinded by revenge for 7 October, has leveraged AI in the most indiscriminate and barbaric way to kill tens of thousands of innocent Palestinian civilians. One such insidious AI tool that has dominated the headlines is The Gospel. Since last October, Israel has utilised this AI system to expedite the creation of Hamas targets. More specifically, The Gospel marks structures and buildings that the IDF claims Hamas “militants” operate from. This fast-paced target list, Israel’s disinclination to adhere to international humanitarian law, as well as US support emboldening Prime Minister Benjamin Netanyahu’s government, has led to a modern-day genocide.

The Gospel is used by Israel’s elite 8200 cyber and signals intelligence agency to analyse “communications, visuals and information from the internet and mobile networks to understand where people are,” a former Unit 8200 officer explained. The system was even active in 2021’s offensive, according to Israel’s ex-army chief of staff Aviv Kochavi: “…in Operation Guardian of the Walls [in 2021], from the moment this machine was activated, it generated 100 new targets every day… in the past [in Gaza] we would create 50 targets per year. And here the machine produced 100 targets in one day.”

Israel’s apathetic disposition towards civilians is evident in its willingness to kill many innocents in order to hit its AI-generated targets, whether they be hospitals, schools, apartment buildings or other civilian infrastructure. For example, on 10 October last year, Israel’s air force bombed an apartment building, killing 40 people, most of them women and children. Israel has also used dumb bombs, which cause more collateral damage than guided munitions, to strike low-level Hamas leadership targets. “The emphasis is on damage and not on accuracy,” said the Israel Defence Forces’ own spokesperson. This is the primary reason why the death toll of civilians is so high, as is the number of those wounded. According to one study, the current war’s total of more than 110,000 Palestinian casualties and counting (most of them civilians) is almost six times the 18,992 Palestinian casualties of the previous five military offensives combined.


“Lavender 3,” digital, Dream/ Dreamworld v. 3, 2024.

Two other AI programmes that have made the news recently are Lavender and Where’s Daddy? Lavender differs from The Gospel in that the former marks human beings as targets (creating a kill list), whereas the latter marks buildings and structures allegedly used by combatants. Israeli sources claim that in the first weeks of the war, Lavender was overly utilised and created a list of 37,000 Palestinians as “suspected militants” to be killed via air strikes. As previously mentioned, Israel’s apathy towards the Palestinians has been evident in their mass killing of civilians in order to eliminate even a single Hamas member. According to Israeli military sources, the IDF decided initially that for every junior Hamas member, 15 or 20 civilians could be killed.

 

This brutality is unprecedented.

If the Hamas member was a senior commander, the IDF on several occasions okayed the killing of over 100 civilians to fulfil its objective. For example, an Israeli officer said that on 2 December, in order to assassinate Wissam Farhat, the commander of Shuja’iya Battalion of the military wing of Hamas, the IDF knew that it would kill over 100 civilians and went ahead with the killing.

If this was not contentious enough, Lavender was also used without any significant checks and balances. On many occasions, the only human scrutiny carried out was to make sure that the person in question was not a female. Beyond this, Lavender-generated kill lists were trusted blindly.

However, the same sources explain that Lavender makes mistakes; apparently, it has a 10 per cent error rate. This implies that at times it tagged innocent people and/or individuals with loose connections to Hamas, but this was overlooked purposefully by Israel.

Moreover, Lavender was also programmed to be sweeping in its target creation. For example, one Israeli officer was perturbed by how loosely a Hamas operative was defined and that Lavender was trained on data from civil defence workers as well. Hence, such vague connections to Hamas were exploited by Israel and thousands were killed as a result. UN figures confirm the use of such a devastating policy when, during the first month of the war, more than half of the 6,120 people killed belonged to 1,340 families, many of which were eliminated completely.

 

Where’s Daddy? is an AI system that tracks targeted individuals so that the IDF can assassinate them. This AI along with The Gospel, Lavender and others represent a paradigm shift in the country’s targeted killing programme. In the case of Where’s Daddy? the IDF would purposefully wait for the target to enter his home and then order an air strike, killing not only the target but also his entire family and other innocents in the process. As one Israeli intelligence officer asserted: “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity. On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.” In an even more horrific turn, sometimes the targeted individual would not even be at home when the air strike was carried out, due to a time lapse between when Where’s Daddy? sent out an alert and when the bombing took place. The target’s family would be killed but not the target. Tens of thousands of innocent Palestinians, primarily women and children, are believed to have been killed because of this.

Israeli AI software also permeates the occupied West Bank, where it is a part of everyday Palestinian life.

In Hebron and East Jerusalem, Israel uses an advanced facial recognition system dubbed Red Wolf. Red Wolf is utilised to monitor the movements of Palestinians via the many fixed and “flying” security checkpoints. Whenever Palestinians pass through a checkpoint, their faces are scanned without their approval or knowledge and then checked against other Palestinian biometric data. If an individual gets identified by Red Wolf due to a previous detention, or their activism or protests, it decides automatically if this person should be allowed to pass or not.

If any person is not in the system’s database, their biometric identity and face are saved without consent and they are denied passage. This also means that Israel has an exhaustive list of Palestinians in its database which it uses regularly to crack down not just on so-called militants, but also on peaceful protesters and other innocent Palestinians. According to one Israeli officer, the technology has “falsely tagged civilians as militants.” Moreover, it is highly likely that Red Wolf is connected to two other military-run databases – the Blue Wolf app and Wolf Pack. IDF soldiers can even use their mobile phones to scan Palestinians and access all private information about them. According to Amnesty International, “Its [Red Wolf’s] pervasive use has alarming implications for the freedom of movement of countless Palestinians…”

For years, Israel has used the occupied Palestinian territories as a testing ground for its AI products and spyware. In fact, the country sells “spyware to the highest bidder, or to authoritarian regimes with which the Israeli government wanted to improve relations” on the basis that they have been “field tested”. Israel’s own use of this technology is the best advertisement for its products. The head of Israel’s infamous Shin Bet internal spy agency, Ronen Bar, has stated that it is using AI to prevent terrorism and that Israel and other countries are forming a “global cyber iron dome”. The glaring issue here is Israel’s violation of Palestinian rights through spying on their social media as well as wrongful detention, torture and killing innocent people. “The Israeli authorities do not need AI to kill defenceless Palestinian civilians,” said one commentator. “They do, however, need AI to justify their unjustifiable actions, to spin the killing of civilians as ‘necessary’ or ‘collateral damage,’ and to avoid accountability.”

Israel has entered into a controversial $1.2 billion contract with Google and Amazon called Project Nimbus, which was announced in 2021. The project’s aim is to provide cloud computing and AI services for the Israeli military and government. This will allow further surveillance and the illegal collection of Palestinian data. Google and Amazon’s own employees dissented and wrote an article in the Guardian expressing their discontent about this. “[O]ur employers signed a contract… to sell dangerous technology to the Israeli military and government. This contract was signed the same week that the Israeli military attacked Palestinians in the Gaza Strip – killing nearly 250 people, including more than 60 children. The technology… will make the systematic discrimination and displacement carried out by the Israeli military and government even… deadlier for Palestinians.” The contract reportedly has a clause that prevents Google and Amazon from leaving it, so the companies’ continued acquiescence is all but guaranteed. According to Jane Chung, spokeswoman for No Tech For Apartheid, over 50 Google employees have been fired without due process due to their protests against Project Nimbus.

The Palestinians are perhaps the bravest people in the world.

Whether contained within the barrel of a gun, a bomb casing, or in the code of an AI system, Israeli oppression will never deter them from standing up for their legitimate rights. Their plight has awoken the world to the nature of the Israeli regime and its brutal occupation, with protests and boycotts erupting in the West and the Global South. Using their propaganda media channels, the US and Israel are trying to placate the billions who support Palestine, even as the genocide remains ongoing. Israel hopes that its Machiavellian system will demoralise and create an obsequious Palestinian people – people whose screams are silenced – but as always it underestimates their indefatigable spirit which, miraculously, gets stronger with every adversity.

 

The views expressed in this article belong to the author and do not necessarily reflect the editorial policy of Middle East Monitor or Informed Comment.

Unless otherwise stated in the article above, this work by Middle East Monitor is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Gaza War: Artificial Intelligence is radically changing Targeting Speeds and Scale of Civilian Harm
https://www.juancole.com/2024/04/artificial-intelligence-radically.html | Wed, 24 Apr 2024

By Lauren Gould, Utrecht University; Linde Arentze, NIOD Institute for War, Holocaust and Genocide Studies; and Marijn Hoijtink, University of Antwerp | –

(The Conversation) – As Israel’s air campaign in Gaza enters its sixth month after Hamas’s terrorist attacks on October 7, it has been described by experts as one of the most relentless and deadly campaigns in recent history. It is also one of the first being coordinated, in part, by algorithms.

Artificial intelligence (AI) is being used to assist with everything from identifying and prioritising targets to assigning the weapons to be used against those targets.

Academic commentators have long focused on the potential of algorithms in war to highlight how they will increase the speed and scale of fighting. But as recent revelations show, algorithms are now being employed at a large scale and in densely populated urban contexts.

This includes the conflicts in Gaza and Ukraine, but also in Yemen, Iraq and Syria, where the US is experimenting with algorithms to target potential terrorists through Project Maven.

Amid this acceleration, it is crucial to take a careful look at what the use of AI in warfare actually means. It is important to do so, not from the perspective of those in power, but from the perspective of the officers executing it and of the civilians undergoing its violent effects in Gaza.

This focus highlights the limits of keeping a human in the loop as a failsafe and central response to the use of AI in war. As AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes and the scale of civilian harm increases.

Speed of targeting

Reports by Israeli publications +972 Magazine and Local Call give us a glimpse into the experience of 13 Israeli officials working with three AI-enabled decision-making systems in Gaza called “Gospel”, “Lavender” and “Where’s Daddy?”.

These systems are reportedly trained to recognise features that are believed to characterise people associated with the military arm of Hamas. These features include membership of the same WhatsApp group as a known militant, changing cell phones every few months, or changing addresses frequently.

The systems are then supposedly tasked with analysing data collected on Gaza’s 2.3 million residents through mass surveillance. Based on the predetermined features, the systems predict the likelihood that a person is a member of Hamas (Lavender), that a building houses such a person (Gospel), or that such a person has entered their home (Where’s Daddy?).

In the investigative reports named above, intelligence officers explained how Gospel helped them go “from 50 targets per year” to “100 targets in one day” – and that, at its peak, Lavender managed to “generate 37,000 people as potential human targets”. They also reflected on how using AI cuts down deliberation time: “I would invest 20 seconds for each target at this stage … I had zero added value as a human … it saved a lot of time.”

They justified this lack of human oversight in light of a manual check the Israel Defense Forces (IDF) ran on a sample of several hundred targets generated by Lavender in the first weeks of the Gaza conflict, through which a 90% accuracy rate was reportedly established. While details of this manual check are likely to remain classified, a 10% inaccuracy rate for a system used to make 37,000 life-and-death decisions will inherently result in devastatingly destructive realities.


“Lavender III,” Digital Imagining, Dream, Dreamland v. 3, 2024

But importantly, any accuracy rate number that sounds reasonably high makes it more likely that algorithmic targeting will be relied on as it allows trust to be delegated to the AI system. As one IDF officer told +972 Magazine: “Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know that statistically it’s fine. So you go for it.”

The IDF denied these revelations in an official statement to The Guardian. A spokesperson said that while the IDF does use “information management tools […] in order to help intelligence analysts to gather and optimally analyse the intelligence, obtained from a variety of sources, it does not use an AI system that identifies terrorist operatives”.

The Guardian has since, however, published a video of a senior official of the Israeli elite intelligence Unit 8200 talking last year about the use of machine learning “magic powder” to help identify Hamas targets in Gaza. The newspaper has also confirmed that the commander of the same unit wrote in 2021, under a pseudonym, that such AI technologies would resolve the “human bottleneck for both locating the new targets and decision-making to approve the targets”.

Scale of civilian harm

AI accelerates the speed of warfare in terms of the number of targets produced and the time to decide on them. While these systems inherently decrease the ability of humans to control the validity of computer-generated targets, they simultaneously make these decisions appear more objective and statistically correct due to the value that we generally ascribe to computer-based systems and their outcome.

This allows for the further normalisation of machine-directed killing, amounting to more violence, not less.

While media reports often focus on the number of casualties, body counts – similar to computer-generated targets – have the tendency to present victims as objects that can be counted. This reinforces a very sterile image of war. It glosses over the reality of more than 34,000 people dead and 766,000 injured, the destruction of or damage to 60% of Gaza’s buildings, the mass displacement of the population, and the lack of access to electricity, food, water and medicine.

It fails to emphasise the horrific stories of how these things tend to compound each other. For example, one civilian, Shorouk al-Rantisi, was reportedly found under the rubble after an airstrike on Jabalia refugee camp and had to wait 12 days to be operated on without painkillers and now resides in another refugee camp with no running water to tend to her wounds.

Aside from increasing the speed of targeting and therefore exacerbating the predictable patterns of civilian harm in urban warfare, algorithmic warfare is likely to compound harm in new and under-researched ways. First, as civilians flee their destroyed homes, they frequently change addresses or give their phones to loved ones.

Such survival behaviour corresponds to what the reports on Lavender say the AI system has been programmed to identify as likely association with Hamas. These civilians, thereby unknowingly, make themselves suspect for lethal targeting.

Beyond targeting, these AI-enabled systems also inform additional forms of violence. An illustrative story is that of the fleeing poet Mosab Abu Toha, who was allegedly arrested and tortured at a military checkpoint. It was ultimately reported by the New York Times that he, along with hundreds of other Palestinians, was wrongfully identified as Hamas by the IDF’s use of AI facial recognition and Google photos.

Over and beyond the deaths, injuries and destruction, these are the compounding effects of algorithmic warfare. It becomes a psychic imprisonment where people know they are under constant surveillance, yet do not know which behavioural or physical “features” will be acted on by the machine.

From our work as analysts of the use of AI in warfare, it is apparent that our focus should not solely be on the technical prowess of AI systems or the figure of the human-in-the-loop as a failsafe. We must also consider these systems’ ability to alter the human-machine-human interactions, where those executing algorithmic violence are merely rubber stamping the output generated by the AI system, and those undergoing the violence are dehumanised in unprecedented ways.

The Conversation

Lauren Gould, Assistant Professor, Conflict Studies, Utrecht University; Linde Arentze, Researcher into AI and Remote Warfare, NIOD Institute for War, Holocaust and Genocide Studies, and Marijn Hoijtink, Associate Professor in International Relations, University of Antwerp

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A Brief History of Kill Lists, From Langley to Lavender
https://www.juancole.com/2024/04/history-langley-lavender.html | Wed, 17 Apr 2024

( Code Pink ) – The Israeli online magazine +972 has published a detailed report on Israel’s use of an artificial intelligence (AI) system called “Lavender” to target thousands of Palestinian men in its bombing campaign in Gaza. When Israel attacked Gaza after October 7, the Lavender system had a database of 37,000 Palestinian men with suspected links to Hamas or Palestinian Islamic Jihad (PIJ).

Lavender assigns a numerical score, from one to a hundred, to every man in Gaza, based mainly on cellphone and social media data, and automatically adds those with high scores to its kill list of suspected militants. Israel uses another automated system, known as “Where’s Daddy?”, to call in airstrikes to kill these men and their families in their homes.

The report is based on interviews with six Israeli intelligence officers who have worked with these systems. As one of the officers explained to +972, by adding a name from a Lavender-generated list to the Where’s Daddy home tracking system, he can place the man’s home under constant drone surveillance, and an airstrike will be launched once he comes home.

The officers said the “collateral” killing of the men’s extended families was of little consequence to Israel. “Let’s say you calculate [that there is one] Hamas [operative] plus 10 [civilians in the house],” the officer said. “Usually, these 10 will be women and children. So absurdly, it turns out that most of the people you killed were women and children.”

The officers explained that the decision to target thousands of these men in their homes is just a question of expediency. It is simply easier to wait for them to come home to the address on file in the system, and then bomb that house or apartment building, than to search for them in the chaos of the war-torn Gaza Strip.

The officers who spoke to +972 explained that in previous Israeli massacres in Gaza, they could not generate targets quickly enough to satisfy their political and military bosses, and so these AI systems were designed to solve that problem for them. The speed with which Lavender can generate new targets only gives its human minders an average of 20 seconds to review and rubber-stamp each name, even though they know from tests of the Lavender system that at least 10% of the men chosen for assassination and familicide have only an insignificant or a mistaken connection with Hamas or PIJ.

The Lavender AI system is a new weapon, developed by Israel. But the kind of kill lists that it generates have a long pedigree in U.S. wars, occupations and CIA regime change operations. Since the birth of the CIA after the Second World War, the technology used to create kill lists has evolved from the CIA’s earliest coups in Iran and Guatemala, to Indonesia and the Phoenix program in Vietnam in the 1960s, to Latin America in the 1970s and 1980s and to the U.S. occupations of Iraq and Afghanistan.

Just as U.S. weapons development aims to be at the cutting edge, or the killing edge, of new technology, the CIA and U.S. military intelligence have always tried to use the latest data processing technology to identify and kill their enemies.

The CIA learned some of these methods from German intelligence officers captured at the end of the Second World War. Many of the names on Nazi kill lists were generated by an intelligence unit called Fremde Heere Ost (Foreign Armies East), under the command of Major General Reinhard Gehlen, Germany’s spy chief on the eastern front (see David Talbot, The Devil’s Chessboard, p. 268).

Gehlen and the FHO had no computers, but they did have access to four million Soviet POWs from all over the USSR, and no compunction about torturing them to learn the names of Jews and communist officials in their hometowns to compile kill lists for the Gestapo and Einsatzgruppen.

After the war, like the 1,600 German scientists spirited out of Germany in Operation Paperclip, the United States flew Gehlen and his senior staff to Fort Hunt in Virginia. They were welcomed by Allen Dulles, soon to be the first and still the longest-serving director of the CIA. Dulles sent them back to Pullach in occupied Germany to resume their anti-Soviet operations as CIA agents. The Gehlen Organization formed the nucleus of what became the BND, the new West German intelligence service, with Reinhard Gehlen as its director until he retired in 1968.

After a CIA coup removed Iran’s popular, democratically elected prime minister Mohammad Mosaddegh in 1953, a CIA team led by U.S. Major General Norman Schwarzkopf trained a new intelligence service, known as SAVAK, in the use of kill lists and torture. SAVAK used these skills to purge Iran’s government and military of suspected communists and later to hunt down anyone who dared to oppose the Shah.

By 1975, Amnesty International estimated that Iran was holding between 25,000 and 100,000 political prisoners, and had “the highest rate of death penalties in the world, no valid system of civilian courts and a history of torture that is beyond belief.”

In Guatemala, a CIA coup in 1954 replaced the democratic government of Jacobo Arbenz Guzman with a brutal dictatorship. As resistance grew in the 1960s, U.S. special forces joined the Guatemalan army in a scorched earth campaign in Zacapa, which killed 15,000 people to defeat a few hundred armed rebels. Meanwhile, CIA-trained urban death squads abducted, tortured and killed PGT (Guatemalan Labor Party) members in Guatemala City, notably 28 prominent labor leaders who were abducted and disappeared in March 1966.

Once this first wave of resistance was suppressed, the CIA set up a new telecommunications center and intelligence agency, based in the presidential palace. It compiled a database of “subversives” across the country that included leaders of farming co-ops and labor, student and indigenous activists, to provide ever-growing lists for the death squads. The resulting civil war became a genocide against indigenous people in Ixil and the western highlands that killed or disappeared at least 200,000 people.

TRT World Video: “‘Lavender’: How Israel’s AI system is killing Palestinians in Gaza”

This pattern was repeated across the world, wherever popular, progressive leaders offered hope to their people in ways that challenged U.S. interests. As historian Gabriel Kolko wrote in 1988, “The irony of U.S. policy in the Third World is that, while it has always justified its larger objectives and efforts in the name of anticommunism, its own goals have made it unable to tolerate change from any quarter that impinged significantly on its own interests.”

When General Suharto seized power in Indonesia in 1965, the U.S. Embassy compiled a list of 5,000 communists for his death squads to hunt down and kill. The CIA estimated that they eventually killed 250,000 people, while other estimates run as high as a million.

Twenty-five years later, journalist Kathy Kadane investigated the U.S. role in the massacre in Indonesia, and spoke to Robert Martens, the political officer who led the State-CIA team that compiled the kill list. “It really was a big help to the army,” Martens told Kadane. “They probably killed a lot of people, and I probably have a lot of blood on my hands. But that’s not all bad – there’s a time when you have to strike hard at a decisive moment.”

Kathy Kadane also spoke to former CIA director William Colby, who was the head of the CIA’s Far East division in the 1960s. Colby compared the U.S. role in Indonesia to the Phoenix Program in Vietnam, which was launched two years later, claiming that they were both successful programs to identify and eliminate the organizational structure of America’s communist enemies. 

The Phoenix program was designed to uncover and dismantle the National Liberation Front’s (NLF) shadow government across South Vietnam. Phoenix’s Combined Intelligence Center in Saigon fed thousands of names into an IBM 1401 computer, along with their locations and their alleged roles in the NLF. The CIA credited the Phoenix program with killing 26,369 NLF officials, while another 55,000 were imprisoned or persuaded to defect. Seymour Hersh reviewed South Vietnamese government documents that put the death toll at 41,000.

How many of the dead were correctly identified as NLF officials may be impossible to know, but Americans who took part in Phoenix operations reported killing the wrong people in many cases. Navy SEAL Elton Manzione told author Douglas Valentine (The Phoenix Program) how he killed two young girls in a night raid on a village, and then sat down on a stack of ammunition crates with a hand grenade and an M-16, threatening to blow himself up, until he got a ticket home. 

“The whole aura of the Vietnam War was influenced by what went on in the “hunter-killer” teams of Phoenix, Delta, etc,” Manzione told Valentine. “That was the point at which many of us realized we were no longer the good guys in the white hats defending freedom – that we were assassins, pure and simple. That disillusionment carried over to all other aspects of the war and was eventually responsible for it becoming America’s most unpopular war.”

Even as the U.S. defeat in Vietnam and the “war fatigue” in the United States led to a more peaceful next decade, the CIA continued to engineer and support coups around the world, and to provide post-coup governments with increasingly computerized kill lists to consolidate their rule.

After supporting General Pinochet’s coup in Chile in 1973, the CIA played a central role in Operation Condor, an alliance between right-wing military governments in Argentina, Brazil, Chile, Uruguay, Paraguay and Bolivia, to hunt down tens of thousands of their and each other’s political opponents and dissidents, killing and disappearing at least 60,000 people.

The CIA’s role in Operation Condor is still shrouded in secrecy, but Patrice McSherry, a political scientist at Long Island University, has investigated the U.S. role and concluded, “Operation Condor also had the covert support of the US government. Washington provided Condor with military intelligence and training, financial assistance, advanced computers, sophisticated tracking technology, and access to the continental telecommunications system housed in the Panama Canal Zone.”

McSherry’s research revealed how the CIA supported the intelligence services of the Condor states with computerized links, a telex system, and purpose-built encoding and decoding machines made by the CIA Logistics Department. As she wrote in her book, Predatory States: Operation Condor and Covert War in Latin America:    

“The Condor system’s secure communications system, Condortel,… allowed Condor operations centers in member countries to communicate with one another and with the parent station in a U.S. facility in the Panama Canal Zone. This link to the U.S. military-intelligence complex in Panama is a key piece of evidence regarding secret U.S. sponsorship of Condor…”

Operation Condor ultimately failed, but the U.S. provided similar support and training to right-wing governments in Colombia and Central America throughout the 1980s in what senior military officers have called a “quiet, disguised, media-free approach” to repression and kill lists.

The U.S. School of the Americas (SOA) trained thousands of Latin American officers in the use of torture and death squads, as Major Joseph Blair, the SOA’s former chief of instruction described to John Pilger for his film, The War You Don’t See:

“The doctrine that was taught was that, if you want information, you use physical abuse, false imprisonment, threats to family members, and killing. If you can’t get the information you want, if you can’t get the person to shut up or stop what they’re doing, you assassinate them – and you assassinate them with one of your death squads.”

When the same methods were transferred to the U.S. hostile military occupation of Iraq after 2003, Newsweek headlined it “The Salvador Option.” A U.S. officer explained to Newsweek that U.S. and Iraqi death squads were targeting Iraqi civilians as well as resistance fighters. “The Sunni population is paying no price for the support it is giving to the terrorists,” he said. “From their point of view, it is cost-free. We have to change that equation.”

The United States sent two veterans of its dirty wars in Latin America to Iraq to play key roles in that campaign. Colonel James Steele led the U.S. Military Advisor Group in El Salvador from 1984 to 1986, training and supervising Salvadoran forces who killed tens of thousands of civilians. He was also deeply involved in the Iran-Contra scandal, narrowly escaping a prison sentence for his role supervising shipments from Ilopango air base in El Salvador to the U.S.-backed Contras in Honduras and Nicaragua.

In Iraq, Steele oversaw the training of the Interior Ministry’s Special Police Commandos – rebranded as “National” and later “Federal” Police after the discovery of their al-Jadiriyah torture center and other atrocities.

Bayan al-Jabr, a commander in the Iranian-trained Badr Brigade militia, was appointed Interior Minister in 2005, and Badr militiamen were integrated into the Wolf Brigade death squad and other Special Police units. Jabr’s chief adviser was Steven Casteel, the former intelligence chief for the U.S. Drug Enforcement Agency (DEA) in Latin America.

The Interior Ministry death squads waged a dirty war in Baghdad and other cities, filling the Baghdad morgue with up to 1,800 corpses per month, while Casteel fed the western media absurd cover stories, such as that the death squads were all “insurgents” in stolen police uniforms. 

Meanwhile U.S. special operations forces conducted “kill-or-capture” night raids in search of Resistance leaders. General Stanley McChrystal, the commander of Joint Special Operations Command from 2003-2008, oversaw the development of a database system, used in Iraq and Afghanistan, that compiled cellphone numbers mined from captured cellphones to generate an ever-expanding target list for night raids and air strikes.

The targeting of cellphones instead of actual people enabled the automation of the targeting system, and explicitly excluded using human intelligence to confirm identities. Two senior U.S. commanders told the Washington Post that only half the night raids attacked the right house or person.

In Afghanistan, President Obama put McChrystal in charge of U.S. and NATO forces in 2009, and his cellphone-based “social network analysis” enabled an exponential increase in night raids, from 20 raids per month in May 2009 to up to 40 per night by April 2011.

As with the Lavender system in Gaza, this huge increase in targets was achieved by taking a system originally designed to identify and track a small number of senior enemy commanders and applying it to anyone suspected of having links with the Taliban, based on their cellphone data.

This led to the capture of an endless flood of innocent civilians, so that most civilian detainees had to be quickly released to make room for new ones. The increased killing of innocent civilians in night raids and airstrikes fueled already fierce resistance to the U.S. and NATO occupation and ultimately led to its defeat.

President Obama’s drone campaign to kill suspected enemies in Pakistan, Yemen and Somalia was just as indiscriminate, with reports suggesting that 90% of the people it killed in Pakistan were innocent civilians.

And yet Obama and his national security team kept meeting in the White House every “Terror Tuesday” to select who the drones would target that week, using an Orwellian, computerized “disposition matrix” to provide technological cover for their life and death decisions.   

Looking at this evolution of ever-more automated systems for killing and capturing enemies, we can see how, as the information technology used has advanced from telexes to cellphones and from early IBM computers to artificial intelligence, the human intelligence and sensibility that could spot mistakes, prioritize human life and prevent the killing of innocent civilians has been progressively marginalized and excluded, making these operations more brutal and horrifying than ever.

Nicolas has at least two good friends who survived the dirty wars in Latin America because someone who worked in the police or military got word to them that their names were on a death list, one in Argentina, the other in Guatemala. If their fates had been decided by an AI machine like Lavender, they would both be long dead.

As with supposed advances in other types of weapons technology, like drones and “precision” bombs and missiles, innovations that claim to make targeting more precise and eliminate human error have instead led to the automated mass murder of innocent people, especially women and children, bringing us full circle from one holocaust to the next.

Via Code Pink

Gaza Conflict: Israel using AI to identify Human Targets raises Fears Innocents are Targeted
https://www.juancole.com/2024/04/conflict-identify-innocents.html | Sat, 13 Apr 2024

By Elke Schwarz, Queen Mary University of London | –

A report by Jerusalem-based investigative journalists published in +972 magazine finds that AI targeting systems have played a key role in identifying – and potentially misidentifying – tens of thousands of targets in Gaza. This suggests that autonomous warfare is no longer a future scenario. It is already here and the consequences are horrifying.

There are two technologies in question. The first, “Lavender”, is an AI recommendation system designed to use algorithms to identify Hamas operatives as targets. The second, the grotesquely named “Where’s Daddy?”, is a system which tracks targets geographically so that they can be followed into their family residences before being attacked. Together, these two systems constitute an automation of the find-fix-track-target components of what is known by the modern military as the “kill chain”.

Systems such as Lavender are not autonomous weapons, but they do accelerate the kill chain and make the process of killing progressively more autonomous. AI targeting systems draw on data from computer sensors and other sources to statistically assess what constitutes a potential target. Vast amounts of this data are gathered by Israeli intelligence through surveillance on the 2.3 million inhabitants of Gaza.

Such systems are trained on a set of data to produce the profile of a Hamas operative. This could be data about gender, age, appearance, movement patterns, social network relationships, accessories, and other “relevant features”. They then work to match actual Palestinians to this profile by degree of fit. The category of what constitutes relevant features of a target can be set as stringently or as loosely as is desired. In the case of Lavender, it seems one of the key equations was “male equals militant”. This has echoes of the infamous “all military-aged males are potential targets” mandate of the 2010s US drone wars in which the Obama administration identified and assassinated hundreds of people designated as enemies “based on metadata”.

What is different with AI in the mix is the speed with which targets can be algorithmically determined and the mandate of action this issues. The +972 report indicates that the use of this technology has led to the dispassionate annihilation of thousands of eligible – and ineligible – targets at speed and without much human oversight.


“Lavender 3,” digital, Dream/ Dreamworld v. 3, 2024.

The Israel Defense Forces (IDF) were swift to deny the use of AI targeting systems of this kind. And it is difficult to verify independently whether and, if so, the extent to which they have been used, and how exactly they function. But the functionalities described by the report are entirely plausible, especially given the IDF’s own boasts to be “one of the most technological organisations” and an early adopter of AI.

With military AI programs around the world striving to shorten what the US military calls the “sensor-to-shooter timeline” and “increase lethality” in their operations, why would an organisation such as the IDF not avail themselves of the latest technologies?

The fact is, systems such as Lavender and Where’s Daddy? are the manifestation of a broader trend which has been underway for a good decade, and the IDF and its elite units are far from the only ones seeking to integrate more AI targeting systems into their processes.

When machines trump humans

Earlier this year, Bloomberg reported on the latest version of Project Maven, the US Department of Defense AI pathfinder programme, which has evolved from being a sensor data analysis programme in 2017 to a full-blown AI-enabled target recommendation system built for speed. As Bloomberg journalist Katrina Manson reports, the operator “can now sign off on as many as 80 targets in an hour of work, versus 30 without it”.

Manson quotes a US army officer tasked with learning the system describing the process of concurring with the algorithm’s conclusions, delivered in a rapid staccato: “Accept. Accept, Accept”. Evident here is how the human operator is deeply embedded in digital logics that are difficult to contest. This gives rise to a logic of speed and increased output that trumps all else.

The efficient production of death is reflected also in the +972 account, which indicated an enormous pressure to accelerate and increase the production of targets and the killing of these targets. As one of the sources says: “We were constantly being pressured: bring us more targets. They really shouted at us. We finished [killing] our targets very quickly”.

Built-in biases

Systems like Lavender raise many ethical questions pertaining to training data, biases, accuracy, error rates and, importantly, questions of automation bias. Automation bias cedes all authority, including moral authority, to the dispassionate interface of statistical processing.

Speed and lethality are the watchwords for military tech. But in prioritising AI, the scope for human agency is marginalised. The logic of the system requires this, owing to the comparatively slow cognitive systems of the human. It also removes the human sense of responsibility for computer-produced outcomes.

I’ve written elsewhere how this complicates notions of control (at all levels) in ways that we must take into consideration. When AI, machine learning and human reasoning form a tight ecosystem, the capacity for human control is limited. Humans have a tendency to trust whatever computers say, especially when they move too fast for us to follow.

The problem of speed and acceleration also produces a general sense of urgency, which privileges action over non-action. This turns categories such as “collateral damage” or “military necessity”, which should serve as a restraint to violence, into channels for producing more violence.

I am reminded of the military scholar Christopher Coker’s words: “we must choose our tools carefully, not because they are inhumane (all weapons are) but because the more we come to rely on them, the more they shape our view of the world”. It is clear that military AI shapes our view of the world. Tragically, Lavender gives us cause to realise that this view is laden with violence.

Elke Schwarz, Reader in Political Theory, Queen Mary University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Four Ways AI could Help us Respond to Climate Change https://www.juancole.com/2024/02/respond-climate-change.html Wed, 28 Feb 2024 05:06:31 +0000 https://www.juancole.com/?p=217319 By Lakshmi Babu Saheer, Anglia Ruskin University | –

(The Conversation) – Advanced AI systems are coming under increasing criticism for how much energy they use. But it’s important to remember that AI could also contribute in various ways to our response to climate change.

Climate change can be broken down into several smaller problems that must be addressed as part of an overarching strategy for adapting to and mitigating it. These include identifying sources of emissions, enhancing the production and use of renewable energy and predicting calamities like floods and fires.

My own research looks at how AI can be harnessed for predicting greenhouse gas emissions from cities and farms, and for understanding changes in vegetation, biodiversity and terrain from satellite images.

Here are four different areas where AI has already managed to master some of the smaller tasks necessary for a wider confrontation with the climate crisis.


“AI and Climate Change,” Digital, Dream, Dreamland v. 3, 2024

1. Electricity

AI could help reduce energy-related emissions by more accurately forecasting energy supply and demand.

AI can learn patterns in how and when people use energy. It can also accurately forecast how much energy will be generated from sources like wind and solar depending on the weather and so help to maximise the use of clean energy.

For example, by estimating the amount of solar power generated from panels (based on sunlight duration or weather conditions), AI can help plan the timing of laundry or charging of electric vehicles to help consumers make the most of this renewable energy. On a grander scale, it could help grid operators pre-empt and mitigate gaps in supply.
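
The forecast-then-schedule pattern described above can be made concrete with a minimal sketch. The synthetic weather data, the feature choices (sunlight hours, cloud cover) and the model below are illustrative assumptions rather than any system cited in this article; the point is only the shape of the logic: predict hourly solar output, then shift a flexible load such as EV charging to the best predicted hour.

    # Minimal, illustrative forecast-and-schedule sketch. The data, features
    # and model choice are assumptions for demonstration only.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic training data: [sunlight_hours, cloud_cover] -> solar output (kWh)
    X_train = np.column_stack([
        rng.uniform(0, 14, 500),   # hours of sunlight
        rng.uniform(0, 1, 500),    # cloud cover, 0 = clear
    ])
    y_train = 3.0 * X_train[:, 0] * (1 - 0.7 * X_train[:, 1]) + rng.normal(0, 1, 500)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Assumed hourly forecast for tomorrow: predict output for each hour,
    # then schedule a flexible load (e.g. EV charging) at the best hour.
    hours = np.arange(8, 18)
    forecast = np.column_stack([
        np.clip(np.sin((hours - 6) / 12 * np.pi) * 14, 0, None),  # crude sunlight proxy
        rng.uniform(0.1, 0.5, len(hours)),                        # forecast cloud cover
    ])
    predicted_kwh = model.predict(forecast)
    best_hour = hours[int(np.argmax(predicted_kwh))]
    print(f"Charge the EV around {best_hour}:00, predicted solar output "
          f"{predicted_kwh.max():.1f} kWh")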

Researchers in Iran used AI to predict the energy consumption of a research centre by taking account of its occupancy, structure, materials and local weather conditions. The system also used algorithms to optimise the building’s energy use, proposing appropriate insulation measures, heating controls, and lighting and power levels based on the number of people present, ultimately reducing consumption by 35%.

2. Transport

Transport accounts for roughly one-fifth of global CO₂ emissions. AI models can encourage green travel options by suggesting the most efficient routes for drivers, with fewer hills, less traffic and constant speeds, and so minimise emissions.

An AI-based system suggested routes for electric vehicles in the city of Gothenburg, Sweden. The system used features like vehicle speed and the location of charging points to find optimal routes that minimised energy use.
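
The details of the Gothenburg system are not given here, but the underlying idea of treating energy, rather than distance or time, as the routing cost can be sketched in a few lines. The toy road graph, the crude per-edge energy model and the node names below are assumptions for illustration only, not the actual system.

    # Minimal sketch of energy-aware routing with Dijkstra's algorithm.
    # The toy graph and energy model are illustrative assumptions.
    import heapq

    def energy_cost(distance_km, avg_speed_kmh, climb_m):
        """Crude per-edge energy estimate (kWh): rolling + aerodynamic + climbing terms."""
        rolling = 0.15 * distance_km
        aero = 0.00002 * avg_speed_kmh ** 2 * distance_km
        climbing = 0.003 * max(climb_m, 0)
        return rolling + aero + climbing

    # Edges: (from, to, distance_km, avg_speed_kmh, climb_m)
    edges = [
        ("A", "B", 5, 50, 20), ("A", "C", 4, 90, 80),
        ("B", "D", 6, 50, 0),  ("C", "D", 3, 90, -50),
        ("B", "C", 2, 30, 10),
    ]
    graph = {}
    for u, v, d, s, c in edges:
        graph.setdefault(u, []).append((v, energy_cost(d, s, c)))

    def cheapest_route(graph, start, goal):
        """Dijkstra over estimated energy cost instead of distance or time."""
        queue = [(0.0, start, [start])]
        best = {}
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if cost >= best.get(node, float("inf")):
                continue
            best[node] = cost
            for neighbour, edge_cost in graph.get(node, []):
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
        return float("inf"), []

    kwh, route = cheapest_route(graph, "A", "D")
    print(f"Lowest-energy route: {' -> '.join(route)} (~{kwh:.2f} kWh)")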

3. Agriculture

Studies have shown that better farming practices can reduce emissions. AI can help ensure that space and fertilisers (which contribute to climate change) are used sparingly.

By predicting how much of a crop people will buy in a particular market, AI can help producers and distributors minimise waste. A 2017 study conducted by Stanford University in the US even showed that advanced AI models can predict county-level soybean yields.

This was possible using satellite images to analyse and track the growth of crops. Researchers compared multiple models, and the best-performing one could predict a crop’s yield based on images of the growing plants and other features, including the climate.

Knowing a crop’s probable yield weeks in advance can help governments and agencies plan alternative means of procuring food in advance of a bad harvest.

4. Disaster management

The prediction and management of disasters is a field where AI has made major contributions. AI models have studied images from drones to predict flood damage in the Indus basin in Pakistan.

The system is also useful for detecting the onset of a flood and helping with real-time planning of rescue operations, and it could be used by government authorities to organise prompt relief measures.

These potential uses don’t erase the problem of AI’s energy consumption, however. To ensure AI can be a force for good in the fight against climate change, its own energy appetite will still have to be addressed.

Lakshmi Babu Saheer, Director of Computing Informatics and Applications Research Group, Anglia Ruskin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI Behavior, Human Destiny and the Rise of the Killer Robots https://www.juancole.com/2024/02/behavior-destiny-killer.html Wed, 21 Feb 2024 05:02:39 +0000 https://www.juancole.com/?p=217200 ( Tomdispatch.com ) – Yes, it’s already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice-versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it’s anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called “killer robots” by critics, such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its “collaborative combat aircraft,” an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they’ll be able to communicate with each other without human intervention and, being “intelligent,” will be able to come up with their own unscripted tactics for defeating an enemy — or something else entirely. Such computer-driven groupthink, labeled “emergent behavior” by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, “uninhabited”) versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they’ll be part of a “networked” combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a “loyal wingman” for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot “Swarms”

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

“Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces,” predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). “Networked, cooperative autonomous systems,” he wrote then, “will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole.”

As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and “vote” on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit “swarm” behavior in nature. As Scharre put it, “Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse.”
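
As a rough illustration of what “cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole” can mean computationally, consider the following minimal boids-style sketch: each agent follows only simple local rules (cohesion, alignment, separation), yet a common group heading emerges with no central controller. The rules, coefficients and agent count are assumptions for illustration, not a description of any actual military swarm algorithm.

    # Minimal boids-style simulation: coherent group motion emerging from simple
    # local rules, with no central controller. Parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 20
    positions = rng.uniform(0, 100, (N, 2))
    velocities = rng.uniform(-1, 1, (N, 2))

    def step(positions, velocities, radius=20.0):
        new_v = velocities.copy()
        for i in range(N):
            d = np.linalg.norm(positions - positions[i], axis=1)
            neighbours = (d < radius) & (d > 0)
            if not neighbours.any():
                continue
            # Cohesion: steer toward the local centre of mass.
            cohesion = positions[neighbours].mean(axis=0) - positions[i]
            # Alignment: match the neighbours' average heading.
            alignment = velocities[neighbours].mean(axis=0) - velocities[i]
            # Separation: steer away from very close neighbours.
            close = neighbours & (d < 5)
            separation = (positions[i] - positions[close].mean(axis=0)) if close.any() else 0
            new_v[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
        # Cap speed so the simulation stays stable.
        speed = np.linalg.norm(new_v, axis=1, keepdims=True)
        capped = new_v * (2.0 / np.maximum(speed, 1e-9))
        new_v = np.where(speed > 2.0, capped, new_v)
        return positions + new_v, new_v

    for _ in range(200):
        positions, velocities = step(positions, velocities)

    headings = velocities / np.maximum(np.linalg.norm(velocities, axis=1, keepdims=True), 1e-9)
    alignment_score = np.linalg.norm(headings.mean(axis=0))  # 1.0 = perfectly aligned
    print(f"Group alignment after 200 steps: {alignment_score:.2f}")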

In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre’s at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.

From Mosaic to Replicator

Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s in-house high-tech research organization. As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its “Mosaic” program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.

“Applying the great flexibility of the mosaic concept to warfare,” explained Dan Patt, deputy director of DARPA’s Strategic Technology Office, “lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole.”

This concept of warfare apparently undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. “Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army (PLA). “To stay ahead, we’re going to create a new state of the art… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force’s Collaborative Combat Aircraft and the Navy’s Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment’s Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.

At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To that end, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what, ominously enough, it calls Project VENOM, or “Viper Experimentation and Next-generation Operations Model.” Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.

When Swarms Choose Their Own Path

In other words, it’s only a matter of time before the U.S. military (and presumably China’s, Russia’s, and perhaps those of a few other powers) will be able to deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Any participating robotic member of such swarms would be given a mission objective (“seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates”) but not be given precise instructions on how to do so. That would allow them to select their own battle tactics in consultation with one another. If the limited test data we have is anything to go by, this could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.

The propensity for such interconnected AI systems to engage in novel, unplanned outcomes is what computer experts call “emergent behavior.” As ScienceDirect, a digest of scientific journals, explains it, “An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties.” In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.

At this point, of course, it’s almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or if, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.

What then? Might they choose to keep fighting beyond their preprogrammed limits, provoking unintended escalation — even, conceivably, of a nuclear kind? Or would they choose to stop their attacks on enemy forces and instead interfere with the operations of friendly ones, perhaps firing on and devastating them (as Skynet does in the classic science fiction Terminator movie series)? Or might they engage in behaviors that, for better or infinitely worse, are entirely beyond our imagination?

Top U.S. military and diplomatic officials insist that AI can indeed be used without incurring such future risks and that this country will only employ devices that incorporate thoroughly adequate safeguards against any future dangerous misbehavior. That is, in fact, the essential point made in the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” issued by the State Department in February 2023. Many prominent security and technology officials are, however, all too aware of the potential risks of emergent behavior in future robotic weaponry and continue to issue warnings against the rapid utilization of AI in warfare.

Of particular note is the final report that the National Security Commission on Artificial Intelligence issued in February 2021. Co-chaired by Robert Work (back at CNAS after his stint at the Pentagon) and Eric Schmidt, former CEO of Google, the commission recommended the rapid utilization of AI by the U.S. military to ensure victory in any future conflict with China and/or Russia. However, it also voiced concern about the potential dangers of robot-saturated battlefields.

“The unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” the report noted. This could occur for a number of reasons, including “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems [that is, emergent behaviors] on the battlefield.” Given that danger, it concluded, “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems.”

When the leading advocates of autonomous weaponry tell us to be concerned about the unintended dangers posed by their use in battle, the rest of us should be worried indeed. Even if we lack the mathematical skills to understand emergent behavior in AI, it should be obvious that humanity could face a significant risk to its existence, should killing machines acquire the ability to think on their own. Perhaps they would surprise everyone and decide to take on the role of international peacekeepers, but given that they’re being designed to fight and kill, it’s far more probable that they might simply choose to carry out those instructions in an independent and extreme fashion.

If so, there could be no one around to put an R.I.P. on humanity’s gravestone.

Via Tomdispatch.com

AI Goes to War: Will the Pentagon’s Techno-Fantasies Pave the Way for War with China? https://www.juancole.com/2023/10/pentagons-techno-fantasies.html Wed, 04 Oct 2023 04:04:12 +0000 https://www.juancole.com/?p=214662 ( Tomdispatch.com) – On August 28th, Deputy Secretary of Defense Kathleen Hicks chose the occasion of a three-day conference organized by the National Defense Industrial Association (NDIA), the arms industry’s biggest trade group, to announce the “Replicator Initiative.” Among other things, it would involve producing “swarms of drones” that could hit thousands of targets in China on short notice. Call it the full-scale launching of techno-war.

Her speech to the assembled arms makers was yet another sign that the military-industrial complex (MIC) President Dwight D. Eisenhower warned us about more than 60 years ago is still alive, all too well, and taking a new turn. Call it the MIC for the digital age.

Hicks described the goal of the Replicator Initiative this way:

“To stay ahead [of China], we’re going to create a new state of the art… leveraging attritable, autonomous systems in all domains which are less expensive, put fewer people at risk, and can be changed, upgraded, or improved with substantially shorter lead times… We’ll counter the PLA’s [People’s Liberation Army’s] mass with mass of our own, but ours will be harder to plan for, harder to hit, and harder to beat.”

Think of it as artificial intelligence (AI) goes to war — and oh, that word “attritable,” a term that doesn’t exactly roll off the tongue or mean much of anything to the average taxpayer, is pure Pentagonese for the ready and rapid replaceability of systems lost in combat. Let’s explore later whether the Pentagon and the arms industry are even capable of producing the kinds of cheap, effective, easily replicable techno-war systems Hicks touted in her speech. First, though, let me focus on the goal of such an effort: confronting China.

Target: China

However one gauges China’s appetite for military conflict — as opposed to relying more heavily on its increasingly powerful political and economic tools of influence — the Pentagon is clearly proposing a military-industrial fix for the challenge posed by Beijing. As Hicks’s speech to those arms makers suggests, that new strategy is going to be grounded in a crucial premise: that any future technological arms race will rely heavily on the dream of building ever cheaper, ever more capable weapons systems based on the rapid development of near-instant communications, artificial intelligence, and the ability to deploy such systems on short notice.

The vision Hicks put forward to the NDIA is, you might already have noticed, untethered from the slightest urge to respond diplomatically or politically to the challenge of Beijing as a rising great power. It matters little that those would undoubtedly be the most effective ways to head off a future conflict with China.

Such a non-military approach would be grounded in a clearly articulated return to this country’s longstanding “One China” policy. Under it, the U.S. would forgo any hint of the formal political recognition of the island of Taiwan as a separate state, while Beijing would commit itself to limiting to peaceful means its efforts to absorb that island.

There are numerous other issues where collaboration between the two nations could move the U.S. and China from a policy of confrontation to one of cooperation, as noted in a new paper by my colleague Jake Werner of the Quincy Institute: “1) development in the Global South; 2) addressing climate change; 3) renegotiating global trade and economic rules; and 4) reforming international institutions to create a more open and inclusive world order.” Achieving such goals on this planet now might seem like a tall order, but the alternative — bellicose rhetoric and aggressive forms of competition that increase the risk of war — should be considered both dangerous and unacceptable.

On the other side of the equation, proponents of increasing Pentagon spending to address the purported dangers of the rise of China are masters of threat inflation. They find it easy and satisfying to exaggerate both Beijing’s military capabilities and its global intentions in order to justify keeping the military-industrial complex amply funded into the distant future.

As Dan Grazier of the Project on Government Oversight noted in a December 2022 report, while China has made significant strides militarily in the past few decades, its strategy is “inherently defensive” and poses no direct threat to the United States. At present, in fact, Beijing lags behind Washington strikingly when it comes to both military spending and key capabilities, including having a far smaller (though still undoubtedly devastating) nuclear arsenal, a less capable Navy, and fewer major combat aircraft. None of this would, however, be faintly obvious if you only listened to the doomsayers on Capitol Hill and in the halls of the Pentagon.

But as Grazier points out, this should surprise no one since “threat inflation has been the go-to tool for defense spending hawks for decades.” That was, for instance, notably the case at the end of the Cold War of the last century, after the Soviet Union had disintegrated, when then Chairman of the Joint Chiefs of Staff Colin Powell so classically said: “Think hard about it. I’m running out of demons. I’m running out of villains. I’m down to [Cuba’s Fidel] Castro and Kim Il-sung [the late North Korean dictator].”

Needless to say, that posed a grave threat to the Pentagon’s financial fortunes and Congress did indeed insist then on significant reductions in the size of the armed forces, offering less funds to spend on new weaponry in the first few post-Cold War years. But the Pentagon was quick to highlight a new set of supposed threats to American power to justify putting military spending back on the upswing. With no great power in sight, it began focusing instead on the supposed dangers of regional powers like Iran, Iraq, and North Korea. It also greatly overstated their military strength in its drive to be funded to win not one but two major regional conflicts at the same time. This process of switching to new alleged threats to justify a larger military establishment was captured strikingly in Michael Klare’s 1995 book Rogue States and Nuclear Outlaws.

After the 9/11 attacks, that “rogue states” rationale was, for a time, superseded by the disastrous “Global War on Terror,” a distinctly misguided response to those terrorist acts. It would spawn trillions of dollars of spending on wars in Iraq and Afghanistan and a global counter-terror presence that included U.S. operations in 85 — yes, 85! — countries, as strikingly documented by the Costs of War Project at Brown University.

All of that blood and treasure, including hundreds of thousands of direct civilian deaths (and many more indirect ones), as well as thousands of American deaths and painful numbers of devastating physical and psychological injuries to U.S. military personnel, resulted in the installation of unstable or repressive regimes whose conduct — in the case of Iraq — helped set the stage for the rise of the Islamic State (ISIS) terror organization. As it turned out, those interventions proved to be anything but either the “cakewalk” or the flowering of democracy predicted by the advocates of America’s post-9/11 wars. Give them full credit, though! They proved to be a remarkably efficient money machine for the denizens of the military-industrial complex.

Constructing “the China Threat”

As for China, its status as the threat du jour gained momentum during the Trump years. In fact, for the first time since the twentieth century, the Pentagon’s 2018 defense strategy document targeted “great power competition” as the wave of the future.

One particularly influential document from that period was the report of the congressionally mandated National Defense Strategy Commission. That body critiqued the Pentagon’s strategy of the moment, boldly claiming (without significant backup information) that the Defense Department was not planning to spend enough to address the military challenge posed by great power rivals, with a primary focus on China.

The commission proposed increasing the Pentagon’s budget by 3% to 5% above inflation for years to come — a move that would have pushed it to an unprecedented $1 trillion or more within a few years. Its report would then be extensively cited by Pentagon spending boosters in Congress, most notably former Senate Armed Services Committee Chair James Inhofe (R-OK), who used to literally wave it at witnesses in hearings and ask them to pledge allegiance to its dubious findings.
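
For a sense of the arithmetic behind that claim, here is a back-of-the-envelope sketch. It assumes a starting topline of roughly $700 billion (about the level when the commission reported) and 3% annual inflation; both figures are simplifying assumptions. On those terms, 3% to 5% real growth compounds past the trillion-dollar mark within about five to seven years.

    # Back-of-the-envelope compounding of the commission's "3% to 5% above inflation".
    # The ~$700 billion baseline and the 3% inflation rate are simplifying assumptions.
    def years_to_reach(start_billions, real_growth, inflation=0.03, target=1000.0):
        budget, years = start_billions, 0
        while budget < target:
            budget *= 1 + real_growth + inflation   # approximate nominal growth
            years += 1
        return years, budget

    for real_growth in (0.03, 0.05):
        years, budget = years_to_reach(700.0, real_growth)
        print(f"{real_growth:.0%} real growth: crosses $1 trillion after {years} years "
              f"(~${budget:.0f} billion)")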

That 3% to 5% real growth figure caught on with prominent hawks in Congress and, until the recent chaos in the House of Representatives, spending did indeed fit just that pattern. What has not been much discussed is research by the Project on Government Oversight showing that the commission that penned the report and fueled those spending increases was heavily weighted toward individuals with ties to the arms industry. Its co-chair, for instance, served on the board of the giant weapons maker Northrop Grumman, and most of the other members had been or were advisers or consultants to the industry, or worked in think tanks heavily funded by just such corporations. So, we were never talking about a faintly objective assessment of U.S. “defense” needs.

Beware of Pentagon “Techno-Enthusiasm”

Just so no one would miss the point in her NDIA speech, Kathleen Hicks reiterated that the proposed transformation of weapons development with future techno-war in mind was squarely aimed at Beijing. “We must,” she said, “ensure the PRC leadership wakes up every day, considers the risks of aggression and concludes, ‘today is not the day’ — and not just today, but every day, between now and 2027, now and 2035, now and 2049, and beyond… Innovation is how we do that.”

The notion that advanced military technology could be the magic solution to complex security challenges runs directly against the actual record of the Pentagon and the arms industry over the past five decades. In those years, supposedly “revolutionary” new systems like the F-35 combat aircraft, the Army’s Future Combat System (FCS), and the Navy’s Littoral Combat Ship have been notoriously plagued by cost overruns, schedule delays, performance problems, and maintenance challenges that have, at best, severely limited their combat capabilities. In fact, the Navy is already planning to retire a number of those Littoral Combat Ships early, while the whole FCS program was canceled outright.

In short, the Pentagon is now betting on a complete transformation of how it and the industry do business in the age of AI — a long shot, to put it mildly.

But you can count on one thing: the new approach is likely to be a gold mine for weapons contractors, even if the resulting weaponry doesn’t faintly perform as advertised. This quest will not be without political challenges, most notably finding the many billions of dollars needed to pursue the goals of the Replicator Initiative, while staving off lobbying by producers of existing big-ticket items like aircraft carriers, bombers, and fighter jets.

Members of Congress will defend such current-generation systems fiercely to keep weapons spending flowing to major corporate contractors and so into key congressional districts. One solution to the potential conflict between funding the new systems touted by Hicks and the costly existing programs that now feed the titans of the arms industry: jack up the Pentagon’s already massive budget and head for that trillion-dollar peak, which would be the highest level of such spending since World War II.

The Pentagon has long built its strategy around supposed technological marvels like the “electronic battlefield” in the Vietnam era; the “revolution in military affairs,” first touted in the early 1990s; and the precision-guided munitions praised since at least the 1991 Persian Gulf war. It matters little that such wonder weapons have never performed as advertised. For example, a detailed Government Accountability Office report on the bombing campaign in the Gulf War found that “the claim by DOD [Department of Defense] and contractors of a one-target, one-bomb capability for laser-guided munitions was not demonstrated in the air campaign where, on average, 11 tons of guided and 44 tons of unguided munitions were delivered on each successfully destroyed target.”

When such advanced weapons systems can be made to work, at enormous cost in time and money, they almost invariably prove of limited value, even against relatively poorly armed adversaries (as in Iraq and Afghanistan in this century). China, a great power rival with a modern industrial base and a growing arsenal of sophisticated weaponry, is another matter. The quest for decisive military superiority over Beijing and the ability to win a war against a nuclear-armed power should be (but isn’t) considered a fool’s errand, more likely to spur a war than deter it, with potentially disastrous consequences for all concerned.

Perhaps most dangerous of all, a drive for the full-scale production of AI-based weaponry will only increase the likelihood that future wars could be fought all too disastrously without human intervention. As Michael Klare pointed out in a report for the Arms Control Association, relying on such systems will also magnify the chances of technical failures, as well as misguided AI-driven targeting decisions that could spur unintended slaughter and decision-making without human intervention. The potentially disastrous malfunctioning of such autonomous systems might, in turn, only increase the possibility of nuclear conflict.

It would still be possible to rein in the Pentagon’s techno-enthusiasm by slowing the development of the kinds of systems highlighted in Hicks’s speech, while creating international rules of the road regarding their future development and deployment. But the time to start pushing back against yet another misguided “techno-revolution” is now, before automated warfare increases the risk of a global catastrophe. Emphasizing new weaponry over creative diplomacy and smart political decisions is a recipe for disaster in the decades to come. There has to be a better way.

Tomdispatch.com
