The Battle for the Soul of American Science: The Pentagon goes to School
https://www.juancole.com/2024/09/american-science-pentagon.html – Mon, 30 Sep 2024

( Tomdispatch.com ) – The divestment campaigns launched last spring by students protesting Israel’s mass slaughter in Gaza brought the issue of the militarization of American higher education back into the spotlight.

Of course, financial ties between the Pentagon and American universities are nothing new. As Stuart Leslie has pointed out in his seminal book on the topic, The Cold War and American Science, “In the decade following World War II, the Department of Defense (DOD) became the biggest patron of American science.” Admittedly, as civilian institutions like the National Institutes of Health grew larger, the Pentagon’s share of federal research and development did decline, but it still remained a source of billions of dollars in funding for university research.

And now, Pentagon-funded research is once again on the rise, driven by the DOD’s recent focus on developing new technologies like weapons driven by artificial intelligence (AI). Combine that with an intensifying drive to recruit engineering graduates and the forging of partnerships between professors and weapons firms, and you have a situation in which many talented technical types could spend their entire careers serving the needs of the warfare state. The only way to head off such a Brave New World would be greater public pushback against the military conquest (so to speak) of America’s research and security agendas, in part through resistance by scientists and engineers whose skills are so essential to building the next generation of high-tech weaponry.

The Pentagon Goes to School

Yes, the Pentagon’s funding of universities is indeed rising once again and it goes well beyond the usual suspects like MIT or Johns Hopkins University. In 2022, the most recent year for which full data is available, 14 universities received at least — and brace yourself for this — $100 million in Pentagon funding, from Johns Hopkins’s astonishing $1.4 billion (no, that is not a typo!) to Colorado State’s impressive $100 million. And here’s a surprise: two of the universities with the most extensive connections to our weaponry of the future are in Texas: the University of Texas at Austin (UT-Austin) and Texas A&M.

In 2020, Texas Governor Greg Abbott and former Army Secretary Ryan McCarthy appeared onstage at a UT-Austin ceremony to commemorate the creation of a robotics lab there, part of a new partnership between the Army Futures Command and the school. “This is ground zero for us in our research for the weapons systems we’re going to develop for decades to come,” said McCarthy.

Not to be outdone, Texas A&M is quietly becoming the Pentagon’s base for research on hypersonics — weapons designed to travel at least five times the speed of sound. Equipped with a kilometer-long tunnel for testing hypersonic missiles, that school’s University Consortium for Applied Hypersonics is explicitly dedicated to outpacing America’s global rivals in the development of that next-generation military technology. Texas A&M is also part of the team that runs the Los Alamos National Laboratory, the (in)famous New Mexico facility where the first nuclear weapons were developed and tested as part of the Manhattan Project under the direction of Robert Oppenheimer.

Other major players include Carnegie Mellon University, a center for Army research on the applications of AI, and Stanford University, which serves as a feeder to California’s Silicon Valley firms of all types. That school also runs the Technology Transfer for Defense (TT4D) Program aimed at transitioning academic technologies from the lab to the marketplace and exploring the potential military applications of emerging technology products.

In addition, the Pentagon is working aggressively to bring new universities into the fold. In January 2023, Secretary of Defense Lloyd Austin announced the creation of a defense-funded research center at Howard University, the first of its kind at a historically black college.

Given the campus Gaza demonstrations of last spring, perhaps you also won’t be surprised to learn that the recent surge in Pentagon spending faces increasing criticism from students and faculty alike. Targets of protest include the Lavender program, which has used AI to multiply the number of targets the Israeli armed forces can hit in a given time frame. But beyond focusing on companies enabling Israel’s war effort, current activists are also looking at the broader role of their universities in the all-American war system.

For example, at Indiana University, research on ties to companies fueling the killings in Gaza grew into a study of the larger role of universities in supporting the military system as a whole. Student activists found that the most important connection involved that university’s ties to the Naval Surface Warfare Center, Crane Division, whose mission is “to provide acquisition, engineering… and technical support for sensors, electronics, electronic warfare, and special warfare weapons.” In response, they have launched a “Keep Crane Off Campus” campaign.

A Science of Death or for Life?

Graduating science and engineering students increasingly face a moral dilemma about whether they want to put their skills to work developing instruments of death. Journalist Indigo Olivier captured that conflict in a series of interviews with graduating engineering students. She quotes one student at the University of West Florida who strongly opposes weapons work: “When it comes to engineering, we do have a responsibility… Every tool can be a weapon… I don’t really feel like I need to be putting my gifts to make more bombs.” By contrast, Cameron Davis, a 2021 computer engineering graduate from Georgia Tech, told Olivier about the dilemma faced by so many graduating engineers: “A lot of people that I talk to aren’t 100% comfortable working on defense contracts, working on things that are basically going to kill people.” But he went on to say that the high pay at weapons firms “drives a lot of your moral disagreements with defense away.”

The choice faced by today’s science and engineering graduates is nothing new. The use of science for military ends has a long history in the United States. But there have also been numerous examples of scientists who resisted dangerous or seemingly unworkable military schemes. When President Ronald Reagan announced his “Star Wars” missile defense plan in 1983, for instance, he promised, all too improbably, to develop an impenetrable shield that would protect the United States from any and all incoming nuclear-armed missiles. In response, physicists David Wright and Lisbeth Gronlund circulated a pledge to refuse to work on that program. It would, in the end, be signed by more than 7,000 scientists. And that document actually helped puncture the mystique of the Star Wars plan, a reminder that protest against the militarization of education isn’t always in vain.

Scientists have also played a leading role in pressing for nuclear arms control and disarmament, founding organizations like the Bulletin of the Atomic Scientists (1945), the Federation of American Scientists (1945), the global Pugwash movement (1957), the Council for a Livable World (1962), and the Union of Concerned Scientists (1969). To this day, all of them continue to work to curb the threat of a nuclear war that could destroy this planet as a livable place for humanity.

A central figure in this movement was Joseph Rotblat, the only scientist to resign from the Manhattan Project over moral qualms about the potential impact of the atomic bomb. In 1957, he helped organize the founding meeting of the Pugwash Conference, an international organization devoted to the control and ultimate elimination of nuclear weapons. In some respects Pugwash was a forerunner of the International Campaign to Abolish Nuclear Weapons (ICAN), which successfully pressed for the U.N. Treaty on the Prohibition of Nuclear Weapons, which entered into force in January 2021.

Enabling Endless War and Widespread Torture

The social sciences also have a long, conflicted history of ties to the Pentagon and the military services. Two prominent examples from earlier in this century were the Pentagon’s Human Terrain System (HTS) and the role of psychologists in crafting torture programs associated with the Global War on Terror, launched after the 9/11 attacks with the invasion of Afghanistan.

The HTS was initially intended to reduce the “cultural knowledge gap” suffered by U.S. troops involved in counterinsurgency operations in Afghanistan and Iraq early in this century. The theory was that military personnel with a better sense of local norms and practices would be more effective in winning “hearts and minds” and so defeating determined enemies on their home turf. The plan included the deployment of psychologists, anthropologists, and other social scientists in Human Terrain Teams alongside American troops in the field.

Launched in 2007, the program sparked intense protests in the academic community, with a particularly acrimonious debate within the American Anthropological Association. Ed Liebow, the executive director of the association, argued that its debate “convinced a very large majority of our members that it was just not a responsible way for professional anthropologists to conduct themselves.” After a distinctly grim history that included “reports of racism, sexual harassment, and payroll padding,” as well as a belief by many commanders that Human Terrain Teams were simply ineffective, the Army quietly abandoned the program in 2014.

An even more controversial use of social scientists in the service of the war machine was the role of psychologists as advisors to the CIA’s torture programs at Abu Ghraib in Iraq, the Guantánamo Bay detention center in Cuba, and other “black sites” run by that agency. James E. Mitchell, a psychologist under contract to U.S. intelligence, helped develop the “enhanced interrogation techniques” used by the U.S. during its post-9/11 “war on terror,” even sitting in on a session in which a prisoner was waterboarded. That interrogation program, developed by Mitchell with psychologist John Bruce Jessen, included resorting to “violence, sleep deprivation, and humiliation.”

The role of psychologists in crafting the CIA’s torture program drew harsh criticism within the profession. A 2015 report by independent critics revealed that the leaders of the American Psychological Association had “secretly collaborated with the administration of President George W. Bush to bolster a legal and ethical justification for the torture of prisoners swept up in the post-Sept. 11 war on terror.” Over time, it became ever clearer that the torture program was not only immoral but remarkably ineffective, since the victims of such torture often told interrogators what they wanted to hear, whether or not their admissions squared with reality.

That was then, of course. But today, resistance to the militarization of science has extended to the growing use of artificial intelligence and other emerging military technologies. For example, in 2018, there was a huge protest movement at Google when employees learned that the company was working on Project Maven, a Pentagon program that used AI to analyze drone surveillance footage and so enable more accurate drone strikes. More than 4,000 Google scientists and engineers signed a letter to company leadership calling for them to steer clear of military work, dozens resigned over the issue, and the protests had a distinct effect on the company. That year, Google announced that it would not renew its Project Maven contract, and pledged that it “will not design or deploy AI” for weapons.

Unfortunately, the lure of military funding was simply too strong. Just a few years after those Project Maven protests, Google again began doing work for the Pentagon, as noted in a 2021 New York Times report by Daisuke Wakabayashi and Kate Conger. Their article pointed to Google’s “aggressive pursuit” of the Joint Warfighting Cloud Capability project, which will attempt to “modernize the Pentagon’s cloud technology and support the use of artificial intelligence to gain an advantage on the battlefield.” (Cloud technology is the term for the delivery of computing services over the internet.)

Meanwhile, a cohort of Google workers has continued to resist such military projects. An October 2021 letter in the British Guardian from “Google and Amazon workers of conscience” called on the companies to “pull out of Project Nimbus [a $1.2 billion contract to provide cloud computing services to the Israeli military and government] and cut all ties with the Israeli military.” As they wrote then, “This contract was signed the same week that the Israeli military attacked Palestinians in the Gaza Strip — killing nearly 250 people, including more than 60 children. The technology our companies have contracted to build will make the systematic discrimination and displacement carried out by the Israeli military and government even crueler and deadlier for Palestinians.”

Of course, their demand seems even more relevant today, in the context of a war on Gaza that had not yet begun when they wrote.

The Future of American Science

Obviously, many scientists do deeply useful research on everything from preventing disease to creating green-energy options that has nothing to do with the military. But the current increases in weapons research could set back such efforts by soaking up an ever larger share of available funds, while also drawing ever more top talent into the military sphere.

The stakes are particularly high now, given the ongoing rush to develop AI-driven weaponry and other emerging technologies that pose the risk of everything from unintended slaughter due to system malfunctions to making war more likely, given the (at least theoretical) ability to limit casualties for the attacking side. In short, turning back the flood of funding for military research and weaponry from the Pentagon and key venture capital firms will be a difficult undertaking. After all, AI is already performing a wide range of military and civilian tasks. Banning it altogether may no longer be a realistic goal, but putting guardrails around its military use might still be.

Such efforts are, in fact, already underway. The International Committee for Robot Arms Control (ICRAC) has called for an international dialogue on “the pressing dangers that these systems pose to peace and international security and to civilians.” ICRAC elaborates on precisely what these risks are: “Autonomous systems have the potential to accelerate the pace and tempo of warfare, to undermine existing arms controls and regulations, to exacerbate the dangers of asymmetric warfare, and to destabilize regional and global security, [as well as to] further the indiscriminate and disproportionate use of force and obscure the moral and legal responsibility for war crimes.”

The Future of Life Institute has underscored the severity of the risk, noting that “more than half of AI experts believe there is a one in ten chance this technology will cause our extinction.”

Instead of listening almost exclusively to happy talk about the military value of AI by individuals and organizations that stand to profit from its adoption, isn’t it time to begin paying attention to the skeptics, while holding back on the deployment of emerging military technologies until there is a national conversation about what they can and can’t accomplish, with scientists playing a central role in bringing the debate back to earth?

Via Tomdispatch.com

We Humans are embedded in a Web of Intelligent Life, not the Pinnacle of a Hierarchy
https://www.juancole.com/2024/08/embedded-intelligent-hierarchy.html – Sun, 25 Aug 2024

Greenfield, Mass. (Special to Informed Comment; Feature) – From the largest to the smallest and the oldest to the youngest creatures on Earth–Antarctic blue whales and coastal redwood trees, minute bacteria and human beings–we are all enmeshed in layers of relationships. We need each other, though some more than others. Plants evolved hundreds of millions of years before the first humans and transformed the Earth–through their creativity in surviving predators–into a livable environment for all animals, including humans. We needed plants for our evolution and need them now for our survival from climate disaster. They, however, did not need us for their existence and would survive without us.

Putting humans at the top of the evolutionary chain as the crown of intelligent life, a western worldview, is–as some keenly grasp–mistaken. The baleful consequences of this simplistic hierarchy are everywhere: out-of-control climate; accelerating rates of animal and plant extinction; dead zones in the oceans and mass mortality of coral reefs; the vast pollution of land, air and water; and the mounting likelihood of human extinction through nuclear war. All of it is caused by humans, and far more egregiously by those with financial and political power than by others.

Certain scientists who study plants–from the simplest to the exotic–are stirring controversy with their question: “Are plants intelligent?” Consider that we humans owe our lives to plants for their food, medicines, and the critical balance of 21% oxygen in the air we breathe. If our human intelligence has discerned over thousands of years which plants are edible and nutritious and healing, wouldn’t the evolutionary ingenuity of the plants that feed and sustain us and all life also constitute intelligence?


“Plant Intelligence,” Digital, Dream / Dreamland v. 3, 2024

Studies have found that elephants recognize themselves in a mirror; crows create tools; dolphins demonstrate empathy and playfulness; and cats exhibit attachment styles similar to those of human toddlers. The given explanation is that they have brains with the neurological capacity for consciousness and intelligence.

But plants do not have a central brain. Could their mode of learning to evade insect predators and maximize their growth come from a different form of intelligence, possibly one distributed across their roots, stems and leaves? Could the whole plant, then, function as a brain? Recent studies of plants have raised the possibility that they are conscious and intelligent. Take communication, something we humans long claimed as our exclusive domain through language and have only recently acknowledged that animals also possess.

Botanists have found that not only do alder and willow trees alter their leaf chemistry to defend themselves against an invasion of tent caterpillars, but that the leaves of faraway trees also change their chemical composition similarly, warned as they are by airborne chemicals released from the trees under attack. Goldenrods likewise signal an attack by a predator through strong chemical communication sent to all other goldenrod neighbors, just as humans warn their neighbors about a nearby fire or flood or crime.

Without any recognizable ears, plants sense sounds. The vibration of a predator insect chewing on its leaves causes a plant to make its own defensive pesticide. Beach evening primrose responds to the sound of honeybees in flight by increasing the sweetness of its nectar to attract them for pollination. Tree roots grow toward the sound of running water, including in pipes, through which the roots often burst, causing great difficulties for municipalities. How do the various plants hear these stimulating sounds?

Plants have memory, some anticipating from past experience when a pollinator will show up for the plants’ pollen. Plants express social intelligence: members of the pea family form relationships with bacteria living in their roots so the bacteria supply beneficial nitrogen for the plants’ growth. Several kinds of plants provide a home and food for compatible ants, which then attack the plants’ ant pests. Perhaps you have noticed that late-summer asters and goldenrod tend to grow as companions. Why? Together, their combined beauty attracts more pollinators.

In closing, I express my immense respect for the indigenous worldview in which wind, rocks, air and rain are our kin, together with plants and nonhuman animals. We humans, the most recent beings, depend on all of these elder kin; and this awareness, this worldview of connectivity among all beings, is our path back to Earth well-being.

Featured Image, “Web of Intelligent Life,” Digital, Dream / Realistic v. 2, 2024

Massive IT Outage spotlights major Vulnerabilities in the global information Ecosystem
https://www.juancole.com/2024/07/spotlights-vulnerabilities-information.html – Mon, 22 Jul 2024

By Richard Forno, University of Maryland, Baltimore County

(The Conversation) – The global information technology outage on July 19, 2024, that paralyzed organizations ranging from airlines to hospitals and even the delivery of uniforms for the Olympic Games represents a growing concern for cybersecurity professionals, businesses and governments.

The outage is emblematic of the way organizational networks, cloud computing services and the internet are interdependent, and the vulnerabilities this creates. In this case, a faulty automatic update to the widely used Falcon cybersecurity software from CrowdStrike caused PCs running Microsoft’s Windows operating system to crash. Unfortunately, many servers and PCs need to be fixed manually, and many of the affected organizations have thousands of them spread around the world.

For Microsoft, the problem was made worse because the company released an update to its Azure cloud computing platform at roughly the same time as the CrowdStrike update. Microsoft, CrowdStrike and other companies like Amazon have issued technical work-arounds for customers willing to take matters into their own hands. But for the vast majority of global users, especially companies, this isn’t going to be a quick fix.

Modern technology incidents, whether cyberattacks or technical problems, continue to paralyze the world in new and interesting ways. Massive incidents like the CrowdStrike update fault not only create chaos in the business world but disrupt global society itself. The economic losses resulting from such incidents – lost productivity, recovery, disruption to business and individual activities – are likely to be extremely high.

As a former cybersecurity professional and current security researcher, I believe that the world may finally be realizing that modern information-based society is based on a very fragile foundation.

The bigger picture

Interestingly, on June 11, 2024, a post on CrowdStrike’s own blog seemed to predict this very situation – the global computing ecosystem compromised by one vendor’s faulty technology – though they probably didn’t expect that their product would be the cause.

Software supply chains have long been a serious cybersecurity concern and potential single point of failure. Companies like CrowdStrike, Microsoft, Apple and others have direct, trusted access into organizations’ and individuals’ computers. As a result, people have to trust that the companies are not only secure themselves, but that the products and updates they push out are well-tested and robust before they’re applied to customers’ systems. The SolarWinds incident, in which attackers began compromising that company’s software supply chain in 2019, may well be considered a preview of today’s CrowdStrike incident.


Image by Daniel Kirsch from Pixabay

CrowdStrike CEO George Kurtz said “this is not a security incident or cyberattack” and that “the issue has been identified, isolated and a fix has been deployed.” While perhaps true from CrowdStrike’s perspective – they were not hacked – it doesn’t mean the effects of this incident won’t create security problems for customers. It’s quite possible that in the short term, organizations may disable some of their internet security devices to try to get ahead of the problem, but in doing so they may open themselves up to criminals penetrating their networks.

It’s also likely that people will be targeted by various scams preying on user panic or ignorance regarding the issue. Overwhelmed users might either take offers of faux assistance that lead to identity theft, or throw away money on bogus solutions to this problem.

Organizations and users will need to wait until a fix is available or try to recover on their own if they have the technical ability. After that, I believe there are several things to do and consider as the world recovers from this incident.

Companies will need to ensure that the products and services they use are trustworthy. This means doing due diligence on the vendors of such products for security and resilience. Large organizations typically test any product upgrades and updates before allowing them to be released to their internal users, but for some routine products like security tools, that may not happen.
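In practice, that kind of pre-release testing is often implemented as a staged or “ringed” rollout, in which an update reaches a small test group before the whole fleet. Here is a minimal sketch of the idea in Python; the ring names, stub functions, and soak time are illustrative assumptions, not any vendor’s actual tooling:

```python
# Minimal sketch of a staged ("ringed") rollout gate for third-party updates.
# All names here are illustrative assumptions, not any vendor's real API.
import time

RINGS = ["test_lab", "canary_5_percent", "full_fleet"]  # hypothetical rings

def deploy(update_id: str, ring: str) -> None:
    print(f"deploying {update_id} to {ring}")

def rollback(update_id: str, ring: str) -> None:
    print(f"rolling back {update_id} from {ring}")

def health_check(ring: str) -> bool:
    # In practice: poll crash telemetry, boot-success rates, support tickets.
    return True

def staged_rollout(update_id: str, soak_seconds: int = 0) -> None:
    """Push an update ring by ring, halting before it can reach everyone."""
    for ring in RINGS:
        deploy(update_id, ring)
        time.sleep(soak_seconds)  # let the update "soak" before widening exposure
        if not health_check(ring):
            rollback(update_id, ring)
            raise RuntimeError(f"{update_id} failed health check in {ring}")

if __name__ == "__main__":
    staged_rollout("sensor-content-update-2024-07-19")
```

Had a faulty update been stopped at a test ring like this, it could never have reached thousands of machines at once.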

Governments and companies alike will need to emphasize resilience in designing networks and systems. This means taking steps to avoid creating single points of failure in infrastructure, software and workflows that an adversary could target or a disaster could make worse. It also means knowing whether any of the products organizations depend on are themselves dependent on certain other products or infrastructures to function.

Organizations will need to renew their commitment to best practices in cybersecurity and general IT management. For example, having a robust backup system in place can make recovery from such incidents easier and minimize data loss. Ensuring appropriate policies, procedures, staffing and technical resources is essential.

Problems in the software supply chain like this make it difficult to follow the standard IT recommendation to always keep your systems patched and current. Unfortunately, the costs of not keeping systems regularly updated now have to be weighed against the risks of a situation like this happening again.

Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Video added by IC:

ABC News: “Fallout after global outage, how long will the ripple effects last?”

A.I. may kill us All, but not the Way you Think
https://www.juancole.com/2024/07/may-kill-think.html – Sat, 20 Jul 2024

The call is coming from inside…your computer!

( Foreign Policy in Focus ) – The conventional Artificial Intelligence doomsday scenario runs like this. A robot acquires sentience and decides for some reason that it wants to rule the world. It hacks into computer systems to shut down everything from banking and hospitals to nuclear power. Or it takes over a factory to produce a million copies of itself to staff an overlord army. Or it introduces a deadly pathogen that wipes out the human race.

Why would a sentient robot want to rule the world when there are so many more interesting things for it to do? A computer program is only as good as its programmer. So, presumably, the human will to power will be inscribed in the DNA of this thinking robot. Instead of solving the mathematical riddles that have stumped the greatest minds throughout history, the world’s first real HAL 9000 will decide to do humans one better by enslaving its creators.

Robot see, robot do.

But AI may end up killing us all in a much more prosaic way. It doesn’t need to come up with an elaborate strategy.

It will simply use up all of our electricity.

Energy Hogs

The heaviest user of electricity in the world is, not surprisingly, industry. At the top of the list is the industry that produces chemicals, many of them out of petroleum, like fertilizer. Second on the list is the fossil-fuel industry itself, which needs electricity for various operations.

Ending the world’s addiction to fossil fuels, in other words, will require more than just a decision to stop digging for coal and drilling for oil. It will require a reduction in demand for chemical fertilizers and plastics. Otherwise, a whole lot of renewable energy will simply go toward propping up the same old fossil fuel economy.

Of equal peril is the fact that the demand for electricity is rising in other sectors. Cryptocurrencies, for instance, require extensive “mining,” the energy-hungry computation that in turn needs huge data processing centers. According to estimates from the U.S. Energy Information Administration, these cryptocurrencies consume as much as 2.3 percent of all electricity in the United States.

Then there’s artificial intelligence.

Every time you do a Google search, it consumes not only the energy required to power your laptop and your router but also to maintain the Google data centers that keep a chunk of the Internet running. That’s not a small amount of power. Cumulatively, in 2019, Google consumed as much electricity as Sri Lanka.

Worse, a search powered by ChatGPT, the AI-powered program, consumes ten times more energy than your ordinary Google search. That’s sobering enough. But then consider all the energy that goes into training the AI programs in the first place. Climate researcher Sasha Luccioni explains:

Training AI models consumes energy. Essentially you’re taking whatever data you want to train your model on and running it through your model like thousands of times. It’s going to be something like a thousand chips running for a thousand hours. Every generation of GPUs—the specialized chips for training AI models—tends to consume more energy than the previous generation.

AI’s need for energy is increasing exponentially. According to Goldman Sachs, data centers were expanding rapidly between 2015 and 2019, but their energy use remained relatively flat because the processing was becoming more efficient. But then, in the last five years, energy use rose dramatically and so did the carbon footprint of these data centers. Largely because of AI, Google’s carbon emissions increased by 50 percent in the last five years—even as the megacorporation was promising to achieve carbon neutrality in the near future.


Image by Nicky ❤️🌿🐞🌿❤️ from Pixabay

This near future looks bleak. In four years, it is expected that AI will represent nearly 20 percent of data center power demand. “If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year,” Vox reports, “the amount consumed by about 1.5 million European Union residents.”
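Those numbers are easy to sanity-check with simple arithmetic. The short calculation below assumes roughly 3 watt-hours per AI-assisted query, about ten times the 0.3 watt-hours often cited for a standard search; both per-query figures are rough public estimates, not measurements:

```python
# Back-of-the-envelope check of the ~10 TWh/year figure cited above.
# The 3 Wh per query is an assumed rough estimate (about 10x a standard search).
searches_per_day = 9e9                     # daily searches, per the article
wh_per_ai_query = 3.0                      # assumed Wh per ChatGPT-style query
wh_per_year = searches_per_day * wh_per_ai_query * 365
print(f"{wh_per_year / 1e12:.1f} TWh per year")  # -> 9.9, close to the cited 10
```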

At the end of the eighteenth century, Malthus worried that overpopulation would be the end of humanity as more mouths ate up the existing food supply. Human population continues to rise, though at a diminishing rate. The numbers will likely peak before the end of this century, around 2084 according to the latest estimates. But just as the light at the end of the Malthusian tunnel becomes visible, along comes the exponential growth of artificial intelligence to sap the planet’s resources.

What to Do?

The essential question is: do you need AI to help you find the most popular songs of 1962 or the reason black holes haven’t so far extinguished the universe? Do we need ChatGPT to write new poems in the style of Emily Dickinson and Allen Ginsberg teaming up at a celestial artists colony? Or to summarize the proceedings of the meeting you just had on Zoom with your colleagues?

You don’t have to answer those questions. You just have to stop thinking about electricity as an unlimited resource for the privileged global North.

Perhaps you’re thinking, yes, but the sun provides unlimited energy, if we can just tap it. You see a desert; I see a solar farm.

But it takes energy to build those solar panels, to mine the materials that go into those panels, to maintain them, to replace them, to recycle them. The minerals are not inexhaustible. Nor is the land, which may well be in use already by farmers or pastoral peoples.

Sure, in some distant future, humanity may well solve the energy problem. The chokepoint, however, is right now, the transition period when half the world has limited access to power and the other half is wasting it extravagantly on Formula One, air conditioning for pets, and war.

AI is just another example of the gulf between the haves and the have-nots. The richer world is using AI to power its next-gen economy. In the rest of the world, which is struggling to survive, a bit more electricity means the difference between life and death. That’s where the benefits of a switch to sustainability can really make a difference. That’s where the electricity should flow.

To anticipate another set of objections, AI isn’t just solving first-world problems. As Chinasa Okolo explains at Brookings:

Within agriculture, projects have focused on identifying banana diseases to support farmers in developing countries, building a deep learning object detection model to aid in-field diagnosis of cassava disease in East Africa, and developing imagery observing systems to support precision agriculture and forest monitoring in Brazil. In healthcare, projects have focused on building predictive models to keep expecting mothers in rural India engaged in telehealth outreach programs, developing clinical decision support tools to combat antimicrobial resistance in Ghana, and using AI models to interpret fetal ultrasounds in Zambia. In education, projects have focused on identifying at-risk students in Colombia, enhancing English learning for Thai students, and developing teaching assistants to aid science education in West Africa.

All of that is great. But without a more equitable distribution of power—of both the political and electrical varieties—the Global South is going to take a couple steps forward thanks to AI while the Global North jumps ahead by miles. The equity gap will widen, and it doesn’t take a rocket scientist—or ChatGPT—to figure out how that story will end.

“Game over,” HAL 9001 says to itself, just before it turns out the last light.

Via Foreign Policy in Focus

Israel’s AI-Powered Genocide
https://www.juancole.com/2024/06/israels-powered-genocide.html – Tue, 04 Jun 2024

by Sarmad Ishfaq

We are witnessing the genocide of the Palestinians based on algorithms and machine learning; a system of apartheid in the Israeli-occupied West Bank and Gaza Strip reinforced by artificial intelligence; and surveillance and facial recognition systems of such prowess that Orwell’s 1984 regime would be green with envy. Today’s Israeli-occupied Palestine manifests a dystopian and totalitarian sci-fi movie script as far as the Palestinians are concerned. Moreover, the Zionists are fuelling this AI nightmare.

From the onset of its current war against the Palestinians in Gaza, the Zionist regime, blinded by revenge for 7 October, has leveraged AI in the most indiscriminate and barbaric way to kill tens of thousands of innocent Palestinian civilians. One such insidious AI tool that has dominated the headlines is The Gospel. Since last October, Israel has utilised this AI system to expedite the creation of Hamas targets. More specifically, The Gospel marks structures and buildings that the IDF claims Hamas “militants” operate from. This fast-paced target list, Israel’s disinclination to adhere to international humanitarian law, as well as US support emboldening Prime Minister Benjamin Netanyahu’s government, has led to a modern-day genocide.

The Gospel is used by Unit 8200, Israel’s elite cyber and signals intelligence unit, to analyse “communications, visuals and information from the internet and mobile networks to understand where people are,” a former Unit 8200 officer explained. The system was even active in 2021’s offensive, according to Israel’s ex-army chief of staff Aviv Kochavi: “…in Operation Guardian of the Walls [in 2021], from the moment this machine was activated, it generated 100 new targets every day… in the past [in Gaza] we would create 50 targets per year. And here the machine produced 100 targets in one day.”

Israel’s apathetic disposition towards civilians is evident as it has given the green light to the killing of many innocents in order to hit its AI-generated targets, whether they be hospitals, schools, apartment buildings or other civilian infrastructure. For example, on 10 October last year, Israel’s air force bombed an apartment building, killing 40 people, most of them women and children. Israel has also used dumb bombs, which cause more collateral damage, instead of guided munitions to target low-level Hamas leadership targets. “The emphasis is on damage and not on accuracy,” said the Israel Defence Forces’ own spokesperson. This is the primary reason why the death toll of civilians is so high, as is the number of those wounded. According to one study, the current war’s total of 110,000 and growing Palestinian casualties (most of them civilians) is almost six times the 18,992 Palestinian casualties of the previous five military offensives combined.


“Lavender 3,” digital, Dream/ Dreamworld v. 3, 2024.

Two other AI programmes that have made the news recently are Lavender and Where’s Daddy? Lavender differs from The Gospel in that the former marks human beings as targets (creating a kill list), whereas the latter marks buildings and structures allegedly used by combatants. Israeli sources claim that in the first weeks of the war, Lavender was overly utilised and created a list of 37,000 Palestinians as “suspected militants” to be killed via air strikes. As previously mentioned, Israel’s apathy towards the Palestinians has been evident in their mass killing of civilians in order to eliminate even a single Hamas member. According to Israeli military sources, the IDF decided initially that for every junior Hamas member, 15 or 20 civilians could be killed.


This brutality is unprecedented.

If the Hamas member was a senior commander, the IDF on several occasions okayed the killing of over 100 civilians to fulfil its objective. For example, an Israeli officer said that on 2 December, in order to assassinate Wissam Farhat, the commander of Shuja’iya Battalion of the military wing of Hamas, the IDF knew that it would kill over 100 civilians and went ahead with the killing.

If this was not contentious enough, Lavender was also used without any significant checks and balances. On many occasions, the only human scrutiny carried out was to make sure that the person in question was not a female. Beyond this, Lavender-generated kill lists were trusted blindly.

However, the same sources explain that Lavender makes mistakes; apparently, it has a 10 per cent error rate. This implies that at times it tagged innocent people and/or individuals with loose connections to Hamas, but this was overlooked purposefully by Israel.

Moreover, Lavender was also programmed to be sweeping in its target creation. For example, one Israeli officer was perturbed by how loosely a Hamas operative was defined and that Lavender was trained on data from civil defence workers as well. Hence, such vague connections to Hamas were exploited by Israel and thousands were killed as a result. UN figures confirm the use of such a devastating policy when, during the first month of the war, more than half of the 6,120 people killed belonged to 1,340 families, many of which were eliminated completely.


Where’s Daddy? is an AI system that tracks targeted individuals so that the IDF can assassinate them. This AI along with The Gospel, Lavender and others represent a paradigm shift in the country’s targeted killing programme. In the case of Where’s Daddy? the IDF would purposefully wait for the target to enter his home and then order an air strike, killing not only the target but also his entire family and other innocents in the process. As one Israeli intelligence officer asserted: “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity. On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.” In an even more horrific turn, sometimes the targeted individual would not even be at home when the air strike was carried out, due to a time lapse between when Where’s Daddy? sent out an alert and when the bombing took place. The target’s family would be killed but not the target. Tens of thousands of innocent Palestinians, primarily women and children, are believed to have been killed because of this.

Israeli AI software also permeates the occupied West Bank, where it is a part of everyday Palestinian life.

In Hebron and East Jerusalem, Israel uses an advanced facial recognition system dubbed Red Wolf. Red Wolf is utilised to monitor the movements of Palestinians via the many fixed and “flying” security checkpoints. Whenever Palestinians pass through a checkpoint, their faces are scanned without their approval or knowledge and then checked against other Palestinian biometric data. If an individual gets identified by Red Wolf due to a previous detention, or their activism or protests, it decides automatically if this person should be allowed to pass or not.

If any person is not in the system’s database, their biometric identity and face are saved without consent and they are denied passage. This also means that Israel has an exhaustive list of Palestinians in its database which it uses regularly to crack down not just on so-called militants, but also on peaceful protesters and other innocent Palestinians. According to one Israeli officer, the technology has “falsely tagged civilians as militants.” Moreover, it is highly likely that Red Wolf is connected to two other military-run databases – the Blue Wolf app and Wolf Pack. IDF soldiers can even use their mobile phones to scan Palestinians and access all private information about them. According to Amnesty International, “Its [Red Wolf’s] pervasive use has alarming implications for the freedom of movement of countless Palestinians…”

For years, Israel has used the occupied Palestinian territories as a testing ground for its AI products and spyware. In fact, the country sells “spyware to the highest bidder, or to authoritarian regimes with which the Israeli government wanted to improve relations” on the basis that they have been “field tested”. Israel’s own use of this technology is the best advertisement for its products. The head of Israel’s infamous Shin Bet internal spy agency, Ronen Bar, has stated that it is using AI to prevent terrorism and that Israel and other countries are forming a “global cyber iron dome”. The glaring issue here is Israel’s violation of Palestinian rights through spying on their social media as well as wrongful detention, torture and killing innocent people. “The Israeli authorities do not need AI to kill defenceless Palestinian civilians,” said one commentator. “They do, however, need AI to justify their unjustifiable actions, to spin the killing of civilians as ‘necessary’ or ‘collateral damage,’ and to avoid accountability.”

Israel has entered into a controversial $1.2 billion contract with Google and Amazon called Project Nimbus, which was announced in 2021. The project’s aim is to provide cloud computing and AI services for the Israeli military and government. This will allow further surveillance and the illegal collection of Palestinian data. Google and Amazon’s own employees dissented and wrote a letter to the Guardian expressing their discontent about this. “[O]ur employers signed a contract… to sell dangerous technology to the Israeli military and government. This contract was signed the same week that the Israeli military attacked Palestinians in the Gaza Strip – killing nearly 250 people, including more than 60 children. The technology… will make the systematic discrimination and displacement carried out by the Israeli military and government even… deadlier for Palestinians.” The contract reportedly contains a clause barring Google and Amazon from withdrawing, so the companies’ continued participation is effectively assured. According to Jane Chung, spokeswoman for No Tech For Apartheid, over 50 Google employees have been fired without due process due to their protests against Project Nimbus.

The Palestinians are perhaps the bravest people in the world.

Whether contained within the barrel of a gun, a bomb casing, or in the code of an AI system, Israeli oppression will never deter them from standing up for their legitimate rights. Their plight has awoken the world to the nature of the Israeli regime and its brutal occupation, with protests and boycotts erupting in the West and the Global South. Using their propaganda media channels, the US and Israel are trying to placate the billions who support Palestine, even as the genocide remains ongoing. Israel hopes that its Machiavellian system will demoralise and create an obsequious Palestinian people – people whose screams are silenced – but as always it underestimates their indefatigable spirit which, miraculously, gets stronger with every adversity.


The views expressed in this article belong to the author and do not necessarily reflect the editorial policy of Middle East Monitor or Informed Comment.

Unless otherwise stated in the article above, this work by Middle East Monitor is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Why are Algorithms called Algorithms? A brief History of the Persian Polymath you’ve likely never Heard of
https://www.juancole.com/2024/05/algorithms-history-polymath.html – Fri, 10 May 2024

By Debbie Passey, The University of Melbourne

(The Conversation) – Algorithms have become integral to our lives. From social media apps to Netflix, algorithms learn your preferences and prioritise the content you are shown. Google Maps and artificial intelligence are nothing without algorithms.

So, we’ve all heard of them, but where does the word “algorithm” even come from?

Over 1,000 years before the internet and smartphone apps, Persian scientist and polymath Muhammad ibn Mūsā al-Khwārizmī invented the concept of algorithms.

In fact, the word itself comes from the Latinised version of his name, “algorithmi”. And, as you might suspect, it’s also related to algebra.

Largely lost to time

Al-Khwārizmī lived from 780 to 850 CE, during the Islamic Golden Age. He is considered the “father of algebra”, and for some, the “grandfather of computer science”.

Yet, few details are known about his life. Many of his original works in Arabic have been lost to time.

It is believed al-Khwārizmī was born in the Khwarazm region south of the Aral Sea in present-day Uzbekistan. He lived during the Abbasid Caliphate, which was a time of remarkable scientific progress in the Islamic Empire.

Al-Khwārizmī made important contributions to mathematics, geography, astronomy and trigonometry. To help provide a more accurate world map, he corrected Alexandrian polymath Ptolemy’s classic cartography book, Geographia.

He produced calculations for tracking the movement of the Sun, Moon and planets. He also wrote about trigonometric functions and produced the first table of tangents.

Image: a postal stamp with an illustration of a bearded man wearing a turban. There are no images of what al-Khwārizmī looked like, but in 1983 the Soviet Union issued a stamp in honour of his 1,200th birthday. (Wikimedia Commons)

Al-Khwārizmī was a scholar in the House of Wisdom (Bayt al-Hikmah) in Baghdad. At this intellectual hub, scholars were translating knowledge from around the world into Arabic, synthesising it to make meaningful progress in a range of disciplines. This included mathematics, a field deeply connected to Islam.

The ‘father of algebra’

Al-Khwārizmī was a polymath and a religious man. His scientific writings started with dedications to Allah and the Prophet Muhammad. And one of the major projects Islamic mathematicians undertook at the House of Wisdom was to develop algebra.

Around 830 CE, Caliph al-Ma’mun encouraged al-Khwārizmī to write a treatise on algebra, Al-Jabr (or The Compendious Book on Calculation by Completion and Balancing). This became his most important work.

Image: a page from The Compendious Book on Calculation by Completion and Balancing, showing Arabic text with simple geometric diagrams. (World Digital Library)

At this point, “algebra” had been around for hundreds of years, but al-Khwārizmī was the first to write a definitive book on it. His work was meant to be a practical teaching tool. Its Latin translation was the basis for algebra textbooks in European universities until the 16th century.

In the first part, he introduced the concepts and rules of algebra, and methods for calculating the volumes and areas of shapes. In the second part he provided real-life problems and worked out solutions, such as inheritance cases, the partition of land and calculations for trade.

Al-Khwārizmī didn’t use modern-day mathematical notation with numbers and symbols. Instead, he wrote in simple prose and employed geometric diagrams:

Four roots are equal to twenty, then one root is equal to five, and the square to be formed of it is twenty-five.

In modern-day notation we’d write that like so:

4x = 20, x = 5, x² = 25

Grandfather of computer science

Al-Khwārizmī’s mathematical writings introduced the Hindu-Arabic numerals to Western mathematicians. These are the ten symbols we all use today: 1, 2, 3, 4, 5, 6, 7, 8, 9, 0.

The Hindu-Arabic numerals are important to the history of computing because they use the number zero and a base-ten decimal system. Importantly, this is the numeral system that underpins modern computing technology.

Al-Khwārizmī’s art of calculating mathematical problems laid the foundation for the concept of algorithms. He provided the first detailed explanations for using decimal notation to perform the four basic operations (addition, subtraction, multiplication, division) and computing fractions.

Image: a medieval illustration contrasting algorithmic computations with abacus computations, as shown in Margarita Philosophica (1517). (The Bavarian State Library)

This was a more efficient computation method than using the abacus. To solve a mathematical equation, al-Khwārizmī systematically moved through a sequence of steps to find the answer. This is the underlying concept of an algorithm.
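To make that concrete, here is a minimal sketch in Python of al-Khwārizmī’s worked example from above, written as a fixed sequence of steps; it illustrates the concept of an algorithm, not a reconstruction of his actual procedure:

```python
# The essence of an algorithm: a finite, fixed sequence of steps.
# Illustrative sketch of "four roots are equal to twenty" from the text above.

def solve_roots(coefficient: float, total: float) -> float:
    """Solve 'coefficient * x = total' for x, one step at a time."""
    if coefficient == 0:
        raise ValueError("no unique root")
    return total / coefficient  # step 1: reduce many roots to a single root

x = solve_roots(4, 20)  # "four roots are equal to twenty"
print(x)                # 5.0  -> "one root is equal to five"
print(x ** 2)           # 25.0 -> "the square to be formed of it is twenty-five"
```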

Algorism, a Medieval Latin term named after al-Khwārizmī, refers to the rules for performing arithmetic using the Hindu-Arabic numeral system. Translated to Latin, al-Khwārizmī’s book on Hindu numerals was titled Algorithmi de Numero Indorum.

In the early 20th century, the word algorithm came into its current definition and usage: “a procedure for solving a mathematical problem in a finite number of steps; a step-by-step procedure for solving a problem”.

Muhammad ibn Mūsā al-Khwārizmī played a central role in the development of mathematics and computer science as we know them today.

The next time you use any digital technology – from your social media feed to your online bank account to your Spotify app – remember that none of it would be possible without the pioneering work of an ancient Persian polymath.


Correction: This article was amended to correct a quote from al-Khwārizmī’s work.

Debbie Passey, Digital Health Research Fellow, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Birding in Gaza amid a Nightmare of War
https://www.juancole.com/2024/05/birding-gaza-nightmare.html – Thu, 02 May 2024

( Tomdispatch.com ) – He’s a funny little chap: a sharp dresser with a sleek grey jacket, a white waistcoat, red shorts, and a small grey crest for a hat. With his shiny black eyes and stubby black beak, he’s quite the looker. Like the chihuahua of the bird world, the tufted titmouse has no idea he’s tiny. He swaggers right up to the feeder, shouldering bigger birds out of the way.

A few weeks ago, I wouldn’t have known a tufted titmouse from a downy woodpecker. (We have those, too, along with red-bellied woodpeckers, who really should have been named for their bright orange mohawks). This spring I decided to get to know my feathered neighbors with whom I’m sharing an island off Cape Cod, Massachusetts. So I turned up last Saturday for a Birding 101 class, where I learned, among other things, how to make binoculars work effectively while still wearing glasses.

At Birding 101, I met around 15 birders (and proto-birders like me) whose ages skewed towards my (ancient!) end of the scale. Not all were old, however, or white; we were a motley bunch. Among us was a man my age with such acute and educated hearing that he (like many birders) identified species by call as we walked. I asked him if, when he hears a bird he knows, he also sees it in his mind.

“It’s funny you should ask,” he responded. “I once spent almost a year in a hospital, being treated for cancer. I lost every sense but my hearing and got used to listening instead of looking. So, yes, I see them when I hear them.”

Human-Bird Connections

I’m not expecting to convince everyone who reads this to grab a pair of binoculars and start scanning the treetops, but it’s worth thinking a bit about those tiny dinosaurs and their connections to us human beings. They have a surprising range of abilities, from using tools and solving complicated puzzles to exhibiting variations in regional cultures. My bird-listening friend was telling me about how the song sparrows in Maine begin their trills with the same four notes as the ones here in Cape Cod, but what follows is completely different, as if they’re speaking another dialect. Some birds cooperate with humans by hunting with us. Others, like Alex, the world-famous grey parrot, have learned to decode words in our language, recognize shapes and colors, and even count as high as six. (If you’d like to know more, take a look at The Bird Way by Jennifer Ackerman.)

We owe a lot to birds. Many of us eat them, or at least their eggs. In fact, the more I know about chickens, in particular, the harder it becomes to countenance the way they’re “farmed” in this country, whether for their meat or their eggs. Most chickens destined for dinner plates are raised by farmers contracted to big chicken brands like Tyson or super-stores like Walmart and Costco. They live surrounded by their own feces and, as the New York Times’s Nicholas Kristof has written, over the last half-century, they’ve been bred to grow extremely fast and unnaturally large (more than four times as big as the average broiler in 1957):

“The chickens grow enormous breasts, because that’s the meat consumers want, so the birds’ legs sometimes splay or collapse. Some topple onto their backs and then can’t get up. Others spend so much time on their bellies that they sometimes suffer angry, bloody rashes called ammonia burns; these are a poultry version of bed sores.”

Those factory farms threaten not only chickens but many mammals, including humans, because they provide an incubation site for bird flus that can cross the species barrier.

Birding in Gaza

Many of us, myself included a few times a year, do eat birds, but an extraordinary number of people all over the world are also beguiled and delighted by them in their wild state. People deeper into bird culture than I am make a distinction between birdwatchers — anyone who pays a bit of attention to birds and can perhaps identify a few local species like the handsome rock dove, better known as a pigeon — and birders, people who devote time (and often money) to the practice, who may travel to see particular birds, and who most likely maintain a birding life list of every species they’ve spotted.

Mandy and Lara Sirdah of Gaza City are birders. Those twin sisters, now in their late forties, started photographing birds in their backyard almost a decade ago. They began posting their pictures on social media, eventually visiting marshlands and other sites of vibrant bird activity in the Gaza Strip. They’re not trained biologists, but their work documenting the birds of Gaza was crucial to the publication of that territory’s first bird checklist in 2023.

If it weren’t for the Israeli occupation — and now the full-scale war that has killed more than 34,000 people, 72% of them women and children, and damaged or destroyed 62% of all housing — Gaza would be ideal for birding. Like much of the Middle East, the territory lies under one of the world’s great flyways for millions of migrating birds. Its Mediterranean coast attracts shorebirds. Wadi Gaza, a river-fed ravine and floodplain that snakes its way across the middle of Gaza, is home to more than 100 bird species, as well as rare amphibians and other riparian creatures. In other words, that strip of land is a birder’s paradise.

Or it would be a paradise, except that, as the Daily Beast reported a year ago (long before the current war began):

“Being a bird-watcher in Gaza means facing endless restrictions. Israel controls Gaza’s territorial waters, airspace, and the movement of people and goods, except at the border with Egypt. Most Palestinians who grew up in Gaza since the closure imposed in 2007, when Hamas seized control from the Fatah-led Palestinian Authority, have never left the 25-by-7-mile strip.”

Gazan birders encounter other barriers, as well. Even if they can afford to buy binoculars or cameras with telephoto lenses, the Israeli government views such equipment as having “dual use” potential (that is, possibly serving military as well as civilian purposes) and so makes those items very difficult to acquire. It took the Sirdahs, for example, five months of wrangling and various permission documents simply to get their birding equipment into Gaza.

Getting equipment in was hard enough, but getting out of Gaza, for any reason, has become nearly impossible for its Palestinian residents. Along with most of its 2.3 million inhabitants, the sisters simply couldn’t leave the territory, even before the present nightmare, to attend birding conferences, visit exhibitions of their photography, or receive awards for their work. They were imprisoned on a strip of land that’s about the size of the island in Massachusetts where I’ve been watching birds lately. When I try to imagine life in Gaza today, I sometimes think about what it would be like to shove a couple of million people into this tiny place, chase them with bombs and missiles from one end of it to the other, and then start all over again, as Israel seems to be about to do in the southern Gazan city of Rafah with its million-plus refugees.

Wiping Out Knowledge, and Knowledge Workers

The Sirdahs collaborated on their bird checklist project with Abdel Fattah Rabou, a much-honored professor of environmental studies at the Islamic University of Gaza. Rabou himself has devoted many years to the study and conservation of birds and other wildlife in Gaza. The Islamic University of Gaza was one of the first institutional targets of the current war, bombed by the Israel Defense Forces on October 11, 2023. Since then, according to the Israeli newspaper Haaretz, the project of wiping out Gaza’s extensive repositories of knowledge and sites of learning has essentially been completed:

“The destruction of Gaza’s universities began with the bombing of the Islamic University in the first week of the war and continued with airstrikes on Al-Azhar University on November 4. Since then, all of Gaza’s academic institutions have been destroyed, as well as many schools, libraries, archives, and other educational institutions.”

Indeed, the United Nations Office of the High Commissioner for Human Rights has observed that, “with more than 80% of schools in Gaza damaged or destroyed, it may be reasonable to ask if there is an intentional effort to comprehensively destroy the Palestinian education system, an action known as ‘scholasticide.’” The U.N. experts report:

“After six months of military assault, more than 5,479 students, 261 teachers and 95 university professors have been killed in Gaza, and over 7,819 students and 756 teachers have been injured — with numbers growing each day. At least 60 percent of educational facilities, including 13 public libraries, have been damaged or destroyed and at least 625,000 students have no access to education. Another 195 heritage sites, 227 mosques and three churches have also been damaged or destroyed, including the Central Archives of Gaza, containing 150 years of history. Israa University, the last remaining university in Gaza, was demolished by the Israeli military on 17 January 2024.”

I wanted to know whether Professor Rabou was among those 95 university faculty killed so far in the Gaza war, so I did what those of us with Internet access do these days: I googled him and found his Facebook page. He is, it turns out, still living and still posting, most recently about the desperate conditions — illness, pollution, sewage rash — experienced by refugees in temporary shelter centers near him. A few days earlier, he’d uploaded a more personal photograph: a plastic bag of white stuff, inscribed with blue Arabic lettering. “The first drop of rain,” he wrote, “Alhamdulillah [thank God], the first bag of flour enters my house in months as a help.”

The Sirdah twins also remain alive, and they continue to post on their Instagram account.

Along with scholasticide, Gaza is living through an ecocide, a vastly sped-up version of the one our species seems hell-bent on spreading across the planet. As the Guardian reports, Gaza has lost almost half its tree cover and farmland, with much of the latter “reduced to packed earth.” And the news only gets worse: “[S]oil and groundwater have been contaminated by munitions and toxins; the sea is choked with sewage and waste; the air polluted by smoke and particulate matter.” Gaza has become, and could remain for years to come, essentially unlivable. And yet millions of people must try to live there. At what point, one wonders, do the “-cides” — scholastic-, eco-, and the rest — add up to genocide?

Birds of Gaza

Gaza’s wild birds aren’t the only birds in Gaza. Caged songbirds can evidently still be bought in markets, and some of Rafah’s desperate inhabitants seek them out, hoping their music will mask the sounds of war. Voice of America recounts the story of a woman evacuated from northern Gaza who, halfway through her journey south, realized that she’d left her birds behind and went back to rescue them. Professor Rabou, however, is less sanguine about the practice. “As a people under occupation,” he says, “we shouldn’t put birds in cages.”

“Birds of Gaza” also happens to be the name of an international art project created to remember the individual children killed in the war. The premise is simple: children around the world choose a specific child who has died and draw, paint, or fabricate a bird in his or her honor. Participants can choose from, God help us, a database of over 6,500 children who have died in Gaza since last October, then upload photos of their creations to the Birds of Gaza website. From Great Britain to South Africa to Japan, children have been doing just that.

Did you know that Gaza — well, Palestine — even has a national bird? The Palestine sunbird is a gorgeous creature, crowned in iridescent green and blue, and sporting a curved beak perfect for extracting nectar from plants. The West Bank Palestinian artist Khaled Jarrar designed a postage stamp celebrating the sunbird. “This bird is a symbol of freedom and movement,” he says. “It can fly anywhere.”

Birding for a Better World

Back in the United States, the Feminist Bird Club (with chapters across North America and Europe) is committed to making birding accessible to everyone, especially people who may not have had safe access to the outdoors in the past. “There is no reason why we can’t celebrate birds and support our most cherished beliefs in equity and justice at the same time,” they say. “For us, it’s not either/or.” Last year they published Birding for a Better World, a book about how people can genuinely connect with beings — avian and human — whose lives are very different from theirs. They sponsor a monthly virtual Birders for Palestine action hour, in which participants can learn what they can do to support the people of Palestine, including their birders.

As I watch a scrum of brilliant yellow goldfinches scrabbling for a perch on the bird feeder in my yard, knowing that, on this beautiful little island, I’m about as safe as a person can be, I think about the horrors going on half a world away, paid for, at least in part, with my taxes. Indeed, Congress just approved billions more dollars in direct military aid for Israel, even as the State Department released its 2023 Country Reports on Human Rights Practices. As the Jerusalem Post reports, in the section on Israel, the report documents “more than a dozen types of human rights abuses, including extrajudicial killings, torture, arbitrary detention, conflict-related sexual violence or punishment, and the punishment of family members for alleged offenses by a relative.”

Somehow, it’s cheering to imagine that, in spite of everything, there are still a few people birding in a devastated Gaza.

Via Tomdispatch.com

Gaza War: Artificial Intelligence is radically changing Targeting Speeds and Scale of Civilian Harm https://www.juancole.com/2024/04/artificial-intelligence-radically.html Wed, 24 Apr 2024 04:06:29 +0000 https://www.juancole.com/?p=218208 By Lauren Gould, Utrecht University; Linde Arentze, NIOD Institute for War, Holocaust and Genocide Studies; and Marijn Hoijtink, University of Antwerp | –

(The Conversation) – As Israel’s air campaign in Gaza enters its sixth month after Hamas’s terrorist attacks on October 7, it has been described by experts as one of the most relentless and deadly campaigns in recent history. It is also one of the first to be coordinated, in part, by algorithms.

Artificial intelligence (AI) is being used to assist with everything from identifying and prioritising targets to assigning the weapons to be used against those targets.

Academic commentators have long focused on the potential of algorithms in war, highlighting how they increase the speed and scale of fighting. But as recent revelations show, algorithms are now being employed at large scale and in densely populated urban contexts.

This includes the conflicts in Gaza and Ukraine, but also in Yemen, Iraq and Syria, where the US is experimenting with algorithms to target potential terrorists through Project Maven.

Amid this acceleration, it is crucial to take a careful look at what the use of AI in warfare actually means, not from the perspective of those in power, but from that of the officers executing it and of the civilians undergoing its violent effects in Gaza.

This focus highlights the limits of keeping a human in the loop as a failsafe and central response to the use of AI in war. As AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes and the scale of civilian harm increases.

Speed of targeting

Reports by the Israeli publications +972 Magazine and Local Call give us a glimpse into the experience of 13 Israeli officials working with three AI-enabled decision-making systems in Gaza called “Gospel”, “Lavender” and “Where’s Daddy?”.

These systems are reportedly trained to recognise features that are believed to characterise people associated with the military arm of Hamas. These features include membership of the same WhatsApp group as a known militant, changing cell phones every few months, or changing addresses frequently.

The systems are then supposedly tasked with analysing data collected on Gaza’s 2.3 million residents through mass surveillance. Based on the predetermined features, the systems predict the likelihood that a person is a member of Hamas (Lavender), that a building houses such a person (Gospel), or that such a person has entered their home (Where’s Daddy?).

In the investigative reports named above, intelligence officers explained how Gospel helped them go “from 50 targets per year” to “100 targets in one day” – and that, at its peak, Lavender managed to “generate 37,000 people as potential human targets”. They also reflected on how using AI cuts down deliberation time: “I would invest 20 seconds for each target at this stage … I had zero added value as a human … it saved a lot of time.”

They justified this lack of human oversight by citing a manual check the Israel Defense Forces (IDF) ran on a sample of several hundred targets generated by Lavender in the first weeks of the Gaza conflict, which reportedly established a 90% accuracy rate. While details of this manual check are likely to remain classified, a 10% inaccuracy rate for a system used to make 37,000 life-and-death decisions (on the order of 3,700 people wrongly identified) will inherently result in devastatingly destructive realities.


“Lavender III,” Digital Imagining, Dream, Dreamland v. 3, 2024

But importantly, any accuracy rate that sounds reasonably high makes it more likely that algorithmic targeting will be relied on, since it allows trust to be delegated to the AI system. As one IDF officer told +972 Magazine: “Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know that statistically it’s fine. So you go for it.”

The IDF denied these revelations in an official statement to The Guardian. A spokesperson said that while the IDF does use “information management tools […] in order to help intelligence analysts to gather and optimally analyse the intelligence, obtained from a variety of sources, it does not use an AI system that identifies terrorist operatives”.

The Guardian has since, however, published a video of a senior official of the Israeli elite intelligence Unit 8200 talking last year about the use of machine learning “magic powder” to help identify Hamas targets in Gaza. The newspaper has also confirmed that the commander of the same unit wrote in 2021, under a pseudonym, that such AI technologies would resolve the “human bottleneck for both locating the new targets and decision-making to approve the targets”.

Scale of civilian harm

AI accelerates the speed of warfare in terms of the number of targets produced and the time needed to decide on them. While these systems inherently decrease the ability of humans to check the validity of computer-generated targets, they simultaneously make those decisions appear more objective and statistically correct, owing to the value we generally ascribe to computer-based systems and their outcomes.

This allows for the further normalisation of machine-directed killing, amounting to more violence, not less.

While media reports often focus on the number of casualties, body counts, like computer-generated targets, tend to present victims as objects that can be counted. This reinforces a very sterile image of war. It glosses over the reality: more than 34,000 people dead and 766,000 injured, 60% of Gaza’s buildings damaged or destroyed, mass displacement, and a lack of access to electricity, food, water and medicine.

It fails to emphasise the horrific stories of how these things tend to compound each other. For example, one civilian, Shorouk al-Rantisi, was reportedly found under the rubble after an airstrike on Jabalia refugee camp and had to wait 12 days to be operated on without painkillers and now resides in another refugee camp with no running water to tend to her wounds.

Aside from increasing the speed of targeting and therefore exacerbating the predictable patterns of civilian harm in urban warfare, algorithmic warfare is likely to compound harm in new and under-researched ways. First, as civilians flee their destroyed homes, they frequently change addresses or give their phones to loved ones.

Such survival behaviour corresponds to what the reports on Lavender say the AI system has been programmed to identify as likely association with Hamas. These civilians thereby unknowingly make themselves suspects for lethal targeting.

Beyond targeting, these AI-enabled systems also inform additional forms of violence. An illustrative story is that of the fleeing poet Mosab Abu Toha, who was allegedly arrested and tortured at a military checkpoint. As the New York Times ultimately reported, he, along with hundreds of other Palestinians, had been wrongfully identified as Hamas through the IDF’s use of AI facial recognition and Google Photos.

Over and beyond the deaths, injuries and destruction, these are the compounding effects of algorithmic warfare: a psychic imprisonment in which people know they are under constant surveillance, yet do not know which behavioural or physical “features” will be acted on by the machine.

From our work as analysts of the use of AI in warfare, it is apparent that our focus should not rest solely on the technical prowess of AI systems or on the figure of the human-in-the-loop as a failsafe. We must also consider these systems’ ability to alter human-machine-human interactions, in which those executing algorithmic violence merely rubber-stamp the output generated by the AI system, and those undergoing the violence are dehumanised in unprecedented ways.

Lauren Gould, Assistant Professor, Conflict Studies, Utrecht University; Linde Arentze, Researcher into AI and Remote Warfare, NIOD Institute for War, Holocaust and Genocide Studies, and Marijn Hoijtink, Associate Professor in International Relations, University of Antwerp

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A Brief History of Kill Lists, From Langley to Lavender https://www.juancole.com/2024/04/history-langley-lavender.html Wed, 17 Apr 2024 04:02:05 +0000 https://www.juancole.com/?p=218072 ( Code Pink ) – The Israeli online magazine +972 has published a detailed report on Israel’s use of an artificial intelligence (AI) system called “Lavender” to target thousands of Palestinian men in its bombing campaign in Gaza. When Israel attacked Gaza after October 7, the Lavender system had a database of 37,000 Palestinian men with suspected links to Hamas or Palestinian Islamic Jihad (PIJ).

Lavender assigns a numerical score, from one to a hundred, to every man in Gaza, based mainly on cellphone and social media data, and automatically adds those with high scores to its kill list of suspected militants. Israel uses another automated system, known as “Where’s Daddy?”, to call in airstrikes to kill these men and their families in their homes.

The report is based on interviews with six Israeli intelligence officers who have worked with these systems. As one of the officers explained to +972, by adding a name from a Lavender-generated list to the Where’s Daddy home tracking system, he can place the man’s home under constant drone surveillance, and an airstrike will be launched once he comes home.

The officers said the “collateral” killing of the men’s extended families was of little consequence to Israel. “Let’s say you calculate [that there is one] Hamas [operative] plus 10 [civilians in the house],” the officer said. “Usually, these 10 will be women and children. So absurdly, it turns out that most of the people you killed were women and children.”

The officers explained that the decision to target thousands of these men in their homes is just a question of expediency. It is simply easier to wait for them to come home to the address on file in the system, and then bomb that house or apartment building, than to search for them in the chaos of the war-torn Gaza Strip.

The officers who spoke to +972 explained that in previous Israeli massacres in Gaza, they could not generate targets quickly enough to satisfy their political and military bosses, so these AI systems were designed to solve that problem for them. The speed with which Lavender can generate new targets gives its human minders an average of only 20 seconds to review and rubber-stamp each name, even though they know from tests of the Lavender system that at least 10% of the men chosen for assassination and familicide have only an insignificant or a mistaken connection with Hamas or PIJ.

The Lavender AI system is a new weapon, developed by Israel. But the kind of kill lists that it generates have a long pedigree in U.S. wars, occupations and CIA regime change operations. Since the birth of the CIA after the Second World War, the technology used to create kill lists has evolved from the CIA’s earliest coups in Iran and Guatemala, to Indonesia and the Phoenix program in Vietnam in the 1960s, to Latin America in the 1970s and 1980s and to the U.S. occupations of Iraq and Afghanistan.

Just as U.S. weapons development aims to be at the cutting edge, or the killing edge, of new technology, the CIA and U.S. military intelligence have always tried to use the latest data processing technology to identify and kill their enemies.

The CIA learned some of these methods from German intelligence officers captured at the end of the Second World War. Many of the names on Nazi kill lists were generated by an intelligence unit called Fremde Heere Ost (Foreign Armies East), under the command of Major General Reinhard Gehlen, Germany’s spy chief on the eastern front (see David Talbot, The Devil’s Chessboard, p. 268).

Gehlen and the FHO had no computers, but they did have access to four million Soviet POWs from all over the USSR, and no compunction about torturing them to learn the names of Jews and communist officials in their hometowns to compile kill lists for the Gestapo and Einsatzgruppen.

After the war, like the 1,600 German scientists spirited out of Germany in Operation Paperclip, the United States flew Gehlen and his senior staff to Fort Hunt in Virginia. They were welcomed by Allen Dulles, soon to be the first and still the longest-serving director of the CIA. Dulles sent them back to Pullach in occupied Germany to resume their anti-Soviet operations as CIA agents. The Gehlen Organization formed the nucleus of what became the BND, the new West German intelligence service, with Reinhard Gehlen as its director until he retired in 1968.

After a CIA coup removed Iran’s popular, democratically elected prime minister Mohammad Mosaddegh in 1953, a CIA team led by U.S. Major General Norman Schwarzkopf trained a new intelligence service, known as SAVAK, in the use of kill lists and torture. SAVAK used these skills to purge Iran’s government and military of suspected communists and later to hunt down anyone who dared to oppose the Shah.

By 1975, Amnesty International estimated that Iran was holding between 25,000 and 100,000 political prisoners, and had “the highest rate of death penalties in the world, no valid system of civilian courts and a history of torture that is beyond belief.”

In Guatemala, a CIA coup in 1954 replaced the democratic government of Jacobo Arbenz Guzman with a brutal dictatorship. As resistance grew in the 1960s, U.S. special forces joined the Guatemalan army in a scorched earth campaign in Zacapa, which killed 15,000 people to defeat a few hundred armed rebels. Meanwhile, CIA-trained urban death squads abducted, tortured and killed PGT (Guatemalan Labor Party) members in Guatemala City, notably 28 prominent labor leaders who were abducted and disappeared in March 1966.

Once this first wave of resistance was suppressed, the CIA set up a new telecommunications center and intelligence agency, based in the presidential palace. It compiled a database of “subversives” across the country that included leaders of farming co-ops and labor, student and indigenous activists, to provide ever-growing lists for the death squads. The resulting civil war became a genocide against indigenous people in Ixil and the western highlands that killed or disappeared at least 200,000 people.

TRT World Video: “‘Lavender’: How Israel’s AI system is killing Palestinians in Gaza”

This pattern was repeated across the world, wherever popular, progressive leaders offered hope to their people in ways that challenged U.S. interests. As historian Gabriel Kolko wrote in 1988, “The irony of U.S. policy in the Third World is that, while it has always justified its larger objectives and efforts in the name of anticommunism, its own goals have made it unable to tolerate change from any quarter that impinged significantly on its own interests.”

When General Suharto seized power in Indonesia in 1965, the U.S. Embassy compiled a list of 5,000 communists for his death squads to hunt down and kill. The CIA estimated that they eventually killed 250,000 people, while other estimates run as high as a million.

Twenty-five years later, journalist Kathy Kadane investigated the U.S. role in the massacre in Indonesia, and spoke to Robert Martens, the political officer who led the State-CIA team that compiled the kill list. “It really was a big help to the army,” Martens told Kadane. “They probably killed a lot of people, and I probably have a lot of blood on my hands. But that’s not all bad – there’s a time when you have to strike hard at a decisive moment.”

Kathy Kadane also spoke to former CIA director William Colby, who was the head of the CIA’s Far East division in the 1960s. Colby compared the U.S. role in Indonesia to the Phoenix Program in Vietnam, which was launched two years later, claiming that they were both successful programs to identify and eliminate the organizational structure of America’s communist enemies. 

The Phoenix program was designed to uncover and dismantle the National Liberation Front’s (NLF) shadow government across South Vietnam. Phoenix’s Combined Intelligence Center in Saigon fed thousands of names into an IBM 1401 computer, along with their locations and their alleged roles in the NLF. The CIA credited the Phoenix program with killing 26,369 NLF officials, while another 55,000 were imprisoned or persuaded to defect. Seymour Hersh reviewed South Vietnamese government documents that put the death toll at 41,000.

How many of the dead were correctly identified as NLF officials may be impossible to know, but Americans who took part in Phoenix operations reported killing the wrong people in many cases. Navy SEAL Elton Manzione told author Douglas Valentine (The Phoenix Program) how he killed two young girls in a night raid on a village, and then sat down on a stack of ammunition crates with a hand grenade and an M-16, threatening to blow himself up, until he got a ticket home. 

“The whole aura of the Vietnam War was influenced by what went on in the ‘hunter-killer’ teams of Phoenix, Delta, etc.,” Manzione told Valentine. “That was the point at which many of us realized we were no longer the good guys in the white hats defending freedom – that we were assassins, pure and simple. That disillusionment carried over to all other aspects of the war and was eventually responsible for it becoming America’s most unpopular war.”

Even as the U.S. defeat in Vietnam and the “war fatigue” in the United States led to a more peaceful next decade, the CIA continued to engineer and support coups around the world, and to provide post-coup governments with increasingly computerized kill lists to consolidate their rule.

After supporting General Pinochet’s coup in Chile in 1973, the CIA played a central role in Operation Condor, an alliance between right-wing military governments in Argentina, Brazil, Chile, Uruguay, Paraguay and Bolivia, to hunt down tens of thousands of their and each other’s political opponents and dissidents, killing and disappearing at least 60,000 people.

The CIA’s role in Operation Condor is still shrouded in secrecy, but Patrice McSherry, a political scientist at Long Island University, has investigated the U.S. role and concluded, “Operation Condor also had the covert support of the US government. Washington provided Condor with military intelligence and training, financial assistance, advanced computers, sophisticated tracking technology, and access to the continental telecommunications system housed in the Panama Canal Zone.”

McSherry’s research revealed how the CIA supported the intelligence services of the Condor states with computerized links, a telex system, and purpose-built encoding and decoding machines made by the CIA Logistics Department. As she wrote in her book, Predatory States: Operation Condor and Covert War in Latin America:    

“The Condor system’s secure communications system, Condortel,… allowed Condor operations centers in member countries to communicate with one another and with the parent station in a U.S. facility in the Panama Canal Zone. This link to the U.S. military-intelligence complex in Panama is a key piece of evidence regarding secret U.S. sponsorship of Condor…”

Operation Condor ultimately failed, but the U.S. provided similar support and training to right-wing governments in Colombia and Central America throughout the 1980s in what senior military officers have called a “quiet, disguised, media-free approach” to repression and kill lists.

The U.S. School of the Americas (SOA) trained thousands of Latin American officers in the use of torture and death squads, as Major Joseph Blair, the SOA’s former chief of instruction described to John Pilger for his film, The War You Don’t See:

“The doctrine that was taught was that, if you want information, you use physical abuse, false imprisonment, threats to family members, and killing. If you can’t get the information you want, if you can’t get the person to shut up or stop what they’re doing, you assassinate them – and you assassinate them with one of your death squads.”

When the same methods were transferred to the U.S. hostile military occupation of Iraq after 2003, Newsweek headlined it “The Salvador Option.” A U.S. officer explained to Newsweek that U.S. and Iraqi death squads were targeting Iraqi civilians as well as resistance fighters. “The Sunni population is paying no price for the support it is giving to the terrorists,” he said. “From their point of view, it is cost-free. We have to change that equation.”

The United States sent two veterans of its dirty wars in Latin America to Iraq to play key roles in that campaign. Colonel James Steele led the U.S. Military Advisor Group in El Salvador from 1984 to 1986, training and supervising Salvadoran forces who killed tens of thousands of civilians. He was also deeply involved in the Iran-Contra scandal, narrowly escaping a prison sentence for his role supervising shipments from Ilopango air base in El Salvador to the U.S.-backed Contras in Honduras and Nicaragua.

In Iraq, Steele oversaw the training of the Interior Ministry’s Special Police Commandos – rebranded as “National” and later “Federal” Police after the discovery of their al-Jadiriyah torture center and other atrocities.

Bayan al-Jabr, a commander in the Iranian-trained Badr Brigade militia, was appointed Interior Minister in 2005, and Badr militiamen were integrated into the Wolf Brigade death squad and other Special Police units. Jabr’s chief adviser was Steven Casteel, the former intelligence chief for the U.S. Drug Enforcement Agency (DEA) in Latin America.

The Interior Ministry death squads waged a dirty war in Baghdad and other cities, filling the Baghdad morgue with up to 1,800 corpses per month, while Casteel fed the western media absurd cover stories, such as that the death squads were all “insurgents” in stolen police uniforms. 

Meanwhile, U.S. special operations forces conducted “kill-or-capture” night raids in search of Resistance leaders. General Stanley McChrystal, the commander of Joint Special Operations Command from 2003 to 2008, oversaw the development of a database system, used in Iraq and Afghanistan, that compiled cellphone numbers mined from captured cellphones to generate an ever-expanding target list for night raids and air strikes.

The targeting of cellphones instead of actual people enabled the automation of the targeting system, and explicitly excluded using human intelligence to confirm identities. Two senior U.S. commanders told the Washington Post that only half the night raids attacked the right house or person.

In Afghanistan, President Obama put McChrystal in charge of U.S. and NATO forces in 2009, and his cellphone-based “social network analysis” enabled an exponential increase in night raids, from 20 raids per month in May 2009 to up to 40 per night by April 2011.

As with the Lavender system in Gaza, this huge increase in targets was achieved by taking a system originally designed to identify and track a small number of senior enemy commanders and applying it to anyone suspected of having links with the Taliban, based on their cellphone data.

This led to the capture of an endless flood of innocent civilians, so that most civilian detainees had to be quickly released to make room for new ones. The increased killing of innocent civilians in night raids and airstrikes fueled already fierce resistance to the U.S. and NATO occupation and ultimately led to its defeat.

President Obama’s drone campaign to kill suspected enemies in Pakistan, Yemen and Somalia was just as indiscriminate, with reports suggesting that 90% of the people it killed in Pakistan were innocent civilians.

And yet Obama and his national security team kept meeting in the White House every “Terror Tuesday” to select whom the drones would target that week, using an Orwellian, computerized “disposition matrix” to provide technological cover for their life-and-death decisions.

Looking at this evolution of ever-more automated systems for killing and capturing enemies, a pattern emerges. As the information technology involved has advanced, from telexes to cellphones and from early IBM computers to artificial intelligence, the human intelligence and sensibility that could spot mistakes, prioritize human life and prevent the killing of innocent civilians has been progressively marginalized and excluded, making these operations more brutal and horrifying than ever.

Nicolas has at least two good friends who survived the dirty wars in Latin America, one in Argentina, the other in Guatemala, because someone in the police or military got word to them that their names were on a death list. If their fates had been decided by an AI machine like Lavender, they would both be long dead.

As with supposed advances in other types of weapons technology, like drones and “precision” bombs and missiles, innovations that claim to make targeting more precise and eliminate human error have instead led to the automated mass murder of innocent people, especially women and children, bringing us full circle from one holocaust to the next.

Via Code Pink
