Informed Comment (https://www.juancole.com) – Thoughts on the Middle East, History and Religion

Meta’s Oversight Board rules “From the River to the Sea” isn’t Hate Speech
https://www.juancole.com/2024/09/metas-oversight-speech.html – Thu, 26 Sep 2024

Company Should Address Root Causes of Censorship of Palestine Content

( Human Rights Watch ) – Earlier this month, Meta’s Oversight Board found that three Facebook posts containing the phrase “From the River to the Sea” did not violate Meta’s content rules and should remain online.

The majority of the Oversight Board members concluded that the phrase, widely used at protests to show solidarity with Palestinians, is not inherently a violation of Meta’s policies on Hate Speech, Violence and Incitement, or Dangerous Organizations and Individuals (DOI). In line with Human Rights Watch’s submission, it affirmed that while the phrase can have different meanings, it amounts to protected speech under international human rights law and should not, on its own, be a basis for removal, enforcement, or review of content under Meta’s policies. Meta created the board as an external body to appeal moderation decisions and provide non-binding policy guidance.

A minority of board members recommended imposing a blanket ban on use of the phrase unless there are clear signals it does not constitute glorification of Hamas. Such a ban would be inconsistent with international human rights standards, amounting to an excessive restriction on protected speech.

The board’s decision upholds free expression, but Meta has a broader problem of censoring protected speech about Palestine on its platforms. A 2023 Human Rights Watch report found that Meta was systemically censoring Palestine content and that broad restrictions on content relating to groups that Meta puts on its DOI list often resulted in the censorship of protected speech. Meta has said that core human rights principles have guided its crisis response measures since October 7. But its heavy reliance on automated detection systems fails to accurately assess context, even when posts explicitly oppose violence.


Digital imagining of “From the River to the Sea, Palestine will be Free,” Dream / Dreamland v3 / Clip2Comic, 2024.

For instance, on July 19, Human Rights Watch posted a video on Instagram and Facebook with a caption in Arabic that read: “Hamas-led armed groups committed numerous war crimes and crimes against humanity against civilians during the October 7 assault on southern Israel.” Meta’s automated tools “incorrectly” removed the post for violating its DOI policy. Formal appeals were unsuccessful, and the content was only restored after informal intervention.

Meta should address the systemic issues at the heart of its wrongful removal of protected speech about Palestine. Amending its flawed policies, strengthening context-based review, and providing more access to data to facilitate independent research are essential to protecting free expression on its platforms.

Via Human Rights Watch

Digital Deception: Disinformation, Elections, and Islamophobia: Juan Cole et al.
https://www.juancole.com/2024/09/deception-disinformation-islamophobia.html – Wed, 04 Sep 2024

Middle East Council on Global Affairs | Webinar on “Digital Deception: Disinformation, Elections, and Islamophobia,” September 2, 2024, featuring Juan Cole, Marc Owen Jones, Sohan Dsouza, and Sahar Aziz.

Middle East Council on Global Affairs: “Digital Deception: Disinformation, Elections, and Islamophobia

In 2007, the Brookings Institution in Washington, D.C. established the Brookings Doha Center (BDC). For fourteen years, BDC provided critical analysis on geopolitical and socioeconomic issues in the Middle East and North Africa and became recognized as a hub for high-quality independent research and policy analysis on the region. In 2021, with the support of its key stakeholders, BDC evolved into an independent policy research institution under the name of the Middle East Council on Global Affairs.

Transcript:

Juan Cole:

Well, hello everybody. Good afternoon, Doha time. My name is Juan Cole. I’m a professor of history at the University of Michigan, and I’m moderating this panel on digital deception, disinformation, elections, and Islamophobia.

The webinar is organized by the Middle East Council on Global Affairs. For Arabic speakers who are more comfortable in that language, we do have Arabic translation available in Zoom. You have to switch to the Arabic channel for that.

The panel’s subject is a recent report on disinformation research by researchers Marc Owen Jones and Sohan Dsouza. This report revealed a multiplatform global influence campaign promoting anti-Muslim bias, sectarianism, and anti-Qatar propaganda. Jones and Dsouza’s report highlights the use of disinformation to spread a broad neoconservative agenda, including xenophobic, anti-immigration, and anti-Muslim propaganda and disinformation.

We’ll hear from the authors. Let me quickly introduce them.

Marc Owen Jones is a non-resident fellow at the Middle East Council on Global Affairs and one of the co-authors of “The Qatar Plot.” He is also an incoming associate professor of media analytics at Northwestern University in Qatar, where he specializes in research and teaching on disinformation, digital authoritarianism, and political repression, on which he has a recent book. He has also applied his research to concrete situations, such as in Bahrain.

We are honored to be joined by Professor Sahar Aziz, a professor of law at Rutgers University, where she is the Chancellor’s Social Justice Scholar and the founding director of the Center for Security, Race, and Rights. Her research explores the intersection of national security, race, religion, and civil rights. After reading her book, “The Racial Muslim: When Racism Quashes Religious Freedom,” you won’t look at the world the same way.

The other co-author of the report is Sohan Dsouza, a computational social scientist and open-source intelligence practitioner. He is interested in the intersection of disinformation and political polarization and their effects. He also has experience as a software engineer, operations analyst, and research scientist at MIT.

We are very pleased to have this stellar assemblage of presenters. Let’s begin with each of them making a basic statement on the report. We’ll go for about eight minutes or so, and then we’ll have a panel discussion. Marc, would you begin, please?

Marc Owen Jones

Thank you very much, and thanks to my co-panelists and the Middle East Council for arranging this. I will endeavor to speak as slowly as I can, as I have a tendency to speak very quickly, just for the benefit of the translators. Since I’m talking first, it makes sense to summarize some key elements of the report. I imagine some of our listeners will have read it, but I want to give a bit more context about what it contains.

The report is called “The Qatar Plot,” a short name, but the longer subtitle, which is important, is “Unveiling a Multiplatform Influence Operation Using Anti-Muslim Propaganda to Attack Qatar in the EU, the US, and the UK.” The Qatar element is important, but in some ways, a bit of a red herring. If I were to summarize this report in simple terms, it is an unknown actor using Facebook and Meta’s ad platform to deliver anti-immigrant and anti-Muslim propaganda to a large audience. When I say a large audience, Sohan and I documented that this campaign reached at least 41 million people, primarily on Meta’s platforms, specifically Facebook.

The campaign was also present on TikTok and Twitter (now X), and even in the real world. For instance, some of these campaigns appeared at CPAC, the conservative conference organized annually, and also on a giant digital billboard in Times Square.

I will talk mostly about the digital elements; the other bits will come up. The people running this campaign, and I say people because we, the researchers, and even Facebook do not necessarily know who was behind it, spent at least $1.2 million in advertising money to spread this campaign. The campaign ran for approximately six months, starting towards the end of 2023 and continuing into the first half of 2024. Ads were being run on Facebook targeting different parts of the world, including Lebanon, the US, the EU, and the UK. Within the EU, it targeted Belgium, France, Germany, Croatia, and Sweden, but primarily France.

When I say targeting, that’s where these ads were being delivered. This is a crucial period as it coincided with a number of European elections, including the UK elections. Recently, in the UK, we saw anti-immigrant and anti-Muslim riots. The timing of that is quite interesting.

The content of these ads typically featured pictures of Arab-looking men engaged in violence, taken out of context, with big lettering saying things like “There will be 60 million Muslims in Europe by 2030.” One part of the campaign was titled “Save Europe,” implying that Europe needed to be saved from a Muslim invasion. This rhetoric was very xenophobic and Islamophobic, designed to create the impression of a Muslim invasion of Europe.

Another aspect is where the Qatar bit comes in. The idea was that Qatar, as a country, was somehow orchestrating this Muslim invasion into Europe. This resembles the notion of the “great replacement theory,” a new type of conspiracy theory. In many conspiracy theories, there’s often a global elite at the top of it, like George Soros. In this conspiracy theory, Qatar takes that role, being framed as the orchestrator of the supposed Muslim invasion. This campaign used manipulative techniques on Facebook that Facebook struggled to combat.

What this campaign shows is that an unknown actor can spend over a million dollars on Facebook to have hate speech advertised through Facebook, reaching millions of people. Facebook’s algorithms are not efficient in combating this. In very few other contexts would anyone be allowed to take out this volume of hate speech in terms of advertising without the platform knowing the client. It’s a huge problem and a significant aspect of this campaign, especially given the violence we saw in the UK.

Thank you for your time. I’ll leave it there for now.

Juan Cole:

Thank you so much, Marc, for your concise and insightful remarks.

Before we move on to the next panelist, I want to remind the audience that you may submit questions for the panelists, and we will have a Q&A session with them later. You can submit questions through the Q&A portal in Zoom or in the chat. Our Middle East Council host has also put in the link to the report online.

Now, I’d like to move to Professor Aziz.

Sahar Aziz

Thank you. It’s such a pleasure to be here. This is such an important topic, one that is often overlooked as we study bias against various minority groups, whether it’s Muslims, Jewish communities, Black communities, or other immigrant communities. I appreciate the Middle East Council highlighting this topic and found the report fascinating.

What I will use my time for is to provide some context to show how this report is supported by previous reports. What we’re seeing is a really troubling trend that probably isn’t going to go anywhere unless we do something affirmative as a matter of policy and law in various countries.

So, I will use a PowerPoint just because it’s a little easier to show the data and the key points. Bear with me as I share my screen.

Okay, so you all should be able to see it. I want to make three key points, or takeaways, as follows.

The first is that digital Islamophobia spreads five common anti-Muslim racial tropes through social media explicitly and through mainstream media implicitly, at least in the United States (admittedly, my own research focuses more on the U.S.). This happens without accountability or concern from government or private decision-makers. I’d like you to compare this digital Islamophobia with the responses against digital anti-Semitism, because I think the stark contrast is obvious. I would argue that just as we take digital anti-Semitism very seriously, we should take digital Islamophobia equally seriously. We don’t tend to see the same level of response.

Digital Islamophobia threatens the safety, livelihoods, and equality of Muslims in America and also in Europe because Americans and Europeans are not educated in public schools to understand that these tropes are racist and harmful. There’s a huge education gap, so the public is very vulnerable to being manipulated by these fear tactics.

Finally, foreign governments and American politicians, as well as European politicians, intentionally stoke hate against Muslims and immigrants because it’s an effective political strategy, whether domestically or internationally, for sowing division among the electorate.

So, what are the five staple tropes? Many of them were highlighted in the report, but this is something to keep in mind as you learn about Islamophobia because you will see these tropes over and over. When someone says them explicitly or implicitly, or accuses a Muslim or a Muslim organization of these tropes, you should be on alert that it’s racist. It’s a trope. It’s the equivalent of accusing Jews of controlling the world or assuming that Jews aren’t loyal to their country, or assuming that Blacks are criminals, lazy, and thugs, and so on. We can come up with all sorts of racial tropes against different groups. It’s really important that you understand how absolutely insulting, harmful, and racist it is to assume that Muslims sympathize with terrorism or support terrorist groups.

This is particularly important today while we’re dealing with what I believe is a genocidal campaign by Israel, funded by the United States, against Palestinians in Gaza. The ability to even have that conversation always leads to Muslims, Arabs, and Palestinians being accused of supporting a terrorist group. Other tropes include misogyny against women, Islam being anti-woman, the accusation that Muslims are invading the West rather than contributing to the economic and cultural development of European countries and the United States, the trope that Muslims’ presence is a threat to the safety of white women, a threat to liberalism and democracy, and a threat to the security of the nation. Finally, there is a racist trope that Muslims are presumptively anti-Semitic.

Now, I’m just going to show two report summaries that will corroborate the report we’re talking about today. One report, titled Failure to Protect: Social Media Companies Failing to Act on Muslim Hate Crimes, was published by the Center for Countering Digital Hate in 2022. It analyzed 530 social media posts containing anti-Muslim hatred on Facebook, Instagram, TikTok, Twitter, and YouTube in February and March 2022.

Here are the results: social media companies do not respond when complaints are filed about Islamophobic posts spreading hate and bias, which could also lead to physical violence and harassment against people in real life. Only 3% of the flagged tweets were removed from Twitter. Similarly, Instagram, TikTok, and Twitter allowed hashtags like #IslamIsCancer and #Raghead to spread, garnering 1.3 million impressions. Four out of the ten most-cited domains in the Gab dataset—Gab being the right-wing version of social media—are focused on anti-Muslim hate. The second most-used hashtag was #BanIslam. In these right-wing ecosystems, there is a very robust, growing, and dangerous anti-Muslim ecosystem.

The second report I want to highlight is Islamophobia in the Digital Age: A Study of Anti-Muslim Tweets, published in 2022. The researchers analyzed 3.7 million Islamophobic tweets posted between August 2019 and August 2021. One year later, 85% of those hateful Islamophobic tweets were still online. Nearly 86% of the geolocated anti-Muslim tweets originated in three places: India, the U.S., and the UK. Spikes in hate strongly correlated with newsworthy events related to Islam, particularly protests, terrorist attacks, and eruptions of conflict. This shows a pattern of guilt by association, where all Muslims are presumed responsible when one bad actor, who claims to be Muslim or to be acting in the name of Islam, commits a wrongdoing or a crime.

They each have to prove that they are innocent and that they don’t support that criminal act. We don’t do that to Christians. We don’t do that to Jews. We don’t do that to whites. Nor should we.

That’s another indication of racism: when you impose guilt or responsibility on an individual or a subgroup of an identity group for the wrongdoing of another individual who has the same identity.

Finally, I just want to quickly show the common hashtags of this report. As you can see, the five common tropes that I highlighted are on full display in these hashtags. If you see the most common ones, like “islamization,” “stop Islam,” and others like “save Europe,” “raghead,” some of them have very vulgar terminology which I won’t repeat. But “anti-Islam” is another hashtag, as well as “Islam is evil” and “Eurabia.” These are all hashtags that are growing across social media and effectively causing more and more people to internalize, mainstream, and normalize anti-Muslim hatred.

If you just look at this evidence, these common tropes are coming to light. I’m not simply pontificating or theorizing in the abstract—these five common Islamophobic tropes are real and they’re circulating widely.

Finally, this is my last slide. I just want to highlight that this really ties back into the report: foreign governments are actively using Islamophobia to drive wedges within American and European societies. I gave an example of Russia. An indictment brought by the Mueller investigation in the U.S. detailed $1.2 million spent every month on interference in the 2016 election, and Islamophobia was part of that agenda. For example, here is one tweet that was intended to sabotage Hillary Clinton and support Donald Trump: “I think Sharia law will be a powerful new direction of Putin,” with “Support Hillary, save American Muslims rally” included on the sign above.

As you can see, of course, you have Trump and you have politicians in the U.S. who do exactly the same thing. I will just tell you to read two things. The first is Global Islamophobia and the Rise of Populism, which is a new co-edited book by me and John Esposito. Stay tuned for Punished for Participating, which will be coming out by the Center for Security, Race, and Rights in 2025, and will talk about the way in which politicians use Islamophobia to produce anti-Muslim hate in real life.

Okay, I have used up my time. Thank you very much for listening, and I look forward to your comments and hearing Sohan’s comments as well. Thank you.

Juan Cole:

Thank you, Professor Aziz.

Sohan, please.

Sohan Dsouza:

Thank you. Thank you, Juan. Thank you all for having me, and thank you, Marc, for the excellent overview of “The Qatar Plot” report. I want to add some context to that. It’s actually somewhat paradoxical that we rail against Facebook mostly. I guess it’s appropriate, in a sense, because it was one of the places where, as far as we can tell, immense reach was available, thanks in part to Facebook’s powerful targeting features and algorithms. But the campaign was also quite significantly active on X.

They also attempted to vandalize Wikidata and Wikipedia pages. They actually succeeded, and the vandalism stayed up for quite a while. Assets were still active on X, Telegram, TikTok, and YouTube at last check. At the time Meta’s adversarial threat report was published, there was also a change.org petition, which was eventually taken down. But on the other platforms, apart from Facebook, X, change.org, and Wikipedia, assets are still up, according to Meta’s threat report.

There’s also Instagram presence, although Instagram is famously opaque, so we weren’t able to find that at the time. All the same, the spend is very concerning because we don’t know who is behind it. The individuals we were able to identify, namely the Vietnam-based proxy for at least the Facebook part of the operation, as far as we can tell, are not talking or are deflecting. Other individuals we were able to identify as involved, at least in the IRL applications of the operation, are not talking either. That is very concerning. Someone is able to shunt $1.2 million through some content farm in Vietnam. Banks, accounts, and cards must have been involved at some point, and we still don’t know who it is.

We were able to connect the assets across different campaigns using the art and science of open-source intelligence. There were various burner and hacked assets involved, at least in the Facebook part of the campaign, and possibly the Twitter/X part as well, including at least one hacked page that was apparently being prepared to serve as a public-facing page. Although most hacked assets were used for sponsoring ads, there were also networks of inauthentic engagement boosters on both Facebook and Twitter/X.

There were also some sophisticated and seemingly novel techniques used to evade countermeasures on many of the platforms, like cycling ad collections and using burner and boilerplate accounts on Facebook. One of the proxies was actually advertising on Facebook that they were able to bring back Facebook pages that had been suspended—and they proved it by actually bringing back Facebook pages that had been suspended in this campaign, in this operation. There were also distraction and coordination tactics used on Wikipedia, and specialized ad-running and engagement booster accounts on Twitter/X.

These were detailed in the report. But yeah, these were interesting and some very novel techniques as well. There was quite a bit of use of AI—not intended to look realistic, but all the same, used to produce propagandistic images of Big Ben bowing down to a caricature of a person. There was at least one incident of AI narration of a video, and at least one cheap fake of a fake audio track overlaid on a speech.

All of this was organized, and we still don’t know who exactly is behind it. We’re looking at multiple messaging vectors spread across different platforms: injecting security and espionage threats into the conversation, making Qatar somehow responsible for the “Islamization of Europe” (harkening back to the “Great Replacement” and “Eurabia” tropes), attacking the royal family, and all sorts of things. The Lebanon-targeted ads, mainly, portrayed Qatar as a stooge of Iran.

There were some strange things that we noticed, like pivoting at one point from targeting the ICRC president to targeting something else, which was a bit weird. There were also fake organizations created by whoever is behind this, like the “Euro Extremism Monitoring Project” and “Verbatim Citizens of Human Lives.” That name was not the only spelling or grammatical oddity; there were quite a few of these, which might indicate that the people behind this are not native English speakers.

At some point, the focus on more milquetoast themes like secularism and concern for the hostages turned into overt xenophobia and anti-Muslim, anti-immigrant bigotry. This shift drew especially on the anti-immigrant and anti-Muslim ecosystem on X and Telegram, literally putting its stamp on it: they took watermarks reading “Made in Qatar” and “Part of the Qatari plan” and imposed them over videos that were already circulating in anti-Muslim and anti-immigrant groups.

This is also concerning given that these appear to have been timed with the election seasons of the UK, the EU, France’s surprise election, and certainly in a U.S. election year. All these places were targeted with these ads.

There is something to be said here about our issues with transparency on Facebook, as well as on other platforms. Again, I’m going after Facebook for this because it had some features, like CrowdTangle, that were very useful, and its ad transparency features are great. But somehow, most bafflingly, information would disappear from these. There would be specific pages or campaigns that we were tracking, and when the page or the ad was taken down, the transparency information about the sponsor and other details was removed along with it.

If an ad is violating standards, that’s especially when we need to know and be able to trace where it’s coming from or find out who’s behind it. To remove the information seems counterproductive.

I just want to say that open-source intelligence investigations involve a good deal of luck—waiting for people to slip up and make a mistake so we can find connections that way. But we got really lucky in many cases, and we shouldn’t have to be. There really should be more done in terms of transparency measures, and I really hope that all platforms—Facebook, X, especially—take more steps to ensure that there are such features that can aid open-source intelligence investigators.

Juan Cole:

Thank you. So, let’s move now to a more panel-based discussion. I thought that would give each of you an opportunity to reply to some of the remarks your colleagues made, if you have something to weigh in on. We can switch it up. Professor Aziz, do you want to say anything further?

Sahar Aziz:

Yes. So, I would like to assume that both of you researched the other reports, whether they were the ones that I highlighted or much of the literature that is developing about digital Islamophobia. I am curious to know if you had an opportunity in your research to identify any correlation between the hate online and the hate in person and in real life.

Because there is the dignitary harm of feeling that it is normal, mainstreamed, and acceptable to insult a person’s religion or national origin, to demean them, and to tell people living within a society that their identity group is inferior or dangerous. That in itself is harmful. It’s hard to measure, but those of us who experience it know it’s palpable and real.

The other harm is when people act on it, whether at work, in public spaces, at school, or when participating in politics, like running for elected office. We’re seeing many Muslims being very aggressively attacked and accused of all sorts of things—terrorism, misogyny, etc. The most high-profile examples are Ilhan Omar and Rashida Tlaib, but it actually happens to almost every Muslim running for political office.

So, my question to you is: What insights have you been able to find in the literature about that correlation between online and the physical world? And what does that tell you about the need for future research?

Juan Cole

Marc, do you want to…?

Marc Owen Jones

Yeah, sure. Firstly, thanks again for the question. I mean, I’ve worked on Islamophobia in some form for a while, or on forms of hate speech. And of course, the reports you mentioned are vital reading.

Just as a caveat, the report we wrote is a very specific documentation of an influence campaign. The questions you’re asking are so important. Again, it’s the relationship between what happens online and offline. One of the things I will say is that people like to dismiss what happens online because they try to suggest it doesn’t have real-world impact.

If we actually look at the reality, for example, the ads we saw—Sohan and I downloaded a bunch of comments that were replies to some of these Islamophobic ads. We were interested in analyzing how people react to this content because you’re sort of seeing it in the wild, seeing if people are reacting to this. It does promote a level of antagonism and racist and Islamophobic comments as well. When people are exposed to content, they react to it at their keyboard. How much that translates into physical violence—these are questions we didn’t go into in the report. But I would say in the literature, there’s something very sinister and insidious going on here on a general level.

The scale of the campaign, the statistics you mentioned in the report—they indicate a normalization of hatred that is conducive to the kind of violence we saw in the UK. Once people start to operate in an atmosphere or climate that they think is permissive in terms of saying or doing things, saying first, then doing things—that’s really dangerous. This kind of hate speech allows for dehumanization, and as we know, dehumanization often comes before we see this real-world violence.

What this report shows, in addition to those other ones, is not only is the scale of this huge, but people are profiting off it. I just want to add another point: This is, as you’ve said, not happening in a vacuum. In the past year, we’ve seen anti-Semitism increase, we’ve seen Islamophobia increase, but as you noted, it does not get the same amount of attention.

I think this is true of campaigns like the one we documented. When it comes to hate speech or violence against Muslims, there is less interest in the press. This campaign we documented is probably one of the top five in terms of cost of all influence campaigns identified on Meta, yet it didn’t make a big splash as it would if we knew Russia was behind it or something like that. That’s significant. It’s not the first one this year. Let’s not forget that I worked on a similar investigation in February, where people in Canada, among others, were being targeted with Islamophobic hate speech.

That campaign was then tied back to Stoic, an Israeli PR company contracted by the Ministry of Diaspora Affairs in Israel, which has been cozying up to the far-right in Europe. When we look at the discourse online, it’s also indicative of offline activities. Again, I don’t have the answers for the causal solutions, but I’ll say this proliferation of speech, which appears to be increasing without the necessary condemnation in the press or from social media companies, is dangerous and can contribute to real-world violence. I also think the speech itself is a form of violence in its own way.

Juan Cole

I just want to interject that Facebook, in particular, was widely blamed for allowing rampant Islamophobic speech in Myanmar (or Burma) that became implicated in violence against Burmese Muslims. This indeed contributed to the crisis of the Rohingya refugee community. The real-world effects of this speech have already been, to some extent, documented, and they can be dire.

Sohan, would you like to weigh in here?

Sohan Dsouza

Yeah, thanks, Marc. And thank you for the question, Dr. Aziz. You used the word “ecosystem,” and I’ve been using that a lot myself. This whole investigation of the operation acquainted me with this far-right, anti-immigrant, anti-Muslim ecosystem, primarily on X, although it also percolates outwards into other platforms. It’s just that X, for all practical purposes, is not really moderated nowadays, so that’s where it really festers, thanks to the platform’s opacity, like the lack of transparency around creating accounts and even boosting them using blue checks.

As I mentioned, some of the campaigns in this operation used a lot of the anti-immigrant, anti-Muslim tropes and even the language, like “cultural enricher” and things like that—the euphemisms that were used by the ecosystem. Actually, the videos and images were just re-watermarked. It’s also interesting that in the recent UK riots, within a couple of hours after the attack, one of the accounts or one of the duo in that ecosystem was the first to specifically claim that the attacker was a “quote-unquote Muslim immigrant.” Barely a couple of hours later, that full fake profile emerged. About three hours after that, it was one of the first to also boost a troll post made by what appeared to be a Hindu Sikh nationalist, an anti-Muslim bigot, a troll post claiming it as a victory for Islam and things like that. It boosted that and further added to this whipping up of an anti-Muslim frenzy in the UK, which fed into focusing the riots on specific targets like mosques.

So, yeah, this ecosystem is of great concern and will surely be used by other influence operations, given that it serves as a testing ground for what makes the most effective anti-Muslim propaganda.

Sahar Aziz:

Can I just add one other thing that I think is important? The timing of our discussion is unique. Right now, in the United States in particular — I can’t speak as much to the experiences across the different European countries, which are very diverse — we are witnessing heightened sensitivity, heightened scrutiny, and heightened opposition to anti-Semitism, which I believe is a good thing. In that regard, it is setting a gold standard. Observing the sensitivity and attention paid to anti-Semitism has convinced me just how completely lacking any comparable effort, policy, practice, or law to combat Islamophobia is.

I think this is the time for us to use that as a gold standard. At the same time, we’re struggling, at least in the US, with free speech and the freedom to assemble and protest. Unfortunately, some people are weaponizing anti-Semitism to quash those rights within the United States. Even if the motives are in good faith, meaning we really want to protect Jews, there is not the same sensitivity to Muslim students, Palestinian students, Muslim faculty, Palestinian faculty, or Arab faculty. I’m using higher education as an example because it is right now kind of the ground zero for these issues.

I would encourage attendees to do that comparison and ask why certain groups, in this case Muslims, are not taken seriously when they voice grievances. No one tries to stop the spread of Islamophobia online, in stark contrast to the efforts to combat anti-Semitism. We need to do the same for Islamophobia and for countering anti-Palestinian racism, which is a subset of Islamophobia. If you look at the common racial tropes against Palestinians, they mirror those against Muslims. Everyone incorrectly assumes that all Palestinians are Muslims, when in fact I think 15 to 20% are actually Christian.

Juan Cole:

There’s a question already in the queue that we will come to later, but some people are demanding transparency from us. Marc and Sohan, and Professor Aziz with regard to the reports you cited: they want to know who is behind the report you did, who funded it, and whether they should be suspicious of that.

Marc Owen Jones:

I mean, happy to answer that: no one funded it. Sohan and I just did the research. Sohan spotted it, and we just started working away at it. Myself as an expert in digital authoritarianism and the region, and Sohan as an open-source researcher, we did it for the public good, right?

Sohan Dsouza:

Yep, absolutely. I wasn’t funded on this. I just did it because I was miffed, basically.

Sahar Aziz:

Well, that’s the way that academic research often works and should work. But I wanted to clear the air in this regard. Let me also state that I learned of this report when I was invited to comment on it and found it to be a very interesting set of conclusions and analysis. The reports I cited were reports that I found in my research. Admittedly, I read them for the first time as part of my preparation for this panel. I do not have any connection with the authors of those reports.

Another thing I find really fascinating is that every time I do my work on countering Islamophobia, which is my research and what I do for a living as a law professor, I get questioned about whether I’m receiving funds from the government of X or Y, or from foreign governments, which I don’t. But I do want to ask: do you ask that question of every researcher? Do you ask it of white male American researchers? Of white female American researchers, Christian American researchers, Jewish American researchers? If you do, then fine, that’s a fair question. But if you’re only asking it of people who have immigrant backgrounds, who are from the global South, or who are Black, then perhaps you should ask yourself whether you have internalized some of these biased, racist tropes.

Marc Owen Jones:

Yeah, I was thinking along the same lines. It’s not a question everyone seems to get asked, but I’m used to being asked because, again, we’re dealing with a report exposing hate speech against Muslims, and Qatar is mentioned. I mean, whoever created this campaign was the one mentioning Qatar, right? The question should be about who the hell is behind this campaign. $1.2 million spent, and we get asked about our funding?

It’s quite funny, but yeah, I think it’s just a question that comes with the territory of the work we do. It can be indicative of the kinds of prejudices you get exposed to, particularly working, as I do, in Qatar and the global South. I often see it with journalists: they will ask questions of some people that they wouldn’t ask of others. In the world of disinformation, this is still relevant. If I were an American researcher exposing a Russian disinformation campaign, I wouldn’t get asked, “Does your position as an American professor compromise your integrity?” But when I was doing research on people attacking Qatar, which happened in 2016 while I was living here, I would get asked if I was being paid or sponsored. There is this huge bias, based on racism or embedded prejudice. It’s just something you have to answer, unfortunately. I think it’s good that you added that addendum, because it’s unfortunate we have to deal with this when the real issue is: who the hell pays $1.2 million to spread hate speech?

Juan Cole:

Yes, people are always bringing up Qatar’s involvement in these negotiations with Hamas, and then that involvement is used to tag it as pro-Hamas or supporting Hamas. I just want to put it out there that these tropes are extremely unfair. The Obama administration went to Qatar in 2014 and diplomatically pressured it to be a conduit for negotiations with Hamas. Since the United States has declared Hamas a terrorist organization, its diplomats can’t talk to Hamas directly and need an intermediary.

Qatar has often been reluctant about this role and publicly so. In 2018, it was revealed in the Israeli press that Qatar’s government had come to the end of its rope with Hamas due to its obstreperousness and was going to relinquish the role. The Prime Minister of Israel, Benjamin Netanyahu, then sent the head of Mossad to Qatar to plead with them not to give up this role.

Part of the agreement was that Qatar and Egypt provided funds for Gaza because Israel had put Gaza under an economic blockade. There was a danger of people starving to death if nothing was done. So, Qatar and Egypt provided funds for Gaza, not for Hamas in particular. These funds were actually deposited in Israeli bank accounts, and it was Netanyahu’s government that moved the funds to Gaza. If anyone was funding Hamas in that way, it was Netanyahu himself, not Qatar.

I think if the international community wants Qatar to play this role of intermediary, it’s extremely unfair to attempt to tag it as somehow supportive of Hamas, for which there is no evidence at all.

Sahar Aziz:

Can I also just add, Professor Cole, the irony that, again, the timing matters? It’s September 2nd, 2024, and it’s been almost 11 months now of the Israelis just destroying Gaza and killing over 50,000 Palestinians, injuring over 100,000, and 2.4 million are starving to death, among other atrocious humanitarian problems and crises.

Yet, when we engage in political debate that criticizes Israel, the Israeli military, the US government, or Congress, especially if you disagree, you are accused of being anti-Semitic. Meanwhile, people can criticize Qatar all they want, and no one will call them Islamophobic. I don’t think criticizing a nation-state makes someone racist against its majority religion unless one explicitly says so. But again, the double standard is that Qatar cannot be a nation like any other, engaged in negotiations, without being attacked as having an ulterior agenda, while criticism of Israel is treated as off-limits.

We need to be cognizant of these double standards. Criticize Israel if you want; criticize Qatar if you want. The way you do it matters. It could expose that you are just using that as a pretext to be Islamophobic or anti-Semitic. On its face, criticizing either is not Islamophobic or anti-Semitic. What’s interesting about this report is that it shows it’s not simply criticizing Qatar for its role, which is legitimate, like criticizing the US, Egypt, or Israel. It’s the way it’s criticized, the hashtags used, and the narratives. They are blatantly Islamophobic, rather than being a geopolitical analysis, a human rights analysis, or an international law analysis.

These are just part of those fear-based tactics that cause real harm to Muslims, Arabs, and Palestinians. Sorry, Sohan, you go ahead.

Sohan Dsouza:

Yeah, I agree. You can criticize any government, administration, faction, or the tenets of any religion, really. But if, with the participation or negligence of platforms, you are inauthentically yanking everyone’s chains, that’s fraudulent. We should expect better. In some cases, without violating anyone’s freedom of expression, we should be able to legislate better as well.

Definitely, a lot of the feedback we’ve gotten on social media has been focused on responding with something about Qatar’s politics or human rights record, or about Islam. That misses the whole point: someone was able to put this huge amount of money into inauthentically reaching people with their message, hiding behind it, and thereby escalating things. As I mentioned, this operation got more and more xenophobic as time went by. Perhaps in the beginning it was testing the waters to see if it would be discovered. But as their identities weren’t discovered, they got bolder. By the end, it became almost indistinguishable from the rest of the anti-Muslim, anti-immigrant ecosystem.

The more they’re able to get away with it, the worse they’ll get. We should definitely be concerned. As long as they can hide their identities, there isn’t transparency, and the platforms don’t cooperate, they will just get bolder. If they hire proxies to do this, adding a further layer of deniability, the messaging will only get worse.

Juan Cole:

I’d like to observe, in light of these good comments from our panelists, that this is not merely an issue of Islamophobia. Any Sikh group that got involved in these kinds of campaigns should have its head examined, because in the United States at least, anti-Muslim sentiment and irrational hatred of Muslims very frequently spill over onto other ethnic groups. Americans mostly can’t distinguish between Sikhs and Muslims, and they have this odd idea that since Sikhs often wear turbans, Muslims must wear turbans, and therefore they attack Sikhs.

Beyond that, it plays into a general anti-immigrant sentiment, which can blow back on Hindus and indeed on Eastern Orthodox Christians. In the previous big era of immigration to the United States, in the early 20th century, there were anti-Greek and anti-Eastern Orthodox riots in some American cities. It gets people used to the idea that it’s all right to single out an ethnic group, especially one tagged as immigrant, although many American and European Muslims are second, third, or fourth generation —

Sohan Dsouza:

and people can always convert to Islam —

Juan Cole:

There are converts, but tagging Muslims as immigrants or as an invading minority spills over onto others, including Jews. Many of the tropes used against Muslims, as Professor Aziz pointed out, are old anti-Semitic tropes. Promoting that way of thinking can’t possibly be good for the Jewish minority.

One last question I’ll broach to you all: you weren’t able to find out who was behind this. Almost certainly it was not the Vietnamese government, although one of the actors was based there. There is a lot of conflict inside the Muslim world over political Islam. The government of Egypt crushed the Muslim Brotherhood and overthrew an elected Muslim Brotherhood government. Qatar itself was the target of a boycott by four nations. To what extent are these internal conflicts spilling over onto Europe and the United States, so that Islamophobia is sometimes promoted by Muslim-majority countries?

Marc Owen Jones:

I can go ahead, thank you. I discussed this somewhat in my books, especially regarding 2016 onward. An important parallel between political Islam and what we saw with this Qatar-plot report is the sometimes deliberate attempt to conflate political Islam with Islam in general. We saw a lot of anti-Muslim rhetoric coming out of countries like the United Arab Emirates, which were paying a lot of money to run anti-political-Islam campaigns across Europe. However, people don’t necessarily interpret this subtly as opposition to political Islam; it just gets read as anti-Muslim sentiment.

One thing about this report, and why I said early on that the Qatar angle is a bit of a red herring, is that the mention of Qatar almost allows people to dismiss it by saying, “Oh, this is about Qatar.” In fact, Qatar is being used as a metaphor for Muslims, a bypass of sorts, to talk about Islam. The campaign is fundamentally Islamophobic, and Qatar is a sideshow. This echoes the situation of the Rohingya and the genocide there; we must bear in mind that Qatar, in this report, is a synonym for Muslims. This ties in with how political Islam has been attacked, and the consequences of that contribute to Islamophobia in Europe.

One more thing I want to say about the report that Facebook issued about this campaign: it mentioned that the campaign had targeted Lebanon and Qatar but did not mention Islamophobia, even though that is the thrust of the campaign. Facebook thus gives itself justification for not addressing the most problematic element, which is hate speech against a large group of people.

Yes, Sohan, did you want to come in on this? You’re muted, sorry.

Sohan Dsouza:

I was concerned that Meta did not mention this in their report, and that it generally downplayed the targeting of specific countries, for instance by not mentioning Belgium. Toward the end, especially in the Belgium phase, the pages’ names turned into things like “Save Europe” and “Europe First,” and the messaging switched to more xenophobic content. That phase somehow escaped everyone’s attention in the adversarial threat report. It did need mention.

Sahar Aziz:

Those of us who have been studying Islamophobia since at least 9/11 have noticed that it works like a fill-in-the-blank: insert the Muslim-majority country of the moment as the pretext for perpetuating the five common anti-Muslim racial tropes. Remember when it was Iraq and Saudi Arabia; now it’s also Iran, Qatar, Afghanistan. It’s based on geopolitics and on who the designated foreign state enemy of the United States is, which is then made to represent 1.8 billion Muslims. Nobody has a problem with that, but if we criticize Israel, we’re accused of being anti-Semitic. We need to parse this conversation and identify when discourse is geopolitical debate and when it is a ruse for hatred toward a religious or racial group.

I approach political Islam the way I approach Zionism. Both are political ideologies, diverse and complicated. There are people who argue that religion should inform law and public policy and that there should be no separation between religion and state; Zionists, political Islamists, and, similarly, Evangelical Christians in the US believe this. It is legitimate to debate these ideologies and their tensions with liberalism, multiculturalism, and religious plurality.

The problem is that you cannot talk about Zionism as a political ideology and problematize it as an academic, yet people feel free to assume all Muslims are the most extreme political Islamists, like ISIS and al-Qaeda. This contradiction shows differential treatment, which in the legal field is evidence of discrimination and bias.

We must discern whether someone is genuinely criticizing some tenet of a religion or state based on their consistency. If they look away when it comes to other religions or states, but focus on one, that’s really suspect.

Juan Cole:

We just have time for maybe one more question from the audience. One of the questioners brought up, quite rightly, the influence of the rising right in Europe and the United States in this matter. We know Islamophobia is very frequently a trope of the MAGA branch of the Republican Party.

We just had elections in eastern Germany where the AfD did disturbingly well, and anti-Muslim sentiment is a keynote of the AfD. To what extent is this campaign wrapped up with the rising Western right? We have just five minutes.

Sohan Dsouza:

It’s definitely trying to ride this as a vector for its messaging. We’ve seen them drawing on the same ecosystem, staying very current with ongoing events and celebrating what they see as the victories of the far right. The problem is that many of the campaign’s assets have still not been taken down on X and are still active, promoting and boosting anti-Muslim and anti-immigrant rhetoric. Unfortunately, the less moderated platforms, lacking transparency features, make it harder for these masterminds to be exposed, letting them get away with this kind of messaging.

Juan Cole:

“X” is an example of a hostile takeover by the far right. Twitter was, in fact, moderated, and some of these campaigns were being disallowed. Elon Musk bought it with Russian and Saudi partners, I think partly to combat climate change science, but Musk has now turned it to the amplification of far-right and Islamophobic tropes. He himself is a major purveyor of some of this disinformation.

Marc Owen Jones:

That’s an important point to emphasize. I’m glad we brought Musk in, because the report focused a lot on Facebook. Elon Musk is promoting Islamophobia, personally boosting some of the most prolific Islamophobic accounts, which are also fake accounts. The whole operation we documented was an influence operation in which someone hid who they were in order to spread a massive anti-Muslim campaign. For example, if you go on X and look for the account “Europe Invasion,” spelled with two n’s, you will find an account with hundreds of thousands of followers getting millions of engagements, farming engagement, which means it’s probably being amplified. It’s a hacked and hijacked account with no information about who is behind it, yet Elon Musk promotes it. He was doing so during the anti-Muslim riots in the UK, directly fanning those flames at a time of huge violence against immigrants and Muslims.

We need to consider not just the rise of the far right but facilitators like Elon Musk, who use their platforms not only to fan the flames but to prevent people from tackling Islamophobia. Again, this goes back to what Facebook has and hasn’t done, which is to take these campaigns seriously and set a gold standard for tackling Islamophobia. Right now it’s not just that people aren’t doing enough; it’s that well-known people are saying things that, in any other context, would not be okay.

It’s kind of an absurd state of affairs, to be honest.

Juan Cole:

I think we have to leave it there.

Sahar Aziz:

Can I just make one comment?

Juan Cole:

You’ll be cut off in about 30 seconds, but yes.

Sahar Aziz:

I just want to put in a pitch for Global Islamophobia and the Rise of Populism, the book John Esposito and I just published, which is focused on Europe and will be available there. I want to highlight that fear is profitable and wins votes. There is a very practical, pragmatic reason why this will continue. When you add white supremacy and great replacement theory, all these fears that white society is being destroyed, there is every financial, political, and economic incentive for Islamophobia to continue online.

We need to be very proactive in trying to figure out how to stop it rather than waiting for it to disappear.

Juan Cole:

Well, thank you very much to all of our panelists for your excellent interventions. I think we have to leave it there. Thanks to the audience for joining us, and all the best. Thank you.

]]>
Tech Giants criticized for Silencing Pro-Palestinian Narratives https://www.juancole.com/2024/09/criticized-palestinian-narratives.html Sun, 01 Sep 2024 04:06:21 +0000 https://www.juancole.com/?p=220346

The fight against censorship on social media is a fight for the future of democratic debate itself.

]]>
Did Turkey Ban Instagram over Shadowbanning Palestine? Why did it Lift the Ban? https://www.juancole.com/2024/08/instagram-shadowbanning-palestine.html Tue, 13 Aug 2024 04:06:17 +0000 https://www.juancole.com/?p=219968 Istanbul (Special to Informed Comment; feature) – On August 2, Turkey blocked Instagram, the country’s most popular social network.

Although Turkey’s Information Technologies and Communication Authority (BTK) did not officially state the reason for the ban, the move came after Fahrettin Altun, the Presidential Communications Director, criticized Instagram for preventing users from sharing content related to the assassination of Ismail Haniyeh, the political leader of Hamas and a close ally of President Erdoğan.

Altun said on X: “I strongly condemn the social media platform Instagram for blocking people from posting condolence messages regarding Haniyeh’s martyrdom without providing any justification. This is an apparent and obvious attempt at censorship.”

In a similar incident, Meta, Instagram’s parent company, removed social media posts by Malaysian Prime Minister Anwar Ibrahim expressing condolences for Haniyeh. Meta designates Hamas as a “dangerous organization” and prohibits content that praises the group.

Ismail Haniyeh was killed in Tehran on July 31, where he had been attending the inauguration ceremony of Iran’s President Masoud Pezeshkian.

Historical Context of Social Media Bans in Turkey


Under Erdoğan, Turkey has previously blocked several social media platforms, including YouTube, Threads, EksiSozluk, Wikipedia, and X (formerly Twitter).

YouTube was first banned in Turkey in 2007 and again between 2008 and 2010, due to videos insulting Mustafa Kemal Atatürk, the founder of the modern Republic of Turkey. The platform was briefly banned again in 2014 and 2015.

X (formerly Twitter) was banned in 2014 following the circulation of alleged leaked recordings implicating government officials in corruption.

Wikipedia was banned in Turkey from 2017 to 2020 due to entries that accused the country of having links to terrorist organizations.

Additionally, the government has imposed bans on social media and broadcasting in response to disasters, terrorist attacks, and social unrest.

In 2024, the number of blocked web pages in Turkey surpassed one million. Meanwhile, Hüseyin Yayman, head of the Turkish Parliament’s Digital Media Commission, claimed that many Turkish people want TikTok to be banned. “People who see me on the street say, ‘If you shut down TikTok, you will go to heaven,’” Yayman added.


Impact of the Instagram Ban

Following the Instagram ban in Turkey, online searches for VPN services surged. In response, pro-government media began publishing articles warning people about the risks associated with free VPN services.

Professor Yaman Akdeniz, co-founder of the Freedom of Expression Association (İFÖD) and a law professor, said: “This ban must have been requested by either the presidency or a ministry. The BTK is required to obtain approval from a criminal court.”

Akdeniz added, “The censorship imposed on Instagram is arbitrary and cannot be explained or justified. No judge should approve such a request.”

Human Rights Watch and İFÖD stated that the block on Instagram violates the rights to freedom of expression and access to information for millions of users. With 57.1 million users, Turkey ranks fifth worldwide in the number of Instagram users.

The ban had a significant impact on the Turkish economy, as Instagram plays a crucial role in Turkey’s e-commerce landscape, with approximately 10% of the nation’s total online sales being conducted through social media platforms.

According to Buğra Gökçe, head of the Istanbul Planning Agency (IPA), the ban also disrupted the ability of the service sector, including tourism, hospitality, and restaurants, to reach customers. The IPA projects that the ban could lead to a weekly economic loss of approximately USD 396 million.

On August 5, President Recep Tayyip Erdoğan criticized opponents of the ban and used a racial slur to describe them. He claimed they care more about Western interests than Turkey’s sovereignty, stating: “The only purpose of the existence of ‘house negroes,’ who are both opportunists and losers, is to please their owners.”

Less than a week after the Instagram ban, Turkish authorities also prohibited access to the online video game platform Roblox. Ekrem İmamoğlu, the Mayor of Istanbul and a prominent opposition figure, criticized the bans on Instagram and Roblox, stating: “Those who made this decision are ignorant of the new world, the economy, and technology.”

Israeli Response to Turkey’s Instagram Ban


Israeli Foreign Minister Israel Katz criticized Erdoğan, accusing him of turning Turkey into a dictatorship by blocking Instagram. Katz also tagged İmamoğlu in his comments, seemingly attempting to exploit the political polarization in Turkey to his advantage.

İmamoğlu responded by saying: “We have no need to receive lessons on democracy and law from those responsible for the suffering and deaths of countless innocents, including children.”

Katz’s attempt backfired, as despite the political polarization in Turkey, both sides of the spectrum largely voice support for Palestine, though in different ways—Islamists tend to back Hamas, while secularists in Turkey are more aligned with the Palestine Liberation Organization (PLO) or other left-wing Palestinian groups.

How Was the Ban Lifted?

On Saturday, Transport and Infrastructure Minister Abdulkadir Uraloğlu announced that Instagram had accepted Turkey’s conditions. The ban on Instagram was lifted after Meta reportedly agreed to comply with Turkish law and remove content related to certain crimes or terrorist propaganda.

The independent news website YetkinReport noted that Meta had already been publishing transparency reports indicating that Instagram was implementing these measures even before the ban. The latest report was published on July 31, just two days before the platform was blocked.

The nine-day ban was Turkey’s longest on a major social media platform in recent years. Since Instagram continues to ban pro-Hamas content, it appears that little has changed. It remains unclear why Instagram was banned in Turkey in the first place, why the ban was lifted, and what problem, if any, was resolved by imposing the ban.

—–

France 24 Video: “Turkish president slams social media ‘fascism’ amid Instagram battle • FRANCE 24 English ”

]]>
Romney Admits Push to Ban TikTok Is Aimed at Censoring News Out of Gaza https://www.juancole.com/2024/05/romney-admits-censoring.html Tue, 07 May 2024 04:06:59 +0000 https://www.juancole.com/?p=218438 A conversation between Secretary of State Antony Blinken and the Republican senator offered an “incredible historical document” showing how the U.S. views its role in the Middle East.

( Commondreams.org ) – A discussion between U.S. Secretary of State Antony Blinken and Sen. Mitt Romney over the weekend included what one critic called an “incredible mask-off moment,” with the two officials speaking openly about the U.S. government’s long-term attempts to provide public relations work for Israel in defense of its policies in the occupied Palestinian territories—and its push to ban TikTok in order to shut down Americans’ access to unfiltered news about the Israeli assault on Gaza.

At the Sedona Forum in Sedona, Arizona on Friday, the Utah Republican asked Blinken, during the McCain Institute event’s keynote conversation, why Israel’s “PR [has] been so awful” as it has bombarded Gaza since October in retaliation for a Hamas-led attack, killing at least 34,735 Palestinians—the majority women and children—and pushing parts of the enclave into a famine that is expected to spread due to Israel’s blockade.

“The world is screaming about Israel, why aren’t they screaming about Hamas?” asked Romney. “‘Accept a cease-fire, bring home the hostages.’ Instead it’s the other way around, I mean, typically the Israelis are good at PR. What’s happened here? How have they, and we, been so ineffective at communicating the realities there?”

Blinken replied that Americans, two-thirds of whom want the Biden administration to push for a permanent cease-fire and 57% of whom disapprove of President Joe Biden’s approach to the war, are “on an intravenous feed of information with new impulses, inputs every millisecond.”

“And of course the way this has played out on social media has dominated the narrative,” said the secretary of state. “We can’t discount that, but I think it also has a very, very challenging effect on the narrative.”

Romney suggested that banning TikTok would quiet the growing outrage over Israeli atrocities in the United States.

“Some wonder why there was such overwhelming support for us to shut down, potentially, TikTok or other entities of that nature,” said Romney. “If you look at the postings on TikTok and the number of mentions of Palestinians relative to other social media sites, it’s overwhelmingly so among TikTok broadcasts.”

The interview took place amid a growing anti-war movement on college campuses across the U.S. and around the world, with American police forces responding aggressively to protests at which students have demanded higher education institutions divest from companies that contract with Israel and that the U.S. stop funding the Israel Defense Forces.

Right-wing lawmakers and commentators have suggested students have been indoctrinated by content shared on social media platforms including TikTok and Instagram, and wouldn’t be protesting otherwise.

Rep. Mike Lawler (R-N.Y.), who co-sponsored a recent bill to ban TikTok—included in a foreign aid package that Biden signed late last month—said last week that “there has been a coordinated effort off these college campuses, and that you have outside paid agitators and activists.”

“It also highlights exactly why we included the TikTok bill in the foreign supplemental aid package because you’re seeing how these kids are being manipulated by certain groups or entities or countries to foment hate on their behalf and really create a hostile environment here in the U.S.,” said Lawler.

Social media has provided the public with an unvarnished look at the scale of Israel’s attack, with users learning the stories of Gaza residents including six-year-old Hind Rajab, 10-year-old Yazan Kafarneh, and victims who have been found in mass graves and seeing the destruction of hospitals, universities, and other civilian infrastructure.

U.S. college students, however, are far from the only people who have expressed strong opposition to Israel’s slaughter of Palestinian civilians and large-scale destruction of Gaza as it claims to be targeting Hamas.

Human rights groups across the globe have demanded an end to the Biden administration’s support for Israel’s military and called on the U.S. president to use his leverage to end the war. Josep Borrell, the European Union’s high representative for foreign affairs and security policy, in February lambasted Biden and other Western leaders for claiming concern about the safety of Palestinians while continuing to arm Israel, and leaders in Spain and Ireland have led calls for an arms embargo on the country. The United Nations’ top expert on human rights in the occupied Palestinian territories said in March that there are “reasonable grounds” to conclude Israel has committed genocidal acts, two months after the International Court of Justice made a similar statement in an interim ruling.

US Department of State Video: “Secretary Blinken participates in a keynote conversation at the McCain Institute”

Romney and Blinken didn’t mention in their talk whether they believe social media and bad “PR” have pushed international leaders and experts to make similar demands to those of college students.

The conversation, said Intercept journalist Ryan Grim, was an “incredible historical document” showing how the U.S. government views its role in the Middle East—as a government that should “mediate” between Israel and the public to keep people from having “a direct look at what’s happening.”

“Romney’s comments betray a general bipartisan disinterest in engaging Israel’s conduct in Gaza on its own terms, preferring instead to complain about protesters, interrogate university presidents, and, apparently, muse about social media’s role in boosting pro-Palestinian activism,” wrote Ben Metzner at The New Republic. “As Israel moves closer to a catastrophic invasion of Rafah, having already banned Al Jazeera in the country, Romney and Blinken would be wise to consider whether TikTok is the real problem.”

Entrepreneur James Rosen-Birch added that “Mitt Romney flat-out asking Antony Blinken, in public, why the United States is not doing a better job manufacturing consent, is wild.”

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0).

]]>
German Far Right Leader on Trial for Nazi Slogan: “X” Marks the Spot https://www.juancole.com/2024/04/german-speaking-friends.html Thu, 25 Apr 2024 04:15:10 +0000 https://www.juancole.com/?p=218225 Halle an der Saale, Germany (Special to Informed Comment; Feature) –– On the morning of April 18, in front of the district court in Halle, it became evident that not many people had taken up Björn Höcke’s invitation to support him ahead of his trial. Höcke, the leader of the far-right Alternative for Germany (AfD) in the central-eastern state of Thuringia and a power broker at the national level, had taken the unusual step of posting in English on his “X” account (Elon Musk’s rebranding of Twitter) on April 6, inviting people “to come to Halle and witness firsthand the state of civil rights, democracy and the rule of law in Germany.”

Outside the court, at most twenty people could be counted as Höcke supporters at any point during the morning. In their conversations, they complained that the proceedings against Höcke were politically motivated, which had been his message from the very beginning. Meanwhile, around 600 demonstrators had protested against the radical right politician earlier that morning, before the start of the trial. Hearings will continue until mid-May, but it is already clear that the most severe punishment Höcke faces is a fine.

Höcke, who rivals Donald Trump in his mastery of self-victimization, failed to explain in his initial “X” post why he had to appear before a court in Halle. The AfD politician, whom a German court has ruled may legally be described as a fascist, had to answer for his use, on at least two occasions, of the slogan “Alles für Deutschland” (Everything for Germany). The phrase was employed by the SA (“Sturmabteilung,” or Storm Division), the National Socialist paramilitary group. Using National Socialist slogans and symbols is a punishable crime in Germany.

Höcke, a former history teacher, claimed he did not know the origins of the slogan. His repeated use of expressions with strong National Socialist connotations, such as “entartet” (degenerate) or “Volkstod” (death of the nation), in public speeches and in his 2018 book belies this claim. Furthermore, the German sociologist Andreas Kemper established long ago that there are striking parallels between Höcke’s public statements and articles that appeared under the pseudonym Landolf Ladig in neo-Nazi publications more than a decade ago. One of these articles argued that Germany had been forced into a “preventive war” in 1939.

The lack of open support for Höcke in front of the court in Halle was all the more embarrassing because the radical right politician had been given an incredibly powerful loudspeaker by Elon Musk, the billionaire who has owned Twitter/“X” since October 2022. Musk reacted to Höcke’s “X” post, denounced what in his eyes was a restriction on freedom of speech, and asked him, “What did you say?” After Höcke explained he had said “Everything for Germany,” Musk asked why the phrase was illegal. “Because every patriot in Germany is defamed as a Nazi, as Germany has legal texts in its criminal code not found in any other democracy,” replied Höcke. He forgot to add that no other democracy is the successor state of a regime that killed 6 million Jewish people and set the European continent on fire, with up to 20 million deaths in six years in Europe alone.

Al Jazeera English Video: “German far-right politician on trial for alleged use of banned Nazi slogan”

Höcke has made abundantly clear in public statements how he understands Germany’s National Socialist past. He has referred to the Memorial to the Murdered Jews of Europe in Berlin as a “monument of shame” and, when asked to comment on Nazism, said that history is not black-and-white. Elon Musk’s apparent support for Höcke should not come as a surprise given their shared antisemitic and Islamophobic views. The South African businessman has deployed antisemitic tropes against Hungarian-American billionaire and philanthropist George Soros. According to Musk, Soros “wants to erode the very fabric of civilization. Soros hates humanity.” The AfD, like so many other far-right movements around the world, has also targeted Soros. Furthermore, Musk recently espoused the antisemitic conspiracy theory that Jewish communities push “hatred against Whites.” Musk’s Islamophobia certainly does not lag behind: the “X” owner agreed with a far-right blogger who said France had been conquered by Islam. Again, Musk’s Islamophobia is a perfect fit for the AfD. The party was accurately described as having “a manifestly anti-Muslim program” by an independent commission established after a right-wing terrorist killed nine people, who had originally come as migrants, in Hanau in February 2020.

Musk and the AfD have supported each other in the past. In September 2023, the billionaire criticized the German government’s funding of NGOs rescuing migrants in the Mediterranean and called on people to vote for the AfD. Three months later, the co-leader of the AfD, Alice Weidel, said Musk’s takeover of Twitter was good for “freedom of opinion in Germany.” One of the deputy leaders of the AfD group in the German parliament, Beatrix von Storch, has supported Musk in his ongoing confrontation with the Brazilian Justice Alexandre de Moraes. The judge is demanding that “X” close accounts spreading fake news in Brazil. Since then, Musk has become a hero for the Brazilian far right that backs former President Jair Bolsonaro.

The mutual sympathies between Musk and German-speaking far-right radicals also extend to the Austrian political scene. According to Harald Vilimsky, a member of the European Parliament for the Freedom Party of Austria (FPÖ), Musk’s takeover of Twitter represented an end to censorship. The FPÖ, founded in 1955, has a far longer history than the AfD, established in 2013. Their political programs, however, defend similar far-right positions, and both parties are members of the Identity and Democracy Party group in the European Parliament, one of the two far-right groups at the European level.

Meanwhile, in March 2024, Martin Sellner, the leader of the radical right group Identitarian Movement in Austria, was interrupted by the local police while delivering one of his racist speeches in the small Swiss municipality of Tegerfelden, close to Germany. When Sellner posted about the police action against him, Musk replied by asking whether this was legal. Sellner, taking a page from Höcke’s self-victimization, said that “challenging illegal immigration is becoming increasingly riskier than immigrating illegally.” The local police were simply enforcing a legal provision that allows them to force people out of the region if they “behave in a prohibited manner.” Sadly enough, Sellner is used to spreading his racist propaganda with impunity.

Martin Sellner and the Identitarian Movement’s hatred of migrants knows no limits. This transnational group of radicals hired a ship in 2017 to prevent NGOs in the Mediterranean from assisting boats in distress. When the Identitarians ran into technical problems, they were helped by Sea Eye, a German NGO that normally rescues migrants, not radical racists. The Identitarians have directly benefited from Musk’s acquisition of Twitter. After Musk bought the company, Sellner’s account on the social platform, along with that of his Identitarian Movement, was reinstated. Twitter had blocked the accounts in 2020 for violating the rules against the promotion of terrorism and violent extremism that the platform had in place back then. In his first post after his account was reinstated, Sellner explicitly thanked Musk for “making the platform more open again.” Sellner was denied entry to the United States in 2019 because he had received a $1,700 donation from the right-wing terrorist who killed 51 people in two mosques in Christchurch, New Zealand, in March of that year.

In January 2024, the independent German investigative platform Correctiv reported that Sellner had presented his proposals for the deportation of millions of migrants with foreign citizenship, as well as Germans with a migration background, at a secret meeting in November 2023. The gathering in Potsdam, organized by two German businessmen, included Roland Hartwig (at the time the personal aide of AfD co-leader Alice Weidel) and Ulrich Siegmund, the AfD parliamentary leader in Saxony-Anhalt. Some members of the “Werteunion” (Values Union), an ultra-conservative group within the center-right CDU, were also in attendance. The findings by Correctiv finally led the CDU to cut its ties to the “Werteunion”.

The lack of open displays of support for Höcke in Halle last week was comforting. Even more positive were the mass protests against the far-right politician and the AfD in front of the court. However, recent polls in both Germany and Austria are cause for great concern. The AfD would currently receive around 18% of the vote and finish second in an election to the German parliament. Meanwhile, its Austrian counterpart, the FPÖ, would win close to 30% of the national vote and emerge as the strongest party. Austria will vote this autumn, whereas elections in Germany are not due until the end of 2025.

In both Germany and Austria, as well as in other countries such as the United States and Brazil, the far right is benefiting from Musk’s support and open-door policy to radicals on “X.” Needless to say, though, Musk is just offering a new platform to very old ideas. The far right’s threat would hardly be less serious if the billionaire had a sudden political conversion. What to do, then? One of the banners at the demonstration against Höcke in Halle pointed to the holistic approach that will be needed to counter the far right. The banner read “AfD Stoppen! Juristisch, Politisch, Gesellschaftlich.” In English: “Stop the AfD! Judicially, Politically, Socially.”

 

]]>
Social Media Users say their Palestine Content is being Shadow-Banned — How to Know if it’s Happening to You https://www.juancole.com/2024/02/palestine-content-happening.html Fri, 23 Feb 2024 05:04:45 +0000 https://www.juancole.com/?p=217238 By Carolina Are, Northumbria University, Newcastle | –

Imagine you share an Instagram post about an upcoming protest, but none of your hundreds of followers like it. Are none of your friends interested in it? Or have you been shadow banned?

Social media can be useful for political activists hoping to share information, calls to action and messages of solidarity. But throughout Israel’s war on Gaza, social media users have suspected they are being censored through “shadow banning” for sharing content about Palestine.

Shadow banning describes loss of visibility, low engagement and poor account growth on platforms like Instagram, TikTok and X (formerly Twitter). Users who believe they are shadow banned suspect platforms may be demoting or not recommending their content and profiles to the main discovery feeds. People are not notified of shadow banning: all they see is the poor engagement they are getting.

Human Rights Watch, an international human rights advocacy non-governmental organisation, has recently documented what it calls “systemic censorship” of Palestine content on Facebook and Instagram. After several accusations of shadow banning, Meta (Facebook and Instagram’s parent company) argued the issue was due to a “bug” and “had nothing to do with the subject matter of the content”.

I have been observing shadow bans both as a researcher and social media user since 2019. In addition to my work as an academic, I am a pole dancer and pole dance instructor. Instagram directly apologised to me and other pole dancers in 2019, saying they blocked a number of the hashtags we use “in error”. Based on my own experience, I conducted and published one of the very first academic studies on this practice.

Why platforms shadow ban

Content moderation is usually automated – carried out by algorithms and artificial intelligence. These systems may also, inadvertently or by design, pick up “borderline” controversial content when moderating at scale.


Photo by Ian Hutchinson on Unsplash

Most platforms are based in the US and govern even global content according to US law and values. Shadow banning is a case in point, typically targeting sex work, nudity and sexual expression prohibited by platforms’ community guidelines.

Moderation of nudity and sexuality has become more stringent since 2018, after the introduction of two US laws, the Fight Online Sex Trafficking Act (Fosta) and Stop Enabling Sex Trafficking Act (Sesta), that aimed to crack down on online sex trafficking.

The laws followed campaigns by anti-pornography coalitions and made online platforms legally liable for enabling sex trafficking (a crime) and sex work (a job). Fearing legal action, platforms began over-censoring any content featuring nudity and sexuality around the world, including of legal sex work, to avoid breaching Fosta-Sesta.

Although censorship of nudity and sex work is heralded as a means to protect children and victims of non-consensual image sharing, it can have serious consequences for the livelihoods and wellbeing of sex workers and adult content creators, as well as for freedom of expression.

Platforms’ responses to these laws should have been a warning about what was to come for political speech.

Social media users reported conversations and information about Black Lives Matter protests were shadowbanned in 2020. Now journalistic, activist and fact-checking content about Palestine also appears to be affected by this censorship technique.

Platforms are unlikely to admit to a shadow ban or to bias in their content moderation. But their stringent moderation of terrorism and violent content may mean that posts about Palestine that are neither incitement to violence nor terror-related get caught in censorship’s net.

How I proved I was shadow banned

For most social media users, shadow banning is difficult to prove. But as a researcher and a former social media manager, I was able to show it was happening to me.

As my passion for pole dancing (and posts about it) grew, I kept a record of my reach and follower numbers over several years. While my skills were improving and my follower count was growing, I noticed my posts were receiving fewer views. This decline came shortly after Fosta-Sesta was approved.

It wasn’t just me. Other pole dancers noticed that content from our favourite dancers was no longer appearing in our Instagram discovery feeds. Shadowbanning appeared to also apply to swathes of pole-dancing-related hashtags.

I was also able to show that when content surrounding one hashtag is censored, algorithms restrict similar content and words. This is one reason why some creators use “algospeak”, editing content to trick the algorithm into not picking up words it would normally censor, as seen in anti-vaccine content throughout the pandemic.

Check if you are being shadow banned

TikTok and Twitter do not notify users that their account is shadow banned, but, as of 2022, Instagram does. By checking your “account status” in the app’s settings, you can see if your content has been marked as “non-recommendable” due to potential violations of Instagram’s content rules. This is also noticeable if other users have to type your full profile name for you to appear in search. In short, you are harder to find. In August 2023, X owner Elon Musk said that the company was working on a way for users to see if they had been affected by shadow bans, but no such function has been introduced. (The Conversation has contacted X for comment.)

The ability to see and appeal a shadow ban is a positive change, but mainly a cosmetic tweak to a freedom of expression problem that mostly targets marginalised groups. While Instagram may now be disclosing its decisions, the effect is the same: users posting about nudity, LGBTQ+ expression, protests and Palestine are often the ones who claim they are shadow banned.

Social media platforms are not just for fun: they are a source of work and political organising, and a way to spread important information to a large audience. When these companies censor content, it can affect the mental health and the livelihoods of the people who use them.

These latest instances of shadow banning show that platforms can pick a side in active crises, and may affect public opinion by hiding or showing certain content. This power over what is visible and what is not should concern us all.

Carolina Are, Innovation Fellow, Northumbria University, Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
Many thanks – We Reached our Fundraising Goal — Thanks to You https://www.juancole.com/2024/01/thanks-reached-fundraising.html Mon, 01 Jan 2024 05:04:22 +0000 https://www.juancole.com/?p=216310 Many thanks to all of you who donated and who support the site by sharing our articles with your circle — by email, on social media or by word of mouth. We’re proud to say that you ensured that we reached our goal this year.

Remember, though, that social media are shadowbanning sites that deal with issues like Palestine and are not a reliable way to get Informed Comment content.

It is more important than ever not only to support the site if you like its news analysis but to sign up for delivery of the daily postings by email so you don’t miss even one. Also, do share the postings with friends by email, since that is the one part of the internet that the billionaires haven’t ruined yet.

I asked the Wombo Dream app to generate an image expressing thanks to you for reading this site, and here’s what the robot came up with:

May 2024 bring you and yours many good things, and all of us better news about the Middle East.

]]>
Meta (Facebook and Instagram) is Systemically Censoring Palestine Content https://www.juancole.com/2023/12/instagram-systemically-censoring.html Sat, 23 Dec 2023 05:06:51 +0000 https://www.juancole.com/?p=216106

Overhaul Flawed Policies; Improve Transparency

Human Rights Watch – (New York) – Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report released today. The 51-page report, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” documents a pattern of undue removal and suppression of protected speech including peaceful expression in support of Palestine and public debate about Palestinian human rights. Human Rights Watch found that the problem stems from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at Human Rights Watch. “Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

Human Rights Watch reviewed 1,050 cases of online censorship from over 60 countries. Though not a representative sample, the cases are consistent with years of reporting and advocacy by Palestinian, regional, and international human rights organizations detailing Meta’s censorship of content supporting Palestinians.

After the Hamas-led attack in Israel on October 7, 2023, which killed 1,200 people, mostly civilians, according to Israeli officials, Israeli attacks in Gaza have killed around 20,000 Palestinians, according to the Gaza Ministry of Health. Unlawful Israeli restrictions on humanitarian aid have contributed to an ongoing humanitarian catastrophe for Gaza’s 2.2 million people, nearly half of whom are children.



Human Rights Watch identified six key patterns of censorship, each recurring in at least 100 instances: content removals, suspension or deletion of accounts, inability to engage with content, inability to follow or tag accounts, restrictions on the use of features such as Instagram/Facebook Live, and “shadow banning,” a term denoting a significant decrease in the visibility of an individual’s posts, stories, or account without notification. In over 300 cases, users were unable to appeal content or account removal because the appeal mechanism malfunctioned, leaving them with no effective access to a remedy.

In hundreds of the documented cases, Meta invoked its “Dangerous Organizations and Individuals” (DOI) policy, which fully incorporates the United States’ designated lists of “terrorist organizations.” Meta has cited these lists and applied them sweepingly to restrict legitimate speech around hostilities between Israel and Palestinian armed groups.

Meta also misapplied its policies on violent and graphic content, violence and incitement, hate speech, and nudity and sexual activity. It has inconsistently applied its “newsworthy allowance” policy, removing dozens of pieces of content documenting Palestinian injury and death that have news value, Human Rights Watch said.

Meta is aware that its enforcement of these policies is flawed. In a 2021 report, Human Rights Watch documented Facebook’s censorship of the discussion of rights issues pertaining to Israel and Palestine and warned that Meta was “silencing many people arbitrarily and without explanation.”

An independent investigation conducted by Business for Social Responsibility and commissioned by Meta found that the company’s content moderation in 2021 “appear[s] to have had an adverse human rights impact on the rights of Palestinian users,” adversely affecting “the ability of Palestinians to share information and insights about their experiences as they occurred.”

In 2022, in response to the investigation’s recommendations as well as guidance by Meta’s Oversight Board, Meta made a commitment to make a series of changes to its policies and their enforcement in content moderation. Almost two years later, though, Meta has not carried out its commitments, and the company has failed to meet its human rights responsibilities, Human Rights Watch found. Meta’s broken promises have replicated and amplified past patterns of abuse.

Human Rights Watch shared its findings with Meta and solicited Meta’s perspective. In response, Meta cited its human rights responsibility and core human rights principles as guiding its “immediate crisis response measures” since October 7.

To meet its human rights due diligence responsibilities, Meta should align its content moderation policies and practices with international human rights standards, ensuring that decisions to take content down are transparent, consistent, and not overly broad or biased.

Meta should permit protected expression, including about human rights abuses and political movements, on its platforms, Human Rights Watch said. It should begin by overhauling its “dangerous organizations and individuals” policy to make it consistent with international human rights standards. Meta should audit its “newsworthy allowance” policy to ensure that it does not remove content that is in the public interest and should ensure its equitable and non-discriminatory application. It should also conduct due diligence on the human rights impact of temporary changes to its recommendation algorithms it introduced in response to the recent hostilities.

“Instead of tired apologies and empty promises, Meta should demonstrate that it is serious about addressing Palestine-related censorship once and for all by taking concrete steps toward transparency and remediation,” Brown said.

Human Rights Watch

]]>