Sheldon Himelfarb featured in Foreign Press Correspondents USA: The Global Information Environment is at Risk. These Two Experts Are Leading Efforts to Protect It.

Alan Herrera | September 12, 2023 | Foreign Press Correspondents USA

At a time of significant uncertainty regarding the health of our Global Information Environment (GIE)—defined as “the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information”—one organization has taken a scientific consensus-building approach to combat the proliferation of mis- and disinformation around the globe. Thanks to its efforts, we now have a clearer understanding of the largest threats facing our information environment.

The International Panel for the Information Environment (IPIE) recently published a first-of-its-kind report that highlights the concerns of the world’s top information environment researchers. Based on a collaborative survey of 289 scientists conducted over a four-week period this past spring, Trends in the Global Information Environment: 2023 Expert Survey Results includes a corresponding summary for policymakers, titled Expert Survey on the Global Information Environment 2023: Lessons for Technology Policy and Design. As detailed in both, 66 percent of the experts believe the availability of accurate information is the most important feature of a healthy information environment. And yet a majority (54 percent) believe the information environment in their countries of expertise will worsen; just 12 percent believe the situation will improve. According to exactly half of the respondents, a healthy information environment incorporates diverse voices—who are often easy targets for bad actors.

IPIE’s co-founders, Sheldon Himelfarb, Executive Director of the IPIE Secretariat, and Philip Howard, a Professor at the University of Oxford and the IPIE’s Chair, sat for an interview with me to discuss their latest research. Both stressed the importance of global research. Without it, the lens on misinformation and the other perils threatening our GIE is just too narrow.

“The international quality of the IPIE means ensuring a strong representation of experts around the world, including the Global South, for instance,” said Himelfarb.

That research offers a comprehensive look at the dangers faced by “accurate information environments,” for which the IPIE serves as both defender and advocate. Whether the infrastructure of that environment delivers “the good stuff” or “junk,” says Howard, comes down to political will—and politicians bear a major part of the blame for fostering a culture of unaccountability. Consider that politicians were flagged as a threat by nearly a third (31 percent) of the experts who study democracies, an alarm bell for the sanctity of healthy information environments.

Artificial intelligence also tests the strength and resilience of information environments, experts say. Content moderation policies are sorely lacking, too, with 66 percent of respondents saying ill-conceived content moderation reinforces a culture of unaccountability. On the subject of AI-powered content moderation systems, 55 percent said poorly designed ones contribute to the wider issue. Nor should the influence of social media platforms be underestimated; they were flagged as a threat by 33 percent of experts who study both democracies and autocracies. But the experts are divided on what they perceive to be the biggest threats to healthy information environments. Those studying autocracies “perceive serious threats from national governments, state-backed media, and local news outlets” and “misinformation on gender issues,” according to the report. In contrast, those who study democracies are more “concerned about foreign interference” and “misinformation on climate science and the environment.”

Nor have platforms been transparent with either independent researchers or journalists about the impact of their algorithms on information environments: almost three-quarters (72 percent) of the international research community flag the lack of access to platform data as the major barrier to advancing our understanding of the global information environment.

Howard and Himelfarb have much to say. What brought us together for our talk over Labor Day weekend was a piece I wrote in May about IPIE’s inaugural report, which addressed the efficacy of content moderation as a primary strategy and highlighted alternative evidence-based tactics. 

That report served as my introduction to the organization, which will in the coming weeks present an educational program about its initiative to the community of foreign journalists who comprise the Association of Foreign Press Correspondents in the United States (AFPC-USA). Indeed, the international quality of our own organization offers a valuable opportunity for journalists to disseminate this information to audiences in their native countries, many of which host less resilient information environments. That no nation is insulated from the threats the researchers examined in painstaking detail underscores the importance of this research—to say nothing of the journalists tasked with imparting it.

The following interview has been condensed and edited for clarity.

What specific criteria do experts use to define an "accurate" information environment?

HOWARD: So I think we're finding that there's quite a number of different definitions and terms people use when they speak of a healthy information environment. And one of the goals of the IPIE is to try to bring some coherence to that, right? To get people using some standard definitions to develop a shared understanding of what a healthy information environment is. Right now, I'd say most people, most researchers broadly define it as a system of supplies of information that are trustworthy [and] that are accurate, at least in the sense of being easily fact-checked and sourced properly. Sometimes that involves sources of information that are governments. Sometimes it's civil society groups and sometimes it's professional journalists. 

So there's many different kinds of people who can produce high quality information and then there's the question of the infrastructure. So does the infrastructure actually deliver the good stuff or does it deliver junk? And that's kind of where our focus is because a growing amount of research demonstrates that social media platforms deliver junk, right? And they deliver the worst of the stuff. In fact, some of them in some countries actively cut out professional journalistic content so that it's not in circulation on the platforms. This is what happened to Canada over the summer [in regard to the ongoing wildfires].

How can diverse voices be encouraged and protected in the information environment?

HIMELFARB: This goes back to that initial point we were all discussing about the international quality of the IPIE. It is a high priority for us to make sure that our member affiliates actually have strong representation from the Global South, from diverse voices. But you have to work at it. I mean, the answer is it takes a lot of work, because mainstream media is narrower than it should be. And we see that because of the disparity between the language of the research and the language of the information environment. Mis- and disinformation is happening in a range of other languages besides English, and yet English is getting the preponderance of the research. So we're working at that. But it is hard.

In what ways do social media platforms pose threats to a healthy information environment?

HOWARD: I think there are broadly two kinds of threats. There are ways in which the distribution of social media content privileges the junk, the stuff that we sometimes call internet harms. So structurally, social media is designed to push around stuff that's sensational, that uses potty-mouth words or has titles in all capital letters. The sensational clickbait stuff, which helps drive eyeballs, helps drive attention, while regular news stories or positive news stories or positive public policy ideas don't get as much traction, because social media is designed to be predominantly negative and sensational. And then there are various other forms of misinformation that help generate traffic and flow. So the first way is that it produces and circulates content that can really misshape or change the perceptions of voters or citizens.

The second thing is that there's a lot about social media operations that we don't fully understand, so the algorithms themselves that deliver our content or make choices about us are out of sight. There's not a lot of transparency there. [Social media companies] don't share data with independent researchers. They don't share data with journalists. And so the second threat is the risk that there are systems of distributing the information that we don't fully understand and aren't able to audit in a sensible way.

What actions by politicians are seen as most damaging to the information environment?

HOWARD: What we’re releasing today is an expert survey. This is a survey of hundreds of misinformation experts around the world, and one of the things they identify is that politicians generate significant amounts of misinformation. And this is beyond politicians just bending the truth or exaggerating in their regular stump speeches. Increasingly, we find political parties and major candidates for elected office are spending, as part of their campaign budgets, money on advertising that is below the radar of elections officials. That isn't captured by the elections administrators. There's no public record of it. It's unaccounted-for advertising. And so there are significant amounts of money that go into developing new techniques for manipulating voters. And it's regular politicians and mainstream political parties that spend this way. So I think that's the angle, that's the way that experts around the world are worried about how politicians are contributing to the problem.

What are the key issues with poorly designed AI-powered content moderation systems, and how do they impact the information environment?

HOWARD: Right now, AI systems have a lot of impact on content moderation, so they determine what posts will end up in your feed. When you look at your social media feed, you're not actually seeing everything that's produced by everyone whom you've chosen to follow. You're seeing a selection. And that process of selection usually involves large language models, complex machine learning systems that evaluate what you've clicked on before, the preferences you may have indicated, your most recent search terms from the last time you were at the Google search bar. That's what helps generate the makeup of your feed.

We think there are moments, there are times in public life where it’s been possible to game that feed, where outsiders can mess with it. In a moment of health crisis, or a crisis of wildfires, serious emergencies, it's not clear that the highest quality information will arrive in your inbox or arrive in your social media feed in a way that's useful to you. There are certainly a lot of exciting AI applications out there, but the risk is that there'll be new and interesting ways to manipulate our public conversations. Without some kind of system for auditing or evaluating what those manipulation systems are like, it's a real risk to public conversation and public debate.

HIMELFARB: It’s important to note AI has been a feature of online discourse for years, and AI and algorithmic bias have been a problem for a long time. So I think that we may be on the threshold of a time where we’re able to open the black box. That certainly is going to be one of the measures of our success at the IPIE. Are we able to reach a point where, instead of the trend of the last five years, where the platforms have been shrinking researchers' access to the back end, we are able to show that there's great virtue for social good, for social progress, for humanity, if we have access to those algorithms and can understand the large language models better?

I think that's possibly the good news here. That we're actually having a healthy discourse in public life about the unintended consequences of AI is promising. I think because we've run headfirst into a lot of technologies—we ran headfirst into big data and social media without thinking hard enough about some of those implications—now we realize AI has got so much downside and so much upside. But we're not going to realize the potential on either side without better access for research communities like the IPIE.

What factors contribute to the majority of researchers believing that the information environment in their countries will worsen in 2024?

HOWARD: That's a very topical question, and I think there's two answers. 

The first is that in the last year there have been changes at Twitter and Facebook, particularly Twitter, that have made it clear that the company is no longer interested in engaging with independent researchers, journalists, or regulators. [Elon Musk] has said that they're not participating in the European Commission's voluntary code of conduct. All these independent journalists and civil society projects that were dependent on a flow of data from Twitter, they're not getting that data anymore. So much of what we know about these problems comes from research papers based on Twitter data, and we don’t have that stuff anymore. That’s made experts really nervous; the well of data is drying up.

And then we’re talking a lot about AI. There are more and more examples of large language models, ChatGPT-type applications, that generate scary stuff. They generate deepfake videos or complex ad campaigns and vast amounts of misinformation really quickly. Researchers working in the field are worried about what ChatGPT is going to do to the average person's ability to produce large amounts of junk text and push it out over social media. So just in the last year, it's the behavior of the firms that's changed, and it's the public unleashing of new AI technologies that's got people anxious.

HIMELFARB: The track record hasn’t been good for the last three to five years. I am going to bring it back to the IPIE and the reason why we've had 300 research scientists from around the world jump in, and why we could easily have another 500 more by this time next year: there is a recognition among the research community that regulators and legislators have had some tools. They've had levers that they could use, but they have not used them. Anybody who works in this field closely sees that. In the U.S., the Section 230 conversation about regulating the social companies has been going on for over a decade.

The same legislators whose job it is to actually do this kind of thing, to pass legislation like that, they get angry with the social companies and yet they themselves are unwilling or unable to design, develop, and pass the legislation necessary. I think there's also a history over the last five years of watching regulators and legislators around the world not doing what needed to be done, so that would make people pretty pessimistic about the next year.

I think it's fair to say, and Phil, tell me if I'm wrong here, but I think it's fair to say that that snapshot would have looked pretty similar across the last couple of years.

HOWARD: Yeah, I think that's true. I think there was a time in 2018, 2019, when the European Commission was developing a voluntary code of conduct and it looked like it was going to have some teeth for a while. It wasn't going to be voluntary; it was going to be a code of conduct with fines. And that's a point at which a lot of experts were excited about the prospects for getting some progress. But like I said, it became voluntary, and firms have been blowing it out of the water by not really participating as promised.

HIMELFARB: I don't think we should discount either the way the world has been operating, in the sense that governments have been using information as a tool of asymmetric warfare in a major, major way by intruding in other people’s elections. Phil's lab surfaced the real extent to which there was foreign interference in our elections, and then virtually every other issue: climate misinformation, health misinformation, right up to the Ukraine conflict. Honestly, it's hard to be optimistic at this point in time. It really is, objectively. It's very hard to be optimistic, which is really the motivation behind creating the IPIE. The clock is ticking. We need to really address these issues or we are creating an existential threat of our own making.

Can you elaborate on the differing concerns between experts studying autocracies and democracies in relation to the information environment?

HOWARD: To some degree, that's a function of the environment that the researchers were working in. If they work in an authoritarian regime, they’re worried about the things that'll be directly impacting them. The researchers who were based in dictatorships were most worried about their own national governments, really state-backed media enterprises, the local news outlets that can drive hate speech or can result in the targeting of academics. For most of the researchers based in democracies, the worry was about foreign governments attacking voters or journalists or experts in the country. There was more concern about misinformation regarding the LGBTQ community, particularly in authoritarian regimes. Unfortunately, that's one of the softer targets for a lot of governments. It's really easy to pick on those communities. I think for the researchers working in democracies, the focus was more on climate change. Again, that might just be because there was more misinformation about climate science circulating in democracies.

HIMELFARB: These regional and local differences point us directly to the need for an organization like the IPIE, modeled on the IPCC. … It is happening in many different manifestations around the world. We have to ensure that the work is getting done, but also collect it, aggregate it, and distill it. I think our first studies show the meta-analysis was important [in capturing] what people are saying. Publishing and aggregating that and distilling it for the greatest insights is absolutely critical. That's what we have to be doing right now: organizing the research scientists along these lines.

How can the expertise of data scientists, academics, and researchers be harnessed to address these challenges?

HOWARD: If you can get everyone to agree about what the problems are and what the solutions are, then we can go with a united voice to regulators and say, “You need to do 1, 2, and 3,” and we can go to the technology firms and say, “If you design A, B, and C, you’re going to improve public life. Everybody's going to trust your technology, and you're much more likely to leave the world better off at the end of the year than it is at the moment.” Right now we can’t do that stuff. The data isn’t high quality. It's really hard to be able to stand by causal claims. We don't have enough of a sense of, “If you do this, this will be the outcome.” …

We need to do a better job at generating transportable knowledge and actually explaining the causal mechanisms, so that when a regulator does [ask] what we should do, we actually have some answers. There are dozens of different answers, half of which are generated by industry and involve self-regulation. There’s got to be a balance that does represent industry interests [and] is close to [the] science, where the evidence of what can be done does reflect civic norms. The mission of the IPIE is to work out what those points of consensus are so that the next time regulators ask, we have some coherent answers.

What policy recommendations are experts making to address the identified challenges?

HOWARD: This is a good question and I want to put it in my back pocket. I want you to ask us that a year from now. Right now, the answer is content labeling and providing accurate information. These are the two things you can do to a social media platform to make it more trustworthy, to get people to remember accurate stuff and get them to learn from social media. It's not clear yet that you can enforce that. Your question is actually about what the policies should be. What we've just been able to do is say, “What policies are likely to work?”

I think that's a different question about whether you should legislate that Facebook and Twitter would provide these mechanisms. We might eventually have that answer too, but for the moment, we've identified the ingredients for improving public life and different countries are going to approach this in different ways to make the social media platforms comply.

HIMELFARB: It's a great question that really points to the importance of the first release of the material from the IPIE, because the focus of policymakers has been so overwhelmingly about yanking content down as quickly as possible. What we concluded [regarding] content moderation from looking at 5,000 research studies is not that it isn't the answer, but that you really should be looking in a different direction as well: there is a consensus around content labeling that there is not around content moderation as an effective tool.

That tells us two things: One is that we may have been approaching this issue like that old saying, that if all you have is a hammer, every problem starts to look like a nail. That may have been the approach in the past, and if we’re looking at effective policymaking, we can’t have that. We've got to approach it in a smarter, broader, deeper way. That points back to the need for all those data scientists, computational scientists, neuroscientists, and anthropologists we need to involve. That’s the complexity of the information environment. The magnitude of the problem is enormous, and it requires a lot of people, a lot of people rowing in the same direction on this. And this, by the way, is a carbon copy of what happened in climate [discussions] for decades.

We’re really alarmed about this, but at the same time we are really encouraged, because there is a way out of this and we think that we’re approaching it. Things are changing. I don’t want this all to be bad news. The world may be dissolving into a morass of mis- and disinformation, but we at the IPIE do see a way forward, because we're working with 300 research scientists who are all doing brilliant things in their own domains. Their work is exceptional.
