2019:Meetups/Disinformation
This is an Accepted submission for the Research space at Wikimania 2019.
Curie, Floor 7, Aula Magna · Sunday 13:10 - 13:55
Over the past days, we have noticed many small-group conversations about disinformation. We (Research and Policy at the Wikimedia Foundation) would like to invite you to a 45-minute session on disinformation. The goal of this short session is to give us all a sense of what each of us is thinking about when it comes to disinformation.
When: Sunday, August 18, 13:10-13:55
Where: Curie, Aula Magna, Floor 7.
Interested: If you'd like to attend, you can sign up below.
- LZia (WMF) (talk) 10:48, 17 August 2019 (UTC)
- SSiy (WMF) (talk) 08:35, 18 August 2019 (UTC)
- FULBERT (talk) 00:25, 18 August 2019 (UTC)
- Liz (talk) 04:52, 18 August 2019 (UTC)
- CS natematias (talk) 11:16, 18 August 2019 (UTC) (see my research on the effect on humans and algorithms of nudging people to engage in crowdsourced fact-checking)
Meeting notes
This is how the meeting went.
- The definition of disinformation and the scope of the meeting were set out: "non-accidental misleading information that is likely to create false beliefs". The goal of the meeting was "to arrive at a shared understanding of how we each think about the topic of disinformation and to get a pulse of the conversations."
- All participants present in the meetup at the start time introduced themselves by saying their name/username and what they do in the Wikimedia Movement or why they are interested in the disinformation discussion.
- We asked the attendees to share what they have in mind when they think about disinformation. The following topics were discussed (the topics are rephrased here; those who proposed them may want to improve or update the text below if their point is not fully captured):
  - The challenge of taking action on disinformation when the content is not encyclopedic and spreads fast. For example, on instant messaging apps, disinformation spreads very quickly, and the topics discussed may not be ones Wikipedia has content on that could counter it. Some governments are interested in addressing such disinformation through Wikipedia.
  - There are disinformation campaigns that start outside the Wikimedia projects and can eventually have an impact on them. For example, English Wikipedia may be affected by specific campaigns on reddit or Twitter. It would be good to have systems that monitor such external campaigns and notify editors when an attempt to change content on Wikipedia is predicted (a minimal sketch of one possible monitoring signal appears after these notes). (Note: there was a question about the threshold of activity that editors can handle in case of a spike. A few comments noted that spikes have traditionally been easier to handle than disinformation that spreads gradually across pages: in the case of a spike, one knows that a focused effort is needed, and it can usually be addressed if enough resources are spent.)
  - Be aware of the Credibility Coalition and their work. Check out Cite Unseen. WikiConference North America 2019 (Nov. 8-11) will be focused on reliability and credibility.
  - Philanthropic organizations are putting a lot of good focus on the challenges big tech companies face when it comes to disinformation. While that focus is welcome and change is needed in that space, we could also benefit from working with these organizations to do a better job of combating disinformation on the Wikimedia projects.
  - A challenge was brought up: on some projects, editors are in different regions of the world, and those regions may have access to secondary sources that at times contradict each other. For example, Chinese Wikipedia has editors from at least three regions, and in the current environment they have access to different secondary sources with conflicting information. In some of these cases there is a need to get at the truth, as only one answer can be correct. How do we assess the truth, especially given that some of the existing guidelines conflict with the search for "truth"? One approach briefly discussed was to build systems that can assess the reliability of a source for the specific topics it is being used for (with the understanding that some sources are reliable for certain topics but not for others). There was also a discussion that the reliability of sources gets questioned relatively frequently, and the question is whether machines can help here to save editors' time.
  - Neutrality under stress: how to remain neutral under extreme circumstances or when editors are under a lot of external pressure.
  - Civil society resilience and digital literacy: while it is great to build more resilience into our systems and community of editors and to empower them to combat disinformation more effectively, the answer to the disinformation question does not lie only on our platforms. We need to invest in building stronger civil society resilience and in education programs that help people understand how to use content on the Web and how to assess its reliability.
  - Slow changes in articles are hard to detect.
- WMF Research gave an update on the current state of research in this space and the team's intentions and directions. In a nutshell: we are doing research on understanding patrolling; a literature review of disinformation (to understand what is solved, what remains an open question, and what is too hard to address for now); and research on identifying statements in need of citations, citation usage, and reader trust. The next immediate step, after understanding the literature and patrolling, is to start brainstorming and prioritizing what kinds of machine learning models we can build to help the existing community of patrollers in their work. There is a lot more to do in this space, of course, and we are in the early days of our thinking and direction.
- What are some of the current mechanisms and efforts that are working well?
  - An effort on English Wikipedia to summarize past conversations about sources and the decisions made, so that we don't have to restart conversations about the reliability of sources frequently (or, when we do, we can point to the key points and decisions from earlier discussions). This is a time-saver. (Check out the list of perennial sources for more info.)
  - Monitoring the social media activity of certain accounts (for example, celebrity accounts) and notifying those who can take action if we see a pattern that may result in damage to the projects. There have been examples of chapters getting involved and getting on the phone with the people behind these accounts to mitigate the spread of disinformation on Wikimedia projects.
- Note: it's great that we're talking about all the things that work and don't work; however, it's important to point out that the current community of volunteers is doing a great job of defending the projects against the spread of disinformation. We should support and encourage this existing community in their work. (Lots of nods for this point. :)
- End of the meeting: what's next? How do we stay in touch? space.wmflabs.org was suggested as a platform to use for further conversations. Leila signed up to look into it and get back to the group.
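A minimal sketch related to the point above about monitoring off-wiki campaigns, assuming that a sudden spike in an article's pageviews can hint at outside attention (for example, a reddit or Twitter thread). It pulls daily pageview counts from the public Wikimedia Pageviews REST API and flags unusually high days. The article name, time window, and z-score threshold are illustrative placeholders, not part of any existing system; a real early-warning tool would also need edit-activity signals, off-wiki data, and human review.

```python
# Sketch only: flag days where an article's pageviews sit far above the
# recent baseline, as a cheap hint of possible off-wiki attention.
import statistics
import requests

# Public Wikimedia Pageviews REST API (per-article, daily granularity).
PAGEVIEWS_API = (
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
    "{project}/all-access/user/{article}/daily/{start}/{end}"
)

def daily_views(project, article, start, end):
    """Return a list of (YYYYMMDD, views) pairs for the given article."""
    url = PAGEVIEWS_API.format(project=project, article=article, start=start, end=end)
    resp = requests.get(url, headers={"User-Agent": "disinfo-meetup-sketch/0.1"})
    resp.raise_for_status()
    return [(item["timestamp"][:8], item["views"]) for item in resp.json()["items"]]

def flag_spikes(series, z_threshold=3.0):
    """Return the days whose views exceed the mean by z_threshold standard deviations."""
    views = [v for _, v in series]
    mean = statistics.mean(views)
    stdev = statistics.pstdev(views) or 1.0  # avoid division by zero on a flat series
    return [(day, v) for day, v in series if (v - mean) / stdev > z_threshold]

if __name__ == "__main__":
    # Placeholder article and dates; swap in pages patrollers actually care about.
    series = daily_views("en.wikipedia", "Example", "20190701", "20190815")
    for day, views in flag_spikes(series):
        print(f"{day}: {views} views (possible off-wiki attention spike)")
```

The z-score over a short baseline is deliberately crude; the point is only to show how little plumbing is needed to start surfacing candidate pages for human patrollers to look at.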