The first edition of CODAI will take place on 20 October 2024, co-located with ECAI 2024 (19-24 October 2024) in Santiago de Compostela.
Workshop proceedings are available open access at CEUR-WS, Volume 3782.
Social media platforms, designed primarily to allow users to create and share content, have become integral to modern communication: they enable people to connect with friends and family, and to broadcast information to a wider audience. On one hand, these platforms facilitate discussion in an open and free environment. On the other, new societal issues have emerged on them, and among these the problem of misinformation has been especially prevalent. Misinformation is an umbrella term that encompasses fake news, hoaxes, and rumors, among others. Strictly speaking, misinformation refers to the unintentional spread of inauthentic information, whereas disinformation denotes the spread of inauthentic information with malicious intent.
Initially, researchers focused mainly on identifying and characterizing misinformation using text-based methods, from traditional to advanced NLP techniques. However, with advances in technology and the availability of various AI tools, (mis)information has increasingly become multimodal: for example, an image with incorrect text embedded in it, or a morphed video with manipulated audio. Moreover, misinformation affects individuals and communities across domains such as medicine, politics, entertainment, and business. Countering it therefore calls for combining forces across disciplines: computer scientists need to work with domain specialists, and a psychologist's input can be vital for understanding why misinformation spreads. In short, a holistic view is needed to counter the menace of misinformation on online social media platforms.
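To make the "traditional text-based techniques" concrete, here is a minimal, purely illustrative sketch of a bag-of-words Naive Bayes classifier, the kind of baseline that early text-based misinformation detection work built on. All texts, labels, and function names below are invented for illustration and are not from any workshop paper.

```python
# Toy sketch (illustrative only): a bag-of-words Naive Bayes classifier,
# a classic text-based baseline for misinformation detection.
# All training texts and labels here are invented examples.
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label). Returns per-label word counts and label priors."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy data for demonstration only.
training = [
    ("miracle cure doctors hate this secret trick", "misinfo"),
    ("shocking hoax exposed share before deleted", "misinfo"),
    ("study published in peer reviewed journal", "credible"),
    ("official statement released by the ministry", "credible"),
]
wc, lc = train(training)
print(classify("secret miracle trick doctors hate", wc, lc))  # -> misinfo
```

Such lexical baselines are exactly what the more recent transformer-based and multimodal approaches discussed at the workshop aim to improve upon.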
The International Workshop “CODAI: COuntering Disinformation with Artificial Intelligence” provides a platform for researchers from various domains to come together, present their work, and discuss ideas that can help counter the spread of misinformation.
Call for Papers
Areas of interest include, but are not limited to, the following:
Network level
- Information diffusion models for understanding and thwarting the spread of low-quality information;
- Understanding and detection of disinformation;
- Characterization and detection of coordinated inauthentic behavior;
- Novel techniques for detecting malicious accounts (e.g., bots, cyborgs and trolls);
- Graph mining and network analysis approaches for studying polarized communities and for reducing polarization;
Context level
- Study, inference and detection of narratives in disinformation campaigns;
- Impact/harm of misinformation on society;
- Case studies on the spread and impact of fake news on controversial topics such as politics, health, climate change, economics, and migration;
- Social and psychological studies, or data analytics, related to misinformation spreaders.
Evaluation
- Metrics, tools and methods for measuring the impact of fake news and of coordinated inauthentic behaviors;
- Datasets for evaluation.
This workshop covers work that studies, characterizes, or models mis- and disinformation, as well as state-of-the-art frameworks for countering it, in both unimodal and multimodal forms. We also invite work on coordinated inauthentic behavior and information operations across different forms of misinformation, such as rumors and fake news.
Important dates
- Submission deadline: 24th May 2024 (extended from 15th May 2024)
- Accept/Reject communications: 20th July 2024 (extended from 1st July 2024)
- Camera-ready papers due: 11th August 2024 (extended from 4th August 2024)
- Workshop date: 20 October 2024
All deadlines are 11:59pm UTC-12 (“anywhere on earth”).
Schedule
9:15-9:30
Welcome and opening remarks
9:30-10:30
Invited talk by David Camacho
10:30-11:00
Coffee break ☕
11:00 - 12:30
Paper presentations
- Analysis of Climate Change Misleading Information in TikTok
  Clara Baltasar, Sergio D’Antonio Maceiras, Alejandro Martin and David Camacho
- Diachronic Political Content Analysis: A Comparative Study of Topics and Sentiments in Echo Chambers and Beyond
  Michele Joshua Maggini, Virginia Morini, Davide Bassi and Giulio Rossetti
- Factoring in context for the automatic detection of misrepresentation
  Bruna Paz Schmid, Annette Hautli-Janisz and Steve Oswald
- Are Misinformation Propagation Models Holistic Enough? Identifying Gaps and Needs
  Raquel Rodriguez-García, Álvaro Rodrigo and Roberto Centeno
12:30 - 14:00
Lunch break 🥪
14:00 - 15:30
Paper presentations
- Detecting fake news using Twitter social information
  Jesus Maria Fraile Hernandez, Alvaro Rodrigo and Roberto Centeno
- On the Categorization of Corporate Multimodal Disinformation with Large Language Models
  Ana-Maria Bucur, Sónia Gonçalves and Paolo Rosso
- Automated Fact-checking based on Large Language Models: An application for the press
  Bogdan Andrei Baltes, Yudith Cardinale and Benjamin Arroquia Cuadros
15:30 - 16:00
Coffee break ☕
16:00 - 17:00
Invited talk by Paolo Rosso
17:00 - 17:10
Closing remarks
Invited Speakers
Paolo Rosso
Paolo Rosso is Full Professor of Computer Science at the Universitat Politècnica de València, Spain. His current research interests lie mainly in the detection of harmful information in social media, both fake news and hate speech. He is the principal investigator of two related projects: XAI-DisInfodemics on eXplainable AI for disinformation and conspiracy detection during infodemics (PLEC2021-007681), and FAKEnHATE-PdC on FAKE news and HATE speech (PDC2022-133118-I00), both funded by the Spanish Ministry of Science, Innovation and Universities, and by the European Union NextGenerationEU/PRTR. He has collaborated with the Spanish National Security Department and with the Science and Technology Office (Oficina-C) of the Spanish Congress of Deputies on topics related to disinformation campaigns and AI.
Countering disinformation with AI: discriminating conspiracy theories from critical thinking
The rise of social media has offered a fast and easy way for the propagation of disinformation and conspiracy theories. Despite the research attention it has received, disinformation detection remains an open problem, and users keep sharing texts that contain false statements. In this keynote I will briefly describe how to go beyond textual information to detect disinformation, taking into account affective and visual information as well, since these provide important insights into how disinformation spreaders aim to trigger certain emotions in their readers. I will also describe how psycholinguistic patterns and users' personality traits may play an important role in discriminating disinformation spreaders from fact checkers. Moreover, I will comment on some studies on the propagation of conspiracy theories. In the framework of the PAN Lab at CLEF, we are organising a challenge on oppositional thinking analysis to discriminate between conspiracy narratives and critical thinking. This distinction between critical and conspiracist narratives is vital because treating a message as conspiratorial when it is merely oppositional to mainstream views could set off a psychosocial process that drives into the arms of conspiracy communities those who were simply critical of controversial topics such as vaccination or climate change. Most of this work was done in the framework of IBERIFIER, the Iberian media research & fact-checking hub on disinformation funded by the European Digital Media Observatory, and of the research projects XAI-DisInfodemics (eXplainable AI for disinformation and conspiracy detection during infodemics) and FAKEnHATE-PdC (FAKE news and HATE speech).
David Camacho
David Camacho is Full Professor in the Computer Systems Engineering Department of Universidad Politécnica de Madrid (UPM). He is head of the Applied Intelligence and Data Analysis research group (AIDA: https://aida.etsisi.uam.es), Director of the PhD program in Computer Science and Technologies of Smart Cities, and Director of the Master program in Machine Learning and Big Data at UPM. He has published more than 300 journal articles, books, and conference papers (see Google Scholar). His research interests include machine learning (clustering/deep learning), computational intelligence (evolutionary computation, swarm intelligence), social network analysis, and fake news and disinformation analysis. He has participated in or led more than 60 AI-based R&D projects (national and international: H2020, MSCA ITN-ETN, DG Justice, ISFP, NRF Korea), applied to real-world problems in areas such as aeronautics, aerospace engineering, cybercrime/cyber intelligence, social network applications, disinformation countering, and video games, among others. He has served as Editor-in-Chief of Expert Systems since 2023 and sits on the editorial boards of several journals, including Information Fusion, Human-centric Computing and Information Sciences (HCIS), Cognitive Computation, and IEEE Transactions on Emerging Topics in Computational Intelligence (IEEE TETCI), among others. Contact: David.Camacho@upm.es.
Rethinking the problem of disinformation and Artificial Intelligence: boundaries, threats, and trends
Disinformation (and more generally misinformation) is spreading everywhere online, causing problems for individuals, societies, and countries. This unchecked dissemination of falsehoods has nurtured an environment ripe for the proliferation of rumors, propaganda, and hoaxes, exacting a toll on the economic, political, and public-health realms, among many other aspects of our daily lives. Confronting this multifaceted adversary demands a united front, drawing upon the collective wisdom and resources of diverse stakeholders, including individuals, media entities, governmental bodies, technology firms, and scholars. This keynote endeavours to illuminate the intricate contours of this challenge, delving into popular computational techniques such as machine learning and graph computing as a new set of weapons in the battle against misinformation. Focused primarily on three domains, Natural Language Processing (NLP), Multimodal Deep Learning (MDL), and Social Network Analysis (SNA), our discourse aims to unveil the potential of these techniques in discerning truth from falsehood. Within the realms of NLP/MDL and SNA, particular attention will be devoted to the FacTeR-Check architecture, a novel framework that, through the use of ensembles and deep learning techniques based on Transformer technology, enables the identification and tracking of misleading content across the vast expanse of online social networks.
Feel free to reach out to the organizers at the email below if you are not sure whether a specific topic is well-suited for submission.
Submission
We will accept submissions through ChairingTool at https://chairingtool.com/. All submissions should use the ECAI 2024 template and follow the formatting requirements specified by ECAI.
Submission Types
- Original submissions: Submissions will be reviewed through a double-blind process and must be anonymized. They can be either short papers (2-4 pages) or long papers (6-8 pages), with additional pages allowed for references.
- Non-archival option: In addition to regular paper submissions, authors may submit previously published research or abstracts as non-archival contributions.
Accepted submissions will be presented at the workshop as oral presentations.
Contact
Please contact the organizers at codaihelp@gmail.com for any questions. For additional information, please contact the web chair, Ahmed Sabir (ahmed.sabir(at)ut.ee).
Organizers
Rajesh Sharma
Rajesh Sharma, Head, CSS Lab, University of Tartu, Estonia. Email: rajesh.sharma@ut.ee (primary contact). Rajesh Sharma is an Associate Professor of Information Systems at the Institute of Computer Science, University of Tartu, Estonia. His interests include big data analytics, especially in the domains of social media and social network analysis. He has published papers at conferences such as ICWSM, IEEE Big Data, and IEEE/ACM ASONAM, and in journals such as the International Journal of Data Science and Analytics, IEEE Transactions on Network Science and Engineering, and Social Network Analysis and Mining. In particular, he has published several papers on (mis)information diffusion in single and multilayer networks. He has served on the advisory board of an IMF project on detecting conflicts of interest in public procurement. He is an associate editor of the journal Social Network Analysis and Mining and a PC member of the ASONAM conference, and is also on the TPC of the Complex Networks and SocInfo conferences. He was an invited speaker at the Digital Humanities Workshop, Estonia. Presently, he is leading efforts on the SoBigData++ research infrastructure project and two CHIST-ERA projects, SAI and HAMISON. In the past, he was part of the SoBigData (H2020) project and InWeGe (an EU Commission project on the gender pay gap in Estonia). He is also involved in an industrial project with Swedbank, one of the largest banks in the Baltics. He was one of the organizers of the “DisInfo” workshop (on disinformation behavior) at the International Conference on Social Informatics (SocInfo) 2020.
Anselmo Peñas
Anselmo Peñas, NLP & IR Group, Universidad Nacional de Educación a Distancia (UNED). Email: anselmo@lsi.uned.es. Anselmo Peñas is Full Professor of Computer Science at UNED. He holds the Award of the Spanish Society for Natural Language Processing. In 2010 he was a visiting scholar at the University of Southern California, and in 2016 he spent six months at the University of York working on unsupervised machine learning techniques applied to natural language interpretation. He has participated in several EU projects and is currently the international coordinator of the EU CHIST-ERA HAMISON project (2023-2025) on the holistic analysis of disinformation. From 2007 to 2015, he acted as international coordinator of the European question answering benchmarking and evaluation campaigns in multiple European languages at the Cross-Language Evaluation Forum (CLEF QA Track).
Program Committee
Rodrigo Agerri, HiTZ Center - Ixa, University of the Basque Country (UPV/EHU)
Paolo Rosso, PRHLT Research Center, Universidad Politécnica de Valencia (UPV)
Arkaitz Zubiaga, Social Data Science Lab, Queen Mary University of London
Harith Alani, Knowledge Media Institute, The Open University, UK
Anwitaman Datta, Nanyang Technological University, Singapore
Uku Kangur, University of Tartu, Estonia
Shakshi Sharma, Bennett University, India
Johannes Langguth, Simula Research Laboratory, Norway
David Camacho, Applied Intelligence & Data Analysis group, Universidad Politécnica de Madrid (UPM)
Anselmo Peñas, NLP & IR UNED, Universidad Nacional de Educación a Distancia (UNED)
Roberto Centeno, NLP & IR UNED, Universidad Nacional de Educación a Distancia (UNED)
Álvaro Rodrigo, NLP & IR UNED, Universidad Nacional de Educación a Distancia (UNED)
Rajesh Sharma, CSS Lab, University of Tartu, Estonia
Neha Pathak, Indian Institute of Information Technology (IIIT) Delhi
Ahmed Sabir, CSS Lab, University of Tartu, Estonia
Giulio Rossetti, CNR, Pisa, Italy
Jan Milan, Applied University of Science, Zurich
Rémy Cazabet, Univ. Lyon 1, Lyon, France
Roshni Chakraborty, CSS Lab, University of Tartu, Estonia