ACMMM 2026

IMPACT: Impact-aware Multimodal Persuasive Analysis and Contextual Trust

A workshop on multimedia-native, persuasion-optimized misinformation, bringing together multimedia forensics, multimodal reasoning, provenance, trust, robustness, and human-centered verification.

Workshop Overview

IMPACT targets a rapidly emerging reality: modern misinformation is increasingly multimedia-native and persuasion-optimized. It is no longer just a forged image or isolated deepfake clip, but an engineered package of visuals, captions, framing, narrative priming, and cross-platform remixing designed to manipulate belief and action.

Key topics: Multimodal Manipulation Detection · Cross-modal Contradiction · Authentication & Provenance · Severity-aware Modeling · Explainable Verification · Platform-scale Integrity

Important Dates

  • Contribution Submission: 16 July
  • Author Notification: 06 August
  • Camera-Ready: 20 August
  • Author Registration: 20 August
Final dates are aligned with the official ACM MM 2026 workshop schedule.

IMPACT is scoped around a central concern: as generative modeling improves across image, video, audio, and text, the main threat is no longer only whether media is fake, but whether it is strategically persuasive, contextually misleading, and capable of causing real-world harm at speed and scale.

In high-stakes settings such as elections, public health, emergencies, conflict reporting, and financial narratives, manipulated media can distort decision-making even when the manipulation is subtle. This pushes the field toward integrated integrity systems that combine detection, localization, provenance, contextual reasoning, robustness, and explanations that humans can trust.

IMPACT brings together researchers in multimedia forensics, multimodal learning, trustworthy AI, social and behavioral computing, authentication and provenance, and human-centered verification systems. The workshop is designed not just around benchmark accuracy, but around robustness, auditability, severity awareness, and deployment value.

Scope and Topics

Core Technical Areas

  • Multimodal manipulation detection and localization across image, video, audio, and text
  • Cross-modal semantic inconsistency and narrative distortion
  • Authentication, watermarking, hashing, signatures, and provenance infrastructure
  • Severity- and impact-aware modeling beyond binary labels

System and Deployment Areas

  • Robustness under adversarial pressure and distribution shift
  • Explainable and human-centered verification systems
  • Platform-scale monitoring, moderation support, and triage
  • Policy, governance, ethics, and responsible deployment constraints

Organizers

The organizing team spans multimedia forensics and authentication, multimodal learning and reasoning, privacy and trustworthy systems, and large-scale social and behavioral computing.

Organizing Chairs

Priyanka Singh

The University of Queensland, Australia

Email: priyanka.singh@uq.edu.au

Senior Lecturer in Cyber Security at UQ. Her research spans multimedia forensics, privacy-preserving provenance, perceptual hashing, and accountable verification systems.

Xue Li

The University of Queensland, Australia

Email: xueli@uq.edu.au

Professor at UQ with expertise in data mining, social computing, and large-scale behavioral pattern discovery.

Pradeep K. Atrey

University at Albany, SUNY, USA

Email: patrey@albany.edu

Associate Professor and Co-Director of ALPS. His research includes multimedia authentication, provenance, privacy-aware analytics, and secure multimedia intelligence.

Program & Challenge Chair

Gagandeep Singh

The University of Queensland, Australia

Email: gagandeep.singh2@student.uq.edu.au

Research Assistant and Software Engineering student at The University of Queensland, working on multimodal misinformation detection.

Call for Papers

The IMPACT workshop solicits high-quality submissions advancing multimedia integrity under persuasion-oriented manipulation and contextual distortion.

We invite submissions in the following categories:

  • Full Papers
  • Short Papers
  • Posters
  • System / Demo Submissions

Topics of Interest

  • Multimodal manipulation detection and localization
  • Cross-modal contradiction and narrative distortion
  • Authentication, watermarking, and provenance
  • Robustness to adversarial pressure and distribution shift
  • Severity- and impact-aware modeling
  • Human-in-the-loop verification and interfaces
  • Platform-scale moderation support
  • Policy, governance, and ethical constraints

Submission and Review

All submissions will be peer-reviewed by the program committee for originality, technical quality, clarity, relevance, and reproducibility. We encourage code, model cards, and evaluation artifacts where appropriate.

Accepted contributions will be presented as oral talks or posters/demos depending on program constraints, with an explicit focus on constructive discussion, failure-mode analysis, and cross-community exchange.

Submissions will follow the ACM Multimedia workshop format and will appear in the workshop proceedings. Papers may be 6 to 8 pages, plus up to 2 additional pages for references. Please follow the official ACM MM guidelines for paper formatting. Review is single-blind, with each submission evaluated by at least two program committee members.

High-quality papers rejected from the ACM Multimedia 2026 main track may be submitted to the IMPACT workshop. Authors should submit the original reviews alongside a statement describing revisions made to the paper. The workshop organizers will review these materials and may invite eligible papers for inclusion. Accepted papers will be published in the ACM Multimedia 2026 workshop proceedings. All submissions must follow the official ACM Multimedia 2026 format.

ACM Multimedia 2026 is an on-site event only: all papers and contributions must be presented in person, and remote presentations will not be hosted or allowed. Contributions not presented on-site will be treated as no-shows and removed from the conference proceedings. Further details will be provided for unfortunate situations in which none of the authors is able to attend physically.

Please follow https://2026.acmmm.org/site/calls-dates.html for further guidelines.

Important Dates

  • Contribution Submission: 16 July
  • Author Notification: 06 August
  • Camera-Ready: 20 August
  • Author Registration: 20 August

Tentative Schedule

TBA

The final workshop schedule will be published here once keynote and session timings are confirmed.

Tentative Keynote Speakers

The proposed speakers bring complementary perspectives across multimedia forensics, provenance security, trustworthy systems, and persuasive misinformation analysis.

Siwei Lyu

University at Buffalo, SUNY

A globally recognized leader in multimedia forensics, authenticity, and deepfake detection. His work aligns strongly with the workshop's themes of persuasion-oriented misinformation, provenance, and accountability in visual evidence.

Tentative · Virtual if required

Mohan S. Kankanhalli

National University of Singapore

A pioneer in multimedia computing, computer vision, and trustworthy AI, with strong contributions to content authentication, multimedia security, and privacy-preserving analytics.

Confirmed · Virtual if required

Ryan Ko

The University of Queensland

Chair Professor of Cyber Security and Director of Research at UQ EECS. His expertise supports the workshop’s focus on secure and accountable infrastructures for misinformation response.

Confirmed · Virtual if required

Helen Huang

The University of Queensland

Professor at UQ working on large-scale data mining, trustworthy machine learning, robustness, fairness, and reliable AI deployment.

Confirmed · Virtual if required

T-IMPACT Challenge

A central objective of IMPACT is to catalyze reproducible progress on impact-aware integrity, where the goal is not only to decide whether content is manipulated, but to quantify how it misleads and how harmful it could be if consumed and shared.

Challenge Motivation

Existing benchmarks often reward binary detection while failing to capture realistic manipulations involving subtle visual edits, narrative reframing, contextual distortion, and multimodal persuasion. The T-IMPACT challenge is designed to reward robustness, calibrated confidence, and explainable outputs, not just narrowly defined benchmark accuracy.

Dataset Overview

T-IMPACT consists of paired authentic and manipulated multimedia items reflecting real misinformation strategies across political events, public health, disasters, and conflict reporting. Samples are structured around image-text pairs, with optional metadata depending on release tier. Manipulations include:

  • Object insertion, removal, compositing, inpainting, and localized attribute changes
  • Caption swaps, narrative reframing, temporal or geographic misattribution
  • Mixed cases where both modalities are altered

Annotations and Supervision

  • Manipulation type labels and descriptors of narrative distortion
  • Severity / impact supervision as graded scores or ordinal categories
  • Localization signals such as manipulated regions and token-level text highlights
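The supervision signals above could be organized per sample roughly as follows. This is an illustrative sketch only: all field names and values are assumptions for exposition, not the released T-IMPACT schema.

```python
# Illustrative annotation record for one hypothetical T-IMPACT sample.
# Every field name here is an assumption; the released schema may differ.
sample = {
    "image": "items/0001.jpg",
    "caption": "Flood waters reach the city centre on Tuesday.",
    "label": "manipulated",                                      # Task 1: authenticity
    "manipulation_types": ["object_insertion",
                           "geographic_misattribution"],         # type labels
    "severity": 3,                                               # Task 2: ordinal impact (e.g. 0-4)
    "image_mask": "masks/0001.png",                              # Task 3: manipulated-region mask
    "text_spans": [[4, 5]],                                      # Task 3: token indices of distorted text
}

# Basic sanity checks a loader might perform.
assert sample["label"] in {"authentic", "manipulated"}
assert isinstance(sample["severity"], int)
```

Structuring each item this way keeps the three task signals (authenticity, severity, localization) addressable independently, so teams can participate in a subset of tracks.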

Tasks and Tracks

  • Task 1: Authenticity classification
  • Task 2: Severity / impact estimation
  • Task 3: Evidence localization (optional but strongly encouraged)

Evaluation Protocol

  • Classification: Accuracy, Precision/Recall, F1, ROC-AUC
  • Severity: MAE, RMSE, Spearman correlation, ordinal consistency
  • Localization: IoU-based metrics and token-level F1 where applicable
  • Stratified results by manipulation type and domain
  • Robustness reporting under shift conditions
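The metrics above can be illustrated on toy predictions. The sketch below is not the official scorer; the variable names and example values are hypothetical, and ties in the Spearman computation are not handled.

```python
# Toy sketch of T-IMPACT evaluation metrics (not the official scorer).
import math

def f1(y_true, y_pred, positive=1):
    # Precision/recall/F1 for the "manipulated" class (Task 1).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def spearman(y, yhat):
    # Rank-transform, then Pearson correlation of the ranks (no tie handling).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(y), ranks(yhat)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical predictions: Task 1 labels (1 = manipulated), Task 2 severities.
auth_true, auth_pred = [1, 0, 1, 1, 0], [1, 0, 0, 1, 0]
sev_true, sev_pred = [0.9, 0.1, 0.6, 0.8, 0.2], [0.7, 0.2, 0.5, 0.9, 0.1]

print(round(f1(auth_true, auth_pred), 3))       # classification quality
print(round(mae(sev_true, sev_pred), 3))        # severity error
print(round(spearman(sev_true, sev_pred), 3))   # severity ranking agreement
```

Reporting rank correlation alongside MAE/RMSE matters for Task 2: a model can have low absolute error yet mis-order items by harm, and triage pipelines care primarily about the ordering.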

Leaderboard and Infrastructure

A public leaderboard will rank Task 1 and Task 2. Task 3 will be supported as an explainability track and highlighted qualitatively. The challenge infrastructure is planned for a public evaluation platform such as CodaLab or EvalAI.

Release, Governance, and Ethics

T-IMPACT will be released with responsible-use terms to support detection and verification research without facilitating misuse. Documentation will describe construction procedures, annotation pipelines, limitations, privacy considerations, and fairness considerations across regions and languages.