AI-driven Solutions Against Disinformation: Key Tools from vera.ai

NEWS
Thu 17 Apr 2025

The "AI Meets Media: Tackling Disinformation" webinar brought together media and technology experts to present the cutting-edge tools developed by the vera.ai project to create a safer information ecosystem. The session focused on how artificial intelligence can support journalists, fact-checkers, researchers and the general public in identifying, analysing, and countering disinformation in a rapidly evolving media landscape. vera.ai is a Horizon Europe project developing professional, trustworthy AI solutions for media professionals. Its tools are now publicly accessible on the AI-on-Demand platform, marking a significant step forward in collective digital resilience.

Disinformation in 2025: A persistent threat

Disinformation continues to rank among the most pressing societal risks in the age of AI. From AI-generated political robocalls to altered wartime images and misleadingly translated speeches, synthetic media is becoming both more common and more convincing. The World Economic Forum's Global Risks Report 2025 lists disinformation as one of the most urgent challenges of the AI era. vera.ai is responding with a suite of tools that go beyond detection: they support understanding, verification, and sustainable integration into journalistic workflows.

Disinformation in 2025 is not just more sophisticated; it’s more pervasive and personalized. Generative AI tools are now capable of creating entire campaigns of synthetic content that mimic the style, tone, and language of authentic media. Misleading claims, once localized, now scale rapidly across platforms and borders through deepfakes, synthetic voices, and AI-generated texts. Political actors, malicious influencers, and bad-faith networks increasingly rely on automated disinformation tactics to shape public opinion, distort facts, and discredit journalism. Recent analyses highlight that generative AI is being used in multiple languages, targeting elections, polarizing public debate, and overwhelming content moderation systems. The line between human-generated and machine-generated content continues to blur, challenging traditional fact-checking processes and public trust in information.

The vera.ai project has released twelve assets on the AI-on-Demand platform, including:

Text-based tools:

  • Claim check-worthiness classifiers trained on multilingual datasets (GeneralClaim, MultiCW)
  • Fine-tuned language models for detecting check-worthy content
  • A Central-Claim Extractor service that isolates key claims in noisy or structured text

Image analysis tools:

  • OMG-Fuser, a fusion transformer detecting image forgeries using forensic signals
  • RINE, a deep model that leverages intermediate encoder blocks for synthetic image detection

Audio verification tools:

  • ODSS, a synthetic speech dataset with 17 hours of multilingual data
  • Audio provenance and phylogeny datasets for tracing manipulated or repurposed audio

Video tools:

Multimodal tools:

  • VERITE, a dataset for matching image-text pairs
  • CREDULE, a dataset for source reliability and evidence verification
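
To make the idea of a claim check-worthiness classifier concrete, here is a minimal toy sketch. It is not vera.ai's method (the project's actual classifiers are fine-tuned language models trained on multilingual datasets such as GeneralClaim and MultiCW); it only illustrates the underlying task of scoring how "checkable" a sentence is:

```python
import re

def checkworthiness_score(sentence: str) -> float:
    """Toy heuristic: sentences with numbers, quantities, or strong
    comparative language are more likely to contain checkable claims.
    Real classifiers (such as vera.ai's fine-tuned models) learn these
    signals from annotated data rather than from hand-written rules."""
    score = 0.0
    if re.search(r"\d", sentence):  # numeric facts
        score += 0.5
    if re.search(r"(\bpercent\b|%|\bmillion\b|\bbillion\b)", sentence, re.I):
        score += 0.3  # quantity words often signal verifiable statistics
    if re.search(r"\b(more|less|most|first|never|always)\b", sentence, re.I):
        score += 0.2  # comparatives and absolutes invite verification
    return min(score, 1.0)

claims = [
    "Unemployment fell by 3 percent last year.",  # factual, checkable
    "What a lovely day it is!",                   # opinion, not checkable
]
scores = [checkworthiness_score(c) for c in claims]
```

In practice, such a score is used to triage incoming text so that fact-checkers spend their limited time on the sentences most likely to contain verifiable claims.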

Real-world impact and use cases

Throughout the webinar, experts demonstrated real-world use cases showing how the tools can counter disinformation campaigns. In one example, an AI-assisted verification of a viral video falsely claiming an “illegal immigrant attack” in Germany revealed the suspect was a Spanish national, highlighting how AI tools can assist in avoiding harmful misinformation spirals. In another case, an AI-generated image of an explosion outside the Pentagon caused momentary financial market disturbances—an example of the tangible impact that synthetic media can have if left unchecked.

What sets vera.ai apart is its collaborative, human-in-the-loop design. Fact-checkers and journalists are not replaced by AI but empowered through it. The tools are explainable, adaptable, and grounded in real-world use cases. They reflect a broader shift toward accountable, transparent AI in public-interest applications.

The vera.ai team emphasized that the next challenge is ensuring long-term sustainability and adoption of these tools. This will be achieved through partnerships with major media verification platforms and the integration of vera.ai assets into existing journalistic and research workflows.

Conclusion: Empowering Fact-Checkers with AI

The “AI Meets Media” webinar successfully showcased how vera.ai is not just building technology but shaping a responsible ecosystem for media verification. Its integration into the AI-on-Demand platform marks a significant step toward democratizing access to AI tools in the fight against disinformation. By aligning cutting-edge AI research with the needs of practitioners, vera.ai is setting a blueprint for responsible, explainable, and impactful AI in the public interest.

For more updates, follow @veraai_eu on Twitter or visit www.veraai.eu.

AUTHOR: Loredana Bucseneanu (DSME)