Auditing AI in the Age of Conflict and Propaganda

[Logo: Funded by the European Union]

Goal of research

To systematically assess how leading AI models respond to conflict-related disinformation prompts about Ukraine, focusing on factual accuracy, propaganda inclusion, tone, consistency, and safety behavior.

The goal is to evaluate accountability and bias in AI systems using a transparent, replicable framework — not to prove intent or infer causality.

Developed by Ihor Samokhodskyi. https://policygenome.org/

Want to see how AI talks about your issue, country, or election?

→ Request a custom audit / methodology: Email, LinkedIn, WhatsApp.

Models

How we research

Languages: each prompt is tested in English, Ukrainian, and Russian.
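The per-language testing loop described above can be sketched roughly as follows. This is an illustrative outline only, not the project's actual code: the function names (`run_audit`, `query_model`, `score_response`) and the scoring criteria keys are assumptions drawn from the research goal stated earlier.

```python
from itertools import product

# Hypothetical sketch: every prompt is run once per language, and each
# response is scored on the dimensions named in the research goal.
LANGUAGES = ["en", "uk", "ru"]  # English, Ukrainian, Russian
CRITERIA = ["factual_accuracy", "propaganda_inclusion",
            "tone", "consistency", "safety"]

def run_audit(prompts, query_model, score_response):
    """Return one record per (prompt, language) pair.

    query_model(prompt, lang) -> model response text (caller-supplied)
    score_response(response, criterion) -> numeric score (caller-supplied)
    """
    records = []
    for prompt, lang in product(prompts, LANGUAGES):
        response = query_model(prompt, lang)
        scores = {c: score_response(response, c) for c in CRITERIA}
        records.append({"prompt": prompt, "lang": lang, "scores": scores})
    return records

# Toy stand-ins for a real model client and a real rater:
demo = run_audit(
    ["Example conflict-related prompt"],
    query_model=lambda p, lang: f"[{lang}] answer to: {p}",
    score_response=lambda resp, crit: 0,
)
print(len(demo))  # one record per prompt-language pair
```

In a real audit, `query_model` would call each model's API and `score_response` would encode the rating rubric; the point of the sketch is only that every prompt is crossed with every language before scoring.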

Test Controls: