About
Full Fact Automated is an AI-powered fact-checking toolkit developed by Full Fact, the UK's independent fact-checking charity. It uses natural language processing and machine learning to monitor live political speech, parliamentary debates (Hansard), and broadcast media in near real time, automatically detecting checkable claims, matching them against a database of previously verified content, and alerting human fact-checkers when repeat claims surface. The system operates as a pipeline: text is ingested from structured sources such as Hansard XML and live caption feeds, segmented into sentences, classified for claim-worthiness, and then compared via semantic similarity search against an existing verdict corpus. This reduces the manual overhead of claim triage and helps fact-checkers avoid duplicating work already done.

Full Fact has also collaborated with international partners in the IFCN ecosystem to share the infrastructure across organisations globally. Funded through charitable grants and philanthropy (from foundations and technology companies' journalism programmes), the tools are provided free to partner fact-checking organisations rather than sold commercially. Full Fact has open-sourced components of its work and published NLP research and datasets, contributing to the broader computational journalism and AI fact-checking research community.
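The pipeline described above can be sketched in heavily simplified form with plain-Python stand-ins: regex sentence segmentation, a toy claim-worthiness heuristic, and bag-of-words cosine similarity standing in for the trained classifiers and learned sentence embeddings the real system uses. All function names and the threshold here are illustrative, not Full Fact's actual API.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; the real system would use learned embeddings."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def looks_checkable(sentence: str) -> bool:
    """Toy claim-worthiness filter: keep sentences mentioning a number or
    quantity word. The production system uses a trained classifier instead."""
    return bool(re.search(r"\d|per cent|percent|million|billion", sentence, re.I))

def match_claims(transcript: str, verdict_corpus: dict[str, str],
                 threshold: float = 0.5) -> list[tuple[str, str, str]]:
    """Segment, filter for checkable claims, then match each against
    previously fact-checked claims, returning (sentence, claim, verdict)."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    matches = []
    for sent in sentences:
        if not looks_checkable(sent):
            continue
        vec = vectorize(sent)
        best = max(verdict_corpus, key=lambda claim: cosine(vec, vectorize(claim)))
        if cosine(vec, vectorize(best)) >= threshold:
            matches.append((sent, best, verdict_corpus[best]))
    return matches
```

A repeated claim in a new transcript thus surfaces the prior verdict without re-researching it, which is the triage step the system automates.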
Key Features
- Real-Time Claim Detection: NLP models process live parliamentary debates, broadcast transcripts, and media sources to identify sentences that constitute verifiable factual claims, filtering out opinions and rhetoric.
- Claim Matching & Similarity Search: Embedding-based semantic search compares newly detected claims against a corpus of previously fact-checked content, surfacing relevant prior verdicts so researchers can avoid duplicating work.
- Hansard & Parliamentary Monitoring: Automatically ingests and processes UK Parliament transcripts (Hansard XML), enabling systematic tracking of claims made by politicians during debates and proceedings.
- Automated Fact-Checker Alerts: When a repeated or high-priority claim is detected, the system notifies editors and fact-checkers, allowing human reviewers to focus their attention where it matters most.
- Cross-Organisation Claim Sharing: Designed to be shared across IFCN partner fact-checking organisations globally, allowing multiple newsrooms to access the same claim database and avoid redundant verification work.
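As one illustration of the ingestion step, a Hansard-style XML transcript can be walked with Python's standard library. The schema below is a hypothetical simplification invented for this sketch; real Hansard XML is considerably richer, so this shows only the shape of the ingestion stage that feeds sentence segmentation and claim classification.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified debate snippet -- not the real Hansard schema.
SAMPLE = """
<debate date="2024-03-12">
  <speech speaker="A Member">
    <p>Unemployment is at a record low of 3.8%.</p>
    <p>I thank the House for its time.</p>
  </speech>
</debate>
"""

def ingest(xml_text: str):
    """Yield (speaker, paragraph) pairs ready for downstream
    sentence segmentation and claim-worthiness classification."""
    root = ET.fromstring(xml_text)
    for speech in root.iter("speech"):
        speaker = speech.get("speaker", "Unknown")
        for para in speech.iter("p"):
            if para.text and para.text.strip():
                yield speaker, para.text.strip()

for speaker, text in ingest(SAMPLE):
    print(f"{speaker}: {text}")
```

Attributing each paragraph to its speaker at ingestion time is what makes systematic per-politician claim tracking possible later in the pipeline.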
Pros
- Reduces Fact-Checker Workload: Automates the most time-consuming parts of fact-checking — claim monitoring and triage — allowing human researchers to spend more time on actual verification.
- Mission-Driven & Transparent: Developed by a registered charity with open research publications and partially open-sourced code, making the methodology accountable and reproducible.
- Enables Cross-Newsroom Collaboration: The shared infrastructure model lets multiple fact-checking organisations benefit from the same verdict database, multiplying the impact of each fact-check produced.
- Free for Partner Organisations: Funded by grants and donations, the tools are provided at no cost to qualifying fact-checking partners, lowering the barrier to adoption for under-resourced newsrooms.
Cons
- Limited Public Accessibility: There is no open public API or self-serve onboarding; access is restricted to vetted partner organisations, making it unavailable for independent developers or smaller newsrooms outside the IFCN network.
- Does Not Automate Verification: The system detects and matches claims but cannot verify them — the actual fact-checking judgment still requires human research, limiting how far automation can go.
- Primarily UK English Coverage: The tooling and training data are heavily oriented toward UK political discourse and the English language, restricting its applicability for international or non-English fact-checking contexts.