4 Comments

Remember that the training data for the LLMs will include text from political speeches, propaganda, etc.

Using an LLM like this to do fact-checking is absolutely not the way to do it. LLMs are not a database of facts; rather, they are reasoning and generation engines trained on consensus, not truth.

So, what we'd do to fact-check with AI is use the reasoning power of an LLM to compare what was said against a trusted source of truth (not the LLM's training data), using a technique such as RAG. That would be simple to build.

However, the question then becomes, "What's the source of truth?"

Still, this would be the way to do it, rather than just using ChatGPT or Perplexity, and it would be easy to implement (a rough sketch below).
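
A minimal sketch of what that could look like, assuming an OpenAI-style chat API (the gpt-4o model name is just a placeholder) and a toy retrieve() helper standing in for a real search over a vetted corpus of official transcripts, statistics, etc.:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder corpus: in practice this would be a vetted document store,
# e.g. official transcripts or statistics-agency releases.
TRUSTED_CORPUS = [
    "Placeholder passage: official unemployment figures for the period in question.",
    "Placeholder passage: the text of the bill as passed, from the official record.",
]

def retrieve(claim: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank passages by word overlap with the claim.
    A real system would use a vector index or search API over vetted sources."""
    words = set(claim.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def fact_check(claim: str) -> str:
    """Ask the LLM to judge the claim ONLY against the retrieved evidence."""
    evidence = "\n".join(f"- {p}" for p in retrieve(claim, TRUSTED_CORPUS))
    prompt = (
        "Using ONLY the evidence below, say whether the claim is supported, "
        "contradicted, or not covered, and quote the passage you relied on.\n\n"
        f"Evidence:\n{evidence}\n\nClaim: {claim}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any capable chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(fact_check("Unemployment fell every year of the last term."))
```

The point of the design is that the prompt constrains the model to the retrieved evidence, so the "source of truth" question lives in the corpus you choose, not in the model's training data.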

author

So, this is absolutely true of LLMs in general, and yet: my very unscientific experiment suggested they can do a pretty fair job in this use case. Moreover, Perplexity includes links to its sources, so they can always be checked. A human fact-checker can quickly double-check its work; no need to build something on top of it.


You wish they had that machine from the Sean Connery Bond rerun Never Say Never Again in these debates: it gives higher electric shocks the more they fib, and the AI picks up on it in real time. I reckon people would pay to watch that. It would keep politicians "honest".

author

Bwahahaha
