Discussion about this post

Andrew Jack

Great post. I have been mulling something similar, with the addition of a Pinocchio nose that extends on each candidate as they speak and the non-facts accumulate.

Girish Gupta

Remember that the training data for the LLMs will include text from political speeches, propaganda, etc.

Using an LLM like this for fact-checking is absolutely not the way to do it. LLMs are not databases of facts; they are reasoning and generation engines trained on consensus, not truth.

So, to fact-check with AI, we would use the reasoning power of an LLM to compare what was said against a trusted source of truth (not the LLM's training data), using a technique such as RAG. That would be simple to build.

However, the question then becomes, "What's the source of truth?"

Still, this would be the way to do it, rather than just using ChatGPT or Perplexity. And it would be easy to implement.
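For what it's worth, here is a minimal sketch of what that could look like. My assumptions, not anything from the post: the trusted corpus is a plain list of vetted statements, retrieval is naive word-overlap scoring (a real system would use embedding search), and ask_llm is a hypothetical stand-in for whatever LLM API you use.

```python
"""Minimal sketch of a RAG-style fact-check, under the assumptions above."""

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to your LLM
    # provider's API. Kept abstract here to avoid pinning one vendor.
    raise NotImplementedError("wire this up to your LLM of choice")

def retrieve(claim: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank trusted passages by crude word overlap with the claim.

    A stand-in for embedding-based search, kept dependency-free.
    """
    claim_words = set(claim.lower().split())

    def overlap(passage: str) -> int:
        return len(claim_words & set(passage.lower().split()))

    return sorted(corpus, key=overlap, reverse=True)[:k]

def fact_check(claim: str, trusted_corpus: list[str]) -> str:
    """Ask the LLM to judge a claim against retrieved evidence only."""
    evidence = retrieve(claim, trusted_corpus)
    prompt = (
        "Using ONLY the evidence below, classify the claim as supported, "
        "contradicted, or unverifiable, and explain briefly.\n\n"
        f"Claim: {claim}\n\nEvidence:\n"
        + "\n".join(f"- {p}" for p in evidence)
    )
    return ask_llm(prompt)
```

The point of this shape is that the model reasons over the retrieved passages rather than recalling from its weights, which is what sidesteps the propaganda-in-the-training-data problem. The open question above, what actually goes into the trusted corpus, is untouched by the code.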

