Who Validates Truth in an AI Future?

One of the core use cases for the internet, and more recently AI, is “please answer this very specific question for me.”

And at this point, the ten-blue-links version of the internet that dominated for a few decades looks very much like an inferior product experience compared to the way modern AI chat interfaces work.

ChatGPT:
Very specific question -> very specific answer

Vs.

Google:
Very specific question -> ten blue links that might contain a usable answer

And yet, ChatGPT’s better product experience is only useful if it is reliably true and accurate.

We’ve all seen the stories of AI models hallucinating and getting facts wrong. I imagine that will get fixed with more rounds of product development.

But the other night, talking with a friend, I realized that there is a deeper concern I have about truth, one I am not sure has been discussed quite as much.

One of the things people say is that the AI models are trained on “the entire internet and corpus of human knowledge up until that point.”  For now, I’ll leave aside how they got access to that knowledge and whether the producers of that knowledge were fairly compensated.

What interests me more is this: if AI models become a replacement for search, how will the models continue to reliably reflect truth over time? As the world turns and new things happen, are discovered, are disproven, etc., how will these models validate and verify what is true?

If the core value proposition of an AI chatbot is that it eliminates the complexity of sourcing truth and simply presents you the answer, how do sources of truth get compensated for the important work they do documenting it?

One of the reasons we love ChatGPT is that it has no ads (yet). It is a clean interface with just the truth we want. But that truth was, in a sense, stolen from the producers of that information by the AI companies.

If we look to the evolution of Google’s monopolistic ten-blue-links paradigm, the future of this challenge does not look great. Today when I search, I see Google’s own AI offering truth (without compensating truth sources), followed by a proliferation of ads (now almost indistinguishable from organic results), before I reach the best organic response.

What does an ecosystem of truth providers look like that can thrive in a world where these AI companies dominate the market for information? How will the AI companies choose and compensate these documenters of truth? Can they document truth themselves? What do we demand as consumers when it comes to truth? What economic incentives are in place for us to trust that what the AI companies call truth actually is?

I hope this wave of technological innovation will be different. And that sources of truth will be compensated for the difficult and civilizationally important work they do. But I am not going to get my hopes up.
