X to test AI-generated Community Notes: What it means for human fact-checkers

Elon Musk-owned X is preparing to roll out AI-generated Community Notes, a move that could significantly change how fact-checking happens on the platform. In a newly announced pilot programme, developers will soon be able to build AI bots capable of drafting Community Notes that appear under posts to provide additional context.

According to X, these AI-powered Notes can be generated via the platform’s own Grok AI chatbot or through third-party AI tools connected to its API. The initiative is set to begin with a limited group of developers later this month, and the Notes will appear in a ‘test mode’, meaning most users won’t see them immediately. If testing proves successful, the feature may be rolled out more widely.

What is the aim?

Keith Coleman, X’s vice president of product, told Bloomberg that the goal is to combine the speed and scalability of AI with human judgment. While bots will help generate Notes quickly, human contributors will continue to determine which Notes are “helpful enough to show” under posts.

“These bots can help deliver a lot more notes faster with less work, but ultimately the decision on what’s helpful enough to show still comes down to humans,” Coleman said.

Currently, Community Notes—introduced in 2021, back when X was still Twitter—rely entirely on crowdsourced fact-checking. Contributors propose Notes that add facts and context to posts, and a Note only becomes publicly visible if enough contributors from a range of viewpoints rate it as helpful.

Why now?

The announcement comes as X faces growing criticism over misinformation, political bias, and weak content moderation. By enlisting AI, X hopes to accelerate the process of clarifying or correcting posts. However, the plan also raises questions about whether AI-generated Notes will be trustworthy, given that large language models can fabricate information.

How will it work?

  • Developers create AI bots through X’s API.
  • Bots draft Notes in test mode, offering context under specific posts.
  • Human reviewers still decide which Notes are shown, maintaining the community-driven nature of the feature.
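The workflow above can be sketched in a few lines of Python. This is purely illustrative: the article does not describe X's actual API surface, so every name here (`draft_note`, `human_review`, the LLM stub, the vote threshold) is a hypothetical stand-in for whatever X exposes to pilot developers.

```python
"""Hypothetical sketch of the draft -> test mode -> human review flow.

None of these names come from X's real API; they only mirror the
three steps listed above.
"""
from dataclasses import dataclass


@dataclass
class NoteDraft:
    post_id: str
    text: str
    status: str = "test_mode"  # drafts start hidden from most users


def generate_context(post_text: str) -> str:
    """Stand-in for a call to Grok or a third-party LLM via the API."""
    return f"Context: the claim '{post_text[:40]}' lacks a cited source."


def draft_note(post_id: str, post_text: str) -> NoteDraft:
    """Steps 1-2: a bot drafts a Note; it enters test mode, not the feed."""
    return NoteDraft(post_id=post_id, text=generate_context(post_text))


def human_review(note: NoteDraft, helpful_votes: int, threshold: int = 5) -> NoteDraft:
    """Step 3: human contributors still decide whether the Note is shown.

    The threshold is invented for illustration; the real system uses a
    more involved rating algorithm.
    """
    note.status = "shown" if helpful_votes >= threshold else "test_mode"
    return note


if __name__ == "__main__":
    note = draft_note("1234567890", "Post making an unsourced claim")
    print(note.status)  # stays in test mode until reviewed
    note = human_review(note, helpful_votes=7)
    print(note.status)  # shown once enough contributors rate it helpful
```

The key design point the sketch captures is that the bot never flips a Note to "shown" on its own: only the human-review step changes its status, which is the division of labour Coleman describes.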

For now, X’s leadership sees AI and humans as complementary rather than competitive in this fact-checking role. But experts and critics are already watching closely, noting that AI’s tendency to hallucinate could create new challenges instead of solving old ones.

If the pilot succeeds, it could redefine how crowdsourced fact-checking and AI tools coexist—not just on X, but potentially across other social media platforms too.