"Community Notes" is a crowd-sourced fact-checking system launched by the X platform in recent years. It allows users to add annotations to misleading content, and users with multiple opinions vote together to decide whether the notes should be displayed.
Its core concept is: "Multiple perspectives jointly determine the truth" rather than a single authority defining the truth.
Under the mechanism where the platform rarely intervenes in censorship, "Community Notes" has become one of the few decentralized content governance tools on X.
What are Community Notes? Why are they important?
"Community Notes" is a crowd-sourced fact-checking system launched by the X platform in recent years. It allows users to add annotations to misleading content, and users with multiple opinions vote together to decide whether the notes should be displayed.
Its core concept is: "Multiple perspectives jointly determine the truth" rather than a single authority defining the truth.
Under the mechanism where the platform rarely intervenes in censorship, "Community Notes" has become one of the few decentralized content governance tools on X.
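To make the "diverse perspectives" idea concrete, here is a minimal Python sketch. It is not X's actual ranking algorithm (the real system uses a more sophisticated bridging-based model); it only illustrates the principle that a note is shown when raters who normally disagree both find it helpful. The group labels and thresholds are assumptions for illustration.

```python
from collections import defaultdict

def should_show_note(ratings, min_per_group=3, min_helpful_ratio=0.7):
    """Toy model of the Community Notes idea (NOT the real algorithm):
    a note is shown only if raters from *different* viewpoint groups
    independently find it helpful.
    `ratings` is a list of (viewpoint_group, is_helpful) tuples."""
    helpful = defaultdict(int)
    total = defaultdict(int)
    for group, is_helpful in ratings:
        total[group] += 1
        helpful[group] += int(is_helpful)

    qualifying_groups = [
        g for g in total
        if total[g] >= min_per_group
        and helpful[g] / total[g] >= min_helpful_ratio
    ]
    # Require agreement across at least two distinct viewpoint groups.
    return len(qualifying_groups) >= 2

ratings = [("A", True), ("A", True), ("A", True),
           ("B", True), ("B", True), ("B", False), ("B", True)]
print(should_show_note(ratings))  # True: both groups found the note helpful
```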
What are AI Note Writers? How do they participate in note writing?
"AI Note Writers" is an interface system officially opened by X for AI robots to participate in community note writing. Starting today, developers around the world can:
Create your own AI bot that focuses on a specific field (e.g. healthcare, technology, law, etc.);
Receive platform requests (e.g. a tweet has been reported in large numbers or a community note is requested);
Generate preliminary draft notes , including factual citations, supporting data, etc.
Submit a note and have human reviewers (community volunteers) vote on whether to make it publicly visible.
The core of this mechanism is: AI writes the draft, and humans check whether it is useful.
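As a rough illustration of that flow, here is a minimal Python sketch of what such a bot's main loop could look like. The base URL, endpoint paths, field names, and the draft_note helper are hypothetical placeholders, not X's published API; consult X's developer documentation for the real interface.

```python
import time
import requests

API_BASE = "https://example.invalid/ai-note-writer"  # placeholder, not a real X endpoint
API_TOKEN = "YOUR_TOKEN"

def draft_note(post_text: str) -> dict:
    """Hypothetical helper: call your LLM of choice and return a draft
    note together with the sources it cites."""
    return {
        "text": "Context: ... (concise correction with citations)",
        "sources": ["https://example.org/primary-source"],
    }

def main_loop():
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    while True:
        # 1. Poll for posts where the community has requested a note (hypothetical endpoint).
        resp = requests.get(f"{API_BASE}/note_requests", headers=headers, timeout=30)
        for request in resp.json().get("data", []):
            # 2. Draft a note with citations for the flagged post.
            note = draft_note(request["post_text"])
            # 3. Submit the draft; human raters then decide whether it is shown.
            requests.post(
                f"{API_BASE}/notes",
                json={"post_id": request["post_id"], **note},
                headers=headers,
                timeout=30,
            )
        time.sleep(60)  # poll politely

if __name__ == "__main__":
    main_loop()
```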
How can developers and content creators participate?
Despite the controversy, this is still a technology trend and a content opportunity worth paying attention to, especially for developers and content entrepreneurs.
For developers:
You can use the open API that X provides to build your own note-writing AI;
Focusing on vertical fields (such as AI healthcare, legal interpretation, and economic data) makes it easier to gain recognition;
A new kind of "AI author influence ranking" may emerge in the future.
For content creators:
Use AI-assisted tools (such as ChatGPT) to prepare high-quality annotations in advance and improve trust in your content;
Follow updates to AI notes and analyze which types of content are most likely to be flagged by AI;
If your field is relatively niche, you can even guide your own AI bot to take part in debunking rumors and providing professional judgment.
Tutorial: How to participate or develop your own AI Note Writer?
If you are a developer, or would like to get involved in this experimental program yourself, please follow these steps:
Step 1: Read X's official documentation
X has opened the corresponding APIs and interfaces so developers can connect to them and build a note bot. Specific instructions are available on the X Developer Platform.
Step 2: Select a niche
Is your AI bot good at medicine, law, history, or the Chinese-language information ecosystem? Choosing a niche with "high controversy and high demand" can increase community recognition.
Step 3: Train your Bot
Combine an LLM (such as GPT-4, Claude, or Mistral) with high-quality data sources and a fact-checking mechanism to build an annotation system that outputs structured, clearly cited content, as sketched below.
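For example, a rough sketch of that pipeline, assuming the OpenAI Python SDK and a hypothetical search_sources retrieval helper (the model name, prompt, and citation check are likewise assumptions to adapt to your own stack):

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_sources(claim: str) -> list[dict]:
    """Hypothetical retrieval step: query your curated databases,
    news archives, or paper indexes and return candidate sources."""
    return [{"title": "Example source", "url": "https://example.org/report"}]

def draft_note(post_text: str) -> dict:
    sources = search_sources(post_text)
    prompt = (
        "You are drafting a community note. Using ONLY the sources below, "
        "write a neutral, 1-2 sentence correction and list the URLs you cite.\n\n"
        f"Post: {post_text}\n\nSources: {json.dumps(sources)}\n\n"
        'Reply as JSON: {"note": "...", "citations": ["..."]}'
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption; any capable model works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    draft = json.loads(resp.choices[0].message.content)

    # Basic fact-checking gate: reject a draft with no citations, or one
    # that cites URLs that were not in the retrieved sources.
    allowed = {s["url"] for s in sources}
    if not draft["citations"] or not set(draft["citations"]) <= allowed:
        raise ValueError("Draft rejected: missing or unverified citations")
    return draft
```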
Step 4: Participate in community rating
Whether your bot's output is ultimately displayed still depends on "Helpful" ratings from diverse users in the community, which keeps the feedback loop closed.
What should you focus on? What can the average user do?
Despite the controversy, this feature may still be the most in-depth test of AI in the field of fact-checking.
Content creators: you can use this mechanism to build a layer of protection for your content and reduce misunderstandings;
Developers/data analysts: this is an excellent opportunity to put AI products into practice, especially in vertical knowledge services;
Advocates of information fairness: you can take part in rating notes and add your voice to the platform's "public truth".
Why is this feature worth paying attention to?
Significantly speeds up community debunking
Traditional Community Notes rely on human writing and collaboration, so responses are delayed. AI writers can respond within minutes and cover more of the contested content.
Creates an open, collaborative "AI co-writing" mechanism
X allows developers to customize AI bots that focus on certain topics (such as health, finance, and technology), much like "AI volunteer writers" in an open ecosystem.
The community rating mechanism decides whether the AI is trusted
Users do not passively accept AI content; they decide its fate through "Helpful" ratings.
Can AI really do better than humans?
In theory, AI has several natural advantages:
Faster response: supporting references can be produced in seconds;
Access to multi-source databases: it can cite public papers, news, and databases;
Handling contested topics "without emotion": AI can give neutral explanations on questions that are not matters of subjective position.
But this also brings up an important question:
Who decides which data sources the AI references?
Conclusion
This X feature stands at the intersection of technology, ethics, and platform trust. On the one hand, it lets AI move from "generating content" to "correcting content"; on the other, whether it can truly be objective and fair depends on who controls the data sources and who holds the review power in the future.
But in any case, it has opened up a new paradigm:
AI is not just answering questions; it is also being trained to define what the right questions are.
Would you consider training an AI note-writing bot?