Everything that could go wrong with X's new AI-written community notes

If AI note writers "generate initial drafts that represent a wider range of perspectives than a single human writer typically could, the quality of community deliberation is improved from the start," the paper said.

Researchers imagine that once X's testing is completed, AI note writers could not just aid in researching problematic posts flagged by human users, but also one day select posts predicted to go viral and stop misinformation from spreading faster than human reviewers could.

Additional perks from this automated system, they suggested, would include X note raters quickly accessing more thorough research and evidence synthesis, as well as clearer note composition, which could speed up the rating process.

And perhaps one day, AI agents could even learn to predict rating scores to speed things up even more, researchers speculated.
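The paper stops short of describing how such a predictor would be built, but the general shape of the idea is straightforward: train a model on past rater verdicts, then use it to estimate how a new draft would score before raters ever see it. The sketch below is purely illustrative; the features and toy data are invented here, not drawn from X's system.

```python
# Hypothetical sketch: estimating whether raters would score a draft note as
# "helpful" before it is shown to them. Feature names and training data are
# invented for illustration; the paper does not describe an implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per draft note: [cites_sources, word_count / 100, reading_grade_level]
X_train = np.array([
    [1, 0.8, 9.0],   # sourced, concise, plain language -> rated helpful
    [1, 1.2, 10.5],
    [0, 0.4, 7.0],   # unsourced -> rated not helpful
    [0, 2.5, 14.0],  # unsourced, long, dense -> rated not helpful
])
y_train = np.array([1, 1, 0, 0])  # 1 = past raters marked the note "helpful"

model = LogisticRegression().fit(X_train, y_train)

draft_note = np.array([[1, 1.0, 9.5]])  # a new AI-written draft
print("Predicted probability of a 'helpful' rating:",
      model.predict_proba(draft_note)[0, 1])
```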
However, more research would be needed to ensure that wouldn't homogenize community notes, buffing them out to the point that no one reads them.

Perhaps the most Musk-ian of the ideas proposed in the paper is the notion of training AI note writers with clashing views to "adversarially debate the merits of a note." Supposedly, that "could help instantly surface potential flaws, hidden biases, or fabricated evidence, empowering the human rater to make a more informed judgment." Rather than "starting from scratch," the paper said, the human rater would take on a new role in the process.
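The researchers describe the concept rather than an implementation, but the debate setup is easy to picture: one agent is prompted to defend a draft note, another to pick it apart, and the exchange is handed to the human rater. The Python sketch below is a rough, assumption-heavy illustration of that loop; the function, prompts, and dummy model are invented here and don't reflect anything X has actually built.

```python
# Illustrative sketch of the "adversarial debate" idea: two note-writing agents
# with opposing instructions critique the same draft, and their exchange is
# bundled for a human rater. The llm() callable is a stand-in for any chat
# model; none of this reflects X's actual implementation.
from typing import Callable

def debate_note(draft: str, llm: Callable[[str, str], str], rounds: int = 2) -> list[str]:
    """Collect arguments for and against a draft community note."""
    transcript: list[str] = []
    stances = [
        ("Advocate", "Argue that this note is accurate, well-sourced, and helpful."),
        ("Critic", "Find flaws, hidden biases, or unsupported claims in this note."),
    ]
    for _ in range(rounds):
        for name, instruction in stances:
            context = "\n".join(transcript) or "(no arguments yet)"
            reply = llm(instruction, f"Note:\n{draft}\n\nDebate so far:\n{context}")
            transcript.append(f"{name}: {reply}")
    return transcript  # handed to the human rater as a structured summary

# Example with a dummy model so the sketch runs without any API:
if __name__ == "__main__":
    dummy = lambda system, user: f"[model reply to: {system[:30]}...]"
    for line in debate_note("The video is from 2019, not this week.", dummy):
        print(line)
```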
But whatever role AI ends up playing in community notes, it's clear that AI could never replace humans, researchers said.
Those humans are necessary for more than just rubber-stamping AI-written notes. Human notes that are "written from scratch" are valuable for training the AI agents, and some raters' niche expertise cannot easily be replicated, the paper said.
And perhaps most obviously, humans "are uniquely positioned to identify deficits or biases" and are therefore more likely to be compelled to write notes "on topics the automated writers overlook," such as spam or scams.