First Amendment doesn’t just protect human speech, chatbot maker argues

Although Character Technologies argues that it's common to update safety practices over time, Garcia's team alleged these updates show that C.AI could have made a safer product and chose not to. Character Technologies has also argued that C.AI is not a "product" as Florida law defines it.
That has striking industry implications, according to Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case. At the press briefing, Carlton suggested that "by invoking these First Amendment protections over speech without really specifying whose speech is being protected, Character.AI's defense has really laid the groundwork for a world in which LLM outputs are protected speech and for a world in which AI products could have other protected rights in the same way that humans do." It's a position "they're incentivized to take because it would reduce their own accountability and their own responsibility," Carlton said.

Jain expects that whatever Conway decides, the losing side will appeal.
However, if Conway denies the motion, then discovery can begin, perhaps allowing Garcia the clearest view yet into the allegedly harmful chats she believes manipulated her son into feeling completely disconnected from the real world.

If courts grant AI products such rights across the board, Carlton warned, troubled parents like Garcia may have no recourse for potentially dangerous outputs. "This issue could fundamentally reshape how the law approaches AI free speech and corporate accountability," Carlton said. "They're not people."

Character Technologies declined Ars' request to comment.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.