New California law requires AI to tell you it’s AI


A bill regulating the fast-growing industry of companion AI chatbots became law in California on October 13th.

California Gov. Gavin Newsom signed into law Senate Bill 243, billed as “first-in-the-nation AI chatbot safeguards” by state senator Steve Padilla. The new law requires that companion chatbot developers implement new safeguards — for instance, “if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human,” the chatbot maker must “issue a clear and conspicuous notification” that the product is AI and not human.

Starting next year, the law will require some companion chatbot operators to make annual reports to the Office of Suicide Prevention about safeguards they’ve put in place “to detect, remove, and respond to instances of suicidal ideation by users,” and the Office must post that data on its website.

“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement on signing the bill, alongside several other pieces of legislation aimed at improving online safety for children, including new age-gating requirements for hardware. “We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”

The news comes after Governor Newsom officially signed Senate Bill 53, the landmark AI transparency bill that divided AI companies and made headlines for months, into law in California.

