Snap's AI chatbot has drawn the attention of the UK's data protection authority, which is concerned the technology could endanger children's privacy.
The Information Commissioner’s Office (ICO) declared today that Snap had received a preliminary enforcement notice for a “potential failure to properly assess the privacy risks posed by its generative AI chatbot ‘My AI.'”
The ICO's action does not amount to a finding of a breach. The notice does, however, signal that the UK regulator is concerned Snap may not have taken adequate measures to ensure the product complies with data protection rules, which since 2021 have been tightened to include the Children's Design Code.
“The ICO’s investigation provisionally found the risk assessment Snap conducted before it launched ‘My AI’ did not adequately assess the data protection risks posed by the generative AI technology, particularly to children,” the regulatory body stated in a news release. “In this context, which involves the use of cutting-edge technology and the processing of personal data of 13 to 17-year-old children, the assessment of data protection risk is critical.”
Before the ICO decides whether the firm has broken the rules, Snap will now have the opportunity to address the regulator’s concerns.
In a statement, Information Commissioner John Edwards said: "The preliminary results of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching 'My AI.' We have been clear that businesses must weigh the dangers and advantages of AI together. The preliminary enforcement notice issued today demonstrates that we will take action to safeguard the privacy rights of UK customers."
Snap's chatbot, pinned to the top of users' feeds, is powered by OpenAI's ChatGPT large language model (LLM) and acts as a virtual companion that users can ask for advice or send photos. Snap launched the generative AI chatbot in February, and it arrived in the UK in April.
The feature was initially restricted to subscribers of Snapchat+, the paid tier of the ephemeral messaging app. Before long, however, Snap opened "My AI" to free users as well and gave the bot the ability to send snaps back to users who interacted with it (snaps produced using generative AI).
The company has said the chatbot was built with additional moderation and safety features, including taking a user's age into consideration by default, to keep generated content appropriate for the user. The bot is also designed to avoid responding in a violent, hateful, pornographic, or otherwise offensive manner, and Snap's Family Center parental controls let parents see whether their child has spoken to the bot in the previous seven days.
Despite these ostensible guardrails, however, the bot has reportedly veered off course. In an early evaluation published in March, The Washington Post reported that the chatbot suggested techniques to cover up the smell of alcohol after learning the user was 15. On another occasion, after learning the user was 13 and being asked how to prepare for their first sexual experience, the bot offered advice on "making it special" by setting the mood with candles and music.
Snapchat users have reportedly bullied the bot, and some are upset that AI was ever introduced into their feeds.
This is not the first time an AI chatbot has come under the scrutiny of European privacy regulators. In February, Italy's Garante ordered Replika, the San Francisco-based maker of a "virtual friendship service," to stop processing local users' data, citing risks to minors.
The following month, the Italian regulator issued a comparable stop-processing order against OpenAI's ChatGPT. That block was eventually lifted in April, after OpenAI added more thorough privacy disclosures and enhanced user controls, including the ability for users to request that their data not be used to train OpenAI's models or be deleted.
Google's Bard chatbot also saw its regional launch postponed over concerns raised by Ireland's Data Protection Commission, Google's lead privacy regulator in the region. It launched in the EU in July after Google added more disclosures and controls. Meanwhile, a task force established within the European Data Protection Board is still working through how the EU's General Data Protection Regulation (GDPR) should apply to generative AI chatbots like ChatGPT and Bard.
Dr. Gabriela Zanfir-Fortuna, VP for global privacy at the Washington-based think tank the Future of Privacy Forum (FPF), discussed how privacy and data protection regulators are approaching generative AI. She pointed to a statement adopted this summer by the G7 DPAs, which include watchdogs in France, Germany, Italy, and the UK, listing key areas of concern, such as these tools' legal basis for processing personal data, including data about minors.
According to the G7 DPAs, “Developers and providers should embed privacy in the design, conception, operation, and management of new products and services that use generative AI technologies and document their choices and analyses in a privacy impact assessment.”
The UK's scrutiny of Snap's AI chatbot, centered on concerns about children's privacy, has now reached a pivotal moment. As regulators and other stakeholders continue to weigh the technology's impact, the outcome could shape how AI-driven chatbots are deployed on platforms aimed at young users. Striking a balance between innovation and privacy protection is likely to remain a central challenge in the discussions and any regulatory action ahead.