(Bloomberg) — The Dutch privacy regulator has sought clarification from ChatGPT maker OpenAI on how it handles personal data when training its underlying system, amid increasing scrutiny of the generative artificial intelligence chatbot.
In a letter to Microsoft Corp.-backed OpenAI, the watchdog asked whether the questions entered into ChatGPT are used to train its algorithm. It also inquired about how the company collects and uses personal data from the internet, the regulator said in an emailed statement on Wednesday.
“That data can contain sensitive and very personal information, for example if someone asks for advice about a marital argument or about medical matters,” it said in the statement.
The Dutch data protection authority’s move comes amid growing calls for oversight of the chatbot. The European Data Protection Board in April set up a task force focused on ChatGPT, through which member states can exchange information on possible enforcement actions by regulators.
About 1.5 million people in the Netherlands used ChatGPT in the first four months after its launch, the agency said.
©2023 Bloomberg L.P.