
Italy's data protection authority has claimed that ChatGPT, OpenAI's generative AI chatbot, is breaching rules designed to protect personal data.
The watchdog, known as the Garante, notified the AI firm of the alleged breaches on Monday as it pressed ahead with an investigation it opened last year into the same matter.
The Garante is one of the European Union's most proactive regulators in assessing whether AI platforms comply with the bloc's data privacy rules.
The authority briefly banned the chatbot last year for contravening EU privacy rules. The service was reinstated after OpenAI addressed the concerns, including users' right to refuse consent for the use of their personal data to train its algorithms. The watchdog nevertheless said at the time that it would continue its investigation, Reuters reported.
Without giving details of the alleged contraventions, the Garante said in a statement that it had found elements suggesting one or more possible violations of EU data privacy rules.
The regulator gave the Microsoft-backed AI firm 30 days to present its defence, adding that its probe would take into account assessments by a European task force comprising national privacy watchdogs.
Italy was the first Western European country to curb ChatGPT, whose rapid development has drawn the attention of lawmakers and regulators. Under the EU's General Data Protection Regulation (GDPR), introduced in 2018, any company found to have violated the rules faces fines of up to 4pc of its global turnover.
In December, EU lawmakers and governments reached a provisional agreement on regulating AI systems such as ChatGPT, bringing the bloc closer to establishing rules governing the technology.