It’s been close to a year since the EU set up a task force to oversee OpenAI’s popular ChatGPT tool.
The goal was to work out how the EU’s rulebook applies to the viral chatbot. Judging from the preliminary findings published so far, things are mostly fine in that regard, but a host of legal issues remain undecided.
These include the lawfulness and fairness of OpenAI’s data processing.
These issues might seem minor, but they’re pivotal: confirmed violations of the privacy regime can draw penalties of up to 4% of a company’s worldwide annual turnover.
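For a sense of scale, the 4% cap can be sketched as simple arithmetic (a minimal illustration; the turnover figure below is hypothetical, and under GDPR Article 83(5) the ceiling for the most serious violations is the greater of EUR 20 million or 4% of worldwide annual turnover):

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious violations
    (Article 83(5)): the greater of EUR 20 million or 4% of the
    company's total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Hypothetical example: a company with EUR 2 billion in yearly
# turnover would face a cap of 4% of that figure.
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
```

For smaller companies the flat EUR 20 million floor dominates, which is why the cap bites hardest on large, high-revenue firms.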
Watchdogs could also order non-compliant processing to stop. To put it simply, OpenAI faces real regulatory risk at a moment when dedicated AI laws exist on paper but are still far from operational.
Some clarity from the EU’s data protection authorities on how data protection law applies to tools like ChatGPT is clearly needed. For now, the safe bet is that OpenAI will be empowered to carry on with business as usual, despite the number of complaints alleging that its technology violates various aspects of the GDPR.
For instance, Poland’s DPA opened an investigation following a complaint that the chatbot generated false information about an individual and that OpenAI refused to correct the errors. A similar complaint was filed in Austria. A backlog of such cases, simply put, means less enforcement.
The GDPR applies whenever companies collect and process users’ personal information. Large language models such as ChatGPT routinely do exactly that, scraping data from the web for training purposes, including posts siphoned from social media apps.
The regulation also empowers DPAs to order non-compliant data processing to stop. As one can imagine, that is a powerful lever for shaping how AI giants operate in the region, should GDPR enforcers choose to pull it.
We got a taste of that last year, when Italy’s regulator moved against the tool on this front and ordered it to stop processing the data of local users. This resulted in a temporary shutdown of the service across the country.
ChatGPT resumed operations in Italy after OpenAI made changes to its data disclosures and user controls in response to a long list of demands from the DPA. But a huge cloud of uncertainty still hangs over how it can operate across the EU.
To put it simply, a tool that processes users’ data needs a valid legal basis for doing so in order to keep operating. Without one, there is, sadly, no room for it.
The report highlights another integral point: the platform needs legal validation for every stage of data processing. From collection and pre-processing to training and the final output, there is a lot to get right before ChatGPT can be deemed lawful in its behavior.
When it comes to fairness and transparency, the GDPR makes clear that these principles are not optional. OpenAI must comply with them, and no arguments to the contrary will be accepted.
As far as transparency is concerned, the task force notes that OpenAI may rely on certain exemptions from the requirement to notify individuals that data about them has been collected.
Image: DIW-Aigen
by Dr. Hura Anwar via Digital Information World