The fast-paced development and adoption of AI have led to much concern in the tech world today.
This is why companies have been calling for stringent laws and for regulators to step in and keep AI in check before things get out of hand.
In a recent discussion, OpenAI CEO Sam Altman made it clear that he feels an international agency should be set up to oversee AI. In his view, that approach is a better bet than laws that are not only poorly drafted but inflexible too.
Altman went on to compare AI to airplanes, arguing that appropriate safety frameworks need to arrive much faster than they currently are. The fact that the maker of ChatGPT is keen to push this idea is notable in itself.
Image: All-In Podcast / YT
This is not the first time we’ve heard him voice concerns about where AI systems are headed. Speaking on a podcast last week, Altman said that serious harm could arise in the decades ahead if AI is not properly regulated.
Because these systems can cause serious negative impacts that reach beyond the borders of any single nation, he is calling for more stringent regulation by a single agency that examines the most powerful systems and ensures the right kind of safety testing takes place.
Only with the right kind of oversight in place can the world treat the use of AI as a balancing act, Altman added, and when you think about it, his point is easy to see.
In OpenAI’s eyes, the danger is getting regulation wrong in either direction: overreach on one side, or too little done too late on the other. When either happens, things can go badly wrong.
And in case you’re still wondering, laws regulating AI are already underway. In March of this year, the EU gave the green light to the AI Act, which categorizes AI systems by the risks they pose and bans uses deemed unacceptable.
We also saw President Biden sign an executive order in the past year calling for more transparency on this front and holding the creators of some of the biggest AI models accountable.
Now, American states like California are taking charge of AI regulation as well, moving to approve close to 30 bills on the matter, as Bloomberg recently confirmed.
But the OpenAI CEO says that, no matter what, the only solution he sees is an international agency, which would offer greater flexibility than local laws given how fast the world of AI is evolving.
His reasoning for the agency-based approach is simple: even the world’s best experts cannot write policies today that will still regulate these systems correctly one or two years from now.
Simply put, Sam Altman knows what he is talking about. AI needs to be regulated much like aircraft, where poor safety standards can be detrimental.
Remember, any loss of human life is never acceptable, and the fact that AI could make that a possibility is scary for obvious reasons. As with airplanes, most people are happy to have some kind of safety framework in place.
by Dr. Hura Anwar via Digital Information World