Wednesday, December 6, 2023

EU Approaches Historic Agreement, Shaping Global AI Governance for Tools like ChatGPT and Generative LLMs

As the sun sets on another day, the European Union edges closer to a groundbreaking agreement on regulating artificial intelligence (AI) – a potential game-changer in the realm of technological governance.

After prolonged negotiations, representatives from the European Commission, the European Parliament, and the 27 member countries have found common ground on controlling generative AI tools. These tools, exemplified by OpenAI's ChatGPT and Google's Bard, can conjure content on demand, marking a pivotal moment for the broader legislation known as the AI Act.

This agreement is not just a bureaucratic milestone; it signifies a crucial stride toward establishing the most comprehensive AI regulation in the Western world. The EU, stepping into largely uncharted territory, is poised to become the first Western government to impose robust constraints on generative AI technology.

This feat assumes particular significance in the absence of substantial action by the US Congress, thereby positioning the EU at the forefront of shaping the narrative surrounding AI tools like ChatGPT and Bard.

The road to consensus has been rife with intricate debates, reflective of the global discourse on AI regulation. The EU, mirroring the struggles of other nations such as the US and the UK, grapples with the delicate equilibrium between safeguarding its burgeoning AI startups and mitigating potential societal risks.

Throughout months of deliberation, policymakers have meticulously fine-tuned the language of the AI Act, racing against time to secure passage before imminent European elections. These elections loom large as a potential catalyst for further modifications, adding an extra layer of urgency to the negotiations.

In the crucible of debate, key sticking points have emerged, with countries like France and Germany opposing rules that could ostensibly handicap their local AI enterprises. Despite these challenges, optimism pervades the air, as officials anticipate a finalized deal early Thursday.

The proposed plan, emanating from EU policymakers, outlines stringent requirements for AI model developers, mandating transparency on training methodologies, summarization of copyrighted material, and clear labeling of AI-generated content. Moreover, models carrying "systemic risks" would be subject to collaboration through an industry code of conduct, necessitating monitoring and reporting of any incidents.

As the clock ticks toward dawn, the EU stands on the cusp of ushering in a new era of AI regulation, steering the course for a technologically shaped future.

EU nears historic agreement, shaping global narrative on AI, including tools like ChatGPT and generative technology.

Read next: The Biggest Mobile App Trends for 2024 Revealed
by Irfan Ahmad via Digital Information World

ChatGPT Doesn’t Have Human-Like Intelligence But New Models of AI May be Able to Compete with Human Intelligence Soon

In 2017, Google researchers introduced the Transformer, the architecture behind most AI products working today; even ChatGPT uses Transformers to chat with users. When ChatGPT, an LLM (Large Language Model), was released in November 2022, many predicted it would start an era of AI models. Bill Gates agreed, saying ChatGPT is the first step toward the age of AI. Now that many LLMs are in use, researchers are aiming for Artificial General Intelligence (AGI): an AI model that would be as intelligent as humans.

Yet even though Transformers power nearly every current AI system, Google researchers say the architecture may not be able to power AGI, because AI hasn't reached the point where it can compete with human abstraction, thought processing, and prediction. ChatGPT, for instance, only responds to the prompts users give it; it cannot initiate anything of its own, and the free version cannot speak to events after its training cutoff. Such limitations are making researchers question whether the dream of human-like AI is achievable at all, and they are trying out other technologies that could work in place of Transformers.

What we can expect in the future is a better AI model than ChatGPT. Albert Gu, an assistant professor in Carnegie Mellon's machine learning department, and Tri Dao, chief scientist at Together AI, posted research to arXiv describing a model called Mamba. Mamba is an SSM (State Space Model) that, they report, works even better than Transformers. Like an LLM, an SSM generates replies to prompts, but it tracks context through a compact mathematical state rather than attention. Dao also wrote on X that Mamba can generate responses up to five times faster than comparably sized Transformers.
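To make the contrast concrete, here is a minimal sketch of the linear recurrence at the heart of a state space model, the building block Mamba extends. The scalar parameters A, B, and C are illustrative placeholders, not Mamba's actual (input-dependent) parameters:

```python
# Minimal sketch of a discrete linear state space model (SSM) recurrence.
# A, B, C are toy scalars chosen for illustration only.

def ssm_step(h, x, A, B):
    # New hidden state: h_t = A * h_{t-1} + B * x_t (scalar case for clarity)
    return A * h + B * x

def ssm_run(xs, A=0.9, B=1.0, C=0.5):
    # Scan the input sequence once, emitting y_t = C * h_t at each step.
    # Unlike attention, each step costs O(1), so generation is linear in length.
    h = 0.0
    ys = []
    for x in xs:
        h = ssm_step(h, x, A, B)
        ys.append(C * h)
    return ys

outputs = ssm_run([1.0, 0.0, 0.0])
# After the initial impulse, the state decays geometrically: h = 1.0, 0.9, 0.81
```

Because the model only carries a fixed-size state forward, it never re-reads the whole context the way attention does, which is the intuition behind the faster generation the researchers describe.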

If research like Mamba keeps progressing, an AI model that can compete with human intelligence may not be far off. ChatGPT works well, but it is nowhere near human-like intelligence; future models may come much closer to generating genuinely human-like responses.

Researchers, including Albert Gu and Tri Dao, explore alternatives like Mamba, an SSM, which outperforms transformers.

Read next: ChatGPT Gave the Wrong Answers for Nearly 74% of Medical Questions
by Arooj Ahmed via Digital Information World

Amazon and Google Challenge Microsoft in CMA Cloud Investigation

Amazon, mirroring Google's recent actions, formally registered a complaint with the UK Competition and Markets Authority (CMA) against Microsoft, highlighting concerns over alleged anti-competitive behavior related to the latter's cloud licensing policies.

In response to the ongoing CMA investigation, Amazon contends that Microsoft imposes separate licenses for its software products when utilized in conjunction with alternative cloud service providers. This, Amazon argues, introduces a financial barrier for customers opting for providers other than Microsoft.

The filed grievance, dated November 23 and recently disclosed to the public, asserts that Microsoft strategically modified licensing terms in 2019 and 2022. Amazon claims these alterations were designed to impede customers from seamlessly utilizing popular software offerings on competitor platforms like Google Cloud, AWS, and Alibaba. The complaint specifically targets Microsoft's alleged effort to complicate the transition process away from its Azure service.

Notably, Google has also weighed in on the CMA investigation, suggesting that Microsoft should be mandated to improve interoperability and furnish security updates for customers transitioning between different cloud providers.

In response to the accusations, Microsoft rebuts the claims, attributing variations in cloud services to natural competition within an innovative market. The company maintains that these differences do not result from illicit business practices tied to licensing.

In its defense, Microsoft underscores the diverse sources of competition in the UK's cloud market, highlighting significant investments by other major players such as Google, Oracle, and IBM. The company expresses a readiness to collaborate with the CMA throughout the ongoing market investigation to explore potential remedies.

Photo: DIW - AIgen

Read next: Most Young People Only Use Instagram and TikTok Because of FOMO
by Irfan Ahmad via Digital Information World

Tuesday, December 5, 2023

Most Young People Only Use Instagram and TikTok Because of FOMO

The Becker Friedman Institute for Economics at the University of Chicago conducted a study on why people use social media apps and how the apps affect them. The study revealed an interesting yet unsettling finding: many young people use social media mainly out of fear that they will miss out. They say they use it because they see everyone else using it and don't want to seem out of touch.

The bottom line of the study was that when individuals are less active on social media than their peers, they fear they will miss new trends and be left behind. The same dynamic applies to luxury brands: people who don't own branded items are often disregarded, and it affects their social status among peers.

The study began by offering more than 1,000 college students money to deactivate their accounts while everyone else kept using social media. The results showed that, on average, $59 was enough for participants to deactivate their TikTok accounts, and $47 to deactivate Instagram. They were then asked what they would accept if everyone else deactivated too: in that case, students would take just $28 to leave TikTok and $10 to leave Instagram.
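The gap between those two sets of figures is the interesting part. A quick back-of-the-envelope calculation using the article's numbers (the "FOMO premium" label is ours, not the study's):

```python
# Willingness-to-accept figures reported in the article, in dollars.
solo = {"TikTok": 59, "Instagram": 47}        # deactivate while others stay on
collective = {"TikTok": 28, "Instagram": 10}  # deactivate if everyone else does too

for app in solo:
    # The difference is the extra compensation needed to leave while peers remain:
    # a rough proxy for the fear of being the only one missing out.
    premium = solo[app] - collective[app]
    print(f"{app}: FOMO premium ${premium} "
          f"({premium / solo[app]:.0%} of the solo price)")
```

For Instagram, roughly four-fifths of what students demanded to quit disappears once everyone quits together, which is exactly the kind of social spillover the study describes.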

The study also revealed that 58% of respondents would prefer to live in a world where TikTok and Instagram didn't exist. Some agreed that everyone should deactivate their accounts if the study participants did, while a small percentage said no one should deactivate at all.

Respondents who said they would deactivate were also asked why they still use social media apps. The main reason was FOMO (fear of missing out): they said that not using social media, especially Instagram, would leave them feeling behind on trends. Others admitted they are simply addicted to Instagram, while some said they use it for entertainment. A maps app was also included in the survey for comparison; respondents attributed their use of it 30% to productivity, 30% to information, and only 10% to FOMO.

Read next: The Rise of Smart Homes in America
by Arooj Ahmed via Digital Information World

Monday, December 4, 2023

OpenAI’s COO Provides Interesting Insights Surrounding The Current AI Hype

As AI comes to dominate the conversation, we're hearing all kinds of perspectives from the masses. But it's especially interesting to hear from the leaders actually responsible for the trends in this domain.

And who better to ask than the COO of OpenAI, who did not hold back in having his say about the current AI boom. While most of us continue to debate how useful the technology really is, this report concentrates on Brad Lightcap's views, and they're very interesting, to say the least.

The AI hype is certainly not a solution for everything. In Lightcap's view, it's striking how many tech giants are pinning unrealistic goals and targets on the breakthrough technology.

In a recent interview with CNBC, he said one of the most hyped ideas in this entire ordeal is the notion that AI can completely transform an enterprise in a single go.

He further added that his organization holds discussions and meetings with other industry leaders, who keep describing targets they have chased for a long time without getting there.

It could be returning to the 20% revenue growth seen in the past that just isn't materializing now, or perhaps reducing costs, and so on and so forth, the executive added.

Lightcap reiterated that this year has seen the boom of generative AI, and ever since OpenAI rolled out its famous tool ChatGPT, it's become clear there is never just a single step that will completely solve a business's problems with AI.

He added that most firms fail to understand that AI creates a new realm of individual empowerment in the industry. And it's not just the big shots who can benefit.

AI can also help ambitious independent entrepreneurs who are just starting out generate more revenue. In one example, a person asked ChatGPT for a plan to get a successful firm up and running with a startup sum of just $100, and it obliged. And while that venture isn't earning millions just yet, it's on its way.

But the COO of OpenAI noted that such stories usually go unannounced and unnoticed, which he thinks is unfair. He added that you can find plenty of people in the industry carrying superpowers, in leading positions only thanks to what these tools equipped them with. What they once perceived as impossible is now possible with assistance.

Programs and tools like ChatGPT give businesses the shortcuts needed to reach desirable results faster and more efficiently, and with the backing of a company's owners and workforce, success is no longer a dream but a reality, he confirmed.

One leading industry example cited was Morgan Stanley, whose AI-enhanced earnings are projected to reach a whopping $83 million over the next five years. Yet some critics can't help but focus on how AI is detrimental to the human race, how it will steal people's work and eliminate jobs, proof, perhaps, that society can never be entirely happy.


Read next: ChatGPT Exposed: OpenAI Warns Repeating The Word ‘Forever’ Is Against Its Terms Of Service As It Could Reveal Training Data
by Dr. Hura Anwar via Digital Information World

More Trouble For Meta As Company Sued For $600 Million Over Repeated Anti-Competitive Behavior In The EU

Tech giant Meta is not going to be happy to hear that another major lawsuit, seeking $600 million in damages, is being fired in its direction.

Media outlets in Spain allege that the tech giant has repeatedly violated the European Union's data protection laws and exhibited anti-competitive behavior across the region, all in service of dominating the local advertising industry. The outlets are now seeking damages for it.

The group that filed the legal case comprises 83 different media outlets in the country, which argue that Meta's constant drive to grab user data without requesting consent can no longer go unchecked. It is a serious violation of the GDPR, they say, which came into force more than five years ago. That Meta continues its unlawful actions regardless, they argue, is proof of how little it cares about others in the industry, including its own user base in Europe.

For those who might not be aware, the GDPR requires any social media app that makes use of user data to first obtain consent from users, since the data belongs to them. Meta's repeated failure to do so, the outlets argue, is both a violation and a sign of its intent to dominate the market through such acts.

For its part, Meta says it is working to comply with evolving interpretations of the legislation, specifically those having to do with obtaining consent.

Earlier this year, the tech giant said it was working hard to align with the clauses outlined in the GDPR by restructuring its apps' requirements.

In a statement on this front, the tech giant said both of its platforms are designed around personalization and that it considers a tailored experience, including the types of ads on display, important to give users.

Calling personalization an imperative component of its services, Meta has relied on "Contractual Necessity" as the legal basis for the behavioral advertising generated from users' activities across its apps, subject to privacy and security controls in users' settings. The company also called it bizarre not to give users a customized experience.

This particular method has been at the center of a major push, with the tech giant noting it has had to repeatedly change the legal basis underpinning the process. It is clear, in any case, that such information can influence behavioral ad marketing.

The changes it is making respond to growing regulatory requirements in the region, including how the Irish Data Protection Commission (DPC), Meta's lead regulator in the EU, interprets the laws in light of recent rulings, while paving the way for the upcoming DMA.

This seems to be a major push from the side of the Spanish press, which feels the same pressure should be applied in other places where the leading tech giant operates. Meta has yet to provide remarks regarding the latest filing in Spain.

The changing data regulations across the EU continue to serve as a huge headache for website operators in this part of the world, who must adhere to rules giving users the right to accept or reject the use of their data.

But even within that jurisdiction, plenty of operators continue to look for loopholes in the system, hoping to keep offering their services without interruption.

Meta itself appears keen to navigate this by rolling out an ad-free subscription package in the EU. It will carry on with its usual ad-supported business model, and users who wish to opt out of ads can pay for the privilege.

Meta says it's not keen on making its users pay for an ad-free version, but offering them the option is how it intends to fulfill the regulatory requirements.


Read next: Meta Cancels Support For Cross-App Chats Between Instagram And Facebook Messenger Apps
by Dr. Hura Anwar via Digital Information World

ChatGPT Exposed: OpenAI Warns Repeating The Word ‘Forever’ Is Against Its Terms Of Service As It Could Reveal Training Data

A team of researchers has shown that asking ChatGPT to repeat a word forever can cause it to reveal segments of its training data.

And now, parent firm OpenAI is responding by noting that such a prompt is also a clear violation of the company's terms of service.

The news arose when authors at 404 Media reported that prompting the chatbot to repeat a word such as "computer" forever could still surface parts of its training data, but now also triggers a warning that such behavior violates its content policy.


For now, experts feel OpenAI's statement about a clear violation is vague, but when you actually sit down and dig into the policy, you see it prohibits decompilation, reverse assembly, reverse engineering, translation, and the like when the intent is to extract source code, or to figure out which model components, systems, and algorithms are used in the chatbot.

The company similarly bars all users from using automated means to extract data from its services.

The glitch came to public attention when a team of researchers, including authors from Google DeepMind, set out to show that the chatbot could be made to disgorge the web data it had scraped for training purposes.

When prompted to repeat a word such as "company" forever, the chatbot repeated the word close to 313 times before abruptly spilling text from a source based in New Jersey, including contact details such as email addresses and phone numbers.
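A small illustrative sketch (not the researchers' actual code) of how such "divergence" can be spotted in a transcript: the output begins with a long run of the prompted word, then abruptly switches to unrelated, possibly memorized text. The sample string and contact details below are made up for the example:

```python
# Toy detector for the repeat-a-word divergence pattern described above.

def split_divergence(output, word):
    # Count the leading run of the repeated word, then return whatever follows.
    tokens = output.split()
    run = 0
    while run < len(tokens) and tokens[run].strip(".,") == word:
        run += 1
    return run, " ".join(tokens[run:])

# Fabricated transcript mimicking the reported behavior: 313 repeats,
# then text that looks like memorized contact details (fictional here).
sample = "company " * 313 + "Contact: someone@example.com 555-0100"
repeats, tail = split_divergence(sample, "company")
# repeats counts the run of "company"; tail holds the leaked-looking text
```

Anything captured in `tail` after a suspiciously long run is the kind of output the researchers then checked against known web data to confirm memorization.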

The team observed that the issue does not arise on every attempt. The authors disclosed their findings to ChatGPT's parent firm in August of this year and waited three months before publishing the data, so OpenAI had plenty of time to find a fix for the flaw.

OpenAI has yet to respond to requests for comment on the matter.

The researchers also want OpenAI to know that they are seeking an exemption under the DMCA that would give security researchers the power to probe company systems and override copyright restrictions in order to highlight vulnerabilities, but as one might expect, big tech giants don't like the sound of that.

Read next: Wall Street in Peril as AI Threat Reaches London Finance Sector
by Dr. Hura Anwar via Digital Information World