Wednesday, January 31, 2024

WhatsApp Reinforces Security and Privacy with Locked Chats on Web and Automated Account Reports

WhatsApp is working on a new feature that will let users lock their chats on WhatsApp Web, as spotted by WBI.

Users have long been demanding locked chats on WhatsApp Web, and Meta's messaging platform has now started working on the feature. Locked chats will get a separate tab, which should go a long way toward protecting users' privacy. It is too soon to say anything about the tab's placement or design, as WhatsApp hasn't commented officially. Some observers anticipate that the tab will be protected by a code that lets users hide their private chats.


Keep in mind that chat lock is already available in WhatsApp's iOS and Android apps, and it offers clear privacy and security advantages. The biggest benefit of locked chats on WhatsApp Web will come when users are working around other people and want to keep their conversations private. WhatsApp keeps adding features to both the app and WhatsApp Web so users can use the platform without worrying about their privacy and security.

This new feature shows that WhatsApp wants users to feel secure while chatting with people on their platform.

On the other hand, WhatsApp is also working on automating account reports in an upcoming update. The beta version for Android 2.24.3.30 reveals a feature that will generate monthly reports for user accounts and channels automatically. This promises users a hassle-free way to stay informed about their account activities without manual intervention, enhancing convenience and efficiency. The development aims to streamline the process initiated in 2018, allowing users to effortlessly track changes in their account information over time. Further details on this feature will be shared as the update progresses.


Read next: Content creators on YouTube can now highlight their top clips, reaching more viewers with a new feature
by Arooj Ahmed via Digital Information World

Content creators on YouTube can now highlight their top clips, reaching more viewers with a new feature

Last year, Creator Insider discussed the research tab on YouTube, which includes content gaps for Shorts. The research tab summarizes what the platform's audience is searching for, and that summary can point creators toward content gaps: the kinds of videos users want but cannot find. A content gap topic appears when users cannot find exactly what they are searching for, so creators can use those gaps to make content that doesn't exist yet or could be improved. All of these updates are now available on Studio Desktop (previously they were limited to mobile), with more improvements to come.


Another issue Creator Insider discussed is that many users discover new clips on Reddit or Discord but cannot find them on YouTube. YouTube has now launched a feature that lets content creators highlight their top clips on their channel so more viewers can discover and interact with them. To use it, creators have to enable the feature in YouTube Studio's customization tab.

A new analytics feature for playlists is also being introduced. If you have a channel and want to see a playlist's analytics, just click on the playlist and YouTube will show grouped analytics for the videos it contains, similar to the analytics available in advanced mode but scoped to the playlist. YouTube is also adding automatic early access for content creators. Until now, creators had to change a Members First video's visibility manually; now they can set a date on which the video will automatically be released to the public. In other words, a creator can publish a video to members first and schedule its public release for later. Members receive a notification when the video is available to them, and general subscribers receive one when it becomes public.

Read next: Q4 2023: Google's Ad Revenue Surges by $6.48 Billion, YouTube Ads Cash In at $9.2 Billion
by Arooj Ahmed via Digital Information World

OpenAI Faces Legal Issues in Europe Over Data Privacy Concerns

OpenAI, the company that created ChatGPT, might have to pay big fines. Italy's data protection authority has accused it of not following Europe's data privacy laws, and the issue appears to center on a failure to control the content shown to young users.

In Italy, the Data Protection Authority, called the Garante, informed OpenAI they might have broken data protection rules. They did not give details on what OpenAI did wrong or what actions they might take.

The Italian regulators are worried about young users seeing bad content on the chatbot. OpenAI's website says users should be at least 13 years old and those under 18 need permission from a parent or guardian.

The Italian agency is also looking at how OpenAI collects data from users to train the chatbot. The Garante thinks OpenAI might have broken rules in the EU's General Data Protection Regulation.

Italy previously banned ChatGPT, the first such ban in Europe, but lifted it after OpenAI addressed the regulator's privacy concerns.

Under the GDPR, companies that break these rules can be fined up to 20 million euros or 4% of their global annual turnover, whichever is higher.

It's not clear if OpenAI will face another ban because of this new issue. The Garante has not made any comments yet.

OpenAI disagrees with the Italian agency. They say they follow the GDPR and other privacy laws.

They are also trying to use less personal data in their systems. OpenAI wants to keep working with Garante to solve these problems.

This is not the only trouble for OpenAI. They are facing legal and regulatory issues in the US too. In the US and Europe, authorities are looking at OpenAI's relationship with Microsoft for potential competition issues.

This attention grew after Microsoft helped bring back OpenAI's CEO, Sam Altman, who was briefly fired.

OpenAI is also dealing with a lawsuit from the New York Times. The newspaper says OpenAI used its articles to train ChatGPT without permission or payment.

Image: DIW

Read next: Microsoft Sees Revenue Growth in Latest Financial Quarter
by Mahrukh Shahid via Digital Information World

Microsoft Sees Revenue Growth in Latest Financial Quarter

Microsoft has reached a big milestone. It is now a $3 trillion company. Its latest financial quarter was the strongest so far.

The money it made from search and news ads grew by 8%, a bit less than the 10% growth in the previous quarter.

LinkedIn, which is part of Microsoft, also did well. Its revenue went up by 9%.

The total revenue in Microsoft's productivity and business area was $19.2 billion. This is a 13% increase.

For the quarter ending December 31, 2023, Microsoft's total sales were $62 billion. This is 18% more than last year.

The company's net income also went up a lot. It was 33% higher at $21.9 billion.

A big part of this success was due to Office and cloud services. These areas make up about 60% of Microsoft's total revenue.

The gaming side, especially Xbox, also did well. Xbox revenue went up 61%, largely because Microsoft bought Activision Blizzard, which helped increase total gaming revenue by 49%.

Even though Microsoft did well overall, the growth in search and news ads was not as big as other areas.

LinkedIn's steady growth might mean it's doing better than other parts of Microsoft. Microsoft is using AI in many ways to get new customers and improve productivity.

Satya Nadella, the leader of Microsoft, shared some thoughts. He said they are now using AI a lot, not just talking about it.

They put AI into every part of their technology. This is helping them get new customers. It's also making things better and more productive in many areas.

Image: DIW - AIGen

Read next: Microsoft Edge In Hot Waters Again After Being Accused Of Data Theft
by Mahrukh Shahid via Digital Information World

Tuesday, January 30, 2024

Apple Fears UK Government Will Quietly Veto Tech Changes Made By Leading Companies

iPhone maker Apple is worried about the power the UK government would hold to pre-approve any changes that leading tech giants make to their security features.

A series of proposed amendments to the 2016 Investigatory Powers Act (IPA) was discussed recently, with the Home Office claiming that the update supports privacy-focused technology and that the goal is to keep the nation safe.

A government spokesperson said the aim is simply to keep investigatory powers up to date with modern technology.

Another statement from the UK government added that while innovation clearly needs support, anything that comes at the cost of users' privacy must be examined in detail, as the main goal is to ensure the nation's safety.

If the government refused to approve a security update from a tech giant like Apple, the update could not be released in other nations either, and the public would not be informed about any of this.

The government says it supports innovation, but not at the cost of security, and that it has always been clear in its support for technological innovation and privacy, including end-to-end (E2E) encryption.

The proposed changes would be debated in the House of Lords on Wednesday; the tech giant feels they are quite an overreach by the British government.

Many observers are concerned about what these changes could mean. Critics say the proposals would let the government secretly veto security protections and stop companies from shipping them to users around the globe.

Speaking to the BBC recently, the Home Office said it stands by the proposals, arguing that threats such as terrorism and abuse are at a peak and that those responsible must be brought to justice.

Meanwhile, critics have long derided the existing law as a "snoopers' charter," and this is not the first time the iPhone maker has pushed back against proposals to expand its scope.

In July of last year, the tech giant said it would rather pull services like FaceTime and iMessage from the United Kingdom than compromise their security.

The proposals would reach well beyond iMessage and FaceTime, covering a host of other products.

At the start of January, civil liberties groups including Big Brother Watch, the Open Rights Group, and Privacy International came out in opposition to key parts of the bill.

The groups said they were worried that the changes would force technology firms, including those based abroad, to notify the government of any plans to improve security and privacy, allowing the government to issue notices blocking such changes from taking place.

This, they argued, would quietly but effectively turn private firms into arms of a surveillance state and erode the security of both the web and users' devices.

The changes are framed as part of a review of the existing law and include a wide array of updates around data collection and the use of internet connection records.


Photo: Digital Information World - AIgen

Read next: New Study Shows that AI Will Help in Increasing US GDP to About $1 Trillion in Coming Years
by Dr. Hura Anwar via Digital Information World

Monday, January 29, 2024

New Study Shows that AI Will Help in Increasing US GDP to About $1 Trillion in Coming Years

According to a study conducted by Cognizant, AI could bring a lot of change to the US economy in the coming years: it is predicted to add about $1 trillion to the economy within a decade. But that's not the whole story, because many US workers will also lose their jobs as AI takes them over. The study used a model to demonstrate how AI could affect businesses under low, middle, and high adoption scenarios, and the first finding was that AI is quickly spreading through many businesses.

2023 was a year when companies were experimenting with AI, but in the coming years much of the workload will shift to these tools. About 13% of US companies are expected to adopt AI for work within the next 3 to 4 years, rising to 31% within 4 to 8 years and to 50% within a decade. After 10 years, AI adoption will slow a little but won't stop completely, and most companies will have adopted AI at work within the next 20 years.


The researchers studied the tasks that run the US economy, such as content generation, market analysis, reports, and emails, and compared that work against what generative AI can do. They found that in about 90% of US jobs, roughly 50% of tasks could be done with the help of AI, and that 52% of jobs will be affected by AI-driven work in one way or another. This means that about 9% of US workers could lose their jobs in the next 10 years.

In the next few years, the US economy is set to change as AI-driven productivity adds up to $1 trillion to GDP. AI will affect many kinds of work and is expected to take over jobs in credit analysis, computer programming, data administration, and graphic design. That doesn't mean everyone will lose their job: employees with strong critical thinking and knowledge skills will be highly valued in the coming era of artificial intelligence.

Read next: CenterView Founder Highlights Critical Thinking as Key Amidst AI's Potential Skill Replication
by Arooj Ahmed via Digital Information World

Sunday, January 28, 2024

CenterView Founder Highlights Critical Thinking as Key Amidst AI's Potential Skill Replication

AI is getting more popular by the day, especially among young people, and many youngsters are using it for their work, especially coding. But one of New York's top bankers argues that the skill young people should be building isn't coding, it's good judgment and critical thinking. Blair Effron, co-founder of Centerview Partners LLC, said: "I am not sure if I have ever advised my kids that learning coding is going to be a good skill in the coming 10 years. What I advise my kids is that critical thinking and judgment skills are going to help them a lot in the next few years."

Effron is known for handling some of the largest corporate deals in US history, which is why his opinion on these matters carries weight. He said critical thinking skills are very important in investment banking, and AI models do not possess them; AI is set to replace many skills, but critical thinking is something it cannot replicate. Nobel Prize-winning economist Christopher Pissarides has similarly said that new employees should look for positions that draw on their empathetic skills rather than skills that AI will dominate in the coming years.

Business schools are now asking their students to build and review their own business models. Effron's firm advises on around 12% of global deals, work that rests on exactly these critical skills. He said there is no reason to believe 2024 cannot top the $4 trillion deal market of the past five years, an outlook he attributes to his optimism. He also warned of risks to the US economy if Donald Trump beats Joe Biden in the election, calling it a potential major setback, and said that protecting corporate America requires good relationships with both allies and adversaries.

CenterView Founder: Critical Thinking Trumps Easily Replaceable Skills by AI in the Future Workforce
Photo: Digital Information World - AIgen

Read next: Will AI Create Jobs? This Staffing Agency Says Yes
by Arooj Ahmed via Digital Information World

Saturday, January 27, 2024

ByteDance Unveils StreamVoice: AI-Powered Live Voice Conversion Raises Deepfake Concerns and Misinformation Risks

ByteDance, the renowned Chinese technology firm responsible for the popular TikTok platform, has unveiled something new for its users—StreamVoice. This tool, leveraging generative-AI technology, enables users to seamlessly alter their voices to mimic others.

As of now, StreamVoice remains inaccessible to the general public, yet its introduction underscores the noteworthy progress in AI development. The tool facilitates the effortless creation of audio and visual impersonations of public figures, commonly referred to as "deepfakes." Notable instances include the use of AI to emulate the voices of President Joe Biden and Taylor Swift, a phenomenon particularly prevalent as the 2024 election looms.

Collaborating on this groundbreaking initiative are technical researchers from ByteDance and Northwestern Polytechnical University in China. It's imperative to note that Northwestern Polytechnical University, recognized for its collaborations with the Chinese military, should not be confused with Northwestern University in the United States.

In a recently published paper, the researchers underscore StreamVoice's capacity for "real-time conversion" of a user's voice to any desired alternative, requiring only a singular instance of speech from the target voice. The output unfolds at livestreaming speed, boasting a mere 124 milliseconds of latency—a significant achievement in light of historical limitations associated with AI voice conversion technologies, traditionally effective in offline scenarios.
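To see why 124 milliseconds matters for livestreaming, compare per-chunk processing time against the audio it covers: as long as each chunk is converted faster than it plays back, the stream keeps up and listeners notice only a fixed delay. A back-of-the-envelope sketch (the latency figure is from the paper; the chunk size is an illustrative assumption, and treating latency as the per-chunk processing budget is a simplification, not ByteDance's actual pipeline):

```python
def streaming_feasible(chunk_ms: float, process_ms: float) -> bool:
    """Streaming keeps up if each chunk is processed no slower than it plays."""
    return process_ms <= chunk_ms

# Suppose audio arrives in 160 ms chunks (an illustrative choice) and the
# pipeline's end-to-end latency is the paper's reported 124 ms.
chunk_ms = 160.0
latency_ms = 124.0

print(streaming_feasible(chunk_ms, latency_ms))  # True: output never falls behind
print(f"real-time factor: {latency_ms / chunk_ms:.2f}")  # below 1.0 means faster than real time
```

A real-time factor below 1.0 is what historically separated offline voice conversion from the live, streaming scenario the researchers describe.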

The researchers attribute StreamVoice's success to recent advancements in language models, enabling the creation of a tool that performs live voice conversion with high speaker similarity for both familiar and unfamiliar voices. Experiments, as detailed in the paper, emphasize the tool's efficacy in streaming speech conversion while maintaining performance comparable to non-streaming voice conversion systems.

Referring to Meta's Llama large language model, a prominent entity in the AI landscape, the paper details the utilization of the "LLaMA architecture" in constructing StreamVoice. Additionally, the researchers incorporated open-source code from Meta's AudioDec, described by Meta as a versatile "plug-and-play benchmark for audio codec applications." Training primarily on Mandarin speech datasets and a multilingual set featuring English, Finnish, and German, the researchers achieved the tool's proficiency.

Although the researchers refrain from prescribing specific use cases for StreamVoice, they acknowledge potential risks, such as the dissemination of misinformation or phone fraud. Users are encouraged to report instances of illegal voice conversion to appropriate authorities.

AI experts, cognizant of advancing technology, have long cautioned against the escalating prevalence of deepfakes. A recent incident involved a robocall deploying a deepfake of President Biden, urging people not to vote in the New Hampshire primary. Authorities are currently investigating this deceptive robocall, underscoring the urgent need for vigilance in the face of evolving AI capabilities.

Content generated using AI and reviewed by humans. Photo: DIW - AIGen

Read next: Data Shows Most Popular AI Tools in 2023, With ChatGPT Coming At Top
by Irfan Ahmad via Digital Information World

Data Shows Most Popular AI Tools in 2023, With ChatGPT Coming At Top

New AI tools have been introduced almost daily since AI took off in 2023. Many LLMs (Large Language Models), including text-based assistants and image generators, are now in everyday use. A report by Writerbuddy shows how frequently these AI tools are being used.

Unsurprisingly, OpenAI's ChatGPT was the most popular AI chatbot in 2023. Introduced to the public in November 2022, it is now the biggest AI tool worldwide, with a total of 14.6 billion visits, about 60% of all visits recorded between November 2022 and August 2023. Character.AI is another popular tool, acting as a personalized chatbot or dialogue agent where users can talk with video game and TV characters, or even a psychologist. Third on the list is QuillBot, an AI tool used for various writing purposes. Together, these top three account for about 80% of visits to AI websites.

As the top three all fall under the LLM category, the fourth most popular AI site was an AI image generator, MidJourney. Hugging Face was fifth and Google Bard sixth. Other AI tools on the list included NovelAI, CapCut, JanitorAI, and CivitAI.

All of these tools suggest that the future will revolve around artificial intelligence. Even though AI's rise is very recent, companies are already building many different AI tools, and those tools have already recorded billions of visits. The world couldn't have imagined this wave of AI a few years ago, yet AI has already surpassed all expectations. Now we have to see what the coming years bring to the world of AI.


Read next: Study Shows that TikTok is the Most Popular App Among Gen-Z For Using As a Search Engine
by Arooj Ahmed via Digital Information World

Friday, January 26, 2024

NSA's Secret Web: General Nakasone Unveils Controversial Data Acquisition Tactics!

  • Gen. Nakasone reveals how NSA buys lots of Americans' internet data without permission for foreign intel and cybersecurity.
  • Netflow data shows internet traffic details, raising privacy worries for mental health and assault survivor sites.
  • Senator Wyden reveals NSA's domestic data collection, worries about agencies getting Americans' data without asking.
  • ODNI urged to make spy agencies follow rules like FTC's for legal data buying and be transparent about data keeping.
The departing chief of the U.S. National Security Agency (NSA), General Paul Nakasone, has made a disclosure that is raising eyebrows among privacy advocates: the NSA has been buying an extensive pool of commercially available web browsing data on Americans, all without obtaining a warrant. The disclosure, made public by Senator Ron Wyden after correspondence with Nakasone, peels back the layers on the NSA's acquisition of a diverse array of information from data brokers, used for purposes such as foreign intelligence, cybersecurity, and covert missions.

In his letter, Nakasone highlighted the NSA's interest in commercially available netflow data, covering both wholly domestic internet communications and flows in which a U.S. Internet Protocol address connects with an overseas counterpart. Netflow data is non-content metadata: it reveals how internet traffic flows, illuminating network activity and spotlighting servers that may be harboring potential hackers.
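Netflow records of this kind are metadata only: they describe who talked to whom, when, and how much, never what was said. A minimal, purely illustrative sketch of such a record (the field names and address prefixes are hypothetical, not the NSA's or any vendor's actual schema):

```python
from dataclasses import dataclass, asdict

@dataclass
class NetflowRecord:
    """A hypothetical netflow-style record: traffic metadata, no message content."""
    src_ip: str      # source IP address
    dst_ip: str      # destination IP address
    dst_port: int    # destination port (e.g. 443 for HTTPS)
    bytes_sent: int  # volume of traffic in the flow
    start_ts: float  # flow start time (Unix epoch seconds)
    end_ts: float    # flow end time (Unix epoch seconds)

    def is_domestic(self, us_prefixes: set) -> bool:
        """True only if both endpoints fall within the (toy) US address prefixes."""
        src_us = any(self.src_ip.startswith(p) for p in us_prefixes)
        dst_us = any(self.dst_ip.startswith(p) for p in us_prefixes)
        return src_us and dst_us

# Example: a flow from a made-up US address to an overseas server.
flow = NetflowRecord("198.51.100.7", "203.0.113.9", 443,
                     bytes_sent=48213, start_ts=1706300000.0, end_ts=1706300042.5)

us_prefixes = {"198.51.100."}          # toy prefix set for illustration
print(flow.is_domestic(us_prefixes))   # one endpoint is overseas, so False
print("content" in asdict(flow))       # no message-content field exists, so False
```

Even with no content field, records like these can still be revealing: the destination address and timing alone can indicate which websites someone visited, which is exactly the sensitivity Senator Wyden raises below.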

Despite the NSA's discretion regarding the specific origins of the purchased internet records, Senator Wyden voiced apprehension over the sensitivity of this internet metadata. He underscored its potential to lay bare private information linked to individuals' online ventures, encompassing visits to websites dedicated to mental health, resources for survivors of sexual assault, or telehealth providers specializing in birth control or abortion medication.

Senator Wyden, a member of the Senate Intelligence Committee, first learned of the NSA's domestic internet records collection back in March 2021, but the disclosure could not see the light of day until it was declassified. The revelation adds a layer of complexity to ongoing scrutiny of the U.S. intelligence community's practice of acquiring substantial datasets from private data brokers. While the practice isn't new, the ODNI's acknowledgment of it in June 2023 spurred concerns about its ramifications for privacy and civil liberties.

The NSA's dependence on commercially sourced data for intelligence-gathering has thrown a legal spotlight on the agency, especially as Congress scrutinizes its surveillance powers. Senator Wyden has seized upon recent actions by the Federal Trade Commission (FTC) against data brokers like X-Mode and InMarket, viewing them as significant legal milestones. These actions spotlight concerns about government agencies procuring Americans' data without explicit consent.

The NSA contends that prevailing U.S. law doesn't tether them to obtaining a court order for commercially available information. They argue that such data is equally accessible to foreign adversaries, private entities, and the U.S. government alike. Senator Wyden advocates for the ODNI to enact a policy aligning with FTC standards for legal data sales. This would compel U.S. spy agencies to purge data that doesn't meet these standards, or if retention is imperative, inform Congress or the public.

While the NSA affirms its collection of commercially available internet netflow data, the ambiguity persists on whether the agency also dips into location databases, a practice observed in other federal government agencies. Nakasone clarified in his letter that the NSA refrains from acquiring and using location data from phones or vehicles known to be within the United States, leaving room for interpretation concerning the acquisition of commercially available data originating from non-U.S. devices. The NSA, when probed, declined to expound on Nakasone's statements.

Note: Content is generated using AI and editing by humans. Photo: DIW - AIGen

Read next: The UN is Afraid of Killer Robots, Here’s Why
by Unknown via Digital Information World

Will AI Create Jobs? This Staffing Agency Says Yes

The biggest concern people tend to raise whenever AI comes up is that it might make their jobs obsolete. Generative AI can write books and screenplays, offer weather predictions, and perform various other tasks that once commanded a salary. Despite this, the chief of a Zurich-based staffing agency thinks AI will actually create more jobs than it eliminates.

Denis Machuel is the CEO of Adecco, and he likens the rise of AI to the arrival of the internet: it may cause significant disruption and eliminate certain forms of employment in the short term, but in the long run it will replace those jobs with new roles that require the use of AI.

That said, white-collar jobs will be affected more than blue-collar ones. Any role centered on computing and processing information is likely to fall by the wayside, so legal and financial roles could be in jeopardy.

However, this doesn't mean that all lawyers will be replaced by AI. Problem solving and critical thinking are two things AI hasn't learned to do, at least not the way humans intuitively can. Complex legal matters will still require humans to make the right decisions, even if AI handles the more routine aspects of the job.

Adecco is playing its part by partnering with Microsoft on a platform that helps people see which career paths they can pursue in the age of AI. Many workers already have transferable skills, and new AI-related skills can be learned. The process matters because it could open up new avenues for people whose careers have been upended by this new technology.

Photo: DIW - AIGen

Read next: The UN is Afraid of Killer Robots, Here’s Why
by Zia Muhammad via Digital Information World

Thursday, January 25, 2024

AI Incidents Increased by 30% Year Over Year

Since AI has been advancing at such a rapid pace, it stands to reason that negative AI incidents are also on the rise. It turns out 2023 was a record-breaking year, with 121 incidents recorded according to a recent report from Surfshark. That represents a 30% increase over 2022 and accounts for a solid 20% of all AI incidents recorded since 2010.

Notably, OpenAI was involved in over 25% of the incidents counted. Microsoft came in second with 17 incidents, followed by Google with 10 and Meta with 5.

Quite a few of these incidents involved deepfakes and other forms of impersonation, with figures like Pope Francis and Tom Hanks becoming subjects of AI-generated images. Politicians were popular targets too, from Donald Trump to Barack Obama. With 2024 being an election year, the number of incidents could climb even higher.

It bears mentioning that these incidents actually became somewhat less prevalent in the latter half of the year. The first quarter of 2023 saw 54 incidents, followed by 33 in Q2, but in the third and fourth quarters this plummeted to 14 and 22 respectively.

It will be interesting to see where things go from here. The downward trend might suggest the perpetrators are losing interest, but the election year could still produce a spike that breaks even more records. Whatever the case, AI will only become more advanced, which will make these incidents harder to detect or prevent and far more convincing than they are right now.

Number of AI incidents in 2023 surged by 30% compared to 2022

Read next: IEA Projects Data Center Electricity Needs to Exceed 1,000 Twh by 2026, Raising Environmental Concerns
by Zia Muhammad via Digital Information World

Warrant Necessary for Law Enforcement Officials, Says Amazon Ring

Amazon Ring has updated its policy, now making it mandatory for police and other officials to obtain a warrant to access footage from its doorbell cameras. This change was recently announced in a blog post by the company.

Previously, through the "request for assistance" (RFA) feature, police and public safety agencies could directly request video footage from Ring users, bypassing the need for a warrant. However, this practice has been discontinued. While these agencies can continue to utilize the Neighbors app for sharing safety tips and community information, they can no longer request videos through the app.

The decision to forgo this practice came after Amazon faced severe backlash for sharing private security footage without proper consent. In response, the company had already modified its policy so that police requests for videos were made public in the app. The latest change goes further, mandating that law enforcement can access Ring footage only with a warrant.

Policy analysts have welcomed the step as a positive one, but they also emphasize that Ring's security features need further improvement. They suggest that end-to-end encryption should be enabled by default and that the company should disable default audio collection, which has been shown to capture sound from considerable distances.

Amazon's approach to privacy has long been a subject of concern. In a notable incident last year, Amazon agreed to an almost $6 million settlement with the FTC, stemming from claims that the company failed to properly inform customers about how their data could be accessed. This agreement came in the wake of Amazon's own acknowledgment that it had provided police with video footage in specific "emergency" scenarios, doing so without the consent of the users or a warrant.

Ring discontinues direct police access to user footage, now mandating a warrant for law enforcement.
Photo: Digital Information World - AIgen

Read next: Artificial Intelligence Can Exacerbate Ransomware Attacks, Warns UK's National Cyber Security Center
by Saima Jiwani via Digital Information World

Wednesday, January 24, 2024

Artificial Intelligence Can Exacerbate Ransomware Attacks, Warns UK's National Cyber Security Center

UK-based organizations and businesses have long been prominent victims of cyber threats – particularly ransomware. Britain's top cyber security agency recently investigated the role of AI and predicts that the number of these attacks will only increase with time, as the convenience AI provides gives hackers ample chances to breach sensitive data.

The National Cyber Security Center released a report detailing its findings. According to the agency, AI lowers the barrier to entry for hackers who are new to the game, letting them break into systems and carry out malicious activities without getting caught. With AI tools available round the clock, targeting victims becomes far easier.

The NCSC expects global ransomware incidents to increase significantly over the next two years. Criminals have already created malicious generative AI, referred to as "GENAI," and are set to offer it as a service to anyone who can afford it. Such a service would make it even easier for a layman to break into office systems.

Lindy Cameron, chief executive of the NCSC, urges companies to keep pace with modern cyber security tools. She emphasizes the importance of using AI productively to manage cyber risk.

Ransomware is the most frequent form of cybercrime, and with good reason: it offers substantial financial rewards and has a well-established business model. With the integration of AI, it's evident that ransomware attacks are not going anywhere.

James Babbage, Director General for Threats at the National Crime Agency, backs the report's findings. Criminals will continue exploiting AI for their benefit, and businesses must scale up their defenses to deal with it. AI increases the speed and capability of existing cyberattack schemes and offers an easy entry point for all kinds of cyber criminals, regardless of their expertise or experience. Babbage also warns that child sexual abuse and fraud will both be made worse as the technology advances.

The British Government is strategically working on its cyber security plan. As of the latest reports, £2.6 billion ($3.3 billion) has been invested to protect the country from malicious cyberattacks.

Criminals offer "GENAI" as a service, making hacking office systems accessible. Cybersecurity urged to evolve.
Photo: Digital Information World - AIgen

Read next: 6 In 10 SEOs Don't Think That Google's SGE Will Have a Good Impact
by Mahrukh Shahid via Digital Information World

6 In 10 SEOs Don't Think That Google's SGE Will Have a Good Impact

Google has been hard at work trying to make it so that its search engine can maintain its dominance in the industry. A major part of that over the course of the last year or so has been to incorporate AI into it as much as possible, and this has culminated in the creation of the Google Search Generative Experience, or SGE for short.

The main benefit of SGE, according to Google, is that it enhances the search engine's ability to provide information to users. It generates a snapshot of all the relevant info for a particular search query using AI, and there's also a handy "ask a follow up" button that helps users dive even deeper into the topic.

That said, SEOs don't really seem to think that SGE will have a good impact on the manner in which they do business. A poll posted in the SEOFOMO community revealed that 61% of SEO professionals are worried about how it might affect the industry moving forward.

27% do think that the effects of SGE will be largely positive, but the majority are of the opinion that it will be harmful in the long run in some way, shape or form. When a similar poll was conducted on X, 59.1% (or around 6 in 10) agreed that it was concerning, which suggests the results weren't just a one-off.

It remains to be seen whether or not SGE will have a positive impact on Google Search. It might just drive more traffic away from sites and keep it on the SERP, which appears to be something Google tends to prefer due to the profit it can generate. While SGE is still in testing, Google might roll it out sooner rather than later.

SEOs divided as 27% foresee positive outcomes, but majority express concerns about SGE's implications for the industry.
Image: Digital Information World - AIgen

Read next: Microsoft’s Bing And Edge Browsers Could Avoid Being Regulated Under The Upcoming DMA
by Zia Muhammad via Digital Information World

Tuesday, January 23, 2024

AI Might Not Steal That Many Jobs According to This MIT Study

The general assumption surrounding AI is that it has the potential to end up taking away an inordinate quantity of jobs from human beings, but is there actually any truth to this sentiment? A team at MIT sought to find an answer to this question, and their research revealed that AI might not be the job killer that so many people fear it might be.

This study was conducted at MIT's Computer Science and Artificial Intelligence Laboratory, or CSAIL for short, and it refuted a lot of the assertions made so far. For example, Goldman Sachs has estimated that as many as 25% of jobs could be taken over by AI-related automation in just a few short years, whereas McKinsey estimates that 50% of all work will be done by AI by the year 2055.

A poll conducted by researchers at UPenn, Princeton, and NYU suggested that 80% of jobs could be affected by ChatGPT, which goes to show how pervasive this sentiment truly is. Even so, it might not actually be financially viable to have AI do these jobs, according to the MIT report.

The research suggests that AI can indeed automate certain tasks, but that doesn't mean it can replace the jobs built around those tasks. For example, an average of 6% of a baker's time is devoted to quality control, so if a bakery pays 5 bakers $48,000 a year each, it could save roughly $14,000 annually by having AI handle that task.

"We find that only 23% of worker compensation 'exposed' to AI computer vision would be cost-effective for firms to automate because of the large upfront costs of AI systems," the study highlights.

However, such a system would cost upwards of $165,000 a year in maintenance and upkeep, which means that simply having humans continue doing their jobs would be more financially sensible. This goes to show that just because AI can do a task does not mean it will be cheaper, and businesses will be looking at costs rather than blindly replacing humans. It is more likely that human laborers will incorporate AI into their workflows, which could boost productivity across the board in the coming years.
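The study's back-of-the-envelope comparison can be sketched in a few lines. All figures below are taken from the article's description of the bakery example (note that 5 × $48,000 × 6% works out to about $14,400, which the text rounds to $14,000):

```python
# Back-of-the-envelope sketch of the bakery example described above.
num_bakers = 5
salary = 48_000               # annual salary per baker, USD
quality_control_share = 0.06  # fraction of time spent on quality control

# Wages currently paid for the one task AI could automate
automatable_wages = num_bakers * salary * quality_control_share

# Assumed annual cost of running the computer-vision system
ai_system_cost = 165_000

print(f"Wages saved by automation: ${automatable_wages:,.0f}")
print(f"AI system annual cost:     ${ai_system_cost:,}")
print("Automation pays off:", automatable_wages > ai_system_cost)
```

Since the automatable wages are an order of magnitude smaller than the system's running cost, the automation never pays for itself — which is the study's point.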
Photo: Digital Information World - AIgen

Read next: How Tech Professionals Can Prepare for the Future of IT
by Zia Muhammad via Digital Information World

The Age of Artificial Intelligence: What Modern Tech Means for Journalism

Life can be pretty scary for creators right now. The rise of AI language models like ChatGPT that can produce somewhat convincing pieces of writing, as well as the growing popularity of AI art, have all seen creatives re-evaluate their careers. After all, why would someone pay through the nose for a carefully crafted, curated piece of work, when the sophistication of this emerging technology is progressing by leaps and bounds with no sign of stopping or slowing down?

AI's rise prompts creators, especially journalists, to reassess careers. Can AI replace the essence of human storytelling?
Photo: Digital Information World - AIgen

One of the occupations seemingly most vulnerable in AI's line of fire is journalism. The vocation of delivering news and current events to the people seems perfectly tailored to the efficiency and ease of use of AI. After all, where once you may have had to earn an on-campus or online journalism degree such as a master's, now we can simply build an AI program, feed it enough content so that it learns the tone and structure we want from it, then feed it the data we want it to write about. Boom, journalism.

Right?

How AI Works

Artificial Intelligence has been around for ages. Every time you use a search engine, you're using AI. GPS programs and devices use AI to work out the best routes to a destination, taking traffic and roadworks into account. Yet although AI is integrated into so many aspects of our lives, even healthcare, very few people understand how it works, and how it arrives at the conclusions that it does.

Let’s not think of AI as a program for a moment. Let’s think of it as a brain. We already know computers can accept input and produce output - for example, pressing a key (input) to print a letter to the screen (output). This is a process of receiving, understanding, and responding to stimuli, just like a human brain.

Image: Stefano Bucciarelli/Unsplash

When an AI program is first written, it's just like the brain of a newborn baby: inexperienced, curious, and impressionable. AI models begin with a "supervised learning" period, where the creators feed the AI brain a whole bunch of input that is defined. In the context of a baby, this can be likened to learning "mama" and "dada" as first words; through continued exposure, the infant learns the names of its caregivers. The more data it is exposed to, the more the model learns, retains, and can accurately respond to. After a while, the AI will have acquired enough knowledge to undergo unsupervised learning, where, after being given some parameters, it is allowed to go through unlabelled data and learn what it can. In the context of a child, we can see this as going to daycare, and eventually school.

The next step is called the “Transformer Architecture,” and it performs a task that we often think of as unique to human brains - it draws from established knowledge to reach contextual conclusions about new knowledge. For example, if a child has only ever seen and used chairs before, they will likely be able to look at a barstool for the first time and, using their understanding of what chairs look like, where they’re used, and how they function, will be able to ascertain what a barstool is and how to use it. The transformer architecture does the same thing.

There is certainly more to it, but that’s the basic process. A program is created and fed defined data, then it’s fed undefined data, that it then defines itself, and then it uses the transformer architecture to contextually define more data. This is how ChatGPT and other AI models of its kind can absorb, understand, and then accurately respond to questions and prompts provided by the users.
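The two learning phases described above can be sketched in a toy, pure-Python example. This is only an illustration of the idea, not how real models like ChatGPT are built; all data and names here are invented:

```python
# --- Supervised phase: every example comes with a label ---
labeled_data = [([0.0, 0.1], "mama"), ([0.2, 0.0], "mama"),
                ([5.0, 5.1], "dada"), ([5.2, 4.9], "dada")]

def centroid(points):
    """Average position of a group of 2-D points."""
    return [sum(p[i] for p in points) / len(points) for i in range(2)]

def train_supervised(examples):
    """Learn one centroid per label from labeled examples."""
    by_label = {}
    for point, label in examples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Classify a new point by its nearest learned centroid."""
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(model[lbl], point)))

model = train_supervised(labeled_data)

# --- Unsupervised phase: no labels, the model finds structure itself ---
unlabeled = [[0.1, 0.2], [0.0, 0.0], [5.1, 5.0], [4.9, 5.2]]
# One crude clustering pass: seed two groups with the two farthest points.
seeds = [unlabeled[0],
         max(unlabeled, key=lambda p: sum((a - b) ** 2
                                          for a, b in zip(p, unlabeled[0])))]
clusters = [min(range(2), key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(seeds[i], p)))
            for p in unlabeled]

print(predict(model, [0.1, 0.0]))  # classified using supervised training
print(clusters)                    # grouping discovered without any labels
```

The supervised model can only answer with labels it was taught, while the unsupervised pass discovers that the data falls into two groups without ever being told what they are — the same distinction the baby analogy draws.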

Photo by Tyler Franta on Unsplash

What This Means for Journalists and Journalism

Technology is constantly being developed, and there are always new ways to research and present information. Over the years there has been a dramatic shift from traditional media being responsible for relaying accurate, trustworthy information to the public, to online news outlets and content creators being the dominant source of information about current events.

Now, we stand at the precipice of a new industrial revolution. Although it can be easy to look at the things ChatGPT does and believe that it will dominate, possibly even wipe out career journalism and other industries, it appears to be more of a panic reaction than a logical one.

Although AI will inevitably disrupt industries, disruption also brings new opportunity and scope. Not only that, but AI models like ChatGPT are capable of summarizing studies and articles, speeding up the research behind journalistic and expository writing.

To illustrate the difference, allow us to put it in the following terms.

An AI will be able to produce a passable script on the effects of war by using data, statistics, eyewitness accounts, images, and videos. However, an AI cannot go to the scene of the war, develop its unique impressions, and produce a written or video piece with the creative decisions of a human journalist. An AI-created piece on the horrors of war would resemble something more akin to a documentary where a person in a seat just sits and rattles off what happened. You will learn, but you won’t understand, feel, or be stirred by it.

Modern journalism is just as much art as it is fact, and although this can produce discrepancies and ethical dilemmas when one overtakes the other, it is the hints of humanity that make journalism a timeless profession. People will forever need to know what is going on in the world, but it's the actions of journalists and the people they report on that put these events into a more "human" context, so that people aren't just aware of these events, they are moved by them.

If you’re a journalist concerned about your job once AI hits the main stage, we have some encouraging words for you. First of all, AI has already hit the main stage and you’re still here. Second, an AI can never do what you do. Finally, maybe you will be the person behind the next big development of the collaboration between AI and journalists. Either way, AI is merely a hammer, and that’s all it’s going to be for a long time. It’s up to you to drive the nail in.

by Irfan Ahmad via Digital Information World

Meta Rolls Out Detailed Strategy On How It Plans To Combat New EU Competition Rules

It’s no surprise that impressing the EU is never an easy ordeal, especially when you’re a leading tech giant in today’s competitive industry.

Regulators in the European Commission keep rolling out stringent strategies to help ensure Big Tech stays in check and that means restrictions galore.

But Facebook’s parent firm Meta is making sure that it’s ready to tackle all the hurdles being thrown in its direction by responding to the latest set of competition regulations in the best manner and they’re being very transparent about the matter by making sure fairer dealings arise on leading apps.

The DMA applies to six of the world's leading tech giants, Meta among them.

The EU dubbed Meta a 'gatekeeper' and listed six of its offerings as core services under the new law: the Facebook and Instagram apps, its advertisement delivery systems, the WhatsApp and Facebook Messenger messaging apps, and its virtual marketplace platform, Marketplace.

But it should be remembered that the rules apply to a wider range of services rolled out by gatekeepers.

So the DMA restricts how gatekeepers may process user data for advertising. The rules also state that gatekeepers can't link user data between core platform services, or combine it with user information from Meta's other services or with data provided by third parties, unless users are given a genuine choice and their consent is obtained.

The deadline for gatekeepers to comply with the DMA is March 7th, 2024, and as the date nears, top tech giants are scrambling to ensure they are compliant. This means making as many amendments as necessary to satisfy the EU.

Meta published a post on the matter, explaining that it would soon begin rolling out notifications to users in the affected region, giving them more choice over how they use its services. One of the top choices is blocking the company from linking their data between the Facebook and Instagram apps.

This is a huge deal, as it means Facebook can no longer pursue its long-standing goal of sharing data between Facebook and Instagram and cross-linking the two platforms, something Meta has prioritized since it spent billions to acquire the app.

That acquisition enabled Meta to boost ad targeting using users' activity, as it bought a major arch-rival and gained access to the app's data at the same time.

Users of the company’s leading apps would now be able to see the account separation choices through any existing Account features found.

Meta hopes these changes will satisfy the DMA when it comes into effect this year, though it suggests the choices will not go live until the March deadline arrives.

The choices will also let Facebook Messenger users in the region prevent Meta from using their information, but anyone who opts out will need to create a separate Messenger account – friction that may discourage people from firewalling their messaging activity from the public social network.

Other changes include letting users stop Meta from linking their gaming activity with their social networking activity, by opting out of access to social gaming features.

Photo: Digital Information World - AIgen/HumanEdited

Read next: X Bug Causes Hundreds Of Posts To Be Flagged As ‘Sensitive Media’
by Dr. Hura Anwar via Digital Information World

Are Bad Translations Plaguing The Internet?

Back in the late 1990s, Bill Gates envisioned bringing people from different corners of the world together on a digital platform, where people speaking some 7,000 different languages could gather on the internet as they would in a town square.

And he was right! The World Wide Web has definitely made it possible for people to interact with each other without any physical barriers.

However, a recent study indicates a challenge to this blessing.

According to a study by Amazon Web Services and the University of California, the majority of translations on the internet are not up to the mark. The study compared over 6 billion sentences that had been translated into at least two languages to assess their quality, and concluded that the more times a sentence had been translated, the worse it got.

Researchers believe the low-quality translations were most probably produced by machines. They also found that machines are generating new content, especially in languages that are not widely spoken – for example, Wolof and Xhosa from Africa.

Unfortunately, the lack of resources on the internet has led many users to rely on these bad translations. Additionally, many of these translations have led to funny and embarrassing scenarios for the users.

For example, Google once translated "Russia is a great country" into something about a fictional place in "The Lord of the Rings." In 2019, Facebook's translation tool also made a big mistake with the Chinese President's name in an article translated from Burmese. After realizing the mistake, Facebook did issue an apology, citing technical issues for the error.

And there was a funny mistake when translating medical advice for Armenian speakers. Instead of suggesting "ibuprofen for pain," it said to take an "anti-tank missile for pain."

Seems like most of the language translation services are simply pushing content on the internet just to make money from ads. What do you think?

Photo: AI-gen

Read next: Meta's Facebook and Instagram Are the Most Data Hungry Apps According to This Study
by Saima Jiwani via Digital Information World

Sunday, January 21, 2024

X Forced To Defend Itself After Being Accused Of ‘Shadow Boosting’ Mr. Beast’s First Video On The App

The world’s most popular YouTuber Mr. Beast is famous for a reason. And that has to do with the fact that his content continues to entertain the masses. Perhaps a reason why could be linked to how large of a budget he allocates to keep his viewers engaged.

But a recent finding on X had people alleging that the platform is soaking up profits from Mr. Beast's first video upload by marketing it as an undisclosed advertisement. This forced Musk's firm to step in and refute the claims as far from the truth. A closer look, however, shows that while X might be right on that point, it is still misleading in how its data is displayed to the public.

This past week, several users on X noted that the video appeared in their streams with the standard advertisement disclosures in the drop-down menu, despite not being clearly marked as 'promoted content'. Others speculated further, suggesting the platform was marketing the post to more people to boost the viewer count and make greater profits from what looks like an enticing endeavor for the app.

But the platform clarified that the disclosure in question is linked to pre-roll ads running on the clip, not the video itself. So, per those claims, it is not really shadow boosting the content, though it does seem to be doing everything in its power to make the masses aware of the video. After all, keeping the world-famous YouTuber happy would be a huge marketing win, signaling potential to other creators in the industry.

However, the positives are undercut by the platform not presenting honest stats for the new clip. This was witnessed in an exchange with one fan account on X suggesting the video gets more views on the app, after which one of the company's own workers doubled down on the claims, arguing the comparison favors X because YouTube counts view types that X may not.

But that is a very unfair comparison (or at least a cherry-picked claim). The view count for X posts includes every time a post is displayed in users' feeds, whether the user actually watches it or not. That should not be called a view count, because YouTube only counts a view once the viewer has watched a clip for at least 30 seconds.
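The gap between the two counting policies is easy to see with a toy model. This is a deliberate simplification – the platforms' real counting rules are more complex than this sketch – but it shows why the two numbers aren't comparable:

```python
def impression_views(watch_log):
    """X-style: count every time the post appears in a feed."""
    return len(watch_log)

def threshold_views(watch_log, min_seconds=30):
    """YouTube-style: count only sessions watched for at least min_seconds."""
    return sum(1 for seconds in watch_log if seconds >= min_seconds)

# Seconds each user actually watched (0 = the post merely scrolled past).
watch_log = [0, 0, 2, 45, 0, 120, 31, 0, 5, 90]

print("Impression-style count:", impression_views(watch_log))  # 10
print("Watch-threshold count:", threshold_views(watch_log))    # 4
```

The same audience produces a count of 10 under one policy and 4 under the other, which is why quoting X impressions against YouTube views overstates real interest.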

So as you can see, the video streaming giant's view count is much more indicative of real interest. For X, the view count is simply the figure for post impressions, and as one can imagine, that's much less meaningful.

The X app realizes this, and its employees confirm it in their replies. But boosting confusing stats in this manner and giving the world the impression that the platform is performing far better than it really is, is wrong for many reasons.

It’s quite like how billionaire Elon Musk mentions stats about the app, raising eyebrows with critics for years. Many fail to understand where and how he quotes data from when their research never indicates the same.

What is even more interesting is how Musk continues to reshare the facts like he’s the only one who knows what’s going on in terms of traffic and the rest are immature critics.

You simply cannot fool the world by quoting a small fraction of reality and expect people to accept whatever you say. It's a meaningless effort for obvious reasons. Maybe it will work for the time being as X takes on more small-scale advertisers, but in the long term there are plenty of problems with how authentic the figures really are.

But these views don’t matter at the end of the day if there are no advertisers present to support the brand. So many leading names in the industry continue to be hesitant in terms of rolling out their campaigns here and the chances of winning those names back are not looking great right now either.

Clearly, income from X Premium subscriptions is far from what's required to save the app. If it does not begin to rebuild its advertising business to where it started, the platform could have billions of users and still operate at a serious loss. What do you think?

There is a lot to think about and ponder here and being dishonest with unreliable data is not going to be the right approach, we believe.


Read next: Success For X As Its Mobile Revenue Crossed Major Milestone In December 2023
by Dr. Hura Anwar via Digital Information World

Internet Piracy Has Increased by 36% Year Over Year, Are Streaming Sites to Blame?

The rise of streaming was supposed to bring an end to the age of internet piracy, since it offers a convenient way to access content from anywhere in the world. However, it turns out that the illegal downloading of various forms of media has actually increased by as much as 13% since 2019.

There were around 125 billion visits to piracy sites recorded in 2019, which decreased to 104 billion in 2020. Despite that dip, a report released by Muso revealed that the number of visits reached 145 billion in 2023, a 36% increase from 2020.

92% of this traffic goes towards movie and TV show downloads. 11% of the traffic came from the US, with India comprising another 11%. It bears mentioning that America's share has increased by 2 percentage points since 2018, whereas India's has seen a staggering 7-point increase in that timespan.

Rise in Internet Piracy Despite Streaming Boom: Illegal downloads surge by 13% since 2019, posing challenges to the industry.

Streaming Saturation Blamed for Surge in Internet Piracy: Muso points to overwhelming choices and paywalls driving users towards illegal downloads.

This begs the question: why is piracy traffic on the rise? According to Muso, it may have something to do with the saturated streaming market. There was a time when Netflix was the only streaming service out there, but nowadays practically every media company has launched its own service.

The sheer number of choices overwhelms consumers, with many feeling like the majority of content is locked behind paywalls that are out of reach. Subscribing to each and every service can be a prohibitively costly endeavor, and piracy seems like an easier choice in that regard.

Streaming services might need to introduce bundled packages in the future, which ironically would make streaming just a refreshed version of broadcast and cable television. Either way, the rise in piracy shows that the time to change has arrived, and the industry might not be able to survive if it doesn’t adapt.

Read next: Gamers Are More Likely to Suffer Hearing Loss, Here’s Why
by Zia Muhammad via Digital Information World

Gamers Are More Likely to Suffer Hearing Loss, Here’s Why

A new study, titled "Risk of sound-induced hearing loss from exposure to video gaming or esports: a systematic scoping review", published in BMJ Public Health, has found that gamers face an above-average risk of suffering hearing loss or tinnitus in the long term. This is the result of playing video games for several hours at a time with the volume turned up far past levels that are safe for human hearing.

According to the World Health Organization, a noise level of 80 decibels for forty hours per week is relatively safe. However, even a small excess over this level causes an exponential reduction in the time one can safely be exposed: 90-decibel sounds are only safe for four hours a week, and 95 decibels for just one hour and fifteen minutes.
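The figures quoted above are consistent with the widely used equal-energy rule (a 3 dB "exchange rate"), under which every 3 dB increase in level halves the safe weekly exposure time. The sketch below is illustrative, not an official WHO formula:

```python
# Safe weekly exposure under the equal-energy (3 dB exchange rate) rule.
REFERENCE_LEVEL_DB = 80.0   # safe for 40 hours/week, per the article
REFERENCE_HOURS = 40.0

def safe_hours_per_week(level_db, exchange_rate_db=3.0):
    """Allowable weekly exposure at level_db: halves every 3 dB above 80."""
    return REFERENCE_HOURS / 2 ** ((level_db - REFERENCE_LEVEL_DB) / exchange_rate_db)

for level in (80, 90, 95):
    print(f"{level} dB -> {safe_hours_per_week(level):.2f} hours/week")
```

This reproduces the article's numbers: 80 dB gives 40 hours, 90 dB gives roughly 4 hours, and 95 dB gives exactly 1.25 hours (one hour and fifteen minutes).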

It is important to note that the noise level in the four shooting games analyzed as part of this study hovered between 88.5 and 91.2 decibels. That is just the average; gamers also experience short bursts of up to 119 decibels.

The duration of this exposure may result in irreversible damage to their hearing, which is why it's so important to intervene and inform. It bears mentioning that this study is based on self-reported data going back to the 1990s, and games weren't the same back then. However, it's difficult to deny that a correlation exists between extended gaming sessions and various kinds of hearing loss and tinnitus.

The way to prevent this damage is by encouraging gamers to keep volumes at reasonable levels instead of pushing the sound to the max. We might otherwise start to see a widespread epidemic of hearing loss, especially given how loud music can get at concerts; with gaming now added to the mix, it is more important than ever to educate people on the long-term effects of noise exposure. More evidence will also need to be collected to verify the theories presented in the paper.

Prolonged video gaming linked to increased risk of hearing loss and tinnitus, warns BMJ Public Health study.
Photo: Digital Information World - AIgen

Read next: Excessive Social Media Usage Might Increase Risk Seeking Behaviors Among Children
by Zia Muhammad via Digital Information World

AI Might Surpass Humans in All Tasks by 2047

The rise of AI has been so unbelievably rapid that within the span of just one year it managed to become perhaps the central topic of discussion in the world of tech as well as in day to day life. The main question that people tend to ask here is how effective AI will be at replacing humans in a wide variety of tasks. It turns out that researchers at AI Impacts tried to figure out the likelihood of this occurring in the future.

The researchers at this organization collaborated with the University of Oxford and the University of Bonn to survey 2,778 experts who have published research on AI. According to the surveyed experts, there's a 10% chance that AI will outperform humans in every single task you can think of within the next three years.

If the current trend persists, these experts say the chances of AI surpassing human beings by the year 2047 are around 50%. As for human occupations in particular, the researchers concluded that there's a 10% chance of all of them becoming automated by the year 2037.

Translating text, recognizing objects after seeing them a single time, writing basic code in Python, writing fiction that could reach the New York Times bestseller list, creating a payment processing site, and even creating large language models of their very own could become a reality. AI is advancing far more quickly than anyone could have expected, which makes it rather necessary to figure out where things might go from here.

It bears mentioning that these are perhaps the most pessimistic predictions anyone could make. Optimistic predictions have also emerged, with many saying that AI will complement jobs rather than replace them entirely, but the pessimistic outlook is no less likely than the optimistic one. Indeed, 81% of survey respondents believe that AI will be able to talk like human experts within the next 20 years, which would make the respondents themselves obsolete.

Probability of AI surpassing humans in all tasks reaches 10% within the next three years, study finds.
Photo: Digital Information World - AIgen

Read next: The Enormous Scale of GDPR Fines for Mark Zuckerberg’s Companies Revealed
by Zia Muhammad via Digital Information World

The Enormous Scale of GDPR Fines for Mark Zuckerberg’s Companies Revealed

To date, the social media giant Meta has been hit with a massive $2.8 billion (yes, billion with a B, in USD) in fines for violating the GDPR.

Meta remains a favorite target of EU officials, who have been pursuing the organization left and right over several GDPR violations.

It wouldn't be wrong to call Ireland the world's leading data regulatory authority, as it continues to impose massive fines on big technology companies that fail to follow the stringent GDPR, which came into force in 2018.

Meta is a big player in the digital space, and it has faced legal action for years; let's not forget the billions in fines already handed down. GDPR breaches have been a constant worry for Zuckerberg, and from what we're seeing right now, that's not ending soon.

The company was penalized a massive $442 million in September 2022, and another fine of $425 million was handed to Meta Limited in January 2023. As time went on, things did not get any better. The Irish regulator has been penalizing the organization since the GDPR took effect, and the majority of the fines Meta has been forced to pay have originated there.

Remember, WhatsApp Ireland also drew a leading fine back in 2021, so the pattern is nothing new. One thing is certain: Meta has a lot of work to do to get back into the regulator's good books.

The Irish Data Protection Commission imposed a record 1.78 billion Euros in fines over the past year, the lion's share of all GDPR penalties.

Many major tech firms have their European headquarters in Ireland, and it's striking how long the country has served as the bloc's leading data privacy enforcer. Fines have been hitting tech giants left and right, with Meta the hardest hit of them all. What could the reason behind all of this be?

Both Facebook and Instagram are now being called out for unlawful data processing. The apps have breached the GDPR for many of the same reasons, and many can't help but wonder why Meta does not learn from the past.

One head of data privacy and cybersecurity noted that the country's role as lead regulator appears to have more to do with its favorable business environment than anything else. Recent reports, however, suggest that the number of fines rolled out across the EU in the past year is largely down to successful appeals in several jurisdictions. Many also feel this reflects divided opinions on several decisions, including those of the European Data Protection Board.

In May 2023, the company was barred from transferring data belonging to EU citizens to the US, transfers that investigators deemed a serious privacy violation. Cross-border penalties were imposed on the tech giant, along with a warning that such action could be repeated in the future. Now we are seeing it punished for unlawful data processing, at a cost of billions.

Meta also came under fire for another leading reason: its decision to roll out ad-free premium subscriptions was called out for breaching EU law. As you can see, failure to comply with EU law appears to be the main reason Meta keeps being penalized.

Mark Zuckerberg’s Companies Hit with $2.8 Billion Fine for Data Processing Breaches
Sources: DLA Piper report / Enforcement Tracker

Read next: Google To Profit Billions From Changes To Its Search Thanks To Generative AI
by Dr. Hura Anwar via Digital Information World