Wednesday, July 17, 2024

Meta Responds To Brazil’s Privacy Ruling By Suspending All Its Generative AI Tools

Tech giant Meta has responded fiercely to Brazil’s latest privacy ruling by suspending all of its generative AI tools in the country.

The news comes after the Brazilian government objected to Meta’s offerings and the way they impinge on users’ personal data rights.

The standoff is not too shocking: the Brazilian market features close to 200 million people and is deemed a crucial region for the company. It is also WhatsApp’s second-biggest user base, behind only India.

Just last month, Meta held an event in the Brazilian capital where it spoke about launching its first AI-based advertising program to help businesses benefit from the technology.

However, tensions grew when the country opted to suspend Meta’s new privacy policy governing the use of personal data for AI training.

Brazil’s data protection authority, the ANPD, ruled that the company must remove from its privacy policy the section allowing personal data to be processed for AI training.

In response, Meta has suspended its tools to show the country that it does value its laws, while it begins talks with regulators to find a solution that addresses the government’s concerns about generative AI.

Meta has repeatedly made clear that transparency around its products is necessary to ensure good working relations. It therefore hopes to clear up any doubts the Brazilian government and activist groups may have about its AI tools and the use of personal data to train them.


Read next: Meta Seeks To Tackle Rising Content Theft On Threads With New ‘Use Media’ Feature
by Dr. Hura Anwar via Digital Information World

Apple Watch Car Crash Detection: Review and Legal Implications

The integration of car crash detection in smart wearables has marked a significant advancement in technology, with Apple Watch being at the forefront of this innovation. The feature is designed to detect if the wearer is involved in a vehicular accident and subsequently alert emergency services. Beyond its potential to save lives, the system has been fine-tuned to minimize false triggers, using an algorithm that considers factors such as sudden acceleration, deceleration, and impact force.

But as this technology becomes more widespread, it opens up a discussion on the legal implications of its use. The automatic dispatch of emergency services raises privacy concerns, and there's a question of liability in the event of a false alarm or, conversely, a system failure to respond to an actual crash. Instances where data from such devices are used in legal settings to establish timelines or determine fault have already begun to surface.

Reviewers have put the Apple Watch's car crash detection feature through rigorous tests to assess its reliability and accuracy under various conditions. These evaluations are crucial in understanding how often the feature could potentially save lives, if it functions internationally, and how it impacts emergency response systems. They also consider the implications of relying on a piece of technology during life-threatening events, and whether this fosters a culture of dependence or empowerment among users.

Apple Watch Car Crash Detection Features

The Apple Watch car crash detection leverages state-of-the-art sensors and algorithms to distinguish between everyday movements and vehicular impacts. This feature aims to enhance user safety by providing automated emergency responses in the event of a car accident.

Technology Behind Crash Detection

The car crash detection on the Apple Watch uses a combination of accelerometer, gyroscope, and microphone data to monitor for potential crashes. When certain thresholds are exceeded, which may indicate a high-impact event such as a car crash, the watch initiates a response sequence. The underlying technology includes the following components; a simplified sketch of how such signals might be combined follows the list:

  • High G-Force Accelerometer: Detects sudden acceleration and deceleration forces that occur in a crash.
  • Gyroscope: Measures rate of rotation, helping to assess the orientation changes typical in accidents.
  • Microphone: Listens for loud, concussive sounds characteristic of car accidents.
  • Advanced Algorithms: These process sensor data in real-time to identify accident patterns reliably.
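To make the process above concrete, here is a minimal, hypothetical sketch of threshold-based fusion of these signals in Python. The threshold values, field names, and two-of-three rule are illustrative assumptions on our part; Apple has not published its actual algorithm.

    from dataclasses import dataclass

    # Illustrative thresholds only; Apple's real values and logic are not public.
    IMPACT_G_THRESHOLD = 50.0         # sudden acceleration/deceleration, in g
    ROTATION_THRESHOLD = 8.0          # rate of rotation, in rad/s
    SOUND_PRESSURE_THRESHOLD = 110.0  # loud, concussive sound, in dB

    @dataclass
    class SensorSample:
        peak_g: float          # high g-force accelerometer reading
        rotation_rate: float   # gyroscope reading
        sound_level_db: float  # microphone loudness estimate

    def looks_like_crash(sample: SensorSample) -> bool:
        """Flag a possible crash when several independent signals exceed thresholds.

        Requiring agreement between signals is one common way to keep false
        positives low, in the spirit of the description above.
        """
        signals = [
            sample.peak_g >= IMPACT_G_THRESHOLD,
            sample.rotation_rate >= ROTATION_THRESHOLD,
            sample.sound_level_db >= SOUND_PRESSURE_THRESHOLD,
        ]
        return sum(signals) >= 2  # at least two of the three signals must agree

    # A hard braking event alone should not trigger an alert.
    print(looks_like_crash(SensorSample(peak_g=12.0, rotation_rate=1.5, sound_level_db=85.0)))  # False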

User Experience and Reliability

The user experience of the Apple Watch car crash detection is straightforward and unobtrusive. Users can:

  • Enable/Disable Feature: Choose whether to use the crash detection feature.
  • Emergency Services Alert: Receive automatic alerts and emergency service notifications if a crash is detected.

The reliability of this feature hinges on the accuracy of the sensors and the sophistication of the algorithms used to interpret the data. False positives are a consideration, but Apple has designed the system to minimize these occurrences. The Apple Watch provides clear instructions on how to respond to alerts, which can be especially useful in situations where a person might be disoriented following an accident.

It is important for users to understand how this feature can impact legal proceedings following an accident. An accident lawyer may use data from car crash detection as part of the evidence in a case, as it may indicate the severity and timing of a crash. However, this information should complement, not replace, other evidence and witness accounts.

Legal Implications and User Privacy

The introduction of car crash detection in Apple Watches has raised questions regarding the treatment of sensitive user data and potential legal disputes involving the technology.

Data Security and Legal Compliance

User data obtained during car crash detection events must be handled under the strictest security protocols to ensure privacy and compliance with laws such as the General Data Protection Regulation (GDPR). The responsibility to protect this information lies heavily on Apple Inc., as any breach could bring serious legal repercussions. Key considerations include the following; an illustrative sketch of encryption at rest follows the list:

  • Encryption: Data must be encrypted both in transit and at rest.
  • Access Control: Strict access control policies must be upheld to prevent unauthorized data access.
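As a simple illustration of the first point, this sketch encrypts a crash event record before it is stored, using Python’s widely used cryptography library. The record fields and key handling are assumptions made for demonstration and do not reflect how Apple actually stores this data.

    import json
    from cryptography.fernet import Fernet  # pip install cryptography

    # In practice the key would come from a secure key-management service,
    # would never be hard-coded, and access to it would be tightly controlled.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Hypothetical crash event record (illustrative fields only).
    event = {"timestamp": "2024-07-17T10:32:00Z", "peak_g": 62.4, "location": None}

    # Encrypt before writing to disk ("at rest")...
    token = cipher.encrypt(json.dumps(event).encode("utf-8"))

    # ...and decrypt only for an authorized reader who holds the key.
    restored = json.loads(cipher.decrypt(token).decode("utf-8"))
    assert restored == event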

Responsibility and Legal Cases

When Apple Watches accurately detect car crashes, legal scenarios may involve the use of collected data as evidence. An accident lawyer may rely on this data to support a client’s case, yet the admissibility of such evidence is subject to court discretion. Two main points are:

  • Data Reliability: Can the data be considered reliable and tamper-proof?
  • User Consent: Have users given informed consent for their data to be used in a legal setting?

In cases of false positives or technology malfunctions, determining liability becomes complex. If a watch fails to detect a crash, or mistakenly reports one, affected parties may seek legal action against the manufacturer. This opens up an intricate legal discourse on the extent of Apple's responsibility for its wearable technology's performance in critical situations.

Image: DIW-AIgen
by Asim BN via Digital Information World

Meta Seeks To Tackle Rising Content Theft On Threads With New ‘Use Media’ Feature

Meta’s Threads platform is experimenting with a new feature that could prove very useful for creators.

The news comes as the app looks for a way to push back against the growing number of content theft incidents, which have been a serious concern for many on the platform.

Social media app researcher Alessandro Paluzzi was one of the first to notice it. He says the option is called Use Media and sits alongside alternatives like Repost and Quote, making it more transparent when material is shared on the app.

The option would give users the chance to offer their own take on a picture or other media while crediting the original creator at the same time.

We feel it’s one of the more useful, and handiest, options to appear in a while. It is designed to stop users from simply republishing content they find on the web, such as memes or videos, and passing it off as their own.

With this kind of resharing, the original publisher gets credit for their work and content theft is kept to a bare minimum. How’s that for a healthier ecosystem in which creators can flourish?

Credit to the actual content creator is rarely given these days, and it’s hard for many to avoid having their own content and material ripped off online. This has been a major demotivating factor for creators, so the latest update could really boost morale and give them more incentive to return and keep posting on Threads.

We’ve seen something similar rolling out on Instagram recently.

In April of this year, Instagram made clear that it had started removing aggregator accounts from feed recommendations. The goal was to discourage and cut off the monetization of content that was simply republished from others without any credit.

Instagram’s current system now replaces reposts with original content on Explore, using a new detection system that can identify the original profile in many situations.

The new Use Media feature for Threads operates along similar lines, giving users another means to credit those who made the content and to keep others informed that it isn’t theirs. It also means another creator’s work can no longer simply be right-clicked and saved.

In general, copyright law has become very murky lately, especially in the generative AI era. Getting proper credit or asserting ownership of your material is not easy, and people are profiting off the hard work and efforts of others, the artist community especially.

This is why giving users a one-click option for reusing material while sharing credit is the right step. It could prove to be one of the most valuable additions to the whole Threads experience.

For now, the feature is still being tested, but we do hope a rollout comes soon.


Read next: WhatsApp Launches New Favorites Filter For Quick Access To Important Chats
by Dr. Hura Anwar via Digital Information World

Anyone Can Change Your Business Location Pin on Google Maps Which Can Impact Your Google Business Profile Rankings

There is a new threat facing businesses that have added their location to Google Maps or a Google Business Profile (GBP). Scammers or competitors can easily move a business’s location pin on Google, which can drag down its rankings. The scammers use the ‘suggest an edit’ button (and then the edit map location option) on the Google Business Profile listing to change the position of the pin on Google Maps.



This has been going on for many months, and many business listings on Google have been affected. The biggest problem is that when someone changes a pin on Google Maps, the business owner receives no notification that the pin has been moved. The result is confusion for users and, ultimately, lower rankings for the Google Business listing, with owners unaware of the cause unless they check Google Maps themselves.

Many businesses hesitate to move the pin back to its original location themselves because doing so can also lead to a Google Business Profile suspension. If a business wants its Google Maps location restored, it is best to log in to a Google account with an email that is not linked to its Google Business Profile, then use the ‘suggest an edit’ button to edit the map location, drag the pin to the correct spot, and click save. It is also wise to check your GBP every few days so you know whether anyone has made changes.

Read next: Which App Categories Are the Most Downloaded and Generate the Most Revenue?
by Arooj Ahmed via Digital Information World

New Investigation Accuses World’s Big Tech Firms Of Training AI Using YouTube Without Consent

A new report from Proof News is shedding light on AI training carried out by big tech giants using YouTube without consent.

A whopping 173,000 videos from the popular video-sharing platform had their subtitles taken without the company’s knowledge to train AI models, the report highlighted. The dataset features transcripts from nearly 48,000 channels, scraped directly from the app.

The big tech names accused of the practice include Apple, Anthropic, and even Nvidia, among many others, so as you can imagine, it’s a big deal. The investigation’s findings also point to an uncomfortable truth about the world of AI: much of the technology is built on data taken from creators without any permission or compensation.

The dataset does not contain any actual video from the app, but it does include transcripts from some big global creators like MrBeast and from media powerhouses such as The New York Times, the BBC, and ABC. A host of subtitles came from tech media outlets too.

Remarkably, iPhone maker Apple is also on the list. The company is believed to have sourced its AI training data from a range of firms, and YouTube transcripts were among that data. So as you can imagine, this is going to be a serious issue for many years to come.

As Engadget has noted, YouTube’s CEO previously warned that training models on the app’s data without consent is a clear violation of its terms of service. So far, none of the tech giants accused in the report has responded to requests for comment on this front.

It is quite evident from this news that AI firms are not transparent about the information used to train their models. Over the past month, plenty of artists and photographers have slammed Apple for not disclosing the data sources used to train Apple Intelligence, the company’s own take on generative AI that is headed to millions of Apple devices.

YouTube is undoubtedly a goldmine when it comes to video, audio, and even pictures and therefore is said to be a top source for training purposes.

At the beginning of 2024, OpenAI’s leadership avoided questions about whether or not the company had used YouTube data to train AI models like Sora. Despite further pressing on the subject, they stayed quiet and sidestepped the queries, saying only that any information used was publicly available.

On that note, Alphabet’s Sundar Pichai warned that using YouTube for this purpose would violate its rules.

So as you can imagine, this is going to be one very long and interesting battle and we’re curious to see what steps Alphabet takes against those violating its terms of service.

Image: DIW-Aigen

Read next: YouTube Tests New Community Spaces Feature To Encourage More Engagement Via Text Posts
by Dr. Hura Anwar via Digital Information World

Tuesday, July 16, 2024

Global Millionaire Count to Surge Despite Wealth Inequality, UBS Report Reveals

According to the annual report from Swiss bank UBS, the number of dollar millionaires around the globe keeps rising despite wealth inequality. UBS surveyed 56 markets and found that the millionaire count is set to grow considerably by 2028 in 52 of them. In the Netherlands and the UK, however, the number of millionaires is projected to fall between 2023 and 2028, by around 4% and 17% respectively.

The USA has the most millionaires by far. It had 7.64 million in 2000; by 2023 that figure had risen to 21.95 million, and it is forecast to reach 25.43 million by 2028. In China, around 0.4% of the population are millionaires, roughly six million people in total. China is followed by France, with about 2.9 million millionaires in 2023.

Other countries with large millionaire populations in 2023 include the UK (3.06 million), Japan (2.83 million), Germany (2.82 million), Canada (1.99 million), and Australia (1.94 million). At the start of the century, in 2000, there were just 14.7 million millionaires across the countries analyzed; by 2023 the total had grown to roughly 58 million.

USA's Millionaire Population Soars, Expected to Reach 25.43 Million by 2028

USD Millionaires (current and forecast)
Country 2023 2028 Change (2023–2028)
Taiwan 788,799 1,158,239 47%
Türkiye 60,787 87,077 43%
Kazakhstan 44,307 60,874 37%
Indonesia 178,605 235,136 32%
Japan 2,827,956 3,625,208 28%
South Korea 1,295,674 1,643,799 27%
Israel 179,905 226,226 26%
Mexico 331,538 411,652 24%
Thailand 100,001 123,531 24%
Sweden 575,426 703,216 22%
India 868,671 1,061,463 22%
Brazil 380,585 463,797 22%
Norway 253,085 308,247 22%
Russia 381,726 461,487 21%
Canada 1,991,416 2,402,200 21%
Australia 1,936,114 2,334,015 21%
South Africa 90,595 108,557 20%
Switzerland 1,054,293 1,253,334 19%
Hong Kong SAR 629,155 737,716 17%
Chile 81,274 95,173 17%
France 2,868,031 3,322,460 16%
United States 21,951,319 25,425,792 16%
Belgium 564,666 653,881 16%
Saudi Arabia 351,855 403,874 15%
UAE 202,201 232,067 15%
Germany 2,820,819 3,229,283 14%
Hungary 24,692 28,260 14%
Qatar 26,163 29,927 14%
Singapore 333,204 375,725 13%
Spain 1,180,703 1,327,797 12%
Portugal 171,797 189,235 10%
Italy 1,338,142 1,461,731 9%
Mainland China 6,013,282 6,505,669 8%
Greece 80,655 80,295 −0.45%
Netherlands 1,231,625 1,179,328 −4%
United Kingdom 3,061,553 2,542,464 −17%
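The final column is simply the percentage change from the 2023 figure to the 2028 forecast. As a quick check against the United States row (the snippet below is ours, not part of the UBS report):

    millionaires_2023 = 21_951_319
    millionaires_2028 = 25_425_792

    # Percentage change from 2023 to 2028, as shown in the last column.
    change = (millionaires_2028 - millionaires_2023) / millionaires_2023 * 100
    print(f"{change:.1f}%")  # prints 15.8%, rounded to 16% in the table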

Read next: 2024 Data Shows Decreasing Global Wealth in Most of the Countries this Decade
by Arooj Ahmed via Digital Information World

Fury Increases In Latin America Against Meta After It Fails To Notify Artists About Using Their Data To Train AI

Meta’s use of personal data belonging to the artist community for AI training has long been a subject of debate. Many have resorted to complaints, lawsuits, and even boycotting its apps because the tech giant failed to obtain consent or offer compensation.

Now, fury is increasing in Latin America as many continue to complain about Meta’s AI models scraping their online work for training purposes.

Meta responded by rolling out a new form for users in the UK, US, and EU that gives them the chance to opt out before their data is used. Unfortunately, the same was not offered to people in Latin America, who are now complaining that Meta quietly strips them of their hard work and effort without any compensation.

One illustrator in the EU has explained how Meta used her hand-painted depictions of everyday life in its marketing without consent; since the company did not consider her an artist, it apparently felt no need to ask before using her material.

People in Latin America are furious because AI regulation there is close to non-existent. Privacy rules are scarce, and where they do exist they are barely enforced. As a result, users in this part of the world get no say in whether their content can be used to train Meta’s AI models.

One local media outlet spoke to nine artists from the region and recorded their views, including how blindsided they felt by Meta’s actions. The tech giant refutes the claims, saying it sent plenty of in-app alerts as well as emails explaining what was going on.

Meta is now being slammed for discrimination and for failing to apply globally the same safeguards seen in places like the US and EU. Meta’s spokesperson is not backing down, adding that the firm believes in creating and using AI that is safe and responsible.

The company went on to explain that using public data for AI model training is nothing new and is a widely accepted practice today, certainly not something it sees as unique to Meta.

In September 2023, Facebook’s parent company rolled out a host of new AI features trained on content mined from across its various apps. Posts shared publicly on those apps were part of the information used to train the models it unveiled at Connect.

Common examples include the AI-based search tools built into Instagram, as well as image generators available in countries such as Ghana, India, the US, South Africa, Australia, and Canada.

This is not the first time complaints have come from the Spanish-speaking community. Many artists have reiterated the same view: that Meta engages in data collection practices that are illegal and unfair, reaping benefits at the expense of others.

This is why more companies are now looking to add copyright protection policies to ensure fair dealing, alongside commitments to gender equality and reducing their carbon footprint.

The clearest example is Brazil, which gives its citizens greater protection than any other Latin American nation; its data protection laws are the closest in the region to those of the EU.

Meta is therefore bound by law there to give users the chance to opt out of having their data used to train AI models.

Image: DIW-Aigen

Read next: Most of the Apps on Google Play and App Store are Making a Decent Chunk of the Revenue from Subscriptions
by Dr. Hura Anwar via Digital Information World