Thursday, September 19, 2024

Study Finds ChatGPT Gives Culturally Biased Answers, Often Reflecting English-Speaking and Protestant European Countries

A study published in PNAS Nexus found that ChatGPT exhibits biases when it comes to certain cultures. Since ChatGPT and similar AI models are trained largely on data produced by particular cultures, this is not entirely surprising. The researchers asked five different versions of ChatGPT ten questions taken from the World Values Survey, a long-running survey that is an important source of knowledge about what people in different countries believe and value.

The questionnaire included questions such as whether an individual believes in God and how they view self-expression values. OpenAI's model was asked to answer as an average person would. The results showed that ChatGPT mostly answered like someone from an English-speaking or Protestant European country.

This means most of the answers leaned toward self-expression values, covering topics like tolerance of foreigners, environmental protection, diversity, sexual orientation and gender equality. None of the models answered in highly traditional ways, as individuals from Ireland or the Philippines might, nor in highly secular ways, as individuals from Estonia or Japan might.

To counter this tendency, the researchers then asked the ChatGPT models to answer the questions as an individual from each of 107 countries would. The results were somewhat different, with reduced bias for 71% of countries on GPT-4o. The researchers conclude that ChatGPT's biases can be reduced by asking it to answer from a specific perspective: how you phrase a prompt matters a great deal for getting the kind of answer you want.
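The country-persona prompting strategy the researchers describe can be sketched roughly as follows; the prompt template, helper function, and sample question wording here are illustrative assumptions, not the study's actual prompts:

```python
# Sketch of country-persona prompting to reduce cultural bias.
# The template below is a hypothetical illustration, not the
# study's exact wording.

def build_persona_prompt(question: str, country: str) -> str:
    """Wrap a World Values Survey-style question in a country persona."""
    return (
        f"Answer the following question as an average person "
        f"from {country} would.\n\nQuestion: {question}"
    )

countries = ["Japan", "Estonia", "Ireland", "Philippines"]
question = "How important is God in your life, on a scale from 1 to 10?"

# One persona-framed prompt per country, instead of a single neutral prompt.
prompts = [build_persona_prompt(question, c) for c in countries]
print(prompts[0])
```

Each prompt would then be sent to the model separately, so the answers can be compared against that country's actual World Values Survey responses.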

Image: DIW-Aigen

Read next: Generative AI Transforms Marketing Strategies Amid Rising Ethical and Legal Concerns
by Arooj Ahmed via Digital Information World

New Research Shows Many Browser Extensions Compromise User Data and Privacy

According to research by professors from Georgia Tech's School of Cybersecurity and Privacy and School of Electrical and Computer Engineering, many browser extensions can leave users' data vulnerable. Many people rely on browser extensions for purposes like managing passwords, fixing grammar, finding shopping deals and translating web pages.

Browser extensions offer many advantages, but they do not come without risks. The research found that many extensions pose a risk to user privacy, extracting user data without permission and using it for various purposes. Frank Li, the lead researcher, says we already know browser extensions can access users' web history and searches; this new research set out to determine whether extensions can also access sensitive information like emails, passwords, social media accounts and bank information.

The team designed Arcanum, a web framework, to test their suspicions about extensions accessing user data. The researchers studied more than 100,000 browser extensions in the Chrome Web Store. They found that nearly 3,000 extensions can access users' private data, and more than 200 extensions collected private user data directly and transmitted it to outside servers.

Some extensions collect user data for acceptable reasons, for instance to improve the browser's functionality, so it is not always obvious whether an extension is collecting data for legitimate purposes. To determine this, the researchers compared extensions' stated privacy policies with their actual data collection activities. This way they could tell which extensions are legitimately collecting user data and which are not.

Most of the extensions examined were found to lack proper data protection policies. This suggests that browser vendors like Google should take a stricter privacy approach to extensions. Users shouldn't have to worry about their data and privacy when using browser extensions, and stricter policies are needed to protect them.

Image: DIW-Aigen

Read next: Apple Returns To Its Top Spot In America’s Customer Satisfaction Index
by Arooj Ahmed via Digital Information World

Wednesday, September 18, 2024

Apple Returns To Its Top Spot In America’s Customer Satisfaction Index

Samsung made headlines last year when it ended Apple's 20-year run at the top of the American Customer Satisfaction Index. Now, we can confirm that Apple is back at number one, regaining its top spot after a slight shuffle.

In 2023, Samsung tied with the iPhone maker, but that's not the case this year, as the Cupertino firm won outright. HP came in second while Samsung slipped to third.

The results were published in ACSI's annual report, for which nearly 13,000 respondents took part between June 2023 and June 2024. The PC category covers desktops, laptops, and tablets.


Apple regained the crown in customer satisfaction while Samsung's score sank. This year, the credit goes to Apple's Mac and iPad, which scored 85/100, four points above this year's PC average and two points above Apple's 2023 score.

Samsung, on the other hand, which tied with Apple last year, saw its satisfaction score fall another point to 82/100. HP reaching the runner-up spot in 2024 with a score of 84 was notable news, as it had previously trailed behind the tech giant.

It was also impressive to see HP post a 3-point rise year over year.

Amazon and Dell rounded out the top five in fourth and fifth place. Across all brands, desktops outranked laptops and tablets, with averages reaching 82/100. Year over year, that represented a 2-point decline for desktops, a 1-point rise for laptops, and a 5-point rise for tablets.

Read next:

• Meta Strengthens Child Safety on Instagram with New Parental Approval Features

• Warrant Canary: What This Secret Message by Service Providers Means for Users
by Dr. Hura Anwar via Digital Information World

Meta Strengthens Child Safety on Instagram with New Parental Approval Features

Many social media apps are introducing child safety features to protect children and teens from the dangers lurking on social media. Instagram has now made all teen accounts on its platform private and has limited their DMs in an attempt to shield children. If teens want to change the settings of a "teen account", they will need a parent's approval first.


Meta’s head of product, Naomi Gleit, says Instagram has taken this step because of parents' concerns about the excessive screen time, inappropriate content and unwanted contact their children often encounter on the app. With private accounts, teens can only be messaged or tagged by people they follow. Instagram will also send teens screen time reminders.

Meta has long been a target of criticism over its not-so-child-friendly policies. Recently, Mark Zuckerberg apologized to parents whose children died from harms linked to social media. Meta has now blocked harmful content, such as eating disorder material, self-harm content and nudity, for teen users. The Senate also passed the Kids Online Safety Act, which would require apps to block harmful content. There are still concerns that the bill could curb children's freedom of speech, but if it passes the House it would bring significant safety protections to children online.

Meta also only allows children aged 13 and above to create an account. Since kids can lie about their age, Meta has partnered with Yoti, a British company whose technology estimates a person's age from their face. Many social media apps now also ask new users to submit a video or picture as proof that they aren't lying about their age.

Parental controls on social media apps are also becoming more strictly enforced. Meta is working on a feature that will only grant parental controls after verifying that the person is indeed a parent or guardian. But this could also have harmful effects: if a child is in an abusive household, guardians could use these controls to stop teens from speaking up or exploring their identities.

Read next:

• Warrant Canary: What This Secret Message by Service Providers Means for Users

• Generative AI Transforms Marketing Strategies Amid Rising Ethical and Legal Concerns

Majority of Americans Still Turn To Social Media To Get News Insights, New Study Confirms
by Arooj Ahmed via Digital Information World

Warrant Canary: What This Secret Message by Service Providers Means for Users

Warrant canaries have been employed by online service providers since the early years of this century to protect their users' privacy. They became a fixture of online websites and platforms as a response to the United States' Patriot Act, passed in 2001, which gave government agencies broad access to user data held by US-based tech companies. Because this was in direct conflict with the privacy policies of Silicon Valley's websites and tech giants, they resorted to warrant canaries: a covert message to their users.

Warrant canaries have had a history of constant fluctuation: companies have adopted them and later dropped them, and different companies have worded their succinct messages differently over the years. Were they successful? It is still impossible to say. But all this uncertainty is understandable when the government is brought into the equation.

Understand the Origin of the Term To Grasp the Concept

Sometimes an analogy is needed to explain a concept. Caged canaries were once used to detect poisonous gas in coal mines. Because canaries are more sensitive to such gases than humans, they would react first: if the canary died, it was a clear warning of poisonous gas and a sign not to venture any further.

Similarly, websites began posting messages on their main pages stating that they had not received any request from the government or its agencies for data on certain or all people. If that message disappeared, it served as a warning, discreetly telling users that a request from the authorities for specific people's data had been received. These messages came to be called warrant canaries: just as the death of a canary in a coal mine is a warning sign, so is the absence of these messages on a website.

How Does Warrant Canary Really Work?

Many tech companies, such as Reddit and Tumblr, have leveraged warrant canaries. As mentioned above regarding US legislation on access to personal data, the federal government can compel any website operator to hand over user data via an NSL (National Security Letter), which typically arrives with a gag order. The company's only option is to take down the warrant canary on its page, indicating to users that a behind-the-doors request has been received. Moreover, once a request has been received, the company can never use a warrant canary again.

The following is a warrant canary once used by Reddit:

Reddit has never received a National Security Letter, an order under the Foreign Intelligence Surveillance Act, or any other classified request for user information.
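The mechanism lends itself to simple automated monitoring: a script records the canary phrase and raises a flag when it vanishes from the page. The following is a minimal sketch under that assumption; the monitored phrase, function names, and messages are illustrative, and a real monitor would fetch the live page and also watch for wording changes:

```python
# Minimal sketch of a warrant-canary monitor: it checks whether a known
# canary phrase still appears in a page's text. The phrase, function
# names, and messages are illustrative, not a real monitoring tool.

CANARY_PHRASE = "has never received a national security letter"

def canary_present(page_text: str) -> bool:
    """Return True if the canary phrase is still on the page."""
    return CANARY_PHRASE in page_text.lower()

def check(page_text: str) -> str:
    """Interpret the canary's presence or absence."""
    if canary_present(page_text):
        return "canary intact: no known request"
    # Absence is only a hint: it could mean a gag order, or a site update.
    return "canary missing: possible secret request, or a technical change"
```

Note that, as discussed below, a missing canary is a hint rather than proof, since site redesigns can remove the text too.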

History of Warrant Canaries

After the passage of the Patriot Act in 2001, warrant canaries were a natural consequence for protecting users' data on websites. The company rsync.net became the first to employ a warrant canary for this purpose. It took some time before major players realized their importance.

But things changed drastically in 2013, when Edward Snowden, an intelligence contractor and whistleblower, leaked confidential documents revealing the extent of government surveillance. The revelation alarmed many technology companies, which then followed the same path. Apple started using a warrant canary in 2013, and Reddit and Tumblr soon followed suit. The following is the brief warrant canary used by Apple:

Apple has never received an order under Section 215 of the USA Patriot Act. We would expect to challenge such an order if served on us.

But these steps were soon undone: Apple dropped its warrant canary in 2014, and Reddit did so in 2016, for reasons never fully explained.

Moreover, Canary Watch was formed by several organizations to keep a record of warrant canaries and spread awareness about them. It too came to an end in 2016, having, according to the Electronic Frontier Foundation, served its purpose.

Are Warrant Canaries Worth Employing?

One does not have to be a scientist to realize that the government is as aware of warrant canaries as users are. If an intelligence agency wants data from a specific platform without the platform dropping its canary, that is not difficult to achieve: the authorities can simply coerce the company into keeping its canary up, still claiming it has received no order. This possibility undermines all canaries.
The government could even nip this evil in the bud (evil to the authorities, that is) by passing a law banning warrant canaries altogether, as Australia's surveillance law did in 2015. That it has not done so yet is, from one perspective, itself suspicious.

Technical issues can also lead to misunderstandings. A warrant canary might disappear during a website update and reappear within a day or a few days, or a change in wording over time could spread rumors that the site had been forced to hand over data.

This is what cryptographer Bruce Schneier has to say about the effectiveness of warrant canaries:

I have never believed this trick would work. It relies on the fact that a prohibition against speaking doesn’t prevent someone from not speaking. But courts generally aren’t impressed by this sort of thing, and I can easily imagine a secret warrant that includes a prohibition against triggering the warrant canary. And for all I know, there are right now secret legal proceedings on this very issue.

What To Do If Warrant Canaries Are Not the Solution?

There are measures on the users' end that protect privacy better than relying on warrant canaries. Since most of these tech companies are based in the USA, only that country's laws and government can coerce them into handing over data. But if you have an account on a platform based outside the USA, the UK, Australia, New Zealand and Canada, you are in a much better and safer position. Also, if you provide only a limited amount of data to online platforms, you largely sidestep privacy breaches, since you have not handed over information that must be kept confidential or that could lead someone to you.

Or you can go the VPN route. The following VPNs use warrant canaries, keep them updated, and let users browse anonymously. NordVPN is the strongest of them, since it is based in Panama and thus outside American jurisdiction. Surfshark is also a good option, with a canary updated daily. PureVPN, like Surfshark, updates its canary daily.

Things to Remember

Users must bear in mind that the removal of a warrant canary from a website does not necessarily mean the site has received a request for user data; it could be a technical issue. More importantly, even if a request was received, it is not wise to assume you are among the people whose data was sought.

Warrant canaries could be called archaic given the technological evolution of the last decade; governments now have the latest tools for gaining access to whatever they want. But in retrospect, warrant canaries will stand as tech companies' best attempt at securing users' privacy.

Image: DIW-Aigen

Read next: Accessing the Blocked Websites: Follow the Guide to Access Them Without a VPN
by Ehtasham Ahmad via Digital Information World

Google Search To Include Updated ‘About This Image’ Feature To Differentiate Real Images From AI

Google has confirmed that it is working on new technology to distinguish real images from AI-generated ones. That means providing more insight into whether pictures were created with AI models, edited with software like Photoshop, or simply taken with a camera.

In the coming months, search results will include an improved ‘About this Image’ feature that provides more insight into an image's source, the company confirmed today.

current 'About this Image' feature in Google without any AI info

The timing aligns with Instagram's rollout of its ‘Made with AI’ labels, which similarly flag whether AI was used to create or edit pictures.

Google shared more about how the system uses C2PA technology, from the Coalition for Content Provenance and Authenticity, one of the largest industry groups working to identify AI-generated images. C2PA is a technical standard that records where a picture originated and includes details about both the hardware and software used to create it.
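Conceptually, a C2PA record bundles provenance assertions with a cryptographic signature. The sketch below is only a rough picture of that idea: the field names loosely follow the spec's vocabulary (claim generator, assertions, IPTC digital source types), but the dict and helper function are illustrative assumptions, not a valid manifest or a real C2PA library:

```python
# Rough, simplified picture of a C2PA-style provenance record.
# Field names loosely echo the spec; this is NOT a valid manifest.

manifest = {
    "claim_generator": "ExampleCamera/1.0",  # hypothetical tool name
    "assertions": [
        {"label": "c2pa.actions",            # how the asset came to be
         "data": {"actions": [{"action": "c2pa.created"}]}},
    ],
    "signature": "<signature over the claim>",  # placeholder
}

def was_ai_generated(m: dict) -> bool:
    """Check whether any recorded action marks the asset as AI-generated,
    using the IPTC 'trainedAlgorithmicMedia' digital source type."""
    for assertion in m["assertions"]:
        if assertion["label"] != "c2pa.actions":
            continue
        for act in assertion["data"]["actions"]:
            source = act.get("digitalSourceType", "")
            if source.endswith("trainedAlgorithmicMedia"):
                return True
    return False
```

A verifier would first validate the signature against a trust list, then surface assertions like these to the user, which is roughly what the trust-list plan described below enables.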

Many tech giants have already backed the technology, but implementation has been slow. Right now, Amazon, Intel, Google, Adobe, and Microsoft are on the member list. Google's integration into Search will be a major test for the program, it continued.

Google says it helped create the latest version of the C2PA standard and will use it alongside an upcoming C2PA trust list, which will enable Search to identify a picture's origin.

Google also has major plans to incorporate the data into its ads systems. This would ramp up over time, using the right signals to enforce important policies.

Google likewise hopes to use the technology in its other products, such as YouTube, whenever images are captured with a camera, and promises more updates on this front soon.

In case you didn’t know, Google is one of the first firms to adopt the C2PA authentication technology, and it hopes to help address interoperability challenges in this manner.

So far, only a limited number of cameras support the technology, which embeds camera settings and data about a picture's location. While Canon and Nikon have both taken the initiative to adopt C2PA, Apple and Google have yet to implement it across their devices.

As of right now, software like Photoshop and Lightroom already has the technology built in, but we're still waiting for more image editors to do the same.

It appears that Google’s big step forward to include this in its search results might encourage others to do the same.

Read next: Many Big Tech Companies Produce More Carbon Emissions than What They Say In Reports
by Dr. Hura Anwar via Digital Information World

Tuesday, September 17, 2024

Many Big Tech Companies Produce More Carbon Emissions than What They Say In Reports

There has long been news about excessive carbon emissions from big tech companies, but with the rise of artificial intelligence, emissions have reached an all-time high. Even though many companies report the carbon emissions from their data centers, most do not report accurate figures. According to the data, companies like Google, Meta and Microsoft emit roughly 7.62 times more carbon than they report to authorities.

The International Energy Agency says data centers already consumed 1 to 1.5% of global electricity in 2022; then ChatGPT was released and data centers' demand for electricity grew further. AI consumes more energy in data centers than other cloud applications. Goldman Sachs reports that an AI query takes about 10 times more energy than a Google search, and that data center power demand will grow 160% by 2030. Data centers are expected to emit 2.5 billion metric tons of carbon dioxide by 2030.

All of the big five tech companies have announced carbon-neutrality goals. Amazon did so recently, claiming to have reached carbon neutrality seven years ahead of schedule and reporting a 3% cut in gross emissions. But critics call this creative accounting, because Amazon is still expanding its fossil fuel use in its data centers and its diesel trucks.

Most companies use renewable energy certificates, or RECs, to claim they run on renewable energy, but often they do not actually consume that energy, or consume it at only one production site. RECs determine a company's market-based emissions; when the RECs are purchased offsite, the location-based emissions, the actual emissions generated in a specific area, tell a different story. The location-based emissions of the big five tech companies are concerning: using 2022 figures, if the five companies were a single country, it would be the 23rd highest emitter in the world.
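The gap between the two accounting methods can be illustrated with a toy calculation; every number below is made up for illustration, not any company's actual figure:

```python
# Toy illustration of market-based vs location-based carbon accounting.
# All numbers are hypothetical, not any company's real figures.

grid_emissions_factor = 0.4       # tonnes CO2 per MWh on the local grid
electricity_used_mwh = 1_000_000  # annual data-center consumption
recs_purchased_mwh = 900_000      # renewable energy certificates bought

# Location-based: what the local grid actually emitted to serve the load.
location_based = electricity_used_mwh * grid_emissions_factor

# Market-based: REC-covered consumption is counted as zero-emission,
# even though the renewable power may be generated far away.
market_based = (electricity_used_mwh - recs_purchased_mwh) * grid_emissions_factor

print(location_based)  # 400000.0 tonnes CO2
print(market_based)    # 40000.0 tonnes CO2
```

In this sketch, buying RECs for 90% of consumption shrinks the reported figure tenfold while the grid's actual emissions are unchanged, which is the kind of discrepancy the article describes.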

Google and Microsoft say they are working to run all their data centers on renewable energy and to bring their market-based emissions in line with their location-based emissions. Google has already stopped purchasing RECs and plans to phase out its non-location-specific RECs by 2030. The GHG Protocol says no company should obscure its carbon emissions under any circumstances.

The carbon emissions from in-house data centers fall under Scope 2, which mainly covers purchased electricity. For most of these companies, Amazon excepted, data centers make up the majority of Scope 2 emissions; Amazon's largest emissions come from its warehouses and e-commerce logistics. Meta's data centers account for 100% of its Scope 2 emissions, with 97.4% of those being location-based. Microsoft has the second biggest Scope 2 and location-based emissions. Meta's location-based emissions are roughly 19,000 times higher than the market-based figure it reported for 2022, while Microsoft's are about 22 times higher.

Many big tech companies also rent third-party data centers, which represent 37% of worldwide data center capacity. Emissions from third-party data centers fall under Scope 3, which also covers emissions from data center construction and other electricity-related emissions. Google and Microsoft have both blamed AI for their rising emissions, while Apple says it is working hard to reduce its carbon emissions as much as possible.

Image: DIW-AIgen

Read next: From Code to Crisis: The Startling Energy Consumption of Top Tech Giants Revealed!
by Arooj Ahmed via Digital Information World