Saturday, August 31, 2024

New Study Shows Students Who Use AI Are Less Productive and Have Lower Chances of Future Success

According to a recent report, many young people turn to generative AI tools like ChatGPT when they lack cognitive skills, and those students tend to have lower chances of succeeding in life. AI tools like ChatGPT and Gemini have become a crucial part of many people's lives, especially students', and are widely used to enhance learning. But generative AI can also encourage cheating. That's why the researchers behind a new study wanted to know how generative AI use correlates with student performance.

Executive functioning (EF) refers to the cognitive processes that are important for learning, such as focusing, planning, and memorizing. These processes matter in educational settings because they help students complete complex tasks like writing papers. People with lower EF levels often struggle to finish their work, while higher EF levels predict success in school and professional life.

The report says that when adolescents use AI tools, their EF levels are put to the test. Students who already have lower EF often use generative AI to write their assignments, meaning the tools serve as a crutch for students who struggle in school. So even though AI tools are said to boost productivity, they may be most harmful for the students who are least productive to begin with.

To find out more, the researchers studied two groups: group one had 385 adolescents aged 12 to 16, and group two had 359 students aged 15 to 19. The results showed that 15% of the younger adolescents and 53% of the older teens use AI tools for schoolwork. Since older students face more complex assignments, AI use is more common among them.

The use of AI tools is becoming more common in educational institutions: a German study found that two-thirds of university students use AI tools for various purposes, and 75% of adolescents in Sweden use them to structure presentations, write analyses, and for other study tasks. But a line should be drawn between cheating and merely getting help from AI while studying. Copying entire assignments from AI tools is cheating; using them for ideas or outlines should be considered acceptable. Students and educators should also check work for AI-generated text, as many academic papers already contain AI writing.

Image: DIW-Aigen

Read next: Can AI Be A Better Boss Than Humans? This New Survey Has The Answer
by Arooj Ahmed via Digital Information World

Major Blow To X As Brazil’s Supreme Court Orders Immediate And Complete Suspension Of The App

Elon Musk is not going to be happy as Brazil’s Supreme Court has issued an order for an immediate and complete suspension of the X platform. Furthermore, both Google and Apple were directed to remove the app from their stores on an urgent basis.

Musk had already anticipated this yesterday when he posted on his X account that the country was freezing Starlink's accounts. That was a clear signal the X app would soon be suspended, and today the news has been confirmed.

The decision was made by the top judge of the Brazilian Supreme Court after a long legal battle with the platform. X was ordered to block several accounts known for spreading misinformation but failed to comply, calling the order a violation of free speech.

This did not sit well with Judge Alexandre de Moraes, who had grown tired of constant feuds with the tech billionaire. The judge also imposed a daily fine of nearly $9,000 on anyone using workarounds such as VPNs to evade the ban.

Elon Musk reacted to the news by rolling out some more controversial opinions including baseless conspiracies about the country’s voting machines. He also accused the government of bankrupting those who tried to promote truth.

On Friday, the country froze Starlink's accounts, meaning no domestic transactions could be carried out. Starlink is Musk's satellite broadband service, which connects more than a quarter of a million users in the country.

It’s a big blow to X because Brazil is the most populous nation in South America, so the platform is losing out on a lot of users and revenue. Clearly, Musk is not going to be happy.

Before Musk could announce shutting down operations in Brazil, the court acted swiftly to make sure events unfolded on its terms. The court felt X's operations were undermining Brazil's democracy and said it had proof of many accounts spreading false information.

In January last year, Brazil saw an insurrection similar to the one at the US Capitol on January 6, 2021. A lot of controversial content was published online that Brazil felt could further inflame matters in the country.

This ban is not the first time lawmakers and authorities have called out apps for noncompliance. In 2022, the same judge ordered a complete shutdown of Telegram in the country for the same reason: failure to combat misinformation.

In this case, Elon Musk has taken the threats and the ban more personally. He has criticized the judge in hostile terms and feuded with him on several occasions. Musk anticipated the ban but vowed to be more transparent with the world about the matter and to take legal action.


Read next: Massive Data Breach Raises Questions About Data Brokers' Security Practices and Responsibility
by Dr. Hura Anwar via Digital Information World

Massive Data Breach Raises Questions About Data Brokers' Security Practices and Responsibility

A dataset of 170 million sensitive records, including individuals' names, addresses, phone numbers, emails, skills, education history, and employment history, was found freely accessible on the internet. The leak was traced back to a San Francisco-based data broker, People Data Labs (PDL). PDL's website claims data on 1.5 billion individuals, which is used for recruiting, sales, marketing, and data-enrichment purposes.

The data was exposed through an unprotected Elasticsearch server that was not directly linked to the company, so a third party is probably responsible for the leak. Even though that third party hasn't been identified, the episode shows why it is essential to require authentication on an Elasticsearch server; otherwise the data can easily be obtained by threat actors, exposed in seconds, and end up fueling identity theft and fraud.
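As an illustration (not tied to this specific incident), modern Elasticsearch can be told to reject anonymous requests with a couple of settings in elasticsearch.yml; a minimal sketch, where the keystore path is an example value, not a prescribed one:

```yaml
# elasticsearch.yml — minimal sketch of enabling authentication,
# so the cluster rejects anonymous requests instead of serving
# its indices to anyone who finds the endpoint.
xpack.security.enabled: true

# Encrypt client traffic so credentials aren't sent in the clear.
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12   # example path, not a required default
```

With security enabled, passwords for the built-in users can then be set from the command line (for example, `bin/elasticsearch-reset-password -u elastic` in recent Elasticsearch versions).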

The Cybernews research team says data brokers are frequently embroiled in controversy because they do not fully vet and control their data, which often ends up in the wrong hands, where threat actors can use it for large-scale attacks. PDL was also behind a leak in 2019, likewise caused by an unprotected Elasticsearch server, and it refused to take responsibility.

The current data leak was marked as Version 26.2, suggesting it may be related to the previous one. Even if PDL is not directly responsible, leaks like these taint the reputation of data brokers and erode clients' trust. Having already suffered a leak in 2019, PDL appears careless with people's data and with personal data security.

If you think you may have been affected by a data leak, stay alert for phishing attacks and scams. Consider a data-removal service to guard against future leaks, use strong passwords, and enable two-factor authentication. Also monitor your accounts for any suspicious activity.


Image: DIW-Aigen

Read next:

• Apple’s Revenue For 2024 Could Reach Record High Thanks to iPhone 16 Sales

• Can AI Be A Better Boss Than Humans? This New Survey Has The Answer

• How Many Hours Do People Work Around the World?
by Arooj Ahmed via Digital Information World

Friday, August 30, 2024

How Many Hours Do People Work Around the World?

The chart below ranks countries by their average working hours, using data from the OECD (Organisation for Economic Co-operation and Development).

An interesting fact about this ranking is the gap of 864 hours (36 full days) between the country with the most working hours and the one with the least.

The country with the most working hours, as of 2023, is Mexico, with 2,207 hours worked annually per person. The second and third countries are also in the Americas: Costa Rica and Chile, with 2,171 and 1,953 hours respectively. Several factors could explain these long hours, such as economic structure, social policies, and lower wages: when wages are low, workers must put in more hours to earn a comfortable living.

Other countries with long working hours include Greece, Korea, Canada, Poland, and the USA. The countries with the fewest working hours are in the European Union and have strong economies, and employees there get four weeks of paid holiday every year. They include Sweden, Luxembourg, Lithuania, and the Netherlands. Denmark and Germany sit at the bottom of the list with the fewest working hours: Denmark at 1,380 annual hours and Germany at 1,343, which works out to 168 eight-hour workdays a year.
How Many Hours Do People Work Around the World?
Country No. of 8-hour workdays Annual Hours Worked per Person
Mexico 276 2207
Costa Rica 271 2171
Chile 244 1953
Greece 237 1897
Israel 235 1880
Korea 234 1872
Canada 233 1865
Poland 225 1803
U.S. 225 1799
Czechia 221 1766
New Zealand 219 1751
Estonia 218 1742
Italy 217 1734
Hungary 210 1679
Australia 206 1651
Lithuania 205 1641
Ireland 204 1633
Spain 204 1632
Portugal 204 1631
Slovak Rep. 204 1631
Slovenia 202 1616
Japan 201 1611
Latvia 194 1548
UK 191 1524
France 188 1500
Finland 187 1499
Luxembourg 183 1462
Iceland 181 1448
Sweden 180 1437
Austria 179 1435
Norway 177 1418
Netherlands 177 1413
Denmark 173 1380
Germany 168 1343
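The workday column in the table is simply annual hours divided by an eight-hour day, rounded to the nearest whole day; a quick sketch of the conversion, using figures taken from the table above:

```python
# Convert annual hours worked into equivalent 8-hour workdays,
# rounding to the nearest whole day as the table does.
annual_hours = {"Mexico": 2207, "Costa Rica": 2171, "Germany": 1343}

workdays = {country: round(hours / 8) for country, hours in annual_hours.items()}
print(workdays)  # {'Mexico': 276, 'Costa Rica': 271, 'Germany': 168}
```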

Read next: Dream Jobs at Big Tech: Best and Worst Companies to Interview For
by Arooj Ahmed via Digital Information World

Meta’s Carbon Footprint In The Spotlight: Discrepancies Over The Company’s Emissions Arise

It’s not easy to unravel the claims made by tech giants today regarding their carbon footprints. The same appears to be the case for tech giant Meta.

While the company’s sustainability report is out to answer questions about its greenhouse gas emissions, it’s not easy to decipher. Many are confused about whether the organization's claims of a drop in emissions are true.

The discrepancies arise from one simple question: is Meta reporting its total emissions, or net figures after market-based offsets? If the figures are location-based, you also have to look at the key regions where the company operates.


As seen in the chart above, the light gray bars display location-based gas emissions, and since 2019 those figures have kept climbing. This means the levels of carbon dioxide the company is putting into the environment are at an all-time high.

The dark bars show the market-based results, where emissions have been falling since last year; on that measure, the footprint is about half the size. The question is which figure to believe, and why.

While Meta highlights the smaller figure and the drop in emissions at the top of the report, both numbers need to be considered before reaching a final judgment. This is especially true given how effective market-based efforts can be in cutting pollution and therefore affecting climate change.

On paper, emissions are close to half, but it’s hard to say how much they have fallen overall. That is compounded by Meta's claim that it balances its high energy use with renewable energy purchases.

The company has highlighted several steps it is taking to support wind and solar initiatives near its data centers. One assessment suggests its support for renewable energy sources will add 9,800 MW to local grids next year, and it also hopes to procure geothermal energy for its data centers, so it is pushing hard on this front.

Whatever the case may be, a lot of progress still needs to be made. Per the firm's latest assessment, renewable energy made up close to 8.5% of its energy purchases.

All in all, the company’s carbon footprint is much bigger than it was four years ago. Keeping in mind its goal of net-zero emissions by 2030, there is a lot of work left to do.

Remember, many different operations feed into this footprint, from the supply chain to the use of consumer products. As things stand, the company is drifting further and further from its end goal, so it’s about time to reconsider. What do you think?

Read next: Many Large Websites Begin Blocking Apple’s AI Web Crawler After Increased Warnings
by Dr. Hura Anwar via Digital Information World

Many Large Websites Begin Blocking Apple’s AI Web Crawler After Increased Warnings

Many large-scale websites are getting tired of Apple and its AI web crawler.

Despite several warnings issued in the past, the iPhone maker’s web crawler continues to land on websites and extract data for Apple’s AI training. Plenty of names have confirmed the practice, including some of the biggest in tech.

While Apple denies this and says it respects these companies' wishes, its crawler suggests otherwise.

As per recently published research, Apple's crawler has been visiting sites belonging to Instagram, Facebook, Tumblr, Craigslist, the NYT, the Financial Times, The Atlantic, and Vox Media. Many of these organizations now feel they have no choice but to block it, as Apple continues to give them the cold shoulder.

Web crawlers are nothing new; they have been around for a long time. But these bots are now taking others’ data to train AI models, and many feel that is wrong because it robs creators of their hard work.

The news is alarming as Apple Intelligence is set to launch soon, and it’s easy to see why Apple might be engaging in these tactics. As per reports from the NYT, companies are blocking web crawlers at a record rate, and most of those crawlers belong to AI firms.

The study behind that conclusion looked at 14,000 web domains used in AI training data sets. Publishers feel that blocking is a necessary step to stop their data from being harvested.

As per researchers' estimates, around 5% of all data examined, and 25% of data from the highest-quality sources, was restricted. Most of the restrictions were put in place via the Robots Exclusion Protocol, a decades-old convention that lets website owners tell bots to stay away, usually through a file called robots.txt.
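For illustration, a publisher that wants to opt out of AI training while still appearing in Apple's search results could add rules like these to its robots.txt (Applebot-Extended is the user agent Apple says controls AI-training use, while the regular Applebot handles search indexing):

```
# Opt out of Apple's AI training while still allowing
# the regular Applebot to index the site for search.
User-agent: Applebot-Extended
Disallow: /

User-agent: Applebot
Allow: /
```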

It’s striking to see Apple disregard the voices of so many companies and press ahead with AI crawlers scraping their websites, deciding the tradeoff is worth it.

But we should not be surprised if more publishers start blocking AI giants after learning of Apple’s actions. Interestingly, most of the firms complaining are big tech players; smaller enterprises don’t seem to mind, as it makes little difference to them.

Image: DIW-Aigen

Read next: OpenAI and Anthropic Agree To Share AI Models With US AI Safety Institute
by Dr. Hura Anwar via Digital Information World

OpenAI and Anthropic Agree To Share AI Models With US AI Safety Institute

The US AI Safety Institute was first established last year to better regulate AI models. Now both OpenAI and Anthropic have signed agreements with the institute to collaborate on the safety of their projects, giving the companies feedback on their models both before design decisions are locked in and after launch.

We saw OpenAI CEO Sam Altman drop hints this past month about the deal and now Google’s spokesperson is also confirming that discussions are in the works.

For now, no other AI company is said to be included, though more information is expected when available. Interestingly, the news came just as Google updated its chatbots and Gemini image generators.

The director of the US AI Safety Institute shared a statement on the matter, saying safety is a key driver of tech innovation. That is why such agreements are necessary, and the institute looks forward to greater collaboration; this is only the start.

She also described the deal as a major milestone in shaping the future of AI.

The institute is designed to develop and publish guidelines, tests, and practices for identifying the potential dangers of these systems. As we are all well aware, AI can do a lot of good, but the opposite is also true.

This is certainly the first deal of its kind. The agency hopes to gain access to more models before they launch to the public. Such agreements and collaborations minimize risk and help evaluate the capabilities of AI models to enhance security. We’re also hearing that the institute has major plans to link up with its counterpart agency in the UK.

The news comes as regulators around the globe try to introduce more AI guardrails yet struggle to keep up with rising concerns. We did see California recently introduce a new AI safety bill, though it comes with a hefty $100M cost.

The bill is also somewhat controversial, as it would force many AI firms to build in kill switches that shut down models that go out of control.


Read next: Your Phone is the New Shopping Mall – See How It’s Taking Over!
by Dr. Hura Anwar via Digital Information World