Sunday, March 10, 2024

YouTube Alters Homepage: No Recommendations for Logged-Out or Incognito Mode Users

YouTube has implemented a controversial change that strips away personalized video recommendations for users browsing anonymously or through incognito mode, both on web and mobile. The move appears aimed at pushing more people to stay logged into their Google accounts while using the platform. This has become something of a norm among social media platforms; Meta's Instagram implemented a similar restriction in 2019.

Instead of suggested videos tailored to their interests, anonymous YouTube visitors now see a blank homepage prompting them to search for something to get started and "Start watching videos" to build a personalized feed. Searches yield no recommendations either, just a persistent nudge to watch more content.

This radical departure from YouTube's previous recommendation approach, which served up video picks even for non-logged-in users, has sparked backlash. Many view it as a heavy-handed effort to strong-arm people into handing over their viewing data for ad targeting.

The lack of recommendations isn't limited to incognito mode either. As reported by Mayank Parmar and Haridev on X, those who clear their watch and search histories or disable tracking settings see similarly sparse YouTube homepages even when signed in - a potential punishment for refusing to share that valuable engagement data.

While YouTube hasn't officially commented, it's clear the platform is tightening its grip on the personalized, data-driven experiences that earn it billions. Whether casual users will accept being cornered into persistent tracking remains to be seen.

Read next: Microsoft’s Orca-Math, an SLM is Developed to Solve Math Questions and It is Better than Most LLMs
by Asim BN via Digital Information World

Cybercrime Losses Increased by 22% in 2023 According to the FBI

Cybercrime has been a prevalent issue for quite some time, and despite many efforts to curb it, it has only worsened as the years have gone by. Based on information released by the FBI's Internet Crime Complaint Center, losses incurred due to cybercrime jumped by a whopping 22% in 2023 alone, now totaling over $12 billion in a single year.

It is also worth noting that the number of complaints received rose by only 10%. In other words, the number of incidents grew by a smaller margin than the total value of the losses, indicating that each individual instance of cybercrime is becoming more financially devastating.

All in all, 880,418 complaints were registered in 2023. It bears mentioning that the total value of losses is on the conservative side, and the true figure may well be higher. Over the past five years, consumers have lost a mind-boggling $37.4 billion to cybercrime, and the problem doesn't seem likely to go away anytime soon.
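Taking the article's figures at face value, a quick back-of-the-envelope calculation makes the "more devastating per incident" point concrete (the dollar total here is the conservative ">$12 billion" floor, so the real averages are somewhat higher):

```python
# Rough check of the FBI's 2023 cybercrime figures as quoted above:
# >$12B in losses, 880,418 complaints, losses up 22% while
# complaint volume rose only 10%.
total_losses = 12e9          # conservative floor: "over $12 billion"
complaints = 880_418

avg_loss = total_losses / complaints
print(f"Average loss per complaint: ${avg_loss:,.0f}")  # roughly $13,600

# If losses grew 22% while complaints grew only 10%, the average loss
# per complaint grew by the ratio of the two growth factors:
per_incident_growth = 1.22 / 1.10 - 1
print(f"Per-complaint loss growth: {per_incident_growth:.1%}")  # about 10.9%
```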

Investment fraud was the worst kind of cybercrime that people reported, with losses increasing from $3.31 billion in 2022 to $4.57 billion in 2023. Just under $4 billion of that, or $3.94 billion to be precise, came from cryptocurrency-related scams.

Business email compromise came in second on this list, costing consumers $2.9 billion last year. This is when malicious actors use spoofed email accounts of suppliers or other service providers to convince businesses to send them large sums of money.

Tech support scams also cost consumers upwards of $1.3 billion in 2023. They generally involve people receiving a message or call falsely claiming that their system is compromised and that they need to pay to have it fixed. Pop-ups claiming that a virus has infected the system are quite commonplace here.





Read next: Google Search Liaison Gives Complicated Answer Regarding Reliability of Information
by Zia Muhammad via Digital Information World

Google Search Liaison Gives Complicated Answer Regarding Reliability of Information

The question of whether Google provides accurate and reliable information is an important one to ask. Many wonder if Google simply shows people what they want to see instead of the most factual information available.

Danny Sullivan, Google's longtime search liaison, recently provided a complicated answer to this query on X, formerly known as Twitter. He claimed that people want reliable information, and that as a result, Google is more likely to provide factual responses.

That said, he did mention that Google has certain protocols in place to determine the reliability of the information it leads users to. A particularly thorny issue is that of consensus, since it can be difficult to figure out what information has been accepted as factual by a majority of individuals.



Google has made attempts to improve accuracy in certain regions around the world. Perhaps the best example was seen in Japan, where Google adjusted its algorithm to surface more reliable information. Google is also known to provide warning prompts in situations where the available information can't really be trusted.

The search engine juggernaut has also clarified that it has an undeniable bias towards information that has been verified in a scientific setting. Simply put, if there is data out there that is considered to be a scientific truth, Google will accept it.

It remains to be seen how people will view this answer. As much as everyone would like to get a simple response that is either a yes or a no, the complex landscape of search engines and factuality in general makes that a challenging hurdle to overcome. Google’s efforts to verify information before presenting it to users might not be enough for some, although others would say that it is doing the best that it can.

Read next: AI Energy Consumption Soars: ChatGPT Devours Over 500,000 kWh Daily, Dwarfing Homes' Usage
by Zia Muhammad via Digital Information World

Saturday, March 9, 2024

AI Energy Consumption Soars: ChatGPT Devours Over 500,000 kWh Daily, Dwarfing Homes' Usage

Artificial intelligence tech is booming, but it comes at a huge cost: soaring electricity usage.

According to The New Yorker, OpenAI's famous chatbot, ChatGPT, gulps down over half a million kilowatt-hours of power each day. That's a whopping 17,241 times the average American home's daily consumption of just 29 kilowatt-hours (based on 2022 data).
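The 17,241x figure follows directly from the two numbers quoted above, as a quick sanity check shows:

```python
# Sanity-check of the article's comparison: ChatGPT's reported daily
# usage (500,000+ kWh) vs. an average US home's 29 kWh/day (2022 data).
chatgpt_daily_kwh = 500_000
us_home_daily_kwh = 29

ratio = chatgpt_daily_kwh / us_home_daily_kwh
print(f"ChatGPT uses ~{ratio:,.0f}x an average US home's daily power")  # ~17,241x
```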

Why does AI need so much juice? The computer systems and GPUs running these advanced AI models are incredibly energy-hungry. A single AI server can easily gobble up as much electricity as over a dozen UK households combined. No wonder the numbers add up alarmingly fast.

If AI capabilities like ChatGPT get integrated into massively popular services like Google Search, the energy drain could reach catastrophic levels. Data scientist Alex de Vries estimates Google would need around 29 billion kilowatt-hours per year - that's more than entire countries like Kenya use annually.

Calculating AI's total power usage isn't easy though. Tech giants driving the AI boom tend to keep energy data under wraps. Still, de Vries made a rough estimate. He used public figures from chipmaker Nvidia, which supplies around 95% of processors for AI work.

De Vries' analysis, published in the journal Joule, projects the whole AI industry could require between 85 and 134 terawatt-hours annually by 2027. For perspective, that would account for up to 0.5% of global electricity consumption - from AI alone!

As AI capabilities explode, so does the environmental cost. Tackling AI's ravenous energy needs must become a top priority. Sustainable practices and increased efficiency will be crucial to keep AI's emissions under control.

Image: DIW-Aigen

Read next: Ethical Concerns Rise as Google Fires Engineer Opposing Israeli Military Contract
by Asim BN via Digital Information World

Ethical Concerns Rise as Google Fires Engineer Opposing Israeli Military Contract

A Google employee was shown the door after loudly protesting the company's controversial cloud contract with Israel's military during an official presentation this week. The terminated worker, a Google Cloud engineer, stood up and raised his concerns in a session led by a Google Israel executive at the Mind the Tech conference in New York City on Monday.

According to a video shared by 'No Tech For Apartheid' on X, the employee firmly declared his refusal to build tech that enables genocide or unlawful surveillance. The outburst was aimed at Project Nimbus, a $1.2 billion deal signed in 2021 that gives the Israeli government access to cloud services from Google and Amazon Web Services.

"An employee was terminated for violating our policies after disrupting an official company event," a Google spokesperson confirmed in a statement.

However, the worker's dismissal has further inflamed an ongoing controversy over Project Nimbus within Google's ranks. Hundreds of employees have spoken out against the deal, arguing the cloud capabilities could abet unlawful data collection and monitoring of Palestinians by Israeli authorities.

In the aftermath of Monday's incident, the advocacy group No Tech For Apartheid blasted Google's decision to fire the engineer who spoke up. The group accused the tech behemoth of trying to muzzle moral opposition within its workforce.

Since last fall's flare-up of Israeli-Palestinian violence, Google staffers have escalated their protests over Project Nimbus. Workers staged a "die-in" at the company's San Francisco offices late last year. Over 600 others signed a petition urging Google to stop sponsoring Mind the Tech due to the Israeli military connection.

For its part, Google has defended Project Nimbus as providing public cloud resources for companies across Israel. But the dissent seems unlikely to fade as long as the contract remains a source of ethical concerns for many of the company's own employees.

The incident has also reignited a broader discussion around the ethical blindspots and moral failings that can take root within major tech corporations. Critics argue that a relentless pursuit of profits and market dominance has caused some of the biggest companies to lose sight of higher values and accountability. There are growing concerns that important employee voices raising ethical alarms get suppressed or silenced when they clash with lucrative business interests.

As one of the world's most powerful and influential companies, Google's handling of this situation provides a high-profile example of how the tech industry's corporate cultures can prioritize financial motives over developing a workforce aligned with societal conscience. The termination suggests voices striving to uphold moral integrity may find themselves simply dismissed rather than engaged with—a pattern that could exacerbate erosion of ethical guardrails if left unaddressed within the sector's elite.

Termination of Google engineer protesting Israel contract renews concerns over ethics in tech corporations.
Image: DIW-Aigen

Read next: EU’s Digital Markets Act In The Spotlight: Which Core Platform Services Will Be Regulated?
by Asim BN via Digital Information World

Meta Confirms Bug in Threads' Trends Test, Acknowledges Global User Access Error

Tech giant Meta may have said no to political content on Threads, but there's more news on this front.

The platform is rolling out a new Trends feature, billed as a key endeavor for one of the top rivals to Twitter/X. Now we're hearing more, including signs that users may see more political content on Threads than ever.

As the trends themselves indicate, people simply cannot get enough of politics. And with the upcoming US elections set to take center stage, the surge makes sense.

Meta recently began experimenting with the feature among a small group of users in what many described as an exclusive trial, covering both web and mobile.
Post by @mattnavarra

Instagram is still busy with that particular test, and thanks to this new expansion, it now appears that what's trending on Threads is a theme many had not anticipated.

But the masses have spoken and there’s not a lot that can be done on this front.

To give you a better idea of the sort of content people are engaging with, much of it is linked to Biden's State of the Union speech, complete with the audience's reactions to his remarks.

Then there was the Republican response, along with the heckling from Marjorie Taylor Greene.

The level of detailed engagement with the State of the Union has surprised many, showing just how keen people are on what went on that evening. Users love engaging with posts like these, and plenty of reactions recapping the event are generated within minutes.

There’s also plenty of debate and discussion over what’s trending next on the app, such as taxes, abortion rights, and even gun control, among many others.

The tech giant has long kept its distance from the world of politics across its platforms after receiving heavy criticism for allegedly favoring one side of the picture over the other. We saw the firm's head opt to alter how the Facebook Feed works.

The goal was to prioritize posts from loved ones over matters such as news. Then, in 2022, the News Feed was rebranded simply as Feed. The tech giant has even previously admitted that plenty of Russian disinformation campaigns ran at large on Facebook with the main objective of influencing the American election race.

And now, as we draw closer to the elections, we're seeing apps like Instagram surface more content from the world of politics on both Instagram and Threads. This has upset many individuals who did not expect to see such content welcomed on the app.

Creators who post about the law, the elections, and other political themes weren't excited about the matter either.

If there is one area where the app has yet to suppress the world of politics, it's the new Trends rollout. We hope the platform does something about it, because not everyone enjoys this type of content, which is already commonplace on X/Twitter.

Update: The company has confirmed the test is ongoing in the U.S., acknowledging a bug that briefly extended the feature to users globally.

Meta's Admission: Bug Causes Global Exposure in Ongoing Threads' Trends Test
Image: DIW-Aigen

Read next: Low AI Trust Drives High Employment Demand for Risk Researchers
by Dr. Hura Anwar via Digital Information World

Friday, March 8, 2024

Low AI Trust Drives High Employment Demand for Risk Researchers

Everybody has an opinion about how AI might end up impacting our lives in the near to distant future. Suffice it to say that most of these opinions fall thoroughly on the negative end of the spectrum, and companies are no different.

While there are many benefits to the rise of AI, it bears mentioning that it also brings its fair share of risk to the table. As a result, many companies are looking to employ specialists who can understand these risks and potentially mitigate them.

That said, it is important to note that these roles aren't limited to Chief AI Ethics Officers. C-suite roles are just the tip of the iceberg, with ethics researchers, compliance officers, and technology policy analysts all seeing a rise in demand as well.

Based on a recent survey conducted by Deloitte, 49% of companies have created guidelines focusing on the ethical use of AI. 37% are planning to roll these guidelines out in the near future, and the board of directors is involved in 52% of the decisions surrounding the ethical use of AI.

What's more, 45% of the companies that responded to this survey stated that they're retraining their employees to better contend with this ever-shifting landscape. On top of that, 44% are looking for new employees to fill AI-related roles in order to keep pace with competitors.

This is an extremely specialized field, one that will likely see tremendous growth in the coming years. Government regulations are right around the corner, and many of these new jobs will involve creating new and better processes that keep those regulations at the forefront of a company's AI-related dealings and operations.

Take a look at the charts below for more insights:
Read next: How to Accept Bitcoin Payments: Your Complete Guide
by Zia Muhammad via Digital Information World