Wednesday, July 3, 2024

How to Implement Cloud Threat Hunting in Your Organization

Businesses are going paperless and digital, storing their valuable data in the cloud. However, while cloud storage allows everyone in the company to access data at any time, it also brings certain cybersecurity challenges.

Source: Unsplash / Adi Goldstein

While cloud environments are scalable, flexible, and efficient, traditional security approaches often fall short, creating a need for advanced cybersecurity measures designed specifically for the cloud. One important technique in this regard is cloud threat hunting – a systematic and continuous search for malicious activity in the cloud and its subsequent elimination.

Cyber attackers are becoming increasingly sophisticated, employing advanced techniques including AI, zero-day exploits, and advanced persistent threats (APTs). Insider threats caused by employees with access to sensitive data also pose a challenge. Cloud threat hunting can help identify anomalies early on and reduce the dwell time of any malware. Here’s how you can do cloud threat hunting:

1. Developing a Strategy

To implement an effective cloud threat hunting strategy, you must have a well-structured approach that includes creating a robust threat-hunting framework and establishing meaningful metrics to measure success.

Two strategies are commonly used here: hypothesis-driven methodology and data-driven methodology. In the former, you start with a specific hypothesis about potential threats or vulnerabilities, while in the latter, you start with large volumes of data and use advanced analytics to identify anomalies and potential threats.

The hypothesis-driven methodology needs only a limited set of data relevant to the hypothesis, while the data-driven methodology needs large amounts of raw data.
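
To illustrate the data-driven side, here is a minimal sketch in Python; the numbers are hypothetical, and a real hunt would pull counts from your cloud audit logs. It flags days whose API call volume deviates sharply from the baseline:

```python
# Minimal sketch: flag anomalous daily API call volumes with a z-score.
# The counts below are hypothetical; a real hunt would pull them from cloud audit logs.
from statistics import mean, stdev

daily_api_calls = [1020, 980, 1100, 995, 1050, 1010, 4800, 1030]  # one obvious outlier

mu = mean(daily_api_calls)
sigma = stdev(daily_api_calls)

for day, count in enumerate(daily_api_calls, start=1):
    z = (count - mu) / sigma
    if abs(z) > 2:  # more than two standard deviations from the baseline
        print(f"Day {day}: {count} API calls (z = {z:.1f}) -- worth investigating")
```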

During the planning phase, you must also set up your KPIs. One KPI can be detection time, or the average time taken to detect a threat after it has entered the environment. Another KPI can be response time, or the average time taken to respond to and mitigate a threat after it is detected.
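
To make these KPIs concrete, here is a minimal sketch using hypothetical incident records (all field names and values are illustrative) that computes mean detection time and mean response time from timestamps your tooling already records:

```python
# Minimal sketch: compute mean time to detect (MTTD) and mean time to respond
# (MTTR) from hypothetical incident records. All names and values are illustrative.
from datetime import datetime
from statistics import mean

incidents = [
    # entered: when the threat appeared, detected: when it was found,
    # resolved: when it was contained or mitigated
    {"entered": datetime(2024, 6, 1, 8, 0),    "detected": datetime(2024, 6, 1, 14, 30),
     "resolved": datetime(2024, 6, 2, 9, 0)},
    {"entered": datetime(2024, 6, 10, 22, 15), "detected": datetime(2024, 6, 11, 1, 0),
     "resolved": datetime(2024, 6, 11, 6, 45)},
]

mttd_hours = mean((i["detected"] - i["entered"]).total_seconds() for i in incidents) / 3600
mttr_hours = mean((i["resolved"] - i["detected"]).total_seconds() for i in incidents) / 3600

print(f"Mean time to detect:  {mttd_hours:.1f} hours")
print(f"Mean time to respond: {mttr_hours:.1f} hours")
```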

2. Leveraging Tools and Technologies

You can use tools provided by cloud service providers or third-party tools. For example, AWS offers Amazon GuardDuty, which continuously monitors accounts and workloads for malicious and unauthorized activity. It uses anomaly detection, machine learning, and integrated threat intelligence to pinpoint threats. Similarly, Google Cloud offers Security Command Center (SCC) for cloud threat-hunting purposes.
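
As an example of working with a provider-native tool, the sketch below uses the AWS SDK for Python (boto3) to list recent high-severity GuardDuty findings. It assumes GuardDuty is already enabled in the region and that credentials with the relevant read permissions are configured; the severity threshold is an illustrative choice, not an official recommendation.

```python
# Minimal sketch: list recent high-severity Amazon GuardDuty findings with boto3.
# Assumes GuardDuty is enabled and credentials with guardduty read permissions exist.
import boto3

guardduty = boto3.client("guardduty")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        # Filter to severity 7+ (high); adjust the criterion to suit your hunt.
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )["FindingIds"]

    if not finding_ids:
        continue

    # get_findings accepts at most 50 IDs per call.
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids[:50]
    )["Findings"]

    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])
```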

Third-party tools include EDR and SIEM. EDR (Endpoint Detection and Response) tools monitor endpoint activity and provide detailed visibility into potential threats. SIEM (Security Information and Event Management) tools aggregate and analyze log data from various sources to provide real-time event monitoring, threat detection, and incident response.

3. Conducting Threat Hunts

Regular threat hunts are essential for maintaining a strong security posture in cloud environments. This process involves developing hypotheses based on threat intelligence and historical data, performing active hunts for indicators of compromise (IOCs), and analyzing and correlating data.

When conducting hunts, you look for IOCs, which are evidence of a potential security breach, such as unusual traffic or suspicious files. You also look for anomalous behavior such as unexpected data transfers (particularly to malicious domains), irregular login times, or unusual patterns of resource usage.
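
As a simple illustration, the sketch below scans hypothetical access-log records for a few of these indicators: off-hours logins, traffic to domains on a threat-intelligence blocklist, and unusually large outbound transfers. The domains, thresholds, and log fields are made up for the example; in practice you would run equivalent queries against your SIEM or cloud audit logs.

```python
# Minimal sketch: scan hypothetical access-log records for simple IOCs and anomalies:
# off-hours logins, traffic to known-bad domains, and very large outbound transfers.
from datetime import datetime

known_bad_domains = {"malicious-example.com", "exfil-example.net"}  # hypothetical IOC feed
business_hours = range(7, 20)           # 07:00-19:59 treated as normal login hours
transfer_threshold_bytes = 500_000_000  # flag outbound transfers above ~500 MB

events = [
    {"user": "alice", "time": datetime(2024, 7, 1, 3, 12),
     "domain": "intranet.example.com", "bytes_out": 2_000},
    {"user": "bob", "time": datetime(2024, 7, 1, 11, 5),
     "domain": "exfil-example.net", "bytes_out": 750_000_000},
]

for event in events:
    reasons = []
    if event["time"].hour not in business_hours:
        reasons.append("login outside business hours")
    if event["domain"] in known_bad_domains:
        reasons.append("traffic to known-bad domain")
    if event["bytes_out"] > transfer_threshold_bytes:
        reasons.append("unusually large outbound transfer")
    if reasons:
        print(f"{event['user']} at {event['time']}: " + "; ".join(reasons))
```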

Endnote

As cloud solutions become more common in the digital landscape, the number of cyber attacks also grows. Cloud threat hunting is a cloud security approach in which you systematically and continuously scan your environment for threats. You can use either a hypothesis-driven or a data-driven methodology, and leverage either native or specialist third-party tools. Through constant cloud threat hunting, you can keep your cloud storage secure.


by Web Desk via Digital Information World

AI Fraud: How Deep Fakes Cost Companies Billions!

Given the increasing popularity of AI, deepfake incidents are on the rise, and the biggest threat is to the banking and financial sector. According to Deloitte, deepfake-related losses reached $12.3 billion in 2023 and are expected to grow to $40 billion by the end of 2027. Many AI apps and websites now give attackers platforms to clone voices, impersonate people, and create fake documents.

Image: Deloitte

According to Pindrop’s Voice Intelligence and Security Report 2024, deepfake fraud aimed at contact centers causes around $5 billion in losses annually. Bloomberg also recently reported on a dark-web network that sells scamming software to attackers for anywhere from $20 to thousands of dollars. For a broader look at how quickly AI fraud is growing worldwide, Sumsub’s Identity Fraud Report 2023 covers it in detail.


Image: Statista

Adversarial AI has also created a new wave of deepfake attacks that use fake identities to target people, and many enterprises have no strategy for defending against adversarial deepfakes of their key executives. According to the 2024 State of Cybersecurity Report by Ivanti, 74% of enterprises are already experiencing AI-driven attacks, 89% say AI attacks have already begun, and 60% are not prepared to defend against them. The threats expected to become more dangerous because of generative AI are phishing (45%), software vulnerabilities (38%), ransomware (37%), API-related vulnerabilities (34%), and DDoS attacks (31%).


Many CEOs of cybersecurity companies admit that these AI attacks have become more convincing and legitimate-looking. George Kurtz, CEO of CrowdStrike, a company well known for its expertise in AI and machine learning, says that as AI grows more advanced, attackers are taking full advantage of it and deepfake technology is getting remarkably good. CrowdStrike has started investigating AI deepfakes and the impact they could have in the coming years.

Read next:

• Meta, Amazon, Apple Most Impersonated in Phishing Scams: Study

• How to Create Strong Passwords and Keep Hackers at Bay
by Arooj Ahmed via Digital Information World

Google’s New Environmental Report Shows Alarming 50% Spike In Greenhouse Gas Emissions Due To AI

Google is in the spotlight after its latest environmental report showcased a record-breaking high of greenhouse gas emissions.

As per the report, emissions rose by about 50%, the biggest jump seen in the past five years, driven largely by data centers consuming large amounts of energy, the company added.

Google attributes the spike to AI: data centers are expanding to meet the demand of the AI boom, which is certainly alarming, not to mention a significant hindrance to the company’s path toward carbon neutrality.

Every year, the search engine giant publishes the report to show how much progress it has made in going green, but this year’s figures are nothing to be proud of and leave Google far from the environmentally friendly goals it wishes to attain by 2030.

The company emitted close to 14.3 million metric tons of CO2 in 2023 alone, 49% higher than in 2019 and nearly 13% higher than in 2022.

Google acknowledged that the growth of data centers powering the AI boom was to blame, and that as more AI gets incorporated into its products, consumption will likely keep rising.

It’s a challenge the company has been dealing with for a long time, and Google says things will only get more difficult as it invests further in its technical infrastructure.

The report also highlights how the environment is bearing the cost of the explosive AI trend. Several tech giants, including Google, Apple, Meta, Microsoft, and Amazon, are making billion-dollar investments in AI and spending a fortune to train their models.

None of this comes cheap, and it requires huge amounts of energy. Reports have also detailed how individual AI features consume a lot of energy: last year, researchers from an AI startup and Carnegie Mellon University showed that generating a single image with AI can use about as much energy as charging a mobile phone.

Along the same lines, some tech analysts expect AI to soon double electricity demand in the country, with overall consumption potentially outstripping the current electric supply within just two years.

It’s all very alarming, and that’s probably why many tech giants, including Microsoft, are turning their attention to pledges to go carbon-negative soon.

Meanwhile, Google’s report mentioned that its own data centers are using more water than in the past to maintain cooling under increased AI workloads. Much of that workload comes from features like Google Search’s AI Overviews, which has already drawn massive debate after telling users to eat rocks or use glue to stick toppings to a pizza.

Looking at figures from the previous year, the Android maker’s data centers used 17% more water than the year before that, close to 6.1 billion liters, enough to irrigate approximately 41 golf courses in the southwestern US each year.

That Google is working on bettering the environment, and not just on the profits from launching AI across its products, is certainly music to environmental activists’ ears. At the same time, a lot of work remains if the firm wishes to stay on track with its goals for the end of this decade. Do you agree?

Image: DIW-Aigen

Read next:

• WhatsApp Is Working On New AI-Generated Image Feature Of Users

• Windows 11 Sees Positive Comeback With Growth In Market Share For The First Time Since Late 2023 Slump
by Dr. Hura Anwar via Digital Information World

Tuesday, July 2, 2024

WhatsApp Is Working On New AI-Generated Image Feature Of Users

Messaging giant WhatsApp is currently working on a new and exciting feature for users that makes use of Meta AI.

The feature will be optional and will give users the chance to generate AI-based pictures of themselves through Meta AI’s Llama model.

We saw in the last update how the company was giving users the chance to select which Llama model they wanted for AI interactions. Those who wished for a quick and simple approach could use the default variant, while those wanting a more advanced model for complex queries were free to use the newer 3-405B variant.

Right now, the platform seems focused on refining the user experience around Meta AI, with better personalization as the goal. Thanks to the newest update, that could come through this latest optional offering to produce pictures of users with Meta AI.

As a newly shared screenshot shows, the app wants to do more with AI-generated images of users, and the feature is expected to arrive in an upcoming update.

Users would take a set of setup pictures that Meta AI would then use to generate AI images of them. It is up to the user which photos to provide for analysis, so that the AI-generated images best represent what they actually look like.

The company confirmed that users will have full control over the feature and can remove the setup photos at any time through the app’s AI settings.

After taking the setup images, users can ask Meta AI to produce AI pictures of themselves by typing “Imagine me” in the conversation. They can also use the feature in other chats via the “@Meta AI imagine me” option.

Since such commands are processed separately, Meta AI cannot read other messages in the chat, and the resulting image is shared in the conversation automatically by the platform, so users’ privacy remains intact.

The offering is entirely optional: users must manually enable it in settings and take the setup images before it can be used.

For now, we’ll wait and watch for the release and update you here when it happens.


Image: Wabetainfo

Read next: Median Salary of Magnificent Seven Companies Revealed
by Dr. Hura Anwar via Digital Information World

YouTube Gives Users The Chance To Report AI Content That Resembles Them Without Taking Consent

Video-sharing giant YouTube is taking stricter measures to combat the misuse of AI.

The company just mentioned that it is giving users the chance to report any AI-based content that resembles them in appearance or voice. It already requires creators to disclose when material was made using AI tools.

Now, it’s giving them the chance to report AI-based material that might be misrepresenting them online without their consent or knowledge.

The news comes via a change to the platform’s support page, where YouTube explains the factors it will consider when acting on such complaints.

Several factors will be evaluated, including whether the material was altered, whether it was disclosed to viewers as altered or synthetic, whether the person in it can be identified, and whether the content is realistic.

The content will also be judged on whether it is parody, whether there is public interest attached to it, and whether it carries some other kind of value. Similarly, whether a video shows a famous identity or public figure engaging in sensitive acts such as violence or criminal behavior will also be weighed.

Users can start the Privacy Complaint process to inform the platform when someone is using AI to produce content that resembles them. At the same time, the app reiterated that filing a complaint does not necessarily mean the uploader will be kicked off the app altogether.

Content will only be removed if it qualifies for deletion under the policy, such as material depicting realistic versions of real people.

At the same time, YouTube wants users to make sure they can be uniquely identified in the content in question before filing any complaint, meaning there must be enough information for others to recognize them in it.

After a complaint is filed, the app gives the uploader 48 hours to act. If the material isn’t edited or deleted in that window, the platform’s own privacy review process begins.

However, it must be remembered that the platform requires claims to come from first parties, with a few exceptions: minors, vulnerable individuals, and anyone without access to the internet.

Last year it was reported that bad actors were using AI-generated content to spread malware via the app, and established creators continue to be targeted by attackers chasing bigger audiences, which makes measures like this important for keeping uploads legitimate.

Image: DIW-AIgen

Read next: Google Rolls Out New Disclosure Policy For Digitally Altered Ads To Combat Election Disinformation
by Dr. Hura Anwar via Digital Information World

Google Rolls Out New Disclosure Policy For Digitally Altered Ads To Combat Election Disinformation

Election season is in full swing and search engine giant Google is pulling out all the stops to ensure disinformation is limited.

The company just updated its Political Content Policy, which covers content that is digitally manipulated, including pictures, videos, and audio. The new policy came into effect yesterday, and the company feels it’s about time viewers knew what they were seeing.

If any content has been altered through digital means or synthetically generated, viewers will now be informed. But what exactly are the criteria for the policy to apply in the first place?

According to the Android maker, the policy covers any content that has been manipulated to inaccurately depict real people or events. This includes material showing a person taking part in a conversation or action that never happened, footage of a real event altered to mislead, and realistic depictions of events that never took place.

We saw Facebook’s parent firm roll out something similar for politically themed AI-generated content in February of this year, and it makes sense why companies are pulling out all the stops to keep viewers informed.

Under Google’s latest policy, advertisers can launch a campaign only after ticking off a list of boxes explaining what sort of content is being run and whether it has been altered or synthetically produced.

Google hopes to set this as the new standard for all ad disclosures involving politically themed content online, whether the ads run on mobile devices, televisions, computers, or social media platforms.

For any other format, advertisers can select the synthetic content option and then provide their own prominent disclosures that are clear and placed where users are likely to notice them.

Where the policy is violated, a warning will be issued a week before serious action, such as account suspension, is taken by Google. The company also provides clear-cut examples of disclosure language, such as stating that the audio was computer-generated, that the images do not depict real events, or that the video was synthetically produced.

It’s about time Google took matters into its own hands to curb the alarming rate of election misinformation. With AI fueling misleading content that could sway voters in the wrong direction and sabotage whole elections, the move was much needed.

We’ve already heard about advanced technology from Russia and China being used in this kind of manipulation to influence elections, so it’s a global issue, and that’s why plenty of tech giants are scrambling to curb it before it gets out of control.

Image: DIW-Aigen

Read next:

• EU Regulators Accuse Meta Of Failing To Comply With Antitrust Rules Over New Ad-Supported Subscription

• Median Salary of Magnificent Seven Companies Revealed
by Dr. Hura Anwar via Digital Information World

EU Regulators Accuse Meta Of Failing To Comply With Antitrust Rules Over New Ad-Supported Subscription

Tech giant Meta is being accused of failing to comply with the EU’s landmark anti-trust laws.

On Monday, regulators from the region slammed the company for its newly introduced subscription that’s supported by ads.

The European Commission said the option traps users in a cleverly designed “pay or consent” model: they must either pay to opt out or allow the company to track them for targeted advertising.

The model was first rolled out by the company last year on Meta’s Facebook and Instagram apps, and the investigation has found that it effectively forces users to consent to Meta using their personal information.

Meanwhile, the company’s spokesperson said in a statement that the ad-supported model follows the ruling of Europe’s highest court and is therefore in line with the Digital Markets Act.

They added that they look forward to meeting with the European Commission to explain their side of the matter so the investigation can be brought to an end soon.

Meta further explained that the model is a response to an EU court decision which stated that a firm may offer an alternative version of its service that does not rely on data collection for advertising.

Meta has previously cited that ruling as a primary reason for rolling out the subscription offer.

The European Commission disagreed, arguing that Meta’s model does not give users the option of a version of the service that collects less of their personal data yet is otherwise equivalent to the one based on personalized advertising.

Regulators strongly feel that users should be entitled to a version of the service that uses less of their personal data, particularly data collected for advertising purposes.

The other big reason the EU objects to the model is that it does not let users freely consent to Meta using their personal information for the gains it makes through online advertising.

If Meta is indeed found guilty, it will be forced to pay a hefty fine. Remember, the DMA has been in force since March 2024, and the law aims to crack down on tech giants engaging in anti-competitive behavior while ensuring they open up options for rivals in the industry.

Fines can go up to 10% of a firm’s yearly global revenue, and repeated breaches may result in double that figure.

As far as Meta is concerned, if it did breach the DMA, it could be hit with a fine surpassing the $13.4 billion mark, depending on the firm’s annual earnings.

Now that Facebook’s parent firm has received the investigation’s preliminary findings, it will be given the chance to offer a written defense.

The EU’s investigation was first opened in March, at the same time as two similar investigations were launched against other market-leading tech firms, Apple and Alphabet. The investigation is expected to conclude within 12 months of the start of proceedings.

Meta faces accusations of breaching EU antitrust rules with its ad-supported subscription model on Facebook and Instagram.

Image: DIW-Aigen 

Read next:

• Median Salary of Magnificent Seven Companies Revealed

• Survey Reveals the Cost of Website Development in 2024
by Dr. Hura Anwar via Digital Information World