Sunday, September 21, 2025

Your Supplier’s Breach May Be Flagged by AI Before They Even Know It

By Estelle Ruellan, threat intelligence researcher at threat exposure management (TEM) company Flare.

Image: DIW-AIgen

Cybercriminals persistently target critical infrastructure to disrupt key lifeline services and to mount high-stakes attacks that pressure companies into paying large ransoms.

Such was the case when advanced persistent threat (APT) groups like Volt Typhoon, APT41, and Salt Typhoon leveraged legitimate account credentials to conduct long-term intrusions, moving laterally across multiple U.S. state government networks.

In collaboration with Flare, Verizon found stolen credentials were involved in 88% of basic web application attack breaches, making them not only the most common initial attack vector but also, frequently, the only one.

In 2024 and 2025, there has been a surge in infostealer and credential marketplace activity, and security teams are struggling with alert fatigue. Most organizations can’t afford analysts spending hours every day trawling through Telegram, forums, and paste sites. If a model helps filter the noise, it gives human teams breathing room.

Our latest research shows that GPT-powered models can scan hundreds of daily posts on underground forums like XSS, Exploit.in, and RAMP, detecting stolen credentials and mapping live malware campaigns with 96% accuracy.

With the right prompts and navigation, LLMs can detect emerging breaches, identify compromised credentials, and surface novel exploits. When properly directed, these models can take on the heavy lifting of cyber threat intelligence, handling the foundational work of CTI gathering and basic analysis, so security analysts can dedicate their expertise to complex investigations and strategic threat assessments that demand human judgment and deeper insight.

However, the takeaway here isn’t “LLMs will solve cyber threat intelligence (CTI).” They are hyper-fast execution engines that require detailed human instruction, not seasoned analysts who understand business risk and context.

Security analysts must understand the tool's blind spot: LLMs need humans to dissect every element, provide domain knowledge, map decision-making steps, and supply contextual understanding. When properly instructed with this comprehensive guidance, they can execute tasks at incredible speed, but they remain fundamentally blind without human strategic oversight.

Let’s look at where LLMs succeed in CTI, and where their limits lie, so they can be used safely.

Where LLMs Add Real Value in CTI

Security teams are drowning in noise. Microsoft Defender for Endpoint has seen the number of indicators of attack (IOAs) grow 79% from January 2020 to today. Many of these alerts are false positives, such as logins flagged from unusual geographies, devices, or IPs when employees are traveling for business or working from new cafes.

LLMs can chip away at the overload. In our study, GPT-3.5 parsed hundreds of daily forum posts, pulling out details like stolen credentials, malware variants, and targeted sectors. For an analyst, that means minutes instead of hours spent sifting through chatter.
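For teams that want to experiment, a triage pass like that can be prototyped in a few lines. The sketch below is illustrative only, not the study’s actual pipeline: it assumes the OpenAI Python client, an API key in the environment, and invented sample posts.

```python
# Minimal triage sketch, assuming the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment.
# The prompt wording and sample posts are invented for illustration.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a cyber threat intelligence analyst. For the forum post "
    "provided, return JSON with keys: leaked_credentials (bool), "
    "malware_family (string or null), targeted_sector (string or null), "
    "summary (one sentence)."
)

forum_posts = [
    "Selling 2M fresh stealer logs, mostly EU banking portals, escrow ok.",
    "New loader build bypasses the latest EDR hooks, PM for a demo.",
]

for post in forum_posts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the study worked with GPT-3.5
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": post},
        ],
        temperature=0,  # keep triage output deterministic
    )
    # In production, validate the reply parses as JSON before trusting it.
    print(json.loads(response.choices[0].message.content))
```

Even a toy loop like this shows the division of labor: the human supplies the schema and definitions, and the model handles the volume.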

Its usefulness for breach and leak monitoring is especially potent. The use of valid account credentials and the exploitation of public-facing applications were tied as the top initial access vectors observed in 2024, each representing 30% of X-Force incident response engagements.

Having LLMs summarize cybercrime forum conversations and flag when credentials or other sensitive data appear to be leaked or traded can surface exposures before they hit production systems. In our study, the model highlighted mentions of compromised companies or products and surfaced potential breaches or exploits being discussed. This provides valuable context for breach and leak monitoring, giving analysts early awareness of emerging threats without hours of manual review.

Moreover, threat actors rarely stay in one lane: they might sell infostealer logs on Telegram, let initial access brokers (IABs) package that access and list it on forums, and, in another channel, advertise phishing kits to weaponize those stolen credentials. Each stage looks like a separate conversation if you only see one channel, but they’re pieces of the same campaign pipeline.

LLMs are uniquely good at pattern recognition across disjointed conversations. Done right and with the right context, they could stitch fragments together into early warning signals, giving analysts a clearer picture of emerging campaigns.
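As a rough illustration of what that stitching could look like once indicators have been extracted (for instance, by the LLM pass sketched earlier), here is a deliberately naive correlation sketch; the channels, actor handles, and indicator names are invented.

```python
# Minimal sketch of cross-channel correlation. It assumes indicators
# (actor handle, malware family, dump name) have already been extracted;
# the sample records below are invented for illustration.
from collections import defaultdict

messages = [
    {"channel": "telegram", "actor": "nightcrawler", "indicator": "lumma_logs_0915"},
    {"channel": "forum:XSS", "actor": "nightcrawler", "indicator": "lumma_logs_0915"},
    {"channel": "forum:RAMP", "actor": "accessking", "indicator": "corp_vpn_access"},
]

campaigns = defaultdict(list)
for msg in messages:
    # Keying on (actor, indicator) is deliberately naive; real pipelines
    # need fuzzy matching and entity resolution across actor aliases.
    campaigns[(msg["actor"], msg["indicator"])].append(msg["channel"])

for (actor, indicator), channels in campaigns.items():
    if len(channels) > 1:  # same fragment seen in more than one channel
        print(f"Possible campaign: {actor} / {indicator} across {channels}")
```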

Blind Spots and Risks of Overreliance

While LLMs show potential in minimizing false positives, these tools are not immune to them. Our team noted that GPT-3.5 struggled with something as basic as verb tense, confusing an ongoing breach with one that had already ended. The lessons: prompt engineering matters (how you craft your prompt shapes what the model sees), and high accuracy in controlled studies does not guarantee the same results in live, variable scenarios.


LLMs can fabricate connections or misclassify chatter when context is thin. In practice, that means a model might confidently link stolen credentials to the wrong sector, sending analysts down rabbit holes and wasting valuable time. According to Gartner, 66% of senior enterprise risk executives noted AI-assisted misinformation as a top threat in 2024.

Cost and scale matter too. Running models across thousands of daily posts isn’t free. If teams lean too hard on closed-source LLMs without evaluating cost-performance trade-offs, they risk creating yet another tool that looks great in a proof of concept but doesn’t survive budget cycles.

Projects like LLaMA 3, Mistral, and Falcon are catching up to closed models in language understanding. Fine-tuning or training them on your own CTI datasets can be cheaper in the long term, with more control over model updates and security. The trade-off is that you need in-house expertise to manage training and guardrails.
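As a rough sketch of what the open-weight route might look like, assuming the Hugging Face transformers library and enough GPU memory; the Mistral model name is only an example, not a recommendation from the study:

```python
# Rough sketch of running an open-weight model locally for the same
# triage task, using Hugging Face transformers. The model name is one
# example; swap in whatever fits your hardware and license needs.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
)

prompt = (
    "[INST] You are a CTI analyst. Does this post advertise stolen "
    "credentials? Answer yes or no, then one line of reasoning.\n"
    "Post: Selling 2M fresh stealer logs, mostly EU banking portals. [/INST]"
)

print(generator(prompt, max_new_tokens=80)[0]["generated_text"])
```

The design choice here is control over cost and data: inference stays in-house, but the team owns model updates, evaluation, and guardrails.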

What CISOs Should Demand

CISOs already know the only way to stay ahead of automated attacks is to automate defenses. Some 79% of senior executives say they are adopting agents in their companies to strengthen security. The key part is knowing how to use them without adding new risks.

A model with 96% accuracy is impressive, but it still misses roughly one in twenty-five signals. And, as mentioned earlier, it can still raise false positives or link stolen credentials to the wrong sector. That’s why all AI triage must be overseen and verified by an analyst, ensuring errors don’t slip into executive briefings or trigger costly overreactions.

These tools only work if they are steered with precision. Prompt engineering is critical: context, down to the last detail and the tense used, affects LLM performance. In one case, a discussion about purchasing data in Israel, titled “Buy GOV access,” was mislabeled as not targeting critical infrastructure, when in fact it was, because the title wasn’t included in the prompt. CISOs and security teams using these models must ground outputs with missing but critical context.


Moreover, variables like “is targeting a large organization” or “critical infrastructure” were interpreted inconsistently by the model, since there was no shared definition. It flagged globally known names accurately but missed sector-specific or less famous entities. When prompting an LLM, don’t rely on the model’s definitions; set your own. If you don’t set the rules, the model will invent its own. When using subjective or loosely defined labels, security teams should embed definitions or examples within prompts, such as: “Critical infrastructure encompasses essential systems and facilities such as the energy, oil and gas, transportation, water supply, telecommunications, internet provider, military, government, harbour, and airport sectors.”

Some best practices include (a prompt sketch applying them follows the list):

  • Define the LLM’s role and provide an explicit output structure
  • Align verb tense to context (“has sold” vs. “is selling”)
  • Always include relevant context (e.g., thread titles or summaries of the previous conversation)
  • Provide clear definitions or decision rules for subjective categories
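Taken together, these practices can be baked directly into a prompt template. The sketch below is illustrative, not the study’s actual prompt; the role, schema, and definitions are placeholders to adapt.

```python
# Minimal prompt-template sketch encoding the practices above: explicit
# role, fixed output schema, tense awareness, thread context, and a
# hard definition for the subjective "critical infrastructure" label.
# All wording is illustrative, not the study's exact prompt.
PROMPT_TEMPLATE = """You are a cyber threat intelligence analyst.

Definitions (use these, not your own):
- critical_infrastructure: energy, oil and gas, transportation, water
  supply, telecommunications, internet providers, military, government,
  harbours, airports.

Context:
- Thread title: {thread_title}
- Prior conversation summary: {thread_summary}

Task: analyze the post below. Distinguish past activity ("has sold")
from ongoing activity ("is selling").

Return JSON only:
{{"breach_status": "ongoing|completed|unclear",
  "targets_critical_infrastructure": true/false,
  "evidence": "<quote from the post>"}}

Post: {post_text}
"""

# Example fill-in, echoing the mislabeled "Buy GOV access" case above.
print(PROMPT_TEMPLATE.format(
    thread_title="Buy GOV access",
    thread_summary="Seller offering government network access in Israel.",
    post_text="Fresh gov portal creds, selling now, escrow ok.",
))
```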

Finally, CISOs should demand clear ROI benchmarks before betting big on tools that could become shelfware. Closed-source models deliver strong results, but open-source alternatives are catching up.

LLMs are not perfect, but when tied tightly to structured prompts, contextual data, and clear analyst-defined rules, they can amplify defense strategies. They should not be treated as black-box oracles. They can sift vast volumes of dark-web chatter and hand analysts a distilled starting point. The key is not expecting them to make judgment calls on risk but designing the workflow so that they enrich human decision-making instead of replacing it.

Read next: Who Really Owns OpenAI? The Billion-Dollar Breakdown


by Web Desk via Digital Information World

Saturday, September 20, 2025

Who Really Owns OpenAI? The Billion-Dollar Breakdown

As OpenAI cements its place as one of the most valuable artificial intelligence companies in the world, questions around ownership and control have become central to the company’s future. Based on a $500 billion valuation, recent estimates provide a clearer picture of who holds the biggest stakes in OpenAI.

Microsoft remains the single largest shareholder, with 28% of the company, valued at approximately $140 billion. The close partnership between OpenAI and Microsoft has grown since their multibillion-dollar collaborations, cementing the tech giant’s influence over the AI firm’s trajectory.

OpenAI’s nonprofit parent entity follows closely with 27% ($135 billion), ensuring that the company’s original mission of prioritizing safety and long-term public benefit still retains substantial weight. Meanwhile, OpenAI employees collectively own 25% ($125 billion), reflecting the company’s strategy of rewarding and retaining top AI talent.

On the investor side, the most significant group is participants in the 2025 fundraise, who hold 13% ($65 billion). Smaller but still notable are investors from the 2024 fundraise with 4% ($20 billion), along with IO shareholders at 2% ($10 billion) and OpenAI’s earliest backers at 1% ($5 billion).

This ownership structure highlights a balance between big-tech partnership, nonprofit oversight, employee ownership, and venture capital backing. As OpenAI scales further in 2025 and beyond, the mix of stakeholders will play a pivotal role in shaping not only the company’s innovations but also the governance of AI at a global level.

Microsoft Holds 28% Stake as OpenAI’s Governance Faces Growing Scrutiny

Stakeholder and share:

  • Microsoft: 28% ($140B)
  • OpenAI’s nonprofit: 27% ($135B)
  • OpenAI employees: 25% ($125B)
  • Investors (2025 fundraise): 13% ($65B)
  • Investors (2024 fundraise): 4% ($20B)
  • IO shareholders: 2% ($10B)
  • OpenAI’s first investors: 1% ($5B)

Notes: This post was edited/created using GenAI tools.

Read next: FCC Considers Cutting Satellites Out of Environmental Oversight
by Irfan Ahmad via Digital Information World

Trump Sets $100K Fee for H-1B Visas, Tech Sector Faces New Strain

President Donald Trump has ordered a steep new cost on skilled worker visas, setting a $100,000 annual fee for H-1B applications. The proclamation, signed on Friday, is the latest move in his administration’s tightening of immigration rules.

How the program works

The H-1B system lets U.S. companies hire foreign workers with specialized skills in science, technology, engineering, or medicine. The visas run for three years with the option to extend to six. Each year, 65,000 are granted by lottery, with an additional 20,000 for graduates of U.S. advanced degree programs. Approvals, including renewals, reached about 400,000 in 2024. India remains the main source of recipients, accounting for the majority of visas.

White House justification

The administration says the change addresses abuse in the system. Officials point to examples where companies obtained thousands of H-1B visas while cutting American jobs. A White House fact sheet noted one company received approval for over 5,000 foreign workers this year while laying off about 16,000 U.S. staff. The proclamation also frames the fee as a matter of national security.

Exemptions and timeframe

The Homeland Security Secretary has been given power to exempt individuals, companies, or industries if national interest is cited. The new fee takes effect immediately and is set to last for one year unless extended.

Wage rules under review

Alongside the fee, the Labor Secretary has been directed to revise wage requirements. The goal is to prevent companies from undercutting U.S. salaries by relying on lower-paid foreign workers. Federal data shows that H-1B holders now fill more than 65 percent of IT roles, up from about 32 percent in 2003. Unemployment among recent computer science graduates has risen above six percent.

Impact on the tech industry

Technology firms are expected to resist the move. Many of them rely on foreign talent, especially Indian engineers, to fill roles that U.S. graduates cannot fill at scale. Past visa holders have included figures who went on to shape the industry. Elon Musk entered the United States on an H-1B before founding Tesla and SpaceX. Instagram co-founder Mike Krieger, originally from Brazil, also began on an H-1B and faced delays that nearly derailed his startup plans.

Policy background

The H-1B program has swung in response to changing administrations. Approvals peaked in 2022 under President Joe Biden. Rejections reached their highest point in 2018 during Trump’s first term. The new financial barrier is seen as a continuation of the current White House crackdown on immigration.

Additional residency track

The order also creates a new residency pathway known as the “gold card.” Individuals can secure permanent U.S. status by paying $1 million, while companies may sponsor workers by paying $2 million. The administration has promoted the measure as a way to attract high-value investors.

Legal challenges ahead

The sharp rise in costs is expected to trigger pushback from Silicon Valley and other sectors that depend heavily on international talent. Legal challenges are likely in the months ahead as the policy takes hold.

Conclusion

The new restrictions also reveal a wider truth about global labor. Workers from developing nations often help sustain the economies of wealthier countries, yet they can be discarded once policy priorities change. When advanced nations draw on foreign talent to meet their needs and later push those same people aside, it reduces human beings to temporary resources rather than valued contributors. Such patterns highlight the imbalance of power in international labor markets and raise questions about fairness, dignity, and long-term responsibility toward the people who help drive growth. 


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: LinkedIn to Tighten Data Rules, Expand Microsoft Ad Sharing and AI Training on November 3

by Irfan Ahmad via Digital Information World

Friday, September 19, 2025

LinkedIn to Tighten Data Rules, Expand Microsoft Ad Sharing and AI Training on November 3

LinkedIn is preparing to tighten its data rules this autumn, and the update is more than just a formality. Beginning November 3, the Microsoft-owned platform will apply a new set of terms that determine how member information is shared for advertising and how it is used inside LinkedIn’s artificial intelligence tools.

Data Moving Into Microsoft’s Orbit

One of the most visible shifts involves advertising. In several countries outside the EU, LinkedIn will send Microsoft more information about member activity, from profile details to ad clicks. That information will help Microsoft push more tailored promotions across its family of products.

The catch is that even if someone blocks the sharing, Microsoft ads will still appear — they just will not draw from LinkedIn habits. To stop the flow of data, members need to go into account settings and switch off the Data Sharing with Microsoft option before the terms kick in.

AI Training Set to Expand

At the same time, LinkedIn is widening the scope of its generative AI training. In Europe, the UK, Canada, Switzerland, and Hong Kong, public content and profile information will automatically be pulled into AI systems that suggest posts, polish profiles, or help recruiters match with candidates. Private messages remain out of reach, but everything else that is public can be fed into these models.

By default, the switch is on. Anyone who wants out has to dig into Settings > Data Privacy > Generative AI Improvement and toggle it off. Turning it off will not disable LinkedIn’s AI features; it just stops personal data from being folded into future training.

Other regions, including the United States, will not see changes to AI training this round.

Legal and Policy Notes

The company says its approach differs by region. In the EU and UK, data processing for AI rests on the legal principle of “legitimate interest,” while in other markets the emphasis is on user choice through opt-out tools.

Alongside these changes, the platform has also updated its User Agreement. Deepfakes and impersonations are now spelled out as violations, new rules explain when secondary payment methods can be used, and members have been given clearer ways to appeal restrictions on their accounts.

What Members Should Do

For most people, the practical step is reviewing privacy controls before November 3. Leaving the defaults in place means LinkedIn can share data with Microsoft for ad targeting and use profile details for AI model training where applicable. Those who are not comfortable with that approach should turn the features off manually.

Anyone who continues using LinkedIn past the deadline will be considered to have accepted the new terms. Those unwilling to do so have the option to close their account entirely.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Italy Sets National AI Rules, First in European Union
by Asim BN via Digital Information World

Italy Sets National AI Rules, First in European Union

Italy has introduced a national law to govern artificial intelligence, becoming the first country in the European Union to take this step. The legislation applies across healthcare, education, workplaces, justice, sports, and public administration. In each area, AI systems must remain traceable and subject to human oversight.

Criminal penalties and child protections

The law introduces penalties for harmful uses of AI. Creating deepfakes or using the technology to commit crimes such as fraud or identity theft can lead to prison sentences of one to five years. Children under 14 will now need parental consent to access AI platforms or services.

Copyright and creative use

AI-assisted works can qualify for copyright protection if they involve proven intellectual effort. The rules also limit text and data mining to content that is either non-copyrighted or part of authorized scientific research.

Oversight and enforcement

The government has appointed the Agency for Digital Italy and the National Cybersecurity Agency to enforce the new law. Oversight will extend to workplaces, where employers must inform staff if AI is being used. In healthcare, doctors remain the decision-makers, with patients entitled to clear information when AI is involved in treatment.

Financial support for local industry

To back the policy, Rome has pledged up to $1.09 billion through a state-supported venture capital fund. The money will support domestic companies developing AI, telecommunications, and cybersecurity technologies. The amount is significant in national terms, but it remains far below the larger investments being made in the United States and China.

EU alignment and national stance

The law complements the EU’s AI Act, which came into force in 2024. That legislation bans certain high-risk applications outright, including social scoring systems and unrestricted biometric surveillance. Italy has previously taken a strict line on AI, temporarily suspending ChatGPT in 2023 for failing to meet EU privacy requirements.


Image: Hongbin / Unsplash

Notes: This post was edited/created using GenAI tools.

Read next: How a Cybersecurity Veteran Approaches Parenting in the Age of Smartphones
by Irfan Ahmad via Digital Information World

How a Cybersecurity Veteran Approaches Parenting in the Age of Smartphones

Parents today face decisions that earlier generations never imagined. Phones, messaging apps, and social media are part of childhood in ways that can’t easily be undone. Alex Stamos, who previously led security at both Facebook and Yahoo and now lectures at Stanford, has seen how dangerous online spaces can be. That background has shaped the rules he follows at home and the advice he gives to other families.

When to Start

Stamos didn’t rush to give his youngest child a phone. “She got it at 13. That was her line,” he said during a recent interview on Tosh Show. He explained that many children have devices earlier, but parents can delay with tablets that have browsers locked and only approved apps installed. A full smartphone, he warned, should wait until kids are ready to manage it.

Trust With Oversight

At home, his guiding rule is simple: “It’s trust but verify.” Stamos believes children should know their parents have access to their devices. “You have to have the code to your kids’ phones, right? And you have to do spot checks,” he said. The rule is enforced by a clear consequence: if a child ever refuses to hand over the phone, it gets taken away.

For him, the point isn’t suspicion. He tells kids that oversight protects them from others. “There are bad people out there,” he said, recalling how predators often try to isolate children by convincing them not to tell parents about mistakes.

Lessons From School Talks

Stamos has also spoken to classrooms about safety. He tells children that when they get seriously hurt in real life, parents aren’t angry but frightened. The same applies online. “If you make a big mistake or you’re really hurt, your parents are there to help you,” he explained. The goal is to make sure kids never feel they have to hide a problem.

Bedtime Rules


One of his strictest boundaries involves sleep. Phones in his home are docked in a common area overnight. “Teenagers aren’t sleeping because they have their phones all night, and they text each other all night,” he said. Collecting devices in the evening also creates a natural moment for parents to carry out spot checks.

Social Media Boundaries

Stamos takes a cautious view of platforms like Instagram and TikTok. He advises families to wait until children are prepared, and even then to keep accounts private. He noted that many teenagers now prefer private chats on apps like WhatsApp or iMessage. “They’re much more into private communications with each other,” he observed, calling that shift a positive sign.

Adding Safeguards

Phones themselves now include tools that support boundaries. Stamos pointed to Apple’s “communication safety” feature, which can block explicit photos. He called it “an important one to turn on,” though he admitted older teens can override it. Screen time controls and app restrictions also help reinforce rules without constant parental monitoring.

What He Learned From Industry Work

His cautious stance is rooted in his career. While leading security at Facebook, Stamos supervised a child safety team and saw how predators exploited secrecy. That experience convinced him that openness at home is the strongest protection.

“The worst outcomes for kids are when they make a mistake and then feel that they can’t tell an adult,” he said. In his view, building a culture where children can bring problems to parents, even embarrassing ones, is more important than any technical filter.

A Framework for Families

Stamos’s approach combines delay, access, oversight, structure, and openness. Phones arrive later rather than earlier, passwords are shared, spot checks happen, devices are collected at night, social media stays limited, technical tools are enabled, and mistakes can be admitted without fear.

No system is perfect, but Stamos believes these boundaries reduce risk while teaching responsibility. “If you screw up, I will be there to help you,” he tells his children. For him, that promise is at the center of raising kids in a connected world.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• WhatsApp Tests ‘Mention Everyone’ Option, But It May Open the Door to Spam

• Amnesty: Global Powers and Corporations Enabling Israel’s Unlawful Occupation and Gaza Genocide


by Asim BN via Digital Information World

Amnesty: Global Powers and Corporations Enabling Israel’s Unlawful Occupation and Gaza Genocide

Amnesty International has published a new briefing accusing states, public institutions, and major companies of sustaining Israel’s control over Palestinian territories and its military operations in Gaza. The organisation argues that the occupation, which international courts have already ruled unlawful, is supported by global political and economic structures that enable ongoing violations of international law.

Arms Transfers and Trade Connections

The report was released on the anniversary of a 2024 United Nations resolution that instructed Israel to withdraw from the occupied territories within one year. Amnesty says that the deadline has now passed without compliance and that attacks, civilian suffering, and food shortages continue.

The organisation is calling for immediate bans on the export of weapons, surveillance systems, and military technology to Israel. It also wants restrictions on re-export arrangements that allow such equipment to reach Israel through third states. Amnesty adds that suspending arms flows alone is not enough, urging governments to block contracts, licences, and financial dealings with companies that supply equipment for settlement activities or military operations.

Businesses Cited in the Briefing

Amnesty names fifteen firms across several industries. They include American defence contractors Boeing and Lockheed Martin, Israeli weapons manufacturers Elbit Systems, Rafael Advanced Defense Systems, and Israel Aerospace Industries, and technology companies such as Palantir (a US-based company), Hikvision (a China-based company), and Corsight. Other firms mentioned are the Spanish train manufacturer CAF, South Korea’s HD Hyundai, and Israel’s state-owned water utility Mekorot.

The briefing describes how Boeing bombs and Lockheed Martin aircraft have been used in Gaza airstrikes that killed large numbers of civilians. It also details the role of Israeli companies in providing drones, ammunition, and border control systems. Surveillance technology supplied by Hikvision and Corsight is linked to security measures described as enforcing apartheid conditions. Mekorot is accused of operating water networks in a way that favours Israeli settlements over Palestinian communities.

The report also recalls previous criticism of travel companies Airbnb, Booking.com, Expedia, and TripAdvisor for continuing to list properties located in Israeli settlements.

Technology Giants Under Scrutiny

While Amnesty’s briefing focuses on arms producers, infrastructure firms, and surveillance companies, separate recent reports have also examined the role of large US technology corporations in Israel’s security operations. Reports published over the past year describe how Microsoft, Amazon, Google, and OpenAI have supplied cloud services and artificial intelligence tools later used by Israeli authorities for surveillance and intelligence work in Gaza and the West Bank.

According to leaked documents, Microsoft gave Israel’s Unit 8200, a military intelligence branch, a segregated space on its Azure cloud to store mass recordings of Palestinian phone calls. Analysts say this information helped guide some military activity. Microsoft also delivered translation services and AI tools to the Israeli Ministry of Defense. Independent reviews have not confirmed a direct link to civilian harm but accepted that such applications carry significant risks.

Google and Amazon face criticism for their participation in Project Nimbus, a cloud services contract signed with the Israeli government in 2021. The deal grants Israeli ministries and agencies access to computing infrastructure. Critics argue that the project strengthens state surveillance and decision-making tied to military operations. Employees at both companies have staged protests over the lack of oversight and safeguards.

Meta has also faced criticism for content moderation policies that restricted pro-Palestinian voices on Facebook and Instagram, with digital rights groups arguing that the company applied its rules unevenly during the Gaza conflict.

States and Companies Urged to Act

Amnesty calls on governments to enforce sanctions that include travel bans, asset freezes, and restrictions on trade shows, research projects, and public contracts for companies involved in supplying Israel with settlement-related or military goods.

The organisation also rejects the idea that companies can remain neutral, saying that continued business ties risk both reputational damage and possible legal accountability under international law.

International Legal Background

The report references key rulings by the International Court of Justice. In July 2024, the Court declared Israel’s occupation unlawful and said its policies in the territories amount to racial segregation. In January 2024, the Court warned of a risk of genocide in Gaza and ordered Israel to take preventive measures. Those warnings were repeated in March and May of that year.

Despite these rulings, Amnesty says Israel intensified its campaign in Gaza through late 2024 and into 2025, with widespread bombardments, forced displacement, and what it describes as deliberate deprivation of food supplies. By December 2024, Amnesty concluded that genocide was taking place, a position that has since gained support from several international legal experts.

Call for Public Pressure

Beyond governments and companies, the report urges civil society, universities, and investors to apply pressure by cutting ties with businesses linked to the occupation and military operations. Amnesty argues that consumer action and peaceful mobilisation are necessary to hold institutions accountable.

The central claim of the briefing is that Israel’s occupation and campaign in Gaza cannot continue without international support. Amnesty warns that unless states and corporations act now, they risk becoming complicit in serious breaches of international law.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: “Scheming” AI: Why Deceptive Behavior Could Become a Bigger Risk as Models Grow Smarter
by Irfan Ahmad via Digital Information World