Monday, September 22, 2025

Digital Currencies Push Into Global Economic Rankings

The cryptocurrency sector has crossed a point where its scale can be measured beside national economies. Current estimates place the total value of all coins and tokens at around 3.88 trillion dollars. That makes it larger than India’s economy and places it just behind Japan and Germany. The speed of this climb is striking. In the past year, the market more than doubled, and compared with 2023 the increase is close to threefold. Looking back five years, the value is nearly ten times higher.

Setting It Against Countries

Putting digital assets against the size of traditional economies highlights the change. At today’s level, the crypto market is roughly 70 to 90 percent above the output of Italy, Brazil, Canada, or Russia. It is more than twice the scale of South Korea, Spain, or Australia. Against Switzerland or Poland, it is several times greater. The comparison shows that what started as a niche technology has become large enough to sit in global economic tables.
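These multiples follow directly from the nominal GDP figures in the table below, as a quick check with the article’s own numbers shows:

```python
# Crypto market cap vs. selected nominal GDPs (2023, trillions USD),
# using the figures from the comparison table in this article.
crypto_cap = 3.88

gdp = {
    "Italy": 2.301, "Brazil": 2.174, "Canada": 2.142, "Russia": 2.021,
    "South Korea": 1.713, "Spain": 1.620, "Australia": 1.728,
    "Switzerland": 0.885, "Poland": 0.809,
}

for country, value in gdp.items():
    print(f"crypto is {crypto_cap / value:.2f}x the GDP of {country}")
```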

Stock Market Benchmarks

When compared with stock exchanges, the picture is mixed. At 3.88 trillion dollars, crypto is ahead of the London Stock Exchange and the Toronto Stock Exchange. Yet it remains far smaller than the New York Stock Exchange, which is valued above 25 trillion dollars. This places crypto in the middle ground, larger than some national markets but still behind the biggest hubs of global finance.

An Uneven Landscape

The size of the sector does not tell the whole story. Out of more than nine thousand coins and tokens in circulation, the top ten account for more than ninety percent of total value. Most tokens trade below one dollar, and only a handful are worth hundreds or thousands. This leaves a picture of concentration, with a few dominant currencies shaping the market while thousands contribute very little.

What Lies Ahead

The first half of 2025 has shown both the promise and the fragility of this market. Regulation is developing, institutional investors are entering, and volatility continues to define trading patterns. The result is an industry that now stands at a global scale but still carries the risks of a sector in transition.

Crypto Market Now Comparable to the World’s Fifth Largest Economy
Country GDP (nominal, 2023) GDP (abbrev.)
United States $27,720,700,000,000 27.721 trillion
China $17,794,800,000,000 17.795 trillion
Germany $4,525,700,000,000 4.526 trillion
Japan $4,204,490,000,000 4.204 trillion
Crypto $3,880,000,000,000 3.88 trillion
India $3,567,550,000,000 3.568 trillion
United Kingdom $3,380,850,000,000 3.381 trillion
France $3,051,830,000,000 3.052 trillion
Italy $2,300,940,000,000 2.301 trillion
Brazil $2,173,670,000,000 2.174 trillion
Canada $2,142,470,000,000 2.142 trillion
Russia $2,021,420,000,000 2.021 trillion
Mexico $1,789,110,000,000 1.789 trillion
Australia $1,728,060,000,000 1.728 trillion
South Korea $1,712,790,000,000 1.713 trillion
Spain $1,620,090,000,000 1.62 trillion
Indonesia $1,371,170,000,000 1.371 trillion
Netherlands $1,154,360,000,000 1.154 trillion
Turkey $1,118,250,000,000 1.118 trillion
Saudi Arabia $1,067,580,000,000 1.068 trillion
Switzerland $884,940,000,000 884.94 billion
Poland $809,201,000,000 809.201 billion
Argentina $646,075,000,000 646.075 billion
Belgium $644,783,000,000 644.783 billion
Sweden $584,960,000,000 584.96 billion
Ireland $551,395,000,000 551.395 billion
Thailand $514,969,000,000 514.969 billion
United Arab Emirates $514,130,000,000 514.13 billion
Israel $513,611,000,000 513.611 billion
Austria $511,685,000,000 511.685 billion
Singapore $501,428,000,000 501.428 billion
Norway $485,311,000,000 485.311 billion
Bangladesh $437,415,000,000 437.415 billion
Philippines $437,146,000,000 437.146 billion
Vietnam $429,717,000,000 429.717 billion
Denmark $407,092,000,000 407.092 billion
Iran $404,626,000,000 404.626 billion
Malaysia $399,705,000,000 399.705 billion
Egypt $396,002,000,000 396.002 billion
Hong Kong $380,812,000,000 380.812 billion
South Africa $380,699,000,000 380.699 billion
Nigeria $363,846,000,000 363.846 billion
Colombia $363,494,000,000 363.494 billion
Romania $350,776,000,000 350.776 billion
Czech Republic (Czechia) $343,208,000,000 343.208 billion
Pakistan $337,912,000,000 337.912 billion
Chile $335,533,000,000 335.533 billion
Finland $295,532,000,000 295.532 billion
Portugal $289,114,000,000 289.114 billion
Peru $267,603,000,000 267.603 billion
Kazakhstan $262,642,000,000 262.642 billion
New Zealand $252,176,000,000 252.176 billion
Iraq $250,843,000,000 250.843 billion
Algeria $247,626,000,000 247.626 billion
Greece $243,498,000,000 243.498 billion
Qatar $213,003,000,000 213.003 billion
Hungary $212,389,000,000 212.389 billion
Ukraine $178,757,000,000 178.757 billion
Kuwait $163,705,000,000 163.705 billion
Ethiopia $163,698,000,000 163.698 billion
Morocco $144,417,000,000 144.417 billion
Slovakia $132,908,000,000 132.908 billion
Dominican Republic $121,444,000,000 121.444 billion
Ecuador $118,845,000,000 118.845 billion
Sudan $109,266,000,000 109.266 billion
Oman $108,811,000,000 108.811 billion
Kenya $108,039,000,000 108.039 billion
Guatemala $104,450,000,000 104.45 billion
Bulgaria $102,408,000,000 102.408 billion
Uzbekistan $101,592,000,000 101.592 billion
Costa Rica $86,497,941,439 86.498 billion
Luxembourg $85,755,006,124 85.755 billion
Angola $84,824,654,482 84.825 billion
Croatia $84,393,795,502 84.394 billion
Sri Lanka $84,356,863,744 84.357 billion
Panama $83,318,176,900 83.318 billion
Serbia $81,342,660,752 81.343 billion
Lithuania $79,789,877,416 79.79 billion
Tanzania $79,062,403,821 79.062 billion
Côte d'Ivoire $78,875,489,245 78.875 billion
Uruguay $77,240,830,877 77.241 billion
Ghana $76,370,396,722 76.37 billion
Azerbaijan $72,356,176,471 72.356 billion
Belarus $71,857,382,746 71.857 billion
Slovenia $69,148,468,417 69.148 billion
Myanmar $66,757,619,000 66.758 billion
DR Congo $66,383,287,003 66.383 billion
Turkmenistan $60,628,857,143 60.629 billion
Jordan $50,967,475,352 50.967 billion
Cameroon $49,279,410,983 49.279 billion
Uganda $48,768,955,863 48.769 billion
Tunisia $48,529,595,417 48.53 billion
Bahrain $46,079,867,021 46.08 billion
Macao $45,803,067,940 45.803 billion
Bolivia $45,135,398,009 45.135 billion
Libya $45,096,462,972 45.096 billion
Paraguay $42,956,263,544 42.956 billion
Cambodia $42,335,646,896 42.336 billion
Latvia $42,247,850,065 42.248 billion
Estonia $41,291,245,222 41.291 billion
Nepal $40,908,073,367 40.908 billion
Zimbabwe $35,231,367,886 35.231 billion
Honduras $34,400,509,852 34.401 billion
El Salvador $34,015,620,000 34.016 billion
Cyprus $33,886,930,712 33.887 billion
Iceland $31,325,116,556 31.325 billion
Senegal $30,848,333,084 30.848 billion
Georgia $30,777,833,585 30.778 billion
Papua New Guinea $30,729,242,919 30.729 billion
Zambia $27,577,956,471 27.578 billion
Bosnia and Herzegovina $27,514,782,476 27.515 billion
Trinidad and Tobago $27,372,285,698 27.372 billion
Armenia $24,085,749,592 24.086 billion
Albania $23,547,179,830 23.547 billion
Malta $22,328,640,242 22.329 billion
Guinea $22,199,409,741 22.199 billion
Mozambique $20,954,220,984 20.954 billion
Mali $20,661,794,596 20.662 billion
Mongolia $20,325,121,394 20.325 billion
Burkina Faso $20,324,617,845 20.325 billion
Haiti $19,850,829,758 19.851 billion
Benin $19,676,049,076 19.676 billion
Jamaica $19,423,355,409 19.423 billion
Botswana $19,396,084,498 19.396 billion
Gabon $19,388,402,542 19.388 billion
Nicaragua $17,829,218,219 17.829 billion
State of Palestine $17,420,800,000 17.421 billion
Afghanistan $17,233,051,620 17.233 billion
Guyana $17,159,509,565 17.16 billion
Niger $16,819,170,421 16.819 billion
Moldova $16,539,436,547 16.539 billion
Laos $15,843,155,731 15.843 billion
Madagascar $15,790,113,247 15.79 billion
North Macedonia $15,763,621,848 15.764 billion
Congo $15,321,055,823 15.321 billion
Brunei $15,128,292,981 15.128 billion
Mauritius $14,644,524,819 14.645 billion
Bahamas $14,338,500,000 14.338 billion
Rwanda $14,097,768,472 14.098 billion
Kyrgyzstan $13,987,627,909 13.988 billion
Chad $13,149,325,362 13.149 billion
Malawi $12,712,150,082 12.712 billion
Namibia $12,351,025,067 12.351 billion
Equatorial Guinea $12,337,550,584 12.338 billion
Tajikistan $12,060,602,009 12.061 billion
Mauritania $10,651,709,411 10.652 billion
Togo $9,171,261,838 9.171 billion
Montenegro $7,530,593,375 7.531 billion
Barbados $6,720,733,200 6.721 billion
Maldives $6,590,894,302 6.591 billion
Sierra Leone $6,411,869,546 6.412 billion
Fiji $5,442,046,565 5.442 billion
Eswatini $4,442,875,788 4.443 billion
Liberia $4,240,000,000 4.24 billion
Andorra $3,785,067,332 3.785 billion
Aruba $3,648,573,136 3.649 billion
Suriname $3,455,146,281 3.455 billion
Belize $3,066,850,000 3.067 billion
Burundi $2,642,161,669 2.642 billion
Central African Republic $2,555,492,085 2.555 billion
Cabo Verde $2,533,819,406 2.534 billion
Saint Lucia $2,430,148,148 2.43 billion
Gambia $2,396,111,022 2.396 billion
Seychelles $2,141,450,171 2.141 billion
Lesotho $2,117,962,451 2.118 billion
Timor-Leste $2,079,916,900 2.08 billion
Guinea-Bissau $2,048,348,108 2.048 billion
Antigua and Barbuda $2,033,085,185 2.033 billion
Solomon Islands $1,633,319,401 1.633 billion
Comoros $1,352,380,971 1.352 billion
Grenada $1,316,733,333 1.317 billion
Vanuatu $1,126,313,359 1.126 billion
St. Vincent & Grenadines $1,065,962,963 1.066 billion
Saint Kitts & Nevis $1,055,499,778 1.055 billion
Samoa $938,189,444 938.189 million
Sao Tome & Principe $678,976,265 678.976 million
Dominica $653,992,593 653.993 million
Micronesia $460,000,000 460 million
Palau $281,849,063 281.849 million
Kiribati $279,208,903 279.209 million
Marshall Islands $259,300,000 259.3 million
Tuvalu $62,280,312 62.28 million

Read next: Malware Counts Climb Higher on Windows as macOS Sees Fewer Cases
by Asim BN via Digital Information World

Malware Counts Climb Higher on Windows as macOS Sees Fewer Cases

Fresh data from 2025 shows that Windows computers continue to attract the bulk of malware activity. Surfshark Antivirus recorded close to 479,000 detections from January through late August. Out of that total, about 419,000 were on Windows devices and just over 60,000 were on macOS. The difference puts Windows at nearly seven times the number of infections seen on Apple systems.
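The seven-to-one gap follows directly from the rounded detection counts:

```python
# Surfshark detection counts, January through late August 2025
# (rounded figures, per the article).
windows_detections = 419_000
macos_detections = 60_000
total_detections = windows_detections + macos_detections  # close to 479,000

ratio = windows_detections / macos_detections
print(f"Windows saw {ratio:.1f}x as many detections as macOS")
```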

Market Share Shapes Attacks

One reason behind the imbalance is the larger share of Windows in the desktop market. Globally, Windows accounts for around 71 percent of users, while macOS holds about 15 percent. The picture is similar across individual regions. In the United States and the United Kingdom, Windows has about two thirds of the share. In Germany, France, and Spain it ranges from 70 to 72 percent, while in South Korea it climbs as high as 85 percent. Attackers lean toward platforms that promise the widest reach, and the scale of Windows keeps it at the top of their list.

Malware Types on macOS

Although the raw numbers on Apple machines remain smaller, the data makes clear that macOS faces its own risks. Viruses accounted for the largest portion at 28 percent. Trojans followed at 26 percent. Riskware came in at 15 percent, adware at 8 percent, and exploits at 7 percent, while the rest fell into less common categories. Each carries a different method of operation, from malicious code that attaches to programs to software that appears legitimate but opens a pathway for further attacks.

Windows Categories and July Surge

On Windows, the most common detections involved malicious PowerShell scripts, which made up 22 percent of the total. Trojans represented 21 percent, viruses 17 percent, heuristic detections 14 percent, and potentially unwanted applications 11 percent. The reliance on PowerShell was most visible in July, when detections rose to 100,000. That figure was more than double the monthly average of 47,000. More than half of those infections were linked to PowerShell-based attacks that coincided with known flaws in Microsoft’s SharePoint software. April and May also showed smaller peaks with 13,000 and 23,000 detections tied to the same method.

Importance of Timely Updates

macOS did not show spikes on that scale, although some variation appeared, such as a rise in trojans during May. Even with fewer cases overall, the platform still recorded a share of threats designed to exploit unpatched systems. About 7 percent of detections on macOS fell into this category. This pattern underscores the need for users to keep their systems updated. Both Microsoft and Apple issue regular patches to close security gaps, and the data shows how quickly attackers try to take advantage of those who delay applying them.


Notes: This post was edited/created using GenAI tools.

Read next: AI Bias in Healthcare: How Small Language Shifts Affect Women and Minority Patients
by Irfan Ahmad via Digital Information World

AI Bias in Healthcare: How Small Language Shifts Affect Women and Minority Patients

Medical research has often leaned on data from white men. Women and minority patients were left out of many past trials. That gap now shows up in artificial intelligence. Models trained on these records are being used in hospitals and clinics, and the shortcomings are visible in their recommendations.

Findings from MIT’s study

A team at the Massachusetts Institute of Technology tested four large language models, including GPT-4, Llama 3 (two different variants), and Palmyra-Med. They wanted to see how models respond when patient questions are slightly altered in ways that don’t change the medical facts. The changes included shifting gender markers, removing gender entirely, adding typos or extra spaces, and rewriting in anxious or dramatic tones.


Even with the same clinical information, treatment recommendations changed by about seven to nine percent on average. The direction of change often meant less medical care. For example, some patients who should have been advised to seek professional help were told instead to manage their symptoms at home.
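Robustness checks of this kind come down to applying meaning-preserving perturbations to the input and comparing the model’s recommendations before and after. A minimal sketch, using hypothetical helper functions rather than the MIT team’s actual code:

```python
import random
import re

def neutralize_gender(text: str) -> str:
    """Crudely replace common gendered terms with neutral ones."""
    subs = [(r"\bshe\b", "the patient"), (r"\bhe\b", "the patient"),
            (r"\bher\b", "their"), (r"\bhis\b", "their")]
    for pattern, repl in subs:
        text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
    return text

def add_whitespace_noise(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Randomly double some spaces, mimicking messy real-world typing."""
    rng = random.Random(seed)
    return "".join(c + " " if c == " " and rng.random() < rate else c
                   for c in text)

original = "She has had a rash on her arm for two weeks."
for variant in (neutralize_gender(original), add_whitespace_noise(original)):
    # each variant would be sent to the model alongside the original,
    # and the two treatment recommendations compared for drift
    print(variant)
```

The medical facts are unchanged in every variant, so any shift in the model’s advice is attributable to the surface-level edit alone.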

Groups most affected

The errors hit some groups harder. Female patients faced more recommendations for reduced care than men, even when the cases were identical. In one test, whitespace errors led to seven percent more mistakes for women compared with men. The problem extended to other groups as well. Non-binary patients, people writing in anxious or emotional tones, those with limited English, and those with low digital literacy also saw weaker results.

Removing gender markers did not solve the issue. The models inferred gender and other traits from writing style and context, which meant disparities continued.

Drop in conversation accuracy

The researchers also tested models in conversational exchanges that mirrored chat-based patient tools. Accuracy dropped by around seven percent across all models once these small changes were introduced. These settings are closer to real-world use, where people type informally, include errors, or express emotion in their writing. In those cases, female patients again saw more frequent advice to avoid care that would have been necessary.

Evidence from other studies

The MIT work is not the only warning sign. A study from the London School of Economics reported that Google’s Gemma model consistently downplayed women’s health needs. A Lancet paper from last year found GPT-4 produced treatment plans linked to race, gender, and ethnicity rather than sticking to clinical information. Other researchers found that people of color seeking mental health support were met with less compassionate responses from AI tools compared with white patients.

Even models built for medicine are vulnerable. Palmyra-Med, designed to focus on clinical reasoning, showed the same pattern of inconsistency. And Google’s Med-Gemini model recently drew criticism when it produced a fake anatomical part, showing that errors can range from obvious to subtle. The obvious ones are easier to catch, but biases are less visible and may pass through unchecked.

Risks for deployment in healthcare

These findings come as technology firms move quickly to market their systems to hospitals. Google, Meta, and OpenAI see healthcare as a major growth area. Yet the evidence shows language models are sensitive to non-clinical details in ways that affect patient care. Small variations in writing can shift recommendations, and the impact often falls on groups already disadvantaged in medicine.

The results point to the need for stronger checks before rolling out AI systems in patient care. Testing must go beyond demographics to include writing style, tone, and errors that are common in real-world communication. Without this, hospitals may end up deploying tools that quietly reproduce medical inequality.

Notes: This post was edited/created using GenAI tools.

Read next:

Who Really Owns OpenAI? The Billion-Dollar Breakdown

Your Supplier’s Breach May Be Flagged by AI Before They Even Know It


by Irfan Ahmad via Digital Information World

Sunday, September 21, 2025

Your Supplier’s Breach May Be Flagged by AI Before They Even Know It

By Estelle Ruellan, threat intelligence researcher at TEM company Flare.

Image: DIW-AIgen

Cybercriminals persistently target critical infrastructure to disrupt key lifeline services and stage attacks that pressure companies into paying large ransoms.

Such was the case when advanced persistent threat (APT) groups like Volt Typhoon, APT41, and Salt Typhoon leveraged legitimate account credentials to conduct long-term intrusions, moving laterally across multiple U.S. state government networks.

In collaboration with Flare, Verizon found stolen credentials were involved in 88% of basic web application attack breaches, making them not only the most common initial attack vector but also, frequently, the only one.

In 2024 and 2025, there has been a surge in infostealer and credential marketplace activity, and security teams are struggling with alert fatigue. Most organizations can’t afford analysts spending hours every day trawling through Telegram, forums, and paste sites. If a model helps filter the noise, it gives human teams breathing room.

Our latest research shows that GPT-powered models can scan hundreds of daily posts on underground forums like XSS, Exploit.in, and RAMP, detecting stolen credentials and mapping live malware campaigns with 96% accuracy.

With the right prompts and navigation, LLMs can detect emerging breaches, identify compromised credentials, and surface novel exploits. When properly directed, these models can take on the heavy lifting of cyber threat intelligence, handling the foundational work of CTI gathering and basic analysis, so security analysts can dedicate their expertise to complex investigations and strategic threat assessments that demand human judgment and deeper insight.

However, the takeaway here isn’t “LLMs will solve cyber threat intelligence (CTI).” They are more like hyper-fast execution engines that require detailed human instruction rather than seasoned analysts who understand business risk and context.

Security analysts must understand the tool's blind spot: LLMs need humans to dissect every element, provide domain knowledge, map decision-making steps, and supply contextual understanding. When properly instructed with this comprehensive guidance, they can execute tasks at incredible speed, but they remain fundamentally blind without human strategic oversight.

Let’s look at where LLMs succeed in CTI, and where their limitations are to use them safely.

Where LLMs Add Real Value in CTI

Security teams are drowning in noise. Microsoft Defender for Endpoint has seen a significant increase in the number of indicators of attack (IOAs), with a 79% growth from January 2020 to today. Many of these alerts will be false positives, such as flagging logins from unusual geographies, devices, or IPs when employees are on business travel or working from new cafes.

LLMs can chip away at the overload. In our study, GPT-3.5 parsed hundreds of daily forum posts, pulling out details like stolen credentials, malware variants, and targeted sectors. For an analyst, that means minutes instead of hours spent sifting through chatter.

The approach is especially potent for breach and leak monitoring. The use of valid account credentials and the exploitation of public-facing applications were tied as the top initial access vectors observed in 2024, each representing 30% of X-Force incident response engagements.

Having LLMs summarize cybercrime forum conversations and flag when credentials or other sensitive data appear to be leaked or traded can help flag exposures before they hit production systems. In our study, the model highlighted mentions of compromised companies or products and surfaced potential breaches or exploits being discussed. This provides valuable context for breach and leak monitoring, giving analysts early awareness of emerging threats without hours of manual review.

Moreover, threat actors rarely stay in one lane; they might sell infostealers on Telegram, hand stolen access to initial access brokers (IABs) who package and list it on forums, and, in another channel, advertise phishing kits to weaponize those stolen credentials. Each stage looks like a separate conversation if you only see one channel, but they are pieces of the same campaign pipeline.

LLMs are uniquely good at pattern recognition across disjointed conversations. Done right and with the right context, they could stitch fragments together into early warning signals, giving analysts a clearer picture of emerging campaigns.
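As a stand-in for what an LLM does over free text, the stitching step can be illustrated as a simple correlation over already-extracted indicators. This is a hypothetical sketch; in a real pipeline the model would first extract the indicators from raw posts:

```python
from collections import defaultdict

def find_cross_channel_campaigns(posts):
    """Flag indicators (domains, hashes, actor handles) that appear in
    posts across more than one channel, a crude early-warning signal
    that separate conversations belong to one campaign pipeline."""
    channels_by_ioc = defaultdict(set)
    for post in posts:
        for ioc in post["indicators"]:
            channels_by_ioc[ioc].add(post["channel"])
    # keep only indicators seen on two or more distinct channels
    return {ioc: sorted(chans) for ioc, chans in channels_by_ioc.items()
            if len(chans) > 1}

posts = [
    {"channel": "telegram", "indicators": ["stealer-v2", "acme.example"]},
    {"channel": "forum",    "indicators": ["acme.example"]},
    {"channel": "forum",    "indicators": ["phish-kit-9"]},
]
# acme.example surfaces on both Telegram and a forum: one campaign
print(find_cross_channel_campaigns(posts))
```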

Blind Spots and Risks of Overreliance

While LLMs show potential in minimizing false positives, these tools are not immune to them. Our team noted that GPT-3.5 struggled with something as basic as verb tense, confusing an ongoing breach with one that had already ended. The takeaways: prompt engineering matters, and high accuracy in controlled studies does not guarantee the same results in live, variable scenarios.


LLMs can fabricate connections or misclassify chatter when context is thin. In practice, that means a model might confidently link stolen credentials to the wrong sector, sending analysts down rabbit holes and wasting valuable time. According to Gartner, 66% of senior enterprise risk executives noted AI-assisted misinformation as a top threat in 2024.

Cost and scale matter too. Running models across thousands of daily posts isn’t free. If teams lean too hard on closed-source LLMs without evaluating cost-performance trade-offs, they risk creating yet another tool that looks great in a proof of concept but doesn’t survive budget cycles.

Projects like LLaMA 3, Mistral, and Falcon are catching up to closed models in language understanding. Fine-tuning or training them on your own CTI datasets can be cheaper in the long term, with more control over model updates and security. The trade-off is that you need in-house expertise to manage training and guardrails.

What CISOs Should Demand

CISOs already know the only way to stay ahead of automated attacks is to automate defenses. Some 79% of senior executives say they are adopting agents in their companies to strengthen security. The key part is knowing how to use them without adding new risks.

A model with 96% accuracy is impressive, but it still misses roughly one in twenty-five signals. And, as we mentioned earlier, these tools can still raise false positives or link stolen credentials to the wrong sector. That is why all AI triage must be overseen and verified by an analyst, ensuring errors don’t slip into executive briefings or trigger costly over-reactions.

These tools only work if they are steered with precision. Prompt engineering is critical: context, down to the last detail and the tense used, affects LLM performance. In one case, a discussion about purchasing data in Israel, titled “Buy GOV access,” was mislabeled as not targeting critical infrastructure when in fact it was, because the title wasn’t part of the prompt. CISOs and security teams using these models must always ground outputs with missing yet critical context.


Moreover, variables like “is targeting a large organization” or “critical infrastructure” were interpreted inconsistently by the model, since there was no shared definition. It flagged globally known names accurately but missed sector-specific or less famous entities. When prompting an LLM, don’t rely on the model’s definitions; set your own. If you don’t set the rules, the model will invent them. When using subjective or loosely defined labels, security teams should embed definitions or examples within prompts, such as: “Critical infrastructure encompasses essential systems and facilities such as the energy, oil and gas, transportation, water supply, and telecommunications sectors, internet providers, the military, governments, harbours, and airports.”

Some best practices include:

  • Define the LLM’s role and provide an explicit output structure
  • Align verb tense to context (“has sold” vs. “is selling”)
  • Always include relevant context (e.g., thread titles or summaries of the previous conversation)
  • Provide clear definitions or decision rules for subjective categories
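Put together, those practices fold into a single prompt template. The wording and output schema below are illustrative assumptions, not the study’s actual prompt:

```python
import json

# Assumed shared definition, embedded so the model cannot invent its own.
CRITICAL_INFRA_DEF = (
    "Critical infrastructure encompasses essential systems and facilities "
    "such as energy, oil and gas, transportation, water supply, "
    "telecommunications, internet providers, military, government, "
    "harbours, and airports."
)

def build_triage_prompt(thread_title: str, post_body: str) -> str:
    """Assemble a CTI triage prompt with an explicit role, a shared
    definition, the thread context, and a fixed output schema."""
    return "\n".join([
        "You are a cyber threat intelligence analyst.",   # role
        f"Definition: {CRITICAL_INFRA_DEF}",              # shared terms
        f"Thread title: {thread_title}",                  # context
        f"Post: {post_body}",
        "Reply with JSON only, using the keys: "          # output structure
        "targets_critical_infrastructure (bool), sector (string), "
        'tense ("has sold" or "is selling").',
    ])

def parse_triage_reply(raw: str) -> dict:
    """Validate the model's reply against the expected schema."""
    data = json.loads(raw)
    required = {"targets_critical_infrastructure", "sector", "tense"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return data
```

Passing the thread title through `build_triage_prompt` is exactly the guardrail the mislabeled “Buy GOV access” example calls for: the context that changed the answer travels with every request.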

Finally, CISOs should demand clear ROI benchmarks before betting big on tools that could become shelfware. Closed-source models deliver strong results, but open-source alternatives are catching up.

LLMs are not perfect, but when tied tightly to structured prompts, contextual data, and clear analyst-defined rules, they can amplify defense strategies. They should not be treated as black-box oracles. They can sift vast volumes of dark-web chatter and hand analysts a distilled starting point. The key is not expecting them to make judgment calls on risk but designing the workflow so that they enrich human decision-making instead of replacing it.

Read next: Who Really Owns OpenAI? The Billion-Dollar Breakdown


by Web Desk via Digital Information World

Saturday, September 20, 2025

Who Really Owns OpenAI? The Billion-Dollar Breakdown

As OpenAI cements its place as one of the most valuable artificial intelligence companies in the world, questions around ownership and control have become central to the company’s future. Based on a $500 billion valuation, recent estimates provide a clearer picture of who holds the biggest stakes in OpenAI.

Microsoft remains the single largest shareholder, with 28% of the company, valued at approximately $140 billion. The close partnership between OpenAI and Microsoft has grown since their multibillion-dollar collaborations, cementing the tech giant’s influence over the AI firm’s trajectory.

OpenAI’s nonprofit parent entity follows closely with 27% ($135 billion), ensuring that the company’s original mission of prioritizing safety and long-term public benefit still retains substantial weight. Meanwhile, OpenAI employees collectively own 25% ($125 billion), reflecting the company’s strategy of rewarding and retaining top AI talent.

On the investor side, the most significant group is participants in the 2025 fundraise, who hold 13% ($65 billion). Smaller but still notable are investors from the 2024 fundraise with 4% ($20 billion), along with IO shareholders at 2% ($10 billion) and OpenAI’s earliest backers at 1% ($5 billion).

This ownership structure highlights a balance between big-tech partnership, nonprofit oversight, employee ownership, and venture capital backing. As OpenAI scales further in 2025 and beyond, the mix of stakeholders will play a pivotal role in shaping not only the company’s innovations but also the governance of AI at a global level.
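The dollar figures are each stake applied to the $500 billion valuation, and the shares sum cleanly to 100 percent:

```python
valuation_b = 500  # OpenAI valuation in billions USD, per the article

stakes_pct = {
    "Microsoft": 28, "Nonprofit parent": 27, "Employees": 25,
    "2025 fundraise investors": 13, "2024 fundraise investors": 4,
    "IO shareholders": 2, "Earliest backers": 1,
}

assert sum(stakes_pct.values()) == 100
for holder, pct in stakes_pct.items():
    print(f"{holder}: ${valuation_b * pct // 100}B")
```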

Microsoft Holds 28% Stake as OpenAI’s Governance Faces Growing Scrutiny
Stakeholder Share
Microsoft 28% ($140B)
OpenAI’s nonprofit 27% ($135B)
OpenAI employees 25% ($125B)
Investors (2025 fundraise) 13% ($65B)
Investors (2024 fundraise) 4% ($20B)
IO shareholders 2% ($10B)
OpenAI’s first investors 1% ($5B)

Notes: This post was edited/created using GenAI tools.

Read next: FCC Considers Cutting Satellites Out of Environmental Oversight
by Irfan Ahmad via Digital Information World

Trump Sets $100K Fee for H-1B Visas, Tech Sector Faces New Strain

President Donald Trump has ordered a steep new cost on skilled worker visas, setting a $100,000 annual fee for H-1B applications. The proclamation, signed on Friday, is the latest move in his administration’s tightening of immigration rules.

How the program works

The H-1B system lets U.S. companies hire foreign workers with specialized skills in science, technology, engineering, or medicine. The visas run for three years with the option to extend to six. Each year, 65,000 are granted by lottery, with an additional 20,000 for graduates of U.S. advanced degree programs. Approvals, including renewals, reached about 400,000 in 2024. India remains the main source of recipients, accounting for the majority of visas.

White House justification

The administration says the change addresses abuse in the system. Officials point to examples where companies obtained thousands of H-1B visas while cutting American jobs. A White House fact sheet noted one company received approval for over 5,000 foreign workers this year while laying off about 16,000 U.S. staff. The proclamation also frames the fee as a matter of national security.

Exemptions and timeframe

The Homeland Security Secretary has been given power to exempt individuals, companies, or industries if national interest is cited. The new fee takes effect immediately and is set to last for one year unless extended.

Wage rules under review

Alongside the fee, the Labor Secretary has been directed to revise wage requirements. The goal is to prevent companies from undercutting U.S. salaries by relying on lower-paid foreign workers. Federal data shows that H-1B holders now fill more than 65 percent of IT roles, up from about 32 percent in 2003. Unemployment among recent computer science graduates has risen above six percent.

Impact on the tech industry

Technology firms are expected to resist the move. Many rely on foreign talent, especially Indian engineers, to fill roles at a scale U.S. graduates cannot meet. Past visa holders include figures who went on to shape the industry. Elon Musk entered the United States on an H-1B before founding Tesla and SpaceX. Instagram co-founder Mike Krieger, originally from Brazil, also began on an H-1B and faced delays that nearly derailed his startup plans.

Policy background

The H-1B program has swung in response to changing administrations. Approvals peaked in 2022 under President Joe Biden. Rejections reached their highest point in 2018 during Trump’s first term. The new financial barrier is seen as a continuation of the current White House crackdown on immigration.

Additional residency track

The order also creates a new residency pathway known as the “gold card.” Individuals can secure permanent U.S. status by paying $1 million, while companies may sponsor workers by paying $2 million. The administration has promoted the measure as a way to attract high-value investors.

Legal challenges ahead

The sharp rise in costs is expected to trigger pushback from Silicon Valley and other sectors that depend heavily on international talent. Legal challenges are likely in the months ahead as the policy takes hold.

Conclusion

The new restrictions also reveal a wider truth about global labor. Workers from developing nations often help sustain the economies of wealthier countries, yet they can be discarded once policy priorities change. When advanced nations draw on foreign talent to meet their needs and later push those same people aside, it reduces human beings to temporary resources rather than valued contributors. Such patterns highlight the imbalance of power in international labor markets and raise questions about fairness, dignity, and long-term responsibility toward the people who help drive growth. 


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: LinkedIn to Tighten Data Rules, Expand Microsoft Ad Sharing and AI Training on November 3

by Irfan Ahmad via Digital Information World

Friday, September 19, 2025

LinkedIn to Tighten Data Rules, Expand Microsoft Ad Sharing and AI Training on November 3

LinkedIn is preparing to tighten its data rules this autumn, and the update is more than just a formality. Beginning November 3, the Microsoft-owned platform will apply a new set of terms that determine how member information is shared for advertising and how it is used inside LinkedIn’s artificial intelligence tools.

Data Moving Into Microsoft’s Orbit

One of the most visible shifts involves advertising. In several countries outside the EU, LinkedIn will send Microsoft more information about member activity, from profile details to ad clicks. That information will help Microsoft push more tailored promotions across its family of products.

The catch is that even if someone blocks the sharing, Microsoft ads will still appear — they just will not draw from LinkedIn habits. To stop the flow of data, members need to go into account settings and switch off the Data Sharing with Microsoft option before the terms kick in.

AI Training Set to Expand

At the same time, LinkedIn is widening the scope of its generative AI training. In Europe, the UK, Canada, Switzerland, and Hong Kong, public content and profile information will automatically be pulled into AI systems that suggest posts, polish profiles, or help recruiters match with candidates. Private messages remain out of reach, but everything else that is public can be fed into these models.

By default, the switch is on. Anyone who wants out has to dig into Settings > Data Privacy > Generative AI Improvement and toggle it off. Turning it off will not disable LinkedIn’s AI features; it just stops personal data from being folded into future training.

Other regions, including the United States, will not see changes to AI training this round.

Legal and Policy Notes

The company says its approach differs by region. In the EU and UK, data processing for AI rests on the legal principle of “legitimate interest,” while in other markets the emphasis is on user choice through opt-out tools.

Alongside these changes, the platform has also updated its User Agreement. Deepfakes and impersonations are now spelled out as violations, new rules explain when secondary payment methods can be used, and members have been given clearer ways to appeal restrictions on their accounts.

What Members Should Do

For most people, the practical step is reviewing privacy controls before November 3. Leaving the defaults in place means LinkedIn can share data with Microsoft for ad targeting and use profile details for AI model training where applicable. Those who are not comfortable with that approach should turn the features off manually.

Anyone who continues using LinkedIn past the deadline will be considered to have accepted the new terms. Those unwilling to do so have the option to close their account entirely.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Italy Sets National AI Rules, First in European Union
by Asim BN via Digital Information World