Monday, January 27, 2025

Is DeepSeek the AI That Will Topple Google and OpenAI? Here's What You Need to Know

In January 2025, DeepSeek, a Chinese AI chatbot, made waves in the global market. It quickly attracted attention, sparking debates about the future of AI and the shifting balance of technological dominance. While many have focused on its impressive capabilities, others have raised concerns about its underlying influences. DeepSeek signals a new direction in AI, one that challenges established norms and introduces new possibilities in both technology and geopolitics.

The Birth of DeepSeek

DeepSeek was founded in 2023 by Liang Wenfeng, a former quantitative finance expert. His leap into artificial intelligence was driven by a desire to build a chatbot that could combine human-like reasoning with practical problem-solving. By January 2025, DeepSeek had launched its flagship product, the DeepSeek-R1 AI chatbot. Despite being in its early stages, the company’s lean development strategy and bold approach have positioned it as a formidable contender in the AI space.

The Technology Behind DeepSeek

DeepSeek's technology relies on a mix of advanced approaches to AI development. What sets it apart is how efficiently it uses resources. While companies like OpenAI and Google have poured billions into their AI systems, DeepSeek reportedly developed a competitive product for just under $6 million. This lean development has allowed DeepSeek to avoid the expensive hardware and massive data centers that have been the norm in AI development.

The core of DeepSeek's model is its ability to function with minimal computing power while still delivering fast, accurate responses. Unlike its American counterparts, which require vast amounts of processing power and other resources, DeepSeek uses more efficient algorithms and less resource-intensive models. As a result, it’s able to provide similar performance at a fraction of the cost, making AI more accessible to companies and individuals alike.

Open-Source Revolution

One of DeepSeek's most significant moves is its decision to release the chatbot’s code as open-source software. Unlike most major AI companies that keep their code locked behind proprietary systems, DeepSeek has made its technology available to everyone under an MIT license. This is a big deal because it encourages the kind of collaboration and innovation that can drive accelerated technological advancement.

By making its source code open, DeepSeek is democratizing access to AI. Developers, businesses, and researchers can not only use the chatbot but also improve upon it, adapt it to different needs, and contribute to its ongoing development. This open-source approach is a stark contrast to the more closed-off models from major AI companies.
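
For readers who want to see what that openness means in practice, here is a minimal sketch of pulling an openly released DeepSeek checkpoint from a public model hub and generating text locally with the widely used transformers library. The repository name and generation settings below are illustrative assumptions, not an official recommendation; swap in whichever released checkpoint you actually want to try.

```python
# Minimal sketch: running an openly released DeepSeek checkpoint locally.
# The model ID below is an illustrative assumption, not an official pick.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt and generate a short reply.
messages = [{"role": "user", "content": "Explain the MIT license in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200)
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

Because the weights are openly licensed, the same checkpoint can then be fine-tuned or adapted to a specific use case, which is exactly the kind of downstream improvement the open-source release is meant to invite.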

DeepSeek’s Impact on the AI Market

DeepSeek is already shaking up the AI market in ways that go beyond its technology. The cost-effective nature of the product is one of its most significant selling points. Businesses that previously couldn’t afford AI now have access to a powerful tool that can help them scale operations, automate tasks, and enhance decision-making processes.

Notably, by making AI technology affordable and accessible, DeepSeek is challenging the business models of major AI players. Companies like OpenAI have traditionally relied on subscription fees, enterprise contracts, and other monetization strategies. In contrast, DeepSeek's API service is priced much lower, making it an attractive option for smaller firms and independent developers. This shift in how AI services are priced could force larger companies to reconsider their pricing models or risk losing market share.
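
To illustrate what that lower-cost access can look like for a developer, the sketch below assumes DeepSeek exposes an OpenAI-compatible chat endpoint, so existing client code can often be repointed by changing only the base URL and model name. The endpoint, model identifier, and environment variable here are assumptions for illustration; check the provider's current documentation before relying on them.

```python
# Minimal sketch of calling an assumed OpenAI-compatible chat endpoint.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # your own key, set in the environment
    base_url="https://api.deepseek.com",     # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Why does lower API pricing matter for small firms?"},
    ],
)

print(response.choices[0].message.content)
```

Because the request shape mirrors the incumbent APIs, switching providers becomes largely a configuration change, which is part of why cheaper per-token pricing puts direct pressure on established subscription and enterprise models.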

Political and Geopolitical Tensions

DeepSeek's rise isn't just about technology; it's also about geopolitics. As China continues to assert itself in the global tech landscape, DeepSeek’s success is seen as part of a broader strategy to challenge the dominance of U.S. companies in the AI industry. While the U.S. has been the driving force behind most AI innovations, China is quickly catching up, and DeepSeek is leading the charge.

DeepSeek’s success is symbolic of China’s growing influence in the AI sector. The country has been investing heavily in AI research and development, and DeepSeek is reaping the benefits of this state-backed push. While the Chinese government has allowed the company to operate with relative autonomy, there are signs that this could change. Recent moves by the government to increase its investment in AI could signal greater involvement in DeepSeek's future.

This shift in power dynamics has already had consequences. For instance, Nvidia, a key supplier of chips used in AI models, saw its stock plunge in response to DeepSeek's rise, wiping hundreds of billions of dollars off its market value. Investors are now questioning whether the traditional approach to building AI, which relies on high-end hardware and sprawling data centers, is sustainable in the face of more efficient, cost-effective alternatives.

Censorship and Ethical Dilemmas

One of the most significant criticisms of DeepSeek is its censorship practices. Like many Chinese tech companies, DeepSeek has limits on the type of content its AI can discuss. For example, when asked about the 1989 Tiananmen Square protests, DeepSeek’s chatbot refuses to answer, instead redirecting users to other topics. This self-censorship is a reflection of China’s strict policies on controlling the flow of information.

The issue of censorship raises important questions about the ethical implications of AI. While AI has the potential to facilitate open dialogue and promote free expression, it can also be used to suppress inconvenient truths. DeepSeek’s refusal to address sensitive historical events is a reminder of the risks that come with allowing government influence over AI technology.

Though DeepSeek’s approach to censorship mirrors that of other Chinese tech companies, there are signs that the company is not entirely subject to government control. Analysts believe that the Chinese government has largely stayed out of DeepSeek's operations, but this could change as the company grows and attracts more attention. The increasing investment in AI research by the Chinese government suggests that DeepSeek might eventually face greater scrutiny or even direct intervention.

Looking Ahead: What’s Next for DeepSeek?

As DeepSeek moves forward, its future is filled with both opportunities and challenges. The company has already proven that it can build a competitive product with limited resources, but scaling that success will require overcoming several hurdles. One of the biggest challenges will be ensuring that DeepSeek-R1 can handle more complex tasks without compromising its efficiency.

DeepSeek’s open-source model will be crucial in this process. As more developers and companies adopt the technology, the chatbot will continue to evolve and improve. However, whether DeepSeek can maintain its momentum against the likes of OpenAI, Google, and other tech giants remains to be seen. If it can continue to innovate while keeping costs low, the company could redefine how AI is developed and used in the future.

Conclusion

DeepSeek represents a shift in how AI is built, accessed, and monetized. Its cost-effective, open-source model is reshaping the industry, making AI more accessible to businesses and developers of all sizes. At the same time, DeepSeek is a reminder that the future of AI will be shaped not just by technological advancements but also by the political and ethical decisions that govern its development.

While DeepSeek’s rise signals a new era in AI, it also raises important questions about how AI should be regulated, who controls it, and how it can be used to promote open dialogue and transparency. As the company continues to grow and improve, the world will be watching to see if it can challenge the established powers in the AI space and redefine the future of technology.

Have you tried DeepSeek yet? Share your experience with us in the comments or tag us on social media!

With minimal resource usage and a lean development model, DeepSeek disrupts AI market pricing and competition.
Image: DIW-Aigen

Read next:

• Did the App Store Create Hundreds of New Millionaires in 2024? Here’s the Proof!

• Social Media’s 'News-Finds-Me' Mentality Fuels Misinformation Sharing

• Which U.S. Cities Are Seeing the Fastest Rise in Digital Crime and Fraud Reports?
by Asim BN via Digital Information World

Which U.S. Cities Are Seeing the Fastest Rise in Digital Crime and Fraud Reports?

According to the FTC, 2.5 million Americans became victims of fraud in 2023, losing about $10 billion online. In response to this alarming trend, All About Cookies conducted a survey to identify which U.S. cities have the highest rates of online scams, based on fraud and identity theft reports. Each city was given a score from 1 to 100, with 100 being the worst, and the study observed that most scams happen in cities with aging populations and warm climates. Miami was the scammiest city in the US with a score of 72.0, logging 1,775 fraud reports and 162 spam call complaints per 100k residents.

Miami is followed by Las Vegas, with 200 spam call complaints and 488 identity theft reports per 100k residents. The third scammiest city in the US is Orlando (1,602 fraud reports and 167 spam call complaints per 100k). Tampa has the fourth highest scam rate in the country with 1,623 fraud reports per 100k. Rounding out the top five is Atlanta, which had 1,988 fraud reports per 100k and the highest number of robocalls received per person (4,133); a 29% decrease in identity theft reports is why Atlanta isn't ranked higher. Other cities in the top ten include San Diego, Tallahassee, Tucson, Hartford, and Dallas. Most of the cities with the highest rates of scams and identity theft are in Florida.

All About Cookies also ranked the safest cities in America for digital crime, with Amarillo (18.6 score) topping the list, helped considerably by a 21% decrease in identity theft reports. Other cities among the safest in the US for digital crime are Fort Wayne, Wichita, Nashville, and Kansas City. Fort Wayne also saw a 14% decline in fraud reports, while Kansas City has the lowest fraud report and robocall rates in the country. On the other hand, Hartford saw a 49% increase in identity theft reports, Des Moines saw the second biggest increase (40%), and Boston the third biggest (34%). Other US cities where identity theft is rising quickly are Omaha, Lincoln, and Denver, while Little Rock, Cincinnati, Providence, and Portland are seeing increases of 13%-15% in identity theft reports.

Read next: Survey Highlights $3,313 Average Loss Per Identity Theft Victim in the US
by Arooj Ahmed via Digital Information World

Why Are AI Giants Betting Big on Washington, and What’s at Stake for the Future?

According to OpenSecrets, many companies have significantly increased their lobbying efforts on federal AI issues. In 2023, 458 companies spent on AI lobbying, while 648 did so in 2024, a roughly 41% year-over-year increase. Companies like Microsoft and OpenAI have ramped up their efforts to influence AI-related legislation: Microsoft backed the CREATE AI Act, which focuses on benchmarking AI systems in the US, while OpenAI backed the Advancement and Reliability Act, which aims to create a government center for AI research.

OpenAI increased its lobbying expenditure from $260,000 in 2023 to $1.76 million in 2024, while Anthropic’s rose from $280,000 to $720,000 over the same period. The startup Cohere has also increased its spending, from $70,000 in 2022 to $230,000 in 2024. OpenAI, Cohere, and Anthropic collectively spent $2.71 million on federal lobbying in 2024, a significant jump from the $610,000 they spent in 2023. That is still small compared with what the broader tech industry spent on lobbying in 2023 ($61.5 million).

In 2024, domestic policymaking was messy: Congress considered about 90 AI-related bills in the first half of the year but took no action, leaving the states to act independently. As a result, Tennessee became the first state to protect voice artists from unauthorized AI cloning, Colorado adopted a risk-based approach to AI policy, and California enacted multiple AI safety bills. But no state succeeded in implementing AI regulations as comprehensive as the EU's AI Act.

It is still unclear whether there will be more federal action on AI legislation this year than last. President Donald Trump has recently ordered federal agencies to suspend Biden-era AI policies, including export rules on AI models. Anthropic has urged the federal government to implement targeted AI regulation, while OpenAI has also called for government action on AI development and infrastructure.

Image: DIW-Aigen

Read next:

• Meta AI Launches Document Editor to Enhance Writing and Editing Tasks

• Apple iOS 18 Adoption Hits 68%, Surpassing iOS 17 by 2% Amid Sluggish Generative AI Interest
by Arooj Ahmed via Digital Information World

Sunday, January 26, 2025

Meta AI Launches Document Editor to Enhance Writing and Editing Tasks

Meta AI is offering its users a text-based document editor to help with writing tasks. If you want to write documents with AI assistance, head to Meta AI's web client and open the document editor. You will get editing options much like any other document tool, and Meta’s Llama 3.2 will help you make changes to your document. The tool hasn't rolled out widely on Meta AI yet, and Meta hasn't officially announced it either, but Meta’s AI homepage mentions that the feature is available in beta. A help page for the feature also appeared about six weeks ago, though there is still no word on a public release.

You can also generate images in the document using Meta’s Imagine feature, and there are options for saving, copying, and printing the document. However, downloading the document in formats like DOCX is not yet available. Users can instruct Meta AI to make changes to the document and can also take advantage of versioning: if you do not like a change the AI made, you can go back to previous versions using the back arrow.

Many students use generative AI to cheat on homework and writing tasks. When text generated by Meta AI in the document editor was run through AI-detection tools, all of it was flagged as AI-written, which means it cannot be used for homework without being detected; this is where the rewrite feature may come in handy. Meta AI in the document editor can also be used to fix spelling and grammatical mistakes.


H/T: Neowin

Read next: 

• Study Reveals Social Media Users Share News Without Reading, Driving Misinformation

• Survey Highlights $3,313 Average Loss Per Identity Theft Victim in the US
by Arooj Ahmed via Digital Information World

Survey Highlights $3,313 Average Loss Per Identity Theft Victim in the US

According to the Federal Trade Commission (FTC), millions of Americans are affected by fraud every year and lose billions because of it; in 2023, Americans lost about $10 billion to fraud, much of it tied to identity theft. All About Cookies conducted a survey among 1,000 Americans to find out how many of them have been affected by identity theft, how they became victims, and how long it took them to recover. According to the FTC’s 2022 and 2023 Consumer Sentinel Data Books, the state with the biggest rise in identity theft is Connecticut, with a 68% increase from the year prior, followed by Massachusetts (+55%), Iowa (+44%), and Nebraska (+30%).

The Bureau of Justice Statistics’ report shows that 12% of people over the age of 16 knew their identities had been stolen, and 46% knew someone close to them who had been a victim of identity theft. 14% of respondents in the survey said they themselves had their identity stolen in the past, and these victims put the average cost of identity theft to them at $3,313.

Identity thieves use some common methods to steal information from victims. When respondents were asked how their data was stolen, 38% said it was exposed in a data breach on a website, followed by a stolen or missing credit card (16%) and official documents (13%), while 27% said they don't know how thieves got their data. 45% of respondents said identity thieves opened new accounts with their data, 42% said thieves used it to steal from financial accounts, and 20% said thieves used it to take out loans.

The survey also asked respondents how they found out their identity had been compromised: 46% said they received a credit monitoring alert, and 42% said they noticed money missing. 50% of respondents said they were using ID theft monitoring at the time of the theft and were alerted by it. 36% said it took them a week or less to learn their identity had been stolen, while 17% found out within one to three months; overall, 51% of victims learned their identity had been stolen within two weeks of the crime.

23% of respondents said they still haven't recovered from the identity theft, while 20% said recovery took them a week or less. 48% of respondents said they don't have enough protection against identity theft.

Read next: Your Old Device Might Be a Goldmine: 26% of Americans Skip Wiping Data Before Recycling
by Arooj Ahmed via Digital Information World

Study Reveals Social Media Users Share News Without Reading, Driving Misinformation

According to a new study, most social media users repost news links without reading them or verifying their content first. In this era of media and technology, most people stay up to date on news through social media, where all kinds of news are just one click away. But this is also fueling the spread of misinformation online, because many people do not read articles fully before sharing them on social media sites like Facebook.

An analysis of 35 million Facebook posts shared between 2017 and 2020 found that politically extreme content tends to be shared without being clicked on at all. Partisan users also share links without reading the full article when the content seems aligned with their existing beliefs. The study also found a difference between the links shared by liberal and conservative users: conservative users accounted for 76.9% of the shared misinformation links, compared with 14.3% shared by liberals, and 76% to 82% of the links containing misinformation came from conservative news sites.

The researchers say that sharing on social media is driven by headlines and blurbs rather than by the actual content of an article. They describe this as ideological segregation in the online world, because users do not verify information that simply echoes their existing beliefs. The study raises questions about social media design and information literacy, and suggests that platforms like Meta should take steps to reduce the sharing of misleading links.

Image: DIW-Aigen

H/T: University of Florida

Read next:

• Multi-Account Feature Arrives on WhatsApp Beta for iOS, Simplifying User Experience

• New Report Shows Google Seems to Favor User Generated Content, With Reddit Rising in Search Visibility
by Arooj Ahmed via Digital Information World

Saturday, January 25, 2025

Multi-Account Feature Arrives on WhatsApp Beta for iOS, Simplifying User Experience

The WhatsApp Beta for iOS 25.2.10.70 update is here with a new addition called “Multi-account”. The feature first appeared in the WhatsApp beta for Android 2.23.17.8 update, which let users log in to the app with multiple accounts. It is a great feature because users can manage several accounts, including personal and professional ones, within the same app, simplifying account management for a smoother experience. Now the feature is also coming to the new WhatsApp beta for iOS.

Users who have multiple phone numbers will be able to take full advantage of the multi-account feature. Many users previously relied on WhatsApp Business to manage a second account, even if they didn't have a business, because they couldn't open another account within the same WhatsApp app. With this update, all users need to do is add their other accounts through the app settings and then switch to whichever account they want.

Users will get two options for adding a new account: setting the device as the primary device for that account, or scanning a QR code to link it as a companion. The feature keeps all conversations organized within the same app, with notifications, media, and backups kept separate for each account. Users with dual SIMs will be able to use both of their numbers within a single WhatsApp app. The feature is currently under development and will be available in future updates of WhatsApp on the App Store.


Read next:

• Which US Mobile Networks Excel in 5G? New Report by Ookla Reveals

• Google Shares New Android ‘Identity Check’ Security Feature That Locks Sensitive Settings

• New Report Shows Google Seems to Favor User Generated Content, With Reddit Rising in Search Visibility
by Arooj Ahmed via Digital Information World