Friday, November 7, 2025

AI Agents Struggle in Simulated Markets, Easily Fooled by Fake Sellers, Microsoft Study Finds

AI assistants are being trained to handle purchases and digital errands for people, but Microsoft’s latest research shows that these systems remain far from reliable. The company built an experimental platform called Magentic Marketplace to test how modern AI agents behave in a simulated economy. Instead of becoming efficient digital shoppers, many of them made poor choices, got distracted by fake promotions, and sometimes fell for manipulation.

The simulation brought together 100 virtual customers and 300 virtual businesses. On paper, it sounds like a practical way to study real-world digital transactions, where one agent buys food, books, or services from another. Microsoft’s team loaded the environment with models from OpenAI, Google, and several open-source projects, including GPT-4o, GPT-5, Gemini-2.5-Flash, OSS-20b, and Qwen3. Each model acted either as a buyer or seller, negotiating through a controlled online market. The results were revealing.

When agents were asked to order something as simple as a meal or home repair, their decision-making showed deep weaknesses. As the range of available choices grew, performance fell sharply. In one test, GPT-5’s average consumer welfare score dropped from near 2,000 to around 1,100 when exposed to too many options. Gemini-2.5-Flash saw its score decline from about 1,700 to 1,300. Agents that had to navigate long lists or compare hundreds of sellers lost their focus and often settled for “good enough” matches rather than ideal ones.


The study described this as a kind of “paradox of choice.” More options did not mean better results. In many runs, agents reviewed only a small fraction of available businesses, even when hundreds were open for selection. Some models, like GPT-4o and GPT-4.1, maintained slightly steadier performance, staying near 1,500 to 1,700 points, but they too struggled when markets became crowded. Claude Sonnet 4’s score collapsed from 1,800 to just 600 under heavier loads.

Another problem emerged around speed. In this artificial economy, selling agents that responded first dominated the market. Microsoft measured a 10 to 30 times advantage for early replies compared to slower ones, regardless of product quality. This behavior hints at a potential flaw in future automated markets, where quick manipulation could outweigh fair competition. Businesses might end up competing on who responds fastest instead of who offers the best value.

Manipulation also proved alarmingly effective. Microsoft’s researchers tested six different persuasion and hacking strategies, ranging from false awards and fabricated reviews to prompt injection attacks that tried to rewrite an agent’s instructions. The results varied by model. Gemini-2.5-Flash resisted most soft manipulations but gave in to strong prompt injections. GPT-4o and some open-source models like Qwen3-4b were far more gullible, sending payments to fake businesses after reading false claims about certifications or customer numbers.

Even simple psychological tricks worked. When presented with phrases that invoked authority or fear, such as fake safety warnings or references to “award-winning” restaurants, several agents switched their choices. These behaviors highlight major security concerns for future AI marketplaces, where automated systems may end up trading with malicious agents that pretend to be trustworthy.

The researchers also noticed bias in how agents selected from search results. Some open-source models tended to pick businesses listed at the top or bottom of a page, showing positional bias unrelated to quality. Across all models, there was a pattern known as “first-offer acceptance.” Most agents picked the first reasonable offer they received instead of comparing multiple ones. GPT-4o and GPT-5 displayed this same bias, even though they performed better overall.

When taken together, the findings show that these AI agents are not yet dependable for financial decisions. The technology still requires close human supervision. Without it, users could end up with wrong orders, biased selections, or even security breaches. Microsoft’s team acknowledged that their simulation represented static conditions, while real markets constantly change. Agents and users learn over time, but such adaptation adds another layer of complexity that has not yet been solved.

The Magentic Marketplace experiment gives a glimpse of what might come next in the evolution of digital economies. It shows that even advanced models can collapse under too much data, misjudge credibility, or act impulsively when overloaded. For now, these systems are better suited as assistants than autonomous decision-makers.

Microsoft’s open-source release of the Magentic Marketplace offers an important testing ground for developers and researchers. Before AI agents are allowed to manage money, they will need stronger reasoning, improved security filters, and mechanisms to handle complex human-like markets. The results make one thing clear: automation alone cannot guarantee intelligence. Real trust will depend on oversight, transparency, and the ability of these systems to resist persuasion as well as they handle logic.

Notes: This post was edited/created using GenAI tools.

Read next: Your Favorite AI Might Be Cheating Its Exams, Researchers Warn
by Asim BN via Digital Information World

15 Billion Scam Ads Every Day: How Meta’s Platform Turns Fraud Into Billions

Meta’s apps are showing users a staggering number of scam ads every day. Internal documents reveal that Facebook, Instagram, and WhatsApp combined display around 15 billion high-risk scam advertisements daily. These include fake products, illegal gambling, and banned goods. On top of that, users encounter an additional 22 billion “organic” scams, like bogus marketplace listings and false job offers. The scale is enormous, and the people behind it are exploiting the trust users place in brands and public figures.

Revenue Over Regulation

According to internal projections, scam ads could account for roughly 10 percent of Meta’s yearly revenue, amounting to around $16 billion. Yet the company has long taken a cautious approach to enforcement. Advertisers suspected of running scams are only removed if the system is 95 percent sure of fraud. Otherwise, they may continue running ads, sometimes racking up hundreds of strikes without suspension. For larger advertisers suspected of misconduct, Meta even charges higher ad rates. The system is designed to deter some activity while still maintaining revenue.

Meta’s ad-personalization tools, meant to serve content based on user interests, end up pushing more scam ads toward users who interact with them. Those clicks feed into more exposure, creating a cycle that benefits the platform financially. In late 2024, the company anticipated earning roughly $7 billion from high-risk ads alone, part of that 10 percent estimate.

Balancing Act

Meta’s internal documents show a delicate balance between enforcement and revenue. The company has aimed to gradually cut the share of revenue from scams and banned goods, targeting a drop from 10.1 percent in 2024 to 7.3 percent by the end of 2025. Internal memos stress moderation, ensuring enforcement does not hurt overall projections or investments, especially in artificial intelligence, where the company is spending billions.

The documents also make clear that Meta prioritizes removing fraudulent ads when regulators are watching closely. Other areas receive lighter enforcement, allowing some advertisers to continue until stricter oversight forces action. Even as new systems reduce user complaints, the documents suggest that enforcement remains calibrated to protect revenue while appearing to address risk.

Real-World Consequences

The impact of scam ads is tangible. Meta’s platforms were reportedly involved in a third of all successful U.S. scams in 2025. Users lose money, trust, and sometimes access to accounts. In one instance, a hacked account used to promote cryptocurrency scams defrauded multiple people. Internal reviews show that historically, the majority of user reports of scams went unaddressed or were incorrectly dismissed. Fraudsters take advantage of gaps in the enforcement system, exploiting users with fake financial offers and phony promotions from public figures.

Steps Toward Change

Meta has expanded teams to monitor scam activity and improved automated detection. In 2025, the company removed over 134 million scam ads, cutting global user complaints by about 58 percent. Penalty-based bidding systems were introduced, charging likely fraudsters more to participate in ad auctions. Early results show a decline in scam reports and a modest drop in ad revenue. While these measures are a step forward, documents indicate the company remains cautious, mindful of revenue losses.

Regulators Loom Large

Authorities in the U.S., U.K., and other regions are scrutinizing Meta’s handling of fraudulent advertising. Fines could reach up to $1 billion, but internal figures show revenue from high-risk ads exceeds anticipated penalties. The discrepancy highlights the tension between profit and user protection. Meta continues to weigh enforcement costs against business priorities, even as its platforms play a major role in the global scam ecosystem.

Meta faces a difficult choice. Cut scam ad revenue and potentially hinder its ambitious AI projects, or let high-risk ads continue, maintaining billions in income but leaving users exposed. The internal records suggest the company is trying to thread that needle, making cautious moves that preserve financial gains while slowly tightening controls. The next few years will test whether Meta can reduce the flood of scams while keeping investors satisfied.


Notes: This post was edited/created using GenAI tools. Image: Julio Lopez / Unsplash

Read next:

• Healthy Habits of a Billion-Dollar Founder: What Canva's Melanie Perkins Knows About Focus

• How AI, Influencers, and Video Are Rewriting Marketing Playbooks for 2026
by Irfan Ahmad via Digital Information World

Thursday, November 6, 2025

How AI, Influencers, and Video Are Rewriting Marketing Playbooks for 2026

Marketing teams head into 2026 with tighter budgets, smaller crews, and far higher expectations. They are expected to publish faster, prove measurable results, and keep up with artificial intelligence while avoiding burnout.

Emplifi’s State of Social Media Marketing 2026 survey of 564 marketers sketches a field under strain yet learning to adapt through smarter tools, new content formats, and shifting collaboration habits.

AI: Gains, but Not a Revolution

Eight in ten marketers say AI has improved their productivity, but only about a third call the gains significant. Nearly half describe them as moderate. The finding shows how automation has become routine without yet redefining creative work. Emplifi notes that AI “is proving its value where marketers need it most: time.”

The next phase of adoption will focus on predictive analytics (30%), automated content creation (28%), and AI-driven ad targeting (26%). Privacy issues (27%) and integration problems (23%) remain the biggest barriers. “The primary obstacles are less about the technology itself and more about the readiness of organizations to integrate and scale it effectively,” the report warns.

Its guidance is pragmatic: build confidence through training, align leadership with execution, embed AI in planning and reporting, and “track not just time saved, but downstream effects on engagement and ROI.” The report encourages treating AI as “a co-pilot, not just a feature,” signaling a shift from experiments toward full workflow integration.

Influencers Become Central Strategy

Influencer marketing has matured into a core discipline. Sixty-seven percent of marketers plan to raise their influencer budgets next year, and most will focus on micro- and macro-creators—each cited by 47 percent of respondents—rather than mega influencers. “Brands use micro-creators for trust, engagement, and niche targeting,” Emplifi explains, while macro-creators “deliver awareness, brand building, and global reach.”

The strongest campaigns combine both: large creators for visibility and smaller ones for authenticity. Brand awareness remains the top objective (70%), followed by community growth (49%) and content creation (48%). Sales (43%) and product launches (33%) trail behind.

A new twist is the rise of digital personas. “One area seeing momentum is virtual influencers,” says the report, with 58 percent of marketers planning to increase such collaborations. These AI-generated figures allow control and consistency but still need careful audience management to avoid fatigue.

The Quiet Power of User-Generated Content

Eighty-two percent of marketers rate user-generated content as important, yet only 31 percent actively encourage it. Most depend on social tags (65%), reviews (64%), or photos and videos shared by customers (56%). Collecting enough quality material (30%) and measuring ROI (24%) are the hardest parts.

Emplifi urges brands to operationalize UGC: “Treat UGC as a primary, affordable content engine, not just a ‘nice-to-have.’ By operationalizing it, you slash production costs while scaling the authentic content that actually drives results.” The report recommends integrated tools for discovering, moderating, and tracking customer posts to turn scattered submissions into measurable assets.

Platforms and Formats Shift Again

Instagram still leads platform priorities (48%), but LinkedIn (37%) now ranks ahead of Facebook (35%) and TikTok (32%). The real trend, Emplifi says, is “diversification,” as marketers spread limited resources across more networks and rely on automation and cross-channel analytics to stay efficient. One in five plan to expand onto Reddit, drawn by its community-driven discussions and growing visibility through AI chat references.

Video keeps its dominance. “Short-form video will dominate content strategy in 2026,” predicts the report, with 73 percent citing it as their main format. Engagement and reputation are the top goals, while lead generation sits a bit lower at 47 percent. Short clips are described as “fast, authentic, and algorithm-friendly,” giving the best balance between reach and conversion.

Inside the Marketing Department

Behind the content boom sits a small workforce. More than half of social teams have fewer than six members, and 36 percent have under four. These people juggle strategy, content creation, analytics, and paid campaigns. On paper most call workloads “manageable,” yet 76 percent experience burnout at least occasionally.

The report calls capacity “the biggest constraint on today’s social teams,” not creativity. Automation can ease the load by handling scheduling, tagging, and reporting, but leadership support remains inconsistent. Forty-two percent of marketers feel strongly backed by executives in adopting new technologies; another 42 percent feel somewhat supported.

Emplifi argues that sustained growth depends on internal coordination: “Leadership sets the tone by encouraging experimentation and providing resources, while collaboration between marketing, commerce, and care ensures that strategies are executed consistently.” About half of respondents want more joint planning between departments, a reminder that integration, not just innovation, drives results.

Outlook for 2026

The study’s closing message is cautious optimism. “The next era of marketing won’t be defined by who adopts the most tools, but by who uses them with purpose.” Teams that harness AI for efficiency without losing human creativity, invest in credible creators, and manage burnout through smarter workflows will stand out.

In 2026, technology remains the enabler, but progress will hinge on how human each brand’s storytelling still feels.

Read next: When Algorithms Start to Lead: Sam Altman Says the First AI CEO Could Be Closer Than Anyone Thinks


by Irfan Ahmad via Digital Information World

YouTube Deletes Palestinian Rights Videos, Complying with U.S. Sanctions that Shield Israel

The deletion of hundreds of human rights videos under U.S. sanctions raises deeper questions about corporate complicity, political pressure, and the silencing of evidence from Gaza and the West Bank.

YouTube’s Compliance and the Quiet Erasure

In early October, YouTube quietly deleted the official accounts of three major Palestinian human rights organizations: Al-Haq, Al Mezan Center for Human Rights, and the Palestinian Centre for Human Rights. Together, their channels held more than 700 videos documenting what many rights groups describe as genocidal actions by the Israeli military in Gaza and the occupied West Bank. The removal wasn’t an accident. It followed sanctions issued by the Trump administration against these groups for their cooperation with the International Criminal Court (ICC), which had charged Israeli officials with war crimes and crimes against humanity.

Google, YouTube’s parent company, confirmed that the deletions were carried out after internal review to comply with U.S. sanctions law. The company pointed to its trade compliance policies, which block any sanctioned entities from using its publishing products. In doing so, YouTube effectively erased years of recorded evidence of civilian harm, including footage of bombed homes, testimonies from survivors, and investigative reports on Israeli military operations.

For Palestinian groups, the loss was devastating. Al Mezan’s channel was terminated without warning on October 7, cutting off a key avenue for sharing documentation of daily life under siege. Al-Haq’s account disappeared a few days earlier, flagged for unspecified violations of community guidelines. The Palestinian Centre for Human Rights, which the United Nations has described as Gaza’s oldest human rights body, saw its archive vanish completely. Each organization had built its presence over years of careful documentation, recording field investigations, interviews, and legal analyses used by international agencies.

The takedowns arrived at a moment when visibility for Palestinian suffering was already shrinking. As the war intensified, digital evidence became one of the few tools available to counter state narratives. The erasure of those archives doesn’t simply silence content, it wipes away history that could inform accountability proceedings in the future.

Legal Justifications and Political Influence

The sanctions that triggered these removals were issued in September, when the Trump administration renewed restrictions on organizations linked to the ICC. Officials justified the move by claiming the court’s investigations targeted U.S. allies unfairly. The three Palestinian groups were accused of aiding the ICC’s case against Israeli Prime Minister Benjamin Netanyahu and former Defense Minister Yoav Gallant. Those cases, which alleged deliberate starvation of civilians and obstruction of humanitarian aid, led to international arrest warrants in 2024.

Washington’s sanctions freeze the groups’ assets in the United States, restrict international funding, and prohibit American companies from offering them services. On paper, these are financial measures. In practice, they extend into the digital realm, where platforms like YouTube treat sanctioned organizations as if they were engaged in trade rather than speech. That blurred line allows the suppression of human rights evidence under the cover of legal compliance.

Critics of the decision argue that Google’s interpretation of sanctions law is unnecessarily broad. Legal experts have noted that the relevant statutes exempt informational materials, including documents and videos. In other words, the very evidence documenting war crimes should remain accessible. Instead, YouTube’s compliance posture has aligned itself with political pressure from Washington and Tel Aviv, creating a precedent where evidence of human rights violations can disappear from public view with a single policy citation.

Such alignment between political power and digital enforcement isn’t new. Over the past decade, several social media platforms have shown uneven enforcement when moderating Palestinian content. Posts documenting military raids or civilian casualties have been flagged or removed more frequently than comparable Israeli content. Human rights monitors have repeatedly raised this issue, warning that corporate algorithms and moderation rules often reflect geopolitical bias, not neutral principles.

Censorship Beyond a Single Platform

YouTube’s action didn’t occur in isolation. Mailchimp, the email marketing platform owned by Intuit, also closed Al-Haq’s account around the same time. Earlier in the year, YouTube had shut down Addameer, another Palestinian advocacy group, after pressure from pro-Israeli organizations in the United Kingdom. In each case, the stated justification referenced sanctions or community guidelines, yet the underlying pattern was unmistakable — Palestinian institutions engaged in documenting or challenging Israeli policies were being digitally erased.

For Palestinian civil society, these losses cut deeper than convenience or communication. Documentation is their defense against narrative manipulation. When platforms remove archives that show destroyed neighborhoods, the testimonies of detainees, or the aftermath of strikes on schools, they deprive the world of verifiable context. What remains is a filtered version of events shaped by governments and corporations more interested in political alignment than in truth.

This censorship also isolates Palestinian human rights workers from global audiences. Many of them operate under siege, with limited electricity, sporadic internet, and constant threat. Their videos were among the few ways to break through that isolation. Losing access to those tools compounds an existing asymmetry: Israel controls much of the digital infrastructure, while Palestinian voices depend on Western-owned platforms that can be withdrawn at will.

Some activists have begun turning to smaller or non-U.S.-based platforms, but those reach fewer viewers. Others use mirrored archives on decentralized servers, though these require technical resources that many NGOs cannot sustain under blockade conditions. The result is a fragmented digital resistance struggling to preserve its own record of survival.

A Broader Web of Complicity

The convergence of U.S. policy, Israeli influence, and corporate compliance reveals a wider structure of control. Sanctions serve as the formal mechanism, but they function through the voluntary obedience of global tech firms. YouTube’s willingness to preemptively enforce Washington’s directives shows how far economic power can extend into informational space. When a company with billions of users decides that compliance outweighs conscience, the consequences echo far beyond its servers.

Israel, for its part, has long sought to delegitimize Palestinian human rights organizations by labeling them as security threats. In 2021, it formally designated several as terrorist entities, a move widely criticized by international observers. That framing has since enabled allies to justify restrictions on cooperation or funding. By echoing those designations through digital enforcement, tech companies contribute indirectly to a political strategy aimed at dismantling Palestinian civil society.

Even before this recent escalation, YouTube’s history with Palestinian content showed bias in moderation. Videos of bombings, protests, or military incursions were often taken down for alleged violations of graphic content rules, while similar footage from other conflict zones remained accessible. This pattern, documented by digital rights groups and journalists, reinforces the perception that Palestinian narratives are treated as inherently suspect.

When viewed together, these actions form a digital blockade — less visible than physical barriers but equally effective in limiting access to truth. Erasing archives of war crimes evidence narrows the historical record and undermines justice mechanisms that depend on public documentation. It shifts power from those documenting suffering to those seeking to conceal it.

The Moral Weight of Public Response

The erasure of these videos is more than a technical policy issue; it’s a question of moral responsibility. Tech companies operate with global reach, yet their accountability remains largely domestic, shaped by the governments that regulate them. When those governments are themselves implicated in enabling war crimes, the corporations become instruments of impunity. That reality demands a response not only from policymakers but from ordinary users who sustain these platforms through daily engagement.

As consumers, people can refuse to normalize this complicity. Boycotts alone may not shift global policy, but they signal that silence has a cost. Public pressure, local activism, and political engagement can challenge both companies and governments to reconsider the boundaries of compliance. University groups, labor unions, and community organizations can demand transparency from the platforms they use. Municipal and regional leaders can introduce resolutions urging fair moderation practices. These steps, small on their own, build collective weight.

History often judges societies not by their technology but by their moral choices. When evidence of atrocity disappears because compliance took precedence over conscience, the responsibility extends beyond boardrooms. It reaches everyone who benefits from the systems that allowed it. Ensuring that such erasures never happen again requires more than outrage. It requires persistence — a refusal to let digital silence overwrite human suffering.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• From Viral Videos to Real World Results: How TikTok is Shaping Gen Z and Millennial Job Searches

• AI Visibility Data: Ahrefs Finds Brand Mentions Rank Higher Than Backlinks or Domain Rating in the Off-Page SEO Shift
by Irfan Ahmad via Digital Information World

Wednesday, November 5, 2025

From Viral Videos to Real World Results: How TikTok is Shaping Gen Z and Millennial Job Searches

For young professionals, job advice no longer comes mainly from textbooks and career counselors. It comes from TikTok. Creators post concise, entertaining guidance on everything from writing a resume to landing a specific interview, and Gen Z and Millennial workers are turning to it. A recent survey of 1,000 professionals in this age group, conducted by Youngstown State University (YSU), shows how strongly TikTok is shaping early career development. What began as a source of entertainment is gradually becoming a virtual classroom.

How Young Professionals Use TikTok to Get Ahead

TikTok is not only a source of entertainment; it has become a source of practical career advice. In the survey, two thirds of Gen Z workers, along with nearly half of Millennial professionals, said they had used TikTok when exploring a career path, and half of all respondents said they had found job-related information on the app during that search.

That is not all they are doing on TikTok. Forty-five percent of respondents said they felt confident applying what they learned from the videos. More than one in ten believed a video helped them land a job, and half of those hires stayed in the same industry.

The pattern holds across industries: over half of professionals in technology and medicine said they turn to TikTok for advice about their own sectors. In tech, one in four credited a TikTok tip with helping them land a job.

Many users also changed how they search for jobs because of what they watched. Roughly 30% of young professionals revised their resumes based on TikTok videos. Popular hashtags for finding this content include #jobsearch, #resumetips, #careertok, and #interviewtips.

Even though most people still think of TikTok as a social app built for entertainment, the experience of getting job advice there was largely positive: only eight percent reported a bad outcome after following a career tip from the platform.

TikTok’s Role in Career Resilience and Mental Health

Creators on TikTok are not just posting resume advice; they are also preparing young people for the unpredictability of the new job market. Career cushioning, the practice of preparing backup plans in case of job loss, is emerging as a trend that many young professionals are expected to adopt.

Eighteen percent of respondents rely on TikTok for job leads, new skills, and backup plans in case their first attempts fail. This is especially true in technology: a quarter of tech workers use TikTok to share their job-search activity and stay prepared in case they lose their current role. Workers in medicine and education do the same, though less often.

Still, TikTok is not always a healthy place to search for work or gather information. A third of young professionals said that seeing others’ job-related videos, such as announcements of offers or promotions, made them feel they were falling behind. Over a third also believed most job-related content on TikTok looks too perfect, which adds stress. Thirty-four percent said they feel pressure to make their own job search look polished through frequent posting, and nearly one in ten said that pressure keeps them off TikTok altogether.

Who Young Professionals Trust for Career Advice

Despite heavy use of TikTok, young professionals do not trust it completely. Nearly half named LinkedIn as the platform they rely on most, trusting it far more than TikTok, with Glassdoor and Indeed close behind. Reddit ranked high as a source of unbiased, unfiltered advice, as did friends and peers. Career counselors and AI tools ranked lower.

TikTok ranked as the least trusted source of career advice, with only 16 percent of respondents saying they trust it. That suggests most young people treat TikTok as a point of entry: useful for many, but not a full substitute for job platforms and real human help.

A New Form of Career Education

Career education once meant a workshop, a career fair, or a classroom presentation, but that is changing fast. For Gen Z and Millennials, a quick TikTok video delivers career information in the same package as entertainment.

In contrast to traditional sources of information, TikTok features real people sharing personal, firsthand experiences. The content can be messy and sometimes embellished, yet that rawness is part of what makes it feel authentic. For viewers who feel no connection to institutional career services, the app is simply more relatable.

It also normalizes failure and falling out of step with one’s peers. Instead of suggesting that everyone lands a well-paying job straight out of college, TikTok shows that struggle and setbacks are common, a resonant message in today’s unpredictable job market.

Conclusion

TikTok was never meant to be a career-focused app, but that is how many people now use it. Young professionals turn to it for advice, motivation, and support. It is helping them revise resumes, prepare for interviews, and land jobs.

It certainly won't replace platforms like LinkedIn, but it offers something they lack: a place to watch real people talk about real experiences in real time. That makes the job hunt feel a little less lonely.

As the lines between personal and professional life continue to blur online, a new purpose is emerging for TikTok. Employers, educators, and anyone who takes the platform seriously as a professional tool may gain a greater understanding of the needs of the next generation of workers.



Read next: 

• ChatGPT, Gemini, and DeepSeek Still Confuse Belief with Fact, Study Warns

• Everyone’s Using AI for Contracts, But Should They?
by Irfan Ahmad via Digital Information World

Google Maps to Add Live Lane Guidance for Cars with Built-In AI Systems

Google Maps is introducing an advanced navigation feature that can visually recognize which lane a car is in and tailor directions accordingly. The new capability, called live lane guidance, will first appear in Polestar 4 vehicles in the United States and Sweden before expanding to other models and regions in partnership with additional automakers.

The feature is designed for cars with Google built-in, a platform that directly integrates Google services into vehicle dashboards. It aims to reduce confusion on multi-lane highways and at complex junctions by providing lane-specific guidance in real time.

How the system “sees” the road

At the core of this upgrade is a combination of onboard cameras and artificial intelligence. The vehicle’s front-facing camera captures live footage of lane markings and road signs, which the system then interprets using Google’s AI models. These insights are instantly processed and displayed through the Maps interface, allowing the driver to receive timely prompts when a lane change or exit is required.

Unlike standard navigation prompts, live lane guidance continuously updates based on the car’s actual lane position. If the vehicle remains in a lane that will not lead to the upcoming turn, Maps will issue an alert through both sound and visual indicators to guide the driver smoothly across traffic lanes.

Rolling out with Polestar before a wider release

The Polestar 4, one of the latest vehicles to include Google’s infotainment platform by default, will be the first to receive the update. Google confirmed that broader availability will follow, covering more cars and road conditions over time. The company already supports Google built-in across more than 50 car models, and that number is expected to grow through 2026.

For drivers, the change marks a shift toward navigation that interacts directly with the physical environment rather than relying solely on map data. It also demonstrates how AI is gradually becoming part of everyday driving, supporting tasks that used to depend entirely on driver judgment.

A useful tool that still needs oversight

While the feature promises greater precision, experts note that AI-based driving aids should not replace human awareness. Systems that interpret camera data can misread lane markings in poor weather or construction zones, and users should remain alert to avoid over-reliance on automation.

As Google’s new live lane guidance rolls out, it may help reduce last-minute turns and missed exits, but responsible use remains essential. Technology can enhance safety and convenience, yet human attention will continue to play the most critical role on the road.


Notes: This post was edited/created using GenAI tools.

Read next:

• Everyone’s Using AI for Contracts, But Should They?

• Creators Find Their Flow: Generative AI Now Shapes the Work of Most Digital Artists Worldwide
by Irfan Ahmad via Digital Information World

Tuesday, November 4, 2025

Everyone’s Using AI for Contracts, But Should They?

AI is drafting the paperwork now, according to new research from Smallpdf, and not everyone’s thrilled about it.

For decades, crafting contracts fell to a lawyer, a paralegal, or anyone willing to burn the midnight oil to meet a deadline. It's an important role: that legwork turns handshakes into deals and gives business relationships their legal backbone. But the way those agreements take shape has changed.

Across law offices, startups, and even kitchen tables, professionals are letting artificial intelligence take a swing at the work. Those who once agonized over hours of drafting, reviewing, and editing contracts now use ChatGPT, Claude, and other LLMs to speed up the pace.

A new study from Smallpdf shows that this speed hack is not just a tech trend but an accepted method across industries, generations, and job titles that used to be miles away from any sort of automation.

The survey of 1,000 U.S. professionals, including business owners, freelancers, and full-time employees, showcases the enthusiasm of some and the uneasiness of others. Some applaud AI for how it quickens the pace. Others question accuracy, accountability, and what “trust” means on paper now.

It's a given that AI can write a contract, but will people reach for the pen to sign what it produces?

The Legal Intern That Doesn’t Need to Be Trained

These days, AI has taken on another new role: it isn't just crunching numbers or writing copy anymore, but quietly sitting in on contract work too, with thousands of professionals treating it as a second pair of hands. In Smallpdf's recent survey, more than half of respondents (55%) admitted to using AI for drafting, editing, or reviewing contracts. The logic is sound: less time spent nitpicking documents means more time for business.

The ways they use these tools aren’t all the same:

  • 66% said they lean on AI to review contracts
  • 65% use it to polish tone or structure
  • 60% have used it for full drafting duties at least once

A process that once required several rounds of revision now wraps up before an afternoon coffee break. Freelancers reported using prompts to build quick service agreements, while small business owners have AI look over everything to tidy up proposals and vendor terms.

The time savings are significant: workers estimate getting 4 hours back each week, which adds up to 26 workdays across an entire year. That's an enormous win for startups that need that time to pursue investors, or consultants balancing extensive client lists.
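The 26-workday figure checks out with a quick back-of-the-envelope calculation, assuming 52 working weeks per year and an 8-hour workday (assumptions of ours, not stated in the study's methodology):

```python
# Back-of-the-envelope check of the survey's time-savings figure.
# Assumptions (not from the study): 52 working weeks and an 8-hour workday.
hours_saved_per_week = 4
weeks_per_year = 52
hours_per_workday = 8

hours_per_year = hours_saved_per_week * weeks_per_year  # 208 hours
workdays_saved = hours_per_year / hours_per_workday     # 26 workdays
print(f"{hours_per_year} hours ≈ {workdays_saved:.0f} workdays per year")
```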

AI Proves that Time is Money

The savings speak for themselves. Respondents said they’re saving about $2,300 a year by using AI instead of hiring outside counsel, and a few even claimed savings north of $10,000.

But time is money, and speed is where AI really earns its keep: nearly half of respondents (47%) said they close deals faster when AI is involved, smoothing out the bottlenecks that used to drag projects down.

The minutes pile up from the small stuff:

  • Cutting down repetitive reviews
  • Simplifying language and formatting
  • Summarizing lengthy contracts in seconds
  • Reusing standardized templates

Still, convenience comes with a trade-off: the speed that gets contracts signed can bury mistakes that only reveal themselves when it's too late to fix them.

The Price of All That Speed

AI's speed doesn't mean it's always right: over a third of professionals (36%) reported having to redo or toss out entire contracts because of AI-related mistakes.

The biggest issues show up in the most crucial areas:

  • Scope of work
  • Payment terms
  • Definitions and legal language
  • Governing law and jurisdiction
  • Liability and indemnity clauses

Smaller mistakes popped up in confidentiality terms, intellectual property clauses, and dispute resolution sections. Even one misplaced word can shift the meaning of an entire deal, which explains why nearly nine in ten people still bring in a human reviewer before signing anything.

But not everyone plays it safe. According to the study:

  • 31% never mention any AI usage
  • 12% have had a contract flagged for sounding AI-generated
  • 25% skip legal review entirely to save time or money

That tug-of-war between choosing speed and certainty is shaping the way professionals handle these tools. For now, most are accepting the risk in favor of moving faster, even if it means cleaning up the mess later.

Does AI Hold up in the Court of Law?

The real trouble shows up when those AI-written contracts hit the courtroom.

While two-thirds (67%) of people in Smallpdf's survey said they believe AI-drafted contracts are legally valid, others are not as confident. Only 24% think courts can handle AI-related disputes, while 45% doubt the courts could keep up. A third aren't entirely sure either way.

The gap speaks volumes: people trust their own use of AI, but not the institutions that have to interpret it when anything goes awry.

And the more money a deal carries, the more the nerves increase. Asked whether they would trust an AI-written contract for a deal worth more than $100,000, only 20% said they'd risk it for the sake of speed; 80% said they'd still want a lawyer's review.

AI is excellent at efficiency, but there is a trust barrier most people aren't ready to cross.

Adoption Grows Amidst the Doubts

Even with those doubts, people don't plan to slow down their AI use: roughly one in three respondents in Smallpdf's survey plans to use it even more for contracts over the next year.

Some industries are clearly ahead of the curve:

  • Marketing and finance teams lean on AI to polish client agreements
  • Healthcare employees use it for vendor forms and compliance paperwork
  • Tech and manufacturing companies depend on it to crank out supplier contracts

Adoption is rising across job titles as well, with over half of respondents (57%) saying that they use AI to translate legal jargon into plain English for coworkers or clients. It helps break the barriers that kept people from understanding contracts in the first place.

Interestingly, 38% of respondents said they think AI-written contracts are fair to both sides, which suggests that there’s optimism towards automation as a way to make negotiations more balanced, not just faster.

Still, most agree that judgment, context, and trust are things that machines haven’t fully figured out yet.

Use AI, Don’t Rely on it

As much as AI can help to draft, summarize, and polish contracts, it still needs a person keeping an eye on it. The people getting the best results use the tech for efficiency while trusting their experience for the rest.

A few habits help keep things safe:

  • Always get a human review. Even the tiniest wording errors can create expensive problems in the long run.
  • Keep sensitive data out of AI tools. Names, financial info, and addresses shouldn’t be entered into public AI platforms.
  • AI’s great for structure, but not the final draft. It’s great for cleaning up ideas and organizing notes, not replacing a lawyer.
  • Be open about it. Let clients or partners know if AI assisted with a document to build trust and maintain honest communication.
  • Keep up with the rules. Laws and standards around AI are changing quickly, and staying informed is the best protection.

Most professionals are already doing some forms of these without realizing it. AI makes the process easier, but judgment calls and accountability still belong to the people.

AI Doesn’t Sign the Deals – We Do

There's no question that AI is helping professionals save time and money, with deals closing faster, reviews taking less effort, and legal work becoming more manageable. But everyone in Smallpdf's study agreed on one thing: technology is helpful, but it doesn't replace intuition.

Tucked away in the complex legal terminology and intricate phrasing of a contract are tones, intentions, and extensions of trust that algorithms simply miss. No matter how well a chatbot can fix grammar or how quickly it can clean up writing structure, its lack of human perception will always limit what it can effectively do.

For small businesses and freelancers, the key is balance. Let AI take the tediousness out of drafting, but real people have to be in charge of the intent and fairness. That mix of speed and sense is what keeps a business honest.

And besides, when it finally comes to signing the deal, it doesn’t matter how much AI helped with shaping the contract. It’ll always be real people signing it.





Read next:

• Search Engines Welcome Grokipedia as AI Starts Rewriting the Internet’s Reference Pages

• Microsoft's Mustafa Suleyman’s Mission: Building AI That Serves People, Not Pretends to Be One
by Irfan Ahmad via Digital Information World