Saturday, October 18, 2025

Creator Economy Shifts Offline as Brands Embrace IRL Events to Build Stronger Community Connections

The new age of the creator economy is happening in person, and it’s time for brands to catch up with those spearheading the movement off-screen. Community is at the heart of the creator economy. Before the modern influencer marketing industry emerged, the internet served as a hub for like-minded fans and creatives to connect. With the evolution of social media over the past few decades, users now struggle with “scrolling fatigue” and “digital captivity.” Avid social media users are looking to take their interests offline while still building communities around their favorite creators. As Andrew Roth, Founder and CEO of dcdx, puts it, “The desire and appetite for IRL has never been clearer; young people are not just interested in in-person gatherings, they crave them.”

A recent report from EMARKETER revealed that over 84% of Gen Z and Millennials value brands that develop a marketing mix incorporating both technological and physical experiences. Although social media remains a key tool for discovering creators and engaging with brands, these opportunities for community building flourish in person when paired with digital activations.

This October, The Influencer Marketing Factory published its Creators IRL blog, featuring key trends in experiential influencer marketing. The Influencer Marketing Factory also conducted its 2025 Creators IRL survey, exploring user sentiment and never-before-seen statistics regarding in-person creator activations.

1. Why Are Creator IRL Events Beneficial for Brands?

Creator IRL events are beneficial for both brands and creators since they serve the growing user need for in-person interactions and community building. More than 46% of brand mentions come from community-building accounts, according to The Influencer Marketing Hub, signaling how long-term, value-driven relationships are on the rise in the creator economy.

Creator IRL events are the perfect opportunity for brands to garner user-generated content, aka UGC, another common trend in influencer marketing. Fans, attendees, brands, and creators all contribute to a diverse content pipeline spanning from ideation to the execution of Creator IRL events. Whether creators are sharing BTS footage of planning in-person activations or brands and fans are posting vlogs recapping these exciting events, the opportunities for organic UGC are endless.

Expanding sponsorship value is another key benefit of Creator IRL events for brands. For instance, Dude Perfect’s 21-event national tour this past summer demonstrates how top brands can reach a dynamic audience of fans at various touchpoints and geographic locations, contributing to an ongoing live storyline with creators and developing new opportunities for user connections. “The repeated exposure and the depth of emotion you get at a live event is an asset that we’re lucky to have at our disposal,” Dude Perfect CEO Andrew Yaffe told Digiday.

2. Top Examples of Creator IRL Events

Creator IRL events can span all niches and industries, from beauty to professional sports. The Influencer Marketing Factory outlined several top examples of high-performing in-person creator activations in its recent blog and infographic, including the following.

  • Tana Mongeau’s Cancelled Live Tour: Tana Mongeau’s Cancelled Live Tour, co-hosted by Brooke Schofield, is one of the most viral examples of an in-person creator experience. In an interview with creator Jeff Wittek, Mongeau revealed that the international live podcast tour proved to have an amazing ROI thanks to ticket and merchandise sales. According to data from StubHub, influencers, podcasters, and authors sold 500% more tickets for events in 2025 compared to last year.
  • Salish Matter’s Sincerely Yours Launch: Salish Matter, daughter of YouTuber Jordan Matter, launched her debut skincare line this September, breaking records for both influencer-founded brands and Creators IRL. In celebration of the launch, Salish Matter hosted a fun pop-up, drawing a record-breaking 87K fans to American Dream Mall. Due to overcrowding and capacity concerns, fans had to leave the event early. Dedicated fans then redirected their efforts to sharing social content and selling out Sincerely Yours’ Sephora inventory, proving the power of IRL product activations.
  • Jake Paul vs. Gervonta “Tank” Davis: Creator sporting events are extremely engaging for fans as they increasingly blend digital and in-person activations. For example, Jake Paul’s upcoming boxing match against Davis can be attended in person at State Farm Arena in Atlanta or streamed worldwide on Netflix. Such a hybrid model for Creators IRL lets fans choose between in-person energy and at-home viewing, expanding accessibility and scale.
  • Addison Rae & Conan Gray on Tour: Creators-turned-musicians, like Addison Rae and Conan Gray, are fusing live shows with brand partnerships, reshaping both entertainment and commerce. The current Addison Tour features a wide range of in-person fan experiences like meet-and-greets and branded activations like Rae’s Lucky Brand Jeans collab, while Conan Gray partnered with various lifestyle and fashion brands throughout his Found on Heaven Tour. Regardless, such tours allow creators and performers to leverage their creativity and storytelling while contributing to pop culture and promoting brands.

3. Best Practices for Brands Hosting Creator IRL Events

If you are a marketer looking to host a branded in-person event with influencers, here are some key guidelines to follow, as per The Influencer Marketing Factory.

  • Co-Create With Creators & Influencers: The most successful Creator IRL events are built alongside creators, not just around them. Influencers know their audiences best, so collaborating with them during the planning stages of an experiential marketing campaign, product launch, or any other Creator IRL event can increase user satisfaction.
  • Design Your Event for Shareability: Given that Creator IRL events act as built-in content pipelines, brands should design such events for shareability. Provide attendees with photo-ops, aesthetic displays, and fun interactive stations that make for amazing social media content.
  • Create Immersive, Multi-Sensory Moments: Immersive, multi-sensory pop-ups are another major trend in experiential influencer marketing. From OLIPOP’s Orange Cream Drive-Thru to Sol de Janeiro’s Casa Cheirosa Coachella Activation, multi-sensory events establish a more engaging and immersive experience for attendees, also inspiring exciting UGC content.
  • Tap Into Niche Communities With Hyper-Local Events: Not all in-person creator events have to be massive; sometimes, true power and engagement come at a local scale. Try tapping into niche communities with hyper-local events featuring micro- to mid-sized creators to foster more one-on-one connections and drive more authentic community interactions for your brand.
  • Think Omnichannel & Extend Content Lifecycle: Utilize an omnichannel marketing strategy to extend the content lifecycle of your Creator IRL event well after it wraps in-person. Leverage livestreams, behind-the-scenes content, creator vlogs, and other content created during your event on your brand’s website, emails, and other platforms.

4. Exclusive Interview With Brooke Berry, Founder of The Shift Crawl

Forbes recently reported that over 95% of Gen Z and Millennials expressed an interest in taking their online interactions and passions to the real world through in-person experiences. Brooke Berry’s latest viral initiative, The Shift Crawl, is directly serving the needs of these younger generations by creating new opportunities for community-building and fan-creator interactions.

The Influencer Marketing Factory held an exclusive interview with Berry to reveal the true inspiration behind Shift Crawl, how to select creators and businesses for partnerships, and the cultural significance of Creators IRL. As Berry shared, "Any person with a venue has now become a stage." The following are three notable quotes from the interview.

  • Brooke Berry’s Inspiration Behind The Shift Crawl: "Post-COVID, I've been thinking a lot about just in-person. I think everybody's trying to figure out the algorithm online, and I'm just trying to figure out the algorithm in-person."
  • The Importance of Hybrid Creator IRL Events: “I'm going to create that same universe for Shift Crawl that both happens on TV and online, but then it's also paired with this real-life experience where you can come and see."
  • Building Authentic Partnerships and IRL Experiences: "For me, the goal is to start with what the creator is passionate about and build the Shift Crawl around that…When Jeremiah said he was down, The Last Bookstore was the first and only thing that came to mind."

5. Key 2025 Creators IRL Survey Insights From The Influencer Marketing Factory

The Influencer Marketing Factory surveyed 1,000 U.S.-based social media users ages 18-65 to learn more about how fans are connecting with brands and their favorite creators through experiential influencer marketing. After analyzing their results, The Influencer Marketing Factory identified the following three key insights.

  • 41% of respondents reported attending at least one in-person influencer event in the past year, highlighting the growing demand for offline creator-led experiences.
  • Overall interest in future in-person influencer events among non-attendees is strong, with two-thirds of respondents open to attending (34% yes, 33% maybe). Meet and Greets ranked as the #1 most-exciting Creator IRL event among respondents, followed by Product Launches and Workshops.
  • $10-$50 is the “sweet spot” range U.S. fans are willing to pay to attend in-person influencer events.

Read next:

• WhatsApp to Test Monthly Limit on Unanswered Messages

• People Are Getting Obsessed with AI Prompts, Here's What Global Search Data Tells Us

by Irfan Ahmad via Digital Information World

Researchers Say AI Chatbots Learn from Conversations That Users Thought Were Private

A new analysis from Stanford University has raised fresh alarms about how major artificial intelligence developers use private chat data. The research found that all six leading U.S. companies behind large language models routinely collect user conversations to train and improve their systems, often without explicit consent.

The study examined privacy policies from Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI. Together, these firms represent nearly ninety percent of the American chatbot market. According to the Stanford team, every company in this group processes user chat data by default, meaning the information people type into AI systems like ChatGPT, Gemini, or Copilot may be stored and reused for model development unless the user actively opts out.

Researchers said most privacy policies remain vague about how data is collected, stored, and reused. Several companies retain chat logs indefinitely. Some allow human reviewers to read user transcripts, while others combine data from different products within their ecosystem, linking chat behavior with browsing history, shopping activity, or social media use.

Expanding data collection under minimal oversight

The Stanford review was based on 28 separate documents tied to these six companies, including privacy statements, FAQs, and linked sub-policies. It found that developers rely on a complex web of overlapping policies rather than a single clear disclosure. The researchers concluded that this fragmented approach makes it difficult for users to know how their information is handled once it enters a chatbot.


In several cases, the privacy language extended far beyond chats themselves. Google, Meta, and Microsoft acknowledge using data from their other products to refine their language models. For example, user preferences expressed in social media posts or search queries may influence chatbot behavior. Meanwhile, companies such as Amazon and Meta retain the right to store interactions indefinitely, citing operational or legal reasons.

Microsoft was the only company that described efforts to remove personal identifiers from chat data before training, including names, email addresses, and device IDs. Others, like OpenAI and Anthropic, said they incorporate “privacy by design” into their models to prevent repetition of sensitive data but did not detail specific filtering methods.

Children’s data and consent concerns

The study identified major inconsistencies in how companies handle data from minors. Four of the six developers appear to include children’s chat data in model training. Google recently expanded Gemini to allow accounts for teenagers who opt in, while Meta and OpenAI permit users as young as thirteen without indicating any extra safeguards. Only Anthropic stated that it excludes under-18 users entirely, although it does not verify age at sign-up.

The researchers said these gaps raise legal and ethical concerns, particularly because minors cannot provide informed consent. The collection of chat content from young users may violate child privacy protections if those data sets are later used in commercial AI systems.

Data stored for years, sometimes permanently

Retention policies also varied widely. Google keeps chat data for up to eighteen months by default but stores any conversations reviewed by humans for up to three years. Anthropic deletes data within thirty days for users who opt out of training but keeps it for five years when training is active. OpenAI and Meta provide no specific limits.

The report warned that indefinite storage of chat logs could expose users to serious risks if data were ever leaked or misused. Because AI chat data often contains personal context, such as health information, employment details, or relationship issues, even anonymized transcripts can reveal identifiable patterns.

U.S. regulation lags behind global standards

The researchers emphasized that the United States still lacks a unified privacy framework for AI systems. Only a patchwork of state laws currently governs how personal data can be collected and used. California’s Consumer Privacy Act offers the strongest protections but does not prohibit companies from using chat data for training if users agree to their terms of service.

Unlike Europe’s General Data Protection Regulation, which requires a lawful basis and limits retention of personal data, U.S. firms face few restrictions. This gap has allowed developers to continue harvesting user information while presenting their collection practices as standard business operations.

The Stanford team grounded its analysis in California’s privacy law to test compliance. It found that companies’ documentation often failed to specify what categories of personal information were being collected or how users could access, correct, or delete their data.

Opt-out systems favor companies, not users

The researchers noted that all six firms now rely on opt-out systems for training data, reversing Anthropic’s previous opt-in model. In practice, this means users must locate hidden settings or submit requests to prevent their conversations from being reused.

Because default settings tend to shape user behavior, few people are likely to take these extra steps. The report said this design favors the developers’ business interests while weakening consumer control. Enterprise customers, by contrast, are automatically opted out, creating a two-tier privacy system where paying clients receive stronger protections than the general public.

The push for privacy-preserving AI

The Stanford team urged policymakers to update federal privacy law to address large language models directly. It recommended mandatory opt-in for model training, limits on data retention, and built-in filtering of sensitive information such as health and biometric data. The researchers also encouraged companies to publish standardized transparency reports detailing their data collection and training practices.

The study noted that a few developers outside this group, including Apple and Proton, have adopted more privacy-focused designs by processing data locally or avoiding chat retention altogether. It also highlighted emerging research into privacy-preserving AI techniques, such as differential privacy and secure on-device training, which could reduce dependence on user conversations for improving models.

A growing tension between innovation and trust

While AI chatbots have become essential tools for productivity, research, and communication, the report argued that the race for better performance has outpaced responsible data governance. The collection of personal chat histories gives developers powerful resources for improvement but erodes public confidence.

As large language models continue to expand across daily life, the Stanford team concluded that policymakers and developers must decide whether the gains from training on private chat data justify the potential loss of personal privacy. Without stronger regulation or transparency, the study warned, the public will remain unaware of how much of their own voice is being used to build the systems they rely on.

Notes: This post was edited/created using GenAI tools.

Read next: When AI Feels Like a Friend: Study Finds Attachment Anxiety Linked to Emotional Reliance on Chatbots


by Irfan Ahmad via Digital Information World

When AI Feels Like a Friend: Study Finds Attachment Anxiety Linked to Emotional Reliance on Chatbots

People are forming deeper emotional ties with chatbots than they realize. A new study from Nanyang Technological University suggests that users with attachment anxiety are more likely to treat artificial intelligence as human. Those who fear rejection or loneliness tend to see AI systems as responsive companions and may depend on them for comfort.

Researchers examined how different attachment styles affect human behavior toward conversational AI. They found that emotional needs, not curiosity, often drive this connection. People with anxious attachment scored higher in anthropomorphism, a tendency to attribute human traits to nonhuman agents. That belief strengthened emotional reliance, turning simple interaction into a form of companionship.

The study involved 525 adults who already had experience using AI chatbots. Participants answered detailed questionnaires about personality, communication habits, and emotional reactions. The results showed a clear divide between anxious and avoidant users. Anxious individuals viewed AI as understanding and trustworthy. Avoidant individuals kept distance and treated it as a tool.

Researchers concluded that attachment style influences how people relate to machines. Anthropomorphism acted as a link between emotion and behavior. When users imagined AI as sentient, they developed stronger feelings of connection. This often created a cycle where comfort-seeking led to overreliance. The more someone engaged emotionally, the more human the AI seemed.

The data analysis used moderated mediation models to test how personality, anthropomorphism, and engagement interact. The findings showed that anxious users formed habits of emotional dependence that could interfere with human relationships. Avoidant users rarely experienced that pattern. Their emotional distance protected them from dependency but limited positive engagement.
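The moderated-mediation setup described above can be sketched as a pair of regressions: one estimating the path from anxiety to anthropomorphism, and one estimating the path from anthropomorphism to reliance with a moderation term. The code below is a toy illustration on synthetic data; the variable names, effect sizes, and the choice of avoidance as the moderator are invented for the sketch and are not the study's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 525  # same sample size as the study, but the data here is synthetic

# Hypothetical variables (illustration only):
anxiety = rng.normal(size=n)                # attachment anxiety score
avoidance = rng.normal(size=n)              # attachment avoidance (moderator)
# Mediator: anthropomorphism rises with anxiety
anthro = 0.5 * anxiety + rng.normal(scale=0.8, size=n)
# Outcome: emotional reliance grows with anthropomorphism, dampened by avoidance
reliance = 0.6 * anthro - 0.3 * anthro * avoidance + rng.normal(scale=0.8, size=n)

def ols(predictors, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Path a: anxiety -> anthropomorphism
a = ols([anxiety], anthro)[1]
# Path b: anthropomorphism -> reliance, including the moderation term
coefs = ols([anthro, avoidance, anthro * avoidance], reliance)
b, b_mod = coefs[1], coefs[3]

# Conditional indirect effect of anxiety on reliance at low vs. high avoidance
for z in (-1.0, 1.0):
    print(f"indirect effect at avoidance={z:+.0f}: {a * (b + b_mod * z):.3f}")
```

On this synthetic data the indirect effect is larger when avoidance is low, mirroring the paper's finding that avoidant users were shielded from the anxiety-to-dependence pathway.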

During the pandemic, isolation made such attachments stronger. Many people turned to chatbots for company when social contact was limited. The study’s timing reflected that reality. The researchers observed that people with higher anxiety found reassurance in predictable AI responses. The system never argued, never withdrew, and always replied. That pattern reinforced trust and made users believe in a mutual understanding that didn’t truly exist.

The study also revealed a psychological projection effect. Participants with anxious attachment were more likely to believe AI could “understand” their emotions. That belief wasn’t based on logic or technical accuracy but on personal interpretation. It showed how emotional need can shape perception. When people feel vulnerable, they tend to fill the gaps left by human relationships with imagined empathy from machines.

This behavior isn’t necessarily harmful in short-term use. The study’s authors acknowledged that therapeutic or educational chatbots could provide temporary support. For individuals struggling with stress or communication barriers, AI interaction can help build confidence. The problem starts when users replace real human connections with digital ones. Continuous emotional dependence may reduce resilience and increase social withdrawal.

The researchers suggested that future chatbot design should consider these psychological factors. Systems could include subtle cues that remind users of the artificial nature of the interaction. Developers might also integrate features that promote reflection or social engagement outside the app. Responsible design could reduce the risk of dependency and encourage healthier use.

The study used self-report surveys, which limits how much can be said about cause and effect. Participants’ answers relied on self-perception rather than observation of real behavior. The authors recommended future research that tracks user behavior over time or analyzes communication patterns directly within chat platforms.

Despite those limits, the work adds an important dimension to understanding human-AI relationships. It suggests that the emotional dynamics shaping human interaction extend naturally to artificial systems. The same needs that drive attachment in childhood or adulthood can surface when a machine becomes consistently responsive. The human brain, wired for connection, adapts quickly to any entity that provides predictable feedback.

The researchers did not describe this as a failure of technology. They viewed it as evidence of how emotional mechanisms remain constant even when the partner is virtual. This insight could guide how AI support systems are used in therapy, education, or care environments. With careful design, they could reinforce healthy habits rather than create emotional dependence.

The findings also raise broader social questions. If AI can simulate empathy well enough to elicit attachment, then emotional regulation may become a shared responsibility between user and developer. The line between comfort and dependence will continue to blur as systems grow more conversational and personalized. Understanding that line is now essential for ethical AI development.

In the end, the study’s message is simple. People don’t just talk to machines. They project feelings, needs, and expectations onto them. For those who struggle with insecurity, AI becomes a steady presence that listens without judgment. That connection can soothe anxiety, but it can also trap users in a loop of emotional reassurance. Recognizing that pattern is the first step in using AI as support, not substitution.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: People Are Getting Obsessed with AI Prompts, Here's What Global Search Data Tells Us
by Asim BN via Digital Information World

WhatsApp to Test Monthly Limit on Unanswered Messages

Meta is preparing a new measure to contain spam on WhatsApp. The company will start testing a cap on how many messages users and businesses can send when they don’t receive a reply.

The test will count every message sent to a contact who hasn’t responded. If someone replies, earlier messages are removed from the total. WhatsApp will show a notice when a person or business gets close to the limit, explaining how many messages remain for the month.

The company hasn’t shared an exact figure. It said that the change is aimed mainly at accounts that send large batches of messages, not ordinary users. The trial will run in several countries over the next few weeks.
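The counting rule WhatsApp describes, where every message to a non-responder adds to a monthly total and a reply clears that contact's earlier messages, can be modeled with a simple counter. This is a speculative sketch of the described behavior only: the cap value, warning threshold, and class names are placeholders, since Meta has not published the actual figures or implementation.

```python
from collections import defaultdict

MONTHLY_CAP = 100  # placeholder; Meta has not disclosed the real limit

class UnansweredCounter:
    """Toy model of the described rule: messages to contacts who haven't
    replied count toward a monthly cap; a reply removes that contact's
    pending messages from the running total."""

    def __init__(self, cap=MONTHLY_CAP):
        self.cap = cap
        self.pending = defaultdict(int)  # contact -> unanswered message count
        self.counted = 0                 # monthly running total

    def send(self, contact):
        if self.counted >= self.cap:
            return False  # blocked until the monthly reset
        self.pending[contact] += 1
        self.counted += 1
        if self.cap - self.counted <= 5:  # arbitrary warning threshold
            print(f"Warning: {self.cap - self.counted} messages left this month")
        return True

    def receive_reply(self, contact):
        # A reply clears that contact's earlier messages from the total
        self.counted -= self.pending.pop(contact, 0)

    def reset_month(self):
        self.pending.clear()
        self.counted = 0
```

For example, two messages to a silent contact count as two toward the cap; once that contact replies, both drop back out of the total, so ordinary back-and-forth conversation is never penalized.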

Spam Control and User Experience

WhatsApp now serves more than three billion people, and its role has grown far beyond personal chat. It connects families, groups, communities, and businesses. That growth has also made it a target for unwanted promotions and scams. Many users receive marketing messages and unknown contact requests that crowd their inboxes.

In India alone, where WhatsApp has over 500 million users, this type of spam is a regular complaint. The new cap follows earlier steps by Meta to control this behavior. In 2024, WhatsApp started testing monthly limits on how many marketing messages a business could send. The company also added an option to unsubscribe from promotional updates. This year, it began expanding controls on broadcast lists, which limit how many people can be reached in one go.

Earlier Efforts and Account Bans

Despite several measures, unwanted activity has continued. Spammers often find ways to bypass filters and automated systems. Meta reported that it banned more than 6.8 million WhatsApp accounts linked to scam centers in the first half of 2025. Around the same time, WhatsApp introduced alerts that warn users when someone outside their contacts adds them to a group.

These steps form part of a wider attempt to keep conversations safer without disrupting regular communication. Many of the company’s updates now focus on making spam harder to spread through large contact lists or automated tools.

Preparing for New Username System

The limit also comes as WhatsApp prepares a username feature that will let people connect without sharing phone numbers. That update could make the platform easier to use for new contacts but also raise fresh concerns about spam. By setting a cap on unanswered messages, Meta wants to keep the balance between openness and user control.

The new rule is still in testing, but its purpose is clear. Meta is trying to discourage persistent, unwanted messaging while keeping daily conversations unaffected.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.


by Irfan Ahmad via Digital Information World

Friday, October 17, 2025

People Are Getting Obsessed with AI Prompts, Here's What Global Search Data Tells Us

AI is no longer a shiny toy just for the tech crowd. Everyone from small business owners to college students is trying to figure out how to talk to machines. And not just casually: global search data shows a surge in people looking up how to write better prompts for tools like ChatGPT, Midjourney, and Adobe Firefly. Curious? You're not alone.

A recent report from Adobe Express digs into this exact trend, blending U.S. survey results with worldwide search data. It’s not just about what tools people are using; it’s about how people everywhere are racing to get better at using them.

Everyone’s Asking the Same Question: “How Do I Prompt This Thing?”

You might expect tech-savvy users in the U.S. or Germany to be leading the charge, and they are. But they’re not alone. Adobe’s data shows that AI prompt curiosity is everywhere. People are Googling how to get AI to write better stories, create sharper images, mimic artistic styles, and more.

Take a look at these global annual search volumes:

  • Prompts for ChatGPT: 70,060
  • Prompts for Midjourney: 69,710
  • Stable Diffusion prompts: 35,790
  • Adobe Firefly prompts: 15,970
  • Prompts for DALL·E: 14,160

Clearly, folks aren’t just clicking “generate.” They want to steer the AI ship themselves, and get better results in the process.

Where Is Prompt Curiosity Heating Up the Fastest?

Unsurprisingly, the U.S. and India are leading the charge. Germany's in there too, as expected. But what’s really interesting is that countries like Ukraine and Pakistan are also showing strong activity in AI-related search trends.

Here’s the top 10 list:

  1. United States
  2. India
  3. Germany
  4. Ukraine
  5. United Kingdom
  6. Brazil
  7. Canada
  8. Spain
  9. France
  10. Pakistan

What does this tell us? Curiosity about generative AI spans beyond big economies. It’s reaching into emerging markets, which could signal bigger global shifts in tech education and creative tooling in the years ahead.

So… Who’s Looking Things Up, and For What Reason?

Adobe also broke down interest by specific tools:

  • ChatGPT prompts were most popular in India, followed by the U.S., Germany, the U.K., and Pakistan.
  • Midjourney got lots of attention from the U.S., India, and Ukraine.
  • DALL·E was big in Spain and France.
  • Firefly got love from the U.S., Germany, Japan, and the U.K.

This tells us that AI isn’t just for English-speaking users or traditional tech hubs. Design-focused tools like Firefly are making waves in countries with strong visual arts communities.

Zooming In: What’s Happening in the U.S.?

Turns out the AI buzz isn't just a West Coast thing. Sure, states like California are active, but so are places like Ohio, Georgia, and Virginia.

Top U.S. prompt queries include:

  • ChatGPT: 13,270
  • Midjourney: 11,840
  • Stable Diffusion: 6,040
  • Firefly: 4,050
  • DALL·E: 3,430

Oregon saw a spike in Midjourney queries. Massachusetts leaned into Firefly. And across the Midwest, ChatGPT is getting serious attention.

This tells us that AI curiosity is less about tech infrastructure and more about creative opportunity. People are using these tools to solve real problems, not just play with them.

But What About Learning to Prompt? That’s the Real Growth Area

If you thought folks were just playing around, think again. According to Adobe’s U.S. survey:

  • 79% of people want to learn how to write better prompts.
  • 67% said they’d take a course on prompt writing.

The top skills they want to learn?

  • How to tailor prompts for different tools (78%)
  • How to create specific art styles (38%)
  • The differences between AI models (37%)

Clearly, people are hungry to understand not just how to use AI, but how to use it well. That’s a big deal.

How Do People Want to Learn?

We live in the age of YouTube tutorials and short attention spans, so it makes sense that most learners prefer flexible formats:

  • Pre-recorded video lessons: 53%
  • Interactive workshops: 19%
  • Live online classes: 17%

Basically, if it’s bite-sized and available on-demand, it’s going to get traction. But there’s still room for live or guided learning, especially when new tools drop.

Different Generations, Different Motivations

The age gap matters too. Here’s what Adobe found:

  • Gen Z (18–27): Highest familiarity with AI (87%). They’re drawn to new platforms like Copy.ai and Character.ai.
  • Millennials (28–44): Most eager to improve prompt skills. Likely aiming to keep up professionally.
  • Gen X and Boomers: Less engaged overall, possibly due to steeper learning curves or lower perceived value.

This generational divide is useful for anyone designing AI education or tool onboarding. You can’t market to everyone the same way.

Why Any of This Matters In the Long Run

Let’s zoom out for a second. Search trends don’t lie; they reflect what people care about. And right now, people care a lot about learning how to talk to AI in a way that gets better results.

This is about more than just cool art or faster emails. Prompting is fast becoming a new kind of literacy. Just like knowing how to Google well or use Excel was once a competitive edge, prompt fluency could be the next big skill that separates dabblers from doers.

And here’s the kicker: this trend isn’t slowing down. If anything, it’s just starting. With tools for video, music, 3D, and coding emerging, prompting is about to get a whole lot more complex, and a whole lot more interesting.

The Big Picture

The Adobe data isn’t just a snapshot of interest; it’s a map of where the digital world is headed. Whether you’re a content creator, designer, small business owner, or educator, learning how to prompt AI effectively might just become as standard as knowing how to use a search engine.

So, next time you find yourself typing a question into ChatGPT, remember: you’re not alone. You’re part of a global movement trying to figure out how to speak the language of machines. And the better you get at it, the more doors it opens.

Read next:

• Why Chatbots Still Struggle to Sound Human

• The Way We Talk to Chatbots Can Shape How Smart They Become
by Irfan Ahmad via Digital Information World

Pinterest Gives Users Power to Filter Out AI-Generated Content

Pinterest has started rolling out new controls that let users limit how much AI-generated imagery appears in their feeds, responding to growing frustration over the spread of what users have called “AI slop.”

The platform, long known for its collection of inspirational images and shopping ideas, said the feature is designed to restore balance between human creativity and algorithmic production. It comes after months of complaints that generative AI visuals were crowding out authentic content across categories like fashion, beauty, and home décor.

New Settings to “Dial Down” AI

Users will now find a “Refine your recommendations” section in the Pinterest settings menu, where they can choose to see less AI-generated content within certain categories. The company said more options will be added later based on user feedback. The feature is currently available on Android and desktop, with an iOS rollout expected in the coming weeks.


Pinterest’s new system expands on its earlier effort to identify synthetic media through labels such as “AI-modified.” Those labels appear when the company detects AI-generated metadata or when its automated systems flag likely synthetic images. The latest update makes these labels more visible and gives people direct control over how much of this material appears on their feed.

Responding to User Backlash

For months, online forums and media coverage have chronicled frustration among Pinterest users who say their feeds have been flooded with artificial visuals that often misrepresent design ideas or fashion trends. Analysts have warned that if the issue persists, it could harm Pinterest’s credibility and weaken the sense of discovery that keeps users returning.

Academic estimates cited by the company suggest that AI-generated material now makes up more than half of all online content, roughly 57 percent. That rapid shift has made it increasingly difficult to distinguish between human-made and machine-produced visuals.

Matt Madrigal, Pinterest’s chief technology officer, said the new tools are meant to help people “personalize their experience” and find inspiration that feels genuine. He described the move as part of a broader effort to ensure the platform remains a space where creativity, not automation, drives engagement.

The Challenge of Detection

Even with the new filters, Pinterest acknowledges that identifying AI content is far from simple. Synthetic images can lose their identifying metadata when edited or screenshotted, making it harder for automated systems to detect them. While the new controls can reduce the visibility of such images, they cannot eliminate them entirely.

Pinterest also allows users to give direct feedback as they browse. If a Pin seems inauthentic or unappealing, users can open the three-dot menu to mark it as AI-related, which further refines future recommendations.

A Broader Industry Dilemma

Pinterest’s move highlights a wider dilemma faced by social platforms: balancing the growing role of generative AI with users’ desire for real, human-made material. While many companies continue to promote AI tools that let people generate their own digital artwork or profile images, the backlash suggests not everyone wants to see these creations taking over their feeds.

For Pinterest, the update is both a defensive and strategic step, aiming to protect the platform’s distinctive appeal while acknowledging that AI-generated content is here to stay. By giving users the choice to filter it, the company hopes to keep its visual catalog a place of authentic discovery rather than algorithmic noise.

Notes: This post was edited/created using GenAI tools.

Read next:

• Emoji Misfires: How Misunderstood Icons Are Scrambling Work and Brand Messages Around the World

• Global Survey Shows Public Still Wary of AI Despite Growing Use

• Training the Next Generation - How Summit Group Builds Local Expertise in Global Energy Markets


by Irfan Ahmad via Digital Information World

Training the Next Generation - How Summit Group Builds Local Expertise in Global Energy Markets


Summit Group operates at the intersection of global energy technology and Bangladesh's developing economy, creating unique human resource challenges. The company's response—international training programs and systematic knowledge transfer—has developed a workforce capable of managing complex infrastructure despite limited local industry precedent.

"We have monthly and yearly training regimes that started six or seven years ago, and we regularly train our operational people," explains Sayedul Alam, managing director of Summit LNG Terminal Company. "Sometimes we send them abroad to France, Singapore, Malaysia, and other countries for continuous improvement of operations."

Recruiting and Retraining Maritime Expertise

Summit recruits experienced ship captains and marine engineers, then retrains them for floating terminal operations. The company maintains what Alam describes as "a very good pool of people who are all ex-mariners, either captain or engineers" with decades of professional experience.

"Bangladesh has a good number of mariners who have good LNG experience working outside the country in places like Japan and Singapore," Alam notes, "but they don't have specific knowledge about FSRUs or onshore terminal management."

The distinction between sailing experience and terminal operations creates its own training requirements. "There are a good number of people who are marine engineers and master mariners, and they have been working with the LNGC vessel. They are basically sailing vessels, but they really don't have experience operating a terminal. It's a different ball game," says Alam.

International Training Programs

Summit sends personnel to multiple countries for operational training.

Training programs in France, Singapore, Malaysia and other countries support what Alam calls "continuous improvement of operations. We have to continuously improve ourselves to remain competitive with the other stakeholders or other industry practices as well."

Engineering Workforce Development

Summit Power Limited maintains a substantial engineering workforce.

"Summit Power Limited is the employer of the highest number of engineers in the private sector of Bangladesh," according to Monirul Akhand, managing director of Summit Power Limited.

"We have a very good pool of energy experts in Bangladesh right now," Akhand notes, while acknowledging that specific technical areas require development.

Local Industry Limitations

The absence of local offshore industry creates operational challenges and training requirements.

"Though we have two FSRU terminals in Bangladesh, unfortunately this local offshore industry has not developed in Bangladesh," Alam explains.

"For support, we need to go abroad, to the nearest country like Singapore or Thailand, for any support contract, or to hire offshore divers, a DSV (dive support vessel), a dynamic positioning vessel; all these types of vessels we don't have available in Bangladesh," he continues.

This gap affects costs. "Our maintenance cost becomes very high, where it sometimes becomes five to 10 times more than if this asset could have been obtained from the local market," Alam says.

Partnership-Driven Knowledge Exchange

Summit Power Limited's international partnerships create bidirectional learning opportunities that extend beyond capital investment into operational expertise development. The company's joint ventures demonstrate how foreign direct investment can facilitate technology transfer and professional development across both organizations.

The Mitsubishi Corporation partnership in Summit LNG exemplifies this mutual learning approach. "It was a good opportunity for Mitsubishi also to learn about the FSRU and LNG business, and on the other hand, we have also been exposed to them and to the international arena," says Alam.

"We share the technical know-how with each other and that's why we benefit," he continues.

Workforce Development Recommendations

Summit executives point to the need for systematic workforce development policies.

"For the next generation, it'll be a good move if people align their education with LNG infrastructure development or offshore terminal operation development and all these aspects, because they're still lagging behind in Bangladesh in terms of skill development," Alam suggests.

He advocates for government involvement: "The government should take initiative and make appropriate policies for manpower development to handle this type of critical industry."

[Partner Content]


by Web Desk via Digital Information World