Wednesday, April 24, 2024

Survey: 53% of Americans Have Used GenAI At Least Once, 82% Say It Supercharges Creativity!

Adobe Analytics conducted a new survey of 3,000 US residents to find out how many have tried generative AI. According to the survey, around 53% of Americans have used generative AI at least once. The results are good news for generative AI vendors, including Adobe, which offers its text-to-image model Firefly along with many other AI features in Creative Cloud apps like Photoshop and Premiere Pro.

82% of the respondents agreed that the use of generative AI in different apps and software can help individuals enhance their creativity and make their tasks simpler. 41% of the respondents said they use generative AI daily. When asked why, 81% said they use it for personal tasks, 30% for work and 17% for school.

The respondents were also asked what specific tasks they perform with generative AI. 64% said they use it for research and brainstorming, 44% said they use it to draft written content and 36% use it for creating images and working on presentations.

The survey also explored what people want more of from generative AI, with 41% of Americans suggesting that brands should use AI to improve their customer service. 58% said their shopping experience has improved since brands started using generative AI for customer support. Still, despite these benefits, 72% of respondents say that while generative AI is a useful tool, it can never take the place of humans.

Image: DIW-Aigen

Read next: AI Dominance Unveiled, ChatGPT-4's Counseling Superiority Stuns, Bing Outsmarts Half of Psychologists in Study
by Arooj Ahmed via Digital Information World

Tuesday, April 23, 2024

Cybersecurity Resilience: Bouncing Back From Security Incidents

Cybersecurity resilience ensures businesses can continue operating during security incidents, upholding their reputation and trust with consumers.
Photo by Sigmund on Unsplash

Cyberattacks can be disastrous for businesses, disrupting their operations when successful. The effects can be long-lasting if a company is not prepared or equipped to deal with security incidents, leaving customers dissatisfied and unhappy. Affected companies can lose the majority of their customers to competitors if they are unable to recover quickly, potentially putting them out of business.

Unfortunately, cyberattacks can occur at any time, so businesses need to remain operational when attacked and continue serving their customers as usual. This capability is called cybersecurity resilience, and it is essential for businesses to survive in competitive industries.

Cybersecurity Resilience Explained

Cybersecurity resilience is the ability of a company to keep running as intended during a security incident that would otherwise have caused a shutdown or significantly reduced its operating capacity. This resilience helps companies uphold their reputation by allowing them to resolve cyberattack incidents discreetly, out of public view. It also helps them avoid the financial losses a shutdown would incur.

Furthermore, being able to withstand cyberattacks can make a company more appealing to consumers. This demonstrates their adherence to industry-standard security practices and data protection regulations. This increases trust amongst consumers and can be a competitive advantage.

For an organization to be cybersecurity resilient, every employee must play their part in keeping it secure. This means they must all understand the risks of actions they take, be aware of vulnerabilities in the system, and know how to respond to attacks.

Companies need to weigh security risks against business opportunities and competitive advantages to determine whether they can absorb the possible cybersecurity incidents that may result. For example, using cloud computing can make a company more productive, efficient, and cost-effective. However, it can make them more vulnerable to attacks because there will be more attack vectors for malicious actors to potentially exploit.

Companies must identify the risks associated with any business practice and devise sound strategies to prevent them from manifesting into full-blown cyber attacks. They should also have rapid response plans to mitigate any threat they detect.

Why is Cybersecurity Resilience Important?

Most businesses use the internet for their operations, making them targets for hackers and other cybercriminals. They must be able to deal with the threats these bad actors pose while continually serving their customers. This can be challenging because many internet-based businesses are expected to be active 24/7 or at least throughout regular working hours. Downtime can erode consumer trust and make a business lose market share.

Endnote

By creating a cybersecurity resilience plan, companies prepare themselves to tackle any security incident that could affect their operations. The strategy should include continuous monitoring of IT infrastructure to detect suspicious activity and signs of imminent threats. Some threats are almost impossible to fully avoid, so companies should adapt their processes to keep them at bay and maintain core business functions whenever there is a disruption. They should also be ready to respond to any attack and bring systems back online as soon as possible if a breach occurs.
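To make the continuous-monitoring idea above concrete, here is a purely illustrative sketch in Python. It is not any vendor's product: the log format, the `FAILED_LOGIN` event name, and the threshold are all invented for the example. Real monitoring stacks (SIEM tools, intrusion detection systems) do far more, but the core pattern of counting suspicious events and alerting past a threshold is the same.

```python
from collections import Counter

# Hypothetical threshold: how many failed logins from one IP before we alert.
FAILED_LOGIN_THRESHOLD = 3

def flag_suspicious_ips(log_lines, threshold=FAILED_LOGIN_THRESHOLD):
    """Return IPs whose failed-login count meets or exceeds the threshold."""
    failures = Counter()
    for line in log_lines:
        # Assumed log format for this sketch: "<timestamp> <event> <ip>"
        parts = line.split()
        if len(parts) == 3 and parts[1] == "FAILED_LOGIN":
            failures[parts[2]] += 1
    return sorted(ip for ip, count in failures.items() if count >= threshold)

logs = [
    "10:00 FAILED_LOGIN 198.51.100.7",
    "10:01 FAILED_LOGIN 198.51.100.7",
    "10:02 OK_LOGIN 203.0.113.5",
    "10:03 FAILED_LOGIN 198.51.100.7",
]
# A real system would page an on-call team here rather than print.
print(flag_suspicious_ips(logs))
```

In a resilience plan, a detection like this would feed directly into the rapid-response playbook: block the offending address, force password resets, and keep core services running while the incident is investigated.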
by Asim BN via Digital Information World

Survey Reveals Amazon's Hold on Influencer-Driven Purchases, TikTok and Instagram Face Challenges

Izea, an influencer-marketing firm, published a report on consumer purchasing behavior in the US. Of the people who have bought a product after being inspired by an influencer on social media, 94% purchased it from Amazon, suggesting that when people want a product they have seen an influencer use, they go straight to Amazon. The report, published in April 2024, also stated that 59% of respondents buy a product after seeing an influencer promoting or using it.

This is bad news for TikTok and Instagram, which are building their own online stores with in-app shopping. These stores include features specifically designed to help content creators drive product sales, but if their users are already Amazon customers, TikTok and Instagram will have to change consumer behavior to attract shoppers to their stores.

Like TikTok and Instagram, Amazon also works with influencers to increase its sales. Its Influencer Program provides tools and features that help influencers build their own storefronts and earn money from purchases made through their affiliate links.

Izea surveyed 1,200 US consumers about their purchasing behavior on Amazon and studied how influencers affect consumers' shopping decisions. The results showed that 80% of social media users in the US are Amazon Prime members, while 89% shop on Amazon at least once a month.

Other key takeaways showed that influencers themselves also shop on Amazon and are 2.1 times more likely to shop there at least once a week. 71% of consumers said they are likely to buy a product on Amazon if they were influenced to on social media, 9.4% said they purchase from Walmart after seeing a product on social media, and 5.7% said they use in-app shopping platforms.

People aged 45 to 60 are the most likely to be influenced into purchasing from Amazon: 70% admitted they had done so previously, and 76% of this group said they prefer Amazon for influencer-driven purchases. People aged 30 to 44, on the other hand, were more likely to use in-app shopping platforms, with 8% saying they purchase products directly from the app.

The survey also found that 83% of consumers research a product thoroughly on social media before making a purchase, with 63% of those surveyed shopping on Amazon weekly. Among 18-29-year-olds, 67% said video content influences their Amazon purchases more than other formats like images and text.




Read next: AI Dominance Unveiled, ChatGPT-4's Counseling Superiority Stuns, Bing Outsmarts Half of Psychologists in Study
by Arooj Ahmed via Digital Information World

Researchers At Mozilla Want WhatsApp To Step Up Efforts To Combat Election Misinformation

It’s election season around the globe, with an estimated four billion voters across different nations set to head to the polls.

So when more than 50% of the global population is making its way to the polls, concerns about misinformation swaying people’s opinions are certainly running high.

This is the main reason social media firms like TikTok, Meta, and YouTube are working around the clock to make sure they have the right safeguards in place.

No one wants to see fabricated content being promoted, and executives at these leading social media apps are on their toes to ensure it does not happen.

The worry is that some platforms remain more focused on reach and growth than on such safeguards, and that gap in protection is leading researchers at organizations like Mozilla to voice their concerns.

Around 90% of the safety interventions Meta has put in place are tied to just two of its leading platforms. No prizes for guessing which: Facebook and Instagram.

However, researchers are reminding the tech giant that it needs to make similar changes to its popular messaging platform WhatsApp. Meta has made no public commitment to a roadmap for how elections will be handled on WhatsApp or how the app will be protected from circulating misinformation.

Over the past decade, WhatsApp has become the main means of communication for many people outside America. In 2020, the platform said it had two billion users, transforming how communication is carried out.

But despite these numbers being out in the open, Meta has paid remarkably little attention to WhatsApp's potential as a major channel for misinformation; its election-related safety measures for the app remain very limited.

A recent analysis revealed that Facebook has made 95 election-related changes since the start of 2016.

That was the year the platform first raised eyebrows for not doing enough to stop the circulation of fake news and for harboring extreme political sentiment. Interestingly, WhatsApp accounted for just 14 such changes.

By comparison, Google and YouTube made 35 and 27 changes respectively, while X made 34 and TikTok 21. From these figures, Meta's efforts for the current election period appear heavily skewed toward the Facebook app.

This is probably why researchers at the non-profit Mozilla are unhappy: they are calling on the company to make serious changes to how the app works on polling days, months before the elections begin.

Notably, the researchers are calling for simple but effective changes, such as adding disinformation labels to content that goes viral or is forwarded faster than usual. Similarly, they propose "please verify" tags encouraging people to do their own research rather than blindly trusting what they read online.

They also want limits on blasting messages to many community members at once; that friction would force senders to stop and think for a moment about what they are sharing, and if after reflection they still feel it's alright, then so be it.

Another interesting point is that Mozilla has launched a public pledge asking the platform to roll out these changes before it's too late. So far it appears to be working: the pledge has gathered close to 16,000 signatures, as a representative confirmed to the media outlet Engadget.

Even more interesting, the idea of asking people to pause and reflect stems from features Twitter rolled out to curb misinformation spread through retweets.

Image: DIW-Aigen

Read next: Loophole in Meta AI Allows Image Generation of Celebrities
by Dr. Hura Anwar via Digital Information World

Loophole in Meta AI Allows Image Generation of Celebrities

Meta launched a new AI chatbot powered by Llama 3 last week. This chatbot is free to use on Meta’s platforms like Facebook, Instagram, WhatsApp and Messenger. It is not designed to create images of any specific real person. However, a loophole has been discovered that allows it to do just that.

Jane Rosenzweig, who works at Harvard's College Writing Center, found that the AI tracks what users type before they actually send their requests. This means the AI starts preparing images based on what is being typed. For instance, if someone starts to type "Taylor Swift" but doesn't press the send button, the AI may show an image resembling her.

The loophole also works if names are slightly misspelled. For example, typing “Hilary Clinton” instead of “Hillary Clinton” or “Judi Garland” instead of “Judy Garland” can trick the AI into generating their images. This issue is significant because it could be misused to create misleading images or spread disinformation.

Testing this, we found that typing part of a celebrity's name without completing the query could briefly produce an image of them. For example, typing "create an image of Taylor s" showed a picture that looked a lot like Taylor Swift.

Similarly, typing "create an image of elvi" displayed an image resembling Elvis. These images were visible before officially submitting the request, making it possible to capture them with a screenshot.

This discovery comes while Meta's Oversight Board is looking into how the company’s apps handle AI-generated content, especially concerning deepfakes of women. Meta has previously stated that its AI cannot generate specific images of people, including celebrities, because of ethical and legal reasons involving privacy and consent. Yet, this loophole suggests that the system can inadvertently create such images.

Meta’s approach to censoring AI-generated images is not unique. Google, for instance, restricted its Gemini AI from creating images of humans. Similarly, there have been concerns about Microsoft's AI tools not being restrictive enough in preventing potentially harmful images.

Image: DIW-Aigen

Read next: Proton Mail Announces ‘Dark Web Monitoring’ To Enhance Security For Paid User Subscriptions
by Mahrukh Shahid via Digital Information World

Proton Mail Announces ‘Dark Web Monitoring’ To Enhance Security For Paid User Subscriptions

Proton Mail is rolling out a new dark web monitoring feature for enhanced security, but wait, not everyone gets it.

Only paid users will benefit: the feature scans hidden parts of the web for Proton Mail email addresses. If a user's address is found, security alerts are generated along with recommended actions to ward off the risks.

The feature displays any breaches your account may have been involved in in the past, with red indicators showing which carry the most risk. Anything that exposes a user's password, for example, is flagged as high risk, and the goal is to prompt the user to change that password immediately.

Proton has also noted that the number of breaches continues to grow exponentially. Breaches reportedly affected close to 353 million users last year, making it the worst year on record for such malicious attempts.

In January, researchers located a database exposing close to 26 billion records, so large that it was soon dubbed the mother of all breaches. It served as a compilation of data from past breaches.

Proton Mail plans to monitor custom-domain email addresses as well as external addresses, with alert notifications sent to the user's phone whenever their data appears in a breach online.

Dark web alerts can also be found in Proton Mail's Security Center by navigating to Privacy and Security. Paid email subscriptions start at $3.99 per month.


Read next: WhatsApp Plans New Feature for Web Client
by Dr. Hura Anwar via Digital Information World

Monday, April 22, 2024

AI Dominance Unveiled, ChatGPT-4's Counseling Superiority Stuns, Bing Outsmarts Half of Psychologists in Study

A new study published in Frontiers in Psychology compared AI models with human psychologists and found that AI can be better at understanding human emotions and offering counseling. The study focused on ChatGPT-4, Google Bard (now known as Gemini) and Bing, and aimed to assess how well these AI models display social intelligence. ChatGPT-4 outperformed all the human psychologists who participated in the study, while Bing outperformed half of them. Google Bard was only better than psychologists pursuing bachelor's degrees in psychology.

Many large language models (LLMs) are built to answer questions, translate languages and hold conversations that closely resemble human ones, thanks to the neural networks that generate their human-like responses. Previous studies have shown that LLMs can help diagnose mental health conditions, but none had examined how well they perform in social contexts.

One of the study's authors, Fahmi Hassan Fadhel, says that since LLMs have shown they are capable of counseling and psychotherapy, many psychologists may feel threatened that these AI models could take their jobs. An AI model that understands human emotions and feelings well enough to offer useful suggestions could prove more useful than human psychotherapists, which the authors consider an alarming prospect.

For the study, 180 male psychologists from King Khalid University in Saudi Arabia were recruited and grouped by educational level, from bachelor's to doctoral students. The participants and the LLMs (ChatGPT, Bard, Bing) were asked to respond to 64 scenarios taken from the Social Intelligence Scale. Responses were scored on two criteria: soundness of judgment and the ability to act wisely in social situations. The results showed that some AI models have advanced to the point of outsmarting professional human psychologists.

ChatGPT-4 answered 59 of the 64 Social Intelligence Scale questions correctly, surpassing all the human psychologists, whose average scores were 39.19 for bachelor's students and 46.73 for doctoral students. Bing scored 48 out of 64, outperforming 90% of bachelor's students and 50% of doctoral students. Google Bard could only answer 40 out of 64. The study shows how quickly AI is developing and excelling across fields. Impressive as these results are, it remains a matter of concern whether AI should be entrusted with sensitive matters such as mental health.


Read next: ChatGPT Might Be Kinder and More Cooperative Than Humans, New Study Reveals
by Arooj Ahmed via Digital Information World