As AI systems advance at a rapid rate, there may come a time when this technology is asked to make moral judgements on our behalf. Even so, many have doubts about the ability of an AI to weigh in on issues that require complex moral frameworks. A study conducted in China and recently published in Behavioral Sciences offers some insight into the matter.
One key takeaway is that people tend to base their opinions on context. The type of situation in which an AI renders a decision can greatly alter how moral that decision is judged to be, and this needs to be factored in whenever such decisions are discussed.
In one of the tests, participants were presented with the standard trolley dilemma: should the agent in charge pull the switch to ensure that fewer people die, or do nothing at all? When told that an AI would be making the decision, participants said that pulling the switch to reduce deaths was not moral.
However, participants did not feel the same way when the decision-maker was a human rather than an AI. This suggests that they believe AI does not possess enough agency to make such consequential moral judgements.
The primary finding worth noting is that people apply a different set of moral requirements to humans than to AI. The difference is intriguing because it could determine how people react to AI making decisions on their behalf. The future of AI may well depend on whether people are willing to hand it the reins for essential decisions.
Read next: Exploring the Hidden Aspects Boss Could Notice During Your Working Hours
by Zia Muhammad via Digital Information World
Tuesday, July 4, 2023
Twitter Is Struggling With Bots And Accounts Marketing Adult Content As Experts Speak Of No Solution In Sight
Just when you thought crypto spam accounts were the worst thing that could happen to an app, new reports say Twitter is going through a crisis with no easy solution in sight.
We’re talking about a long and growing list of bot accounts that promote adult content. They’re bombarding users’ message inboxes and trying every way possible to start a conversation. So far, no easy way out has been identified.
Such issues are nothing new, but watching the number of adult-content bots climb is an ironic affair. Let’s not forget that tech billionaire Elon Musk promised in the past that he would tackle bots and spam after taking over. So far, that has yet to happen.
This past week, top security experts and other users reported flagging plenty of suspicious accounts after noticing that those accounts were following them or trying to start unwanted chats through DMs.
The experts even published tweets putting such suspicious accounts in the limelight, accounts they felt were chasing nothing but likes and engagement. The attention lures more people to view the accounts and click the links listed in their bios.
Those links land the viewer on suspicious sites, including NSFW-themed ones. The app and its security researchers are on alert and doing everything in their power to suspend such accounts and stop the practice.
The company says it is actively suspending accounts that rely on these bogus tactics, but it has yet to outline a clearly defined solution that would prove worthwhile in the long run.
Right now, bots are a huge issue for the firm, and without a proper plan to combat the rise in spam, things are feared to go from bad to worse. Musk, for his part, has said time and time again that the app will carry on its fight against spam until the last second.
Just a few days ago, the tech billionaire said his current goal was to put rate limits on viewing Twitter posts to help combat the alarming issue of data scraping and system manipulation.
For now, it’s just not clear whether targeting bots has anything to do with this, and by the looks of it, we may never find out.
Before this endeavor came the controversial move of stripping users’ blue badges if they had been verified under the company’s previous criteria. That, too, was confusing, because some prominent accounts got their ticks back simply for being prominent.
Musk’s stated aim was to tear down an age-old system and make verification fair for all, but returning blue badges to accounts deemed high profile will always be questioned, for obvious reasons.
So you can’t help but wonder what the purpose of verification is when some have to pay while other, famous accounts do not. What do you think?
Read next: Jack Dorsey Tells Elon Musk To Hang In There Because Running Twitter Isn’t Easy Amid Users’ Outcry Against Rate Limits
by Dr. Hura Anwar via Digital Information World
Sweden's Leading Privacy Protection Agency Bars Four Firms From Using Google Analytics For Data Transfer
Sweden is keeping its companies in check: the country’s top privacy regulator has announced a ban barring four different enterprises from using a Google tool.
All four firms operate in the country, and the regulator called them out for using Google’s web-traffic measurement tool after concluding that it can transfer personal information to the US. The regulator means serious business, too: one of the four companies was fined an amount outlined to be worth around $1 million.
The country’s privacy and data protection agency, IMY, said it examined tools like Google Analytics after the issue was brought to its attention by a privacy protection group based in Austria, which had reportedly filed numerous complaints against Google across the EU.
That group, called Noyb, argued that using Google Analytics causes data from the EU to be sent to the US, in clear violation of the GDPR. The GDPR permits data transfers to third countries only when the European Commission has determined that they provide a level of privacy protection equivalent to the European Union’s.
The EU’s Court of Justice has already struck down a previous transfer deal between the two regions because it found the protections insufficient.
IMY concluded that the data these firms transfer to Google in the US is personal data, and that the technical safeguards the firms relied on were not properly evaluated and are inadequate to provide the level of protection the EU requires.
IMY fined the telecommunications firm Tele2 12 million kronor, while another company, CDON, received a fine of 300,000 kronor.
The grocery chain Coop and the business newspaper Dagens Industri escaped fines because they had taken sufficient safety measures around the data transfers. Tele2 says it has already stopped using Google Analytics, while IMY ordered the other firms to stop using it.
The IMY legal advisor who led the investigation added that the rulings make clear what requirements apply to technical security and other measures when data is transferred to a third country such as the US.
Noyb, far from slamming the ruling, welcomed IMY’s decision with open arms. In a statement, it noted that authorities in countries like Italy and Austria had already found the use of Google’s tool to violate the GDPR, and called this the first financial penalty imposed on firms for using Analytics.
Back in May, the European Commission moved to set out a legal framework to ensure safe data transfers between Europe and the US.
For now, the GDPR remains in force, and violations can draw major fines of up to 20 million euros or 4% of a firm’s total worldwide revenue, whichever is higher.
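That fine ceiling follows the "whichever is higher" rule for the most serious GDPR infringements (Article 83(5)): 20 million euros or 4% of worldwide annual turnover, whichever is greater. It can be expressed as a one-line calculation; the function name here is ours, purely for illustration:

```python
def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    the higher of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# For a firm with EUR 1 billion in global revenue, the 4% prong dominates:
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# For a smaller firm, the EUR 20 million floor applies instead:
print(max_gdpr_fine(100_000_000))    # 20000000.0
```

For a giant like Google, the 4% prong is what makes these fines bite, which is why regulators cite revenue percentages rather than fixed amounts.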
Read next: Google’s New Privacy Policy Sparks Concern As Company Allowed To Collect Users’ Data For Training AI Models
by Dr. Hura Anwar via Digital Information World
Google's New Privacy Policy Sparks Concern As Company Allowed To Collect Users' Data For Training AI Models
The past few days saw tech giant Google make some major changes to its privacy policy. The news of the sudden revamp is not making a lot of people happy, because the controversial changes allow the firm to collect data that users share publicly.
The Android maker will use this shared data to train its AI models. The company seemed very optimistic about the approach, looking to enhance its services and create new AI-powered products.
The updated policy says the tech giant will use such data to enhance its services and create new goods and offerings, as well as systems that could benefit the general public.
One example given was training AI models to improve current Google services, including translation, Bard, and its Cloud AI capabilities. As you can imagine, this is causing major concern, since it marks a huge shift in the overall policy.
The change in wording from "language models" to "AI models" marks a major departure from the company's previous commitments on privacy. Before this update, the policy framed data use as being for users' benefit, since improving language models improves all of Google's services.
Today, Google says it has the right to use users' information to improve its overall systems, including translation, text generation, and Cloud AI. It added that changes to its policies can be found on its policy archive page.
Typically, such policies limit firms to collecting data from people who use their services directly. With this policy, Google is claiming the right to use data posted publicly anywhere on the web.
As one can imagine, the news is controversial because it raises serious privacy questions. Letting AI systems see people's data is a huge deal: AI tools such as Google's Bard and ChatGPT can take public posts into their systems, scrutinize them, and then go a step further by using them for training.
Some might argue that anything posted online can be viewed by all. But the point to ponder is that the use of the data keeps changing; the real concern is the shift in who can access it and how it is used across the board.
Above all, the legality of this remains questionable, and going forward we can expect a wave of lawsuits raising major copyright concerns.
Most importantly, web scraping has recently made many people aware of how big a concern it really is, after Twitter made serious changes to the platform to stop data extraction.
On Saturday, Elon Musk faced intense criticism for limiting the number of posts any user could view each day, effectively barring people from using the app freely.
He blamed the move on data scraping and manipulation of the company's systems. This is why so many leading tech organizations are calling for data use and privacy guidelines to be clearly outlined and respected.
Read next: Google Helps Marketers By Launching New Brand Restriction Settings For Ad Campaign Control
by Dr. Hura Anwar via Digital Information World
Study Reveals Increased Complexity and Collaboration in Technology Buying Decisions
According to the latest study, the process of buying technology is rapidly evolving, and growing increasingly intricate. The findings reveal a complicated landscape, demanding unique strategies and heightened engagement to navigate effectively. As organizations strive for success, understanding and adapting to this evolving buying cycle is paramount.
The study further highlights that sixty-two percent of polled IT decision-makers agree the technology acquisition process is growing more complex. The figure rises to sixty-five percent at businesses with over a thousand employees, emphasizing the pressing need for innovative approaches and deeper engagement to navigate this intricate landscape. Comprehending these trends is crucial for organizations aiming to thrive in the ever-evolving technology industry.
As technology advances, so does the complexity of the buying cycle, and the latest numbers bear this out. The analysis indicates substantial growth in the duration of the buying journey, stretching from twenty-eight weeks four years ago to forty-two weeks last year, and now reaching forty-five weeks this year. This timeline accentuates the painstaking evaluation and decision-making required in today's technology landscape.
Additionally, the analysis underscores a significant shift in the composition of purchasing groups, which now average around twenty members, up from prior years. Notably, business managers are assuming a more prominent role within these teams, particularly at enterprise organizations. Their presence brings a broader range of viewpoints and supports the increasing interdepartmental collaboration essential for successful technology acquisitions.
In light of these findings, organizations must recognize the importance of adapting to this evolving landscape, allocating adequate time and resources to navigate the complexity, and leveraging the diverse expertise within their expanded buying groups.
Amid these changes, corporations remain determined. Even in unsteady times, fifty percent of the largest enterprises foresee an increase in their IT budgets, and forty percent of the smallest ones show similar resilience and anticipate an increase. As internal and external factors shape their decisions, these organizations adapt, forging ahead with strategic spending choices.
Read next: The rise of Generative AI: A game-changer for digital professionals
by Arooj Ahmed via Digital Information World
Monday, July 3, 2023
AI-Generated Images Are Spreading A Wave Of Disinformation As They Fool AI Detection Software Easily
Images produced using AI technology have been the center of concern for so many individuals around the globe.
Be it stolen pictures, artwork, or fake marketing campaigns, such images are the root cause of a rapidly spreading disinformation wave online. With months passing and no solution in sight, people are getting worried, and rightly so.
The news comes via a recently published report in the New York Times, which states that software designed to detect AI-generated images is now easily fooled. Yes, one of the leading defenses against this misinformation wave can be tricked by simply adding grain to AI-produced pictures.
The report elaborates that adding grain, or texture, in an image editor makes an AI-produced picture far less detectable: detection rates fall from 99% to just 3%. How's that for some shocking news?
What is even more appalling is that one popular and sought-after tool called Hive is also having trouble, despite previous studies showing it had a huge success rate at AI detection.
Hive cannot reliably tell regular images from AI-produced ones, especially when the owners of the images make them more pixelated.
As a result, experts say such software shouldn't be the only means of detection, even as many companies work hard to curb misinformation and stop the publication of images along these lines.
It's like robbing people of their hard work and talent, and it's hard to see how that could ever be acceptable. One expert from Duke University who knows this software inside and out describes an arms race: each time someone creates a better generator, someone else builds a better discriminator to catch it, and then that discriminator is used to train an even better generator.
The news comes at a time when users are putting out new kinds of AI-made misinformation, aiming to run political campaigns that deceptively influence the general public, which is obviously wrong.
One of the greatest and most recent examples has to do with Florida Governor Ron DeSantis and his recent announcement of running for president in the upcoming elections.
That particular campaign sent out fake pictures of former US President Trump.
Read next: The rise of Generative AI: A game-changer for digital professionals
by Dr. Hura Anwar via Digital Information World
Be it stolen pictures, artwork, or fake marketing campaigns- they’re the root cause of a rapidly spreading disinformation wave online. And with months passing and no solution being outlined, people are getting worried, and rightly so.
The news comes to us thanks to a recently published report by the New York Times that states how software that was designed to detect such ordeals is now easily fooled. Yes, one of the leading forms of defense against this misinformation spread is getting tricked by the simple addition of grain on pictures produced by AI technology.
The report further went on to elaborate upon how the addition of grain by the image’s editor or shall we say texture would cause the picture produced by AI to be less discernible. We’re talking about a decline in detection figures that go from 99% to only 3%. How’s that for some shocking news?
What is even more appalling is that Hive, a popular and sought-after detection tool, is also struggling, despite previous studies showing it had a high success rate for AI detection in the past.
Hive cannot reliably differentiate between regular images and those produced using AI technology, especially once the owners of the images pixelate them.
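The pixelation workaround can be sketched the same way. Again, this is a hypothetical illustration rather than anything taken from Hive itself: nearest-neighbor block pixelation throws away the fine-grained detail a detector may depend on, while keeping the image recognizable to a person.

```python
import numpy as np

def pixelate(image, block=8):
    """Nearest-neighbor pixelation: keep one pixel per block and tile it.

    Assumes the image height and width are divisible by `block`.
    """
    small = image[::block, ::block]  # one sample per block
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)

# A toy gradient image standing in for an AI-generated picture
img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
pixelated = pixelate(img)
```

Every 8×8 block of the result holds a single repeated value, so the high-frequency texture that detection models often key on is simply gone.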
As a result, experts claim that such software shouldn’t be the only means of detection, given how many companies are working hard to stamp out misinformation and stop the publication and release of images along these lines.
It’s like robbing people of their hard work and talent, and we don’t know how that can ever be acceptable, for obvious reasons. One expert from Duke University who knows the software inside and out describes it as an arms race: each time someone builds a better generator, others come forward and build an even better discriminator, and that discriminator is then used to train an even stronger generator.
The news comes at a time when users are putting out new kinds of misinformation made with AI technology, often to power political campaigns that aim to sway the general public in a deceptive manner, which is obviously wrong.
One of the most prominent recent examples involves Florida Governor Ron DeSantis and his announcement that he is running for president in the upcoming elections.
That particular campaign sent out fake pictures of former US President Trump.
Read next: The rise of Generative AI: A game-changer for digital professionals
by Dr. Hura Anwar via Digital Information World
New Survey Shows People Are More Likely to Ask for Financial Advice than Relationship Advice
Americans traditionally prefer to handle things on their own, and a recent survey commissioned by AmeriLife and conducted by OnePoll seems to confirm this. According to the results, just 22% of respondents said that they like to ask anyone for help.
In spite of this, 75% said that they themselves are great at helping others. What’s more, when viewed through the lens of financial management, 55% said that they don’t have much trouble asking for help in that regard. However, one thing that might change their view is going through a period of financial strife.
36% of respondents stated that they would find it more challenging to ask another person for help during financial struggles. Interestingly, 30% of women said this, compared to 24% of men.
Notably, more Americans would be willing to give a public speech than ask someone who cares about them for financial assistance. This might stem from the fact that 69% of Americans believe they possess superior financial know-how to anyone else around them, which could make it difficult for them to admit when they are in a financially precarious position.
Also, 27% of Americans said that they find financial management stressful, which might further explain why they are so hesitant to ask anyone for a helping hand. 26% said that they would prefer to have someone teach them the fundamentals of financial management, while 23% stated that they would rather just receive money. Such trends can have an enormous impact on society, since they can determine what people will do whenever there is a global economic crunch such as the one the world has been experiencing since the pandemic wreaked havoc.
Read next: Where Can You Find the Fastest Internet Connections in The World?
by Zia Muhammad via Digital Information World