Sunday, October 15, 2023

The Three Most Common Bitcoin Scams, and How to Avoid Them

Scammers and hackers are coming for your Bitcoin. Whether it's a financially motivated lone wolf or a state-sponsored group linked to a hostile nation, these bad actors are constantly looking for novel ways to move your Bitcoin into their wallets.

The key to staying one step ahead is knowledge. And that's what this latest study from CoinKickoff is all about. Using data collected from chainabuse.com and blockchain.info, the researchers put together a series of charts highlighting the most common types of Bitcoin scams and when they're most likely to happen.

And to help you hold onto your precious BTC, we've added a short section highlighting the best ways to avoid being the next crypto victim.

Is Bitcoin a scam?

Bitcoin's legitimacy as a digital currency is a topic of debate. Supporters argue that its decentralization, transparency, and acceptance by mainstream institutions make it a viable store of value and an inflation hedge.

However, critics highlight the lack of regulation in the cryptocurrency market, Bitcoin's price volatility, its pseudonymous transactions, and its susceptibility to market manipulation as potential red flags.

Why do scammers target Bitcoin?

The Bitcoin ecosystem has many characteristics that are attractive to hackers and scammers.

Bitcoin transactions are not directly connected to people's real identities, making it easier for scammers to hide their tracks and their ill-gotten gains. And once you send Bitcoin to an address, it's impossible to claw back, and there's no Bitcoin regulator to complain to or fight your case.

Its global nature makes Bitcoin a useful tool for criminals moving stolen funds around the globe.

And finally, a lack of education and a younger user base mean hackers view many Bitcoiners as naive and more likely to fall for scams like phishing or giveaway fraud.

A list of Bitcoin scams

Here's a look at some of the most common Bitcoin scams:

Phishing Scams: Scammers create fake websites or emails that resemble legitimate Bitcoin services or exchanges to trick users into revealing their private keys or login credentials.

Ponzi Schemes: Fraudsters promise high returns on investments in Bitcoin, but they use funds from new investors to pay returns to earlier investors, creating a pyramid scheme that eventually collapses.

Fake Wallets: Fraudulent mobile or desktop wallets are created to steal the private keys and funds of unsuspecting users.

Fake Exchanges: Scammers set up phony cryptocurrency exchanges that look real but are designed to steal users' deposits or personal information.

Impersonation Scams: Fraudsters impersonate well-known figures in the cryptocurrency industry on social media, often asking for donations or investments in exchange for fake promises.

Tech Support Scams: Scammers claim to be from a cryptocurrency exchange's tech support team and ask for remote access to your computer to steal your funds.

Giveaway Scams: Scammers pose as celebrities or influential figures on social media, promising to send you more Bitcoin if you send them a small amount first. They never send anything in return.

Fake Mining Operations: Scammers offer cloud mining contracts or mining hardware at attractive rates but never deliver the promised returns or equipment.

Ransomware Attacks: Hackers use malware to encrypt a victim's data and demand Bitcoin as a ransom payment to unlock it.

Fake Airdrops: Scammers promise free cryptocurrency tokens in exchange for personal information or a small payment.

The three most common Bitcoin scams

Data collected by the CoinKickoff researchers reveals the top three most reported crypto scams since 2018. They are:

Blackmail (85,534 reported cases): Bitcoin blackmail involves threats to reveal compromising information unless a payment is made in Bitcoin. Scammers exploit the pseudo-anonymity of the cryptocurrency to hide their identity, making it difficult for victims to recover funds or trace the culprits.

Sextortion (61,298 reported cases): Sextortion scams involve threats to release intimate or compromising photos or videos of the victim unless a Bitcoin ransom is paid. Victims are often contacted through email, with scammers claiming to have hacked their devices or online accounts. Paying rarely guarantees safety from future extortion attempts.

Ransomware (61,018 reported cases): Ransomware is malicious software that encrypts a victim's files or locks them out of their system. The attacker then demands a Bitcoin payment in exchange for the decryption key. These attacks target individuals, corporations, and public institutions, causing data loss and financial strain. Regular backups and robust cybersecurity measures are essential for protection.

The year of the scammer

The year 2020 was a good one for Bitcoin. The world's biggest cryptocurrency was in the middle of another bull run that would eventually push the price to almost $70,000.

But the increased attention and the allure of quick wealth attracted new and inexperienced users, and it didn't take long for the sharks to start circling. There were over 35,000 reported scams in 2020, more than in any other year in Bitcoin's history.

Since then, the number of reported scams has declined dramatically. In the first six months of 2023, there were fewer than 6,000 cases. But this could be the quiet before the storm. Another bull run is widely expected around mid-2024, so expect a rise in scammers trying to trick new investors out of their Bitcoin.

How much Bitcoin did scammers get?

Scammers are stealing eye-watering amounts from Bitcoin holders and investors.

In 2018, they pilfered over $2 billion in Bitcoin. That figure jumped to over $6 billion in 2019, followed by another spike to $18 billion in 2020.

But those are measly gains compared to the value of all the looted Bitcoin in 2021. Data analyzed by CoinKickoff shows that hackers and cybercriminals made over $55 billion stealing Bitcoin.

So where does it all go, and who has it now?

Nobody really knows.

Stolen Bitcoin typically goes through a complex web of transactions to disguise its origins. Cybercriminals who steal Bitcoin often use various techniques like mixing services, tumblers, and darknet markets to launder the funds and make it challenging to trace.

Identifying the responsible parties can be extremely difficult due to the pseudonymous nature of Bitcoin. Transactions are recorded on a public ledger but linked to cryptographic addresses rather than real-world identities. Tracking the individuals behind these addresses is a formidable task, often involving cooperation between law enforcement agencies, blockchain analysis firms, and exchanges.
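
To see what "pseudonymous but public" means in practice, here's a minimal Python sketch that pulls the public summary of a Bitcoin address from blockchain.info, one of the data sources behind the CoinKickoff study. The `rawaddr` endpoint and its response fields follow blockchain.info's legacy API as commonly documented and could change; the address used is the well-known genesis-block address, chosen purely as a public example.

```python
import requests

def address_summary(address: str) -> None:
    """Print the public transaction summary for a Bitcoin address.

    Everything printed below is visible to anyone -- that's the
    'public ledger' part. What the ledger never shows is who controls
    the address: that's the 'pseudonymous' part investigators fight.
    """
    url = f"https://blockchain.info/rawaddr/{address}"
    data = requests.get(url, timeout=10).json()

    print(f"Address:        {address}")
    print(f"Transactions:   {data['n_tx']}")
    print(f"Total received: {data['total_received'] / 1e8:.8f} BTC")
    print(f"Final balance:  {data['final_balance'] / 1e8:.8f} BTC")

if __name__ == "__main__":
    # Genesis-block coinbase address: a famous, publicly known example.
    address_summary("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa")
```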

How to spot and avoid Bitcoin scams

Here are some tips to protect yourself and your Bitcoin stash:

Research and Verification: Before investing or transacting in Bitcoin, thoroughly research the platform, service, or individual you are dealing with. Verify their credentials, check for reviews, and seek recommendations from trusted sources.

Too Good to Be True: Be skeptical of offers that promise guaranteed high returns with minimal risk. If it sounds too good to be true, it probably is.

Secure Wallets: Use reputable cryptocurrency wallets with strong security features. Avoid sharing private keys or wallet recovery phrases with anyone, and store them in a safe, offline location.

Phishing Awareness: Be cautious of phishing emails, websites, or social media accounts impersonating legitimate crypto services. Always double-check URLs and verify the authenticity of the communication (a minimal automated check is sketched after this list).

Cold Storage: Consider using cold storage options like hardware wallets for long-term Bitcoin holdings. These are less vulnerable to online threats.

Trust Your Gut: If something feels off or too risky, trust your instincts and refrain from proceeding.
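
The URL double-checking advised above can be partly automated. Here's a minimal, illustrative Python sketch that flags any link whose host isn't on a personal allowlist of services you actually use; the allowlist entries are placeholders, and a real phishing defense would rely on many more signals than this.

```python
from urllib.parse import urlsplit

# Placeholder allowlist -- substitute the exchanges/wallets you trust.
TRUSTED_HOSTS = {"coinbase.com", "kraken.com", "blockchain.com"}

def looks_suspicious(url: str) -> bool:
    """Return True if the URL's host isn't a trusted domain or subdomain.

    The exact-suffix check catches classic phishing tricks such as
    'coinbase.com.evil.io' (trusted name buried in a subdomain) and
    'c0inbase.com' (lookalike spelling).
    """
    host = urlsplit(url).hostname or ""
    return not any(host == t or host.endswith("." + t) for t in TRUSTED_HOSTS)

if __name__ == "__main__":
    for u in ("https://www.coinbase.com/login",
              "https://coinbase.com.evil.io/login",
              "https://c0inbase.com/login"):
        print("SUSPICIOUS" if looks_suspicious(u) else "ok", u)
```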

Read next: Security Experts Warn Android’s Financial Apps Pose Increased Risks To Privacy By Demanding Excessive Permissions
by Irfan Ahmad via Digital Information World

Low Income Countries Pay More for Slower Internet, New Report Reveals

Access to the internet is about as important as access to any other essential utility, such as water or electricity. Despite this, internet access is not distributed evenly across the world. People living in low income countries actually pay more for internet access, and tend to get slower speeds, than their counterparts in more developed nations.

This data comes from Surfshark’s latest Digital Quality of Life index, which reveals that people in low income countries must work 4.1 times as long to afford internet access as people in higher income countries, and their internet is 3.3 times slower. For context, this means that someone living in a low income nation would have to work 12 hours to afford 42 Mbps internet, while in developed, high income countries, just 3 hours of work can buy internet reaching speeds of 120 Mbps.
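
Combining the two gaps shows how stark the per-megabit difference really is. A quick back-of-envelope calculation with the example figures above:

```python
# Work-hours needed per Mbps, using the report's example figures.
low_income  = 12 / 42   # 12 hours of work for 42 Mbps  -> ~0.286 h/Mbps
high_income = 3 / 120   #  3 hours of work for 120 Mbps ->  0.025 h/Mbps

print(f"Low-income:  {low_income:.3f} work-hours per Mbps")
print(f"High-income: {high_income:.3f} work-hours per Mbps")
print(f"Gap: {low_income / high_income:.1f}x more work per Mbps")
# Gap: 11.4x -- roughly 4x the work for ~2.9x slower internet
# in this particular example.
```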

That said, it is important to note that mobile internet appears to be somewhat more equal, according to the findings in this report. In lower income countries, 2 hours and 37 minutes of work will get you internet that can attain 32 Mbps speeds, while in richer nations, 1 hour and 41 minutes of work is enough for 96 Mbps internet.

While the divide is smaller, it can still be harmful, since it risks leaving low income nations behind. However, simply living in a high income country does not automatically grant you access to the fastest internet.

For example, South Africa is a member of the high income club, but its average internet speed of 70 Mbps is half that of other countries in its group. On the other side of the spectrum, low income countries can have surprisingly fast internet. The Philippines is a great example: despite being a low income country, the Southeast Asian nation has three times the average internet speed of other low income countries, at 119 Mbps.


Read next: AI Might Put People’s Job Security At Risk But More Positions Are Being Created To Review AI Models And Their Inputs
by Zia Muhammad via Digital Information World

Saturday, October 14, 2023

AI Might Put People’s Job Security At Risk But More Positions Are Being Created To Review AI Models And Their Inputs

The thought of people’s jobs being at risk in the advanced world of AI is a concern that has been debated for months. After all, job security is a big deal, and no one wants to be replaced by technology.

But wait, we might be getting a little ahead of ourselves, because while the threat remains, a new wave of jobs is being rolled out. These roles focus on tasks linked to reviewing AI models and the various inputs and outputs they generate.

Since November of last year, tech giants, business leaders, and even academics have expressed fears that many jobs will soon be made obsolete because AI is reigning supreme. After all, when you have technology doing a better job for free, why would you employ someone and pay them a salary?

Remember, generative AI is designed to let AI-based algorithms produce remarkably lifelike output, whether that's pictures, text, or something else. Moreover, it's trained on vast amounts of high-quality data, which is worth keeping in mind.

The result is carefully and intricately crafted output similar to what a qualified professional could produce. So the fear was justified for obvious reasons.

Analysts have predicted that close to 300 million jobs may soon be taken over by AI, spanning both office and administrative positions. Other fields under threat include support tasks, engineering, law, and even architecture, along with finance, business, and the social sciences.

But the inputs AI models receive and the outputs they produce really do require guidance and human review. And that, in turn, is creating new careers and side positions.

Now, new roles are springing up, including the chance to review AI. The latest on this front is a firm called Prolific, which connects AI developers with researchers. It is literally hiring and compensating people to review AI-generated material.

The firm pays reviewers to go through AI-generated outputs and gauge whether their quality is up to the mark. And we're talking nearly $12 per hour, with minimum payments fixed at an hourly rate of $8.

Moreover, human reviewers receive guidance from Prolific's clients, which include some big names such as Oxford, Google, and UCL. These clients help reviewers along the way, teaching them about the various kinds of inaccurate and harmful material they might come across.

As expected, reviewers are required to consent to taking part in such research. One such worker told the media outlet CNBC that he had used Prolific on numerous occasions to assess the standard of work that AI models were producing.

Speaking anonymously, he said there were multiple occasions where he stepped in because AI models went haywire and generated inaccuracies that needed correcting to keep replies from becoming unsavory.

Similarly, he spoke about occasions where the models produced genuinely dangerous output, such as prompting users to buy and use drugs. How's that for a reality check?


Read next: AI Pioneer Says AI Will Become a Threat in 5 Years
by Dr. Hura Anwar via Digital Information World

Friday, October 13, 2023

EU's Content Control: Navigating Bias in Israel-Gaza Disinformation Battle

Europe is exerting more pressure than the U.S. on tech giants, including Meta, X (formerly Twitter), and TikTok, over disinformation and violent content related to the Israel-Gaza conflict.

European Commissioner Thierry Breton sent stern warnings to these platforms, emphasizing the potential impact on their business should they fail to comply with regulations under the Digital Services Act.

This European approach differs from the U.S., where the First Amendment shields a wide range of speech and restricts government intervention. Efforts by the U.S. government to encourage content moderation have faced legal challenges for potentially infringing on free speech rights.

In the U.S., there is no legal definition for hate speech or disinformation, making certain provisions of the Digital Services Act incompatible. The European stance allows regulators to pressure platforms more aggressively, signaling their close scrutiny of content moderation.

Under the DSA, large online platforms must establish robust mechanisms to remove hate speech and disinformation while balancing free expression. Non-compliance can lead to fines of up to 6% of global annual revenues.

In the U.S., a government threat of penalties is risky, and officials must carefully distinguish requests from enforcement actions. The contrast in approaches is evident in letters from New York AG Letitia James, which request information without threats of penalties.

The impact of these European rules and warnings on global content moderation remains uncertain, but social media companies may choose to apply them selectively. Individual users should have control over their content exposure, allowing them to make informed decisions about complex issues like the Israel-Gaza conflict, instead of relying on any country or lawmaker with biased opinions.

Image: Freepik/macrovector

Read next: Bard's Tale: Google Employees Question AI-based Chatbot's Magic
by Irfan Ahmad via Digital Information World

Shopping Apps Surge in Popularity, with Users Spending 50 Billion Hours in 2023

Buckle up, shoppers and marketers – it's a retail revolution! 🛒 In 2023, the smartphone shopping spree continues to dazzle, with consumers slated to spend a whopping 50 billion hours in Android shopping apps. That figure represents a staggering 42% increase since 2020, signifying a seismic shift in consumer behavior.

The latest findings from Data.ai unveil that this digital shopping extravaganza is not a mere post-lockdown fling. What fuels this relentless shopping affair? It's a mobile-centric mindset! Retailers, supermarkets, and pharmacies, to name a few, have adapted seamlessly to this on-the-go shopping era.

The appetite for buy-now-pay-later (BNPL) apps also skyrocketed, though usage fluctuated in the first half of 2023. But what's the secret sauce behind a successful shopping app, you ask? Well, they're not just any run-of-the-mill apps; they're packed with features galore. Think third-party payments, tantalizing shopping content, daily or hourly deals, BNPL options, and rewards programs.

Let’s talk engagement. eCommerce giants like SHEIN, AliExpress, and Temu turned the engagement dial up.

These apps are not just attracting users; they're keeping them hooked for hours, sealing their reign in the mobile shopping arena. It's like an epic retail showdown out there! 💪

In a nutshell: Shopping apps are post-pandemic superheroes, Android users are set to splurge 50 billion hours in 2023, the top 10 apps rule with their flashy features, and Temu is the superstar of downloads.

Read next: Experts Say AI Industry Will Slow Down in 2024
by Irfan Ahmad via Digital Information World

New York’s Latest Bill Will Force Kids To Obtain Parental Permission When Using Apps With Algorithmic Feeds

A new law is coming forward in New York that’s designed to help keep kids protected at all times when using popular social media apps including YouTube, TikTok, and Instagram.

The bill would require children to obtain parental permission if they wish to continue using these platforms, which are built on algorithmic feeds.

The bill has already won support from the state’s governor as well as the state attorney general. It’s being dubbed SAFE so far, short for Stop Addictive Feeds Exploitation.

The news was first reported by the tech media outlet Engadget. Per the report, such algorithm-driven apps are a walking red flag for young kids: they literally hunt them down and single them out for exploitation of the worst kind.

Moreover, the attorney general's statement called for support on this front, and for adults to ensure kids remain protected and are not victimized by a modern world of technology that is preying on youngsters as we speak.

This is not the first time such concerns about children have been raised. Many of today’s lawmakers have had enough, and they keep pointing to research linking such apps to poor mental health in kids, as well as to poor sleep quality, especially with excessive use.

Referring to the matter as a mental health crisis in which young minds are being taken advantage of, the state’s attorney general said it’s time the right steps were taken. As it is, New Yorkers report alarming levels of both anxiety and depression, and many social media firms use features that children find addictive, keeping them on these platforms for longer.

Hence, the bill appears to be arriving at the right time, though others argue it was a long time coming and might have been better introduced earlier. Whatever the case, it’s a relief, and it will tackle the major risks of social media targeting kids while protecting their privacy.

The bill would also force social media platforms to include additional parental controls: disabling alerts for notifications generated between midnight and the early morning hours, setting limits on screen time, and barring access entirely during those overnight hours so kids can do what they’re supposed to, which is rest.

The new law is sponsored by many of the state’s senators, who vow to introduce it as early as the start of next year.

But we are not surprised to see leading social media firms like Meta and even TikTok speaking out against it. They argue that it limits freedom of speech and robs kids of the chance to become part of the large communities found on these online forums.

However, New York isn’t the first state to roll out laws requiring parental consent for social media users under the age of 18.

Utah did the same at the start of this year, requiring anyone in this age bracket to get consent from a parent or guardian to create a social media profile.

Photo: DIW / Generated with AI
Read next: Meta Unveils Much-Needed Edit Button and Voice Messages for Threads
by Dr. Hura Anwar via Digital Information World

Microsoft Enhances Bing AI Services and Moderation System

Microsoft is making major efforts to improve its Bing AI services and provide a more secure and trustworthy online experience. The tech behemoth recently launched a new bug bounty program for its Bing AI services while also trying to improve the moderation mechanism for Bing Image Creator.

Bing AI Services Bug Bounty Program

Microsoft has announced the establishment of a bug bounty program to find and resolve any vulnerabilities in its Bing AI services and applications. This effort compensates developers and security experts for uncovering vulnerabilities and issues, thereby fostering a safer environment for users.

The specific AI services covered by this program encompass a wide range of Microsoft's offerings, including:
  • AI-powered Bing experiences on bing.com in various browsers, including Bing Chat, Bing Chat for Enterprise, and Bing Image Creator.
  • AI-powered Bing integration in Microsoft Edge for Windows and in the Microsoft Start Application for iOS and Android.
  • AI-powered Bing integration in the Skype Mobile Application for iOS and Android.
These advancements build on considerable investment and insight, which have led to several key changes, including an upgraded vulnerability severity classification specific to AI services and the establishment of an AI security research competition.

Microsoft has detailed the conditions for developers who want to participate in the Bing AI bug bounty program. Developers must report problems that have not previously been found or reported to Microsoft in order to be eligible for awards. In terms of severity, the reported bug must be classified as Critical or Important. Furthermore, developers or security researchers must offer clear instructions for reproducing the flaw.

Get ready to be rewarded handsomely for your bug-hunting skills! The bug bounty program offers a generous range of payouts, starting from $2,000 and going all the way up to $15,000. The amount you receive will depend on the seriousness of the bug you discover and the quality of your report. And if you come across an exceptional vulnerability and submit an outstanding write-up, Microsoft may grant an even higher reward. It's time to put your bug-finding prowess to work and reap the benefits!

In a blog post, Microsoft emphasized the importance of engaging with security researchers via bug bounty programs, underlining their critical role in the company's holistic approach to safeguarding customers from security threats. The company says it is thrilled to broaden the scope of its bug bounty initiatives to include AI-powered Bing experiences, highlighting its dedication to security and user safety.

Improvements to Bing Image Creator Moderation System

Microsoft's Bing Image Creator, which uses AI to generate images, has been criticized for its overly strict moderation system, which has produced a high number of false positives.

Microsoft is actively working to improve the moderation system for Bing Image Creator in response to user feedback and reports of excessive moderation. The main goal is to improve the system's accuracy and reduce the number of false positives.

Users had noted flaws in the moderation system, such as frequent image blocking, which hampered their ability to use the tool. Microsoft's Mikhail Parakhin acknowledged the problem and informed users that the root cause had been identified and was being fixed.

While certain difficulties have been resolved, Microsoft is still working through other persistent issues and fine-tuning the moderation system. Users have reported unwarranted content warnings and incorrect image-content detection, prompting Microsoft to take rapid corrective action.

In conclusion, the introduction of the bug bounty program and the continuous improvements to the Bing Image Creator moderation system reflect Microsoft's commitment to improving the security and user experience of its Bing AI services, and to making the internet a safer, more reliable place for its users.


Read next: Google Search Generative Experience Gets Artsy and Literate
by Rubah Usman via Digital Information World