Thursday, July 4, 2024

Music Copyright Claims On YouTube: App Offers Creators New Options And Here’s What To Expect

It’s not uncommon for creators to receive copyright claims for music they’ve used in their videos. Until a claim is addressed or removed, creators cannot monetize the affected video.

So what editing options do creators have? YouTube says there are four: Trim, Mute, Song Replacement, and Erase Song, all available through YouTube Studio.

Up until now, creators could use the Erase Song feature in beta to silence copyrighted audio in their content. However, the tool’s performance was not as accurate as many would have liked.

Thankfully, YouTube is making major improvements to the feature, crediting an enhanced AI-powered algorithm that can more accurately identify and remove copyright-claimed audio.

When the Erase Song feature is clicked, creators will see two options on screen: erasing the song or muting all sound.


All you need to do is open the Summary page and look to the right-hand side. Depending on the claim’s details, there are several options to select from, including erasing the sound if the creator is eligible. Once that’s selected, further options appear, covering things like keeping other audio intact, dialogue, and background sounds.

If the creator opts to have all sound removed, then all audio between the specified timestamps is erased. YouTube confirmed the feature will roll out to YouTube Studio on desktop and mobile in the coming weeks.

In situations where Erase Song fails to remove a claim from a clip, creators can explore the other editing options in YouTube Studio, such as Trim and Mute.

It seems the platform is on a mission to please creators, and this might be a great way to start because, let’s face it, no one likes to see a copyright claim that blocks monetization.

Read next: Meta’s Research Team Is Working On Four New AI Models To Help Developers Create New Apps
by Dr. Hura Anwar via Digital Information World

Meta’s Research Team Is Working On Four New AI Models To Help Developers Create New Apps

Tech giant Meta is developing four new AI models that will assist creators and developers with building new apps.

The models will be available for public use, and Meta has published details outlining them and how they might be adopted in the future.

The news comes amid a significant rise in demand for AI, not to mention the AI apps built on such models and then further enhanced with new capabilities. Meta’s latest effort includes four models: two new Chameleon variants, plus JASCO and AudioSeal.

JASCO will accept various kinds of audio-related inputs and help with sound generation. Moreover, the team added that users will be able to tweak elements including bass, acoustics, and other melodic components to shape a beat. The model also accepts text inputs that can be used to adjust the flavor of the music.


Examples include asking the model to produce a tune in the style of blues, jazz, or rock, while also ensuring listeners know the tune was made with AI, since a watermark would be present. This way, many would understand whether the music comes with a commercial license.

Models like AudioSeal go further, identifying which segments of a tune were generated by AI, with watermarks placed precisely at those locations.

Meanwhile, both Chameleon models can convert text to visuals and could be rolled out with limited features. The two variants, called 7B and 34B, are designed to understand both images and text. Because of this, they can also carry out the reverse process and produce captions for images.

Read next:

• Can AI Tools Like ChatGPT Experience Feelings And Capture Memories? This New Survey Has The Answer

• Google Offers Simple Tips On How Small Sites Can Offer Better Competition and Outrank Big Brands
by Dr. Hura Anwar via Digital Information World

Can AI Tools Like ChatGPT Experience Feelings And Capture Memories? This New Survey Has The Answer

A new survey is shedding light on AI tools like ChatGPT and what people’s opinions about them might be.

The survey asked people about their thoughts on the leading AI tools and whether they felt large language models actually had feelings or could preserve memories. Surprisingly, around two-thirds of respondents said yes.

Thanks to the research carried out by the University of Waterloo, we know many feel AI engages in some form of conscious behavior and can therefore have subjective experiences like forming memories or having emotions.

LLMs frequently display human-like conversational styles in their output, and such capabilities fuel debates about whether AI really displays consciousness.

As per the experts, when people hold such strong beliefs about AI tools, it ultimately affects how they work and interact with them. Moreover, this can give rise to strong social bonds and greater trust.

Meanwhile, too much trust is also not a good thing, as it can mean strong emotional dependence, limited human interaction, and over-reliance on AI for pivotal decisions.

Many AI experts have denied time and time again that AI might be conscious, but as this new study shows, much of the general public believes otherwise.

That belief is widespread, which is why the authors set out to measure it. The results are right in front of us: with close to 300 individuals from the US taking part, and most of them agreeing that ChatGPT is conscious, we might need to consider that for the future of AI.

Respondents were asked to comment on the tools’ mental states and their capacity for plans, reasoning, and emotions, as well as how frequently they engaged with the tool.

Furthermore, the study revealed that the more people use ChatGPT, the more likely they are to attribute feelings to it, which is significant considering how frequently AI is used in our daily lives.

The authors also explained that the results show how powerful language is: a simple chat alone can lead many to assume that an agent appearing and working very differently from them might also have a mind of its own.

Beyond emotions, consciousness is tied to intelligence and moral responsibility: the ability to form plans, act deliberately, and exercise self-control, with implications for ethical and legal matters.

This opens the door to more scientific experiments and studies to better understand AI models and how people around the globe perceive them over time. After all, social bonding with LLMs and AI tools is not something that has been widely discussed in the past.


Read next: Study Finds AI Capable of Working in Healthcare as It Gives Better Answers than Most Physicians
by Dr. Hura Anwar via Digital Information World

Creative Community Criticizes Apple Intelligence For Lack Of Transparency

It looks like some of Apple’s most loyal customers aren’t too happy with its policy regarding generative AI.

This year has marked a major milestone for the Cupertino firm, which unveiled its collaboration with the makers of ChatGPT and rolled out Apple Intelligence. Yes, the iPhone maker has officially entered the heated AI race after holding back for months.

The company confirmed that devices will run Apple Intelligence this year, the name reserved for the tech giant’s version of generative AI. Among the many exciting features is the ability to produce pictures from text prompts.

While the news might excite many Apple users, it’s leaving a bad taste for members of the creative community, who are surprised at how lackluster the policy around these AI models is. Among their many concerns is the lack of transparency about how the models were trained and what data was used.

Many feel this is something every Apple user deserves to know, and its absence is disappointing for creative artists whose livelihoods depend on original content.

For years, these members of the community have looked up to Apple as a pioneer in tech as well as the liberal arts. Now, those same people are expressing frustration at how silent Apple has been about how it obtains data for its AI models.

Generative AI’s success depends heavily on the training data used to bring it to launch. Many firms continue to ingest data available online without seeking consent from, or compensating, the original creators.

The possibility that Apple has done what the others did is sad for some people to see, especially those who have had faith in the iPhone maker for decades.

To give a better sense of the scale involved, datasets like LAION-5B, containing close to six billion pictures scraped from the web, have been used to train AI models, and few are calling such practices out.

This is why the creative community, including musicians, authors, and those who work with art daily, has united to take a stand against anyone consuming their work for free and generating profits from it.

We saw top music labels such as Sony and Universal sue AI music startups for copyright infringement. Today, we’re seeing tech giants strike licensing deals with online content producers such as news publishers to stay clear of legal trouble.

Seeing Apple follow the same path bothers a community that expected much more from the Cupertino company, while others argue they gave the iPhone maker the benefit of the doubt when they shouldn’t have.

No training sources have been revealed for Apple Intelligence so far, nor has Apple agreed to do so. Its company blog post says it used the same means as others in the industry: scraping public data online through AppleBot. Are you shocked?


Image: DIW-Aigen

Read next: Threads Celebrates First Anniversary With Milestone 175 Million Monthly Active Users
by Dr. Hura Anwar via Digital Information World

Threads Celebrates First Anniversary With Milestone 175 Million Monthly Active Users

Meta’s Threads just turned one and the app is celebrating the news by sharing another milestone of its journey in the world of social media.

Launched in July of last year, the app had people signing up in droves as the platform seemed like the next best thing after Instagram. It was less about images and more about text, and many went as far as to say the launch could pose serious competition to archrival X, formerly known as Twitter.

As time passed, the app had its ups and downs, but it’s certainly come a long way despite plenty of hurdles. Today, it has a massive monthly user base of over 175 million users, and that’s a milestone worth celebrating.

Mark Zuckerberg posted on his Threads account that the milestone was just the boost the app needed. Changes might be coming slowly, but considering it was bare bones at the start and faced fierce competition from rivals, it’s certainly something to be proud of.

In the beginning, Meta’s Instagram spin-off rolled out across both iOS and Android devices, and the desktop version followed soon after. We do see it evolving to include more images than before, but it’s worth noting that 63% of all posts published on the app contain only text.

The app has confirmed that more than 50 million Tags have been created on the platform, and among those, the most popular include BookThreads, PhotographyThreads, and those linked to the gym.

No matter how much Facebook’s parent firm denies that Threads was initially meant to compete with Twitter, now X, the facts can no longer be ignored. X might have a massive user base and be doing well under Elon Musk’s leadership, but the competition is there, and who knows what the future holds.

Do you recall the faceoff X had with Threads after its launch? X accused Meta of hiring former X employees to run Threads, and therefore of obtaining trade secrets. X felt threatened by the growing presence of Threads, and a lawsuit came into being.

Musk continues to boast about X’s success, claiming a user base of more than 600 million active users each month. And while Threads is nowhere near half that mark, it seems more focused on its own growth at this moment.


Read next: Meta’s Threads Platform Just Turned One But How Far Has The Instagram Spin-Off Come?
by Dr. Hura Anwar via Digital Information World

Wednesday, July 3, 2024

How to Implement Cloud Threat Hunting in Your Organization

Businesses are now going paperless and digital, storing their valuable data in the cloud. However, while cloud storage allows everyone in the company to access data at any time, it also brings certain cybersecurity challenges.

Source: Unsplash / Adi Goldstein

While cloud environments are scalable, flexible, and efficient, traditional security approaches often fall short, creating a need for advanced cybersecurity measures designed specifically for the cloud. One important technique in this regard is cloud threat hunting – a systematic and continuous search of malicious activity on the cloud and its subsequent elimination.

Cyber attackers are becoming increasingly sophisticated, employing advanced techniques including AI, zero-day exploits, and advanced persistent threats (APTs). Insider threats caused by employees with access to sensitive data also pose a challenge. Cloud threat hunting can help identify anomalies early on and reduce the dwell time of any malware. Here’s how you can do cloud threat hunting:

1. Developing a Strategy

To implement an effective cloud threat hunting strategy, you must have a well-structured approach that includes creating a robust threat-hunting framework and establishing meaningful metrics to measure success.

Two strategies are commonly used here: hypothesis-driven methodology and data-driven methodology. In the former, you start with a specific hypothesis about potential threats or vulnerabilities, while in the latter, you start with large volumes of data and use advanced analytics to identify anomalies and potential threats.

The hypothesis-driven methodology requires only a limited set of data relevant to the hypothesis, while the data-driven approach requires large amounts of raw data.

During the planning phase, you must also set up your KPIs. One KPI can be detection time, or the average time taken to detect a threat after it has entered the environment. Another KPI can be response time, or the average time taken to respond to and mitigate a threat after it is detected.
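As a minimal sketch, the two KPIs above can be computed from incident records. The field names (`entered`, `detected`, `mitigated`) are hypothetical placeholders for whatever your incident-tracking system records:

```python
from datetime import datetime

# Hypothetical incident records: when a threat entered the environment,
# when it was detected, and when it was mitigated.
incidents = [
    {"entered": datetime(2024, 6, 1, 8, 0),
     "detected": datetime(2024, 6, 1, 14, 0),
     "mitigated": datetime(2024, 6, 1, 16, 0)},
    {"entered": datetime(2024, 6, 3, 9, 0),
     "detected": datetime(2024, 6, 3, 10, 30),
     "mitigated": datetime(2024, 6, 3, 12, 0)},
]

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

# Detection time KPI: entry -> detection. Response time KPI: detection -> mitigation.
detection_time = mean_hours([i["detected"] - i["entered"] for i in incidents])
response_time = mean_hours([i["mitigated"] - i["detected"] for i in incidents])

print(f"Mean detection time: {detection_time:.2f} h")  # 3.75 h
print(f"Mean response time: {response_time:.2f} h")    # 1.75 h
```

Tracking these two numbers over successive hunts shows whether your program is actually shortening attacker dwell time.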

2. Leveraging Tools and Technologies

You can use tools provided by cloud service providers or third-party tools. For example, AWS provides Amazon GuardDuty, which continuously monitors for unwanted and unauthorized activity to protect accounts. It uses anomaly detection, machine learning, and integrated threat intelligence to pinpoint threats. Similarly, Google Cloud offers Security Command Center (SCC) for cloud threat-hunting purposes.

Third-party tools include EDR and SIEM. EDR, or Endpoint Detection and Response, tools monitor endpoint activities and provide detailed visibility into potential threats. Security Information and Event Management, or SIEM, tools aggregate and analyze log data from various sources to provide real-time event monitoring, threat detection, and incident response.
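To illustrate the SIEM idea at its simplest, the sketch below aggregates log events from multiple sources and raises an alert when one IP accumulates too many failed logins across them. The event fields and threshold are invented for illustration; a real SIEM applies far richer correlation rules:

```python
from collections import Counter

# Hypothetical log events already collected from several sources (SIEM-style).
events = [
    {"source": "vpn",    "ip": "10.0.0.5", "action": "login_failed"},
    {"source": "webapp", "ip": "10.0.0.5", "action": "login_failed"},
    {"source": "vpn",    "ip": "10.0.0.5", "action": "login_failed"},
    {"source": "webapp", "ip": "10.0.0.7", "action": "login_ok"},
]

FAILED_THRESHOLD = 3  # alert once an IP fails this many times across all sources

# Correlate across sources: count failures per IP regardless of origin.
failures = Counter(e["ip"] for e in events if e["action"] == "login_failed")
alerts = [ip for ip, count in failures.items() if count >= FAILED_THRESHOLD]

print(alerts)  # ['10.0.0.5']
```

The point of the aggregation step is that no single source sees enough failures to alarm on its own; only the combined view crosses the threshold.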

3. Conducting Threat Hunts

Regular threat hunts are essential for maintaining a strong security posture in cloud environments. This process involves developing hypotheses based on threat intelligence and historical data, performing active hunts for indicators of compromise (IOCs), and analyzing and correlating data.

When conducting hunts, you look for IOCs which are evidence of a potential security breach. They include unusual traffic or suspicious files. Similarly, you look for anomalous behavior such as unexpected data transfers, particularly to malicious domains, irregular login times, or unusual patterns of resource usage.
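A hunt for the IOCs just described can be sketched as a set of simple checks over activity records. The domain list, business-hours window, and record fields are all assumptions for the example:

```python
# Hypothetical hunt: flag transfers to known-bad domains and logins
# outside business hours as indicators of compromise (IOCs).
KNOWN_BAD_DOMAINS = {"evil.example.net"}   # assumed threat-intel feed
BUSINESS_HOURS = range(8, 19)              # 08:00-18:59, assumed normal window

records = [
    {"user": "alice", "hour": 3,  "dest": "files.example.com", "bytes": 120},
    {"user": "bob",   "hour": 14, "dest": "evil.example.net",  "bytes": 900_000},
]

def iocs(record):
    """Return the list of IOC labels that a single activity record triggers."""
    hits = []
    if record["dest"] in KNOWN_BAD_DOMAINS:
        hits.append("transfer to malicious domain")
    if record["hour"] not in BUSINESS_HOURS:
        hits.append("irregular login time")
    return hits

findings = [(r["user"], hit) for r in records for hit in iocs(r)]
for user, hit in findings:
    print(f"{user}: {hit}")
```

Each finding is a starting point for deeper investigation, not a verdict; the real work is correlating these flags with other telemetry.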

Endnote

As cloud solutions become more common in the digital landscape, the number of cyber attacks also grows. Cloud threat hunting is a cloud cybersecurity approach in which you systematically and continuously scan your environment for threats. You can use either a hypothesis-driven or a data-driven methodology, and leverage either native or specialist third-party tools. Through constant cloud threat hunting, you can keep your cloud storage safe.


by Web Desk via Digital Information World

AI Fraud: How Deep Fakes Cost Companies Billions!

Given the increasing popularity of AI, deepfake incidents are on the rise, and the biggest threat is to the banking and financial sector. According to Deloitte, $12.3 billion in losses was attributed to deepfakes in 2023, and losses are expected to grow to $40 billion by the end of 2027. Many AI apps and websites now give attackers platforms to clone voices, impersonate people, and create fake documents.

Image: Deloitte

According to Pindrop’s Voice Intelligence and Security Report 2024, deepfake fraud aimed at contact centers amounts to $5 billion in losses annually. Bloomberg also recently reported on a dark web network that sells scamming software to attackers for anywhere from $20 to thousands of dollars. For a picture of how quickly AI fraud is growing around the world, Sumsub’s Identity Fraud Report 2023 covers all of that.


Image: Statista

Adversarial AI has also created a new wave of deepfake attacks that use fake identities to target people, and many enterprises have no strategy to protect themselves from adversarial deepfakes of key executives. According to the 2024 State of Cybersecurity Report by Ivanti, 74% of enterprises are already experiencing AI-driven attacks. 89% of enterprises say AI attacks have already started, while 60% are not prepared to defend against them. Because of generative AI, the attacks expected to become more dangerous are phishing (45%), software vulnerabilities (38%), ransomware attacks (37%), API-related vulnerabilities (34%), and DDoS attacks (31%).


Many CEOs of cybersecurity enterprises have admitted that these AI attacks now look more real and legitimate. George Kurtz, CEO of CrowdStrike, a company well known for its expertise in AI and machine learning, says that as AI gets more advanced, attackers are taking full advantage of it, and that deepfake technology is getting remarkably good. The company has also started investigating AI deepfakes and the impact they could have in the coming years.

Read next:

• Meta, Amazon, Apple Most Impersonated in Phishing Scams: Study

• How to Create Strong Passwords and Keep Hackers at Bay
by Arooj Ahmed via Digital Information World