The biggest concern people tend to have whenever AI comes up is that it might make their jobs obsolete. Generative AI can write books and screenplays, offer weather predictions and perform various other tasks that once commanded a salary. Even so, the chief of a Zurich-based staffing agency thinks AI will actually create more jobs than it eliminates.
Denis Machuel, CEO of Adecco, likened the rise of AI to the arrival of the internet: it may cause significant short-term disruption that eliminates certain forms of employment, but in the long run it will replace those jobs with new roles that require the use of AI.
That said, white-collar jobs will be affected more than blue-collar ones. Any role that centers on computing and processing information is likely to be disrupted, so legal and financial roles might be in jeopardy.
However, this doesn't mean that all lawyers will be replaced by AI. Problem solving and critical thinking are two things AI hasn't mastered yet, at least not in the way humans manage intuitively. Complex legal matters will still require humans to make the right decisions, even if AI handles the more routine aspects of the job.
Adecco is playing its part by partnering with Microsoft to create a platform that helps people see what career paths they can pursue in the age of AI. Many workers already have transferable skills, and new AI-related skills can be learned. This matters because it could open new avenues for people whose careers have been upended by the technology.
Photo: Digital Information World - AIGen
Read next: The UN is Afraid of Killer Robots, Here’s Why
by Zia Muhammad via Digital Information World
"Mr Branding" is a blog based on RSS for everything related to website branding and website design, it collects its posts from many sites in order to facilitate the updating to the latest technology.
To suggest any source, please contact me: Taha.baba@consultant.com
Friday, January 26, 2024
Thursday, January 25, 2024
AI Incidents Increased by 30% Year Over Year
Since AI has been advancing at such a rapid pace, it stands to reason that negative AI incidents would also be on the rise. 2023 turned out to be a record-breaking year, with 121 incidents recorded according to a recent report from Surfshark. That represents a 30% increase over 2022 and comprises a full 20% of all AI incidents recorded since 2010.
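As a quick back-of-envelope check (our own arithmetic, not counts stated in the report), those percentages imply roughly the following figures:

```python
# Rough counts implied by the percentages in Surfshark's report.
incidents_2023 = 121

# A 30% year-over-year increase implies roughly this many incidents in 2022.
incidents_2022 = incidents_2023 / 1.30        # ~93

# If 2023 makes up ~20% of all incidents since 2010, the running total is:
total_since_2010 = incidents_2023 / 0.20      # ~605

print(f"Implied 2022 incidents: {incidents_2022:.0f}")       # 93
print(f"Implied total since 2010: {total_since_2010:.0f}")   # 605
```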
Notably, OpenAI was involved in over 25% of the incidents counted. Microsoft came in second with 17 incidents, followed by Google with 10 and Meta with 5.
Quite a few of these incidents involved deepfakes and various forms of impersonation, with figures like Pope Francis, Tom Hanks and others becoming the subject of AI-generated images. Politicians were also popular targets, with everyone from Donald Trump to Barack Obama on the list. 2024 is an election year, which could push the number of incidents even higher.
It bears mentioning that these incidents actually became somewhat less prevalent in the latter half of the year. The first quarter of 2023 saw 54 incidents, followed by 33 in Q2, but in the third and fourth quarters this plummeted to 14 and 22 respectively.
It will be interesting to see where things go from here. The downward trend might suggest that perpetrators are losing interest, but the election year could still produce a record-breaking spike. Whatever the case, AI will only grow more advanced, making these incidents harder to detect or prevent and far more convincing than they are today.
Read next: IEA Projects Data Center Electricity Needs to Exceed 1,000 TWh by 2026, Raising Environmental Concerns
by Zia Muhammad via Digital Information World
Warrant Necessary for Law Enforcement Officials, Says Amazon Ring
Amazon Ring has updated its policy, now making it mandatory for police and other officials to obtain a warrant to access footage from its doorbell cameras. This change was recently announced in a blog post by the company.
Previously, through the "request for assistance" (RFA) feature, police and public safety agencies could directly request video footage from Ring users, bypassing the need for a warrant. However, this practice has been discontinued. While these agencies can continue to utilize the Neighbors app for sharing safety tips and community information, they can no longer request videos through the app.
The decision to forgo this practice came after Amazon faced severe backlash for sharing private security footage without proper consent. In response, the company had earlier modified its policy so that police requests for videos were made publicly on the app. The latest change, however, mandates that law enforcement can only access Ring footage through a warrant.
Policy analysts have welcomed the step as a positive one. However, experts emphasize that Ring still needs to improve its security features: they suggest that end-to-end encryption should be enabled by default, and that the company should disable audio collection by default, since it has been shown to capture sound from considerable distances.
Amazon's approach to privacy has long been a subject of concern. In a notable incident last year, Amazon agreed to an almost $6 million settlement with the FTC, stemming from claims that the company failed to properly inform customers about how their data could be accessed. This agreement came in the wake of Amazon's own acknowledgment that it had provided police with video footage in specific "emergency" scenarios, doing so without the consent of the users or a warrant.
Photo: Digital Information World - AIgen
Read next: Artificial Intelligence Can Exacerbate Ransomware Attacks, Warns UK's National Cyber Security Centre
by Saima Jiwani via Digital Information World
Wednesday, January 24, 2024
Artificial Intelligence Can Exacerbate Ransomware Attacks, Warns UK's National Cyber Security Centre
UK-based organizations and businesses have long been prominent victims of cyber threats, particularly ransomware. Britain's cyber security agency has investigated the role AI plays and predicts that the number of these attacks will only increase with time, since AI gives hackers ample new opportunities to breach sensitive data.
The National Cyber Security Centre released a report detailing its findings. According to the agency, AI lowers the barrier to entry for hackers who are new to the game, letting them break into systems and carry out malicious activity without getting caught. With AI available around the clock, targeting victims becomes far easier.
The NCSC expects the next two years to bring a significant rise in global ransomware incidents. Criminals have already built their own generative AI tools, referred to as criminal "GenAI," and are set to offer them as a service to anyone who can afford it, making it even easier for a layman to break into office systems.
Lindy Cameron, chief executive of the NCSC, urges companies to keep pace with modern cyber security tools and emphasizes the importance of using AI productively to manage cyber threat risks.
James Babbage, Director General for Threats at the National Crime Agency, affirms the report's findings: criminals will keep exploiting AI for their own benefit, and businesses must scale up their defenses to deal with it. AI increases the speed and capability of existing cyberattack schemes, and it offers an easy entry point for cyber criminals regardless of their expertise or experience. Babbage also warns that fraud and child sexual abuse will be affected as the technology advances.
The British Government is strategically working on its cyber security plan. As of the latest reports, £2.6 billion ($3.3 billion) has been invested to protect the country from malicious cyberattacks.
Photo: Digital Information World - AIgen
Read next: 6 In 10 SEOs Don't Think That Google's SGE Will Have a Good Impact
by Mahrukh Shahid via Digital Information World
6 In 10 SEOs Don't Think That Google's SGE Will Have a Good Impact
Google has been hard at work ensuring that its search engine maintains its dominance in the industry. A major part of that effort over the last year or so has been incorporating AI wherever possible, culminating in the Google Search Generative Experience, or SGE for short.
According to Google, SGE's main benefit is that it enhances the search engine's ability to provide information: it uses AI to generate a snapshot of the relevant information for a given query, and a handy "ask a follow up" button lets users dive deeper into the topic.
That said, SEOs don't seem to think SGE will be good for the way they do business. A poll posted on the SEO FOMO forums revealed that 61% of SEO professionals are worried about how it might affect the industry going forward.
27% expect SGE's effects to be largely positive, but the majority believe it will be harmful in the long run in one way or another. When a similar poll was conducted on X, 59.1% (around 6 in 10) agreed that it was concerning, suggesting the results weren't just a one-off.
It remains to be seen whether SGE will have a positive impact on Google Search. It might simply keep traffic on the SERP rather than sending it to sites, an outcome Google may well prefer given the profit it can generate. SGE is still in testing, but Google might roll it out sooner rather than later.
Image: Digital Information World - AIgen
Read next: Microsoft’s Bing And Edge Browsers Could Avoid Being Regulated Under The Upcoming DMA
by Zia Muhammad via Digital Information World
Tuesday, January 23, 2024
AI Might Not Steal That Many Jobs According to This MIT Study
The general assumption surrounding AI is that it could take an inordinate number of jobs away from human beings, but is there actually any truth to this sentiment? A team at MIT sought to answer that question, and their research suggests AI might not be the job killer so many people fear.
The study was conducted at MIT's Computer Science and Artificial Intelligence Laboratory, or CSAIL for short, and it pushes back on many of the assertions made so far. Goldman Sachs, for example, has estimated that as many as 25% of jobs could be taken over by AI-related automation within just a few years, while McKinsey estimates that 50% of all work will be done by AI by 2055.
A poll conducted by UPenn, Princeton, and NYU suggested that 80% of jobs could be affected by ChatGPT, which goes to show how pervasive this sentiment truly is. Even so, according to the MIT report, it might not actually be financially viable to have AI do these jobs.
The research suggests that AI can indeed automate certain tasks, but that doesn't mean it can replace the jobs built around those tasks. For example, an average of 6% of a baker's time is devoted to quality control, so a bakery paying 5 bakers $48,000 a year each could save around $14,000 annually by having AI handle that checking.
"We find that only 23% of worker compensation 'exposed' to AI computer vision would be cost-effective for firms to automate because of the large upfront costs of AI systems," the study highlights.
However, the system itself would cost upwards of $165,000 a year in maintenance and upkeep, which means simply having humans continue doing their jobs is the more financially sensible option. Just because AI can perform a task does not mean it will be cheaper, and businesses will look at costs rather than blindly replacing humans. It is more likely that workers will incorporate AI into their workflows, boosting productivity across the board.
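To make the arithmetic concrete, here is a minimal sketch of the cost-benefit comparison described above. The figures come from the article; the break-even framing is our own:

```python
# Cost-benefit sketch of the bakery example from the MIT/CSAIL report.
NUM_BAKERS = 5
SALARY = 48_000            # annual salary per baker, USD
QC_SHARE = 0.06            # share of each baker's time spent on quality control
AI_SYSTEM_COST = 165_000   # annual cost of the computer-vision system, per the article

# Wages currently paid for the quality-control portion of the work.
savings = NUM_BAKERS * SALARY * QC_SHARE   # = $14,400, ~ the article's $14,000

if savings > AI_SYSTEM_COST:
    print(f"Automate: saves ${savings - AI_SYSTEM_COST:,.0f} per year")
else:
    print(f"Keep humans: automation loses ${AI_SYSTEM_COST - savings:,.0f} per year")
```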
Read next: How Tech Professionals Can Prepare for the Future of IT
by Zia Muhammad via Digital Information World
The Age of Artificial Intelligence: What Modern Tech Means for Journalism
Life can be pretty scary for creators right now. The rise of AI language models like ChatGPT that can produce somewhat convincing pieces of writing, as well as the growing popularity of AI art, have all seen creatives re-evaluate their careers. After all, why would someone pay through the nose for a carefully crafted, curated piece of work, when the sophistication of this emerging technology is progressing by leaps and bounds with no sign of stopping or slowing down?
Photo: Digital Information World - AIgen
One of the occupations most squarely in AI's line of fire is journalism. The vocation of delivering news and current events to the public seems perfectly suited to the efficiency and ease of use of AI. After all, where once you may have had to earn an on-campus or online journalism degree such as a master's, now we can simply build an AI program, feed it enough content that it learns the tone and structure we want, and then feed it the data we want it to write about. Boom, journalism.
Right?
How AI Works
Artificial intelligence has been around for ages. Every time you use a search engine, you're using AI. GPS programs and devices use AI to work out the best routes to a destination, accounting for traffic and roadworks. Yet even though AI is integrated into so many aspects of our lives, healthcare included, very few people understand how it works and how it arrives at the conclusions it does.
Let's not think of AI as a program for a moment. Let's think of it as a brain. We already know computers can accept input and produce output - for example, pressing a key (input) to print a letter to the screen (output). This is a process of receiving, understanding, and responding to stimuli, just like a human brain.
Image: Stefano Bucciarelli/Unsplash
When an AI program is first written, it's just like the brain of a newborn baby: inexperienced, curious, and imprintable. AI models begin with a "supervised learning" period, where the creators feed the AI brain a large amount of labelled input. In the context of a baby, this can be likened to learning "mama" and "dada" as first words; through continued exposure, the infant learns the names of its caregivers. The more data it is exposed to, the more it learns, retains, and can accurately respond to. After a while, the AI will have acquired enough knowledge to undergo unsupervised learning, where, given some parameters, it is allowed to work through unlabelled data and learn what it can. In the context of a child, we can see this as going to daycare, and eventually school.
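The article names no specific tools, but as a rough illustration of the two learning modes, here is a minimal sketch using scikit-learn: a supervised model trained on labelled examples, then an unsupervised one left to find structure in unlabelled data.

```python
# Supervised vs. unsupervised learning in miniature (scikit-learn).
from sklearn.datasets import make_classification, make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: every example comes with a label, like a caregiver
# naming things for a baby ("mama", "dada").
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)
print("Supervised accuracy:", model.score(X, y))

# Unsupervised: no labels at all; the algorithm must find structure
# on its own, like a child sorting out the world at daycare.
X_unlabelled, _ = make_blobs(n_samples=200, centers=3, random_state=0)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_unlabelled)
print("Cluster sizes:", [list(clusters).count(c) for c in range(3)])
```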
The next step is called the “Transformer Architecture,” and it performs a task that we often think of as unique to human brains - it draws from established knowledge to reach contextual conclusions about new knowledge. For example, if a child has only ever seen and used chairs before, they will likely be able to look at a barstool for the first time and, using their understanding of what chairs look like, where they’re used, and how they function, will be able to ascertain what a barstool is and how to use it. The transformer architecture does the same thing.
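The article doesn't go into the math, but the core of the transformer is the attention operation: each new token's representation is built as a relevance-weighted blend of everything already seen. Here is a bare-bones NumPy sketch of scaled dot-product attention (a simplification; real transformers add learned projections, multiple heads, and stacked layers):

```python
# Scaled dot-product attention, the core operation of a transformer.
import numpy as np

def attention(Q, K, V):
    """Each query attends to all keys; the output is a weighted mix of values."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # blend values by relevance

# Toy example: 3 "tokens" of established knowledge, 1 new token to interpret.
rng = np.random.default_rng(0)
K = V = rng.normal(size=(3, 4))   # the chairs seen before
Q = rng.normal(size=(1, 4))       # the barstool encountered for the first time
print(attention(Q, K, V))         # a context-weighted representation
```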
There is certainly more to it, but that's the basic process. A program is created and fed labelled data, then fed unlabelled data that it learns to make sense of on its own, and finally the transformer architecture lets it contextually interpret new data. This is how ChatGPT and other AI models of its kind can absorb, understand, and then accurately respond to questions and prompts from users.
Photo by Tyler Franta on Unsplash
What This Means for Journalists and Journalism
Technology is constantly being developed, and there are always new ways to research and present information. Over the years there has been a dramatic shift from traditional media being responsible for relaying accurate, trustworthy information to the public, toward online news outlets and content creators being the dominant source of information about current events.
Now, we stand at the precipice of a new industrial revolution. Although it can be easy to look at the things ChatGPT does and believe it will dominate, possibly even wipe out, career journalism and other industries, that looks more like a panic reaction than a logical one.
Although AI will inevitably disrupt industries, disruption also brings new opportunity and scope. Not only that, but AI models like ChatGPT can summarize studies and articles, making research for journalistic and expository writing far more efficient.
To illustrate the difference, allow us to put it in the following terms.
An AI will be able to produce a passable script on the effects of war by using data, statistics, eyewitness accounts, images, and videos. However, an AI cannot go to the scene of the war, develop its unique impressions, and produce a written or video piece with the creative decisions of a human journalist. An AI-created piece on the horrors of war would resemble something more akin to a documentary where a person in a seat just sits and rattles off what happened. You will learn, but you won’t understand, feel, or be stirred by it.
Modern journalism is just as much art as it is fact, and although this can produce tension and ethical dilemmas when one overtakes the other, it is the hints of humanity that make journalism a timeless profession. People will always need to know what is going on in the world, but it is the work of journalists, and of the people they report on, that puts these events into a more "human" context, so that people aren't just aware of them but moved by them.
If you’re a journalist concerned about your job once AI hits the main stage, we have some encouraging words for you. First of all, AI has already hit the main stage and you’re still here. Second, an AI can never do what you do. Finally, maybe you will be the person behind the next big development of the collaboration between AI and journalists. Either way, AI is merely a hammer, and that’s all it’s going to be for a long time. It’s up to you to drive the nail in.
by Irfan Ahmad via Digital Information World