AI models exist thanks to programmers who spent years writing the code behind them. The question now is how well AI code generators compare to the humans who built them.
A new study published last month gives a clearer picture of how code produced by popular AI tools like ChatGPT stacks up against code written by human programmers.
The authors evaluated the generated code for functionality, complexity, and security. The results showed a wide range of success in producing functional code: depending on how difficult the task was, accuracy ranged from 0.66% to 89%.
In many situations the AI generator produced better code than an average human programmer would, but that did not come without serious concerns from experts.
A leading expert from Glasgow University noted that AI code generation could deliver major gains in productivity and help automate software development. That makes it all the more important to understand how the strengths and weaknesses of such models shift over time.
The comprehensive analysis also uncovered several issues with the popular AI chatbot, which the authors set out to document in detail.
It is striking that code an experienced programmer might have spent a long time producing can now, at least to some extent, be generated in seconds.
The study's detailed breakdown of these mixed results makes the tool's drawbacks easier to pin down.
Generally speaking, ChatGPT performed well at solving problems across a range of programming languages. This was especially true for LeetCode problems published before 2021.
For algorithm problems published after 2021, however, the tool's ability to produce correct code dropped noticeably. It often failed to grasp what a question was actually asking, even for easy problems.
The world of coding keeps evolving, and ChatGPT has not yet been exposed to the newest problems. It cannot reason like a human and tends to succeed on problems similar to ones it has already seen, which may explain why it handles older coding challenges better than new ones.
Some experts fear ChatGPT could generate incorrect code precisely because it does not understand the meaning behind algorithm problems. At the same time, the study found that the tool's solutions often had shorter runtimes and lower memory overheads than at least half of the human-submitted solutions they were compared against.
The researchers also explored whether ChatGPT could fix its own coding errors after receiving feedback. They randomly selected 50 problems where the tool had produced incorrect code, often because it had misunderstood the problem in front of it.
Although ChatGPT is generally considered good at fixing errors, in this case it struggled to correct the mistakes it had made itself.
Researchers also found that code generated by ChatGPT contained a small number of vulnerabilities, such as missing null tests, though many of these were straightforward to fix.
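The study's examples aren't reproduced here, but a "missing null test" typically looks like the following hypothetical sketch in Python (where None plays the role of null); the function name and scenario are illustrative, not taken from the study:

```python
def average_length(words):
    """Return the average length of the strings in `words`.

    A generated version might omit the null test and crash with a
    TypeError when `words` is None (e.g. when an upstream lookup
    returns no result). The fix is a simple guard clause.
    """
    # The null (None) test that is easy to forget:
    if words is None or len(words) == 0:
        return 0.0
    return sum(len(w) for w in words) / len(words)

print(average_length(["ai", "code"]))  # 3.0
print(average_length(None))            # 0.0 instead of a crash
```

This kind of defect is cheap to repair once spotted, which matches the study's finding that many of the vulnerabilities were fixable.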
In short, developers using ChatGPT for coding should account for its error rate and provide the chatbot with additional context to get the best results.
Image: DIW-Aigen
by Dr. Hura Anwar via Digital Information World