Kai Riemer, University of Sydney and Sandra Peter, University of Sydney
How do you translate ancient Palmyrene script from a Roman tombstone? How many paired tendons are supported by a specific sesamoid bone in a hummingbird? Can you identify closed syllables in Biblical Hebrew based on the latest scholarship on Tiberian pronunciation traditions?
These are some of the questions in “Humanity’s Last Exam”, a new benchmark introduced in a study published this week in Nature. The collection of 2,500 questions is specifically designed to probe what today’s artificial intelligence (AI) systems still cannot do.
The benchmark is a global collaboration: nearly 1,000 experts across a range of academic fields contributed questions at the frontier of human knowledge. The problems require graduate-level expertise in mathematics, physics, chemistry, biology, computer science and the humanities. Importantly, every question was tested against leading AI models before inclusion. If a model could already answer a question correctly when the test was being designed, that question was rejected.
This process explains why the initial results looked so different from other benchmarks. While AI chatbots score above 90% on popular tests, when Humanity’s Last Exam was first released in early 2025, leading models struggled badly. GPT-4o managed just 2.7% accuracy. Claude 3.5 Sonnet scored 4.1%. Even OpenAI’s most powerful model, o1, achieved only 8%.
The low scores were the point. The benchmark was constructed to measure what remained beyond AI’s grasp. And while some commentators have suggested that benchmarks like Humanity’s Last Exam chart a path toward artificial general intelligence, or even superintelligence – that is, AI systems capable of performing any task at human or superhuman levels – we believe this is wrong for three reasons.
Benchmarks measure task performance, not intelligence
When a student scores well on the bar exam, we can reasonably predict they’ll make a competent lawyer. That’s because the test was designed to assess whether humans have acquired the knowledge and reasoning skills needed for legal practice – and for humans, that works. The understanding required to pass genuinely transfers to the job.
But AI systems are not humans preparing for careers.
When a large language model scores well on the bar exam, it tells us the model can produce correct-looking answers to legal questions. It doesn’t tell us the model understands law, can counsel a nervous client, or exercise professional judgment in ambiguous situations.
The test measures something real for humans; for AI it measures only performance on the test itself.
Using human ability tests to benchmark AI is common practice, but it’s fundamentally misleading. Assuming a high test score means the machine has become more human-like is a category error, much like concluding that a calculator “understands” mathematics because it can solve equations faster than any person.
Human and machine intelligence are fundamentally different
Humans learn continuously from experience. We have intentions, needs and goals. We live lives, inhabit bodies and experience the world directly. Our intelligence evolved to serve our survival as organisms and our success as social creatures.
But AI systems are very different.
Large language models derive their capabilities from statistical patterns in the text they were trained on. Once trained, they don’t keep learning from experience the way humans do.
For humans, intelligence comes first and language serves as a tool for communication – intelligence is prelinguistic. But for large language models, language is the intelligence – there’s nothing underneath.
Even the creators of Humanity’s Last Exam acknowledge this limitation:
High accuracy on [Humanity’s Last Exam] would demonstrate expert-level performance on closed-ended, verifiable questions and cutting-edge scientific knowledge, but it would not alone suggest autonomous research capabilities or artificial general intelligence.
Subbarao Kambhampati, professor at Arizona State University and former president of the Association for the Advancement of Artificial Intelligence, puts it more clearly:
Humanity’s essence isn’t captured by a static test but rather by our ability to evolve and tackle previously unimaginable questions.
Developers like leaderboards
There’s another problem. AI developers use benchmarks to optimise their models for leaderboard performance. They’re essentially cramming for the exam. And unlike humans, who build genuine understanding as they study for a test, an optimised model simply gets better at that particular test.
But it’s working.
Since Humanity’s Last Exam was published online in early 2025, scores have climbed dramatically. Gemini 3 Pro Preview now tops the leaderboard at 38.3% accuracy, followed by GPT-5 at 25.3% and Grok 4 at 24.5%.
Does this improvement mean these models are approaching human intelligence? No. It means they’ve gotten better at the kinds of questions the exam contains. The benchmark has become a target to optimise against.
The industry is recognising this problem.
OpenAI recently introduced a measure called GDPval specifically designed to assess real-world usefulness.
Unlike academic-style benchmarks, GDPval focuses on tasks based on actual work products, such as the project documents, data analyses and other deliverables produced in professional settings.
What this means for you
If you’re using AI tools in your work or considering adopting them, don’t be swayed by benchmark scores. A model that aces Humanity’s Last Exam might still struggle with the specific tasks you need done.
It’s also worth noting the exam’s questions are heavily skewed toward certain domains. Mathematics alone accounts for 41% of the benchmark, with physics, biology and computer science making up much of the rest. If your work involves writing, communication, project management or customer service, the exam tells you almost nothing about which model might serve you best.
A practical approach is to devise your own tests based on what you actually need AI to do, then evaluate newer models against criteria that matter to you. AI systems are genuinely useful – but any discussion about superintelligence remains science fiction and a distraction from the real work of making these tools relevant to people’s lives.
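To make the “devise your own tests” suggestion concrete, here is a minimal sketch in Python of what a personal evaluation could look like. Everything in it is a hypothetical placeholder – the task names, prompts, pass/fail rules and the ask_model() stub are stand-ins for your own work products and whatever model access you have – and it is offered as an illustration, not as a method from the study or the article above.

```python
# A minimal sketch of a do-it-yourself evaluation rather than a benchmark:
# a handful of tasks you actually care about, each with your own pass/fail
# rule, run against whichever models you are comparing. The task names,
# prompts, criteria and the ask_model() stub are all hypothetical
# placeholders -- swap in your real work products and model access.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    name: str
    prompt: str
    passes: Callable[[str], bool]  # your own acceptance criterion


# Tasks drawn from your own work, not from an academic question bank.
TASKS = [
    Task(
        name="summarise_meeting_notes",
        prompt="Summarise these notes in three bullet points: ...",
        passes=lambda output: output.count("-") >= 3 and len(output) < 600,
    ),
    Task(
        name="draft_customer_reply",
        prompt="Draft a polite reply to a customer asking about a late order: ...",
        passes=lambda output: "order" in output.lower(),
    ),
]


def ask_model(model_name: str, prompt: str) -> str:
    """Placeholder: replace with a call to whichever model or API you use."""
    # A canned response keeps the sketch runnable end to end.
    return "- point one\n- point two\n- point three about your order"


def score(model_name: str) -> float:
    """Fraction of your own tasks the model handles acceptably."""
    results = [task.passes(ask_model(model_name, task.prompt)) for task in TASKS]
    return sum(results) / len(results)


if __name__ == "__main__":
    for model in ["model-a", "model-b"]:  # hypothetical candidate models
        print(f"{model}: {score(model):.0%}")
```

The design point is simply that the acceptance criteria come from the work you need done, not from a leaderboard.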
Kai Riemer, Professor of Information Technology and Organisation, University of Sydney and Sandra Peter, Director of Sydney Executive Plus, Business School, University of Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.