Experts launch exam to assess expert-level AI intelligence
Summary: A team of experts has launched "Humanity's Last Exam," a call for challenging questions designed to determine when AI systems reach expert-level intelligence. The initiative comes after recent AI models, such as OpenAI o1, excelled on standard benchmarks.
The project is organized by the Center for AI Safety and Scale AI. It aims to gather at least 1,000 difficult questions, with submissions due by November 1. Winning entries will receive prizes and co-authorship.
The exam will focus on abstract reasoning and will exclude questions about weapons. Organizers aim to ensure that AI responses cannot be drawn from memorized answers in existing datasets.