⚠️ Deprecation Notice

July 9, 2025

July 2025: simple-evals will no longer be updated for new models or benchmark results. The repo will continue to host reference implementations for HealthBench, BrowseComp, and SimpleQA.


Overview

This repository contains a lightweight library for evaluating language models. We are open sourcing it so we can be transparent about the accuracy numbers we're publishing alongside our latest models.

Benchmark Results

| Model | Prompt | MMLU | GPQA¹ | MATH² | HumanEval | MGSM³ | DROP³ (F1, 3-shot) | SimpleQA |
|---|---|---|---|---|---|---|---|---|
| **o3** |  |  |  |  |  |  |  |  |
| o3-high⁴ | n/a⁵ | 93.3 | 83.4 | 98.1 | 88.4 | 92.0 | 89.8 | 48.6 |
| o3⁶ ⁴ | n/a | 92.9 | 82.8 | 97.8 | 87.4 | 92.3 | 80.6 | 49.4 |
| o3-low⁴ | n/a | 92.8 | 78.6 | 96.9 | 87.3 | 91.9 | 82.3 | 49.4 |
| **o4-mini** |  |  |  |  |  |  |  |  |
| o4-mini-high⁶ ⁴ | n/a | 90.3 | 81.3 | 98.2 | 99.3 | 93.5 | 78.1 | 19.3 |
| o4-mini⁶ ⁴ | n/a | 90.0 | 77.6 | 97.5 | 97.3 | 93.7 | 77.7 | 20.2 |
| o4-mini-low⁴ | n/a | 89.5 | 73.6 | 96.2 | 95.9 | 93.0 | 76.0 | 20.2 |
| **o3-mini** |  |  |  |  |  |  |  |  |
| o3-mini-high | n/a | 86.9 | 77.2 | 97.9 | 97.6 | 92.0 | 80.6 | 13.8 |
| o3-mini | n/a | 85.9 | 74.9 | 97.3 | 96.3 | 90.8 | 79.2 | 13.4 |
| o3-mini-low | n/a | 84.9 | 67.6 | 95.8 | 94.5 | 89.4 | 77.6 | 13.0 |
| **o1** |  |  |  |  |  |  |  |  |
| o1 | n/a | 91.8 | 75.7 | 96.4 | - | 89.3 | 90.2 | 42.6 |
| o1-preview | n/a | 90.8 | 73.3 | 85.5 | 92.4 | 90.8 | 74.8 | 42.4 |
| o1-mini | n/a | 85.2 | 60.0 | 90.0 | 92.4 | 89.9 | 83.9 | 07.6 |
| **GPT-4.1** |  |  |  |  |  |  |  |  |
| gpt-4.1-2025-04-14 | assistant⁷ | 90.2 | 66.3 | 82.1 | 94.5 | 86.9 | 79.4 | 41.6 |
| gpt-4.1-mini-2025-04-14 | assistant | 87.5 | 65.0 | 81.4 | 93.8 | 88.2 | 81.0 | 16.8 |
| gpt-4.1-nano-2025-04-14 | assistant | 80.1 | 50.3 | 62.3 | 87.0 | 73.0 | 82.2 | 07.6 |
| **GPT-4o** |  |  |  |  |  |  |  |  |
| gpt-4o-2024-11-20 | assistant | 85.7 | 46.0 | 68.5 | 90.2 | 90.3 | 81.5 | 38.8 |
| gpt-4o-2024-08-06 | assistant | 88.7 | 53.1 | 75.9 | 90.2 | 90.0 | 79.8 | 40.1 |
| gpt-4o-2024-05-13 | assistant | 87.2 | 49.9 | 76.6 | 91.0 | 89.9 | 83.7 | 39.0 |
| gpt-4o-mini-2024-07-18 | assistant | 82.0 | 40.2 | 70.2 | 87.2 | 87.0 | 79.7 | 09.5 |
| **GPT-4.5-preview** |  |  |  |  |  |  |  |  |
| gpt-4.5-preview-2025-02-27 | assistant | 90.8 | 69.5 | 87.1 | 88.6 | 86.9 | 83.4 | 62.5 |
| **GPT-4 Turbo and GPT-4** |  |  |  |  |  |  |  |  |
| gpt-4-turbo-2024-04-09 | assistant | 86.7 | 49.3 | 73.4 | 88.2 | 89.6 | 86.0 | 24.2 |
| gpt-4-0125-preview | assistant | 85.4 | 41.4 | 64.5 | 86.6 | 85.1 | 81.5 | n/a |
| gpt-4-1106-preview | assistant | 84.7 | 42.5 | 64.3 | 83.7 | 87.1 | 83.2 | n/a |
| **Other Models (Reported)** |  |  |  |  |  |  |  |  |
| Claude 3.5 Sonnet | unknown | 88.3 | 59.4 | 71.1 | 92.0 | 91.6 | 87.1 | 28.9 |
| Claude 3 Opus | unknown | 86.8 | 50.4 | 60.1 | 84.9 | 90.7 | 83.1 | 23.5 |
| Llama 3.1 405b | unknown | 88.6 | 50.7 | 73.8 | 89.0 | 91.6 | 84.8 | n/a |
| Llama 3.1 70b | unknown | 82.0 | 41.7 | 68.0 | 80.5 | 86.9 | 79.6 | n/a |
| Llama 3.1 8b | unknown | 68.4 | 30.4 | 51.9 | 72.6 | 68.9 | 59.5 | n/a |
| Grok 2 | unknown | 87.5 | 56.0 | 76.1 | 88.4 | n/a | n/a | n/a |
| Grok 2 mini | unknown | 86.2 | 51.0 | 73.0 | 85.7 | n/a | n/a | n/a |
| Gemini 1.0 Ultra | unknown | 83.7 | n/a | 53.2 | 74.4 | 79.0 | 82.4 | n/a |
| Gemini 1.5 Pro | unknown | 81.9 | n/a | 58.5 | 71.9 | 88.7 | 78.9 | n/a |
| Gemini 1.5 Flash | unknown | 77.9 | 38.6 | 40.9 | 71.5 | 75.5 | 78.4 | n/a |

Background

Evals are sensitive to prompting, and there is significant variation in the formulations used in recent publications and libraries. Some use few-shot prompts or role-playing prompts ("You are an expert software programmer..."). These approaches are carryovers from evaluating base models (rather than instruction/chat-tuned models) and from models that were worse at following instructions.

For this library, we are emphasizing the zero-shot, chain-of-thought setting, with simple instructions like "Solve the following multiple-choice problem". We believe this prompting style better reflects the models' performance in realistic usage.
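As a concrete illustration, a zero-shot chain-of-thought request might look like the sketch below. This is a minimal example against the OpenAI Python SDK; the instruction wording and model name are placeholders, not the exact templates this library uses.

```python
# Minimal sketch of a zero-shot chain-of-thought eval query.
# The prompt wording and model name are illustrative placeholders,
# not the exact templates used by this library.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUERY_TEMPLATE = (
    "Solve the following multiple-choice problem. Think step by step, "
    "then state your final answer as a single letter.\n\n{question}"
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4.1",  # placeholder; any chat-capable model works
        messages=[{"role": "user", "content": QUERY_TEMPLATE.format(question=question)}],
    )
    return response.choices[0].message.content
```

Note that there is no few-shot exemplar and no role-playing system message: the question is sent with a single plain instruction, matching the zero-shot setting described above.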

We will not be actively maintaining this repository or monitoring PRs and Issues. In particular, we are not accepting new evals. Here are the changes we might accept:

  • Bug fixes (hopefully not needed!)
  • Adding adapters for new models
  • Adding new rows with eval results to the table above, given new models and new system prompts.

This repository is NOT intended as a replacement for https://github.com/openai/evals, which is designed to be a comprehensive collection of evals.

Evals

This repository currently contains the following evals:

  • MMLU: multitask knowledge and reasoning across academic subjects
  • MATH: mathematical problem solving (MATH-500 for newer models)
  • GPQA: graduate-level science questions
  • DROP: reading comprehension with discrete reasoning over paragraphs
  • MGSM: multilingual grade-school math
  • HumanEval: Python programming
  • SimpleQA: short-form factuality
  • BrowseComp: locating hard-to-find information by browsing
  • HealthBench: model performance in realistic health conversations

Samplers

We have implemented sampling interfaces for the following language model APIs:

  • OpenAI
  • Anthropic

Make sure to set the corresponding *_API_KEY environment variables before using these APIs.
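For example, in a POSIX shell (the key values below are placeholders, not real credentials):

```bash
# Placeholders -- substitute your actual API keys.
export OPENAI_API_KEY="YOUR_OPENAI_KEY"
export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_KEY"
```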

Setup

Due to the optional dependencies, we're not providing a unified setup mechanism. Instead, we're providing instructions for each eval and sampler.

For HumanEval (python programming)

```bash
git clone https://github.com/openai/human-eval
pip install -e human-eval
```

For the OpenAI API:

```bash
pip install openai
```

For the Anthropic API:

```bash
pip install anthropic
```

Running the evals

```bash
python -m simple-evals.simple_evals --list-models
```

This will list all the models that you can evaluate.

To run the evaluations, you can use the following command:

```bash
python -m simple-evals.simple_evals --model <model_name> --examples <num_examples>
```

This will launch evaluations through the OpenAI API.
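For example, a quick smoke test might look like this (the model name is illustrative; pick one from the `--list-models` output):

```bash
# Evaluate a small sample; the model name below is a placeholder.
python -m simple-evals.simple_evals --model gpt-4.1 --examples 10
```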

Notes

By contributing to evals, you agree to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI evals will be subject to our usual Usage Policies: https://platform.openai.com/docs/usage-policies.

Footnotes

  1. Includes an answer regex tweak for the GPQA benchmark.

  2. For newer models (anything on or after o1), we evaluate on MATH-500, which is a newer, IID version of MATH.

  3. We believe these evals are saturated for our newer models, but we report them for completeness.

  4. These results are with no tools enabled for o3 or o4-mini.

  5. o-series models do not support using a system prompt.

  6. The default reasoning level for these models is "medium".

  7. The assistant prompt refers to the system message described in the OpenAI API docs: "You are a helpful assistant."