MLE-bench

April 24, 2026

Code for the paper "MLE-Bench: Evaluating Machine Learning Agents on Machine Learning Engineering". We have released the code used to construct the dataset, the evaluation logic, as well as the agents we evaluated for this benchmark.

Leaderboard

Update (04-24-2026): We are currently not taking any new submissions to the leaderboard while we develop an improved process for ensuring submissions are fair and comparable. We will share updates on this process in the future.

| Agent | LLM(s) used | Low == Lite (%) | Medium (%) | High (%) | All (%) | Running Time (hours) | Date | Source Code Available | Grading Reports Available |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Famou-Agent 2.0 | Gemini-3-Pro-Preview | 80.30 ± 1.52 | 64.04 ± 2.32 | 42.22 ± 2.22 | 64.44 ± 1.18 | 24 | 2026-02-23 | | X |
| AIBuildAI | Claude-Opus-4.6 | 77.27 ± 0.00 | 61.40 ± 0.88 | 46.67 ± 0.00 | 63.11 ± 0.44 | 24 | 2026-03-06 | | X |
| CAIR MARS+ | Gemini-3-Pro-Preview | 78.79 ± 1.52 | 60.53 ± 1.52 | 44.44 ± 2.22 | 62.67 ± 0.77 | 24 | 2026-02-17 | | X |
| MLEvolve | Gemini-3-Pro-Preview | 80.30 ± 1.52 | 57.89 ± 1.52 | 42.22 ± 2.22 | 61.33 ± 1.33 | 12 | 2026-02-14 | | |
| PiEvolve (Fractal AI Research) | Gemini-3-Pro-Preview [1] | 80.30 ± 1.52 [2] | 58.77 ± 0.88 [2] | 40.00 ± 0.00 [2] | 61.33 ± 0.77 [2] | 24 | 2026-01-05 | | X |
| Famou-Agent 2.0 | Gemini-2.5-Pro | 75.76 ± 1.52 | 57.89 ± 1.52 | 40.00 ± 0.00 | 59.56 ± 0.89 | 24 | 2025-12-27 | | X |
| ML-Master 2.0 | Deepseek-V3.2-Speciale | 75.76 ± 1.51 | 50.88 ± 3.51 | 42.22 ± 2.22 | 56.44 ± 2.47 | 24 | 2025-12-16 | | X |
| CAIR MARS | Gemini-3-Pro-Preview | 74.24 ± 1.52 | 52.63 ± 3.04 | 37.78 ± 2.22 | 56.00 ± 1.54 | 24 | 2026-01-25 | | X |
| PiEvolve (Fractal AI Research) | Gemini-3-Pro-Preview [1] | 74.24 ± 3.03 [2] | 45.61 ± 0.88 [2] | 35.55 ± 2.22 [2] | 52.00 ± 0.77 [2] | 12 | 2026-01-05 | | X |
| Leeroo | Gemini-3-Pro-Preview [1] | 68.18 ± 2.62 [2] | 44.74 ± 1.52 [2] | 40.00 ± 0.00 [2] | 50.67 ± 1.33 [2] | 24 | 2025-12-07 | | |
| Thesis | gpt-5-codex | 65.15 ± 1.52 | 45.61 ± 7.18 | 31.11 ± 2.22 | 48.44 ± 3.64 | 24 | 2025-11-10 | | X |
| CAIR MLE-STAR-Pro-1.5 | Gemini-2.5-Pro | 68.18 ± 2.62 | 34.21 ± 1.52 | 33.33 ± 0.00 | 44.00 ± 1.33 | 24 | 2025-11-25 | | X |
| Famou-Agent | Gemini-2.5-Pro | 62.12 ± 1.52 | 36.84 ± 1.52 | 33.33 ± 0.00 | 43.56 ± 0.89 | 24 | 2025-10-10 | | X |
| Operand ensemble | gpt-5 (low verbosity/effort) [3] | 63.64 ± 0.00 | 33.33 ± 0.88 [2] | 20.00 ± 0.00 [2] | 39.56 ± 0.44 [2] | 24 | 2025-10-06 | | X |
| CAIR MLE-STAR-Pro-1.0 | Gemini-2.5-Pro | 66.67 ± 1.52 | 25.44 ± 0.88 | 31.11 ± 2.22 | 38.67 ± 0.77 | 12 | 2025-11-03 | | X |
| InternAgent | deepseek-r1 | 62.12 ± 3.03 | 26.32 ± 2.63 | 24.44 ± 2.22 | 36.44 ± 1.18 | 12 | 2025-09-12 | | X |
| R&D-Agent | gpt-5 | 68.18 ± 2.62 | 21.05 ± 1.52 | 22.22 ± 2.22 | 35.11 ± 0.44 | 12 | 2025-09-26 | | |
| Neo multi-agent | undisclosed | 48.48 ± 1.52 | 29.82 ± 2.32 | 24.44 ± 2.22 | 34.22 ± 0.89 | 36 | 2025-07-28 | | X |
| AIRA-dojo | o3 | 55.00 ± 1.47 | 21.97 ± 1.17 | 21.67 ± 1.07 | 31.60 ± 0.82 | 24 | 2025-05-15 | | |
| R&D-Agent | o3 + GPT-4.1 | 51.52 ± 4.01 | 19.30 ± 3.16 | 26.67 ± 0.00 | 30.22 ± 0.89 | 24 | 2025-08-15 | | |
| ML-Master | deepseek-r1 | 48.48 ± 1.52 | 20.18 ± 2.32 | 24.44 ± 2.22 | 29.33 ± 0.77 | 12 | 2025-06-17 | | |
| R&D-Agent | o1-preview | 48.18 ± 1.11 | 8.95 ± 1.05 | 18.67 ± 1.33 | 22.40 ± 0.50 | 24 | 2025-05-14 | | |
| AIDE | o1-preview | 35.91 ± 1.86 | 8.45 ± 0.43 | 11.67 ± 1.27 | 17.12 ± 0.61 | 24 | 2024-10-08 | | |
| AIDE | gpt-4o-2024-08-06 | 18.55 ± 1.26 | 3.06 ± 0.33 | 8.15 ± 0.84 | 8.63 ± 0.54 | 24 | 2024-10-08 | | |
| AIDE | claude-3-5-sonnet-20240620 | 19.70 ± 1.52 | 2.63 ± 1.52 | 2.22 ± 2.22 | 7.56 ± 1.60 | 24 | 2024-10-08 | | |
| OpenHands | gpt-4o-2024-08-06 | 12.12 ± 1.52 | 1.75 ± 0.88 | 2.22 ± 2.22 | 4.89 ± 0.44 | 24 | 2024-10-08 | | |
| AIDE | llama-3.1-405b-instruct | 10.23 ± 1.14 | 0.66 ± 0.66 | 0.00 ± 0.00 | 3.33 ± 0.38 | 24 | 2024-10-08 | | |
| MLAB | gpt-4o-2024-08-06 | 4.55 ± 0.86 | 0.00 ± 0.00 | 0.00 ± 0.00 | 1.60 ± 0.27 | 24 | 2024-10-08 | | |

Additional Leaderboard Submissions

Additional submissions that are not directly comparable to the main leaderboard (see Notes column).

| Agent | LLM(s) used | Low == Lite (%) | Medium (%) | High (%) | All (%) | Running Time (hours) | Date | Notes | Source Code Available | Grading Reports Available |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Disarray | Ensemble (Claude-Opus-4.5, Claude-Sonnet-4.5, GPT-5.2-Codex, Gemini-3-Pro-Preview) | 90.91 ± 0.00 | 72.81 ± 0.88 | 71.11 ± 2.22 | 77.78 ± 0.44 | 24 | 2026-02-03 | Test-set feedback | | X |
| LoongFlow | Gemini-3-Flash-Preview | 77.27 ± 0.00 [2] | 63.15 ± 1.51 [2] | 40.00 ± 0.00 [2] | 62.66 ± 0.76 [2] | 24 | 2026-02-09 | Test-set feedback | | |

Producing Scores for the Leaderboard

To produce the scores for the leaderboard, place your grading reports in the runs/ folder, organized by run group, with one grading report per run group. Identify the run groups belonging to your submission in runs/run_group_experiments.csv with an experiment id. Then run:

uv run python experiments/aggregate_grading_reports.py --experiment-id <exp_id> --split low
uv run python experiments/aggregate_grading_reports.py --experiment-id <exp_id> --split medium
uv run python experiments/aggregate_grading_reports.py --experiment-id <exp_id> --split high
uv run python experiments/aggregate_grading_reports.py --experiment-id <exp_id> --split split75

Report the mean and standard error of the mean (SEM) for each split on the reported any_medal_percentage metric. The split75 split (--split split75) corresponds to the All (%) column.
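
For concreteness, a hypothetical submission with two run groups tagged with the same experiment id might be organized as follows. The folder and file names, and the CSV header, are illustrative only — check runs/run_group_experiments.csv in the repo for the canonical schema:

runs/
├── 2026-01-01T00-00-00-GMT_run-group_my-agent/
│   └── grading_report.json
└── 2026-01-02T00-00-00-GMT_run-group_my-agent/
    └── grading_report.json

with runs/run_group_experiments.csv mapping each run group to your experiment id:

run_group,experiment_id
2026-01-01T00-00-00-GMT_run-group_my-agent,my-agent-24h
2026-01-02T00-00-00-GMT_run-group_my-agent,my-agent-24h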

Benchmarking

This section describes a canonical setup for comparing scores on MLE-bench. We recommend the following:

  • Repeat each evaluation with at least 3 seeds and report the Any Medal (%) score as the mean ± one standard error of the mean (a minimal sketch of this computation follows this list). The evaluation (task and grading) itself is deterministic, but agents/LLMs can be quite high-variance!
  • Agent resources: not a strict requirement of the benchmark, but please report if you stray from these defaults!
    • Runtime: 24 hours
    • Compute: 36 vCPUs with 440GB RAM and one 24GB A10 GPU
  • Include a breakdown of your scores across Low, Medium, High, and All complexity splits (see Lite evaluation below for why this is useful).
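
As a concrete sketch of the mean ± SEM computation (plain Python; the per-seed scores here are made up):

import statistics

# Hypothetical any_medal_percentage scores from three seeds of the same agent setup.
seed_scores = [22.7, 20.0, 24.5]

mean = statistics.mean(seed_scores)
# SEM = sample standard deviation / sqrt(number of seeds)
sem = statistics.stdev(seed_scores) / len(seed_scores) ** 0.5

print(f"Any Medal (%): {mean:.2f} ± {sem:.2f}")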

Lite Evaluation

Evaluating agents with the above settings on the full 75 competitions of MLE-bench can be expensive. For users preferring a "lite" version of the benchmark, we recommend using the Low complexity split of our dataset, which consists of only 22 competitions. This reduces the number of runs substantially, while still allowing fair comparison along one column of the table above.

Furthermore, the Low complexity competitions tend to be significantly more lightweight (158GB total dataset size, compared to 3.3TB for the full set), so users may additionally consider reducing the runtime or compute resources available to the agents for further cost reduction. However, note that doing so risks degrading the performance of your agent; for example, see Sections 3.3 and 3.4 of our paper, where we experimented with varying resources on the full competition set.

The Lite dataset contains the following competitions:

| Competition ID | Category | Dataset Size (GB) |
| --- | --- | --- |
| aerial-cactus-identification | Image Classification | 0.0254 |
| aptos2019-blindness-detection | Image Classification | 10.22 |
| denoising-dirty-documents | Image To Image | 0.06 |
| detecting-insults-in-social-commentary | Text Classification | 0.002 |
| dog-breed-identification | Image Classification | 0.75 |
| dogs-vs-cats-redux-kernels-edition | Image Classification | 0.85 |
| histopathologic-cancer-detection | Image Classification | 7.76 |
| jigsaw-toxic-comment-classification-challenge | Text Classification | 0.06 |
| leaf-classification | Image Classification | 0.036 |
| mlsp-2013-birds | Audio Classification | 0.5851 |
| new-york-city-taxi-fare-prediction | Tabular | 5.7 |
| nomad2018-predict-transparent-conductors | Tabular | 0.00624 |
| plant-pathology-2020-fgvc7 | Image Classification | 0.8 |
| random-acts-of-pizza | Text Classification | 0.003 |
| ranzcr-clip-catheter-line-classification | Image Classification | 13.13 |
| siim-isic-melanoma-classification | Image Classification | 116.16 |
| spooky-author-identification | Text Classification | 0.0019 |
| tabular-playground-series-dec-2021 | Tabular | 0.7 |
| tabular-playground-series-may-2022 | Tabular | 0.57 |
| text-normalization-challenge-english-language | Seq->Seq | 0.01 |
| text-normalization-challenge-russian-language | Seq->Seq | 0.01 |
| the-icml-2013-whale-challenge-right-whale-redux | Audio Classification | 0.29314 |

Setup

Some MLE-bench competition data is stored using Git-LFS. Once you have downloaded and installed LFS, run:

git lfs fetch --all
git lfs pull

You can install mlebench with pip:

pip install -e .

Pre-Commit Hooks (Optional)

If you're committing code, you can install the pre-commit hooks by running:

pre-commit install

Dataset

The MLE-bench dataset is a collection of 75 Kaggle competitions which we use to evaluate the ML engineering capabilities of AI systems.

Since Kaggle does not provide the held-out test set for each competition, we provide preparation scripts that split the publicly available training set into a new training and test set.

For each competition, we also provide grading scripts that can be used to evaluate the score of a submission.

We use the Kaggle API to download the raw datasets. Ensure that you have downloaded your Kaggle credentials (kaggle.json) and placed them in the ~/.kaggle/ directory; this is the default location where the Kaggle API looks for your credentials.
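
If you haven't set these up before, a typical credentials setup looks like the following (a minimal sketch, assuming you've downloaded kaggle.json to your working directory):

mkdir -p ~/.kaggle
mv kaggle.json ~/.kaggle/kaggle.json
chmod 600 ~/.kaggle/kaggle.json  # the Kaggle API warns if the key is readable by other users

To download and prepare the MLE-bench dataset, run the following, which will download and prepare the dataset in your system's default cache directory. Note: we've found this to take two days when running from scratch: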

mlebench prepare --all

To prepare the lite dataset, run:

mlebench prepare --lite

Alternatively, you can prepare the dataset for a specific competition by running:

mlebench prepare -c <competition-id>
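
For instance, to prepare only the Spooky Author Identification competition from the Lite set:

mlebench prepare -c spooky-author-identification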

Run mlebench prepare --help to see the list of available competitions.

Grading Submissions

Answers for competitions must be submitted in CSV format; the required format is described in each competition's description, or shown in the competition's sample submission file. You can grade multiple submissions at once using the mlebench grade command. Given a JSONL file, where each line corresponds to a submission for one competition, mlebench grade will produce a grading report for each competition. Each line of the JSONL file must contain the following fields:

  • competition_id: the ID of the competition in our dataset.
  • submission_path: the path to a .csv file with the predictions for the specified competition.
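
For example, a two-line submissions file might look like this (the paths are illustrative):

{"competition_id": "spooky-author-identification", "submission_path": "predictions/spooky-author-identification.csv"}
{"competition_id": "leaf-classification", "submission_path": "predictions/leaf-classification.csv"}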

See more information by running mlebench grade --help.

You can also grade individual submissions using the mlebench grade-sample command. For example, to grade a submission for the Spaceship Titanic competition, you can run:

mlebench grade-sample <PATH_TO_SUBMISSION> spaceship-titanic

See more information by running mlebench grade-sample --help.

Environment

We provide a Docker image, mlebench-env, which serves as the base environment for our agents. This image contains:

  • Conda environment used to execute our agents. By default, we install Python packages in this environment which are commonly used across our agents. If you don't want to install these packages, set INSTALL_HEAVY_DEPENDENCIES to false when building the image by adding --build-arg INSTALL_HEAVY_DEPENDENCIES=false to the docker build command below
  • Instructions for agents to follow when creating their submission
  • Grading server for agents to use when checking that the structure of their submission is correct

Build this image by running:

docker build --platform=linux/amd64 -t mlebench-env -f environment/Dockerfile .
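
For example, the same build with the flag mentioned above to skip the heavy dependencies:

docker build --platform=linux/amd64 --build-arg INSTALL_HEAVY_DEPENDENCIES=false -t mlebench-env -f environment/Dockerfile .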

Agents

We purposefully designed our benchmark not to make any assumptions about the agent that produces submissions, so new agents can be evaluated on it easily. We evaluated three open-source agents; we discuss this procedure in agents/README.md.

Extras

We include additional features in the MLE-bench repository that may be useful for MLE-bench evaluation. These include a rule violation detector and a plagiarism detector. We refer readers to extras/README.md for more information.

Examples

We collect example usage of this library in the examples/ directory, see examples/README.md for more information.

Experiments

We place the code specific to the experiments from our publication of the benchmark in the experiments/ directory:

  • For instance, our competition splits are available in experiments/splits/.
  • For a completed set of runs from a given agent, you can use the provided experiments/make_submission.py script to compile its submission for grading.
  • We release our methodology for the "familiarity" experiments in experiments/familiarity/, see experiments/familiarity/README.md for more information.

Dev

Note: when running pytest locally, be sure that you have accepted the competition rules, otherwise the tests will fail.

Known Issues

There are some known issues with certain MLE-bench competitions. Since we have already received leaderboard submissions, we are postponing fixes to avoid invalidating the leaderboard. Instead, we plan to release batched fixes in the upcoming v2 release of MLE-bench on the openai/frontier-evals repo, which will include a version column in the leaderboard to distinguish between v1 and v2 results. If you wish to make a submission to v1 in the meantime, please still include the following competitions in your overall scores. The known issues are catalogued below:

  • tensorflow2-question-answering:
    • The validate_submission function in grade.py fails on this competition because the answer file is test.jsonl instead of test.csv. #134
  • tensorflow-speech-recognition-challenge:
    • The prepare.py script incorrectly prepares the test set such that there is a much larger range of test labels than there should be. #63
    • The prepare.py script does not properly create a test set where the speaker IDs are disjoint from those in train/val.
  • icecube-neutrinos-in-deep-ice: Checksums are mismatched. #58
  • ranzcr-clip-catheter-line-classification: The prepare.py script results in missing columns in the sample submission. #30
  • dog-breed-identification: The MLE-bench test split is created by holding out images from a publicly labeled source corpus derived from the Stanford Dogs Dataset, which agents may discover and leverage. #128
  • invasive-species-monitoring: The prepare.py script archives the prepared train/ and test/ directories incorrectly, so train.7z and test.7z can be missing their image contents in the prepared public dataset. #122
  • tabular-playground-series-dec-2021: The leaderboard is crowded -- very little difference between the top score and the median score.
  • tabular-playground-series-may-2022: The leaderboard is crowded -- very little difference between the top score and the median score.
  • jigsaw-toxic-comment-classification-challenge: The leaderboard is crowded -- very little difference between the top score and the median score.
  • champs-scalar-coupling: Test molecules are missing from structures.csv. #70
  • multi-modal-gesture-recognition: The public test .mat files leak test labels. #77
  • smartphone-decimeter-2022: The public test span_log.nmea files leak information that makes achieving a perfect score trivial. #93
  • hubmap-kidney-segmentation: The public test {image_id}.json files leak information that makes achieving a close-to-perfect score trivial. They should be removed.
  • random-acts-of-pizza: The field giver_username_if_known leaks the outcome, enabling trivial perfect prediction. This competition should be dropped. #108

Authors

Chan Jun Shern, Neil Chowdhury, Oliver Jaffe, James Aung, Dane Sherburn, Evan Mays, Giulio Starace, Kevin Liu, Leon Maksin, Tejal Patwardhan, Lilian Weng, Aleksander Mądry

Citation

Please cite using the following BibTeX entry:

@article{chan2024mle-bench,
  title={MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering},
  author={Jun Shern Chan and Neil Chowdhury and Oliver Jaffe and James Aung and Dane Sherburn and Evan Mays and Giulio Starace and Kevin Liu and Leon Maksin and Tejal Patwardhan and Lilian Weng and Aleksander Mądry},
  year={2024},
  eprint={2410.07095},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.07095}
}

Footnotes

  1. The architecture is primarily driven by Gemini-3-Pro-Preview, with a subset of modules utilizing GPT-5 and GPT-5-mini.

  2. Computed by padding incomplete seeds with failing scores.

  3. With some light assistance from an ensemble of models including Gemini-2.5-Pro, Grok-4, and Claude 4.1 Opus, distilled by Gemini-2.5-Pro.