M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models

July 20, 2023


Introduction

We propose M3KE, a Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark, developed to measure the knowledge acquired by Chinese large language models in zero- and few-shot settings. We have collected 20,477 questions across 71 tasks. Our selection covers all major levels of the Chinese education system, from primary school to college, as well as a wide variety of subjects, including humanities, history, politics, law, education, psychology, science, technology, art, and religion. All questions are multiple-choice with four options, ensuring a standardized and unified assessment process.
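Because every question is a four-option multiple-choice item, scoring a model reduces to exact-match accuracy over predicted option letters. A minimal sketch (the `predictions` and `gold` lists below are hypothetical, not part of the released data):

```python
def accuracy(predictions, gold):
    """Fraction of questions where the predicted letter equals the gold letter."""
    assert len(predictions) == len(gold)
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical example: 3 of 4 answers match the gold letters.
print(accuracy(["A", "C", "B", "D"], ["A", "C", "B", "A"]))  # 0.75
```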

We have assessed, and will continue to assess, a number of Chinese large language models on our benchmark. The models assessed so far are either only pre-trained on massive data, or pre-trained and then fine-tuned with SFT or RLHF. Model sizes range from 335M to 175B parameters.

This is a collaborative research effort between the Natural Language Processing Laboratory at Tianjin University and Huawei Noah’s Ark Lab.

Comparison between M3KE and other relevant benchmarks:

| Benchmark | Language | #Tasks | #Questions |
| --- | --- | --- | --- |
| MMLU | En | 57 | 15,908 |
| AGIEval | En, Zh | 20 | 8,062 |
| MMCU | Zh | 51 | 11,900 |
| M3KE | Zh | 71 | 20,477 |

All 71 tasks, organized by subject and education level:

| Level | Arts & Humanities | Social Sciences | Natural Sciences | Others |
| --- | --- | --- | --- | --- |
| Primary school | Chinese | | Math | |
| Junior high school | Chinese, History | Politics | Math, Physics, Biology, Chemistry, Geography | |
| High school | Chinese, History | Politics | Math, Physics, Biology, Chemistry, Geography | |
| College | Modern History, History Foundation, Modern World History | Chinese Constitutional Law, History of Chinese Education, History of the Chinese Legal System, Developmental and Educational Psychology, History of Foreign Education, Experimental Psychology, Introduction to Psychology, Moral Cultivation, Psychology of Teaching, Principles of Pedagogy, Educational Research Methods, Current Affairs and Politics, Introduction to Mao Tsetung Thoughts, Civil Law, Jurisprudence, Sociology, Basic Principle of Marxism, Criminal Jurisprudence, Outline of Chinese Modern History | Humanistic Medicine, Internal Medicine, Animal Physiology, Surgical Sciences, Operating Systems, Data Structures, Probability Theory, Biochemistry, Biochemistry and Pathology, Physiology, Principles of Computer Composition, Computer Networks, Advanced Mathematics, Linear Algebra, Stomatology, Anthropotomy, Pharmacology, Immunology | Management, Economics |
| Others | Film, Music, Dance, Fine Arts | | Computer Grade Exam (Computer Fundamentals, Programming Languages) | Chinese Medicine, Ancient Chinese Language, Novels, Religion, Chinese Civil Service Examination |

How to use M3KE

The M3KE dataset can be loaded in several ways; below we describe three of them.

Method 1

First, clone this repository manually using the following command:

git clone https://github.com/tjunlp-lab/M3KE.git

Then, load the data using Python's built-in json package as shown below:

import json

# Read one task file line by line; each line is a standalone JSON object.
with open("path/to/M3KE/test/Computer Programming Language-Natural Sciences-Other.jsonl", mode="r", encoding="utf-8") as fin:
    for json_line in fin:
        json_data = json.loads(json_line)
        print(json_data)
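To load every task rather than a single file, the same pattern can be wrapped in a loop over all `.jsonl` files in the test directory. A sketch, assuming the directory layout produced by the clone above (`load_tasks` is an illustrative helper, not part of the repository):

```python
import glob
import json
import os

def load_tasks(test_dir):
    """Load every *.jsonl task file in test_dir into a dict keyed by task name."""
    tasks = {}
    for path in sorted(glob.glob(os.path.join(test_dir, "*.jsonl"))):
        task_name = os.path.splitext(os.path.basename(path))[0]
        with open(path, mode="r", encoding="utf-8") as fin:
            tasks[task_name] = [json.loads(line) for line in fin]
    return tasks

tasks = load_tasks("path/to/M3KE/test")
print({name: len(items) for name, items in tasks.items()})
```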

Method 2

First, clone this repository manually using the following command:

git clone https://github.com/tjunlp-lab/M3KE.git

Then, load the data using the datasets package as shown below:

from datasets import load_dataset

ds = load_dataset("json", data_files="path/to/M3KE/test/Computer Programming Language-Natural Sciences-Other.jsonl", split="train")
print(ds)
"""
Dataset({
    features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
    num_rows: 236
})
"""

print(ds[0])
"""
{'id': 0, 'question': '下面判断正确的是?', 'A': 'char str[10]={"china"}; 等价于 char str[10];str[]="china";', 'B': 'char *s="china"; 等价于 char *s;s="china"; ', 'C': 'char *a="china"; 等价于 char *a;*a="china";', 'D': 'char c[6]="china",d[6]="china"; 等 价 于 char c[6]=d[6]="china"; ', 'answer': ''}
"""

Method 3

The M3KE dataset has been uploaded to the HuggingFace Datasets Hub, allowing for easy loading without manually cloning this repository. Use the following code to load the dataset:

from datasets import load_dataset

ds = load_dataset(
    path="TJUNLP/M3KE", 
    name="Computer Programming Language-Natural Sciences-Other"
)
print(ds)
"""
DatasetDict({
    test: Dataset({
        features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
        num_rows: 236
    })
    dev: Dataset({
        features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
        num_rows: 5
    })
})
"""

print(ds["test"][0])
"""
{'id': 0, 'question': '下面判断正确的是?', 'A': 'char str[10]={"china"}; 等价于 char str[10];str[]="china";', 'B': 'char *s="china"; 等价于 char *s;s="china"; ', 'C': 'char *a="china"; 等价于 char *a;*a="china";', 'D': 'char c[6]="china",d[6]="china"; 等 价 于 char c[6]=d[6]="china"; ', 'answer': ''}
"""

Evaluation Leaderboard (more models to be added)

If you want to have your Chinese large language models assessed and added to the leaderboard, please feel free to contact us via liuc_09@tju.edu.cn or submit a pull request.

Average zero-shot accuracy of each evaluated model on the four major discipline clusters:

| Models | Arts & Humanities | Social Sciences | Natural Sciences | Others | Average |
| --- | --- | --- | --- | --- | --- |
| GLM-335M | 0.070 | 0.046 | 0.084 | 0.044 | 0.062 |
| Bloom-7B | 0.163 | 0.159 | 0.161 | 0.158 | 0.161 |
| GLM-10B | 0.180 | 0.229 | 0.219 | 0.150 | 0.197 |
| GLM-130B | 0.326 | 0.352 | 0.274 | 0.359 | 0.328 |
| ChatGLM-6B | 0.246 | 0.267 | 0.168 | 0.263 | 0.236 |
| MOSS-SFT-16B | 0.260 | 0.263 | 0.207 | 0.275 | 0.251 |
| BELLE-7B-0.2M | 0.247 | 0.296 | 0.260 | 0.260 | 0.266 |
| BELLE-7B-2M | 0.328 | 0.367 | 0.282 | 0.355 | 0.333 |
| LLaMA-7B-2M | 0.256 | 0.227 | 0.206 | 0.244 | 0.233 |
| LLaMA-13B-2M | 0.294 | 0.316 | 0.246 | 0.279 | 0.284 |
| AquilaChat-7B | 0.256 | 0.253 | 0.229 | 0.246 | 0.246 |
| GPT3.5-turbo | 0.460 | 0.538 | 0.444 | 0.481 | 0.481 |
| GPT-4 | 0.588 | 0.676 | 0.623 | 0.665 | 0.638 |
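The Average column is consistent with an unweighted mean of the four cluster accuracies; for example, GPT-4's zero-shot row reproduces its reported 0.638:

```python
# Unweighted mean over the four discipline clusters (GPT-4, zero-shot row).
gpt4 = {"Arts & Humanities": 0.588, "Social Sciences": 0.676,
        "Natural Sciences": 0.623, "Others": 0.665}
average = sum(gpt4.values()) / len(gpt4)
print(round(average, 3))  # 0.638
```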

Average five-shot accuracy of each evaluated model on the four major discipline clusters:

| Models | Arts & Humanities | Social Sciences | Natural Sciences | Others | Average |
| --- | --- | --- | --- | --- | --- |
| GLM-335M | 0.220 | 0.247 | 0.193 | 0.126 | 0.196 |
| Bloom-7B | 0.247 | 0.260 | 0.235 | 0.246 | 0.247 |
| GLM-10B | 0.294 | 0.304 | 0.232 | 0.211 | 0.260 |
| GLM-130B | 0.297 | 0.329 | 0.246 | 0.228 | 0.275 |
| ChatGLM-6B | 0.188 | 0.175 | 0.121 | 0.198 | 0.171 |
| MOSS-SFT-16B | 0.266 | 0.264 | 0.258 | 0.284 | 0.268 |
| BELLE-7B-0.2M | 0.292 | 0.327 | 0.273 | 0.307 | 0.299 |
| BELLE-7B-2M | 0.287 | 0.309 | 0.284 | 0.313 | 0.298 |
| LLaMA-7B-2M | 0.273 | 0.257 | 0.222 | 0.250 | 0.251 |
| LLaMA-13B-2M | 0.241 | 0.234 | 0.138 | 0.219 | 0.208 |
| AquilaChat-7B | 0.257 | 0.249 | 0.248 | 0.264 | 0.255 |
| baichuan-7B | 0.266 | 0.264 | 0.175 | 0.241 | 0.237 |
| GPT3.5-turbo | 0.453 | 0.540 | 0.464 | 0.476 | 0.483 |

Citation

@misc{liu2023m3ke,
    title={M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models},
    author={Chuang Liu and Renren Jin and Yuqi Ren and Linhao Yu and Tianyu Dong and Xiaohan Peng and Shuting Zhang and Jianxiang Peng and Peiyi Zhang and Qingqing Lyu and Xiaowen Su and Qun Liu and Deyi Xiong},
    year={2023},
    eprint={2305.10263},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}