SkyThought

May 12, 2025


News

  • [2025/02/21] 🎉 We released S*: Test time scaling for code generation (paper, code), a simple and extensible test time scaling framework for code generation.
  • [2025/02/11] 🎉 We released Sky-T1-7B (model) and Sky-T1-mini (model) to demonstrate the potential of RL in further enhancing models' capabilities beyond distillation.
  • [2025/01/23] ⚡️ We released Sky-T1-32B-Flash (model, data) to tackle overthinking and reduce reasoning sequence lengths while maintaining accuracy.
  • [2025/01/19] 🎉 The chat demo for Sky-T1-32B-Preview is live! Please check it out!
  • [2025/01/10] 🎉 We have released our Sky-T1-32B-Preview model and data through HuggingFace!

Getting Started

We open-source the code and scripts we used for data curation, training, and evaluation of Sky-T1-32B-Preview; you can find more details in each directory.

  • recipes: Recipes (data curation steps and training strategies) for building the Sky-T1-32B-Flash, Sky-T1-32B-Preview, and Sky-T1-7B series models.
  • skythought/evals: Our data generation and evaluation library. We provide a convenient CLI for evaluation as well as a Scorer API for scoring during data curation and training (example).
  • skythought/train: Training scripts for Sky-T1. We use Llama-Factory to perform training.
  • skythought/skythought-rl: RL training code for Sky-T1-7B and Sky-T1-mini.

Evaluation

Usage

You can install the latest release from PyPI or from source:

```shell
pip install skythought
```

Installing from source

```shell
# Clone the repository
git clone https://github.com/NovaSky-AI/SkyThought.git
cd SkyThought

# Create and activate a virtual environment (using uv here)
uv venv --python 3.10
source .venv/bin/activate

# Install the package in editable mode
uv pip install -e .
```

Running evaluation is as simple as:

```shell
skythought evaluate --model NovaSky-AI/Sky-T1-32B-Preview --task aime24
```
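
To sweep a model over several tasks, one option is a small wrapper around the CLI. The sketch below assumes only the `--model` and `--task` flags shown above; check `skythought evaluate --help` for the full option set before relying on it:

```python
# Sketch: build (and optionally run) `skythought evaluate` commands for a
# sweep over several tasks. Only the --model and --task flags documented
# above are assumed; other options may exist.
import subprocess


def build_command(model: str, task: str) -> list[str]:
    """Construct the argv for one evaluation run."""
    return ["skythought", "evaluate", "--model", model, "--task", task]


def run_sweep(model: str, tasks: list[str], dry_run: bool = True) -> list[list[str]]:
    """Build one command per task; execute them when dry_run is False."""
    commands = [build_command(model, task) for task in tasks]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands


for cmd in run_sweep("NovaSky-AI/Sky-T1-32B-Preview", ["aime24", "math500"]):
    print(" ".join(cmd))
```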

We support a wide variety of benchmarks in mathematics, science, and coding:

  • AIME'24
  • MATH500
  • GPQADiamond
  • MMLU
  • ARC-Challenge
  • OlympiadBench
  • AMC'23
  • TACO
  • APPS
  • LiveCodeBench
  • MMLU Pro
  • MinervaMath
  • GSM8K
  • AIME'25

For more details, please refer to our evaluation guide and the evaluation README.

Evaluation results

Below, we show evaluation results for the Sky-T1-32B-Preview model across math, coding, and science benchmarks.

| Metric | Sky-T1-32B-Preview | Qwen-2.5-32B-Instruct | QwQ | o1-preview |
| --- | --- | --- | --- | --- |
| Math500 | 86.4 | 81.4 | 92.2 | 81.4 |
| AIME2024 | 43.3 | 16.7 | 50.0 | 40.0 |
| LiveCodeBench-Easy | 86.3 | 84.6 | 90.7 | 92.9 |
| LiveCodeBench-Medium | 56.8 | 40.8 | 56.3 | 54.9 |
| LiveCodeBench-Hard | 17.9 | 9.8 | 17.1 | 16.3 |
| GPQA-Diamond | 56.8 | 45.5 | 52.5 | 75.2 |
| OlympiadBench (Math, EN) | 59.79 | 46.74 | 62.17 | 59.2 |
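
As a rough one-number summary of the table above, a simple macro-average over the seven benchmarks can be computed directly from the reported scores (equal weight per benchmark, which glosses over their very different difficulty; illustrative only):

```python
# Macro-average of the benchmark scores reported above (equal weight per
# benchmark; purely illustrative, since the benchmarks differ in difficulty
# and scale). Values are copied verbatim from the table.
scores = {
    "Sky-T1-32B-Preview":    [86.4, 43.3, 86.3, 56.8, 17.9, 56.8, 59.79],
    "Qwen-2.5-32B-Instruct": [81.4, 16.7, 84.6, 40.8, 9.8, 45.5, 46.74],
    "QwQ":                   [92.2, 50.0, 90.7, 56.3, 17.1, 52.5, 62.17],
    "o1-preview":            [81.4, 40.0, 92.9, 54.9, 16.3, 75.2, 59.2],
}

for model, values in scores.items():
    avg = sum(values) / len(values)
    print(f"{model}: {avg:.2f}")
# → Sky-T1-32B-Preview: 58.18, Qwen-2.5-32B-Instruct: 46.51,
#   QwQ: 60.14, o1-preview: 59.99
```

On this crude aggregate, Sky-T1-32B-Preview sits well above its Qwen-2.5-32B-Instruct base and close to QwQ and o1-preview.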

Results on non-reasoning benchmarks

We also evaluate on non-reasoning benchmarks (instruction following, QA, etc.) to test whether the model has traded off capability in other domains for better performance on reasoning-related benchmarks.

| Metric | Sky-T1-32B-Preview | Qwen-2.5-32B-Instruct | QwQ-32B-Preview | Eval Implementation |
| --- | --- | --- | --- | --- |
| MMLU (0 shot; no CoT) | 78.36 | 74.14 | 71.23 | lm_eval |
| MMLU (5 shot; no CoT) | 82.46 | 82.62 | 82.32 | lm_eval |
| ARC-C (0 shot; no CoT) | 49.49 | 49.4 | 49.66 | lm_eval |
| IFEval | 75.79 | 78.74 | 42.51 | lm_eval |
| LLM-as-a-Judge | 9.12 | 9.19 | 8.30 | fastchat |
| MGSM (0 shot; direct) | 33 | 42.3 | 19.07 | lm_eval |
| MGSM (8-shot; direct) | 58.4 | 61.47 | 58.5 | lm_eval |
| BFCL-v3 | 53.18 | 58.92 | 17.41 | BFCL |
| Arena-Hard | 74.79 | 66.51 | 52.6 | Arena-Hard-Auto |
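
To make the trade-off question concrete, the per-benchmark deltas between Sky-T1-32B-Preview and its Qwen-2.5-32B-Instruct base can be computed directly from the table above:

```python
# Per-benchmark difference (Sky-T1-32B-Preview minus Qwen-2.5-32B-Instruct)
# on the non-reasoning benchmarks above; positive means Sky-T1 is ahead.
# Values are copied verbatim from the table.
sky_t1 = {
    "MMLU (0 shot)": 78.36, "MMLU (5 shot)": 82.46, "ARC-C": 49.49,
    "IFEval": 75.79, "LLM-as-a-Judge": 9.12, "MGSM (0 shot)": 33.0,
    "MGSM (8-shot)": 58.4, "BFCL-v3": 53.18, "Arena-Hard": 74.79,
}
qwen = {
    "MMLU (0 shot)": 74.14, "MMLU (5 shot)": 82.62, "ARC-C": 49.4,
    "IFEval": 78.74, "LLM-as-a-Judge": 9.19, "MGSM (0 shot)": 42.3,
    "MGSM (8-shot)": 61.47, "BFCL-v3": 58.92, "Arena-Hard": 66.51,
}

for name in sky_t1:
    print(f"{name}: {sky_t1[name] - qwen[name]:+.2f}")
```

The deltas are small on most benchmarks (with MGSM and BFCL-v3 as the notable regressions and MMLU 0-shot and Arena-Hard as gains), which is the basis for the "no broad capability trade-off" reading of these results.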

For more details, refer here.

Fully Open-source: Driving Progress Together

We believe that open-source collaboration drives progress, and with Sky-T1-32B-Preview, we are fully committed to empowering the community. We open-source all details (data, code, and model weights) so that the community can easily replicate and improve on our results:

| Model | Data | Code | Report | Math domain | Coding domain | Model Weights |
| --- | --- | --- | --- | --- | --- | --- |
| Sky-T1-32B-Preview | | | | | | |
| STILL-2 | | | | | | |
| Journey | | | | | | |
| QwQ | | | | | | |
| o1 | | | | | | |

Citation

The code in this repository is mostly described in the post below. Please consider citing this work if you find the repository helpful.

@misc{sky_t1_2025,
  author       = {NovaSky Team},
  title        = {Sky-T1: Train your own O1 preview model within \$450},
  howpublished = {https://novasky-ai.github.io/posts/sky-t1},
  note         = {Accessed: 2025-01-09},
  year         = {2025}
}

Acknowledgement

This work was done at the Berkeley Sky Computing Lab, with amazing compute support from Lambda Labs, Anyscale, and Databricks. We would like to express our gratitude for the valuable academic feedback and support from the STILL-2 team and Junyang Lin from the Qwen team.