December 2, 2025

# Ovis-U1: Unified Understanding, Generation, and Editing


Building on the foundation of the Ovis series, Ovis-U1 is a 3-billion-parameter unified model that seamlessly integrates multimodal understanding, text-to-image generation, and image editing within a single powerful framework.


*The overall architecture of Ovis-U1 (cf. Fig. 2 in our report).*

๐Ÿ† Highlights

- **Unified Capabilities:** A single model excels at three core tasks: understanding complex scenes, generating images from text, and performing precise edits based on instructions.
- **Advanced Architecture:** Ovis-U1 features a powerful diffusion-based visual decoder (MMDiT) and a bidirectional token refiner, enabling high-fidelity image synthesis and enhanced interaction between text and vision.
- **Synergistic Unified Training:** Unlike models trained on single tasks, Ovis-U1 is trained on a diverse mix of understanding, generation, and editing data simultaneously. Our findings show that this approach improves generalization, letting the model handle real-world multimodal challenges with high accuracy.
- **State-of-the-Art Performance:** Ovis-U1 achieves leading scores on multiple academic benchmarks, surpassing strong contemporary models in multimodal understanding (69.6 on OpenCompass), generation (83.72 on DPG-Bench), and editing (4.00 on ImgEdit-Bench).

## ✨ Showcase

Here are some examples demonstrating the capabilities of Ovis-U1.

Ovis-U1 examples

## 🚀 News

## 📦 Installation

Ovis-U1 has been tested with Python 3.10, PyTorch 2.4.0, Transformers 4.51.3, and DeepSpeed 0.15.4. For the full list of package dependencies, please see `requirements.txt`.

```shell
git clone git@github.com:AIDC-AI/Ovis-U1.git
conda create -n ovis-u1 python=3.10 -y
conda activate ovis-u1
cd Ovis-U1
pip install -r requirements.txt
pip install -e .
```

๐Ÿ› ๏ธ Inference

We provide simple scripts to test the different capabilities of Ovis-U1.

For single-image understanding, please run

```shell
python test_img_to_txt.py
```

For multi-image understanding, please run

```shell
python test_multi_img_to_txt.py
```

For text-to-image generation, please run

```shell
python test_txt_to_img.py \
    --height 1024 \
    --width 1024 \
    --steps 50 \
    --seed 42 \
    --txt_cfg 5
```
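The `--txt_cfg` flag sets the text classifier-free guidance scale. As a rough sketch of the underlying technique (the function and values below are illustrative, not the repository's actual code), each denoising step pushes the prediction away from the unconditional branch:

```python
import numpy as np

def cfg(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: blend the unconditional and
    text-conditional predictions; `scale` plays the role of txt_cfg."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy vectors standing in for noise predictions; scale=5 mirrors the
# default --txt_cfg above. scale=1 recovers the conditional prediction.
eps_u = np.array([0.1, 0.2])
eps_c = np.array([0.3, 0.1])
print(cfg(eps_u, eps_c, 5.0))
```

Larger scales follow the text prompt more strictly at some cost to diversity and naturalness.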

For image editing, please run

```shell
python test_img_edit.py \
    --steps 50 \
    --img_cfg 1.5 \
    --txt_cfg 6
```
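Image editing exposes two scales, `--img_cfg` and `--txt_cfg`. A common way to combine the two conditions is InstructPix2Pix-style dual guidance, sketched below as a general technique (this is an assumption for illustration, not necessarily Ovis-U1's exact implementation):

```python
import numpy as np

def dual_cfg(eps_uncond, eps_img, eps_full, img_scale, txt_scale):
    """Two-branch guidance: img_scale steers toward the source-image
    condition, txt_scale steers toward the edit instruction on top."""
    return (eps_uncond
            + img_scale * (eps_img - eps_uncond)
            + txt_scale * (eps_full - eps_img))

# Toy vectors standing in for the three noise-prediction branches,
# using the default scales from the command above.
eps_u = np.zeros(2)          # no conditioning
eps_i = np.array([0.2, 0.0]) # image-only conditioning
eps_f = np.array([0.5, 0.4]) # image + instruction conditioning
print(dual_cfg(eps_u, eps_i, eps_f, 1.5, 6.0))
```

Intuitively, a higher `img_cfg` keeps the edit faithful to the input image, while a higher `txt_cfg` enforces the instruction more aggressively.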

Alternatively, you can try Ovis-U1 directly in your browser via the Hugging Face Space.

## 📊 Performance

### OpenCompass Multi-modal Academic Benchmarks

| Model | Avg | MMB | MMS | MMMU | MathVista | Hallusion | AI2D | OCRBench | MMVet |
|---|---|---|---|---|---|---|---|---|---|
| GPT-4o | 75.4 | 86 | 70.2 | 72.9 | 71.6 | 57 | 86.3 | 82.2 | 76.9 |
| InternVL2.5-2B | 59.9 | 70.9 | 54.3 | 43.2 | 51.1 | 42.3 | 74.9 | 80.2 | 62.6 |
| SAIL-VL-2B | 61 | 73.7 | 56.5 | 44.1 | 62.8 | 45.9 | 77.4 | 83.1 | 44.2 |
| InternVL3-2B | 61.1 | 78 | 61.1 | 48.7 | 57.6 | 41.9 | 78.6 | 83.1 | 67 |
| Qwen2.5-VL-3B | 64.5 | 76.8 | 56.3 | 51.2 | 61.2 | 46.6 | 81.4 | 82.8 | 60 |
| Ovis2-2B | 65.2 | 76.9 | 56.7 | 45.6 | 64.1 | 50.2 | 82.7 | 87.3 | 58.3 |
| SAIL-VL-1.5-2B | 67 | 78.5 | 62.6 | 46.4 | 67 | 50 | 83.7 | 89.1 | 58.8 |
| Ristretto-3B | 67.7 | 80.2 | 62.8 | 51.3 | 67.6 | 50.2 | 84.2 | 84.7 | 60.7 |
| Ovis-U1 | 69.6 | 77.8 | 61.3 | 51.1 | 69.4 | 56.3 | 85.6 | 88.3 | 66.7 |

### GenEval

| Model | Overall | Single object | Two object | Counting | Colors | Position | Attribute binding |
|---|---|---|---|---|---|---|---|
| GPT-4o | 0.84 | 0.99 | 0.92 | 0.85 | 0.92 | 0.75 | 0.61 |
| BAGEL | 0.82 | 0.99 | 0.94 | 0.81 | 0.88 | 0.64 | 0.63 |
| BAGEL 📝 | 0.88 | 0.98 | 0.95 | 0.84 | 0.95 | 0.78 | 0.77 |
| UniWorld-V1 | 0.80 | 0.99 | 0.93 | 0.79 | 0.89 | 0.49 | 0.70 |
| UniWorld-V1 📝 | 0.84 | 0.98 | 0.93 | 0.81 | 0.89 | 0.74 | 0.71 |
| OmniGen | 0.68 | 0.98 | 0.84 | 0.66 | 0.74 | 0.40 | 0.43 |
| OmniGen2 | 0.80 | 1 | 0.95 | 0.64 | 0.88 | 0.55 | 0.76 |
| OmniGen2 📝 | 0.86 | 0.99 | 0.96 | 0.74 | 0.98 | 0.71 | 0.75 |
| Ovis-U1 | 0.89 | 0.98 | 0.98 | 0.90 | 0.92 | 0.79 | 0.75 |

📝 denotes using the rewritten prompts.

### DPG-Bench

| Model | Overall | Global | Entity | Attribute | Relation | Other |
|---|---|---|---|---|---|---|
| BAGEL | 85.07 | 88.94 | 90.37 | 91.29 | 90.82 | 88.67 |
| UniWorld-V1 | 81.38 | 83.64 | 88.39 | 88.44 | 89.27 | 87.22 |
| OmniGen | 81.16 | 87.90 | 88.97 | 88.47 | 87.95 | 83.56 |
| OmniGen2 | 83.57 | 88.81 | 88.83 | 90.18 | 89.37 | 90.27 |
| Ovis-U1 | 83.72 | 82.37 | 90.08 | 88.68 | 93.35 | 85.20 |

### ImgEdit-Bench

| Model | Overall | Add | Adjust | Extract | Replace | Remove | Background | Style | Hybrid | Action |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4o | 4.2 | 4.61 | 4.33 | 2.9 | 4.35 | 3.66 | 4.57 | 4.93 | 3.96 | 4.89 |
| MagicBrush | 1.90 | 2.84 | 1.58 | 1.51 | 1.97 | 1.58 | 1.75 | 2.38 | 1.62 | 1.22 |
| Instruct-P2P | 1.88 | 2.45 | 1.83 | 1.44 | 2.01 | 1.50 | 1.44 | 3.55 | 1.2 | 1.46 |
| AnyEdit | 2.45 | 3.18 | 2.95 | 1.88 | 2.47 | 2.23 | 2.24 | 2.85 | 1.56 | 2.65 |
| UltraEdit | 2.7 | 3.44 | 2.81 | 2.13 | 2.96 | 1.45 | 2.83 | 3.76 | 1.91 | 2.98 |
| OmniGen | 2.96 | 3.47 | 3.04 | 1.71 | 2.94 | 2.43 | 3.21 | 4.19 | 2.24 | 3.38 |
| Step1X-Edit | 3.06 | 3.88 | 3.14 | 1.76 | 3.40 | 2.41 | 3.16 | 4.63 | 2.64 | 2.52 |
| ICEdit | 3.05 | 3.58 | 3.39 | 1.73 | 3.15 | 2.93 | 3.08 | 3.84 | 2.04 | 3.68 |
| BAGEL | 3.2 | 3.56 | 3.31 | 1.7 | 3.3 | 2.62 | 3.24 | 4.49 | 2.38 | 4.17 |
| UniWorld-V1 | 3.26 | 3.82 | 3.64 | 2.27 | 3.47 | 3.24 | 2.99 | 4.21 | 2.96 | 2.74 |
| OmniGen2 | 3.44 | 3.57 | 3.06 | 1.77 | 3.74 | 3.2 | 3.57 | 4.81 | 2.52 | 4.68 |
| Ovis-U1 | 4.00 | 4.13 | 3.62 | 2.98 | 4.45 | 4.06 | 4.22 | 4.69 | 3.45 | 4.61 |

### GEdit-Bench-EN

| Model | Avg | Background Change | Color Alteration | Material Modification | Motion Change | Portrait Beautification | Style Transfer | Subject Addition | Subject Removal | Subject Replacement | Text Modification | Tone Transformation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4o | 7.534 | 7.205 | 6.491 | 6.607 | 8.096 | 7.768 | 6.961 | 7.622 | 8.331 | 8.067 | 7.427 | 8.301 |
| AnyEdit | 3.212 | 4.663 | 4.260 | 2.537 | 2.024 | 3.479 | 2.032 | 3.995 | 3.089 | 3.180 | 0.922 | 5.151 |
| Instruct-Pix2Pix | 3.684 | 3.825 | 5.182 | 3.688 | 3.509 | 4.339 | 4.560 | 3.461 | 2.031 | 4.237 | 0.955 | 4.733 |
| MagicBrush | 4.518 | 5.637 | 5.136 | 5.078 | 4.513 | 4.487 | 4.439 | 5.252 | 3.704 | 4.941 | 1.384 | 5.130 |
| OmniGen | 5.062 | 5.281 | 6.003 | 5.308 | 2.916 | 3.087 | 4.903 | 6.628 | 6.352 | 5.616 | 4.519 | 5.064 |
| Gemini | 6.315 | 6.781 | 6.369 | 6.040 | 6.938 | 5.591 | 4.676 | 7.501 | 6.447 | 7.003 | 5.765 | 6.350 |
| Step1X-Edit | 6.701 | 6.547 | 6.545 | 6.204 | 6.483 | 6.787 | 7.221 | 6.975 | 6.512 | 7.068 | 6.921 | 6.448 |
| Doubao | 6.754 | 7.430 | 7.095 | 6.339 | 6.973 | 6.972 | 6.767 | 7.674 | 6.748 | 7.447 | 3.471 | 7.383 |
| BAGEL | 6.519 | 7.324 | 6.909 | 6.381 | 4.753 | 4.573 | 6.150 | 7.896 | 7.164 | 7.021 | 7.320 | 6.218 |
| Ovis-U1 | 6.420 | 7.486 | 6.879 | 6.208 | 4.790 | 5.981 | 6.463 | 7.491 | 7.254 | 7.266 | 4.482 | 6.314 |

- Note that the leaderboard has been updated by this commit. The results shown here are from an earlier version.

## 📚 Citation

If you find Ovis-U1 useful for your research or applications, please cite our technical report:

```bibtex
@article{wang2025ovisu1,
  title={Ovis-U1 Technical Report},
  author={Wang, Guo-Hua and Zhao, Shanshan and Zhang, Xinjie and Cao, Liangfu and Zhan, Pengxin and Duan, Lunhao and Lu, Shiyin and Fu, Minghao and Zhao, Jianshan and Li, Yang and Chen, Qing-Guo},
  journal={arXiv preprint arXiv:2506.23044},
  year={2025}
}
```

๐Ÿ™ Acknowledgments

The code is built upon Ovis and FLUX. We thank their authors for open-sourcing their great work.

## 📄 License

This project is released under the Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0, SPDX-License-Identifier: Apache-2.0).

## 🚨 Disclaimer

We used compliance-checking algorithms during training to ensure, to the best of our ability, that the trained model is compliant. Due to the complexity of the data and the diversity of language-model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter.

## 🔥 We are hiring!

We are looking for both interns and full-time researchers to join our team, focusing on multimodal understanding, generation, reasoning, AI agents, and unified multimodal models. If you are interested in exploring these exciting areas, please reach out to us at qingguo.cqg@alibaba-inc.com.