
MindNLP

Run HuggingFace Models on MindSpore with Zero Code Changes

The easiest way to use 200,000+ HuggingFace models on Ascend NPU, GPU, and CPU


Quick Start · Features · Installation · Why MindNLP · Documentation


🎯 What is MindNLP?

MindNLP bridges the gap between HuggingFace's massive model ecosystem and MindSpore's hardware acceleration. With just import mindnlp, you can run any HuggingFace model on Ascend NPU, NVIDIA GPU, or CPU - no code changes required.

import mindnlp  # That's it! HuggingFace now runs on MindSpore
from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen2-0.5B")
print(pipe("Hello, I am")[0]["generated_text"])

⚡ Quick Start

Text Generation with LLMs

import mindspore
import mindnlp
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen3-8B",
    ms_dtype=mindspore.bfloat16,
    device_map="auto"
)

messages = [{"role": "user", "content": "Write a haiku about coding"}]
print(pipe(messages, max_new_tokens=100)[0]["generated_text"][-1]["content"])

Image Generation with Stable Diffusion

import mindspore
import mindnlp
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    ms_dtype=mindspore.float16
)
image = pipe("A sunset over mountains, oil painting style").images[0]
image.save("sunset.png")

BERT for Text Classification

import mindnlp
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("MindNLP is awesome!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # raw class scores; the classification head is freshly initialized until you fine-tune

✨ Features

🤗 Full HuggingFace Compatibility

  • 200,000+ models from HuggingFace Hub
  • Transformers - All model architectures
  • Diffusers - Stable Diffusion, SDXL, ControlNet
  • Zero code changes - Just import mindnlp

🚀 Hardware Acceleration

  • Ascend NPU - Full support for Huawei AI chips
  • NVIDIA GPU - CUDA acceleration
  • CPU - Optimized CPU execution
  • Multi-device - Automatic device placement

🔧 Advanced Capabilities

  • Mixed precision - FP16/BF16 training & inference
  • Quantization - INT8/INT4 with BitsAndBytes
  • Distributed - Multi-GPU/NPU training
  • PEFT/LoRA - Parameter-efficient fine-tuning (see the sketch after this list)
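
For example, parameter-efficient fine-tuning with LoRA can follow the standard PEFT workflow. The sketch below assumes the upstream peft package works after import mindnlp in the same way transformers does; the base model and target_modules are illustrative choices, not prescribed by MindNLP.

import mindnlp  # patches the HuggingFace libraries to run on MindSpore
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model  # assumed to be usable after the mindnlp import

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")

# Wrap the base model with LoRA adapters; only the small adapter matrices are trainable.
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=32,                         # scaling applied to the update
    target_modules=["q_proj", "v_proj"],   # which projections to adapt (model-dependent)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports how few parameters LoRA actually trains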

📦 Easy Integration

  • PyTorch-compatible API via mindtorch
  • Safetensors support for fast loading
  • Model Hub mirrors for faster downloads (see the sketch after this list)
  • Comprehensive documentation
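
A minimal sketch of the download-related items above: pointing the Hub client at a mirror via the standard HF_ENDPOINT environment variable (the mirror URL here is only an example), then loading a model, which from_pretrained reads from .safetensors files whenever the repository provides them.

import os

# Set the mirror before the HuggingFace libraries are imported so they pick it up.
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")  # example mirror URL

import mindnlp
from transformers import AutoModel

# Safetensors checkpoints are preferred automatically when available,
# which speeds up loading and avoids executing pickled code.
model = AutoModel.from_pretrained("bert-base-uncased")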

🧪 Mindtorch NPU Debugging

Mindtorch NPU ops are async by default. Use torch.npu.synchronize() when you need to block on results. For debugging, set ACL_LAUNCH_BLOCKING=1 to force per-op synchronization.
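
A minimal sketch of that workflow, assuming the mindtorch shim is importable as torch and exposes an "npu" device as the note above implies; the environment variable must be set before any NPU work is launched.

import os
os.environ["ACL_LAUNCH_BLOCKING"] = "1"  # debugging only: forces per-op synchronization

import mindnlp
import torch  # assumed to resolve to the mindtorch shim once mindnlp is imported

x = torch.randn(1024, 1024, device="npu")
y = x @ x                    # launched asynchronously on the NPU
torch.npu.synchronize()      # block until all queued NPU work has finished
print(y.sum())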

📦 Installation

# From PyPI (recommended)
pip install mindnlp

# From source (latest features)
pip install git+https://github.com/mindspore-lab/mindnlp.git
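
A quick sanity check after installing (a sketch; run_check reports the MindSpore version and whether the backend is usable):

import mindspore
import mindnlp

mindspore.run_check()  # verifies that MindSpore can reach the CPU/GPU/NPU backend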

📋 Version Compatibility

MindNLP    MindSpore       Python
0.6.x      ≥2.7.1          3.10-3.11
0.5.x      2.5.0-2.7.0     3.10-3.11
0.4.x      2.2.x-2.5.0     3.9-3.11

💡 Why MindNLP?

Feature                  MindNLP         PyTorch + HF    TensorFlow + HF
HuggingFace Models       ✅ 200K+        ✅ 200K+        ⚠️ Limited
Ascend NPU Support       ✅ Native
Zero Code Migration      ✅              -
Unified API              ✅
Chinese Model Support    ✅ Excellent    ✅ Good         ⚠️ Limited

🏆 Key Advantages

  1. Instant Migration: Your existing HuggingFace code works immediately
  2. Ascend Optimization: Native support for Huawei NPU hardware
  3. Production Ready: Battle-tested in enterprise deployments
  4. Active Community: Regular updates and responsive support

🗺️ Supported Models

MindNLP supports all models from HuggingFace Transformers and Diffusers. Here are some popular ones:

Category      Models
LLMs          Qwen, Llama, ChatGLM, Mistral, Phi, Gemma, BLOOM, Falcon
Vision        ViT, CLIP, Swin, ConvNeXt, SAM, BLIP
Audio         Whisper, Wav2Vec2, HuBERT, MusicGen
Diffusion     Stable Diffusion, SDXL, ControlNet
Multimodal    LLaVA, Qwen-VL, ALIGN

👉 View all supported models
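
Every architecture in the table loads through the same HuggingFace APIs. As an illustration, an audio model such as Whisper runs through the usual pipeline call (the model ID and audio file below are placeholders):

import mindnlp
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
print(asr("speech_sample.wav")["text"])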

📚 Resources

🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

# Clone and install for development
git clone https://github.com/mindspore-lab/mindnlp.git
cd mindnlp
pip install -e ".[dev]"

👥 Community

Join the MindSpore NLP SIG (Special Interest Group) for discussions, events, and collaboration:

QQ Group

⭐ Star History

Star History Chart

If you find MindNLP useful, please consider giving it a star ⭐ - it helps the project grow!

📄 License

MindNLP is released under the Apache 2.0 License.

📖 Citation

@misc{mindnlp2022,
    title={MindNLP: Easy-to-use and High-performance NLP and LLM Framework Based on MindSpore},
    author={MindNLP Contributors},
    howpublished={\url{https://github.com/mindspore-lab/mindnlp}},
    year={2022}
}

Made with ❤️ by the MindSpore Lab team