THIS PROJECT IS ARCHIVED
May 4, 2026 · View on GitHub
Intel will not provide or guarantee development of or support for this project, including, but not limited to, maintenance, bug fixes, new releases, or updates.
Patches to this project are no longer accepted by Intel.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the community, please create your own fork of the project.
IPEX-LLM Tutorial
English | 中文
IPEX-LLM is a low-bit LLM library for Intel XPU (Xeon/Core/Flex/Arc/PVC). This repository contains tutorials that help you understand what IPEX-LLM is and how to use it to build LLM applications.
The tutorials are organized as follows:
- **Chapter 1: Introduction** introduces what IPEX-LLM is and what you can do with it.
- **Chapter 2: Environment Setup** provides a set of best practices for setting up your environment.
- **Chapter 3: Application Development: Basics** introduces the basic usage of IPEX-LLM and how to build a very simple chat application.
- **Chapter 4: Chinese Support** shows the usage of some LLMs that support Chinese input/output, e.g. ChatGLM2 and Baichuan.
- **Chapter 5: Application Development: Intermediate** introduces intermediate-level knowledge for application development using IPEX-LLM, e.g. how to build a more sophisticated chatbot, speech recognition, etc.
- **Chapter 6: GPU Acceleration** introduces how to use Intel GPUs to accelerate LLMs using IPEX-LLM.
- **Chapter 7: Finetune** introduces how to fine-tune models using IPEX-LLM.
- **Chapter 8: Application Development: Advanced** introduces advanced-level knowledge for application development using IPEX-LLM, e.g. LangChain usage.