
Updated on 2024.12.23: DI-engine-v0.5.3

Introduction to DI-engine

Documentation | 中文文档 | Tutorials | Feature | Task & Middleware | TreeTensor | Roadmap

DI-engine is a generalized decision intelligence engine for PyTorch and JAX.

It provides python-first and asynchronous-native task and middleware abstractions, and modularly integrates several of the most important decision-making concepts: Env, Policy and Model. Based on these mechanisms, DI-engine supports various deep reinforcement learning algorithms with superior performance, high efficiency, well-organized documentation and unit tests (see the pipeline sketch after the list below):

  • Most basic DRL algorithms: such as DQN, Rainbow, PPO, TD3, SAC, R2D2, IMPALA
  • Multi-agent RL algorithms: such as QMIX, WQMIX, MAPPO, HAPPO, ACE
  • Imitation learning algorithms (BC/IRL/GAIL): such as GAIL, SQIL, Guided Cost Learning, Implicit BC
  • Offline RL algorithms: BCQ, CQL, TD3BC, Decision Transformer, EDAC, Diffuser, Decision Diffuser, SO2
  • Model-based RL algorithms: SVG, STEVE, MBPO, DDPPO, DreamerV3
  • Exploration algorithms: HER, RND, ICM, NGU
  • LLM + RL Algorithms: PPO-max, DPO, PromptPG, PromptAWR
  • Other algorithms: such as PER, PLR, PCGrad
  • MCTS + RL algorithms: AlphaZero, MuZero, please refer to LightZero
  • Generative Model + RL algorithms: Diffusion-QL, QGPO, SRPO, please refer to GenerativeRL
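
To make the task and middleware abstraction concrete, here is a toy, self-contained sketch of the pattern. All names here (Context, collect, train, run) are hypothetical illustrations, not the actual DI-engine API; see the tutorials linked below for real pipelines.

    # Toy sketch of the middleware pattern (hypothetical names, not the real API):
    # middleware are plain callables sharing a mutable context, chained per iteration.
    class Context(dict):
        """Shared mutable state passed through every middleware."""

    def collect(ctx: Context) -> None:
        # stand-in for rolling out the policy in the environment
        ctx["data"] = [{"obs": 0, "reward": 1.0}]

    def train(ctx: Context) -> None:
        # stand-in for updating the policy on the collected data
        ctx["loss"] = sum(d["reward"] for d in ctx["data"])

    def run(middlewares, max_iter=3) -> Context:
        ctx = Context()
        for _ in range(max_iter):
            for mw in middlewares:
                mw(ctx)
        return ctx

    final_ctx = run([collect, train])
    print(final_ctx["loss"])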

DI-engine aims to standardize diverse decision intelligence environments and applications, supporting both academic research and prototype products. Various training pipelines and customized decision AI applications are also supported:

  • Traditional academic environments

    • DI-zoo: various decision intelligence demonstrations and benchmark environments with DI-engine.
  • Tutorial courses

  • Real world decision AI applications

    • DI-star: Decision AI in StarCraft II
    • PsyDI: Towards a Multi-Modal and Interactive Chatbot for Psychological Assessments
    • DI-drive: Auto-driving platform
    • DI-sheep: Decision AI in 3 Tiles Game
    • DI-smartcross: Decision AI in Traffic Light Control
    • DI-bioseq: Decision AI in Biological Sequence Prediction and Searching
    • DI-1024: Deep Reinforcement Learning + 1024 Game
  • Research papers

    • InterFuser: [CoRL 2022] Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
    • ACE: [AAAI 2023] ACE: Cooperative Multi-agent Q-learning with Bidirectional Action-Dependency
    • GoBigger: [ICLR 2023] Multi-Agent Decision Intelligence Environment
    • DOS: [CVPR 2023] ReasonNet: End-to-End Driving with Temporal and Global Reasoning
    • LightZero: [NeurIPS 2023 Spotlight] A lightweight and efficient MCTS/AlphaZero/MuZero algorithm toolkit
    • SO2: [AAAI 2024] A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning
    • LMDrive: [CVPR 2024] LMDrive: Closed-Loop End-to-End Driving with Large Language Models
    • SmartRefine: [CVPR 2024] SmartRefine: A Scenario-Adaptive Refinement Framework for Efficient Motion Prediction
    • ReZero: Boosting MCTS-based Algorithms by Backward-view and Entire-buffer Reanalyze
    • UniZero: Generalized and Efficient Planning with Scalable Latent World Models
  • Docs and Tutorials

On the low-level end, DI-engine comes with a set of highly reusable modules, including RL optimization functions, PyTorch utilities and auxiliary tools (see the GAE sketch below for a flavor of the optimization functions).
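
For instance, the snippet below is a minimal plain-PyTorch sketch of generalized advantage estimation (GAE), one of the RL optimization functions bundled with DI-engine (listed as rl_utils/gae in the algorithm table below); the actual ding.rl_utils signature may differ.

    import torch

    def gae(rewards, values, next_values, dones, gamma=0.99, lam=0.95):
        # Generalized Advantage Estimation over one trajectory of length T.
        # All inputs are 1-D tensors of shape (T,); dones marks episode ends.
        deltas = rewards + gamma * next_values * (1 - dones) - values
        advantages = torch.zeros_like(rewards)
        last = 0.0
        for t in reversed(range(rewards.shape[0])):
            last = deltas[t] + gamma * lam * (1 - dones[t]) * last
            advantages[t] = last
        return advantages

    # usage on a random 8-step trajectory
    T = 8
    adv = gae(torch.rand(T), torch.randn(T), torch.randn(T), torch.zeros(T))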

DI-engine also provides dedicated system optimizations and designs for efficient and robust large-scale RL training:


Have fun with exploration and exploitation.


Installation

You can simply install DI-engine from PyPI with the following command:

pip install DI-engine

For more information about installation, you can refer to the installation guide.

Our Docker Hub repo can be found here; we provide a base image and env images bundled with common RL environments (see the example pull command after the list).

  • base: opendilab/ding:nightly
  • rpc: opendilab/ding:nightly-rpc
  • atari: opendilab/ding:nightly-atari
  • mujoco: opendilab/ding:nightly-mujoco
  • dmc: opendilab/ding:nightly-dmc2gym
  • metaworld: opendilab/ding:nightly-metaworld
  • smac: opendilab/ding:nightly-smac
  • grf: opendilab/ding:nightly-grf
  • cityflow: opendilab/ding:nightly-cityflow
  • evogym: opendilab/ding:nightly-evogym
  • d4rl: opendilab/ding:nightly-d4rl
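
For example, to fetch the base image:

docker pull opendilab/ding:nightly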

The detailed documentation is hosted on doc | 中文文档.

Quick Start

3 Minutes Kickoff

3 Minutes Kickoff (colab)

DI-engine Huggingface Kickoff (colab)

How to migrate a new RL Env | 如何迁移一个新的强化学习环境

How to customize the neural network model | 如何定制策略使用的神经网络模型

Examples of testing/deploying an RL policy (zh)

Comparison between the old and new pipelines (zh)
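
As a minimal end-to-end run, the DQN-on-CartPole entry from the algorithm table below can be launched directly from the CLI after installation (the same command appears in the table's first row):

ding -m serial -c cartpole_dqn_config.py -s 0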

Feature

Algorithm Versatility


discrete  means discrete action space (discrete/continuous/hybrid are the only kinds of labels for the standard DRL algorithms, No. 1-23)

continuous  means continuous action space (No. 1-23)

hybrid  means hybrid (discrete + continuous) action space (No. 1-23)

dist  means distributed reinforcement learning

MARL  means multi-agent reinforcement learning

exp  means exploration mechanisms in reinforcement learning

IL  means imitation learning

offline  means offline reinforcement learning

mbrl  means model-based reinforcement learning

other  means other sub-direction algorithms, usually used as plug-ins in the whole pipeline

P.S.: The .py files in the Runnable Demo column can be found in dizoo.

| No. | Algorithm | Label | Doc and Implementation | Runnable Demo |
| --- | --- | --- | --- | --- |
| 1 | DQN | discrete | DQN doc, DQN doc (zh), policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0 |
| 2 | C51 | discrete | C51 doc, policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0 |
| 3 | QRDQN | discrete | QRDQN doc, policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0 |
| 4 | IQN | discrete | IQN doc, policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0 |
| 5 | FQF | discrete | FQF doc, policy/fqf | ding -m serial -c cartpole_fqf_config.py -s 0 |
| 6 | Rainbow | discrete | Rainbow doc, policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0 |
| 7 | SQL | discrete continuous | SQL doc, policy/sql | ding -m serial -c cartpole_sql_config.py -s 0 |
| 8 | R2D2 | dist discrete | R2D2 doc, policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0 |
| 9 | PG | discrete | PG doc, policy/pg | ding -m serial -c cartpole_pg_config.py -s 0 |
| 10 | PromptPG | discrete | policy/prompt_pg | ding -m serial_onpolicy -c tabmwp_pg_config.py -s 0 |
| 11 | A2C | discrete | A2C doc, policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0 |
| 12 | PPO/MAPPO | discrete continuous MARL | PPO doc, policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0 |
| 13 | PPG | discrete | PPG doc, policy/ppg | python3 -u cartpole_ppg_main.py |
| 14 | ACER | discrete continuous | ACER doc, policy/acer | ding -m serial -c cartpole_acer_config.py -s 0 |
| 15 | IMPALA | dist discrete | IMPALA doc, policy/impala | ding -m serial -c cartpole_impala_config.py -s 0 |
| 16 | DDPG/PADDPG | continuous hybrid | DDPG doc, policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0 |
| 17 | TD3 | continuous hybrid | TD3 doc, policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0 |
| 18 | D4PG | continuous | D4PG doc, policy/d4pg | python3 -u pendulum_d4pg_config.py |
| 19 | SAC/[MASAC] | discrete continuous MARL | SAC doc, policy/sac | ding -m serial -c pendulum_sac_config.py -s 0 |
| 20 | PDQN | hybrid | policy/pdqn | ding -m serial -c gym_hybrid_pdqn_config.py -s 0 |
| 21 | MPDQN | hybrid | policy/pdqn | ding -m serial -c gym_hybrid_mpdqn_config.py -s 0 |
| 22 | HPPO | hybrid | policy/ppo | ding -m serial_onpolicy -c gym_hybrid_hppo_config.py -s 0 |
| 23 | BDQ | hybrid | policy/bdq | python3 -u hopper_bdq_config.py |
| 24 | MDQN | discrete | policy/mdqn | python3 -u asterix_mdqn_config.py |
| 25 | QMIX | MARL | QMIX doc, policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0 |
| 26 | COMA | MARL | COMA doc, policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0 |
| 27 | QTran | MARL | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0 |
| 28 | WQMIX | MARL | WQMIX doc, policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0 |
| 29 | CollaQ | MARL | CollaQ doc, policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0 |
| 30 | MADDPG | MARL | MADDPG doc, policy/ddpg | ding -m serial -c ptz_simple_spread_maddpg_config.py -s 0 |
| 31 | GAIL | IL | GAIL doc, reward_model/gail | ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0 |
| 32 | SQIL | IL | SQIL doc, entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0 |
| 33 | DQFD | IL | DQFD doc, policy/dqfd | ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0 |
| 34 | R2D3 | IL | R2D3 doc, R2D3 doc (zh), policy/r2d3 | python3 -u pong_r2d3_r2d2expert_config.py |
| 35 | Guided Cost Learning | IL | Guided Cost Learning doc (zh), reward_model/guided_cost | python3 lunarlander_gcl_config.py |
| 36 | TREX | IL | TREX doc, reward_model/trex | python3 mujoco_trex_main.py |
| 37 | Implicit Behavioral Cloning (DFO+MCMC) | IL | policy/ibc, model/template/ebm | python3 d4rl_ibc_main.py -s 0 -c pen_human_ibc_mcmc_config.py |
| 38 | BCO | IL | entry/bco | python3 -u cartpole_bco_config.py |
| 39 | HER | exp | HER doc, reward_model/her | python3 -u bitflip_her_dqn.py |
| 40 | RND | exp | RND doc, reward_model/rnd | python3 -u cartpole_rnd_onppo_config.py |
| 41 | ICM | exp | ICM doc, ICM doc (zh), reward_model/icm | python3 -u cartpole_ppo_icm_config.py |
| 42 | CQL | offline | CQL doc, policy/cql | python3 -u d4rl_cql_main.py |
| 43 | TD3BC | offline | TD3BC doc, policy/td3_bc | python3 -u d4rl_td3_bc_main.py |
| 44 | Decision Transformer | offline | policy/dt | python3 -u d4rl_dt_mujoco.py |
| 45 | EDAC | offline | EDAC doc, policy/edac | python3 -u d4rl_edac_main.py |
| 46 | QGPO | offline | QGPO doc, policy/qgpo | python3 -u ding/example/qgpo.py |
| 47 | MBSAC (SAC+MVE+SVG) | continuous mbrl | policy/mbpolicy/mbsac | python3 -u pendulum_mbsac_mbpo_config.py / python3 -u pendulum_mbsac_ddppo_config.py |
| 48 | STEVESAC (SAC+STEVE+SVG) | continuous mbrl | policy/mbpolicy/mbsac | python3 -u pendulum_stevesac_mbpo_config.py |
| 49 | MBPO | mbrl | MBPO doc, world_model/mbpo | python3 -u pendulum_sac_mbpo_config.py |
| 50 | DDPPO | mbrl | world_model/ddppo | python3 -u pendulum_mbsac_ddppo_config.py |
| 51 | DreamerV3 | mbrl | world_model/dreamerv3 | python3 -u cartpole_balance_dreamer_config.py |
| 52 | PER | other | worker/replay_buffer | rainbow demo |
| 53 | GAE | other | rl_utils/gae | ppo demo |
| 54 | ST-DIM | other | torch_utils/loss/contrastive_loss | ding -m serial -c cartpole_dqn_stdim_config.py -s 0 |
| 55 | PLR | other | PLR doc, data/level_replay/level_sampler | python3 -u bigfish_plr_config.py -s 0 |
| 56 | PCGrad | other | torch_utils/optimizer_helper/PCGrad | python3 -u multi_mnist_pcgrad_main.py -s 0 |
| 57 | AWR | discrete | policy/ibc | python3 -u tabmwp_awr_config.py |

Environment Versatility

| No. | Environment | Label | Visualization | Code and Doc Links |
| --- | --- | --- | --- | --- |
| 1 | Atari | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 2 | box2d/bipedalwalker | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 3 | box2d/lunarlander | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 4 | classic_control/cartpole | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 5 | classic_control/pendulum | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 6 | competitive_rl | discrete selfplay | original | dizoo link, env guide (zh) |
| 7 | gfootball | discrete sparse selfplay | original | dizoo link, env tutorial, env guide (zh) |
| 8 | minigrid | discrete sparse | original | dizoo link, env tutorial, env guide (zh) |
| 9 | MuJoCo | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 10 | PettingZoo | discrete continuous MARL | original | dizoo link, env tutorial, env guide (zh) |
| 11 | overcooked | discrete MARL | original | dizoo link, env tutorial |
| 12 | procgen | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 13 | pybullet | continuous | original | dizoo link, env guide (zh) |
| 14 | smac | discrete MARL selfplay sparse | original | dizoo link, env tutorial, env guide (zh) |
| 15 | d4rl | offline | original | dizoo link, env guide (zh) |
| 16 | league_demo | discrete selfplay | original | dizoo link |
| 17 | pomdp atari | discrete | — | dizoo link |
| 18 | bsuite | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 19 | ImageNet | IL | original | dizoo link, env guide (zh) |
| 20 | slime_volleyball | discrete selfplay | original | dizoo link, env tutorial, env guide (zh) |
| 21 | gym_hybrid | hybrid | original | dizoo link, env tutorial, env guide (zh) |
| 22 | GoBigger | hybrid MARL selfplay | original | dizoo link, env tutorial, env guide (zh) |
| 23 | gym_soccer | hybrid | original | dizoo link, env guide (zh) |
| 24 | multiagent_mujoco | continuous MARL | original | dizoo link, env guide (zh) |
| 25 | bitflip | discrete sparse | original | dizoo link, env guide (zh) |
| 26 | sokoban | discrete | Game 2 | dizoo link, env tutorial, env guide (zh) |
| 27 | gym_anytrading | discrete | original | dizoo link, env tutorial |
| 28 | mario | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 29 | dmc2gym | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 30 | evogym | continuous | original | dizoo link, env tutorial, env guide (zh) |
| 31 | gym-pybullet-drones | continuous | original | dizoo link, env guide (zh) |
| 32 | beergame | discrete | original | dizoo link, env guide (zh) |
| 33 | classic_control/acrobot | discrete | original | dizoo link, env guide (zh) |
| 34 | box2d/car_racing | discrete continuous | original | dizoo link, env guide (zh) |
| 35 | metadrive | continuous | original | dizoo link, env guide (zh) |
| 36 | cliffwalking | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 37 | tabmwp | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 38 | frozen_lake | discrete | original | dizoo link, env tutorial, env guide (zh) |
| 39 | ising_model | discrete MARL | original | dizoo link, env tutorial, env guide (zh) |
| 40 | taxi | discrete | original | dizoo link, env tutorial, env guide (zh) |
discrete means discrete action space

continuous means continuous action space

hybrid means hybrid (discrete + continuous) action space

MARL means multi-agent RL environment

sparse means a sparse-reward environment, typically paired with exploration methods

offline means offline RL environment

IL means imitation learning or supervised learning dataset

selfplay means an environment that supports agent-vs-agent battles

P.S. Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type.

General Data Container: TreeTensor

DI-engine utilizes TreeTensor as the basic data container in various components. It is easy to use and stays consistent across different code modules such as environment definition, data processing and DRL optimization. Here are some concrete code examples:

  • TreeTensor can easily extend all the operations of torch.Tensor to nested data:

    import treetensor.torch as ttorch
    
    
    # create random tensor
    data = ttorch.randn({'a': (3, 2), 'b': {'c': (3, )}})
    # clone+detach tensor
    data_clone = data.clone().detach()
    # access tree structure like attribute
    a = data.a
    c = data.b.c
    # stack/cat/split
    stacked_data = ttorch.stack([data, data_clone], 0)
    cat_data = ttorch.cat([data, data_clone], 0)
    data, data_clone = ttorch.split(stacked_data, 1)
    # reshape
    data = data.unsqueeze(-1)
    data = data.squeeze(-1)
    flatten_data = data.view(-1)
    # indexing
    data_0 = data[0]
    data_1to2 = data[1:2]
    # execute math calculations
    data = data.sin()
    data.b.c.cos_().clamp_(-1, 1)
    data += data ** 2
    # backward
    data.requires_grad_(True)
    loss = data.arctan().mean()
    loss.backward()
    # print shape
    print(data.shape)
    # result
    # <Size 0x7fbd3346ddc0>
    # ├── 'a' --> torch.Size([1, 3, 2])
    # └── 'b' --> <Size 0x7fbd3346dd00>
    #     └── 'c' --> torch.Size([1, 3])
    
  • TreeTensor makes it simple yet effective to implement a classic deep reinforcement learning pipeline. In the diff below, the - lines show a plain dict/torch.Tensor implementation and the + lines show the TreeTensor equivalent:

    import torch
    import treetensor.torch as ttorch
    
    B = 4
    
    
    def get_item():
        return {
            'obs': {
                'scalar': torch.randn(12),
                'image': torch.randn(3, 32, 32),
            },
            'action': torch.randint(0, 10, size=(1,)),
            'reward': torch.rand(1),
            'done': False,
        }
    
    
    data = [get_item() for _ in range(B)]
    
    
    # execute `stack` op
    - def stack(data, dim):
    -     elem = data[0]
    -     if isinstance(elem, torch.Tensor):
    -         return torch.stack(data, dim)
    -     elif isinstance(elem, dict):
    -         return {k: stack([item[k] for item in data], dim) for k in elem.keys()}
    -     elif isinstance(elem, bool):
    -         return torch.BoolTensor(data)
    -     else:
    -         raise TypeError("not support elem type: {}".format(type(elem)))
    - stacked_data = stack(data, dim=0)
    + data = [ttorch.tensor(d) for d in data]
    + stacked_data = ttorch.stack(data, dim=0)
    
    # validate
    - assert stacked_data['obs']['image'].shape == (B, 3, 32, 32)
    - assert stacked_data['action'].shape == (B, 1)
    - assert stacked_data['reward'].shape == (B, 1)
    - assert stacked_data['done'].shape == (B,)
    - assert stacked_data['done'].dtype == torch.bool
    + assert stacked_data.obs.image.shape == (B, 3, 32, 32)
    + assert stacked_data.action.shape == (B, 1)
    + assert stacked_data.reward.shape == (B, 1)
    + assert stacked_data.done.shape == (B,)
    + assert stacked_data.done.dtype == torch.bool
    

Feedback and Contribution

We appreciate all feedback and contributions that improve DI-engine, on both algorithms and system design. CONTRIBUTING.md offers the necessary information.

Supporters

↳ Stargazers


↳ Forkers


Citation

@misc{ding,
    title={DI-engine: A Universal AI System/Engine for Decision Intelligence},
    author={Niu, Yazhe and Xu, Jingxin and Pu, Yuan and Nie, Yunpeng and Zhang, Jinouwen and Hu, Shuai and Zhao, Liangxuan and Zhang, Ming and Liu, Yu},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}

License

DI-engine is released under the Apache 2.0 license.