Deploy Modular, Data-centric AI applications at scale
June 16, 2025
💡 About
Seldon Core 2 is an MLOps and LLMOps framework for deploying, managing, and scaling AI systems on Kubernetes, from single models to modular, data-centric applications. With Core 2 you can deploy a wide range of model types in a standardized way, on-prem or in any cloud, production-ready out of the box.
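As a minimal sketch of what a standardized deployment looks like, a model in Core 2 is declared as a Kubernetes `Model` custom resource; the model name and storage URI below are hypothetical placeholders:

```yaml
# Hypothetical example: a scikit-learn model served from object storage.
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  # Location of the model artifact (placeholder URI)
  storageUri: "gs://my-bucket/models/iris-sklearn"
  # Capabilities the serving runtime must provide
  requirements:
  - sklearn
```

Applying this manifest with `kubectl apply -f model.yaml` asks Core 2 to schedule the model onto a compatible inference server.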
To reach out to Seldon regarding commercial use, visit our website.
📚 Documentation
The Seldon Core 2 docs can be found here. For specific sections, see:
🔧 Installation • ⚽ Servers • 🤖 Models • 🔗 Pipelines • 🧑‍🔬 Experiments • 🚀 Performance Tuning
🧩 Features
- Pipelines: Deploy composable AI applications, leveraging Kafka for realtime data streaming between components
- Autoscaling: Scale models and application components based on native or custom logic
- Multi-Model Serving: Save infrastructure costs by consolidating multiple models on shared inference servers
- Overcommit: Deploy more models than available memory allows, saving infrastructure costs for unused models
- Experiments: Route data between candidate models or pipelines, with support for A/B tests and shadow deployments
- Custom Components: Implement custom logic, drift & outlier detection, LLMs and more through plug-and-play integration with the rest of Seldon's ecosystem of ML/AI products!
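The pipeline feature above composes deployed models into a dataflow graph. A hedged sketch, assuming two already-deployed models with the hypothetical names `preprocess` and `classifier`:

```yaml
# Hypothetical example: chain two deployed models into one pipeline.
# Kafka streams the intermediate data between the steps.
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: classify
spec:
  steps:
    - name: preprocess
    - name: classifier
      inputs:
      - preprocess          # feed preprocess output into the classifier
  output:
    steps:
    - classifier            # pipeline output is the classifier's output
```

Because each step is an independently deployed model, the same `preprocess` model can be reused by other pipelines, and each step scales on its own.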
🔬 Research
These features are influenced by our position paper on the next generation of ML model serving frameworks:
📄 Desiderata for next generation of ML model serving
📜 License
Seldon is distributed under the terms of the Business Source License. A complete version of the license is available in the LICENSE file in this repository. Any contribution made to this project will be licensed under the Business Source License.