Framework Integrations

February 24, 2026

NVIDIA NeMo Agent Toolkit provides comprehensive support for multiple agentic frameworks, allowing you to use your preferred development tools while leveraging the capabilities of NeMo Agent Toolkit. This document describes the framework integrations available and their respective levels of support.

Supported Frameworks

NeMo Agent Toolkit integrates with the following frameworks:

  • ADK: Google Agent Development Kit for building AI agents
  • Agno: A lightweight framework for building AI agents
  • AutoGen: A framework for building AI agents and applications
  • CrewAI: A framework for orchestrating role-playing AI agents
  • LangChain/LangGraph: A framework for developing applications powered by large language models
  • LlamaIndex: A data framework for building LLM applications
  • Semantic Kernel: Microsoft's SDK for integrating LLMs with conventional programming languages
  • Strands: AWS framework for building agents that deploy to the Amazon Bedrock AgentCore runtime

Framework Support Levels

NeMo Agent Toolkit provides different levels of support for each framework across the following dimensions:

LLM Provider Support

The ability to use various large language model providers with a framework, including NVIDIA NIM, OpenAI, Azure OpenAI, AWS Bedrock, LiteLLM, and Hugging Face.

Embedder Provider Support

The ability to use embedding model providers for vector representations, including NVIDIA NIM embeddings, OpenAI embeddings, and Azure OpenAI embeddings.

Retriever Provider Support

The ability to integrate with vector databases and retrieval systems, such as NeMo Retriever and Milvus.

Tool Calling Support

The ability to use framework-specific tool calling mechanisms, allowing agents to invoke functions and tools during execution.

Profiling Support

The ability to view workflow execution traces including intermediate steps, LLM calls, and tool calls within the NeMo Agent Toolkit profiler.
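The five dimensions above correspond roughly to top-level sections of a NeMo Agent Toolkit workflow configuration. The sketch below is illustrative only: the section layout follows common toolkit conventions, but the specific `_type` values, model names, and field names are assumptions and may differ across toolkit versions.

```yaml
# Illustrative only -- _type values, model names, and fields are assumptions.
llms:                    # LLM Provider Support
  demo_llm:
    _type: nim
    model_name: meta/llama-3.1-70b-instruct
embedders:               # Embedder Provider Support
  demo_embedder:
    _type: nim
    model_name: nvidia/nv-embedqa-e5-v5
retrievers:              # Retriever Provider Support
  demo_retriever:
    _type: milvus_retriever
    uri: http://localhost:19530
functions:               # Tool Calling Support (tools the agent may invoke)
  demo_tool:
    _type: current_datetime
workflow:                # Profiling traces execution of this workflow
  _type: react_agent
  llm_name: demo_llm
  tool_names: [demo_tool]
```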

Framework Capabilities Matrix

The following table summarizes the current support level for each framework:

| Framework | LLM Providers | Embedder Providers | Retriever Providers | Tool Calling | Profiling |
|---|---|---|---|---|---|
| ADK | ✅ Yes | ❌ No | ❌ No | ✅ Yes | ✅ Yes |
| Agno | ⚠️ Limited | ❌ No | ❌ No | ✅ Yes | ✅ Yes |
| AutoGen | ✅ Yes | ❌ No | ❌ No | ✅ Yes | ✅ Yes |
| CrewAI | ✅ Yes | ❌ No | ❌ No | ✅ Yes | ✅ Yes |
| LangChain | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| LlamaIndex | ✅ Yes | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes |
| Semantic Kernel | ⚠️ Limited | ❌ No | ❌ No | ✅ Yes | ✅ Yes |
| Strands | ✅ Yes | ❌ No | ❌ No | ✅ Yes | ✅ Yes |

Framework-Specific Details

ADK (Google Agent Development Kit)

Google's Agent Development Kit (ADK) is a framework for building AI agents across multiple LLM providers. It supplies the building blocks for composing agents into complex LLM-powered workflows, and its focus on modularity and extensibility makes it well suited to integrating custom data pipelines and building intelligent applications.

For more information, visit the ADK website.

| Capability | Providers / Details |
|---|---|
| LLM Providers | NVIDIA NIM, OpenAI, Azure OpenAI, AWS Bedrock, LiteLLM |
| Embedder Providers | None (use framework-agnostic embedders if needed) |
| Retriever Providers | None (use ADK native tools) |
| Tool Calling | Fully supported through the ADK FunctionTool interface |
| Profiling | Comprehensive profiling support with instrumentation |

Installation:

uv pip install "nvidia-nat[adk]"

Agno

Agno is a lightweight framework for building AI agents. It provides tools for composing agents into complex LLM-powered workflows, and its focus on modularity and extensibility makes it well suited to integrating custom data pipelines and building intelligent applications.

For more information, visit the Agno website.

| Capability | Providers / Details |
|---|---|
| LLM Providers | NVIDIA NIM, OpenAI, LiteLLM |
| Embedder Providers | None (use framework-agnostic embedders if needed) |
| Retriever Providers | None (use Agno native tools) |
| Tool Calling | Fully supported through Agno's tool interface |
| Profiling | Comprehensive profiling support with instrumentation |

Installation:

uv pip install "nvidia-nat[agno]"

AutoGen

Microsoft AutoGen is a framework for creating and orchestrating multi-agent systems powered by large language models. It enables collaboration between multiple agents—each with specialized roles—to accomplish complex tasks by communicating and reasoning together. AutoGen offers a modular design, flexible agent-to-agent messaging, and supports integration with custom tools, LLM providers, and external data sources, making it well-suited for advanced agentic workflows in enterprise and research environments.

For more information, visit the Microsoft AutoGen webpage.

| Capability | Providers / Details |
|---|---|
| LLM Providers | NVIDIA NIM, OpenAI, Azure OpenAI, AWS Bedrock, LiteLLM |
| Embedder Providers | None (use framework-agnostic embedders if needed) |
| Retriever Providers | None (use AutoGen native tools) |
| Tool Calling | Fully supported through AutoGen's tool integration |
| Profiling | Comprehensive profiling support with instrumentation |

Installation:

uv pip install "nvidia-nat[autogen]"

CrewAI

CrewAI is a framework designed for orchestrating teams of role-playing AI agents that can collaborate and complete complex tasks. It enables the creation of agents with distinct roles, goals, and tools, allowing for multi-agent workflows adaptable to a wide range of scenarios—from research assistants to business process automation.

For more information, visit the CrewAI website.

| Capability | Providers / Details |
|---|---|
| LLM Providers | NVIDIA NIM, OpenAI, Azure OpenAI, AWS Bedrock, LiteLLM |
| Embedder Providers | None (use framework-agnostic embedders if needed) |
| Retriever Providers | None (use CrewAI native tools) |
| Tool Calling | Fully supported through CrewAI's tool system |
| Profiling | Comprehensive profiling support with instrumentation |

Installation:

uv pip install "nvidia-nat[crewai]"

LangChain/LangGraph

LangChain is a framework for building applications in which large language models (LLMs) interact with data. It provides tools for composing chains of LLM calls into complex workflows, and LangGraph, built on top of LangChain, adds graph-based orchestration for stateful, multi-step agents. Its focus on modularity and extensibility makes it well suited to integrating custom data pipelines and building intelligent applications.

For more information, visit the LangChain documentation.

| Capability | Providers / Details |
|---|---|
| LLM Providers | NVIDIA NIM, OpenAI, Azure OpenAI, AWS Bedrock, LiteLLM, Hugging Face |
| Embedder Providers | NVIDIA NIM, OpenAI, Azure OpenAI |
| Retriever Providers | NeMo Retriever, Milvus |
| Tool Calling | Fully supported through LangChain's StructuredTool interface |
| Profiling | Comprehensive profiling support with callback handlers |

Installation:

uv pip install "nvidia-nat[langchain]"
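LangChain is the only integration in the capabilities matrix with retriever support, so retrieval-augmented workflows are typically configured against it. A minimal sketch follows, with the caveat that the `_type` values, field names, and model names are illustrative assumptions rather than a verified schema:

```yaml
# Illustrative only -- names and _type values are assumptions.
llms:
  rag_llm:
    _type: openai
    model_name: gpt-4o
embedders:
  rag_embedder:
    _type: openai
    model_name: text-embedding-3-small
retrievers:
  rag_retriever:
    _type: milvus_retriever      # NeMo Retriever is the other listed option
    uri: http://localhost:19530
    collection_name: docs
workflow:
  _type: react_agent             # LangChain/LangGraph-based agent
  llm_name: rag_llm
```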

LlamaIndex

LlamaIndex is a powerful framework for building applications that utilize large language models (LLMs) to query and interact with structured and unstructured data. It provides a set of tools for creating indexes over data sources—such as documents, databases, and APIs—enabling complex retrieval, question answering, and orchestration workflows powered by LLMs. LlamaIndex focuses on modularity and extensibility, making it suitable for integrating custom data pipelines and enhancing intelligent applications.

For more information, visit the LlamaIndex website.

| Capability | Providers / Details |
|---|---|
| LLM Providers | NVIDIA NIM, OpenAI, Azure OpenAI, AWS Bedrock, LiteLLM |
| Embedder Providers | NVIDIA NIM, OpenAI, Azure OpenAI |
| Retriever Providers | None (use LlamaIndex native retrievers) |
| Tool Calling | Fully supported through LlamaIndex's FunctionTool interface |
| Profiling | Comprehensive profiling support with callback handlers |

Installation:

uv pip install "nvidia-nat[llama-index]"

Strands

Strands is AWS's framework for building agents that can be deployed on Amazon Bedrock AgentCore runtime. The NeMo Agent Toolkit exposes Strands as another framework target so you can keep your existing workflows, tools, and profiler instrumentation while Strands and AgentCore manage execution inside AWS.

| Capability | Providers / Details |
|---|---|
| LLM Providers | AWS Bedrock, NVIDIA NIM (OpenAI-compatible), OpenAI |
| Embedder Providers | None (use framework-agnostic embedders if needed) |
| Retriever Providers | None (use Strands native tools) |
| Tool Calling | Fully supported through the Strands AgentTool interface |
| Profiling | Comprehensive profiling support through the Strands profiler callback handler |

Installation:

uv pip install "nvidia-nat[strands]"
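As a hedged sketch, an LLM entry targeting the Bedrock provider listed above might look like the following; the `_type` value, model identifier, and field names are assumptions, and the workflow type that routes execution through Strands depends on your toolkit version:

```yaml
# Illustrative only -- field names and values are assumptions.
llms:
  bedrock_llm:
    _type: aws_bedrock
    model_name: anthropic.claude-3-5-sonnet-20240620-v1:0
    region_name: us-east-1
workflow:
  _type: react_agent   # substitute the Strands-backed workflow type for your version
  llm_name: bedrock_llm
```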

Semantic Kernel

Microsoft's Semantic Kernel is a framework for building applications that use large language models (LLMs) to interact with data. It centers on a kernel that composes LLM calls and plugins into complex workflows, and its focus on modularity and extensibility makes it well suited to integrating custom data pipelines and building intelligent applications.

For more information, visit the Semantic Kernel website.

| Capability | Providers / Details |
|---|---|
| LLM Providers | OpenAI, Azure OpenAI |
| Embedder Providers | None (use framework-agnostic embedders if needed) |
| Retriever Providers | None (use Semantic Kernel native connectors) |
| Tool Calling | Fully supported through Semantic Kernel's function calling |
| Profiling | Comprehensive profiling support with instrumentation |

Installation:

uv pip install "nvidia-nat[semantic-kernel]"
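Because the matrix lists only OpenAI and Azure OpenAI for Semantic Kernel, its LLM entries are limited to those two providers. An illustrative sketch (the field names and values below are assumptions, not a verified schema):

```yaml
# Illustrative only -- field names and values are assumptions.
llms:
  sk_llm:
    _type: azure_openai
    azure_deployment: gpt-4o
    api_version: "2024-06-01"
```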