What is the LLM component?
The LLM (Large Language Model) component in Agenite is the abstraction layer that provides a unified interface for communicating with different language models. It serves as the bridge between your agent logic and the underlying AI providers like OpenAI, Anthropic, AWS Bedrock, or Ollama. This abstraction is crucial as it allows you to:
- Write provider-agnostic code that works across different LLM services
- Handle both streaming and non-streaming interactions consistently
- Work with rich content types beyond just text
- Manage tool usage through a standardized interface
Key aspects of the LLM component
- Abstraction layer: Provides a uniform interface across different LLM providers
- Content handling: Manages rich content like text, images, tool calls, and thinking blocks
- Message formatting: Standardizes the format of messages between agents and LLMs
- Token tracking: Facilitates monitoring token usage for performance and cost tracking
- Streaming support: Enables real-time streaming of model outputs
The LLM architecture
The LLM component sits between agents and providers, offering a clean abstraction that isolates agent logic from provider-specific implementation details.
Core interfaces
The heart of the LLM component is the LLMProvider interface, which defines three essential methods:
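As a rough sketch, the interface brings together the generate, stream, and iterate methods discussed throughout this page. The shapes below are illustrative, not the exact definitions from the LLM package, and the tiny EchoProvider exists only to show how the three methods relate:

```typescript
// Illustrative sketch of an LLMProvider-style interface; the actual
// Agenite definitions are richer (messages, options, token usage, etc.).
interface GenerateResponse {
  content: string;
  tokens?: { input: number; output: number };
}

interface LLMProvider {
  // One-shot generation: send input, await a complete response.
  generate(prompt: string): Promise<GenerateResponse>;
  // Streaming generation: yields partial chunks as they arrive.
  stream(prompt: string): AsyncIterable<string>;
  // Generator-based interface used by the agent loop.
  iterate(prompt: string): AsyncGenerator<string, GenerateResponse>;
}

// A toy provider, just to make the shape concrete.
class EchoProvider implements LLMProvider {
  async generate(prompt: string): Promise<GenerateResponse> {
    return { content: `echo: ${prompt}` };
  }
  async *stream(prompt: string): AsyncIterable<string> {
    for (const word of prompt.split(' ')) yield word;
  }
  async *iterate(prompt: string): AsyncGenerator<string, GenerateResponse> {
    let full = '';
    for await (const chunk of this.stream(prompt)) {
      full += (full ? ' ' : '') + chunk;
      yield chunk;
    }
    return { content: full };
  }
}
```

Note how iterate can be expressed in terms of stream: this is exactly the relationship the BaseLLMProvider class (described below) exploits.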
Message structure
The LLM component standardizes messages using the BaseMessage interface:
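A simplified sketch of that message shape, including the content blocks covered in the next section (the field names and block variants here are illustrative; Agenite's actual type definitions may differ):

```typescript
// Illustrative BaseMessage-style shape: a role plus an array of typed
// content blocks (text, images, tool calls, tool results, thinking).
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'image'; source: { mediaType: string; data: string } }
  | { type: 'toolUse'; id: string; name: string; input: unknown }
  | { type: 'toolResult'; toolUseId: string; content: string }
  | { type: 'thinking'; thinking: string };

interface BaseMessage {
  role: 'user' | 'assistant';
  content: ContentBlock[];
}

// A plain text message is just a single text block.
const message: BaseMessage = {
  role: 'user',
  content: [{ type: 'text', text: 'What is in this image?' }],
};
```

Modeling content as a discriminated union lets agent code switch on `block.type` and handle each variant with full type safety.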
Content blocks
Content blocks provide a flexible way to represent different types of content.
Working with the LLM component
Basic text generation
The simplest way to use the LLM component is for basic text generation.
Streaming responses
For real-time interactions, you can use the streaming interface.
Working with tools
When working with tools, the LLM component provides structured handling of tool-use requests and results.
Integration with agents
The agent component uses the LLM component in the LLMStep, which handles:
- Sending messages to the LLM provider
- Processing streaming responses
- Deciding whether to proceed to tool calling or end the conversation
- Managing token usage tracking
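Putting the usage patterns above together, here is a sketch of basic generation, streaming, and tool handling. MockProvider stands in for a real provider adapter, and the method and block names mirror the sketches on this page rather than Agenite's exact exports:

```typescript
// MockProvider imitates the provider interface sketched earlier;
// a real agent would use an OpenAI, Anthropic, Bedrock, or Ollama adapter.
class MockProvider {
  async generate(prompt: string): Promise<{ content: string }> {
    return { content: `answer to: ${prompt}` };
  }
  async *stream(_prompt: string): AsyncIterable<string> {
    yield 'partial ';
    yield 'answer';
  }
}

async function demo() {
  const llm = new MockProvider();

  // 1. Basic text generation: one request, one complete response.
  const reply = await llm.generate('What is an agent?');

  // 2. Streaming: consume chunks as they arrive for real-time UIs.
  let streamed = '';
  for await (const chunk of llm.stream('Explain streaming')) streamed += chunk;

  // 3. Tool use: responses can carry structured tool-call blocks that
  //    the agent executes before sending a toolResult message back.
  const toolCall = { type: 'toolUse', name: 'calculator', input: { expr: '2+2' } };

  return { reply: reply.content, streamed, toolCall };
}
```

Because all three patterns go through the same interface, the LLMStep can drive any provider without knowing which one it is talking to.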
As a best practice, code against the LLMProvider interface rather than specific provider implementations, allowing you to easily swap providers without changing your agent logic.
The BaseLLMProvider class
For provider developers, Agenite includes a BaseLLMProvider class that simplifies implementing the LLMProvider interface. It provides a default implementation of the iterate method based on the generate and stream methods. Concrete providers only need to implement generate and stream, making it easier to add support for new LLM services.
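A sketch of what building a new provider on top of such a base class might look like. The class and method shapes here are assumptions based on the description above, not Agenite's exact API, and MyServiceProvider is a hypothetical adapter with stubbed responses:

```typescript
// Illustrative BaseLLMProvider-style class: iterate() is derived from
// the generate() and stream() that subclasses supply.
abstract class BaseLLMProvider {
  abstract generate(prompt: string): Promise<string>;
  abstract stream(prompt: string): AsyncIterable<string>;

  // Default iterate(): stream chunk-by-chunk when asked, otherwise
  // fall back to a single non-streaming generation.
  async *iterate(
    prompt: string,
    opts: { stream: boolean },
  ): AsyncGenerator<string, string> {
    if (opts.stream) {
      let full = '';
      for await (const chunk of this.stream(prompt)) {
        full += chunk;
        yield chunk;
      }
      return full;
    }
    return await this.generate(prompt);
  }
}

// A new provider only supplies the two primitives; both stubbed here.
class MyServiceProvider extends BaseLLMProvider {
  async generate(prompt: string): Promise<string> {
    return `response to ${prompt}`; // real code would call the service API
  }
  async *stream(prompt: string): AsyncIterable<string> {
    yield 'response to ';
    yield prompt;
  }
}
```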
LLM utility functions
The LLM component exposes several utility functions that simplify working with messages and providers. These utilities help you format messages correctly, convert between different formats, and implement provider functionality with less boilerplate code.
Message conversion utilities
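As an illustration of what such helpers do, the sketch below converts between plain strings and block-structured messages. The helper names `userTextMessage` and `messageToText` are hypothetical, chosen only to show the spirit of these utilities; check the LLM package for the actual exports:

```typescript
// Simplified message types matching the BaseMessage sketch earlier.
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'toolUse'; id: string; name: string; input: unknown };

interface BaseMessage {
  role: 'user' | 'assistant';
  content: ContentBlock[];
}

// Hypothetical helper: wrap a plain string as a user message.
function userTextMessage(text: string): BaseMessage {
  return { role: 'user', content: [{ type: 'text', text }] };
}

// Hypothetical helper: flatten a message's text blocks to a string,
// skipping non-text blocks such as tool calls.
function messageToText(message: BaseMessage): string {
  return message.content
    .flatMap((b) => (b.type === 'text' ? [b.text] : []))
    .join('');
}
```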
Provider implementation helpers
The LLM package includes the iterateFromMethods utility function that makes it easier to implement the iterate method required by the LLMProvider interface:
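A sketch of what an iterateFromMethods-style helper does: compose the generator-based iterate from separate generate and stream functions. The signature below is an assumption based on the description, not the exact one in the LLM package:

```typescript
// The two primitives a provider supplies.
type Methods = {
  generate: (prompt: string) => Promise<string>;
  stream: (prompt: string) => AsyncIterable<string>;
};

// Build iterate() from generate/stream: stream when requested,
// otherwise do a single non-streaming generation.
function iterateFromMethods(methods: Methods) {
  return async function* iterate(
    prompt: string,
    opts: { stream: boolean },
  ): AsyncGenerator<string, string> {
    if (!opts.stream) return await methods.generate(prompt);
    let full = '';
    for await (const chunk of methods.stream(prompt)) {
      full += chunk;
      yield chunk;
    }
    return full;
  };
}
```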
When you extend BaseLLMProvider, the iterate method is automatically implemented for you using iterateFromMethods, which properly handles both streaming and non-streaming generation based on the options provided.
Content type utilities
When working with the LLM component’s rich content types, you’ll often need to create, transform, or filter content blocks. The LLM package provides type definitions that help with this.
Benefits of the LLM abstraction
- Provider independence: Your agents can work with any supported LLM provider
- Consistent interfaces: Standardized methods for both streaming and non-streaming generation
- Rich content support: Handling of multimodal content and tool interactions
- Token tracking: Built-in mechanisms for monitoring token usage
- Future-proofing: As new providers emerge, your code remains compatible