What are providers?
Providers in Agenite are implementations of the `LLMProvider` interface that connect your agents to specific large language model services. They translate the standardized Agenite message format into provider-specific API calls and convert the responses back into the Agenite format.
This provider abstraction allows your agents to:
- Switch between different LLM services without changing your core agent code
- Take advantage of specialized capabilities of different providers
- Maintain a consistent development experience across different LLMs
Key aspects of providers
- Implementation of the LLM interface: Each provider implements the standard `LLMProvider` interface
- Provider-specific configuration: Handle authentication, model selection, and service-specific options
- Consistent usage pattern: All providers are used in the same way within agent definitions
- Easy switching: Agents can be reconfigured to use different providers with minimal code changes (see the sketch after this list)
- Specialized optimizations: Providers can implement optimizations for their specific LLM service
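To make the shared-interface point concrete, here is a minimal sketch that constructs two interchangeable providers. The package names and constructor options shown (`@agenite/llm`, `@agenite/openai`, `@agenite/ollama`, the `model` option) are assumptions based on the providers named on this page, not verified signatures.

```typescript
import type { LLMProvider } from '@agenite/llm';
import { OpenAIProvider } from '@agenite/openai';
import { OllamaProvider } from '@agenite/ollama';

// Both classes implement the same LLMProvider interface, so any agent
// code written against the interface works with either one unchanged.
const cloudProvider: LLMProvider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!, // assumed option name
  model: 'gpt-4',
});

const localProvider: LLMProvider = new OllamaProvider({
  model: 'llama3', // a locally pulled Ollama model
});
```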
Supported providers
Agenite includes official support for several popular LLM providers:
- OpenAI: The OpenAI provider connects to OpenAI’s GPT models, including GPT-3.5 Turbo and GPT-4.
- Anthropic: The Anthropic provider connects to Anthropic’s Claude models.
- AWS Bedrock: The AWS Bedrock provider connects to various models available on the AWS Bedrock service, including Claude, Llama, and more.
- Ollama: The Ollama provider connects to locally hosted open-source models using Ollama.
Provider configuration
Each provider has its own configuration options, but they typically include (see the sketch after this list):
- Authentication credentials: API keys or other authentication methods
- Model selection: Which specific model to use
- Generation parameters: Temperature, max tokens, etc.
- Endpoint configuration: Custom endpoints or regions
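As a rough illustration of these options, the sketch below configures the Anthropic provider; the option names (`apiKey`, `model`, `temperature`, `maxTokens`) are assumed for illustration and may differ between providers.

```typescript
import { AnthropicProvider } from '@agenite/anthropic';

// Hypothetical configuration sketch; check each provider's docs
// for the option names it actually accepts.
const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!, // authentication credentials
  model: 'claude-3-5-sonnet-20241022',    // model selection
  temperature: 0.2,                       // generation parameter
  maxTokens: 1024,                        // generation parameter
});
```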
Using providers with agents
Providers are passed to agents during initialization:
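A minimal sketch of what that looks like, assuming an `Agent` class from `@agenite/agent`; the constructor fields shown (`name`, `provider`, `systemPrompt`) are illustrative assumptions rather than verified signatures:

```typescript
import { Agent } from '@agenite/agent';
import { OpenAIProvider } from '@agenite/openai';

// The provider is supplied once, when the agent is constructed;
// the rest of the agent code never touches provider-specific APIs.
const agent = new Agent({
  name: 'assistant',
  provider: new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4',
  }),
  systemPrompt: 'You are a helpful assistant.', // assumed field name
});
```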
Creating custom providers
You can create custom providers by implementing the `LLMProvider` interface. This allows you to:
- Support proprietary or internal LLM services
- Add support for new public LLM services
- Create advanced wrappers around existing providers
If you extend the `BaseLLMProvider` class, you only need to implement the `generate` and `stream` methods; the `iterate` method will be handled for you.
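The skeleton below sketches that shortcut. The import path and method signatures are assumptions (the real `BaseLLMProvider` likely uses Agenite’s own message and response types, not `unknown`), so treat this as a shape to follow rather than a working implementation.

```typescript
import { BaseLLMProvider } from '@agenite/llm';

// Sketch of a custom provider for a hypothetical internal LLM service.
class InternalLLMProvider extends BaseLLMProvider {
  name = 'internal-llm';

  async generate(messages: unknown[]): Promise<unknown> {
    // Translate Agenite messages into the service's request format,
    // call the service, and map the result back into Agenite's format.
    const res = await fetch('https://llm.internal.example/v1/generate', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ messages }),
    });
    return res.json();
  }

  async *stream(messages: unknown[]): AsyncGenerator<unknown> {
    // A real implementation would yield chunks as the service streams
    // them; this naive fallback yields the full response at once.
    yield await this.generate(messages);
  }

  // iterate() is inherited from BaseLLMProvider and not redefined here.
}
```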
Provider best practices
When working with providers, follow these best practices:
- Store API keys securely: Never hardcode API keys or credentials. Use environment variables or secure secret management.
- Handle rate limits: Implement appropriate retry logic and respect the rate limits of the LLM service (see the sketch at the end of this list).
- Implement timeouts: Set reasonable timeouts to handle slow or non-responsive API calls.
- Configure for your use case: Adjust provider configuration options to match your specific use case:
  - Lower temperatures (0.0-0.3) for more deterministic responses
  - Higher temperatures (0.7-1.0) for more creative responses
  - Appropriate max tokens based on expected response length
- Implement error handling: Always handle errors from the LLM service gracefully.
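To make the rate-limit and timeout advice concrete, here is a small, library-agnostic sketch that wraps any provider call with a timeout and exponential backoff. The `callProvider` parameter is a hypothetical stand-in for whatever method your provider exposes; nothing here is an Agenite API.

```typescript
// Generic retry-with-backoff plus timeout wrapper; not an Agenite API.
async function withRetry<T>(
  callProvider: (signal: AbortSignal) => Promise<T>,
  { retries = 3, timeoutMs = 30_000 } = {},
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      // The AbortSignal lets the underlying HTTP call be cancelled
      // when the timeout fires.
      return await callProvider(controller.signal);
    } catch (err) {
      if (attempt >= retries) throw err; // give up after the last retry
      // Exponential backoff between attempts: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
    } finally {
      clearTimeout(timer);
    }
  }
}
```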