What is the LLM Tab?
The LLM Tab is where you select and configure the intelligence behind your voice AI agent. Choose from leading language model providers and fine-tune parameters like response length, creativity, and latency optimization to match your use case. Access the Bolna playground at https://platform.bolna.ai/.

LLM Tab on Bolna Playground
Configuration options
- Choose your LLM Provider - Select from providers like OpenAI, Azure OpenAI, and Anthropic, along with their respective models (`gpt-4o`, `claude-3.5-sonnet`, etc.)
- Tokens - Increasing this number allows longer responses to be queued before sending to the synthesiser, but slightly increases latency
- Temperature - Increasing temperature enables more creative responses, but increases the chance of deviation from the prompt. Keep temperature low if you want more control over how your AI converses
- Filler words - Reduce perceived latency by smartly responding within 300 ms after the user stops speaking, but recipients may feel that the AI agent is not letting them complete their sentence
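To make the trade-offs above concrete, here is a minimal sketch of how the Tokens and Temperature settings map onto a typical chat-completion request. The field names (`provider`, `model`, `max_tokens`, `temperature`) and the `build_llm_request` helper are illustrative assumptions, not the Bolna API schema:

```python
# Hypothetical helper: assembles LLM request parameters from LLM Tab settings.
# Field names are illustrative, not the actual Bolna API schema.

def build_llm_request(provider: str, model: str,
                      max_tokens: int = 150,
                      temperature: float = 0.2) -> dict:
    """Build a chat-completion parameter dict.

    - max_tokens: larger values allow longer responses to be queued
      for the synthesiser, at the cost of slightly higher latency.
    - temperature: lower values keep the agent closer to the prompt;
      higher values add creativity but risk deviation.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is typically constrained to [0, 2]")
    return {
        "provider": provider,      # e.g. "openai", "azure-openai", "anthropic"
        "model": model,            # e.g. "gpt-4o", "claude-3.5-sonnet"
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# A low-temperature config for a tightly controlled voice agent:
request = build_llm_request("openai", "gpt-4o", max_tokens=120, temperature=0.1)
```

A low temperature (around 0.1–0.3) suits agents that must stick to a script, while a moderate `max_tokens` keeps synthesiser latency predictable.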
Next steps
Ready to optimize your LLM configuration? Explore related settings:
- Configure agent prompts to guide your LLM’s responses
- Set up voice synthesis to match LLM output quality
- Add custom functions for dynamic capabilities
- Review prompting best practices for optimal results

