What is the LLM Tab?

The LLM Tab is where you select and configure the intelligence behind your voice AI agent. Choose from leading language model providers and tune parameters such as response length, creativity, and latency to match your use case.
Access the Bolna Playground at https://platform.bolna.ai/.
LLM Tab on Bolna Playground

Configuration options

  1. Choose your LLM Provider - Select from providers such as OpenAI, Azure OpenAI, and Anthropic, and their respective models (gpt-4o, claude-3.5-sonnet, etc.)
  2. Tokens - Increasing this number allows longer responses to be queued before they are sent to the synthesiser, but slightly increases latency
  3. Temperature - A higher temperature increases creativity, but also the chance of deviating from the prompt. Keep the temperature low if you want more control over how your AI converses
  4. Filler words - Reduce perceived latency by smartly responding within 300ms after the user stops speaking, though recipients may feel the AI agent is not letting them complete their sentence
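The four settings above can be pictured as a single configuration object. The sketch below is purely illustrative: the field names and the `validate` helper are hypothetical, not Bolna's actual API schema, which the Playground UI manages for you.

```python
# Illustrative sketch of the settings the LLM Tab exposes.
# Field names here are hypothetical, not Bolna's real API schema.
llm_config = {
    "provider": "openai",      # or "azure-openai", "anthropic", ...
    "model": "gpt-4o",         # e.g. "claude-3.5-sonnet" for Anthropic
    "max_tokens": 150,         # higher = longer queued responses, slightly more latency
    "temperature": 0.2,        # keep low for predictable, prompt-faithful replies
    "filler_words": True,      # respond within ~300ms of the user stopping
}

def validate(config: dict) -> bool:
    """Basic sanity checks mirroring the guidance above."""
    return (
        0.0 <= config["temperature"] <= 2.0
        and config["max_tokens"] > 0
        and isinstance(config["filler_words"], bool)
    )
```

Treat low temperature plus a modest token limit as the conservative default; raise either only when you have measured the latency and consistency trade-off for your use case.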

Next steps

Ready to optimize your LLM configuration? Explore related settings, and compare LLM provider options to choose the best fit for your use case.