

Many nodes in a flow always say the same thing: greetings, hold messages, confirmations, goodbyes. A static node pre-renders the audio for that message when the agent is saved and plays it back from cache at runtime. No LLM call. No TTS call. No latency.

repeat_after_silence_seconds is a related setting that auto-replays a node after N seconds of user silence and exposes a _silence_repeats counter, so expression edges can escalate after a few silent rounds (offer help, transfer, hang up).

Latency and cost

| Node type | Latency | Cost per turn |
| --- | --- | --- |
| LLM node | ~800ms (LLM + TTS + audio) | LLM tokens + TTS characters |
| Static node | ~50ms (cached audio) | Zero |

Configuring a static node

Set node_type to "static" and provide static_message:
```json
{
  "id": "greeting",
  "node_type": "static",
  "static_message": "Hello! Thank you for calling Acme. How can I help you today?",
  "edges": [
    { "to_node_id": "main_menu", "condition": "User responds with a request" }
  ]
}
```
That’s it. The audio is pre-generated using the agent’s configured TTS voice when the agent is saved, then served from cache on every call.

Field reference

| Field | Type | Required | What it does |
| --- | --- | --- | --- |
| node_type | "llm" or "static" | No (defaults to "llm") | Controls whether the node calls the LLM or plays cached audio. |
| static_message | string | Yes when node_type == "static" | The exact text to speak. Audio is pre-generated from this when the agent is saved. |
| repeat_after_silence_seconds | number | No | If set, replays the node’s response after this many seconds of user silence. Works on both static and LLM nodes. |
Every field has a safe default (static_message is only required once you opt into "static"), so existing graph agents that don’t use any of them behave exactly as before.
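The rules in the field reference can be checked client-side before saving. The helper below is an illustrative sketch, not part of any Bolna SDK; the field names match the table, everything else is hypothetical.

```python
# Hypothetical pre-save validator for the three node fields documented above.
def validate_node(node: dict) -> list[str]:
    errors = []
    node_type = node.get("node_type", "llm")  # defaults to "llm"
    if node_type not in ("llm", "static"):
        errors.append(f'node_type must be "llm" or "static", got {node_type!r}')
    # static_message is required only when the node is static
    if node_type == "static" and not node.get("static_message"):
        errors.append('static_message is required when node_type == "static"')
    repeat = node.get("repeat_after_silence_seconds")
    if repeat is not None and (not isinstance(repeat, (int, float)) or repeat <= 0):
        errors.append("repeat_after_silence_seconds must be a positive number")
    return errors

assert validate_node({"node_type": "llm"}) == []                      # defaults are fine
assert validate_node({"node_type": "static"}) != []                   # missing static_message
assert validate_node({"node_type": "static", "static_message": "Hi"}) == []
```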

Silence repeat

When repeat_after_silence_seconds is set and the user goes quiet:
  1. The silence timer fires after the configured seconds.
  2. _silence_repeats increments by 1.
  3. Expression edges are evaluated. If one matches _silence_repeats, the agent transitions.
  4. Otherwise the node replays. A static node plays the same cached audio (zero cost). An LLM node regenerates with [silence] in the conversation history and rephrases naturally.
  5. _silence_repeats resets to 0 on any transition out of the node.
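The five steps above can be simulated in a few lines. This is a minimal sketch of the loop's observable behavior; the function and variable names are illustrative, only the numbered behavior comes from the docs.

```python
# Simulate repeat_after_silence_seconds escalation for a node with a single
# expression edge of the form: _silence_repeats gte <threshold> -> "goodbye".
def run_silence_loop(gte_threshold: int, silent_turns: int):
    """Return (_silence_repeats when the loop ends, node transitioned to or None)."""
    silence_repeats = 0
    for _ in range(silent_turns):
        silence_repeats += 1                 # step 2: counter increments
        if silence_repeats >= gte_threshold:  # step 3: expression edge matches
            return 0, "goodbye"              # step 5: counter resets on transition
        # step 4: otherwise the node replays (cached audio, or a rephrased LLM reply)
    return silence_repeats, None

assert run_silence_loop(3, 2) == (2, None)        # two silent rounds: still replaying
assert run_silence_loop(3, 3) == (0, "goodbye")   # third silence matches gte 3
```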

Example: greeting with silence fallback

Play a greeting. If the user is silent, repeat up to 3 times, then hang up.
```json
{
  "id": "greeting",
  "node_type": "static",
  "static_message": "Hello! Thank you for calling. How can I help you today?",
  "repeat_after_silence_seconds": 8,
  "edges": [
    { "to_node_id": "main_menu", "condition": "User responds with a request" },
    {
      "to_node_id": "goodbye",
      "condition_type": "expression",
      "expression": {
        "conditions": [
          { "variable": "_silence_repeats", "operator": "gte", "value": 3 }
        ]
      }
    }
  ]
}
```
What happens:
  • Cached audio plays instantly.
  • User silent for 8s, audio replays (_silence_repeats = 1).
  • Still silent, replays (_silence_repeats = 2).
  • Still silent, replays (_silence_repeats = 3), expression matches, transitions to goodbye.

Example: LLM node with silence nudge

The same pattern works on LLM nodes. The LLM sees [silence] in conversation history and rephrases without any extra prompt engineering on your part.
```json
{
  "id": "collect_email",
  "prompt": "Ask the user for their email address politely.",
  "repeat_after_silence_seconds": 10,
  "edges": [
    {
      "to_node_id": "confirm",
      "condition": "User shared an email",
      "parameters": { "email": "string" }
    },
    {
      "to_node_id": "goodbye",
      "condition_type": "expression",
      "expression": {
        "conditions": [
          { "variable": "_silence_repeats", "operator": "gte", "value": 3 }
        ]
      }
    }
  ]
}
```
The LLM might say “Could you share your email?” first, then “Sorry, I didn’t catch that, could you tell me your email?” on the next silence, then transition to goodbye after the third.

When the cache is built

Audio for every static node is generated when you save the agent, using the agent’s configured TTS voice. At call time the cached audio is streamed directly; no LLM or TTS call is made. If you change static_message later, re-save the agent so the cache is regenerated with the new text.
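One way to reason about why a re-save is needed: pre-rendered audio is tied to the exact (voice, text) pair, so changing either one invalidates the entry. The key scheme below is an illustration of that mental model, not Bolna's actual cache implementation.

```python
import hashlib

# Hypothetical cache key: any change to the voice or the message text yields a
# new key, so stale audio can never be served for an edited static_message.
def cache_key(voice_id: str, static_message: str) -> str:
    return hashlib.sha256(f"{voice_id}:{static_message}".encode()).hexdigest()

before = cache_key("voice-a", "Hello! Thank you for calling.")
after = cache_key("voice-a", "Hi! Thank you for calling.")
assert before != after  # edited static_message -> new key -> re-save regenerates audio
assert cache_key("voice-a", "Hello!") != cache_key("voice-b", "Hello!")  # voice change too
```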