A graph agent breaks a phone conversation into discrete nodes, each with its own purpose, instructions, and transition rules. Instead of one giant prompt that has to handle everything, you define exactly what the agent does at each step and exactly when it moves to the next one.
- Predictable: Conversations follow explicit paths. Every transition is a rule you defined.
- Easy to debug: When something breaks, you know which node failed and why.
- Easy to update: Change one node without touching the rest of the flow.
- Lower cost: Deterministic edges and static nodes skip the LLM entirely.
When to use a graph agent
Use a graph agent when the call should follow a structured, predictable flow where each step has a clear job and clear exit conditions. For open-ended, free-form conversations, a single flexible prompt may serve better.
Core concepts
Nodes
A node is one step in the conversation. Each node has one clear job.

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier. Referenced by edges and `current_node_id`. |
| `prompt` | string | Instruction given to the response LLM when the conversation is in this node. |
| `edges` | array | Possible transitions out of this node. |
| `examples` | object | Sample responses per language (`"en"`, `"hi"`). Guides tone and phrasing. |
| `node_type` | string | `"llm"` (default) or `"static"`. See Static Nodes. |
| `static_message` | string | Required when `node_type == "static"`. Pre-cached audio plays at runtime. |
| `repeat_after_silence_seconds` | number | Auto-replays the node after N seconds of user silence. Works on both LLM and static nodes. |
| `function_call` | string | Forces the response LLM’s `tool_choice` to this tool when the node is entered (e.g. transfer nodes). |
| `rag_config` | object | Optional per-node knowledge base. See Tools & RAG. |
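Putting these fields together, a single node might look like the following sketch. The field names come from the table above; the `id`, prompt text, and sample responses are invented for illustration, not a canonical example:

```json
{
  "id": "collect_date",
  "prompt": "Ask the customer for their preferred appointment date. Accept only future dates; if the date is in the past, ask again.",
  "edges": [],
  "examples": {
    "en": "Sure! What date works best for you?",
    "hi": "ज़रूर! आपके लिए कौन सी तारीख़ ठीक रहेगी?"
  },
  "node_type": "llm",
  "repeat_after_silence_seconds": 8
}
```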
Edges
Edges define how the conversation moves from one node to the next.

Routing
After every customer message, a routing LLM evaluates the available LLM-typed edges on the current node and picks the best match.

Expression and unconditional edges are evaluated before the routing LLM runs. If a deterministic rule matches, the transition fires instantly with zero latency and zero cost. The routing LLM is only invoked when no deterministic rule matches.
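As a rough sketch of that evaluation order, the snippet below shows a deterministic expression edge alongside an LLM-typed edge. The edge field names here (`condition`, `description`, `to_node_id`) are illustrative placeholders, not the confirmed schema; see Edges & routing for the actual edge format:

```json
"edges": [
  { "condition": "{attempts} >= 3", "to_node_id": "transfer_to_human" },
  { "description": "Customer confirms the appointment details", "to_node_id": "confirm_booking" }
]
```

Conceptually, the first edge is checked without any model call; only if no such rule matches does the routing LLM read the second edge's description and decide.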
Where graph agent config lives
All graph-agent fields live inside `llm_agent`, nested under `tools_config` in your conversation task:
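For orientation, a minimal skeleton of that nesting might look like this. Only the placement of `llm_agent` under `tools_config` is the point; the surrounding `tasks` wrapper is an assumption about the conversation-task shape, and most fields are elided:

```json
{
  "tasks": [
    {
      "tools_config": {
        "llm_agent": {
          "agent_type": "graph_agent",
          "agent_information": "Global persona and guardrails go here.",
          "current_node_id": "greeting",
          "nodes": []
        }
      }
    }
  ]
}
```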
Top-level fields
| Field | Description |
|---|---|
| `agent_type` | Must be `"graph_agent"` to enable the node-based flow. |
| `agent_information` | Global system prompt. Persona, language rules, guardrails. Applied to every node. |
| `routing_instructions` | Prompt given to the routing LLM. Prepended to every routing request. Supports `{variable}` substitution from `context_data`. |
| `current_node_id` | Starting node when a new call begins. |
| `nodes` | Array of all node objects. |
| `model` | Response LLM. Defaults to `gpt-4.1-mini`. |
| `routing_model` | Routing LLM. Defaults to `gpt-4.1-mini`. |
| `routing_max_tokens` | Cap on routing response tokens. Defaults: 250 (non-GPT-5 models), 150 (GPT-5 models). |
| `routing_reasoning_effort` | GPT-5 routing models only. `"minimal"` / `"low"` / `"medium"` / `"high"`. |
agent_information is the identity layer
This prompt is applied to every node. Use it for persona, response rules (max sentence count, language switching), pronunciation rules, and hard guardrails.
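For instance, an `agent_information` value combining persona, response rules, and guardrails might read like this (the persona and rules are invented for illustration):

```json
"agent_information": "You are Maya, a friendly scheduling assistant for a dental clinic. Keep every response under two sentences. Reply in the customer's language (English or Hindi). Never quote prices, and never give medical advice."
```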
Writing effective node prompts
A well-written prompt includes the node’s purpose, the exact question to ask, validation rules, a fallback, and any voice formatting rules.
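The contrast can be sketched as follows; both prompts are invented for illustration.

Weak:

```json
"prompt": "Ask for the date."
```

Stronger:

```json
"prompt": "Purpose: collect the preferred appointment date. Ask: 'What date would you like to come in?' Validation: the date must be in the future. Fallback: if the answer is unclear after two attempts, offer the next available slot. Voice: say dates as 'March fifth', never '03/05'."
```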
Next steps
Edges & routing
Edge types, expression operators, built-in variables, inline data extraction.
Static nodes
Pre-cached audio messages with auto-replay on user silence.
Event injection
Drive transitions and proactive speech from external events via REST.
Tools & RAG
Call transfer, custom API tools, per-node knowledge bases.
Debugging
Routing logs, common scenarios, and how to fix them.
Full example
Complete annotated JSON skeleton showing every feature end-to-end.

