Agent Builder is currently in preview and may change before general availability.
FlowX.AI 5.6.0 introduces a dedicated Intent Classification node that combines classification and routing in a single node. The pattern below (using TEXT_UNDERSTANDING + Condition) remains valid for advanced scenarios like confidence-based routing or multi-label classification.
## When to use
Use this pattern when your workflow must handle multiple types of user input and respond differently depending on what the user wants. Intent classification is the foundation of any conversational AI app on FlowX — it determines what happens next.
Common scenarios:
- A chatbot that must distinguish greetings from product inquiries from data submissions
- An email triage pipeline that routes messages to different processing branches
- Any multi-turn conversation where the next step depends on user intent
Without intent classification, a workflow can only follow a single linear path regardless of what the user says.
## Architecture
The pattern follows a detect-then-route structure:
```
User message
      |
      v
TEXT_UNDERSTANDING node
(classifies intent, returns structured JSON)
      |
      v
Condition node (Python expressions)
      |
      +--> Handler A (e.g., greeting response)
      +--> Handler B (e.g., product inquiry)
      +--> Handler C (e.g., data input)
      +--> Fallback (unrecognized intent)
```
The TEXT_UNDERSTANDING node analyzes the user message and returns a structured JSON object with the detected intent. The Condition node evaluates that output and forks the workflow to the appropriate handler branch.
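The detect-then-route structure can be sketched in plain Python. Here `classify_intent` stands in for the TEXT_UNDERSTANDING node (a keyword stub replaces the LLM call so the sketch runs) and `route` plays the role of the Condition node; all function and handler names are illustrative, not FlowX.AI APIs.

```python
# Minimal sketch of detect-then-route. classify_intent() stands in for the
# TEXT_UNDERSTANDING node; route() stands in for the Condition node.

def classify_intent(message: str) -> dict:
    # In the real workflow an LLM returns this structure (enforced by the
    # Response Schema). A keyword stub keeps the sketch runnable.
    text = message.lower()
    if any(w in text for w in ("hi", "hello", "hey")):
        intent = "GREETING_SMALL_TALK"
    elif any(w in text for w in ("price", "product", "offer")):
        intent = "PRODUCT_OFFER_INQUIRY"
    else:
        intent = "OTHER"
    return {"detected_intent": intent, "confidence_score": 0.9}

def route(result: dict) -> str:
    intent = str(result.get("detected_intent", ""))
    if "GREETING" in intent:
        return "greeting_handler"
    if "PRODUCT_OFFER" in intent:
        return "product_handler"
    if "DATA_INPUT" in intent:
        return "data_handler"
    return "fallback_handler"  # always keep a default branch

print(route(classify_intent("Hello there!")))  # greeting_handler
```

Note that `route` uses the same substring checks (`"GREETING" in intent`) as the Condition expressions shown later, so the two stay resilient to the same output variations.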
## Implementation
### 1. Configure the TEXT_UNDERSTANDING node
Add a TEXT_UNDERSTANDING node to your workflow. This node sends the user message to an LLM with a classification prompt and enforces a structured response via a Response Schema.
#### Prompt
The prompt should instruct the LLM to classify the input into one of your defined intent categories. Be explicit about the categories and expected output format.
```
You are an intent classifier. Analyze the user message and classify it into exactly one
of the following intent categories:

- GREETING_SMALL_TALK — greetings, pleasantries, general conversation
- PRODUCT_OFFER_INQUIRY — questions about products, services, pricing, or features
- DATA_INPUT_UPDATE — the user is providing or updating personal data, documents, or form fields
- OTHER — anything that does not fit the above categories

Return a JSON object with:
- detected_intent: one of the categories above (exact string)
- confidence_score: a number between 0 and 1
- thought_process: a brief explanation of why you chose this category
- extracted_entities: (optional) any relevant entities found in the message

Do NOT include any text outside the JSON object.
```
Keep the prompt focused on classification only. Resist adding response generation here — that belongs in the handler branches downstream.
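For instance, given a message like "How much does the premium plan cost?", a well-formed response under this prompt might look like the following (values are illustrative):

```json
{
  "detected_intent": "PRODUCT_OFFER_INQUIRY",
  "confidence_score": 0.92,
  "thought_process": "The user is asking about pricing, which falls under product inquiries.",
  "extracted_entities": { "topic": "premium plan pricing" }
}
```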
#### Response Schema
Define a Response Schema on the TEXT_UNDERSTANDING node to enforce structured output. This guarantees the LLM returns valid JSON matching your expected shape.
```json
{
  "type": "object",
  "properties": {
    "detected_intent": {
      "type": "string",
      "enum": [
        "GREETING_SMALL_TALK",
        "PRODUCT_OFFER_INQUIRY",
        "DATA_INPUT_UPDATE",
        "OTHER"
      ]
    },
    "confidence_score": {
      "type": "number",
      "minimum": 0,
      "maximum": 1
    },
    "thought_process": {
      "type": "string"
    },
    "extracted_entities": {
      "type": "object"
    }
  },
  "required": ["detected_intent", "confidence_score", "thought_process"]
}
```
#### Configuration summary
| Setting | Value |
|---|---|
| Node type | `TEXT_UNDERSTANDING` |
| Input | User message (from Chat Input node or upstream variable) |
| Output key | An output key (e.g., `routesKey`) that stores the classification result |
| Response Schema | JSON schema as shown above |
| Temperature | 0 or very low (classification should be deterministic) |
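The Response Schema enforces the payload shape at generation time, but a downstream step can still check the result defensively before routing. The sketch below hand-rolls the equivalent checks in plain Python; it is illustrative, not a FlowX.AI API.

```python
# Defensive validation of the classification payload: same constraints as the
# Response Schema (required keys, enum membership, score range), in plain Python.

VALID_INTENTS = {
    "GREETING_SMALL_TALK",
    "PRODUCT_OFFER_INQUIRY",
    "DATA_INPUT_UPDATE",
    "OTHER",
}

def is_valid_classification(payload: dict) -> bool:
    required = ("detected_intent", "confidence_score", "thought_process")
    if not all(key in payload for key in required):
        return False
    if payload["detected_intent"] not in VALID_INTENTS:
        return False
    score = payload["confidence_score"]
    return isinstance(score, (int, float)) and 0 <= score <= 1

print(is_valid_classification({
    "detected_intent": "OTHER",
    "confidence_score": 0.42,
    "thought_process": "No clear match.",
}))  # True
```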
### 2. Configure the Condition node

After the TEXT_UNDERSTANDING node, add a Condition node to fork the workflow based on the detected intent. Each branch uses a Python expression that evaluates to `True` or `False`.
#### Condition expressions
Use substring matching with the `in` operator for resilience. This handles cases where the LLM wraps the intent in extra whitespace or slightly different formatting.
```python
# Branch: Greeting / small talk
"GREETING" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))

# Branch: Product inquiry
"PRODUCT_OFFER" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))

# Branch: Data input / update
"DATA_INPUT" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))

# Branch: Fallback (default)
# No condition needed — this is the default/else branch
```
Always include a fallback branch. Even with a Response Schema, edge cases can produce unexpected intent values. The fallback branch prevents the workflow from stalling.
Use substring matching (`in` operator) instead of exact equality (`==`). This makes the condition resilient to minor LLM output variations without sacrificing accuracy.
#### Condition node summary
| Branch | Expression | Routes to |
|---|---|---|
| Greeting | `"GREETING" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))` | Greeting handler |
| Product inquiry | `"PRODUCT_OFFER" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))` | Product info handler |
| Data input | `"DATA_INPUT" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))` | Data processing handler |
| Fallback | Default branch (no expression) | Generic response or clarification prompt |
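To see how these expressions behave, the sketch below evaluates them against a sample `input` mapping shaped like the TEXT_UNDERSTANDING output stored under a `routesKey` output key (the payload values are illustrative).

```python
# Sample of the `input` mapping a Condition branch expression receives,
# assuming the classification result is stored under the routesKey output key.
input = {
    "routesKey": {
        "output": {
            "detected_intent": "PRODUCT_OFFER_INQUIRY",
            "confidence_score": 0.93,
            "thought_process": "The user asks about pricing.",
        }
    }
}

# The three branch expressions from the table above:
greeting = "GREETING" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))
product = "PRODUCT_OFFER" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))
data = "DATA_INPUT" in str(input.get("routesKey", {}).get("output", {}).get("detected_intent", ""))

print(greeting, product, data)  # False True False
```

Exactly one branch evaluates to `True` here; if none do, the default branch catches the message.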
### 3. Build handler branches
Each handler branch is a self-contained sub-workflow tailored to the classified intent. Examples:
- Greeting handler — responds with a welcome message or continues small talk using a text generation node
- Product inquiry handler — queries a knowledge base (see the Knowledge base RAG pattern) and returns product information
- Data input handler — validates and stores user-provided data, then confirms receipt
- Fallback handler — asks the user to rephrase or provides a generic help message
After all branches complete their work, they converge back to a shared endpoint (typically a Closing Gateway or End node).
## Real-world example
The Mortgage advisor chatbot tutorial implements this pattern as its core routing mechanism. User messages are classified into intents such as greeting, product inquiry (mortgage offers), data submission (income, employment), and general questions — then routed to specialized handlers that combine knowledge base lookups, financial calculations, and conversational responses.
## Variations
### Binary classification
When you only need to distinguish between two outcomes (e.g., yes/no, relevant/irrelevant), simplify the schema to a single boolean or two-value enum. Use a single Condition branch instead of multiple forks.
```json
{
  "type": "object",
  "properties": {
    "is_relevant": {
      "type": "boolean"
    },
    "confidence_score": {
      "type": "number"
    }
  },
  "required": ["is_relevant", "confidence_score"]
}
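With the binary schema, the single Condition branch reduces to one boolean check. A sketch, assuming the same `routesKey` output key convention as above (the payload values are illustrative):

```python
# Single-branch condition for the binary schema: route to the "relevant"
# handler when is_relevant is true; everything else takes the default branch.
input = {"routesKey": {"output": {"is_relevant": True, "confidence_score": 0.88}}}

is_relevant = bool(input.get("routesKey", {}).get("output", {}).get("is_relevant", False))
print(is_relevant)  # True
```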
### Multi-label classification
Some inputs may belong to multiple categories simultaneously (e.g., a message that both provides data and asks a question). Modify the schema to return an array of intents, and use a ForEach or parallel branching strategy to handle each detected intent.
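A sketch of the dispatch side, assuming the schema is changed to return a `detected_intents` array; the handler names and ForEach-style loop are illustrative stand-ins for the workflow branches, not FlowX.AI APIs.

```python
# Multi-label dispatch: each intent in the detected_intents array triggers its
# own handler; unknown intents fall through to the fallback.

HANDLERS = {
    "GREETING_SMALL_TALK": lambda: "greeting handled",
    "PRODUCT_OFFER_INQUIRY": lambda: "product inquiry handled",
    "DATA_INPUT_UPDATE": lambda: "data update handled",
}

def dispatch_all(result: dict) -> list:
    outcomes = []
    for intent in result.get("detected_intents", []):
        handler = HANDLERS.get(intent)
        outcomes.append(handler() if handler else "fallback handled")
    return outcomes

print(dispatch_all({
    "detected_intents": ["DATA_INPUT_UPDATE", "PRODUCT_OFFER_INQUIRY"]
}))  # ['data update handled', 'product inquiry handled']
```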
### Confidence-based routing
Add a confidence threshold check before routing. If the confidence score falls below a threshold (e.g., 0.7), route to a human review queue or ask the user to clarify instead of proceeding with low-confidence automation.
```python
# High confidence — route normally
float(input.get("routesKey", {}).get("output", {}).get("confidence_score", 0)) >= 0.7

# Low confidence — route to human review or clarification
float(input.get("routesKey", {}).get("output", {}).get("confidence_score", 0)) < 0.7
```
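Putting the threshold check and the intent check together, a combined routing decision might look like this sketch (the 0.7 threshold matches the expressions above; the branch names are illustrative).

```python
# Confidence gate first, intent routing second: low-confidence classifications
# never reach an intent handler.
CONFIDENCE_THRESHOLD = 0.7

def route_with_confidence(output: dict) -> str:
    score = float(output.get("confidence_score", 0))
    if score < CONFIDENCE_THRESHOLD:
        return "clarification_branch"  # or a human review queue
    intent = str(output.get("detected_intent", ""))
    if "GREETING" in intent:
        return "greeting_branch"
    if "PRODUCT_OFFER" in intent:
        return "product_branch"
    if "DATA_INPUT" in intent:
        return "data_branch"
    return "fallback_branch"

print(route_with_confidence({"detected_intent": "GREETING_SMALL_TALK", "confidence_score": 0.95}))  # greeting_branch
print(route_with_confidence({"detected_intent": "GREETING_SMALL_TALK", "confidence_score": 0.4}))   # clarification_branch
```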