Available starting with FlowX.AI 5.6.0. The Intent Classification node is a dedicated workflow node introduced in 5.6.0. For earlier versions, see the intent classification and routing pattern using TEXT_UNDERSTANDING + Condition nodes.

Overview

The Intent Classification node is a specialized Integration Designer workflow node that combines AI-powered message classification with workflow branching in a single node. It analyzes a user message, classifies it into one of your defined intents using an LLM, and automatically routes the workflow to the matching branch — replacing the previous two-node pattern (TEXT_UNDERSTANDING + Condition).
[Image: Intent Classification node]
The Intent Classification node is available in Integration Designer workflows alongside other node types like REST Endpoint, Script, Condition, and Custom Agent. For BPMN process workflows, use the Start Integration Workflow action to invoke workflows that contain Intent Classification nodes.

AI + routing in one node

Combines LLM classification and conditional branching — no separate Condition node needed

Natural language conditions

Define intents as plain-text descriptions instead of code expressions

Conversation memory

Optionally use past conversation context for more accurate classification

Built-in fallback

Automatic “If No Intent Matches” branch handles unrecognized inputs

How it compares to the pattern approach

Aspect | Pattern (TEXT_UNDERSTANDING + Condition) | Intent Classification node
Nodes required | 2 (classify, then route) | 1 (classify and route)
Conditions | Python/JavaScript expressions | Natural language descriptions
Max branches | Unlimited | 10 intents + fallback
Response schema | Manual JSON schema | Automatic (handled internally)
Rationale | Must be added to prompt manually | Built-in toggle

Configuration

1. Open your workflow
Open your workflow in Integration Designer.

2. Add the node
Add an Intent Classification node from the AI Agents category in the left panel.

3. Configure the user message
Set the User message field to the input text that should be classified. Use ${} syntax to reference workflow data keys.

4. Define intents
Add intent descriptions — each intent becomes a separate output branch on the node.

5. Connect output branches
Connect each intent branch and the fallback branch to downstream nodes in your workflow.
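Putting the steps together, a node configuration built from the parameter keys documented on this page (operationPrompt, useMemory, intentClassificationOptions, responseKey) might look like the following sketch. This is illustrative only; the exact JSON structure FlowX.AI persists for the node is not shown in this page.

```python
# Illustrative sketch of an Intent Classification node configuration.
# Key names match the parameters documented below; the exact structure
# FlowX.AI stores may differ.
intent_node_config = {
    "operationPrompt": "${userMessage}",    # user message to classify
    "useMemory": False,                     # default: OFF
    "intentClassificationOptions": {
        "conditions": [                     # each entry becomes an output branch
            "The user is asking about product features, pricing, or availability",
            "The user wants to update their personal information or submit documents",
            "The user is greeting or making small talk",
        ],
        "includeRationale": False,          # default: OFF
    },
    "responseKey": "classificationResult",  # where the result is stored
}

print(len(intent_node_config["intentClassificationOptions"]["conditions"]))  # 3
```

The three conditions above produce three intent branches plus the always-present fallback branch.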

User message

operationPrompt
string
required
The text input to classify. Use ${} syntax to reference workflow data keys. Example: ${userMessage}
Reference a workflow data key that contains the user’s message rather than hardcoding text. This allows the node to classify different messages at runtime.
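Assuming a flat workflow-data dictionary, the ${} reference could be pictured as simple placeholder substitution. The function below is a hypothetical sketch, not FlowX.AI's actual resolution logic.

```python
import re

# Hypothetical sketch of how a ${...} reference might resolve against
# workflow data; FlowX.AI's actual resolution rules are not shown here.
def resolve_placeholders(template: str, workflow_data: dict) -> str:
    # Replace each ${key} with the corresponding workflow data value.
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: str(workflow_data.get(m.group(1), "")),
        template,
    )

print(resolve_placeholders("${userMessage}", {"userMessage": "How much does the premium plan cost?"}))
```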

Use memory

useMemory
boolean
When turned on, the node sends the session ID to the AI platform and appends conversation history to the LLM call. This provides context from past messages for more accurate classification. Default: OFF
Turn on memory when classifying messages in multi-turn conversations where context matters — for example, distinguishing “yes” (confirming a previous offer) from “yes” (answering a question).

Intents

intentClassificationOptions.conditions
array
required
A list of intent descriptions that define the classification categories. Each intent becomes an output branch on the node.
  • Minimum: 1 intent + fallback
  • Maximum: 10 intents + fallback
  • Each intent description can be up to 3 lines of text
  • The If No Intent Matches fallback branch is always present and cannot be removed
Example intents:
Intent | Description
Intent 1 | The user is asking about product features, pricing, or availability
Intent 2 | The user wants to update their personal information or submit documents
Intent 3 | The user is greeting or making small talk
Fallback | If No Intent Matches
For best accuracy, classify up to 10 intents per node. If you need more intents, chain multiple Intent Classification nodes to narrow down in stages.
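The documented limits (1 to 10 intents, each up to 3 lines of text) can be expressed as a small check. The function name is hypothetical, for illustration only; the Designer enforces these limits itself.

```python
def validate_intents(intents: list[str]) -> None:
    """Check the documented limits: 1-10 intents, each up to 3 lines of text."""
    if not 1 <= len(intents) <= 10:
        raise ValueError("An Intent Classification node supports 1 to 10 intents plus the fallback")
    for i, description in enumerate(intents, start=1):
        if len(description.splitlines()) > 3:
            raise ValueError(f"Intent {i} exceeds 3 lines of text")

validate_intents([
    "The user is asking about product features, pricing, or availability",
    "The user wants to update their personal information or submit documents",
])  # passes silently
```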

Response key

responseKey
string
required
The key where the classification result is stored in the workflow data. Example: classificationResult

Include rationale

intentClassificationOptions.includeRationale
boolean
When turned on, the LLM includes a brief explanation of why it selected the matching intent. Default: OFF
Turn on rationale during development and testing to understand classification decisions. You can turn it off in production to reduce response size.

Output

The node stores its classification result under the configured Response key in the workflow data. The output contains:
Field | Type | Description
selected_intent | string | The description text of the matched intent
selected_branch | number | The branch number (1–11) corresponding to the matched intent
rationale | string | Explanation of why the LLM chose this intent (only when Include rationale is ON)
Example output:
{
  "selected_intent": "The user is asking about product features, pricing, or availability",
  "selected_branch": 1,
  "rationale": "The user asked 'How much does the premium plan cost?' which is a direct inquiry about pricing."
}

Workflow routing

After classification, the node routes the workflow token to the matching branch:
  • Intent branches — the workflow continues along the branch whose intent description matched the user message
  • If No Intent Matches — the fallback branch activates when the LLM response doesn’t match any defined intent
This routing works like a Condition (Fork) node, but conditions are evaluated by the LLM using natural language instead of code expressions.
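The routing rule can be pictured as a simple dispatch on selected_branch: known branch numbers map to intent branches, and anything else falls through to "If No Intent Matches". The handler names below are invented for illustration; the node performs this routing internally.

```python
# Sketch of the routing rule: branch numbers map to intent branches,
# anything else falls through to the fallback branch.
# Handler names are invented for illustration.
def route(result: dict, handlers: dict, fallback: str) -> str:
    return handlers.get(result.get("selected_branch"), fallback)

handlers = {1: "product_questions_flow", 2: "profile_update_flow", 3: "small_talk_flow"}
branch = route({"selected_branch": 1}, handlers, "fallback_flow")
print(branch)  # product_questions_flow
```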

Examples

Scenario: Route customer messages to specialized support handlers.
Intents:
  1. The customer is reporting a technical issue or bug
  2. The customer is asking about billing, payments, or invoices
  3. The customer wants to cancel or modify their subscription
  4. The customer is asking a general question about the product
Fallback: Routes to a human agent handoff workflow.
Each branch connects to a specialized handler — for example, the billing branch triggers a workflow that looks up the customer’s invoice history, while the technical issue branch creates a support ticket.
Scenario: Classify banking customer requests in a conversational workflow.
Intents:
  1. The user wants to check their account balance or transaction history
  2. The user wants to transfer money or make a payment
  3. The user is asking about loan or mortgage options
  4. The user is greeting or making small talk
Configuration:
  • Use memory: ON (to understand context in multi-turn conversations)
  • Include rationale: OFF (production deployment)
The balance branch queries the account API, the transfer branch starts the payment workflow, and the loan branch triggers a knowledge base lookup.
Scenario: Classify incoming emails and route to appropriate processing.
Intents:
  1. The message is a complaint or escalation requiring urgent attention
  2. The message is a new business inquiry or sales lead
  3. The message is a support request with a reference to an existing ticket
  4. The message is informational (newsletter reply, out-of-office, automated notification)
Fallback: Routes to a manual review queue.
Configuration:
  • Use memory: OFF (each email is independent)
  • Include rationale: ON (for audit trail)

Best practices

Write specific intent descriptions

Describe each intent clearly and distinctly. Vague or overlapping descriptions reduce classification accuracy.

Chain nodes for complex taxonomies

For more than 10 categories, use a first node to classify into broad groups, then a second node to classify within each group.
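The two-stage idea can be sketched with a stub classifier, where classify() stands in for an Intent Classification node call. The group names and keyword rules below are invented for illustration; a real deployment would let the LLM do the matching.

```python
# Two-stage classification sketch for taxonomies with more than 10 categories.
# classify() is a keyword-based stand-in for an Intent Classification node.
def classify(message: str, intents: dict) -> str:
    for intent, keywords in intents.items():
        if any(k in message.lower() for k in keywords):
            return intent
    return "fallback"  # the "If No Intent Matches" branch

broad_groups = {"billing": ["invoice", "payment"], "technical": ["error", "crash"]}
billing_intents = {"refund": ["refund"], "invoice_copy": ["invoice"]}

message = "I need a copy of my last invoice"
group = classify(message, broad_groups)          # first node: broad group
if group == "billing":
    detail = classify(message, billing_intents)  # second node: within-group intent
print(group, detail)  # billing invoice_copy
```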

Always handle the fallback

Connect the fallback branch to meaningful handling — a clarification prompt, a human handoff, or a default response.

Use rationale for debugging

Turn on Include rationale during development to understand why the LLM routes to unexpected branches.

Last modified on March 16, 2026