Preview: Agent Builder is currently in preview and may change before general availability.
The AI comparison and reconciliation pattern uses TEXT_UNDERSTANDING nodes as comparison agents to perform field-by-field validation between AI-extracted data and expected values from a system of record. The output is a structured exception report that drives downstream routing: auto-approve, human review, or reject.

When to use

Use this pattern when you need to:
  • Validate AI-extracted document data against a trusted source (TMS, ERP, core banking system, database)
  • Produce auditable, structured comparison results with per-field match status
  • Route exceptions based on severity and overall match rate
  • Replace manual spot-checking of document extraction results
This pattern typically follows the fan-out extraction pattern in a document processing pipeline.

Architecture

The workflow accepts two data sets as input, runs them through a TEXT_UNDERSTANDING comparison agent, and routes the result based on the exception report.
Extracted Data + System Data
        |
        v
 TEXT_UNDERSTANDING (compare)
        |
        v
   Exception Report
        |
        v
   Condition (severity routing)
      /       |       \
     v        v        v
Auto-approve  Human   Reject
              review
Node breakdown:
| Node | Type | Purpose |
| --- | --- | --- |
| Compare | TEXT_UNDERSTANDING | Field-by-field comparison of extracted vs. expected values |
| Route | Condition | Branch based on overall match rate and exception severity |
| Auto-approve | End / next step | High-match documents proceed without intervention |
| Human review | End / next step | Medium-match documents queued for manual review |
| Reject | End / next step | Low-match documents flagged for rejection |

Implementation

Input structure

The comparison node receives two objects: the AI-extracted values and the system-of-record values.
{
  "extractedData": {
    "shipperName": "Acme Logistics Ltd.",
    "consigneeName": "Beta Corp",
    "totalWeight": "14500",
    "weightUnit": "KG",
    "containerCount": 3,
    "bolNumber": "MSKU-2024-78432"
  },
  "systemData": {
    "shipperName": "Acme Logistics",
    "consigneeName": "Beta Corporation",
    "totalWeight": "14500",
    "weightUnit": "KG",
    "containerCount": 3,
    "bolNumber": "MSKU-2024-78432"
  }
}
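The input object above is typically assembled in a step preceding the compare node. A minimal sketch of that assembly; the `build_comparison_input` helper is illustrative, not a FlowX API:

```python
import json

def build_comparison_input(extracted: dict, system: dict) -> str:
    """Combine both data sets into the JSON object the compare node expects."""
    payload = {"extractedData": extracted, "systemData": system}
    return json.dumps(payload, indent=2)

# Example with a subset of the fields shown above
extracted = {"shipperName": "Acme Logistics Ltd.", "bolNumber": "MSKU-2024-78432"}
system = {"shipperName": "Acme Logistics", "bolNumber": "MSKU-2024-78432"}
print(build_comparison_input(extracted, system))
```

Keeping the two data sets under separate top-level keys lets the comparison prompt reference each side unambiguously.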

Comparison prompt

Configure the TEXT_UNDERSTANDING node with a prompt that instructs the LLM to perform structured comparison.
You are a document reconciliation agent. Compare the AI-extracted values against
the system-of-record values and produce a structured exception report.

Instructions:
1. Compare each field individually. Use fuzzy matching for names and addresses
   (e.g., "Acme Logistics Ltd." vs "Acme Logistics" is a MATCH).
2. For numeric fields, compare exact values. Flag unit mismatches separately.
3. For derived values (e.g., total weight = sum of line items), verify the
   calculation independently.
4. Compute an overall match rate as a percentage (0-100).
5. Assign a confidence score (0-100) reflecting how certain you are in the
   comparison results.
6. Flag each exception with a severity: CRITICAL, WARNING, or INFO.

Extracted data:
{{extractedData}}

System-of-record data:
{{systemData}}
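The `{{extractedData}}` and `{{systemData}}` placeholders are filled from the node's inputs at run time. For previewing the final prompt outside the workflow, the substitution can be sketched with plain string replacement (the template below is abbreviated, and this is not the platform's own templating engine):

```python
import json

PROMPT_TEMPLATE = """You are a document reconciliation agent.

Extracted data:
{{extractedData}}

System-of-record data:
{{systemData}}"""

def render_prompt(template: str, extracted: dict, system: dict) -> str:
    """Substitute both placeholders with pretty-printed JSON."""
    return (template
            .replace("{{extractedData}}", json.dumps(extracted, indent=2))
            .replace("{{systemData}}", json.dumps(system, indent=2)))

print(render_prompt(PROMPT_TEMPLATE,
                    {"totalWeight": "14500"},
                    {"totalWeight": "14500"}))
```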

Output schema

Define the output schema on the TEXT_UNDERSTANDING node to enforce structured results.
{
  "type": "object",
  "properties": {
    "matchRate": {
      "type": "number",
      "description": "Overall match rate from 0 to 100"
    },
    "confidenceScore": {
      "type": "number",
      "description": "Confidence in comparison accuracy from 0 to 100"
    },
    "fieldResults": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "fieldName": { "type": "string" },
          "extractedValue": { "type": "string" },
          "expectedValue": { "type": "string" },
          "status": {
            "type": "string",
            "enum": ["MATCH", "MISMATCH", "MISSING"]
          },
          "severity": {
            "type": "string",
            "enum": ["CRITICAL", "WARNING", "INFO"]
          },
          "note": { "type": "string" }
        },
        "required": ["fieldName", "status"]
      }
    },
    "exceptions": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "fieldName": { "type": "string" },
          "description": { "type": "string" },
          "severity": {
            "type": "string",
            "enum": ["CRITICAL", "WARNING", "INFO"]
          }
        },
        "required": ["fieldName", "description", "severity"]
      }
    }
  },
  "required": ["matchRate", "confidenceScore", "fieldResults", "exceptions"]
}
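The node enforces this schema itself. For downstream consumers or test harnesses, the same required-field and type checks can be sketched in plain Python (`validate_report` is a hypothetical helper, not part of the product):

```python
def validate_report(report: dict) -> list[str]:
    """Minimal structural check mirroring the node's output schema."""
    errors = []
    # Top-level required fields
    for key in ("matchRate", "confidenceScore", "fieldResults", "exceptions"):
        if key not in report:
            errors.append(f"missing required field: {key}")
    # Numeric scores
    for key in ("matchRate", "confidenceScore"):
        if key in report and not isinstance(report[key], (int, float)):
            errors.append(f"{key} must be a number")
    # Per-item required fields
    for item in report.get("fieldResults", []):
        if "fieldName" not in item or "status" not in item:
            errors.append("fieldResults items require fieldName and status")
    return errors
```

An empty list means the report is structurally sound; anything else should fail the pipeline loudly rather than silently mis-route.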

Severity routing

Configure a Condition node after the comparison to route based on the exception report.
| Route | Condition | Action |
| --- | --- | --- |
| Auto-approve | matchRate >= 95 and no CRITICAL exceptions | Proceed to next processing step |
| Human review | matchRate >= 70 and matchRate < 95, or any WARNING exceptions | Queue for manual review |
| Reject | matchRate < 70 or any CRITICAL exception | Flag for rejection and notify |
Adjust the match rate thresholds to fit your business requirements. Start conservatively (higher thresholds for auto-approve) and relax them as you gain confidence in extraction quality.
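The routing rules can be expressed as a small function, useful for testing threshold choices offline. Rules are evaluated most-restrictive first, so a high-match document carrying a CRITICAL exception is still rejected (a sketch only, not the Condition node's own expression syntax):

```python
def route(report: dict) -> str:
    """Map an exception report to a route: reject, human-review, or auto-approve."""
    severities = {e["severity"] for e in report["exceptions"]}
    rate = report["matchRate"]
    if rate < 70 or "CRITICAL" in severities:
        return "reject"
    if rate < 95 or "WARNING" in severities:
        return "human-review"
    return "auto-approve"
```

For example, `route({"matchRate": 96, "exceptions": []})` returns `"auto-approve"`, while the same match rate with one CRITICAL exception is rejected.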

Configuration reference

| Setting | Value | Description |
| --- | --- | --- |
| Node type | TEXT_UNDERSTANDING | Comparison agent |
| Temperature | 0 | Use deterministic output for consistent comparisons |
| Output format | JSON schema | Structured exception report |
| Max tokens | 2000 | Sufficient for detailed field-by-field reports |

Exception severity levels

| Severity | Description | Examples |
| --- | --- | --- |
| CRITICAL | Data integrity issue that blocks processing | Missing required field, numeric value off by more than 10%, wrong document matched to shipment |
| WARNING | Discrepancy that requires human judgment | Name variation beyond fuzzy match threshold, date format ambiguity, unit conversion needed |
| INFO | Minor difference, typically acceptable | Abbreviation vs. full name, trailing whitespace, formatting differences |

Variations

Threshold-based routing

Instead of fixed severity categories, use configurable thresholds stored in a FlowX Database data source. This allows business users to adjust auto-approve and reject boundaries without modifying the workflow.
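One way to sketch this: thresholds fetched from the data source parameterize the routing function. The `DEFAULT_THRESHOLDS` shape and key names here are assumptions, not the FlowX Database schema:

```python
# Fallback values if the data-source lookup is unavailable
DEFAULT_THRESHOLDS = {"auto_approve": 95, "reject_below": 70}

def route_with_thresholds(match_rate: float,
                          thresholds: dict = DEFAULT_THRESHOLDS) -> str:
    """Route on a match rate using externally configurable boundaries."""
    if match_rate < thresholds["reject_below"]:
        return "reject"
    if match_rate >= thresholds["auto_approve"]:
        return "auto-approve"
    return "human-review"
```

Because the thresholds are data rather than workflow logic, business users can tighten or relax them without a redeploy.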

Multi-document cross-reference

Extend the pattern to compare fields across multiple documents in the same shipment. For example, verify that the total weight on the bill of lading matches the sum of weights on the packing list and that both align with the commercial invoice.
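The weight consistency check described above can be sketched with an assumed relative tolerance (`tolerance=0.005`, i.e. 0.5%, is illustrative; pick a value appropriate to your documents):

```python
def cross_check_weights(bol_total: float,
                        packing_list_weights: list[float],
                        invoice_total: float,
                        tolerance: float = 0.005) -> bool:
    """True if BOL total, summed packing-list weights, and invoice total agree."""
    pl_total = sum(packing_list_weights)

    def within(a: float, b: float) -> bool:
        return abs(a - b) <= tolerance * max(a, b)

    return within(bol_total, pl_total) and within(bol_total, invoice_total)
```

A relative tolerance absorbs rounding differences between documents while still catching a missing container's worth of weight.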

Audit trail generation

Add a downstream node that persists the full exception report (including all MATCH results) to a database or document store. This provides a complete audit trail for compliance and quality monitoring over time.
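A minimal sketch of such persistence using SQLite; the table name and columns are illustrative, and a production deployment would target your own database or document store:

```python
import json
import sqlite3
from datetime import datetime, timezone

# In-memory database for illustration; use a file path or server in practice
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS reconciliation_audit (
    document_id TEXT,
    created_at  TEXT,
    match_rate  REAL,
    report_json TEXT)""")

def persist_report(document_id: str, report: dict) -> None:
    """Store the full exception report, including MATCH results, for auditing."""
    conn.execute(
        "INSERT INTO reconciliation_audit VALUES (?, ?, ?, ?)",
        (document_id,
         datetime.now(timezone.utc).isoformat(),
         report["matchRate"],
         json.dumps(report)),
    )
    conn.commit()
```

Storing the raw report JSON alongside the indexed match rate keeps queries fast while preserving every field-level detail for compliance review.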
Last modified on March 16, 2026