Available starting with FlowX.AI 5.3.0: Workflow data models bring structured, workflow-level data definitions to Integration Designer workflows, similar to Process Definitions, enabling cleaner data exchange between processes and workflows.
Workflow data models define input and output parameters for Start Nodes and End Nodes:
  • Input parameters: Define the data structure that enters the workflow through the Start Node. The input parameters automatically pre-fill the Start Node when you open the workflow diagram or test the workflow at runtime.
  • Output parameters: Define the data structure returned by End Nodes when the workflow completes.
When a workflow is integrated into another resource (for example, a process), these parameters define how data is processed between the two components:
  • Input mapping (Start Integration Workflow Action): Maps data from the process to the workflow’s input parameters when starting the workflow.
  • Output mapping (Receive Message Task): Maps data returned from the workflow back to the process variables.
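To make the two directions concrete, here is a minimal sketch. The parameter names and process paths (`customerId`, `riskScore`, `application.customer.id`) are hypothetical, not taken from the product.

Input mapping on the Start Integration Workflow action (process → workflow):

```json
{
  "customerId": "${application.customer.id}"
}
```

Output mapping on the Receive Message Task (workflow → process):

```json
{
  "riskScore": "${result.riskScore}"
}
```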

Overview

Define data models at the workflow level with input and output parameters, similar to Process Definitions

Workflow data models bring the same structured data management capabilities available in Process Definitions to Integration Designer workflows. By defining data models at the workflow level, you ensure consistent data lineage across your integration architecture, making workflows more reusable, testable, and maintainable.

Structured Data Management

Define entities, attributes, and custom data types at the workflow level, similar to Process Data Models

Input/Output Parameters

Specify which data enters and exits your workflow with automatic parameter management

Process Integration

Map data bidirectionally between processes and workflows using data mappers

Type Safety & Validation

Define data types and validation rules to ensure runtime data integrity

Key concepts

Workflow data model

A Workflow Data Model defines the structure of data used throughout a workflow’s execution. It consists of:
  • Entities: Logical groupings of related data (for example, Customer, Order, Payment)
  • Attributes: Individual data fields with types and constraints
  • Input Parameters: The subset of the data model passed into the workflow through the Start Node
  • Output Parameters: The subset of the data model returned by End Nodes
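As an illustration, a hypothetical Order entity and its attributes could be sketched as the following structure (names and types are examples, not a product schema):

```json
{
  "Order": {
    "orderId":  { "type": "string", "required": true },
    "amount":   { "type": "number", "required": true },
    "currency": { "type": "string", "required": false }
  }
}
```

Input parameters would then reference one subset of these attributes and output parameters another.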

Workflow

Workflow parameters are defined in the data model and determine which data is passed as input or returned as output:
  • Input Parameters: Automatically pre-fill the Start Node with structured data when you open the workflow diagram tab or test the workflow at runtime
  • Output Parameters: Define the data structure returned by End Nodes. Output parameters can currently be configured manually; automatic pre-filling from output parameters is planned for future releases
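For example, assuming hypothetical input parameters `username` (String, required) and `locale` (String, optional), the Start Node would open pre-filled with a structure along these lines:

```json
{
  "username": "",
  "locale": ""
}
```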

Node name uniqueness

Node names within a workflow must be unique to ensure clear data lineage and avoid ambiguity when mapping data.
When you rename nodes, the system validates uniqueness. If a duplicate name exists, an index is automatically appended (for example, Transform, Transform_2, Transform_3).

Creating a workflow data model

1

Navigate to Workflow Settings

Open your workflow in Integration Designer and navigate to the Data Model tab in workflow settings.
Workflow Data Model Tab
2

Define Entities

Create entities to represent the logical data structures in your workflow:
  • Click Create Attribute
  • Enter an entity name (e.g., Customer, Order)
  • Add a description for documentation
3

Add Attributes

For details on adding and configuring attributes for your data model—including data types, required fields, validation, and advanced options—see the Data Model documentation.
4

Configure Input Parameters

Define which data will be passed as input to your workflow:
  • Navigate to Input/Output Parameters tab
  • Click Define parameters in the Input section
  • Select attributes from your data model to include
  • Mark required parameters
Input parameters data automatically pre-fills the Start Node when you open the workflow diagram or test at runtime.
5

Configure Output Parameters

Define which data will be returned when your workflow completes:
  • In the Input/Output Parameters tab, scroll to the Output section
  • Click Define parameters in the Output section
  • Select attributes from your data model to include as outputs
  • These parameters define the data structure returned by End Nodes
Output parameters are used when mapping workflow results back to the calling process via the Receive Message Task.
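As a sketch, if the output parameters included a hypothetical `status` string and a `user` object, an End Node would return data shaped like:

```json
{
  "status": "COMPLETED",
  "user": {
    "name": "Jane Doe"
  }
}
```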
6

Save and Test

Save your data model and test the workflow to verify input and output parameter mapping works correctly.

Input parameter management

Automatic start node pre-filling

When you define input parameters, the Workflow Start Node is automatically populated with the structured data:

Input parameters definition

Automatic pre-filling of the Workflow Start Node

Important: If you manually modify the JSON structure in the Start Node, your changes will be lost when you refresh the workflow diagram or reopen the workflow. The Start Node is automatically populated from the input parameters definition, so any manual edits are overwritten. To change the input structure, update the input parameters definition in the Input/Output Parameters tab instead of editing the Start Node JSON directly.

Empty input parameters behavior

If no input parameters are defined for a workflow, the Start Node contains an empty object ({}). Input parameters are optional, but defining them provides better structure and type safety for your workflow inputs.
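Concretely, with no input parameters the Start Node contains:

```json
{}
```

whereas with hypothetical `username` and `accountId` input parameters defined, it is pre-filled with:

```json
{
  "username": "",
  "accountId": ""
}
```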

Benefits of input parameters

No Manual Editing

Input parameters automatically pre-fill the Start Node - no JSON editing needed

Type Safety

Data types are enforced at the workflow level, preventing runtime errors

Clear Contracts

Input parameters document exactly what data the workflow expects

Easier Testing

Test workflows with structured input data based on the data model

Integration with processes

Process to workflow data mapping

Use data mappers to pass data from processes to workflows. The data mapper configuration in the Start Integration Workflow action maps process data to workflow input parameters, independent of the workflow’s input parameters definition.
The workflow’s input parameters are used for:
  • Pre-filling the Start Node when designing/testing workflows
  • Documenting the expected input structure
When invoking workflows from processes via Start Integration Workflow action, data mappers handle the actual data mapping at runtime.
1

Add Start Integration Workflow Action

In your process definition, add a Start Integration Workflow action to the node where you want to invoke the workflow. Choose the workflow you want to invoke from the dropdown.

Start Integration Workflow action

2

Configure Input Mapping

Access Input Mapping tab to configure the input mapping.

Input Mapping tab

Map process data to workflow input parameters using the data mapper:
{
  "customerId": "${application.customer.id}",
  "accountNumber": "${application.account.number}",
  "amount": "${application.transaction.amount}"
}

Input Mapping

The keys on the left should match your workflow’s input parameters for consistency, though the data mapper operates independently.
3

Add Receive Message Task

Add a Receive Message Task node to capture workflow output when the workflow completes. Add a Data Stream by going to the Node Config -> Integration Output -> Data Streams tab and clicking Add Data Stream.

Data Streams tab

4

Configure Output Mapping

Create a data mapper on the Receive Message Task to map workflow outputs back to process data:

Data Stream
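A minimal output-mapping sketch, assuming the workflow’s End Node returns a hypothetical `user` object and a `status` field:

```json
{
  "user": "${user}",
  "status": "${status}"
}
```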

When you use User Tasks, both input and output mappings are available in the User Task node configuration at Node Config -> Integration Input/Output -> Data Streams.

Example: User search workflow

This example demonstrates a workflow that searches for a user in the FlowX database using a username and returns the user’s available information.

Workflow data model configuration

Configure your workflow data model with the following parameters:
Input Parameter:
  • username (String, required) - The user’s name to search for

Input Parameter

Output Parameter:
  • user (Object) - The user object containing all available user information

Output Parameter

Process setup

The process includes:
  1. User Task: Contains an input field where users enter the username to search

User Task

  2. Search Button: Triggers the Start Integration Workflow action

Start Integration Workflow Action

  3. Results Container: Configured with conditional visibility to display only when results are returned
Example
**User found:**
Name: ${user.name}
Account ID: ${user.account_id}
Active user: ${user.is_active}

Results Container

Mapping configuration

The Start Integration Workflow action requires two mapping configurations.
Input Mapping (Process → Workflow): Maps the username from the main process to the workflow’s expected input variable:
{
  "username": "${user.name}"
}
Output Mapping (Workflow → Process): To configure output mapping:
  1. Add a data stream for the workflow on the Receive Message Task
  2. Define the user object as the output variable needed by the main process
  3. Link the workflow’s returned variables to the process variables
{
  "user": "${user}"
}
This mapping returns the entire user object.
The output mapping displays all available end nodes from the workflow. Select the appropriate end node that returns the user data.

User database

User database

The user database is a simple database with a single user object.

Workflow diagram

Workflow diagram

The Start Node is pre-filled with the input parameter.

Execution results

When the process runs and the search action executes:
Workflow Input:
{
  "username": "Joshua Helk"
}
Workflow Output:
{
  "user": {
    "_id": "6936ceb5c72d63c5585b93e7",
    "name": "Joshua Helk",
    "account_id": "dsa76d09ashu0231",
    "creation_date": "date",
    "is_active": true
  }
}
After workflow completion, all output information is available in the process variables of the main process, and the results container displays the user information.

Best practices

Data model structure

Use logical entities

Group related attributes into meaningful entities that represent domain concepts

Consistent naming

Use camelCase for attributes and PascalCase for entities consistently

Reuse existing data types

Reuse existing data types for bulk mapping to streamline data model creation and ensure consistency

Document everything

Add descriptions to entities and attributes for better team understanding

Keep it simple

Start with minimal data models and expand as needed

Input and output design

DO: ✅ Define only the data the workflow actually needs as input parameters
✅ Use meaningful parameter names that describe the data
✅ Set appropriate data types for validation
✅ Mark parameters as required when they’re essential
DON’T: ❌ Include unnecessary data in input/output parameters
❌ Use ambiguous or cryptic parameter names
❌ Make all parameters optional when some are required
❌ Change data model structure without updating mappings
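For instance, compare a clear input mapping with a cryptic one (names are illustrative):

```json
{
  "customerEmail": "${application.customer.email}"
}
```

rather than:

```json
{
  "ce1": "${application.customer.email}"
}
```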

Node naming

Use descriptive node names that indicate their purpose:
  • Good: FetchCustomerData, ValidateCreditScore, TransformResponse
  • Avoid: Node1, Node2, Script
The system enforces unique node names within workflows. If you need similar operations, use descriptive suffixes:
  • FetchCustomerData_Personal
  • FetchCustomerData_Financial

Limitations and compatibility

Current limitations

The following features are not yet available:
  • Start sub-workflow node: Start sub-workflow nodes currently run without data model integration
  • Endpoint schema integration: Imported Swagger schemas are not integrated with workflow data models yet
  • Database schema mapping: Database operation schemas are not mapped to workflow variables yet


Troubleshooting

Problem: The Start Node is not pre-filled with input parameters.
Possible causes:
  • Input parameters not saved correctly
  • Browser cache issues
  • No input parameters defined (results in empty object)
Solutions:
  • Save the workflow and refresh the browser
  • Re-open the workflow diagram tab
  • Check that the input parameters contain attributes
  • Verify attributes are marked as required if needed
  • If no input parameters are defined, the Start Node will contain an empty object {}
Problem: You manually edited the JSON in the Start Node, but your changes disappeared after refreshing or reopening the workflow.
Cause: The Start Node is automatically populated from the input parameters definition. Any manual edits are overwritten when the workflow diagram is refreshed or reopened.
Solution:
  • Update the input parameters definition in the Input/Output Parameters tab instead of editing the Start Node JSON directly
  • Changes to the input parameters will automatically reflect in the Start Node
  • If you need a different structure, modify the data model and input parameters accordingly
Error message: “Node name must be unique within the workflow”
Solution:
  • Choose a different, more descriptive node name
  • The system will suggest appending an index (e.g., _2)
  • Use descriptive suffixes instead (e.g., _Personal, _Financial)
Problem: Data from the process does not reach the workflow.
Possible causes:
  • Input parameters keys don’t match process data mapper
  • Data types mismatch
Solutions:
  • Ensure data mapper keys match input parameters attribute names exactly
  • Verify data types are compatible (string to string, number to number)
  • Check for typos in attribute names
  • Review process data model to workflow data model mapping
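For example, if the workflow defines a hypothetical input parameter named `username`, the data mapper key must match it exactly:

```json
{
  "username": "${user.name}"
}
```

A mismatched key such as `userName` or `user_name` would leave the workflow input unset.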
Error message: “Entity/Attribute is referenced and cannot be deleted”
Solution:
  • Check attribute usages using the API: GET .../attributes/{name}/usages
  • Remove references from input/output parameters first
  • Update workflow nodes that use the attribute
  • Then retry deletion
Last modified on December 24, 2025