Data Mappers provide a standardized way to define how data flows between different components in your FlowX processes, promoting reusability and simplifying integration.

Overview

Data Mappers let users visually map data transfers between components in a clear, intuitive way, while remaining flexible enough to include custom code when needed and fully backward compatible with existing implementations.

Key concepts

Source Components

Send data to another component to trigger execution
  • Parent processes
  • Integration flows
  • Business rules

Destination Components

Receive input from a source component to execute
  • Subprocesses
  • Workflows
  • Business rules

Parameters & variables

Input Parameters: The structure a component expects to receive in order to execute successfully.
Input Variables: The actual data sent from the source to the destination component for execution.
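
As a rough illustration of the distinction (the attribute names below are hypothetical, not part of FlowX):

```javascript
// Input parameters: the structure the destination component declares
// it needs in order to execute (a schema, defined on its Start node).
const inputParameters = {
  customer: {
    firstName: "string",
    lastName: "string",
    clientType: "string"
  }
};

// Input variables: the actual values the source component sends
// to the destination at runtime.
const inputVariables = {
  customer: {
    firstName: "Jane",
    lastName: "Doe",
    clientType: "retail"
  }
};
```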

Prerequisites

Data Mappers depend on well-defined Data Models, so plan and set up your models before configuring any mappings.

Data Model hierarchy

Understanding the data model hierarchy is essential for effective Data Mapper implementation:

Project Data Model

Project Level: Define reusable data structures (e.g., “customer”, “client type”) that can be shared across all processes and workflows within the project.

Library Data Model

Enterprise Level: Create organization-wide standardized data types that can be reused across multiple projects by including the library as a dependency.

Process Data Model

Process Level: Define process-specific data structures while leveraging reusable components from project and library levels.
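
A minimal sketch of how the levels compose, assuming a hypothetical “customer” type (the names below are illustrative, not FlowX APIs):

```javascript
// Library level: enterprise-wide standardized type, reusable by any
// project that includes the library as a dependency.
const clientTypeValues = ["retail", "corporate"]; // e.g., a shared enum

// Project level: reusable structure shared across all processes
// and workflows in the project.
const customer = {
  firstName: "string",
  lastName: "string",
  clientType: "string" // constrained to clientTypeValues
};

// Process level: process-specific model that reuses the pieces above.
const loanApplicationData = {
  applicant: customer,       // reused from the project data model
  requestedAmount: "number"  // specific to this process
};
```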

Setup requirements

1. Define Project Data Model First

Critical: Define your project data model before creating processes. This enables:
  • Reusability across processes and workflows
  • Consistent data structures
  • Improved error prevention
  • Enhanced mapping experience

2. Include Example Values

Add meaningful example values to your data model attributes (see the sketch after this list). These serve multiple purposes:
  • Visualize attribute meaning and expected values
  • Enable testing without separate mock data
  • Improve configuration experience
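
A sketch of what this might look like, with hypothetical attributes; the example values double as mock data when you test a mapping:

```javascript
// Data model attributes annotated with example values.
const customerAttributes = {
  firstName:  { type: "string", exampleValue: "Jane" },
  lastName:   { type: "string", exampleValue: "Doe" },
  clientType: { type: "string", exampleValue: "retail" }
};

// When testing, the payload can be assembled from the example values,
// so no separate mock data is needed.
```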

3. Plan for Reusability

Design data structures that can be shared across multiple components:
  • Customer information structures
  • Common business entities
  • Standardized response formats

Configuration

Setting up process parameters

1. Configure Input Parameters

  • Open the process you want to configure
  • Navigate to the Start node
  • Select the Input parameters tab and click “Define parameters”
  • Define the schema based on the process’s Data Model
  • Mark optional fields so they are shown as optional in the mapping UI (see the sketch after this list)
  • Note: This centralizes input definition at the subprocess level, replacing the previous “data to send” configuration on the parent process
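
A hedged sketch of what a Start node's parameter definition expresses (the attribute names and the `required` flag shown here are illustrative):

```javascript
// Input parameters defined on the subprocess Start node, derived from
// the process's Data Model. Optional fields are flagged so the mapping
// UI can display them as optional.
const startNodeInputParameters = {
  customer:       { type: "customer", required: true },
  marketingOptIn: { type: "boolean",  required: false } // optional in the UI
};
```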

2. Configure Output Parameters

  • Navigate to the End node
  • Select the Output parameters tab
  • Define the schema based on the process’s Data Model
  • Automatic Data Flow: For synchronous processes, the subprocess output is automatically appended to the parent, with no explicit “append to parent” action required (see the sketch below)
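
Conceptually, the automatic data flow behaves like a merge into the parent instance (the keys below are hypothetical):

```javascript
// Parent process data before the synchronous subprocess runs.
const parentDataBefore = { applicant: { firstName: "Jane" } };

// Output mapped on the subprocess End node.
const subprocessOutput = { verificationResult: "approved" };

// After the subprocess completes, its output is appended to the parent
// automatically; no explicit "append to parent" action is required.
const parentDataAfter = { ...parentDataBefore, ...subprocessOutput };
// => { applicant: { firstName: "Jane" }, verificationResult: "approved" }
```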

3. Handle Multiple Start/End Nodes

  • If your subprocess has multiple start or end nodes, all will be displayed
  • Useful for scenarios involving exclusive gateways
  • Each node can have its own parameter configuration (see the sketch below)
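
For example, a subprocess with two end nodes behind an exclusive gateway might expose different output parameters per node (a sketch with hypothetical names):

```javascript
// Each end node carries its own output parameter configuration.
const endNodeOutputs = {
  approvedEnd: { approvalStatus: "string", approvedAmount: "number" },
  rejectedEnd: { approvalStatus: "string", rejectionReason: "string" }
};
```
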
Process Start Prompts: When starting a process with defined input parameters, the system automatically generates input prompts based on the required parameters, aiding in testing and validation.

Mapping scenarios

Call activity mapping

Synchronous call activity

Prerequisites:
  • Parent process with a defined Data Model
  • Child process with input/output parameters on its Start and End nodes
  • Project-level data model for reusable structures

1. Set Up Call Activity

  • In a process, define a call-activity type node
  • Select the subprocess to be called
  • Data Mapping is now the default option
  • Toggle to “Switch to legacy mapping” if needed for backward compatibility

2. Map Input Parameters

  • View the subprocess’s input parameters
  • Click the “cog wheel” icon next to the start node
  • Source Component: Current process (parent)
  • Destination Component: Subprocess input parameters
  • Mapping Options:
    • Individual Mapping: Map each attribute individually
    • Quick Mapping: If source and destination share the same data type, use the dropdown to map all attributes automatically (see the sketch at the end of this step)
  • Testing: Click the “test” button to visualize the JSON payload using example values from your data model
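
A sketch of the two mapping options, using hypothetical attribute paths:

```javascript
// Individual mapping: each destination attribute is bound
// to a source attribute one by one.
const individualMapping = {
  "customer.firstName": "applicant.firstName",
  "customer.lastName":  "applicant.lastName"
};

// Quick mapping: when source and destination share the same data type,
// the entire structure is mapped in a single step.
const quickMapping = {
  "customer": "applicant"
};
```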

3. Map Output Parameters

  • Select the Output tab
  • View subprocess’s output parameters
  • Select the attributes in the parent process data model where the output should be stored
  • Map subprocess output parameters (source) to parent process parameters (destination)
JavaScript Transformation:
  • Use JavaScript functions for data transformation and combination
  • Syntax Requirements: All computed values must use return statements
  • Example: return userVerification.status + " on " + userVerification.date
  • Testing: Use test button to see transformed output with example values
  • Value Persistence: Previously entered values are retained until you confirm and save
Type Handling:
  • The system sends the configured data even when types mismatch (e.g., a string mapped to a number field)
  • JavaScript errors are thrown if a function encounters an issue (e.g., reading a property of an undefined object); see the sketch below
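
A minimal sketch of a transformation that follows the return-statement requirement and guards against an undefined object; the `userVerification` shape comes from the example above, while the wrapper function is illustrative:

```javascript
// Combine two attributes of the subprocess output into one parent attribute.
function mapVerificationSummary(userVerification) {
  // Guard first: reading a property of an undefined object would
  // otherwise raise a JavaScript error at runtime.
  if (!userVerification) {
    return "verification pending";
  }
  // All computed values must be produced with a return statement.
  return userVerification.status + " on " + userVerification.date;
}
```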

4. Save Configuration

Save the mapping configuration for both input and output

Asynchronous call activity

For async nodes, output cannot be mapped because the parent process does not wait for the subprocess to complete.

1. Set Up Async Call Activity

  • Define a call-activity node or a Start Subprocess action on a User Task
  • Select subprocess to be called
  • Check the Run async toggle

2. Map Input Only

  • Choose Data Mapper implementation
  • Map input parameters from parent process (source) to subprocess (destination)
  • Save input mapping configuration

Testing and validation

Testing methodology

1. Use Example Values

  • Leverage example values from your data model for testing
  • No need for separate mock data
  • Values automatically populate in test scenarios

2. Test Payload Generation

  • Click the “test” button to visualize the JSON payload
  • See exactly what data will be sent to destination component
  • Verify transformations work as expected

3. Apply Previous Session Information

  • Use information from previous sessions in testing
  • Validate real-world scenarios
  • Ensure data flows correctly end-to-end

4. Test Subprocess Data

  • When testing data coming from a subprocess, find the UUID of the subprocess instance and paste it into the instance field of the test data



Managing parameter changes

Parameter changes require manual updates to existing mappings.

Modifying defined parameters

When parameters are modified in Start or End nodes:
  • Changes do not automatically propagate to existing mappings
  • Mappings must be manually updated in the Data Mapper
  • Test mappings after parameter changes

Data Model changes

When changes are made to Project or Process Data Model:

Renamed Parameters

Mapped parameters are renamed automatically when primitive attribute names change in the Data Model

Removed Parameters

Mapped parameters are removed when their primitives are deleted from the Data Types

New Parameters

New parameters are not automatically selected and must be manually configured


Best practices

Define Data Models Early

Establish project and library data models before creating processes for maximum reusability

Use Example Values

Include meaningful example values in data models to improve testing and configuration experience

Plan for Reusability

Design data structures that can be shared across multiple components and processes

Test Thoroughly

Validate mappings and transformations with various data scenarios before deployment

Document Dependencies

Note any dependencies between mapped parameters and transformation logic

Leverage Quick Mapping

Use quick mapping for components with identical data types to speed up configuration

Current limitations

Current limitations to be aware of:
  • Async Processes: Output mapping not available for asynchronous call activities
  • Manual Updates: Parameter changes require manual mapping updates
  • No Automatic Type Casting: The system doesn't automatically convert data types (e.g., string to number); convert explicitly in a transformation, as in the sketch after this list
  • Limited Resource Support: Not yet implemented for all resources (expanding in future releases)
  • Parallel Multi-Instance: Handling under consideration for future releases
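
Because of the type-casting limitation above, conversions should be done explicitly in a transformation. A sketch with hypothetical attributes:

```javascript
// The source sends a string (e.g., from a form field), but the
// destination parameter expects a number. Convert before returning,
// since the system will not cast the type automatically.
function mapRequestedAmount(form) {
  return Number(form.amount); // "2500" -> 2500
}
```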