
Overview

Use Data Mappers to define how data flows between components. Add JavaScript transformations for complex scenarios.

Key concepts

Source Components

Send data to another component to trigger execution
  • Parent processes
  • Integration flows
  • Business rules

Destination Components

Receive input from a source component to execute
  • Subprocesses
  • Workflows
  • Business rules

Parameters & variables

Configure parameters and variables on the Data Model tab at process level.
  • Input Parameters
  • Output Parameters
Input Parameters: Define the structure your component expects to receive for successful execution.
Input Variables: Send the actual data from the source to the destination component for execution.
Input Parameters

Parameter types by origin

  • Predefined parameters: Established before the mapping action (e.g., the input of a subprocess or workflow). These typically belong to destination components with fixed requirements.
  • Mapping-time parameters: Defined during the mapping process. These are associated with source components and specify where the destination component's data comes from.

Prerequisites

Plan your Data Model carefully before implementing Data Mappers. Proper setup prevents mapping errors and rework later.

Data Model hierarchy

Understand the data model hierarchy to implement Data Mappers effectively:

Project Data Model

Organization Level: Define reusable data structures (e.g., "customer", "client type") that you can share across all processes and workflows within your project.

Library Data Model

Enterprise Level: Create organization-wide standardized data types that you can reuse across multiple projects by including the library as a dependency.

Process Data Model

Process Level: Define process-specific data structures while leveraging reusable components from project and library levels.

Setup requirements

1

Define Project Data Model First

Critical: Define your project data model before creating processes. This enables:
  • Reusability across processes and workflows
  • Consistent data structures
  • Improved error prevention
  • Enhanced mapping experience
2

Include Example Values

Add meaningful example values to your data model attributes. This helps you:
  • Visualize attribute meaning and expected values
  • Test without separate mock data
  • Improve your configuration experience
3

Plan for Reusability

Design data structures that can be shared across multiple components:
  • Customer information structures
  • Common business entities
  • Standardized response formats

Configuration

Setting up process parameters

1

Configure Input Parameters

  • Open the process you want to configure
  • Navigate to the Data Model
  • Select the Input parameters tab and click "Define parameters"
Define parameters for each start node separately when you have multiple start nodes. Select nodes from the dropdown.
  • Define the schema based on the process's Data Model
  • Mark fields as optional if your component can function without them. The optional flag means your component can operate without those parameters throughout its entire run, not just during mapping.
  • Note: This centralizes input definition at the subprocess level, replacing the previous "data to send" configuration on the parent process
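As an illustration, the input-parameter schema for a hypothetical subprocess that expects a customer structure might look like this (the attribute names and example values are invented for illustration, not product output):

```json
{
  "customer": {
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "clientType": "retail"
  }
}
```

Example values like these are what the test payload preview uses later when you click "test".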
2

Configure Output Parameters

  • Navigate to the Data Model
  • Select the Output parameters tab
  • Define the schema based on the process's Data Model
3

Handle Multiple Start/End Nodes

  • View all start or end nodes when your subprocess has multiple nodes
  • Use this for scenarios with exclusive gateways
  • Configure parameters independently for each node
Process Start Prompts: When you start a process with defined input parameters, the system generates input prompts based on required parameters to help with testing and validation.

Mapping scenarios

Call activity mapping

Synchronous call activity

  • Parent process with defined Data Model
  • Child process with input/output parameters on its Data Model
  • Project-level data model for reusable structures
1

Set Up Call Activity

  • In a process, define a call-activity type node
  • Select the subprocess to be called
  • Data Mapping is now the default option
  • Toggle to "Switch to legacy mapping" if needed for backward compatibility
Existing actions configured before this update automatically retain legacy mapping; you don't need to modify them unless you want to switch to the new Data Mapping.
2

Map Input Parameters

  • View the subprocess's input parameters
  • Click the key icon next to the parameter
  • Source Component: Current process (parent)
  • Destination Component: Subprocess input parameters
  • Mapping Options:
    • Individual Mapping: Map each attribute individually using interpolation syntax: ${variableName}
    • Auto-populate Mapping: Select the data model types from dropdown for automatic mapping of all attributes
    • Quick Mapping: If source and destination have the same data type, use dropdown for automatic mapping of all attributes
Auto-populate Feature: Instead of manually typing variable paths, select your data model types from the dropdown and the system will automatically populate all the mappings for you.
  • Display Options: You can view variables as schema or JSON format for better visualization
  • Variable Format: When mapping manually, use interpolation syntax ${variableName} - this format differs from process keys
  • Testing: Click the "test" button to visualize the JSON payload using example values from your data model
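For instance, individually mapping two hypothetical parent-process attributes to subprocess inputs with the interpolation syntax could look like this (the variable names are illustrative and the JSON shape is a sketch, not the exact product configuration):

```json
{
  "customerName": "${customer.name}",
  "customerEmail": "${customer.email}"
}
```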
3

Map Output Parameters

  • Select the Output tab
  • View the subprocess's output parameters
  • Select attributes in parent process data model for storing output
  • Map subprocess output parameters (source) to parent process parameters (destination)
JavaScript Transformation: To access JavaScript transformations, click the function icon next to the variable you want to transform.
  • Use JavaScript functions for data transformation and combination
  • Syntax Requirements: All computed values must use return statements
  • Example: return ${userVerification.status} + " on " + ${userVerification.verificationDate}
Data Reference Syntax:
  • Dynamic values: Use ${variableName} syntax to reference dynamic values (same as used for variable mapping)
  • Example: return ${approverName} + " approved on " + ${date}
Testing and Persistence:
  • Testing: Use test button to see transformed output with example values
  • Value Persistence: Previous values are saved until you click "confirmation" and "save"
  • Error Handling: JavaScript errors trigger when functions encounter issues (e.g., reading property from undefined object)
Type Handling:
  • System sends configured data even with type mismatches (e.g., string to number)
  • Always validate your transformations before saving
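To make the behavior concrete, here is a plain-JavaScript sketch of what such a transformation effectively computes once the ${...} references are resolved; the variable name and values below are samples, not real process data:

```javascript
// Sample data standing in for a resolved process variable.
const userVerification = {
  status: "Verified",
  verificationDate: "2024-05-01",
};

// Equivalent of:
// return ${userVerification.status} + " on " + ${userVerification.verificationDate}
function transform(v) {
  // A return statement is mandatory for all computed values.
  return v.status + " on " + v.verificationDate;
}

console.log(transform(userVerification)); // "Verified on 2024-05-01"
```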
4

Save Configuration

Save the mapping configuration for both input and output

Asynchronous call activity

For async nodes, output cannot be mapped as the parent process doesn't wait for completion.
1

Set Up Async Call Activity

  • Define call-activity node or Start Subprocess action on User Task
  • Select subprocess to be called
  • Check the Run async toggle
2

Map Input Only

  • Choose Data Mapper implementation
  • Map input parameters from parent process (source) to subprocess (destination)
  • Save input mapping configuration

Testing and validation

Testing methodology

1

Use Example Values

  • Leverage example values from your data model for testing
  • No need for separate mock data
  • Values automatically populate in test scenarios
  • Auto-populate benefit: Example values help visualize attribute meaning and expected data
2

Test Payload Generation

  • Click the "test" button to visualize the JSON payload
  • See exactly what data will be sent to destination component
  • Verify transformations work as expected
  • Display formats: View results as schema or JSON for better understanding
3

Variable Mapping Validation

  • Verify interpolation syntax ${variableName} is correctly formatted
  • Test both manual mappings and auto-populated ones
  • Ensure data type compatibility between source and destination
The interpolation format ${variableName} used in Data Mappers differs from process key formats. Pay attention to this when manually entering variable paths.
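As a rough illustration (not a platform API), a check like the following captures the ${variableName} shape expected when entering variable paths manually:

```javascript
// Hypothetical helper: returns true when a mapping string is a single
// ${variableName} reference, with optional dot-separated attributes.
function isInterpolation(value) {
  return /^\$\{[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*\}$/.test(value);
}

console.log(isInterpolation("${monthlyIncome}"));  // true
console.log(isInterpolation("${customer.name}"));  // true
console.log(isInterpolation("monthlyIncome"));     // false, missing ${...}
```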
4

Apply Previous Session Information

  • Use information from previous sessions in testing
  • Validate real-world scenarios
  • Ensure data flows correctly end-to-end
5

Testing subprocess data

  • When testing data coming from a subprocess, find the subprocess's instance UUID and paste it into the instance field of the test data, as shown in the demo below:

JavaScript transformation testing

  • All computed values must include return statements
  • Use clear parameter notation (e.g., userVerification.status)
  • Functions must handle potential undefined values
  • System sends data even with type mismatches
  • JavaScript errors trigger for undefined object property access
  • Test transformations thoroughly before deployment
  • Previous values are saved during configuration
  • Only lost upon clicking "confirmation" and "save"
  • Allows iterative testing and refinement
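Since functions must handle potential undefined values, a defensive transformation can guard its inputs. This is a plain-JavaScript sketch with sample data, not platform code:

```javascript
// `userVerification` may be undefined if the source mapping is empty.
function transform(userVerification) {
  // Optional chaining and nullish coalescing avoid
  // "cannot read property of undefined" errors.
  const status = userVerification?.status ?? "unknown";
  const date = userVerification?.verificationDate ?? "n/a";
  return status + " on " + date;
}

console.log(transform({ status: "approved", verificationDate: "2024-05-01" })); // "approved on 2024-05-01"
console.log(transform(undefined)); // "unknown on n/a"
```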

Managing parameter changes

Different types of parameter changes require different handling - some propagate automatically while others need manual updates.

Automatic propagation

These changes propagate automatically to input/output parameters and mappings:

Deleted Attributes

Automatic removal: When you delete an attribute from the data model, it disappears from input/output parameters and mappings

Renamed Attributes

Automatic update: When you rename an attribute in the data model, the change reflects in input/output parameters and mappings

Manual intervention required

New Attributes

Manual addition: When you add new attributes to the data model, you must manually:
  1. Add them to input/output parameters
  2. Configure them in the data mapper

Data Model propagation timing

Propagation speed varies by data model type:
When you alter a data type's structure (e.g., adding attributes or changing types), updates propagate as follows:
  • Project data model: Immediately
  • Library data model: After the dependency build is updated

Best practices for parameter changes

Always test your mappings after any data model changes to ensure they function correctly.

Use cases

Data Mappers can be configured whether the subprocess is triggered from a:
  • Call Activity node: Direct subprocess invocation within process flow
  • User Task node: Subprocess triggered by user interaction
Both scenarios support full input and output parameter mapping capabilities.
Data mappers handle different subprocess execution patterns:
  • Synchronous execution: Parent process waits for completion, enabling both input and output mapping
  • Asynchronous execution: Parent process continues without waiting, supporting input mapping only
  • Single instance: One subprocess instance per trigger
  • Multiple instances: Multiple subprocess instances from the same trigger
When integrating a pre-defined reusable UI template into a User Task node:
  • Map data from parent process to template input parameters
  • Configure template output to flow back to parent process
  • Enable consistent UI behavior across multiple processes

Real-world examples

Example 1: Monthly income collection

Scenario: A subprocess collects monthly income information from users and passes it back to the parent process.
1

Subprocess Design

  • UI Behavior: Modal appears asking user to select currency and enter income amount
  • Data Collection: User enters monthly income value
  • Requirement: Pass collected data back to parent process
2

Data Model Configuration

In the subprocess:
  • Add monthlyIncome variable to the data model
  • Navigate to Output Parameters tab
  • Define monthlyIncome as an output parameter
In the parent process:
  • Initially no input parameters needed (not sending data to subprocess)
  • Focus on receiving output data
3

Call Activity Mapping

  • Open the call activity node in parent process
  • Navigate to Input and Output Parameters section
  • Output mapping: Connect subprocess output to parent process variable
Variable format: Use ${monthlyIncome} (interpolation syntax)
Auto-mapping shortcut: Select data model types from dropdown to automatically populate mappings instead of manual entry.
4

Variable Renaming Example

  • Subprocess variable: monthlyIncome
  • Parent process variable: incomePerMonth (renamed for clarity)
  • Mapping: ${monthlyIncome} → incomePerMonth
  • Result: Two distinct data models successfully connected
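Conceptually, the renaming above reduces to a single mapping entry (illustrative notation, not the exact product configuration):

```json
{
  "incomePerMonth": "${monthlyIncome}"
}
```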

Example 2: Time-off approval with JavaScript transformation

Scenario: A colleague submits a time-off request, requiring approval with enhanced data transformation.
1

Request Flow Design

  • Input data: Start/end dates, requester information
  • UI Behavior: Modal shows request details with approve/deny options
  • Output requirement: Return approval status + approver name
2

Input Parameter Mapping

  • Parent process sends:
    • Approver name
    • Request details (start date, end date)
  • Subprocess receives: Input parameters for display
Configuration: Parent process output parameters → Subprocess input parameters
3

JavaScript Transformation for Output

Goal: Combine approval decision with approver identity
  • Click the function icon next to the output variable
  • Script example:
return ${approverName} + " approved on " + new Date().toISOString()
Dynamic value syntax:
  • Use ${variableName} to reference dynamic values: ${approverName}
Always include return statements in JavaScript transformations.
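Run as plain JavaScript with the ${approverName} reference substituted by a sample value, the transformation behaves like this (the name is invented):

```javascript
// Equivalent of:
// return ${approverName} + " approved on " + new Date().toISOString()
function approvalMessage(approverName) {
  return approverName + " approved on " + new Date().toISOString();
}

console.log(approvalMessage("Jane Doe")); // e.g. "Jane Doe approved on 2024-05-01T12:00:00.000Z"
```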
4

Testing Transformation

  • Click test button to preview transformed output
  • Verify JavaScript logic with example data
  • Result: Enriched data flows back to parent process
Key Takeaway: Data Mappers enable not just data transfer, but also data enrichment and transformation through JavaScript, making them powerful tools for complex business workflows.

Best practices

Define Data Models Early

Establish project and library data models before creating processes for maximum reusability

Use Example Values

Include meaningful example values in data models to improve testing and configuration experience

Plan for Reusability

Design data structures that can be shared across multiple components and processes

Test Thoroughly

Validate mappings and transformations with various data scenarios before deployment

Document Dependencies

Note any dependencies between mapped parameters and transformation logic

Leverage Quick Mapping

Use quick mapping for components with identical data types to speed up configuration

FAQs

What happens to mappings when I change parameters?

It depends on the change type:
  • Deleted attributes: Automatically removed from mappings
  • Renamed attributes: Automatically updated in mappings
  • New attributes: You must manually add and configure them
Always test your mappings after making parameter changes to ensure they still function correctly.
Do I still need the "Append Data to Parent Process" action?

When you start a subprocess using Data Mappers:
  • No need to configure the "Append Data to Parent Process" action in the subprocess anymore
  • For synchronous subprocesses: Output automatically flows back to parent process
  • For asynchronous subprocesses: You can configure input mapping, and the subprocess can append data back to the parent process asynchronously
How do I handle multiple start or end nodes?

You can define input and output parameters for each start or end node separately:
  • Navigate to Data Model → Input/Output Parameters
  • Select the specific node from the dropdown
  • Configure parameters independently for each end node
Use this for processes with exclusive gateways where different entry or exit points need different data structures.
Why isn't the currency code transmitted?

Known limitation: When you send a currency variable with amount and code as input, the system doesn't transmit the code because inputs don't collect the code from currency fields.
Solutions:
  • Set the code as optional in the output parameters, OR
  • Gather the code separately using a segmented button or other input method
Handle this properly to avoid mapping errors.
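The second workaround amounts to recombining the amount with a separately collected code in a transformation. This is a plain-JavaScript sketch with invented names, not platform code:

```javascript
// `amount` arrives from the currency field; `code` is collected
// separately (e.g. via a segmented button) and recombined here.
function combineCurrency(amount, code) {
  // Falling back to a default code is an assumption for the sketch.
  return { amount: amount, code: code ?? "EUR" };
}

console.log(combineCurrency(1500, "USD")); // { amount: 1500, code: 'USD' }
```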

Current limitations

Current limitations to be aware of:
  • Array to Object Mapping: Cannot map array → object for scenarios when you edit or add an item in a collection (estimated fix: 5.X Feature)
  • Currency Code Transmission: When mapping currency variables as input, only the amount is sent - the currency code is not transmitted
  • Workaround Required: Either make code optional in output parameters or collect currency code through separate input methods