
Did you know?

Unlike Postman, which focuses on API testing, the Integration Designer automates workflows between systems. With drag-and-drop ease, it handles REST API connections, real-time processes, and error management, making integrations scalable and easy to maintain.

Overview

Integration Designer facilitates the integration of the FlowX platform with external systems, applications, and data sources.
Integration Designer focuses on REST API integrations, with future updates expanding support for other protocols.

Key features

1. Drag-and-Drop Simplicity

You can easily build complex API workflows using a drag-and-drop interface, making it accessible to both technical and non-technical users.
2. Visual REST API Integration

Specifically tailored for creating and managing REST API calls through a visual interface, streamlining the integration process without the need for extensive coding.
3. Real-Time Testing and Validation

Allows for immediate testing and validation of REST API calls within the design interface.

Managing integration endpoints

Data Sources

A data source is a collection of resources (endpoints, authentication, and variables) used to define and run integration workflows.

Creating a new data source definition

With the Data Sources feature, you can create, update, and organize the endpoints used in API integrations. These endpoints are integral to building workflows within the Integration Designer, offering flexibility and ease of use for managing connections between systems. Endpoints can be configured, tested, and reused across multiple workflows, streamlining the integration process. Go to the Data Sources section in FlowX Designer at Workspaces -> Your workspace -> Projects -> Your project -> Integrations -> Data Sources.

Data sources types

There are two types of data sources:
  • RESTful System: for REST API integrations
  • FlowX Database: for FlowX Database integrations

RESTful System

Add a New Data Source and set the data source's unique code, name, and description:
  • Select Data Source: RESTful System
  • Name: The data source's name.
  • Code: A unique identifier for the external data source.
  • Base URL: The base URL is the main address of a website or web application, typically consisting of the protocol (http or https), domain name, and a path.
  • Description: A description of the data source and its purpose.
  • Enable enumeration value mapping: If checked, this system will be listed under the mapped enumerations. See the enumerations section for more details.
To dynamically adjust the base URL across environments (e.g., dev, QA, stage), you can use environment variables and configuration parameters, for example: https://api.${environment}.example.com/v1. Keep in mind that the priority for determining a configuration parameter (e.g., the base URL) follows this order: first, input from the user/process; second, configuration parameter overrides (set directly in FlowX.AI Designer or via environment variables); and lastly, configuration parameters.
  1. Set up authorization (Service Token, Bearer Token, or No Auth). In our example, we will set the auth type to Bearer and define it at the system level:
The value of the token might change depending on the environment, so it is recommended to define it at the system level and apply Configuration Parameters Overrides at runtime.

Defining REST integration endpoints

In this section you can define REST API endpoints that can be reused across different workflows.
  1. Under the Endpoints section, add the necessary endpoints for system integration.
  2. Configure an endpoint by filling in the following properties (see the sketch after this list):
    • Method: GET, POST, PUT, PATCH, DELETE.
    • Path: Path for the endpoint.
    • Parameters: Path, query, and header parameters.
    • Body: JSON, Multipart/form-data, or Binary.
    • Response: JSON or Single binary file.
    • Response example: Body or headers.
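For orientation, here is a minimal, hypothetical endpoint definition covering these properties; the path, parameter names, and values below are illustrative assumptions, not fields taken from a real system:
{
  "method": "GET",
  "path": "/customers/${customerId}/accounts",
  "parameters": [
    { "name": "customerId", "in": "path", "value": "${customerId}" },
    { "name": "status", "in": "query", "value": "active" },
    { "name": "X-Request-Id", "in": "header", "value": "${requestId}" }
  ],
  "response": "JSON"
}
At design time you would enter these values in the endpoint form rather than as raw JSON; the sketch only groups the properties listed above for readability.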

Defining variables

The Variables tab allows you to store system-specific variables that can be referenced throughout workflows using the format ${variableName}. These declared variables can be utilized not only in workflows but also in other sections, such as the Endpoint or Authorization tabs.
For example:
  • For our integration example, you can declare variables to store your tableId and baseId and reference them from the Variables tab (see the sketch below).
  • Use variables in the Base URL to switch between different environments, such as UAT or production.
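As a hedged illustration, the sketch below shows how declared variables (here baseId, tableId, and environment, all hypothetical names and values) could be referenced in the Base URL and an endpoint path using the ${variableName} format:
{
  "variables": {
    "baseId": "app_example_base",
    "tableId": "tbl_example_table",
    "environment": "uat"
  },
  "baseUrl": "https://api.${environment}.example.com/v1",
  "endpointPath": "/${baseId}/${tableId}"
}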

Endpoint parameter types

When configuring endpoints, several parameter types help define how the endpoint interacts with external systems. These parameters ensure that requests are properly formatted and data is correctly passed.

Path parameters

Elements embedded directly within the URL path of an API request that act as placeholders for specific values (see the example below).
  • Used to specify variable parts of the endpoint URL.
  • Defined with ${parameter} format.
  • Mandatory in the request URL.
Path parameters must always be included, while query and header parameters are optional but can be set as required based on the endpoint's design.
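For example, a hypothetical endpoint with a single path parameter might be declared as in the sketch below; userId and its value are assumptions for illustration only:
{
  "path": "/users/${userId}",
  "parameters": [
    { "name": "userId", "in": "path", "value": "10021" }
  ]
}
With a base URL of https://api.example.com, the resolved request URL would be https://api.example.com/users/10021.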

Query parameters

Query parameters are added to the end of a URL to provide extra information to a web server when making requests.
  • Query parameters are appended to the URL after a ? symbol and are typically used for filtering or pagination (e.g., ?search=value).
  • Example URL with query parameters: https://api.example.com/users?search=johndoe&page=2.
These parameters must be defined in the Parameters table, not directly in the endpoint path.
To preview how query parameters are sent in the request, you can use the Preview feature to see the exact request in cURL format. This shows the complete URL, including query parameters.

Header parameters

Used to provide information about the request and to instruct the API on how to handle it.
  • Header parameters (HTTP headers) provide extra details about the request or its message body.
  • They are not part of the URL. Default values can be set for testing and overridden in the workflow.
  • Custom headers sent with the request (e.g., Authorization: Bearer token).
  • Define metadata or authorization details.

Body parameters

The data sent to the server when an API request is made.
  • These are the data fields included in the body of a request, usually in JSON format.
  • Body parameters are used in POST, PUT, and PATCH requests to send data to the external system (e.g., creating or updating a resource).

Response body parameters

The data sent back from the server after an API request is made.
  • These parameters are part of the response returned by the external system after a request is processed. They contain the data that the system sends back.
  • Typically returned in GET, POST, PUT, and PATCH requests. Response body parameters provide details about the result of the request (e.g., confirmation of resource creation, or data retrieval)

Enum mapper

The enum mapper for the request body enables you to configure enumerations for specific keys in the request body, aligning them with values from the External System or translations into another language.
On enumerations you can map both translation values for different languages and values for different source systems.
Make sure the enumerations, with their corresponding translations and system values, have already been created in your application:
Select whether the integration should use the enumeration value corresponding to the External System or the translation into another language. For translation into a language, a header parameter called 'Language' is required to specify the target language.

Configuring authorization

  • Select the required Authorization Type from a predefined list.
  • Enter the relevant details based on the selected type (e.g., Realm and Client ID for Service Accounts).
  • These details will be automatically included in the request headers when the integration is executed.

Authorization methods

The Integration Designer supports several authorization methods, allowing you to configure the security settings for API calls. Depending on the external system's requirements, you can choose one of the following authorization formats:

Service account

Service Account authentication requires the following key fields:
  • Identity Provider Url: The URL for the identity provider responsible for authenticating the service account.
  • Client Id: The unique identifier for the client within the realm.
  • Client secret: A secure secret used to authenticate the client alongside the Client ID.
  • Scope: Specifies the access level or permissions for the service account.
When using Entra as an authentication solution, the Scope parameter is mandatory. Ensure it is defined correctly in the authorization settings.
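As a rough sketch only (the URL, client identifier, and scope below are placeholders, not values from any real identity provider), a Service Account configuration could look like this:
{
  "authorizationType": "Service Account",
  "identityProviderUrl": "https://idp.example.com/realms/example/protocol/openid-connect/token",
  "clientId": "integration-client",
  "clientSecret": "${CLIENT_SECRET}",
  "scope": "api://external-system/.default"
}
When Entra is the identity provider, the scope value shown here is only an assumed example of the kind of value that field might hold.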

Basic authentication

  • Requires the following credentials:
    • Username: The account's username.
    • Password: The account's password.
  • Suitable for systems that rely on simple username/password combinations for access.

Bearer

  • Requires an Access Token to be included in the request headers.
  • Commonly used for OAuth 2.0 implementations.
  • Header Configuration: Use the format Authorization: Bearer {access_token} in headers of requests needing authentication.
  • System-Level Example: You can store the Bearer token at the system level (see the sketch below), ensuring it's applied automatically to future API calls.
Store tokens in a configuration parameter so updates propagate across all requests when tokens are refreshed or changed.
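A minimal sketch of this approach, assuming a configuration parameter named apiToken that holds the current token value (both the name and the header shape are illustrative):
{
  "authorizationType": "Bearer",
  "token": "${apiToken}",
  "resultingHeader": "Authorization: Bearer ${apiToken}"
}
Updating the apiToken configuration parameter would then refresh the header for every request that uses this system.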

Certificates

Some external systems require a certificate for secure access. Use this setup to configure secure communication with such a system. It includes paths to both a Keystore (which holds the client certificate) and a Truststore (which holds trusted certificates). You can toggle these features based on the security requirements of the integration.
When the Use Certificate option is enabled, you will need to provide the following certificate-related details:
  • Keystore Path: Specifies the file path to the keystore, in this case, /opt/certificates/testkeystore.jks. The keystore contains the client certificate used for securing the connection.
  • Keystore Password: The password used to unlock the keystore.
  • Keystore Type: The format of the keystore, JKS or PKCS12, depending on the system requirements.
Truststore credentials
  • Truststore Path: The file path is set to /opt/certificates/testtruststore.jks, specifying the location of the truststore that holds trusted certificates.
  • Truststore Password: Password to access the truststore.
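Putting the fields above together, a hedged configuration sketch (passwords shown as placeholder variables) might look like:
{
  "useCertificate": true,
  "keystorePath": "/opt/certificates/testkeystore.jks",
  "keystorePassword": "${KEYSTORE_PASSWORD}",
  "keystoreType": "JKS",
  "truststorePath": "/opt/certificates/testtruststore.jks",
  "truststorePassword": "${TRUSTSTORE_PASSWORD}"
}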

File handling

You can now handle file uploads and downloads with external systems directly within Integration Designer. This update introduces native support for file transfers in RESTful connectors, reducing the need for custom development and complex workarounds.

Core scenarios

Integration Designer supports two primary file handling scenarios:
1. Downloading Files

The Integration Designer can call an external API (GET or POST) and receive a response containing one or more files. It saves these files to a specified location and returns their new paths to the workflow for further processing.
2. Uploading Files

The Integration Designer can send a file that is already stored in the Document Plugin or a custom S3 bucket to an external API via a POST request. The workflow transmits the file path, enabling file transfer without manual handling.
Common use cases include contract generation workflows where data is sent to external document services and the generated files are retrieved back into the process.

Receiving files (endpoint response configuration)

To configure an endpoint to handle incoming files from an external system, navigate to its Response tab. This functionality is available for both GET and POST methods.

Enabling and configuring file downloads

1. Activate File Processing

Switch the Save Files toggle to the "on" position to activate file processing for the response.
If this toggle is off, the system will not process files. A Single Binary response will result in an error, and a JSON response with Base64 data will be passed through as a raw string.
2. Configure Content-Type

Select the expected format of the successful API response from the Content-Type dropdown:
  • JSON (Default): For responses containing Base64 encoded file data
  • Single Binary: For responses where the body is the file itself

Handling JSON content-type

This option is used when the API returns a JSON object containing one or more Base64 encoded files.
File Destination Configuration:
  • Document Plugin
  • S3 Protocol
When saving to the Document Plugin, files go to the platform's managed storage and are linked to a specific process instance. The Document Plugin acts as a wrapper over the file system and provides special integration capabilities with document templates.
processInstance (string, required): The ID of the process instance. This field defaults to ${processInstanceId} to be mapped dynamically at runtime.
When using Document Plugin for file operations, you typically send only the document reference (ID) rather than the entire file content. The Integration Designer handles the special integration with document templates automatically.
Files Mapping Table:
Column | Description | Example
Base 64 File Key | The JSON path to the Base64 encoded string | files.user.photo
File Name Key | Optional. The JSON path to the filename string | files.photoName
Default File Name | A fallback name to use if the File Name Key is not found | imagineProfil
Default Folder | The business-context folder, such as a client ID (CNP) | folder_client
Default Doc Type | The document type for classification in the Document Plugin | Carte Identitate
The Translate or Convert Enumeration Values toggle can be used in conjunction with the Save Files feature.
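To make the mapping concrete, here is a hypothetical response body (the structure and the Base64 snippet are invented for illustration) that matches the example keys in the table above:
{
  "files": {
    "photoName": "profile_photo.jpg",
    "user": {
      "photo": "iVBORw0KGgoAAAANSUhEUgAA..."
    }
  }
}
With Base 64 File Key set to files.user.photo and File Name Key set to files.photoName, the decoded file would be stored as profile_photo.jpg; if files.photoName were missing, the Default File Name (imagineProfil) would be used instead.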

Handling Single Binary content-type

This option is used when the entire API response body is the file itself. The Single Binary content-type is ideal for endpoints that return raw file data directly in the response body.
1. Configure Content-Type

In the Response tab of your endpoint configuration:
  1. Enable the Save Files toggle
  2. Select Single Binary from the Content-Type dropdown
If the Save Files toggle is disabled, attempting to handle a Single Binary response will result in an error.
2. Choose File Destination

Select your preferred file storage destination:
  • S3 Protocol: for files stored in custom S3 buckets, ideal for files not tied to specific process instances.
  • Document Plugin
3. Configure File Name Identification

Choose how the system should identify the filename from the response.
File Destination Configuration:
  • Document Plugin
  • S3 Protocol
For files managed within FlowX's document system (Document Plugin) with full process integration, the following fields apply:
  • Default File Name (string, required): Fallback filename if header extraction fails.
  • Default Folder (string, required): Business context folder (e.g., client ID, case number).
  • Default Document Type (string, required): Document classification for the Document Plugin.
Files stored via Document Plugin are automatically linked to process instances and can be used with document templates and other FlowX document features.
Configuration Examples:
{
  "contentType": "Single Binary",
  "saveFiles": true,
  "autoIdentifyFile": true,
  "fileDestination": "S3 Protocol",
  "defaultFileName": "downloaded_file",
  "defaultFolder": "client_${clientId}"
}

Sending files (endpoint POST body configuration)

To configure an endpoint to send a file, navigate to the Body tab and select the appropriate Content Type.

Content Type: Multipart/Form-data

Use this to send files and text fields in a single request. This format is flexible and can handle mixed content types within the same POST request.
1. Configure File Source

Select where the file originates:
  • Document Plugin
  • S3 Protocol
2. Define Form Parts

Add rows to the resource table, defining each part of the form:
  • Key Type: Choose File or Text
  • Value:
    • For files: Provide the filePath (Minio path for S3 or Document Plugin reference)
    • For text: Provide the string value or variable reference
Multipart requests can be sent even without files - you can include only text fields by setting all Key Types to Text. The difference between content types is primarily in how data is packaged for transmission to the target server.
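As a sketch of how the rows in the resource table might be filled in (the keys document and clientId are hypothetical), a request mixing one file part and one text part could be described as:
[
  { "key": "document", "keyType": "File", "value": "${filePath}" },
  { "key": "clientId", "keyType": "Text", "value": "${clientId}" }
]
At runtime, the ${filePath} value would be the Minio path (or Document Plugin reference) of the file selected as the file source.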

Content Type: Single binary

Use this to send the raw file as the entire request body. This method sends only the file content without any additional form data or metadata.
  • fileSource (string, required): Select Document Plugin or S3 Protocol.
  • filePath (string, required): Specify the path of the file to be sent (Minio path for the file location).
When using Single Binary, only the Minio path is required since the entire request body will be the file content itself, without any additional packaging or metadata.
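A minimal configuration sketch for this case, mirroring the style of the earlier Single Binary response example (all values are placeholders):
{
  "contentType": "Single Binary",
  "fileSource": "Document Plugin",
  "filePath": "${filePath}"
}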

Content Type: JSON

This option should be used for standard JSON payloads only. It does not support embedding files for uploads; use Multipart/Form-Data or Single Binary for that purpose.

Runtime behavior & testing

Workflow node configuration

All configured file settings (e.g., File Path, Folder, Process Instance ID) are exposed as parameters on the corresponding workflow nodes, allowing them to be set dynamically using process variables at runtime.

Response payload & logging

When a node successfully downloads and saves a file, its output will contain the filePath to the stored file, not the raw Base64 string or binary content.
For security and performance, runtime logs will also only contain the filePath, not the raw file content.

Error handling

If a node is configured to receive a Single Binary file but the external system returns a JSON error (e.g., file not found), the JSON error will be correctly passed through to the workflow for handling.

Testing guidelines

The Test Modal is context-aware. It will only display input fields for file parameters (Process Instance ID, Folder, etc.) if Save Files is enabled on the endpoint.
If you test an endpoint that returns a binary file without configuring it as Single Binary, the test will fail with the error: "Endpoint returns a binary file. Please configure the Content-Type to handle binary responses."
The Response Example tab is now separate from the Response configuration tab and includes both Body and Headers sections for better clarity.

Example: sending files to an external system after uploading a file to the Document Plugin

In this example, we'll send a file to an external system using the Integration Designer.
1. Upload a file to the Document Plugin

Configure a process where you will upload a file to the Document Plugin.
  • Configure a User Task node where you will upload the file to the Document Plugin.
  • Configure an Upload File Action node to upload the file to the Document Plugin.
  • Configure a Save Data Action node to save the file to the Document Plugin.
2. Configure the Integration Designer

Configure the Integration Designer to send the file to an external system:
  • Configure a REST Endpoint node to send the file to an external system.

Workflows

A workflow defines a series of tasks and processes to automate system integrations. Within the Integration Designer, workflows can be configured using different components to ensure efficient data exchange and process orchestration.

Creating a workflow

  1. Navigate to Workflow Designer:
    • In FlowX.AI Designer, go to Projects -> Your application -> Integrations -> Workflows.
    • Create a New Workflow, provide a name and description, and save it.
  2. Start to design your workflow by adding nodes to represent the steps of your workflow:
  • Start Node: Defines where the workflow begins and also defines the input parameter for subsequent nodes.
  • REST endpoint nodes: Add REST API calls for fetching or sending data.
  • Fork nodes (conditions): Add conditional logic for decision-making.
  • Data mapping nodes (scripts): Write custom scripts in JavaScript or Python.
  • End Nodes: Capture output data as the completed workflow result, ensuring the process concludes with all required information.

Workflow nodes overview

Workflow nodes are the building blocks of your integration logic. Each node type serves a specific function, allowing you to design, automate, and orchestrate complex processes visually.
Node Type | Purpose
Start Node | Defines workflow input and initializes data
REST Endpoint Node | Makes REST API calls to external systems
FlowX Database Node | Reads/writes data to the FlowX Database
Condition (Fork) | Adds conditional logic and parallel branches
Script Node | Transforms or maps data using JavaScript or Python
Subworkflow Node | Invokes another workflow as a modular, reusable subcomponent
End Node | Captures and outputs the final result of the workflow

Start node

The Start node is the mandatory first node in any workflow. It defines the input data model and passes this data to subsequent nodes.
Define all required input fields in the Start node to ensure data mapping from processes or user tasks.
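As a small, hypothetical illustration (the field names below are assumptions, not a prescribed schema), a Start node input model for a customer lookup might be defined as:
{
  "customerId": "${customerId}",
  "requestType": "creditCheck"
}
Subsequent nodes would then receive customerId and requestType as part of their input.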

REST endpoint node

Enables communication with external systems via REST API calls. Supports GET, POST, PUT, PATCH, and DELETE methods. Endpoints are selected from a dropdown, grouped by system.
  • Params: Configure path, query, and header parameters.
  • Input/Output: Input is auto-populated from the previous node; output displays the API response.
You can test REST endpoint nodes independently to validate connections and data retrieval.

FlowX database node

Allows you to read and write data to the FlowX Database within your workflow.

Condition (Fork) node

Evaluates logical conditions (JavaScript or Python) to direct workflow execution along different branches.
  • If/Else: Routes based on condition evaluation.
  • Parallel Processing: Supports multiple branches for concurrent execution.
Use fork nodes to implement business rules, error handling, or multi-path logic.

Script node

Executes custom JavaScript or Python code to transform, map, or enrich data between nodes.

Subworkflow node

The Subworkflow node allows you to modularize complex workflows by invoking other workflows as reusable subcomponents. This approach streamlines process design, promotes reuse, and simplifies maintenance.
1. Add a Subworkflow Node

Select Start Subworkflow from the Select Next Node dropdown. Choose from workflows categorized as Local or Libraries.
2. Configure the Subworkflow Node

  • Workflow Selection: Pick the workflow to invoke.
  • Open: Edit the subworkflow in a new tab.
  • Preview: View the workflow canvas in a popup.
  • Response Key: Set a key (e.g., response_key) for output.
  • Input: Provide input in JSON format.
  • Output: Output is read-only JSON after execution.
Use subworkflows for reusable logic such as data enrichment, validation, or external system calls.

Execution logic and error handling

  • Parent workflow waits for subworkflow completion before proceeding.
  • If the subworkflow fails, the parent workflow halts at this node.
  • Subworkflow output is available to downstream nodes via the response key.
  • Logs include workflow name, instance ID, and node statuses for both parent and subworkflow.
If a subworkflow is deleted, an error displays: [name] subworkflow not found.
Subworkflow runs are recorded in workflow instance history for traceability.

Console logging, navigation, and read-only mode

  • Console shows input/output, workflow name, and instance ID for each subworkflow run.
  • Open subworkflow in a new tab for debugging from the console.
  • Breadcrumbs enable navigation between parent and subworkflow details.
  • In committed/upper environments, subworkflow configuration is read-only and node runs are disabled (preview/open only).
Subworkflow instances are logged in history, and you can navigate between parent and child workflow runs for comprehensive debugging.

Use case: CRM Data Retrieval with subworkflows

Suppose you need to retrieve CRM details in a subworkflow and use the output for further actions in the parent workflow.
1. Create the Subworkflow

Design a workflow that connects to your CRM system, fetches user details, and outputs the data in a structured JSON format.
2. Add a Subworkflow Node in the Parent Workflow

In your main workflow, add a Subworkflow Node and select the CRM retrieval workflow. Map any required input parameters.
3. Use Subworkflow Output in Parent Workflow

Downstream nodes in the parent workflow can reference the subworkflow's output using the defined responseKey.
{
  "crmData": "${responseKey}"
}
4. Monitor and Debug

Use the console to view input/output data, workflow names, and instance IDs. Open subworkflow runs in new tabs for detailed debugging.
This modular approach allows you to build scalable, maintainable integrations by composing workflows from reusable building blocks.

End node

The End node signifies the termination of a workflow's execution. It collects the final output and completes the workflow process.
  • Receives input in JSON format from the previous node.
  • Output represents the final data model of the workflow.
  • Multiple End nodes are allowed for different execution paths.
If the node's output doesn't meet mandatory requirements, it will be flagged as an error to ensure all necessary data is included.
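For illustration only, an End node output that wraps a REST or subworkflow result stored under a response key might look like the following sketch (status and data are assumed names):
{
  "status": "success",
  "data": "${responseKey}"
}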

Integration with external systems

This example demonstrates how to integrate FlowX with an external system, Airtable, to manage and update user credit status data. It walks through the setup of an integration system, defining API endpoints, creating workflows, and linking them to BPMN processes in FlowX Designer.
Before going through this integration example, we recommend that you:
  • Create your own base and table in Airtable, details here.
  • Check the Airtable Web API docs here to get familiar with the Airtable API.

Integration in FlowX.AI

1. Define a System

Navigate to the Integration Designer and create a new system:
  • Name: Airtable Credit Data
  • Base URL: https://api.airtable.com/v0/
2. Define Endpoints

In the Endpoints section, add the necessary API endpoints for system integration:
  1. Get Records Endpoint:
    • Method: GET
    • Path: /${baseId}/${tableId}
    • Path Parameters: Add the values for baseId and tableId so they are available in the path.
    • Header Parameters: Authorization Bearer token
See the API docs.
  2. Create Records Endpoint:
    • Method: POST
    • Path: /${baseId}/${tableId}
    • Path Parameters: Add the values for baseId and tableId so they are available in the path.
    • Header Parameters:
      • Content-Type: application/json
      • Authorization Bearer token
    • Body: JSON format containing the fields for the new record. Example:
   {
    "typecast": true,
    "records": [
        {
            "fields": {
                "First Name": "${firstName}",
                "Last Name": "${lastName}",
                "Age": ${age},
                "Gender": "${gender}",
                "Email": "${email}",
                "Phone": "${phone}",
                "Address": "${address}",
                "Occupation": "${occupation}",
                "Monthly Income ($)": ${income},
                "Credit Score": ${creditScore},
                "Credit Status": "${creditStatus}"
            }
        }
    ]
}
3. Design the Workflow

  1. Open the Workflow Designer and create a new workflow.
    • Provide a name and description.
  2. Configure Workflow Nodes:
    • Start Node: Initialize the workflow.
On the Start node, add the data that you want to extract from the process. This way, when you add the Start Integration Workflow node action, it will be populated with this data.
{
  "firstName": "${firstName}",
  "lastName": "${lastName}",
  "age": ${age},
  "gender": "${gender}",
  "email": "${email}",
  "phone": "${phone}",
  "address": "${address}",
  "occupation": "${occupation}",
  "income": ${income},
  "creditScore": ${creditScore},
  "creditStatus": "${creditStatus}"
}
Make sure these keys are also mapped in the data model of your process with their corresponding attributes.
  • REST Node: Set up API calls:
    • GET Endpoint for fetching records from Airtable.
    • POST Endpoint for creating new records.
  • Condition Node: Add logic to handle credit scores (e.g., triggering a warning if the credit score is below 300).
Condition example:
input.responseKey.data.records[0].fields["Credit Score"] < 300
  • Script Node: Include custom scripts if needed for processing data (not used in this example).
  • End Node: Define the end of the workflow with success or failure outcomes.
4. Link the Workflow to a Process

  1. Integrate the workflow into a BPMN process:
    • Open the process diagram and include a User Task and a Receive Message Task.
In this example, we'll use a User Task because we need to capture user data and send it to our workflow.
  2. Map Data in the UI Designer:
    • Create the data model
    • Link data attributes from the data model to form fields, ensuring the user input aligns with the expected parameters.
  3. Add a Start Integration Workflow node action:
  • Make sure all the input data will be captured.
5. Monitor Workflow and Capture Output

Receive Workflow Output:
  • Use the Receive Message Task to capture workflow outputs like status or returned data.
  • Set up a Data stream topic to ensure workflow output is mapped to a predefined key.
6. Start the integration

  • Start your process to initiate the workflow integration. It should add a new user with the details captured in the user task.
  • Check that it worked by going to your base in Airtable; you should see that the new user has been added.

This example demonstrates how to integrate Airtable with FlowX to automate data management. You configured a system, set up endpoints, designed a workflow, and linked it to a BPMN process.

FAQs

Q: Which protocols does the Integration Designer support?
A: Currently, the Integration Designer only supports REST APIs, but future updates will include support for SOAP and JDBC.
Q: How are security and authorization handled?
A: The Integration Service handles all security aspects, including certificates and secret keys. Authorization methods like Service Token, Bearer Token, and OAuth 2.0 are supported.
Q: How are errors handled and where can I review them?
A: Errors are logged within the workflow and can be reviewed in the dedicated monitoring console for troubleshooting and diagnostics.
Q: Can I import endpoint specifications (e.g., from Swagger)?
A: Currently, the Integration Designer only supports adding endpoint specifications manually. Import functionality (e.g., importing configurations from sources like Swagger) is planned for future releases. For now, you can manually define your endpoints by entering the necessary details directly in the system.