Did you know?

Unlike Postman, which focuses on API testing, the Integration Designer automates workflows between systems. With drag-and-drop ease, it handles REST API connections, real-time processes, and error management, making integrations scalable and easy to maintain.

Overview

Integration Designer facilitates the integration of the FlowX platform with external systems, applications, and data sources.
Integration Designer focuses on REST API integrations, with future updates expanding support for other protocols.

Key features

  1. Drag-and-Drop Simplicity: Easily build complex API workflows using a drag-and-drop interface, making the tool accessible to both technical and non-technical audiences.
  2. Visual REST API Integration: Specifically tailored for creating and managing REST API calls through a visual interface, streamlining the integration process without the need for extensive coding.
  3. Real-Time Testing and Validation: Allows immediate testing and validation of REST API calls within the design interface.

Managing integration endpoints

Systems

A system is a collection of resources—endpoints, authentication, and variables—used to define and run integration workflows.

Creating a new system definition

With the Systems feature you can create, update, and organize endpoints used in API integrations. These endpoints are integral to building workflows within the Integration Designer, offering flexibility and ease of use for managing connections between systems. Endpoints can be configured, tested, and reused across multiple workflows, streamlining the integration process.
Go to the Systems section in FlowX Designer at Projects -> Your project -> Integrations -> Systems.
  1. Add a New System, set the system’s unique code, name, and description:
    • Name: The system’s name.
    • Code: A unique identifier for the external system.
    • Base URL: The base URL is the main address of a website or web application, typically consisting of the protocol (http or https), domain name, and a path.
    • Description: A description of the system and its purpose.
    • Enable enumeration value mapping: If checked, this system will be listed under the mapped enumerations. See enumerations section for more details.
To dynamically adjust the base URL across environments (e.g., dev, QA, stage), you can use environment variables and configuration parameters, for example: https://api.${environment}.example.com/v1. Keep in mind that a configuration parameter (e.g., the base URL) is resolved in this order of priority: first, input from the user/process; second, configuration parameter overrides (set directly in FlowX.AI Designer or through environment variables); and lastly, configuration parameters.
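The resolution order above can be sketched in a few lines of Python. This is an illustrative model only, not FlowX internals; the function name and the `baseUrl` key are hypothetical.

```python
# Hypothetical sketch of the base-URL resolution order described above:
# process input -> configuration parameter overrides -> configuration parameters.
from string import Template

def resolve_base_url(process_input, overrides, config_params):
    """Return the first defined base URL, following the stated priority."""
    for source in (process_input, overrides, config_params):
        if source.get("baseUrl"):
            return source["baseUrl"]
    raise KeyError("baseUrl is not defined in any configuration source")

# An environment-specific URL built from a template such as
# https://api.${environment}.example.com/v1
url = Template("https://api.${environment}.example.com/v1").substitute(environment="qa")
print(url)  # https://api.qa.example.com/v1

# An override beats the plain configuration parameter when no process input is given.
print(resolve_base_url({}, {"baseUrl": url}, {"baseUrl": "https://api.dev.example.com/v1"}))
```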
  2. Set up authorization (Service Token, Bearer Token, or No Auth). In our example, we will set the auth type to Bearer and define it at the system level:
The value of the token might change depending on the environment, so it is recommended to define it at the system level and apply Configuration Parameters Overrides at runtime.

Defining REST integration endpoints

In this section you can define REST API endpoints that can be reused across different workflows.
  1. Under the Endpoints section, add the necessary endpoints for system integration.
  2. Configure an endpoint by filling in the following properties:
    • Method: GET, POST, PUT, PATCH, DELETE.
    • Path: Path for the endpoint.
    • Parameters: Path, query, and header parameters.
    • Response Settings: Expected response codes and formats.
    • Body: JSON payload for requests.
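The properties above can be pictured as one small data structure from which a request is assembled. This is an illustrative sketch under assumed names (`Endpoint`, `build_request` are not FlowX API):

```python
# Illustrative sketch: assembling a request from the endpoint properties
# listed above. Names are hypothetical, not the FlowX API.
from dataclasses import dataclass, field
from string import Template
from typing import Optional
from urllib.parse import urlencode

@dataclass
class Endpoint:
    method: str                          # GET, POST, PUT, PATCH, DELETE
    path: str                            # e.g. "/users/${userId}"
    query: dict = field(default_factory=dict)
    headers: dict = field(default_factory=dict)
    body: Optional[dict] = None          # JSON payload for POST/PUT/PATCH

def build_request(base_url, endpoint, path_params):
    """Resolve ${...} path parameters and append query parameters."""
    url = base_url.rstrip("/") + Template(endpoint.path).substitute(path_params)
    if endpoint.query:
        url += "?" + urlencode(endpoint.query)
    return endpoint.method, url, endpoint.headers, endpoint.body

users = Endpoint("GET", "/users/${userId}", query={"page": 2})
print(build_request("https://api.example.com", users, {"userId": "42"}))
```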

Defining variables

The Variables tab allows you to store system-specific variables that can be referenced throughout workflows using the format ${variableName}. These declared variables can be utilized not only in workflows but also in other sections, such as the Endpoint or Authorization tabs.
For example:
  • For our integration example, you can declare variables in the Variables tab to store your tableId and baseId, then reference them wherever they are needed.
  • Use variables in the Base URL to switch between different environments, such as UAT or production.
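As a rough model of the ${variableName} syntax, Python's string.Template happens to use the same ${...} notation, so a substitution sketch looks like this (variable values are made up for illustration):

```python
# Minimal sketch of ${variableName} substitution; string.Template uses the
# same ${...} syntax as the Variables tab. Values below are illustrative.
from string import Template

system_variables = {"baseId": "appXXXX", "tableId": "tblYYYY", "env": "uat"}

def expand(text, variables):
    """Replace every ${name} occurrence with its declared value."""
    return Template(text).substitute(variables)

# The same variables work in an endpoint path and in the Base URL.
print(expand("/${baseId}/${tableId}", system_variables))          # /appXXXX/tblYYYY
print(expand("https://api.${env}.example.com/v1", system_variables))
```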

Endpoint parameter types

When configuring endpoints, several parameter types help define how the endpoint interacts with external systems. These parameters ensure that requests are properly formatted and data is correctly passed.

Path parameters

Elements embedded directly within the URL path of an API request that act as placeholders for specific values.
  • Used to specify variable parts of the endpoint URL (e.g., /users/{userId}).
  • Defined with ${parameter} format.
  • Mandatory in the request URL.
Path parameters must always be included, while query and header parameters are optional but can be set as required based on the endpoint’s design.

Query parameters

Query parameters are added to the end of a URL to provide extra information to a web server when making requests.
  • Query parameters are appended to the URL after a ? symbol and are typically used for filtering or pagination (e.g., ?search=value)
  • Useful for filtering or pagination.
  • Example URL with query parameters: https://api.example.com/users?search=johndoe&page=2.
These parameters must be defined in the Parameters table, not directly in the endpoint path.
To preview how query parameters are sent in the request, you can use the Preview feature to see the exact request in cURL format. This shows the complete URL, including query parameters.
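A preview of that kind can be approximated in a few lines; the exact preview format is tool-specific, so treat this as a hedged sketch:

```python
# Hedged sketch of a cURL-style preview for an endpoint with query parameters.
# The real Preview feature's output format may differ.
from urllib.parse import urlencode

def curl_preview(method, base_url, path, query=None, headers=None):
    url = base_url.rstrip("/") + path
    if query:
        url += "?" + urlencode(query)          # appended after the ? symbol
    parts = [f"curl -X {method} '{url}'"]
    for name, value in (headers or {}).items():
        parts.append(f"-H '{name}: {value}'")
    return " \\\n  ".join(parts)

print(curl_preview("GET", "https://api.example.com", "/users",
                   query={"search": "johndoe", "page": 2}))
```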

Header parameters

Header parameters (HTTP headers) provide information about the request and instruct the API how to handle it.
  • They are not part of the URL; default values can be set for testing and overridden in the workflow.
  • Custom headers are sent with the request (e.g., Authorization: Bearer token).
  • They define metadata or authorization details.

Body parameters

The data sent to the server when an API request is made.
  • These are the data fields included in the body of a request, usually in JSON format.
  • Body parameters are used in POST, PUT, and PATCH requests to send data to the external system (e.g., creating or updating a resource).

Response body parameters

The data sent back from the server after an API request is made.
  • These parameters are part of the response returned by the external system after a request is processed. They contain the data that the system sends back.
  • Typically returned in GET, POST, PUT, and PATCH requests. Response body parameters provide details about the result of the request (e.g., confirmation of resource creation, or data retrieval)

Enum mapper

The enum mapper for the request body enables you to configure enumerations for specific keys in the request body, aligning them with values from the External System or translations into another language.
In enumerations you can map both translation values for different languages and values for different source systems.
Make sure the enumerations, with their corresponding translations and system values, already exist in your application:
Select whether the integration should use the enumeration value corresponding to the External System or its translation into another language. For translation into a language, a header parameter called ‘Language’ is required to specify the target language.

Configuring authorization

  • Select the required Authorization Type from a predefined list.
  • Enter the relevant details based on the selected type (e.g., Realm and Client ID for Service Accounts).
  • These details will be automatically included in the request headers when the integration is executed.

Authorization methods

The Integration Designer supports several authorization methods, allowing you to configure the security settings for API calls. Depending on the external system’s requirements, you can choose one of the following authorization formats:

Service account

Service Account authentication requires the following key fields:
  • Identity Provider Url: The URL for the identity provider responsible for authenticating the service account.
  • Client Id: The unique identifier for the client within the realm.
  • Client secret: A secure secret used to authenticate the client alongside the Client ID.
  • Scope: Specifies the access level or permissions for the service account.
When using Entra as an authentication solution, the Scope parameter is mandatory. Ensure it is defined correctly in the authorization settings.

Basic authentication

  • Requires the following credentials:
    • Username: The account’s username.
    • Password: The account’s password.
  • Suitable for systems that rely on simple username/password combinations for access.

Bearer

  • Requires an Access Token to be included in the request headers.
  • Commonly used for OAuth 2.0 implementations.
  • Header Configuration: Use the format Authorization: Bearer {access_token} in headers of requests needing authentication.
  • System-Level Example: You can store the Bearer token at the system level, as shown in the example below, ensuring it’s applied automatically to future API calls:
Store tokens in a configuration parameter so updates propagate across all requests seamlessly when tokens are refreshed or changed.
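The benefit of storing the token in one configuration parameter can be sketched as follows (the `airtableToken` key and helper are hypothetical, not FlowX API):

```python
# Sketch: reading the token from a single configuration parameter so a token
# refresh propagates to every request. Names here are hypothetical.
config_params = {"airtableToken": "pat-initial"}

def auth_headers(config, key="airtableToken"):
    """Build the Authorization: Bearer {access_token} header from config."""
    return {"Authorization": f"Bearer {config[key]}"}

print(auth_headers(config_params))           # uses the current token
config_params["airtableToken"] = "pat-rotated"
print(auth_headers(config_params))           # rotated token picked up automatically
```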

Certificates

Some external systems require a certificate for access. Use this setup to configure secure communication with such a system. It includes paths to both a Keystore (which holds the client certificate) and a Truststore (which holds trusted certificates). You can toggle these options based on the security requirements of the integration.
When the Use Certificate option is enabled, you will need to provide the following certificate-related details:
  • Keystore Path: Specifies the file path to the keystore, in this case, /opt/certificates/testkeystore.jks. The keystore contains the client certificate used for securing the connection.
  • Keystore Password: The password used to unlock the keystore.
  • Keystore Type: The format of the keystore, JKS or PKCS12, depending on the system requirements.
Truststore credentials
  • Truststore Path: The file path is set to /opt/certificates/testtruststore.jks, specifying the location of the truststore that holds trusted certificates.
  • Truststore Password: Password to access the truststore.

Workflows

A workflow defines a series of tasks and processes to automate system integrations. Within the Integration Designer, workflows can be configured using different components to ensure efficient data exchange and process orchestration.

Creating a workflow

  1. Navigate to Workflow Designer:
    • In FlowX.AI Designer, go to Projects -> Your application -> Integrations -> Workflows.
    • Create a New Workflow, provide a name and description, and save it.
  2. Start to design your workflow by adding nodes to represent the steps of your workflow:
  • Start Node: Defines where the workflow begins and also defines the input parameter for subsequent nodes.
  • REST endpoint nodes: Add REST API calls for fetching or sending data.
  • Fork nodes (conditions): Add conditional logic for decision-making.
  • Data mapping nodes (scripts): Write custom scripts in JavaScript or Python.
  • End Nodes: Capture output data as the completed workflow result, ensuring the process concludes with all required information.

Workflow Nodes Overview

Workflow nodes are the building blocks of your integration logic. Each node type serves a specific function, allowing you to design, automate, and orchestrate complex processes visually.
  • Start Node: Defines workflow input and initializes data.
  • REST Endpoint Node: Makes REST API calls to external systems.
  • FlowX Database Node: Reads/writes data to the FlowX Database.
  • Condition (Fork): Adds conditional logic and parallel branches.
  • Script Node: Transforms or maps data using JavaScript or Python.
  • Subworkflow Node: Invokes another workflow as a modular, reusable subcomponent.
  • End Node: Captures and outputs the final result of the workflow.

Start Node

The Start node is the mandatory first node in any workflow. It defines the input data model and passes this data to subsequent nodes.
Define all required input fields in the Start node to ensure seamless data mapping from processes or user tasks.

REST Endpoint Node

Enables communication with external systems via REST API calls. Supports GET, POST, PUT, PATCH, and DELETE methods. Endpoints are selected from a dropdown, grouped by system.
  • Params: Configure path, query, and header parameters.
  • Input/Output: Input is auto-populated from the previous node; output displays the API response.
You can test REST endpoint nodes independently to validate connections and data retrieval.

FlowX Database Node

Allows you to read and write data to the FlowX Database within your workflow.

Condition (Fork) Node

Evaluates logical conditions (JavaScript or Python) to direct workflow execution along different branches.
  • If/Else: Routes based on condition evaluation.
  • Parallel Processing: Supports multiple branches for concurrent execution.
Use fork nodes to implement business rules, error handling, or multi-path logic.
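As a conceptual sketch, a fork condition is just a boolean expression over the incoming data; the threshold and field name below are illustrative:

```python
# Illustrative sketch of a fork node's if/else routing: the condition is a
# plain boolean expression over the incoming data. Values are made up.
def route(input_data):
    # If/Else: the branch is chosen from the condition result.
    if input_data["creditScore"] < 300:
        return "warning_branch"
    return "approval_branch"

print(route({"creditScore": 250}))   # warning_branch
print(route({"creditScore": 720}))   # approval_branch
```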

Script Node

Executes custom JavaScript or Python code to transform, map, or enrich data between nodes.
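A Script node's job, reshaping the previous node's output into what the next node expects, might look like this in its Python variant (field names are illustrative only):

```python
# Sketch of a data-mapping script such as a Script node might run (Python
# variant): reshape the previous node's output for the next REST call.
# All field names below are illustrative.
def map_data(input_data):
    return {
        "fullName": f"{input_data['firstName']} {input_data['lastName']}",
        "contact": {"email": input_data["email"], "phone": input_data["phone"]},
    }

print(map_data({"firstName": "Jane", "lastName": "Doe",
                "email": "jane@example.com", "phone": "555-0100"}))
```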

Subworkflow Node

The Subworkflow node allows you to modularize complex workflows by invoking other workflows as reusable subcomponents. This approach streamlines process design, promotes reuse, and simplifies maintenance.
  1. Add a Subworkflow Node: Select Start Subworkflow from the Select Next Node dropdown. Choose from workflows categorized as Local or Libraries.
  2. Configure the Subworkflow Node:
  • Workflow Selection: Pick the workflow to invoke.
  • Open: Edit the subworkflow in a new tab.
  • Preview: View the workflow canvas in a popup.
  • Response Key: Set a key (e.g., response_key) for output.
  • Input: Provide input in JSON format.
  • Output: Output is read-only JSON after execution.
Use subworkflows for reusable logic such as data enrichment, validation, or external system calls.

Execution logic and error handling

  • Parent workflow waits for subworkflow completion before proceeding.
  • If the subworkflow fails, the parent workflow halts at this node.
  • Subworkflow output is available to downstream nodes via the response key.
  • Logs include workflow name, instance ID, and node statuses for both parent and subworkflow.
If a subworkflow is deleted, an error displays: [name] subworkflow not found.
Subworkflow runs are recorded in workflow instance history for traceability.
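The execution logic above can be modeled conceptually; this is not FlowX internals, just a sketch of "wait, store under the response key, halt on failure":

```python
# Conceptual sketch of the execution logic above (not FlowX internals):
# the parent waits for the subworkflow, stores its output under the
# response key, and halts if the subworkflow fails.
def run_subworkflow(subworkflow, parent_data, response_key):
    try:
        result = subworkflow(parent_data)      # parent waits for completion
    except Exception as exc:
        raise RuntimeError(f"parent halted at this node: {exc}")
    parent_data[response_key] = result         # available to downstream nodes
    return parent_data

# A stand-in subworkflow that "enriches" the input with CRM data.
crm_lookup = lambda data: {"crmId": "C-001", "userId": data["userId"]}
print(run_subworkflow(crm_lookup, {"userId": "42"}, "response_key"))
```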

Console logging, navigation, and read-only mode

  • Console shows input/output, workflow name, and instance ID for each subworkflow run.
  • Open subworkflow in a new tab for debugging from the console.
  • Breadcrumbs enable navigation between parent and subworkflow details.
  • In committed/upper environments, subworkflow configuration is read-only and node runs are disabled (preview/open only).
Subworkflow instances are logged in history, and you can navigate between parent and child workflow runs for comprehensive debugging.

Use case: CRM Data Retrieval with Subworkflows

Suppose you need to retrieve CRM details in a subworkflow and use the output for further actions in the parent workflow.
  1. Create the Subworkflow: Design a workflow that connects to your CRM system, fetches user details, and outputs the data in a structured JSON format.
  2. Add a Subworkflow Node in the Parent Workflow: In your main workflow, add a Subworkflow Node and select the CRM retrieval workflow. Map any required input parameters.
  3. Use Subworkflow Output in Parent Workflow: Downstream nodes in the parent workflow can reference the subworkflow’s output using the defined responseKey.
{
  "crmData": "${responseKey}"
}
  4. Monitor and Debug: Use the console to view input/output data, workflow names, and instance IDs. Open subworkflow runs in new tabs for detailed debugging.
This modular approach allows you to build scalable, maintainable integrations by composing workflows from reusable building blocks.

End Node

The End node signifies the termination of a workflow’s execution. It collects the final output and completes the workflow process.
  • Receives input in JSON format from the previous node.
  • Output represents the final data model of the workflow.
  • Multiple End nodes are allowed for different execution paths.
If the node’s output doesn’t meet mandatory requirements, it will be flagged as an error to ensure all necessary data is included.

Integration with external systems

This example demonstrates how to integrate FlowX with an external system, using Airtable to manage and update user credit status data. It walks through the setup of an integration system, defining API endpoints, creating workflows, and linking them to BPMN processes in FlowX Designer.
Before going through this integration example, we recommend that you:
  • Create your own base and table in Airtable, details here.
  • Check the Airtable Web API docs here to get familiar with the Airtable API.

Integration in FlowX

  1. Define a System: Navigate to the Integration Designer and create a new system:
  • Name: Airtable Credit Data
  • Base URL: https://api.airtable.com/v0/
  2. Define Endpoints: In the Endpoints section, add the necessary API endpoints for system integration:
  1. Get Records Endpoint:
    • Method: GET
    • Path: /${baseId}/${tableId}
    • Path Parameters: Add the values for baseId and tableId so they are available in the path.
    • Header Parameters: Authorization Bearer token
See the API docs.
  2. Create Records Endpoint:
    • Method: POST
    • Path: /${baseId}/${tableId}
    • Path Parameters: Add the values for baseId and tableId so they are available in the path.
    • Header Parameters:
      • Content-Type: application/json
      • Authorization Bearer token
    • Body: JSON format containing the fields for the new record. Example:
   {
    "typecast": true,
    "records": [
        {
            "fields": {
                "First Name": "${firstName}",
                "Last Name": "${lastName}",
                "Age": ${age},
                "Gender": "${gender}",
                "Email": "${email}",
                "Phone": "${phone}",
                "Address": "${address}",
                "Occupation": "${occupation}",
                "Monthly Income ($)": ${income},
                "Credit Score": ${creditScore},
                "Credit Status": "${creditStatus}"
            }
        }
    ]
}
  3. Design the Workflow:
  1. Open the Workflow Designer and create a new workflow.
    • Provide a name and description.
  2. Configure Workflow Nodes:
    • Start Node: Initialize the workflow.
On the Start node, add the data that you want to extract from the process. This way, when you add the Start Workflow Integration node action, it will be populated with this data.
{
  "firstName": "${firstName}",
  "lastName": "${lastName}",
  "age": ${age},
  "gender": "${gender}",
  "email": "${email}",
  "phone": "${phone}",
  "address": "${address}",
  "occupation": "${occupation}",
  "income": ${income},
  "creditScore": ${creditScore},
  "creditStatus": "${creditStatus}"
}
Make sure these keys are also mapped in the data model of your process with their corresponding attributes.
  • REST Node: Set up API calls:
    • GET Endpoint for fetching records from Airtable.
    • POST Endpoint for creating new records.
  • Condition Node: Add logic to handle credit scores (e.g., triggering a warning if the credit score is below 300).
Condition example:
input.responseKey.data.records[0].fields["Credit Score"] < 300
  • Script Node: Include custom scripts if needed for processing data (not used in this example).
  • End Node: Define the end of the workflow with success or failure outcomes.
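The condition shown above can be checked against a sample GET response as plain Python (the record structure mirrors this example's response shape; the sample value is made up):

```python
# The condition above, rewritten as a Python check against a sample response.
# The nested structure mirrors input.responseKey.data.records[0].fields.
response = {"responseKey": {"data": {"records": [{"fields": {"Credit Score": 250}}]}}}

def low_credit(resp):
    return resp["responseKey"]["data"]["records"][0]["fields"]["Credit Score"] < 300

print(low_credit(response))     # True -> route to the warning branch
```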
  4. Link the Workflow to a Process:
  1. Integrate the workflow into a BPMN process:
    • Open the process diagram and include a User Task and a Receive Message Task.
In this example, we’ll use a User Task because we need to capture user data and send it to our workflow.
  2. Map Data in the UI Designer:
    • Create the data model
    • Link data attributes from the data model to form fields, ensuring the user input aligns with the expected parameters.
  3. Add a Start Integration Workflow node action:
  • Make sure all the input will be captured.
  5. Monitor Workflow and Capture Output:
Receive Workflow Output:
  • Use the Receive Message Task to capture workflow outputs like status or returned data.
  • Set up a Data stream topic to ensure workflow output is mapped to a predefined key.
  6. Start the integration:
  • Start your process to initiate the workflow integration. It should add a new user with the details captured in the user task.
  • Check that it worked by going to your base in Airtable; you can see that the user has been added.

This example demonstrates how to integrate Airtable with FlowX to automate data management. You configured a system, set up endpoints, designed a workflow, and linked it to a BPMN process.

FAQs