The Integration Designer simplifies the integration of FlowX with external systems using REST APIs. It offers a user-friendly graphical interface with intuitive drag-and-drop functionality for defining data models, orchestrating workflows, and configuring system endpoints.
Did you know?
Unlike Postman, which focuses on API testing, the Integration Designer automates workflows between systems. With drag-and-drop ease, it handles REST API connections, real-time processes, and error management, making integrations scalable and easy to maintain.
You can easily build complex API workflows using a drag-and-drop interface, making it accessible to both technical and non-technical audiences.
2
Visual REST API Integration
Specifically tailored for creating and managing REST API calls through a visual interface, streamlining the integration process without the need for extensive coding.
3
Real-Time Testing and Validation
Allows for immediate testing and validation of REST API calls within the design interface.
With the Systems feature you can create, update, and organize endpoints used in API integrations. These endpoints are integral to building workflows within the Integration Designer, offering flexibility and ease of use for managing connections between systems. Endpoints can be configured, tested, and reused across multiple workflows, streamlining the integration process.
Go to the Systems section in FlowX Designer at Projects -> Your project -> Integrations -> Systems.
Add a New System and set its name, unique code, base URL, and description:
Name: The system’s name.
Code: A unique identifier for the external system.
Base URL: The main address of the external system's API, typically consisting of the protocol (http or https), the domain name, and optionally a base path.
Description: A description of the system and its purpose.
Enable enumeration value mapping: If checked, this system will be listed under the mapped enumerations. See the enumerations section for more details.
To dynamically adjust the base URL per environment (e.g., dev, QA, stage), you can use environment variables and configuration parameters. For example: https://api.${environment}.example.com/v1. Keep in mind that the value of a configuration parameter (e.g., the base URL) is resolved in this priority order: first, input from the user/process; second, configuration parameter overrides (set directly in FlowX.AI Designer or via environment variables); and lastly, the configuration parameters themselves.
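The resolution order above can be sketched in a few lines of Python. This is a hypothetical illustration of the priority rule, not FlowX's actual implementation; the function name and sources are assumptions for the example.

```python
from string import Template

def resolve_base_url(template, process_input=None, overrides=None, config_params=None):
    """Resolve ${...} placeholders, trying sources in priority order:
    process input first, then configuration parameter overrides,
    then plain configuration parameters."""
    merged = {}
    # Later updates win, so apply lowest-priority sources first.
    for source in (config_params, overrides, process_input):
        if source:
            merged.update(source)
    return Template(template).substitute(merged)

url = resolve_base_url(
    "https://api.${environment}.example.com/v1",
    config_params={"environment": "dev"},
    overrides={"environment": "qa"},  # e.g., an override set in FlowX.AI Designer
)
print(url)  # -> https://api.qa.example.com/v1
```

Here the override beats the plain configuration parameter, so the QA base URL is produced even though the default points at dev.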
Set up authorization (Service Token, Bearer Token, or No Auth). In our example, we set the authorization type to Bearer Token and define it at system level:
The value of the token may change depending on the environment, so it is recommended to define it at system level and apply Configuration Parameters Overrides at runtime.
The Variables tab allows you to store system-specific variables that can be referenced throughout workflows using the format ${variableName}. These declared variables can be utilized not only in workflows but also in other sections, such as the Endpoint or Authorization tabs.
For example:
For our integration example, you can declare variables in the Variables tab to store your tableId and baseId, then reference them wherever they are needed.
Use variables in the Base URL to switch between different environments, such as UAT or production.
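The substitution works the same way wherever a variable is referenced. Below is a hypothetical sketch (the variable names and values are illustrative placeholders) of ${variableName} references being resolved into an endpoint path:

```python
from string import Template

# Illustrative system variables, as they might be declared in the Variables tab.
system_variables = {
    "apiVersion": "v0",
    "baseId": "appXXXXXXXXXXXXXX",
    "tableId": "tblXXXXXXXXXXXXXX",
}

# An endpoint path referencing the variables with the ${variableName} syntax.
endpoint_path = Template("/${apiVersion}/${baseId}/${tableId}").substitute(system_variables)
print(endpoint_path)  # -> /v0/appXXXXXXXXXXXXXX/tblXXXXXXXXXXXXXX
```

Changing a variable's value once (for example, pointing baseId at a production base) updates every endpoint and header that references it.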
When configuring endpoints, several parameter types help define how the endpoint interacts with external systems. These parameters ensure that requests are properly formatted and data is correctly passed.
These parameters must be defined in the Parameters table, not directly in the endpoint path.
To preview how query parameters are sent in the request, you can use the Preview feature to see the exact request in cURL format. This shows the complete URL, including query parameters.
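Conceptually, the preview assembles the base URL, path, and encoded query parameters into one request line. A rough sketch of that assembly (URL and parameters are illustrative, not a real endpoint):

```python
from urllib.parse import urlencode

base_url = "https://api.example.com/v1"  # illustrative system base URL
path = "/users"
query_params = {"status": "active", "limit": 10}

# Build the complete URL, including encoded query parameters,
# then render it as the equivalent cURL command.
full_url = f"{base_url}{path}?{urlencode(query_params)}"
curl_preview = f"curl -X GET '{full_url}'"
print(curl_preview)
# -> curl -X GET 'https://api.example.com/v1/users?status=active&limit=10'
```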
The data sent back from the server after an API request is made.
These parameters are part of the response returned by the external system after a request is processed. They contain the data that the system sends back.
Typically returned by GET, POST, PUT, and PATCH requests. Response body parameters provide details about the result of the request (e.g., confirmation of resource creation, or retrieved data).
The enum mapper for the request body enables you to configure enumerations for specific keys in the request body, aligning them with values from the External System or translations into another language.
On enumerations you can map both translation values for different languages and values for different source systems.
Make sure the enumerations, with their corresponding translations and system values, already exist in your application:
Select whether the integration should use the enumeration value corresponding to the External System or its translation into another language. For translation, a header parameter called 'Language' is required to specify the target language.
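The two mapping modes can be pictured with a small lookup table. This is a hypothetical sketch; the enumeration name, system name, and values are placeholders, not part of FlowX's data model:

```python
# One enumeration value with its per-system values and per-language translations.
credit_status = {
    "APPROVED": {
        "systems": {"Airtable": "Approved"},                   # external-system value
        "translations": {"en": "Approved", "fr": "Approuvé"},  # language translations
    },
}

def map_enum(value, mode, key):
    """Return either the external-system value or the translation for a key."""
    entry = credit_status[value]
    return entry["systems"][key] if mode == "system" else entry["translations"][key]

print(map_enum("APPROVED", "system", "Airtable"))  # -> Approved
print(map_enum("APPROVED", "translation", "fr"))   # -> Approuvé
```

In the translation mode, the 'Language' header plays the role of the lookup key ("fr" above).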
The Integration Designer supports several authorization methods, allowing you to configure the security settings for API calls. Depending on the external system’s requirements, you can choose one of the following authorization formats:
Requires an Access Token to be included in the request headers.
Commonly used for OAuth 2.0 implementations.
Header Configuration: Use the format Authorization: Bearer {access_token} in headers of requests needing authentication.
System-Level Example: You can store the Bearer token at the system level, as shown in the example below, ensuring it’s applied automatically to future API calls:
Store tokens in a configuration parameter so updates propagate across all requests seamlessly when tokens are refreshed or changed.
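For illustration, here is roughly the header such a request carries. The endpoint URL and token are placeholders; in practice the token value would come from a configuration parameter rather than a literal:

```python
import urllib.request

access_token = "example-token-value"  # placeholder; normally a configuration parameter

# Build (but do not send) a request carrying the Bearer token header.
request = urllib.request.Request(
    "https://api.example.com/v1/users",  # illustrative endpoint
    headers={"Authorization": f"Bearer {access_token}"},
)
print(request.get_header("Authorization"))  # -> Bearer example-token-value
```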
Some external systems require a client certificate for access. Use this setup to configure secure communication with such a system. It includes paths to both a Keystore (which holds the client certificate) and a Truststore (which holds trusted certificates). You can toggle these features based on the security requirements of the integration.
When the Use Certificate option is enabled, you will need to provide the following certificate-related details:
Keystore Path: Specifies the file path to the keystore, in this case, /opt/certificates/testkeystore.jks. The keystore contains the client certificate used for securing the connection.
Keystore Password: The password used to unlock the keystore.
Keystore Type: The format of the keystore, JKS or PKCS12, depending on the system requirements.
Truststore credentials
Truststore Path: The file path is set to /opt/certificates/testtruststore.jks, specifying the location of the truststore that holds trusted certificates.
Truststore Password: Password to access the truststore.
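Put together, the certificate settings above might look like this. This is purely an illustrative sketch that mirrors the fields described in this section; it is not an actual FlowX configuration file format:

```yaml
# Illustrative only; field names mirror the options described above.
certificate:
  useCertificate: true
  keystore:
    path: /opt/certificates/testkeystore.jks
    password: ${KEYSTORE_PASSWORD}   # keep secrets in environment variables
    type: JKS                        # or PKCS12
  truststore:
    path: /opt/certificates/testtruststore.jks
    password: ${TRUSTSTORE_PASSWORD}
```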
A workflow defines a series of tasks and processes to automate system integrations. Within the Integration Designer, workflows can be configured using different components to ensure efficient data exchange and process orchestration.
Workflow nodes are the building blocks of your integration logic. Each node type serves a specific function, allowing you to design, automate, and orchestrate complex processes visually.
| Node Type | Purpose |
| --- | --- |
| Start Node | Defines workflow input and initializes data |
| REST Endpoint Node | Makes REST API calls to external systems |
| FlowX Database Node | Reads/writes data to the FlowX Database |
| Condition (Fork) | Adds conditional logic and parallel branches |
| Script Node | Transforms or maps data using JavaScript or Python |
| Subworkflow Node | Invokes another workflow as a modular, reusable subcomponent |
| End Node | Captures and outputs the final result of the workflow |
Enables communication with external systems via REST API calls. Supports GET, POST, PUT, PATCH, and DELETE methods. Endpoints are selected from a dropdown, grouped by system.
Params: Configure path, query, and header parameters.
Input/Output: Input is auto-populated from the previous node; output displays the API response.
You can test REST endpoint nodes independently to validate connections and data retrieval.
The Subworkflow node allows you to modularize complex workflows by invoking other workflows as reusable subcomponents. This approach streamlines process design, promotes reuse, and simplifies maintenance.
1
Add a Subworkflow Node
Select Start Subworkflow from the Select Next Node dropdown. Choose from workflows categorized as Local or Libraries.
2
Configure the Subworkflow Node
Workflow Selection: Pick the workflow to invoke.
Open: Edit the subworkflow in a new tab.
Preview: View the workflow canvas in a popup.
Response Key: Set a key (e.g., response_key) for output.
Input: Provide input in JSON format.
Output: Output is read-only JSON after execution.
Use subworkflows for reusable logic such as data enrichment, validation, or external system calls.
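To picture the Response Key behavior: the subworkflow's output is nested under the configured key in the parent workflow's data. A hypothetical sketch (key name and payloads are illustrative):

```python
import json

subworkflow_input = {"userId": "usr_123"}   # illustrative JSON input to the subworkflow
subworkflow_output = {"creditScore": 720}   # illustrative result after execution

# The parent workflow receives the output nested under the Response Key.
workflow_data = {"response_key": subworkflow_output}
print(json.dumps(workflow_data))  # -> {"response_key": {"creditScore": 720}}
```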
This example demonstrates how to integrate FlowX with an external system, in this case Airtable, to manage and update user credit status data. It walks through setting up an integration system, defining API endpoints, creating workflows, and linking them to BPMN processes in FlowX Designer.
Before going through this integration example, we recommend that you:
Create your own base and table in Airtable, details here.
Check Airtable Web API docs here to get familiarized with Airtable API.
Open the Workflow Designer and create a new workflow.
Provide a name and description.
Configure Workflow Nodes:
Start Node: Initialize the workflow.
On the start node, add the data that you want to extract from the process. This way, when you add the Start Integration Workflow node action, it will be populated with this data.
Script Node: Include custom scripts if needed for processing data (not used in this example).
End Node: Define the end of the workflow with success or failure outcomes.
4
Link the Workflow to a Process
Integrate the workflow into a BPMN process:
Open the process diagram and include a User Task and a Receive Message Task.
In this example, we’ll use a User Task because we need to capture user data and send it to our workflow.
Map Data in the UI Designer:
Create the data model
Link data attributes from the data model to form fields, ensuring the user input aligns with the expected parameters.
Add a Start Integration Workflow node action:
Make sure all the required input is captured.
5
Monitor Workflow and Capture Output
Receive Workflow Output:
Use the Receive Message Task to capture workflow outputs like status or returned data.
Set up a Data stream topic to ensure workflow output is mapped to a predefined key.
6
Start the integration
Start your process to initiate the workflow integration. It should add a new user with the details captured in the user task.
Check whether it worked by going to your base in Airtable. As you can see, our user has been added.
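Under the hood, the workflow's REST endpoint node is effectively issuing an Airtable Web API call: a POST to /v0/{baseId}/{tableName} with a Bearer token and a JSON records payload. A hedged sketch that builds (but does not send) such a request, with the base ID, table name, token, and field names all placeholders:

```python
import json
from urllib.parse import quote

base_id, table_name = "appXXXXXXXXXXXXXX", "Users"  # placeholders
url = f"https://api.airtable.com/v0/{base_id}/{quote(table_name)}"
headers = {
    "Authorization": "Bearer <your-airtable-token>",  # placeholder token
    "Content-Type": "application/json",
}
# Airtable expects new rows wrapped in a "records" list of "fields" objects.
body = json.dumps({
    "records": [
        {"fields": {"Name": "Jane Doe", "CreditStatus": "Approved"}}
    ]
})
print(url)  # -> https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Users
```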
This example demonstrates how to integrate Airtable with FlowX to automate data management. You configured a system, set up endpoints, designed a workflow, and linked it to a BPMN process.
What types of APIs does the Integration Designer support?
A: Currently, the Integration Designer only supports REST APIs, but future updates will include support for SOAP and JDBC.
How is security handled in integrations?
A: The Integration Service handles all security aspects, including certificates and secret keys. Authorization methods like Service Token, Bearer Token, and OAuth 2.0 are supported.
How are errors handled?
A: Errors are logged within the workflow and can be reviewed in the dedicated monitoring console for troubleshooting and diagnostics.
Can I import endpoint specifications in the Integration Designer?
A: Currently, the Integration Designer only supports adding endpoint specifications manually. Import functionality (e.g., importing configurations from sources like Swagger) is planned for future releases. For now, you can manually define your endpoints by entering the necessary details directly in the system.