The Integration Designer simplifies the integration of FlowX with external systems using REST APIs. It offers a user-friendly graphical interface with intuitive drag-and-drop functionality for defining data models, orchestrating workflows, and configuring system endpoints.
Unlike Postman, which focuses on API testing, the Integration Designer automates workflows between systems. With drag-and-drop ease, it handles REST API connections, real-time processes, and error management, making integrations scalable and easy to maintain.
Integration Designer facilitates the integration of the FlowX platform with external systems, applications, and data sources.
Integration Designer focuses on REST API integrations, with future updates expanding support for other protocols.
Drag-and-Drop Simplicity
You can easily build complex API workflows using a drag-and-drop interface, making it accessible to both technical and non-technical audiences.
Visual REST API Integration
Specifically tailored for creating and managing REST API calls through a visual interface, streamlining the integration process without the need for extensive coding.
Real-Time Testing and Validation
Allows for immediate testing and validation of REST API calls within the design interface.
A system is a collection of resources—endpoints, authentication, and variables—used to define and run integration workflows.
With the Systems feature, you can create, update, and organize endpoints used in API integrations. These endpoints are integral to building workflows within the Integration Designer, offering flexibility and ease of use for managing connections between systems. Endpoints can be configured, tested, and reused across multiple workflows, streamlining the integration process.
Go to the Systems section in FlowX Designer at Projects -> Your project -> Integrations -> Systems.
A base URL consists of a protocol (http or https), a domain name, and a path. To dynamically adjust the base URL per environment (e.g., dev, QA, stage), you can use environment variables and configuration parameters. For example: https://api.${environment}.example.com/v1.
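As a sketch of how such an environment-driven base URL could be resolved, the following uses a hypothetical FLOWX_ENVIRONMENT variable; in the platform itself this resolution is handled by configuration parameters:

```python
import os
from string import Template

def resolve_base_url(template: str, default_env: str = "dev") -> str:
    """Fill the ${environment} placeholder from an environment variable.
    FLOWX_ENVIRONMENT is a hypothetical name used only for illustration."""
    env = os.environ.get("FLOWX_ENVIRONMENT", default_env)
    return Template(template).substitute(environment=env)

# With no environment variable set, this falls back to the "dev" default.
print(resolve_base_url("https://api.${environment}.example.com/v1"))
```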
Additionally, keep in mind that a configuration parameter value (e.g., the base URL) is resolved in this order of priority: first, input from the user/process; second, configuration parameter overrides (set directly in FlowX.AI Designer or through environment variables); and lastly, the configuration parameter defaults.
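The priority order above can be sketched as a simple lookup chain; this is a minimal illustration, not the platform's actual implementation:

```python
def resolve_parameter(name, process_input=None, overrides=None, defaults=None):
    """Resolve a configuration parameter using the documented priority:
    1) user/process input, 2) configuration parameter overrides,
    3) configuration parameter defaults."""
    for source in (process_input or {}, overrides or {}, defaults or {}):
        if name in source:
            return source[name]
    return None

defaults = {"baseUrl": "https://api.dev.example.com/v1"}
overrides = {"baseUrl": "https://api.qa.example.com/v1"}

# The override shadows the default...
print(resolve_parameter("baseUrl", overrides=overrides, defaults=defaults))
# ...but explicit process input wins over both.
print(resolve_parameter("baseUrl", {"baseUrl": "https://custom"}, overrides, defaults))
```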
The value of the token might change depending on the environment, so it is recommended to define it at the system level and apply Configuration Parameters Overrides at runtime.
In this section you can define REST API endpoints that can be reused across different workflows.
The Variables tab allows you to store system-specific variables that can be referenced throughout workflows using the format ${variableName}.
These declared variables can be utilized not only in workflows but also in other sections, such as the Endpoint or Authorization tabs.
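The ${variableName} syntax matches Python's string.Template placeholders, which makes the substitution behavior easy to sketch; the variable names and values below are hypothetical:

```python
from string import Template

system_variables = {"apiVersion": "v0", "baseId": "app123"}  # hypothetical values

# The same declared variables can back an endpoint path, a header value, etc.
endpoint = Template("/${apiVersion}/${baseId}/users").substitute(system_variables)
print(endpoint)  # /v0/app123/users
```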
For example:
When configuring endpoints, several parameter types help define how the endpoint interacts with external systems. These parameters ensure that requests are properly formatted and data is correctly passed.
Path parameters are elements embedded directly within the URL path of an API request that act as placeholders for specific values (e.g., /users/{userId}). In the Integration Designer they are referenced using the ${parameter} format. Path parameters must always be included, while query and header parameters are optional but can be set as required based on the endpoint's design.
Query parameters are appended to the end of a URL to provide extra information to the web server when making requests. They follow the ? symbol and are typically used for filtering or pagination (e.g., ?search=value). These parameters must be defined in the Parameters table, not directly in the endpoint path.
To preview how query parameters are sent in the request, you can use the Preview feature to see the exact request in cURL format. This shows the complete URL, including query parameters.
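A sketch of how query parameters end up in the final URL (and in the cURL preview), using Python's standard urlencode; the endpoint and parameter names are hypothetical:

```python
from urllib.parse import urlencode

base = "https://api.example.com/v1/users"    # hypothetical endpoint
query = {"search": "value", "pageSize": 20}  # as defined in the Parameters table

url = f"{base}?{urlencode(query)}"
print(url)              # https://api.example.com/v1/users?search=value&pageSize=20
print(f"curl '{url}'")  # roughly what the Preview feature shows
```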
Header parameters provide metadata about the request and instruct the API on how to handle it (e.g., Authorization: Bearer token).
The request body is the data sent to the server when an API request is made.
The response body is the data sent back from the server after an API request is made.
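Putting headers and request body together, a request like the ones configured here can be sketched with Python's standard library; the endpoint URL and token are placeholders, and the request is only constructed, not sent:

```python
import json
import urllib.request

req = urllib.request.Request(
    "https://api.example.com/v1/users",                      # hypothetical endpoint
    data=json.dumps({"firstName": "John"}).encode("utf-8"),  # request body
    headers={
        "Content-Type": "application/json",  # tells the API how to parse the body
        "Authorization": "Bearer <token>",   # placeholder credential
    },
    method="POST",
)
print(req.method, req.full_url)
```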
The enum mapper for the request body enables you to configure enumerations for specific keys in the request body, aligning them with values from the External System or translations into another language.
For enumerations, you can map both translation values for different languages and values for different source systems.
Make sure the enumerations, with their corresponding translations and system values, already exist in your application:
Select whether the integration should use the enumeration value corresponding to the External System or its translation into another language.
For translation into a language, a header parameter called ‘Language’ is required to specify the target language.
The Integration Designer supports several authorization methods, allowing you to configure the security settings for API calls. Depending on the external system’s requirements, you can choose one of the following authorization formats:
Service Account authentication requires the following key fields:
When using Entra as an authentication solution, the Scope parameter is mandatory. Ensure it is defined correctly in the authorization settings.
Include Authorization: Bearer {access_token} in the headers of requests that require authentication. Store the token in a configuration parameter so updates propagate across all requests seamlessly when tokens are refreshed or changed.
You might want to access another external system that requires a certificate. Use this setup to configure secure communication with that system.
It includes paths to both a Keystore (which holds the client certificate) and a Truststore (which holds trusted certificates). You can toggle these features based on the security requirements of the integration.
When the Use Certificate option is enabled, you will need to provide the following certificate-related details:
Keystore credentials: the keystore path, e.g., /opt/certificates/testkeystore.jks. The keystore contains the client certificate used for securing the connection.
Truststore credentials: the truststore path, e.g., /opt/certificates/testtruststore.jks, specifying the location of the truststore that holds trusted certificates.
A workflow defines a series of tasks and processes to automate system integrations. Within the Integration Designer, workflows can be configured using different components to ensure efficient data exchange and process orchestration.
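As an illustration of the keystore/truststore pairing, here is a mutual-TLS sketch using Python's ssl module. Note that Python expects PEM files, so a JKS keystore/truststore would first need to be exported to PEM (e.g., with keytool and openssl); all paths are illustrative:

```python
import ssl

def mutual_tls_context(truststore_pem: str, cert_pem: str, key_pem: str) -> ssl.SSLContext:
    """Build an SSL context for mutual TLS.
    truststore_pem: trusted CA certificates (the truststore's role).
    cert_pem/key_pem: the client certificate and key (the keystore's role)."""
    context = ssl.create_default_context(cafile=truststore_pem)
    context.load_cert_chain(certfile=cert_pem, keyfile=key_pem)
    return context
```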
Users can visually build workflows by adding various nodes, including:
The Start node is the default and mandatory first node in any workflow. It initializes the workflow and defines the input parameters for subsequent nodes.
The Start node defines the initial data model for the workflow. This input data model can be customized: click inside the code editor and type custom JSON data. This input data will be passed to subsequent nodes in the workflow.
For example, if you want to define a first name parameter, you can add it like this in the Start Node:
Later, in the body of a subsequent workflow node, you can reference this input using:
This ensures that the data from the Start node is dynamically passed through the workflow.
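As a small sketch of that flow, here is a Start node's JSON input and a downstream body referencing it with the ${...} syntax; the values are hypothetical:

```python
from string import Template

# Hypothetical Start node input, entered as JSON in the code editor:
start_input = {"firstName": "John"}

# A subsequent node's body referencing the Start node's data:
body = Template('{"greeting": "Hello, ${firstName}!"}').substitute(start_input)
print(body)  # {"greeting": "Hello, John!"}
```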
When you want to send input data from a process to a workflow, use the Start node to map the data coming from the process and send it across the entire workflow.
Make sure the data is also mapped in the Start Integration Workflow node action where you have the data.
Only one Start node is allowed per workflow. The Start node is always the first node in the workflow and cannot have any incoming connections. Its sole function is to provide the initial data for the workflow.
The Start node cannot be altered in name, nor can it be deleted from the workflow.
The REST endpoint node enables communication with external systems to retrieve or update data by making REST API calls. It supports multiple methods like GET, POST, PUT, PATCH, and DELETE. Endpoints are selected via a dropdown menu, where available endpoints are grouped by the system they belong to.
The node is added by selecting it from the “Add Connection” dropdown in the workflow designer.
You can include multiple REST endpoint nodes within the same workflow, allowing for integration with various systems or endpoints.
Unlike some nodes, the Endpoint Call node can be run independently, making it possible to test the connection or retrieve data without executing the entire workflow.
Input and output
Each REST endpoint node includes some essential tabs:
The Condition node evaluates incoming data from a connected node based on defined logical conditions (if / else if). It directs the workflow along different paths depending on whether the condition evaluates to TRUE or FALSE.
Defining Conditions in JavaScript or Python
Logical conditions for the Condition Node can be written in either JavaScript or Python, depending on the requirements of your workflow.
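A Python condition for a Condition node might look like the following; the field names are hypothetical:

```python
data = {"creditScore": 680, "status": "active"}  # output of the previous node

# TRUE branch when both checks pass; otherwise the FALSE branch is taken.
result = data["creditScore"] > 600 and data["status"] == "active"
print(result)  # True
```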
You can include multiple Condition nodes within a single workflow, enabling the creation of complex branching logic and decision-making flows.
Parallel processing and forking
The Condition node can split the workflow into parallel branches, allowing for multiple conditions to be evaluated simultaneously. This capability makes it ideal for efficiently processing different outcomes at the same time.
The Script node allows you to transform and map data between different systems during workflow execution by writing and executing custom code in JavaScript or Python. It enables complex data transformations and logic to be applied directly within the workflow.
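A Script node transformation in Python could look like this minimal sketch; the field names are hypothetical:

```python
def transform(data: dict) -> dict:
    """Map the source system's field names onto the shape the target expects."""
    return {
        "fullName": f"{data['firstName']} {data['lastName']}",
        "contact": {"email": data["email"].lower()},
    }

print(transform({"firstName": "Ada", "lastName": "Lovelace", "email": "Ada@Example.com"}))
```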
The End node signifies the termination of a workflow’s execution. It collects the final output and completes the workflow process.
Multiple End nodes can be included within a single workflow. This allows the workflow to have multiple possible end points based on different execution paths.
The End node automatically receives input in JSON format from the previous node, and you can modify this input by editing it directly in the code editor. If the node’s output doesn’t meet mandatory requirements, it will be flagged as an error to ensure all necessary data is included.
The output of the End node represents the final data model of the workflow once execution is complete.
You can always test your endpoints in the context of the workflow: run endpoints individually where applicable, or run the entire workflow.
Use the integrated console after running each workflow (whether you test it in the workflow designer or in a process definition). It provides useful information such as logs, input and output data for each endpoint, and other details such as execution time.
Integrating workflows into a BPMN process allows for structured handling of tasks like user interactions, data processing, and external system integrations.
This is achieved by connecting workflow nodes to User Tasks and Service Tasks using the Start Integration Workflow action.
Create a BPMN Process
Open the FlowX Process Designer:
Define the Data Model:
Needed if you want to send data from your user task to the workflow.
Configure a User Task or Service Task
Add a Task:
Configure Actions for the Task:
Receive Data from the Workflow
This example demonstrates how to integrate FlowX with an external system, Airtable, to manage and update user credit status data. It walks through setting up an integration system, defining API endpoints, creating workflows, and linking them to BPMN processes in FlowX Designer.
Define a System
Navigate to the Integration Designer and create a new system:
https://api.airtable.com/v0/
Define Endpoints
In the Endpoints section, add the necessary API endpoints for system integration:
/${baseId}/${tableId}
See the API docs.
/${baseId}/${tableId}
Content-Type: application/json
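Combining the base URL, the /${baseId}/${tableId} path, and the header above, the resulting Airtable update request can be sketched as follows. The IDs and token are placeholders, the request is only constructed here (not sent), and the exact record payload should be checked against Airtable's API docs:

```python
import json
from string import Template
from urllib.request import Request

system_vars = {"baseId": "appXXXXXXXX", "tableId": "tblXXXXXXXX"}  # placeholders
path = Template("/${baseId}/${tableId}").substitute(system_vars)

req = Request(
    "https://api.airtable.com/v0" + path,
    data=json.dumps({"records": [{"fields": {"creditStatus": "Approved"}}]}).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <personal-access-token>",  # placeholder
    },
    method="PATCH",
)
print(req.method, req.full_url)
```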
Design the Workflow
On the Start node, add the data that you want to extract from the process. This way, when you add the Start Integration Workflow node action, it will be populated with this data.
Make sure these keys are also mapped in the data model of your process with their corresponding attributes.
Condition example:
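The condition itself did not survive in this text; for this scenario it might check whether the lookup endpoint returned any records, along these lines (the field name is hypothetical):

```python
data = {"records": [{"fields": {"creditStatus": "Pending"}}]}  # hypothetical endpoint output

# TRUE when at least one matching record was returned.
print(len(data["records"]) > 0)  # True
```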
Link the Workflow to a Process
In this example, we’ll use a User Task because we need to capture user data and send it to our workflow.
Monitor Workflow and Capture Output
Receive Workflow Output:
Start the integration
This example demonstrates how to integrate Airtable with FlowX to automate data management. You configured a system, set up endpoints, designed a workflow, and linked it to a BPMN process.
Can I use protocols other than REST?
A: Currently, the Integration Designer only supports REST APIs, but future updates will include support for SOAP and JDBC.
How is security handled in integrations?
A: The Integration Service handles all security aspects, including certificates and secret keys. Authorization methods like Service Token, Bearer Token, and OAuth 2.0 are supported.
How are errors handled?
A: Errors are logged within the workflow and can be reviewed in the dedicated monitoring console for troubleshooting and diagnostics.
Can I import endpoint specifications in the Integration Designer?
A: Currently, the Integration Designer only supports adding endpoint specifications manually. Import functionality (e.g., importing configurations from sources like Swagger) is planned for future releases.
For now, you can manually define your endpoints by entering the necessary details directly in the system.