Integrations
What is an integration?
Integrations play a crucial role in connecting legacy systems or third-party applications to the FLOWX.AI platform. They serve various purposes, including working with legacy APIs, implementing custom file exchange solutions, and integrating with RPAs.
High-level architecture
Integrations involve interaction with legacy systems and require custom development to integrate them into your FLOWX.AI setup.
Developing a custom integration
Developing a custom integration for the FLOWX.AI platform means building a custom microservice, called a Connector, that exchanges messages with the engine over Kafka.
Steps to create a custom integration
Follow these steps to create a custom integration:
1. Develop a microservice, referred to as a "Connector," using your preferred tech stack. The Connector should listen for Kafka events, process the received data, interact with legacy systems if required, and send the data back to Kafka.
2. Configure the process definition by adding a message send action in one of the nodes. This action sends the required data to the Connector.
3. Once the custom integration's response is ready, send it back to the FLOWX.AI engine. Keep in mind that the process will wait in a receive message node until the response is received.
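The round trip described in the steps above can be sketched end to end. The snippet below is only an illustration, not FLOWX.AI code: in-memory `BlockingQueue`s stand in for the two Kafka topics, and a hypothetical legacy-system call is replaced by a simple transformation.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.UnaryOperator;

public class ConnectorFlowSketch {

    // Simulates one engine -> Connector -> engine round trip.
    static String roundTrip(String payload) {
        // In-memory stand-ins for the two Kafka topics involved.
        BlockingQueue<String> connectorIn = new ArrayBlockingQueue<>(1);   // engine -> Connector
        BlockingQueue<String> engineReceive = new ArrayBlockingQueue<>(1); // Connector -> engine

        // Stand-in for the legacy system call (hypothetical; here it just uppercases).
        UnaryOperator<String> legacyCall = String::toUpperCase;

        // Connector: listen for an event, call the legacy system, reply to the engine.
        Thread connector = new Thread(() -> {
            try {
                String event = connectorIn.take();
                engineReceive.put(legacyCall.apply(event));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        connector.start();

        try {
            // Engine side: the message send action publishes the data...
            connectorIn.put(payload);
            // ...and the process waits in a receive message node for the reply.
            return engineReceive.take();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("customer-lookup")); // CUSTOMER-LOOKUP
    }
}
```

The engine-side `take()` mirrors how the process instance blocks in the receive message node until the Connector's reply arrives.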
For Java-based Connector microservices, you can use the following startup code as a quickstart guide:
» Quickstart connector

Managing an integration
Managing Kafka topics
It's essential to configure the engine to consume events from topics that follow a predefined naming pattern. The naming pattern is defined using a topic prefix and suffix, such as "ai.flowx.dev.engine.receive."
The suggested naming convention is as follows:
```yaml
kafka:
  topic:
    naming:
      package: "ai.flowx."
      environment: "dev."
      version: ".v1"
      prefix: ${kafka.topic.naming.package}${kafka.topic.naming.environment}
      suffix: ${kafka.topic.naming.version}
      engineReceivePattern: engine.receive
      pattern: ${kafka.topic.naming.prefix}${kafka.topic.naming.engineReceivePattern}*
```
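Resolving those placeholders reproduces the example pattern. The sketch below simply concatenates the property values from the configuration; the `replyTopic` helper and the "connector" segment are illustrative, not part of the FLOWX.AI configuration:

```java
public class TopicNaming {
    // Values taken from the naming configuration above.
    static final String PACKAGE = "ai.flowx.";
    static final String ENVIRONMENT = "dev.";
    static final String VERSION = ".v1";
    static final String PREFIX = PACKAGE + ENVIRONMENT;       // "ai.flowx.dev."
    static final String ENGINE_RECEIVE = "engine.receive";

    // The wildcard pattern the engine consumes from.
    static String enginePattern() {
        return PREFIX + ENGINE_RECEIVE + "*";                 // "ai.flowx.dev.engine.receive*"
    }

    // A concrete topic a Connector might reply on (hypothetical name).
    static String replyTopic(String integration) {
        return PREFIX + ENGINE_RECEIVE + "." + integration + VERSION;
    }

    public static void main(String[] args) {
        System.out.println(enginePattern());                  // ai.flowx.dev.engine.receive*
        System.out.println(replyTopic("connector"));          // ai.flowx.dev.engine.receive.connector.v1
    }
}
```

Any topic name that starts with the resolved prefix-plus-pattern string, such as the reply topic above, is picked up by the engine.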
Building a Connector
Connectors act as lightweight business logic layers that perform the following tasks:
- Convert data from one domain to another (date formats, lists of values, units, etc.)
- Add information that the integration requires but that is not important for the process (a flag, a generated GUID for tracing, etc.)
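Both tasks are ordinary data mapping. A minimal sketch, assuming the legacy system uses `dd/MM/yyyy` dates while the process uses ISO-8601, and using `traceId` as the enrichment key (the formats and the key name are illustrative, not FLOWX.AI conventions):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

public class ConnectorMapping {
    private static final DateTimeFormatter LEGACY_DATE = DateTimeFormatter.ofPattern("dd/MM/yyyy");

    // Convert a legacy-domain date into the ISO format used inside the process.
    static String toProcessDate(String legacyDate) {
        return LocalDate.parse(legacyDate, LEGACY_DATE)
                        .format(DateTimeFormatter.ISO_LOCAL_DATE);
    }

    // Enrich the payload with integration-only information: a GUID for tracing.
    static Map<String, String> withTraceId(Map<String, String> payload) {
        Map<String, String> enriched = new LinkedHashMap<>(payload);
        enriched.put("traceId", UUID.randomUUID().toString());
        return enriched;
    }

    public static void main(String[] args) {
        System.out.println(toProcessDate("31/12/2023"));                      // 2023-12-31
        Map<String, String> payload = Map.of("dueDate", toProcessDate("31/12/2023"));
        System.out.println(withTraceId(payload).containsKey("traceId"));      // true
    }
}
```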
To build a Connector, you'll need to:
- Create a Kafka consumer - guide here
- Create a Kafka producer - guide here
When designing Connectors, keep in mind that the communication between the engine and the Connector is asynchronous in an event-driven architecture. It is essential to design Connectors in a way that avoids bloating the platform. Depending on the communication type between the Connector and the legacy system, you may need to implement custom solutions for load balancing requests, scaling the Connector, etc.
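If the legacy system cannot absorb the engine's event rate, the Connector has to mediate. One common approach, shown here as a generic sketch rather than anything FLOWX.AI-specific, is to cap concurrent backend calls with a semaphore so that a burst of Kafka events queues up inside the Connector instead of overwhelming the legacy system:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledLegacyClient {

    // Runs `events` simulated legacy calls, allowing at most `maxConcurrent`
    // at a time; returns the highest concurrency actually observed.
    static int maxConcurrencyObserved(int maxConcurrent, int events) {
        Semaphore permits = new Semaphore(maxConcurrent);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxObserved = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(events); // burst: all events at once
        try {
            for (int i = 0; i < events; i++) {
                pool.submit(() -> {
                    permits.acquire();                               // wait for a free slot
                    try {
                        maxObserved.accumulateAndGet(inFlight.incrementAndGet(), Math::max);
                        Thread.sleep(10);                            // stand-in for the slow legacy call
                        inFlight.decrementAndGet();
                    } finally {
                        permits.release();
                    }
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return maxObserved.get();
    }

    public static void main(String[] args) {
        // 8 Kafka events arrive in a burst, but at most 2 legacy calls run concurrently.
        System.out.println(maxConcurrencyObserved(2, 8) <= 2); // true
    }
}
```

Scaling the Connector horizontally (multiple instances in one Kafka consumer group) is the complementary option when the legacy system can take more load, not less.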
To ensure proper communication with the FLOWX.AI engine, make sure the Connector consumes from and produces to Kafka topics that follow the naming pattern the engine is configured to use.
For easy process flow tracing, consider adding a minimal setup for Jaeger tracing to your custom Connectors.