What is an integration?
Integrations connect legacy systems or third-party applications to the FLOWX.AI Process engine. They enable communication between systems by combining custom code with the Kafka messaging system.
Integrations serve various purposes, including working with legacy APIs, implementing custom file-exchange solutions, or integrating with RPAs.
Because every legacy system is different, integrations require custom development to connect them to your FLOWX.AI setup.
Developing a custom integration
Developing custom integrations for the FLOWX.AI platform is straightforward. You can write the custom code in your preferred technology; the only requirement is that it can send messages to and receive messages from the Kafka cluster.
Steps to create a custom integration
Follow these steps to create a custom integration:
1. Develop a microservice, referred to as a "Connector," using your preferred tech stack. The Connector should listen for Kafka events, process the received data, interact with legacy systems if required, and send the result back to Kafka.
2. Once the custom integration's response is ready, send it back to the FLOWX.AI engine. Keep in mind that the process instance waits at a receive message node until the response arrives.
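The Connector's core job in the steps above, consuming a payload, enriching it with data from a legacy system, and producing a response, can be sketched as a plain function. The class and field names below (ConnectorSketch, legacyStatus, clientId) are illustrative assumptions, not part of the FLOWX.AI API; a real Connector wraps this logic in Kafka consumer and producer code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a Connector's processing step.
public class ConnectorSketch {

    // Transform the payload received from the engine's Kafka topic into
    // the response that will be sent back. A real Connector would call
    // the legacy system here instead of hard-coding a value.
    public static Map<String, Object> process(Map<String, Object> received) {
        Map<String, Object> response = new HashMap<>(received);
        // Pretend this value was looked up in the legacy system.
        response.put("legacyStatus", "OK");
        return response;
    }

    public static void main(String[] args) {
        Map<String, Object> in = new HashMap<>();
        in.put("clientId", "42");
        System.out.println(process(in));
    }
}
```

Keeping the processing step pure like this makes the Connector easy to unit-test independently of any Kafka infrastructure.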
For Java-based Connector microservices, you can use the startup code in the Quickstart connector guide as a starting point.
Managing an integration
Managing Kafka topics
The engine must be configured to consume events from topics that follow a predefined naming pattern. The pattern is built from a configurable topic prefix and suffix, for example "ai.flowx.dev.engine.receive."
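As an illustration, a small helper can compose topic names from the configurable prefix, environment, and suffix. The helper and its arguments are hypothetical; the exact convention depends on your deployment's configuration.

```java
// Illustrative helper for composing topic names; the
// "ai.flowx.dev.engine.receive" pattern mirrors the example above.
public class TopicNaming {

    // Join the configured prefix, environment, and suffix with dots.
    public static String receiveTopic(String prefix, String environment, String suffix) {
        return prefix + "." + environment + "." + suffix;
    }

    public static void main(String[] args) {
        System.out.println(receiveTopic("ai.flowx", "dev", "engine.receive"));
    }
}
```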
Building a Connector
Connectors act as lightweight business logic layers and perform the following tasks:
- Convert data from one domain to another (date formats, lists of values, units, etc.)
- Add information that the integration requires but that is not important for the process (a flag, a generated GUID for tracing, etc.)
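Both tasks can be sketched in a few lines of Java. The "dd/MM/yyyy" legacy date format is an assumed example, not a FLOWX.AI requirement.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.UUID;

// Sketch of the two Connector tasks above: converting a date between the
// legacy and the process domain, and generating a trace GUID.
public class DomainConversion {

    private static final DateTimeFormatter LEGACY = DateTimeFormatter.ofPattern("dd/MM/yyyy");
    private static final DateTimeFormatter PROCESS = DateTimeFormatter.ISO_LOCAL_DATE;

    // Convert a legacy "dd/MM/yyyy" date to the ISO-8601 form used downstream.
    public static String toProcessDate(String legacyDate) {
        return LocalDate.parse(legacyDate, LEGACY).format(PROCESS);
    }

    // Generate a GUID used only for tracing the integration call.
    public static String traceGuid() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        System.out.println(toProcessDate("31/01/2024")); // 2024-01-31
        System.out.println(traceGuid());
    }
}
```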
When building a Connector, keep in mind that communication between the engine and the Connector is asynchronous, following an event-driven architecture. Design Connectors so they do not bloat the platform: depending on how the Connector communicates with the legacy system, you may need custom solutions for load-balancing requests, scaling the Connector, and so on.
To ensure proper communication with the Engine, make sure to include all received Kafka headers in the response sent back to it.
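A minimal sketch of that rule, modelling headers as a simple map: a real Connector would copy the Kafka record headers from the consumed message onto the record it produces, and the "connector" header added here is purely illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of header propagation between the consumed message and the response.
public class HeaderPropagation {

    public static Map<String, String> buildResponseHeaders(Map<String, String> received) {
        // Copy every received header unchanged so the engine can correlate
        // the response with the waiting process instance.
        Map<String, String> response = new LinkedHashMap<>(received);
        // Connector-specific headers may be added on top, never removed.
        response.put("connector", "legacy-crm"); // illustrative extra header
        return response;
    }

    public static void main(String[] args) {
        Map<String, String> in = new LinkedHashMap<>();
        in.put("processInstanceId", "1234");
        System.out.println(buildResponseHeaders(in));
    }
}
```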
For easy process flow tracing, consider adding a minimal setup for Jaeger tracing to your custom Connectors.