Creating a custom integration
Creating integrations for the FLOWX platform is straightforward: you can use your preferred technology to write the custom code for them. The only constraint is that they must be able to send and receive messages to and from the Kafka cluster.

Steps for creating an integration

  • Create a microservice (we'll refer to it as a Connector) that can listen for and react to Kafka events, using your preferred tech stack. Add custom logic for handling the data received from Kafka and for obtaining related information from legacy systems, then send the resulting data back to Kafka.
Here's the startup code for a Java Connector Microservice:
GitHub - flowx-ai/quickstart-connector: FLOWX Quickstart for Java Connectors
  • Add the related configuration to the process definition: you will have to add a message send action in one of the nodes in order to send the needed data to the Connector.
  • When the response from the custom integration is ready, send it back to the engine. Keep in mind that your process will be waiting in a receive message node.
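Stripped of the actual Kafka client plumbing, the request/response flow above boils down to a handler that receives a payload and its Kafka headers, calls the legacy system, and returns a reply carrying the same headers. The sketch below illustrates that shape; all class, method, and header names here are illustrative, not part of the FLOWX API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative core of a Connector; the Kafka consumer/producer
// wiring around it is omitted.
public class QuickstartConnector {

    // Simple holder for a message's Kafka headers and payload.
    public record Message(Map<String, String> headers, String payload) {}

    // Hypothetical call into the legacy system.
    static String callLegacySystem(String request) {
        return "legacy-result-for:" + request;
    }

    // Handle an event from the engine and build the reply that will be
    // published back on one of the engine's input topics.
    public static Message handle(Message incoming) {
        String result = callLegacySystem(incoming.payload());
        // All received Kafka headers travel back with the result so the
        // engine can correlate the reply with the waiting process instance.
        Map<String, String> replyHeaders = new HashMap<>(incoming.headers());
        return new Message(replyHeaders, result);
    }

    public static void main(String[] args) {
        Message in = new Message(Map.of("processInstanceId", "42"), "lookupCustomer");
        Message out = handle(in);
        System.out.println(out.headers().get("processInstanceId")); // 42
        System.out.println(out.payload());
    }
}
```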

How to manage Kafka Topics

Don't forget: the engine is configured to consume all events on topics that start with a predefined topic path (e.g. flowx.in.*).
  • You will need to configure this topic pattern when setting up the Engine.
  • Make sure to use it when sending messages from the Connectors back to the Engine.
  • All Kafka headers received by the Connector should be sent back to the Engine with the result.
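To illustrate the topic naming constraint, the sketch below checks candidate topic names against a pattern like flowx.in.* (the actual pattern value is deployment-specific; this one is just the example from above). Kafka consumers can also subscribe with such a regex directly via `KafkaConsumer.subscribe(Pattern)`.

```java
import java.util.List;
import java.util.regex.Pattern;

public class TopicPatternCheck {
    public static void main(String[] args) {
        // The engine consumes every topic whose name matches this pattern;
        // Connectors must publish their replies on a matching topic.
        Pattern enginePattern = Pattern.compile("flowx\\.in\\..*");

        List<String> topics = List.of(
                "flowx.in.customer.check",   // consumed by the engine
                "flowx.out.customer.check"); // ignored by the engine

        for (String topic : topics) {
            System.out.println(topic + " -> " + enginePattern.matcher(topic).matches());
        }
    }
}
```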

How to build a Connector

Connectors should act as a light business logic layer that:
  • Converts data from one domain to another (date formats, list of values, units, etc.)
  • Adds information that is required by the integration but is not important for the process (a flag, generates a GUID for tracing, etc.)
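As an example of this thin conversion layer, the sketch below reformats a date from a legacy system's format into the process's format and generates a GUID used purely for tracing (the formats and field roles are illustrative assumptions):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.UUID;

public class DomainConversion {
    public static void main(String[] args) {
        // Convert a date from the legacy system's format to the process's format.
        DateTimeFormatter legacyFormat  = DateTimeFormatter.ofPattern("dd/MM/yyyy");
        DateTimeFormatter processFormat = DateTimeFormatter.ISO_LOCAL_DATE;

        String legacyDate = "31/01/2023";
        String converted = LocalDate.parse(legacyDate, legacyFormat).format(processFormat);
        System.out.println(converted); // 2023-01-31

        // Generate a GUID required by the integration for tracing,
        // but not meaningful to the process itself.
        String traceId = UUID.randomUUID().toString();
        System.out.println(traceId.length()); // 36
    }
}
```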
Keep in mind that you are in an event-driven architecture and that communication between the engine and a Connector is asynchronous. Connectors need to be designed so that they do not overload the platform. Depending on the communication type between the Connector and the legacy system, you might also need to add a custom implementation for load balancing requests, scaling the Connector, and so on.
For the Connector to communicate correctly with the Engine, make sure to include all the received Kafka headers in the response that is sent back to the FLOWX Engine.
To make the process flow easy to trace, a minimal setup for Jaeger tracing should be added to custom Connectors.
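One common way to get a minimal Jaeger setup is through the standard environment variables that the Jaeger client libraries read at startup; this is a configuration sketch, and the values below are placeholders for your own deployment:

```shell
# Service name under which the Connector's spans appear in the Jaeger UI.
export JAEGER_SERVICE_NAME=quickstart-connector
# Jaeger agent the client reports spans to (replace with your agent's host).
export JAEGER_AGENT_HOST=localhost
export JAEGER_AGENT_PORT=6831
# Sample every trace; tune the sampler for production traffic.
export JAEGER_SAMPLER_TYPE=const
export JAEGER_SAMPLER_PARAM=1
```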