Version: 2.13.0

FLOWX.AI Engine setup guide

Infrastructure Prerequisites

The following components are mandatory to start the engine:

Database - Postgres / Oracle

In a microservices architecture, some microservices hold their data individually, using separate databases.

A basic Postgres configuration:

  • helm values.yaml:

    existingSecret: {{secretName}}
    metrics:
      enabled: true
      service:
        annotations:
          prometheus.io/port: {{prometheus port}}
          prometheus.io/scrape: "true"
        type: ClusterIP
      serviceMonitor:
        release: prometheus-operator
        enabled: true
        interval: 30s
        scrapeTimeout: 10s
    persistence:
      enabled: true
      size: 1Gi
    postgresqlDatabase: onboarding
    maxConnections: 200
    sharedBuffers: 128MB
    postgresqlUsername: postgres
    resources:
      limits:
        cpu: 6000m
        memory: 2048Mi
      requests:
        cpu: 200m
        memory: 512Mi


Redis server

A Redis cluster that allows the engine to cache process definitions, compiled scripts, and Kafka responses.

Kafka cluster

Kafka is the backbone of the engine; all plugins and integrations are accessed through the Kafka broker.

Management Tools

Additionally, you can check the details of the following components (the platform will start without them):

  • Logging via Elasticsearch
  • Monitoring
  • Tracing via Jaeger


Datasource configuration

To store process definitions and data about process instances, the engine uses a Postgres / Oracle database.

The following configuration details need to be added using environment variables:




You will need to make sure that the user, password, connection link, and database name are configured correctly; otherwise, you will receive errors at startup.

The datasource is configured automatically via a liquibase script inside the engine. All updates will include migration scripts.
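The configuration table is not reproduced here, but as an illustrative sketch, the variable names below follow Spring Boot's standard datasource conventions (they are assumptions, not confirmed names for this deployment), with placeholders in the same `{{...}}` style used above:

```shell
# Illustrative only — check the configuration table for the exact variable
# names used by your engine version.
export SPRING_DATASOURCE_URL="jdbc:postgresql://{{dbHost}}:5432/{{dbName}}"
export SPRING_DATASOURCE_USERNAME="{{dbUser}}"
export SPRING_DATASOURCE_PASSWORD="{{dbPassword}}"
```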

Redis configuration

The following values should be set with the corresponding Redis-related values.




All the data produced by the engine will be stored in Redis under a specific key. The name of the key can be configured using the environment variable:


File upload size

The maximum file size allowed for uploads can be set by using the SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE & SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE variables.
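For example (the values below are illustrative; note that the request size limit should be at least as large as the file size limit, since a request wraps the uploaded file):

```shell
# Illustrative limits — tune for your deployment.
export SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE=50MB
export SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE=55MB
```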

Kafka configuration

Kafka handles all communication between the FLOWX Engine and external plugins and integrations. It is also used for notifying running process instances when certain events occur.

Both a producer and a consumer must be configured. The following Kafka-related configurations can be set by using environment variables:

SPRING_KAFKA_BOOTSTRAP_SERVERS - address of the Kafka server

KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL - the interval between retries after AuthorizationException is thrown by KafkaConsumer

KAFKA_MESSAGE_MAX_BYTES - the largest message size the broker can receive from a producer.

The configuration related to consumers (group ids and thread numbers) can be configured separately for each message type:

It is important to know that all the events that start with a configured pattern will be consumed by the engine. This makes it possible to create a new integration and connect it to the engine without changing the configuration of the engine.
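The prefix-based routing described above can be sketched with a simple glob match; the pattern and topic names below are hypothetical examples, not defaults shipped with the engine:

```shell
# Hypothetical pattern and topic names — any topic whose name matches the
# configured pattern is picked up by the engine, so a new integration needs
# no engine-side configuration change.
KAFKA_TOPIC_PATTERN="ai.flowx.dev.engine.receive.*"

topic="ai.flowx.dev.engine.receive.plugin.task.v1"
case "$topic" in
  $KAFKA_TOPIC_PATTERN) result="consumed" ;;  # matches the configured prefix
  *) result="ignored" ;;
esac
echo "$result"
```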

KAFKA_TOPIC_PROCESS_NOTIFY_ADVANCE - Kafka topic used internally by the engine

KAFKA_TOPIC_PROCESS_NOTIFY_PARENT - Topic used for sub-processes to notify parent process when finished

KAFKA_TOPIC_PATTERN - the topic name pattern that the Engine listens on for incoming Kafka events

KAFKA_TOPIC_LICENSE_OUT - the topic name used by the Engine to generate licensing-related details

KAFKA_TOPIC_TASK_OUT - used for sending notifications to the plugin

KAFKA_TOPIC_PROCESS_OPERATIONS_IN - used for receiving calls from the task management plugin

KAFKA_TOPIC_PROCESS_EXPIRE_IN - the topic name that the Engine listens on for requests to expire processes

KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_SET - the topic name used by the Engine to schedule a process expiration

KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_STOP - the topic name used by the Engine to stop a process expiration

KAFKA_TOPIC_PROCESS_SCHEDULE_IN_RUN_ACTION - the topic name that the Engine listens on for requests to run scheduled actions

» Using the scheduler

KAFKA_TOPIC_DATA_SEARCH_IN - the topic name that the Engine listens on for requests to search for processes

KAFKA_TOPIC_DATA_SEARCH_OUT - the topic name used by the Engine to reply after finding a process

KAFKA_TOPIC_AUDIT_OUT - topic key for sending audit logs. Default value: ai.flowx.audit.log

Processes can also be started by sending messages to a Kafka topic.

KAFKA_TOPIC_PROCESS_START_IN - the Engine listens on this topic for requests to start a new process instance

KAFKA_TOPIC_PROCESS_START_OUT - used for sending out the reply after starting a new process instance
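As a sketch, a start request could be published from the command line with Kafka's console producer. The topic name and payload shape below are hypothetical; they must match your deployment's configured KAFKA_TOPIC_PROCESS_START_IN topic and the engine's actual message contract:

```shell
# Hypothetical topic name and payload — verify both against your deployment.
echo '{"processName": "client_onboarding", "paramValues": {"clientId": "42"}}' | \
  kafka-console-producer.sh \
    --bootstrap-server "$SPRING_KAFKA_BOOTSTRAP_SERVERS" \
    --topic "ai.flowx.dev.core.trigger.start.process.v1"
```

The reply (including the new process instance id) is then published by the engine on the KAFKA_TOPIC_PROCESS_START_OUT topic.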

WebSocket configuration

The engine also communicates with the frontend application via WebSockets. The socket server connection details also need to be configured:




Authorization & access roles

The following variables need to be set in order to connect to the identity management platform:




» Configuring access roles for processes


Debugging

Advanced debugging features can be enabled. When enabled, snapshots of the process status are taken after each action and can later be used for debugging purposes. Because this feature comes with an exponential increase in database usage, we suggest setting the flag to true on debugging environments and false on production ones.

This feature can be enabled by setting the FLOWX_DEBUG environment variable to true.


Logging

The following environment variables can be set to control log levels:

LOGGING_LEVEL_ROOT - root spring boot microservice logs

LOGGING_LEVEL_APP - app-level logs

LOGGING_LEVEL_PROCESS - process instance orchestration-related logs, included in LOGGING_LEVEL_APP

LOGGING_LEVEL_MESSAGING - Kafka events-related logs, included in LOGGING_LEVEL_APP

LOGGING_LEVEL_SOCKET - WebSocket-related logs, included in LOGGING_LEVEL_APP

LOGGING_LEVEL_REDIS - Redis-related logs

LOGGING_LEVEL_JAEGER - Jaeger tracing related logs

LOGGING_LEVEL_OAUTH2_EXC - specific auth exception logs, included in LOGGING_LEVEL_APP
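For example, the variables accept the standard Spring/Logback level names (TRACE, DEBUG, INFO, WARN, ERROR); the values chosen below are illustrative:

```shell
# Illustrative levels — keep verbose levels off in production.
export LOGGING_LEVEL_ROOT=INFO
export LOGGING_LEVEL_APP=DEBUG
export LOGGING_LEVEL_MESSAGING=DEBUG
```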

Advancing Controller

To use the advancing controller, the following environment variables are needed for the process engine to connect to the advancing Postgres database:

ADVANCING_DATASOURCE_JDBC_URL - environment variable used to configure a JDBC (Java Database Connectivity) data source; it specifies the connection URL for the database, including the server, port, database name, and any other necessary connection parameters

ADVANCING_DATASOURCE_USERNAME - environment variable used to authenticate the user access to the data source

ADVANCING_DATASOURCE_PASSWORD - environment variable used to set the password for a data source connection
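For example, with placeholders in the same `{{...}}` style used above (the URL shape assumes a Postgres deployment on the default port):

```shell
# Placeholder values — substitute your advancing database details.
export ADVANCING_DATASOURCE_JDBC_URL="jdbc:postgresql://{{dbHost}}:5432/{{advancingDbName}}"
export ADVANCING_DATASOURCE_USERNAME="{{dbUser}}"
export ADVANCING_DATASOURCE_PASSWORD="{{dbPassword}}"
```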

» Advancing controller setup

Configuring Scheduler

Below you can find a configuration .yaml to use scheduler service together with FLOWX.AI Engine:

    scheduler:
      processCleanup:
        enabled: false
        cronExpression: 0 */5 0-5 * * ? # every day during the night, every 5 minutes, at the start of the minute
        batchSize: 1000
      masterElection:
        cronExpression: 30 */3 * * * ? # master election every 3 minutes
      websocket:
        namespace:
          cronExpression: 0 * * * * *
          expireMinutes: 30

The configuration parameters are explained below:

  • processCleanup: a configuration for cleaning up processes.
      • enabled specifies whether this feature is turned on or off.
      • cronExpression is a schedule expression that determines when the cleanup process runs. In this case, it runs every day during the night (between 12:00 AM and 5:59 AM), every 5 minutes, at the start of the minute.
      • batchSize specifies the number of processes to be cleaned up in one batch.
  • masterElection: a configuration for electing a master.
  • websocket: a configuration for WebSocket connections.
      • expireMinutes specifies how long the WebSocket namespace is valid (30 minutes in this case).
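The cron expressions above use the Quartz six-field format (the trailing `?` is Quartz's "no specific value" marker for the day-of-week field). Splitting the processCleanup expression shows which field carries each constraint:

```shell
# The processCleanup schedule, split into its six Quartz fields.
cron='0 */5 0-5 * * ?'
set -f            # disable globbing so * and ? stay literal during the split
set -- $cron
set +f
echo "seconds=$1 minutes=$2 hours=$3 day-of-month=$4 month=$5 day-of-week=$6"
```

Reading the fields: second 0 of every 5th minute, during hours 0-5 — i.e., every 5 minutes during the night, at the start of the minute.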
