FLOWX Engine Setup guide

Infrastructure Prerequisites

The following components are mandatory for starting the engine:

Database - Postgres / Oracle

A basic Postgres configuration:
  • helm values.yaml:

    onboardingdb:
      existingSecret: {{secretName}}
      metrics:
        enabled: true
        service:
          annotations:
            prometheus.io/port: {{prometheus port}}
            prometheus.io/scrape: "true"
          type: ClusterIP
        serviceMonitor:
          additionalLabels:
            release: prometheus-operator
          enabled: true
          interval: 30s
          scrapeTimeout: 10s
      persistence:
        enabled: true
        size: 1Gi
      postgresqlDatabase: onboarding
      postgresqlExtendedConf:
        maxConnections: 200
        sharedBuffers: 128MB
      postgresqlUsername: postgres
      resources:
        limits:
          cpu: 6000m
          memory: 2048Mi
        requests:
          cpu: 200m
          memory: 512Mi
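The {{secretName}} placeholder must point to a pre-existing Kubernetes secret holding the database credentials. A minimal sketch of such a secret; both the secret name and the postgresql-password key are assumptions based on the Bitnami PostgreSQL chart's conventions:

apiVersion: v1
kind: Secret
metadata:
  name: onboardingdb-secret   # substitute this name for {{secretName}} above
type: Opaque
stringData:
  postgresql-password: <your-db-password>   # key name assumed from the Bitnami PostgreSQL chart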

Redis server

A Redis cluster that allows the engine to cache process definitions, compiled scripts and Kafka responses.

Kafka cluster

Kafka is the backbone of the engine: all plugins and integrations are accessed through the Kafka broker.

Management Tools

Additionally, you can check details about the following components (the platform will start without them):
  • Logging via Elasticsearch
  • Monitoring
  • Tracing via Jaeger

Configuration

Datasource Configuration

To store process definitions and data about process instances, the engine uses a Postgres / Oracle database.
The following configuration details need to be added in the environment variables:
datasource:
  url: jdbc:postgresql://${DB_HOST}:5432/${DB_NAME} # for Oracle: jdbc:oracle:thin:@//${DB_HOST}:1521/${DB_NAME}
  username: ${DB_USERNAME}
  password: ${DB_PASSWORD}
  hikari:
    maximum-pool-size: ${HIKKARI_MAX_POOL_SIZE:10}
Make sure the user, password, connection URL and database name are configured correctly; otherwise you will receive errors at startup.
The datasource is configured automatically via a Liquibase script inside the engine. All updates will include migration scripts.
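For reference, a sketch of how the datasource variables might be wired into the engine's container (the hostname and secret name are illustrative):

env:
  - name: DB_HOST
    value: postgresql.flowx.svc.cluster.local   # illustrative in-cluster hostname
  - name: DB_NAME
    value: onboarding
  - name: DB_USERNAME
    value: postgres
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: onboardingdb-secret               # hypothetical secret name
        key: postgresql-password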

Redis Configuration

All the data produced by the engine is stored in Redis under the configured key-prefix:
cache:
  type: redis
  redis:
    key-prefix: "engine:"

redis:
  host: ${REDIS_MASTER_HOST}
  port: ${REDIS_MASTER_PORT}
  password: ${REDIS_PASSWORD}
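With the configuration above, every key the engine writes is namespaced under "engine:", so the same Redis instance can be shared with other services without key collisions. Illustrative environment values (the hostname is an assumption):

REDIS_MASTER_HOST: redis-master.flowx.svc.cluster.local   # illustrative hostname
REDIS_MASTER_PORT: "6379"                                 # Redis default port
REDIS_PASSWORD: <your-redis-password>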

File upload size

The maximum file size allowed for uploads can be set by using the MULTIPART_MAX_FILE_SIZE variable.
servlet:
  multipart:
    max-file-size: ${MULTIPART_MAX_FILE_SIZE:50MB}
    max-request-size: ${MULTIPART_MAX_FILE_SIZE:50MB}
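For example, to raise the limit to 100 MB, override the variable at deployment time; the per-file and per-request limits move together because they share the same variable:

MULTIPART_MAX_FILE_SIZE: 100MB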

Kafka Configuration

Kafka handles all communication between the FlowX Engine and external plugins and integrations. It is also used for notifying running process instances when certain events occur.
Both a producer and a consumer must be configured. The following Kafka related configurations can be set by using environment variables:
spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
    producer:
      properties:
        message:
          max:
            bytes: ${KAFKA_MESSAGE_MAX_BYTES:52428800} # 50MB
        max:
          request:
            size: ${KAFKA_MESSAGE_MAX_BYTES:52428800} # 50MB
        partition:
          fetch:
            bytes: ${KAFKA_MESSAGE_MAX_BYTES:52428800} # 50MB
    consumer:
      group-id: ${KAFKA_CONSUMER_GROUP_ID}

kafka:
  consumerThreads: ${KAFKA_CONSUMER_THREADS:3}
  authorizationExceptionRetryInterval: ${KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL:10}
KAFKA_BOOTSTRAP_SERVER - address of the Kafka server
KAFKA_CONSUMER_GROUP_ID - group of consumers
KAFKA_MESSAGE_MAX_BYTES - the maximum message size the broker will accept from a producer
KAFKA_CONSUMER_THREADS - the number of Kafka consumer threads
KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL - the interval between retries after AuthorizationException is thrown by KafkaConsumer
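Illustrative values for these variables (the broker address and group id are assumptions). Note that 52428800 = 50 * 1024 * 1024 bytes, so a 100 MB limit would be 104857600; keep in mind that the broker enforces its own message.max.bytes limit, which must be at least as large as what the producer sends.

KAFKA_BOOTSTRAP_SERVER: kafka.flowx.svc.cluster.local:9092   # illustrative broker address
KAFKA_CONSUMER_GROUP_ID: flowx-engine                        # hypothetical consumer group
KAFKA_MESSAGE_MAX_BYTES: "52428800"                          # 50 * 1024 * 1024 = 50MB
KAFKA_CONSUMER_THREADS: "3"
KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL: "10"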
It is important to know that all events on topics matching the configured pattern will be consumed by the engine. This makes it possible to create a new integration and connect it to the engine without changing the engine configuration.
kafka:
  topicNameProcessNotify: "${KAFKA_TOPIC_PROCESS_NOTIFY}"
  topicPatternProcessReceive: "${KAFKA_TOPIC_PATTERN_PROCESS_RECEIVE}"
  topicNameProcessExpire: "${KAFKA_TOPIC_PROCESS_EXPIRE}"
  topicNameScheduleMessage: "${KAFKA_TOPIC_SCHEDULE_MESSAGE}"
  topicNameStopScheduledMessage: "${KAFKA_TOPIC_STOP_SCHEDULED_MESSAGE}"
  topicNameStartProcess: "${KAFKA_TOPIC_START_PROCESS_IN}"
  topicNameResponseStartProcess: "${KAFKA_TOPIC_START_PROCESS_OUT}"
  topicNameRunScheduledAction: "${KAFKA_TOPIC_ACTION_RUN}"
  topicNameParentNotify: "${KAFKA_TOPIC_PARENT_NOTIFY}"
  topicNameLicense: "${KAFKA_LICENSE_TOPIC}"
KAFKA_TOPIC_PROCESS_NOTIFY - Kafka topic used internally by the engine
KAFKA_TOPIC_PARENT_NOTIFY - Topic used for sub-processes to notify parent process when finished
KAFKA_TOPIC_PATTERN_PROCESS_RECEIVE - the topic name pattern that the Engine listens on for incoming Kafka events
KAFKA_LICENSE_TOPIC - the topic name used by the Engine to generate licensing related details
KAFKA_TOPIC_PROCESS_EXPIRE - the topic name that the Engine listens on for requests to expire processes
KAFKA_TOPIC_SCHEDULE_MESSAGE - the topic name used by the Engine to schedule a process expiration
KAFKA_TOPIC_STOP_SCHEDULED_MESSAGE - the topic name used by the Engine to stop a process expiration
KAFKA_TOPIC_ACTION_RUN - the topic name that the Engine listens on for requests to run scheduled actions
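As a sketch, assuming a (hypothetical) naming convention in which every inbound integration topic shares a common prefix, the receive pattern could be a regular expression such as:

KAFKA_TOPIC_PATTERN_PROCESS_RECEIVE: "ai\\.flowx\\.process\\.receive\\..*"   # hypothetical naming convention

A new integration could then publish to, say, ai.flowx.process.receive.myintegration and be picked up by the engine with no configuration change.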

Processes can also be started by sending messages to a Kafka topic.

KAFKA_TOPIC_START_PROCESS_IN - the Engine listens on this topic for requests to start a new process instance
KAFKA_TOPIC_START_PROCESS_OUT - used for sending out the reply after starting a new process instance

Web socket configuration

The engine also communicates with the frontend application via WebSockets. The socket server connection details also need to be configured:
web-socket:
  webSocketExternalUrl: ${WEBSOCKET_PROTOCOL:ws}://${WEBSOCKET_ENDPOINT:localhost:8080}
  webSocketServerPort: ${WS_SERVER_PORT}
  webSocketServerPath: ${WEBSOCKET_PATH}
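For example, with the following (illustrative) values, webSocketExternalUrl resolves to wss://flowx.example.com; left unset, the defaults above yield ws://localhost:8080:

WEBSOCKET_PROTOCOL: wss                  # use wss (TLS) outside local development
WEBSOCKET_ENDPOINT: flowx.example.com    # illustrative public hostname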

Debugging

Advanced debugging features can be enabled. When enabled, snapshots of the process status are taken after each action and can later be used for debugging. This feature comes with a substantial increase in database usage, so we suggest setting the flag to true in debugging environments and to false in production ones.
application:
  debug: ${PROCESS_DEBUG:false}
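A minimal sketch of the recommended split:

PROCESS_DEBUG: "true"   # debugging environments only; leave unset (defaults to false) in production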

Logging

The following environment variables can be set to control log levels:
LOGGING_ROOT_LOGLEVEL - root spring boot microservice logs
LOGGING_APP_LOGLEVEL - app level logs
LOGGING_PROCESS_LOGLEVEL - process instance orchestration related logs, included in LOGGING_APP_LOGLEVEL
LOGGING_MESSAGING_LOGLEVEL - Kafka events related logs, included in LOGGING_APP_LOGLEVEL
LOGGING_SOCKET_LOGLEVEL - WebSocket related logs, included in LOGGING_APP_LOGLEVEL
LOGGING_REDIS_LOGLEVEL - Redis related logs
LOGGING_JAEGER_LOGLEVEL - Jaeger tracing related logs
LOGGING_OAUTH2_EXC_LOGLEVEL - specific auth exception logs, included in LOGGING_APP_LOGLEVEL
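For example, to keep overall output quiet while tracing process orchestration in detail (standard Spring Boot level names such as INFO and DEBUG apply):

LOGGING_ROOT_LOGLEVEL: INFO
LOGGING_APP_LOGLEVEL: INFO
LOGGING_PROCESS_LOGLEVEL: DEBUG   # verbose orchestration logs only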