This section outlines the OAuth2 configuration settings for securing the Spring application, including resource server settings, security type, and access authorizations.
Old configuration from < v4.1 releases (will be deprecated in v4.5):
- `SECURITY_OAUTH2_BASE_SERVER_URL`: The base URL of the OAuth 2.0 Authorization Server, which handles authentication and authorization for clients and users. It is used to authorize clients and to issue and validate access tokens.
- `SECURITY_OAUTH2_CLIENT_CLIENT_ID`: A unique identifier for a client application registered with the OAuth 2.0 Authorization Server. It is used to authenticate the client application when it attempts to access resources on behalf of a user.
- `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET`: The secret key used to authenticate requests made by the authorization client.
- `SECURITY_OAUTH2_REALM`: The realm name used when authenticating with OAuth2 providers in the Spring Security OAuth2 framework.
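As an illustration, the legacy variables could be supplied to the service as plain environment variables. All values below (server URL, client ID, secret, realm) are placeholders, not documented defaults:

```shell
# Hypothetical values -- replace with your identity provider's actual settings.
export SECURITY_OAUTH2_BASE_SERVER_URL="https://idp.example.com/auth"  # base URL of the authorization server
export SECURITY_OAUTH2_CLIENT_CLIENT_ID="flowx-engine"                 # client registered with the server
export SECURITY_OAUTH2_CLIENT_CLIENT_SECRET="change-me"                # secret for that client
export SECURITY_OAUTH2_REALM="flowx"                                   # realm used when authenticating
```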
The new configuration, available starting with the v4.1 release, is described below.
| Environment Variable | Description | Value |
|---|---|---|
| `SECURITY_PUBLIC_PATHS_0` | Public path that does not require authentication | `/api/platform/components-versions` |
| `SECURITY_PUBLIC_PATHS_1` | Public path that does not require authentication | `/manage/actuator/health` |
| `SECURITY_PATH_AUTHORIZATIONS_0_PATH` | Defines a security path or endpoint pattern. The security settings apply to all paths under `/api/`; `**` is a wildcard matching all subpaths. | `"/api/**"` |
| `SECURITY_PATH_AUTHORIZATIONS_0_ROLES_ALLOWED` | Roles allowed to access the specified path. Here the allowed roles are empty (`""`), meaning access to the `/api/**` paths is open to all users and no specific role is required for authorization. | `""` |
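For example, these settings could be exported as environment variables (assuming the public-paths list is zero-indexed, so the first entry uses suffix `_0`):

```shell
# Public endpoints that skip authentication
export SECURITY_PUBLIC_PATHS_0="/api/platform/components-versions"
export SECURITY_PUBLIC_PATHS_1="/manage/actuator/health"
# Authorization rule for everything under /api/; empty roles = no specific role required
export SECURITY_PATH_AUTHORIZATIONS_0_PATH="/api/**"
export SECURITY_PATH_AUTHORIZATIONS_0_ROLES_ALLOWED=""
```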
Kafka handles all communication between the FlowX.AI Engine and external plugins and integrations. It is also used for notifying running process instances when certain events occur.
In Kafka, a consumer group is a group of consumers that jointly consume and process messages from one or more Kafka topics. Each consumer group has a unique identifier called a group ID, which Kafka uses to manage message consumption and distribution among the members of the group.

Thread numbers, on the other hand, refer to the number of threads a consumer application uses to process messages from Kafka. By default, each consumer instance runs in a single thread, which can limit the throughput of message processing. Increasing the number of consumer threads can improve the parallelism and efficiency of message consumption, especially when dealing with high message volumes.

Both group IDs and thread numbers can be configured in Kafka to optimize message processing according to specific requirements, such as message volume, message type, and processing latency. The consumer configuration (group IDs and thread numbers) can be set separately for each message type, as follows:
| Environment Variable | Description | Default Value |
|---|---|---|
| `KAFKA_CONSUMER_THREADS_NOTIFY_PARENT` | Number of threads for notifying when a subprocess is blocked | 6 |
| `KAFKA_CONSUMER_THREADS_ADAPTERS` | Number of threads for processing messages related to adapters | 6 |
| `KAFKA_CONSUMER_THREADS_SCHEDULER_ADVANCING` | Number of threads for continuing advancement | 6 |
| `KAFKA_CONSUMER_THREADS_SCHEDULER_RUN_ACTION` | Number of threads for running scheduled actions | 6 |
| `KAFKA_CONSUMER_THREADS_PROCESS_START` | Number of threads for starting processes | 6 |
| `KAFKA_CONSUMER_THREADS_PROCESS_START_FOR_EVENT` | Number of threads for starting processes for an event | 2 |
| `KAFKA_CONSUMER_THREADS_PROCESS_EXPIRE` | Number of threads for expiring processes | 6 |
| `KAFKA_CONSUMER_THREADS_PROCESS_OPERATIONS` | Number of threads for processing operations from task management | 6 |
| `KAFKA_CONSUMER_THREADS_PROCESS_BATCH_PROCESSING` | Number of threads for processing bulk operations from task management | 6 |
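As a sketch, individual defaults can be overridden per message type; here the thread count for process starts is raised while event-triggered starts keep their documented default (the value 12 is illustrative, not a recommendation):

```shell
export KAFKA_CONSUMER_THREADS_PROCESS_START=12          # more parallelism for starting processes (illustrative value)
export KAFKA_CONSUMER_THREADS_PROCESS_START_FOR_EVENT=2 # documented default for event-triggered starts
```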
It is important to know that all the events that start with a configured pattern will be consumed by the Engine. This makes it possible to create a new integration and connect it to the engine without changing the configuration.
Task manager operations can be the following: assignment, unassignment, hold, unhold, and terminate. They are matched with the `...operations.out` topic on the engine side. For more information, check the Task Management plugin documentation.
The Process Engine uses Elasticsearch for process instance indexing and search capabilities. Configure the connection using these environment variables:
| Environment Variable | Description | Default Value |
|---|---|---|
| `ADVANCING_TYPE` | Specifies the type of advancing mechanism to be used. Advancing can be done either through Kafka or through the database (parallel). | `PARALLEL` (possible values: `KAFKA`, `PARALLEL`) |
| `ADVANCING_THREADS` | Number of parallel threads to be used | 20 |
| `ADVANCING_PICKING_BATCH_SIZE` | Number of tasks to pick in each batch | 10 |
| `ADVANCING_PICKING_PAUSE_MILLIS` | Pause duration between picking batches, in milliseconds. After picking a batch of tasks, the system waits 100 milliseconds before picking the next batch, which helps control the rate of task intake and processing. | 100 |
| `ADVANCING_COOLDOWN_AFTER_SECONDS` | Cooldown period after processing a batch, in seconds. The system waits 120 seconds after completing a batch before starting the next cycle, which helps prevent system overload and manage resource usage. | 120 |
| `ADVANCING_SCHEDULER_HEARTBEAT_CRONEXPRESSION` | A cron expression that defines the schedule for the heartbeat. The scheduler's heartbeat triggers every 2 seconds; this frequent heartbeat can be used to ensure the system is functioning correctly and tasks are being processed as expected. | |
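A minimal sketch of a database-driven (parallel) advancing setup, assuming the mechanism selector variable is named `ADVANCING_TYPE`; the heartbeat cron value shown is illustrative (a Quartz-style expression firing every 2 seconds), not a documented default:

```shell
export ADVANCING_TYPE="PARALLEL"             # advance through the database rather than Kafka (assumed variable name)
export ADVANCING_THREADS=20                  # parallel worker threads
export ADVANCING_PICKING_BATCH_SIZE=10       # tasks picked per batch
export ADVANCING_PICKING_PAUSE_MILLIS=100    # pause between picking batches (ms)
export ADVANCING_COOLDOWN_AFTER_SECONDS=120  # cooldown after completing a batch (s)
export ADVANCING_SCHEDULER_HEARTBEAT_CRONEXPRESSION="*/2 * * * * ?"  # illustrative: every 2 seconds
```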
This section contains environment variables that configure the scheduler's behavior, including thread count, cron jobs for data partitioning, process cleanup, and master election.
| Environment Variable | Description | Default Value | Possible Values |
|---|---|---|---|
| `SCHEDULER_THREADS` | Number of threads for the scheduler | 10 | Integer values (e.g., 10, 20) |
| `SCHEDULER_PROCESS_CLEANUP_ENABLED` | Activates the cron job for process cleanup | false | true, false |
| `SCHEDULER_PROCESS_CLEANUP_CRON_EXPRESSION` | Cron expression for the process cleanup scheduler | `0 */5 0-5 * * ?` (every day during the night, every 5 minutes, at the start of the minute) | |
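For example, nightly cleanup can be enabled with the documented cron expression:

```shell
export SCHEDULER_THREADS=10
export SCHEDULER_PROCESS_CLEANUP_ENABLED=true
# Quartz-style expression: every 5 minutes between 00:00 and 05:59, at second 0
export SCHEDULER_PROCESS_CLEANUP_CRON_EXPRESSION="0 */5 0-5 * * ?"
```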
This section details the environment variable that controls the expiration of subprocesses within a parent process. It determines whether subprocesses should terminate when the parent process expires or follow their own expiration settings.
| Environment Variable | Description | Default Value | Possible Values |
|---|---|---|---|
| `FLOWX_PROCESS_EXPIRE_SUBPROCESSES` | Governs subprocess expiration in a parent process. When true, terminates all associated subprocesses upon parent process expiration. When false, subprocesses follow their individual expiration settings or persist indefinitely if not configured. | | true, false |
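For example, to make subprocesses expire together with their parent process:

```shell
export FLOWX_PROCESS_EXPIRE_SUBPROCESSES=true  # subprocesses terminate when the parent process expires
```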
The FlowX helm chart provides a management service with the necessary parameters to integrate with the Prometheus operator. However, this integration is disabled by default.
Old configuration from < v4.1 releases (will be deprecated in v4.5):
MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED: Enables or disables Prometheus metrics export.
The new configuration, available starting with the v4.1 release, is described below. Note that this setup is backwards compatible: it does not affect the configuration from v3.4.x. The old configuration files will still work until the v4.5 release.
To configure Prometheus metrics export for the FlowX.AI Engine, the following environment variable is required: