There are two types of configuration parameters that can be read from the environment: variables and secrets. One provider reads variables and secrets from environment variables, and two providers read them from Kubernetes (Secrets and ConfigMaps). By default, variables and secrets are read from environment variables (the env provider).
Configuration parameters from environment variables (default)
The env provider reads variables and secrets from environment variables. For security reasons, it applies an allowlist regex, which defaults to FLOWX_CONFIGPARAM_.*. Only environment variables that match this naming pattern can be read at runtime as configuration parameters (either variables or secrets). Adjust the regex to match the environment variables you use in your deployment.
| Environment variable | Description | Default value |
|---|---|---|
| `FLOWX_CONFIGPARAMS_VARS_PROVIDER` | Provider type for variables | `env` |
| `FLOWX_CONFIGPARAMS_VARS_ALLOWLISTREGEX` | Regular expression matching the environment variables allowed as variables | `FLOWX_CONFIGPARAM_.*` |
| `FLOWX_CONFIGPARAMS_SECRETS_PROVIDER` | Provider type for secrets | `env` |
| `FLOWX_CONFIGPARAMS_SECRETS_ALLOWLISTREGEX` | Regular expression matching the environment variables allowed as secrets | `FLOWX_CONFIGPARAM_.*` |
You can configure multiple secrets and ConfigMaps by incrementing the index number (e.g., FLOWX_CONFIGPARAMS_PROVIDERS_K8SSECRETS_SECRETSLIST_1, FLOWX_CONFIGPARAMS_PROVIDERS_K8SCONFIGMAPS_CONFIGMAPSLIST_1). Values are overridden based on the order in which the maps are defined.

The default provider is env, and it comes with a built-in allowlist using the regex pattern FLOWX_CONFIGPARAM_.*. This means only configuration parameters that match this naming pattern can be read at runtime, whether they are environment variables or secret variables.
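For example, with the default env provider, configuration parameters could be exposed to the engine through its deployment environment. The snippet below is only a sketch: the parameter names and the payments-credentials Secret are hypothetical; what matters is that only names matching the allowlist regex are picked up.

```yaml
# Sketch: exposing configuration parameters via the default `env` provider.
# Variable names and the referenced Secret are hypothetical examples.
env:
  - name: FLOWX_CONFIGPARAM_PAYMENTS_BASE_URL   # matches FLOWX_CONFIGPARAM_.* -> readable at runtime
    value: "https://payments.internal.example"
  - name: FLOWX_CONFIGPARAM_PAYMENTS_API_KEY    # matches the allowlist; sourced from a Kubernetes Secret
    valueFrom:
      secretKeyRef:
        name: payments-credentials              # hypothetical Secret name
        key: api-key
  - name: INTERNAL_DEBUG_FLAG                   # does not match the allowlist, so it is ignored
    value: "true"
```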
By default, FlowX.AI 4.7.1 uses Python 2.7 (Jython) as the Python runtime. To enable Python 3 (via GraalPy, with its 3x performance improvements) and JavaScript (via GraalJS), you must explicitly set the feature toggle.
| Environment variable | Description | Default value | Possible values |
|---|---|---|---|
| `FLOWX_SCRIPTENGINE_USEGRAALVM` | Determines which Python and JavaScript runtime to use | `false` | `true`, `false` |
Python 2.7 support will be deprecated in FlowX.AI 5.0. We recommend migrating your Python scripts to Python 3 to take advantage of improved performance and modern language features.
When using GraalVM (FLOWX_SCRIPTENGINE_USEGRAALVM=true), ensure the engine has proper access to a cache directory within the container. By default, this is configured in the /tmp directory.
For environments with filesystem restrictions or custom configurations, you need to properly configure the GraalVM cache.
There are two methods to configure the GraalVM cache location:
Option 1: Using Java options (Preferred)
Add the following Java option to your deployment configuration:
```
-Dpolyglot.engine.userResourceCache=/tmp
```
This option is set by default in the standard Docker image but might need to be included if you're overriding the Java options.
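For example, in a Kubernetes deployment this could look like the snippet below. The JAVA_OPTS variable name is an assumption; use whichever mechanism your image or Helm chart provides for passing JVM options, and append the flag to any options you already set.

```yaml
# Sketch: passing the polyglot cache location as a JVM option.
# JAVA_OPTS is an assumed variable name; adapt it to how your image reads JVM options.
env:
  - name: JAVA_OPTS
    value: "-Dpolyglot.engine.userResourceCache=/tmp"
```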
Option 2: Using environment variables
Alternatively, set the following environment variable:
```
XDG_CACHE_HOME=/tmp
```
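As an illustration, a deployment that enables GraalVM and uses the environment-variable approach might carry both settings in its env block (a sketch; whether /tmp is writable depends on your container security settings):

```yaml
# Sketch: enabling the GraalVM script engine and pointing its cache at /tmp.
env:
  - name: FLOWX_SCRIPTENGINE_USEGRAALVM
    value: "true"
  - name: XDG_CACHE_HOME
    value: "/tmp"
```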
If you encounter errors related to Python script execution when using GraalVM, verify that the cache directory is properly configured and accessible.
For more details about supported scripting languages and their capabilities, see the Supported scripts documentation.
Kafka handles all communication between the FlowX.AI Engine, external plugins, and integrations. It also notifies running process instances when certain events occur.
| Environment variable | Description | Default value |
|---|---|---|
| `KAFKA_DEFAULTFXCONTEXT` | Default context value for message routing when no context is provided | `""` (empty string) |
When KAFKA_DEFAULTFXCONTEXT is set and an event is received on Kafka without an fxContext header, the system will automatically apply the default context value to the message.
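For example (the context name below is hypothetical), a message produced without an fxContext header would then be routed as if it carried that value:

```yaml
# Sketch: messages arriving without an fxContext header are treated as belonging to "main".
# "main" is a hypothetical context name.
env:
  - name: KAFKA_DEFAULTFXCONTEXT
    value: "main"
```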
A consumer group is a set of consumers that jointly consume messages from one or more Kafka topics. Each consumer group has a unique identifier (group ID) that Kafka uses to manage message distribution.

Thread numbers refer to the number of threads a consumer application uses to process messages. Increasing the thread count can improve parallelism and efficiency, especially with high message volumes.
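As an illustration, the variables in the tables below can be overridden per environment, for example to keep group IDs unique across deployments and to give a high-volume consumer more threads (the values shown are examples, not recommendations):

```yaml
# Sketch: example overrides for one consumer; values are illustrative only.
env:
  - name: KAFKA_CONSUMER_GROUPID_NOTIFYADVANCE
    value: "engine-prod-notify-advance"   # example group ID, unique per environment
  - name: KAFKA_CONSUMER_THREADS_NOTIFYADVANCE
    value: "12"                           # example thread count for a high-volume consumer
```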
Consumer group configuration
| Environment variable | Description | Default value |
|---|---|---|
| `KAFKA_CONSUMER_GROUPID_NOTIFYADVANCE` | Group ID for notifying advance actions | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_NOTIFYPARENT` | Group ID for notifying when a subprocess is blocked | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_ADAPTERS` | Group ID for messages related to adapters | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_SCHEDULERRUNACTION` | Group ID for running scheduled actions | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_SCHEDULERADVANCING` | Group ID for messages indicating continuing advancement | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_MESSAGEEVENTS` | Group ID for message events | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_START` | Group ID for starting processes | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_STARTFOREVENT` | Group ID for starting processes for an event | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_EXPIRE` | Group ID for expiring processes | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_OPERATIONS` | Group ID for processing operations from the Task Management plugin | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_BATCHPROCESSING` | Group ID for processing bulk operations from the Task Management plugin | `notif123-preview` |
Consumer thread configuration
| Environment variable | Description | Default value |
|---|---|---|
| `KAFKA_CONSUMER_THREADS_NOTIFYADVANCE` | Number of threads for notifying advance actions | `6` |
| `KAFKA_CONSUMER_THREADS_NOTIFYPARENT` | Number of threads for notifying when a subprocess is blocked | `6` |
| `KAFKA_CONSUMER_THREADS_ADAPTERS` | Number of threads for processing messages related to adapters | `6` |
| `KAFKA_CONSUMER_THREADS_SCHEDULERADVANCING` | Number of threads for continuing advancement | `6` |
| `KAFKA_CONSUMER_THREADS_SCHEDULERRUNACTION` | Number of threads for running scheduled actions | `6` |
| `KAFKA_CONSUMER_THREADS_MESSAGEEVENTS` | Number of threads for message events | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_START` | Number of threads for starting processes | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_STARTFOREVENT` | Number of threads for starting processes for an event | `2` |
| `KAFKA_CONSUMER_THREADS_PROCESS_EXPIRE` | Number of threads for expiring processes | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_OPERATIONS` | Number of threads for processing operations from Task Management | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_BATCHPROCESSING` | Number of threads for processing bulk operations from Task Management | `6` |
All events that start with a configured pattern will be consumed by the Engine. This enables you to create new integrations and connect them to the engine without changing the configuration.
Task manager operations include assignment, unassignment, hold, unhold, and terminate. These are matched with the ...operations.out topic on the engine side. For more information, see the Task management plugin documentation.
Topics related to the scheduler extension
| Environment variable | Description | Default value |
|---|---|---|
| `KAFKA_TOPIC_PROCESS_EXPIRE_IN` | Topic for requests to expire processes | `ai.flowx.dev.core.trigger.expire.process.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_SET` | Topic used for scheduling process expiration | `ai.flowx.dev.core.trigger.set.schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_STOP` | Topic used for stopping process expiration | `ai.flowx.dev.core.trigger.stop.schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_IN_RUN_ACTION` | Topic for requests to run scheduled actions | `ai.flowx.dev.core.trigger.run.action.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_IN_ADVANCE` | Topic for events related to advancing through a database | |
The Process Engine uses Elasticsearch for process instance indexing and search capabilities. Configure the connection using these environment variables: