# Node actions
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/actions
The activity that a node must handle is defined using an action. Actions come in various types and can be used, for example, to specify the communication details for plugins or integrations.
Node actions allow you to incorporate **business rules** into a **process**, and send various data to be displayed in front-end applications.
The FlowX.AI platform supports several **types of node actions**. You can define and add actions only on the following **node** types: [**send message task**](../node/message-send-received-task-node#message-send-task), [**task**](../node/task-node), and [**user task**](../node/user-task-node).
Actions fall into two categories:
* Business rules
* User interactions
### Business rules
Actions can use action rules such as DMN rules, MVEL expressions, or scripts in JavaScript, Python, or Groovy to attach business rules to a node.
For more information about supported scripting languages, see the [Business rules types](./business-rule-action/business-rule-action) section.
### User interactions
Each button on the user interface corresponds to a manual user action.
### Action edit
Actions can be:
* Manual or automatic
* Optional or mandatory
* One-time or repeatable

If the mandatory actions on a node are not all executed, the flow (token) will not advance.

### Action parameters
Action params store extra values as key/value pairs, like topics for outgoing messages or message formats for the front-end.
A decision on an **exclusive gateway** is defined using a **node rule**. Similar to action rules, these can be set using [DMN](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-dmn) or [MVEL](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel).
## Configuring actions
Actions have a few characteristics that need to be set:
* An **action** can be set as **manual** or **automatic**. Manual actions can be executed only through the REST API, which usually means they are triggered by the application user from the interface. Automatic actions are executed without any external trigger.
* Manual actions can be either **mandatory** or **optional**. Automatic actions are all considered mandatory.
* All actions have an **order**. When a node holds more than one action, the order must be set.
* **Repeatable** - actions that can be triggered more than once are marked accordingly.
* Actions can have a **parent/child hierarchy**.
* **Allow back to this action** - the user can navigate back to this action from a subsequent node.
For more information, check the following section:
## Linking actions together
Actions can be linked together so that certain actions run immediately after others. There are two mechanisms for this: **child actions** and **callback actions**. To run child actions immediately after the parent action completes, set the `autoRunChildren` flag to `true` on the parent action. Callback actions are executed when a specific message is received by the Engine, indicated by the `callbacksForAction` header in that message; they are linked to their parent action via the `parentName` field.
### Child actions
A parent action has a flag `autoRunChildren`, set to `false` by default. When this flag is set to `true`, the child actions (the ones defined as mandatory and automatic) will be run immediately after the execution of the parent action is finalized.

### Callback actions
Child actions can be marked as callbacks to be run after a reply from an external system is received. They will need to be set when defining the interaction with the external system (the [Kafka send action](../node/message-send-received-task-node#configuring-a-message-send-task-node)).
For example, a callback action might handle the result of a user's interaction with a web page, such as uploading a file. When the external system replies that the upload has completed, the callback action is executed, allowing the process to respond appropriately.


#### Example
Callback actions are added in the **Advanced configuration** tab, in the **header** param - `callbacksForAction`.
```json
{
  "processInstanceId": ${processInstanceId},
  "destinationId": "upload_file",
  "callbacksForAction": "upload_file"
}
```
* `callbacksForAction` - the value of this key is a string that specifies a callback action associated with the "upload\_file" destination ID. This is part of an event-driven system (Kafka send action) where this callback will be called once the "upload\_file" action is completed.
## Scheduling actions
Actions can also be scheduled to run at a future time: an action can be configured to run after a set period, measured from the moment the token triggered it.
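Timer expressions use the ISO 8601 duration format (such as `PT30S`). The format can be illustrated with a short sketch (illustrative only; the engine parses these expressions internally, and this sketch handles just the time components):

```javascript
// Minimal sketch: convert a simple ISO 8601 duration such as "PT30S" or
// "PT5M" into milliseconds. Only hours/minutes/seconds are handled here.
function durationToMillis(expr) {
  const match = /^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$/.exec(expr);
  if (!match) throw new Error(`Unsupported duration: ${expr}`);
  const [, h = 0, m = 0, s = 0] = match;
  return ((Number(h) * 60 + Number(m)) * 60 + Number(s)) * 1000;
}

console.log(durationToMillis("PT30S")); // 30000
```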

# Append params to parent process
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/append-params-to-parent-process
**Append Params to Parent Process** is an action type that allows you to send data from a subprocess to its parent process.
**Why is it important?** If you are using subprocesses that produce data that needs to be sent back to the main **process**, you can do that by using an **Append Params to Parent Process** action.
## Configuring an Append Params to Parent Process action
After you create a process designed to be used as a [subprocess](../process/subprocess), you can configure the action. To do this, you need to add an **Append Params to Parent Process** on a [**Task node**](../node/task-node) in the subprocess.
The following properties must be configured:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
### Action edit
* **Name** - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Order** - if multiple actions are defined on the same node, the running order should be set using this option
* **Timer expression** - can be used if a delay is required on that action. The format used is the [ISO 8601 duration format](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30 seconds is set as `PT30S`)
* **Action type** - should be set to **Append Params to Parent Process**
* **Trigger type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times;
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
### Back in steps
* **Allow BACK on this action** - back in process is a functionality that allows you to go back in a business process and redo a series of previous actions in the process. For more details, check [**Moving a token backwards in a process**](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section
### Parameters
* **Copy from current state** - data that you want to be copied back to the parent process
* **Destination in the parent state** - on what key to copy the param values
To recap: if **Copy from current state** holds a simple **JSON** such as `{"age": 17}` that needs to be available in the parent process under the `application.client.age` key, set **Destination in the parent state** to `application.client` - the key under which the values are appended in the parent process.
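The append semantics described above can be sketched as a small deep-merge at the destination key (a simplified illustration under that assumption, not engine code):

```javascript
// Sketch: walk down the destination key path in the parent state, creating
// intermediate objects as needed, then merge the params at that location.
function appendToParent(parentState, destinationKey, params) {
  const target = destinationKey.split(".").reduce((obj, key) => {
    if (typeof obj[key] !== "object" || obj[key] === null) obj[key] = {};
    return obj[key];
  }, parentState);
  Object.assign(target, params);
  return parentState;
}

const parent = { application: { client: { name: "David" } } };
appendToParent(parent, "application.client", { age: 17 });
// parent.application.client is now { name: "David", age: 17 }
```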
**Advanced configuration**
* **Show Target Process** - the ID of the parent process instance where the params should be copied. This is available in the `${parentProcessInstanceId}` variable if you defined it when you [started the subprocess](./start-subprocess-action)
### Data to send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [User Task configuration](../node/user-task-node) section)
**Data to send** option is configurable only when the action **trigger type** is **Manual.**
## Example
We have a subprocess that allows us to enter the age of the client on the **data.client.age** key, and we want to copy the value back to the parent process. The key to which we want to receive this value in the parent process is **application.client.age**.
This is the configuration to apply the above scenario:
**Parameters**
* **Copy from current state** - `{"client": {"age": ${data.client.age}}}` to copy the age of the client (the param value we want to copy)
* **Destination in the parent state** - `application`, to append the data to the **application** key in the parent process
**Advanced configuration**
* **Show Target Process** - `${parentProcessInstanceId}` to copy the data to the parent of this subprocess

# Business rules types
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/business-rule-action/business-rule-action
A business rule is an action type in FlowX that allows you to configure a script on a BPMN node. It's a way to specify business logic or decision-making processes in a business process.
The script can read and write the data available on the process at the moment the script is executed. For this reason, it is very important to understand what data is available on the process when the script is executed.
Business rules can be attached to a node by using actions with [**action rules**](../actions#action-rules) on them. These can be specified using [DMN rules](./dmn-business-rule-action), [MVEL](../../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel) expressions, or scripts written in JavaScript, Python, or Groovy.

For more information about supported scripting languages, see the sections below.
You can also test your rules by using the **Test Rule** function.

## Configuration
To use a Business Rules Action, follow these steps:
1. **Select a BPMN Task Node**: Choose the BPMN task node to which you want to attach the Business Rules Action. This could be a Service Task, User Task, or another task type that supports actions.
2. **Define the Action**: In the task node properties, configure the "Business Rules Action" field and select the desired language (DMN, MVEL, JavaScript, Python, or Groovy).
3. **Write the Business Rule**: In the selected language, write the business rule or decision logic. This rule should take input data, process it, and possibly generate an output or result.
4. **Input and Output Variables**: Ensure that the task node can access the necessary input variables from the BPMN process context and store any output or result variables as needed.
5. **Execution**: When the BPMN process reaches the task node, the attached Business Rules Action is executed, and the defined business rule is evaluated.
6. **Result**: The result of the business rule execution may affect the flow of the BPMN process, update process variables, or trigger other actions based on the logic defined in the rule.
Let's take a look at the following example. We have some data about the gender of a user, and we need a business rule that computes the formal title based on it:
1. This is what the process instance data looks like before it reaches the business rule:
```json
{
  "application": {
    "client": {
      "firstName": "David",
      "surName": "James",
      "gender": "M"
    }
  }
}
```
2. When the token reaches this node the following script (defined for the business rule) is executed. The language used here for scripting is MVEL.
```java
if (input.application.client.gender == 'F') {
  output.put("application", {
    "client": {
      "salutation": "Ms"
    }
  });
} else if (input.application.client.gender == 'M') {
  output.put("application", {
    "client": {
      "salutation": "Mr"
    }
  });
} else {
  output.put("application", {
    "client": {
      "salutation": "Mx"
    }
  });
}
```
3. After the script is executed, the process instance data will look like this:
```json
{
  "application": {
    "client": {
      "firstName": "David",
      "surName": "James",
      "gender": "M",
      "salutation": "Mr"
    }
  }
}
```

## Flattened vs unflattened keys
With version [**2.5.0**](https://old-docs.flowx.ai/release-notes/v2.5.0-april-2022/) we introduced unflattened keys inside business rules; flattened keys are now obsolete. You are notified when a business rule needs to be deleted and recreated so that it contains an unflattened key.

## Business rules examples
Examples are available for version [**2.5.0**](https://old-docs.flowx.ai/release-notes/v2.5.0-april-2022/) and higher.
Below, the MVEL example from above is rewritten in the other supported scripting languages:
```java
if (input.application.client.gender == 'F') {
  output.put("application", {
    "client": {
      "salutation": "Ms"
    }
  });
} else if (input.application.client.gender == 'M') {
  output.put("application", {
    "client": {
      "salutation": "Mr"
    }
  });
} else {
  output.put("application", {
    "client": {
      "salutation": "Mx"
    }
  });
}
```
```python
if input.get("application").get("client").get("gender") == "F":
    output.put("application", {
        "client": {
            "salutation": "Ms"
        }
    })
elif input.get("application").get("client").get("gender") == "M":
    output.put("application", {
        "client": {
            "salutation": "Mr"
        }
    })
else:
    output.put("application", {
        "client": {
            "salutation": "Mx"
        }
    })
```
```js
if (input.application.client.gender === 'F') {
  output.application = {
    client: {
      salutation: 'Ms'
    }
  };
} else if (input.application.client.gender === 'M') {
  output.application = {
    client: {
      salutation: 'Mr'
    }
  };
} else {
  output.application = {
    client: {
      salutation: 'Mx'
    }
  };
}
```
```groovy
def gender = input.application.client.gender
switch (gender) {
    case 'F':
        output.application = [client: [salutation: 'Ms']]
        break
    case 'M':
        output.application = [client: [salutation: 'Mr']]
        break
    default:
        output.application = [client: [salutation: 'Mx']]
}
```
For more detailed information on each type of Business Rule Action, refer to the following sections:
[DMN Business Rule Action](./dmn-business-rule-action)
# Configuring a DMN business rule action
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/business-rule-action/dmn-business-rule-action
Decision Model and Notation (DMN) is a graphical language used to specify business decisions. DMN helps convert complex decision-making code into easily readable diagrams.
## Creating a DMN Business Rule action
To create and link a DMN **business rule** action to a task **node** in FlowX, follow these steps:
1. Launch **FlowX Designer** and navigate to **Process Definitions**.
2. Locate and select your specific process from the list, then click on **Edit Process**.
3. Choose a **task node**, and click the **Edit** button (represented by a key icon). This action will open the node configuration menu.
4. Inside the node configuration menu, head to the **Actions** tab and click the "**+**" button to add a new action.
5. From the dropdown menu, select the action type as **Business Rule**.
6. In the **Language** dropdown menu, pick **DMN**.
For a visual guide, refer to the following recording:

## Using a DMN Business Rule Action
Consider a scenario where a bank needs to perform client information tasks/actions to send salutations, similar to what was previously created using MVEL [here](./business-rule-action#business-rules-examples).
A business person or specialist can use DMN to design this business rule, without having to delve into technical definitions.
Here is an example of an **MVEL** script defined as a business rule action inside a **Service Task** node:
```java
if (input.application.client.gender == 'F') {
  output.put("application", {
    "client": {
      "salutation": "Ms"
    }
  });
} else if (input.application.client.gender == 'M') {
  output.put("application", {
    "client": {
      "salutation": "Mr"
    }
  });
} else {
  output.put("application", {
    "client": {
      "salutation": "Mx"
    }
  });
}
```
The previous example can be easily transformed into a DMN Business Rule action represented by the decision table:

In the example above, we used the FEEL expression language to write the rules that must be met for each output. FEEL defines a syntax for expressing conditions against which input data is evaluated.
**Input** - In the example above, we used the user-selected gender from the first screen as input, bound to the `application.client.gender` key.

**Output** - In the example above, we used the salutation (bound to `application.client.salutation`) computed based on the user's gender selection.

DMN also defines an XML schema that allows DMN models to be used across multiple DMN authoring platforms. The following output is the XML source of the decision table example from the previous section:
```xml
<definitions xmlns="https://www.omg.org/spec/DMN/20191111/MODEL/"
             id="salutation-definitions" name="salutation">
  <decision id="salutation-decision" name="salutation">
    <decisionTable id="salutation-table" hitPolicy="UNIQUE">
      <input id="input-gender">
        <inputExpression id="input-gender-expression" typeRef="string">
          <text>application.client.gender</text>
        </inputExpression>
      </input>
      <output id="output-salutation" name="application.client.salutation" typeRef="string" />
      <rule id="rule-m">
        <inputEntry id="rule-m-input"><text>"M"</text></inputEntry>
        <outputEntry id="rule-m-output"><text>"Mr"</text></outputEntry>
      </rule>
      <rule id="rule-f">
        <inputEntry id="rule-f-input"><text>"F"</text></inputEntry>
        <outputEntry id="rule-f-output"><text>"Ms"</text></outputEntry>
      </rule>
      <rule id="rule-o">
        <inputEntry id="rule-o-input"><text>"O"</text></inputEntry>
        <outputEntry id="rule-o-output"><text>"Mx"</text></outputEntry>
      </rule>
    </decisionTable>
  </decision>
</definitions>
```
# Extracting additional data in business rules
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/business-rule-action/extracting-additional-data
Business rules in FlowX.AI allow you to extract and use key values dynamically. This is essential for handling user security details, configuration parameters, and business logic calculations.
This guide covers the following:
✅ Retrieving security details dynamically\
✅ Extracting specific user attributes\
✅ Fetching configuration parameters
## Extracting security details
Security details (`securityDetails`) store user-related data, including **email, username, roles, and permissions**. These values are stored in `additionalData.securityDetails` and can be accessed dynamically within a process.
### Retrieve security details
Use the following business rule to fetch and store security details:
```JavaScript
// Retrieve security details from additionalData
securityDetails = additionalData.get("securityDetails");
// Store the extracted security details
output.put("securityDetails", securityDetails);
```


Example output:
```json
{
  "securityDetails": {
    "Default": {
      "owner": {
        "username": "user@email.com",
        "identifier": "f08b1452-7c4c-415c-ad6d-8bb2d2d14600",
        "details": {
          "firstName": "John",
          "lastName": "Snow",
          "email": "user@email.com",
          "jwt": "your_jwt",
          "roles": [],
          "groups": [
            "/Users/Flowx_demo",
            "/superAdmin"
          ],
          "attributes": {}
        }
      }
    }
  }
}
```
### Extract specific owner details
To retrieve specific attributes, such as email, username, first name, and last name, use one of the following scripts (JS or Python):
```javascript
// Retrieve security details
securityDetails = additionalData.get("securityDetails");
// Extract owner details from the Default swimlane
email = securityDetails.Default.owner.details.email;
username = securityDetails.Default.owner.username;
firstName = securityDetails.Default.owner.details.firstName;
lastName = securityDetails.Default.owner.details.lastName;
// Store extracted details in the output
output.put("email", email);
output.put("username", username);
output.put("firstName", firstName);
output.put("lastName", lastName);
```
```python
# Retrieve security details
security_details = additionalData.get("securityDetails")
# Extract owner details from the Default swimlane
email = security_details["Default"]["owner"]["details"]["email"]
username = security_details["Default"]["owner"]["username"]
first_name = security_details["Default"]["owner"]["details"]["firstName"]
last_name = security_details["Default"]["owner"]["details"]["lastName"]
# Store extracted details in the output
output["email"] = email
output["username"] = username
output["firstName"] = first_name
output["lastName"] = last_name
```
Extracted values:
* **Owner Email** (`securityDetails.Default.owner.details.email`)
* **Username** (`securityDetails.Default.owner.username`)
* **First Name** (`securityDetails.Default.owner.details.firstName`)
* **Last Name** (`securityDetails.Default.owner.details.lastName`)

### Dynamic extraction by swimlane
If your application uses multiple swimlanes, retrieve the owner details dynamically:
```JavaScript
// Get security details
securityDetails = additionalData.get("securityDetails");

// Extract owner details based on the swimlane
// "Default" is the swimlane name in this example; replace it with yours
ownerDetails = securityDetails.Default.owner.details;

// Store extracted values in the process instance
output.put("email", ownerDetails.email);
output.put("username", securityDetails.Default.owner.username);
output.put("firstName", ownerDetails.firstName);
output.put("lastName", ownerDetails.lastName);
```
Example output:
```json
{
  "email": "user@email.com",
  "username": "user@email.com",
  "firstName": "John",
  "lastName": "Snow"
}
```
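Outside the engine, the same lookup can be sketched with the swimlane name held in a variable; `securityDetails` is mocked here, and `swimlaneName` is a hypothetical variable for illustration:

```javascript
// Mocked security details; in a real business rule this comes from
// additionalData.get("securityDetails").
const securityDetails = {
  Default: {
    owner: {
      username: "user@email.com",
      details: { firstName: "John", lastName: "Snow", email: "user@email.com" }
    }
  }
};

const swimlaneName = "Default"; // replace with your swimlane name
const owner = securityDetails[swimlaneName].owner;

// Collect the attributes to store in the process instance
const extracted = {
  email: owner.details.email,
  username: owner.username,
  firstName: owner.details.firstName,
  lastName: owner.details.lastName
};
```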
***
## Extracting values from configuration parameters
To make business rules flexible, store configuration values in project configuration parameters instead of hardcoding them.

### Retrieve configuration parameters
Use a business rule to fetch and store configuration parameter values dynamically:
```JavaScript
// Retrieve a configuration parameter (e.g., commission percentage)
commissionPercentage = additionalData.applicationConfiguration.get("commissionPercentage");
// Store the retrieved value in output in the process instance
output.put("commissionPercentage", commissionPercentage);
```
### Example use case: commission calculation
This example shows how to calculate a commission value dynamically using a configuration parameter.

#### Configuration parameters
| **Parameter Name** | **Description** | **Example Value** |
| ---------------------- | ----------------------------------------------- | ----------------- |
| `commissionPercentage` | The percentage used to calculate the commission | `0.05` (5%) |
Configuration parameters can be modified in:
➡ FlowX.AI Designer → Your Project → Configuration Parameters
#### Process flow
In a User task we have an input UI element where the user provides an amount (`userInputAmount`).

In the next node, a Service Task fetches the `commissionPercentage` from the configuration parameters.

Business rule used:
```JavaScript
// Retrieve user input
amount = input.get("userInputAmount");
// Retrieve a value from configuration
commissionPercentage = additionalData.applicationConfiguration.get("commissionPercentage");
// Apply the configuration value
commissionValue = amount * commissionPercentage;
// Store the calculated result in the process instance
output.put("commissionValue", commissionValue);
```
Formula used to calculate the commission:
$$
\text{commissionValue} = \text{userInputAmount} \times \text{commissionPercentage}
$$
The computed `commissionValue` is stored for further processing.
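As a quick numeric check of the formula, with assumed sample values:

```javascript
// Sample values: 1000 entered by the user, 5% from configuration parameters
const userInputAmount = 1000;
const commissionPercentage = 0.05;

// commissionValue = userInputAmount * commissionPercentage
const commissionValue = userInputAmount * commissionPercentage; // 5% of 1000
console.log(commissionValue);
```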
Final result:

**Why use configuration parameters?**
✅ Keep business rules flexible by avoiding hardcoded values.\
✅ Adapt calculations dynamically based on environment settings.\
✅ Simplify updates by modifying values in the project configuration rather than editing business rules.
# Kafka send action
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/kafka-send-action
The FlowX Designer offers various options to configure the Kafka Send Action through the Actions tab at the node level.
* [Action Edit](#action-edit)
* [Parameters](#parameters)

### Action Edit
* **Name** - Used internally to distinguish between different [actions](../actions/actions) within the process. Establish a clear naming convention for easy identification.
* **Order** - Sets the running order for multiple actions on the same node.
* **Timer Expression** - Enables a delay, using [ISO 8601 duration format](../node/timer-events/timer-expressions#iso-8601) (e.g., `PT30S` for a 30-second delay).
* **Action Type** - Designate as **Kafka Send Action** for sending messages to external systems.
* **Trigger Type** - Always set to **Automatic**: Kafka Send Actions trigger automatically when the process flow reaches this step.
* **Required Type** (Mandatory/Optional) - Automatic actions are always set as **mandatory**.
* **Repeatable** - Allows triggering the action multiple times if required.
* **Autorun Children** - When activated, child actions (mandatory and automatic) execute immediately after the parent action concludes.
### Parameters
You can add parameters via the **Custom** option or import predefined parameters from an integration.
For detailed information on **Integrations management**, refer to [**this link**](../../platform-deep-dive/core-extensions/integration-management/).
* **Topics** - Specifies the Kafka topics listened to by the external system for requests.
* **Message** - Contains the message payload to be dispatched.
* **Advanced Configuration (Headers)** - Represents a JSON value sent within the Kafka message headers.
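As an illustration, the parameters of a Kafka Send Action that triggers a document plugin might look roughly like this (the `documentType` key is a placeholder invented for this sketch, not a guaranteed platform field):

```json
{
  "topics": "ai.flowx.plugin.document.trigger",
  "message": {
    "processInstanceId": ${processInstanceId},
    "documentType": "ID_CARD"
  },
  "headers": {
    "callbacksForAction": "upload_file"
  }
}
```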

## Dynamic Kafka topics
You can use dynamic topic names for Kafka Send and Kafka Receive actions in FlowX processes by leveraging Configuration Parameters. This enables flexibility when working with Kafka topics across different environments or use cases.
Follow these steps to create dynamic Kafka topics:

1. Navigate to **Projects → Your Project → Configuration Parameters** in FlowX Designer and add or update configuration keys for Kafka topics. For example:
   * Key: `kafka_send_2`, Value: `ai.flowx.plugin.document.trigger`
   * Key: `kafka_receive`, Value: `ai.flowx.engine.receive.plugin.document`

   You can then reference these parameters dynamically in your Kafka actions.
2. Open the process where you want to configure the Kafka Send Action and go to **Node Config → Actions**.
3. Select **Kafka Send Action** as the action type.
4. Under **Parameters**, locate the **Topics** field and use a dynamic reference, optionally concatenating a parameter with another value: `${kafka_send_2}.${topic2}`. Here, `${kafka_send_2}` dynamically pulls the value from Configuration Parameters, while `${topic2}` can be another dynamic input set within the process.
5. Define the JSON structure for the message payload, ensuring that all required fields align with the expected schema of the consuming system.
6. Start the process and check whether messages are correctly sent to the Kafka topic. If needed, modify the configuration parameters to change the topic dynamically.
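The `${...}` concatenation can be sketched as a simple substitution (illustrative only; the engine resolves these references itself, and the `topic2` value below is a made-up example):

```javascript
// Replace each ${name} placeholder with the matching parameter value.
function resolveTopic(template, params) {
  return template.replace(/\$\{(\w+)\}/g, (_, key) => params[key] ?? "");
}

const params = {
  kafka_send_2: "ai.flowx.plugin.document.trigger",
  topic2: "v1" // hypothetical dynamic input set within the process
};

console.log(resolveTopic("${kafka_send_2}.${topic2}", params));
// ai.flowx.plugin.document.trigger.v1
```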
## Kafka send action scenarios
The Kafka Send action serves as a versatile tool that facilitates seamless communication across various systems and plugins, enabling efficient data transfer, robust document management, notifications, and process initiation.
This action finds application in numerous scenarios while configuring processes:
* **Communicating with External Services**
* **Interacting with Connectors** - For example, integrating a connector in the FlowX.AI Designer [here](../../platform-deep-dive/integrations/building-a-connector#integrating-a-connector-in-flowxai-designer).
* **Engaging with Plugins:**
* **Document Plugin:**
* Generating, uploading, converting, and splitting documents - Explore examples [here](../../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview).
* Updating/deleting documents - Find an example [here](../../platform-deep-dive/plugins/custom-plugins/documents-plugin/deleting-a-file)
* Optical Character Recognition (OCR) integration - View an example [here](../../platform-deep-dive/plugins/custom-plugins/ocr-plugin#scenario-for-flowxai-generated-documents).
* **Notification Plugin:**
* Sending notifications - Example available [here](../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-a-notification) and emails with attachments [here](../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-an-email-with-attachments).
* One-Time Password (OTP) validation - Refer to this [example](../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-a-notification).
* Forwarding notifications to external systems - Explore this [example](../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/forwarding-notifications-to-an-external-system).
* **OCR Plugin**
* **Customer Management Plugin**
* **Task Management Plugin:**
* Bulk operations update - Find an example [here](../../platform-deep-dive/core-extensions/task-management/task-management-overview#bulk-updates).
* **Requesting Process Data for Forwarding or Processing** - For example, Data Search [here](../../platform-deep-dive/core-extensions/search-data-service).
* **Initiating Processes** - Starting a process via Kafka. Find examples [here](../../flowx-designer/managing-a-process-flow/starting-a-process).
# Send data to user interface
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/send-data-to-user-interface
The **Send data to user interface** action is based on Server-Sent Events (SSE), a web technology that enables a server to push real-time updates to clients over a single, long-lived HTTP connection. SSE provides a unidirectional communication channel from the server to the client, so the server can send updates without the client continuously making requests.
**Why is it useful?** It provides real-time updates and communication between the **process** and the frontend application.
## Configuring a Send data to user interface action
Multiple options are available for this type of action and can be configured via the **FlowX Designer**. To configure a Send data to user interface, use the **Actions** tab at the [task node level](../../flowx-designer/managing-a-process-flow/adding-a-new-node), which has the following configuration options:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
### Action Edit
* **Name** - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Order** - if multiple actions are defined on the same node, the running order should be set using this option
* **Timer expression** - it can be used if a delay is required on that action. The format used for this is [**ISO 8601 duration format**](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30 seconds will be set up as `PT30S`)
* **Action type** - should be set to Send data to user interface
* **Trigger type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
### Back in steps
* **Allow BACK on this action** - back in process is a functionality that allows you to go back in a business process and redo a series of previous actions in the process. For more details, check [Moving a token backwards in a process](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section.

### Parameters
The following fields are required for a minimum configuration of this type of action:
* **Message Type** - if you only want to send data, you can set this to **Default** (it defaults to the **data** message type)
If you need to start a new process using a **Send data to user interface**, you can do that by setting the **Message Type** to **Action** and you will need to define a **Message** with the following format:
```json
{
  "processName": "demoProcess",
  "type": "START_PROCESS_INHERIT",
  "clientDataKeys": ["webAppKeys"],
  "params": {
    "startCondition": "${startCondition}",
    "paramsToCopy": []
  }
}
```
* `paramsToCopy` - choose which of the keys from the parent process parameters to be copied to the subprocess
* `withoutParams` - choose which of the keys from the parent process parameters are to be ignored when copying parameter values from the parent process to the subprocess
* **Message** - the data to be sent, defined as a JSON object; you can use constant values as well as values from the process instance data.
* **Target Process** - specifies which running process instance this message should be sent to: **Active process** or **Parent process**
If you are defining this action on a [**Call activity node**](../node/call-subprocess-tasks/call-activity-node), you can send the message to the parent process using **Target Process: Parent process**.
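If you prefer to copy everything from the parent except a few keys, a sketch of the same **Action** message using `withoutParams` instead of `paramsToCopy` might look like this (the `internalAuditData` key name is purely illustrative):

```json
{
  "processName": "demoProcess",
  "type": "START_PROCESS_INHERIT",
  "clientDataKeys": ["webAppKeys"],
  "params": {
    "startCondition": "${startCondition}",
    "withoutParams": ["internalAuditData"]
  }
}
```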
### Data to send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [User task configuration](../node/user-task-node) section)
**Data to send** option is configurable only when the action **trigger type** is **Manual**.

### Send update data example
To send the latest value found at the `application.client.firstName` key of the [process instance](../../projects/runtime/active-process/process-instance) data to the frontend app, you can do the following:
1. Add a **Send data to user interface**.
2. Set the **Message Type** to **Default** (it defaults to the **data** message type).
3. Add a **Message** with the data you want to send:
* `{ "name": "${application.client.firstName}" }`
4. Choose the **Target Process**.
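Putting the steps together, the **Message** can mix values taken from the process instance data with constant values; a minimal sketch (the `channel` constant is purely illustrative):

```json
{
  "name": "${application.client.firstName}",
  "channel": "web"
}
```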

# Start integration workflow action
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/start-integration-workflow
The Start Integration Workflow action initiates a configured workflow to enable data processing, transformation, or other tasks across connected systems.
The Start integration workflow action allows for data transfer by sending configured inputs to initiate workflows and receiving outputs at the designated result key once the workflow completes. Here’s an overview of its key functionalities:
### Triggering
When a Start integration workflow action is triggered:
* The input data mapped in Input is sent as start variables to the workflow.
* The workflow runs with these inputs.
* Workflow output data is captured at the specified result key upon completion.
### Integration mapping
The Select Workflows dropdown displays:
* All workflows within the current application version.
* Any workflows referenced in the application (e.g., via the Library).
### Workflow output
To receive output data from a workflow:
* Add a Receive Message Task node to the BPMN process.
* This node ensures that output data is properly captured and processed based on the designated workflow configuration.
# Start new project instance
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/start-new-project-instance
The Start New Project Instance action allows users to initiate a completely new process (referred to as a "main process") from within an ongoing process. This functionality is designed to support scenarios where isolated use cases are managed across different applications or environments.

### Key considerations
* The "Start New Project Instance" action **cannot be configured as a subaction**. It must always be a standalone action.
* This action is exclusively triggered **manually**; automated execution is not supported.
You can launch a new project instance, which will always use the active version of the target application's build at the time of execution.
***
## Common use cases
This action is particularly useful when multiple use cases are managed separately and require isolation between applications. For example:
1. **Customer Profile Management:**
* Onboarding a power of attorney.
* Submitting a mortgage request.
* Enrolling in a credit card program.
Each of these processes may be managed by different teams and applications, emphasizing the need for separation and flexibility.
2. **Cross-Application Workflow Initiation:**
A process in one application might require triggering a process in another application with specific parameters. For instance:
* Initiating an "Onboarding" process (version 2) for a customer directly from the main application.
## Configuring the action
The **Start New Project Instance** action is configured via the **FlowX Designer**. To set it up, navigate to the **Actions** tab at the [task node level](../../flowx-designer/managing-a-process-flow/adding-a-new-node).
### Configuration options
#### Action settings
* **Name:** Used internally to distinguish actions on nodes. It’s recommended to define a naming standard for easier identification.
* **Order:** If multiple actions exist on the same node, specify the order in which they run.
* **Timer Expression:** Adds a delay to the action if needed. Use the [**ISO 8601 duration format**](https://www.w3.org/TR/NOTE-datetime) (e.g., `PT30S` for a 30-second delay).
* **Action Type:** Must be set to **Start New Project Instance**.
* **Trigger Type:** Always set to **Manual**.
* **Required Type:** Because this action is always triggered manually, it can be set as either **Mandatory** or **Optional**.
* **Repeatable:** Check this box if the action can be triggered multiple times.
* **Autorun Children:** Automatically runs child actions (mandatory and automatic) immediately after the parent action completes.
* **Allow Back on This Action:** Enables users to move back in the process flow and redo previous actions. For details, see [Moving a Token Backwards in a Process](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process).
The "Start New Project Instance" action is incompatible with subprocess configurations because projects, unlike libraries, are self-contained collections of resources designed to satisfy specific use cases. Libraries provide reusable, lightweight resources and routines, such as error handling or enumerations, and do not manage complex business logic.
#### Parameters
The following parameters must be configured for the action:
* **Project:** Specifies the target project to be initiated.
* **Process:** Selects the process from the list of processes available in all builds.
* **Start with Parameters:** Defines the parameters passed to the new process. These parameters can include:
* **Customer Name:** Useful for initiating flows tailored to a specific customer.
* **Copied Data:** Information from the current process that is required to start the target application’s process.
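As a sketch, a **Start with Parameters** payload could combine a customer identifier with data copied from the current process (all key names below are hypothetical, not a prescribed schema):

```json
{
  "customerName": "${application.client.firstName}",
  "requestType": "mortgage"
}
```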

***
## Key benefits
* **Isolated Process Management:** Supports launching new, isolated processes in separate applications, ensuring flexibility and independence across teams and environments.
* **Active Version Execution:** Ensures that the project instance uses the active build version at the time of initiation, providing consistency in functionality.
* **Parameter Passing:** Enables seamless data transfer between the initiating and target processes, improving operational efficiency.
# Start subprocess action
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/start-subprocess-action
A Start subprocess action is an action that allows you to start a subprocess from another (parent) process.
The "Start subprocess" is available only on [User Task](../node/user-task-node) nodes.
For any other scenarios, you should use the [Call Activity node](../node/call-subprocess-tasks/call-activity-node) instead.
Using **subprocesses** is a good way to split the complexity of your business flow into multiple, simple and reusable processes.
## Configuring a Start subprocess action
To use a process as a [subprocess](../process/subprocess) you must first create it. Once the subprocess is created, you can start it from another (parent) process. To do this, add a **Start Subprocess** action to a [**User task**](../node/user-task-node) node in the parent process, or use a [Call activity node](../node/call-subprocess-tasks/call-activity-node).
Here are the steps to start a subprocess from a parent process:
1. First, create a [process](../process/process-definition) designed to be used as a [subprocess](../process/subprocess).
2. In the parent process, create a **user task** node where you want to start the subprocess created at step 1.
3. Add a **Start subprocess** action to the task node.
4. Configure the **Start Subprocess** action and from the dropdown list choose the subprocess created at step 1.
By following these steps, you can start a subprocess from a parent process and control its execution based on your specific use case.

The following properties must be configured for a **Start subprocess** action:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
### Action edit
* **Name** - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Order** - if multiple actions are defined on the same node, the running order should be set using this option
* **Timer expression** - it can be used if a delay is required on that action. The format used for this is [ISO 8601 duration format ](https://www.w3.org/TR/NOTE-datetime)(for example, a delay of 30 seconds will be set up as `PT30S`)
* **Action type** - should be set to **Start Subprocess**
* **Trigger type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
### Back in steps
* **Allow BACK on this action** - back in process is a functionality that allows you to go back in a business process and redo a series of previous actions in the process. For more details, check [**Moving a token backwards in a process**](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section.
### Parameters
* **Subprocess name** - the name of the process that you want to start as a subprocess
* **Branch** - a dropdown menu displaying available branches on the subprocess (both opened and merged)
* **Version** - the type of version that should be used within the subprocess
- **Latest Work in Progress**:
* Displayed if the selected branch is not merged into another branch.
* Used when there is a work-in-progress (WIP) version on the selected branch, or when no WIP version exists because the work was submitted or the branch was merged.
* In either case, the latest available configuration on the selected branch is used.
- **Latest Submitted Work**:
* This configuration is used when there is submitted work on the selected branch, and the current branch has been submitted on another branch (latest submitted work on the selected branch is not the merged version).
- **Custom Version**:
* Displayed if the selected branch contains submitted versions.
* **Copy from current state** - if a value is set here, it overwrites the default behavior (copying all of the parent process data to the subprocess) with copying only the data that is specified (based on keys)
* **Exclude from current state** - what fields do you want to exclude when copying the data from the parent process to the subprocess (by default all data fields are copied)

**Advanced configuration**
* **Show Target Process** - ID of the current process, to allow the subprocess to communicate with the parent process (which is the process where this action is configured)
### Data to send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [**User task configuration**](../node/user-task-node) section)
**Data to send** option is configurable only when the action **trigger type** is **Manual**.
## Example
Let's create a main **process** and add to it a user task node that represents a menu page. To this node we will add multiple **Start subprocess** actions, each representing a menu item. When a menu item is selected, the corresponding subprocess runs.

To start a subprocess, we can, for example, create the following minimum configuration in a user task node (now we configure the process where we want to start a subprocess):
* **Action** - `menu_item_1` - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Trigger type** - Manual; Optional
* **Repeatable** - yes
* **Subprocess** - `docs_menu_item_1` - the name of the process that you want to start as a subprocess
* **Exclude from current state** - `test.price` - copy all the data from the parent, except the price data
* **Copy from current state** - leave this field empty to copy all the data (except the keys specified in the **Exclude from current state** field); otherwise, add the keys you wish to copy the data from

**Advanced configuration**
* **Target process (parentProcessInstanceId)** - `${processInstanceId}` - current process ID
#### Result

# Upload file action
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/actions/upload-file-action
An Upload File action is an action type that allows you to upload a file to a service available on Kafka.
**Why is it useful?** The action receives a file from the frontend and sends it to Kafka, attaching some metadata along the way.
## Configuring an Upload file action
Multiple options are available for this type of action and can be configured via the **FlowX Designer**. To configure an Upload File action, use the **Actions** tab at the [task node level](../../flowx-designer/managing-a-process-flow/adding-an-action-to-a-node), which has the following configuration options:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
### Action edit
* **Name** - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Order** - if multiple actions are defined on the same node, the running order should be set using this option
* **Timer expression** - it can be used if a delay is required on that action. The format used for this is [ISO 8601 duration format ](https://www.w3.org/TR/NOTE-datetime)(for example, a delay of 30 seconds will be set up as `PT30S`)
* **Action type** - should be set to **Upload File**
* **Trigger type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized

### Back in steps
* **Allow BACK on this action** - back in process is a functionality that allows you to go back in a business process and redo a series of previous actions in the process. For more details, check [Moving a token backwards in a process](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section.
### Parameters
* **Address** - the Kafka topic where the file will be posted
* **Document Type** - other metadata that can be set (useful for the [document plugin](../../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview))
* **Folder** - allows you to configure a value by which the file will be identified in the future
* **Advanced configuration (Show headers)** - this represents a JSON value that will be sent on the headers of the Kafka message
### Data to send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [User Task configuration](../node/user-task-node) section)
**Data to send** option is configurable only when the action **trigger type** is **Manual**.
## Example
An example of **Upload File Action** is to send a file to the [document plugin](../../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview). In this case, the configuration will look like this:
**Parameters configuration**
* **Address (topicName)** - set to the topic of the document plugin service: `ai.flowx.in.document.persist.v1`
* **Document Type** - metadata used by the document plugin; here we set it to `BULK`
* **Folder** - the value by which we want to identify this file in the future (here we use the **client.id** value available in the process instance data: `${application.client.id}`)
**Advanced configuration**
* **Headers** - headers send extra metadata to this topic: `{"processInstanceId": ${processInstanceId}, "destinationId": "currentNodeName"}`

# Call activity node
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/call-subprocess-tasks/call-activity-node
Call activity is a node that provides advanced options for starting subprocesses.
There are cases when extra functionality is needed on certain nodes to enhance process management and execution.

The Call Activity node contains a default action for starting a subprocess, which can be started in two modes:
* **Async mode**: The parent **process** will continue without waiting for the subprocess to finish.
Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example, a task performed by an outside service.
* **Sync mode**: The parent process must wait for the subprocess to finish before advancing.
The start mode can be chosen when configuring the call activity.
If the parent process needs to wait for the subprocess to finish and retrieve results, the parent process key that will hold the results must be defined using the *output key* node configuration value.

## Starting multiple subprocesses
This node type can also be used to start a set of subprocesses that run at the same time.
This is useful when there is an array of values in the parent process parameters, and a subprocess needs to be started for each element in that array.

#### Business rule example
Below is an example of an MVEL business rule used to generate a list of shipping codes:
```java
import java.util.*;

def mapValues(shippingCode) {
  return {
    "shippingCode": shippingCode
  }
}

shippingCodeList = [];
shippingCodeList.add(mapValues("12456"));
shippingCodeList.add(mapValues("146e3"));
shippingCodeList.add(mapValues("24356"));
shippingCodeList.add(mapValues("54356"));
output.put("shippingCodeList", shippingCodeList);
```
In this example, the `shippingCodeList` array contains multiple shipping code maps. Each of these maps could represent parameters for individual subprocesses. The ability to generate and handle such arrays allows the system to dynamically start and manage multiple subprocesses based on the elements in the array, enabling parallel processing of tasks or operations.
To achieve this, select the *parallel multi-instance* option. The *collection key* name from the parent process also needs to be specified.

When designing such a subprocess that will be started in a loop, remember that the input value for the subprocess (one of the values from the array in the parent process) will be stored in the subprocess parameter values under the key named *item*. This key should be used inside the subprocess. If this subprocess produces any results, they should be stored under a key named *result* to be sent back to the parent process.

#### Subprocess business rule example
Here's an MVEL business rule for a subprocess that processes shipping codes:
```java
import java.util.*;

map = new HashMap();
if (input.item.shippingCode.startsWith("1")) {
  map.package = "Fragile";
} else {
  map.package = "Non-fragile";
}
map.shippingCode = input.item.shippingCode;
output.put("result", map);
```
## Result (one of the subprocess instances)
The result shows the output of a process that has handled multiple shipping codes. The structure is:

```json
{
  "package": "Non-fragile",
  "shippingCode": "54356"
}
```
This contains the result of processing the specific shipping code, indicating additional attributes related to the shipping code (e.g., package type) determined during the subprocess execution.
# Start embedded subprocess
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/call-subprocess-tasks/start-embedded-subprocess
The Start Embedded Subprocess node initiates subprocesses within a parent process, allowing for encapsulated functionality and enhanced process management.
## Overview
The Start Embedded Subprocess node enables the initiation of subprocesses within a parent process, offering a range of features and options for enhanced functionality.

## Usage Considerations
### Data Management
Embedded subprocesses offer advantages such as:
* Segregated sections from a subprocess can be utilized without rendering them over the parent process.
* Data is stored within the parent process instance, eliminating the need for data transfer.
* Embedded subprocesses are visible in the navigation view.
### Runtime Considerations
**Important** runtime considerations for embedded subprocesses include:
* The **child process** must have only **one swimlane**.
* Runtime swimlane permissions are inherited from the parent.
* Certain boundary events are supported on the Start Embedded Subprocess node, except for Timer events (currently not implemented).
Embedded subprocesses cannot have multiple swimlanes in the current implementation.
## Example
Let's explore this scenario: Imagine you're creating a process that involves a series of steps, each akin to a sequential movement of a stepper. Now, among these steps, rather than configuring one step from scratch, you can seamlessly integrate a pre-existing process, treating it as a self-contained unit within the overarching process.
### Step 1: Design the Embedded Subprocess
Log in to the FlowX Designer where you create and manage process flows.
Start by creating a new process or selecting an existing process where you want to embed the subprocess.
Design your navigation areas to match your needs.

Make sure you have allocated all your user tasks to the navigation areas accordingly.
Within the selected process, design the subprocess by adding necessary tasks, events and so on. Ensure that the subprocess is contained within **a single swimlane**.

To initiate a process with an embedded subprocess, designate the root navigation area of the subprocess as an inheritance placeholder by applying the **Parent Process Area** label in the **Navigation Areas**.
Ensure that the navigation hierarchy within the Parent Process Area can be displayed beneath the parent navigation area within the main process interface.

### Step 2: Configure Start Embedded Subprocess Node
Within the parent process, add a **Start Embedded Subprocess Node** from the node palette to initiate the embedded subprocess.
Configure the node to specify the embedded subprocess that it will initiate. This typically involves selecting the subprocess from the available subprocesses in your process repository.

### Step 3: Customize Subprocess Behavior
[**Alternative flows**](../../process/navigation-areas#alternative-flows) configured in the **main process** will also be applied to **embedded subprocesses** if they share the same name.
Within the subprocess, handle data as needed. Remember that data is stored within the parent process instance when using embedded subprocesses.
Implement boundary events within the subprocess if specific actions need to be triggered based on certain conditions.

### Step 4: Test Integration
Test the integration of the embedded subprocess within the parent process. Ensure that the subprocess initiates correctly and interacts with the parent process as expected.
Verify that data flows correctly between the parent process and the embedded subprocess. Check if any results produced by the subprocess are correctly captured by the parent process.

For further details on other ways of configuring and utilizing subprocesses, refer to the following resources:
# Error events
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/error-events
Error Events expand the capabilities of process modeling and error handling within BPMN processing. These Error Event nodes enhance the BPMN standard and offer improved control over error management.

## Intermediate event - error event (interrupting)

## Compatibility matrix for Error Events
**Error Events** are used to handle error scenarios within BPMN processes. The following table outlines their compatibility with different node types:
| **Node Type** | **Error Boundary Event** |
| ------------------------ | ------------------------ |
| **User Task** | Yes |
| **Service Task** | Yes |
| **Send Message Task** | Yes |
| **Receive Message Task** | Yes |
| **Subprocess** | Yes |
| **Call Activity** | Yes |
## Key characteristics
1. **Boundary of an Activity node or Subprocess:**
* Error Events can only be used on the boundary of an activity, including subprocess nodes. They cannot be placed in the normal flow of the process.
2. **Always Interrupt the Activity:**
* It's important to note that Error Events always interrupt the activity to which they are attached. There is no non-interrupting version of Error Events.
3. **Handling Thrown Errors:**
* A thrown error, represented by an Error Event, can be caught by an Error Catch Event. This is achieved specifically using an Error Boundary Event, which is placed on the boundary of the corresponding activity.
4. **Using error events on Subprocesses nodes**:
* An error catch event can be linked to a subprocess when the error source resides within the subprocess itself, denoted by an error end event that signifies an abnormal termination of the subprocess.
## Configuring an Error Intermediate boundary event

* **Name**: Assign a name to the event for easy identification.
* **Condition**: Specify the condition that triggers the error event. Various script languages can be used for defining conditions, including:
* MVEL
* JavaScript
* Python
To draw a sequence from an error event node and link it to other nodes, right-click on the node, then select the "Add Sequence" option.
When crafting a condition, use a predefined key as illustrated below:

For instance, in the example provided, we've taken a process key defined on a switch UI element and constructed a user-defined condition like this: `input.application.switch == true`.
* **Priority**: Determine the priority level of this error event in relation to other error events added on the same node.
When multiple error events are configured for a node, and multiple conditions simultaneously evaluate to true, only one condition can interrupt the ongoing activity and advance the token to the next node. The determination of which condition takes precedence is based on the "priority" field.
If the "priority" field is set to "null," the system will randomly select one of the active conditions to trigger the interruption.
* `input.application.switch`: This represents a key to bind a value to the Switch UI element within the "application" part of the "input". It is used in this example to capture input or configuration from a user.
* `==`: This is an equality operator, and it checks if the value on the left is equal to the value on the right.
* `true` is a boolean value, which typically represents a state of "true" or "on."
So, when you put it all together, the statement checks whether the value of `input.application.switch` is equal to the boolean `true`. If it is, the condition evaluates to true and the error event is triggered; if the value is anything else, the condition is false and the flow continues normally.
### Use case: handling errors during User Task execution
**Description:** This use case pertains to a page dedicated to collecting client contact data. Specifically, it deals with scenarios where users are given the opportunity to verify their email addresses and phone numbers.
In this scenario we will create a process to throw an error if an email address is not valid.

#### Example configuration
1. **Error Boundary Event:** We will set an error boundary event associated with a user task.
2. **Error Node:** The node will be responsible to redirect the user to other flows after the user's email address is validated based on the conditions defined.
```java
input.application.client.contactData.email.emailAddress != "john.doe@email.com"
```
The expression checks whether the email address stored under the `application.client.contactData.email.emailAddress` key is not equal to `"john.doe@email.com"`. If they are not the same, it evaluates to true, indicating a mismatch.

3. **Flow Control:** Depending on the outcome of the validation process, users will be directed to different flows, which may involve displaying error modals as appropriate.

# Exclusive gateway
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/exclusive-gateway-node
In the world of process flows, decisions play a crucial role, and that's where the Exclusive Gateway comes into play. This powerful tool enables you to create conditional pathways based on data provided.
## Understanding Exclusive (XOR) Gateway
An exclusive gateway (also called XOR gateway) represents a decision point in your process where exactly one outgoing path is selected based on conditions. This is one of the most common routing mechanisms used in business process modeling.
**Core principles of Exclusive Gateways**
When designing decision logic with exclusive gateways, consider these guiding principles:
An exclusive gateway evaluates multiple conditions, but only **one outgoing sequence flow** will be taken - the first condition that evaluates to *true*.
Each outgoing path from an XOR gateway depends on a condition that evaluates to *true* or *false*.
Overlapping conditions - where multiple paths could match - can lead to **unpredictable routing**. Gateways will select the first matching path, which might not be the correct one.
This is the essential question behind every XOR gateway: **"Given the input, which path should the process follow?"**
Before deploying a process with an XOR gateway, test it using **realistic data sets** and **edge cases**. Ensure each possible path behaves as expected - and none are silently skipped or broken.
Every XOR gateway introduces a **decision point**. Each outgoing sequence flow represents a **distinct outcome** - make sure those outcomes are meaningful and necessary.
Include a **default path** in the gateway to catch any scenario where none of the defined conditions match. This is especially important when the decision logic depends on external or dynamic data.
Your conditions should be **simple, explicit, and named clearly**. Avoid nesting complex expressions directly in the gateway when possible - move logic to a service task if needed.
Business rules change. Processes evolve. If your gateway logic doesn't keep up, the process will drift out of sync with how the business actually works.
## Configuring an Exclusive Gateway node

To configure this node effectively, it's essential to set up both the **input** and **output** sequences within the gateway process.


### General configuration
* **Node name**: Give your node a meaningful name.
* **Can go back**: Enabling this option allows users to revisit this step after completing it.
When a step has "Can Go Back" set to false, all preceding steps become inaccessible.
* [**Stage** ](../../platform-deep-dive/core-extensions/task-management/using-stages): Assign a stage to the node.

### Gateway decisions
* **Language**: When configuring conditions, you can use JavaScript, Python, MVEL, or Groovy expressions that evaluate to either **true** or **false**.
* **Conditions**: In the **Gateway Decisions** tab, you can see that the conditions (**if, else if, else**) are already built-in and you can **select** the destination node when the condition is **true**.
The order of expressions matters; the first **true** evaluation stops execution, and the token moves to the selected node.

After the exclusive portion of the process, where one path is chosen over another, you'll need to either end each path (as in the example below) or reunite them into a single flow (as in the example above) using a new exclusive gateway that requires no specific configuration.

## JavaScript examples
### Getting input from a Switch UI element
Let's consider the following example: we want to create a process that displays two screens and one modal. The gateway directs the token down a path based on whether a switch element (in our case, VAT) is toggled to true or false.

If, during the second screen, the VAT switch is toggled on, the token will follow the second path, displaying a modal.

After interacting with the modal, the token will return to the main path, and the process will continue its primary flow.

**Example configuration**:
* **Language**: JavaScript
* **Expression**:
```javascript
input.application.company.vat == true // the same access pattern applies in the other supported scripting languages: MVEL, Python, and Groovy
```
Essentially, you are accessing a specific value within a structured data object. The format is usually `input.{{key from where you want to access a value}}`. In simpler terms, the expression checks whether a particular property in your input data structure is set to true. If it is, the condition is met and returns true; otherwise, it returns false.
The `application.company.vat` key corresponds to the switch UI element.
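As an illustration, here is how that condition behaves against a sample `input` object. The object shape below is an assumption for demonstration, not data produced by the platform:

```javascript
// Illustrative input object, as it might look after the user toggles the switch
const input = {
  application: {
    company: {
      vat: true // value bound to the Switch UI element via the application.company.vat key
    }
  }
};

// The gateway condition from the example above
const takeVatPath = input.application.company.vat == true;
console.log(takeVatPath); // → true, so the token follows the VAT path
```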

### Getting input from multiple UI elements
Consider another scenario in which the process relies on user-provided information, such as age and membership status, to determine eligibility for a discount. This decision-making process utilizes multiple conditions, and depending on the input, it may either conclude or continue with other flows.

**Configuration example**:

In our case, the expressions fields will be populated with the `input.application.client.age` and `input.application.client.membership` keys, which correspond to the user input collected on the initial screen.
Here's how we've configured the rules for our discount eligibility process:
1. Users under 18 with standard membership are not eligible (redirected to not\_eligible\_modal):
```
input.application.client.age < 18 && input.application.client.membership == "standard"
```
2. Users 18 or older with standard membership are not eligible due to membership level (redirected to not\_eligible\_membership):
```
input.application.client.age >= 18 && input.application.client.membership == "standard"
```
3. Users 18 or older with gold membership qualify for a discount (redirected to discount\_applied):
```
input.application.client.age >= 18 && input.application.client.membership == "gold"
```
4. Users 18 or older with platinum membership also qualify for a discount (redirected to discount\_applied):
```
input.application.client.age >= 18 && input.application.client.membership == "platinum"
```
5. Any other combinations are sent to needsValidation for further review.
Each rule uses the logical AND operator (&&) to ensure both conditions must be true for the rule to apply. The rules are evaluated in sequence, and the process follows the path of the first matching rule.
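To make the first-match evaluation order concrete, here is a plain JavaScript sketch of the routing described above. The function name and structure are illustrative, not platform API; the returned strings mirror the destination nodes from the example:

```javascript
// Sketch of the gateway's first-match evaluation: rules are checked in order,
// and the first condition that evaluates to true decides the destination.
function routeDiscount(input) {
  const { age, membership } = input.application.client;
  if (age < 18 && membership === "standard") return "not_eligible_modal";
  if (age >= 18 && membership === "standard") return "not_eligible_membership";
  if (age >= 18 && membership === "gold") return "discount_applied";
  if (age >= 18 && membership === "platinum") return "discount_applied";
  return "needsValidation"; // the "else" branch catches every other combination
}

console.log(routeDiscount({ application: { client: { age: 25, membership: "gold" } } }));
// → "discount_applied"
```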
The process is visualized as follows:
# Message catch boundary events
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/message-events/message-catch-boundary-event
Boundary events are integral components linked to user tasks within a process flow. Specifically, Message Catch Boundary Events are triggered by incoming messages and can be configured as either interrupting or non-interrupting based on your requirements.
**Why is it important?** It empowers processes to actively listen for and capture designated messages during the execution of associated user tasks.
When an event is received, the token advances along the sequence flow from the boundary event **node**. Multiple boundary events can exist on the same user task, but only one can be activated at a time.

Message Catch Boundary Events can be categorized by their behavior, resulting in two main classifications:
* [**Interrupting**](#message-catch-interrupting-event)
* [**Non-interrupting**](#message-catch-non-interrupting-event)
## Message catch interrupting event

In the case of an Interrupting Message Catch Boundary Event triggered by a received message, it immediately interrupts the ongoing task. The associated task concludes, and the **process flow** advances based on the received message.
* **Use Cases:**
* Suitable for scenarios where the receipt of a specific message requires an immediate interruption of the current activity.
* Often used when the received message signifies a critical event that demands prompt attention.
* **Example:**
* A user task is interrupted as soon as a high-priority message is received, and the process flow moves forward to handle the critical event.
## Message catch non-interrupting event

By contrast, a Non-Interrupting Message Catch Boundary Event continues to listen for messages during the execution of the associated task without interrupting it. The task keeps executing even after messages are received. Multiple non-interrupting events can be activated concurrently while the task is still active, allowing the task to continue until its natural completion.
* **Use Cases:**
* Appropriate for scenarios where multiple messages need to be captured during the execution of a user task without disrupting its flow.
* Useful when the received messages are important but do not require an immediate interruption of the ongoing activity.
* **Example:**
* A user task continues its execution while simultaneously capturing and processing non-critical messages.
## Compatibility matrix for Message Events
**Message Events** can be used in various contexts within BPMN processes. The following table outlines their compatibility with different node types:
| **Node Type** | **Message Catch Event - Interrupting** | **Message Catch Event - Non-Interrupting** |
| ----------------------------- | -------------------------------------- | ------------------------------------------ |
| **User Task** | Yes | Yes |
| **Service Task** | Yes | Yes |
| **Send Message Task** | No | No |
| **Receive Message Task** | Yes | Yes |
| **Start Embedded Subprocess** | Yes | Yes |
| **Call Activity** | Yes | Yes |
## Configuring a message catch interrupting/non-interrupting event
#### General config
* **Correlate with Throwing Events** - the dropdown lists all throw events from accessible process definitions
Establishes correlation between the catch event and the corresponding throw event. Selection of the relevant throw event triggers the catch event upon message propagation.
* **Correlation Key** - process key used to correlate received messages with specific process instances
The correlation key associates incoming messages with specific process instances. Upon receiving a message with a matching correlation key, the catch event is triggered.
* **Receive Data (Process Key)** - the catch event can receive and store data associated with the message in a process variable with the specified process key
This received data becomes available within the process instance, facilitating further processing or decision-making.
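Conceptually, the configuration of a catch boundary event maps onto properties like these, shown as an illustrative JSON sketch; the values are assumptions for demonstration only:

```json
{
  "attachedTo": "user_task_account_activation",
  "messageName": "account_ready_for_activation",
  "correlationKey": "${processInstanceId}",
  "data": {
    "activationStatus": "${activation.status}"
  }
}
```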
## Illustrating boundary events (interrupting and non-interrupting)

**Business Scenario:**
A customer initiates the account opening process. Identity verification occurs, and after successful verification, a message is thrown to signal that the account is ready for activation.
Simultaneously, the account activation process begins. If there are issues during activation, they are handled through the interruption process. The overall process ensures a streamlined account opening experience while handling potential interruptions during activation, and also addresses exceptions through the third lane.
# Message catch start event
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/message-events/message-catch-start-event
The Message Catch Start Event node represents the starting point for a process instance based on the receipt of a specific message. When this event is triggered by receiving the designated message, it initiates the execution of the associated process.
**Why is it important?** The Message Catch Start Event allows a process to be triggered and initiated based on the reception of a specific message.
## Configuring a message catch start event
A Message Catch Start Event is a special event that initiates a process instance upon receiving a specific message. It acts as the trigger for the process, waiting for the designated message to arrive. Once the message is received, the process instance is created and executes the defined flow from that point onward. The Message Catch Start Event thus serves as the entry point for the process, enabling it to start based on the occurrence of the expected message.
To use this type of node together with the task management plugin, it is mandatory to have a service account defined in your identity solution. For more information, check our documentation on how to create service accounts using Keycloak, [**here**](../../../../setup-guides/access-management/configuring-an-iam-solution#process-engine-service-account).

#### General config
* **Can go back?** - Setting this to true allows users to return to this step after completing it. When encountering a step with `canGoBack` set to false, all steps found behind it become unavailable.
* **Correlate with catch events** - the dropdown contains all catch messages from the process definitions accessible to the user
* **Correlation key** - is a process key that uniquely identifies the instance to which the message is sent
* **Send data** - allows the user to define a JSON structure with the data to be sent along with the message
* **Stage** - assign a stage to the node, if needed

## Interprocess communication with throw and message catch start events
### Throw process

#### Configuring the throw intermediate event
##### General config
* **Can go back?** - Setting this to true allows users to return to this step after completion. When encountering a step with canGoBack set to false, all steps found behind it become unavailable.
* **Correlate with catch events** - Should match the configuration in the message catch start event.
* **Correlation key** - A process key that uniquely identifies the instance to which the message is sent.
* **Send data** - Define a JSON structure with the data to be sent along with the message. In our example, we will send a test object:
```json
{"test": "docs"}
```
* **Stage** - Assign a stage to the node if needed.

### Start with catch process

#### Configuring the message catch start event
Remember, it's mandatory to have a service account defined in your identity solution to have the necessary rights to start a process using the message catch start event. Refer to our documentation on how to create service accounts using Keycloak, [**here**](../../../../setup-guides/access-management/configuring-an-iam-solution#process-engine-service-account).

After running the throw process, the process containing the start catch message event will be triggered. The data is also sent:

# Message Events
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/message-events/message-events
Message events serve as a means to incorporate messaging capabilities into business process modeling. These events are specifically designed to capture the interaction between different process participants by referencing messages.

By leveraging message events, processes can pause their execution until the expected messages are received, enabling effective coordination and communication between various system components.
## Intermediate events
| Trigger | Description | Marker |
| ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| Message | A Message Intermediate Event serves to send or receive messages. A filled marker denotes a "throw" event, while an unfilled marker indicates a "catch" event. This either advances the process or alters the flow for exception handling. Identifying the Participant is done by connecting the Event to a Participant through a Message Flow. | Throw  Catch  |
## Boundary events
Boundary Events involve handling by first consuming the event occurrence. Message Catch Boundary Events, triggered by incoming messages, can be configured as either interrupting or non-interrupting.
| Trigger | Description | Marker |
| ------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: |
| Message | **Non-interrupting Message Catch Event**: The event can be triggered at any time while the associated task is being performed. | Non-interrupting |
| Message | **Interrupting Message Catch Event**: The event can be triggered at any time while the associated task is being performed, interrupting the task. | Interrupting |
## Intermediate vs boundary
**Intermediate Events**
* Intermediate events temporarily halt the process instance, awaiting a message.
**Boundary Interrupting Events**
* These events can only be triggered while the token is active within the parent node.

* Upon activation, the parent node concludes, and the token progresses based on the boundary flow.

**Boundary Non-Interrupting Events**
* Similar to interrupting events, non-interrupting events can only be triggered while the token is active in the parent node.
* Upon triggering, the parent node remains active, and a new token is generated to execute the boundary flow concurrently.
FLOWX.AI works with the following message events nodes:
* [**Message catch start event**](./message-catch-start-event)
* [**Message intermediate events**](./message-intermediate/)
* [**Message catch boundary event**](./message-catch-boundary-event)
## Message events correlation
Messages are not sent directly to process instances. Instead, message correlation is achieved through message subscriptions, which consist of the message name and the correlation key (also referred to as the correlation value).
A correlation key is a key that can have the same value across multiple instances, and it is used to match instances based on their shared value. It is not important what the attribute's name is (even though we map based on this attribute), but rather the value itself when performing the matching between instances.
For example, in a user onboarding process, each instance holds a unique personal identification number (SSN). Another process that needs data from your instance, such as the SSN value collected from your input, can use that shared value to correlate with your process instance.
The communication works as follows: you receive a message on a Kafka topic - `${kafka.topic.naming.prefix}.core.message.event.process${kafka.topic.naming.suffix}`. The engine listens here and writes the response.
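The matching can be illustrated with a small JavaScript sketch. The subscription structure, identifiers, and values below are assumptions for demonstration, not the engine's internal model:

```javascript
// Each subscription pairs a message name with a correlation value for one instance.
const subscriptions = [
  { messageName: "ssnVerified", correlationValue: "1860914123456", processInstanceId: "inst-42" },
  { messageName: "ssnVerified", correlationValue: "2751102654321", processInstanceId: "inst-43" }
];

// An incoming message is delivered to the instance whose subscription matches
// both the message name and the correlation value.
function correlate(message) {
  const match = subscriptions.find(
    s => s.messageName === message.name && s.correlationValue === message.correlationValue
  );
  return match ? match.processInstanceId : null;
}

console.log(correlate({ name: "ssnVerified", correlationValue: "2751102654321" }));
// → "inst-43"
```

Note that the attribute names differ between instances without affecting the match; only the shared value matters.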
## Message events configuration
* `attachedTo`: a property that applies to boundary events
* `messageName`: a unique name at the database level, should be the same for throw and catch events
* `correlationKey`: a process variable used to uniquely identify the instance to which the message is sent
* `data`: allows defining the JSON message body mapping as output and input
### Data example
```json
{
  "document": {
    "documentId": "${document.id}",
    "documentUrl": "${document.url}"
  }
}
```
# Intermediate message events in business processes
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/message-events/message-intermediate/example-intermediate-message-events
Business processes often involve dynamic communication and coordination between different stages or departments. Intermediate Message Events play an important role in orchestrating information exchange, ensuring effective synchronization, and enhancing the overall efficiency of these processes.
* [Throw and catch on sequence - credit card request process example](#throw-and-catch-on-sequence---credit-card-request-process-example)
* [Throw and catch - interprocess communication](#interprocess-communication-with-throw-and-catch-events)
## Throw and catch on sequence - Credit card request process example
### Business scenario
In the following example, we'll explore a credit card request process that encompasses the initiation of a customer's request, verification based on income rules, approval or rejection pathways, and communication between the client and back office.

#### Activities
##### Default swimlane (client)
* **Start Event:** Marks the commencement of the process.
* **User Task 1:** Customer Requests New Credit Card - Involves the customer submitting a request for a new credit card.
* **Exclusive Gateway:** The process diverges based on the verification result (dependent on income rules).
* **Path A (Positive Verification):**
* **User Task 2:** Approve Credit Card Application - The bank approves the credit card application.
* **End Event:** Denotes the conclusion of the process for approved applications.
* **Path B (Negative Verification):**
* **Parallel Gateway Open:** Creates two parallel zones.
* **First Parallel Zone:**
* **User Task 3:** Reject Credit Card Application - The bank rejects the credit card application.
* **Message Throw Intermediate Event:** Signals the rejection, throwing a message to notify the back office department.
* **End Event:** Signifies the end of the process for rejected applications.
* **Second Parallel Zone:**
* **User Task 3:** Reject Credit Card Application - The bank rejects the credit card application.
* **Message Catch Intermediate Event:** The back office department is notified about the rejection.
* **Send Message Task**: A notification is sent via email to the user about the rejection.
* **End Event:** Signifies the end of the process for rejected applications.
##### Backoffice swimlane
* **Message Catch Intermediate Event:** The back office department awaits a message to proceed with credit card issuance.
* **Send Message Task:** Send Rejection Letter - Involves sending a rejection letter to the customer.
### Sequence flow
```mermaid
graph TD
    subgraph Default Swimlane
        StartEvent[Start Process]
        UserTask1[User Task 1: Customer Requests New Credit Card]
        Gateway1{Exclusive Gateway}
        UserTask2[User Task 2: Approve Credit Card Application]
        endA[End Event: End approved scenario]
        ParallelGateway{Parallel Gateway}
        UserTask3A[User Task 3: Reject Credit Card Application]
        MessageThrow[Message Throw Intermediate Event: Throwing a message to notify the back office department.]
        Gateway2{Close Parallel}
        endC[End Event: Signifies the end of the process for rejected applications.]
    end
    subgraph Backoffice Swimlane
        MessageCatch
        SendEmailTask
    end
    StartEvent -->|Start Process| UserTask1
    UserTask1 -->|Income Verification| Gateway1
    Gateway1 -->|Positive Verification| UserTask2
    UserTask2 -->|Approved| endA
    Gateway1 -->|Negative Verification| ParallelGateway
    ParallelGateway -->|First Parallel Zone| UserTask3A
    UserTask3A -->|Credit Card Rejected| MessageThrow
    MessageThrow --> Gateway2 -->|Second Parallel Zone| MessageCatch
    MessageCatch --> SendEmailTask
    SendEmailTask --> Gateway2
    Gateway2 -->|End| endC
```
### Message flows
A message flow connects the Message Throw Intermediate Event to the Message Catch Intermediate Event, symbolizing the communication of credit card approval from the credit card approval task to the back office department.
In summary, when a customer initiates a new credit card request, the bank verifies the information. If declined, a message is thrown to notify the back office department. The Message Catch Intermediate Event in the back office awaits this message to proceed with issuing and sending the rejection letter to the customer.
### Configuring the BPMN process
To implement the illustrated BPMN process for the credit card request, follow these configuration steps:
**FLOWX.AI Designer**: Open FLOWX.AI Designer.
**Draw BPMN Diagram**: Import the provided BPMN diagram into FLOWX.AI Designer or recreate it by drawing the necessary elements.
**Customize Swimlanes**: Set up the "Default" and "Backoffice" swimlanes to represent different departments or stakeholders involved in the process. This helps visually organize and assign tasks to specific areas.
**Define User Tasks**: Specify the details for each user task. Configure User Task 1, User Task 2, and User Task 3 with appropriate screens.
* **User Task 1** - *customer\_request\_new\_credit\_card*
We will use the value from `application.income` key added on the slider UI element to create an MVEL business rule in the next step.

* **User Task 2** - *approve\_credit\_card\_request*

In this screen, we configured a modal to display the approval.
* **User Task 3** - *reject\_credit\_card\_request*

**Configure Gateways**: Adjust the conditions for the Exclusive Gateway based on your business rules. Define the conditions for positive and negative verifications, guiding the process down the appropriate paths.
In our example, we used an MVEL rule to determine eligibility based on the income of the user. We used the `application.income` key configured in the first user task to create the rule.

Also, add Parallel gateways to open/close parallel paths.

**Set Message Events**: Configure the Message Throw and Message Catch Intermediate Events in the "Default" and "Backoffice" swimlanes, respectively. Ensure that the Message Catch Intermediate Event in the "Backoffice" swimlane is set up to wait for the specific message thrown by the Message Throw event. This facilitates communication between different stages of the process.
**Define End Events**: Customize the End Events for approved and rejected applications in the "Default" swimlane. Also, set an end event in the "Backoffice" swimlane to indicate the completion of the back-office tasks.
**Configure Send Message Task**: Set up the Send Message Task in the "Backoffice" swimlane to send a rejection letter as a notification to the user.

Define the content of the rejection letter, the method of notification, and any additional details required for a seamless user experience. More details on how to configure a notification can be found in the following section:
[**Sending a notification**](../../../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-a-notification)
**Validate and Test**: Validate the BPMN diagram for correctness and completeness. Test the process flow by simulating different scenarios, such as positive and negative verifications.
### Configuring intermediate message events
Configuring message events is a crucial step in orchestrating effective communication and synchronization within a business process. Whether you are initiating a message throw or awaiting a specific message with a catch, the configuration process ensures information exchange between different components of the process.
In this section, we explore the essential steps and parameters involved in setting up message events to optimize your BPMN processes.
### Message throw intermediate event
A Message Throw Intermediate Event is an event in a process where a message is sent to trigger communication or action with another part of the process (can be correlated with a catch event). It represents the act of throwing a message to initiate a specific task or notification. The event creates a connection between the sending and receiving components, allowing information or instructions to be transmitted. Once the message is thrown, the process continues its flow while expecting a response or further actions from the receiving component.

#### General Configuration
* **Can go back?** - Setting this to true allows users to return to this step after completing it. When encountering a step with `canGoBack` false, all steps found behind it will become unavailable.
* **Correlate with catch message events** - The dropdown contains all catch messages from the process definitions accessible to the user, in our example: `throwcatchsequenceloan`
It is imperative to define the message for the catch event first. This ensures its availability in the dropdown menu when configuring the throw intermediate event.
* **Correlation key** - This is a process key that uniquely identifies the instance to which the message is sent. In our example, we utilized the `processInstanceId` as the identifier, dynamically generated at runtime. This key is crucial for establishing a clear and distinct connection between the sender and recipient in the messaging process.
A correlation key is a key that can have the same value across multiple instances, and it is used to match instances based on their shared value. It is not important what the attribute's name is (even though we map based on this attribute), but rather the value itself when performing the matching between instances.
* **The Send data field** - This feature empowers you to define a JSON structure containing the data to be transmitted alongside the message. In our illustrative example, we utilized dynamic data originating from user input, specifically bound to a slider UI element.
```json
{"value": "${application.income}"}
```
* **Stage** - Assign a stage to the node.
In the end, this is what we have:

### Message catch intermediate event
A Message Catch Intermediate Event is a type of event in a process that waits for a specific message before continuing with the process flow. It enables the process to synchronize and control the flow based on the arrival of specific messages, ensuring proper coordination between process instances.

#### General Configuration
* **Can go back?** - Setting this to true allows users to return to this step after completing it. When encountering a step with `canGoBack` false, all steps found behind it will become unavailable.
* **Correlate with throwing events** - The dropdown contains all throw messages from the process definitions accessible to the user (must be the same as the one assigned in the Message throw intermediate event).
* **Correlation key** - Process key used to establish a correlation between the received message and a specific process instance (must be the same as the one assigned in Message throw intermediate event).
* **Receive data** - The process key that will be used to store the data received from the throw event along with the message.
* **Stage** - Assign a stage to the node.

### Testing the final result
After configuring the BPMN process and setting up all the nodes, it is crucial to thoroughly test the process to ensure its accuracy and effectiveness.
We will test the path where the user gets rejected.

In the end, the user will receive this notification via email:

## Interprocess communication with throw and catch events
Facilitate communication between different processes by using message intermediate events.
### Business scenario
Consider a Bank Loan Approval process where the parent process initiates a loan application. During the execution, it throws a message to a subprocess responsible for additional verification.
#### Activities
**Parent Process:**
* **Start Event:** A customer initiates a loan application.
* **Start Subprocess:** Initiates a subprocess for additional verification.
* **User Task:** Basic verification steps are performed in the parent process.
* **Throw Message:** After the basic verification, a message is thrown to indicate that the loan application is ready for detailed verification.
* **End Event:** The parent process concludes.
**Subprocess:**
* **Start Event:** The subprocess is triggered by the message thrown from the parent process.
* **Catch Message:** The subprocess catches the message, indicating that the loan application is ready for detailed verification.
* *(Perform Detailed Verification and Analysis)*
* **End Event:** The subprocess concludes.
### Sequence flow
```mermaid
graph TD
    subgraph Parent Process
        a[Start]
        b[Start Subprocess]
        c[Throw message to another process]
        d[User Task]
        e[End]
    end
    subgraph Subprocess
        f[Start]
        g[Catch event in subprocess]
        h[End]
    end
    a --> b --> c --> d --> e
    f --> g --> h
    c --> g
```
### Message flows
* The parent process triggers the subprocess run node, initiating the child process.
* Within the child process, a message catch event waits for and processes the message thrown by the parent subprocess.
### Configuring the parent process (throw event)

**Open FlowX.AI Designer and create a new process**.
**Add a User Task for user input**.
* Within the designer interface, add a "User Task" element to collect user input. Configure the user task to capture the necessary information that will be sent along with the message.

**Integrate a [Call activity](../../../node/call-subprocess-tasks/call-activity-node.mdx) node**.
* Add a "Subprocess Run Node" and configure it:
* **Start Async** - Enable this option. When subprocesses are initiated in sync mode, they notify the parent process upon completion. The parent process then manages the reception of process data from the child and resumes its flow accordingly.

* Add and configure the "Start Subprocess" action:
  * **Parameters**:
    * **Subprocess name** - Specify the name of the process containing the catch message event.
    * **Branch** - Choose the desired branch from the dropdown menu.
    * **Version** - Indicate the type of version to be utilized within the subprocess.

For a more comprehensive guide on configuring the "Start Subprocess" action, refer to the following section:
[Start Subprocess action](../../../../building-blocks/actions/start-subprocess-action)
**Insert a Throw Event for Message Initiation**.
* Add a "Throw Event" to the canvas, indicating the initiation of a message.
* Configure the throw message intermediate event node:
  * **Correlate with catch message events** - The dropdown contains all catch messages from the process definitions accessible to the user; in our example: `throwcatchDocs`
  * **Correlation key** - This is a process key that uniquely identifies the instance to which the message is sent. In our example, we utilized the `processInstanceId` as the identifier, dynamically generated at runtime. This key is crucial for establishing a clear and distinct connection between the sender and recipient in the messaging process.
  * **Send data** - This field allows you to define a JSON structure containing the data to be transmitted alongside the message. In our example, we used dynamic data originating from user input, bound to slider UI elements.
```json
{
  "client_details": {
    "clientIncome": ${application.income},
    "loanAmount": ${application.loan.amount},
    "loanTerm": ${application.loan.term}
  }
}
```

### Configuring the subprocess (catch event)

Configure the node:
* **Correlate with throwing events** - Utilize the same correlation settings added for the associated throw message event.
* **Correlation Key** - Set the correlation key to the parent process instance ID, identified as `parentProcessInstanceId`.
* **Receive data** - Specify the process key that will store the data received from the throw event along with the corresponding message.
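
For illustration, assuming the parent process sends the client details JSON shown above and the **Receive data** field is set to a hypothetical key such as `verificationInput`, the subprocess instance would store the payload roughly like this (values are illustrative):

```json
{
  "verificationInput": {
    "client_details": {
      "clientIncome": 3500,
      "loanAmount": 20000,
      "loanTerm": 60
    }
  }
}
```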

* Integrate and Fine-Tune a Service Task for Additional Verification.
* Incorporate a service task to execute the additional verification process. Tailor the configuration to align with your preferred method of conducting supplementary checks.
### Throw with multiple catch events
Download the example
# Message catch intermediate event
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/message-events/message-intermediate/message-catch-intermediate-event
A Message Catch Intermediate Event is a type of event in a process that waits for a specific message before continuing with the process flow.

**Why is it important?** It enables the process to synchronize and control the flow based on the arrival of specific messages, ensuring proper coordination between process instances.
Similar to the message catch boundary event, the message catch intermediate event is important because it facilitates the communication and coordination between process instances through messages. By incorporating this event, the process can effectively synchronize and control the flow based on the arrival of specific messages.
A Message Catch Intermediate Event can be used as a standalone node, meaning it will block the process until it receives the expected message.
## Configuring a message catch intermediate event
Imagine a process where multiple tasks are executed in sequence, but the execution of a particular task depends on the arrival of a certain message. By incorporating a message catch intermediate event after the preceding task, the process will pause until the expected message is received. This ensures that the subsequent task is not executed prematurely and allows for the synchronization of events within the process.

#### General config
* **Can go back?** - setting this to true will allow users to return to this step after completing it. When encountering a step with `canGoBack` set to false, all steps found behind it will become unavailable
* **Correlate with throwing message events** - the dropdown contains all catch messages from the process definitions accessible to the user
* **Correlation key** - process key used to establish a correlation between the received message and a specific process instance
* **Receive data** - the process key that will be used to store the data received along with the message
* **Stage** - assign a stage to the node

# Message intermediate events overview
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/message-events/message-intermediate/message-intermediate
An intermediate event is an occurrence situated between a start and an end event in a process or system. It is represented by a circle with a double line. This event can either catch or throw information, and the directional flow is indicated by connecting objects, determining whether the event is catching or throwing information.

### Message throw intermediate event
This event throws a message and continues with the process flow.
It enables the sending of a message to a unique destination.
### Message catch intermediate event
This event waits for a message to be caught before continuing with the process flow.
# Message throw intermediate event
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/message-events/message-intermediate/message-throw-intermediate-event
Using a Throw intermediate event is like throwing a message to tell someone about something. After throwing the message, the process keeps going, and other parts of the process can listen to that message.

**Why is it important?** The Message Throw Intermediate Event is important because it allows different parts of a process to communicate and share information with each other.
## Configuring a message throw intermediate event
A Message throw intermediate event is an event in a process where a message is sent to trigger a communication or action with another part of the process (can be correlated with a catch event). It represents the act of throwing a message to initiate a specific task or notification. The event creates a connection between the sending and receiving components, allowing information or instructions to be transmitted. Once the message is thrown, the process continues its flow while expecting a response or further actions from the receiving component.

#### General config
* **Can go back?** - setting this to true will allow users to return to this step after completing it. When encountering a step with `canGoBack` set to false, all steps found behind it will become unavailable
* **Correlate with catch events** - the dropdown contains all catch messages from the process definitions accessible to the user
* **Correlation key** - is a process key that uniquely identifies the instance to which the message is sent
* **The data field** - allows the user to define a JSON structure with the data to be sent along with the message
* **Stage** - assign a stage to the node

# Send message/receive message tasks
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/message-send-received-task-node
Send message task and Receive message task nodes are used to handle the interaction between a running process and external systems. This is done using Kafka.
## Send message task
This node is used to configure messages that should be sent to external systems.

### Configuring a send message task
Node configuration is done by accessing the **Node Config** tab. You have the following configuration options for a send message task:
#### General configuration
Inside the **General Config** tab, you have the following properties:
* **Node Name** - the name of the node
* **Can Go Back** - switching this option to true will allow users to return to this step after completing it
When encountering a step with `canGoBack` switched to false, all steps found behind it will become unavailable.
* [**Swimlane**](../../platform-deep-dive/user-roles-management/swimlanes) - choose a swimlane (if there are multiple swimlanes on the process) to ensure only certain user roles have access to certain process nodes; if there are no multiple swimlanes, the value is **Default**
* [**Stage**](../../platform-deep-dive/core-extensions/task-management/using-stages) - assign a stage to the node

To configure a send message task, we first need to add a new node and then configure an **action** (**Kafka Send Action** type):
1. Open **Process Designer** and start configuring a process.
2. Add a **send message task** node.
3. Select the **send message task** node and open the **Node Configuration**.
4. Add an **action** and set the action type to **Kafka Send Action**.
5. A few action parameters will need to be filled in depending on the selected action type.

Multiple options are available for this type of action and can be configured via the FLOWX.AI Designer. To configure and [add an action to a node](../../flowx-designer/managing-a-process-flow/adding-an-action-to-a-node), use the **Actions** tab at the node level, which has the following configuration options:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
#### Action Edit
* **Name** - used internally to make a distinction between different [actions](../actions/actions) on nodes in the process. We recommend defining an action naming standard to easily find the process actions
* **Order** - if multiple actions are defined on the same node, set the running order using this option
* **Timer Expression** - it can be used if a delay is required on that action. The format used for this is [ISO 8601 duration format](./timer-events/timer-expressions#iso-8601) (for example, a delay of 30 seconds will be set up as `PT30S`)
* **Action Type** - should be set to **Kafka Send Action** for actions used to send messages to external systems
* **Trigger Type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required Type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
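
The **Timer Expression** field uses standard ISO 8601 durations; a few common values:

```
PT30S     -> 30 seconds
PT10M     -> 10 minutes
PT1H30M   -> 1 hour and 30 minutes
P1D       -> 1 day
P1DT12H   -> 1 day and 12 hours
```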
#### Back in steps
* **Allow BACK on this action** - "back in the process" is a functionality that allows you to go back in a business process and redo a series of previous actions. For more details, check the [**Moving a Token Backwards in a Process**](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section

#### Data to Send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [User Task Configuration](./user-task-node) section)
You can configure the **Data to Send** option only when the action **trigger type** is **Manual**.

For more information about Kafka, check the following sections:
### Example of a send message task usage
Send a message to a CRM integration to request a search in the local database:
#### Action Edit
* **Name** - pick a name that makes it easy to figure out what this action does, for example, `sendRequestToSearchClient`
* **Order** - 1
* **Timer Expression** - this remains empty if we want the action to be triggered as soon as the token reaches this node
* **Action Type** - Kafka Send Action
* **Trigger Type** - *Automatic* - to trigger this action automatically
* **Required Type** - *Mandatory* - to make sure this action will be run before advancing to the next node
* **Repeatable** - false, it only needs to run once
#### Parameters
Parameters can be added either using the **Custom** option (where you configure everything on the spot) or by using **From Integration** to import parameters already defined in an integration.
You can find more details about **Integrations Management** [here](../../platform-deep-dive/core-extensions/integration-management/integration-management-overview).
##### Custom
* **Topics** - `ai.flowx.in.crm.search.v1` the Kafka topic on which the CRM listens for requests
* **Message** - `{ "clientType": "${application.client.clientType}", "personalNumber": "${personalNumber.client.personalNumber}" }` - the message payload will have two keys, `clientType` and `personalNumber`, both with values from the process instance
* **Headers** - `{"processInstanceId": ${processInstanceId}}`
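
With illustrative process instance values (say, a `PF` client type and a sample personal number), the resolved message published to `ai.flowx.in.crm.search.v1` would look something like the following, with the `processInstanceId` sent as a Kafka header:

```json
{
  "clientType": "PF",
  "personalNumber": "1900101223344"
}
```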



## Receive Message Task
This type of node is used when we need to wait for a reply from an external system.

The reply from the external system will be saved in the process instance values, on a specified key. If the message needs to be processed at a later time, a timeout can be set using the [ISO 8601](./timer-events/timer-expressions) format.
For example, consider a CRM microservice that listens for requests to look up a user in a database. It will send back the response on a topic that the process engine is configured to listen to.

### Configuring a Receive Message Task
The values you need to configure for this node are the following:
* **Topic Name** - the topic name where the [process engine](../../platform-deep-dive/core-components/flowx-engine) listens for the response (this should be added to the platform and match the topic naming rule for the engine to listen to it) - `ai.flowx.out.crm.search.v1`
A naming pattern must be defined on the process engine to use the defined topics. It is important to know that all the events that start with a configured pattern will be consumed by the Engine. For example, `KAFKA_TOPIC_PATTERN` is the topic name pattern that the Engine listens to for incoming Kafka events.
* **Key Name** - will hold the result received from the external system; if the key already exists in the process values, it will be overwritten - `crmResponse`
For more information about Kafka configuration, click [here](../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka).
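
Once the CRM replies on `ai.flowx.out.crm.search.v1`, the payload is stored in the process instance under the configured key. The exact shape depends on what the external system returns; a hypothetical response might be saved as:

```json
{
  "crmResponse": {
    "clientFound": true,
    "clientId": "CRM-001234"
  }
}
```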

#### From integration
After defining an integration (inside [Integration Management](../../platform-deep-dive/core-extensions/integration-management/)), you can open a compatible node and start using already defined integrations.
* **Topics** - topics defined in your integration
* **Message** - the **Message Data Model** from your integration
* **Headers** - all integrations have `processInstanceId` as a default header parameter; add any other relevant parameters

# BPMN nodes
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/node
A Business Process Model and Notation (BPMN) node is a visual representation of a point in your process. Nodes are added at specific process points to denote the entrance or transition of a record within the process.
For a comprehensive understanding of BPMN, start with the following section:
[**Intro to BPMN**](/4.0/docs/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn).

The FlowX.AI platform supports various node types, each requiring distinct configurations to fulfill its role in the business flow.
## Types of BPMN nodes
Let's explore the key types of BPMN nodes available in FlowX:
* **Start and End nodes** - Mark the initiation and conclusion of a [process flow](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn). A [process definition](../process/process-definition) may have multiple start nodes (each linked with a start condition) and end nodes based on the flow outcomes.
* **Send Message and Receive Message tasks** - Used for communication with external systems, integrations, and plugins.
* **Message Events** - Capture interactions between different process participants by referencing messages.
* **Timer Events** - Introduce time-based behavior into processes, triggering actions at specific intervals, durations, or cycles. They support configurations like specific dates, durations, or recurring cycles and can pause, start, or monitor tasks based on time conditions.
* **Error Events** - Manage error handling in processes by interrupting tasks or subprocesses upon specific error conditions. Configured as boundary events, they catch and handle errors thrown within the process, ensuring robust error management and control.
* **Task** nodes - Added when a [business rule](../actions/business-rule-action/business-rule-action) needs to execute during a process flow.
* **User Task** nodes - Configure the appearance and behavior of the UI and send data to custom components.
* **Exclusive Gateways** - Mark decision points in the process flow, determining the branch to be followed.
* **Parallel Gateways** - Split the process flow into two or more [branches](../../flowx-designer/managing-a-process-flow/adding-more-flow-branches) occurring simultaneously.
* **Call Subprocess Tasks**:
* **Call Activity** - A node that provides advanced options for starting **subprocesses**.
* **Start Embedded Subprocess** - Initiates subprocesses within a parent process, allowing for encapsulated functionality and enhanced process management.
* **Boundary Events** - Specialized nodes that attach to specific tasks or subprocesses to handle predefined events during execution. They can be configured as interrupting or non-interrupting, with the following key types:
* **Message Catch Boundary Event** - Waits for a specific message while the task or subprocess is active, interrupting or initiating a parallel flow based on configuration.
* **Timer Boundary Event** - Triggers based on a specific time duration, date, or cycle. It can interrupt a task or start an additional flow without stopping the task.
* **Error Boundary Event** - Interrupts the associated task or subprocess upon an error, redirecting the process to a defined error-handling flow.

Boundary events enhance process flexibility by enabling workflows to adapt dynamically to real-time conditions, ensuring robust error, message, and timeout handling.
For comprehensive insights into BPMN and its various node types, explore our course at FlowX Academy:
* What's BPMN (Business Process Model Notation) and how does it work?
* How is BPMN used in FlowX?
### Boundary events
Boundary events attach to the boundary of specific nodes (e.g., User Task, Service Task, Subprocess, or Call Activity) and are triggered when predefined conditions occur. These events can interrupt the ongoing activity or allow it to continue while starting a parallel flow. They include:
* **Message Catch Boundary Event**: Waits for and responds to a specific incoming message during a task.
* **Timer Boundary Event**: Activates based on elapsed time, a specific date, or a recurring cycle, useful for timeouts or deadlines.
* **Error Boundary Event**: Catches errors during a task or subprocess and redirects the process flow to handle them appropriately.

**Compatibility Matrix**:
Boundary events can attach to the following node types:
* **User Task**
* **Service Task**
* **Send Message/Receive Message Tasks**
* **Subprocess (Embedded and Call Activity)**
Boundary events are a critical element for creating resilient and adaptive processes, enabling efficient error handling, timeout management, and real-time interactions.
After gaining a comprehensive overview of each node, you can experiment with them to create a process. More details are available in the following section:
[Managing a process flow](../../flowx-designer/managing-a-process-flow/)
# Parallel gateway
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/parallel-gateway
When you have multiple operations that can be executed concurrently, the Parallel Gateway becomes a valuable tool. This type of node creates a parallel section within the process, particularly useful for tasks that can run independently without waiting for each other. It's essential to close each parallel section with another Parallel Gateway node.
## Configuring parallel paths

This node requires no special configuration and can initiate two or more parallel paths. It's important to note that the closing Parallel node, which is required to conclude the parallel section, will wait for all branches to complete before advancing to the next node.

### Configuration example
Let's consider a scenario involving a Paid Time Off (PTO) request. We have two distinct flows: one for the HR department and another for the manager. Initially, two tokens are generated—one for each parallel path. A third token is created when both parallel paths converge at the closing parallel gateway.
In the HR flow, in our example, the request is automatically approved.

Now, we await the second flow, which requires user input.

After the tokens from the parallel paths have completed their execution, a third token initiates from the closing parallel gateway.
# Start/end nodes
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/start-end-node
Let's go through all the options for configuring start and end nodes for a process definition.
## Start node
The start node represents the beginning of a process and it is mandatory to add one when creating a process.

A process can have one or more start nodes. If you define multiple start nodes, each should have a start condition value configured. When starting a new process instance, the desired start condition should be used.

### Configuring a start node
Node configuration is done by accessing the **Node Config** tab. You have the following configuration options for a **start node**:
* [General Config](#general-config)
* [Start condition](#start-condition)
#### General Config
* **Node name** - the name of the node
* **Can go back** - switching this option to true will allow users to return to this step after completing it
When encountering a step with `canGoBack` switched to false, all steps found behind it will become unavailable.
* [**Swimlane**](../../platform-deep-dive/user-roles-management/swimlanes) - choose a swimlane (if there are multiple swimlanes on the process) to make sure only certain user roles have access to certain process nodes; if there are no multiple swimlanes, the value is **Default**
* [**Stage**](../../platform-deep-dive/core-extensions/task-management/using-stages) - assign a stage to the node
#### Start condition
The start condition should be set as a string value. This string value will need to be set on the payload for the start process request on the `startCondition` key.

To test the start condition, we can send a start request via REST:
```
POST {{processUrl}}/api/process/{{processName}}/start

{
  "startCondition": "PF"
}
```
#### Error handling on start condition
If a request is made to start a process with a start condition that does not match any start node, an error will be generated. Let's take the previous example and assume we send an incorrect value for the start condition:
```
POST {{processUrl}}/api/process/{{processName}}/start

{
  "startCondition": "PJ"
}
```
A response with the error code `bad request` and the title `Start node for process definition not found` will be sent in this case:
```json
{
  "entityName": "ai.flowx.process.definition.domain.NodeDefinition",
  "defaultMessage": "Start node for process definition not found.",
  "errorKey": "error.validation.process_instance.start_node_for_process_def_missing",
  "type": "https://www.jhipster.tech/problem/problem-with-message",
  "title": "Start node for process definition not found.",
  "status": 400,
  "message": "error.validation.process_instance.start_node_for_process_def_missing",
  "params": "ai.flowx.process.definition.domain.NodeDefinition"
}
```
## End node

An end node is used to mark where the process finishes. When the process reaches this node, the process is considered completed and its status will be set to `Finished`.
### Configuring an end node
Multiple end nodes can be used to show different end states. The configuration is similar to the start node.

# Task node
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/task-node
A task node represents an activity that utilizes services, such as web services or automated applications, to accomplish a particular task.

This type of node finds application in multiple scenarios, including:
* Executing a [**business rule**](../actions/business-rule-action/) on the process instance data
* Initiating a [**subprocess**](../actions/start-subprocess-action)
* Transferring data from a subprocess to the parent process
* Transmitting data to frontend applications
## Configuring task nodes
One or more actions can be configured on a task node. The actions are executed in the configured order.
Node configuration is done by accessing the **Node Config** tab. You have the following configuration options for a task node:
#### General Config
* **Node name** - the name of the node
* **Can go back** - switching this option to true will allow users to return to this step after completing it

When encountering a step with `canGoBack` switched to false, all steps found behind it will become unavailable.
* [**Swimlane**](../../platform-deep-dive/user-roles-management/swimlanes) - choose a swimlane (if there are multiple swimlanes on the process) to make sure only certain user roles have access to certain process nodes; if there are no multiple swimlanes, the value is **Default**
* [**Stage**](../../platform-deep-dive/core-extensions/task-management/using-stages) - assign a stage to the node
#### Response Timeout
* **Response timeout** - can be triggered if, for example, a topic that you define and add in the [Data stream topics](#data-stream-topics) tab does not respect the pattern. The format used for this is the [ISO 8601 duration format](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30 seconds is set up as `PT30S`)

#### Data stream topics
* **Topic Name** - the topic name where the [process engine](../../platform-deep-dive/core-components/flowx-engine) listens for the response (this should be added to the platform and match the topic naming rule for the engine to listen to it) - available for UPDATES topics (Kafka receive events)
A naming pattern must be defined on the [process engine configuration](../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka) to use the defined topics. It is important to know that all the events that start with a configured pattern will be consumed by the Engine. For example, `KAFKA_TOPIC_PATTERN` is the topic name pattern where the Engine listens for incoming Kafka events.
* **Key Name** - will hold the result received from the external system; if the key already exists in the process values, it will be overwritten
#### Task Management
* **Update task management** - force [Task Management Plugin](../../platform-deep-dive/core-extensions/task-management/task-management-overview) to update information about this process after this node

## Configuring task nodes actions
Multiple options are available when configuring an action on a task node. To configure and add an action to a node, use the **Actions** tab at the node level, which has the following configuration options:
* [Action Edit](#action-edit)
* [Parameters](#parameters)
#### Action Edit
Depending on the type of the [**action**](../actions/actions), different properties are available, let's take a [**business rule**](../actions/business-rule-action/business-rule-action) as an example.
1. **Name** - used internally to differentiate between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions.
2. **Order** - if multiple actions are defined on the same node, their running order should be set using this option
3. **Timer Expression** - can be used if a delay is required on that action. The format used for this is [ISO 8601 duration format ](https://www.w3.org/TR/NOTE-datetime)(for example, a delay of 30s will be set up like `PT30S`)
4. **Action type** - defines the appropriate action type
5. **Trigger type** - (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
6. **Required type** - (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
7. **Repeatable** - should be checked if the action can be triggered multiple times

#### Parameters
Depending on the type of the [**action**](../actions/actions), different properties are available. Let's take a **Business rule** as an example.
1. **Business Rules** - business rules can be attached to a node by using actions with action rules on them. These can be specified using [DMN rules](../actions/business-rule-action/dmn-business-rule-action), [MVEL](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel) expressions, or scripts written in JavaScript, Python, or Groovy.
### Business Rule action
A [business rule](../actions/business-rule-action/business-rule-action) is a Task action that allows a script to run. For now, the following script languages are supported:
* [**MVEL**](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel)
* **JavaScript (Nashorn)**
* **Python (Jython)**
* **Groovy**
* [**DMN**](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-dmn) - more details about a DMN business rule configuration can be found [here](../actions/business-rule-action/dmn-business-rule-action)
For more details on how to configure a Business Rule action, check the following section:
Being an event-driven platform, FlowX.AI uses WebSocket communication to push events to the frontend application.
For more details on how to configure a Send data to user interface action, check the following section:
Upload file action will be used to upload a file from the frontend application and send it via a Kafka topic to the document management system.
For more details on how to configure an Upload File action, check the following section:
In order to create reusability between business processes, as well as split complex processes into smaller, easier-to-maintain flows, the start subprocess business rule can be used to trigger the same sequence multiple times.
For more details on how to configure a Start Subprocess action, check the following section:
Used for copying data in the subprocess from its parent process.
For more details about the configuration, check the following section:
# Timer boundary event
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/timer-events/timer-boundary-event
A Timer Boundary Event in Business Process Model and Notation (BPMN) is a specialized event attached to a specific task or subprocess within a process. It activates when a predefined time condition—such as a duration, deadline, or specific date—is met while the associated task or subprocess is still in progress.
This event is used to integrate time-related triggers into workflows, allowing processes to respond automatically to time-sensitive requirements. It is particularly useful for enforcing deadlines, scheduling periodic actions, or sending notifications, ensuring that time-critical steps are seamlessly incorporated into the overall process design.
### Compatibility matrix for Timer Events
**Timer Events** can be applied to various contexts within BPMN processes. The table below outlines their compatibility with different node types:
| **Node Type** | **Timer Event - Interrupting** | **Timer Event - Non-Interrupting** |
| ----------------------------- | ------------------------------ | ---------------------------------- |
| **User Task** | Yes | Yes |
| **Service Task** | Yes | Yes |
| **Send Message Task** | Yes | Yes |
| **Receive Message Task** | Yes | Yes |
| **Start Embedded Subprocess** | No | No |
| **Call Activity** | Yes | Yes |
## Timer boundary event - interrupting
A Timer Boundary Event is an event attached to a specific activity (task or subprocess) that is triggered when a specified time duration or date is reached. It can interrupt the ongoing activity and initiate a transition.

### Configuration
For Timer Boundary Events - Interrupting, the following values can be configured:
| Field      | Validations | Accepted Values                                 |
| ---------- | ----------- | ----------------------------------------------- |
| Definition | Mandatory   | ISO 8601 formats (date/duration); process param |

### General rules
* When the token enters the parent activity, a scheduler is set, and it waits for the timer event to be triggered.
* When the timer is triggered, the ongoing activity is terminated, and the process continues with the defined transition.
## Timer boundary event - non-interrupting
A Timer Boundary Event is an event attached to a specific activity (task or subprocess) that is triggered when a specified time duration or date is reached. It can trigger independently of the ongoing activity and initiate a parallel path.

### Configuration
For Timer Boundary Events - Non-Interrupting, the following values can be configured:
| Field      | Validations | Accepted Values                                 |
| ---------- | ----------- | ----------------------------------------------- |
| Definition | Mandatory   | ISO 8601 formats (date/duration); process param |
### General rules
* When a token arrives at a node with a Timer Boundary Event - Non-Interrupting associated:
* A trigger is scheduled, but the current token execution remains unaffected.
* When the token enters the parent activity, a scheduler is set, and it waits for the timer event to be triggered.
* If the timer is a cycle, it is rescheduled for the specified number of repetitions.
* The scheduler is canceled if the token leaves the activity before it is triggered.
# Timer events
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/timer-events/timer-events
Timer event nodes are a powerful feature in BPMN that allow you to introduce time-based behavior into your processes. These nodes enable you to trigger specific actions or events at predefined time intervals, durations, or cycles. With timer event nodes, you can design processes that respond to time-related conditions, ensuring smoother workflow execution and enhanced automation.
There are three primary types of timer event nodes:
* **Timer Start Event (interrupting/non-interrupting)**: This node initiates a process instance at a scheduled time, either interrupting or non-interrupting ongoing processes. It allows you to set a specific date, duration, or cycle for the process to start. You can configure it to trigger a process instance just once or repeatedly.
* **Timer Intermediate Event** (interrupting): This node introduces time-based triggers within a process. It's used to pause the process execution until a specified time duration or date is reached. Once triggered, the process continues its execution.
* **Timer Boundary Event (interrupting/non-interrupting)**: Attached to a task or subprocess, this node monitors the passage of time while the task is being executed. When the predefined time condition is met, the boundary event triggers an associated action, interrupting or non-interrupting the ongoing task or subprocess.

## Timers
Timers introduce the ability to trigger events at specific time intervals. They can be configured in three different ways: as a date, a duration, or a cycle. These configurations can use static values or dynamic/computed values.
* **Date**: Events triggered on a specific date and time.
* Format: ISO 8601 (e.g., `2019-10-01T12:00:00Z` or `2019-10-02T08:09:40+02:00`)
* **Time Duration**: Events triggered after a specified duration.
* Format: ISO 8601 duration expression (`P(n)Y(n)M(n)DT(n)H(n)M(n)S`)
* P: Duration designator
* Y: Years
* M: Months
* D: Days
* T: Time designator
* H: Hours
* M: Minutes
* S: Seconds
Examples:
* `PT15S` - 15 seconds
* `PT1H30M` - 1 hour and 30 minutes
* `P14D` - 14 days
* `P3Y6M4DT12H30M5S` - 3 years, 6 months, 4 days, 12 hours, 30 minutes, and 5 seconds
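The duration designators above can be illustrated with a small sketch. This is not FlowX code; the `duration_to_seconds` helper is a hypothetical illustration that converts an ISO 8601 duration into seconds (treating a year as 365 days and a month as 30 days, which ignores calendar ambiguity):

```python
import re

# Hypothetical helper (not part of FlowX): converts an ISO 8601 duration
# such as "PT1H30M" into total seconds. Calendar parts are approximated
# (1 year = 365 days, 1 month = 30 days) for illustration only.
_DURATION_RE = re.compile(
    r"^P(?:(\d+)Y)?(?:(\d+)M)?(?:(\d+)D)?"
    r"(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?$"
)

def duration_to_seconds(expr: str) -> int:
    m = _DURATION_RE.match(expr)
    if not m or expr == "P":
        raise ValueError(f"not an ISO 8601 duration: {expr!r}")
    years, months, days, hours, minutes, seconds = (
        int(g) if g else 0 for g in m.groups()
    )
    return (((years * 365 + months * 30 + days) * 24 + hours) * 60
            + minutes) * 60 + seconds

print(duration_to_seconds("PT15S"))    # 15
print(duration_to_seconds("PT1H30M"))  # 5400
print(duration_to_seconds("P14D"))     # 1209600
```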

* **Time Cycle** (available for Timer Start Event): Events triggered at repeating intervals.
* Option 1: ISO 8601 repeating intervals format (`R`)
* Examples:
* `R5/2023-08-29T15:30:00Z/PT2H`: Every 2 hours, up to five times, starting 29 August at 15:30 UTC
* `R/P1D`: Every day, infinitely

* Option 2: Using cron expressions
* Example: `0 0 9-17 * * MON-FRI`: Every hour on the hour from 9 a.m. to 5 p.m. UTC, Monday to Friday
Important: Only Spring cron expressions are permissible for configuration. Refer to the [**official documentation**](https://docs.spring.io/spring-framework/4.0/docs/current/javadoc-api/org/springframework/scheduling/support/CronExpression.html) for detailed information on configuring Spring Cron expressions.
Scheduled timer events are clearly indicated within the process definition list of an application at Runtime, as illustrated in the following example:

You can find more information about timer expressions in the section below:
[Timer expressions](./timer-expressions)
## Configuration
For each node type, the following timer types can be configured:
| Node Type | Date | Duration | Cycle |
| ------------------------ | ---- | -------- | ----- |
| Timer Start Event | Yes | No | Yes |
| Timer Intermediate Event | Yes | Yes | No |
| Timer Boundary Event | Yes | Yes | No |
A process definition version should have a single Timer Start Event per swimlane.
For comprehensive details on each timer event node in this section, please refer to the corresponding documentation:
# Timer expressions
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/timer-events/timer-expressions
When working with FlowX.AI components, there are multiple scenarios in which timer expressions are needed.
There are two timer expressions formats supported:
* [**Cron expressions**](#cron-expressions) - used to define the expiry date on processes
* [**ISO 8601**](#iso-8601) - used to define the duration of a response timeout or for a timer expression
### Cron expressions
A cron expression is a string made up of **six mandatory subexpressions (fields), each of which specifies an aspect of the schedule** (for example, `* * * * * *`). These fields, separated by white space, can contain any of the allowed values with various combinations of the allowed characters for that field.
A field may be an asterisk (`*`), which always stands for “first-last”. For the day-of-the-month or day-of-the-week fields, a question mark (`?`) may be used instead of an asterisk.
Important: Only Spring cron expressions are permissible for configuration. Refer to the [**official documentation**](https://docs.spring.io/spring-framework/reference/integration/scheduling.html#scheduling-cron-expression) for detailed information on configuring Spring Cron expressions.
Subexpressions:
1. Seconds
2. Minutes
3. Hours
4. Day-of-Month
5. Month
6. Day-of-Week
An example of a complete cron expression is the string `0 0 12 ? * FRI`, which means **every Friday at 12:00:00 PM**.
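The six-field layout can be made concrete with a small sketch. This is not FlowX or Spring code; `split_cron` is a hypothetical helper that only splits an expression into its named fields for inspection:

```python
# Hypothetical helper (not part of FlowX): splits a Spring cron
# expression into its six named fields for inspection.
CRON_FIELDS = ("seconds", "minutes", "hours",
               "day_of_month", "month", "day_of_week")

def split_cron(expr: str) -> dict:
    parts = expr.split()
    if len(parts) != len(CRON_FIELDS):
        raise ValueError("a Spring cron expression has exactly six fields")
    return dict(zip(CRON_FIELDS, parts))

fields = split_cron("0 0 12 ? * FRI")          # every Friday at 12:00:00 PM
print(fields["hours"], fields["day_of_week"])  # 12 FRI
```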
More details:
[Scheduling cron expressions](https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#scheduling-cron-expression)
#### Cron Expressions are used in the following example:
* [**Process definition**](../../../building-blocks/process/process-definition) - **Expiry time** - a user can set up an `expiryTime` expression on a process; for example, a delay of 30s would be set up like:

### ISO 8601
ISO 8601 is an international standard covering the worldwide exchange and communication of date and time-related data. It can be used to standardize the following: dates, time of delay, time intervals, recurring time intervals, etc.
More details:
[ISO 8601 date format](https://www.digi.com/resources/documentation/digidocs/90001488-13/reference/r_iso_8601_date_format.htm)
[ISO 8601 duration format](https://www.digi.com/resources/documentation/digidocs//90001488-13/reference/r_iso_8601_duration_format.htm)
#### ISO 8601 format is used in the following examples:
* **Node config** - **Response Timeout** - can be triggered if, for example, a topic that you define and add in the **Data stream topics** tab does not respect the pattern
ISO 8601 dates and times:
| Format accepted | Value ranges |
| -------------------- | -------------------------------------------- |
| Year (Y)             | YYYY, four-digit; may be abbreviated to two-digit |
| Month (M) | MM, 01 to 12 |
| Week (W) | WW, 01 to 53 |
| Day (D) | D, day of the week, 1 to 7 |
| Hour (h) | hh, 00 to 23, 24:00:00 as the end time |
| Minute (m) | mm, 00 to 59 |
| Second (s) | ss, 00 to 59 |
| Decimal fraction (f) | Fractions of seconds, any degree of accuracy |
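As a standard-library illustration (not FlowX-specific), Python's `datetime` parses the same ISO 8601 date-time shape the table above describes:

```python
from datetime import datetime, timedelta

# Standard-library illustration (not FlowX-specific): parsing an
# ISO 8601 date-time with a zone offset, as described in the table above.
dt = datetime.fromisoformat("2019-10-02T08:09:40+02:00")
print(dt.year, dt.month, dt.day, dt.hour)    # 2019 10 2 8
print(dt.utcoffset() == timedelta(hours=2))  # True
```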

* [**Actions**](../../actions/actions) - **Timer expression** - it can be used if a delay is required on that action

# Timer Intermediate Event (interrupting)
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/timer-events/timer-intermediate-event
A Timer Intermediate Event (interrupting) is an event that is triggered based on a specified time duration or date. It is placed within the flow of a process and serves as a point of interruption and continuation.

## Configuring a timer intermediate event (interrupting)
| Field      | Validations | Accepted Values                                 |
| ---------- | ----------- | ----------------------------------------------- |
| Definition | Mandatory   | ISO 8601 formats (date/duration); process param |

### Timer type:
#### Date
* event triggered on a specific date-time
* ISO 8601 format (examples: `2019-10-01T12:00:00Z` - UTC time, `2019-10-02T08:09:40+02:00` - UTC plus a two-hour zone offset)

#### Duration
Event triggered after a specified duration once the token reaches the timer node (or its parent node), for example: `PT6S`.
* Definition:
* ISO
* Cron
* Process param
## General Rules
* A Timer Intermediate Event is triggered based on its duration or date definition.
* When the token enters a Timer Intermediate Event, a scheduler is set, and it waits for the timer event to be triggered.
* After the timer is triggered, the process instance continues.
# Timer start event (interrupting)
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/timer-events/timer-start-event
A Timer Start Event initiates a process instance based on a specified time or schedule.

Please note that a process definition version can accommodate only one **Timer Start Event**.
## Configuration
The Timer Start Event supports two timer types:
| Timer Type | Description |
| ---------- | ------------------------------------------------------------------------------------- |
| Date | Specifies an exact date and time for triggering the event (ISO 8601 format) |
| Cycle | Specifies a repeating interval using ISO 8601 repeating intervals or cron expressions |
The Start Timer Event supports either ISO 8601 formats or Spring cron expressions for defining timer values.
Starting a process via registered timers requires sending a process start message to Kafka, necessitating a service account and authentication. For detailed guidance, refer to:
[**Service Accounts**](../../../../setup-guides/access-management/configuring-an-iam-solution#scheduler-service-account)
## Timer type details
### Date
Specifies an exact date and time for triggering the event. You can use ISO 8601 date format for accurate date-time representation.
When configuring a Date timer, you can set:
* **Date**: Select a specific date (format: yyyy-mm-dd) using the date picker
* **Time**: Set the specific time when the timer should trigger

### Cycle
Specifies a repeating interval for triggering the event. For the Cycle timer definition, you can use either:
#### ISO 8601 repeating intervals
For standardized time intervals (e.g., "R5/PT10M" for repeating 5 times with 10 minutes between each)

When configuring a Cycle timer (ISO 8601 repeating intervals) you can set:
* **Repeat Every**: The interval between triggers (e.g., "2 hours")
* **# of repeats**: How many times the timer should trigger (e.g., "3")
* **Infinite**: Option to make the timer repeat indefinitely
* **Start Time**: When the timer should begin (format: yyyy-mm-dd)
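The fields above map onto the ISO 8601 repeating-interval string shown earlier. The following sketch is not FlowX code; `cycle_expression` is a hypothetical helper that composes such a string, with `repeats=None` modeling the "Infinite" option (a bare `R` prefix):

```python
from typing import Optional

# Hypothetical helper (not part of FlowX): composes the ISO 8601
# repeating-interval string that the Cycle timer fields map onto.
def cycle_expression(start: str, interval: str,
                     repeats: Optional[int] = None) -> str:
    # repeats=None models the "Infinite" option (bare "R" prefix)
    prefix = "R" if repeats is None else f"R{repeats}"
    return f"{prefix}/{start}/{interval}"

print(cycle_expression("2023-08-29T15:30:00Z", "PT2H", 5))
# R5/2023-08-29T15:30:00Z/PT2H
print(cycle_expression("2023-08-29T15:30:00Z", "P1D"))
# R/2023-08-29T15:30:00Z/P1D
```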
#### Cron expressions
For more complex scheduling patterns (e.g., "0 0 12 \* \* MON-FRI" for 12pm every weekday)

## Activate/deactivate start timer events
All timers can be activated/deactivated in the Runtime section under "Scheduled Processes":

If a project contains multiple versions with Start Timer Event nodes, a scheduler will be generated only for the ones included in the version set in the active policy.

## Usage examples
### Date timer example: Employee Onboarding Reminder
In this scenario, the Timer Start Event triggers an employee onboarding process at a specific date and time.

* Start Event (Timer Start Event) - New Hire Start Date
* Timer Definition: 2023-09-01T09:00:00Z (ISO 8601 format) → This means the process will initiate automatically at the specified date and time.
* This event serves as the trigger for the entire process.
* Transition → Employee Onboarding Notification

* Employee Onboarding Notification
* Notify new employee about onboarding requirements by sending an email notification with a template called "Important Onboarding Information"
* Actions: The HR team or automated system sends out necessary email information/documents, and instructions to the new employee.
* After the notification is sent, the process transitions to the Complete Onboarding node.

* Complete Onboarding
* Employee onboarding completed
* At this point, the employee's onboarding process is considered complete.
* Actions: The employee may have completed required tasks, paperwork, or orientation sessions.
### General rules
* Schedulers are generated only for builds that are part of the active policy.
* If you change the active policy, processes with Timer Start Event nodes might appear or disappear from the scheduled processes list if they aren't part of the active build.
* You can view scheduled processes in the Runtime section under "Scheduled Processes" (available since version 4.6.0).
* When a build in the active policy is updated with new Timer Start Event settings:
* The scheduler is updated based on the new settings.
* The scheduler state (active or suspended) remains the same as before.
# User task node
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/node/user-task-node
This node represents an interaction with the user. It is used to display a piece of UI (defined in the UI Designer) or a custom Angular component. You can also define actions available for users to interact with the process.
## Configuring a user task node

User task nodes allow you to define and configure UI templates and possible [actions](../actions/actions) for a certain template config node (ex: [button components](../ui-designer/ui-component-types/buttons)).
#### General Config
* **Node name** - the name of the node
* **Can go back** - setting this to true will allow users to return to this step after completing it. When encountering a step with `canGoBack` false, all steps found behind it will become unavailable.
* [**Stage**](../../platform-deep-dive/core-extensions/task-management/using-stages) - assign a stage to the node

#### Data stream topics
* **Topic Name** - the topic name where the process engine listens for the response (this should be added to the platform and match the topic naming rule for the engine to listen to it) - available for UPDATES topics (Kafka receive events)
A naming pattern must be defined on the [process engine configuration](../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka) to use the defined topics. It is important to know that all the events that start with a configured pattern will be consumed by the Engine. For example, `KAFKA_TOPIC_PATTERN` is the topic name pattern where the Engine listens for incoming Kafka events.
* **Key Name** - holds the result received from the external system; if the key already exists in the process values, it will be overwritten
#### Task Management
* **Update task management** - force [Task Management](../../platform-deep-dive/core-extensions/task-management/task-management-overview) plugin to update information about this process after this node

## Configuring the UI
The **FlowX Designer** includes an intuitive [UI Designer](../ui-designer/ui-designer) (drag-and-drop editor) for creating diverse UI templates. You can use various elements from basic [buttons](../ui-designer/ui-component-types/buttons), indicators, and [forms](../ui-designer/ui-component-types/form-elements/), but also predefined [collections](../ui-designer/ui-component-types/collection/collection) or [prototypes](../ui-designer/ui-component-types/collection/collection-prototype).
### Accessing the UI Designer
To access the **UI Designer**, follow the next steps:
1. Open **FLOWX Designer** and from the **Processes** tab select **Definitions**.
2. Select a **process** from the process definitions list.
3. Click the **Edit** **process** button.
4. Select a **user task node** from the Process Designer, then click the **brush** icon to open the **UI Designer**.

### Predefined components
The UI can be defined using the available components provided by FlowX, via the UI Designer available at the node level.
Predefined components can be split into three categories:
These elements are used to group different types of components, each having a different purpose:
* [**Card**](../ui-designer/ui-component-types/root-components/card) - used to group and configure the layout for multiple **form elements.**
* [**Container**](../ui-designer/ui-component-types/root-components/container) - used to group and configure the layout for multiple **components** of any type.
* [**Custom**](../ui-designer/ui-component-types/root-components/custom) - these are Angular components developed in the container application and passed to the SDK at runtime, identified here by the component name
More details in the following section:
The root component can hold a hierarchical component structure.
Available children for **Card** and **Container** are:
* **Container** - used to group and align its children
* **Form** - used to group and align form field elements (**inputs**, **radios**, **checkboxes**, etc)
* **Image** - allows you to configure an image in the document
* **Text** - a simple text can be configured via this component, basic configuration is available
* **Hint** - multiple types of hints can be configured via this component
* **Link** - used to configure a hyperlink that opens in a new tab
* **Button** - Multiple options are available for configuration, the most important part being the possibility to add actions
* **File Upload** - A specific type of button that allows you to select a file
More details in the following section:
These elements are used to collect user input and can be added only inside a **Form** component. They have multiple properties that can be managed.
1. [**Input**](../ui-designer/ui-component-types/form-elements/input-form-field) - FLOWX form element that allows you to generate an input form field
2. [**Select**](../ui-designer/ui-component-types/form-elements/select-form-field) - to add a dropdown
3. [**Checkbox**](../ui-designer/ui-component-types/form-elements/checkbox-form-field) - the user can select zero or more input from a set of options
4. [**Radio**](../ui-designer/ui-component-types/form-elements/radio-form-field) - the user is required to select one and only one input from a set of options
5. [**Datepicker**](../ui-designer/ui-component-types/form-elements/datepicker-form-field) - to select a date from a calendar picker
6. [**Switch**](../ui-designer/ui-component-types/form-elements/switch-form-field) - allows the user to toggle an option on or off
More details in the following section:
### Custom components
These are components developed in the web application and referenced here by component identifier. This will dictate where the component is displayed in the component hierarchy and what actions are available for the component.
To add a custom component in the template config tree, we need to know its unique identifier and the data it should receive from the process model.
More details in the following section:
# Data model
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/process/data-model
The Data Model is a centralized configuration feature that enables efficient management of key-value attributes inside process definitions. It supports multiple attribute types, such as strings, numbers, booleans, objects, arrays, and enums, offering users the ability to define, update, delete, and apply data attributes seamlessly.

## Overview
The Data Model serves as the foundation for managing structured information throughout your process definitions. It provides a centralized approach to define, organize, and maintain the data attributes that drive your business processes and user interfaces.
### Attribute types
The Data Model supports the following attribute types:
* STRING
* NUMBER
* CURRENCY
* BOOLEAN
* OBJECT
* ARRAY
* ARRAY OF STRINGS
* ARRAY OF NUMBERS
* ARRAY OF BOOLEANS
* ARRAY OF OBJECTS
* ARRAY OF ENUMS
* ENUM
* DATE
#### Currency attribute
Currencies are managed using an object structure that ensures accurate representation and localization.

* **Currency Object Structure**:
* Includes `amount` (numerical value) and `code` (ISO 4217 currency code, e.g., USD, EUR).
* Example:
```json
{
  "amount": 12000.78,
  "code": "USD"
}
```
* **Regional Formatting**:
* Currency values adapt to regional conventions for grouping, decimals, and symbol placement. For instance:
* **en-US (United States)**: `$12,000.78` (symbol before the value, comma for grouping, dot for decimals).
* **ro-RO (Romania)**: `12.000,78 RON` (dot for grouping, comma for decimals, code appended).
* **Fallback Behavior**: If the `code` is null, the system defaults to the locale's predefined currency settings.
* **UI Integration**:
* Currency input fields dynamically format values based on locale settings and save the `amount` and `code` into the data store.
* Sliders and other components follow the same behavior, formatting values and labels according to the locale.
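The regional conventions above can be illustrated with a short sketch. This is not FlowX code; `format_currency` and its locale handling are assumptions for demonstration only (the null-`code` fallback is omitted for brevity):

```python
# Illustrative sketch (not FlowX code): formats the currency object from
# the example above under two regional conventions.
def format_currency(value: dict, locale: str) -> str:
    amount, code = value["amount"], value["code"]
    grouped = f"{amount:,.2f}"            # en-US shape: 12,000.78
    if locale == "en-US":                 # symbol before the value
        symbol = {"USD": "$"}.get(code, code)
        return f"{symbol}{grouped}"
    if locale == "ro-RO":                 # swap separators, append code
        swapped = grouped.translate(str.maketrans(",.", ".,"))
        return f"{swapped} {code}"
    raise ValueError(f"unsupported locale: {locale}")

value = {"amount": 12000.78, "code": "USD"}
print(format_currency(value, "en-US"))  # $12,000.78
print(format_currency(value, "ro-RO"))  # 12.000,78 USD
```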
Check this section for more details about l10n & i18n
#### Date attribute
Dates are represented in ISO 8601 format and dynamically formatted based on locale and application settings.

* **Locale-Specific Date Formats**: FlowX dynamically applies regional date formatting rules based on the locale. For instance:
* **en-US (United States)**: `MM/DD/YYYY` → `09/28/2024`
* **fr-FR (France)**: `DD/MM/YYYY` → `28/09/2024`
* **Customizable Formats**: You can choose from predefined formats (e.g., short, medium, long, full) or define custom formats at both application and UI Designer levels.
* **Timezone Handling**:
* **Standard Date**: Adjusts to the current timezone.
* **Date Agnostic**: Ignores time zones, using GMT for consistent representation.
* **ISO 8601 Compliance**: Ensures compatibility with international standards.
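The locale-specific patterns above can be sketched with standard-library `strftime` formats. This is not FlowX code; the `LOCALE_DATE_FORMATS` mapping is a hypothetical stand-in for the platform's locale rules:

```python
from datetime import datetime

# Illustrative sketch (not FlowX code): the locale-specific date
# patterns described above, expressed as strftime formats.
LOCALE_DATE_FORMATS = {"en-US": "%m/%d/%Y", "fr-FR": "%d/%m/%Y"}

def format_date(iso_date: str, locale: str) -> str:
    # Parse the ISO 8601 date, then render it per locale convention
    return datetime.fromisoformat(iso_date).strftime(LOCALE_DATE_FORMATS[locale])

print(format_date("2024-09-28", "en-US"))  # 09/28/2024
print(format_date("2024-09-28", "fr-FR"))  # 28/09/2024
```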
Check this section for more details about l10n & i18n
#### Number attribute
The **Number** attribute type supports two subtypes: **integers** and **floating point numbers**, providing flexibility to represent whole numbers or decimal values as required.

* **Subtypes**
* **Integer**: Represents whole numbers without any fractional or decimal part.
* Example: `1, 42, 1000`
* **Floating Point**: Represents numbers with a decimal point, enabling precise storage and representation of fractional values.
* Example: `3.14, 0.01, -123.456`
* **Locale-Specific Formatting**:
* Numbers adapt to regional conventions for decimal separators and digit grouping. For example:
* **en-US (United States)**: `1,234.56` (comma for grouping, dot for decimals)
* **de-DE (Germany)**: `1.234,56` (dot for grouping, comma for decimals)
* **fr-FR (France)**: `1 234,56` (space for grouping, comma for decimals)
* **Precision Settings**:
* **Minimum Decimals**: Ensures a minimum number of decimal places are displayed, adding trailing zeros if necessary.
* **Maximum Decimals**: Limits the number of decimal places stored, rounding values to the defined precision.

These settings can be overridden at the application level or in the **UI Designer** for specific components.
* **Validation**:
* Enforce range constraints (e.g., minimum and maximum values).
* Input fields automatically apply formatting masks to prevent invalid data entry.
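The precision settings described above can be sketched as follows. This is not FlowX code; `format_number` is a hypothetical helper that applies minimum/maximum decimals, then a de-DE style separator swap:

```python
# Illustrative sketch (not FlowX code): applies the minimum/maximum
# decimal settings described above, then a de-DE separator swap.
def format_number(value: float, min_decimals: int, max_decimals: int,
                  locale: str = "en-US") -> str:
    text = f"{value:,.{max_decimals}f}"        # rounds to max precision
    whole, _, frac = text.partition(".")
    # Trim trailing zeros, but keep at least min_decimals places
    frac = frac.rstrip("0").ljust(min_decimals, "0")
    text = whole if not frac else f"{whole}.{frac}"
    if locale == "de-DE":                      # swap grouping/decimal marks
        text = text.translate(str.maketrans(",.", ".,"))
    return text

print(format_number(1234.5, 2, 2))              # 1,234.50
print(format_number(1234.5678, 0, 2, "de-DE"))  # 1.234,57
```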
Check this section for more details about l10n & i18n
## Creating and managing a data model
In the Data Model, you can add new key-pair values, allowing seamless integration with the UI Designer. This functionality enables quick shortcuts for adding new keys without switching between menus.
**Key Naming Conventions**:
* Keys in the data model are case-sensitive. This means 'customerName' and 'CustomerName' would be treated as two distinct attributes.
* It's recommended to follow a consistent naming convention (such as camelCase) throughout your data model to avoid confusion.
* Special characters and spaces in keys should be avoided.
Example:
### Validation rules
You can define validation rules for your data model attributes to ensure data integrity throughout your processes:
* **Required Fields**: Mark attributes that must have values before a process can proceed
* **Range Validation**: Set minimum and maximum values for numeric attributes
* **Pattern Matching**: Define regular expression patterns for string validation
* **Custom Validation**: Implement custom validation logic using business rules
Validation rules can be defined at the data model level and enforced throughout UI components, business rules, and integration points, ensuring consistent data quality.
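The rule kinds listed above can be sketched as a single check. This is not the FlowX API; `validate` and its parameters are hypothetical, for illustration only:

```python
import re

# Hypothetical sketch (not the FlowX API): evaluates the rule kinds
# listed above against a single attribute value.
def validate(value, required=False, minimum=None, maximum=None, pattern=None):
    errors = []
    if required and value in (None, ""):
        errors.append("required")
    if value is not None:
        if minimum is not None and value < minimum:
            errors.append("below minimum")
        if maximum is not None and value > maximum:
            errors.append("above maximum")
        if pattern is not None and not re.fullmatch(pattern, str(value)):
            errors.append("pattern mismatch")
    return errors

print(validate(42, required=True, minimum=0, maximum=100))  # []
print(validate(None, required=True))                        # ['required']
print(validate("ab", pattern=r"[a-z]{3}"))                  # ['pattern mismatch']
```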
## Data model reference
The "View References" feature allows you to see where specific attributes are used within the data model. This includes:
* **Process Keys**: Lists all process keys linked to an attribute.
* **UI Elements**: Displays references such as the element label, node name, and UI Element key.

## Sensitive data
Protect sensitive information by flagging specific keys in the data model. This ensures data is hidden from process details and browser console outputs.
## Reporting
The **Use in Reporting** tag allows you to designate keys for use in the reporting plugin, enabling efficient tracking and analysis.

Learn more about how to use Data Model attributes in the Reporting plugin.
## Integration with other platform features
The Data Model serves as the backbone for numerous platform features, enabling seamless data flow between different components of your FlowX application.
### Integration with UI Designer
The Data Model directly integrates with the UI Designer, allowing you to:
* Bind UI components to data model attributes
* Create dynamic UI layouts based on data model values
* Implement validation and formatting rules consistently across interfaces
### Integration with workflows
When designing workflows in the Integration Designer, you can leverage the Data Model to:
* Pass data from processes to workflows using the Start Integration Workflow action
* Map data between your process and external systems
* Transform and validate data during integration flows
Learn more about integrating Data Model with workflows
### Decision model integration
Data model attributes can be used as inputs for business rules, enabling:
* Complex business rule implementation using structured data
* Consistent data handling across decision points
## Best practices
### Structuring your data model
* **Use hierarchical structures**: Organize related data in nested objects to improve maintainability
* **Consistent naming**: Adopt a consistent naming convention for all attributes
* **Documentation**: Add clear descriptions for complex attributes to improve team understanding
* **Modularization**: Group related attributes into logical objects rather than using flat structures
### Version control and migration
When evolving your data model across versions:
* **Backwards compatibility**: Ensure changes don't break existing processes
* **Migration strategy**: Plan for how existing process instances will handle schema changes
* **Documentation**: Maintain clear documentation of data model changes between versions
## Troubleshooting
### Common issues
* **Missing data**: Ensure all required attributes are defined in your data model before referencing them
* **Type mismatches**: Verify that attributes are assigned the correct data type
* **Validation errors**: Check validation rules when data appears to be rejected
### Debugging tips
* Use the data model reference feature to track attribute usage
* Examine process instance data to verify attribute values
* Review UI component bindings to ensure proper data mapping
## Summary
The Data Model is a powerful feature that enables structured data management across your FlowX applications. By following best practices for design, validation, and integration, you can build robust process applications with consistent data handling throughout all stages of your business processes.
# Navigation areas
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/process/navigation-areas
Navigation areas play a pivotal role in user interface design. They enhance the user experience by providing a structured, organized, and efficient way for users to interact with and explore various features and solutions.
In the navigation panel, the navigation hierarchy should be displayed beneath platform tabs, which include options for both web and mobile platforms. The default tab selected upon opening should be the web platform.

## Adding new areas
To create a new navigation area, follow these steps:
1. Open **FlowX Designer**.
2. Either open an existing process definition or create a new one.
3. Toggle the navigation areas menu by clicking on **Navigation view**.
4. Select **Create New Area**.
Navigation configurations made on one platform (for example, Web) are not automatically duplicated to the other platform by default. You must copy or recreate them.

## Interacting with areas
Once you've added a new navigation area, you can interact with it through the breadcrumbs menu, which provides the following options:
Use **Area Settings** to configure the following properties:
* **Name** - a name for the navigation area element that is visible in the **Navigation menu**
This does not represent the label for your navigation area (for example, for a step element) that would be visible in the process at runtime. To set the label for your element, use the UI Designer to edit it.
To do that, trigger the **Navigation View** menu, then navigate to your element and click on the breadcrumbs. Afterward, click "UI Designer."
* **Alternative Flows** - There might be cases when you want to include or exclude process nodes based on some information that is available as a start condition.

Use UI Designer to configure the following:
* **Settings**:
* **Generic**: Properties available across all platforms (Web, Android, and iOS)
* **Platform-specific configuration and styling**: For components across Web, iOS, and Android.

The "Copy Navigation Areas" feature allows you to replicate navigation hierarchies and elements between different platforms.
The Copy/Paste Navigation Areas feature facilitates the duplication of navigation configurations within the same process definition and environment. It does not support copying across different process definitions or environments.

To delete a navigation area:
1. Access the **Navigation view** menu within the FlowX Designer.
2. Choose the intended navigation area, then click on the breadcrumbs menu located on the right.
3. Select **Delete Area**.
Note: Deleting the navigation area will also remove any associated child areas. Additionally, any user tasks contained within will be unassigned and relocated to the User Tasks section within the **Navigation view**.
## Navigation areas types
You can create the following types of navigation areas:
A stepper guides users through a sequence of logical and numbered steps, aiding in navigation.
A user interface element presenting multiple tabs for navigation or organizing content. It allows users to switch between different sections or views within the application.
This type displays basic full-page content for an immersive user experience.
A modal temporarily takes control of the user's interaction within a process as an overlay, requiring user interaction before returning to the main application or website functionality.
A container-like element grouping specific navigation areas or user tasks, commonly used for organizing content such as headers and footers within a process definition.
Allows subprocesses to be designed under the parent process hierarchy, with validation and design restrictions enforced.

### Stepper
A stepper breaks down progress into logical and numbered steps, making navigation intuitive.
You can also define a stepper using a step structure, letting users track progress and return to a previously visited step (as shown in the navigation areas in the example above).
#### Stepper in step example

#### Steps
Steps are individual elements within a stepper, simplifying complex processes.

### Tab bar
The Tab Bar is a crucial component in user interfaces, facilitating seamless navigation and content organization.

Key Features and usage guidelines:
The navigation Tab Bar offers specialized support for **parallel zones** within the same **user task**. This feature allows users to navigate effortlessly between different sections or functionalities.
Users can access multiple tabs within the tab bar, enabling quick switching between various views or tasks.

When setting up a tab bar zone in your BPMN diagram, ensure that you always initiate a parallel zone using **Parallel Gateway** nodes. This involves using one Parallel Gateway node to open the zone and another to close it, as demonstrated in the example above. This approach ensures clarity and consistency in your diagram structure, facilitating better understanding.
**Mobile development considerations**: when developing for mobile devices, ensure the tab bar is **always** positioned as a **root component** in the navigation. It should remain consistently visible, prioritizing ease of navigation.

#### Tabs
Tabs are clickable navigation elements within the Tab Bar that allow users to switch between different sections or functionalities within an application.

Each tab could contain a user task or multiple user tasks that can be rendered in parallel.
You can also use [**Start embedded subprocess**](../node/call-subprocess-tasks/start-embedded-subprocess) nodes to render defined subprocesses as a tab. Check the [**Start embedded subprocess**](../node/call-subprocess-tasks/start-embedded-subprocess) section for more details.
#### Tab bars inside tabs
In addition to regular tab bars, you have the option to implement nested tab bars, allowing for the display of another set of tabs when accessing a specific tab.

To achieve this configuration, you'll need to create an additional parallel zone inside the previously created one for the main tab bar. This section should encompass the child tabs of the primary tab bar in your diagram. Once these child tabs are defined, close the parallel zone accordingly.

### Page
Page navigation involves user interaction, typically through clicking on links, buttons, or other interactive elements, to transition from one page to another.

A page could contain multiple user tasks displayed either in parallel (single page application) or one by one (wizard style).

#### Navigation type (only for Web)
This property is available starting with **v4.1.0** platform release.
You can add step-by-step or wizard-style navigation within a page (applicable when a page contains more than one User Task). Users can then navigate through the application in a guided manner, accessing each step within the designated area and page.

* **Single page form** (default): The Web Renderer will display all User Tasks within the same page (in parallel).

For optimal navigation, we suggest utilizing cards to guide users through the content.
To maintain a clean UI while displaying user tasks on a single page, use cards as the root UI elements and apply accordions to them. This allows you to collapse each card after it is validated by an action.

Child areas will be rendered on the same page.
* **Wizard**: For the Wizard option, the Web Renderer will display one user task at a time, allowing navigation using custom Next and Back buttons.

Child areas will be presented sequentially. It's crucial to configure actions properly to ensure smooth user navigation.
### Modal
Modals offer temporary control over user interaction within a process, typically appearing as pop-ups or overlays. They guide users through specific tasks before returning to the main functionality of the application or website.

To enable user interaction with a modal, you can configure it to be dismissable in two ways:
* Dismiss on backdrop click
* Display dismiss confirmation alert
Here's how to configure these options:
1. Navigate to your configured modal and access the UI Designer.

2. In the UI Designer navigation panel, select the **Settings** tab, then choose **Generic**.
3. Check the box labeled "dismissable." If you also want to display a dismiss confirmation alert, configure the:
* Title
* Message
* Confirm button label
* Cancel button label

By configuring these options, you can customize the behavior of your modals to enhance user experience and guide them effectively through tasks.
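A configured modal's dismiss settings could be represented roughly as follows. This is an illustrative sketch only — the exact schema is internal to the platform; the `modalDismissAlert` property names mirror those listed later in the dynamic values table:

```json
{
  "dismissable": true,
  "modalDismissAlert": {
    "title": "Discard changes?",
    "message": "Your progress on this step will be lost.",
    "confirmLabel": "Discard",
    "cancelLabel": "Keep editing"
  }
}
```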

### Zone
A Zone serves as a container designed to govern UI navigation by grouping specific elements together. It is best used for processes featuring both a header and a footer.
#### Navigation type (only for Web)
You can add step-by-step or wizard-style navigation within a specific zone (applicable when a zone contains more than one User Task). Users can then navigate through the application in a guided manner, accessing each step within the designated area and page.
* **Single page form** (default): The Web Renderer will display all User Tasks within the same zone.

For optimal navigation, we suggest utilizing cards to guide users through the content.
Child areas will be rendered on the same page.
* **Wizard**: For the Wizard option, the Web Renderer will display one user task at a time, allowing navigation using custom Next and Back buttons.

Child areas will be presented sequentially. It's crucial to configure actions properly to ensure smooth user navigation.
Zones with headers and footers are exclusively accessible in web configurations. They are not supported as navigation areas for mobile development.

#### How to create a header/footer zone
To establish a header/footer zone, follow these steps:
1. Begin by opening a new parallel zone and ensure to close it where you want the header/footer zone to end.

2. Add two new user task nodes to your process, one to serve as the header and the other as the footer.
3. Connect the first parallel gateway node to both of them.

4. Now connect the header and footer user tasks to the closing Parallel Gateway.

5. In the navigation areas menu, add a new zone labeled "Header Zone" and another labeled "Footer Zone".
6. Position the header at the top of your navigation structure (or at the top of your primary navigation element) and the footer at the bottom.
7. Assign the user tasks to the "Header Zone" and the "Footer Zone" respectively.
When working with containers directly within navigation zones, you have the flexibility to set the sticky/static property directly on your specific navigation zone using the UI Designer, without having to add specific user tasks. However, the best practice depends on the specific use case you're aiming to implement.
For instance, if you're looking to incorporate footers containing actions or buttons such as "Cancel application" or "Continue," it's advisable to include user tasks, allowing you to configure node actions effectively.
On the other hand, if your goal is to integrate static headers and footers featuring branding assets and external URLs, it's simpler to add them directly to the navigation areas. In this scenario, you can incorporate images or text with external URLs in your containers' styling.

8. Customize the user tasks' UI according to your preferences.

#### Styling options
To customize the appearance of headers and footers, you need to utilize containers as the root UI elements for **user tasks** or navigation areas.

You have two styling options available for these containers:

* **Static**: This style remains fixed and does not scroll along with the page content.
* **Sticky**: When the sticky property is enabled, the container maintains its position even during scrolling.
* **Layout**: You can specify minimum distances between the container and its parent element while scrolling. At runtime, sticky containers keep their position on scroll relative to the top/bottom/left/right margins of the parent element.

In mobile configurations, the right and left properties for sticky layout are ignored by the mobile renderer.

## Navigation areas demo
For more information, you can also check the following demo on our Academy website:
## Alternative flows
There might be cases when you want to include or exclude process nodes based on some information that is available at start.

For example, in case of a bank onboarding process, you might want a few extra BPMN nodes in the process in case the person trying to onboard is a minor.
For these cases, we have added the possibility of defining **alternative flows**.
For each navigation area or node, you can choose whether it is only available in certain cases. In the example below, we create two alternative flows, each including a different stepper, plus one modal that is not part of any alternative flow.

When starting a process, specify which flow to initiate by passing the name of the alternative flow in the "navigationFlow" parameter:
```json
{"navigationFlow": "First Flow"}
```

If a node is not assigned to an alternative flow, it is included in all possible flows.
A node can also belong to multiple alternative flows.
When configuring alternative flows in the main process, remember that they will also be applied in any embedded subprocesses with identical names.
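The inclusion rules above can be sketched as a small helper. This is illustrative only — `isNodeIncluded` and the flow representation are assumptions for the example, not the platform's actual implementation:

```javascript
// Decide whether a node participates in the flow a process was started with.
// nodeFlows: alternative flow names assigned to the node (empty = all flows)
// navigationFlow: the value passed at process start, e.g. "First Flow"
function isNodeIncluded(nodeFlows, navigationFlow) {
  // A node with no assigned alternative flow is part of every possible flow.
  if (!nodeFlows || nodeFlows.length === 0) return true;
  // A node can belong to several alternative flows at once.
  return nodeFlows.includes(navigationFlow);
}
```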
# Process Designer
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/process/process
The Process Designer workspace is tailored for creating and editing business processes, featuring a menu with all the essential elements needed to design a process.

When you encounter a process definition menu, it encompasses the following essential elements:
Awesome Process: This serves as the name of the process definition, providing a clear identifier for the workflow.
This section displays both the version branch name and the current state of the published version. It offers insights into the active development state of the workflow.
When selected, this icon opens up additional options to enhance visibility and control over the various process definitions and their branches.
To commit alterations to the workflow, you can employ this designated action found within the version menu. Triggering the submission action prompts the appearance of a modal window, inviting you to provide a commit message for context.
This reassuring notification indicates that any modifications made to the workflow have been automatically saved, eliminating the need for manual user intervention. It ensures the safety of your work.
Utilize this icon to switch the current work mode from "Edit mode" to "Readonly." It empowers you to control the accessibility and editing permissions for the workflow.
Misconfiguration warnings are a proactive alert system that ensures process configurations align with the selected platforms. Platform-specific alerts, integrated into the frontend interface, guide optimal navigation and UI design and support informed decisions when configuring a process.
In the Node details tab, you can set the configuration details for a node.
We have designed FlowX.AI components to closely resemble their BPMN counterparts for ease of use.

Check the following section for more details about BPMN nodes and how to use them:
In the following sections, we will provide more details on how to use the Process Designer and its components.
## Process definition
The process is the core building block of the platform. Think of it as a representation of your business use case, for example making a request for a new credit card, placing an online food order, registering your new car or creating an online fundraiser supporting your cause.

A process is nothing more than a series of steps that need to be followed in order to get to the desired outcome: getting a new credit card, gathering online donations from friends or having your lunch brought to you. These steps involve a series of actions, whether automated or handled by a human, that allows you to complete your chosen business objective.
## Process instance
Once the desired processes are defined in the platform, they are ready to be used. Each time a process needs to be used, for example, each time a customer wants to request a new credit card, a new instance of the specified process definition is started in the platform. Think of the process definition as a blueprint for a house, and of the process instance as each house of that type that is being built.

From this point on, the platform takes care of following the process flow, handling the necessary actions, storing and interpreting the resulting data.
# Process definition
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/process/process-definition
The core of the platform is the process definition, which is the blueprint of the business process made up of nodes that are linked by sequences.

Once a process is defined and set as published on the platform, it can be executed, monitored, and optimized. When a business process is started, a new process instance is created based on the process definition.
### Audit log
In the top-right contextual menu you will find the **Audit** feature containing the following information:
* Timestamp
* User
* Application version
* Subject
* Event
* Subject Identifier
* Status
Some items in the Audit log are filterable, making it easy to track changes in the process.
## Data model
In the Data Model, you can add new key-value pairs, which lets you use shortcuts when adding new keys in the UI Designer, without having to switch back and forth between menus.

For more information on how to work with the Data Model, explore the following section:
## Swimlanes
Swimlanes offer a useful method of organizing process nodes based on process participants. By utilizing swimlanes, you can establish controlled access to specific process nodes for particular user roles.
#### Adding new swimlanes
To add new swimlanes, please follow these steps:
1. Access the **FlowX.AI Designer**.
2. Open an existing process definition or create a new one.
3. Identify the default swimlane and select it to display the contextual menu.

With the contextual menu, you can easily perform various actions related to swimlanes, such as adding or removing swimlanes or reordering them.
4. Choose the desired location for the new swimlane, either below or above the default swimlane.
5. Locate and click the **add swimlane icon** to create the new swimlane.

For more details about access management, check the following sections:
## Settings
### Process Name
* **Process definition name**: Edit the process definition name.
Name can only contain letters, numbers and the following special characters: \[] () . \_ -
### General
In the General settings, you can edit the process definition name, include the process in reporting, set general data, and configure expiry time expression using Cron Expressions and ISO 8601 formatting.
* **Available Platforms** (Single Choice): This enables configuration (Navigation Areas, UI Designer, misconfiguration warnings) only for the selected platform:
* **Web Only**: If the navigation areas are defined exclusively for the Web, the process should remain set to Web. This is because it is optimized solely for web usage and would not provide the same level of functionality or user experience on mobile devices.
* **Mobile Only**: If the navigation areas are defined only for Mobile, the process should be set to Mobile. This ensures that the process leverages mobile-specific features and design considerations, providing an optimal experience on mobile devices.
* **Omnichannel**: If the existing process has navigation areas defined for both Web and Mobile platforms, it should be set to Omnichannel. This ensures that users have a consistent experience regardless of the platform they are using.
This setting is available starting with **v4.1.0** platform release.
Navigation Areas, UI Designer and misconfigurations will be affected by this setting.
By default, new processes are set to **Web Only**. This ensures that they are initially optimized for web-based usage, providing a starting point for most users and scenarios.
* **Use process in reporting**: When enabled, this setting includes the process in reporting.
* **Use process in task management**: Enabling this option creates tasks that are displayed in the Task Manager plugin. For more details, refer to the [**Task Management**](../../platform-deep-dive/core-extensions/task-management/task-management-overview) section.
* **General data**: Refers to customizable data that can be both set and received in a response context.
* **Expiry time**: A user can set up an expiry time expression on a process definition to specify an expiry date.
| **Example** | **Expression** | **Explanation** |
| ------------------------- | ---------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| Daily Expiry at Midnight | `0 0 0 * * ?` | Sets the process to expire at 00:00 (midnight) every day. The `?` is used in the day-of-week field to ignore its value. |
| Hourly Expiry on Weekdays | `0 0 9-17 * * MON-FRI` | Sets the process to expire at the start of each hour between 9 AM and 5 PM on weekdays (Monday to Friday). |
| Expiry After a Duration | `PT3M22S` | Sets the process to expire after a duration of 3 minutes and 22 seconds from the start, using **ISO 8601 format**. |
FlowX supports Spring cron expressions. They **must always include six fields**: `seconds`, `minutes`, `hours`, `day of month`, `month`, and `day of week`.\
This differs from traditional UNIX cron, which uses only five fields. Be sure to include the `seconds` field in Spring cron expressions to avoid errors.
In the daily-midnight example, the cron expression includes seconds (`0`), minutes (`0`), hours (`0`), and wildcards for the day-of-month and month fields. The `?` in the day-of-week field tells the scheduler to ignore that field, because the day-of-month field is already specified.
You can use both ISO 8601 duration format (`PT3M22S`) and cron expressions (`0 0 0 * * ?`, `0 0 9-17 * * MON-FRI`) to define `expiryTime` expressions for process definitions.
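As a rough illustration of the two accepted formats, a classifier could first check for an ISO 8601 duration and otherwise expect the six-field cron shape. This is a hypothetical helper for explanation only; FlowX performs its own parsing and validation:

```javascript
// Classify an expiry expression as an ISO 8601 duration or a Spring cron expression.
function classifyExpiry(expr) {
  // ISO 8601 durations start with "P", e.g. PT3M22S or P1D
  if (/^P(\d|T)/.test(expr)) return "iso8601";
  // Spring cron expressions have exactly six space-separated fields
  if (expr.trim().split(/\s+/).length === 6) return "cron";
  return "invalid";
}
```

Note that a five-field UNIX-style expression such as `0 0 * * *` is rejected, matching the six-field requirement described above.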
For more information about **Cron Expressions** and **ISO 8601** formatting, check the following section:
[Timer Expressions](../node/timer-events/timer-expressions)

### Permissions
After defining roles in the identity provider solution, they will be available to be used in the process definition settings panel for configuring swimlane access.
When you create a new swimlane, it comes with two default permissions assigned based on a specific role: execute and self-assign. Other permissions can be added manually, depending on the needs of the user.

### Task management
* **Use process in task management**
* The Use Process in Task Management option enables the integration of a process definition with the Task Manager system.
* By activating this feature, the process becomes available for managing tasks within Task Management, allowing data, tasks, and status updates to flow between the process and the Task Manager.
* **Application url**
* The Application URL is the endpoint or link that directs users to the specific application instance where a process is executed. It serves as a reference point for accessing and managing tasks associated with a process directly from the Task Manager.
* Follows a standard format, e.g., `{baseURL}/appId/buildId/processes/resourceId/instance`.
* Can be dynamically defined using configuration parameters (e.g., `${genericParameter}`) or set at the process data level using business rules.
* **Keys to send to Task Manager**
* The Keys to Send to Task Manager configuration specifies the data fields (keys) from a process definition that are sent to the Task Manager for indexing and display.
* These keys are used to customize task-related views, filters, and parameters, making them accessible for tracking, filtering, and managing tasks effectively.
Important:
* Keys must exist in the process data model.
* Keys need re-indexing if their type or structure is updated in the data model.
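The `${genericParameter}` placeholders mentioned under Application URL can be thought of as simple template substitution. The sketch below is an assumption for illustration — the platform resolves these placeholders internally, and `resolveApplicationUrl` is a hypothetical helper:

```javascript
// Resolve ${...} configuration-parameter placeholders in a URL template.
// Unknown placeholders are left untouched.
function resolveApplicationUrl(template, params) {
  return template.replace(/\$\{(\w+)\}/g, (whole, name) =>
    name in params ? params[name] : whole);
}
```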

### Data Search
The **Search indexing** tab allows you to configure process data keys used for indexing data stored in a process. These keys facilitate efficient searching and filtering of process instances based on the indexed data.

**Process data keys**: These are paths or fields in the process definition that store relevant data. For example, `customerDetails.fullName` might correspond to a field capturing the full name of a customer.
# Subprocess management
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/process/subprocess
Learn how to configure, start, and execute subprocesses efficiently within your process flow.
Subprocesses are smaller process flows triggered by actions in the main process. They can inherit some process parameter values from the parent process and communicate with front-end apps using the same connection details as their parent process.
## Overview
Subprocesses can be started in two modes:
* **Asynchronous**: Execute alongside the parent process without delaying it.
* **Synchronous**: The parent process waits until subprocesses are finished before advancing.
## Configuring & starting subprocesses
Design subprocesses within the FlowX Designer, mirroring the main process structure. Subprocesses can be started:
* within user tasks or task nodes (for example, through a **Start Subprocess** action);
* through a custom node type in the main process;
* using the corresponding node (such as **Call Activity**).
### Parameter inheritance
Available for **Start Subprocess** action and **Call Activity** node.
By default, subprocesses inherit all parent process parameter values. Configure inheritance by:
* **Copy from Current State**: Select keys to copy.
* **Exclude from Current State**: Specify keys to ignore.
Subprocesses can also have an [**Append Params to Parent Process**](../actions/append-params-to-parent-process) action configured inside their process definitions, which appends their results to the parent process parameter values.
## Executing subprocesses
Define subprocess execution mode:
* **Asynchronous/Synchronous**: Set the `startedAsync` parameter accordingly.
In synchronous mode, subprocesses notify completion to the parent process. The parent process then resumes its flow and handles subprocess data.
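For instance, the subprocess start configuration might carry the parameter like this (hypothetical placement — only the `startedAsync` key name comes from the text above):

```json
{ "startedAsync": false }
```

Here `false` forces synchronous execution, so the parent process waits for the subprocess to complete before advancing.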
## Additional resources
For detailed guidance on configuring and running subprocesses:
# Dynamic & computed values
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/dynamic-and-computed-values
In modern application development, the ability to create dynamic and interactive user interfaces is essential for delivering personalized and responsive experiences to users. Dynamic values and computed values are powerful features that enable developers to achieve this level of flexibility and interactivity.
## Dynamic values
Dynamic values give you the flexibility to fill various UI properties at runtime, based on process parameters or substitution tags. By doing so, your application can adapt to specific scenarios or user inputs.
Use this feature to fine-tune how your application appears and behaves without needing to rebuild or redeploy. The table below outlines which UI elements support dynamic values and the corresponding properties that can accept parameters or substitution tags.
| **Element** | **Properties** | **Accepts** |
| ------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ |
| [**Form Elements**](./ui-component-types/form-elements) *Input, Textarea, Select, Checkbox, Radio, Switch, Datepicker, Slider, Segmented Button* | - **Default Value** - **Label** - **Placeholder** - **Helper Text** - **Validators** - **Prefix**, **Suffix** | **Yes:** Process parameters or Substitution tags |
| [**Document Preview**](./ui-component-types/file-preview) | - **Title** - **Subtitle** | **Yes:** Process parameters or Substitution tags |
| [**Card**](./ui-component-types/root-components/card) | - **Title** - **Subtitle** | **Yes:** Process parameters or Substitution tags |
| **Form** | - **Title** | **Yes:** Process parameters or Substitution tags |
| **Message** | - **Message** | **Yes:** Process parameters or Substitution tags |
| [**Button**](./ui-component-types/buttons), [**Upload**](./ui-component-types/buttons) | - **Label** | **Yes:** Process parameters or Substitution tags |
| **Select**, **Checkbox**, **Radio**, **Segmented Button** (*Static source type only*) | - **Label** - **Value** | **Substitution tags only** |
| **Text** | - **Text** | **Yes:** Process parameters or Substitution tags |
| **Link** | - **Link Text** | **Yes:** Process parameters or Substitution tags |
| **Modal** *(`modalDismissAlert` properties)* | - **Title** - **Message** - **ConfirmLabel** - **CancelLabel** | **Yes:** Process parameters or Substitution tags |
| **Step** | - **Label** | **Yes:** Process parameters or Substitution tags |
| **Tab** | - **Title** | **Yes:** Process parameters or Substitution tags |
Default Value is not available for the **Switch** element.
### How it works
* **Process Parameters**: At runtime, values can be injected from backend logic or state, such as the outcome of an API call or an action, which then populate the relevant UI element properties.
* **Substitution Tags**: Whenever a UI property references a substitution tag key (e.g., `test`), the application replaces it with the appropriate content at runtime. This is particularly useful for rapid localization, real-time data injection, and user-specific content.

### Example using Substitution tags
Use keys beginning with "@@" to return their value. If a valid key isn't found, you'll get an empty string. If the key format is incorrect, the original string is returned.
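The resolution rules just described can be sketched as follows. This is illustrative only — the renderer's real implementation differs, and `resolveTag` is a name invented for the example:

```javascript
// Resolve a substitution tag according to the rules above:
// - a well-formed "@@key" returns the key's value, or "" if the key is missing
// - anything not matching the "@@key" format is returned unchanged
function resolveTag(input, dictionary) {
  const match = /^@@(\w+)$/.exec(input);
  if (!match) return input;                // incorrect format: original string
  const value = dictionary[match[1]];
  return value !== undefined ? value : ""; // valid format, missing key: empty string
}
```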

### Example using process parameters

#### Business rule example
In the preceding example, an MVEL business rule demonstrates the population of specific keys with dynamic values from the task. This JSON object, assigned to the "app" key, captures the values for various UI properties:
```json
///assigning a JSON object containing dynamic values for the specified keys to the "app" key
output.put("app", {
  "label": "This is a label",
  "title": "This is a title",
  "placeholder": "This is a placeholder",
  "helpertext": "This is a helper text",
  "errorM": "This is an error message",
  "prefix": "prx",
  "suffix": "sfx",
  "subtitle": "This is a subtitle",
  "message": "This is a message",
  "defaultV": "defaultValue",
  "value": "Value101",
  "confirmLabel": "This is a confirm label",
  "cancelLabel": "This is a cancel label",
  "defaultValue": "dfs",
  "defaultDate": "02.02.2025",
  "defaultSlider": 90
});
```
Note that for releases **\< 3.3.0**, concatenating process parameters with substitution tags isn't supported when utilizing dynamic values.
## Computed values
Computed values offer a way to dynamically generate values using JavaScript expressions. Rather than being limited to predefined values, computed values enable the manipulation, calculation, and transformation of data based on specific rules or conditions.

Computed values can be created via JavaScript expressions that operate on process parameters or other variables within the application.
To introduce a computed value, you simply toggle the "Computed value" option (represented by the **f(x)** icon). This will transform the chosen field into a JavaScript editor.

By enabling computed values, the application provides flexibility and the ability to create dynamic and responsive user interfaces.

### Slider example (parsing keys as integers)
The example above showcases the use of computed values in a Slider element. JavaScript expressions dynamically compute the minimum and maximum values based on a value sourced from a linked input UI element (connected via the process key `${application.client.amount}`).
#### Minimum Value
```js
if (!isNaN(parseInt(${application.client.amount}))) {
  return 0.15 * parseInt(${application.client.amount});
} else {
  return 10000;
}
```
* `!isNaN(parseInt(${application.client.amount}))`: This part ascertains whether the value in the input field `(${application.client.amount})` can be effectively converted to an integer using `parseInt`. Moreover, it validates that the outcome isn't `NaN` (i.e., not a valid number), ensuring input validity.
* If the input is a valid number, the minimum value for the slider is calculated as 15% of the entered value `(0.15 * parseInt(${application.client.amount}))`.
* If the input is not a valid number `(NaN)`, the minimum value for the slider is set to 10000.
#### Maximum Value
```js
if (!isNaN(parseInt(${application.client.amount}))) {
  return 0.35 * parseInt(${application.client.amount});
} else {
  return 20000;
}
```
* Similar to the previous expression, it checks if the value entered on the input field is a valid number using `!isNaN(parseInt(${application.client.amount}))`.
* If the input is a valid number, the maximum value for the slider is calculated as 35% of the entered value `(0.35 * parseInt(${application.client.amount}))`.
* If the input is not a valid number `(NaN)`, the maximum value for the slider is set to 20000.
#### Summary
In summary, the above expressions provide a dynamic range for the slider based on the value entered on the input field. If a valid numeric value is entered, the slider's range will be dynamically adjusted between 15% and 35% of that value. If the input is not a valid number, a default range of 10000 to 20000 is set for the slider. This can be useful for scenarios where you want the slider's range to be proportional to a user-provided value.
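Putting the two expressions together, the computed range behaves like the standalone sketch below (in the UI Designer each bound lives in its own expression; `sliderRange` is a name invented for this example):

```javascript
// Compute the slider range from the linked input value, mirroring the
// two computed-value expressions above.
function sliderRange(amount) {
  const parsed = parseInt(amount);
  if (!isNaN(parsed)) {
    // Valid number: range is 15%..35% of the entered value
    return { min: 0.15 * parsed, max: 0.35 * parsed };
  }
  // Non-numeric input: fall back to the default range
  return { min: 10000, max: 20000 };
}
```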
### Text example (using computed strings)
The following scenario outlines the functionality and implementation of dynamically displayed property types via a text UI element. This is done based on the chosen loan type through a select UI element in a user interface.

#### Scenario
The UI in focus showcases two primary UI elements:
* Select Element - "Loan type": This element allows users to choose from different loan types, including "Conventional," "FHA," "VA," and "USDA."


* Text Element - "Property type": This element displays property types based on the selected loan type.

The following code snippet illustrates how the dynamic property types are generated based on the selected loan type (JavaScript is used):
```javascript
if ("${application.loanType}" == "conventional") {
  return "Single-Family Home, Townhouse, Condo, Multi-Family Dwelling";
} else if ("${application.loanType}" == "fha") {
  return "Single-Family Home, Townhouse, Condo, Manufactured Home";
} else if ("${application.loanType}" == "va") {
  return "Single-Family Home, Townhouse, Condo, Multi-Family Dwelling";
} else if ("${application.loanType}" == "usda") {
  return "Single-Family Home, Rural Property, Farm Property";
} else {
  return "Please select a loan type first";
}
```
#### Summary
* **Loan Type Selection**: Users interact with the "Loan Type Select Element" to choose a loan type, such as "Conventional," "FHA," "VA," or "USDA."
* **Property Types Display**: Once a loan type is selected, the associated property types are dynamically generated and displayed in the "Text Element."
* **Fallback Message**: If no loan type is selected or an invalid loan type is chosen, a fallback message "Please select a loan type first" is displayed.
### Integration across the UI elements
The UI Designer allows the inclusion of JavaScript expressions for generating computed values. This functionality extends to the following UI elements and their associated properties:
| Element | Properties |
| -------------------------------------- | ----------------------------------- |
| Slider | min Value, max Value, default Value |
| Input | Default Value |
| Any UI Element that accepts validators | min, max, minLength, maxLength |
| Text | Text |
| Link | Link Text |
* **Slider**: The min value, max value, and default value for sliders can be set using JavaScript expressions applied to process parameters. This allows for dynamic configuration based on numeric values.
* **Any UI element that accepts validators (min, max, minLength, maxLength)**: The "params" field for these elements can also accept JavaScript expressions applied to process parameters, enabling validator parameters to be set dynamically.
* **Default Value**: For input elements like text inputs or number inputs, the default value can be a variable from the process or a computed value determined by JavaScript expressions.
* **Text**: The content of a text element can be set using JavaScript expressions, allowing for dynamic text generation or displaying process-related information.
* **Link**: The link text can also accept JavaScript expressions, enabling dynamic generation of the link text based on process parameters or other conditions.
When working with computed values, it's important to note that they are designed to be displayed as integers and strings.
For input elements (e.g., text input), you may require a default value from a process variable, while a number input may need a computed value.
# Layout configuration
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/layout-configuration
Layout settings are available for all components that can group other types of elements (for example, Containers or Cards).
The layout configuration settings enable users to customize key properties, including layout direction, alignment, gap, sizing, and spacing, to create visually appealing and functional designs.
These settings can be applied practically in various ways, depending on the context and purpose of the design:
* **Layout Direction and Alignment**: Use these settings to control how content is displayed, ensuring a logical and visually appealing arrangement. For example, a left-to-right layout direction suits languages that read from left to right, while center alignment is ideal for headings or titles.
* **Gap, Sizing, and Spacing**: Manage the distance between elements to create a sense of hierarchy and balance. Adjusting spacing between paragraphs or sections can improve readability, while resizing elements can prioritize certain components within the design.
* **Accessibility Considerations**: Customizing layout direction, alignment, spacing, and sizing can enhance the accessibility and inclusivity of your design. For example, adjusting these settings can make content more readable for users with disabilities or those using assistive technologies.
## Linear layout
The Linear layout arranges child elements in a single line, either horizontally or vertically.
### Linear layout configuration properties
**Linear**: Selected by default for arranging child elements in a linear fashion.

* **Horizontal**: Aligns child elements horizontally in a row.
* **Vertical**: Aligns child elements vertically in a column.

Controls the alignment of child elements along the main axis (the direction set by Horizontal or Vertical).

Options include:
* **Start**: Aligns elements at the start of the container.
* **Center**: Centers elements along the main axis.
* **End**: Aligns elements at the end of the container.
* **Space Between**: Distributes elements evenly with space between them.
* **Space Around**: Distributes elements with space around them.
* **Space Evenly**: Distributes elements with equal space around them.
Controls the alignment of child elements along the cross axis (perpendicular to the main axis).

Options include:
* **Start**: Aligns elements at the start of the cross axis.
* **Center**: Centers elements along the cross axis.
* **End**: Aligns elements at the end of the cross axis.
* **Stretch**: Stretches elements to fill the container along the cross axis.

* **Enabled** (Yes): This option allows elements to wrap onto multiple lines when they exceed the container's width along the main axis, ensuring a flexible and responsive layout.
* **Disabled** (No): This option forces all elements to remain on a single line, even if they overflow beyond the container's width, potentially causing elements to be clipped or hidden.
Sets the spacing between child elements, measured in pixels.

To better understand how these layout configurations work and see real-time feedback on different combinations, please refer to the following link:
## Grid layout
In addition to linear (flex-based) layouts, you can configure components using a grid layout. This option is particularly useful for designs requiring a more structured and multi-dimensional arrangement of elements.

### Switching layout types
For components that contain child elements (such as Containers, Cards, and Forms), you can easily switch between a linear layout and a Grid layout using a layout picker. The default layout is linear, but you can select Grid to arrange your content in a more structured way.
### Platform-specific layouts
You can configure different layout settings for different platforms. For example, you can opt for a grid layout on the web but switch to a linear layout on mobile for a more streamlined user experience. This flexibility ensures that the design is responsive and optimized for each platform, improving usability across different devices.

#### Configuring the grid
Once the Grid layout is selected, you can define the number of columns. Initially, the columns will be distributed equally, meaning all columns will have the same width. Additional customization options include:
* **Number of Columns**: Set the number of columns for the grid. By default, the grid uses two columns.
* **Alignment**: Control how child elements are aligned within the grid. Alignment can be adjusted both horizontally and vertically, ensuring that content is positioned exactly where needed.
* **Gap Configuration**: Customize the gap between columns and rows. The default gap is inherited from the parent component's theme settings, ensuring consistency across your design.

#### Grid child settings
When configuring a grid, individual child components can be further customized:
* **Column Span**: Set how many columns a child component should span across.
The column span should not exceed the number of columns in the parent grid.

* **Row Span (Web Only)**: Set how many rows a child component should span.

#### Order property
The Order property controls the arrangement of elements within grid layouts. By default, elements follow the order in which they appear in the parent component's array. However, this order can be customized using the following options:
* **Auto**: The default setting that retains the order of elements as defined in the array (default value is `0`).
* **First**: Moves the element to the first position by setting its order to `-999`.

* **Last**: Moves the element to the last position by setting its order to `999`.
* **Manual**: Allows for custom ordering by setting a specific numerical value.
Example: In a Grid with 10 elements, setting one element's order to `5` while the others are set to `auto` (which equals `0`) will place that element last, because `5` is greater than `0`.
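Because JavaScript's `Array.prototype.sort` is stable (guaranteed since ES2019), a small sketch can illustrate how these values resolve: elements with equal order keep their array position, and any higher value sorts behind them. This is illustrative only, not platform code:

```javascript
// Illustration (not platform code) of how the Order property resolves.
// "auto" equals 0; a stable sort keeps equal-order items in array order.
const children = [
  { id: "A", order: 0 },    // auto
  { id: "B", order: 5 },    // manual: greater than 0, so it sorts last
  { id: "C", order: 0 },    // auto
  { id: "D", order: -999 }  // "First"
];
const rendered = [...children].sort((a, b) => a.order - b.order);
console.log(rendered.map(c => c.id).join(",")); // "D,A,C,B"
```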
#### Alignment in grid layouts
Proper alignment within grid layouts is essential, particularly when working with fixed-width columns or rows. By default, child elements will inherit alignment settings from their parent component, ensuring a consistent look and feel.
However, you have the flexibility to override these default settings for individual child elements, allowing precise control over horizontal and vertical alignment. This customization is key to achieving the desired visual structure, especially when certain elements need specific positioning within the grid.
### Component details
* **Components with gridLayout Property**: The following components support the `gridLayout` property:
* [**CONTAINER**](./ui-component-types/root-components/container)
* [**CARD**](./ui-component-types/root-components/card)
* **FORM**
* [**COLLECTION**](./ui-component-types/collection)
* [**COLLECTION PROTOTYPE**](./ui-component-types/collection/collection-prototype)
* **Components with gridChild Property**: All components can have the `gridChild` property, depending on their parent component's layout.
* **Collection gridLayout**: Within collections, the `gridLayout` property specifies only the number of columns, rowGap, and columnGap.
* **Collection Prototype**: The collection prototype does not include the `gridChild` property.
* **Grid and Flex Layout Application**: The grid and flex layouts apply exclusively to the layout of child components. Elements such as Card titles, subtitles, and Form titles are excluded from these layout types.
### Default style properties
* **Grid Layout Defaults**:
* **Columns**: 2
* **ColumnGap**: Inherited from component theming gap
* **RowGap**: Inherited from component theming gap
* **Position**: Start Start (aligning items to the start both horizontally and vertically)

* **Grid Child Defaults**:
* **Position**: Start Start (aligned to the start of both axes)
* **ColumnSpan:** 1 (spans a single column)
* **RowSpan (Web)**: 1 (spans a single row)
* **Order**: 0 (Auto, maintaining the default order of elements)

***
### Example using grid layout
**Use case**: **Customer Information Form**
This scenario involves a banking application that requires detailed input from a company representative, either during onboarding or for a loan application. Given the need to gather extensive information, a form with a 2-column grid layout is used to ensure clean and consistent alignment of fields.

For forms requiring additional inputs, consider adjusting the **Columns** property to match the number of fields needed. Expanding the form layout will help organize the inputs efficiently and improve user experience.

* Collect the company representative’s role and name.
* A 2-column grid layout can display the *Role* in one column and *Name* in the other, ensuring fields are aligned and easy to access.
* A toggle or checkbox asks if the representative is the main shareholder and/or the ultimate beneficial owner.
* A 2-column grid places these toggles side by side for a cleaner interface.
* Fields like *First Name*, *Personal Identification Number*, *Date of Birth*, and *Country of Residence* are collected.
* The 2-column grid aligns these fields horizontally, with appropriate spacing for readability.
* Collects *Phone Number*, *Email*, and *Preferred Method of Contact*.
* The grid places *Phone Number* and *Email* side by side, while the *Preferred Method of Contact* dropdown spans the row.
* Collect *State*, *Country*, and *City*.
* A 3-column grid displays these fields in a single row, simplifying navigation for the user.
* Capture employment history such as *Total Service Duration* and *Verified By*.
* A 2-column grid aligns these fields for easy comparison, avoiding misalignment or unnecessary gaps.
* A simple yes/no radio button for union membership.
* A grid layout ensures the options are neatly aligned for a clean, straightforward selection.
***
### Best Practices
To ensure an optimal user experience and consistent visual design, consider these best practices when using grid layouts:
* **Minimize Fixed Widths**: Avoid using fixed widths for child components in a grid. Relying on flexible sizing allows the grid to adapt smoothly across different screen sizes and devices. Only use fixed widths when absolutely necessary (e.g., for icons or buttons with defined sizes).
* **Consider Total Width**: If you know the width of the screen is `X`, ensure that the total width of your fixed-width elements doesn't exceed `X`. This prevents layout issues like content overflow or misalignment across different devices.
* **Use Column Span Thoughtfully**: When setting the `columnSpan` for a grid child, ensure that the total spans across all elements do not exceed the number of columns. This ensures a consistent and predictable layout.
* **Avoid Overcrowding**: When adding numerous child components to a grid, be mindful of spacing (gaps) between elements. Proper use of `rowGap` and `columnGap` helps maintain clarity and reduces visual clutter.
* **Leverage Default Inheritance**: Utilize the default alignment inheritance from the parent grid to ensure consistency. Only override alignment on child components when specific visual differences are needed.
* **Use `Auto` Order When Possible**: Stick with the `auto` order setting for most child components, unless a specific element needs to appear first or last. This will help maintain logical reading and visual flow without complex manual ordering.
* **Always Be Mindful of Mobile Screen Width**: When designing for mobile, always consider the narrower screen width. Ensure that grid layouts adapt gracefully, perhaps switching from a grid to a linear layout, or adjusting spacing and sizing to fit within the mobile screen's constraints without causing overflow or misalignment.
* **Test Across Platforms**: Given potential differences in behavior between web, iOS, and Android platforms, test your grid layout across these platforms to ensure consistent performance and avoid unexpected behavior like overflow or misalignment.
* **Avoid Fixed Widths on Columns**: Instead of setting fixed widths on the entire column, apply fixed widths to the individual elements inside the grid cells. This ensures more flexible layouts that adapt to different screen sizes, especially on mobile devices, without causing issues like overflow or misalignment.
### FAQs
**Q:** If I set an element's order to `5` while the others use `auto`, why does it appear last?

**A:** The order property works relative to other elements. If other elements are set to `auto` (which equals `0`), then the element with order `5` will be placed after all the `0` elements, making it appear last.

**Q:** Can I use fixed widths for elements in a grid layout?

**A:** Be careful with using fixed widths in grid layouts. Fixed widths can lead to unexpected behavior, especially on different devices. It's better to use flexible sizing options whenever possible to maintain a responsive and predictable design.
# Localization and internationalization
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/localization-and-i18n
FlowX localization and internationalization adapt applications to different languages, regions, and formats, enhancing the user experience with dynamic date, number and currency formatting.
## Internationalization
Internationalization (i18n) in FlowX enables applications to be easily adapted for multiple languages and regions without altering the core code. It sets the foundation for localization by handling language structure, layout adjustments, and supporting various formats.
To set the default language at the application level, navigate to **Projects -> Application -> Settings**.

## Localization
Locale settings impact all date, number, and currency formats based on the combination of region and language. The language dictates translations, while the region specifies formatting order and conventions.
### Locale sources
Locale settings are derived from two main sources:
* **Container Application**: Provides global locale settings across the application.
If not specified during deployment, the default locale will be set to `en-US`.
* **Application Level**: Enables context-specific overrides within the application for formatting Numbers, Dates, and Currencies.

## Core i18n & l10n features
### Date formats
The default date format in FlowX is automatically determined by the default locale set at the application or system level. Each locale follows its region-specific date convention.
For example:
* en-US locale (United States): `MM/DD/YYYY` → 09/28/2024
You can set date formats at the application level (as shown in the example above), choosing from five predefined options: short, medium, long, full, or custom (e.g., `dd/MM/yy`).
Additionally, date formats can be overridden at the UI Designer level for specific UI elements that support date customization.

UI Elements supporting date formats:
* Text
* Link
* Message
* Datepicker
FlowX will apply the following formatting options, adapting to the region's standard conventions:
| Format Type | Format Pattern | Example | Description |
| ----------- | --------------------- | ------------------------------ | ------------------------------------------ |
| **Short** | `MM/dd/yy` | `09/28/24` | Month before day, two-digit year. |
| **Medium** | `MMM dd, yyyy` | `Sep 28, 2024` | Abbreviated month, day, four-digit year. |
| **Long** | `MMMM dd, yyyy` | `September 28, 2024` | Full month name, day, and four-digit year. |
| **Full** | `EEEE, MMMM dd, yyyy` | `Saturday, September 28, 2024` | Full day of the week, month, day, year. |
| **Custom** | `dd/MM/yyyy` | `28/09/2024` | User-defined format; day before month. |
The date formats shown in the example are based on the **en-US (United States English) locale**, which typically uses the month-day-year order.
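As an illustration only (FlowX applies these formats internally), the predefined options map closely to the standard `Intl.DateTimeFormat` date styles:

```javascript
// Illustration: how the en-US predefined formats resolve via Intl date styles.
const date = new Date(2024, 8, 28); // September 28, 2024
for (const dateStyle of ["short", "medium", "long", "full"]) {
  console.log(new Intl.DateTimeFormat("en-US", { dateStyle }).format(date));
}
// short  -> 9/28/24
// medium -> Sep 28, 2024
// long   -> September 28, 2024
// full   -> Saturday, September 28, 2024
```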

If the predefined date formats do not meet your needs, you can declare and use a custom date format (either at the application level or as an override in the UI Designer).
The following example will demonstrate how to set and display various date formats using text UI elements. We will override the default format (originating from the application level) directly in the UI Designer to showcase all available formats:
FlowX formats dates using [**ISO 8601**](https://www.iso.org/iso-8601-date-and-time-format.html) for consistent and clear international representation.
### Number formatting
FlowX adjusts number formats to align with regional standards, including the use of appropriate decimal separators and digit grouping symbols. This ensures that numbers are displayed in a familiar format for users based on their locale.
#### Locale-specific formatting
Numbers are formatted to adapt decimal separators (`comma` vs. `dot`) and digit grouping (`1,000` vs. `1.000`) to regional conventions.
The correct formatting for a number depends on the locale. Here's a quick look at how the same number might be formatted differently in various regions:
* **en-US (United States)**: 1,234.56 (comma for digit grouping, dot for decimals)
* **de-DE (Germany)**: 1.234,56 (dot for digit grouping, comma for decimals)
* **fr-FR (France)**: 1 234,56 (space for digit grouping, comma for decimals)
* **es-ES (Spain)**: 1.234,56 (dot for digit grouping, comma for decimals)
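These regional conventions can be reproduced with the standard `Intl.NumberFormat` API — shown here purely as an illustration of the locale rules, not as FlowX platform code:

```javascript
// Illustration: the same number rendered under different locales.
const n = 1234.56;
console.log(new Intl.NumberFormat("en-US").format(n)); // 1,234.56
console.log(new Intl.NumberFormat("de-DE").format(n)); // 1.234,56
// fr-FR groups digits with a (narrow no-break) space: 1 234,56
console.log(new Intl.NumberFormat("fr-FR").format(n));
```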
#### Decimal precision
You can set minimum and maximum decimal places for numbers in the application settings, controlling both how values are stored in the data store and how they are displayed.

Formatting settings defined in the FlowX props in the UI Designer take precedence over the application-level formatting settings.

For UI elements that support `number` or `currency` types, FlowX checks the data model for precision settings and applies them during data storage, ensuring consistency with the configured precision levels.

This means that when using a `float` data model attribute, the precision settings directly control how the number is saved in the data store, specifying the maximum number of decimal places that can be stored.
Additionally, for input UI elements, a mask is applied to prevent users from entering more decimal places than the configured precision, ensuring data integrity and compliance with the defined formatting rules.
For more details, refer to [Data model integration](#data-model-integration).
* **Minimum Decimals**: Sets the least number of decimal places that a number will display. If a number has fewer than the specified decimals, trailing zeros will be added to meet the requirement.
* **Maximum Decimals**: Limits the number of decimal places a number can have. If the number exceeds this limit, it will be rounded to the specified number of decimals.
**Example**:
If you set both Minimum and Maximum Decimals to 3, a number like 2 would display as 2.000, and 3.14159 would be rounded to 3.142.
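As an illustration of the same behavior, the standard `Intl.NumberFormat` API exposes equivalent minimum/maximum fraction-digit settings (FlowX applies its own precision configuration internally):

```javascript
// Illustration: minimum and maximum decimals both set to 3.
const fmt = new Intl.NumberFormat("en-US", {
  minimumFractionDigits: 3,
  maximumFractionDigits: 3,
  useGrouping: false
});
console.log(fmt.format(2));       // 2.000 (trailing zeros added)
console.log(fmt.format(1.23456)); // 1.235 (rounded to three decimals)
```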

At runtime, the system applies number formatting rules based on the locale and the settings defined in the application's configuration or UI Designer overrides.
If the number is linked to a data model key, the formatting adheres to the metadata defined there, ensuring consistent rendering of numerical data.
### Currency formatting
FlowX provides currency formatting features that dynamically adapt to regional settings, ensuring accurate representation of financial data.
#### Currency object structure
Currencies are managed as a system object with `amount` and `code` keys, creating a wrapper that facilitates consistent handling. This design ensures that every currency display corresponds accurately to the regional and formatting standards defined by the locale. If the `code` is not provided, the system uses the locale to determine the appropriate currency symbol or code.
#### Display behavior
When displaying currency values, the system expects keys like `loanAmount` to have both `amount` and `code` properties. For example, with the locale set to `en-US`, the output will automatically follow the US formatting conventions, displaying amounts like "\$12,000.78" when the currency is USD.
* If the value found at the key path is not an object containing `amount` or `code`, the system will display the value as-is if it is primitive. If it's an object, it will show an empty string.
* If the key path is not defined, similar behavior applies: primitive values are displayed as-is, and objects result in an empty string.
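The display rules above can be sketched as a small JavaScript helper. This is a hypothetical illustration, not platform code — in particular, the `"USD"` fallback is a simplification of the locale-derived default described in this section:

```javascript
// Hypothetical sketch of the currency display rules above.
function renderCurrency(value, locale = "en-US") {
  if (value === null || value === undefined) return "";
  // Primitive values are displayed as-is.
  if (typeof value !== "object") return String(value);
  // Objects without a numeric `amount` render as an empty string.
  if (typeof value.amount !== "number") return "";
  return new Intl.NumberFormat(locale, {
    style: "currency",
    // Simplification: FlowX derives the default code from the locale.
    currency: value.code || "USD"
  }).format(value.amount);
}

console.log(renderCurrency({ amount: 12000.78, code: "USD" })); // $12,000.78
```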
#### Locale-sensitive formatting
Currency formatting depends primarily on the region defined by the locale, not the language.
When the currency `code` is `null`, the system defaults to the currency settings embedded within the locale, ensuring region-specific accuracy.
#### Dynamic formatting in UI
FlowX dynamically applies number and currency formatting within UI components such as inputs and sliders. These components adjust in real time based on the current locale, ensuring that users always see values in their expected format. For example:
* Input fields with `CURRENCY` types save values in `{key path}.amount` and will delete the entry from the data store if the input is empty.
* Sliders will save integer values (with no decimals) to `{key path}.amount` and format the displayed values, including min/max labels, according to the locale and currency.

**Formatting Text with locale and data from UI elements**
This example demonstrates how to format dynamic text by substituting placeholders with values from a data store, taking locale-specific formatting into account. Here, the goal is to insert the value of a key called `loanAmount` (populated by an Input UI element) into a text and format it according to the `en-US` locale.
**Data store**:
A JSON object containing key-value pairs. The `loanAmount` key holds a currency object with two properties:
* `amount`: The actual loan value (a float or number).
* `code`: The currency code in ISO format (e.g., "USD" for US Dollars).
```json
{
  "loanAmount": {
    "amount": 12000.78,
    "code": "USD"
  }
}
```
**Locale**
The locale determines the number and currency formatting rules, such as the use of commas, periods, and currency symbols.
Locale: en-US (English - United States)
**Processing**
* Step 1: The platform extracts the `loanAmount.amount` value (`12000.78`) and the `loanAmount.code` from the data store.
* Step 2: Format the amount according to the specified locale (`en-US`). In this locale:
  * Use a comma (`,`) to separate thousands.
  * Use a period (`.`) to denote decimal places.
* Step 3: Replace the `${loanAmount}` placeholder in the text with the formatted value `$12,000.78`.
**Output**
The resulting text with the formatted loan amount and the appropriate currency symbol for the en-US locale.
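The three processing steps can be sketched in JavaScript (illustrative only; the placeholder syntax mirrors the `${loanAmount}` notation used above):

```javascript
// Illustration of steps 1–3: extract, format per locale, substitute.
const dataStore = { loanAmount: { amount: 12000.78, code: "USD" } };
const template = "Your approved loan amount is ${loanAmount}.";

const formatted = new Intl.NumberFormat("en-US", {
  style: "currency",
  currency: dataStore.loanAmount.code
}).format(dataStore.loanAmount.amount);

const output = template.replace("${loanAmount}", formatted);
console.log(output); // Your approved loan amount is $12,000.78.
```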

Here is the data extracted from the data store (it is available on the process instance):
For instance, with a Romanian locale (`ro-RO`), currency is displayed like "12.000,78 RON" when the `code` is null or unavailable.
## Data model integration
FlowX integrates number formatting directly with the data model. When keys are set to number or currency types, the system automatically checks and applies precision settings during data storage. This approach ensures consistency across different data sources and UI components.
Integration Details:
* Numbers are broken down into integer and decimal parts, stored according to specified precision.
* Currency keys are managed as wrapper objects containing amount and code, formatted according to the locale settings.
## UI Designer integration
Formatting is dependent on the locale, which includes both language and region settings. The application container provides the locale for display formatting.

Formatting settings (which default to `auto`) can be switched to `manual` and overridden for text, message, and link components within the UI Designer's properties tab.
The UI Designer enables application-level formatting settings for localized display of dynamic values.
## Supported components
* **Text/Message/Link**: Override general localization settings to match localized content presentation.
* **Input Fields**: Inputs correctly format currency and number types based on regional settings.
* **Sliders**: Currency formatting for sliders, with suffixes disabled when currency types are detected.
* **Datepickers**: Date formatting based on regional settings.
# UI actions
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-actions
UI actions bridge the gap between multiple UI elements and specific actions, dictating how the interface responds to user interactions. They control behaviors such as displaying loaders, dismissing modals, or sending default data back to a process, enabling interaction between UI components and underlying logic.
## Overview
UI actions establish connections between [**UI components**](./ui-component-types/root-components/custom) or custom elements and predefined [**actions**](../actions/actions). These connections ensure that processes are executed efficiently while defining the UI's response. Examples include:
* Showing a loader during an action execution.
* Automatically dismissing a modal after a task is completed.
* Sending default data back to the process.
***
## Types of UI actions
UI actions can be categorized into two main types:
1. **Process UI Actions**: Directly interact with process nodes and manual actions.
2. **External UI Actions**: Perform actions that link to external URLs or open new tabs.
***
### Process UI actions
Process UI Actions (labeled `ACTION`) define how a [**Button**](../ui-designer/ui-component-types/buttons), whether generated or custom, interacts with a manual process action. Before configuring a UI action, ensure the corresponding [manual action](../actions/actions) is set up.
#### Adding a node action
To configure a UI action, first add a node action (type: manual) to a process node. For detailed steps, refer to:
**Example: Configuring a node action (Save Data)**
1. **Add an Action**: Attach an action to a node.
2. **Select the Action Type**: Choose **Save Data**, for instance.
3. **Action Type**: Set it to **manual**.

Both UI actions and [node actions](../actions/actions) must be configured on the same user task node.
### Configuring a UI action
Below are the key configuration options for defining a UI action:
* **Event**: Define when the action should trigger ([Learn about Events](#events)).
* **Action Type**: Specify the type of action ([Explore Action Types](#action-types)).
* **Node Action Name**: Select the corresponding manual action from the dropdown.
* **Use a different name for UI action**: Optionally, define a unique name to trigger the action in [**Custom Components**](./ui-component-types/root-components/custom).
* **Add custom keys**: Add custom keys beyond those in the data model.
* **Exclude keys**: Specify data keys to exclude.
* **Add custom body**: Provide a default JSON response, merged with additional parameters during execution.
* **Add form to submit**: Link the action to specific UI elements for validation.
* **Hide Subprocess Navigation**: Disable navigation to subprocesses.
* **Show loader**: Display a loader until a server-side event (SSE) updates the data or screen.

## UI actions elements
### Events
Events define how user interactions trigger UI actions. The available event types are:
* **CLICK**: Triggered when a button is clicked.
* **CHANGE**: Triggered when input fields are modified.
Events are not applicable for UI actions on [Custom Components](./ui-component-types/root-components/custom).
***
### Action types
The **Action Type** dropdown includes several predefined options:
* **DISMISS**: Closes a modal after the action is executed.
* **ACTION**: Links a UI action to a manual node action.
* **START\_PROJECT**: Initiates a new project instance. This action type is used to trigger a completely new process flow within a different project or application, based on the selected project and process configurations.
* **UPLOAD**: Initiates an upload action.
* **EXTERNAL**: Opens a link in a new browser tab or window.
***
## External UI actions
**External UI Actions** enable linking to external URLs and opening links in a new tab or the same tab.
When configuring an external UI action, additional options become available:
1. **URL**: Specify the web URL for the action.
2. **Open in New Tab**: Choose whether to execute the action in the current tab or a new tab.

For more information on how to add actions and how to configure a UI, check the following section:
[Managing a process flow](../../flowx-designer/managing-a-process-flow)
# Buttons
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/buttons
There are two types of buttons available, each with a different purpose.

## Basic button
Basic buttons are used to perform an action such as unblocking a token to move forward in the process, sending an OTP, or opening a new tab.
### Configuring a basic button
When configuring a basic button, you can customize the button's settings by using the following options:
* [**Properties**](#properties)
* [**UI action**](#ui-action)
* [**Button styling**](#button-styling)
The following sections cover the general settings:
#### Properties
* **Label** - it allows you to set the label that appears on the button
#### UI action
Here, you can define the UI action that the button will trigger.
* **Event** - possible value: `CLICK`
* **Action Type** - select the action type

More details on how to configure UI actions can be found [here](../ui-actions).
### Button styling
#### Properties
This section enables you to select the type of button using the styling tab in the UI Designer. There are four types available:
* Primary
* Secondary
* Ghost
* Text

For more information on valid CSS properties, click [here](../ui-designer#styling).
#### Icons
To further enhance the Button UI element with icons, you can include the following properties:
* **Icon Key** - the key associated with the icon in the **Media Library**; select the icon from there
* **Icon Color** - select the color of the icon using the color picker
When a color is set, the entire icon is filled with that color (the SVG's fill), so avoid setting a color on multicolor icons.
* **Icon Position** - define the position of the icon within the button:
  * Left
  * Right
  * Center
When selecting the center position for an icon, the button will display the icon only.

By utilizing these properties, you can create visually appealing Button UI elements with customizable icons, colors, and positions to effectively communicate their purpose and enhance the user experience.
## File upload
The file upload button is used to select a file and run custom validation on it. Its configuration differs from the basic button only in its FlowX-specific properties.
### Configuring a file upload button
When configuring a file upload button, you can customize the button's settings by using the following options:
* [**Properties**](#properties)
* [**UI action**](#ui-action)
* [**Button styling**](#button-styling)
The following sections cover the general settings:
#### Properties
* **Label** - it allows you to set the label that appears on the button
* **Accepted file types** - the `accept` attribute takes as its value a string containing one or more unique file type specifiers, [separated by commas](https://html.spec.whatwg.org/multipage/common-microsyntaxes.html#set-of-comma-separated-tokens). Each specifier may take one of the following forms:
| Value | Definition |
| ------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------- |
| audio/\* | Indicates that sound files are accepted |
| image/\* | Indicates that image files are accepted |
| video/\* | Indicates that video files are accepted |
| [MIME type](https://html.spec.whatwg.org/multipage/infrastructure.html#valid-mime-type-with-no-parameters) with no params | Indicates that files of the specified type are accepted |
| string starting with U+002E FULL STOP character (.) (for example, .doc, .docx, .xml) | Indicates that files with the specified file extension are accepted |
* **Invalid file type error**
* **Max file size**
* **Max file size error**
Example of an upload file button that accepts image files:
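For instance, an upload button that accepts only image files would use the accept value `image/*`. The sketch below shows how such an accept string is matched against a file, following the HTML `accept`-attribute semantics summarized in the table above (the helper function is illustrative, not part of FlowX):

```javascript
// Matches a comma-separated accept string (e.g. "image/*,.pdf") against a
// file's name and MIME type, mirroring the HTML accept-attribute semantics.
function isAccepted(acceptValue, fileName, mimeType) {
  return acceptValue
    .split(",")
    .map((s) => s.trim().toLowerCase())
    .some((spec) => {
      // ".pdf" style: match on the file extension
      if (spec.startsWith(".")) return fileName.toLowerCase().endsWith(spec);
      // "image/*" style: match on the MIME type prefix
      if (spec.endsWith("/*")) return mimeType.toLowerCase().startsWith(spec.slice(0, -1));
      // exact MIME type with no parameters
      return mimeType.toLowerCase() === spec;
    });
}

isAccepted("image/*", "photo.PNG", "image/png");          // true
isAccepted("image/*,.pdf", "scan.pdf", "application/pdf"); // true
isAccepted("image/*", "clip.mp4", "video/mp4");            // false
```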

#### UI action
Here, you can define the UI action that the button will trigger.
* **Event** - possible value: `CLICK`
* **Action Type** - select the action type

More details on how to configure UI actions can be found [here](../ui-actions).
### Button styling
The file upload button can be styled using valid CSS properties (more details [here](../ui-designer#styling)).
# Collection component
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/collection/collection
The Collection component functions as a versatile container element, allowing you to iterate through a list of elements and display them according to their specific configurations.
## Configurable properties
* `collectionSource`: This property specifies the process key where a list can be found. It should be a valid array of objects.


## Example usage
Here's an example of configuring a Collection component to display a list of products:

Source collection data example using an [**MVEL business rule**](../../../actions/business-rule-action/business-rule-action):

```java
output.put("processData", // this is the key
  {
    "products": [ // this is the source that will populate the data on collection
      {
        "name": "Product One Plus",
        "description": "The plus option",
        "type": "normal"
      },
      {
        "name": "Product Two Premium",
        "description": "This is a premium product",
        "type": "recommended"
      },
      {
        "name": "Product Basic",
        "description": "The most basic option",
        "type": "normal"
      },
      {
        "name": "Gold Product",
        "description": "The gold option",
        "type": "normal"
      }
    ]
  }
);
```
The above configuration will render the Collection as shown below:

Components within a collection use **relative paths** to the collection source. This means that wherever the collection is found inside the process data, the components inside the collection need their keys configured relative to that collection.


To send and display dynamic data received on the keys you define to the frontend, make sure to include the following data structure in your root UI element using Message parameters. For instance, if you want to include data for the `processData` key mentioned earlier, your configuration should resemble this:
```json
{
  "processData": ${processData}
}
```
To enable the definition of multiple prototypes for a single Collection and display elements from the same collection differently, an additional container known as a *collection prototype* is required. For more information on collection prototypes, please refer to the next section:
# Collection Prototype
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/collection/collection-prototype
Create dynamic, data-driven interfaces with different layouts based on item properties using collection prototypes.
Collection prototypes allow you to display items within a collection using different layouts based on their properties, creating more dynamic and context-aware user interfaces.
## What are Collection Prototypes?
Collection prototypes are specialized components that define different display formats for items within a single collection. They act as templates that are applied conditionally based on item properties, allowing you to:
* Display collection items with different layouts based on their data properties
* Create visually distinct displays for featured or highlighted items
* Apply specialized formatting for different item types or states
* Integrate custom components for enhanced functionality
* Add interactive features like item selection
Collection prototypes always work as child components within a parent [Collection component](../collection). While a Collection iterates through a data array, Collection Prototypes determine how each individual item should be displayed.
## How Collection Prototypes Work
### Core Concepts
Collection prototypes use a simple but powerful mechanism to determine which layout to apply to each item:
The data property used to determine which prototype to apply (e.g., `type`, `status`, `priority`)
The specific value that triggers this prototype layout (e.g., `featured`, `active`, `high`)
When the collection renders, each item is evaluated against all available prototypes. The first prototype whose identifier key and value match the item's data is used to render that item.
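The matching rule can be sketched as a first-match lookup over the available prototypes. The shape of the prototype objects below is an illustrative assumption:

```javascript
// First-match lookup: return the first prototype whose identifier key and
// value match the item's data, or null when no prototype matches.
function selectPrototype(prototypes, item) {
  return (
    prototypes.find(
      (p) => String(item[p.identifierKey]) === String(p.identifierValue)
    ) || null
  );
}

const prototypes = [
  { name: "highlighted", identifierKey: "type", identifierValue: "recommended" },
  { name: "standard", identifierKey: "type", identifierValue: "normal" }
];

const match = selectPrototype(prototypes, { name: "Gold Product", type: "normal" });
// match.name is "standard"; an item with an unmatched type yields null
```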
### Visual Example
Here's how collection prototypes can display products with different layouts based on their `type` property:



## Implementation Guide
### 1. Prepare Your Data
Before implementing collection prototypes, you need to prepare the data structure that will determine which prototype applies to each item.
#### Using a Service Task Node
The most common approach is to use a Service Task node with business rules:
In the FlowX.AI Designer, add a **Service Task** node before your User Task node and connect them with a sequence flow.

Select your Service Task node, go to the **Business Rules** tab, and add a new business rule or edit an existing one.

Use JavaScript to create your data structure and assign it to the process context:
```javascript
// Create the data structure for products
const products = [
  {
    "name": "Product One Plus",
    "description": "The plus option",
    "type": "normal", // This property will determine which prototype to use
    "price": 99.99
  },
  {
    "name": "Product Two Premium",
    "description": "This is a premium product",
    "type": "recommended", // This will use a different prototype
    "price": 149.99
  },
  {
    "name": "Product Basic",
    "description": "The most basic option",
    "type": "normal",
    "price": 49.99
  }
];

// Add the products array to the process data
output.put("products", products);
```
Save your configuration, and test your business rule by using the **Test Rule** feature in the business rules editor to make sure the data is correctly added to the process context.

### 2. Create the UI Components
Once your data is prepared, you can implement the collection and its prototypes in the UI Designer:
In the FlowX.AI Designer, create or select an existing **User Task** node in your process and click the **brush icon** to open the UI Designer.

Add a **root component** (like a Card or Container) to your node, then add a **Collection** component inside it. Configure the Collection's **source** to point to your data array (e.g., `products`).

The `collectionSource` property specifies the process key where your list can be found. It should be a valid array of objects. For example, if your data is at `processData.products`, you would set the source to `products`.
For each different display type:
1. Click on your Collection component to select it
2. Add a **Collection Prototype** as a child component
3. Configure the prototype's settings:
   * **Prototype Identifier Key**: The field to check (e.g., `type`)
   * **Prototype Identifier Value**: The value to match (e.g., `normal`)

4. Repeat for each prototype you need, using a different identifier value for each
For each prototype, add the UI components that will display your data:
1. Select a Collection Prototype
2. Add components like **Text**, **Image**, **Button**, etc.
3. Configure each component to use relative paths to the collection item data saved in the Data Model:
   * Use text UI element with the key `${name}`
   * Use text UI element with the key `${description}`

Components within a collection use **relative paths** to the collection source. This means that wherever the collection is found inside the process data, the components inside the collection need their keys configured relative to that collection.

### 3. Add Interactivity
To make your collection items interactive, such as allowing users to select an item:
Add an [Action](../../../actions/actions) to your [User task node](../../../node/user-task-node):
1. Go back to the process designer
2. Select your User Task node
3. Go to the **Actions** tab
4. Add a new action with:
   * Type: **Manual**
   * Name: (e.g., `selectItem`)

**Configuration options:**
* Set as **Manual** since users trigger it
* Mark as **Mandatory** if selection is required to proceed
* Enable **Repeatable** for changeable selections
Add a UI action to your collection prototype:
1. Return to the UI Designer
2. Select the component in your prototype that should trigger the action
3. In the Settings panel, add a UI Action
4. Add a **Collection Item Save Key**.
5. Add a **Custom Key** and make sure it matches the **Collection Item Save Key**.
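Conceptually, the resulting UI action ties these pieces together. The property names below are illustrative assumptions, not the exact FlowX schema; the essential rule is that the Custom Key matches the Collection Item Save Key:

```javascript
// Hypothetical UI action attached to a component inside the prototype —
// property names are illustrative. The invariant to keep: customKey must
// match collectionItemSaveKey so the clicked item is saved under that key.
const selectItemAction = {
  event: "CLICK",
  actionType: "ACTION",
  nodeAction: "selectItem",              // the manual node action from the previous step
  collectionItemSaveKey: "selectedItem",
  customKey: "selectedItem"              // must match collectionItemSaveKey
};
```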

Save your changes and test your process to verify that:
* Different items display with the correct prototype layouts
* Interactive elements work as expected
* Selected data is properly saved to your process data


## Best Practices
### Working with Custom Components
When integrating custom components into collection prototypes, follow these best practices:
1. **Use relative paths for data access**
   * Configure input keys relative to collection items
   * Example: Use `name` instead of `app.clients.name`
2. **Maintain consistent data structures**
   * Ensure required data exists in collection items
   * Follow consistent data patterns across all items
### Data Access Patterns
Components use full paths from data root:
```jsx
// Input keys:
// app.name
// app.signatureImageURL
data: {
  app: {
    name: "John Doe",
    signatureImageURL: "http://example.com/signature1.png"
  }
}
```
Components use relative paths:
```jsx
// Input keys:
// name
// signatureImageURL
data: {
  name: "John Doe",
  signatureImageURL: "http://example.com/signature1.png"
}
```
### Performance Optimization
* Limit the number of items in your collection when possible
* Simplify complex prototype layouts
* Consider pagination for large data sets
* Optimize images and other media within prototypes
* Avoid deeply nested components within prototypes
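For example, pagination can be approximated in a business rule by slicing the source array before it reaches the collection. A minimal sketch, with illustrative names:

```javascript
// Return one page of items; the collection then renders only this slice.
function paginate(items, page, pageSize) {
  const start = page * pageSize;
  return items.slice(start, start + pageSize);
}

const all = Array.from({ length: 10 }, (_, i) => ({ id: i + 1 }));
const firstPage = paginate(all, 0, 3); // items with ids 1, 2, 3
const lastPage = paginate(all, 3, 3);  // only the item with id 10
```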
**Pro tips:**
* Test your collection prototypes with various data scenarios
* Verify data flow through the entire selection process
* Monitor the process data for correct updates
* Ensure your UI provides clear visual feedback when an item is selected
## Troubleshooting
When working with collection prototypes, you might encounter some common issues:
### Common Issues and Solutions
**Symptoms:**
* Collection items appear with incorrect layouts
* All items use the same prototype regardless of identifier values
* Some items don't render at all
**Solutions:**
* Verify that your **Prototype Identifier Key** exactly matches the property name in your data
* Ensure the **Prototype Identifier Value** matches the expected values in your data
* Check for case sensitivity issues in both keys and values
* Validate your data structure using the process debugger
* Confirm that the parent Collection component has the correct source path
**Symptoms:**
* Clicking on items doesn't trigger the expected action
* Selected data isn't saved to the process
* No visual feedback when items are selected
**Solutions:**
* Confirm that the **Collection Item Save Key** matches exactly with the **Data to send** field in your node action
* Verify that your node action is properly configured as **Manual** and **Repeatable** if needed
* Check that the UI action is attached to the correct element within the prototype
* Ensure the node action has the correct permissions and is properly linked
* Verify that your root UI element includes the necessary Message parameters to pass data
**Symptoms:**
* Components inside prototypes show empty or incorrect data
* Dynamic content doesn't update properly
* Error messages in the console related to undefined properties
**Solutions:**
* Remember that components inside collections must use relative paths (e.g., `name` instead of `app.clients.name`)
* Verify your data structure matches what the components expect
* Use the process debugger to inspect the actual data being passed to the collection
* Check for null or undefined values that might cause rendering issues
* Ensure your data is properly structured as an array of objects
### Known Limitations
* **Nested collections** (collections inside collection prototypes) may cause performance issues and should be used sparingly
* **Deep data structures** might require careful handling of relative paths for proper data binding
### Debugging Tips
1. **Use process data inspection:**
   * Monitor the process data before and after interactions with collection prototypes
   * Verify that selected items are correctly saved to the expected process data keys
2. **Test with simplified data:**
   * Start with a minimal data set to confirm basic functionality
   * Gradually add complexity to identify where issues might occur
3. **Isolate components:**
   * Test individual UI components outside the collection context
   * Add components to prototypes one by one to identify problematic elements
## Frequently Asked Questions
Yes, you can create platform-specific overrides for your collection prototypes. Configure different layouts, spacing, and styling for Web, iOS, and Android platforms through the platform tabs in the UI Designer.
If an item doesn't have the specified identifier key or value, it won't match any prototype. Consider adding a default prototype with a common value like "default" and ensure all items have at least this value as a fallback.
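One way to implement such a fallback is to normalize the data in a business rule so every item carries a known identifier value. A sketch with illustrative field and value names:

```javascript
// Map any unknown or missing "type" to "default" so a default prototype
// (with identifier value "default") always matches.
const knownTypes = ["normal", "recommended"];

function withFallbackType(items) {
  return items.map((item) => ({
    ...item,
    type: knownTypes.includes(item.type) ? item.type : "default"
  }));
}

const normalized = withFallbackType([
  { name: "A", type: "recommended" },
  { name: "B" } // no type → falls back to "default"
]);
```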
No, prototype identifier values must be static strings that exactly match the values in your data. However, you can transform your data before it reaches the collection to achieve dynamic prototype selection.
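For example, a business rule can derive a static display field from dynamic data before it reaches the collection; the prototypes then match on the derived field. The field name and threshold below are illustrative:

```javascript
// Derive a static "displayCategory" from a dynamic value (price) so that
// prototypes can match on "premium" / "standard" as fixed identifier values.
function withDisplayCategory(items) {
  return items.map((item) => ({
    ...item,
    displayCategory: item.price > 100 ? "premium" : "standard"
  }));
}

const tagged = withDisplayCategory([
  { name: "A", price: 150 },
  { name: "B", price: 50 }
]);
```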
There's no hard limit on the number of prototypes, but for performance and maintainability reasons, it's recommended to keep the number reasonable (typically under 5-7 different prototypes).
A **Collection** is a container that iterates through an array of data items, while a **Collection Prototype** defines how each individual item should be displayed based on its properties. You need at least one Collection Prototype inside a Collection, but you can have multiple prototypes to handle different item types or states.
## Code Examples
### Different Prototype Layouts Based on Item Status
This example shows how to display items differently based on their status (active, pending, completed):
```java Process Data Setup
// Setting up process data with items of different statuses
output.put("processData", {
  "tasks": [
    {
      "id": "task-001",
      "title": "Review application",
      "description": "Initial review of customer application",
      "status": "active",
      "priority": "high"
    },
    {
      "id": "task-002",
      "title": "Verify documents",
      "description": "Check submitted documentation for completeness",
      "status": "pending",
      "priority": "medium"
    },
    {
      "id": "task-003",
      "title": "Send confirmation",
      "description": "Email confirmation to customer",
      "status": "completed",
      "priority": "low"
    }
  ]
});
```
```javascript Collection Configuration
// Collection component configuration
{
  "type": "COLLECTION",
  "settings": {
    "source": "tasks",
    "direction": "vertical"
  },
  "children": [
    // Active task prototype
    {
      "type": "COLLECTION_PROTOTYPE",
      "settings": {
        "prototypeIdentifierKey": "status",
        "prototypeIdentifierValue": "active"
      },
      "children": [
        // Active task UI elements with highlighted styling
      ]
    },
    // Pending task prototype
    {
      "type": "COLLECTION_PROTOTYPE",
      "settings": {
        "prototypeIdentifierKey": "status",
        "prototypeIdentifierValue": "pending"
      },
      "children": [
        // Pending task UI elements with standard styling
      ]
    },
    // Completed task prototype
    {
      "type": "COLLECTION_PROTOTYPE",
      "settings": {
        "prototypeIdentifierKey": "status",
        "prototypeIdentifierValue": "completed"
      },
      "children": [
        // Completed task UI elements with muted styling
      ]
    }
  ]
}
```
### Multiple Identifier Keys for Complex Conditions
You can use multiple collection prototypes with different identifier keys to handle complex display logic:
```java Process Data with Multiple Attributes
// Setting up process data with items having multiple attributes
output.put("processData", {
  "notifications": [
    {
      "id": "notif-001",
      "message": "Your application has been approved",
      "type": "success",
      "priority": "high",
      "read": false
    },
    {
      "id": "notif-002",
      "message": "Please review updated terms",
      "type": "info",
      "priority": "low",
      "read": true
    },
    {
      "id": "notif-003",
      "message": "Action required: Missing information",
      "type": "warning",
      "priority": "high",
      "read": false
    }
  ]
});
```
```javascript Multiple Prototype Configuration
// Collection with multiple prototype conditions
{
  "type": "COLLECTION",
  "settings": {
    "source": "notifications",
    "direction": "vertical"
  },
  "children": [
    // High priority unread success notification
    {
      "type": "COLLECTION_PROTOTYPE",
      "settings": {
        "prototypeIdentifierKey": "type",
        "prototypeIdentifierValue": "success",
        "additionalConditions": [
          {
            "key": "priority",
            "value": "high"
          },
          {
            "key": "read",
            "value": false
          }
        ]
      },
      "children": [
        // UI elements for high priority unread success notifications
      ]
    },
    // Other prototypes for different combinations...
  ]
}
```
### Nested Data Structures
Working with nested data in collection prototypes:
```java Nested Data Structure
// Setting up process data with nested objects
output.put("processData", {
  "orders": [
    {
      "id": "order-001",
      "customer": {
        "name": "John Smith",
        "tier": "premium"
      },
      "items": [
        { "name": "Product A", "quantity": 2 },
        { "name": "Product B", "quantity": 1 }
      ],
      "status": "processing"
    },
    {
      "id": "order-002",
      "customer": {
        "name": "Jane Doe",
        "tier": "standard"
      },
      "items": [
        { "name": "Product C", "quantity": 3 }
      ],
      "status": "shipped"
    }
  ]
});
```
```javascript Accessing Nested Data
// Collection prototype accessing nested data
{
  "type": "COLLECTION",
  "settings": {
    "source": "orders",
    "direction": "vertical"
  },
  "children": [
    // Premium customer prototype
    {
      "type": "COLLECTION_PROTOTYPE",
      "settings": {
        "prototypeIdentifierKey": "customer.tier",
        "prototypeIdentifierValue": "premium"
      },
      "children": [
        // UI elements for premium customers
        {
          "type": "TEXT",
          "settings": {
            "text": "Premium Customer",
            "style": "heading"
          }
        },
        {
          "type": "TEXT",
          "settings": {
            "text": "${customer.name}",
            "style": "subheading"
          }
        },
        // Collection for order items
        {
          "type": "COLLECTION",
          "settings": {
            "source": "items",
            "direction": "horizontal"
          },
          "children": [
            // Item prototype
            {
              "type": "COLLECTION_PROTOTYPE",
              "settings": {},
              "children": [
                {
                  "type": "TEXT",
                  "settings": {
                    "text": "${name} (${quantity})",
                    "style": "body"
                  }
                }
              ]
            }
          ]
        }
      ]
    },
    // Standard customer prototype
    {
      "type": "COLLECTION_PROTOTYPE",
      "settings": {
        "prototypeIdentifierKey": "customer.tier",
        "prototypeIdentifierValue": "standard"
      },
      "children": [
        // UI elements for standard customers
      ]
    }
  ]
}
```
## Related Components
The parent component that iterates through data arrays and contains Collection Prototypes
Add interactivity to your Collection Prototypes with UI Actions
# File preview
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/file-preview
The File Preview UI element is a user interface component that enables users to preview the contents of files quickly and easily without fully opening them. It can save time and enhance productivity, providing a glimpse of what's inside a file without having to launch it entirely.

File preview UI elements offer various benefits such as conveying information, improving the aesthetic appeal of an interface, providing visual cues and feedback or presenting complex data or concepts in a more accessible way.
## Configuring a File preview element
A File Preview element can be configured for both mobile and web applications.
### File preview properties (web)

The settings for the File Preview element include the following properties:
* **Title**: Specifies the title of the element. If the file is downloaded or shared, the file name should serve as the title used in the preview component.
* **Subtitle**: Allows for the inclusion of a subtitle for the element.
* **Display mode**: Depending on the selected display method, the following properties are available:
  * **Inline** → **Has accordion**:
    * `false`: Displays the preview inline without an expand/collapse option.
    * `true` (Default View: Collapsed): Displays the preview inline with an expand/collapse option, initially collapsed.
    * `true` (Default View: Expanded): Displays the preview inline with an expand/collapse option, initially expanded.
  * **Modal** → view icon is enabled
* **Has accordion**: Introduces a Bootstrap accordion, facilitating the organization of content within collapsible items. It ensures that only one collapsed item is displayed at a time.
* **Source Type**:
  * **Media Library**: Refers to PDF documents uploaded in the Media Library.
    PDF documents uploaded to the Media Library must adhere to a maximum file size limit of 10 MB.
  * **Process Data**: Identifies the key location within the process where the document is sourced, establishing the linkage between the document and process data.
  * **Static**: Denotes the document's URL, serving as a fixed reference point.
It's worth noting that the inline modal view can raise accessibility issues if the file preview's height exceeds the screen height.
### File preview properties (mobile)
Both iOS and Android devices support the share button.
#### iOS

#### Android

### File preview styling
The File Preview styling property enables you to customize the appearance of the element by adding valid CSS properties; for more details, click [here](../../ui-designer/ui-designer#styling).
When you drag and drop a File Preview element in the UI Designer, it comes with the following default styling properties:
#### Sizing
* **Fit W** - auto
* **Fit H** - fixed / Height - 400 px

## File Preview example
Below is an example of a File Preview UI element with the following properties:
* **Display mode** - Inline
* **Has accordion** - True
* **Default view** - Expanded
* **Source Type** - Static

# Checkbox
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/checkbox-form-field
A checkbox form field is an interactive element in a web form that provides users with multiple selectable options. It allows users to choose one or more options from a pre-determined set by simply checking the corresponding checkboxes.

This type of form field can be used to gather information such as interests, preferences, or approvals, and it provides a simple and intuitive way for users to interact with a form.
## Configuring the Checkbox element
### Checkbox generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Checkbox styling**](#checkbox-styling)
#### Process data key
Process data key establishes the binding between the checkbox element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the checkbox.
* **Helpertext**: Additional information about the checkbox element, which can be optionally hidden within an infopoint.

#### Datasource configuration
* **Default value**: The default value of the checkbox.
* **Source Type**: The source can be Static, Enumeration, or Process Data.
* **Add option** : Define label and value pairs here.

#### Validators
The following validators can be added to a checkbox: `required` and `custom` (more details [here](../../validators)).

#### Hide/disable expressions
The checkbox behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the checkbox element when it evaluates to the specified result.
* **Disabled condition**: A JavaScript expression that disables the checkbox when it returns a truthy value.
These expressions can be used with any form element.
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
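Conceptually, a hide or disable condition is just an expression evaluated against the process data: the element is hidden or disabled whenever the expression returns a truthy value. A minimal sketch, with illustrative data keys:

```javascript
// Illustrative hide/disable conditions evaluated against process data.
// The keys "newsletterOptOut" and "termsRead" are assumptions for the example.
const hideCondition = (data) => data.newsletterOptOut === true;
const disabledCondition = (data) => !data.termsRead;

const processData = { newsletterOptOut: false, termsRead: false };
const hidden = hideCondition(processData);       // false → checkbox stays visible
const disabled = disabledCondition(processData); // true → checkbox is disabled
```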
#### UI actions
UI actions can be added to the checkbox element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Checkbox settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
  * **Label**: Override the checkbox label.
  * **Helper**: Override helper text/info point.
* Expressions:
  * **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

### Checkbox styling
* **Type**: Set the type of the checkbox. Possible values:
  * bordered
  * clear
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
* **Direction**: Determines the orientation of the layout, which can be either horizontal or vertical.
* **Gap**: Specifies the size of the space between rows and columns.
* **Columns**: Indicates the number of columns in the layout.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there's an additional sizing style property specific to this element:
* **Height** (pt - points): Determines the vertical size of the element on the screen.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there's an additional sizing style property specific to this element:
* **Height** (dp - density-independent pixels): Determines the vertical size of the element on the screen.
#### Checkbox style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**

* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

You can import or push the overrides from one platform to another without having to configure them multiple times.

# Datepicker
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/datepicker-form-field
The datepicker (Calendar Picker) is a lightweight component that allows end users to enter or select a date value.

The default datepicker date format is `DD.MM.YYYY`.
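As a sketch, producing a date in this `DD.MM.YYYY` form with plain string padding looks like this:

```javascript
// Format a Date as DD.MM.YYYY using zero-padded day and month.
function formatDDMMYYYY(d) {
  const dd = String(d.getDate()).padStart(2, "0");
  const mm = String(d.getMonth() + 1).padStart(2, "0"); // months are 0-based
  return `${dd}.${mm}.${d.getFullYear()}`;
}

const formatted = formatDDMMYYYY(new Date(2024, 0, 5)); // "05.01.2024"
```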
## Configuring the datepicker element
### Datepicker generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Datepicker styling**](#datepicker-styling)
#### Process data key
Process data key establishes the binding between the datepicker element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the datepicker element.
* **Placeholder**: Text that appears within the datepicker element when it is empty.
* **Min Date**: Set the minimum valid date selectable in the datepicker.
* **Max Date**: Set the maximum valid date selectable in the datepicker.
* **Min Date, Max Date error**: The error message displayed when a user types a date outside the allowed range.
* **Helpertext**: Additional information about the datepicker element, which can be optionally hidden within an infopoint.
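
To illustrate the Min Date / Max Date constraint, the snippet below is a minimal sketch of the bounds check (the platform enforces this internally; the dates used here are hypothetical):

```javascript
// Minimal sketch of a Min Date / Max Date bounds check.
// Relational operators on Date objects compare their millisecond timestamps.
function isWithinBounds(date, minDate, maxDate) {
  return date >= minDate && date <= maxDate;
}

const min = new Date('2024-01-01');
const max = new Date('2024-12-31');
console.log(isWithinBounds(new Date('2024-06-15'), min, max)); // true
console.log(isWithinBounds(new Date('2025-03-01'), min, max)); // false
```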

#### Datasource configuration
**Default Value**: The default value of the datepicker element; it autofills the datepicker when the process runs.
#### Validators
The following validators can be added to a datepicker: `required`, `custom`, `isSameOrBeforeToday` or `isSameOrAfterToday` (more details [here](../../validators)).
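
As a rough sketch, `isSameOrBeforeToday` accepts any date up to the end of the current day (this illustrates the check, not the platform's internal implementation):

```javascript
// Illustrative version of the isSameOrBeforeToday validator check.
function isSameOrBeforeToday(date) {
  const endOfToday = new Date();
  endOfToday.setHours(23, 59, 59, 999); // include the whole current day
  return date.getTime() <= endOfToday.getTime();
}

isSameOrBeforeToday(new Date('2000-01-01')); // a past date passes
```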

#### Hide/disable expressions
The datepicker behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the datepicker element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the datepicker element when it returns a truthy value.
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
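
As a sketch of how a truthy hide condition behaves, the snippet below simulates the evaluation in plain JavaScript (the `application.booking.skipDate` key is hypothetical):

```javascript
// In the UI Designer the expression would reference process data as
// ${application.booking.skipDate}; here we evaluate it against a plain object.
const hideCondition = (data) => data.application.booking.skipDate === true;

console.log(hideCondition({ application: { booking: { skipDate: true } } }));  // true  -> datepicker hidden
console.log(hideCondition({ application: { booking: { skipDate: false } } })); // false -> datepicker shown
```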
#### UI actions
UI actions can be added to the datepicker element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the action type.
For more details on how to configure a UI action, click [**here**](../../ui-actions).

### Datepicker settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the datepicker label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

### Datepicker styling
#### Sizing
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there’s an additional sizing style property:
* **Height** (pt - points): Determines the vertical size of the datepicker element on the screen.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there’s an additional sizing style property:
* **Height** (dp - density-independent pixels): Determines the vertical size of the datepicker element on the screen.
#### Datepicker style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**

* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

You can import or push the overrides from one platform to another without having to configure them multiple times.

# Input
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/input-form-field
An input field is a form element that enables users to input data with validations and can be hidden or disabled.

## Configuring the Input element
### Input generic settings
The settings added in the **Generic** tab apply to all platforms, including Web, iOS, and Android:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Input styling**](#input-styling)

#### Process data key
Process data key establishes the binding between the input element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the input field.
* **Placeholder**: Text that appears within the input field when it is empty.
* **Type**: Defines the type of data the input field can accept, such as text, number, email, or password.
* **Prefix**: Label appearing at the start of the input field.
* **Suffix**: Label appearing at the end of the input field.
* **Has Clear**: Option to include a content clear mechanism.
* **Helpertext**: Additional information about the input field, which can be optionally hidden within an infopoint.
* **Update on Blur**: Update behavior triggered when the input field loses focus.
#### Datasource configuration
The default value for the element can be configured here; it autofills the input field when the process runs.


#### Computed datasource value
To add a computed value, check the **Computed value** option (represented by the `f(x)` icon), which transforms the field into a JavaScript editor.
Check the following example:
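
For illustration, a computed value is simply a JavaScript expression evaluated over process data; the sketch below concatenates two hypothetical input keys into a default full name:

```javascript
// firstName/lastName stand in for ${application.input.firstName} and
// ${application.input.lastName} as referenced in the computed-value editor.
const firstName = "Eddard";
const lastName = "Stark";
const computedValue = firstName + " " + lastName;
console.log(computedValue); // "Eddard Stark"
```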
#### Validators
Validators add an extra layer of data integrity checks to your inputs (for more details, see [Validators](../../validators)).
The following example shows a `required` validator in action:

#### Hide/disable expressions
Define the input field's behavior using JavaScript expressions to control its visibility or disablement. Configure the following properties for expressions:
* **Hide condition**: A JavaScript expression that hides the input field when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the input field when it returns a truthy value.

In the example above, a rule hides an input field while another one is empty (null): the "Mortgage" input field remains hidden until users fill in the "Income" field.
#### Hide expression example
We will use the key defined on the "Income" input field to create the JavaScript hide condition to hide the "Mortgage" input field:

* Rule used:
```javascript
${application.input.income} === null || ${application.input.income} === ""
```
* Result:

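Evaluated in plain JavaScript, the rule above behaves like this (the process-data object is illustrative):

```javascript
// Same logic as the hide rule: hide "Mortgage" while "Income" is empty.
const hideMortgage = (data) =>
  data.application.input.income === null || data.application.input.income === "";

console.log(hideMortgage({ application: { input: { income: null } } })); // true  -> hidden
console.log(hideMortgage({ application: { input: { income: 5000 } } })); // false -> visible
```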
#### Disable example
For example, you can use a disabled condition to disable an input element based on what values you have on a second input.

When you type 'TEST' in the first input (Name) the second input (Test) will be disabled:

* Rule used:
```javascript
${application.input.name} == 'TEST'
```
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
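
The disable rule above can be simulated the same way (the process-data object is illustrative):

```javascript
// Disables the "Test" input once "Name" holds the value 'TEST'.
const disableTest = (data) => data.application.input.name === 'TEST';

console.log(disableTest({ application: { input: { name: 'TEST' } } })); // true  -> disabled
console.log(disableTest({ application: { input: { name: 'John' } } })); // false -> enabled
```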
#### UI actions
UI actions can be added to the Input Field to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.

### Input settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the input label.
* **Helper**: Override helper text/info point.
* **Placeholder**: Override the placeholder.
* **Prefix**: Override the prefix.
* **Suffix**: Override the suffix.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Input styling
* **Left Icon**: You can include an icon on the left side of the Input element. This icon can serve as a visual cue or symbol associated with the input field's purpose or content.
* **Right Icon**: Same as left icon.
#### Icons properties
* **Icon Key**: The key associated with the icon in the Media library; select the icon from the **Media Library**.

Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there's an additional sizing style property specific to input elements:
* **Height** (pt - points): Determines the vertical size of the input box on the screen.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there's an additional sizing style property specific to input elements:
* **Height** (dp - density-independent pixel): Determines the vertical size of the input box on the screen.
#### Input style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**

* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

You can import or push the overrides from one platform to another without having to configure them multiple times.

# Radio
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/radio-form-field
Radio buttons are normally presented in radio groups (a collection of radio buttons describing a set of related options). Only one radio button in a group can be selected at the same time.

## Configuring the radio field element
### Radio generic settings
These allow you to customize the generic settings for the Radio element:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Radio styling**](#radio-styling)
#### Process data key
Process data key establishes the binding between the radio element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).

#### Properties
* **Label**: The visible label for the radio element.
* **Helpertext**: Additional information about the radio element, which can be optionally hidden within an infopoint.

#### Datasource configuration
* **Default value**: Autoselect an option from the radio element based on this value. You need to specify the value from the value/label pairs defined in the Datasource tab.
* **Source Type**: The source can be Static, Enumeration, or Process Data.
* **Add option**: Define label and value pairs here.
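
As a sketch, a static datasource is a list of value/label pairs, and the **Default value** must match one of the values (the options below are hypothetical):

```javascript
// Hypothetical static datasource for a radio group.
const options = [
  { value: "yes", label: "Yes" },
  { value: "no", label: "No" },
];

// A Default value of "yes" pre-selects the matching option:
const defaultValue = "yes";
const selected = options.find((o) => o.value === defaultValue);
console.log(selected.label); // "Yes"
```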

#### Validators
The following validators can be added to a radio: `required` and `custom` (more details [here](../../validators)).

#### Hide/disable expressions
The radio element's behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the Radio element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the Radio element when it returns a truthy value.
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.

#### UI actions
UI actions can be added to the radio element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Radio settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the radio label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

### Radio styling
* **Type**: Set the type of the radio. Possible values:
* bordered
* clear
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
* **Direction**: Determines the orientation of the layout, which can be either horizontal or vertical.
* **Gap**: Specifies the size of the space between rows and columns.
* **Columns**: Indicates the number of columns in the layout.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, the **Gap** unit differs:
* **Gap**: pt (points) instead of pixels
Similar styling considerations apply to Android as for web.
However, for mobile applications, the **Gap** unit differs:
* **Gap**: dp (density-independent pixels) instead of pixels
#### Radio style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**

* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

You can import or push the overrides from one platform to another without having to configure them multiple times.

# Segmented button
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/segmented-button
The segmented button allows users to pick a single option from a group, and you can choose to have between 2 and 5 options in the group. It is easy to use and can make your application more intuitive.

## Configuring the segmented button
### Segmented button generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Segmented button styling**](#segmented-button-styling)
#### Process data key
Process data key establishes the binding between the segmented button and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the segmented button.
* **Helpertext**: Additional information about the segmented button, which can be optionally hidden within an infopoint.
#### Datasource configuration
* **Default Value**: The default value of the segmented button (it can be selected from one of the static source values)
* **Source Type**: It is by default Static.
* **Add option**: Define label and value pairs here.
#### Validators
The following validators can be added to a segmented button: `required` and `custom` (more details [here](../../validators)).

#### UI actions
UI actions can be added to the segmented button element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the action type.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Segmented button settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the segmented button label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

### Segmented button styling
#### Sizing
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
Similar styling considerations apply to Android as for web.
#### Segmented button style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**

* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Icon color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**

You can import or push the overrides from one platform to another without having to configure them multiple times.

# Select
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/select-form-field
The Select form field is an element that allows users to choose from a list of predefined options. Each option consists of a label, which is displayed in the dropdown menu, and a corresponding value, which is stored upon selection.

For example, consider a scenario where you have a label "Sports" with the value "S" and "Music" with the value "M". When a user selects "Sports" in the process instance, the value "S" will be stored for the "Select" key.
## Configuring the Select element
### Select generic settings
These allow you to customize the generic settings for the Select element:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Select styling**](#select-styling)
#### Process data key
Process data key establishes the binding between the select element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the select element.
* **Placeholder**: Text that appears within the select element when it is empty.
* **Empty message**: Text displayed for custom type when no results are found.
* **Search for options**: Displays a search to filter options.
* **Has Clear**: Option to include a content clear mechanism.
* **Helpertext**: Additional information about the select element, which can be optionally hidden within an infopoint.
#### Datasource configuration
* **Default value**: Autofill the select field with this value. You need to specify the value from the value/label pairs defined in the Datasource tab.

* **Source Type**: The source can be Static, Enumeration, or Process Data.
* **Add option** : Define label and value pairs here.
#### Validators
You can add multiple validators to a select field. For more details, refer to [**Validators**](../../validators).
#### Hide/disable expressions
The select field's behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the Select field when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the Select field when it returns a truthy value.
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the select element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Select settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the select label.
* **Helper**: Override helper text/info point.
* **Placeholder**: Override the placeholder.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

### Select styling
* **Left Icon**: You can include an icon on the left side of the Select element. This icon can serve as a visual cue or symbol associated with the select element's purpose or content.
#### Icons properties
* **Icon Key**: The key associated with the icon in the Media library; select the icon from the **Media Library**.

Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there's an additional sizing style property specific to select elements:
* **Height** (pt - points): Determines the vertical size of the select element on the screen.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there's an additional sizing style property specific to select elements:
* **Height** (dp - density-independent pixels): Determines the vertical size of the select element on the screen.
#### Select style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**

* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

You can import or push the overrides from one platform to another without having to configure them multiple times.

## Example - dynamic dropdowns
As mentioned previously, you can populate dropdowns with static data, enumerations, or **process data**. Let's build an example that uses **process data** to create a process with **dynamic dropdowns**.
To create this kind of process, we need the following elements:

* a [**task node**](../../../node/task-node) (this will be used to set which data will be displayed on the dropdowns - by adding a business rule on the node)

* a [**user task node**](../../../node/user-task-node) (here we have the client forms and here we add the SELECT elements)

### Creating the process
Follow the next steps to create the process from scratch:
1. Open **FlowX Designer** and from the **Processes** tab select **Definitions**.
2. Click on the breadcrumbs (top-right corner) then click **New process** (the Process Designer will now open).
3. Now add all the **necessary nodes** (as mentioned above).
### Configuring the nodes
1. On the **task node**, add a new **Action** (this will set the data for the dropdowns) with the following properties:
* Action type - **Business Rule**
* **Automatic**
* **Mandatory**
* **Language** (we used an [**MVEL**](../../../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel) script to create a list of objects)

Below you can find the MVEL script used in the above example:
```java
output.put("application",
{
    "client": {
        "identity": [
            { "value": "001", "label": "Eddard Stark" },
            { "value": "002", "label": "Sansa Stark" },
            { "value": "003", "label": "Catelyn Stark" }
        ]
    },
    "contracts": {
        "001": [
            { "value": "c001", "label": "Eddard Contract 1" },
            { "value": "c007", "label": "Eddard Contract 2" }
        ],
        "003": [
            { "value": "c002", "label": "Catelyn Contract 1" },
            { "value": "c003", "label": "Catelyn Contract 2" },
            { "value": "c004", "label": "Catelyn Contract 3" }
        ],
        "002": [
            { "value": "c005", "label": "Sansa Contract 1" }
        ]
    }
});
```
2. On the **user task node**, add a new **Action** (a submit action that validates the forms and saves the data) with the following properties:
* **Action type** - Save Data
* **Manual**
* **Mandatory**
* **Data to send** (the key on which we added client details and contracts as objects in the business rule) - `application`

### Configuring the UI
Follow the next steps to configure the UI needed:
1. Select the **user task node** and click the **brush icon** to open [**UI Designer**](../../ui-designer).
2. Add a [**Card**](../root-components/card) UI element as a [**root component**](../root-components/) (this will group the other elements inside it) with the following properties:
* Generic:
* **Custom UI payload** - `{"application": ${application}}` - so the frontend will know which data to display dynamically when selecting values from the **SELECT** element
* **Title** - *Customer Contract*
3. Inside the **card**, add a [**form element**](./).
4. Inside the **form**, add two **select elements**: the first will represent, for example, the *Customer Name* and the second the *Contract ID*.
5. For the first select element (Customer Name), set the following properties:
* **Process data key** - `application.client.selectedClient`
* **Label** - Customer Name
* **Placeholder** - Customer Name
* **Source type** - Process Data (to extract the data added in the **task node**)
* **Name** - `application.client.identity`

6. For the second select element (Contract ID) set the following properties:
* **Process data key** - `application.client.selectedContract`
* **Label** - Contract ID
* **Placeholder** - Contract ID
* **Source Type** - Process Data
* **Name** - `application.contracts`
* **Parent** - `SELECT` (choose from the dropdown list)
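To see what the **Parent** binding does, here is a hedged sketch (plain JavaScript, not renderer code) of the filtering the frontend performs: when the parent select changes, the child select's options narrow to the entries keyed by the selected parent value. The `application` object below is an abridged copy of the one built by the MVEL business rule above.

```javascript
// Abridged copy of the process data produced by the business rule above.
const application = {
  client: {
    identity: [
      { value: "001", label: "Eddard Stark" },
      { value: "002", label: "Sansa Stark" }
    ]
  },
  contracts: {
    "001": [
      { value: "c001", label: "Eddard Contract 1" },
      { value: "c007", label: "Eddard Contract 2" }
    ],
    "002": [{ value: "c005", label: "Sansa Contract 1" }]
  }
};

// When the parent select changes, the child datasource narrows to the
// contract list keyed by the selected client value (empty if none match).
function contractOptions(app, selectedClient) {
  return app.contracts[selectedClient] || [];
}
```

Selecting *Eddard Stark* (value `001`) leaves only his two contracts available in the Contract ID dropdown.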

7. Add a button under the form that contains the select elements with the following properties:
* **Label** - Submit
* **Add UI action** - add the submit action attached earlier to the user task node

8. Test and run the process by clicking **Start process**.

# Slider
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/slider
A slider allows users to select a single value from a predefined range by dragging a handle along a track. Properties such as minimum and maximum values, step size, and an optional suffix let you control the range and granularity of the selection.

## Configuring the slider element
### Slider generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Slider styling**](#slider-styling)
#### Process data key
Process data key establishes the binding between the slider element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the slider element.
* **Show value label**: A toggle option that determines whether the current selected value of the slider is displayed as a label alongside the slider handle.
* **Helpertext**: Additional information about the slider element, which can be optionally hidden within an infopoint.
* **Min Value**: The minimum value or starting point of the slider's range; it defines the lowest selectable value.
* **Max Value**: The maximum value or end point of the slider's range; it sets the highest selectable value.
* **Suffix**: Optional text or a symbol displayed after the value on the slider handle or value label, commonly used to provide context or units of measurement.
* **Step size**: The increment by which the slider value changes when moved; it defines the intervals at which the slider can be adjusted, allowing users to make precise or discrete value selections.
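As an illustration of how these three properties interact (a sketch, not FlowX internals), a slider value is first clamped to the min/max range and then snapped to the nearest step:

```javascript
// Illustrative only: clamp a raw value to [min, max], then snap it to the
// nearest multiple of `step` counted from `min`.
function snapToStep(value, min, max, step) {
  const clamped = Math.min(max, Math.max(min, value));
  const steps = Math.round((clamped - min) / step);
  return Math.min(max, min + steps * step);
}
```

With `min = 0`, `max = 100`, and `step = 10`, a drag to 47 snaps to 50, and values outside the range are pulled back to the nearest limit.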
#### Datasource configuration
**Default Value**: The initial value (a static integer) set on the slider when it is first displayed; it serves as the starting point or pre-selected value, which users can keep or adjust as desired.

#### Validators
The following validators can be added to a slider: `required` and `custom` (more details [here](../../validators)).
#### Hide/disable expressions
* **Hide condition**: A JavaScript expression that hides the slider element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the slider element when it returns a truthy value.
It’s important to make sure that disabled fields have the same expression configured under the path expressions → hide.
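As an illustration (the process data key `application.loan.amount` below is hypothetical), a hide/disable condition is plain JavaScript evaluated against process data, with `${...}` interpolating the bound value; any truthy result hides or disables the element:

```javascript
// Hypothetical process data; "application.loan.amount" is an assumed key.
const processData = { application: { loan: { amount: 5000 } } };

// An expression such as ${application.loan.amount} > 10000 is evaluated
// after interpolation, i.e. effectively:
const hideCondition = processData.application.loan.amount > 10000;

// A falsy result keeps the element visible; a truthy one hides it.
const isHidden = Boolean(hideCondition);
```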
#### UI actions
UI actions can be added to the slider element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Multiple sliders
You can also use multiple sliders UI elements that are interdependent, as you can see in the following example:

You can improve the configuration of the slider using computed values as in the example above. These values provide a more flexible and powerful approach for handling complex use cases. You can find an example by referring to the following documentation:
[**Dynamic & computed values**](../../dynamic-and-computed-values#computed-values)
### Slider settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the slider label.
* **Helpertext**: Override helper text/info point.
* **Show value**: Override the show value option.
* **Suffix**: Override the suffix.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

### Slider styling
#### Sizing
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
Similar styling considerations apply to Android as for web.
#### Slider style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Limits font **\[FONT]**
* Text style **\[FONT]**

* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Empty **\[COLOR]**
* Filled **\[COLOR]**
* Knob color **\[COLOR]**
* Limits **\[COLOR]**
* Value **\[COLOR]**
On iOS, overrides for the **Knob** are not available.

* Empty **\[COLOR]**
* Filled **\[COLOR]**
* Knob color **\[COLOR]**
* Limits **\[COLOR]**
* Value **\[COLOR]**
On iOS, overrides for the **Filled** and **Empty** states, as well as the **Knob**, are not available.

You can import or push the overrides from one platform to another without having to configure them multiple times.

# Switch
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/switch-form-field
A switch, also known as a toggle switch, is another form element that can be utilized to create an intuitive user interface. The switch allows users to select a response by toggling between two states. Based on the user's selection, the corresponding Boolean value of either `true` or `false` is recorded and stored in the process instance values for future reference.

## Configuring the switch element
### Switch generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Switch styling**](#switch-styling)
#### Process data key
Process data key establishes the binding between the switch element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the switch element.
The Label field supports Markdown syntax, enabling you to customize the text appearance with ease. To explore the Markdown syntax and its various formatting options, click [**here**](https://www.markdownguide.org/cheat-sheet/).

* **Helpertext**: Additional information about the switch element, which can be optionally hidden within an infopoint.
#### Datasource configuration
**Default Value**: The initial state of the switch element, either switched on or switched off. If not configured, it defaults to switched on.
#### Validators
The following validators can be added to a switch element: `requiredTrue` and `custom` (more details [here](../../validators)).

#### Hide/disable expressions
* **Hide condition**: A JavaScript expression that hides the Switch element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the Switch element when it returns a truthy value.
It’s important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the Switch element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Switch settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the switch label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

### Switch styling
**Label position**: The label of the Switch can be positioned either as `start` or `end`.
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
Similar styling considerations apply to Android as for web.
#### Switch style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Knob color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Knob color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Knob color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Knob color **\[COLOR]**

You can import or push the overrides from one platform to another without having to configure them multiple times.

# Text area
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/text-area
A text area is a form element used to capture multi-line input from users in a conversational interface. The text area component is typically used for longer inputs such as descriptions, comments, or feedback, providing users with more space to type their responses.

It is an important tool for creating intuitive and effective conversational interfaces that can collect and process large amounts of user input.
## Configuring the text area element
### Text area generic settings
These settings added in the Generic tab are available and they apply to all platforms including Web, iOS, and Android:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Text area styling**](#text-area-styling)
#### Process data key
Process data key creates the binding between the form element and process data, so it can be later used in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the text area element.
* **Placeholder**: Text that appears within the text area when it is empty.
* **Has Clear**: Option to include a content clear mechanism.
* **Helpertext**: Additional information about the text area field (can be hidden inside an infopoint).
* **Update on Blur**: Update behavior triggered when the text area loses focus.
#### Datasource configuration
The default value for the element can be configured here; it will autofill the text area when you run the process.
#### Validators
You can add multiple validators to a text area field. For more details, refer to [**Validators**](../../validators).

#### Hide/disable expressions
The text area's behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured:
* **Hide condition**: A JavaScript expression that hides the text area when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the text area when it returns a truthy value.

In the example above, we used a rule to hide a text area element if the value of the switch element above is false.
#### Hide expression example
We will use the key defined on the switch element to create a JavaScript hide condition to hide the text area element:

* Rule used:
```javascript
${application.client.hasHouse} === false
```
* Result:

#### Disable example
For example, you can use a disabled condition to disable a text area element based on what values you have on other elements.


When you choose a specific value on the radio element (Contact via SMS), the text area is disabled based on the disabled condition.
* Rule used:
```javascript
${application.client.contact} == "prfS"
```
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the text area field to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Text area settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the text area label.
* **Helper**: Override helper text/info point.
* **Placeholder**: Override the placeholder.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:

### Text area styling
#### Fit W (fit width)
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
#### H Type (height type)
* **fixed**: Maintains a fixed height (pixels)
* **auto**: Adjusts the height automatically based on the content.
#### Rows
* **Min Rows**: Sets the minimum number of rows.
* **Max Rows**: Sets the maximum number of rows.
Similar styling considerations apply to iOS as for web.
* **fixed height**: Measured in dp - density-independent pixels.
Similar styling considerations apply to Android as for web.
* **fixed height**: Measured in pt - points.
#### Text area style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).

Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**

* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**

* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**

* Text color **\[COLOR]**
* Text style **\[FONT]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**

You can import or push the overrides from one platform to another without having to configure them multiple times.

# Image
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/image
Image UI elements are graphical components of a user interface that display a static or dynamic visual representation of an object, concept, or content.

These elements can be added to your interface using the UI Designer tool, and they are often used to convey information, enhance the aesthetic appeal of an interface, provide visual cues and feedback, support branding and marketing efforts, or present complex data or concepts in a more intuitive and accessible way.

## Configuring an image
Configuring an image in the UI Designer involves specifying various settings and properties. Here are the key aspects of configuring an image:
### Image settings
The image settings consist of the following properties:
* **Source location** - the location from where the image is loaded:
* [**Media Library**](#media-library)
* [**Process Data**](#process-data)
* [**External**](#external)
Depending on which **Source location** is selected, different configurations are available:
### Media library

* **Image key** - the key of the image from the media library
* **Select from media library** - search for an item by key and select it from the media library

* **Upload to media library** - add a new item (upload an image on the spot)
* **upload item** - supported formats: PNG, JPG, GIF, SVG, WebP (maximum size: 1 MB)
* **key** - the key must be unique and cannot be changed afterwards

### Process data

* Identify the **Source Type**. It can be either a **URL** or a **Base 64 string**.
* Locate the data using the **Process Data Key**.
* If using a URL, provide a **Placeholder URL** for public access. This is the URL where the image placeholder is available.

### External

* **Source Type**: it can be either a **URL** or a **Base 64 string**
* **Image source**: the valid URL of the image.
* **Placeholder URL**: the public URL where the image placeholder is available
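As a hedged illustration of the **Base 64 string** source type (the names below are not FlowX APIs), an image's raw bytes can be encoded into a Base64 data URI and stored for use as the image source:

```javascript
// Illustrative only: encode raw image bytes as a Base64 data URI.
// The four bytes below are the PNG magic-number prefix, used as sample data.
const bytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
const dataUri = "data:image/png;base64," + bytes.toString("base64");
// dataUri can then be stored under a process data key and used as the image source.
```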
## UI actions
The UI actions property allows you to add a UI Action, which must be configured on the same node. For more details on UI Actions, refer to the documentation [here](../ui-actions).

## Image styling
The image styling property allows you to add or to specify valid CSS properties for the image. For more details on CSS properties, click [here](../../ui-designer/ui-designer#styling).

# Indicators
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/indicators
The indicators (Message UI elements) allow you to display different types of messages.
Messages can be categorized into the following types:
* **Info**: Used to convey general information to users.
* **Warning**: Indicates potential issues or important notices.
* **Error**: Highlights errors or critical issues.
* **Success**: Communicates successful operations or completion.

## Properties
When configuring a message, you have the following properties:
* **Message**: The content of the message body. This property supports markdown attributes such as bold, italic, bold italic, strikethrough, and URLs, allowing you to format the message content.
* **Type**: As mentioned above, there are multiple indicator types: info, warning, error, and success.
* **Expressions**: You can define expressions to control when the message should be hidden. This is useful for dynamically showing or hiding messages based on specific conditions.

Info example with markdown:
```markdown
If you are encountering any difficulties, please [contact our support team](mailto:support@flowx.ai).
```
When executed, it will look like this:

## Types and Usage
Here's how you can use the Message UI element in your UI design:
### Info
If you are encountering any difficulties, please [contact our support team](mailto:support@flowx.ai).

### Error
An error occurred while processing your request. Please try again later.

### Success
Your payment was successfully processed. Thank you for using our services!

## Indicators styling
To create an indicator with specific styling, sizing, typography, and color settings, you can use the following configuration:
### Style
The Style section allows you to customize the appearance of your indicator UI element. You can apply the following style to achieve the desired visual effect:
* **Text**: Displays only the icon and the text.

* **Border**: Displays the icon, the text and the border.

* **Fill**: It will fill the UI element's area.

For more valid CSS properties, click [**here**](../../ui-designer/ui-designer#styling).
# Card
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/root-components/card
A card in FlowX.AI is a graphical component designed for the purpose of grouping and aligning various elements. It offers added functionality by incorporating an accordion feature, allowing users to expand and collapse content as needed.

The following properties can be configured:
## Properties and settings
### Settings (applicable across all platforms)
These settings added in the **Generic** tab are available and they apply to all platforms including Web, iOS, and Android.
#### When used as root
When the Card is utilized as the root component, it offers the following settings:
* **Custom UI Payload**: A valid JSON describing the custom data transmitted to the frontend when the process reaches a specific user task.
* **Title**: The title displayed on the card.
* **Subtitle**: Additional descriptive text accompanying the card.
* **Has accordion**: Introduces a Bootstrap accordion, facilitating the organization of content within collapsible items. It ensures that only one expanded item is displayed at a time.
The accordion feature is not available for mobile configuration.
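As a minimal sketch of the **Custom UI Payload** setting (the `application` key and its contents are assumed, mirroring the dynamic dropdown example earlier), the payload is valid JSON in which `${...}` tokens interpolate process data before it reaches the frontend:

```javascript
// The payload as configured in the UI Designer (a template string).
const customUiPayload = '{"application": ${application}}';

// At runtime the engine substitutes the process data bound to "application";
// here we simulate that interpolation with sample data.
const interpolated = customUiPayload.replace(
  "${application}",
  JSON.stringify({ client: { selectedClient: "001" } })
);

// The frontend receives a plain JSON object.
const payload = JSON.parse(interpolated);
```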
#### Mobile configuration (iOS & Android)
For mobile configuration (iOS and Android), you can also configure the following property (not available on Web configuration):
* **Screen title**: Set the screen title used in the navigation bar on mobile devices (available only when the card element is set as the root).

### Card settings overrides
You may want to override the card title or subtitle set in the **Generic** tab so they display differently on certain platforms; for example, titles might need to be shorter on some platforms.
Available properties overrides for web (overriding properties set in **Generic** settings tab):
* Title
* Subtitle

Available properties overrides for Android (overriding properties set in **Generic** settings tab):
* Title
* Subtitle

Available properties overrides for iOS (overriding properties set in **Generic** settings tab):
* Title
* Subtitle
#### When not used as root
When the card is not the root, you can configure: **Title**, **Subtitle**, **Card Style** and **Has Accordion**.
Leverage cards in your designs to organize and present content, enhancing the overall user experience.
## Styling
When designing for the web, consider the layout options available for the card. These options include:
* **Direction**: Choose between **Horizontal** or **Vertical** alignment to define the flow of components. For example, select Horizontal for a left-to-right layout.
* **Justify (H)**: Specify how content is aligned along the main axis. For instance, select end to align items to the end of the card.
* **Align (V)**: Align components vertically within their card using options such as top, center, or bottom alignment.
* **Wrap**: Enable wrapping to automatically move items to the next line when they reach the end of the card. Useful for creating multi-line layouts.
* **Gap**: Define the space between components to control the distance between each item. Adjusting the gap enhances visual clarity and organization.
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there's an additional layout style property specific to cards when used as the root component:
* **Scrollable**: This property allows you to define the desired behavior of the screen, specifying whether it should be scrollable or not. By default, this property is set to true, enabling scrolling functionality.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there's an additional layout style property specific to cards when used as the root component:
* **Scrollable**: This property allows you to define the desired behavior of the screen, specifying whether it should be scrollable or not. By default, this property is set to true, enabling scrolling functionality.
### Theme overrides
Customize the appearance by overriding style options coming from your default theme. Available overrides:
* Border width
* Border radius
* Border color
* Background color
* Shadow
* Title
* Title Color
* Subtitle
* Subtitle Color
## Validating elements
To ensure the validation of all form elements and custom components within a card upon executing a Save Data action such as **Submit** or **Continue**, follow these steps:
1. When adding a UI action to a button inside a card, locate the dropdown menu labeled **Add forms to submit**.
2. From the dropdown menu, select the specific forms, individual form elements, or custom components you wish to validate.
3. If you select custom components, ensure they are properly configured to handle validation and data saving responsibilities.

***
### Custom component validation
UI actions enable validation and data management for custom components alongside form elements. This provides flexibility if you want to design custom components for entire screens, such as custom forms not natively supported by the platform.

#### Key features
* **Validation API**: The renderer provides a public API for validating custom components before triggering an action.
* **Custom Component Responsibilities**:
* Validate custom forms and return a boolean (`true` for valid, `false` for invalid).
* Display validation errors in the UI for invalid data.
* Populate a `saveData` property with the structured data to be submitted.
* **Configurator Responsibilities**:
* Select custom components in the **Add forms to submit** dropdown when configuring a UI action.
* Set the custom key to store the data submitted by the custom component.
By including custom components in the validation process, you can now validate and retrieve data from custom form sections, ensuring comprehensive and reliable form submission.
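The responsibilities above can be sketched as a hypothetical custom component (illustrative names only; the actual renderer contract may differ): it validates its input, surfaces errors for display in the UI, and fills a `saveData` property with the structured data to submit.

```javascript
// Hypothetical custom component participating in form validation.
class CustomAddressForm {
  constructor() {
    this.saveData = {}; // structured data to be submitted with the action
    this.errors = [];   // validation errors to display in the UI
  }

  // Must return true for valid input, false otherwise.
  validate(input) {
    this.errors = [];
    if (!input.street) this.errors.push("Street is required");
    if (!input.city) this.errors.push("City is required");
    const valid = this.errors.length === 0;
    if (valid) {
      // Populate saveData only when the input is valid.
      this.saveData = { address: { street: input.street, city: input.city } };
    }
    return valid;
  }
}
```

A Save Data action configured with this component in **Add forms to submit** would call `validate` before triggering, and store `saveData` under the configured custom key.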
# Container
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/root-components/container
A container in FlowX is a versatile building block that empowers you to group components and arrange them as needed, providing flexibility in UI design. It can also serve as the root component for your design.

The following properties can be configured in the container:
## Properties and settings
### Settings (applicable across all platforms)
These settings added in the **Generic** tab are available and they apply to all platforms including Web, iOS, and Android.
#### When used as root
When employed as the root component, the container offers the following settings:
* **Custom UI Payload**: A valid JSON describing the data sent to the frontend when the process reaches a specific user task.
* **Expressions (Hide condition)**: JavaScript expressions utilized to dynamically hide components based on conditions.

#### When not used as root
When the container is not used as the root, you can configure only the **Hide Condition** property.

By leveraging containers, you gain the ability to structure your UI elements efficiently, enhancing the overall design and usability of your application.
### Container settings overrides
You may want to override settings configured in the **Generic** tab to be displayed differently on mobile devices.
* **Hide expressions**: Use Overrides in the Settings tab to hide a container on a specific platform.
For instance, you can set a container to appear on all platforms, or create an override to hide it on mobile but show it on web.

To achieve this:
1. Select a Container element in the UI Designer, then navigate to Settings -> your desired platform -> Overrides (+) -> Expressions -> Hide.
2. Add your JavaScript Hide condition.
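As a sketch of what such a condition looks like (the data key `application.applicant.age` is a hypothetical example, not a platform-defined key), a hide condition is a plain JavaScript expression evaluated against the process data; the container is hidden when it evaluates to `true`:

```javascript
// Hypothetical process data -- for illustration only.
const processData = { application: { applicant: { age: 16 } } };

// The hide condition itself is just the expression below,
// evaluated against the process data at runtime:
const hide = processData.application.applicant.age < 18;

console.log(hide); // true -> the container is hidden on this platform
```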
## Styling
When designing for the web, consider the layout options available for the container. These options include:
* **Position**
  * **Static**: This style remains fixed and does not scroll along with the page content.
  * **Sticky**: When the sticky property is enabled, the container maintains its position even during scrolling.
  * **Sticky layout**: You have the option to specify minimum distances between the container and its parent element while scrolling. At runtime, sticky containers keep their position on scroll relative to the top/bottom/right/left margins of the parent element.

* **Direction**: Choose between **Horizontal** or **Vertical** alignment to define the flow of components. For example, select Horizontal for a left-to-right layout.
* **Justify (H)**: Specify how content is aligned along the main axis. For instance, select end to align items to the end of the container.
* **Align (V)**: Align components vertically within their container using options such as top, center, or bottom alignment.
* **Wrap**: Enable wrapping to automatically move items to the next line when they reach the end of the container. Useful for creating multi-line layouts.
* **Gap**: Define the space between components to control the distance between each item. Adjusting the gap enhances visual clarity and organization.
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, there are exceptions, particularly with **Sticky layout**:

In mobile configurations, the right and left properties for **Sticky layout** are ignored by the iOS renderer.
Similar styling considerations apply to Android as for web.
However, there are exceptions, particularly with **Sticky layout**:

In mobile configurations, the right and left properties for **Sticky layout** are ignored by the Android renderer.
### Theme overrides
Customize the appearance by overriding style options coming from your default theme. Available overrides:
* Border width
* Border radius
* Border color
* Background color
* Shadow
More layout demos available below:
For more information about styling and layout configuration, check the following section:
***
Use our [**feedback form**](https://www.cognitoforms.com/FlowXAi1/FeedbackForm) if you would like to provide feedback on this page. You could also [**raise issues/requests**](https://flowxai.canny.io/documentation-feedback).
# Custom component
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/root-components/custom
Custom components are developed in the web application and referenced here by component identifier. This will dictate where the component is displayed in the component hierarchy and what actions are available for the component.
Starting with platform version **3.4.7**, for User Tasks containing UI Elements, when the page is rendered the Backend (BE) should, by default, send to the Frontend (FE) all available data as process variables with matching keys.
If the User Task also includes a **custom component**, the BE should send, in addition to default keys, objects mentioned in the "Message" option of the root element.
To add a custom component in the template config tree, we need to know its unique identifier and the data it should receive from the process model.

The properties that can be configured are as follows:
* **Identifier** - This enables the custom component to be displayed within the component hierarchy and determines the actions available for the component.
* **Input keys** - These are used to specify the pathway to the process data that components will utilize to receive their information.
* [**UI Actions**](../../ui-actions) - Actions defined here will be made available to the custom component.

## Prerequisites (before creation)
* **Angular Knowledge**: You should have a good understanding of Angular, as custom components are created and imported using Angular.
* **Angular CLI**: Ensure that you have Angular CLI installed.
* **Development Environment**: Set up a development environment for Angular development, including Node.js and npm (Node Package Manager).
* **Component Identifier**: You need a unique identifier for your custom component. This identifier is used for referencing the component within the application.
## Creating a custom component (Web)
To create a Custom Component in Angular, follow these steps:
1. Create a new Angular component using the Angular CLI or manually.
2. Implement the necessary HTML structure, TypeScript logic, and SCSS styling to define the appearance and behavior of your custom component.

## Importing the component
After creating the Custom Component, you need to import it into your application.
In your `app.module.ts` file (located at `src/app/app.module.ts`), add the following import statement:
```ts
import { YourComponent } from '@app/components/yourComponent.component';
```

## Declaration in AppModule
In the same `app.module.ts` file, declare your Custom Component within the `declarations` array in the `@NgModule` decorator:
```ts
@NgModule({
  declarations: [
    // ...other components
    YourComponent
  ],
  // ...other module configurations
})
```
## Declaration in FlxProcessModule
To make your Custom Component available for use in processes created in FLOWX Designer, you need to declare it in `FlxProcessModule`.
In your `process.module.ts` file (located at `src/app/modules/process/process.module.ts`), add the following import statement:
```ts
import { YourComponent } from '@app/components/yourComponent.component';
```
Then, declare your Custom Component in the `FlxProcessModule.forRoot` function:
```ts
FlxProcessModule.forRoot({
  components: {
    // ...other components
    yourComponent: YourComponent
  },
  // ...other module configurations
})
```
## Using the custom component
Once your Custom Component is declared, you can use it for configuration within your application.

## Data input and actions
The Custom Component accepts input data from processes and can also include actions extracted from a process. These inputs and actions allow you to configure and interact with the component dynamically.

## Extracting data from processes
There are multiple ways to extract data from processes to use within your Custom Component. You can utilize the data provided by the process or map actions from the BPMN process to Angular actions within your component.

Make sure the Angular actions you declare match the names of the process actions.
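As an illustration of this naming constraint (the input shape and the action name `saveClientData` below are assumptions for the sketch, not the official SDK interface), a custom component typically receives the resolved input data plus a map of actions keyed by the process action names:

```javascript
// Hypothetical input shape -- not the official SDK interface.
// `inputs.data` holds values resolved from the configured input keys;
// `inputs.actions` maps process action names to callable functions.
function onSaveClick(inputs) {
  // The lookup key must match the process action name exactly.
  const save = inputs.actions["saveClientData"];
  if (save) {
    save({ client: inputs.data.client });
  }
}

// Usage with mocked inputs:
let sent = null;
onSaveClick({
  data: { client: { name: "John" } },
  actions: { saveClientData: (payload) => { sent = payload; } },
});
console.log(sent); // { client: { name: 'John' } }
```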
## Styling with CSS
To apply CSS classes to UI elements within your Custom Component, you first need to identify the UI element identifiers within your component's HTML structure. Once identified, you can apply defined CSS classes to style these elements as desired.
Example:

## Custom component example
Below you can see an example of a basic custom loader component built with Angular:

## Additional considerations
* **Naming Conventions**: Be consistent with naming conventions for components, identifiers, and actions. Ensure that Angular actions match the names of process actions as mentioned in the documentation.
* **Component Hierarchy**: Understand how the component fits into the overall component hierarchy of your application. This will help determine where the component is displayed and what actions are available for it.
* **Documentation and Testing**: Document your custom component thoroughly for future reference. Additionally, testing is crucial to ensure that the component behaves as expected in various scenarios.
* **Security**: If your custom component interacts with sensitive data or performs critical actions, consider security measures to protect the application from potential vulnerabilities.
* **Integration with FLOWX Designer**: Ensure that your custom component integrates seamlessly with FLOWX Designer, as it is part of the application's process modeling capabilities.
## Creating a custom component (iOS)
Enhance your skills with our academy course! Learn how to develop and integrate a custom iOS component with FlowX.AI:
# Root Components in UI Design
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/root-components/root-components
Root components serve as the foundation for structuring user interfaces, providing the framework for arranging and configuring different types of components.

Root components play a crucial role in defining the layout and hierarchy of elements within an application. Here's an overview of key root components and their functionalities:
### Container
The Container component is a versatile element used to group and configure the layout for multiple components of any type. It provides a flexible structure for organizing content within a UI.
Learn more about [Container components](./container).
### Custom
Custom components are Angular components developed within the container application and dynamically passed to the SDK at runtime. These components are identified by their unique names and enable developers to extend the functionality of the UI.
Explore [Custom components](./custom) for advanced customization options.
### Card
The Card component functions similarly to a Container component but also offers the capability to function as an accordion, providing additional flexibility in UI design.
Discover more about [Card components](./card).
A card or a container can hold a hierarchical component structure, as in this example:

Available children for **Card** and **Container** are:
1. [**Form**](../form-elements/) - Used to group and align form field elements (inputs, radios, checkboxes, etc.).
For more information about the form elements, please refer to the [**Form elements**](../form-elements/) section.
2. [**Image**](../image) - Allows you to configure an image in the document.
3. **Text** - A simple text can be configured via this component; basic configuration is available.
4. **Link** - Used to configure a hyperlink that opens in a new tab.
5. [**Button**](../buttons) - Multiple options are available for configuration, with the most important part being the possibility to add actions.
6. [**File Upload**](../buttons) - A specific type of button that allows you to select a file.
7. [**Custom**](./custom) - Custom components.
8. [**Indicators**](../indicators) - Message UI elements to display different types of messages.
Learn more:
# Table
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/table
The Table component is a versatile UI element allowing structured data display with customizable columns, pagination, filtering, and styling options.
## Overview
The **Table** component is available **only for Web**, built to deliver a consistent design through FlowX.AI’s theming framework. It closely mirrors the [**Collection**](./collection/collection) component functionality, offering dynamic data handling and flexible row/column configurations.

**Web-only UI Component:** The Table component is designed specifically for web applications and includes theming options for consistent design across the platform.
## Table elements
1. **Table Header (`th`)**\
Displays column labels, typically one `th` element per column.
2. **Rows (`tr`)**\
Each row represents a single data entry from the source array. Rows automatically repeat based on your data size.
3. **Cells**\
Cells hold data points within each row. You can place multiple UI elements (text, image, link, buttons) inside a cell.
4. **Actions Column**\
If you enable row deletion or editing, an actions column is automatically created to handle row-level features (delete icon, edit icon, etc.).

By default, when you create a new Table, three columns and a matching set of cell placeholders are generated automatically.
## Configuring the table
When building a Table, you’ll primarily interact with:
* **Generic Settings** (source key, columns, table body, expressions, styling)
* **Cell-Specific Settings** (sorting, filtering, editing, validators)

Below is an example of a JavaScript business rule that populates the Table with mock data. The `users` array is assigned to `application.users`, which serves as the Table’s source.

```js
users = [
  {
    "firstName": "John",
    "lastName": "Doe",
    "loanAmount": {
      "amount": 1000.00,
      "code": "USD"
    },
    "birthDate": "1985-01-01T00:00:00Z",
    "email": "john.doe@example.com"
  },
  {
    "firstName": "John",
    "lastName": "Does",
    "loanAmount": {
      "amount": 2000.00,
      "code": "USD"
    },
    "birthDate": "1985-02-01T00:00:00Z",
    "email": "john.does@example.com"
  },
  {
    "firstName": "Jane",
    "lastName": "Doe",
    "loanAmount": {
      "amount": 3000.00,
      "code": "USD"
    },
    "birthDate": "1985-03-01T00:00:00Z",
    "email": "jane.doe@example.com"
  },
  {
    "firstName": "Jane",
    "lastName": "Does",
    "loanAmount": {
      "amount": 4000.00,
      "code": "USD"
    },
    "birthDate": "1985-04-01T00:00:00Z",
    "email": "jane.does@example.com"
  },
  {
    "firstName": "Jim",
    "lastName": "Doe",
    "loanAmount": {
      "amount": 5000.00,
      "code": "USD"
    },
    "birthDate": "1985-05-01T00:00:00Z",
    "email": "jim.doe@example.com"
  },
  {
    "firstName": "Jim",
    "lastName": "Does",
    "loanAmount": {
      "amount": 6000.00,
      "code": "USD"
    },
    "birthDate": "1985-06-01T00:00:00Z",
    "email": "jim.does@example.com"
  },
  {
    "firstName": "Jake",
    "lastName": "Doe",
    "loanAmount": {
      "amount": 7000.00,
      "code": "USD"
    },
    "birthDate": "1985-07-01T00:00:00Z",
    "email": "jake.doe@example.com"
  },
  {
    "firstName": "Jake",
    "lastName": "Does",
    "loanAmount": {
      "amount": 8000.00,
      "code": "USD"
    },
    "birthDate": "1985-08-01T00:00:00Z",
    "email": "jake.does@example.com"
  },
  {
    "firstName": "Jill",
    "lastName": "Doe",
    "loanAmount": {
      "amount": 9000.00,
      "code": "USD"
    },
    "birthDate": "1985-09-01T00:00:00Z",
    "email": "jill.doe@example.com"
  },
  {
    "firstName": "Jill",
    "lastName": "Does",
    "loanAmount": {
      "amount": 10000.00,
      "code": "USD"
    },
    "birthDate": "1985-10-01T00:00:00Z",
    "email": "jill.does@example.com"
  },
  {
    "firstName": "Joe",
    "lastName": "Doe",
    "loanAmount": {
      "amount": 11000.00,
      "code": "USD"
    },
    "birthDate": "1985-11-01T00:00:00Z",
    "email": "joe.doe@example.com"
  },
  {
    "firstName": "Joe",
    "lastName": "Does",
    "loanAmount": {
      "amount": 12000.00,
      "code": "USD"
    },
    "birthDate": "1985-12-01T00:00:00Z",
    "email": "joe.does@example.com"
  }
];
application = {
  "users": users
};
output.put("application", application);
```
When creating a table, three columns, each with a corresponding cell, are added by default.
### Table generic settings
The following generic settings are found in the Generic tab and apply to all platforms (Web, iOS, and Android):
* [**Source key**](#source-key)
* [**Columns**](#columns)
* [**Table body**](#table-body)
* [**Expressions**](#expressions)
* [**Table Styling**](#table-styling)
#### Source key
* Specify an array of objects (e.g., `application.users`), enabling dynamic row creation based on your data structure.
* Similar to the Collection component.
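Conceptually, the source key is a dotted path resolved against the process data, yielding the array that drives row creation. The resolution helper below is an illustration of the idea, not the actual renderer code; the field names follow the business-rule example above:

```javascript
// Process data as produced by a business rule (abbreviated).
const processData = {
  application: {
    users: [
      { firstName: "John", lastName: "Doe" },
      { firstName: "Jane", lastName: "Doe" },
    ],
  },
};

// Resolve a dotted source key such as "application.users":
const resolve = (data, key) => key.split(".").reduce((obj, part) => obj?.[part], data);

const rows = resolve(processData, "application.users");
console.log(rows.length); // 2 -> one table row is rendered per object in the array
```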


#### Columns
* Customize the Table’s columns: add, delete, rename, or reorder.
* Each column automatically includes a corresponding `th` (header cell) and `td` (body cell).

#### Table body
* **Default Sorting**: The Sortable option must be enabled for the selected column for default sorting to work.
  * **Column**: Select the desired column.
  * **Direction**: Set the sort direction (**Ascending** or **Descending**).
* **Pagination**: Control how data is displayed by configuring pagination or enabling scrolling.
  * **Page Size**: Set the maximum number of entries displayed per page.
  * **Scrollable**: Disable pagination to enable continuous scrolling through data.
* **Deletable Rows**: Enable row deletion.

#### Hide condition
* Optionally hide the entire Table based on a JavaScript expression including the data key from another component.
* Useful for conditional flows (e.g., only show the Table if certain conditions are met).
**Demonstration**:
***
## Feature highlights
### Default sorting
* **Enable default sorting** in table body settings.
* **Select the desired column** and **sort direction** (ascending by default).
Sortable option must be enabled for the selected column at **cell level** for default sorting to work.

### Pagination & scrolling
Define how many rows appear on each page using **Pagination** feature.
* **Scrolling**: Disable pagination to allow continuous scrolling.

### Editable rows
Enable row editing in **cell-level → column settings**.


* Editing can be triggered by double-click or the dedicated Edit button icon.
Make sure each editable cell references a valid data model key to facilitate editing.
* Edits are validated and saved row-by-row.
### Deletable rows
Toggle **Deletable** in Table settings to enable row deletion.


A delete icon will appear in the Actions column, removing the corresponding row from the data source.
### Filters
Mark columns as filterable to enable filtering icons.

* Filter type automatically matches the column data type (string, number, date, boolean, or currency).

Filtering option is ideal for large data sets where quick column-based searches are necessary.
## Table styling
### Sizing
* **Fit Width**: By default, the Table stretches to occupy available width.
* **Fit Height**: Grows automatically based on content height.

### Cell styling
**Layout Options:**
* **Direction**: Horizontal (default).
* **Justify**: Space-around (evenly spaces elements within each cell).
* **Align**: Start (left-aligned).
* **Wrap**: Enables text wrapping.
* **Gap**: 8px spacing between cell elements.
**Column Style Options:**
* **Width Fit Options**:
  * **Fill**: Fills available container space.
  * **Fixed**: Keeps a fixed column width.
  * **Auto**: Adjusts column width to fit content.

* **User Resizable Columns**: Adjust column width by dragging the column edges in the header, enhancing customization.

This Table component enhances flexibility and offers a cohesive design, integrated with FlowX.AI’s theming framework for a consistent, web-optimized user experience.
## Table UI action
Append a `SaveData` action to the table and configure the UI action's custom keys with the same key used as the table's source, so the backend takes the updated array into account.
See [**User Task configuration**](#user-task-configuration) section for more details.
## Actions in table cells
When configuring actions within a Table component, each action is bound to a unique table item key, ensuring that interactions within each cell are tracked and recorded precisely.
The key allows the action to target specific rows and store results or updates effectively.
* **Table Item Save Key**:
  * This key is essential for identifying the exact cell or row where an action should be executed and saved.
  * It ensures that data within each cell remains distinct and correctly mapped to each table entry.
* **Custom Key for Data Saving**:
  * Important: To properly send the data from a selected cell/row to the backend, you must configure both a **Custom Key** (in your `SaveData` action) and a **Table Item Save Key** (in your column/cell configuration) using the same value.
  * Having only a `Table Item Save Key` is not sufficient to propagate the updated information. The matching **Custom Key** in the **SaveData** action tells the system which row/cell data to capture and transmit.
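The key-matching requirement can be sketched as follows (the object shapes are hypothetical, meant only to show that the two settings must carry the same value; they are not the Designer's actual schema):

```javascript
// Hypothetical SaveData action configuration -- illustration only.
const saveDataAction = {
  type: "SaveData",
  customKeys: ["application.users"], // must equal the table's source key
};

// Hypothetical cell configuration -- illustration only.
const tableCell = {
  tableItemSaveKey: "application.users", // same value as the Custom Key above
};

// Only when the two values match can the backend correlate the edited
// row with the right entry in the source array:
console.log(saveDataAction.customKeys[0] === tableCell.tableItemSaveKey); // true
```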

* **Supported Action Types** - Available actions include:
  * **Action**: Initiates predefined actions within the cell.
  * **Start Process Inherit**: Enables workflows inherited from process configurations to be triggered.
  * **Upload**: Allows file or data uploads directly within a table cell.
  * **External**: Used to create an action that will open a link in a new tab.
## User Task configuration
When configuring a User Task screen containing a table, you must configure the following node actions:
* The `SaveData` action must be:
  * **Manual**: Requires user initiation.
  * **Optional**: Not mandatory for task completion.
  * **Repeatable**: Can be executed multiple times.
* **Execution Order Constraint**:
  * The `SaveData` action can only be executed **before** mandatory actions.
  * The UI is built by configurators to ensure the correct execution order of actions.

* Add a UI action to the table by assigning the previously created node action.

It is important to also add a custom key that exactly matches the table's source key, so the platform knows where to save the edits applied in the table.
* An additional **manual** and **mandatory** action should be included.
* This ensures that the **token remains within the user task** where the table is rendered.

* You can add another node action if you want to assign a UI action to a table cell element (e.g., a text element).
  * Assign the UI action to that particular element.
  * Make sure it is manual, optional, and repeatable (it can only be executed before mandatory actions).
## FAQs
The `table item key` is essential for identifying specific rows and cells within a table. When actions are triggered in table cells, this key ensures that the action applies to the correct item, allowing data to be saved accurately in the intended cell or row. Without this key, actions may not track or save data correctly.
While the Table component shares structural similarities with Collection, it is tailored specifically for tabular data. Unlike Collection, it supports easy customization of columns, row pagination, and in-place editing (in future versions), streamlining the handling of tabular data.
Conditional styling is a planned feature for version 5.0.0. Once available, it will allow you to apply specific styles to cells or rows based on conditions, such as highlighting critical items or overdue entries.
No, nested tables (tables within other tables) are currently unsupported and are not planned for future updates. This limitation keeps the Table component optimized for its intended use without overcomplicating its structure.
Table cells support various actions:
* **Action**: Executes a predefined action within the cell.
* **Start Process Inherit**: Triggers workflows based on inherited process configurations.
* **Upload**: Allows direct file or data uploads within a cell.
Each of these actions requires a `table item key` to ensure data accuracy.
Pagination can be customized to control the number of entries displayed per page. Alternatively, you can enable scrollable view mode by disabling pagination, which provides a continuous, scrollable data view.
Direct in-place editing is scheduled for version 4.6.0, allowing users to edit data directly within table cells. This feature will improve efficiency for workflows requiring frequent table data updates.
Yes, the Table component requires a source in the form of an array of objects. The source allows the Table to dynamically populate cells based on the data structure, ensuring rows and columns align with your data set.
Custom actions can be configured using the UI Designer. Each action added to a cell will leverage the `table item key` to perform tasks such as saving edits, initiating workflows, or uploading files directly from the table.
Yes, the Table component supports JavaScript expressions to control visibility dynamically. By setting up expressions, you can create conditions that hide certain columns or rows when specific criteria are met.
# Typography
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-component-types/typography
Typography is an important aspect of design that greatly influences how users perceive and interact with your content. In this section, we'll explore how to effectively utilize two essential UI elements, "Text" and "Link."

## Text
The "Text" UI element serves as a tool dedicated solely to presenting text within your user interface, whether paragraphs or descriptions. By manipulating embedded CSS properties, you can adjust the visual appearance and formatting of text, aligning it with your design preferences.
### Markdown compatibility
The Text UI element gives you the flexibility of Markdown formatting. You can enhance your text using various markdown tags, including:
* **Bold**
```markdown
**Bold**
```
* *italic*
```markdown
*italic*
```
* ***bold italic***
```markdown
***bold italic***
```
* strikethrough
```markdown
~~strikethrough~~
```
* URL
```markdown
[URL](https://url.net)
```
Let's take the following markdown text example:
```markdown
Be among the *first* to receive updates about our **exciting new products** and releases. Subscribe [here](flowx.ai/newsletter) to stay in the loop! Do not ~~miss~~ it!
```
When running the process it will be displayed like this:

### Text styling
The Styling section provides you with granular control over how your text is displayed, ensuring it aligns with your design vision.
#### Spacing
Adjust the spacing around your text by setting the margin, for example `16px 0 0 0` (top, right, bottom, left).
#### Sizing
Choose "Fit W" to ensure the text fits the width of its container.
#### Typography
Define the font properties:
* **Font family**: Choose the desired font family.
* **Font weight**: Define the thickness of the font.
* **Font size**: Specify the font size in pixels (px).
* **Height**: Set the line height in pixels (px).
* **Color**: Determine the text color.
#### Align
Determine the text alignment.
## Link
Links are essential for navigation and interaction. The "Link" UI element creates clickable text that directs users to other pages or external resources. Here's how to create a link:
# UI Designer
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/ui-designer
The FlowX platform offers a variety of ready-to-use UI components that can be used to create rich web interfaces. These include common form elements like input fields, dynamic dropdown menus, checkboxes, radio and switch buttons, as well as other UI elements like image, text, anchor links, etc. The properties of each component can be customized further using the details tab, and design flexibility is achieved by adding styles or CSS classes to the pre-defined components. The UI templates are built in a hierarchical structure, with a root component at the top.
## Using the UI Designer
The FlowX platform includes an intuitive **UI Designer** for creating diverse UI templates. You can use various elements such as basic buttons, indicators, and forms, as well as predefined [collections](./ui-component-types/collection/collection) and [prototypes](./ui-component-types/collection/collection-prototype). To access the UI Designer, follow these steps:
1. Open **FlowX Designer** and select **Definitions** from the **Processes** tab.
2. Select a **process** from the process definitions list.
3. Click the **Edit process** button.
4. Select a **node** or a **navigation area** then click the **brush icon** to open the **UI Designer**.

The UI designer is available for [**User task**](../node/user-task-node) nodes and **Navigation Areas** elements.
After adding a specific component to the node, the right-side menu will display more configuration options.
For more flexibility, undo or redo actions are available within the UI Designer. This includes tasks such as dragging, dropping, or deleting elements from the preview section, as well as adjusting settings within the styling and settings panel.
To undo or redo an action, users can simply click the corresponding icons in the UI Designer toolbar, or use the keyboard commands for even quicker access.
## UI components
FlowX offers a wide range of [UI components](./ui-designer#ui-components) that can be customized using the UI Designer. For example, when configuring a [card](./ui-component-types/root-components/card) element (which is a root component), the following properties can be customized:

### Settings tab

#### Generic tab
This is where you configure the logic and assign process keys, UI actions, and other component settings that are common across all platforms (Web, iOS, Android).
#### Platform-specific settings
For example, on Android, you might want to change the Card title to a shorter one.
To override a general property like a title, follow these steps:
1. Access the UI Designer and select a UI Element, such as a **Card**.
2. From the UI Designer navigation panel, select the **Settings** tab, then select the **desired platform**.
3. Click the "+" button (next to "Overrides") and select **Properties -> Title**, then input your desired value.
Settings overrides can always be imported/pushed from one platform to another:

Preview your changes in the UI Designer by navigating from one platform to another or by comparing them.

Keep in mind that the preview generated in the UI Designer for iOS and Android platforms is an estimate meant to help you visualize how it might look on a mobile view.
#### Hide expressions
By utilizing **Overrides** in the **Settings** tab, you can selectively hide elements on a specific platform.
To achieve this:
1. Select a UI component in the **UI Designer**, then navigate to **Settings** -> **your desired platform** -> **Overrides (+)** -> **Expressions** -> **Hide**.
2. Add your JavaScript Hide condition.

### Styling tab
The Styling tab functions independently for three platforms: Web, iOS, and Android. Here, you can customize styles for each UI component on each platform.

If you want to customize the appearance of a component to differ from the theme settings, you must apply a **Theme Override**.

Theme overrides can be imported from one platform to another.

### Preview
When you are editing a process in the **UI Designer**, you can preview it with multiple themes:

Overrides are completely independent of the theme, regardless of which theme you choose in the preview mode.
### Layout
There are two main types of layouts for organizing child elements: **Linear** and **Grid**.
* **Linear Layout**: Arranges child elements in a single line, either horizontally or vertically. Ideal for simple, sequential content flow.
* **Grid Layout**: Organizes elements into a structured grid with multiple columns and rows, useful for more complex, multi-dimensional designs.
* **Platform-Specific Layouts**: You can customize layout settings per platform (e.g., Grid on web, Linear on mobile) to ensure optimal responsiveness.
Both layouts offer options to customize direction, alignment, spacing, and wrap behavior for flexibility in design.

### Sizing
By setting desired values for these properties, you can ensure that all UI elements on the interface are the desired size and fit together perfectly.
When adjusting the Fit W and Fit H settings, you can control the size and shape of elements as they appear on screen:
* Fit W: fill, fixed or auto
* Fit H: fill, fixed or auto

### Spacing
Margin and padding are CSS properties used to create space between elements in a web page:
* **margin** - the space outside an element
* **padding** - the space inside an element

### Advanced
* **Advanced** - for advanced customization, you can add CSS classes to pre-defined components; this option is available under the **Advanced** section
* **Data Test ID** - add custom test identifiers for automated testing and element interaction
By utilizing these styling options in FLOWX.AI, users can create unique and visually appealing interfaces that meet their design requirements.
#### Data Test ID
The Advanced section includes a **Data Test ID** field, allowing you to assign custom identifiers to UI components. This feature enhances automated testing by providing meaningful, easily identifiable selectors for UI elements.
Key benefits:
* Replace auto-generated test IDs with custom, readable identifiers
* Simplify element targeting in test scripts

## Tree view
The Tree View panel displays the component hierarchy, allowing users to easily navigate through the different levels of their interface.
Clicking on a specific component in the tree will highlight the selection in the editor, making it easy to locate and modify.

## UI component types
Different UI component types can be configured using the UI Designer. UI components are available and can be configured only on **user task nodes** or **navigation areas**.
Depending on the component type, different properties are available for configuration.
Understanding these component types will help you to better utilize the UI Designer tool and create rich web interfaces.
* [Container](./ui-component-types/root-components/container)
* [Card](./ui-component-types/root-components/card)
* [Custom](./ui-component-types/root-components/custom)
* [Collection](./ui-component-types/collection/collection)
* [Collection Prototype](./ui-component-types/collection/collection-prototype)
* [Button](./ui-component-types/buttons)
* [File Upload](./ui-component-types/buttons#file-upload)
* [Image](./ui-component-types/image)
* Text
* Link
Form elements are a crucial aspect of creating user interfaces as they serve as the means of collecting information from the users. These elements come in various types, including simple forms, [inputs](./ui-component-types/form-elements/input-form-field), [text areas](./ui-component-types/form-elements/text-area), drop-down menus ([select](./ui-component-types/form-elements/select-form-field)), [checkboxes](./ui-component-types/form-elements/checkbox-form-field), [radio buttons](./ui-component-types/form-elements/radio-form-field), toggle switches ([switch](./ui-component-types/form-elements/switch-form-field)), [segmented buttons](./ui-component-types/form-elements/segmented-button), [sliders](./ui-component-types/form-elements/slider) and [date pickers](./ui-component-types/form-elements/datepicker-form-field). Each of these form elements serves a unique purpose and offers different options for capturing user input.
* Form
* [Input](./ui-component-types/form-elements/input-form-field)
* [Textarea](./ui-component-types/form-elements/text-area)
* [Select](./ui-component-types/form-elements/select-form-field)
* [Checkbox](./ui-component-types/form-elements/checkbox-form-field)
* [Radio](./ui-component-types/form-elements/radio-form-field)
* [Switch](./ui-component-types/form-elements/switch-form-field)
* [Segmented button](./ui-component-types/form-elements/segmented-button)
* [Slider](./ui-component-types/form-elements/slider)
* [Datepicker](./ui-component-types/form-elements/datepicker-form-field)
* [Message](./ui-component-types/indicators)
Navigation areas
* Page
* Stepper
* Step
* Modal
* Container
# Validators
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/validators
Validators are an essential part of building robust and reliable applications: they ensure that the data entered by the user is accurate, complete, and consistent. Angular provides a set of pre-defined validation rules that can be used to validate various form inputs such as text fields, number fields, email fields, date fields, and more.

Angular provides default validators such as:
## Predefined validators
**min**: This validator checks whether a numeric value is smaller than the specified value. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.

**max**: This validator checks whether a numeric value is larger than the specified value. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.

**minlength**: This validator checks whether the input value has a minimum number of characters. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.

**maxlength**: This validator checks whether the input value has a maximum number of characters. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.

**required**: This validator checks whether a value exists in the input field.

It is recommended to combine this validator with others, such as [minlength](#minlength-validator), because those validators do not trigger when the field is empty.

**email**: This validator checks whether the input value is a valid email. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.

**pattern**: This validator checks whether the input value matches the specified pattern (for example, a [regex expression](https://www.regexbuddy.com/regex.html)).
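The effect of a pattern check can be illustrated in a few lines (a sketch only, not the Angular validator itself; the sample pattern of two uppercase letters followed by digits is an assumption for illustration):

```typescript
// Illustrative sketch of what a pattern validator does: the value is
// valid only if the WHOLE value matches the configured regular expression.
// This is not Angular's implementation, just a model of the behavior.
function matchesPattern(value: string, pattern: string): boolean {
  // Anchor the pattern so partial matches do not pass, mirroring how
  // pattern validators typically match the full input value.
  return new RegExp(`^(?:${pattern})$`).test(value);
}

// Hypothetical pattern: two uppercase letters followed by digits.
console.log(matchesPattern('RO12345', '[A-Z]{2}\\d+')); // true
console.log(matchesPattern('12345', '[A-Z]{2}\\d+'));   // false
```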

Other predefined validators are also available:
**isSameOrBeforeToday**: This validator can be used to validate [datepicker](./ui-component-types/form-elements/datepicker-form-field) inputs. It checks whether the selected date is today or in the past. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.

**isSameOrAfterToday**: This validator can be used to validate datepicker inputs. It checks whether the selected date is today or in the future. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.
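The behavior of such a date validator can be sketched as follows. This is an illustrative reimplementation using plain `Date`, not the platform's actual code; note how it skips empty values (so it should be paired with a required validator) and how the returned error key matches the validator name:

```typescript
// Illustrative sketch of an "is same or before today" check, similar in
// spirit to the predefined datepicker validator described above.
// NOT the platform's implementation.
function isSameOrBeforeToday(value: string | null): { isSameOrBeforeToday: boolean } | null {
  // Like the predefined validators, do not trigger on an empty value;
  // pair with a required validator instead.
  if (!value) {
    return null;
  }
  const selected = new Date(value);
  const endOfToday = new Date();
  endOfToday.setHours(23, 59, 59, 999);
  // Valid (null) when the selected date is today or in the past;
  // otherwise return an error object keyed by the validator name.
  return selected.getTime() <= endOfToday.getTime()
    ? null
    : { isSameOrBeforeToday: true };
}
```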

To ensure the validation of all form elements within a card upon executing a Save Data action such as “Submit” or “Continue,” follow these steps:
* When adding a UI action to a button inside a card, locate the dropdown menu labeled **Add form to validate**.
* From the dropdown menu, select the specific form or individual form elements that you wish to validate.
* By choosing the appropriate form or elements from this dropdown, you can ensure comprehensive validation of your form.

## Custom validators
Additionally, custom validators can be created within the web application and referenced by name. These custom validators can have various configurations such as execution type, name, parameters, and error message.
1. **Execution type** - sync/async validator (for more details check [this](https://angular.io/api/forms/AsyncValidator))
2. **Name** - name provided by the developer to uniquely identify the validator
3. **Params** - if the validator needs inputs to decide if the field is valid or not, you can pass them using this list
4. **Error Message** - the message that will be displayed if the field is not valid
The error that the validator returns **MUST** match the validator name.

#### Custom validator example
Below you can find an example of a custom validator (`currentOrLastYear`) that restricts date selection to the current or the previous year:
##### currentOrLastYear
```typescript
// Requires moment.js, an application-defined YEAR_FORMAT constant (e.g. 'YYYY'),
// and AbstractControl from @angular/forms.
currentOrLastYear: function currentOrLastYear(AC: AbstractControl): { [key: string]: any } {
  if (!AC) {
    return null;
  }
  // Parse the control value strictly against the expected year format.
  const yearDate = moment(AC.value, YEAR_FORMAT, true);
  const currentDateYear = moment(new Date()).startOf('year');
  const lastYear = moment(new Date()).subtract(1, 'year').startOf('year');
  // The returned error key must match the validator name ("currentOrLastYear").
  if (!yearDate.isSame(currentDateYear) && !yearDate.isSame(lastYear)) {
    return { currentOrLastYear: true };
  }
  return null;
}
```
##### smallerOrEqualsToNumber
Below is another custom validator example that returns an `AsyncValidatorFn`, a function that can be used to validate form input asynchronously. The validator is called `smallerOrEqualsToNumber` and takes an array of `params` as input.
For this custom validator the execution type should be marked as `async` using the UI Designer.
```typescript
// Requires AbstractControl, AsyncValidatorFn, ValidationErrors from
// @angular/forms, and Observable, combineLatest from rxjs.
export function smallerOrEqualsToNumber(params$: Observable<any>[]): AsyncValidatorFn {
  return (AC: AbstractControl): Observable<ValidationErrors | null> => {
    return new Observable((observer) => {
      combineLatest(params$).subscribe(([maximumLoanAmount]) => {
        // Valid (null) when there is no value or it does not exceed the maximum.
        const validationError =
          maximumLoanAmount === undefined || !AC.value || Number(AC.value) <= maximumLoanAmount
            ? null
            : { smallerOrEqualsToNumber: true };
        observer.next(validationError);
        observer.complete();
      });
    });
  };
}
```
If the input value is undefined or the input value is smaller or equal to the maximum loan amount value, the function returns `null`, indicating that the input is valid. If the input value is greater than the maximum loan amount value, the function returns a `ValidationErrors` object with a key `smallerOrEqualsToNumber` and a value of true, indicating that the input is invalid.
For more details about custom validators please check this [link](../../../sdks/angular-renderer).
Using validators in your application can help ensure that the data entered by users is valid, accurate, and consistent, improving the overall quality of your application.
It can also help prevent errors and bugs that may arise due to invalid data, saving time and effort in debugging and fixing issues.
# Fonts
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/design-assets/font-files
Fonts management allows you to upload and manage multiple font files, which can be later utilized when configuring UI templates using the UI Designer.
## Managing fonts
The "Font Management" screen displays a table with uploaded fonts. The following details are available:
* **Font Family**: The names of the uploaded font families.
* **File Name**: The name of the font file.
* **Weight**: The weight of the font, represented by a numeric value.
* **Style**: The style of the font, such as "italic" or "normal".
* **Actions**: This column contains options for managing the uploaded fonts, such as deleting or downloading them.

## Uploading fonts
### Uploading theme font files
To upload new theme font files, follow these steps:
1. Open **FLOWX Designer**.
2. Navigate to the **Content Management** tab and select **Font files**.
3. Click **Upload font** and choose a valid font file.
The accepted font format is TTF (TrueType Font file).
4. Click **Upload**. You can upload multiple TTF font files.
5. For each uploaded font file, the system will automatically identify information such as font family, weight, and style.

## Exporting fonts
You can use the export feature to export a JSON file containing all the font files.

The exported JSON will look like this:
```json
{
  "fonts": [
    {
      "fontFamily": "Open Sans",
      "filename": "OpenSans-ExtraBoldItalic.ttf",
      "weight": 800,
      "style": "italic",
      "size": 135688,
      "storagePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/fonts-folder/1690383294848_OpenSans-ExtraBoldItalic.ttf",
      "contentType": "font/ttf",
      "application": "flowx",
      "flowxUuid": "ce0f75e2-72e4-40e3-afe5-3705d42cf0b2"
    },
    {
      "fontFamily": "Open Sans",
      "filename": "OpenSans-BoldItalic.ttf",
      "weight": 700,
      "style": "italic",
      "size": 135108,
      "storagePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/fonts-folder/1690383295987_OpenSans-BoldItalic.ttf",
      "contentType": "font/ttf",
      "application": "flowx",
      "flowxUuid": "d3e5e2a0-958a-4183-8625-967432c63005"
    }
    // ...
  ],
  "exportVersion": 1
}
```
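Because the export is plain JSON, it can be processed with any tooling, for example to list the weight/style combinations available per font family. A minimal sketch over the structure shown above (the `summarizeFonts` helper is illustrative, not part of the platform):

```typescript
// Minimal sketch: summarize an exported fonts JSON (structure as above)
// into a map of font family -> available "weight style" combinations.
// Only the fields needed here are modeled.
interface ExportedFont {
  fontFamily: string;
  filename: string;
  weight: number;
  style: string;
}

function summarizeFonts(fonts: ExportedFont[]): Record<string, string[]> {
  const summary: Record<string, string[]> = {};
  for (const font of fonts) {
    if (!summary[font.fontFamily]) {
      summary[font.fontFamily] = [];
    }
    summary[font.fontFamily].push(`${font.weight} ${font.style}`);
  }
  return summary;
}

// Sample entries mirroring the export above.
const exported: ExportedFont[] = [
  { fontFamily: 'Open Sans', filename: 'OpenSans-ExtraBoldItalic.ttf', weight: 800, style: 'italic' },
  { fontFamily: 'Open Sans', filename: 'OpenSans-BoldItalic.ttf', weight: 700, style: 'italic' },
];
console.log(summarizeFonts(exported));
// { 'Open Sans': [ '800 italic', '700 italic' ] }
```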
## Importing fonts
You can use the import feature to import a JSON file containing the font files. If a font file already exists, you will be notified.

## Using fonts in UI Designer
For example, let's take an input UI element; you can customize the typography for this UI element by changing the following properties:
* Label:
* Font family
* Style and weight
* Font line size (px)
* Font line height (px)
* Text:
* Font family
* Style and weight
* Font line size (px)
* Font line height (px)
* Helper & errors:
* Font family
* Style and weight
* Font line size (px)
* Font line height (px)

# Global Media Library
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/design-assets/global-media-library
System assets serve as a centralized hub for managing and organizing media files used in themes, independently of any single application, including images, GIFs, and more.

This section is accessible from the left-side navigation under:
> **FlowX.AI Designer → Design Assets → Global Media Library**
***
## 📁 Media Library overview
The main **Media Library** interface displays all uploaded media assets in a table format with the following columns:
* **Preview**: Thumbnail preview of the asset.
* **Key**: A unique identifier for the asset.
* **Format**: File type (e.g., SVG, PNG).
* **Size**: File size.
* **Used as icon**: Indicates if the asset is used as a global icon (✓ if true).
* **Edited by**: User who last modified the asset.
* **Edited at**: Timestamp of the last modification.
Each asset row also includes:
* ✏️ **Edit icon**: To update the asset.
* ⋮ **More actions menu**: For additional options.
***
## ➕ Adding a New Asset
To add a new media asset:
1. Click the **blue plus (+)** button in the top-right corner.
2. In the **Add new item** dialog:
* Click **Upload item** to select a file from your device.
* Enter a **Key** for the asset.
3. Click **Add** to upload the item to the library.
**Important:** The key must be unique and **cannot** be changed later. Choose it carefully.
***
## 📥 Import and Export
Click the **⋮ menu** next to the + button to access:
* **Import media assets**: Bulk upload multiple media files.
* **Export all**: Download all media assets as a ZIP archive.
***
## 🔍 Search Functionality
Use the **Search item by key** bar at the top to find media assets quickly. Results are filtered as you type.
***
## 📌 Icon Usage
If an asset is marked with a ✓ under the **Used as icon** column, it is actively used as a global icon across the FLOWX.AI platform.
# Themes
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/design-assets/themes
Theme management feature enables you to easily change the appearance and styling of your application. You can personalize the look and feel of your application to your branding, preferences, or specific requirements.

## Key features of Theme Management
1. **Theme Management:**
* Creation, editing, and management of themes.
* Selection of predefined themes or customization of themes from scratch.
2. **Customization Options:**
* Modification of color schemes, typography, spacing, and visual effects.
* Upload of custom assets like fonts and icons.
3. **Overrides and Variations:**
* Ability to override default UI components styles properties on specific elements or sections.
* Creation of different themes to accommodate different users/clients preferences.
4. **Platform Consistency:**
* Consistency of theme styles across different platforms and devices.
5. **Preview:**
* Real-time visualization of theme changes.
6. **Export/Import Functionality:**
* Export of themes for backup, sharing, or reuse across multiple environments (UAT, DEV, etc.).
* Import of themes exported from other environments.
## Creating a new theme
You have two options for creating a theme: import a theme that was exported from another environment (for example, UAT or DEV) or create one manually.
To successfully create a new theme in FlowX Designer, follow these steps:
Locate the "Create New" button at the top right of the **Themes** list table.
Click the "Create New" button and enter details for your theme:
* **Theme Name** - pick a name for your theme
* **Font Family** - select your desired font family, by default, the system provides "Open Sans"
If you wish to add a new font, click the link provided under the **Font Family** field, which redirects you to the **Fonts management** section.
* **Choose your primary color** - the default color is `#006BD8`.
Verify that the color format is in **HEX**. If not, an error message will indicate "Please insert a HEX color."
## Configuring a new theme
After creating a theme, you must configure it. To configure your theme effectively, follow these steps:
* Navigate to the settings or customization section of your application (in the UI Designer).
* Look for options related to styling and plan an overall design.
The Themes styles mechanism is based on a hierarchy with the following elements: Design Tokens, Global Settings, and Components.
Modify color schemes (using the design tokens), typography, spacing, and other visual elements to match your desired look and feel.
Use the provided tools or controls to adjust theme settings. This might include sliders, color pickers, or dropdown menus.
**The Design Tokens** represent values based on which the theme is built.
* **Color Palette, Shadows, Typography Tokens**: Configure these tokens based on your company's brand guidelines. They ensure reusability and consistency.

The **Global Settings** are properties that inherit values from the **Design Tokens** and sit on the top of the hierarchy. These properties are then inherited by the **Components**.
* **Platform-specific Settings**: Configure settings for each platform (web, iOS, Android) based on the global settings you've defined.
* **Styles and Utilities**: General settings applying to all components (styles) and labels, errors, and helper text settings (utilities).

When setting up your theme, remember that different platforms like web, iOS, and Android have their own settings. You need to configure each platform separately. Only color settings are the same across all platforms.
For example, you can configure a web theme and then use the push and import options to push the same configuration from web to iOS or Android.
**Component-level Configuration**: Customize the style of each component type.
Keep in mind, there are differences between platforms, for example, for button configuration there are different properties available. What you configure on a platform will not be inherited by the others.

* Before finalizing the theme configuration, it's crucial to review how the changes will appear across different platforms. This step ensures consistency and allows for platform-specific adjustments if needed.
* You can do that by either using the preview feature from **Themes** or by using the preview mode in the **UI Designer** by switching to your preferred platform.
Keep in mind that the preview generated in the UI Designer for iOS and Android platforms is an estimate meant to help you visualize how it might look on a mobile view.
Here is a quick walkthrough video on how to create and configure a theme:
## Managing themes - process level (theme overrides)
With the Overrides feature, you can override default theme settings on specific elements or sections.
Use Theme Overrides in UI Designer to adjust styles or props for specific UI elements based on the desired platform (Web, iOS and Android).
All components can now be styled with token overrides, for color, typography and shadow settings defined in the theme.
**Theme overrides** in **UI Designer** are applied to the component itself, rather than to specific themes. This means that when switching the view to another theme, the overrides persist and apply to the new theme as well.
### Styles tab
The Styles tab functions independently for three platforms: Web, iOS, and Android. Here, you can customize styles for each UI component on each platform.
If you want to customize the appearance of a component to differ from the theme settings, you must apply a **Theme Override**.

Theme overrides can be imported from one platform to another.


Preview mode: In the UI Designer, overrides are entirely independent of the theme. Regardless of the theme selected in preview mode, you will see the applied override reflected at the UI Designer level.
### Preview
When you are editing a process in **UI Designer** you have the possibility of having the preview of multiple themes:

Overrides are completely independent of the theme, regardless of which theme you choose in the preview mode.
## Using a Theme in the container application
To integrate the theme into your container application, follow these steps:
* Copy the unique identifier (UUID) associated with the theme.
* Set the copied UUID within your container application.
* By doing so, ensure that the renderers within your application can recognize and apply the specified theme.
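Before wiring the copied UUID into your container application, it can help to sanity-check that the value is a well-formed UUID; a small sketch (the renderer configuration API itself is SDK-specific, so it is not shown here, and the sample UUID is just an example value):

```typescript
// Small sanity check for a copied theme UUID before passing it to your
// container application's renderer configuration. The configuration API
// is SDK-specific and intentionally not shown here.
function isValidUuid(value: string): boolean {
  // Standard 8-4-4-4-12 hexadecimal UUID shape, case-insensitive.
  return /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(value);
}

console.log(isValidUuid('ce0f75e2-72e4-40e3-afe5-3705d42cf0b2')); // true
console.log(isValidUuid('not-a-uuid')); // false
```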

## Exporting/importing a theme
The Export/Import functionality in the theme management system allows users to export themes for various purposes such as backup, sharing, or reuse across multiple environments.
Additionally, it enables the seamless import of themes previously exported from other environments, facilitating swift integration and continuity across design workflows.
Import is restricted to internal FlowX mechanisms only; themes from external sources like Figma or Zeplin are not supported.
### Exporting a theme
Navigate to the **Theme Management** panel within FlowX Designer.
Select the theme(s) you wish to export.
From the breadcrumbs menu on the right, select **Export Theme**.
The exported theme is saved in a standard format (JSON) and can be downloaded to a local directory or storage location.

### Importing a theme
Navigate to the **Theme Management** panel within FlowX Designer.
From the contextual menu on the right, select **Import Theme**.
Import it as a new theme, or override it if the theme already exists in the target environment.

### Setting a default theme
You can easily establish a default theme by accessing the contextual menu on the right side of a theme and selecting "Set as Default."

When a default theme is not set (or you haven't created a theme yet), the platform automatically assigns the FlowXTheme, which is the platform's default theme. This ensures that there's always a default theme in place to provide a consistent appearance across processes and interactions within the application.
If you set a specific default theme but later delete it, the platform reverts to the FlowXTheme as the default. This safeguard ensures that a default theme is always available, even if you remove your custom selection.

Upon opening any process within the UI Designer, the default theme is displayed as the initial preview. This gives users a clear starting point and ensures consistency in the appearance of the process until further customization is applied.

When creating a new process, you will notice the Default Theme (*FlowXTheme*) as the default preview.

Furthermore, when you start a process definition, the theme switch defaults to the default theme in the run process popup from the process definitions list. This ensures that the default theme is consistently applied during the execution of processes, maintaining visual coherence in user interactions.
# Adding a new node
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/adding-a-new-node
Once you create a new process definition, you can start configuring it by adding new nodes.
You can choose between a series of available node types below. For an overview of what each node represents, see [BPMN 2.0 basic concepts](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn):
A BPMN (Business Process Model and Notation) node is a visual representation of a point in your process. Nodes are added at specific process points to denote the entrance or transition of a record within the process. FlowX supports various node types, each requiring distinct configurations to fulfill its role in the business flow.
### Steps for creating a new node
To create a new node on an existing process:
Open **FlowX.AI Designer** and from the **Processes** tab select **Definitions**.
Select your **process definition** or create a new one.
Make sure you are in edit mode.
Drag and drop one or more **node** elements.
To connect the node that you just created:
* Click the node, select the **arrow** command
* Click the node that you wish to link to the newly added node

Depending on the type of the **node**, you can define some node details, and a set of values (stages, data stream topics, key name) and you can also add various [actions](../../building-blocks/actions/actions) to it.
Now, check the next section to learn how to add an action to a node.
# Adding an action to a node
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/adding-an-action-to-a-node
We use actions to add business decisions to the flow or link the process to custom integrations and plugins.
A node action is defined as the activity that a node has to handle within a process flow. These actions can vary in type and are utilized to specify communication details for plugins or integrations, include business rules in a process, save data and send various data to be displayed in front-end applications.
For more information about actions, check the following section:
### Steps for creating an action
To create an action:
Open **FlowX.AI Designer** and from the **Processes** tab select **Definitions**.
Select your **process definition**.
Click the **Edit** **process** button.
Add a new **node** or edit an existing one.
The nodes that support actions are [task nodes](../../building-blocks/node/task-node), [user task nodes](../../building-blocks/node/user-task-node), and [send message/receive message tasks](../../building-blocks/node/message-send-received-task-node).
Add an **action** to the node and choose the **action type**.

A few **action parameters** will need to be filled in depending on the selected action type.

# Adding more flow branches
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/adding-more-flow-branches
To split the Process flow into more steps, you just need to use a parallel gateway node type.

### Steps for creating a flow with two branches
To create a flow with two branches:
Open **FlowX Designer** and go to the **Definitions** tab.
Click on the **New process** button, using the **breadcrumbs** from the top-right corner.
Add a **start node** and a **parallel gateway node**.
Create two parallel zones by adding different nodes and link the zones after the **parallel gateway node**.
Add another **parallel gateway** to merge the two flow branches back into one branch.
Add an **end node**.

When working with parallel gateways, tokens play a critical role in ensuring that the process flow is managed correctly. Here's an overview of token behavior in parallel gateways:
When a process reaches a parallel gateway, the gateway creates child tokens for each branch in the parallel paths. Each path operates independently.
Each child token advances through its respective path independently, proceeding from one node to the next based on the sequence and actions defined in the process.
A closing parallel gateway node is used to merge parallel paths back into a single flow. The parent token waits at this closing gateway until all child tokens have completed their respective paths.
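The fork/join token behavior described above can be modeled in a few lines. This is a conceptual sketch only, not the FlowX engine's implementation, and the branch names are hypothetical:

```typescript
// Conceptual model of parallel-gateway token behavior (NOT the FlowX
// engine implementation): the opening gateway forks one child token per
// branch, and the closing gateway lets the parent advance only when
// every child token has completed its path.
interface ChildToken {
  branch: string;
  completed: boolean;
}

function forkTokens(branches: string[]): ChildToken[] {
  // One child token per outgoing branch of the opening parallel gateway.
  return branches.map((branch) => ({ branch, completed: false }));
}

function canParentAdvance(children: ChildToken[]): boolean {
  // The closing parallel gateway releases the parent token only once
  // all child tokens have reached it.
  return children.every((child) => child.completed);
}

// Hypothetical branch names for illustration.
const children = forkTokens(['upload-documents', 'background-check']);
console.log(canParentAdvance(children)); // false: both branches still running

children[0].completed = true;
console.log(canParentAdvance(children)); // false: one branch still running

children[1].completed = true;
console.log(canParentAdvance(children)); // true: parent token can advance
```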

# Creating a new process definition
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/creating-a-new-process-definition
The first step of defining your business process in the FlowX.AI Designer is adding a new process definition for it.
This should include at least one [START](../../building-blocks/node/start-end-node#start-node) and [END](../../building-blocks/node/start-end-node#end-node) node.
## Steps for creating a new process definition
A process definition is the core building block of the platform, serving as the blueprint of a business process composed of nodes linked by sequences. Once defined and published on the platform, a process can be executed, monitored, and optimized. Starting a business process results in the creation of a new instance of this definition.
To create a new **process definition**:
Open **FlowX.AI Designer** and go to the **Definitions** tab.
Click the **New process** button, using the **breadcrumbs** from the top-right corner.
Enter a unique name for your process and click **Create**.
You're automatically taken to the **FlowX.AI Process Designer** editor where you can start building your process.

In the following section, you will learn how to add a new node to your newly created process.
# Creating a user interface
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/creating-a-user-interface
You can configure interfaces for both generated and custom screens in FlowX Designer.
Create a simple process:
Go to **FlowX Designer** and navigate to the **Definitions** tab.
Click on the **New Process** button, using the **breadcrumbs** in the top-right corner.
Add a **Start Node**.
Add two **User Tasks** that will represent the screens of the application.
Finish your BPMN process with an **End Node**.

Now create a **Navigation Area** (Page) where we will include our user tasks.

In the FlowX Designer, you can create the following navigation areas:
* **Stepper**: Breaks progress into logical, numbered steps for intuitive navigation.
* **Tab Bar**: Allows users to switch between different sections or views within the application.
* **Page**: Displays full-page content for an immersive experience.
* **Modal**: Overlays that require user interaction before returning to the main interface.
* **Zone**: Groups specific navigation areas or tasks, like headers and footers.
* **Parent Process Area**: Supports subprocess design under a parent hierarchy, ensuring validation and design consistency.
## Configuring the UI
All visual properties of the UI elements and navigation areas are configured using the **FlowX UI Designer**.


### Navigation type
To begin, we need to define the type of navigation for our page application. The options are:
* Single page form
* Wizard
We will use the **Wizard** type for our example.

### Configuring the first screen (card)
Open the **UI Designer** for your first **user task**. This will represent the **first card**.
Add a **CARD** element to the UI.
Add a **Form** to the card to group the inputs.
Add an **input** field to the form.

Add a button with a save data action to advance to the next screen and save the input data.
First, configure the action at the node level. The action, called when the button is clicked, should be **Manual** (not automatic because it is triggered by a user).

### Testing the first screen
Start the process definition that you just configured.
The card with the **Form** and the **Input** is displayed.
Test the **Input**.

## Configuring the second screen (card)
Go to your second **user task** and add a new **CARD**.
Add other UI elements of your choice.

### Testing the final result
Start the process definition again to review the final result:

# Exporting / importing a process definition version
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/export-import-a-process-definition
To export process definition versions and move them between environments, use the export version / import process feature.
## Export a process definition
You can export a version of your process definition as a JSON file directly from the versioning menu in the **FlowX Designer**:

## Import a process definition
If you have exported a version from another environment, pressing the "Import Process" button in the process definition list opens a local file browser with available JSON files.
There are multiple scenarios you can encounter:
* [**New process definition**](#new-process-definition)
* [**Existing process definition with no additional versions**](#existing-process-definition-with-no-additional-versions)
* [**Existing process definition with additional version**](#existing-process-definition-with-additional-version)
### New process definition
The process definition does not exist in the target environment. When the file is submitted, the process definition is added to the target environment; in the branching tree, the imported version is displayed as active and unavailable versions as inactive placeholders.

#### Unavailable versions

### Existing process definition with no additional versions
The process definition exists in the target environment and does not contain additional versions compared to the ones in the import file. When you submit the file, the branching tree is updated. Imported and existing versions are displayed as active, while unavailable versions are shown as inactive/placeholders.

Additionally, if the current publish policy is "latest submitted" or "latest work in progress" and the import file indicates that the publish version will be overwritten, you'll see information about the new published version.
You'll also have the option to update the publish policy:

### Existing process definition with additional version
You have an existing process definition with additional versions, and you want to overwrite versions in conflict while being warned about the published version.
The process definition in the target environment contains additional versions compared to the ones in the import file. These versions are children of parent versions that should receive other children versions after import. In this case, you'll see a message about overwritten versions.

In case the process definition was exported using an incompatible FlowX Designer version, you will receive an error and not be able to import it. It will first need to be adjusted to match the format needed by your current FlowX Designer version.
# Handling decisions in the flow
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/handling-decisions-in-the-flow
To add business decisions to the flow and use them to choose between one flow branch and another, we can use exclusive gateways.


### Steps for creating a flow with exclusive branches
To create a flow with exclusive branches:
Open **FlowX Designer** and go to the **Definitions** tab.
Click on the **New process** button, using the **breadcrumbs** from the top-right corner.
Add a **start node** and an **exclusive gateway node**.
Add two different **task nodes** and link them after the **exclusive gateway node**.
Add a new **exclusive gateway** to merge the two flow branches back into one branch.
Add a **new rule** to a node to add a **business decision**:
For [business rules](../../building-blocks/actions/business-rule-action/business-rule-action), you need to check certain values from the process and pick an outgoing node when the condition is met. The gateway node must be connected to the next nodes before configuring the rule.
* select a **scripting language** from the dropdown, for example `MVEL` and input your condition:
* `input.get("application.client.creditScore") >= 700` ← proceed to node for premium credit card request
* `input.get("application.client.creditScore") < 700` ← proceed to node for standard credit card request
Add a **closing exclusive gateway** to continue the flow.
Add an **end node**.
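The branching decision above can be pictured in plain Python. This is an illustrative sketch, not FlowX API: the helper name and return values are hypothetical, but the threshold mirrors the two MVEL conditions.

```python
# Illustrative sketch of the exclusive gateway decision -- not FlowX API.
def route_credit_card_request(process_data: dict) -> str:
    """Mirror the two MVEL conditions: a score of 700 or more routes the
    token to the premium branch, anything lower to the standard branch."""
    score = process_data.get("application", {}).get("client", {}).get("creditScore", 0)
    if score >= 700:
        return "premium_credit_card_request"   # first outgoing branch
    return "standard_credit_card_request"      # second outgoing branch
```

For example, a token carrying `{"application": {"client": {"creditScore": 720}}}` would be routed to the premium branch.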

# Moving a token backwards in a process
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process
Back in steps is a functionality that allows you to go back in a business process and redo a series of previous actions.
**Why is it useful?** It brings a whole new level of flexibility to a business flow or journey, allowing the user to go back a step without losing all the data entered so far.

In most cases, the **token** instance will just need to advance forward in the process as the actions on the process are completed.
But there might be cases when the token will need to be moved backward in order to redo a series of previous actions in the process.
We will call this behavior **resetting the token**.
The token can only be reset to certain actions on certain process nodes. These actions will be marked accordingly with a flag `Allow BACK on this action?`.
When such an action is triggered from the application, the current process token will be marked as aborted and a new one will be created and placed on the node that contains the action that was executed. If any sub-processes were started between the two token positions, they will also be aborted when the token is reset.
The newly created token will copy from the initial token all the information regarding the actions that were performed before the reset point.
There are a few configuration options available in order to decide which of the data to keep when resetting the token:
* `Remove the following objects from current state`: Process keys that should be deleted when the user navigates back to this action
* `Copy the following objects from current state`: Process keys that should retain their data as persisted prior to the user navigating back to this action.
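The effect of these two options on the new token's state can be sketched as follows. This is an illustrative Python sketch, not engine code; how the engine treats keys listed in neither option is an internal detail, and this sketch simply keeps them.

```python
# Illustrative sketch of token-reset data retention -- not FlowX engine code.
def build_reset_state(current_state: dict, remove_keys: list, copy_keys: list) -> dict:
    """Keys configured under 'Remove ...' are dropped from the new token's
    state; keys under 'Copy ...' retain their persisted values."""
    new_state = {k: v for k, v in current_state.items() if k not in remove_keys}
    for key in copy_keys:
        if key in current_state:
            new_state[key] = current_state[key]
    return new_state
```

For example, resetting a state `{"form": {...}, "draft": True, "score": 700}` with `remove_keys=["draft"]` and `copy_keys=["score"]` yields a new state without the `draft` key but with `score` preserved.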

# Initiating processes
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/managing-a-process-flow/starting-a-process
Entering the realm of FlowX unlocks a spectrum of possibilities for elevating processes and workflows. From automation to data-driven decision-making, several straightforward approaches pave the way for leveraging this platform efficiently. Let's delve into the ways to kickstart a process.
## Starting a process via Kafka
To trigger a process using a Kafka **Send Action**, follow these steps:
1. **Access Your Project**
* Open **FlowX Designer**.
* Navigate to **Projects** and select your project.
* Choose an existing process definition or create a new one.
2. **Configure the Send Message Task**
* Add a **Send Message Task** to your workflow.
* Attach a **Kafka Send Action** to the **Send Message Task** node.
3. **Define the Kafka Topic**
* Specify the topic linked to the `KAFKA_TOPIC_PROCESS_START_BY_NAME` environment variable (details [here](../../../setup-guides/application-manager#process-topics)).
This variable is shared between the **application-manager** and **runtime-manager** deployments.

To inspect what was sent on Kafka, you can use a tool like AKHQ to view the response on the topic defined at `KAFKA_TOPIC_PROCESS_START_BY_NAME_OUT` - details [here](../../../setup-guides/application-manager#process-topics).

To verify the topic in FlowX Designer, navigate to: **Platform status → FlowX Components → application-manager-mngt -> kafkaTopicHealthCheckIndicator → details → configuration → topic → process → start-by-name**:

4. **Add the Message Body**
* Include the message body with the necessary details (if applicable):
```json
{"test": "something"}
```
5. **Configure Headers in Advanced Settings**
* Expand Advanced Configuration in the Send Message Task settings.
* By default, a custom header is set to:
```json
{"processInstanceId": "${processInstanceId}"}
```
This header is not required and can be ignored.
* Add the following headers:
* `Fx-ProcessName` → The name of the process you want to start via Kafka.
* `Fx-AppId` → The ID of the project (application) where the process resides.
* `jwt` → Your JWT token for authentication.

The headers section should resemble this structure:
```json
{
"Fx-ProcessName": "test_kafka",
"Fx-AppId": "afcc6452-f50e-4d95-ae3b-6b35caed68bd",
"jwt": "your_jwt"
}
```
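Putting the body and headers together, the message can be assembled as in the following sketch. The helper function is hypothetical, and the broker address is a placeholder; the header keys and example values follow the section above.

```python
import json

# Illustrative sketch of assembling the process-start message for Kafka.
# The helper is hypothetical; JWT, app ID, and broker address are placeholders.
def build_start_message(process_name, app_id, jwt, body):
    """Return (value, headers) in the shape Kafka clients expect:
    a JSON-encoded message body plus a list of (key, bytes) header pairs."""
    headers = [
        ("Fx-ProcessName", process_name.encode("utf-8")),
        ("Fx-AppId", app_id.encode("utf-8")),
        ("jwt", jwt.encode("utf-8")),
    ]
    return json.dumps(body).encode("utf-8"), headers

value, headers = build_start_message(
    "test_kafka",
    "afcc6452-f50e-4d95-ae3b-6b35caed68bd",
    "your_jwt",
    {"test": "something"},
)
# With a client such as kafka-python, this could then be sent with, e.g.:
#   KafkaProducer(bootstrap_servers="<your-broker>").send(topic, value=value, headers=headers)
```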
Without an active policy, the process may not start even if the Kafka message is correctly configured.
## Timer start event
To initiate a process using a Start Timer Event:
1. Open FlowX Designer, head to the Processes tab, then select Definitions.
2. Opt for an existing process definition or create a new one.
3. Incorporate a Start Timer Event and configure it as required, specifying either a specific date or a cycle.

Starting a process through registered timers necessitates sending a process start message to Kafka, requiring a service account and authentication. For detailed guidance, refer to:
[**Service Accounts**](../../../setup-guides/access-management/configuring-an-iam-solution#scheduler-service-account)
For deeper insights into the Start Timer Event, refer to the section below:
[Start Timer Event](../../building-blocks/node/timer-events/timer-start-event)
## Message catch start event
To initiate a process using a Message Catch Start Event, two processes are required. One utilizes a throw message event, while the other employs a start catch message event to initiate the process.
### Configuring the parent process
1. Access FlowX Designer, proceed to the Processes tab, then select Definitions.
2. Opt for an existing process definition or create a new one.
3. Configure your process and integrate a Message Throw Intermediate event.
4. Add a task or a user task where process data bound to a correlation key is included (e.g., 'key1').

5. Configure the node, considering message correlation.

Message correlation is vital and achieved through message subscriptions, involving the message name (must be identical for both throw and catch events) and the correlation key (also known as the correlation value).
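Correlation can be pictured as a simple lookup: the engine matches a thrown message against subscriptions by message name, then uses the correlation value to find the right target. A minimal, hypothetical sketch (field names are illustrative, not the engine's internal model):

```python
# Illustrative sketch of message correlation -- not the FlowX engine's model.
def find_matching_subscription(thrown: dict, subscriptions: list):
    """A catch event fires only when both the message name and the
    correlation value (e.g. the data bound to 'key1') match."""
    for sub in subscriptions:
        if (sub["message_name"] == thrown["message_name"]
                and sub["correlation_value"] == thrown["correlation_value"]):
            return sub
    return None

subscriptions = [
    {"message_name": "start_correlation", "correlation_value": "client-42",
     "starts": "child process"},
]
thrown = {"message_name": "start_correlation", "correlation_value": "client-42"}
```

A thrown message with a different name or correlation value would simply not match, and no process would be started.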
### Configuring the process with catch event
Now, we will configure the process that will be started with the start catch message event:
1. Follow the initial three steps from the previous section.
2. Integrate a Start Message Catch event node.
3. Configure the node:
* Include the same message name for correlation as added in the throw message event (e.g., 'start\_correlation').
* In the Receive data tab, add the Process Key, which is the correlation key added in the throw event (e.g., 'key1').
Once both processes are configured, commence the parent process. At runtime, you'll notice the initiation of the second process:

## Task management using Hooks
Initiating processes through hooks involves the creation of a hook alongside two essential processes: one acts as the parent process, while the other is triggered by the hook.
### Creating a hook
Hooks play a crucial role in abstracting stateful logic from a component, facilitating independent testing and reusability.
Users granted task management permissions can utilize hooks to initiate specific process instances, such as triggering notifications upon event occurrences.
Follow the next steps to create a hook:
1. **Create a Hook**: Access FlowX Designer, navigate to the Plugins tab, and choose Task Manager → Hooks.
2. **Configure the Hook**:
* Name: Name of the hook
* Parent process: Process definition name of the parent process
* Type: *Process hook*
* Trigger: *Process Created*
* Triggered Process: Process definition name of the process that we want to trigger
* Activation status

For further details about hooks, refer to the section below:
[Hooks](../../platform-deep-dive/core-extensions/task-management/using-hooks)
### Setting up the parent process
1. In FlowX Designer, navigate to the Processes tab and select Definitions.
2. Choose an existing process definition or create a new one.
3. Customize your BPMN process to align with your requirements.
4. Ensure the process is integrated with task management. To do this, within your Process Definition, access Settings → General and activate **"Use process in task management"**.

Establishing appropriate roles and permissions within the parent process (or the service account used) is mandatory to enable it to trigger another process.
Now proceed to configure the process that the hook will trigger.
### Configuring the triggered process
To configure the process triggered by the hook, follow the initial three steps above. Ensure that the necessary roles and permissions are set within the process.
Upon running the parent process, instances will be created for both the parent and the child processes.

# FlowX.AI Designer
Source: https://docs.flowx.ai/4.7.x/docs/flowx-designer/overview
The FlowX.AI Designer is a collaborative, no-code, web-based application development environment, designed to facilitate the creation of web and mobile applications without the need for coding expertise.

# Overview
Let's go through the main options available in the **FlowX Designer**:
A project is a comprehensive workspace that groups all resources, dependencies, and configurations needed to implement a specific use case. It enables centralized resource management, version control, and build deployment across different environments, allowing teams to organize processes, integrations, media, and other assets in a modular and reusable manner.
A library is a specialized project type designed to store and share reusable resources like processes, enumerations, and media assets across multiple projects. Libraries enable centralized resource management, version-controlled dependency management, and modular development by allowing projects to reference specific builds of shared resources.
#### Process Definitions
* create, view, run and edit [processes](../building-blocks/process/process-definition)
* view versioning history
#### Active Process
* view active [process instances](../projects/runtime/active-process/process-instance)
* [token](../building-blocks/token) instance and its content
* [subprocesses](../building-blocks/process/subprocess)
#### Enumerations
* nomenclature containing static value definitions
* used to manage a list of values that can be used as content in UI components or templates
#### Substitution tags
* used to generate dynamic content across the platform
* list of values used for localization
#### Languages
* enumeration values can be defined for a specific language
#### Source systems
* used when multiple source systems are involved and different [**enumeration**](../platform-deep-dive/core-extensions/content-management/enumerations) values are needed to communicate with each system
[Example here](../platform-deep-dive/core-extensions/content-management/content-management)
#### Media Library
* serves as a centralized hub for managing and organizing various types of media files, including images, GIFs, and more
#### Font files
* Font management allows you to upload and manage multiple font files, which can be later utilized when configuring UI templates using the UI Designer
#### Themes
* The Theme Management feature allows for the personalization of application appearance through easy management and customization of themes.
#### Task Management
* it is a **plugin** suitable for back-officers and supervisors as it can be used to easily track and assign activities/tasks inside a company
* for more information, check the [Task Management](../platform-deep-dive/core-extensions/task-management/task-management-overview) section
#### Notification templates
* send various types of notifications: SMS, push notifications to mobile devices, emails
* forward custom notifications to external outgoing services
* generate and validate [OTP](../platform-deep-dive/plugins/custom-plugins/notifications-plugin/otp-flow/) passwords for user identity verification
* for more information, check the [Notification templates plugin](../platform-deep-dive/plugins/custom-plugins/notifications-plugin/notifications-plugin-overview) section
#### Document templates
* store and make changes to documents
* generate documents based on predefined templates (docx or HTML) and custom process related data
* convert documents between various formats
* split bulk documents into smaller separate documents
* edit documents to add generated barcodes/signatures and pictures
* for more information, check the [Document templates plugin](../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview) section
#### Configuration parameters
* you can add configuration parameters by defining key-value pairs
* they are used for values that might change from one environment to another
* for example, a URL that has different values from a development environment to a production environment
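The idea can be sketched as a simple per-environment lookup. The structure, key, and URLs below are purely illustrative:

```python
# Illustrative sketch of environment-specific configuration parameters.
# Key name and URLs are hypothetical examples.
CONFIG_PARAMS = {
    "documentsApiUrl": {
        "dev": "https://dev.example.com/documents",
        "prod": "https://documents.example.com",
    },
}

def resolve_param(key: str, environment: str) -> str:
    """Return the value of a configuration parameter for the given environment."""
    return CONFIG_PARAMS[key][environment]
```

The same key then resolves to a different value depending on which environment the process runs in.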
#### Access management
* Access Management is used to administer users, roles and groups
* Access Management accesses Keycloak through an API call, extracting all the necessary details
* it is based on user roles that need to be configured in the identity management solution
#### Integration management
* Integration management helps you configure integrations between the following components: [**FlowX Process engine**](../platform-deep-dive/core-components/flowx-engine), [**plugins**](../platform-deep-dive/plugins/custom-plugins/), or different adapters
* Integration management enables you to keep track of each integration and its correspondent component and different scenarios used: creating an OTP, document generation, notifications, etc
#### Platform status
* you can check the platform's health by using the **Platform Status** feature
* you can also check the installed versions against the suggested versions for each FlowX component
With the FlowX.AI Designer, you can:
* Create and manage projects and libraries with centralized resource management
* Design processes using BPMN 2.0 standards
* Configure user interfaces for both generated and custom screens
* Define business rules using DMN, MVEL, or other scripting languages
* Create visual integration connectors
* Build data models for your projects
* Extend functionality through plugins
* Manage user access and roles
* Generate document templates
* Create and manage notification templates
* Configure task management workflows
* Set up environment-specific configuration parameters
* Track and manage system integrations
Depending on your access rights, some tabs might not be visible. For more information, check [Configuring access rights for Admin](../../setup-guides/access-management/configuring-access-rights-for-admin) section.
## Managing process definitions
A **process definition** is uniquely identified by its name and version number.

## Viewing active process instances
The complete list of active **process instances** is visible from the FlowX Designer. They can be filtered by **process definition** names and searched by their unique ID. You can also view the current process instance status and data.

## Managing CMS
Using the content management feature, you can perform multiple actions to manipulate and simplify content. You first need to deploy the CMS service in your infrastructure, so you can start defining and using the custom content types described in the **Content Management** tab above.

## Managing tasks
The Task Manager **plugin** displays the processes you defined in the Designer, offering a more business-oriented view. It also offers interactions at the assignment level.

## Managing notification templates
The notification templates plugin can be viewed, edited, and activated/inactivated from the **FlowX Designer**.

## Managing document templates
One of the main features of the [documents plugin](../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview) is the ability to generate new documents based on custom templates and prefilled with data related to the current process instance.

## Managing generic parameters
Through the **FlowX Designer**, you can edit generic parameters, and import or export them. You can set generic parameters and assign the environment where they should apply.

The maximum length of an input value is 255 characters.
## Managing users access
Access Management is used to administer users, roles and groups, directly in FlowX Designer. Access Management helps you access the identity management solution (Keycloak/[RH-SSO](https://access.redhat.com/products/red-hat-single-sign-on)) through its API, extracting all the necessary details. Access Management is based on user roles that need to be configured in the identity management solution.

## Managing integrations
Integration management enables you to keep track of each integration and its correspondent component and different scenarios used: creating an OTP, document generation, notifications, etc.

## Checking platform status
You can quickly check the health status of all the **FlowX services** and all of your custom connectors.

Check the next section to learn how to create and manage a process from scratch:
# AI in FlowX.AI
Source: https://docs.flowx.ai/4.7.x/docs/getting-started/ai-in-flowx
Let's explore how our intelligent agents work together to transform banking operations through specialized capabilities across the entire application lifecycle.
FlowX agents can tackle everything from targeted modernization projects to total bank transformation. Our platform deploys a network of specialized AI agents working in concert to modernize banking operations across the entire application lifecycle.
**AI Agents** are built in all layers of FlowX Platform covering the entire timeline for creating, consuming and maintaining an application:
AI Agents are strategically embedded in every layer of the FlowX Platform. They're like the Avengers of banking software—if Iron Man specialized in UI design and Captain America was really good at compliance.
### Why Multi-Agent Architecture Matters?
Our multi-agent approach makes traditional monolithic AI systems look like they're still using dial-up internet:
| Feature | FlowX Multi-Agent Approach | Traditional Banking AI |
| ------------------ | ---------------------------------------------------------- | -------------------------------------------- |
| **Specialization** | Purpose-built agents for specific banking functions | Generic capabilities requiring customization |
| **Adaptability** | Modular updates to individual agents as regulations change | Complete system overhauls (ouch) |
| **Transparency** | Clear visibility into agent decision processes | Black box of mystery |
| **Resilience** | System continues despite component failures | One crash and everyone's having a bad day |
## FlowX.AI Agents
Our platform organizes intelligent agents into four functional layers—because three would be insufficient and five would be showing off:
### BUILD Agents
In the BUILD layer we have agents that automate and streamline the creation of data architectures, process models, integrations, user interfaces, and business rules by generating and validating content based on natural language prompts, existing documents, and best practices.
Builds and optimizes BPMN processes using AI to generate, validate, and enhance workflows based on user-provided documents
Answers questions about the FlowX.AI platform as well as the bank itself based on uploaded documentation.
Streamlines UI design by converting natural language prompts, Figma designs, or sketches into complete UI layouts with validations and data models.
Generates specific business rules or code expressions using natural language and automates the code necessary to implement them.
See them in action (no 3D glasses required):
### MAINTAIN Agents
The agents in the MAINTAIN layer ensure ongoing compliance and generate comprehensive documentation to keep processes aligned with regulations and up-to-date at every step.
**Coming soon**
Ensures compliance across complex processes, business rules & massive datasets using regulatory documentation.
**Coming soon**
Generates comprehensive documentation step-by-step using predefined templates, starting from smaller blocks and building up to a complete technical document.
### RUN Agents
The agents in the RUN layer monitor, debug, and optimize running processes, providing real-time supervision, automated testing, and on-demand reporting to ensure efficient operation.
**Coming soon**
Supports maintenance & debugging of running processes and systems, and can generate reports & graphs on-demand.
**Coming soon**
Inspects process configuration during build to detect misconfigurations and suggests how the business process can be enhanced.
### OPTIMIZE Agents
The agents in the OPTIMIZE layer enhance business processes by testing, predicting outcomes, and suggesting improvements to increase efficiency and drive better results.
**Coming soon**
Runs business processes with synthetic data to test outcomes, detect potential issues, and improve process efficiency based on execution time results.
**Coming soon**
Predicts business outcomes and optimizes processes by analyzing metrics, audit logs, and generating on-demand reports.
## Enterprise-Grade Security for Banking
FlowX AI Agents are built for the stringent security requirements of the banking sector:
* **Zero-Trust Security Model**: FlowX AI Agents utilize a dynamic Zero-Trust security approach, ensuring that all components within the system authenticate each other. This prevents unauthorized access and ensures that interactions within the platform remain secure.
* **Input and Output Scanners**: The platform employs a comprehensive set of input and output scanners, including those for detecting prompt injection, banning sensitive substrings, and sanitizing outputs. This minimizes the risk of harmful or unauthorized data being processed or exposed.
* **Custom Fine-Tuned Models**: FlowX develops custom fine-tuned models for each use case, tailored with curated data, which enhances the accuracy and security of AI agents, particularly in sensitive environments like banking.
* **Human Oversight and Monitoring**: The platform includes robust human-in-control mechanisms, enabling human oversight and control over AI agents to ensure ethical decision-making and secure interactions, crucial for high-stakes environments like banking.
## Advanced Hallucination Prevention
* **RAG**: We use Retrieval-Augmented Generation (RAG) to ground responses in reliable, external data sources.
* **Architecture**: We employ a multi-agent architecture where each task is assigned to dedicated agents, allowing for efficient task splitting and specialized processing.
* **JSON**: We enforce structured JSON outputs because letting an AI freestyle its responses would be like letting a toddler freestyle with finger paints in your living room.
* **LLM EVALUATION**: We relentlessly test models to find the perfect fit for each banking task. It's like speed dating, but for AI models, and the stakes are your bank's efficiency.
* **PROMPT Tuning**: Prompt tuning for each task customizes the model’s responses, enhancing accuracy and relevance while reducing the likelihood of hallucinations
# Building with FlowX.AI
Source: https://docs.flowx.ai/4.7.x/docs/getting-started/building-your-first-proc
Let's explore how to build innovative solutions with FlowX.AI.
[Create a project](../projects/managing-applications/application).
[Design a BPMN Process](../flowx-designer/managing-a-process-flow).
Define and manage a process flow using [**FlowX Process Designer**](../building-blocks/process/process).
Run a process instance with [**FlowX Engine**](../platform-deep-dive/core-components/flowx-engine).
Create the **Front-End Application**.
Connect **Plugins**.
## FlowX.AI implementation methodology
The implementation of FlowX.AI follows a structured approach comprising several phases, using a hybrid methodology that has proven effective in past implementations. These phases include:
* Mobilization
* Analysis & Solution Design
* Project Execution
* Production & Go-live
* Transition to Business as Usual (BaU)
These phases address various aspects of the implementation process, ensuring a comprehensive and successful deployment of FlowX.AI solutions.
Explore our Academy course on Implementation Methodology for in-depth insights:
* What are the project stages in a FlowX implementation?
* What are the key roles of an implementation team?
* What are the main responsibilities of each role in the team?
## Designing the BPMN Process: Requesting a New Credit Card from a Bank App
Let's initiate by designing the BPMN process diagram for a sample use case: requesting a new credit card from a bank app.
## Sample Process Steps
Taking a **business process example** of a credit card application, it involves the following steps:
A user initiates a request for a new credit card - ***Start Event***
The user fills in a form with their personal data - ***User Task***
The bank system performs a credit score check automatically using a send event that communicates with the credit score adapter, followed by a receive event to collect the response from the adapter - ***Automatic Task***
The process bifurcates based on the credit score using an ***Exclusive Gateway***
Each branch entails a service task that saves the appropriate credit card type to the process data - ***Automatic Task***
The branches reconvene through a ***Closing Gateway***
The user views the credit card details and confirms - ***User Task***
After user confirmation, the process divides into two parallel branches - ***Parallel Gateway***. One registers the request in the bank's systems (bank system adapter/integration), and the other sends a confirmation email (notification plugin) to the user
An additional automatic task follows: a call to an external API to compute the distance between the user's address and the bank locations ([Google Maps Distance Matrix API](https://developers.google.com/maps/documentation/distance-matrix/overview)) - ***Automatic Task***
A task is utilized to sort the location distances and present the top three to the user - ***Automatic Task***
The user selects the card pickup point from the bank location suggestions - ***User Task***
A receive task awaits confirmation from the bank that the user has collected the new card, concluding the process flow - ***End Event***
## Sample Process Diagram
Here's what the **BPMN** diagram illustrates:

# Introduction to FlowX.AI
Source: https://docs.flowx.ai/4.7.x/docs/introduction
FlowX.AI is an AI multi-experience development platform that sits on top of legacy systems and creates unified, scalable digital experiences.
Why FlowX.AI?
Revolutionize your digital transformation with our AI-powered platform that brings legacy systems into the future.
Captures and unifies data, offering enterprises advanced AI-based optimization and innovation capabilities that enhance operational efficiency.
Integrates easily with any infrastructure and scales it as necessary, regardless of your current tech stack and legacy systems.
FlowX.AI is built on a modern event-driven platform
The microservices architecture provides unparalleled flexibility and scalability for all your business needs. This approach allows you to adapt quickly to changing requirements while maintaining robust performance and security across your digital ecosystem.
Powered by Kafka & Redis for maximum performance
Industry Standard Technologies
FlowX.AI is built using modern, industry-standard technologies and frameworks to ensure reliability, performance, and ease of integration.
Spring Boot for robust backend services
Angular and React for flexible frontend development
Kubernetes for containerized deployment
OAuth 2.0 and OpenID Connect for secure authentication
Flexible Deployment Options
FlowX.AI offers multiple deployment options to fit your organization's requirements and infrastructure preferences.
On-Premises
Deploy within your own infrastructure for maximum control and compliance with specific regulations.
Cloud-Native
Leverage cloud providers like AWS, Azure, or GCP for scalability and reduced maintenance overhead.
Why does it matter?
Did you know?
Studies show that around 65-75% of IT budgets go towards maintaining current infrastructure. FlowX.AI helps reduce this burden substantially, allowing you to invest in innovation rather than maintenance.
FlowX.AI can be deployed on top of existing legacy systems, eliminating the need for costly or risky upgrade projects.
The FlowX.AI platform brings a layer of scalability to your existing stack, beyond its current capabilities, thanks to our Kafka and Redis core.
Create one application that unifies the purpose and data from multiple systems, saving time and eliminating errors.
Build seamless experiences across all digital channels with hand-off capability between web and mobile applications.
The UI is generated on the fly by our AI model, requiring no coding or design skills to create interfaces.
Our platform is available to any citizen developer, bringing speed to development, supporting agile ways of working, and having a positive impact on creativity and innovation. Whether you're a professional developer or a business user, FlowX.AI adapts to your skill level and needs.
So, to start with, let's dive into FlowX.AI and see what we can build! 🚀
Read about the frameworks and standards used to build the platform
Find about the core platform components and how they interact
Stay up to date with the latest features and improvements
If you have any questions regarding the content here or anything else that might be missing and you'd like to know, please get in touch with us! We'd be happy to help!
# FlowX.AI architecture
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/flowx-architecture
FlowX.AI is a comprehensive, event-driven platform designed to accelerate digital transformation by enabling rapid development of web and mobile applications without extensive coding. The architecture consists of several interconnected components that work together to provide a seamless experience for both developers and end users.

## Core components
The FlowX.AI platform is built on a microservices architecture, with each component serving a specific purpose in the overall ecosystem:
### FlowX.AI Designer
The **FlowX.AI Designer** is a collaborative, no-code/full-code web-based application development environment that serves as the central workspace for creating and managing processes, UIs, integrations, and other application components.
**Key capabilities:**
* Design processes using industry-standard [BPMN 2.0](./frameworks-and-standards/business-process-industry-standards/intro-to-bpmn) notation
* Configure user interfaces for both generated and custom components
* Define business rules and validations via [DMN](./frameworks-and-standards/business-process-industry-standards/intro-to-dmn) or [MVEL](./frameworks-and-standards/business-process-industry-standards/intro-to-mvel) scripting
* Create visual integration connectors to external systems
* Design and manage data models
* Add extensibility through [custom plugins](../platform-deep-dive/plugins/custom-plugins)
* Manage user access roles and permissions
The FlowX Designer is a web application that runs in the browser. It resides outside a FlowX deployment and serves as the administrative interface for the entire platform.
The no-code/full-code capabilities allow both business users (analysts, product managers) and experienced developers to collaboratively build applications, reducing the typical development cycle from months to days.
### Microservices architecture
FlowX.AI is built on a suite of specialized microservices that provide the foundation for the platform's capabilities. These microservices communicate through an event-driven architecture, primarily using Kafka for messaging, enabling scalability, resilience, and extensibility:
#### FlowX.AI Engine
The **FlowX.AI Engine** is the core orchestration component of the platform, serving as the central nervous system that executes process definitions, manages process instances, and coordinates communications between all platform components.

**Key responsibilities:**
* Executing business processes based on BPMN 2.0 definitions
* Creating and managing process instances throughout their lifecycle
* Coordinating real-time interactions between users, systems, and data
* Orchestrating the event-driven communication across the platform
* Dynamically generating and delivering UI components based on process state
* Handling integration with external systems via Kafka messaging
The Engine is built on [Kafka](./frameworks-and-standards/event-driven-architecture-frameworks/intro-to-kafka-concepts), providing high-throughput, low-latency event processing. This architecture enables FlowX.AI to maintain a responsive user experience (0.2s response time) even when integrating with slow legacy systems by buffering load and managing asynchronous communication.
**Technical infrastructure:**
* PostgreSQL database for process definitions and instance data
* MongoDB for runtime build information
* Redis for caching process definitions and improving performance
* Multiple script engine support including Python, JavaScript, and MVEL
* Elasticsearch integration for efficient data indexing and searching
The Engine works closely with the **Advancing Controller** to ensure efficient process instance progression, particularly in scaled environments.
#### FlowX.AI Application Manager
The **Application Manager** is responsible for managing the application lifecycle, including:
* Creating, updating, and deleting applications and their resources
* Managing versions, manifests, and configurations
* Serving as a proxy for front-end resource requests
* Handling application builds and deployments
This microservice maintains a comprehensive data model for applications, including all their components, versions, and dependencies, ensuring consistency across environments.
#### FlowX.AI Runtime Manager
The **Runtime Manager** works in conjunction with the Application Manager to:
* Deploy application builds to runtime environments
* Manage runtime configurations and environment-specific settings
* Monitor and manage active application instances
* Handle the runtime data and state of deployed applications
#### FlowX.AI Integration Designer
The **Integration Designer** provides a visual interface for creating and managing integrations with external systems:
* Define REST API endpoints and authentication methods
* Create and configure integration workflows
* Map data between FlowX.AI processes and external systems
* Test and monitor integrations in real-time
This microservice simplifies the complex task of connecting to various enterprise systems, allowing for secure, scalable, and maintainable integrations without extensive coding.
#### FlowX.AI Content Management
The **Content Management** microservice handles all taxonomies and structured content within the platform:
* Manage enumerations (dropdown options, categories)
* Store and serve localization content and translations
* Organize media assets and reference data
* Centralize content that needs to be shared across applications
This Java-based service uses MongoDB for flexible storage of unstructured content, making it the go-to place for all shared taxonomies and content definitions.
#### FlowX.AI Scheduler
The **Scheduler** microservice handles time-based operations within processes:
* Set process expiration dates and reminders
* Trigger time-based events and activities
* Manage recurring tasks and scheduled operations
* Support delayed actions and follow-ups
It communicates with the FlowX Engine through Kafka, creating time-based events that can be processed when needed, similar to a reminder application for business processes.
#### FlowX.AI Admin
The **Admin** microservice is responsible for:
* Storing and editing process definitions
* Managing user roles and permissions
* Configuring system-wide settings
* Providing administrative functions for the platform
This service connects to the same database as the FlowX Engine, ensuring consistency in process definitions and configurations.
#### FlowX.AI Data Search
The **Data Search** microservice enables search capabilities across the platform, allowing users to find data within process instances:
* Searching for data across processes and applications using indexed keys
* Indexing and retrieving information based on specific criteria
* Supporting complex queries with filtering by process status, date ranges, and more
* Enabling cross-application data discovery and access
This service leverages Elasticsearch to execute efficient searches. It works by indexing process data automatically when process status changes or at specific trigger points, making the information searchable without impacting performance. The service communicates with the FlowX Engine through Kafka topics, receiving search requests and returning results that can be displayed in applications.
#### FlowX.AI Events Gateway
The **Events Gateway** microservice centralizes and manages the real-time communication between backend services and frontend clients through Server-Sent Events (SSE):
* Processes events from various sources like the FlowX Engine and Task Management
* Routes and distributes messages to appropriate components based on their destination
* Publishes events to frontend renderers enabling real-time UI updates
* Integrates with Redis for efficient event distribution and ensuring messages reach the correct instance with the SSE connection

This component is crucial for maintaining the real-time, responsive nature of FlowX applications. It ensures that all UI updates, notifications, and system changes are immediately reflected across the platform without requiring page refreshes or manual polling. The Events Gateway reads messages from Kafka topics and distributes them appropriately, enabling features like instant form rendering when reaching user tasks or displaying real-time configuration errors.
### FlowX.AI SDKs
The platform provides SDKs for different client platforms:
* **Web SDK (Angular)**: For rendering process screens in web applications
* **Android SDK**: For native Android application support
* **Custom Component SDK**: For developing reusable UI components
These SDKs communicate with the FlowX Engine to render dynamic UIs and orchestrate user interactions based on process definitions.
### FlowX.AI plugins
The platform's functionality can be extended through plugins:
* **Notifications Plugin**: For sending and managing notifications
* **Documents Plugin**: For document generation and management
* **OCR Plugin**: For optical character recognition and document processing
* **Task Management Plugin**: For handling human tasks and assignments

Plugins provide modular extensions to the core platform, allowing for customization without modifying the base architecture.
## Platform infrastructure
### Advancing Controller
The **Advancing Controller** is a critical supporting service for the FlowX.AI Engine that enhances process execution efficiency, particularly in scaled deployments:
* Manages the distribution of workload across Engine instances
* Facilitates even redistribution during scale-up and scale-down scenarios
* Utilizes database triggers in PostgreSQL or OracleDB configurations
* Prevents process instances from getting stuck if a worker pod fails
* Performs cleanup tasks and monitors worker pod status
The Advancing Controller works in close coordination with the Engine to ensure uninterrupted process advancement. It must run concurrently with the Engine for optimal performance, particularly in production environments where reliability is crucial.
### Authorization & session management
FlowX.AI recommends **Keycloak** or **Azure AD (Entra)** for identity and access management:
* Create and manage users and credentials
* Define groups and assign roles
* Secure API access through token-based authentication
* Integrate with existing identity providers
Every communication from client applications passes through a public entry point (API Gateway), which validates authentication tokens before allowing access to the platform. The system supports OAuth2 authentication with multiple configuration options for securing microservice communication.
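Conceptually, the gateway's token check starts by reading the claims carried in the bearer token. The Java sketch below only decodes a JWT payload for illustration; a real API gateway must also verify the token's signature against the identity provider's published keys before trusting any claim.

```java
import java.util.Base64;

public class TokenInspector {
    // Decode the payload segment of a JWT (header.payload.signature).
    // NOTE: this only *reads* claims for illustration -- a real gateway
    // must also verify the signature against the IdP's public keys.
    static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) throw new IllegalArgumentException("not a JWT");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        // Hand-crafted token with a hypothetical payload, for demonstration only.
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"sub\":\"user-1\",\"role\":\"FLOWX_ADMIN\"}".getBytes());
        String token = "eyJhbGciOiJSUzI1NiJ9." + payload + ".sig";
        System.out.println(decodePayload(token));
        // -> {"sub":"user-1","role":"FLOWX_ADMIN"}
    }
}
```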
### Integrations
FlowX.AI can connect to external systems through custom integrations called connectors
These integrations can be developed using any technology stack, with the only requirement being a connection to Kafka. This flexibility allows for seamless integration with:
* Legacy APIs and systems
* Custom file exchange solutions
* Third-party services and platforms
## Technical foundation
### Data flow and event-driven communication
The FlowX.AI platform uses an event-driven architecture based on Kafka for asynchronous communication between components:
1. **Process Initiation**: Client applications initiate processes through the API Gateway
2. **Event Processing**: The FlowX Engine processes events and coordinates activities
3. **Integration Orchestration**: External system interactions are managed through integration workflows
4. **UI Generation**: Dynamic user interfaces are generated and delivered to client applications
5. **Data Management**: Process data is stored and managed throughout the execution lifecycle
This event-driven approach enables the platform to handle complex, long-running processes while maintaining responsiveness and scalability. Each microservice communicates through predefined Kafka topics following a consistent naming convention (e.g., `ai.flowx.dev.core.trigger.advance.process.v1`), allowing for loosely coupled but highly cohesive system architecture.
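The topic naming convention can be illustrated with a small helper that assembles names from their segments. The interpretation of each segment (package prefix, environment, service, action, version) is an assumption made for illustration; the actual convention may vary per deployment.

```java
public class KafkaTopicNaming {
    // Illustrative helper: assemble a topic name following the pattern
    // <package>.<environment>.<service>.<action>.<version>, as seen in
    // topics such as "ai.flowx.dev.core.trigger.advance.process.v1".
    // The meaning assigned to each segment here is an assumption.
    static String topic(String env, String service, String action, int version) {
        return String.join(".", "ai.flowx", env, service, action, "v" + version);
    }

    public static void main(String[] args) {
        System.out.println(topic("dev", "core", "trigger.advance.process", 1));
        // -> ai.flowx.dev.core.trigger.advance.process.v1
    }
}
```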
### Script engine support
FlowX.AI supports multiple scripting languages for business rules, validations, and data transformations:
* **MVEL**: Default scripting language for business rules
* **Python**: Supported in both Python 2.7 (Jython) and Python 3 (GraalPy)
* **JavaScript**: Available using GraalJS for high-performance execution
This flexibility allows developers to use the most appropriate language for different use cases while maintaining performance and security.
## Deployment and scalability
FlowX.AI is designed for containerized deployment, typically using Kubernetes:
* Microservices can be scaled independently based on demand
* Stateless components allow for horizontal scaling
* Kafka provides resilient message handling and event streaming
* Redis supports caching and real-time event distribution
* Database configurations support both PostgreSQL and OracleDB
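As a rough illustration of scaling a single microservice independently, a Kubernetes HorizontalPodAutoscaler can grow and shrink one deployment based on CPU load. The resource names and thresholds below are hypothetical, not the platform's actual manifests:

```yaml
# Illustrative only -- resource names and thresholds are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flowx-engine-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flowx-engine
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```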
### Monitoring and health checks
The platform includes comprehensive monitoring and health check capabilities:
* Prometheus metrics export for performance monitoring
* Kubernetes health probes for service availability
* Database connection health checks
* Kafka cluster health monitoring
These features ensure that the platform remains reliable and observable in production environments, with the ability to detect and resolve issues proactively.
## Conclusion
FlowX.AI offers a comprehensive, event-driven platform for rapidly developing and deploying digital applications without extensive coding. Its microservices architecture, combined with industry-standard technologies and a user-friendly design environment, enables organizations to accelerate their digital transformation initiatives while maintaining flexibility, scalability, and integration with existing systems.
# Intro to BPMN
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn
The core element of the platform is a process. Think of it as a representation of your business use case, for example making a request for a new credit card, placing an online food order, registering your new car or creating an online fundraiser supporting your cause.
To easily design and model process flows, we use the standard **BPMN 2.0** graphical representation.
## What is Business Process Model and Notation (BPMN)?
Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model.
It is **the most widely used standard for business process diagrams**. It is intended to be used directly by the stakeholders who design, manage and realize business processes, but at the same time be precise enough to allow BPMN diagrams to be translated into software process components.
This is why we chose it for modeling the process flows.
## BPMN 2.0 elements
A BPMN business process flow is represented as a set of process elements connected by sequences. Here are the most common types of elements:
### Events
Events describe something that happens during the course of a process. There are three main event types: start events, intermediate events, and end events. These are further classified as either catching events (they react to a trigger) or throwing events (they are triggered by the process).

### Activities
An activity represents a unit of work to be performed by the business process. An activity can be atomic (a task) or can represent a group of more activities (a subprocess).

### Gateways
Gateways are used to control how a process flows. They act as decision points that determine which sequence flow the [**process instance**](../../../projects/runtime/active-process/process-instance) should take, based on the evaluation of the specified condition(s) (in the case of exclusive gateways), or they can split a process into multiple branches (in the case of parallel gateways).

### Pools and lanes
Pools and lanes are used to group process steps by process participant. To show that certain user roles are responsible for performing specific process steps, you can divide the process using lanes.

## BPMN basic concepts
Let's get into a bit more details on the main types of BPMN process elements.
### Events
Events are signals that something happens within a process, including its start and end and any interactions with the process environment.
Types of Events:
* Start Events
* End Events
* Intermediate Events
### Start and End events
**Start & End events**
| Start Event Icon | End Event Icon |
| :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
|  |  |
| Event that triggers the process | Event that defines the state that terminates the process |
### Intermediate events
An intermediate event occurs between a start and an end event. It is represented by a circle with a double line, indicating its ability to both catch and throw information.
#### Message events
Message events serve as a means to incorporate messaging capabilities into business process modeling. These events are specifically designed to capture the interaction between different process participants by referencing messages.
### Activities
#### Task
An atomic activity within a process flow, created when the activity cannot be broken down further. A task belongs to one lane.
| User task | Service task |
| :-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
|  |  |
| A task that requires human action | A task that uses a web service, automated application, or other kinds of service in completing the task |
#### User Task
A task performed by the user without aid from a business process execution engine or application, requiring a certain action in the application.
#### Service Task
Executed by a business process engine. The task defines a script that the FlowX Engine can interpret and execute, completing when the script finishes. It can also run a [**business rule**](../../../building-blocks/actions/business-rule-action/business-rule-action) on the process data.
### BPMN Subprocesses
In BPMN, a subprocess is a compound activity that represents a collection of other tasks and subprocesses. Generally, we create BPMN diagrams to communicate processes with others, so we want to avoid making a business process diagram too complex. By using subprocesses, you can split a complex process into multiple levels, which allows you to focus on a particular area in a single process diagram.
### Gateways
Gateways allow you to control, merge, and split the **process flow**.
#### Exclusive gateways
In business processes, you typically need to make choices: **business decisions**. The most common type of decision is an **either/or** choice. Exclusive gateways limit the possible outcomes of a decision to a single path, and the circumstances of the process determine which one is followed.
#### Parallel gateways
In many cases, you want to split up the flow within your business process. For example the sales and risk departments may examine a new mortgage application at the same time. This reduces the total cycle time for a case. To express parallel flow in BPMN, you use a **parallel gateway**.
| Exclusive gateway (XOR) | Parallel gateway (AND) |
| :---------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------: |
|  |  |
| Defines a decision point | No decision making; all outgoing branches are activated |
**Closing gateway**

* Closes a gateway by connecting branches with no logic involved
* The symbol used is determined by the initial gateway type
* Parallel gateways:
  * Wait for all input tokens and merge them into a single token
  * Are aware of all preceding token flows, know the paths selected, and expect tokens from these paths
## In depth docs
For comprehensive insights into BPMN and its various node types, explore our course at FlowX Academy:
* What's BPMN (Business Process Model Notation) and how does it work?
* How is BPMN used in FlowX?
# Intro to DMN
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-dmn
As we've seen in the previous chapter, Business Process Model and Notation (BPMN) is used to define business processes as a sequence of activities. If we need to branch off different process paths, we use gateways. These have rules attached to them in order to decide which outgoing path the process should continue on.
In version 4.7.2, we've deprecated the **DMN (Decision Model and Notation)** business rule actions.

For more information on how to define DMN gateway decisions, check the [**Exclusive gateway node**](../../../building-blocks/node/exclusive-gateway-node) section.
We needed a convenient way of specifying **business rules**, and we picked two possible ways of writing them:
* defining them as DMN decisions
You can define a DMN Business Rule Action directly in **FlowX Designer**. For more information, check the [**DMN business rule action**](../../../building-blocks/actions/business-rule-action/dmn-business-rule-action) section.
* adding [MVEL](./intro-to-mvel) scripts
### What is Decision Model and Notation (DMN)?
**Decision Model and Notation** (or DMN) is a graphical language that is used to specify business decisions. DMN acts as a translator, converting the code behind complex decision-making into easily readable diagrams.
The majority of process models are created using **Business Process Model and Notation (BPMN)**. The DMN standard was developed to complement BPMN by providing a mechanism for modeling the decision-making represented by a Task within a process model. DMN does not have to be used in conjunction with BPMN, but the two are highly compatible.
FLOWX.AI supports [DMN 1.3](https://www.omg.org/spec/DMN/1.3/).
### DMN Elements
There are 4 basic elements of the **Decision Model and Notation**:
* [Decision](#decision)
* [Business Knowledge Model](#business-knowledge-model)
* [Input Data](#input-data)
* [Knowledge Source](#knowledge-source)

#### Decision
It’s the central element of a DMN diagram, symbolizing the action that produces the result of a decision as its output.
**Decision service**
A decision service is a high-level decision with well-defined inputs that is made available as a service for invocation. An external application or business process (BPMN) can call the decision service.
#### Business Knowledge Model
It portrays specific knowledge within the business and stores the origin of the information. Decisions that share the same logic but depend on different sub-input data or sub-decisions use business knowledge models to determine which procedure to follow.
**Example:** a decision, rule, or standard table.
#### Input Data
This is the information used as an input to the normal decision. It’s the variable that configures the result. Input data usually includes business-level concepts or objects relevant to the business.
**Example:** Entering a customer’s tax number and the amount requested in a credit assessment decision.
#### Knowledge Source
It’s a source of knowledge that conveys legitimacy to the business decision.
**Example**: policy, legislation, rules.
### DMN Decision Table
A decision table represents decision logic which can be depicted as a table in Decision Model and Notation. It consists of inputs, outputs, rules, and hit policy.
| Decision table elements | |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Inputs | A decision table can have one or more input clauses that represent the attributes on which the rule should be applied. |
| Outputs | Each entry with values for the input clause needs to be associated with output clauses. The output represents the result that we set if the rules applied to the input are met. |
| Rules | Each rule contains input and output entries. The input entries are the condition and the output entries are the conclusion of the rule. If each input entry (condition) is satisfied, then the rule is satisfied and the decision result contains the output entries (conclusion) of this rule. |
| Hit policy | The hit policy specifies what the result of the decision table is in cases of overlapping rules, for example, when more than one rule matches the input data. |
**Hit Policy examples**

| Hit policy | Result | Description |
| :--------- | :----- | :---------- |
| Unique | unique result | Only one rule will match, or no rule |
| First | unique result | The order matters; continues with the first rule that matches |
| Priority | unique result | Rule outputs are prioritized; rules may overlap, but only the match with the highest output priority counts |
| Any | unique result | Multiple rules can be satisfied, but all satisfied rules must generate the same output, otherwise the rule is violated |
| Rule order | multiple results | The rules are evaluated in the order they are defined; the satisfied rules can generate different outputs |
| Collect | multiple results | The rules are evaluated in an arbitrary order; the satisfied rules can generate different outputs. Can contain aggregators (SUM, MIN, MAX, COUNT) that apply an aggregation operation on all the outputs resulting from the rule evaluation |
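To make the decision-table mechanics concrete, here is a minimal Java sketch of evaluating a rule set under the UNIQUE and FIRST hit policies. The credit-score rules are hypothetical (echoing the MVEL example elsewhere in these docs); this is not the platform's DMN engine, just an illustration of the semantics:

```java
import java.util.List;
import java.util.function.Predicate;

public class DecisionTable {
    // A rule pairs a condition on the input with an output value.
    record Rule(Predicate<Integer> condition, String output) {}

    // UNIQUE hit policy: at most one rule may match; its output is the result.
    static String evaluateUnique(List<Rule> rules, int input) {
        List<String> hits = rules.stream()
                .filter(r -> r.condition().test(input))
                .map(Rule::output).toList();
        if (hits.size() > 1) throw new IllegalStateException("UNIQUE violated: overlapping rules");
        return hits.isEmpty() ? null : hits.get(0);
    }

    // FIRST hit policy: rules are checked in order; the first match wins.
    static String evaluateFirst(List<Rule> rules, int input) {
        return rules.stream()
                .filter(r -> r.condition().test(input))
                .map(Rule::output).findFirst().orElse(null);
    }

    public static void main(String[] args) {
        // Hypothetical credit-score decision table.
        List<Rule> rules = List.of(
                new Rule(score -> score >= 700, "PREMIUM"),
                new Rule(score -> score < 700, "STANDARD"));
        System.out.println(evaluateUnique(rules, 720)); // PREMIUM
        System.out.println(evaluateFirst(rules, 650));  // STANDARD
    }
}
```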
### DMN Model
DMN defines an XML schema that allows DMN models to be used across multiple DMN authoring platforms.
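As a minimal illustration of that schema, the following hand-written DMN 1.3 fragment defines a single decision table with a UNIQUE hit policy. The element ids, names, and the example namespace are hypothetical; only the DMN model namespace itself is standard:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal illustrative DMN 1.3 decision table; ids and names are hypothetical. -->
<definitions xmlns="https://www.omg.org/spec/DMN/20191111/MODEL/"
             id="creditCardType" name="Credit card type" namespace="http://example.com/dmn">
  <decision id="decideCardType" name="Decide card type">
    <decisionTable hitPolicy="UNIQUE">
      <input id="input1" label="Credit score">
        <inputExpression typeRef="number"><text>creditScore</text></inputExpression>
      </input>
      <output id="output1" label="Card type" typeRef="string"/>
      <rule id="rule1">
        <inputEntry><text>&gt;= 700</text></inputEntry>
        <outputEntry><text>"PREMIUM"</text></outputEntry>
      </rule>
      <rule id="rule2">
        <inputEntry><text>&lt; 700</text></inputEntry>
        <outputEntry><text>"STANDARD"</text></outputEntry>
      </rule>
    </decisionTable>
  </decision>
</definitions>
```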
You can use this XML example in FLOWX Designer by adding it to a Business Rule Action that uses an MVEL script; you can then switch to DMN if you need to generate a graphical representation of the model.
### Using DMN with FLOWX Designer
As mentioned previously, DMN can be used with FLOWX Designer for the following scenarios:
* For defining gateway decisions, using [exclusive gateways](../../../building-blocks/node/exclusive-gateway-node)
* For defining [business rules actions](../../../building-blocks/actions/business-rule-action/business-rule-action) attached to a [task node](../../../building-blocks/node/task-node)
### In depth docs
# Intro to MVEL
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel
We can also specify the business rules logic using MVEL scripts. As opposed to DMN, with MVEL you can create complex business rules with multiple parameters and sub-calculations.
## What is MVEL?
**MVFLEX Expression Language** (MVEL) is an expression language with a syntax similar to the Java programming language. This makes it relatively easy to use for defining more complex business rules that cannot be defined using DMN.
The runtime allows MVEL expressions to be executed either interpretively, or through a pre-compilation process with support for runtime byte-code generation to remove overhead. We pre-compile most of the MVEL code in order to make sure the process flow advances as fast as possible.
## Example
```java
if (input.get("user.credit_score") >= 700) {
    output.setNextNodeName("TASK_SET_CREDIT_CARD_TYPE_PREMIUM");
} else {
    output.setNextNodeName("TASK_SET_CREDIT_CARD_TYPE_STANDARD");
}
```
## In depth docs
# Intro to Elasticsearch
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-elasticsearch
While Elasticsearch itself is not inherently event-driven, it can be integrated into event-driven architectures or workflows. External components or frameworks detect and trigger events, and Elasticsearch is used to efficiently index the event data and make it searchable.
This integration allows event-driven systems to leverage Elasticsearch’s powerful search and analytics capabilities for real-time processing and retrieval of event data.
## What is Elasticsearch?
Elasticsearch is a powerful and highly scalable open-source search and analytics engine built on top of the [Apache Lucene](https://lucene.apache.org/) library. It is designed to handle a wide range of data types and is particularly well-suited for real-time search and data analysis use cases. Elasticsearch provides a distributed, document-oriented architecture, making it capable of handling large volumes of structured, semi-structured, and unstructured data.
## How does it work?
At its core, Elasticsearch operates as a distributed search engine, allowing you to store, search, and retrieve large amounts of data in near real-time. It uses a schema-less JSON-based document model, where data is organized into indices, which can be thought of as databases. Within an index, documents are stored, indexed, and made searchable based on their fields. Elasticsearch also provides powerful querying capabilities, allowing you to perform complex searches, filter data, and aggregate results.
## Why is it useful?
One of the key features of Elasticsearch is its distributed nature. It supports automatic data sharding, replication, and node clustering, which enables it to handle massive amounts of data across multiple servers or nodes. This distributed architecture provides high availability and fault tolerance, ensuring that data remains accessible even in the event of hardware failures or network issues.
Elasticsearch integrates with various programming languages and frameworks through its comprehensive RESTful API. It also provides official clients for popular languages like Java, Python, and JavaScript, making it easy to interact with the search engine in your preferred development environment.
## Indexing & sharding
### Indexing
Indexing refers to the process of adding, updating, or deleting documents in Elasticsearch. It involves taking data, typically in JSON format, and transforming it into indexed documents within an index. Each document represents a data record and contains fields with corresponding values. Elasticsearch uses an inverted index data structure to efficiently map terms or keywords to the documents containing those terms. This enables fast full-text search capabilities and retrieval of relevant documents.
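The inverted index idea can be sketched in a few lines of Java: map each term to the set of document ids containing it. This is a toy model of the structure Lucene maintains per field, not Elasticsearch's actual implementation:

```java
import java.util.*;

public class InvertedIndex {
    // Maps each term to the (sorted) set of document ids containing it.
    private final Map<String, Set<Integer>> index = new HashMap<>();

    void add(int docId, String text) {
        // Naive tokenization: lowercase and split on non-word characters.
        for (String term : text.toLowerCase().split("\\W+")) {
            index.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
        }
    }

    Set<Integer> search(String term) {
        return index.getOrDefault(term.toLowerCase(), Set.of());
    }

    public static void main(String[] args) {
        InvertedIndex idx = new InvertedIndex();
        idx.add(1, "credit card application approved");
        idx.add(2, "loan application rejected");
        System.out.println(idx.search("application")); // [1, 2]
        System.out.println(idx.search("loan"));        // [2]
    }
}
```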
### Sharding
Sharding, on the other hand, is the practice of dividing index data into multiple smaller subsets called shards. Each shard is an independent, self-contained index that holds a portion of the data. By distributing data across multiple shards, Elasticsearch achieves horizontal scalability and improved performance. Sharding allows Elasticsearch to handle large amounts of data by parallelizing search and indexing operations across multiple nodes or servers.
Shards can be configured as primary or replica shards. Primary shards contain the original data, while replica shards are exact copies of the primary shards, providing redundancy and high availability. By having multiple replicas, Elasticsearch ensures data durability and fault tolerance. Replicas also enable parallel search operations, increasing search throughput.
Sharding offers several advantages. It allows data to be distributed across multiple nodes, enabling parallel processing and faster search operations. It also provides fault tolerance, as data is replicated across multiple shards. Additionally, sharding allows Elasticsearch to scale horizontally by adding more nodes and distributing the data across them.
The number of shards and their allocation can be determined during index creation or modified later. It is important to consider factors such as the size of the dataset, hardware resources, and search performance requirements when deciding on the number of shards.
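As a sketch of why the primary shard count matters up front: Elasticsearch routes each document to a shard by hashing its routing value (the document ID by default) modulo the number of primary shards. The snippet below illustrates the idea, using `crc32` as a dependency-free stand-in for Elasticsearch's actual murmur3 hash:

```python
import zlib

NUM_PRIMARY_SHARDS = 5  # fixed at index creation time

def shard_for(doc_id: str) -> int:
    # crc32 stands in for Elasticsearch's murmur3 hash here;
    # the modulo step is the real routing mechanism.
    return zlib.crc32(doc_id.encode()) % NUM_PRIMARY_SHARDS

# The same document ID always routes to the same shard — which is why
# the primary shard count cannot be changed after index creation
# without rehashing (reindexing) every document.
assert shard_for("doc-42") == shard_for("doc-42")
```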
For more details, check Elasticsearch documentation:
## Leveraging Elasticsearch for advanced indexing with FlowX.AI
The integration between FlowX.AI and Elasticsearch involves indexing specific keys or data added through the [**UI Designer**](../../../building-blocks/ui-designer/) to [process definitions](../../../building-blocks/process/process-definition). The indexing process is initiated by the [FlowX Engine](../../../platform-deep-dive/core-components/flowx-engine), which sends the data to Elasticsearch, where it is then indexed in the "process\_instance" index.
There are two methods available for indexing data: Kafka and HTTP.
* **Kafka**: Data is sent to be indexed through a Kafka topic using the new strategy. Deploy the [**Kafka Connect with Elasticsearch Sink Connector**](../../../../setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing/elasticsearch-indexing.mdx#kafka-connect) in the infrastructure to utilize this method.
* **HTTP**: Data is indexed by establishing a direct connection from the FlowX Engine to Elasticsearch. Use HTTP calls to connect directly from the FlowX Engine to Elasticsearch.
To ensure effective indexing of process instances' details, a crucial step involves defining a mapping that specifies how Elasticsearch should index the received messages. This mapping is essential as the process instances' details often have specific formats. The process-engine takes care of this by automatically creating an index template during startup if it doesn't already exist. The index template acts as a blueprint, providing Elasticsearch with the necessary instructions on how to index and organize the incoming data accurately. By establishing and maintaining an appropriate index template, the integration between FLOWX.AI and Elasticsearch can seamlessly index and retrieve process instance information in a structured manner.
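As an illustration of what such an index template might look like, the JSON below sketches a template for a `process_instance*` index pattern. The field names and settings here are hypothetical — the process-engine creates the real template automatically at startup:

```json
{
  "index_patterns": ["process_instance*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    },
    "mappings": {
      "properties": {
        "processInstanceUuid": { "type": "keyword" },
        "startDate":           { "type": "date" },
        "keyIdentifiers":      { "type": "text" }
      }
    }
  }
}
```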
### Kafka transport strategy
The [Kafka](../event-driven-architecture-frameworks/intro-to-kafka-concepts) transport strategy involves the process-engine sending messages to a Kafka topic whenever there is data from a process instance to be indexed. Kafka Connect is then configured to read these messages from the topic and forward them to Elasticsearch for indexing.
This approach offers benefits such as fire-and-forget communication, where the process-engine no longer needs to spend time handling indexing requests. By decoupling the process-engine from the indexing process and leveraging Kafka as a messaging system, the overall system becomes more efficient and scalable. The process-engine can focus on its core responsibilities, while Kafka Connect takes care of transferring the messages to Elasticsearch for indexing.
To optimize indexing response time, Elasticsearch utilizes multiple indices created dynamically by the Kafka Connect connector. The creation of indices is based on the timestamp of the messages received in the Kafka topic. The frequency of index creation, such as per minute, hour, week, or month, is determined by the timestamp format configuration of the Kafka connector.
It's important to note that the timestamp used for indexing is the process instance's start date. This means that subsequent updates received for the same object will be directed to the original index for that process instance. To ensure proper identification and indexing, it is crucial that the timestamp of the message in the Kafka topic corresponds to the process instance's start date, while the key of the message aligns with the process instance's UUID. These two elements serve as unique identifiers for determining the index in which a process instance object was originally indexed.
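The relationship between the start-date timestamp and the target index can be sketched as follows. The index-name pattern shown is hypothetical — the real format depends on the Kafka connector's timestamp configuration (per minute, hour, week, or month):

```python
from datetime import datetime, timezone

def index_for(start_date: datetime) -> str:
    # Hypothetical daily index-name pattern; the actual format is set by
    # the Kafka connector's timestamp format configuration.
    return f"process_instance-{start_date:%Y.%m.%d}"

# The message key (the process instance UUID) plus the index name derived
# from the start date identify where the instance was originally indexed,
# so later updates for the same instance target the same index.
started = datetime(2024, 3, 5, 10, 30, tzinfo=timezone.utc)
print(index_for(started))  # → process_instance-2024.03.05
```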
For more details on how to configure process instance indexing through Kafka transport, check the following section:
# Intro to Kafka concepts
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-kafka-concepts
Apache Kafka is an open-source distributed event streaming platform that can handle a high volume of data and enables you to pass messages from one end-point to another.
Kafka is a unified platform for handling real-time data feeds. It supports low-latency message delivery, guarantees fault tolerance in the presence of machine failures, and can serve many diverse consumers. Kafka is very fast and can handle millions of writes per second. It persists all data to disk, which essentially means that all writes go to the page cache of the OS (RAM), making it very efficient to transfer data from the page cache to a network socket.
### Benefits of using Kafka
* **Reliability** − Kafka is distributed, partitioned, replicated, and fault-tolerant
* **Scalability** − Kafka messaging system scales easily without downtime
* **Durability** − Kafka uses a distributed commit log, which means messages are persisted to disk as quickly as possible
* **Performance** − Kafka has high throughput for both publishing and subscribing to messages, and maintains stable performance even when many terabytes of messages are stored
## Key Kafka concepts
### Events
Kafka encourages you to see the world as sequences of events, which it models as key-value pairs. The key and the value have some kind of structure, usually represented in your language’s type system, but fundamentally they can be anything. Events are immutable, as it is (sometimes tragically) impossible to change the past.
### Topics
Because the world is filled with so many events, Kafka gives us a means to organize them and keep them in order: topics. A topic is an ordered log of events. When an external system writes an event to Kafka, it is appended to the end of a topic.
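A topic's append-only behavior can be sketched as a minimal in-memory log — an illustration of the concept, not a Kafka client:

```python
class Topic:
    """Minimal sketch of a Kafka topic: an ordered, append-only event log."""

    def __init__(self):
        self._log = []

    def append(self, key, value):
        # New events only ever go to the end; existing events are immutable.
        self._log.append((key, value))
        return len(self._log) - 1  # the event's offset

    def read_from(self, offset):
        # Consumers read forward from a chosen offset.
        return self._log[offset:]

topic = Topic()
topic.append("order-1", {"status": "created"})
offset = topic.append("order-1", {"status": "paid"})
print(offset)              # → 1
print(topic.read_from(1))  # → [('order-1', {'status': 'paid'})]
```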
In FLOWX.AI, Kafka handles all communication between the [FlowX Engine](../../../platform-deep-dive/core-components/flowx-engine) and external plugins and integrations. It is also used for notifying running process instances when certain events occur. More information about Kafka configuration can be found in the section below:
### Producer
A producer is an external application that writes messages to a Kafka cluster, communicating with the cluster using Kafka’s network protocol.
### Consumer
The consumer is an external application that reads messages from Kafka topics and does some work with them, like filtering, aggregating, or enriching them with other information sources.
## In-depth docs
# Intro to Kubernetes
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-kubernetes
Kubernetes is an open-source container orchestration platform that automates many of the manual processes involved in containerized application deployment, management, and scaling.
The purpose of Kubernetes is to orchestrate containerized applications to run on a cluster of hosts. **Containerization** enables you to deploy multiple applications using the same operating system on a single virtual machine or server.
Kubernetes, as an open platform, enables you to build applications using your preferred programming language, operating system, libraries, or messaging bus. To schedule and deploy releases, existing continuous integration and continuous delivery (CI/CD) tools can be integrated with Kubernetes.
## Benefits of using Kubernetes
* A proper way of managing containers
* High availability
* Scalability
* Disaster recovery
## Key Kubernetes Concepts
### Node & PODs
A Kubernetes node is a machine that runs containerized workloads as part of a Kubernetes cluster. A node can be a physical machine or a virtual machine, and can be hosted on-premises or in the cloud.
A pod is composed of one or more containers that are colocated on the same host and share a network stack as well as other resources such as volumes. Pods are the foundation upon which Kubernetes applications are built.
Kubernetes uses pods to run an instance of your application. A pod represents a single instance of your application.
Pods are typically ephemeral, disposable resources. Individually scheduled pods miss some of the high availability and redundancy Kubernetes features. Instead, pods are deployed and managed by Kubernetes *Controllers*, such as the Deployment Controller.
### Service & Ingress
**Service** is an abstraction that defines a logical set of pods and a policy for accessing them. In Kubernetes, a Service is a REST object, similar to a pod. A Service definition, like all REST objects, can be POSTed to the API server to create a new instance. A Service object's name must be a valid [RFC 1035](https://www.ietf.org/rfc/rfc1035.txt) label name.
**Ingress** is a Kubernetes object that allows access to the Kubernetes services from outside of the Kubernetes cluster. You configure access by writing a set of rules that specify which inbound connections are allowed to reach which services. This allows combining all routing rules into a single resource.
**Ingress controllers** are pods, just like any other application, so they’re part of the cluster and can see and communicate with other pods. An Ingress can be configured to provide Services with externally accessible URLs, load balance traffic, terminate SSL / TLS, and provide name-based virtual hosting. An Ingress controller is in charge of fulfilling the Ingress, typically with a load balancer, but it may also configure your edge router or additional frontends to assist with the traffic.
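As an illustration, a minimal Ingress resource routing by host and path might look like the following sketch (names, hosts, and ports are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress               # hypothetical names for illustration
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # route by host...
      http:
        paths:
          - path: /api             # ...and by path
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 8080
```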
FlowX.AI offers a predefined NGINX setup as Ingress Controller. The [NGINX Ingress Controller](https://www.nginx.com/products/nginx-ingress-controller/) works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) web server (as a proxy). For more information, check the below sections:
### ConfigMap & Secret
**ConfigMap** is an API object that makes it possible to store configuration for use by other objects. A ConfigMap, unlike most Kubernetes objects with a spec, has `data` and `binaryData` fields. As values, these fields accept key-value pairs. The `data` field and `binaryData` are both optional. The data field is intended to hold UTF-8 strings, whereas the `binaryData` field is intended to hold binary data as base64-encoded strings.
The name of a ConfigMap must be a valid [DNS subdomain name](https://www.ietf.org/rfc/rfc1035.txt).
**Secret** represents a small amount of sensitive data, such as a password, token, or key. Without Secrets, such information might otherwise be included in a pod specification or a container image. Secrets are similar to ConfigMaps, but they are specifically designed to hold confidential data.
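A minimal ConfigMap and Secret sketch along those lines (names and values are illustrative; Secret values are base64-encoded):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"          # UTF-8 string value
binaryData:
  logo.png: iVBORw0KGgo=     # binary data as a base64-encoded string
---
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=     # base64-encoded "password"
```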
### **Volumes**
A Kubernetes volume is a directory in the orchestration and scheduling platform that contains data accessible to containers in a specific pod. Volumes serve as a plug-in mechanism for connecting ephemeral containers to persistent data stores located elsewhere.
### **Deployment**
A deployment is a collection of identical pods that are managed by the Kubernetes Deployment Controller. A deployment specifies the number of pod replicas that will be created. If pods or nodes encounter problems, the Kubernetes Scheduler ensures that additional pods are scheduled on healthy nodes.
Typically, deployments are created and managed using `kubectl create` or `kubectl apply`. Create a deployment by defining a manifest file in YAML format.
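A minimal Deployment manifest might look like this sketch (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                # desired number of identical pods
  selector:
    matchLabels:
      app: demo-app
  template:                  # pod template used for each replica
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25  # any container image
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, the Deployment Controller then keeps three identical pods running, rescheduling them onto healthy nodes as needed.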
## Kubernetes Architecture
Kubernetes architecture consists of the following main parts:
* Control Plane (master)
* kube-apiserver
* etcd
* kube-scheduler
* kube-controller-manager
* cloud-controller-manager
* Node components
* kubelet
* kube-proxy
* Container runtime
## Install tools
### kubectl
The `kubectl` command-line tool makes it possible to run commands against Kubernetes clusters. `kubectl` can be used to deploy applications, inspect and manage cluster resources, and inspect logs. See the `kubectl` [reference documentation](https://kubernetes.io/docs/reference/kubectl/) for more information.
### kind
The `kind` command makes it possible to run Kubernetes on a local machine. As a prerequisite, Docker needs to be installed and configured. `kind` runs local Kubernetes clusters using Docker container “nodes”.
## In depth docs
# Intro to NGINX
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-nginx
NGINX is a free, open-source, high-performance web server with a rich feature set, simple configuration, and low resource consumption that can also function as a reverse proxy, load balancer, mail proxy, HTTP cache, and many other things.
### How does NGINX work?
NGINX allows you to hide a server application's complexity from a front-end application. Rather than creating a new process for each web request, it uses an event-driven, asynchronous approach in which requests are handled in a single thread.
### Using NGINX with FlowX Designer
[**The NGINX Ingress Controller for Kubernetes**](https://kubernetes.github.io/ingress-nginx/) - `ingress-nginx` is an ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
Ingress allows you to route requests to services based on the host or path of the request, centralizing a number of services into a single entry point.
The [ingress resource](https://www.nginx.com/products/nginx-ingress-controller/nginx-ingress-resources/) simplifies the configuration of **SSL/TLS** **termination**, **HTTP load-balancing**, and **layer routing**.
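As a sketch of those capabilities in plain NGINX configuration (hostnames, paths, certificate locations, and upstream services are hypothetical):

```nginx
# Terminate TLS at NGINX and route by path to backend services.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/tls/tls.crt;
    ssl_certificate_key /etc/nginx/tls/tls.key;

    location /api/ {
        # Proxy API calls to a backend service.
        proxy_pass http://backend-service:8080/;
        proxy_set_header Host $host;
    }

    location / {
        # Everything else goes to the front-end application.
        proxy_pass http://frontend-service:80/;
    }
}
```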
For more information, check the following section:
#### Integrating with FlowX Designer
FlowX Designer uses the NGINX ingress controller for the following actions:
1. For routing calls to plugins
2. For routing calls to the [FlowX Engine](../../../platform-deep-dive/core-components/flowx-engine):
* Viewing current instances of processes running in the FlowX Engine
* Testing process definitions from the FlowX Designer - route the API calls and SSE communications to the FlowX Engine backend
* Accessing REST API of the backend microservice
3. For configuring the Single Page Application (SPA) - FlowX Designer SPA will use the backend service to manage the platform via REST calls
In the following section, you can find a suggested NGINX setup, the one used by FlowX.AI:
### Installing NGINX Open Source
For more information on how to install NGINX Open Source, check the following guide:
# Intro to Redis
Source: https://docs.flowx.ai/4.7.x/docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis
Redis is a fast, open-source, in-memory key-value data store that is commonly used as a cache to store frequently accessed data in memory so that applications can be responsive to users. It delivers sub-millisecond response times enabling millions of requests per second for applications.
It is also used as a Pub/Sub messaging solution, allowing messages to be passed to channels and for all subscribers to that channel to receive that message. This feature enables information to flow quickly through the platform without using up space in the database as messages are not stored.
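The pub/sub semantics can be sketched with a toy in-memory bus — an illustration of the pattern (fan-out to current subscribers, nothing stored), not the Redis client API:

```python
from collections import defaultdict

class PubSub:
    """Toy in-memory publish/subscribe, illustrating the Redis pattern."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        # Deliver to everyone subscribed right now; the message itself
        # is never persisted, so it takes no space in the data store.
        for callback in self._subscribers[channel]:
            callback(message)
        return len(self._subscribers[channel])  # number of receivers

bus = PubSub()
received = []
bus.subscribe("events", received.append)
delivered = bus.publish("events", "process.started")
print(delivered, received)  # → 1 ['process.started']
```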
Redis offers a primary-replica architecture in a single-node primary or a clustered topology, allowing you to build highly available solutions with consistent performance and reliability. Scaling the cluster up or down is straightforward, which lets it adjust to changing demands.
## In depth docs
# Projects
Source: https://docs.flowx.ai/4.7.x/docs/projects/managing-applications/application
Projects group all resources and dependencies needed to implement a use case.

## Overview
A project groups resources that represent a project's entire lifecycle. It's not just a collection of processes; it's an organized workspace containing all dependencies required for that project, from themes and templates to integrations and other resources. Projects enable you to:
* Create Versions of a project to manage changes over time.
* Deploy Builds for consistent environments.
* Organize Resources to ensure clarity and reduce errors.

## Core features
Projects provide a single view of all resources referenced in a process, enabling you to manage everything from one central workspace. This approach reduces context-switching and keeps configuration focused.
Projects support the definition of multiple locales, allowing for easy handling of regional content and settings. Enumerations, substitution tags, and media can be localized to meet specific environment requirements.
Projects support a robust versioning mechanism that tracks changes to processes, resources, and configurations. The Project Version captures a snapshot of all included resources, while the Build consolidates everything into a deployable package. Each build contains only one version of a project, ensuring a consistent deployment.
Projects can leverage Libraries, which act similarly to projects but are designed for resource sharing across projects. A library can contain reusable components like processes or templates that can be included in other projects. Dependencies between projects and libraries are managed carefully to ensure compatibility.
## Config
Config mode is the environment where you set up, adjust, and manage your project's resources, processes, and configurations. It's the workspace where you fine-tune every aspect of the project before it's ready for deployment. Think of it as the design phase, where the focus is on setup, organization, and preparation.
## Project components
### Processes
* **Processes**: Process definitions that drive the project's core functionality and share a project's resources.

* **Subprocesses**: Processes that can be invoked within the main process. These can either be part of the same project or imported from a library set as a dependency for the project.

### Content Management System (CMS)
* **Enumerations**: Predefined sets of options or categories used across the project. These are useful for dropdown menus, filters, and other selection fields.

* **Substitution tags**: System-predefined substitution tags and dynamic placeholders that allow for personalized content, such as user-specific data in notifications or documents.

* **Media Library**: A collection of images, videos, documents, and other media assets accessible within the project.

### Task Management
* **Views**: Views are configurable interfaces that present task-related data based on specific business process definitions. They allow users to create tailored visualizations of task information, utilizing filters, sorting options, and custom parameters to focus on relevant data.
* **Hooks**: Custom scripts or actions triggered at specific points in a task's lifecycle.
* **Stages**: The phases that a task goes through within a workflow (e.g., Pending, In Progress, Completed).
* **Allocation Rules**: Criteria for assigning tasks within workflows based on predefined rules (e.g., user roles, availability).

### Integrations
* **Systems (API Endpoints)**: Configurations for connecting to external systems via REST APIs. These endpoints facilitate data exchange and integration with third-party platforms.

* **Workflows**: Workflows are configurable sequences of tasks and decision nodes that automate data processing, system interactions, and integrations, enabling efficient communication and functionality across different projects and services.
### Dependencies
Dependencies refer to the relationship between a project and the external resources it relies on, typically housed in a **Library**. These dependencies allow projects to access and utilize common assets, like processes, enumerations, or media files, from another project without duplicating content. By managing dependencies effectively, organizations can ensure that their projects remain consistent, efficient, and modular, streamlining both development and maintenance.
* **Libraries**: A library is a special type of project that stores reusable resources. Projects can declare a library as a dependency to gain access to the resources stored within it.

* **Build-Specific Dependencies**: When a project adds a library as a dependency, it does so by referencing a specific build of that library.

This build-specific dependency ensures that changes in the library do not automatically propagate to the dependent projects, providing stability and control over which version is in use.
* **Versioning**: Dependencies are versioned, meaning a project can rely on a particular version of a library's build. This versioning capability allows projects to remain insulated from future changes until they are ready to update to a newer version.

Read more about libraries by accessing this section
***
### Configuration Parameters
Configuration Parameters are essential components that allow projects to be dynamic, flexible, and environment-specific. They provide a way to manage variables and values that are likely to change based on different deployment environments (e.g., Development, QA, Production), without requiring hardcoded changes within the project. This feature is particularly useful for managing sensitive information and environment-specific settings.
* **Set environment-specific values**: Tailor your project's behavior depending on the target environment.
* **Store sensitive data securely**: Store API keys, passwords, or tokens securely using environment variables.
* **Centralize settings**: Manage common values in one place, making it easier to update settings across multiple processes or integrations.

***
### Project level settings
#### Name
The name of the project, which serves as the main identifier for your project within FlowX.AI. This name is used to categorize and manage all associated resources.
#### Type
You have two options:
* **Application**: A standard project that will contain processes, resources, integrations, and other elements.
* **Library**: A reusable set of resources that can be referenced by multiple projects.
#### Platform type
You can specify the platforms for the project:
* **Omnichannel**: The project will support web, mobile, and other platforms.
* **Web**: The project will be restricted to web-based access.
* **Mobile**: The project will only be accessible via mobile platforms.
#### Default theme
Choose a theme to apply a consistent look and feel across the project. Themes manage colors, fonts, and UI elements for a unified visual experience (for example, "FlowXTheme" can be set as the default).
#### Number formatting
* **Min Decimals** and **Max Decimals**: Configure how numbers are displayed in the project by setting the minimum and maximum decimal points. This helps ensure data consistency when dealing with financial or scientific information.
* **Date Format**: Choose the format for displaying dates (e.g., short or long formats) to ensure the information is localized or standardized based on your project's requirements.
* **Currency Format**: Set whether currency is displayed using the ISO code (e.g., USD) or using a symbol (\$). This affects how financial information is presented across the project.
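A hedged sketch of how min/max decimals and the ISO-code-vs-symbol choice might combine (parameter names are hypothetical, not actual FlowX settings keys):

```python
def format_amount(value, min_decimals=2, max_decimals=4,
                  currency="USD", symbol=None):
    # Illustrative only: render at max_decimals, trim trailing zeros,
    # then pad back up to min_decimals if needed.
    text = f"{value:.{max_decimals}f}".rstrip("0")
    decimals = len(text.split(".")[1]) if "." in text else 0
    if decimals < min_decimals:
        text = f"{value:.{min_decimals}f}"
    # Prefix with either the ISO code or a symbol.
    prefix = symbol if symbol is not None else currency + " "
    return f"{prefix}{text}"

print(format_amount(1234.5))                 # → USD 1234.50
print(format_amount(1234.5678, symbol="$"))  # → $1234.5678
```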
#### Languages
This section allows you to manage languages supported by the project.
* **Default Language**: You can set one language as the default (for example, English (EN)), which will be the primary language for users unless they specify otherwise.
* **Add Multiple Languages**: This enables multi-language support. Substitution tags and enumerations can be localized to provide a better experience for users in different regions.

For more information about localization and internationalization, check the following section:
## Creating and managing projects
A project's lifecycle includes the following stages:
* Start by defining the name, type (Application or Library), and platform (Omnichannel, Web, Mobile).

* Set up initial configurations like themes, languages, and formats.

* Inherit system-wide design assets like themes or fonts.
* Add processes, define templates, set up enumerations, and manage integrations.
* Configure environment-specific parameters like API URLs or passwords using environment variables.
* Reuse resources from libraries by setting up dependencies, allowing access to shared content.
* Submit your configured resources to a version.
* Once finalized, a version can be committed for stability.
* Create a build from a version when ready to deploy to a different environment.
* A build packages the project version for a consistent deployment.
* Each build is immutable and cannot be modified once created, ensuring runtime stability.
## FAQs
To create a new version, navigate to the project dashboard, select "Create New Version," and make any changes. Once finalized, commit the version to lock it.
Creating a build captures a snapshot of the project version, consolidating resources into a single deployable package. Builds are immutable and cannot be edited once created.
Yes, projects support multiple locales. You can define regional settings, such as date formats and currencies, to cater to different environments.
When modifying shared resources, FlowX AI enforces versioning. You can track which processes or resources are affected by a change and revert if necessary.
A Project Version is a snapshot of resources and configurations that can be modified, tracked, and rolled back. A Build is a deployable package created from a committed version, and it is immutable once deployed.
Go to your project settings, navigate to dependencies, and select the desired library. Choose the build you want to use, and its resources will be accessible within your project.
No, a build is immutable. To make changes, modify the project version, create a new version, and deploy a new build.
Each resource has a Resource Definition ID (consistent across versions) and a Resource Version ID (specific to each version). These ensure that the correct version of a resource is used at runtime.
**A:** Dependencies allow projects to share and reuse resources like processes, enumerations, and templates that are stored in libraries. By setting a library as a dependency, a project can access the resources it needs without duplication, fostering modular development.
**A:** Dependencies are version-controlled, meaning you can choose specific library builds to ensure stability. Each build version of a library captures the state of its resources, allowing projects to lock onto a particular version until they are ready to upgrade.
**A:** Yes, you can remove a dependency from the project. Go to the **Dependencies** section in the project workspace and select the dependency you want to remove. However, make sure that no critical resources in the project rely on that dependency.
**A:** Circular dependencies occur when two libraries depend on each other. This can lead to conflicts and unexpected behavior. It's best to keep dependencies modular and avoid tightly coupling libraries to prevent such issues.
**A:** Use a controlled environment like Dev or UAT to test new builds of the library before updating the dependency in your main project. This allows you to validate changes and ensure they don't negatively impact your project.
**A:** When a dependency is updated to a newer build, any resources that were modified in the library will reflect the latest version. Projects have control over when to update, so older versions remain stable until the dependency is manually updated.
# Libraries
Source: https://docs.flowx.ai/4.7.x/docs/projects/managing-applications/libraries
Libraries are specialized projects that serve as reusable containers for resources that can be shared across multiple projects.

Unlike regular projects, which are intended for creating and managing specific workflows, libraries are designed to house common elements—like processes, enumerations, and media assets—that other projects may rely upon.
This makes libraries a cornerstone for establishing consistency and efficiency across various projects.
#### Key features
* **Resource Sharing**:
* Libraries facilitate the reuse of resources like processes, enumerations, and media assets across different projects. This allows for a more modular design, where commonly used elements are stored centrally.
* Resources in a library can be included in other projects by setting the library as a dependency.
* **Dependencies**:
* A project can add a library as a dependency, meaning it will have access to the resources stored in that library.
* Dependencies are managed through builds; each project can choose a specific version (build) of a library to depend on. This ensures that updates in the library do not automatically impact all projects unless intended.
* **Versioning**:
* Just like projects, libraries can be versioned. Each version of a library captures the state of its resources at a specific point in time.
* This versioning allows projects to lock dependencies to specific library versions, providing stability and predictability during runtime.
#### Managing libraries
1. **Creating a Library**:
* Libraries can be created similarly to projects. The creation process involves setting a name, defining the resources it will contain, and managing its configuration settings.
* Once created, resources can be added to the library incrementally, allowing for iterative development.

2. **Managing Library Resources**:
* Users can create, edit, delete, or version resources within the library. Resources include processes, enumerations, media files, and other elements that can be referenced by projects.
* A library can have multiple versions, each capturing the resource state. This allows backward compatibility with older project builds relying on specific library versions.

3. **Adding Library Dependencies to Projects**:
* In the project workspace, libraries can be set as dependencies under the **Dependencies** section.
* Users can select which build version of the library to reference. This allows projects to control how library changes affect them by choosing to update the dependency to newer library builds as needed.
Libraries can be added as a dependency only to Work-In-Progress (WIP) project versions.
#### Library build process
* **Builds in Libraries** are tagged versions, allowing a snapshot of the current library resources to be used by projects. Each build in a library captures the state of all its resources.
* Projects referencing a library can specify the exact build to use, ensuring that changes to the library do not inadvertently impact dependent projects.

#### Use cases
1. **Centralized Resource Management**:
* Organizations that need to maintain a standard set of processes or other resources can use libraries to store these centrally. Each project can then use the same library resources, ensuring consistency.
2. **Version-Controlled Dependency Management**:
* By utilizing builds and versioning within libraries, projects can safely depend on specific versions, reducing the risk of unexpected behavior due to resource updates.
3. **Streamlining Updates Across Projects**:
* When an update is required across multiple projects that rely on the same resources, it can be done by updating the library and releasing a new build. Each project can then choose to update to the new build when ready.
## Best practices
* Use libraries for resources that need to be standardized across multiple projects.
* Carefully manage dependencies and choose specific builds to avoid unexpected runtime behavior.
* Leverage versioning to maintain a clear history of resource changes, allowing rollbacks if necessary.
Libraries thus play a crucial role in modularizing the development process, enabling reusable and maintainable components across different projects. This leads to improved project organization, reduced redundancy, and simplified resource management.
## FAQs
**Q: How do I add a library as a dependency to a project?**

**A:** In the project workspace, navigate to the **Dependencies** section. Select the library you want to add, choose the specific build version, and confirm. Once added, the resources from that library will be available for use within the project.

**Q: What happens to dependent projects when a library resource is updated?**

**A:** When a resource in a library is updated, dependent projects will not be affected unless the library build they reference is updated to a newer version. This allows projects to maintain stability by controlling when updates are adopted.

**Q: Can a project depend on multiple libraries?**

**A:** Yes, a project can depend on multiple libraries. Each library can be referenced by selecting the desired build version. However, ensure that dependencies are managed carefully to avoid conflicts or redundancy.

**Q: How should I switch a project to a newer library build?**

**A:** Before switching to a newer library build, test it in a development or staging environment to ensure compatibility. If everything functions as expected, update the project’s dependency to the new build in the **Dependencies** section.
# Versioning
Source: https://docs.flowx.ai/4.7.x/docs/projects/managing-applications/versioning
Easily track and manage your project's evolution with comprehensive versioning features.
Versioning enables you to manage changes, track progress, and collaborate effectively by capturing snapshots of your project's state at any point in time. The versioning system ensures that resources are grouped and tracked as part of the project, providing a comprehensive and structured approach to development.
## Project version
A **Project Version** is an editable snapshot of your project at a specific moment. It contains all resources (e.g., processes, integrations, templates) and configurations grouped under the project.

The tab above provides a summary of all accessible project versions and branches available in the current environment.
Resources within a project (e.g., processes, integrations, templates) are versioned as part of the project, not individually.
Certain resources are considered **global** and are not included in project-specific versioning. These resources are shared across projects and environments to maintain consistency and simplify their management. Examples of such global resources include:
* **Themes**: Predefined design themes used across multiple projects.
* **Fonts**: A library of fonts accessible globally.
* **Global Media Files**: Shared media assets.
* **Out of Office Settings**: Configurations for user availability and auto-responses that are managed at the platform level.
***
## Interface overview
### Left panel: version details
* **State**: Displays the current state of the version (e.g., `draft`, `committed`).
* **Branch**: Indicates the currently selected branch (e.g., `main`).
* **Last Saved By**: Shows the username of the person who last saved changes (e.g., "JS").
* **Last Saved At**: Displays the timestamp of the most recent save (e.g., "23 Jan 2025 at 10:11 AM").
* **ID**: A unique identifier for the version, with a copy button for convenience.
### Center panel: branch and commit graph
The graph visually organizes the project’s versioning structure, showing relationships between branches and commits.
* **Graph View**:
* Provides a visual representation of branches and commits.
* **Main Branch** (blue): The main development branch.
* **Secondary Branches** (yellow): Feature or development branches such as `branch3`, `branch4`, and `secondary_branch`.
* **Markers**:
* **Dotted Circles**: Represent draft versions.
* **Solid Circles**: Represent submitted versions.
### Right panel: submit messages
* Displays a chronological list of commit messages for the selected branch.
* **Details**:
* Commit messages (e.g., `v1`, `commit_secondary_branch`) are aligned with their respective branches.
* Each commit shows:
* The user responsible (e.g., "JS").
* The state of the commit (e.g., `draft`, `committed`).
### Top bar: global controls
* **Project Name**: Displays the current project name (`Docs_customer_onboarding`).
* **Branch Selector**: Dropdown to navigate between branches.
* **State Indicator**: Highlights the state of the current branch (e.g., `draft` for `main`).
* **Submit Changes to Version**: Button for submitting draft changes globally.
* **Config/Runtime Tabs**:
* **Config**: Manage the version's configuration.
* **Runtime**: Access runtime options and monitoring.
***
## Core versioning operations
The table below summarizes key versioning operations and their functions:
| Operation | Description |
| ------------------- | ---------------------------------------------------------------------------------------- |
| **Create Project** | Creates a project and its initial draft version. |
| **Create Resource** | Adds a new resource in draft status to the current project version. |
| **Edit Resource** | Creates a deep copy of a resource for editing if the original is COMMITTED. |
| **Commit Project** | Updates the statuses of the project, its manifest, and resources to COMMITTED. |
| **Discard draft** | Deletes draft resources and the draft project version. |
| **Start New draft** | Creates a new draft version by copying the last COMMITTED project version. |
| **Create Branch** | Creates a new branch and a corresponding draft project version from the selected commit. |
### Detailed steps for operations
**Create Project**

* A new project is created in draft (WIP) state.
* An initial draft project version is automatically generated.

**Create Resource**

* Adds a resource in draft status to the current project version.
* Updates the project manifest to include:
  * **UUID** of the new resource.
  * `last_change_time`: Set to the current timestamp.
  * `last_committed_time`: Null (since the resource is WIP).

**Edit Resource**

* **Scenario A**: Editing a draft resource
  * No new resource version is created.
  * Updates the project manifest:
    * `last_change_time`: Current timestamp.
* **Scenario B**: Editing a `COMMITTED` resource
  * A deep copy of the `COMMITTED` resource is created as draft.
  * Updates the project manifest:
    * Links to the new draft resource.
    * `last_change_time`: Current timestamp.

**Commit Project**

* Commits the current draft project version:
  1. **Draft resources**:
     * Status: Updated to `COMMITTED`.
     * `last_change_time` and `last_committed_time`: Set to the current timestamp.
  2. **Project version**:
     * Status: Updated to `COMMITTED`.

**Discard Draft**

* Deletes draft resources and the draft project version.
* Available only when there are changes after the last commit.

**Start New Draft**

* Creates a new draft project version by copying:
  * The last `COMMITTED` project version.
  * Its manifest, linking to `COMMITTED` resources.

**Create Branch**

* Creates a new branch and draft project version:
  * Copies the selected `COMMITTED` project version.
  * Updates the manifest to reference existing `COMMITTED` resources.
***
## Lifecycle of a project version
1. **Create Project**: Automatically starts with a draft (work-in-progress) project version.
2. **Modify Resources**: Draft (WIP) resources can be edited directly, while `COMMITTED` resources are cloned into draft (WIP) before editing.
3. **Commit Changes**: Promotes the project version and its resources to `COMMITTED` status.
4. **Start New WIP**: Allows iterative development by creating a new draft version.
5. **Create Branch**: Enables parallel development with isolated draft versions.
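The lifecycle above can be sketched as a small state machine. Statuses, method names, and resource shapes below are illustrative only, not the platform's internal model:

```python
# Minimal sketch of the draft/committed lifecycle (not the platform's actual API).
import copy

DRAFT, COMMITTED = "DRAFT", "COMMITTED"


class ProjectVersion:
    def __init__(self, resources=None):
        self.status = DRAFT
        self.resources = resources or {}  # name -> {"status": ..., "body": ...}

    def edit(self, name, body):
        res = self.resources.get(name)
        if res is None:
            res = {"status": DRAFT, "body": None}
            self.resources[name] = res
        elif res["status"] == COMMITTED:
            # Committed resources are cloned into draft before editing.
            res = copy.deepcopy(res)
            self.resources[name] = res
        res["status"] = DRAFT
        res["body"] = body

    def commit(self):
        for res in self.resources.values():
            res["status"] = COMMITTED
        self.status = COMMITTED

    def start_new_draft(self):
        # A new draft copies the last committed version.
        assert self.status == COMMITTED
        return ProjectVersion(copy.deepcopy(self.resources))


v1 = ProjectVersion()
v1.edit("onboarding_process", "<bpmn v1>")
v1.commit()
assert v1.resources["onboarding_process"]["status"] == COMMITTED

v2 = v1.start_new_draft()
v2.edit("onboarding_process", "<bpmn v2>")  # clones the committed resource
assert v1.resources["onboarding_process"]["body"] == "<bpmn v1>"  # v1 untouched
assert v2.resources["onboarding_process"]["status"] == DRAFT
```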
***
## Starting a new version (draft)
You can initiate a new draft (work-in-progress) version while keeping the submitted version intact. A draft version is automatically created under the following circumstances:
* **New Project**: When you create a new project, a corresponding draft version is initiated. This ensures that ongoing changes are tracked separately from the submitted version.

* **New Branch Creation**: The creation of a new branch in the system also triggers the creation of a draft version (from a committed version only). This streamlines the process of branching and development, allowing for parallel progress without impacting the main submitted version.

* **Manual Draft Version Creation**: You have the flexibility to initiate a new draft version manually. This is particularly useful when building upon the latest version available on a branch.

***
## Advanced features
### Submitting changes
You can submit changes exclusively on work-in-progress (WIP) versions. Changes can be submitted using the designated action within the version menu. Upon triggering the submission action, a modal window appears, prompting you to provide a commit message.
The submit message is a string of maximum 50 characters and is mandatory for submission. Only letters, numbers, and the characters \[] () . \_ - / are allowed.
The placeholder indicating work-in-progress is replaced with a "submitted" state within the graph view.
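A client-side check of the documented submit-message rule might look like the sketch below. The exact server-side pattern is not published, and whether whitespace is accepted is not stated; this strict sketch rejects it:

```python
import re

# Pattern derived from the documented rule: letters, numbers, and [] () . _ - /,
# 1 to 50 characters. (Whitespace handling is an assumption; rejected here.)
_SUBMIT_MSG = re.compile(r"[A-Za-z0-9\[\]()._/-]{1,50}")


def is_valid_submit_message(msg: str) -> bool:
    return bool(_SUBMIT_MSG.fullmatch(msg))


assert is_valid_submit_message("v1")
assert is_valid_submit_message("release_2025-01/rc.1")
assert not is_valid_submit_message("")              # mandatory
assert not is_valid_submit_message("x" * 51)        # over 50 characters
assert not is_valid_submit_message("bad message!")  # '!' and space not listed
```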
#### Updating submit messages
You have the flexibility to modify submit messages after changes are submitted. This can be accomplished using the action available in the version menu.
### Creating a new branch
Using versioning, you can work on a stable copy of the project, isolated from ongoing updates by other users. You can create a new branch starting from a specific submit point.
The initiation of new branches is achieved using the dedicated action located in the left menu of the chosen submit point (used as the starting point for the branch).
The branch name is a string of maximum 16 characters and is mandatory for branch creation.
### Merging changes
You can incorporate updates made on a secondary branch into the main branch or another secondary branch. To ensure successful merging of changes, adhere to the following criteria:
* You can merge the latest version from a secondary branch into either its direct or indirect parent branch.

* Upon triggering the merge action, a modal window appears, giving the possibility to make the following selection:
* **Branch**: Displays the branches of which the current branch is a direct or indirect child.
* **Message**: A string of maximum 50 characters, limited to letters, numbers, and the following characters: \[] () . \_ - /.

The graph representation is updated to display the new version on the selected parent branch and the merged version is automatically selected, facilitating further development and tracking.
### Managing conflicts
The Conflict Resolution and Version Comparison feature provides a mechanism to identify and address conflicts between two process versions that, if merged, could potentially disrupt the integrity of a project.
The system displays both the version to be merged and the current version on a single screen, providing a clear visual representation of the differences. Conflicts and variations between the two versions are highlighted, enabling users to readily identify areas requiring attention.

Unless specified otherwise, changes from the source branch will be prioritized.
Not all changes are considered conflicts; for example, changes in node positions are ignored. Conflicts primarily arise from differences in business rules, expressions, and other scripts.
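This distinction can be illustrated with a toy diff that skips layout-only fields. The field names (`x`, `y`, `businessRule`) are hypothetical:

```python
# Illustrative sketch: diff two process definitions, treating layout-only
# fields (node positions) as non-conflicting. Field names are invented.

LAYOUT_FIELDS = {"x", "y"}  # assumed layout-only keys


def find_conflicts(source: dict, target: dict) -> list:
    conflicts = []
    for node_id in source.keys() & target.keys():
        for field in source[node_id].keys() | target[node_id].keys():
            if field in LAYOUT_FIELDS:
                continue  # moved nodes are not conflicts
            if source[node_id].get(field) != target[node_id].get(field):
                conflicts.append((node_id, field))
    return conflicts


source = {"task1": {"x": 10, "y": 20, "businessRule": "input.age >= 18"}}
target = {"task1": {"x": 99, "y": 99, "businessRule": "input.age >= 21"}}

# Only the business-rule difference is flagged; the position change is ignored.
assert find_conflicts(source, target) == [("task1", "businessRule")]
```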
#### Merging without conflicts
Easily merge secondary branches into the main branch or other branches when no conflicts are detected. The updated merge modal includes:
* A clean, streamlined interface for branch selection.
* A mandatory, validated commit message field (max 50 characters).
* Real-time feedback for successful merges and updates to the branching graph.

#### Advanced conflict detection
The new **Conflicts Detected Modal** provides a detailed overview of conflicting changes, with features such as:
* Clear grouping by resource type (e.g., Processes, Enumerations, Media Library).
* Scrollable lists and clickable entries to resolve conflicts efficiently.
* A comprehensive comparison of source and target branch differences for context.

#### Resource-level conflict resolution
Resolve conflicts directly at the resource level with an intuitive interface:
* **JSON Comparisons:** Visualize differences with color-coded highlights (source: yellow, target: blue).
* **Navigation Support:** Quickly jump between differences for efficient resolution.
* **Progress Tracking:** Mark resources as “Reviewed” or “Seen” to monitor resolution progress.


#### Flexible merge overrides
Handle unresolved conflicts with the **Merge Anyway** option, which provides flexibility while maintaining control over outcomes:
* A confirmation modal explains how unresolved conflicts will be handled (e.g., prioritizing source branch changes).
* Allows merging to continue even when some conflicts remain unresolved.
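The fallback described above can be sketched as a merge in which explicitly resolved conflicts use the reviewer's choice and unresolved ones default to the source branch. All names and data shapes are illustrative:

```python
# Sketch of a "Merge Anyway" fallback: unresolved conflicts take the source
# branch's value; explicitly resolved keys keep the reviewer's choice.


def merge_anyway(source: dict, target: dict, resolutions: dict = None) -> dict:
    resolutions = resolutions or {}  # per-key choices made during review
    merged = dict(target)
    for key, source_value in source.items():
        if key in resolutions:
            merged[key] = resolutions[key]  # user-resolved conflict
        else:
            merged[key] = source_value      # unresolved: source wins
    return merged


source = {"rule": "v2-rule", "label": "Pay"}
target = {"rule": "v1-rule", "label": "Payment", "extra": "kept"}

merged = merge_anyway(source, target, resolutions={"label": "Payment"})
assert merged == {"rule": "v2-rule", "label": "Payment", "extra": "kept"}
```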

### Read-only state
The Read-Only State feature allows you to access and view submitted versions of your projects while safeguarding the configuration from unintended modifications. By recognizing the visual indicators of the read-only state, you can confidently work within a controlled environment, ensuring the integrity of the project's process definitions.

***
## Builds
* You can commit multiple versions of a project before creating a build, and you can create a build from any committed project version.
* Once you create a build, you can't edit its contents (enumerations, substitution tags, integrations). To make changes, create a new project version and build it.
***
## Audit view
The "Open Audit View" provides you with a detailed audit log of actions related to work-in-progress (WIP) versions of a process. The primary goal is to ensure transparency and accountability for actions taken before the commit or save process.
You can quickly access and review the history of WIP versions, facilitating efficient decision-making and collaboration.

# Resources
Source: https://docs.flowx.ai/4.7.x/docs/projects/resources
Overview of Global and project resources, including their usage, dependencies, promotion, and configuration within the platform.
## Overview
Resources are categorized into **Global Resources** and **Project Resources**. Each type has unique characteristics that define its usage, dependencies, and promotion between environments. Understanding these distinctions ensures efficient project development, management, and deployment.
***
## Global resources
Global Resources are designed for **reuse across multiple projects** or business contexts.
These resources are often organized within **libraries**, enabling consistency and efficiency by providing a central repository of shared components.
### Key characteristics
* **Reusable**: Common design elements, themes, fonts, and other assets can be used in multiple projects.
* **Independent**: Global Resources exist outside individual projects, making them adaptable across different business cases.
* **No Dependencies**: These resources are not project-specific, making them versatile for broad use cases. Projects can reference libraries without requiring modifications to the core resources.
### Examples
#### Design assets
* **Themes**: Standardized themes for consistent styling.
* **Global Media Library**: Shared images, icons, and other assets.
* **Fonts**: Reusable font families to maintain branding.
#### Plugins
* **Languages**: Language packs or configurations that support multilingual capabilities across projects.
* **Templates**: Notification and document templates managed at a global level.
* **Out-of-Office Settings**: Platform-wide absence management settings.
#### General settings
* **Users**: User accounts and profiles that can be accessed across multiple projects.
* **Roles**: Defined roles that determine user permissions and access levels.
* **Groups**: User groupings that facilitate collective management and permissions.
* **Audit Log**: Logs that track changes and activities within the platform for security and compliance purposes.
#### Libraries
Organized resource containers including **enumerations**, **substitution tags**, and **CMS components** for multi-project use.
### Promotion workflow
* **Library Promotion**: Global Resources within libraries are promoted separately from projects. They are typically **imported into the Designer UI** in target environments as part of their own libraries.
* **Configuration Management**: When libraries are promoted, existing configurations, such as **generic parameters**, are replaced by project-level configurations in the target environment.
***
## Project resources
Project Resources are specific to a **business use case**. Unlike Global Resources, they belong to a single project, allowing for custom configurations and dependencies on libraries.
### Key characteristics
* **Project-Specific**: Tailored to a single project’s needs.
* **Library Dependencies**: Can reference and extend Global Resources.
* **Configurable**: Parameters can be updated in upper environments through environment variables or overrides.
### Examples
1. **Processes**: BPMN workflows customized for the project’s business logic.
2. **CMS Components**: Project-specific enumerations, substitution tags, and media items.
3. **Task Management**:
* **Views**: Configurable interfaces to display task-related data based on process definitions.
* **Hooks**: Users with task management permissions can create hooks to trigger specific process instances, such as sending notifications when events occur.
* **Stages**: Stages that allow task progression through different statuses.
* **Allocation rules**: Define how tasks are assigned to users or teams.
4. **Integration Designer**:
* **Systems**: A system is a collection of resources—endpoints, authentication, and variables—used to define and run integration workflows.
* **Workflows**: A workflow defines a series of tasks and processes to automate system integrations. Within the Integration Designer, workflows can be configured using different components to ensure efficient data exchange and process orchestration.
5. **Configuration Parameters**: Project-specific rendering settings like `applicationUuid`, `locale`, `language`, and `process parameters`.
6. **Project Builds**: Builds represent finalized versions of the project, including all associated metadata and linked library versions.
7. **Project Settings**: Configure various aspects of a project like platform type, default theme, formatting, etc.
### Promotion workflow
* **Build-Based Promotion**: Only **builds** (not commits) are promoted.
* **Design Asset Handling**: Referenced assets are created if missing but are not updated during import.
* **Configuration Parameters Overrides**: Environment variables replace default values in upper environments.
***
## Resources usage tracking
Track where resources are used to prevent unintended changes and ensure stability.


The **Resource Usage Tracking** feature provides real-time visibility into where and how resources are being used, preventing unintended changes and ensuring system stability.
**Key features**
* **View Dependencies** before making updates or deletions.
* **Understand Resource Interactions** across projects.
* **Prevent Issues** when modifying critical elements.
Each resource type includes a **Usage Overview modal**, which updates dynamically as references change.
### Process usage
1. **Access Usage Overview**
* Click the **Usage Overview icon** in the **Process List**.
2. **Usage Modal Details**
* Displays references by resource type (e.g., Process, UI Template, Workflow).
* Provides detailed context, including node configurations.
#### Deleting a process with references
* A **confirmation modal** warns users before deletion.
* The modal lists all affected references.
### Enumeration usage
1. **Access Usage Overview**
* Click the **Usage Overview icon** in the **Enumeration List**.
2. **Usage Modal Details**
* Shows where the enumeration is referenced (e.g., UI Designer, Data Model, Workflow)
### Media library usage
1. **Access Usage Overview**
* Click the **Usage Overview icon** in the **Media Library List**.
2. **Usage Modal Details**
* Displays where media files are used (e.g., UI Template, Process).
***
### Managing dependencies & tracking changes
* **Dynamic Updates**: Reference lists update automatically as resources change.
* **Dependency Checks**: Users see affected resources before deletion.
* **Bidirectional Tracking**: The **Usage Overview modal** also shows resources referenced **inside** a process.
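The bidirectional view can be sketched as a forward reference map ("resource → what it references") inverted into a usage index ("resource → where it is used"). Resource names below are invented for the example:

```python
# Sketch of bidirectional usage tracking via an inverted reference index.
from collections import defaultdict

# Forward view: what each resource references (hypothetical names).
references = {
    "onboarding_process": ["currency_enum", "welcome_template"],
    "kyc_process":        ["currency_enum"],
    "welcome_template":   [],
}


def build_usage_index(refs: dict) -> dict:
    usage = defaultdict(list)
    for resource, targets in refs.items():
        for target in targets:
            usage[target].append(resource)  # invert: target -> consumers
    return dict(usage)


usage = build_usage_index(references)

# Before deleting currency_enum, a dependency check surfaces both consumers:
assert sorted(usage["currency_enum"]) == ["kyc_process", "onboarding_process"]
# And the forward view still shows what a process references internally:
assert "welcome_template" in references["onboarding_process"]
```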
## Copying resources
Use the **Copy to Another Project** feature to transfer resources while maintaining dependencies.

Choose the target project or library.

Select the target branch.

The system displays referenced resources and validates dependencies.


If necessary, copy all substitution tags from a project to the target project. Existing substitution tags in the target project with the same key will not be overwritten.


If an identifier already exists in the destination, you'll be prompted to either:
* **Keep Both**: Create a duplicate.
* **Replace**: Overwrite existing resource.
* **Use Destination**: Keep existing resource without copying.
**Handling conflicts:** The system automatically detects duplicate identifiers and provides a clear choice to avoid unintended overwrites.
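The three documented choices can be sketched as strategies over a destination resource map. The `_copy` identifier suffix is an assumption for illustration, not the platform's actual naming scheme:

```python
# Sketch of the three conflict choices when copying a resource whose
# identifier already exists in the destination (identifier scheme invented).


def copy_resource(destination: dict, identifier: str, resource,
                  strategy: str = "keep_both") -> str:
    if identifier not in destination:
        destination[identifier] = resource
        return identifier
    if strategy == "replace":          # overwrite the existing resource
        destination[identifier] = resource
        return identifier
    if strategy == "use_destination":  # keep the existing resource, copy nothing
        return identifier
    # "keep_both": create a duplicate under a new identifier
    new_id = f"{identifier}_copy"
    destination[new_id] = resource
    return new_id


dest = {"onboarding": "v1"}
assert copy_resource(dest, "onboarding", "v2", "use_destination") == "onboarding"
assert dest["onboarding"] == "v1"                       # untouched
assert copy_resource(dest, "onboarding", "v2", "keep_both") == "onboarding_copy"
assert dest == {"onboarding": "v1", "onboarding_copy": "v2"}
```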
### Supported resource types
The copy feature is available for the following resources:
* **Process definitions** (including referenced resources)
* **Systems**
* **Workflows**
* **Enumerations** (including child enumerations)
* **Media Library items**
* **Notification** and **Document** templates
## Duplicating resources
The **Duplicate Resource** feature allows users to quickly copy resources within the same project or library.
### Supported resource types
* **Processes**
* **Enumerations**
* **Media files (including Global media library files)**
* **Notification templates**
* **Document templates**
* **Views**
### Steps
Duplicate options are available from each resource’s three-dot menu (table row & secondary navigation).

When selected, a **"Duplicate" modal** opens with a prefilled name: *Copy of \[Resource Name]* (which can be edited).

# Active policy
Source: https://docs.flowx.ai/4.7.x/docs/projects/runtime/active-policy
The Active policy defines the strategy for selecting which build version of a project is active in the runtime environment.
This feature plays a key role in ensuring that the correct [**project**](../managing-applications/application) configuration is executed, managing how processes interact with specific builds, and controlling how changes propagate through environments. The active policy determines which project version is currently "live" and dictates how updates and new builds are adopted.
#### Key Concepts
1. **Active Build**
* The active build is the currently selected version of a project that is deployed in the runtime environment. It is the version that all process executions will reference when initiated.
* Only one build can be active per environment at a time, ensuring clarity and stability.
2. **Policy Types**
* **Draft Policy**: Configures the project to run the latest draft version. This is useful for environments like development or testing, where ongoing changes need to be reflected in real-time without committing to a finalized version.
* **Fixed Version Policy**: Points to a specific, committed build version of the project. This policy ensures that no unexpected updates are introduced and is typically used for UAT and Production environments.
3. **Version Control and Flexibility**
* Active Policy provides flexibility by allowing the user to switch between different build versions based on the environment's needs. This makes it easy to test, stage, and deploy without affecting production stability.
* By managing the policy, you control which changes go live and when, allowing for seamless testing and controlled rollouts.
#### Workflow for Managing Active Policy
1. **Setting the Active Policy**
* Navigate to the project's settings in the FlowX.AI Designer.
* Go to the **Active Policy** section, where you can manage the behavior of the active build.
* Choose the policy type—either Draft or Fixed Version—to specify how builds should be managed in the environment.
2. **Managing Draft Policy**
* When selecting the Draft Policy, the project will always refer to the latest draft build created on the chosen branch.
* This allows for continuous development and testing without having to commit to a specific version. However, it is important to use this policy only in non-production environments.
3. **Selecting a Fixed Version Policy**
* For stability, you can set the active policy to a Fixed Version. This locks the project to a specific build, ensuring that no changes are applied unless a new build is created and explicitly set as active.
* This policy is ideal for UAT, QA, and Production environments where stability and consistency are key.
4. **Publishing a New Build as Active**
* Once a new build is ready and verified, you can update the active policy to point to the new build version.
* The change takes effect immediately, making the new build the active version for all processes and interactions in that environment.
#### Key Features
1. **Environment-Specific Control**
* Each environment (Development, UAT, Production) can have its own active policy, providing control over what version of the project is running in each stage of the development lifecycle.
2. **Rollback Capability**
* The active policy makes it easy to revert to a previous build if needed. By simply changing the active policy, you can switch back to an earlier stable version, minimizing disruption.
3. **Branch-Specific Management**
* Active Policy supports management by branch. This means you can maintain different policies for different branches, allowing separate versions of the project to be active in separate environments.
4. **Process Consistency**
* By defining which build version is active, the active policy ensures that all processes reference the correct resources. This eliminates inconsistencies and ensures that the expected behavior is maintained throughout the project's lifecycle.
#### Benefits
* **Controlled Rollouts**: Provides a clear pathway to manage when updates go live, ensuring that no untested changes accidentally reach production.
* **Simplified Management**: Offers a straightforward way to manage the complexity of multiple project versions across various environments.
* **Enhanced Stability**: Reduces the risk of runtime errors by maintaining control over the specific version that is active.
* **Efficient Testing**: Allows for easy testing of new changes without affecting the stability of other environments by using Draft Policy.
#### Managing Active Policy
1. **Configuring a New Active Policy**
* In the project settings, navigate to the **Active Policy** tab.
* Choose between **Draft** or **Fixed Version**.
* If selecting Fixed Version, pick the specific build from the dropdown list.
2. **Changing the Active Build**
* To update which build is active, modify the policy to reference a new build.
* Confirm the changes to activate the selected build in the runtime environment.
3. **Branch Management**
* Use branches to manage draft builds independently. Each branch can have its own active policy, which allows parallel development without interference.
#### Technical Details
1. **Project Context and Resource References**
* The active policy is closely tied to the project context. It determines how resource references are handled during process execution.
* At runtime, the system identifies the active build based on the policy, ensuring that the correct resource definitions are used.
2. **Metadata and Versioning**
* Each active policy includes metadata about the selected build, including build ID, version tags, and branch references. This metadata helps track which version is running in each environment.
3. **Interaction with Runtime Environment**
* The runtime environment relies on the active policy to decide which build version of a project to execute. This is managed through an internal reference that points to the active build.
4. **Storage and Management**
* The active policy is stored separately from the build data, making it easy to update the active build without modifying the build itself. This separation ensures that changes to the active state do not affect the integrity of the builds.
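The resolution logic described above can be sketched as follows: a Draft policy resolves to the latest build on the chosen branch, while a Fixed Version policy always returns the pinned build. The data shapes are illustrative, not the platform's internal model:

```python
# Sketch of active-policy resolution at runtime (illustrative data shapes).


def resolve_active_build(policy: dict, builds_by_branch: dict) -> str:
    if policy["type"] == "FIXED_VERSION":
        return policy["build_id"]                      # locked, never changes
    if policy["type"] == "DRAFT":
        return builds_by_branch[policy["branch"]][-1]  # latest on the branch
    raise ValueError(f"unknown policy type: {policy['type']}")


builds = {"main": ["build-1", "build-2", "build-3"]}

dev_policy = {"type": "DRAFT", "branch": "main"}
prod_policy = {"type": "FIXED_VERSION", "build_id": "build-2"}

assert resolve_active_build(dev_policy, builds) == "build-3"
assert resolve_active_build(prod_policy, builds) == "build-2"

# Rollback is just a policy change; the builds themselves are untouched:
prod_policy["build_id"] = "build-1"
assert resolve_active_build(prod_policy, builds) == "build-1"
```

Because the policy is stored separately from the build data, switching builds (or rolling back) never mutates a build, which mirrors the storage model described above.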
#### Example Use Case
Imagine you have a project in production, and a new feature is developed. You:
* Use a **Draft Policy** to test the new feature in a development environment, pointing to the latest draft build.
* Once the feature is verified, create a build and switch the environment's active policy to a **Fixed Version**, ensuring that the feature is consistent and stable.
* In case a problem arises, you can easily revert the active policy to a previous build version without any reconfiguration.
#### Conclusion
The Active Policy feature in FlowX.AI provides a flexible and reliable way to manage which version of a project is running in each environment. It offers a clear structure for testing, deployment, and rollback, making it a vital tool for maintaining stability and consistency throughout the project's lifecycle.
# Failed process start (exceptions)
Source: https://docs.flowx.ai/4.7.x/docs/projects/runtime/active-process/failed-process-start
Exceptions are types of errors meant to help you debug a failure in the execution of a process.

Exceptions can be accessed from multiple places:
* **Failed process start** tab from **Active process** menu in FlowX.AI Designer.
* **Process Status** view, accessible from **Process instances** list in FlowX.AI Designer.

If you open a process instance and it does not contain exceptions, the **Exceptions** tab will not be displayed.
### Exceptions data
When you click the **View** button, the exception details are displayed.

* **Process Definition**: The process where the exception was thrown.
* **Source**: The source of the exception (see the possible type of [sources](#possible-sources) below).
* **Message**: A hint message to help you understand what's wrong with your process.
* **Type**: Exception type.
* **Cause Type**: The cause type (or the name of the node, where applicable).
* **Process Instance UUID**: Process instance unique identifier (UUID).
* **Token UUID**: The token unique identifier.
* **Timestamp**: The default format is `yyyy-MM-dd'T'HH:mm:ss.SSSZ`.
* **Details**: Stack trace (a **stack trace** is a list of the method calls that the process was in the middle of when an **Exception** was thrown).
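The documented timestamp format is a Java-style pattern (`yyyy-MM-dd'T'HH:mm:ss.SSSZ`). If you need to parse such timestamps in Python, the roughly equivalent `strptime` directive is shown below (`%f` accepts the millisecond field, `%z` the `+0000`-style offset); the sample value is invented:

```python
from datetime import datetime, timezone

# Python equivalent of the Java pattern yyyy-MM-dd'T'HH:mm:ss.SSSZ.
FORMAT = "%Y-%m-%dT%H:%M:%S.%f%z"

ts = datetime.strptime("2025-01-23T10:11:42.123+0000", FORMAT)

assert ts.year == 2025 and ts.month == 1 and ts.day == 23
assert ts.tzinfo is not None                      # offset-aware
assert ts.astimezone(timezone.utc).hour == 10     # already UTC here
```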
#### Possible sources
### Exception types
Based on the exception type, there are multiple causes that could make a process fail. Here are some examples:
| Type                     | Cause                                                                                                                                                             |
| :----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Business Rule Evaluation | When executing action rules fails for any reason.                                                                                                                  |
| Condition Evaluation     | When evaluating action conditions fails.                                                                                                                           |
| Engine                   | When the connection with the database or with [Redis](../../../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis) fails. |
| Definition               | Misconfigurations: process definition name, subprocess parent process ID value, missing start node condition.                                                      |
| Node                     | When an outgoing node can't be found (missing sequence, etc.).                                                                                                     |
| Gateway Evaluation       | When the token can't pass a gateway for any reason; possible causes include a missing sequence/node or a failed node rule.                                         |
| Subprocess               | Exceptions are saved for subprocesses just like for any other process; the parent process ID is also saved (it can be used to link them when displaying exceptions). |
# Process instance
Source: https://docs.flowx.ai/4.7.x/docs/projects/runtime/active-process/process-instance
A process instance is a specific execution of a business process that is defined on the FlowX.AI platform. Once a process definition is added to the platform, it can be executed, monitored, and optimized by creating an instance of the definition.

## Overview
Once the desired processes are defined in the platform, they are ready to be used. Each time a process needs to be used, for example each time a customer wants to request a new credit card, a new instance of the specified process definition is started in the platform. Think of the process definition as a blueprint for a house, and of the process instance as each house of that type being built.

The **FlowX.AI Engine** is responsible for executing the steps in the process definition and handling all the business logic. The token represents the current position in the process and moves from one node to the next based on the sequences and rules defined in the exclusive gateways. In the case of parallel gateways, child tokens are created and eventually merged back into the parent token.
Kafka events are used for communication between FlowX.AI components, such as the engine and integrations/plugins. Each event type is associated with a Kafka topic so that the messages sent on Kafka can be tracked and orchestrated. The engine updates the UI by sending messages through sockets.
## Checking the Process Status
To check the status of a process or troubleshoot a failed process, follow these steps:
1. Open **FlowX.AI Designer**.
2. Go to **Processes → Active Process → Process instances**.
3. Click the **Process status** icon.

## Understanding the Process Status data
Understanding the various elements within process status data is crucial. Here's what each component entails:
* The **Status** field indicates the state of the process instance, offering distinct values:
| Status | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| **CREATED** | Visible if there's an error during process creation. Displays as `STARTED` if there were no errors during creation. |
| **STARTED** | Indicates the current running status of the process. |
| **DISMISSED** | Available for processes with subprocesses, seen when a user halts a subprocess. |
| **EXPIRED** | This status appears when the defined "expiryTime" expression in the process definition elapses since the process was initiated (`STARTED`). |
| **FINISHED** | Signifies successful completion of the process execution. |
| **TERMINATED** | Implies a termination request has been sent to the instance. |
| **ON HOLD** | Marks a state where the process is no longer editable. |
| **FAILED** | Occurs if a CronJob triggers at a specific hour, and the instance isn't finished by then. |
* **Active process instance**: The UUID of the process instance, with a copy action available.
* **Variables**: Displayed as an expanded JSON.

* **Tokens**: A token represents the state within the process instance and describes the current position in the process flow.

For more information about token status details, see [this page](../../../building-blocks/token).
* **Subprocesses**: Displayed only if the current process instance generated a [subprocess](../../../building-blocks/process/subprocess) instance.
* **Exceptions**: Errors that let you know where the process is blocked, with a direct link to the node where the process is breaking for easy editing.

For more information on token status details and exceptions, check the following section:
* **Audit Log**: The audit log displays events registered for process instances, tokens, tasks, and exceptions in reverse chronological order by timestamp.

### Color coding
In the **Process Status** view, some nodes are highlighted with different colors to easily identify any failures:
* **Green**: Nodes highlighted with green mark the nodes passed by the [token](../../../building-blocks/token).
* **Red**: The node highlighted with red marks the node where the token is stuck (process failure).

## Starting a new process instance
To start a new process instance, a request must be made to the [FlowX.AI Engine](../../../platform-deep-dive/core-components/flowx-engine). This is handled by the web/mobile application. The current user must have the appropriate role/permission to start a new process instance.

When starting a new process instance, we can also set it to [inherit some values from a previous process instance](../../../platform-deep-dive/core-components/flowx-engine#orchestration).
## Troubleshooting possible errors
If everything is configured correctly, the new process instance should be visible in the UI and added to the database. However, if you encounter issues, here are some common error messages and their possible solutions:
Possible errors include:
| Error Message | Description |
| ------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
| *"Process definition not found."* | The process definition with the requested name was not set as published. |
| *"Start node for process definition not found."* | The start node was not properly configured. |
| *"Multiple start nodes found, but start condition not specified."* | Multiple start nodes were defined, but the start condition to choose the start node was not set. |
| *"Some mandatory params are missing."* | Some parameters set as mandatory were not included in the start request. |
| `HTTP code 403 - Forbidden` | The current user does not have the process access role for starting that process. |
| `HTTP code 401 - Unauthorized` | The current user is not logged in. |
# Builds
Source: https://docs.flowx.ai/4.7.x/docs/projects/runtime/builds
The Build feature allows for the creation of deployable packages of a project, encapsulating all its resources into a single unit.
A build is a snapshot of a specific project version, packaged and prepared for deployment to a runtime environment. The concept of builds plays a crucial role in ensuring that the correct version of a project, with all its configurations and dependencies, runs consistently across different environments (e.g., Dev, UAT, Production).
#### Key concepts
1. **Project Version vs. Build**
* **Project Version** is the editable snapshot of a project's state at a specific point in time. It contains all the resources (like processes, integrations, templates, etc.) and configurations grouped within the project.
* A **Build** is the deployable package of a project version. It is the compiled and immutable state that contains all the project resources transformed into executable components for the runtime environment.
2. **Single Version Rule**
* A build can only represent a single version of a project. If you have multiple versions of a project, each one can have a unique build. This ensures that you can clearly manage which version of the project is running in a specific environment without conflicts.
3. **Consistency and Deployment**
* Builds ensure that when you deploy a project to a different environment (like moving from Dev to UAT), the exact configuration, processes, templates, integrations, and all associated resources remain consistent.
* Builds are immutable—once created, a build cannot be altered. Any updates require creating a new version of the project and generating a new build.
#### Creating a build
1. **Develop the Project**
* First, work on the project in the Config mode. This involves creating processes, adding resources, and configuring integrations. Multiple versions of the project can be created and saved as drafts.
2. **Submit Changes to Version**
* When the project is ready for deployment, submit the changes to create a specific version of the project. This step involves finalizing the current configuration, grouping all resources and changes, and marking them under a specific version ID.
3. **Generate a Build**
* In the project settings, select the version you want to deploy and generate a build. This step compiles the selected project version into a package, converting all the resource definitions into executable components for the runtime environment.
* You can specify build metadata such as version tags, which help identify and manage builds during deployment.
4. **Publish the Build**
* Once the build is generated, it can be published. The published build is now available for execution in the chosen runtime environment, making it the active version that responds to process and integration calls.
#### Key features
1. **Immutability**
* Builds are immutable, ensuring that once a build is created, it reflects a fixed state of the project. This immutability prevents accidental changes and ensures consistency.
2. **Checksum and Integrity**
* Each build includes a checksum or identifier to verify its integrity. This ensures that the deployed build matches the expected configuration and no changes have been made post-build.
3. **Runtime Dependency**
* The runtime environment relies on builds to determine the active project configuration. This means the runtime does not directly interact with the editable project versions but uses builds to maintain stability and reliability.
4. **Version Control**
* Builds are version-controlled, and each build corresponds to a specific project version. This means you can trace exactly which project configuration is active in a particular environment at any time.
#### Benefits
* **Consistency Across Environments**: Builds ensure that the same version of a project runs in different environments, avoiding discrepancies between development and production.
* **Reduced Errors**: Immutable builds reduce the risk of runtime errors caused by unexpected changes.
* **Simplified Rollbacks**: If an issue is detected with a current build, previous builds can be deployed seamlessly, providing an efficient rollback strategy.
* **Streamlined Deployment**: Builds allow for a straightforward deployment process, reducing the complexity of transferring configurations and resources between environments.
#### Build management
1. **Creating a Build**
* Go to the project settings.
* Select the project version you want to deploy.
* Click the **Create Build** button and follow the prompts to configure build settings, including adding metadata or tags.
2. **Viewing and Managing Builds**
* Access the list of builds for a specific project through the Builds section.
* Each build entry provides details like version number, creation date, creator, and status (e.g., Draft, Published).
3. **Publishing a Build**
* Once a build is verified and ready, publish it to make it the active build in the desired environment.
* Only one build can be active per environment at any given time.
#### Technical details
1. **Manifest and Metadata**
* Each build contains a manifest that lists all the resources included, their version IDs, and their resource definitions.
* Metadata helps identify which project version the build is derived from, providing traceability.
2. **Separation of Design and Runtime**
* Builds serve as a bridge between the design (config) view and the runtime view. The config view is where changes are made and managed, while the runtime view uses builds to execute stable, tested versions of projects.
3. **Storage**
* Builds are stored in a dedicated storage system to ensure they are separate from the editable project versions. This storage supports easy retrieval and deployment of builds.
#### Example use case
Imagine you have a project for customer onboarding with multiple processes, templates, and integrations. After completing development and final testing, you:
* Submit changes to create **Version 1.0** of the project.
* Create a build for **Version 1.0**.
* Deploy this build to the UAT environment for final stakeholder review.
* If any adjustments are needed, you go back to the project, make changes, and submit them as **Version 1.1**.
* Create a new build for **Version 1.1**, ensuring that the changes are encapsulated separately from the previous version.
Once everything is approved, you can publish **Version 1.1 Build** to the Production environment, ensuring all environments are aligned without manual reconfiguration.
#### Conclusion
The Build feature is essential for managing projects across multiple environments. It provides a clear and organized pathway from development to deployment, ensuring consistency, integrity, and stability throughout the project's lifecycle.
# Configuration parameters overrides
Source: https://docs.flowx.ai/4.7.x/docs/projects/runtime/configuration-parameters-overrides
The Configuration parameters overrides feature allows you to set environment-specific values for certain parameters within a project.
This feature enables flexibility and customization, making it possible to tailor a project’s behavior and integration settings according to the needs of each environment (e.g., Dev, UAT, Production). By defining overrides, users can easily manage variables like API endpoints, authentication tokens, environment-specific URLs, and other settings without altering the core project configuration.
#### Key concepts
1. **Configuration Parameters**
* Configuration parameters are placeholders used throughout a project to store key settings and values. These parameters can include anything from URLs, API keys, and credentials to integration-specific details.
* Parameters can be defined at the project level and referenced throughout processes, business rules, integrations, and templates.
If an override is needed for a configuration parameter defined in a library, it must be applied within the project where the library is used as a dependency. This ensures a clear distinction between library defaults and project-specific configurations while maintaining flexibility.
Overrides applied in a project take precedence over the values defined in the library.
2. **Environment Overrides**
* Overrides are specific values assigned to configuration parameters for a particular environment. This ensures that a project behaves appropriately in each environment without requiring changes to the project itself.
* Each environment (Development, UAT, Production) can have a unique set of overrides that replace the default parameter values when the project is deployed.
3. **Variable Precedence**
* Process Variables Override Configuration Parameters:
* If a process variable and a configuration parameter share the same name, the process variable takes precedence at runtime.
* If the process variable is null or undefined, the configuration parameter’s value is used as a fallback.
* Best Practice: Avoid naming process variables the same as configuration parameters to prevent unexpected overrides or conflicts.
4. **Centralized Management**
* Configuration parameters and their overrides are managed centrally within the project’s settings. This allows for quick adjustments and a clear overview of how each environment is configured.
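The precedence rules above can be sketched as a small resolution function (a minimal illustration with hypothetical names — not the FlowX.AI implementation):

```python
def resolve_parameter(name, process_vars, env_overrides, defaults):
    """Resolve a value following the documented precedence:
    process variable > environment override > default value."""
    # A process variable with the same name takes precedence at runtime...
    value = process_vars.get(name)
    if value is not None:
        return value
    # ...unless it is null/undefined, in which case the environment-specific
    # override (if any) is used as a fallback...
    if name in env_overrides:
        return env_overrides[name]
    # ...and finally the default value defined in the project settings.
    return defaults.get(name)

# With a UAT override in place, the UAT-specific URL wins over the default:
resolve_parameter(
    "CRM_URL",
    process_vars={},
    env_overrides={"CRM_URL": "https://uat.crm.example.com"},
    defaults={"CRM_URL": "https://dev.crm.example.com"},
)  # → "https://uat.crm.example.com"
```

This also illustrates why distinct names are recommended: a process variable called `CRM_URL` would silently shadow the configuration parameter at runtime.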
#### Workflow for managing Configuration parameter overrides
1. **Defining Configuration Parameters**
* Go to the project settings and navigate to the **Configuration Parameters** section.
* Define parameters that will be used across the project, such as:
* **CRM URL**: Base URL for CRM integration.
* **API Key**: Secure key for external service authentication.
* **Environment URL**: Different base URLs for various environments (Dev, UAT, Production).
2. **Setting Up Environment-Specific Overrides**
* Access the **Overrides** section for configuration parameters.
* For each environment (e.g., Dev, UAT), assign specific values to the defined parameters. For example:
* Dev Environment: `CRM_URL` → `https://dev.crm.example.com`
* UAT Environment: `CRM_URL` → `https://uat.crm.example.com`
* Save the environment-specific overrides. These values will take precedence over the default settings when deployed in that environment.
3. **Applying Overrides at Runtime**
* When a project is deployed to a specific environment, the system automatically applies the relevant overrides.
* Overrides ensure that the configuration aligns with environment-specific requirements, like endpoint adjustments or security settings, without altering the core setup.
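The Dev/UAT example above could be pictured as the following structure (a purely illustrative JSON sketch — the field names and layout are hypothetical, not the platform's storage format):

```json
{
  "configurationParameters": {
    "CRM_URL": "https://crm.example.com"
  },
  "overrides": {
    "Dev": { "CRM_URL": "https://dev.crm.example.com" },
    "UAT": { "CRM_URL": "https://uat.crm.example.com" }
  }
}
```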
#### Key features
1. **Flexibility Across Environments**
* Easily adjust configuration parameters without changing the core application, ensuring that the project adapts to the target environment seamlessly.
2. **Centralized Parameter Management**
* Configuration parameters and their environment-specific overrides are managed centrally within the project settings, providing a single source of truth.
3. **Dynamic Integration Adaptation**
* Environment-specific overrides allow integrations to connect to different endpoints or use different credentials based on the environment, supporting a smoother development-to-production workflow.
4. **Improved Security**
* Sensitive data, such as API keys or credentials, can be stored as environment-specific overrides, ensuring that development and testing environments do not share the same sensitive values as production.
5. **Avoid Variable Naming Conflicts**
* Best Practice: Use distinct names for process variables and configuration parameters to avoid unintentional overrides. If process variables are undefined or null, they will default to the corresponding configuration parameter’s value.
#### Benefits
* **Simplified Deployment**: Overrides eliminate the need for manual configuration changes during deployment, streamlining the process and reducing the risk of errors.
* **Environment Consistency**: Each environment can have its own tailored settings, ensuring consistent behavior and performance across all stages.
* **Secure Configuration**: Overrides keep sensitive parameters isolated to the appropriate environments, enhancing security and minimizing the risk of exposure.
* **Ease of Maintenance**: Centralized management of parameters and overrides simplifies adjustments when changes are needed, minimizing maintenance overhead.
#### Managing Configuration parameters overrides
1. **Creating a Parameter**
* In the project settings, navigate to the **Configuration Parameters** section.
* Define a new parameter by specifying its name, type, and default value.
* This parameter will be used as a placeholder throughout the project, referenced in processes, templates, and integrations.
2. **Assigning Overrides**
* Access the **Overrides** tab for the configuration parameters.
* Choose the environment (e.g., Dev, UAT, Production) and assign a specific override value for each parameter.
* Confirm the changes to apply the overrides for the selected environment.
3. **Viewing and Editing Overrides**
* From the project’s settings, you can review all environment-specific overrides in one place.
* Edit existing overrides or add new ones as needed to adapt to changes in environment requirements.
#### Technical details
1. **Parameter Resolution at Runtime**
* During runtime, the system retrieves the active configuration parameters for the project. If overrides are set for the environment, they replace the default values.
* This resolution ensures that the correct parameters are used, based on the environment in which the application is running.
2. **Integration with Business Rules and Processes**
* Configuration parameters can be directly referenced in business rules and processes. Overrides ensure that these references point to the correct environment-specific values during execution.
3. **Storage and Management**
* Configuration parameters and their overrides are stored centrally, and the values are retrieved dynamically during runtime. This centralized approach ensures that updates to parameters are automatically reflected in the application without needing a new build.
#### Example use case
Imagine you have an application that integrates with a third-party CRM system. You:
* Define a `CRM_URL` parameter with a default value for the development environment.
* Create overrides for UAT and Production environments with their respective URLs.
* When the application is deployed to the UAT environment, the system automatically uses the UAT-specific CRM URL for all integration points without changing the project’s core configuration.
* When it's time to deploy to Production, the system uses the production-specific overrides, ensuring that the application connects to the right CRM instance.
# Scheduled processes
Source: https://docs.flowx.ai/4.7.x/docs/projects/runtime/scheduled-processes
Automate the initiation of process instances using predefined timer settings. Scheduled processes trigger new process instances on specific dates, times, or intervals without manual intervention. A scheduler is generated for the published version of any process started by a Start Timer Event node.
## Overview
The **Scheduled Processes** feature automatically triggers process instances using Timer Start Events. In the **Runtime** tab, you can view scheduled processes and activate/deactivate timers without a redeployment. This feature is especially useful in upper environments, where you might need to pause or resume cron jobs without modifying the project build.
Check this section for more details about Timer Start Events and how to configure them.

## Key components
### 1. List of scheduled processes
The **Scheduled Processes** tab displays all timers that trigger process instances. Each row in the list shows:
* **Process Name:** The process linked to the timer.
* **Swimlane Name:** In multi-swimlane processes, each swimlane may have its own timer.
* **Timer Name:** The name of the timer node.
* **Timer Type:**
* **Date-based:** Triggers at a specific date and time.
* **Cycle-based:** Repeats at defined intervals (for example, every 10 seconds).
* **Location:** The execution environment (e.g., **Local**).
From this interface, you can view timer details or activate/deactivate timers as needed.
### 2. Timer Events schedule
To view a timer’s configuration, click the **View** icon next to the scheduled timer in the list.

### 3. Activating and deactivating timers
Manage timer activation directly from the UI:

* **Activate:** Click the activate button in the Scheduled Processes tab.
* **Deactivate:** Click the same button to suspend the timer.
You cannot edit timer configurations in the **Runtime** tab. To modify settings, update the process definition and redeploy.
***
## Use cases
A Timer Start Event can be scheduled to **automatically trigger** an onboarding process at a predefined time.
**Example**: Process starts on **January 28, 2025, at 12:14 PM**, sending onboarding notifications.
A Cycle timer can execute a job **every 10 seconds for 60 repetitions**, ensuring periodic execution.
A cycle timer with a cron expression can generate a report at **1 AM every day**.
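The timer settings above map onto standard scheduling notations. The snippets below are illustrative only (assuming the cron and ISO 8601 conventions commonly used for BPMN timer events — verify the exact syntax in the Timer Start Event documentation):

```
# Date-based: fixed start date and time (ISO 8601)
2025-01-28T12:14:00

# Cycle-based: every 10 seconds, repeated 60 times (ISO 8601 repeating interval)
R60/PT10S

# Cron expression: every day at 1 AM (minute hour day-of-month month day-of-week)
0 1 * * *
```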
***
## Additional notes
* **Runtime Timer Control:**\
Manage timers in the **Runtime** tab without changing the application build. Previously, timer activation was controlled by a checkbox in the **Start Timer node**, which is now read-only after deployment.
* **Timers in Multi-Swimlane Processes:**\
Each swimlane can have its own timer. The **Scheduled Processes** tab displays each timer separately for clear management.
* **Exporting and Importing Builds:**\
Timer activation settings are not editable during build export/import. All timer management occurs in the **Runtime** tab.
# Add new Kafka exchange mock
Source: https://docs.flowx.ai/4.7.x/docs/api/add-kafka-mock
POST {MOCK_ADAPTER_URL}/api/kafka-exchanges/
View all available Kafka exchanges
The URL of the mock adapter.
The mocked JSON message that the integration will send
The JSON message the integration should reply with
Should match the topic the engine listens on for replies from the integration
Should match the topic name that the integration listens on
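A request to this endpoint might look like the sketch below. The JSON field names (`receivedMessage`, `replyMessage`, and the topic keys) are hypothetical placeholders for the parameters described above — check the mock adapter's schema for the actual names:

```bash
curl --request POST \
  --url "{MOCK_ADAPTER_URL}/api/kafka-exchanges/" \
  --header "Content-Type: application/json" \
  --data '{
    "receivedMessage": { "clientId": "12345" },
    "replyMessage": { "status": "OK" },
    "replyTopic": "<topic the engine listens on for replies>",
    "listenTopic": "<topic the integration listens on>"
  }'
```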
# Enable misconfigurations
Source: https://docs.flowx.ai/4.7.x/docs/api/add-misconfigurations
GET {{baseUrlAdmin}}/api/process-versions/compute
To enable and compute warnings for already existing processes from previous FlowX versions (< 4.1), use the following endpoint to compute all the warnings.
The URL of admin.
This is the specific operation performed to compute misconfigurations for older processes.
```bash Request
curl --request GET \
--url {baseUrlAdmin}/api/process-versions/compute
```
```json Response
{ "status": "success" }
```
# Download a file
Source: https://docs.flowx.ai/4.7.x/docs/api/download-file
GET documentURL/internal/storage/download
This endpoint allows you to download a file by specifying its path or key.
The base URL of the document.
A segment of the path that specifies it is an internal call.
A segment of the path referring to storage resources.
The unique identifier for the download.
# List buckets
Source: https://docs.flowx.ai/4.7.x/docs/api/list-buckets
GET {{documentUrl}}/internal/storage/buckets
The Documents Plugin provides the following REST API endpoints for interacting with the stored files.
This endpoint returns a list of available buckets.
The base URL of the document.
A segment of the path that specifies it is an internal call.
A segment of the path referring to storage resources.
The particular resource in the storage being accessed.
# List objects in a bucket
Source: https://docs.flowx.ai/4.7.x/docs/api/list-objects-in-buckets
GET {{documentURL}}/internal/storage/buckets/{BUCKET_NAME}
This endpoint retrieves a list of objects stored within a specific bucket. Replace `{BUCKET_NAME}` with the actual name of the desired bucket.
The base URL of the document.
A segment of the path that specifies it is an internal call.
A segment of the path referring to storage resources.
The particular resource in the storage being accessed.
The unique identifier for the storage bucket.
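For example, to list the objects in a bucket named `documents` (the bucket name and base URL are placeholders — substitute your own values):

```bash
curl --request GET \
  --url "{documentURL}/internal/storage/buckets/documents"
```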
# Execute action
Source: https://docs.flowx.ai/4.7.x/docs/api/rest4
POST https://admin.devmain.flowxai.dev/onboarding/api/runtime/process/{processUuid}/token/{tokenUuid}/ui-action/{uiActionFlowxUuid}/context/main/execute
This endpoint executes an action within a specific process runtime.
## Base URL
* Always prepend the endpoint with the correct Base URL.
* For different environments (like staging or production), switch the base URL accordingly.
## Request Headers
Bearer token required for authentication. Format: `Bearer <token>`.
## Path Parameters
The unique identifier of the process instance.
The unique identifier of the runtime token.
The unique identifier of the UI action to execute.
## Response fields
The unique identifier of the token associated with the process.
The ID of the current node in the process execution flow.
Reserved for future use. Currently always `null`.
Indicates whether a back action is triggered. Default is `false`.
Indicates whether the token should be reset. Default is `false`.
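A call to this endpoint can be sketched as follows (the UUIDs are placeholders; whether a request body is needed depends on the action being executed):

```bash
curl --request POST \
  --url "https://admin.devmain.flowxai.dev/onboarding/api/runtime/process/{processUuid}/token/{tokenUuid}/ui-action/{uiActionFlowxUuid}/context/main/execute" \
  --header "Authorization: Bearer <token>" \
  --header "Content-Type: application/json" \
  --data '{}'
```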
# Get Build Info
Source: https://docs.flowx.ai/4.7.x/docs/api/start-process/get-build-info
GET https://admin.devmain.flowxai.dev/rtm/api/runtime/app/{appId}/build/info
This endpoint retrieves information about the build of a specific project, as set by the Active policy.
## Base URL
* Always prepend the endpoint with the correct Base URL.
* For different environments (like staging or production), switch the base URL accordingly.
This endpoint is only accessible via the admin environment. Ensure that you are using the correct admin base URL when making requests.
## Authorization
Bearer authentication header of the form `Bearer <token>`, where `<token>` is your auth token.
## Request headers
Bearer token required for authentication. Format: `Bearer <token>`.
Specifies the media types that are acceptable for the response. Defaults to accepting JSON and plain text responses.
Indicates supported content encoding algorithms for the response.
Specifies language preferences for the response.
## Path Parameters
The unique identifier of the application whose build information is being retrieved.
```bash
curl --request GET \
--url https://admin.devmain.flowxai.dev/rtm/api/runtime/app/{appId}/build/info \
--header "Authorization: Bearer <token>" \
--header "Flowx-Platform: web"
```
## Response Fields
The unique identifier of the build.
The ID of the application associated with the build.
Contains configuration settings related to the build.
The version ID of the application associated with the build.
Indicates the supported platforms for the build (e.g., `OMNICHANNEL`).
The ID of the theme applied to the build.
Defines the format used for displaying dates.
Specifies the date format style (e.g., `short`).
Configuration for displaying numerical values, including decimal precision.
The minimum number of decimal places to display in numerical values.
The maximum number of decimal places to display in numerical values.
Defines the format used for displaying currency.
Specifies the format for displaying currency (e.g., `isoCode`).
The default language code for the build (e.g., `en` for English).
## Response Headers
Indicates whether the response can be exposed when credentials are present. Example: `true`.
Lists the allowed headers in the actual request.
Specifies the allowed HTTP methods.
Indicates the allowed origin for cross-origin requests.
Specifies the encoding used to compress the response.
Indicates the media type of the response content.
Enforces secure (HTTPS) connections to the server.
Specifies that the response may vary based on the `Accept-Encoding` request header.
Prevents the browser from MIME-sniffing a response away from the declared `Content-Type`.
Controls the cross-site scripting (XSS) filter. Example: `0` (disabled).
# Start Process
Source: https://docs.flowx.ai/4.7.x/docs/api/start-process/rest2
POST https://admin.devmain.flowxai.dev/rtm/api/runtime/app/{appId}/build/{buildId}/process-name/{processDefinitionName}/start
This endpoint initiates a process in the application runtime environment using the specified `buildId` and `processDefinitionName`.
## Base URL
* Always prepend the endpoint with the correct Base URL.
* For different environments (like staging or production), switch the base URL accordingly.
## Path Parameters
The ID of the application.
The build ID for the runtime environment.
The name of the process definition to start.
## Request headers
Bearer token required for authentication. Format: `Bearer <token>`.
Indicates the media type of the request body. Must be set to `application/json`.
Identifies the client platform (e.g., `web`).
Specifies the media types that are acceptable for the response. Example: `application/json, text/plain, */*`.
Indicates supported content encoding algorithms for the response. Example: `gzip, deflate, br, zstd`.
Specifies language preferences for the response. Example: `en-GB,en-US;q=0.9,en;q=0.8`.
Identifies the client software making the request (e.g., browser or tool).
The address of the previous web page from which the request was made.
## Request Body
This endpoint requires a JSON request body. The exact payload structure should conform to the process definition schema associated with `{processDefinitionName}`.
Description of what key1 represents
Description of what key2 represents
### Example Request Body
```json
{
"key1": "value1",
"key2": "value2"
}
```
## Response
The unique identifier of the process definition within FlowX.
The ID of the build associated with the process.
The status of the build at the time the process was started (e.g., "COMMITTED").
The ID of the application where the process was initiated.
The ID of the parent application if applicable, otherwise `null`.
The ID of the root application if applicable, otherwise `null`.
The name of the process definition that was started.
A list of tokens representing active workflow states within the process.
The current state of the process instance (e.g., "STARTED").
The unique identifier for the process instance.
Metadata related to the process instance, including:
* `processInstanceId`: The numeric ID of the process instance.
* `processInstanceUuid`: The UUID of the process instance.
### Response Headers
Indicates whether the response can be exposed when credentials are present. Value is `true`.
Lists the allowed headers in the actual request. Example: `DNT, Keep-Alive, User-Agent, X-Requested-With, If-Modified-Since, Cache-Control, Content-Type, Range, Authorization, flowx-platform`.
Specifies the allowed HTTP methods. Example: `GET, PUT, POST, DELETE, PATCH`.
Indicates the allowed origin for cross-origin requests. Example: `https://demo-angular.devmain.flowxai.dev`.
Specifies the encoding used to compress the response. Example: `gzip`.
Indicates the media type of the response content. Example: `application/json`.
Enforces secure (HTTPS) connections to the server. Example: `max-age=31536000 ; includeSubDomains`.
Specifies that the response may vary based on the `Accept-Encoding` request header. Example: `Accept-Encoding`.
Prevents the browser from MIME-sniffing a response away from the declared `Content-Type`. Example: `nosniff`.
Controls the cross-site scripting (XSS) filter. Example: `0` (disabled).
```bash
# Required headers: Authorization, Content-Type, flowx-platform
# Optional headers: Accept, Accept-Encoding, Accept-Language
curl -X POST "https://admin.devmain.flowxai.dev/rtm/api/runtime/app/{appId}/build/{buildId}/process-name/{processDefinitionName}/start" \
  -H "Authorization: Bearer " \
  -H "Content-Type: application/json" \
  -H "flowx-platform: web" \
  -H "Accept: application/json, text/plain, */*" \
  -H "Accept-Encoding: gzip, deflate, br, zstd" \
  -H "Accept-Language: en-GB,en-US;q=0.9,en;q=0.8" \
  -d '{
    "key": "value"
  }'
```
# Start process and inherit values
Source: https://docs.flowx.ai/4.7.x/docs/api/start-process/rest3
POST {ENGINE_URL}/api/process/{PROCESS_DEFINITION_NAME}/start/inheritFrom/{RELATED_PROCESS_INSTANCE_UUID}
The `paramsToInherit` map should hold the needed values under one of the following keys, depending on the desired outcome:
* `paramsToCopy` - used to inherit only a subset of parameters from the parent process; it holds the list of key names to be copied from the parent parameters
* `withoutParams` - used to remove some parameter values from the parent process before inheriting them; it holds the list of key names to be excluded from the parent parameters
If none of these keys have values, all the parameter values from the parent process will be inherited by the new process.
The UUID of the related process instance from which values will be inherited.
The name of the process definition to be started.
A map containing information about which values to copy from the related process instance.
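The snippet below is a minimal sketch (not FlowX engine code) of the selection semantics described above, assuming `paramsToCopy` and `withoutParams` behave as plain allow/deny lists over the parent process parameters:

```javascript
// Hypothetical illustration of the paramsToInherit semantics described above.
// Not FlowX engine code - just the allow/deny-list behavior of the two keys.
function inheritParams(parentParams, paramsToInherit = {}) {
  const { paramsToCopy, withoutParams } = paramsToInherit;
  if (paramsToCopy && paramsToCopy.length > 0) {
    // Inherit only the listed keys
    return Object.fromEntries(
      Object.entries(parentParams).filter(([key]) => paramsToCopy.includes(key))
    );
  }
  if (withoutParams && withoutParams.length > 0) {
    // Inherit everything except the listed keys
    return Object.fromEntries(
      Object.entries(parentParams).filter(([key]) => !withoutParams.includes(key))
    );
  }
  // Neither key set: inherit all parameter values from the parent
  return { ...parentParams };
}

const parent = { client: "John", amount: 1000, internalNote: "draft" };
console.log(inheritParams(parent, { paramsToCopy: ["client"] }));
// -> { client: "John" }
console.log(inheritParams(parent, { withoutParams: ["internalNote"] }));
// -> { client: "John", amount: 1000 }
```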
# View Kafka exchanges
Source: https://docs.flowx.ai/4.7.x/docs/api/view-kafka-exchanges
GET {MOCK_ADAPTER_URL}/api/kafka-exchanges/
View all available Kafka exchanges
The URL of the mock adapter.
# Supported scripts
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/supported-scripts
Scripts are used to define and run actions, as well as properties inside nodes. For now, the following scripting languages are supported.
## Business rules scripting
| Scripting Language | Language Version | Scripting Engine | Scripting Engine Version |
| ------------------ | -------------------- | ---------------------------------------- | ------------------------ |
| JavaScript | ECMAScript 15 (2024) | GraalJS | GraalVM 24.1.2 |
| Python 3 | 3.11.7 | GraalPy | GraalVM 24.1.2 |
| Python 2.7 | 2.7 | org.python.jython » jython-standalone | 2.7.2 |
| DMN | 1.3 | org.camunda.bpm.dmn » camunda-engine-dmn | 7.20.0 |
| MVEL | 2 | org.mvel » mvel2 | 2.5.2.Final |
| Groovy | 3.0.21 | org.codehaus.groovy » groovy-jsr223 | 3.0.21 |
In version 4.7.2, we've deprecated the **DMN (Decision Model and Notation)** business rule actions. This change affects how business rules are configured on task/user task nodes in business processes.
**Looking ahead**: Python 2.7 support will be deprecated in FlowX.AI 5.0. We recommend migrating your Python scripts to Python 3 to take advantage of improved performance and modern language features.
By default, FlowX.AI uses Python 2.7 (Jython) for script execution. Python 3 support is available but must be explicitly enabled. See [Configuring script engine](/4.7.x/setup-guides/flowx-engine-setup-guide/engine-setup#configuring-script-engine) for a configuration example and more details.
## Integration designer scripting
| Scripting Language | Language Version | Scripting Engine | Scripting Engine Version |
| ------------------ | -------------------- | ---------------- | ------------------------ |
| JavaScript | ECMAScript 15 (2024) | GraalJS | GraalVM 24.1.2 |
| Python | 3.11.7 | GraalPy | GraalVM 24.1.2 |
***
## JavaScript
**New in v4.7.1**: JavaScript support has been upgraded from Nashorn (ECMAScript 5.1) to GraalJS (ECMAScript 15/2024), providing significantly improved performance and modern language features.
JavaScript in FlowX.AI is powered by GraalJS, which supports ECMAScript 15 (2024) standards. This provides modern JavaScript capabilities for your business rules and integrations.
### What is GraalJS?
GraalJS is an ECMAScript compliant JavaScript implementation built on GraalVM. It supports the latest ECMAScript features and offers high performance through the GraalVM's JIT compiler.
### Properties
* Supports ECMAScript 15 (2024) features including modern syntax and APIs
* Provides consistent scripting across business rules and integration designer
* Runs in a secure sandboxed environment
### Limitations
JavaScript scripts run in a sandboxed environment. Here is a list of JavaScript features not available in the sandbox:
* import.meta (ES2020)
* top-level await (ES2022)
* set operations (ES2024)
* Array.fromAsync (ES2024)
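As an illustration, a business rule written in modern JavaScript might look like the snippet below. The input shape (`application` with `monthlyIncome` and `expenses`) is hypothetical, chosen only to exercise modern features that are part of the supported ECMAScript standard, such as optional chaining and nullish coalescing:

```javascript
// Hypothetical business rule: compute a debt-to-income ratio and a decision.
// The "application" input shape is an assumption for illustration only.
function evaluateApplication(application) {
  const income = application?.monthlyIncome ?? 0;   // nullish coalescing (ES2020)
  const expenses = application?.expenses ?? [];     // optional chaining (ES2020)
  const totalExpenses = expenses.reduce((sum, e) => sum + e.amount, 0);
  const ratio = income > 0 ? totalExpenses / income : 1;
  return {
    debtToIncomeRatio: Number(ratio.toFixed(2)),
    approved: ratio < 0.4,
  };
}

console.log(evaluateApplication({
  monthlyIncome: 5000,
  expenses: [{ amount: 1200 }, { amount: 300 }],
}));
// -> { debtToIncomeRatio: 0.3, approved: true }
```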
### Useful links
***
## Python 3
**Important Configuration Note**: Python 2.7 (Jython) is the default Python runtime in FlowX.AI. To enable Python 3 via GraalPy, you must set the feature toggle `FLOWX_SCRIPTENGINE_USEGRAALVM` to `true` in the [**Process Engine configuration**](/4.7.x/setup-guides/flowx-engine-setup-guide/engine-setup) and in the AI Developer agent configuration. If this configuration variable is missing or set to `false`, Python 2.7 will be used.
Python is a high-level, interpreted programming language known for its simplicity and readability. FlowX.AI uses Python 3.11.7 via GraalPy for executing Python scripts.
### What is GraalPy?
GraalPy is an implementation of Python that runs on the GraalVM. It offers high compatibility with standard Python (CPython) while providing the ability to run within the Java ecosystem. GraalPy supports Python 3 and provides access to a large subset of the standard Python library.
### Properties
* Supports Python 3.11.7 with access to most common Python libraries
* Runs up to 3x faster than Python 2.7 via Jython
* Runs in a sandboxed environment for better security
### Python Library Support
Python 3 support in FlowX comes with a subset of the [standard Python library](https://docs.python.org/3.11/library/index.html). Python runs in a sandboxed environment and the following modules are not available:
"stringprep", "sqlite3", "plistlib", "getpass", "curses", "curses.textpad", "curses.ascii", "curses.panel", "xml.parsers.expat", "xmlrpc.client", "xmlrpc.server", "turtle", "tkinter", "test.support", "symtable", "pyclbr", "msvcrt", "winreg", "winsound", "grp", "termios", "tty", "pty", "syslog", "audioop", "msilib", "nis", "ossaudiodev", "smtpd", "spwd", "crypt"
Available modules might provide limited access to system resources due to the execution in a sandbox environment.
### Useful links
***
## Python 2.7
Python 2.7 is the default runtime in FlowX.AI 4.7.1 and is implemented via Jython. While it remains fully supported in this version, we recommend migrating to Python 3 for all new development to prepare for future releases.
**Jython** is an implementation of the high-level, dynamic, object-oriented language [Python](http://www.python.org/) seamlessly integrated with the [Java](http://www.javasoft.com/) platform. Jython is an open-source solution.
### Properties
* Supports **Python 2.7**; most common Python libraries (for example `math`, `time`) can be imported
* Java libraries can also be imported: [details here](https://www.tutorialspoint.com/jython/jython_importing_java_libraries.htm)
### Useful links
***
## DMN
Decision Model and Notation (DMN) is a standard for Business Decision Management.
FlowX.AI uses [BPMN.io](https://bpmn.io/) (based on **camunda-engine-dmn** version **7.20.0**) which is built on [DMN 1.3](https://www.omg.org/spec/DMN/1.3/PDF) standards.
### Properties
**camunda-engine-dmn** supports [DMN 1.3](https://www.omg.org/spec/DMN/1.3/PDF), including Decision Tables, Decision Literal Expressions, Decision Requirements Graphs, and the Friendly Enough Expression Language (FEEL).
### Useful links
**More information:**
***
## MVEL
MVEL is a powerful expression language for Java-based applications. It provides a plethora of features and is suited for everything from the smallest property binding and extraction, to full-blown scripts.
* FlowX.AI uses [**mvel2 - 2.5.2.Final version**](https://mvnrepository.com/artifact/org.mvel/mvel2/2.5.2.Final)
### Useful links
**More information:**
***
## Groovy
Groovy is a multi-faceted language for the Java platform. It can be used to combine Java modules, extend existing Java applications, and write new applications.
We use and recommend **Groovy 3.0.21** version, using **groovy-jsr223** engine.
**Groovy** has multiple ways of integrating with Java, some of which provide richer options than available with **JSR-223** (e.g. greater configurability and more security control). **JSR-223** is recommended when you need to keep the choice of language used flexible and you don't require integration mechanisms not supported by **JSR-223**.
**JSR-223** (spec) is **a standard scripting API for Java Virtual Machine (JVM) languages**. The JVM languages provide varying levels of support for the JSR-223 API and interoperability with the Java runtime.
### Useful links
# Token
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/token
The token is the concept that describes the current position in the process flow. When you start a process, you have a graph of nodes and, based on the configuration, the flow moves from one node to another following the defined sequences (connections between nodes).
The token is a [BPMN](../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn) concept that represents a state within a process instance. It keeps track of the current position in the process flow and is used to store data related to the current process instance state.
A token is created each time a new process instance is started. As the actions on the process instance are executed, the token advances from one node to the next. As a node can have several [actions](./actions/actions) that need to be executed, the token is also used for keeping track of the actions executed in each node.
In case of [parallel gateways](./node/parallel-gateway), child tokens are created for each flow branch. The parent token moves to the gateway sync node and only advances after all the child tokens also reach that node.
The image below shows how a token advances through a process flow:

The token will only move to the next node when there are no more mandatory actions from the current node that need to be executed. The token will also wait on a node in case the node is set to receive an event from an external system through Kafka.
There are cases when the token needs to stop in a node until some input is received from the user. If user input is needed for further advancing the process, the token should only move forward after all the data has been received. A mandatory manual action can be used in this case and linked to the user action. This ensures the process flow advances only after the user input is received.
## Checking the token status
The current process instance status can be retrieved using the FlowX Designer. It displays information about the tokens related to that process instance and the current nodes they are in.

In case more details are needed about the token, you can click the **Process status** view button, choose a token then click the **view button** again:

## Token details
* **Token status**: Describes the state of the token in the process.
* **Status Current Node**: Describes the token status in the current node.
* **Retry**: After correcting the errors, you can hit Retry and see if the token moves on.
* **See Token status**: Opens a modal displaying a detailed view of the token status.

If there are parallel gateways configured in a process, you will have multiple tokens, one created for each parallel path.

### Token status
| Token Status | Description |
| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| ACTIVE | the token state is set to active when tokens are created; a parent token is reactivated when all child tokens reach the parallel gateway closing node. |
| INACTIVE | child tokens are set to inactive when they arrive in a parallel gateway closing node; the current token is set to inactive when it reaches a final node. |
| ABORTED | the current token is set to Aborted when it moves backward in order to redo a series of previous actions in the process - it is reset, and a new token is activated. |
| ON HOLD | when a parallel split gateway node is reached, the parent token is set to On hold until all the child tokens reach the parallel gateway closing node; the parent token does not have a "Retry" action icon until all the child tokens are finished. |
| DISMISSED | when the process/subprocess reaches a certain node and it is canceled/exited. |
| EXPIRED | when a defined "expiryTime" in the process definition passes the token will change to this status. |
| TERMINATED | when the process is terminated by a termination request. |
### Status current node
| Status Current Node | Definition |
| ----------------------------- | ----------------------------------------------------------------------------------------------------- |
| ARRIVED | when the token reaches the new node |
| EXECUTING | when the token execution starts |
| EXECUTED\_COMPLETE | after executing node actions, if all the mandatory actions from the node are completed |
| EXECUTED\_PARTIAL | after executing node actions, if there are still mandatory uncompleted actions on it |
| WAITING\_MESSAGE\_EVENT | when the token reaches an intermediate message catch event node, the token will be set to this status |
| WAITING\_TIMER\_EVENT | when the token reaches an intermediate timer event node, the token will be set to this status |
| WAITING\_MESSAGE | when the token waits for a message from another system |
| MESSAGE\_RECEIVED | after the message was received |
| MESSAGE\_RESPONSE\_TIMED\_OUT | if the message was not received in the set timeframe |
### See token status
You can access a detailed view of the token status by going to your Process instance -> Tokens -> View (eye icon):

Here you will find details like:
* **id**: The unique identifier of the token.
* **version**: The version of the token.
* **parentTokenId**: The identifier of the parent token, if any.
* **startNodeId**: The identifier of the node where the token started.
* **embedNodeId**: The identifier of the embedded node, if any.
* **mainSwimlaneId**: The identifier of the main swimlane associated with the token.
* **currentProcessVersionId**: The identifier of the current process version.
* **currentContext**: The current context of the token.
* **initiatorType**: The type of the initiator, if any.
* **initiatorId**: The identifier of the initiator, if any.
* **currentNodeId**: The identifier of the current node associated with the token.
* **currentNodeName**: The name of the current node.
* **state**: The state of the token (for example, INACTIVE, ACTIVE, etc.)
* **statusCurrentNode**: The status of the current node.
* **syncNodeTokensCount**: The count of synchronized node tokens.
* **syncNodeTokensFinished**: The count of finished synchronized node tokens.
* **dateUpdated**: The date and time when the token was last updated.
* **paramValues**: Parameter values associated with the token.
* **processInstanceId**: The identifier of the process instance.
* **currentNode**: Details of the current node.
* **nodesActionStates**: An array containing information about action states of nodes associated with the token.
* **uuid**: The universally unique identifier (UUID) of the token.
# Conditional styling
Source: https://docs.flowx.ai/4.7.x/docs/building-blocks/ui-designer/conditional-styling
Dynamically update styling and properties of UI elements based on conditions, reducing the need for multiple prototypes.
Conditional styling enables dynamic, data-driven design adjustments based on specific conditions. It helps reduce the need for multiple prototypes by applying styles conditionally, depending on the data and platform-specific configurations.
**Conflict Resolution:** When multiple conditions overlap, the latest condition (evaluated from top-to-bottom) takes priority.
***
## Configuring conditional styling
1. Open the **UI Designer**.
2. Select a **Text**, **Link**, or **Container** element.
3. Navigate to the **Styles** tab.
4. Locate the **Conditional Styling** section.
5. Click the **➕** icon to add new expressions and effects.
6. Use the **JS Editor** to configure and test your expressions for accurate behavior.
***
## Conditional styling properties

**Availability:** Conditional styling is available for **Text**, **Link**, and **Container** UI elements.
***
## Structure of conditional styling
1. **Condition:**
* A string expression evaluated similarly to hide/disable expressions.
* Supports referencing process data store keys for dynamic evaluations.
2. **Overrides:**
* A key-value map defining specific property-value pairs.
* Overrides are applied based on the evaluated condition.
### Example:
```json
{
"platformDisplayOptions": {
"platform": "web",
"style": {
"conditionals": [
{
"condition": "$user.age > 30",
"overrides": {
"backgroundColor": "#FFD700",
"fontWeight": "bold"
}
},
{
"condition": "$user.subscription == 'premium'",
"overrides": {
"borderColor": "#4CAF50",
"textColor": "#FFFFFF"
}
}
]
}
}
}
```
## Renderer behavior
* **Real-Time Evaluation**: Conditions are continuously evaluated based on live data updates.
* **Priority Handling**: If multiple conditions are met, the last condition in the sequence takes precedence.
* **Dynamic Application**: Styles are applied instantly upon condition satisfaction, enhancing UI responsiveness.
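The last-condition-wins behavior described above can be sketched as follows. This is not actual renderer code, just an illustration of how overrides from later matching conditions replace those from earlier ones when merged in order; the `evaluate` callback stands in for the platform's expression evaluation:

```javascript
// Illustration of the priority handling described above (not renderer code).
// Conditions are evaluated top-to-bottom; overrides from later matches win.
function resolveStyle(conditionals, evaluate) {
  return conditionals.reduce((style, { condition, overrides }) => {
    return evaluate(condition) ? { ...style, ...overrides } : style;
  }, {});
}

const conditionals = [
  { condition: "user.age > 30", overrides: { backgroundColor: "#FFD700" } },
  { condition: "user.plan == 'premium'", overrides: { backgroundColor: "#4CAF50" } },
];

// Hypothetical evaluator: both conditions hold for this user,
// so the later override takes precedence.
const style = resolveStyle(conditionals, () => true);
console.log(style); // -> { backgroundColor: "#4CAF50" }
```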
## Contextual menu options
* Conditional Styling Section: Copy To/From Platforms to reuse conditions across different environments.
* Expression + Effect Section: Copy To/From Platforms and Delete options for efficient management.

## Key notes
* **Dynamic Styling**: Facilitates platform-specific style overrides driven by real-time data conditions.
* **Scalability**: Designed to support iterative and scalable UI implementations.
* **Enhanced Usability**: Simplifies user interaction with intuitive UI/UX components, making style management more accessible and efficient.
Combine conditional styling with data-driven expressions to create responsive, adaptable UI designs effortlessly.
# AI Analyst
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/ai-core/ai-analyst
Builds and optimizes BPMN processes using AI to generate, validate, and enhance workflows based on user-provided documents
## Description
The AI Analyst is an intelligent agent that transforms various inputs into Business Process Model and Notation (BPMN) processes. It can generate complete workflows from text prompts, images, or documents, as well as modify existing BPMN processes. The agent uses advanced AI to interpret requirements and convert them into structured JSON representations that the FlowX.AI platform can interpret and visualize as proper BPMN diagrams.
## Capabilities
The AI Analyst offers two main categories of capabilities:
### Generate New Processes
1. **Generate from Prompt**
* Creates a complete BPMN process based on natural language descriptions
* Transforms textual requirements into a structured workflow
* Produces a JSON representation containing nodes (with name and type) and links between nodes
```mermaid
%%{
init: {
'theme': 'base',
'themeVariables': {
'primaryColor': '#f1774a',
'primaryTextColor': '#000',
'primaryBorderColor': '#000',
'lineColor': '#6b6b6b',
'secondaryColor': '#f3b861',
'tertiaryColor': '#bababa'
}
}
}%%
graph TD;
Start([Start]):::primary
Check_Prompt[Check Prompt]:::secondary
Classify_Image[Classify Image]:::tertiary
Generate_Process[Generate Process]:::secondary
Generate_from_Node[Generate from Node]:::tertiary
Generate_between_Nodes[Generate between Nodes]:::tertiary
PostProcessing[PostProcessing]:::secondary
END([END]):::primary
subgraph Parallel_Processing [Parallel Processing]
Process_Swimlanes[Process Swimlanes]:::tertiary
Split_Image[Split Image]:::tertiary
Image_Analysis[Image Analysis]:::tertiary
Generate_Prompt[Generate Prompt]:::tertiary
end
Start ==> Check_Prompt;
Start --> Classify_Image;
Classify_Image --> Parallel_Processing;
Parallel_Processing --> Generate_Process;
Check_Prompt ==> Generate_Process;
Check_Prompt --> Generate_from_Node;
Check_Prompt --> Generate_between_Nodes;
Generate_Process ==> PostProcessing;
Generate_from_Node --> PostProcessing;
Generate_between_Nodes --> PostProcessing;
PostProcessing ==> END;
%% Styling
classDef primary fill:#f1774a,stroke:#000,stroke-width:1px,color:#000;
classDef secondary fill:#f3b861,stroke:#000,stroke-width:1px,color:#000;
classDef tertiary fill:#bababa,stroke:#000,stroke-width:1px,color:#000;
```
2. **Generate from Image**
* Extracts BPMN process information from uploaded images
* Identifies swimlanes and components in diagram images
* Can process various image resolutions (from small \< 4K to large > 8K)
* Accepts optional text descriptions for additional context
* Combines visual recognition with language processing for accurate interpretation
```mermaid
%%{
init: {
'theme': 'base',
'themeVariables': {
'primaryColor': '#f1774a',
'primaryTextColor': '#000',
'primaryBorderColor': '#000',
'lineColor': '#6b6b6b',
'secondaryColor': '#f3b861',
'tertiaryColor': '#bababa'
}
}
}%%
graph TD;
Start([Start]):::primary
Classify_Image[Classify Image]:::secondary
Check_Prompt[Check Prompt]:::tertiary
Generate_Process[Generate Process]:::secondary
Generate_from_Node[Generate from Node]:::tertiary
Generate_between_Nodes[Generate between Nodes]:::tertiary
Post_Processing[Post Processing]:::secondary
END([END]):::primary
subgraph Parallel_Processing [Parallel Processing]
Process_Swimlanes[Process Swimlanes]:::secondary
Split_Image[Split Image]:::secondary
Image_Analysis[Image Analysis]:::secondary
Generate_Prompt[Generate Prompt]:::secondary
end
Start ==> Classify_Image;
Start --> Check_Prompt;
Classify_Image ==> Parallel_Processing;
Parallel_Processing ==> Generate_Process;
Check_Prompt --> Generate_Process;
Check_Prompt --> Generate_from_Node;
Check_Prompt --> Generate_between_Nodes;
Generate_Process ==> Post_Processing;
Generate_from_Node --> Post_Processing;
Generate_between_Nodes --> Post_Processing;
Post_Processing ==> END;
%% Styling
classDef primary fill:#f1774a,stroke:#000,stroke-width:1px,color:#000;
classDef secondary fill:#f3b861,stroke:#000,stroke-width:1px,color:#000;
classDef tertiary fill:#bababa,stroke:#000,stroke-width:1px,color:#000;
classDef secondary-highlight fill:#f3b861,stroke:#8a2be2,stroke-width:2px,color:#000;
```
3. **Generate from Document**
* Extracts process descriptions from uploaded documents
* Analyzes textual content to identify workflow components and relationships
* Creates a structured representation of the described process
### Edit Existing Processes
1. **Generate Process from BPMN File**
* Imports and interprets existing BPMN files
* Converts external BPMN formats into FlowX.AI-compatible representations
2. **Generate In-between Nodes**
* Adds missing steps between existing nodes in a workflow
* Enhances process completeness and logical flow
3. **Generate from Node**
* Expands a process by adding subsequent steps after a specified node
* Allows incremental development of complex workflows
4. **Edit Existing Process**
* Modifies components of an existing process based on user instruction
* Updates node properties, connections, or process structure
## User Experience
The AI Analyst is accessed through the FlowX.AI Platform interface. Users can:
1. Select the desired capability from the AI Analyst section
2. Provide input in the appropriate format (text prompt, image, document, or BPMN file)
3. Add additional context or requirements if needed
4. Review the generated process in a visual BPMN format
5. Make adjustments through the "Edit" capabilities if necessary
6. Implement the final workflow in their project
The interaction follows a human-in-the-loop approach, allowing for quality checks and iterative improvements to ensure the generated process meets requirements.
## Anatomy
The AI Analyst architecture integrates multiple AI components to process different input types and generate standardized BPMN output.
The workflow begins with a user query that gets rewritten for clarity. A quality check determines if more information is needed, in which case a human provides additional context. Once sufficient information is available, the process generation component creates the BPMN workflow, which is then delivered to the user.
## Top rules for Designer AI Agent prompting
### ✅ DO the following
**Be specific about process goals:**
Clearly state what the process should accomplish, including start and end points, major milestones, and desired outcomes.
**Describe key participants and roles:**
Mention the different stakeholders or departments involved in the process and their responsibilities.
**Include decision points:**
Specify any conditional logic or decision points that might create branches in the workflow.
**Provide process constraints:**
Mention any business rules, regulations, or time constraints that should be incorporated into the process design.
### ⛔ **DON’T** do these
**Don't be too technical with BPMN terms:**
You don't need to use precise BPMN terminology; the AI Analyst can interpret business language and convert it to appropriate BPMN elements.
**Don't request subprocess details:**
The AI Analyst has no knowledge about subprocesses or other platform-specific information not provided in your input.
**Don't upload low-quality images:**
When generating from images, ensure the diagram is clear and readable to improve accuracy of the generated process.
**Don't expect platform-specific optimizations:**
The AI Analyst generates standard BPMN representations and may not incorporate FlowX.AI platform-specific features automatically.
# AI Assistant
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/ai-core/ai-assistant
Answers questions about the FlowX.AI platform as well as the bank itself based on uploaded documentation.
## Description
The AI Assistant provides intelligent documentation support for both the FlowX.AI platform and banking operations. By leveraging comprehensive documentation repositories, this agent delivers:
* **Accurate Responses:** Utilizes up-to-date product documentation for precise answers.
* **Consistency:** Ensures uniformity in responses across all user interactions.
* **Efficiency:** Reduces response time, enhancing user satisfaction.
* **Scalability:** Handles multiple user queries simultaneously without human intervention.
* **Streamlined Onboarding:** Provides instant, accurate answers to new users' questions, facilitating a smoother and faster onboarding process.
The assistant functions as a knowledge hub that can quickly search, retrieve, and synthesize information from both built-in FlowX.AI documentation and custom uploaded documents.
## Capabilities
The AI Assistant offers specialized content search functionality across different documentation sources:
### Search into FlowX Documentation
* **Purpose:** Provides answers to questions about the FlowX.AI platform, its features, configuration options, and best practices.
* **Documentation Coverage:** Accesses a comprehensive repository of approximately 1,000 documentation chunks (318,000 tokens).
* **Query Types:** Handles direct questions, keyword searches, or general topic inquiries related to the platform.
* **Performance:** Documentation ingestion takes approximately 12 seconds, with rapid response times for most queries.
* **Response Format:** Delivers answers in well-formatted markdown for readability.
## User Experience
The AI Assistant is easily accessible within the FlowX.AI Platform:
1. **Access Point:** Users can find the AI Assistant in the help section of the platform interface.
2. **Conversation Interface:** The assistant uses a chat-based interface where users can:
* Type natural language questions
* View responses in real-time with markdown formatting
* Follow up with additional questions in a conversational manner
3. **Document Upload:** Users can upload custom documentation through:
* The document upload interface in the assistant section
* Selecting files from their local system
* Providing URLs to documentation resources
4. **Saved Conversations:** Users can reference past interactions, with the assistant maintaining context between sessions.
## Anatomy
The AI Assistant architecture follows a sophisticated retrieval-augmented generation approach:
```mermaid
%%{
init: {
'theme': 'base',
'themeVariables': {
'primaryColor': '#f1774a',
'primaryTextColor': '#000',
'primaryBorderColor': '#000',
'lineColor': '#6b6b6b',
'secondaryColor': '#f3b861',
'tertiaryColor': '#000'
}
}
}%%
graph TD;
Start([Start]):::primary
Check_Prompt[Check Prompt]:::secondary
Rewrite_Query[Rewrite Query]:::secondary
Retrieve_Documents[Retrieve Documents]:::secondary
Grade_Documents[Grade Documents]:::secondary
Generate_Answer[Generate Answer]:::secondary
Generate_Greetings[Generate Greetings]:::secondary
END([END]):::primary
Start --> Check_Prompt;
Check_Prompt --> Rewrite_Query;
Check_Prompt --> Generate_Greetings;
Rewrite_Query --> Retrieve_Documents;
Retrieve_Documents --> Grade_Documents;
Grade_Documents --> Generate_Answer;
Generate_Answer --> END;
Generate_Greetings --> END;
%% Styling
classDef primary fill:#f1774a,stroke:#000,stroke-width:1px,color:#000;
classDef secondary fill:#f3b861,stroke:#000,stroke-width:1px,color:#000;
```
The workflow begins with query refinement, followed by document retrieval and relevance grading. Based on the quality of retrieved information, the system can generate an immediate response, process a more complex response asynchronously, or refuse to answer if no relevant documentation is found.
## Top rules for Designer AI Agent prompting
### ✅ DO the following
**Be specific with your questions:**
Clearly articulate what information you need about the FlowX.AI platform or banking operations. The more specific your question, the more precise the answer will be.
**Reference platform components:**
Mention specific features, modules, or functions you're asking about to help the assistant retrieve the most relevant documentation.
**Provide context:**
When asking follow-up questions, include relevant context or reference your previous inquiry to help the assistant understand the conversation flow.
**Ask about best practices:**
The assistant can provide guidance on recommended approaches and implementation strategies based on official documentation.
### ⛔ **DON’T** do these
**Don't ask about undocumented features:**
The assistant relies on available documentation and may not be able to provide information about unreleased or undocumented functionality.
**Don't expect troubleshooting of specific implementations:**
While the assistant can provide general guidance, it cannot debug custom code or analyze specific implementation issues without detailed context.
**Don't assume complete knowledge:**
If documentation doesn't cover a particular topic, the assistant may not be able to provide comprehensive information on that subject.
**Don't ask unrelated questions:**
The assistant is specialized in FlowX.AI platform and banking documentation and may not provide accurate answers on unrelated topics.
# Overview
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/ai-core/ai-core-overview
FlowX AI Core consists of a series of AI Agents that are connected to a centralized repository of LLMs.
AI in FlowX is built on a system of interconnected language models that's refreshingly LLM-agnostic (because who wants to be locked into a single model these days?). It uses an agent-based architecture where specialized AI workers tackle various tasks, all while staying connected to the data flowing through the FlowX Platform.
**Being LLM Agnostic, FlowX AI-Core offers deployment flexibility that would make a yoga instructor jealous:**
1. **The DIY Option**: Run your own custom LLM models through what we like to call our "model registry mechanism." Bring your own models, we'll handle the wiring.
2. **The Integration Play**: Deploy FlowX without our LLM models and connect to your internal LLMs that support the OpenAI API format. Just point our agents to your endpoint via `ENV_VARS` and you're good to go.
3. **The Cloud Route**: Connect to public LLM providers like OpenAI, Google, Microsoft, Anthropic, IBM, or whatever new AI company popped up while you were reading this doc.
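For the integration option, any server that speaks the OpenAI API format can be targeted. The sketch below builds an OpenAI-compatible chat-completion request from environment variables; the variable names (`LLM_BASE_URL`, `LLM_API_KEY`, `LLM_MODEL`) and endpoint are illustrative assumptions, not the actual FlowX `ENV_VARS`:

```python
import json
import os

# Illustrative only: the env var names and endpoint below are assumptions,
# not the real FlowX configuration. Any OpenAI-compatible server exposing
# /v1/chat/completions would accept a payload with this shape.
def build_inference_request(prompt: str) -> dict:
    base_url = os.environ.get("LLM_BASE_URL", "https://llm.internal.example/v1")
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('LLM_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": os.environ.get("LLM_MODEL", "internal-llm"),
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

request = build_inference_request("Summarize this process definition.")
print(request["url"])
```

Pointing the agents at a different provider then only requires changing the environment variables, not the calling code.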
Access to the LLM can be routed through a proxy or gateway, as long as the AI agents are granted access to create inferences.
Using AI-Core, enterprises can save time, improve quality, and enhance creativity. By providing users with relevant and diverse suggestions, FlowX helps users generate content much faster.
Agents in FlowX were built around specific use cases to read, understand, and process FlowX data, offering assisted development similar to GitHub Copilot.
***
## Access AI in FlowX
In FlowX, one shortcut rules them all: Cmd/CTRL+K. Wherever you are in our platform, you can bring up the command palette by using the same keyboard shortcut.
Cmd/CTRL+K summons the FlowX Command Palette, your shortcut to getting things done fast.
A **command palette is a user interface** (UI) element that's basically a search bar with superpowers—giving you quick access to commands without clicking through endless menus. It narrows down options as you type, almost like it's reading your mind (but less creepy).
Not a keyboard shortcut fan? Just look for the "✨" icon, which marks the spots where AI features are hiding throughout the interface.

FlowX Command creates a central command headquarters where all platform actions live together in harmony. It simplifies the mental model of the platform from "where was that feature again?" to "let me just ask for it." Command palettes let users zip through tasks like they've been using the platform for years, even if they started yesterday—which explains why they're enjoying a renaissance in modern UIs.
# AI Designer
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/ai-core/ai-designer
Streamline UI design by converting natural language prompts, Figma Designs or sketches into complete UI layouts with validations and data models.
## Description
The AI Designer is a powerful tool that transforms design concepts into functional UI components for the FlowX.AI platform. Whether you start with a text description, an image of a design, or a form structure, the AI Designer can generate complete, interactive UI layouts that are immediately usable in your applications. It also handles mapping UI elements to your data model and can modify existing designs based on natural language instructions, significantly accelerating the UI development process.
## Capabilities
The AI Designer offers two main categories of capabilities:
### Generate New UI
1. **Generate UI from Text**
* Creates complete UI layouts based on natural language descriptions
* Interprets design requirements and converts them into structured UI elements
* Maps generated UI components to data model entities when available
* Produces JSON representations that are automatically parsed into FlowX.AI components
* Validates that prompts are UI-related before proceeding with generation
2. **Generate UI from Image**
* Extracts UI designs from uploaded images (sketches, mockups, screenshots)
* Works with hand-drawn sketches, Figma designs, or other visual UI representations
* Processes complex images by splitting them into sections for parallel analysis
* Reconstructs the visual design as functional FlowX.AI UI components
* Supports additional textual context to guide the interpretation
3. **Generate UI from Form**
* Creates structured form interfaces based on form descriptions or requirements
* Automatically adds appropriate validations for form fields
* Organizes form elements in a logical, user-friendly layout
* Connects form fields to the corresponding data model properties
### Edit Existing UI
1. **Edit UI from Text**
* Modifies specific containers or cards in existing UI designs
* Updates layouts, components, or properties based on natural language instructions
* Preserves the structure and functionality of unmodified UI elements
* Maintains data model connections while implementing requested changes
* Validates that edit requests are UI-related before applying modifications
## User Experience
The AI Designer is accessed through the FlowX.AI Platform interface. Users can:
1. Access the AI Designer from the platform's design tools section
2. Select the desired capability (Generate or Edit)
3. Provide input in the appropriate format:
* Text description for generating or editing UI
* Image upload for visual-based UI generation
* Form structure for form-specific UI creation
4. Optionally link to an existing data model to enable automatic field mapping
5. Preview the generated UI components in real-time
6. Make adjustments through the "Edit" capability if necessary
7. Implement the final UI components in their application
The agent performs validation checks to ensure inputs are appropriate for UI generation, providing helpful feedback when adjustments are needed.
## Anatomy
The AI Designer integrates visual recognition and natural language processing to transform various inputs into structured UI representations.
```mermaid
%%{
init: {
'theme': 'base',
'themeVariables': {
'primaryColor': '#f1774a',
'primaryTextColor': '#000',
'primaryBorderColor': '#000',
'lineColor': '#6b6b6b',
'secondaryColor': '#f3b861',
'tertiaryColor': '#bababa'
}
}
}%%
graph TD;
Start([Start]):::primary
Check_Prompt[Check Prompt]:::secondary
Classify_Image[Classify Image]:::tertiary
Edit_Interface[Edit Interface]:::tertiary
Design_UI[Design UI]:::secondary
Map_to_Data_Model[Map to Data Model]:::secondary
END([END]):::primary
subgraph Parallel_Processing [Parallel Processing]
Extract_Sections[Extract Sections]:::tertiary
Component_Identification[Component Identification]:::tertiary
Map_Components[Map Components]:::tertiary
Grid_Correction[Grid Correction]:::tertiary
Validate_Layout[Validate Layout]:::tertiary
end
Start --> Check_Prompt;
Start --> Classify_Image;
Check_Prompt --> Edit_Interface;
Check_Prompt --> Design_UI;
Edit_Interface --> Design_UI;
Classify_Image --> Parallel_Processing;
Parallel_Processing --> Design_UI;
Design_UI --> Map_to_Data_Model;
Map_to_Data_Model --> END;
%% Styling
classDef primary fill:#f1774a,stroke:#000,stroke-width:1px,color:#000;
classDef secondary fill:#f3b861,stroke:#000,stroke-width:1px,color:#000;
classDef tertiary fill:#bababa,stroke:#000,stroke-width:1px,color:#000;
```
The workflow begins with user input being categorized and processed based on its type. All inputs undergo validation to ensure they describe UI elements. For valid requests, the UI Generator creates the appropriate components, which are then mapped to the data model if available. The final output is compiled into a JSON representation that the FlowX.AI platform can interpret and display as a functional UI.
## Top rules for Designer AI Agent prompting
### ✅ DO the following
**Specify a type of screen:**
This means specifying a signup screen, a profile page, or anything else that you want to add to your design.
**Add extra context:**
If you are designing a banking app UI and want a screen solely about expenses, include this in your prompt. For example: ‘`An expenses page that tracks a list of expenses, including merchant, value, and transaction date`’. This ensures a tailored screen generation focused on expense-related content.
**Describe user interactions:**
Explain how users should interact with the UI, including any specific workflows, validation requirements, or dynamic behaviors the UI should support.
**Have fun and be creative:**
Most importantly, unleash creativity in your prompts. See what you can generate that will make your UI design stand out from the rest. The screen designer agent has a multitude of capabilities, so make sure to try them out.
### ⛔ **DON’T** do these
**Don’t specify colors:**
The single-screen generator cannot create a new design screen in a different color. However, it is easy to switch up the colors of your generated screen using the theming feature.
**Don’t describe multiple screens:**
The screen generator can only generate one screen at a time. If you use a prompt referring to two different screens, only one will be generated.
**Don’t mention a different device:**
Our AI agents don’t design for specific devices. If you want to see how your design looks on a different device, open the generated design in the UI Designer and preview it on different platforms (Mobile / Web).
**Don't expect platform-specific knowledge:**
The AI Designer has no knowledge about other nodes or information in your project beyond what you provide in your prompt or the associated data model.
# AI Developer
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/ai-core/ai-developer
Generate specific business rules, or code expressions using natural language and automate the code necessary to implement them.
## Description
The AI Developer is an intelligent agent that transforms natural language descriptions into functional code. It helps developers generate business rules, JavaScript expressions, and test data without requiring manual coding. The agent can also edit existing code, provide explanations, and translate between programming languages, significantly accelerating the development process.
## Capabilities
The AI Developer offers two main categories of capabilities:
### Generate New Code
1. **Generate Business Rule**
* Transforms natural language descriptions into code using the data model as context
* Supports JavaScript, Python, and MVEL
* Produces clean, functional business rule code based on text prompts
2. **Generate Business Rule from Document**
* Extracts business rule definitions from uploaded documents
* Identifies relevant text on each page that describes business rules
* Summarizes and presents the information to users in a dropdown
* Generates corresponding code based on the selected rule description
3. **Generate (un)Hide JavaScript Expression**
* Creates JavaScript expressions for conditional visibility of elements
* Uses the data model as context for proper variable references
4. **Generate Computed JavaScript Expression**
* Produces JavaScript code for calculated or derived values
* Leverages the data model to ensure proper variable referencing
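As a concrete illustration, here is the kind of code the agent might produce for a prompt such as "apply a 10% discount for orders over 500 and flag them for review". This is a hypothetical output: the variable names are illustrative, and a real rule would read and write FlowX process data rather than a plain dictionary.

```python
# Hypothetical generated business rule (illustrative; a real FlowX rule
# would operate on process data rather than a plain dict).
def apply_discount_rule(order: dict) -> dict:
    total = order.get("total", 0)
    if total > 500:
        order["discount"] = round(total * 0.10, 2)  # 10% discount
        order["needsReview"] = True                 # flag for review
    else:
        order["discount"] = 0
        order["needsReview"] = False
    return order

print(apply_discount_rule({"total": 800}))  # discount 80.0, flagged for review
```

The same logic could be requested in JavaScript or MVEL by specifying the target language in the prompt.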
### Edit Existing Code
1. **Edit Existing Business Rule**
* Modifies existing code based on natural language instructions
* Uses the data model as context for accurate updates
2. **Fix / Explain Code**
* Fixes errors in business rules based on error logs
* Provides clear explanations of what the code does
3. **Translate Code Between Programming Languages**
* Converts business rules from one language to another
* Maintains functionality while adapting to language-specific syntax
## User Experience
The AI Developer is accessible through the FlowX.AI Platform interface. Users can:
1. Select the desired capability from the AI Developer section
2. Provide natural language instructions or upload relevant documents
3. Specify programming language preferences when applicable
4. Review and implement the generated code
For document-based generation, users can upload documents, view extracted rule descriptions in a dropdown, select the rule they want to generate, and receive the corresponding code.
## Anatomy
The AI Developer architecture integrates large language models with the FlowX.AI Platform to seamlessly convert natural language to code.
```mermaid
%%{
init: {
'theme': 'base',
'themeVariables': {
'primaryColor': '#f1774a',
'primaryTextColor': '#000',
'primaryBorderColor': '#000',
'lineColor': '#6b6b6b',
'secondaryColor': '#f3b861',
'tertiaryColor': '#000'
}
}
}%%
graph TD;
Start([Start]):::primary
subgraph DI_Platform_process [DI Platform process]
Classify_Document[Classify Document]:::secondary
Summarize[Summarize]:::secondary
Identify_process[Identify process]:::secondary
Extract_Prompt[Extract Prompt]:::secondary
end
Generate_Business_rule[Generate Business rule]:::secondary
END([END]):::primary
Start --> DI_Platform_process;
DI_Platform_process --> Generate_Business_rule;
Generate_Business_rule --> END;
%% Styling
classDef primary fill:#f1774a,stroke:#000,stroke-width:1px,color:#000;
classDef secondary fill:#f3b861,stroke:#000,stroke-width:1px,color:#000;
```
## Top rules for Developer AI Agent prompting
### ✅ DO the following
**Be specific with your requirements:**
Clearly state what the business rule should accomplish, including conditions, actions, and expected outcomes.
**Reference data model elements:**
Mention specific data fields or entities that the code should interact with to ensure proper context.
**Specify the programming language:**
Always indicate whether you need JavaScript, Python, or another supported language for your code generation.
**Provide edge cases:**
Mention special conditions or exceptions that the business rule should handle to ensure robust code.
### ⛔ **DON’T** do these
**Don't be vague:**
Avoid ambiguous descriptions that can lead to incorrect implementation. Be precise about the logic you need.
**Don't reference external systems:**
The AI Developer has no knowledge of external systems not defined in the provided context.
**Don't skip defining variables:**
Make sure to define all variables or data fields that your business rule will use.
**Don't expect platform-specific knowledge:**
The agent doesn't know about keys or information not defined in the data model or provided context.
# FlowX.AI Advancing Controller
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-components/advancing-controller
The Advancing Controller is a support service for the FlowX.AI Engine that enhances the efficiency of advancing operations. It facilitates equal distribution and redistribution of the workload during scale-up and scale-down scenarios.
To achieve its functionality, the Advancing Controller microservice utilizes database triggers in both PostgreSQL and OracleDB configurations.
A database trigger is a function that is automatically executed whenever specific events, such as inserts, updates, or deletions, occur in the database.
## Usage
The Advancing Controller service is responsible for managing and optimizing the advancement process in the PostgreSQL or OracleDB databases. It ensures efficient workload distribution, performs cleanup tasks, and monitors the status of worker pods.
If a worker pod fails, the service reassigns its work to other pods to prevent [**process instances**](../../projects/runtime/active-process/process-instance) from getting stuck.
It is essential that the Advancing Controller runs alongside the FlowX Engine: both microservices must be up and running concurrently for uninterrupted instance advancement and optimal performance.
## Configuration
For detailed instructions on how to set up the Advancing Controller microservice, refer to the following guide:
# FlowX.AI Events Gateway
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-components/events-gateway
The FlowX Events Gateway is a service that centralizes communication from backend to frontend through Server-Sent Events (SSE) messages.

The FlowX Events Gateway serves as a central component for handling and distributing events within the system:
### Event processing
The Events Gateway system is responsible for receiving and processing events from various sources, such as the [**FlowX Engine**](./flowx-engine) and [**Task Management**](../core-extensions/task-management/task-management-overview). It acts as an intermediary between these systems and the rest of the system components.
It is a reusable component and is also used in administration scenarios to provide feedback without a refresh—for example, [**misconfigurations**](../../../../release-notes/v4.1.0-may-2024/v4.1.0-may-2024#misconfigurations) in platform version 4.1, allowing an error to be displayed in real-time during configuration.
### Message distribution
The Events Gateway system reads messages from the Kafka topic (`messaging_events_topic`) and distributes them to relevant components within the system. It ensures that the messages are appropriately routed to the intended recipients for further processing.
### Event publication
The Events Gateway system plays a crucial role in publishing events to the frontend renderer (FE renderer). It communicates with the frontend renderer using `HTTP` via `WebFlux`. By publishing events, it enables real-time updates and notifications on the user interface, keeping the user informed about the progress and changes in the system.
It is designed to efficiently send updates to the frontend in the following scenarios:
* When reaching a specific [**User Task (UT)**](../../building-blocks/node/user-task-node), a notification is sent to ensure the corresponding screen is rendered promptly.
* When specific actions require data to be sent to the user interface from a node.
### Integration with Redis
[**Redis**](../../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis) plays an important role within the platform. It handles every message and is mandatory for the platform to function correctly, especially with the frontend. The platform relies on Redis to ensure that the messages are distributed efficiently and that the correct instance with the SSE connection pointer for the recipient is always reached.
The events-gateway system also interacts with Redis to publish events on a stream. This allows other components in the system to consume the events from the Redis stream and take appropriate actions based on the received events.
In these situations, the FlowX Engine places a message on Kafka for the Events Gateway. The Events Gateway then retrieves the message and stores it in Redis.
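The routing step can be sketched as follows. This is a simplified illustration, not the events-gateway implementation: the real service consumes from the `messaging_events_topic` Kafka topic and publishes entries to a Redis stream (`XADD`) so that the instance holding the recipient's SSE connection can deliver the update; the stream key and field names below are assumptions.

```python
import json

# Simplified sketch of the routing step. The real events-gateway consumes
# the engine's message from Kafka and writes it to a Redis stream via XADD;
# the per-recipient stream key and field names here are illustrative.
def to_stream_entry(engine_event: dict):
    recipient = engine_event["recipient"]      # user the SSE update targets
    stream_key = f"sse:events:{recipient}"     # one stream per recipient (assumption)
    fields = {
        "type": engine_event.get("type", "UPDATE"),
        "processInstanceId": str(engine_event.get("processInstanceId", "")),
        "payload": json.dumps(engine_event.get("payload", {})),
    }
    return stream_key, fields

key, fields = to_stream_entry(
    {"recipient": "user-42", "type": "SCREEN_RENDER",
     "processInstanceId": 7, "payload": {"node": "UT-1"}}
)
print(key)  # sse:events:user-42
```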
In summary, the events-gateway system acts as a hub for event processing, message distribution, and event publication within the system. It ensures efficient communication and coordination between various system components, facilitating real-time updates and maintaining system consistency.
For more details about how to configure events-gateway microservice, check the following section:
# FlowX.AI Engine
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-components/flowx-engine
The engine is the core of the platform: it is the service that runs instances of the process definitions, generates UI, and communicates with the frontend as well as with custom integrations and plugins. It keeps track of all currently running process instances and makes sure the process flows run correctly.

## Orchestration
Creating and interacting with process instances is pretty straightforward, as most of the interaction happens automatically and is handled by the engine.
The only points that need user interaction are starting the process and executing user tasks on it (for example when a user fills in a form on the screen and saves the results).
# FlowX.AI Audit
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/audit
The Audit service provides a centralized location for all audit events. The following details are available for each event.
* **Timestamp** - the date and time the event occurred; timestamps are displayed in reverse chronological order
* **User** - the entity who initiated the event; this can be a username or a system
* **Subject** - the area or component of the system affected by the event
  * Process Instance
  * Token
  * Task
  * Exception
  * Process definition
  * Node
  * Action
  * UI Component
  * General Settings
  * Swimlane
  * Swimlane Permissions
  * Connector
  * Enumeration
  * Enumeration Value
  * Substitution Tag
  * Content Model
  * Language
  * Source System
  * Image
  * Font file
* **Event** - the specific action that occurred
  * Create
  * Update
  * Update bulk
  * Update state
  * Export
  * Import
  * Delete
  * Clone
  * Start
  * Start with inherit
  * Advance
  * View
  * Expire
  * Message Send
  * Message Receive
  * Notification receive
  * Run scheduled action
  * Execute action
  * Finish
  * Dismiss
  * Retry
  * Abort
  * Assign
  * Unassign
  * Hold
  * Unhold
* **Subject identifier** - the name related to the subject; there are different types of identifiers based on the selected subject
* **Version** - the version of the process definition at the time of the event
* **Status** - the outcome of the event (e.g. success or failure)

## Filtering
Users can filter audit records by event date and by selecting specific options for User, Subject, and Subject Identifier.
* Filter by event date

* **User** - single selection, type at least 4 characters
* **Subject** - single selection
* **Subject identifier** - exact match
## Audit log details
To view additional details for a specific event, users can click the eye icon to the right of the event in the list. The audit log details window includes the following information:
* **Event** - the specific action that occurred
* **URL** - the URL associated with the event
* **Body** - any additional data or information related to the event

More details on how to deploy the Audit microservice:
# Configuration parameters
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/configuration-parameters
Configuration parameters allow projects to be dynamic, flexible, and environment-specific. They enable managing variables and values that change across deployment environments (e.g., Development, QA, Production), without requiring hardcoded updates. This feature is particularly valuable for managing sensitive information, environment-specific settings, and configurations for integrations.
Configuration Parameters are defined per **project version** or **library version**, ensuring consistency across builds while allowing for environment-specific overrides. These parameters can be used across various components, such as business rules, UI elements, integration designer, and gateways.
Libraries can define configuration parameters to facilitate testing of processes or workflows that depend on them. However, unlike project-level configuration parameters, libraries do not support direct overrides.
* Libraries provide default values for configuration parameters.
* Projects using these libraries can override the provided values as required.

***
## Default and override values
* **Default Value**: Defined during parameter creation and included in the application build. It serves as the fallback value when no environment-specific override is set.
* **Override Value**: A value defined post-deployment for a specific environment. Overrides take precedence during runtime.
* **Precedence Behavior** (process variables override):
  * If a process variable and a configuration parameter share the same name, the process variable's value takes precedence during runtime.
  * If the process variable is `null` or undefined, the configuration parameter's value is used as a fallback.
* **Subprocess Behavior**:
* When a value is passed to a subprocess, the subprocess uses the resolved value (process variable or configuration parameter) from the parent process.
* If a new value is defined within the subprocess, it is appended back to the parent process.
To avoid conflicts, use distinct names for process variables and generic parameters whenever possible.
**Exception in Business Rules**:\
In business rules, values are taken directly from the map of configuration parameters without applying the above fallback logic.
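The precedence rules above can be sketched as a small resolver. This is an illustrative model of the documented fallback behavior, not the engine's actual implementation:

```python
# Illustrative resolver for the documented precedence rules; not the
# engine's implementation. A process variable wins unless it is
# null/undefined, then the environment-specific override, then the default.
def resolve(name, process_vars, overrides, defaults):
    value = process_vars.get(name)
    if value is not None:          # process variable takes precedence
        return value
    if name in overrides:          # post-deployment override for this environment
        return overrides[name]
    return defaults.get(name)      # fall back to the value from the build

defaults = {"officeEmail": "officeEmail@office.com"}
overrides = {"officeEmail": "qa-office@office.com"}  # set post-deployment for QA

# Process variable is None, so the QA override is used:
print(resolve("officeEmail", {"officeEmail": None}, overrides, defaults))
```

Note that, per the exception above, business rules read configuration parameters directly and do not apply this fallback logic.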
For details on configuring runtime overrides, see:
***
## Types of configuration parameters
Configuration parameters are defined as **key-value pairs** and support the following types:
### Value
A static value directly used by the project. Suitable for settings that do not change across environments.
* **Use Cases**:
* Feature flags to toggle functionality (e.g., enabling/disabling insurance sales in a customer dashboard).
* Email addresses for notification recipients.
* Homebank redirect URLs for specific processes.
* **Example**:
* **Key**: `officeEmail`
* **Type**: `value`
* **Value**: `officeEmail@office.com`

***
### Environment variable
References an external variable set by the DevOps team. These variables are defined per environment and referenced using a **naming convention**.
* **Use Cases**:
* Environment-specific API base URLs.
* Dynamic configuration of services or integrations.
* **Example**:
* **Key**: `baseUrl`
* **Type**: `environment variable`
* **Value**: `BASE_URL` (naming convention pointing to an externally defined value)
Configuration details:
| Key | Type | Value | Description |
| ------- | ---------------------- | ---------- | ------------------------------------------------------------ |
| baseUrl | `environment variable` | `BASE_URL` | A reference to the base URL configured externally by DevOps. |
**Example values for different environments**
| Environment | External Variable Name | Actual Value |
| ----------- | ---------------------- | ----------------------------- |
| Development | `BASE_URL` | `https://dev.example.com/api` |
| QA | `BASE_URL` | `https://qa.example.com/api` |
| Production | `BASE_URL` | `https://api.example.com` |
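On the infrastructure side, the same variable name simply receives a different value per environment. The snippet below is an illustrative Kubernetes-style fragment, not a FlowX deployment manifest:

```yaml
# Illustrative Kubernetes-style snippet (not a FlowX deployment manifest):
# DevOps sets the same variable name in each environment, and the project
# references it through the `baseUrl` configuration parameter.
env:
  - name: BASE_URL
    value: "https://qa.example.com/api"   # differs per environment
```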
***
### Secret environment variable
Used for sensitive data like passwords, API keys, or credentials. These values are securely managed by DevOps and referenced using a **naming convention**.
* **Use Cases**:
* Passwords or tokens for integrations.
* Secure configuration of external services in the integration designer.
* **Example**:
* **Key**: `dbPassword`
* **Type**: `secret environment variable`
* **Value**: `DB_PASSWORD`
***
## Use cases for configuration parameters
Configuration parameters simplify the management of environment-specific or dynamic settings across multiple project components:
1. **Business Rules**:
* Define dynamic logic based on parameters such as feature toggles or environment-specific conditions.
2. **UI Elements**:
* Configure content dynamically based on the environment (e.g., URLs for redirects, conditional features).
* **Note**: Ensure variables referenced in UI components (e.g., for dynamic content or URLs) are uniquely named to avoid unexpected overrides by process variables.
3. **Integration Designer**:
* Reference the token parameter.


4. **Gateways**:
* Dynamically manage routing and decision-making logic using environment-specific parameters.
## Adding a configuration parameter
To add a new configuration parameter:
1. Navigate to **Your Project** → **Configuration Parameters**.
2. Click **New Parameter** and provide the following:
* **Key**: Name of the parameter.
* **Type**: Select `value`, `environment variable`, or `secret environment variable`.
* **Value**:
* For `value`: Enter the static value.
* For `environment variable` or `secret environment variable`: Enter the agreed naming convention.
3. Click **Save** to include the parameter in the application build.

***
## Technical notes and best practices
### Security
* **Sensitive Values (ENV\_SECRET)**:
* Store and manage securely.
* Do not display in the frontend or expose via public APIs.
* Avoid logging sensitive information.
### Environment-specific updates
* Environment variable updates (`ENV`/`ENV_SECRET`) are managed by DevOps.
* Updates may require service restarts unless a caching or real-time mechanism (e.g., change streams) is implemented.
### Reserved keys
* Certain keys are reserved for system use (e.g., `processInstanceId`). Avoid using reserved keys for custom configurations.
### Variable naming
* **Avoid Shared Names**:
* Do not use the same name for configuration parameters and process variables to prevent unintentional overrides.
* **Fallback Logic Awareness**:
* Understand that `null` or `undefined` process variables will default to the corresponding configuration parameter's value during runtime.
* **Subprocess Behavior**:
* Variables in subprocesses are appended back to parent processes with their current state. Plan naming conventions and data flows accordingly.
When designing processes, ensure that variables in subprocesses and parent processes do not conflict with configuration parameter names. Test these interactions in scenarios where variables are dynamically assigned or left undefined.
***
# FlowX.AI CMS
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/content-management/content-management
The FlowX.AI Headless Content Management System (CMS) is a core component of the FlowX.AI platform, designed to enhance the platform's capabilities with specialized functionalities for managing taxonomies and diverse content types.

## Key features
The FlowX.AI CMS offers the following features:
Manage and configure enumerations for various content types
Utilize tags to dynamically insert content values
Support multiple languages for content localization
Integrate with various external source systems
Organize and manage media assets
## Deployment and integration
The CMS can be rapidly deployed on your chosen infrastructure, preloaded with necessary taxonomies or content via a REST interface, and integrated with the FlowX Engine using Kafka events.
For deployment and configuration, refer to the:
## Using the CMS service
Once the CMS is deployed in your infrastructure, you can define and manage custom content types, such as lists with different values based on external systems, blog posts, and more.
### Kafka integration
You can use Kafka to translate or extract content values based on your defined languages and source systems.
#### Request content
Manage content retrieval messages between the CMS and the FlowX Engine using the following Kafka topics:
| Environment Variable | Default FLOWX.AI value (Customizable) |
| ----------------------------------- | ------------------------------------------------------------------ |
| KAFKA\_TOPIC\_REQUEST\_CONTENT\_IN | ai.flowx.dev.plugin.cms.trigger.retrieve.content.v1 |
| KAFKA\_TOPIC\_REQUEST\_CONTENT\_OUT | ai.flowx.dev.engine.receive.plugin.cms.retrieve.content.results.v1 |
* `KAFKA_TOPIC_REQUEST_CONTENT_IN`: This variable defines the topic used by the CMS to listen for incoming content retrieval requests.
* `KAFKA_TOPIC_REQUEST_CONTENT_OUT`: This variable defines the topic where the CMS sends the results of content retrieval requests back to the FlowX Engine.
You can find the defined topics in two ways:
1. In the FlowX.AI Designer: Go to **Platform Status** -> **cms-core-mngt** -> **kafkaTopicsHealthCheckIndicator** -> **Details** -> **Configuration** -> **Topic** -> **Request** -> **Content** (use the `in` topic).

2. Alternatively, check the CMS microservice deployment for the `KAFKA_TOPIC_REQUEST_CONTENT_IN` environment variable.
#### Example: Request a label by language or source system code
To translate custom codes into labels using the specified [language](../../plugins/languages) or [source system](./source-systems), use the following request format. For instance, when extracting values from a specific enumeration for a UI component:
Various external systems and integrations might use different labels for the same information. In the processes, it is easier to use the corresponding code and translate this into the needed label when necessary: for example when sending data to other integrations, when generating documents, etc.
#### Request content request
Add a [**Send Message Task** (kafka send event)](/4.1.x/docs/building-blocks/node/message-send-received-task-node) and configure it to send content requests to the FlowX.AI Engine.
The following values are expected in the request body of your Send Message Task node:
* At least one of `language` and `sourceSystem` should be defined (if you only need the `sourceSystem` to be translated, you can leave `language` empty and vice versa, but they cannot both be empty)
* A list of `entries` and their `codes` to be translated
**Expected Request Body:**
```json
{
  "language": "en",
  "sourceSystem": "FlowX",
  "entries": [
    {
      "codes": [
        "ROMANIA",
        "BAHAMAS"
      ],
      "contentDescription": {
        "name": "country", // the name of the enumeration used in this example
        "version": 1,      // optional, only if you want to extract from a specific version of your enumeration
        "draft": true      // optional
      }
    }
  ]
}
```

The `version` and `draft` fields are optional. If not specified, the latest published content will be used.
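As a sketch of the validation rule above, the request body could be assembled by a small helper. The types and the `buildContentRequest` function below are hypothetical illustrations, not part of the FlowX.AI SDK:

```typescript
// Hypothetical helper: assembles a content retrieval request and enforces
// that `language` and `sourceSystem` are not both empty.
interface ContentDescription {
  name: string;      // the name of the enumeration
  version?: number;  // optional: a specific version of the enumeration
  draft?: boolean;   // optional: include draft content
}

interface ContentEntry {
  codes: string[];
  contentDescription: ContentDescription;
}

interface ContentRequest {
  language?: string;
  sourceSystem?: string;
  entries: ContentEntry[];
}

function buildContentRequest(
  opts: { language?: string; sourceSystem?: string },
  entries: ContentEntry[]
): ContentRequest {
  if (!opts.language && !opts.sourceSystem) {
    throw new Error("At least one of language or sourceSystem must be defined");
  }
  return { language: opts.language, sourceSystem: opts.sourceSystem, entries };
}
```

The serialized result of such a helper is what the Send Message Task publishes to the request topic.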
#### Request content response
Add a **Receive Message Task** to handle the response from the CMS service. Configure it to listen to the topic where the Engine sends the response, for example, the `ai.flowx.updates.contents.values.v1` topic.

**Response Message Structure**:
```json
{
  "entries": [
    {
      "contentName": "country",
      "code": "ROMANIA",
      "label": "Romania",
      "translatedCode": "ROMANIA-FlowX"
    },
    {
      "contentName": "country",
      "code": "BAHAMAS",
      "label": "Bahamas",
      "translatedCode": "BAHAMAS-FlowX"
    }
  ],
  "error": null
}
```
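A consumer of this response might flatten the entries into a lookup table. The sketch below (hypothetical types and helper, not a FlowX.AI API) keys labels by content name and code:

```typescript
// Hypothetical consumer-side helper that turns response entries into a
// "contentName.code" -> label lookup map.
interface ResponseEntry {
  contentName: string;
  code: string;
  label: string;
  translatedCode: string;
}

interface ContentResponse {
  entries: ResponseEntry[];
  error: string | null;
}

function toLabelMap(response: ContentResponse): Map<string, string> {
  if (response.error) throw new Error(response.error);
  const map = new Map<string, string>();
  for (const e of response.entries) {
    // key by contentName + code so several enumerations can share one map
    map.set(`${e.contentName}.${e.code}`, e.label);
  }
  return map;
}
```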
Next, we will change the system language and modify our process to display translations dynamically on another key on a separate screen.
# Enumerations
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/content-management/enumerations
Enumerations allow you to manage a collection of predefined values that can be used within UI components or templates. These values can be tailored for specific source systems or languages.

## Overview
Enumerations help you standardize and manage lists of values that are displayed in your applications. Each enumeration consists of one or more values with both an internal identifier (the code) and display strings (labels) for different languages. You can also establish hierarchies by creating child collections—for example, defining states within a country.
## User interface overview
The **Enumerations** tab is divided into the following sections:
### Header
* **New Enumeration** - Click this button to create a new enumeration.
* **Contextual Menu (three dots)** - Provides options for **Import** and **Export**.

### Main table
* **Name** - Displays the name of each enumeration.
* **Edited At** - Indicates the last update timestamp for each enumeration.
* **View Details** - Allows you to update the enumeration name.

***
## Enumeration entry details
When you open an enumeration entry, define the following properties:
1. **Code** - Not displayed in the end-user interface; used to ensure value uniqueness.
2. **Labels** - These are the display strings shown to end users in various languages.
3. **External source system codes** - Specifies the corresponding code for each external system that consumes the data. Connectors use these codes for data validation.

### Adding a new enumeration
To create a new enumeration:
1. **Open your project:** Launch **FlowX Designer** and open your project.
2. **Access Enumerations:** Select **Enumerations** from the **CMS** tab.
3. **Name the Enumeration:** Enter a suggestive name and click **Add**.

### Configuring an enumeration
After creating an enumeration, add values to it by configuring:
* **Code:** A unique identifier (not visible to end-users).
* **Labels:** The display string for each language.
* **Source Systems:** The code values assigned for each external system consuming the data.

You can find the list of available source systems under **Projects → Integrations → Systems**.

### Creating a child collection
Enumerations can be organized hierarchically by defining child values for each entry. For example, you might define states as children under a country enumeration. Hierarchical structures enable cascading selections in the user interface; for instance, choosing a country in one dropdown can filter the states available in another.
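The cascading behavior can be sketched as a simple filter over a hierarchical value list. The in-memory shape below is a hypothetical illustration (not the CMS storage format) of the parent/child selection step:

```typescript
// Hypothetical shape for a hierarchical enumeration value with children.
interface EnumValue {
  code: string;
  label: string;
  children?: EnumValue[];
}

// Given a selected parent code, return the child options for the next dropdown.
function childOptions(values: EnumValue[], parentCode: string): EnumValue[] {
  const parent = values.find(v => v.code === parentCode);
  return parent?.children ?? [];
}

// Sample data: countries with their states as child collections.
const countries: EnumValue[] = [
  {
    code: "USA",
    label: "United States",
    children: [
      { code: "CA", label: "California" },
      { code: "NY", label: "New York" },
    ],
  },
  { code: "ROMANIA", label: "Romania", children: [{ code: "CJ", label: "Cluj" }] },
];
```

Selecting `"USA"` in the first dropdown would leave only its child states as options in the second.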


### Importing/exporting an enumeration
You can import or export enumerations using the following formats:
* **ZIP**
* **CSV**

Every enumeration (root or child) must contain at least one **content value** (i.e., at least one label is provided) before attempting an import or export. Remember, the term *value* here refers to the display content for end-users, not the internal code.
### Enumerations example
Consider an example for **Activity Domain Companies**. You can use hierarchies to organize related domains and activities.
#### **Activity Domain Companies → Agriculture forestry and fishing:**
* **Agriculture, hunting, and related services →**
Cultivation of non-perennial plants:
* Cultivation of cereals (excluding rice), leguminous plants and oilseeds
* Cultivation of rice
* Growing of vegetables and melons, roots and tubers
* Cultivation of tobacco
Cultivation of plants from permanent crops:
* Cultivation of grapes
* Cultivation of seeds and stone fruits
* Cultivation of oil seeds
Animal husbandry:
* Raising of dairy cattle
* Raising of other cattle
* Raising of horses and other equines
* **Forestry and logging →**
Forestry and other forestry activities:
* Forestry and other forestry activities
Logging:
* Logging
Collection of non-wood forest products from spontaneous flora:
* Collection of non-wood forest products from spontaneous flora
* **Fisheries and aquaculture →**
Fishing:
* Sea fishing
* Freshwater fishing
Aquaculture:
* Maritime aquaculture
* Freshwater aquaculture
This is the output after adding all the lists/collections from above:

# Media library
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/content-management/media-library
The media library serves as a centralized hub for managing and organizing various types of media files, including images, GIFs, and more. It encompasses all the files that have been uploaded to the **processes**, providing a convenient location to view, organize, and upload new media files.

You can also upload an image directly to the Media Library on the spot when configuring a process using the [**UI Designer**](/4.0/docs/building-blocks/ui-designer/ui-designer). More information [**here**](/4.0/docs/building-blocks/ui-designer/ui-component-types/image).
## Uploading a new asset
To upload an asset to the Media Library, follow the next steps:
1. Open **FlowX Designer**.
2. Go to **Content Management** tab and select **Media Library**.
3. Click **Add new item**; the following details are displayed:
* **Upload item** - opens a local file browser
* **Key** - must be unique and cannot be changed afterwards

4. Click the **Upload item** button and select a file from your local browser.
5. Click the **Upload item** button again to upload the asset.
* **Supported image formats**: PNG, JPEG, JPG, GIF, SVG, or WebP, with a maximum file size of 1 MB.
* **Supported files**: PDF documents, with a maximum file size of 10 MB.
## Displaying assets
Users can preview all the uploaded assets just by accessing the **Media Library**.
You have the following information about assets:
* Preview (thumbnail 48x48)
* Key
* Format ("-" for unknown format)
* Size
* Edited at
* Edited by

## Searching assets
You can search for an asset by its key (full or substring).

## Replacing assets
You can replace the item stored under a specific key (this will not break references in process definitions).

## Referencing assets in UI Designer
You have the following options when configuring image components using [UI Designer](../../../building-blocks/ui-designer/ui-designer):
* Source Location - here you must select **Media Library** as source location
* Image Key
* **Option 1**: a dropdown with image keys - you can type to filter options or select from the initial list
* **Option 2**: a popup with image thumbnails and keys - you can type to filter options or select from the initial list

More details on how to configure an image component using UI Designer - [**here**](/4.0/docs/building-blocks/ui-designer/ui-component-types/image).
## Icons
The Icons feature allows you to personalize the icons used in UI elements. By uploading SVG files through the Media Library and marking them, you can choose icons from the available list in the UI Designer.

When selecting icons in the UI Designer, only SVG files marked as icons in the Media Library will be displayed.
To ensure optimal visual rendering and alignment within your UI elements, it is recommended to use icons with small sizes such as: 16px, 24px, 32px.
Using icons specifically designed for these sizes helps maintain consistency and ensures a visually pleasing user interface. It is advisable to select icons from icon sets that provide these size options or to resize icons proportionally to fit within these dimensions.
Icons are displayed or rendered at their original, inherent size.
### Customization
Content-specific icons pertain to the content of UI elements, such as icons for [input fields](../../../building-blocks/ui-designer/ui-component-types/form-elements/input-form-field) or [send message buttons](../../../building-blocks/ui-designer/ui-component-types/buttons). These icons are readily accessible in the [UI Designer](../../../building-blocks/ui-designer/).

More details on how to add icons on each element, check the sections below:
## Export/import media assets
The import/export feature allows you to import or export media assets, enabling easy transfer and management of supported types of media files.

### Import media assets
Use this function to import media assets of various supported types. It provides a convenient way to bring in images, videos, or other media resources.
### Export all
Use this function to export all media assets stored in your application or system. The exported data will be in JSON format, allowing for easy sharing, backup, or migration of the media assets.
The exported JSON structure will resemble the following example:
```json
{
  "images": [
    {
      "key": "cart",
      "application": "flowx",
      "filename": "maxresdefault.jpg",
      "format": "jpeg",
      "contentType": "image/jpeg",
      "size": 39593,
      "storagePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/cart/1681982352417_maxresdefault.jpg",
      "thumbnailStoragePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/cart/1681982352417_thumbnail_maxresdefault.jpg"
    },
    {
      "key": "pizza",
      "application": "flowx",
      "filename": "pizza.jpeg",
      "format": "jpeg",
      "contentType": "image/jpeg",
      "size": 22845,
      "storagePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/pizza/1681982352165_pizza.jpeg",
      "thumbnailStoragePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/pizza/1681982352165_thumbnail_pizza.jpeg"
    }
  ],
  "exportVersion": 1
}
```
* `images` - an array containing one object per exported image
* `exportVersion` - the version number of the export format
* `key` - a unique identifier or name for the image; it differentiates images within the application
* `application` - the name or identifier of the application the image belongs to
* `filename` - the original filename of the uploaded image file
* `format` - the format or file extension of the image
* `contentType` - the MIME type of the image file; it specifies the type of data contained within the file
* `size` - the size of the image file in bytes
* `storagePath` - the URL or path where the original image file is stored and can be retrieved from
* `thumbnailStoragePath` - the URL or path where the thumbnail version of the image is stored and can be retrieved from
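Expressed as TypeScript types, the export structure could look like the sketch below; the `isMediaExport` guard is a hypothetical shape check one might run before importing such a file:

```typescript
// Types mirroring the exported JSON structure above (illustrative only).
interface MediaAsset {
  key: string;
  application: string;
  filename: string;
  format: string;
  contentType: string;
  size: number;
  storagePath: string;
  thumbnailStoragePath: string;
}

interface MediaExport {
  images: MediaAsset[];
  exportVersion: number;
}

// Minimal, hypothetical shape check for an import step.
function isMediaExport(value: unknown): value is MediaExport {
  const v = value as MediaExport;
  return (
    typeof v === "object" && v !== null &&
    Array.isArray(v.images) &&
    typeof v.exportVersion === "number" &&
    v.images.every(img => typeof img.key === "string" && typeof img.size === "number")
  );
}
```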
# Substitution tags
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/content-management/substitution-tags
Substitution tags are used to generate dynamic content across the platform. Like **enumerations**, substitution tags can be defined for each language configured for the solution.

On the main screen inside **Substitution tags**, you have the following elements:
* **Key**
* **Values** - strings that are used in the end-user interface, according to the language set for the generated solution
* **Edit** - button used to edit substitution tags
* **Delete** - button used to delete substitution tags
* **New value** - button used to add a new substitution tag
* **Breadcrumbs menu**:
* **Import**
* from JSON
* from CSV
* **Export**
* to JSON
* to CSV
* **Search by** - search function used to easily look for a particular substitution tag
### Adding new substitution tags
To add a new substitution tag, follow the next steps.
1. Go to **FlowX Designer** and select the **Content Management** tab.
2. Select **Substitution tags** from the list.
3. Click **New value**.
4. Fill in the necessary details:
* Key
* Languages
5. Click **Add** after you finish.

When working with substitution tags or other elements that use values from other languages defined in the CMS, the default values extracted at runtime are the ones marked by the default language.
### Getting a substitution tag by key
```swift
public func getTag(withKey key: String) -> String?
```
All substitution tags will be retrieved by the [**SDK**](../../../../sdks/angular-renderer) before starting the first process and will be stored in memory.
Whenever the container app needs a substitution tag value for populating the UI of the custom components, it can request the substitution tag using the method above, providing the key.
For example, substitution tags can be used to localize the content inside an application.
### Example
#### Localizing the app
You must first check and configure the FLOWX.AI Angular renderer to be able to replicate this example. Click [here](../../../../sdks/angular-renderer) for more information.
The `flxLocalize` pipe is found in the `FlxLocalizationModule`.
```typescript
import { FlxLocalizationModule } from 'flowx-process-renderer';
```
```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-dummy-component',
  template: ` {{ stringToLocalize | flxLocalize }} `,
})
export class DummyComponent {
  stringToLocalize: string = `@@localizedString`;
}
```
Strings that need to be localized must have the '**@@**' prefix, which the **flxLocalize** pipe uses to extract and replace the string with a value found in the substitution tags enumeration.
Substitution tags are retrieved when the first start process call is made and cached for subsequent start process calls.
# FlowX.AI Scheduler
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/scheduler
The Scheduler is part of the core extensions of the FlowX.AI platform. It can be easily added to your custom FlowX deployment to enhance the core platform capabilities with functionality specific to scheduling messages.
The service offers the possibility to schedule a message that you only need to process after a configured time period.
It can be quickly deployed on the chosen infrastructure and then connected to the **FLOWX.AI Engine** through Kafka events.
## Using the scheduler
After deploying the scheduler service in your infrastructure, you can start using it to schedule messages that you need to process at a later time.
One such example would be to use the scheduler service to expire processes that were started but haven't been finished.
First, check that the configured Kafka topics match the ones configured in the engine deployment.
For example, the engine topics `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_SET` and `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_STOP` **must** match the ones configured in the scheduler through the `KAFKA_TOPIC_SCHEDULE_IN_SET` and `KAFKA_TOPIC_SCHEDULE_IN_STOP` environment variables.
When a process is scheduled to expire, the engine sends the following message to the scheduler service (on the topic `KAFKA_TOPIC_SCHEDULE_IN_SET`):
```json
{
  "applicationName": "onboarding",
  "applicationId": "04f82408-ee66-4c68-8162-b693b06bba00",
  "payload": {
    "scheduledEventType": "EXPIRE_PROCESS",
    "processInstanceUuid": "04f82408-ee66-4c68-8162-b693b06bba00"
  },
  "scheduledTime": 1621412209.353327,
  "responseTopicName": "ai.flowx.process.expire.staging"
}
```
The scheduled time should be defined as `java.time.Instant`.
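`scheduledTime` above is epoch seconds with a fractional part, as produced by `java.time.Instant` serialization. A producer written in TypeScript (a hypothetical example; the engine itself handles this in Java) could derive it from a `Date`:

```typescript
// Convert a JavaScript Date (epoch milliseconds) into the fractional
// epoch-seconds representation used for java.time.Instant serialization.
function toInstantSeconds(date: Date): number {
  return date.getTime() / 1000;
}

// Example: schedule a message 30 minutes from now.
const scheduledTime = toInstantSeconds(new Date(Date.now() + 30 * 60 * 1000));
```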
At the scheduled time, the payload will be sent back to the response topic defined in the message, like so:
```json
{
  "scheduledEventType": "EXPIRE_PROCESS",
  "processInstanceUuid": "04f82408-ee66-4c68-8162-b693b06bba00"
}
```
If you don't need the scheduled message anymore, you can discard it by sending the following message (on the topic `KAFKA_TOPIC_SCHEDULE_IN_STOP`)
```json
{
  "applicationName": "onboarding",
  "applicationId": "04f82408-ee66-4c68-8162-b693b06bba00"
}
```
The `applicationName` and `applicationId` fields are used to uniquely identify a scheduled message.
For the steps needed to deploy and set up the service, refer to the setup guide.
# FlowX.AI Data Search
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/search-data-service
The Data Search service is a microservice that enables data searches within processes defined in the current or any other application in the platform. It facilitates the creation of processes capable of conducting searches and retrieving data by utilizing Kafka actions in tandem with Elasticsearch mechanisms.
Data Search service leverages Elasticsearch to execute searches based on indexed keys, using existing mechanisms.
## Using the Data Search service
### Use case
* Search for data within other processes
* Display results indicating where the search key was found in other processes
For our example, two process definitions are necessary:
* one process used to search data in another process - in our example ***"search\_process\_CDN"***

* one process where we look for data - in our example ***"add\_new\_clients"***

## Add data process example
Firstly, create a process where data will be added. Subsequently, the second process will be used to search for data in this initial process.

In the "Add Data Process Example" it's crucial to note that we add mock data here to simulate existing data within real processes.
Example of MVEL Business Rule:
```java
output.put("application", {
    "date": "22.08.2022",
    "client": {
        "identificationData": {
            "firstName": "John",
            "lastName": "Doe",
            "cityOfBirth": "Anytown",
            "primaryDocument": {
                "number": 123456,
                "series": "AB",
                "issuedCountry": "USA",
                "issuedBy": "Local Authority",
                "issuedAt": "01.01.2010",
                "type": "ID",
                "expiresAt": "01.01.2030"
            },
            "countryOfBirth": "USA",
            "personalIdentificationNumber": "1234567890",
            "countyOfBirth": "Any County",
            "isResident": true,
            "residenceAddress": {
                "country": "USA",
                "city": "Anytown",
                "street": "Main Street",
                "streetNumber": 123
            },
            "mailingAddress": {
                "country": "USA",
                "city": "Anytown",
                "street": "Main Street",
                "streetNumber": 123
            },
            "pseudonym": null
        }
    }
});
```
Now we can play with this process and create some process instances with different states.
## Search process example
Configure the "Search process" to search data in the first created process instances:
Create a process using the [**Process Designer**](../../building-blocks/process/process).
Add a [**Task node**](../../building-blocks/node/task-node) within the process. Configure this node and add a business rule if you want to customize the display of results, e.g:
```java
output.put("searchResult", {"result": []});
output.put("resultsNumber", 0);
```
For displaying results in the UI, you can also consider utilizing [**Collections**](../../building-blocks/ui-designer/ui-component-types/collection/) UI element.
Add a **user task** and configure a send event using a [**Kafka send action**](../../building-blocks/node/message-send-received-task-node#send-message-task). Configure the following parameters:
The Kafka topic for the search service requests (defined by the `KAFKA_TOPIC_DATA_SEARCH_IN` environment variable in your deployment).

```json
{
  "searchKey": "application.client.identificationData.lastName",
  "value": "12344",
  "processStartDateAfter": "YYYY-MM-DDTHH:MM:SS",
  "processStartDateBefore": "YYYY-MM-DDTHH:MM:SS",
  "processDefinitionNames": ["processDef1", "processDef2"],
  "states": ["STARTED", "FINISHED", "ONHOLD"],
  "applicationIds": ["8dd20844-2dc5-4445-83a5-bbbcc82bed5f"]
}
```
* **searchKey** - Represents the process key used to search data stored in a process.
Indexing this key within the process is crucial for the Data Search service to locate it. To enable indexing, navigate to your desired application, then choose the process definition and access **Process Settings → Data Search**.

❗️ Keys are indexed automatically when the process status changes (e.g., created, started, finished, failed, terminated, expired), when swimlanes are altered, or when stages are modified. To ensure immediate indexing, select the 'Update in Task Management' option either in the **node configuration** or within **Process Settings → General** tab.
* **value** - The value to search for; in our example, it comes from a dynamic process key bound to an input element that stores the data entered by the user.

* **states** - `["STARTED", "FINISHED", "ONHOLD", "..."]` - If omitted, the search will include all statuses.
Check the [Process Status Data](../../projects/runtime/active-process/process-instance) section for more examples of possible states.
* **applicationIds**:
* If omitted, the search will be performed in the current application.
* If multiple application IDs are provided, the search will be executed across all specified applications.
* **Data to send (key)**: Used for validating data sent from the frontend via an action (refer to **User Task** configuration section).
* **Headers**: Mandatory - `{"processInstanceId": ${processInstanceId}}`
If you also use callbackActions, you will also need to add the following headers:
`{"destinationId": "search_node", "callbacksForAction": "search_for_client"}`
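Putting the pieces together, the message published to the search topic could be assembled as sketched below. The `buildSearchMessage` helper and its types are hypothetical illustrations; the body fields and header names follow the documentation:

```typescript
// Request body fields supported by the Data Search service.
interface SearchRequest {
  searchKey: string;
  value: string;
  processStartDateAfter?: string;
  processStartDateBefore?: string;
  processDefinitionNames?: string[];
  states?: string[];
  applicationIds?: string[];
}

// Hypothetical helper: pairs the request body with the mandatory
// processInstanceId header and optional callback headers.
function buildSearchMessage(
  request: SearchRequest,
  processInstanceId: string,
  callback?: { destinationId: string; callbacksForAction: string }
) {
  const headers: Record<string, string> = { processInstanceId };
  if (callback) {
    headers.destinationId = callback.destinationId;
    headers.callbacksForAction = callback.callbacksForAction;
  }
  return { headers, body: request };
}
```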
Example (dummy values extracted from a process):

A custom microservice (a core extension) will receive this event and search for the value in Elasticsearch.
It will respond to the engine via a Kafka topic (defined by the `KAFKA_TOPIC_DATA_SEARCH_OUT` environment variable in your deployment). Add the topic in the **Node config** of the user task where you previously added the Kafka send action.

### Response
The response's body message will look like this:
#### If there is no result:
```json
{
  "result": [],
  "searchKey": "application.client.identificationData.lastName",
  "tooManyResults": "false",
  "searchValue": "random"
}
```

Example (dummy values extracted from a process):
To access the view of your process variables, tokens, and subprocesses, go to **FLOWX.AI Designer > Active process > Process Instances**. Here you will find the response.
#### If there is a list of results:
```json
{
  "searchKey": "application.client.identificationData.personalIdentificationNumber",
  "result": [
    {
      "processInstanceUUID": "UUID",
      "status": "FINISHED",
      "processStartDate": "<date>",
      "data": { "...": "all data indexed in Elasticsearch for that process" }
    }
  ],
  "tooManyResults": false
}
```
**NOTE**: Up to 50 results will be received if `tooManyResults` is true.
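A receiver of these responses has three cases to distinguish: no results, a normal result list, and a truncated one. The sketch below uses hypothetical types mirroring the two examples (note that the empty-result example quotes the boolean as a string):

```typescript
interface SearchResultEntry {
  processInstanceUUID: string;
  status: string;
  processStartDate: string;
  data: unknown;
}

interface SearchResponse {
  searchKey: string;
  searchValue?: string;
  result: SearchResultEntry[];
  tooManyResults: boolean | string; // the empty-result example quotes the boolean
}

// Hypothetical helper: summarizes the outcome for logging or UI display.
function summarize(response: SearchResponse): string {
  const truncated = response.tooManyResults === true || response.tooManyResults === "true";
  if (response.result.length === 0) return "no results";
  return `${response.result.length} result(s)${truncated ? " (truncated to 50)" : ""}`;
}
```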
Example with dummy values extracted from a process:

#### Developer
Enabling Elasticsearch indexing **requires** activating the configuration in the **FlowX Engine**. Check the [**indexing section**](../../../setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing/) for more details.
For deployment and service setup instructions
# Task management
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/task-management/task-management-overview
Task Management in FlowX.AI is a core functionality that allows users to create, configure, and manage tasks within the platform, providing a structured way to handle work processes. It enables the definition of tasks based on business processes, offers tools for allocating, tracking, and managing tasks across various roles and departments, and supports customization through views, filters, and rules.

Task Management also includes capabilities for user roles, customizable tables, and integration with the project through both low-code and full-code approaches, ensuring flexibility for various use cases.
## Key features
* **Views**: Configurable interfaces to display task-related data based on process definitions.
* **Hooks**: Trigger a specified process when a token enters or exits the configured process, swimlane, or stage.
* **Stages**: Used to monitor the progression of tasks within a process by identifying where a task may have stalled. Stages can map to teams but also to business phases such as onboarding, verification, and validation.
* **Allocation Rules**: Facilitate the equal distribution of tasks among users with permissions in a specific swimlane.
## Task Management Views
Views offer a flexible way to tailor task data display according to business needs. By configuring views, users can create structured, customized interfaces that help specific roles, departments, or use cases access relevant data effectively.
Example of custom view:

### All tasks
In the All Tasks section, you can view all tasks generated from any process definition marked as "to be used in task manager" within a project. This provides a centralized view of all relevant tasks, as long as you have the appropriate permissions.

It provides an interactive visualization, with built-in filtering functionalities for refined data display.
Dataset Config and Table Config options are not accessible in the All Tasks view.
### Creating a view
In the Views section, you can create a new view by selecting a name and choosing a process definition. The process definition is crucial because it determines which keys you can access when configuring the view.
To set up a view, navigate to the Views section in the Task Management interface:
1. Click **Add View**.
2. Enter a **Name** for the view.
3. **Choose a Process Definition**: This will link the view's configuration to a specific process and its associated keys.
Once a view is created for a process (e.g., Process X), it cannot be reassigned to another process. This ensures consistent data structuring based on the selected process definition.
Upon creating a view, you are automatically redirected to configure its parameters.

The Task Management default view contains four primary columns, with two additional default columns that can be added.

You also have the option to add custom parameters to the table, with no limit on the number of columns.
**Columns explained**:
* **Stage**: Indicates where the process halted, providing a clear view of its current state.
* **Assignee**: Displays the individual to whom the task is assigned.
* **Priority**: Reflects the urgency level of the task.
* **Last Updated**: Shows the timestamp of the most recent action taken on the task.
* **Title**: Displays the designated name of the task, which can be customized by using Business Rules.
* **Status**: Represents the current state of a Token, such as 'Started' or 'Finished.'
* **Custom parameters**: User-defined keys within the process settings, which become available only after their configuration is complete.
### Custom parameters and display names
### Display names
You can rename default and custom parameters to make them contextually relevant for various business needs. For example:
* Rename an address field to clarify if it refers to "Residence" or "Issuing Location."

* Use Substitution Tags for dynamic display names.
Renaming a parameter’s Display Name will only change how it’s shown in the interface, without altering the actual data model. The rename option is also available for default parameters (not just custom parameters). Changing the Display Name also allows the use of Substitution Tags.
### Custom Parameters in Task Management
Custom parameters in Task Management provide a way to tailor task displays and ensure that task data aligns with specific business needs and contexts.
**Key setup and configuration**
1. **Adding Custom Parameters**:
* In **Process Settings** → **Task Management**, you can define the custom keys that will be indexed and made available for tasks.

* Each custom parameter can be renamed to suit different business contexts, ensuring clarity in cases where parameters may have multiple meanings.
Ensure that the custom key exists in the **Data Model** before it can be mapped in Task Management.
If the **attribute type** of a custom key is modified after it has been indexed, the key must be re-indexed in the **Task Management** section. This re-indexing step is crucial to ensure that Task Management reflects the updated attribute type correctly.

For data from custom parameters to flow correctly, ensure that **Forms to Validate** is set on the **UI Action button** in your UI for the corresponding process. This configuration is necessary for custom parameters to be validated and included in Task Management.
2. **Labeling Custom Parameters**:
* When adding a custom parameter, use the rename option to assign it a label relevant to the process context (as demonstrated in the example [**above**](#display-names)).
* This allows parameters to remain flexible for different roles or departments, adapting to each use case seamlessly.
3. **Enabling Task Management Integration**:
* To ensure that data flows correctly into Task Management, enable the **Use process in task management** toggle in **Process Settings** within your **project** in **FlowX Designer**.
* Some actions may be restricted based on user roles and access rights, so confirm that your role allows necessary access.
4. **Configuring Node-Specific Updates**:
* To enable Task Manager to send targeted updates from specific parts of a process:
* In **FlowX Designer**, open the relevant **project** and then the desired **process definition** and click **Edit**.
* Select the node(s) where updates should be triggered and enable the **Update task management?** switch.
* You can configure this action for multiple nodes, allowing flexibility in tracking and updating based on process flow.

Activating the **Use process in task management** flag at both the process settings level and node level is essential for ensuring data consistency and visibility in Task Management.
### Table config and Dataset config in Task Management
You can use **Table Config** and **Dataset Config** to configure and filter task data effectively. These configurations help create a customized and user-friendly interface for different roles, departments, or organizational needs.

#### Table config
**Table Config** is used to define the structure and content of the Task Management table view. Here, you can configure the columns displayed in the table and set up default sorting options.
1. **Configuring the Table Columns**:
* **Default Columns**: By default, the table includes the following columns: **Stage**, **Assignee**, **Priority**, and **Last Updated**.
* You can add additional columns, such as **Title**, **Status**, and **Custom Parameters**. Custom parameters can be chosen from the keys configured in **Process Settings** → **Task Management**.
2. **Setting Default Sorting**:
* You can select one column for **default sorting** in ascending or descending order. This configuration helps prioritize how data is initially displayed, often based on “Last Updated” or other relevant fields.
* If no specific sorting rule is configured, the table will automatically apply sorting based on the **Last Updated** column.
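The fallback described above behaves like an ordinary descending sort on the **Last Updated** timestamp. A minimal sketch for intuition only (the task shape here is illustrative, not the Task Management data model):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DefaultSortSketch {
    record Task(String title, Instant lastUpdated) {}

    // Default sorting: newest "Last Updated" first
    static List<Task> byLastUpdatedDesc(List<Task> tasks) {
        List<Task> sorted = new ArrayList<>(tasks);
        sorted.sort(Comparator.comparing(Task::lastUpdated).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
                new Task("Review KYC", Instant.parse("2024-01-10T09:00:00Z")),
                new Task("Approve loan", Instant.parse("2024-03-05T14:30:00Z")));
        System.out.println(byLastUpdatedDesc(tasks).get(0).title()); // Approve loan
    }
}
```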

#### Dataset config
**Dataset Config** is used to filter and refine the data displayed in Task Management views. This helps create targeted views based on specific needs, such as differentiating data for front office vs. back office or specific roles like managers and operators.
1. **Adding Filters**:
* You can apply filters on the keys brought into the **Dataset Config** to customize the data shown. Filters can be applied based on various data types, such as enums, strings, numbers, dates, and booleans.
2. **Filtering Options by Data Type**:
* **Enums**: Can be filtered using the `In` operator. Only parent enums are available for mapping in Task Management (ensure enums are mapped in the data model beforehand).
Before you can map enums in Task Management, they must be configured in the Data Model. Only parent enums can be mapped.
If any changes are made to the Data Model after the keys have been configured and indexed in Task Management, these changes will not be automatically reflected. You must re-add and re-index the keys in the process settings to ensure that the updated information is indexed correctly.
* **Strings**: Available filters include `Not equal`, `In`, `Starts with`, `Ends with`, `Contains`, and `Not contains`.
* **Numbers**: Filters include `Equals`, `Not equal`, `Greater than`, `Less than`, `Greater than or equal`, and `Less than or equal`.
* **Dates and Currencies**: Filters include `Equals`, `Not equal`, `Greater than`, `Less than`, `Greater than or equal`, `Less than or equal`, and `In range`.
* **Booleans**: Can be filtered using the `Equals` operator.

3. **Role-Specific Configurations**:
* **Dataset Config** allows creating views tailored to specific audiences, such as different departments or roles within an organization. However, note that filters in Dataset Config do not override user permissions on task visibility.
Example with a filter applied to a number attribute:
### Managing data model changes
While creating a view, you may want to modify the **data model**. It's important to note that changes to the **data model** do not directly impact the views. Views are tied to the process definition, not the data model. Therefore, if you make changes to the data model, you do not need to create a new view unless the changes also impact the underlying process.
### Task details
The **Task Details** tab within **Task Manager** provides key process information, including:
* **Priority**: Enables task prioritization.
* **Status**: The current process status (in our example, `STARTED`)
* **Stage**: Specific stages during process execution.
* **Comments**: User comments.
* **History**: Information such as task creation, creator, and status changes.
* **Last Updated**: Displays the most recent timestamp of any changes made to a task.
* **View Application**: Provides direct access to the container application URL where the FlowX.AI process related to a specific task is running.

**Accessing Task details in Task Management**
To access the **Task Details** of a specific task in a Task Management view, follow these steps:
1. **Navigate to the Task Management Interface**:
* Open the Task Management section within the project and select the desired **View** that contains the list of tasks.
2. **Locate the Task**:
* In the selected view (e.g., All Tasks, Custom View), find the task you want to inspect. Use filters and sorting options if necessary to locate the task more efficiently.
3. **Open Task Details**:
* Click on the task or select the **Details** option (often represented by an icon or “Details” link) associated with the task entry.
* This action will open the **Task Details** panel, which provides an in-depth view of information specific to that task.
Please note that specific roles must be defined in a process to utilize all the task management features. For configuration details, see [**Configuring Access Roles for Task Manager**](../../../../setup-guides/plugins-access-rights/configuring-access-rights-for-task-management).
## Process status updates
Task Manager displays various statuses based on process state:
| Status | Definition |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Created** | This status is visible only if there is an issue with the process creation. If the process is error-free in its configuration, you will see the **Started** status instead. |
| **Started** | Indicates that the process is in progress and running. |
| **Finished** | The process has reached an end node and completed its execution. |
| **Failed** | Displayed when a [CronJob](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) is enabled in the [FlowX Engine](../../core-components/flowx-engine). For example, if a CronJob is triggered but not completed on time, tasks move to the `FAILED` status. |
| **Expired** | Displayed when an `expiryTime` is defined within the process definition. To set it up: go to **FlowX Designer > Processes > Definitions**, select a process, click the "⋮" button and choose **Settings**, then edit the **Expiry time** field in the **General** tab. |
| **Aborted** | This status is available for processes that also contain subprocesses. When a subprocess is running (and the [token is moved backward](../../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) to redo a series of previous actions) - the subprocess will be aborted. |
| **Dismissed** | Available for processes that contain subprocesses. It is displayed when a user stops a subprocess. |
| **On hold** | Freezes the process, blocking further actions. A superuser can trigger this status for clarification or unfreeze. |
| **Terminated** | A request is sent via Kafka to terminate a process instance, ending all active tokens in the current process or subprocesses. |
## Swimlanes and Stages updates
Task Manager also tracks swimlane and stage changes:
### Swimlanes updates
| Status | Definition |
| ------------------ | ------------------------------------ |
| **Swimlane Enter** | Marks token entering a new swimlane. |
| **Swimlane Exit** | Indicates token exiting a swimlane. |
### Stages updates
| Status | Definition |
| --------------- | --------------------------------- |
| **Stage Enter** | Marks token entering a new stage. |
| **Stage Exit** | Indicates token exiting a stage. |
## Using the plugin
The Task Manager plugin offers a range of features tailored to different roles, including:
* [Swimlane permissions for Task Management](#swimlane-permissions-for-task-management)
* [Assigning and unassigning Tasks](#task-assignment-and-reassignment)
* [Hold/unhold tasks](#holdunhold-tasks)
* [Adding comments](#adding-comments)
* [Viewing the application](#viewing-the-application)
* [Bulk updates (via Kafka)](#bulk-updates)
### Swimlane permissions for Task Management
To perform specific actions within Task Management at process level, you must configure swimlane permissions at the process settings level. Each swimlane (e.g., BackOffice, FrontOffice, Manager) should be assigned appropriate roles and permissions based on the responsibilities and access needs of each user group.

**Example permissions configuration**
Below are example configurations for swimlane permissions based on roles commonly used in a loan application approval process:
1. **BackOffice Swimlane**:
* Role: `FLOWX_BACKOFFICE`
* Permissions:
* **Unhold**: Allows the user to resume tasks that have been put on hold.
* **Execute**: Enables the user to perform task actions.
* **Self Assign**: Permits users to assign tasks to themselves.
* **View**: Grants viewing rights for tasks.
* **Hold**: Allows tasks to be temporarily paused.
2. **Manager (Supervisor) Swimlane**:
* Role: `FLOWX_MANAGER`
* Permissions:
* **Unhold**: Allows the user to resume tasks that have been put on hold.
* **Execute**: Enables the user to perform task actions.
* **Self Assign**: Permits users to assign tasks to themselves.
* **View**: Grants viewing rights for tasks.
* **Hold**: Allows tasks to be temporarily paused.
These permissions can be customized depending on each use case and organizational needs. Ensure that permissions are aligned with the roles' responsibilities within the workflow.
### Task assignment and reassignment
Consider this scenario: you're the HR manager overseeing the onboarding process for new employees. In order to streamline this operation, you've opted to leverage the task manager plugin. This process consists of two key phases: the Initiation Stage and the Account Setup Stage, each requiring a designated team member.
The Initiation Stage has successfully concluded, marking the transition to the Account Setup Stage. At this juncture, it's essential to reassign the task, originally assigned to John Doe, to Jane Doe, a valuable member of the backoffice team.
### Hold/unhold tasks
As a project manager overseeing various ongoing projects, you may need to temporarily pause one due to unforeseen circumstances. To manage this, you use the "On Hold" status.

### Adding comments
When handling on-hold projects, document the reasons, inform the team, and plan for resumption. This pause helps address issues and ensures a smoother project flow upon resuming. Never forget to add comments:

### Viewing the application

The Container App URL is a URL or configuration parameter used to redirect users directly to a specific FlowX process instance.

#### Process settings level
In the Process Definition settings, navigate to the **Task Management** tab and locate the **Container App URL** field. Here, paste the container application URL where the process is loaded, following this format:
```
{container app url}/process
```
**Example**:


You can also use a predefined generic parameter as the URL: `${genericParameter}`.


#### Process data level
If `task.baseUrl` is specified in the process parameters, it is sent to the Task Manager to update the tasks accordingly.
```java
output.put("task", {"baseUrl": "https://your_base_url"});
```

The `baseUrl` set in the process data (via business rules) overrides the `baseUrl` set in the process definition settings.
## Bulk updates
Send bulk update requests via Kafka (through the Process Engine) to perform multiple operations at once. Use the Kafka topic:
* `KAFKA_TOPIC_PROCESS_OPERATIONS_BULK_IN` to send an array of operations to the Process Engine, which forwards them to Task Management via `KAFKA_TOPIC_PROCESS_OPERATIONS_BULK_OUT`, allowing multiple operations at once. More details [**here**](../../../../setup-guides/flowx-engine-setup-guide/engine-setup#topics-related-to-the-task-management-plugin).
Example of a bulk request:
```json
{
  "operations": [
    {
      "operationType": "HOLD",
      "taskId": "some task id",
      "processInstanceUuid": "d3aabfd8-d041-4c62-892f-22d17923b223", // the id of the process instance
      "swimlaneName": "Default", // name of the swimlane
      "owner": null,
      "author": "john.doe@flowx.ai",
      "requestID": "1234567891"
    },
    {
      "operationType": "HOLD",
      "taskId": "some task id",
      "processInstanceUuid": "d3aabfd8-d041-4c62-892f-22d17923b223",
      "swimlaneName": "Default", // name of the swimlane
      "owner": null,
      "author": "john.doe@flowx.ai",
      "requestID": "1234567890"
    }
  ]
}
```
For more information on bulk updates configuration, see FlowX Engine setup:
## Updating Task Manager metadata through Business Rules
You can dynamically update task metadata (title and priority) during process execution using Business Rules. This allows you to customize how tasks are displayed and prioritized in the Task Manager based on process data.
### Updating task title and priority
Here's how to update both the title and priority:
```javascript
// Define the task title and priority
taskTitle = "SME Branch Lending";
taskPriority = 1;

// Update the task title and priority
output.put("task", {
  "title": taskTitle,
  "priority": taskPriority
});
```
### Setting dynamic value based on business logic
You can set priority dynamically based on business conditions:
```javascript
// Get client data and loan amount
const clientDataSection = input.clientDataSection;
const loanAmount = input.loanAmount;

// Create a descriptive task title
const taskTitle = `Loan Application # ${clientDataSection.firstName} - ${clientDataSection.lastName}`;

// Set priority based on loan amount
let taskPriority;
if (loanAmount > 50000) {
  taskPriority = 1; // HIGH priority for large loans
} else if (loanAmount > 10000) {
  taskPriority = 2; // MEDIUM priority for medium loans
} else {
  taskPriority = 3; // LOW priority for small loans
}

// Update both title and priority
output.put("task", {
  "title": taskTitle,
  "priority": taskPriority
});
```

You can make task titles more dynamic and informative by incorporating process data:
```javascript
// Get relevant data from the process
const clientName = input.get("customer.fullName");
const applicationId = input.get("application.id");
const loanAmount = input.get("loan.amount");

// Create a descriptive task title
const taskTitle = `Loan Application #${applicationId} - ${clientName} - ${loanAmount}`;

// Update the task title
output.put("task", {
  "title": taskTitle
});
```
## Full-Code implementation
For more customized UX, the **full-code** implementation using the **Task Management SDKs (React and Angular)** allows developers to build custom tables, cards, or any other UI elements based on the views and columns configured in Task Management.
## FAQs
A: The format changes will only affect how the data is displayed, not how it's indexed.
A: Yes, you can always switch to full-code and create custom views or tables using the Task Management SDK.
A: To use data from a subprocess, you must send it to the parent process first. Subprocess keys are currently displayed in the task manager once indexed through the parent process.
# Using allocation rules
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/task-management/using-allocation-rules
Allocation rules are meant to define when tasks should be auto-assigned to users when they reach a swimlane that has a specific role configured (for example, specific tasks will be assigned for the _front office_ and specific tasks for the _back office_ only).

Tasks are always allocated based on each user's load (the number of tasks assigned to them across current and other processes). If two or more users have the same number of assigned tasks, the task is randomly assigned to one of them.
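The behavior above amounts to picking the user with the fewest assigned tasks and breaking ties randomly. A simplified illustration, not the engine's actual implementation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class AllocationSketch {
    /** Picks the least-loaded user; ties are broken randomly. */
    static String allocate(Map<String, Integer> taskCountByUser, Random rnd) {
        int min = Collections.min(taskCountByUser.values());
        List<String> leastLoaded = new ArrayList<>();
        taskCountByUser.forEach((user, count) -> {
            if (count == min) leastLoaded.add(user);
        });
        return leastLoaded.get(rnd.nextInt(leastLoaded.size()));
    }

    public static void main(String[] args) {
        // jane.doe and max.mustermann are tied with the lowest load,
        // so one of them is chosen at random
        Map<String, Integer> load = Map.of("john.doe", 3, "jane.doe", 1, "max.mustermann", 1);
        System.out.println(allocate(load, new Random()));
    }
}
```

Out-of-office users would simply be removed from the candidate map before this selection runs.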
## Accessing allocation rules
To access the allocation rules, follow the next steps:
1. Open **FlowX Designer**.
2. Go to your **Application** and from the side menu, under **Task Management**, select the **Allocation rules** entry.
## Adding process and allocation rules
To add process and allocation rules, follow the next steps:
1. Click **Add process** button, in the top-right corner.

2. Select a [**process definition**](../../../building-blocks/process/process-definition) from the drop-down list.
3. Click **Add swimlane allocations button (+)** to add allocations.

If there are no users with execute rights in the swimlane you want to add (`hasExecute: false`), the following error message will be displayed:

4. **Option 1**: Allocate all users with `execute rights`.

5. **Option 2**: Allocate only users you choose from the drop-down list. You can use the search function to filter users by name.

6. Click **Save**.
Users with out-of-office status will be skipped by automatic allocation. More information about out-of-office feature, [here](using-out-of-office-records).
## Editing allocation rules
To edit allocation rules, follow the next steps:
1. Click **Edit** button.

2. Change the allocation method.

3. Click **Save**.
### Viewing allocation rules
The allocation rules list displays all the configured swimlanes grouped by process:
1. **Process** - the process definition name where the swimlanes were configured
2. **Swimlane** - the name of the swimlane
3. **Allocation** - applied allocation rules
4. **Edited at** - the last time when an allocation was edited
5. **Edited by** - the user who edited/created the allocation rules
## Exporting/importing process allocation rules
To copy process allocation rules and move them between different environments, you can use the export/import feature.
You can export process allocation rules as JSON files directly from the allocation rules list:


# Using hooks
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/task-management/using-hooks
Hooks allow you to extract event-handling logic from a process, so it can be configured, tested, and reused independently.
Users with task management permissions can create hooks to trigger specific **process instances**, such as sending notifications when **events** occur. Follow the instructions below to set up roles for hooks scope usage:

Hooks can be linked to different events and define what will happen when they are triggered. Below you can find a list of all possible triggers for each hook.
## Creating a hook
To create a new hook, follow the next steps:
1. Open **FLOWX.AI Designer**.
2. Go to Task Manager and select **Hooks**.
3. Click **New Hook** (you can also import or export a hook).
4. Fill in the required details.

## Types of hooks
There are three types of hooks you can create in Task Manager:
* process hooks
* swimlane hooks
* stage hooks
Swimlane and stage hooks can be configured with an SLA (time when a triggered process is activated).

Dismiss SLA is available only for hooks configured with SLA.
[Here](../../../building-blocks/node/timer-events/timer-expressions) you can find more information about the SLA - duration formatting.
# Using out of office records
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/task-management/using-out-of-office-records
The out-of-office feature allows you to register users' availability to perform tasks, whether tasks are allocated manually or automatically.

Users with out-of-office status are excluded from the candidates for automatic task allocation list during the out-of-office period. More information about allocation rules, [here](./using-allocation-rules).
## Accessing out-of-office records
To add out-of-office records, follow the next steps:
1. Open **FlowX Designer**.
2. From the side menu, under **Task Management**, select the **Out of office** entry.

## Adding out-of-office records
To add out-of-office records, follow the next steps:
1. Click **Add out-of-office** button, in the top-right corner.
2. Fill in the following mandatory details:
* Assignee - user single select
* Start Date (:exclamation:cannot be earlier than tomorrow)
* End Date (:exclamation:cannot be earlier than tomorrow)

3. Click **Save**.
## Editing out-of-office records
To edit out-of-office records, follow the next steps:
1. Click **Edit** button.
2. Modify the dates (:exclamation:cannot be earlier than tomorrow).
3. Click **Save**.

## Deleting out-of-office records
To delete out-of-office records, follow the next steps:
1. From the **out-of-office list**, select a **record**.
2. Click **Delete** button. A pop-up message will be displayed: *"By deleting this out-of-office record, the user will become eligible to receive tasks in the selected period. Do you want to proceed?"*

If you choose to delete an out-of-office record, the user is eligible to receive tasks allocation during the mentioned period. More information about automatic task allocation, [here](./using-allocation-rules).
3. Click **Yes, proceed** if you want to delete the record, click **Cancel** if you want to abort the deletion.
If the out-of-office period includes days in the past, the record cannot be deleted and the following message is displayed: *“You can’t delete this record because it already affected allocations in the past. Try to shorten the period, if it didn’t end.”*

## Viewing out-of-office records
The out-of-office records list contains the following elements:
1. **User** - firstName, lastName, userName
2. **Start Date** - the date when the out-of-office period will be effective
3. **End Date** - the date when the out-of-office period will end
4. **Edited at** - the last time when an out-of-office record was edited
5. **Edited by** - the user who edited/created the out-of-office record

The list is sorted in reverse chronological order by “edited at” `dateTime` (newest added on top).
# Using stages
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/core-extensions/task-management/using-stages
You can define specific stages during the execution of a process. Stages are configured on each node and they will be used to trigger an event when passing from one stage to another.
## Creating a new stage
To create a new stage, follow the next steps:
1. Open **FlowX Designer**.
2. Go to Task Manager and select **Stages**.
3. Click **New Stage.**
4. Fill in the required details.

## Assigning a node to a stage
To assign a node to a stage, follow the next steps:
1. Open **FlowX Designer** and then select your **process**.
2. Choose the node you want to assign and select the **Node Config** tab.
3. Scroll down until you find the **Stage** field and click the dropdown button.
4. Choose the stage you want to assign.

# Building a connector
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/integrations/building-a-connector
Connectors are the vital gateway to enhancing FlowX.AI's capabilities. They seamlessly integrate external systems, introducing new functionalities by operating as independently deployable, self-contained microservices.
## Connector essentials
At its core, a connector acts as an anti-corruption layer. It manages interactions with external systems and crucial data transformations for integrations.

## Key Functions
Connectors act as lightweight business logic layers, performing essential tasks:
1. **Data Transformation**: Ensure compatibility between different data formats, like date formats, value lists, and units.
2. **Information Enrichment:** Add non-critical integration information like flags and tracing GUIDs.
## Creating a connector
1. **Create a Kafka Consumer:** Follow [**this guide**](./creating-a-kafka-consumer) to configure a Kafka consumer for your Connector.
2. **Create a Kafka Producer:** Refer to [**this guide**](./creating-a-kafka-producer) for instructions on setting up a Kafka producer.
Adaptable Kafka settings can yield advantageous event-driven communication patterns. Fine-tuning partition counts and consumers based on load testing is crucial for optimal performance.
### Design considerations
Efficient Connector design within an event-driven architecture demands:
* Load balancing solutions for varying communication types between the Connector and legacy systems.
* Custom implementations for request load balancing, Connector scaling, and more.
Incorporate all received Kafka headers in responses to ensure seamless communication with the FlowX Engine.
### Connector configuration sample
Here's a basic setup example for a connector:
* Configurations and examples for Kafka listeners and message senders.
* **OPTIONAL**: Activation examples for custom health checks.
[Sample available here](https://github.com/flowx-ai/quickstart-connector/tree/feature/easy-start)
Follow these steps and check the provided code snippets to effectively implement your custom FLOWX connector:
1. **Name Your Connector**: Choose a meaningful name for your connector service in the configuration file (`quickstart-connector/src/main/resources/config/application.yml`):
```yaml
spring:
  application:
    name: easy-connector-name # TODO 1. Choose a meaningful name for your connector service.
  jackson:
    serialization:
      write_dates_as_timestamps: false
      fail-on-empty-beans: false
```
2. **Select Listening Topic:** Decide the primary topic for your connector to listen on ( you can do this at the following path → `quickstart-connector/src/main/resources/config/application-kafka.yml`):
If the connector needs to listen to multiple topics, ensure you add settings and configure a separate thread pool executor for each needed topic (refer to `KafkaConfiguration`, you can find it at `quickstart-connector/src/main/java/ai/flowx/quickstart/connector/config/KafkaConfiguration.java`).
3. **Define Reply Topic**: Determine the reply topic, aligning with the Engine's topic pattern.
4. **Adjust Consumer Threads**: Modify consumer thread counts to match partition numbers.
```yaml
kafka:
  consumer.threads: 3 # TODO 4. Adjust number of consumer threads. Make sure number of instances * number of threads = number of partitions per topic.
  auth-exception-retry-interval: 10
  topic:
    in: ai.flowx.easy-connector.in # TODO 2. Decide what topic should the connector listen on.
    out: ai.flowx.easy-connector.out # TODO 3. Decide what topic should the connector reply on (this topic name must match the topic pattern the Engine listens on).
```
5. **Define Incoming Data Format (DTO)**: Specify the structure for incoming and outgoing data using DTOs. This can be found at the path: `quickstart-connector/src/main/java/ai/flowx/quickstart/connector/dto/KafkaRequestMessageDTO.java`.
```java
// Example for incoming DTO format
package ai.flowx.quickstart.connector.dto;

import lombok.Getter;
import lombok.Setter;
import lombok.ToString;

@Getter
@Setter
@ToString
public class KafkaRequestMessageDTO { // TODO 5. Define incoming DTO format.
    private String Id;
}
```
6. **Define Outgoing Data Format (DTO)**: Specify the structure for outgoing data at the following path → `quickstart-connector/src/main/java/ai/flowx/quickstart/connector/dto/KafkaResponseMessageDTO.java`.
```java
// Example for outgoing DTO format
package ai.flowx.quickstart.connector.dto;

import lombok.Builder;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;

@Getter
@Setter
@ToString
@Builder
public class KafkaResponseMessageDTO implements BaseApiResponseDTO { // TODO 6. Define outgoing DTO format.
    private String name;
    private String errorMessage;
}
```
7. **Implement Business Logic**: Develop logic for handling messages from the Engine and generating replies. Ensure to include the process instance UUID as a Kafka message key.
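Step 7 boils down to two rules: echo every received Kafka header and key the reply by the process instance UUID. A minimal stand-in sketch (plain maps and a local `Reply` type replace Spring Kafka's record and header classes, which the quickstart repository uses):

```java
import java.util.HashMap;
import java.util.Map;

public class ReplySketch {
    /** Minimal stand-in for an outgoing Kafka record: key, payload, headers. */
    record Reply(String key, String payload, Map<String, String> headers) {}

    static Reply buildReply(String processInstanceUuid,
                            Map<String, String> receivedHeaders,
                            String responseJson) {
        // Echo all received headers so the FlowX Engine can correlate the reply
        Map<String, String> headers = new HashMap<>(receivedHeaders);
        // Use the process instance UUID as the Kafka message key
        return new Reply(processInstanceUuid, responseJson, headers);
    }

    public static void main(String[] args) {
        Reply r = buildReply("d3aabfd8-d041-4c62-892f-22d17923b223",
                Map.of("fxContext", "some-context"), "{\"name\":\"ok\"}");
        System.out.println(r.key());      // d3aabfd8-d041-4c62-892f-22d17923b223
        System.out.println(r.headers()); // {fxContext=some-context}
    }
}
```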
Optional Configuration Steps:
* **Health Checks:** Enable health checks for all utilized services in your setup.
```yaml
management: # TODO optional: enable health check for all the services you use in case you add any
  health:
    kafka.enabled: false
```
Upon completion, your configuration files (`application.yaml` and `application-kafka.yaml`) should resemble the provided samples, adjusting settings according to your requirements:
```yaml
logging:
  level:
    ROOT: INFO
    ai.flowx.quickstart.connector: INFO
    io.netty: INFO
    reactor.netty: INFO
    jdk.event.security: INFO

server:
  port: 8080

spring:
  application:
    name: easy-connector-name
  jackson:
    serialization:
      write_dates_as_timestamps: false
      fail-on-empty-beans: false

management:
  health:
    kafka.enabled: false

spring.config.import: application-kafka.yml

logging.level.ROOT: DEBUG
logging.level.ai.flowx.quickstart.connector: DEBUG
```
And your Kafka configuration file (`application-kafka.yaml`) should look like this:
```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    security.protocol: "PLAINTEXT"
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      properties:
        interceptor:
          classes: io.opentracing.contrib.kafka.TracingProducerInterceptor
        message:
          max:
            bytes: ${KAFKA_MESSAGE_MAX_BYTES:52428800} #50MB
        max:
          request:
            size: ${KAFKA_MESSAGE_MAX_BYTES:52428800} #50MB
    consumer:
      group-id: kafka-connector-group
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        interceptor:
          classes: io.opentracing.contrib.kafka.TracingConsumerInterceptor

kafka:
  consumer.threads: 3
  auth-exception-retry-interval: 10
  topic:
    in: ai.flowx.easy-connector.in
    out: ai.flowx.easy-connector.out
```
For a SASL-secured cluster, replace the `spring.kafka` security settings with:
```yaml
spring:
  kafka:
    security.protocol: "SASL_PLAINTEXT"
    properties:
      sasl:
        mechanism: "OAUTHBEARER"
        jaas.config: "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"${KAFKA_OAUTH_CLIENT_ID:kafka}\" oauth.client.secret=\"${KAFKA_OAUTH_CLIENT_SECRET:kafka-secret}\" oauth.token.endpoint.uri=\"${KAFKA_OAUTH_TOKEN_ENDPOINT_URI:kafka.auth.localhost}\" ;"
        login.callback.handler.class: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
```
## Setting up the connector locally
For detailed setup instructions, refer to the Setting Up FLOWX.AI Quickstart Connector Readme:
Prerequisites:
* a terminal to clone the GitHub repository
* a code editor and IDE
* JDK version 17
* the Docker Desktop app
* an internet browser
## Integrating a connector in FLOWX.AI Designer
To integrate and utilize the connector within FLOWX.AI Designer, follow these steps:
1. **Process Designer Configuration**: Utilize the designated communication nodes within the [Process Designer](../../building-blocks/process/process):
* [**Send Message Task**](../../building-blocks/node/message-send-received-task-node#message-send-task): Transmit a message to a topic monitored by the connector. Make sure you choose **Kafka Send Action** type.

* [**Receive Message Task**](../../building-blocks/node/message-send-received-task-node#message-receive-task): Await a message from the connector on a topic monitored by the engine.

2. **Connector Operations**: The connector identifies and processes the incoming message.
3. **Handling Response**: Upon receiving a response, the connector serializes and deposits the message onto the specified OUT topic.
4. **Engine Processing**: The engine detects the new message, captures the entire content, and stores it within its variables based on the configured variable settings.
You can check another example of a more complex connector by checking the following repository:
# Creating a Kafka consumer
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/integrations/creating-a-kafka-consumer
This guide focuses on creating a **Kafka** consumer using Spring Boot.
Here are some tips, including the required configurations and code samples, to help you implement a Kafka consumer in Java.
## Required dependencies
Ensure that you have the following dependencies in your project:
```xml
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
  <groupId>io.strimzi</groupId>
  <artifactId>kafka-oauth-client</artifactId>
  <version>0.6.1</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>2.5.1</version>
</dependency>
<dependency>
  <groupId>io.opentracing.contrib</groupId>
  <artifactId>opentracing-kafka-client</artifactId>
  <version>0.1.13</version>
</dependency>
```
## Configuration
Ensure that you have the following configuration in your `application.yml` or `application.properties` file:
```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    security.protocol: "PLAINTEXT"
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        max:
          partition:
            fetch:
              bytes: ${KAFKA_MESSAGE_MAX_BYTES:52428800} # 50MB
kafka:
  consumer:
    group-id:
      ...
    threads:
      ...
```
## Code sample for a Kafka Listener
Here's an example of a Kafka listener method:
```java
@KafkaListener(topics = "TOPIC_NAME_HERE")
public void listen(ConsumerRecord<String, String> record) throws JsonProcessingException {
    SomeDTO request = objectMapper.readValue(record.value(), SomeDTO.class);
    // process the received DTO
}
```
Make sure to replace `TOPIC_NAME_HERE` with the actual name of the Kafka topic you want to consume from. Additionally, ensure that you have the necessary serialization and deserialization logic in place for your specific use case.
# Creating a Kafka producer
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/integrations/creating-a-kafka-producer
This guide focuses on creating a **Kafka** producer using Spring Boot.
Here are some tips, including the required configurations and code samples, to help you implement a Kafka producer in Java.
## Required dependencies
Ensure that you have the following dependencies in your project:
```xml
org.springframework.kafkaspring-kafkaio.strimzikafka-oauth-client0.6.1org.apache.kafkakafka-clients2.5.1io.opentracing.contribopentracing-kafka-client0.1.13
```
## Configuration
Ensure that you have the following configuration in your `application.yml` or `application.properties` file:
```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    security.protocol: "PLAINTEXT"
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      properties:
        message:
          max:
            bytes: ${KAFKA_MESSAGE_MAX_BYTES:52428800} # 50MB
        max:
          request:
            size: ${KAFKA_MESSAGE_MAX_BYTES:52428800} # 50MB
```
## Code sample for a Kafka producer
Ensure that you have the necessary KafkaTemplate bean autowired in your producer class. The sendMessage method demonstrates how to send a message to a Kafka topic with the specified headers and payload. Make sure to include all the received Kafka headers in the response that is sent back to the **FlowX Engine**.
```java
private final KafkaTemplate<String, Object> kafkaTemplate;

public void sendMessage(String topic, Headers headers, Object payload) {
    ProducerRecord<String, Object> producerRecord = new ProducerRecord<>(topic, payload);
    // make sure to send all the received headers back to the FlowX Engine
    headers.forEach(header -> producerRecord.headers().add(header));
    kafkaTemplate.send(producerRecord);
}
```
## Kafka headers
### Understanding Kafka headers in FlowX Integration
When integrating with FlowX Engine via Kafka, headers play a crucial role in message routing and processing. It's essential to preserve and include all received headers in responses back to the FlowX Engine.
### Important headers
| Header | Purpose | Consequences If Missing |
| ----------------------------------------- | -------------------------------------------------------------------- | ------------------------------------------------------------------------------ |
| `fxContext` | Identifies the target process/subprocess in the hierarchy | Messages may be incorrectly routed or not processed |
| `Fx-AppId` | Identifies the application processing the message | Application context may be lost |
| `Fx-RootAppId` | Identifies the original application that initiated the process chain | Process origin tracking may be lost; important for complex process hierarchies |
| `Fx-BuildId` | Contains the build identifier for versioning and traceability | Version-specific behaviors may be affected; debugging becomes more difficult |
| `processInstanceId`/`processInstanceUuid` | Primary keys for message correlation | Message correlation may fail |
| `Fx-ProcessName` | Required for cross-process communication | "Start another process via Kafka" functionality may break |
### The fxContext header explained
The `fxContext` header is particularly important for routing messages in architectures with embedded processes and subprocesses:
* For kafka-receive nodes in the root process: `fxContext = "main"`
* For an embedded subprocess with nodeId=4: `fxContext = "main:4"`
* For an embedded sub-subprocess with nodeId=12: `fxContext = "main:4:12"`
This hierarchical format ensures messages are delivered to their intended recipients within the process structure.
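To make the format concrete, here is a small sketch of how such a context value could be built and split. The `FxContext` class and its methods are hypothetical helpers for illustration, not part of any FlowX SDK:

```java
import java.util.Arrays;
import java.util.List;

public class FxContext {
    // Builds a child context by appending a node id, e.g. "main" + 4 -> "main:4"
    public static String child(String parent, int nodeId) {
        return parent + ":" + nodeId;
    }

    // Splits "main:4:12" into its segments: ["main", "4", "12"]
    public static List<String> segments(String fxContext) {
        return Arrays.asList(fxContext.split(":"));
    }

    public static void main(String[] args) {
        String root = "main";
        String sub = child(root, 4);      // "main:4"
        String subSub = child(sub, 12);   // "main:4:12"
        System.out.println(subSub);
        System.out.println(segments(subSub));
    }
}
```

The first segment identifies the root process, and each subsequent segment is the node id of one nesting level, so the engine can walk the hierarchy from left to right to locate the intended recipient.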
### Understanding Fx-RootAppId
The `Fx-RootAppId` header is used to track the originating application throughout the entire process chain. This is particularly important when:
* Multiple applications are involved in a workflow
* Processes spawn subprocesses across different components
* You need to trace a complete transaction back to its originating application
Unlike `Fx-AppId` which may change as a message passes through different components, `Fx-RootAppId` preserves the original initiator's identity.
### Best Practices
1. **Preserve All Headers**: Always include all received Kafka headers in responses to the FlowX Engine.
```java
headers.forEach(header -> producerRecord.headers().add(header));
```
# Integration Designer
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/integrations/integration-designer
The Integration Designer simplifies the integration of FlowX with external systems using REST APIs. It offers a user-friendly graphical interface with intuitive drag-and-drop functionality for defining data models, orchestrating workflows, and configuring system endpoints.
Unlike [Postman](https://www.postman.com/), which focuses on API testing, the Integration Designer automates workflows between systems. With drag-and-drop ease, it handles REST API connections, real-time processes, and error management, making integrations scalable and easy to maintain.
## Overview
Integration Designer facilitates the integration of the FlowX platform with external systems, applications, and data sources.

Integration Designer focuses on REST API integrations, with future updates expanding support for other protocols.
***
## Key features
You can easily build complex API workflows using a drag-and-drop interface, making it accessible for both technical and non-technical audiences.
Specifically tailored for creating and managing REST API calls through a visual interface, streamlining the integration process without the need for extensive coding.
Allows for immediate testing and validation of REST API calls within the design interface.
***
## Managing integration endpoints
### Systems
A system is a collection of resources—endpoints, authentication, and variables—used to define and run integration workflows.

### Creating a new system definition
With the **Systems** feature you can create, update, and organize endpoints used in API integrations. These endpoints are integral to building workflows within the Integration Designer, offering flexibility and ease of use for managing connections between systems. Endpoints can be configured, tested, and reused across multiple workflows, streamlining the integration process.

Go to the **Systems** section in FlowX Designer at **Projects** -> **Your project** -> **Integrations** -> **Systems**.
1. Add a **New System**, set the system’s unique code, name, and description:
* **Name**: The system's name.
* **Code**: A unique identifier for the external system.
* **Base URL**: The base URL is the main address of a website or web application, typically consisting of the protocol (`http` or `https`), domain name, and a path.
* **Description**: A description of the system and its purpose.
* **Enable enumeration value mapping**: If checked, this system will be listed under the mapped enumerations. See [enumerations](../core-extensions/content-management/enumerations) section for more details.

To dynamically adjust the base URL based on the upper environment (e.g., dev, QA, stage), you can use environment variables and configuration parameters. For example: `https://api.${environment}.example.com/v1`.
Additionally, keep in mind that the priority for determining the configuration parameter (e.g., base URL) follows this order: first, input from the user/process; second, configuration parameters overrides (set directly on FlowX.AI designer or environment variables); and lastly, configuration parameters.
2. Set up authorization (Service Token, Bearer Token, or No Auth). In our example, we will set the auth type to Bearer and define it at system level:

The value of the token might change depending on the environment so it is recommended to define it at system level and apply [Configuration Parameters Overrides](../../projects/runtime/configuration-parameters-overrides) at runtime.
### Defining REST integration endpoints
In this section you can define REST API endpoints that can be reused across different workflows.
1. Under the **Endpoints** section, add the necessary endpoints for system integration.

2. Configure an endpoint by filling in the following properties:
* **Method**: GET, POST, PUT, PATCH, DELETE.
* **Path**: Path for the endpoint.
* **Parameters**: Path, query, and header parameters.
* **Response Settings**: Expected response codes and formats.
* **Body**: JSON payload for requests.

### Defining variables
The Variables tab allows you to store system-specific variables that can be referenced throughout workflows using the format `${variableName}`.
These declared variables can be utilized not only in workflows but also in other sections, such as the Endpoint or Authorization tabs.
For example:
* For our integration example, you can store your tableId and baseId as variables in the **Variables** tab and reference them wherever needed.

* Use variables in the **Base URL** to switch between different environments, such as UAT or production.
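To illustrate the `${variableName}` notation, here is a minimal sketch of placeholder resolution. This is an assumption-laden illustration of the general technique, not FlowX's actual implementation; the `PlaceholderResolver` class is hypothetical:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderResolver {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replaces each ${name} with its value from the map,
    // leaving unknown placeholders untouched.
    public static String resolve(String template, Map<String, String> vars) {
        Matcher m = VAR.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = vars.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> vars = Map.of("environment", "qa");
        System.out.println(resolve("https://api.${environment}.example.com/v1", vars));
        // https://api.qa.example.com/v1
    }
}
```

The same substitution applies whether the placeholder appears in a base URL, an endpoint path, or a request body.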
### Endpoint parameter types
When configuring endpoints, several parameter types help define how the endpoint interacts with external systems. These parameters ensure that requests are properly formatted and data is correctly passed.
#### Path parameters
Elements embedded directly within the URL path of an API request that act as placeholders for specific values.
* Used to specify variable parts of the endpoint URL (e.g., `/users/{userId}`).
* Defined with `${parameter}` format.
* Mandatory in the request URL.

Path parameters must always be included, while query and header parameters are optional but can be set as required based on the endpoint’s design.
#### Query parameters
Query parameters are added to the end of a URL to provide extra information to a web server when making requests.
* Query parameters are appended to the URL after a `?` symbol and are typically used for filtering or pagination (e.g., `?search=value`)
* Useful for filtering or pagination.
* Example URL with query parameters: [https://api.example.com/users?search=johndoe\&page=2](https://api.example.com/users?search=johndoe\&page=2).

These parameters must be defined in the Parameters table, not directly in the endpoint path.
To preview how query parameters are sent in the request, you can use the **Preview** feature to see the exact request in cURL format. This shows the complete URL, including query parameters.
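As a sketch of how query parameters end up in the request URL, the helper below appends URL-encoded key/value pairs to a base URL. The `QueryString` class is a hypothetical illustration, not part of the platform:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryString {
    // Appends URL-encoded query parameters after a '?', e.g. ?search=johndoe&page=2
    public static String withQuery(String baseUrl, Map<String, String> params) {
        if (params.isEmpty()) return baseUrl;
        String query = params.entrySet().stream()
                .map(e -> encode(e.getKey()) + "=" + encode(e.getValue()))
                .collect(Collectors.joining("&"));
        return baseUrl + "?" + query;
    }

    private static String encode(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("search", "johndoe");
        params.put("page", "2");
        System.out.println(withQuery("https://api.example.com/users", params));
        // https://api.example.com/users?search=johndoe&page=2
    }
}
```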
#### Header parameters
Header parameters provide information about the request and instruct the API on how to handle it.
* Header parameters (HTTP headers) provide extra details about the request or its message body.
* They are not part of the URL. Default values can be set for testing and overridden in the workflow.
* Custom headers sent with the request (e.g., `Authorization: Bearer token`).
* Define metadata or authorization details.

#### Body parameters
The data sent to the server when an API request is made.
* These are the data fields included in the body of a request, usually in JSON format.
* Body parameters are used in POST, PUT, and PATCH requests to send data to the external system (e.g., creating or updating a resource).

#### Response body parameters
The data sent back from the server after an API request is made.
* These parameters are part of the response returned by the external system after a request is processed. They contain the data that the system sends back.
* Typically returned in GET, POST, PUT, and PATCH requests. Response body parameters provide details about the result of the request (e.g., confirmation of resource creation, or data retrieval)

### Enum mapper
The enum mapper for the request body enables you to configure enumerations for specific keys in the request body, aligning them with values from the External System or translations into another language.

On enumerations you can map both translation values from different languages or values for different source systems.

Make sure the enumerations, with their corresponding translations and system values, already exist in your application:

Select whether to use in the integration the enumeration value corresponding to the External System or the translation into another language.
For language translation, a header parameter called 'Language' is required to specify the target language.
### Configuring authorization
* Select the required **Authorization Type** from a predefined list.
* Enter the relevant details based on the selected type (e.g., Realm and Client ID for Service Accounts).
* These details will be automatically included in the request headers when the integration is executed.
### Authorization methods
The Integration Designer supports several authorization methods, allowing you to configure the security settings for API calls. Depending on the external system's requirements, you can choose one of the following authorization formats:

#### Service account
Service Account authentication requires the following key fields:
* **Identity Provider Url**: The URL for the identity provider responsible for authenticating the service account.
* **Client Id**: The unique identifier for the client within the realm.
* **Client secret**: A secure secret used to authenticate the client alongside the Client ID.
* **Scope**: Specifies the access level or permissions for the service account.

When using Entra as an authentication solution, the **Scope** parameter is mandatory. Ensure it is defined correctly in the authorization settings.
#### Basic authentication
* Requires the following credentials:
* **Username**: The account's username.
* **Password**: The account's password.
* Suitable for systems that rely on simple username/password combinations for access.

#### Bearer
* Requires an **Access Token** to be included in the request headers.
* Commonly used for OAuth 2.0 implementations.
* Header Configuration: Use the format `Authorization: Bearer {access_token}` in headers of requests needing authentication.

* System-Level Example: You can store the Bearer token at the system level, as shown in the example below, ensuring it's applied automatically to future API calls:

Store tokens in a configuration parameter so updates propagate across all requests seamlessly when tokens are refreshed or changed.
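To show the `Authorization: Bearer {access_token}` header format in practice, here is a minimal sketch using the JDK's `java.net.http` client. The URL and token value are placeholders for illustration:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class BearerAuth {
    // Builds a GET request carrying the Authorization: Bearer header;
    // the token value would normally come from a configuration parameter.
    public static HttpRequest authorizedGet(String url, String accessToken) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = authorizedGet("https://api.example.com/users", "my-token");
        System.out.println(req.headers().firstValue("Authorization").orElse(""));
        // Bearer my-token
    }
}
```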

#### Certificates
Some external systems require a client certificate for access. Use this setup to configure secure communication with such a system.
It includes paths to both a Keystore (which holds the client certificate) and a Truststore (which holds trusted certificates). You can toggle these features based on the security requirements of the integration.

When the Use Certificate option is enabled, you will need to provide the following certificate-related details:
* **Keystore Path**: Specifies the file path to the keystore, in this case, `/opt/certificates/testkeystore.jks`. The keystore contains the client certificate used for securing the connection.
* **Keystore Password**: The password used to unlock the keystore.
* **Keystore Type**: The format of the keystore, JKS or PKCS12, depending on the system requirements.
**Truststore credentials**
* **Truststore Path**: The file path is set to `/opt/certificates/testtruststore.jks`, specifying the location of the truststore that holds trusted certificates.
* **Truststore Password**: Password to access the truststore.

***
## Workflows
A workflow defines a series of tasks and processes to automate system integrations. Within the Integration Designer, workflows can be configured using different components to ensure efficient data exchange and process orchestration.

### Creating a workflow
1. Navigate to Workflow Designer:
* In FlowX.AI Designer, go to **Projects -> Your application -> Integrations -> Workflows**.
* Create a New Workflow, provide a name and description, and save it.
2. Start to design your workflow by adding nodes to represent the steps of your workflow:
* **Start Node**: Defines where the workflow begins and also defines the input parameter for subsequent nodes.
* **REST endpoint nodes**: Add REST API calls for fetching or sending data.
* **Fork nodes (conditions)**: Add conditional logic for decision-making.
* **Data mapping nodes (scripts)**: Write custom scripts in JavaScript or Python.
* **End Nodes**: Capture output data as the completed workflow result, ensuring the process concludes with all required information.

### Workflow nodes
Users can visually build workflows by adding various nodes, including:
* Workflow start node
* REST endpoint nodes
* Data mapping nodes (scripts)
* Fork nodes (conditions)
* End node
#### Workflow start node
The Start node is the default and mandatory first node in any workflow. It initializes the workflow and provides the input parameters for subsequent nodes.

The Start node defines the initial data model for the workflow. This input data model can be customized: you can enter custom JSON data by clicking inside the code editor and typing your input. This input data will be passed to subsequent nodes in the workflow.
For example, if you want to define a **first name** parameter, you can add it like this in the **Start Node**:
```json
{
"firstName": "John"
}
```
Later, in the body of a subsequent workflow node, you can reference this input using:
```json
{
"First Name": "${firstName}"
}
```
This ensures that the data from the Start node is dynamically passed through the workflow.
When you want to send input data from a process to a workflow, use the Start workflow node to map the data coming from the process and make it available across the entire workflow.

Make sure the data is also mapped in the **Start Integration Workflow** node action.

Only one Start node is allowed per workflow. The Start node is always the first node in the workflow and cannot have any incoming connections. Its sole function is to provide the initial data for the workflow.
The Start node cannot be altered in name, nor can it be deleted from the workflow.
#### REST endpoint nodes
The REST endpoint node enables communication with external systems to retrieve or update data by making REST API calls. It supports multiple methods like GET, POST, PUT, PATCH, and DELETE. Endpoints are selected via a dropdown menu, where available endpoints are grouped by the system they belong to.

The node is added by selecting it from the "Add Connection" dropdown in the workflow designer.

You can include multiple REST endpoint nodes within the same workflow, allowing for integration with various systems or endpoints.
Unlike some nodes, the Endpoint Call node can be run independently, making it possible to test the connection or retrieve data without executing the entire workflow.

**Input and output**
Each REST endpoint node includes some essential tabs:
* **Params**:
* **Response key**: The response from the endpoint node, including both data and metadata, is organized under a predefined response key.
* **Input**:
* This tab contains read-only JSON data that is automatically populated with the output from the previous node in the workflow.
* **Output**:
* It displays the API response in JSON format.
#### Condition (fork) nodes
The Condition node evaluates incoming data from a connected node against defined logical conditions (if / else if). It directs the workflow along different paths depending on whether the condition evaluates to TRUE or FALSE.
**Defining Conditions in JavaScript or Python**
Logical conditions for the Condition Node can be written in either JavaScript or Python, depending on the requirements of your workflow.
* If the condition evaluates to TRUE, the workflow follows the If path.
* If the condition evaluates to FALSE, it follows the Else if path.

You can include multiple Condition nodes within a single workflow, enabling the creation of complex branching logic and decision-making flows.
**Parallel processing and forking**
The Condition node can split the workflow into parallel branches, allowing for multiple conditions to be evaluated simultaneously. This capability makes it ideal for efficiently processing different outcomes at the same time.
#### Data mapping nodes (scripts)
The Script node allows you to transform and map data between different systems during workflow execution by writing and executing custom code in JavaScript or Python. It enables complex data transformations and logic to be applied directly within the workflow.

#### End node
The End node signifies the termination of a workflow's execution. It collects the final output and completes the workflow process.
Multiple End nodes can be included within a single workflow. This allows the workflow to have multiple possible end points based on different execution paths.

The End node automatically receives input in JSON format from the previous node, and you can modify this input by editing it directly in the code editor. If the node's output doesn't meet mandatory requirements, it will be flagged as an error to ensure all necessary data is included.
The output of the End node represents the final data model of the workflow once execution is complete.
#### Testing the workflow
You can always test your endpoints in the context of the workflow: run endpoints individually where needed, or run the entire workflow.
#### Debugging
Use the integrated console after running each workflow (whether you test it in the workflow designer or in a process definition). It provides useful information such as logs, input and output data for each endpoint, and other details like execution time.
### Workflow integration
Integrating workflows into a BPMN process allows for structured handling of tasks like user interactions, data processing, and external system integrations.
This is achieved by connecting workflow nodes to User Tasks and Service Tasks using the [**Start Integration Workflow**](../../building-blocks/actions/start-integration-workflow) action.
1. **Open the FlowX Process Designer**:
* Navigate to **Projects -> Your application -> Processes**.
* Create a new process or edit an existing one.
2. **Define the Data Model**:
Needed if you want to send data from your user task to the workflow.
* Establish the data model that will be shared between the process and the workflow.
* Ensure all necessary data fields that the workflow will use are included.

1. **Add a Task**:
* Insert a **User Task** or **Service Task** into your BPMN diagram.
* A **User Task** requires user input, while a **Service Task** can trigger automated actions without manual intervention.
2. **Configure Actions for the Task**:
* In the node config, add a **Start Integration Workflow** action.
* Select the target workflow you want to integrate. This links the task with the predefined workflow in the Integration Designer.

3. **Map the Payload**:
* If input data is defined in the **Start Node** of the workflow, it will be **automatically mapped** in the **Start Integration Workflow** action. Ensure that the workflow’s Start Node contains the fields you need.
* Additional payload keys and values can also be set up as needed to facilitate data flow from the process to the workflow.

1. **Add a Receive Message Task**:
* To handle data returned by the workflow, add a **Receive Message Task** in the BPMN diagram.
* This task captures the workflow’s output data, such as processing status or results sent via Kafka.

2. **Set Up a Data Stream Topic**:
* In the **Receive Message Task**, select your workflow from the **Data Stream Topics** dropdown.
* Ensure that the workflow output data, including status or returned values, is accurately captured under a predefined key.
***
## Integration with external systems
This example demonstrates how to integrate FlowX with an external system, in this example, using Airtable, to manage and update user credit status data. It walks through the setup of an integration system, defining API endpoints, creating workflows, and linking them to BPMN processes in FlowX Designer.
Before going through this example of integration, we recommend:
* Create your own base and table in Airtable, details [here](https://www.airtable.com/guides/build/create-a-base).
* Check Airtable Web API docs [here](https://airtable.com/developers/web/api/introduction) to get familiarized with Airtable API.
### Integration in FlowX
Navigate to the **Integration Designer** and create a new system:
* Name: **Airtable Credit Data**
* **Base URL**: `https://api.airtable.com/v0/`

In the **Endpoints** section, add the necessary API endpoints for system integration:
1. **Get Records Endpoint**:
* **Method**: GET
* **Path**: `/${baseId}/${tableId}`
* **Path Parameters**: Add the values for the baseId and for the tableId so they will be available in the path.
* **Header Parameters**: Authorization Bearer token
See the [API docs](https://airtable.com/developers/web/api/list-records).

2. **Create Records Endpoint**:
* **Method**: POST
* **Path**: `/${baseId}/${tableId}`
* **Path Parameters**: Add the values for the baseId and for the tableId so they will be available in the path.
* **Header Parameters**:
* `Content-Type: application/json`
* Authorization Bearer token
* **Body**: JSON format containing the fields for the new record. Example:
```json
{
"typecast": true,
"records": [
{
"fields": {
"First Name": "${firstName}",
"Last Name": "${lastName}",
"Age": ${age},
"Gender": "${gender}",
"Email": "${email}",
"Phone": "${phone}",
"Address": "${address}",
"Occupation": "${occupation}",
"Monthly Income ($)": ${income},
"Credit Score": ${creditScore},
"Credit Status": "${creditStatus}"
}
}
]
}
```

1. **Open the Workflow Designer** and create a new workflow.
* Provide a name and description.
2. **Configure Workflow Nodes**:
* **Start Node**: Initialize the workflow.
On the Start node, add the data that you want to extract from the process. This way, when you add the **Start Integration Workflow** node action, it will be populated with this data.

```json
{
"firstName": "${firstName}",
"lastName": "${lastName}",
"age": ${age},
"gender": "${gender}",
"email": "${email}",
"phone": "${phone}",
"address": "${address}",
"occupation": "${occupation}",
"income": ${income},
"creditScore": ${creditScore},
"creditStatus": "${creditStatus}"
}
```
Make sure these keys are also mapped in the data model of your process with their corresponding attributes.
* **REST Node**: Set up API calls:
* **GET Endpoint** for fetching records from Airtable.
* **POST Endpoint** for creating new records.
* **Condition Node**: Add logic to handle credit scores (e.g., triggering a warning if the credit score is below 300).
Condition example:
```javascript
input.responseKey.data.records[0].fields["Credit Score"] < 300
```
* **Script Node**: Include custom scripts if needed for processing data (not used in this example).
* **End Node**: Define the end of the workflow with success or failure outcomes.
1. **Integrate the workflow** into a BPMN process:
* Open the process diagram and include a **User Task** and a **Receive Message Task**.

In this example, we'll use a User Task because we need to capture user data and send it to our workflow.
2. **Map Data** in the **UI Designer**:
* Create the data model
* Link data attributes from the data model to form fields, ensuring the user input aligns with the expected parameters.


3. **Add a Start Integration Workflow** node action:
* Make sure all the input will be captured.
**Receive Workflow Output**:
* Use the **Receive Message Task** to capture workflow outputs like status or returned data.
* Set up a **Data stream topic** to ensure workflow output is mapped to a predefined key.

* Start your process to initiate the workflow integration. It should add a new user with the details captured in the user task.
* Check if it worked by going to your base in Airtable; you should see that the user has been added.
***
This example demonstrates how to integrate Airtable with FlowX to automate data management. You configured a system, set up endpoints, designed a workflow, and linked it to a BPMN process.
## FAQs
**A:** Currently, the Integration Designer only supports REST APIs, but future updates will include support for SOAP and JDBC.
**A:** The Integration Service handles all security aspects, including certificates and secret keys. Authorization methods like Service Token, Bearer Token, and OAuth 2.0 are supported.
**A**: Errors are logged within the workflow and can be reviewed in the dedicated monitoring console for troubleshooting and diagnostics.
**A**: Currently, the Integration Designer only supports adding endpoint specifications manually. Import functionality (e.g., importing configurations from sources like Swagger) is planned for future releases.
For now, you can manually define your endpoints by entering the necessary details directly in the system.
# Overview
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/integrations/integrations-overview
Integrations play a crucial role in connecting legacy systems or third-party applications to the FlowX Engine. They enable seamless communication by leveraging custom code and the Kafka messaging system.

Integrations serve various purposes, including working with legacy APIs, implementing custom file exchange solutions, or integrating with RPAs.
#### High-level architecture

Integrations involve interaction with legacy systems and require custom development to integrate them into your FLOWX.AI setup.
## Developing a custom integration
Developing custom integrations for the FlowX.AI platform is a straightforward process. You can use your preferred technology to write the necessary custom code, with the requirement that it can send and receive messages from the **Kafka** cluster.
#### Steps to create a custom integration
Follow these steps to create a custom integration:
1. Develop a microservice, referred to as a "Connector," using your preferred tech stack. The Connector should listen for Kafka events, process the received data, interact with legacy systems if required, and send the data back to Kafka.
2. Configure the [process definition](../../building-blocks/process/process-definition) by adding a [message](../../building-blocks/node/message-send-received-task-node) send action in one of the [nodes](../../building-blocks/node/node). This action sends the required data to the Connector.
3. Once the custom integration's response is ready, send it back to the FLOWX.AI engine. Keep in mind that the process will wait in a receive message node until the response is received.
For Java-based Connector microservices, you can use the following startup code as a quickstart guide:
## Managing an integration
#### Managing Kafka topics
It's essential to configure the engine to consume events from topics that follow a predefined naming pattern. The naming pattern is defined using a topic prefix and suffix, such as "*ai.flowx.dev.engine.receive*."
We recommend the following naming convention for your topics:
```yaml
kafka:
  topic:
    naming:
      package: "ai.flowx."
      environment: "dev."
      version: ".v1"
      prefix: ${kafka.topic.naming.package}${kafka.topic.naming.environment}
      suffix: ${kafka.topic.naming.version}
      engineReceivePattern: engine.receive
      pattern: ${kafka.topic.naming.prefix}${kafka.topic.naming.engineReceivePattern}*
```
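Resolved, the placeholders above compose into a single subscription pattern. This small sketch (illustrative Python, not part of the platform) shows how the pattern expands and what a matching topic looks like:

```python
import fnmatch

# Mirror the YAML naming convention above in plain Python.
package = "ai.flowx."
environment = "dev."
version = ".v1"

prefix = package + environment              # "ai.flowx.dev."
pattern = prefix + "engine.receive" + "*"   # "ai.flowx.dev.engine.receive*"

# A topic a Connector replies on (suffix appended last) matches the pattern:
topic = prefix + "engine.receive.my-connector" + version
```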
# Mock integrations
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/integrations/mock-integrations
If you need to test the business process flow but haven't completed all integrations, you can still do so by utilizing the mock integrations server included in the platform.
## Setup
To begin, configure the microservice's DB settings to use a Postgres DB. Then, deploy the mocked adapter microservice.
## Adding a new integration
Setting up a mocked integration requires only one step: adding a mock Kafka request and response.
You have two options for accomplishing this:
1. Add the information directly to the DB.
2. Use the provided [**API**](/4.0/docs/api/add-kafka-mock).
For each Kafka message exchange between the engine and the integration, you need to create a separate entry.
Check out the [**Add new exchange Kafka mock**](/4.0/docs/api/add-kafka-mock) API reference for more details.
Check out the [**View all available Kafka exchanges**](/4.0/docs/api/add-kafka-mock) API reference for more details.
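For illustration only, a mocked entry pairs a request with its canned response. The field names below are assumptions, not the plugin's actual schema; consult the API references above for the real contract:

```python
import json

# Hypothetical shape of one mocked Kafka exchange entry; every field
# name here is an assumption used purely for illustration.
mock_exchange = {
    "requestTopic": "ai.flowx.dev.plugin.scoring.in",
    "responseTopic": "ai.flowx.dev.engine.receive.scoring.v1",
    "requestBody": {"customerId": "119246"},
    "responseBody": {"score": 720, "error": None},
}
payload = json.dumps(mock_exchange)
```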
# Observability with OpenTelemetry
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/integrations/open-telemetry
## What is Observability?
Observability is the capacity to infer the internal state of a system by analyzing its external outputs. In software development, this entails understanding the internal workings of a system through its telemetry data, which comprises traces, metrics, and logs.
## What is Open Telemetry?
OpenTelemetry is an observability framework and toolkit for generating and managing telemetry data, including traces, metrics, and logs. It is vendor-agnostic and compatible with various observability backends like Jaeger and Prometheus. Unlike observability backends, OpenTelemetry focuses on the creation, collection, and export of telemetry data, leaving storage and visualization to other tools.
Tracing with OpenTelemetry is available starting with the FlowX.AI v4.1.0 release.
## How does it work?
Our monitoring and performance analysis system leverages OpenTelemetry for comprehensive tracing and logging across our microservices architecture. By integrating with Grafana and other observability tools, we achieve detailed visibility into the lifecycle of requests, the performance of individual operations, and the interactions between different components of the system.

OTEL Collectors are designed in a vendor-agnostic way to receive, process, and export telemetry data. You can find more information about OTEL Collectors [**here**](https://opentelemetry.io/docs/collector/).
Recommended OpenTelemetry Collector Processors: Follow the [**recommended processors**](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor#recommended-processors).
## Prerequisites
### Microservices
* Custom code addition for manual instrumentation.
* Configuration and deployment of Java agent.
* Performance impact assessment.
### Kubernetes
* Use of a Kubernetes Operator for managing instrumentation and tracing configuration.
## Instrumentation
### Auto-instrumentation with Java agent
* **How it works**: Automatically wraps methods at the application edges (HTTP calls, Kafka messages, DB calls), creating spans and adding default span attributes.
* **Configuration**: Configure the Java agent for auto-instrumentation.
### Manual instrumentation
* **Custom Spans**: Created for methods important to the business flow and enriched with business attributes such as `fx.type`, `fx.methodName`, `fx.processInstanceUuid`, and others.
* **Custom BUSINESS Spans**: Create spans for business events.
## Business logic metadata in logs and spans
Spans now include custom FlowX attributes (e.g., node names, action names, process names, instance UUIDs), which can be used for filtering and searching in traces.
Here is the full list of included custom FlowX span attributes:
### Custom span attributes
* `fx.type` - BUSINESS/TECHNICAL
* `fx.methodName`
* `fx.parentProcessInstanceId`
* `fx.parentProcessInstanceUuid`
* `fx.processInstanceUuid`
* `fx.processName`
* `fx.processVersionId`
* `fx.tokenInstanceUuid`
* `fx.nodeName`
* `fx.nodeId`
* `fx.nodeUuid`
* `fx.boundaryEventId`
* `fx.nextNodeId`
* `fx.triggeredByBoundaryEventId`
* `fx.actionUuid`
* `fx.actionName`
* `fx.context`
* `fx.platform`
### Custom business spans
* identified by the `fx.type = BUSINESS` attribute
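Filtering exported spans down to business spans by this attribute can be sketched as follows; the in-memory span representation is an assumption, mimicking the attribute maps a tracing backend stores:

```python
# Assumed in-memory representation of exported spans with their attribute
# maps; a tracing backend applies the same kind of attribute filter.
spans = [
    {"name": "startProcess", "attributes": {"fx.type": "BUSINESS", "fx.processName": "onboarding"}},
    {"name": "jdbc.query", "attributes": {"fx.type": "TECHNICAL"}},
    {"name": "advanceToken", "attributes": {"fx.type": "BUSINESS", "fx.nodeName": "userTask_1"}},
]

# Keep only business spans, i.e. those tagged fx.type = BUSINESS.
business_spans = [s for s in spans if s["attributes"].get("fx.type") == "BUSINESS"]
```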
### Detailed trace operations
Trace specific operations and measure request time across different layers/services.
* **Process Start**: Auto-instrumentation enabled for Spring Data to show time spent in repository methods. JDBC query instrumentation can be added.
* **Token Creation and Advancing**: Custom tracing added.
* **Action Execution and Subprocess Start**: Custom tracing added.
## Troubleshooting scenarios and common usages
### Scenario examples
* **Process Trace**: Analyze DB vs cache times, token advancement, node actions.
* **Parallel Gateway**: Trace split tokens.
* **DB Query Time**: Enable JDBC query tracing.
* **Endpoint Data Issues**: Check traces for Redis or DB source.
* **Token Stuck**: Filter by node name and process UUID.
* **Action Execution**: Trace action names for stuck tokens.
* **Subprocess Failures**: Analyze subprocess start and failures.
* **Latency Analysis**: Identify latencies in automatic actions.
* **Boundary Events**: Ensure Kafka schedule messages are sent and received correctly.
* **External Service Tracking**: Trace between process engine and external plugins.
### Business operation analysis
* **Long Running Operations**: Use Uptrace for identifying slow operations.
* **Failed Requests**: Filter traces by error status.
### Visualization of Traces
We recommend using Grafana, but any observability platform compatible with OpenTelemetry standards can be used.
Grafana integrates with tracing backends such as Tempo (for tracing) and Loki (for logging), allowing us to visualize the entire lifecycle of a request. This includes detailed views of spans, which are the basic units of work in a trace. By using Grafana, we can:
* **View Trace Trees**: Grafana provides an intuitive UI for viewing the hierarchy and relationships between spans, making it easier to understand the flow of a request through the system.
* **Filter and Search**: Use Grafana to filter and search spans based on custom attributes like `fx.processInstanceUuid`, `fx.nodeName`, `fx.actionName`, and others. This helps in pinpointing specific operations or issues within a trace.
* **Error Analysis**: Identify spans with errors and visualize the stack trace or error message, aiding in quick troubleshooting.


# FlowX custom plugins
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins
Adding new capabilities to the core platform can be easily done by using plugins. FlowX plugins represent already-built functionality that can be added to a FlowX.AI platform deployment.

You can either use one of the custom **plugins** we've already built or develop your own.
On our roadmap, we're also looking to enhance the **plugins library** with third-party providers, so stay tuned for more.
## High-level architecture

The plugins are microservice apps that can be developed using any tech stack; the only requirement is that they can connect to the core platform using Kafka events.
To interact with plugins, you need to understand a few details about them:
* the events that can trigger them
* the infrastructure components needed
* the needed configurations
## Custom plugins
The currently available plugins are:
* [**Documents**](./custom-plugins/documents-plugin/documents-plugin-overview) - easily generate, host and access any kind of documents
* [**Notifications**](./custom-plugins/notifications-plugin/notifications-plugin-overview) - enhance your project with the option of sending custom emails or SMS notifications
* [**OCR**](./custom-plugins/ocr-plugin) - helps you scan your documents and integrate them into a business process
* [**Task management**](../core-extensions/task-management/task-management-overview) - a plugin suitable for back-officers and supervisors as it can be used to easily track and assign activities/tasks inside a company.
* [**Reporting**](./custom-plugins/reporting/reporting-overview) - a plugin that helps you create and bootstrap custom reports built on generic usage and process metrics
Let's get into a bit more detail about the custom plugins 🎛️
## Document management plugin
**Effortless document generation and safe-keeping**
The document management plugin securely stores documents, facilitates document generation based on predefined templates and also handles conversion between various document formats.
It offers an easy-to-use interface for handling documents on event-based Kafka streams.

## Notifications plugin
**Multi-channel notifications made easy**
The plugin handles various types of notifications:
* SMS (if a third party service is available for communication management)
* email notifications
* generating and validating OTP passwords for **user identity verification**
It can also be used to forward custom notifications to external outgoing services. It offers an intuitive interface for defining templates for each kind of notification and handles sending and auditing notifications easily.

## Task management
**Helper for back-officers and supervisors, easy track, assignment management**
The Task Management plugin displays the processes you defined in FlowX Designer in a more business-oriented view. It also offers interactions at the assignment level.
## Customer management
**Convenient and secure access to user data**
Light CRM uses an Elasticsearch engine to retrieve user details using partial matches on complex databases.

## OCR plugin
**Automatic key information extraction**
Used to easily read barcodes or extract handwritten signatures from PDF documents.

## Reporting plugin
**Easy-to-read dynamic dashboards**
Use the Reporting plugin to build and bootstrap custom reports based on generic usage and process metrics.

# Converting files
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/converting-documents-to-different-formats
Currently, the supported conversion method is limited to transforming **PDF** files into **JPEG** format.
This guide provides step-by-step instructions on how to convert an uploaded file (utilizing the provided example) from PDF to JPEG.
## Prerequisites
1. **Access Permissions**: Ensure that you have the necessary permissions to use the Documents Plugin. The user account used for these operations should have the required access rights.
2. **Kafka Configuration**: Verify that the Kafka messaging system is properly configured and accessible. The Documents Plugin relies on Kafka for communication between nodes.
* **Kafka Topics**: Familiarize yourself with the Kafka topics used for these operations (later in this section)
3. Before initiating the conversion process, it is essential to identify the file in the storage solution using its unique ID. This ensures that the conversion is performed on an already uploaded file.
You have two options to obtain the file ID:
* Extract the file ID from a [**Response Message**](./uploading-a-new-document) of an upload file request. For more details, refer to the [**upload process documentation**](./uploading-a-new-document).
* Extract the file ID from a [**Response Message**](./generating-from-html-templates) of a generate from template request. For more details, refer to the [**document generation reply documentation**](./generating-from-html-templates)
In the following example, we will use the `fileId` generated in the [**Uploading a New Document**](./uploading-a-new-document) scenario.
```json
{
  "customId": "119246",
  "fileId": "96975e03-7fba-4a03-99b0-3b30c449dfe7",
  "documentType": "BULK",
  "documentLabel": null,
  "minioPath": "flowx-dev-process-id-119246/119246/458_BULK.pdf",
  "downloadPath": "internal/files/96975e03-7fba-4a03-99b0-3b30c449dfe7/download",
  "noOfPages": null,
  "error": null
}
```
## Configuring the process

To create a process that converts a document from PDF to JPEG format, follow these steps:
1. Create a process that includes a [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node) node and a [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node):
* Use the **Send Message Task** node to send the conversion request.
* Use the **Receive Message Task** node to receive the reply.
2. Configure the first node (**Send Message Task**) by adding a **Kafka send action**.

3. Specify the [**Kafka topic**](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#kafka-configuration) where you send the conversion request.
To identify the topics defined in your current environment, follow these steps:
1. From the FLOWX.AI main screen, navigate to the **Platform Status** menu at the bottom of the left sidebar.
2. In the FLOWX Components list, scroll to the **document-plugin-mngt** line and press the eye icon on the right side.
3. In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → file → convert**. Here you will find the in and out topics for converting files.

4. Fill in the body of the message request.

#### Message request example
This is an example of a message that follows the custom integration data model.
```json
{
  "fileId": "96975e03-7fba-4a03-99b0-3b30c449dfe7",
  "to": "image/jpeg"
}
```
* `fileId`: The file ID that will be converted
* `to`: The file extension to convert to (in this case, "JPEG")
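The two fields above can be assembled programmatically, as in this sketch (the helper name is hypothetical; the field contract is the one shown above):

```python
import json

def build_convert_request(file_id: str, target: str = "image/jpeg") -> str:
    """Build the body of a conversion request.

    'to' takes a MIME type; PDF to JPEG is currently the only supported
    conversion, so "image/jpeg" is the only meaningful target.
    """
    return json.dumps({"fileId": file_id, "to": target})
```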
5. Configure the second node (**Receive Message Task**) by adding a **Data stream topic**:

The response will be sent to this `..out` Kafka topic.
## Receiving the reply

The following values are expected in the reply body:
* **customId**: The unique identifier for your document (it could be for example the ID of a client)
* **fileId**: The file ID
* **documentType**: The document type
* **documentLabel**: The document label (if available)
* **minioPath**: The path where the converted file is saved. It represents the location of the file in the storage system, whether it's a MinIO path or an S3 path, depending on the specific storage solution
* **downloadPath**: The download path for the converted file
* **noOfPages**: If applicable
* **error**: Any error message in case of an error during the conversion process
#### Message response example
```json
{
  "customId": "119246",
  "fileId": "8ec75c0e-eaa6-4d80-b7e5-15a68bba7459",
  "documentType": "BULK",
  "documentLabel": null,
  "minioPath": "flowx-dev-process-id-119246/119246/461_BULK.jpg",
  "downloadPath": "internal/files/461/download",
  "noOfPages": null,
  "error": null
}
```
The converted file is now available in the storage solution and it can be downloaded:

Note that the actual values in the response will depend on the specific conversion request and the document being converted.
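A consumer of this reply would typically check the `error` field before using the paths. A minimal sketch (the helper is illustrative, not part of the platform):

```python
import json

def extract_download_path(raw_reply: str) -> str:
    """Return the download path from a conversion reply, or raise if the
    plugin reported an error."""
    reply = json.loads(raw_reply)
    if reply.get("error"):
        raise RuntimeError(f"conversion failed: {reply['error']}")
    return reply["downloadPath"]
```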
# Deleting files
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/deleting-a-file
The Documents plugin provides functionality for deleting files.
## Prerequisites
Before deleting files, ensure:
1. **Access Permissions**: Ensure that the user account used has the necessary access rights for updates or deletions.
2. **Kafka Configuration**:
* **Verify Kafka Setup**: Ensure proper configuration and accessibility of the Kafka messaging system.
* **Kafka Topics**: Understand the Kafka topics used for these operations.
3. **File IDs and Document Types**: Prepare the information needed to identify the files to delete:
* `fileId`: ID of the file to delete.
* `customId`: Custom ID associated with the file.
In the example below, we use a `fileId` generated for a document in the [**Uploading a New Document**](/4.0/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/uploading-a-new-document) scenario.
```json
{
  "docs": [
    {
      "customId": "119407",
      "fileId": "c4e6f0b0-b70a-4141-993b-d304f38ec8e2",
      "documentType": "BULK",
      "documentLabel": null,
      "minioPath": "flowx-dev-process-id-119408/119407/466_BULK.pdf",
      "downloadPath": "internal/files/c4e6f0b0-b70a-4141-993b-d304f38ec8e2/download",
      "noOfPages": 2,
      "error": null
    }
  ],
  "error": null
}
```
## Configuring the deletion process

To delete files, follow these steps:
1. Create a process that includes a [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node) node and [**Message Event Receive (Kafka)**](/4.0/docs/building-blocks/node/message-send-received-task-node#configuring-a-message-receive-task-node) node:
* Use the **Send Message Task** node to send the delete request.
* Use the **Receive Message Task** node to receive the delete reply.
2. Configure the **first node (Send Message Task)** by adding a **Kafka Send Action**.

3. Specify the [**Kafka topic**](../../../../../setup-guides/plugins-setup-guide/documents-plugin-setup) for sending the delete request.
To identify defined topics in your environment:
* Navigate to **Platform Status > FLOWX Components > document-plugin-mngt** and click the eye icon on the right side.
* In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → file → delete**. Here you will find the in and out topics for deleting files.

4. Fill in the request message body.

#### Message request example
Example of a message following the custom integration data model:
```json
{
  "customId": "119408",
  "fileId": "c4e6f0b0-b70a-4141-993b-d304f38ec8e2"
}
```
* **fileId**: The ID of the file.
* **customId**: The custom ID.
5. Configure the **second node (Receive Message Task)** by adding a Data stream topic:

The response will be sent to the `..out` Kafka topic.
### Receiving the reply

The reply body should contain the following values:
* **customId**: The unique identifier for your document (it could be for example the ID of a client)
* **fileId**: The ID of the file
* **documentType**: The document type
* **error**: Any error message in case of an error during the deleting process
#### Message response example
```json
{
  "customId": "119408",
  "fileId": "c4e6f0b0-b70a-4141-993b-d304f38ec8e2",
  "documentType": null,
  "error": null
}
```
# Documents plugin
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview
The Documents plugin can be easily added to your custom FlowX.AI deployment to enhance the core platform capabilities with functionality specific to document handling.

The plugin offers the following features:
* **Document storage and editing**: Easily store and make changes to documents.
* **Document generation**: Generate documents using predefined templates and custom process-related data.
* **WYSIWYG editor**: Create various templates using a user-friendly ["What You See Is What You Get" (WYSIWYG) editor](../../wysiwyg).

* **Template import**: Import templates created in other environments.

When exporting a document template, it is transformed into a JSON file that can be imported later.
* **Document conversion**: Convert documents from PDF to JPEG format.
* **Document splitting**: Split bulk documents into smaller separate documents.
* **Document editing**: Add generated barcodes, signatures, and assets to documents.
* **OCR integration**: When a document requires OCR (Optical Character Recognition) processing, the Documents Plugin initiates the interaction by passing the document data or reference to the [**OCR plugin**](../ocr-plugin).
The Documents Plugin can be easily deployed on your chosen infrastructure, preloaded with industry-specific document templates using an intuitive WYSIWYG editor, and connected to the FLOWX Engine through Kafka events.
* [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#message-send-task)
* [**Receive Message Task(Kafka)**](../../../../building-blocks/node/message-send-received-task-node#message-receive-task)
Performance considerations:
To ensure optimal performance while using the Documents Plugin, consider the following recommendations:
* For large or complex documents, it is recommended to allocate sufficient system resources, such as CPU and memory, to handle the conversion/editing process efficiently.
* Avoid processing extremely large files or a large number of files simultaneously, as it may impact performance and responsiveness.
* Monitor system resources during the generating/editing/converting etc. process and scale resources as needed to maintain smooth operations.
Following these performance considerations will help optimize document processing and improve overall system performance.
## Using Documents plugin
Once you have deployed the Documents Plugin in your infrastructure, you can start creating various document templates. After selecting a document template, proceed to create a process definition by including [**Send Message/Receive Message**](../../../../building-blocks/node/message-send-received-task-node) (Kafka nodes) and custom document-related actions in your process flow.
Before adding these actions to your **process definition**, follow these steps:
1. Ensure that all custom information is properly configured in the plugin database, such as the document templates to be used.
2. For each event type, you will need a corresponding Kafka topic.
The `..in` topic names configured for the plugin should match [**the `..out` topic names used when configuring the engine**](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka). Make sure to use an outgoing topic name that matches the pattern configured in the Engine. The value can be found and overwritten in the `KAFKA_TOPIC_PATTERN` variable.
For more details about Process Engine Kafka topic configuration, click [**here**](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka).
To make a request to the plugin, the process definition needs to include an action of type **Kafka send** defined on a [**Send Message Task**](../../../../building-blocks/node/message-send-received-task-node#message-send-task) node. The action parameter should have the key `topicName` and the corresponding topic name as its value.
To receive a reply from the plugin, the process definition needs to include a [**Receive Message Task**](../../../../building-blocks/node/message-send-received-task-node#message-receive-task) node with a node value having the key `topicName` and the topic name as its value.
Once the setup is complete, you can begin adding custom actions to your processes.
Let's explore a few examples that cover both the configuration and integration with the engine for all the use cases supported by the plugin:
# Generating documents
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/generating-from-html-templates
One of the key features of the Documents plugin is the ability to generate new documents using custom templates, which can be pre-filled with data relevant to the current process instance.
These templates can be easily configured using the [**What You See Is What You Get (WYSIWYG)**](../../wysiwyg) editor. You can create and manage your templates by accessing the **Document Templates** section in [**FlowX Designer**](../../../../flowx-designer/overview).

## Generating documents from HTML templates
The Documents plugin simplifies the document generation process through predefined templates. This example focuses on generating documents using HTML templates.
## Prerequisites
1. **Access permissions**: Ensure that you have the necessary permissions to manage documents templates (more details, [**here**](../../../../../setup-guides/plugins-access-rights/configuring-access-rights-for-documents)). The user account used for these operations should have the required access rights.
2. **Kafka configuration**: Verify that the Kafka messaging system is properly configured and accessible. The documents plugin relies on Kafka for communication between nodes.
* **Kafka topics**: Familiarize yourself with the Kafka topics used for these operations (later in this section)
## Creating an HTML template
To begin the document generation process, HTML templates must be created or imported. Utilize the [**WYSIWYG**](/4.1.x/docs/platform-deep-dive/plugins/wysiwyg) editor accessible through **FLOWX Designer → Plugins → Document templates**.
Learn more about managing HTML templates:
Before using templates, ensure they are in a **Published** state. Document templates marked as **Draft/In Progress** will not undergo the generation process.

We've created a comprehensive FlowX.AI Academy course guiding you through the process of **Creating a Document Template in Designer**. Access the course [**here**](https://academy.flowx.ai/catalog/info/id:172) for detailed instructions and insights.
## Sending a document generation request
Consider a scenario where you need to send a personalized document to a customer based on specific details they provide. Create a process involving a [**User task**](../../../../building-blocks/node/user-task-node), a [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#message-send-task), and a [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#message-receive-task).
In the initial user task node, users input information.
The second node (Kafka send) creates a request with a specified template and keys corresponding to user-filled values.
The third node receives the reply with the generated document under the specified key.

1. Add a **User task** and configure it with UI elements for user input.
In this example, three UI elements, comprising two input fields and a select (dropdown), will be used. Subsequently, leverage the keys associated with these UI elements to establish a binding with the template. This binding enables dynamic adjustments to the template based on user-input values, enhancing flexibility and customization.
2. Configure the second node (Send Message Task) by adding a **Kafka send action**.
3. Specify the [**Kafka topic**](../../../../../setup-guides/plugins-setup-guide/documents-plugin-setup#kafka-configuration) to which the request should be sent, enabling the Process Engine to process it; in our example it is `ai.flowx.in.document.html.in`.
To identify the topics defined in your current environment, follow these steps:
1. From the **FlowX Designer** main screen, navigate to the **Platform Status** menu at the bottom of the left sidebar.
2. In the FlowX Components list, scroll to the **document-plugin-mngt** line and press the eye icon on the right side.
3. In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → document → generate**. Under HTML and PDF you will find the in and out topics for generating HTML or PDF documents.


4. Fill in the message with the expected values in the request body:
```json
{
  "documentList": [
    {
      "customId": "ClientsFolder",
      "templateName": "AccountCreation",
      "language": "en",
      "data": {
        "firstInput": "${application.client.firstName}",
        "secondInput": "${application.client.lastName}",
        "thirdInput": "${application.client.accountType}"
      },
      "includeBarcode": false // set to true if you want to include a barcode
    }
  ]
}
```
* **documentList**: A list of documents to be generated with properties (name and value to be replaced in the document templates)
* **customId**: Client ID
* **templateName**: The name of the template that you want to use (defined in the **Document templates** section)
* **language**: Should match the language set on the template (a template can be created for multiple languages as long as they are defined in the system, see [**Languages**](/4.1.x/docs/platform-deep-dive/core-extensions/content-management/languages) section for more information)
When templates are used during process execution, the extracted default values follow the default language configured in the system. For instance, if the default language is English, the template's default values will be those assigned to the English version. Make sure the language of your template matches the system's default language.
To verify the default language of the platform, navigate to **FlowX Designer → Content Management → Languages**.
* **includeBarcode**: True/False
* **data**: A map containing the values that should be replaced in the document template (data that comes from user input). The keys used in the map should match the ones defined in the HTML template and your UI elements.
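A request body with this shape can be assembled as in the following sketch (the helper name and its defaults are illustrative; the field contract is the one described above):

```python
import json

def build_generation_request(custom_id: str, template: str, data: dict,
                             language: str = "en", barcode: bool = False) -> str:
    """Assemble a documentList payload for one template.

    The keys in 'data' must match the placeholders defined in the HTML
    template and the UI element keys bound to them.
    """
    return json.dumps({
        "documentList": [{
            "customId": custom_id,
            "templateName": template,
            "language": language,
            "data": data,
            "includeBarcode": barcode,
        }]
    })
```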
Ultimately, the configuration should resemble the presented image:

5. Configure the third node (Receive Message Task):
* Add the topic where the response will be sent; in our example `ai.flowx.updates.document.html.generate.v1` and its key: `generatedDocuments`

## Receiving the document generation reply
The response, containing information about the generated documents, is sent to the output Kafka topic defined in the Kafka Receive Event Node. The response includes details such as file IDs, document types, and storage paths.

Here is an example of a response after generation (received on `generatedDocuments` key):
```json
{
  "generatedFiles": {
    "ClientsFolder": {
      "AccountCreation": {
        "customId": "ClientsFolder",
        "fileId": "320f4ec2-a509-4aa9-b049-87224594802e",
        "documentType": "AccountCreation",
        "documentLabel": "GENERATED_PDF",
        "minioPath": "{{your_bucket}}/2024/2024-01-15/process-id-865759/ClientsFolder/6869_AccountCreation.pdf",
        "downloadPath": "internal/files/320f4ec2-a509-4aa9-b049-87224594802e/download",
        "noOfPages": 1,
        "error": null
      }
    }
  },
  "error": null
}
```
* **generatedFiles**: List of generated files.
* **customId**: Client ID.
* **fileId**: The ID of the generated file.
* **documentType**: The name of the document template.
* **documentLabel**: A label or description for the document.
* **minioPath**: The path where the converted file is saved. It represents the location of the file in the storage system, whether it's a MinIO path or an S3 path, depending on the specific storage solution.
* **downloadPath**: The download path for the converted file. It specifies the location from where the file can be downloaded.
* **noOfPages**: The number of pages in the generated file.
* **error**: If there were any errors encountered during the generation process, they would be specified here. In the provided example, the value is null, indicating no errors.
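Since `generatedFiles` is a nested map (custom ID → template name → file details), collecting all generated file IDs from a reply can be sketched as:

```python
def collect_file_ids(reply: dict) -> list:
    """Walk generatedFiles (custom ID -> template name -> file details)
    and collect every generated fileId."""
    ids = []
    for templates in reply.get("generatedFiles", {}).values():
        for file_info in templates.values():
            ids.append(file_info["fileId"])
    return ids
```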
## Displaying the generated document
Once a document is generated, you can display it using the Document Preview UI element. To facilitate this, let's extend the existing process with two additional nodes:
* **Task node**: This node is designated to generate the document path from the storage solution, specifically tailored for the Document Preview.
* **User task**: In this phase, we seamlessly integrate the Document Preview UI Element. Here, we incorporate a key that contains the download path generated in the preceding node.
For detailed instructions on displaying a generated or uploaded document, refer to the example provided in the following section:
# Getting URLs
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/getting-urls-to-documents
In certain scenarios, obtaining URLs pointing to uploaded documents for use in integrations is essential. This process involves adding a custom action to your workflow that requests URLs from the Documents plugin.
## Prerequisites
Before retrieving document URLs, ensure:
1. **Access Permissions**: Ensure that the user account has the necessary access rights.
2. **Kafka Configuration**:
* **Verify Kafka Setup**: Ensure proper configuration and accessibility of the Kafka messaging system.
* **Kafka Topics**: Understand the Kafka topics used for these operations.
3. **Document Types**: Prepare the information needed for the request:
* `types`: A list of document types.
## Configuring the getting URLs process

To obtain document URLs, follow these steps:
Create a process that will contain the following types of nodes:
* [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node) - used to send the get URLs request
* [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node) - used to receive the get URLs reply
* [**User Task**](../../../../building-blocks/node/user-task-node) - where you will perform the upload action
Start configuring the **User Task** node:
#### Node Config
* **Data stream topics**: Add the topic where the response will be sent; in this example `ai.flowx.updates.document.html.persist.v1` and its key: `uploadedDocument`.

#### Actions
We will configure the following node actions:
* Upload File action ("uploadDocument") will have two child actions:
* Send Data to User Interface ("uploadDocumentSendToInterface")
* Business Rule ("uploadDocumentBR")
* Save Data action ("save")

Configure the parameters for the **Upload Action**:

For more details on uploading a document and configuring the file upload child actions, refer to the following sections:
* [**Upload document**](./uploading-a-new-document)
* [**Upload action**](../../../../building-blocks/actions/upload-file-action)
Next, configure the **Send Message Task (Kafka)** node by adding a **Kafka Send Action** and specifying the `..in` [**Kafka topic**](../../../../../setup-guides/plugins-setup-guide/documents-plugin-setup#kafka-configuration) to send the request.
Identify defined topics in your environment:
* Navigate to **Platform Status > FlowX Components > document-plugin-mngt** and press the eye icon on the right side.
* In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → document → get**. Here you will find the in and out topics for getting URLs for documents.

Fill in the body of the request message for the Kafka Send action to send the get URLs request:

* `types`: A list of document types.
#### Message request example
Example of a message following the custom integration data model:
```json
{
  "types": [
    "119435",
    "119435"
  ]
}
```
Configure the [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-receive-task-node) by adding the `..out` kafka topic on which the response will be sent.

## Receiving the reply

The response body should include the following values:
* **success**: A boolean indicating whether the document exists and the URL was generated successfully.
* **fullName**: The full name of the document file, including the directory path.
* **fileName**: The name of the document file without the extension.
* **fileExtension**: The extension of the document file.
* **url**: The full download URL for the document.
#### Message response example
```json
[
  {
    "success": true,
    "fullName": "2024/2024-08-27/process-id-1926248/1234_1926248/7715_1926248.pdf",
    "fileName": "2024/2024-08-27/process-id-1926248/1234_1926248/7715_1926248",
    "fileExtension": "pdf",
    "url": "http://minio:9000/qualitance-dev-paperflow-devmain/2024/2024-08-27/process-id-1926248/1234_1926248/7715_1926248.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minio%2F20240827%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240827T150257Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=575333697714249e9deb295359f5ba9365f618f53303bf5583ca30a9b1c45d84"
  }
]
```
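Consumers of this reply typically need only the URLs of the documents that were resolved successfully. A minimal plain JavaScript sketch (the reply array below is shortened, hypothetical sample data, not a real response):

```javascript
// Shortened, hypothetical reply following the shape of the example above.
const reply = [
  {
    success: true,
    fullName: "2024/2024-08-27/process-id-1926248/1234_1926248/7715_1926248.pdf",
    fileName: "2024/2024-08-27/process-id-1926248/1234_1926248/7715_1926248",
    fileExtension: "pdf",
    url: "http://minio:9000/bucket/7715_1926248.pdf?X-Amz-Expires=604800"
  },
  // A document that could not be resolved; fields other than success may be null.
  { success: false, fullName: null, fileName: null, fileExtension: null, url: null }
];

// Keep only the entries for which a URL was generated successfully.
function extractUrls(reply) {
  return reply.filter((doc) => doc.success && doc.url).map((doc) => doc.url);
}

console.log(extractUrls(reply));
```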
# Listing stored files
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/listing-stored-files
If you are using an S3-compatible cloud storage solution such as MinIO, the stored files are organized into buckets. A bucket serves as a container for objects stored in Amazon S3. The Documents Plugin provides a REST API that allows you to easily view the files stored in the buckets.
To determine the partitioning strategy used for storing generated documents, you can access the following key in the configuration:
`application.file-storage.partition-strategy`
```yaml
application:
  defaultLocale: en
  supportedLocales: en, ro
  # fileStorageType is the configuration that activates one FileContentService implementation. Valid values: minio / fileSystem
  file-storage:
    type: s3
    disk-directory: MS_SVC_DOCUMENT
    partition-strategy: NONE
```
The `partition-strategy` property can have two possible values:
* **NONE**: In this case, documents are saved in separate buckets for each process instance, following the previous method.
* **PROCESS\_DATE**: Documents are saved in a single bucket with a subfolder structure based on the process date. For example: `bucket/2022/2022-07-04/process-id-xxxx/customer-id/file.pdf`.
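To make the `PROCESS_DATE` layout concrete, here is an illustrative sketch in plain JavaScript (the real path is assembled by the Documents plugin itself, not by your code):

```javascript
// Builds a path following the PROCESS_DATE layout described above:
// bucket/<year>/<yyyy-mm-dd>/process-id-<id>/<customId>/<fileName>
function processDatePath(bucket, date, processId, customId, fileName) {
  const year = date.slice(0, 4); // date is expected in YYYY-MM-DD form
  return `${bucket}/${year}/${date}/process-id-${processId}/${customId}/${fileName}`;
}

console.log(processDatePath("bucket", "2022-07-04", "xxxx", "customer-id", "file.pdf"));
// -> bucket/2022/2022-07-04/process-id-xxxx/customer-id/file.pdf
```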
## REST API
The Documents Plugin provides the following REST API endpoints for interacting with the stored files:
### List buckets
Check out the [**List buckets API reference**](/4.0/docs/api/list-buckets) for more details.
* This endpoint returns a list of available buckets.
### List objects in a bucket
Check out the [**List objects in a bucket API reference**](/4.0/docs/api/list-objects-in-buckets) for more details.
* This endpoint retrieves a list of objects stored within a specific bucket. Replace `BUCKET_NAME` with the actual name of the desired bucket.
### Download file
Check out the [**Download file API reference**](/4.0/docs/api/download-file.mdx) for more details.
* This endpoint allows you to download a file by specifying its path or key.
# Managing templates
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/managing-html-templates
The Documents plugin provides the flexibility to define and manage HTML templates for document generation, enabling customization through various parameter types.
Additionally, the platform incorporates a [**What You See Is What You Get (WYSIWYG)**](../../wysiwyg) editor, allowing users to have a real-time, visual representation of the document or content during the editing process. Furthermore, you have the capability to test and review your template by downloading it as a PDF.
A WYSIWYG editor typically provides two main views:
* **Design View (Visual View)**: In this view, you see a visual representation of your content as it would appear when rendered in a web browser or other output medium. It closely resembles the final output, allowing you to format text, add images, and apply styles directly within the visual interface. This view aims to provide a real-time preview of the document's appearance.
* **HTML View (Source View)**: In this view, you can see and edit the underlying HTML code that corresponds to the content displayed in the Design View. It shows the raw HTML markup, providing a more detailed and technical representation of the document. You can manually edit the HTML code to make precise adjustments or to implement features not available in the visual interface.

Explore the different parameter types and their specifications:
## Configuring HTML templates
In the following example, we will create a sample template using the HTML View (Source View).
To create a document template, navigate to the **Document Templates** section in the **Designer**, select "**New document**" from the menu in the top-right corner, name your template, and click **Create**.
Now follow the next steps to design a new template:
1. **Open the WYSIWYG editor**:
Access the WYSIWYG editor within the Document Management Plugin, found in the **FlowX Designer → Plugins → Document templates** section.
* **Language Configuration**: Create a dedicated template for each language specified in your system.
To confirm the installed languages on the platform, go to **FLOWX.AI Designer → Content Management → Languages**.
2. **Design the document header**:
Begin by creating a header section for the document, including details such as the company logo and title.
```html
Offer Document
```
Data specifications (process data):
```json
"data": {
  "companyLogo": "INSERT_BASE64_IMAGE",
  "offerTitle": "Client Offer"
}
```
3. **Text Parameters for Client Information**:
Include a section for client-specific information using text parameters.
Text parameters enable the inclusion of dynamic text in templates, allowing for personalized content.
```html
Client Information
Client Name:
Client ID:
```
Data specifications:
```json
"data": {
  "clientName": "John Doe",
  "clientId": "JD123456"
}
```

4. **Dynamic table for offer details:**
Create a dynamic table to showcase various details of the offer.
```html
Offer Details
Item
Description
Price
```
Data specifications:
```json
"data": {
  "offerItems": [
    { "name": "Product A", "description": "Description A", "price": "$100" },
    { "name": "Product B", "description": "Description B", "price": "$150" },
    { "name": "Product C", "description": "Description C", "price": "$200" }
  ]
}
```

5. **Dynamic sections for certain conditions:**
Dynamic sections allow displaying specific content based on certain conditions. For example, you can display a paragraph only when a certain condition is met.
```html
Dynamic Sections for Certain Conditions
This is displayed if it is a preferred client. They are eligible for special discounts!
This is displayed if the client has specific requests. Please review them carefully.
This is displayed if the client has an active contract with us.
```
Data specifications:
```json
"data": {
  "clientName": "John Doe",
  "clientId": "JD123456",
  "isPreferredClient": false,
  "hasSpecialRequest": false,
  "isActiveContract": true
}
```
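As a plain JavaScript illustration of the conditional logic above (not the FlowX template engine; the section names are hypothetical), the following sketch evaluates which of the three sections would be rendered for the sample data:

```javascript
// Sample data matching the specification above.
const data = {
  isPreferredClient: false,
  hasSpecialRequest: false,
  isActiveContract: true
};

// Returns the identifiers (hypothetical names) of the sections that would be shown.
function visibleSections(data) {
  const sections = [];
  if (data.isPreferredClient) sections.push("preferred-client-discounts");
  if (data.hasSpecialRequest) sections.push("special-requests");
  if (data.isActiveContract) sections.push("active-contract");
  return sections;
}

console.log(visibleSections(data));
// -> [ 'active-contract' ]
```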

6. **Lists**:
Lists are useful for displaying the values selected in a checkbox as a bulleted list.
```html
Income source:
```
Data specifications:
```json
{
  "data": {
    "incomeSource": [
      "Income 1",
      "Income 2",
      "Income 3",
      "Income 4"
    ]
  }
}
```

7. **Include Image for Authorized Signature:**
Embed an image for the authorized signature at the end of the document.
Data Specifications:
```json
"data": {
  "signature": "INSERT_BASE64_IMAGE"
}
```

8. **Barcodes**:
Set the `includeBarcode` parameter to `true` if you want to include a barcode. For information on how to use barcodes and the OCR plugin, check the following section:
9. **Checkboxes for Consent**:
Consent checkboxes in HTML templates are commonly used to obtain explicit agreement or permission from users before proceeding with certain actions or processing personal data.

10. **Data Model**:
The document template editor also provides a **Data Model** tab. Here you define parameters: dynamic data fields that are replaced at generation time with the values from the payload, such as a first name or a company name.


## Styling
HTML template styling plays a crucial role in enhancing the visual appeal, readability, and user experience of generated documents.
We will apply styling to the previously created HTML template using the **Source** view of the editor.
In the end the template will look like this:

## Samples
### Without dynamic data
The final result after generating the template without dynamic data:
Download PDF sample [**here**](../../../assets/HTMLExample.pdf).
### With dynamic data
The final result after generating the template with the following dummy process data:
```json
"data": {
  "offerTitle": "Client Offer",
  "clientName": "John Doe",
  "clientId": "JD123456",
  "isPreferredClient": false,
  "hasSpecialRequest": false,
  "isActiveContract": true,
  "offerItems": [
    { "name": "Product A", "description": "Description A", "price": "$100" },
    { "name": "Product B", "description": "Description B", "price": "$150" },
    { "name": "Product C", "description": "Description C", "price": "$200" }
  ],
  "incomeSource": [
    "Income 1",
    "Income 2",
    "Income 3",
    "Income 4"
  ]
}
```
Download a PDF sample [**here**](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/726_ManageHTMLTemplate%20.pdf).
# Splitting documents
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/splitting-a-document
You can split a document into multiple parts using the Documents plugin.
This guide provides step-by-step instructions on how to split a document, such as when a user uploads a bulk scanned file that needs to be separated into distinct files.
## Prerequisites
1. **Access Permissions**: Ensure that you have the necessary permissions to use the Documents Plugin. The user account used for these operations should have the required access rights.
2. **Kafka Configuration**: Verify that the Kafka messaging system is properly configured and accessible. The Documents Plugin relies on Kafka for communication between nodes.
* **Kafka Topics**: Familiarize yourself with the Kafka topics used for these operations (covered later in this section).
3. Before initiating the splitting process, ensure you have the unique ID of the file in the storage solution. This ensures that the splitting is performed on an already uploaded file.
Ensure that the uploaded document contains more than one file.
You have two options to obtain the file ID:
* Extract the file ID from a [**Response Message**](./uploading-a-new-document#response-message-example-2) of an upload file request. For more details, refer to the [**upload process documentation**](./uploading-a-new-document).
* Extract the file ID from a [**Response Message**](./generating-from-html-templates#receiving-the-document-generation-reply) of a generate from template request. For more details, refer to the [**document generation reply documentation**](./generating-from-html-templates).
In the following example, we will use the `fileId` generated for a document with multiple files using [**Uploading a New Document**](./uploading-a-new-document) scenario.
```json
{
  "customId": "119407",
  "fileId": "446c69fb-32d2-44ba-a0b2-02dbb55e7eea",
  "documentType": "BULK",
  "documentLabel": null,
  "minioPath": "flowx-dev-process-id-119407/119407/465_BULK.pdf",
  "downloadPath": "internal/files/446c69fb-32d2-44ba-a0b2-02dbb55e7eea/download",
  "noOfPages": null,
  "error": null
}
```
## Configuring the splitting process

To create a process that splits a document into multiple parts, follow these steps:
1. Create a process that includes a [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-send-task-node) node and a [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-receive-task-node) node:
* Use the **Send Message Task** node to send the splitting request.
* Use the **Receive Message Task** node to receive the splitting reply.
2. Configure the **first node (Send Message Task)** by adding a **Kafka Send Action**.
3. Specify the [**Kafka topic**](../../../../../setup-guides/plugins-setup-guide/documents-plugin-setup#kafka-configuration) where you want to send the splitting request.

To identify the topics defined in your current environment, follow these steps:
1. From the FLOWX.AI main screen, navigate to the Platform Status menu at the bottom of the left sidebar.
2. In the FLOWX Components list, scroll to the document-plugin-mngt line and press the eye icon on the right side.
3. In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → document → split**. Here you will find the in and out topics for splitting documents.

4. Fill in the body of the message request.

#### Message request example
```json
{
  "parts": [
    {
      "documentType": "BULK",
      "customId": "119407",
      "pagesNo": [
        1,
        2
      ],
      "shouldOverride": true
    }
  ],
  "fileId": "446c69fb-32d2-44ba-a0b2-02dbb55e7eea"
}
```
* **fileId**: The file ID of the document that will be split.
* **parts**: A list containing information about the expected document parts.
  * **documentType**: The document type.
  * **customId**: The unique identifier for your document (for example, the ID of a client).
  * **shouldOverride**: A boolean value (`true` or `false`) indicating whether to override an existing document if one with the same name already exists.
  * **pagesNo**: The pages that you want to separate from the document.
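A small helper sketching how the request body above could be assembled in JavaScript; the field names follow the example, but the helper itself is hypothetical:

```javascript
// Assembles a split request; `parts` is a list of { documentType, customId, pages }.
// shouldOverride defaults to true (hypothetical default, for illustration only).
function buildSplitRequest(fileId, parts) {
  return {
    parts: parts.map(({ documentType, customId, pages, shouldOverride = true }) => ({
      documentType,
      customId,
      pagesNo: pages,
      shouldOverride
    })),
    fileId
  };
}

const request = buildSplitRequest("446c69fb-32d2-44ba-a0b2-02dbb55e7eea", [
  { documentType: "BULK", customId: "119407", pages: [1, 2] }
]);
console.log(JSON.stringify(request));
```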
5. Configure the **second node (Receive Message Task)** by adding a Data stream topic:

The response will be sent to this `..out` Kafka topic.
## Receiving the reply

The following values are expected in the reply body:
* **docs**: A list of documents.
* **customId**: The unique identifier for your document (matching the name of the folder in the storage solution where the document is uploaded).
* **fileId**: The ID of the file.
* **documentType**: The document type.
* **minioPath**: The storage path for the document.
* **downloadPath**: The download path for the document.
* **noOfPages**: The number of pages in the document.
* **error**: Any error message in case of an error during the splitting process.
Here's an example of the response JSON:
#### Message response example
```json
{
  "docs": [
    {
      "customId": "119407",
      "fileId": "c4e6f0b0-b70a-4141-993b-d304f38ec8e2",
      "documentType": "BULK",
      "documentLabel": null,
      "minioPath": "flowx-dev-process-id-119408/119407/466_BULK.pdf",
      "downloadPath": "internal/files/c4e6f0b0-b70a-4141-993b-d304f38ec8e2/download",
      "noOfPages": 2,
      "error": null
    }
  ],
  "error": null
}
```
The split document is now available in the storage solution and can be downloaded:

# Creating and uploading a new document
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/uploading-a-new-document
A comprehensive guide to integrating document creation from templates, managing uploads, and configuring workflows for document processing.
User task nodes provide a flexible framework for defining and configuring UI templates and actions, such as an upload file button.

## Prerequisites
Before you begin, ensure the following prerequisites are met:
* **Access Permissions**: Ensure that you have the necessary permissions to use the Documents Plugin. The user account used for these operations should have the required access rights.
* **Kafka Configuration**: Verify that the Kafka messaging system is properly configured and accessible. The Documents Plugin relies on Kafka for communication between nodes.
* **Kafka Topics**: Familiarize yourself with the Kafka topics used for these operations (later in this section).
To upload a document using this process, follow the steps outlined below.
## Step-by-step guide: uploading and previewing a document
In the [previous section](./generating-from-html-templates), you learned how to generate documents from HTML templates. This section focuses on creating a process where users can generate a document, review and sign it, and subsequently upload it.
### Process overview
This process involves several key nodes, each performing specific tasks to ensure smooth document creation, preview, and upload:
* **Start and End Nodes**: These nodes mark the beginning and end of the process, respectively.
* **User Task Node**: Collects user input necessary for document generation.
* **Send and Receive Message Tasks (Kafka)**: Handle communication with Kafka for generating the document and retrieving it after processing.
* **Service Task Node**: Appends the path of the generated document to the process, enabling further actions.
* **User Task Nodes**: Facilitate the preview of the generated document and manage the upload of the signed document.
## Configuring the process
Follow the steps outlined in [**Generating from HTML templates**](./generating-from-html-templates) to configure the document generation part of the process.

If you only need to upload a new file without generating it from templates, skip the template generation steps.
After configuration, your request and response messages should resemble the examples below.
#### Request message example
This JSON structure represents a Kafka message sent through the `..in` topic to initiate a request in the Process Engine. It includes the information needed to generate an "AccountCreation" document with the custom ID "119246" in English. The data specifies client details extracted dynamically from user input (first name, last name, age, country) and company information (name, registration date).
This is an example of a message that follows the custom integration data model.
```json
{
  "documentList": [
    {
      "customId": "119246", // this will match the name of the folder in the storage solution
      "templateName": "AccountCreation",
      "language": "en",
      "data": {
        "application": {
          "client": {
            "firstName": "John",
            "lastName": "Doe",
            "age": "33",
            "country": "AU"
          },
          "company": {
            "name": "ACME",
            "registrationDate": "24.01.2024"
          }
        }
      },
      "includeBarcode": false
    }
  ]
}
```
#### Response message example
This JSON structure represents the response received on the `..out` Kafka topic, where the Process Engine expects a reply. It contains details about the generated PDF file corresponding to the custom ID "119246" and the "AccountCreation" template. The response provides file-related information such as the file ID, document type, document label, storage path, download path, number of pages, and any potential errors (null if none). The paths in the response facilitate access to and download of the generated PDF file.
```json
{
  "generatedFiles": {
    "119246": {
      "AccountCreation": {
        "customId": "119246",
        "fileId": "f705ae5b-f301-4700-b594-a63b50df6854",
        "documentType": "AccountCreation",
        "documentLabel": "GENERATED_PDF",
        "minioPath": "flowx-dev-process-id-119246/119246/457_AccountCreation.pdf", // path to the document in the storage solution
        "downloadPath": "internal/files/f705ae5b-f301-4700-b594-a63b50df6854/download", // link for download
        "noOfPages": 1,
        "error": null
      }
    }
  },
  "error": null
}
```
Configure the **preview** part of the process.

#### Service task node
We will configure the service task node to construct the file path for the generated document.
Configure a business rule to construct a file path for the generated document. Ensure the base admin path is specified.
Ensuring the base admin path is specified is crucial, as it grants the required administrative rights to access the endpoint responsible for document generation.
#### Actions
**Action Edit**
* **Action Type**: Set to **Business Rule**
* **Trigger Type**: Choose **Automatic** because it is not a user-triggered action
* **Required Type**: Set as **Mandatory**

**Parameters**
* **Language**: We will use **MVEL** for this example
* **Body Message**: Fill in the body message request
```js
adminPath = "https://admin-main.playground.flowxai.dev/document/";
processInstanceId = input.?generatedDocuments.?generatedFiles.keySet().toArray()[0];
downloadPath = input.?generatedDocuments.?generatedFiles.?get(processInstanceId).Company_Client_Document.downloadPath;
if (downloadPath != null) {
  output.put("generatedDocuments", {
    "filePath": adminPath + downloadPath
  });
}
```
* **adminPath**: Base URL for the admin path.
```java
adminPath = "https://admin-main.playground.flowxai.dev/document/";
```
* **processInstanceId**: Extracts the process instance ID from the input. Assumes an input structure with a generatedDocuments property containing a generatedFiles property. Retrieves the keys, converts them to an array, and selects the first element.
```java
processInstanceId = input.?generatedDocuments.?generatedFiles.keySet().toArray()[0];
```
* **downloadPath**: Retrieves the downloadPath property using the obtained processInstanceId.
```java
downloadPath = input.?generatedDocuments.?generatedFiles.?get(processInstanceId).Company_Client_Document.downloadPath;
```
* **if condition**: Checks if downloadPath is not null and constructs a new object in the output map.
```java
if (downloadPath != null) {
  output.put("generatedDocuments", {
    "filePath": adminPath + downloadPath
  });
}
```
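For readers less familiar with MVEL's null-safe `.?` operator, roughly the same logic can be written in plain JavaScript using optional chaining. This is illustrative only; the `input` object below is a hypothetical stand-in for the process data:

```javascript
const adminPath = "https://admin-main.playground.flowxai.dev/document/";

// Hypothetical input mirroring the structure the MVEL rule expects.
const input = {
  generatedDocuments: {
    generatedFiles: {
      "1234": {
        Company_Client_Document: {
          downloadPath: "internal/files/abc/download"
        }
      }
    }
  }
};

const output = {};
// The first key of generatedFiles is the process instance ID.
const processInstanceId = Object.keys(input?.generatedDocuments?.generatedFiles ?? {})[0];
const downloadPath =
  input?.generatedDocuments?.generatedFiles?.[processInstanceId]
    ?.Company_Client_Document?.downloadPath;
if (downloadPath != null) {
  output.generatedDocuments = { filePath: adminPath + downloadPath };
}
console.log(output);
```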
### User Task
Now we will configure the user task to preview the generated document.
#### Actions
Configure the **Actions edit** section:
* **Action Type**: Set to **Save Data**.
* **Trigger Type**: Choose **Manual** to allow user-triggered action.
* **Required Type**: Set as **Mandatory**.

Let's see what we have until now.
The screen where you can fill in the client details:

The screen where you can preview and download the generated document:

Configure the user task where we will upload the signed document generated from the previous document template.
#### Node Config
* **Swimlane**: Choose a swimlane (if multiple) to restrict access to specific user roles.
* **Stage**: Assign a stage to the node.
* **Data stream topics**: Add the topic where the response will be sent; in this example `ai.flowx.updates.document.html.persist.v1` and its key: `uploadedDocument`.

#### Actions
Configure the following node actions:
* Upload File action with two child actions:
* Business Rule
* Send Data to User Interface
* Save Data action

#### Upload file action
This is a standard predefined FlowX Node Action for uploading files. This is done through Kafka and by using `persist` topics.
#### Action edit
* **Action Type**: Set to **Upload File**.
* **Trigger Type**: Choose **Manual** to allow user-triggered action.
* **Required Type**: Set it as **Optional**.
* **Repeatable**: Check this option if the action can be triggered multiple times.
#### Parameters
* **Topics**: Kafka topic where the file will be posted, in this example `ai.flowx.in.document.persist.v1`.
To identify the topics defined in your current environment, follow these steps:
* From the FLOWX.AI main screen, navigate to the **Platform Status** menu at the bottom of the left sidebar.
* In the FLOWX Components list, scroll to the **document-plugin-mngt** line and press the eye icon on the right side.
* In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → document → persist**. Here you will find the in and out topics for persisting (uploading) documents.

* **Document Type**: Metadata for the document plugin, in this example `BULK`.
* **Folder**: Configure a value by which the file will be identified; in this example it will be the `${processInstanceId}` (replaced at runtime with a generated process instance ID).
* **Advanced configuration (Show headers)**: This represents a JSON value that will be sent on the headers of the Kafka message, in this example:
```json
{
  "processInstanceId": ${processInstanceId},
  "destinationId": "upload_document",
  "callbacksForAction": "uploadDocument"
}
```
`callbacksForAction` - the value of this key is a string that specifies a callback action associated with the "upload\_document" destination ID (node). This is part of an event-driven system (Kafka send action) where this callback will be called once the "upload\_document" action is completed.
#### Data to send
* **Keys**: Used when data is sent from the frontend via an action for data validation.


Now, configure the child actions of Upload File Action.
#### Business rule
This is necessary to create the path to display the uploaded document.
**Action Edit**
* **Order**: Set to **1** so it will be processed before the second child action.
* **Action Type**: Set to **Business Rule**.
* **Trigger Type**: Choose **Automatic**, it does not need user intervention.
* **Required Type**: Set as **Mandatory**.
* **Repeatable**: Check this option if the action can be triggered multiple times.
**Parameters**
* **Language**: In this example we will use **MVEL**.
* **Body Message**: Fill in the body of the message request by applying a logic similar to the one utilized in configuring the "preview\_document" node. Establish a path that will be later employed to showcase the uploaded document within a preview UI component.
```js
adminPath = "https://admin-main.playground.flowxai.dev/document/";
downloadPath = input.?uploadedDocument.?downloadPath;
if (downloadPath != null) {
  output.put("uploadedDocument", {
    "filePath": adminPath + downloadPath
  });
}
```


#### Send Data to User Interface
This is necessary to send the previously created information to the frontend.
**Action Edit**
* **Order**: Set to **2** so it will be processed after the previously created Business Rule.
* **Action Type**: Set to **Send data to user interface**.
* **Trigger Type**: Choose **Automatic**, it does not need user intervention.
* **Required Type**: Set as **Mandatory**.
* **Repeatable**: Check this option if the action can be triggered multiple times.
**Parameters**
* **Message Type**: Set to **Default**.
* **Body Message**: Populate the body of the message request; this object will be utilized to bind it to the document preview UI element.
```json
{
  "uploadedDocument": {
    "filePath": "${uploadedDocument.filePath}"
  }
}
```
* **Target Process**: Specifies the running process instance to which this message should be sent; set it to **Active Process**.


#### Save Data
Configure the last node action to save all the data.
**Action edit**
* **Order**: Set to **3**.
* **Action Type**: Set to **Save Data**.
* **Trigger Type**: Choose **Manual** to allow user-triggered action.
* **Required Type**: Set as **Mandatory**.
#### Request message example
To initiate the document processing, a Kafka request with the following JSON payload is sent through the `..in` topic:
This is an example of a message that follows the custom integration data model.
```json
{
  "tempFileId": "05081172-1f95-4ece-b2dd-1718936710f7", // a unique identifier for the temporary file
  "customId": "119246", // a custom identifier associated with the document
  "documentType": "BULK" // the type of the document
}
```
#### Response message example
Upon successful processing, you will receive a JSON response on the `..out` topic with details about the processed document:
```json
{
  "customId": "119246",
  "fileId": "96975e03-7fba-4a03-99b0-3b30c449dfe7",
  "documentType": "BULK",
  "documentLabel": null,
  "minioPath": "flowx-dev-process-id-119246/119246/458_BULK.pdf",
  "downloadPath": "internal/files/96975e03-7fba-4a03-99b0-3b30c449dfe7/download",
  "noOfPages": null,
  "error": null
}
```
Now the screen is configured for uploading the signed document:

## Receiving the reply
The response, containing information about the generated and uploaded documents as mentioned earlier, is sent to the output Kafka topic defined in the Kafka Receive Event Node. The response includes details such as file IDs, document types, and storage paths.

The reply body is expected to contain the following values:
* **customId**: A custom identifier associated with the document.
* **fileId**: The ID of the file.
* **documentType**: The document type.
* **minioPath**: The path where the uploaded file is saved. It represents the location of the file in the storage system, whether it's a MinIO path or an S3 path, depending on the specific storage solution.
* **downloadPath**: The download path for the uploaded file. It specifies the location from where the file can be downloaded.
* **noOfPages**: The number of pages in the document (if applicable).
* **filePath**: The path to the file that we built in our example so we can display the document.
# Forwarding notifications
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/notifications-plugin/forwarding-notifications-to-an-external-system
If the Notification service is not directly connected to an SMTP / SMS server and you want to use an external system for sending the notifications, you can use the notification plugin just to forward the notifications to your custom implementation.
### Check needed Kafka topics
You will need the names of the Kafka topics defined by the following environment variables:
* `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN` - topic used to trigger the request to send a notification
* `KAFKA_TOPIC_NOTIFICATION_EXTERNAL_OUT` - the notification will be forwarded on this topic to be handled by an external system
* `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` - topic used for sending replies after sending the notification
You can check the defined topics by going to **FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > notification**.
### Example: send a notification from a business flow
Let's pick a simple use case: imagine we need to send a welcome letter when we onboard a new customer. Follow these steps:
1. Configure the [template](./managing-notification-templates) that you want to use for the welcome email, using the [WYSIWYG Editor](../../wysiwyg).
Make sure that the **Forward on Kafka** checkbox is ticked, so the notification will be forwarded to an external adapter.
2. Configure the data model for the template.
3. To configure a notification template, you first need to define some information stored in the [Body](../../wysiwyg#notification-templates):
* **Type** - MAIL (for email notifications)
* ❗️**Forward on Kafka** - if this box is checked, the notification is not being sent directly by the plugin to the destination, but forwarded to another adapter
* **Language** - choose the language for your notification template
* **Subject** - enter a subject


4. Use the FlowX Designer to create a process definition.
5. Add a [**Send Message Task**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-send-task-node) and a [**Receive Message Task**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-receive-task-node) (one to send the request, one to receive the reply).
6. Check if the needed topic (defined at the following environment variable) is configured correctly: `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN`.
7. Add the proper configuration to the action, the Kafka topic, and the body message.

The **Forward on Kafka** option forwards the notification to an external adapter; make sure the needed Kafka topic for forwarding is defined/overwritten using the following environment variable: `KAFKA_TOPIC_NOTIFICATION_EXTERNAL_OUT`.
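The message body configured on the send action might look like this (the template name, language, and receiver values below are illustrative):

```json
{
  "templateName": "welcomeLetter",
  "channel": "MAIL",
  "language": "en",
  "receivers": ["john.doe@mail.com"]
}
```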
8. Run the process and look for the response, either via the **Audit log** or by checking the responses on the Kafka topic.

Response example at `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT`:
```json
{
"templateName": "welcomeLetter",
"receivers": [
"john.doe@mail.com"
],
"channel": "MAIL",
"language": "en"
}
```
# Managing notification templates
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/notifications-plugin/managing-notification-templates
You can create and manage notification templates using FlowX.AI Designer by accessing the dedicated section.

### Configuring a template
To configure a notification template, you first need to select some information stored in the **Body**:
1. **Type** - could be either MAIL or SMS notifications
2. [**Forward on Kafka**](./forwarding-notifications-to-an-external-system) - if this checkbox is ticked, the notification is not being sent directly by the plugin to the destination, but forwarded to another adapter (this is mandatory for SMS notifications templates, as they require an external adapter)
3. **Language** - choose the language for your notification template
4. **Subject** - enter a subject

#### Editing the content
You can edit the content of a notification template by using the [WYSIWYG](../../wysiwyg) editor embedded in the template body.
### Configuring the data model
Using the **Data model** tab, you can define key-value pairs (parameters) that will be displayed and reused in the editor. Multiple parameter types can be added:
* STRING
* NUMBER
* BOOLEAN
* OBJECT
* ARRAY (which has an additional `item` field)

After you have defined parameters in the **Data Model** tab, type "**#**" in the editor to trigger a dropdown from which you can choose the one you want to use.

[WYSIWYG Editor](../../wysiwyg)
### Testing the template
You can use the test function to ensure that your template configuration is working as it should before publishing it.

In the example above, some keys (marked as mandatory) were not used in the template, letting you know that you've missed some important information. After you enter all the mandatory keys, the notification test will go through:

### Other actions
When opening the contextual menu (accessible by clicking on the breadcrumbs button), you have multiple actions to work with the notifications templates:
* Publish template - publish a template (it will then be displayed in the **Published** tab); you can also clone published templates
* Export template - export a template (JSON format)
* Show history - (version history and last edited)

# Notifications plugin
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/notifications-plugin/notifications-plugin-overview
The Notifications plugin can be easily added to your custom FlowX.AI deployment. The plugin will **enhance the core platform capabilities with functionality specific to sending various notifications**.
You have the possibility to send various types of notifications:
* Email notifications
* SMS templates to mobile devices
To send notifications via the SMS channel, a third-party provider for SMS communication is needed, for example, [Twilio](https://www.twilio.com/).
* Forward custom notifications to external outgoing services
* Generate and validate OTP passwords for user identity verification
You can also use the plugin to track what notifications have been sent and to whom.
Let's go through the steps needed to deploy and set up the plugin:
## Using Notifications plugin
After deploying the notifications plugin in your infrastructure, you can start sending notifications by configuring related actions in your **process definitions**.
Before adding the corresponding **actions** in your process definition, you will need to follow a few steps:
* make sure all prerequisites are prepared, for example, the [notification templates](./managing-notification-templates) you want to use
* the database is configured properly
* for each **Kafka** event type, you will need two Kafka topics:
* one for the request sent from the FlowX Engine to the Notifications plugin
* one for the corresponding reply
**DEVELOPER**
The topic names configured for the plugin should match the ones used when configuring the engine and when adding plugin-related process actions:
* The FlowX Engine listens for messages on topics with names following a certain pattern; make sure to use an outgoing topic name that matches the pattern configured in the Engine
More details: [here](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka)
* to make a request to the plugin, the process definition needs to have an action of type `Kafka send` that has an action parameter with key `topicName` and the needed topic name as a value
* to receive a reply from the plugin, the process definition needs to have a receiving node with a node value with key `topicName` and the topic name as the value
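For instance, the `topicName` action parameter on a `Kafka send` action might be configured as follows (the topic name here is purely illustrative):

```json
{
  "topicName": "flowx-notifications-qa"
}
```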
After all the setup is in place, you can start adding custom actions to the processes.
Let's go through a few examples. These examples cover both the configuration part, and the integration with the engine for all the use cases.
# Generate OTP
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/notifications-plugin/otp-flow/generate-otp
There are some cases when you will need to generate an OTP (One Time Password) from a business flow, for example when validating an email account.
The notifications plugin handles both the actual OTP code generation and sending the code to the user using a defined [notification template](../managing-notification-templates).

## Define needed Kafka topics
**DEVELOPER**: Kafka topic names can be set/found by using the following environment variables in your Notifications plugin deployment:
* `KAFKA_TOPIC_OTP_GENERATE_IN`
* `KAFKA_TOPIC_OTP_GENERATE_OUT` - after the OTP is generated and sent to the user, this is the topic used to send the response back to the Engine.
The Engine listens for messages on topics with names following a certain pattern; make sure to use an outgoing topic name that matches the pattern configured in the Engine.
## Request to generate an OTP
Values expected in the request body:
* **templateName**: the name of the notification template that is used (created using the [WYSIWYG](../../../wysiwyg) editor)
* **channel**: notification channel (SMS / MAIL)
* **recipient**: notification receiver: email / phone number
* **notification template content parameters (for example, clientId)**: parameters that should be replaced in the [notification template](../managing-notification-templates)
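An illustrative request body, assuming a hypothetical SMS template named `otpSms` and `clientId` as a template content parameter:

```json
{
  "templateName": "otpSms",
  "channel": "SMS",
  "recipient": "+40700000000",
  "clientId": "1871201460101"
}
```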

## Response from generate OTP
Values expected in the reply body:
* **processInstanceId** = process instance ID
* **clientId** = the client id (in this case the SSN number of the client)
* **channel** = notification channel used
* **otpSent** = confirmation if the notification was sent: true or false
* **error** = error description, if any
**Example:**
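A reply containing the fields above might look like this (the values are illustrative):

```json
{
  "processInstanceId": 12345,
  "clientId": "1871201460101",
  "channel": "SMS",
  "otpSent": true,
  "error": null
}
```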

## Example: generate an OTP from a business flow
It is important to identify what is the business identifier that you are going to use to validate that OTP, it can be, for example, a user identification number.
1. Configure the templates that you want to use (for example, an SMS template).
2. Make sure the Kafka topics used to send the OTP request (`KAFKA_TOPIC_OTP_GENERATE_IN`) and to receive the response (`KAFKA_TOPIC_OTP_GENERATE_OUT`) are defined properly.
You can check the defined topics by going to **FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > otp**.
3. Use the FlowX Designer to add a new **Kafka send event** to the correct node in the process definition.
4. Add the proper configuration to the action, the Kafka topic, and configure the body message.

5. Add a node to the process definition (for the Kafka receive event).
6. Configure on what key you want to receive the response on the process instance params.

# Handling OTP
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/notifications-plugin/otp-flow/otp-flow
The notifications plugin can also be used for handling the one time password (OTP) generation and validation flow.
The flow is made of two steps, the OTP generation and the OTP validation.
### OTP Configuration
The desired character size and expiration time of the generated one-time-passwords can also be configured using the following environment variables:
* `FLOWX_OTP_LENGTH`
* `FLOWX_OTP_EXPIRE_TIME_IN_SECONDS`
Let's go through the examples for both steps:
# Validate OTP
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/notifications-plugin/otp-flow/validate-otp
## Define needed Kafka topics
Kafka topic names can be set by using environment variables:
* `KAFKA_TOPIC_OTP_VALIDATE_IN` - the event sent on this topic (with an OTP and an identifier) will check if the OTP is valid
* `KAFKA_TOPIC_OTP_VALIDATE_OUT` - the reply with the validation result is sent back to the Engine on this topic
You can check the defined topics by going to **FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > otp**.
The Engine listens for messages on topics with names following a certain pattern; make sure to use an outgoing topic name that matches the pattern configured in the Engine.
## Request to validate an OTP
Values expected in the request body:
* **processInstanceId** = process instance ID
* **clientId** = the user's unique ID in the system
* **channel** = notification channel: SMS/MAIL
* **otp** = OTP code that you received, used to compare with the one that was sent from the system
Example:
```json
{
"processInstanceId": 12345,
"clientId": "1871201460101",
"channel": "MAIL",
"otp": "1111"
}
```
### Reply from validate OTP
Values expected in the reply body:
* **clientId** = the user's unique ID in the system
* **channel** = notification channel used
* **otpValid** = confirmation of whether the provided OTP code matched the one sent by the system
Example:
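A reply containing the fields above might look like this (the values are illustrative):

```json
{
  "clientId": "1871201460101",
  "channel": "MAIL",
  "otpValid": true
}
```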

## Example: validate an OTP from a business flow
Similar to generating the OTP, you can validate the OTP that was generated for an identifier.
1. Check that the needed topics are configured correctly: (`KAFKA_TOPIC_OTP_VALIDATE_IN` and `KAFKA_TOPIC_OTP_VALIDATE_OUT`)
2. Add the actions for sending the request to validate the OTP on the node that contains the 'Generate OTP' actions
3. Add the proper configuration to the action, the Kafka topic and configure the body message.

4. Add a node to the process definition (for the [Receive Message Task](../../../../../building-blocks/node/message-send-received-task-node#receive-message-task))
5. Configure on what key you want to receive the response on the process instance parameters

# Sending a notification
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-a-notification
The plugin can be used for sending many kinds of notifications, such as emails or SMS messages, and can be easily integrated into one of your business processes.
## Configuring the process
To configure a business process that sends notifications, follow these steps:
* use **FlowX Designer** to create/edit a [notification template](./managing-notification-templates)
* use **Process Designer** to add a [**Send Message Task**](/4.0/docs/building-blocks/node/message-send-received-task-node) and a [**Receive Message Task**](../../../../building-blocks/node/message-send-received-task-node#receive-message-task)
* configure the needed node [actions](../../../../building-blocks/actions/actions)
* configure the request body
**DEVELOPER**: Make sure the needed [Kafka topics](../../../../../setup-guides/plugins-setup-guide/notifications-plugin-setup#kafka-configuration) are configured properly.
Kafka topic names can be set by using the following environment variables:
* `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN` - topic used to trigger the request to send a notification
* `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` - topic used for sending replies after sending the notification
The following values are expected in the request body:
| Key | Definition | Requirement |
| :----------: | :-----------------------------------------------------------------------------------------: | :-------: |
| language | The language that should be used | Mandatory |
| templateName | The name of the notification template that is used | Mandatory |
| channel | Notification channel: SMS/MAIL | Mandatory |
| receivers | Notification receivers: email/phone number | Mandatory |
| senderEmail | Notification sender email | Optional |
| senderName | Notification sender name | Optional |
| attachments | Attachments that are sent with the notification template (only used for MAIL notifications) | Optional |
Check the detailed [example](#example-send-a-notification-from-a-business-flow) below.

## Example: send a notification from a business flow

Let's pick a simple use case: say we need to send a welcome letter when we onboard a new customer. The steps are the following:
1. Configure the template that you want to use for the welcome email; see the previous section, [Managing notification templates](./managing-notification-templates), for more information.

2. Use the FlowX.AI Designer to add a [**Send Message Task**](../../../../building-blocks/node/message-send-received-task-node#send-message-task) and a [**Receive Message Task**](../../../../building-blocks/node/message-send-received-task-node#receive-message-task).
3. On the **Send Message Task** add a proper configuration to the action, the Kafka topic and request body message to be sent:
* **Topics** - `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN` - (in our example, `flowx-notifications-qa`)
You can check the defined topics by going to **FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > notifications**.
* **Message** (expected parameters):
* templateName
* channel
* language
* receivers
* **Headers** - it is always `{"processInstanceId": ${processInstanceId}}`
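Putting the expected parameters together, the message body for the welcome letter might look like this (the receiver address is illustrative):

```json
{
  "templateName": "welcomeLetter",
  "channel": "MAIL",
  "language": "en",
  "receivers": ["john.doe@mail.com"]
}
```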

4. On the **Receive Message Task** add the needed topic to receive the kafka response - `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` - (in our example, `ai.flowx.updates.qa.notification.request.v1`).

5. Run the process and look for the response, either via the **Audit log** or by checking the responses on the Kafka topic defined by the `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` variable.

Response example at `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT`:
```json
{
"identifier": null,
"templateName": "welcomeLetter",
"language": "en",
"error": null
}
```
# Sending an email
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-an-email-with-attachments
To use the notification plugin for sending emails with attachments, you must define the same topic configuration as for sending regular notifications. A notification template must be created, and the corresponding Kafka topics must be defined.
Check first the [Send a notification](./sending-a-notification) section.
## **Defining process actions**
### Example: send an email notification with attached files from a business flow
Let's pick a simple use case: imagine we need to send a copy of a contract signed by a new customer. Before setting the action for the notification, another action must be defined first, which saves the new contract using the Documents plugin.

See the example from [**Generating a document from template**](../documents-plugin/generating-from-html-templates) section
The steps for sending the notification are the following:
Configure the template that you want to use for the email, see the [Managing notification templates](./managing-notification-templates) section for more information.
Check that the needed topics are defined by going to **`FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > notifications`**.
Use the **FlowX Designer** to add a new [Send Message Task (Kafka)](../../../../building-blocks/node/message-send-received-task-node#send-message-task) action to the correct node in the process definition.
Add the proper configuration to the action, the Kafka topic and message to be sent.
The message to be sent to Kafka will look something like:
```json
{
"templateName" : "contractCopy",
"identifier" : "text",
"language": "en",
"receivers" : [ "someone@somewhere.com" ],
"contentParams" : {
"clientId" : "clientId",
"firstName" : "first",
"lastName" : "last"
},
"attachments" : [ {
"filename" : "contract",
"path" : "MINIO_BUCKET_PATH/contract.pdf"
} ]
}
```
# OCR plugin
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/ocr-plugin
The OCR (Optical Character Recognition) plugin is a powerful tool that enables you to read barcodes and extract handwritten signatures from .pdf documents with ease.
Before using the OCR service for reading barcodes and extracting signatures, please note the following requirements:
* All \*.pdf documents sent to the OCR service should be scanned at a minimum resolution of 200 DPI (approximately 1654x2339 px for A4 pages).
* The barcode is searched in the top 15% of each scanned page.
* Signatures are detected within boxes with a 4px black solid border.
* Only two signatures per scanned page are detected.
The plugin supports **1D Code 128** barcodes. For more information about this barcode type, please refer to the documentation [here](https://graphicore.github.io/librebarcode/documentation/code128.html).
## Using the OCR plugin
You can utilize the OCR plugin to process generic document templates by either using a specific flow on FLOWX.AI (HTML template) or any other document editor.
Using a specific flow on FlowX.AI offers several advantages:
* Centralized management of templates and flows within a single application.
* Access to template history and version control.
### Use case
1. Prepare and print generic document templates.
2. End-users complete, sign, and scan the documents.
3. Upload the scanned documents to the flow.
4. FlowX validates the template (barcode) and the signatures.
### Scenario for FlowX.AI generated documents
1. Utilize the [**Documents plugin**](./documents-plugin/documents-plugin-overview) to create a [**document template**](./documents-plugin/generating-from-html-templates).

2. Create a process and add a [**Kafka Send Action**](../../../building-blocks/node/message-send-received-task-node#configuring-a-message-send-task-node) to a [**Send Message Task**](../../../building-blocks/node/message-send-received-task-node#send-message-task) node. Here you specify the [**kafka topic**](../../../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-kafka-concepts#topics) (address) where the template will be generated.

The Kafka topic for generating the template must match the topic defined in the **`KAFKA_TOPIC_DOCUMENT_GENERATE_HTML_IN`** variable. **DEVELOPER**: Refer to the [**Kafka configuration guide**](../../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka) for more details. For additional information, please see the [**Documents plugin setup guide**](../../../../setup-guides/plugins-setup-guide/documents-plugin-setup).
3. Fill in the **Message**. The request body should include the following values:
* **documentList** - a list of documents to be generated, including properties such as name and values to be replaced in the document templates
* **customId** - client ID
* **templateName** - the name of the template to be used
* **language**
* **includeBarcode** - true/false
* **data** - a map containing the values that should replace the placeholders in the document template, the keys used in the map should match those defined in the HTML template

The [**`data` parameters**](../wysiwyg) must be defined in the document template beforehand. For more information, check the [**WYSIWYG editor**](../wysiwyg) section.
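As a sketch, the message body might be shaped as follows — the exact nesting, the template name, and the placeholder keys are illustrative assumptions, not authoritative:

```json
{
  "documentList": [
    {
      "customId": "119246",
      "templateName": "AccountOpening",
      "language": "en",
      "includeBarcode": true,
      "data": {
        "firstName": "John",
        "lastName": "Doe"
      }
    }
  ]
}
```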

4. Add a **barcode**.
* to include a **default barcode**, add the following parameter to the message body: `includeBarCode: true`.
* to include a **custom barcode**, set `includeBarCode: false` and provide the desired data in the `data` field
5. Add a [**Receive Message Task**](../../../building-blocks/node/message-send-received-task-node#receive-message-task) node and specify the topic where you want to receive the response.
Ensure that the topic matches the one defined in the **`KAFKA_TOPIC_DOCUMENT_GENERATE_HTML_OUT`** variable.

6. Add a [**User Task**](../../../building-blocks/node/user-task-node) node and configure an [**Upload file**](../../../building-blocks/actions/upload-file-action) action to send the file (defined by the **`KAFKA_TOPIC_DOCUMENT_PERSIST_IN`** variable) to the storage solution (for example, S3).

7. Next, the response will be sent back to the Kafka topic defined by the **`KAFKA_TOPIC_DOCUMENT_PERSIST_OUT`** environment variable through a callback action/subprocess.
8. Next, send the response (representing the path to the S3 file) to the OCR Kafka topic defined by the **`KAFKA_TOPIC_OCR_IN`** variable.
9. Display the result of the OCR validation, received on the Kafka topic defined by **`KAFKA_TOPIC_OCR_OUT`**.
### Setup guide
**DEVELOPER**: Refer to the OCR plugin setup guide for detailed instructions on setting up the OCR plugin.
# Authorization & access roles
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/reporting/access-and-authorization
## IAM solution
Superset uses flask-openid by default, as implemented in Flask-Security.
Superset can be integrated with Keycloak, an open-source identity and access management solution. This integration enables users to manage authentication and authorization for their Superset dashboards.
### Prerequisites
* Keycloak server
* Keycloak Realm
* Keycloak Client & broker configured with OIDC protocols
* client\_secret.json
* Admin username & password of the PostgreSQL instance
* Superset database created in PostgreSQL
* Optionally, cert-manager if you want SSL certificates on hostnames
# Reporting plugin
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/custom-plugins/reporting/reporting-overview
The FlowX.AI Reporting plugin helps you build and bootstrap custom reports using data from your BPMN processes. Moreover, it supports technical reports based on process instance data. Integrated with the FlowX.AI Engine, this plugin transforms raw data into actionable insights, enhancing decision-making and optimizing business processes.
### Quick walkthrough video
Watch this quick walkthrough video to get started:
### Data exploration and visualization
The plugin uses **Superset** as a free data exploration and visualization tool.
You can however use your own BI tool like Tableau, PowerBI etc. as an alternative to Superset.
Use the suggested query structure and logic to ensure accurate reporting and avoid duplicates, even during database updates. Do not simply run `SELECT` queries on single reporting tables.

Apache Superset is an open-source software application for data exploration and data visualization able to handle data at a large scale. It enables users to connect their company's databases and perform data analysis, and easily build charts and assemble dashboards.
Superset is also a SQL IDE, so users can write SQL, join data, create datasets, and so on.
### Plugin architecture
Here’s an overview of the reporting plugin architecture:

### Setting up reporting for a process definition
If you want to extract custom business parameters for a process, first you should set up the master reporting data model in the published version of that process. This is done in Designer, as explained below in the [**Reporting Data Model**](#reporting-data-model).
The master reporting data model that is applied to all historical versions is the one belonging to the version that is set as Published. If the currently published version is read-only, you might need to create a new version, create the data model in it, mark it for reporting and set it as published (as explained in the section below).
See [**Reporting Data Model**](#reporting-data-model) section for more details.
This is accomplished by checking the “Use process in reporting” button in the settings / general page of each process version you want to report. Please note that you will not be able to change that setting on read-only versions.

It is essential that the currently Published version has this setting enabled, as it is the master version for reporting all historical data.
Only the process instances belonging to reporting-enabled versions will be extracted to the database, using the master data model from the currently Published version.
The reason you should first set up the data model on the published (or to-be-published) version is that all changes in the data model of the published version that is also marked "use\_in\_reporting" are instantly sent to the reporting plugin, potentially causing all historical process instances to be re-computed multiple times before you are done setting up the data model.
To optimize operations, always aim to finish modifying the master data model on a version that is either not published or not marked as "use\_in\_reporting", before marking it as "use\_in\_reporting" and ensuring it is published.
See [**Enabling Process Reporting**](#enabling-process-reporting) section for more details.
## Reporting data model
You can only create or modify the reporting data model in non-committed versions, as committed versions are read-only.
1. Open the branch view.

2. If the currently published version is read-only, start a new version.


3. Make sure the new work-in-progress branch is selected, then go back to the process definition page.
4. Click on the Data Model icon and navigate in the object model to the target parameters.
5. Set up the business data structure to be reported by using the Edit button for each parameter.

* Check "Use in reporting" flag.
* Select the parameter type (number, string, Boolean or array).

There are three parameter structures that you can report:
**Singleton** or primitive, for which a single value is saved for each process instance. They can be of the types number, string or Boolean and will all be found in the reporting table named `params_{process_alias}`.
**Array of primitives**, in which several rows are saved for each process instance, in a dedicated table (one value per row).
**Array of objects**, in which the system extracts one or more “leaves” from each object of an array of objects. Also saved as a dedicated table.
### Primitive parameters reporting
In the following example, there are 3 simple parameters that will be extracted: `loan_approvalFee`, `loan_currency` and `loan_downPayment`.
They will be reported in the table `params_{process_alias}`, one row per instance.

### Arrays of primitives reporting
In the following example, the applicant can submit more than one email address, so each email will be extracted to a separate row in a dedicated table in the reporting database.

Extracting arrays is computationally intensive; do not use them if the only purpose is to aggregate them afterward. Aggregation should be performed in the process, not in the reporting stage.
### Array of objects reporting
In this example, the applicant can have several real estate assets, so a subset of data items (`currentValue`, `mortgageBalance`, `propertyType`) will be extracted for each one of them.

Extracting arrays of objects is even more computationally intensive; do not use them if the only purpose is to aggregate them afterward. Aggregation should be performed in the process, not in the reporting stage.
## Enabling process reporting
Enable reporting by checking the “Use process in reporting” button on the settings/general page of each process version.
You can also make data model changes on an already published version (if it is not set as read-only), but the effects are immediate: every periodic refresh will try to re-compute all historical instances to adapt to the changed data model, potentially consuming significant compute.
* Only the versions marked for reporting will be extracted to the database, using the master data model from the published version.
* Modifying the data model of a process version has no impact until the moment the version becomes published and is also set to "use\_in\_reporting".
* You can add or remove older process versions from reporting by opening the process branch modal, selecting the version, closing the modal, then selecting Settings in the left bar and, in the General tab, switching "Use process in reporting" on or off. You might not be able to do this on committed (read-only) versions.
The reporting refresh schedule is set for a fixed interval (currently 5 minutes):
* No processing overlaps are allowed. If processing takes more than 5 minutes, the next processing is automatically postponed until the current one is finished.
* The number of Spark executors is automatically set by the reporting plugin depending on the volume of data to be processed, up to a certain fixed limit that is established in the cloud environment. This limit can be adapted depending on your needs.
* Rebuilding the whole history for a process (when the master data model, the process name, or the Published version changes) typically takes more time. It is better to make these changes outside working hours.
## Process reporting modification scenarios
**Modifying the data model for the Published version** causes all data for that process to be deleted from the database and to be rebuilt from scratch using the Data Model of the current Published version.
* This includes deleting process-specific array tables and rebuilding them, possibly with different names.
* All historical instances from previous versions are also rebuilt using the most recent Published data model.
* This may take a lot of time, so it is better to perform these operations during periods of low system use.
The same thing happens whenever the Published version changes, or if the process name is modified.
**Adding Process Versions**: Adds new rows to existing tables.
**Removing Non-Published Process Versions** from reporting (by unchecking their “Use in reporting” switch in Settings) simply deletes their corresponding rows from the table.
**Removing the Currently Published Version from Reporting**: **Not advisable** as it applies a random older data model and deletes all instances processed with this version.
**Reporting Data Structure Obsolescence and Backwards Compatibility**: The master data model is applied to all historical data from all reported versions. If only the name of a parameter changes, the plugin uses its UUID to map the old name to the new one. However, if the full path changes or the parameter does not exist in an older version, it returns a `Null` value to the database.
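The UUID-based mapping can be sketched as follows; the UUIDs, parameter names, and dictionary shapes are made up for illustration:

```python
# Sketch: backwards-compatible mapping of parameters across data-model
# versions. Parameters are tracked by UUID, so a pure rename still maps old
# data to the new name; a parameter absent from the older version yields
# None (reported as Null).
def map_values(old_values_by_uuid, master_model):
    """master_model: {new_name: uuid}; returns {new_name: value_or_None}."""
    return {
        name: old_values_by_uuid.get(uuid)  # missing UUID -> None (Null)
        for name, uuid in master_model.items()
    }

old = {"u-123": "john@example.com"}        # value recorded under the old model
master = {"applicantEmail": "u-123",       # renamed parameter, same UUID -> kept
          "applicantPhone": "u-456"}       # not present in the old version -> None
mapped = map_values(old, master)
```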
## Reporting database
### Main tables
Common fields for joins:
* `inst_id`: Unique identifier for each process instance
* `query_time`: Timestamp for when the plugin extracts data

**Useful fields from instances table**:
* `date_started/finished` timestamps
* `process_alias` (process name with "\_" instead of spaces)
The published version alias will be used for all the reported versions.
* State of the process (started, finished, etc.)
* `context_name`: Useful if the process is started by another process
**Useful fields from current\_nodes table**:
* `node_started/finished` timestamps for the nodes
* `node_name`, `node_type`
* `swimlane` of the current node
* `prev_node_end`: For calculating when a token is waiting for another process branch
**Useful fields from token\_history table**:
Similar to `current_nodes` but includes all nodes in the instance history.
### Parameters and object tables
Common fields, on which the joins are built:
* `inst_id`, unique identifier for each process instance
* `query_time`, recorded at the moment the plugin extracts data from the database

For the `params_` table, there is a single row per process instance. For arrays and arrays of objects tables, there are multiple rows per process instance.
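A minimal sketch of the join pattern on `inst_id`, using an in-memory SQLite database with made-up columns (the real reporting database is PostgreSQL and its schemas are richer):

```python
# Sketch: joining the instances table with a process-specific params_ table
# on inst_id. Column names and the process alias are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE instances (inst_id TEXT, process_alias TEXT);
    CREATE TABLE params_loan_application (inst_id TEXT, applicant_email TEXT);
    INSERT INTO instances VALUES ('inst-1', 'loan_application');
    INSERT INTO params_loan_application VALUES ('inst-1', 'a@example.com');
""")

# One row per process instance in the params_ table -> a 1:1 join.
rows = con.execute("""
    SELECT i.inst_id, i.process_alias, p.applicant_email
    FROM instances i
    JOIN params_loan_application p ON p.inst_id = i.inst_id
""").fetchall()
```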
## Using Superset for data exploration
### Data sources
The **Data** tab represents the sources of all information:
* Databases
* CSV files
* Tables
* Excel files
The reporting plugin can be used with Superset by connecting Superset to the plugin's PostgreSQL database.
### Charts
Charts represent the output of the information. Multiple visualization charts/plugins are available.
### Dashboards
With dashboards, you can share persuasive data stories, show how metrics change in various scenarios, and match your company's efforts with logical, evidence-based visual indicators.
### Datasets
Contains all the information for extracting and processing data from the DB, including SQL queries, calculated metrics information, cache settings, etc. Datasets can also be exported / imported.
### Connecting to a database
Before using Superset, ensure you have a PostgreSQL database installed and configured. Follow these guides for setup:
[FlowX Engine DB configuration](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup)
[Reporting DB configuration](../../../../../setup-guides/plugins-setup-guide/reporting-setup)
A read-only database user should be used in production for the reporting-plugin cronjob.
To connect Superset to a database, follow the next steps:
1. From the Toolbar, hover your cursor over the **"+"** icon, then select **Data**, and then select **Connect Database**.

2. The **Connect a database** window appears. Select the appropriate **database card** (in this case - PostgreSQL).

3. After you select the database, click **Connect this database with a SQLAlchemy URI string instead?**.
4. Fill in the **SQLALCHEMY URI** and then click **Connect**.

The **SQLAlchemy URI** for **reporting-db** should be in this format: `postgresql://postgres:XXXXXXXXXX@reporting-plugin-postgresql:{{port}}/reporting`.
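The parts of that URI can be checked with the Python standard library. In this sketch, the default PostgreSQL port 5432 stands in for the `{{port}}` placeholder and the password is a dummy value:

```python
# Sketch: the pieces of the reporting-db SQLAlchemy URI, parsed with the
# standard library. Host, password, and port here are placeholders.
from urllib.parse import urlsplit

uri = "postgresql://postgres:XXXXXXXXXX@reporting-plugin-postgresql:5432/reporting"
parts = urlsplit(uri)

scheme = parts.scheme      # "postgresql" -> SQLAlchemy dialect
user = parts.username      # database user
host = parts.hostname      # reporting DB service host
port = parts.port          # 5432 assumed here in place of {{port}}
database = parts.path.lstrip("/")  # "reporting"
```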
### Creating and configuring charts
There are multiple ways in which you can create/configure a chart.
#### Creating charts using Datasets tab
To create a Chart using the first method, you must follow the next steps:
1. From the top toolbar, select **Data** and then **Datasets**.
You need to have a dataset added to Superset first. From that particular dataset you can build a visualization.
2. Select the desired **dataset**.
3. On the **explore** page, choose the **visualization type** and click **Create chart**.

4. When you select a dataset, the **table** visualization type is selected by default.
To view all available visualization types, click **View all charts**; the chart gallery will open.

#### Creating charts using Chart gallery
Using the Chart gallery is useful when you are not quite sure which chart will work best for you.
To create a Chart using the second method, you must follow the next steps:
1. Select the **"+"** icon from the top toolbar and choose **Chart**.
2. Choose the **dataset** and **chart type**.
3. Review the description and example graphics of the selected chart, then click **Create new chart**.
If you wish to explore all the chart types available, filter by **All charts**. The charts are also grouped by categories.

### Configuring a Chart
Configure the **Query** fields by dragging and dropping relevant columns to the matching fields. Note that time and query form attributes vary by chart type.
### Exporting/importing a Chart
You can export and import charts to help you analyze your data and manipulate dashboards. To export/import a chart, follow the next steps:
1. Open **Superset** and navigate to **Charts** from the top navigation bar.
2. Select the desired **chart** and click the **breadcrumbs** menu in the top-right corner.
3. Choose an export option: .CSV, .JSON, or Download as image.

**Table example**:
* **Time** - time related form attributes
* **Query** - query attributes
* **Dimensions** - one or many columns to group by
* **Metrics** - metrics to display
* **Percentage metrics** - metrics for which percentage of total are to be displayed, calculated from only data within the row limit
* **Filters** - metric used for filtering
* **Sort by** - metric used to define how the top series are sorted if a series or row limit is present
* **Row limit** - limits the number of rows that get displayed

### Creating a dashboard
To create a dashboard follow the next steps:
1. Create a new chart and save it to a new dashboard.
2. To publish, click **Save and go to Dashboard**.

For details on how to configure the FlowX.AI reporting plugin, check the following section:
# Languages
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/languages
You can add a language and use it in document and notification templates.

On the main screen inside **Languages**, you have the following elements:
* **Code** - not displayed in the end-user interface, but used to ensure value uniqueness
* **Name** - the name of the language
* **Default** - you can set a language as **Default** (default values can't be deleted)
When working with [substitution tags](../core-extensions/content-management/substitution-tags) or other elements that take values from the languages defined in the CMS, the default values extracted at runtime will be those of the default language.

* **Edit** - button used to edit a language

* **Delete** - button used to delete a language
Before deleting a language make sure that this is not used in any content management configuration.
* **New** - button used to add a new language
### Adding a new language
To add a new language, follow the next steps:
1. Go to **FLOWX Designer** and select the **Content Management** tab.
2. Select **Languages** from the list.
3. Choose a new **language** from the list.
4. Click **Add** after you finish.

# WYSIWYG editor
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/plugins/wysiwyg
FlowX Designer's WYSIWYG ("What You See Is What You Get") editor enables you to create and modify notification and document templates without the need for complicated coding from the developers. WYSIWYG editors make the creation/editing of any type of document or notification easier for the end-user.
Because the editor displays on screen how the document will look when published or printed, the user can adjust the text, graphics, photos, or other document/notification elements before generating the final output.
## WYSIWYG Components
### Header
The formatting head of the editor allows users to manipulate/format the content of the document.
### Body
The Body is the main part of the editor where you can edit your template.
After you define parameters in the **Data Model** tab, you can type "**#**" in the body to trigger a dropdown from which you can choose the one you want to use.
### Source
The **Source** button can be used to switch to the HTML editor. You can use the HTML view/editor as a debugging tool, or you can edit the template directly by writing code here.

## Document Templates
One of the main features of the [document management plugin](./custom-plugins/documents-plugin/documents-plugin-overview) is the ability to generate new documents based on custom templates and prefilled with data related to the current process instance.

## Notification Templates
Notification WYSIWYG body has some additional fields (other than documents template):
* **Type** - either MAIL or SMS (SMS only if an external adapter is available)
* **Forward on Kafka** - if this box is checked, the notification is not sent directly by the plugin to the destination but is forwarded to another adapter

## Data Model
Using the data model, you can define key pair values (parameters) that will be displayed and reused in the body. Multiple parameters can be added:
* STRING
* NUMBER
* BOOLEAN
* OBJECT
* ARRAY (which has an additional `item` field)

Parameters can be defined as mandatory or not. When you try to generate a template without filling in all the mandatory parameters, the following error message will be displayed: "*Provided data cannot be empty if there are any required properties defined."*
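The required-properties check can be sketched as follows; the parameter names and data-model shape are illustrative, not the plugin's internal format:

```python
# Sketch: validating that all mandatory template parameters are provided
# before generating a document/notification from a template.
def missing_mandatory(data_model, values):
    """Return the names of mandatory parameters that are missing or empty."""
    return [
        name for name, spec in data_model.items()
        if spec.get("mandatory") and values.get(name) in (None, "")
    ]

model = {
    "clientName": {"type": "STRING", "mandatory": True},
    "discount":   {"type": "NUMBER", "mandatory": False},
}
# clientName is mandatory but absent, so generation would fail with the
# "Provided data cannot be empty..." error described above.
missing = missing_mandatory(model, {"discount": 10})
```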
# Third-party components
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/third-party-components
FlowX.AI uses a number of third-party software components.
### Open-source
* [Keycloak](#keycloak)
* [Kafka](#kafka)
* [PostgreSQL](#postgresql)
* [MongoDB](#mongodb)
* [Redis](#redis)
* [NGINX](#nginx)
* [EFK (Elastic Search, Fluentd, Kibana)](#efk-kibana-fluentd-elastic-search)
* [S3 (MinIO)](#s3-minio)
### Not open-source
* [OracleDB](#oracle-db)
### Third-party open-source components supported/tested versions
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://support.flowx.ai/), and our dedicated team will address and resolve any identified bugs following our standard support process.
| FlowX.AI Version | 3rd Party Dependency | Recommended Supported Version |
| ---------------- | ------------------------ | ----------------------------- |
| 4.7.0 | Keycloak | 22.x, 26.x |
| 4.7.0 | Kafka | 3.2.x |
| 4.7.0 | PostgreSQL | 16.2.x |
| 4.7.0 | MongoDB | 7.0.x |
| 4.7.0 | Redis | 7.2.x |
| 4.7.0 | NGINX Ingress Controller | 1.2.x |
| 4.7.0 | Elasticsearch | 7.17.x |
| 4.7.0 | minio | 2024-02-26T09-33-48Z |
### Third-party components supported/tested versions
| FlowX.AI Version | 3rd Party Dependency | Recommended Supported Version |
| ---------------- | -------------------- | ----------------------------- |
| 4.7.0 | OracleDB | 21c, 23ai |
### Deprecation notice
Starting FlowX 5.0, the following versions of 3rd Party Dependencies will no longer be supported:
* Keycloak versions older than 26
* Kafka versions older than 3.9
* Redis versions older than 7.4
### Summary
#### Keycloak
Keycloak is an open-source software product that provides single sign-on with Identity and Access Management, aimed at modern applications and services.
[Keycloak documentation](https://www.keycloak.org/documentation)
#### Kafka
Apache Kafka is an open-source distributed event streaming platform that can handle a high volume of data and enables you to pass messages from one end-point to another.
Kafka is a unified platform for handling all the real-time data feeds. Kafka supports low latency message delivery and gives a guarantee for fault tolerance in the presence of machine failures. It has the ability to handle a large number of diverse consumers.
Kafka is very fast, handling millions of writes per second. Kafka persists all data to disk, which essentially means that all writes go to the page cache of the OS (RAM). This makes it very efficient to transfer data from the page cache to a network socket.
[Intro to Kafka](../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-kafka-concepts)
[Kafka documentation](https://kafka.apache.org/documentation/)
#### PostgreSQL
PostgreSQL, also known as Postgres, is a free and open-source relational database management system emphasizing extensibility and SQL compliance.
[PostgreSQL documentation](https://www.postgresql.org/docs/)
#### MongoDB
MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional [schemas](https://en.wikipedia.org/wiki/Database_schema).
Used by FlowX to store business process data and configuration information on the core/plugin components.
[MongoDB documentation](https://www.mongodb.com/docs/)
#### Redis
Redis is a fast, open-source, in-memory key-value data store that is commonly used as a cache to store frequently accessed data in memory so that applications can be responsive to users.
It delivers sub-millisecond response times enabling millions of requests per second for applications.
It can also be used as a Pub/Sub messaging solution, allowing messages to be passed to channels and all subscribers to that channel to receive them. This feature lets information flow quickly through the platform without using up space in the database, as messages are not stored.
It is used by FLOWX.AI for caching the process definitions-related data.
[Intro to Redis](../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis)
[Redis documentation](https://redis.io/docs/)
#### NGINX
NGINX is a web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache.
FLOWX utilizes the Nginx engine as a load balancer and for routing the web traffic (API calls) from the SPA (single page application) to the backend service, to the engine, and to various plugins.
The FLOWX.AI Designer SPA will use the backend service to manage the platform via REST calls, will use API calls to manage specific content for the plugins, and will use REST and SSE calls to connect to the engine.
[Intro to NGINX](../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-nginx)
[NGINX documentation](https://nginx.org/en/docs/)
#### EFK (Elastic Search, Fluentd, Kibana)
Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases.
As the heart of the Elastic Stack, it centrally stores your data for lightning-fast search, fine‑tuned relevancy, and powerful analytics that scale with ease.
Used by FLOWX.AI in the core component and optionally to allow searching for business process transaction data.
[Elastic stack documentation](https://www.elastic.co/elastic-stack/)
[Fluentd documentation](https://docs.fluentd.org/)
#### Kafka Connect Elasticsearch Service Sink
The Kafka Connect Elasticsearch Service Sink connector moves data from Apache Kafka® to Elasticsearch. It writes data from a topic in Kafka to an index in Elasticsearch. All data for a topic has the same type in Elasticsearch, which allows an independent evolution of schemas for data from different topics. This simplifies schema evolution because Elasticsearch has only one enforcement on mappings: all fields with the same name in the same index must have the same mapping type.
#### S3 (MinIO)
FLOWX.AI uses [MinIO](http://min.io/) as a cloud storage solution.
[MinIO documentation](https://min.io/)
[Docker available here](https://quay.io/repository/minio/minio?tab=tags\&tag=RELEASE.2022-05-26T05-48-41Z)
#### Oracle DB
Oracle Database is a relational database management system (RDBMS).
[Oracle DB documentation](https://www.oracle.com/database/technologies/)
#### Superset
Apache Superset is a business intelligence web application. It helps users to explore and visualize their data, from simple pie charts to detailed dashboards.
[Superset](https://superset.apache.org/docs/intro)
# Business filters
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/user-roles-management/business-filters
An optional attribute from the authorization token that can be set to restrict access to process instances based on a business-specific value (e.g., bank branch name).
Using business filters we can make sure only the allowed users, with the same attribute, can access a [**process instance**](../../projects/runtime/active-process/process-instance).
In some cases it might be necessary to restrict access to process nodes based on certain [**business rules**](../../building-blocks/actions/business-rule-action/business-rule-action), for example only users from a specific bank branch can view the process instances started from that branch. This can be done by using business filters.
Before they can be used in the process definition, the business filter attributes need to be set in the identity management platform. They have to be configured as a list of filters and made available on the authorization token. Application users will also have to be assigned this value.
When this filter needs to be applied, the process definition should include nodes with actions that will store the current business filter value to a custom `task.businessFilters` key on process parameters.
If this value is set in the process instance parameters, only users that have the correct business filter attribute will be able to interact with that process instance.
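The access check can be sketched like this; the `task.businessFilters` key comes from the text above, while the data shapes and helper name are hypothetical:

```python
# Sketch: restricting access to a process instance by business filter.
# A user may interact with an instance only if their token carries the
# business-filter value stored on the instance (or no filter is set).
def can_access(instance_params, token_business_filters):
    required = instance_params.get("task", {}).get("businessFilters")
    if not required:
        return True  # no filter on the instance -> unrestricted
    return required in token_business_filters

params = {"task": {"businessFilters": "branch-cluj"}}
allowed = can_access(params, ["branch-cluj", "branch-iasi"])
denied = can_access(params, ["branch-bucuresti"])
```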
# Swimlanes
Source: https://docs.flowx.ai/4.7.x/docs/platform-deep-dive/user-roles-management/swimlanes
Swimlanes provide a way of grouping process nodes by process participants.
Using swimlanes we can make sure only certain user roles have access to certain process nodes.
In certain scenarios, it is necessary to restrict access to specific process [**nodes**](../../building-blocks/node/) based on user roles. This can be achieved by organizing nodes into different swimlanes.
Each swimlane can be configured to grant access only to users with specific roles defined in the chosen identity provider platform.

Depending on the type of node added within a swimlane, only users with the corresponding swimlane roles will have the ability to initiate process instances, view process instances, and perform actions on them.
[Scopes and roles for managing processes](../../../setup-guides/flowx-engine-setup-guide/configuring-access-roles-for-processes)

When creating a new process definition, a default swimlane will automatically be added.

As the token moves from one node to the next, it may transition between swimlanes. If a user interacting with the process instance no longer has access to the new swimlane, they will observe the process in read-only mode and will be unable to interact with it until the token returns to a swimlane they have access to.
Users will receive notifications when they can no longer interact with the process or when they can resume actions on it.
# Release Notes
Source: https://docs.flowx.ai/release-notes/overview
🚀 Welcome to FlowX.AI Release Notes.
Stay updated with the latest features and improvements! Follow the Release Notes space to discover what's happening in the FlowX.AI world and the exciting updates we have for you.
🚀 Introducing Release 4.7.3 - Released on **April 14th, 2025**
🚀 Introducing Release 4.7.2 - Released on **March 26th, 2025**
🚀 Introducing Release 4.7.1 - Released on **March 7th, 2025**
🚀 Introducing Release 4.7.0 - Released on **February 24th, 2025**
🚀 Introducing Release 4.6.1 - Released on **February 7th, 2025**
🚀 Introducing Release 4.6.0 - Released on **January 23rd, 2025**
🚀 Introducing Release 4.1.5 - Released on **September 23rd, 2024**
🚀 Introducing Release 4.1.4 - Released on **September 6th, 2024**
🚀 Introducing Release 4.1.3 - Released on **August 12th, 2024**
🚀 Introducing Release 4.1.2 - Released on **June 8th, 2024**
🚀 Introducing Release 4.1.1 - Released on **June 17th, 2024**
🚀 Introducing Release 4.1.0 - Released on **May 30th, 2024**
🚀 Introducing Release 4.0.0 - Released on **April 18th, 2024**
***
Explore the latest enhancements and fixes introduced in June 2024.
Explore the latest enhancements and fixes introduced in January 2024.
Explore the latest enhancements and fixes introduced in December 2023.
Explore the latest enhancements and fixes introduced in November 2023.
Discover the significant changes and improvements made in November 2023.
Discover the significant changes and improvements made in October 2023.
Discover the significant changes and improvements made in October 2023.
Discover the significant changes and improvements made in September 2023.
Discover the significant changes and improvements made in September 2023.
Discover the significant changes and improvements made in July 2023.
Discover the significant changes and improvements made in April 2023.
Discover the significant changes and improvements made in April 2023.
Discover the significant changes and improvements made in February 2023.
# Deployment guidelines v4.1.2
Source: https://docs.flowx.ai/release-notes/v4.1.2-june-2024/deployment-guidelines-v4.1.2
Do not forget, after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.
After upgrading to the 4.1.2 FlowX.AI release, you can import old process definitions into the new platform only if they are from versions 4.1.0 & 4.1.1.
It is not possible to import old process definitions from versions earlier than v4.1.0.

As of **FlowX.AI** release v4.1.0, `paperflow-web-components` is deprecated and its dependencies have been removed. The new components can be found in `@flowx/ui-toolkit`.
## Component versions
| Component | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------------------- | ------------ | ------ | ------ | --------- | ---------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **6.1.3** | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 |
| **admin** | **5.1.4** | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **4.17.1-2** | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-sdk** | **4.17.1-2** | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-toolkit** | **4.17.1-2** | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-theme** | **4.17.1-2** | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **paperflow-web-components** | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **cms-core** | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **license-core** | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.5 | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 |
| **task-management-plugin** | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **iOS renderer** | 3.0.16 | 3.0.16 | 3.0.7 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | 3.0.21 | 3.0.21 | 3.0.4 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.1.2 | Keycloak | 22.x |
| 4.1.2 | Kafka | 3.2.x |
| 4.1.2 | PostgreSQL | 16.2.x |
| 4.1.2 | MongoDB | 7.0.x |
| 4.1.2 | Redis | 7.2.x |
| 4.1.2 | Elasticsearch | 7.17.x |
| 4.1.2 | OracleDB | 19.8.0.0.0 |
| 4.1.2 | Angular (Web SDK) | 16.3.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
A FlowX.AI release and its patch versions (such as 4.1.x) will *all* have the same *Recommended Versions* for the third-party components.
# Migrating from previous versions to v4.1.2
Source: https://docs.flowx.ai/release-notes/v4.1.2-june-2024/migrating-from-previous-to-v4.1.2
If you're upgrading from a v3.4.x version, make sure to first review the migration guide for v4.0 to capture all the significant changes:
If you are upgrading from v4.0 to v4.1.2, do not forget to check the migration guides for the previous versions:
# FlowX.AI 4.1.2 Release Notes (LTS)
Source: https://docs.flowx.ai/release-notes/v4.1.2-june-2024/v4.1.2-june-2024
**Release Date:** 8th June 2024
This is a patch version, part of the long-term support for version 4.1.
Welcome to the FlowX.AI 4.1.2 release. Let's dive in and explore:
## Enhancements ✨
### Web Renderer
* Improved PDF viewer locale setting based on renderer language configuration. Now, the PDF viewer menu will display localized content according to the chosen language setting, enhancing user experience.
## Bug fixes 🛠️
### FlowX.AI Engine
* **Fixed the Server-Sent Events (SSE) message issue** where they were playing hide-and-seek in the console. The problem stemmed from swimlanes lacking proper permissions and an overly strict event filter. After granting correct permissions and adjusting the filter, SSE messages are now reliably displayed.
* **Export Process/Copy & Paste**: Addressed an issue where soft-deleted UI actions were erroneously included in `GET /data` responses. Now, during export or copy-paste operations, only active UI actions are included, ensuring deleted actions remain hidden as intended.
## Gremlins to watch out for
Keep an eye out for these quirks:
* **UI Designer**: When relocating UI elements between parents, the elements' order doesn't always get the memo, causing a mix-up in the family tree. We're untangling this knot!
## Resources
For deployment guidelines and a seamless transition to version 4.1.2:
# Deployment guidelines v4.1.3
Source: https://docs.flowx.ai/release-notes/v4.1.3-august-2024/deployment-guidelines-v4.1.3
Do not forget, after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.
After upgrading to the 4.1.3 FlowX.AI release, you can import old process definitions into the new platform only from version 4.1.2.
It is not possible to import old process definitions from versions earlier than v4.1.2.

As of **FlowX.AI** release v4.1.0, `paperflow-web-components` is deprecated and its dependencies have been removed. The new components can be found in `@flowx/ui-toolkit`.
## Component versions
| Component | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------------------- | ------------ | -------- | ------ | ------ | --------- | ---------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **6.1.4** | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 |
| **admin** | **5.1.5** | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **4.17.1-3** | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-sdk** | **4.17.1-3** | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-toolkit** | **4.17.1-3** | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-theme** | **4.17.1-3** | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **paperflow-web-components** | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **cms-core** | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **license-core** | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.5 | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 |
| **task-management-plugin** | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **iOS renderer** | 3.0.16 | 3.0.16 | 3.0.16 | 3.0.7 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | 3.0.21 | 3.0.21 | 3.0.21 | 3.0.4 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.1.3 | Keycloak | 22.x |
| 4.1.3 | Kafka | 3.2.x |
| 4.1.3 | PostgreSQL | 16.2.x |
| 4.1.3 | MongoDB | 7.0.x |
| 4.1.3 | Redis | 7.2.x |
| 4.1.3 | Elasticsearch | 7.17.x |
| 4.1.3 | OracleDB | 19.8.0.0.0 |
| 4.1.3 | Angular (Web SDK) | 16.3.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
A FlowX.AI release and its patch versions (such as 4.1.x) will *all* have the same *Recommended Versions* for the third-party components.
# Migrating from previous versions to v4.1.3
Source: https://docs.flowx.ai/release-notes/v4.1.3-august-2024/migrating-from-previous-to-v4.1.3
If you're upgrading from a v3.4.x version, make sure to first review the migration guide for v4.0 to capture all the significant changes:
If you're moving from v4.0 to v4.1.3, don’t forget to check the relevant migration guides for each version to ensure nothing is missed:
If you’re upgrading from v4.1.1 or v4.1.2 to v4.1.3, simply review the release notes and deployment guidelines for any notable changes:
# FlowX.AI 4.1.3 Release Notes (LTS)
Source: https://docs.flowx.ai/release-notes/v4.1.3-august-2024/v4.1.3-august-2024
**Release Date:** 12th August 2024
This is a patch version, part of the long-term support for version 4.1.
Welcome to the FlowX.AI 4.1.3 release. Let’s dive in and explore:
## Bug fixes 🛠️
### Process Designer
* **Copy/Paste Nodes**: Got a little too click-happy with those keyboard shortcuts in the Process Designer? You might have noticed multiple requests flooding in when you just wanted one. We’ve had a chat with the system, and it’s agreed to calm down. Now, one copy-paste equals one request. As it should be. ✂️📋
* **Stuck Nodes and Stubborn Swimlanes**: Tried to move a node or resize a swimlane after pasting and found everything frozen? The Process Designer had a little too much coffee. We’ve calmed it down, so now you can move and resize as you please. No more stuck nodes or swimlane stand-offs!
### FlowX.AI Engine
* **Business Rule Precision**: Occasionally, you might have encountered inconsistent outcomes when running the same business rule twice. This was due to a rare concurrency issue during rule compilation. We've addressed this by ensuring that, if an issue arises, the rule is recompiled rather than executing with an error. Now, your business rules run smoothly and consistently, just like you'd expect. 🎰🎯
## Gremlins to watch out for 👀
Keep an eye out for these quirks:
* **UI Designer**: When relocating UI elements between parents, the elements' order doesn't always get the memo, causing a mix-up in the family tree. We're untangling this knot!
## Resources 📚
Need more details for a seamless upgrade to v4.1.3? We've got you covered:
# Deployment guidelines v4.1.4
Source: https://docs.flowx.ai/release-notes/v4.1.4-september-2024/deployment-guidelines-v4.1.4
Do not forget, after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.
After upgrading to the 4.1.4 FlowX.AI release, you can import old process definitions into the new platform only from versions 4.1.2 & 4.1.3.
It is not possible to import old process definitions from versions earlier than v4.1.2.

As of **FlowX.AI** release v4.1.0, `paperflow-web-components` is deprecated and its dependencies have been removed. The replacement components are available in `@flowx/ui-toolkit`.
## Component versions
| Component | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------------------- | ------------ | -------- | -------- | ------ | ------ | --------- | ---------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **6.1.6** | 6.1.4 | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 |
| **admin** | **5.1.6** | 5.1.5 | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **4.17.1-5** | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-sdk** | **4.17.1-5** | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-toolkit** | **4.17.1-5** | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-theme** | **4.17.1-5** | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **paperflow-web-components** | - | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **cms-core** | **3.0.3** | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | **3.0.2** | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | **3.0.3** | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | **4.0.4** | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | **4.0.3** | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **license-core** | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.5 | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 |
| **task-management-plugin** | **5.0.5** | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | **2.0.5** | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | **4.0.5** | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | **2.0.3** | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **iOS renderer** | 3.0.16 | 3.0.16 | 3.0.16 | 3.0.16 | 3.0.7 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | 3.0.21 | 3.0.21 | 3.0.21 | 3.0.21 | 3.0.4 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.1.4 | Keycloak | 22.x |
| 4.1.4 | Kafka | 3.2.x |
| 4.1.4 | PostgreSQL | 16.2.x |
| 4.1.4 | MongoDB | 7.0.x |
| 4.1.4 | Redis | 7.2.x |
| 4.1.4 | Elasticsearch | 7.17.x |
| 4.1.4 | OracleDB | 19.8.0.0.0 |
| 4.1.4 | Angular (Web SDK) | 16.3.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
A FlowX.AI release and its patch versions (such as 4.1.x) will *all* have the same *Recommended Versions* for the third-party components.
# Migrating from previous versions to v4.1.4
Source: https://docs.flowx.ai/release-notes/v4.1.4-september-2024/migrating-from-previous-to-v4.1.4
If you're upgrading from a v3.4.x version, make sure to first review the migration guide for v4.0 to capture all the significant changes:
If you're moving from v4.0 to v4.1.4, don’t forget to check the relevant migration guides for each version to ensure nothing is missed:
If you’re upgrading from v4.1.1, v4.1.2 or v4.1.3 to v4.1.4, simply review the release notes and deployment guidelines for any notable changes:
# FlowX.AI 4.1.4 Release Notes (LTS)
Source: https://docs.flowx.ai/release-notes/v4.1.4-september-2024/v4.1.4-september-2024
**Release Date:** 6th September 2024
This release is part of the long-term support (LTS) for version 4.1. While it’s a minor patch, we’ve focused on enhancing your security and overall experience with important fixes and updates.
## Fixes 🛠️
We kicked out a few troublemaking vulnerabilities to keep things running smoothly and securely for you! 🤺
## What's new? 🚀
Added support for [**EntraID**](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id), improving integration options and expanding authentication capabilities.
## Resources 📚
Need help upgrading to version 4.1.4? We've got everything you need for a smooth transition:
# Deployment guidelines v4.1.5
Source: https://docs.flowx.ai/release-notes/v4.1.5-september-2024/deployment-guidelines-v4.1.5
Do not forget, after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.
After upgrading to the 4.1.5 FlowX.AI release, you can import old process definitions into the new platform only from versions 4.1.2, 4.1.3 & 4.1.4.
It is not possible to import old process definitions from versions earlier than v4.1.2.

As of **FlowX.AI** release v4.1.0, `paperflow-web-components` is deprecated and its dependencies have been removed. The replacement components are available in `@flowx/ui-toolkit`.
## Component versions
| Component | 4.1.5 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------------------- | ------------ | -------- | -------- | -------- | ------ | ------ | --------- | ---------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **6.1.7** | 6.1.6 | 6.1.4 | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 |
| **admin** | **5.1.7** | 5.1.6 | 5.1.5 | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **4.17.1-6** | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-sdk** | **4.17.1-6** | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-toolkit** | **4.17.1-6** | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-theme** | **4.17.1-6** | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **paperflow-web-components** | - | - | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **cms-core** | 3.0.3 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | 3.0.2 | 3.0.2 | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | 3.0.3 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | 4.0.4 | 4.0.4 | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | 4.0.3 | 4.0.3 | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **license-core** | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.5 | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 |
| **task-management-plugin** | 5.0.5 | 5.0.5 | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | 2.0.5 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | 4.0.5 | 4.0.5 | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | 2.0.3 | 2.0.3 | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **iOS renderer** | 3.0.16 | 3.0.16 | 3.0.16 | 3.0.16 | 3.0.16 | 3.0.7 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | 3.0.21 | 3.0.21 | 3.0.21 | 3.0.21 | 3.0.21 | 3.0.4 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.1.5 | Keycloak | 22.x |
| 4.1.5 | Kafka | 3.2.x |
| 4.1.5 | PostgreSQL | 16.2.x |
| 4.1.5 | MongoDB | 7.0.x |
| 4.1.5 | Redis | 7.2.x |
| 4.1.5 | Elasticsearch | 7.17.x |
| 4.1.5 | OracleDB | 19.8.0.0.0 |
| 4.1.5 | Angular (Web SDK) | 16.3.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
A FlowX.AI release and its patch versions (such as 4.1.x) will *all* have the same *Recommended Versions* for the third-party components.
# Migrating from previous versions to v4.1.5
Source: https://docs.flowx.ai/release-notes/v4.1.5-september-2024/migrating-from-previous-to-v4.1.5
If you're upgrading from a v3.4.x version, make sure to first review the migration guide for v4.0 to capture all the significant changes:
If you're moving from v4.0 to v4.1.5, don’t forget to check the relevant migration guides for each version to ensure nothing is missed:
If you’re upgrading from a version later than v4.1.0 to v4.1.5, simply review the release notes and deployment guidelines for any notable changes:
# FlowX.AI 4.1.5 Release Notes (LTS)
Source: https://docs.flowx.ai/release-notes/v4.1.5-september-2024/v4.1.5-september-2024
**Release Date:** 23rd September 2024
This release is part of the long-term support (LTS) for version 4.1. While it’s a minor patch, we’ve focused on enhancing your security and overall experience with important fixes and updates.
## Fixes 🛠️
* **UI templates - generic properties**: UI templates with generic properties (e.g., labels) were being overridden by platform-specific settings upon process start, causing unexpected changes. This override issue has been resolved—generic properties will now stay as configured.
* **Undo/redo (Windows)**: We’ve reminded the **Ctrl** key that it’s not the “Undo!” button, and it now behaves like a respectable key should. You can once again Copy, Paste, and Select All without your inputs mysteriously disappearing and reappearing.
## Resources 📚
Need help upgrading to version 4.1.5? We've got everything you need for a smooth transition:
# Deployment guidelines v4.6.0
Source: https://docs.flowx.ai/release-notes/v4.6.0-january-2025/deployment-guidelines-v4.6.0
Do not forget, after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.

After upgrading to the 4.6.0 FlowX.AI release, you cannot import old process definitions or resources into the new platform from older versions.
## Component versions
| Component | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ----------------------------- | ---------- | -------- | -------- | -------- | ------ | ------ | ---------- | --------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **8.0.2** | 6.1.6 | 6.1.4 | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 |
| **admin** | **7.2.3** | 5.1.6 | 5.1.5 | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **5.64.5** | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/angular-sdk** | **5.64.5** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **@flowx/angular-theme** | **5.64.5** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **@flowx/angular-ui-toolkit** | **5.64.5** | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/react-sdk** | **5.64.5** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-theme** | **5.64.5** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-ui-toolkit** | **5.64.5** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-sdk** | **5.64.5** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-theme** | **5.64.5** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **paperflow-web-components** | - | - | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **cms-core** | **5.1.2** | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | **5.0.0** | 3.0.2 | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | **5.0.2** | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | **6.0.3** | 4.0.4 | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | **6.0.3** | 4.0.3 | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **license-core** | **4.1.3** | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.5 | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 |
| **task-management-plugin** | **7.0.3** | 5.0.5 | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | **4.0.0** | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | **6.0.0** | 4.0.5 | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | **0.1.12** | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | **4.0.0** | 2.0.3 | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **integration-designer** | **2.0.1** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **application-manager** | **2.0.18** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **data-sync** | **2.0.2** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **iOS renderer** | **4.0.3** | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | **4.0.9** | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
Note: The same container image and Helm chart are used for **Application Manager** and **Runtime Manager**.
## AI Agents
| Component | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------- | ---------- | ----- | ----- | ----- | ----- | --- | --- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| **ai-designer** | **1.9.10** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-assistant** | **1.6.9** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-analyst** | **1.10.3** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **di-platform** | **1.2.24** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-developer** | **1.5.5** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.6.0 | Keycloak | 22.x |
| 4.6.0 | Kafka | 3.2.x |
| 4.6.0 | PostgreSQL | 16.2.x |
| 4.6.0 | MongoDB | 7.0.x |
| 4.6.0 | Redis | 7.2.x |
| 4.6.0 | Elasticsearch | 7.17.x |
| 4.6.0 | OracleDB | 21C/ 21-XE |
| 4.6.0 | Angular (Web SDK) | 19.x |
| 4.6.0 | React (Web SDK) | 18.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
## New microservices - setup guides
## New access rights and roles
## Updated access rights and roles
Added new roles for **manage-views**.
## New service accounts
# Deployment changes for v4.6.0
Source: https://docs.flowx.ai/release-notes/v4.6.0-january-2025/migrating-from-v4.1.x-to-v4.6.0/migrating-from-v4.1.x
This document outlines the configuration and infrastructure changes introduced from v4.1.x to v4.6.0 for deploying the FlowX.AI platform.
## CMS Setup
### MongoDB configuration
In version 4.6.0, the CMS setup includes a more comprehensive MongoDB configuration, especially for runtime data handling:
* **Runtime MongoDB Instance**: A dedicated MongoDB instance for managing runtime data.
* **New Environment Variables**:
* `RUNTIME_DB_USERNAME`: Username for runtime MongoDB access.
* `SPRING_DATA_MONGODB_RUNTIME_URI`: Connection URI for the runtime MongoDB instance.
* `SPRING_DATA_MONGODB_STORAGE`: Specifies storage type for Azure environments (`mongodb` or `cosmosdb`). Default is `mongodb`.
* **Transaction Settings for Mongock Library**:
* `MONGOCK_TRANSACTIONENABLED`: Controls MongoDB transaction support with Mongock, defaulting to `false` due to compatibility concerns with MongoDB 5.
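Putting the variables above together, a minimal CMS runtime configuration might look like the following sketch. The hostname, database name, and credentials are illustrative placeholders, not platform defaults:

```shell
# CMS runtime MongoDB configuration (illustrative values only)
export RUNTIME_DB_USERNAME="cms-runtime"     # username for runtime MongoDB access
export SPRING_DATA_MONGODB_RUNTIME_URI="mongodb://cms-runtime:${RUNTIME_DB_PASSWORD}@mongodb-runtime:27017/cms-runtime"
export SPRING_DATA_MONGODB_STORAGE="mongodb" # use "cosmosdb" on Azure Cosmos DB
export MONGOCK_TRANSACTIONENABLED="false"    # default; avoids MongoDB 5 compatibility issues
```

In a Kubernetes deployment these values would typically come from a `ConfigMap` and a `Secret` rather than literal exports.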
### Private storage configuration
A private CMS securely stores uploaded documents and AI-generated documents, ensuring they are accessible only via authenticated endpoints. This CMS supports AI services and workflows while maintaining strict access controls.
Private CMS ensures secure file storage by keeping documents hidden from the Media Library and accessible only through authenticated endpoints with access token permissions. Files can be retrieved using tags (e.g., ai\_document, ref:UUID\_doc) and are excluded from project builds as they aren't needed at runtime.
* `APPLICATION_FILE_STORAGE_S3_PRIVATE_SERVER_URL`: URL of the S3 server used for private file storage.
* `APPLICATION_FILE_STORAGE_S3_PRIVATE_BUCKET_NAME`: Name of the S3 bucket dedicated to private file storage.
* `APPLICATION_FILE_STORAGE_S3_PRIVATE_CREATE_BUCKET`: Whether the private S3 bucket should be created if it does not already exist (`true` or `false`).
* `APPLICATION_FILE_STORAGE_S3_PRIVATE_ACCESS_KEY`: Access key used to authenticate to the S3 server for private file storage.
* `APPLICATION_FILE_STORAGE_S3_PRIVATE_SECRET_KEY`: Secret key used to authenticate to the S3 server for private file storage.
***
## Admin setup
### MongoDB configuration
Version 4.6.0 introduces a new MongoDB setup in the Admin service for managing data model information:
* **New MongoDB Data Model Configuration**:
* **Environment Variables**:
* `SPRING_DATA_MONGODB_URI`: URI for connecting to the MongoDB data model instance.
* `DB_USERNAME`: Set to `data-model` for data model access.
* `SPRING_DATA_MONGODB_STORAGE`: Specifies storage type for Azure environments (`mongodb` or `cosmosdb`).
***
## Engine setup
### MongoDB configuration
The Engine configuration now includes additional setup for a runtime MongoDB instance to manage runtime builds:
* **Runtime MongoDB for Engine**:
* **New Environment Variables**:
* `SPRING_DATA_MONGODB_RUNTIME_URI`: URI for connecting to the runtime MongoDB.
* `DB_USERNAME`: Set to `app-runtime` for runtime data access.
# Configuration and migration guide
Source: https://docs.flowx.ai/release-notes/v4.6.0-january-2025/migrating-from-v4.1.x-to-v4.6.0/process-configuration
This guide outlines changes in process and UI configuration from v4.1.x to 4.6.0 version.
## Projects
The **Projects** feature in FlowX AI v4.6.0 is a new structure that organizes all dependencies and resources required for a project into a single deployable view. This enhancement simplifies configuration, deployment, and maintenance by layering projects on top of processes, offering a centralized workspace that encapsulates everything needed for project execution.

Several key configuration changes impact how resources and dependencies are managed, deployed, and maintained. This guide provides a breakdown of the configuration changes, automatic migration processes, and manual steps required to ensure a smooth transition.

***
### Consolidated resource management
* **Change**: All resources that were previously managed individually within processes are now grouped within Projects. This includes content management elements, integrations, themes, task configurations, and permissions.
* **Impact**: Resources like **enumerations**, **substitution tags**, and **generic parameters** are now managed within Projects, allowing centralized configuration and version control.
* **Migration**: All process-related resources will be migrated automatically into a **default project**, ensuring continuity of functionality in the new framework. After migration, you should verify that all critical resources are correctly configured within this default project.
### Enhanced version control and dependency management
* **Impact**: Projects support **dependency management** through **Libraries** and enforce version-controlled resources. Setting up a project now requires careful dependency management and versioning to prevent unintended updates.
* **Benefit**: Allows for modular, reusable resources and stable deployments across projects, reducing resource duplication and enhancing project compatibility.
### Migration checklist
To ensure a smooth transition, complete the following steps:
1. **Verify Process Migration**: Confirm that all existing process definitions have been correctly migrated into the default project.
2. **Set Configuration Parameter Overrides**: Post-deployment, adjust environment-specific Configuration Parameters in the default project.
3. **Update Task Views**: Replace the global "All Tasks" feature with project-specific Views in each project.
4. **Transfer Processes and Dependencies Manually**: If moving a process from the default project to another project, manually transfer associated resources and re-check dependencies.
***
## Generic parameters migration
* **Overview**: In version 4.6.0, global generic parameters (from versions prior to 4.6.0) have been migrated to project-level as Configuration Parameters, consolidating parameter management within specific projects for improved organization and control.

* **Migration**: All generic parameters will be automatically migrated to Configuration Parameters section under a default project.
* **Business Rules Unaffected**: There is no impact on existing business rules; they will continue to function as before without requiring updates.
* **Process Export Considerations**: If you export a process from one project to another, ensure that you also transfer the associated configuration parameters. This step is crucial to maintain process functionality and consistency across projects.
* **Important Note**: Only values of generic parameters associated with the specific environment, or where `env = null` (displayed as "all" in the interface in versions prior to 4.6.0), will be migrated. You must ensure that you have correctly set the values for generic parameters, paying attention to environment values (which are case-sensitive), and export these generic parameters before migration to avoid any potential data loss.

* **Post-Deployment Step**: After the first deployment to an upper environment, you will need to create configuration parameter overrides with the specific values required for that environment. This ensures that all environment-specific configurations are accurately maintained and applied across different deployment stages.
To set configuration parameter overrides, navigate to **Your App -> Runtime -> Configuration Parameters Overrides**.

***
## Source systems migration
* **Change**: Source systems are not migrated automatically and must be manually recreated.
* **New Location**: In v4.6.0, source systems have been moved under Integration Designer.
* **Manual Migration Required**:
1. Navigate to Integration Designer.
2. Recreate each source system manually with the same code as in the original configuration.
3. Ensure the "Enable enumeration value mapping" checkbox is enabled.
* **Impact**: Failure to manually migrate source systems may result in broken integrations and missing data mappings.
**v4.6.0**:

**\< v4.6.0**:

## Task management
* **"All Tasks" as a View**: The global "All Tasks" feature is no longer standalone; it now functions as a View within a project.
**v4.6.0:**

**v4.1.x:**

***
## General
Before starting the migration, complete the following steps:
1. **Merge All Feature Branches**: Ensure all feature branches for processes are merged into the latest version on the main branch.
2. **Remove Unnecessary Resources**: Delete any test processes or resources that are no longer needed.
3. **Export Generic Parameters**: Export generic parameters as a backup to ensure they migrate accurately to project-specific Configuration Parameters.
### Migration steps
During migration, resources will be transferred into a single **default project** with one committed version.
* **Process Definitions**: Only the last committed process version on the main branch will migrate. If no committed version exists, the latest WIP version will be used.
* **Enumerations, Substitution Tags, and Task Manager**: These resources will be migrated to the default project.
* **Generic Parameters**: Migrated as **Configuration Parameters** within the default project, covering only values where `env = null` or that match the platform’s environment setting.
* **Languages**: Language settings (available languages and default) will be moved to project settings. Languages remain globally available.
### Resources excluded from migration
Some resources will remain globally available or are deprecated:
* **Themes**
* **Fonts**
* **Global Media Library** (for media used in themes)
* **Out of Office (Task Manager)**
* **Integration Management** (will not be available in v4.6.0)
***
## Datepicker Migration
In version 4.6.0, significant updates have been introduced to the **Datepicker** component to ensure compatibility with ISO 8601 date formats and enhanced handling of date attributes within the **Data Model**. This migration affects both newly created and existing processes.
### Key changes
1. **Introduction of Date Types**:
* **Standard Date**: Stores and displays date values in ISO 8601 format, respecting the project's locale and timezone settings.
* **Legacy Date**: Retains previous formatting to ensure compatibility with existing business rules and processes.
2. **Properties Updates**:
* The **Datepicker** now supports dependent properties such as `minDate`, `maxDate`, and `defaultValue`. These properties:
* Follow the formatting rules of the selected date type (Standard or Legacy).
* Ensure that dynamic date values pulled from the **Data Model** are displayed correctly.
3. **Backward Compatibility**:
* **Existing Processes**: All migrated processes with legacy Datepicker components will default to `Legacy Date` type. This preserves the original formatting and ensures no disruption in business rules or workflows.
* **New Processes**: Newly created processes will default to `Standard Date` type, saving values in ISO 8601 format.
***
### Migration process
1. **Legacy Datepickers**:
* Automatically flagged during migration.
* Continue to work with existing business rules.
* Require manual review for future updates to transition to the **Standard Date** format.
2. **Business Rules Updates**:
* Legacy Datepickers may require manual adjustments if associated business rules reference hardcoded date formats.
* Ensure that any dynamic dates used in business rules are compatible with the ISO 8601 standard.
3. **Standard Datepickers**:
* All new Datepicker components save date values directly in ISO 8601 format.
* Fully compatible with updated **Data Model** attributes, allowing seamless integration with external systems, adaptors, and reporting plugins.
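When reviewing business rules tied to legacy Datepickers, a small normalization helper can bridge the two formats. The sketch below is not FlowX code: it assumes the legacy value is parseable by the JS `Date` constructor, converts it to the date-only ISO 8601 form, and renders it per locale.

```typescript
// Sketch: normalize a legacy date string to ISO 8601 (date-only) and render
// it for a locale. Adapt the parsing step to your actual legacy format.
function toIso8601(legacy: string): string {
  const d = new Date(legacy);
  if (Number.isNaN(d.getTime())) throw new Error(`Unparseable date: ${legacy}`);
  const pad = (n: number) => String(n).padStart(2, "0");
  // Built from local date parts to avoid timezone-induced day shifts.
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`;
}

function formatForLocale(iso: string, locale: string): string {
  // "T00:00:00" pins the value to local midnight, avoiding day shifts.
  return new Intl.DateTimeFormat(locale, { dateStyle: "medium" }).format(
    new Date(`${iso}T00:00:00`)
  );
}
```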
### Considerations
* **Default Values**:
* For **Standard Datepickers**, the `defaultValue` must be in ISO 8601 format.
* Dynamic defaults can also be set using **Data Model** attributes or process data.
* **Data Model Integration**:
* All external and internal date attributes, including those used by adaptors or business rules, must be explicitly defined in the **Data Model**.
* **UI Designer Overrides**:
* Overrides can be applied to display dates differently for specific UI elements, ensuring flexibility for localized formatting.
### Recommendations for transition
* **Existing Processes**:
* Leave Datepicker components in `Legacy` mode unless business rules and workflows are updated to support ISO 8601.
* **New Processes**:
* Use `Standard Date` to ensure future compatibility and alignment with ISO 8601 formatting.
* **Documentation**:
* Review and update all timer expressions, adaptors, and external data feeds to use ISO 8601 format for consistency.
***
## Updates to expression evaluation
**What Changed:**
* A fix was implemented in the web expression evaluator regarding how string values are handled during process value replacement.
* Previously, when evaluating dynamic and computed expressions, string variables required manual insertion of double quotes (`""`) because the system did not automatically add them.
* With the recent fix, the system now **automatically adds the necessary quotes** for string replacements.
**Impact:**
* **Existing Expressions:** If your current dynamic or computed expressions include manually added quotes around string values, these might now result in redundant quotes or incorrect string formatting.
* **Behavioral Shift:** The automatic quote addition alters how string values are processed, which might affect outputs if the expressions were structured with the expectation of manual quotes.
**Action Required:**
1. **Review Your Expressions:** Check any dynamic and computed expressions in your processes that currently include manual double quotes around string values.
2. **Modify as Needed:** Remove unnecessary double quotes where the system now handles them automatically to prevent duplicate quoting or formatting issues.
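As a mental model for the change, consider the simplified illustration below. This is not the actual evaluator; the `${...}` placeholder syntax is an assumption used only to show why manual quotes now produce doubled quoting.

```typescript
// Simplified illustration of the v4.6.0 behavior: string values substituted
// into an expression template are now quoted automatically.
function substitute(template: string, vars: Record<string, unknown>): string {
  return template.replace(/\$\{(\w+)\}/g, (_match, name: string) => {
    const value = vars[name];
    // JSON.stringify adds the surrounding quotes (and escaping) for strings.
    return typeof value === "string" ? JSON.stringify(value) : String(value);
  });
}

// Pre-4.6.0 style with manual quotes now double-quotes the value:
//   substitute('"${status}" === "DONE"', { status: "DONE" })
//     yields '""DONE"" === "DONE"'  (broken)
// Remove the manual quotes instead:
//   substitute('${status} === "DONE"', { status: "DONE" })
//     yields '"DONE" === "DONE"'    (correct)
```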
***
## Export/import enumerations (CSV)
In the transition from version 4.1.x to 4.6.0, the following columns have been removed from the enum CSV export format:
* description.application
* description.draft
* contentValue.childContentDescription.application
* contentValue.childContentDescription.draft
* contentValue.childContentDescription.type
Existing CSV export/import processes should be updated to remove references to these columns to ensure compatibility with the 4.6.0 release.
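If you maintain scripts around these exports, a helper along the following lines can drop the removed columns from pre-4.6.0 CSV files. This is a naive sketch (it splits on commas and does not handle quoted cells), not part of the platform.

```typescript
// Columns removed from the enum CSV format in 4.6.0 (from this guide).
const REMOVED_COLUMNS = new Set([
  "description.application",
  "description.draft",
  "contentValue.childContentDescription.application",
  "contentValue.childContentDescription.draft",
  "contentValue.childContentDescription.type",
]);

// Indexes of the header columns that should be kept.
function keptColumnIndexes(headerLine: string): number[] {
  return headerLine
    .split(",")
    .map((name, i) => (REMOVED_COLUMNS.has(name.trim()) ? -1 : i))
    .filter((i) => i >= 0);
}

// Keep only the surviving columns of one CSV line (naive: no quoted commas).
function stripRemovedColumns(csvLine: string, kept: number[]): string {
  const cells = csvLine.split(",");
  return kept.map((i) => cells[i]).join(",");
}
```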
***
## Post-migration recommendations
For complex projects with multiple use cases, **do not use the default project for ongoing development or production builds**. Instead:
1. **Create Branches within the Default Project**: Organize and streamline resources by creating branches. This enables a lightweight build focused on production.
2. **Split the Default Project into Smaller Projects**: Use the import/export feature to separate the default project into individual projects by use case.
* **Note**: When importing processes into a new project, resource references may need to be manually reconfigured.
***
# Renderer SDKs
Source: https://docs.flowx.ai/release-notes/v4.6.0-january-2025/migrating-from-v4.1.x-to-v4.6.0/renderers
This guide assists in migrating from FlowX v4.1.x to v4.6.0.
## Android SDK migration guide
### Initialization config changes
A new configuration parameter named `locale` was added to improve the formatting of dates, numbers, and currencies.
When the SDK initialization happens through the `FlowxSdkApi.getInstance().init(...)` method, the argument must be included inside the `config: SdkConfig` parameter value:
```kotlin
FlowxSdkApi.getInstance().init(
    ...
    config = SdkConfig(
        baseUrl = "URL to FlowX backend",
        imageBaseUrl = "URL to FlowX CMS Media Library",
        enginePath = "some_path",
        language = "en",
        locale = Locale.getDefault(), // e.g. Locale("en", "US"), Locale("fr", "CA")
        validators = mapOf("exact_25_in_length" to { it.length == 25 }),
        enableLog = false,
    ),
    ...
)
```
### Changes when starting a FlowX process
Two new parameters were added:
| Name | Description | Type | Requirement |
| ---------------- | -------------------------------------------------------------------------------------- | --------------- | -------------------------------- |
| `projectId` | The id of the project containing the process to be started | `String` | Mandatory |
| `onProcessEnded` | Callback function where you can do additional processing when the started process ends | `(() -> Unit)?` | Optional. It defaults to `null`. |
```kotlin
fun startProcess(
    projectId: String,
    processName: String,
    params: JSONObject = JSONObject(),
    isModal: Boolean = false,
    onProcessEnded: (() -> Unit)? = null,
    closeModalFunc: ((processName: String) -> Unit)? = null,
): @Composable () -> Unit
```
### Changes when resuming a FlowX process
One new parameter was added:
| Name | Description | Type | Requirement |
| ---------------- | ---------------------------------------------------------------------------------------- | --------------- | -------------------------------- |
| `onProcessEnded` | Callback function where you can do additional processing when the continued process ends | `(() -> Unit)?` | Optional. It defaults to `null`. |
```kotlin
fun continueProcess(
    processUuid: String,
    isModal: Boolean = false,
    onProcessEnded: (() -> Unit)? = null,
    closeModalFunc: ((processName: String) -> Unit)? = null,
): @Composable () -> Unit
```
### Changes regarding the implementation of Custom Components
The support for classical **[Android View](https://developer.android.com/reference/android/view/View)** system has been dropped.
A Custom Component can now be implemented only by using the **[Compose](https://developer.android.com/compose)** UI system.
This means the `CustomViewComponent` is now ignored by the SDK internals and has been marked `@Deprecated`; it will be completely removed in the next release.
There is no immediate need to update any of the existing components.
### Public API changes
The `changeLanguage` method has been updated and renamed to `changeLocaleSettings`, in order to accommodate the newly added `locale` config parameter:
```kotlin
fun changeLocaleSettings(locale: Locale, language: String)
```
### Library dependencies updates
* **[Kotlin](https://kotlinlang.org/)**: 1.9.22 **↗** **1.9.24**
* **[Compose BOM](https://developer.android.com/jetpack/compose/bom/bom-mapping)**: 2024.02.00 **↗** **2024.06.00**
* **[Compose Compiler](https://developer.android.com/jetpack/androidx/releases/compose-compiler)**: 1.5.9 **↗** **1.5.14**
* **[Gson](https://github.com/google/gson)**: 2.10.1 **↗** **2.11.0**
***
## iOS SDK migration guide
### Initialization config changes
A new configuration parameter named `locale` was added to improve the formatting of dates, numbers, and currencies.
The locale needs to be set in the `FXConfig.sharedInstance.configure` method:
```swift
FXConfig.sharedInstance.configure { (config) in
    config.locale = "en-US"
    ...
}
```
### Changes when starting a process
Two new parameters were added to the 3 available start process methods:
| Name | Description | Type | Requirement |
| ---------------- | -------------------------------------------------------------------------------------- | --------------- | ------------------------------- |
| `projectId` | The uuid string of the project containing the process to be started | `String` | Mandatory |
| `onProcessEnded` | Callback function where you can do additional processing when the started process ends | `(() -> Void)?` | Optional. It defaults to `nil`. |
```swift
func startProcess(projectId: String,
                  name: String,
                  params: [String : Any]?,
                  isModal: Bool = false,
                  showLoader: Bool = false,
                  completion: ((UIViewController?) -> Void)?,
                  onProcessEnded: (() -> Void)? = nil)
```
### Changes when resuming a FlowX process
One new parameter was added to the 3 available continue process methods:
| Name | Description | Type | Requirement |
| ---------------- | -------------------------------------------------------------------------------------- | --------------- | ------------------------------- |
| `onProcessEnded` | Callback function where you can do additional processing when the continued process ends | `(() -> Void)?` | Optional. It defaults to `nil`. |
```swift
func continueExistingProcess(uuid: String,
                             name: String,
                             isModal: Bool = false,
                             completion: ((UIViewController?) -> Void)? = nil,
                             onProcessEnded: (() -> Void)? = nil)
```
### Updated FXDataSource protocol
A new method has been added to the `FXDataSource` protocol. Update conforming types by implementing the new function:
```swift
func newProcessStarted(processInstanceUuid: String) {
}
```
### Public API changes
The `changeLanguage` method has been updated and renamed to `changeLocaleSettings`, in order to accommodate the newly added `locale` config parameter:
```swift
func changeLocaleSettings(locale: String?, language: String?)
```
***
## Angular SDK migration guide
### Upgrading to the new SDK libraries
The Angular SDK npm packages have been updated to the latest Angular version (v19), so all container apps that want to use the new SDKs should first update their Angular version.
The following covers migrating a container app from Angular v17 to v19 and to the new SDKs.
* Remove `@angular/material-moment-adapter` unless it is used in custom components (usually together with the Material datepicker), and also remove the `moment` package (unless it is used elsewhere in the project):
```bash
npm uninstall @angular/material-moment-adapter
npm uninstall moment
```
* remove old FlowX SDK libraries:
```bash
npm uninstall @flowx/ui-sdk @flowx/ui-theme @flowx/ui-toolkit
```
* In `angular.json`, remove references to the old stylesheets:
```json
"./node_modules/@flowx/ui-sdk/src/assets/scss/style.scss"
"./node_modules/paperflow-web-components/src/assets/scss"
"./node_modules/@flowx/ui-sdk/src/assets/scss"
```
* Remove the old FlowX SDKs' explicit dependencies. Before removing, check that these packages are not used in the container app or in custom components:
```bash
npm uninstall date-fns event-source-polyfill inputmask ng2-pdfjs-viewer vanillajs-datepicker @ngneat/input-mask deepmerge-ts marked-mangle
```
* Remove the deprecated `Angular Flex` library (the new SDKs no longer use it, and the package has been deprecated by the Angular team) unless it is used in the container app or custom components:
```bash
npm uninstall @angular/flex-layout
```
* Make sure to have the Angular CLI installed at version 18 (check with `ng --version`)
```bash
npm install -g @angular/cli@18
```
* Run the Angular update for version 18
```bash
ng update @angular/core@18 @angular/cli@18
```
* Update Angular CDK or Angular Material
```bash
ng update @angular/cdk@18
```
or, if the container app uses Angular Material, run:
```bash
ng update @angular/material@18
```
* Optionally, if your project uses other Angular packages (e.g., `NgRx`), run their migrations to v18 before proceeding. Details are available on each library's documentation pages.
* Angular has changed the default build path: the `dist` output now uses a `browser` subfolder. To place the build output directly in the `dist/[APP_NAME]` folder, make the following change in `angular.json`:
```json
{
...
"architect": {
...
"build": {
...
"options": {
...
"outputPath": {
"base": "dist/flowx-demo-app",
"browser": "" <-- add this line
},
}
}
}
}
```
* Make sure to have the Angular CLI installed at version 19 (check with `ng --version`)
```bash
npm install -g @angular/cli@19
```
* Run the Angular update for version 19
```bash
ng update @angular/core@19 @angular/cli@19
```
* Update Angular CDK or Angular Material
```bash
ng update @angular/cdk@19
```
or, if the container app uses Angular Material, run:
```bash
ng update @angular/material@19
```
* Optionally, if your project uses other Angular packages (e.g., `NgRx`), run their migrations to v19 before proceeding. Details are available on each library's documentation pages.
* It is also recommended to update the `target` and `module` settings in `tsconfig.json`; be advised that this might require additional codebase changes due to new compilation errors.
```json
{
...
"compilerOptions": {
...
"target": "ES2022",
"module": "ES2022"
}
}
```
**Adding overrides for libraries that have a dependency on Angular version \< 19**
For some libraries, the required Angular version might not yet be compatible with the new Angular version. In this case, you can add an override in the `package.json` file to force the library to use the new Angular version.
* For example, the `ng2-pdfjs-viewer` library has a dependency on Angular version 18 and will not work with the installed Angular version 19. To force the library to use the new Angular version, add the following override in the `package.json` file:
```json
{
...
"overrides": {
"ng2-pdfjs-viewer": {
"@angular/common": "^19.0.0",
"@angular/core": "^19.0.0",
"ng-packagr": "^19.0.0"
}
}
}
```
**Install new packages**:
**ACTION REQUIRED**: Update to the latest version 4.6 to ensure optimal performance and compatibility.
```bash
npm install @flowx/core-sdk@5.63.0 @flowx/core-theme@5.63.0 @flowx/angular-sdk@5.63.0 @flowx/angular-theme@5.63.0 @flowx/angular-ui-toolkit@5.62.1
```
* Install a type dependency for the SSE library:
```bash
npm install --save-dev @types/event-source-polyfill
```
* Run through all the migration steps in the [New SDK API changes](#new-sdk-api-changes) section below.
Each migration requires a 'clean' repository state. Make sure to commit all changes before starting the next migration step.
After each Angular update, it is recommended to restart your editor or TS Server so the new TS version is used; otherwise some import errors might appear.
### New SDK API changes
#### Renderer SDK component usage
In the Angular SDK, two new parameters have been introduced for the process renderer component: `projectInfo` and `locale`. These additions support localization and project-specific configurations.
| Name | Description | Type | Requirement |
| ------------- | ----------------------------------------------------------------------------------------------- | --------------------- | ----------- |
| `projectInfo` | Object containing a `projectId` key, which identifies the project of the process to be started | `{projectId: string}` | Mandatory |
| `locale` | Provides region-specific settings for localization. | `string` | Mandatory |
Add the definitions for these properties in the class file of the component that uses the process renderer component:
```typescript
projectInfo = { projectId: 'your-project-id' };
locale = 'en-US';
```
Use these parameters in the template as inputs for the process renderer component:
```html
<!-- Selector shown for illustration; use your SDK's process renderer selector -->
<flx-process-renderer
  [projectInfo]="projectInfo"
  [locale]="locale">
</flx-process-renderer>
```
#### API changes
Process Renderer:
| Category | Old Approach | New Approach |
| -------------------- | --------------------------------- | ------------------------------------ |
| **Process Renderer** | `FlxProcessModule.forRoot({...})` | `FlxProcessModule.withConfig({...})` |
#### Import path updates
| Category | Old Approach | New Approach |
| -------------------------- | --------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
| **Import Paths** | `@flowx/ui-sdk` | `@flowx/angular-sdk` (or, in some cases `@flowx/core-sdk`) |
| | `@flowx/ui-theme` | `@flowx/angular-theme` |
| | `@flowx/ui-toolkit` | `@flowx/angular-ui-toolkit` |
| **Process Module** | `import {FlxProcessModule} from '@flowx/ui-sdk';` | `import {FlxProcessModule} from '@flowx/angular-sdk';` |
| **Localization Module** | `import {FlxLocalizationModule} from '@flowx/ui-sdk';` Include in module imports | `import {FlxLocalizePipe} from '@flowx/angular-sdk';` Remove from module imports |
| **Task Management** | `import {FlxTaskManagementModule} from '@flowx/ui-sdk';` | `import {FlxTasksManagementComponent} from '@flowx/angular-sdk';` |
| **Client Store Interface** | `import {ClientStoreInterface} from '@flowx/ui-sdk';` | `import {ClientStoreInterface} from '@flowx/core-sdk';` |
You can also remove the `FlxLocalizationModule` from any `imports` arrays in modules and import the `FlxLocalizePipe` instead.
### Icon module update
The `withExtraIconSet` method has been replaced with `provideExtraIconSet`, which should now be used in the providers array.
```typescript
@NgModule({
imports: [
IconModule, // Import the IconModule
],
providers: [
// Use provideExtraIconSet to add your custom icon set
provideExtraIconSet({
customIcon1: 'path/to/custom-icon1.svg',
customIcon2: 'path/to/custom-icon2.svg',
// Add more icons as needed
})
]
})
export class AppModule {}
```
### Custom Interceptors
The new FlowX SDKs do not depend on Angular's `HttpClient` to make API calls, so existing interceptors must be adapted to a new format for both request interceptors (e.g., custom headers) and response interceptors (e.g., error handling).
For an overview of implementation details, please refer to the respective sections of the renderer documentation for the new API changes.
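As a generic illustration of the pattern (the names and signatures below are assumptions for illustration, not the SDK's actual API), a request interceptor conceptually transforms an outgoing request before it is sent:

```typescript
// Generic interceptor-pattern sketch — illustrative only; consult the
// renderer documentation for the SDK's real interceptor interfaces.
interface SketchRequest {
  url: string;
  headers: Record<string, string>;
}

type RequestInterceptor = (req: SketchRequest) => SketchRequest;

// Example: attach a custom auth header to every outgoing request,
// leaving the rest of the request untouched.
const addAuthHeader: RequestInterceptor = (req) => ({
  ...req,
  headers: { ...req.headers, Authorization: "Bearer <token>" },
});
```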
# FlowX.AI 4.6.0 Release Notes
Source: https://docs.flowx.ai/release-notes/v4.6.0-january-2025/v4.6.0-january-2025
Welcome, FlowX fam! The 4.6.0 update is here, and it’s packed with features designed to save you time, boost efficiency, and maybe even make you chuckle. Let’s dive into the details, feature by glorious feature. 🎉
## **What's new?** 🆕
### Projects: Order out of chaos
The new Project concept introduces a streamlined approach to managing and deploying complex projects within FlowX. By grouping all related resources—such as processes, enumerations, integrations, and assets—into a single, organized workspace, projects provide a cohesive view for efficient development and deployment.

💼 **Key Benefits**:
* **Unified Workspace**: Group related processes, integrations, assets, and configurations into one tidy space.
* **Version Control**: Manage multiple builds and versions effortlessly. Rollbacks? No sweat.
* **Resource Sharing**: Reuse processes and configurations across projects like a boss.
**Why you’ll love it**: It’s like going from a messy desk to an organized digital office.
🎥 **Quick Demo**: (Who needs text when you have visuals?)
For more details, check out this section
#### 🛠 Config vs Build Mode
**Config Mode**: Tinker, tweak, and perfect.
* Config mode is the environment where you set up, adjust, and manage your project's resources, processes, and configurations. It's the workspace where you fine-tune every aspect of the project before it's ready for deployment. Think of it as the design phase, where the focus is on setup, organization, and preparation.
**Build Mode**: Lock it, load it, deploy it.
* Build mode is the stage where the configurations you've set up are packaged into a deployable form. This is the runtime-ready version of your project, containing everything needed for it to function in a production environment. A build includes a snapshot of the project’s state at a given point, ensuring stability and predictability when deployed.
For more details, check out this section
***
### Integration Designer: Connect like a pro
It’s like a dating app for your systems: drag, drop, connect, and let the workflows flow.

API integration just got easier! Integration Designer simplifies connecting FlowX with external systems using REST APIs.
✨ **The magic of it**:
* **Drag-and-Drop Interface**: Map data, define workflows, and configure endpoints like a breeze.
* **Flexible Authorization**: Use Service Tokens, Bearer tokens, or go authentication-free.
* **Error Management**: Built-in tools ensure your integrations don’t ghost you.
**Fun Fact**: Integration Designer doesn’t just test APIs (we love you, [Postman](https://www.postman.com/)❤️). It automates workflows, saving you tons of time.
For more details, check out this section
***
### Merge Conflicts 2.0: Merge without the drama
Merging branches? We've overhauled the process to make it faster, smarter, and far less painful.

🧩 **What’s New**:
* **Advanced Conflict Detection**: Pinpoint issues by resource type (Processes, Enumerations, Media, etc.).

* **Resource-Level Resolution**: Resolve conflicts directly in the UI with:
* JSON comparisons (color-coded for clarity!).
* Navigation tools to jump between differences.
* Progress tracking for reviewed items.


Handle unresolved conflicts with the **Merge Anyway** option, which provides flexibility while maintaining control over outcomes:
* A confirmation modal explains how unresolved conflicts will be handled (e.g., prioritizing source branch changes).
* Merging can continue even when some conflicts remain unresolved.
**Pro Tip**: Use the “Merge Anyway” option if you need to override unresolved conflicts (but use it wisely!).

***
### AI Agents: Smart, adaptable, always ready to help
FlowX AI Agents bring automation to your fingertips, powered by a flexible, LLM-agnostic core.
🤖 **Why It’s Cool**:
* Works with both open-source and private language models.
* Secure, on-premise deployments available.
* Automates business processes and interactions.
🎥 Check out a quick demo:
Stay tuned for more.
***
### Localization and Internationalization
FlowX is now more global-friendly with enhanced localization features.
* Enhanced formatting options for dates, times, numbers, and currencies, with support for international standards including short, medium, long, full, and custom formats.
* Support for currency formatting, with options to display values using ISO codes or symbols, adapting to user-selected locales and regional preferences.
* Comprehensive settings for pluralization, capitalization, alignment, and sorting that adapt to regional requirements, helping cater to a diverse global user base.
* New options in the UI Designer to override general settings for text, messages, links, and form elements based on locale and region.
* Static and dynamically generated legal documents can now be customized based on regional and locale settings, improving compliance and communication with local audiences.
For more details, check out this section
***
### Data Model 2.0: Data handling, but make it fashion
In this release, we've introduced Data Model 2.0, a major upgrade designed to simplify and enhance data handling. Key improvements include the introduction of reusable objects, a visual representation of complex data structures, and dynamic data binding, all aimed at creating a more reliable and intuitive experience. The new model ensures references are automatically updated when changes are made, streamlining data management across the platform.
**Highlights**:
* **New Root Element**: Provides a structural basis without impacting business logic, ensuring a clear and organized data structure.
* **Enhanced Data Binding**: Expanded to more areas, with dynamic updates based on changes in the data model.
* **Localization**: Added handling mechanism for locale-specific formatting, ensuring accurate data representation.
For more details, check out this section
***
### Task management 2.0
We've enhanced Task Management capabilities, making it easier to create, track, and manage tasks efficiently. The updated Task Management features include customizable views and advanced filtering and sorting options for task data. Additionally, users can now implement both low-code and full-code solutions for a tailored Task Management experience, ensuring maximum flexibility for different business needs.

For more details, check out this section
***
### UI Designer
#### Grid layout
We are excited to introduce the Grid Layout feature in this release, enhancing the flexibility and usability of the UI Designer for creating structured and responsive layouts. With the new Grid Layout, users can organize form elements, tables, and other components using a multi-column and row system, allowing for more complex, bidirectional designs compared to the previous flex-based layout.

#### Table
The Table component is a simple and flexible way to display data in web applications. It lets you customize columns, add pagination, sort and filter data, and apply your own styling. You can resize columns, scroll through data, and even add actions like editing or deleting rows directly in the table. It’s easy to set up, with default columns and rows generated automatically based on your data.

For more details, check out this section
#### FlowX.AI UI Designer new navigator
We are thrilled to introduce the new and improved **UI Designer Navigator**, a major update designed to streamline your UI design process.
* The UI layer panel has been redesigned for a more intuitive experience, making it easier to manage and navigate through your design elements.
* Dragging and dropping components in the preview is now smoother and more precise, allowing for faster and easier creation of UIs.
* You can now change the root component from a form group to a container or vice versa, offering greater flexibility when copying nodes or resolving configuration problems.
* It is now simpler to identify where you are placing a component within the hierarchy, thanks to the enhanced drag/reorder functionality in the right panel.
#### Conditional formatting for UI elements
Dynamically update styling and properties of UI elements based on conditions, reducing the need for multiple prototypes.

Conditional styling is available for **text**, **link**, and **container** UI elements based on specific conditions.
#### Hide expressions enhancements
Expanded hide expressions functionality to include:
* Cards
* Collection prototypes and children
* Table cell children (e.g., texts, images, buttons, links)
#### UI actions
* Added a new UI action **Start new project instance**
* Removed the **Start Process Inherit** UI action type.
#### UI action form updates
* Introduced **Functional** and **UX sections** for all action types, with a checkbox for **Add Analytics Name** in the UX section.
* Simplified custom and exclude key management.

***
### Start new project instance action
We’ve introduced the Start New Project Instance node action, enabling users to initiate a completely new process from within an ongoing one. This functionality is perfect for scenarios where isolated use cases are managed across different projects or environments.

***
### Custom components validation
You are able to validate and retrieve data from a generated screen, including data from a custom form section. Additionally, you’ll have the ability to trigger and manage data directly within that section. This will be a major benefit for clients who have chosen to create custom components for the entire screen (for example, a form that isn’t supported natively by the platform).
***
### Autocomputed data to send
In previous platform versions, when creating your UI screens and working with data, you had to ensure that all the data stored in your process keys was saved in the process instance. This required adding an extra parameter called "Data to send" to the "Save Data" node action.
**Older versions** vs **v4.6.0**:
Now, with the autocompute feature, this step is no longer necessary, as the data is automatically saved and sent on your process instance.
You no longer need to add "Save Data" node actions, but make sure to include "Forms to Validate" on the UI Actions. This helps ensure that data is submitted correctly and automatically activates any added validators where they exist.

However, you still have the option to customize which keys are included or excluded as needed.

***
### **Scripting UX Improvements**: *Write code like a wizard*
* **Autocomplete**: Get tailored suggestions based on your data model.
* **New Editors**: JSON and hide/disable expressions editors now include examples, syntax help, and better visual feedback.

***
### React & Angular SDKs
Dev friends, we heard you. FlowX now integrates smoothly into your React and Angular apps. It’s like peanut butter and jelly, but make it code.
#### React SDK
With the 4.6.0 release, we're excited to announce the launch of the FlowX React SDK! This new SDK empowers developers to seamlessly integrate FlowX.AI capabilities into React applications, making it easier than ever to build highly interactive, responsive, and dynamic user experiences.
To get started, simply install the FlowX React SDK via npm and check our documentation for examples and best practices!
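As a quick start, the SDK can be installed from npm. The package names below are taken from this release's component versions table; treat the exact set of companion packages as an assumption and pin the versions recommended in the deployment guidelines for your platform release:

```shell
# Install the FlowX React SDK and its companion packages from npm.
# Pin versions per the deployment guidelines for your FlowX.AI release.
npm install @flowx/react-sdk @flowx/core-sdk @flowx/react-theme
```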
#### New Angular SDK
We're excited to announce new releases for FlowX packages to support web applications:
* `@flowx/core-sdk`
* `@flowx/angular-sdk`
* `@flowx/angular-theme`
* `@flowx/angular-ui-toolkit`
* `@flowx/core-theme`
We recommend upgrading to the **latest** versions of all @flowx packages to take full advantage of these new features and improvements. Check the deployment guidelines for version details.
***
### Analytics integration for Container Apps
Container apps now support tracking "Screen Displayed" and "Action Executed" events, configurable in the UI Designer to improve insights into user interactions.
**UI Designer updates**
* **Analytics Screen Name:** Configurable under root UI components (Cards and Containers) to enable tracking in analytics platforms such as Google Analytics.
  * "Screen Displayed" events support user-task-based reporting, such as when a user task screen is shown.
  * The configured values are stored in `flowxProps.analytics`, making them accessible for integration with analytics tools.
* **Action Analytics:** Tracks user actions (e.g., button clicks) with tags set directly in the UI Action Form.
  * Analytics configuration was added to the **UX section** for all UI action types.
  * Values are saved in `params.analytics` via the PATCH `/actions` request.
**Renderers updates**
Renderers now expose a public API for triggering analytics events:
* **Screen Events:** Triggered on user task display, using `flowxProps.analytics`.
* **Action Events:** Triggered on action execution, using `params.analytics`.
Dynamic values such as process store keys and replace tags are supported for more contextual tracking.
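As an illustration, the stored analytics values might look like the following. This is a hypothetical sketch: the `flowxProps.analytics` and `params.analytics` keys come from the release notes above, but the nested field names and payload shape are assumptions, not the exact schema.

```json
{
  "flowxProps": {
    "analytics": {
      "screenName": "loan-application-step-1"
    }
  },
  "params": {
    "analytics": {
      "actionName": "submit-loan-application"
    }
  }
}
```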
***
### Plugin updates
**Notifications** and **Documents** plugins have been integrated as **project resources**.
***
### Scheduled Processes: Timer management reimagined
We've completely redesigned how you manage timer-triggered processes. Before 4.6.0, timer activation was managed through a quick actions menu in process definition headers; now it is handled via a dedicated UI section in the Runtime tab, under Scheduled Processes.

🕒 Key Improvements:
* **Centralized Timer Management:** All process timers are now visible and manageable in one dedicated location
* **Persistent Settings:** Timer configurations persist across build exports/imports
* **Environment Control:** Activate or deactivate timers without redeploying your application
For more details, check out this section
## Bug fixes 🐞
### Rendering
* **Default Value Initialization for UI Elements**: Resolved an issue where certain UI elements were initialized with incorrect default values.
* **Improved Security for Hidden Fields**: Fixed a vulnerability where hidden fields could be accessed through browser developer tools.
* **Form Validation Reset on UI Dismissal**: Addressed an issue where form validation errors persisted after the UI was closed.
* **Keybinding Issues on Windows**: Fixed a bug where pressing specific keys triggered unintended operations in the designer.
* **Input Field Validation for Number Type**: Corrected the validation logic for number input fields to ensure consistent behavior.
* **Improved Datepicker Popup Positioning**: Resolved an issue where the datepicker popup overlapped the associated input field.
* **Manual Date Entry in Datepicker**: Fixed an issue where manually entering a date after selecting one from the datepicker caused errors.
* **Enhanced Update Mechanism for Custom Components**: Improved the reliability of updates for custom components to reflect changes consistently.
## Known issues 😢
We’re aware of a few quirks during merge conflicts and are working to fix them. Here’s a quick rundown:
* After a merge, sequences without a connection to an end node may appear on the canvas in cases of conflicts related to deleted nodes in one of the versions. This may occur because the merge mechanism does not detect that a sequence needs to be deleted if, in the final version, the node remains deleted.
* Navigation areas – After a merge, inconsistencies in the navigation areas hierarchy calculated during merging may result in navigation areas being hidden in the final merged version.
* No merge conflict detected if a user task node is deleted in the source branch and its UI is modified in the destination branch.
* Nodes may end up having two different outgoing sequences in certain scenarios: When a node is not connected to another node, and both the source and target branches define different sequences to different nodes, the merged version may contain both sequences. In particular, this may cause inconsistencies for boundary nodes, which cannot have multiple outgoing sequences.
* Gateway nodes may lose some of their configurations during merging when conflicts arise.
* Conflicts arising from moving nodes to different swimlanes in both the source and target branches are not detected correctly and result in a merge error.
* **Duplicate content values can be created during merging**: Merging branches with enumeration updates can result in duplicate content values or child enums with the same code. This occurs when identical names are added to the same enum in different branches, bypassing backend validation. Instead of flagging a conflict, the merge proceeds, creating duplicate entries. This issue disrupts the expected uniqueness constraint for enumeration values.
* **Enum content values might not save correctly in some cases**: In specific scenarios involving transformations of content values into child enums across branches, content values added in one branch may not appear after merging. Despite accepting changes from the branch where the content value was added, the merged version omits it, leading to incomplete or inconsistent enumeration data.
* **New substitution tags with the same name on different branches aren’t flagged**: Merging branches that include new substitution tags with identical names on different branches does not trigger a conflict. Instead, both tags are retained, resulting in duplicate substitution tags with the same name but different values. This issue bypasses expected conflict detection and can lead to inconsistencies in substitution tag usage.
* **Adding values to deleted data model keys does not raise conflicts**: When values are added to a data model key in one branch and the same key is deleted in another, merging does not generate a conflict. Instead, the added values from the first branch are silently lost in the merged version. This issue bypasses expected conflict detection, leading to data loss and inconsistencies in the resulting data model.
* Branches with media library assets sometimes fail to merge into the main branch: Merging branches with media library assets can result in a 500 Internal Server Error when changes to the same asset occur across multiple branches. This issue typically arises when one branch is merged into another, and then the combined branch is merged into the main branch. The failure occurs due to null parameter handling during the merge process, preventing the merge from completing successfully.
* Branches with notification and document templates fail to merge into the main branch
* After a successful merge, the merged version is not selected in the versioning modal.
* Some entries in the merge conflicts tree may show numbers instead of specific identifiers (e.g., forms).
* **Unnecessary scrollbars appear in the merge modal for single items**: The merge conflict modal displays unnecessary scrollbars when only a single item is present in the list. This occurs in scenarios where media library assets with long keys are involved, creating a layout issue that affects the user interface. While functionality is not impacted, this visual inconsistency can reduce the overall user experience during conflict resolution.
* **Fields like `originalCreationTimestamp` and `flowxUuid` are incorrectly flagged as conflicts**: Internal metadata fields may be reported as merge conflicts even though they carry no meaningful user changes, adding noise to the conflict list and making genuine conflicts harder to review.
## Changes 🔧
* **Web Renderer Caching:** Improved resource caching with browser mechanisms keyed to build IDs.
* **Process Designer:** Enhanced drag-and-drop functionality for node placement.
***
## Additional information
For deployment guidelines and further details, refer to:
Migrating from v4.1.x:
# Deployment guidelines v4.6.1
Source: https://docs.flowx.ai/release-notes/v4.6.1-february-2025/deployment-guidelines-v4.6.1
Don't forget: after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.

After upgrading to the 4.6.1 FlowX.AI release, you cannot import old process definitions or resources from versions older than 4.6.0.
## Component versions
| Component | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ----------------------------- | ---------- | ------ | -------- | -------- | -------- | ------ | ------ | ---------- | --------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **8.1.1** | 8.0.2 | 6.1.4 | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | |
| **admin** | **7.2.6** | 7.2.3 | 5.1.6 | 5.1.5 | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **5.69.3** | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/angular-sdk** | **5.69.3** | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-theme** | **5.69.3** | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-ui-toolkit** | **5.69.3** | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/react-sdk** | **5.69.3** | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-theme** | **5.69.3** | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-ui-toolkit** | **5.69.3** | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-sdk** | **5.69.3** | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-theme** | **5.69.3** | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **paperflow-web-components** | - | - | - | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **cms-core** | **5.2.2** | 5.1.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | 5.0.0 | 5.0.0 | 3.0.2 | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | 5.0.2 | 5.0.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | 6.0.3 | 6.0.3 | 4.0.4 | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | 6.0.3 | 6.0.3 | 4.0.3 | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **license-core** | 4.1.3 | 4.1.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.5 | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 |
| **task-management-plugin** | 7.0.3 | 7.0.3 | 5.0.5 | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | 4.0.0 | 4.0.0 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | 6.0.0 | 6.0.0 | 4.0.5 | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.1.12 | 0.1.12 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | 4.0.0 | 4.0.0 | 2.0.3 | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **integration-designer** | **2.0.4** | 2.0.1 | | | | | | | | | | | | | | | | | | |
| **application-manager** | **2.0.25** | 2.0.18 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **data-sync** | 2.0.2 | 2.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **iOS renderer** | **4.0.4** | 4.0.3 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | **4.0.11** | 4.0.9 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
Note: The same container image and Helm chart are used for **Application Manager** and **Runtime Manager**.
## AI Agents
| Component | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------- | ---------- | ------ | ----- | ----- | ----- | ----- | --- | --- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| **ai-designer** | **1.9.16** | 1.9.10 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-assistant** | **1.6.15** | 1.6.9 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-analyst** | **1.10.6** | 1.10.3 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **di-platform** | **1.2.27** | 1.2.24 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-developer** | **1.6.5** | 1.5.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.6.1 | Keycloak | 22.x |
| 4.6.1 | Kafka | 3.2.x |
| 4.6.1 | PostgreSQL | 16.2.x |
| 4.6.1 | MongoDB | 7.0.x |
| 4.6.1 | Redis | 7.2.x |
| 4.6.1 | Elasticsearch | 7.17.x |
| 4.6.1 | OracleDB | 21C/ 21-XE |
| 4.6.1 | Angular (Web SDK) | 19.x |
| 4.6.1 | React (Web SDK) | 18.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
# Migrating from previous versions to v4.6.1
Source: https://docs.flowx.ai/release-notes/v4.6.1-february-2025/migrating-from-v4.1.x-to-v4.6.1
If you're upgrading from v4.6.0, first review the v4.6.0 migration guide and deployment guidelines to capture all the significant changes:
If you're moving from v4.1.x to v4.6.1, don't forget to check the relevant migration guides for each version to ensure nothing is missed:
Also check the deployment guidelines for all the releases in between, so you don't miss important configurations or changes:
# FlowX.AI 4.6.1 Release Notes
Source: https://docs.flowx.ai/release-notes/v4.6.1-february-2025/v4.6.1-february-2025
FlowX.AI 4.6.1 is here with important fixes and improvements to keep things running smoothly. No new features this time—just focused updates to improve stability, fix bugs, and enhance overall performance. Simple, clean, and efficient.
## **What's new?** 🆕
### Rendering
**Form Elements** improvements:
* **Default Value Handling**: Defaults are applied only when no existing value is found, and set upon first display (with specific exceptions).
* **Data Consistency**: Form elements now save data in standardized formats based on type for improved reliability.
* **Smart Defaults**: Clear fallback behaviors for elements like switches, sliders, and checkboxes when no value is present.
* **Synchronized Inputs**: Linked form elements (e.g., input + slider) stay in sync with automatic value adjustments within set limits.
* **Dynamic Dependencies**: Form elements can dynamically adjust based on related components’ values for greater flexibility.
### Merge conflicts
* Improvement: the merged version is now selected in the versioning modal after a successful merge.
## Bug fixes 🐞
### Merge conflicts
* Fixed an issue that caused conflicts involving notification and document templates to fail.
* Fixed an issue where merges failed when a deleted configuration parameter was updated on a branch.
* Fixed an issue that resulted in duplicate substitution tag keys appearing in the merged version.
* Fixed an issue where merges failed due to the same dependency being added to both the source and target branches.
* Fixed issues related to workflow conflicts.
### Rendering
* Fixed an issue where documents from the media library were not loading correctly when added as a file preview in a user task.
### Integration Designer
* Fixed an issue causing a 400 error when adding an enum mapper to the body tab, due to incorrect data validation.
* Fixed an issue where the last attribute in the objects table was not visible at higher zoom levels, due to UI rendering constraints.
## Known issues 😢
We’re aware of a few quirks during merge conflicts and are working to fix them. Here’s a quick rundown:
* After a merge, sequences without a connection to an end node may appear on the canvas in cases of conflicts related to deleted nodes in one of the versions. This may occur because the merge mechanism does not detect that a sequence needs to be deleted if, in the final version, the node remains deleted.
* Navigation areas – After a merge, inconsistencies in the navigation areas hierarchy calculated during merging may result in navigation areas being hidden in the final merged version.
* No merge conflict detected if a user task node is deleted in the source branch and its UI is modified in the destination branch.
* Nodes may end up having two different outgoing sequences in certain scenarios: When a node is not connected to another node, and both the source and target branches define different sequences to different nodes, the merged version may contain both sequences. In particular, this may cause inconsistencies for boundary nodes, which cannot have multiple outgoing sequences.
* Gateway nodes may lose some of their configurations during merging when conflicts arise.
* Conflicts arising from moving nodes to different swimlanes in both the source and target branches are not detected correctly and result in a merge error.
* **Duplicate content values can be created during merging**: Merging branches with enumeration updates can result in duplicate content values or child enums with the same code. This occurs when identical names are added to the same enum in different branches, bypassing backend validation. Instead of flagging a conflict, the merge proceeds, creating duplicate entries. This issue disrupts the expected uniqueness constraint for enumeration values.
* **Enum content values might not save correctly in some cases**: In specific scenarios involving transformations of content values into child enums across branches, content values added in one branch may not appear after merging. Despite accepting changes from the branch where the content value was added, the merged version omits it, leading to incomplete or inconsistent enumeration data.
* **Adding values to deleted data model keys does not raise conflicts**: When values are added to a data model key in one branch and the same key is deleted in another, merging does not generate a conflict. Instead, the added values from the first branch are silently lost in the merged version. This issue bypasses expected conflict detection, leading to data loss and inconsistencies in the resulting data model.
* Branches with media library assets sometimes fail to merge into the main branch: Merging branches with media library assets can result in a 500 Internal Server Error when changes to the same asset occur across multiple branches. This issue typically arises when one branch is merged into another, and then the combined branch is merged into the main branch. The failure occurs due to null parameter handling during the merge process, preventing the merge from completing successfully.
* After a successful merge, the merged version is not selected in the versioning modal.
* Some entries in the merge conflicts tree may show numbers instead of specific identifiers (e.g., forms).
* **Unnecessary scrollbars appear in the merge modal for single items**: The merge conflict modal displays unnecessary scrollbars when only a single item is present in the list. This occurs in scenarios where media library assets with long keys are involved, creating a layout issue that affects the user interface. While functionality is not impacted, this visual inconsistency can reduce the overall user experience during conflict resolution.
* **Fields like `originalCreationTimestamp` and `flowxUuid` are incorrectly flagged as conflicts**: Internal metadata fields may be reported as merge conflicts even though they carry no meaningful user changes, adding noise to the conflict list and making genuine conflicts harder to review.
## Additional information
For deployment guidelines and further details, refer to:
Migrating from v4.6.0:
# Deployment guidelines v4.7.0
Source: https://docs.flowx.ai/release-notes/v4.7.0-february-2025/deployment-guidelines-v4.7.0
Don't forget: after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.

After upgrading to the 4.7.0 FlowX.AI release, you cannot import old process definitions or resources from versions older than 4.6.0.
In version 4.7, the import-export functionality for application-version is no longer backward compatible.
* Zip files exported from version 4.7 cannot be imported into earlier FlowX versions.
* Similarly, zip files from earlier versions cannot be imported into 4.7.
Note: Zip files for builds and specific resources remain unaffected.
## Component versions
| Component | 4.7.0 | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ----------------------------- | ---------- | ------ | ------ | -------- | -------- | -------- | ------ | ------ | ---------- | --------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **8.3.1** | 8.1.1 | 8.0.2 | 6.1.4 | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | |
| **admin** | **7.5.1** | 7.2.6 | 7.2.3 | 5.1.6 | 5.1.5 | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **5.80.6** | 5.69.3 | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/angular-sdk** | **5.80.6** | 5.69.3 | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-theme** | **5.80.6** | 5.69.3 | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-ui-toolkit** | **5.80.6** | 5.69.3 | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/react-sdk** | **5.80.6** | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-theme** | **5.80.6** | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-ui-toolkit** | **5.80.6** | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-sdk** | **5.80.6** | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-theme** | **5.80.6** | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **paperflow-web-components** | - | - | - | - | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **cms-core** | **5.2.11** | 5.2.2 | 5.1.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | **5.0.2** | 5.0.0 | 5.0.0 | 3.0.2 | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | **5.0.5** | 5.0.2 | 5.0.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | **6.1.0** | 6.0.3 | 6.0.3 | 4.0.4 | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | **6.3.1** | 6.0.3 | 6.0.3 | 4.0.3 | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **task-management-plugin** | **7.2.2** | 7.0.3 | 7.0.3 | 5.0.5 | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | **4.0.4** | 4.0.0 | 4.0.0 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | **6.0.4** | 6.0.0 | 6.0.0 | 4.0.5 | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.1.12 | 0.1.12 | 0.1.12 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | **4.0.3** | 4.0.0 | 4.0.0 | 2.0.3 | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **integration-designer** | **2.3.2** | 2.0.4 | 2.0.1 | | | | | | | | | | | | | | | | | | |
| **application-manager** | **2.6.0** | 2.0.25 | 2.0.18 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **data-sync** | **2.1.0** | 2.0.2 | 2.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **iOS renderer** | **4.0.5** | 4.0.4 | 4.0.3 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | **4.0.12** | 4.0.11 | 4.0.9 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
Note: The same container image and Helm chart are used for **Application Manager** and **Runtime Manager**.
## AI Agents
| Component | 4.7.0 | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------- | ----------- | ------ | ------ | ----- | ----- | ----- | ----- | --- | --- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| **ai-designer** | **1.9.21** | 1.9.16 | 1.9.10 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-assistant** | **1.7.2** | 1.6.15 | 1.6.9 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-analyst** | **1.10.13** | 1.10.6 | 1.10.3 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **di-platform** | **1.2.31** | 1.2.27 | 1.2.24 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-developer** | **1.7.0** | 1.6.5 | 1.5.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
## Third-party recommended component versions
| FlowX.AI Version | 3rd Party Dependency | Recommended Supported Version |
| ---------------- | -------------------- | ----------------------------- |
| 4.7.0 | Keycloak | 22.x, 26.x |
| 4.7.0 | Kafka | 3.2.x |
| 4.7.0 | PostgreSQL | 16.2.x |
| 4.7.0 | OracleDB | 21c, 23ai |
| 4.7.0 | MongoDB | 7.0.x |
| 4.7.0 | Redis | 7.2.x |
| 4.7.0 | Elasticsearch | 7.17.x |
| 4.7.0 | Angular (Web SDK) | 19.x |
| 4.7.0 | React (Web SDK) | 18.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
### Deprecation notice
Starting FlowX 5.0, the following versions of 3rd Party Dependencies will no longer be supported:
* Keycloak versions older than 26
* Kafka versions older than 3.9
* Redis versions older than 7.4
# Migrating from previous versions to v4.7.0
Source: https://docs.flowx.ai/release-notes/v4.7.0-february-2025/migrating-from-v4.1.x-to-v4.7.0
If you're upgrading from a v4.6.x version, first review the v4.6 migration guide and its deployment guidelines to capture all the significant changes:
If you're moving from v4.1.x to v4.7.0, check the relevant migration guide for each intermediate version to ensure nothing is missed:
Also review the deployment guidelines for every release in between so you don't miss important configurations or changes:
# FlowX.AI 4.7.0 Release Notes
Source: https://docs.flowx.ai/release-notes/v4.7.0-february-2025/v4.7.0-february-2025
FlowX.AI 4.7.0 is here to keep your projects vibing, your merges drama-free, and your configs under control—because struggling with your platform is *so* last season. 😎🔥
## What's new? 🆕
✅ Copy resources across projects\
✅ Duplicate resources (in the same project/library)\
✅ Resources usage tracking\
✅ Smarter merge conflict detection & resolution\
✅ Libraries can now define configuration parameters\
✅ Dynamic Kafka topics using configuration parameters
## Projects
### Copy resources across projects
Moving resources between projects is now **fast and seamless**. The new **Copy to Another Project** feature ensures all dependencies stay intact, eliminating the hassle of manual adjustments.
1️⃣ **Select a destination** – Choose the target project or library.\
2️⃣ **Pick a branch (if applicable)** – For WIP versions, select the target branch.\
3️⃣ **Review dependencies** – The system displays referenced resources and validates dependencies.\
4️⃣ **Resolve identifier conflicts** – If an identifier already exists in the destination, you'll be prompted to either:
* **Keep both** (create a duplicate)
* **Replace** (overwrite the existing resource)
* **Use destination** (retain the existing resource without copying)
✨ **Benefits:**
* **No broken references** – Everything stays connected.
* **Auto-add missing dependencies** – No manual setup required.
* **Works with WIP & committed versions (only from committed to WIP)** – Flexibility for your workflow.

Check this section for more details.
***
### Duplicate resources
The **Duplicate Resource** feature lets users create a copy of an existing resource within the same project or library version, streamlining workflows and ensuring quick reusability.
You can duplicate the following resources:
* **Processes**
* **Enumerations**
* **Media files (including Global media library files)**
* **Notification templates**
* **Document templates**
* **Views**
* **Duplicate options** are available from each resource’s three-dot menu (table row & secondary navigation).
* When selected, a **"Duplicate" modal** opens with a prefilled name: *Copy of \[Resource Name]* (which can be edited).
* **Cancel** to discard changes or **Duplicate** to create an exact copy in the same project/library version.
* **Success confirmation:** The system navigates to the newly created resource and displays a success message.
* **Error handling:** If a resource with the same name/key exists, users will be prompted to rename it before proceeding.
💡 **Use cases:**
* Quickly iterate on existing resources.
* Reduce redundant manual setup.
* Maintain consistency within projects.
* Easily clone complex configurations for testing and modifications.
Check this section for more details.
***
### Track where your resources are used
Monitoring your resources has never been simpler or more efficient. Stay on top of where and how they're being used.


Track where a process is referenced, with a clear **Usage Overview modal** that dynamically updates when references change.
See where enumerations are used across **UI, Data Models, Systems, and Workflows** before making changes.
Identify **which UI elements reference media assets** so you don’t accidentally break designs.
📌 **Before deleting a resource, a confirmation modal lists all affected references. No more surprises!**
***
### Configuration parameters overrides
🔹 **Libraries can now define configuration parameters** to help test processes or workflows that use those parameters.
🔹 However, **libraries do not support overrides**.
* **Overrides for library config parameters must be done within a project** where the library is added as a dependency.
* **Overrides cannot be added in libraries directly.**
💡 **What this means:**
* Libraries provide **default values** for configuration parameters.
* Projects using those libraries **can override these values** as needed.
* This ensures **clear separation between library defaults and project-specific configurations** while maintaining flexibility.
✨ **Overrides take precedence** over library values—so you stay in control.
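The precedence rule can be sketched in a few lines. This is a minimal illustration with hypothetical parameter names, not the platform's API:

```typescript
// Hypothetical sketch of override precedence (not the FlowX API):
// a project override, when present, wins over the library default.
type ConfigParams = Record<string, string>;

function resolveParam(
  name: string,
  libraryDefaults: ConfigParams,
  projectOverrides: ConfigParams
): string | undefined {
  // Project-level overrides take precedence over library defaults.
  return projectOverrides[name] ?? libraryDefaults[name];
}

const libraryDefaults = { apiBaseUrl: "https://sandbox.example.com" };
const projectOverrides = { apiBaseUrl: "https://prod.example.com" };

console.log(resolveParam("apiBaseUrl", libraryDefaults, projectOverrides));
// the project override wins
console.log(resolveParam("apiBaseUrl", libraryDefaults, {}));
// falls back to the library default
```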
***
### Merge conflict enhancements
Merge conflicts are never fun, but at least now they’re **easier to handle**.
🚨 Prevents merging if identical resource identifiers exist in both branches (e.g., same name, key, or code).
🖥️ Resize the merge conflict tree for better visibility and navigation.
📢 A new conflict detection system for parent changes/deletions in both branches.
***
## SDK changes
### Expose Enumerations API in Angular SDK
Developers can now dynamically retrieve enumerations via the Angular SDK, making it easier to use enums in custom components like dropdowns.
**How it works**
A new method in the Angular SDK allows direct access to enumerations, reducing manual imports and improving flexibility.
**Usage**
The `getEnumeration()` function returns key-value pairs that can be used in components that need enumeration data:
```typescript
getEnumeration(enumName: string): { value: string; label: string }[]
```
* `enumName` (string) – The name of the enumeration to retrieve (e.g., 'UserRoles', 'cities')
* Returns: An array of objects containing value-label pairs representing the requested enumeration
The enumeration service works seamlessly with Angular's resource API for reactive state management:
```typescript
import { Component, effect, model, resource } from '@angular/core';
import { getEnumeration } from '@flowx/angular-sdk';

@Component({
  selector: 'app-dynamic-enum-select',
  template: `
  `
})
export class DynamicEnumSelectComponent {
  // Create a signal to track the currently selected enum
  enumName = model('UserRoles');

  // Use the resource API to reactively load enumerations when enumName changes
  options = resource({
    request: this.enumName,
    loader: ({ request: enumName }) => getEnumeration(enumName),
  });

  constructor() {
    effect(() => {
      if (this.options.isLoading()) {
        console.log(`Loading enumeration: ${this.enumName()}`);
      } else {
        console.log(`Loaded enumeration: ${this.enumName()}`, this.options.value());
      }
    });
  }
}
```
***
## **Bug fixes** 🐞
✔️ Improved **merge conflict detection and resolution**\
✔️ Fixed **UI rendering issues**
***
## Changes
### Dynamic Kafka topics
FlowX 4.7.0 introduces **Dynamic Kafka Topics**, enabling more flexible and configurable messaging across processes.
🔹 **What’s new?**
* Use **Configuration Parameters** to dynamically assign Kafka topics in **Kafka Send** and **Kafka Receive** actions.
* Concatenate predefined parameters with process variables for **on-the-fly topic selection**.
* Reduce hardcoded values and **simplify environment-specific configurations**.
💡 **Why it matters:**
* Makes **Kafka integrations more scalable** across different deployments.
* Supports **multi-topic messaging** for different use cases without manual updates.
* Ensures better **reusability and maintainability** in process configurations.
📌 Check the [Kafka Send Action](../../4.7.0/docs/building-blocks/actions/kafka-send-action#dynamic-kafka-topics) section for setup details.
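As a rough sketch of the idea (the template syntax and names below are hypothetical, not the FlowX configuration format), resolving a dynamic topic amounts to substituting configuration parameters and process variables into a topic template:

```typescript
// Illustrative sketch only: the ${...} template syntax and the parameter
// names are hypothetical. It shows the idea of concatenating configuration
// parameters with process variables to pick a Kafka topic at runtime.
function resolveKafkaTopic(
  template: string,
  values: Record<string, string>
): string {
  // Replace each ${placeholder} with its configured or process value.
  return template.replace(/\$\{(\w+)\}/g, (_, key) => values[key] ?? "");
}

const topic = resolveKafkaTopic("${envPrefix}.orders.${region}", {
  envPrefix: "prod", // e.g. from a configuration parameter
  region: "eu",      // e.g. from a process variable
});
console.log(topic); // "prod.orders.eu"
```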
***
### Export/import compatibility
In version 4.7, the import-export functionality for application-version is no longer backward compatible.
* Zip files exported from version 4.7 cannot be imported into earlier FlowX versions.
* Similarly, zip files from earlier versions cannot be imported into 4.7.
Zip files for builds and specific resources remain unaffected.
***
## **Additional information**
📌 [Deployment Guidelines v4.7.0](./deployment-guidelines-v4.7.0)\
📌 [Migration Guide (from v4.6.0)](./migrating-from-v4.1.x-to-v4.7.0)
# Deployment guidelines v4.7.1
Source: https://docs.flowx.ai/release-notes/v4.7.1-march-2025/deployment-guidelines-v4.7.1
After upgrading to a new platform version, always verify that your installed component versions match those specified in the release notes. To do so, navigate to **FlowX.AI Designer > Platform Status**.

After upgrading to the 4.7.1 FlowX.AI release, you cannot import old process definitions or resources from versions older than 4.7.0.
Starting with the 4.7.0 release, the import-export functionality for application-version is no longer backward compatible.
* Zip files exported from version 4.7 cannot be imported into earlier FlowX versions.
* Similarly, zip files from earlier versions cannot be imported into 4.7.
Note: Zip files for builds and specific resources remain unaffected.
## Component versions
| Component | 4.7.1 | 4.7.0 | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ----------------------------- | ---------- | ------ | ------ | ------ | -------- | -------- | -------- | ------ | ------ | ---------- | --------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **8.5.1** | 8.3.1 | 8.1.1 | 8.0.2 | 6.1.4 | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | |
| **admin** | **7.7.6** | 7.5.1 | 7.2.6 | 7.2.3 | 5.1.6 | 5.1.5 | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/angular-sdk** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-theme** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-ui-toolkit** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/react-sdk** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-theme** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-ui-toolkit** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-sdk** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-theme** | **5.83.8** | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **paperflow-web-components** | - | - | - | - | - | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **cms-core** | **5.3.3** | 5.2.11 | 5.2.2 | 5.1.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | **5.0.4** | 5.0.2 | 5.0.0 | 5.0.0 | 3.0.2 | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | **5.0.7** | 5.0.5 | 5.0.2 | 5.0.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | **6.1.3** | 6.1.0 | 6.0.3 | 6.0.3 | 4.0.4 | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | **6.3.4** | 6.3.1 | 6.0.3 | 6.0.3 | 4.0.3 | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **task-management-plugin** | **7.2.6** | 7.2.2 | 7.0.3 | 7.0.3 | 5.0.5 | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | **4.0.6** | 4.0.4 | 4.0.0 | 4.0.0 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | **6.0.6** | 6.0.4 | 6.0.0 | 6.0.0 | 4.0.5 | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.1.12 | 0.1.12 | 0.1.12 | 0.1.12 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | **4.0.5** | 4.0.3 | 4.0.0 | 4.0.0 | 2.0.3 | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **integration-designer** | **2.3.9** | 2.3.2 | 2.0.4 | 2.0.1 | | | | | | | | | | | | | | | | | | |
| **application-manager** | **2.9.4** | 2.6.0 | 2.0.25 | 2.0.18 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **data-sync** | **2.2.2** | 2.1.0 | 2.0.2 | 2.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **iOS renderer** | **4.0.7** | 4.0.5 | 4.0.4 | 4.0.3 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | **4.0.13** | 4.0.12 | 4.0.11 | 4.0.9 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
Note: The same container image and Helm chart are used for **Application Manager** and **Runtime Manager**.
## AI Agents
| Component | 4.7.1 | 4.7.0 | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------- | ----------- | ------- | ------ | ------ | ----- | ----- | ----- | ----- | --- | --- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| **ai-designer** | **1.9.27** | 1.9.21 | 1.9.16 | 1.9.10 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-assistant** | **1.8.2** | 1.7.2 | 1.6.15 | 1.6.9 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-analyst** | **1.10.17** | 1.10.13 | 1.10.6 | 1.10.3 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **di-platform** | **1.2.35** | 1.2.31 | 1.2.27 | 1.2.24 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-developer** | **1.7.4** | 1.7.0 | 1.6.5 | 1.5.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
## Third-party recommended component versions
| FlowX.AI Version | 3rd Party Dependency | Recommended Supported Version |
| ---------------- | -------------------- | ----------------------------- |
| 4.7.1 | Keycloak | 22.x, 26.x |
| 4.7.1 | Kafka | 3.2.x |
| 4.7.1 | PostgreSQL | 16.2.x |
| 4.7.1 | OracleDB | 21c, 23ai |
| 4.7.1 | MongoDB | 7.0.x |
| 4.7.1 | Redis | 7.2.x |
| 4.7.1 | Elasticsearch | 7.17.x |
| 4.7.1 | Angular (Web SDK) | 19.x |
| 4.7.1 | React (Web SDK) | 18.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
### Deprecation notice
Starting FlowX 5.0, the following versions of 3rd Party Dependencies will no longer be supported:
* Keycloak versions older than 26
* Kafka versions older than 3.9
* Redis versions older than 7.4
# Migrating from previous versions to v4.7.1
Source: https://docs.flowx.ai/release-notes/v4.7.1-march-2025/migrating-from-v4.1.x-to-v4.7.1
If you're upgrading from v4.7.0, first review the v4.7.0 deployment guidelines to capture all the significant changes:
If you're upgrading from a v4.6.x version, first review the v4.6 migration guide and its deployment guidelines to capture all the significant changes:
If you're moving from v4.1.x to v4.7.1, check the relevant migration guide for each intermediate version to ensure nothing is missed:
Also review the deployment guidelines for every release in between so you don't miss important configurations or changes:
# FlowX.AI 4.7.1 Release Notes
Source: https://docs.flowx.ai/release-notes/v4.7.1-march-2025/v4.7.1-march-2025
FlowX.AI 4.7.1 brings enhanced resource navigation, better error detection, and workflow improvements to make your development journey even smoother! 🚀
## What's new? 🆕
✅ Enhanced resource usage tracking with direct navigation\
✅ Upgraded JavaScript support to ECMAScript 15 (2024) in Business Rules\
✅ Added Python 3 support with 3x performance improvement\
✅ Major performance enhancements for both design time and runtime\
✅ Security enhancements for runtime endpoints
## Projects
### Enhanced resource usage navigation
We've significantly improved how you navigate between resources and their references with direct navigation capabilities. Now you can easily jump from the usage tracking panel to the exact location where a resource is referenced.
1️⃣ **Access resource usage** – View where a resource is being used through the usage panel.\
2️⃣ **One-click navigation** – Use the new redirect button to jump directly to the referenced location.\
3️⃣ **Missing resource detection** – Easily identify and fix missing resource references with clear indicators.
* **Faster troubleshooting** – Directly navigate to problem areas without manual searching.
* **Improved productivity** – Less time spent hunting down references means more time building.
* **Better error prevention** – Quickly spot and fix missing resources before they cause issues.
***
## Enhanced scripting capabilities
### ECMAScript 15 (2024) support for Business Rules
FlowX.AI 4.7.1 brings the latest JavaScript features to your business rules with an upgrade to ECMAScript 15 (2024).
* Access to the latest JavaScript language features
* Modern syntax for cleaner, more maintainable rules
* Enhanced tooling and IDE support
* Write more concise and expressive business rules
* Leverage modern JavaScript patterns and best practices
* Improved developer experience with better IDE integration
**Important Configuration Note:** Nashorn engine remains the default runtime. To enable GraalJS support with its performance improvements, you must set the feature toggle `FLOWX_SCRIPTENGINE_USEGRAALVM` to `true` in the [**Process Engine configuration**](/4.7.x/setup-guides/flowx-engine-setup-guide/engine-setup) and in the **AI Developer** (if used) agent configuration. If this configuration variable is missing or set to `false`, Nashorn will be used.
### Python 3 support for Business Rules
We've upgraded our Python scripting capabilities to Python 3, delivering significant performance improvements while maintaining compatibility.
Python scripts now run at least 3 times faster than with Python 2.7.
Take advantage of Python 3's improved syntax, libraries, and security features.
**Important Configuration Note:** Python 2.7 remains the default runtime. To enable Python 3 support with its performance improvements, you must set the feature toggle `FLOWX_SCRIPTENGINE_USEGRAALVM` to `true` in the [**Process Engine configuration**](/4.7.x/setup-guides/flowx-engine-setup-guide/engine-setup) and in the **AI Developer** (if used) agent configuration. If this configuration variable is missing or set to `false`, Python 2.7 will be used.
More details about the scripting capabilities can be found in the [Supported scripting languages](../../4.7.x/docs/building-blocks/supported-scripts) section.
***
## Performance improvements 🚀
We've made significant performance enhancements throughout the platform to make your development and runtime experience smoother and faster.
### Design time improvements
For large processes, you'll notice dramatic improvements:
| Operation | Improvement |
| ----------------- | -------------- |
| Cloning a process | **5x faster** |
| Opening a process | **70x faster** |
### Runtime enhancements
Platform average response time has been **reduced by 20%** in our comprehensive benchmark\*.
\*The benchmark simulates a typical business process with a combination of user and service tasks. It includes rules that automate decision-making, as well as events for time-sensitive actions and communication. The process also covers various system functions, ensuring the platform can efficiently manage complex workflows with both individual and parallel tasks.
***
## Security enhancements 🔒
We've significantly improved platform security by implementing proper endpoint access controls and separation between internal and public-facing APIs.
Runtime execution proxy vulnerability addressed with comprehensive endpoint updates.
Better separation between design-time and runtime endpoints with proper access controls.
### Endpoint security updates
We've implemented a comprehensive security update for runtime endpoints to prevent potential proxy vulnerabilities:
* Redesigned API structure to separate internal (`runtime-internal`) from external-facing (`runtime`) endpoints
* Migrated design-time operations to use `runtime-internal` endpoints for enhanced security
* Moved specific management operations to dedicated `build-mgmt` endpoints
* Updated all API clients to use the new secure endpoints
**Important:** Make sure to update to the corresponding version of the [**`flowx-admin`**](./deployment-guidelines-v4.7.1#component-versions) to benefit from these security enhancements.
## **Bug fixes** 🐞
* Fixed various minor issues to improve overall stability
* Resolved some UI inconsistencies
* Applied performance optimizations
* Fixed general usability issues
***
## **Additional information**
📌 [Deployment Guidelines v4.7.1](./deployment-guidelines-v4.7.1)\
📌 [Migration Guide (from v4.7.0)](./migrating-from-v4.7.0-to-v4.7.1)\
📌 [Supported scripting languages](../../4.7.x/docs/building-blocks/supported-scripts)
# Deployment guidelines v4.7.2
Source: https://docs.flowx.ai/release-notes/v4.7.2-march-2025/deployment-guidelines-v4.7.2
After upgrading to a new platform version, always verify that your installed component versions match those specified in the release notes. To do so, navigate to **FlowX.AI Designer > Platform Status**.

After upgrading to the 4.7.2 FlowX.AI release, you cannot import old process definitions or resources from versions older than 4.7.0.
Starting with the 4.7.0 release, the import-export functionality for application-version is no longer backward compatible.
* Zip files exported from version 4.7 cannot be imported into earlier FlowX versions.
* Similarly, zip files from earlier versions cannot be imported into 4.7.
Note: Zip files for builds and specific resources remain unaffected.
## Component versions
| Component | 4.7.2 | 4.7.1 | 4.7.0 | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ----------------------------- | ---------- | ------ | ------ | ------ | ------ | -------- | -------- | -------- | ------ | ------ | ---------- | --------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **8.9.0** | 8.5.1 | 8.3.1 | 8.1.1 | 8.0.2 | 6.1.4 | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | |
| **admin** | **7.8.5** | 7.7.6 | 7.5.1 | 7.2.6 | 7.2.3 | 5.1.6 | 5.1.5 | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/angular-sdk** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-theme** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-ui-toolkit** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/react-sdk** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-theme** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-ui-toolkit** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-sdk** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-theme** | **5.84.5** | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **paperflow-web-components** | - | - | - | - | - | - | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **cms-core** | **5.4.8** | 5.3.3 | 5.2.11 | 5.2.2 | 5.1.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | **5.0.10** | 5.0.4 | 5.0.2 | 5.0.0 | 5.0.0 | 3.0.2 | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | **5.0.16** | 5.0.7 | 5.0.5 | 5.0.2 | 5.0.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | **6.1.12** | 6.1.3 | 6.1.0 | 6.0.3 | 6.0.3 | 4.0.4 | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | **6.3.13** | 6.3.4 | 6.3.1 | 6.0.3 | 6.0.3 | 4.0.3 | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **task-management-plugin** | **7.2.14** | 7.2.6 | 7.2.2 | 7.0.3 | 7.0.3 | 5.0.5 | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | **4.0.13** | 4.0.6 | 4.0.4 | 4.0.0 | 4.0.0 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | **6.0.14** | 6.0.6 | 6.0.4 | 6.0.0 | 6.0.0 | 4.0.5 | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | **0.2.0** | 0.1.12 | 0.1.12 | 0.1.12 | 0.1.12 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | **4.0.12** | 4.0.5 | 4.0.3 | 4.0.0 | 4.0.0 | 2.0.3 | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **integration-designer** | **2.3.16** | 2.3.9 | 2.3.2 | 2.0.4 | 2.0.1 | | | | | | | | | | | | | | | | | | |
| **application-manager** | **2.10.9** | 2.9.4 | 2.6.0 | 2.0.25 | 2.0.18 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **data-sync** | **2.2.9** | 2.2.2 | 2.1.0 | 2.0.2 | 2.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **iOS renderer** | 4.0.7 | 4.0.7 | 4.0.5 | 4.0.4 | 4.0.3 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | 4.0.13 | 4.0.13 | 4.0.12 | 4.0.11 | 4.0.9 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
Note: The same container image and Helm chart are used for **Application Manager** and **Runtime Manager**.
## AI Agents
| Component | 4.7.2 | 4.7.1 | 4.7.0 | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------- | ----------- | ------- | ------- | ------ | ------ | ----- | ----- | ----- | ----- | --- | --- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| **ai-designer** | **1.9.35** | 1.9.27 | 1.9.21 | 1.9.16 | 1.9.10 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-assistant** | **1.8.9** | 1.8.2 | 1.7.2 | 1.6.15 | 1.6.9 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-analyst** | **1.10.24** | 1.10.17 | 1.10.13 | 1.10.6 | 1.10.3 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **di-platform** | **1.4.1** | 1.2.35 | 1.2.31 | 1.2.27 | 1.2.24 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-developer** | **1.7.9** | 1.7.4 | 1.7.0 | 1.6.5 | 1.5.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
## Third-party recommended component versions
| FlowX.AI Version | 3rd Party Dependency | Recommended Supported Version |
| ---------------- | -------------------- | ----------------------------- |
| 4.7.2 | Keycloak | 22.x, 26.x |
| 4.7.2 | Kafka | 3.2.x |
| 4.7.2 | PostgreSQL | 16.2.x |
| 4.7.2 | OracleDB | 21c, 23ai |
| 4.7.2 | MongoDB | 7.0.x |
| 4.7.2 | Redis | 7.2.x |
| 4.7.2 | Elasticsearch | 7.17.x |
| 4.7.2 | Angular (Web SDK) | 19.x |
| 4.7.2 | React (Web SDK) | 18.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
### Deprecation notice
Starting with FlowX 5.0, the following versions of third-party dependencies will no longer be supported:
* Keycloak versions older than 26
* Kafka versions older than 3.9
* Redis versions older than 7.4
## Additional configuration
A new environment variable has been added to the Process Engine:
### Message routing configuration
| Environment variable | Description | Default value |
| -------------------------- | --------------------------------------------------------------------- | ------------------- |
| `KAFKA_DEFAULT_FX_CONTEXT` | Default context value for message routing when no context is provided | `""` (empty string) |
When `KAFKA_DEFAULT_FX_CONTEXT` is set and an event is received on Kafka without an `fxContext` header, the system automatically applies the default context value to the message.
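For illustration only, here is a sketch of how this variable might be set on the process-engine container in a Kubernetes-style deployment manifest; the value `main` and the surrounding structure are assumptions, not part of this release:

```yaml
# Hypothetical excerpt of a process-engine container spec.
# KAFKA_DEFAULT_FX_CONTEXT is the variable documented above;
# everything else depends on your own deployment setup.
env:
  - name: KAFKA_DEFAULT_FX_CONTEXT
    value: "main"  # fallback context applied when an incoming Kafka event has no fxContext header
```

With this in place, events that arrive without an `fxContext` header are treated as if they carried the configured default, while events that already carry the header are unaffected.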
# Migrating from previous versions to v4.7.2
Source: https://docs.flowx.ai/release-notes/v4.7.2-march-2025/migrating-from-v4.1.x-to-v4.7.2
If you're upgrading from a v4.7.0 version, first review the v4.7.0 and v4.7.1 release documentation and deployment guidelines to capture all significant changes:
If you're upgrading from a v4.6.x version, first review the v4.6 migration guide and deployment guidelines to capture all significant changes:
If you're moving from v4.1.x, check the relevant migration guides for each intermediate version to ensure nothing is missed:
Also review the deployment guidelines for all releases in between so you don't miss important configurations or changes:
# FlowX.AI 4.7.2 Release Notes
Source: https://docs.flowx.ai/release-notes/v4.7.2-march-2025/v4.7.2-march-2025
FlowX.AI 4.7.2 delivers improved merge conflict handling with retry capabilities, deprecates DMN business rules, and fixes issues with subprocess handling, media file imports, and OCR functionality to enhance your development workflow! 🚀
## **What's new? 🆕**
✅ Improved merge conflict handling with retry functionality\
✅ Enhanced error messages during failed merges\
✅ Streamlined workflow for resolving merge conflicts
## Merge conflicts
### Improved error messages when merge fails
We've enhanced the merge conflict resolution experience with more informative error messages and a new retry functionality:
* When a merge fails, users can now see a "Retry merge" button on the Merge failed modal
* Clicking this button allows users to attempt the merge again without losing previous work
The system now handles two distinct scenarios:
**1. When conflicts exist:**
* The system opens the Merge conflicts panel with the previous state of the merge before the failed attempt
* This preserves all your conflict resolution work so you can fix the specific issues that caused the failure

**2. When no conflicts exist:**
* The system automatically retries the merge operation
* This streamlines the workflow when the merge failure was due to temporary issues rather than actual conflicts

## Process engine
### Deprecated DMN rules
In version 4.7.2, we've deprecated the [**DMN (Decision Model and Notation)**](../../4.7.x/docs/building-blocks/actions/business-rule-action/dmn-business-rule-action) business rule actions. This change affects how business rules are configured on task/user task nodes in business processes.
**What's changing:**
* DMN is no longer available as a language option when configuring business rule actions
**What's next:**
For guidance on using alternative business rule languages, please refer to our [supported scripting languages](../../4.7.x/docs/building-blocks/supported-scripts) documentation.
## **Bug fixes** 🐞
* Resolved export/import issues with media library files, particularly for applications migrated from earlier versions
* Fixed an issue where parallel multi-instance Call Activity would incorrectly trigger a subprocess when using an empty array list as a variable
* Fixed a display issue where not all subprocesses were visible in the subprocess list on smaller screens
* Fixed a bug where message events couldn't be caught when placed directly after an uninterrupting timer event
* Resolved "Unsupported Media Type" errors when importing application versions with media file binaries
* Fixed an issue where color palette changes in Theme settings couldn't be saved at 100% browser resolution on smaller screens
* Fixed an issue where the OCR plugin failed due to missing correlation headers
***
## **Additional information**
📌 [Deployment Guidelines v4.7.2](./deployment-guidelines-v4.7.2)\
📌 [Migration Guide (from v4.7.0)](./migrating-from-v4.7.0-to-v4.7.2)\
📌 [Supported scripting languages](../../4.7.x/docs/building-blocks/supported-scripts)
# Deployment guidelines v4.7.3
Source: https://docs.flowx.ai/release-notes/v4.7.3-april-2025/deployment-guidelines-v4.7.3
After upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.

After upgrading to the 4.7.x FlowX.AI release, you cannot import old process definitions or resources from versions older than 4.7.0.
Starting with the 4.7.0 release, the import-export functionality for application versions is no longer backward compatible.
* Zip files exported from version 4.7 cannot be imported into earlier FlowX versions.
* Similarly, zip files from earlier versions cannot be imported into 4.7.
Note: Zip files for builds and specific resources remain unaffected.
## Component versions
| Component | 4.7.3 | 4.7.2 | 4.7.1 | 4.7.0 | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ----------------------------- | ---------- | ------ | ------ | ------ | ------ | ------ | -------- | -------- | -------- | ------ | ------ | ---------- | --------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **8.13.0** | 8.9.0 | 8.5.1 | 8.3.1 | 8.1.1 | 8.0.2 | 6.1.4 | 6.1.3 | 6.1.2 | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | |
| **admin** | **7.8.13** | 7.8.5 | 7.7.6 | 7.5.1 | 7.2.6 | 7.2.3 | 5.1.6 | 5.1.5 | 5.1.4 | 5.1.3 | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/angular-sdk** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-theme** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | | | | | | | | | | | | | | | | |
| **@flowx/angular-ui-toolkit** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | 4.17.1-5 | 4.17.1-3 | 4.17.1-2 | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/react-sdk** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-theme** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/react-ui-toolkit** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-sdk** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **@flowx/core-theme** | **5.91.2** | 5.84.5 | 5.83.8 | 5.80.6 | 5.69.3 | 5.64.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **paperflow-web-components** | - | - | - | - | - | - | - | - | - | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **cms-core** | **5.4.12** | 5.4.8 | 5.3.3 | 5.2.11 | 5.2.2 | 5.1.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | **5.0.11** | 5.0.10 | 5.0.4 | 5.0.2 | 5.0.0 | 5.0.0 | 3.0.2 | 3.0.1 | 3.0.1 | 3.0.1 | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | **5.0.17** | 5.0.16 | 5.0.7 | 5.0.5 | 5.0.2 | 5.0.2 | 3.0.3 | 3.0.2 | 3.0.2 | 3.0.2 | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | **6.1.13** | 6.1.12 | 6.1.3 | 6.1.0 | 6.0.3 | 6.0.3 | 4.0.4 | 4.0.3 | 4.0.3 | 4.0.3 | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | **6.3.16** | 6.3.13 | 6.3.4 | 6.3.1 | 6.0.3 | 6.0.3 | 4.0.3 | 4.0.2 | 4.0.2 | 4.0.2 | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **task-management-plugin** | **7.2.15** | 7.2.14 | 7.2.6 | 7.2.2 | 7.0.3 | 7.0.3 | 5.0.5 | 5.0.4 | 5.0.4 | 5.0.4 | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | **4.0.14** | 4.0.13 | 4.0.6 | 4.0.4 | 4.0.0 | 4.0.0 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.4 | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | **6.0.15** | 6.0.14 | 6.0.6 | 6.0.4 | 6.0.0 | 6.0.0 | 4.0.5 | 4.0.4 | 4.0.4 | 4.0.4 | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.2.0 | 0.2.0 | 0.1.12 | 0.1.12 | 0.1.12 | 0.1.12 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | **4.0.14** | 4.0.12 | 4.0.5 | 4.0.3 | 4.0.0 | 4.0.0 | 2.0.3 | 2.0.2 | 2.0.2 | 2.0.2 | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **integration-designer** | **2.6.0** | 2.3.16 | 2.3.9 | 2.3.2 | 2.0.4 | 2.0.1 | | | | | | | | | | | | | | | | | | |
| **application-manager** | **2.14.0** | 2.10.9 | 2.9.4 | 2.6.0 | 2.0.25 | 2.0.18 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **data-sync** | **2.2.11** | 2.2.9 | 2.2.2 | 2.1.0 | 2.0.2 | 2.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **iOS renderer** | **4.0.14** | 4.0.7 | 4.0.7 | 4.0.5 | 4.0.4 | 4.0.3 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.20 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | **4.0.15** | 4.0.13 | 4.0.13 | 4.0.12 | 4.0.11 | 4.0.9 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.23 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
Note: The same container image and Helm chart are used for **Application Manager** and **Runtime Manager**.
## AI Agents
| Component | 4.7.3 | 4.7.2 | 4.7.1 | 4.7.0 | 4.6.1 | 4.6.0 | 4.1.4 | 4.1.3 | 4.1.2 | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------- | ----------- | ------- | ------- | ------- | ------ | ------ | ----- | ----- | ----- | ----- | --- | --- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| **ai-designer** | **1.9.37** | 1.9.35 | 1.9.27 | 1.9.21 | 1.9.16 | 1.9.10 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-assistant** | **1.8.12** | 1.8.9 | 1.8.2 | 1.7.2 | 1.6.15 | 1.6.9 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-analyst** | **1.10.26** | 1.10.24 | 1.10.17 | 1.10.13 | 1.10.6 | 1.10.3 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **di-platform** | **1.4.3** | 1.4.1 | 1.2.35 | 1.2.31 | 1.2.27 | 1.2.24 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
| **ai-developer** | **1.7.11** | 1.7.9 | 1.7.4 | 1.7.0 | 1.6.5 | 1.5.5 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | |
## Third-party recommended component versions
| FlowX.AI Version | 3rd Party Dependency | Recommended Supported Version |
| ---------------- | -------------------- | ----------------------------- |
| 4.7.3 | Keycloak | 22.x, 26.x |
| 4.7.3 | Kafka | 3.2.x |
| 4.7.3 | PostgreSQL | 16.2.x |
| 4.7.3 | OracleDB | 21c, 23ai |
| 4.7.3 | MongoDB | 7.0.x |
| 4.7.3 | Redis | 7.2.x |
| 4.7.3 | Elasticsearch | 7.17.x |
| 4.7.3 | Angular (Web SDK) | 19.x |
| 4.7.3 | React (Web SDK) | 18.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
### Deprecation notice
Starting with FlowX 5.0, the following versions of third-party dependencies will no longer be supported:
* Keycloak versions older than 26
* Kafka versions older than 3.9
* Redis versions older than 7.4
# Migrating from previous versions to v4.7.3
Source: https://docs.flowx.ai/release-notes/v4.7.3-april-2025/migrating-from-v4.1.x-to-v4.7.3
If you're upgrading from a v4.7.0 version, first review the v4.7.0 and v4.7.1 release documentation and deployment guidelines to capture all significant changes:
If you're upgrading from a v4.6.x version, first review the v4.6 migration guide and deployment guidelines to capture all significant changes:
If you're moving from v4.1.x, check the relevant migration guides for each intermediate version to ensure nothing is missed:
Also review the deployment guidelines for all releases in between so you don't miss important configurations or changes:
# FlowX.AI 4.7.3 Release Notes
Source: https://docs.flowx.ai/release-notes/v4.7.3-april-2025/v4.7.3-april-2025
FlowX.AI 4.7.3 enhances workflow conflict resolution, adds configurable enumeration ordering, and fixes data streams, UI templates, and application export/import issues to improve your development experience! 🚀
## **What's new? 🆕**
✅ Enhanced workflow conflict resolution with improved visibility\
✅ Added ability to configure order attributes on enumerations\
✅ Upgraded conflict management interface for better user experience
## Merge conflicts - workflow improvements
### Enhanced workflow conflict resolution
We've significantly improved how [**workflow**](../../4.7.x/docs/platform-deep-dive/integrations/integration-designer#workflows) conflicts are handled in this release:
* Upgraded to a new version (v2) of the workflow conflict resolution interface
* Now only showing workflow entities that have actual conflicts, making it easier to identify and address issues
* Streamlined the conflict resolution process for a more efficient workflow

**Important note about sequence conflicts:**
When resolving conflicts related to sequences, if you select one of the conflicting sequences, nodes that are no longer connected may remain "orphaned" on the canvas in the merged version. This design decision was made to prevent accidental node deletion. If these orphaned nodes are no longer needed, they can be manually removed in subsequent versions.
## Configurable enumeration ordering
You can now configure the **Order** attribute on enumerations, giving you greater control over how enumeration values are displayed in dropdown lists:
* Set specific order values for each enumeration entry
* Control the presentation sequence in UI components
* Improve user experience by arranging options in logical or prioritized order

**Limitation:** When changing the order value for an enumeration entry, the system does not automatically update the order values of other entries. You'll need to manually adjust the order values of other entries in the interface.
## **Bug fixes** 🐞
* Fixed an issue where workflow nodes could have two output sequences
* Resolved a problem with enumerations displaying in the wrong order in dropdown lists
* Fixed an authentication issue where users would receive 401 unauthorized errors when executing actions after token refresh
* Fixed a UI behavior issue where slider actions configured with "onChange" were incorrectly triggered when entering a screen instead of only when modified
* Fixed an issue with identical advancing IDs occurring in multi-pod deployments due to missing hostname command in GraalVM docker images
* Resolved an issue preventing media library migration from default application
* Fixed the "Go to reference" functionality from data model attributes that was not correctly redirecting to referenced elements
* Fixed an error during application export/import where "There already exists a tagged build for this project version" was preventing successful imports
* Restored support for multiple topics in Data Stream Topics feature, which was previously only listening to the first topic in the list
* Fixed a 500 error that occurred when attempting to create child actions on certain nodes
* Fixed a critical issue where UI templates would disappear when creating a new version of an application after performing undo operations on pasted templates
* Fixed a bug where updates to the subprocess in a Start subprocess action weren't being saved properly
* Fixed an issue with the "Send email" button remaining disabled in the Test notification template feature after correcting validation errors
* Fixed an error that occurred when trying to export document templates containing null values in templateMap
* Fixed a display issue where Tokens, Workflows, and Subprocesses information wasn't showing in process instances after migration from 4.7.1 to 4.7.2
* Fixed a UI issue where business rule code wasn't displayed when testing business rules in committed versions
***
## **Additional information**
📌 [Deployment Guidelines v4.7.3](./deployment-guidelines-v4.7.3)\
📌 [Migration Guide (from v4.7.0)](./migrating-from-v4.7.0-to-v4.7.3)
# Android SDK
Source: https://docs.flowx.ai/4.7.x/sdks/android-renderer
## Android project requirements
System requirements:
* **minSdk = 26** (Android 8.0)
* **compileSdk = 34**
The SDK library was built using:
* **[Android Gradle Plugin](https://developer.android.com/build/releases/gradle-plugin) 8.1.4**
* **[Kotlin](https://kotlinlang.org/) 1.9.24**
## Installing the library
1. Add the maven repository in your project's `settings.gradle.kts` file:
```kotlin
dependencyResolutionManagement {
...
repositories {
...
maven {
url = uri("https://nexus-jx.dev.rd.flowx.ai/repository/flowx-maven-releases/")
credentials {
username = "your_username"
password = "your_password"
}
}
}
}
```
2. Add the library as a dependency in your `app/build.gradle.kts` file:
```kotlin
dependencies {
...
implementation("ai.flowx.android:android-sdk:4.0.9")
...
}
```
### Library dependencies
Impactful dependencies:
* **[Koin](https://insert-koin.io/) 3.2.2**, including the implementation for **[Koin Context Isolation](https://insert-koin.io/docs/reference/koin-core/context-isolation/)**
* **[Compose BOM](https://developer.android.com/jetpack/compose/bom/bom-mapping) 2024.06.00** + **[Compose Compiler](https://developer.android.com/jetpack/androidx/releases/compose-compiler) 1.5.14**
* **[Accompanist](https://google.github.io/accompanist/) 0.32.0**
* **[Kotlin Coroutines](https://kotlinlang.org/docs/coroutines-overview.html) 1.8.0**
* **[OkHttp BOM](https://square.github.io/okhttp/) 4.11.0**
* **[Retrofit](https://square.github.io/retrofit/) 2.9.0**
* **[Coil Image Library](https://coil-kt.github.io/coil/) 2.5.0**
* **[Gson](https://github.com/google/gson) 2.11.0**
### Public API
The SDK library is managed through the `FlowxSdkApi` singleton instance, which exposes the following methods:
| Name | Description | Definition |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `init` | Initializes the FlowX SDK. Must be called in your application's `onCreate()` | `fun init(context: Context, config: SdkConfig, accessTokenProvider: FlowxSdkApi.Companion.AccessTokenProvider? = null, customComponentsProvider: CustomComponentsProvider? = null, customStepperHeaderProvider: CustomStepperHeaderProvider? = null)` |
| `setAccessTokenProvider` | Updates the access token provider (i.e. a functional interface) inside the renderer | `fun setAccessTokenProvider(accessTokenProvider: FlowxSdkApi.Companion.AccessTokenProvider)` |
| `setupTheme` | Sets up the theme to be used when rendering a process | `fun setupTheme(themeUuid: String, fallbackThemeJsonFileAssetsPath: String? = null, @MainThread onCompletion: () -> Unit)` |
| `changeLocaleSettings` | Changes the current locale settings (i.e. locale and language) | `fun changeLocaleSettings(locale: Locale, language: String)` |
| `startProcess` | Starts a FlowX process instance and returns a `@Composable` function in which the process is rendered. | `fun startProcess(projectId: String, processName: String, params: JSONObject = JSONObject(), isModal: Boolean = false, closeModalFunc: ((processName: String) -> Unit)? = null): @Composable () -> Unit` |
| `continueProcess` | Continues an existing FlowX process instance and returns a `@Composable` function in which the process is rendered. | `fun continueProcess(processUuid: String, isModal: Boolean = false, closeModalFunc: ((processName: String) -> Unit)? = null): @Composable () -> Unit` |
| `executeAction` | Runs an action from a custom component | `fun executeAction(action: CustomComponentAction, params: JSONObject? = null)` |
| `getMediaResourceUrl` | Extracts a media item URL needed to populate the UI of a custom component | `fun getMediaResourceUrl(key: String): String?` |
| `replaceSubstitutionTag` | Extracts a substitution tag value needed to populate the UI of a custom component | `fun replaceSubstitutionTag(key: String): String` |
## Configuring the library
To configure the SDK, call the `init` method from your application class's `onCreate()`:
```kotlin
fun init(
context: Context,
config: SdkConfig,
accessTokenProvider: AccessTokenProvider? = null,
customComponentsProvider: CustomComponentsProvider? = null,
customStepperHeaderProvider: CustomStepperHeaderProvider? = null,
analyticsCollector: AnalyticsCollector? = null,
onNewProcessStarted: NewProcessStartedHandler? = null,
)
```
#### Parameters
| Name | Description | Type | Requirement |
| ----------------------------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------- | ----------------------------- |
| `context` | Android application `Context` | `Context` | Mandatory |
| `config` | SDK configuration parameters | `ai.flowx.android.sdk.process.model.SdkConfig` | Mandatory |
| `accessTokenProvider` | Functional interface provider for passing the access token | `ai.flowx.android.sdk.FlowxSdkApi.Companion.AccessTokenProvider?` | Optional. Defaults to `null`. |
| `customComponentsProvider` | Provider for the `@Composable`/`View` custom components | `ai.flowx.android.sdk.component.custom.CustomComponentsProvider?` | Optional. Defaults to `null`. |
| `customStepperHeaderProvider` | Provider for the `@Composable` custom stepper header view | `ai.flowx.android.sdk.component.custom.CustomStepperHeaderProvider?` | Optional. Defaults to `null`. |
| `analyticsCollector`          | Collector interface for SDK analytics events                                                        | `ai.flowx.android.sdk.analytics.AnalyticsCollector?`                 | Optional. Defaults to `null`. |
| `onNewProcessStarted`         | Callback invoked when a new process was started as a consequence of executing a `START_PROJECT` action | `ai.flowx.android.sdk.NewProcessStartedHandler?`                  | Optional. Defaults to `null`. |
* Providing the `access token` is explained in the [authentication](#authentication) section.
* The `custom components` implementation is explained in [its own section](#custom-components).
* The implementation for providing a `custom view for the header` of the [Stepper](../docs/building-blocks/process/navigation-areas#stepper) component is detailed in [its own section](#custom-header-view-for-the-stepper-component).
* Collecting analytics events from the SDK is explained in [its own section](#collecting-analytics-events).
* Handling the start of a new process while in a running process is explained in [its own section](#handling-start-of-a-new-process).
#### Sample
```kotlin
class MyApplication : Application() {
override fun onCreate() {
super.onCreate()
initFlowXSdk()
}
private fun initFlowXSdk() {
FlowxSdkApi.getInstance().init(
context = applicationContext,
config = SdkConfig(
baseUrl = "URL to FlowX backend",
imageBaseUrl = "URL to FlowX CMS Media Library",
enginePath = "some_path",
language = "en",
locale = Locale.getDefault(),
validators = mapOf("exact_25_in_length" to { it.length == 25 }),
enableLog = false,
),
accessTokenProvider = null, // null by default; can be set later, depending on the existing authentication logic
customComponentsProvider = object : CustomComponentsProvider {...},
customStepperHeaderProvider = object : CustomStepperHeaderProvider {...},
analyticsCollector = { event ->
Log.i("Analytics", "Event(type = ${event.type}, value = ${event.value})")
},
onNewProcessStarted = { processInstanceUuid ->
// Send a broadcast message to notify the Activity currently displaying the running process.
// The Activity should handle the broadcast to reload and display the newly started process identified by `processInstanceUuid`.
}
)
}
}
```
The configuration properties that should be passed as `SdkConfig` data for the `config` parameter above are:
| Name | Description | Type | Requirement |
| -------------- | ------------------------------------------------------------------- | ----------------------------------- | ------------------------------------------- |
| `baseUrl` | URL to connect to the FlowX back-end environment | `String` | Mandatory |
| `imageBaseUrl` | URL to connect to the FlowX Media Library module of the CMS | `String` | Mandatory |
| `enginePath` | URL path segment used to identify the process engine service | `String` | Mandatory |
| `language` | The language used for retrieving enumerations and substitution tags | `String` | Optional. Defaults to `en`. |
| `locale` | The locale used for date, number and currency formatting | `java.util.Locale` | Optional. Defaults to `Locale.getDefault()` |
| `validators`   | Custom validators for form elements                                 | `Map<String, (String) -> Boolean>?` | Optional.                                   |
| `enableLog` | Flag indicating if logs should be printed | `Boolean` | Optional. Defaults to `false` |
#### Custom validators
The `custom validators` map is a collection of lambda functions, referenced by *name* (i.e. the value of the `key` in this map), each returning a `Boolean` based on the `String` which needs to be validated.
For a custom validator to be evaluated for a form field, its *name* must be specified in the form field process definition.
By looking at the example from above:
```kotlin
mapOf("exact_25_in_length" to { it.length == 25 })
```
if a form element should be validated using this lambda function, a custom validator named `"exact_25_in_length"` should be specified in the process definition.
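How the validator map is consulted can be sketched with plain Kotlin. The helper below is hypothetical (the actual lookup happens inside the SDK); it only illustrates the name-to-lambda contract:

```kotlin
// Sketch of how a named validator could be resolved and applied; not SDK internals.
val validators: Map<String, (String) -> Boolean> = mapOf(
    "exact_25_in_length" to { it.length == 25 },
)

// Looks up the validator referenced by name in the form field's process definition
// and applies it to the field's current value; unknown names pass here by default.
fun isFieldValid(validatorName: String, fieldValue: String): Boolean =
    validators[validatorName]?.invoke(fieldValue) ?: true
```

For example, `isFieldValid("exact_25_in_length", "x".repeat(25))` evaluates to `true`, while any value of a different length fails the check.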
## Using the library
### Authentication
To be able to use the SDK, **authentication is required**. Therefore, before calling any other method on the singleton instance, make sure that the access token provider is set by calling:
```kotlin
FlowxSdkApi.getInstance().setAccessTokenProvider(accessTokenProvider = { "your access token" })
```
The lambda passed in as parameter has the `ai.flowx.android.sdk.FlowxSdkApi.Companion.AccessTokenProvider` type, which is actually a functional interface defined like this:
```kotlin
fun interface AccessTokenProvider {
fun get(): String
}
```
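A provider can wrap whatever token storage the app's authentication stack already maintains. A minimal self-contained sketch follows; the interface is reproduced locally so the snippet compiles on its own (in an app it comes from `ai.flowx.android.sdk`), and `TokenStore` is a hypothetical stand-in for your auth library's storage:

```kotlin
// Reproduced locally for illustration; in an app, import it from the SDK instead.
fun interface AccessTokenProvider {
    fun get(): String
}

// Hypothetical token holder standing in for your auth library's storage.
class TokenStore {
    @Volatile var current: String = ""
}

val store = TokenStore().apply { current = "initial-token" }

// The provider reads the latest token on every call, so refreshed tokens
// are picked up as soon as the storage is updated.
val provider = AccessTokenProvider { store.current }
```

Because the lambda reads the store on every `get()` call, rotating the token only requires updating the store; re-registering the provider is needed only when the providing logic itself changes.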
Whenever the access token changes based on your own authentication logic, it must be updated in the renderer by calling the `setAccessTokenProvider` method again.
### Theming
Prior to setting up the theme, make sure the `access token provider` was set. Check the [authentication](#authentication) section for details.
To be able to use styled components while rendering a process, the theming mechanism must be invoked by calling the `suspend`-ing `setupTheme(...)` method over the singleton instance of the SDK:
```kotlin
suspend fun setupTheme(
themeUuid: String,
fallbackThemeJsonFileAssetsPath: String? = null,
@MainThread onCompletion: () -> Unit
)
```
#### Parameters
| Name | Description | Type | Requirement |
| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------ | ---------------------------- |
| `themeUuid` | UUID string of the theme configured in FlowX Designer | `String` | Mandatory. Can be empty |
| `fallbackThemeJsonFileAssetsPath` | Android asset relative path to the corresponding JSON file to be used as fallback, in case fetching the theme fails and there is no cached version available | `String?` | Optional. Defaults to `null` |
| `onCompletion` | `@MainThread` invoked closure, called when setting up the theme completes | `() -> Unit` | Mandatory |
If the `themeUuid` parameter value is empty (`""`), no theme will be fetched, and the mechanism will rely only on the fallback file, if set.
If the `fallbackThemeJsonFileAssetsPath` parameter value is `null`, no fallback mechanism is set in place, meaning that if fetching the theme fails, the rendered process will have no style applied over its displayed components.
The SDK caches the fetched themes, so if a theme fetch fails, a cached version will be used, if available. Otherwise, it will use the file given as fallback.
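The resolution order described above (freshly fetched theme, then cached copy, then bundled asset file) can be modeled conceptually. The function below is a hypothetical illustration of that chain, not SDK code:

```kotlin
// Conceptual model of the theme resolution order; the real logic is SDK-internal.
// Each argument stands for a theme source that may or may not be available.
fun resolveTheme(fetched: String?, cached: String?, fallbackAsset: String?): String? =
    fetched ?: cached ?: fallbackAsset
```

When fetching fails (`fetched == null`), the cached copy wins over the asset file; only when all three sources are missing does the process render unstyled.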
#### Sample
```kotlin
viewModelScope.launch {
FlowxSdkApi.getInstance().setupTheme(
themeUuid = "some uuid string",
fallbackThemeJsonFileAssetsPath = "theme/a_fallback_theme.json",
) {
// theme setup complete
// do specific logic
}
}
```
The `fallbackThemeJsonFileAssetsPath` is always resolved against your project's `assets/` directory, meaning the example parameter value translates to `file://android_asset/theme/a_fallback_theme.json` before being evaluated.
Do not [start](#start-a-flowx-process) or [resume](#resume-a-flowx-process) a process before the completion of the theme setup mechanism.
### Changing current locale settings
The current `locale` and `language` can also be changed after the [initial setup](#configuring-the-library), by calling the `changeLocaleSettings` function:
```kotlin
fun changeLocaleSettings(locale: Locale, language: String)
```
#### Parameters
| Name | Description | Type | Requirement |
| ---------- | ----------------------------- | ------------------ | ----------- |
| `locale` | The new locale | `java.util.Locale` | Mandatory |
| `language` | The code for the new language | `String` | Mandatory |
**Do not change the locale or the language while a process is rendered.**
The change is successful only if made before [starting](#start-a-flowx-process) or [resuming](#resume-a-flowx-process) a process.
#### Sample
```kotlin
FlowxSdkApi.getInstance().changeLocaleSettings(locale = Locale("en", "US"), language = "en")
```
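While `language` selects the content (enumerations, substitution tags), `locale` drives formatting. A self-contained illustration of the difference a `java.util.Locale` makes for number rendering:

```kotlin
import java.text.NumberFormat
import java.util.Locale

// Formats the same amount under a given locale, to show why the `locale`
// setting matters independently of the `language` setting.
fun formatAmount(amount: Double, locale: Locale): String =
    NumberFormat.getNumberInstance(locale).format(amount)
```

With `Locale.US` the amount `1234.5` is rendered as `1,234.5`, while `Locale.GERMANY` yields `1.234,5`: same data, different presentation.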
### Start a FlowX process
Prior to starting a process, make sure [authentication](#authentication) and [theming](#theming) were correctly set up.
Once all the prerequisites above are fulfilled, a new instance of a FlowX process can be started using the `startProcess` function:
```kotlin
fun startProcess(
projectId: String,
processName: String,
params: JSONObject = JSONObject(),
isModal: Boolean = false,
onProcessEnded: (() -> Unit)? = null,
closeModalFunc: ((processName: String) -> Unit)? = null,
): @Composable () -> Unit
```
#### Parameters
| Name | Description | Type | Requirement |
| ---------------- | -------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------------------------------- |
| `projectId` | The id of the project containing the process to be started | `String` | Mandatory |
| `processName` | The name of the process | `String` | Mandatory |
| `params`         | The starting params for the process, if any                                                        | `JSONObject`                       | Optional. Defaults to `JSONObject()`                |
| `isModal`        | Flag indicating whether the process can be closed at any time by tapping the top-right close button | `Boolean`                          | Optional. It defaults to `false`.                   |
| `onProcessEnded` | Lambda function where you can do additional processing when the started process ends | `(() -> Unit)?` | Optional. It defaults to `null`. |
| `closeModalFunc` | Lambda function where you should handle closing the process when `isModal` flag is `true` | `((processName: String) -> Unit)?` | Optional. It defaults to `null`. |
The returned **[@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable)** function must be included in its own **[Activity](https://developer.android.com/reference/android/app/Activity)**, which is part of (controlled and maintained by) the container application.
This wrapper activity must display only the `@Composable` returned from the SDK (i.e. it occupies the whole activity screen space).
#### Sample
```kotlin
class ProcessActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
...
setContent {
FlowxSdkApi.getInstance().startProcess(
projectId = "your project id",
processName = "your process name",
params = JSONObject(),
isModal = true,
onProcessEnded = {
// NOTE: possible processing could involve doing something in the container app (i.e. navigating to a different screen)
},
closeModalFunc = { processName ->
// NOTE: possible handling could involve doing something differently based on the `processName` value
},
).invoke()
}
}
...
}
```
### Resume a FlowX process
Prior to resuming a process, make sure [authentication](#authentication) and [theming](#theming) were correctly set up.
To resume an existing instance of a FlowX process, after fulfilling all the prerequisites, use the `continueProcess` function:
```kotlin
fun continueProcess(
processUuid: String,
isModal: Boolean = false,
onProcessEnded: (() -> Unit)? = null,
closeModalFunc: ((processName: String) -> Unit)? = null,
): @Composable () -> Unit
```
#### Parameters
| Name | Description | Type | Requirement |
| ---------------- | -------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------------- |
| `processUuid` | The UUID string of the process | `String` | Mandatory |
| `isModal`        | Flag indicating whether the process can be closed at any time by tapping the top-right close button | `Boolean`                          | Optional. It defaults to `false`. |
| `onProcessEnded` | Lambda function where you can do additional processing when the continued process ends | `(() -> Unit)?` | Optional. It defaults to `null`. |
| `closeModalFunc` | Lambda function where you should handle closing the process when `isModal` flag is `true` | `((processName: String) -> Unit)?` | Optional. It defaults to `null`. |
The returned **[@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable)** function must be included in its own **[Activity](https://developer.android.com/reference/android/app/Activity)**, which is part of (controlled and maintained by) the container application.
This wrapper activity must display only the `@Composable` returned from the SDK (i.e. it occupies the whole activity screen space).
#### Sample
```kotlin
class ProcessActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
...
setContent {
FlowxSdkApi.getInstance().continueProcess(
processUuid = "some process UUID string",
isModal = true,
onProcessEnded = {
// NOTE: possible processing could involve doing something in the container app (i.e. navigating to a different screen)
},
closeModalFunc = { processName ->
// NOTE: possible handling could involve doing something differently based on the `processName` value
},
).invoke()
}
}
...
}
```
## Custom components
The container application should decide which custom component view to provide using the `componentIdentifier` configured in the UI designer.
A custom component receives `data` to populate the view and `actions` available to execute, as described below.
It can also be validated and provide data back into the process when executing an action.
To handle custom components, an *implementation* of the `CustomComponentsProvider` interface should be passed as a parameter when initializing the SDK:
```kotlin
interface CustomComponentsProvider {
fun provideCustomComposableComponent(): CustomComposableComponent?
}
```
#### Sample
```kotlin
class CustomComponentsProviderImpl : CustomComponentsProvider {
override fun provideCustomComposableComponent(): CustomComposableComponent? {
return object : CustomComposableComponent {...}
}
}
```
### CustomComposableComponent
The implementation for providing a custom component is based on creating and binding a user defined [@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable) function, through the `CustomComposable` interface:
```kotlin
interface CustomComposableComponent {
fun provideCustomComposable(componentIdentifier: String): CustomComposable?
}
```
The returned `CustomComposable` object is an interface defined like this:
```kotlin
interface CustomComposable {
@Deprecated(message = "Will be removed in future releases")
val isDefined: Boolean // always MUST return `true`
// `@Composable` definitions for the custom components that can be handled
val composable: @Composable () -> Unit
/**
* Called when the data is available for the custom component
* (i.e. when the User Task that contains the custom component is displayed)
*
* @param data used to populate the custom component
*/
fun populateUi(data: Any?)
/**
* Called when the actions are available for the custom component
* (i.e. when the User Task that contains the custom component is displayed)
*
* @param actions that need to be attached to the custom component (e.g. onClick events)
*/
fun populateUi(actions: Map<String, CustomComponentAction>)
/**
* This will be called when executing an action from a FlowX.AI UI Component, when the platform needs to know if the specified/marked components are valid.
* Defaults to `true`.
*/
fun validate(): Boolean = true
/**
* This will be called when executing an action from a FlowX.AI UI Component, on computing the data to be sent as body on the network request.
* Returning `null` (i.e. default) means it does not contribute with any data to be sent.
*/
fun saveData(): JSONObject? = null
}
```
The value for the `data` parameter received in the `populateUi(data: Any?)` could be:
* `Boolean`
* `String`
* `java.lang.Number`
* `org.json.JSONObject`
* `org.json.JSONArray`
The appropriate way to check and cast the data accordingly to the needs must belong to the implementation details of the custom component.
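Since `data` arrives as `Any?`, a `when` over the documented types is the idiomatic way to dispatch. A self-contained sketch follows; the `org.json` branches are omitted so the snippet compiles with the standard library alone, and the function name is hypothetical:

```kotlin
// Hypothetical dispatcher over the documented payload types for `populateUi(data)`.
// JSONObject/JSONArray branches are omitted to keep the sketch stdlib-only.
fun describePayload(data: Any?): String = when (data) {
    null -> "no data"
    is Boolean -> "flag=$data"
    is String -> "text=$data"
    is Number -> "number=$data"
    else -> "unsupported payload: ${data::class.simpleName}"
}
```

In a real component the branches would populate state (for example a `MutableStateFlow`) instead of returning a description.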
Both validation and providing data back into the process are optional and, depending on the needs, may or may not be included in the implementation.
#### Sample
Among multiple existing custom components, there may be one that:
* allows the input of a value representing an `age`
* the value should be validated (e.g. to be at least 35 years old)
* the value will be passed back into the process
```kotlin
class CustomComponentsProviderImpl : CustomComponentsProvider {
override fun provideCustomComposableComponent(): CustomComposableComponent? {
return object : CustomComposableComponent {
override fun provideCustomComposable(componentIdentifier: String): CustomComposable? =
when (componentIdentifier) {
"other-custom-component-identifier" -> OtherCustomComposable().invoke()
"age" -> AgeCustomComposable().invoke()
else -> null
}
}
}
}
private class AgeCustomComposable() {
operator fun invoke(): CustomComposable = object : CustomComposable {
val data: MutableStateFlow<Any?> = MutableStateFlow(null)
var actions: Map<String, CustomComponentAction> = emptyMap()
private val viewModel: AgeViewModel by lazy {
AgeViewModel(data = data, actions = actions)
}
override val isDefined: Boolean = true // always MUST return `true`
override val composable: @Composable () -> Unit =
@Composable { Age(viewModel = viewModel) }
override fun populateUi(data: Any?) {
this.data.value = data
}
override fun populateUi(actions: Map<String, CustomComponentAction>) {
this.actions = actions
}
override fun validate(): Boolean = viewModel.isValid()
override fun saveData(): JSONObject? = viewModel.buildDataToSave()
}
}
@Composable
private fun Age(
viewModel: AgeViewModel = viewModel()
) {
val age by viewModel.ageFlow.collectAsState()
val error by viewModel.errorFlow.collectAsState()
val isError by remember(error) { mutableStateOf(error.isNotBlank()) }
Column {
OutlinedTextField(
value = age,
onValueChange = { viewModel.updateAge(it) },
modifier = Modifier.fillMaxWidth(),
label = { Text("Age") },
)
if (isError) {
Text(
modifier = Modifier.fillMaxWidth(),
text = error,
style = TextStyle(fontSize = 12.sp),
color = Color.Red,
textAlign = TextAlign.Start,
)
}
}
}
class AgeViewModel(
private val data: MutableStateFlow<Any?> = MutableStateFlow(null),
private val actions: Map<String, CustomComponentAction> = emptyMap(),
) : ViewModel() {
private val _ageFlow = MutableStateFlow("")
val ageFlow: StateFlow<String> = _ageFlow.asStateFlow()
private val _error = MutableStateFlow("")
val errorFlow: StateFlow<String> = _error.asStateFlow()
fun updateAge(text: String) {
_ageFlow.value = text
}
fun isValid(): Boolean = ageFlow.value.toIntOrNull().let {
when {
it == null -> false.also { _error.update { "Unrecognized format" } }
it < 35 -> false.also { _error.update { "You have to be at least 35 years old" } }
else -> true.also { _error.update { "" } }
}
}
fun buildDataToSave(): JSONObject? = ageFlow.value.takeUnless { it.isBlank() }
?.let {
JSONObject(
"""
{
"app": {
"age": "$it"
}
}
""".trimIndent()
)
}
}
class OtherCustomComposable() {
operator fun invoke(): CustomComposable = object : CustomComposable {
// deprecated property: will be removed in future releases.
override val isDefined: Boolean = true // always MUST return `true`
override val composable: @Composable () -> Unit = @Composable {
/* add some @Composable implementation */
}
override fun populateUi(data: Any?) {
// extract the necessary data to be used for displaying the custom components
}
override fun populateUi(actions: Map<String, CustomComponentAction>) {
// extract the available actions that may be executed from the custom components
}
// Optional override, defaults to `true`.
// Here one can pass validation logic from viewModel (e.g. by calling `viewModel.isValid()`)
override fun validate(): Boolean = true
// Optional override, defaults to `null`.
// Here one can pass data to save from viewModel (e.g. by calling `viewModel.getDataToSave()`)
override fun saveData(): JSONObject? = null
}
}
```
### Execute action
The custom components which the container app provides may contain FlowX actions available for execution.
These actions are received through the `actions` parameter of the `populateUi(actions: Map<String, CustomComponentAction>)` method.
In order to run an action (i.e. on a click of a button in the custom component) you need to call the `executeAction` method:
```kotlin
fun executeAction(action: CustomComponentAction, params: JSONObject? = null)
```
#### Parameters
| Name | Description | Type | Requirement |
| -------- | --------------------------------------------------------------------------- | ------------------------------------------------------------- | ------------------------------- |
| `action` | Action object extracted from the `actions` received in the custom component | `ai.flowx.android.sdk.component.custom.CustomComponentAction` | Mandatory |
| `params` | Parameters needed to execute the `action` | `JSONObject?` | Optional. It defaults to `null` |
### Get a substitution tag value by key
```kotlin
fun replaceSubstitutionTag(key: String): String
```
All substitution tags will be retrieved by the SDK before starting the process and will be stored in memory.
Whenever the container app needs a substitution tag value for populating the UI of the custom components, it can request the substitution tag using the method above, by providing the `key`.
It returns:
* the key's counterpart, if the `key` is valid and found
* the empty string, if the `key` is valid, but not found
* the unaltered string, if the key has the wrong format (i.e. not starting with `@@`)
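The three return cases can be modeled as a mock of the described contract (this is illustrative only, not SDK code; the tag map and key values are hypothetical):

```kotlin
// Mock of the documented contract for `replaceSubstitutionTag`; not SDK code.
val substitutionTags: Map<String, String> = mapOf("@@greeting" to "Hello")

fun replaceSubstitutionTagMock(key: String): String = when {
    !key.startsWith("@@") -> key        // wrong format: returned unaltered
    else -> substitutionTags[key] ?: "" // valid and found: value; valid but missing: ""
}
```

The distinction matters in practice: a plain string passed by mistake comes back unchanged, while a well-formed but unknown tag resolves to the empty string.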
### Get a media item url by key
```kotlin
fun getMediaResourceUrl(key: String): String?
```
All media items will be retrieved by the SDK before starting the process and will be stored in memory.
Whenever the container app needs a media item url for populating the UI of the custom components, it can request the url using the method above, by providing the `key`.
It returns the `URL` string of the media resource, or `null`, if not found.
## Custom header view for the [STEPPER](../docs/building-blocks/process/navigation-areas#stepper) component
The container application can opt to provide a custom view to be used, for all [Stepper](../docs/building-blocks/process/navigation-areas#stepper) components, as a replacement for the built-in header.
The custom view receives `data` to populate its UI, as described below.
To provide a custom header for the [Stepper](../docs/building-blocks/process/navigation-areas#stepper), an *implementation* of the `CustomStepperHeaderProvider` interface should be passed as a parameter when initializing the SDK:
```kotlin
interface CustomStepperHeaderProvider {
fun provideCustomComposableStepperHeader(): CustomComposableStepperHeader?
}
```
#### Sample
```kotlin
class CustomStepperHeaderProviderImpl : CustomStepperHeaderProvider {
override fun provideCustomComposableStepperHeader(): CustomComposableStepperHeader? {
return object : CustomComposableStepperHeader {...}
}
}
```
### CustomComposableStepperHeader
To provide the custom header view as a [@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable) function, you have to implement the `CustomComposableStepperHeader` interface:
```kotlin
interface CustomComposableStepperHeader {
fun provideComposableStepperHeader(): ComposableStepperHeader?
}
```
The returned `ComposableStepperHeader` object is an interface defined like this:
```kotlin
interface ComposableStepperHeader {
/**
* `@Composable` definition for the custom header view
* The received argument contains the stepper header necessary data to render the view.
*/
val composable: @Composable (data: CustomStepperHeaderData) -> Unit
}
```
The value for the `data` parameter received as function argument is an interface defined like this:
```kotlin
interface CustomStepperHeaderData {
// title for the current step; can be empty or null
val stepTitle: String?
// title for the current selected substep; optional;
// can be empty ("") if not defined or `null` if currently there is no selected substep
val substepTitle: String?
// 1-based index of the current step
val step: Int
// total number of steps
val totalSteps: Int
// 1-based index of the current substep; can be `null` when there are no defined substeps
val substep: Int?
// total number of substeps in the current step; can be `null` or `0`
val totalSubsteps: Int?
}
```
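A custom header typically turns these fields into a progress label. A self-contained sketch follows; the interface is reproduced as a data class purely for illustration, and the label format is an assumption of this example:

```kotlin
// Local stand-in for `CustomStepperHeaderData`, for illustration only.
data class StepperHeaderData(
    val stepTitle: String?,
    val substepTitle: String?,
    val step: Int,          // 1-based index of the current step
    val totalSteps: Int,
    val substep: Int?,      // null when there are no substeps
    val totalSubsteps: Int?,
)

// Builds a label like "Step 2 of 5" or "Step 2 of 5 - Substep 1 of 3",
// skipping the substep part when substeps are absent or empty.
fun progressLabel(data: StepperHeaderData): String = buildString {
    append("Step ${data.step} of ${data.totalSteps}")
    val sub = data.substep
    val totalSub = data.totalSubsteps
    if (sub != null && totalSub != null && totalSub > 0) {
        append(" - Substep $sub of $totalSub")
    }
}
```

The `@Composable` provided through `ComposableStepperHeader` would render this label (plus the titles) however the design system requires.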
#### Sample
```kotlin
override fun provideComposableStepperHeader(): ComposableStepperHeader? {
return object : ComposableStepperHeader {
override val composable: @Composable (data: CustomStepperHeaderData) -> Unit
get() = @Composable { data ->
/* add some @Composable implementation which displays `data` */
}
}
}
```
## Collecting analytics events
To be able to collect analytics events from the SDK, an implementation for the `AnalyticsCollector` functional interface may be provided when initializing the SDK:
```kotlin
fun interface AnalyticsCollector {
fun onEvent(event: Event)
}
```
where the `Event` looks like this:
```kotlin
interface Event {
val type: String
val value: String
}
```
#### Sample
The implementation can be passed as a lambda, like:
```kotlin
analyticsCollector = { event ->
// do whatever is needed (e.g. log the event)
Log.i("Analytics", "Event(type = ${event.type}, value = ${event.value})")
}
```
## Handling "Start of a new process"
When an action of type `START_PROJECT` is executed, the `onNewProcessStarted` lambda provided in the `FlowxSdkApi.getInstance().init(...)` function is invoked.
This callback provides the UUID of the newly started process, which can be used to resume the process by calling the `FlowxSdkApi.getInstance().continueProcess(...)` method.
It is the responsibility of the container application's developer to implement the necessary logic for displaying the appropriate UI for the newly started process.
#### Sample
One way to handle this is to send a broadcast message to notify the Activity currently displaying the running process.
The Activity should handle the broadcast to reload and display the newly started process identified by `processInstanceUuid` (received in the broadcast intent).
```kotlin
FlowxSdkApi.getInstance().init(
...
onNewProcessStarted = { processInstanceUuid ->
applicationContext.sendBroadcast(
Intent("some.intent.filter.identifier").apply {
putExtra("processInstanceUuid", processInstanceUuid)
setPackage("your.application.package")
}
)
}
...
)
class ProcessActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
...
setContent {
val yourBroadcastReceiver = remember {
YourBroadcastReceiver(handler = { processInstanceUuid -> /* do your own logic to refresh `ProcessContent()` */ })
}
val context = LocalContext.current
LifecycleStartEffect(true) {
ContextCompat.registerReceiver(context.applicationContext, yourBroadcastReceiver, IntentFilter("some.intent.filter.identifier"), ContextCompat.RECEIVER_NOT_EXPORTED)
onStopOrDispose {
runCatching {
context.applicationContext.unregisterReceiver(yourBroadcastReceiver)
}
}
}
ProcessContent()
}
}
@Composable
private fun ProcessContent(...) { ... }
}
class YourBroadcastReceiver(private val handler: (String) -> Unit) : BroadcastReceiver() {
override fun onReceive(context: Context?, intent: Intent?) {
intent?.extras?.getString("processInstanceUuid")?.let { processUuid -> handler.invoke(processUuid) }
}
}
```
## Known issues
* shadows are rendered only on **Android >= 28** having [hardware acceleration](https://developer.android.com/topic/performance/hardware-accel) **enabled**
* there is no support yet for [subprocesses](../docs/building-blocks/process/subprocess) started using the [Call Activity node](../docs/building-blocks/node/call-subprocess-tasks/call-activity-node) when configuring a [TabBar](../docs/building-blocks/process/navigation-areas#tab-bar) [Navigation Area](../docs/building-blocks/process/navigation-areas)
{/*- only **[PORTRAIT](https://developer.android.com/guide/topics/manifest/activity-element#screen)** orientation is supported for now*/}
{/*- there is no support for **[Dark Mode](https://developer.android.com/develop/ui/views/theming/darktheme)** yet*/}
{/*- **[CONTAINER](../docs/building-blocks/node/milestone-node.md#container)** milestone nodes are not supported yet*/}
{/*- can not run multiple processes in parallel (e.g. in a Bottom Tab Navigation)*/}
# Angular SDK
Source: https://docs.flowx.ai/4.7.x/sdks/angular-renderer
FlowxProcessRenderer is a library designed to render the UI of processes created via the Flowx Process Designer.
**Breaking changes**: Starting with version 4.0, the ui-sdk no longer expects the authToken to be present in `LOCAL_STORAGE`. Instead, the authToken is passed as an input to the `flx-process-renderer` component. This is mandatory for SSE to work properly.
## Prerequisites
* Node.js min version 20 - [**Download Node.js**](https://nodejs.org/en/blog/release/v20.9.0)
* Angular CLI version 19. Install Angular CLI globally using the following command:
```npm
npm install -g @angular/cli@19
```
This will allow you to run `ng`-related commands from the terminal.
## Angular project requirements
Your app MUST be created with the `ng new` command from the @angular/cli\~19 package. It also MUST use SCSS for styling.
```npm
npm install -g @angular/cli@19
ng new my-flowx-app
```
To install the npm libraries provided by FLOWX you will need to obtain access to the private FlowX Nexus registry. Please consult with your project DevOps.
The library uses Angular version **@angular\~19**, **npm v10.1.0** and **node v20.9.0**.
If you are using an older version of Angular (for example, v16), please consult the following link for update instructions:
[**Update Angular from v16.0 to v19.0**](https://angular.dev/update-guide?v=16.0-19.0\&l=1)
## Installing the library
Use the following command to install the **renderer** library and its required dependencies:
```bash
npm install \
@flowx/core-sdk@<version> \
@flowx/core-theme@<version> \
@flowx/angular-sdk@<version> \
@flowx/angular-theme@<version> \
@flowx/angular-ui-toolkit@<version> \
@angular/cdk@19 \
@types/event-source-polyfill
```
Replace `<version>` with the correct version corresponding to your platform version.
To find the right version, navigate to: **Release Notes → Choose your platform version → Deployment guidelines → Component versions**.
A few configurations are needed in the project's `angular.json`:
* in order to successfully link the pdf viewer, add the following declaration in the assets property:
```json
{
"glob": "**/*",
"input": "node_modules/ng2-pdfjs-viewer/pdfjs",
"output": "/assets/pdfjs"
}
```
## Initial setup
Once installed, `FlxProcessModule` must be imported in the `AppModule` as `FlxProcessModule.withConfig({})`.
You **MUST** also import its dependency: `HttpClientModule` from `@angular/common/http`.
### Theming
Component theming is done through the `@flowx/angular-theme` library. The theme id is a required input for the renderer SDK component and is used to fetch the theme configuration. The id can be obtained from the admin panel in the themes section.

### Authorization
Every request from the **FlowX** renderer SDK is made using the client app's **HttpClientModule**, which means those requests go through every interceptor you define there. This matters most when building the auth method, as it is the client app's job to intercept and decorate the requests with the necessary auth info (e.g. `Authorization: Bearer ...`).
It's the responsibility of the client app to implement the authorization flow (using the **OpenID Connect** standard). The renderer SDK will expect the authToken to be passed to the `flx-process-renderer` as an input.
```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { FlxProcessModule } from '@flowx/angular-sdk';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
@NgModule({
declarations: [
AppComponent,
],
imports: [
BrowserModule,
AppRoutingModule,
// will be used by the renderer SDK to make requests
HttpClientModule,
// needed by the renderer SDK
FlxProcessModule.withConfig({
components: {},
services: {},
}),
],
// this interceptor will decorate the requests with the Authorization header
providers: [
{ provide: HTTP_INTERCEPTORS, useClass: AuthInterceptor, multi: true },
],
bootstrap: [AppComponent]
})
export class AppModule {}
```
The `withConfig()` call is required in the application module where the process will be rendered. The `withConfig()` method accepts a config argument where you can pass extra config info, register a **custom component**, **service**, or **custom validators**.
**Custom components** will be referenced by name when creating the template config for a user task.
**Custom validators** will be referenced by name (`currentOrLastYear`) in the template config panel in the validators section of each generated form field.
```typescript
// example with custom component, custom services and custom validator
FlxProcessModule.withConfig({
components: {
YourCustomComponentIdentifier: CustomComponent,
},
services: {
NomenclatorService,
LocalDataStoreService,
},
validators: { currentOrLastYear },
})
// example of a custom validator that restricts data selection to
// the current or the previous year
currentOrLastYear: function currentOrLastYear(AC: AbstractControl): { [key: string]: any } | null {
if (!AC) {
return null;
}
const yearDate = moment(AC.value, YEAR_FORMAT, true);
const currentDateYear = moment(new Date()).startOf('year');
const lastYear = moment(new Date()).subtract(1, 'year').startOf('year');
if (!yearDate.isSame(currentDateYear) && !yearDate.isSame(lastYear)) {
return { currentOrLastYear: true };
}
return null;
}
```
The error that the validator returns **MUST** match the validator name.
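To make the contract concrete, here is a minimal sketch in plain TypeScript (no Angular imports; `ControlLike` is a simplified stand-in for `AbstractControl`, and the year check uses `Date` instead of `moment` — both are assumptions for this sketch):

```typescript
// Simplified stand-in for Angular's AbstractControl (assumption for this sketch).
interface ControlLike {
  value: string;
}

// A validator whose error key matches its name, as the SDK requires.
function currentOrLastYear(control: ControlLike | null): { [key: string]: any } | null {
  if (!control) {
    return null;
  }
  const year = Number(control.value);
  const currentYear = new Date().getFullYear();
  if (year !== currentYear && year !== currentYear - 1) {
    // the error key must match the validator name
    return { currentOrLastYear: true };
  }
  return null;
}
```

Registering this under `validators: { currentOrLastYear }` makes it addressable by that same name from the template config panel.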
The entry point of the library is the `flx-process-renderer` component. A list of accepted inputs can be found below:
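As a sketch of how the component is embedded (the bindings below are hypothetical; the input names are taken from the parameters table that follows):

```html
<!-- hypothetical bindings; adapt to your app's auth and process setup -->
<flx-process-renderer
  [apiUrl]="baseUrl"
  [processApiPath]="'onboarding'"
  [authToken]="authToken"
  [themeId]="themeId"
  [processName]="'client_identification'"
  [processStartData]="{ firstName: 'John', lastName: 'Smith' }"
  [projectInfo]="{ projectId: projectId }"
></flx-process-renderer>
```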
**Parameters**:
| Name | Description | Type | Mandatory | Default value | Example |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------- | --------- | ------------- | ------------------------------------------------ |
| apiUrl | Your base url | string | true | - | [https://yourDomain.dev](https://yourdomain.dev) |
| processApiPath | Process subpath | string | true | - | onboarding |
| authToken | Authorization token | string | true | - | 'eyJhbGciOiJSUzI1NiIsIn....' |
| themeId | Theme id used to style the process. Can be obtained from the themes section in the admin | string | true | - | '123-456-789' |
| processName | Identifies a process | string | true | - | client\_identification |
| processStartData | Data required to start the process | json | true | - | `{ "firstName": "John", "lastName": "Smith"}` |
| debugLogs | When set to true, WS messages are printed in the console | boolean | false | false | - |
| language | Language used to localize the application. | string | false | en | - |
| keepState | By default all process data is reset when the process renderer component gets destroyed. Setting this to true will keep process data even if the viewport gets destroyed | boolean | false | false | - |
| isDraft | When true allows starting a process in draft state. \*Note that isDraft = true requires that processName be the **id** (number) of the process and NOT the name. | boolean | false | false | - |
| legacyHttpVersion | Set this to `true` only for HTTP versions \< 2 in order for SSE to work properly. Can be omitted otherwise. | boolean | false | false | - |
| projectInfo | Information about the project that contains the process that is being run. | object | true | - | `{ projectId: '1234-5678-9012' }` |
| locale | Locale used to localize the application. | string | false | en-US | 'en-US' |
| cache | Caching of static resources | boolean | false | true | - |
### Data and actions
Custom components will be hydrated with data through the `data$` input observable, which must be defined in the custom component class.
```typescript
import { Component, input } from '@angular/core';
import { Observable } from 'rxjs';

@Component({
  selector: 'my-custom-component',
  templateUrl: './custom-component.component.html',
  styleUrls: ['./custom-component.component.scss'],
})
export class CustomComponentComponent {
  data$ = input<Observable<any> | null>(null);
}
```
Component actions are always found under the `actionsFn` key of the `data` object.
Action names are configurable via the process editor.
```typescript
// data object example
data: {
actionsFn: {
action_one: () => void;
action_two: () => void;
}
}
```
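As a plain-TypeScript illustration of consuming this map (`action_one` is a hypothetical action name configured in the process editor):

```typescript
// Shape of the actions map exposed under `data.actionsFn` (simplified for this sketch).
type ActionsFn = { [name: string]: () => void };

const executed: string[] = [];

// stand-in for the data object a custom component receives
const data: { actionsFn: ActionsFn } = {
  actionsFn: {
    action_one: () => executed.push('action_one'),
  },
};

// e.g. inside a (click) handler of a custom component:
data.actionsFn['action_one']();
```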
#### Custom component validation
Custom components can validate their own status. Inject the `FLX_VALIDATOR_SERVICE` service and use it to validate the component. The service exposes the following properties:
* `validate(isValid: boolean)` - used to validate the component
* `saveData(data: any)` - used to save data
* `validated$` - used to monitor external submission from the process
Example of a custom component that validates an input with a required validator:
```ts
@Component({
selector: 'flx-custom-validation',
imports: [CommonModule, ReactiveFormsModule],
template: `
  Custom validation:
  <input [formControl]="fc" />
  @if (formSubmitted() && fc.invalid) {
    error
  }
`
})
export class FlxCustomValidationComponent implements OnInit {
data$ = input<Observable<any> | null>(null) // can be used to get process data & actions
validationService = inject(FLX_VALIDATOR_SERVICE) // service used to validate the custom component - only use in components that need validation
fc = new FormControl('', Validators.required) // the form control has a required validator
formSubmitted = signal(false)
ngOnInit(): void {
// update validity
this.fc.statusChanges.subscribe((status) => {
this.validationService.validate(status === 'VALID')
})
// save data
this.fc.valueChanges.subscribe((value) => {
this.validationService.saveData({ app: { test1: value, test2: `${value}${value}` } })
})
// monitor external submission
this.validationService.validated$.subscribe(() => {
this.formSubmitted.set(true)
})
}
}
```
### Custom Interceptors
* Starting with FlowX SDK version 4.6, the Angular `HttpClientModule` is no longer used internally to make HTTP requests. A new mechanism therefore allows you to create custom interceptors for handling HTTP requests.
#### Request Interceptors
* Here is an example that illustrates how to create an interceptor that adds a custom header to all outgoing requests:
```typescript
// Import the necessary types
import { FLX_REQUEST_INTERCEPTORS_CONFIG } from '@flowx/angular-sdk'
import { HttpRequestInterceptor } from '@flowx/core-sdk'
// define the request interceptor(s)
const customHeaderInterceptor: HttpRequestInterceptor[] = [
  {
    onFulfilled: (request) => {
      request.headers['custom-header'] = 'custom-value'
      return request
    },
  },
]
// Add the interceptor to the providers array in the main app module
{
provide: FLX_REQUEST_INTERCEPTORS_CONFIG,
useValue: customHeaderInterceptor,
}
```
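To illustrate the mechanics, here is a self-contained plain-TypeScript sketch (the `RequestConfig` and `RequestInterceptor` shapes are simplified stand-ins for the SDK types) of how a chain of request interceptors is threaded over an outgoing request:

```typescript
// Simplified stand-ins for the SDK's request and interceptor types.
interface RequestConfig {
  url: string;
  headers: { [key: string]: string };
}
interface RequestInterceptor {
  onFulfilled?: (request: RequestConfig) => RequestConfig;
}

// Apply each interceptor in order, passing the request through the chain.
function applyInterceptors(
  request: RequestConfig,
  interceptors: RequestInterceptor[]
): RequestConfig {
  return interceptors.reduce(
    (req, interceptor) => (interceptor.onFulfilled ? interceptor.onFulfilled(req) : req),
    request
  );
}

const addCustomHeader: RequestInterceptor[] = [
  {
    onFulfilled: (request) => {
      request.headers['custom-header'] = 'custom-value';
      return request;
    },
  },
];

const result = applyInterceptors({ url: '/process', headers: {} }, addCustomHeader);
```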
#### Response Interceptors
* Here is an example that illustrates how to create an interceptor that shows a message when a response errors out:
```typescript
import { FLX_RESPONSE_INTERCEPTORS_CONFIG } from '@flowx/angular-sdk'
import { HttpResponseInterceptor } from '@flowx/core-sdk'
const customErrorInterceptor: HttpResponseInterceptor[] = [
  {
    onRejected: (response) => {
      if (response.status !== 200) {
        console.error('Something went wrong, we should handle this!', response.message)
      }
      return response
    },
  },
]
// Add the interceptor to the providers array in the main app module
{
  provide: FLX_RESPONSE_INTERCEPTORS_CONFIG,
  useValue: customErrorInterceptor,
}
```
#### Interceptors that use Dependency injection
* If you need to use a service in your interceptor, you can use provider factories coupled with the `deps` property to inject the service into the interceptor:
```typescript
// the interceptor factory function that receives the custom service as an argument through dependency injection:
const interceptorFactory: (custom: CustomService) => HttpRequestInterceptor[] = (customService: CustomService) => [{
onFulfilled: (response) => {
// do something with the custom service
// interceptor logic
return response
}
}]
// Add the interceptor to the providers array in the main app module
{
provide: FLX_REQUEST_INTERCEPTORS_CONFIG,
useFactory: (customService: CustomService) => [
interceptorFactory(customService),
],
deps: [CustomService], // provider factory dependencies
}
```
### Using custom icons
The SDK provides a mechanism for using custom icons. Providers for custom icons should be included in the main app module in order to be available for the whole application.
```ts
import { FlxIconModule } from '@flowx/angular-ui-toolkit'
// create a custom dictionary of icons
const customIconDictionary = {
'custom-icon': 'custom icon svg'
}
// add the custom icon dictionary to the providers array in the main app module.
providers: [
...
importProvidersFrom(FlxIconModule),
provideExtraIconSet(customIconDictionary),
]
```
### Enumerations and translations
The SDK library provides a mechanism for handling enumerations and translations. Enumerations are retrieved through the `Promise`-based `getEnumeration` method.
These methods should only be used in custom components and within a process context, because the internal requests depend on the details of a running project.
#### Custom enumerations
```ts
import {getEnumeration} from '@flowx/angular-sdk'
// get an enumeration by name
const enumeration = await getEnumeration('enumerationName')
// get enumeration with a parent
const enumeration = await getEnumeration('enumerationName', 'parentName')
// get an enumeration and cache the result for subsequent calls
const enumeration = await getEnumeration('enumerationName', null, true)
```
#### Translations
The SDK provides a `FlxLocalizePipe` that allows you to manage translations within your application. The pipe is standalone and can be used both in templates and in custom components.
* Here is an example of how to use the pipe in a template:
```ts
// import the pipe in the module where you want to use it
import { FlxLocalizePipe } from '@flowx/angular-sdk'
imports: [
...
FlxLocalizePipe
]
```
```html
{{ 'hello' | flxLocalize }}
```
* Here is an example of how to use the pipe in a custom component:
```ts
const localize = new FlxLocalizePipe()
const translatedText = localize.transform('@@substitution_tag')
```
### Caching
The SDK provides a caching mechanism for static resources. The cache is enabled by default and can be disabled by setting the `cache` input of the `flx-process-renderer` component to `false`. When turned on, the cache will store the static resources in the browser's cache storage.
In order to reset the cache, you can go to the `Application` (Chrome) or `Storage` (Firefox) tab in the browser Dev tools and either clear the cache or disable the cache for the current site.
| Browser | Dev Tools Tab |
| :---------: | :-------------: |
| **Chrome** | Application tab |
| **Firefox** | Storage tab |
**How to clear the cache**:
* **Chrome**: Navigate to Application → Storage → Cache storage → Right-click on "flowx-resources-cache" → Clear
* **Firefox**: Navigate to Storage → Cache Storage → Right-click → Clear
### Interacting with the process
Data from the process is communicated via the **Server-Sent Events** protocol under the following keys:
| Name | Description |
| --------------- | :--------------------------------------------------------------------------------------: |
| Data | data updates for the process model bound to default/custom components |
| ProcessMetadata | updates about process metadata, e.g. progress updates, data about how to render components |
| RunAction | instructs the UI to perform the given action |
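As an illustration of how a client might route these messages (the envelope shape and payloads here are hypothetical; only the key names come from the table above):

```typescript
// Hypothetical SSE message envelope; real payload shapes are defined by the SDK.
type SseMessageType = 'Data' | 'ProcessMetadata' | 'RunAction';

interface SseMessage {
  type: SseMessageType;
  payload: unknown;
}

const handled: string[] = [];

function dispatch(message: SseMessage): void {
  switch (message.type) {
    case 'Data': // data updates for the process model
      handled.push('data');
      break;
    case 'ProcessMetadata': // metadata about progress and rendering
      handled.push('metadata');
      break;
    case 'RunAction': // the UI should perform the given action
      handled.push('action');
      break;
  }
}

dispatch({ type: 'Data', payload: { firstName: 'John' } });
```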
### Task management component
The `flx-task-manager` component is provided by `FlxTasksManagementComponent`. To use it, import the component in your Angular project:
```typescript
import { FlxTasksManagementComponent } from '@flowx/angular-sdk';
```
#### Usage
Include the component in your template, binding the parameters listed below.
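A hypothetical sketch of the element with its inputs bound (names taken from the parameters table below):

```html
<!-- hypothetical bindings; adapt to your app's setup -->
<flx-task-manager
  [apiUrl]="'https://yourDomain.dev/tasks'"
  [authToken]="authToken"
  [appId]="appId"
  [viewDefinitionId]="viewDefinitionId"
></flx-task-manager>
```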
**Parameters**:
| Name | Description | Type | Mandatory | Example |
| ------------------ | -------------------------------------- | -------- | --------- | ------------------------------ |
| `apiUrl` | Endpoint where the tasks are available | `string` | ✅ | `https://yourDomain.dev/tasks` |
| `authToken` | Authorization token | `string` | ✅ | (retrieved from local storage) |
| `appId` | The application ID | `string` | ✅ | (retrieved dynamically) |
| `viewDefinitionId` | The view configuration identifier | `string` | ✅ | (retrieved dynamically) |
| `themeId` | The theme identifier | `string` | ❌ | (retrieved dynamically) |
| `language` | The selected language | `string` | ❌ | (retrieved dynamically) |
| `locale` | The localization setting | `string` | ❌ | (retrieved dynamically) |
| `buildId` | The current build identifier | `string` | ❌ | (retrieved dynamically) |
| `staticAssetsPath` | Path for static resources | `string` | ❌ | (set via environment) |
#### Coding style tests
Always follow the Angular official [coding styles](https://angular.io/guide/styleguide).
Below you will find a Storybook that demonstrates how components behave under different states, props, and conditions. It allows you to preview and interact with individual UI components in isolation, without the need for a full-fledged application:
# iOS SDK
Source: https://docs.flowx.ai/4.7.x/sdks/ios-renderer
# Using the iOS Renderer
## iOS Project Requirements
The minimum requirements are:
* iOS 15
* Swift 5.7
## Installing the library
The iOS Renderer is available through CocoaPods.
### CocoaPods
#### Prerequisites
* CocoaPods gem installed
#### CocoaPods private trunk setup
Add the private trunk repo to your local CocoaPods installation with the command:
```bash
pod repo add flowx-specs git@github.com:flowx-ai-external/flowx-ios-specs.git
```
#### Adding the dependency
Add the source of the private repository in the Podfile
```ruby
source 'git@github.com:flowx-ai-external/flowx-ios-specs.git'
```
Add a post install hook in the Podfile setting `BUILD_LIBRARY_FOR_DISTRIBUTION` to `YES`.
```ruby
post_install do |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings['BUILD_LIBRARY_FOR_DISTRIBUTION'] = 'YES'
end
end
end
```
Add the pod and then run `pod install`
```ruby
pod 'FlowX'
```
Example
```ruby
source 'https://github.com/flowx-ai-external/flowx-ios-specs.git'
source 'https://github.com/CocoaPods/Specs.git'
post_install do |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings['BUILD_LIBRARY_FOR_DISTRIBUTION'] = 'YES'
end
end
end
target 'AppTemplate' do
# Comment the next line if you don't want to use dynamic frameworks
use_frameworks!
# Pods for AppTemplate
pod 'FlowXRenderer'
target 'AppTemplateTests' do
inherit! :search_paths
# Pods for testing
end
target 'AppTemplateUITests' do
# Pods for testing
end
end
```
### Library dependencies
* Alamofire
* SDWebImageSwiftUI
* SDWebImageSVGCoder
## Configuring the library
The SDK has two configurations, available through shared instances: `FXConfig`, which holds general-purpose properties, and `FXSessionConfig`, which holds user session properties.
It is recommended to call the `FXConfig` configuration method at app launch.
Call the `FXSessionConfig` configure method after the user logs in and a valid user session is available.
### FXConfig
This config is used for general purpose properties.
#### Properties
| Name | Description | Type | Requirement |
| ------------ | ------------------------------------------------------------------- | ------ | ------------------------------ |
| baseURL | The base URL used for REST networking | String | Mandatory |
| enginePath | The process engine url path component | String | Mandatory |
| imageBaseURL | The base URL used for static assets | String | Mandatory |
| locale | The locale used for localization | String | Mandatory. Defaults to "en-us" |
| language | The language used for retrieving enumerations and substitution tags | String | Mandatory. Defaults to "en" |
| logLevel | Enum value indicating the log level. Defaults to none | Enum | Optional |
**Sample**
```swift
FXConfig.sharedInstance.configure { (config) in
config.baseURL = myBaseURL
config.enginePath = "engine"
config.imageBaseURL = myImageBaseURL
config.locale = "en-us"
config.language = "en"
config.logLevel = .verbose
}
```
#### Changing the current language
The current language and locale can be changed after the initial configuration, by calling the `changeLocaleSettings` method:
```swift
FXConfig.sharedInstance.changeLocaleSettings(locale: "ro-ro",
language: "en")
```
### FXSessionConfig
This config is used for providing networking or auth session-specific properties.
The library expects either the JWT access token or an Alamofire Session instance managed by the container app. In case a session object is provided, the request adapting should be handled by the container app.
#### Properties
| Name | Description | Type |
| -------------- | --------------------------------------------------- | ------- |
| sessionManager | Alamofire session instance used for REST networking | Session |
| token | JWT authentication access token | String |
#### Sample for access token
```swift
...
func configureFlowXSession() {
FXSessionConfig.sharedInstance.configure { config in
config.token = myAccessToken
}
}
```
#### Sample for session
```swift
import Alamofire
```
```swift
...
func configureFlowXSession() {
FXSessionConfig.sharedInstance.configure { config in
config.sessionManager = Session(interceptor: MyRequestInterceptor())
}
}
```
```swift
class MyRequestInterceptor: RequestInterceptor {
func adapt(_ urlRequest: URLRequest, for session: Session, completion: @escaping (Swift.Result<URLRequest, Error>) -> Void) {
var urlRequest = urlRequest
urlRequest.setValue("Bearer " + accessToken, forHTTPHeaderField: "Authorization")
completion(.success(urlRequest))
}
}
```
### Theming
Make sure the `FXSessionConfig` configure method was called with a valid session before setting up the theme.
Before starting or resuming a process, the theme setup API should be called.
The start or continue process APIs should be called only after the theme setup was completed.
### Theme setup
The setup theme is called using the shared instance of `FXTheme`
```swift
public func setupTheme(withUuid uuid: String,
localFileUrl: URL? = nil,
completion: (() -> Void)?)
```
* `uuid` - the UUID of the theme configured in the FlowX Designer.
* `localFileUrl` - optional parameter for providing a fallback theme file, in case the fetch theme request fails.
* `completion` - a completion closure called when the theme setup finishes.
In addition to the `completion` parameter, FXTheme's shared instance also provides a Combine publisher named `themeFetched` which sends `true` if the theme setup was finished.
#### Sample
```swift
FXTheme.sharedInstance.setupTheme(withUuid: myThemeUuid,
localFileUrl: Bundle.main.url(forResource: "theme", withExtension: "json"),
completion: {
print("theme setup finished")
})
```
```swift
...
var subscription: AnyCancellable?
func myMethodForThemeSetupFinished() {
    subscription = FXTheme.sharedInstance.themeFetched.sink { result in
        if result {
            DispatchQueue.main.async {
                // you can now start/continue a process
            }
        }
    }
}
...
```
## Using the library
### Public API
The library's public APIs described in this section are called using the shared instance of FlowX, `FlowX.sharedInstance`.
### Check renderer compatibility
Before using the iOS SDK, it is recommended to check the compatibility between the renderer and the deployed FlowX services.
This can be done by calling `checkRendererVersion`, which has a completion handler containing a Bool value.
```swift
FlowX.sharedInstance.checkRendererVersion { compatible in
print(compatible)
}
```
### How to start and end FlowX session
After all the configurations are set, you can start a FlowX session by calling the `startSession()` method.
This is optional, as the session starts lazily when the first process is started.
`FlowX.sharedInstance.startSession()`
When you want to end a FlowX session, you can call the `endSession()` method. This also does a complete clean-up of the started processes.
You might want to use this method in a variety of scenarios, for instance when the user logs out.
`FlowX.sharedInstance.endSession()`
### How to start a process
There are 3 methods available for starting a FlowX process.
The container app is responsible for presenting the navigation controller or tab controller holding the process navigation.
1. Start a process which renders inside an instance of `UINavigationController` or `UITabBarController`, depending on the BPMN diagram of the process.
The controller to be presented will be provided inside the `completion` closure parameter of the method.
Use this method when you want the process to be rendered inside a controller themed using the FlowX Theme defined in the FlowX Designer.
```swift
public func startProcess(projectId: String,
name: String,
params: [String: Any]?,
isModal: Bool = false,
showLoader: Bool = false,
completion: ((UIViewController?) -> Void)?,
onProcessEnded: (() -> Void)? = nil)
```
* `projectId` - the uuid of the project
* `name` - the name of the process
* `params` - the start parameters, if any
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `showLoader` - a boolean indicating whether the loader should be displayed when starting the process.
* `completion` - a completion closure which passes either an instance of `UINavigationController` or `UITabBarController` to be presented.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using \[weak self]
2. Start a process which renders inside a provided instance of a `UINavigationController`.
Use this method when you want the process to be rendered inside a custom instance of `UINavigationController`. Optionally, you can pass an instance of `FXNavigationViewController`, which has the appearance set in the FlowX Theme, using `FXNavigationViewController`'s class func `FXNavigationViewController.navigationController()`.
If you use this method, make sure that the process does not use a tab controller as root view.
```swift
public func startProcess(navigationController: UINavigationController,
projectId: String,
name: String,
params: [String: Any]?,
isModal: Bool = false,
showLoader: Bool = false,
onProcessEnded: (() -> Void)? = nil)
```
* `navigationController` - the instance of UINavigationController which will hold the process navigation stack
* `projectId` - the uuid of the project
* `name` - the name of the process
* `params` - the start parameters, if any
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `showLoader` - a boolean indicating whether the loader should be displayed when starting the process.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using \[weak self]
3. Start a process which renders inside a provided instance of a `UITabBarController`.
Use this method when you want the process to be rendered inside a custom instance of `UITabBarController`. If you use this method, make sure that the process has a tab controller as root view.
```swift
public func startProcess(tabBarController: UITabBarController,
projectId: String,
name: String,
params: [String: Any]?,
isModal: Bool = false,
showLoader: Bool = false,
onProcessEnded: (() -> Void)? = nil)
```
* `tabBarController` - the instance of UITabBarController which will hold the process navigation
* `projectId` - the uuid of the project
* `name` - the name of the process
* `params` - the start parameters, if any
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `showLoader` - a boolean indicating whether the loader should be displayed when starting the process.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using \[weak self]
#### Sample
```swift
FlowX.sharedInstance.startProcess(projectId: projectId,
name: processName,
params: [:],
isModal: true,
showLoader: true) { processRootViewController in
if let processRootViewController = processRootViewController {
processRootViewController.modalPresentationStyle = .overFullScreen
self.present(processRootViewController, animated: false)
}
} onProcessEnded: { [weak self] in
//TODO
}
```
or
```swift
FlowX.sharedInstance.startProcess(navigationController: processNavigationController,
projectId: projectId,
name: processName,
params: startParams,
isModal: true,
showLoader: true)
self.present(processNavigationController, animated: true, completion: nil)
```
or
```swift
FlowX.sharedInstance.startProcess(tabBarController: processTabController,
projectId: projectId,
name: processName,
params: startParams,
isModal: true,
showLoader: true)
self.present(processTabController, animated: true, completion: nil)
```
### How to resume an existing process
There are 3 methods available for resuming a FlowX process.
The container app is responsible for presenting the navigation controller or tab controller holding the process navigation.
1. Continue a process which renders inside an instance of `UINavigationController` or `UITabBarController`, depending on the BPMN diagram of the process.
The controller to be presented will be provided inside the `completion` closure parameter of the method.
Use this method when you want the process to be rendered inside a controller themed using the FlowX Theme defined in the FlowX Designer.
```swift
public func continueExistingProcess(uuid: String,
name: String,
isModal: Bool = false,
completion: ((UIViewController?) -> Void)? = nil,
onProcessEnded: (() -> Void)? = nil)
```
* `uuid` - the UUID string of the process
* `name` - the name of the process
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `completion` - a completion closure which passes either an instance of `UINavigationController` or `UITabBarController` to be presented.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using \[weak self]
2. Continue a process which renders inside a provided instance of a `UINavigationController`.
Use this method when you want the process to be rendered inside a custom instance of `UINavigationController`. Optionally, you can pass an instance of `FXNavigationViewController`, which has the appearance set in the FlowX Theme, using `FXNavigationViewController`'s class func `FXNavigationViewController.navigationController()`.
If you use this method, make sure that the process does not use a tab controller as root view.
```swift
public func continueExistingProcess(uuid: String,
name: String,
navigationController: UINavigationController,
isModal: Bool = false,
onProcessEnded: (() -> Void)? = nil)
```
* `uuid` - the UUID string of the process
* `name` - the name of the process
* `navigationController` - the instance of UINavigationController which will hold the process navigation stack
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using \[weak self]
3. Continue a process which renders inside a provided instance of a `UITabBarController`.
Use this method when you want the process to be rendered inside a custom instance of `UITabBarController`. If you use this method, make sure that the process has a tab controller as root view.
```swift
public func continueExistingProcess(uuid: String,
name: String,
tabBarController: UITabBarController,
isModal: Bool = false,
onProcessEnded: (() -> Void)? = nil)
```
* `uuid` - the UUID string of the process
* `name` - the name of the process
* `tabBarController` - the instance of UITabBarController which will hold the process navigation
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using \[weak self]
#### Sample
```swift
FlowX.sharedInstance.continueExistingProcess(uuid: uuid,
name: processName,
isModal: true) { processRootViewController in
if let processRootViewController = processRootViewController {
processRootViewController.modalPresentationStyle = .overFullScreen
self.present(processRootViewController, animated: true)
}
} onProcessEnded: { [weak self] in
}
```
or
```swift
FlowX.sharedInstance.continueExistingProcess(uuid: uuid,
name: processName,
navigationController: processNavigationController,
isModal: true)
processNavigationController.modalPresentationStyle = .overFullScreen
self.present(processNavigationController, animated: true, completion: nil)
```
or
```swift
FlowX.sharedInstance.continueExistingProcess(uuid: uuid,
name: processName,
tabBarController: processTabBarController,
isModal: false)
processTabBarController.modalPresentationStyle = .overFullScreen
self.present(processTabBarController, animated: true, completion: nil)
```
### How to end a process
You can manually end a process by calling the `stopProcess(name: String)` method.
This is useful when you want to explicitly ask the FlowX shared instance to clean up the instance of the process passed as a parameter.
For example, it could be used for modally displayed processes that are dismissed by the user, in which case the `dismissRequested(forProcess process: String, navigationController: UINavigationController)` method of the FXDataSource will be called.
#### Sample
```swift
FlowX.sharedInstance.stopProcess(name: processName)
```
### FXDataSource
The library offers a way of communication with the container app through the `FXDataSource` protocol.
The data source is a public property of FlowX shared instance.
`public weak var dataSource: FXDataSource?`
```swift
public protocol FXDataSource: AnyObject {
    func controllerFor(componentIdentifier: String) -> FXController?
    func viewFor(componentIdentifier: String) -> FXView?
    func viewFor(componentIdentifier: String, customComponentViewModel: FXCustomComponentViewModel) -> AnyView?
    func navigationController() -> UINavigationController?
    func errorReceivedForAction(name: String?)
    func validate(validatorName: String, value: String) -> Bool
    func dismissRequested(forProcess process: String, navigationController: UINavigationController)
    func viewForStepperHeader(stepViewModel: StepViewModel) -> AnyView?
    func track(event: TrackEvent)
    func newProcessStarted(processInstanceUuid: String)
}
```
* `func controllerFor(componentIdentifier: String) -> FXController?`
This method is used for providing a custom component using UIKit UIViewController, identified by the componentIdentifier argument.
* `func viewFor(componentIdentifier: String) -> FXView?`
This method is used for providing a custom component using UIKit UIView, identified by the componentIdentifier argument.
* `func viewFor(componentIdentifier: String, customComponentViewModel: FXCustomComponentViewModel) -> AnyView?`
This method is used for providing a custom component using SwiftUI View, identified by the componentIdentifier argument.
A view model is provided as an ObservableObject to be added as @ObservedObject inside the SwiftUI view for component data observation.
* `func navigationController() -> UINavigationController?`
This method is used for providing a navigation controller. It can be either a custom `UINavigationController` class, or just a regular `UINavigationController` instance themed by the container app.
* `func errorReceivedForAction(name: String?)`
This method is called when an error occurs after an action is executed.
* `func validate(validatorName: String, value: String) -> Bool`
This method is used for custom validators. It provides the name of the validator and the value to be validated. The method returns a boolean indicating whether the value is valid or not.
* `func dismissRequested(forProcess process: String, navigationController: UINavigationController)`
This method is called, on a modally displayed process navigation, when the user attempts to dismiss the modal navigation. Typically it is used when you want to present a confirmation pop-up.
The container app is responsible for dismissing the UI and calling the stop process APIs.
* `func viewForStepperHeader(stepViewModel: StepViewModel) -> AnyView?`
This method is used for providing a custom SwiftUI view for the stepper navigation header.
* `func track(event: TrackEvent)`
This method is used for collecting analytics events from the SDK. The parameter is a `TrackEvent` enum, which can represent a screen or an action.
```swift
public enum TrackEvent {
    case screen(String)
    case action(String)
}
```
* `func newProcessStarted(processInstanceUuid: String)`
This method is used for handling the start of another main process as a result of a `START_PROJECT` action. The parameter is the uuid of the process instance. The container app is responsible for dismissing the navigation of the current process and displaying the new process navigation.
#### Sample
```swift
class MyFXDataSource: FXDataSource {
    func controllerFor(componentIdentifier: String) -> FXController? {
        switch componentIdentifier {
        case "customComponent1":
            let customComponent: CustomViewController = viewController()
            return customComponent
        default:
            return nil
        }
    }

    func viewFor(componentIdentifier: String) -> FXView? {
        switch componentIdentifier {
        case "customComponent2":
            return CustomView()
        default:
            return nil
        }
    }

    func viewFor(componentIdentifier: String, customComponentViewModel: FXCustomComponentViewModel) -> AnyView? {
        switch componentIdentifier {
        case "customComponent2":
            return AnyView(SUICustomView(viewModel: customComponentViewModel))
        default:
            return nil
        }
    }

    func navigationController() -> UINavigationController? {
        nil
    }

    func errorReceivedForAction(name: String?) {
    }

    func validate(validatorName: String, value: String) -> Bool {
        switch validatorName {
        case "myCustomValidator":
            let myCustomValidator = MyCustomValidator(input: value)
            return myCustomValidator.isValid()
        default:
            return true
        }
    }

    func dismissRequested(forProcess process: String, navigationController: UINavigationController) {
        navigationController.dismiss(animated: true, completion: nil)
        FlowX.sharedInstance.stopProcess(name: process)
    }

    func viewForStepperHeader(stepViewModel: StepViewModel) -> AnyView? {
        return AnyView(CustomStepperHeaderView(stepViewModel: stepViewModel))
    }

    func track(event: TrackEvent) {
        // TODO: track the event using the desired analytics tool.
    }

    func newProcessStarted(processInstanceUuid: String) {
        // TODO: present the new process instance navigation.
    }
}
```
### Custom components
#### FXController
FXController is an open class subclassing UIViewController, which helps the container app provide fully custom screens to the renderer.
It needs to be subclassed for each custom screen.
Use it only when the custom component configured in the UI Designer is the root component of the User Task node.
```swift
open class FXController: UIViewController {
    internal(set) public var data: Any?
    internal(set) public var actions: [ProcessActionModel]?

    open func titleForScreen() -> String? {
        return nil
    }

    open func populateUI() {
    }

    open func updateUI() {
    }
}
```
* `internal(set) public var data: Any?`
`data` is the property containing the data model for the custom component. The type is `Any`, as it could be a primitive value, a dictionary, or an array, depending on the component configuration.
* `internal(set) public var actions: [ProcessActionModel]?`
`actions` is the array of actions provided to the custom component.
* `func titleForScreen() -> String?`
This method is used for setting the screen title. It is called by the renderer when the view controller is displayed.
* `func populateUI()`
This method is called by the renderer, after the controller has been presented, when the data is available.
This will happen asynchronously. It is the container app's responsibility to make sure that the initial state of the view controller does not have default/residual values displayed.
* `func updateUI()`
This method is called by the renderer when an already displayed view controller needs to update the data shown.
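As a sketch, a minimal `FXController` subclass could look like the following; the screen title and the `"title"` data key are assumptions, since the actual data shape depends on the component configuration:

```swift
final class MyCustomScreenController: FXController {
    private let titleLabel = UILabel()

    override func titleForScreen() -> String? {
        "My Custom Screen" // assumed screen title
    }

    override func populateUI() {
        // Called asynchronously once data is available; avoid showing
        // default/residual values before this point.
        if let dict = data as? [String: Any] {
            titleLabel.text = dict["title"] as? String
        }
    }

    override func updateUI() {
        populateUI()
    }
}
```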
#### FXView
FXView is a protocol that helps the container app provide custom UIKit subviews to the renderer. It needs to be implemented by `UIView` instances. Similar to `FXController` it has data and actions properties and a populate method.
```swift
public protocol FXView: UIView {
    var data: Any? { get set }
    var actions: [ProcessActionModel]? { get set }

    func populateUI()
}
```
* `var data: Any?`
`data` is the property containing the data model for the custom view. The type is `Any`, as it could be a primitive value, a dictionary, or an array, depending on the component configuration.
* `var actions: [ProcessActionModel]?`
`actions` is the array of actions provided to the custom view.
* `func populateUI()`
This method is called by the renderer after the screen containing the view has been displayed.
It is the container app's responsibility to make sure that the initial state of the view does not have default/residual values displayed.
It is mandatory for views implementing the FXView protocol to provide the intrinsic content size.
```swift
override var intrinsicContentSize: CGSize {
    return CGSize(width: UIScreen.main.bounds.width, height: 100)
}
```
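Putting it together, a minimal `FXView` implementation might look like this; the `"title"` data key is an assumption:

```swift
final class CustomView: UIView, FXView {
    var data: Any?
    var actions: [ProcessActionModel]?

    private let titleLabel = UILabel()

    func populateUI() {
        // The data shape depends on the component configuration;
        // a dictionary is assumed here.
        if let dict = data as? [String: Any] {
            titleLabel.text = dict["title"] as? String
        }
    }

    override var intrinsicContentSize: CGSize {
        CGSize(width: UIScreen.main.bounds.width, height: 100)
    }
}
```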
#### SwiftUI Custom components
Custom SwiftUI components can be provided as type-erased views.
`FXCustomComponentViewModel` is a class implementing the `ObservableObject` protocol. It is used for managing the state of custom SwiftUI views.
It has two published properties, for data and actions. It also includes a `saveData` dictionary and a `validate` closure used for submitting and validating data from the custom components.
```swift
@Published public var data: Any?
@Published public var actions: [ProcessActionModel] = []
public var saveData: [String: Any]?
public var validate: (() -> Bool)?
```
Example
```swift
struct SampleView: View {
    @ObservedObject var viewModel: FXCustomComponentViewModel

    var body: some View {
        Text("Lorem")
    }
}
```
### Validating SwiftUI Custom Components
A SwiftUI custom component can validate and submit its data when an action is executed from a FlowX.AI UI component.
* `public var saveData: [String: Any]?`
Used for setting data to be submitted from the custom component.
* `public var validate: (() -> Bool)?`
Used for validating the custom component data before executing the action.
#### Sample
```swift
struct MyCustomView: View {
    @ObservedObject var viewModel: FXCustomComponentViewModel

    var body: some View {
        VStack {
            ...
        }
        .onAppear {
            viewModel.saveData = ["customKey": "customValue"]
            viewModel.validate = {
                return true
            }
        }
        .frame(height: 200)
    }
}
```
### Custom header view for Stepper navigation
The container application can provide a custom view that will be used as the stepper navigation header, using the `FXDataSource` protocol method `viewForStepperHeader`.
The method has a parameter, which provides the data needed for populating the view's UI.
```swift
public struct StepViewModel {
    // title for the current step; optional
    public var stepTitle: String?
    // title for the current substep, if there is a stepper in stepper configured; optional
    public var substepTitle: String?
    // 1-based index of the current step
    public var step: Int
    // total number of steps
    public var totalSteps: Int
    // 1-based index of the current substep, if there is a stepper in stepper configured; optional
    public var substep: Int?
    // total number of substeps in the current step, if there is a stepper in stepper configured; optional
    public var totalSubsteps: Int?
}
```
#### Sample
```swift
struct CustomStepperHeaderView: View {
    let stepViewModel: StepViewModel

    var body: some View {
        VStack(spacing: 16) {
            ProgressView(value: Float(stepViewModel.step) / Float(stepViewModel.totalSteps))
                .foregroundStyle(Color.blue)
            if let stepTitle = stepViewModel.stepTitle {
                Text(stepTitle)
            }
            if let substepTitle = stepViewModel.substepTitle {
                Text(substepTitle)
            }
        }
        .background(Color.white)
        .shadow(radius: 10)
    }
}
```
### How to run an action from a custom component
The custom components provided by the container app can contain FlowX actions to be executed. To run an action, call the following method:
```swift
public func runAction(action: ProcessActionModel,
                      params: [String: Any]? = nil)
```
`action` - the `ProcessActionModel` action object
`params` - the parameters for the action
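For illustration only: assuming `ProcessActionModel` exposes the action's name (the property name, the `"save_data"` action, and the params are all assumptions), a custom component could look up one of the actions it received and run it:

```swift
// Inside a custom component that received `actions` from the renderer.
// "save_data" and the params dictionary are hypothetical values.
if let action = actions?.first(where: { $0.name == "save_data" }) {
    FlowX.sharedInstance.runAction(action: action,
                                   params: ["amount": 1000])
}
```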
### How to run an upload action from a custom component
```swift
public func runUploadAction(action: ProcessActionModel,
                            image: UIImage)
```
`action` - the `ProcessActionModel` action object
`image` - the image to upload
```swift
public func runUploadAction(action: ProcessActionModel,
                            fileURL: URL)
```
`action` - the `ProcessActionModel` action object
`fileURL` - the local URL of the file to upload
### Getting a substitution tag value by key
```swift
public func getTag(withKey key: String) -> String?
```
All substitution tags will be retrieved by the SDK before starting the first process and will be stored in memory.
Whenever the container app needs a substitution tag value for populating the UI of the custom components, it can request the substitution tag using the method above, providing the key.
### Getting a media item url by key
```swift
public func getMediaItemURL(withKey key: String) -> String?
```
All media items will be retrieved by the SDK before starting the first process and will be stored in memory.
Whenever the container app needs a media item url for populating the UI of the custom components, it can request the url using the method above, providing the key.
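For example, when populating a custom component's UI (the keys below are hypothetical):

```swift
// Both lookups return nil when the key is not found.
let welcomeTitle = FlowX.sharedInstance.getTag(withKey: "welcome.title")
let logoURL = FlowX.sharedInstance.getMediaItemURL(withKey: "company.logo")
```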
### Handling authorization token changes
When the access token of the auth session changes, you can update it in the renderer using the `func updateAuthorization(token: String)` method.
```swift
FlowX.sharedInstance.updateAuthorization(token: accessToken)
```
# React SDK
Source: https://docs.flowx.ai/4.7.x/sdks/react-renderer
The FlowxProcessRenderer is a low-code library designed to render UI configured via the FlowX Process Editor.
## React project requirements
Your app MUST use SCSS for styling.
To install the npm libraries provided by FLOWX you will need to obtain access to the private FLOWX Nexus registry. Please consult with your project DevOps.
The library was built with **react\~18**, **npm v10.8.0**, and **node v18.16.9**.
## Installing the library
Use the following command to install the **renderer** library and its required dependencies:
Installing `react` and `react-dom` can be skipped if you already have them installed in your project.
```bash
npm install \
react@18 \
react-dom@18 \
@flowx/core-sdk@<version> \
@flowx/core-theme@<version> \
@flowx/react-sdk@<version> \
@flowx/react-theme@<version> \
@flowx/react-ui-toolkit@<version> \
air-datepicker@3 \
axios \
ag-grid-react@32
```
Replace `<version>` with the correct version corresponding to your platform version.
To find the right version, navigate to: **Release Notes → Choose your platform version → Deployment guidelines → Component versions**.
## Initial setup
Once installed, `FlxProcessRenderer` can be imported from the `@flowx/react-sdk` package.
### Theming
Component theming is done through the `@flowx/react-theme` library. The theme id is a required input for the renderer SDK component and is used to fetch the theme configuration. The id can be obtained from the admin panel in the themes section.

### Authorization
It's the responsibility of the client app to implement the authorization flow (using the **OpenID Connect** standard). The renderer SDK will expect the authToken to be passed to the `FlxProcessRenderer` as an input.
```typescript.tsx
import { FlxProcessRenderer } from '@flowx/react-sdk';
export function MyFlxContainer() {
  return <FlxProcessRenderer
    apiUrl={...}
    processApiPath={...}
    authToken={...}
    themeId={...}
    processName={...}
    processStartData={...}
    projectInfo={{ projectId: ... }}
  />;
}
```
The `FlxProcessRenderer` component is required in the application module where the process will be rendered. The component accepts a props where you can pass extra config info, register a **custom component** or **custom validators**.
**Custom components** will be referenced by name when creating the template config for a user task.
**Custom validators** will be referenced by name (`customValidator`) in the template config panel in the validators section of each generated form field.
```typescript.tsx
import { FlxProcessRenderer } from '@flowx/react-sdk';
export function MyFlxContainer() {
  return <FlxProcessRenderer
    validators={{ customValidator: (v: string) => v === '4.5' }}
    staticAssetsPath={...}
    locale="en-US"
    language="en"
    projectInfo={{
      projectId: ...
    }}
  />;
}
```
The entry point of the library is the `FlxProcessRenderer` component. A list of accepted inputs is found below:
**Parameters**:
| Name | Description | Type | Mandatory | Default value | Example |
| ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | --------- | ------------- | ------------------------------------------------ |
| apiUrl | Your base url | string | true | - | [https://yourDomain.dev](https://yourdomain.dev) |
| processApiPath | Process subpath | string | true | - | onboarding |
| authToken | Authorization token | string | true | - | 'eyJhbGciOiJSUzI1NiIsIn....' |
| themeId | Theme id used to style the process. Can be obtained from the themes section in the admin | string | true | - | '123-456-789' |
| processName | Identifies a process | string | true | - | client\_identification |
| processStartData | Data required to start the process | json | true | - | `{ "firstName": "John", "lastName": "Smith"}` |
| language | Language used to localize the enumerations inside the application. | string | false | ro | - |
| isDraft | When true allows starting a process in draft state. \*Note that isDraft = true requires that processName be the **id** (number) of the process and NOT the name. | boolean | false | false | - |
| locale | Defines the locale of the process, used to apply date, currency and number formatting to data model values | string | false | ro-RO | - |
| projectInfo | Defines which FlowX Project will be run inside the process renderer. | json | true | - | `{ "projectId": "111111-222222-333333-44444"}` |
## Starting a process
### Prerequisites
* **Process Name**: You need to know the name of the process you want to start. This name is used to identify the process in the system.
* **FlowX Project UUID**: You need the UUID of the FlowX Project that contains the process you want to start. This UUID is used to identify the project in the system.
* **Locale**: You can specify the locale of the process to apply date, currency, and number formatting to data model values.
* **Language**: You can specify the language used to localize the enumerations inside the application.
### Getting the project UUID
The project UUID can be obtained from the FlowX Dashboard. Navigate to the Projects section and select the project you want to start a process in. The UUID can be copied from the project actions popover.

### Getting the process name
The process name can be obtained from the FlowX Designer. Navigate to the process you want to start and copy the process name from the breadcrumbs.

### Initializing the process renderer
To start a process, you need to initialize the `FlxProcessRenderer` component in your application. The component accepts various props that define the process to start, the theme to use, and other configuration options.
```typescript.tsx
import { FlxProcessRenderer } from '@flowx/react-sdk';
export function MyFlxContainer() {
  return <FlxProcessRenderer
    apiUrl={...}
    processApiPath={...}
    authToken={...}
    themeId={...}
    processName={...}
    projectInfo={{ projectId: ... }}
    locale="en-US"
    language="en"
  />;
}
```
## Custom components
Custom components will be hydrated with data through the data input prop which must be defined in the custom component.
Custom components will be provided through the `components` parameter to the `FlxProcessRenderer` component.
The object keys passed in the `components` prop **MUST** match the custom component names defined in the FlowX process.
Component data defined through an `inputKey` is available under `data` -> `data`
Component actions are always found under `data` -> `actionsFn` key.
```typescript.tsx
export const MyCustomComponent = ({ data }) => {...}
```
```typescript
// data object example
data: {
  data: {
    input1: ''
  },
  actionsFn: {
    action_one: () => void,
    action_two: () => void
  }
}
```
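The shape above can be typed in the container app. As a sketch (the type and helper names below are our own, not part of the SDK), a small helper that safely resolves a process action from `actionsFn`:

```typescript
type FlxComponentData = {
  data: Record<string, unknown>;
  actionsFn: Record<string, () => void>;
};

// Resolve a process action by name, falling back to a no-op so the
// component never crashes if the action is missing from the process.
function resolveAction(component: FlxComponentData, name: string): () => void {
  return component.actionsFn[name] ?? (() => {});
}
```

A component can then wire `resolveAction(data, 'action_one')` to a button click without guarding every call site.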
To add a custom component in the template config tree, we need to know its unique identifier and the data it should receive from the process model.

The properties that can be configured are as follows:
* **Identifier** - This enables the custom component to be displayed within the component hierarchy and determines the actions available for the component.
* **Input keys** - These are used to specify the pathway to the process data that components will utilize to receive their information.
* [**UI Actions**](../../4.5.0/docs/building-blocks/ui-designer/ui-actions) - actions defined here will be made available to the custom component

### Prerequisites (before creation)
* **React Knowledge**: You should have a good understanding of React, as custom components are created and imported using React.
* **Development Environment**: Set up a development environment for React development, including Node.js and npm (Node Package Manager).
* **Component Identifier**: You need a unique identifier for your custom component. This identifier is used for referencing the component within the application.
### Creating a custom component
To create a Custom Component in React, follow these steps:
1. Create a new React component.
2. Implement the necessary HTML structure, TypeScript logic, and SCSS styling to define the appearance and behavior of your custom component.
### Importing the component
After creating the Custom Component, you need to import it into your application.
In your `FlxProcessRenderer` component, add the following property (the shorthand assumes the component is in scope):
```tsx
components={{ MyCustomComponent }}
```
### Using the custom component
Once your Custom Component is declared, you can use it for configuration within your application.

### Data input and actions
The Custom Component accepts input data from processes and can also include actions extracted from a process. These inputs and actions allow you to configure and interact with the component dynamically.

### Extracting data from processes
There are multiple ways to extract data from processes to use within your Custom Component. You can utilize the data provided by the process or map actions from the BPMN process to React actions within your component.

Make sure that the React actions that you declare match the names of the process actions.
### Styling with CSS
To apply CSS classes to UI elements within your Custom Component, you first need to identify the UI element identifiers within your component's HTML structure. Once identified, you can apply defined CSS classes to style these elements as desired.
Example:

### Additional considerations
* **Naming Conventions**: Be consistent with naming conventions for components, identifiers, and actions. Ensure that React actions match the names of process actions as mentioned in the documentation.
* **Component Hierarchy**: Understand how the component fits into the overall component hierarchy of your application. This will help determine where the component is displayed and what actions are available for it.
* **Documentation and Testing**: Document your custom component thoroughly for future reference. Additionally, testing is crucial to ensure that the component behaves as expected in various scenarios.
* **Security**: If your custom component interacts with sensitive data or performs critical actions, consider security measures to protect the application from potential vulnerabilities.
* **Integration with FLOWX Designer**: Ensure that your custom component integrates seamlessly with FLOWX Designer, as it is part of the application's process modeling capabilities.
## Custom validators
You may also define custom validators in your FlowX processes and pass their implementation through the `validators` prop of the `FlxProcessRenderer` component.
The validators are then processed and piped through the popular [React Hook Form](https://www.react-hook-form.com/api/useform/register/) library, taking into account how the error messages are defined in your process.
A validator must have the following type:
```typescript
const customValidator = (...params: string[]) => (v: any) => boolean | Promise<boolean>
```
The object keys passed in the `validators` prop **MUST** match the custom validator names defined in the FlowX process.
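As a sketch, here is a hypothetical `minAge` validator matching that type; the process would pass its configured parameters as strings, and the renderer calls the returned function with the field value (the name and the default of 18 are assumptions, not SDK behavior):

```typescript
// Hypothetical validator: valid when the numeric field value is at least
// the configured minimum (first param), defaulting to 18.
const minAge =
  (...params: string[]) =>
  (v: any): boolean => {
    const min = Number(params[0] ?? "18");
    const age = Number(v);
    return !Number.isNaN(age) && age >= min;
  };
```

It would then be registered as `validators={{ minAge }}` so that the object key matches the validator name configured in the process.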
## Custom CSS
The renderer SDK allows you to pass custom CSS classes on any component inside the process. These classes are then applied to the component's root element.
To add a CSS custom class to a component, you need to define the class in the process designer by navigating to the styles tab of the component, expanding the Advanced accordion and writing down the CSS class.

The classes will be applied last on the element, so they will override the classes already defined on the element.

## Storybook
Below you will find a Storybook that demonstrates how components behave under different states, props, and conditions. It allows you to preview and interact with individual UI components in isolation, without the need for a full-fledged application:
# SDKs overview
Source: https://docs.flowx.ai/4.7.x/sdks/sdks-overview
FLOWX.AI provides web and native mobile SDKs. These SDKs enable developers to create applications that can be displayed in a browser, embedded in an internet banking interface, or in a mobile banking app. The SDKs automatically generate the user interface (UI) based on the business process and data points created by a business analyst, reducing the need for UX/UI expertise.
SDKs are used in the Angular, React, iOS, and Android applications to render the process screens and orchestrate the custom components.
# IAM solution
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/access-management-overview
Identity and access management (IAM) is a framework of business processes, policies and technologies that facilitates the management of electronic or digital identities. With an IAM framework in place, you can control user access to critical information/components within an organization.
## What is an Identity Provider (IdP)?
The IdP, Identity-as-a-Service (IDaaS), Privileged Identity/Access Management (PIM/PAM), Multi-factor/Two-factor Authentication (MFA/2FA), and numerous other subcategories are included in the IAM category.
IdP is a subset of an IAM solution that is dedicated to handling fundamental user IDs. The IdP serves as the authoritative source for defining and confirming user identities.
The IdP is arguably the most important subcategory of the IAM field because it often lays the foundation of an organization's overall identity management infrastructure. In fact, other IAM categories and solutions, such as [IDaaS](https://jumpcloud.com/blog/identity-as-a-service-idaas), PIM/PAM, MFA/2FA, and others are often layered on top of the core IdP and serve to federate core user identities from the IdP to various endpoints. Therefore, your choice of IdP will have a profound influence on your overall IAM architecture.
We recommend **Keycloak**, a component that allows you to create users and store credentials. It can be also used for authorization - defining groups, and assigning roles to users.
Every communication that comes from a consumer application goes through a public entry point (API Gateway). When the consumer application tries to start a process, the entry point checks for authentication (Keycloak issues a token) and validates it.
## Configuring access rights
Granular access rights can be configured for restricting access to the FLOWX.AI components and their features or to define allowed actions for each type of user. Access rights are based on user roles that need to be configured in the identity provider management solution.
To configure the roles for the users, they need to be added first to an identity provider (IdP) solution. **The access rights-related configuration needs to be set up for each microservice**. Default options are preconfigured. They can be overwritten using environment variables.
For more details you can check the next links:
For more information on how to add roles and how to configure an IdP solution, check the following section:
## Using Keycloak with an external IdP
Recommended keycloak version: **22.x**
In all cases, IdP authentication is mandatory, but otherwise all attribute mapping is configurable, including roles and groups; the entire authorization can even be performed by Keycloak.

### AD or LDAP provider
In Lightweight Directory Access Protocol (LDAP) and Active Directory, Keycloak functionality is called federation or external storage. Keycloak includes an LDAP/AD provider.

More details:
Configuration example:
### SAML, OpenID Connect, OAuth 2.0
Keycloak functionality is called brokering. Synchronization is performed during user login.
More details:
Configuration examples for ADFS:
# Application manager access rights
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/app-manager-access-rights
Granular access rights can be configured for restricting access to the Application-manager component.
The **Application Manager** component provides granular access rights, allowing users to perform various actions depending on their assigned roles and the configured scopes.
These access rights are also used by the runtime-manager microservice.
In order for users to view resources within the Application Manager, they must have, in addition to the appropriate `role_apps_manage_` role, at least **read access** on each [**resource**](../../docs/projects/resources).
### Available access scopes
1. **manage-applications**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_APPS_MANAGE_READ`
* `ROLE_APPS_MANAGE_IMPORT`
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **import**
* **Roles**:
* `ROLE_APPS_MANAGE_IMPORT`
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_APPS_MANAGE_ADMIN`
2. **manage-app-dependencies**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_APP_DEPENDENCIES_MANAGE_READ`
* `ROLE_APP_DEPENDENCIES_MANAGE_EDIT`
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_APP_DEPENDENCIES_MANAGE_EDIT`
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
3. **manage-builds**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_BUILDS_MANAGE_READ`
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_IMPORT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **import**
* **Roles**:
* `ROLE_BUILDS_MANAGE_IMPORT`
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_BUILDS_MANAGE_ADMIN`
4. **manage-active-policy**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_ACTIVE_POLICY_MANAGE_READ`
* `ROLE_ACTIVE_POLICY_MANAGE_EDIT`
* **edit**
* **Roles**:
* `ROLE_ACTIVE_POLICY_MANAGE_EDIT`
5. **manage-app-configs**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_APP_CONFIG_MANAGE_READ`
* `ROLE_APP_CONFIG_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **import**
* **Roles**:
* `ROLE_APP_CONFIG_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
6. **manage-app-configs-overrides**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_READ`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **import**
* **Roles**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
### Permissions explained
* **Permissions**:
* Can view Projects entry in main menu
* Add icon for Applications and Libraries sections is hidden - cannot add application or library
* Can view application or library Config view in read-only mode (for draft application versions) with action buttons hidden
* Can export application version
* **Restrictions**:
* Cannot start a draft application version
* Cannot discard changes
* Cannot create build
* Cannot create new branch
* Cannot import application version
* Cannot commit a draft application version
* Cannot merge branches
* Can view draft application version in read-only mode with buttons hidden
* **Roles allowed**:
* `ROLE_APPS_MANAGE_READ`
* `ROLE_APPS_MANAGE_IMPORT`
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **Permissions**:
* Can view Projects entry in main menu
* Can create new application or library
* Can merge branches
* Can create new branch
* Can start new application version
* Can submit application version
* Cannot delete application - Delete icon in contextual menu is hidden
* `ROLE_APPS_MANAGE_EDIT` is required for any type of edit operation on a resource
* **Roles allowed**:
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **Permissions**:
* Can view Import Version entry on:
* Projects page
* Application versioning overlay
* Can view Export version button on application versioning overlay
* **Roles allowed**:
* `ROLE_APPS_MANAGE_IMPORT`
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read, edit, import
* Can delete application or library
* **Roles allowed**: `ROLE_APPS_MANAGE_ADMIN`
* **Permissions**:
* Can view Builds entry in application Runtime tab menu
* Can view Builds page
* Can view Builds content (contextual menu > Build contents)
* Cannot import build
* Projects page > Import icon > Import build is not shown
* **Roles allowed**:
* `ROLE_BUILDS_MANAGE_READ`
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_IMPORT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **Permissions**:
* Can see Create build button on Application Versioning overlay for a committed application version
* **Roles allowed**:
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **Permissions**:
* Can view Builds entry in application Runtime tab menu
* Can import builds
* **Roles allowed**:
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_IMPORT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **Permissions**:
* Can do all of the above
* **Roles allowed**:
* `ROLE_BUILDS_MANAGE_ADMIN`
* **Permissions**:
* Can view Active policy entry in application Runtime tab menu
* Can view Active policy page in read-only mode - Fields and Save button are hidden
* **Roles allowed**:
* `ROLE_ACTIVE_POLICY_MANAGE_READ`
* `ROLE_ACTIVE_POLICY_MANAGE_EDIT`
* **Permissions**:
* All permissions under read
* Can update active policy settings - fields and save button are enabled
* **Roles allowed**: `ROLE_ACTIVE_POLICY_MANAGE_EDIT`
* **Permissions**:
* Can view Configuration parameters in Application Config View menu
* Can view Configuration parameters page in read-only mode
* **Roles allowed**:
* `ROLE_APP_CONFIG_MANAGE_READ`
* `ROLE_APP_CONFIG_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can import configuration parameters
* **Roles allowed**:
* `ROLE_APP_CONFIG_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can add/edit/delete configuration parameters
* Cannot import configuration parameters
* **Roles allowed**:
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **Permissions**:
* All permissions for read, edit, import
* **Roles allowed**: `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **Permissions**:
* Can view Configuration parameters overrides in Application Runtime View menu
* Can view Configuration parameters overrides page in read-only mode:
* cannot add configuration param override
* cannot edit a configuration param override
* cannot delete a configuration param override
* **Roles allowed**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_READ`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can import configuration parameter overrides
* **Roles allowed**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can add/edit configuration parameters overrides
* **Roles allowed**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read, edit, import
* Can delete app config overrides
* **Roles allowed**: `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **Permissions**:
* Can view Dependencies entry in Application Config view menu
* Can view Dependencies page in read-only mode
* **Roles allowed**:
* `ROLE_APP_DEPENDENCIES_MANAGE_READ`
* `ROLE_APP_DEPENDENCIES_MANAGE_EDIT`
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can add/edit dependencies
* **Roles allowed**:
* `ROLE_APP_DEPENDENCIES_MANAGE_EDIT`
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read, edit
* Can delete dependency
* **Roles allowed**: `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
### Configuring access
To define or adjust access for these roles, use the following format in your environment variables:
```plaintext
SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES
```
Roles must be defined in your identity provider (e.g., Keycloak, RH-SSO, Entra or any compatible provider).
Custom roles can be configured as needed, and multiple roles can be assigned to each scope.
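As a sketch of how the variable name is assembled from an authorization name and a scope (both placeholders you substitute with the values documented for each service):

```python
def roles_allowed_var(authorization: str, scope: str) -> str:
    """Compose the environment variable name that sets the allowed
    roles for a given authorization and scope."""
    return f"SECURITY_ACCESSAUTHORIZATIONS_{authorization}_SCOPES_{scope.upper()}_ROLESALLOWED"

# Example: the variable controlling read access for manage-processes
print(roles_allowed_var("MANAGEPROCESSES", "read"))
# SECURITY_ACCESSAUTHORIZATIONS_MANAGEPROCESSES_SCOPES_READ_ROLESALLOWED
```

The value assigned to the resulting variable is a role name (or several) defined in your identity provider.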
# Admin access rights
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/configuring-access-rights-for-admin
Granular access rights can be configured for restricting access to the Admin component.
Access authorizations are provided, each with specified access scopes:
1. **Manage-platform** - for configuring access for managing platform details
Available scopes:
* **read** - users are able to view platform status
* **admin** - users are able to force health check scan
2. **Manage-processes** - for configuring access for managing process definitions
Available scopes:
* **import** - users are able to import process definitions and process stages
* **read** - users are able to view process definitions and stages
* **edit** - users are able to edit process definitions
* **admin** - users are able to publish and delete process definitions, delete stages, edit sensitive data for process definitions
3. **Manage-configurations** - for configuring access for managing generic parameters
Available scopes:
* **import** - users are able to import generic parameters
* **read** - users are able to view generic parameters
* **edit** - users are able to edit generic parameters
* **admin** - users are able to delete generic parameters
4. **Manage-users** - for configuring access for access management
Available scopes:
* **read** - users are able to read all users, groups and roles
* **edit** - users are able to create/update any user group or roles
* **admin** - users are able to delete users, groups or roles
5. **Manage-integrations** - for configuring integrations with adapters
Available scopes:
* **import** - users are able to import integrations
* **read** - users are able to view all the integrations, scenarios and scenarios configuration(topics/ input model/ output model/ headers)
* **edit** - users are able to create/update/delete any values for integrations/scenarios and also scenarios configuration (topics/input model/ output model/ headers)
* **admin** - users are able to delete integrations/scenarios with all children
The Admin service is configured with the following default user roles for each of the access scopes mentioned above:
* **manage-platform**
* read:
* ROLE\_ADMIN\_MANAGE\_PLATFORM\_READ
* ROLE\_ADMIN\_MANAGE\_PLATFORM\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_PLATFORM\_ADMIN
* **manage-processes**
* import:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_IMPORT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* read:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_READ
* ROLE\_ADMIN\_MANAGE\_PROCESS\_IMPORT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* edit:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* **manage-configurations**
* import:
* ROLE\_ADMIN\_MANAGE\_CONFIG\_IMPORT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_EDIT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN
* read:
* ROLE\_ADMIN\_MANAGE\_CONFIG\_READ
* ROLE\_ADMIN\_MANAGE\_CONFIG\_IMPORT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_EDIT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN
* edit:
* ROLE\_ADMIN\_MANAGE\_CONFIG\_EDIT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN
* **manage-users**
* read:
* ROLE\_ADMIN\_MANAGE\_USERS\_READ
* ROLE\_ADMIN\_MANAGE\_USERS\_EDIT
* ROLE\_ADMIN\_MANAGE\_USERS\_ADMIN
* edit:
* ROLE\_ADMIN\_MANAGE\_USERS\_EDIT
* ROLE\_ADMIN\_MANAGE\_USERS\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_USERS\_ADMIN
* **manage-integrations**
* import:
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_IMPORT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_EDIT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN
* read:
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_READ
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_IMPORT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_EDIT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN
* edit:
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_EDIT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN
These roles need to be defined in the chosen identity provider solution, such as Keycloak, RH-SSO, or another compatible identity provider.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
Possible values for `AUTHORIZATIONNAME`: `MANAGEPLATFORM`, `MANAGEPROCESSES`, `MANAGECONFIGURATIONS`, `MANAGEUSERS`, `MANAGEINTEGRATIONS`.
Possible values for `SCOPENAME`: import, read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGEPROCESSES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
# FlowX CMS access rights
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/configuring-access-rights-for-cms
Granular access rights can be configured for restricting access to the CMS component.
Four different access authorizations are provided, each with specified access scopes:
1. **Manage-contents** - for configuring access for manipulating CMS contents
Available scopes:
* import - users are able to import enumeration/substitution tags
* read - users are able to show enumeration/substitution tags, export enumeration/substitution tags
* edit - users are able to create/edit enumeration/substitution tags
* admin - users are able to delete enumeration/substitution tags
2. **Manage-taxonomies** - for configuring access for manipulating taxonomies
Available scopes:
* read - users are able to show languages/source systems
* edit - users are able to edit languages/source systems
* admin - users are able to delete languages/source systems
3. **Manage-media-library** - for configuring access rights to use Media Library
Available scopes:
* import - users are able to import assets
* read - users are able to view assets
* edit - users are able to edit assets
* admin - users are able to delete assets
4. **Manage-themes** - for configuring access rights to use themes, fonts and designer assets
Available scopes:
* import - users are able to import fonts
* read - users are able to view fonts
* edit - users are able to edit fonts
* admin - users are able to delete fonts
The CMS service is preconfigured with the following default user roles for each of the access scopes mentioned above:
* **manage-contents**
* import:
* ROLE\_CMS\_CONTENT\_IMPORT
* ROLE\_CMS\_CONTENT\_EDIT
* ROLE\_CMS\_CONTENT\_ADMIN
* read:
* ROLE\_CMS\_CONTENT\_EDIT
* ROLE\_CMS\_CONTENT\_ADMIN
* ROLE\_CMS\_CONTENT\_READ
* ROLE\_CMS\_CONTENT\_IMPORT
* edit:
* ROLE\_CMS\_CONTENT\_EDIT
* ROLE\_CMS\_CONTENT\_ADMIN
* admin:
* ROLE\_CMS\_CONTENT\_ADMIN
* **manage-taxonomies**
* import:
* ROLE\_CMS\_TAXONOMIES\_IMPORT
* ROLE\_CMS\_TAXONOMIES\_EDIT
* ROLE\_CMS\_TAXONOMIES\_ADMIN
* read:
* ROLE\_CMS\_TAXONOMIES\_READ
* ROLE\_CMS\_TAXONOMIES\_IMPORT
* ROLE\_CMS\_TAXONOMIES\_EDIT
* ROLE\_CMS\_TAXONOMIES\_ADMIN
* edit:
* ROLE\_CMS\_TAXONOMIES\_EDIT
* ROLE\_CMS\_TAXONOMIES\_ADMIN
* admin:
* ROLE\_CMS\_TAXONOMIES\_ADMIN
* **manage-media-library**
* import:
* ROLE\_MEDIA\_LIBRARY\_IMPORT
* ROLE\_MEDIA\_LIBRARY\_EDIT
* ROLE\_MEDIA\_LIBRARY\_ADMIN
* read:
* ROLE\_MEDIA\_LIBRARY\_READ
* ROLE\_MEDIA\_LIBRARY\_EDIT
* ROLE\_MEDIA\_LIBRARY\_ADMIN
* ROLE\_MEDIA\_LIBRARY\_IMPORT
* edit:
* ROLE\_MEDIA\_LIBRARY\_EDIT
* ROLE\_MEDIA\_LIBRARY\_ADMIN
* admin:
* ROLE\_MEDIA\_LIBRARY\_ADMIN
* **manage-themes**
* import:
* ROLE\_THEMES\_IMPORT
* ROLE\_THEMES\_EDIT
* ROLE\_THEMES\_ADMIN
* read:
* ROLE\_THEMES\_READ
* ROLE\_THEMES\_EDIT
* ROLE\_THEMES\_ADMIN
* ROLE\_THEMES\_IMPORT
* edit:
* ROLE\_THEMES\_EDIT
* ROLE\_THEMES\_ADMIN
* admin:
* ROLE\_THEMES\_ADMIN
The needed roles should be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`
Possible values for `AUTHORIZATIONNAME`: `MANAGECONTENTS`, `MANAGETAXONOMIES`.
Possible values for `SCOPENAME`: import, read, edit, admin.
For example, if you need to configure role access for import, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGECONTENTS_SCOPES_IMPORT_ROLESALLOWED: ROLE_CMS_CONTENT_IMPORT
```
# FlowX Engine access rights
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/configuring-access-rights-for-engine
Granular access rights can be configured for restricting access to the Engine component.
Two different access authorizations are provided, each with specified access scopes:
1. **Manage-processes** - for configuring access for running test processes
Available scopes:
* **edit** - users are able to start processes for testing and to test action rules
2. **Manage-instances** - for configuring access for manipulating process instances
Available scopes:
* **read** - users can view the list of process instances
* **admin** - users are able to retry an action on a process instance token
The Engine service is preconfigured with the following default user roles for each of the access scopes mentioned above:
* **manage-processes**
* edit:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* **manage-instances**
* read:
* ROLE\_ENGINE\_MANAGE\_INSTANCE\_READ
* ROLE\_ENGINE\_MANAGE\_INSTANCE\_ADMIN
* admin:
* ROLE\_ENGINE\_MANAGE\_INSTANCE\_ADMIN
These roles need to be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`
Possible values for `AUTHORIZATIONNAME`: `MANAGEPROCESSES`, `MANAGEINSTANCES`.
Possible values for `SCOPENAME`: read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGEINSTANCES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
# Integration Designer access rights
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/configuring-access-rights-for-integration-designer
Granular access rights can be configured to restrict access to the Integration Designer.
Access authorizations in Integration Designer are provided with specified access scopes for both system and workflow management:
1. **Manage-systems** - for configuring access to integration systems.
**Available scopes:**
* **import** - allows users to import integration systems.
* **read** - allows users to view integration systems.
* **edit** - allows users to edit integration systems.
* **admin** - allows users to administer integration systems.
The workflow\_read role allows users to view and monitor integration workflows without making changes:
* Can view all existing workflows and their details (canvas and sidebar).
* Can access the log console, warnings, and audit logs for audit and troubleshooting.
* Cannot create, update, or delete workflows.
* Cannot run workflows or nodes individually.
2. **Manage-workflows** - for configuring access to integration workflows.
**Available scopes:**
* **import** - allows users to import integration workflows.
* **read\_restricted** - allows users to view restricted integration workflows.
* **read** - allows users to view all integration workflows.
* **edit** - allows users to edit integration workflows.
* **admin** - allows users to administer integration workflows.
The workflow\_read-restricted role provides view-only access to integration workflows with limited permissions:
* Can view all workflows and their details (canvas and sidebar).
* Cannot access the log console, audit logs, or warnings.
* Cannot create, update, or delete workflows.
* Cannot run workflows or nodes individually (cannot see running instances).
* Can see instances and nodes, but logs, input, and output will show “unauthorized.”
### Default Roles for Integration Designer
The Integration Designer service is configured with the following default user roles for each access scope mentioned above:
* **manage-systems**
* import:
* `ROLE_INTEGRATION_SYSTEM_IMPORT`
* `ROLE_INTEGRATION_SYSTEM_EDIT`
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* read:
* `ROLE_INTEGRATION_SYSTEM_READ`
* `ROLE_INTEGRATION_SYSTEM_EDIT`
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* edit:
* `ROLE_INTEGRATION_SYSTEM_EDIT`
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* admin:
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* **manage-workflows**
* import:
* `ROLE_INTEGRATION_WORKFLOW_IMPORT`
* `ROLE_INTEGRATION_WORKFLOW_EDIT`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
* read\_restricted:
* `ROLE_INTEGRATION_WORKFLOW_READ_RESTRICTED`
* `ROLE_INTEGRATION_WORKFLOW_READ`
* `ROLE_INTEGRATION_WORKFLOW_EDIT`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
* read:
* `ROLE_INTEGRATION_WORKFLOW_READ`
* `ROLE_INTEGRATION_WORKFLOW_EDIT`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
* edit:
* `ROLE_INTEGRATION_WORKFLOW_EDIT`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
* admin:
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
> **Warning:** These roles must be defined in the selected identity provider, such as Keycloak, Red Hat Single Sign-On (RH-SSO), or another compatible identity provider.
### Customizing Access Roles
In cases where additional custom roles are required, you can configure them using environment variables. Multiple roles can be assigned to each access scope as needed.
**Environment Variable Format:**
To configure access for each role, use the following format:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
* **Possible values for `AUTHORIZATIONNAME`:** `MANAGE_SYSTEMS`, `MANAGE_WORKFLOWS`.
* **Possible values for `SCOPENAME`:** `import`, `read`, `read_restricted`, `edit`, `admin`.
For example, to configure a custom role with read access to manage systems, use:
```plaintext
SECURITY_ACCESSAUTHORIZATIONS_MANAGE_SYSTEMS_SCOPES_READ_ROLESALLOWED: ROLE_CUSTOM_SYSTEM_READ
```
# Configuring an IAM solution (Keycloak)
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/configuring-an-iam-solution
This guide provides step-by-step instructions for configuring a minimal Keycloak setup to manage users, roles, and applications efficiently.
Keycloak is an open-source Identity and Access Management (IAM) solution that makes it easy to secure applications and services with little to no coding.
## Prerequisites
Before you begin, ensure you have the following:
* Keycloak installed
* Administrative access to the Keycloak server
* Basic understanding of IAM concepts
Recommended Keycloak version: **22.x**
## Recommended Keycloak setup
To configure the minimal required Keycloak setup, this guide covers the following steps:
Define available roles and realm-level roles assigned to new users.
Configure the client authentication, valid redirect URIs, and enable the necessary flows.
Set up **admin**, **task management**, **process engine** and **scheduler** service accounts.
Before starting, if you need further information or a broader understanding of Keycloak, refer to the official Keycloak documentation:
## Creating a new realm
A realm is a space where you manage objects, including users, applications, roles, and groups. Creating a new realm is the first step in setting up Keycloak. Follow the steps below to create a new realm in Keycloak:
Log in to the Keycloak Admin Console using the appropriate URL for your environment (e.g., QA, development, production).

In the top left corner dropdown menu, click **Create Realm**. If you are logged in to the master realm, this dropdown menu lists all the realms created.
Enter a realm name and click **Create**.

Configure the **Realm Settings**, such as SSO Session Idle and Access Token Lifespan, according to your organization's needs:
**Sessions** -> **SSO Session idle**: Set to **30 Minutes** (recommended).

**Tokens** -> **Access Token Lifespan**: Set to **30 Minutes** (recommended).

**Common Pitfalls**:
* Ensure that the realm name is unique within your Keycloak instance.
* Double-check session idle and token lifespan settings to align with your security requirements.
## Creating/importing user groups and roles
User groups and roles are essential for managing user permissions and access levels within a realm. You can either create or import user groups into a realm.
To import a super admin group with the necessary default user roles, download and run the provided script.
Instructions:
* Unzip the downloaded file.
* Open a terminal and navigate to the unzipped folder.
* Run the script using the appropriate command for your operating system.
After importing, add an admin user to the group and assign the necessary roles.

Check the default roles to ensure correct import:

**Common Pitfalls**:
* Ensure the script has the necessary permissions to run on your system.
* Verify that the roles and groups align with your organizational structure.
## Creating new users
Creating new users is a fundamental part of managing access within Keycloak. Follow these steps to create a new user in a realm and generate a temporary password:
In the left menu bar, click **Users** to open the user list page.
On the right side of the empty user list, click **Add User**.
Fill in the user details and set **Email Verified** to **Yes**.

In the **Groups** section, search for a group, in our case: `FLOWX_SUPER_USERS` and click **Join**.

Save the user, go to the **Credentials** tab, and set a temporary password. Ensure the **Temporary** checkbox is checked.

**Common Pitfalls**:
* Ensure that the email address is valid and correctly formatted.
* Set the temporary password policy according to your organization’s security requirements.
## Adding clients
A client represents an instance of an application and is associated with a specific realm.
### Adding an OAuth 2.0 client
We'll add a client named `flowx-platform-authenticate`, which will be used for login, logout, and refresh token operations by web and mobile apps.
Click **Clients** in the top left menu, then click **Create client**.

In the **General Settings** tab configure the following properties:
* Set a client ID to `{your-client-name}-authenticate`.
* Set the **Client type** to `OpenID Connect`.

Now click **Next** and configure the **Capability config** details:
* Enable **Direct Access Grants**.
* Enable **Implicit Flow Enabled**.

Click **Next** and configure **Login settings**:
* Set **Valid redirect URIs**, specifying a valid URI pattern that a browser can redirect to after a successful login or logout; simple wildcards are allowed.

After creating the client, scroll down in the **Settings** tab and configure additional settings - **Logout Settings**:
* **Backchannel Logout Session Required**: Toggle OFF.
* **Front Channel Logout**: Toggle OFF.

Add **mappers** to `{your-client-name}-authenticate` client.
For instructions on adding mappers and understanding which mappers to add to your clients, refer to the section on [**Adding Protocol Mappers**](#adding-protocol-mappers).
### Adding an Authorizing client
To authorize REST requests to microservices and Kafka, create and configure the `{your-client-name}-platform-authorize` client.
Enter the client ID (`{your-client-name}-platform-authorize`).
Set **Client type** to **OpenID Connect**.

**Client Authentication**: Toggle ON
This setting defines the type of the OIDC client. When enabled, the client type is set to "confidential access." When disabled, it is set to "public access".
Disable **Direct Access Grants**.

**Valid Redirect URIs**: Populate this field with the appropriate URIs.
Save the configuration.

After creating the client, scroll down in the **Settings** tab and configure additional settings - **Logout Settings**:
* **Backchannel Logout Session Required**: Toggle OFF.
* **Front Channel Logout**: Toggle OFF.

Once you have configured these settings, the `{your-client-name}-platform-authorize` client will be created and can be used to authorize REST requests to microservices and Kafka within your application.
#### Example configuration for microservices
Below is an example of a minimal configuration for microservices using OAuth2 with the `{your-client-name}-platform-authorize` client:
```yaml
security:
  type: oauth2 # Specifies the security type as OAuth2
  basic:
    enabled: false # Disables basic authentication
  oauth2:
    base-server-url: http://localhost:8080 # Sets the base server URL for the Keycloak server
    realm: flowx # Specifies the Keycloak realm
    client:
      access-token-uri: ${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token
      client-id: your-client-name-platform-authorize
      client-secret: CLIENT_SECRET
    resource:
      user-info-uri: ${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/userinfo
```
| Configuration Key | Value/Example | Description |
| ----------------------------------------- | ----------------------------------------------------------------------------------------------------- | -------------------------------------------------- |
| `security.type` | `oauth2` | Specifies the security type as OAuth2. |
| `security.basic.enabled` | `false` | Disables basic authentication. |
| `security.oauth2.base-server-url` | `http://localhost:8080` | Sets the base server URL for the Keycloak server. |
| `security.oauth2.realm` | `flowx` | Specifies the Keycloak realm. |
| `security.oauth2.client.access-token-uri` | `${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token` | Defines the URL for obtaining access tokens. |
| `security.oauth2.client.client-id` | `your-client-name-platform-authorize` | Sets the client ID for authorization. |
| `security.oauth2.client.client-secret` | `CLIENT_SECRET` | Provides the client secret for authentication. |
| `security.oauth2.resource.user-info-uri` | `${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/userinfo` | Specifies the URL for retrieving user information. |
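As a quick sanity check of how the `${...}` property references in the YAML above resolve, this small sketch reproduces the interpolation by hand, using the example values from the table:

```python
def resolve(template: str, values: dict) -> str:
    """Substitute ${key} placeholders the way the property references
    in the configuration above are resolved."""
    for key, value in values.items():
        template = template.replace("${" + key + "}", value)
    return template

values = {
    "security.oauth2.base-server-url": "http://localhost:8080",
    "security.oauth2.realm": "flowx",
}
token_uri = resolve(
    "${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token",
    values,
)
print(token_uri)
# http://localhost:8080/realms/flowx/protocol/openid-connect/token
```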
## Adding protocol mappers
Protocol mappers in Keycloak allow you to transform tokens and documents, enabling actions such as mapping user data into protocol claims and modifying requests between clients and the authentication server. This provides greater customization and control over the information contained in tokens and exchanged during authentication processes.

To enhance your clients' functionality, add the following common mappers:
* [Group Membership mapper](#group-membership-mapper) (`realm-groups`)
* Maps user groups to the authorization token.
* [User Attribute mapper](#user-attribute-mapper) (`business filter mapper`)
* Maps custom attributes, for example, mapping the [businessFilters](/4.0/docs/platform-deep-dive/user-roles-management/business-filters) list, to the token claim.
* [User Realm role](#user-realm-role) (`realm-roles`)
* Maps a user's realm role to a token claim.
The mappers we use can also be configured to control the data returned by the `/userinfo` endpoint, in addition to being included in tokens. This capability is a feature that not all Identity Providers (IDPs) support.
By incorporating these mappers, you can further customize and enrich the information contained within your tokens, ensuring they carry the necessary data for your applications.
### Group Membership mapper
Steps to add a Group Membership mapper:
From the Keycloak admin console, go to **Clients** and select your desired client.

In the client settings, click on **Client Scopes**.
Select the **dedicated client scope**: `{your-client-name}-authenticate-dedicated` to open its settings.

Make sure the **Mappers** tab is selected within the dedicated client scope.

Click **Add Mapper**. From the list of available mappers, choose **Group Membership**.

**Name**: Enter a descriptive name for the mapper to easily identify its purpose, for example `realm-groups`.
**Token Claim Name**: Set the token claim name, typically as `groups`, for including group information in the token.
**Add to ID Token**: Toggle OFF.

By configuring the group membership mapper, you will be able to include the user's group information in the token for authorization purposes.
### User Attribute mapper
To include custom attributes such as business filters in the token claim, follow these steps to add a user attribute mapper:
From the Keycloak admin console, go to **Clients** and select your desired client.
Click on **Client Scopes** and choose `{your-client-name}-authenticate-dedicated` to open its settings.
Ensure the **Mappers** tab is selected.
Click **Add mapper**. From the list of available mappers, select **User Attribute**.

* **Mapper Type**: Select **User Attribute**.
* **Name**: Enter a descriptive name for the mapper, such as "Business Filters Mapper".
* **User Attribute**: Enter `businessFilters`.
* **Token Claim Name**: Enter `attributes.businessFilters`.
* **Add to ID Token**: Toggle OFF.
* **Multivalued**: Toggle ON.

By adding this user attribute mapper, the custom attribute "businessFilters" will be included in the token claim under the name "attributes.businessFilters". This enables you to access and utilize business filters information within your application.
For more information about business filters, refer to the following section:
### User realm role mapper
To add a roles mapper to the `{your-client-name}-authenticate` client, so roles will be available in the OAuth user info response, follow these steps:
From the Keycloak admin console, go to **Clients** and select your desired client.
Click on **Client Scopes** and choose `{your-client-name}-authenticate-dedicated` to open its settings.
Ensure the **Mappers** tab is selected.
Click **Add Mapper**. From the list of available mappers, select **User Realm Role**.

* **Name**: Enter a descriptive name for the mapper, such as "Roles Mapper".
* **Mapper Type**: Select **User Realm Role**.
* **Token Claim Name**: Enter `roles`.
* **Add to ID Token**: Toggle OFF.
* **Add to access token**: Toggle OFF.

By adding this roles mapper, the assigned realm roles of the user will be available in the OAuth user info response under the claim name "roles". This allows you to access and utilize the user's realm roles within your application.
Please note that you can repeat these steps to add multiple roles mappers if you need to include multiple realm roles in the token claim.
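To verify that the mappers produce the expected claims, you can decode the payload segment of an access token. This is a minimal sketch using only the standard library; the token below is a fabricated example carrying the claim names discussed above, not a real Keycloak token, and it skips signature verification entirely:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated claims resembling what the three mappers above would add;
# "branch-01" is an invented business filter value.
claims = {
    "groups": ["FLOWX_SUPER_USERS"],
    "roles": ["ROLE_ADMIN_MANAGE_PROCESS_READ"],
    "attributes": {"businessFilters": ["branch-01"]},
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake_token = f"header.{payload}.signature"

decoded = decode_jwt_payload(fake_token)
print(decoded["groups"])
# ['FLOWX_SUPER_USERS']
```

In production, always validate the token signature with a proper JWT library before trusting any claims.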
### Examples
#### Login
To request a login token:
```curl
curl --location --request POST 'http://localhost:8080/realms/flowx/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'username=admin@flowx.ai' \
--data-urlencode 'password=password' \
--data-urlencode 'client_id=your-client-name-authenticate'
```
#### Refresh token
To refresh an existing token:
```curl
curl --location --request POST 'http://localhost:8080/realms/flowx/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=refresh_token' \
--data-urlencode 'client_id=your-client-name-authenticate' \
--data-urlencode 'refresh_token=REFRESH_TOKEN'
```
#### User info
To retrieve user information:
```curl
curl --location --request GET 'localhost:8080/realms/flowx/protocol/openid-connect/userinfo' \
--header 'Authorization: Bearer ACCESS_TOKEN'
```
## Adding service accounts
**What is a service account?**
A service account grants direct access to the Keycloak API for a specific component. Each client can have a built-in service account that allows it to obtain an access token.
To use this feature, you must enable **Client authentication** (access type) for your client. When you do this, the **Service Accounts Enabled** switch will appear.
### Admin service account
The admin service account is used by the admin microservice to connect with Keycloak, enabling user and group management features within the FlowX.AI Designer.
Steps to add an Admin service account:
Navigate to **Clients** and select **Create client**.
Enter a **Client ID** for your new client.

* Enable **Client authentication** (access type).
* Disable **Standard flow**.
* Disable **Direct access grants**.
* Enable **Service accounts roles**.

After creating the client, scroll down in the **Settings** tab and configure additional settings - **Logout Settings**:
* **Backchannel Logout Session Required**: Toggle OFF.
* **Front Channel Logout**: Toggle OFF.

In the newly created client, navigate to the **Service accounts roles** tab.

Click **Assign role** and in the Filter field, select **Filter by clients**.

Assign the necessary roles to the admin service account based on the required access scopes, such as:
* **manage-realm**
* **manage-users**
* **query-users**
In the end, you should have something similar to this:

Ensure you have created a realm-management client to include the necessary client roles.

The admin service account does not require mappers as it doesn't utilize roles. Service account roles include client roles from `realm-management`.
For more detailed information on admin access rights, refer to the following section:
### Task Management service account
The task management service account enables process initiation and the use of the task management plugin (which requires the `FLOWX_ROLE` realm role and a roles mapper), and allows the plugin to access data from Keycloak.
Steps to add a Task Management service account:
Follow steps **1**-**3** as in the Admin Service account configuration: [Admin service account](#admin-service-account).
Assign the necessary service accounts client roles to the Task Management plugin service account based on the required access scopes, such as:
* **view-realm**
* **view-users**

The task management plugin service account requires a realm roles mapper to function correctly. Make sure to configure this to ensure proper operation.
In the end, you should have something similar to this:

In the newly created task management plugin service account, select **Client Scopes**:
Click on `{your-client-name}-service-account` to open its settings.
Ensure the Mappers tab is selected within the dedicated client scope.
Click **Add mapper**. From the list of available mappers, select **User Realm Role**.

**Name**: Enter a descriptive name for the mapper to easily identify its purpose, for example `realm-roles`.
**Token Claim Name**: Set it to `roles`.
Disable **Add to ID token**.

Click **Save**.

Assign the `FLOWX_ROLE` service account realm role (used to grant permissions for starting processes).
The `FLOWX_ROLE` is used to grant permissions for starting processes in the FlowX.AI Designer platform. By default, this role is named `FLOWX_ROLE`, but its name can be changed from the application configuration of the Engine by setting the following environment variable:
`FLOWX_PROCESS_DEFAULTROLES`
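As a sketch (the role value shown here is hypothetical), overriding the default role name in the Engine's environment could look like:

```shell
# Hypothetical override in the Engine's environment: processes can then be
# started by users holding MY_COMPANY_START_ROLE instead of FLOWX_ROLE.
export FLOWX_PROCESS_DEFAULTROLES="MY_COMPANY_START_ROLE"
```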
For more information about task management plugin access rights, check the following section:
### Process engine service account
The process engine requires a process engine service account to make direct calls to the Keycloak API.
This service account is also required in order to use the [**Start Catch Event**](../../docs/building-blocks/node/message-events/message-catch-start-event) node.
**To create the process engine service account**:
1. Follow steps **1**-**3** from the Admin service account configuration: [Admin service account](#admin-service-account).
2. Assign the necessary service account roles. This service account does not require service account client roles; it only needs a realm role (to be able to start process instances) and a realm-roles mapper.
3. Add the `FLOWX_ROLE` service account realm role (used to grant permissions for starting processes):

In the end, you should have something similar to this:

4. Add a **realm-roles** mapper:

### Scheduler service account
This service account is used for [**Start Timer Event**](../../docs/building-blocks/node/timer-events/timer-start-event) node. The registered timers in the scheduler require sending a process start message to Kafka. Authentication is also necessary for this operation.
The configuration is similar to the **process engine service account**:
* Assign the `FLOWX_ROLE` as a service account role (this is needed to run process instances).

* Add a **realm-roles** mapper (as shown in the example for process-engine service account).

### Integration Designer service account
The Integration Designer service account is used by the integration designer microservice to interact securely with Keycloak, enabling it to manage various integrations within the FlowX.AI platform.
Steps to set up an Integration Designer service account:
* In Keycloak, navigate to **Clients** and select **Create client**.
* Enter a **Client ID** for your new client (e.g., `integration-designer-sa`).

* Enable **Client authentication** under access type.
* Enable **Service accounts roles** to allow the account to manage integrations.

* Skip the **Login settings** page.
* Click **Save** to apply the configuration.
For further details on configuring access rights and roles for the Integration Designer service account, refer to the following section:
### Runtime manager service account
The runtime manager service account is used by both Application Manager and Runtime Manager services to connect with Keycloak and perform export/import operations for builds, application versions, or other resource-specific tasks.
Steps to add a Runtime manager service account:
Navigate to **Clients** and select **Create client**.
Enter a **Client ID** for your new client.

* Enable **Client authentication** (access type).
* Disable **Standard flow**.
* Disable **Direct access grants**.
* Enable **Service accounts roles**.

After creating the client, scroll down in the **Settings** tab and configure additional settings - **Logout Settings**:
* **Backchannel Logout Session Required**: Toggle OFF.
* **Front Channel Logout**: Toggle OFF.

In the newly created client, navigate to the **Service accounts roles** tab.

Click **Assign role** and in the Filter field, select **Filter by realm roles**.

Make sure the service account has the necessary roles assigned; this is crucial for features like export/import of builds, application versions, and resource files, and for invoking scenarios such as moving scheduled events from one build to another.
Assign the necessary roles to the runtime manager service account based on the required access scopes. The following roles are required:
* `ROLE_TASK_MANAGER_HOOKS_ADMIN`
* `ROLE_TASK_MANAGER_VIEWS_ADMIN`
* `ROLE_DOCUMENT_TEMPLATES_ADMIN`
* `ROLE_ADMIN_MANAGE_PROCESS_ADMIN`
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_ADMIN`
* `ROLE_CMS_CONTENT_ADMIN`
* `ROLE_MEDIA_LIBRARY_ADMIN`
* `ROLE_NOTIFICATION_TEMPLATES_ADMIN`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
For more detailed information on application manager/runtime manager access rights, refer to the following section:
By following these steps, you will have a minimal Keycloak setup to manage users, roles, and applications efficiently. For more detailed configurations and advanced features, refer to the official Keycloak documentation.
# Configuring an IAM solution (EntraID)
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/configuring-an-iam-solution-entra
This guide provides step-by-step instructions for configuring a minimal EntraID setup to manage users, roles, and applications efficiently.
## Overview
Microsoft Entra is Microsoft’s unified identity and access management solution designed to protect access to applications, resources, and user data across an organization’s environment. It provides a robust identity and access management (IAM) framework, allowing secure access control, role management, and integration of various applications under a single directory. Entra is crucial for managing multi-cloud and hybrid environments securely, enforcing policies, and supporting both on-premises and cloud resources.
## Prerequisites
* Application Administrator role
* Basic understanding of IAM and OIDC concepts
## Recommended EntraID setup
This setup configures Microsoft Entra to manage and secure access for FlowX.AI applications, handling user roles, custom attributes, and application-specific permissions. The setup covers these main components:
* Flowx-Web and Flowx-API are the core applications that act as entry points for the FlowX.AI platform. Additional applications like Flowx-Admin, Task Management Plugin, and Scheduler Core are registered to support specific functionalities.
* Each application registration includes settings for authentication, API permissions, and role assignments.
* Configures OAuth 2.0 and OIDC protocols, enabling secure access to resources.
* Roles and permissions are assigned through Entra, and single sign-on (SSO) is set up for ease of access across applications.
* Token Configuration includes defining claims (e.g., `email`, `groups`) for use in JWTs, which are used for secure identity validation across services.
* API Permissions are managed using Microsoft Graph, which governs access to resources like user profiles and groups within FlowX.AI.
Custom attribute extensions (e.g., `businessFilter`) allow organizations to apply additional filters or metadata to user and group profiles, configured and managed using Microsoft Graph CLI.
* Helm charts provide a structured setup for deploying FlowX.AI applications in containerized environments.
* Key values such as `tenant_id`, `client_id`, and `client_secret` are configured to support authentication and secure access.
JWT tokens are configured to carry user claims, roles, and custom attributes, ensuring that each token provides comprehensive identity details for FlowX.AI applications.
### Flowx-web app registration
The Flowx-web application serves as the main entry point for logging into the FlowX.AI Designer or container applications.
#### Application registration steps
To register the Flowx-web application, follow these steps:
1. Navigate to [https://portal.azure.com](https://portal.azure.com) and log in to your EntraID directory, which will host your FlowX.AI application registrations.
2. Go to **Microsoft EntraID > App registrations > New registration**

3. Enter a name for your application, then select **Accounts in this organizational directory only (Single tenant)** to limit access to your organization’s directory.

4. Click **Register** to complete the setup.
You will be redirected to the overview of the newly created app registration.
#### Authentication steps
Follow these steps to configure authentication for the Flowx-web application:
1. Go to the **Authentication** tab. Under **Platform configurations**, add a new platform by selecting **Single-page application (SPA)**. Then, set the **Redirect URIs** to point to the URIs of your Designer application.


2. Click **Configure** to save the platform settings.
3. Next, click **Add URI** to include an additional redirect URI, this time pointing to your container application's URI.


4. Click **Save** to confirm the redirect URI changes.
5. Scroll down to **Advanced Settings**. Under **Mobile and Desktop Applications**, toggle **Enable the following mobile and desktop flows** to **Yes**.

6. Click **Save** again to apply all changes.
#### API permissions
To configure the necessary API permissions, follow these steps:
1. Navigate to the **API permissions** tab and click **Add a permission**.

2. In the permissions menu, select **Microsoft Graph** and then choose **Delegated permissions**.


3. Add the following permissions by selecting each option under **OpenId permissions**:
* email
* offline\_access
* openid
* profile

4. After adding these permissions, click **Add permissions** to confirm.
#### Token configuration
Configure the claims you want to include in the ID token.
1. Navigate to the **Token configuration** tab. Click **Add optional claim**, then select **Token type > Access**.
* Choose the following claims to include in the token:
* email
* family\_name
* given\_name
* preferred\_username

2. Click **Add** to save these optional claims.

3. Next, add group claims to the token by clicking **Add groups claim**.
4. Select **All groups** and, under each token type, select **sAMAccountName** (this may differ for your specific organization).

#### Setting token expiration policies
For organizations that require specific control over token lifetimes, Microsoft Entra allows customization of token expiration policies.
1. **Create a Custom Token Lifetime Policy**: Define the desired expiration settings for access, ID, and refresh tokens in the policy.
2. **Assign the Policy to a Service Principal**: Apply the policy to your Flowx-web or Flowx-API app registrations to enforce token lifetime requirements.
For more details on creating and assigning policies for token expiration, refer to [**Microsoft's guide on configuring token lifetimes**](https://learn.microsoft.com/en-us/entra/identity-platform/configure-token-lifetimes#create-a-policy-and-assign-it-to-a-service-principal).
Adjusting token lifetimes can enhance security by reducing the window for unauthorized access.
***
### Flowx-API app registration
The Flowx-API application is used to configure the access token necessary for secure communication between the browser and all exposed FlowX APIs.
#### Application registration steps
To register the Flowx-API application, follow these steps:
1. Navigate to [https://portal.azure.com](https://portal.azure.com) and log in to your EntraID directory, which will host your FlowX.AI application registrations.
2. Go to **Microsoft EntraID > App registrations > New registration**

3. Enter a name for your application, then select **Accounts in this organizational directory only (Single tenant)** to limit access to your organization’s directory.

4. Click **Register** to complete the setup.
You will be redirected to the overview page of the newly created app registration.
#### API permissions
To configure the necessary API permissions, follow these steps:
1. Go to the **API permissions** tab and click **Add a permission**.

2. In the permissions menu, select **Microsoft Graph** and then choose **Delegated permissions**.


3. Add the following permissions by selecting each option under **OpenId permissions**:
* email
* offline\_access
* openid
* profile

4. After adding these permissions, click **Add permissions** to confirm.
#### Token configuration
Configure the claims you want to include in the ID token.
1. Navigate to the **Token configuration** tab. Click **Add optional claim**, then select **Token type > Access**.
* Choose the following claims to include in the token:
* email
* family\_name
* given\_name
* preferred\_username

2. Click **Add** to save these optional claims.
3. Next, add group claims to the token by clicking **Add groups claim**.
4. Select **All groups** and, under each token type, select **sAMAccountName** (this may differ for your specific organization).

#### Expose an API
To configure the API exposure and define scopes:
1. In the **Expose an API** section, click **Add** under **Application ID URI**. It’s recommended to use the application’s name for consistency.

2. Click **Save**.
3. Under **Scopes defined by this API**, click **Add a scope** and configure it as follows:
* **Scope name**: `FlowxAI.ReadWrite.All`
* **Who can consent**: Admins and users
* **Admin consent display name**: Full API Access for FlowX.AI Platform
* **Admin consent description**: Grants this application full access to all available APIs, allowing it to read, write, and manage resources across the FlowX.AI platform.
* **User consent display name**: Same as admin consent display name
* **User consent description**: Same as admin consent description
* **State**: Enabled

This scope is not used directly to grant permissions. Instead, it is included in login requests made from a web client. When a client makes a login request with this scope, Entra ID uses it to identify and provide the appropriate access token configured here, ensuring secure access.
4. Under **Authorized client applications**, click **Add a client application**. Add each of the following client applications, selecting the `FlowxAI.ReadWrite.All` scope:
* flowx-web
* flowx-admin
* flowx-process-engine
* flowx-integration-designer
* flowx-task-management-plugin
* flowx-scheduler-core

Client IDs for these applications can be found on the **Overview** page of each respective application. If some applications are not created yet, you can return and add them to this section after their creation.

#### Application roles
To configure application roles, follow these steps:
1. Navigate to **App roles** and click **Create app role**.
2. Under **Allowed member types**, select **Both (Users/Groups + Applications)**.

3. Complete the role details with the following information:
* Display name: The name displayed for the role.
* Value: A unique identifier for the role within applications.
* Description: A description of the role’s purpose.
The app role list should match the Keycloak setup. A list of default roles can be found [**here**](default-roles).
### Other applications registration
The following FlowX.AI applications require similar steps for registration:
* Flowx-Admin
* Flowx-Process-Engine
* Flowx-Integration-designer
* Flowx-Task-Management-Plugin
* Flowx-Scheduler-Core
***
### Flowx-Admin app registration
1. **Create a New Application Registration**
* Go to [https://portal.azure.com](https://portal.azure.com) and log in to your Entra ID directory where you will host FlowX.AI application registrations.
2. **Register the Application**
* Navigate to **Microsoft Entra ID > App registrations > New registration**.
* Set the application name and select **Accounts in this organizational directory only (Single tenant)**.
* Click **Register**. You will be redirected to the overview page of the newly created app registration.
You will now see the overview for your new app registration.
#### Configure client secrets
1. Navigate to **Certificates & secrets**.
2. Under **Client secrets**, click **New client secret**.
3. Set a **description** and choose an **expiration time** for the client secret, then click **Add**.

Copy the generated client secret value. This will be used to configure `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_SECRET`.

#### Configure API permissions
1. Go to the **API permissions** tab and click **Add a permission**.
2. Select **Microsoft Graph > Application permissions**.
3. Add the following permissions for **flowx-admin**:
* **Application.Read.All**

If you have admin privileges, you can click **Grant admin consent** to apply these permissions. If not, contact your tenant administrator to grant consent.

***
### Flowx-Process-Engine app registration
Follow the same **Application Registration Steps** and **Configure Client Secrets** steps as above.
#### Configure API permissions
* **API Permissions**: No additional permissions required.
***
### Flowx-Integration-Designer app registration
Follow the same **Application Registration Steps** and **Configure Client Secrets** steps as above.
#### Configure API permissions
* **API Permissions**: No additional permissions required.
***
### Flowx-Task-Management-Plugin app registration
Follow the same **Application Registration Steps** and **Configure Client Secrets** steps as above.
#### Configure API permissions
1. Go to the **API permissions** tab and click **Add a permission**.
2. Select **Microsoft Graph > Application permissions**.
3. Add the following permissions for **flowx-task-management-plugin**:
* **Application.Read.All**
* **Group.Read.All**
* **User.Read.All**
If you have admin privileges, you can click **Grant admin consent** to apply these permissions. If not, contact your tenant administrator to grant consent.
***
### Flowx-Scheduler-Core app registration
Follow the same **Application Registration Steps** and **Configure Client Secrets** steps as above.
#### Configure API permissions
* **API Permissions**: No additional permissions required.
***
### Assigning a role to a user/group
To assign a role to a user or group for your FlowX.AI applications, follow these steps:
1. Go to [https://portal.azure.com](https://portal.azure.com) and log in to your Entra ID directory that hosts your FlowX.AI application registrations.
2. Navigate to **Microsoft Entra ID > Enterprise applications** and search for your **flowx-api** app registration name.\
(An enterprise application with the same name was automatically created when the app registration was set up.)


3. Under **Users and groups**, select **Add user/group**.

* Choose the user or group you want to assign.

* Select the appropriate role from the available options.
4. Click **Assign** to complete the role assignment.

It is recommended to assign roles through group membership for easier management and scalability.
***
### Adding custom attributes
Using Microsoft Graph CLI, you can add custom attributes such as `businessFilter`.
For more information about Microsoft Graph CLI, check the following docs:
#### Prerequisites
* [Install the Microsoft Graph CLI](https://learn.microsoft.com/en-us/graph/cli/installation?tabs=macos).
#### Create an attribute extension property
Create an Attribute Extension Property on the flowx-api app registration:
1. Log in to Microsoft Graph CLI with the necessary permissions:
```bash
$ mgc login --scopes Directory.Read.All
```
You can add additional permissions by repeating the mgc login command with the new permission scopes.
2. Create the attribute extension property by running the following command. Replace `<application-id>` with the object ID of your flowx-api application:
```bash
$ mgc applications extension-properties create --application-id <application-id> --body '
{
  "dataType": "String",
  "name": "businessFilter",
  "targetObjects": [
    "User", "Group"
  ]
}'
```
#### Retrieve the attribute extension name
To confirm the attribute extension name, use the command below. This will output the exact name of the created extension property.
```bash
$ mgc applications extension-properties list --application-id <application-id> --select name
```
Example output:
```json
{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#applications(\u0027\u0027)/extensionProperties(name)",
  "value": [
    {
      "name": "extension_ec959542898b42bcb6922e7d3f9df282_businessFilter"
    }
  ]
}
```
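The generated name follows the pattern `extension_<appId-without-dashes>_<attributeName>`, so you can also derive it locally. The application ID below is reconstructed from the example output above and is illustrative:

```shell
# Derive the extension property name: "extension_" + application (client) ID
# with dashes stripped + "_" + attribute name. APP_ID matches the example above.
APP_ID="ec959542-898b-42bc-b692-2e7d3f9df282"
ATTR="businessFilter"
EXT_NAME="extension_$(printf '%s' "$APP_ID" | tr -d '-')_${ATTR}"
echo "$EXT_NAME"   # extension_ec959542898b42bcb6922e7d3f9df282_businessFilter
```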
#### Configure token claim
1. Go to the **flowx-api** app registration in the Azure portal.
2. Navigate to **Token configuration**.
3. Click **Add optional claim**.
* Select **Token type** as **Access**.
* Check the box for `extn.businessFilter`.
4. Click Add to save the changes.

#### Assign attribute extension to a user
1. Log in with the required permissions to modify user attributes:
```bash
$ mgc login --scopes User.ReadWrite.All
```
2. Assign the `businessFilter` attribute to a user by running the command below. Replace `<user-id>` with the user's object ID:
```bash
$ mgc users patch --user-id <user-id> --body '
{
  "extension_ec959542898b42bcb6922e7d3f9df282_businessFilter": "docs"
}'
```
#### Assign attribute extension to a group
Follow similar steps to assign the `businessFilter` attribute to a group, this time using the group's object ID:
1. Log in with the required permissions to modify group attributes:
```bash
$ mgc login --scopes Group.ReadWrite.All
```
2. Assign the custom attribute by running the command below, replacing `<group-id>` with the group's object ID. The `businessFilter` attribute is set to "docs" in this example.
```bash
$ mgc groups patch --group-id <group-id> --body '
{
  "extension_ec959542898b42bcb6922e7d3f9df282_businessFilter": "docs"
}'
```
***
## Example JWT token for user
To verify that the custom attributes and roles have been correctly applied, you can inspect a sample JWT token issued to a user. This token will include standard claims along with any custom attributes and roles configured in your Entra ID setup.
### Steps to retrieve a JWT token
1. **Login to the FlowX.AI Application**\
Log in to the FlowX.AI application as the user for whom the JWT token needs to be inspected.
2. **Retrieve the JWT Token**\
After logging in, retrieve the JWT token by one of the following methods:
* Using browser developer tools to inspect network requests (look for requests with an `Authorization` header).
* Accessing a token endpoint if available, using a tool like Postman.
3. **Decode the JWT Token**\
Use a JWT decoding tool, such as [jwt.io](https://jwt.io/), to decode and inspect the token.
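If you prefer not to paste a production token into a website, the payload can be decoded locally: a JWT is three base64url segments joined by dots, and the second segment is the claims JSON. The token below is fabricated for illustration:

```shell
# Build a fake token from a known payload, then decode segment 2 locally.
CLAIMS='{"email":"john.doe@flowx.ai","roles":["FLOWX_ROLE"]}'
B64=$(printf '%s' "$CLAIMS" | python3 -c 'import base64,sys; print(base64.urlsafe_b64encode(sys.stdin.buffer.read()).decode().rstrip("="))')
JWT="eyJhbGciOiJSUzI1NiJ9.${B64}.fake-signature"

# Extract the payload segment and decode it (restore stripped base64 padding first)
SEG=$(printf '%s' "$JWT" | cut -d '.' -f 2)
python3 -c 'import base64,json,sys
p = sys.argv[1]
p += "=" * (-len(p) % 4)
print(json.dumps(json.loads(base64.urlsafe_b64decode(p))))' "$SEG"
# → {"email": "john.doe@flowx.ai", "roles": ["FLOWX_ROLE"]}
```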
***
### Sample JWT token structure
Below is an example JWT token structure that includes key claims, custom attributes, and roles:
```json
{
"aud": "api://rd-p-example-flowx-api",
"iss": "https://sts.windows.net/673cac6c-3d63-40cf-a43f-07408dd91072/",
"iat": 1730720856,
"nbf": 1730720856,
"exp": 1730726397,
"acr": "1",
"aio": "ATQAy/8YAAAAj3ca5D/znraYUsif7RVc7TmWJPj66tqsUon0oon1xPamN1W7wN070R1JwaCwUQyQ",
"amr": [
"pwd"
],
"appid": "673b5314-a9c8-40ec-beb5-636fa9a781b4",
"appidacr": "0",
"email": "john.doe@flowx.ai",
"extn.businessFilter": [
"docs"
],
"family_name": "Doe",
"given_name": "John",
"groups": [
"ef731a0d-b44f-44da-bd78-67363c901bb1",
"db856713-0dfa-4d3d-aefa-bbb598257084",
"4336202b-6fc4-4132-afab-7f6573993325",
"5dc0b52e-823b-4ce9-b3e4-b3070912a4ef",
"ce006d40-555f-4247-890b-1053fa3cb172",
"291ac248-4e29-4c91-8e1d-19cbeec64eb8",
"b82dc551-f3f0-4d28-aaf0-a0a74fe3b3e3",
"42b39b5f-7545-48be-88d1-6e88716236db",
"cc0f776a-1cb2-4b8c-a472-8e1393764442",
"6eac9487-e04c-41e6-81ce-364f09c22bbf",
"01c30789-6862-4085-b5c4-f0cb02fb41b0",
"75ac188b-61c4-4aa9-ad7e-af1d543e199a",
"e726fda5-79f0-440b-b86c-8a9820d14d2e",
"259980bb-e881-4d93-9912-d2562441a257",
"9146edd4-6194-4487-b524-79956635f514",
"ce046ce2-6ef8-40f2-9f4e-a70f1ca14ecf",
"62d1f9f5-858c-43e2-af92-94bcc575681b",
"69df5ff6-1da9-49d1-9871-b7e62de2b212",
"043d25fc-a507-47ee-83e3-1d31ce0d9b35"
],
"ipaddr": "86.126.6.183",
"name": "John Doe",
"oid": "61159071-3fd6-4373-8aec-77ee58675776",
"preferred_username": "john.doe@flowx.ai",
"rh": "1.AYEAbKw8Z2M9z0CkPwdAjdkQckKVleyLibxCtpIufT-d8oKBAAeBAA.",
"roles": [
"ROLE_DOCUMENT_TEMPLATES_IMPORT",
"FLOWX_ROLE",
"ROLE_TASK_MANAGER_OOO_ADMIN",
"ROLE_ADMIN_MANAGE_CONFIG_IMPORT",
"ROLE_INTEGRATION_SYSTEM_READ",
"ROLE_INTEGRATION_WORKFLOW_READ_RESTRICTED",
"ROLE_ADMIN_MANAGE_PROCESS_ADMIN",
"ROLE_MANAGE_NOTIFICATIONS_ADMIN",
"ROLE_ADMIN_MANAGE_INTEGRATIONS_EDIT",
"ROLE_TASK_MANAGER_HOOKS_IMPORT",
"ROLE_THEMES_READ",
"ROLE_ENGINE_MANAGE_PROCESS_EDIT",
"ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_ADMIN",
"ROLE_AI_OPTIMIZER_EDIT",
"ROLE_ENGINE_MANAGE_INSTANCE_READ",
"ROLE_ADMIN_MANAGE_INTEGRATIONS_ADMIN",
"ROLE_ADMIN_MANAGE_PROCESS_IMPORT",
"ROLE_COPO_TELLER",
"ROLE_CMS_CONTENT_READ",
"ROLE_CMS_CONTENT_IMPORT",
"ROLE_INTEGRATION_WORKFLOW_ADMIN",
"ROLE_AI_ARCHITECT_EDIT",
"ROLE_TASK_MANAGER_HOOKS_READ",
"ROLE_AI_WRITER_EDIT",
"ROLE_TASK_MANAGER_HOOKS_ADMIN",
"ROLE_MEDIA_LIBRARY_EDIT",
"ROLE_CMS_TAXONOMIES_READ",
"ROLE_AI_INSPECTOR_EDIT",
"FLOWX_FRONTOFFICE",
"ROLE_START_EXTERNAL",
"ROLE_DOCUMENT_TEMPLATES_READ",
"ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_IMPORT",
"VIEW_INSTANCES",
"ROLE_ADMIN_MANAGE_INTEGRATIONS_IMPORT",
"ROLE_ADMIN_MANAGE_USERS_READ",
"ROLE_THEMES_ADMIN",
"ROLE_ADMIN_MANAGE_PLATFORM_READ",
"ROLE_TASK_MANAGER_OOO_EDIT",
"ROLE_CMS_TAXONOMIES_EDIT",
"ROLE_THEMES_IMPORT",
"ROLE_AI_DEVELOPER_EDIT",
"ROLE_MANAGE_NOTIFICATIONS_READ",
"ROLE_INTEGRATION_SYSTEM_EDIT",
"ROLE_MANAGE_NOTIFICATIONS_SEND",
"FLOWX_ADMIN",
"ROLE_INTEGRATION_READ",
"FLOWX_BACKOFFICE",
"ROLE_DOCUMENT_TEMPLATES_EDIT",
"ROLE_MEDIA_LIBRARY_IMPORT",
"ROLE_AI_ASSISTANT_READ",
"ROLE_ADMIN_MANAGE_CONFIG_EDIT",
"ROLE_CMS_TAXONOMIES_IMPORT",
"ROLE_ADMIN_MANAGE_CONFIG_ADMIN",
"ROLE_TASK_MANAGER_TASKS_READ",
"ROLE_DOCUMENT_TEMPLATES_ADMIN",
"ROLE_INTEGRATION_WORKFLOW_READ",
"ROLE_COPO_VIEWER",
"ROLE_MEDIA_LIBRARY_ADMIN",
"ROLE_NOTIFICATION_TEMPLATES_EDIT",
"ROLE_ADMIN_MANAGE_PROCESS_READ",
"ROLE_AI_INTEGRATOR_EDIT",
"ROLE_AI_SUPERVISOR_EDIT",
"ROLE_INTEGRATION_WORKFLOW_EDIT",
"ROLE_CMS_CONTENT_ADMIN",
"ROLE_CMS_CONTENT_EDIT",
"FLOWX_SUPERVISOR",
"ROLE_ADMIN_MANAGE_CONFIG_READ",
"ROLE_TASK_MANAGER_OOO_READ",
"ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_EDIT",
"ROLE_INTEGRATION_ADMIN",
"ROLE_NOTIFICATION_TEMPLATES_READ",
"ROLE_AI_AUDITOR_EDIT",
"ROLE_ENGINE_MANAGE_INSTANCE_ADMIN",
"ROLE_INTEGRATION_SYSTEM_ADMIN",
"ROLE_ADMIN_MANAGE_PROCESS_EDIT",
"ROLE_INTEGRATION_EDIT",
"ROLE_AI_DESIGNER_EDIT",
"ROLE_TASK_MANAGER_HOOKS_EDIT",
"ROLE_AI_COMMAND_READ",
"ROLE_CMS_TAXONOMIES_ADMIN",
"ROLE_ADMIN_MANAGE_USERS_ADMIN",
"ROLE_LICENSE_MANAGE_READ",
"ROLE_AI_ANALYST_EDIT",
"ROLE_TASK_MANAGER_VIEWS_ADMIN",
"ROLE_ADMIN_MANAGE_PLATFORM_ADMIN",
"ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_READ",
"ROLE_AI_STRATEGIST_EDIT",
"ROLE_LICENSE_MANAGE_ADMIN",
"ROLE_THEMES_EDIT",
"ROLE_NOTIFICATION_TEMPLATES_ADMIN",
"ROLE_LICENSE_MANAGE_EDIT",
"ROLE_INTEGRATION_IMPORT",
"ROLE_NOTIFICATION_TEMPLATES_IMPORT",
"ROLE_ADMIN_MANAGE_INTEGRATIONS_READ",
"ROLE_ADMIN_MANAGE_USERS_EDIT",
"ROLE_MEDIA_LIBRARY_READ"
],
"scp": "FlowxAI.ReadWrite.All",
"sub": "tMG9A1npM9hK89AV9rdUvTAKVlli3oLkyI1E8F7bV5Y",
"tid": "673cac6c-3d63-40cf-a43f-07408dd91072",
"unique_name": "john.doe@flowx.ai",
"upn": "john.doe@flowx.ai",
"uti": "v3igRE_kEUqZC4nbXII3AA",
"ver": "1.0",
"wids": [
"9b895d92-2cd3-44c7-9d02-a6ac2d5ea5c3",
"fe930be7-5e62-47db-91af-98c3a49a38b1",
"e3973bdf-4987-49ae-837a-ba8e231c7286",
"158c047a-c907-4556-b7ef-446551a6b5f7",
"e8611ab8-c189-46e8-94e1-60213ab1f814",
"b79fbf4d-3ef9-4689-8143-76b194e85509"
]
}
```
## Configure helm charts
This section provides details on configuring Helm charts for FlowX.AI applications, including where to retrieve required values and setting environment variables for different application components.
***
### Where to get the values
* **tenant\_id**: The unique identifier for your Entra ID tenant.

* **client\_id**: The client ID for the specific FlowX.AI application.

* **client\_secret**: The client secret generated during app registration (only visible at creation).

***
### Helm chart values
These configurations are required for different FlowX.AI application components. Substitute `<tenant-id>`, `<client-id>`, and `<client-secret>` with your specific values.
***
#### Designer
For the Designer component, use the following settings:
```yaml
SKIP_ISSUER_CHECK: true
STRICT_DISCOVERY_DOCUMENT_VALIDATION: false
KEYCLOAK_ISSUER: https://login.microsoftonline.com/<tenant-id>/v2.0
KEYCLOAK_CLIENT_ID: <client-id>
KEYCLOAK_SCOPES: "openid profile email offline_access api://rd-e-example-flowx-api/FlowxAI.ReadWrite.All"
KEYCLOAK_REDIRECT_URI: https://flowx.example.az1.cloud.flowxai.dev
```
#### All Java applications
```yaml
SECURITY_TYPE: jwt-public-key
SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_ISSUER_URI: https://sts.windows.net/<tenant-id>/
SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI: https://login.microsoftonline.com/<tenant-id>/discovery/v2.0/keys
```
#### Java applications with a Service Principal
These settings apply to Java applications that require a service principal, such as Admin, Integration Designer, Process Engine, Scheduler Core, and Task Management Plugin.
```yaml
SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_MAINAUTHPROVIDER_TOKEN_URI: https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_ID: <client-id>
SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_SECRET: <client-secret>
SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_SCOPE: api://rd-p-example-flowx-api/.default
```
#### Java applications with access to Microsoft Graph API
The following configuration is required for Java applications that need access to the Microsoft Graph API, such as Admin and Task Management Plugin.
```yaml
OPENID_PROVIDER: entra
OPENID_ENTRA_TENANT_ID: <tenant-id>
OPENID_ENTRA_PRINCIPAL_ID: <principal-id>
```
# Default roles
Source: https://docs.flowx.ai/4.7.x/setup-guides/access-management/default-roles
Below you can find the list of all the default roles that you can add or import into the Identity and Access Management solution to properly manage the access to all the FlowX.AI microservices.
## Default roles
A complete list of all the default roles based on modules (access scope):
| Module | Scopes | Role default value | Microservice |
| ---------------------------------- | ---------------- | ---------------------------------------------------------- | -------------------- |
| manage-platform | read | ROLE\_ADMIN\_MANAGE\_PLATFORM\_READ | Admin |
| manage-platform | admin | ROLE\_ADMIN\_MANAGE\_PLATFORM\_ADMIN | Admin |
| manage-processes | import | ROLE\_ADMIN\_MANAGE\_PROCESS\_IMPORT | Admin |
| manage-processes | read | ROLE\_ADMIN\_MANAGE\_PROCESS\_READ | Admin |
| manage-processes | edit | ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT | Admin |
| manage-processes | admin | ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN | Admin |
| manage-integrations | admin | ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN | Admin |
| manage-integrations | read | ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_READ | Admin |
| manage-integrations | edit | ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_EDIT | Admin |
| manage-integrations | import | ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_IMPORT | Admin |
| manage-configurations | import | ROLE\_ADMIN\_MANAGE\_CONFIG\_IMPORT | Admin |
| manage-configurations | read | ROLE\_ADMIN\_MANAGE\_CONFIG\_READ | Admin |
| manage-configurations | edit | ROLE\_ADMIN\_MANAGE\_CONFIG\_EDIT | Admin |
| manage-configurations | admin | ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN | Admin |
| manage-users | read | ROLE\_ADMIN\_MANAGE\_USERS\_READ | Admin |
| manage-users | edit | ROLE\_ADMIN\_MANAGE\_USERS\_EDIT | Admin |
| manage-users | admin | ROLE\_ADMIN\_MANAGE\_USERS\_ADMIN | Admin |
| manage-processes | edit | ROLE\_ENGINE\_MANAGE\_PROCESS\_EDIT | Engine |
| manage-processes | admin | ROLE\_ENGINE\_MANAGE\_PROCESS\_ADMIN | Engine |
| manage-instances | read | ROLE\_ENGINE\_MANAGE\_INSTANCE\_READ | Engine |
| manage-instances | admin | ROLE\_ENGINE\_MANAGE\_INSTANCE\_ADMIN | Engine |
| manage-licenses | read | ROLE\_LICENSE\_MANAGE\_READ | License |
| manage-licenses | edit | ROLE\_LICENSE\_MANAGE\_EDIT | License |
| manage-licenses | admin | ROLE\_LICENSE\_MANAGE\_ADMIN | License |
| manage-contents | import | ROLE\_CMS\_CONTENT\_IMPORT | CMS |
| manage-contents | read | ROLE\_CMS\_CONTENT\_READ | CMS |
| manage-contents | edit | ROLE\_CMS\_CONTENT\_EDIT | CMS |
| manage-contents | admin | ROLE\_CMS\_CONTENT\_ADMIN | CMS |
| manage-media-library | import | ROLE\_MEDIA\_LIBRARY\_IMPORT | CMS |
| manage-media-library | read | ROLE\_MEDIA\_LIBRARY\_READ | CMS |
| manage-media-library | edit | ROLE\_MEDIA\_LIBRARY\_EDIT | CMS |
| manage-media-library | admin | ROLE\_MEDIA\_LIBRARY\_ADMIN | CMS |
| manage-taxonomies | import | ROLE\_CMS\_TAXONOMIES\_IMPORT | CMS |
| manage-taxonomies | read | ROLE\_CMS\_TAXONOMIES\_READ | CMS |
| manage-taxonomies | edit | ROLE\_CMS\_TAXONOMIES\_EDIT | CMS |
| manage-taxonomies | admin | ROLE\_CMS\_TAXONOMIES\_ADMIN | CMS |
| manage-themes | admin | ROLE\_THEMES\_ADMIN | CMS |
| manage-themes | edit | ROLE\_THEMES\_EDIT | CMS |
| manage-themes | read | ROLE\_THEMES\_READ | CMS |
| manage-themes | import | ROLE\_THEMES\_IMPORT | CMS |
| manage-tasks | read | ROLE\_TASK\_MANAGER\_TASKS\_READ | Task Management |
| manage-hooks | import | ROLE\_TASK\_MANAGER\_HOOKS\_IMPORT | Task Management |
| manage-hooks | read | ROLE\_TASK\_MANAGER\_HOOKS\_READ | Task Management |
| manage-hooks | edit | ROLE\_TASK\_MANAGER\_HOOKS\_EDIT | Task Management |
| manage-hooks | admin | ROLE\_TASK\_MANAGER\_HOOKS\_ADMIN | Task Management |
| manage-process-allocation-settings | import | ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_IMPORT | Task Management |
| manage-process-allocation-settings | read | ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_READ | Task Management |
| manage-process-allocation-settings | edit | ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_EDIT | Task Management |
| manage-process-allocation-settings | admin | ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_ADMIN | Task Management |
| manage-out-of-office-users | import | ROLE\_TASK\_MANAGER\_OOO\_IMPORT | Task Management |
| manage-out-of-office-users | read | ROLE\_TASK\_MANAGER\_OOO\_READ | Task Management |
| manage-out-of-office-users | edit | ROLE\_TASK\_MANAGER\_OOO\_EDIT | Task Management |
| manage-out-of-office-users | admin | ROLE\_TASK\_MANAGER\_OOO\_ADMIN | Task Management |
| manage-views | import | ROLE\_TASK\_MANAGER\_VIEWS\_IMPORT | Task Management |
| manage-views | read | ROLE\_TASK\_MANAGER\_VIEWS\_READ | Task Management |
| manage-views | edit | ROLE\_TASK\_MANAGER\_VIEWS\_EDIT | Task Management |
| manage-views | admin | ROLE\_TASK\_MANAGER\_VIEWS\_ADMIN | Task Management |
| manage-notification-templates | import | ROLE\_NOTIFICATION\_TEMPLATES\_IMPORT | Notifications |
| manage-notification-templates | read | ROLE\_NOTIFICATION\_TEMPLATES\_READ | Notifications |
| manage-notification-templates | edit | ROLE\_NOTIFICATION\_TEMPLATES\_EDIT | Notifications |
| manage-notification-templates | admin | ROLE\_NOTIFICATION\_TEMPLATES\_ADMIN | Notifications |
| manage-notifications | read | ROLE\_MANAGE\_NOTIFICATIONS\_READ | Notifications |
| manage-notifications | send | ROLE\_MANAGE\_NOTIFICATIONS\_SEND | Notifications |
| manage-notifications | admin | ROLE\_MANAGE\_NOTIFICATIONS\_ADMIN | Notifications |
| manage-document-templates | import | ROLE\_DOCUMENT\_TEMPLATES\_IMPORT | Documents |
| manage-document-templates | read | ROLE\_DOCUMENT\_TEMPLATES\_READ | Documents |
| manage-document-templates | edit | ROLE\_DOCUMENT\_TEMPLATES\_EDIT | Documents |
| manage-document-templates | admin | ROLE\_DOCUMENT\_TEMPLATES\_ADMIN | Documents |
| manage-systems | admin | ROLE\_INTEGRATION\_SYSTEM\_ADMIN | Integration Designer |
| manage-systems | import | ROLE\_INTEGRATION\_SYSTEM\_IMPORT | Integration Designer |
| manage-systems | read | ROLE\_INTEGRATION\_SYSTEM\_READ | Integration Designer |
| manage-systems | edit | ROLE\_INTEGRATION\_SYSTEM\_EDIT | Integration Designer |
| manage-workflows | import | ROLE\_INTEGRATION\_WORKFLOW\_IMPORT | Integration Designer |
| manage-workflows | read\_restricted | ROLE\_INTEGRATION\_WORKFLOW\_READ\_RESTRICTED | Integration Designer |
| manage-workflows | read | ROLE\_INTEGRATION\_WORKFLOW\_READ | Integration Designer |
| manage-workflows | edit | ROLE\_INTEGRATION\_WORKFLOW\_EDIT | Integration Designer |
| manage-workflows | admin | ROLE\_INTEGRATION\_WORKFLOW\_ADMIN | Integration Designer |
| manage-applications | read | ROLE\_APPS\_MANAGE\_READ | Application Manager |
| manage-applications | edit | ROLE\_APPS\_MANAGE\_EDIT | Application Manager |
| manage-applications | import | ROLE\_APPS\_MANAGE\_IMPORT | Application Manager |
| manage-applications | admin | ROLE\_APPS\_MANAGE\_ADMIN | Application Manager |
| manage-app-dependencies | read | ROLE\_APP\_DEPENDENCIES\_MANAGE\_READ | Application Manager |
| manage-app-dependencies | edit | ROLE\_APP\_DEPENDENCIES\_MANAGE\_EDIT | Application Manager |
| manage-app-dependencies | admin | ROLE\_APP\_DEPENDENCIES\_MANAGE\_ADMIN | Application Manager |
| manage-builds | read | ROLE\_BUILDS\_MANAGE\_READ | Application Manager |
| manage-builds | edit | ROLE\_BUILDS\_MANAGE\_EDIT | Application Manager |
| manage-builds | import | ROLE\_BUILDS\_MANAGE\_IMPORT | Application Manager |
| manage-builds | admin | ROLE\_BUILDS\_MANAGE\_ADMIN | Application Manager |
| manage-active-policy | read | ROLE\_ACTIVE\_POLICY\_MANAGE\_READ | Application Manager |
| manage-active-policy | edit | ROLE\_ACTIVE\_POLICY\_MANAGE\_EDIT | Application Manager |
| manage-app-configs | read | ROLE\_APP\_CONFIG\_MANAGE\_READ | Application Manager |
| manage-app-configs | edit | ROLE\_APP\_CONFIG\_MANAGE\_EDIT | Application Manager |
| manage-app-configs | import | ROLE\_APP\_CONFIG\_MANAGE\_IMPORT | Application Manager |
| manage-app-configs | admin | ROLE\_APP\_CONFIG\_MANAGE\_ADMIN | Application Manager |
| manage-app-configs-overrides | read | ROLE\_APP\_CONFIG\_OVERRIDES\_MANAGE\_READ | Application Manager |
| manage-app-configs-overrides | import | ROLE\_APP\_CONFIG\_OVERRIDES\_MANAGE\_IMPORT | Application Manager |
| manage-app-configs-overrides | edit | ROLE\_APP\_CONFIG\_OVERRIDES\_MANAGE\_EDIT | Application Manager |
| manage-app-configs-overrides | admin | ROLE\_APP\_CONFIG\_OVERRIDES\_MANAGE\_ADMIN | Application Manager |
## Importing roles
You can import a super admin group and its default roles into Keycloak using the `importUsers.py` script.
Edit the following script parameters before running it:
* `baseAuthUrl`
* `username`
* `password`
* `realm`
* the name of the super admin group
The script requires the `requests` package, which can be installed with:
```shell
pip3 install requests
```
Run the script with:
```shell
python3 importUsers.py
```
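The `importUsers.py` script itself ships with the platform and is not reproduced here; as an illustration only, the parameters listed above typically feed a Keycloak admin token request of the following shape (the `admin-cli` client is the conventional Keycloak admin client and an assumption here, not something the script is confirmed to use):

```python
def keycloak_token_request(base_auth_url: str, realm: str, username: str, password: str) -> dict:
    """Compose the password-grant request used to obtain a Keycloak admin token."""
    return {
        "url": f"{base_auth_url}/realms/{realm}/protocol/openid-connect/token",
        "data": {
            "grant_type": "password",
            "client_id": "admin-cli",  # assumed admin client, adjust for your realm
            "username": username,
            "password": password,
        },
    }

req = keycloak_token_request("https://auth.example.com/auth", "master", "admin", "<password>")
print(req["url"])
```

The returned access token is then sent as a `Authorization: Bearer` header on the group- and role-creation calls.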
# FlowX Admin setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/admin-setup-guide
Complete configuration reference for the FlowX Admin microservice, including logging, databases, Kafka, and various subsystems.
This guide provides a comprehensive reference for configuring the FlowX Admin microservice using environment variables and configuration files.
## Infrastructure prerequisites
Before setting up the Admin microservice, ensure the following components are properly set up:
* **Database Instance**: The Admin microservice connects to the same database as the FlowX.AI Engine.
* **MongoDB**: For additional data management.
* **Redis**: For caching and transient data storage.
* **Kafka**: For audit logs, events, and messaging (if using FlowX.AI Audit functionality).
## Core configuration
### Server configuration
| Environment Variable | Description | Default Value |
| ------------------------------------------- | -------------------------------------------------- | ------------- |
| `SERVER_PORT` | Port on which the Admin service will run | `8080` |
| `SPRING_APPLICATION_NAME` | Name of the application used for service discovery | `admin` |
| `SPRING_JACKSON_SERIALIZATION_INDENTOUTPUT` | Enable indented JSON output | `true` |
## Database configuration
The Admin microservice connects to the same PostgreSQL or Oracle database as the FlowX.AI Engine for storing process definitions.
| Environment Variable | Description | Example Value |
| ---------------------------- | -------------------------------- | ---------------------------------------- |
| `SPRING_DATASOURCE_URL` | JDBC URL for database connection | `jdbc:postgresql://localhost:5432/flowx` |
| `SPRING_DATASOURCE_USERNAME` | Database username | `postgres` |
| `SPRING_DATASOURCE_PASSWORD` | Database password | `[your-secure-password]` |
Ensure that the username, password, connection URL, and database name are configured correctly; otherwise, the service will fail at startup.
The database schema is managed by a [Liquibase](https://www.liquibase.org/) script provided with the Engine.
## MongoDB configuration
The Admin microservice also connects to a MongoDB database instance for additional data management.
| Environment Variable | Description | Example Value |
| ---------------------------------------- | ------------------------------------------ | ------------------------------------------------------------------------------------- |
| `DB_USERNAME` | MongoDB username | `admin` |
| `DB_PASSWORD` | MongoDB password | `[your-secure-password]` |
| `DB_NAME` | MongoDB database name | `admin` |
| `SPRING_DATA_MONGODB_URI` | MongoDB connection URI | `mongodb://${DB_USERNAME}:${DB_PASSWORD}@localhost:27017/${DB_NAME}?retryWrites=true` |
| `SPRING_DATA_MONGODB_UUIDREPRESENTATION` | UUID representation format | `standard` |
| `SPRING_DATA_MONGODB_STORAGE` | Storage type (Azure environments) | `mongodb` or `cosmosdb` |
| `MONGOCK_CHANGELOGSSCANPACKAGE_0_` | Mongock changelog scan package | `ai.flowx.admin.data.model.config.mongock` |
| `MONGOCK_TRANSACTIONENABLED` | Enable transactions for Mongock operations | `false` |
Ensure that the MongoDB configuration is compatible with the same database requirements as the FlowX.AI Engine, especially if sharing database instances.
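The connection URI is composed from the other `DB_*` variables via Spring's `${VAR}` placeholder substitution; a quick sketch of that resolution (the password is an illustrative value, and the host/port are the defaults shown above):

```python
import os

# Illustrative values; in a real deployment these come from the environment or a secret
os.environ.update({
    "DB_USERNAME": "admin",
    "DB_PASSWORD": "secret",  # hypothetical value
    "DB_NAME": "admin",
})

# Mirrors the ${VAR} substitution Spring applies to SPRING_DATA_MONGODB_URI
uri_template = "mongodb://{DB_USERNAME}:{DB_PASSWORD}@localhost:27017/{DB_NAME}?retryWrites=true"
uri = uri_template.format(**{k: os.environ[k] for k in ("DB_USERNAME", "DB_PASSWORD", "DB_NAME")})
print(uri)
```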
## Redis and caching configuration
Redis is used for caching and storing transient data.
| Environment Variable | Description | Default Value | Status |
| ---------------------------- | --------------------------------- | ------------------------ | ---------------------- |
| `SPRING_DATA_REDIS_HOST` | Redis server hostname | `localhost` | **Recommended** |
| `SPRING_DATA_REDIS_PORT` | Redis server port | `6379` | **Recommended** |
| `SPRING_DATA_REDIS_PASSWORD` | Redis server password | `[your-secure-password]` | **Recommended** |
| `SPRING_REDIS_HOST` | Redis server hostname | `localhost` | **Deprecated** |
| `SPRING_REDIS_PORT` | Redis server port | `6379` | **Deprecated** |
| `SPRING_REDIS_PASSWORD` | Redis server password | `defaultpassword` | **Deprecated** |
| `SPRING_REDIS_TTL` | Default Redis TTL in milliseconds | `5000000` | Used in other settings |
The `SPRING_REDIS_*` variables are deprecated and will be removed in a future FlowX version. Please use the corresponding `SPRING_DATA_REDIS_*` variables instead.
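Migrating a deployment off the deprecated variables is a one-to-one rename (values below are illustrative):

```yaml
# Before (deprecated)
SPRING_REDIS_HOST: redis-master
SPRING_REDIS_PORT: 6379
SPRING_REDIS_PASSWORD: <your-secure-password>

# After
SPRING_DATA_REDIS_HOST: redis-master
SPRING_DATA_REDIS_PORT: 6379
SPRING_DATA_REDIS_PASSWORD: <your-secure-password>
```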
## Kafka configuration
The Admin microservice uses Kafka for sending audit logs, managing scheduled timer events, platform component versions, and start timer event updates.
### General Kafka settings
| Environment Variable | Description | Default Value |
| -------------------------------- | ----------------------------- | ----------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Kafka broker addresses | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol | `PLAINTEXT` |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size in bytes | `52428800` (50MB) |
### Kafka producer configuration
| Environment Variable | Description | Default Value |
| --------------------------------------- | ---------------------- | ------------------------------------------------------------- |
| `SPRING_KAFKA_PRODUCER_KEYSERIALIZER` | Key serializer class | `org.apache.kafka.common.serialization.StringSerializer` |
| `SPRING_KAFKA_PRODUCER_VALUESERIALIZER` | Value serializer class | `org.springframework.kafka.support.serializer.JsonSerializer` |
| `SPRING_KAFKA_PRODUCER_MAXREQUESTSIZE` | Maximum request size | `52428800` (50MB) |
### Kafka consumer configuration
| Environment Variable | Description | Default Value |
| -------------------------------------------- | ------------------------------------------ | -------------------------------- |
| `KAFKA_CONSUMER_GROUPID_GENERICPROCESSING` | Generic processing consumer group | `genericProcessingGroup` |
| `KAFKA_CONSUMER_THREADS_GENERICPROCESSING` | Generic processing threads | `6` |
| `KAFKA_CONSUMER_GROUPID_CONTENTTRANSLATE` | Content translation consumer group | `cms-consumer-preview` |
| `KAFKA_CONSUMER_GROUPID_RESUSAGEVALIDATION` | Resource usage validation consumer group | `cms-res-usage-validation-group` |
| `KAFKA_CONSUMER_THREADS_CONTENTTRANSLATE` | Content translation consumer threads | `1` |
| `KAFKA_CONSUMER_THREADS_RESUSAGEVALIDATION` | Resource usage validation consumer threads | `2` |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Auth exception retry interval (seconds) | `10` |
| `SPRING_KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` | Auth exception retry interval (seconds) | `10` |
### Topic naming configuration
| Environment Variable | Description | Default Value |
| -------------------------------- | ------------------------------------ | ---------------------------------------------------------------- |
| `DOT` | Reference to the primary separator | `${kafka.topic.naming.separator}` |
| `DASH` | Reference to the secondary separator | `${kafka.topic.naming.separator2}` |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Base package name | `ai${dot}flowx${dot}` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment name | `dev${dot}` |
| `KAFKA_TOPIC_NAMING_VERSION` | Topic version | `${dot}v1` |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Primary separator | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Secondary separator | `-` |
| `KAFKA_TOPIC_NAMING_PREFIX` | Combined prefix | `${kafka.topic.naming.package}${kafka.topic.naming.environment}` |
| `KAFKA_TOPIC_NAMING_SUFFIX` | Combined suffix | `${kafka.topic.naming.version}` |
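The topic names in the tables below are built by recursively substituting these `${...}` placeholders. A minimal sketch of that resolution, using the default values above, shows how the audit topic's pattern expands to its example value:

```python
import re

# Default values from the topic naming configuration table
props = {
    "kafka.topic.naming.separator": ".",
    "kafka.topic.naming.separator2": "-",
    "dot": "${kafka.topic.naming.separator}",
    "dash": "${kafka.topic.naming.separator2}",
    "kafka.topic.naming.package": "ai${dot}flowx${dot}",
    "kafka.topic.naming.environment": "dev${dot}",
    "kafka.topic.naming.version": "${dot}v1",
    "kafka.topic.naming.prefix": "${kafka.topic.naming.package}${kafka.topic.naming.environment}",
    "kafka.topic.naming.suffix": "${kafka.topic.naming.version}",
}

def resolve(value: str) -> str:
    """Substitute ${...} references until none remain."""
    pattern = re.compile(r"\$\{([^}]+)\}")
    while pattern.search(value):
        value = pattern.sub(lambda m: props[m.group(1)], value)
    return value

audit_out = resolve(
    "${kafka.topic.naming.prefix}core${dot}trigger${dot}save${dot}audit${kafka.topic.naming.suffix}"
)
print(audit_out)  # ai.flowx.dev.core.trigger.save.audit.v1
```

Changing `KAFKA_TOPIC_NAMING_ENVIRONMENT` (for example to `qa${dot}`) renames every derived topic consistently.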
### Kafka topics configuration
#### Audit topics
| Environment Variable | Description | Pattern | Example Value |
| ----------------------- | ------------------ | ------------------------------------------------------------------------------------------------ | ----------------------------------------- |
| `KAFKA_TOPIC_AUDIT_OUT` | Audit output topic | `${kafka.topic.naming.prefix}core${dot}trigger${dot}save${dot}audit${kafka.topic.naming.suffix}` | `ai.flowx.dev.core.trigger.save.audit.v1` |
#### Platform topics
| Environment Variable | Description | Pattern | Example Value |
| -------------------------------------------- | --------------------------------- | -------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------- |
| `KAFKA_TOPIC_PLATFORM_COMPONENTSVERSIONS_IN` | Components versions caching topic | `${kafka.topic.naming.prefix}core${dot}trigger${dot}platform${dot}versions${dot}caching${kafka.topic.naming.suffix}` | `ai.flowx.dev.core.trigger.platform.versions.caching.v1` |
#### Events gateway topics
| Environment Variable | Description | Pattern | Example Value |
| --------------------------------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------- |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE` | Commands message output topic | `${kafka.topic.naming.prefix}eventsgateway${dot}process${dot}commands${dot}message${kafka.topic.naming.suffix}` | `ai.flowx.dev.eventsgateway.process.commands.message.v1` |
#### Build topics
| Environment Variable | Description | Pattern | Example Value |
| ------------------------------------------------ | -------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |
| `KAFKA_TOPIC_BUILD_STARTTIMEREVENTS_OUT_UPDATES` | Start timer events updates topic | `${kafka.topic.naming.prefix}build${dot}start${dash}timer${dash}events${dot}updates${dot}in${kafka.topic.naming.suffix}` | `ai.flowx.dev.build.start-timer-events.updates.in.v1` |
#### Resource topics
| Environment Variable | Description | Pattern | Example Value |
| ----------------------------------------------- | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------- |
| `KAFKA_TOPIC_RESOURCESUSAGES_REFRESH` | Resources usages refresh topic | `${kafka.topic.naming.prefix}application${dash}version${dot}resources${dash}usages${dot}refresh${kafka.topic.naming.suffix}` | `ai.flowx.dev.application-version.resources-usages.refresh.v1` |
| `KAFKA_TOPIC_REQUEST_CONTENT_IN` | Topic for content retrieval requests | `${kafka.topic.naming.prefix}plugin${dot}cms${dot}trigger${dot}retrieve${dot}content${kafka.topic.naming.suffix}` | `ai.flowx.dev.plugin.cms.trigger.retrieve.content.v1` |
| `KAFKA_TOPIC_REQUEST_CONTENT_OUT` | Topic for content retrieval results | `${kafka.topic.naming.prefix}${kafka.topic.naming.engineReceivePattern}plugin${dot}cms${dot}retrieve${dot}content${dot}results${kafka.topic.naming.suffix}` | `ai.flowx.dev.engine.receive.plugin.cms.retrieve.content.results.v1` |
| `KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION` | Topic for resource usage validation | `${kafka.topic.naming.prefix}application${dash}version${dot}resources${dash}usages${dot}sub${dash}res${dash}validation${dot}cms${kafka.topic.naming.suffix}` | `ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1` |
### OAuth authentication for Kafka
When using the `kafka-auth` profile, the following variables configure OAuth for Kafka:
| Environment Variable | Description | Default Value |
| -------------------------------- | ------------------------ | ---------------------- |
| `KAFKA_OAUTH_CLIENTID` | OAuth client ID | `kafka` |
| `KAFKA_OAUTH_CLIENTSECRET` | OAuth client secret | `kafka-secret` |
| `KAFKA_OAUTH_TOKEN_ENDPOINT_URI` | OAuth token endpoint URI | `kafka.auth.localhost` |
When using the `kafka-auth` profile, the security protocol will automatically be set to `SASL_PLAINTEXT` and the SASL mechanism will be set to `OAUTHBEARER`.
## Logging configuration
The FlowX Admin microservice provides granular control over logging levels for different components:
| Environment Variable | Description | Default Value |
| -------------------- | ------------------------------------------- | ------------- |
| `LOGGING_LEVEL_ROOT` | Log level for root Spring Boot microservice | `INFO` |
| `LOGGING_LEVEL_APP` | Log level for application-specific code | `DEBUG` |
## Localization settings
| Environment Variable | Description | Default Value |
| ------------------------------ | ---------------------------------- | ------------- |
| `APPLICATION_DEFAULTLOCALE` | Default locale for the application | `en` |
| `APPLICATION_SUPPORTEDLOCALES` | List of supported locales | `en, ro` |
## Health monitoring
| Environment Variable | Description | Default Value |
| ---------------------------------------------- | ---------------------------------------- | --------------------------------------- |
| `MANAGEMENT_HEALTH_DB_ENABLED` | Enable database health checks | `true` |
| `MANAGEMENT_HEALTH_KAFKA_ENABLED` | Enable Kafka health checks | `true` |
| `MANAGEMENT_SERVER_ADDRESS` | Management server bind address | `0.0.0.0` |
| `MANAGEMENT_SERVER_PORT` | Management server port | `8081` |
| `MANAGEMENT_SERVER_BASEPATH` | Base path for management endpoints | `/manage` |
| `MANAGEMENT_SECURITY_ENABLED` | Enable security for management endpoints | `false` |
| `MANAGEMENT_ENDPOINTS_WEB_BASEPATH` | Base path for actuator endpoints | `/actuator` |
| `MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE` | Endpoints to expose | `health,info,metrics,metric,prometheus` |
| `MANAGEMENT_ENDPOINT_HEALTH_PROBES_ENABLED` | Enable Kubernetes probes | `true` |
| `MANAGEMENT_ENDPOINT_HEALTH_SHOWDETAILS` | Show health check details | `always` |
| `MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED` | Enable Prometheus metrics export | `false` |
### Platform health configuration
| Environment Variable | Description | Default Value |
| ----------------------------------------- | --------------------------------------------- | --------------------------------------- |
| `FLOWX_PLATFORMHEALTH_NAMESPACE` | Kubernetes namespace for health checks | `flowx` |
| `FLOWX_PLATFORMHEALTH_MANAGEMENTBASEPATH` | Base path for management endpoints | `${management.server.base-path}` |
| `FLOWX_PLATFORMHEALTH_ACTUATORBASEPATH` | Base path for actuator endpoints | `${management.endpoints.web.base-path}` |
| `FLOWX_PLATFORMHEALTH_ANNOTATIONNAME` | Kubernetes annotation name for health checks | `flowx.ai/health` |
| `FLOWX_PLATFORMHEALTH_ANNOTATIONVALUE` | Kubernetes annotation value for health checks | `true` |
## Multi-edit and undo/redo configuration
| Environment Variable | Description | Default Value |
| --------------------------------------- | ----------------------------------------------- | ------------- |
| `FLOWX_MULTIEDIT_TTL` | Time-to-live for multi-edit sessions in seconds | `45` |
| `FLOWX_UNDOREDO_TTL` | Time-to-live for undo/redo actions in seconds | `86400` |
| `FLOWX_UNDOREDO_CLEANUP_CRONEXPRESSION` | Cron expression for undo/redo cleanup (daily at 2 AM) | `0 0 2 * * ?` |
| `FLOWX_UNDOREDO_CLEANUP_DAYS` | Days to keep deleted undo/redo items | `2` |
## Resources usage configuration
| Environment Variable | Description | Default Value |
| ------------------------------------------------------------------------- | ----------------------------------------------------- | ----------------------------------------- |
| `FLOWX_LIB_RESOURCESUSAGES_ENABLED` | Enable resources usage tracking | `true` |
| `FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_ENABLED` | Enable listener for resource usage refreshes | `true` |
| `FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_COLLECTOR_THREADCOUNT` | Thread count for resource usage collector | `5` |
| `FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_COLLECTOR_MAXBATCHSIZE` | Maximum batch size for resource usage collection | `1000` |
| `FLOWX_LIB_RESOURCESUSAGES_KAFKA_CONSUMER_GROUPID_RESOURCESUSAGESREFRESH` | Consumer group ID for resource usage refresh | `adminResourcesUsagesRefreshGroup` |
| `FLOWX_LIB_RESOURCESUSAGES_KAFKA_CONSUMER_THREADS_RESOURCESUSAGESREFRESH` | Number of consumer threads for resource usage refresh | `3` |
| `FLOWX_LIB_RESOURCESUSAGES_KAFKA_TOPIC_RESOURCE_USAGES_REFRESH` | Kafka topic for resource usage refresh | `${kafka.topic.resources-usages.refresh}` |
| `FLOWX_LIB_RESOURCESUSAGES_KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Retry interval in seconds after auth exceptions | `3` |
## Authentication and authorization configuration
The FlowX Admin microservice supports authentication and authorization through OpenID Connect (with Keycloak as the default provider) and allows detailed role-based access control.
### OpenID Connect configuration
| Environment Variable | Description | Default Value |
| ------------------------------------- | ----------------------------- | ------------- |
| `SECURITY_TYPE` | Security type | `oauth2` |
| `SECURITY_OAUTH2CLIENT` | Enable OAuth2 client | `enabled` |
| `SECURITY_OAUTH2_BASESERVERURL` | Base URL of the OAuth2 server | |
| `SECURITY_OAUTH2_REALM` | OAuth2 realm name | |
| `SECURITY_OAUTH2_CLIENT_CLIENTID` | OAuth2 client ID | |
| `SECURITY_OAUTH2_CLIENT_CLIENTSECRET` | OAuth2 client secret | |
### Service account configuration
The following service account configuration is deprecated but still supported for backward compatibility.
| Environment Variable | Description | Default Value |
| ---------------------------------------------------- | ----------------------------- | ------------------------------------- |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTID` | Service account client ID | `flowx-${SPRING_APPLICATION_NAME}-sa` |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTSECRET` | Service account client secret | `client-secret` |
### Spring Security OAuth2 client configuration
| Environment Variable | Description | Default Value |
| -------------------------------------------------------------------------------------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------- |
| `SPRING_SECURITY_OAUTH2_RESOURCESERVER_OPAQUETOKEN_INTROSPECTIONURI` | Token introspection URI | `${SECURITY_OAUTH2_BASESERVERURL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token/introspect` |
| `SPRING_SECURITY_OAUTH2_RESOURCESERVER_OPAQUETOKEN_CLIENTID` | Resource server client ID | `${SECURITY_OAUTH2_CLIENT_CLIENTID}` |
| `SPRING_SECURITY_OAUTH2_RESOURCESERVER_OPAQUETOKEN_CLIENTSECRET` | Resource server client secret | `${SECURITY_OAUTH2_CLIENT_CLIENTSECRET}` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_PROVIDER` | Identity provider name | `mainAuthProvider` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTNAME` | Client name | `mainIdentity` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTID` | Client ID | `${SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTID}` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET` | Client secret | `${SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTSECRET}` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_AUTHORIZATIONGRANTTYPE` | Authorization grant type | `client_credentials` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_AUTHENTICATION_METHOD` | Client authentication method | `client_secret_post` |
| `SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_MAINAUTHPROVIDER_TOKENURI` | Provider token URI | `${SECURITY_OAUTH2_BASESERVERURL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token` |
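For a Keycloak-backed setup, the introspection and token URIs above resolve from the base server URL and realm as follows (both values here are hypothetical examples, not defaults):

```python
# Hypothetical values for SECURITY_OAUTH2_BASESERVERURL and SECURITY_OAUTH2_REALM
base_server_url = "https://auth.example.com/auth"
realm = "flowx"

# Mirrors the ${...} composition of the defaults in the table above
introspection_uri = f"{base_server_url}/realms/{realm}/protocol/openid-connect/token/introspect"
token_uri = f"{base_server_url}/realms/{realm}/protocol/openid-connect/token"

print(introspection_uri)
print(token_uri)
```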
### Identity provider configuration
| Environment Variable | Description | Default Value |
| ----------------------------- | ------------------------------------ | ------------------------------------------------- |
| `OPENID_PROVIDER` | OpenID provider type | `keycloak` (possible values: `keycloak`, `entra`) |
| `FLOWX_AUTHENTICATE_CLIENTID` | Client ID for authentication service | `flowx-platform-authenticate` |
| `FLOWX_PROCESS_DEFAULTROLES` | Default roles for processes | `FLOWX_ROLE` |
#### Keycloak configuration
| Environment Variable | Description | Default Value |
| -------------------------------------- | ---------------------- | ------------------------------------------------------- |
| `OPENID_KEYCLOAK_BASE_SERVER_URL` | Keycloak server URL | `${SECURITY_OAUTH2_BASESERVERURL}` |
| `OPENID_KEYCLOAK_REALM` | Keycloak realm | `${SECURITY_OAUTH2_REALM}` |
| `OPENID_KEYCLOAK_CLIENT_CLIENT_ID` | Keycloak client ID | `${SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTID}` |
| `OPENID_KEYCLOAK_CLIENT_CLIENT_SECRET` | Keycloak client secret | `${SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTSECRET}` |
#### Microsoft Entra ID configuration
| Environment Variable | Description | Default Value |
| ---------------------------- | ----------------------------- | ------------------------------------------------------------------------- |
| `OPENID_ENTRA_GRAPH_SCOPE` | Microsoft Graph API scope | `https://graph.microsoft.com/.default` |
| `OPENID_ENTRA_TENANT_ID` | Microsoft Entra tenant ID | |
| `OPENID_ENTRA_CLIENT_ID` | Microsoft Entra client ID | `${SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTID}` |
| `OPENID_ENTRA_CLIENT_SECRET` | Microsoft Entra client secret | `${SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET}` |
| `OPENID_ENTRA_PRINCIPAL_ID` | Microsoft Entra principal ID | |
The role-based access control is configured in the application YAML and grants specific permissions for platform management, user management, process management, integrations management, and configuration management.
In production environments, never use the default service account credentials. Always configure secure, environment-specific credentials for authentication.
Sensitive information such as passwords and client secrets should be managed securely using environment variables or a secrets management solution in production environments.
# FlowX Advancing Controller setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/advancing-controller-setup-guide
This guide provides step-by-step instructions to help you configure and deploy the Advancing Controller effectively.
## Infrastructure prerequisites
Before setting up the Advancing Controller, ensure the following components are properly set up:
* **FlowX.AI Engine Deployment**: The Advancing Controller depends on the FlowX Engine and must be deployed in the same environment. Refer to the FlowX Engine [**setup guide**](./flowx-engine-setup-guide/engine-setup) for more information on setting up the Engine.
* **Database Instance**: The Advancing Controller uses a PostgreSQL or OracleDB instance as its database.
## Dependencies
Ensure the following dependencies are met:
* [Database](#database-configuration): Properly configured database instance.
* [Datasource](#configuring-datasource): Configuration details for connecting to the database.
* [FlowX.AI Engine](./flowx-engine-setup-guide/engine-setup): Must be set up and running. Refer to the FlowX Engine setup guide.
### Database compatibility
The Advancing Controller supports both PostgreSQL and OracleDB. However, the FlowX.AI Engine and the Advancing Controller must use the same database type at any given time. The FlowX.AI Engine employs two databases: one shared with the FlowX.AI Admin microservice for process metadata and process instances, and another dedicated to advancing.
Mixing PostgreSQL and OracleDB is not supported; both databases must be of the same type.
## Database configuration
### PostgreSQL
A basic PostgreSQL configuration for Advancing:
```yaml
postgresql:
  enabled: true
  postgresqlUsername: "postgres"
  postgresqlPassword: ""
  postgresqlDatabase: "advancing"
  existingSecret: "postgresql-generic"
  postgresqlMaxConnections: 200
  persistence:
    enabled: true
    storageClass: standard-rwo
    size: 20Gi
  resources:
    limits:
      cpu: 1000m
      memory: 1024Mi
    requests:
      memory: 256Mi
      cpu: 100m
  metrics:
    enabled: true
    serviceMonitor:
      enabled: false
    prometheusRule:
      enabled: false
  primary:
    nodeSelector:
      preemptible: "false"
```
If a parallel advancing configuration already exists, you must reset the 'advancing' database by executing the SQL command `DROP DATABASE advancing;`. Once the database has been dropped, the Liquibase script will automatically recreate it.
## Configuration
The Advancing Controller uses a PostgreSQL or an Oracle database as a dependency.
* Ensure that the user, password, connection link, and database name are correctly configured. If these details are not configured correctly, errors will occur at startup.
* The datasource is configured automatically via a Liquibase script inside the engine. All updates will include migration scripts.
### Configuring datasource
If you need to change the datasource configuration details, use the following environment variables:
| Variable | Description | Example Value |
| ------------------------------------------------ | --------------------------------------------------------- | --------------------------------------------------- |
| `SPRING_DATASOURCE_URL` | JDBC URL for database connection | `jdbc:postgresql://jx-onboardingdb:5432/onboarding` |
| `SPRING_DATASOURCE_DRIVERCLASSNAME` | JDBC driver class name | `org.postgresql.Driver` |
| `SPRING_DATASOURCE_USERNAME` | Database username | `postgres` |
| `SPRING_DATASOURCE_PASSWORD` | Database password | `[your-secure-password]` |
| `SPRING_JPA_DATABASE` | Database type (accepted values: `oracle` or `postgresql`) | `postgresql` |
| `SPRING_JPA_SHOWSQL` | Toggle SQL query logging | `false` |
| `SPRING_JPA_PROPERTIES_HIBERNATE_DEFAULT_SCHEMA` | Default database schema (❗️only for Oracle DBs) | `public` |
| `SPRING_LIQUIBASE_CHANGELOG` | Path to Liquibase changelog for database migrations | `classpath:config/liquibase/master.xml` |
It's important to keep in mind that the Advancing Controller is tightly integrated with the FlowX.AI Engine. Therefore, it is crucial to ensure that both the Engine and the Advancing Controller are configured correctly and are in sync.
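Putting the variables above together, a Kubernetes-style environment block for a PostgreSQL deployment might look like the following sketch. All hostnames, database names, and credentials are illustrative placeholders and must match your Engine's advancing database:

```yaml
# Hypothetical environment block for the Advancing Controller container.
# Host, database name, and credentials are placeholders.
env:
  SPRING_DATASOURCE_URL: "jdbc:postgresql://postgresql:5432/advancing"
  SPRING_DATASOURCE_DRIVERCLASSNAME: "org.postgresql.Driver"
  SPRING_DATASOURCE_USERNAME: "postgres"
  SPRING_DATASOURCE_PASSWORD: "[your-secure-password]"
  SPRING_JPA_DATABASE: "postgresql"
  SPRING_JPA_SHOWSQL: "false"
```

In production, inject `SPRING_DATASOURCE_PASSWORD` from a Kubernetes secret rather than setting it inline.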
## Health monitoring
| Variable | Description | Example Value |
| ------------------------------ | ----------------------------- | ------------- |
| `MANAGEMENT_HEALTH_DB_ENABLED` | Enable database health checks | `true` |
# Application Manager setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/application-manager
The Application Manager is a backend microservice for managing FlowX applications, libraries, versions, manifests, configurations, and builds. This guide provides detailed instructions for setting up the service and configuring its components through environment variables.
The Application Manager is a backend microservice in FlowX.AI that:
✅ Manages FlowX applications, versions, manifests, and configurations.\
✅ Acts as a proxy for front-end resource requests.
The **Application Manager** and [**Runtime Manager**](./runtime-manager) share the same container image and Helm chart. Check the **Deployment Guidelines** in the release notes for version compatibility.
## Infrastructure prerequisites
Before you start setting up the Application Manager service, ensure the following infrastructure components are in place:
| Component | Version | Purpose |
| ------------- | ------- | ------------------------------------- |
| PostgreSQL | 13+ | Storing application data |
| MongoDB | 4.4+ | Managing runtime builds |
| Redis | 6.0+ | Caching needs |
| Kafka | 2.8+ | Event-driven communication |
| OAuth2 Server | - | Authentication (Keycloak recommended) |
Ensure that the database for storing application data is properly set up and configured before starting the service.
## Dependencies
The Application Manager relies on other FlowX services and components to function properly:
* [**Database configuration**](#database-configuration): For storing application details, manifests, and configurations.
* [**Authorization & Access Management**](#authentication-configuration): For securing access to resources and managing roles.
* [**Kafka Event Bus**](#kafka-configuration): For enabling event-driven operations.
* [**Resource Proxy**](#resource-proxy-configuration): To forward resource-related requests to appropriate services.
## Core configuration environment variables
### Basic service configuration
| Environment Variable | Description | Example Value |
| ---------------------------- | ----------------------------- | --------------------------- |
| `CONFIG_PROFILE` | Spring configuration profiles | `k8stemplate_v2,kafka-auth` |
| `MULTIPART_MAX_FILE_SIZE` | Maximum file upload size | `25MB` |
| `MULTIPART_MAX_REQUEST_SIZE` | Maximum request size | `25MB` |
| `LOGGING_CONFIG_FILE` | Logging configuration file | `logback-spring.xml` |
### Database configuration
#### PostgreSQL configuration
| Environment Variable | Description | Example Value |
| ----------------------------------- | ------------------- | ----------------------------------------------- |
| `SPRING_DATASOURCE_URL` | PostgreSQL JDBC URL | `jdbc:postgresql://postgresql:5432/app_manager` |
| `SPRING_DATASOURCE_USERNAME` | Database username | `flowx` |
| `SPRING_DATASOURCE_PASSWORD` | Database password | `password` |
| `SPRING_DATASOURCE_DRIVERCLASSNAME` | JDBC driver class | `org.postgresql.Driver` |
#### MongoDB configuration
The Application Manager requires MongoDB to store runtime build information. Use the following environment variables for configuration:
| Environment Variable | Description | Example Value |
| ----------------------------- | -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `SPRING_DATA_MONGODB_URI` | MongoDB connection URI | `mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-0.mongodb-headless,mongodb-1.mongodb-headless,mongodb-arbiter-0.mongodb-arbiter-headless:27017/app-runtime?retryWrites=false` |
| `DB_USERNAME` | MongoDB username | `app-runtime` |
| `DB_PASSWORD` | MongoDB password | `password` |
| `SPRING_DATA_MONGODB_STORAGE` | Storage type (Azure environments only) | `mongodb` (alternative: `cosmosdb`) |
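As an illustration, the PostgreSQL and MongoDB settings above could be combined in an environment block like this. Hostnames and credentials are placeholders, not defaults shipped with the platform:

```yaml
# Illustrative datastore settings for the Application Manager.
# Hosts, database names, and credentials are placeholders.
env:
  SPRING_DATASOURCE_URL: "jdbc:postgresql://postgresql:5432/app_manager"
  SPRING_DATASOURCE_USERNAME: "flowx"
  SPRING_DATASOURCE_PASSWORD: "[db-password]"
  DB_USERNAME: "app-runtime"
  DB_PASSWORD: "[mongo-password]"
  SPRING_DATA_MONGODB_URI: "mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-0.mongodb-headless:27017/app-runtime?retryWrites=false"
```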
### Redis configuration
If caching is required, configure Redis using the following environment variables:
| Environment Variable | Description | Example Value |
| ---------------------------- | --------------------------------- | -------------- |
| `SPRING_DATA_REDIS_HOST` | Redis server hostname | `redis-master` |
| `SPRING_DATA_REDIS_PASSWORD` | Redis password | `password` |
| `SPRING_DATA_REDIS_PORT` | Redis server port | `6379` |
| `SPRING_REDIS_TTL` | Default Redis TTL in milliseconds | `5000000` |
### Kafka configuration
#### Kafka connection and security variables
| Environment Variable | Description | Example Value |
| -------------------------------- | ---------------------- | ---------------------------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Kafka broker addresses | `kafka-flowx-kafka-bootstrap:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol | `PLAINTEXT` |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size | `52428800` (50MB) |
| `FLOWX_KAFKA_PAYLOADSIZELIMIT` | Payload size limit | `512000` (500KB) |
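For a cluster with no broker authentication, a minimal connection sketch based on the variables above might be (broker address is a placeholder):

```yaml
# Illustrative Kafka connection for the Application Manager over PLAINTEXT.
env:
  SPRING_KAFKA_BOOTSTRAPSERVERS: "kafka-flowx-kafka-bootstrap:9092"
  SPRING_KAFKA_SECURITY_PROTOCOL: "PLAINTEXT"
  KAFKA_MESSAGE_MAX_BYTES: "52428800"   # 50 MB
```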
#### Kafka producer configuration
| Environment Variable | Description | Example Value |
| --------------------------------------------------- | -------------------- | -------------------------------------------------------- |
| `SPRING_KAFKA_PRODUCER_KEYSERIALIZER` | Key serializer class | `org.apache.kafka.common.serialization.StringSerializer` |
| `SPRING_KAFKA_PRODUCER_PROPERTIES_MAX_REQUEST_SIZE` | Maximum request size | `52428800` (50MB) |
#### OAuth authentication variables (when using SASL\_PLAINTEXT)
| Environment Variable | Description | Example Value |
| -------------------------------- | -------------------- | ----------------------------------------------------------------- |
| `KAFKA_OAUTH_CLIENTID` | OAuth client ID | `flowx-service-client` |
| `KAFKA_OAUTH_CLIENTSECRET` | OAuth client secret | `flowx-service-client-secret` |
| `KAFKA_OAUTH_TOKEN_ENDPOINT_URI` | OAuth token endpoint | `{baseUrl}/auth/realms/kafka-authz/protocol/openid-connect/token` |
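When brokers require SASL authentication, the connection variables change accordingly. The following is a hedged sketch; the client ID, secret, and Keycloak URL are placeholders for your own identity provider:

```yaml
# Hypothetical setup for brokers secured with OAuth over SASL_PLAINTEXT.
env:
  SPRING_KAFKA_SECURITY_PROTOCOL: "SASL_PLAINTEXT"
  KAFKA_OAUTH_CLIENTID: "flowx-service-client"
  KAFKA_OAUTH_CLIENTSECRET: "[kafka-oauth-secret]"
  KAFKA_OAUTH_TOKEN_ENDPOINT_URI: "https://keycloak.example.com/auth/realms/kafka-authz/protocol/openid-connect/token"
```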
#### Kafka consumer configuration
| Environment Variable | Description | Default Value |
| ------------------------------------------------------------------------ | --------------------------------------- | ----------------------------------- |
| `KAFKA_CONSUMER_GROUPID_APPLICATION_RESOURCE_EXPORT` | Application export consumer group | `appResourceExportGroup` |
| `KAFKA_CONSUMER_GROUPID_APPLICATION_RESOURCE_IMPORT` | Application import consumer group | `appResourceImportGroup` |
| `KAFKA_CONSUMER_GROUPID_APPLICATION_RESOURCE_USAGES` | Resource usages consumer group | `appResourceUsagesGroup` |
| `KAFKA_CONSUMER_GROUPID_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATIONRESP` | Resource element validation group | `appResElemUsageValidationResp` |
| `KAFKA_CONSUMER_GROUPID_APPLICATION_RESOURCE_COPY` | Resource copy consumer group | `appResourceCopyGroup` |
| `KAFKA_CONSUMER_GROUPID_APPLICATION_MERGE` | Application merge consumer group | `appMergeItemGroup` |
| `KAFKA_CONSUMER_GROUPID_BUILD_CREATE` | Build create consumer group | `buildCreateGroup` |
| `KAFKA_CONSUMER_GROUPID_BUILD_UPDATE` | Build update consumer group | `buildUpdateGroup` |
| `KAFKA_CONSUMER_GROUPID_BUILD_RESOURCE_EXPORT` | Build export consumer group | `buildResourceExportGroup` |
| `KAFKA_CONSUMER_GROUPID_BUILD_RESOURCE_IMPORT` | Build import consumer group | `buildResourceImportGroup` |
| `KAFKA_CONSUMER_GROUPID_BUILD_STARTTIMEREVENTS_UPDATES` | Build timer events updates consumer | `buildStartTimerEventsUpdatesGroup` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_START` | Process start consumer group | `processStartGroup` |
| `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` | Auth exception retry interval (seconds) | `10` |
#### Kafka consumer threads configuration
| Environment Variable | Description | Default Value |
| ------------------------------------------------------------------------ | ------------------------------------------- | ------------- |
| `KAFKA_CONSUMER_THREADS_APPLICATION_RESOURCE_EXPORT` | Application export consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_APPLICATION_RESOURCE_IMPORT` | Application import consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_APPLICATION_RESOURCE_USAGES` | Resource usages consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATIONRESP` | Resource validation response threads | `3` |
| `KAFKA_CONSUMER_THREADS_APPLICATION_RESOURCE_COPY` | Resource copy consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_APPLICATION_MERGE` | Application merge consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_BUILD_CREATE` | Build create consumer threads | `2` |
| `KAFKA_CONSUMER_THREADS_BUILD_UPDATE` | Build update consumer threads | `4` |
| `KAFKA_CONSUMER_THREADS_BUILD_RESOURCE_EXPORT` | Build export consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_BUILD_RESOURCE_IMPORT` | Build import consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_BUILD_STARTTIMEREVENTS_UPDATES` | Build timer events updates consumer threads | `3` |
### Topic naming convention and pattern creation
The Application Manager builds topic names from a structured pattern. This keeps names consistent across environments and makes topics easy to identify.
#### Topic naming components
| Component | Default Value | Environment Variable | Description |
| ------------- | ---------------------------------------------------------------- | -------------------------------- | ------------------------------------- |
| `package` | `ai.flowx.` | `KAFKA_TOPIC_NAMING_PACKAGE` | Base package identifier |
| `environment` | `dev.` | `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Deployment environment |
| `version` | `.v1` | `KAFKA_TOPIC_NAMING_VERSION` | Topic version |
| `separator` | `.` | `KAFKA_TOPIC_NAMING_SEPARATOR` | Main separator (referred to as `dot`) |
| `separator2`  | `-`                                                              | `KAFKA_TOPIC_NAMING_SEPARATOR2`  | Secondary separator (referred to as `dash`) |
| `prefix` | `${KAFKA_TOPIC_NAMING_PACKAGE}${KAFKA_TOPIC_NAMING_ENVIRONMENT}` | `KAFKA_TOPIC_NAMING_PREFIX` | Combined `package` and `environment` |
| `suffix` | `${KAFKA_TOPIC_NAMING_VERSION}` | `KAFKA_TOPIC_NAMING_SUFFIX` | The version suffix |
#### Topic pattern creation
Topics are constructed using the following pattern:
```
{prefix} + service + {separator/dot} + action + {separator/dot} + detail + {suffix}
```
For example, a typical topic might look like:
```
ai.flowx.dev.application-version.export.v1
```
Where:
* `ai.flowx.dev.` is the prefix (package + environment)
* `application-version` is the service
* `export` is the action
* `.v1` is the suffix (version)
For more complex topics, additional components are added:
```
ai.flowx.dev.application-version.resources-usages.sub-res-validation.response.v1
```
Where:
* `resources-usages` represents the resource type
* `sub-res-validation` represents the operation type
* `response` indicates it's a response message
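Because the components are independent environment variables, moving a deployment to another environment only requires changing the relevant component. For instance, deploying to a hypothetical `qa` environment could look like this (values shown are illustrative):

```yaml
# Illustrative override: only the environment component changes.
env:
  KAFKA_TOPIC_NAMING_PACKAGE: "ai.flowx."
  KAFKA_TOPIC_NAMING_ENVIRONMENT: "qa."
  KAFKA_TOPIC_NAMING_VERSION: ".v1"
# With these values, the application export topic would resolve to:
#   ai.flowx.qa.application-version.export.v1
```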
#### Kafka topic configuration
##### Application resource topics
| Environment Variable | Description | Default Pattern |
| ------------------------------------------------------------------------- | ------------------------------------------ | --------------------------------------------------------------------------------------------- |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_EXPORT` | Application resource export topic | `ai.flowx.dev.application-version.export.v1` |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_IMPORT` | Application resource import topic | `ai.flowx.dev.application-version.import.v1` |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_USAGES_IN` | Resource usages in topic | `ai.flowx.dev.application-version.resources-usages.operations.bulk.v1` |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_USAGES_OUT` | Resource usages out topic | `ai.flowx.dev.application-version.resources-usages.operations.bulk.v1` |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_USAGES_REFRESH` | Resource usages refresh topic | `ai.flowx.dev.application-version.resources-usages.refresh.v1` |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_RESPONSE` | Resource element usage validation response | `ai.flowx.dev.application-version.resources-usages.sub-res-validation.response.v1` |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_INTEGRATION` | Resource validation integration topic | `ai.flowx.dev.application-version.resources-usages.sub-res-validation.request-integration.v1` |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_CMS` | Resource validation CMS topic | `ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1` |
| `KAFKA_TOPIC_APPLICATION_RESOURCE_COPY` | Resource copy topic | `ai.flowx.dev.application-version.copy-resource.v1` |
| `KAFKA_TOPIC_APPLICATION_MERGE` | Application merge topic | `ai.flowx.dev.application-version.merge.v1` |
##### Build resource topics
| Environment Variable | Description | Default Pattern |
| -------------------------------------------- | -------------------------- | ----------------------------------------------------- |
| `KAFKA_TOPIC_BUILD_UPDATE` | Build update topic | `ai.flowx.dev.build.update.v1` |
| `KAFKA_TOPIC_BUILD_CREATE` | Build create topic | `ai.flowx.dev.build.create.v1` |
| `KAFKA_TOPIC_BUILD_RESOURCE_EXPORT` | Build export topic | `ai.flowx.dev.build.export.v1` |
| `KAFKA_TOPIC_BUILD_RESOURCE_IMPORT` | Build import topic | `ai.flowx.dev.build.import.v1` |
| `KAFKA_TOPIC_BUILD_STARTTIMEREVENTS_UPDATES` | Timer events updates topic | `ai.flowx.dev.build.start-timer-events.updates.in.v1` |
##### Process topics
| Environment Variable | Description | Default Pattern |
| --------------------------------------------------- | ------------------------------- | -------------------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_STARTFOREVENT_IN` | Process start for event topic | `ai.flowx.dev.core.trigger.start-for-event.process.v1` |
| `KAFKA_TOPIC_PROCESS_STARTBYNAME_IN` | Process start by name topic | `ai.flowx.dev.core.trigger.start-by-name.process.v1` |
| `KAFKA_TOPIC_PROCESS_STARTBYNAME_OUT` | Process start by name out topic | `ai.flowx.dev.core.trigger.start-by-name.process.out.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULEDTIMEREVENTS_OUT_SET` | Set timer schedule topic | `ai.flowx.dev.core.trigger.set.timer-event-schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULEDTIMEREVENTS_OUT_STOP` | Stop timer schedule topic | `ai.flowx.dev.core.trigger.stop.timer-event-schedule.v1` |
##### Other topics
| Environment Variable | Description | Default Pattern |
| --------------------------------------- | ----------------------------- | ---------------------------------------------------- |
| `KAFKA_TOPIC_AUDIT_OUT` | Audit topic | `ai.flowx.dev.core.trigger.save.audit.v1` |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE` | Events gateway messages topic | `ai.flowx.dev.eventsgateway.receive.copyresource.v1` |
These Kafka topics follow the predefined naming convention described above. Override individual topic variables only when the desired topic name cannot be produced by the standard pattern.
### Authentication configuration
#### OpenID Connect configuration
| Environment Variable | Description | Default Value |
| ----------------------------------------- | ----------------------- | -------------------------------------------------------------------------------------------------- |
| `SECURITY_TYPE` | Security type | `oauth2` |
| `SECURITY_OAUTH2_CLIENT` | Enable OAuth2 client | `enabled` |
| `SECURITY_OAUTH2_BASE_SERVER_URL` | OAuth2 server base URL | |
| `SECURITY_OAUTH2_REALM` | OAuth2 realm name | |
| `SECURITY_OAUTH2_CLIENT_CLIENTID` | OAuth2 client ID | |
| `SECURITY_OAUTH2_CLIENT_CLIENTSECRET` | OAuth2 client secret | |
| `SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI` | OAuth2 access token URI | `${SECURITY_OAUTH2_BASE_SERVER_URL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token` |
#### Service account configuration
| Environment Variable | Description | Default Value |
| ---------------------------------------------------- | ----------------------------- | -------------------------- |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTID` | Service account client ID | `flowx-runtime-manager-sa` |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTSECRET` | Service account client secret | |
#### Spring security OAuth2 client configuration
| Environment Variable | Description | Default Value |
| -------------------------------------------------------------------------------------- | ----------------------------- | ------------------------------------------------------------------------------------------------------------- |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_INTROSPECTION_URI` | Token introspection URI | `${SECURITY_OAUTH2_BASE_SERVER_URL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token/introspect` |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENTID` | Resource server client ID | `${SECURITY_OAUTH2_CLIENT_CLIENTID}` |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENTSECRET` | Resource server client secret | `${SECURITY_OAUTH2_CLIENT_CLIENTSECRET}` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_PROVIDER` | Identity provider name | `mainAuthProvider` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_NAME` | Client name | `mainIdentity` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTID` | Client ID | `${SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTID}` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET` | Client secret | `${SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTSECRET}` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_AUTHORIZATION_GRANT_TYPE` | Authorization grant type | `client_credentials` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_AUTHENTICATION_METHOD` | Client authentication method | `client_secret_post` |
| `SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_MAINAUTHPROVIDER_TOKEN_URI` | Provider token URI | `${SECURITY_OAUTH2_BASE_SERVER_URL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token` |
The Application Manager requires proper authentication settings to secure access to application resources and APIs. By default, the service is configured to use Keycloak as the OpenID provider, but it can be adapted to work with other OAuth2-compatible providers.
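A minimal Keycloak-based sketch tying the variables above together might look like this; the server URL, realm, and secrets are placeholders for your own identity provider:

```yaml
# Hypothetical OAuth2 settings for the Application Manager with Keycloak.
# URLs, realm, and secrets are placeholders.
env:
  SECURITY_TYPE: "oauth2"
  SECURITY_OAUTH2_BASE_SERVER_URL: "https://keycloak.example.com/auth"
  SECURITY_OAUTH2_REALM: "flowx"
  SECURITY_OAUTH2_CLIENT_CLIENTID: "flowx-platform"
  SECURITY_OAUTH2_CLIENT_CLIENTSECRET: "[client-secret]"
  SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTID: "flowx-runtime-manager-sa"
  SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTSECRET: "[service-account-secret]"
```

The token and introspection URIs are derived from the base server URL and realm by default, so they rarely need to be set explicitly.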
Refer to the dedicated access-management documentation for configuring user roles and access rights.
### File storage configuration
S3 is used in the Application Manager for:
* Storing imported and exported resources
* Storing application versions and builds that are imported or exported
| Environment Variable | Description | Example Value | Default |
| ---------------------------------------------- | ---------------------- | ------------------------------------------ | --------------------- |
| `APPLICATION_FILESTORAGE_S3_SERVERURL` | S3 server URL | `http://minio:9000` | None |
| `APPLICATION_FILESTORAGE_S3_ACCESSKEY`         | S3 access key          | `[your-access-key]`                        | None                  |
| `APPLICATION_FILESTORAGE_S3_SECRETKEY`         | S3 secret key          | `[your-secret-key]`                        | None                  |
| `APPLICATION_FILESTORAGE_TYPE` | Storage type | `s3` | `s3` |
| `APPLICATION_FILESTORAGE_DELETIONSTRATEGY` | File deletion strategy | `delete` | `delete` |
| `APPLICATION_FILESTORAGE_S3_ENABLED` | Enable S3 storage | `true` | `true` |
| `APPLICATION_FILESTORAGE_S3_ENCRYPTIONENABLED` | Enable S3 encryption | `false` | `false` |
| `APPLICATION_FILESTORAGE_S3_BUCKETPREFIX` | S3 bucket name prefix | `applications-bucket` | `applications-bucket` |
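A MinIO-backed sketch using the variables above could look like this; the server URL and keys are placeholders and should come from a secret in production:

```yaml
# Hypothetical MinIO-backed file storage configuration for the
# Application Manager. URL and keys are placeholders.
env:
  APPLICATION_FILESTORAGE_TYPE: "s3"
  APPLICATION_FILESTORAGE_S3_ENABLED: "true"
  APPLICATION_FILESTORAGE_S3_SERVERURL: "http://minio:9000"
  APPLICATION_FILESTORAGE_S3_ACCESSKEY: "[your-access-key]"
  APPLICATION_FILESTORAGE_S3_SECRETKEY: "[your-secret-key]"
  APPLICATION_FILESTORAGE_S3_BUCKETPREFIX: "applications-bucket"
```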
### Monitoring and health check configuration
| Environment Variable | Description | Example Value | Default |
| ---------------------------------------------------- | ----------------------- | ------------------------------------------------------------------------------- | ------- |
| `MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED` | Prometheus metrics | `true` | `false` |
| `MANAGEMENT_HEALTH_KUBERNETES_ENABLED` | Kubernetes health check | `false` | `true` |
| `MANAGEMENT_HEALTH_REDIS_ENABLED` | Redis health check | `false` | `true` |
| `MANAGEMENT_HEALTH_KAFKA_ENABLED` | Kafka health check | `true` | `true` |
| `MANAGEMENT_HEALTH_LIVENESSSTATE_ENABLED` | Liveness state | `true` | `false` |
| `MANAGEMENT_HEALTH_READINESSSTATE_ENABLED` | Readiness state | `true` | `false` |
| `MANAGEMENT_ENDPOINT_HEALTH_GROUP_LIVENESS_INCLUDE` | Liveness probes | `ping,diskSpace,accessInfo,buildInfo,db,mongo,kafkaClusterHealthCheckIndicator` | `ping` |
| `MANAGEMENT_ENDPOINT_HEALTH_GROUP_READINESS_INCLUDE` | Readiness probes | `ping,diskSpace,accessInfo,buildInfo` | `ping` |
### Resource proxy configuration
The Resource Proxy module forwards resource-related requests to appropriate services, handling CRUD operations on the manifest. It requires proper configuration of proxy endpoints:
| Environment Variable | Description | Example Value | Default |
| ------------------------------------------------------- | ---------------------------------- | ---------------- | --------- |
| `RESOURCE_PROXY_MANIFEST_URL` | Manifest URL for resource proxy | URL value | None |
| `RESOURCE_PROXY_TARGET_URL` | Target URL for resource forwarding | URL value | None |
| `FLOWX_RESOURCEPROXY_RETRYGETRESOURCETIMEOUTMS` | Resource retrieval timeout | `500` | `500` |
| `FLOWX_RESOURCEPROXY_RETRYGETRESOURCEMAXCOUNT` | Maximum resource retrieval retries | `10` | `10` |
| `FLOWX_RESOURCEPROXY_WEBCLIENT_RETRYATTEMPTS` | Web client retry attempts | `2` | `2` |
| `FLOWX_RESOURCEPROXY_WEBCLIENT_RETRYBACKOFF` | Retry backoff time (seconds) | `1` | `1` |
| `FLOWX_RESOURCEPROXY_WEBCLIENT_MAXINMEMORYSIZE` | Maximum in-memory size | `5MB` | `5MB` |
| `FLOWX_RUNTIMEEXECUTIONPROXY_WEBCLIENT_MAXINMEMORYSIZE` | Maximum REST request size | `5242880` (5 MB) | `5242880` |
`FLOWX_RUNTIMEEXECUTIONPROXY_WEBCLIENT_MAXINMEMORYSIZE` - Specifies the maximum size (in bytes) of in-memory data for REST requests. This is particularly useful when dealing with large payloads to prevent excessive memory consumption.
* Default Value: 5242880 (5 MB)
* Usage Example: Set to 10485760 (10 MB) to allow larger in-memory request sizes.
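Bringing the proxy variables together, a sketch of a tuned configuration might be (both URLs are placeholders for the services in your deployment):

```yaml
# Illustrative resource proxy tuning. The manifest and target URLs are
# placeholders for the services in your own deployment.
env:
  RESOURCE_PROXY_MANIFEST_URL: "http://application-manager:8080"
  RESOURCE_PROXY_TARGET_URL: "http://cms-core:8080"
  FLOWX_RESOURCEPROXY_RETRYGETRESOURCETIMEOUTMS: "500"
  FLOWX_RESOURCEPROXY_RETRYGETRESOURCEMAXCOUNT: "10"
  FLOWX_RUNTIMEEXECUTIONPROXY_WEBCLIENT_MAXINMEMORYSIZE: "10485760"  # 10 MB
```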
### Scheduler configuration
The Application Manager scheduler supports retrying failed deployments and master election for better coordination of tasks across instances:
| Environment Variable | Description | Example Value | Default |
| ----------------------------------------------- | ---------------------------- | ---------------- | ------- |
| `FLOWX_SCHEDULER_RETRYFAILEDDEPLOYMENTSCRON` | Failed deployment retry cron | `0 * * * * *` | None |
| `FLOWX_SCHEDULER_MASTERELECTION_ENABLED` | Enable master election | `true` | `false` |
| `FLOWX_SCHEDULER_MASTERELECTION_CRONEXPRESSION` | Master election cron | `*/30 * * * * *` | None |
| `FLOWX_SCHEDULER_MASTERELECTION_PROVIDER` | Election provider | `redis` | None |
#### Retry failed deployments
Configures a cron job to retry updating builds in the runtime database every minute when previous attempts have failed.
#### Master election
Enables master election for improved scheduling coordination when multiple instances of the Application Manager are running, ensuring that scheduled tasks are only executed once.
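For a multi-instance deployment, the two features above might be combined as follows. Both expressions use Spring's six-field cron syntax (seconds first); the specific schedules are examples, not required values:

```yaml
# Illustrative scheduler settings: retry failed deployments every minute
# and run Redis-based master election every 30 seconds.
env:
  FLOWX_SCHEDULER_RETRYFAILEDDEPLOYMENTSCRON: "0 * * * * *"
  FLOWX_SCHEDULER_MASTERELECTION_ENABLED: "true"
  FLOWX_SCHEDULER_MASTERELECTION_CRONEXPRESSION: "*/30 * * * * *"
  FLOWX_SCHEDULER_MASTERELECTION_PROVIDER: "redis"
```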
### Configuring logging
To control the logging levels for the Application Manager, use the following environment variables:
| Environment Variable | Description | Example Value |
| -------------------- | ------------------------------- | ------------- |
| `LOGGING_LEVEL_ROOT` | Root Spring Boot logs level | `INFO` |
| `LOGGING_LEVEL_APP` | Application-level logs level | `INFO` |
| `LOGGING_LEVEL_DB` | Database interactions log level | `INFO` |
## Data model overview
The Application Manager stores application data using a relational database schema, with key entities such as application, application\_version, and application\_manifest. Below are descriptions of primary entities:
* **Application** - Defines an application with its details like name, type, and metadata.
* **Application Branch** - Represents branches for versioning within an application.
* **Application Version** - Keeps track of each version of an application, including committed and WIP statuses.
* **Application Manifest** - Contains the list of resources associated with a specific application version.
## Ingress configuration
Configure ingress to control external access to Application Manager:
```yaml
ingress:
  enabled: true
  public:
    enabled: false
  admin:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.admin }}"
    path: /appmanager(/|$)(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,flowx-platform
```
# Audit setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/audit-setup-guide
This guide will walk you through the process of setting up the Audit service and configuring it to meet your needs.
## Infrastructure prerequisites
The Audit service requires the following components to be set up before it can be started:
* **Docker engine** - version 17.06 or higher
* **Kafka** - version 2.8 or higher
* **Elasticsearch** - version 7.11.0 or higher
## Dependencies
The Audit service is built as a Docker image and runs on top of Kafka and Elasticsearch. Therefore, these services must be set up and running before starting the Audit service.
* [**Kafka configuration**](./setup-guides-overview#kafka)
* [**Authorization & access roles**](./setup-guides-overview#authorization--access-roles)
* [**Elasticsearch**](#configuring-elasticsearch)
* [**Logging**](./setup-guides-overview#logging)
## Configuration
### Configuring Kafka
To configure the Kafka server for the Audit service, set the following environment variables:
#### Connection settings
| Variable | Description | Example Value |
| ------------------------------- | ----------------------------------------- | ---------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Address of the Kafka server(s) | `localhost:9092` |
| `SPRING_KAFKA_SECURITYPROTOCOL` | Protocol used to communicate with brokers | `PLAINTEXT` |
#### Consumer configuration
| Variable | Description | Example Value |
| ------------------------------------------------------------ | ------------------------------------------- | ------------------------------------- |
| `SPRING_KAFKA_CONSUMER_GROUPID` | Consumer group ID for audit logs | `audit-gid` |
| `SPRING_KAFKA_CONSUMER_PROPERTIES_MAX_PARTITION_FETCH_BYTES` | Maximum data size per partition | `${KAFKA_MESSAGE_MAX_BYTES:52428800}` |
| `KAFKA_CONSUMER_THREADS` | Number of consumer threads | `1` |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Retry interval after auth failure (seconds) | `10` |
#### Topic naming configuration
| Variable | Description | Example Value |
| -------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------------------ |
| `DOT` | Reference to primary separator | `${kafka.topic.naming.separator}` |
| `DASH` | Reference to secondary separator | `${kafka.topic.naming.separator2}` |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Primary separator for topic names | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Secondary separator for topic names | `-` |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Base namespace for topics | `ai${dot}flowx${dot}` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment indicator | `dev${dot}` |
| `KAFKA_TOPIC_NAMING_VERSION` | Version component | `${dot}v1` |
| `KAFKA_TOPIC_NAMING_PREFIX` | Combined prefix | `${kafka.topic.naming.package}${kafka.topic.naming.environment}` |
| `KAFKA_TOPIC_NAMING_SUFFIX` | Combined suffix | `${kafka.topic.naming.version}` |
| `KAFKA_TOPIC_AUDIT_IN` | Topic for receiving audit logs | `${kafka.topic.naming.prefix}core${dot}trigger${dot}save${dot}audit${kafka.topic.naming.suffix}` |
With default settings, the `KAFKA_TOPIC_AUDIT_IN` resolves to: `ai.flowx.dev.core.trigger.save.audit.v1`
### Configuring Elasticsearch
To configure Elasticsearch, set the following environment variables:
| Variable | Description | Example Value |
| ------------------------------------------------ | ----------------------------- | ---------------------- |
| `SPRING_ELASTICSEARCH_REST_PROTOCOL` | Protocol for ES connection | `https` |
| `SPRING_ELASTICSEARCH_REST_URIS` | URL(s) of Elasticsearch nodes | `localhost:9200` |
| `SPRING_ELASTICSEARCH_REST_DISABLESSL` | Disable SSL for connections | `false` |
| `SPRING_ELASTICSEARCH_REST_USERNAME` | Username for authentication | `elastic` |
| `SPRING_ELASTICSEARCH_REST_PASSWORD` | Password for authentication | `your-secure-password` |
| `SPRING_ELASTICSEARCH_INDEX_SETTINGS_DATASTREAM` | Name of the audit data stream | `audit-logs` |
| `SPRING_ELASTICSEARCH_INDEX_SETTINGS_SHARDS` | Number of primary shards | `2` |
| `SPRING_ELASTICSEARCH_INDEX_SETTINGS_REPLICAS` | Number of replica shards | `2` |
The Elasticsearch index settings determine how your audit data is distributed and replicated across the cluster. The number of shards affects search performance and indexing, while replicas provide redundancy.
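A connection sketch based on the variables above might look like this; the node address and credentials are placeholders:

```yaml
# Illustrative Elasticsearch connection for the Audit service.
# Host and credentials are placeholders.
env:
  SPRING_ELASTICSEARCH_REST_PROTOCOL: "https"
  SPRING_ELASTICSEARCH_REST_URIS: "elasticsearch-master:9200"
  SPRING_ELASTICSEARCH_REST_USERNAME: "elastic"
  SPRING_ELASTICSEARCH_REST_PASSWORD: "[your-secure-password]"
  SPRING_ELASTICSEARCH_INDEX_SETTINGS_DATASTREAM: "audit-logs"
  SPRING_ELASTICSEARCH_INDEX_SETTINGS_SHARDS: "2"
  SPRING_ELASTICSEARCH_INDEX_SETTINGS_REPLICAS: "2"
```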
### Configuring logging
To control the log levels, set the following environment variables:
| Variable | Description | Example Value |
| -------------------- | -------------------------- | ------------- |
| `LOGGING_LEVEL_ROOT` | Log level for root service | `INFO` |
| `LOGGING_LEVEL_APP` | Log level for application | `INFO` |
# FlowX CMS setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/cms-setup
The CMS service is a microservice designed for managing taxonomies and content inside an application. Delivered as a Docker image, it simplifies content editing and analysis. This guide provides step-by-step instructions for setting up the service and configuring it to suit your needs.
Ensure the following infrastructure components are available before starting the CMS service:
* **Docker Engine**: Version 17.06 or higher.
* **MongoDB**: Version 4.4 or higher for storing taxonomies and content.
* **Redis**: Version 6.0 or higher.
* **Kafka**: Version 2.8 or higher.
* **Elasticsearch**: Version 7.11.0 or higher.
The service is pre-configured with most default values. However, some environment variables require customization during setup.
## Dependencies overview
* [**MongoDB instance**](#mongodb-database)
* [**Authorization & access roles**](#configuring-authorization-access-roles)
* [**Redis**](#configuring-redis)
* [**Kafka**](#configuring-kafka)
## Configuration
### Set application defaults
Define the default application name for retrieving content:
```yaml
application:
defaultApplication: ${DEFAULT_APPLICATION:flowx}
```
If this configuration is not provided, the default value will be set to `flowx`.
### Configuring authorization & access roles
Connect the CMS to an OAuth 2.0 identity management platform by setting the following variables:
| Environment variable | Description |
| ------------------------------------- | ------------------------------------------------ |
| `SECURITY_OAUTH2_BASESERVERURL` | Base URL for the OAuth 2.0 Authorization Server |
| `SECURITY_OAUTH2_CLIENT_CLIENTID` | Unique identifier for the client application |
| `SECURITY_OAUTH2_CLIENT_CLIENTSECRET` | Secret key to authenticate client requests |
| `SECURITY_OAUTH2_REALM` | Realm name for OAuth 2.0 provider authentication |
For detailed role and access configuration, refer to:
### Configuring MongoDB
The CMS requires MongoDB for taxonomy and content storage. Configure MongoDB with the following variables:
| Environment variable | Description | Default value |
| ---------------------------- | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_URI` | URI for connecting to the CMS MongoDB instance | Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<host3>:<port>/${DB_NAME}?retryWrites=false` |
| `DB_USERNAME` | MongoDB username | `cms-core` |
| `DB_NAME` | MongoDB database name | `cms-core` |
| `DB_PASSWORD` | MongoDB password | |
| `MONGOCK_TRANSACTIONENABLED` | Enables transactions in MongoDB for Mongock library | `false` (Set to `false` to support successful migrations) |
Set `MONGOCK_TRANSACTIONENABLED` to `false` due to known issues with transactions in MongoDB version 5.
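As an illustration, with hypothetical replica-set hosts and port `27017`, the CMS connection variables might resolve to something like this (host names and password are placeholders, not defaults):

```yaml
# Illustrative values only - hosts, port, and password are placeholders
SPRING_DATA_MONGODB_URI: "mongodb://cms-core:<password>@mongodb-0.mongodb-headless,mongodb-1.mongodb-headless:27017/cms-core?retryWrites=false"
DB_USERNAME: "cms-core"
DB_NAME: "cms-core"
MONGOCK_TRANSACTIONENABLED: "false"
```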
#### Configuring MongoDB (runtime database - additional data)
CMS also connects to a Runtime MongoDB instance for operational data:
| Environment variable | Description | Default value |
| --------------------------------- | ---------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_RUNTIME_URI` | URI for connecting to Runtime MongoDB | Format: `mongodb://${RUNTIME_DB_USERNAME}:${RUNTIME_DB_PASSWORD}@<host1>,<host2>,<host3>:<port>/${RUNTIME_DB_NAME}?retryWrites=false` |
| `RUNTIME_DB_USERNAME` | Runtime MongoDB username | `app-runtime` |
| `RUNTIME_DB_NAME` | Runtime MongoDB database name | `app-runtime` |
| `RUNTIME_DB_PASSWORD` | Runtime MongoDB password | |
| `SPRING_DATA_MONGODB_STORAGE` | Storage type for Runtime MongoDB (Azure environments only) | `mongodb` (Options: `mongodb`, `cosmosdb`) |
### Configuring Redis
The service can use the same Redis component deployed for the engine. See [Redis Configuration](./setup-guides-overview#redis-configuration).
| Environment variable | Description |
| ---------------------------- | ------------------------------------------------------ |
| `SPRING_DATA_REDIS_HOST` | Hostname or IP of the Redis server |
| `SPRING_DATA_REDIS_PASSWORD` | Authentication password for Redis |
| `SPRING_REDIS_TTL` | Maximum time-to-live for Redis cache keys (in seconds) |
### Configuring Kafka
#### Connection settings
| Environment variable | Description | Default value |
| -------------------------------- | --------------------------- | ---------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Address of the Kafka server | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol for Kafka | `"PLAINTEXT"` |
#### Auth and retry configuration
| Environment variable | Description | Default value |
| ---------------------------------- | ----------------------------------------------- | ----------------- |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Retry interval after an authorization exception | `10` |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size in bytes | `52428800` (50MB) |
#### Consumer group configuration
| Environment variable | Description | Default value |
| ------------------------------------------- | -------------------------------------- | -------------------------------- |
| `KAFKA_CONSUMER_GROUPID_CONTENTTRANSLATE` | Group ID for content translation | `cms-consumer-preview` |
| `KAFKA_CONSUMER_GROUPID_RESUSAGEVALIDATION` | Group ID for resource usage validation | `cms-res-usage-validation-group` |
#### Consumer thread configuration
| Environment variable | Description | Default value |
| ------------------------------------------- | ------------------------------------- | ------------- |
| `KAFKA_CONSUMER_THREADS_CONTENTTRANSLATE` | Threads for content translation | `1` |
| `KAFKA_CONSUMER_THREADS_RESUSAGEVALIDATION` | Threads for resource usage validation | `2` |
#### Topic configuration
The suggested topic naming convention:
```yaml
topic:
naming:
package: "ai.flowx."
environment: "dev."
version: ".v1"
prefix: ${kafka.topic.naming.package}${kafka.topic.naming.environment}
suffix: ${kafka.topic.naming.version}
engineReceivePattern: engine.receive.
```
#### Content request topics
| Environment variable | Description | Default value |
| --------------------------------- | --------------------------------------------- | -------------------------------------------------------------------- |
| `KAFKA_TOPIC_REQUEST_CONTENT_IN` | Topic for incoming content retrieval requests | `ai.flowx.dev.plugin.cms.trigger.retrieve.content.v1` |
| `KAFKA_TOPIC_REQUEST_CONTENT_OUT` | Topic for content retrieval results | `ai.flowx.dev.engine.receive.plugin.cms.retrieve.content.results.v1` |
#### Audit topics
| Environment variable | Description | Default value |
| ----------------------- | ---------------------------- | ----------------------------------------- |
| `KAFKA_TOPIC_AUDIT_OUT` | Topic for sending audit logs | `ai.flowx.dev.core.trigger.save.audit.v1` |
#### Application resource usage validation
| Environment variable | Description | Default value |
| ----------------------------------------------- | ----------------------------------- | ----------------------------------------------------------------------------- |
| `KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION` | Topic for resource usage validation | `ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1` |
All messages sent to topics matching the configured pattern will be consumed by the engine.
#### Inter-service topic coordination
When configuring Kafka topics in the FlowX ecosystem, it's critical to ensure proper coordination between services:
1. **Topic name matching**: Output topics from one service must match the expected input topics of another service.
For example:
* `KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_CMS` on Application Manager must match `KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION` on CMS
2. **Pattern consistency**: The pattern values must be consistent across services:
* Process Engine listens to topics matching: `ai.flowx.dev.engine.receive.*`
* Integration Designer listens to topics matching: `ai.flowx.dev.integration.receive.*`
3. **Communication flow**:
* Other services write to topics matching the Engine's pattern → Process Engine listens
* Process Engine writes to topics matching the Integration Designer's pattern → Integration Designer listens
The exact pattern value isn't critical, but it must be identical across all connected services. Some deployments require Kafka topics to be created manually in advance rather than relying on automatic topic creation; in these cases, all topic names must be explicitly defined and coordinated.
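For instance, the CMS resource-usage validation topic must carry the same value on both sides; the topic name below is the documented default:

```yaml
# Application Manager - output topic
KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_CMS: "ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1"

# CMS - input topic (must be identical)
KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION: "ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1"
```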
#### Kafka authentication
For secure environments, enable OAuth authentication with the following configuration:
```yaml
spring.config.activate.on-profile: kafka-auth
spring:
kafka:
security.protocol: "SASL_PLAINTEXT"
properties:
sasl:
mechanism: "OAUTHBEARER"
jaas.config: "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"${KAFKA_OAUTH_CLIENT_ID:kafka}\" oauth.client.secret=\"${KAFKA_OAUTH_CLIENT_SECRET:kafka-secret}\" oauth.token.endpoint.uri=\"${KAFKA_OAUTH_TOKEN_ENDPOINT_URI:kafka.auth.localhost}\" ;"
login.callback.handler.class: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
```
### Configuring logging
| Environment variable | Description |
| -------------------- | --------------------------------------- |
| `LOGGING_LEVEL_ROOT` | Log level for root service logs |
| `LOGGING_LEVEL_APP` | Log level for application-specific logs |
### Configuring file storage
#### Public storage
| Environment variable | Description |
| ------------------------------------------ | ------------------------------------------------------- |
| `APPLICATION_FILESTORAGE_S3_SERVERURL` | URL of S3 server for file storage |
| `APPLICATION_FILESTORAGE_S3_BUCKETNAME` | S3 bucket name |
| `APPLICATION_FILESTORAGE_S3_ROOTDIRECTORY` | Root directory in S3 bucket |
| `APPLICATION_FILESTORAGE_S3_CREATEBUCKET` | Auto-create bucket if it doesn't exist (`true`/`false`) |
| `APPLICATION_FILESTORAGE_S3_PUBLICURL` | Public URL for accessing files |
#### Private storage
Private CMS securely stores uploaded documents and AI-generated documents, keeping them hidden from the Media Library and accessible only through authenticated endpoints with access-token permissions. Files can be retrieved using tags (e.g., `ai_document`, `ref:UUID_doc`) and are excluded from application builds.
| Environment variable | Description |
| ------------------------------------------------ | ------------------------------------------- |
| `APPLICATION_FILESTORAGE_S3_PRIVATESERVERURL` | URL of S3 server for private storage |
| `APPLICATION_FILESTORAGE_S3_PRIVATEBUCKETNAME` | S3 bucket name for private storage |
| `APPLICATION_FILESTORAGE_S3_PRIVATECREATEBUCKET` | Auto-create private bucket (`true`/`false`) |
| `APPLICATION_FILESTORAGE_S3_PRIVATEACCESSKEY` | Access key for private S3 server |
| `APPLICATION_FILESTORAGE_S3_PRIVATESECRETKEY` | Secret key for private S3 server |
### Configuring file upload size
| Environment variable | Description | Default value |
| ----------------------------------------- | -------------------------------- | ------------- |
| `SPRING_SERVLET_MULTIPART_MAXFILESIZE` | Maximum file size for uploads | `50MB` |
| `SPRING_SERVLET_MULTIPART_MAXREQUESTSIZE` | Maximum request size for uploads | `50MB` |
Setting high file size limits may increase vulnerability to potential attacks. Consider security implications before increasing these limits.
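These variables map to Spring's standard multipart properties; a minimal sketch of the equivalent application configuration, assuming that conventional mapping:

```yaml
spring:
  servlet:
    multipart:
      max-file-size: ${SPRING_SERVLET_MULTIPART_MAXFILESIZE:50MB}
      max-request-size: ${SPRING_SERVLET_MULTIPART_MAXREQUESTSIZE:50MB}
```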
### Configuring application management
The following configuration from versions before 4.1 will be deprecated in version 5.0:
* `MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED`: Enables or disables Prometheus metrics export.
Starting from version 4.1, use the following configuration. This setup is backwards compatible until version 5.0.
| Environment variable | Description | Default value |
| ---------------------------------------------- | --------------------------------- | ------------- |
| `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` | Enables Prometheus metrics export | `false` |
# Data-Sync job setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/data-sync
Comprehensive guide for configuring and deploying the Data-Sync Job in your Kubernetes environment
## Overview
The Data-Sync Job synchronizes data across multiple databases to maintain consistency and up-to-date information throughout your system. It operates by connecting to various databases, retrieving data, and synchronizing changes across them. The job logs all actions and can be scheduled to run at regular intervals.
## Quick Start
```bash
# 1. Configure your environment variables in a data-sync-job.yaml file
# 2. Apply the configuration
kubectl apply -f data-sync-job.yaml
# 3. Monitor the job status
kubectl get jobs
# 4. Check logs if needed
kubectl logs job/data-sync-job
```
## Required environment variables
### Core configuration
| Variable | Description | Example |
| ------------------------------- | --------------------------------------------------------------- | ------------------------------------- |
| `CONFIG_PROFILE` | Specifies which configuration profile to use (required) | `k8stemplate_v2,kafka-auth` |
| `FLOWX_SKIPPEDRESOURCESERVICES` | Comma-separated list of services to skip during synchronization | `document-plugin,notification-plugin` |
> ⚠️ **Warning**: Do not include spaces in the `FLOWX_SKIPPEDRESOURCESERVICES` value.
### Database connections
The Data-Sync Job requires connection details for multiple databases. Configure the following sections based on your deployment.
#### MongoDB connections
Each MongoDB-based service requires the following variables:
| Component | Required Variables |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **CMS** | `FLOWX_DATASOURCE_CMS_URI`, `CMS_MONGO_USERNAME`, `CMS_MONGO_PASSWORD`, `CMS_MONGO_DATABASE` |
| **Scheduler** | `FLOWX_DATASOURCE_SCHEDULER_URI`, `SCHEDULER_MONGO_USERNAME`, `SCHEDULER_MONGO_PASSWORD`, `SCHEDULER_MONGO_DATABASE` |
| **Task Manager** | `FLOWX_DATASOURCE_TASKMANAGER_URI`, `TASKMANAGER_MONGO_USERNAME`, `TASKMANAGER_MONGO_PASSWORD`, `TASKMANAGER_MONGO_DATABASE` |
| **Document Plugin** | `FLOWX_DATASOURCE_DOCUMENTPLUGIN_URI`, `DOCUMENTPLUGIN_MONGO_USERNAME`, `DOCUMENTPLUGIN_MONGO_PASSWORD`, `DOCUMENTPLUGIN_MONGO_DATABASE` |
| **Notification Plugin** | `FLOWX_DATASOURCE_NOTIFICATIONPLUGIN_URI`, `NOTIFICATIONPLUGIN_MONGO_USERNAME`, `NOTIFICATIONPLUGIN_MONGO_PASSWORD`, `NOTIFICATIONPLUGIN_MONGO_DATABASE` |
| **App Runtime** | `FLOWX_DATASOURCE_APPRUNTIME_URI`, `APPRUNTIME_MONGO_USERNAME`, `APPRUNTIME_MONGO_PASSWORD`, `APPRUNTIME_MONGO_DATABASE` |
| **Integration Designer** | `FLOWX_DATASOURCE_INTEGRATIONDESIGNER_URI`, `INTEGRATIONDESIGNER_MONGO_USERNAME`, `INTEGRATIONDESIGNER_MONGO_PASSWORD`, `INTEGRATIONDESIGNER_MONGO_DATABASE` |
| **Admin** | `FLOWX_DATASOURCE_ADMIN_URI`, `ADMIN_MONGO_USERNAME`, `ADMIN_MONGO_PASSWORD`, `ADMIN_MONGO_DATABASE` |
##### MongoDB URI format
```
mongodb://${USERNAME}:${PASSWORD}@mongodb-0.mongodb-headless,mongodb-1.mongodb-headless,mongodb-arbiter-0.mongodb-arbiter-headless:27017/${DATABASE}
```
#### PostgreSQL connections
| Component | Required Variables |
| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Process Engine** | `FLOWX_DATASOURCE_ENGINE_URL`, `FLOWX_DATASOURCE_ENGINE_USERNAME`, `FLOWX_DATASOURCE_ENGINE_PASSWORD`, `FLOWX_DATASOURCE_ENGINE_DRIVERCLASSNAME` |
| **Application Manager** | `FLOWX_DATASOURCE_APPMANAGER_URL`, `FLOWX_DATASOURCE_APPMANAGER_USERNAME`, `FLOWX_DATASOURCE_APPMANAGER_PASSWORD`, `FLOWX_DATASOURCE_APPMANAGER_DRIVERCLASSNAME` |
##### Driver class names
* PostgreSQL: `org.postgresql.Driver`
* Oracle: `oracle.jdbc.OracleDriver`
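A sketch of the Process Engine connection variables for a PostgreSQL deployment; the JDBC host, database name, and credentials below are placeholders:

```yaml
# Illustrative values - host, database, and credentials are placeholders
FLOWX_DATASOURCE_ENGINE_URL: "jdbc:postgresql://postgresql:5432/process_engine"
FLOWX_DATASOURCE_ENGINE_USERNAME: "flowx"
FLOWX_DATASOURCE_ENGINE_PASSWORD: "<from-secret>"
FLOWX_DATASOURCE_ENGINE_DRIVERCLASSNAME: "org.postgresql.Driver"
```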
### Additional configuration
| Variable | Description |
| ----------------------------------------------- | ----------------------------------------------------------- |
| `SPRING_JPA_DATABASE` | Database type for Spring JPA (e.g., `postgresql`, `oracle`) |
| `SPRING_JPA_PROPERTIES_HIBERNATE_DEFAULTSCHEMA` | Default schema for Hibernate |
| `LOGGING_CONFIG_FILE` | Path to logging configuration file |
## Service to database mapping
Each service in your environment corresponds to specific database datasources:
| Service | Datasources |
| ------------------------ | ----------------------- |
| `scheduler-core` | scheduler |
| `cms-core` | cms |
| `task-management-plugin` | task-manager |
| `document-plugin` | document-plugin |
| `notification-plugin` | notification-plugin |
| `runtime-manager` | app-runtime, appmanager |
| `integration-designer` | integration-designer |
| `admin` | admin, engine |
| `process-engine` | engine |
| `application-manager` | appmanager |
## Sample configuration
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: data-sync-job
spec:
template:
spec:
containers:
- name: data-sync
image: your-registry/data-sync:latest
env:
- name: CONFIG_PROFILE
value: k8stemplate_v2,kafka-auth
- name: FLOWX_SKIPPEDRESOURCESERVICES
value: "document-plugin,notification-plugin"
# MongoDB connections
- name: FLOWX_DATASOURCE_CMS_URI
value: "mongodb://${CMS_MONGO_USERNAME}:${CMS_MONGO_PASSWORD}@mongodb-0.mongodb-headless:27017/${CMS_MONGO_DATABASE}"
# Add all other required environment variables
restartPolicy: Never
backoffLimit: 3
```
## Troubleshooting
### Common issues
1. **Database connection failures**:
* Verify MongoDB and PostgreSQL connection strings
* Check that database credentials are correct
* Ensure network connectivity between the job pod and databases
2. **Missing Required Variables**:
* Ensure all required environment variables are properly set
* Check for typos in environment variable names
3. **Service Synchronization Failures**:
* If a service isn't installed but data-sync is attempting to sync it, add it to `FLOWX_SKIPPEDRESOURCESERVICES`
### Logs
Monitor the Data-Sync Job logs to diagnose issues:
```bash
kubectl logs job/data-sync-job
```
## Best practices
1. Store sensitive credentials in Kubernetes Secrets and reference them in your deployment
2. Include the Data-Sync Job in your CI/CD pipeline for automated deployment
3. Schedule regular runs using a Kubernetes CronJob for periodic synchronization
4. Monitor job execution and set up alerts for failures
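Scheduling regular runs (best practice above) can be sketched as a Kubernetes CronJob wrapping the same container; the schedule and image below are illustrative:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: data-sync-cronjob
spec:
  schedule: "0 2 * * *"              # illustrative: run daily at 02:00
  concurrencyPolicy: Forbid          # avoid overlapping sync runs
  jobTemplate:
    spec:
      backoffLimit: 3
      template:
        spec:
          containers:
            - name: data-sync
              image: your-registry/data-sync:latest
              env:
                - name: CONFIG_PROFILE
                  value: k8stemplate_v2,kafka-auth
                # ...plus the same variables as the Job configuration
          restartPolicy: Never
```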
## Next steps
After successfully configuring and deploying the Data-Sync Job:
1. Verify data consistency across databases
2. Set up monitoring and alerting for job status
3. Consider automating deployment through your CI/CD pipeline
# FlowX Designer setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/designer-setup-guide
To set up FlowX Designer in your environment, follow this guide.
## Prerequisites
### NGINX
For optimal operation, the FlowX.AI Designer should use a separate [NGINX](../docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-nginx) load balancer from the **FlowX Engine**. This routing mechanism handles API calls from the [SPA](./designer-setup-guide#for-configuring-the-spa) (single-page application) to the backend service, the engine, and various plugins.
Here's a suggested NGINX setup:
#### For routing calls to plugins:
```yaml
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH
nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200,http://localhost:80,http://localhost:8080"
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: flowx-admin-plugins-subpaths
spec:
rules:
- host: {{host}}
http:
paths:
- path: /notification(/|$)(.*)
backend:
serviceName: notification
servicePort: 80
- path: /document(/|$)(.*)
backend:
serviceName: document
servicePort: 80
tls:
- hosts:
- {{host}}
secretName: {{tls secret}}
```
#### For routing calls to the engine
Three different configurations are needed:
1. For viewing the current instances of processes running in the Engine:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /api/instances/$2
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH
nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200,http://localhost:80,http://localhost:8080"
name: flowx-admin-engine-instances
spec:
rules:
- host: {{host}}
http:
paths:
- path: /api/instances(/|$)(.*)
backend:
serviceName: {{engine-service-name}}
servicePort: 80
```
2. For testing process definitions from the FlowX Designer, route API calls and SSE communication to the Engine backend.
Setup for routing REST calls:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /api/$2
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH
nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200,http://localhost:80,http://localhost:8080"
name: flowx-admin-engine-rest-api
spec:
rules:
- host: {{host}}
http:
paths:
- path: /{{PROCESS_API_PATH}}/api(/|$)(.*)
backend:
serviceName: {{engine-service-name}}
servicePort: 80
```
Setup for routing SSE communication:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/cors-allow-headers: ""
name: flowx-public-subpath-events-rewrite
spec:
rules:
- host: {{host}}
http:
paths:
- backend:
service:
name: events-gateway
port:
name: http
path: /api/events(/|$)(.*)
```
3. For accessing the REST API of the backend microservice
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "4m"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH
nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200,http://localhost:80,http://localhost:8080"
name: flowx-admin-api
spec:
rules:
- host: {{host}}
http:
paths:
- path: /
backend:
serviceName: {{flowx-admin-service-name}}
servicePort: 80
tls:
- hosts:
- {{host}}
secretName: {{tls secret}}
```
#### For configuring the SPA
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
certmanager.k8s.io/issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
ingress.kubernetes.io/affinity: cookie
name: flowx-designer-spa
spec:
rules:
- host: {{host of web app}}
http:
paths:
- backend:
serviceName: {{flowx-designer-service-name}}
servicePort: 80
tls:
- hosts:
- {{host of web app}}
secretName: {{tls secret}}
```
## Steps to deploy Frontend app
The FlowX.AI Designer is an SPA packaged in a Docker image with `nginx:1.19.10`. The web application allows an authenticated user to administer the FlowX platform.
To configure the Docker image, set the following parameters:
```yaml
flowx-process-renderer:
env:
BASE_API_URL: {{the one configured as host in the nginx}}
PROCESS_API_PATH: {{something like /engine}}
KEYCLOAK_ISSUER: {{openid provider - ex: https://something/auth/realms/realmName}}
KEYCLOAK_REDIRECT_URI: {{url of the SPA}}
KEYCLOAK_CLIENT_ID: {{client ID}}
    STATIC_ASSETS_PATH: {{mediaLibrary.s3.publicUrl}}/{{env}}
```
# FlowX Events Gateway setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/events-gateway-setup
This guide will walk you through the process of setting up the events-gateway service.
## Infrastructure prerequisites
Before proceeding with the setup, ensure that the following components have been set up:
* **Redis** - version 6.0 or higher
* **Kafka** - version 2.8 or higher
## Configuration
### Configuring Kafka
Set the following Kafka-related configurations using environment variables:
* `SPRING_KAFKA_BOOTSTRAPSERVERS` - the address of the Kafka server, in the format `host:port`
#### Group IDs
The `KAFKA_CONSUMER_GROUPID_*` parameters set the consumer group names for the Kafka consumers that read messages from topics. Consumer groups in Kafka allow for parallel message processing by distributing the workload among multiple consumer instances: consumers with the same group ID work together to process messages from the same topic, enabling scalable and fault-tolerant message consumption.
| Configuration Parameter | Default value | Description |
| --------------------------------------------------------- | ---------------------------------- | -------------------------------------------------------------------- |
| `KAFKA_CONSUMER_GROUPID_PROCESSENGINECOMMANDS_MESSAGE` | `engine-commands-message` | Consumer group ID for processing engine commands messages |
| `KAFKA_CONSUMER_GROUPID_PROCESSENGINECOMMANDS_DISCONNECT` | `engine-commands-disconnect` | Consumer group ID for processing engine commands disconnect messages |
| `KAFKA_CONSUMER_GROUPID_PROCESSENGINECOMMANDS_CONNECT` | `engine-commands-connect` | Consumer group ID for processing engine commands connect messages |
| `KAFKA_CONSUMER_GROUPID_PROCESS_TASKCOMMANDS_MESSAGE` | `task-commands-message` | Consumer group ID for processing task commands |
| `KAFKA_CONSUMER_GROUPID_PROCESSVERSIONCOMMANDS_MESSAGE` | `process-version-commands-message` | Consumer group ID for processing process version commands messages |
| `KAFKA_CONSUMER_GROUPID_GENERICCOMMANDS` | `generic-commands-message` | Consumer group ID for processing generic commands messages |
| `KAFKA_CONSUMER_GROUPID_USERBROADCASTCOMMANDS` | `user-broadcast-commands-message` | Consumer group ID for processing user broadcast commands messages |
#### Threads
The `KAFKA_CONSUMER_THREADS_*` parameters specify the number of threads assigned to Kafka consumers for processing messages from topics, letting you fine-tune the concurrency and parallelism of message consumption.
| Configuration Parameter | Default value | Description |
| --------------------------------------------------------- | ------------- | -------------------------------------------------------------------- |
| `KAFKA_CONSUMER_THREADS_PROCESSENGINECOMMANDS_MESSAGE` | 10 | Number of threads for processing engine commands messages |
| `KAFKA_CONSUMER_THREADS_PROCESSENGINECOMMANDS_DISCONNECT` | 5 | Number of threads for processing engine commands disconnect messages |
| `KAFKA_CONSUMER_THREADS_PROCESSENGINECOMMANDS_CONNECT` | 5 | Number of threads for processing engine commands connect messages |
| `KAFKA_CONSUMER_THREADS_TASKCOMMANDS` | 10 | Number of threads for task commands |
| `KAFKA_CONSUMER_THREADS_PROCESSVERSIONCOMMANDS` | 10 | Number of threads for processing process version commands messages |
| `KAFKA_CONSUMER_THREADS_GENERICCOMMANDS` | 10 | Number of threads for processing generic commands messages |
| `KAFKA_CONSUMER_THREADS_USERBROADCASTCOMMANDS` | 10 | Number of threads for processing user broadcast commands messages |
#### Kafka topics related to process instances
| Configuration Parameter | Default value |
| --------------------------------------------------------- | ---------------------------------------------------------- |
| `KAFKA_TOPIC_EVENTSGATEWAY_PROCESSINSTANCE_IN_MESSAGE` | `ai.flowx.dev.eventsgateway.engine.commands.message.v1` |
| `KAFKA_TOPIC_EVENTSGATEWAY_PROCESSINSTANCE_IN_DISCONNECT` | `ai.flowx.dev.eventsgateway.engine.commands.disconnect.v1` |
| `KAFKA_TOPIC_EVENTSGATEWAY_PROCESSINSTANCE_IN_CONNECT` | `ai.flowx.dev.eventsgateway.engine.commands.connect.v1` |
#### Kafka topics related to process versions
| Configuration Parameter | Default value |
| ----------------------------------------------------- | ---------------------------------------------------------------- |
| `KAFKA_TOPIC_EVENTSGATEWAY_PROCESSVERSION_IN_MESSAGE` | `ai.flowx.dev.eventsgateway.process-version-commands.message.v1` |
#### Kafka topics related to user messages
| Configuration Parameter | Default value |
| --------------------------------------------------- | ------------------------------------------------- |
| `KAFKA_TOPIC_EVENTSGATEWAY_USERMESSAGES_IN_MESSAGE` | `ai.flowx.dev.core.designer.notification.user.v1` |
### Configuring authorization & access roles
Set the following environment variables to connect to the identity management platform:
| Configuration Parameter | Description |
| ------------------------------------- | --------------------------------------- |
| `SECURITY_OAUTH2_BASESERVERURL` | Base URL of the OAuth2 server |
| `SECURITY_OAUTH2_CLIENT_CLIENTID` | Client ID for OAuth2 authentication |
| `SECURITY_OAUTH2_CLIENT_CLIENTSECRET` | Client secret for OAuth2 authentication |
| `SECURITY_OAUTH2_REALM` | Realm for OAuth2 authentication |
### Redis
The process engine sends the messages to the events-gateway, which is responsible for sending them to Redis.
| Configuration Parameter | Description |
| ---------------------------- | ---------------------------- |
| `SPRING_DATA_REDIS_HOST` | Hostname of the Redis server |
| `SPRING_DATA_REDIS_PASSWORD` | Password for Redis server |
| `SPRING_DATA_REDIS_PORT` | Port of the Redis server |
#### Master replica
The events-gateway can be configured to communicate with Redis in `MASTER_REPLICA` replication mode by setting the following property:
`spring.data.redis.sentinel.nodes: replica1, replica2, replica3`, etc...
Corresponding environment variable:
* `SPRING_DATA_REDIS_SENTINEL_NODES`
##### Example
```properties
spring.data.redis.sentinel.nodes=host1:26379,host2:26379,host3:26379
```
In the above example, the Spring Boot application will connect to three Redis Sentinel nodes: host1:26379, host2:26379, and host3:26379.
The property value should be a comma-separated list of host:port pairs, where each pair represents the hostname or IP address and the port number of a Redis Sentinel node.
By default, Redis runs standalone, so the `redis-replicas` configuration is optional and intended for high-load use cases.
In the context of Spring Boot and Redis Sentinel integration, the `spring.data.redis.sentinel.nodes` property specifies the list of Redis Sentinel nodes that the Spring application should connect to. These nodes are responsible for monitoring and managing Redis instances.
### Events
This configuration helps manage how event data is stored and accessed in Redis.
| Configuration Parameter | Default | Description |
| ------------------------------ | ------- | ----------------------------------------------------------------------------------------------------- |
| `EVENTS_REDIS_FREQUENCYMILLIS` | 200 | Time interval (in milliseconds) between Redis queries by the events gateway to check for new messages |
| `EVENTS_REDIS_TTLHOURS` | 4 | Time-to-live (in hours) for events stored in Redis |
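A sketch of the equivalent application properties; the property names below are assumed from the environment variables via Spring's relaxed binding, not confirmed by the source:

```yaml
# Sketch only - property names assumed from the environment variables
events:
  redis:
    frequencyMillis: ${EVENTS_REDIS_FREQUENCYMILLIS:200}  # polling interval, ms
    ttlHours: ${EVENTS_REDIS_TTLHOURS:4}                  # event time-to-live, hours
```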
### Configuring logging
Set the following environment variables to control log levels:
| Configuration Parameter | Description |
| ----------------------- | -------------------------------------------------------- |
| `LOGGING_LEVEL_ROOT` | Logging level for the root Spring Boot microservice logs |
| `LOGGING_LEVEL_APP` | Logging level for the application-level logs |
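As a sketch (again assuming a Kubernetes-style manifest), application logs could be made more verbose while keeping the root logger quieter:

```yaml
env:
  - name: LOGGING_LEVEL_ROOT
    value: "INFO"  # root Spring Boot microservice logs
  - name: LOGGING_LEVEL_APP
    value: "DEBUG" # application-level logs
```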
# Configuring access roles for processes
Source: https://docs.flowx.ai/4.7.x/setup-guides/flowx-engine-setup-guide/configuring-access-roles-for-processes
## Access to a process definition
Setting up user role-based access on process definitions is done by configuring swimlanes on the process definition.
By default, all process nodes belong to the same swimlane. If more swimlanes are needed, they can be edited in the process definition settings panel.
Swimlane role settings apply to the whole process, the process nodes or the actions to be performed on the nodes.
First, the desired user roles need to be configured in the identity provider solution and users must be assigned the correct roles.
You can use the **Access management** tab under **General Settings** to administer all the roles.

To be able to access the roles defined in the identity provider solution, a [**service account**](../access-management/configuring-an-iam-solution#process-engine-service-account) with appropriate permissions needs to be added in the identity provider, and the details of that service account [**need to be set up in the platform configuration**](../../setup-guides/designer-setup-guide#authorization--access-roles).
The defined roles will then be available to be used in the process definition settings (**Permissions** tab) panel for configuring swimlane access.
A **Default** swimlane comes with two default permissions assigned based on a specific role.

* **execute** - the user will be able to start process instances and run actions on them
* **self-assign** - the user can assign a process instance to themselves and start working on it
This is valid for **> 2.11.0** FLOWX.AI platform release.
Other **Permissions** can be added manually, depending on the user's needs. Some permissions must be configured in order to use features inside the [Task Management](/4.0/docs/platform-deep-dive/plugins/custom-plugins/task-management/task-management-overview) plugin. Specific roles need to be assigned separately for a few available process operations. These are:
* **view** - the user will be able to view process instance data
* **assign** - user can assign tasks to other users (this operation is only accessible through the **Task management** plugin)
* **unassign** - user can unassign tasks from other users (this operation is only accessible through the **Task management** plugin)
* **hold** - user can mark the process instance as on hold (this operation is only accessible through the **Task management** plugin)
* **unhold** - user can mark the process instance as not on hold (this operation is only accessible through the **Task management** plugin)

**\< 2.11.0 platform release** - if no role is configured on an operation, no restrictions will be applied.
## Configuration examples
Valid for \< 2.11.0 release version.
### Regular user
Below you can find an example of configuration of roles for a regular user:

### Admin
Below you can find an example of configuration of roles for an admin user:

Starting with [**2.11.0**](https://old-docs.flowx.ai/release-notes/v2.11.0-august-2022/) release, specific roles are needed, otherwise, restrictions will be applied.
After setting up your preferred identity provider solution, you will need to add the desired access roles in the application configuration for the FLOWX Engine (using environment variables):
[Authorization & access roles](./engine-setup#authorization-and-access-roles)
## Restricting process instance access based on business filters
[Business filters](/4.0/docs/platform-deep-dive/user-roles-management/business-filters)
Before they can be used in the process definition, the business filter attributes need to be set in the identity management platform. They have to be configured as a list of filters and made available on the authorization token. Application users will also have to be assigned this value.
## Viewing processes instances
Active process instances and their related data can be viewed from the FLOWX Designer. A user needs to be assigned to a specific role in the identity provider solution to be able to view this information.
By default, this role is named `FLOWX_ROLE`, but its name can be changed from the application configuration of the Engine by setting the following environment variable:
`FLOWX_PROCESS_DEFAULTROLES`
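As a sketch, assuming a Kubernetes-style deployment manifest, the default role name could be overridden like this (`CUSTOM_VIEW_ROLE` is a hypothetical role defined in your identity provider):

```yaml
env:
  - name: FLOWX_PROCESS_DEFAULTROLES
    value: "CUSTOM_VIEW_ROLE" # hypothetical role name; must exist in your identity provider
```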
When viewing process instance-related data, it can be configured whether to hide specific sensitive user data. This can be configured using the `FLOWX_DATA_ANONYMIZATION` environment variable.
## Starting processes
The `FLOWX_ROLE` is also used to grant permissions for starting processes.
## Access to REST API
To restrict API calls by user role, you will need to add the user roles in the application config:
```yaml
security:
  pathAuthorizations:
    - path: "/api/**"
      rolesAllowed: "ANY_AUTHENTICATED_USER" # or a specific role from your identity provider
```
# Configuring Elasticsearch indexing
Source: https://docs.flowx.ai/4.7.x/setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing/elasticsearch-indexing
This section provides configuration steps for enabling process instance indexing using the Kafka transport strategy.
Before proceeding, it is recommended to familiarize yourself with Elasticsearch and its indexing process by referring to the Intro to Elasticsearch section.
## Configuration updates
To enable the Kafka indexing strategy, the previous configuration parameter `flowx.use-elasticsearch` is being replaced. However, to ensure backward compatibility, it is still preserved in the configuration. Below is an example of the previous configuration:
```yaml
spring:
  elasticsearch:
    index-settings:
      name: process_instance
      shards: 1
      replicas: 1
```
Instead of the removed configuration, a new configuration area has been added:
```yaml
flowx:
  indexing:
    enabled: true # true | false - specifies if indexing with Elasticsearch for the whole app is enabled or disabled
    processInstance: # set of configurations for indexing process instances; can be duplicated for other objects
      indexing-type: kafka # no-indexing | http | kafka - the chosen indexing strategy
      index-name: process_instance # the index name that is part of the search pattern
      shards: 1
      replicas: 1
```
The `flowx.indexing.enabled` property determines whether indexing with Elasticsearch is enabled. When set to false or missing, no indexing will be performed for any entities defined below. When set to true, indexing with Elasticsearch is enabled.
If the FlowX indexing configuration is set to false, the following configuration information and guidelines are not applicable to your use case.
The `flowx.indexing.processInstance.indexing-type` property defines the indexing strategy for process instances. It can have one of the following values:
* **no-indexing**: No indexing will be performed for process instances.
* **http**: Direct connection from the process engine to Elasticsearch through HTTP calls.
* **kafka**: Data will be sent to be indexed via a Kafka topic using the new strategy. To implement this strategy, the Kafka Connect with Elasticsearch Sink Connector must be deployed in the infrastructure.
## Configuration steps
To enable indexing with Elasticsearch for the entire application, update the process-engine configuration with the following parameters:
* `FLOWX_INDEXING_ENABLED`: Set this parameter to `true` to enable indexing with Elasticsearch for the entire application.
| Variable Name | Enabled | Description |
| ------------------------ | ------- | --------------------------------------------------------- |
| FLOWX\_INDEXING\_ENABLED | true | Indexing with Elasticsearch for the whole app is enabled |
| FLOWX\_INDEXING\_ENABLED | false | Indexing with Elasticsearch for the whole app is disabled |
* `FLOWX_INDEXING_PROCESSINSTANCE_INDEXING_TYPE`: Set this parameter to `kafka` to use the Kafka transport strategy for indexing process instances.
| Variable Name | Indexing Type - Values | Definition |
| ------------------------------------------------ | ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | no-indexing | No indexing is performed for process instances |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | http                   | Process instances are indexed via HTTP (direct connection from the process engine to Elasticsearch through HTTP calls)                |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | kafka                  | Process instances are indexed via Kafka (data is sent to be indexed through a Kafka topic - the new strategy for the applied solution) |
* `FLOWX_INDEXING_PROCESSINSTANCE_INDEX_NAME`: Specify the name of the index used for process instances.
| Variable Name | Values | Definition |
| ------------------------------------------------------- | ----------------- | ----------------------------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEX\_NAME           | process\_instance | The name of the index used for storing process instances. It is also part of the search pattern |
* `FLOWX_INDEXING_PROCESSINSTANCE_SHARDS`: Set the number of shards for the index.
| Variable Name | Values | Definition |
| ---------------------------------------- | ------ | -------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_SHARDS | 1 | The number of shards for the Elasticsearch index storing process instances |
* `FLOWX_INDEXING_PROCESSINSTANCE_REPLICAS`: Set the number of replicas for the index.
| Variable Name | Values | Definition |
| ------------------------------------------ | ------ | ---------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_REPLICAS | 1 | The number of replicas for the Elasticsearch index storing process instances |
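Putting the variables above together, a minimal sketch of a Kubernetes-style `env` block for the process engine, using the values discussed above, might look like this:

```yaml
env:
  - name: FLOWX_INDEXING_ENABLED
    value: "true"
  - name: FLOWX_INDEXING_PROCESSINSTANCE_INDEXING_TYPE
    value: "kafka"
  - name: FLOWX_INDEXING_PROCESSINSTANCE_INDEX_NAME
    value: "process_instance"
  - name: FLOWX_INDEXING_PROCESSINSTANCE_SHARDS
    value: "1"
  - name: FLOWX_INDEXING_PROCESSINSTANCE_REPLICAS
    value: "1"
```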
For Kafka indexing, the Kafka Connect with Elasticsearch Sink Connector must be deployed in the infrastructure.
[Elasticsearch Service Sink Connector](https://docs.confluent.io/kafka-connectors/elasticsearch/current/overview.html)
## Configuration examples
### Kafka Connect
* This example assumes a Kafka cluster installed with the Strimzi operator and Elasticsearch deployed with the ECK operator
* You can push the image built by Kafka Connect to a local registry and comment out the `build` section
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: kafka-connect-kafka-flowx
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  version: 3.0.0
  replicas: 1
  bootstrapServers: kafka-flowx-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: kafka-flowx-cluster-ca-cert
        certificate: ca.crt
  image: ttl.sh/strimzi-connect-ttlsh266-3.0.0:24h
  config:
    group.id: flowx-kafka-connect
    offset.storage.topic: kafka-connect-cluster-offsets
    config.storage.topic: kafka-connect-cluster-configs
    status.storage.topic: kafka-connect-cluster-status
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
    topic.creation.enable: true
  build:
    output:
      type: docker
      # This image will last only for 24 hours and might be overwritten by other users.
      # Strimzi will use this tag to push the image, but it will use the digest to pull
      # the container image, to make sure it pulls exactly the image we just built. So
      # it should not happen that you pull someone else's container image. However, we
      # recommend changing this to your own container registry or using a different
      # image name for anything other than demo purposes.
      image: ttl.sh/strimzi-connect-ttlsh266-3.0.0:24h
    plugins:
      - name: kafka-connect-elasticsearch
        artifacts:
          - type: zip
            url: https://d1i4a15mxbxib1.cloudfront.net/api/plugins/confluentinc/kafka-connect-elasticsearch/versions/14.0.6/confluentinc-kafka-connect-elasticsearch-14.0.6.zip
  externalConfiguration:
    volumes:
      - name: elasticsearch-keystore-volume
        secret:
          secretName: elasticsearch-keystore
    env:
      - name: SPRING_ELASTICSEARCH_REST_PASSWORD
        valueFrom:
          secretKeyRef:
            name: elasticsearch-es-elastic-user
            key: elastic
```
### Kafka Elasticsearch Connector
```yaml
spec:
  class: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
  tasksMax: 2
  config:
    tasks.max: "2" # The maximum number of tasks that can run in parallel for this connector. Start with 2 and increase it under heavy load, but pay attention to the load produced on Elasticsearch and to the resources allocated to Kafka Connect so that it can support the threads.
    topics: "ai.flowx.core.index.process.v1" # Source Kafka topic. Must be the same as the one declared in the process engine as ${kafka.topic.naming.prefix}.core.index.process${kafka.topic.naming.suffix}
    key.ignore: "false" # Don't change this value! This tells Kafka Connect (KC) to process the key of the message - it will be used as the ID of the object in Elasticsearch.
    schema.ignore: "true" # Don't change this value! This tells KC to ignore the mapping from the Kafka message. Elasticsearch will use its internal mapping. See below.
    connection.url: "https://elasticsearch-es-http:9200" # The URL to Elasticsearch. You should configure this.
    connection.username: "elastic" # The username to authenticate with Elasticsearch. You should configure this.
    connection.password: "Yyh03ZI66310Hyw59MXcR8xt" # The password to authenticate with Elasticsearch. You should configure this.
    elastic.security.protocol: "SSL" # The security protocol to use for connecting to Elasticsearch. Use SSL if possible.
    elastic.https.ssl.keystore.location: "/opt/kafka/external-configuration/elasticsearch-keystore-volume/keystore.jks" # Configure the path to the keystore where the Elasticsearch key is added.
    elastic.https.ssl.keystore.password: "MPx57vkACsRWKVap" # The password for the keystore file. You should configure this.
    elastic.https.ssl.key.password: "MPx57vkACsRWKVap" # The password for the key within the keystore file.
    elastic.https.ssl.keystore.type: "JKS" # The type of the keystore file. It is set to "JKS" (Java KeyStore).
    elastic.https.ssl.truststore.location: "/opt/kafka/external-configuration/elasticsearch-keystore-volume/keystore.jks" # Configure the path to the truststore where the Elasticsearch certificate is added.
    elastic.https.ssl.truststore.password: "MPx57vkACsRWKVap" # The password for the truststore file. You should configure this.
    elastic.https.ssl.truststore.type: "JKS" # The type of the truststore file. It is set to "JKS".
    elastic.https.ssl.protocol: "TLS" # The SSL/TLS protocol to use for communication. It is set to "TLS".
    batch.size: 1000 # The size of the message batch that KC will process. The default should be fine; if you experience slowness and want to increase speed, tuning this may help depending on your scenario. Consult the connector documentation for more details.
    linger.ms: 1 # Start with this value and change it only if needed. Consult the connector documentation for more details.
    read.timeout.ms: 30000 # Increased to 30000 from the default 3000 due to flush.synchronously = true.
    flush.synchronously: "true" # Don't change this value! This controls the way of writing to Elasticsearch. It must stay "true" for the router below to work.
    drop.invalid.message: "true" # Don't change this value! If set to false, the connector waits for a configuration that allows processing the message. If set to true, the connector drops the invalid message.
    behavior.on.null.values: "IGNORE" # Don't change this value! Must be set to IGNORE to avoid blocking the processing of null messages.
    behavior.on.malformed.documents: "IGNORE" # Don't change this value! Must be IGNORE to avoid blocking the processing of invalid JSONs.
    write.method: "UPSERT" # Don't change this value! UPSERT creates or updates the indexed document.
    type.name: "_doc" # Don't change this value! This is the name of the Elasticsearch type used for indexing.
    key.converter: "org.apache.kafka.connect.storage.StringConverter" # Don't change this value!
    key.converter.schemas.enable: "false" # Don't change this value! No schema is defined for the key in the message.
    value.converter: "org.apache.kafka.connect.json.JsonConverter" # Don't change this value!
    value.converter.schemas.enable: "false" # Don't change this value! No schema is defined for the value in the message body.
    transforms: "routeTS" # Don't change this value! This router helps create indices dynamically based on the timestamp (process instance start date).
    transforms.routeTS.type: "org.apache.kafka.connect.transforms.TimestampRouter" # Don't change this value! It routes the message to the correct index.
    transforms.routeTS.topic.format: "process_instance-${timestamp}" # You should configure this. This value must start with the value defined in the process-engine config flowx.indexing.processInstance.index-name. The index name starts with a prefix ("process_instance-" in this example) and has the timestamp appended for dynamically creating indices. For backward compatibility (utilizing the data in the existing index), the prefix must be "process_instance-". However, backward compatibility isn't specifically required here.
    transforms.routeTS.timestamp.format: "yyyyMMdd" # This format ensures the timestamp is represented consistently and can be easily parsed when creating or searching for indices based on the process instance start date. Change it to the granularity you want; for monthly indices set it to "yyyyMM". Be aware that once you change it, existing indexed objects will no longer be updated: update messages are treated as new objects and indexed again because they are sent to new indices. This is important - choose your index size and stick with it.
```
### HTTP indexing
```yaml
flowx:
  indexing:
    enabled: true
    processInstance:
      indexing-type: http
      index-name: process_instance
      shards: 1
      replicas: 1
```
If you don't want to remove the existing configuration parameters, you can use the following example:
```yaml
spring:
  elasticsearch:
    index-settings:
      name: process_instance
      shards: 1
      replicas: 1
flowx.use-elasticsearch: true
flowx:
  indexing:
    enabled: ${flowx.use-elasticsearch}
    processInstance:
      indexing-type: http
      index-name: ${spring.elasticsearch.index-settings.name}
      shards: ${spring.elasticsearch.index-settings.shards}
      replicas: ${spring.elasticsearch.index-settings.replicas}
```
## Querying Elasticsearch
To read from multiple indices, queries in Elasticsearch have been updated. The queries now run against an index pattern that identifies multiple indices instead of a single index. The index pattern is derived from the value defined in the configuration property:
`flowx.indexing.processInstance.index-name`
## Kafka topics - process events messages
This topic is used for sending the data to be indexed from Process engine. The data from this topic will be read by Kafka Connect.
* Key: `${kafka.topic.process.index.out}`
* Value: `${kafka.topic.naming.prefix}.core.index.process${kafka.topic.naming.suffix}`
| Default parameter (env var) | Default FLOWX.AI value (can be overwritten) |
| --------------------------------- | ------------------------------------------- |
| KAFKA\_TOPIC\_PROCESS\_INDEX\_OUT | ai.flowx.dev.core.index.process.v1 |
The topic name, defined in the value, will be used by Kafka Connect as source for the messages to be sent to Elasticsearch for indexing.
The attribute `indexLastUpdatedTime` is new and is populated for the kafka-connect strategy. It records the timestamp of the last operation performed on the object in the index.
## Elasticsearch update (index template)
The mappings between messages and Elasticsearch data types need to be specified. This is achieved through an index template created by the process engine during startup. The template applies to indices starting with the value defined in `flowx.indexing.processInstance.index-name` config. Here's an example of the index template:
```json
// process_instance_template
{
  "index_patterns": ["process_instance*"],
  "priority": 300,
  "template": {
    "mappings": {
      "_doc": {
        "properties": {
          "_class": { "type": "keyword", "index": false, "doc_values": false },
          "dateStarted": { "type": "date", "format": "date_optional_time||epoch_millis" },
          "id": {
            "type": "text",
            "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
          },
          "indexLastUpdatedTime": { "type": "date", "format": "date_optional_time||epoch_millis" },
          "keyIdentifiers": {
            "type": "nested",
            "include_in_parent": true,
            "properties": {
              "_class": { "type": "keyword", "index": false, "doc_values": false },
              "key": {
                "type": "text",
                "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
              },
              "originalValue": {
                "type": "text",
                "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
              },
              "path": {
                "type": "text",
                "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
              },
              "value": {
                "type": "text",
                "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
              }
            }
          },
          "processDefinitionName": {
            "type": "text",
            "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
          },
          "state": {
            "type": "text",
            "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
          }
        }
      }
    },
    "settings": {
      "number_of_shards": 5, // This value will be overwritten by the value set in the FLOWX_INDEXING_PROCESSINSTANCE_SHARDS environment variable.
      "number_of_replicas": 1
    }
  }
}
```
## Time-based partitioning and index deletion
When working with large volumes of data, it's recommended to implement time-based partitioning for Elasticsearch indices to improve performance and manageability.
### Partitioning with Kafka vs HTTP
While both HTTP and Kafka indexing strategies support basic Elasticsearch sharding, **only the Kafka strategy provides out-of-the-box support for time-based partitioning** through the `transforms.routeTS.timestamp.format` in the Kafka Sink Connector configuration.
Time-based partitioning (creating separate indices for different time periods like daily/weekly/monthly) is not available as a built-in feature when using the HTTP indexing strategy. For efficient time-based partitioning and index lifecycle management, we recommend using the Kafka indexing strategy.
### Efficient data deletion
When deleting data from Elasticsearch, it's significantly more efficient to delete entire indices rather than deleting individual documents. This is particularly important for maintaining performance in systems with high data volumes.
The Kafka indexing strategy automatically creates time-based indices that can be deleted as entire units when they're no longer needed. This aligns well with database partitioning strategies, allowing for consistent data lifecycle management across your database and Elasticsearch.
For optimal performance, align your Elasticsearch time-based partitioning with your database partitioning strategy:
| Database partitioning | Recommended Elasticsearch timestamp format |
| --------------------- | ------------------------------------------ |
| `MONTH` | `yyyyMM` (monthly indices) |
| `WEEK` | `yyyyww` (weekly indices) |
| `DAY` | `yyyyMMdd` (daily indices) |
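For example, to align Elasticsearch indices with `MONTH` database partitioning, the Kafka Sink Connector's timestamp router (shown earlier in the connector example) would be configured for monthly granularity:

```yaml
transforms.routeTS.topic.format: "process_instance-${timestamp}"
transforms.routeTS.timestamp.format: "yyyyMM" # monthly indices, matching MONTH database partitioning
```

With this setting, a complete month of process instance data can later be dropped by deleting a single index such as `process_instance-202501`.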
Here are some guidelines to help you get started:
# Indexing config guidelines
Source: https://docs.flowx.ai/4.7.x/setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing/process-instance-indexing-config-guidelines
The configuration of Elasticsearch for process instance indexing depends on various factors related to the application load: the number of process instances, parallel requests, and indexed keys per process. Although the best approach to sizing and configuring Elasticsearch is through testing and monitoring under load, here are some guidelines to help you get started.
## Indexing strategy
* Advantages of multiple small indices:
  * Fast indexing process.
  * Flexibility in cleaning up old data.
* Potential drawbacks:
  * Hitting the maximum number of shards per node, resulting in exceptions when creating new indices.
  * Increased search response time and memory footprint.
* Deletion:
  * When deleting data in Elasticsearch, it's recommended to delete entire indices instead of individual documents. Creating multiple smaller indices provides the flexibility to delete entire indices of old data that are no longer needed.
Alternatively, you can create fewer indices that span longer periods of time, such as one index per year. This approach offers small search response times but may result in longer indexing times and difficulty in cleaning up and recovering data in case of failure.
## Shard and replica configuration
The solution includes an index template that gets created with the settings from the process-engine app (name, shards, replicas) when running the app for the first time. This template controls the settings and mapping of all newly created indices.
Once an index is created, you cannot update its number of shards and replicas. However, you can update the settings from the index template at runtime in Elasticsearch, and new indices will be created with the updated settings. Note that the mapping should not be altered as it is required by the application.
## Recommendations for resource management
To manage functional indexing operations and resources efficiently, consider the following recommendations:
* [Sizing indexes upon creation](#sizing-indexes-upon-creation)
* [Balancing](#balancing)
* [Delete unneeded indices](#delete-unneeded-indices)
* [Reindex large indices](#reindex-large-indices)
* [Force merge indices](#force-merge-indices)
* [Shrink indices](#shrink-indices)
* [Combine indices](#combine-indices)
#### Sizing indexes upon creation
Recommendations:
* Start with monthly indexes that have 2 shards and 1 replica. This setup is typically sufficient for handling up to 200k process instances per day; it ensures parallel indexing across two primary shards, with one replica per primary shard (4 shards in total). This creates 48 shards per year in the Elasticsearch nodes, far fewer than the default limit of 1000 shards, leaving enough room for other indexes as well.
* If you observe that indexing becomes significantly slow, examine the physical resources and shard size and start adapting the config.
* If you observe that indexing one monthly index gets massive and affects the performance, then think about switching to weekly indices.
* If you have huge spikes of parallel indexing load (even though that depends on the Kafka connect cluster configuration), then think about adding more main shards.
* Consider having at least one replica for high availability. However, keep in mind that the number of replicas is applied to each shard, so creating many replicas may lead to increased resource usage.
* Monitor the number of shards created and estimate when you might reach the maximum shards per node, taking into account the number of nodes in your cluster.
#### Balancing
When configuring index settings, consider the number of nodes in your cluster. The total number of shards (calculated by the formula: primary\_shards\_number \* (replicas\_number +1)) for an index should be directly proportional to the number of nodes. This helps Elasticsearch distribute the load evenly across nodes and avoid overloading a single node. Avoid adding shards and replicas unnecessarily.
#### Delete unneeded indices
Deleting unnecessary indices reduces memory footprint, the number of used shards, and search time.
#### Reindex large indices
If you have large indices, consider reindexing them. Process instance indexing involves multiple updates on an initially indexed process instance, resulting in multiple versions of the same document in the index. Reindexing creates a new index with only the latest version, reducing storage size, memory footprint, and search response time.
#### Force merge indices
If there are indices with no write operations performed anymore, perform force merge to reduce the number of segments in the index. This operation reduces memory footprint and response time. Only perform force merge during off-peak hours when the index is no longer used for writing.
#### Shrink indices
If you have indices with many shards, consider shrinking them using the shrink operation. This reindexes the data into an index with fewer shards. Perform this operation during off-peak hours.
#### Combine indices
If there are indices with no write operations performed anymore (e.g., process\_instance indices older than 6 months), combine these indices into a larger one and delete the smaller ones. Use the reindexing operation during off-peak hours. Ensure that write operations are no longer needed from the FLOWX platform for these indices.
# FlowX Engine setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/flowx-engine-setup-guide/engine-setup
This guide provides instructions on how to set up and configure the FlowX.AI Engine.
## Infrastructure prerequisites
Before installing the FlowX.AI Engine, verify that the following infrastructure components are installed and configured:
* **Kafka**: Version 2.8 or higher
* **Elasticsearch**: Version 7.11.0 or higher
* **PostgreSQL**: Version 13 or higher for storing application data
* **MongoDB**: Version 4.4 or higher for managing runtime builds
## Dependencies
The FlowX Engine requires the following components:
* **Database**: Primary storage for the engine
* **Redis Server**: Used for caching. See [Redis Configuration](#redis-configuration)
* **Kafka**: Handles messaging and event-driven architecture. See [Configuring Kafka](#configuring-kafka)
For a microservices architecture, services typically manage their data via dedicated databases.
### Required external services
* **Redis Cluster**: Caches process definitions, compiled scripts, and Kafka responses
* **Kafka Cluster**: Enables communication with external plugins and integrations
## Configuration setup
FlowX.AI Engine uses environment variables for configuration. This section covers key configuration areas:
* [Database configuration](#database-configuration)
* [Script engine configuration](#configuring-script-engine)
* [Authorization and access roles](#authorization--access-roles)
* [Kafka configuration](#configuring-kafka)
* [Advancing controller configuration](#connecting-the-advancing-controller)
* [Cleanup mechanism configuration](#configuring-cleanup-mechanism)
* [File upload configuration](#configuring-file-upload-size)
* [Application management](#configuring-application-management)
* [RBAC configuration](#rbac-configuration)
## Database configuration
### PostgreSQL
| Environment variable | Description |
| ----------------------------------- | --------------------------- |
| `SPRING_DATASOURCE_URL` | Database URL for PostgreSQL |
| `SPRING_DATASOURCE_USERNAME` | Username for PostgreSQL |
| `SPRING_DATASOURCE_PASSWORD` | Password for PostgreSQL |
| `SPRING_DATASOURCE_DRIVERCLASSNAME` | Driver class for PostgreSQL |
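As a sketch, assuming a Kubernetes-style deployment manifest (host, database, and secret names below are hypothetical), the PostgreSQL connection could be configured like this:

```yaml
env:
  - name: SPRING_DATASOURCE_URL
    value: "jdbc:postgresql://postgresql:5432/process_engine" # hypothetical host and database name
  - name: SPRING_DATASOURCE_USERNAME
    value: "flowx" # hypothetical username
  - name: SPRING_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgresql-credentials # hypothetical secret holding the database password
        key: password
  - name: SPRING_DATASOURCE_DRIVERCLASSNAME
    value: "org.postgresql.Driver"
```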
### MongoDB configuration
Configure connection to the Runtime MongoDB instance:
| Environment variable | Description | Default value |
| --------------------------------- | ----------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_RUNTIME_URI` | URI for connecting to MongoDB | Format: `mongodb://${RUNTIME_DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<host3>:<port>/${DB_NAME}?retryWrites=false` |
| `RUNTIME_DB_USERNAME` | MongoDB username | `app-runtime` |
| `DB_NAME` | MongoDB database name | `app-runtime` |
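As a sketch, assuming a three-node MongoDB replica set (the `mongo-*` hostnames below are hypothetical), the URI could be assembled like this:

```yaml
env:
  - name: SPRING_DATA_MONGODB_RUNTIME_URI
    # hypothetical hosts; DB_PASSWORD is expected to be resolved from a secret
    value: "mongodb://app-runtime:${DB_PASSWORD}@mongo-0:27017,mongo-1:27017,mongo-2:27017/app-runtime?retryWrites=false"
```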
## Configuring script engine
FlowX.AI Engine supports multiple scripting languages for business rules and actions.
If you have the **AI Developer** agent set up, you must also set these environment variables on it.
### Python runtime configuration
By default, FlowX.AI 4.7.1 uses Python 2.7 (Jython) as the Python runtime. To enable Python 3 (via GraalPy, with roughly 3x performance improvements) and JavaScript (via GraalJS), you must explicitly set the feature toggle.
| Environment variable | Description | Default value | Possible values |
| ------------------------------- | ----------------------------------------------------- | ------------- | --------------- |
| `FLOWX_SCRIPTENGINE_USEGRAALVM` | Determines which Python and JavaScript runtime to use | `false` | `true`, `false` |
Python 2.7 support will be deprecated in FlowX.AI 5.0. We recommend migrating your Python scripts to Python 3 to take advantage of improved performance and modern language features.
### GraalVM cache configuration
When using GraalVM (`FLOWX_SCRIPTENGINE_USEGRAALVM=true`), ensure the engine has proper access to a cache directory within the container. By default, this is configured in the `/tmp` directory.
For environments with filesystem restrictions or custom configurations, you need to properly configure the GraalVM cache.
There are two methods to configure the GraalVM cache location:
#### Option 1: Using Java options (Preferred)
Add the following Java option to your deployment configuration:
```
-Dpolyglot.engine.userResourceCache=/tmp
```
This option is set by default in the standard Docker image but might need to be included if you're overriding the Java options.
#### Option 2: Using environment variables
Alternatively, set the following environment variable:
```
XDG_CACHE_HOME=/tmp
```
If you encounter errors related to Python script execution when using GraalVM, verify that the cache directory is properly configured and accessible.
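Putting the toggle and cache settings together, a deployment fragment might look like the sketch below. The `JAVA_OPTS` variable name is an assumption about how your deployment passes Java options; adjust it to match your setup.

```yaml
env:
  - name: FLOWX_SCRIPTENGINE_USEGRAALVM
    value: "true"
  - name: XDG_CACHE_HOME            # Option 2: cache location via environment variable
    value: "/tmp"
  - name: JAVA_OPTS                 # Option 1 (preferred): cache location via Java option
    value: "-Dpolyglot.engine.userResourceCache=/tmp"
```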
For more details about supported scripting languages and their capabilities, see the [Supported scripts](../../4.7.x/docs/building-blocks/supported-scripts) documentation.
## Authorization & access roles
This section covers OAuth2 configuration settings for securing the Spring application.
### Resource server settings (OAuth2 configuration)
The following configuration from versions before 4.1 will be deprecated in version 5.0:
* `SECURITY_OAUTH2_BASE_SERVER_URL`: Base URL for the OAuth 2.0 Authorization Server
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID`: Client identifier registered with the OAuth 2.0 Authorization Server
* `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET`: Secret key for authenticating client requests
* `SECURITY_OAUTH2_REALM`: Realm name used for OAuth2 authentication
Starting from version 4.1, use the following configuration instead. This setup is backwards compatible until version 4.5.
| Environment variable | Description | Default value |
| ----------------------------------------------------------------------- | ------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_INTROSPECTION_URI` | URI for token introspection | `${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token/introspect` |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENT_ID` | Client ID for token introspection | `${security.oauth2.client.client-id}` |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENT_SECRET` | Client secret for token introspection | `${security.oauth2.client.client-secret}` |
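As a sketch, the post-4.1 variables can also be set explicitly instead of relying on the interpolated defaults. The Keycloak-style URL, client ID, and secret name below are placeholders:

```yaml
env:
  - name: SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_INTROSPECTION_URI
    value: "https://auth.example.com/realms/flowx/protocol/openid-connect/token/introspect"
  - name: SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENT_ID
    value: "flowx-platform"
  - name: SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: oauth-credentials     # assumed secret name
        key: client-secret
```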
### Service account settings
Configure the process engine service account:
| Environment variable | Description | Default value |
| ----------------------------------------------------- | ------------------------------------- | ------------------------------------- |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID` | Client ID for the service account | `flowx-${SPRING_APPLICATION_NAME}-sa` |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` | Client secret for the service account | |
For more information about the necessary service account, see [Process Engine Service Account](../access-management/configuring-an-iam-solution#process-engine-service-account).
### Security configuration
| Environment variable | Description | Default value |
| ---------------------------------------------- | ---------------------------------------- | ----------------------------------- |
| `SECURITY_TYPE` | Type of security mechanism | `oauth2` |
| `SECURITY_BASIC_ENABLED` | Enable basic authentication | `false` |
| `SECURITY_PUBLIC_PATHS_0` | Public path not requiring authentication | `/api/platform/components-versions` |
| `SECURITY_PUBLIC_PATHS_1` | Public path not requiring authentication | `/manage/actuator/health` |
| `SECURITY_PATH_AUTHORIZATIONS_0_PATH` | Security path pattern | `"/api/**"` |
| `SECURITY_PATH_AUTHORIZATIONS_0_ROLES_ALLOWED` | Roles allowed for path access | `"ANY_AUTHENTICATED_USER"` |
## Configuring Kafka
Kafka handles all communication between the FlowX.AI Engine, external plugins, and integrations. It also notifies running process instances when certain events occur.
### Kafka connection settings
| Environment variable | Description | Default value |
| -------------------------------- | --------------------------- | ---------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Kafka bootstrap servers | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol for Kafka | `"PLAINTEXT"` |
### Message routing configuration
| Environment variable | Description | Default value |
| ------------------------ | --------------------------------------------------------------------- | ------------------- |
| `KAFKA_DEFAULTFXCONTEXT` | Default context value for message routing when no context is provided | `""` (empty string) |
When `KAFKA_DEFAULTFXCONTEXT` is set and an event is received on Kafka without an fxContext header, the system will automatically apply the default context value to the message.
### Kafka consumer retry settings
| Environment variable | Description | Default value |
| ---------------------------------- | ----------------------------------------------------------------- | ------------- |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Interval between retries after `AuthorizationException` (seconds) | `10` |
### Consumer groups & consumer threads configuration
Both a producer and a consumer must be configured for the Engine.
#### About consumer groups and threads
A consumer group is a set of consumers that jointly consume messages from one or more Kafka topics. Each consumer group has a unique identifier (group ID) that Kafka uses to manage message distribution.
Thread numbers refer to the number of threads a consumer application uses to process messages. Increasing thread count can improve parallelism and efficiency, especially with high message volumes.
#### Consumer group configuration
| Environment variable | Description | Default value |
| ------------------------------------------------ | ------------------------------------------------------------------- | ------------------ |
| `KAFKA_CONSUMER_GROUPID_NOTIFYADVANCE` | Group ID for notifying advance actions | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_NOTIFYPARENT` | Group ID for notifying when a subprocess is blocked | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_ADAPTERS` | Group ID for messages related to adapters | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_SCHEDULERRUNACTION` | Group ID for running scheduled actions | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_SCHEDULERADVANCING` | Group ID for messages indicating continuing advancement | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_MESSAGEEVENTS` | Group ID for message events | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_START` | Group ID for starting processes | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_STARTFOREVENT` | Group ID for starting processes for an event | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_EXPIRE` | Group ID for expiring processes | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_OPERATIONS` | Group ID for processing operations from Task Management plugin | `notif123-preview` |
| `KAFKA_CONSUMER_GROUPID_PROCESS_BATCHPROCESSING` | Group ID for processing bulk operations from Task Management plugin | `notif123-preview` |
#### Consumer thread configuration
| Environment variable | Description | Default value |
| ------------------------------------------------ | --------------------------------------------------------------------- | ------------- |
| `KAFKA_CONSUMER_THREADS_NOTIFYADVANCE` | Number of threads for notifying advance actions | `6` |
| `KAFKA_CONSUMER_THREADS_NOTIFYPARENT` | Number of threads for notifying when a subprocess is blocked | `6` |
| `KAFKA_CONSUMER_THREADS_ADAPTERS` | Number of threads for processing messages related to adapters | `6` |
| `KAFKA_CONSUMER_THREADS_SCHEDULERADVANCING` | Number of threads for continuing advancement | `6` |
| `KAFKA_CONSUMER_THREADS_SCHEDULERRUNACTION` | Number of threads for running scheduled actions | `6` |
| `KAFKA_CONSUMER_THREADS_MESSAGEEVENTS` | Number of threads for message events | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_START` | Number of threads for starting processes | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_STARTFOREVENT` | Number of threads for starting processes for an event | `2` |
| `KAFKA_CONSUMER_THREADS_PROCESS_EXPIRE` | Number of threads for expiring processes | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_OPERATIONS` | Number of threads for processing operations from task management | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_BATCHPROCESSING` | Number of threads for processing bulk operations from task management | `6` |
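For example, to isolate process-start consumption into its own consumer group and scale its thread count, you might override just those two variables (the group ID and thread count below are illustrative, not recommended values):

```yaml
env:
  - name: KAFKA_CONSUMER_GROUPID_PROCESS_START
    value: "engine-process-start"   # dedicated group for process-start messages
  - name: KAFKA_CONSUMER_THREADS_PROCESS_START
    value: "12"                     # more threads for higher start-request volume
```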
All events sent to topics that match a configured pattern will be consumed by the Engine. This enables you to create new integrations and connect them to the Engine without changing the configuration.
### Configuring Kafka topics
Recommended topic naming convention:
* `KAFKA_TOPIC_NAMING_PACKAGE`: `ai.flowx.dev`
* `KAFKA_TOPIC_NAMING_ENVIRONMENT`: `dev`
* `KAFKA_TOPIC_NAMING_VERSION`: `v1`
* `KAFKA_TOPIC_NAMING_SEPARATOR`: `.`
* `KAFKA_TOPIC_NAMING_SEPARATOR2`: `-`
* `KAFKA_TOPIC_NAMING_PREFIX`: `${KAFKA_TOPIC_NAMING_PACKAGE}${KAFKA_TOPIC_NAMING_ENVIRONMENT}`
* `KAFKA_TOPIC_NAMING_SUFFIX`: `${KAFKA_TOPIC_NAMING_VERSION}`
* `KAFKA_TOPIC_NAMING_ENGINERECEIVEPATTERN`: `engine.receive.`
* `KAFKA_TOPIC_NAMING_INTEGRATIONRECEIVEPATTERN`: `integration.receive.`
* `KAFKA_TOPIC_PATTERN`: `${KAFKA_TOPIC_NAMING_PREFIX}${KAFKA_TOPIC_NAMING_ENGINERECEIVEPATTERN}*`
* `KAFKA_TOPIC_INTEGRATIONPATTERN`: `${KAFKA_TOPIC_NAMING_PREFIX}${KAFKA_TOPIC_NAMING_INTEGRATIONRECEIVEPATTERN}*`
#### Core engine topics
| Environment variable | Description | Default value |
| ------------------------------------ | --------------------------------------------------------- | --------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_NOTIFY_ADVANCE` | Topic used internally for advancing processes | `ai.flowx.dev.core.notify.advance.process.v1` |
| `KAFKA_TOPIC_PROCESS_NOTIFY_PARENT` | Topic used for sub-processes to notify the parent process | `ai.flowx.dev.core.notify.parent.process.v1` |
| `KAFKA_TOPIC_PATTERN` | Pattern the Engine listens on for incoming events | `ai.flowx.dev.engine.receive.*` |
| `KAFKA_TOPIC_LICENSE_OUT` | Topic used to generate licensing-related details | `ai.flowx.dev.core.trigger.save.license.v1` |
| `KAFKA_TOPIC_PROCESS_EVENT_MESSAGE` | Topic for process message events | `ai.flowx.dev.core.message.event.process.v1` |
#### Topics related to the Task Management plugin
| Environment variable | Description | Default value |
| --------------------------------------- | ---------------------------------------------------------- | ------------------------------------------------ |
| `KAFKA_TOPIC_TASK_OUT` | Topic used for sending notifications to the plugin | `ai.flowx.dev.plugin.tasks.trigger.save.task.v1` |
| `KAFKA_TOPIC_PROCESS_OPERATIONS_IN` | Topic for receiving information about operations performed | `ai.flowx.dev.core.trigger.operations.v1` |
| `KAFKA_TOPIC_PROCESS_OPERATIONS_BULKIN` | Topic where operations can be performed in bulk | `ai.flowx.core.trigger.operations.bulk.v1` |
##### OPERATIONS\_IN request example
```json
{
  "operationType": "UNASSIGN", // type of operation performed in the Task Management plugin
  "taskId": "some task id",
  "processInstanceUuid": "1cff0b7d-966b-4b35-9e9b-63b1d6757ec6",
  "swimlaneName": "Default",
  "swimlaneId": "51ec1241-fe06-4576-9c84-31598c05c527",
  "owner": {
    "firstName": null,
    "lastName": null,
    "username": "service-account-flowx-process-engine-account",
    "enabled": false
  },
  "author": "admin@flowx.ai"
}
```
##### BULK\_IN request example
```json
{
  "operations": [
    {
      "operationType": "HOLD",
      "taskId": "some task id",
      "processInstanceUuid": "d3aabfd8-d041-4c62-892f-22d17923b223", // the id of the process instance
      "swimlaneName": "Default", // name of the swimlane
      "owner": null,
      "author": "john.doe@flowx.ai"
    },
    {
      "operationType": "HOLD",
      "taskId": "some task id",
      "processInstanceUuid": "d3aabfd8-d041-4c62-892f-22d17923b223",
      "swimlaneName": "Default", // name of the swimlane
      "owner": null,
      "author": "john.doe@flowx.ai"
    }
  ]
}
```
To send additional keys in the response, attach them in the header. For example, you can use a `requestID` key.
A response should be sent on a `callbackTopic` if it is mentioned in the headers:

```json
{"processInstanceId": ${processInstanceId}, "callbackTopic": "test.operations.out", "requestID":"1234567890"}
```
Task manager operations include: assignment, unassignment, hold, unhold, terminate. These are matched with the `...operations.out` topic on the engine side. For more information, see the Task Management plugin documentation:
📄 [**Task management plugin**](/4.0/docs/platform-deep-dive/plugins/custom-plugins/task-management/task-management-overview)
#### Topics related to the scheduler extension
| Environment variable | Description | Default value |
| -------------------------------------------- | -------------------------------------------------------- | ---------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_EXPIRE_IN` | Topic for requests to expire processes | `ai.flowx.dev.core.trigger.expire.process.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_SET` | Topic used for scheduling process expiration | `ai.flowx.dev.core.trigger.set.schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_STOP` | Topic used for stopping process expiration | `ai.flowx.dev.core.trigger.stop.schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_IN_RUN_ACTION` | Topic for requests to run scheduled actions | `ai.flowx.dev.core.trigger.run.action.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_IN_ADVANCE` | Topic for events related to advancing through a database | `ai.flowx.dev.core.trigger.advance.process.v1` |
#### Topics related to Timer Events
| Environment variable | Description | Default value |
| --------------------------------------------------- | ----------------------------------------------- | -------------------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_SCHEDULEDTIMEREVENTS_OUT_SET` | Used to communicate with Scheduler microservice | `ai.flowx.dev.core.trigger.set.timer-event-schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULEDTIMEREVENTS_OUT_STOP` | Used to communicate with Scheduler microservice | `ai.flowx.dev.core.trigger.stop.timer-event-schedule.v1` |
#### Topics related to the Search Data service
| Environment variable | Description | Default value |
| ----------------------------- | --------------------------------------------------------- | --------------------------------------------------------- |
| `KAFKA_TOPIC_DATA_SEARCH_IN` | Topic that the Engine listens on for search requests | `ai.flowx.dev.core.trigger.search.data.v1` |
| `KAFKA_TOPIC_DATA_SEARCH_OUT` | Topic used by the Engine to reply after finding a process | `ai.flowx.dev.engine.receive.core.search.data.results.v1` |
#### Topics related to the Audit service
| Environment variable | Description | Default value |
| ----------------------- | ---------------------------- | ----------------------------------------- |
| `KAFKA_TOPIC_AUDIT_OUT` | Topic for sending audit logs | `ai.flowx.dev.core.trigger.save.audit.v1` |
#### Topics related to ES indexing
| Environment variable | Default value |
| ------------------------------- | ------------------------------------ |
| `KAFKA_TOPIC_PROCESS_INDEX_OUT` | `ai.flowx.dev.core.index.process.v1` |
#### Processes that can be started by sending messages to a Kafka topic
| Environment variable | Description | Default value |
| ------------------------------- | ----------------------------------------------------------------- | -------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_START_IN` | Topic for requests to start a new process instance | `ai.flowx.dev.core.trigger.start.process.v1` |
| `KAFKA_TOPIC_PROCESS_START_OUT` | Topic for sending the reply after starting a new process instance | `ai.flowx.dev.core.confirm.start.process.v1` |
#### Topics related to Message Events
| Environment variable | Default value |
| ----------------------------------- | ------------------------------------------------------ |
| `KAFKA_TOPIC_PROCESS_STARTFOREVENT` | `ai.flowx.dev.core.trigger.start-for-event.process.v1` |
#### Topics related to Events-gateway microservice
| Environment variable | Description | Default value |
| ------------------------------------------ | --------------------------------------------------------- | ------------------------------------------------------ |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE` | Outgoing messages from process-engine to events-gateway | `ai.flowx.eventsgateway.engine.commands.message.v1` |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_DISCONNECT` | Disconnect commands from process-engine to events-gateway | `ai.flowx.eventsgateway.engine.commands.disconnect.v1` |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_CONNECT` | Connect commands from process-engine to events-gateway | `ai.flowx.eventsgateway.engine.commands.connect.v1` |
#### Topics related to platform components
| Environment variable | Description | Default value |
| ---------------------------------------------- | ---------------------------------- | -------------------------------------------------------- |
| `KAFKA_TOPIC_PLATFORM_COMPONENTS_VERSIONS_OUT` | Topic for platform version caching | `ai.flowx.dev.core.trigger.platform.versions.caching.v1` |
#### Inter-service topic coordination
When configuring FlowX services, ensure the following:
1. The Engine's `pattern` must match the pattern used by services sending messages to the Engine
2. The `integrationPattern` must match the pattern used by the Integration Designer
3. Output topics from one service must match the expected input topics of another service
For example:
* Services send to topics matching `ai.flowx.dev.engine.receive.*` → Engine listens
* Engine sends to topics matching `ai.flowx.dev.integration.receive.*` → Integration Designer listens
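A sketch of this alignment, with both patterns made explicit (the service block names and the plugin's output-topic variable are hypothetical, used only to illustrate the matching):

```yaml
# Engine: listens on topics matching the receive patterns
process-engine:
  env:
    KAFKA_TOPIC_PATTERN: "ai.flowx.dev.engine.receive.*"
    KAFKA_TOPIC_INTEGRATIONPATTERN: "ai.flowx.dev.integration.receive.*"

# A plugin that replies to the Engine must produce to a topic matching the Engine's pattern
some-plugin:
  env:
    KAFKA_TOPIC_REPLY_OUT: "ai.flowx.dev.engine.receive.plugin.results.v1"  # hypothetical variable name
```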
#### Kafka message size configuration
| Environment variable | Description | Default value |
| ------------------------- | ----------------------------- | ----------------- |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size in bytes | `52428800` (50MB) |
This setting affects:
* Producer message max bytes
* Producer max request size
* Consumer max partition fetch bytes
#### Kafka authentication
For secure environments, you can enable OAuth authentication with the following configuration:
```yaml
spring.config.activate.on-profile: kafka-auth
spring:
  kafka:
    security.protocol: "SASL_PLAINTEXT"
    properties:
      sasl:
        mechanism: "OAUTHBEARER"
        jaas.config: "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"${KAFKA_OAUTH_CLIENT_ID:kafka}\" oauth.client.secret=\"${KAFKA_OAUTH_CLIENT_SECRET:kafka-secret}\" oauth.token.endpoint.uri=\"${KAFKA_OAUTH_TOKEN_ENDPOINT_URI:kafka.auth.localhost}\" ;"
        login.callback.handler.class: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
```
You need to set the following environment variables:
* `KAFKA_OAUTH_CLIENT_ID`
* `KAFKA_OAUTH_CLIENT_SECRET`
* `KAFKA_OAUTH_TOKEN_ENDPOINT_URI`
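A sketch of how these three variables might be supplied (the token endpoint URL and secret name are placeholders):

```yaml
env:
  - name: KAFKA_OAUTH_CLIENT_ID
    value: "kafka"
  - name: KAFKA_OAUTH_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: kafka-oauth           # assumed secret name
        key: client-secret
  - name: KAFKA_OAUTH_TOKEN_ENDPOINT_URI
    value: "https://auth.example.com/realms/kafka-authz/protocol/openid-connect/token"
```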
## Configuring file upload size
| Environment variable | Description | Default value |
| ----------------------------------------- | ---------------------------------------- | ------------- |
| `SPRING_SERVLET_MULTIPART_MAXFILESIZE` | Maximum file size allowed for uploads | `50MB` |
| `SPRING_SERVLET_MULTIPART_MAXREQUESTSIZE` | Maximum request size allowed for uploads | `50MB` |
## Connecting the Advancing controller
To use the advancing controller, configure the following variables:
| Environment variable | Description |
| ------------------------------- | ---------------------------------------- |
| `ADVANCING_DATASOURCE_JDBC_URL` | Connection URL for Advancing Postgres DB |
| `ADVANCING_DATASOURCE_USERNAME` | Username for Advancing DB connection |
| `ADVANCING_DATASOURCE_PASSWORD` | Password for Advancing DB connection |
### Configuring the Advancing controller
| Environment variable | Description | Default value |
| ---------------------------------------------- | -------------------------------------------------- | ---------------------------------------------- |
| `ADVANCING_TYPE`                               | Type of advancing mechanism                        | `PARALLEL` (possible values: `PARALLEL`, `KAFKA`) |
| `ADVANCING_THREADS` | Number of parallel threads | `20` |
| `ADVANCING_PICKINGBATCHSIZE` | Number of tasks to pick in each batch | `10` |
| `ADVANCING_PICKINGPAUSEMILLIS` | Pause duration between batches (ms) | `100` |
| `ADVANCING_COOLDOWNAFTERSECONDS` | Cooldown period after processing a batch (seconds) | `120` |
| `ADVANCING_SCHEDULER_HEARTBEAT_CRONEXPRESSION` | Cron expression for the heartbeat | `"*/2 * * * * ?"` |
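Combining the connection and tuning variables, a sketch might look like this (the JDBC URL and username are placeholders; thread and batch values shown are the defaults):

```yaml
env:
  - name: ADVANCING_DATASOURCE_JDBC_URL
    value: "jdbc:postgresql://postgres.example.com:5432/advancing"   # placeholder host/db
  - name: ADVANCING_DATASOURCE_USERNAME
    value: "advancing"
  - name: ADVANCING_TYPE
    value: "PARALLEL"
  - name: ADVANCING_THREADS
    value: "20"
  - name: ADVANCING_PICKINGBATCHSIZE
    value: "10"
```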
## Configuring cleanup mechanism
| Environment variable | Description | Default value |
| ----------------------------------------- | ------------------------------------------------- | ---------------------------------------------------------- |
| `SCHEDULER_THREADS` | Number of threads for the scheduler | `10` |
| `SCHEDULER_PROCESSCLEANUP_ENABLED` | Activates the cron job for process cleanup | `false` |
| `SCHEDULER_PROCESSCLEANUP_CRONEXPRESSION` | Cron expression for the process cleanup scheduler | `0 */5 0-5 * * ?` (every 5 minutes between 12 AM and 5 AM) |
| `SCHEDULER_PROCESSCLEANUP_BATCHSIZE` | Number of processes to clean up in one batch | `1000` |
| `SCHEDULER_MASTERELECTION_CRONEXPRESSION` | Cron expression for the master election process | `30 */3 * * * ?` (every 3 minutes) |
## Managing subprocesses expiration
| Environment variable | Description | Default value |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ------------- |
| `FLOWX_PROCESS_EXPIRESUBPROCESSES` | When true, terminates all subprocesses upon parent process expiration. When false, subprocesses follow their individual expiration settings | `true` |
## Configuring application management
The following configuration from versions before 4.1 will be deprecated in version 5.0:
* `MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED`: Enables or disables Prometheus metrics export.
Starting from version 4.1, use the following configuration instead. This setup is backwards compatible until version 5.0.
| Environment variable | Description | Default value |
| ---------------------------------------------- | --------------------------------- | ------------- |
| `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` | Enables Prometheus metrics export | `false` |
## RBAC configuration
Process Engine requires specific RBAC permissions for proper access to Kubernetes resources:
```yaml
rbac:
  create: true
  rules:
    - apiGroups:
        - ""
      resources:
        - secrets
        - configmaps
        - pods
      verbs:
        - get
        - list
        - watch
```
# Partitioning & archiving
Source: https://docs.flowx.ai/4.7.x/setup-guides/flowx-engine-setup-guide/process-instance-data-archiving
Improving data management using data partitioning and the archival processes.
## Overview
Starting with release v4.1.1, you can enable data partitioning in the FlowX.AI Engine.
Partitioning and archiving are data management strategies used to handle large volumes of data efficiently. They improve database performance, manageability, and storage optimization. By dividing data into partitions and archiving older or less frequently accessed data, organizations can ensure better data management, quicker query responses, and optimized storage use.
## Partitioning
Partitioning is the process of dividing a large database table into smaller, more manageable pieces, called partitions. Each partition can be managed and queried independently. The primary goal is to improve performance, enhance manageability, and simplify maintenance tasks such as backups and archiving.
Partitions can be created per day, week, or month: a partition ID is computed at insert time for each row of `process_instance` and related tables.
A retention period can then be set up (e.g., 3 partitions). A FlowX Engine-driven cron job (with a configurable start interval) checks whether any partition needs to be archived and performs the necessary actions.
### Benefits of partitioning
* **Improved Query Performance**: By scanning only relevant partitions instead of the entire table.
* **Simplified Maintenance**: Easier to perform maintenance tasks like backups and index maintenance.
* **Data Management**: Better data organization and management by dividing data based on specific criteria such as date, range, or list.
Database Compatibility: OracleDB and PostgreSQL.
## Archiving
Archiving involves moving old data from the primary tables to separate archive tables. This helps in reducing the load on the primary tables, thus improving performance and manageability.
### Benefits of archiving
* **Storage Optimization**: Archived data can be compressed, saving storage space.
* **Performance Improvement**: Reduces the volume of data in primary tables, leading to faster query responses.
* **Historical Data Management**: Maintains historical data in a separate, manageable form.
## Partitioning and archiving in OracleDB
### OracleDB partitioning
OracleDB [**partitioned tables**](https://docs.oracle.com/en/database/oracle/oracle-database/18/vldbg/partition-concepts.html#GUID-6D369646-16AF-487B-BF32-5F6569D27C8A) are utilized, allowing for efficient data management and query performance.
* Each partition can be managed and queried independently.
* Example: a `process_instance` table can be partitioned by day, week, or month.
* Query performance improves because only relevant partitions are scanned.
* Maintenance tasks like backups and archiving are simplified.
### OracleDB archiving
OracleDB archiving works by detaching a partition from the main table and converting it into a standalone table:

1. Identify partitions eligible for archiving based on retention settings.
2. Detach the partition from the main table.
3. Convert the detached partition into a new table named `archived_${table_name}_${interval_name}_${reference}`, for example `archived_process_instance_monthly_2024_03`.
4. Optionally compress the new table to save space.

The reference in the table name is formatted `yyyy-MM-dd` if the interval is `DAY`, `yyyy-MM` if the interval is `MONTH`, or `yyyy-weekOfYear` if the interval is `WEEK`.

This manages historical data by moving it to separate tables, reducing the load on the main table. Oracle offers several compression options that optimize storage, each providing a different level of data compression and impacting storage savings and performance differently:

* **OFF**: No compression is applied.
* **BASIC**: Suitable for read-only or read-mostly environments; compresses data without requiring additional licenses.
* **ADVANCED**: Offers the highest level of compression, supporting a wide range of data types and operations. It requires an Advanced Compression license and provides significant storage savings and performance improvements by keeping data compressed in memory.
For more details, you can refer to Oracle's [**Advanced Compression documentation**](https://www.oracle.com/database/advanced-compression/#rc30related).
## PostgreSQL archiving
On PostgreSQL, archiving involves:

1. Identify partitions eligible for archiving based on retention settings.
2. Create a new archive table following the naming convention `archived__${table_name}__${interval_name}_${reference}`, for example `archived__process_instance__weekly_2024_09`.
3. Move data from the main table to the new archive table in batches.

Because the database performs actual inserts and deletes here (rather than relabeling a partition into a table, as OracleDB does), the data move is batched, and the batch size is configurable so you can control the load on the database.

Differences from OracleDB:

* Archiving involves actual data movement (insert and delete operations), unlike OracleDB where it is mainly a relabeling of partitions.
* The batch size for data movement is configurable, allowing fine-tuning of the archiving process.
## Daily operations
Once set up, the partitioning and archiving process involves the following operations:

1. The `partition_id` is automatically calculated for each inserted row, based on the configured interval (`DAY`, `WEEK`, `MONTH`). Example for daily partitioning: `2024-03-01 13:00:00` results in `partition_id = 124061`. See the [**Partition ID calculation**](#partition-id-calculation) section.
2. Data older than the configured retention interval becomes eligible for archiving and compression.
3. A cron job checks for eligible partitions.
4. Eligible partitions are archived and optionally compressed. The process includes deactivating foreign keys, creating new archive tables, moving data references, reactivating foreign keys, and dropping the original partition.
5. Archived tables remain in the database but occupy less space if compression is enabled.

**Recommendation**: To free up space, consider moving archived tables to a different database or tablespace. You can also move only process instances or copy definitions, depending on your needs.
When enabling partitioning, please consider the following:

* **Ensure process termination**: Make sure that process instances get terminated. Archiving removes process instance data from the working data, making it unavailable in FlowX. Started instances should be finished before archiving takes place.
* **Set process expiry**: To ensure that process instances terminate before archiving, configure process expiration. See [Timer Expressions](../../docs/building-blocks/node/timer-events/timer-expressions) for guidance on setting up process expiry using FlowX Designer.

Future schema updates or migrations will not affect archived tables; they retain the schema from the moment of archiving.
## Configuring partitioning and archiving
The Partitioning and Archiving feature is optional and can be configured as needed.
When starting a new version of the process-engine, we recommend manually executing the setup SQL commands generated by Liquibase, as they may take a long time to run. After setup, all existing information will go into the initial partition.
This section contains environment variables that control the settings for data partitioning and archiving and also for the archiving scheduler. These settings determine how data is partitioned, retained, and managed, including compression and batch processing.
| Environment Variable | Description | Default Value | Possible Values |
| -------------------------------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------ | ------------------------------------- |
| `FLOWX_DATA_PARTITIONING_ENABLED` | Activates data partitioning. | `false` | `true`, `false` |
| `FLOWX_DATA_PARTITIONING_INTERVAL` | Interval for partitioning (the time interval contained in a partition). | `MONTH` | `DAY`, `WEEK`, `MONTH` |
| `FLOWX_DATA_PARTITIONING_RETENTION_INTERVALS` | Number of intervals retained in the FlowX database (for partitioned tables). | `3` | Integer values (e.g., `1`, `2`, `3`) |
| `FLOWX_DATA_PARTITIONING_DETACHED_PARTITION_COMPRESSION` | Enables compression for archived (detached) partitions (Oracle only). | `OFF` | `OFF`, `BASIC`, `ADVANCED` |
| `FLOWX_DATA_PARTITIONING_MOVED_DATA_BATCH_SIZE` | Batch size for moving data (PostgreSQL only). | `5000` | Integer values (e.g., `1000`, `5000`) |
| `SCHEDULER_DATA_PARTITIONING_ENABLED` | Activates the cron job for archiving partitions. | `true` | `true`, `false` |
| `SCHEDULER_DATA_PARTITIONING_CRON_EXPRESSION` | Cron expression for the data partitioning scheduler. | `0 0 1 * * ?` -> every day at 1:00AM | Cron expression (e.g., `0 0 1 * * ?`) |
Compression for archived (detached) partitions is available only for Oracle DBs.
The batch size setting for archiving data is available only for PostgreSQL DBs.
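Putting this together, a deployment might set these variables as follows. This is an illustrative fragment only; the exact placement of environment variables depends on your deployment tooling (Helm values, Kubernetes manifests, etc.), and the values should be tuned to your data volume.

```yaml
# Illustrative values only -- tune interval and retention to your data volume
env:
  FLOWX_DATA_PARTITIONING_ENABLED: "true"
  FLOWX_DATA_PARTITIONING_INTERVAL: "MONTH"
  FLOWX_DATA_PARTITIONING_RETENTION_INTERVALS: "3"
  SCHEDULER_DATA_PARTITIONING_ENABLED: "true"
  SCHEDULER_DATA_PARTITIONING_CRON_EXPRESSION: "0 0 1 * * ?"  # archive daily at 1:00 AM
```

With this configuration, data is partitioned by month, the three most recent monthly partitions stay in the working database, and older partitions are archived by the nightly cron job.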
## Logging information
Partitioning and archiving actions are logged in two tables:
* `DATA_PARTITIONING_LOG`: For tracking archived partitions.

* `DATA_PARTITIONING_LOG_ENTRY`: For logging SQL commands executed for archiving.

## Enabling partitioning and Elasticsearch indexing strategy
When partitioning is enabled, the Elasticsearch indexing strategy must also be enabled and configured in the FlowX Engine setup.
**Why?**
* When archiving process instances, data from Elasticsearch must be deleted, not just the cache but also indexed keys (e.g., those indexed for data search on process instances).
### Elasticsearch indexing configuration
Check the Elasticsearch indexing setup here:
The partitioning configuration must be aligned with the Kafka Elasticsearch Connector configuration, especially the following settings, so that the partitioning and indexing intervals match:
### Index partitioning
* `transforms.routeTS.topic.format: "process_instance-${timestamp}"`: This value must start with the index name defined in the process-engine configuration (`flowx.indexing.processInstance.index-name`). In this example, the index name is prefixed with `process_instance-` and appended with a timestamp for dynamic index creation. For backward compatibility the prefix should be `process_instance-`, although backward compatibility is not strictly required here.
* `transforms.routeTS.timestamp.format: "yyyyMMdd"`: This format ensures that timestamps are consistently represented and easily parsed when creating or searching for indices based on the process instance start date. You can adjust this value as needed (e.g., for monthly indexes, use "yyyyMM"). However, note that changing this format will cause existing indexed objects to remain unchanged, and update messages will be treated as new objects, indexed again in new indices. It is crucial to determine your index size and maintain consistency.
Check the following Kafka Elasticsearch Connector configuration example for more details:
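The fragment below sketches the relevant part of such a connector configuration. It is abridged and illustrative: only the two `transforms.routeTS.*` settings come from the text above, and `TimestampRouter` is the standard Kafka Connect single message transform assumed to back them.

```yaml
# Abridged, illustrative connector settings -- align timestamp.format
# with FLOWX_DATA_PARTITIONING_INTERVAL so index and partition windows match
transforms: "routeTS"
transforms.routeTS.type: "org.apache.kafka.connect.transforms.TimestampRouter"
transforms.routeTS.topic.format: "process_instance-${timestamp}"
transforms.routeTS.timestamp.format: "yyyyMM"   # monthly indices, matching MONTH partitioning
```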
## Technical details
### Partition ID calculation
* The `partition_id` format follows this structure: `{LEVEL}{YEAR}{BIN_ID_OF_YEAR}`. This ID is calculated based on the start date of the `process_instance`, the partition interval, and the partition level.
* `LEVEL`: This represents the "Partitioning level," which increments with each change in the partitioning interval (for example, if it changes from `DAY` to `MONTH` or vice versa).
* `YEAR`: The year extracted from the `process_instance` date.
* `BIN_ID_OF_YEAR`: This is the ID of a bucket associated with the `YEAR`. It is created for all instances within the selected partitioning interval. The maximum number of buckets is determined by the partitioning frequency:
* **Daily**: Up to 366 buckets per year
* **Weekly**: Up to 53 buckets per year
* **Monthly**: Up to 12 buckets per year
#### Calculation example
For a timestamp of `2024-03-01 13:00:00` with a daily partitioning interval, the `partition_id` would be `124061`:
* `1`: Partitioning Level (`LEVEL`)
* `24`: Year - 2024 (`YEAR`)
* `061`: Bucket per configuration (61st day of the year)
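The calculation above can be sketched in a few lines. This is an illustrative reconstruction, not FlowX code; the function name is hypothetical, and the use of ISO week numbers for the `WEEK` interval is an assumption.

```python
from datetime import datetime

def partition_id(start: datetime, level: int, interval: str) -> str:
    """Illustrative sketch of the {LEVEL}{YEAR}{BIN_ID_OF_YEAR} layout."""
    year = start.strftime("%y")                # two-digit year, e.g. "24"
    if interval == "DAY":
        bin_id = start.timetuple().tm_yday     # day of year, 1..366
    elif interval == "WEEK":
        bin_id = int(start.strftime("%V"))     # ISO week, 1..53 (assumption)
    elif interval == "MONTH":
        bin_id = start.month                   # 1..12
    else:
        raise ValueError(f"unknown interval: {interval}")
    return f"{level}{year}{bin_id:03d}"

# 2024-03-01 is the 61st day of a leap year
print(partition_id(datetime(2024, 3, 1, 13, 0), level=1, interval="DAY"))  # 124061
```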
### Archived tables
* Naming format: `archived__${table_name}__${interval_name}_${reference}`. Examples:
* `archived__process_instance__monthly_2024_03`
* `archived__process_instance__weekly_2024_09`
* `archived__process_instance__daily_2024_03_06`
# Integration Designer setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/integration-designer-setup
This guide explains how to configure the Integration Designer service using environment variables.
## Infrastructure prerequisites
The Integration Designer service requires the following components to be set up before it can be started:
| Component | Version | Purpose |
| ------------- | ------- | ------------------------------------- |
| Kubernetes | 1.19+ | Container orchestration |
| PostgreSQL | 13+ | Advancing data source |
| MongoDB | 4.4+ | Integration configurations |
| Kafka | 2.8+ | Event-driven communication |
| OAuth2 Server | - | Authentication (Keycloak recommended) |
## Configuration
### Core service configuration
| Environment variable | Description | Example value |
| -------------------- | ----------------------------- | --------------------------- |
| `CONFIG_PROFILE` | Spring configuration profiles | `k8stemplate_v2,kafka-auth` |
| `LOGGING_LEVEL_APP` | Application logging level | `INFO` |
### WebClient configuration
Integration Designer interacts with various APIs, some of which return large responses. To handle such cases efficiently, the FlowX WebClient buffer size must be configured to accommodate larger payloads, especially when working with legacy APIs that do not support pagination.
| Environment variable | Description | Default value |
| ---------------------------- | ------------------------------------------ | --------------- |
| `FLOWX_WEBCLIENT_BUFFERSIZE` | Buffer size (in bytes) for FlowX WebClient | `1048576` (1MB) |
If you encounter **truncated API responses** or **unexpected errors when fetching large payloads**, consider **increasing the buffer size** to at least **10MB** by setting `FLOWX_WEBCLIENT_BUFFERSIZE=10485760`. This ensures smooth handling of large API responses, particularly for legacy APIs without pagination support.
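For example, the increased buffer size might be set alongside the other service environment variables (illustrative fragment; placement depends on your deployment tooling):

```yaml
env:
  FLOWX_WEBCLIENT_BUFFERSIZE: "10485760"  # 10 MB, for large unpaginated API responses
```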
### Database configuration
#### PostgreSQL
| Environment variable | Description | Example value |
| ---------------------------------------- | ------------------- | --------------------------------------------- |
| `ADVANCING_DATASOURCE_URL` | PostgreSQL JDBC URL | `jdbc:postgresql://postgresql:5432/advancing` |
| `ADVANCING_DATASOURCE_USERNAME` | Database username | `flowx` |
| `ADVANCING_DATASOURCE_PASSWORD` | Database password | `securePassword` |
| `ADVANCING_DATASOURCE_DRIVER_CLASS_NAME` | JDBC driver | `org.postgresql.Driver` |
#### MongoDB
Integration Designer requires two MongoDB databases for managing integration-specific data and runtime data:
* **Integration Designer Database** (`integration-designer`): Stores data specific to Integration Designer, such as integration configurations, metadata, and other operational data.
* **Shared Runtime Database** (`app-runtime`): Shared across multiple services, this database manages runtime data essential for integration and data flow execution.
| Environment variable | Description | Example value |
| --------------------------------- | -------------------------------------- | ------------------------------------------------------------------------ |
| `SPRING_DATA_MONGODB_URI` | Integration Designer MongoDB URI | `mongodb://mongodb-0.mongodb-headless:27017/integration-designer` |
| `MONGODB_USERNAME` | MongoDB username | `integration-designer` |
| `MONGODB_PASSWORD` | MongoDB password | `secureMongoPass` |
| `SPRING_DATA_MONGODB_STORAGE` | Storage type (Azure environments only) | `mongodb` (or `cosmosdb`) |
| `SPRING_DATA_MONGODB_RUNTIME_URI` | Runtime MongoDB URI | `mongodb://mongodb-0.mongodb-headless:27017/${MONGODB_RUNTIME_DATABASE}` |
| `MONGODB_RUNTIME_DATABASE` | Runtime MongoDB database | `app-runtime` |
| `MONGODB_RUNTIME_USERNAME` | Runtime MongoDB username | `app-runtime` |
| `MONGODB_RUNTIME_PASSWORD` | Runtime MongoDB password | `secureRuntimePass` |
Integration Designer requires a runtime connection to function correctly. Starting the service without a configured and active runtime MongoDB connection is not supported.
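Taken together, a minimal MongoDB configuration might look like the following. The values are illustrative examples taken from the table above; adjust hosts, database names, and credentials for your environment.

```yaml
# Illustrative values -- both connections must be reachable at startup
env:
  SPRING_DATA_MONGODB_URI: "mongodb://mongodb-0.mongodb-headless:27017/integration-designer"
  MONGODB_USERNAME: "integration-designer"
  SPRING_DATA_MONGODB_RUNTIME_URI: "mongodb://mongodb-0.mongodb-headless:27017/app-runtime"
  MONGODB_RUNTIME_DATABASE: "app-runtime"
  MONGODB_RUNTIME_USERNAME: "app-runtime"
```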
### Kafka configuration
#### Kafka connection and security variables
| Environment variable | Description | Example value |
| -------------------------------- | ---------------------- | ------------------------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Kafka broker addresses | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol | `PLAINTEXT` or `SASL_PLAINTEXT` |
| `FLOWX_WORKFLOW_CREATETOPICS` | Auto-create topics | `false` (default) |
#### Message size configuration
| Environment variable | Description | Default value |
| ------------------------- | -------------------- | ----------------- |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size | `52428800` (50MB) |
This setting affects:
* Producer message max bytes
* Producer max request size
#### Consumer configuration
| Environment variable | Description | Default value |
| ----------------------------------------------- | ------------------------------------------ | ------------------------------------------------------ |
| `KAFKA_CONSUMER_GROUPID_STARTWORKFLOWS` | Start workflows consumer group | `start-workflows-group` |
| `KAFKA_CONSUMER_GROUPID_RESELEMUSAGEVALIDATION` | Resource usage validation consumer group | `integration-designer-res-elem-usage-validation-group` |
| `KAFKA_CONSUMER_THREADS_STARTWORKFLOWS` | Start workflows consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_RESELEMUSAGEVALIDATION` | Resource usage validation consumer threads | `3` |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Retry interval after authorization errors | `10` (seconds) |
#### Topic naming convention and pattern creation
The Integration Designer uses a structured topic naming convention that follows a standardized pattern, ensuring consistency across environments and making topics easily identifiable.
##### Topic naming components
| Environment variable | Description | Default value |
| ---------------------------------------------- | --------------------------- | ---------------------- |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Base package for topics | `ai.flowx.` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment identifier | `dev.` |
| `KAFKA_TOPIC_NAMING_VERSION` | Topic version | `.v1` |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Topic name separator | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Alternative separator | `-` |
| `KAFKA_TOPIC_NAMING_ENGINERECEIVEPATTERN` | Engine receive pattern | `engine.receive.` |
| `KAFKA_TOPIC_NAMING_INTEGRATIONRECEIVEPATTERN` | Integration receive pattern | `integration.receive.` |
Topics are constructed using the following pattern:
```
{prefix} + service + {separator/dot} + action + {separator/dot} + detail + {suffix}
```
For example, a typical topic might look like:
```
ai.flowx.dev.eventsgateway.receive.workflowinstances.v1
```
Where:
* `ai.flowx.dev.` is the prefix (package + environment)
* `eventsgateway` is the service
* `receive` is the action
* `workflowinstances` is the detail
* `.v1` is the suffix (version)
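The assembly of a topic name from the naming components can be sketched as follows. This is an illustrative reconstruction (the function name is hypothetical); note that the default component values in the table above already carry their separators (`ai.flowx.`, `dev.`, `.v1`).

```python
def topic_name(package: str, environment: str, service: str,
               action: str, detail: str, version: str, sep: str = ".") -> str:
    # {prefix} + service + {separator} + action + {separator} + detail + {suffix}
    prefix = f"{package}{environment}"   # e.g. "ai.flowx." + "dev." -> "ai.flowx.dev."
    return f"{prefix}{service}{sep}{action}{sep}{detail}{version}"

print(topic_name("ai.flowx.", "dev.", "eventsgateway", "receive",
                 "workflowinstances", ".v1"))
# ai.flowx.dev.eventsgateway.receive.workflowinstances.v1
```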
#### Kafka topic configuration
##### Core topics
| Environment variable | Description | Default Pattern |
| ----------------------- | ---------------------------- | ----------------------------------------- |
| `KAFKA_TOPIC_AUDIT_OUT` | Topic for sending audit logs | `ai.flowx.dev.core.trigger.save.audit.v1` |
##### Events gateway topics
| Environment variable | Description | Default Pattern |
| --------------------------------------- | ------------------------------------------ | --------------------------------------------------------- |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE` | Topic for workflow instances communication | `ai.flowx.dev.eventsgateway.receive.workflowinstances.v1` |
##### Engine and Integration communication topics
| Environment variable | Description | Default Pattern |
| -------------------------------- | ------------------------------------- | ------------------------------------ |
| `KAFKA_TOPIC_ENGINEPATTERN` | Pattern for Engine communication | `ai.flowx.dev.engine.receive.` |
| `KAFKA_TOPIC_INTEGRATIONPATTERN` | Pattern for Integration communication | `ai.flowx.dev.integration.receive.*` |
##### Application resource usage topics
| Environment variable | Description | Default Pattern |
| --------------------------------------------------- | -------------------------------------------- | --------------------------------------------------------------------------------------------- |
| `KAFKA_TOPIC_APPLICATION_IN_RESELEMUSAGEVALIDATION` | Topic for resource usage validation requests | `ai.flowx.dev.application-version.resources-usages.sub-res-validation.request-integration.v1` |
| `KAFKA_TOPIC_RESOURCESUSAGES_REFRESH` | Topic for resource usage refresh commands | `ai.flowx.dev.application-version.resources-usages.refresh.v1` |
#### OAuth authentication variables (when using SASL\_PLAINTEXT)
| Environment variable | Description | Example value |
| -------------------------------- | -------------------- | ---------------------- |
| `KAFKA_OAUTH_CLIENT_ID` | OAuth client ID | `kafka` |
| `KAFKA_OAUTH_CLIENT_SECRET` | OAuth client secret | `kafka-secret` |
| `KAFKA_OAUTH_TOKEN_ENDPOINT_URI` | OAuth token endpoint | `kafka.auth.localhost` |
#### Inter-Service topic coordination
When configuring Kafka topics in the FlowX ecosystem, ensure proper coordination between services:
1. **Topic name matching**: Output topics from one service must match the expected input topics of another service.
2. **Pattern consistency**: The pattern values must be consistent across services:
* Process Engine listens to topics matching: `ai.flowx.dev.engine.receive.*`
* Integration Designer listens to topics matching: `ai.flowx.dev.integration.receive.*`
3. **Communication flow**:
* Other services write to topics matching the Engine's pattern → Process Engine listens
* Process Engine writes to topics matching the Integration Designer's pattern → Integration Designer listens
The exact pattern value isn't critical, but it must be identical across all connected services. Some deployments require Kafka topics to be created manually in advance rather than dynamically; in such cases, all topic names must be explicitly defined and coordinated across services.
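As an illustration, the two services might be configured with matching pattern values like this. Only the two pattern values come from the tables above; the surrounding structure and the assumption that both services expose the same variable names are deployment-specific.

```yaml
# Illustrative: both services must agree on each other's receive patterns
process-engine:
  env:
    KAFKA_TOPIC_INTEGRATIONPATTERN: "ai.flowx.dev.integration.receive.*"
integration-designer:
  env:
    KAFKA_TOPIC_ENGINEPATTERN: "ai.flowx.dev.engine.receive."
```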
### Kafka topics best practices
#### Large message handling for workflow instances topic
The workflow instances topic requires special configuration to handle large messages. By default, Kafka has message size limitations that may prevent Integration Designer from processing large workflow payloads.
**Recommended `max.message.bytes` value**: `10485760` (10 MB)
#### Method: Update using AKHQ (Recommended)
1. **Access AKHQ**
* Open the AKHQ web interface
* Log in if authentication is required
2. **Navigate to Topic**
* Go to the "Topics" section
* Find the topic: `ai.flowx.dev.eventsgateway.receive.workflowinstances.v1`
3. **Edit Configuration**
* Click on the topic name
* Go to the "Configuration" tab
* Locate or add `max.message.bytes`
* Set the value to `10485760`
* Save changes
### Configuring authentication and access roles
Integration Designer uses OAuth2 for secure access control. Set up OAuth2 configurations with these environment variables:
| Environment variable | Description | Example value |
| ----------------------------------------------------- | ---------------------------------------------------- | ----------------------------------- |
| `SECURITY_OAUTH2_BASE_SERVER_URL` | Base URL for OAuth2 authorization server | `https://keycloak.example.com/auth` |
| `SECURITY_OAUTH2_REALM` | Realm for OAuth2 authentication | `flowx` |
| `SECURITY_OAUTH2_CLIENT_CLIENT_ID` | Client ID for Integration Designer OAuth2 client | `integration-designer` |
| `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` | Client Secret for Integration Designer OAuth2 client | `client-secret` |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID` | Client ID for admin service account | `admin-client` |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` | Client Secret for admin service account | `admin-secret` |
For detailed instructions on configuring user roles and access rights, refer to:
For configuring a service account, refer to:
### Configuring logging
To control the log levels for Integration Designer, set the following environment variables:
| Environment variable | Description | Example value |
| -------------------- | ---------------------------- | ------------- |
| `LOGGING_LEVEL_ROOT` | Root Spring Boot logs level | `INFO` |
| `LOGGING_LEVEL_APP` | Application-level logs level | `INFO` |
### Configuring admin ingress
Integration Designer provides an admin ingress route, which can be enabled and customized with additional annotations for SSL certificates or routing preferences.
```yaml
ingress:
enabled: true
admin:
enabled: true
hostname: "{{ .Values.flowx.ingress.admin }}"
path: /integration(/|$)(.*)
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
```
### Monitoring and maintenance
To monitor the performance and health of the Integration Designer, use tools like Prometheus or Grafana. Configure Prometheus metrics with:
| Environment variable | Description | Default value |
| ---------------------------------------------- | -------------------------------- | ------------- |
| `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` | Enable Prometheus metrics export | `false` |
### RBAC configuration
Integration Designer requires specific RBAC (Role-Based Access Control) permissions to access Kubernetes ConfigMaps and Secrets, which store necessary configurations and credentials. Set up these permissions by enabling RBAC and defining the required rules:
```yaml
rbac:
create: true
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
- pods
verbs:
- get
- list
- watch
```
This configuration grants read access (`get`, `list`, `watch`) to ConfigMaps, Secrets, and Pods, which is essential for retrieving application settings and credentials required by Integration Designer.
## Additional resources
* [Integration Designer Documentation](../docs/platform-deep-dive/integrations/integration-designer)
# Deployment configuration for OpenTelemetry
Source: https://docs.flowx.ai/4.7.x/setup-guides/open-telemetry-config
Guide to deploying OpenTelemetry components and configuring associated services.
### Step 1: Install OpenTelemetry Operator
Ensure you have the OpenTelemetry Operator version **0.56.1** or higher:
```yaml
- repoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
chart: opentelemetry-operator
targetRevision: 0.56.1
```
#### Configuration:
```yaml
# Source: https://github.com/open-telemetry/opentelemetry-helm-charts/blob/opentelemetry-operator-0.56.1/charts/opentelemetry-operator/values.yaml
## Provide OpenTelemetry Operator manager container image and resources.
manager:
image:
repository: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator
tag: ""
collectorImage:
repository: "otel/opentelemetry-collector-contrib"
tag: 0.98.0
opampBridgeImage:
repository: ""
tag: ""
targetAllocatorImage:
repository: ""
tag: ""
autoInstrumentationImage:
java:
repository: ""
tag: ""
nodejs:
repository: ""
tag: ""
python:
repository: ""
tag: ""
dotnet:
repository: ""
tag: ""
# The Go instrumentation support in the operator is disabled by default.
# To enable it, use the operator.autoinstrumentation.go feature gate.
go:
repository: ""
tag: ""
# Feature Gates are a comma-delimited list of feature gate identifiers.
# Prefix a gate with '-' to disable support.
# Prefixing a gate with '+' or no prefix will enable support.
# A full list of valid identifiers can be found here: https://github.com/open-telemetry/opentelemetry-operator/blob/main/pkg/featuregate/featuregate.go
featureGates: ""
ports:
metricsPort: 8080
webhookPort: 9443
healthzPort: 8081
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 64Mi
## Adds additional environment variables
## e.g ENV_VAR: env_value
env:
ENABLE_WEBHOOKS: "true"
## Admission webhooks make sure only requests with correctly formatted rules will get into the Operator.
## They also enable the sidecar injection for OpenTelemetryCollector and Instrumentation CR's
admissionWebhooks:
create: true
servicePort: 443
failurePolicy: Fail
```
Ensure you use the appropriate distribution and version:
```yaml
repository: "otel/opentelemetry-collector-contrib"
tag: 0.98.0
```
### Step 2: Deploy OpenTelemetry Resources
Apply the OpenTelemetry resources:
#### OpenTelemetry Collector
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: otel
namespace: {{ .Release.Namespace }}
spec:
mode: deployment
resources:
limits:
cpu: "2"
memory: 6Gi
requests:
cpu: 200m
memory: 2Gi
config: |
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
# Since this collector needs to receive data from the web, enable cors for all origins
# `allowed_origins` can be refined for your deployment domain
endpoint: 0.0.0.0:4318
cors:
allowed_origins:
- "http://*"
- "https://*"
zipkin:
exporters:
debug: {}
## Create an exporter to Jaeger using the standard `otlp` export format
otlp/tempo:
endpoint: 'otel-tempo:4317'
tls:
insecure: true
# Create an exporter to Prometheus (metrics)
otlphttp/prometheus:
endpoint: 'http://otel-prometheus-server:9090/api/v1/otlp'
tls:
insecure: true
loki:
endpoint: "http://otel-loki:3100/loki/api/v1/push"
default_labels_enabled:
exporter: true
job: true
extensions:
health_check:
memory_ballast: {}
processors:
batch: {}
k8sattributes:
extract:
metadata:
- k8s.namespace.name
- k8s.deployment.name
- k8s.statefulset.name
- k8s.daemonset.name
- k8s.cronjob.name
- k8s.job.name
- k8s.node.name
- k8s.pod.name
- k8s.pod.uid
- k8s.pod.start_time
passthrough: false
pod_association:
- sources:
- from: resource_attribute
name: k8s.pod.ip
- sources:
- from: resource_attribute
name: k8s.pod.uid
- sources:
- from: connection
resource:
attributes:
- key: service.instance.id
from_attribute: k8s.pod.uid
action: insert
memory_limiter:
check_interval: 5s
limit_percentage: 80
spike_limit_percentage: 25
connectors:
spanmetrics: {}
service:
extensions:
- health_check
- memory_ballast
pipelines:
traces:
processors: [memory_limiter, resource, batch]
exporters: [otlp/tempo, debug, spanmetrics]
receivers: [otlp]
metrics:
receivers: [otlp, spanmetrics]
processors: [memory_limiter, resource, batch]
exporters: [otlphttp/prometheus, debug]
logs:
processors: [memory_limiter, resource, batch]
exporters: [loki, debug]
receivers: [otlp]
```
#### OpenTelemetry Instrumentation
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
name: flowx-otel-instrumentation
namespace: {{ .Release.Namespace }}
spec:
exporter:
endpoint: http://otel-collector:4317
propagators:
- tracecontext
- baggage
sampler:
type: parentbased_traceidratio
argument: "1"
java:
image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:2.1.0
env:
- name: OTEL_INSTRUMENTATION_LOGBACKAPPENDER_ENABLED
value: "true"
- name: OTEL_LOGS_EXPORTER
value: otlp
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://otelcol-operator-collector:4317
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: grpc
```
### Step 3: Instrument FlowX Services
Update FlowX services to enable Java instrumentation by adding the following pod annotation:
```yaml
podAnnotations:
instrumentation.opentelemetry.io/inject-java: "true"
```
### Step 4: Configure Grafana, Prometheus and Tempo
#### Configure Grafana
```yaml
##Source: https://github.com/grafana/helm-charts/blob/grafana-7.3.10/charts/grafana/values.yaml
rbac:
create: true
namespaced: true
serviceAccount:
create: true
replicas: 1
ingress:
enabled: true
ingressClassName: nginx
path: /
hosts:
- {{ .Values.flowx.ingress.grafana }}
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 5Gi
finalizers: []
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Loki
uid: loki
type: loki
access: proxy
url: http://otel-loki:3100
timeout: 300
version: 1
jsonData:
derivedFields:
- datasourceUid: tempo
# matches variations of "traceID" fields, either in JSON or logfmt
matcherRegex: (?:[Tt]race[-_]?[Ii][Dd])[\\]?["]?[=:][ ]?[\\]?["]?(\w+)
name: traceid
# url will be interpreted as query for the datasource
url: "$${__value.raw}"
urlDisplayLabel: See traces
- name: Prometheus
uid: prometheus
type: prometheus
url: 'http://otel-prometheus-server:9090'
editable: true
isDefault: true
jsonData:
exemplarTraceIdDestinations:
- datasourceUid: tempo
name: traceid
- name: Tempo
uid: tempo
type: tempo
access: proxy
url: http://otel-tempo:3100
version: 1
jsonData:
tracesToLogs:
datasourceUid: loki
lokiSearch:
datasourceUid: loki
nodeGraph:
enabled: true
serviceMap:
datasourceUid: prometheus
tracesToLogsV2:
customQuery: false
datasourceUid: loki
filterBySpanID: true
filterByTraceID: true
spanEndTimeShift: 1s
spanStartTimeShift: '-1s'
tags:
- key: service.name
value: job
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
editable: true
options:
path: /var/lib/grafana/dashboards/default
resources:
limits:
memory: 150Mi
grafana.ini:
auth:
disable_login_form: true
auth.anonymous:
enabled: false
org_name: Main Org.
org_role: Admin
auth.generic_oauth:
enabled: true
name: Keycloak-OAuth
allow_sign_up: true
client_id: private-management-sa
client_secret: xTs4yGYySrHaNDIpCiniHJUGqBKbyCtp
scopes: openid email profile offline_access roles
email_attribute_path: email
login_attribute_path: username
name_attribute_path: full_name
auth_url: https://{{ .Values.flowx.keycloak.host }}/auth/realms/{{ .Values.flowx.keycloak.realm }}/protocol/openid-connect/auth
token_url: https://{{ .Values.flowx.keycloak.host }}/auth/realms/{{ .Values.flowx.keycloak.realm }}/protocol/openid-connect/token
api_url: https://{{ .Values.flowx.keycloak.host }}/auth/realms/{{ .Values.flowx.keycloak.realm }}/protocol/openid-connect/userinfo
# role_attribute_path: contains(roles[*], 'admin') && 'Admin' || contains(roles[*], 'editor') && 'Editor' || 'Viewer'
server:
root_url: "https://{{ .Values.flowx.ingress.grafana }}/grafana"
serve_from_sub_path: true
adminPassword: admin
# assertNoLeakedSecrets is a helper function defined in _helpers.tpl that checks if secret
# values are not exposed in the rendered grafana.ini configmap. It is enabled by default.
#
# To pass values into grafana.ini without exposing them in a configmap, use variable expansion:
# https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#variable-expansion
#
# Alternatively, if you wish to allow secret values to be exposed in the rendered grafana.ini configmap,
# you can disable this check by setting assertNoLeakedSecrets to false.
assertNoLeakedSecrets: false
```
#### Configure Prometheus
```yaml
# https://github.com/prometheus-community/helm-charts/blob/prometheus-22.6.7/charts/prometheus/values.yaml
rbac:
create: true
podSecurityPolicy:
enabled: false
## Define serviceAccount names for components. Defaults to component's fully qualified name.
##
serviceAccounts:
server:
create: true
alertmanager:
enabled: false
alertmanagerFiles:
alertmanager.yml:
global: {}
# slack_api_url: ''
receivers:
- name: default-receiver
# slack_configs:
# - channel: '@you'
# send_resolved: true
route:
group_wait: 10s
group_interval: 5m
receiver: default-receiver
repeat_interval: 3h
configmapReload:
prometheus:
enabled: false
kube-state-metrics:
enabled: false
prometheus-node-exporter:
enabled: false
prometheus-pushgateway:
enabled: false
server:
useExistingClusterRoleName: prometheus-server
## If set it will override prometheus.server.fullname value for ClusterRole and ClusterRoleBinding
##
clusterRoleNameOverride: ""
# Enable only the release namespace for monitoring. By default all namespaces are monitored.
# If releaseNamespace and namespaces are both set a merged list will be monitored.
releaseNamespace: false
## namespaces to monitor (instead of monitoring all - clusterwide). Needed if you want to run without Cluster-admin privileges.
# namespaces:
# - namespace
extraFlags:
- "web.enable-lifecycle"
- "enable-feature=exemplar-storage"
- "enable-feature=otlp-write-receiver"
global:
scrape_interval: 5s
scrape_timeout: 3s
evaluation_interval: 30s
persistentVolume:
enabled: true
mountPath: /data
## Prometheus server data Persistent Volume size
##
size: 10Gi
service:
servicePort: 9090
## Prometheus server resource requests and limits
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
limits:
cpu: 2000m
memory: 4096Mi
requests:
cpu: 500m
memory: 2048Mi
## Prometheus data retention period (default if not specified is 15 days)
##
retention: "3d"
## Prometheus' data retention size. Supported units: B, KB, MB, GB, TB, PB, EB.
##
retentionSize: ""
serverFiles:
prometheus.yml:
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
- job_name: 'otel-collector'
honor_labels: true
kubernetes_sd_configs:
- role: pod
namespaces:
own_namespace: true
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_opentelemetry_community_demo]
action: keep
regex: true
```
#### Configure Loki with a Minio backend
```yaml
# https://raw.githubusercontent.com/grafana/loki/helm-loki-6.2.0/production/helm/loki/single-binary-values.yaml
loki:
auth_enabled: false # https://grafana.com/docs/loki/latest/operations/multi-tenancy/#multi-tenancy
commonConfig:
replication_factor: 1
schemaConfig:
configs:
- from: 2024-04-01
store: tsdb
object_store: s3
schema: v13
index:
prefix: loki_index_
period: 24h
ingester:
chunk_encoding: snappy
tracing:
enabled: true
querier:
# Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
max_concurrent: 2
deploymentMode: SingleBinary
singleBinary:
replicas: 1
resources:
limits:
cpu: 3
memory: 4Gi
requests:
cpu: 1
memory: 2Gi
extraEnv:
# Keep a little bit lower than memory limits
- name: GOMEMLIMIT
value: 3750MiB
chunksCache:
# default is 500MB, with limited memory keep this smaller
writebackSizeLimit: 10MB
# Enable minio for storage
minio:
enabled: true
rootUser: enterprise-logs
rootPassword: supersecret
buckets:
- name: chunks
policy: none
purge: false
- name: ruler
policy: none
purge: false
- name: admin
policy: none
purge: false
persistence:
size: 10Gi
lokiCanary:
enabled: false
# Zero out replica counts of other deployment modes
backend:
replicas: 0
read:
replicas: 0
write:
replicas: 0
ingester:
replicas: 0
querier:
replicas: 0
queryFrontend:
replicas: 0
queryScheduler:
replicas: 0
distributor:
replicas: 0
compactor:
replicas: 0
indexGateway:
replicas: 0
bloomCompactor:
replicas: 0
bloomGateway:
replicas: 0
```
#### Configure Tempo
```yaml
# https://github.com/grafana/helm-charts/blob/tempo-1.7.2/charts/tempo/values.yaml
tempo:
# configure a 3 days retention by default
retention: 72h
# enable opentelemetry protocol & jaeger receivers
# this configuration will listen on all ports and protocols that tempo is capable of.
# the receivers all come from the OpenTelemetry collector. More configuration information can
# be found here: https://github.com/open-telemetry/opentelemetry-collector/tree/master/receiver
receivers:
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_binary:
endpoint: 0.0.0.0:6832
thrift_compact:
endpoint: 0.0.0.0:6831
thrift_http:
endpoint: 0.0.0.0:14268
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
persistence:
enabled: true
size: 10Gi
# -- Pod Annotations
podAnnotations:
prometheus.io/port: prom-metrics
prometheus.io/scrape: "true"
```
# OpenTelemetry default properties
Source: https://docs.flowx.ai/4.7.x/setup-guides/ot-default-properties
* `otel.resource.attributes=service.name=flowx-process-engine,service.version=1.1.1`
  Overridden by the Kubernetes operator through the corresponding environment variable; setting it here is mainly useful for local development.
### Java agent configuration
* `otel.javaagent.enabled=true`
* `otel.javaagent.logging=simple`
* `otel.javaagent.debug=false`
### Disable OTEL SDK
* `otel.sdk.disabled=false`
Set this property to `true` to disable the OpenTelemetry SDK entirely.
## Exporters configuration (common config for all exporters)
* `otel.traces.exporter=otlp`
* `otel.metrics.exporter=otlp`
* `otel.logs.exporter=otlp`
### OTLP exporter
* `otel.exporter.otlp.endpoint=http://localhost:4317`
The endpoint is overridden by the Kubernetes operator; the default is useful for local development.
* `otel.exporter.otlp.protocol=grpc`
* `otel.exporter.otlp.timeout=10000`
* `otel.exporter.otlp.compression=gzip`
* `otel.exporter.otlp.metrics.temporality.preference=cumulative`
* `otel.exporter.otlp.metrics.default.histogram.aggregation=explicit_bucket_histogram`
### Tracer provider
The following configuration options are specific to `SdkTracerProvider`.
* Sampler configuration reference: [**here**](https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#sampler)
* The default sampler is `parentbased_always_on`
* To disable FlowX technical spans, use the `fxTechnicalSpanFilterSampler` sampler
* `otel.traces.sampler=parentbased_always_on`
### Batch span processor
* `otel.bsp.schedule.delay=5000`
* `otel.bsp.max.queue.size=2048`
* `otel.bsp.max.export.batch.size=512`
* `otel.bsp.export.timeout=30000`
### Meter provider
The following configuration options are specific to `SdkMeterProvider`.
* `otel.metric.export.interval=60000`
* `otel.metric.export.timeout=10000`
### Logger provider
The following configuration options are specific to `SdkLoggerProvider`.
* `otel.blrp.schedule.delay=1000`
* `otel.blrp.max.queue.size=2048`
* `otel.blrp.max.export.batch.size=512`
* `otel.blrp.export.timeout=30000`
## Agent auto-instrumentation
* `otel.instrumentation.messaging.experimental.receive-telemetry.enabled=false`
* `otel.instrumentation.common.experimental.controller-telemetry.enabled=true`
* `otel.instrumentation.common.experimental.view-telemetry.enabled=true`
* `otel.instrumentation.common.default-enabled=false`
This disables all auto-instrumentation so that only the explicitly enabled instrumentations run. To restore the agent's default behavior, comment this property out entirely.
### Disable annotated methods
* `otel.instrumentation.opentelemetry-instrumentation-annotations.exclude-methods=my.package.MyClass1[method1,method2];my.package.MyClass2[method3]`
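The value uses `;` to separate class entries and square brackets to list the excluded methods of each class. A small illustrative parser (not part of the agent) showing how the syntax decomposes:

```python
def parse_exclude_methods(spec: str) -> dict:
    """Split 'pkg.Class1[m1,m2];pkg.Class2[m3]' into {class: [methods]}."""
    result = {}
    for entry in filter(None, spec.split(";")):
        cls, _, methods = entry.partition("[")
        result[cls] = methods.rstrip("]").split(",")
    return result

spec = "my.package.MyClass1[method1,method2];my.package.MyClass2[method3]"
print(parse_exclude_methods(spec))
# {'my.package.MyClass1': ['method1', 'method2'], 'my.package.MyClass2': ['method3']}
```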
### Instrumentation config per library
Some instrumentation relies on other instrumentation to function properly. When selectively enabling instrumentation, be sure to enable the transitive dependencies too.
* `otel.instrumentation.opentelemetry-api.enabled=true`
* `otel.instrumentation.opentelemetry-instrumentation-annotations.enabled=true`
* `otel.instrumentation.opentelemetry-extension-annotations.enabled=false`
* `otel.instrumentation.methods.enabled=true`
* `otel.instrumentation.external-annotations.enabled=true`
* `otel.instrumentation.kafka.enabled=true`
* `otel.instrumentation.tomcat.enabled=true`
* `otel.instrumentation.elasticsearch-transport.enabled=true`
* `otel.instrumentation.elasticsearch-rest.enabled=true`
* `otel.instrumentation.grpc.enabled=true`
* `otel.instrumentation.hibernate.enabled=false`
Hibernate and JDBC instrumentation largely duplicate each other's query traces, so both are disabled.
* `otel.instrumentation.hikaricp.enabled=false`
* `otel.instrumentation.java-http-client.enabled=true`
* `otel.instrumentation.http-url-connection.enabled=true`
* `otel.instrumentation.jdbc.enabled=false`
* `otel.instrumentation.jdbc-datasource.enabled=false`
* `otel.instrumentation.runtime-telemetry.enabled=true`
* `otel.instrumentation.servlet.enabled=true`
* `otel.instrumentation.executors.enabled=true`
* `otel.instrumentation.java-util-logging.enabled=true`
* `otel.instrumentation.log4j-appender.enabled=true`
* `otel.instrumentation.log4j-mdc.enabled=true`
* `otel.instrumentation.log4j-context-data.enabled=true`
* `otel.instrumentation.logback-appender.enabled=true`
* `otel.instrumentation.logback-mdc.enabled=true`
* `otel.instrumentation.mongo.enabled=true`
* `otel.instrumentation.rxjava.enabled=false`
* `otel.instrumentation.reactor.enabled=false`
### Redis client imported by spring-data-redis
* `otel.instrumentation.lettuce.enabled=true`
## Spring instrumentation props
* `otel.instrumentation.spring-batch.enabled=false`
* `otel.instrumentation.spring-core.enabled=true`
* `otel.instrumentation.spring-data.enabled=true`
* `otel.instrumentation.spring-jms.enabled=false`
* `otel.instrumentation.spring-integration.enabled=false`
* `otel.instrumentation.spring-kafka.enabled=true`
* `otel.instrumentation.spring-rabbit.enabled=false`
* `otel.instrumentation.spring-rmi.enabled=false`
* `otel.instrumentation.spring-scheduling.enabled=false`
* `otel.instrumentation.spring-web.enabled=true`
* `otel.instrumentation.spring-webflux.enabled=false`
* `otel.instrumentation.spring-webmvc.enabled=true`
* `otel.instrumentation.spring-ws.enabled=false`
# Documents plugin access rights
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-access-rights/configuring-access-rights-for-documents
Granular access rights can be configured for restricting access to the Documents plugin component.
The following access authorizations are provided, with the specified access scopes:
1. **Manage-document-templates** - for configuring access for managing document templates
Available scopes:
* import - users are able to import document templates
* read - users are able to view document templates
* edit - users are able to edit document templates
* admin - users are able to publish or delete document templates
The Documents plugin is preconfigured with the following default user roles for each of the access scopes mentioned above:
* manage-document-templates
  * import:
    * `ROLE_DOCUMENT_TEMPLATES_IMPORT`
    * `ROLE_DOCUMENT_TEMPLATES_EDIT`
    * `ROLE_DOCUMENT_TEMPLATES_ADMIN`
  * read:
    * `ROLE_DOCUMENT_TEMPLATES_READ`
    * `ROLE_DOCUMENT_TEMPLATES_IMPORT`
    * `ROLE_DOCUMENT_TEMPLATES_EDIT`
    * `ROLE_DOCUMENT_TEMPLATES_ADMIN`
  * edit:
    * `ROLE_DOCUMENT_TEMPLATES_EDIT`
    * `ROLE_DOCUMENT_TEMPLATES_ADMIN`
  * admin:
    * `ROLE_DOCUMENT_TEMPLATES_ADMIN`
These roles need to be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
Possible values for AUTHORIZATIONNAME: `MANAGEDOCUMENTTEMPLATES`.
Possible values for SCOPENAME: import, read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGEDOCUMENTTEMPLATES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
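The variable name is assembled mechanically from the authorization name and the scope. A small illustrative helper (not part of the platform; the comma-separated value format for multiple roles is an assumption) that reproduces the example above:

```python
def access_rights_var(authorization: str, scope: str, roles: list) -> str:
    """Compose a SECURITY_ACCESSAUTHORIZATIONS_* setting from its parts.

    Illustrative only: the env-var name segments follow the pattern shown
    in the docs; joining multiple roles with a comma is an assumption.
    """
    key = (f"SECURITY_ACCESSAUTHORIZATIONS_{authorization.upper()}"
           f"_SCOPES_{scope.upper()}_ROLESALLOWED")
    return f"{key}: {','.join(roles)}"

print(access_rights_var("MANAGEDOCUMENTTEMPLATES", "read", ["ROLE_NAME_TEST"]))
# SECURITY_ACCESSAUTHORIZATIONS_MANAGEDOCUMENTTEMPLATES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```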
# Notifications plugin access rights
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-access-rights/configuring-access-rights-for-notifications
Granular access rights can be configured for restricting access to the Notification plugin component.
The following access authorizations are provided, with the specified access scopes:
1. **Manage-notification-templates** - for configuring access for managing notification templates
Available scopes:
* import - users are able to import notification templates
* read - users are able to view notification templates
* edit - users are able to edit notification templates
* admin - users are able to publish or delete notification templates
The Notifications plugin is preconfigured with the following default user roles for each of the access scopes mentioned above:
* manage-notification-templates
  * import:
    * `ROLE_NOTIFICATION_TEMPLATES_IMPORT`
    * `ROLE_NOTIFICATION_TEMPLATES_EDIT`
    * `ROLE_NOTIFICATION_TEMPLATES_ADMIN`
  * read:
    * `ROLE_NOTIFICATION_TEMPLATES_READ`
    * `ROLE_NOTIFICATION_TEMPLATES_IMPORT`
    * `ROLE_NOTIFICATION_TEMPLATES_EDIT`
    * `ROLE_NOTIFICATION_TEMPLATES_ADMIN`
  * edit:
    * `ROLE_NOTIFICATION_TEMPLATES_EDIT`
    * `ROLE_NOTIFICATION_TEMPLATES_ADMIN`
  * admin:
    * `ROLE_NOTIFICATION_TEMPLATES_ADMIN`
These roles need to be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
Possible values for AUTHORIZATIONNAME: `MANAGENOTIFICATIONTEMPLATES`.
Possible values for SCOPENAME: import, read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGENOTIFICATIONTEMPLATES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
# Task management plugin access rights
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-access-rights/configuring-access-rights-for-task-management
Configure granular access rights to control access to the Task Management Plugin components.
The Task Management Plugin provides configurable access rights through specific authorizations, each with defined scopes. Here's a detailed breakdown:
## Access authorizations and scopes
1. **manage-tasks** - for configuring access for viewing the tasks lists
Available scopes:
* **read** - users are able to view tasks
2. **manage-hooks** - for configuring access for managing hooks
Available scopes:
* **import** - users are able to import hooks
* **read** - users are able to view hooks
* **edit** - users are able to edit hooks
* **admin** - users are able to delete hooks
3. **manage-process-allocation-settings** - for configuring access for managing process allocation settings
Available scopes:
* **import** - users are able to import allocation rules
* **read** - users are able to read/export allocation rules
* **edit** - users are able to create and edit allocation rules
* **admin** - users are able to delete allocation rules
4. **manage-out-of-office-users** - for configuring access for managing out-of-office users
Available scopes:
* **read** - users are able to view out-of-office records
* **edit** - users are able to create and edit out-of-office records
* **admin** - users are able to delete out-of-office records
5. **manage-views** - for managing views
Available scopes:
* **read** - users are able to access views
* **edit** - users are able to edit views
* **import** - users are able to import views
## Preconfigured roles for access scopes
The Task Management Plugin comes with predefined user roles for each access scope:
### Manage Tasks
* **read**:
  * `ROLE_TASK_MANAGER_TASKS_READ`
### Manage Hooks
* **import**:
  * `ROLE_TASK_MANAGER_HOOKS_IMPORT`
  * `ROLE_TASK_MANAGER_HOOKS_EDIT`
  * `ROLE_TASK_MANAGER_HOOKS_ADMIN`
* **read**:
  * `ROLE_TASK_MANAGER_HOOKS_READ`
  * `ROLE_TASK_MANAGER_HOOKS_IMPORT`
  * `ROLE_TASK_MANAGER_HOOKS_EDIT`
  * `ROLE_TASK_MANAGER_HOOKS_ADMIN`
* **edit**:
  * `ROLE_TASK_MANAGER_HOOKS_EDIT`
  * `ROLE_TASK_MANAGER_HOOKS_ADMIN`
* **admin**:
  * `ROLE_TASK_MANAGER_HOOKS_ADMIN`
### Manage Process Allocation Settings
* **import**:
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_IMPORT`
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_EDIT`
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_ADMIN`
* **read**:
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_READ`
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_IMPORT`
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_EDIT`
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_ADMIN`
* **edit**:
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_EDIT`
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_ADMIN`
* **admin**:
  * `ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_ADMIN`
### Manage Out-of-Office Users
* **read**:
  * `ROLE_TASK_MANAGER_OOO_READ`
  * `ROLE_TASK_MANAGER_OOO_EDIT`
  * `ROLE_TASK_MANAGER_OOO_ADMIN`
* **edit**:
  * `ROLE_TASK_MANAGER_OOO_EDIT`
  * `ROLE_TASK_MANAGER_OOO_ADMIN`
* **admin**:
  * `ROLE_TASK_MANAGER_OOO_ADMIN`
### Manage Views
* **read**:
  * `ROLE_TASK_MANAGER_VIEWS_READ`
  * `ROLE_TASK_MANAGER_VIEWS_IMPORT`
  * `ROLE_TASK_MANAGER_VIEWS_EDIT`
  * `ROLE_TASK_MANAGER_VIEWS_ADMIN`
* **edit**:
  * `ROLE_TASK_MANAGER_VIEWS_EDIT`
  * `ROLE_TASK_MANAGER_VIEWS_ADMIN`
* **admin**:
  * `ROLE_TASK_MANAGER_VIEWS_ADMIN`
These roles need to be defined in the chosen identity provider solution.
## Configuring custom roles
If additional custom roles are required, you can configure them using environment variables. Multiple roles can be set for each access scope.
**Configuration format**
```bash
SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES
```
# Documents plugin setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-setup-guide/documents-plugin-setup
The Documents plugin provides functionality for generating, persisting, combining, and manipulating documents within the FlowX.AI system.
The plugin is available as a Docker image.
## Dependencies
Before setting up the plugin, ensure that you have the following dependencies installed and configured:
* [PostgreSQL](https://www.postgresql.org/) Database: You will need a PostgreSQL database to store data related to document templates and documents.
* [MongoDB](https://www.mongodb.com/) Database: MongoDB is required for the HTML templates feature of the plugin.
* Kafka: Establish a connection to the Kafka instance used by the FLOWX.AI engine.
* [Redis](https://redis.io/): Set up a Redis instance for caching purposes.
* S3-Compatible File Storage Solution: Deploy an S3-compatible file storage solution, such as [Min.io](https://min.io/), to store document files.
## Configuration
The plugin comes with pre-filled configuration properties, but you need to set up a few custom environment variables to tailor it to your specific setup. Here are the key configuration steps:
### Redis server
The plugin can utilize the [Redis component](https://app.gitbook.com/@flowx-ai/s/flowx-docs/flowx-engine/setup-guide#2-redis-server) already deployed for the FLOWX.AI engine. Make sure it is configured properly.
### Authorization configuration
To connect to the identity management platform, set the following environment variables:
| Environment Variable | Description | Default Value |
| ------------------------------------- | ----------------------------------------- | ----------------------------------- |
| `SECURITY_OAUTH2_BASESERVERURL` | Base URL of the OAuth2/OIDC server | `https://keycloak.example.com/auth` |
| `SECURITY_OAUTH2_CLIENT_CLIENTID` | OAuth2 client ID for the Documents Plugin | `document-plugin` |
| `SECURITY_OAUTH2_CLIENT_CLIENTSECRET` | OAuth2 client secret | `secret` |
| `SECURITY_OAUTH2_REALM` | OAuth2 realm name | `flowx` |
### Document processing configuration
| Environment Variable | Description | Default Value |
| ----------------------------------- | --------------------------------------- | ----------------------------------------------------------- |
| `FLOWX_DEFAULTCLIENTTYPE` | Default client type | `PF` |
| `FLOWX_HTML_TEMPLATES_ENABLED` | Enable HTML templates feature | `false` |
| `FLOWX_HTML_TEMPLATES_PDFFONTPATHS` | Paths to fonts for HTML templates | `/statics/fonts/Calibri.ttf, /statics/fonts/DejaVuSans.ttf` |
| `FLOWX_CONVERT_DPI` | DPI setting for PDF to image conversion | `150` |
Set `FLOWX_HTML_TEMPLATES_PDFFONTPATHS` to the font files used when generating documents from PDF templates. Calibri and DejaVuSans are available by default; to use other fonts, override this property, and the fonts you add become available within PDF templates.
`FLOWX_CONVERT_DPI` sets the DPI (dots per inch) for PDF-to-image conversion; higher values produce higher-resolution images (default: `150`).
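To see what `FLOWX_CONVERT_DPI` means in practice: the pixel dimensions of a converted page are simply its physical size in inches multiplied by the DPI. A quick illustration for an A4 page (8.27 × 11.69 in), using integer math to avoid float rounding:

```python
# A4 page size in hundredths of an inch.
A4_W, A4_H = 827, 1169

def page_pixels(dpi: int) -> tuple:
    """Pixel dimensions of an A4 page rendered at the given DPI."""
    return A4_W * dpi // 100, A4_H * dpi // 100

print(page_pixels(150))  # default FLOWX_CONVERT_DPI -> (1240, 1753)
print(page_pixels(300))  # doubling the DPI doubles both dimensions -> (2481, 3507)
```

Higher DPI yields sharper images at the cost of larger files and slower conversion.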
### Database configuration
#### SQL database (PostgreSQL)
The Documents Plugin uses a PostgreSQL database for relational data storage.
#### Primary MongoDB configuration
| Environment Variable | Description | Example Value |
| ----------------------------------------- | -------------------------- | ------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_URI` | MongoDB connection URI | `mongodb://flowx:password@jx-document-mongodb:27017/document` |
| `MONGODB_USERNAME` | MongoDB username | `flowx` |
| `MONGODB_PASSWORD` | MongoDB password | `password` |
| `SPRING_DATA_MONGODB_UUID_REPRESENTATION` | UUID representation format | `standard` |
| `SPRING_DATA_MONGODB_STORAGE` | MongoDB storage type | `mongodb` |
#### Runtime MongoDB configuration
| Environment Variable | Description | Example Value |
| ------------------------------------------------ | ------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_RUNTIME_ENABLED` | Enable runtime MongoDB connection | `true` |
| `SPRING_DATA_MONGODB_RUNTIME_URI` | Runtime MongoDB connection URI | `mongodb://${MONGODB_USERNAME:flowx}:${MONGODB_PASSWORD:password}@jx-document-mongodb:27017/document` |
| `MONGODB_USERNAME` | Runtime MongoDB username | `paperflow` |
| `MONGODB_PASSWORD` | Runtime MongoDB password | `password` |
| `SPRING_DATA_MONGODB_RUNTIME_AUTOINDEXCREATION` | Enable automatic index creation | `false` |
| `SPRING_DATA_MONGODB_RUNTIME_UUIDREPRESENTATION` | UUID representation format for runtime connection | `standard` |
### Redis configuration
Set the following values with the corresponding Redis-related values:
| Environment Variable | Description | Default Value |
| ---------------------------- | ---------------------------------- | ----------------------- |
| `SPRING_DATA_REDIS_HOST` | Hostname of the Redis server | `localhost` |
| `SPRING_DATA_REDIS_PORT` | Port number of the Redis server | `6379` |
| `SPRING_DATA_REDIS_PASSWORD` | Password for Redis authentication | `defaultpassword` |
| `REDIS_TTL`                  | Time-to-live (TTL) for Redis cache | `5000000` (milliseconds) |
### Multipart upload configuration
| Environment Variable | Description | Default Value |
| ----------------------------------------- | ------------------------------------------ | ------------- |
| `SPRING_SERVLET_CONTEXTPATH` | Servlet context path | `/` |
| `SPRING_SERVLET_MULTIPART_MAXFILESIZE` | Maximum file size for uploads | `50MB` |
| `SPRING_SERVLET_MULTIPART_MAXREQUESTSIZE` | Maximum request size for multipart uploads | `50MB` |
## Basic Kafka configuration
| Environment Variable | Description | Default Value |
| ---------------------------------- | ---------------------------------- | ----------------------------------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Address of Kafka server(s) | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol for Kafka | `PLAINTEXT` |
| `SPRING_KAFKA_CONSUMER_GROUPID` | Consumer group ID for the service | `kafka-svc-document-consumer-local-test2` |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size in bytes | `52428800` (50MB) |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Retry interval for auth exceptions | `10` (seconds) |
## Topic naming configuration
| Environment Variable | Description | Default Value |
| -------------------------------- | ----------------------------------- | ------------- |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Primary separator for topic names | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Secondary separator for topic names | `-` |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Package prefix for topic names | `ai.flowx.` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment segment for topic names | `dev.` |
| `KAFKA_TOPIC_NAMING_VERSION` | Version suffix for topic names | `.v1` |
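Reading the defaults together, a topic name is the package prefix, the environment segment, the topic's base name, and the version suffix concatenated in order. An illustrative composition (inferred from the default values in this table; the service may assemble names differently):

```python
def topic_name(base: str,
               package: str = "ai.flowx.",     # KAFKA_TOPIC_NAMING_PACKAGE
               environment: str = "dev.",      # KAFKA_TOPIC_NAMING_ENVIRONMENT
               version: str = ".v1") -> str:   # KAFKA_TOPIC_NAMING_VERSION
    """Assemble a Kafka topic name from the KAFKA_TOPIC_NAMING_* parts."""
    return f"{package}{environment}{base}{version}"

print(topic_name("plugin.document.trigger.persist.document"))
# ai.flowx.dev.plugin.document.trigger.persist.document.v1
```

This matches the default value of `KAFKA_TOPIC_DOCUMENT_PERSIST_IN` shown below.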
## Thread configuration
| Environment Variable | Description | Default Value |
| --------------------------------------------------------------------------------------------- | ------------------------------------ | ------------- |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_DOCUMENTGENERATEHTMLIN` | Threads for HTML document generation | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_DOCUMENTPERSISTIN` | Threads for document persistence | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_DOCUMENTSPLITTIN` | Threads for document splitting | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_DOCUMENTGETURLSIN` | Threads for document URL retrieval | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_OCRIN` | Threads for OCR processing | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_FILEDELETEIN` | Threads for file deletion | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_FILEUPDATEIN` | Threads for file updates | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_FILECONVERTIN` | Threads for file conversion | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_FILEPERSISTIN` | Threads for file persistence | `5` |
| `KAFKA_CONSUMER_THREADPOOLS_THREADPOOLGENERIC_THREADCOUNTPERCONTAINER_FILECOMBINEIN` | Threads for file combining | `5` |
## Document topics
### Generate HTML documents
| Environment Variable | Description | Default Value |
| ---------------------------------------- | ------------------------------------------- | ---------------------------------------------------------------------- |
| `KAFKA_TOPIC_DOCUMENT_GENERATE_HTML_IN` | Topic for incoming HTML generation requests | `ai.flowx.dev.plugin.document.trigger.generate.html.v1` |
| `KAFKA_TOPIC_DOCUMENT_GENERATE_HTML_OUT` | Topic for HTML generation results | `ai.flowx.dev.engine.receive.plugin.document.generate.html.results.v1` |
### Document persistence
| Environment Variable | Description | Default Value |
| ---------------------------------- | ------------------------------------------------ | ------------------------------------------------------------------------- |
| `KAFKA_TOPIC_DOCUMENT_PERSIST_IN` | Topic for incoming document persistence requests | `ai.flowx.dev.plugin.document.trigger.persist.document.v1` |
| `KAFKA_TOPIC_DOCUMENT_PERSIST_OUT` | Topic for document persistence results | `ai.flowx.dev.engine.receive.plugin.document.persist.document.results.v1` |
### Document splitting
| Environment Variable | Description | Default Value |
| -------------------------------- | ---------------------------------------------- | ----------------------------------------------------------------------- |
| `KAFKA_TOPIC_DOCUMENT_SPLIT_IN` | Topic for incoming document splitting requests | `ai.flowx.dev.plugin.document.trigger.split.document.v1` |
| `KAFKA_TOPIC_DOCUMENT_SPLIT_OUT` | Topic for document splitting results | `ai.flowx.dev.engine.receive.plugin.document.split.document.results.v1` |
### Document URL retrieval
| Environment Variable | Description | Default Value |
| ----------------------------------- | ----------------------------------------- | ------------------------------------------------------------- |
| `KAFKA_TOPIC_DOCUMENT_GET_URLS_IN` | Topic for incoming URL retrieval requests | `ai.flowx.dev.plugin.document.retrieve.urls.v1` |
| `KAFKA_TOPIC_DOCUMENT_GET_URLS_OUT` | Topic for URL retrieval results | `ai.flowx.dev.engine.receive.plugin.document.urls.results.v1` |
### OCR processing
| Environment Variable | Description | Default Value |
| --------------------- | -------------------------------- | ------------------------------------------------------------ |
| `KAFKA_TOPIC_OCR_IN` | Topic for incoming OCR requests | `ai.flowx.dev.plugin.document.store.ocr.v1` |
| `KAFKA_TOPIC_OCR_OUT` | Topic for OCR processing results | `ai.flowx.dev.engine.receive.plugin.document.ocr.results.v1` |
## File operation topics
### File deletion
| Environment Variable | Description | Default Value |
| ----------------------------- | ----------------------------------------- | -------------------------------------------------------------------- |
| `KAFKA_TOPIC_FILE_DELETE_IN` | Topic for incoming file deletion requests | `ai.flowx.dev.plugin.document.trigger.delete.file.v1` |
| `KAFKA_TOPIC_FILE_DELETE_OUT` | Topic for file deletion results | `ai.flowx.dev.engine.receive.plugin.document.delete.file.results.v1` |
### File update
| Environment Variable | Description | Default Value |
| ----------------------------- | --------------------------------------- | -------------------------------------------------------------------- |
| `KAFKA_TOPIC_FILE_UPDATE_IN` | Topic for incoming file update requests | `ai.flowx.dev.plugin.document.trigger.update.file.v1` |
| `KAFKA_TOPIC_FILE_UPDATE_OUT` | Topic for file update results | `ai.flowx.dev.engine.receive.plugin.document.update.file.results.v1` |
### File conversion
| Environment Variable           | Description                       | Default Value                                                         |
| ------------------------------ | --------------------------------- | --------------------------------------------------------------------- |
| `KAFKA_TOPIC_FILE_CONVERT_OUT` | Topic for file conversion results | `ai.flowx.dev.engine.receive.plugin.document.convert.file.results.v1` |
### File persistence
| Environment Variable | Description | Default Value |
| ------------------------------ | -------------------------------------------- | --------------------------------------------------------------------- |
| `KAFKA_TOPIC_FILE_PERSIST_IN` | Topic for incoming file persistence requests | `ai.flowx.dev.plugin.document.trigger.persist.file.v1` |
| `KAFKA_TOPIC_FILE_PERSIST_OUT` | Topic for file persistence results | `ai.flowx.dev.engine.receive.plugin.document.persist.file.results.v1` |
### File combination
| Environment Variable | Description | Default Value |
| ------------------------------ | -------------------------------------------- | --------------------------------------------------------------------- |
| `KAFKA_TOPIC_FILE_COMBINE_IN` | Topic for incoming file combination requests | `ai.flowx.dev.plugin.document.trigger.combine.file.v1` |
| `KAFKA_TOPIC_FILE_COMBINE_OUT` | Topic for file combination results | `ai.flowx.dev.engine.receive.plugin.document.combine.file.results.v1` |
## Audit
| Environment Variable | Description | Default Value |
| ----------------------- | ---------------------------- | ----------------------------------------- |
| `KAFKA_TOPIC_AUDIT_OUT` | Topic for sending audit logs | `ai.flowx.dev.core.trigger.save.audit.v1` |
### General storage configuration
| Environment Variable | Description | Default Value |
| ------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------- |
| `APPLICATION_DEFAULTLOCALE` | Default locale for the application | `en` |
| `APPLICATION_SUPPORTEDLOCALES` | Comma-separated list of supported locales | `en, ro` |
| `APPLICATION_FILESTORAGE_TYPE` | Storage type to use (`s3` or `fileSystem`) | `s3` |
| `APPLICATION_FILESTORAGE_DISKDIRECTORY` | Directory for file storage when using filesystem | `MS_SVC_DOCUMENT` |
| `APPLICATION_FILESTORAGE_PARTITIONSTRATEGY` | Strategy for file organization (`NONE` or `PROCESS_DATE`) | `NONE` |
| `APPLICATION_FILESTORAGE_DELETIONSTRATEGY` | Strategy for deleting files (`delete`, `disabled`, or `deleteBypassingGovernanceRetention`) | `delete` |
## S3-Compatible storage configuration
| Environment Variable | Description | Default Value |
| ---------------------------------------------- | ------------------------------------------------- | ----------------------------- |
| `APPLICATION_FILESTORAGE_S3_ENABLED` | Enable S3-compatible storage | `true` |
| `APPLICATION_FILESTORAGE_S3_SERVERURL` | URL of MinIO or S3-compatible server | `http://minio-service:9000` |
| `APPLICATION_FILESTORAGE_S3_ACCESSKEY` | Access key for MinIO/S3 | `minio` |
| `APPLICATION_FILESTORAGE_S3_SECRETKEY` | Secret key for MinIO/S3 | `minio123` |
| `APPLICATION_FILESTORAGE_S3_BUCKETPREFIX` | Prefix for bucket names | `qdevlocal-preview-paperflow` |
| `APPLICATION_FILESTORAGE_S3_TEMPBUCKET` | Name of temporary bucket for initial file uploads | `temp-bucket` |
| `APPLICATION_FILESTORAGE_S3_ENCRYPTIONENABLED` | Enable server-side encryption | `false` |
Make sure to follow the recommended [bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html) when choosing the bucket prefix name.
### Logging configuration
| Environment Variable | Description | Default Value |
| ---------------------------- | ------------------------------------------- | ------------- |
| `LOGGING_LEVEL_APP` | Log level for application logs | `DEBUG` |
| `LOGGING_LEVEL_MONGO_DRIVER` | Log level for MongoDB driver | `INFO` |
| `LOGGING_LEVEL_LIQUIBASE` | Log level for Liquibase database migrations | `INFO` |
| `LOGGING_LEVEL_REDIS` | Log level for Redis/Lettuce client | `OFF` |
| `LOGGING_LEVEL_ROOT` | Root logging level for the application | - |
# Notifications plugin setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-setup-guide/notifications-plugin-setup
The Notifications plugin is available as a Docker image and has the following dependencies.
## Dependencies
Before setting up the plugin, ensure you have the following dependencies:
* A [MongoDB](https://www.mongodb.com/) database for storing notification templates and records
* Access to the Kafka instance used by the FlowX.AI Engine
* A [Redis](https://redis.io/) instance for caching notification templates
* An S3-compatible file storage solution (e.g., [Min.io](https://min.io/)) if you need to attach documents to notifications
## Authorization configuration
Set these variables to connect to your identity management platform:
| Environment Variable | Description | Default Value |
| ------------------------------------- | --------------------------------------------- | ------------- |
| `SECURITY_OAUTH2_BASESERVERURL` | Base URL of the OAuth2/OIDC server | - |
| `SECURITY_OAUTH2_CLIENT_CLIENTID` | OAuth2 client ID for the Notifications Plugin | - |
| `SECURITY_OAUTH2_CLIENT_CLIENTSECRET` | OAuth2 client secret | - |
| `SECURITY_OAUTH2_REALM` | OAuth2 realm name | - |
## MongoDB configuration
Only the database access details need to be configured; the plugin handles the rest.
| Environment Variable | Description | Default Value |
| ------------------------- | --------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_URI` | MongoDB connection URI | `mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-0.mongodb-headless,mongodb-1.mongodb-headless,mongodb-arbiter-0.mongodb-arbiter-headless:27017/notification-plugin` |
| `DB_USERNAME` | Username for runtime MongoDB connection | `notification-plugin` |
| `DB_PASSWORD` | Password for runtime MongoDB connection | `password` |
## Redis configuration
Configure your Redis caching instance:
| Environment Variable | Description | Default Value |
| ----------------------- | ------------------------------------------- | ----------------- |
| `SPRING_REDIS_HOST` | Redis server hostname | `localhost` |
| `SPRING_REDIS_PORT` | Redis server port | `6379` |
| `SPRING_REDIS_PASSWORD` | Redis server password | `defaultpassword` |
| `REDIS_TTL` | Time-to-live for Redis cache (milliseconds) | `5000000` |
## Kafka configuration
### Core Kafka settings
| Environment Variable | Description | Default Value |
| ---------------------------------- | ------------------------------------------------------- | ------------------ |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Address of the Kafka server(s) | `localhost:9092` |
| `SPRING_KAFKA_SECURITYPROTOCOL` | Security protocol for Kafka connections | `PLAINTEXT` |
| `SPRING_KAFKA_CONSUMERGROUPID` | Consumer group identifier | `notif123-preview` |
| `KAFKA_MESSAGEMAXBYTES` | Maximum message size (bytes) | `52428800` (50MB) |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Retry interval after authorization exceptions (seconds) | `10` |
| `KAFKA_CONSUMER_THREADS` | Number of consumer threads | `1` |
### Topic naming configuration
| Environment Variable | Description | Default Value |
| -------------------------------- | ----------------------------------- | ------------- |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Primary separator for topic names | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Secondary separator for topic names | `-` |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Package prefix for topic names | `ai.flowx.` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment segment for topic names | `dev.` |
| `KAFKA_TOPIC_NAMING_VERSION` | Version suffix for topic names | `.v1` |
### Topic configurations
Each action in the service corresponds to a Kafka event on a specific topic. Configure the following topics:
#### OTP topics
| Environment Variable | Description | Default Value |
| ------------------------------ | ------------------------------------------ | ------------------------------------------------------------------------- |
| `KAFKA_TOPIC_OTP_GENERATE_IN` | Topic for incoming OTP generation requests | `ai.flowx.dev.plugin.notification.trigger.generate.otp.v1` |
| `KAFKA_TOPIC_OTP_GENERATE_OUT` | Topic for OTP generation results | `ai.flowx.dev.engine.receive.plugin.notification.generate.otp.results.v1` |
| `KAFKA_TOPIC_OTP_VALIDATE_IN` | Topic for incoming OTP validation requests | `ai.flowx.dev.plugin.notification.trigger.validate.otp.v1` |
| `KAFKA_TOPIC_OTP_VALIDATE_OUT` | Topic for OTP validation results | `ai.flowx.dev.engine.receive.plugin.notification.validate.otp.results.v1` |
#### Notification topics
| Environment Variable | Description | Default Value |
| --------------------------------------- | ------------------------------------------------------ | ------------------------------------------------------------------------------ |
| `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN` | Topic for incoming notification requests | `ai.flowx.dev.plugin.notification.trigger.send.notification.v1` |
| `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` | Topic for notification delivery confirmations | `ai.flowx.dev.engine.receive.plugin.notification.confirm.send.notification.v1` |
| `KAFKA_TOPIC_NOTIFICATION_EXTERNAL_OUT` | Topic for forwarding notifications to external systems | `ai.flowx.dev.plugin.notification.trigger.forward.notification.v1` |
#### Audit topic
| Environment Variable | Description | Default Value |
| ----------------------- | ---------------------------- | ----------------------------------------- |
| `KAFKA_TOPIC_AUDIT_OUT` | Topic for sending audit logs | `ai.flowx.dev.core.trigger.save.audit.v1` |
### File storage configuration
Based on use case you can use directly a file system or an S3 compatible cloud storage solution (for example [min.io](http://min.io/)).
The file storage solution can be configured using the following environment variables:
| Environment Variable | Description | Default Value |
| ---------------------------------------------- | ------------------------------------------------ | ----------------------------- |
| `APPLICATION_FILESTORAGE_TYPE` | Storage type to use (`s3` or `fileSystem`) | `s3` |
| `APPLICATION_FILESTORAGE_DISKDIRECTORY` | Directory for file storage when using filesystem | `MS_SVC_NOTIFICATION` |
| `APPLICATION_FILESTORAGE_S3_ENABLED` | Enable S3-compatible storage | `true` |
| `APPLICATION_FILESTORAGE_S3_SERVERURL` | URL of S3-compatible server | `http://minio-service:9000` |
| `APPLICATION_FILESTORAGE_S3_ENCRYPTIONENABLED` | Enable server-side encryption | `false` |
| `APPLICATION_FILESTORAGE_S3_ACCESSKEY` | Access key for S3 | `minio` |
| `APPLICATION_FILESTORAGE_S3_SECRETKEY` | Secret key for S3 | `secret` |
| `APPLICATION_FILESTORAGE_S3_BUCKETPREFIX` | Prefix for bucket names | `qdevlocal-preview-paperflow` |
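As the table suggests, `APPLICATION_FILESTORAGE_TYPE` decides whether files go to the S3-compatible server or to a local directory. The helper below is an illustrative sketch of that selection, not the plugin's actual code:

```python
def resolve_storage_backend(env: dict) -> str:
    """Pick the storage target based on APPLICATION_FILESTORAGE_TYPE:
    's3' uses the S3-compatible server URL, anything else falls back
    to the configured disk directory (defaults mirror the table above)."""
    storage_type = env.get("APPLICATION_FILESTORAGE_TYPE", "s3")
    if storage_type == "s3":
        return env.get("APPLICATION_FILESTORAGE_S3_SERVERURL", "http://minio-service:9000")
    return env.get("APPLICATION_FILESTORAGE_DISKDIRECTORY", "MS_SVC_NOTIFICATION")

print(resolve_storage_backend({}))  # defaults to the S3 server URL
print(resolve_storage_backend({"APPLICATION_FILESTORAGE_TYPE": "fileSystem"}))
```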
### SMTP setup
Configure SMTP settings for sending email notifications:
| Environment Variable | Description | Default Value |
| ---------------------------------- | ----------------------------------------------------------- | ---------------------------- |
| `SIMPLEJAVAMAIL_SMTP_HOST` | SMTP server hostname | `smtp.gmail.com` |
| `SIMPLEJAVAMAIL_SMTP_PORT` | SMTP server port | `587` |
| `SIMPLEJAVAMAIL_SMTP_USERNAME` | SMTP server username | `notification.test@flowx.ai` |
| `SIMPLEJAVAMAIL_SMTP_PASSWORD`     | SMTP server password                                        | `password`                   |
| `SIMPLEJAVAMAIL_TRANSPORTSTRATEGY` | Email transport strategy (e.g., `SMTP`, `EXTERNAL_FORWARD`) | `SMTP` |
| `APPLICATION_MAIL_FROM_EMAIL` | Default sender email address | `notification.test@flowx.ai` |
| `APPLICATION_MAIL_FROM_NAME` | Default sender name | `Notification Test` |
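The plugin itself sends mail through Simple Java Mail; purely as an illustration, the Python sketch below shows how a message using the default sender settings above would be assembled (an actual send would open an SMTP connection with STARTTLS on port 587):

```python
from email.message import EmailMessage

# Values mirroring the APPLICATION_MAIL_FROM_* defaults above.
FROM_EMAIL = "notification.test@flowx.ai"
FROM_NAME = "Notification Test"

def build_notification(to_addr: str, subject: str, body: str) -> EmailMessage:
    """Assemble a plain-text notification email with the default sender."""
    msg = EmailMessage()
    msg["From"] = f"{FROM_NAME} <{FROM_EMAIL}>"
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_notification("user@example.com", "Welcome", "Hello from FlowX")
print(msg["From"])
```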
### Email attachments configuration
Configure handling of email attachments:
| Environment Variable | Description | Default Value |
| -------------------------------------- | ------------------------------------------ | ------------- |
| `SPRING_HTTP_MULTIPART_MAXFILESIZE` | Maximum file size for attachments | `15MB` |
| `SPRING_HTTP_MULTIPART_MAXREQUESTSIZE` | Maximum request size for multipart uploads | `15MB` |
### OTP configuration
Configure One-Time Password generation and validation:
| Environment Variable | Description | Default Value |
| ------------------------------- | -------------------------------------- | ------------------- |
| `FLOWX_OTP_LENGTH` | Number of characters in generated OTPs | `4` |
| `FLOWX_OTP_EXPIRETIMEINSECONDS` | Expiry time for OTPs (seconds)         | `6000` (100 minutes) |
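As an illustration of how these two settings interact, here is a hypothetical OTP generator and validator; the plugin's real implementation may differ:

```python
import secrets
import time

def generate_otp(length: int = 4, expire_seconds: int = 6000) -> dict:
    """Generate a numeric one-time password with an expiry timestamp,
    mirroring FLOWX_OTP_LENGTH and FLOWX_OTP_EXPIRETIMEINSECONDS."""
    code = "".join(secrets.choice("0123456789") for _ in range(length))
    return {"code": code, "expires_at": time.time() + expire_seconds}

def is_valid(otp: dict, candidate: str) -> bool:
    """An OTP validates only if the code matches and it has not expired."""
    return candidate == otp["code"] and time.time() < otp["expires_at"]

otp = generate_otp()
print(len(otp["code"]))  # 4
```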
### Logging configuration
Control logging levels for different components:
| Environment Variable | Description | Default Value |
| ---------------------------- | ----------------------------------------- | ------------- |
| `LOGGING_LEVEL_ROOT` | Root logging level | - |
| `LOGGING_LEVEL_APP` | Application-specific log level | `DEBUG` |
| `LOGGING_LEVEL_MONGO_DRIVER` | MongoDB driver log level | `INFO` |
| `LOGGING_LEVEL_THYMELEAF` | Thymeleaf template engine log level | `INFO` |
| `LOGGING_LEVEL_FCM_CLIENT` | Firebase Cloud Messaging client log level | `OFF` |
| `LOGGING_LEVEL_REDIS` | Redis/Lettuce client log level | `OFF` |
## Usage notes
### Topic naming convention
Topics follow a standardized naming convention:
* Example: `ai.flowx.dev.plugin.notification.trigger.generate.otp.v1`
* Structure: `{package}.{environment}.{component}.{action}.{subject}.{version}`
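Using the `KAFKA_TOPIC_NAMING_*` defaults from the tables above, the OTP generation input topic can be assembled like this:

```python
# Assemble a topic name from the KAFKA_TOPIC_NAMING_* defaults listed above.
package = "ai.flowx."   # KAFKA_TOPIC_NAMING_PACKAGE
environment = "dev."    # KAFKA_TOPIC_NAMING_ENVIRONMENT
version = ".v1"         # KAFKA_TOPIC_NAMING_VERSION

# {component}.{action}.{subject} for the OTP generation input topic.
subject = "plugin.notification.trigger.generate.otp"

topic = f"{package}{environment}{subject}{version}"
print(topic)
```

The result matches the default value of `KAFKA_TOPIC_OTP_GENERATE_IN`.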
### Consumer error handling
When `KAFKA_CONSUMER_ERRORHANDLING_ENABLED` is set to `true`:
* The application retries processing failed messages up to `KAFKA_CONSUMER_ERRORHANDLING_RETRIES` times
* Between retries, it waits for the duration specified by `KAFKA_CONSUMER_ERRORHANDLING_RETRYINTERVAL`
For example, if `KAFKA_CONSUMER_ERRORHANDLING_RETRYINTERVAL` is set to 5000 (5 seconds) and `KAFKA_CONSUMER_ERRORHANDLING_RETRIES` is set to 5, the consumer makes up to 5 attempts, waiting 5 seconds between each attempt.
### Message size configuration
The `KAFKA_MESSAGEMAXBYTES` setting affects multiple Kafka properties:
* `spring.kafka.producer.properties.message.max.bytes`
* `spring.kafka.producer.properties.max.request.size`
* `spring.kafka.consumer.properties.max.partition.fetch.bytes`
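In other words, a single value fans out to three Spring properties; a small sketch:

```python
def kafka_size_properties(max_bytes: int = 52428800) -> dict:
    """Map the single KAFKA_MESSAGEMAXBYTES value (default 50MB) onto the
    three Spring Kafka properties it controls, as listed above."""
    return {
        "spring.kafka.producer.properties.message.max.bytes": max_bytes,
        "spring.kafka.producer.properties.max.request.size": max_bytes,
        "spring.kafka.consumer.properties.max.partition.fetch.bytes": max_bytes,
    }

props = kafka_size_properties()
print(props["spring.kafka.producer.properties.max.request.size"])  # 52428800
```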
### OAuth authentication
When using the `kafka-auth` profile, the security protocol changes to `SASL_PLAINTEXT` and requires OAuth configuration via the `KAFKA_OAUTH_*` variables.
# OCR plugin setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-setup-guide/ocr-plugin-setup
The OCR plugin is delivered as a Docker image and requires the following infrastructure prerequisites.
## Infrastructure prerequisites
* S3 bucket or alternative (for example, minio)
* Kafka cluster
Starting with `ocr-plugin 1.X`, RabbitMQ is no longer required.
The following environment variable from previous releases must be removed in order to use the OCR plugin: `CELERY_BROKER_URL`.
## Deployment/Configuration
To deploy the OCR plugin, deploy the `ocr-plugin` Helm chart with a custom values file.
The most important sections are shown below; more options are available in the Helm chart.
```yaml
image:
  repository: /ocr-plugin
applicationSecrets: {}
replicaCount: 2
resources: {}
env: []
```
### Credentials
S3 bucket:
```yaml
applicationSecrets:
  enable: true
  envSecretKeyRef:
    STORAGE_S3_ACCESS_KEY: access-key # default empty
    STORAGE_S3_SECRET_KEY: secret-key # default empty
  existingSecret: true
  secretName: ocr-plugin-application-config
```
### Kafka configuration
You can override the following environment variables:
| Environment Variable | Definition | Default Value | Example |
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------------- | -------------------- |
| `ENABLE_KAFKA_SASL` | Indicates whether Kafka SASL authentication is enabled | `False` | - |
| `KAFKA_ADDRESS`                       | The address of the Kafka bootstrap server in the format `host:port`                                           | -             | `kafka-server1:9092` |
| `KAFKA_CONSUME_SCHEDULE` | The interval (in seconds) at which Kafka messages are consumed | `30` | - |
| `KAFKA_INPUT_TOPIC` | The Kafka topic from which input messages are consumed | - | - |
| `KAFKA_OCR_CONSUMER_GROUPID` | The consumer group ID for the OCR Kafka consumer | `ocr_group` | - |
| `KAFKA_CONSUMER_AUTO_COMMIT` | Determines whether Kafka consumer commits offsets automatically | `True` | - |
| `KAFKA_CONSUMER_AUTO_COMMIT_INTERVAL` | The interval (in milliseconds) at which Kafka consumer commits offsets automatically | `1000` | - |
| `KAFKA_CONSUMER_TIMEOUT` | The timeout (in milliseconds) for Kafka consumer operations | `28000` | - |
| `KAFKA_CONSUMER_MAX_POLL_INTERVAL` | The maximum interval (in milliseconds) between consecutive polls for Kafka consume | `25000` | - |
| `KAFKA_CONSUMER_AUTO_OFFSET_RESET` | The strategy for resetting the offset when no initial offset is available or if the current offset is invalid | `earliest` | - |
| `KAFKA_OUTPUT_TOPIC` | The Kafka topic to which output messages are sent | - | - |
Please note that the default values and examples provided here are for illustrative purposes. Make sure to replace them with the appropriate values based on your Kafka configuration.
When configuring the OCR plugin, make sure to use the correct outgoing topic names that match [**the pattern expected by the Engine**](../flowx-engine-setup-guide/engine-setup#configuring-kafka), which listens for messages on topics with specific names.
### Authorization
You can override the following environment variables:
| Environment Variable | Definition | Default Value | Example |
| -------------------------- | ------------------------------------------------------ | ------------- | --------------------------------- |
| `OAUTH_CLIENT_ID` | The client ID for OAuth authentication | - | `your_client_id` |
| `OAUTH_CLIENT_SECRET` | The client secret for OAuth authentication | - | `your_client_secret` |
| `OAUTH_TOKEN_ENDPOINT_URI` | The URI of the token endpoint for OAuth authentication | - | `https://oauth.example.com/token` |
Please note that the default values and examples provided here are for illustrative purposes. Make sure to replace them with the appropriate values based on your OAuth authentication configuration.
### Storage (S3 configuration)
You can override the following environment variables:
| Environment Variable | Definition | Default Value | Example |
| ----------------------------------- | ------------------------------------------------------------------- | ------------- | --------------------------------------------------- |
| `STORAGE_S3_HOST` | The host address of the S3 storage service | - | `minio:9000`, `https://s3.eu-west-1.amazonaws.com/` |
| `STORAGE_S3_SECURE_CONNECTION` | Indicates whether to use a secure connection (HTTPS) for S3 storage | `False` | |
| `STORAGE_S3_LOCATION` | The location of the S3 storage service | - | `eu-west-1` |
| `STORAGE_S3_OCR_SCANS_BUCKET` | The name of the S3 bucket for storing OCR scans | - | `pdf-scans` |
| `STORAGE_S3_OCR_SIGNATURE_BUCKET` | The name of the S3 bucket for storing OCR signatures | - | `extracted-signatures` |
| `STORAGE_S3_OCR_SIGNATURE_FILENAME` | The filename pattern for extracted OCR signatures | - | `extracted_signature_{}.png` |
| `STORAGE_S3_ACCESS_KEY` | The access key for connecting to the S3 storage service | - | |
| `STORAGE_S3_SECRET_KEY` | The secret key for connecting to the S3 storage service | - | |
Please note that the default values and examples provided here are for illustrative purposes. Make sure to replace them with the appropriate values based on your S3 storage configuration.
### Performance
| Environment Variable | Definition | Default Value |
| ---------------------------- | ----------------------------------------------------------------------------------------------------------------- | ------------- |
| `ENABLE_PERFORMANCE_PAYLOAD` | When set to true, the response payload will contain performance metrics related to various stages of the process. | `true` |
#### Example
```json
"perf": {
"total_time": 998,
"split": {
"get_file": 248,
"extract_images": 172,
"extract_barcodes": 37,
"extract_signatures": 238,
"minio_signature_save": 301
}
}
```
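Note that the per-stage timings in `split` need not add up exactly to `total_time`; the small remainder is time spent outside the measured stages. For the payload above:

```python
# The performance payload from the example above: per-stage timings plus a total.
perf = {
    "total_time": 998,
    "split": {
        "get_file": 248,
        "extract_images": 172,
        "extract_barcodes": 37,
        "extract_signatures": 238,
        "minio_signature_save": 301,
    },
}

stage_sum = sum(perf["split"].values())
overhead = perf["total_time"] - stage_sum
print(stage_sum, overhead)  # 996 2 -- 2ms spent outside the listed stages
```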
### Certificates
You can override the following environment variables:
| Environment Variable | Definition                                                       | Default Value     |
| -------------------- | ---------------------------------------------------------------- | ----------------- |
| `REQUESTS_CA_BUNDLE` | The path to the certificate bundle file used for secure requests | -                 |
| `CERT_REQUESTS`      | The certificate validation policy for secure connections         | `'CERT_REQUIRED'` |
### Workers Behavior
You can override the following environment variables:
| Environment Variable | Definition | Default Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------- | ------------- |
| `OCR_WORKER_COUNT` | Number of workers | `5` |
| `OCR_WORK_QUEUE_TIMEOUT` | If no activity has occurred for a certain number of seconds, an attempt will be made to refresh the workers | `10` |
If no worker is released after `OCR_WORK_QUEUE_TIMEOUT` seconds, the application checks whether any workers have become unresponsive and need to be restarted.
If none of the workers have died, they are likely blocked in some process. In this case, the application terminates all workers and shuts itself down, so that the container can be restarted by the orchestrator.
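The decision logic described above can be summarized as follows (an illustrative sketch, not the plugin's actual code):

```python
def watchdog_action(worker_released: bool, dead_workers: int) -> str:
    """Sketch of the OCR_WORK_QUEUE_TIMEOUT behavior: after the timeout
    with no worker released, restart dead workers if any were found;
    if all workers are alive but stuck, shut down so the container
    orchestrator restarts the pod."""
    if worker_released:
        return "continue"
    if dead_workers > 0:
        return "restart-dead-workers"
    # All workers alive but blocked: terminate everything and exit.
    return "shutdown"

print(watchdog_action(worker_released=False, dead_workers=0))  # shutdown
```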
### Control Aspect Ratio
| Environment Variable | Definition | Default Value |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------- |
| `OCR_SIGNATURE_MAX_RATIO` | This variable sets the maximum acceptable aspect ratio for a signed scanned document (the OCR plugin will consider a detected signature only if the document aspect ratio is less than or equal to this maximum ratio)    | `1.43`        |
| `OCR_SIGNATURE_MIN_RATIO` | This variable sets the minimum acceptable aspect ratio for a signed scanned document (the OCR plugin will recognize a signature only if the document aspect ratio is greater than or equal to this minimum ratio)         | `1.39`        |
The plugin has been tested with aspect ratio values between 1.38 and 1.43. However, caution is advised when using untested values outside this range, as they may potentially disrupt the functionality. Adjust these parameters at your own risk and consider potential consequences, as untested values might lead to plugin instability or undesired behavior.
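As an illustrative check of the window these two variables define (assuming the ratio is computed as height divided by width, which is a hypothetical convention here; it matches a portrait A4 page falling inside the tested range):

```python
# Defaults from the table above.
OCR_SIGNATURE_MIN_RATIO = 1.39
OCR_SIGNATURE_MAX_RATIO = 1.43

def ratio_accepted(width: float, height: float) -> bool:
    """Accept a scanned page for signature detection only when its
    aspect ratio falls inside the configured [min, max] window."""
    ratio = height / width
    return OCR_SIGNATURE_MIN_RATIO <= ratio <= OCR_SIGNATURE_MAX_RATIO

print(ratio_accepted(210, 297))  # A4 portrait, ratio ~1.414 -> True
```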
# Plugins setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-setup-guide/plugins-setup-guide-overview
To set up a plugin in your environment, follow these steps:
* make sure you have all the prerequisites deployed on your environment (for example a [Redis](../../docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis) cache instance, a DB instance, etc)
* make the necessary configurations for each plugin (DB connection data, related Kafka topic names, etc)
Once you have deployed the necessary plugins in your environment, you can start integrating them in your process definitions.
All of them listen for Kafka events sent by the **FlowX Engine** and perform certain actions depending on the received data. They can also send data back to the Engine.
Some of them require custom templates to be configured; for these cases, a [WYSIWYG Editor](/4.0/docs/platform-deep-dive/plugins/wysiwyg) is provided.
Let's go into more details on setting up and using each of them:
# Reporting setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-setup-guide/reporting-setup
The Reporting setup guide assists in configuring the reporting plugin, relying on specific dependencies and configurations.
The FlowX Reporting solution provides powerful data analytics and visualization capabilities for your FlowX platform. This guide offers step-by-step instructions for setting up and configuring all components of the reporting system.
The reporting solution consists of three main components:
* **Reporting Plugin**: Extracts and processes data from the FlowX Engine
* **Spark Application**: Handles data transformation and loading operations
* **Apache Superset**: Provides the visualization interface and dashboard capabilities
## Dependencies
The reporting plugin, available as a Docker image, requires the following dependencies:
* **PostgreSQL**: Dedicated instance for reporting data storage.
* **Reporting-plugin Helm Chart**:
* Utilizes a Spark Application to extract data from the FLOWX.AI Engine database and populate the Reporting plugin database.
* Utilizes Spark Operator (more info [**here**](https://www.kubeflow.org/docs/components/spark-operator)).
* **Superset**:
* Requires a dedicated PostgreSQL database for its operation.
* Utilizes Redis for efficient caching.
* Exposes its user interface via an ingress.
## Prerequisites
Before starting the installation, ensure you have:
* Kubernetes cluster with Helm installed
* Access to PostgreSQL databases for:
* FlowX Engine database (source)
* Reporting database (destination)
* Superset metadata database
* Docker registry access for the reporting images
* Redis instance for Superset caching
* Ingress controller for exposing Superset UI
## Reporting plugin helm chart configuration
Configuring the reporting plugin involves several steps:
### Installation of Spark Operator
1. Install the Spark Operator using Helm:
```bash
helm install local-spark-release spark-operator/spark-operator \
--namespace spark-operator --create-namespace \
--set webhook.enable=true \
--set logLevel=6
```
2. Apply RBAC configurations:
```bash
kubectl apply -f spark-rbac.yaml
```
3. Build the reporting image:
```bash
docker build ...
```
4. Update the `reporting-image` URL in the `spark-app.yaml` file.
5. Configure the correct database environment variables in the `spark-app.yaml` file (see the examples below, with and without webhook).
6. Deploy the application:
```bash
kubectl apply -f operator/spark-app.yaml
```
## Spark Operator deployment options
### Without webhook
For deployments without a webhook, manage secrets and environmental variables for security:
```yaml
sparkApplication: # Defines the Spark application configuration.
  enabled: "true" # Indicates that the Spark application is enabled for deployment.
  schedule: "@every 5m" # A cron job that runs every 5 minutes.
  driver: # This section configures the driver component of the Spark application.
    envVars: # Environment variables for driver setup.
      ENGINE_DATABASE_USER: flowx
      ENGINE_DATABASE_URL: postgresql:5432
      ENGINE_DATABASE_NAME: process_engine
      ENGINE_DATABASE_TYPE: postgres # Type of the engine database; can also be set to oracle
      REPORTING_DATABASE_USER: flowx
      REPORTING_DATABASE_URL: postgresql:5432
      REPORTING_DATABASE_NAME: reporting
      REPORTING_DATABASE_TYPE: postgres
      APPLICATION_DATABASE_USER: postgres
      APPLICATION_DATABASE_URL: localhost:5439
      APPLICATION_DATABASE_NAME: app_manager
      APPLICATION_DATABASE_TYPE: postgres
      ENGINE_DATABASE_PASSWORD: "password"
      REPORTING_DATABASE_PASSWORD: "password"
      APPLICATION_DATABASE_PASSWORD: "password"
  executor: # This section configures the executor component of the Spark application.
    envVars: # Environment variables for executor setup.
      ENGINE_DATABASE_USER: flowx
      ENGINE_DATABASE_URL: postgresql:5432
      ENGINE_DATABASE_NAME: process_engine
      ENGINE_DATABASE_TYPE: postgres # Type of the engine database; can also be set to oracle
      REPORTING_DATABASE_USER: flowx
      REPORTING_DATABASE_URL: postgresql:5432
      REPORTING_DATABASE_NAME: reporting
      REPORTING_DATABASE_TYPE: postgres
      APPLICATION_DATABASE_USER: postgres
      APPLICATION_DATABASE_URL: localhost:5439
      APPLICATION_DATABASE_NAME: app_manager
      APPLICATION_DATABASE_TYPE: postgres
      ENGINE_DATABASE_PASSWORD: "password"
      REPORTING_DATABASE_PASSWORD: "password"
      APPLICATION_DATABASE_PASSWORD: "password"
```
NOTE: Passwords here are set as plain strings, which is not a secure practice for production environments.
### With webhook
When using the webhook, employ environmental variables with secrets for a balanced security approach:
```yaml
sparkApplication:
  enabled: "true"
  schedule: "@every 5m"
  driver:
    env: # Environment variables for driver setup with secrets.
      ENGINE_DATABASE_USER: flowx
      ENGINE_DATABASE_URL: postgresql:5432
      ENGINE_DATABASE_NAME: process_engine
      ENGINE_DATABASE_TYPE: postgres # Type of the engine database; can also be set to oracle
      REPORTING_DATABASE_USER: flowx
      REPORTING_DATABASE_URL: postgresql:5432
      REPORTING_DATABASE_NAME: reporting
      REPORTING_DATABASE_TYPE: postgres
      APPLICATION_DATABASE_USER: postgres
      APPLICATION_DATABASE_URL: localhost:5439
      APPLICATION_DATABASE_NAME: app_manager
      APPLICATION_DATABASE_TYPE: postgres
    extraEnvVarsMultipleSecretsCustomKeys:
      - name: postgresql-generic
        secrets: # Secrets retrieved from a generic source.
          ENGINE_DATABASE_PASSWORD: postgresql-password
          REPORTING_DATABASE_PASSWORD: postgresql-password
          APPLICATION_DATABASE_PASSWORD: postgresql-password
  executor:
    env: # Environment variables for executor setup with secrets.
      ENGINE_DATABASE_USER: flowx
      ENGINE_DATABASE_URL: postgresql:5432
      ENGINE_DATABASE_NAME: process_engine
      ENGINE_DATABASE_TYPE: postgres # Type of the engine database; can also be set to oracle
      REPORTING_DATABASE_USER: flowx
      REPORTING_DATABASE_URL: postgresql:5432
      REPORTING_DATABASE_NAME: reporting
      REPORTING_DATABASE_TYPE: postgres
      APPLICATION_DATABASE_USER: postgres
      APPLICATION_DATABASE_URL: localhost:5439
      APPLICATION_DATABASE_NAME: app_manager
      APPLICATION_DATABASE_TYPE: postgres
    extraEnvVarsMultipleSecretsCustomKeys:
      - name: postgresql-generic
        secrets: # Secrets retrieved from a generic source.
          ENGINE_DATABASE_PASSWORD: postgresql-password
          REPORTING_DATABASE_PASSWORD: postgresql-password
          APPLICATION_DATABASE_PASSWORD: postgresql-password
```
In Kubernetes-based Spark deployments managed by the Spark Operator, you can define the sparkApplication configuration to customize the behavior, resources, and environment for both the driver and executor components of Spark jobs. The driver section allows fine-tuning of parameters specifically pertinent to the driver part of the Spark application.
Below are the configurable values within the chart values.yml file (with webhook):
```yml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: ScheduledSparkApplication
metadata:
  name: reporting-plugin-spark-app
  namespace: dev
  labels:
    app.kubernetes.io/component: reporting
    app.kubernetes.io/instance: reporting-plugin
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: reporting-plugin
    app.kubernetes.io/release: 0.0.1-FLOWXRELEASE
    app.kubernetes.io/version: 0.0.1-FLOWXVERSION
    helm.sh/chart: reporting-plugin-0.1.1-PR-9-4-20231122153650-e
spec:
  schedule: '@every 5m'
  concurrencyPolicy: Forbid
  template:
    type: Python
    pythonVersion: "3"
    mode: cluster
    image: eu.gcr.io/prj-cicd-d-flowxai-jx-6401/reporting-plugin:0.1.1-PR-9-4-20231122153650-eb6c
    imagePullPolicy: IfNotPresent
    mainApplicationFile: local:///opt/spark/work-dir/main.py
    sparkVersion: "3.1.1"
    restartPolicy:
      type: Never
      onFailureRetries: 0
      onFailureRetryInterval: 10
      onSubmissionFailureRetries: 5
      onSubmissionFailureRetryInterval: 20
    driver:
      cores: 1
      coreLimit: 1200m
      memory: 512m
      labels:
        version: 3.1.1
      serviceAccount: spark
      env:
        ENGINE_DATABASE_USER: flowx
        ENGINE_DATABASE_URL: postgresql:5432
        ENGINE_DATABASE_NAME: process_engine
        ENGINE_DATABASE_TYPE: postgres # Type of the engine database; can also be set to oracle
        REPORTING_DATABASE_USER: flowx
        REPORTING_DATABASE_URL: postgresql:5432
        REPORTING_DATABASE_NAME: reporting
        REPORTING_DATABASE_TYPE: postgres
        APPLICATION_DATABASE_USER: postgres
        APPLICATION_DATABASE_URL: localhost:5439
        APPLICATION_DATABASE_NAME: app_manager
        APPLICATION_DATABASE_TYPE: postgres
        ENGINE_DATABASE_PASSWORD: "password"
        REPORTING_DATABASE_PASSWORD: "password"
        APPLICATION_DATABASE_PASSWORD: "password"
      extraEnvVarsMultipleSecretsCustomKeys:
        - name: postgresql-generic
          secrets: # Secrets retrieved from a generic source.
            ENGINE_DATABASE_PASSWORD: postgresql-password
            REPORTING_DATABASE_PASSWORD: postgresql-password
            APPLICATION_DATABASE_PASSWORD: postgresql-password
    executor:
      cores: 1
      instances: 3
      memory: 512m
      labels:
        version: 3.1.1
      env: # Environment variables for executor setup with secrets.
        ENGINE_DATABASE_USER: flowx
        ENGINE_DATABASE_URL: postgresql:5432
        ENGINE_DATABASE_NAME: process_engine
        ENGINE_DATABASE_TYPE: postgres # Type of the engine database; can also be set to oracle
        REPORTING_DATABASE_USER: flowx
        REPORTING_DATABASE_URL: postgresql:5432
        REPORTING_DATABASE_NAME: reporting
        REPORTING_DATABASE_TYPE: postgres
        APPLICATION_DATABASE_USER: postgres
        APPLICATION_DATABASE_URL: localhost:5439
        APPLICATION_DATABASE_NAME: app_manager
        APPLICATION_DATABASE_TYPE: postgres
      extraEnvVarsMultipleSecretsCustomKeys:
        - name: postgresql-generic
          secrets: # Secrets retrieved from a generic source.
            ENGINE_DATABASE_PASSWORD: postgresql-password
            REPORTING_DATABASE_PASSWORD: postgresql-password
            APPLICATION_DATABASE_PASSWORD: postgresql-password
```
### Superset configuration
For in-depth configuration details, refer to the official Superset documentation.
## Post-installation steps
After installation, perform the following essential configurations:
### Datasource configuration
For document-related data storage, configure these environment variables:
* `SPRING_DATASOURCE_URL`
* `SPRING_DATASOURCE_USERNAME`
* `SPRING_DATASOURCE_PASSWORD`
Ensure the connection details are accurate to prevent startup errors; the Liquibase script manages the schema and migrations.
### Redis configuration
The following values should be set with the corresponding Redis-related values:
* `SPRING_REDIS_HOST`
* `SPRING_REDIS_PORT`
## Keycloak configuration
To implement alternative user authentication:
* Override `AUTH_TYPE` in your `superset.yml` configuration file:
* Set `AUTH_TYPE: AUTH_OID`
* Provide the reference to your `openid-connect` realm:
* `OIDC_OPENID_REALM: 'flowx'`
With this configuration, the login page changes to a prompt where the user can select the desired OpenID provider.
### Extend the security manager
First, make sure that Flask stops using `flask-openid` and uses `flask-oidc` instead.
To do so, create your own security manager that configures `flask-oidc` as its authentication provider.
```yml
extraSecrets:
  keycloak_security_manager.py: |
    from flask_appbuilder.security.manager import AUTH_OID
    from superset.security import SupersetSecurityManager
    from flask_oidc import OpenIDConnect
```
To enable OpenID in Superset, you would previously have had to set the authentication type to `AUTH_OID`.
The security manager still executes all the behavior of the super class, but overrides the OID attribute with the `OpenIDConnect` object.
Further, it replaces the default OpenID authentication view with a custom one:
```python
from flask_appbuilder.security.views import AuthOIDView
from flask_login import login_user
from urllib.parse import quote
from flask_appbuilder.views import expose
from flask import request, redirect

class AuthOIDCView(AuthOIDView):

    @expose('/login/', methods=['GET', 'POST'])
    def login(self, flag=True):
        sm = self.appbuilder.sm
        oidc = sm.oid
        superset_roles = ["Admin", "Alpha", "Gamma", "Public", "granter", "sql_lab"]
        default_role = "Admin"

        @self.appbuilder.sm.oid.require_login
        def handle_login():
            user = sm.auth_user_oid(oidc.user_getfield('email'))
            if user is None:
                info = oidc.user_getinfo(['preferred_username', 'given_name', 'family_name', 'email', 'roles'])
                roles = [role for role in superset_roles if role in info.get('roles', [])]
                roles += [default_role, ] if not roles else []
                user = sm.add_user(info.get('preferred_username'), info.get('given_name', ''),
                                   info.get('family_name', ''), info.get('email'),
                                   [sm.find_role(role) for role in roles])
            login_user(user, remember=False)
            return redirect(self.appbuilder.get_url_for_index)

        return handle_login()

    @expose('/logout/', methods=['GET', 'POST'])
    def logout(self):
        oidc = self.appbuilder.sm.oid
        oidc.logout()
        super(AuthOIDCView, self).logout()
        redirect_url = request.url_root.strip('/')
        # redirect_url = request.url_root.strip('/') + self.appbuilder.get_url_for_login
        return redirect(
            oidc.client_secrets.get('issuer') + '/protocol/openid-connect/logout?redirect_uri=' + quote(redirect_url))
```
On authentication, the user is redirected back to Superset.
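The role-mapping rule inside `handle_login` can be isolated as a small function: Superset roles present in the OIDC token are kept, and the default role is assigned only when none match:

```python
def map_roles(token_roles,
              superset_roles=("Admin", "Alpha", "Gamma", "Public", "granter", "sql_lab"),
              default_role="Admin"):
    """Keep the known Superset roles found in the OIDC token;
    fall back to the default role when none match."""
    roles = [r for r in superset_roles if r in token_roles]
    return roles if roles else [default_role]

print(map_roles(["Gamma", "unrelated-role"]))  # ['Gamma']
print(map_roles([]))                           # ['Admin']
```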
### Configure Superset authentication
Finally, add the following parameters to the `superset.yml` file:
```python
'''
--------------------------- KEYCLOAK ----------------------------
'''
curr = os.path.abspath(os.getcwd())
AUTH_TYPE = AUTH_OID
OIDC_CLIENT_SECRETS = curr + '/pythonpath/client_secret.json'
OIDC_ID_TOKEN_COOKIE_SECURE = True
OIDC_REQUIRE_VERIFIED_EMAIL = True
OIDC_OPENID_REALM = 'flowx'
OIDC_INTROSPECTION_AUTH_METHOD = 'client_secret_post'
CUSTOM_SECURITY_MANAGER = OIDCSecurityManager
AUTH_USER_REGISTRATION = False
AUTH_USER_REGISTRATION_ROLE = 'Admin'
OVERWRITE_REDIRECT_URI = 'https://{{ .Values.flowx.ingress.reporting }}/oidc_callback'
'''
--------------------------------------------------------------
'''
```
# Task management setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/plugins-setup-guide/task-management-plugin-setup
Configure and deploy the Task Management plugin.
The Task Management plugin is available as a Docker image and serves as a dedicated microservice within the FlowX platform ecosystem.
## Dependencies
Before setting up the plugin, ensure you have the following dependencies:
* A [MongoDB](https://www.mongodb.com/) database for task storage
* A connection to the RuntimeDB for operational data
* Access to the database used by the FlowX Engine
* Connection to the Kafka instance used by the FlowX Engine
* A [Redis](https://redis.io/) instance for caching and performance optimization
While many configuration properties come pre-configured, several environment variables must be explicitly set for proper functionality.
## Authorization configuration & access roles
Configure the following variables to connect to your identity management platform:
| Environment Variable | Description | Default Value |
| --------------------------------------------------- | ------------------------------------------------ | --------------------------------- |
| `SECURITY_OAUTH2_BASESERVERURL` | Base URL of your OpenID Connect provider | - |
| `SECURITY_OAUTH2_CLIENT_CLIENTID` | Client ID for authentication | - |
| `SECURITY_OAUTH2_CLIENT_CLIENTSECRET` | Client secret for authentication | - |
| `SECURITY_OAUTH2_REALM` | Security realm for the application | - |
| `SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTID` | Service account client ID for process initiation | `flowx-task-management-plugin-sa` |
| `SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTSECRET` | Service account client secret | - |
A dedicated service account must be configured in your OpenID provider to allow the Task Management microservice to access realm-specific data and perform operations.
### OpenID connect settings
The Task Management plugin supports Microsoft Entra ID as an OpenID provider.
| Environment Variable | Description | Default Value |
| -------------------------- | ----------------------------------- | ------------------------------------------------- |
| `OPENID_PROVIDER` | Type of OpenID provider | `keycloak` (possible values: `keycloak`, `entra`) |
| `OPENID_ENTRA_TENANTID` | Tenant ID for Microsoft Entra ID | - |
| `OPENID_ENTRA_PRINCIPALID` | Principal ID for Microsoft Entra ID | - |
| `OPENID_ENTRA_GRAPHSCOPE` | Graph scope for Microsoft Entra ID | `https://graph.microsoft.com/.default` |
For more detailed information about configuring the service account, refer to the access management setup documentation.
### FlowX Engine datasource configuration
The service needs access to process instance data from the engine database. Configure these connection parameters:
| Environment Variable | Description | Default Value |
| ---------------------------- | -------------------------------- | ------------------------------------------------ |
| `SPRING_DATASOURCE_URL` | JDBC URL for the engine database | `jdbc:postgresql://onboardingdb:5432/onboarding` |
| `SPRING_DATASOURCE_USERNAME` | Database username | `postgres` |
| `SPRING_DATASOURCE_PASSWORD` | Database password | `password` |
### MongoDB configuration
Configure access to the primary MongoDB instance:
| Environment Variable | Description | Default Value |
| ------------------------- | ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_URI` | MongoDB connection URI | `mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-0.mongodb-headless,mongodb-1.mongodb-headless,mongodb-arbiter-0.mongodb-arbiter-headless:27017/task-management-plugin` |
| `DB_USERNAME` | MongoDB username | `task-management-plugin` |
| `DB_PASSWORD` | MongoDB password | `password` |
### Runtime MongoDB configuration
Task Manager requires a runtime connection to function correctly. Starting the service without a configured and active runtime MongoDB connection is not supported.
Enable the Runtime MongoDB connection:
| Environment Variable | Description | Default Value |
| ------------------------------------- | ------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_RUNTIME_ENABLED` | Enable runtime MongoDB connection | `true` |
| `SPRING_DATA_MONGODB_RUNTIME_URI` | URI for connecting to MongoDB Runtime | `mongodb://${RUNTIME_DB_USERNAME}:${DB_PASSWORD}@mongodb-0.mongodb-headless,mongodb-1.mongodb-headless,mongodb-arbiter-0.mongodb-arbiter-headless:27017/app-runtime?retryWrites=false` |
| `RUNTIME_DB_USERNAME` | Username for runtime database | `app-runtime` |
| `RUNTIME_DB_PASSWORD` | Password for runtime database | `password` |
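For example, in a Kubernetes deployment these variables might be supplied as container environment entries (variable names from the table above; the secret name and values here are illustrative, not defaults from the platform):

```yaml
env:
  - name: SPRING_DATA_MONGODB_RUNTIME_ENABLED
    value: "true"
  - name: RUNTIME_DB_USERNAME
    value: "app-runtime"
  - name: RUNTIME_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: runtime-mongodb-secret   # hypothetical secret name
        key: password
```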
### Redis configuration
Configure the Redis cache with these parameters:
| Environment Variable | Description | Default Value |
| ----------------------- | -------------------------------------------- | ------------- |
| `SPRING_REDIS_HOST` | Redis server hostname or IP address | `localhost` |
| `SPRING_REDIS_PORT` | Redis server port | `6379` |
| `SPRING_REDIS_PASSWORD` | Authentication password for Redis | `password` |
| `REDIS_TTL` | Time-to-live for cached items (milliseconds) | `5000000` |
## Kafka configuration
Configure the Kafka integration using these environment variables:
### Core Kafka settings
| Environment Variable | Description | Default Value |
| ------------------------------------ | ------------------------------------------------------- | -------------------------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Address of the Kafka server(s) | `localhost:9092` |
| `SPRING_KAFKA_SECURITYPROTOCOL` | Security protocol for Kafka connections | `PLAINTEXT` |
| `SPRING_KAFKA_CONSUMER_GROUPID` | Consumer group identifier | `kafka-task-management-consumer` |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size (bytes) | `52428800` (50MB) |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Retry interval after authorization exceptions (seconds) | `10` |
| `KAFKA_CONSUMER_THREADS` | Number of consumer threads | `3` |
| `KAFKA_CONSUMER_EXCLUDEUSERSTHREADS` | Number of threads for processing user exclusion events | `3` |
### Topic naming configuration
| Environment Variable | Description | Default Value |
| -------------------------------- | ----------------------------------- | ------------- |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Primary separator for topic names | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Secondary separator for topic names | `-` |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Package prefix for topic names | `ai.flowx.` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment segment for topic names | `dev.` |
| `KAFKA_TOPIC_NAMING_VERSION` | Version suffix for topic names | `.v1` |
### Kafka topics
#### Process management topics
| Environment Variable | Description | Default Value |
| ---------------------------------------- | -------------------------------------------------- | ---------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_START_OUT` | Topic for running hooks | `ai.flowx.dev.core.trigger.start.process.v1` |
| `KAFKA_TOPIC_PROCESS_OPERATIONS_OUT` | Topic for task operations (assign, unassign, etc.) | `ai.flowx.dev.core.trigger.operation.v1` |
| `KAFKA_TOPIC_PROCESS_OPERATIONS_BULKOUT` | Topic for bulk operations on tasks | `ai.flowx.dev.core.trigger.operations.bulk.v1` |
#### Scheduling topics
| Environment Variable | Description | Default Value |
| --------------------------------------- | ------------------------------------------------ | ----------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_IN` | Topic for receiving scheduler messages for hooks | `ai.flowx.dev.plugin.tasks.trigger.run.hook.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_SET` | Topic for setting schedules | `ai.flowx.dev.core.trigger.set.schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_STOP` | Topic for stopping schedules | `ai.flowx.dev.core.trigger.stop.schedule.v1` |
#### User management topics
| Environment Variable | Description | Default Value |
| -------------------------------------- | ------------------------------- | ---------------------------------------------------- |
| `KAFKA_TOPIC_EXCLUDEUSERS_SCHEDULE_IN` | Topic for user exclusion events | `ai.flowx.dev.plugin.tasks.trigger.exclude.users.v1` |
#### Task operations topics
| Environment Variable | Description | Default Value |
| --------------------- | ----------------------------------------- | ------------------------------------------------ |
| `KAFKA_TOPIC_TASK_IN` | Topic for incoming task creation messages | `ai.flowx.dev.plugin.tasks.trigger.save.task.v1` |
#### Events and integration topics
| Environment Variable | Description | Default Value |
| --------------------------------------- | --------------------------------------- | -------------------------------------------------------------------- |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE` | Topic for Events Gateway messages | `ai.flowx.dev.eventsgateway.receive.taskmanager.commands.message.v1` |
| `KAFKA_TOPIC_RESOURCESUSAGES_REFRESH` | Topic for resource usage refresh events | `ai.flowx.dev.application-version.resources-usages.refresh.v1` |
The Engine listens for messages on topics with specific naming patterns. Ensure you use the correct outgoing topic names when configuring the Task Management plugin to maintain proper communication with the engine.
## Logging configuration
Control logging verbosity with these environment variables:
| Environment Variable | Description | Default Value |
| ---------------------------- | ---------------------------------- | ------------- |
| `LOGGING_LEVEL_ROOT` | Root Spring Boot microservice logs | - |
| `LOGGING_LEVEL_APP` | Application-level logs | `DEBUG` |
| `LOGGING_LEVEL_MONGO_DRIVER` | MongoDB driver logs | `INFO` |
| `LOGGING_LEVEL_REDIS` | Redis/Lettuce client log level | `OFF` |
## Filtering feature
| Environment Variable | Description |
| ------------------------------------- | ----------------------------------------------------- |
| `FLOWX_ALLOW_USERNAME_SEARCH_PARTIAL` | Enables filtering possible assignees by partial names |
## Scheduled jobs
Configure scheduled maintenance jobs:
| Environment Variable | Description | Default Value |
| --------------------------------------------- | -------------------------------------- | -------------------------------- |
| `SCHEDULER_USERSCACHESCLEANUP_CRONEXPRESSION` | Cron expression for user cache cleanup | `0 0 0 * * ?` (daily at midnight) |
### Resource usage monitoring
The plugin includes a resource usage monitoring feature that can be configured:
| Environment Variable | Description | Default Value |
| -------------------------------------------------------------------------- | ------------------------------------------------------ | ------------------------------------- |
| `FLOWX_LIB_RESOURCESUSAGES_ENABLED` | Enable resource usage tracking | `true` |
| `FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_ENABLED` | Enable refresh listener | `true` |
| `FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_COLLECTOR_THREADCOUNT` | Number of threads for resource collection | `5` |
| `FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_COLLECTOR_MAXBATCHSIZE` | Maximum batch size for collection | `1000` |
| `FLOWX_LIB_RESOURCESUSAGES_KAFKA_CONSUMER_GROUPID_RESOURCESUSAGES_REFRESH` | Consumer group for resource usage refresh events | `taskMgmtResourcesUsagesRefreshGroup` |
| `FLOWX_LIB_RESOURCESUSAGES_KAFKA_CONSUMER_THREADS_RESOURCESUSAGES_REFRESH` | Number of threads for processing resource usage events | `3` |
### Database migration
The Task Management plugin uses Mongock for MongoDB migrations:
| Environment Variable | Description | Default Value |
| ------------------------------- | ---------------------------------------- | ----------------------------------------- |
| `MONGOCK_CHANGELOGSSCANPACKAGE` | Package to scan for database change logs | `ai.flowx.task.management.config.mongock` |
# Runtime manager setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/runtime-manager
This guide provides a step-by-step process for setting up and configuring the Runtime Manager module, including database, Kafka, and OAuth2 authentication settings to manage runtime and build configurations.
The [**Application Manager**](./application-manager) and **Runtime Manager** share the same container image and Helm chart. Refer to the **Deployment Guidelines** in the release notes to ensure compatibility and verify the correct version.
## Infrastructure prerequisites
The Runtime Manager service requires the following components to be set up before it can be started:
* **PostgreSQL** - version 13 or higher for managing application data
* **MongoDB** - version 4.4 or higher for managing runtime data
* **Redis** - version 6.0 or higher (if required)
* **Kafka** - version 2.8 or higher for event-driven communication between services
* **OAuth2 Authentication** - Ensure a compatible OAuth2 authorization server is configured.
## Dependencies
* [**Database configuration**](#database-configuration)
* [**Kafka configuration**](#configuring-kafka)
* [**Authentication & access roles**](#configuring-authentication-and-access-roles)
* [**Logging**](./setup-guides-overview#logging)
## Change the application name
| Environment Variable | Description | Example Value |
| ------------------------- | --------------------------------------------------------- | ----------------- |
| `SPRING_APPLICATION_NAME` | Service identifier used for service discovery and logging | `runtime-manager` |
The default value is `application-manager`; it must be changed to `runtime-manager`.
## Core service configuration
| Environment Variable | Description | Example Value |
| ---------------------------- | ------------------------------------------------- | -------------------- |
| `CONFIG_PROFILE` | Spring configuration profiles to activate | `k8stemplate_v2` |
| `FLOWX_ENVIRONMENT_NAME` | Environment identifier (dev, staging, prod, etc.) | `pr` |
| `LOGGING_CONFIG_FILE` | Path to logging configuration file | `logback-spring.xml` |
| `MULTIPART_MAX_FILE_SIZE` | Maximum file size for uploads | `25MB` |
| `MULTIPART_MAX_REQUEST_SIZE` | Maximum total request size | `25MB` |
## Database configuration
The Runtime Manager uses the same PostgreSQL (to store application data) and MongoDB (to manage runtime data) as [**application-manager**](application-manager). Configure these database connections with the following environment variables:
### PostgreSQL (Application data)
| Environment Variable | Description | Example Value |
| ---------------------------- | ---------------------------------- | ----------------------------------------------- |
| `SPRING_DATASOURCE_URL` | JDBC URL for PostgreSQL connection | `jdbc:postgresql://postgresql:5432/app_manager` |
| `SPRING_DATASOURCE_USERNAME` | PostgreSQL username | `flowx` |
| `SPRING_DATASOURCE_PASSWORD` | PostgreSQL password | *sensitive* |
### MongoDB (Runtime data)
| Environment Variable | Description | Example Value |
| ------------------------- | -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `SPRING_DATA_MONGODB_URI` | URI for MongoDB connection | `mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-0.mongodb-headless,mongodb-1.mongodb-headless,mongodb-arbiter-0.mongodb-headless:27017/${DB_NAME}?retryWrites=false` |
| `DB_NAME` | MongoDB database name | `app-runtime` |
| `DB_USERNAME` | MongoDB username | `app-runtime` |
| `DB_PASSWORD` | MongoDB password | *sensitive* |
## Redis configuration
| Environment Variable | Description | Example Value |
| ---------------------------- | --------------------------------- | -------------- |
| `SPRING_DATA_REDIS_HOST` | Redis server hostname | `redis-master` |
| `SPRING_DATA_REDIS_PASSWORD` | Redis password | *sensitive* |
| `SPRING_DATA_REDIS_PORT` | Redis server port | `6379` |
| `SPRING_REDIS_TTL` | Default Redis TTL in milliseconds | `5000000` |
## Kafka configuration
### Kafka connection
| Environment Variable | Description | Example Value |
| -------------------------------- | ----------------------------------- | ---------------------------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Kafka broker addresses | `kafka-flowx-kafka-bootstrap:9092` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment prefix for Kafka topics | |
### Kafka OAuth Authentication
| Environment Variable | Description | Example Value |
| -------------------------------- | ------------------------------ | ----------------------------------------------------------------- |
| `KAFKA_OAUTH_CLIENT_ID` | OAuth client ID for Kafka | `flowx-service-client` |
| `KAFKA_OAUTH_CLIENT_SECRET` | OAuth client secret for Kafka | `flowx-service-client-secret` |
| `KAFKA_OAUTH_TOKEN_ENDPOINT_URI` | OAuth token endpoint for Kafka | `{baseUrl}/auth/realms/kafka-authz/protocol/openid-connect/token` |
Kafka OAuth authentication secures communication between services using the Kafka message broker. The client ID and secret are used to obtain an access token from the token endpoint.
## Authentication configuration
### OpenID Connect configuration
| Environment Variable | Description | Example Value |
| ----------------------------------------------------- | ---------------------------- | -------------------------- |
| `SECURITY_OAUTH2_BASE_SERVER_URL` | OAuth2 server base URL | `{baseUrl}/auth` |
| `SECURITY_OAUTH2_REALM` | OAuth2 realm name | `flowx` |
| `SECURITY_OAUTH2_CLIENT_CLIENT_ID` | OAuth2 client ID | `flowx-platform-authorize` |
| `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` | OAuth2 client secret | *sensitive* |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` | Admin service account secret | *sensitive* |
The service account configuration approach is deprecated but still supported for backward compatibility. In newer deployments, consider using the standard OAuth2 client configuration.
## File storage configuration
| Environment Variable | Description | Example Value |
| ---------------------------------------- | -------------------------------- | ------------------- |
| `APPLICATION_FILE_STORAGE_S3_SERVER_URL` | S3-compatible storage server URL | `http://minio:9000` |
| `APPLICATION_FILE_STORAGE_S3_ACCESS_KEY` | S3 access key | *sensitive* |
| `APPLICATION_FILE_STORAGE_S3_SECRET_KEY` | S3 secret key | *sensitive* |
S3-compatible storage is used for storing application files, exports, and imports. The Runtime Manager supports MinIO, AWS S3, and other S3-compatible storage solutions.
## Ingress configuration
To expose the Runtime Manager service, configure the `public`, `admin`, and `adminInstances` ingress settings:
```yaml
ingress:
  enabled: true
  public:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.public }}"
    path: /rtm/api/runtime(/|$)(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /api/runtime/$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
  admin:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.admin }}"
    path: /rtm/api/build-mgmt(/|$)(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /api/build-mgmt/$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
  adminInstances:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.admin }}"
    path: /rtm/api/(runtime|runtime-internal)/(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /api/$1/$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
```
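The `rewrite-target` annotations rely on capture groups from the `path` regex: `$2` in the public rule is whatever follows `/rtm/api/runtime`. A quick way to sanity-check this mapping with Python's `re` module (an illustration only; nginx performs the rewrite itself):

```python
import re

# The public ingress path from the configuration above.
path_pattern = re.compile(r"/rtm/api/runtime(/|$)(.*)")

def rewrite(request_path):
    """Mimic the rewrite-target /api/runtime/$2 for matching requests."""
    m = path_pattern.match(request_path)
    if not m:
        return None  # request does not hit this ingress rule
    return "/api/runtime/" + m.group(2)

print(rewrite("/rtm/api/runtime/process/123"))  # -> /api/runtime/process/123
print(rewrite("/other/path"))                   # -> None
```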
> **Note:** Replace placeholders in environment variables with the appropriate values for your environment before starting the service.
# Scheduler setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/scheduler-setup-guide
This guide will walk you through the process of setting up the Scheduler service.
## Infrastructure prerequisites
* **MongoDB**: Version 4.4 or higher for storing taxonomies and contents.
* **Kafka**: Version 2.8 or higher.
* **OpenID Connect Settings**: Default settings are for Keycloak.
## Dependencies
* [MongoDB](https://www.mongodb.com/) database
* Ability to connect to a Kafka instance used by the FlowX Engine
* Scheduler service account - required for using Start Timer event node - see [**here**](./access-management/configuring-an-iam-solution#scheduler-service-account)
The service comes with most of the required configuration properties pre-filled. However, certain custom environment variables need to be set up.
### MongoDB helm example
Basic MongoDB configuration - helm values.yaml
```yaml
scheduler-mdb:
  existingSecret: {{secretName}}
  mongodbDatabase: {{SchedulerDatabaseName}}
  mongodbUsername: {{SchedulerDatabaseUser}}
  persistence:
    enabled: true
    mountPath: /bitnami/mongodb
    size: 4Gi
  replicaSet:
    enabled: true
    name: rs0
    pdb:
      enabled: true
      minAvailable:
        arbiter: 1
        secondary: 1
    replicas:
      arbiter: 1
      secondary: 1
    useHostnames: true
  serviceAccount:
    create: false
  usePassword: true
```
To work correctly, this service must connect to a MongoDB database configured with replicas.
## Scheduler configuration
### Scheduler
```yaml
scheduler:
  thread-count: 30 # number of threads used for sending expired messages
  callbacks-thread-count: 60 # number of threads for handling Kafka responses, whether the message was successfully sent or not
  cronExpression: "*/10 * * * * *" # every 10 seconds
  retry: # retry mechanism
    max-attempts: 3
    seconds: 1
    thread-count: 3
    cronExpression: "*/10 * * * * *" # every 10 seconds
  cleanup:
    cronExpression: "*/25 * * * * *" # every 25 seconds
```
* `SCHEDULER_THREAD_COUNT`: Used to configure the number of threads to be used for sending expired messages.
* `SCHEDULER_CALLBACKS_THREAD_COUNT`: Used to configure the number of threads for handling Kafka responses, whether the message was successfully sent or not.
The `scheduler.cleanup.cronExpression` setting applies to both the scheduler and the timer event scheduler.
#### Retry mechanism
* `SCHEDULER_RETRY_THREAD_COUNT`: Specify the number of threads to use for resending messages that need to be retried.
* `SCHEDULER_RETRY_MAX_ATTEMPTS`: This configuration parameter sets the number of retry attempts. For instance, if it's set to 3, it means that the system will make a maximum of three retry attempts for message resending.
* `SCHEDULER_RETRY_SECONDS`: This configuration parameter defines the time interval, in seconds, for retry attempts. For example, when set to 1, it indicates that the system will retry the operation after a one-second delay.
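The interplay of `SCHEDULER_RETRY_MAX_ATTEMPTS` and `SCHEDULER_RETRY_SECONDS` can be sketched as a plain retry loop (an illustration of the semantics, not the service's actual implementation):

```python
import time

def send_with_retry(send, max_attempts=3, seconds=1):
    """Call `send`; on failure, wait `seconds` and try again, up to `max_attempts` tries."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted; surface the failure
            time.sleep(seconds)
```

With `max_attempts=3` and `seconds=1`, a message that fails twice and succeeds on the third try is delivered after roughly two seconds of waiting.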
#### Cleanup
* `SCHEDULER_CLEANUP_CRONEXPRESSION`: It specifies how often, in seconds, events that have already been processed should be cleaned up from the database.
#### Recovery mechanism
```yaml
flowx:
  timer-calculator:
    delay-max-repetitions: 1000000
```
Suppose a "next execution" is set for 10:25 and the cycle step is 10 minutes. If the instance goes down for two hours, the next execution time should be 12:25, not 10:35. To calculate it, the scheduler adds 10 minutes to 10:25 repeatedly (10:25 + 10 min + 10 min + ...) until it reaches the current time of 12:25. This ensures the next execution time is adjusted correctly after the downtime.
* `FLOWX_TIMER_CALCULATOR_DELAY_MAX_REPETITIONS`: Caps how many cycle steps this calculation may add. For example, if the cycle step is one second and the system is down for two weeks (1,209,600 seconds) while max repetitions is set to 1,000,000, the calculator hits the cap before reaching the current time; an exception is thrown, the next schedule cannot be computed, and the entry remains locked and must be rescheduled. This critical case arises when extended downtime is combined with a very short cycle step (e.g., one second).
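The recovery calculation described above can be sketched as follows (an illustration of the repeated-addition logic with the max-repetitions guard, not the service's actual code):

```python
from datetime import datetime, timedelta

def next_execution(scheduled, step, now, max_repetitions=1_000_000):
    """Advance `scheduled` by whole cycle steps until it reaches `now`.

    Raises if more than `max_repetitions` steps are needed -- the case where
    the entry stays locked and must be rescheduled.
    """
    for _ in range(max_repetitions + 1):
        if scheduled >= now:
            return scheduled
        scheduled += step
    raise RuntimeError("delay-max-repetitions exceeded; entry must be rescheduled")

# Down for two hours: 10:25 + 12 * 10 min = 12:25, not 10:35.
print(next_execution(datetime(2024, 1, 1, 10, 25),
                     timedelta(minutes=10),
                     datetime(2024, 1, 1, 12, 25)))  # -> 2024-01-01 12:25:00
```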
### Timer event scheduler
Configuration for the Timer Event scheduler, which manages timer events. Its configuration mirrors the scheduler's.
```yaml
timer-event-scheduler:
  thread-count: 30
  callbacks-thread-count: 60
  cronExpression: "*/1 * * * * *" # every second
  retry:
    max-attempts: 3
    seconds: 1
    thread-count: 3
    cronExpression: "*/5 * * * * *" # every 5 seconds
```
## OpenID connect settings
Default settings are for Keycloak.
* `SECURITY_TYPE`: Indicates that OAuth 2.0 is the chosen security type, default value: `oauth2`.
* `SECURITY_PATHAUTHORIZATIONS_0_PATH`: Defines a security path or endpoint pattern. It specifies that the security settings apply to all paths under the "/api/" path. The `**` is a wildcard that means it includes all subpaths under "/api/\*\*".
* `SECURITY_PATHAUTHORIZATIONS_0_ROLESALLOWED`: Specifies the roles allowed to access the specified path. An empty value (`""`) means no specific role is required, so the "/api/\*\*" paths are open to all users.
```yaml example
security:
  type: oauth2
  pathAuthorizations:
    - path: "/api/**"
      rolesAllowed: "ANY_AUTHENTICATED_USER"
```
* `SECURITY_OAUTH2_BASE_SERVER_URL`: This setting specifies the base URL of the OpenID server, which is used for authentication and authorization.
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID`: This setting specifies the service account that is essential for enabling the [**Start Timer**](../docs/building-blocks/node/timer-events/timer-start-event) event node. Ensure that you provide the correct client ID for this service account.
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET`: Along with the client ID, you must also specify the client secret associated with the service account for proper authentication.
More details about the necessary service account, here:
[Scheduler service account](./access-management/configuring-an-iam-solution#scheduler-service-account)
```yaml example
oauth2:
  base-server-url: http://localhost:8080/auth
  realm: flowx
  client:
    access-token-uri: ${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token
    client-id: flowx-platform-authorize
    client-secret: wrongsecret
  resource:
    user-info-uri: ${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/userinfo
  service-account:
    admin:
      client-id: flowx-scheduler-core-sa
      client-secret: wrongsecret
```
## Configuring datasource (MongoDB)
The MongoDB database is used to persist scheduled messages until they are sent back. The following configurations need to be set using environment variables:
* `SPRING_DATA_MONGODB_URI`: The URI for the MongoDB database.
## Configuring Kafka
The following Kafka related configurations can be set by using environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS`: Address of the Kafka server
* `SPRING_KAFKA_CONSUMER_GROUP_ID`: Group of consumers
* `KAFKA_CONSUMER_THREADS` (default: 1): The number of Kafka consumer threads
* `KAFKA_CONSUMER_SCHEDULED_TIMER_EVENTS_THREADS` (default: 1): The number of Kafka consumer threads related to starting Timer Events
* `KAFKA_CONSUMER_SCHEDULED_TIMER_EVENTS_GROUP_ID`: Group of consumers related to starting timer events
* `KAFKA_CONSUMER_STOP_SCHEDULED_TIMER_EVENTS_THREADS` (default: 1): The number of Kafka consumer threads related to stopping Timer Events
* `KAFKA_CONSUMER_STOP_SCHEDULED_TIMER_EVENTS_GROUP_ID`: Group of consumers related to stopping timer events
* `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL`: The interval between retries after `AuthorizationException` is thrown by `KafkaConsumer`
* `KAFKA_TOPIC_SCHEDULE_IN_SET`: Receives scheduled message setting requests from the Admin and Process engine microservices
* `KAFKA_TOPIC_SCHEDULER_IN_STOP`: Handles requests from the Admin and Process engine microservices to terminate scheduled messages.
* `KAFKA_TOPIC_SCHEDULED_TIMER_EVENTS_IN_SET`: Needed to use Timer Events nodes
* `KAFKA_TOPIC_SCHEDULED_TIMER_EVENTS_IN_STOP`: Needed to use Timer Events nodes
Each action available in the service corresponds to a Kafka event. A separate Kafka topic must be configured for each use-case.
Make sure the topics configured for this service don't follow the engine pattern.
## Configuring logging
The following environment variables can be set to control log levels:
* `LOGGING_LEVEL_ROOT` (default: `INFO`): Root Spring Boot microservice logs
* `LOGGING_LEVEL_APP`: App-level logs
# FlowX Data Search setup
Source: https://docs.flowx.ai/4.7.x/setup-guides/search-data-service-setup-guide
Comprehensive guide for installing, configuring, and deploying the FlowX Data Search service
## Overview
The FlowX Data Search service enables powerful searching capabilities across your FlowX platform. This guide provides detailed instructions for setting up, configuring, and deploying the service in your environment.
## Quick start
```bash
# 1. Ensure infrastructure prerequisites are met (Redis, Kafka, Elasticsearch)
# 2. Configure your environment variables in a data-search.yaml file
# 3. Deploy the service
kubectl apply -f data-search.yaml
# 4. Verify the deployment
kubectl get deployment data-search
kubectl logs deployment/data-search
```
## Infrastructure prerequisites
The FlowX Data Search service requires the following infrastructure components:
| Component | Minimum Version | Purpose |
| ----------------- | --------------- | ------------------------------------------- |
| **Redis** | 6.0+ | Caching search results and configurations |
| **Kafka** | 2.8+ | Message-based communication with the engine |
| **Elasticsearch** | 7.11.0+ | Indexing and searching data |
## Configuration
### Kafka configuration
Configure Kafka communication using these environment variables and properties:
#### Basic Kafka settings
| Variable | Description | Default/Example |
| -------------------------------- | ---------------------------------------- | ---------------------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Address of Kafka server(s) | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol for Kafka | `PLAINTEXT` |
| `KAFKA_CONSUMER_THREADS` | Number of Kafka consumer threads | `1` |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size | `52428800` (50MB) |
| `KAFKA_OAUTH_CLIENT_ID` | OAuth client ID for Kafka authentication | `kafka` |
| `KAFKA_OAUTH_CLIENT_SECRET` | OAuth client secret | `kafka-secret` |
| `KAFKA_OAUTH_TOKEN_ENDPOINT_URI` | OAuth token endpoint | `kafka.auth.localhost` |
#### Topic naming configuration
The Data Search service uses a structured topic naming convention:
```
{package}.{environment}.{component}.{action}.{version}
```
For example: `ai.flowx.dev.core.trigger.search.data.v1`
| Variable | Description | Default |
| -------------------------------- | ---------------------------------- | ----------- |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Primary separator for topic naming | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Secondary separator | `-` |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Package prefix | `ai.flowx.` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment name | `dev.` |
| `KAFKA_TOPIC_NAMING_VERSION` | Version suffix | `.v1` |
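The defaults above compose into full topic names. A quick sanity check of the convention (the helper function here is illustrative, not part of the service):

```python
# Defaults from the topic naming table above.
PACKAGE = "ai.flowx."
ENVIRONMENT = "dev."
VERSION = ".v1"

def topic(component_action):
    """Assemble a full topic name from the naming convention parts."""
    return PACKAGE + ENVIRONMENT + component_action + VERSION

print(topic("core.trigger.search.data"))  # -> ai.flowx.dev.core.trigger.search.data.v1
```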
#### Kafka topics
The service uses these specific topics:
| Topic | Default Value | Purpose |
| ----------------------------- | --------------------------------------------------------- | ------------------------ |
| `KAFKA_TOPIC_DATA_SEARCH_IN` | `ai.flowx.dev.core.trigger.search.data.v1` | Incoming search requests |
| `KAFKA_TOPIC_DATA_SEARCH_OUT` | `ai.flowx.dev.engine.receive.core.search.data.results.v1` | Outgoing search results |
### Elasticsearch configuration
Set up Elasticsearch connectivity with these environment variables:
| Variable | Description | Default | Example |
| ------------------------------------------ | ---------------------------------------- | ------------------ | --------------------------- |
| `SPRING_ELASTICSEARCH_REST_URIS` | Elasticsearch server address(es) | - | `elasticsearch-master:9200` |
| `SPRING_ELASTICSEARCH_REST_PROTOCOL` | Protocol for Elasticsearch communication | `http` | `https` |
| `SPRING_ELASTICSEARCH_REST_DISABLESSL` | Whether to disable SSL verification | `false` | `true` |
| `SPRING_ELASTICSEARCH_REST_USERNAME` | Elasticsearch username | - | `elastic` |
| `SPRING_ELASTICSEARCH_REST_PASSWORD` | Elasticsearch password | - | `changeme` |
| `SPRING_ELASTICSEARCH_INDEX_SETTINGS_NAME` | Name of the index to use | `process_instance` | `flowx_data` |
### Security configuration
Configure authentication and authorization with these variables:
| Variable | Description | Example |
| ------------------------------------- | -------------------------- | ----------------------------------- |
| `SECURITY_OAUTH2_BASESERVERURL` | Base URL for OAuth2 server | `https://keycloak.example.com/auth` |
| `SECURITY_OAUTH2_CLIENT_CLIENTID` | OAuth2 client ID | `data-search-service` |
| `SECURITY_OAUTH2_CLIENT_CLIENTSECRET` | OAuth2 client secret | `data-search-service-secret` |
| `SECURITY_OAUTH2_REALM` | OAuth2 realm name | `flowx` |
### Logging configuration
Control the verbosity of logs with these variables:
| Variable | Description | Default | Example |
| -------------------- | ------------------------------ | ------- | ------- |
| `LOGGING_LEVEL_ROOT` | Root Spring Boot log level | `INFO` | `ERROR` |
| `LOGGING_LEVEL_APP` | Application-specific log level | `INFO` | `DEBUG` |
## Elasticsearch index configuration
The Data Search service creates and manages Elasticsearch indices based on the configured index pattern. The default index name is `process_instance`.
### Index pattern
The service derives the index pattern from the `spring.elasticsearch.index-settings.name` property. This pattern is used to query across multiple indices that match the pattern.
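Assuming the pattern behaves like the configured name with a trailing wildcard (an assumption for illustration, not a statement about the service's internals), matching works like standard glob matching: date- or suffix-partitioned indices sharing the prefix are all queried together.

```python
from fnmatch import fnmatch

# Illustrative only: a trailing-wildcard pattern built from the
# configured index name matches suffixed indices but not unrelated ones.
index_name = "process_instance"
pattern = index_name + "*"

indices = ["process_instance", "process_instance_2023_01", "audit_log"]
matching = [name for name in indices if fnmatch(name, pattern)]
print(matching)
# ['process_instance', 'process_instance_2023_01']
```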
### Sample search query
Below is an example of a search query generated by the Data Search service for Elasticsearch:
```json
{
  "query": {
    "bool": {
      "adjust_pure_negative": true,
      "boost": 1,
      "must": [
        {
          "nested": {
            "boost": 1,
            "ignore_unmapped": false,
            "path": "keyIdentifiers",
            "query": {
              "bool": {
                "adjust_pure_negative": true,
                "boost": 1,
                "must": [
                  {
                    "match": {
                      "keyIdentifiers.key.keyword": {
                        "query": "astonishingAttribute",
                        "operator": "OR"
                      }
                    }
                  },
                  {
                    "match": {
                      "keyIdentifiers.originalValue.keyword": {
                        "query": "OriginalGangsta",
                        "operator": "OR"
                      }
                    }
                  }
                ]
              }
            },
            "score_mode": "none"
          }
        },
        {
          "terms": {
            "boost": 1,
            "processDefinitionName.keyword": [
              "TEST_PROCESS_NAME_0",
              "TEST_PROCESS_NAME_1"
            ]
          }
        }
      ]
    }
  }
}
```
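The shape above can be reproduced programmatically. The following sketch (illustrative only, not the service's actual code) assembles the same `bool` structure: a `nested` match on the `keyIdentifiers` key/value pair plus a `terms` filter on process definition names:

```python
def build_search_query(key: str, value: str, process_names: list) -> dict:
    """Build an Elasticsearch bool query matching the sample above:
    a nested match on keyIdentifiers plus a terms filter on
    processDefinitionName.keyword."""
    return {
        "query": {
            "bool": {
                "must": [
                    {
                        "nested": {
                            "path": "keyIdentifiers",
                            "score_mode": "none",
                            "query": {
                                "bool": {
                                    "must": [
                                        {"match": {"keyIdentifiers.key.keyword": {"query": key}}},
                                        {"match": {"keyIdentifiers.originalValue.keyword": {"query": value}}},
                                    ]
                                }
                            },
                        }
                    },
                    {"terms": {"processDefinitionName.keyword": process_names}},
                ]
            }
        }
    }

q = build_search_query("astonishingAttribute", "OriginalGangsta",
                       ["TEST_PROCESS_NAME_0"])
```

Note the `nested` clause: `keyIdentifiers` is a nested field, so the key and value must match within the same array element rather than anywhere in the document.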
## Troubleshooting
### Common issues
1. **Elasticsearch connection problems**:
* Verify Elasticsearch is running and accessible
* Check if credentials are correct
* Ensure SSL settings match your environment
2. **Kafka Communication Issues**:
* Verify Kafka topics exist and are properly configured
* Check Kafka permissions for the service
* Ensure bootstrap servers are correctly specified
3. **Search Not Returning Results**:
* Verify index pattern matches existing indices
* Check if data is being properly indexed
* Review search query format for errors
### Logs analysis
Monitor logs for errors and warnings:
```bash
# For Docker
docker logs flowx-data-search
# For Kubernetes
kubectl logs deployment/data-search
```
## Integration with Kibana
Kibana provides a powerful interface for visualizing and exploring data indexed by the Data Search service.
### Using Kibana with FlowX Data Search
1. Connect Kibana to the same Elasticsearch instance
2. Create an index pattern matching your configured index name
3. Use the Discover tab to explore indexed data
4. Create visualizations and dashboards based on your data
Kibana is an open-source data visualization and exploration tool designed primarily for Elasticsearch. It serves as the visualization layer for the Elastic Stack, allowing users to interact with their data stored in Elasticsearch to perform various activities such as querying, analyzing, and visualizing data. For more information, visit the [Kibana official documentation](https://www.elastic.co/guide/en/kibana/current/index.html).
## Best practices
1. **Security**:
* Store sensitive credentials in Kubernetes Secrets
* Use TLS for Elasticsearch and Kafka communication
* Implement network policies to restrict access
2. **Performance**:
* Scale the number of replicas based on query load
* Adjust Kafka consumer threads based on message volume
* Configure appropriate resource limits and requests
3. **Monitoring**:
* Set up monitoring for Elasticsearch, Kafka, and Redis
* Create alerts for service availability and performance
* Monitor disk space for Elasticsearch data nodes
# Microservices setup guides
Source: https://docs.flowx.ai/4.7.x/setup-guides/setup-guides-overview
Complete reference for deploying and configuring FlowX.AI microservices in your environment
FlowX.AI is built on a modern microservices architecture, allowing for scalable, resilient, and flexible deployments. Each microservice operates independently while collaborating with others to provide a complete enterprise solution.
## Deployment strategy
Deploying FlowX.AI microservices involves breaking down the application into modular components that can be independently deployed, scaled, and maintained. All microservices are delivered as Docker containers, making them suitable for deployment on any container orchestration platform, such as Kubernetes or Docker Swarm.
### Deployment prerequisites
Before beginning the deployment process, ensure you have:
* A Kubernetes cluster or Docker environment
* Access to a container registry
* Persistent storage for databases
* Network policies configured for inter-service communication
* Resource quotas and limits defined for each environment
### Recommended installation order
Following the correct installation sequence is crucial for a successful deployment. This ordered approach prevents dependency issues and ensures each service has the required dependencies available when it initializes.
First, deploy the foundational infrastructure:
* **Databases**: PostgreSQL/Oracle for relational data, MongoDB for document storage
* **Message Broker**: Kafka and ZooKeeper
* **Caching**: Redis
* **Identity Management**: Keycloak or other OAuth2 provider
Next, deploy these core services in order:
1. **Advancing Controller**: Manages process advancement and orchestration
2. **Process Engine**: Handles business process execution and state management
Once the core components are operational, deploy these services (can be deployed in parallel):
* **Admin Service**: Platform administration and configuration management
* **Audit Core**: Compliance auditing and activity tracking
* **Task Management**: Human task assignment and workflow
* **Scheduler Core**: Job scheduling and time-based operations
* **Data Search**: Indexing and searching capabilities
* **License Core**: License management and validation
* **Events Gateway**: Event routing and processing
* **Document Plugin**: Document generation and management
* **Notification Plugin**: Communication and alerts management
* **Any additional plugins or extensions**
Finally, deploy the frontend services:
* **Designer**: Process design environment for business analysts
* **Web Components**: UI components for custom applications
* **Customer-facing UIs and portals**
## Environment variables reference
Environment variables are the primary configuration mechanism for FlowX.AI microservices. They provide a secure and flexible way to customize service behavior without modifying the container images.
### Common environment variables
The following sections detail the most commonly used environment variables across FlowX.AI microservices. For service-specific variables, refer to the dedicated setup guides for each component.
#### Authorization & access management
| Environment Variable | Description | Example Value | Required |
| --------------------------------------------------- | -------------------------------------------------- | ----------------------------------- | -------- |
| `SECURITY_OAUTH2_BASESERVERURL` | Base URL of the OAuth2/OIDC server | `https://keycloak.example.com/auth` | Yes |
| `SECURITY_OAUTH2_CLIENTCLIENTID` | OAuth2 client ID for the service | `flowx-admin-service` | Yes |
| `SECURITY_OAUTH2_CLIENTCLIENTSECRET` | OAuth2 client secret | `secret` | Yes |
| `SECURITY_OAUTH2_REALM` | OAuth2 realm name | `flowx` | Yes |
| `SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTID` | Service account client ID (for inter-service auth) | `flowx-service-account` | No\* |
| `SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTSECRET` | Service account client secret | `secret` | No\* |
\*Required for some services that need to make authenticated calls to other services
#### Database configuration
##### PostgreSQL/Oracle
| Environment Variable | Description | Example Value | Required |
| ----------------------------------------------- | ------------------------------- | --------------------------------------- | -------- |
| `SPRING_DATASOURCE_URL` | JDBC connection URL | `jdbc:postgresql://postgres:5432/flowx` | Yes |
| `SPRING_DATASOURCE_USERNAME` | Database username | `flowx_user` | Yes |
| `SPRING_DATASOURCE_PASSWORD` | Database password | `securePassword123` | Yes |
| `SPRING_DATASOURCE_DRIVERCLASSNAME` | JDBC driver class (Oracle only) | `oracle.jdbc.OracleDriver` | Yes\* |
| `SPRING_JPA_PROPERTIES_HIBERNATE_DEFAULTSCHEMA` | Default schema (Oracle only) | `FLOWX` | Yes\* |
\*Required only for Oracle databases
##### MongoDB
| Environment Variable | Description | Example Value | Required |
| ------------------------------------- | ------------------------------------------ | --------------------------------------------------------------------------- | -------- |
| `SPRING_DATA_MONGODB_URI` | MongoDB connection URI | `mongodb://mongo1,mongo2,mongo3:27017/flowx?replicaSet=rs0` | Yes |
| `DB_USERNAME` | MongoDB username for services that need it | `flowx_mongo_user` | Yes |
| `DB_PASSWORD` | MongoDB password | `mongoSecurePass456` | Yes |
| `SPRING_DATA_MONGODB_RUNTIME_ENABLED` | Enable runtime MongoDB connection | `true` | No |
| `SPRING_DATA_MONGODB_RUNTIME_URI` | URI for runtime MongoDB connection | `mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-runtime:27017/app-runtime` | No\* |
\*Required if runtime MongoDB connection is enabled
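The `${DB_USERNAME}`/`${DB_PASSWORD}` placeholders in the runtime URI are resolved from other configuration values, Spring-style. A minimal sketch of that substitution (illustrative only; `resolve_placeholders` is a hypothetical helper):

```python
import re

def resolve_placeholders(template: str, env: dict) -> str:
    """Replace ${VAR} placeholders with values from the given mapping,
    mimicking how the runtime MongoDB URI above is resolved."""
    return re.sub(r"\$\{([A-Z_]+)\}", lambda m: env[m.group(1)], template)

uri = resolve_placeholders(
    "mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-runtime:27017/app-runtime",
    {"DB_USERNAME": "flowx_mongo_user", "DB_PASSWORD": "mongoSecurePass456"},
)
print(uri)
# mongodb://flowx_mongo_user:mongoSecurePass456@mongodb-runtime:27017/app-runtime
```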
#### Kafka configuration
| Environment Variable | Description | Example Value | Required |
| ---------------------------------- | -------------------------------------------- | ---------------------------------------- | -------- |
| `SPRING_KAFKA_BOOTSTRAPSERVERS` | Comma-separated list of Kafka brokers | `kafka-0:9092,kafka-1:9092,kafka-2:9092` | Yes |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol for Kafka communication | `SASL_PLAINTEXT` | No |
| `SPRING_KAFKA_CONSUMER_GROUPID` | Consumer group ID | `flowx-process-engine-group` | Yes |
| `KAFKA_CONSUMER_THREADS` | Number of Kafka consumer threads | `5` | No |
| `KAFKA_AUTHEXCEPTIONRETRYINTERVAL` | Retry interval for auth exceptions (seconds) | `10` | No |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum Kafka message size (bytes) | `52428800` (50MB) | No |
#### Topic naming configuration
For services that communicate via Kafka, the topic naming follows a convention:
| Environment Variable | Description | Example Value | Required |
| -------------------------------- | ------------------------------ | ------------- | -------- |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Primary topic name separator | `.` | No |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Secondary topic name separator | `-` | No |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Package prefix for topics | `ai.flowx.` | No |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment segment | `dev.` | No |
| `KAFKA_TOPIC_NAMING_VERSION` | Version suffix | `.v1` | No |
The engine and other services listen for messages on topics with specific naming patterns. Ensure you use correct topic names to maintain proper inter-service communication.
#### Redis configuration
| Environment Variable | Description | Example Value | Required |
| ----------------------- | ------------------------- | ------------------ | -------- |
| `SPRING_REDIS_HOST` | Redis server hostname | `redis-master` | Yes |
| `SPRING_REDIS_PORT` | Redis server port | `6379` | Yes |
| `SPRING_REDIS_PASSWORD` | Redis password | `redisPassword789` | Yes |
| `REDIS_TTL` | Cache TTL in milliseconds | `5000000` | No |
#### Logging configuration
| Environment Variable | Description | Example Value | Required |
| ---------------------------- | ---------------------------------- | ------------- | -------- |
| `LOGGING_LEVEL_ROOT` | Root logging level | `INFO` | No |
| `LOGGING_LEVEL_APP` | Application-specific logging level | `DEBUG` | No |
| `LOGGING_LEVEL_MONGO_DRIVER` | MongoDB driver logging level | `WARN` | No |
| `LOGGING_LEVEL_KAFKA` | Kafka client logging level | `WARN` | No |
| `LOGGING_LEVEL_REDIS` | Redis client logging level | `OFF` | No |
## Deployment best practices
### High availability considerations
For production environments, configure these high availability features:
* **Database Clustering**: Implement PostgreSQL/Oracle with replication
* **MongoDB Replica Sets**: Deploy MongoDB as a replica set with at least 3 nodes
* **Kafka Clustering**: Use at least 3 Kafka brokers with replication factor ≥ 3
* **Redis Sentinel/Cluster**: Configure Redis for high availability
* **Service Replicas**: Run multiple instances of each microservice
* **Load Balancing**: Implement proper load balancing for service instances
* **Affinity/Anti-Affinity Rules**: Distribute service instances across nodes
### Security recommendations
Secure your FLOWX.AI deployment with these measures:
1. **Network Segmentation**: Isolate microservices using network policies
2. **Secret Management**: Use Kubernetes Secrets or a vault solution
3. **TLS Everywhere**: Enable TLS for all service-to-service communication
4. **OAuth2 Scopes**: Configure fine-grained OAuth2 scopes for services
5. **Resource Isolation**: Use namespaces and pod security policies
6. **Regular Updates**: Keep all components updated with security patches
7. **Audit Logging**: Enable comprehensive audit logging via the Audit Core service
## Troubleshooting
### Common issues and solutions
| Issue | Possible cause | Solution |
| -------------------------- | ----------------------------- | --------------------------------------------------------------- |
| Service fails to start | Missing environment variables | Check logs for "Configuration property not found" errors |
| Database connection errors | Incorrect credentials or URL | Verify database connection parameters |
| Services can't communicate | Kafka misconfiguration | Ensure topic names match between producer and consumer services |
| Authentication failures | OAuth2 configuration issues | Verify client IDs, secrets, and server URLs |
| Performance degradation | Insufficient resources | Monitor CPU/memory usage and scale resources appropriately |
| Data inconsistency | Redis cache not synchronized | Check Redis connection and consider cache invalidation |
### Diagnostic procedures
When troubleshooting FlowX.AI microservices:
1. **Check Service Logs**: Examine logs for error messages
2. **Verify Configurations**: Ensure all required environment variables are set correctly
3. **Test Connectivity**: Verify network connectivity between services
4. **Monitor Resources**: Check CPU, memory, and disk usage
5. **Inspect Kafka Topics**: Use Kafka tools to inspect message flow
6. **Review Database State**: Examine database for data integrity issues
7. **Check OAuth2 Tokens**: Verify token validity and permissions
## Service-specific documentation
For detailed configuration of individual services, refer to:
# Deployment guidelines v3.0
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.0.0-february-2023/deployment-guidelines-v3.0.0
Do not forget, when upgrading to a new platform version, always check and make sure your installed component versions match the versions stated in the release. To do that, go to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.0.0** FLOWX.AI release, importing old process definitions (exports from **\< 3.0.0** releases) into the new platform is not possible.

## Component versions
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* In future FlowX.AI releases, the `paperflow-web-components` library will be deprecated (some old components can still be found inside this library). Instead, the new components can be found in `@flowx/ui-toolkit@3.0`.
| :ballot\_box\_with\_check: | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 | 2.4.0 | 2.3.0 | 2.2.0 | 2.1.0 |
| ------------------------------ | --------- | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | --------- |
| **Process engine** | **2.0.7** | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 | 0.4.22 | 0.4.21 | 0.4.18 | 0.4.13 |
| **Admin** | **2.0.8** | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 | 0.3.29 | 0.3.23 | 0.3.21 | 0.3.13 |
| **Designer** | **3.2.1** | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 | 2.11.2 |
| **@flowx/ui-sdk** | **3.2.1** | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.2.1** | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | |
| **@flowx/ui-theme** | **3.2.1** | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **flowx-process-renderer** | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 | 2.11.2 |
| **paperflow-web-components** | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.5 | 0.2.4 |
| **CMS Core** | **1.0.2** | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 | 0.2.20 | 0.2.18 | 0.2.17 | 0.2.17 |
| **Scheduler Core** | **1.0.1** | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 | 0.0.24 | 0.0.23 | 0.0.23 | 0.0.23 |
| **Notification Plugin** | **2.0.1** | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 | 1.0.191 | 1.0.190 | 1.0.190 | 1.0.186-1 |
| **Document Plugin** | **2.0.2** | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 | 1.0.35 | 1.0.31 | 1.0.31 | 1.0.30 |
| **OCR Plugin** | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.0.109 | 0.0.109 | 0.0.109 |
| **License Core** | **1.0.1** | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 | 0.1.15 | 0.1.13 | 0.1.13 | 0.1.12 |
| **Customer Management Plugin** | **0.2.1** | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 | 0.1.20 | 0.1.18 | 0.1.18 | 0.1.18 |
| **Task Management Plugin** | **1.0.1** | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 | 0.0.22 | 0.0.21 | 0.0.21 | 0.0.16 |
| **Data search** | **0.1.3** | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Audit Core** | **1.0.1** | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Reporting** | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | **0.1.2** | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | **2.0.0** | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | **2.0.1** | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
### 3.0.0 Minimum Recommended Versions
| FLOWX.AI Platform Version | Component name | Minimum recommended version (tested versions) |
| ------------------------- | ---------------------------- | --------------------------------------------- |
| 3.0 | Keycloak | 18.0.x |
| 3.0 | Kafka | 3.2.0 |
| 3.0 | PostgreSQL | 14.3.0 |
| 3.0 | MongoDB | 5.0.8 |
| 3.0 | Redis | 6.2.6 |
| 3.0 | Elasticsearch | 7.17 |
| 3.0 | S3 (Min.IO) / minio-operator | 2022-05-26T05-48-41Z / 4.5.4 |
| 3.0 | OracleDB | 19.8.0.0.0 |
| 3.0 | Angular (Web SDK) | 14.2.2 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://support.flowx.ai/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
## Additional configuration
### Updates
The following updates have been made to the backend and designer configurations:
* **Backend**: The Java container base image has been updated from **11.0.15** to **11.0.17\_8**
* **Designer**: The Nginx container base image has been updated from **1.19** to **1.23.2**
### Redis configuration
```
disableCommands:
- CONFIG
# enable keyspace notif as CONFIG is disabled
notify-keyspace-events KEA
```
The configuration above enables keyspace notifications in Redis by setting the `notify-keyspace-events` parameter to `KEA` (`K` enables keyspace notifications, `E` enables keyevent notifications, and `A` is an alias for all event classes).
Because the `CONFIG` command is disabled in this setup, the Redis server will not accept runtime configuration changes via `CONFIG SET`. Keyspace notifications must therefore be enabled directly in the server configuration, as shown above.
### Theming
There are two important items that need to be taken into consideration when it comes to theming:
* [`theme_components.json`](#theme_componentsjson)
* [`theme_tokens.json`](#theme_tokensjson)
#### `theme_components.json`
This JSON object is a collection of all components and their properties. Each element represents a different design element (e.g. "accordion", "card", etc.).
Each element object has two main properties:
* `genericProperties` - an array of objects that define properties such as background color, padding, and font styles for the element
* `modifiers` - an array of objects, each representing a modification of the element's properties. Each modifier object has a `name` property and a `properties` array, which contains objects that define the modified properties of the element.
#### Example
```json
{
  "elementName": "card",
  "genericProperties": [
    { "name": "backgroundColor", "reference": "color@shades-0", "value": null, "unit": null },
    { "name": "color", "reference": "color@neutrals-900", "value": null, "unit": null },
    { "name": "borderRadius", "reference": null, "value": "12", "unit": "px" },
    { "name": "borderWidth", "reference": null, "value": "1", "unit": "px" },
    { "name": "paddingTop", "reference": null, "value": "16", "unit": "px" },
    { "name": "paddingRight", "reference": null, "value": "16", "unit": "px" },
    { "name": "paddingBottom", "reference": null, "value": "16", "unit": "px" },
    { "name": "paddingLeft", "reference": null, "value": "16", "unit": "px" },
    { "name": "titleFont", "reference": "typography@Heading/H6/Semi Bold", "value": null, "unit": null },
    { "name": "subtitleFont", "reference": "typography@Paragraph/P2/Regular", "value": null, "unit": null },
    { "name": "contentFont", "reference": "typography@Paragraph/P1/Regular", "value": null, "unit": null },
    { "name": "gap", "reference": null, "value": "24", "unit": "px" }
  ],
  "modifiers": [
    {
      "name": "border",
      "properties": [
        { "name": "borderColor", "reference": "color@neutrals-300", "value": null, "unit": null }
      ]
    },
    {
      "name": "raised",
      "properties": [
        { "name": "boxShadow", "reference": "dropShadow@m", "value": null, "unit": null }
      ]
    },
    {
      "name": "ios",
      "properties": [
        { "name": "titleFont", "reference": "typography@Heading/H6/Semi Bold", "value": null, "unit": null },
        { "name": "subtitleFont", "reference": "typography@Paragraph/P2/Regular", "value": null, "unit": null },
        { "name": "gap", "reference": null, "value": "16", "unit": "px" }
      ]
    },
    {
      "name": "android",
      "properties": [
        { "name": "titleFont", "reference": "typography@Heading/H6/Semi Bold", "value": null, "unit": null },
        { "name": "subtitleFont", "reference": "typography@Paragraph/P2/Regular", "value": null, "unit": null },
        { "name": "gap", "reference": null, "value": "16", "unit": "px" }
      ]
    }
  ]
}
```
#### `theme_tokens.json`
This JSON object is a collection of color tokens for a design system. It contains multiple color objects, each representing a different color category (e.g. "primary", "secondary", "success", "warning", "error", and "neutrals").
Each color object has three properties: "name" contains the name of the color category, "main" identifies the main shade of the color, and "shades" is an array of shade objects.
Each shade object has a "tint" (a number representing the shade level) and a "hex" (the hex code for the color at that level).
#### Example
```typescript
// COLORS
[
  {
    "name": "primary",
    "shades": [
      {"tint": 50, "hex": "#aabbcc"},
      {"tint": 100, "hex": "#aabbcc"},
      ...
    ],
    "main": 500
  },
  {
    "name": "secondary",
    "shades": [
      {"tint": 50, "hex": "#aabbcc"},
      {"tint": 100, "hex": "#aabbcc"},
      ...
    ],
    "main": 500
  }
]

interface ColorShade {
  tint: number,
  hex: string
}

interface Color {
  name: string,
  shades: ColorShade[],
  main: number
}
```
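Component properties in `theme_components.json` point into these tokens via references such as `color@neutrals-900`. The sketch below shows one way such a reference could be resolved against the color tokens; `resolve_color` is a hypothetical helper and the hex values are illustrative, not taken from the platform:

```python
def resolve_color(reference: str, colors: list) -> str:
    """Resolve a 'color@<category>-<tint>' reference to a hex value
    from a theme_tokens-style color list."""
    _, target = reference.split("@", 1)   # e.g. "neutrals-900"
    name, tint = target.rsplit("-", 1)    # ("neutrals", "900")
    for color in colors:
        if color["name"] == name:
            for shade in color["shades"]:
                if shade["tint"] == int(tint):
                    return shade["hex"]
    raise KeyError(reference)

# Illustrative token data with made-up hex values.
tokens = [{"name": "neutrals",
           "shades": [{"tint": 300, "hex": "#d4d4d8"},
                      {"tint": 900, "hex": "#18181b"}],
           "main": 500}]

print(resolve_color("color@neutrals-900", tokens))
# #18181b
```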
For more information on how to use the web renderer, check the following section:
[Using the Angular Renderer - Theming](../../docs/3.0.0/platform-deep-dive/core-components/renderer-sdks/angular-renderer)
### Task management plugin
* `FLOWX_ALLOW_USERNAME_SEARCH_PARTIAL` - new environment variable added to filter users by partial username (default value: true)
[Task management plugin setup](../../docs/platform-deep-dive/plugins/plugins-setup-guide/task-management-plugin-setup)
### Data search
* `SPRING_ELASTICSEARCH_INDEX_SETTINGS_NAME` - must correspond to the index configured on `process-engine`
[Data search setup](../../docs/platform-setup-guides/search-data-service-setup-guide)
# v3.0.0 February 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.0.0-february-2023/v3.0.0-february-2023
We are excited to announce the release of FLOWX.AI 3.0 🔥, featuring new and improved features that will elevate your workflow experience. From the new theming options to enhanced functionality, this update will take your productivity to the next level. 🚀
## **New features**
* **Theming:** FLOWX.AI 3.0 introduces a new theming feature, allowing users to personalize all components on both web and mobile. The new theming feature provides a better user experience and offers more flexibility in using the platform.
**Notes for FDEs Post-Migration**:
To ensure the proper functionality of the migrated styles, please check the post-migration steps, available [**here**](../../docs/building-blocks/ui-designer/render-ui-designer-changelog).
[Additional configuration](./deployment-guidelines-v3.0.0#theming)
* **Generic JSONs for components**: JSONs for components have been added.
* **UI Designer**: A refreshed new interface for UI Designer.

* **UI Components Redesign**: Redesigned UI components.
[UI components](../../docs/building-blocks/ui-designer/ui-component-types)
Below you will find a Storybook which will demonstrate how components behave under different states, props, and conditions:
[Storybook](https://storybook.demo.flowxai.dev/)
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX.AI 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
For more information, check the section below:
[Using the Angular Renderer](../../docs/3.0.0/platform-deep-dive/core-components/renderer-sdks/angular-renderer)
## **Fixed**
* Fixed a bug in the FLOWX.AI Designer where the datepicker component's default value overlapped with the field placeholder.
* Fixed an issue in FLOWX.AI Designer where users could not copy/paste nodes that had UI actions without parameters.
* Forwarding external notifications is now possible with the Notifications plugin.
* Fixed an issue where the GET enumerations list (CMS) displayed an error message about memory exceeding.
## **Changed**
### UI Designer
* Indicators → deleted **Info Tooltip** and **Hint** UI components
* Added **Helpertext** (to replace **Info tooltip** and **Hint**) - this new element can be found on [Form elements](../../docs/building-blocks/ui-designer/ui-component-types/form-elements) and provides additional information about each element, which can be hidden within an infopoint

### Documents plugin
* Updated document plugin file download path to use file UUID (string) instead of a numeric file ID.
### Task management plugin
* Improved the filtering feature: it is now possible to filter users by partial usernames.
[Additional configuration here](./deployment-guidelines-v3.0.0#task-management-plugin)
### FLOWX.AI Designer 👩🏭
#### Audit log
* The audit log now displays the name instead of identifiers for process definition, node, action, and swimlanes entities.

[Audit log](../../docs/platform-deep-dive/core-components/core-extensions/audit)
#### Sensitive data
* Sensitive data tab has been removed from the process definition settings and a new Sensitive data switch has been added in the Data model tab.

* Sensitive data migration is required when cloning old processes (sensitive data is deleted when cloning as it is not compatible with the new process version). Users must add the data model and keys for the new process.
[Data model](../../docs/building-blocks/process/process-definition#data-model)
#### Platform status report
* Added a new option to export the Platform Status, which will download a JSON file containing the state details of all components, enabling users to communicate the state of their instance to the support team.
* Added more data to the platform status report.

#### Kafka send/receive nodes
* New icons for Kafka send/receive nodes were added.

### Process Designer
#### Process Designer keyboard commands
* Added new keyboard commands for deleting, copying, and renaming nodes in the Process Designer:
* `backspace` - delete one or several selected nodes
* `Ctrl/Cmd + C` - copy one or several selected nodes
* `R` - rename a node
### FLOWX.AI Engine 🚂
#### Kafka
* Standardized Kafka topics and naming pattern in FLOWX.AI Engine, reducing confusion and errors by allowing for configuration only for the package name, environment name, and version, without having to list all topic names.
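As an illustration of how a naming pattern removes per-topic configuration, here is a hypothetical sketch — the `ai.flowx` package name and the `process.start` action segment are illustrative placeholders, not actual engine topic names:

```python
# Hypothetical sketch: with a naming pattern, only the package name,
# environment, and version need configuring; full topic names are derived.
package, environment, version = "ai.flowx", "dev", "v1"

def topic(action: str) -> str:
    # The action segment ("process.start" below) is illustrative only.
    return f"{package}.{environment}.{action}.{version}"

print(topic("process.start"))  # ai.flowx.dev.process.start.v1
```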
#### Performance
* Database performance improvements, including cache clearing. For stages, integrations, and generic parameters, the cache is cleared globally; for process definitions, the cache must be cleared per process.
### License model
* In the license model, another label/alias can be used instead of PII.
### Data search
* The ElasticSearch index for data search is now configurable. More info on:
[Deployment guidelines](./deployment-guidelines-v3.0.0#data-search)
## **UX/UI Improvements**
* The process instance search button has been removed, and search is now done automatically.
* Swimlanes interaction has been improved.
* It is now possible to select and delete multiple nodes in the Process Designer.
Additional information regarding the deployment for v3.0 is available below.
## **Security**
* Improved security for Redis configuration.
[Redis configuration](./deployment-guidelines-v3.0.0#redis-configuration)
## **Known issues**
### Reporting
* Reporting plugin is not compatible with Oracle DBs.
[Deployment guidelines v3.0](./deployment-guidelines-v3.0.0)
# Deployment guidelines v3.1.0
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.1.0-march-2023/deployment-guidelines-v3.1.0
Do not forget: when upgrading to a new platform version, always check and make sure your installed component versions match the versions stated in the release notes. To do that, go to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.1.0** FLOWX.AI release, importing old process definitions into the new platform release is not possible (this applies to exports from **\< 3.1.0** releases).

## Component versions
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
For more information, check [**Using the Angular Renderer**](../../docs/3.1.0/platform-deep-dive/core-components/renderer-sdks/angular-renderer) section.
| :ballot\_box\_with\_check: | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 | 2.4.0 | 2.3.0 | 2.2.0 | 2.1.0 |
| ------------------------------ | ---------- | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | --------- |
| **Process engine** | **2.1.2** | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 | 0.4.22 | 0.4.21 | 0.4.18 | 0.4.13 |
| **Admin** | **2.1.3** | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 | 0.3.29 | 0.3.23 | 0.3.21 | 0.3.13 |
| **Designer** | **3.15.1** | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 | 2.11.2 |
| **@flowx/ui-sdk** | **3.15.1** | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | 2.23.0 | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.15.1** | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.15.1** | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.5 | 0.2.4 |
| **flowx-process-renderer** | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 | 2.11.2 |
| **CMS Core** | **1.0.3** | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 | 0.2.20 | 0.2.18 | 0.2.17 | 0.2.17 |
| **Scheduler Core** | **1.0.4** | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 | 0.0.24 | 0.0.23 | 0.0.23 | 0.0.23 |
| **Notification Plugin** | **2.0.3** | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 | 1.0.191 | 1.0.190 | 1.0.190 | 1.0.186-1 |
| **Document Plugin** | **2.0.3** | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 | 1.0.35 | 1.0.31 | 1.0.31 | 1.0.30 |
| **OCR Plugin** | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.0.109 | 0.0.109 | 0.0.109 |
| **License Core** | **1.0.2** | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 | 0.1.15 | 0.1.13 | 0.1.13 | 0.1.12 |
| **Customer Management Plugin** | **0.2.3** | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 | 0.1.20 | 0.1.18 | 0.1.18 | 0.1.18 |
| **Task Management Plugin** | **1.0.4** | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 | 0.0.22 | 0.0.21 | 0.0.21 | 0.0.16 |
| **Data search** | **0.1.4** | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Audit Core** | **1.0.4** | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Reporting** | **0.0.40** | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | **0.1.4** | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | **2.0.4** | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
### 3.1.0 Minimum Recommended Versions
| FLOWX.AI Platform Version | Component name | Minimum recommended version (tested versions) |
| ------------------------- | ---------------------------- | --------------------------------------------- |
| 3.1 | Keycloak | 18.0.x |
| 3.1 | Kafka | 3.2.0 |
| 3.1 | PostgreSQL | 14.3.0 |
| 3.1 | MongoDB | 5.0.8 |
| 3.1 | Redis | 6.2.6 |
| 3.1 | Elasticsearch | 7.17 |
| 3.1 | S3 (Min.IO) / minio-operator | 2022-05-26T05-48-41Z / 4.5.4 |
| 3.1 | OracleDB | 19.8.0.0.0 |
| 3.1 | Angular (Web SDK) | 14.2.2 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
## Additional configuration
### Process engine - scheduler
Configuration for scheduler to be added on [**Process engine setup**](../../docs/platform-setup-guides/flowx-engine-setup-guide).
```yaml
scheduler:
  processCleanup:
    enabled: false
    cronExpression: 0 */5 0-5 * * ? # every day during the night, every 5 minutes, at the start of the minute
    batchSize: 1000
  masterElection:
    cronExpression: 30 */3 * * * ? # master election every 3 minutes
  websocket:
    namespace:
      cronExpression: 0 * * * * *
      expireMinutes: 30
```
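The cron expressions above use Quartz-style syntax (six fields, beginning with seconds). A small sketch of how the `processCleanup` expression breaks down:

```python
# Quartz-style cron fields: second minute hour day-of-month month day-of-week
expr = "0 */5 0-5 * * ?"
second, minute, hour, day_of_month, month, day_of_week = expr.split()

# "*/5" in the minute field fires at minutes 0, 5, ..., 55 (12 per hour);
# "0-5" in the hour field limits firing to midnight through 05:55.
firing_minutes = list(range(0, 60, 5))
firing_hours = list(range(0, 6))
print(len(firing_minutes) * len(firing_hours))  # 72 cleanup runs per night
```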
### Undo/redo
Configuration for undo/redo actions in UI Designer to be added on [**Admin setup**](../../docs/flowx-designer/designer-setup-guide).
```yaml
flowx:
  undo-redo:
    ttl: 86400 # in seconds
    cleanup:
      cronExpression: 0 0 2 ? * * # every day at 2am
      days: 2
```
### Advancing controller with Oracle
To use the advancing controller with Oracle DBs, the following `.yml` files must be edited to configure the appropriate environment variables:
#### Advancing controller
If the parallel advancing configuration already exists, the `advancing` database must be reset by executing the SQL command `DROP DATABASE advancing;`. Once the database has been dropped, the Liquibase script will automatically re-create it.
* `SPRING_JPA_DATABASE` - value: `oracle`
* `SPRING_JPA_DATABASE_PLATFORM`
* `SPRING_DATASOURCE_URL`
* `SPRING_DATASOURCE_DRIVERCLASSNAME`
[Advancing controller setup](../../docs/platform-setup-guides/flowx-engine-setup-guide/advancing-controller-setup-guide)
#### Process engine
* `SPRING_JPA_DATABASE` - value: `oracle`
* `SPRING_JPA_DATABASE_PLATFORM`
* `SPRING_DATASOURCE_URL` - environment variable used to configure the data source URL for a Spring application; it typically contains the JDBC driver name, the server name, port number, and database name
* `SPRING_DATASOURCE_DRIVERCLASSNAME` - environment variable used to set the class name of the JDBC driver that the Spring datasource will use to connect to the database
* `ADVANCING_DATASOURCE_DRIVERCLASSNAME`
* `ADVANCING_DATASOURCE_URL`
* `ADVANCING_DATASOURCE_JDBC_URL`
[Process engine setup](../../docs/platform-setup-guides/flowx-engine-setup-guide)
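For illustration, the variables above might be set as follows. The host, port, service names, Hibernate dialect, and driver class shown here are placeholder assumptions, not values shipped with the platform (`SPRING_DATASOURCE_URL` is the standard Spring Boot spelling):

```shell
# Placeholder values for illustration only -- substitute your own
# Oracle host, port, and service names.
export SPRING_JPA_DATABASE="oracle"
export SPRING_JPA_DATABASE_PLATFORM="org.hibernate.dialect.Oracle12cDialect"
export SPRING_DATASOURCE_URL="jdbc:oracle:thin:@//oracle-host:1521/FLOWXDB"
export SPRING_DATASOURCE_DRIVERCLASSNAME="oracle.jdbc.OracleDriver"
export ADVANCING_DATASOURCE_DRIVERCLASSNAME="oracle.jdbc.OracleDriver"
export ADVANCING_DATASOURCE_URL="jdbc:oracle:thin:@//oracle-host:1521/ADVANCING"
export ADVANCING_DATASOURCE_JDBC_URL="$ADVANCING_DATASOURCE_URL"
echo "$SPRING_DATASOURCE_URL"
```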
# v3.1.0 March 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.1.0-march-2023/v3.1.0-march-2023
Drumroll please... 🥁 We are excited to announce **FLOWX.AI 3.1** release 🔥.

So, what are you waiting for? Grab a snack, sit back, and get ready to explore the newest version of **FLOWX.AI**.
## **New features** 🆕
### UI Designer ✍️
#### Undo/redo actions in UI Designer
The latest update now allows users to easily undo or redo any actions they perform within the UI Designer. This includes tasks such as dragging, dropping, or deleting elements from the preview section, as well as adjusting settings within the styling and settings panel.
To undo or redo an action, users can simply click the corresponding icons in the UI Designer toolbar, or use the keyboard commands (undo: Cmd/Ctrl + Z, redo: Cmd/Ctrl + Shift + Z) for even quicker access. This new functionality provides users with greater control and flexibility, allowing them to easily make changes and adjustments without fear of losing progress or making mistakes.

[Deployment guidelines v3.1](./deployment-guidelines-v3.1.0#undoredo)
#### New UI element: file preview
We are excited to announce the addition of a new ready-made UI component. This new component allows users to easily display previews of documents within their designs, whether the documents are uploaded, generated during a process, or static.
With this new feature, users can create more dynamic and interactive designs that incorporate real-time document previews.

[File preview](../../docs/building-blocks/ui-designer/ui-component-types/file-preview)
### Process designer
#### Generate data model
A data model can be generated using data values from a [process instance](../../docs/building-blocks/process/active-process/process-instance). This can be done by either merging the data model with an existing one or replacing it entirely.

[Data model](../../docs/building-blocks/process/process-definition#data-model)
## **Fixed** 🔧
* Fixed an issue where the [Engine](../../docs/platform-deep-dive/core-components/flowx-engine) and the [Advancing controller](../../docs/platform-deep-dive/core-components/flowx-engine#advancing-controller) had to be restarted if the advancing controller's database was down for a couple of minutes.
* Fixed an issue where the image preview was not displayed.
* Fixed an issue where the `GET child enumerations` request was not using the correct version.
* Fixed an issue where the [scheduler](../../docs/platform-deep-dive/core-components/core-extensions/scheduler) was sending messages multiple times.
## **Changed** 🛠️
### FLOWX.AI Engine 🚂
* [Advancing controller](../../docs/platform-deep-dive/core-components/flowx-engine#advancing-controller) now supports OracleDBs
[Deployment guidelines v3.1](deployment-guidelines-v3.1.0#advancing-controller-with-oracle)
### UI Designer ✍️
UI Designer improvements:
* You can now select enumeration data source for [Select](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/select-form-field), [Checkbox](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/checkbox-form-field), and [Radio](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/radio-form-field) UI elements.
* Clear content button option is available in the UI Designer, as a setting at element level for [form elements](../../docs/building-blocks/ui-designer/ui-component-types/form-elements) populated with content.
* Form elements under hidden containers can be disabled now.
:::info
Here is a brief example: form → disabled = `true` + child form element: disabled = `false` ⇒ entire form is disabled.
:::
* Improved the UI element configuration error experience: when there is an error in an element’s settings, the element is marked and an error message is displayed on hover.
* Added code editor in expression fields.
[UI Designer](../../docs/building-blocks/ui-designer)
### Process designer
* Added a new flag in process settings to use a process in task management.

[Task management](../../docs/platform-deep-dive/plugins/custom-plugins/task-management)
## **Known issues** 🙁
### Reporting
* Reporting plugin is not compatible with Oracle DBs.
### UI Designer
* When configuring an [Input UI element](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/input-form-field) with a 'Has clear' (content clear) property, we recommend setting a fixed width value for the element. If the width is set to 'fill', it may cause the UI to break.
* The 'auto' size property for Fit W is not available for [input](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/input-form-field), [select](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/select-form-field), [switch](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/switch-form-field), [textarea](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/text-area) and [datepicker](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/datepicker-form-field) UI elements.
[Deployment guidelines v3.1](./deployment-guidelines-v3.1.0)
# Deployment guidelines v3.2.0
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.2.0-april-2023/deployment-guidelines-v3.2.0
Do not forget: when upgrading to a new platform version, always check and make sure your installed component versions match the versions stated in the release notes. To do that, go to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.2.0** FLOWX.AI release, importing old process definitions into the new platform release is not possible (this applies to exports from **\< 3.2.0** releases).

## Component versions
| :ballot\_box\_with\_check: | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 | 2.4.0 | 2.3.0 | 2.2.0 | 2.1.0 |
| ------------------------------ | ---------- | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | --------- |
| **Process engine** | **2.2.1** | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 | 0.4.22 | 0.4.21 | 0.4.18 | 0.4.13 |
| **Admin** | **2.2.2** | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 | 0.3.29 | 0.3.23 | 0.3.21 | 0.3.13 |
| **Designer** | **3.21.1** | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 | 2.11.2 |
| **@flowx/ui-sdk** | **3.21.1** | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.21.1** | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.21.1** | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.5 | 0.2.4 |
| **flowx-process-renderer** | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 | 2.11.2 |
| **CMS Core** | **1.2.0** | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 | 0.2.20 | 0.2.18 | 0.2.17 | 0.2.17 |
| **Scheduler Core** | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 | 0.0.24 | 0.0.23 | 0.0.23 | 0.0.23 |
| **Notification Plugin** | **2.0.5** | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 | 1.0.191 | 1.0.190 | 1.0.190 | 1.0.186-1 |
| **Document Plugin** | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 | 1.0.35 | 1.0.31 | 1.0.31 | 1.0.30 |
| **OCR Plugin** | **1.0.2** | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.0.109 | 0.0.109 | 0.0.109 |
| **License Core** | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 | 0.1.15 | 0.1.13 | 0.1.13 | 0.1.12 |
| **Customer Management Plugin** | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 | 0.1.20 | 0.1.18 | 0.1.18 | 0.1.18 |
| **Task Management Plugin** | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 | 0.0.22 | 0.0.21 | 0.0.21 | 0.0.16 |
| **Data search** | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Audit Core** | **1.0.5** | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Reporting** | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | **2.0.7** | 2.0.4 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### 3.2.0 Minimum Recommended Versions
| FLOWX.AI Platform Version | Component name | Minimum recommended version (tested versions) |
| ------------------------- | ---------------------------- | --------------------------------------------- |
| 3.2 | Keycloak | 18.0.x |
| 3.2 | Kafka | 3.2.0 |
| 3.2 | PostgreSQL | 14.3.0 |
| 3.2 | MongoDB | 5.0.8 |
| 3.2 | Redis | 6.2.6 |
| 3.2 | Elasticsearch | 7.17 |
| 3.2 | S3 (Min.IO) / minio-operator | 2022-05-26T05-48-41Z / 4.5.4 |
| 3.2 | OracleDB | 19.8.0.0.0 |
| 3.2 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
## Additional configuration
### CMS audit log
New environment variable that needs to be configured on CMS microservice:
* `KAFKA_TOPIC_AUDIT_OUT` - the identifier for the Kafka topic used to receive audit logs
[CMS setup guide](../../docs/platform-setup-guides/cms-setup-guide)
### Notifications plugin
New environment variables added to configure the error handler on Notifications plugin - Kafka consumer (default values below):
```
KAFKA_CONSUMER_ERROR_HANDLING_ENABLED: FALSE
KAFKA_CONSUMER_ERROR_HANDLING_RETRIES: 0
KAFKA_CONSUMER_ERROR_HANDLING_RETRY_INTERVAL: 1000
```
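With the defaults above, the error handler is disabled and no retries occur. As a rough sketch of what the retry knobs imply (assuming, as the names suggest, that a failed record is retried the configured number of times with the configured interval between attempts):

```python
# Hypothetical override of the defaults shown above.
error_handling_enabled = True
retries = 3               # KAFKA_CONSUMER_ERROR_HANDLING_RETRIES
retry_interval_ms = 1000  # KAFKA_CONSUMER_ERROR_HANDLING_RETRY_INTERVAL

# Worst case, a failing record delays the consumer for retries * interval
# before the error handler gives up on it.
worst_case_delay_ms = retries * retry_interval_ms if error_handling_enabled else 0
print(worst_case_delay_ms)  # 3000
```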
More information about the new environment variables:
[Notifications plugin setup guide](../../docs/platform-deep-dive/plugins/plugins-setup-guide/notifications-plugin-setup#error-handling)
### Client and environment
Only users with the following admin role can set/edit the new client and environment feature:
* `ROLE_ADMIN_MANAGE_PLATFORM_ADMIN`
### OCR plugin
Replaced `MINIO_` prefix with `STORAGE_S3_` for storage related environment variables.
New environment variables:
* `STORAGE_S3_ACCESS_KEY`
* `STORAGE_S3_SECRET_KEY`
* `STORAGE_S3_HOST`
* `STORAGE_S3_LOCATION`
* `STORAGE_S3_OCR_SCANS_BUCKET`
* `STORAGE_S3_OCR_SIGNATURE_BUCKET`
* `STORAGE_S3_OCR_SIGNATURE_FILENAME`
The following environment variable from previous releases must be removed in order to use the OCR plugin: `CELERY_BROKER_URL`.
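Because only the prefix changed, values from an existing deployment can be carried over mechanically. A hypothetical migration sketch (the values are placeholders):

```python
# Rename old MINIO_-prefixed variables to the new STORAGE_S3_ prefix.
old_env = {
    "MINIO_ACCESS_KEY": "example-key",     # placeholder value
    "MINIO_SECRET_KEY": "example-secret",  # placeholder value
    "MINIO_HOST": "s3.example.local",      # placeholder value
}
new_env = {key.replace("MINIO_", "STORAGE_S3_", 1): value
           for key, value in old_env.items()}
print(sorted(new_env))
# ['STORAGE_S3_ACCESS_KEY', 'STORAGE_S3_HOST', 'STORAGE_S3_SECRET_KEY']
```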
More information available in the below section:
[OCR setup guide](../../docs/platform-deep-dive/plugins/plugins-setup-guide/ocr-plugin-setup)
# v3.2.0 April 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.2.0-april-2023/v3.2.0-april-2023
Drumroll please... 🥁 We are excited to announce **FLOWX.AI 3.2** release 🔥.
So, what are you waiting for? Grab a snack, sit back, and get ready to explore the newest version of **FLOWX.AI**.
## **New features** 🆕
### UI Designer ✍️
New features for the UI Designer:
#### Segmented button
Added a new type of button - the **segmented button**. It allows users to pick only one option from a group, and you can configure between 2 and 5 options per group. The segmented button is intuitive and can make your application easier to use.

#### Shadows
The new shadow feature is a visual effect that creates a sense of depth and dimensionality in UI designer elements.
The shadow option can be customized to suit the overall design aesthetic of the interface, with properties for shadow color, opacity, blur and distance from the element. It can be applied to a range of UI elements, including [root elements](../../docs/building-blocks/ui-designer/ui-component-types/root-components), forms, [collections](../../docs/building-blocks/ui-designer/ui-component-types/collection), [buttons](../../docs/building-blocks/ui-designer/ui-component-types/buttons), messages, [images](../../docs/building-blocks/ui-designer/ui-component-types/image) and [file preview](../../docs/building-blocks/ui-designer/ui-component-types/file-preview).

#### Disable button - add disabled condition
A new feature has been added that allows a button to be disabled in cases where custom form validations are applied. The user can set a disabled condition for the button element, and the button will remain disabled until the validation criteria are met.

### Platform
#### Set client and environment
The Designer app now allows users to set the client and environment through the **Platform Status** tab. In cases where these are not configured, a modal will be displayed on the Flowx instance, and the user will be required to enter the necessary information before accessing any other section.

If the user lacks the required authorization, a separate modal will be displayed.

[Additional configuration for client and environment](./deployment-guidelines-v3.2.0#client-and-environment)
### Process designer
#### Floating menu
Added a floating menu with multiple actions, including "Home", "View instance data", "Process definition", and "Info". The menu remains visible during error and loading states, with error toasts displayed over it:
* the "View instance data" action opens a process instance view that overlays the rendered interface
* the "Process definition" action opens the process definition in the same tab, displaying the same version that was run
* the "Info" action displays a card with process definition info and instance start and end

### Content Management
#### CMS audit log
Added audit options for the CMS system. It keeps track of changes made to different parts of the CMS, like [Enumerations](../../docs/platform-deep-dive/core-components/core-extensions/content-management/enumerations), [Substitution Tags](../../docs/platform-deep-dive/core-components/core-extensions/content-management/substitution-tags), [Languages](../../docs/platform-deep-dive/core-components/core-extensions/content-management/languages), [Source Systems](../../docs/platform-deep-dive/core-components/core-extensions/content-management/source-systems), and [Media Library](../../docs/platform-deep-dive/core-components/core-extensions/content-management/media-library).
Whether you need to troubleshoot an error or are just interested in keeping a close eye on changes, the audit log is a useful tool for monitoring and debugging.

There is also a dedicated audit section for each element mentioned above.

[Additional configuration for CMS audit log](./deployment-guidelines-v3.2.0#cms-audit-log)
## **Bug fixes** 🔧
* **\[WEB\_SDK]** Fixed an issue where custom components were unable to access UI Actions when input keys were not defined
## **Changed** 🛠️
### UI Designer ✍️
#### Improved rendering mechanism
* The UI Designer now has an enhanced rendering mechanism that prevents the page from scrolling up upon reloading, improving the user experience
* Element bounding box in UI Designer view now reflects Spacing and Sizing settings
#### Added new "requiredTrue" validator - SWITCH element
The "requiredTrue" validator for the SWITCH element enforces that the toggle must be switched on to be valid.

### Notifications plugin
* New environment variables added to configure the error handler on Notifications plugin - Kafka consumer
[Additional configuration for Notifications plugin](./deployment-guidelines-v3.2.0#notifications-plugin)
### OCR plugin
* With the new release of ocr-plugin **1.0.x** version, RabbitMQ is no longer needed
* New environment variables
[Additional configuration here](./deployment-guidelines-v3.2.0#ocr-plugin)
### Process designer
* Improvement: process definition names can only contain letters, numbers, and the following special characters: `` ` [ ] ( ) . _ - ``
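The naming rule can be sketched as a regular expression. This is an assumption on our part — the release note does not give the exact pattern, and whether spaces are permitted is not stated (this sketch allows them):

```python
import re

# Letters, numbers, and the special characters ` [ ] ( ) . _ -
# (space allowed here as an assumption).
NAME_PATTERN = re.compile(r"[A-Za-z0-9`\[\](). _-]+")

print(bool(NAME_PATTERN.fullmatch("Onboarding_v2 (draft)")))  # True
print(bool(NAME_PATTERN.fullmatch("onboarding#v2")))          # False
```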
## **Known issues** 🙁
### Reporting
* Reporting plugin is not compatible with Oracle DBs.
### UI Designer
* When configuring an [Input UI element](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/input-form-field) with a 'Has clear' (content clear) property, we recommend setting a fixed width value for the element. If the width is set to 'fill', it may cause the UI to break.
[Deployment guidelines v3.2](./deployment-guidelines-v3.2.0)
# Deployment guidelines v3.3.0
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.3.0-july-2023/deployment-guidelines-v3.3.0
Do not forget, when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.3.0** FLOWX.AI release, it is not possible to import old process definitions into the new platform release (this applies to exports from releases **\< 3.3.0**).

## Component versions
| 🧩 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 | 2.4.0 | 2.3.0 | 2.2.0 | 2.1.0 |
| ------------------------------ | ----------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | --------- |
| **Process engine** | **3.6.0** | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 | 0.4.22 | 0.4.21 | 0.4.18 | 0.4.13 |
| **Admin** | **2.5.2** | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 | 0.3.29 | 0.3.23 | 0.3.21 | 0.3.13 |
| **Designer** | **3.28.13** | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 | 2.11.2 |
| **@flowx/ui-sdk** | **3.28.13** | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.28.13** | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.28.13** | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.5 | 0.2.4 |
| **flowx-process-renderer** | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 | 2.11.2 |
| **CMS Core** | **1.3.0** | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 | 0.2.20 | 0.2.18 | 0.2.17 | 0.2.17 |
| **Scheduler Core** | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 | 0.0.24 | 0.0.23 | 0.0.23 | 0.0.23 |
| **events-gateway** | **1.0.2** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **Notification Plugin** | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 | 1.0.191 | 1.0.190 | 1.0.190 | 1.0.186-1 |
| **Document Plugin** | **2.0.4** | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 | 1.0.35 | 1.0.31 | 1.0.31 | 1.0.30 |
| **OCR Plugin** | **1.0.8** | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.0.109 | 0.0.109 | 0.0.109 |
| **License Core** | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 | 0.1.15 | 0.1.13 | 0.1.13 | 0.1.12 |
| **Customer Management Plugin** | **0.2.4** | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 | 0.1.20 | 0.1.18 | 0.1.18 | 0.1.18 |
| **Task Management Plugin** | **2.1.2** | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 | 0.0.22 | 0.0.21 | 0.0.21 | 0.0.16 |
| **Data search** | **0.2.0** | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Audit Core** | **1.0.6** | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Reporting** | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | **0.3.0** | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | **2.1.4** | 2.0.7 | 2.0.4 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Recommended Versions for FLOWX.AI 3.3.0 ☑️
| FLOWX.AI Platform Version | Component name | Recommended version (tested versions) |
| ------------------------- | ---------------------------- | ------------------------------------- |
| 3.3 | Keycloak | 18.0.x |
| 3.3 | Kafka | 3.2.0 |
| 3.3 | PostgreSQL | 14.3.0 |
| 3.3 | MongoDB | 5.0.8 |
| 3.3 | Redis | 6.2.6 |
| 3.3 | Elasticsearch | 7.17 |
| 3.3 | S3 (Min.IO) / minio-operator | 2022-05-26T05-48-41Z / 4.5.4 |
| 3.3 | OracleDB | 19.8.0.0.0 |
| 3.3 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
## Additional configuration
This section describes the additional configuration that is required to use the new features in FlowX.AI.
### Process engine
#### Process Instance Indexing through Kafka transport
Introducing a new Kafka transport strategy for sending details about process instances to be indexed in Elasticsearch. To enable indexing of process instances in Elasticsearch through Kafka, configure the following environment variables:
* `FLOWX_INDEXING_ENABLED`
| Variable Name | Values | Description |
| ------------------------ | ------ | ------------------------------------------------------ |
| FLOWX\_INDEXING\_ENABLED | true | Enables indexing with Elasticsearch for the whole app |
| FLOWX\_INDEXING\_ENABLED | false | Disables indexing with Elasticsearch for the whole app |
* `FLOWX_INDEXING_PROCESSINSTANCE_INDEXING_TYPE`
| Variable Name | Values | Definition |
| ------------------------------------------------ | ----------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | no-indexing | No indexing is performed for process instances |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | http | Process instances are indexed via HTTP (direct connection from process-engine to Elasticsearch through HTTP calls) |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | kafka | Process instances are indexed via Kafka (data to be indexed is sent through a Kafka topic - the new strategy for the applied solution) |
For Kafka indexing, Kafka Connect with the Elasticsearch Sink Connector must be deployed in the infrastructure.
* `FLOWX_INDEXING_PROCESSINSTANCE_INDEX_NAME`: specify the name of the index used for process instances
| Variable Name | Values | Definition |
| ------------------------------------------------------- | ----------------- | ----------------------------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEX\_NAME | process\_instance | The name of the index used for storing process instances. It is also part of the search pattern |
* `FLOWX_INDEXING_PROCESSINSTANCE_SHARDS`: set the number of shards for the index
| Variable Name | Values | Definition |
| ---------------------------------------- | ------ | -------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_SHARDS | 1 | The number of shards for the Elasticsearch index storing process instances |
* `FLOWX_INDEXING_PROCESSINSTANCE_REPLICAS`: set the number of replicas for the index
| Variable Name | Values | Definition |
| ------------------------------------------ | ------ | ---------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_REPLICAS | 1 | The number of replicas for the Elasticsearch index storing process instances |
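Taken together, a minimal process-engine environment for indexing through Kafka could look like the following sketch. The values shown are the documented defaults; adjust them per environment and verify the exact variable spelling against your deployment.

```shell
# Enable Elasticsearch indexing app-wide and route it through Kafka
FLOWX_INDEXING_ENABLED=true
FLOWX_INDEXING_PROCESSINSTANCE_INDEXING_TYPE=kafka

# Index settings for process instances (documented defaults)
FLOWX_INDEXING_PROCESSINSTANCE_INDEX_NAME=process_instance
FLOWX_INDEXING_PROCESSINSTANCE_SHARDS=1
FLOWX_INDEXING_PROCESSINSTANCE_REPLICAS=1
```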
#### Topics related to process event messages
##### New process engine Kafka topics
| Default parameter (env var) | Default FLOWX.AI value (can be overwritten) |
| --------------------------------- | ------------------------------------------- |
| KAFKA\_TOPIC\_PROCESS\_INDEX\_OUT | ai.flowx.dev.core.index.process.v1 |
For more details, please check the following section:
[Process Instance Indexing through Kafka transport](../../docs/platform-setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing)
#### New service account
Added a new service account, `flowx-process-engine-sa`. This service account is required to enable the use of the Start Catch Event node.
[Service accounts](../../docs/platform-setup-guides/access-management/configuring-an-iam-solution#process-engine-service-account)
### Events gateway
Added a new **events-gateway** microservice, which requires the following configuration.
The events-gateway is designed specifically for handling events. Previously, each [**process-engine**](../../docs/terms/flowxai-process-engine) pod ran its own WebSocket (WS) server, and the front-end (FE) connected to the process-engine to receive messages.
Now, instead of a server holding the messages, they are stored in Redis: the process-engine sends messages to the events-gateway, which is responsible for publishing them to Redis. Clients connect to the events-gateway with an HTTP request and wait for Server-Sent Events (SSE) to flow in on that request, keeping it open for as long as they want to receive SSE for a specific instance.
### Events-gateway Kafka topics
New Kafka topics have been added for the events-gateway. These topics are used to send and receive messages between the events-gateway and the process-engine.
| Topic Name | Description | Value |
| ---------------------------------------------- | --------------------------------------------------------- | ---------------------------------------------------- |
| KAFKA\_TOPIC\_EVENTS\_GATEWAY\_OUT\_MESSAGE | Outgoing messages from process-engine to events-gateway | ai.flowx.eventsgateway.engine.commands.message.v1 |
| KAFKA\_TOPIC\_EVENTS\_GATEWAY\_OUT\_DISCONNECT | Disconnect commands from process-engine to events-gateway | ai.flowx.eventsgateway.engine.commands.disconnect.v1 |
| KAFKA\_TOPIC\_EVENTS\_GATEWAY\_OUT\_CONNECT | Connect commands from process-engine to events-gateway | ai.flowx.eventsgateway.engine.commands.connect.v1 |
New Kafka topics that should be added to the events-gateway configuration.
| Topic Name | Description | Value |
| ---------------------------------------------------------------- | ------------------------------------------------------------- | ---------------------------------------------------- |
| KAFKA\_TOPIC\_EVENTS\_GATEWAY\_PROCESS\_INSTANCE\_IN\_MESSAGE | Where events-gateway listens for messages from process-engine | ai.flowx.eventsgateway.engine.commands.message.v1 |
| KAFKA\_TOPIC\_EVENTS\_GATEWAY\_PROCESS\_INSTANCE\_IN\_DISCONNECT | Where events-gateway listens for disconnect commands from process-engine | ai.flowx.eventsgateway.engine.commands.disconnect.v1 |
| KAFKA\_TOPIC\_EVENTS\_GATEWAY\_PROCESS\_INSTANCE\_IN\_CONNECT | Where events-gateway listens for connect commands from process-engine | ai.flowx.eventsgateway.engine.commands.connect.v1 |
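Note that the engine's OUT topics and the gateway's IN topics must resolve to the same underlying Kafka topics. A sketch of the paired configuration, using the default values from the two tables above:

```shell
# process-engine side: where the engine publishes commands
KAFKA_TOPIC_EVENTS_GATEWAY_OUT_MESSAGE=ai.flowx.eventsgateway.engine.commands.message.v1
KAFKA_TOPIC_EVENTS_GATEWAY_OUT_DISCONNECT=ai.flowx.eventsgateway.engine.commands.disconnect.v1
KAFKA_TOPIC_EVENTS_GATEWAY_OUT_CONNECT=ai.flowx.eventsgateway.engine.commands.connect.v1

# events-gateway side: where the gateway listens; each value must match
# the corresponding OUT topic above
KAFKA_TOPIC_EVENTS_GATEWAY_PROCESS_INSTANCE_IN_MESSAGE=ai.flowx.eventsgateway.engine.commands.message.v1
KAFKA_TOPIC_EVENTS_GATEWAY_PROCESS_INSTANCE_IN_DISCONNECT=ai.flowx.eventsgateway.engine.commands.disconnect.v1
KAFKA_TOPIC_EVENTS_GATEWAY_PROCESS_INSTANCE_IN_CONNECT=ai.flowx.eventsgateway.engine.commands.connect.v1
```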
[Events gateway](../../docs/platform-deep-dive/core-components/events-gateway)
[Events gateway setup guide](../../docs/platform-setup-guides/events-gateway-setup)
[Process engine topics](../../docs/platform-setup-guides/flowx-engine-setup-guide#configuring-events-gateway)
### SSE
Starting with the 3.3 platform release, the WebSocket protocol has been removed. Therefore, if you are using `socket.io-client`, you will need to make some changes. Here's what you should do:
1. Uninstall `socket.io-client`:
Before proceeding, ensure that you uninstall `socket.io-client` from your project. You can do this using the following command:
```bash
npm uninstall socket.io-client
```
2. Install `event-source-polyfill@1.0.31`:
To replace the functionality provided by socket.io-client, you will need to use a new package called `event-source-polyfill@1.0.31` (as mentioned in the [**Installing the library**](../../docs/platform-deep-dive/core-components/renderer-sdks/angular-renderer#installing-the-library) section). This package serves as a polyfill for the EventSource API, which enables servers to send events to clients over HTTP. The EventSource API is commonly used for server-sent events (SSE) and real-time web applications.
```bash
npm install event-source-polyfill@1.0.31
```
### Message events
#### Topics related to message events
New Kafka topics that should be added to the process-engine configuration.
| Default parameter (env var) | Default FLOWX.AI value (can be overwritten) | Definition |
| -------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------ |
| KAFKA\_TOPIC\_PROCESS\_EVENT\_MESSAGE | ai.flowx.dev.core.message.event.process.v1 | This topic is used for throwing intermediate event messages. |
| KAFKA\_TOPIC\_PROCESS\_START\_FOR\_EVENT\_IN | ai.flowx.dev.core.trigger.start-for-event.process.v1 | This topic is used to start processes. |
### Bulk updates
New Kafka topics that should be added to the process-engine configuration, related to the Task Management plugin's bulk updates.
| Default parameter (env var) | Default FLOWX.AI value (can be overwritten) | Definition |
| ------------------------------------------- | ------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| KAFKA\_TOPIC\_PROCESS\_OPERATIONS\_BULK\_IN | ai.flowx.core.trigger.operations.bulk.v1 | On this topic, you can send the operations accepted by the "KAFKA\_TOPIC\_PROCESS\_OPERATIONS\_IN" topic as an array, allowing you to perform multiple operations at once. |
#### Example
```json
{
"operations": [
{
"operationType": "TERMINATE",
"processInstanceUuid": "6ae8274a-2778-4ff9-8fcb-6c84a5eb2bc6",
"taskId": "doesn't matter"
},
{
"operationType": "HOLD",
"processInstanceUuid": "6ae8274a-2778-4ff9-8fcb-6c84a5eb2bc6",
"taskId": "doesn't matter"
}
]
}
```
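If you want to smoke-test the bulk topic from a terminal, a message like the one above can be published with the stock Kafka console producer. Treat this strictly as an illustrative sketch: the broker address is an assumption, and depending on your setup the engine may expect additional Kafka message headers, so check your environment before relying on it.

```shell
# Illustrative sketch: publish one bulk-operations message to the default topic.
# localhost:9092 is an assumed broker address; the taskId value is irrelevant,
# as in the JSON example above.
echo '{"operations":[{"operationType":"HOLD","processInstanceUuid":"6ae8274a-2778-4ff9-8fcb-6c84a5eb2bc6","taskId":"not-used"}]}' | \
  kafka-console-producer.sh \
    --bootstrap-server localhost:9092 \
    --topic ai.flowx.core.trigger.operations.bulk.v1
```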
[Process engine topics](../../docs/platform-setup-guides/flowx-engine-setup-guide#topics-related-to-the-task-management-plugin)
## Migration Steps
To upgrade to FLOWX.AI 3.3.0, follow these steps:
* Make sure you have taken a backup of your current platform and database configurations.
* Verify that your current installed component versions match the versions specified in the release notes.
* Update the FLOWX.AI platform and all related components to the recommended versions.
* Update the necessary configuration files according to the additional configuration requirements.
* Restart the FLOWX.AI platform and related services.
* Verify that the platform is running correctly and all processes are functioning as expected.
* If you encounter any issues or errors during the upgrade process, refer to the troubleshooting section in the release notes or contact FLOWX.AI support for assistance.
## Troubleshooting
If you encounter any issues during the upgrade process or while running the FLOWX.AI platform, refer to the troubleshooting section in the release notes or contact FLOWX.AI support for assistance.
# v3.3.0 July 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.3.0-july-2023/v3.3.0-july-2023
We can't reinvent the wheel... but we can certainly give it a whole new spin! Drumroll, please! 🥁 **FLOWX.AI 3.3** has arrived, bringing a wave of exciting new features and enhancements.

Buckle up, hold on tight, and prepare for an extraordinary experience! 🚀
## **New features** 🆕
### New nodes: message events
Message events play a crucial role in integrating messaging capabilities within business process modeling. These [**events**](../../docs/terms/events) are specifically crafted to capture the interaction between different participants in a [**process**](../../docs/terms/flowx-process) by referencing messages. By leveraging message events, processes can temporarily halt their execution until the anticipated messages are received, facilitating efficient coordination and communication among [**process instances**](../../docs/terms/flowx-process-instance).

* **Message Throw Intermediate Event** - the events can be triggered at any time while the associated task is being performed
* **Interrupting event** - when the message is received, the user task is finished and the [**token**](../../docs/terms/token) advances in the process flow
* **Non-Interrupting event** - when messages are received, the user task, to which the catching boundary event is attached, is not finished immediately.
* **Message Catch Intermediate Event** - waits for a message to catch before continuing with the process flow
* **Message Catch Start Event** - starts an instance after receiving a message
[Message events](../../docs/building-blocks/node/message-events)
### 📢 Announcement: Transitioning from WebSockets to SSE (Server-Sent Events)
We are replacing WebSockets with SSE (Server-Sent Events) as the preferred technology for real-time communication in our system. SSE offers a lightweight and efficient solution for delivering server-initiated updates to the clients, enhancing the responsiveness and user experience.
With SSE, we can streamline the communication flow by eliminating the need for bidirectional channels. Instead, the server can send events directly to the clients, reducing network overhead and simplifying the implementation. SSE is built on top of the HTTP protocol, making it widely supported and easily integrable into existing systems.
Check the deployment guidelines for more information about the impact on the configs:
[SSE](deployment-guidelines-v3.3.0#sse)
### Events gateway
Added a new [**FLOWX.AI microservice**](../../docs/terms/microservices). The Events Gateway is a central communication service that processes and distributes SSE (Server-sent events) messages from Backend to Frontend. It acts as an intermediary between different system components, handling event processing, message distribution, and event publication. By reading messages from a Kafka topic and publishing events to the frontend renderer, it enables real-time updates in the user interface. Additionally, it integrates with Redis to publish events on a stream for other system components to consume. The Events Gateway ensures efficient event handling and facilitates seamless communication within the system.

[Events gateway](../../docs/platform-deep-dive/core-components/events-gateway)
[Events gateway setup guide](../../docs/platform-setup-guides/events-gateway-setup)
### Process engine
#### NEW: process instance indexing through Kafka transport
Sending data through **Kafka**: rather than sending data directly from the process engine to Elasticsearch, a new strategy is introduced where the [**process engine**](../../docs/terms/flowxai-process-engine) sends messages to a Kafka topic whenever there is something to be indexed from a [**process instance**](../../docs/terms/flowx-process-instance). [Kafka Connect](https://kafka.apache.org/documentation.html#connect) is configured to read these messages and send them to Elasticsearch for indexing. This approach allows for fire-and-forget communication, eliminating the need for the process engine to wait for indexing requests to complete.
[Process instance indexing](../../docs/platform-setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing)
#### KafkaConnect ElasticSearch sink plugin
A new component, Kafka Connect, has been added together with its configuration. Kafka Connect listens to a specific topic where process instances generate messages and sends them to Elasticsearch indexes. The configuration uses the KafkaConnect Elasticsearch sink connector plugin, which is responsible for handling this task.
[Example configuration for applying the solution with Kafka Connect](../../docs/platform-setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing#example-configuration-for-applying-the-solution-with-kafka-connect)
Check the deployment guidelines for version 3.3:
[Kafka transport](./deployment-guidelines-v3.3.0#process-engine)
### UI Designer ✍️
#### Dynamic values
Added the possibility to add dynamic values in various element settings. You can now use process parameters or [**substitution tags**](../../docs/terms/flowx-substitutions-tags) for the following element properties: default value (excluding switch), label, placeholder, helper text, error message, prefix, and suffix. Additionally, dynamic values are supported for specific elements such as [**Document Preview**](../../docs/building-blocks/ui-designer/ui-component-types/file-preview), [**Card**](../../docs/building-blocks/ui-designer/ui-component-types/root-components/card), [**Form**](../../docs/building-blocks/ui-designer/ui-component-types/form-elements), Message, [**Button**](../../docs/building-blocks/ui-designer/ui-component-types/buttons), [**Upload**](../../docs/building-blocks/ui-designer/ui-component-types/buttons), [**Select**](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/select-form-field), [**Checkbox**](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/checkbox-form-field), [**Radio**](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/radio-form-field), [**Segmented button**](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/segmented-button), Text, Link, Modal, and Step. This enhancement allows for greater flexibility and customization.

#### Computed values
Computed values are dynamically generated or calculated values based on JavaScript expressions instead of static predefined values.
Computed values can be created using the [**UI Designer**](../../docs/terms/flowx-ai-ui-designer) by writing JavaScript expressions that operate on process parameters or other variables within the application. These expressions can perform calculations, transformations, or other operations to generate the desired value at runtime. By enabling computed values, the application provides flexibility and the ability to create dynamic and responsive user interfaces.

[Dynamic & computed values](../../docs/building-blocks/ui-designer/dynamic-and-computed-values)
#### New value slider UI element
Introducing a new slider UI element that allows users to select and adjust numerical values within a specified range. The slider element can be added under a parent form element by dragging and dropping or pasting.

#### Icons
The new Icons feature enhances the visual appeal and customization options for the following UI components: [**Input**](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/input-form-field), [**Select**](../../docs/building-blocks/ui-designer/ui-component-types/form-elements/select-form-field) and [**Button**](../../docs/building-blocks/ui-designer/ui-component-types/buttons).

* Added support for customizing icons in UI elements by uploading and managing SVG files through the [Media Library](../../docs/platform-deep-dive/core-components/core-extensions/content-management/media-library)
* Improved UI Designer integration for easy icon customization.
* Introduced options to set colors for icons in UI Designer
* Enhanced icon management with the ability to mark SVG files as icons in the Media Library

For more information, check the following section:
[Media Library](../../docs/platform-deep-dive/core-components/core-extensions/content-management/media-library#icons)
## **Bug Fixes** 🔧
#### Document Plugin
* Fixed an issue that caused an error when splitting PDF documents without a barcode on the first page.
#### Admin
* Addressed an issue that resulted in the creation of duplicate processes with the same UUID when cloning and renaming a published process. Now, a new UUID is generated for the renamed process, ensuring uniqueness and preventing duplication.
#### Integrations
* Fixed an issue where the received message in the callback action, configured with parameters from integration, was incorrectly saved on the node ID instead of the configured key. The fix ensures that integrations correctly map the received message to the intended key, saving it in the intended location.
#### Task Manager
* Resolved an issue where processes started with the "inherit" option did not connect to the correct namespace when opened from the task manager. Subprocesses now establish the connection using the appropriate contextInstanceUuid from the Child Process, ensuring they connect to the correct namespace and receive the expected updates.
:::caution
Currently, the two triggers lack support for Service Level Agreement (SLA) functionality.
:::
## **Changed** 🛠️
### Task Management - Bulk updates
You can now send bulk update requests on [**Kafka**](../../docs/terms/flowx-kafka), performing multiple operations at once.
[Bulk updates](./deployment-guidelines-v3.3.0#bulk-updates)
### Data model
#### Viewing data model references
This update introduces the ability to view attribute usage within the process. You can now easily see where a specific attribute is being used by accessing the "View References" feature. This feature provides a list of process data keys associated with each attribute and displays possible references, such as UI Elements.

Please note that the option to view references is not available for object and array attribute types.
[Data model reference](../../docs/building-blocks/process/process-definition#data-model-reference)
To ensure smooth upgrading to the v3.3 platform release and to automatically link already existing data models, please execute the following command:
```shell
curl --location --request PATCH '{{baseUrl}}/api/internal/ui-templates/data-model/link' \
--header 'Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXXXX' \
--data ''
```
Please replace `{{baseUrl}}` with the appropriate base URL for your platform.
Make sure to execute this command with the necessary permissions (the `manage-processes` and `admin` roles) and a valid authorization token to ensure a successful upgrade. For more information about the needed roles and scopes, check the [**Configuring access rights for Admin**](../../docs/flowx-designer/designer-setup-guide/configuring-access-rights-for-admin) section.
#### Reporting
* You can now set "Used in reporting" and "Sensitive data" flags for an object or array of objects (all the child attributes will inherit its value - "true" or "false"), without the need to edit each attribute.
#### Copy-paste objects
* Copy-paste objects structure under data model

### Platform
#### Set client and environment
To configure the client and environment settings in the platform, you can use the following environment variables:
* `FLOWX_CLIENT_NAME`
* `FLOWX_ENVIRONMENT_NAME`
Both configurations must be set for the admin component to retrieve them. In case the environment variables are not overridden, the administrator can manually configure them in FLOWX Designer. Here's how you can do it:

By setting the appropriate values for these environment variables, you ensure that the platform is correctly configured with the desired client and environment settings.
### Process designer
#### Swimlanes interaction
* We have introduced a new and improved way to interact with process swimlanes by incorporating a contextual menu.
Check out the animation below to see the new swimlanes interaction.

### Other
* The autoarrange function has been removed from [**Process Designer**](../../docs/terms/flowx-process-designer).
## **Known issues** 🙁
* **Slider UI element**: Currently, there is an issue where the value thumb of a slider component does not display the correct value when sourced from process data.
* **Business rules**: Presently, there is an issue where changing the language of a [**business rule**](../../docs/terms/business-rules) does not result in its execution using the new language. Despite updating the language value in the database, the business rule continues to be executed with the original language, leading to unexpected behavior.
* **Process Designer**:
* In certain cases, deleting a boundary node in the process designer and navigating back to the [**process designer**](../../docs/terms/flowx-process-designer) from the [**UI Designer**](../../docs/terms/flowx-ai-ui-designer) does not remove the associated sequence from the boundary event. This issue specifically occurs when the sequence is linked to the deleted boundary node.
* Select Sequence buttons in the nodes UI interface may overlap.
* There is a known issue where users are unable to select the node name with the mouse in the user interface.
* **Plugins**: Reporting plugin is not compatible with Oracle DBs.
[Deployment guidelines v3.3](./deployment-guidelines-v3.3.0)
# Deployment guidelines v3.4.0
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.0-september-2023/deployment-guidelines-v3.4.0
Do not forget, when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.4.0** FLOWX.AI release, it is no longer possible to import old process definitions (exported from releases **\< 3.4.0**) into the new platform release.

## Component versions
| 🧩 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 | 2.4.0 | 2.3.0 |
| ------------------------------ | ----------- | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.2.4** | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 | 0.4.22 | 0.4.21 |
| **admin** | **3.2.3** | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 | 0.3.29 | 0.3.23 |
| **designer** | **3.33.10** | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 |
| **@flowx/ui-sdk** | **3.33.10** | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.33.10** | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.33.10** | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 |
| **flowx-process-renderer** | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 |
| **cms-core** | **1.3.6** | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 | 0.2.20 | 0.2.18 |
| **scheduler-core** | **1.2.0** | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 | 0.0.24 | 0.0.23 |
| **events-gateway** | **1.0.6** | 1.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **notification-plugin** | **2.0.5** | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 | 1.0.191 | 1.0.190 |
| **document-plugin** | **2.0.6** | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 | 1.0.35 | 1.0.31 |
| **ocr-plugin** | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.0.109 |
| **license-core** | **1.0.4** | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 | 0.1.15 | 0.1.13 |
| **customer-management-plugin** | **0.2.6** | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 | 0.1.20 | 0.1.18 |
| **task-management-plugin** | **3.0.0** | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 | 0.0.22 | 0.0.21 |
| **data-search** | **0.2.3** | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **audit-core** | **2.1.0** | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **reporting-plugin** | **0.1.2** | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | **0.3.2** | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | **2.3.0** | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | **2.1.4** | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there are some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FLOWX.AI 3.0**, the `paperflow-web-components` library will be deprecated (some components can still be found inside this library). Instead, the new components can be found in `@flowx/ui-toolkit@3.0`.
### Recommended Versions for FLOWX.AI 3.4.0 ☑️
| FLOWX.AI Platform Version | Component name | Recommended version (tested versions) |
| ------------------------- | ----------------- | ------------------------------------- |
| 3.4 | Keycloak | 18.0.x |
| 3.4 | Kafka | 3.2.3 |
| 3.4 | PostgreSQL | 14.3.0 |
| 3.4 | MongoDB | 5.0.8 |
| 3.4 | Redis | 6.2.6 |
| 3.4 | Elasticsearch | 7.17 |
| 3.4 | OracleDB | 19.8.0.0.0 |
| 3.4 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
# Additional configuration
This section outlines the supplementary configurations required to leverage the newly introduced features within FLOWX.AI.
## Data Model Update Procedure
When deploying the new version, it is mandatory to migrate the data model index using the following steps:
1. **Access Kibana**: Start by connecting to Kibana and locate the data model index, which is typically named "process-data-model."
2. **Open Kibana Devtools**: Navigate to Kibana → Devtools. In the Console tab, you should proceed with the following steps.
3. **Check Index Documents**: Copy and paste the following script into the left-hand window, replacing any existing content, and then execute it. This script will retrieve the existing documents in the index to validate that the index name is correct.
```json
GET process-data-model/_search
{
  "query": {
    "match_all": {}
  }
}
```
4. **Update Documents**: Replace any existing content in the left window with the following script, and then run it. This script updates the documents in the index by adding a new attribute, "processDefinitionVersionId," with the same value as "processDefinitionId."
```json
POST process-data-model/_update_by_query?wait_for_completion=false
{
  "query": {
    "match_all": {}
  },
  "script": {
    "source": "ctx._source.processDefinitionVersionId=ctx._source.processDefinitionId",
    "lang": "painless"
  }
}
```
5. **Verify the Update**: Re-run the script from step 3 to confirm that the new attribute, "processDefinitionVersionId," has been added with the same value as "processDefinitionId."
These steps will ensure a smooth migration of the data model index when deploying the new version.
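If you prefer to script the migration rather than paste the requests into Kibana Devtools, the two request bodies from steps 3 and 4 can be built programmatically and sent with any Elasticsearch client. This is an illustrative Python sketch (the `build_update_by_query_body` helper and index constant are assumptions for the example, not part of the platform):

```python
import json

# Index name used in the steps above; adjust if yours differs.
INDEX = "process-data-model"

# Step 3: query used to verify that the index contains documents.
MATCH_ALL_QUERY = {"query": {"match_all": {}}}

def build_update_by_query_body() -> dict:
    """Body for POST {INDEX}/_update_by_query: copies processDefinitionId
    into the new processDefinitionVersionId attribute on every document."""
    return {
        "query": {"match_all": {}},
        "script": {
            "source": (
                "ctx._source.processDefinitionVersionId="
                "ctx._source.processDefinitionId"
            ),
            "lang": "painless",
        },
    }

if __name__ == "__main__":
    # Print the payload that would be POSTed to the index.
    print(json.dumps(build_update_by_query_body(), indent=2))
```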
## Access rights for Fonts
In order to utilize the new fonts feature in the CMS microservice, it's mandatory to configure the following access rights:
| Module | Scope | Role default value | Microservice |
| ------------- | ------ | -------------------- | ------------------ |
| manage-themes | import | ROLE\_THEMES\_IMPORT | Content Management |
| | import | ROLE\_THEMES\_EDIT | Content Management |
| | import | ROLE\_THEMES\_ADMIN | Content Management |
| manage-themes | read | ROLE\_THEMES\_READ | Content Management |
| | read | ROLE\_THEMES\_EDIT | Content Management |
| | read | ROLE\_THEMES\_ADMIN | Content Management |
| | read | ROLE\_THEMES\_IMPORT | Content Management |
| manage-themes | edit | ROLE\_THEMES\_EDIT | Content Management |
| | edit | ROLE\_THEMES\_ADMIN | Content Management |
| manage-themes | admin | ROLE\_THEMES\_ADMIN | Content Management |
## Markdown support
New library added:
* `marked@^5.0.0` - "marked" is an open-source JavaScript library that parses and renders Markdown text into HTML; the `^5.0.0` range selects any 5.x version from the npm registry.
To install the library, run the following command:
`npm install marked@^5.0.0`
## Timer events
Incorporating new Kafka Topics and Consumer Groups.
### Admin
1. **KAFKA\_TOPIC\_PROCESS\_SCHEDULED\_TIMER\_EVENTS\_OUT\_SET**:
* Facilitates the transmission of scheduled message requests from the Admin microservice to the Scheduler.
* Utilize when setting up scheduled messages for precise timing and orchestration.
2. **KAFKA\_TOPIC\_PROCESS\_SCHEDULED\_TIMER\_EVENTS\_OUT\_STOP**:
* Responsible for forwarding requests from the Admin microservice to the Scheduler to halt scheduled messages.
* Utilize to cease previously scheduled messages for streamlined management.
3. **KAFKA\_TOPIC\_PROCESS\_START\_FOR\_EVENT\_IN**:
* Enables the initiation of process definitions that commence with a timer start event node.
* Aids in the launch of process flows driven by time-based triggers.
### Scheduler
1. **KAFKA\_TOPIC\_PROCESS\_SCHEDULED\_TIMER\_EVENTS\_IN\_SET**:
* Receives scheduled message setting requests from the Admin and Process engine microservices.
* Utilized for setting up precise message scheduling with a focus on timing accuracy.
2. **KAFKA\_TOPIC\_PROCESS\_SCHEDULED\_TIMER\_EVENTS\_IN\_STOP**:
* Handles requests from the Admin and Process engine microservices to terminate scheduled messages.
* Essential for promptly halting scheduled messages according to operational requirements.
3. **KAFKA\_TOPIC\_AUDIT\_OUT**:
* Introduction of audit functionality to the Scheduler microservice through Kafka.
* Enables tracking and management of scheduling activities.
New consumer groups:
```yaml
kafka:
  consumer:
    threads: 1
    scheduled-timer-events:
      threads: 1
      group-id: scheduled-timer-events
    stop-scheduled-timer-events:
      threads: 1
      group-id: stop-scheduled-timer-events
```
### Process engine
1. **KAFKA\_TOPIC\_PROCESS\_SCHEDULED\_TIMER\_EVENTS\_OUT\_SET**:
* Sent to the scheduler for setting scheduled messages.
2. **KAFKA\_TOPIC\_PROCESS\_SCHEDULED\_TIMER\_EVENTS\_OUT\_STOP**:
* Sent to the scheduler for stopping scheduled messages.
New consumer groups:
```yaml
kafka:
  consumer:
    threads: 1
    scheduled-timer-events:
      threads: 1
      group-id: scheduled-timer-events
    stop-scheduled-timer-events:
      threads: 1
      group-id: stop-scheduled-timer-events
```
### New service account
A new service account has been introduced. This service account is essential for enabling the usage of the Start Timer Event node. Detailed information is available in the following section:
[Service accounts](../../docs/platform-setup-guides/access-management/configuring-an-iam-solution#scheduler-service-account)
## Scheduler: New timer-event-scheduler (Cron)
Introducing a new timer event scheduler designed to manage timer events. This scheduler scans for expired messages every second, processing batches of 100 messages per iteration. For situations with higher message volumes, the scheduler ensures thorough message consumption:
```yaml
timer-event-scheduler:
  batchSize: 100
  cronExpression: "*/1 * * * * *" # every second
```
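The batching behavior described above can be illustrated with a small sketch (the `drain_expired` helper is hypothetical; the real scheduler reads from its own store). Each tick consumes at most `batchSize` expired messages, so a backlog larger than the batch size is worked off over successive ticks:

```python
from typing import List

BATCH_SIZE = 100  # mirrors timer-event-scheduler.batchSize

def drain_expired(pending: List[int], now: int,
                  batch_size: int = BATCH_SIZE) -> List[int]:
    """Return up to batch_size messages whose fire time is <= now,
    removing them from the pending list (oldest first)."""
    expired = sorted(t for t in pending if t <= now)[:batch_size]
    for t in expired:
        pending.remove(t)
    return expired

# With 250 expired messages and a batch of 100, three ticks are needed:
pending = list(range(250))  # fire times, all already in the past
ticks = 0
while drain_expired(pending, now=1000):
    ticks += 1
```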
## Scheduler: new recovery mechanism
```yaml
flowx:
timer-calculator:
delay-max-repetitions: 1000000
```
Suppose the "next execution" is set for 10:25 and the cycle step is 10 minutes. If the instance goes down for two hours, the next execution time should be 12:25, not 10:35. To calculate it, the scheduler repeatedly adds 10 minutes to 10:25 (10:35, 10:45, and so on) until it reaches the current time of 12:25. This ensures that the next execution time is adjusted correctly after the downtime.
* `FLOWX_TIMER_CALCULATOR_DELAY_MAX_REPETITIONS` - Caps how many cycle steps the calculator may add while catching up. For example, if the cycle step is one second and the system is down for two weeks (1,209,600 seconds) while max repetitions is set to 1,000,000, the calculator reaches the cap before reaching the current time and throws an exception. The next schedule cannot be computed, so the entry remains locked and must be rescheduled. This critical case only arises when extended downtime is combined with a very short cycle step (e.g., 1 second).
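The recovery calculation described above can be sketched as follows (an illustration only; `next_execution` is a hypothetical helper mirroring the described behavior, including the max-repetitions guard):

```python
from datetime import datetime, timedelta

def next_execution(last_next: datetime, step: timedelta, now: datetime,
                   max_repetitions: int = 1_000_000) -> datetime:
    """Advance the stored 'next execution' by whole cycle steps until it
    is at or after `now`. Raises if more than max_repetitions additions
    are needed (mirrors FLOWX_TIMER_CALCULATOR_DELAY_MAX_REPETITIONS)."""
    candidate = last_next
    for _ in range(max_repetitions + 1):
        if candidate >= now:
            return candidate
        candidate += step
    raise RuntimeError("max repetitions exceeded; entry must be rescheduled")

# Scheduled for 10:25 with a 10-minute cycle; instance down until 12:20.
nxt = next_execution(datetime(2023, 9, 1, 10, 25),
                     timedelta(minutes=10),
                     datetime(2023, 9, 1, 12, 20))
# nxt is the first cycle boundary at or after the current time (12:25).
```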
# v3.4.0 September 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.0-september-2023/v3.4.0-september-2023
Welcome to the FLOWX.AI 3.4 release! 🚀 This update introduces exciting new features and improvements to enhance your experience with FLOWX.AI. Get ready for an extraordinary journey! 🚀

## **What's New?** 🆕
# Introducing the Enhanced Versioning Module
We are excited to unveil the latest enhancements to our Versioning module, designed to improve your experience and streamline your workflow.

**Watch our video overview** to discover all the new features and improvements:
For more in-depth information and to explore the Versioning module further, please visit our documentation:
[Versioning Module Documentation](/3.x/docs/building-blocks/process/versioning)
Stay updated and take advantage of these exciting updates in our Versioning module!
### Fresh nodes: Timer Events
These nodes enable you to trigger specific actions or events at predefined time intervals, durations, or cycles. With timer event nodes, you can design processes that respond to time-related conditions, ensuring smoother workflow execution and enhanced automation.

Three primary Timer Event node types:
* [**Timer Start Event**](/3.x/docs/building-blocks/node/timer-events/timer-start-event) (interrupting/non-interrupting)
* [**Timer Intermediate Event**](/3.x/docs/building-blocks/node/timer-events/timer-intermediate-event)
* [**Timer Boundary Event**](/3.x/docs/building-blocks/node/timer-events/timer-boundary-event) (interrupting/non-interrupting)
So whether it's reminders, recurring tasks, or tasks with deadlines, these Timer Event nodes are your go-to for keeping things in sync with the clock.
[Timer Events](/3.x/docs/building-blocks/node/timer-events)
### FLOWX.AI Designer
#### Font Management
Font Management allows you to upload and manage multiple font files, which can later be used when configuring UI templates in the UI Designer. You can now upload multiple TTF font files; the platform will identify additional data such as font family, weight, and style for each file. This can be done using the new menu entry under **Content Management > Font files**.

[Font Management](../../docs/platform-deep-dive/core-components/core-extensions/content-management/font-files)
### UI Designer
#### Attributed strings for Markdown support
Enhance the design of UI components with the new Markdown support, including features such as bold, italic, strikethrough, and clickable URLs. This feature integrates with the following UI components: text, [switch](/3.x/docs/building-blocks/ui-designer/ui-component-types/form-elements/switch-form-field), and [message indicators](/3.x/docs/building-blocks/ui-designer/ui-component-types/indicators), ensuring a consistent and polished rendering experience.

Supported tags in the current iteration: bold, italic, bold italic, strikethrough and URLs.
#### Example:
* **Bold**
```markdown
**Bold**
```
* *italic*
```markdown
*italic*
```
* ***bold italic***
```markdown
***bold italic***
```
* strikethrough
```markdown
~~strikethrough~~
```
* URL
```markdown
[URL](https://url.net)
```
Let's take the following Markdown text example:
```markdown
Be among the *first* to receive updates about our **exciting new products** and releases. Subscribe [here](flowx.ai/newsletter) to stay in the loop! Do not ~~miss~~ it!
```
When running the process, it will be displayed like this:

## **Bug Fixes** 🔧
* Addressed a bug where the boundary sequence incorrectly moved to the parent node after copy-paste.
* Resolved an issue where the "Select Sequence" buttons within the node UI interface could overlap, ensuring a better user experience.
## **Changed** 🛠️
### Process Designer
#### Keyboard commands
* To edit a selected node label, press "R," which puts the label in edit mode. After editing, press "Enter" to save the new name.
* To copy selected nodes, use "CMD/Ctrl + C," and to paste them into a selected swimlane, use "CMD/Ctrl + V."
* To delete selected node(s), press "Backspace."
#### Data model
* Revamped Object-Level Settings with Enhanced Attribute Flags.

### Other Bits
* Bid farewell to the autoarrange function in the [**Process Designer**](../../docs/terms/flowx-process-designer).
## **Gremlins to Watch Out For** 🙁
* **Slider UI element**: Our slider component can be a bit mysterious at times. Currently, it enjoys a game of hide-and-seek with the correct value when sourced from process data.
* **Document preview UI element**: Our document preview component has a unique sense of style. It prefers to take up only a portion of the screen, even when told to "fill" the entire width. It's a rebel with a cause.
* **Business rules**: Our [**business rules**](../../docs/terms/business-rules) have a language barrier, but they're working on it. Changing the language of a business rule doesn't always lead to using the new language for execution. It's like they have a favorite phrase they won't let go of.
* **Process Designer**: Deleting a boundary node in the process designer and coming back from the UI Designer doesn't always clean up the associated sequence from the boundary event. It's like they left a party and forgot their hat.
* **Timer Events**:
* Our timer events can sometimes be a bit shy and not show up on the canvas when added after creating a new process version or branch. They need a little nudge to make their appearance after refreshing the page.
* Mandatory Fields Error Messages: Our system has a sense of humor when it comes to mandatory fields. It forgets to deliver the error messages when these fields are left empty. It's a bit too laid-back.
* Timer Expression Validators: Our timer expressions can be a bit wild and free-spirited because they don't always follow the rules. We haven't implemented their validators yet, so they do as they please.
* Timer Events on Read-Only Process Versions: Our timer events are a bit of a rebel when it comes to read-only process versions. They refuse to disable their fields, as if they have a mind of their own.
* **Versioning**:
* Our versioning system can be a bit finicky at times. It might throw a server error when merging branches or refuse to ignore the Flowx UUID key, causing conflicts. And sometimes, the branching graph prefers to play hide-and-seek during import/export. But hey, we're working on it!
* Swimlane Allocation UI: Even after a process definition is deleted, you might catch a glimpse of the UI for swimlane allocation. It's like a ghost from the past that refuses to fade away.
## **Additional information**
For deployment guidelines, refer to:
[Deployment guidelines v3.4.0](./deployment-guidelines-v3.4.0)
# Deployment guidelines v3.4.1
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.1-september-2023/deployment-guidelines-v3.4.1
Do not forget, when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.4.1** FLOWX.AI release, it is not possible to import old process definitions (exported from releases **\< 3.4.1**) into the new platform release.

## Component versions
| 🧩 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 | 2.4.0 | 2.3.0 | 2.2.0 |
| ------------------------------ | ---------- | ------ | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.3.1** | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 | 0.4.22 | 0.4.21 | 0.4.18 |
| **admin** | **3.3.7** | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 | 0.3.29 | 0.3.23 | 0.3.21 |
| **designer** | **3.35.6** | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 |
| **@flowx/ui-sdk** | **3.35.6** | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.35.6** | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.35.6** | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | **3.35.6** | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.5 |
| **flowx-process-renderer** | - | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 | 2.14.4 |
| **cms-core** | **1.3.9** | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 | 0.2.20 | 0.2.18 | 0.2.17 |
| **scheduler-core** | **1.2.4** | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 | 0.0.24 | 0.0.23 | 0.0.23 |
| **events-gateway** | **1.1.0** | 1.0.6 | 1.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **notification-plugin** | **2.0.8** | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 | 1.0.191 | 1.0.190 | 1.0.190 |
| **document-plugin** | **2.0.8** | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 | 1.0.35 | 1.0.31 | 1.0.31 |
| **ocr-plugin** | **1.0.12** | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.0.109 | 0.0.109 |
| **license-core** | **1.0.7** | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 | 0.1.15 | 0.1.13 | 0.1.13 |
| **customer-management-plugin** | **0.2.8** | 0.2.6 | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 | 0.1.20 | 0.1.18 | 0.1.18 |
| **task-management-plugin** | **3.0.3** | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 | 0.0.22 | 0.0.21 | 0.0.21 |
| **data-search** | **0.2.6** | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **audit-core** | **2.1.3** | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **reporting-plugin** | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | **0.3.5** | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there are some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FLOWX.AI 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Recommended Versions for FLOWX.AI 3.4.1 ☑️
| FLOWX.AI Platform Version | Component name | Recommended version (tested versions) |
| ------------------------- | ----------------- | ------------------------------------- |
| 3.4.1 | Keycloak | 18.0.x |
| 3.4.1 | Kafka | 3.2.3 |
| 3.4.1 | PostgreSQL | 14.3.0 |
| 3.4.1 | MongoDB | 5.0.8 |
| 3.4.1 | Redis | 6.2.6 |
| 3.4.1 | Elasticsearch | 7.17 |
| 3.4.1 | OracleDB | 19.8.0.0.0 |
| 3.4.1 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
# Additional configuration
This section outlines the supplementary configurations required to leverage the newly introduced features within FLOWX.AI (for a smooth transition please check first what's new in [3.4.0](../v3.4.0-september-2023/v3.4.0-september-2023.md)).
## Scheduler configuration
```yaml
scheduler:
  thread-count: 30 # number of threads used for sending expired messages
  callbacks-thread-count: 60 # number of threads for handling Kafka responses, whether the message was successfully sent or not
  cronExpression: "*/10 * * * * *" # every 10 seconds
  retry: # new retry mechanism
    max-attempts: 3
    seconds: 1
    thread-count: 3
    cronExpression: "*/10 * * * * *" # every 10 seconds
  cleanup:
    cronExpression: "*/25 * * * * *" # every 25 seconds
```
### Explanation
* `SCHEDULER_THREAD_COUNT` - Configures the number of threads used for sending expired messages.
* `CALLBACKS_THREAD_COUNT` - Configures the number of threads for handling Kafka responses, whether the message was successfully sent or not.
### New retry mechanism
* `SCHEDULER_RETRY_THREAD_COUNT` - Specify the number of threads to use for resending messages that need to be retried.
* `SCHEDULER_RETRY_MAX_ATTEMPTS` - This configuration parameter sets the number of retry attempts. For instance, if it's set to 3, it means that the system will make a maximum of three retry attempts for message resending.
* `SCHEDULER_RETRY_SECONDS` - This configuration parameter defines the time interval, in seconds, for retry attempts. For example, when set to 1, it indicates that the system will retry the operation after a one-second delay.
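The interplay of these three parameters can be sketched as a simple retry loop (illustrative only; `send_with_retry` is a hypothetical helper, not the scheduler's actual code):

```python
import time
from typing import Callable, Any

def send_with_retry(send: Callable[[], Any],
                    max_attempts: int = 3,      # SCHEDULER_RETRY_MAX_ATTEMPTS
                    delay_seconds: float = 1.0  # SCHEDULER_RETRY_SECONDS
                    ) -> Any:
    """Attempt send() up to max_attempts times, sleeping delay_seconds
    between attempts; re-raises the last error if every attempt fails."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(delay_seconds)

# Usage: a sender that fails twice before succeeding.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("broker unavailable")
    return "sent"

result = send_with_retry(flaky, max_attempts=3, delay_seconds=0)
```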
[Scheduler setup guide](../../docs/platform-setup-guides/scheduler-setup-guide)
## Revised Cache Key Organization
To ensure a smooth transition for the **3.4.1** release, it is crucial to make use of the clear cache endpoint with the following request body:
**Request:**
```http
POST /api/internal/cache/clear
```
This endpoint selectively purges Redis caches: it deletes only the caches listed in the admin microservice properties under the `application.redis.clearable-caches` property key.
Request body:
```json
{
  "cacheNames": [
    "events",
    "admin",
    "allowedSwimlanes",
    "initiatedProcessFromStartEvent",
    "flowx:core"
  ]
}
```
Please note that after upgrading to the new system version, you should no longer include the `flowx:core` cache in the request body when invoking the clear cache endpoint, to avoid unintended consequences.
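To make the post-upgrade restriction concrete, here is an illustrative sketch (the `clear_cache_body` helper and the cache-name list are assumptions for the example) that builds the request body with and without `flowx:core`:

```python
# Cache names from the example request body above.
CLEARABLE_CACHES = [
    "events",
    "admin",
    "allowedSwimlanes",
    "initiatedProcessFromStartEvent",
    "flowx:core",
]

def clear_cache_body(after_upgrade: bool) -> dict:
    """Build the body for POST /api/internal/cache/clear. After the
    upgrade, flowx:core must no longer be included in the request."""
    names = [c for c in CLEARABLE_CACHES
             if not (after_upgrade and c == "flowx:core")]
    return {"cacheNames": names}
```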
# v3.4.1 September 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.1-september-2023/v3.4.1-september-2023
Welcome to the FLOWX.AI v3.4.1 patch release! In this update, we've fine-tuned your FLOWX.AI experience with fixes and enhancements. While it may not be a major version update, it's packed with valuable improvements.
## **What's new**
### Scheduler Configuration
* We've introduced a new configuration for the Scheduler microservice. Learn more in our [**deployment guidelines**](./deployment-guidelines-v3.4.1.md#scheduler-configuration).
## **Bug Fixes** 🔧
### Timer Events
* **Fixed Timer Events Shyness**
* We've given our timer events a little nudge, and they will now reliably appear on the canvas after refreshing the page.
* **Resolved Mandatory Fields Error Messages**
* Our system has become more serious about mandatory fields and will now consistently display error messages when required fields are empty.
* **Improved Timer Expression Validators**
* We've implemented validators for timer expressions, ensuring they follow the rules as expected.
* **Fixed Timer Events in Read-Only Process Versions**
* Timer events will now behave appropriately and disable their fields in read-only process versions.
### Versioning
* **Enhanced Versioning System**
* We've made significant improvements to the versioning system to resolve these issues. Branch merging and UUID handling have been optimized, and the branching graph now consistently appears during import/export operations.
### Swimlane allocation
* **Removed Lingering UI After Process Definition Deletion**
* We've cleared away this lingering UI, so you won't encounter remnants of deleted process definitions in the swimlane allocation interface anymore.
### Revised Cache Key Organization
* Check the deployment guidelines for more information:
[Revised Cache Key Organization](deployment-guidelines-v3.4.1#revised-cache-key-organization)
### Other Bits
* Bid farewell to the autoarrange function in the **Process Designer**.
## **Gremlins to Watch Out For** 🙁
Keep an eye out for these quirks:
* **Slider UI element**: Our slider component can be a bit mysterious at times. Currently, it enjoys a game of hide-and-seek with the correct value when sourced from process data.
* **Document preview UI element**: Our document preview component has a unique sense of style. It prefers to take up only a portion of the screen, even when told to "fill" the entire width. It's a rebel with a cause.
* **Business rules**: Our business rules have a language barrier, but they're working on it. Changing the language of a business rule doesn't always lead to using the new language for execution. It's like they have a favorite phrase they won't let go of.
* **Process Designer**: Deleting a boundary node in the process designer and coming back from the UI Designer doesn't always clean up the associated sequence from the boundary event. It's like they left a party and forgot their hat.
## **Additional information**
For deployment guidelines, refer to:
[Deployment guidelines v3.4.1](./deployment-guidelines-v3.4.1)
# Deployment guidelines v3.4.2
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.2-october-2023/deployment-guidelines-v3.4.2
Do not forget, when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.4.2** FLOWX.AI release, it is not possible to import old process definitions (exported from releases **\< 3.4.2**) into the new platform release.

## Component versions
| 🧩 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 | 2.4.0 | 2.3.0 |
| ------------------------------ | ---------- | ------ | ------ | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.3.2** | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 | 0.4.22 | 0.4.21 |
| **admin** | **3.3.10** | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 | 0.3.29 | 0.3.23 |
| **designer** | **3.35.9** | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 |
| **@flowx/ui-sdk** | **3.35.9** | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.35.9** | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.35.9** | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | **3.35.9** | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 | 2.15.2 |
| **cms-core** | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 | 0.2.20 | 0.2.18 |
| **scheduler-core** | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 | 0.0.24 | 0.0.23 |
| **events-gateway** | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **notification-plugin** | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 | 1.0.191 | 1.0.190 |
| **document-plugin** | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 | 1.0.35 | 1.0.31 |
| **ocr-plugin** | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.0.109 |
| **license-core** | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 | 0.1.15 | 0.1.13 |
| **customer-management-plugin** | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 | 0.1.20 | 0.1.18 |
| **task-management-plugin** | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 | 0.0.22 | 0.0.21 |
| **data-search** | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **audit-core** | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **reporting-plugin** | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Recommended Versions for FLOWX.AI 3.4.2 ☑️
| FLOWX.AI Platform Version | Component name | Recommended version (tested versions) |
| ------------------------- | ----------------- | ------------------------------------- |
| 3.4.2 | Keycloak | 18.0.x |
| 3.4.2 | Kafka | 3.2.3 |
| 3.4.2 | PostgreSQL | 14.3.0 |
| 3.4.2 | MongoDB | 5.0.8 |
| 3.4.2 | Redis | 6.2.6 |
| 3.4.2 | Elasticsearch | 7.17 |
| 3.4.2 | OracleDB | 19.8.0.0.0 |
| 3.4.2 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
# Additional configuration
## Designer values configuration
New environment variable
* `LEGACY_HTTP_VERSION`: false (default value) - Set this to `true` only for HTTP versions \< 2 in order for SSE to work properly. Can be omitted otherwise.
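As a minimal illustration (the deployment and container names below are assumptions for a Kubernetes-style setup, not official chart values), the variable can be set in the web renderer's container spec:

```yaml
# Hypothetical container spec excerpt for the web renderer deployment;
# names are illustrative, not the official chart values.
containers:
  - name: flowx-web-renderer
    env:
      - name: LEGACY_HTTP_VERSION
        value: "true" # set only when serving over HTTP versions < 2; omit on HTTP/2+
```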
# v3.4.2 October 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.2-october-2023/v3.4.2-october-2023
Welcome to the FLOWX.AI v3.4.2 patch release! This update brings several enhancements and fixes to improve your FLOWX.AI experience. While it may not be a major version update, it's packed with valuable improvements.
## **What's new**
### Web Renderer - Enhanced SSE Connection Handling
* Introducing the `LEGACY_HTTP_VERSION` variable: When set to `true`, it will automatically disconnect the Server-Sent Events (SSE) tab when it becomes inactive, and then seamlessly reconnect and retrieve the status when it becomes active. By default, `LEGACY_HTTP_VERSION` is set to `false`, meaning there will be no SSE disconnection when switching tabs, ensuring a smoother user experience.
Set `LEGACY_HTTP_VERSION` to `true` only for HTTP versions \< 2 to ensure proper SSE functionality. It can be omitted for other cases.
## **Bug Fixes** 🔧
* Addressed various bugs from previous versions to enhance stability and reliability.
### Other Bits
* We've bid farewell to the autoarrange function in the [**Process Designer**](../../docs/terms/flowx-process-designer).
## **Gremlins to Watch Out For** 🙁
Keep an eye out for these quirks:
* **Slider UI element**: Our slider component can be a bit mysterious at times. Currently, it enjoys a game of hide-and-seek with the correct value when sourced from process data.
* **Document preview UI element**: Our document preview component has a unique sense of style. It prefers to take up only a portion of the screen, even when told to "fill" the entire width. It's a rebel with a cause.
* **Business rules**: Our business rules have a language barrier, but they're working on it. Changing the language of a business rule doesn't always lead to using the new language for execution. It's like they have a favorite phrase they won't let go of.
* **Process Designer**: Deleting a boundary node in the process designer and coming back from the UI Designer doesn't always clean up the associated sequence from the boundary event. It's like they left a party and forgot their hat.
* **Datepicker Date Transformation**: Our Datepicker seems to possess a hidden talent. It mysteriously transforms random text into the current date when used with validators in UI Designer.
* **Text Element Issue**: Our text element tends to vanish when set to "0" or "-". Expected it to show up, but it's playing hide-and-seek instead!
## **Additional information**
For deployment guidelines, refer to:
[Deployment guidelines v3.4.2](./deployment-guidelines-v3.4.2)
# Deployment guidelines v3.4.3
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.3-october-2023/deployment-guidelines-v3.4.3
Do not forget: when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.4.3** FlowX release, it is no longer possible to import old process definitions (exports from releases **\< 3.4.3**) into the new platform release.

## Component versions
| 🧩 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 | 2.4.0 |
| ------------------------------ | ----------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.3.5** | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 | 0.4.22 |
| **admin** | **3.3.19** | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 | 0.3.29 |
| **designer** | **3.35.18** | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 |
| **@flowx/ui-sdk** | **3.35.18** | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.35.18** | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.35.18** | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | **3.35.18** | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 | 2.17.4 |
| **cms-core** | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 | 0.2.20 |
| **scheduler-core** | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 | 0.0.24 |
| **events-gateway** | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **notification-plugin** | **2.0.9** | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 | 1.0.191 |
| **document-plugin** | **2.0.10** | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 | 1.0.35 |
| **ocr-plugin** | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 |
| **license-core** | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 | 0.1.15 |
| **customer-management-plugin** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 | 0.1.20 |
| **task-management-plugin** | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 | 0.0.22 |
| **data-search** | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **audit-core** | **2.2.0** | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **reporting-plugin** | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Recommended Versions for FLOWX.AI 3.4.3 ☑️
| FLOWX.AI Platform Version | Component name | Recommended version (tested versions) |
| ------------------------- | ----------------- | ------------------------------------- |
| 3.4.3 | Keycloak | 18.0.x |
| 3.4.3 | Kafka | 3.2.3 |
| 3.4.3 | PostgreSQL | 14.3.0 |
| 3.4.3 | MongoDB | 5.0.8 |
| 3.4.3 | Redis | 6.2.6 |
| 3.4.3 | Elasticsearch | 7.17 |
| 3.4.3 | OracleDB | 19.8.0.0.0 |
| 3.4.3 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
# v3.4.3 October 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.3-october-2023/v3.4.3-october-2023
In FLOWX.AI version 3.4.3, we've addressed several bugs to improve the stability and reliability of the platform. Here are the key bug fixes in this release.
## **Bug Fixes** 🔧
### Notification and Document plugins
* Fixed a limitation in the Document plugin and Notification plugin that previously restricted listings to 20 entries.
* Resolved the issue causing the Document plugin to restart when starting with lag in Kafka topics.
### Web SDK
* Addressed a CSS class assignment issue for ``, ensuring the correct class is assigned.
* Fixed a UI issue where adding a left margin to COLLECTION\_PROTOTYPE caused it to extend beyond the parent card.
* Corrected a bug in certain UI elements where a selected second-level child value wasn't cleared when the parent value was updated.
* Resolved rendering issues for '0' and '-' values in text elements.
### Designer and Versioning
* Fixed the Designer issue where options in certain UI elements weren't displayed in dropdowns after previous selections.
* Resolved the Import Published Version Selection issue during process import, now displaying only available versions.
### Admin
* Fixed an issue that previously prevented the creation of new versions for processes with swimlane names containing empty spaces.
## **Other Bits**
* We've bid farewell to the autoarrange function in the [**Process Designer**](../../docs/terms/flowx-process-designer).
## **Gremlins to Watch Out For** 🙁
Keep an eye out for these quirks:
* **Document preview UI element**: Our document preview component has a unique sense of style. It prefers to take up only a portion of the screen, even when told to "fill" the entire width. It's a rebel with a cause.
* **Business rules**: Our business rules have a language barrier, but they're working on it. Changing the language of a business rule doesn't always lead to using the new language for execution. It's like they have a favorite phrase they won't let go of.
* **Process Designer**: Deleting a boundary node in the process designer and coming back from the UI Designer doesn't always clean up the associated sequence from the boundary event. It's like they left a party and forgot their hat.
* **Datepicker Date Transformation**: Our Datepicker seems to possess a hidden talent. It mysteriously transforms random text into the current date when used with validators in UI Designer.
## **Additional information**
For deployment guidelines, refer to:
[Deployment guidelines v3.4.3](./deployment-guidelines-v3.4.3)
# Deployment guidelines v3.4.4
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.4-november-2023/deployment-guidelines-v3.4.4
Do not forget: when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.4.4** FLOWX.AI release, it is no longer possible to import old process definitions (exports from releases **\< 3.4.4**) into the new platform release.

## Component versions
| 🧩 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 | 2.5.0 |
| ------------------------------ | ------------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.3.5-2v1** | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 | 0.4.29 |
| **admin** | **3.3.19-1** | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 | 0.3.34 |
| **designer** | **3.35.18-1** | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 |
| **@flowx/ui-sdk** | **3.35.18-1** | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.35.18-1** | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.35.18-1** | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | **3.35.18-1** | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 | 2.18.2 |
| **cms-core** | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.20 |
| **scheduler-core** | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.24 |
| **events-gateway** | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **notification-plugin** | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 | 1.0.191 |
| **document-plugin** | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 | 1.0.35 |
| **ocr-plugin** | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 |
| **license-core** | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.15 |
| **customer-management-plugin** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.20 |
| **task-management-plugin** | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.22 |
| **data-search** | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **audit-core** | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **reporting-plugin** | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Recommended Versions for FLOWX.AI 3.4.4 ☑️
| FLOWX.AI Platform Version | Component name | Recommended version (tested versions) |
| ------------------------- | ----------------- | ------------------------------------- |
| 3.4.4 | Keycloak | 18.0.x |
| 3.4.4 | Kafka | 3.2.3 |
| 3.4.4 | PostgreSQL | 14.3.0 |
| 3.4.4 | MongoDB | 5.0.8 |
| 3.4.4 | Redis | 6.2.6 |
| 3.4.4 | Elasticsearch | 7.17 |
| 3.4.4 | OracleDB | 19.8.0.0.0 |
| 3.4.4 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
# v3.4.4 November 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.4-november-2023/v3.4.4-november-2023
In version 3.4.4 of FLOWX.AI, we have addressed some issues to improve the stability and reliability of the platform. Here are the key bug fixes in this release:
## **Bug Fixes** 🔧
### FLOWX.AI Engine 🚂
* Fixed an issue where sending an extra key when using [**bulk operations**](../../docs/platform-setup-guides/flowx-engine-setup-guide#topics-related-to-the-task-management-plugin) was not working as expected. Now you can attach extra keys to the header and they will be copied to the response. See the example below:

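As an illustrative sketch of that contract only (the header keys and the helper below are assumptions for illustration, not the engine's actual code), the behavior is: any extra key attached to the request message headers, beyond the keys the engine itself uses, is copied unchanged onto the response headers so the caller can correlate the response.

```python
# Illustrative sketch of the header pass-through contract (not engine code).
# A bulk-operations request carries standard headers plus an extra key;
# the engine echoes the extra key back on the response headers.

def build_response_headers(request_headers: dict, known_keys: set) -> dict:
    """Copy any extra (non-standard) request header keys to the response."""
    return {k: v for k, v in request_headers.items() if k not in known_keys}

# Keys the engine already understands (hypothetical names, for illustration).
KNOWN = {"processInstanceId", "operation"}

request = {"processInstanceId": "1234", "operation": "HOLD", "requestID": "req-42"}
response_headers = build_response_headers(request, KNOWN)
# "requestID" is echoed back, letting the caller match response to request.
```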
### Web SDK
* Fixed a small performance issue.
## **Other Bits**
* We've bid farewell to the autoarrange function in the [**Process Designer**](../../docs/terms/flowx-process-designer).
## **Gremlins to Watch Out For** 🙁
Keep an eye out for these quirks:
* **Document preview UI element**: Our document preview component has a unique sense of style. It prefers to take up only a portion of the screen, even when told to "fill" the entire width. It's a rebel with a cause.
* **Business rules**: Our business rules have a language barrier, but they're working on it. Changing the language of a business rule doesn't always lead to using the new language for execution. It's like they have a favorite phrase they won't let go of.
* **Process Designer**: Deleting a boundary node in the process designer and coming back from the UI Designer doesn't always clean up the associated sequence from the boundary event. It's like they left a party and forgot their hat.
* **Datepicker Date Transformation**: Our Datepicker seems to possess a hidden talent. It mysteriously transforms random text into the current date when used with validators in UI Designer.
## **Additional information**
For deployment guidelines, refer to:
[Deployment guidelines v3.4.4](./deployment-guidelines-v3.4.4)
# Deployment guidelines v3.4.5
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.5-november-2023/deployment-guidelines-v3.4.5
Do not forget: when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After updating to the **3.4.5** FLOWX.AI release, it is no longer possible to import old process definitions (exports from releases **\< 3.4.5**) into the new platform release.

## Component versions
| 🧩 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 | 2.6.0 |
| ------------------------------ | ------------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.3.5-2v2** | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 | 0.4.36 |
| **admin** | **3.3.19-3** | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 | 0.3.36 |
| **designer** | **3.35.18-2** | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 |
| **@flowx/ui-sdk** | **3.35.18-2** | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.35.18-2** | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.35.18-2** | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | **3.35.18-2** | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 | 2.19.2 |
| **cms-core** | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 |
| **scheduler-core** | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 |
| **events-gateway** | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **notification-plugin** | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 | 1.0.194 |
| **document-plugin** | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 | 1.0.37 |
| **ocr-plugin** | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 |
| **license-core** | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 | 0.1.18 |
| **customer-management-plugin** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 |
| **task-management-plugin** | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 |
| **data-search** | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **audit-core** | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **reporting-plugin** | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX 4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Recommended Versions for FLOWX.AI 3.4.5 ☑️
| FLOWX.AI Platform Version | Component name | Recommended version (tested versions) |
| ------------------------- | ----------------- | ------------------------------------- |
| 3.4.5 | Keycloak | 18.0.x |
| 3.4.5 | Kafka | 3.2.3 |
| 3.4.5 | PostgreSQL | 14.3.0 |
| 3.4.5 | MongoDB | 5.0.8 |
| 3.4.5 | Redis | 6.2.6 |
| 3.4.5 | Elasticsearch | 7.17 |
| 3.4.5 | OracleDB | 19.8.0.0.0 |
| 3.4.5 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
# v3.4.5 November 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.5-november-2023/v3.4.5-november-2023
In version 3.4.5 of FLOWX.AI, we have addressed some issues to improve the stability and reliability of the platform. Here are the key bug fixes in this release:
## **Changed** 🛠️
### Process Versioning
* **Delete process definition** - After deleting or renaming a process definition, you can now add a new one with the same name, whether manually or through import, without any limitations or constraints.
## **Bug Fixes** 🔧
### Versioning
* Our version control got tangled in a game of interdimensional hopscotch, leaving branches stranded without their past travel tickets. We've straightened out the interdimensional portals now, ensuring that branches maintain their historical travel logs. No more lost branches in the space-time continuum!
### Web SDK
* **UI Designer** - We trained our containers in the art of origami, hoping they'd master the art of rearranging themselves gracefully. Unfortunately, they got carried away with their newfound skill and refused to listen when asked to reorder themselves properly. **We've had a serious talk with them, and now they promise to behave! You can copy, paste, and rearrange containers without them folding into chaos.**
## **Other Bits**
* We've bid farewell to the autoarrange function in the [**Process Designer**](../../docs/terms/flowx-process-designer).
## **Gremlins to Watch Out For** 🙁
Keep an eye out for these quirks:
* **Document preview UI element**: Our document preview component has a unique sense of style. It prefers to take up only a portion of the screen, even when told to "fill" the entire width. It's a rebel with a cause.
* **Business rules**: Our business rules have a language barrier, but they're working on it. Changing the language of a business rule doesn't always lead to using the new language for execution. It's like they have a favorite phrase they won't let go of.
* **Process Designer**: Deleting a boundary node in the process designer and coming back from the UI Designer doesn't always clean up the associated sequence from the boundary event. It's like they left a party and forgot their hat.
* **Datepicker Date Transformation**: Our Datepicker seems to possess a hidden talent. It mysteriously transforms random text into the current date when used with validators in UI Designer.
## **Additional information**
For deployment guidelines, refer to:
[Deployment guidelines v3.4.5](./deployment-guidelines-v3.4.5)
# Deployment guidelines v3.4.6
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.6-december-2023/deployment-guidelines-v3.4.6
Do not forget: when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After upgrading to the **3.4.6** FLOWX.AI release, it is not possible to import old process definitions into the new platform release (exported from releases **\< 3.4.6**).

## Component versions
| 🧩 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 | 2.9.0 | 2.8.1 | 2.8.0 | 2.7.0 |
| ------------------------------ | ------------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.3.5-2v6** | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 | 0.4.49 | 0.4.44 | 0.4.42 | 0.4.42 |
| **admin** | **3.3.19-4** | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 | 0.3.55 | 0.3.47 | 0.3.43 | 0.3.40 |
| **designer** | **3.35.18-3** | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 |
| **@flowx/ui-sdk** | **3.35.18-3** | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.35.18-3** | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.35.18-3** | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | **3.35.18-3** | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 | 0.2.10 | 0.2.6 | 0.2.6 | 0.2.6 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 | 2.33.0 | 2.28.1 | 2.24.2 | 2.23.0 |
| **cms-core** | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 | 0.2.23 | 0.2.23 | 0.2.23 | 0.2.23 |
| **scheduler-core** | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 | 0.0.27 | 0.0.27 | 0.0.27 | 0.0.27 |
| **events-gateway** | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - | - | - | - | - | - | - | - | - | - |
| **notification-plugin** | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 | 1.0.198 | 1.0.198 | 1.0.197 | 1.0.194 |
| **document-plugin** | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 | 1.0.42 | 1.0.41 | 1.0.38 | 1.0.37 |
| **ocr-plugin** | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 | 0.1.5 |
| **license-core** | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.19 | 0.1.18 | 0.1.18 | 0.1.18 |
| **customer-management-plugin** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 | 0.1.22 | 0.1.22 | 0.1.22 | 0.1.22 |
| **task-management-plugin** | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 | 0.0.28 | 0.0.28 | 0.0.27 | 0.0.27 |
| **data-search** | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a |
| **audit-core** | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **reporting-plugin** | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **advancing-controller** | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **iOS renderer** | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX v4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Recommended Versions for FLOWX.AI 3.4.6 ☑️
| FLOWX.AI Platform Version | Component name | Recommended version (tested versions) |
| ------------------------- | ----------------- | ------------------------------------- |
| 3.4.6 | Keycloak | 18.0.x |
| 3.4.6 | Kafka | 3.2.3 |
| 3.4.6 | PostgreSQL | 14.3.0 |
| 3.4.6 | MongoDB | 5.0.8 |
| 3.4.6 | Redis | 6.2.6 |
| 3.4.6 | Elasticsearch | 7.17 |
| 3.4.6 | OracleDB | 19.8.0.0.0 |
| 3.4.6 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any listed version of the prerequisite third-party components in the table above.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
# v3.4.6 December 2023
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.6-december-2023/v3.4.6-december-2023
In version 3.4.6 of FLOWX.AI, we have addressed some issues to improve the stability and reliability of the platform. Here are the key bug fixes in this release:
## **Bug Fixes** 🔧
### Process Designer
* Cured a case of the "Start-Subprocess Stumper": now configuring existing start subprocess actions is a breeze! No more invisible barriers.
### Process Engine
* Liberated process versioning on Oracle from its migration woes! No more version control drama, it's all harmonious now.
* Put an end to the confusion: JavaScript arrays will no longer masquerade as Java maps. They've embraced their true identities.
* Our token instance ID DB index on Oracle has been reincarnated! It’s back and better than ever, ready to keep track of those elusive tokens.
### Versioning
* Put an end to the process definition merge branch mayhem! No more tangled branches, just clean, pristine definitions merging like old friends at a reunion.
## **Other Bits**
* We've bid farewell to the autoarrange function in the [**Process Designer**](../../docs/terms/flowx-process-designer).
## **Gremlins to Watch Out For** 🙁
Keep an eye out for these quirks:
* **Document preview UI element**: Our document preview component has a unique sense of style. It prefers to take up only a portion of the screen, even when told to "fill" the entire width. It's a rebel with a cause.
* **Business rules**: Our business rules have a language barrier, but they're working on it. Changing the language of a business rule doesn't always lead to using the new language for execution. It's like they have a favorite phrase they won't let go of.
* **Process Designer**: Deleting a boundary node in the process designer and coming back from the UI Designer doesn't always clean up the associated sequence from the boundary event. It's like they left a party and forgot their hat.
* **Datepicker Date Transformation**: Our Datepicker seems to possess a hidden talent. It mysteriously transforms random text into the current date when used with validators in UI Designer.
## **Additional information**
For deployment guidelines, refer to:
[Deployment guidelines v3.4.6](./deployment-guidelines-v3.4.6)
# Deployment guidelines v3.4.7
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.7-january-2024/deployment-guidelines-v3.4.7
Do not forget: when upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After upgrading to the **3.4.7** FlowX.AI release, it is not possible to import old process definitions into the new platform release (exported from releases **\< 3.4.7**).

## Component versions
| 🧩 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 |
| ------------------------------ | -------------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.3.5-2v11** | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 |
| **admin** | **3.3.19-6** | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 |
| **designer** | **3.35.18-5** | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 |
| **@flowx/ui-sdk** | **3.35.18-5** | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.35.18-5** | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.35.18-5** | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | **3.35.18-5** | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 |
| **cms-core** | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 |
| **scheduler-core** | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 |
| **events-gateway** | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - | - | - | - | - | - |
| **notification-plugin** | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 |
| **document-plugin** | **2.0.10-1** | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 |
| **ocr-plugin** | **1.0.15** | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 |
| **license-core** | **1.1.0** | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 |
| **customer-management-plugin** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 |
| **task-management-plugin** | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 |
| **data-search** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a |
| **audit-core** | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a |
| **reporting-plugin** | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a |
| **advancing-controller** | **0.3.5-1** | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a |
| **iOS renderer** | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX v4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Third-party recommended component versions
| FLOWX.AI Platform Version | Component name | Recommended versions (tested versions) |
| ------------------------- | ----------------- | -------------------------------------- |
| 3.4.7 | Keycloak | 18.x |
| 3.4.7 | Kafka | 3.2.3 |
| 3.4.7 | PostgreSQL | 14.3.0 |
| 3.4.7 | MongoDB | 5.0.8 |
| 3.4.7 | Redis | 6.2.6 |
| 3.4.7 | Elasticsearch | 7.17 |
| 3.4.7 | OracleDB | 19.8.0.0.0 |
| 3.4.7 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any listed version of the prerequisite third-party components in the table above.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
## Additional Configuration
### Process Engine
Introducing a new environment variable designed to facilitate the coordinated or independent expiration of subprocesses within a parent process.
| Environment Variable | Default Value | Explanation |
| ----------------------------------- | ------------- | ---------------------------------------------------------------------------------------------------------------- |
| `FLOWX_PROCESS_EXPIRE_SUBPROCESSES` | true | When set to true, expiration of a parent process triggers simultaneous expiration of all associated subprocesses |
For further details, refer to the documentation [**here**](../../3.x/setup-guides/flowx-engine-setup-guide/engine-setup#managing-subprocesses-expiration).
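As an illustration, the variable can be supplied like any other container environment setting; the image name below is hypothetical, and the actual mechanism (Helm values, Docker flags, etc.) depends on your deployment:

```shell
# Hypothetical sketch: start the process-engine with coordinated
# subprocess expiration disabled (the default is true).
docker run \
  -e FLOWX_PROCESS_EXPIRE_SUBPROCESSES=false \
  your-registry/process-engine:4.3.5-2v11
```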
### Documents Plugin
The DPI value for the PDF to JPEG conversion can now be configured via the `FLOWX_CONVERT_DPI` environment variable.
| Environment Variable | Default Value | Explanation |
| -------------------- | ------------- | ---------------------------------------------------------------------------------------------------------- |
| `FLOWX_CONVERT_DPI` | 150 | Sets the DPI (dots per inch) for PDF to JPEG conversion. Higher values result in higher resolution images. |
For further details, refer to the documentation [**here**](../../3.x/setup-guides/plugins-setup-guide/documents-plugin-setup#conversion).
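As a sketch only (the image name is hypothetical; use whatever deployment mechanism your installation already relies on), raising the DPI might look like:

```shell
# Hypothetical sketch: render PDF pages at 300 DPI instead of the
# default 150 (higher DPI means larger, sharper JPEG output).
docker run \
  -e FLOWX_CONVERT_DPI=300 \
  your-registry/document-plugin:2.0.10-1
```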
### OCR Plugin
The following environment variables were introduced to control the acceptable aspect ratio range for recognizing signed documents in OCR processes. Here's a brief description of each:
| Environment Variable | Definition | Default Value |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------- |
| `OCR_SIGNATURE_MAX_RATIO` | This variable sets the maximum acceptable aspect ratio for a signed scanned document (the OCR plugin will consider a detected signature only if the document aspect ratio is less than or equal to this maximum) | `1.43` |
| `OCR_SIGNATURE_MIN_RATIO` | This variable sets the minimum acceptable aspect ratio for a signed scanned document (the OCR plugin will recognize a signature only if the document aspect ratio is greater than or equal to this minimum) | `1.39` |
The plugin has been tested with aspect ratio values between 1.38 and 1.43. However, caution is advised when using untested values outside this range, as they may potentially disrupt the functionality. Adjust these parameters at your own risk and consider potential consequences, as untested values might lead to plugin instability or undesired behavior.
For further details, refer to the documentation [**here**](../../3.x/setup-guides/plugins-setup-guide/ocr-plugin-setup#control-aspect-ratio).
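To make the acceptance window concrete, the illustrative snippet below (not part of the OCR plugin itself) reproduces the check: a document's aspect ratio is eligible for signature detection only when it falls between the configured minimum and maximum. An A4 portrait page, for instance, has a ratio of roughly 297/210 ≈ 1.414, which sits inside the default window:

```shell
#!/bin/sh
# Illustrative sanity check: does a page's aspect ratio fall inside
# the configured signature window? Defaults mirror the plugin's.
OCR_SIGNATURE_MIN_RATIO=${OCR_SIGNATURE_MIN_RATIO:-1.39}
OCR_SIGNATURE_MAX_RATIO=${OCR_SIGNATURE_MAX_RATIO:-1.43}

ratio_in_window() {
  # $1 = page height divided by page width, e.g. A4 portrait ~ 1.414
  awk -v r="$1" -v lo="$OCR_SIGNATURE_MIN_RATIO" -v hi="$OCR_SIGNATURE_MAX_RATIO" \
    'BEGIN { exit !(r >= lo && r <= hi) }'
}

ratio_in_window 1.414 && echo "within window" || echo "outside window"
```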
# v3.4.7 January 2024
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.7-january-2024/v3.4.7-january-2024
Welcome to the FLOWX.AI v3.4.7 patch release! This update brings several enhancements and fixes to improve your FLOWX.AI experience. While it may not be a major version update, it's packed with valuable improvements.
## **What's new**
### Process Engine 🚂
#### 🆕 **Default Data Transmission to Frontend (Improvement)**
In version 3.4.7 of the FLOWX.AI Platform, we're introducing a practical improvement related to sending data to the frontend. This enhancement aims to simplify the process of displaying process data in the frontend UI, reducing the time and effort required for configuration.
###### Background
Previously, the Configurator required explicit mention of objects to be sent to the frontend for displaying data in User Task UI. This often included configuring keys for various UI elements. To make this process more straightforward, we've implemented a default mechanism that automatically sends all keys configured as sources for UI elements to the frontend.
###### Changes and Benefits
To streamline the configuration process, we've made the following updates in the UI Configuration:
* **Label Change**: We've updated the label in the configuration UI under "Message" to "Custom UI Payload" for clarity.
* **Runtime Behavior (Default Data Transmission)**: During runtime, the Backend (BE) will automatically send to the frontend all data available as process variables with matching keys. This includes default keys and objects specified in the "Message" option of the root element for User Tasks.
* **Default Keys Sent to Frontend**: For various UI elements, predefined keys will be sent to the frontend by default.
#### 🆕 **Managing Subprocesses Expiration**
We introduced a new mechanism for precise control over subprocesses expiration.
Additional Configuration needed! Check the [**deployment guidelines**](deployment-guidelines-v3.4.7#process-engine).
### Documents Plugin 📄
🆕 **PDF to JPEG Conversion Improvement**
We have enhanced the PDF to JPEG conversion functionality. This improvement allows users to configure the DPI (dots per inch) value for converting PDFs to JPEG files, resulting in higher resolution images.
Additional Configuration needed! Check the [**deployment guidelines**](deployment-guidelines-v3.4.7#additional-configuration).
### OCR Plugin 👁️
🆕 **Control Aspect Ratio**
Introduced a mechanism to fine-tune the OCR by defining a range of acceptable aspect ratios for particular documents. Adjusting these ratios can help tailor the OCR plugin to different document types.
Additional Configuration needed! Check the [**deployment guidelines**](deployment-guidelines-v3.4.7#ocr-plugin).
### License Core
* Improved licensing core for a refined experience, ensuring improved functionality and ease of use.
## **Bug Fixes** 🔧
### Process Engine 🚂
#### Bug Fix: Subprocess Party Time 🎉
So, to make sure the subprocesses don't miss the farewell fireworks when the parent process expires, we've equipped them with a "Subprocess Farewell Dance Routine"! Now, when the parent process decides it's time to go, the subprocesses won't be hanging around like party crashers - they'll join the exit dance and wrap things up properly.
#### Bug Fix: Subprocess Timeout Tango - Interrupted Edition 💃🕰️
In the grand dance of subprocesses, we stumbled upon a partner who wasn't quite following the choreography. The bug report stated that subprocesses were not gracefully bowing out after an interrupting timer event. Now, when the timer says it's time to leave, the subprocesses won't be sticking around for an encore!
### Advancing Controller
#### Bug Fix: Parallel Advancing Ballet - Failure Recovery 🩰🐞
To ensure our parallel advancing ballet doesn't lose its rhythm, we've introduced the "Failure Recovery Pirouette." Now, when an advancing event faces failure, the partitions won't be left wondering if the music stopped—they'll gracefully exit the stage, and inactive partitions will automatically join the backstage crew, making room for the next act.
Thank you for being part of the FLOWX.AI Platform community, where even bugs are fixed with a twirl and a leap! 😄🩰🎭
## **Gremlins to Watch Out For** 🙁
Keep an eye out for these quirks:
* **Document preview UI element**: Our document preview component has a unique sense of style. It prefers to take up only a portion of the screen, even when told to "fill" the entire width. It's a rebel with a cause.
* **Business rules**: Our business rules have a language barrier, but they're working on it. Changing the language of a business rule doesn't always lead to using the new language for execution. It's like they have a favorite phrase they won't let go of.
* **Process Designer**: Deleting a boundary node in the process designer and coming back from the UI Designer doesn't always clean up the associated sequence from the boundary event. It's like they left a party and forgot their hat.
* **Datepicker Date Transformation**: Our Datepicker seems to possess a hidden talent. It mysteriously transforms random text into the current date when used with validators in UI Designer.
## **Additional information**
For deployment guidelines, refer to:
[Deployment guidelines v3.4.7](./deployment-guidelines-v3.4.7)
# Deployment guidelines v3.4.8
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.8-june-2024/deployment-guidelines-v3.4.8
Do not forget: after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After upgrading to the **3.4.8** FlowX.AI release, it is not possible to import old process definitions into the new platform release (exported from releases **\< 3.4.8**).

## Component versions
| 🧩 | 3.4.8 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 | 2.13.0 | 2.12.0 | 2.11.0 | 2.10.0 |
| ------------------------------ | -------------- | ---------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ | -------- | ------- | ------- | ------- | ------- |
| **process-engine** | **4.3.5-2v14** | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 | 0.4.95 | 0.4.90 | 0.4.83 | 0.4.60 |
| **admin** | **3.3.19-8** | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 | 0.3.103 | 0.3.92 | 0.3.81 | 0.3.60 |
| **designer** | **3.35.18-7** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 |
| **@flowx/ui-sdk** | **3.35.18-7** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-toolkit** | **3.35.18-7** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a |
| **@flowx/ui-theme** | **3.35.18-7** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a | n/a | n/a | n/a | n/a |
| **paperflow-web-components** | **3.35.18-7** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 | 2.63.6 | 2.60.7 | 0.2.10 | 0.2.10 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | - | 2.78.4-1 | 2.63.6 | 2.60.7 | 2.48.9 | 2.39.2 |
| **cms-core** | **2.0.3** | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 | 0.2.36 | 0.2.33 | 0.2.30 | 0.2.25 |
| **scheduler-core** | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 | 0.0.34 | 0.0.34 | 0.0.33 | 0.0.28 |
| **events-gateway** | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - | - | - | - | - | - |
| **notification-plugin** | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 | 1.0.206 | 1.0.206 | 1.0.205 | 1.0.200 |
| **document-plugin** | **2.0.10-4** | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 | 1.0.53 | 1.0.53 | 1.0.52 | 1.0.47 |
| **ocr-plugin** | 1.0.15 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.33 | 0.1.5 | 0.1.5 | 0.1.5 |
| **license-core** | **1.2.1** | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 |
| **customer-management-plugin** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.4 | 0.2.3 | 0.2.3 | 0.2.1 | 0.1.28 | 0.1.28 | 0.1.28 | 0.1.27 | 0.1.23 |
| **task-management-plugin** | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 | 0.0.42 | 0.0.40 | 0.0.37 | 0.0.29 |
| **data-search** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 | 0.0.8 | 0.0.6 | n/a | n/a |
| **audit-core** | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 | 0.0.5 | n/a | n/a | n/a |
| **reporting-plugin** | **0.1.5** | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 | n/a | n/a | n/a | n/a |
| **advancing-controller** | 0.3.5-1 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 | n/a | n/a | n/a | n/a |
| **iOS renderer** | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a | n/a | n/a | n/a | n/a |
| **Android renderer** | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a | n/a | n/a | n/a | n/a |
With the release of **FLOWX.AI 3.0**, there have been some changes that you need to be aware of when upgrading to the latest version:
* The `flowx-process-renderer` has been migrated to `@flowx/ui-sdk`.
* As of **FlowX v4.0**, the `paperflow-web-components` library will be deprecated. Instead, the new components can be found in `@flowx/ui-toolkit`.
### Third-party recommended component versions
| FLOWX.AI Platform Version | Component name | Recommended versions (tested versions) |
| ------------------------- | ----------------- | -------------------------------------- |
| 3.4.8 | Keycloak | 18.x |
| 3.4.8 | Kafka | 3.2.3 |
| 3.4.8 | PostgreSQL | 14.3.0 |
| 3.4.8 | MongoDB | 5.0.8 |
| 3.4.8 | Redis | 6.2.6 |
| 3.4.8 | Elasticsearch | 7.17 |
| 3.4.8 | OracleDB | 19.8.0.0.0 |
| 3.4.8 | Angular (Web SDK) | 15.0.0 |
FlowX.AI supports any listed version of the prerequisite third-party components in the table above.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FLOWX.AI Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
## Additional Configuration
Before upgrading to v3.4.8, we recommend executing the initial data partitioning setup manually. This is needed whether or not you enable partitioning.
Therefore, when starting the new version of the process-engine, we recommend manually executing the setup SQL commands from Liquibase, as they may take more time. After setup, all existing information will go into the initial partition.
If partitioning is enabled, the initial partition will continue to be used until a new `partition_id` is generated according to the partitioning interval (DAY, WEEK, MONTH). Future partitions will be created automatically.
If partitioning is not enabled, all data will continue to be stored in this initial partition.
### Process instance data archiving
#### FlowX.AI configuration
New settings (new environment variables) have been added for process instance data partitioning and archiving. Check the following documentation sections for more details:
# v3.4.8 June 2024
Source: https://docs.flowx.ai/release-notes/v3.x.x/v3.4.8-june-2024/v3.4.8-june-2024
Welcome to the FlowX.AI v3.4.8 patch release! This update brings several enhancements and fixes to improve your FlowX.AI experience. While it may not be a major version update, it's packed with valuable improvements.
## What's new? 🆕
### FlowX.AI Engine: Process Instance Data Archiving
Database compatibility: Oracle DBs.
We are introducing a new system for archiving data related to old process instances. This enables the implementation of data retention policies, with more predictable storage usage and costs.
Some operations involving process instance data can be snappier, given the lower amount of data in the system and new structure. Archiving relies on partitioning data based on the process instance start time. Each partition is associated with a time period (day, week, month), and retention is defined by the number of periods maintained in the system.
Archived data remains in new tables in the database, optionally compressed, and is no longer accessible in FlowX. These tables can be compressed, copied, moved, archived, or deleted without affecting other processes in FlowX.
For more details about process instance partitioning and archiving, including configuration of the feature, see:
### Documents Plugin
Introducing configurable temporary file deletion strategy. Explore configuration details in the section below:
## Bug Fixes 🛠️
We've also squashed pesky bugs to ensure a smoother and more reliable experience across the board:
### FlowX.AI Engine
* **Token stuck in END Parallel Gateway node 🚧**: We've cleared the traffic jam where tokens were stuck at the END parallel gateway node. Now, your processes will flow like a river, smooth and unstoppable!
* **Process Continuation Issue**: Fixed an issue where processes started in version 2.14 couldn't time travel to version 3.4.x. Trying to advance these processes caused errors or made them vanish into the void. We've patched the timeline by updating the start swimlane ID in the database, so your processes can now journey safely to the future!
### FlowX.AI Admin
* **Process Version Memory Loss**: Fixed a bug where some process version params (like application ID, reporting usage, task manager usage, indexing keys, and application URL) mysteriously disappeared from old process definitions when importing a new version. We've given these params a memory boost, so they stick around like they should!
### FlowX.AI CMS
* **Lost in Translation**: Fixed an issue where languages were feeling a bit `null` and `void` in newly created environments. Instead of partying with an empty list like they should, they were sitting around with null values, causing chaos when trying to add new languages. Now, languages get the memo right away and initialize themselves properly.
* **Chill Media Library Search**: Updated the Media Library search so it no longer cares about upper or lower case. Now, whether you type in CAPS or whisper softly in lowercase, it'll find what you're looking for. No more sensitive searches, just chill browsing in the library!
## Additional information
For deployment guidelines, refer to:
# Deployment guidelines v4.0
Source: https://docs.flowx.ai/release-notes/v4.0.0-april-2024/deployment-guidelines-v4.0.0
After upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FLOWX.AI Designer > Platform Status**.
After upgrading to the **4.0** FLOWX.AI release, it is no longer possible to import process definitions exported from earlier releases (**\< 4.0**).

## Component versions
As of the next **FlowX.AI** release (v4.1.0), the `paperflow-web-components` library will be deprecated and its dependencies removed. The new components can be found in `@flowx/ui-toolkit` instead.
| Component | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 | 2.14.0 |
| ---------------------------- | ---------- | ---------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ | -------- |
| **process-engine** | **5.10.3** | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 | 0.4.104 |
| **admin** | **4.6.10** | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 | 0.3.119 |
| **designer** | **4.0.1** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 |
| **@flowx/ui-sdk** | **4.0.1** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a |
| **@flowx/ui-toolkit** | **4.0.1** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a |
| **@flowx/ui-theme** | **4.0.1** | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | n/a |
| **paperflow-web-components** | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 | 2.78.4-1 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | - | 2.78.4-1 |
| **cms-core** | **2.2.5** | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 | 0.2.38 |
| **scheduler-core** | **2.1.4** | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.34 |
| **events-gateway** | **2.0.4** | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - | - |
| **notification-plugin** | **3.0.6** | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 | 1.0.206 |
| **document-plugin** | **3.0.6** | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 | 1.0.53 |
| **ocr-plugin** | **1.0.17** | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 | 0.1.33 |
| **license-core** | **2.0.5** | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 | 0.1.28 |
| **task-management-plugin** | **4.0.5** | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 | 0.0.42 |
| **data-search** | **1.0.6** | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 | 0.0.8 |
| **audit-core** | **3.1.4** | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 | 0.0.8 |
| **reporting-plugin** | **0.1.5** | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 | 0.0.39 |
| **advancing-controller** | **1.1.4** | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 | 0.0.6 |
| **iOS renderer** | **3.0.0** | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 | n/a |
| **Android renderer** | **3.0.0** | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 | n/a |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.0 | Keycloak | 22.x |
| 4.0 | Kafka | 3.2.x |
| 4.0 | PostgreSQL | 16.2.x |
| 4.0 | MongoDB | 7.0.x |
| 4.0 | Redis | 7.2.x |
| 4.0 | Elasticsearch | 7.17.x |
| 4.0 | OracleDB | 19.8.0.0.0 |
| 4.0 | Angular (Web SDK) | 16.3.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company’s specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
# Deployment changes
Source: https://docs.flowx.ai/release-notes/v4.0.0-april-2024/migrating-from-v3.4.x/migrating-from-v3.4.x
This document outlines the additional configuration changes required for deployment in version 4.0.
### Revised cache key organization
To ensure a smooth transition to the 4.0 release, it's crucial to clear the cache before upgrading to v4.0. Use the following endpoint and request body for cache clearing:
Ensure that this operation is carried out by a user with an admin role.
##### Endpoint:
`POST {{baseUrlAdmin}}/api/internal/cache/clear`
##### Body:
```json
{
"cacheNames": [
"flowx:core:cache"
]
}
```
This endpoint is designed to purge Redis caches selectively. It deletes only the caches specified in the admin microservice properties under the property key `application.redis.clearable-caches`.
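For reference, a hedged Python sketch of calling this endpoint with the standard library. The base URL and bearer token below are placeholders for your actual admin microservice URL and an admin user's access token; adapt them to your deployment.

```python
import json
import urllib.request

def build_cache_clear_request(base_url_admin: str, token: str) -> urllib.request.Request:
    """Build the POST request for the cache-clear endpoint (placeholders only)."""
    body = json.dumps({"cacheNames": ["flowx:core:cache"]}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url_admin}/api/internal/cache/clear",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )

req = build_cache_clear_request("https://admin.example.com", "<admin-access-token>")
# urllib.request.urlopen(req)  # uncomment to execute against a live environment
```

Remember that the call must be made with an admin-role token, and only caches listed in `application.redis.clearable-caches` will be purged.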
### Access rights for Theme Management
To utilize the new theme management feature, make sure the following access rights are configured for the CMS microservice:
| Module | Scope | Role default value | Microservice |
| ------------- | ------ | -------------------- | ------------------ |
| manage-themes | import | ROLE\_THEMES\_IMPORT | Content Management |
| | import | ROLE\_THEMES\_EDIT | Content Management |
| | import | ROLE\_THEMES\_ADMIN | Content Management |
| manage-themes | read | ROLE\_THEMES\_READ | Content Management |
| | read | ROLE\_THEMES\_EDIT | Content Management |
| | read | ROLE\_THEMES\_ADMIN | Content Management |
| | read | ROLE\_THEMES\_IMPORT | Content Management |
| manage-themes | edit | ROLE\_THEMES\_EDIT | Content Management |
| | edit | ROLE\_THEMES\_ADMIN | Content Management |
| manage-themes | admin | ROLE\_THEMES\_ADMIN | Content Management |
Learn more
### Logging
To streamline logging and enhance readability, you can now stop health endpoint calls from cluttering the logs of deployed FlowX microservices by using the following environment variables:
* `LOGGING_LEVEL_WEB: INFO` (default)
* `LOGGING_LEVEL_ORG_SPRINGFRAMEWORK_WEB: INFO` (default)
If logs lack detail, consider setting the value to `DEBUG`.
# Process configuration
Source: https://docs.flowx.ai/release-notes/v4.0.0-april-2024/migrating-from-v3.4.x/process-configuration
This guide outlines changes in process and UI configuration from v3.4.x to 4.0 version.
In the latest version, there have been updates and adjustments to process and UI configurations to improve performance and usability. Below are the key changes and steps to migrate your configurations:
## Business Rules
MVEL syntax change: In MVEL 2.5.2.Final, the direct property assignment syntax (`input.property = value`) is no longer supported. Instead, you must use the `output.put` method (`output.put("property", value)`) to generate structured output data.
* This represents a fundamental change in how MVEL scripts interact with data
* The `input` object should be treated as read-only, used for accessing incoming data (commonly to validate or filter it based on certain conditions)
* The `output` object, with its `put` method, must be used for storing any results or modified values
* `processInstanceId` and `processInstanceUuid` - This release introduces enhancements aimed at isolating process-instance-related values from business/configured parameters. Key changes include the removal of `processInstanceId`, `parentProcessInstanceId`, and `parentProcessInstanceUuid` from the `paramValues` zone on the process instance, relocating them to a new object within the process instance data called `instanceMetadata`.
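To see the shape of the `output.put` change, here is a minimal simulation in Python. The `input_data` dictionary and the `OutputMap` class are stand-ins for the objects the engine injects into a business rule; this is not engine code, only an illustration of the read-from-`input` / write-via-`output.put` pattern.

```python
class OutputMap(dict):
    """Minimal stand-in for the engine's injected `output` object."""
    def put(self, key, value):
        self[key] = value

# Stand-in for the engine's read-only `input` object.
input_data = {"amount": 150, "currency": "EUR"}

output = OutputMap()

# 3.4.x style (no longer supported): input.discount = 10
# 4.0 style: read from input, write results through output.put
if input_data["amount"] > 100:
    output.put("discount", 10)
    output.put("currency", input_data["currency"])

print(output)  # {'discount': 10, 'currency': 'EUR'}
```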
### Business rules - new object "instanceMetadata"
Introducing a new object named "instanceMetadata". This object will serve as a container for process instance related values, allowing you to access relevant attributes in your scripts more effectively. Key specifications include making certain variables/parameters read-only, controlled by FlowX, and facilitating attribute access through the `instanceMetadata` object rather than directly calling attributes.
Configurators will utilize `instanceMetadata` to access attributes instead of directly calling them as in version 3.4.x. For example, `input.processInstanceId` will be accessed through `instanceMetadata.processInstanceId`.
Review and update any affected business rules accordingly.
#### Example of business rule (with Python)
This example simply demonstrates the use of the new `instanceMetadata` object and its `get` method.
```python
test_string = "There are 2 apples for 4 persons"

# extract the numbers from the string using a list comprehension
# with split() and isdigit()
res = [int(i) for i in test_string.split() if i.isdigit()]
output.put("app", {"python": str(res), "key3": "Value updated"})

key = input.get("app").get("key1")

id3 = instanceMetadata.get("processInstanceId")
uuid3 = instanceMetadata.get("processInstanceUuid")
output.put("id3", id3)
output.put("uuid3", uuid3)
```
## UI configuration
The Theme Management feature introduced some changes to UI components in the **4.0** release that might impact your previous UI configuration.
### Root components
* **Card Element**: Previously (in **v3.4.x**), you could set a **Card style** property: **border** or **raised**.
| Card 4.0 | Card 3.0 |
| :------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------: |
|  |  |
Now, for processes migrated from **v3.4.x**, to achieve the previous styling ("raised" or "border") you can either set it up from Themes or use **Theme Overrides** for the **Card** element in the **UI Designer**:
Depending on the number of Card elements present in your migrated processes, it's essential to devise a strategic approach. If a significant portion of the cards feature "border" styling, you can configure this setting within Themes Management and it will cascade to all of them. For the remaining cards, manual intervention is required to apply the "raised" effect by **overriding** their styles using **Theme Overrides**.

Read more about **Theme management** feature:
### Buttons
In 4.x, all **primary** and **secondary** buttons were transformed into **fill** buttons. Former secondary buttons, once moved to "fill", will appear similar to primary ones; you should perform an override in the UI Designer to make them look like secondary buttons as they did initially.

| Version | Primary | Secondary | Ghost | Text | Fill | New States: Pressed, Hover, Disabled |
| ------- | :-----: | :-------: | :---: | :--: | :--: | :----------------------------------: |
| 3.4.x | ✓ | ✓ | ✓ | ✓ | | |
| 4.0.x | | | ✓ | ✓ | ✓ | ✓ |
By following this migration guide, you can seamlessly transition to the Theming 4.0 feature, enhancing your project's design process and ensuring consistency across platforms and branches.
### Icons - no color property
Now, all icon color settings have a "No Color" option, which allows the icon (SVG) to be rendered with its original color settings.
If you are utilizing SVG icons for UI components (such as a left icon on an input element) and you desire to ensure that the color remains consistent regardless of the theme settings, it is imperative to override all states associated with the "Left icon," as demonstrated in the example below:

## Layout
In the context of the Theme Management feature, you can now apply the paddings previously configured in your theme JSONs directly within the theme settings. Review the paddings you had set up previously and apply them in the Themes section.
**Themes → Global Settings → Styles**
### Components spacing
If you set it in **Theme → Global Settings**, it will cascade to all the following components:
* Input/ Selection
* Buttons
* Radio/Checkbox
* Message
* Segmented Button
* Stepper
* Tabs

If you want to override the padding set in **Global Settings** for the above components, navigate to **Components → Your Component** and set your desired padding.
After editing the padding for a component, you can also reset the values and they will be set back to the values you added in **Global settings**.
### Layout elements
To set the padding for layout elements (**Cards** and **Forms**), access:
**Themes → Components → Layout Elements**
# Renderer SDKs
Source: https://docs.flowx.ai/release-notes/v4.0.0-april-2024/migrating-from-v3.4.x/renderers
This guide assists in migrating from FlowX v3.4.x to v4.0.
## Web SDK migration guide
### Theming changes
All old configurations linked with the previous theming setup (versions prior to 4.0) must be removed from your SDK deployment:
1. Review Usage: Identify where you have applied theming v1 configurations in your project.
#### 3.4.x example:
```yaml
...
themePaths: {
components: 'assets/theme/theme_components.json',
tokens: 'assets/theme/theme_tokens.json',
...
},
```
2. Update to Theming 4.0: Revise your theming configurations to use the latest theming v2 approach. Refer to our documentation or migration resources for guidance on transitioning to the new theming.
Learn more
### Authorization token
* **AuthToken Management**: The ui-sdk no longer relies on the authToken being stored in LOCAL\_STORAGE. Instead, the 'access\_token' is now passed as an input to the `` component through a private variable.
**Breaking change**: This update is mandatory for proper functionality of SSE (Server-Sent Events).
By adopting this approach, clients gain the flexibility to implement the most secure token management strategy tailored to their specific security needs. Moreover, shifting the responsibility to the container application for updating the 'access\_token' input ensures that any changes or refreshes to the authToken are securely managed within the application's domain. This proactive approach effectively mitigates potential security vulnerabilities associated with token management, offering a robust and adaptable solution.
Learn more
***
## iOS SDK migration guide
### Integration changes
The module name has changed from `FlowX` to `FlowXRenderer`.
Any files importing the SDK should be updated with the new module name.
```swift
// replace this
import FlowX

// with this
import FlowXRenderer
```
### Initialization config changes
A new configuration parameter named `enginePath` was added to `FXConfig`. It represents the URL path component of the process engine.
```swift
FXConfig.sharedInstance.configure { (config) in
config.baseURL = myBaseURL
config.enginePath = "engine"
config.imageBaseURL = myImageBaseURL
config.language = "en"
config.logLevel = .verbose
}
```
### Theming changes
The theming setup mechanism was updated.
Learn more
### Custom components
* The type of `data` property of custom components has changed from `[String: Any]` to `Any`.
* As a consequence, type checking, casting and extracting the needed data must be part of the implementation details of the custom component.
Learn more
### Start process updates
* The API for starting a process has changed.
* There are now 3 methods available.
Learn more
### Continue process updates
* The API for resuming an existing process has changed.
* There are now 3 methods available.
Learn more
***
## Android SDK migration guide
### Initialization config changes
A new configuration parameter named `enginePath` was added for identifying the FlowX Process Engine microservice.
When the SDK initialization happens through the `FlowxSdkApi.getInstance().init(...)` method, the argument has to be set inside the `config: SdkConfig` parameter value:
```kotlin
FlowxSdkApi.getInstance().init(
...
config = SdkConfig(
baseUrl = "URL to FlowX backend",
imageBaseUrl = "URL to FlowX CMS Media Library",
enginePath = "some_path",
language = "en",
validators = mapOf("exact_25_in_length" to { it.length == 25 }),
),
...
)
```
### Authentication changes
The authentication mechanism has changed: instead of passing a `String` value for the access token, a `FlowxSdkApi.Companion.AccessTokenProvider` must be used.
This provider is defined as a functional interface returning the actual value of the access token:
```kotlin
fun interface AccessTokenProvider {
fun get(): String
}
```
Related changes:
1. The provider can be passed, if the business logic allows it, when calling `FlowxSdkApi.getInstance().init(...)`.
As a consequence, the new signature for the `FlowxSdkApi.getInstance().init(...)` is:
```kotlin
fun init(
context: Context,
config: SdkConfig,
accessTokenProvider: AccessTokenProvider? = null,
customComponentsProvider: CustomComponentsProvider? = null,
)
```
2. The `FlowxSdkApi.getInstance().startProcess(...)` `accessToken` parameter was dropped. It is not needed anymore, as the authentication will rely solely on the `AccessTokenProvider`.
The new signature of this method is:
```kotlin
fun startProcess(
processName: String,
params: JSONObject = JSONObject(),
isModal: Boolean = false,
closeModalFunc: ((processName: String) -> Unit)? = null,
): @Composable () -> Unit
```
3. The `FlowxSdkApi.getInstance().continueProcess(...)` `accessToken` parameter was dropped. It is not needed anymore, as the authentication will rely solely on the `AccessTokenProvider`.
The new signature of this method is:
```kotlin
fun continueProcess(
processUuid: String,
isModal: Boolean = false,
closeModalFunc: ((processName: String) -> Unit)? = null,
): @Composable () -> Unit
```
4. Calls to the `FlowxSdkApi.getInstance().updateAccessToken("some_access_token")` method must be replaced with calls to `FlowxSdkApi.getInstance().setAccessTokenProvider(accessTokenProvider = { "some_access_token" })`.
Whenever the access token changes based on your own authentication logic, it must be updated in the renderer by calling the `setAccessTokenProvider` method again.
Learn more
### Theming changes
The theming mechanism was replaced by a new approach, which enforces loading a theme before starting or resuming a process.
Related changes:
1. The `ai.flowx.android.sdk.process.model.SdkConfig` theming related parameters (i.e. `themeTokensJsonFileAssetsPath` and `themeComponentsJsonFileAssetsPath`) were dropped.
Because of that, when configuring the library through the `FlowxSdkApi.getInstance().init(...)` method, the `config` parameter will look like this:
```kotlin
FlowxSdkApi.getInstance().init(
...
config = SdkConfig(
baseUrl = "URL to FlowX backend",
imageBaseUrl = "URL to FlowX CMS Media Library",
enginePath = "some_path",
language = "en",
validators = mapOf("exact_25_in_length" to { it.length == 25 }),
),
...
)
```
2. For styling the UI components displayed when rendering a process while authenticated, the `FlowxSdkApi.getInstance().setupTheme(...)` method must be called before starting or resuming any process:
```kotlin
suspend fun setupTheme(
themeUuid: String,
fallbackThemeJsonFileAssetsPath: String? = null,
@MainThread onCompletion: () -> Unit
)
```
This will fetch a theme previously defined in the FlowX Designer, cache it, and then load its properties.
A process should be started or resumed only after the `onCompletion` closure is called, signaling the completion of setting up the theme.
Learn more
### Custom components changes
1. All `import ai.flowx.android.sdk.component.*` directives must be changed to `import ai.flowx.android.sdk.ui.components.*`.
2. The type of the `data` parameter passed to a custom component through the `populateUi(data: JSONObject)` method, both for `@Composable` and for classical `View` system approaches, changed to `Any?`.
Therefore, the new signature of the method is `populateUi(data: Any?)`.
As a consequence, type checking, casting and extracting the needed data must be part of the implementation details of the custom component.
The value for the `data` parameter received in the `populateUi(data: Any?)` could be:
* `Boolean`
* `String`
* `java.lang.Number`
* `org.json.JSONObject`
* `org.json.JSONArray`
Learn more
# UI components - change log
Source: https://docs.flowx.ai/release-notes/v4.0.0-april-2024/ui-components-changelog
This log outlines the changes in component styles and props from version 3.4.x to version 4.0.
## **General - for all components**
* All components now support token overrides for color, typography, and shadow settings defined in the theme.
* Styling options are now available for different platforms: web, iOS, and Android.

* Hide expressions can be set platform-specific.
* All icon color settings offer a 'No Color' option to retain the original SVG colors.

### **TAB BAR**
* New navigation area component. The Tab Bar facilitates navigation and content organization in user interfaces; it can be configured for parallel zones (using multiple user tasks within the same tab) and for multiple tabs (users can access multiple tabs within the tab bar).

Learn more
### **TAB**
* New navigation area component. Tabs are clickable navigation elements within the Tab Bar that allow users to switch between different sections or functionalities within an application.

Learn more
### **ZONE**
* New navigation area component for web. A container-like element grouping specific navigation areas or user tasks, used for organizing content (ideally for processes with headers and footers).
Learn more
### **CONTAINER**
* Introduced `position` property for setting sticky containers:

* Position ***Static***: The container scrolls along with the page content (default behavior).
* Position ***Sticky***: When the sticky property is enabled, the container maintains its position even during scrolling.
Learn more
* Web supports top, left, bottom, and right sticky positioning.
* iOS and Android support top and bottom sticky positioning within user tasks.
* Added `scrollable` property for iOS and Android root containers, allowing control over scroll behavior (defaults to true).
* Added `screen title` property for iOS and Android root containers, defining the navigation bar title.
### **CARD**
* Removed `card type` (raised or bordered) property. See more, [**here**](./migrating-from-v3.4.x/process-configuration#root-components).
* Added `scrollable` property for iOS and Android root containers, controlling scroll behavior (defaults to true).
* Added `screen title` property for iOS and Android root containers.

### **BUTTON**
* Replaced `primary` and `secondary` types with `fill`. See more, [**here**](./migrating-from-v3.4.x/process-configuration#buttons).
* State-specific properties can now be set for **label**, **icon**, **background** and **border** colors:
* **Web**: Default, hover, pressed and disabled states.
* **Android**: Default and disabled states.
* **iOS**: Default state.
### **TEXT**
* Added `link color` property for markdown link color.
### **FILE UPLOAD**
* State-specific properties can now be set for **label**, **icon**, **background** and **border** colors:
* **web**: Default, hover, pressed and disabled states.
* **Android**: Default and disabled states.
* **iOS**: Default state.
### **MESSAGE**
* New `link color` property for markdown link color.
### **FILE PREVIEW**
* New style properties for document icon and action icon colors.
* Introduced `auto` height type on iOS and Android when scrollable is set to false on the root container/card (when you want the file preview to fill the entire available space vertically).
* New source type: Media Library.
### **ALL FORM ELEMENTS**
* Added style properties for info label, error label, helper label and helper tooltip.
### **INPUT**
* State-specific properties can now be set for **border**, **background**, **text**, **right icon**, **left icon**, **prefix/suffix** and **placeholder** colors:
* **web**: Empty, active, filled, disabled, error and hover states.
* **Android**: Empty, active, filled, disabled and error states.
* **iOS**: Empty, active, filled, disabled and error states.

### **TEXTAREA**
* State-specific properties can now be set for **border**, **background**, **text** and **placeholder** colors:
* **web**: Empty, active, filled, disabled, error and hover states.
* **Android**: Empty, active, filled, disabled and error states.
* **iOS**: Empty, active, filled, disabled and error states.
### **SELECT**
* State-specific properties can now be set for **border**, **background**, **text**, **right icon**, **left icon** and **placeholder** colors:
* **web**: Empty, active, filled, disabled, error and hover states.
* **Android**: Empty, active, filled, disabled and error states.
* **iOS**: Empty, active, filled, disabled and error states.
### **DATEPICKER**
* State-specific properties can now be set for **border**, **background**, **text**, **right icon**, **left icon** and **placeholder** colors:
* **web**: Empty, active, filled, disabled, error and hover states.
* **Android**: Empty, active, filled, disabled and error states.
* **iOS**: Empty, active, filled, disabled and error states.
### **RADIO, CHECKBOX**
* State-specific properties can now be set for **border**, **background**, **text** and **icon** colors:
* **web**: Unselected, selected, disabled unselected, disabled selected, hover unselected and hover selected states
* **Android**: Unselected, selected, disabled unselected and disabled selected states
* **iOS**: Unselected, selected, disabled unselected and disabled selected states
### **SWITCH**
* State-specific properties can now be set for **border**, **background** and **knob** colors:
* **web**: Unselected, selected, disabled unselected and disabled selected states.
* **Android**: Unselected, selected, disabled unselected and disabled selected states.
* **iOS**: Only **background** color on selected state.
### **SEGMENTED BUTTON**
* State-specific properties can now be set for **border**, **background** and **text** colors:
* **web**: Unselected, selected, disabled unselected, disabled selected, hover unselected and hover selected states.
* **Android**: Unselected, selected, disabled unselected and disabled selected states.
* **iOS**: Only **background** and **text** color on unselected and selected states.
### **SLIDER**
* State-specific properties can now be set for **limits**, **value**, **filled**, **empty** and **knob** colors:
* **web**: Default and disabled states.
* **Android**: Default and disabled states.
* **iOS**: Default and disabled states. Disabled only with **limits** and **value**.
# FlowX.AI 4.0.0 Release Notes
Source: https://docs.flowx.ai/release-notes/v4.0.0-april-2024/v4.0.0-april-2024
🎉 Welcome to the much-anticipated FlowX.AI 4.0 release! 🚀 Get ready to experience a whole new level of innovation and efficiency with FlowX.AI 4.0.
**Release Date:** 18th April 2024
In this exhilarating update, we've added a bunch of cool stuff. From a **new theming feature** to a complete overhaul of **navigation**, killing the **milestones nodes** (we know you hated them), brace yourself for a transformative journey through the latest features and enhancements...and also a surprise for you, awaiting at the [**bottom of the page**](#coming-soon)!
## Bonus: meme of the day
Start with a laugh, because it will be a lot to read!
Let's dive in and explore the exhilarating additions:
## **What's New?** 🆕
### Theme Management
The new **Theme Management** feature enhances our design process by establishing a unified visual language across various platforms.

This approach simplifies development by enabling the establishment of a foundational **theme**, which can then be customized to accommodate specific platform requirements.

Ultimately, this streamlines the development process, saving significant time and effort.
* **Design Tokens**: Represent the single source of truth for the theme, storing visual elements of the design system.
* **Color Palette, Shadows, Typography Tokens**: Configure these tokens based on your company's brand guidelines. They ensure reusability and consistency.

* **Platform-specific Settings**: Configure settings for each platform (web, iOS, Android) based on the global settings you've defined.
* **Styles and Utilities**: General settings applying to all components (styles) and labels, errors, and helper text settings (utilities).

* **Component-level Configuration**: Customize the style of each component type.

#### Universal Styling
Introduced the option for platform-specific theming customization for components across Web, iOS, and Android.

More information about Theming:
### Navigation areas (removed Milestones nodes)


In this release, we've bid farewell to **Milestone Nodes**, ushering in a fresh and improved approach to organizing the user interface. Say hello to a sleeker and more efficient system with the introduction of [**Navigation areas**](../../4.0/docs/building-blocks/process/navigation-areas).
For process definitions originating from releases earlier than **4.0**, Milestone nodes will evolve into [**Zones**](../../4.0/docs/building-blocks/process/navigation-areas#zone) during migration, offering enhanced navigation capabilities.

### New navigation UI elements
* Tab Bar & Tabs
* Zones
* Parent Process Area

#### Navigation areas per platform
### UI Designer (enhancements)
* New enhanced UI designer, offering flexibility and control over your application's look and feel across all platforms.
* Added the possibility to customize the **navigation areas** through the **UI Designer**
#### UI Designer - universal configuration and styling
Introduced the option for platform-specific configuration and styling for components across Web, iOS, and Android.


The new navigation panel in the UI Designer allows you to manage navigation configurations efficiently across different platforms, ensuring consistency and clarity in your interface design.

### New node - Embedded subprocess
Introducing Embedded Subprocesses! Enhance your process management with the new embedded subprocess functionality and the new **Start Embedded Subprocess** node.
Seamlessly integrate self-contained subprocesses within parent processes for enhanced functionality and encapsulated operations. The Start Embedded Subprocess node enables the initiation of subprocesses within a parent process, offering a range of features and options for enhanced functionality.

### New nodes - Error Events
We are excited to introduce support for a new type of BPMN 2.0 node, Error Events (specifically, the Error Intermediate boundary event), which expands the capabilities of process modeling and error handling within your BPMN diagrams.

These Error Event nodes enhance the BPMN standard and offer improved control over error management.
### Nodes Redesign: Redefining Connectivity
Experience a redesigned interface for smoother interaction with **nodes** and **Process Designer**.

### Favorites Tab
Keep track of your favorite process definitions and streamline process development with the all-new **Favorites tab**, ensuring effortless collaboration among teams.
## **Changes** 🔧
For clients upgrading from an older release (v3.x), we recommend consulting our [**Migrating from v3.x**](./migrating-from-v3.4.x/) guide for comprehensive migration instructions.
Here's a summary of the changes introduced in version 4.0 after upgrading from v3.4.x for all microservices:
* **AuthToken Management**: The ui-sdk no longer relies on the authToken being stored in LOCAL\_STORAGE. Read more, [**here**](/4.0/sdks/angular-renderer#authorization).
* The **Subprocess Run** node is now called **Call activity** node.
* The **Start subprocess** action is now available only on [User Task](../building-blocks/node/user-task-node) nodes. For any other node types, you should use the [Call Activity node](../building-blocks/node/call-subprocess-tasks/call-activity-node) instead.
* Business Rules: Enhancements for structured data management and improved attribute access via the new `instanceMetadata` object. Read more, [**here**](./migrating-from-v3.4.x/process-configuration#business-rules).
* **Timer event scheduler**: Significant optimizations have been implemented in the timer event scheduler, resulting in improved efficiency and responsiveness.
* **UI Designer** updates and improvements.
* UI components style and props changes - consult the [**UI components change log**](./ui-components-changelog) for more information.
* Revised cache key organization. Read more [**here**](./migrating-from-v3.4.x/migrating-from-v3.4.x#revised-cache-key-organization).
* New environment variables to prevent log flooding. Read more, [**here**](./migrating-from-v3.4.x/migrating-from-v3.4.x#logging).
* Java 17 integration: Integrated Java 17 (all backend services - base image: `eclipse-temurin - 17.0.10_7-jre-jammy`) as default buildpack.
Do not forget to consult the [**migration guide**](./migrating-from-v3.4.x/) for more information.
### Admin - health monitoring
Improved health monitoring:
* Enabled role-based access control and added annotations to enable platform health by default.
* Established default annotations for platform health status.
* Adjusted liveness and readiness probes for improved reliability and responsiveness.
* Updated Prometheus scraping configuration for metric collection.
## **Bug Fixes** 🛠️
We've also squashed pesky bugs to ensure a smoother and more reliable experience across the board.
### Scheduler
* Tokens now leave the timer node promptly, no longer lingering like last-minute shoppers before closing time!
* Subprocesses can now rest easy knowing they won't trigger "Timer expression is not valid" errors when setting process expiry time in months—our code's time management just got a promotion!
### UI Designer
* Switch element label now obeys orders to move to the end—no more rebellious labels sticking to the start!
### Process Designer
* Small laptops users rejoice! Now you can scroll to see all subprocesses and audit logs without losing the last line—no more screen envy for external monitors!
* Start Subprocess Action now consistently saves selected version inputs and allows for multiple edits without page refreshes—no more need for extra clicks or browser gymnastics!
* Nodes now stay within swimlane borders when moved, preventing them from wandering off like lost sheep—no more unexpected node relocations disrupting your process layouts!
### FlowX Engine
* MVEL parser now (with the latest MVEL version update: 2.5.2) happily accepts arrays/lists after objects, eliminating JSON file frustrations and improving developer workflow—no more cryptic error messages ruining your day!
* The `output.put` method is now required to generate structured output data when using `input` to validate or filter incoming data based on certain conditions (commonly used to retrieve information needed for processing). The direct property assignment syntax (`input.property = value`) is no longer supported; you must use `output.put("property", value)` instead. This represents a fundamental change in how MVEL scripts interact with data:
  * The `input` object should be treated as read-only, used only for accessing incoming data.
  * The `output` object, through its `put` method, must be used to store any results or modified values.
* Python Business Rules now reliably execute during runtime, ensuring consistent behavior between test and live modes—no more mysterious token standstills!
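The `input`/`output` change above can be illustrated with a minimal MVEL business-rule sketch (the keys `application`, `income`, and `status` are hypothetical, for illustration only):

```java
// Read incoming data via the read-only input object (MVEL property navigation)
income = input.application.income;

// Old style (no longer supported):
// input.application.status = "APPROVED";

// New style: store every result through output.put
output.put("status", income > 5000 ? "APPROVED" : "REVIEW");
```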
### Web SDK
* You can now seamlessly execute actions in processes with radio buttons and validators, even after page reloads—eliminating the frustration of encountering unresponsive buttons.
* You can also now successfully execute UI Actions on image components without encountering "undefined token uuid" errors—no more frustrations with unresponsive image interactions!
* Custom validators now retain their specified async execution type upon saving, ensuring consistency and reliability in process validation—no more frustrating switches back to sync execution!
* Forget playing favorites in the WEB\_SDK! Now you can save data from both form elements and UI action custom body simultaneously. Say goodbye to tough decisions—saving data just got a whole lot easier and funnier!
### FlowX Admin
* Fixed issues with Persistent Database (PDB) API and service account auto-mount.
* Disabled auto service links in containers (all SVC/PORT variables).
## Other
Our docs also received an upgrade...a new home...and a new AI search function! But we know that you like to read so do not be that lazy! 🔥
## Coming soon
Hey there, tired of drowning in "virtual paperwork"? Fear not! We've got you covered. Who said machines can't handle finances? 🤖💰
Coming soon: **FlowX AI Agents**. Want to find out more? Contact us about how we can make your journey smoother than ever before! 🚀
## Gremlins to Watch Out For
Keep an eye out for these quirks:
* Versioning: Merging branches without importing the latest committed version may result in a surprise merge conflict party. We're on it!
* UI Designer: When relocating UI elements between parents, the elements' order doesn't always get the memo, causing a mix-up in the family tree. We're untangling this knot!
## Resources
For deployment guidelines and a seamless transition to version 4.0:
# Deployment guidelines v4.1
Source: https://docs.flowx.ai/release-notes/v4.1.0-may-2024/deployment-guidelines-v4.1.0
Do not forget, after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.
After upgrading to the 4.1 FlowX.AI release, you can import old process definitions into the new platform only if they are from version 4.0.
It is not possible to import old process definitions from versions earlier than v4.0.

As of **FlowX.AI** release v4.1.0, `paperflow-web-components` is deprecated and its dependencies have been removed. The replacement components can be found in `@flowx/ui-toolkit`.
## Component versions
| Component | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------------------- | ---------- | --------- | ---------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **6.0.3** | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 |
| **admin** | **5.0.9** | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | **4.17.1** | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-sdk** | **4.17.1** | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-toolkit** | **4.17.1** | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-theme** | **4.17.1** | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **paperflow-web-components** | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **cms-core** | **3.0.1** | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | **3.0.0** | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | **3.0.0** | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | **4.0.1** | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | **4.0.0** | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **license-core** | **3.0.0** | 2.0.5 | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 |
| **task-management-plugin** | **5.0.2** | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | **2.0.3** | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | **4.0.3** | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | **0.1.6** | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | **2.0.0** | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **iOS renderer** | **3.0.7** | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | **3.0.4** | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.1 | Keycloak | 22.x |
| 4.1 | Kafka | 3.2.x |
| 4.1 | PostgreSQL | 16.2.x |
| 4.1 | MongoDB | 7.0.x |
| 4.1 | Redis | 7.2.x |
| 4.1 | Elasticsearch | 7.17.x |
| 4.1 | OracleDB | 19.8.0.0.0 |
| 4.1 | Angular (Web SDK) | 16.3.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
A FlowX release and its patch versions (such as 4.1.x) will *all* have the same *Recommended Versions* for the third-party components.
# Migrating from previous versions to v4.1
Source: https://docs.flowx.ai/release-notes/v4.1.0-may-2024/migrating-from-v3.4.x-to-v4.1
If you are upgrading from a v3.4.x version, first check the following **migration guide** (to capture all the changes introduced in v4.0):
If you are upgrading from v4.0 version, check the following migration guide:
## Migrating from v4.0 to v4.1
### Revised cache key organization
To ensure a smooth transition to the 4.1 release, use the following cache-clearing endpoint and body:
##### Endpoint
`POST {{baseUrlAdmin}}/api/internal/cache/clear`
##### Body:
```json
{
  "cacheNames": [
    "flowx:core:cache"
  ]
}
```
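As a sketch, the call can be made with `curl`; the `{{baseUrlAdmin}}` placeholder and the `$ADMIN_TOKEN` bearer token are environment-specific assumptions:

```shell
# Clear the core cache after upgrading to 4.1 (placeholder URL and token)
curl -X POST "{{baseUrlAdmin}}/api/internal/cache/clear" \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"cacheNames": ["flowx:core:cache"]}'
```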
### Misconfigurations
To compute and enable warnings for existing processes from previous versions, call the following endpoint:
Please note that it may take some time for all misconfigurations in older processes to become available on the platform after calling this endpoint.
#### FlowX.AI Admin
```
{{baseUrlAdmin}}/api/process-versions/compute
```
For more details:
### Spring Boot upgrade
The following configuration changes are required after upgrading your Spring Boot application to version 3.2.x. Below is a detailed explanation of each section in the context of this upgrade:
Note that this setup is backwards compatible and does not affect the configuration from v3.4.x. The old configuration files will continue to work until the v4.5 release.
The old environment variables (v3.4.x) will be removed in the v4.5 FlowX.AI release.
Configuration properties updates:
#### Redis configuration
Where Redis is used (FlowX CMS, FlowX Admin, Documents plugin, events, Notifications plugin, FlowX Engine, Task Management plugin):
##### Old configuration
* `SPRING_DATA_HOST`
* `SPRING_DATA_PORT`
* `SPRING_DATA_PASSWORD`
##### New configuration
* `SPRING_DATA_REDIS_HOST`
* `SPRING_DATA_REDIS_PORT`
* `SPRING_DATA_REDIS_PASSWORD`
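For example, in a Kubernetes-style deployment the rename looks like the following sketch (the values `redis-master` and the secret name `redis-auth` are hypothetical placeholders):

```yaml
# v4.1 Redis variables (the v3.4.x SPRING_DATA_* names keep working until v4.5)
env:
  - name: SPRING_DATA_REDIS_HOST
    value: "redis-master"
  - name: SPRING_DATA_REDIS_PORT
    value: "6379"
  - name: SPRING_DATA_REDIS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: redis-auth   # hypothetical secret name
        key: password
```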
#### Management configuration
For all services managing metrics, especially exporting metrics to Prometheus:
##### Old configuration
* `MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED`
##### New configuration
* `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED`: This variable enables or disables Prometheus metrics export dynamically based on the environment.
More details, [here](../../4.0/setup-guides/flowx-engine-setup-guide/engine-setup#configuring-application-management)
#### Authentication
For all services except the advancing-controller.
##### New configuration
These are currently not required to be set, as they fall back to the values of the old configuration properties.
* `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_INTROSPECTION_URI`
* `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENT_ID`
* `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENT_SECRET`
##### Existing configuration
* `SECURITY_OAUTH2_BASE_SERVER_URL`
* `SECURITY_OAUTH2_REALM`
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID`
* `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET`
#### Elasticsearch configuration
This outlines the Elasticsearch configuration for the following microservices: FlowX Admin, FlowX Engine, Data Search, and Audit Core.
##### Existing configuration
* `SPRING_ELASTICSEARCH_REST_URIS`: Used by the microservices listed above; it must be set to the appropriate value for each environment.
Example:
```yaml
# Only the value changes for this config:
spring:
  elasticsearch:
    rest:
      uris: localhost:9200 # no protocol/scheme prefix anymore
```
If you do not switch to the new configuration, make sure you remove the protocol/scheme from the existing value; it is no longer needed. For example, use `localhost:9200` instead of `http://localhost:9200`.
##### New configuration
* `SPRING_ELASTICSEARCH_REST_PROTOCOL`: Default value is `https`; should be overridden if connection to Elasticsearch needs to be done over `http`.
Example:
```yaml
# New configuration with default value:
spring:
  elasticsearch:
    rest:
      protocol: https # default is https; override with http if Elasticsearch must be reached over http
```
#### License core configuration
For the License core microservice you need to configure two different data sources: one for the engine database and one for the license database.
##### Old Engine database source
* `ENGINE_DATASOURCE_URL`
* `ENGINE_DATASOURCE_JDBC_URL`
* `ENGINE_DATASOURCE_USERNAME`
* `ENGINE_DATASOURCE_PASSWORD`
##### New Engine database source
* `SPRING_DATASOURCE_ENGINE_URL: ${spring.datasource.jdbc-url}`: Pointing to the old config for backwards compatibility.
* `SPRING_DATASOURCE_ENGINE_JDBC_URL: ${spring.datasource.jdbc-url}`: Pointing to the old config for backwards compatibility.
* `SPRING_DATASOURCE_ENGINE_USERNAME: ${spring.datasource.username}`: Pointing to the old config for backwards compatibility.
* `SPRING_DATASOURCE_ENGINE_PASSWORD: ${spring.datasource.password}`: Pointing to the old config for backwards compatibility.
##### Old License database source
* `SPRING_DATASOURCE_URL`
* `SPRING_DATASOURCE_JDBC_URL`
* `SPRING_DATASOURCE_USERNAME`
* `SPRING_DATASOURCE_PASSWORD`
##### New License database source
* `SPRING_DATASOURCE_LICENSE_URL: ${spring.datasource.jdbc-url}`: Pointing to old config for backwards compatibility.
* `SPRING_DATASOURCE_LICENSE_JDBC_URL: ${spring.datasource.jdbc-url}`: Pointing to old config for backwards compatibility.
* `SPRING_DATASOURCE_LICENSE_USERNAME: ${engine.datasource.username:postgres}`: Pointing to old config for backwards compatibility.
* `SPRING_DATASOURCE_LICENSE_PASSWORD: ${engine.datasource.password:wrongpwd}`: Pointing to old config for backwards compatibility.
#### Customer management plugin
##### New configuration
* `ELASTICSEARCH_PROTOCOL`: Possible values are `https` / `http`; default value is `https` - should be overridden if connection to Elasticsearch needs to be done over `http`.
### Open Telemetry
The FlowX.AI platform uses a mix of auto-instrumentation (via the Java agent) and manual instrumentation using the Open Telemetry API.
Enabling Open Telemetry is optional.
For more information about how to leverage Open Telemetry with FlowX, check the following section:
**Additional configuration needed**! For more information about Open Telemetry deployment/configuration, check the following section:
For more information about microservices Open Telemetry default properties, check the following section:
# Components change log v4.1
Source: https://docs.flowx.ai/release-notes/v4.1.0-may-2024/ui-components-changelog
This log outlines the changes in component styles and props from v4.0 to v4.1.
## Navigation areas
### Stepper
* Implemented fixed height settings on the Web (for sticky sections use case).
### Tab bar
* Added Sizing options: Tabs Gap & Component Gap properties.

### Page
* Added fixed height settings on the Web (for sticky sections use case)

### Modal
* Added fixed height settings on the Web (for sticky sections use case).

### Zone
* Incorporated fixed height settings on the Web (for sticky sections use case).

## UI components
### Container
* Integrated fixed height settings on the Web for better management of sticky sections.

### Card
* Enabled fixed height settings on the Web for sticky sections.

### Form
* Implemented fixed height settings on the Web for improved usability.

### File preview
* Introducing a new source type: Media Library. Check the following section for more details:

### Collection
* Added fixed height settings on the Web for smoother handling.

Note that for all components, the overflow property is set to scroll when the height is fixed.
The "overflow" property in CSS controls how content that exceeds the dimensions of a container is handled. It's particularly useful when the content inside a container is larger than the container itself.

# FlowX.AI 4.1.0 Release Notes (LTS)
Source: https://docs.flowx.ai/release-notes/v4.1.0-may-2024/v4.1.0-may-2024
🎉 Welcome to the FlowX.AI 4.1.0 release!
**Release Date:** 30th May 2024
This release is a **Long-Term Support (LTS)** version, ensuring extended support and stability.
In this release, we've added some cool new features and enhancements to our already mind-blowing v4.0! Let's dive in and explore:
## **What's New?** 🆕
### Misconfigurations
The Misconfigurations Warnings feature introduces a proactive alert system that ensures alignment between process configurations and selected platforms.
With dynamic platform-specific settings, users receive alerts that guide them toward optimal configurations for navigation and UI design. These alerts, integrated into the frontend interface, empower users to make informed decisions, thereby enhancing the process configuration.

An additional step is required to compute and enable misconfigurations in existing processes from older releases:
#### Generate for available platforms only
You can control the platforms for which navigation areas and UI Designer configurations are generated, and for which misconfiguration warnings are enabled or disabled. Options include: web only, mobile, and omnichannel.

### Navigation areas - navigation inside zones and pages
We're enhancing navigation within zones and pages to enable a step-by-step or wizard-style experience.
In the Navigation Panel, a new option will be available exclusively for the web platform. Users can choose between "Single Page Form," which displays all tasks in the same zone (in parallel), and "Wizard," presenting tasks one at a time with custom navigation buttons.

Check the following section for more information:
### Static document management via CMS
Decoupled static document management from the Document Plugin, enabling independent management of documents such as terms and conditions, rules and regulations, and brochures.

The changes enhance the platform's capabilities for managing static documents, particularly **PDFs**, within the **Media Library**. This includes enabling the upload of PDF files and updating informational text to reflect PDF format specifications.

PDF documents uploaded to the Media Library must adhere to a maximum file size limit of 10 MB. If you need to increase this limit for larger files, please refer to the following [**configuration options**](../../4.1.x/setup-guides/cms-setup#configuring-the-maximum-size-of-the-files).
Please note that raising the file size limit may increase vulnerability to potential attacks. Consider carefully before making this change.
With the introduction of the Media Library's static document management feature, the previous method of using paths from the **Process Data** in the **File Preview** component to display static documents will be phased out.
You can still use it if you have scenarios in which you need to [**generate templates from HTML**](../../4.1.x/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/generating-from-html-templates) and then display them in a file preview.
### Dismiss navigation on secondary token end
When a navigation sequence is initiated by a secondary token triggered by a boundary event, it will automatically be dismissed when the token reaches the stop process event. This feature ensures that informative modals are closed without requiring extra configurations.

### Theme Management
We've introduced preview options for more UI components. Now, you can review changes in the theming configurations before finalizing them.

### UI Designer
We've added new properties, settings and styles across different UI components for an enhanced experience. Check the UI components - change log for the full list of improvements:
### Tracing - open telemetry
In our 4.1 release, we're thrilled to introduce tracing with Open Telemetry with additional FlowX custom span attributes, offering enhanced visibility and insights into application behavior. With its capabilities, users can pinpoint issues and optimize performance.
Consequently, we are phasing out Jaeger tracing to focus on the advanced features and broader support of Open Telemetry, ensuring a smooth transition.
For traces visualization, use tools like Grafana to filter and search spans based on custom attributes like `fx.processInstanceUuid`, `fx.nodeName`, `fx.actionName`, and others. This helps in pinpointing specific operations or issues within a trace.
Check the following section for more information and for more scenarios:
**Additional configuration needed**! If you want to enable/configure Open Telemetry, check the following section:
For more information about the default Open Telemetry properties of FlowX.AI microservices, please refer to the following section:
### Spring Boot upgrade (for BE Java services)
In the 4.1 release, we upgraded Spring Boot from version 2.5.4 to 3.2.x for all Java libraries and services. This update delivers significant performance enhancements and includes important bug fixes, thereby offering enhanced stability and functionality.
## **Changes** 🔧
* **Process Designer**: The "Start Subprocess" functionality on Service Task nodes has been discontinued.
* **UI Designer**: Introduced `data test id` within the `Settings > Generic` tab across all UI components, facilitating seamless identification and interaction with UI elements during automated testing.
## **Bug Fixes** 🛠️
We've also squashed pesky bugs to ensure a smoother and more reliable experience across the board.
* **Process Designer**: The error hiccups with embedded subprocesses and their error events have been smoothed out! No more getting stuck in a loop; now, you'll gracefully proceed to the embedded subprocess as expected.
* **Process Designer**: The versioning hiccup causing merge conflicts when trying to merge a branch with a main branch that wasn't imported into the environment has been fixed!
* **UI Designer**: The slider UI element's identity crisis has been resolved. No longer will it confuse its maximum value with its minimum counterpart.
* **UI Designer**: Concatenation glitch in text elements, where values from the collection prototype play hide and seek, has been patched up! No more disappearing acts - they'll be displayed side by side as intended.
* **UI Designer**: The glitch preventing the assignment of actions in button UI Actions with blank spaces in their names has been smoothed out!
## Gremlins to Watch Out For
Keep an eye out for these quirks:
* **UI Designer**: When relocating UI elements between parents, the elements' order doesn't always get the memo, causing a mix-up in the family tree. We're untangling this knot!
## Resources
For deployment guidelines and a seamless transition to version 4.1:
# Deployment guidelines v4.1.1
Source: https://docs.flowx.ai/release-notes/v4.1.1-june-2024/deployment-guidelines-v4.1.1
Do not forget, after upgrading to a new platform version, always ensure that your installed component versions match the versions specified in the release notes. To verify this, navigate to **FlowX.AI Designer > Platform Status**.
After upgrading to the 4.1.1 FlowX.AI release, you can import old process definitions into the new platform only if they are from versions 4.0/4.1.0.
It is not possible to import old process definitions from versions earlier than v4.0.

As of **FlowX.AI** release v4.1.0, `paperflow-web-components` is deprecated and its dependencies have been removed. The replacement components can be found in `@flowx/ui-toolkit`.
## Component versions
| Component | 4.1.1 | 4.1 | 4.0 | 3.4.7 | 3.4.6 | 3.4.5 | 3.4.4 | 3.4.3 | 3.4.2 | 3.4.1 | 3.4.0 | 3.3.0 | 3.2.0 | 3.1.0 | 3.0.0 |
| ---------------------------- | --------- | ------ | --------- | ---------- | --------- | --------- | --------- | ------- | ------ | ------ | ------ | ------- | ------ | ------ | ------ |
| **process-engine** | **6.1.2** | 6.0.3 | 5.10.3 | 4.3.5-2v11 | 4.3.5-2v6 | 4.3.5-2v2 | 4.3.5-2v1 | 4.3.5 | 4.3.2 | 4.3.1 | 4.1.0 | 3.6.0 | 2.2.1 | 2.1.2 | 2.0.7 |
| **admin** | **5.1.3** | 5.0.9 | 4.6.10 | 3.3.19-6 | 3.3.19-4 | 3.3.19-3 | 3.3.19-1 | 3.3.19 | 3.3.10 | 3.3.7 | 3.1.1 | 2.5.2 | 2.2.2 | 2.1.3 | 2.0.8 |
| **designer** | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-sdk** | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-toolkit** | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **@flowx/ui-theme** | 4.17.1 | 4.17.1 | 4.0.1 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **paperflow-web-components** | - | - | 3.35.18-5 | 3.35.18-5 | 3.35.18-3 | 3.35.18-2 | 3.35.18-1 | 3.35.18 | 3.35.9 | 3.35.6 | 3.33.2 | 3.28.11 | 3.21.1 | 3.15.1 | 3.2.1 |
| **flowx-process-renderer** | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **cms-core** | **3.0.2** | 3.0.1 | 2.2.5 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.9 | 1.3.6 | 1.3.0 | 1.2.0 | 1.0.3 | 1.0.2 |
| **scheduler-core** | **3.0.1** | 3.0.0 | 2.1.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.2.4 | 1.1.0 | 1.0.4 | 1.0.4 | 1.0.4 | 1.0.1 |
| **events-gateway** | **3.0.2** | 3.0.0 | 2.0.4 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.1.0 | 1.0.6 | 1.0.2 | - | - | - |
| **notification-plugin** | **4.0.3** | 4.0.1 | 3.0.6 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.9 | 2.0.8 | 2.0.8 | 2.0.5 | 2.0.4 | 2.0.4 | 2.0.3 | 2.0.1 |
| **document-plugin** | **4.0.2** | 4.0.0 | 3.0.6 | 2.0.10-1 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.10 | 2.0.8 | 2.0.8 | 2.0.6 | 2.0.4 | 2.0.3 | 2.0.3 | 2.0.2 |
| **ocr-plugin** | 1.0.17 | 1.0.17 | 1.0.17 | 1.0.15 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.12 | 1.0.8 | 1.0.8 | 1.0.2 | 0.1.33 | 0.1.33 |
| **license-core** | **3.0.2** | 3.0.0 | 2.0.5 | 1.1.0 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.7 | 1.0.4 | 1.0.2 | 1.0.2 | 1.0.2 | 1.0.1 |
| **task-management-plugin** | **5.0.4** | 5.0.2 | 4.0.5 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.3 | 3.0.0 | 2.1.2 | 1.0.4 | 1.0.4 | 1.0.1 |
| **data-search** | **2.0.4** | 2.0.3 | 1.0.6 | 0.2.8 | 0.2.8 | 0.2.8 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.6 | 0.2.3 | 0.2.0 | 0.1.4 | 0.1.4 | 0.1.3 |
| **audit-core** | **4.0.4** | 4.0.3 | 3.1.4 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.2.0 | 2.1.3 | 2.1.3 | 2.1.0 | 1.0.6 | 1.0.5 | 1.0.4 | 1.0.1 |
| **reporting-plugin** | 0.1.6 | 0.1.6 | 0.1.5 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | 0.0.40 | 0.0.40 | 0.0.40 | 0.0.39 |
| **advancing-controller** | **2.0.2** | 2.0.0 | 1.1.4 | 0.3.5-1 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.5 | 0.3.2 | 0.3.0 | 0.1.4 | 0.1.4 | 0.1.2 |
| **iOS renderer** | 3.0.16 | 3.0.7 | 3.0.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.3.0 | 2.1.0 | 2.0.1 | 2.0.0 | 2.0.0 |
| **Android renderer** | 3.0.21 | 3.0.4 | 3.0.0 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.1.4 | 2.0.1 | 2.0.1 | 2.0.1 | 2.0.1 |
### Third-party recommended component versions
| FlowX.AI Platform Version | Component Name | Recommended Versions |
| ------------------------- | ----------------- | -------------------- |
| 4.1.1 | Keycloak | 22.x |
| 4.1.1 | Kafka | 3.2.x |
| 4.1.1 | PostgreSQL | 16.2.x |
| 4.1.1 | MongoDB | 7.0.x |
| 4.1.1 | Redis | 7.2.x |
| 4.1.1 | Elasticsearch | 7.17.x |
| 4.1.1 | OracleDB | 19.8.0.0.0 |
| 4.1.1 | Angular (Web SDK) | 16.3.x |
FlowX.AI supports the Recommended Versions of the prerequisite third-party components in the above table.
For optimal performance and reliability, our internal QA process validates new FlowX.AI releases using the latest Recommended Versions. While exploring alternative versions that suit your company's specific requirements, we recommend referring to the table above for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://flowxai.zendesk.com/), and our dedicated team will address and resolve any identified bugs following our standard support process.
A FlowX release and its patch versions (such as 4.1.x) *all* share the same *Recommended Versions* for the third-party components.
### Process instance data archiving
#### FlowX.AI configuration
New settings (new environment variables) have been added for process instance data partitioning and archiving. Check the following documentation sections for more details:
# Migrating from previous versions to v4.1.1
Source: https://docs.flowx.ai/release-notes/v4.1.1-june-2024/migrating-from-previous-to-v4.1.1
If you are upgrading from a v3.4.x version, first check the following **migration guide** (to capture all the changes introduced in v4.0):
If you are upgrading from v4.0 to v4.1.1, do not forget to check the migration guides for the previous versions:
If you are upgrading from v4.1 to v4.1.1, check the following migration guide:
## Migrating from v4.1 to v4.1.1
Before upgrading to v4.1.1, we recommend executing the initial data partitioning DB migrations manually. We recommend this regardless of whether you enable partitioning, as these migrations can take a long time to run.
After manually executing the Liquibase SQL commands, all existing process instance-related data is placed in the initial partition.
If partitioning is enabled, the initial partition will continue to be used until a new partition is generated according to the partitioning interval (DAY, WEEK, MONTH). Future partitions will be created automatically.
If partitioning is not enabled, all data will continue to be stored in this initial partition.
[Partitioning docs](../../4.1.x/setup-guides/flowx-engine-setup-guide/process-instance-data-archiving)
# FlowX.AI 4.1.1 Release Notes (LTS)
Source: https://docs.flowx.ai/release-notes/v4.1.1-june-2024/v4.1.1-june-2024
**Release Date:** 17th June 2024
This patch release is part of the long-term support (LTS) line for version 4.1.
Welcome to FlowX.AI 4.1.1 release. Let's dive in and explore:
## **What's New?** 🆕
### FlowX.AI Engine: Process Instance Data Archiving
Database Compatibility: Oracle and Postgres.
We are introducing a new system for archiving data related to old process instances. This enables the implementation of data retention policies, with more predictable storage usage and costs.
Some operations involving process instance data can also be faster, thanks to the reduced data volume and the new table structure. Archiving relies on partitioning data based on the process instance start time. Each partition is associated with a time period (day, week, month), and retention is defined by the number of periods maintained in the system.
Archived data remains in new tables in the database, optionally compressed, and is no longer accessible in FlowX. These tables can be compressed, copied, moved, archived, or deleted without affecting other processes in FlowX.
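Conceptually, each instance is routed to a partition derived from its start time and the configured interval. The sketch below illustrates that period-key mapping in Python; it is purely illustrative of the technique described above, not FlowX's actual implementation (function and key formats are assumptions):

```python
from datetime import date

def partition_key(start: date, interval: str) -> str:
    """Map a process-instance start date to an illustrative partition period key."""
    if interval == "DAY":
        return start.isoformat()                   # one partition per day
    if interval == "WEEK":
        year, week, _ = start.isocalendar()
        return f"{year}-W{week:02d}"               # one partition per ISO week
    if interval == "MONTH":
        return f"{start.year}-{start.month:02d}"   # one partition per calendar month
    raise ValueError(f"unknown partitioning interval: {interval}")
```

With a retention policy of, say, 6 periods, any partition whose key falls outside the 6 most recent periods would be eligible for archiving.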
For more details about process instance partitioning and archiving, including configuration of the feature, see:
## **Changes** 🔧
### Task Manager: base URL configuration
We have enhanced the Task Manager plugin by adding the ability to update the baseUrl for tasks. If the `task.baseUrl` is specified in the process parameters, it will now be sent to the Task Manager to update the tasks accordingly.
Example of a Business Rule:
```javascript
output.put("task", {"baseUrl": "https://your_base_url"});
```
This update streamlines the configuration process, ensuring that task URLs are dynamically updated and properly managed within the Task Manager.
## Gremlins to Watch Out For
Keep an eye out for these quirks:
* **UI Designer**: When relocating UI elements between parents, the elements' order doesn't always get the memo, causing a mix-up in the family tree. We're untangling this knot!
## Resources
For deployment guidelines and a seamless transition to version 4.1.1: